date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,543,156,167,000 |
just noticed I was using SDD for SSD. Corrected
I need help interpreting this situation. /dev/sda is a data disk, backed up and with reproducible data, so this is not system-critical, but I'd like to avoid the effort of restoring/reconstructing the data, some of which would be quite time-consuming.
Is recovery / repair possible?
If so, how? And if I wipe the disk for re-use, how reliable will it be?
Summary (detailed reports below):
will not mount: bad superblock
badblocks finds no bad blocks
smartctl reports no errors
fsck cannot set superblock flags
fdisk shows clean partition
dmesg shows write errors
parted shows 792 GB free of 1 TB drive
Mounting the SSD fails like so:
[stephen@meer ~]$ sudo mount /dev/sda1 /mnt/sda
mount: /mnt/sda: can't read superblock on /dev/sda1.
dmesg(1) may have more information after failed mount system call.
[stephen@meer ~]$
but badblocks finds no bad blocks
[root@meer stephen]# badblocks -v /dev/sda1
Checking blocks 0 to 976760831
Checking for bad blocks (read-only test): done
Pass completed, 0 bad blocks found. (0/0/0 errors)
and smartctl finds no errors either
[root@meer stephen]# smartctl -a /dev/sda
smartctl 7.3 2022-02-28 r5338 [x86_64-linux-5.17.9-arch1-1] (local build)
Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Family: WD Blue / Red / Green SSDs
Device Model: WDC WDS100T2B0A-00SM50
Serial Number: 213159800516
LU WWN Device Id: 5 001b44 8bc4fdc6e
Firmware Version: 415020WD
User Capacity: 1,000,204,886,016 bytes [1.00 TB]
Sector Size: 512 bytes logical/physical
Rotation Rate: Solid State Device
Form Factor: 2.5 inches
TRIM Command: Available, deterministic, zeroed
Device is: In smartctl database 7.3/5319
ATA Version is: ACS-4 T13/BSR INCITS 529 revision 5
SATA Version is: SATA 3.3, 6.0 Gb/s (current: 1.5 Gb/s)
Local Time is: Tue May 24 16:06:23 2022 PDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x00) Offline data collection activity
was never started.
Auto Offline Data Collection: Disabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: ( 0) seconds.
Offline data collection
capabilities: (0x11) SMART execute Offline immediate.
No Auto Offline data collection support.
Suspend Offline collection upon new
command.
No Offline surface scan supported.
Self-test supported.
No Conveyance Self-test supported.
No Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 10) minutes.
SMART Attributes Data Structure revision number: 4
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
5 Reallocated_Sector_Ct 0x0032 100 100 --- Old_age Always - 124
9 Power_On_Hours 0x0032 100 100 --- Old_age Always - 1470
12 Power_Cycle_Count 0x0032 100 100 --- Old_age Always - 134
165 Block_Erase_Count 0x0032 100 100 --- Old_age Always - 4312400063
166 Minimum_PE_Cycles_TLC 0x0032 100 100 --- Old_age Always - 1
167 Max_Bad_Blocks_per_Die 0x0032 100 100 --- Old_age Always - 65
168 Maximum_PE_Cycles_TLC 0x0032 100 100 --- Old_age Always - 14
169 Total_Bad_Blocks 0x0032 100 100 --- Old_age Always - 630
170 Grown_Bad_Blocks 0x0032 100 100 --- Old_age Always - 124
171 Program_Fail_Count 0x0032 100 100 --- Old_age Always - 128
172 Erase_Fail_Count 0x0032 100 100 --- Old_age Always - 0
173 Average_PE_Cycles_TLC 0x0032 100 100 --- Old_age Always - 2
174 Unexpected_Power_Loss 0x0032 100 100 --- Old_age Always - 90
184 End-to-End_Error 0x0032 100 100 --- Old_age Always - 0
187 Reported_Uncorrect 0x0032 100 100 --- Old_age Always - 0
188 Command_Timeout 0x0032 100 100 --- Old_age Always - 64
194 Temperature_Celsius 0x0022 070 053 --- Old_age Always - 30 (Min/Max 18/53)
199 UDMA_CRC_Error_Count 0x0032 100 100 --- Old_age Always - 0
230 Media_Wearout_Indicator 0x0032 001 001 --- Old_age Always - 0x002600140026
232 Available_Reservd_Space 0x0033 097 097 004 Pre-fail Always - 97
233 NAND_GB_Written_TLC 0x0032 100 100 --- Old_age Always - 2703
234 NAND_GB_Written_SLC 0x0032 100 100 --- Old_age Always - 2842
241 Host_Writes_GiB 0x0030 253 253 --- Old_age Offline - 466
242 Host_Reads_GiB 0x0030 253 253 --- Old_age Offline - 622
244 Temp_Throttle_Status 0x0032 000 100 --- Old_age Always - 0
SMART Error Log Version: 1
No Errors Logged
SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Extended offline Completed without error 00% 1470 -
Selective Self-tests/Logging not supported
and fsck fails like so:
[root@meer ~]# e2fsck -cfpv /dev/sda1
/dev/sda1: recovering journal
e2fsck: Input/output error while recovering journal of /dev/sda1
e2fsck: unable to set superblock flags on /dev/sda1
/dev/sda1: ********** WARNING: Filesystem still has errors **********
May 24 15:38:29 meer kernel: I/O error, dev sda, sector 121899008 op 0x1:(WRITE) flags 0x800 phys_seg 1 prio class 0
May 24 15:38:29 meer kernel: sd 2:0:0:0: [sda] tag#31 CDB: Write(10) 2a 00 07 44 08 00 00 00 08 00
May 24 15:38:29 meer kernel: sd 2:0:0:0: [sda] tag#31 Add. Sense: Unaligned write command
May 24 15:38:29 meer kernel: sd 2:0:0:0: [sda] tag#31 Sense Key : Illegal Request [current]
May 24 15:38:29 meer kernel: sd 2:0:0:0: [sda] tag#31 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=0s
May 24 15:38:29 meer kernel: ata3.00: configured for UDMA/33
May 24 15:38:29 meer kernel: ata3.00: error: { ABRT }
May 24 15:38:29 meer kernel: ata3.00: status: { DRDY ERR }
May 24 15:38:29 meer kernel: ata3.00: cmd ca/00:08:00:08:44/00:00:00:00:00/e7 tag 31 dma 4096 out
res 51/04:08:00:08:44/00:00:07:00:00/e7 Emask 0x1 (device error)
May 24 15:38:29 meer kernel: ata3.00: failed command: WRITE DMA
May 24 15:38:29 meer kernel: ata3.00: irq_stat 0x40000001
May 24 15:38:29 meer kernel: ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
May 24 15:38:29 meer kernel: ata3: EH complete
May 24 15:38:29 meer kernel: ata3.00: configured for UDMA/33
May 24 15:38:29 meer kernel: ata3.00: error: { ABRT }
May 24 15:38:29 meer kernel: ata3.00: status: { DRDY ERR }
May 24 15:38:29 meer kernel: ata3.00: cmd ca/00:08:00:08:44/00:00:00:00:00/e7 tag 6 dma 4096 out
res 51/04:08:00:08:44/00:00:07:00:00/e7 Emask 0x1 (device error)
May 24 15:38:29 meer kernel: ata3.00: failed command: WRITE DMA
May 24 15:38:29 meer kernel: ata3.00: irq_stat 0x40000001
May 24 15:38:29 meer kernel: ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
Partitioning as seen by fdisk.
Disk /dev/sda: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: WDC WDS100T2B0A
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 3F701164-2CF8-6D48-A94E-478634C140BE
Device Start End Sectors Size Type
/dev/sda1 2048 1953523711 1953521664 931.5G Linux filesystem
From dmesg
[ 5292.895300] ata3.00: configured for UDMA/33
[ 5292.895315] ata3: EH complete
[ 5293.021851] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
[ 5293.021859] ata3.00: irq_stat 0x40000001
[ 5293.021864] ata3.00: failed command: WRITE DMA
[ 5293.021866] ata3.00: cmd ca/00:08:00:08:44/00:00:00:00:00/e7 tag 18 dma 4096 out
res 51/04:08:00:08:44/00:00:07:00:00/e7 Emask 0x1 (device error)
[ 5293.021874] ata3.00: status: { DRDY ERR }
[ 5293.021877] ata3.00: error: { ABRT }
parted :
root@meer stephen]# parted /dev/sda
GNU Parted 3.5
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print free
Model: ATA WDC WDS100T2B0A (scsi)
Disk /dev/sda: 1000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
17.4kB 1049kB 1031kB Free Space
1 1049kB 1000GB 1000GB ext4
1000GB 1000GB 729kB Free Space
|
I don't know what you've been doing with this disk, but those are crazy numbers! Looking at that output, that SSD has been on for:
1470 hours (61 days)
performed 4312400063 (2.0GiB) block erases
163210068006 (76TiB) media writes.
That's a constant 16MiB a second of writes over 61 days.
I imagine you've got internal NAND failure. You might not be able to get your data back.
I suggest your best solution going forward is to use a RAID mirror of some form to absorb errors across multiple disks.
Ideally, it would be two disks of different ages and/or different production batches, to spread out the distribution of errors and failures between the disks.
Just to clarify: I consider that an abnormally high amount of writes over a very short period. You're going to need to factor that into whatever storage setup you go with.
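Before wiping, and given that the read-only badblocks pass succeeded while writes fail, one hedged recovery avenue is to image the partition with GNU ddrescue and repair the copy rather than the failing disk. The destination paths below are placeholders, and the block is guarded so it does nothing unless explicitly enabled:

```shell
# Image the failing SSD read-only and repair/mount the image instead.
# SRC/IMG are placeholder paths; set RUN_RESCUE=yes to actually run.
SRC=/dev/sda1
IMG=/mnt/backup/sda1.img
if [ "${RUN_RESCUE:-no}" = "yes" ]; then
    ddrescue -d -r3 "$SRC" "$IMG" "$IMG.map"  # direct I/O, 3 retry passes
    e2fsck -f "$IMG"                          # journal recovery now writes to the image
    mkdir -p /mnt/recovered
    mount -o loop,ro "$IMG" /mnt/recovered
fi
```

Because e2fsck's journal recovery then only needs to write to the image file, the "unable to set superblock flags" failure mode no longer applies.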
| ssd won't mount: bad superblock but no bad blocks: write errors |
1,543,156,167,000 |
For every multipath disk label in /dev/mapper I have another with a 1 at the end. Are they the same? Is there some relationship?
For example:
/dev/mapper/mpathaj and /dev/mapper/mpathaj1
or
/dev/mapper/mpathai and /dev/mapper/mpathai1
When I issue the command od --read-bytes=128 --format=c /dev/mapper/mpathai, the disk seems clean:
[root@server02 ~]# od --read-bytes=128 --format=c /dev/mapper/mpathai
0000000 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0
*
0000200
But the one with the 1 at the end shows some rows:
[root@server02 ~]# od --read-bytes=128 --format=c /dev/mapper/mpathai1
0000000 001 202 001 001 \0 \0 \0 \0 003 \0 \0 200 220 . 5 213
0000020 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0
0000120 3 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0
(I removed some characters so as not to show customer content.)
0000160 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0
0000200
That happens with every disk: one is clean and the other isn't.
And the reason I am asking: can I lose one (mpathaj) without losing the other (mpathaj1)? I've seen that they point to different /dev/dm-xx devices.
i.e. /dev/mapper/mpathaj is /dev/dm-18 and /dev/mapper/mpathaj1 is /dev/dm-19
|
I would expect /dev/mapper/mpathai to be the whole disk/LUN, and /dev/mapper/mpathai1 the first partition on that disk/LUN. But it could also be something like a LUKS encryption layer with a confusingly chosen name.
On device-mapper based devices (multipaths, encrypted disks, software RAID...), partition detection is done in userspace (often via the command kpartx), and a new device-mapper entry (/dev/dm-<number>) is created for each of them.
The only way to be sure would be to use dmsetup ls and/or dmsetup table as root to view the mappings and see their relations to each other.
On a modern Linux system, you might begin with dmsetup ls --tree -o blkdevname: it's probably the easiest way to visualize the relationships between the different device-mapper entries, if there are any.
Unfortunately, the dmsetup ls --tree listing won't include the type of mapping, so you may still need to refer to dmsetup table to identify the type: if the mapping of mpathai1 is of type linear and refers to the mpathai device by major:minor numbers, then mpathai1 is a linear sub-mapping of mpathai, which usually means it's a partition within the disk device.
If mpathai1 is of type crypt, then mpathai could be an encrypted disk (LUKS or some other method that is understood by cryptsetup), which has been configured to have the decrypted view of the device appear as mpathai1 whenever encryption is unlocked. In other words, the encryption would be unlocked with a command like:
cryptsetup open /dev/mapper/mpathai mpathai1 --type <luks,loopaes,tcrypt,bitlk...>
If encryption is used, I would expect /etc/crypttab to also mention the device(s).
If you can't identify the mapping type on your own, please post the output of e.g. dmsetup table mpathai1 and dmsetup table mpathaj1.
If mpathaj1 is a partition of mpathaj, you could lose mpathaj1 by e.g. corrupting/overwriting the partition table. If the partition table no longer has a valid entry for the mpathaj1 partition, the system would no longer show it, even if the underlying disk mpathaj is 100% fine.
The same is true if mpathaj1 is the decrypted view of encrypted mpathaj, then if the encryption key (e.g. the encrypted master key within the LUKS header) is lost for any reason, you will no longer be able to unlock the encryption, and then mpathaj1 and all data within it are effectively lost to you.
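As a small aid for the dmsetup table step above: the third field of a table line is the target type, so a one-line helper can classify a mapping. The sample line below is hypothetical; on a real system you would pipe the output of `dmsetup table mpathai1` (as root) into it.

```shell
# Print the device-mapper target type (field 3 of a table line):
# "linear" usually means a partition sub-mapping, "crypt" an
# encryption layer, "multipath" the whole LUN.
dm_target_type() {
    awk '{ print $3; exit }'
}

# Hypothetical example of what a partition sub-mapping might produce:
echo '0 41940992 linear 253:18 2048' | dm_target_type   # prints "linear"
```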
| Multipath - what is the differences between /dev/mapper/mpathxx and /dev/mapper/mpathxx1? |
1,543,156,167,000 |
We have the following disks from lsblk; none of the disks are LVM:
sdc 8:32 0 80G 0 disk /var/hadoop1
sdd 8:48 0 80G 0 disk /var/hadoop2
sde 8:64 0 80G 0 disk
sdf 8:80 0 80G 0 disk
sdc and sdd disks are full ( 100% used )
the situation is that sdc and sdd are full and we can't use them,
but we have new disks, sde and sdf, each with a size of 20G
so:
is it possible to add the sde disk to sdc in order to give sdc another 20G?
|
Instead of adding disks at the operating-system level, you can do this directly in Hadoop by adding them to the dfs.datanode.data.dir property. The format is:
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///disk/c0t2,/disk/c0t3,/dev/sde,/dev/sdf</value>
</property>
I am not 100% sure Hadoop can handle raw disks. In that case, you can create one big partition on each new disk, format it, mount the partitions at /var/hadoop3 and /var/hadoop4, and use the format:
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///disk/c0t2,/disk/c0t3,/var/hadoop3,/var/hadoop4</value>
</property>
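If Hadoop turns out not to accept raw devices, the partition/format/mount route above can be sketched as follows. The device name and mount point are assumptions taken from the question; this wipes /dev/sde, so the block is guarded:

```shell
# One big partition on /dev/sde, formatted and mounted at /var/hadoop3.
# DESTRUCTIVE to $DEV -- set RUN_DISK_SETUP=yes to actually run (as root).
DEV=/dev/sde
MNT=/var/hadoop3
if [ "${RUN_DISK_SETUP:-no}" = "yes" ]; then
    parted -s "$DEV" mklabel gpt mkpart primary ext4 1MiB 100%
    mkfs.ext4 "${DEV}1"
    mkdir -p "$MNT"
    mount "${DEV}1" "$MNT"
    echo "${DEV}1 $MNT ext4 defaults 0 2" >> /etc/fstab  # persist across reboots
fi
```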
| is it possible to increase disk size by using/adding another clean disk |
1,543,156,167,000 |
I have a server at Hetzner. It theoretically has 3 TB of space.
But if I run df -h I see this:
Filesystem Size Used Avail Use% Mounted on
udev 7.7G 0 7.7G 0% /dev
213.133.99.101:/nfs 295G 134G 146G 48% /root/.oldroot/nfs
overlay 7.7G 7.7G 0 100% /
tmpfs 7.7G 0 7.7G 0% /dev/shm
tmpfs 7.7G 20M 7.7G 1% /run
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 7.7G 0 7.7G 0% /sys/fs/cgroup
tmpfs 1.6G 0 1.6G 0% /run/user/0
So, the 3TB are missing. ...And the disk is full.
If I run lsblk I see this:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 4G 1 loop
sda 8:0 0 2.7T 0 disk
├─sda1 8:1 0 8G 0 part
├─sda2 8:2 0 512M 0 part
├─sda3 8:3 0 1T 0 part
├─sda4 8:4 0 1.7T 0 part
└─sda5 8:5 0 1M 0 part
Seems the 3Tb are out there.
How can I use them?
PS. Yes, I am not a linux expert..
|
This looks to me like you are currently running the "delivery system".
You should be able to start an installer from there and set up your system as you need it. Search the Hetzner knowledge base for info on installer images.
When installing from such an installer, keep in mind:
Hetzner servers in general come with two identical hard disks so that a RAID 1 array can be set up. This should be done for the safety of your data.
As you cannot see a second physical disk, you may consider asking Hetzner about it.
When you install a fresh system, you can safely delete all partitions, so the space will be available to you.
As a recommendation, reserve some space (create partitions) for:
/boot (1-2 GB)
/ (around 30 GB)
swap (2x RAM size)
/home (the rest)
You need to set up the RAID 1 array (if you use one) before installing the system. You would then mount /dev/mdX instead of /dev/sdX.
| Filesystem problem |
1,543,156,167,000 |
I'm using Oracle Linux 7.6, which is a distribution based on RHEL 7.6. The following testing should be the same on RHEL 7.6 or other RHEL 7.6 based distributions.
I'm running the Oracle Linux 7.6 server in VMware Workstation on Windows 10. What I'm trying to do is to add a disk to the Linux guest virtual machine without rebooting the Linux server. I googled around and found this page: https://rahsupport.wordpress.com/2017/08/10/vmware-add-disk-to-linux-without-rebooting-the-vm/. Basically, what it does is:
Add the disk from VMware Workstation to the Linux VM
Go to /sys/class/scsi_host/
Run echo '- - -' > host1/scan
Then by running fdisk -l, you can see the newly added disk
I tested it in my environment. There are three such host directories and each of them has a scan file in it:
root:[/sys/class/scsi_host]# ls -la
total 0
drwxr-xr-x. 2 root root 0 Aug 24 22:49 .
drwxr-xr-x. 54 root root 0 Aug 24 22:49 ..
lrwxrwxrwx. 1 root root 0 Aug 24 22:49 host0 -> ../../devices/pci0000:00/0000:00:07.1/ata1/host0/scsi_host/host0
lrwxrwxrwx. 1 root root 0 Aug 24 22:49 host1 -> ../../devices/pci0000:00/0000:00:07.1/ata2/host1/scsi_host/host1
lrwxrwxrwx. 1 root root 0 Aug 24 22:49 host2 -> ../../devices/pci0000:00/0000:00:10.0/host2/scsi_host/host2
root:[/sys/class/scsi_host]#
root:[/sys/class/scsi_host]# ls -la */scan
--w-------. 1 root root 4096 Aug 24 22:50 host0/scan
--w-------. 1 root root 4096 Aug 24 22:50 host1/scan
--w-------. 1 root root 4096 Aug 24 22:50 host2/scan
root:[/sys/class/scsi_host]#
Originally, the Linux server can't recognize the disk:
root:[/sys/class/scsi_host]# fdisk -l
Disk /dev/sda: 1099.5 GB, 1099511627776 bytes, 2147483648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000d3e78
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 976895 487424 83 Linux
/dev/sda2 976896 2059401215 1029212160 83 Linux
/dev/sda3 2059401216 2101344255 20971520 83 Linux
/dev/sda4 2101344256 2147483647 23069696 5 Extended
/dev/sda5 2101348352 2143289343 20970496 83 Linux
/dev/sda6 2143291392 2147483647 2096128 82 Linux swap / Solaris
But when I run echo '- - -' > host0/scan, the disk showed up:
root:[/sys/class/scsi_host]# echo '- - -' > host0/scan
root:[/sys/class/scsi_host]# fdisk -l
Disk /dev/sda: 1099.5 GB, 1099511627776 bytes, 2147483648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000d3e78
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 976895 487424 83 Linux
/dev/sda2 976896 2059401215 1029212160 83 Linux
/dev/sda3 2059401216 2101344255 20971520 83 Linux
/dev/sda4 2101344256 2147483647 23069696 5 Extended
/dev/sda5 2101348352 2143289343 20970496 83 Linux
/dev/sda6 2143291392 2147483647 2096128 82 Linux swap / Solaris
Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
root:[/sys/class/scsi_host]#
I reverted my Linux VM to its original state to test again. This time it showed that echo '- - -' > host1/scan doesn't work, but echo '- - -' > host2/scan works.
root:[/sys/class/scsi_host]# fdisk -l
Disk /dev/sda: 1099.5 GB, 1099511627776 bytes, 2147483648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000d3e78
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 976895 487424 83 Linux
/dev/sda2 976896 2059401215 1029212160 83 Linux
/dev/sda3 2059401216 2101344255 20971520 83 Linux
/dev/sda4 2101344256 2147483647 23069696 5 Extended
/dev/sda5 2101348352 2143289343 20970496 83 Linux
/dev/sda6 2143291392 2147483647 2096128 82 Linux swap / Solaris
root:[/sys/class/scsi_host]# echo '- - -' > host1/scan
root:[/sys/class/scsi_host]# fdisk -l
Disk /dev/sda: 1099.5 GB, 1099511627776 bytes, 2147483648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000d3e78
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 976895 487424 83 Linux
/dev/sda2 976896 2059401215 1029212160 83 Linux
/dev/sda3 2059401216 2101344255 20971520 83 Linux
/dev/sda4 2101344256 2147483647 23069696 5 Extended
/dev/sda5 2101348352 2143289343 20970496 83 Linux
/dev/sda6 2143291392 2147483647 2096128 82 Linux swap / Solaris
root:[/sys/class/scsi_host]# echo '- - -' > host2/scan
root:[/sys/class/scsi_host]# fdisk -l
Disk /dev/sda: 1099.5 GB, 1099511627776 bytes, 2147483648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000d3e78
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 976895 487424 83 Linux
/dev/sda2 976896 2059401215 1029212160 83 Linux
/dev/sda3 2059401216 2101344255 20971520 83 Linux
/dev/sda4 2101344256 2147483647 23069696 5 Extended
/dev/sda5 2101348352 2143289343 20970496 83 Linux
/dev/sda6 2143291392 2147483647 2096128 82 Linux swap / Solaris
Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
root:[/sys/class/scsi_host]#
My question is: what are these host directories? Why do echo '- - -' > host0/scan and echo '- - -' > host2/scan make the Linux server recognize the disk, while echo '- - -' > host1/scan doesn't?
By the way, I'm pretty new to Linux and still learning it.
|
The different host directories correspond to different disk controllers. What the hosts map to depends on the technology involved: AHCI SATA has one host per port, NVMe has one host per controller, and so on. The exact situation in your case will depend on your VM setup.
Basically what this means is that you should rescan all hosts.
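A sketch of that blanket rescan; the base path is a parameter here only so the loop can also be pointed at a scratch directory, and on the real system you would run it as root with no argument:

```shell
# Write the '- - -' wildcard (channel, target, LUN) to every host's
# scan file, so it doesn't matter which hostN owns the new disk.
rescan_scsi_hosts() {
    base="${1:-/sys/class/scsi_host}"
    for scan in "$base"/host*/scan; do
        [ -w "$scan" ] || continue   # skip entries we cannot write to
        echo '- - -' > "$scan"
    done
}

rescan_scsi_hosts
```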
| What are these "host" directories? |
1,543,156,167,000 |
My storage administrator assigned LUNS for a RHEL 5.10 Server.
Now when I run /sbin/scsi_id -g -u -s /block/sdN, it shows:
360060160c8803500183327ae00a2e711
Now I want to see the exact name the storage administrator assigned, such as:
DB2_LUN_1
What am I missing?
The back-end storage is from EMC.
|
You cannot. From my experience as a storage and Linux admin, I can tell you that DB2_LUN_1 is an alias at the storage level. You can confirm the unique ID with the storage admin; it will be similar to 360060160c8803500183327ae00a2e711 (at least most of it). If you are using dm-multipath drivers, you can set an alias in your /etc/multipath.conf file:
multipath {
wwid 360060160c8803500183327ae00a2e711
alias DB2_LUN_1
}
Then rescan the SCSI hosts and run multipath -ll.
| Get SAN Lun Names |
1,543,156,167,000 |
I've made a mess of my directory-to-device mounts:
lorancechen@schan:~$ sudo df -hl
Filesystem Size Used Avail Use% Mounted on
udev 3.9G 0 3.9G 0% /dev
tmpfs 790M 9.0M 781M 2% /run
/dev/mapper/schan--vg-root 226G 6.6G 208G 4% /
tmpfs 3.9G 4.0K 3.9G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/sda2 473M 463M 0 100% /boot
/dev/sda1 511M 3.4M 508M 1% /boot/efi
tmpfs 100K 0 100K 0% /run/lxcfs/controllers
tmpfs 790M 0 790M 0% /run/user/1000
/home/lorancechen/.Private 226G 6.6G 208G 4% /home/lorancechen
Obviously, that's 226G for one home user. I'm not sure what /home/lorancechen/.Private means.
Besides, sudo fdisk -l shows the Linux LVM partition using 237.5G:
lorancechen@schan:/boot/grub$ sudo fdisk -l
Disk /dev/sda: 238.5 GiB, 256060514304 bytes, 500118192 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: B9F1D7BF-5BE5-4457-8589-B7BA73C5298E
Device Start End Sectors Size Type
/dev/sda1 2048 1050623 1048576 512M EFI System
/dev/sda2 1050624 2050047 999424 488M Linux filesystem
/dev/sda3 2050048 500117503 498067456 237.5G Linux LVM
Disk /dev/mapper/schan--vg-root: 229.6 GiB, 246503440384 bytes, 481452032 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/schan--vg-swap_1: 7.9 GiB, 8489271296 bytes, 16580608 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/cryptswap1: 7.9 GiB, 8488747008 bytes, 16579584 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
I'm sure my file system has some problem, and now I can't use sudo apt-get --fix-broken install to install anything because of:
cannot copy extracted data for './boot/vmlinuz-4.4.0-93-generic' to
'/boot/vmlinuz-4.4.0-93-generic.dpkg-new':
failed to write (No space left on device)
What's the problem with the disk assignment? How can I reassign the disk?
Thanks
|
/home/lorancechen/.Private looks like file encryption to me. Have a look at the output of
cat /proc/mounts | grep lorancechen
Your problem with apt-get is due to the fact that your /boot partition is full. Some Linux distros do not delete old kernels. So you have to install some script for that or do it manually from time to time. Go to /boot and delete some of the old kernels and initrds.
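A hedged sketch of that cleanup on a Debian/Ubuntu system. The filter is plain text processing, so it can be tested offline; review its output before enabling the guarded purge at the end:

```shell
# List installed kernel packages other than the one we want to keep.
old_kernels() {
    # stdin: output of `dpkg -l 'linux-image-[0-9]*'`
    # $1:    kernel release to keep (normally `uname -r`)
    awk '/^ii/ { print $2 }' | grep -v "$1"
}

# Purge everything old_kernels reports. Set RUN_KERNEL_CLEANUP=yes
# to actually run (as root); purging also removes the /boot files.
if [ "${RUN_KERNEL_CLEANUP:-no}" = "yes" ]; then
    dpkg -l 'linux-image-[0-9]*' | old_kernels "$(uname -r)" |
        xargs -r apt-get -y purge
fi
```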
| How to reassign disk to file system? |
1,543,156,167,000 |
I have a Debian system which we migrated to an SSD for faster execution. Before that we had two 2.0 TB hard disks in RAID. Now we want to use the RAID drives to store data generated by the application.
I tried using the mount command to mount one of the disks, but it failed.
fdisk -l output :
Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00089ca4
Device Boot Start End Blocks Id System
/dev/sdb1 2048 33556480 16777216+ fd Linux raid autodetect
/dev/sdb2 33558528 34607104 524288+ fd Linux raid autodetect
/dev/sdb3 34609152 3907027120 1936208984+ fd Linux raid autodetect
Disk /dev/sdc: 480.1 GB, 480103981056 bytes
255 heads, 63 sectors/track, 58369 cylinders, total 937703088 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00047ef7
Device Boot Start End Blocks Id System
/dev/sdc1 2048 33556480 16777216+ 82 Linux swap / Solaris
/dev/sdc2 33558528 34607104 524288+ 83 Linux
/dev/sdc3 34609152 937701040 451545944+ 83 Linux
Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000275d2
Device Boot Start End Blocks Id System
/dev/sda1 2048 33556480 16777216+ fd Linux raid autodetect
/dev/sda2 33558528 34607104 524288+ fd Linux raid autodetect
/dev/sda3 34609152 3907027120 1936208984+ fd Linux raid autodetect
As you can see, there are two 2 TB hard disks in RAID. Is there any way I can format them into one single partition across both drives and mount it at, let's say, /media/attachment? Any help would be nice. Thanks a lot. :-)
|
there are two 2Tb hard disks in RAID. Is there any way I can format them to one single partition on both drives and mount them to lets say /media/attachment
For the purposes of this answer I am using /dev/sda and /dev/sdb. It is your responsibility to ensure that this matches your situation.
You can do this provided you are happy to erase all the data on these two disks.
Ensure the disks are unused and you have taken a backup of any data on them that you wanted to keep
Using fdisk or your preferred alternative, erase the partition table and create a single partition covering the entire disk. This will leave you with partitions /dev/sda1 and /dev/sdb1
EITHER
Create a RAID 1 device, which we will identify as /dev/md1, using these two physical partitions
mdadm --create /dev/md1 --level=raid1 --raid-devices=2 /dev/sda1 /dev/sdb1
OR
Create a RAID 0 device, also identified as /dev/md1
mdadm --create /dev/md1 --level=raid0 --raid-devices=2 /dev/sda1 /dev/sdb1
Save the metadata for boot time
mdadm --examine --brief /dev/sda1 /dev/sdb1 >> /etc/mdadm/mdadm.conf
Create the filesystem. Notice that the RAID device is /dev/md1 and from this point on you rarely need to reference /dev/sda1 or /dev/sdb1
mkfs -t ext4 -L bigdisk /dev/md1
Mount it. Don't forget to update /etc/fstab if you want this configured permanently
mkdir -p /media/attachment
mount /dev/md1 /media/attachment
You can cat /proc/mdstat to see the state of the RAID device. If you are running as RAID 1 this will show you the synchronisation status.
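For the fstab update, one hedged approach is to reference the array by filesystem UUID rather than by the /dev/md1 name, since md numbering can change between boots. The helper below only formats the line; on the real system the UUID would come from blkid:

```shell
# Build an fstab line for the array's filesystem. "nofail" keeps a
# degraded or missing array from blocking boot; adjust to taste.
fstab_line() {
    printf 'UUID=%s /media/attachment ext4 defaults,nofail 0 2\n' "$1"
}

# On the real system (as root):
#   fstab_line "$(blkid -s UUID -o value /dev/md1)" >> /etc/fstab
fstab_line 1234abcd-0000-0000-0000-000000000000
```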
| Debian : Mounting a raid array |
1,543,156,167,000 |
I'm helping a friend with his small, self-hosted Ubuntu server. He wanted to install a larger hard drive, so I used Clonezilla to clone the HDD to a larger SSD so that the server wouldn't have to be set up from scratch. This worked great, but of course the operating system doesn't use the new storage space just like that. I tried using a bootable GParted USB stick to enlarge the operating-system partition from "outside", but somehow the available space in the operating system remains unchanged.
I uploaded two screenshots to imgur: one from GParted (I enlarged the /dev/sda3 partition) and one from inside Ubuntu using df -h --total, and for some strange reason they show completely different partitions. Can you help me and tell me how to enlarge the partition for the Ubuntu server?
|
On that system, /dev/sda3 is an LVM PV. After enlarging it, you'll need to extend the LVs too. There's only one LV in your df screenshot: /dev/mapper/ubuntu--vg-ubuntu--lv. If you want to give it all the free space in that VG, then run lvextend -r -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv as root. If you want to extend that LV by only some smaller amount, then run lvextend -r -L newsize /dev/mapper/ubuntu--vg-ubuntu--lv instead, substituting the actual new size you want for newsize.
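The sequence can be sketched as a guarded block; the pvresize step is only needed if GParted did not already grow the PV to match the enlarged partition (device names are assumptions from the question's screenshots):

```shell
# Grow the PV, then the LV and its filesystem (-r resizes the fs too).
# Set RUN_LV_RESIZE=yes to actually run (as root).
if [ "${RUN_LV_RESIZE:-no}" = "yes" ]; then
    pvresize /dev/sda3
    lvextend -r -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
fi
```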
| Extend hard drive on Ubuntu (Server) |
1,543,156,167,000 |
I am having an issue where I can mount my disks manually using the mount command. I then add the disks to fstab, but once I restart, sda1 points at the correct mount point (/mnt/da) and the rest do not. Please help, I am out of ideas.
Server setup:
2 x nvme drives in software raid1
10x 16tb drives without raid, stand alone disks (had to remove raid from these after initial setup)
OS: Debian 12
xfs file system
I use UUIDs, obtained from the blkid command, to add the devices to fstab.
Tried:
Manually mounted the disks
Tried mounting one disk at a time
Tried adding one disk at a time to fstab and reloading
Tried adding all the disks to fstab and reloading
df -h
root@data7 ~ # df -h
Filesystem Size Used Avail Use% Mounted on
udev 63G 0 63G 0% /dev
tmpfs 13G 896K 13G 1% /run
/dev/md2 875G 1013M 829G 1% /
tmpfs 63G 0 63G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
/dev/md1 989M 66M 873M 7% /boot
/dev/sdb1 15T 104G 15T 1% /mnt/db
/dev/sdd1 15T 104G 15T 1% /mnt/dc
/dev/sda1 15T 104G 15T 1% /mnt/da
tmpfs 13G 0 13G 0% /run/user/0
lsblk
root@data7 ~ # lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 14.6T 0 disk
└─sda1 8:1 0 14.6T 0 part /mnt/da
sdb 8:16 0 14.6T 0 disk
└─sdb1 8:17 0 14.6T 0 part /mnt/db
sdc 8:32 0 14.6T 0 disk
└─sdc1 8:33 0 14.6T 0 part
sdd 8:48 0 14.6T 0 disk
└─sdd1 8:49 0 14.6T 0 part /mnt/dc
sde 8:64 0 14.6T 0 disk
└─sde1 8:65 0 14.6T 0 part
sdf 8:80 0 14.6T 0 disk
└─sdf1 8:81 0 14.6T 0 part
sdg 8:96 0 14.6T 0 disk
└─sdg1 8:97 0 14.6T 0 part
sdh 8:112 0 14.6T 0 disk
└─sdh1 8:113 0 14.6T 0 part
sdi 8:128 0 14.6T 0 disk
└─sdi1 8:129 0 14.6T 0 part
sdj 8:144 0 14.6T 0 disk
└─sdj1 8:145 0 14.6T 0 part
sdk 8:160 0 57.7G 0 disk
nvme0n1 259:0 0 894.3G 0 disk
├─nvme0n1p1 259:1 0 4G 0 part
│ └─md0 9:0 0 4G 0 raid1 [SWAP]
├─nvme0n1p2 259:2 0 1G 0 part
│ └─md1 9:1 0 1022M 0 raid1 /boot
└─nvme0n1p3 259:3 0 889.3G 0 part
└─md2 9:2 0 889.1G 0 raid1 /
nvme1n1 259:4 0 894.3G 0 disk
├─nvme1n1p1 259:5 0 4G 0 part
│ └─md0 9:0 0 4G 0 raid1 [SWAP]
├─nvme1n1p2 259:6 0 1G 0 part
│ └─md1 9:1 0 1022M 0 raid1 /boot
└─nvme1n1p3 259:7 0 889.3G 0 part
└─md2 9:2 0 889.1G 0 raid1 /
blkid
GETTING UUID
root@data7 ~ # blkid | g sda
/dev/sda1: UUID="cea5e8d9-1ddf-4502-a609-3a17af37082c" BLOCK_SIZE="4096" TYPE="xfs" PARTUUID="d0a89050-f533-7245-877e-5006d974516c"
root@data7 ~ # blkid | g sdb
/dev/sdb1: UUID="e3ae1145-d37b-41d7-ac1f-5c6a646bd5ed" BLOCK_SIZE="4096" TYPE="xfs" PARTUUID="da1f85c1-9054-d147-a9b8-0965020b4d67"
root@data7 ~ # blkid | g sdc
/dev/sdc1: UUID="47bf4e70-ec50-4369-88c2-9dfd8dd5d422" BLOCK_SIZE="4096" TYPE="xfs" PARTUUID="f458a5c3-6b6c-054d-a069-27930dcb02f2"
fstab
/etc/fstab - WAS CORRECT BEFORE RESTART .. CHANGED IT 5+ TIMES AND BREAKS AFTER EVERY RESTART
proc /proc proc defaults 0 0
# /dev/md/0
UUID=e2f568f6-846b-4657-b88d-3c8108d5600c none swap sw 0 0
# /dev/md/1
UUID=a5868f6d-b7e5-43b1-ab81-4770a543d83a /boot ext3 defaults 0 0
# /dev/md/2
UUID=612c81e1-94e4-415e-863f-6dfcbe127dee / ext4 defaults 0 0
# /dev/sda1
UUID=cea5e8d9-1ddf-4502-a609-3a17af37082c /mnt/da xfs defaults 0 2
# /dev/sdb1
UUID=e3ae1145-d37b-41d7-ac1f-5c6a646bd5ed /mnt/db xfs defaults 0 2
# /dev/sdc1
UUID=2b28f001-d9a0-4759-8f29-4bf45a18aeb6 /mnt/dc xfs defaults 0 2
My other servers (in response to the comment that device names don't persist across reboots):
root@data2:~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 126G 0 126G 0% /dev
tmpfs 26G 1.3G 24G 6% /run
/dev/sda3 5.5T 4.9T 246G 96% /
tmpfs 126G 0 126G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 126G 0 126G 0% /sys/fs/cgroup
/dev/sda2 923M 79M 781M 10% /boot
/dev/sde1 5.5T 5.0T 236G 96% /mnt/de
/dev/sdf1 5.5T 4.1T 1.2T 79% /mnt/df
/dev/sdd1 5.5T 4.7T 468G 92% /mnt/dd
/dev/sdc1 5.5T 5.1T 49G 100% /mnt/dc
/dev/sdb1 5.5T 5.1T 74G 99% /mnt/db
tmpfs 26G 0 26G 0% /run/user/1000
tmpfs 26G 0 26G 0% /run/user/0
root@data3:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 126G 0 126G 0% /dev
tmpfs 26G 2.5G 23G 10% /run
/dev/sda3 5.5T 4.2T 1.1T 81% /
tmpfs 126G 0 126G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 126G 0 126G 0% /sys/fs/cgroup
/dev/sda2 923M 79M 781M 10% /boot
/dev/sdc1 11T 11T 660G 95% /mnt/df
/dev/sdf1 11T 9.2T 1.8T 84% /mnt/de
/dev/sdd1 11T 9.8T 1.2T 90% /mnt/dc
/dev/sde1 11T 11T 191G 99% /mnt/dd
/dev/sdb1 11T 11T 855G 93% /mnt/db
tmpfs 26G 0 26G 0% /run/user/1001
tmpfs 26G 0 26G 0% /run/user/0
root@data4:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 126G 0 126G 0% /dev
tmpfs 26G 2.5G 23G 10% /run
/dev/sda3 11T 11T 249G 98% /
tmpfs 126G 0 126G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 126G 0 126G 0% /sys/fs/cgroup
/dev/sda2 923M 80M 781M 10% /boot
/dev/sdc1 11T 9.3T 1.7T 85% /mnt/dc
/dev/sdd1 11T 9.0T 2.0T 82% /mnt/dd
/dev/sdb1 11T 9.8T 1.2T 90% /mnt/db
tmpfs 26G 0 26G 0% /run/user/1002
tmpfs 26G 0 26G 0% /run/user/1000
tmpfs 26G 0 26G 0% /run/user/1005
tmpfs 26G 0 26G 0% /run/user/0
|
The "device IDs" you mention are not guaranteed to be consistent across reboots;
you are using the wrong vocabulary.
There are six ways to identify disks for mounting in Linux, as shown under /dev/disk; this is on RHEL 7.9:
by-id/ by-label/ by-partlabel/ by-partuuid/ by-path/ by-uuid/
by-id is a SCSI identifier or WWN (world wide name) identifier.
The unreliable (or inconsistent) way is mounting by device name, which for example means putting just /dev/sdb1 /data in /etc/fstab. That is what you had been doing and are incorrectly referring to as device IDs. The IDs are consistent; the device names (sda, sdb, sdc and so on) are not.
You will see that everything under those six /dev/disk/ folders is a symlink pointing back up to a device name (/dev/sda2, for example).
As mentioned in the comments, device names get mapped at boot in the order the devices are recognized. Adding a new disk connected by SATA cable does not necessarily put it at the end of the sda, sdb, sdc... list. It oftentimes lands at the front as sda and everything else shifts down, which is how the inconsistency comes about. Simply swapping two disks between the SATA ports they are connected to on the motherboard causes the same issue.
by-uuid is very consistent, hence the name: universally unique ID.
by-id, as in the SCSI ID or WWN ID, should also be very reliable; you will often find the WWN printed on the disk's label.
by-label should be reliable, up until you do something like label multiple disks (partitions actually) with the same label name.
I think by-path is inconsistent, for the reason that if disks get connected to different SATA/SAS ports, that is now a different path.
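To make the switch to UUIDs concrete, here is a small sketch (the blkid sample line and mount point are taken from the question; the fstab_entry helper is illustrative, not a standard tool) that turns one line of blkid output into a UUID-based fstab entry:

```shell
#!/bin/sh
# Sketch: build a UUID-based fstab line from one line of blkid output.
# Sample line and mount point are taken from the question above.
blkid_line='/dev/sda1: UUID="cea5e8d9-1ddf-4502-a609-3a17af37082c" BLOCK_SIZE="4096" TYPE="xfs" PARTUUID="d0a89050-f533-7245-877e-5006d974516c"'

fstab_entry() {
    # $1 = blkid output line, $2 = mount point
    uuid=$(printf '%s\n' "$1" | sed -n 's/.* UUID="\([^"]*\)".*/\1/p')
    fstype=$(printf '%s\n' "$1" | sed -n 's/.* TYPE="\([^"]*\)".*/\1/p')
    printf 'UUID=%s %s %s defaults 0 2\n' "$uuid" "$2" "$fstype"
}

fstab_entry "$blkid_line" /mnt/da
```

This prints the same kind of line the question already has in /etc/fstab, which also makes it easy to spot a mismatched UUID by comparing against what is actually in the file.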
| Disks mount points inconsistent after each reboot. Using UUID in fstab |
1,543,156,167,000 |
My disk is sda and I have these size files:
/sys/dev/block/8:0/size
/sys/class/block/sda/size
/sys/block/sda/size
Which one should I use? The first one is used by lsblk. Are there any differences?
|
Check out
ls -l /sys/dev/block/8:0 /sys/class/block/sda /sys/block/sda
You’ll see that all three point to the same directory.
There’s no difference between the files, other than their path.
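One caveat worth noting: the value in those size files is a sector count in 512-byte units (regardless of the device's logical sector size), so to get bytes you multiply by 512. A small sketch, using a hypothetical sector count:

```shell
#!/bin/sh
# The sysfs "size" files report 512-byte sectors; convert to bytes.
# The sector count below is hypothetical (a typical 500 GB disk).
sectors_to_bytes() {
    echo $(( $1 * 512 ))
}

sectors_to_bytes 976773168   # prints 500107862016
```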
| Which "size" file should I use to get the disk size? |
1,543,156,167,000 |
List SCSI devices in my os:
debian@debian:~$ lsscsi
[0:0:0:0] disk ATA ST500DM002-1BD14 KC66 /dev/sda
[1:0:0:0] disk ATA WDC WD2500AAKX-0 1H15 /dev/sdb
[4:0:0:0] disk ATA ST1000VX000 SC23 /dev/sdc
[9:0:0:0] disk Innostor Innostor 1.00 /dev/sdd
What does the 5th column mean in the lsscsi output?
debian@debian:~$ lsscsi | awk '{print $5}'
KC66
WD2500AAKX-0
SC23
1.00
|
The columns are:
[scsi_host:channel:target_number:LUN]
SCSI peripheral type
vendor name
model name
revision string
So the fifth column is the revision string. You can also use lsscsi -c to print in the classic mode, where these are printed in a different form and prefixed:
$ lsscsi
[9:0:5:0] disk QEMU QEMU HARDDISK 2.5+ /dev/sda
$ lsscsi -c
Host: scsi9 Channel: 00 Target: 05 Lun: 00
Vendor: QEMU Model: QEMU HARDDISK Rev: 2.5+
Type: Direct-Access ANSI SCSI revision: 05
By the way, in your case WDC WD2500AAKX-0 actually appears to be the model name containing a space, and 1H15 is the revision.
You can read more about lsscsi here.
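Because the model name can contain spaces (as with WDC WD2500AAKX-0), picking $5 is fragile; the revision is more reliably the second-to-last field. A sketch, using a sample line from the question:

```shell
#!/bin/sh
# The revision string sits just before the device node, so $(NF-1)
# is more robust than $5 when the model name contains spaces.
# Sample line taken from the question.
line='[1:0:0:0]    disk    ATA      WDC WD2500AAKX-0 1H15  /dev/sdb'

revision() {
    printf '%s\n' "$1" | awk '{print $(NF-1)}'
}

revision "$line"   # prints 1H15
```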
| What does the 5th column mean in the lsscsi output? |
1,543,156,167,000 |
I have a 4 GB SD card, but what I'd like to be able to do is have more free space on the / partition. I don't actually need a swap partition either, so how would I resize/move the partitions, for example using fdisk?
Disk /dev/mmcblk0: 3.7 GiB
Device Boot Start End Sectors Size Id Type
/dev/mmcblk0p1 * 2048 3844095 3842048 1.9G 83 Linux
/dev/mmcblk0p2 3846142 7772159 3926018 1.9G 5 Extended
/dev/mmcblk0p5 3846144 7772159 3926016 1.9G 82 Linux swap / Solaris
Filesystem Size Used Avail Use% Mounted on
udev 920M 0 920M 0% /dev
tmpfs 187M 20M 168M 11% /run
/dev/mmcblk0p1 1.8G 1.3G 417M 76% /
tmpfs 935M 0 935M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 935M 0 935M 0% /sys/fs/cgroup
tmpfs 187M 0 187M 0% /run/user/1001
|
Stop the swap using swapoff -a
Remove the swap (/dev/mmcblk0p5) and the extended (/dev/mmcblk0p2) partitions. To remove a partition with fdisk just run fdisk /dev/mmcblk0 and use d to delete a partition (it will ask which one).
Don't forget to remove swap entry from /etc/fstab and GRUB config.
Resize the / partition. Resizing a partition with fdisk means deleting it and then creating a new one with same start sector and different end sector. You can follow for example this answer.
Don't forget to resize the filesystem on /dev/mmcblk0p1 after you resize the partition. Use the tool for your filesystem: resize2fs /dev/mmcblk0p1 for ext4 or xfs_growfs /dev/mmcblk0p1 for XFS.
As always with storage, make sure to make a backup first.
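To sanity-check the numbers before touching fdisk: keeping the same start sector (2048) and moving the end out to the old extended partition's end (7772159) gives the expected new size. A quick arithmetic sketch using the fdisk listing above:

```shell
#!/bin/sh
# Sector arithmetic for the resize, using the fdisk numbers above.
START=2048
NEW_END=7772159   # old end sector of the extended partition

sectors=$(( NEW_END - START + 1 ))
bytes=$(( sectors * 512 ))
echo "$sectors sectors = $bytes bytes"   # roughly 3.7 GiB, i.e. nearly the whole card
```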
| Moving space from swap to / |
1,543,156,167,000 |
I've ended up in a situation where I cut my disk space in half, and would like to merge two partitions together. Here is a picture from Gnome Disks:
The blue partition on the left is unused, and Partition 7 on the right is what I boot into and currently use. It looks like in Gnome Disks, I can just click the red minus sign to delete the blue, unmounted partition. Is this going to let me combine my two partitions?
Thanks.
|
Partitions cannot be resized to the left; you actually need to copy all the data from the second partition to the start of the free space and then resize it to the right. GNOME Disks cannot do this; you need to use GParted and its Resize/Move operation. This cannot be done on a mounted filesystem, so you'll need to use the live CD. Also make sure to make a backup first: this is a potentially dangerous operation (if you lose power during the operation, for example, recovery would be really hard).
| How to reclaim unmounted disk space |
1,543,156,167,000 |
I have an old Seagate 4TB internal drive from a crapped out pc that I was planning to repurpose as a spare drive for gaming.
Figured I'd run some smartctl scans on it first just to be safe, so I did smartctl -t short /dev/sdb and got back results. They looked OK to me because I didn't see anything listed in the 'WHEN_FAILED' column (and originally I had been mostly concerned with the temperature-related errors). But then I saw an article from 2018 mentioning that 'Current_Pending_Sector' is pretty serious... And mine is not zero... And I did have some errors besides... Since I can't really make sense of whether or not to be concerned about them, I figured I'd try SE.
My best guess so far is that I shouldn't put anything critical on it but that it might be fine to use for games if I symlink the save folders so they exist somewhere else (on a drive with better smart results) and don't mind re-downloading the installed game in the event of drive failure. Also not sure if the 'READ DMA EXT' errors are indications of imminent failure or if that could be a cable or other one-time event (I can only see errors 35-39 and they all occurred at "16936 hours"... not sure if there's a way to see all of the errors or if literally only the last 5 are stored like it says). OTOH, I didn't have any issues mounting it or copying data off it (it was a relative's and they didn't want it anymore; just some pics/videos off it).
If there's at least decent odds that the drive might have some life left, I don't mind chancing it for less important stuff. But if it is highly likely to fail in the near future, I'd prefer not to waste any time with it for anything but acquiring a new magnet :-) Any advice / recommendations?
Anyway, I reran with smartctl -t long /dev/sdb waited till the next day and ran smartctl -a /dev/sdb. Here are the results for that:
I_AM_ROOT@fedora35:~
# smartctl -a /dev/sdb
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.15.7-200.fc35.x86_64] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Family: Seagate Desktop HDD.15
Device Model: ST4000DM000-1F2168
Serial Number: <Redacted>
LU WWN Device Id: <Redacted>
Firmware Version: CC54
User Capacity: 4,000,787,030,016 bytes [4.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 5900 rpm
Form Factor: 3.5 inches
Device is: In smartctl database [for details use: -P show]
ATA Version is: ACS-2, ACS-3 T13/2161-D revision 3b
SATA Version is: SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Fri Dec 17 11:44:49 2021 EST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x00) Offline data collection activity
was never started.
Auto Offline Data Collection: Disabled.
Self-test execution status: ( 118) The previous self-test completed having
the read element of the test failed.
Total time to complete Offline
data collection: ( 168) seconds.
Offline data collection
capabilities: (0x73) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
No Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 1) minutes.
Extended self-test routine
recommended polling time: ( 528) minutes.
Conveyance self-test routine
recommended polling time: ( 2) minutes.
SCT capabilities: (0x1085) SCT Status supported.
SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x000f 119 099 006 Pre-fail Always - 233492808
3 Spin_Up_Time 0x0003 092 091 000 Pre-fail Always - 0
4 Start_Stop_Count 0x0032 099 099 020 Old_age Always - 1890
5 Reallocated_Sector_Ct 0x0033 100 100 010 Pre-fail Always - 0
7 Seek_Error_Rate 0x000f 044 039 030 Pre-fail Always - 678608011490
9 Power_On_Hours 0x0032 065 065 000 Old_age Always - 30836
10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0
12 Power_Cycle_Count 0x0032 099 099 020 Old_age Always - 1206
183 Runtime_Bad_Block 0x0032 100 100 000 Old_age Always - 0
184 End-to-End_Error 0x0032 100 100 099 Old_age Always - 0
187 Reported_Uncorrect 0x0032 061 061 000 Old_age Always - 39
188 Command_Timeout 0x0032 100 100 000 Old_age Always - 0 0 0
189 High_Fly_Writes 0x003a 100 100 000 Old_age Always - 0
190 Airflow_Temperature_Cel 0x0022 071 058 045 Old_age Always - 29 (Min/Max 27/32)
191 G-Sense_Error_Rate 0x0032 100 100 000 Old_age Always - 0
192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 304
193 Load_Cycle_Count 0x0032 084 084 000 Old_age Always - 32204
194 Temperature_Celsius 0x0022 029 042 000 Old_age Always - 29 (0 12 0 0 0)
197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 16
198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 16
199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 0
240 Head_Flying_Hours 0x0000 100 253 000 Old_age Offline - 23293h+16m+41.533s
241 Total_LBAs_Written 0x0000 100 253 000 Old_age Offline - 19236444339
242 Total_LBAs_Read 0x0000 100 253 000 Old_age Offline - 27220280383
SMART Error Log Version: 1
ATA Error Count: 39 (device log contains only the most recent five errors)
CR = Command Register [HEX]
FR = Features Register [HEX]
SC = Sector Count Register [HEX]
SN = Sector Number Register [HEX]
CL = Cylinder Low Register [HEX]
CH = Cylinder High Register [HEX]
DH = Device/Head Register [HEX]
DC = Device Command Register [HEX]
ER = Error register [HEX]
ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.
Error 39 occurred at disk power-on lifetime: 16936 hours (705 days + 16 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 51 00 ff ff ff 0f Error: UNC at LBA = 0x0fffffff = 268435455
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
25 00 08 ff ff ff ef 00 04:59:22.764 READ DMA EXT
25 00 40 ff ff ff ef 00 04:59:22.762 READ DMA EXT
25 00 00 ff ff ff ef 00 04:59:22.736 READ DMA EXT
25 00 08 ff ff ff ef 00 04:59:22.735 READ DMA EXT
ef 10 02 00 00 00 a0 00 04:59:22.735 SET FEATURES [Enable SATA feature]
Error 38 occurred at disk power-on lifetime: 16936 hours (705 days + 16 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 51 00 ff ff ff 0f Error: UNC at LBA = 0x0fffffff = 268435455
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
25 00 08 ff ff ff ef 00 04:59:18.709 READ DMA EXT
25 00 00 ff ff ff ef 00 04:59:18.696 READ DMA EXT
25 00 00 ff ff ff ef 00 04:59:18.693 READ DMA EXT
25 00 00 ff ff ff ef 00 04:59:18.631 READ DMA EXT
25 00 08 ff ff ff ef 00 04:59:18.631 READ DMA EXT
Error 37 occurred at disk power-on lifetime: 16936 hours (705 days + 16 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 51 00 ff ff ff 0f Error: UNC at LBA = 0x0fffffff = 268435455
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
25 00 08 ff ff ff ef 00 04:57:53.914 READ DMA EXT
25 00 08 ff ff ff ef 00 04:57:53.914 READ DMA EXT
25 00 00 ff ff ff ef 00 04:57:53.882 READ DMA EXT
ef 10 02 00 00 00 a0 00 04:57:53.881 SET FEATURES [Enable SATA feature]
27 00 00 00 00 00 e0 00 04:57:53.881 READ NATIVE MAX ADDRESS EXT [OBS-ACS-3]
Error 36 occurred at disk power-on lifetime: 16936 hours (705 days + 16 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 51 00 ff ff ff 0f Error: UNC at LBA = 0x0fffffff = 268435455
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
25 00 08 ff ff ff ef 00 04:57:49.903 READ DMA EXT
25 00 08 ff ff ff ef 00 04:57:49.903 READ DMA EXT
25 00 08 ff ff ff ef 00 04:57:49.903 READ DMA EXT
25 00 08 ff ff ff ef 00 04:57:49.903 READ DMA EXT
25 00 08 ff ff ff ef 00 04:57:49.903 READ DMA EXT
Error 35 occurred at disk power-on lifetime: 16936 hours (705 days + 16 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 51 00 ff ff ff 0f Error: UNC at LBA = 0x0fffffff = 268435455
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
25 00 00 ff ff ff ef 00 04:57:45.210 READ DMA EXT
25 00 00 ff ff ff ef 00 04:57:45.181 READ DMA EXT
25 00 00 ff ff ff ef 00 04:57:45.179 READ DMA EXT
25 00 00 ff ff ff ef 00 04:57:45.178 READ DMA EXT
25 00 58 ff ff ff ef 00 04:57:45.149 READ DMA EXT
SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Extended offline Completed: read failure 60% 30817 3723785408
# 2 Short offline Completed without error 00% 30812 -
SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
|
The SMART data is not terrible, but your HDD firmware cannot fix some of the errors on its own.
Since the self-test log tells you exactly which sector is broken, you can try to force it to be reallocated using dd:
https://www.smartmontools.org/wiki/BadBlockHowto
Still, this HDD is now unsafe to use, so even if you manage to fix it, consider using the drive only for non-essential info you're ready to lose.
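For reference, a sketch of the arithmetic behind the BadBlockHowto: the failing LBA from the self-test log (3723785408) times this drive's 512-byte logical sector size gives the byte offset the overwrite targets. The dd command is printed here rather than run, because running it zeroes that sector and destroys whatever was stored there:

```shell
#!/bin/sh
# Compute the byte offset of the failing sector reported by the long self-test.
LBA=3723785408      # LBA_of_first_error from the self-test log above
SECTOR=512          # logical sector size of this drive

offset=$(( LBA * SECTOR ))
echo "byte offset: $offset"
# Printed only - running this zeroes the sector so the firmware can reallocate it:
echo "dd if=/dev/zero of=/dev/sdb bs=$SECTOR count=1 seek=$LBA"
```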
| Is my hard drive failing? / Need help with smartctl -a output |
1,543,156,167,000 |
I have a PC with a small disk:
Filesystem Size Used Avail Use% Mounted on
/dev/root 3.5G 3.1G 249M 93% /
devtmpfs 459M 0 459M 0% /dev
tmpfs 463M 0 463M 0% /dev/shm
tmpfs 463M 36M 428M 8% /run
tmpfs 5.0M 8.0K 5.0M 1% /run/lock
tmpfs 463M 0 463M 0% /sys/fs/cgroup
/dev/mmcblk0p1 63M 25M 38M 40% /boot
tmpfs 64M 140K 64M 1% /mnt/ramdisk
I have a simple database, but one of the tables has more than 5 million rows, taking up a lot of space on such a small disk:
# ls -lh /var/lib/mysql/datalogger/
total 1.1G
...
-rw-rw---- 1 mysql mysql 1.1G Nov 10 09:56 avg_values.ibd
...
My goal, achieved on other similar computers without so much space occupied (about half as much in that table), was to run a query that deleted data more than 6 months old. The query is irrelevant to the case. After the query, running optimize NO_WRITE_TO_BINLOG table avg_values; frees up the ibd space.
The problem with the computer in question is that, as I believe, because the available space is so limited, you cannot perform queries on such a large table (due to caching, I understand), causing instability in communications, even when the query has a limit:
MariaDB [datalogger]> select * from avg_values limit 10;
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id: 5
Current database: datalogger
ERROR 2013 (HY000): Lost connection to MySQL server during query
With other smaller tables there are usually no problems (although it also indicates error):
MariaDB [datalogger]> select * from raw_values;
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id: 5
Current database: datalogger
+----+---------------------+-------------------+------+-------+
| id | timestamp | channel | raw | value |
+----+---------------------+-------------------+------+-------+
| 1 | 2021-11-10 09:46:51 | AI1 | 1 | 0.011 |
| 2 | 2021-11-10 09:46:51 | AI2 | 1 | 0.011 |
| 3 | 2021-11-10 09:46:51 | AI3 | 2 | 0.022 |
| 4 | 2021-11-10 09:46:51 | AI4 | 2 | 0.022 |
| 5 | 2021-11-10 09:46:51 | pyr1_sensor1_data | 113 | 113 |
| 6 | 2021-11-10 09:46:51 | pyr1_body_temp | 42 | 42 |
| 7 | 2021-11-10 09:46:51 | pyr2_sensor1_data | NULL | NULL |
| 8 | 2021-11-10 09:46:51 | pyr2_body_temp | NULL | NULL |
| 9 | 2021-11-10 09:46:46 | AI1 | 1 | 0.011 |
| 10 | 2021-11-10 09:46:46 | AI2 | 1 | 0.011 |
| 11 | 2021-11-10 09:46:46 | AI3 | 2 | 0.022 |
| 12 | 2021-11-10 09:46:46 | AI4 | 2 | 0.022 |
| 13 | 2021-11-10 09:46:46 | pyr1_sensor1_data | 115 | 115 |
| 14 | 2021-11-10 09:46:46 | pyr1_body_temp | 42 | 42 |
| 15 | 2021-11-10 09:46:46 | pyr2_sensor1_data | NULL | NULL |
| 16 | 2021-11-10 09:46:46 | pyr2_body_temp | NULL | NULL |
| 17 | 2021-11-10 09:46:52 | AI1 | 1 | 0.011 |
| 18 | 2021-11-10 09:46:52 | AI2 | 1 | 0.011 |
| 19 | 2021-11-10 09:46:52 | AI3 | 2 | 0.022 |
| 20 | 2021-11-10 09:46:52 | AI4 | 2 | 0.022 |
| 21 | 2021-11-10 09:46:52 | pyr1_sensor1_data | 112 | 112 |
| 22 | 2021-11-10 09:46:52 | pyr1_body_temp | 42 | 42 |
| 23 | 2021-11-10 09:46:52 | pyr2_sensor1_data | NULL | NULL |
| 24 | 2021-11-10 09:46:52 | pyr2_body_temp | NULL | NULL |
+----+---------------------+-------------------+------+-------+
24 rows in set (0.02 sec)
Is there any way I can run these queries, especially with this little disk in order to free up space?
|
There is no easy way to do the optimize with limited storage space, or any sane query for that matter, because you may run out of space during query execution. You have two options:
Mount an NFS drive from a remote machine and point mysql's tmpdir to it
Or
Point mysql's tmpdir to /dev/shm. Be very cautious with this and always have a backup before performing such operations
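For the second option, the relevant my.cnf fragment would look something like this (the file name and the subdirectory under /dev/shm are illustrative; create the directory with ownership mysqld can use before restarting the server):

```ini
# /etc/mysql/conf.d/tmpdir.cnf - hypothetical file name
[mysqld]
tmpdir = /dev/shm/mysqltmp
```

Note that /dev/shm is backed by RAM, so large temporary tables created during the optimize will consume memory instead of disk.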
| Free up mysql space on low-disk PC |
1,543,156,167,000 |
I'm looking on my Debian 11 Server for the easiest way to allocate 100GB of extra space after the /dev/sda1 device in command line.
The sda1 partition is almost full and needs to be resized using the unallocated space.
Here is the structure of my hard drive:
Disk: /dev/sda
Size: 200 GiB, 214748364800 bytes, 419430400 sectors
Label: dos, identifier: 0xea1313af
Device Boot Start End Sectors Size Id Type
>> /dev/sda1 * 2048 192940031 192937984 92G 83 Linux
/dev/sda2 192942078 209713151 16771074 8G 5 Extended
└─/dev/sda5 192942080 209713151 16771072 8G 82 Linux swap / Solaris
Free space 209713152 419430399 209717248 100G
Partition type: Linux (83) │
│ Attributes: 80 │
│Filesystem UUID: b4804667-c4f3-4915-a95d-d3b83fac302c │
│ Filesystem: ext4 │
│ Mountpoint: / (mounted)
Could you help me to easily achieve this in command line? Thanks!
Best regards
|
The free space is not directly after the sda1 partition, so you can't use it as-is; you need to remove (or move, but removing is easier) the swap partition sda5.
Stop the swap using swapoff /dev/sda5
Remove the sda5 partition and the sda2 extended partition.
Resize the sda1 partition. Don't forget to resize the filesystem too using resize2fs. You can check this question for more details about resizing partitions using fdisk.
Create a new swap partition (optionally a logical one inside a new extended partition if you want setup similar to your current one).
Update your /etc/fstab swap record with the new partition number or UUID.
| Extend 100GB of unallocated space on /dev/sda1 device in command line |
1,543,156,167,000 |
I have two disks, 4 TB each. I created a RAID with LVM. The size shown for Available is wrong, or is it my mistake? I don't understand why it shows 6.9T and not 7.2T or 7.3T :/ Thanks
lsblk
sda 8:0 0 3.7T 0 disk
└─backup 253:0 0 7.3T 0 lvm /media/backup
sdb 8:16 0 3.7T 0 disk
└─backup 253:0 0 7.3T 0 lvm /media/backup
fdisk -l
Size Used Avail Use%
7.3T 93M 6.9T 1%
|
The available size is correct; by default ext4 reserves 5 % of the space for root (95 % of 7.3 TiB is 6.93 TiB). This isn't as important for filesystems other than / (on / it prevents unprivileged processes from filling the filesystem), but it also helps with preventing fragmentation. You can change the reserve using tune2fs -m 1 <device> to set it to 1 % (you can also set this when creating the filesystem, using the same -m 1 option with mke2fs).
From tune2fs manpage:
-m reserved-blocks-percentage
Set the percentage of the filesystem which may only be allocated by privileged processes. Reserving some number of filesystem blocks for use by privileged processes is done to avoid filesystem fragmentation, and to allow system daemons, such as syslogd(8), to continue to function correctly after non-privileged processes are prevented from writing to the filesystem. Normally, the default percentage of reserved blocks is 5%.
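As a quick check of the arithmetic behind the numbers in the question:

```shell
#!/bin/sh
# 5 % of a 7.3 TiB filesystem is reserved by default, leaving ~6.9 TiB,
# which matches the df output in the question.
avail=$(awk 'BEGIN { printf "%.3f", 7.3 * 0.95 }')
echo "available after the 5% reserve: ${avail} TiB"
```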
| Disk size is show wrong value for Available |
1,543,156,167,000 |
To my understanding, an LVM snapshot has a size set upon creation. According to the LVM man page, a snapshot can have all its space used up and be rendered unusable (unless extended). But how does its space get used up?
Let's say I have a file of 10M on the source (the original logical volume). When I create a snapshot, it will be identical. I append some data to the source file, so now the file size is 15M. If I do the same thing all over again, making the 15M file 20M in size, will the 15M file replace the 10M file on the snapshot? If not, then in what way does the snapshot get its available space used up?
|
Copy-on-write LVM snapshots contain the differences between the snapshot and the origin. So every time a change is made to either, the change is added to the snapshot, potentially increasing the space needed in the snapshot. (“Potentially” because changes to blocks that have already been snapshotted for a previous change don’t need more space, they overwrite the previous change — the goal is to track the current differences, not the history of all the changes.)
LVs aren’t aware of the structure of data stored inside them. Appending 5MiB to a file results in at least 5MiB of changes written to the origin, so the changed blocks need to be added to the snapshot (to preserve their snapshotted contents). Writing another 5MiB to the file results in another 5MiB (at least) of changes to the origin, which result in a similar amount of data being written to the snapshot (again, to preserve the original contents). The contents of the file, or indeed the volume, as seen in the snapshot, never change as a result of changes to the origin.
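A rough way to see the accounting, assuming the default 4 KiB snapshot chunk size (actual usage also includes some metadata overhead, which this sketch ignores):

```shell
#!/bin/sh
# Each origin chunk that changes is copied into the snapshot exactly once.
# Assumes the default 4 KiB chunk size; metadata overhead is ignored.
CHUNK=4096

chunks_for() {
    # bytes changed -> chunks copied into the snapshot (rounded up)
    echo $(( ($1 + CHUNK - 1) / CHUNK ))
}

chunks_for $(( 5 * 1024 * 1024 ))   # appending 5 MiB: prints 1280
```

Rewriting the same 5 MiB again would cost nothing extra, since those chunks are already in the snapshot; only previously untouched chunks consume new snapshot space.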
| LVM Snapshot - does the new data gets replaced by previous data? |
1,543,156,167,000 |
I've setup a server with luks devices (not used for root partition), they are listed in /etc/crypttab this way
# <target name> <source device> <key file> <options>
luks_device_1 /dev/mapper/vg-lv_1 none luks
...
I've also setup a tang server and bound the devices to tang using the command
clevis luks bind -d /dev/mapper/vg-lv_1 tang '{"url":"http://svr"}'
finally I enable the units clevis-luks-askpass.path and clevis-luks-askpass.service to have the automatic unlocking mechanism working at boot.
However the devices are not unlocked at boot, the password is asked on the console, unless I add in the file /etc/crypttab the string _netdev in the options section. But I'm not really fond of that because _netdev is supposed to be used for network devices.
Did I miss something?
|
Actually, according to the manpage clevis-luks-unlockers(7), having the option _netdev in /etc/crypttab is necessary to trigger the automatic unlocking:
After a reboot, Clevis will attempt to unlock all _netdev devices listed in /etc/crypttab when systemd prompts for their passwords. This implies that systemd support for _netdev is required.
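So for the setup in the question, the crypttab entry would become:

```
# <target name>  <source device>        <key file>  <options>
luks_device_1    /dev/mapper/vg-lv_1    none        luks,_netdev
```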
| tang / clevis: automatic unlocking for luks device not triggered unless defined as _netdev |
1,543,156,167,000 |
I have partitions in my CentOS 8 machine as below:
[root@XXXXX]# fdisk -l
Disk /dev/sda: 80 GiB, 85899345920 bytes, 167772160 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xbccac24e
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 2099199 2097152 1G 83 Linux
/dev/sda2 2099200 167768063 165668864 79G 8e Linux LVM
Disk /dev/mapper/cl-root: 76.5 GiB, 82137055232 bytes, 160423936 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/cl-swap: 2.5 GiB, 2684354560 bytes, 5242880 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
[root@XXXXX]#
/dev/mapper/cl-root shows a size of 76.5 GiB.
But df -h is showing a different size, 22G, for root:
[root@XXXXX]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 9.5M 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/mapper/cl-root 22G 22G 62M 100% /
/dev/sda1 976M 256M 654M 29% /boot
tmpfs 379M 1.2M 377M 1% /run/user/42
tmpfs 379M 4.6M 374M 2% /run/user/1000
[root@XXXXX]#
I need to expand my root filesystem, because it is running out of space.
I am not able to extend or create new partition for the remaining 54GB
(76GB − 22GB) that is already available for /dev/mapper/cl-root.
Below are the results for pvdisplay and lsblk:
[root@XXXXX]# pvdisplay
--- Physical volume ---
PV Name /dev/sda2
VG Name cl
PV Size <79.00 GiB / not usable 1.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 20223
Free PE 0
Allocated PE 20223
PV UUID XXXXXX-XXXX-XXXX-XXXX-XXXX-XXXXX-XXXXX
[root@XXXXX]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 80G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 79G 0 part
├─cl-root 253:0 0 76.5G 0 lvm /
└─cl-swap 253:1 0 2.5G 0 lvm [SWAP]
sr0 11:0 1 1024M 0 rom
[root@XXXXX]#
lvdisplay is showing the below output:
[root@XXXXX]# lvdisplay
--- Logical volume ---
LV Path /dev/cl/swap
LV Name swap
VG Name cl
LV UUID 000000000000000000000000000
LV Write Access read/write
LV Creation host, time localhost, 2020-01-29 21:17:47 -0500
LV Status available
# open 2
LV Size 2.50 GiB
Current LE 640
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:1
--- Logical volume ---
LV Path /dev/cl/root
LV Name root
VG Name cl
LV UUID 00000000000000000000000000
LV Write Access read/write
LV Creation host, time localhost, 2020-01-29 21:17:48 -0500
LV Status available
# open 1
LV Size <76.50 GiB
Current LE 19583
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 8192
Block device 253:0
[root@XXXXX]#
I am using CentOS 8.
What should I do?
|
Your LV /dev/cl/root is already at the maximum available size; what you need is to resize the file system. If you kept the default options, it's XFS, so the command should be:
xfs_growfs / This command will extend the XFS file system to fill all available space. Bear in mind that XFS file systems cannot be shrunk.
Using xfs_growfs -D size /, you can specify a size to grow to, but it's expressed in filesystem blocks, not MBs/GBs. The xfs_info command will show you the block size.
This command will help you see the difference between LV size and xfs volume size:
lsblk -o name,fstype,size,fssize,mountpoint,label,model,vendor
Also pvs, vgs, lvs give a nice summary of your LVM physical volumes, volume groups and logical volumes respectively.
| Cannot expand file system on CentOS 8 |
1,543,156,167,000 |
I wrote a script that loops through every file in a folder and performs a simple operation on each file. That folder will almost always be empty and only occasionally contain a file, but I'd like the script to run automatically (and relatively promptly) when a file does appear. What's the best practice to do that?
Right now, I just have cron running the script every minute. Is there a problem doing it that way? If I just leave that going long-term, will that make a difference in longevity of the drive?
Thanks!
|
incrond can run a command when a file appears. It uses inotify underneath.
As has been pointed out in a comment, systemd can also monitor a directory and trigger actions, via a path unit along these lines:
[Path]
DirectoryNotEmpty=/path/to/monitored/directory
Unit=my.service
[Install]
WantedBy=multi-user.target
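Going the incrond route instead, a table entry (installed with incrontab -e) might look like the following sketch; the directory and script paths are placeholders:

```
# <watched-path> <events> <command>
# $@ expands to the watched directory, $# to the name of the
# file that triggered the event
/path/to/monitored/directory IN_CLOSE_WRITE,IN_MOVED_TO /path/to/script $@/$#
```

Using IN_CLOSE_WRITE (rather than IN_CREATE) avoids firing before the file has been fully written.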
| Cron loop best practices |
1,543,156,167,000 |
When you create a virtual machine with VirtualBox, you are greeted with two options while creating the virtual disk:
Dynamically allocated
Fixed size
The dynamically allocated option just creates a sparse file that grows as requirements expand.
But the fixed size option creates a file that consumes the whole allocated disk space.
Yet in my case, creating either a dynamically allocated or a fixed-size disk did not take a long time.
What kind of file does the fixed size option create? Does it get written to the disk and increase the total-bytes-written SMART count? Can I create such a file that consumes a huge space but takes no time to get created?
Edit:
I noticed that a newly created very small disk worth 194M has 5 lines of binary data.
I also created a 12G file, which also is mostly empty. But it takes no time to create the 12G file.
|
File data is stored in blocks on the hard drive. Information on which blocks make up a file is stored in metadata about the file. This is what we call a file system.
So to create a 1TB file, the OS only needs to allocate 1TB worth of blocks. This makes those blocks unavailable to any other file. But the OS does not need to write 1TB of data unless it is actually given 1TB of data to write. You can see this behaviour with the truncate command.
To extend a file this way, you can either do it at the command line with truncate(1) or programmatically with the truncate() system call.
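A minimal sketch of that behaviour (the file name is a placeholder):

```shell
# truncate extends the file's apparent size without writing any data
# blocks, so creation is instant and no space is consumed yet.
truncate -s 12G sparse.img
stat -c '%s bytes, %b blocks' sparse.img   # 12884901888 bytes, 0 blocks
```

The apparent size (12 GiB) and the allocated block count (zero) diverge until real data is written into the file.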
| What really is "A fixed size hard disk" in virtualbox? |
1,543,156,167,000 |
I had 0 bytes free, and even when I deleted something the number of free bytes stayed at zero. So I tried to restart, and now I can't log in. I tried a console (Ctrl+Alt+F1), but I don't know the login name. I typed the name that is normally displayed above the password prompt, but it is wrong. What should I do?
|
1. Make a LiveUSB on another machine.
2. Boot from that LiveUSB on your PC.
3. Access your boot drive, whether SSD or HDD (makes no difference).
4. Either delete files you don't need, or move them to another drive (such as a USB stick or an external SSD or HDD).
After this problem is resolved, you may wish to consider the use of tmpreaper or equivalent to delete temp files between reboots, and also a method to monitor file space available (I use conky on my desktop for this).
| Mint - disk is full (prevents login) |
1,543,156,167,000 |
The Linux OS is installed on sda, and the machine also has an additional disk, sdb:
sdb 8:16 0 20G 0 disk
We want to add two new partitions, sdb1 and sdb2, as follows:
sdb 8:16 0 20G 0 disk
├─sdb1 8:1 0 500M 0 part
└─sdb2 8:2 0 500M 0 part
How do we partition the disk so that it ends up with partitions sdb1 and sdb2?
|
Assuming GNU parted, and assuming the snippets you are showing are from the output of lsblk, here is a minimal set of actions you can perform to accomplish what you are asking for:
Running parted as root, select the device you want to act upon:
(parted) select /dev/sdb
Create a MBR partition table (no particular reason for choosing this type; in parted, type help mklabel for a list of available types):
(parted) mklabel msdos
Then, create the partitions:
(parted) unit MiB
(parted) mkpart primary 1 501
(parted) mkpart primary 501 1000
parted is instructed here to use the mebibyte as unit because that is the default for lsblk. This way you can outright type the same numbers posted in your question.
The starting point of the first partition is arbitrarily at 1 MiB because 1) it cannot be at 0 and 2) 1 MiB is a relatively lazy but safe choice for alignment. A discussion on partition alignment seems out of scope here.
You can then check your changes with:
(parted) print
| how to create parted disk |
1,540,671,306,000 |
My Linux root OS device is /dev/sda, and my external hard drive is /dev/sdb (an empty drive).
I don't want to lose data on /dev/sda, and I want to set up RAID level 0.
Is this possible?
|
With RAID 0, your data is split in half (evens and odds) between the drives. In other words, there are chunks 1, 3, 5, .... etc. on the first drive, and a second group of chunks 2, 4, 6, etc. are on the second drive.
If one of the drives dies, you've instantly lost 50% of the chunks. Imagine opening up your program and deleting every other line out of it. That's what happens when you lose a striped disk. Depending on the stripe size, you may be able to recover some data or even whole files out of the remaining disk (it's very possible that a 10KB file would be completely intact on one disk since your blocks should be larger than that). However, a file that's ten times your block size would have 5 blocks on each disk, meaning you just lost 50% of your file.
It is important to note that you can't set up RAID 0 without formatting (initializing) the drives as RAID 0. You would have to back up your data before creating the RAID.
| Is it possible to create raid-0 without losing data? |
1,540,671,306,000 |
I have just moved a 3TB disk from an external USB enclosure to inside a computer and I cannot see the only one ext4 partition which is supposed to be there. The disk has extremely important data that I cannot lose. Please advise how to proceed, here are some details:
$ sudo mount -vvv -t ext4 /dev/sdb1 /mnt/
mount: /mnt: /dev/sdb1 is not a valid block device.
$ sudo fdisk -l /dev/sdb
GPT PMBR size mismatch (732566645 != 5860533167) will be corrected by w(rite).
Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x00000000
Device Boot Start End Sectors Size Id Type
/dev/sdb1 1 732566645 732566645 349.3G ee GPT
Partition 1 does not start on physical sector boundary.
$ sudo parted /dev/sdb print
Error: /dev/sdb: unrecognised disk label
Model: ATA WDC WD30EZRX-00D (scsi)
Disk /dev/sdb: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: unknown
Disk Flags:
lshw output (excerpt):
*-scsi:1
physical id: 2
logical name: scsi1
capabilities: emulated
*-disk
description: ATA Disk
product: WDC WD30EZRX-00D
vendor: Western Digital
physical id: 0.0.0
bus info: scsi@1:0.0.0
logical name: /dev/sdb
version: 0A80
serial: WD-WCC1T1561951
size: 2794GiB (3TB)
capabilities: partitioned partitioned:dos
configuration: ansiversion=5 logicalsectorsize=512 sectorsize=4096
*-volume UNCLAIMED
description: EFI GPT partition
physical id: 1
bus info: scsi@1:0.0.0,1
capacity: 349GiB
capabilities: primary nofs
|
The comment answerers are not reading the output in your question. The output tells us this:
GPT PMBR size mismatch (732566645 != 5860533167) will be corrected by w(rite).
fdisk is telling you that you have an EFI partition table with a so-called "protective" old-style MBR partition record. But the protective partition record does not correctly protect the contents of your disc, because it ends way before the actual end of the disc, leaving a couple of TiB of free space unaccounted for. fdisk says that it will fix this for you. Do not attempt to use fdisk to do so. fdisk is wrong.
Disklabel type: dos
Disk identifier: 0x00000000
Device Boot Start End Sectors Size Id Type
/dev/sdb1 1 732566645 732566645 349.3G ee GPT
So fdisk has decided not to show you the EFI partition table at all. It is showing you the "protective" old-style MBR partition table instead, as if that were how you had partitioned your disc. That contains one entry, which is really (since it is type ee) a dummy entry that is supposed to encompass the entire disc, including the EFI partition table. But it is only 732566645 sectors long, which is roughly 349GiB, not 2.7TiB. This is one of several reasons why it is wrong to run fsck against this. It is not a disc volume containing a formatted filesystem. It is a dummy old-style partition that is supposed to span the entire disc.
Partition 1 does not start on physical sector boundary.
This is a red herring. Your dummy protective partition is supposed to begin at sector 1. Sector 1 is where the EFI partition table begins. It is the alignment of the real partitions, recorded in the new EFI partition table that fdisk isn't reading, that matters, and that for performance reasons. You should be able to mount misaligned volumes. But you haven't even got as far as using the right partition table, so whether this is even a problem in the first place is unknown. However, it is likely that it is not. Alignment is likely entirely a red herring here. Because what you are experiencing is well known, and is something else.
$ sudo parted /dev/sdb print
Error: /dev/sdb: unrecognised disk label
parted is failing to read your EFI partition table, too. Unlike fdisk, it isn't falling back to treating your disc as being partitioned in the old style, and reporting one big dummy partition. It is failing outright.
size: 2794GiB (3TB)
…
description: EFI GPT partition
physical id: 1
bus info: scsi@1:0.0.0,1
capacity: 349GiB
lshw is seeing a 3TB (2.7TiB) disc. It is also seeing the EFI partition table. But your EFI partition table claims that this is a 349GiB disc.
Why did 2.7TiB become 349GiB?
Well, notice what you get when you multiply 349GiB by 8.
When it is in your USB disc enclosure, the system thinks that your disc has 4KiB sectors, and everything has been accessing it using that as the sector size. In the USB enclosure, the rest of the system sees your disc with its native, true, sector size.
Moreover, with 4KiB sectors 732566645 sectors really does encompass the entire 2.7TiB of your disc, and both the old-style protective partition and the actual EFI partition table have the right numbers.
Outwith your USB disc enclosure, your disc is being read in "512e" compatibility mode, where most of the system pretends that your disc has 0.5KiB sectors. (There is a more complex explanation to do with a second inverse transformation undoing the first when the USB enclosure is involved, but I am glossing over that here, as it is beyond the scope of this answer.) The partition start and size numbers in your partition tables, and indeed anything else that points to a logical block address on your disc, are all wrong.
4KiB is 8 times 0.5KiB.
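You can check the arithmetic yourself; the partition-table numbers only make sense with 4096-byte sectors:

```shell
# 732566645 sectors at 512 B/sector is the bogus ~349 GiB figure;
# at 4096 B/sector it is the disc's real ~2.7 TiB capacity.
echo $(( 732566645 * 512 ))    # 375074122240
echo $(( 732566645 * 4096 ))   # 3000592977920
```

The second figure matches the 3,000,592,982,016-byte capacity reported by fdisk to within one sector.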
Downgrading from native 4KiB sector sizes to "512e" is possible, but it is not for the fainthearted. I recommend as the far simpler course of action that you put the disc back into the enclosure to read it, where it will be seen with its true 4KiB sector size by the rest of the system and the numbers will come out right.
Further reading
https://superuser.com/questions/719844/
https://superuser.com/questions/985305/
https://superuser.com/questions/1271871/
https://superuser.com/questions/852475/
Jonathan de Boyne Pollard (2011). The gen on disc partition alignment. Frequently Given Answers.
https://superuser.com/questions/339288/
https://superuser.com/questions/331446/
| Cannot mount partition - does not start on physical sector boundary? |
1,540,671,306,000 |
I found that if I detach a disk from my Linux server (CentOS 7), the related /dev/sd* file will disappear automatically about 10 seconds later.
I'm wondering how Linux knows that a disk has been detached. Is there something like a sweeper that keeps scanning all the devices?
And is it possible to make this quicker?
|
The delay is likely caused by udisk2 & udev.
Research
$ ps -eaf|grep [u]disk
root 17041 1 0 09:48 ? 00:00:00 /usr/libexec/udisks2/udisksd
Can query it for storage devices like so:
$ udisksctl status
MODEL REVISION SERIAL DEVICE
--------------------------------------------------------------------------
VBOX HARDDISK 1.0 VBc5aaf476-f419b1f1 sda
If you look at the udisk2 process:
$ lsof -p $(pidof udisksd) | tail
udisksd 17041 root 3u unix 0xffff88003a49d400 0t0 611852 socket
udisksd 17041 root 4u a_inode 0,9 0 4852 [eventfd]
udisksd 17041 root 5u a_inode 0,9 0 4852 [eventfd]
udisksd 17041 root 6u unix 0xffff88003a49c000 0t0 611853 socket
udisksd 17041 root 7u a_inode 0,9 0 4852 [eventfd]
udisksd 17041 root 8r REG 0,3 0 611907 /proc/17041/mountinfo
udisksd 17041 root 9r REG 0,3 0 4026532019 /proc/swaps
udisksd 17041 root 10r a_inode 0,9 0 4852 inotify
udisksd 17041 root 11u netlink 0t0 611910 KOBJECT_UEVENT
udisksd 17041 root 12u a_inode 0,9 0 4852 [eventfd]
Not a lot to go on there, the thing that catches my eye there is inotify. Whenever I see that, I immediately think udev.
Looking for udev rules
$ find /etc/udev/rules.d/ /usr/lib/udev/rules.d | grep sto
/usr/lib/udev/rules.d/90-alsa-restore.rules
/usr/lib/udev/rules.d/60-persistent-storage.rules
/usr/lib/udev/rules.d/60-persistent-storage-tape.rules
The 2nd file looks interesting; take a look inside. This line looks like the cause:
$ cat /usr/lib/udev/rules.d/60-persistent-storage.rules
...
# enable in-kernel media-presence polling
ACTION=="add", SUBSYSTEM=="module", KERNEL=="block", ATTR{parameters/events_dfl_poll_msecs}=="0", ATTR{parameters/events_dfl_poll_msecs}="2000"
This rule sets the block layer's events_dfl_poll_msecs parameter to 2000, i.e. the kernel polls for media/device changes every 2 seconds, which fits the delay you are seeing. Writing a smaller value to /sys/module/block/parameters/events_dfl_poll_msecs should make detection correspondingly faster, at the cost of more frequent polling.
References
https://wiki.archlinux.org/index.php/udev#Waking_from_suspend_with_USB_device
| How soon will linux notice that a disk has been detached ? And can it be quicker? |
1,540,671,306,000 |
We have these disk devices (on a Red Hat machine, version 7.2):
sdb
sdc
sdd
sde
we want to change the names to
vxb
vxc
vxd
vxe
Is this possible?
Expected results: in the lsblk output we need to see
vxb
vxc
vxd
vxe
|
I do not expect you will be able to do that without changing the Linux kernel, as the /dev/sd* names are assigned by the kernel itself. And lsblk takes its data from the kernel (sysfs, to be precise) and from udev as the device-event daemon.
You can, however, create symlinks named /dev/vx*. For that I suggest symlinking based on UUID or label, by referencing /dev/disk/by-label/<label> or /dev/disk/by-uuid/<UUID>, as the /dev/sd* names may change between boots. Please note that this will not change the lsblk output.
Either way, it might help if you explain why you are trying to rename the disks, as this is quite an unusual request.
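To illustrate the symlink approach, here is a hedged sketch of a udev rule; the rule file name and serial number are placeholders (find the real serial with udevadm info --query=property /dev/sdb):

```
# /etc/udev/rules.d/99-vx-names.rules (hypothetical)
# Adds /dev/vxb as an extra symlink for the disk with this serial;
# the kernel name sdb, and the lsblk output, are unaffected.
KERNEL=="sd?", ENV{ID_SERIAL}=="WDC_WD_EXAMPLE_SERIAL", SYMLINK+="vxb"
```

After udevadm control --reload followed by udevadm trigger, the /dev/vxb symlink should appear.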
| linux + how to change disks name to other names |
1,540,671,306,000 |
This is almost a little embarrassing to ask, but I just can not figure this one out myself really.
I'm trying to find out where the disks of my machine are used and if they are actually used at all.
If I check my available disks in my debian, I can find 4 of them. This seems correct. There should be a 50GB one, a 120GB one and two 1.5TB disks. Here by uuid and by path.
root@HK-MSA-DEB6-32-SHOP2:/dev/disk/by-uuid# ls -la
total 0
drwxr-xr-x 2 root root 140 Oct 22 20:19 .
drwxr-xr-x 5 root root 100 Oct 22 20:19 ..
lrwxrwxrwx 1 root root 10 Oct 22 20:19 18617f6c-d460-43ed-ac61-7f67c99fb710 -> ../../dm-2
lrwxrwxrwx 1 root root 10 Oct 22 20:19 27aca453-55a3-42c9-b3d5-131b1e42b8c8 -> ../../dm-1
lrwxrwxrwx 1 root root 10 Oct 22 20:19 90bef7ec-3cff-4bbb-980b-6552f67fe0c5 -> ../../dm-0
lrwxrwxrwx 1 root root 10 Oct 22 20:19 90f63e9b-86ab-455c-b555-b8e28ecec13b -> ../../dm-3
lrwxrwxrwx 1 root root 10 Oct 22 20:19 f2d73f18-ebff-4a34-874b-1766c6ee7e20 -> ../../sda1
root@HK-MSA-DEB6-32-SHOP2:/dev/disk/by-path# ls -la
total 0
drwxr-xr-x 2 root root 260 Oct 22 20:19 .
drwxr-xr-x 5 root root 100 Oct 22 20:19 ..
lrwxrwxrwx 1 root root 9 Oct 22 20:19 pci-0000:00:07.1-scsi-1:0:0:0 -> ../../sr0
lrwxrwxrwx 1 root root 9 Oct 22 20:19 pci-0000:00:10.0-scsi-0:0:0:0 -> ../../sda
lrwxrwxrwx 1 root root 10 Oct 22 20:19 pci-0000:00:10.0-scsi-0:0:0:0-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Oct 22 20:19 pci-0000:00:10.0-scsi-0:0:0:0-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Oct 22 20:19 pci-0000:00:10.0-scsi-0:0:0:0-part5 -> ../../sda5
lrwxrwxrwx 1 root root 9 Oct 22 20:19 pci-0000:00:10.0-scsi-0:0:1:0 -> ../../sdb
lrwxrwxrwx 1 root root 10 Oct 22 20:19 pci-0000:00:10.0-scsi-0:0:1:0-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 9 Oct 22 20:19 pci-0000:00:10.0-scsi-0:0:2:0 -> ../../sdc
lrwxrwxrwx 1 root root 10 Oct 22 20:19 pci-0000:00:10.0-scsi-0:0:2:0-part1 -> ../../sdc1
lrwxrwxrwx 1 root root 9 Oct 22 20:19 pci-0000:00:10.0-scsi-0:0:3:0 -> ../../sdd
lrwxrwxrwx 1 root root 10 Oct 22 20:19 pci-0000:00:10.0-scsi-0:0:3:0-part1 -> ../../sdd1
But when I check my mount, I can not find all of those disks.
root@HK-MSA-DEB6-32-SHOP2:/dev/disk/by-id# mount
/dev/mapper/HK--MSA--DEB6--32--SHOP1-root on / type ext3 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/sda1 on /boot type ext2 (rw)
/dev/mapper/MSAdocuments-public on /var/MSAdocuments/public type ext4 (rw)
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
192.168.10.2:/var/local/share/webshop-backup on /mnt/msa-server2-backup type nfs (rw,sync,hard,intr,vers=4,addr=192.168.10.2,clientaddr=192.168.10.21)
nfsd on /proc/fs/nfsd type nfsd (rw)
If I check the disk usage, I can not identify any of the big 1.5TB disks
root@HK-MSA-DEB6-32-SHOP2:/dev/disk/by-id# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/HK--MSA--DEB6--32--SHOP1-root
48G 19G 27G 42% /
tmpfs 1.5G 0 1.5G 0% /lib/init/rw
udev 1.5G 124K 1.5G 1% /dev
tmpfs 1.5G 0 1.5G 0% /dev/shm
/dev/sda1 228M 28M 189M 13% /boot
/dev/mapper/MSAdocuments-public
119G 68G 45G 61% /var/MSAdocuments/public
192.168.10.2:/var/local/share/webshop-backup
926G 348G 579G 38% /mnt/msa-server2-backup
But I'm actually already lost here, I think I'm looking a the wrong thing. Can anyone point me at what I have to look for if I simply want to know how and where my hard drives are used in the debian OS?
It's just that if the two 1.5TB are actually not used at all, I could take them out and use it for something else...
Thank you very much in advance for helping.
edit:
root@HK-MSA-DEB6-32-SHOP2:/dev/disk/by-id# cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc defaults 0 0
/dev/mapper/HK--MSA--DEB6--32--SHOP1-root / ext3 errors=remount-ro 0 1
# /boot was on /dev/xvda1 during installation
UUID=f2d73f18-ebff-4a34-874b-1766c6ee7e20 /boot ext2 defaults 0 2
/dev/mapper/HK--MSA--DEB6--32--SHOP1-swap_1 none swap sw 0 0 /dev/xvdd
#/media/cdrom0 udf,iso9660 user,noauto 0 0
UUID=90f63e9b-86ab-455c-b555-b8e28ecec13b /var/MSAdocuments/public ext4 defaults 0 2
#UUID=4975df26-0ee2-4ef9-838d-414d77e0f697 /var/MSAdocuments/private xfs defaults 0 2
192.168.10.2:/var/local/share/webshop-backup /mnt/msa-server2-backup nfs rw,sync,hard,intr 0 0
root@HK-MSA-DEB6-32-SHOP2:/dev/disk/by-id# fdisk -l
Disk /dev/sda: 53.7 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0009ae9e
Device Boot Start End Blocks Id System
/dev/sda1 * 1 32 248832 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 32 6527 52176897 5 Extended
/dev/sda5 32 6527 52176896 8e Linux LVM
Disk /dev/sdb: 128.8 GB, 128849018880 bytes
255 heads, 63 sectors/track, 15665 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sdb1 1 15665 125829081 8e Linux LVM
Disk /dev/sdc: 1649.3 GB, 1649267441664 bytes
255 heads, 63 sectors/track, 200512 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0bf85aa1
Device Boot Start End Blocks Id System
/dev/sdc1 1 200512 1610612608+ 8e Linux LVM
Disk /dev/sdd: 1649.3 GB, 1649267441664 bytes
255 heads, 63 sectors/track, 200512 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xd19c2d7c
Device Boot Start End Blocks Id System
/dev/sdd1 1 200512 1610612608+ 8e Linux LVM
Disk /dev/dm-0: 51.4 GB, 51355058176 bytes
255 heads, 63 sectors/track, 6243 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/dm-0 doesn't contain a valid partition table
Disk /dev/dm-1: 2071 MB, 2071986176 bytes
255 heads, 63 sectors/track, 251 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/dm-1 doesn't contain a valid partition table
Disk /dev/dm-2: 3298.5 GB, 3298526494720 bytes
255 heads, 63 sectors/track, 401023 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/dm-2 doesn't contain a valid partition table
Disk /dev/dm-3: 128.8 GB, 128844824576 bytes
255 heads, 63 sectors/track, 15664 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/dm-3 doesn't contain a valid partition table
edit2:
root@HK-MSA-DEB6-32-SHOP2:/# pvscan
PV /dev/sdc1 VG temp_vslide lvm2 [1.50 TiB / 0 free]
PV /dev/sdd1 VG temp_vslide lvm2 [1.50 TiB / 0 free]
PV /dev/sdb1 VG MSAdocuments lvm2 [120.00 GiB / 0 free]
PV /dev/sda5 VG HK-MSA-DEB6-32-SHOP1 lvm2 [49.76 GiB / 0 free]
Total: 4 [3.17 TiB] / in use: 4 [3.17 TiB] / in no VG: 0 [0 ]
alright, now I'm totally confused... there are two volume groups called "temp_vslide" but they are not mounted anywhere?
edit in my continuous hunt for the usage of those two drives
root@HK-MSA-DEB6-32-SHOP2:/var/local/share/msa-server2-backup/mysql# pvscan
PV /dev/sdc1 VG temp_vslide lvm2 [1.50 TiB / 0 free]
PV /dev/sdd1 VG temp_vslide lvm2 [1.50 TiB / 0 free]
PV /dev/sdb1 VG MSAdocuments lvm2 [120.00 GiB / 0 free]
PV /dev/sda5 VG HK-MSA-DEB6-32-SHOP1 lvm2 [49.76 GiB / 0 free]
Total: 4 [3.17 TiB] / in use: 4 [3.17 TiB] / in no VG: 0 [0 ]
root@HK-MSA-DEB6-32-SHOP2:/var/local/share/msa-server2-backup/mysql# lvdisplay
--- Logical volume ---
LV Name /dev/temp_vslide/tsy
VG Name temp_vslide
LV UUID baAN28-RiLO-PfPj-mS6c-4Jef-zwnx-tS9vEA
LV Write Access read/write
LV Status available
# open 0
LV Size 3.00 TiB
Current LE 786430
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 254:2
root@HK-MSA-DEB6-32-SHOP2:/mnt/temp_vslide# dmsetup info
Name: temp_vslide-tsy
State: ACTIVE
Read Ahead: 256
Tables present: LIVE
Open count: 0
Event number: 0
Major, minor: 254, 2
Number of targets: 2
UUID: LVM-yWxHUf3aTDZJ3K0ya4PRl4oDujG0si4jbaAN28RiLOPfPjmS6c4JefzwnxtS9vEA
root@HK-MSA-DEB6-32-SHOP2:~# ls -la /mnt/temp_vslide/
total 8
drwxr-xr-x 2 root root 4096 Nov 14 2013 .
drwxr-xr-x 6 root root 4096 Nov 14 2013 ..
root@HK-MSA-DEB6-32-SHOP2:~# ls -la /dev/temp_vslide/tsy
lrwxrwxrwx 1 root root 7 Oct 22 20:19 /dev/temp_vslide/tsy -> ../dm-2
|
You could also try:
sudo pvscan
This will show you if any of the disks are in use by the logical volume manager. You can also use fdisk to determine which device corresponds to each physical drive:
sudo fdisk -l /dev/sda
sudo fdisk -l /dev/sdb
sudo fdisk -l /dev/sdc
sudo fdisk -l /dev/sdd
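On newer systems, lsblk (from util-linux; it may not be available on Debian 6 squeeze) shows the whole mapping in one tree:

```shell
# Each LVM logical volume appears nested under the physical
# partition(s) backing it, with filesystem and mountpoint alongside.
lsblk -o NAME,TYPE,SIZE,FSTYPE,MOUNTPOINT
```

Any disk whose partitions have no children and no mountpoints in that tree is a candidate for removal.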
| Where are all the disks used? |
1,540,671,306,000 |
I have a number of NTFS disks with unused capacity. They are leftover external hard drives with a Windows history. I have a very limited number of USB ports on my machine, and want to find a cheap way to combine these disks as one logical disk, as it appears to the OS. Are there any methods in Linux to implement this?
Is this like RAID or span arrays?
|
You'd have to find some kind of enclosure for several disks (unlikely) or a USB hub (preferably one with external power) to connect all your disks in enclosures (again, preferably with independent power). The machine/operating system will see them all as separate disks; perhaps you could build a RAID over them. But USB is much slower than directly connected disks, more so if several share the same path to the machine, so performance will most probably suffer. And due to vagaries in the order in which the machine sees the disks coming online, they might get shuffled around on each boot.
| Redundant array of NTFS disks |
1,540,671,306,000 |
I have an external drive with an ext4 partition /dev/sda1 I use for my local borg backups.
It is simply plugged in via usb port, and, mounted with an fstab generated systemd automount entry. I ran a backup yesterday in the evening without any errors, and this morning, I plugged it in, it was not recognized anymore. The drive would show up with lsblk, but no partition under it.
I ran sudo fsck -R -C -V -t ext4 /dev/sda1 and got the following output:
fsck from util-linux 2.39.2
[/usr/bin/fsck.ext4 (1) -- /dev/sda1] fsck.ext4 -C0 /dev/sda1
e2fsck 1.47.0 (5-Feb-2023)
fsck.ext4: Attempt to read block from filesystem resulted in short read while trying to open /dev/sda1
Could this be a zero-length partition?
/dev/sda1: status 8, rss 3232, real 0.002321, user 0.001784, sys 0.000000
I have no idea how to interpret that. I only can see the exit code status 8 the man page describes as an 'operational error'.
BEGIN EDIT
Output for sudo parted /dev/sda print
Error: Invalid partition table on /dev/sda -- wrong signature 0.
Ignore/Cancel? I
Model: SABRENT (scsi)
Disk /dev/sda: 1000GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 8225kB 1000GB 1000GB extended lba
Output of sudo dmesg right after plugging the drive in
[16265.871467] usb 2-6.4: new SuperSpeed USB device number 15 using xhci_hcd
[16265.889474] usb 2-6.4: New USB device found, idVendor=152d, idProduct=1561, bcdDevice= 2.04
[16265.889486] usb 2-6.4: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[16265.889491] usb 2-6.4: Product: SABRENT
[16265.889495] usb 2-6.4: Manufacturer: SABRENT
[16265.889499] usb 2-6.4: SerialNumber: DB9876543214E
[16265.899660] scsi host4: uas
[16265.900160] scsi 4:0:0:0: Direct-Access SABRENT 0204 PQ: 0 ANSI: 6
[16268.706521] sd 4:0:0:0: [sda] 1953525168 512-byte logical blocks: (1.00 TB/932 GiB)
[16268.706530] sd 4:0:0:0: [sda] 4096-byte physical blocks
[16268.706759] sd 4:0:0:0: [sda] Write Protect is off
[16268.706768] sd 4:0:0:0: [sda] Mode Sense: 53 00 00 08
[16268.707113] sd 4:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[16268.707265] sd 4:0:0:0: [sda] Preferred minimum I/O size 4096 bytes
[16268.707270] sd 4:0:0:0: [sda] Optimal transfer size 33553920 bytes not a multiple of preferred minimum block size (4096 bytes)
[16268.724287] sda: sda1 < >
[16268.724396] sd 4:0:0:0: [sda] Attached SCSI disk
[16296.811964] usb 2-6.3: reset SuperSpeed USB device number 14 using xhci_hcd
[16340.865861] sda: sda1 < >
Following telecoM advice, I ran sudo losetup --sector-size 4096 -P -f /dev/sdx. I have a loop1p1 device/partition now.
❯ sudo parted /dev/loop1p1 print
Error: /dev/loop1p1: unrecognised disk label
Model: Unknown (unknown)
Disk /dev/loop1p1: 4096B
Sector size (logical/physical): 4096B/4096B
Partition Table: unknown
Disk Flags:
❯ sudo fsck.ext4 -f /dev/loop1p1
e2fsck 1.47.0 (5-Feb-2023)
ext2fs_open2: Bad magic number in super-block
fsck.ext4: Superblock invalid, trying backup blocks...
fsck.ext4: Bad magic number in super-block while trying to open /dev/loop1p1
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
or
e2fsck -b 32768 <device>
END EDIT
Should I give up on trying to recover this partition? (I have redundant backups, so it is not a catastrophe for me; I just want to learn in anticipation of the day it might be one.)
Thanks in advance for your help.
|
My first assumption would be a faulty case, particularly as you've got pluggable USB involved. Check that it's plugged in properly (both ends of the cable) and there's sufficient power.
I would also check the partition table, which you added to your question. Unfortunately it too shows a faulty read from the device, which is why I suspect the hardware.
Sadly, there are lots of "it doesn't work" posts with the particular vendor id (0x152d) and the product id (0x1561) that you showed. As an example, I searched Google for "linux sabrent 152d 1561 usb". You might do better with a different case (I've found no issues with my RSHTECH 3.5in SATA case, but as they're all built to low budgets, my experience suggests it's often a matter of pot luck).
| Recovering an ext4 partition |
1,540,671,306,000 |
I installed Linux Mint in a dual boot alongside Windows 11. Now I am trying to get rid of Windows 11 completely, but I haven't been able to combine the two partitions that I used for personal data storage (one of them is empty, so there is no need to keep its files).
This is what my partitions look like on lsblk:
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
nvme0n1     259:0    0 238,5G  0 disk
├─nvme0n1p1 259:1    0   260M  0 part /boot/efi
├─nvme0n1p5 259:2    0  27,9G  0 part /
├─nvme0n1p6 259:3    0   5,7G  0 part
├─nvme0n1p7 259:4    0    65G  0 part /media/user/262f
└─nvme0n1p2 259:5    0 139,6G  0 part
I would like to combine 'nvme0n1p2' and 'nvme0n1p7'. I don't know if that is possible; I haven't been able to do it in GParted.
|
You can merge two partitions only if they are adjacent. If they are not adjacent, you can always use LVM to initialize each partition as a Physical Volume, add the two Physical Volumes to a Volume Group, and create a Logical Volume from the Volume Group; then format the Logical Volume as usual.
The commands would be more or less this:
pvcreate /dev/nvme0n1p7
pvcreate /dev/nvme0n1p2
vgcreate myvg /dev/nvme0n1p7
vgextend myvg /dev/nvme0n1p2
lvcreate -L 204G -n mylv myvg
mkfs -t xfs /dev/myvg/mylv
Note that this will wipe the content of both partitions, so if they contain any data you want to keep, backup the data onto another device before proceeding.
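For sizing the -L value: the 204G in the lvcreate example above roughly matches the combined size of the two partitions from the question's lsblk output (65G + 139,6G), leaving a little headroom for LVM metadata. A quick sanity check (plain arithmetic, sizes taken from the question):

```shell
# Combined capacity of the two partitions being merged (sizes from lsblk above).
awk 'BEGIN { p7 = 65.0; p2 = 139.6; printf "%.1f GiB\n", p7 + p2 }'
# → 204.6 GiB, so -L 204G leaves a small margin for LVM metadata
```

If you'd rather not compute a size at all, lvcreate -l 100%FREE -n mylv myvg allocates whatever the volume group actually has.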
| How to combine two partitions? |
1,540,671,306,000 |
I was cleaning up my system and obviously got a little carried away: I ended up running sudo rm -rf /dev/sda1. lsblk and df -h still show /dev/sda1 mounted on /, but it no longer exists as a special device under /dev/.
I can't reboot this server.
# lsblk
sda 8:0 0 50G 0 disk
├─sda1 8:1 0 49.9G 0 part /
How to fix this?
|
First of all: don't panic.
You did not, in fact, erase your entire drive. All your data is still there, as evidenced by the fact that your system is still working.
All you did was delete the device file that Linux, like most Unices, uses to identify and address that partition directly; by and large, the only times it needs to do that are when mounting or modifying the partition.
If your system is even remotely modern, chances are that it uses udev or something similar to automatically populate /dev/ at boot time and things will continue to work fine, but just for your own peace of mind I'd recommend taking Jaromanda's advice and recreating the node by running sudo mknod /dev/sda1 b 8 1
Then make sure its permissions are set properly with the following commands:
sudo chown root.disk /dev/sda1
sudo chmod 660 /dev/sda1
EDIT A bit more clarification, as requested:
The mknod command does exactly what it says on the tin - it makes a device node. In this particular case, you tell it to create a block (b) device with major number 8 and minor number 1, which happens to translate to "the first partition of the first SCSI device presenting as a disk".
(For more explanation on device nodes, this tutorial is informative but a bit outside the scope of this question)
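The 8 1 arguments are exactly the MAJ:MIN pair that lsblk prints, so if you ever need to recreate a different node you can read the numbers straight from lsblk's output. A small sketch deriving the mknod invocation from the sample line in the question:

```shell
# Turn a lsblk line into the matching mknod command.
# Sample line from the question: "sda1  8:1  0  49.9G  0  part /"
line='sda1 8:1 0 49.9G 0 part /'
name=$(printf '%s\n' "$line" | awk '{print $1}')
maj=$(printf '%s\n' "$line"  | awk '{split($2, a, ":"); print a[1]}')
min=$(printf '%s\n' "$line"  | awk '{split($2, a, ":"); print a[2]}')
echo "mknod /dev/$name b $maj $min"   # → mknod /dev/sda1 b 8 1
```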
| Accidentally deleted the device node /dev/sda1 |
1,540,671,306,000 |
I have a Hyper-V RHEL VM where root has run out of space. In Hyper-V, I reallocated 1 TB of disk space to the VM (it was about 120 GB before).
In RHEL, I have this:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rhel_rhel-root 51G 38G 14G 74% /
I did lvextend -r -l +100%FREE /dev/mapper/rhel_rhel-root
which returned
Size of logical volume rhel_rhel/root unchanged from <50.25 GiB (12863 extents).
Logical volume rhel_rhel/root successfully resized.
I've never run out of space before so I don't really know what I'm doing. Why did lvextend not work?
|
After more research and experimentation, I found the solution here: https://networklessons.com/uncategorized/extend-lvm-partition
There was one difference for my server. Since I have XFS instead of Ext4, for the last command I replaced resize2fs with xfs_growfs /root.
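For context: lvextend reported "unchanged" in the question because enlarging the virtual disk in Hyper-V does not enlarge the partition or the LVM physical volume sitting on it; those layers must be grown first. A hypothetical dry-run sketch of the usual order, printing the commands for review instead of executing them (the device and LV names are assumed from the question; growpart comes from the cloud-utils package, and deleting/recreating the partition with fdisk works too):

```shell
# Dry run: print the grow sequence for review rather than executing it as root.
# Assumed layout: /dev/sda2 is the PV under /dev/mapper/rhel_rhel-root (XFS on /).
steps=$(cat <<'EOF'
growpart /dev/sda 2                               # grow partition 2 to fill the disk
pvresize /dev/sda2                                # let LVM see the bigger partition
lvextend -l +100%FREE /dev/mapper/rhel_rhel-root  # grow the logical volume
xfs_growfs /                                      # grow XFS via its mount point
EOF
)
printf '%s\n' "$steps"
```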
| Trying to extend partition |
1,540,671,306,000 |
I have a pool that had a drive fail, and ZFS is being stupid about it. I added a disk, which ended up at /dev/sdl; I added it by disk ID, and because of that, after the other drive failed during or just before a reboot, I get the following line:
5642991870772164099 UNAVAIL 0 0 0 was /dev/sdl1
Any idea how to find the serial number of the drive behind 5642991870772164099?
|
On a Linux system, if the drive is still nominally functional, then lsblk likely will work:
$ lsblk -do name,model,serial /dev/sdl
NAME MODEL SERIAL
sdl ST6000NM0125-1YY ZADAEV8S
FreeBSD users would use diskinfo:
$ diskinfo -s da11
WD-WMC1S5694795
OTOH, if you're unsure whether that UUID still associates with /dev/sdl, you could search the /dev/disk/ tree and grep for the UUID you're looking for:
$ find /dev/disk/ -ls | grep 5642991870772164099
484 0 lrwxrwxrwx 1 root root 10 Jun 8 22:24 /dev/disk/by-uuid/5642991870772164099 -> ../../sdl
| locate specific drive after failure in zfs pool |
1,540,671,306,000 |
Currently I believe I'm experiencing drive (HDD) failure. It is a single partition drive for extra storage. When I attempt to mount it I get the following error:
# mount /dev/sdc1 /mnt
mount: wrong fs type, bad option, bad superblock on /dev/sdc1,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so.
Checking dmesg as suggested:
# dmesg | tail
[12641.405658] blk_update_request: critical medium error, dev sdc, sector 2064
[12641.410139] Buffer I/O error on dev sdc1, logical block 2, async page read
[12641.415774] EXT4-fs (sdc1): couldn't mount as ext3 due to feature incompatibilities
[12641.420578] EXT4-fs (sdc1): couldn't mount as ext2 due to feature incompatibilities
[12644.186523] sd 5:0:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[12644.186543] sd 5:0:0:0: [sdc] tag#0 Sense Key : Medium Error [current]
[12644.186556] sd 5:0:0:0: [sdc] tag#0 Add. Sense: Unrecovered read error
[12644.186570] sd 5:0:0:0: [sdc] tag#0 CDB: Read(10) 28 00 00 00 08 10 00 00 08 00
[12644.186580] blk_update_request: critical medium error, dev sdc, sector 2064
[12644.191255] EXT4-fs (sdc1): can't read group descriptor 1
As I said, I'm assuming the drive is going night night, so I'd like to at least save whatever information I have on there (which is, by the way, a significant amount). I tried grabbing the first 500GB of the drive, just to see if it would work:
# ddrescue -d -s 500G /dev/sdc data.img data.log
Unfortunately I was running that over ssh and my connection broke or something, so I ended up with a ~150GB img file which, when I try to mount it, gives the same error I got when trying to mount the drive itself (duh):
# mount data.img /mnt -o loop
mount: wrong fs type, bad option, bad superblock on /dev/loop0,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so.
How can I grab the information that is supposedly saved?
|
For anyone who finds this question, I managed to resolve the issue without resorting to photorec or other salvaging tools. I made a full disk image, just in case, using ddrescue but ended up not needing it.
Running badblocks and looking at SMART data I determined that there is 1 damaged sector which is somewhere in the beginning of the drive. Apparently it was where the superblock was stored and that's why the partition was not recognized and could not be mounted.
I tried running e2fsck -cfpv /dev/sdc1 and got
e2fsck: Attempt to read block from filesystem resulted in short read while trying to open /dev/sdc1
Could this be a zero-length partition?
Me being the noob that I am, I have no idea what's going on, but apparently overwriting that sector with zeroes and re-running e2fsck does some kind of magic and fixed the partition, after which I was able to mount it and copy all my files before tossing that HDD out the window. Here are the commands I issued (yes, I stopped e2fsck as soon as I noticed the partition was recognized and could be mounted):
# dd if=/dev/zero of=/dev/sdc1 bs=4096 count=1 seek=0
1+0 records in
1+0 records out
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000367146 s, 11.2 MB/s
# e2fsck -fy -b 32768 /dev/sdc1
e2fsck 1.46.2 (28-Feb-2021)
Superblock needs_recovery flag is clear, but journal has data.
Recovery flag not set in backup superblock, so running journal anyway.
/dev/sdc1: recovering journal
Pass 1: Checking inodes, blocks, and sizes
^C/dev/sdc1: e2fsck canceled.
/dev/sdc1: ***** FILE SYSTEM WAS MODIFIED *****
Full explanation and all credit go to this guy, to whom I wish the universe sends a good amount of health and wealth and fulfils his deepest desires!
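The -b 32768 value above is not magic: ext2/3/4 stores backup superblocks at the start of certain block groups, and the locations depend only on the filesystem's block size. A sketch computing the usual candidates (the same numbers mke2fs -n would report, without touching the device; this assumes the common sparse_super layout):

```shell
# Backup superblocks live in block groups 1, 3^n, 5^n and 7^n (sparse_super).
backup_superblocks() {   # $1 = block size in bytes
    bs=$1
    bpg=$((bs * 8))                         # blocks per group: one bitmap block
    first=0; [ "$bs" -eq 1024 ] && first=1  # 1K filesystems start at block 1
    for g in 1 3 5 7 9 25 27 49; do         # groups 1, 3^n, 5^n, 7^n (n small)
        printf '%s ' $((first + g * bpg))
    done
    echo
}
backup_superblocks 1024   # → 8193 24577 40961 ...  (hence "e2fsck -b 8193")
backup_superblocks 4096   # → 32768 98304 163840 ... (hence "-b 32768")
```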
| How to browse an img from a broken drive (restore lost data) |
1,540,671,306,000 |
I have created a CentOS 7 VM in Oracle VM VirtualBox with a disk size of 10 GB.
When I run fdisk -l /dev/sda, it reports that the disk size is 10.7 GB. Can someone explain why fdisk shows more disk space than was actually allocated?
|
If you look at the fdisk output, you can see that the disk is reported as being exactly 10*1024*1024*1024 bytes. That suggests that whatever created that disk actually made it 10 GiB. Your screenshot shows that VirtualBox calls it GB, which I just take as (yet another) indication that Oracle sucks!
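In other words, it is the usual GB-versus-GiB confusion: 10 GiB (binary) expressed in decimal gigabytes is where fdisk's 10.7 comes from:

```shell
# 10 GiB (what was allocated) expressed in decimal GB (what fdisk reports).
bytes=$((10 * 1024 * 1024 * 1024))
echo "$bytes"                                               # 10737418240 bytes
awk -v b="$bytes" 'BEGIN { printf "%.1f GB\n", b / 1e9 }'   # → 10.7 GB
```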
| Discrepancy of disk size in fdisk command output |
1,540,671,306,000 |
I am trying to install Ubuntu 20.04 on a NVMe disk.
The installation wizard shows below disk info:
I don't quite understand it. My questions are:
Why are /dev/mapper/vgubuntu-root and /dev/mapper/vgubuntu-swap_1 each listed twice?
The /dev/mapper entries are for LVM, which is the logical view. Why is /dev/nvme0n1, the physical view of the same disk, also listed?
Why can I do nothing when right-clicking /dev/nvme0n1p2, while I can change/delete /dev/nvme0n1p1?
I see that the size of /dev/nvme0n1p2 minus the two free spaces equals /dev/mapper/vgubuntu-swap_1 + /dev/mapper/vgubuntu-root. Is this some coincidence?
ADD 1 - 9:25 PM 9/18/2021
Some more info.
I did click Advanced features and chose Use LVM once, as below. But I didn't click the Install Now button because I am not so sure about LVM. I am not sure if the /dev/mapper paths are caused by this; if they are, is there a way to revert the effect?
ADD 2 - 12:11 PM 9/18/2021
I dug a bit into Linux LVM.
As I understand now, below diagram just gives me 2 different views of my block device.
In the yellow box, it is the LVM logical volume view.
In the red box, it is the traditional PC partition view.
LVM is the Linux native way of disk partitioning, which has some advantages and complexities.
Current questions:
1 - Where is the info for these 2 different views stored?
2 - If both of them are stored on the disk, isn't that kind of redundant?
|
I think most of your questions can be answered with: "Displaying LVM devices is not easy, and the Ubuntu installer isn't doing a great job here".
tl;dr description of LVM: LVM adds a second "logical" layer of storage that allows you to do things like joining multiple disks into one device, setting up RAIDs, cached devices, etc. You have three types of devices in LVM:
Physical volumes (PVs): these are existing block devices, like partitions or disks.
Volume groups (VGs): one or more PVs form a volume group. This is the part where two disks can be joined into one device: a new VG can consist of two PVs on sda1 and sdb1.
Logical volumes (LVs): these are block devices allocated from a VG, you can picture these as partitions but created on a VG instead of a disk.
lsblk does a better job when displaying this structure and you see the devices are actually stacked on top of each other:
└─nvme0n1p3 259:3 0 892,7G 0 part
└─luks-dfcda59b-1322-4705-bb04-e09a72b2d678 253:0 0 892,7G 0 crypt
├─fedora_aida-root 253:1 0 70G 0 lvm /
└─fedora_aida-home 253:2 0 822,7G 0 lvm /home
(This is my setup with one encrypted PV, VG called fedora and two LVs root and home.)
In your case you have one PV on the second partition of your NVMe drive: /dev/nvme0n1p2. This PV is used by your VG called vgubuntu. And you have two logical volumes: root mounted on / and swap used as swap. (You also have a second swap on your first partition /dev/nvme0n1p1; I have no idea why.)
To answer your questions:
Why are /dev/mapper/vgubuntu-root and /dev/mapper/vgubuntu-swap_1 each listed twice?
No idea. It's either a bug or weird UI decision.
The /dev/mapper entries are for LVM, which is the logical view. Why is /dev/nvme0n1, the physical view of the same disk, also listed?
This is decision of whoever did the UI design of the installer. You can hide the PVs and show only the LVs or show both. In this case showing the PVs makes it a little bit confusing IMHO. But as I said, visualization of complex storage setups is not easy.
Why can I do nothing when right-clicking /dev/nvme0n1p2, while I can change/delete /dev/nvme0n1p1?
/dev/nvme0n1p1 is a partition that is not part of the LVM setup so the installer allows you to change it. /dev/nvme0n1p2 is a PV and it has already the full LVM setup stacked on top of it so it makes sense the installer won't allow to delete it.
I see that the size of /dev/nvme0n1p2 minus the two free spaces equals /dev/mapper/vgubuntu-swap_1 + /dev/mapper/vgubuntu-root. Is this some coincidence?
No, that's correct, and this is how LVM works -- the LVs are allocated on the PV, so the sum of all LVs (plus free space plus some LVM metadata) will be equal to the VG size, which will be the sum of the sizes of the PVs (so in your case, of /dev/nvme0n1p2).
| Question about Ubuntu 20.04 disk partition during installation |
1,540,671,306,000 |
I encountered an issue with a disk drive which I already posted on. (For the curious: Issue with device after formatting)
In short, one of my disks stopped letting me copy files by drag and drop using Dolphin (Debian), and only allowed it when done from the terminal using sudo.
I researched about my issue and noticed something:
This has already happened to me with another disk drive.
That disk drive and this one were erased with dd if=/dev/zero of=/dev/sdX where sdX is the drive in question
It did not happen with other disk drives which were not erased with dd but only formatted (with mkfs) and/or partitioned (e.g. gpt partition created with multiple primary partitions).
In that disk and this one, the owner was changed to root, and no longer user.
So my questions are:
Why did this happen with fully erased disks and not with formatted or partitioned disks?
How do permissions work exactly? Are they written into the disks? Or is ownership written into the disk?
Is it possible to change the owner of the disk so that the change is persistent across Linux distributions?
Edit: I tried to format the disk with exFAT. Drag and drop with Dolphin works and the ownership is changed to user. I tried to format the disk with ext4. Drag and drop does not work anymore. The ownership was changed to root. I tried to change the ownership of the disk drive to the current user. The command line exited without issue (terminal: sudo chown ...: /dev/sdX -R -w). However, when using it with Dolphin, drag and drop does not work. Dolphin still lists ownership as root. If manually mounted from the terminal, the directory created for mounting will only show ownership as root (even though the directory was created without requiring sudo). If automatically mounted from Dolphin, it will also only show ownership as root. The mount point name changes between two automatic mountings by Dolphin.
I should also add that I did format other drives with ext* filesystems. There are no issues with them (even with ext4) as long as I did not run dd if=... of=... on them (to erase them completely).
Can you explain to me what is going on?
Why does formatting with ext* seemingly make root the owner automatically while exFAT does not? Both commands were run using mkfs.
Edit: Forgot to write that I use Debian.
|
When a drive (or partition, or other block device, or a disk image file, etc) is formatted, the top-level directory of the filesystem is owned by the user running the mkfs command.
Usually, that is root unless you're formatting a disk image file (or a block device you happen to have RW perms on) as a non-root uid.
If you want to change the ownership, mount it and then chown the mounted directory. This will change the ownership of that top-level directory in the formatted fs itself, so the ownership change will persist after unmounting. For example (as root):
mkfs.ext4 /dev/sdaX
mount /dev/sdaX /mnt
chown user:group /mnt
This has to be done while the fs is mounted, otherwise it will only change the owner of the mount point itself (i.e. the directory in the parent filesystem), and this will be overridden by the owner in the mounted filesystem when you mount it.
For example, /mnt is just a directory in / until you mount another filesystem on it. It has whatever ownership and perms that are set for it in the / fs. When you mount another fs on /mnt, it now has whatever ownership and permissions are set for the top-level directory of that filesystem.
FAT is not a unix filesystem and does not support unix ownership or permissions. When you mount a FAT filesystem, you specify the ownership and permissions of all files in the fs at mount time (the default is the uid & gid of the mounting process).
Note that mkfs for some filesystems allows you to specify the owner when formatting but, because each such fs has its own method of doing that, it's generally easier just to chown it after mounting it for the first time (as shown above) than to remember a minor convenience feature of a rarely-used tool. E.g. mkfs.ext4 does this with an extended option (-E):
mkfs.ext4 -E root_owner=uid:gid /dev/sdaX
| Ownership, disk drives and permissions |
1,540,671,306,000 |
I'm trying to automatically identify which disk (not partition) the OS resides on, for a certain program which will format many attached disks (so it doesn't accidentally format the OS disk).
I'm currently using dmidecode -s system-uuid, but I think that this gives the partition UUID.
I could ask the user for it, but that would be a hassle.
Is there any way to identify the disk (as in /dev/sdX) and UUID that would be usable in a script format?
If it produces the exact result as in: run CODE_HERE, and gets result /dev/sdX, so much the better :)
System:
Host: MidnightStarSign Kernel: 5.12.9-1-MANJARO x86_64 bits: 64 compiler: gcc
v: 11.1.0 Desktop: KDE Plasma 5.21.5 Distro: Manjaro Linux base: Arch Linux
Machine:
Type: Desktop Mobo: ASUSTeK model: PRIME X570-PRO v: Rev X.0x
serial: <superuser required> UEFI: American Megatrends v: 3001
date: 12/04/2020
CPU:
Info: 16-Core model: AMD Ryzen 9 5950X bits: 64 type: MT MCP arch: Zen 3
rev: 0 cache: L2: 8 MiB
flags: avx avx2 lm nx pae sse sse2 sse3 sse4_1 sse4_2 sse4a ssse3 svm
bogomips: 217667
Speed: 3728 MHz min/max: 2200/3400 MHz boost: enabled Core speeds (MHz):
1: 3728 2: 3664 3: 4122 4: 3754 5: 3678 6: 3659 7: 3682 8: 3661 9: 3670
10: 3683 11: 3664 12: 3658 13: 3660 14: 4580 15: 3660 16: 4585 17: 3668
18: 4585 19: 3662 20: 3671 21: 3662 22: 3670 23: 3660 24: 3662 25: 3661
26: 3661 27: 3732 28: 3662 29: 4573 30: 3721 31: 4575 32: 3681
Graphics:
Device-1: NVIDIA GA104 [GeForce RTX 3070] vendor: ASUSTeK driver: nvidia
v: 465.31 bus-ID: 0b:00.0
Device-2: Microdia USB 2.0 Camera type: USB driver: snd-usb-audio,uvcvideo
bus-ID: 7-1:2
Display: x11 server: X.Org 1.20.11 driver: loaded: nvidia resolution:
1: 1920x1080~60Hz 2: 1920x1080 3: 1920x1080
OpenGL: renderer: NVIDIA GeForce RTX 3070/PCIe/SSE2 v: 4.6.0 NVIDIA 465.31
direct render: Yes
Audio:
Device-1: NVIDIA vendor: ASUSTeK driver: snd_hda_intel v: kernel
bus-ID: 0b:00.1
Device-2: AMD Starship/Matisse HD Audio vendor: ASUSTeK driver: snd_hda_intel
v: kernel bus-ID: 0e:00.4
Device-3: JMTek LLC. Plugable USB Audio Device type: USB
driver: hid-generic,snd-usb-audio,usbhid bus-ID: 3-1:2
Device-4: Schiit Audio Schiit Modi 3+ type: USB driver: snd-usb-audio
bus-ID: 3-2:3
Device-5: ASUSTek ASUS AI Noise-Cancelling Mic Adapter type: USB
driver: hid-generic,snd-usb-audio,usbhid bus-ID: 5-5:3
Device-6: Microdia USB 2.0 Camera type: USB driver: snd-usb-audio,uvcvideo
bus-ID: 7-1:2
Sound Server-1: ALSA v: k5.12.9-1-MANJARO running: yes
Sound Server-2: JACK v: 0.125.0 running: no
Sound Server-3: PulseAudio v: 14.2 running: yes
Sound Server-4: PipeWire v: 0.3.30 running: yes
Network:
Device-1: Realtek RTL8125 2.5GbE driver: r8169 v: kernel port: d000
bus-ID: 05:00.0
IF: enp5s0 state: up speed: 1000 Mbps duplex: full mac: 3c:7c:3f:a6:c3:22
Device-2: Intel I211 Gigabit Network vendor: ASUSTeK driver: igb v: kernel
port: c000 bus-ID: 07:00.0
IF: enp7s0 state: down mac: 24:4b:fe:5b:08:2a
Bluetooth:
Device-1: Cambridge Silicon Radio Bluetooth Dongle (HCI mode) type: USB
driver: btusb v: 0.8 bus-ID: 3-5.3:6
Report: rfkill ID: hci0 rfk-id: 0 state: up address: see --recommends
Drives:
Local Storage: total: 3.89 TiB used: 1.83 TiB (47.1%)
ID-1: /dev/nvme0n1 vendor: Western Digital model: WDS100T3X0C-00SJG0
size: 931.51 GiB
ID-2: /dev/nvme1n1 vendor: Western Digital model: WDS100T2B0C-00PXH0
size: 931.51 GiB
ID-3: /dev/sda vendor: Seagate model: ST2000LM015-2E8174 size: 1.82 TiB
ID-4: /dev/sdb type: USB vendor: Generic model: USB3.0 CRW -SD
size: 119.08 GiB
ID-5: /dev/sdd type: USB vendor: Samsung model: Flash Drive FIT
size: 119.51 GiB
ID-6: /dev/sde type: USB vendor: Toshiba model: TransMemory size: 14.92 GiB
ID-7: /dev/sdf type: USB vendor: SanDisk model: Gaming Xbox 360 size: 7.48 GiB
Partition:
ID-1: / size: 767 GiB used: 726.35 GiB (94.7%) fs: btrfs dev: /dev/dm-0
mapped: luks-466d5812-64c7-4a28-bcc4-a1a5adfa9450
ID-2: /boot/efi size: 511 MiB used: 26.1 MiB (5.1%) fs: vfat
dev: /dev/nvme0n1p1
ID-3: /home size: 767 GiB used: 726.35 GiB (94.7%) fs: btrfs dev: /dev/dm-0
mapped: luks-466d5812-64c7-4a28-bcc4-a1a5adfa9450
Swap:
ID-1: swap-1 type: partition size: 64 GiB used: 128.2 MiB (0.2%)
dev: /dev/dm-1 mapped: luks-81b2dc57-06f5-4471-b484-77c3a516f307
Sensors:
System Temperatures: cpu: 79.6 C mobo: 0 C gpu: nvidia temp: 41 C
Fan Speeds (RPM): N/A gpu: nvidia fan: 0%
Info:
Processes: 964 Uptime: 1d 4h 28m Memory: 62.78 GiB used: 39.02 GiB (62.2%)
Init: systemd Compilers: gcc: 11.1.0 clang: 12.0.0 Packages: 2014 Shell: Bash
v: 5.1.8 inxi: 3.3.04
|
I don't think it's possible to easily solve the general case. But it's not too hard for very basic cases. Getting the root partition is as simple as running df -h /. Now a partition is either /dev/sdXdigit (possibly /dev/hdXdigit) or /dev/xxxxxpdigit where xxxxx is something like nvme0n1 or mmcblk0. So in the simple case, getting the system disk is relatively straightforward:
get the / partition name
if part name ends in digitpdigits, remove pdigits
otherwise just remove the trailing digits
You could do it like this :
df -h / | awk '
NR == 2 && $1 ~ /[0-9]p[0-9]+$/ {
disk=$1
sub( /p[0-9]+$/, "", disk )
print disk
}
NR == 2 && $1 ~ /[sh]d.[0-9]+$/ {
disk=$1
sub( /[0-9]+$/, "", disk )
print disk
}
'
But the general case is really complicated. / could be on an lvm volume, built on top of a crypto device itself built on top of some raid device involving at least 2 disks (or lvm < raid < crypto), or it could be on a multipathed device (2 controllers reach the same device for redundancy and you end up with 2 device names for one physical device), or on a zfs raid, or I don't know...
So maybe you are interested in the boot disk, the one where /boot/efi is, assuming that your system boots in EFI mode. In this case, just replace df -h / with df -h /boot/efi and you're good. Why use /boot/efi? Because EFI firmware can only boot from simple partitions, not zfs raid, luks, linux raid or other weird things that Linux can use. Note that you still have the problem of detecting system disks; it's perfectly possible to have /boot/efi somewhere on /dev/sda and / on /dev/sdb or on zfs raid, lvm, linux raid, etc...
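To show what the awk above actually does, here it is run against a canned df line rather than live output (sample data only). On many modern systems the same answer can often be had with lsblk -no pkname "$(findmnt -no SOURCE /)", though that shortcut fails for the same LVM/LUKS stacks discussed above:

```shell
# Run the answer's parsing logic on canned df output instead of the live system.
sample='Filesystem      Size  Used Avail Use% Mounted on
/dev/nvme0n1p2   50G   20G   30G  40% /'
disk=$(printf '%s\n' "$sample" | awk '
  NR == 2 && $1 ~ /[0-9]p[0-9]+$/ { sub(/p[0-9]+$/, "", $1); print $1 }
  NR == 2 && $1 ~ /[sh]d.[0-9]+$/ { sub(/[0-9]+$/, "", $1); print $1 }')
echo "$disk"   # → /dev/nvme0n1
```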
| Identifying The Disk that the OS Partition Resides On (for a script)? |
1,540,671,306,000 |
How can I reduce the /var/lib/vz logical volume (/dev/vg/data) and use the freed space to increase the current swap size?
/etc/fstab
UUID=c4408a1c-aa5b-4ce2-a9e8-1673660331e9 / ext4 defaults 0 1
LABEL=EFI_SYSPART /boot/efi vfat defaults 0 1
UUID=c90b3083-1b43-427c-8016-1d2406c36417 /var/lib/vz ext4 defaults 0 0
UUID=e585755c-9908-4c01-a89b-d7fb1880b8f8 swap swap defaults 0 0
UUID=aea8f278-23a8-4ce0-97ca-4354720ca602 swap swap defaults 0 0
vgdisplay
--- Volume group ---
VG Name vg
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 386.97 GiB
PE Size 4.00 MiB
Total PE 99065
Alloc PE / Size 99065 / 386.97 GiB
Free PE / Size 0 / 0
VG UUID e2YzU3-HzQe-DIqH-HGNr-tFqc-cWO1-K92uOR
lvdisplay | grep "LV Path|LV Size"
LV Path /dev/vg/data
LV Size 386.97 GiB
|
easy: lvresize to, say, 350 GB (I'm assuming df -h /var/lib/vz gives you something like 340GB; if it's far less, you can of course shrink this way more!):
Since you need to shrink the file system, you first have to unmount it:
umount /var/lib/vz
Then, resize the logical volume; we can ask the LVM tools to correctly resize the underlying file system:
lvresize -L 350G -r /dev/vg/data
| | | |
new size in | | |
bytes | | |
| | |
350GB-/ | |
| |
resize the under- |
lying file sys- |
tem automatically |
|
which LV to resize
This of course only works if there's enough free space in /var/lib/vz, such that the ext4 file system can be successfully shrunk.
If there isn't: tough luck! Can't conjure space out of nothing :(
You can now mount /var/lib/vz again.
Afterwards, create swap to eat up all your free space:
lvcreate -l 100%FREE -n swaplv vg
| | | | |
size in extents-/ | | | |
| | | |
100% of the available | | |
space in the volume | | |
group | | |
| | |
name of the new LV -/--/ |
|
volume group in which to
create the new volume
Note of course that instead of -l 100%FREE you could of course also specify a size (e.g. -L 16G). Note the difference between -l and -L!
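To make the -l/-L distinction concrete: -l counts physical extents while -L takes a size, and with the 4.00 MiB extent size shown in vgdisplay above the two are easy to convert:

```shell
# Convert between LVM extents (-l) and sizes (-L), with 4 MiB extents.
pe_mib=4
total_pe=99065                                       # "Total PE" from vgdisplay above
awk -v pe="$pe_mib" -v n="$total_pe" \
    'BEGIN { printf "%.2f GiB\n", n * pe / 1024 }'   # → 386.97 GiB, as vgdisplay shows
echo $((16 * 1024 / pe_mib))                         # → 4096: "-L 16G" equals "-l 4096"
```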
"Format" it as swap device:
mkswap /dev/vg/swaplv
finally, you want to add that new swap to /etc/fstab:
/dev/vg/swaplv swap swap defaults 0 0
and enable it right now:
swapon -a
| How can I shrink/use a Logical Volume and use it as swap |
1,540,671,306,000 |
I have an external NVMe disk that was in my old laptop which is encrypted with LUKS. I need to mount that disk and extract some data out of it so this is what I have tried
fdisk -l
/dev/sdc3 2549760 2000408575 1997858816 952.7G Linux filesystem
udisksctl unlock -b /dev/sdc3
Unlocked /dev/sdc3 as /dev/dm-1.
So far so good. However, now I am trying to issue udisksctl mount -b and it won't work with /dev/dm-1, /dev/mapper/luks-96a2dfa5-1f16-45fd-895c-f2dd0505dde9, or /dev/sdc3; it always says that Object /org/freedesktop/UDisks2/block_devices/dm_2d1 is not a mountable filesystem.
lsblk -l output
sdc
├─sdc2 ext4 8df22661-a1f9-4fc6-aa2d-204c605a1626
├─sdc3 crypto_LUKS 96a2dfa5-1f16-45fd-895c-f2dd0505dde9
│ └─luks-96a2dfa5-1f16-45fd-895c-f2dd0505dde9 LVM2_member 5EOtDn-9iM0-630j-1gqO-73cc-5FgB-Wk8SlY
└─sdc1 vfat 86F0-B82B
Output of vgs and lvs
pmensik-Inspiron-7566% sudo vgs
/run/lvm/lvmetad.socket: connect failed: No such file or directory
WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
VG #PV #LV #SN Attr VSize VFree
elementary-vg 1 2 0 wz--n- 952.65g 21.33g
pmensik-Inspiron-7566% sudo lvs
/run/lvm/lvmetad.socket: connect failed: No such file or directory
WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root elementary-vg -wi------- 930.37g
swap_1 elementary-vg -wi------- 976.00m
Is it because the disk was used for running elementary OS and there are several partitions mounted as different filesystems? How can I mount /home from such a disk and extract data out of it? Thanks a lot
|
You have an LVM setup, so after unlocking the LUKS device you need to mount the root logical volume and not the unlocked device itself. In your case the logical volumes were not auto-activated because lvmetad is not running. You can activate them (= tell the system to actually create the logical volume block devices) using vgchange -ay elementary-vg and then mount the root logical volume /dev/elementary-vg/root using either mount or udisksctl mount -b /dev/elementary-vg/root.
| Mount LUKS encrypted drive |
1,540,671,306,000 |
Info
I have two USB flash drives. Both are "SanDisk Cruzer Blade"s. One is 8GB, the other is 64GB.
fdisk -l (8GB):
Disk /dev/sdd: 7.45 GiB, 8004304896 bytes, 15633408 sectors
Disk model: Cruzer Blade
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xcf0c9ad9
fdisk -l (64GB):
Disk /dev/sdc: 57.33 GiB, 61555605504 bytes, 120225792 sectors
Disk model: Cruzer Blade
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 3E29E0CB-68C6-3B47-9861-B92FC65CA0D6
Problem
The "Disk model" values of both are "Cruzer Blade". Both have this name in my motherboard's boot menu, so I can't distinguish between the two when choosing a disk to boot from.
Questions
Can "Disk model" be changed?
If so, how?
I'd like to name 8GB "sandisk-8gb-1" and 64GB "sandisk-64gb-1".
My research
Every thread I find shows either how to change the partitions' or the filesystems' labels.
I can't find anything that shows how to change the disk model.
|
Disk model can't be changed, it's reported by the device firmware, fdisk simply reads it from /sys/block/sdX/device/model.
Filesystem labels and partition names (with GPT) are unfortunately the only things you can change, and that's not going to help you when booting.
| How to change usb drive's "Disk model", as shown in fdisk -l? |
1,540,671,306,000 |
I'm trying to make a script to select which disk I should dd to.
A simple bash script to select options works like this:
#!/bin/bash
# Bash Menu Script Example
PS3='Please enter your choice: '
options=("Option 1" "Option 2" "Option 3" "Quit")
select opt in "${options[@]}"
do
case $opt in
"Option 1")
echo "you chose choice 1"
;;
"Option 2")
echo "you chose choice 2"
;;
"Option 3")
echo "you chose choice $REPLY which is $opt"
;;
"Quit")
break
;;
*) echo "invalid option $REPLY";;
esac
done
lsblk is the best way I know to read disks:
lz@vm:~/Downloads$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 55,5M 1 loop /snap/core18/1988
loop1 7:1 0 219M 1 loop /snap/gnome-3-34-1804/66
loop2 7:2 0 64,8M 1 loop /snap/gtk-common-themes/1514
loop3 7:3 0 138,5M 1 loop /snap/inkscape/8049
loop4 7:4 0 51M 1 loop /snap/snap-store/518
loop5 7:5 0 162,9M 1 loop /snap/gnome-3-28-1804/145
loop6 7:6 0 31,1M 1 loop /snap/snapd/11036
loop7 7:7 0 32,3M 1 loop /snap/snapd/11107
sda 8:0 1 14,9G 0 disk
└─sda1 8:1 1 14,9G 0 part
sr0 11:0 1 1024M 0 rom
vda 252:0 0 300G 0 disk
├─vda1 252:1 0 512M 0 part /boot/efi
├─vda2 252:2 0 1K 0 part
└─vda5 252:5 0 299,5G 0 part /
You can see that df -h does not list /dev/sda on my machine:
lz@vm:~/Downloads$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 4,5G 0 4,5G 0% /dev
tmpfs 924M 1,6M 922M 1% /run
/dev/vda5 294G 62G 218G 23% /
tmpfs 4,6G 26M 4,5G 1% /dev/shm
tmpfs 5,0M 4,0K 5,0M 1% /run/lock
tmpfs 4,6G 0 4,6G 0% /sys/fs/cgroup
/dev/loop0 56M 56M 0 100% /snap/core18/1988
/dev/loop5 163M 163M 0 100% /snap/gnome-3-28-1804/145
/dev/loop2 65M 65M 0 100% /snap/gtk-common-themes/1514
/dev/loop4 52M 52M 0 100% /snap/snap-store/518
/dev/loop1 219M 219M 0 100% /snap/gnome-3-34-1804/66
/dev/loop7 33M 33M 0 100% /snap/snapd/11107
/dev/loop3 139M 139M 0 100% /snap/inkscape/8049
/dev/loop6 32M 32M 0 100% /snap/snapd/11036
/dev/vda1 511M 4,0K 511M 1% /boot/efi
tmpfs 924M 60K 924M 1% /run/user/1000
/dev/fuse 250G 0 250G 0% /run/user/1000/keybase/kbfs
I don't know why.
Anyways, what would be the best way to list these disks (not the partitions like /dev/sda1, just the disk) so I can create a list of options to select one and dd to it? Is there a way to format these with lsbk so I can insert into my bash script?
Additionally, it would be nice to ignore the disk the script resides on, so I can avoid writing to the disk that contains the system.
|
try lsblk -d
-d, --nodeps Don't print device holders or slaves. (...)
How could I parse the table generated by this command?
lsblk -d | tail -n+2 | cut -d" " -f1
Would be good if I had a way to collect a name and the size so I can put in the option
lsblk -d | tail -n+2 | awk '{print $1" "$4}'
Should I assume it's simply /dev/NAME?
Yes, that is the location for devices. You can use test -b to check it.
-b FILE
FILE exists and is block special
if [ -b /dev/vda ]; then
echo "is a block device"
fi
if you check your devices with ls -l /dev/vda it should start with a b
brw-rw---- 1 root disk (...) /dev/vda
In the end lsblk is listing block devices - no need to double check.
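Putting the pieces together, here is a hedged sketch of such a script. The PKNAME-based root-disk detection is an assumption on my part; verify the output on your own machine before pointing dd anywhere.

```shell
#!/usr/bin/env bash
# Build a select menu of whole disks (no partitions), excluding the
# disk that backs the root filesystem.

root_src=$(findmnt -no SOURCE /)             # e.g. /dev/vda5
root_disk=$(lsblk -no PKNAME "$root_src")    # e.g. vda (may be empty if / is a whole disk)

# "NAME (SIZE)" for every whole disk except the root disk
mapfile -t disks < <(lsblk -dno NAME,SIZE \
    | awk -v r="$root_disk" '$1 != r { print $1 " (" $2 ")" }')

select d in "${disks[@]}" "Quit"; do
    [ "$d" = "Quit" ] && break
    dev=/dev/${d%% *}                        # strip the " (SIZE)" suffix
    if [ -b "$dev" ]; then
        echo "selected $dev"                 # replace with your dd command
        break
    fi
done
```

The awk filter is what makes the root disk disappear from the menu; everything else is the select pattern from the question.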
| bash script to select a disk for dd (lsblk?) |
1,540,671,306,000 |
We want to add a new disk using LVM to an existing Linux system (following "How to Add New Disks Using LVM to an Existing Linux System").
We have the following RHEL 7.2 server details:
pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 VOL_GROUP-lv lvm2 a-- <179.00g <25.09g
/dev/sdb1 data_vol_g lvm2 a-- <100.00g 0
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 180G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 179G 0 part
├─VOL_GROUP-lv_root 253:0 0 50G 0 lvm /
├─VOL_GROUP-lv_swap 253:1 0 3.9G 0 lvm [SWAP]
└─VOL_GROUP-lv_var 253:2 0 100G 0 lvm /var
sdb 8:16 0 100G 0 disk
└─sdb1 8:17 0 100G 0 part
└─data_vol_g-data_lv 253:3 0 100G 0 lvm /DB
# fdisk -l | grep sda
Disk /dev/sda: 193.3 GB, 193273528320 bytes, 377487360 sectors
/dev/sda1 * 2048 2099199 1048576 83 Linux
/dev/sda2 2099200 377487359 187694080 8e Linux LVM
Since we have PFree=25G, we want to add a new partition, sda3. The final goal is to create an XFS filesystem on it with ftype=1, because the current OS filesystem uses ftype=0 and we can't install Docker, which needs ftype=1.
So we start with fdisk, but we get this:
fdisk /dev/sda
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): n
Partition type:
p primary (2 primary, 0 extended, 2 free)
e extended
Select (default p): p
Partition number (3,4, default 3):
No free sectors available
Command (m for help):
Why do we get "No free sectors available" while we have PFree=25G?
|
fdisk says you have no free sectors available because all the disk space is allocated to partitions (sda1 and sda2). Thus from fdisk’s perspective, there’s no free disk space, and no room to create a new partition. This can also be seen in lsblk’s output:
sda 8:0 0 180G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 179G 0 part
sda is 180G in size, and contains sda1 which is 1G in size and sda2 which is 179G in size, so there’s no unused space.
sda2 is used as a physical volume in your VOL_GROUP volume group, and has 25GiB’s worth of available physical extents. This free disk space can be used for logical volumes (LVs), either by adding a new LV, or by resizing an existing LV. The VG with free space is VOL_GROUP, so
lvcreate VOL_GROUP ...
will allow you to create an LV there.
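As a hedged continuation toward the stated goal: the LV name lv_docker and the mountpoint below are made up for illustration, and the VG name is taken from the answer above. Adjust both before running.

```shell
# Use the ~25G of free extents for a Docker-ready XFS filesystem.
lvcreate -l 100%FREE -n lv_docker VOL_GROUP      # take all free PEs in the VG
mkfs.xfs -n ftype=1 /dev/VOL_GROUP/lv_docker     # ftype=1 is what Docker's overlay drivers need
mkdir -p /var/lib/docker
mount /dev/VOL_GROUP/lv_docker /var/lib/docker
```

On RHEL 7.2 the explicit -n ftype=1 matters, since mkfs.xfs there still defaults to ftype=0.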
| Add New Disks Using LVM to an Existing Linux System |
1,540,671,306,000 |
A year ago I extended /var on a RHEL 7.x server from 100G to 200G, as follows.
notes:
a. VG-A is the VG
b. server is VM server
c. file-system is XFS
procedure:
1. pvcreate /dev/sdb
2. vgextend VG-A /dev/sdb
3. lvresize --size +100G /dev/mapper/VG-A-lv_var
4. xfs_growfs /var
So /var increased to 200G.
Now we are thinking of adding another 100G to /var with a new disk, sdc, repeating steps 1-4, in order to increase /var to 300G.
That means sdb and sdc will both be allocated to /var, in addition to part of the sda disk (sda is the OS disk).
My question is:
is it OK to add two disks in order to increase /var?
I am asking because adding a number of disks to extend /var could be problematic: if one of the disks (sdb/sdc) fails, then /var will not be functional.
|
First of all — and this is hopefully a reminder — the answer to risk considerations when dealing with storage is to have a working backup and restore strategy.
With that in place, using virtual storage, the risk level is the same whether you extend the virtual disk or add a virtual disk. If you care about speedy recovery, your virtual storage should provide a snapshot feature which you could use.
To answer your main question, yes, adding another virtual disk is fine.
If you were using physical disks, then adding another disk would increase the risk of failure; again, the answer there is backups. For availability, again when using physical disks directly, you might want to consider some form of redundancy. In your case though, availability would be better addressed in your virtual storage layer.
| increase /var partition by adding few disks |
1,540,671,306,000 |
I'm transferring the content from one smaller partition to another, larger one using rsync -vacHS --progress /oldPartMountPoint/ /newPartMountPoint/. The data is large, ~1TB; I left it overnight but in the morning I found it had made almost no progress. Also, the two partitions are both iSCSI devices shared from a NAS. The NAS doesn't show any disk or network activity coming from the VM I am using to transfer the files. I know the name of the last transferred directory. Can I resume the process in some way so that I only have to transfer the files that have not yet been copied, not everything from the start?
|
Unfortunately this is one of the least efficient scenarios for rsync. You have two remote filesystems, both mounted locally. This means that rsync cannot optimise file transfers for files that already exist on the target by sending only changed blocks. Furthermore you've specified -c (--checksum) so on restart every completed pair of source and destination files will have to be verified with a full data checksum.
The only improvement I can offer here is for you to remove -c from the list of options. If you don't have hard-linked files then also omit -H (--hard-links).
rsync -vaHS --progress /oldPartMountPoint/ /newPartMountPoint/
You can rerun this command as many times as you need, and it will skip previously copied files.
I'm aware this doesn't address the lack of disk or network activity on your VM; you will have to add details of that into your question if you need answers to address that issue.
| Resume rsync transfer after failure? |
1,582,361,441,000 |
As far as I know, this just amounts to a change in the partition table and it is a relatively safe operation that is rather cheap.
Is there any significant cost or drawback(s) to...
Creating and destroying partitions all the time
Having lots of partitions, mounting them, and actually using them to write and read lots of data and expecting this to perform at the same level as if we were dealing with directories in a larger partition
For the latter, I can think of several factors that could affect this in theory, but all of them should be negligible in practice.
Just general curiosity, this is not something with a real use case
|
The single most important reason to avoid too many partitions on the same harddisk is space usage: As your harddisk fills up, the partitions will fill up to a different degree, and you can't use free space in one partition in another partition if the latter partition gets full.
While you can resize partitions, doing so is time consuming, and in case of SSD causes unnecessary wear.
To a lesser degree, destroying and re-creating partitions is also time consuming: You must not only create the partition, you also must initialize the file system on it (mkfs), and that takes a bit for large partitions.
Otherwise, there's in principle no downside to mounting and using many partitions with respect to performance etc. However, it's a bit difficult to come up with a use case for that, and in particular a use case where it's obvious how to distribute the files you have on those many partitions.
| Is there a reason to avoid frequently creating and 'destroying' partitions and having lots of them? [closed] |
1,582,361,441,000 |
System: Linux Mint 19.1 Cinnamon.
Disks in this question are considered external HDDs either ext4 or ntfs formatted.
How do I set up my fstab (or whatever else) so that I can unmount (umount) those external HDDs under my normal user account?
I have:
one External hard disk over USB 3.0 formatted as ext4
one External hard disk over USB 2.0 formatted as ntfs
Relevant parts of my fstab:
UUID=<the UUID of the Ext4 disk drive> /mnt/external-hdd-2tb-usb3-ext4 ext4 nosuid,nodev,nofail 0 0
UUID=<the UUID of the NTFS disk drive> /mnt/external-hdd-500gb-usb2-ntfs ntfs nosuid,nodev,nofail 0 0
|
You need to add the users option to your fstab entries.
Working example on my setup:
UUID=<the UUID of the Ext4 disk drive> /mnt/external-hdd-2tb-usb3-ext4 ext4 nosuid,nodev,nofail,users 0 0
UUID=<the UUID of the NTFS disk drive> /mnt/external-hdd-500gb-usb2-ntfs ntfs nosuid,nodev,nofail,users 0 0
This will allow you (upon reboot) to execute for example:
umount /dev/sdX1
as an ordinary user without sudo.
Additionally, on Linux Mint there is a Disks GUI where you can then even power off those drives (I stress: only once you have unmounted them!) by pressing the Power off this disk button on the right-hand side of the top bar.
| How to set fstab to be able to umount my external HDDs under normal user account? |
1,582,361,441,000 |
I had a drive that was failing its SMART test in the form of:
smartctl -a /dev/sdc:
...
# 1 Short offline Completed: read failure 50% 6354 4377408
# 2 Extended offline Completed: read failure 90% 6354 4377408
I then wanted to get this 'sector' marked as a bad sector, so I assumed I'd just need to write a load of data to it. I used dd to write a bunch of zeros until the drive was full, after which I ran another SMART test.
It completed successfully, however looking at the SMART attributes I don't see any change in:
196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0
197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 0
Besides knowing full well that I'm always at the risk of a drive failure, is the above information correlated with a drive failure?
Here is diff of the before / after of smartctl's attributes:
diff --git a/x.txt b/x.txt
index 4cfe1b7..1bcace5 100644
--- a/x.txt
+++ b/x.txt
@@ -12,7 +12,7 @@ Sector Sizes: 512 bytes logical, 4096 bytes physical
Device is: In smartctl database [for details use: -P show]
ATA Version is: ACS-2 (minor revision not indicated)
SATA Version is: SATA 3.0, 3.0 Gb/s (current: 3.0 Gb/s)
-Local Time is: Sun Feb 24 16:50:01 2019 GMT
+Local Time is: Mon Feb 25 18:33:35 2019 GMT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
@@ -55,31 +55,38 @@ SCT capabilities: (0x70b5) SCT Status supported.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
- 1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 0
- 3 Spin_Up_Time 0x0027 180 179 021 Pre-fail Always - 5991
- 4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 114
+ 1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 4
+ 3 Spin_Up_Time 0x0027 177 177 021 Pre-fail Always - 6116
+ 4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 116
5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0
7 Seek_Error_Rate 0x002e 200 200 000 Old_age Always - 0
- 9 Power_On_Hours 0x0032 092 092 000 Old_age Always - 6356
+ 9 Power_On_Hours 0x0032 092 092 000 Old_age Always - 6372
10 Spin_Retry_Count 0x0032 100 100 000 Old_age Always - 0
11 Calibration_Retry_Count 0x0032 100 253 000 Old_age Always - 0
- 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 57
+ 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 59
192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 46
-193 Load_Cycle_Count 0x0032 200 200 000 Old_age Always - 67
-194 Temperature_Celsius 0x0022 122 114 000 Old_age Always - 28
+193 Load_Cycle_Count 0x0032 200 200 000 Old_age Always - 69
+194 Temperature_Celsius 0x0022 116 114 000 Old_age Always - 34
196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0
197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0030 100 253 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0
-200 Multi_Zone_Error_Rate 0x0008 200 200 000 Old_age Offline - 1
+200 Multi_Zone_Error_Rate 0x0008 200 200 000 Old_age Offline - 0
SMART Error Log Version: 1
No Errors Logged
SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
-# 1 Short offline Completed: read failure 50% 6354 4377408
-# 2 Extended offline Completed: read failure 90% 6354 4377408
+# 1 Extended offline Completed without error 00% 6367 -
+# 2 Short offline Completed: read failure 60% 6361 4377409
+# 3 Short offline Completed: read failure 50% 6361 4377409
+# 4 Extended offline Completed: read failure 90% 6359 4377409
+# 5 Short offline Completed without error 00% 6359 -
+# 6 Short offline Completed: read failure 60% 6356 4377409
+# 7 Short offline Completed: read failure 50% 6354 4377408
+# 8 Extended offline Completed: read failure 90% 6354 4377408
+6 of 6 failed self-tests are outdated by newer successful extended offline self-test # 1
SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
And the current output of smartctl -a:
smartctl 6.6 2018-12-05 r4851 [x86_64-linux-4.14.98] (local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Family: Western Digital AV-GP (AF)
Device Model: WDC WD20EURS-63SPKY0
Serial Number: WD-WMC1T2763021
LU WWN Device Id: 5 0014ee 6addb4b7c
Firmware Version: 80.00A80
User Capacity: 2,000,398,934,016 bytes [2.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Device is: In smartctl database [for details use: -P show]
ATA Version is: ACS-2 (minor revision not indicated)
SATA Version is: SATA 3.0, 3.0 Gb/s (current: 3.0 Gb/s)
Local Time is: Mon Feb 25 18:49:12 2019 GMT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x00) Offline data collection activity
was never started.
Auto Offline Data Collection: Disabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: (27240) seconds.
Offline data collection
capabilities: (0x7b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 275) minutes.
Conveyance self-test routine
recommended polling time: ( 5) minutes.
SCT capabilities: (0x70b5) SCT Status supported.
SCT Feature Control supported.
SCT Data Table supported.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 4
3 Spin_Up_Time 0x0027 177 177 021 Pre-fail Always - 6116
4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 116
5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0
7 Seek_Error_Rate 0x002e 200 200 000 Old_age Always - 0
9 Power_On_Hours 0x0032 092 092 000 Old_age Always - 6373
10 Spin_Retry_Count 0x0032 100 100 000 Old_age Always - 0
11 Calibration_Retry_Count 0x0032 100 253 000 Old_age Always - 0
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 59
192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 46
193 Load_Cycle_Count 0x0032 200 200 000 Old_age Always - 69
194 Temperature_Celsius 0x0022 116 114 000 Old_age Always - 34
196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0
197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0030 100 253 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0
200 Multi_Zone_Error_Rate 0x0008 200 200 000 Old_age Offline - 0
SMART Error Log Version: 1
No Errors Logged
SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Extended offline Completed without error 00% 6367 -
# 2 Short offline Completed: read failure 60% 6361 4377409
# 3 Short offline Completed: read failure 50% 6361 4377409
# 4 Extended offline Completed: read failure 90% 6359 4377409
# 5 Short offline Completed without error 00% 6359 -
# 6 Short offline Completed: read failure 60% 6356 4377409
# 7 Short offline Completed: read failure 50% 6354 4377408
# 8 Extended offline Completed: read failure 90% 6354 4377408
6 of 6 failed self-tests are outdated by newer successful extended offline self-test # 1
SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
|
No, you didn't want to mark it as a bad sector. You wanted a write operation to the unreadable sector :)
As I had quoted yesterday in smartctl reports overall health test as passed but the tests failed?
If the disk can read the sector of data a single time, and the damage is permanent, not transient, then the disk firmware will mark the sector as 'bad' and allocate a spare sector to replace it. But if the disk can't read the sector even once, then it won't reallocate the sector, in hopes of being able, at some time in the future, to read the data from it. A write to an unreadable (corrupted) sector will fix the problem.
If the damage is transient, then new consistent data will be written to the sector. If the damage is permanent, then the write will force sector reallocation.
(bold in parts by me, original source: smartmontools FAQ)
There were no reallocated sectors yesterday and there are no reallocated sectors today. In terms of bad sectors, that means the disk is "as healthy" as it already was, if we ignore the fact that Raw_Read_Error_Rate went up to 4. Was that caused by the offline tests?
But you fixed your unreadable sector in tests 1 and 5. That's good. It is strange, though, that tests 2-4 also failed.
I would run the tests a few more times to see what happens, and keep an eye on Raw_Read_Error_Rate when you run tests or write zeros with dd.
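A sketch of that re-testing, assuming the drive is /dev/sdX (replace with your actual device):

```shell
# Start another extended offline self-test; smartctl returns immediately
# and the test runs inside the drive.
smartctl -t long /dev/sdX

# ...wait for the "recommended polling time" (275 minutes here), then:
smartctl -l selftest /dev/sdX      # review the self-test log

# Watch the attributes that matter for this drive's history:
smartctl -A /dev/sdX \
    | grep -E 'Raw_Read_Error_Rate|Reallocated_Sector_Ct|Current_Pending_Sector'
```

Repeating this a few times should show whether the read-failure LBA comes back or stays fixed.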
| SMART test completes with no failure after failing test previously, without reallocating any sectors? |
1,582,361,441,000 |
I have a computer I use for imaging disks running Ubuntu 16.04. Each disk is inserted into a USB 3.0 dock, imaged/wiped, and then disconnected. The disks don't have any mounted filesystems which need to be dismounted. They disappear from gnome-disks as expected. Eventually, using gparted and/or the gnome-disks, I am no longer able to see any new disks that get added. Sometimes, new disks show up under an old /dev/sdx device and I can access them but they show the old device's partition table and size. I assume this is because /dev/sdx is filling up and the kernel is holding onto pointers to disks which no longer exist?
Edit: I should add that a number of these disks have bad sectors or other issues, so that could be a part of the problem as well. This "block device exhaustion" happens faster when more malfunctioning drives are added/removed. Once it happens, even good drives won't appear when added to the system. But I notice this happens even if all drives I'm adding/removing are healthy and functioning.
What can I do to prevent this behaviour or tell the kernel to "forget" disconnected disks?
|
Before disconnecting, say, /dev/sdX, first run blockdev --flushbufs /dev/sdX to ensure all the data has been written to the disk and isn't waiting in a buffer, just to be sure.
Then run echo 1 > /sys/block/sdX/device/delete. This tells the kernel that /dev/sdX will be going away and should be forgotten. Depending on the disks/docks involved, this might even spin down the disk automatically.
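Both steps can be wrapped in a small helper; sdc below is only an example device name, so confirm with lsblk before running.

```shell
# Flush and detach a docked disk before unplugging it.
forget_disk() {
    local dev=$1                                  # bare name, e.g. "sdc"
    sync
    blockdev --flushbufs "/dev/$dev"              # flush any buffered writes
    echo 1 > "/sys/block/$dev/device/delete"      # kernel forgets the device
}

forget_disk sdc
```

Run this for each imaged disk and the stale /dev/sdX entries should stop accumulating.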
| Linux stops detecting new disks/block devices after certain number |
1,582,361,441,000 |
Regarding this post: Sum up numbers with KB/MB/GB/TB/PB... suffixes
I have a few machines running older Debian versions that can't be upgraded, which means the coreutils package does not contain numfmt. I tried to find another way to get it (the machine runs Debian 7.6), but I am forced to use another method to get my disk size.
I'm currently using the following:
lshw -class disk -class storage | grep size: | cut -d "(" -f2 | cut -d ")" -f1 | sed -e 's/[^0-9]/ /g' | paste -sd+ | bc
I can easily get the size, but I am in need of getting GB/TB or even MB to it also.
If I use:
lshw -class disk -class storage | grep size: | cut -d "(" -f2 | cut -d ")" -f1
I get
160GB
160GB
On my other machines I'd get for an example:
2TB
2TB
2TB
Is there a way to save the word after the numbers in a variable and later print it out?
Also, I need to handle the case of a machine having multiple drives with different sizes, like
500GB
2TB
3TB
In that case my command sadly does not work; it would give 505.
|
I don't have the lshw command handy, so I faked some as follows:
size: 4200KiB (4200KB)
size: 420MiB (420MB)
size: 42GiB (42GB)
size: 4TiB (4TB)
size: 2PiB (2PB)
(I found an example online and simply copied out the "size:" lines and made up some sizes).
I used awk here because once you find yourself grepping and cutting and seding, it's often easier to combine all that logic into awk.
The awk script below sets the field separators (FS) to open- and close-parenthesis, so that it's more natural to pull out the desired value. It also (redundantly) initializes the running total size (in GB) to zero.
Each time awk sees an input line that matches the (simple) regular expression size: it begins the real work inside the braces. The value inside the parenthesis ends up in field #2, so we ask awk to match the digits in that field. We expect them to start at position 1, for some number of characters. The value is then extracted based on that length and the suffix is the remainder of the string.
We then grind through the list of possible suffixes (extend as necessary) and multiply the current size by the appropriate ratio (mentioned by Stéphane in a comment on a currently-deleted answer to be 1000-based units).
After all of the input has been consumed, awk prints the total size in GB.
Save the script into a file and run it like
lshw -class disk -class storage | awk -f /path/to/script
Script:
BEGIN {
FS="[()]"
sizegb=0
}
/size: / {
match($2, "[0-9]+")
value=substr($2, RSTART, RLENGTH)
suffix=substr($2, RLENGTH+1)
if (suffix == "KB")
sizegb += value / (1000*1000)
else if (suffix == "MB")
sizegb += value / 1000
else if (suffix == "GB")
sizegb += value
else if (suffix == "TB")
sizegb += value * 1000
else if (suffix == "PB")
sizegb += value * 1000 * 1000
}
END {
printf "Total size: %.2f GB\n", sizegb
}
| Bash variable hold or keep value/data |
1,582,361,441,000 |
Are there any files created and or broadened by the system besides mail and logs?
AFAIK, the only files that are created and/or broadened by the system by default are the /var/mail/ files and /var/log/ files (broadened in the sense of growing file size).
To cope with that I've redirected /dev/null onto the files in these directories.
But are there more files besides mail and logs that I should worry will be created and grown by the Linux system itself over time?
|
I'd recommend reading up on the Unix filesystem.
The /var/ directory holds frequently changing files. The /etc/ directory holds configurations that don't typically grow much. The /usr/ directory holds OS files that don't change much outside of system upgrades. If you have third-party applications running out of /srv/ or /opt/, those directories may grow.
| Are there any files created and or broadened by the system besides mail and logs? |
1,582,361,441,000 |
We want to check the filesystems on the disks /dev/sdc ... /dev/sdg on each Linux Red Hat machine.
The target is to find which disks require e2fsck (as in e2fsck -y /dev/sdb, etc.).
according to man page
-n Open the filesystem read-only, and assume an answer of `no' to all questions. Allows e2fsck to be used non-interactively. This option may not be specified at
the same time as the -p or -y options.
When we run the command (for example):
e2fsck -n /dev/sdXX
we get
e2fsck 1.42.9 (28-Dec-2013)
Warning! /dev/sdc is mounted.
Warning: skipping journal recovery because doing a read-only filesystem check.
/dev/sdc: clean, 94/1310720 files, 156685/5242880 blocks
So what do we need to capture from the e2fsck output to decide whether e2fsck needs to be run?
e2fsck process
init 1
umount /dev/sdXX
e2fsck -y /dev/sdXX ( or e2fsck -C /dev/sdXX for full details )
init 3
|
You probably are looking for the output of tune2fs rather than e2fsck
tune2fs -l /dev/sdXX |grep "Filesystem state\|Last checked\|Check interval"
which should yield something like this:
Filesystem state: clean
Last checked: Mon Nov 28 16:03:44 2016
Check interval: 31536000 (12 months)
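Based on that, here is a hedged sketch for scanning /dev/sdc through /dev/sdg on each machine: any filesystem whose state is not exactly "clean" (e.g. "clean with errors" or "not clean") is reported as a candidate for e2fsck.

```shell
# Report disks whose ext filesystem state suggests an fsck is needed.
for dev in /dev/sd{c..g}; do
    state=$(tune2fs -l "$dev" 2>/dev/null \
        | awk -F': *' '/^Filesystem state:/ { print $2 }')
    if [ "$state" != "clean" ]; then
        echo "$dev needs e2fsck (state: ${state:-no ext filesystem?})"
    fi
done
```

This reads only metadata, so it is safe to run on mounted filesystems; the actual e2fsck still requires the unmount procedure from the question.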
| e2fsck -n + how to know if need to run e2fsck in order to fix corrupted blocks? |
1,582,361,441,000 |
I am running CentOS 7.3 on x86_64. I have two disks: the first is a 256GB SSD where / (root), /boot, swap and /home are configured. The second is a 4TB HDD which is mounted as /data and currently holds more than 1TB of data.
I want to expand /home, as it's not sufficient and will run out of space soon. To achieve this, I want to make use of the 4TB HDD I have, such that I can use it both as /home and /data.
I want /data and not just /home because I already have some applications and data configured with absolute paths like /data/xyz/pqr.
Is it possible to achieve this without formatting anything and, hopefully, without losing any data?
I am sharing below system information, if more details are required please let me know.
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/cl-root 55G 27G 29G 50% /
devtmpfs 55G 0 55G 0% /dev
tmpfs 55G 0 55G 0% /dev/shm
tmpfs 55G 18M 55G 1% /run
tmpfs 55G 0 55G 0% /sys/fs/cgroup
/dev/sda1 1.9G 173M 1.7G 10% /boot
/dev/sdb1 3.6T 708G 2.8T 21% /data
/dev/mapper/cl-home 165G 3.0G 162G 2% /home
tmpfs 11G 12K 11G 1% /run/user/42
tmpfs 11G 0 11G 0% /run/user/1001
cat /etc/fstab
/dev/mapper/cl-root / xfs defaults 0 0
UUID=02663577-6456-477e-8489-3565659de456 /boot xfs defaults 0 0
/dev/mapper/cl-home /home xfs defaults 0 0
/dev/mapper/cl-swap swap swap defaults 0 0
/dev/sdb1 /data ext4 defaults 0 0
|
Yes, it's possible. You'll have to shrink the /data filesystem first. Unmount it, then check filesystem integrity:
e2fsck /dev/sdb1
Shrink it to 999G (or your desired size):
resize2fs /dev/sdb1 999G
And use gparted to resize the partition /dev/sdb1 to 1000G. Then you can grow the filesystem to fill /dev/sdb1 with:
resize2fs /dev/sdb1
Now you have the rest of /dev/sdb available for your new /home. It's best to create an LVM2 volume group (VG) there:
vgcreate lvm01 /dev/sdb2
And a logical volume (LV) with sufficient size for your /home (500G is an example):
lvcreate -n home.vol -L 500G lvm01
Create a filesystem on the new LV:
mkfs.ext4 /dev/mapper/lvm01-home.vol
Then mount it under a temporary mountpoint, log out of your ordinary user account, move the contents of /home to the temporary mountpoint as root, change the /etc/fstab entry for /home to point at the new LV, and restart.
| Mount Existing Hard Disk As /home And /data |
1,582,361,441,000 |
We have a lot of working Linux machines.
All mount points are configured in /etc/fstab as follows:
/dev/sdc /grd/sdc ext4 defaults,noatime 0 0
/dev/sdd /grd/sdd ext4 defaults,noatime 0 0
/dev/sdb /grd/sdb ext4 defaults,noatime 0 0
/dev/sde /grd/sde ext4 defaults,noatime 0 0
/dev/sdf /grd/sdf ext4 defaults,noatime 0 0
I want to change the /etc/fstab configuration to use UUIDs instead of the current device names.
Can we reconfigure the fstab to use UUIDs after the machines have been working for a long time?
Or is it maybe too late, or risky?
example:
UUID="14314872-abd5-24e7-a850-db36fab2c6a1" /grd/sdc ext4 defaults,noatime 0 0
|
There shouldn't be any issues. If you do changes to your machine configuration (for example add or replace disks) the device names (/dev/sdX) might change at next boot. Using UUIDs avoids this issue.
Since you use device names to name the mount points (/grd/sdX), those might not match the device name anymore should the device names change for any reason.
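To generate the new entries without typing UUIDs by hand, something like this can help. The device list mirrors the fstab in the question; review the printed lines before replacing anything in /etc/fstab.

```shell
# Print UUID-based fstab lines for each data disk.
for dev in /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf; do
    uuid=$(blkid -s UUID -o value "$dev")
    printf 'UUID="%s" /grd/%s ext4 defaults,noatime 0 0\n' "$uuid" "${dev##*/}"
done
```

Note that the mountpoints keep their old /grd/sdX names; only the first fstab field changes, so nothing else on the machines needs to be reconfigured.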
| reconfigure the fstab file with UUID |
1,582,361,441,000 |
I have a new disk on the server (sdb) and I want to merge it into the existing root mountpoint (sda1 mounted on /). There is a utility called mhddfs which does exactly what I want. But the problem is that the examples show mounting existing mountpoints into a new, previously unused merged mountpoint called virtual:
$ df -h
Filesystem Size Used Avail Use% Mounted on
...
/dev/sda1 80G 50G 30G 63% /mnt/hdd1
/dev/sdb1 40G 35G 5G 88% /mnt/hdd2
/dev/sdc1 60G 10G 50G 17% /mnt/hdd3
$ mkdir /mnt/virtual
$ mhddfs /mnt/hdd1,/mnt/hdd2,/mnt/hdd3 /mnt/virtual -o allow_other
$ df -h
Filesystem Size Used Avail Use% Mounted on
...
/dev/sda1 80G 50G 30G 63% /mnt/hdd1
/dev/sdb1 40G 35G 5G 88% /mnt/hdd2
/dev/sdc1 60G 10G 50G 17% /mnt/hdd3
mhddfs 180G 95G 85G 53% /mnt/virtual
But my filesystem looks like this:
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 76G 58G 15G 80% /
udev 10M 0 10M 0% /dev
tmpfs 3.2G 8.7M 3.2G 1% /run
tmpfs 7.9G 0 7.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
tmpfs 1.6G 0 1.6G 0% /run/user/1008
tmpfs 1.6G 0 1.6G 0% /run/user/1007
tmpfs 1.6G 0 1.6G 0% /run/user/1002
tmpfs 1.6G 0 1.6G 0% /run/user/1000
/dev/sdb1 197G 188M 187G 1% /mnt/sdb1
and these are the disks:
$ fdisk -l
Disk /dev/sda: 101 GiB, 108447924224 bytes, 211812352 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x000ced15
Device Boot Start End Sectors Size Id Type
/dev/sda1 2048 204000000 203997953 97.3G 83 Linux
/dev/sda2 204001280 211812351 7811072 3.7G 5 Extended
/dev/sda5 204003328 211812351 7809024 3.7G 82 Linux swap / Solaris
Disk /dev/sdb: 200 GiB, 214748364800 bytes, 419430400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 145F5C99-D238-4702-B728-04A613B1DBA1
Device Start End Sectors Size Type
/dev/sdb1 2048 419428351 419426304 200G Linux filesystem
So how do I mount sdb1 onto / with mhddfs? Can I just do it like in the example:
$ mhddfs /,/mnt/sda1 / -o allow_other
Of course I don't want to lose the existing data on sda, and I am not sure what mhddfs would do in this case.
|
As a warning: I have no experience with mhddfs, but there are a few general rules:
All the examples in the mhddfs readme are based on mounting onto a new mountpoint, not an existing one.
So just putting an overlay over an already mounted point is not mentioned.
Also consider what you are doing at that point:
/dev/sda1 looks pretty much like your boot and root partition.
Even if you are able to remount it with any tool - how you know that the right files are getting added for new linux-kernels ?
My advice would be:
Figure out which directories have become quite large, and try to move those to the new drive.
Based on your example:
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 76G 58G 15G 80% /
/dev/sdb1 197G 188M 187G 1% /mnt/sdb1
[...]
Try:
sudo du -hs /*
You may end up with 50GB of stuff in /home
If so, copy everything from /home to /mnt/sdb1/ (preserving ownership and permissions, e.g. with rsync -aHAX /home/ /mnt/sdb1/).
Once that is done:
umount /dev/sdb1
mv /home /home_old
mkdir /home
mount /dev/sdb1 /home
Make the new mountpoint permanent with an entry in /etc/fstab.
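A hedged example of such an fstab entry — the UUID below is a placeholder, print the real one with blkid /dev/sdb1:

```
# <file system>                             <mount point>  <type>  <options>  <dump>  <pass>
UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee   /home          ext4    defaults   0       2
```

After adding the line, mount -a (or a reboot) picks it up.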
As you can see: you need no magic extra tools - this is standard unix admin work ;-)
| Mount new disk to existing root mountpoint using mhddfs (merging filesystems) |
1,582,361,441,000 |
I have a problem with used space and available disk space in lvm.
Please see this results :
[root@localhost ~]# vgs
VG #PV #LV #SN Attr VSize VFree
VolGroup 3 3 0 wz--n- 6.78t 736.00m
[root@localhost ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda2 VolGroup lvm2 a--u 3.50t 0
/dev/sdb1 VolGroup lvm2 a--u 2.50t 0
/dev/sdb2 VolGroup lvm2 a--u 798.72g 736.00m
[root@localhost ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
lv_home VolGroup -wi-ao---- 6.73t
lv_root VolGroup -wi-ao---- 50.00g
lv_swap VolGroup -wi-ao---- 4.90g
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
50G 2.5G 45G 6% /
tmpfs 4.9G 0 4.9G 0% /dev/shm
/dev/sda1 477M 28M 425M 7% /boot
/dev/mapper/VolGroup-lv_home
6.7T 5.8T 531G 92% /home
[root@localhost ~]# df -Th
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
ext4 50G 2.5G 45G 6% /
tmpfs tmpfs 4.9G 0 4.9G 0% /dev/shm
/dev/sda1 ext4 477M 28M 425M 7% /boot
/dev/mapper/VolGroup-lv_home
ext4 6.7T 5.8T 529G 92% /home
[root@localhost ~]# arch
x86_64
As you can see, I assigned about 6.7T to /home via LVM, but I can't use more than roughly 6.3T (Used plus Avail in df).
I would be glad if someone could help me.
Thanks
|
The "missing" space in the logical volume is the ext4 reserved-blocks area: by default 5% of the filesystem is reserved for root for emergency situations, set when the file system is created.
Look: dumpe2fs -h /dev/mapper/VolGroup-lv_home | grep -i reserved will give you the reserved block count; multiply that value by the block size to get the size in bytes, convert to GiB, and you will find the "lost" space.
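A worked version of that arithmetic, using hypothetical dumpe2fs numbers chosen to match a ~6.7T volume at the default 5% reservation:

```shell
# Assumed sample values from dumpe2fs (hypothetical):
#   Reserved block count: 90177536
#   Block size:           4096
reserved_blocks=90177536
block_size=4096
echo "$(( reserved_blocks * block_size / 1024 / 1024 / 1024 )) GiB reserved"
```

That prints 344 GiB, which is about 5% of the 6.73T lv_home and roughly matches the gap between Used + Avail and the filesystem size in the df output above.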
To bring the reservation down to 1%, do this and post the resulting df output:
tune2fs -m 1 /dev/mapper/VolGroup-lv_home
or, to free the entire reserved space:
tune2fs -m 0 /dev/mapper/VolGroup-lv_home
| Inappropriate used and available space in lvm and disk free [duplicate] |
1,582,361,441,000 |
I've got these disk statistics shown in top command:
$ top | head | tail -n1
Disks: 1095425909/52T read, 1016012571/52T written.
That's quite a high number for only 37 days of uptime.
Are these numbers counted since boot time, or over another period? I can't find it in the documentation.
|
The top command tries to report statistics since boot time when it only reports one set; if it's set to report on a loop basis (e.g. with the -d option), the first report is since boot, the second and thereafter are for the most recent loop period only.
| Since when Disks statistics in `top` command are counted? |
1,582,361,441,000 |
I have a Clonezilla image of a 250 GB disk. Are there any caveats or downsides if I would restore this to a 500 GB disk? (I will of course not be able to use more than the 250 GB on the new disk, but this is not a problem.)
|
No, there are no caveats. It will just work. For most filesystems (including ext3/4, xfs, ntfs), clonezilla is even able to automatically expand the filesystem so that you get to use the entire 500GB.
In fact, this is one of the benefits of Clonezilla. In a previous job, I set up a workstation cloning system using clonezilla where we'd make the clonezilla images as small as possible (e.g. 10 or 20GB) in order to minimise network traffic and time required. The images were installed on new machines with 500GB or 1TB drives or larger.
| Are there any caveats when restoring a backup to a larger disk than the original disk with Clonezilla? |
1,582,361,441,000 |
I have a VM running LinuxMint 17.1. I just resized the virtual disk, then used a live CD to resize the system partitions. I now have 18GB+ free. However, when I look in Files, "File System" shows as full. I also have been getting warnings that I only have 147MB free. But there's plenty of space, it's almost as if I need to "refresh" something so it notices that I resized the partition. See below:
josh@LinuxMintVM ~ $ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/mint--vg-root 9.5G 8.9G 92M 100% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 2.0G 8.0K 2.0G 1% /dev
tmpfs 396M 1.1M 395M 1% /run
none 5.0M 0 5.0M 0% /run/lock
none 2.0G 42M 1.9G 3% /run/shm
none 100M 20K 100M 1% /run/user
/dev/sdb1 9.8G 1.6G 7.8G 17% /home
/dev/sda1 236M 83M 141M 37% /boot
shared 932G 134G 799G 15% /media/sf_shared
|
It looks like you're using LVM, so after enlarging the partition you still have to extend the physical volume, the logical volume, and the filesystem. Check the following link.
[resize on the fly]
http://www.linuxuser.co.uk/features/resize-your-disks-on-the-fly-with-lvm
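The general shape of those steps can be sketched as below. This block only prints the commands rather than running them; /dev/sda5 and the mint-vg names are assumptions inferred from your df output — verify them with sudo pvs and sudo lvs first:

```shell
# Dry-run sketch: prints (does not execute) the steps to grow an LVM root
# after the backing partition has been enlarged. Device/VG names are guesses.
for cmd in \
  "pvresize /dev/sda5" \
  "lvextend -l +100%FREE /dev/mint-vg/root" \
  "resize2fs /dev/mapper/mint--vg-root"
do
  printf 'sudo %s\n' "$cmd"
done
```

pvresize grows the physical volume into the enlarged partition, lvextend gives the root LV all remaining free extents, and resize2fs grows the ext4 filesystem online.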
| LinuxMint "File System" shows no space when many GB exist |
1,701,689,831,000 |
I have a (GPT-partitioned) disk, for example /dev/sda.
/dev/sda8 is a partition on that disk. I used the cfdisk utility to create a GPT table with few partitions in /dev/sda8. I expected these partitions to become available via something like /dev/sda8p1. But Linux did not automatically recognize them.
How do I make Linux recognize partitions in a partition, and automatize that if possible?
|
I know of nothing that will automatically scan a partition as if it were a disk, and indeed it can't even be scanned manually:
partx --add - /dev/sda8
partx: /dev/sda8: error adding partitions 1-2
However, you can use a loop device to map the partition back to a device - and this device can be scanned as if it were a disk. Example for a device /dev/sda8 containing two partitions:
losetup --show --find --partscan /dev/sda8
/dev/loop0
ls -1 /dev/loop0* # Arg is #1, not lowercase "L"
/dev/loop0
/dev/loop0p1
/dev/loop0p2
Remember to delete the loop device when you've finished
losetup -d /dev/loop0
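To automate this across reboots (the question asked about that), a hedged sketch of a oneshot systemd unit — the unit name and device path are assumptions for illustration:

```ini
# /etc/systemd/system/sda8-partitions.service (hypothetical name)
[Unit]
Description=Expose partitions inside /dev/sda8 via a partition-scanned loop device

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/losetup --find --partscan /dev/sda8
# Detach whichever loop device ended up associated with /dev/sda8
ExecStop=/bin/sh -c '/sbin/losetup -j /dev/sda8 | cut -d: -f1 | xargs -r -n1 /sbin/losetup -d'

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable --now sda8-partitions; the inner partitions then appear as /dev/loopNp1, /dev/loopNp2 after every boot.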
| How to make Linux read partition table in a partition? |
1,701,689,831,000 |
I was shrinking an ext4 partition, however it was taking too long so I thought I'd just start from scratch (making a new partition and wiping the data). However after I cancelled the resize (which was done with gparted), I'm not able to use previously usable commands on this drive:
sudo parted /dev/sdc
echo $?
1
parted just terminates, gparted doesn't show the drive either.
lsblk shows a 0 size drive
lsblk | grep sdc
sdc 8:32 0 0B 0 disk
fdisk
sudo fdisk /dev/sdc
Welcome to fdisk (util-linux 2.37.4).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
fdisk: cannot open /dev/sdc: No such file or directory
Have I maybe damaged the drive itself doing this operation? According to https://www.scosales.com/ta/kb/104521.html it might be a hardware issue.
If this error appears on the same device, no matter what its
position is, then most likely there is some problem with the device
hardware. Find out if there are any other frequent errors on this
device. If there are, then most likely, it is a failing device.
dmesg logs when plugging in the drive (it's via a NVME -> usb external interface):
[Sun Jul 9 12:14:48 2023] usb 1-4: new high-speed USB device number 8 using xhci_hcd
[Sun Jul 9 12:14:48 2023] usb 1-4: New USB device found, idVendor=0bda, idProduct=9210, bcdDevice=20.01
[Sun Jul 9 12:14:48 2023] usb 1-4: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[Sun Jul 9 12:14:48 2023] usb 1-4: Product: Best USB Device
[Sun Jul 9 12:14:48 2023] usb 1-4: Manufacturer: ULT-Best
[Sun Jul 9 12:14:48 2023] usb 1-4: SerialNumber: 012938001753
[Sun Jul 9 12:14:48 2023] usb-storage 1-4:1.0: USB Mass Storage device detected
[Sun Jul 9 12:14:48 2023] scsi host6: usb-storage 1-4:1.0
[Sun Jul 9 12:14:49 2023] scsi 6:0:0:0: Direct-Access Realtek RTL9210 1.00 PQ: 0 ANSI: 6
[Sun Jul 9 12:14:49 2023] sd 6:0:0:0: [sdc] Read Capacity(10) failed: Result: hostbyte=DID_OK driverbyte=DRIVER_OK
[Sun Jul 9 12:14:49 2023] sd 6:0:0:0: [sdc] Sense Key : Illegal Request [current]
[Sun Jul 9 12:14:49 2023] sd 6:0:0:0: [sdc] Add. Sense: Invalid field in cdb
[Sun Jul 9 12:14:49 2023] sd 6:0:0:0: [sdc] 0 512-byte logical blocks: (0 B/0 B)
[Sun Jul 9 12:14:49 2023] sd 6:0:0:0: [sdc] 0-byte physical blocks
[Sun Jul 9 12:14:49 2023] sd 6:0:0:0: [sdc] Write Protect is off
[Sun Jul 9 12:14:49 2023] sd 6:0:0:0: [sdc] Mode Sense: 37 00 00 08
[Sun Jul 9 12:14:49 2023] sd 6:0:0:0: [sdc] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
[Sun Jul 9 12:14:49 2023] sd 6:0:0:0: [sdc] Read Capacity(10) failed: Result: hostbyte=DID_OK driverbyte=DRIVER_OK
[Sun Jul 9 12:14:49 2023] sd 6:0:0:0: [sdc] Sense Key : Illegal Request [current]
[Sun Jul 9 12:14:49 2023] sd 6:0:0:0: [sdc] Add. Sense: Invalid field in cdb
[Sun Jul 9 12:14:49 2023] sd 6:0:0:0: [sdc] Attached SCSI disk
|
Plugged the drive directly into my motherboard's NVME slot and everything worked, so it seems it was likely the NVME adapter being the issue here.
Based on lsusb it's a Realtek Semiconductor Corp. RTL9210 M.2 NVME Adapter.
Similar issues with this device are also reported here: https://github.com/raspberrypi/linux/issues/4130
| Cancelled ext4 shrink and now I can't seem to format the drive? parted, fdisk etc won't run with the drive |
1,701,689,831,000 |
Basically, what I need could be similar to what most other people running Windows for work would want, I suppose, so let us get started on my requirements:
Somehow copy a raw disk (e.g. /dev/sda) onto a bigger medium. In my case, Windows is stored on a 1TB SATA SSD and I want to clone it into the VirtualBox VDI format and store it onto my bigger NVMe SSD medium which has about 1.5TB of free space.
After making proper adjustments to the VirtualBox VM settings, start the VM up.
Free as much of the VM's disk space as possible, e.g. by uninstalling games and anything else irrelevant to the VM's use case.
Then, shrink the VM's disk to the minimum (used space only).
|
Unmount all of the Win10 partition(s) if mounted.
Go to a directory, where you want to store the new image.
Become the superuser, or prefix the command with sudo:
VBoxManage convertfromraw /dev/sda win10.vdi --format=VDI
After it has been completed, open the user shell, and change ownership to your user, example follows:
chown vlastimil:vlastimil win10.vdi && chmod 666 win10.vdi
The following VirtualBox setup may be taken as a template, tweak it to your needs:
Note, that I have already installed VirtualBox Extension Pack and Guest additions.
At this point, I suppose you have a few hundred GB to shrink off the VM. If not, try deleting movies and other files the VM does not need.
Now, our approaches may differ. I chose to shrink the C: partition with a third-party tool (there is no affiliation between me and this tool):
Using the tool, shrink C:, possibly do other changes in order to have a large free space at the end of the disk.
Using the good old MS tool, also clean free space on C: drive (sdelete -z c:).
Shut down the VM. Close VirtualBox.
Finally, use this command to shrink the VM's disk off of all zeroed parts (no need for sudo):
VBoxManage modifymedium win10.vdi --compact
Be aware that you cannot watch the last step with e.g. ls -l; it will not reveal the shrunk size until it is 100% finished.
UPDATE 2024: I encountered a problem when exporting/importing the appliance because it still thinks the size of the disk is the original size, in my case 931.51 GiB. So you may ask how to go about it... The solution is very simple actually. Just create a new VDI with the desired size and copy the original onto the new medium like this:
VBoxManage clonemedium disk win10.vdi win10-shrunk.vdi --existing
then assign the new VDI file to the virtual machine and you are done.
| How to virtualize Win10 from Linux (into VirtualBox on Linux) and shrink it? |
1,701,689,831,000 |
Recently I was asked this question during a phone interview. I know that the iostat command can be used to check disk performance on Linux, but I am not sure how to answer it. Does 100% mean the disk is fully loaded?
Thanks.
|
You cannot determine how much I/O load is present from %util. It only represents the percentage of sample time during which at least one io was outstanding within the scheduler/driver/storage.
So with iostat 1 (a sample rate of 1 second), a %util of 75% means there was at least one I/O outstanding to storage for 750 milliseconds of the 1-second sample time.
It does not represent anything useful in terms of actual load except when you are dealing with a single physical disk. In that case (such as a direct-attached SATA disk), %util roughly represents the percentage of time the disk was working on an I/O.
With a single physical disk and its single disk head, once %util reaches 100%, no additional I/O work can be done AT THAT LOAD POINT.
The load point is characterised by the I/O size, the ratio of reads to writes, and how random or sequential the load is. Choosing a load-alarm threshold is highly dependent on both the storage technology being used AND the application.
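As a concrete illustration of that definition (not iostat's actual source, just the arithmetic it performs): %util is the busy-time delta divided by the sample interval, where busy time comes from the io_ticks counter in /proc/diskstats:

```shell
# Hedged sketch of the %util arithmetic. io_ticks counts the milliseconds
# during which the device had at least one request in flight.
io_ticks_delta=750   # ms busy during this sample (hypothetical value)
interval_ms=1000     # `iostat 1` samples once per second
echo "$(( 100 * io_ticks_delta / interval_ms ))% util"
```

This prints 75% util: the device was busy three quarters of the second, which says nothing about how many requests were queued during that busy time — hence the answer's point that the counter saturates long before a parallel device does.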
I once tested the performance of a storage product using the fio tool on the Linux side.
I could still add more workload to the disks even after utilization reached 100%.
fio --name=fiotest --filename=/xxx/fiotest --size=16Gb --rw=write --bs=1M --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
Device      r/s      w/s      rkB/s      wkB/s  rrqm/s  wrqm/s  %rrqm  %wrqm  r_await  w_await  aqu-sz  rareq-sz  wareq-sz  svctm  %util
sdf        0.00  2269.67       0.00  774805.33    0.00 9833.67   0.00  81.25     0.00    27.00   61.28      0.00    341.37   0.44  99.97
sdd        0.00  2267.33       0.00  774144.00    0.00 9835.33   0.00  81.27     0.00    25.49   57.78      0.00    341.43   0.44  99.97
sde        0.00  2270.67       0.00  775061.33    0.00 9833.33   0.00  81.24     0.00    26.11   59.30      0.00    341.34   0.44  99.97
Starting 8 processes
Jobs: 8 (f=0): [f(8)][100.0%][w=519MiB/s][w=518 IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=8): err= 0: pid=10248: Thu Oct 20 03:26:28 2022
  write: IOPS=2002, BW=2003MiB/s (2100MB/s)(117GiB/60040msec); 0 zone resets
fio --name=fiotest --filename=/xxx/fiotest --size=16Gb --rw=read --bs=1M --direct=1 --numjobs=8 --ioengine=libaio --iodepth=8 --group_reporting --runtime=60
Device        r/s    w/s       rkB/s  wkB/s   rrqm/s  wrqm/s  %rrqm  %wrqm  r_await  w_await  aqu-sz  rareq-sz  wareq-sz  svctm   %util
sdf       2271.33   0.00   775146.67   0.00  9842.00    0.00  81.25   0.00    25.91     0.00   58.85    341.27      0.00   0.44   99.63
sdd       2264.67   0.00   774464.00   0.00  9846.67    0.00  81.30   0.00    25.02     0.00   56.67    341.98      0.00   0.44   99.43
sde       2272.33   0.00   775061.33   0.00  9841.00    0.00  81.24   0.00    25.36     0.00   57.62    341.09      0.00   0.44   99.77
dm-7     36341.33   0.00  2325845.33   0.00     0.00    0.00   0.00   0.00    25.42     0.00  923.68     64.00      0.00   0.03  100.07
| what does %util mean in iostat command output? if it is 100%, can we add more workload on it? |
1,701,689,831,000 |
Yesterday I shrank a volume and installed Ubuntu on it because I wanted to dual-boot; I'm now dual-booting Windows 11 and Ubuntu 22.04 on the same drive.
I have just noticed that I might need more space for my workflow and I want to shrink it further.
I used Windows Disk Management to shrink a volume to be able to install Ubuntu.
I wanted to ask those who have done this, will this give me problems? Will Ubuntu use the new space?
Furthermore, since the free space will now sit before the OS partition, I'm a bit worried I might break something.
This is the output for parted -l on the drive in which I want to make more space:
Model: ATA WDC WDS500G2B0A (scsi)
Disk /dev/sda: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 17,4kB 16,8MB 16,8MB Microsoft reserved partition msftres
2 16,8MB 393GB 393GB ntfs Basic data partition msftdata
3 393GB 393GB 538MB fat32 EFI System Partition boot, esp
4 393GB 500GB 107GB ext4
I want to add some space from the ntfs partition to my system.
|
Since your NTFS (Windows) and ext4 (Linux) partitions don't lie back-to-back on the disk, you won't gain anything unless you first shrink the NTFS partition, move the EFI partition to sit right after its end, move the ext4 partition right after that, and then extend the ext4 partition into the free space that ends up at the end of the disk.
That's a hassle. Frankly, I'd just back up my data from Linux, remove the Linux partition, and reinstall Ubuntu using the alternative installer (see here) with LVM as the partitioning scheme.
That way, you can just add arbitrarily positioned physical volumes later on to extend your available space on your Linux, without having to worry whether the volumes lie contiguously on disk/SSD, or even on the same device.
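That later "just add another PV" workflow can be sketched as follows — a dry-run that only prints the commands, where the device, VG and LV names are pure placeholders (Ubuntu's installer defaults are shown; check yours with sudo vgs and sudo lvs):

```shell
# Dry-run sketch: prints (does not execute) how a new partition anywhere on
# any disk would later be folded into the root volume. Names are placeholders.
for cmd in \
  "pvcreate /dev/sdb1" \
  "vgextend ubuntu-vg /dev/sdb1" \
  "lvextend -r -l +100%FREE /dev/ubuntu-vg/ubuntu-lv"
do
  printf 'sudo %s\n' "$cmd"
done
```

The -r flag asks lvextend to resize the filesystem along with the logical volume, so no contiguity on disk is ever required.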
| Making more room for Ubuntu |
1,701,689,831,000 |
I recently bought Unitek Y-1096 Adapter.
Unfortunately, when I plug the SSD in through it, it doesn't work.
The SSD seems to work properly via SATA III.
It is partitioned with a single ext4 partition covering the whole disk.
Output of uname -a:
Linux fedora 5.18.13-200.fc36.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Jul 22 14:03:36 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
output of lsblk:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 1 0B 0 disk
zram0 252:0 0 8G 0 disk [SWAP]
nvme0n1 259:0 0 1.9T 0 disk
├─nvme0n1p1 259:1 0 600M 0 part /boot/efi
├─nvme0n1p2 259:2 0 1G 0 part /boot
└─nvme0n1p3 259:3 0 1.9T 0 part /home
/
In fdisk -l /dev/sda doesn't even show up.
Output of dmesg:
[ +0.002331] usbcore: registered new interface driver uas
[ +1.028746] scsi 10:0:0:0: Direct-Access Asm 225 0 PQ: 0 ANSI: 6
[ +0.000241] sd 10:0:0:0: Attached scsi generic sg0 type 0
[ +0.000338] sd 10:0:0:0: [sda] Media removed, stopped polling
[ +0.000943] sd 10:0:0:0: [sda] Attached SCSI removable disk
|
It seems that plugging it directly into the motherboard was the solution.
| SSD plugged in via adapter |
1,701,689,831,000 |
I have a custom LFS installer which contains sfdisk, and I am trying to add support for NVMe disks to it. When I create partitions with sfdisk on a normal SATA disk, things go as expected, but when I do exactly the same on an NVMe disk, the partitions are created, yet when I query the size of a partition (with the sfdisk -s /dev/nvme0n1p1 command) it outputs No such device or address while trying to determine filesystem size.
lsblk output:
NAME MAJ:MIN SIZE TYPE
nvme0n1 259:0 1.8T disk
├─nvme0n1p1 259:1 200G part
└─nvme0n1p2 259:10 1.6T part
sfdisk usage:
,200G,L
,,L
/proc/partitions
major minor #blocks name
259 0 1953514584 nvme0n1
259 2 209715200 nvme0n1p1
259 3 1743798343 nvme0n1p2
They are also listed under /dev as nvme0n1, nvme0n1p1 and nvme0n1p2.
Now if I use sfdisk -s /dev/nvme0n1p1 I get the output: 209715200 and sfdisk -s /dev/nvme0n1p2 gives: No such device or address while trying to determine filesystem size.
Now the strange thing is, if I create the partitions again, and I do sfdisk -s /dev/nvme0n1p1 this now gives: No such device or address while trying to determine filesystem size and sfdisk -s /dev/nvme0n1p2 gives 209715200.
And if I do it again over and over, it keeps alternating: one partition is usable, the other is not.
Things I tried:
Other SSD (same type), same result;
I am using a pcie adapter for the NVME disk, tried other adapter, same result;
Using the adapter in a running openSUSE installation, I can execute these commands with no issues;
Normal sata drive, no issues.
[edit] I figured out that after a reboot, without partitioning the drive again, it is possible to execute these commands. Is a reboot significant for an NVMe disk but apparently not for SATA?
I am quite out of ideas now as to what to try or what the cause of this could be; any help would be appreciated.
|
I managed to find a solution, so I am adding the answer here in case it helps others who encounter a similar problem.
I used the blockdev --rereadpt /dev/nvme0n1 command. This rereads the partition table, and now I can execute sfdisk -s /dev/nvme0n1p2 with no issues and without needing a reboot.
I am still not sure why this is not needed with normal SATA drives, so if someone knows why, feel free to leave a comment.
| Sfdisk NVME issue, No such device or address |
1,701,689,831,000 |
I have a Debian 11 SSD server which is installed with RAID 1 (mirroring).
I use the server as a web host, and for better performance (with a high volume of users) I want to use RAID 0 so data will be striped across the two drives:
2 x 1TB disks
QUESTION IS: How to convert from RAID 1 to RAID 0 without reinstalling
the server
|
In my (unfortunately deleted) post I showed the results of testing mdadm (included in Debian) RAID1, RAID0, RAID10, and a single disk. The expected RAID1 behaviour — slightly slower writes than a single disk and nearly the same read speed as RAID0 — was not observed. After a deep search on the Internet I found an explanation that this is not an error but a feature: the kernel keeps the second mirror available so it can serve two independent readers in parallel.
This is not the case on the BSD unixes (FreeBSD, NetBSD, OpenBSD), since they do not use the mdadm subsystem.
But I must agree that with SSDs, disk performance (RAID0, RAID1, or none) may not be your main problem.
You did not describe the server components: web server (Apache/Lighttpd/...), application module (PHP/Python/...), database (MySQL/PostgreSQL/...), number of CPU cores, amount of RAM, or what kind of services it provides (static pages/dynamic pages/heavy downloads/...).
To strictly answer your question: migration from RAID1 to RAID0 cannot be done online. You have to stop the server, back everything up to an external disk, and then reconfigure as RAID0. It is easier if you have an independent system disk; if not, you must reconfigure the RAID while installing a new Linux OS.
But with respect to the comments above, it is highly recommended NOT to do this. RAID1 is mainly used on servers for better data protection.
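If you decide to proceed despite that advice, the rebuild is roughly shaped as below. The sketch only prints the commands instead of running them; /dev/md0, /dev/sda1 and /dev/sdb1 are assumed names (check cat /proc/mdstat), and the real procedure destroys all data on the array — do it only after a verified backup:

```shell
# Dry-run sketch: prints (does not execute) the destructive RAID1 -> RAID0
# rebuild. md/partition names are assumptions; verify against /proc/mdstat.
for cmd in \
  "mdadm --stop /dev/md0" \
  "mdadm --zero-superblock /dev/sda1 /dev/sdb1" \
  "mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1" \
  "mkfs.ext4 /dev/md0"
do
  printf 'would run: %s\n' "$cmd"
done
```

After recreating the array you would restore the backup onto the new filesystem and update /etc/mdadm/mdadm.conf and the bootloader accordingly.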
| Debian: convert from RAID1 to RAID 0 |
1,701,689,831,000 |
I was using Windows 11 on my laptop but I wanted to install a Linux distro on the side.
Therefore, I shrank the OS disk by 30GB and installed Pop!_OS; during the installation I created EFI and root partitions.
I now want to extend the ext4 partition. I was able to shrink the original OS partition even more, but the unallocated space appears to the left in GParted and I'm unable to resize the root partition.
Below the gparted screenshots and fdisk outputs:
$ sudo fdisk -l /dev/nvme0n1
Disk /dev/nvme0n1: 476,94 GiB, 512110190592 bytes, 1000215216 sectors
Disk model: WDC PC SN520 SDAPMUW-512G-1101
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 68C5B02D-2F6D-464B-AC15-BC33272E2AD7
Device Start End Sectors Size Type
/dev/nvme0n1p1 2048 534527 532480 260M EFI System
/dev/nvme0n1p2 534528 567295 32768 16M Microsoft reserved
/dev/nvme0n1p3 567296 761978879 761411584 363,1G Microsoft basic data
/dev/nvme0n1p4 998166528 1000214527 2048000 1000M Windows recovery environment
/dev/nvme0n1p5 939532288 996937725 57405438 27,4G Microsoft basic data
/dev/nvme0n1p6 996937728 998166525 1228798 600M EFI System
Partition table entries are not in disk order.
Getting a bit out of my comfort zone now :)
How should I proceed now to expand the partition?
|
You need to do the resize from a LiveCD -- the nvme0n1p5 partition is currently mounted. Ext4 can be resized when mounted, but only to the right -- resizing it to the left actually means copying/moving the data to the start of the free space and then resizing the partition and that cannot be done with an active (mounted) partition. So boot from a LiveCD (the Pop installation CD has GParted so you can use that) and use the GParted Resize/Move option to resize the partition. Don't forget to make a backup first.
| Extend partition created on laptop with Windows 11 and Pop OS |
1,643,937,919,000 |
I have 2 disks in my PC: an SSD with my Linux system on it and an HDD with Windows and some other stuff. Now I keep getting the following error when trying to mount the HDD after authentication; it appeared overnight without any change to the system.
[authentication screen of the HDD when trying to mount it]
An error occurred while accessing 'Basic data partition', the system responded: An unspecified error has occurred: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken
Output of /proc/partitions before trying to mount it:
259 0 976762584 nvme0n1
259 1 524288 nvme0n1p1
259 2 958271734 nvme0n1p2
259 3 17961962 nvme0n1p3
11 0 1048575 sr0
7 1 4 loop1
7 0 66776 loop0
7 2 93316 loop2
7 3 56828 loop3
7 4 93256 loop4
7 5 56820 loop5
7 7 33220 loop7
7 6 168712 loop6
7 8 44308 loop8
8 0 976762584 sda
8 1 102400 sda1
8 2 131072 sda2
8 3 975613223 sda3
8 4 912384 sda4
Output of cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a device; this may
# be used with UUID= as a more robust way to name devices that works even if
# disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
UUID=DB6E-0849 /boot/efi vfat umask=0077 0 2
UUID=7b598707-9f6b-42f3-846e-71fd01752e84 / ext4 defaults,noatime 0 1
UUID=315943cb-caa9-488a-a3e9-308e6218486f swap swap defaults,noatime 0 0
Output of sudo fdisk -l /dev/sda
fdisk: cannot open /dev/sda: Input/output error
Output of sudo fdisk -l:
Disk /dev/nvme0n1: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: WDS100T3X0C-00SJG0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 29B5454C-DA51-8545-8062-20EC370C77CF
Device Start End Sectors Size Type
/dev/nvme0n1p1 4096 1052671 1048576 512M EFI System
/dev/nvme0n1p2 1052672 1917596140 1916543469 913.9G Linux filesystem
/dev/nvme0n1p3 1917596141 1953520064 35923924 17.1G Linux swap
Disk /dev/loop1: 4 KiB, 4096 bytes, 8 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/loop0: 65.21 MiB, 68378624 bytes, 133552 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/loop2: 91.13 MiB, 95555584 bytes, 186632 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/loop3: 55.5 MiB, 58191872 bytes, 113656 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/loop4: 91.07 MiB, 95494144 bytes, 186512 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/loop5: 55.49 MiB, 58183680 bytes, 113640 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/loop7: 32.44 MiB, 34017280 bytes, 66440 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/loop6: 164.76 MiB, 172761088 bytes, 337424 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/loop8: 43.27 MiB, 45371392 bytes, 88616 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sda: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: ST31000524AS
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 899D4B03-4C45-4868-A3ED-04525A4A516C
Device Start End Sectors Size Type
/dev/sda1 2048 206847 204800 100M EFI System
/dev/sda2 206848 468991 262144 128M Microsoft reserved
/dev/sda3 468992 1951695437 1951226446 930.4G Microsoft basic data
/dev/sda4 1951696896 1953521663 1824768 891M Windows recovery environment
Output sudo smartctl -a /dev/sda
A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options.
With -T permissive:
Short INQUIRY response, skip product id
=== START OF READ SMART DATA SECTION ===
SMART Health Status: OK
Current Drive Temperature: 0 C
Drive Trip Temperature: 0 C
Read defect list: asked for grown list but didn't get it
Error Counter logging not supported
Device does not support Self Test logging
Output of sudo dmesg -t --level=alert,crit,err,warn:
Expanded resource Reserved due to conflict with PCI Bus 0000:00
ata2: softreset failed (device not ready)
ata2: softreset failed (device not ready)
ata2: link is slow to respond, please be patient (ready=0)
ata2: softreset failed (device not ready)
ata2: limiting SATA link speed to 3.0 Gbps
ata2: softreset failed (device not ready)
ata2: reset failed, giving up
vboxdrv: loading out-of-tree module taints kernel.
VBoxNetAdp: Successfully started.
VBoxNetFlt: Successfully started.
acpi PNP0C14:02: duplicate WMI GUID 05901221-D566-11D1-B2F0-00A0C9062910 (first instance was on PNP0C14:01)
r8168 Copyright (C) 2021 Realtek NIC software team <[email protected]>
This program comes with ABSOLUTELY NO WARRANTY; for details, please see <http://www.gnu.org/licenses/>.
This is free software, and you are welcome to redistribute it under certain conditions; see <http://www.gnu.org/licenses/>.
[drm] dce110_link_encoder_construct: Failed to get encoder_cap_info from VBIOS with error code 4!
[drm] dce110_link_encoder_construct: Failed to get encoder_cap_info from VBIOS with error code 4!
hid-generic 0003:1532:0531.0008: No inputs registered, leaving
thermal thermal_zone0: failed to read out thermal zone (-61)
amdgpu: SRAT table not found
ACPI: \: failed to evaluate _DSM (0x1001)
ACPI: \: failed to evaluate _DSM (0x1001)
ACPI: \: failed to evaluate _DSM (0x1001)
ACPI: \: failed to evaluate _DSM (0x1001)
ACPI: \: failed to evaluate _DSM (0x1001)
ACPI: \: failed to evaluate _DSM (0x1001)
ACPI: \: failed to evaluate _DSM (0x1001)
ACPI: \: failed to evaluate _DSM (0x1001)
usb 1-5: Warning! Unlikely big volume range (=4096), cval->res is probably wrong.
usb 1-5: [11] FU [Sidetone Playback Volume] ch = 1, val = 0/4096/1
ata2: link is slow to respond, please be patient (ready=0)
kauditd_printk_skb: 51 callbacks suppressed
ata2: softreset failed (device not ready)
ata2: softreset failed (device not ready)
kauditd_printk_skb: 13 callbacks suppressed
ata2: softreset failed (device not ready)
ata2: softreset failed (device not ready)
ata2: link is slow to respond, please be patient (ready=0)
ata2: softreset failed (device not ready)
ata2: limiting SATA link speed to 3.0 Gbps
ata2: softreset failed (device not ready)
ata2: reset failed, giving up
ata2.00: disabled
program smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO
program smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO
program smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO
program smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO
program smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO
program smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO
program smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO
program smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO
program smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO
program smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO
program smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO
ntfs3: Unknown parameter 'windows_names'
blk_update_request: I/O error, dev sda, sector 468992 op 0x0:(READ) flags 0x80700 phys_seg 4 prio class 0
blk_update_request: I/O error, dev sda, sector 468992 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Buffer I/O error on dev sda3, logical block 0, async page read
blk_update_request: I/O error, dev sda, sector 468994 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Buffer I/O error on dev sda3, logical block 1, async page read
blk_update_request: I/O error, dev sda, sector 468996 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Buffer I/O error on dev sda3, logical block 2, async page read
blk_update_request: I/O error, dev sda, sector 468998 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Buffer I/O error on dev sda3, logical block 3, async page read
blk_update_request: I/O error, dev sda, sector 0 op 0x1:(WRITE) flags 0x800 phys_seg 0 prio class 0
blk_update_request: I/O error, dev sda, sector 1951695232 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
blk_update_request: I/O error, dev sda, sector 1951695232 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Buffer I/O error on dev sda3, logical block 975613120, async page read
blk_update_request: I/O error, dev sda, sector 1951695234 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Buffer I/O error on dev sda3, logical block 975613121, async page read
blk_update_request: I/O error, dev sda, sector 1951695236 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Buffer I/O error on dev sda3, logical block 975613122, async page read
Buffer I/O error on dev sda3, logical block 975613123, async page read
|
Your drive is most likely dead:
ata2: softreset failed (device not ready)
ata2: softreset failed (device not ready)
ata2: link is slow to respond, please be patient (ready=0)
ata2: softreset failed (device not ready)
ata2: limiting SATA link speed to 3.0 Gbps
ata2: softreset failed (device not ready)
ata2: reset failed, giving up
blk_update_request: I/O error, dev sda, sector 468992 op 0x0:(READ) flags 0x80700 phys_seg 4 prio class 0
blk_update_request: I/O error, dev sda, sector 468992 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Buffer I/O error on dev sda3, logical block 0, async page read
These errors, together with smartctl being unable to run, point to a dead drive.
| Unable to mount internal hdd - An error occurred while accessing 'Basic data partition' |
1,643,937,919,000 |
I am trying to use best practices to setup a mirrored zfs pool on a ubuntu 20.04 server.
My hardware is 2x 1TB NVMe drives in an external USB-C Gen2 SSD enclosure.
My issue is that both disks seem to have the same disk id!
So I am able to create the pool using sda and sdb, but it is unstable: after a reboot the pool gets lost. To demonstrate the problem, I dumped the device properties into files and ran diff.
As you can see below disk by-id are exactly matching whereas disk path are different.
Even a workaround would be welcome.
sudo udevadm info --name=/dev/sda --query=property > sda
sudo udevadm info --name=/dev/sdb --query=property > sdb
diff sda sdb
1,2c1,2
< DEVPATH=/devices/pci0000:00/0000:00:14.0/usb2/2-1/2-1:1.0/host1/target1:0:0/1:0:0:0/block/sda
< DEVNAME=/dev/sda
---
> DEVPATH=/devices/pci0000:00/0000:00:14.0/usb2/2-1/2-1:1.0/host1/target1:0:0/1:0:0:1/block/sdb
> DEVNAME=/dev/sdb
5c5
< MINOR=0
---
> MINOR=16
7c7
< USEC_INITIALIZED=1630003
---
> USEC_INITIALIZED=1626316
31,33c31,33
< ID_PATH=pci-0000:00:14.0-usb-0:1:1.0-scsi-0:0:0:**0**
< ID_PATH_TAG=pci-0000_00_14_0-usb-0_1_1_0-scsi-0_0_0_0
< ID_PART_TABLE_UUID=2decf1ce-947b-9548-bef4-0e315c078f4f
---
> ID_PATH=pci-0000:00:14.0-usb-0:1:1.0-scsi-0:0:0:**1**
> ID_PATH_TAG=pci-0000_00_14_0-usb-0_1_1_0-scsi-0_0_0_1
> ID_PART_TABLE_UUID=ace78582-634a-b340-8ac5-3db5984afc5f
35c35
< DEVLINKS=/dev/disk/by-id/scsi-35000000000000001 /dev/disk/by-id/scsi-SASMT_ASM1352R-PM_3000CCCCBBBBAAAA /dev/disk/by-path/pci-0000:00:14.0-usb-0:1:1.0-scsi-0:0:0:0 /dev/disk/by-id/wwn-0x5000000000000001
---
> DEVLINKS=/dev/disk/by-id/wwn-0x5000000000000001 /dev/disk/by-id/scsi-SASMT_ASM1352R-PM_3000CCCCBBBBAAAA /dev/disk/by-id/scsi-35000000000000001 /dev/disk/by-path/pci-0000:00:14.0-usb-0:1:1.0-scsi-0:0:0:1
|
The best way to do this is to:
Determine the manufacturer and serial numbers of your drives
Create a GPT partition table (or "scheme") on each drive
Create a ZFS data partition on each drive, and name the partition in such a way as to reflect the manufacturer (possibly abbreviated) and serial number of that drive
For instance, if you want to create a mirror of two Western Digital drives, serial number WD-WMC1S5694795 and WD-WMC1S5688675, then create identically-sized GPT partitions on each drive, and label the partition something like data-WD-WMC1S5694795 and data-WD-WMC1S5688675, respectively. Be sure to get them labeled correctly, or this time spent is useless. Fortunately, these serial numbers already incorporate a leading WD- so the manufacturer is already encoded. Including the abbreviated manufacturer in the label simply protects against the remotely possible situation that you would have two drives with identical serial numbers from different manufacturers. Unlikely to happen, so use your discretion as to whether to encode the manufacturer in the partition label.
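A sketch of scripting that naming scheme (the serials are the ones from the example above; reading a drive's serial with `lsblk -dno SERIAL` and creating the labelled partition with sgdisk are assumptions about your tooling, and the sgdisk line is only echoed here because actually running it is destructive):

```shell
# Build a "data-<serial>" label for each mirror member and show the
# sgdisk command that would create a whole-disk partition with that label.
for serial in WD-WMC1S5694795 WD-WMC1S5688675; do
  label="data-$serial"
  echo "$label"
  # Drop the echo to actually create and label the partition (destructive!);
  # "<drive>" is a placeholder for the real by-id device path:
  echo sgdisk --new=1:0:0 --change-name=1:"$label" "/dev/disk/by-id/<drive>"
done
```

sgdisk's `--new=1:0:0` uses the whole largest free block, so on identically-sized drives this yields identically-sized partitions, as required above.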
This will give you /dev entries under /dev/disk/by-partlabel/ which you can then use to construct your pool:
# zpool create tank mirror /dev/disk/by-partlabel/data-WD-WMC1S5694795 \
/dev/disk/by-partlabel/data-WD-WMC1S5688675
# zpool status tank
pool: tank
state: ONLINE
scan: scrub repaired 0B in 23h36m with 0 errors on Wed Dec 1 16:00:38 2021
config:
NAME STATE READ WRITE CKSUM
tank ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
data-WD-WMC1S5694795 ONLINE 0 0 0
data-WD-WMC1S5688675 ONLINE 0 0 0
errors: No known data errors
Now when one of the drives fails, you'll know exactly which one to replace, because the human-readable serial number appears on the outside of the drive.
| ZFS - only one single disk/by-id for two nvm devices |
1,643,937,919,000 |
I partially know the answer to this question; I "know" SATA disks use paths like /dev/sdaX while NVMe uses /dev/nvmeX. Is the path to the disk different when RAID is enabled through the BIOS?
The reason I'm asking: how can I know the path to the disk (device) I want to partition (using PXE) without turning on the machine in advance?
|
Unfortunately there are many possible paths.
Depending on how the RAID is implemented, there might be:
a legacy naming style unique to particular RAID controller series, like /dev/cciss/* for old Compaq/HP SmartArray hardware RAID controllers
several different variants managed by dmraid for various BIOS-RAID firmware/software RAID implementations
regular /dev/sd* naming style for some hardware RAID controllers and non-RAID storage controllers
/dev/nvmeXnY naming scheme for NVMe devices, where X = NVMe device number and Y = NVMe namespace number (usually always 1 unless some big enterprise NVMe setup)
no disks visible at all (!) if there is an unconfigured true hardware RAID controller
This is why configuration control is important for large deployments. Usually, you wouldn't start mass PXE deployments of a new model until you had tested that specific model with the expected configuration and worked out its quirks.
Once you gain experience with a particular vendor's hardware, you may eventually be able to make good estimates of how a previously-unknown model is likely to behave based on how that vendor normally sets things up, but without knowing anything about the hardware you'll be PXE booting, there are no universal answers to be had.
Some hardware RAID controllers might automatically set up a sensible default RAID configuration if up to two unused (or totally wiped) disks are plugged in and there is no existing RAID configuration, to ease PXE mass deployments.
Others might require confirmation on pressing a particular key at boot to set a default RAID configuration (since setting a default RAID configuration can be a destructive action if the disks aren't in fact empty). Yet some hardware RAID controllers might require running a RAID configuration tool before you can PXE boot an OS installer. If there is a scriptable version of the RAID configuration tool available, you might be able to integrate it into your PXE deployment process.
| Is path to disk (/dev/mydisk) different from SATA, SSD, NVME or RAID? |
1,643,937,919,000 |
When the disk is in use (e.g. running an fio random-write test), I remove the PCIe SSD at the same time.
Should I expect no I/O errors, since the system supports hotplug?
|
No. If you remove the device which is accessed, you'll get an I/O error. Because trying to access a nonexistent device is an I/O error.
Hotplug guarantees that the removed device itself will not be damaged during the removal, and that the system will continue running flawlessly if it doesn't depend on that particular device (e.g. you can't remove the disk where the system currently resides and stores its swap, but you can remove an auxiliary device).
| Doing fio testing and hotplug remove the SSD |
1,643,937,919,000 |
I have an M.2 SSD disk and it has 100 bad sectors (over 3 years). Is there a risk that this type of drive will eventually break? Should I do something about it, or not necessarily?
thanks
|
First things first: always have a backup. Drive deaths can be unnatural (your ceiling falls and crushes the drive).
Next, everything is perishable; it will die someday. Read the wiki article about SSDs. SSDs die more deterministically than HDDs. HDDs are weaker than SSDs in the sense of external forces: they have moving parts which may break, and the most common way they die is when the read/write head collides with the platters. But if you are lucky enough they can live for decades.
SSDs are a different kind of beast: they have a limited number of write cycles. No SSD cell returns to its original state once written; gradually all cells become unwriteable.
Here is the spec sheet for your device. It says 15000 hours as Mean Time Between Failures and a 5-year product life, but doesn't mention the write limit. Also, larger SSDs have a longer life, as they can distribute writes across more cells.
Here is the Debian wiki article on maintenance of SSD. Here is Arch linux wiki. And here is another U&L SE Question
| M2 SSD disk - bad sectors |
1,643,937,919,000 |
I have an old HDD which failed and I'm trying to recover what's possible with testdisk. The plan was to use dd to make an image and then use testdisk to recover files from the image to avoid damaging the disk even more.
I used the following command:
sudo dd if=/dev/sdc of=/mnt/BigDisk/backup.iso status=progress
Everything worked fine until the progress stopped. It didn't go down to 0MB/s it just froze.
I waited for several hours and nothing changed. Then I tried to Ctrl+C out of it, but nothing. In the end I sent it a SIGKILL (sudo kill -9 <pid>) but even this didn't work.
I also tried to run different commands like lsblk, which also hung and did not respond to any signal, including SIGKILL. In particular, I think every process that tried to read or get information on that device got frozen and "unkillable".
The last thing I tried was powering-off my PC but, even then, the black screen with the blinking white bar continued to stay there and my PC never turned off.
The following day I tried using testdisk directly on /dev/sdc. It detected the partition (ext4, there was just one) correctly and was able to read file names, but when I started copying, after a few files the same thing happened as with dd.
Is this some kind of kernel issue?
System Info:
OS: Arch Linux, Kernel: 5.13.5-arch1-1
/dev/sdc is an HDD with just one ext4 partition on a MBR partition scheme.
/mnt/BigDrive is an external drive which had one NTFS partition on it, which also got damaged and now has a similar behavior as the other disk. It was mounted using ntfs-3g.
|
Finally I managed to rescue my files.
I may have misspoken in the question, because between my first attempt and posting the question I probably had a kernel update (likely from 5.12 to 5.13). I tried again yesterday with the new kernel and a new hard drive with an ext4 partition as the destination, and it worked fine. ddrescue took something like 12 hours, but in the end it finished copying with just a few bytes of errors.
Thanks everyone for the advice.
| All processes using a device got hung and even `kill -9` does nothing |
1,643,937,919,000 |
I switched my server from an Ubuntu 18.04 32-bit to a Debian 10 64-bit system. Both machines use BIOS and cannot boot in UEFI mode.
So I got both of them an HDD for the system with an MBR partition table. For the data, I use a 4TB HDD with GPT and one ext4 file system. It works on the old machine; however, the Debian machine shows the disks (the 4TB disk from the Ubuntu system and a newly bought 4TB disk) as 2TB disks.
How do I get the whole capacity of 4TB?
Outputs
# fdisk -l
Disk /dev/sda: 2 TiB, 2199023254528 bytes, 4294967294 sectors
Disk model: WD...
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 3...
Disk /dev/sdc: 232.9 GiB, 250059350016 bytes, 488397168 sectors
Disk model: V...
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xe...
Device Boot Start End Sectors Size Id Type
/dev/sdc1 ...
Disk /dev/sdb: 2 TiB, 2199023254528 bytes, 4294967294 sectors
Disk model: S...
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
gdisk and sfdisk gives similar output.
|
I found out that my BIOS sees only 1.8 TiB too. It seems to be an old controller, so I must buy a new controller or even a new computer.
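For reference, the 2 TiB figure is the classic 32-bit LBA limit: a controller (or its firmware) that only exposes a 32-bit sector count tops out at 2^32 sectors of 512 bytes, which matches the byte count fdisk printed:

```shell
# 32-bit sector addressing: at most 2^32 sectors of 512 bytes each
echo $(( (2 ** 32) * 512 ))    # 2199023255552 bytes, i.e. 2 TiB
# fdisk reported 4294967294 sectors (2^32 - 2):
echo $(( 4294967294 * 512 ))   # 2199023254528, exactly fdisk's byte count
```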
| How do I get my 4TB HDD size (OS and BIOS shows only 2TB=1.8TiB)? |
1,643,937,919,000 |
I was trying to make a swap partition, but an error message came saying
Primary Partition Not Available
I checked the internet and found out there can't be more than 4 partitions because Linux only has room for 4 by default (for some reason). But I can see there's a sda5 in my partition table.
/dev/sda1 229474304 230518783 1044480 510M 27 Hidden NTFS WinRE
Free space 230518784 230520831 2048 1M
/dev/sda2 230520832 934482553 703961722 335.7G 7 HPFS/NTFS/exFAT
/dev/sda3 934483966 976771071 42287106 20.2G 5 Extended
└─/dev/sda5 934483968 976771071 42287104 20.2G 83 Linux
/dev/sda4 2048 20973567 20971520 10G 83 Linux
How is there more than 4 primary partitions? Is sda5 even a primary partition? Why is sda5 looking like a branch of sda3? Please point me towards the right direction.
(I just wanted to make a swap partition, since LFS recommends it. Do I even need a swap partition when I have 8GB of RAM?)
|
/dev/sda5 is a logical partition inside the extended partition /dev/sda3, hence it looks like a branch. The limit comes from the MBR partition table, not from Linux: MBR has room for only four primary partitions, and an extended partition is the workaround that lets you nest further logical partitions inside it. Your disk already has four primary entries (sda1 through sda4, with sda3 being the extended one). You can use the cfdisk command to create a new logical partition (/dev/sda6) inside the extended partition and use that for swap.
| cfdisk showing more than 4 partitions |
1,643,937,919,000 |
I'm new to Linux environment and trying to get some clarification on the Disk partitioning.
I've installed a new RHEL 8 VMware workstation, initially with a 20GB disk and later expanded with an additional 20GB. My goal is to create volume groups & logical volumes for later use. I created an extended partition via fdisk as below,
[root@localhost ~]# fdisk -l
Disk /dev/nvme0n1: 40 GiB, 42949672960 bytes, 83886080 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0e287c88
Device Boot Start End Sectors Size Id Type
/dev/nvme0n1p1 * 2048 616447 614400 300M 83 Linux
/dev/nvme0n1p2 616448 4810751 4194304 2G 82 Linux swap / Solaris
/dev/nvme0n1p3 4810752 41943039 37132288 17.7G 83 Linux
/dev/nvme0n1p4 41943040 62914559 20971520 10G 5 Extended
However, when I do lsblk it shows the size as 1K!? I tried rebooting the VM but no luck. It looks like I'm doing something wrong or misunderstood the setup.
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
nvme0n1 259:0 0 40G 0 disk
├─nvme0n1p1 259:1 0 300M 0 part /boot
├─nvme0n1p2 259:2 0 2G 0 part [SWAP]
├─nvme0n1p3 259:3 0 17.7G 0 part /
└─nvme0n1p4 259:4 0 1K 0 part
Can someone point me to how to get the 10GB under nvme0n1p4? Also, why does the disk get auto-partitioned when I add a new disk in the VM?
|
An extended partition can't be used directly; it's just a "container" for logical partitions, actually a clever hack to overcome the 4-partition limit in MBR. Your nvme0n1p4 is a 10 GiB extended partition, but lsblk shows it as 1K because that is its "real size": it takes no space on the disk except the 1 KiB of metadata, and has "10 GiB of free space" for logical partitions. If your goal is to add a new LVM setup, simply add a new logical partition using fdisk /dev/nvme0n1 (you can also use parted), and then run vgcreate <vg_name> /dev/nvme0n1p5 (the new logical partition will be nvme0n1p5) to create a new volume group on it.
| Question about Disk Partitioning |
1,643,937,919,000 |
I made an empty binary image file with the fallocate -l 500M sd.img command and then partitioned it using gdisk, and now I can see my partitions in gdisk:
Command (? for help): i
Partition number (1-2): 1
Partition GUID code: EBD0A0A2-B9E5-4433-87C0-68B6B72699C7 (Microsoft basic data)
Partition unique GUID: 8B28D50C-C5B5-470D-908D-FF212433AC50
First sector: 2048 (at 1024.0 KiB)
Last sector: 43007 (at 21.0 MiB)
Partition size: 40960 sectors (20.0 MiB)
Attribute flags: 0000000000000000
Partition name: 'Microsoft basic data'
Command (? for help): i
Partition number (1-2): 2
Partition GUID code: 69DAD710-2CE4-4E3C-B16C-21A1D49ABED3 (Linux ARM32 root (/))
Partition unique GUID: 8A6F3384-7AC2-448C-BD76-73A772E9E586
First sector: 43008 (at 21.0 MiB)
Last sector: 247807 (at 121.0 MiB)
Partition size: 204800 sectors (100.0 MiB)
Attribute flags: 0000000000000000
Partition name: 'Linux ARM32 root (/)'
As you can see, I want to format the first partition as FAT32 and the second one as ext4 for a Linux root file system.
How can I do this? I know how to format a physical drive with mkfs.fat and mkfs.ext4 but how can I do it for a disk image with 2 separate partitions?
OS: Ubuntu 20 LTS
|
To format the partitions contained in the disk image, you can first create block device files for the partitions. With the device files in place you can use mkfs as you normally would. When you're finished, then you can remove the device files.
Create and list the block device files: kpartx -av sd.img
Format each partition. Ex. mkfs.fat /dev/mapper/loop0p1
Remove the block device files: kpartx -d sd.img
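As an alternative sketch if kpartx isn't available: losetup can attach one partition at a byte offset, which is just the partition's start sector (from the gdisk output above) times the 512-byte sector size; modern losetup can also scan the partition table itself with -P. The mkfs/losetup invocations are shown as comments since they need root and a free loop device:

```shell
# Byte offsets of the two partitions inside sd.img
SECTOR_SIZE=512
P1_START=2048    # first sector of partition 1
P2_START=43008   # first sector of partition 2
echo $(( P1_START * SECTOR_SIZE ))   # 1048576
echo $(( P2_START * SECTOR_SIZE ))   # 22020096
# Then, as root, either attach one partition at its offset:
#   losetup -o 1048576 --sizelimit $((40960 * 512)) -f --show sd.img
#   mkfs.fat -F 32 /dev/loop0
# or let losetup create per-partition devices directly:
#   losetup -fP --show sd.img      # gives /dev/loopXp1 and /dev/loopXp2
#   mkfs.fat -F 32 /dev/loopXp1 && mkfs.ext4 /dev/loopXp2
```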
| how to format a partition of a diskimage? |
1,643,937,919,000 |
I followed this guide to turn my raspberry pi into my backup server. It has a 16 GB SD Card in it and Raspbian (based on Debian buster) installed. When I try to update with sudo apt upgrade, it returns the error:
Error writing to output file - write (28: No space left on device) [IP: 93.93.135.141 80]
W: Some index files failed to download. They have been ignored, or old ones used instead.
This indicates to me that the SD card is full, even though it has pretty much only the system on it.
Here is the output of sudo du -hs /*:
646G /backupdrive
9.3M /bin
52M /boot
0 /dev
3.4M /etc
780K /home
348M /lib
16K /lost+found
4.0K /media
4.0K /mnt
41M /opt
du: cannot access '/proc/5385/task/5385/fd/3': No such file or directory
du: cannot access '/proc/5385/task/5385/fdinfo/3': No such file or directory
du: cannot access '/proc/5385/fd/3': No such file or directory
du: cannot access '/proc/5385/fdinfo/3': No such file or directory
0 /proc
24K /root
6.2M /run
8.8M /sbin
4.0K /srv
0 /sys
32K /tmp
625M /usr
167M /var
Here is the output of lsblk:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 931.5G 0 disk
|-sda1 8:1 0 931.5G 0 part /backupdrive
`-sda2 8:2 0 512B 0 part
mmcblk0 179:0 0 14.9G 0 disk
|-mmcblk0p1 179:1 0 256M 0 part /boot
`-mmcblk0p2 179:2 0 14.6G 0 part /
It seems to me that the external HDD (sda) is mounted on /backupdrive, but some of the data is still stored on the normal SD card. Does anyone have an idea why this is?
**Edit:**
Output of: df /
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/root 15023184 14381088 0 100% /
|
@PhilipCouling that is exactly the thing, it uses the SD card for some reason for /backupdrive. That's why it is full. You can see that in the output of sudo du -hs /* in the question above.
If that's genuinely the case, then you should unmount /backupdrive and clear down any files left in there after unmounting. /backupdrive holds 646 GiB, so clearly something is on your big HDD, not the SD card.
You probably don't want to destroy your backup in the process, so don't delete everything in /backupdrive that's stored on the SD card without copying it across to your big hard drive first. You can mount your big backup HDD to /mnt and then use this answer to copy-merge from your SD card (still in /backupdrive) to your backup HDD (now /mnt).
When you're done, just umount /mnt and mount the HDD back to /backupdrive.
There will be an obvious follow-up question: how did this happen? It's quite likely that the backup job somehow ran while the backup HDD was unmounted.
If this happens again, and you're certain the backup drive was mounted correctly at all times, then checkout this problem, referenced in different ways:
What can cause different processes to see different mount points?
Cron and atd jobs cant see manually mounted file systems (mounted while on ssh)
This bug was fixed (see here), but since it happened once, it's worth mentioning as I hit this bug with similar symptoms to those in your question.
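A tip for diagnosing this in future: du's -x (--one-file-system) flag stops it descending into other mounts, so sudo du -xhs /* would have reported only what actually lives on the SD card, ignoring the mounted /backupdrive. A small self-contained check of the flag:

```shell
# du -xs: -s summarizes, -x stays on the starting filesystem (mounted
# subtrees such as /backupdrive would be skipped)
tmp=$(mktemp -d)
printf 'hello' > "$tmp/f"
du -xs "$tmp" | awk '{print $2}'   # prints the directory path back
rm -r "$tmp"
```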
| Raspberry PI Mounting / and /backupdrive on different drives does not work |
1,643,937,919,000 |
I'm trying to install Arch Linux on a disk which was previously partitioned. The Disklabel type is automatically set to gpt. I need to change it to dos. How can I do it?
|
I've never actually used cfdisk, but I can tell you how to do it with fdisk.
Run fdisk -l
You should get all of your storage devices.
Find the one you want to partition.
It should be something like /dev/nvme0n1 or /dev/sda, but your mileage may vary.
Once you find it, run fdisk <name of drive>, e.g. fdisk /dev/sda
You should now see something like this
Welcome to fdisk (util-linux 2.35.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help):
Typing m reveals:
o create a new empty DOS partition table
Notes:
Know that changing the label type will delete everything on the disk.
Don't forget to run w before exiting to write table to disk.
Check the Arch Wiki for more information.
| Changing label type in Arch Linux cfdisk |
1,643,937,919,000 |
I have a separate disk that I'd like to use for data.
[~] » lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 465.8G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 260M 0 part /boot
├─sda3 8:3 0 200G 0 part /
└─sda4 8:4 0 265.5G 0 part /home
sdb 8:16 0 149.1G 0 disk
└─sdb1 8:17 0 149.1G 0 part
[~] »
I want to mount it on $HOME/data, but I don't want to need root permissions for it.
Since /home/$USER is automatically mounted on boot and my user doesn't require root to use it, maybe I can use the same mount options for mounting the data disk?
|
You can add an entry for /home/.../data to /etc/fstab. You can use the same mount options as for /home.
After the volume has been mounted you have to execute chown $username: /home/$username/data once (as root).
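As a sketch, assuming the data partition is /dev/sdb1 with an ext4 filesystem and your user is alice (the UUID below is a placeholder; get the real one with blkid /dev/sdb1), the /etc/fstab entry could look like:

```
# mount the data partition at alice's data directory on every boot;
# nofail lets the system boot even if the disk is absent
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /home/alice/data  ext4  defaults,nofail  0  2
```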
| Using a data disk |
1,643,937,919,000 |
I'm running manjaro and I just set up an additional RAID 0 using mdadm and formatted it as ext4. I started using it and everything worked great. After I rebooted however, the array has disappeared. I figured it just hasn't automatically reassembled itself, but it appears to have completely disappeared:
sudo mdadm --assemble --force /dev/md0 /dev/nvme0 /dev/nvme1 /dev/nvme2
mdadm: cannot open device /dev/nvme0: Invalid argument
mdadm: /dev/nvme0 has no superblock - assembly aborted
cat /proc/mdstat
cat: /proc/mdstat: No such file or directory
cat /etc/mdstat/mdstat.conf
cat: /etc/mdstat/mdstat.conf: No such file or directory
sudo mdadm --assemble --scan --force
mdadm: No arrays found in config file or automatically
sudo mdadm --assemble --force /dev/md0 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1
mdadm: Cannot assemble mbr metadata on /dev/nvme0n1
mdadm: /dev/nvme0n1 has no superblock - assembly aborted
So it appears even the config of that Array has disappeared? And the superblocks? Now for the moment let's assume the drives didn't randomly fail during the reboot, even though that isn't impossible. I didn't store any crucial data on that array of course, but I have to understand where things went wrong. Of course, recovering the array would be great and could save me a few hours of setting things up.
Some extra information
mdadm: Cannot assemble mbr metadata on /dev/nvme0n1"
The disks should be using GPT though, as far as I remember. Is there some param I need to set for it to try using GPT?
I since found out that recreating the array without formatting will restore access to all data on it:
sudo mdadm --create --verbose --level=0 --metadata=1.2 --raid-devices=3 /dev/md/hpa /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1
but every time I reboot it disappears again and I have to recreate it.
sudo mdadm --detail --scan
ARRAY /dev/md/hpa metadata=1.2 name=Mnemosyne:hpa UUID=02f959b6:f432e82a:108e8b0d:8df6f2bf
cat /proc/mdstat
Personalities : [raid0]
md127 : active raid0 nvme2n1[2] nvme1n1[1] nvme0n1[0]
1464763392 blocks super 1.2 512k chunks
unused devices: <none>
What else could I try to analyze this issue?
|
Basically, the solution that worked for me was to format my nvme drives with a single fd00 linux raid partition, and then using that for the RAID.
gdisk /dev/nvme[0-2]n1
Command n, then press enter until prompted for a Hex code partition type. Enter fd00, press enter. Command w and confirm.
Repeat this for all drives, then proceed to create your array as before, but this time use the partitions you created instead of the block devices, for example:
mdadm --create --verbose --level=0 --metadata=1.2 --raid-devices=3 /dev/md/hpa /dev/nvme0n1p1 /dev/nvme1n1p1 /dev/nvme2n1p1
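The interactive gdisk steps can also be scripted with sgdisk (from the same gptfdisk package); this sketch only echoes the commands, because actually running them rewrites the partition tables:

```shell
# One fd00 (Linux RAID) partition spanning each drive; drop the echo
# to actually write the partition tables (destructive!).
for d in /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1; do
  echo sgdisk --new=1:0:0 --typecode=1:fd00 "$d"
done
```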
| mdadm RAID array disappered after reboot |
1,558,542,208,000 |
We have Red Hat machines, version 7.2, and each machine has the following disks:
sdb
sdc
sdd
sde
sdf
In the dmesg report we get the following:
[873080.996700] Buffer I/O error on device sdf, logical block 50575986
[873080.996702] Buffer I/O error on device sdf, logical block 50575987
[873080.996703] Buffer I/O error on device sdf, logical block 50575988
[873080.996705] Buffer I/O error on device sdf, logical block 50575989
[873080.996706] Buffer I/O error on device sdf, logical block 50575990
[873742.837309] sd 1:0:2:0: [sdf] tag#0 FAILED Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK
[873742.837324] sd 1:0:2:0: [sdf] tag#0 CDB: Write(16) 8a 00 00 00 00 00 18 24 50 88 00 00 01 18 00 00
[873742.837329] blk_update_request: I/O error, dev sdf, sector 405033096
[873742.837338] EXT4-fs warning (device sdf): ext4_end_bio:332: I/O error -5 writing to inode 160908517 (offset 860160 size 139264 starting block 50629172)
[873742.837342] Buffer I/O error on device sdf, logical block 50629137
[873742.837347] Buffer I/O error on device sdf, logical block 50629138
[873742.837350] Buffer I/O error on device sdf, logical block 50629139
I'm not sure if the disk sdf is the problem here and whether I need to replace it.
|
Yes, disk sdf is reporting errors on multiple blocks. Replace it at the earliest opportunity. You might want to also look into using smartctl to get stats on the disk failures
smartctl -a /dev/sdf
More detail at https://linux.die.net/man/8/smartctl and https://www.thomas-krenn.com/en/wiki/Analyzing_a_Faulty_Hard_Disk_using_Smartctl
| how to understand the dmesg |
1,558,542,208,000 |
I've used sudo blkid to get the UUID and mount my drive via fstab, like this in /etc/fstab:
UUID=1169dd89-29fe-436c-9aef-fa78ea7ee138 /media/hd ext4 defaults,auto,umask=000,users,rw 0 0
I also tried
PARTUUID=df63cda7-01 /media/hd ext4 defaults,auto,umask=000,users,rw 0 0
However my raspberry pi won't boot. I also formatted the system to ext4 before getting the uuid. It enters emergency mode when booting and asks me to type journalctl -xb
I now added the nofail option so it'll boot at least, so I don't have to keep editing my SD card:
PARTUUID=df63cda7-01 /media/hd ext4 defaults,auto,nofail,umask=000,users,rw 0 0
What is going wrong?
When I boot, I cannot see /dev/sda1 on df -h but it appears on sudo blkid
pi@raspberrypi:~ $ sudo file -sL /dev/sda1
/dev/sda1: Linux rev 1.0 ext4 filesystem data, UUID=1169dd89-29fe-436c-9aef-fa78ea7ee138 (extents) (64bit) (large files) (huge files)
This is dmesg-w when I reconnect the drive
[ 1156.384704] usb 1-1.2: USB disconnect, device number 5
[ 1156.399483] sd 0:0:0:0: [sda] Synchronizing SCSI cache
[ 1156.399843] sd 0:0:0:0: [sda] Synchronize Cache(10) failed: Result: hostbyte=0x01 driverbyte=0x00
[ 1165.388431] usb 1-1.2: new high-speed USB device number 6 using dwc_otg
[ 1165.600753] usb 1-1.2: New USB device found, idVendor=0bc2, idProduct=231a
[ 1165.600776] usb 1-1.2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[ 1165.600787] usb 1-1.2: Product: Expansion
[ 1165.600796] usb 1-1.2: Manufacturer: Seagate
[ 1165.600806] usb 1-1.2: SerialNumber: NA8Z9LSP
[ 1165.606573] usb-storage 1-1.2:1.0: USB Mass Storage device detected
[ 1165.629063] scsi host0: usb-storage 1-1.2:1.0
[ 1166.650056] scsi 0:0:0:0: Direct-Access Seagate Expansion 0708 PQ: 0 ANSI: 6
[ 1166.657494] sd 0:0:0:0: Attached scsi generic sg0 type 0
[ 1167.877041] sd 0:0:0:0: [sda] 1953525167 512-byte logical blocks: (1.00 TB/932 GiB)
[ 1167.877944] sd 0:0:0:0: [sda] Write Protect is off
[ 1167.877969] sd 0:0:0:0: [sda] Mode Sense: 47 00 00 08
[ 1167.889766] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 1167.936469] sda: sda1
[ 1167.944243] sd 0:0:0:0: [sda] Attached SCSI disk
[ 1168.710434] EXT4-fs (sda1): Unrecognized mount option "umask=000" or missing value
This is my lsusb:
pi@raspberrypi:~ $ lsusb
Bus 001 Device 004: ID 148f:7601 Ralink Technology, Corp. MT7601U Wireless Adapter
Bus 001 Device 006: ID 0bc2:231a Seagate RSS LLC
Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp. SMSC9512/9514 Fast Ethernet Adapter
Bus 001 Device 002: ID 0424:9514 Standard Microsystems Corp. SMC9514 Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
|
It looks like umask is not a supported mount option — and indeed, ext4 does not support it. It only applies to filesystems without native Unix permissions (such as vfat):
- https://github.com/torvalds/linux/blob/master/Documentation/filesystems/ext4/ext4.rst
- https://superuser.com/a/837270/809353
- https://superuser.com/a/637171/809353
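A minimal corrected entry (a sketch, assuming the same PARTUUID and mount point as above) could look like this; umask=000 is dropped because ext4 stores Unix permissions itself:

```
# /etc/fstab -- umask only applies to permission-less filesystems (vfat, ntfs);
# for ext4, drop it and manage access via ownership on the mount point instead
PARTUUID=df63cda7-01  /media/hd  ext4  defaults,nofail  0  2
```

To give users write access, set ownership once on the mounted filesystem, e.g. `sudo chown pi:pi /media/hd` — ext4 persists it across mounts.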
| /etc/fstab for my usb drive won't boot |
1,558,542,208,000 |
Hardware: Dell Inspiron 15 Gaming 7577, model number 7577-92774.
Question:
Why does my Linux Mint 19 not see my (Windows) NVMe drive?
I read on ArchWiki:
The Linux NVMe driver is natively included in the kernel since version 3.3. NVMe devices should show up under /dev/nvme*.
However, I have no device under /dev/nvme*.
Disk drives - Screenshot from Windows - Speccy Free:
I have rebooted into Linux now, and am ready to investigate further.
I just installed nvme-cli package and this is its output:
$ nvme list
Node SN Model Namespace Usage Format FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
|
The culprit was the RAID mode.
If you switch, with special caution, to AHCI mode, the drive will show up in Linux.
How to switch to AHCI is out of scope for this site; refer to my answer on SuperUser.
I am not sure why Windows requires that procedure, but I have read about it in multiple places, so I believe it is necessary.
Linux, by contrast, does not need any prior configuration.
Current status:
$ sudo nvme list
Node SN Model Namespace Usage Format FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1 Y7DF22FUFQCS KXG50ZNV512G NVMe TOSHIBA 512GB 1 512,11 GB / 512,11 GB 512 B + 0 B AADA4102
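As a quick sanity check after switching modes, something like this (a sketch — the exact lspci strings vary by machine) shows whether the kernel now sees the controller and the device nodes:

```shell
# Look for the NVMe controller on the PCI bus and for the device nodes.
# In RAID/RST mode the controller is hidden, so both checks come up empty.
lspci -nn 2>/dev/null | grep -Ei 'non-volatile|nvme' || echo "no NVMe controller visible on PCI"
ls /dev/nvme* 2>/dev/null || echo "no NVMe device nodes"
```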
| Why my Linux Mint 19 does not see my (Windows) NVMe drive? |
1,558,542,208,000 |
I need to clone a bootable disk to multiple disks (of different sizes) on different computers and it needs to be scriptable, but I can't find a way to do it.
I'm using Ubuntu 16.04 on everything.
First I tried dd, I ran (with the disk unmounted):
$ dd if=/dev/sda bs=1K count=10000000 status=progress | gzip -c > os.img
That's about 10GB; the compressed file is about 3.8GB. The source disk is 120GB and the destination disk I'm testing on is 16GB, so I'm sure it will fit on all sizes. I wrote it back to disk with:
$ gunzip -c os.img | dd bs=1K of=/dev/sda status=progress
But it doesn't boot, I get:
Kernel panic - not syncing: VFS: Unable to mount root fs on unknown block(0,0)
I have no idea what that means; I googled it but didn't find a solution.
Then I tried to mount the disk on a live OS to see if at least that worked, but I get:
$ sudo mount /dev/sda1 /mnt
EXT4-fs (sda1): bad geometry: block count 29306624 exceeds size of device (14653670 blocks)
which doesn't make sense to me. Anyone know if I can fix this or if there's a better method to do it? I see people recommending clonezilla but I can't find a scriptable version, it looks like I can only use it with "terminal GUI".
|
The problem is that the size of the source disk is larger than (some of the) destination disk(s). Which means the partition table doesn't work, because it's made for a disk of different size.
In your place, I'd write a script that uses fdisk etc. to first delete all partitions on the disk, then create a partition of fixed size (identical to the size of the partition your image comes from; you may have to create such a partition on the source), and then mark it bootable. This ensures the partition table is correct for a disk of that size. Finally, copy the partition (e.g. /dev/sda1) instead of the whole disk.
BTW, using both gzip/gunzip and dd doesn't make sense (unless you like the progress display) - all dd does is to make sure the writes are of some particular size. You could just use
gzip -c /dev/sda1 > os.img
gunzip -c os.img > /dev/sdb1
etc. And if you want to display progress, there's also pv.
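Put together, a scripted deployment might look roughly like this. It is a sketch, not a tested recipe: /dev/sdX, os.img and the 10 GiB partition size are placeholders, and the boot-loader step assumes GRUB.

```
#!/bin/sh
# Deploy a fixed-size partition image onto a target disk of any size >= 10 GiB.
set -eu
TARGET=/dev/sdX        # placeholder: the destination disk
IMAGE=os.img           # gzip-compressed image of the source PARTITION

# 1. Write a fresh MBR table with one bootable ~10 GiB Linux partition,
#    so the table always matches this disk rather than the 120 GB source.
printf 'label: dos\nstart=2048, size=20971520, type=83, bootable\n' | sfdisk "$TARGET"

# 2. Restore the filesystem image into the new partition (not the whole disk).
gunzip -c "$IMAGE" | dd of="${TARGET}1" bs=1M

# 3. Reinstall the boot loader, since the MBR boot code was not copied.
mount "${TARGET}1" /mnt
grub-install --boot-directory=/mnt/boot "$TARGET"
umount /mnt
```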
| Scriptable way to clone source disk to multiple destination disks? |
1,558,542,208,000 |
lsblk can list a partition, but vgs do not list:
The sda disk was added by me; why do vgs not list the VolGroup on sda2?
How can I mount the sda2 to the sdb's /mnt?
|
That output looks pretty normal, so I think you might just be misunderstanding what it is saying...
Your disk sdb is split into 3 partitions and none of them is using LVM, but they have filesystems directly on them and they're already mounted on your machine. /dev/sdb1 mounted as /home, /dev/sdb2 is the root filesystem and /dev/sdb3 mounted under /web.
Your disk sda has two partitions. The first partition is 500MB and is currently not mounted. (I imagine it was formatted for /boot, but in any case, it's not currently mounted.)
The second partition, /dev/sda2, is an LVM physical volume in a volume group called VolGroup. That's what vgs and pvs are listing!
You can't mount /dev/sda2 directly, since it's a volume group, so you need to mount the logical volumes from there...
The output of lsblk also gives you a hint of what the volumes are, they are called lv_root (which I imagine is a root filesystem), lv_home (for the home directories) and lv_swap (for the swap partition.)
So looks like sda has another installation of Linux, which is not the one you are currently using.
If you want to mount the root of that installation, you can try a command such as:
# mount /dev/VolGroup/lv_root /mnt
If you also want to mount the home volume, follow that with:
# mount /dev/VolGroup/lv_home /mnt/home
And assuming sda1 is indeed meant for the boot partition:
# mount /dev/sda1 /mnt/boot
I hope that helps you understand what is going on with your second disk!
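One caveat (an assumption — the volume group may already be active on your machine): if /dev/VolGroup/ does not exist yet, the volume group from the foreign installation needs to be activated first:

```
# Scan for and activate the volume group so its LVs get device nodes
sudo vgscan
sudo vgchange -ay VolGroup
sudo lvs VolGroup    # should now list lv_root, lv_home, lv_swap
```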
| `lsblk` can list a partition, but `vgs` do not list the VolGroup |
1,558,542,208,000 |
Wanted to create a list of all the benefits of always running from a live CD (with or without persistence/as long as personal data is saved on encrypted drive locally)?
|
Using a live ISO without persistence means the main filesystem is read-only, so nothing can be changed or "broken" permanently. It also means all changes (new files & data) are in ram and lost on reboot/shutdown.
Saving your personal data manually could lead to better backup habits...
If you have enough ram to use the toram option, all file reads & writes will be at ram speeds, maybe 2GB-5GB per second, much faster than a regular cd/dvd/hard drive or ssd.
The "best encryption" part of the question really is too broad, but just use the defaults the big distros use: GPG, LUKS, eCryptfs
| what are the benefits of always running from live cd for data protection? [closed] |
1,558,542,208,000 |
I am using an Ubuntu 16.04 LTS. I ran into an unusual problem with my disk usage. Some of my applications were aborted with the message on the terminal stating "not enough disk space available".
The following is the output of
df -hT
Filesystem Type Size Used Avail Use% Mounted on
udev devtmpfs 5.7G 0 5.7G 0% /dev
tmpfs tmpfs 1.2G 9.6M 1.2G 1% /run
/dev/nvme0n1p7 ext4 69G 66G 40M 100% /
tmpfs tmpfs 5.8G 102M 5.7G 2% /dev/shm
tmpfs tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs tmpfs 5.8G 0 5.8G 0% /sys/fs/cgroup
/dev/nvme0n1p1 vfat 256M 32M 225M 13% /boot/efi
tmpfs tmpfs 1.2G 84K 1.2G 1% /run/user/1000
My ext4 partition seems to be used up 100% and I find that it is mounted on '/'. I don't know if this is unusual. Before typing the df -hT command, I checked gparted and found that ext4 was mounted on /var/lib/docker/aufs. So hastily I uninstalled docker (since I wasn't using it anyways) and now it shows as '/'.
Also, while trying to find out what is consuming the space, I found that /tmp consumes 15G. But I am not sure how to free that. Any help regarding this is appreciated. Thanks.
|
It is not only normal for a filesystem to be mounted as /,
it is mandatory.
It is common for the root filesystem to be ext4.
To free the space used in /tmp:
cd /tmp.
ls -la.
Look at the files and see whether any of them are important
(they shouldn’t be),
and try to figure out if they are being used by running processes.
rm -r *, or rm everything except the ones you don’t want to remove.
You may need to use sudo to get all the files,
but, if so, try to figure out why.
Are there files there that are owned by other people?
If possible, you might want to reboot before doing the above.
This might just clear out /tmp all by itself.
And, even if it doesn’t,
it should clear out any processes that might be using files in /tmp.
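To see where the space went before deleting anything, du can rank the offenders. Here it is scoped to /tmp; run it against / with sudo to survey the whole root filesystem:

```shell
# Largest entries directly under /tmp, smallest to largest; -x stays on
# this filesystem so other mounts are not counted.
du -xh --max-depth=1 /tmp 2>/dev/null | sort -h | tail -n 10
```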
| ext4 mounted on / and tmp consuming disk space |
1,558,542,208,000 |
What are the commands to discover LUN disks in Linux and ESXi?
|
You are looking to rescan the SCSI bus. If you search the Internet for your distro plus "rescan scsi bus", you can find guides on how to do that:
Redhat based systems
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Online_Storage_Reconfiguration_Guide/rescan-scsi-bus.html
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html-single/Online_Storage_Reconfiguration_Guide/index.html#scanning-storage-interconnects
Debian/Ubuntu
http://www.debuntu.org/how-to-rescan-a-scsi-bus-without-rebooting/
VmWare ESX
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1003988
Also found another question/answer on SE
How to rescan scsi bus devices?
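On most modern Linux systems the guides above boil down to poking sysfs; a sketch (run as root — the host numbers on your system are an assumption):

```
# Rescan every SCSI host for new LUNs; the three "-" wildcards mean
# channel, target and LUN respectively.
for host in /sys/class/scsi_host/host*; do
    echo "- - -" > "$host/scan"
done
# Newly discovered disks then appear in lsblk and dmesg.
```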
| What are the commands to discover lun disks [closed] |
1,413,880,624,000 |
I am getting the errors below. What do they mean?
Oct 19 01:02:25 amra02 scsi: [ID 107833 kern.warning] WARNING: /pci@7c,0/pci10de,377@f/pci1077,142@0/fp@0,0/disk@w5006016841e00513,0 (sd6):
Oct 19 01:02:25 amra02 drive offline
Oct 19 01:02:25 amra02 scsi: [ID 107833 kern.warning] WARNING: /pci@7c,0/pci10de,377@f/pci1077,142@0/fp@0,0/disk@w5006016841e00513,0 (sd6):
Oct 19 01:02:25 amra02 drive offline
Oct 19 01:02:25 amra02 scsi: [ID 107833 kern.warning] WARNING: /pci@7c,0/pci10de,377@f/pci1077,142@0/fp@0,0/disk@w5006016841e00513,0 (sd6):
Oct 19 01:02:25 amra02 drive offline
Oct 19 01:03:07 amra02 scsi: [ID 107833 kern.warning] WARNING: /pci@7c,0/pci10de,377@f/pci1077,142@0/fp@0,0/disk@w5006016041e00513,0 (sd5):
Oct 17 19:53:19 amra02 scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/disk@g6006016060702d0051a04cd7773de411 (sd23):
Oct 17 19:53:19 amra02 Command failed to complete...Device is gone
Oct 17 19:53:19 amra02 scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/disk@g6006016060702d0051a04cd7773de411 (sd23):
Oct 17 19:53:19 amra02 Command failed to complete...Device is gone
Oct 17 19:53:19 amra02 scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/disk@g6006016060702d0051a04cd7773de411 (sd23):
Oct 17 19:53:19 amra02 Command failed to complete...Device is gone
Oct 17 20:03:19 amra02 scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/disk@g6006016060702d0051a04cd7773de411 (sd23):
Oct 17 20:03:19 amra02 Command failed to complete...Device is gone
|
You lost connectivity (i.e. access to your disk).
5006016841e00513
is a CLARiiON controller, and
6006016060702d0051a04cd7773de411
is a LUN (or disk) on your SAN.
vendor CLARIION
extension 060702d0051a04cd7773de411
CI:LDEV (16) e4:11 (10) 228 - 17
You should either check your hardware or, if the hardware is OK, the zoning configuration on the SAN switch.
| getting system level error in solaris system |
1,413,880,624,000 |
I would expect cat /dev/sdf >/dev/null to give approximately the same amount of data each second if the disk does nothing else.
But on this USB disk I see:
Device rkB/s wkB/s %util
sdf 628.39 0.00 2.12
sdf 29696.00 0.00 100.40
sdf 21368.00 0.00 72.40
sdf 0.00 0.00 0.00
sdf 19208.00 0.00 65.20
sdf 29184.00 0.00 99.60
sdf 13952.00 0.00 47.20
sdf 0.00 0.00 0.00
sdf 27264.00 0.00 92.80
sdf 29312.00 0.00 99.60
sdf 6016.00 0.00 20.00
sdf 5112.00 0.00 16.80
sdf 29824.00 0.00 99.20
sdf 27272.00 0.00 92.80
sdf 0.00 0.00 0.00
sdf 13560.00 0.00 46.00
sdf 29192.00 0.00 99.60
sdf 19456.00 0.00 66.40
sdf 0.00 0.00 0.00
sdf 21888.00 0.00 74.40
sdf 29568.00 0.00 99.60
sdf 11008.00 0.00 36.80
sdf 760.00 0.00 2.80
sdf 29448.00 0.00 99.60
sdf 29816.00 0.00 99.20
sdf 2432.00 0.00 8.40
sdf 8072.00 0.00 28.80
sdf 30208.00 0.00 100.40
sdf 24459.41 0.00 81.98
sdf 0.00 0.00 0.00
sdf 16768.00 0.00 56.40
sdf 29440.00 0.00 98.80
sdf 17536.00 0.00 58.40
If I move the USB disk to another system I see the same behaviour. When it pauses it make the sound as if it is seeking ("drrrrr") followed by a short break and another ("drrrrr").
Why? And how can I make it stop?
|
How is the external drive connected and which type of disk is it?
I have several 2.5 inch external USB hard disk drives which click if there isn't enough power available from the USB port they are connected to. Some operations work, while others fail – and the drive usually starts to make click sounds. Maybe the long S.M.A.R.T. self test triggers something that makes it actually use less power (like delayed heads motor action)?
But it's all a big "maybe" really.
I once had (a very, very long time ago) two very similar SCSI drives from the same vendor. One of them always worked fine. The other failed reproducibly after a fixed period of idling (no I/O for some time, just spinning). The symptom was that I/O completely stalled: every read or write would fail, and only a restart resolved it. My personal "fix" was a simple shell script that would create a temporary file, put some random data in it and delete it again, run by a cron job every 15 minutes. This solved the problem for me (the drive wouldn't idle long enough to stall completely), but it really only worked around the symptom, because to this day I don't know where the original fault came from. I found no differences between the two drives (other than both being slightly different models from the same series). Using hdparm (I think it was actually sdparm for SCSI drives) they were identical as far as I could tell. Even switching the connectors (one of them had the SCSI terminator attached) and the SCSI IDs didn't change anything.
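The keep-alive workaround described above can be sketched like this (MOUNTPOINT is a placeholder; the 15-minute interval comes from a crontab entry such as `*/15 * * * * /usr/local/bin/keepalive.sh`):

```shell
#!/bin/sh
# Touch the drive periodically so it never idles long enough to stall:
# write a little random data, flush it to the disk, and clean up.
MOUNTPOINT="${MOUNTPOINT:-/tmp}"   # placeholder: the problem drive's mount point
f="$MOUNTPOINT/.keepalive.$$"
dd if=/dev/urandom of="$f" bs=1024 count=4 2>/dev/null
sync
rm -f "$f"
```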
Long story short: I don't know. You may also consider the possibility that the drive's controller is simply malfunctioning. S.M.A.R.T. doesn't always reveal faulty hardware.
| USB disk performance 50% of expected |
1,413,880,624,000 |
We want to build a new bash script that will join one of the new disks to a volume group (VG).
Let's say we have a Linux machine and we add 10 new disks.
The target is to manually "label" one of the disks; then, by running the bash script, it will select the "labeled" disk (by lsblk or a similar command) and join this disk to the VG.
|
It's not possible to "label" a disk without actually writing something to it, because you need to store the label somewhere, which usually means a filesystem header. You can add tags to PVs, but this requires creating the LVM PV format on the disk first. If you need to identify a specific drive, you need to use information the drive itself provides, which usually means either the WWN/WWID or the serial number. lsblk can print both of these:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT SERIAL WWN
sda 8:0 0 931,5G 0 disk S3Z9NB0KB83128X 0x5002538e40aa0206
and these are also used to create symlinks in /dev/disk/by-id
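So a script could key off the stable identifiers instead of a manual label; a sketch (the WWN and VG name are placeholders):

```
# Pick the disk by its stable identifier rather than sdX, which can change
# between boots, then join it to the volume group:
DISK=/dev/disk/by-id/wwn-0x5002538e40aa0206   # placeholder WWN
pvcreate "$DISK"
vgextend my_vg "$DISK"
```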
| linux + is it possible to identify specific new disk in order to join the new disk to VG |
1,413,880,624,000 |
Our goal is to use huge storage (200TB) for a Kafka machine.
In order to achieve this goal, we create a volume using LVM over 4 physical disks, each 50T.
Note: each disk (sdb/sdc/sdd/sde) is actually a RAID10 array holding 12 disks.
In our kafka server we have 4 disks ( example from lsblk )
sdb 8:16 0 50T 0 disk
sdc 8:32 0 50T 0 disk
sdd 8:48 0 50T 0 disk
sde 8:64 0 50T 0 disk
So we wrote the following procedure:
We combine the disks /dev/sdb, /dev/sdc, /dev/sdd and /dev/sde by adding them to the same volume group, DB_vg:
vgcreate DB_vg /dev/sdb /dev/sdc /dev/sdd /dev/sde
We create a logical volume (using a size of 200T here):
lvcreate -L200T -n DB_lv DB_vg
We create the XFS filesystem
mkfs.xfs -f /dev/DB_vg/DB_lv
We create a folder and mount the LV on it:
mkdir /var/lib/kafka_broker
mount /dev/DB_vg/DB_lv /var/lib/kafka_broker
From df -h, it should look like this (example):
/dev/mapper/DB_vg-DB_lv 200T 61M 200T 1% /var/lib/kafka_broker
from lsblk
sdb
└─DB_vg-DB_lv 253:4
sdc
└─DB_vg-DB_lv 253:4
sdd
└─DB_vg-DB_lv 253:4
sde
└─DB_vg-DB_lv 253:4
from blkid
blkid | grep LVM2_member
/dev/sda2: UUID="Y5MbyB-C5NN-hcPA-wd9R-jmdI-02ML-W9qIiu" TYPE="LVM2_member" <-- sda2 is the OS
/dev/sdc: UUID="mtg8CY-b3j9-K1qZ-CRTV-SZum-gDx3-lbm4ZX" TYPE="LVM2_member"
/dev/sdb: UUID="MvfzqO-PV8N-dror-psl4-PfUf-7coa-6cb8lQ" TYPE="LVM2_member"
/dev/sdd: UUID="C4n63l-Uk3E-D65G-WcgI-xic2-cJLi-eSTUAa" TYPE="LVM2_member"
/dev/sde: UUID="d743Xp-eDxr-Dygk-HGgy-Dh9c-K3cx-kHtyqo" TYPE="LVM2_member"
So now we can use the LV /dev/mapper/DB_vg-DB_lv that mount to /var/lib/kafka_broker folder
and we can use the folder /var/lib/kafka_broker as storage with 200T for kafka broker machine
My questions are:
Is it safe to use huge storage created by LVM over 4 huge disks?
Can we create an XFS filesystem when the storage is around 200T?
Is my procedure right, or am I missing something?
|
As with any storage, this is as safe as your backups.
XFS is supported up to 500TB, so this is fine.
As far as I can tell, nothing is missing (vgcreate and vgextend can initialize PVs, so you don’t need to run pvcreate).
| Linux + is it correct to create volume using LVM over physical disks when disks size are huge |
1,413,880,624,000 |
I thought that a file is a reserved place on disk holding a collection of records,
but I found that this definition is wrong, because an empty file's size is 0 KB,
so a file can't simply be a reserved place on disk:
mostafa@jamareh:~$ cd Desktop/
mostafa@jamareh:~/Desktop$ touch test
mostafa@jamareh:~/Desktop$ ls -l test
-rw-rw-r-- 1 mostafa mostafa 0 Feb 28 16:55 test
As you can see in this example, the OS shows us there is a file and its size is 0 KB. So I want to know: why is the size 0 KB, and where is the test file?
What is a file at the OS level?
I actually want to know the definition of a file at the operating-system level.
|
A file is an identifier for accessing data. The amount of data can be zero and it can usually change over time as can the data itself. To the file belongs some meta data (specific to the file system type and configuration being used) like access rights and timestamps for different types of accesses.
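You can see this distinction directly: an empty file occupies no data, yet it has an inode carrying its metadata. A quick demonstration with GNU stat:

```shell
# Create an empty file and inspect its metadata: the size is zero, but the
# inode, owner, mode and timestamps all exist.
touch /tmp/demo_empty_file
stat -c 'size=%s bytes, blocks=%b, inode=%i' /tmp/demo_empty_file
rm /tmp/demo_empty_file
```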
| What is a file? [closed] |
1,413,880,624,000 |
When I already have a LUKS block device opened using cryptsetup luksOpen, does invoking the command on the same machine with the same arguments including the device name for the second time just do nothing or is doing so unsafe?
|
If you try to open the device with the same name, cryptsetup will simply tell you that the mapped device already exists. If you try a different name, the call will fail because the device is in use:
$ sudo cryptsetup luksOpen /dev/sdc1 a
Device a already exists.
$ sudo cryptsetup luksOpen /dev/sdc1 b
Enter passphrase for /dev/sdc1:
Cannot use device /dev/sdc1 which is in use (already mapped or mounted).
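If you want to make this idempotent in a script, you can check for the mapping before opening (a sketch; "a" is the mapping name from the example above):

```
# Only open the device if the mapping does not exist yet
if ! sudo cryptsetup status a >/dev/null 2>&1; then
    sudo cryptsetup luksOpen /dev/sdc1 a
fi
```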
| Is it safe to run cryptsetup luksOpen twice? |
1,413,880,624,000 |
I've got a 500gb and a 31gb disks in my laptop. 31gb contains Ubuntu but I need access to the 500gb (I don't know its label or volume). How do I do it?
|
Use the lsblk command.
Here is bash syntax for an alias that prints out the information I find most useful for knowing what is where regarding disks:
alias lsblk2='lsblk -o type,name,label,partlabel,size,fstype,model,serial,wwn,uuid'
Run lsblk --help to find all the things you can give it on the -o.
In your case it is not relevant, but if you have a RAID card you'll just see the RAID volume show up as the block device, not the specific information of each disk making up the RAID; for that you would need to use smartctl on the RAID block device.
| How do I view all the disks? [duplicate] |