1,507,025,938,000
Does the MBR of the disk contain this information, and therefore when I call a command like fdisk, does kernel-level code eventually run and read it from a specific part of the MBR? If so, which part? At what offset? If it's not in the MBR, then how can these types of commands find it? They can't be reading it from the beginning of a partition, considering they need to calculate the starting address of that partition, and they need the sector size to do so, don't they? How are commands like fdisk implemented to find this information? Where do they read it from?
A device’s sector size isn’t stored in the MBR. User-space commands such as fdisk use the BLKBSZGET and BLKSSZGET ioctls to retrieve the sector sizes from disks. Those ioctls are handled by drivers in the kernel, which retrieve the relevant information from the drives themselves. (There isn’t much documentation about the relevant ioctls; you need to check the kernel source code.) You can see the relevant information using other tools which query drives directly, for example hdparm. On a small SSD, hdparm -I tells me

    [...]
    Logical  Sector size:           512 bytes
    Physical Sector size:           512 bytes
    Logical Sector-0 offset:          0 bytes
    [...]
    cache/buffer size  = unknown
    Form Factor: 2.5 inch
    Nominal Media Rotation Rate: Solid State Device
    [...]

On a large spinning disk with 4K sectors, I get instead

    [...]
    Logical  Sector size:           512 bytes
    Physical Sector size:          4096 bytes
    Logical Sector-0 offset:          0 bytes
    [...]
    cache/buffer size  = unknown
    Form Factor: 3.5 inch
    Nominal Media Rotation Rate: 5400
    [...]
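For illustration, the same ioctl query can be made from Python; a minimal sketch, assuming Linux (the request numbers are the _IO(0x12, …) values from <linux/fs.h>; the helper name is mine):

```python
import fcntl
import struct

# ioctl request numbers from <linux/fs.h>
BLKSSZGET = 0x1268    # logical sector size, _IO(0x12, 104)
BLKPBSZGET = 0x127b   # physical sector size, _IO(0x12, 123)

def sector_sizes(device):
    """Return (logical, physical) sector sizes of a block device, in bytes."""
    buf = bytearray(4)
    with open(device, "rb") as f:
        fcntl.ioctl(f, BLKSSZGET, buf)
        logical = struct.unpack("I", bytes(buf))[0]
        fcntl.ioctl(f, BLKPBSZGET, buf)
        physical = struct.unpack("I", bytes(buf))[0]
    return logical, physical
```

Run as root against e.g. /dev/sda, this should report the same values hdparm -I shows.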
How do commands like fdisk -l find the sector size?
I was just introduced to multipathing in our production environment and had never heard of the concept prior. After some digging I think I'm starting to get a handle on how the concept works in theory, but I'm having some trouble extrapolating that to what I'm seeing on the box I'm working on. From multipath -ll I get output like:

    mpath0 (36000d3100088060000000000000000b9) dm-0 COMPELNT,Compellent Vol
    size=60G features='1 queue_if_no_path' hwhandler='0' wp=rw
    `-+- policy='round-robin 0' prio=1 status=active
      |- 0:0:0:0 sda 8:0   active ready running
      |- 0:0:1:0 sdd 8:48  active ready running
      |- 1:0:0:0 sdi 8:128 active ready running
      `- 1:0:1:0 sdl 8:176 active ready running

From fdisk -l I know that those are all 60GB disks, with the same partition setup:

    Disk /dev/sda: 64.4 GB, 64424509440 bytes
    255 heads, 63 sectors/track, 7832 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes

       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *           1          13      104391   83  Linux
    /dev/sda2              14        7832    62806117+  8e  Linux LVM

What is confusing to me though is how the partitions are actually mounted on the server:

    $ df -h
    Filesystem                       Size  Used Avail Use% Mounted on
    /dev/mapper/VolGroup00-LogVol00   30G   26G  3.8G  87% /
    /dev/mapper/mpath0p1              99M   49M   46M  52% /boot
    tmpfs                             16G  232M   16G   2% /dev/shm
    /dev/mapper/mpath2p1             493G  226G  242G  49% /u02

Just considering /boot for now: it is mounted to mpath0p1, I can see that much. But how does this correspond to the physical disk/LVM behind the multipath?
Your multipathed device is just an abstraction of multiple paths to one disk. So the corresponding relationship you are asking about is that the mpathN device is the same as the underlying device at the far end of whatever fabric you have. As you saw, you can view the partition table on the mpath device and its constituent members and see the same layout.

Some folks see a similarity between the concepts of multipath and RAID1. They are not related, but I've found it a useful comparison. The underlying devices of a multipath device are not duplicate copies as in RAID1. They are just parallel attachments to the same, typically remote, disk/LUN.

Regarding your question about how the partitions are mounted: they are mounted as they would be without multipath (assuming devices aren't hardcoded in fstab and lvm.conf). So you have mpath0p1 mounted at /boot. In your case -- if these devices were not managed by multipathd -- this is the same as mounting /dev/sda1 at /boot (and in your example, sdi1, sdd1, or sdl1 could be substituted for sda1). The difference is that if the fibre (or whatever) connection that presents sda1 is disconnected, your disk will still be accessible, via the multipath driver, through sdd, sdi and sdl.

In this case, you have the first partition of the remote disk mpath0 mounted at /boot, and the first partition of disk mpath2 at /u02. The second partition in your fdisk output of sda is marked as an LVM physical partition. Presumably this contains the volume group VolGroup00 and in turn the logical volume LogVol00, which is mounted at /.
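You can also read the correspondence straight out of sysfs: every device-mapper node lists its member devices under /sys/block/<dm>/slaves. A small Python sketch (the helper name is mine; the sysfs parameter exists only to make the function easy to test):

```python
import os

def multipath_slaves(dm_name, sysfs="/sys/block"):
    """List the underlying devices (e.g. sda, sdd, ...) of a dm node like 'dm-0'."""
    return sorted(os.listdir(os.path.join(sysfs, dm_name, "slaves")))
```

On the box in the question, multipath_slaves('dm-0') should list sda, sdd, sdi and sdl.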
Understanding multipath and mountpoints
I need to be able to read data sequentially from a file while not storing the data that is being read in the page cache as the file contents are not expected to ever be read again and also because there is memory pressure on the box (want to use the precious memory for useful disk I/O caching). The question I have is about how I can optimize these reads. Since I know that the data that is being read is sequentially placed on the disk (minus the fragmentation), I want to be able to read ahead (by increasing /sys/block/sda/queue/read_ahead_kb) but am not sure if this will lead to any benefit because I have to prevent the data that is being read from being stored in the page cache by using posix_fadvise (with the POSIX_FADV_DONTNEED flag). Will the read ahead data be simply discarded because of the hint to drop the data from the page cache?
Use direct I/O:

    Direct I/O is a feature of the file system whereby file reads and writes go directly from the application to the storage device, bypassing the operating system read and write caches. Direct I/O is used only by applications (such as databases) that manage their own caches.

An application invokes direct I/O by opening a file with the O_DIRECT flag. For example:

    int fd = open( filename, O_RDONLY | O_DIRECT );

Direct I/O on Linux is quirky and has some restrictions. The application I/O buffer must be page-aligned, and some file systems require that each I/O request be an exact multiple of the page size. That last restriction can make reading/writing the last portion of a file difficult.

An easy-to-code way to handle readahead in your application is to use fdopen and set a large page-aligned buffer using posix_memalign and setvbuf:

    /* should really get the page size using sysconf(),
       but beware of systems with multiple page sizes */
    #define ALIGNMENT ( 4UL * 1024UL )
    #define BUFSIZE ( 1024UL * 1024UL )

    char *buffer;
    ...
    int fd = open( filename, O_RDONLY | O_DIRECT );
    FILE *file = fdopen( fd, "rb" );
    int rc = posix_memalign( ( void ** ) &buffer, ALIGNMENT, BUFSIZE );
    rc = setvbuf( file, buffer, _IOFBF, BUFSIZE );

You can also use mmap() to get anonymous memory to use for the buffer. That has the advantage of being naturally page-aligned:

    ...
    char *buffer = mmap( NULL, BUFSIZE, PROT_READ | PROT_WRITE,
                         MAP_ANONYMOUS | MAP_PRIVATE, -1, 0 );
    rc = setvbuf( file, buffer, _IOFBF, BUFSIZE );

Then just use fread()/fgets() or any FILE *-type read function you want to read from the file stream. You do need to check, using a tool such as strace, that the actual read system calls are done with a page-aligned and page-sized buffer -- some C library implementations of FILE *-based stream processing don't use the buffer specified by setvbuf for just I/O buffering, so the alignment and size can be off.
I don't think Linux/glibc does that, but if you don't check and the size and/or alignment is off, your I/O calls will fail. And again: Linux direct I/O can be quirky. Only some file systems support direct I/O, and some of them are more particular than others. TEST this thoroughly if you decide to use it.

The posted code will do a 1 MB read-ahead whenever the stream's buffer needs to be filled. You can also implement more sophisticated read-ahead using threads: one thread fills one buffer while other thread(s) read from a full buffer. That would avoid processing "stutters" as the read-ahead is done, but at the cost of a good amount of relatively complex multi-threaded code.
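For comparison, the questioner's original plan (sequential reads followed by POSIX_FADV_DONTNEED) takes only a few lines in Python, where os.posix_fadvise wraps the same libc call; the helper name is mine:

```python
import os

CHUNK = 1 << 20  # 1 MiB

def read_dropping_cache(path, consume):
    """Read a file sequentially, hinting the kernel to drop each chunk
    from the page cache once it has been consumed."""
    fd = os.open(path, os.O_RDONLY)
    try:
        offset = 0
        while True:
            data = os.read(fd, CHUNK)
            if not data:
                break
            consume(data)
            # advise the kernel this byte range will not be needed again
            os.posix_fadvise(fd, offset, len(data), os.POSIX_FADV_DONTNEED)
            offset += len(data)
    finally:
        os.close(fd)
```

Unlike O_DIRECT, this has no alignment restrictions, but the data does pass through the page cache briefly before being dropped.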
Optimizing read I/O with read ahead while avoiding storing data in page cache
We run smartctl on the sdb disk:

    smartctl -a /dev/sdb
    smartctl 6.2 2013-07-26 r3841 [x86_64-linux-3.10.0-327.el7.x86_64] (local build)
    Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

    Smartctl open device: /dev/sdb failed: DELL or MegaRaid controller, please try adding '-d megaraid,N'

According to the output from smartctl, we change it to:

    smartctl -a -d megaraid,0 /dev/sdb
    smartctl 6.2 2013-07-26 r3841 [x86_64-linux-3.10.0-327.el7.x86_64] (local build)
    Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

    === START OF INFORMATION SECTION ===
    Vendor:               TOSHIBA
    Product:              MG04SCA20ENY
    .
    .

I set the 0 according to the first bus (from smartctl --scan):

    smartctl --scan
    /dev/sda -d scsi # /dev/sda, SCSI device
    /dev/sdb -d scsi # /dev/sdb, SCSI device
    /dev/bus/0 -d megaraid,0 # /dev/bus/0 [megaraid_disk_00], SCSI device
    /dev/bus/0 -d megaraid,12 # /dev/bus/0 [megaraid_disk_12], SCSI device
    /dev/bus/0 -d megaraid,13 # /dev/bus/0 [megaraid_disk_13], SCSI device
    /dev/bus/0 -d megaraid,14 # /dev/bus/0 [megaraid_disk_14], SCSI device
    /dev/bus/0 -d megaraid,16 # /dev/bus/0 [megaraid_disk_16], SCSI device

But I am not sure if this value "0" is the right value. Am I right here?
Yes, you can use 0, or 12, or 13, or 14, or 16 for N. If your scan output isn't complete, possibly even more numbers. You already tried with 0 and it worked, so try the others, too.
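If you want to script this, the valid N values can be pulled straight out of the smartctl --scan output; a small sketch (the function name is mine):

```python
import re

def megaraid_ids(scan_output):
    """Extract the N values from `smartctl --scan` lines of the form
    '/dev/bus/0 -d megaraid,N # ...'."""
    return [int(n) for n in re.findall(r"-d megaraid,(\d+)", scan_output)]
```

Feeding it the scan output from the question yields [0, 12, 13, 14, 16], i.e. every N worth querying.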
smartctl megaraid,N ( how to find the right value for N ? )
I added a new disk (/dev/vdb) of 2TB with existing data from the previous 1TB disk. I used fdisk /dev/vdb to extend its only partition /dev/vdb1 to the full capacity of 2TB from the previous 1TB. (In other words, I deleted vdb1, and then re-created it to fill the disk. See How to Resize a Partition using fdisk - Red Hat Customer Portal). And then I did:

    [root - /]$ fsck -n /dev/vdb1
    fsck from util-linux 2.23.2
    e2fsck 1.42.9 (28-Dec-2013)
    /dev/vdb1: clean, 46859496/65536000 files, 249032462/262143744 blocks

    [root - /]$ e2fsck -f /dev/vdb1
    e2fsck 1.42.9 (28-Dec-2013)
    Pass 1: Checking inodes, blocks, and sizes
    Pass 2: Checking directory structure
    Pass 3: Checking directory connectivity
    Pass 4: Checking reference counts
    Pass 5: Checking group summary information
    /dev/vdb1: 46859496/65536000 files (0.4% non-contiguous), 249032462/262143744 blocks

    [root - ~]$ resize2fs /dev/vdb1
    resize2fs 1.42.9 (28-Dec-2013)
    The filesystem is already 262143744 blocks long.  Nothing to do!

And fdisk -l looks like this:

    Disk /dev/vdb: 2147.5 GB, 2147483648000 bytes, 4194304000 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk label type: dos
    Disk identifier: 0x4eb4fbf8

       Device Boot      Start         End      Blocks   Id  System
    /dev/vdb1            2048  4194303999  2097150976   83  Linux

However when I mount it:

    mount /dev/vdb1 /mnt

this is what I get from df -h:

    /dev/vdb1       985G  935G     0 100% /mnt

which is still the size of the previous partition. What am I doing wrong here?

UPDATE

I ran partprobe and it told me to reboot:

    Error: Error informing the kernel about modifications to partition /dev/vdb1 -- Device or resource busy.  This means Linux won't know about any changes you made to /dev/vdb1 until you reboot -- so you shouldn't mount it or use it in any way before rebooting.
    Error: Failed to add partition 1 (Device or resource busy)

So I rebooted and then ran this again:

    mount /dev/vdb1 /mnt

But the added file system is still:

    /dev/vdb1       985G  935G     0 100% /mnt

Any ideas? Should I do all the fsck, e2fsck, and resize2fs once again? This is really weird. After the reboot, I ran partprobe again and it still gave this error:

    Error: Error informing the kernel about modifications to partition /dev/vdb1 -- Device or resource busy.  This means Linux won't know about any changes you made to /dev/vdb1 until you reboot -- so you shouldn't mount it or use it in any way before rebooting.
    Error: Failed to add partition 1 (Device or resource busy)

Why is the device or resource busy, even after I rebooted?
    I used fdisk /dev/vdb to extend its only partition /dev/vdb1 to full capacity of 2TB from previous 1TB... See How to Resize a Partition using fdisk - Red Hat Customer Portal. And then I did [resize2fs /dev/vdb1]...

We can see this did not change the size of your filesystem. Here is why: resize2fs reads the size of the partition from the kernel, similar to reading the size of any other file. fdisk tries to update the kernel after it has written the partition table; however, this fails if the disk is in use, e.g. if you have mounted one of its partitions. This is why resize2fs showed the "Nothing to do!" message: it did not see the extra partition space.

The kernel reads the partition table during startup. So you can simply restart the computer. Then you can run resize2fs; it will see the extra partition space and expand the filesystem to fit.

I believe fdisk logs a prominent warning when this happens, as screen-shotted in this (otherwise outdated) document. There is a less friendly but actually up-to-date document on the Red Hat Customer Portal: How to use a new partition in RHEL6 without reboot? From it:

    partprobe was commonly used in RHEL 5 to inform the OS of partition table changes on the disk. In RHEL 6, it will only trigger the OS to update the partitions on a disk if none of its partitions are in use (e.g. mounted). If any partition on a disk is in use, partprobe will not trigger the OS to update partitions in the system because it is considered unsafe in some situations.

    So in general we would suggest:

    Unmount all the partitions of the disk before modifying the partition table on the disk, and then run partprobe to update the partitions in the system.

    If this is not possible (e.g. the mounted partition is a system partition), reboot the system after modifying the partition table. The partition information will be re-read after reboot.
    If a new partition was added and none of the existing partitions were modified, consider using the partx command to update the system partition table. Do note that the partx command does not do much checking between the new and the existing partition table in the system and assumes the user knows what they are doing. So it can corrupt the data on disk if the existing partitions are modified or the partition table is not set correctly. Use at your own risk.
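The mismatch is already visible in the numbers the question shows, assuming the usual 4 KiB ext4 block size: the filesystem (counted in 4 KiB blocks by e2fsck/resize2fs) is still about 1 TB, while the partition (counted in 1 KiB blocks by fdisk) is about 2 TB:

```python
FS_BLOCKS = 262143744     # from e2fsck/resize2fs output, 4 KiB ext4 blocks
PART_BLOCKS = 2097150976  # from fdisk -l, 1 KiB blocks

fs_bytes = FS_BLOCKS * 4096
part_bytes = PART_BLOCKS * 1024
# filesystem ~1.07 TB vs partition ~2.15 TB
print(fs_bytes, part_bytes)
```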
resize2fs fails to resize partition to full capacity?
If I put a USB drive in, it will automount. I can see it with lsblk:

    NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    sdb      8:16   1  7,5G  0 disk
    └─sdb1   8:17   1  7,5G  0 part /media/user/usb-drive

If I unmount it with umount:

    umount /media/user/sdb1

it will still be visible with lsblk, but not mounted any more:

    NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    sdb      8:16   1  7,5G  0 disk
    └─sdb1   8:17   1  7,5G  0 part

But if I instead eject it by clicking the eject icon in Thunar (the Xfce file manager), it will disappear from the list in lsblk. Why is that so?
Mounting just means "set up the operating system to actively use some (part of a) block device". Often there is a "busy" or "dirty" flag in the superblock that gets changed when a file system is mounted, but otherwise the hardware is unaffected.

OTOH, eject sends a SCSI START STOP command to the device, with the "eject" option set. The USB controller in a flash stick usually reacts by powering down the device and preventing any further interaction. That means it disappears completely from the USB subsystem, and must be re-enumerated to be accessible again.

The same command, when sent e.g. to a CD/DVD drive, will eject the disc, and the corresponding "load" option of the START STOP command will load it again. But this interpretation only applies to devices with removable media.

BTW, you can also send this SCSI command from the command line using eject from the package of the same name, or with sg_start from the package sg3-utils.
Why is usb-drive not visible with `lsblk` after having been ejected from Thunar?
Background Info:

- Copying some .bin files to an SD card (to be read by an embedded device; no filesystem)
- Commissioning the card requires some segments to be wiped (i.e. zeroed), and others to have binary files copied to them
- Calling dd from a Python script using the subprocess module (the dd operations involved are triggered by a sort of configuration script that needs to be parsed and validated first; I also make the user confirm the operation, as they might otherwise wipe out an important disk that is mistaken for the SD card)

Problem:

Writes to the SD card are slow with bs=512. For large spans, bs=8M is much faster. Is it possible to somehow seek with bs=512 (seek={n_small_blocks}) and then change to bs=8M for the actual write (once I've seeked to the correct position)?

I found the following resource: http://www.delorie.com/gnu/docs/textutils/coreutils_65.html But it's not clear to me why 2 invocations are required, and how they're working together to accomplish what the guide claims they will.

UPDATE

Found the answer here: https://superuser.com/questions/380717/how-to-output-file-from-the-specified-offset-but-not-dd-bs-1-skip-n See my full solution below.
Solution:

    dd if='input_file.bin' \
       of='/dev/sd{X}' \
       bs={desired write block size} \
       seek={start offset in bytes} \
       count={write size in bytes} \
       oflag=seek_bytes \
       iflag=count_bytes

From the man page:

    count_bytes
        treat 'count=N' as a byte count (iflag only)
    ...
    seek_bytes
        treat 'seek=N' as a byte count (oflag only)

This does seem to slow down the transfer a bit, but at least it puts it in MB/s instead of kB/s. Also, be sure to check the man page on your system, as it seems the ones available on the web (i.e. from googling 'man dd') don't include these options.
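Since the dd operations are already driven from Python via subprocess, it is worth noting the same effect (byte-granular seek, large transfer blocks) takes only a few lines of plain Python; a hedged sketch (names are mine, untested against real SD hardware):

```python
def write_at(device, image, offset, chunk=8 << 20):
    """Copy `image` into `device` starting at byte `offset`, writing in
    large chunks (the effect of dd's oflag=seek_bytes plus bs=8M)."""
    with open(image, "rb") as src, open(device, "r+b") as dst:
        dst.seek(offset)       # byte-granular, no block-size coupling
        while True:
            buf = src.read(chunk)
            if not buf:
                break
            dst.write(buf)
```

The key point it illustrates: seeking and transfer size are independent once you leave dd, whereas dd's bs= historically controls both.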
dd, seek with one block size, write with another block size
I made the mistake of encrypting the entire LVM physical volume (containing home, root, and swap) when installing a CentOS 6.4 (2.6.32-358.6.1.el6.x86_64) box. I soon came to realize that moving files takes a horrendous amount of time due to kcryptd running at 90% of CPU, and that encryption was not really necessary as it's just a home server containing no crucial data. However, I have already configured it and installed loads of packages, tuned it as far as power management goes, and set up all the services. Is there any way to remove the encryption without having to re-install the whole thing and go through the configuration all over again? I'd love an option that would take less than 30 mins, but I'm not sure one exists. Also, if anyone has any recommendations on how to make kcryptd easier to live with, let me know.

Edit 1

    ~]# fdisk -l /dev/sda

    Disk /dev/sda: 160.0 GB, 160041885696 bytes
    255 heads, 63 sectors/track, 19457 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000078c9

       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *           1          64      512000   83  Linux
    Partition 1 does not end on cylinder boundary.
    /dev/sda2              64       19458   155777024   83  Linux

    ~]# dmsetup ls
    vg_centos-lv_home    (253:3)
    vg_centos-lv_swap    (253:2)
    vg_centos-lv_root    (253:1)
    luks-2ffcc00c-6d6e-401c-a32c-9c82995ad372    (253:0)

    ~]# pvdisplay
      --- Physical volume ---
      PV Name               /dev/mapper/luks-2ffcc00c-6d6e-401c-a32c-9c82995ad372
      VG Name               vg_centos
      PV Size               148.56 GiB / not usable 4.00 MiB
      Allocatable           yes (but full)
      PE Size               4.00 MiB
      Total PE              38030
      Free PE               0
      Allocated PE          38030
      PV UUID               euUB66-TP3M-ffKp-WhF5-vKI5-obqK-0qKoyZ

Edit 2

    ~]# df -h / /home /boot
    Filesystem                     Size  Used Avail Use% Mounted on
    /dev/mapper/vg_centos-lv_root   50G  2.3G   45G   5% /
    /dev/mapper/vg_centos-lv_home   94G  1.3G   88G   2% /home
    /dev/sda1                      485M   53M  408M  12% /boot
That is possible. It requires:

- another Linux to boot (CD/DVD is OK)
- some spare space outside the PV (100M would be good)
- a certain amount of fearlessness...

Then you copy a block from the encrypted volume to the area outside the PV and (after success) to the unencrypted base device. After that you increase a counter in the safe area so that you can continue the transformation in case of a crash. Depending on the kind of encryption it may be necessary (or at least useful) to copy from the end of the block device to the beginning. If this is an option for you then I can offer some code.

Edit 1

Deactivate the swap partition (comment it out in /etc/fstab). Then boot another Linux (from CD/DVD) and open the LUKS volume (cryptsetup luksOpen /dev/sda2 lukspv) but don't mount the LVs. Maybe you need to run pvscan afterwards so that the decrypted device is recognized. Then vgchange -ay vg_centos may be necessary to activate the volumes. As soon as they are active you can reduce the file systems in them:

    e2fsck -f /dev/mapper/vg_centos-lv_root
    resize2fs -p /dev/mapper/vg_centos-lv_root 3000M
    e2fsck -f /dev/mapper/vg_centos-lv_home
    resize2fs -p /dev/mapper/vg_centos-lv_home 2000M

After that you can reduce the size of the LVs (and delete the swap LV):

    # with some panic reserve... shouldn't be necessary
    lvresize --size 3100M /dev/mapper/vg_centos-lv_root
    lvresize --size 2100M /dev/mapper/vg_centos-lv_home
    lvremove /dev/mapper/vg_centos-lv_swap
    # vgdisplay should show now that most of the VG is free space
    vgdisplay

Now the PV can be reduced (exciting, I have never done this myself ;-) ):

    vgchange -an vg_centos
    pvresize --setphysicalvolumesize 5500M /dev/mapper/lukspv

Edit: Maybe pvmove is needed before pvresize can be called. In case of an error see this question.

Before you reduce the partition size you should make a backup of the partition table and store it on external storage:

    sfdisk -d /dev/sda >sfdisk_dump_sda.txt

You can use this file for reducing the size of the LUKS partition.
Adapt the size (in sectors) to about 6 GiB (panic reserve again...): 12582912. Then load the adapted file:

    sfdisk /dev/sda <sfdisk_dump_sda.mod.txt

If everything looks good after rebooting you can create a new partition in the free space (at best not consuming all the space, you probably know why by now...) and make it an LVM partition. Then make the partition an LVM PV (pvcreate), create a new volume group (vgcreate) and logical volumes for root, home and swap (lvcreate), and format them (mke2fs -t ext4, mkswap). Then you can copy the contents of the opened crypto volumes. Finally you have to reconfigure your boot loader so that it uses the new rootfs. The block copying I mentioned in the beginning is not necessary due to the large amount of free space.
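The 12582912 figure above is simply 6 GiB expressed in 512-byte sectors, which is easy to check:

```python
GIB = 1 << 30   # bytes per GiB
SECTOR = 512    # bytes per sector, matching the sfdisk dump

sectors = 6 * GIB // SECTOR
print(sectors)  # 12582912
```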
What's the easiest way to decrypt a disk partition?
On most Linux systems udev usually creates symlinks in /dev/disk/by-uuid/ and /dev/disk/by-path/ which point to actual device nodes (/dev/sda, /dev/sdb, etc). I don't have udev on my system, and I would like to generate these symlinks manually. I know I can use blkid to generate the by-uuid name. But how can I generate the by-path name for a given disk (e.g. /dev/sda1) without using udev? Specifically, I am looking for a way to find which of my disks is the disk connected via iSCSI from host 10.1.14.22. It could be sdb, or sdc, or perhaps another, since I am connected to several (different) iSCSI hosts at the same time.
This script should do the trick, at least for most typical scenarios. It relies on blkid, lsscsi and sed:

    #!/bin/bash

    mkdir -p /dev/disk/by-{path,uuid}

    for dev in `blkid -o device | grep -v block`; do
        ln -s "$dev" "/dev/disk/by-uuid/$(blkid -o value -s UUID "$dev")"
    done

    lsscsi -v | sed 'N;s/\n//' |
    sed 's/.*\(\/dev\/\w\+\).*\(pci\)[0-9]\{4\}[^/]\+\/[^/]\+\/\([0-9:.]\+\)[^ ]*\/\([0-9:]\+\)[]].*/\1 \2-\3-scsi-\4/' |
    sed 's/.*\(\/dev\/\w\+\).*\(pci\)[^/]*\/\([0-9:.]\+\)\/ata[^ ]*\/\([0-9:]\+\)[]].*/\1 \2-\3-ata-\4/' |
    while read dev pci; do
        pp="/dev/disk/by-path/$pci"
        ln -s "$dev" "$pp"
        for part in "${dev}"[0-9]*; do
            [ -e "$part" ] && ln -s "$part" "$pp-part${part/$dev/}"
        done
    done
Command to generate /dev/disk/-by-path/ name on a system without udev daemon
I want to capture only the disks from lsblk. As shown here, fd0 also appears, despite not really being a disk for use. In this case we can just do:

    lsblk | grep disk | grep -v fd0

but maybe we miss some other devices that need to be filtered out with grep -v. What other devices could appear from lsblk | grep disk that are not really disks?

    lsblk | grep disk
    fd0      2:0    1    4K  0 disk
    sda      8:0    0  100G  0 disk
    sdb      8:16   0    2G  0 disk /Kol
    sdc      8:32   0    2G  0 disk
    sdd      8:48   0    2G  0 disk
    sde      8:64   0    2G  0 disk
    sdf      8:80   0    2G  0 disk

    lsblk
    NAME             MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    fd0                2:0    1     4K  0 disk
    sda                8:0    0   150G  0 disk
    ├─sda1             8:1    0   500M  0 part /boot
    └─sda2             8:2    0 149.5G  0 part
      ├─vg00-yv_root 253:0    0  19.6G  0 lvm  /
      ├─vg00-yv_swap 253:1    0  15.6G  0 lvm  [SWAP]
      └─vg00-yv_var  253:2    0   100G  0 lvm  /var
    sdb                8:16   0     2G  0 disk /Kol
    sdc                8:32   0     2G  0 disk
    sdd                8:48   0     2G  0 disk
    sde                8:64   0     2G  0 disk
    sdf                8:80   0     2G  0 disk
    sr0               11:0    1  1024M  0 rom
If you want only disks identified as SCSI by the device major number 8, without device partitions, you could match on the device major rather than the string "disk":

    lsblk -d | awk '/ 8:/'

where the -d (or --no-deps) option tells lsblk not to include device partitions. On reasonably recent Linux systems, the simpler

    lsblk -I 8 -d

should suffice, as noted by user Nick.
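If the filtering ends up in a script, the same rule (TYPE is "disk" and major number 8) is easy to apply to captured lsblk output; a Python sketch with an invented helper name:

```python
def scsi_disks(lsblk_output):
    """From plain `lsblk` output, keep whole-disk rows whose device major
    number is 8 (the sd* SCSI-disk range), filtering out e.g. fd0
    (major 2, floppy) and sr0 (major 11, optical)."""
    disks = []
    for line in lsblk_output.splitlines():
        fields = line.split()
        if len(fields) < 6 or fields[5] != "disk":
            continue  # header, partitions, LVM, rom, ...
        major = int(fields[1].split(":")[0])
        if major == 8:
            disks.append(fields[0])
    return disks
```

Fed the question's output, this keeps sda through sdf and drops fd0 and sr0.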
lsblk + capture only the disks
I have the following Debian system that runs in VirtualBox:

    $ uname -a
    Linux debian 3.2.0-4-amd64 #1 SMP Debian 3.2.51-1 x86_64 GNU/Linux

Recently I noticed that I don't have any free space:

    $ df
    Filesystem                                               1K-blocks    Used Available Use% Mounted on
    rootfs                                                     1922060 1921964         0 100% /
    udev                                                         10240       0     10240   0% /dev
    tmpfs                                                       206128     296    205832   1% /run
    /dev/disk/by-uuid/ef55765f-dae5-426f-82c4-0d98265c5f21     1922060 1921964         0 100% /
    tmpfs                                                         5120       0      5120   0% /run/lock
    tmpfs                                                       511980       0    511980   0% /run/shm
    /dev/sda3                                                  5841936  163548   5381636   3% /home
    tmpfs                                                       511980      12    511968   1% /tmp

What is /dev/disk/by-uuid/ef55765f-dae5-426f-82c4-0d98265c5f21? Why does it use all the free space on the disk?
That device has the same blocks, used and free space as your rootfs filesystem, so they are probably the same. You can check where the UUID points with:

    ls -l /dev/disk/by-uuid/ef55765f-dae5-426f-82c4-0d98265c5f21

My guess is that you just booted from a live filesystem on a CD-ROM image.
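The check can also be done programmatically by resolving the symlink; a trivial sketch (the helper name is mine; the root parameter exists only so the function is easy to test):

```python
import os

def uuid_target(uuid, root="/dev/disk/by-uuid"):
    """Resolve a /dev/disk/by-uuid entry to the real device node."""
    return os.path.realpath(os.path.join(root, uuid))
```

If the result is the same device that df reports as rootfs, the two df lines are indeed one filesystem listed twice.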
What is the /dev/disk/by-uuid/ and why does it use all free space on the disk
I installed Red Hat 6.4 on a KVM server. Right now there is only one disk, /dev/vda. Now I need to add an extra disk like /dev/vdb. I tried adding a new volume but it is not showing when I do fdisk -l. How do I add extra volumes to existing running VMs?
You can use the virsh option mentioned above (probably faster, in fact) or you can use the "Add Hardware" option in virt-manager to either add new space or assign existing space. Simply open the VM, go to "Details" (top left), and select "Add Hardware" (bottom left). Storage is the default type of hardware, so it should already be selected. FWIW, since it's a new disk, if the guest is Linux, you probably want to add it as VirtIO instead of IDE. VirtIO has better performance, but non-Linux platforms need special drivers installed to be able to use VirtIO drives. The GUI is pretty self-explanatory. Since you've already created the .img file, you probably want to select the "managed or existing storage" radio button and go browsing for it. After that, it should be visible to the guest.
How to Add Extra Disks on KVM based VM
I currently have a 600GB disk with Ubuntu installed, all 600GB of which is given to the Ubuntu OS:

    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sda1       592G   16G  547G   3% /
    udev            1.9G  4.0K  1.9G   1% /dev
    tmpfs           777M  944K  776M   1% /run
    none            5.0M     0  5.0M   0% /run/lock
    none            1.9G  3.8M  1.9G   1% /run/shm

Is it safe to unmount /dev/sda1 to shrink it to, say, 300GB? Will I simply be able to remount it afterwards, or is it going to break everything and just die? Using GParted? If not, then how does it work in Windows Disk Manager, where I am able to resize mounted disks?
If you are using /dev/sda1 as your current system root, you will be unable to unmount it, and doing so would prevent you from running parted from it anyway. resize2fs is able to enlarge ext3/4 filesystems while mounted on newer kernels, but not to shrink them. Your best bet is probably to use the GParted live CD, or the gparted included with System Rescue CD. These let you boot Linux from a CD and then resize your hard drive's partition without mounting it. If this is not an option, you will need a separate Linux installation on another partition or device that you can boot for resizing, or you will have to go through the long, painful process of backing up, re-creating the partition from scratch, and restoring the backup.
Is it safe to resize partition on /?
I would like to reduce the size of an ext4 partition on my disk, and I would like to know if it is possible that my files become corrupted during the operation. I learned that the ext4 file system uses large extents for each file, so is it possible that a file located at the end of the partition becomes corrupted/deleted during the process?
Yes, it is safe. As long as the process is not interrupted by, e.g., power loss, your data will be fine. This is what resize2fs is made for. It will move data around so nothing is lost, and it will warn you if you attempt something potentially harmful. I have used resize2fs numerous times for offline shrinking and never experienced any problems (except human error).
Is it safe to resize partition in ext4?
When unlocking a newly-formatted LUKS volume, I received a warning in the kernel log:

    kernel: device-mapper: table: 253:14: adding target device sdk1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=33553920

According to another question, a false warning is possible, so I confirmed it's a true warning: 33553920 is not divisible by 4096. I further used luksDump to confirm:

    cryptsetup luksDump /dev/sdk1 | grep 'Payload offset'
    Payload offset: 65535

which is not a multiple of 8 (4096 ÷ 512 = 8). lsblk -t /dev/sdk confirms Linux is aware of the alignment requirements:

    NAME   ALIGNMENT MIN-IO   OPT-IO PHY-SEC LOG-SEC ROTA SCHED RQ-SIZE  RA WSAME
    sdk            0   4096 33553920    4096     512    1 cfq       128 128   32M
    └─sdk1         0   4096 33553920    4096     512    1 cfq       128 128   32M

dmsetup is documented to handle alignment itself, so why did it create a misalignment? And are there arguments to luksFormat to avoid the problem?
It appears that dmsetup computes its alignment from the optimal I/O size, without bothering to check that it is actually a multiple of the physical block size. As mentioned in the false-warning question, this optimal I/O size makes sense due to USB constraints.

So the solution is simple: use --align-payload to override the detected value. A value of 8 should work (and produce the smallest possible header); the default when cryptsetup can't tell is documented as 2048. So I went with the default:

    cryptsetup luksFormat /dev/sdk1 --align-payload 2048 --verify-passphrase --hash sha512 -s 512

After that, the payload offset is now 4096 (from luksDump), and a kernel warning is still produced:

    kernel: device-mapper: table: 253:14: adding target device sdk1 caused an alignment inconsistency: physical_block_size=4096, logical_block_size=512, alignment_offset=0, start=2097152

... but 2097152 is divisible by 4096, so that's the false warning mentioned in the other question. The problem is resolved.
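The arithmetic behind both warnings, with all values taken from the question (payload offsets in 512-byte sectors):

```python
SECTOR = 512

# before: payload offset 65535 sectors (from luksDump)
start_before = 65535 * SECTOR
print(start_before, start_before % 4096)  # 33553920, remainder 3584 -> misaligned

# after --align-payload 2048: payload offset 4096 sectors
start_after = 4096 * SECTOR
print(start_after, start_after % 4096)    # 2097152, remainder 0 -> aligned
```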
dmsetup luksFormat creating an alignment inconsistency
1,462,218,184,000
I use badblocks to test my 32GB class-10 microSD card that I use to boot my RPi. I already have a functioning file system on it, so I don't want to scan it with the -w option (destructive read-write test). I have two options: I could use the default read-only test, or I could use a non-destructive read-write test (which is done by backing up the sector, testing it destructively, and then restoring the sector's original content). What should I consider when I choose the test type? I would like it to be as fast as possible, but I also need accurate results.
The read-only test only reads. That's basically the default testing method for just about everything, and pretty much the same as what disks do for SMART self-tests. The non-destructive read-write test works by overwriting data, then reading to verify, and then writing the original data back afterwards. The only way to verify that writing data works is by actually writing data; no read-only test will ever do that for you. People who only do read tests (the majority, simply because write tests take at least twice as long) simply take it on good faith that when reading works, writing (and being able to read the data that was written later) will probably work too. However, "non-destructive" is relative... after all, the very write itself might destroy data (on a medium with limited write cycles), and once a block is broken there is no way to write the original data back either. So even though the test is meant to be non-destructive, if your hardware is faulty it might still lose you some additional data. Therefore you shouldn't use badblocks if there is data on a medium you hope to recover. Especially not if you already know it's going bad... if you don't have a backup already, just do the ddrescue directly. That also happens to be a read-only test, and the logfile will tell you where the error zones are...
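The backup/overwrite/verify/restore cycle is simple to illustrate on an ordinary file. This is a sketch of the principle only (badblocks itself works block-by-block on the device with its own test patterns; the file names here are made up):

```shell
tmp=$(mktemp -d)
printf 'original data' > "$tmp/block"           # stand-in for one disk block

dd if="$tmp/block" of="$tmp/backup" status=none # 1. back the block up
printf 'TESTPATTERN13' > "$tmp/block"           # 2. overwrite with a test pattern
readback=$(cat "$tmp/block")                    # 3. read back to verify the write worked
dd if="$tmp/backup" of="$tmp/block" status=none # 4. restore the original contents

final=$(cat "$tmp/block")
echo "readback=$readback final=$final"          # readback=TESTPATTERN13 final=original data
rm -rf "$tmp"
```

If step 2 or 4 fails on real hardware, that is exactly the case where "non-destructive" stops being true.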
What are the pros and cons of badblock's two non-destructive tests?
1,462,218,184,000
My VirtualBox filesystem looks like: # df Filesystem 1K-blocks Used Available Use% Mounted on /dev/sda2 29799396 5467616 22795012 20% / devtmpfs 1929980 0 1929980 0% /dev tmpfs 1940308 12 1940296 1% /dev/shm tmpfs 1940308 8712 1931596 1% /run tmpfs 1940308 0 1940308 0% /sys/fs/cgroup /dev/sdb 31441920 1124928 30316992 4% /srv/node/d1 /dev/sdc 31441920 49612 31392308 1% /srv/node/d2 /dev/sdd 31441920 34252 31407668 1% /srv/node/d3 /dev/sda1 999320 253564 676944 28% /boot tmpfs 388064 0 388064 0% /run/user/0 Disks /dev/sdb, /dev/sdc, /dev/sdd are VDI data disks. I removed some data from them (not everything) and would like to use zerofree to compress them afterwards. Looks like I can't use zerofree on those disks. Here is an execution: # zerofree -v /dev/sdb zerofree: failed to open filesystem /dev/sdb Is it possible to use zerofree on such disks? If not, is there any alternative solution? I need to keep the existing data on those disks, but use zerofree (or anything else) to fill removed data with zeros.
I didn't find an answer on how to use zerofree on such disks, but I found an alternative solution which works well. Mount your disk somewhere (in my case 3 disks are mounted to locations: /srv/node/d1, /srv/node/d2, /srv/node/d3). Enter the directory where your disk is mounted (cd /srv/node/d1). Perform the command: dd if=/dev/zero of=zerofillfile bs=1M Remove the created file: rm -f zerofillfile Perform the above operations for all disks. P.S. not related to this question, but for VirtualBox disk compaction, use the command after performing the above commands: VBoxManage modifyhd --compact /path/to/my/disks/disk1.vdi
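Put together, and with a fixed count= so the sketch stays small (on a real mount point you would omit count= and let dd run until it hits "No space left on device"; that error is expected and harmless). The temporary directory stands in for a mount point like /srv/node/d1:

```shell
mountpoint=$(mktemp -d)    # stand-in for /srv/node/d1 etc.

# Fill free space with zeroes, then free it again
dd if=/dev/zero of="$mountpoint/zerofillfile" bs=1M count=4 status=none
size=$(wc -c < "$mountpoint/zerofillfile")
echo "wrote $size bytes of zeroes"

rm -f "$mountpoint/zerofillfile"   # the freed blocks now contain zeroes
rmdir "$mountpoint"
```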
How to use zerofree on a whole disk?
1,462,218,184,000
I'm learning file operation calls under Linux. The read() and write() and many other functions use cache to increase performance, and I know fsync() can transfer data from cache to disk device. However, is there any commands or system calls that can determine whether the data is cached or written to disk?
Read data is (directly) read from the cache only if it is already there. That implies that cached data was previously accessed by a process and kept in cache. There is no system call or any method for a process to know if some piece of data to be read is already in cache or not. On the other hand, a process can select whether it wants written data to be immediately stored on the disk, or only after a variable delay, which is the general case. This is done by using the O_SYNC flag when opening the file. There is also the O_DIRECT flag which, when supported, forces all I/Os to bypass the read and write cache and go directly to the disk. Finally, the hard disk itself is free to implement its own cache, so even after a synchronous write call has returned, there is no guarantee the data is already on the disk platters.
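With dd you can see both behaviours from the shell: oflag=sync opens the output file with O_SYNC (every write is synchronous), while conv=fsync issues a single fsync() at the end. The temporary file here is just for illustration:

```shell
f=$(mktemp)

dd if=/dev/zero of="$f" bs=4k count=8 oflag=sync status=none  # O_SYNC: each write waits for the disk
dd if=/dev/zero of="$f" bs=4k count=8 conv=fsync status=none  # one fsync() after all writes

size=$(wc -c < "$f")
echo "$size bytes written"
rm -f "$f"
```

dd also accepts oflag=direct for O_DIRECT, though not every filesystem supports it.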
How to determine whether the data is written to disk or cached?
1,462,218,184,000
I have /dev/sda mounted on /, as the root partition. Can I safely run badblocks in read-only mode on this device? Will it show false positives/negatives because it's mounted?
Read-only is just that - reading from the disk. It will pick up sector read errors but (obviously) not sector write errors. Categorically, it is safe to run on a device that is being used as a mounted filesystem. With respect to possible false positives, block IO is not "managed", i.e. there are no reader/writer locks. So there is no interaction between badblocks and the filesystem layer.
Can I safely run badblocks in read-only mode on a mounted drive?
1,462,218,184,000
I have a cloud server where we also get billed for disk I/O usage. Here is an example from the stats: 04/Sep/2013 07:24:19 04/Sep/2013 08:24:19 0,5 GB / 1 vCPU (0,08 Kr. per hour): Charge for 44.7578125GB disk I/O So for one hour we get billed for around 45 GB disk I/O. To me that sounds like a lot of traffic and I would like to do some monitoring to check it myself. I know about tools like dstat and sysstat etc., but I have not found any examples showing totals for one hour (or another timeframe). Most examples average the results, like this command: dstat -tdD total 60 Here, it shows disk I/O measured over 60 seconds, but it is averaged. So if I copy a large file, I will see the number increase while copying, but as soon as it is finished, the number will decrease again. In other words, I don't end up with a true total for that period. How can I log the total amount of disk I/O in a given timeframe?
You can use the tool iostat to collect the disk utilization information. It takes several arguments, including the switch, -d: -d Display the device utilization report. It also takes an interval argument, in seconds, for how frequently it should re-run. The value 3600 would be the number of seconds in an hour. Example $ iostat -d 3600 Linux 2.6.35.14-106.fc14.x86_64 (grinchy) 09/04/2013 _x86_64_ (4 CPU) Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn sda 20.53 71.86 1259.61 20308334 356000380 dm-0 4.92 39.02 28.81 11027610 8143376 dm-1 0.54 0.93 3.38 261472 954912 dm-2 156.65 31.87 1227.42 9006394 346902056 The output from this command could be redirected to a file: $ iostat -d 3600 >> iostat_hrly.log Meaning of the units If you consult the man page for iostat it has pretty good descriptions of the units. excerpt Blk_read/s Indicate the amount of data read from the device expressed in a number of blocks per second. Blocks are equivalent to sectors with kernels 2.4 and later and therefore have a size of 512 bytes. With older kernels, a block is of indeterminate size. Blk_wrtn/s Indicate the amount of data written to the device expressed in a number of blocks per second. Blk_read The total number of blocks read. Blk_wrtn The total number of blocks written. So a block is 512 bytes, and the Blk_read/s for device sda in kilobytes would be 71.86 * 512 bytes = 36.79232 kB/sec. There are additional switches that will change the units automatically in the output. excerpt from iostat man page -h Make the NFS report displayed by option -n easier to read by a human. -k Display statistics in kilobytes per second instead of blocks per second. Data displayed are valid only with kernels 2.4 and later. -m Display statistics in megabytes per second instead of blocks or kilobytes per second. Data displayed are valid only with kernels 2.4 and later.
Example in KB/s So this might be more useful, showing the throughput in KB/s: $ iostat -dk 3600 Linux 2.6.35.14-106.fc14.x86_64 (grinchy) 09/05/2013 _x86_64_ (4 CPU) Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn sda 20.85 47.25 663.81 15475096 217427086 dm-0 5.01 20.00 14.43 6549301 4725068 dm-1 0.54 0.58 1.60 189064 524872 dm-2 165.30 26.65 647.78 8730281 212177124
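Since the block figures are in 512-byte units, converting a total to kilobytes is a single multiplication. A quick sketch with the sda total (Blk_read) from the first, block-unit sample above:

```shell
blk_read=20308334                 # Blk_read total for sda in the sample
bytes=$(( blk_read * 512 ))       # one block = 512 bytes on 2.4+ kernels
kib=$(( bytes / 1024 ))

echo "$bytes bytes = $kib KiB read in total"
```

Comparing that against the next hour's snapshot gives you the per-hour total the billing is based on.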
How to measure total disk I/O per hour
1,462,218,184,000
I am running Ubuntu 18.04.1 in a Virtual Machine (VMWare) on a Windows host. I am trying to zero out an entire SD card using dd. This is part of the process I use to release embedded Linux to the software group (SD card images compress much better when the empty FS data is all 0). The command I am using is: sudo dd if=/dev/zero of=/dev/sdc bs=4M status=progress and it completes successfully; I get the printout of records transferred, and a message saying no space left on device. If I then do a sudo cat /dev/sdc | hexdump to look at the disk contents though, the disk is still full of data and isn't zeroes (and not just at the end). Do I have to specify the number of bytes of the SD card for it to work consistently? I don't have this issue every time I zero out an SD card. Complete console output: gen-ccm-root@ubuntu:~$ sudo dd if=/dev/zero of=/dev/sdc bs=4M status=progress 15929966592 bytes (16 GB, 15 GiB) copied, 1274 s, 12.5 MB/s dd: error writing '/dev/sdc': No space left on device 3799+0 records in 3798+0 records out 15931539456 bytes (16 GB, 15 GiB) copied, 1274.19 s, 12.5 MB/s gen-ccm-root@ubuntu:~$ sudo cat /dev/sdc | hexdump [sudo] password for gen-ccm-root: 0000000 0000 0000 0000 0000 0000 0000 0000 0000 * 0101000 2004 0000 6004 0000 0000 0000 0000 0000 0101010 0000 0000 0000 0000 0000 0000 0000 0000 * 0101400 2005 0000 6005 0000 0000 0000 0000 0000 ...
As said in the comments, the SD card had bad blocks. The solution I proposed was to run: badblocks -t 0x0000 -sw /dev/sdc CAUTION: this is data destructive, like dd if=/dev/zero. And the user received something like: 7234624 done, 39:10 elapsed. (0/0/2417408 errors) showing the SD card was damaged. The SD card was replaced and the problem was solved.
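For checking whether a target really is all zeroes, comparing it against /dev/zero with cmp is less error-prone than eyeballing hexdump output. A sketch on a file stand-in (for the real case you would substitute /dev/sdc and its size):

```shell
img=$(mktemp)                                    # stand-in for /dev/sdc
dd if=/dev/zero of="$img" bs=1M count=2 status=none

size=$(wc -c < "$img")
if cmp -s -n "$size" "$img" /dev/zero; then      # compare the first $size bytes
    result="all zeroes"
else
    result="non-zero bytes found"
fi
echo "$result"
rm -f "$img"
```

On a card with bad blocks, this check fails even after a "successful" dd run, which is exactly the symptom in the question.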
dd doesn't overwrite the disk
1,462,218,184,000
I want to find out how I can check disk usage in Linux, like in the Windows Task Manager. I mean, I don't want to find out how much free space there is on the partition or something like that; I want to find out how much the disk is being used at a given moment. Here's a gif: https://i.gyazo.com/4a91e7ed2e519d6fe628811d7c03d6c3.gif It's used more when I open a program.
Your question and the gif are two separate things. Based on the gif, what you need is to monitor disk I/O, and that can be achieved with iotop; you can find some examples here
Disk usage equivalent to Disk usage in windows task manager
1,462,218,184,000
I got a strange issue with extremely slow dnf, to the point where the computer freezes for a few seconds during installation of packages. Running Fedora 29, but this has been happening for years. Otherwise the computer works fine, never crashes; obviously only dnf hits some weak spot. Not only can "dnf upgrade" take an hour for 100 packages, every year when I upgrade Fedora, I wait for the Christmas holidays so I can live without a computer over the weekend, as a 3000+ package update starts on Friday evening and finishes on Sunday. The root directory is mounted on an old SSD disk, /home on a new classic disk (WD Caviar Black), and there is another disk just for backups (just to show there are multiple disks installed). I suspected the old SSD of misbehaving, moved the root partition to one of the classic disks, and it did not improve the situation (it was even slower). Neither journalctl, dmesg nor /var/log/messages show anything special. Sysbench says that the disk performance of every installed disk is fine, just as fine as on another (older) computer that also runs Fedora 29 and where dnf is fine (upgrade in a few minutes, annual release upgrade during lunch). Does anyone have an idea where to look for what is happening? P.S. top, iotop, iostat, dmesg, journalctl, such elementary things were checked a lot. iotop shows 99.99% activity on disk by python... (dnf) when it does the upgrade, but I guess that's normal. Even sysbench (as stated above) shows the disks are a bit faster than on another computer that does dnf in a breeze. As for the hardware: i7-920 processor, 24 GB RAM, no swap
Robert: It would be interesting to know what system call is taking the longest. Can you install a simple (random) package using perf trace and then sort to find the longest calls? For example: # perf trace -s -o /tmp/trace.out dnf install -y xorg-x11-apps-7.7-20.fc28.x86_64 And then post the contents of your /tmp/trace.out file. It's a summary file, so it shouldn't be too large, but it will help point you at what syscall(s) are taking the longest, and if they are way out of normal range.
Extremely slow dnf
1,462,218,184,000
Premise I am running Linux Mint on my sister's computer. Everything was working fine, but then one day she turned it on and got a strange message she didn't understand. I think it was some kind of question in the terminal. Unfortunately my sister, despite her natural intelligence, didn't know what to do and decided to simply turn the computer off. Since then, everything goes wrong. The problem When booting I get the classic Linux Mint splash screen; when pressing ESC I get this: The messages on the third row and later mean: /dev/sda5 clean, 182625/1602496 files, 1140445/6401536 blocks /dev/sda6 clean, 20551/10928128 files, 14921843/43686912 blocks The partition for /home is not ready or present. Continue waiting, or press S to skip the mounting or M for manual mount. Neither of those buttons works. It is simply stuck and does nothing. Attempt 1 - recovery mode So I've realized I am having a problem with /dev/sda5 or /dev/sda6. I booted into recovery mode. I tried to check for consistency of packages and I got: Again the same message, only one more: /dev/sda6: the journal is renewing Again nothing happens. Choosing a different operation in the recovery mode menu leads to the same thing. Attempt 2 - gparted I've downloaded a live CD with gparted to check out the disks. In gparted I've checked the disks, no errors. Hm. I mounted both /dev/sda6 and /dev/sda5 in the terminal, which worked. I was able to see all data unharmed. Attempt 3 - getting rid of /home Because everything has been about the bloody /dev/sda6, which is automatically mounted to /home, I've decided to remove it from /etc/fstab and leave only the system disk /dev/sda5. It was easy since I was in gparted already. I rebooted and nothing happened. Not even a splash screen, no message, no sound from the computer at all. It was running, but doing nothing. Uff. EDIT: I am still able to get past grub. This "nothing happens" is AFTER choosing Linux Mint in grub. Attempt 4 - reinstall?
Since I have /home on a separate partition, I was wondering whether I could simply reinstall the system. I've downloaded the ISO, put it on a USB stick, and I was astonished to get another terminal-like text and a freeze: Attempt 5 - Stack Exchange You are on, guys. Final notes I have the feeling all this points to hardware problems, but a) I've successfully mounted both partitions, and b) I have Windows 7 on another partition (but the same disk) that works. I guess formatting the disk using gparted and installing Linux Mint again would work; however, I would prefer saving all the data (I would have to back up everything). Any help on how to debug or fix this problem will be much appreciated. Reports fdisk -l https://dl.dropbox.com/u/71390144/keepers/fdiskl fsck -n /dev/sda6 https://dl.dropbox.com/u/71390144/keepers/fsck6 cat /etc/fstab https://dl.dropbox.com/u/71390144/keepers/fstab smartctl -a /dev/sda https://dl.dropbox.com/u/71390144/keepers/smartctl
It's hard to say how much I can consider this an answer, but it is a solution. I finally decided to give the 32-bit installation ISO a try, which miraculously worked. I installed the system and everything works fine. When I tried again (after the install), the 64-bit ISO could be loaded. I have no idea why whatsoever. Anyway, if you experience any trouble similar to this, just try to use the 32-bit installation ISO.
Linux Mint freezes on startup
1,462,218,184,000
I run Ubuntu-20 and I have scanned my laptop using smartctl. The test results are as follows: SMART Attributes Data Structure revision number: 32 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x002f 100 100 050 Pre-fail Always - 807002 3 Spin_Up_Time 0x0023 100 100 002 Pre-fail Always - 1261 4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 4358 5 Reallocated_Sector_Ct 0x0033 001 001 005 Pre-fail Always FAILING_NOW 9800 7 Seek_Error_Rate 0x002f 100 100 070 Pre-fail Always - 17337 9 Power_On_Hours 0x0032 100 100 000 Old_age Always - 6550 10 Spin_Retry_Count 0x0033 100 100 050 Pre-fail Always - 0 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 2741 183 Runtime_Bad_Block 0x0032 100 100 000 Old_age Always - 85 184 End-to-End_Error 0x003b 100 100 097 Pre-fail Always - 0 187 Reported_Uncorrect 0x0032 100 100 000 Old_age Always - 96 188 Command_Timeout 0x0032 100 100 000 Old_age Always - 290 189 High_Fly_Writes 0x003a 100 100 000 Old_age Always - 0 191 G-Sense_Error_Rate 0x0032 001 001 000 Old_age Always - 1274 192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 347892351057 193 Load_Cycle_Count 0x0032 084 084 000 Old_age Always - 32166 194 Temperature_Celsius 0x0022 032 050 000 Old_age Always - 32 (Min/Max 23/32) 197 Current_Pending_Sector 0x0032 100 100 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0030 100 100 000 Old_age Offline - 1 199 UDMA_CRC_Error_Count 0x0032 100 100 000 Old_age Always - 0 SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Short offline Completed: unknown failure 90% 6523 0 # 2 Short offline Completed: unknown failure 90% 6511 0 # 3 Extended offline Completed: unknown failure 90% 6511 0 # 4 Extended offline Completed: read failure 90% 6507 1632567256 # 5 Extended offline Completed: read failure 90% 6497 1284529824 # 6 Short offline 
Completed: read failure 10% 6495 1528570456 # 7 Short offline Completed: read failure 10% 6495 1280234720 # 8 Short offline Completed: read failure 10% 6495 1288689848 # 9 Extended offline Completed: read failure 90% 6492 1235843824 #10 Short offline Completed without error 00% 3452 - #11 Short offline Completed without error 00% 1539 - #12 Short offline Completed without error 00% 1230 - Is my drive failing? I am observing a lag in the system. It takes a lot of time to read and write files. Also, my filesystem sometimes goes read-only (shows read-only filesystem, unable to perform operations). I have a good amount of resources (12 GB RAM and i3 7th gen). What steps should I take to recover these things?
Yes, your drive is failing: 5 Reallocated_Sector_Ct 0x0033 001 001 005 Pre-fail Always FAILING_NOW 9800 There’s nothing you can do to make the drive “better”. What you need to do next depends on what backups you have, if any. If you don’t have any, stop using your system and get an external drive at least as large as your laptop’s drive, and then copy as much of your laptop’s drive to it as possible (using ddrescue).
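ddrescue is the tool of choice because it retries around bad areas and keeps a map of them; if it is not available, plain dd with conv=noerror,sync at least keeps copying past read errors, padding unreadable blocks with zeroes. A sketch of that fallback on file stand-ins (the device names in the comment are hypothetical):

```shell
src=$(mktemp); dst=$(mktemp)           # stand-ins for the failing disk and the external drive

printf 'precious data' > "$src"

# The preferred command on real devices would be: ddrescue -f -n /dev/sdX /dev/sdY mapfile
dd if="$src" of="$dst" bs=512 conv=noerror,sync status=none

copied=$(head -c 13 "$dst")            # first 13 bytes of the copy
dsize=$(wc -c < "$dst")                # conv=sync pads the last block up to bs
echo "copied '$copied' ($dsize bytes)"
rm -f "$src" "$dst"
```

Note that dd, unlike ddrescue, makes no second pass over bad regions, so only use it as a last resort.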
Is my drive failing and is this the same reason my laptop is running slower?
1,462,218,184,000
I've read information about the use of the sync command in Linux and how it works, but I'm unable to understand when I should really use it on files, and I can't find practical examples. For example, the command sync --data file.txt synchronizes that file's data, but I can't find a useful example of when I should use it and how that command works. Is there maybe a tool for monitoring those changes, or what do I have to do to check the effects of that command?
I/O in Linux is done to the Virtual File System (VFS). The VFS caches data structures from the various types of filesystem (ext?/XFS/BTFS, NFS, COW, FUSE etc). This means that any process I/O can use a common interface. Having the filesystem structure in memory also makes directory and inode lookups far faster than going to disk each time. Reads may still be held up whilst the data is retrieved, but writes will simply move the data into buffers, update the VFS and return.¹ A buffer which contains modified data is called a "dirty" buffer and is written out to disk by the system at a time of its choosing. "Clean" buffers contain a cache of information that is identical to the on-disk copy, so they can be deleted at any time. Using sync forces the system to write out the dirty buffers, and not return until they have been safely moved off the system. As "dr_" mentions, this must be done prior to a dismount. Once a dirty buffer has been written out, it remains in the memory as a clean buffer until such time as the system needs more memory for another purpose. There is one point to bear in mind however; any external cache is unknown to the system and so sync may not be totally safe. NFS is a case in point, once the data has been transmitted to the remote system then the local machine will release the buffers. The remote machine may not have yet written the data to disk, so needs to be handled separately. Another case is the use of external RAID controllers. Again sync will ensure the data has reached the RAID controller, but cannot know that the RAID controller has actually written it to disk. The short answer is that on modern systems you don't normally need to use sync. However, if you have reason to believe that some part of the filesystem may become unavailable, then syncing is a wise precaution. For example, prior to doing any changes with LVM it is good practice to sync just in case there is a failure. 
Likewise, a sync prior to adjusting the network is advisable (if you have NFS or similar), as it would be before moving a system or doing any work on the power supply. ¹Unless the file was opened with O_SYNC or similar.
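With GNU coreutils (8.24 and later), sync also accepts file arguments, so you can flush a single file instead of every dirty buffer on the system. A short sketch on a throwaway file:

```shell
f=$(mktemp)
echo "important" > "$f"

sync "$f"     # fsync(): flush this file's data and metadata
rc=$?
sync -d "$f"  # fdatasync(): data only, plus the metadata needed to read it back
sync          # no argument: flush everything, system-wide

echo "single-file sync exit status: $rc"
rm -f "$f"
```

The caveats above still apply: a successful return means the buffers left the kernel, not necessarily that every external cache has drained.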
When and why should I sync a file in Linux?
1,462,218,184,000
I am a bit confused with how Linux hard drive / storage device block files are named. My questions are: How are IDE devices and partitions named? How are EIDE devices and partitions named? How are PATA devices and partitions named? How are SATA devices and partitions named? How are SCSI devices and partitions named? Lastly, I have been reading articles on this subject, and I have seen mentions of 'master drives' and 'slave drives'. What are these, what are they used for, and how are they named?
Introduction First of all, all the devices populate the /dev folder. Also, it is important to note that the (E)IDE and PATA terms usually refer to the same thing, the PATA interface standard. IDE and PATA are interchangeable terms in this context. There was a major change in naming conventions for block devices in Linux around the release of Linux kernel version 2.6. The kernel supports all ATA devices through libATA, which started with SATA device support in 2003 and was extended to the current PATA support. Therefore, be aware that, depending on your distribution and kernel version, the drive naming convention can differ. For a while now, PATA devices on "modern" distributions have been named the way SATA drives are, since both now use libATA. For your distribution, you can find this in /lib/udev/rules.d/60-persistent-storage.rules. On my system using Debian 9, it is also the case. For example: $ cat /lib/udev/rules.d/60-persistent-storage.rules | grep "ATA" # ATA KERNEL=="sd*[!0-9]|sr*", ENV{ID_SERIAL}!="?*", SUBSYSTEMS=="scsi", ATTRS{vendor}=="ATA", IMPORT{program}="ata_id --export $devnode" By browsing this file, you will know how your distribution will name every block device you could connect to your machine. Block devices naming conventions IDE drives IDE drives (using the old PATA driver) are prefixed with "hd" the first device on the IDE controller (master) is hda the second device (slave) is hdb Since there can only be two drives on one IDE controller/cable, the master is the first one and the slave is the second one. Since most motherboards are fitted with two IDE controllers, it goes on the same way with the second controller: hdc being the master drive on the second controller and hdd the slave drive. Be aware that, since Linux kernel 2.6.19, the support of IDE drives has been merged with SATA/SCSI drives and, therefore, they will be named like them.
SATA and SCSI drives This naming convention started with SCSI drives, and was extended to SATA drives with libATA. It applies to SCSI, SATA and PATA, as well as other drives out of the scope of the OP's question (USB mass storage, FireWire, etc.). Anyway, usually, all the devices using a serial bus use the same denomination nowadays (except for NVMe drives, but that would be a story for PCI devices). SATA/SCSI drives start with "sd" the first one is sda the second one is sdb etc. Partitions naming conventions Regarding partitions, each of them is denoted by a number at the end of each disk, named as described previously, starting from 1. Except for some other devices not mentioned in the OP, it is always the case. For instance, the partitions on a SATA drive would be listed as sda1, sda2, and so on, for primary partitions. Logical partitions start at the index "5", while the extended partition takes the index "4". Note that this is obviously only true for drives making use of MBR and not GPT. Below is the output of lsblk giving an example for a disk called sdd, with 3 primary partitions (sdd1,sdd2,sdd3), 1 extended partition (sdd4) and 2 logical partitions (sdd5,sdd6). $ lsblk sdd 8:48 1 1.9G 0 disk ├─sdd1 8:49 1 153M 0 part ├─sdd2 8:50 1 229M 0 part ├─sdd3 8:51 1 138M 0 part ├─sdd4 8:52 1 1K 0 part ├─sdd5 8:53 1 289M 0 part └─sdd6 8:54 1 1.1G 0 part Master and slaves devices A single IDE interface can support two devices. Usually, motherboards come with dual IDE interfaces (primary and secondary) for up to four IDE devices on a system. To allow two drives to operate on the same parallel cable, IDE uses a special configuration called master and slave. This configuration allows one drive's controller to tell the other drive when it can transfer data to or from the computer.
The name comes from the fact that the slave drive asks the master whether it is communicating with the motherboard; if the master is, it will tell the slave to wait until the operation is finished, but if not, it will tell the slave to go ahead. The master/slave role can be chosen with a jumper on each drive supporting this feature, selecting either "Master", "Slave" or "Cable Select" (this last option meaning the role is determined by the position on the cable: the master is the drive at the end of the IDE cable and the slave is the other).
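Since the naming scheme is just "disk prefix + partition number", a name like sdd5 can be split with plain shell parameter expansion. A small sketch (this simple pattern covers sdXN/hdXN style names, not nvme0n1p1-style ones):

```shell
part=sdd5

disk=${part%%[0-9]*}   # strip everything from the first digit on -> sdd
num=${part#"$disk"}    # remove the disk prefix, keeping the number -> 5

echo "device /dev/$part = disk /dev/$disk, partition $num"
```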
Linux block devices naming
1,462,218,184,000
I was fiddling around with parted command on a loopback disk and tried to create some partitions using gpt part table but I keep getting Error: Unable to satisfy all constraints on the partition. when trying to create a logical partition $ sudo parted /dev/loop0 (parted) mktable gpt (parted) mkpart primary 1MiB 201MiB (parted) mkpart extended 201MiB -0MiB (parted) unit MiB print Model: Loopback device (loop) Disk /dev/loop0: 102400MiB Sector size (logical/physical): 512B/512B Partition Table: gpt Number Start End Size File system Name Flags 1 1.00MiB 201MiB 200MiB primary 2 201MiB 102400MiB 102199MiB extended (parted) mkpart logical 202MiB 1024MiB Error: Unable to satisfy all constraints on the partition. Recreating the same partitions using msdos part table doesn't give such error, though. So any idea what's wrong? % sudo parted /dev/loop0 GNU Parted 2.3 Using /dev/loop0 Welcome to GNU Parted! Type 'help' to view a list of commands. (parted) mktable msdos (parted) mkpart primary 1MiB 201MiB (parted) mkpart extended 201MiB -0MiB (parted) mkpart logical 202MiB 1024MiB (parted) unit MiB print Model: Loopback device (loop) Disk /dev/loop0: 102400MiB Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 1.00MiB 201MiB 200MiB primary 2 201MiB 102400MiB 102199MiB extended lba 5 202MiB 1024MiB 822MiB logical
The extended and logical partitions make sense only with the msdos partition table. Their only purpose is to allow you to have more than 4 partitions. With GPT, there are only 'primary' partitions, and their number is usually limited to 128 (however, in theory there is no upper limit implied by the disklabel format). Note that on GPT none of the partitions can overlap (compare to msdos, where the extended partition is expected to overlap with all contained logical partitions, obviously). The next thing about GPT is that partitions can have names, and here comes the confusion: the mkpart command has different semantics depending on whether you use a GPT or msdos partition table. With an msdos partition table, the second argument to mkpart is the partition type (primary/logical/extended), whereas with GPT, the second argument is the partition name. In your case it is 'primary' resp. 'extended' resp. 'logical'. So parted created two GPT partitions, the first named 'primary' and the second with the name 'extended'. The third partition which you tried to create (the 'logical' one) would overlap with 'extended', so parted refuses to do it. In short, extended and logical partitions do not make sense on GPT. Just create as many 'normal' partitions as you like and give them proper names.
Unable to create logical partition with Parted
1,462,218,184,000
I have an HDD with a 4096-byte physical sector size and a 512-byte logical size. It is a SATA disk. Now I'd like Linux to use 4 KiB as the logical sector size too, not the 512-byte one. How can I achieve this? Is it possible to switch this disk to operate only in 4 KiB mode? How can I be sure that all the partitions I create are aligned to 4 KiB? Do I have to manually calculate the start and end sector numbers of a given partition to have 4 KiB alignment? I'm using Linux and sometimes Windows. Mainly I'm creating partitions using Linux fdisk, not the Windows one. Maybe using "fdisk -b 4096" is enough? Hm... Probably not, because how will Linux know which sector size a given disk uses?
Unless you use options to force a legacy MS-DOS compatible mode, or use expert mode to specify exact LBA block numbers for the beginning and end of partitions, most modern partitioning tools (Linux and otherwise) will align partitions to multiples of 1MB by default. This is also what modern Windows does, and it guarantees compatibility with both 4kB sector size and various SSD and SAN storage devices which might require alignment to larger powers of two for optimal performance. You can use lsblk -t to check the alignment offsets of each partition. If the value in the ALIGNMENT column is zero, then as far as the kernel knows, the partition should be optimally aligned. To switch the HDD sector size, you would first need to verify that your HDD supports the reconfiguration of the Logical Sector Size. Changing the Logical Sector Size will most likely make all existing data on the disk unusable, requiring you to completely repartition the disk and recreate any filesystems from scratch. Running hdparm --set-sector-size 4096 /dev/sdX would be the "standard" way to change the sector size, but if there's a vendor-specific tool for it, I would generally prefer to use it instead - just in case a particular disk requires vendor-specific special steps. On NVMe SSDs, nvme id-ns -H /dev/nvmeXnY will tell (among other things) the sector size(s) supported by the SSD, the LBA Format number associated with each sector size, and the currently-used sector size. If you wish to change the sector size, and the desired size is actually supported, you can use nvme format --lbaf=<number> /dev/nvmeXnY to reformat a particular NVMe namespace to a different sector size.
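To verify alignment by hand instead of trusting the tools, the arithmetic is simple: with 512-byte logical sectors, a start LBA is 4 KiB-aligned when it is a multiple of 8, and 1 MiB-aligned when it is a multiple of 2048. A sketch, using the conventional default start of 2048:

```shell
start_lba=2048    # first-partition start used by most modern partitioners

[ $(( start_lba % 8 ))    -eq 0 ] && echo "start is 4 KiB-aligned"
[ $(( start_lba % 2048 )) -eq 0 ] && echo "start is 1 MiB-aligned"
```

You can read the real start LBA of your partitions from fdisk -l or /sys/block/sdX/sdXN/start and plug it into the same checks.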
Switching HDD sector size to 4096 bytes
1,462,218,184,000
For example, when I archive a few gigs of files (using tar), Linux uses quite a lot of disk caching (and some swap) but never cleans it up when the operation has completed. As a result, because there's no free memory, Linux will try to swap out something from memory, which in turn creates an additional load on the CPU. Of course, I can clean up caches by running echo 1 > /proc/sys/vm/drop_caches, but isn't it silly that I have to do that? Even worse with swap: there's no command to clean up unused swap; I have to disable/enable it completely, which I don't think is a safe thing to do at all. UPD: I've run a few tests and found out a few things: The memory pages swapped out during the archive command are not related to the archived files; it seems to be just the usual swapping-out process caused by decreased free memory (because disk caching ate it all), according to swappiness. Running swapoff -a is actually safe, meaning swapped pages will move back to memory. My current solution is to limit the archive command's memory usage via cgroups (I run a docker container with the -m flag). If you don't use docker, there's a project https://github.com/Feh/nocache that might help. The remaining question is: when will Linux clean up disk caching, and will it at all? If not, is it a good practice to manually clean up the disk cache (echo 1 > /proc/sys/vm/drop_caches)?
Nitpick: the CPU time used by swapping is not usually significant. When the system is slow to respond during swapping, the usual problem is the disk time. (1) Even worse with swap, there's no command to clean up unused swap Disabling and then enabling swap is a valid and safe technique, if you want to trigger and wait for the swapped memory to be read back in. I just want to say "clean up unused swap" is not the right description - it's not something you would ever need to do. The swap usage might look higher than you expected, but that does not mean it is not being used. A page of memory can be stored in both RAM and swap at the same time. There is a good reason for this. When a swap page is read back in, it is not specifically erased, and it is still kept track of. This means if the page needs to be swapped out again, and it has not changed since it was written to swap, the page does not have to be written again. This is also explained at linux-tutorial.info: Memory Management - The Swap Cache If the page in memory is changed or freed, the copy of the page in swap space will be freed automatically. If your system has relatively limited swap space and a lot of RAM, it might need to remove the page from swap space at some point. This happens automatically. (Kernel code: linux-5.0/mm/swap.c:800) (2) The remaining question is when will Linux clean up disk caching and will it at all? If not, is it a good practice to manually clean up disk cache (echo 1 > /proc/sys/vm/drop_caches)? Linux cleans up disk cache on demand. Inactive disk cache pages will be evicted when memory is needed. If you change the value of /proc/sys/vm/swappiness, you can alter the bias between reclaiming inactive file cache, and reclaiming inactive "anonymous" (swap-backed) program memory. The default is already biased against swapping. If you want to, you can experiment with tuning down the swappiness value further on your system. 
If you want to think more about what swappiness does, here's an example where it might be desirable to turn it up: Make or force tmpfs to swap before the file cache Since Linux cleans up disk cache on demand, it is not generally recommended to use drop_caches. It is mostly for testing purposes. As per the official documentation: This file is not a means to control the growth of the various kernel caches (inodes, dentries, pagecache, etc...) These objects are automatically reclaimed by the kernel when memory is needed elsewhere on the system. Use of this file can cause performance problems. Since it discards cached objects, it may cost a significant amount of I/O and CPU to recreate the dropped objects, especially if they were under heavy use. Because of this, use outside of a testing or debugging environment is not recommended.
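To see where your system currently stands before experimenting, you can read the swappiness value directly from procfs (a sketch; the fallback default of 60 is assumed for environments without /proc):

```shell
# Read the current swappiness; fall back to the usual kernel default of 60
# if procfs is not available in this environment (e.g. some containers).
swp=$(cat /proc/sys/vm/swappiness 2>/dev/null || echo 60)
echo "vm.swappiness = $swp"
```

Lower values bias reclaim toward dropping file cache, higher values toward swapping out anonymous memory.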
Why Linux does not clean up disk caches and swap automatically?
1,462,218,184,000
hdparm -I /dev/sda output: Logical Sector size: 512 bytes Physical Sector size: 512 bytes stat somefile output: Size: 509 Blocks: 8 IO Block: 4096 regular file Why is IO Block 4096? Isn't it the same as the physical sector size, which is 512 bytes?
No. The disk block size says in what size chunks data is handled on the disk. If you write something to a file, your CPU/motherboard has to tell the drive controller which bytes should be written to which sector of the disk. This can happen only in 512-byte chunks. The distinction between the logical and physical sector size is this: the physical sector size is the unit in which the data is physically organized on the disk, while the logical sector size is the unit in which your CPU/motherboard can talk to your drive controller (which is often also part of your motherboard, but your OS still has to know what block sizes to produce when it executes a disk read/write operation). For some decades now, even the physical sector size has been a faked one, and its exact details are a business secret of the hard disk manufacturer. But OSes still have to know this faked value, because it is part of the disk standards (SCSI, PATA, SATA, etc.). Thus the physical sector size has no practical meaning in most cases. On some newer disks there is a new development: they use 4096-byte logical sectors instead of 512-byte ones. This was needed because sector numbers in some old ATA protocols have a 32-bit size, so disks larger than 4 billion sectors (= 2 terabytes) couldn't be addressed with them. The stat command reports the block size of your filesystem. Most filesystems also organize data in blocks on your system. If you create a single-byte file, the filesystem will have to allocate 4096 bytes on your disk for it. Non-block-oriented filesystems are rare; ReiserFS, for example, although still organized in blocks, has a smallest allocatable unit of only 32 bytes, so a 1-byte file allocates only 32 bytes on a ReiserFS filesystem.
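You can observe the filesystem-block granularity yourself with a 1-byte file (a sketch; the exact Blocks and IO Block values depend on which filesystem the temporary file lands on):

```shell
# Create a 1-byte file and inspect how much space the filesystem accounts for.
f=$(mktemp)
printf 'x' > "$f"
size=$(stat -c '%s' "$f")      # file size in bytes
blocks=$(stat -c '%b' "$f")    # number of 512-byte units allocated
ioblk=$(stat -c '%o' "$f")     # preferred I/O block size of the filesystem
echo "size=$size bytes, allocated=$(( blocks * 512 )) bytes, IO block=$ioblk"
rm -f "$f"
```

On a typical ext4 filesystem the 1-byte file shows Blocks: 8, i.e. one full 4096-byte filesystem block allocated.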
Understanding IO Block size
1,462,218,184,000
When the HDD indicator is blinking (for a long period), how can I tell which process is using the most disk bandwidth?
Using iotop. Iotop is a Python program with a top-like UI that shows on behalf of which processes the I/O is going on. It requires Python ≥ 2.5 (or Python ≥ 2.4 with the ctypes module) and a Linux kernel ≥ 2.6.20 with the TASK_DELAY_ACCT, CONFIG_TASKSTATS, TASK_IO_ACCOUNTING and CONFIG_VM_EVENT_COUNTERS options enabled.
Determine which process is taking most of disk bandwidth?
1,462,218,184,000
I have been tweaking the badblocks utility to use more RAM and possibly achieve a bit higher performance. The exact command I am running is (without HDD's S/N): badblocks -v -b 4096 -c 98304 -w -s /dev/disk/by-id/ata-WDC_WD5000LPCX-24C6HT0_SN >> /root/spare-hdd-badblocks.log 2>&1 & I do not use the badblocks tool very often, however, so if I may ask... What does the -c switch do exactly and why is it suggested to achieve higher speeds? Does it really eat more memory and if so, as I have plenty, could it possibly be wise to further increase it? From its man page: -c: Number of blocks is the number of blocks which are tested at a time. The default is 64. I do not understand it; I just hope someone does. Credit, math, and source of further valuable info: http://www.pantz.org/software/badblocks/badblocksusage.html My system: Debian 11 on a headless Xeon server with 32GB ECC RAM.
The -c flag controls the number of blocks tested in one go. By increasing this number you're reducing overhead (system calls), marginally improving performance. (Consider dd vs dd bs=64M as another example of this optimisation process.) However, I'm less convinced that badblocks is even relevant these days. Disk firmware has got much more sophisticated and the OS no longer needs to omit faulty sectors, as the disk does that for you itself. What's more, with SMART you can even get the disk to self-test regularly, and with SMART monitoring you'll be notified when (if) there's a problem - probably in enough time to replace the disk before you lose the data.
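For a rough feel of the memory involved: badblocks buffers on the order of block-size × count bytes per test buffer, and write mode keeps more than one buffer, so the real footprint is a small multiple of this - treat these figures as an estimate, not the exact allocation:

```shell
# Estimated size of one badblocks test buffer for -b 4096 -c 98304,
# compared against the default -c 64.
block_size=4096
count=98304
buf_mib=$(( block_size * count / 1024 / 1024 ))
echo "one buffer with -c $count: ${buf_mib} MiB"
echo "one buffer with the default -c 64: $(( block_size * 64 / 1024 )) KiB"
```

So the jump from the default 64 to 98304 takes each buffer from 256 KiB to 384 MiB, which is why the switch is the usual knob for trading RAM against syscall overhead.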
Speeding up `badblocks` by tweaking its `-c` switch
1,462,218,184,000
I have a 3-year-old server with two identical disks. I'm planning to replace both before they fail. Can I add two new disks to the raid and (after it has been rebuilt) eventually remove the two old ones? Or what is the best way to do this? Thank you
So, assuming you are using mdadm, you can do exactly what you suggest. The only caveat is that the raid monitoring utility will generally only handle one disk at a time, and normally only once you have marked one as failed; note that a disk added to an already-full RAID 1 array joins as a spare, and the rebuild onto it starts when you mark an old disk as failed. Further, you just need to ensure that it has completed copying the data before removing the old disks from the raid array, otherwise you'll end up removing the "live" disks with nothing on the new ones and corrupting your array. Commands that you will find useful for doing this are as follows: To add a new disk to the array: # mdadm /dev/<mddevice> --add /dev/<newdisk> To see the status and recovery process: cat /proc/mdstat To mark the old disk as 'failed' and remove it from the array: # mdadm /dev/<mddevice> --fail /dev/<olddisk> --remove /dev/<olddisk> I would suggest doing one disk at a time the first time, checking the status of the raid array via mdstat as you go before removing the second (and potentially only viable) disk from the array. My only reason for suggesting this is that experience teaches you to take several small steps rather than one large one and face total disaster recovery. Prevention is far better than cure.
Replace both disks in a raid 1 mirror
1,462,218,184,000
In some programs the progress percentage when copying large files gets to 100% very fast, and then I wait much longer before it moves on to the next step. This is caused by buffering. How do I see the amount of data that is still waiting to be written?
The term for that is "dirty" data (data that has been changed, but not yet flushed to permanent storage). On Linux you can find this from /proc/meminfo under Dirty: $ cat /proc/meminfo | grep Dirty Dirty: 0 kB
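A quick way to watch the backlog is to poll that value around a sync (a sketch assuming Linux procfs; the actual numbers depend on what else the system is doing at the time):

```shell
# Report dirty and writeback page-cache totals before and after forcing a flush.
show() { grep -E '^(Dirty|Writeback):' /proc/meminfo 2>/dev/null || echo "meminfo not available"; }
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=4 2>/dev/null
echo "before sync:"; show
sync
echo "after sync:"; show
rm -f "$f"
```

After sync returns, the Dirty figure should have dropped back toward zero, since sync waits for outstanding writeback to complete.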
How to get size of data that haven't been written to disk yet?
1,462,218,184,000
I've added a fibrechannel disk to a RHEL 5.5 server. The disk is present and shows up under /dev/sdXX, but I need to give udev a kick and have it refresh the /dev/disk/by-label/LABEL links; this is where I do my mount points. I do not want to reboot the system.
Try udevtrigger. That should replay all outstanding udev tasks. (On newer systems the equivalent command is udevadm trigger.)
RHEL 5.5 - Need to refresh /dev/disk/by-label links
1,462,218,184,000
When we run smartctl -a on a disk we get a lot of output. What is the final status that indicates whether the disk is good or bad? smartctl -a /dev/sdb smartctl 6.2 2013-07-26 r3841 [x86_64-linux-3.10.0-327.el7.x86_64] (local build) Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org === START OF INFORMATION SECTION === Vendor: SEAGATE Product: ST2000NX0433 Revision: NS02 User Capacity: 2,000,398,934,016 bytes [2.00 TB] Logical block size: 512 bytes Formatted with type 2 protection Logical block provisioning type unreported, LBPME=0, LBPRZ=0 Rotation Rate: 7200 rpm Form Factor: 2.5 inches Logical Unit id: 0x5000c5009eaededf Serial number: W46064KW Device type: disk Transport protocol: SAS Local Time is: Thu Nov 22 10:38:35 2018 UTC SMART support is: Available - device has SMART capability. SMART support is: Enabled Temperature Warning: Disabled or Not Supported === START OF READ SMART DATA SECTION === SMART Health Status: OK Current Drive Temperature: 23 C Drive Trip Temperature: 60 C Manufactured in week 06 of year 2017 Specified cycle count over device lifetime: 10000 Accumulated start-stop cycles: 49 Specified load-unload count over device lifetime: 300000 Accumulated load-unload cycles: 550 Elements in grown defect list: 0 Vendor (Seagate) cache information Blocks sent to initiator = 1986603075 Blocks received from initiator = 2165723528 Blocks read from cache and sent to initiator = 1298028358 Number of read and write commands whose size <= segment size = 201615101 Number of read and write commands whose size > segment size = 0 Vendor (Seagate/Hitachi) factory information number of hours powered up = 12335.38 number of minutes until next internal SMART test = 26 Error counter log: Errors Corrected by Total Correction Gigabytes Total ECC rereads/ errors algorithm processed uncorrected fast | delayed rewrites corrected invocations [10^9 bytes] errors read: 26648753 0 0 26648753 0 83475.092 0 write: 0 0 2 2 2 135145.593 0 verify: 3914513941 0 0 
3914513941 0 109628.879 0 Non-medium error count: 14 SMART Self-test log Num Test Status segment LifeTime LBA_first_err [SK ASC ASQ] Description number (hours) # 1 Background short Completed 96 2 - [- - -] Long (extended) Self Test duration: 20400 seconds [340.0 minutes] Do the following give a good indication? smartctl -a /dev/sda | grep Completed or smartctl -a /dev/sda echo $?
The overall health status is the part of the output of smartctl -a that best addresses the global question "Is the drive good or bad?" In your cited output, that status is reported in the line SMART Health Status: OK which can also be obtained separately (with some header) by using the -H option of smartctl, instead of -a. Note that this assessment does not come from the smartmontools but from the drive itself (see man page smartctl(8) on the -H option) and that its meaning is rather coarse: See this quote from Wikipedia: The S.M.A.R.T. status does not necessarily indicate the drive's past or present reliability. If a drive has already failed catastrophically, the S.M.A.R.T. status may be inaccessible. Alternatively, if a drive has experienced problems in the past, but the sensors no longer detect such problems, the S.M.A.R.T. status may, depending on the manufacturer's programming, suggest that the drive is now sound. and (same source): More detail on the health of the drive may be obtained by examining the S.M.A.R.T. Attributes. The overall health status is reflected by bit 3 (counting from 0) of the exit status of smartctl, which is set for a failing disk. See section "RETURN VALUES" in man page smartctl(8). Right after executing smartctl, this bit can be evaluated by the (Bash) expression $(($? & 8)) like in if [ $(($? & 8)) -eq 0 ]; then echo Good. else echo Bad. fi Please note that if bit 3 is set, the expression $(($? & 8)) evaluates to 8, not 1. An exit status of zero from smartctl is sufficient for a healthy disk (as far as S.M.A.R.T. can judge), but as a condition this might be too strong: Bit 6 of this status reflects the existence of error records in the device logs, which also may refer to communication errors between drive and host (Read DMA errors). I have several drives whose logs show such errors since their first hours of lifetime, but I used these drives on a daily basis without any problems for years. 
So this criterion can give you a lot of false positives. Of course this is arguable since there were errors after all. Anyhow, if you want to take all bits but that one (bit 6) into account, you can use this expression in your test: $(($? & 191)). On the other hand, the criterion smartctl -a /dev/sda | grep Completed that you mentioned says nothing about the health of the drive, since it just reports that a self-test was completed, without taking its result into account.
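The bit arithmetic generalizes, of course; here it is spelled out with a made-up exit status (72, i.e. bits 3 and 6 set) rather than a live smartctl run:

```shell
# Decode smartctl-style exit status bits with shell arithmetic.
status=72                                          # example: bits 3 and 6 set
echo "bit 3 (disk failing):    $(( status & 8 ))"  # non-zero means set
echo "bit 6 (error log entry): $(( status & 64 ))" # non-zero means set
echo "all bits except bit 6:   $(( status & 191 ))"
```

With the 191 mask (all bits except bit 6), this hypothetical status still reports 8, so the "failing" bit would be caught while the noisy error-log bit is ignored.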
what is the indications from smartctl that show bad status from disk
1,462,218,184,000
In my bash script I use the syntax lsblk | grep sd in order to capture all disks in my HW machines (not including flash cards or ROM). I just worry that some disk device names will differ from sd, and I will miss those disks. Is that possible? lsblk | grep sd sda 8:0 0 150G 0 disk ├─sda1 8:1 0 500M 0 part /boot └─sda2 8:2 0 149.5G 0 part sdb 8:16 0 20G 0 disk /id/sdb sdc 8:32 0 20G 0 disk /id/sdc sdd 8:48 0 20G 0 disk /id/sdd sde 8:64 0 20G 0 disk /id/sde sdf 8:80 0 20G 0 disk /id/sdf sdg 8:96 0 20G 0 disk sdh 8:112 0 20G 0 disk sdi 8:128 0 20G 0 disk sdj 8:144 0 20G 0 disk sdk 8:160 0 20G 0 disk
Most disk drivers use the sd prefix, but not all. Historically sd stood for “SCSI disk”, but most disks use a protocol which is close to SCSI, and most of Linux's disk drivers use the generic sd layer plus a controller-specific part. However, this is not an obligation, so you need to check with your hardware. For example, eMMC devices have the prefix mmcblk. Some hardware RAID drivers use a different prefix. Virtual machine disks may or may not be /dev/sd* depending on the virtualization technology. Note that sd includes removable drives as well. For example all USB drives have the sd prefix, regardless of whether they're hard disks, USB keys, SDcard readers, etc. Note also that grep sd is very fragile since it matches sd anywhere on the line, for example in a disk or partition label. grep '^sd' would be less fragile. All in all, grep '^sd' does something that isn't very useful, but may happen to work for you, depending on your hardware. If you migrate your installation to different hardware, it may stop working. So you should try to find something else. What else to use depends on what you mean by “all disk (…) (not include flash card or rom)”. Flash is a disk technology, after all, and there's no reason to distinguish it from rotating disks. And it's usually a bad idea to rely on the fact that a machine is or is not virtualized. And if you start using RAID, it isn't clear whether you're interested in the underlying hardware or in the partitions that are available for software. If you want to see only non-removable drives, look in /sys/block/* and check which ones contain 0 in the removable file. This includes “non-hardware” block devices such as RAID/LVM holders and loop devices. Under Linux, I recommend using LVM for non-removable media. It makes administration a lot easier. If you use LVM then either pvdisplay or lvdisplay probably shows the information you're after (but of course I can't tell for sure since you didn't tell what you're after).
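A sketch of the /sys/block approach described above, demonstrated on a throwaway directory tree that mimics the layout of /sys/block (point sysblock at the real /sys/block to use it on a live system):

```shell
# Fake sysfs tree: one fixed disk, one removable device.
sysblock=$(mktemp -d)
mkdir -p "$sysblock/sda" "$sysblock/sr0"
echo 0 > "$sysblock/sda/removable"   # fixed disk
echo 1 > "$sysblock/sr0/removable"   # removable, e.g. an optical drive

# Collect devices whose removable flag is 0.
fixed=""
for f in "$sysblock"/*/removable; do
    if [ "$(cat "$f")" = "0" ]; then
        d=${f%/removable}
        fixed="$fixed ${d##*/}"
    fi
done
echo "non-removable devices:$fixed"
rm -rf "$sysblock"
```

Unlike grep sd, this works regardless of the driver's device-name prefix, since it asks the kernel's own removable flag for each block device.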
Do all disks devices in my HW machines start with - sd?
1,462,218,184,000
Under the /sys/class/scsi_device folder I have the following: root@linux01:/sys/class/scsi_device # ls 1:0:0:0 2:0:0:0 2:0:1:0 3:0:0:0 How can I know how each of these devices is related to the disk? For example, how can I determine if device 2:0:1:0 is disk /dev/sdb? root@linux01:/sys/class/scsi_device # sfdisk -s /dev/sdb: 15728640 /dev/sdc: 524288000 /dev/sda: 153600 [...] # more /etc/redhat-release ( Linux VM machine ) Red Hat Enterprise Linux Server release 6.5 (Santiago)
An easy way to get the correspondence is to look at the device/block subdirectory in the /sys hierarchy: # ls -1d /sys/class/scsi_device/*/device/block/* /sys/class/scsi_device/1:0:0:0/device/block/sr0 /sys/class/scsi_device/2:0:0:0/device/block/sda /sys/class/scsi_device/2:0:1:0/device/block/sdb /sys/class/scsi_device/2:0:2:0/device/block/sdc /sys/class/scsi_device/2:0:3:0/device/block/sdd /sys/class/scsi_device/2:0:4:0/device/block/sde /sys/class/scsi_device/2:0:5:0/device/block/sdf The directory name in there correspond to the block device name in /dev.
Correspondence between SCSI device entries in /sys and the disks in /dev
1,462,218,184,000
I'm writing some automation scripts for full disk backups and I'd like to be fairly precise with which devices are used. I know that one can uniquely identify a partition using UUIDs and blkid, but is there a way to uniquely identify a disk? My use case is that I'm not entirely sure which order disks will be mounted on the Clonezilla distribution, and I'd like to make sure that my backups are targeting the right (whole) disk for backup. Is there a way to find the device identifier (/dev/sdX) for a given disk by certain criteria?
See if it's in /dev/disk/by-id/ which contains links to devices and partitions including brand and serial number. For example /dev/disk/by-id/ata-WDC_WD15EARS-00MVWB0_WD-WMAZA1856149-part1. If knowing the /dev/sdX name is important, you can get it with readlink. $ readlink -f /dev/disk/by-id/ata-WDC_WD15EARS-00MVWB0_WD-WMAZA1856149 /dev/sdi
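readlink -f resolves an entire chain of symlinks to the canonical path, which is exactly what the by-id entries are; here demonstrated on a throwaway link rather than a real device node:

```shell
# Resolve a symlink chain the same way you would resolve a /dev/disk/by-id entry.
d=$(mktemp -d)
touch "$d/sdi"                   # stand-in for the real device node
ln -s "$d/sdi" "$d/ata-disk"     # stand-in for the by-id symlink
ln -s "$d/ata-disk" "$d/alias"   # a second hop, to show full resolution
target=$(readlink -f "$d/alias")
echo "$target"
rm -rf "$d"
```

Even through two hops, the result is the final target, just as readlink -f on a by-id name yields the current /dev/sdX node.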
Uniquely identifying a drive (not a partition)
1,462,218,184,000
I have a physical server with 2 disks, sda and sdb. I want to monitor their I/O and performance. The monitoring element has 3 types: ops, sps, and bps. What are these and which one is better to monitor and give me useful information?
ops = operations per second sps = sectors per second bps = bytes per second More information at https://www.zabbix.com/documentation/2.0/manual/config/items/itemtypes/zabbix_agent They each give good information. Which one is appropriate in your environment will depend on your requirements.
Disk Performance Monitoring [closed]
1,462,218,184,000
I want to set a disk quota (limit the maximum space used in the file system) for a particular process under Linux. There seem to be plenty of ways to limit disk quota for a user, but not at per-process granularity. One way I can think of is creating a user for each process, but as you can imagine that is not a great solution.
Handling it with different user accounts may well be the only possible way, since processes do not own any files and can therefore not have a disk quota. To make it even clearer: at the very best you could manage a quota for the files currently open, should you develop such a kernel patch, but it would still be meaningless, since files that were written previously and then closed are no longer under the process's responsibility at all. Such a flawed patch would also cause considerable performance degradation, and wouldn't make sense in situations where more than one program opens the same file. For those and many other reasons, it theoretically simply cannot be done properly.
How to set per process disk quota?
1,462,218,184,000
I was brushing up and diving deeper into filesystem anatomy, and in numerous resources it is said to be a requirement that the very first superblock start at an offset of 1024 bytes. I started looking for any sort of documentation as to why 1024 was chosen; it just seemed pretty arbitrary. All I could find was the following: "For the special case of block group 0, the first 1024 bytes are unused, to allow for the installation of x86 boot sectors and other oddities. The superblock will start at offset 1024 bytes, whichever block that happens to be (usually 0). However, if for some reason the block size = 1024, then block 0 is marked in use and the superblock goes in block 1. For all other block groups, there is no padding." Ext4 Disk Layout I figured this region had something to do with the later stages of grub, so I did some more digging and came across this article: Details of GRUB on the PC, which, from the DOS compatibility region section, states that the entire first "cylinder" is reserved, which can be up to 63 sectors - far more than a 1024-byte offset - so now I'm just confused. My Question: Can someone please explain, from byte 0 to the first superblock of an EXT filesystem, how a disk is laid out?
The master boot record (MBR) at the beginning of a disk contains only 446 bytes of code, so it is tiny and cannot do much. Therefore, a common booting technique is to do what is called "chain loading," where the MBR loads code at the beginning of the active partition and jumps to that code. By leaving the first two sectors free, the EXT file system allows the beginning of the partition to be used for such chain-loading code, when your EXT file system is on the active partition. More information on how this booting process works can be found here: http://wiki.osdev.org/Boot_Sequence#The_Traditional_Way
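You can see this layout directly by building a tiny ext2 image in a regular file (a sketch assuming e2fsprogs is installed; it skips otherwise). With 1 KiB blocks, the superblock sits in block 1, and its magic number 0xEF53 is stored little-endian at byte 56 of the superblock, i.e. absolute offset 1024 + 56 = 1080:

```shell
# Build a small ext2 image and read the superblock magic at offset 1080.
command -v mke2fs >/dev/null 2>&1 || { echo "mke2fs not available, skipping"; exit 0; }
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1024 count=4096 2>/dev/null
mke2fs -F -q -b 1024 "$img"
magic=$(od -An -tx1 -j 1080 -N 2 "$img" | tr -d ' ')
echo "bytes at offset 1080: $magic (0xEF53 stored little-endian)"
rm -f "$img"
```

The first 1024 bytes of the image remain untouched by mke2fs, which is exactly the boot-code region the question asks about.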
EXT filesystem family: Why does the first superblock start at offset 1024?
1,462,218,184,000
I'm pretty sure the Linux kernel has a feature which allows tracking all the reads and writes (IO) of an application and all its children; however, I haven't seen any utilities which can calculate it and show it. For instance for CPU time you could simply use time and get neat CPU use information: $ time cat --version > /dev/null real 0m0.001s user 0m0.001s sys 0m0.000s I'm looking for something similar in regard to IO, e.g. $ calc_io task Bytes read: 123456 Bytes written: 0 Of course, we have /proc/$PID/io which contains runtime information, but tracking it for applications which spawn and destroy children dynamically, e.g. web browsers, seems like a daunting task. I guess if you run strace -fF firefox then monitor all children being spawned and try to track /proc/$PID/io in real time - that seems too difficult to implement, and then how often would you poll this file for information? Children may exist for a split second. Another idea is to use cgroups, but then what if I don't want to use them? Also I've checked /sys/fs/cgroup and I don't see any relevant statistics.
I came across this post and found it very interesting. I thought this problem was not that difficult since the question you are asking is quite natural after all. I could only find an imperfect and incomplete solution. I decided to post it anyway, as the question was not answered yet. This requires a system with systemd and cgroups v2 (I read what you said about it but it might be interesting to see this solution). I learned about both, but I don't claim to master them. I tested only on an arch-based linux distribution. ~]$ cat /etc/systemd/system/user\@1000.service.d/override.conf [Service] Delegate=pids memory io It seems that you need to "delegate" the io controller to your "user systemd sub tree" to use this as an unprivileged user (I can't point to one specific place; see man systemd.resource-control, https://systemd.io/CGROUP_DELEGATION, https://wiki.archlinux.org/title/cgroups#As_unprivileged_user ) ~]$ cat ~/.config/systemd/user/my.slice [Slice] IOAccounting=true Then create a slice with IOAccounting enabled to run your processes in. reboot ~]$ cat foo.sh #!/bin/sh dd if=/dev/random of=/home/yarl/bar bs=1M count=7 dd if=/dev/random of=/home/yarl/bar bs=1M count=3 ~]$ systemd-run --user --slice=my.slice /home/yarl/foo.sh ~]$ systemctl --user status my.slice ● my.slice - Slice /my Loaded: loaded (/home/yarl/.config/systemd/user/my.slice; static) Active: active since Sun 2021-11-07 20:25:20 CET; 12s ago IO: 100.0K read, 10.0M written Tasks: 0 Memory: 3.2M CPU: 162ms CGroup: /user.slice/user-1000.slice/[email protected]/my.slice nov. 07 20:25:20 pbpro systemd[1229]: Created slice Slice /my.
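The per-process counters the question mentions are easy to look at for a single process; the hard part, as noted, is aggregating over short-lived children, which is what the slice solves. A sketch of reading the raw counters (assumes Linux procfs; the counters cover only the shell itself, not its children):

```shell
# Do a little I/O, then dump this shell's own /proc/<pid>/io counters.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=4096 count=16 2>/dev/null
if [ -r "/proc/$$/io" ]; then
    io=$(cat "/proc/$$/io")
else
    io="rchar/wchar counters not available (no procfs)"
fi
echo "$io"
rm -f "$f"
```

The rchar/wchar lines count all read/write syscall traffic, while read_bytes/write_bytes count what actually hit the storage layer.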
Utility to show disk IO read/write summary for a task/command/program
1,470,498,701,000
While an application is running, I can monitor disk bandwidth usage using Linux tools including dstat. Now I'd like to know how many sequential or random disk I/Os are occurring in the system. Does anyone know a way to achieve this?
You can write your own FUSE filesystem (which you can do using almost any scripting/programming language, even bash) that would just proxy filesystem calls to the underlying filesystem (and possibly translate paths), plus monitor whatever you want to monitor. Otherwise you might investigate the output of strace for the programs performing the I/O calls of interest, if possible.
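Once you have a trace of request offsets (e.g. collected with strace), classifying sequential vs random access is simple arithmetic. A toy classifier, assuming a fixed 4 KiB request size and a made-up offset list:

```shell
# Count how many successive request offsets continue sequentially from the
# previous one, assuming every request is 4 KiB.
offsets="0 4096 8192 1048576 1052672"
prev="" seq_n=0 rand_n=0
for off in $offsets; do
    if [ -n "$prev" ]; then
        if [ $(( off - prev )) -eq 4096 ]; then
            seq_n=$(( seq_n + 1 ))
        else
            rand_n=$(( rand_n + 1 ))
        fi
    fi
    prev=$off
done
echo "sequential steps: $seq_n, random jumps: $rand_n"
```

In this made-up trace, three requests follow their predecessor directly and one jumps elsewhere, so the workload would be classified as mostly sequential.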
Is there a way to monitor disk i/o patterns? (i.e. random or sequential i/o?)
1,470,498,701,000
Is there a command on Linux to remove a file but zero its contents first? So if I do something like this: rm -rf /var/cache/pacman/pkg/* it would overwrite each file in that directory with zeros, then erase it. I need it for compacting my VMware image files without creating a super big file containing zeros first.
The shred command can zero out a file. To do what you want, I think something like this should work find /var/cache/pacman/pkg -type f -exec shred -n 0 -z {} \; \ && rm -rf /var/cache/pacman/pkg/*
How can I zero files out inside a VMware image file so that their space can be reclaimed?
1,470,498,701,000
I have a disk attached to a VM as a whole device. I created a file system directly on that disk; it has no partitions. Now I have resized the disk from 100G to 200G. Do I need to do anything else to let the file system make full use of the new disk size? For file systems on a disk partition, we need to grow the partition that holds the file system, but I'm not sure whether we need to do anything like that in my scenario.
You will need to verify that the kernel has recognized the new size, by e.g. running fdisk -l /dev/<device> or cat /sys/block/<device>/size and checking that the total size matches the new size instead of the old one. If you are using paravirtualized drivers in a VM, most of them will handle this automatically. But if the old size is still displayed, echo 1 > /sys/block/<device>/device/rescan can be used to tell the kernel that the size of the device has changed. Once the kernel knows the new size of the whole device, there is no partition table to edit in your case, so you can proceed directly to extending the filesystem, using a filesystem-dependent tool. For ext2/ext3/ext4 filesystems, you can use resize2fs /dev/<device>, no matter if the filesystem is currently mounted or not. For XFS, the filesystem must be mounted to extend it, and the command will be xfs_growfs <mount point pathname>. Other filesystem types have their own rules and extension tools. If your distribution includes fsadm, it provides an unified method for resizing ext2/ext3/ext4 filesystems, ReiserFS and XFS (hopefully it will be extended to cover other filesystem types in the future). The command would be fsadm resize /dev/<device>.
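The grow step can be rehearsed safely with a file-backed ext2 image standing in for the whole-disk device (a sketch assuming e2fsprogs is installed; it skips otherwise):

```shell
# Grow a filesystem after its "disk" grows, using a file as the device.
command -v mke2fs >/dev/null 2>&1 && command -v resize2fs >/dev/null 2>&1 \
    || { echo "e2fsprogs not available, skipping"; exit 0; }
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1024 count=4096 2>/dev/null
mke2fs -F -q -b 1024 "$img"              # filesystem fills the 4 MiB "disk"
truncate -s 8M "$img"                    # the underlying "disk" doubles in size
e2fsck -f -p "$img" >/dev/null           # resize2fs wants a recently checked fs
resize2fs "$img" >/dev/null 2>&1         # no size argument: grow to fill device
blocks=$(dumpe2fs -h "$img" 2>/dev/null | sed -n 's/^Block count: *//p')
echo "block count after resize: $blocks (1 KiB blocks)"
rm -f "$img"
```

With no size argument, resize2fs grows the filesystem to fill its containing device, which is exactly the no-partition case from the question.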
How to resize a disk without partition?
1,470,498,701,000
I am using Fedora 16. My /dev/sda2, mounted on / (root) with something like 50G, got 100% full: [foampile@~ 13:13:39]> df Filesystem 1K-blocks Used Available Use% Mounted on rootfs 51606140 49025452 0 100% / devtmpfs 2988452 0 2988452 0% /dev tmpfs 2999424 96 2999328 1% /dev/shm /dev/sda2 51606140 49025452 0 100% / tmpfs 2999424 51992 2947432 2% /run tmpfs 2999424 0 2999424 0% /sys/fs/cgroup tmpfs 2999424 0 2999424 0% /media /dev/sda1 99150 79569 14461 85% /boot /dev/sda5 247972844 10782056 224594412 5% /home Q1: Is there a command, or an option to ls, which will list all the files under a directory recursively and sort them in descending order by size? I would like to see which files/dirs are hogging the device. Q2: My /home is relatively unused. Is there a way to repartition the disk and switch some disk space from /dev/sda5 (/home) to /dev/sda2? Thanks
Q1. Try something like sudo du -a -m -x | sort -k1n -r | head -n40. The -a flag tells du to list files as well as directories (du descends recursively by default). The -m flag displays sizes in MB. The -x stays on a single filesystem. This will list both files and directories, and only the 40 largest (because of the -n40 option to head). Some du implementations have a -t SIZE option to only display entries whose size exceeds SIZE. To list files only, you could try instead something like: find / -xdev -type f -size +1M -ls. That will list only files on the same filesystem as / whose size exceeds 1 MB. Q2. Almost certainly. But you should ask about this separately, or search (here or elsewhere) on keywords like "linux" and "repartition" because I've seen it discussed very often. Here are some previous Qs on this site: Change main partition size to install another distribution Can I resize the root partition without uninstalling and reinstalling Linux (or losing data)? How do I resize a partition in Ubuntu linux without losing data?
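Here is the same du pipeline exercised on a throwaway directory tree, so the shape of the output is predictable (sizes vary slightly by filesystem; the -m flag from the answer is swapped for -k so the small files still register):

```shell
# Demonstrate du | sort | head on a small, known directory tree.
top=$(mktemp -d)
mkdir -p "$top/a/b"
dd if=/dev/zero of="$top/a/big"     bs=1024 count=3072 2>/dev/null
dd if=/dev/zero of="$top/a/b/small" bs=1024 count=8    2>/dev/null
out=$(cd "$top" && du -a -k -x . | sort -k1 -n -r | head -n 4)
echo "$out"
rm -rf "$top"
```

The 3 MB file dominates the listing, exactly the "what is hogging the space" view the pipeline is meant to give.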
Disk size management
1,470,498,701,000
The output yielded by df is consistent with lsblk: debian8@hwy:~$ df -h /dev/sda1 Filesystem Size Used Avail Use% Mounted on /dev/sda1 47G 34G 14G 72% /media/xp_c debian8@hwy:~$ df -h /dev/sda3 Filesystem Size Used Avail Use% Mounted on /dev/sda3 92G 36G 52G 42% / The output yielded by df is inconsistent with lsblk: debian8@hwy:~$ df -h /dev/sda4 Filesystem Size Used Avail Use% Mounted on udev 10M 0 10M 0% /dev debian8@hwy:~$ df -h /dev/sda5 Filesystem Size Used Avail Use% Mounted on udev 10M 0 10M 0% /dev debian8@hwy:~$ df -h /dev/sda6 Filesystem Size Used Avail Use% Mounted on udev 10M 0 10M 0% /dev debian8@hwy:~$ df -h /dev/sda7 Filesystem Size Used Avail Use% Mounted on udev 10M 0 10M 0% /dev How to explain the output of lsblk and df -h? Sometimes df can't get the right info about a disk. sudo fdisk -l Disk /dev/sda: 232.9 GiB, 250059350016 bytes, 488397168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: dos Disk identifier: 0x3b2662b1 Device Boot Start End Sectors Size Id Type /dev/sda1 * 2048 97851391 97849344 46.7G 7 HPFS/NTFS/exFAT /dev/sda2 97851392 195508223 97656832 46.6G 83 Linux /dev/sda3 195508224 390819839 195311616 93.1G 83 Linux /dev/sda4 390821886 449411071 58589186 28G 5 Extended /dev/sda5 390821888 400584703 9762816 4.7G 82 Linux swap / Solaris /dev/sda6 400586752 439646207 39059456 18.6G b W95 FAT32 /dev/sda7 439648256 449411071 9762816 4.7G 7 HPFS/NTFS/exFAT
There are actually two problems. The first is the obvious one that others have pointed out: lsblk lists disks by device and df works on mounted filesystems. So lsblk /dev/sda3 is roughly equivalent to df -h / in your case, since /dev/sda3 is mounted on /. Except that it's not. Because lsblk lists the size of the partition while df lists the size of the filesystem. The difference (93.1GB vs 92GB for sda3 in your example) is a combination of unusable space (if any) and filesystem overhead. Some amount of space needs to go to keeping track of the filesystem itself rather than the contents of the files it stores.
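To see the filesystem-overhead part concretely, you can build a small ext2 image in a plain file (no root needed) and compare the image size with the capacity the filesystem itself reports. This is a sketch assuming mkfs.ext2 and dumpe2fs from e2fsprogs are installed; the path /tmp/ext2_demo.img is arbitrary.

```shell
set -e
img=/tmp/ext2_demo.img
if ! command -v mkfs.ext2 >/dev/null 2>&1; then
    echo "e2fsprogs not installed; skipping demo"; exit 0
fi
rm -f "$img"
truncate -s 16M "$img"     # a 16 MiB file standing in for a partition
mkfs.ext2 -q -F "$img"     # -F: format a regular file, no root needed
# Read the superblock without mounting. Usable capacity (free blocks x
# block size) is visibly below 16 MiB: inode tables, bitmaps and
# reserved blocks are the "filesystem overhead" that df subtracts but
# lsblk never sees.
dumpe2fs -h "$img" 2>/dev/null | grep -E 'Block count|Free blocks|Block size'
```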
Why does df give results inconsistent with lsblk?
1,470,498,701,000
If I dd my disk and compress the image with lzma or lzo the image is still big. The partition is 10GB used, 90GB available. But the image still around 20GB. I believe that is because I have copied many and deleted many files on that disk and the filesystem doesn't zero the unused blocks from those deletions. How can I zero the unused blocks in order to minimize the disk image? So that dirty bytes don't add up on my image. I'm using ext4.
The tool you think you're looking for is zerofree, as described in this duplicate question Clear unused space with zeros (ext3,ext4), and already available in most distributions. However, you seem to be asking how to take an image backup of a filesystem that excludes unused blocks. In this instance use fsarchiver, as described in this answer over on the AskUbuntu site.
How can I zero the unused blocks on my filesystem in order to minimize the compressed disk image size? [duplicate]
1,470,498,701,000
I'm trying to divide a USB thumb-drive into multiple partitions using gparted, but for some reason the Delete option is greyed out. How come? The drive is unmounted, but the only available option is to resize (which fails). The screenshot didn't want to include drop-down menus for some reason. I guess I can get around it using fdisk, but now I'm curious as to why gparted is behaving like this. I did start it as root. I eventually got it working by creating a new partition table. That is, until it failed on actually creating the new table: Any ideas why?
You don't have a partition table on the device, the entire sda drive is formatted to NTFS. If you want to create multiple partitions on it, you first need to create a partition table with Device -> Create Partition Table (note this will destroy the existing NTFS filesystem so if you have some data on it you need to make a backup first) and then add new partition(s) using Partition -> New.
How come I can't edit the partitions of this USB drive?
1,470,498,701,000
There are several hard disk partitions on my system (Linux josDeb 4.9.0-8-amd64 #1 SMP Debian 4.9.144-3.1 (2019-02-19) x86_64 GNU/Linux). Running:
bejo@josDeb:~$ ls -l /dev/disk/by-uuid
yields:
total 0
lrwxrwxrwx 1 root root 10 Apr 13 16:20 00FB-604A -> ../../sdb1
lrwxrwxrwx 1 root root 10 Apr 13 16:19 4425-7572 -> ../../sda1
lrwxrwxrwx 1 root root 10 Apr 13 16:19 8dc07aba-5729-4525-883f-09c32d1a9e98 -> ../../sda2
lrwxrwxrwx 1 root root 10 Apr 13 16:19 95a8efff-92d2-4e31-8632-bf7a640e100f -> ../../sda3
lrwxrwxrwx 1 root root 10 Apr 13 16:19 f5a05b5e-c3ed-4227-bb62-fe4576b72643 -> ../../sda4
Some partition uuids are long, and some are short. I would like to understand why. I thought uuids always have 16 bytes. How come I have uuids of different sizes?
Actual UUIDs are supposed to be 128-bit long and meant to be unique. Prior to this, various systems provided various serial numbers of various sizes to be distinguishable. So Linux just takes whatever serial it can find and sticks it in the /dev/by-uuid/ directory even if it doesn't match the UUID definition. That's the case for the FAT32 volume ID:
Sector offset  FAT32 EBPB offset  Length (bytes)  Contents
0x043          0x38               4               Cf. 0x027 for FAT12/FAT16 (Volume ID)
Historical description: Volume ID (serial number). Typically the serial number "xxxx-xxxx" is created by a 16-bit addition of both DX values returned by INT 21h/AH=2Ah (get system date) and INT 21h/AH=2Ch (get system time) for the high word, and another 16-bit addition of both CX values for the low word of the serial number. Alternatively, some DR-DOS disk utilities provide a /# option to generate a human-readable time stamp "mmdd-hhmm" built from BCD-encoded 8-bit values for the month, day, hour and minute instead of a serial number.
This is a 32-bit value, which can be displayed for example as 4425-7572. Most likely those two partitions are EFI System partitions since they have to be FAT32. You can get better information (probably coming from parsing several /dev/disks/by-*/ entries) with the blkid command instead:
# blkid
or limited to those short entries:
# blkid /dev/sda1 /dev/sdb1
The manual suggests using lsblk instead, which doesn't require root. So with the right options that would be lsblk -o +UUID,FSTYPE /dev/sda1 /dev/sdb1. E.g. here:
$ lsblk -o +UUID,FSTYPE /dev/sda1 /dev/sdb1
NAME MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT UUID      FSTYPE
sda1   8:1   0  200M  0 part /boot/efi  1234-5678 vfat
sdb1   8:17  1  200M  0 part            9ABC-DEF0 vfat
Short and long uuids under /dev/disk/by-uuid
1,470,498,701,000
I can't boot my Debian 9.5; the following errors appear. If I remember correctly, ACPI errors were shown every time I turned on the PC. But the computer always booted fine with these errors, so I didn't care much. The new error starts at "A start job is running..". When did the error start? I was going to sell my older HDD, so I erased the disk /dev/sdd via the command dd if=/dev/zero of=/dev/sdd bs=1M. The disk was mounted at /diskB_1TB. After erasing the HDD I shut the computer down and then disconnected the disk from the motherboard. After that I turned on the computer, but the error occurred for the first time. I've tried the procedure from https://askubuntu.com/questions/924170/error-on-ubuntu-boot-up-recovering-journal/924335?noredirect=1#comment1512824_924335 but it fixed nothing. I have 4 disks:
/dev/sda with Windows
/dev/sdb with Linux Debian
/dev/sdc as a 2TB data disk
/dev/sdd as a 1TB data disk (the former disk I erased and disconnected)
Is there anything I can still do in this situation? I'm pretty sure I deleted only the /dev/sdd disk. I can still access data (via terminal) located on /dev/sdc and on /dev/sdb where my /home/stepaiv3 is located. Moreover, I can normally boot into my Windows on /dev/sda.
Systemd assumes certain mounts are critical to the system, and as such a failure to mount one results in it switching to emergency mode. Systemd should have reconfigured its automount units when the device was disconnected, unless it appears in /etc/fstab or you configured it as a mount unit. So the issue is likely that you still have /diskB_1TB in your fstab. From your emergency-mode console, try editing /etc/fstab to remove the line with /diskB_1TB, then reboot.
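If you want to keep an entry for a removable data disk without risking emergency mode when it is absent, fstab's nofail option tells systemd the mount is not critical for boot. A hypothetical example line (the UUID and mount point below are placeholders, not taken from the question):

```
# /etc/fstab -- a non-critical data disk; boot continues if it is missing.
# The UUID is a made-up placeholder for illustration.
UUID=0000aaaa-bbbb-cccc-dddd-eeeeffff0000  /diskB_1TB  ext4  defaults,nofail,x-systemd.device-timeout=5s  0  2
```

The x-systemd.device-timeout option additionally caps how long systemd waits for the device to appear.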
Trouble on Debian boot up "Timed out waiting for device dev-disk..."
1,470,498,701,000
Here is my lsblk -a list:
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda             8:0    0   10G  0 disk
├─sda1          8:1    0    1G  0 part /boot
└─sda2          8:2    0    9G  0 part
  ├─cl-root   253:0    0    8G  0 lvm  /
  └─cl-swap   253:1    0    1G  0 lvm  [SWAP]
sdb             8:32   0   16G  0 disk
sr0            11:0    1 1024M  0 rom
drbd0         147:0    0    2G  0 disk
I want to remove drbd0. How do I do that?
(On a previous question) I suggested that you want to remove the DRBD device; however, running rm on nodes in /dev/ does not really achieve this. E.g. you should see that the kernel's view of block devices in /sys/class/block is not affected by such changes. Removing the device node will hide it from lsblk, but it would not cause any claimed resources to be released! This seems like a bad idea. Rebooting should remove any weird DRBD devices that you are not using any more (e.g. that were removed from the drbd config, but still exist for whatever strange reason). Module load/unload would be a way to avoid a reboot, if that's what you wanted: modprobe -r drbd to unload. It would require that you have no other DRBD devices in use. If you believed the drbd daemon was messing around with creating or renaming devices in /dev at the same time as udev/devtmpfs was running, and genuinely had a bug which left behind a stale device node (and some stale cache in lsblk), then you'd better do a full reboot to clear up the mess, because that would be some really broken software; nothing should be doing that anymore.
How to remove a block device from lsblk list on CentOS 7?
1,470,498,701,000
Have a Debian-based Linux that has been hardware-cloned a few times. There are long boot delays even though it has an SSD. Originally, there had been a little slowly spinning icon saying it was waiting for a job before it timed out. For this, I found a swap file referenced in /etc/fstab that didn't actually exist, so I deleted its line with the corresponding UUID and that "job/timeout" error went away... ...but it got replaced with a long blank screen with a blinking cursor that flashes a message before it displays the login prompt. The message it flashes is: Gave up waiting on suspend/resume device. /dev/sda1 [some disk metrics here] /dev/sda1 is the only partition that exists according to gparted. I'm trying to clear up this long boot delay and find the cause of what it's waiting on. Any help would be appreciated. Thanks! EDIT: I tried re-creating a swap file based off of this answer: https://superuser.com/questions/1204627/deleted-a-partition-now-getting-gave-up-waiting-for-suspend-resume-device-mes/1204634 but the same delay occurred, and the error message changed to some problem with journaling -- it's too quick for me to see. -- So I just deleted the swap file and commented out its reference in /etc/fstab, which brought me back to the problem above.
Happens after deleting the swap partition If a swap partition is deleted (e.g. on purpose when migrating from HD to SSD), the file /etc/initramfs-tools/conf.d/resume should be either completely empty or read RESUME=. Delete any UUID number. RESUME=NONE is not valid. $ sudo gvim /etc/initramfs-tools/conf.d/resume The initial RAM filesystem requires updating for these changes to take effect: $ sudo update-initramfs -u
Long Boot Time (SSD), black screen with blinking cursor, "gave up waiting on suspend/resume device"
1,470,498,701,000
I am having issues with a Seagate Laptop SSHD 1TB, PN: ST1000LM014-1EJ164-SSHD-8GB. dmesg | grep ata1: says this:
[    1.197516] ata1: SATA max UDMA/133 abar m2048@0xf7d36000 port 0xf7d36100 irq 31
[    6.548436] ata1: link is slow to respond, please be patient (ready=0)
[   11.232622] ata1: COMRESET failed (errno=-16)
[   16.588832] ata1: link is slow to respond, please be patient (ready=0)
[   21.269019] ata1: COMRESET failed (errno=-16)
[   26.621223] ata1: link is slow to respond, please be patient (ready=0)
[   56.322386] ata1: COMRESET failed (errno=-16)
[   56.322449] ata1: limiting SATA link speed to 3.0 Gbps
[   61.374591] ata1: COMRESET failed (errno=-16)
[   61.374651] ata1: reset failed, giving up
Further, I don't see the drive in GParted. Does this mean this drive is dead or semi-dead?
Since the issue is with the link, rather than an actual error reported by the drive itself, technically it means that either the SATA port, or the SATA cable, or the drive is having issues. In all likelihood though the drive is dead. (But try another cable if you have one!)
Did this drive die?
1,470,498,701,000
I'm trying to monitor the disk activity of each process. One way that I found to do this is to read the /proc/pid/io file and compare fields with the previous read. That works fine, except my monitoring process seems to be able to read only some io files (for example, it has no permission to read apache's). How can I read the io of the others? Also, perhaps there is a better way of achieving this goal? Edit Obviously I could run the process as root, but I'd like to avoid that
Use iotop. It should be available in your repo for a Redhat/Centos/Fedora machine (if it is not already installed). It outputs info similar to top, but instead of the CPU/memory stats, you get the IO stats (disk reads, writes and swapin). The options -p, -u and --only might be of interest to you. For example, to see the IO activity of the process with ID 5435, use: iotop -p 5435 From the man page:
-p PID, --pid=PID A list of processes/threads to monitor (all by default).
-u USER, --user=USER A list of users to monitor (all by default)
-P, --processes Only show processes. Normally iotop shows all threads.
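If you still want to sample /proc/<pid>/io yourself (iotop reads essentially the same counters), a minimal sketch looks like the following; it reads its own io file, performs some writes, and prints the write_bytes delta. Reading another user's io file will still require matching privileges, which is the limitation described in the question.

```shell
set -e
io=/proc/self/io   # any readable /proc/<pid>/io works the same way

# Helper: extract one counter (e.g. write_bytes) from an io snapshot.
get_counter() { awk -v k="$1:" '$1 == k { print $2 }' "$io"; }

before=$(get_counter write_bytes)

# Generate some write I/O we can observe.
dd if=/dev/zero of=/tmp/io_demo.bin bs=4k count=16 conv=fsync status=none

after=$(get_counter write_bytes)
echo "write_bytes delta: $((after - before))"
```

A real monitor would loop over /proc/[0-9]*/io, remembering the previous snapshot per PID and diffing read_bytes/write_bytes each interval.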
how to read any process' /proc/pid/io
1,470,498,701,000
I am looking for a way to compress swap on disk. I am not looking for a wider discussion of alternative solutions; see the discussion at the end. I have tried... Using a compressed zfs zvol for swap: NOT WORKING. The setup of it works, swapon works, swapping does happen somewhat, so I guess one could argue it's technically a working solution, but exactly how "working" is your working solution if it's slower than floppy disk access and causes your system to completely freeze forever 10 times out of 10? Tried it several times: as soon as the system enters memory-pressure conditions, everything just freezes. I tried to use it indirectly as well, using losetup, and even tried using a zfs zvol as a backing device for zram. No difference, always the same results: incredibly slow write/read rates, and the system inevitably dies under pressure. BTRFS: only supports uncompressed swapfiles. Apparently it only supports uncompressed loop images as well, because I tried dd-ing an empty file, formatting it with regular ext2, compressing it, mounting it as a loop device, and creating a swapfile inside of it. It didn't work even when I mounted btrfs with forced compression enabled: compsize showed an ext2 image compression ratio of exactly 1.00. Zswap: it's just a buffer between RAM and regular disk swap. The regular disk swap keeps on being regular disk swap; zswap uncompresses pages before writing them there. Zram: has had a backing-device option since its inception as compcache, and one would think it is a perfect candidate to have had compressed disk swap for years. No such luck. While you can write back compressed in-ram pages to disk at will, the pages get decompressed before they're written. Unlike zswap, it doesn't write same-filled and zero-filled pages though, which both saves i/o, slightly improves throughput, and warrants the use of loop-mounted sparse files as backing_dev.
So far, this is the best option I have found for swap optimization on low-end devices, despite it still lacking disk compression. Any ideas what else I can try? Maybe there's some compressed block-device layer that I don't know of, that can compress anything written to it, no filesystem required? Maybe there's some compressed overlay I could make use of? Not done in FUSE though, as FUSE itself is subject to swapping, unless you know a way to prevent it from being swapped out. Since I don't see this being explored much, you're welcome to suggest any madness you like. Please, let's throw stuff at the wall and see what sticks. For experts: if any of you have read, or even written, any part of the Linux source code that relates to this problem, please describe in as much detail as possible why you think this hasn't been implemented yet, and how you think it could be implemented, if you have any idea. And obviously, please do implement it if you can; that'll be awesome. Discussion Before you mark this as a duplicate: I'm aware there have been a few questions like this around Stack Exchange, but none I saw had a working answer, and few had any further feedback. So I'll attempt to describe the details, and sort of aggregate everything here, in hopes that someone smarter than me can figure this out. I'm not a programmer, just a user and a script kiddie, so that should be a pretty low bar to jump over.
just buy more ram, it's cheap
get an ssd
swap is bad
compression is slow anyway, why bother
If all you have to say is any of the above quotes, go away. Because the argument is optimization. However cheap RAM is these days, it's not free. Swap is always needed; the fact that it's good for the system to have it has been established for years now. And compression is nothing; even "heavy" algorithms perform stupidly fast on any processors made in the last decade.
And lastly, sure, compression might actually become a bottleneck if you're using an ssd, but not everyone prioritizes speed over disk space usage, and hdd drives, which DO benefit grossly from disk compression, are still too popular and plentiful to dismiss.
I don't have a concrete answer for you: something you might explore is LVM. LVM is primarily an alternative form of partitioning. However, technically LVM's physical volumes can be any block device. It provides logical block devices ultimately backed by physical ones. Since LVM logical volumes are block devices, they can usually be used for swap. LVM has a feature called VDO which provides compression: The Virtual Data Optimizer (VDO) feature provides inline block-level deduplication, compression, and thin provisioning for storage. You can manage VDO as a type of Logical Volume Manager (LVM) Logical Volumes (LVs), similar to LVM thin-provisioned volumes. Trouble you might run into Any form of compression requires some memory. My main concern with any solution that was not designed to work with swap is that it may dynamically request memory from the kernel to compress pages. Since it would be compressing because the kernel needed to free up RAM, requesting RAM would be prone to cause failure. It is of course possible that drivers are written to be aware of this and preemptively request all the RAM they might need. The point is this might be a problem if the drivers were not written with swap in mind.
How to compress disk swap
1,470,498,701,000
We are thinking about changing all Linux fstab configuration to work with UUIDs instead of the current configuration. Some of the disks are non-RAID and some of the disks are RAID10. I searched Google and found a complaint about using UUIDs with RAID1: "Unfortunately you MUST NOT use UUID in /etc/fstab if you use software RAID1. Why? Because the RAID volume itself and the first element of the mirror will appear to have the same filesystem UUID. If the mirror breaks or for any other reason the md device isn't started at boot, the system will mount any random underlying disk instead, clobbering your mirror. Then you'll need a full resync. Bad juju." So I just want to know: can we use UUIDs for RAID10, and in which cases (RAID configurations) should UUIDs not be used? Second: in a few lines, what are the benefits of using UUIDs?
Answer to your second question: a UUID allows you to uniquely identify a device. Devices are assigned as /dev/sda, /dev/sdb, etc. depending on the order the system discovers them. While the drive the system boots on is always the first, for the others the name assignment depends on the order of discovery and might change after a reboot. Also, imagine you have drives /dev/sdc and /dev/sdd, and you physically remove the first drive; after reboot, what was known as /dev/sdd is now called /dev/sdc. This makes identification of devices ambiguous. UUIDs avoid all ambiguity; as the UUID is stored in the superblock (for a block device), it pertains to the device itself.
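As a concrete illustration of the unambiguous naming, an fstab entry by UUID instead of device path might look like this (the UUID shown is a made-up placeholder; on a real system you would copy it from the output of blkid or lsblk -o NAME,UUID):

```
# Fragile: depends on discovery order.
# /dev/sdc1  /data  ext4  defaults  0  2

# Stable: survives reordering or removal of other drives.
# The UUID below is a placeholder for illustration.
UUID=1b2c3d4e-5f60-7182-93a4-b5c6d7e8f901  /data  ext4  defaults  0  2
```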
In which cases will it be problematic to configure UUIDs in fstab?
1,470,498,701,000
We have 3 servers, each of which is running RHEL 7.6. Each machine has 64G of RAM and 15 CPUs. We are preparing to install a set of services --- kafka, zookeeper, schema registry --- on all of the machines. Each service is based on a Docker container. We are planning to install Docker on all machines, and each machine will have three Docker containers. Do Docker containers have a negative impact when all containers are on the OS disk? Should we add additional disks on each machine and allocate the Docker containers to the additional disk? What is the best practice here?
I'll answer this from the theoretical standpoint; I don't have experience with Kafka on Docker. Docker is largely an isolation technology, NOT a virtual machine. This means that it is much lighter weight than you might expect. A large portion of it is built around namespaces and mounts, including bind mounts. It will depend a little on what you are asking Docker to do: If you use a bind mount or a Docker volume then these are stored directly as files on the host system. Their performance overhead should be no more than that of a Linux bind mount, because that's exactly what you get. This performance overhead is near zero. Other storage volumes such as those backed by Amazon S3 can come with an overhead. In short, the result on disk should generally be very similar to that of running on the host system. Docker just creates a neat sandbox and calls it a container.
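One common practice, sketched below as a hypothetical compose fragment, is to pin each service's data directory to a bind mount, so you can later move the I/O to a dedicated disk simply by mounting that disk at the host path. The image names and paths here are illustrative, not taken from the question's actual setup.

```yaml
# docker-compose.yml (illustrative sketch)
services:
  kafka:
    image: confluentinc/cp-kafka        # example image name
    volumes:
      # Bind mount: near-zero overhead, data lives at a host path you
      # control. Mounting a dedicated disk at /srv/kafka-data moves the
      # I/O off the OS disk without touching the container config.
      - /srv/kafka-data:/var/lib/kafka/data
  zookeeper:
    image: confluentinc/cp-zookeeper    # example image name
    volumes:
      - /srv/zookeeper-data:/var/lib/zookeeper/data
```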
Do Docker containers cause slow disk/OS performance?
1,470,498,701,000
I'm using Fedora server and I have an unused partition on /dev/sda. How can I find it with parted or another command? I can only list mounted and used partitions.
with parted you can see unallocated space if you use the print free command, like parted /dev/sda print free. Or issue the parted /dev/sda command, and then inside parted type print free. Example:
# parted /dev/sda
GNU Parted 2.1
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print free
Model: DELL PERC H710 (scsi)
Disk /dev/sda: 3999GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number  Start   End     Size    File system     Name  Flags
        17.4kB  1049kB  1031kB  Free Space
 1      1049kB  211MB   210MB   ext4            boot
 2      211MB   4506MB  4295MB
 3      4506MB  8801MB  4295MB  linux-swap(v1)
 4      8801MB  3999GB  3990GB  ext4
        3999GB  3999GB  1032kB  Free Space
How to find unallocated disk partition with parted or another tool?
1,470,498,701,000
I am testing my new PCI-E SSD in Linux. I am using the following commands to test its performance (reference: https://www.thomas-krenn.com/en/wiki/Linux_I/O_Performance_Tests_using_dd):
(1) dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=2048 --> 2.2GB/sec
(2) dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=2048 oflag=direct --> 2.2GB/sec
(3) dd if=/dev/zero of=/mnt/nvme0n1/tempfile bs=1M count=2048 --> 80MB/sec
(4) dd if=/dev/zero of=/mnt/nvme0n1/tempfile bs=1M count=2048 oflag=direct --> 800MB/sec
My guesses are as follows: (3, 4) are writing through the filesystem (formatted as NTFS for some reasons), whereas (1, 2) are writing to the block device directly, which avoids the filesystem's overhead. Am I correct or not? Can you give me some explanation? Thanks
I would quibble over your wording.  I would say that commands (1) and (2) are over-writing the filesystem, if any; i.e., ignoring it and destroying it (if there is one).  They will behave the same whether or not the device has a filesystem on it beforehand.  Meanwhile, commands (3) and (4) are writing into the filesystem, or through it. Yes, of course the fact that commands (3) and (4) are going through the filesystem code is why you get the performance difference.  (Continued in paragraph 4.) I don't see why the fact that the filesystem is NTFS is really significant.  I suspect that you would get similar results with any filesystem type; e.g., one of the ext family. To build upon point 2: First of all, filesystem I/O largely ignores bs=anything and oflag=direct.  The filesystem code probably treats a write of 1M as 2048 writes of 512 bytes, or perhaps 256 writes of 4K bytes.  Secondly, the filesystem code has the job of maintaining filesystem integrity.  That means, each time it extends your tempfile, it must take blocks from the free list and allocate them to the file.  This means it must be continually modifying the free list and the file's inode (or equivalent, on whatever the filesystem type is).  This means not only (potentially) three actual writes for every user-visible write, but that the writes would be non-contiguous, and the drive would be seeking all over the place.  Further, if the filesystem has been in use for a while, the free list may have gotten out of order, and so the blocks allocated to the file may be non-contiguous.
Suggestions:
Do tests (3) and (4) after a mkfs, so the filesystem is clean.  Then repeat them with the file already existing.  This should decrease the amount of bookkeeping I/O.
Repeat all the tests with bs in the 512-4K range.  The results for tests (3) and (4) should be nearly unchanged, while those for tests (1) and (2) should be much lower.
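The block-size point is easy to reproduce without touching a real device: the sketch below writes the same 4 MiB to a scratch file once with large blocks and once with small ones. Absolute throughput depends entirely on your machine and on whether /tmp is tmpfs; the point is only that the small-bs pass issues 8192 write calls instead of 4, which is the per-write overhead the answer describes. The path is a placeholder for this illustration.

```shell
set -e
out=/tmp/dd_demo.bin

# One pass with 1 MiB blocks (4 write calls)...
dd if=/dev/zero of="$out" bs=1M count=4 conv=fsync status=none

# ...and the same amount of data with 512-byte blocks (8192 write calls).
dd if=/dev/zero of="$out" bs=512 count=8192 conv=fsync status=none

# Both passes produced an identical 4 MiB file.
stat -c '%s bytes' "$out"
```

Add time in front of each dd (and oflag=direct, on a filesystem that supports it) to compare the two passes the way the question does.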
Performance differences when writing into /dev/sda and into /mnt/sda/tempfile
1,470,498,701,000
I have been using ext4 filesystems for a long time, and this is the first time I have seen such weird behavior from an ext4 filesystem. There is an ext4 filesystem on /dev/dm-2. An I/O error happened in the underlying device, and the filesystem was remounted read-only. That is fine and expected per the configuration. But for some unknown reason, it is now not possible to completely unmount the filesystem. The command umount /the/mount/point returned success. Further runs of that command say "Not mounted". The mount entry is gone from the output of the mount command. The filesystem is not mounted anywhere else. But. First: I can't see the usual EXT4-fs: unmounting filesystem text in dmesg. In fact, there is nothing in dmesg. Second thing (it speaks for itself that something is wrong):
root# cat /proc/meminfo | grep dirty
Dirty: 9457728 kB
root# time sync
real 0m0.012s
user 0m0.000s
sys 0m0.002s
root# cat /proc/meminfo | grep dirty
Dirty: 9453632 kB
Third thing: the debug directory /sys/fs/ext4/dm-2 still exists. I tried writing "1" to /sys/fs/ext4/dm-2/simulate_fail in the hope that it would bring the filesystem down. But it does nothing and shows nothing in dmesg. Finally, the fourth thing, which makes the device unusable:
root# e2fsck -fy /dev/dm-2
e2fsck 1.46.5 (30-Dec-2021)
/dev/dm-2 is in use.
e2fsck: Cannot continue, aborting.
I understand that it is possible to reboot, etc. This question is not about solving some simple newbie problem. I want somebody experienced with the ext4 filesystem to help me understand what can cause this behavior. The dm-2 device is not mounted anywhere else, not bind-mounted, not in use by anything else. There was nothing else using the dirty cache at the moment of measuring it with cat /proc/meminfo | grep dirty. The unmount call which succeeded was not an MNT_DETACH (no -l flag was used). Despite that, it succeeded nearly immediately (which is weird). The mount point is no longer mounted: but as I described above, it can easily be seen that the filesystem is NOT unmounted.
Update: as A.B pointed out, I tried to check if the filesystem is still mounted in a different namespace. I didn't mount it in a different namespace, so I didn't expect to see anything. But, surprisingly, it was mounted in a different namespace, surprisingly this (username changed): 4026533177 mnt 1 3411291 an-unrelated-nonroot-user xdg-dbus-proxy --args=43 I tried to enter that namespace and unmount it using nsenter -t 3411291 -m -- umount /the/mount/point It resulted in Segmentation fault (Core dumped), and this in dmesg [970130.866738] Buffer I/O error on dev dm-2, logical block 0, lost sync page write [970130.867925] EXT4-fs error (device dm-2): ext4_mb_release_inode_pa:4846: group 9239, free 2048, pa_free 4 [970130.870291] Buffer I/O error on dev dm-2, logical block 0, lost sync page write [970130.949466] divide error: 0000 [#1] PREEMPT SMP PTI [970130.950677] CPU: 49 PID: 4118804 Comm: umount Tainted: P W OE 6.1.68-missmika #1 [970130.953056] Hardware name: OEM X79G/X79G, BIOS 4.6.5 08/02/2022 [970130.953121] RIP: 0010:mb_update_avg_fragment_size+0x35/0x120 [970130.953121] Code: 41 54 53 4c 8b a7 98 03 00 00 41 f6 44 24 7c 80 0f 84 9a 00 00 00 8b 46 14 48 89 f3 85 c0 0f 84 8c 00 00 00 99 b9 ff ff ff ff <f7> 7e 18 0f bd c8 41 89 cd 41 83 ed 01 0f 88 ce 00 00 00 0f b6 47 [970130.957139] RSP: 0018:ffffb909e3123a28 EFLAGS: 00010202 [970130.957139] RAX: 000000000000082a RBX: ffff91140ac554d8 RCX: 00000000ffffffff [970130.957139] RDX: 0000000000000000 RSI: ffff91140ac554d8 RDI: ffff910ead74f800 [970130.957139] RBP: ffffb909e3123a40 R08: 0000000000000000 R09: 0000000000004800 [970130.957139] R10: ffff910ead74f800 R11: ffff9114b7126000 R12: ffff910eb31d2000 [970130.957139] R13: 0000000000000007 R14: ffffb909e3123b80 R15: ffff911d732beffc [970130.957139] FS: 00007f6d94ab4800(0000) GS:ffff911d7fcc0000(0000) knlGS:0000000000000000 [970130.957139] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [970130.957139] CR2: 00003d140602f000 CR3: 0000000365690002 CR4: 
00000000001706e0 [970130.957139] Call Trace: [970130.957139] <TASK> [970130.957139] ? show_regs.cold+0x1a/0x1f [970130.957139] ? __die_body+0x24/0x70 [970130.957139] ? __die+0x2f/0x3b [970130.957139] ? die+0x34/0x60 [970130.957139] ? do_trap+0xdf/0x100 [970130.957139] ? do_error_trap+0x73/0xa0 [970130.957139] ? mb_update_avg_fragment_size+0x35/0x120 [970130.957139] ? exc_divide_error+0x3f/0x60 [970130.957139] ? mb_update_avg_fragment_size+0x35/0x120 [970130.957139] ? asm_exc_divide_error+0x1f/0x30 [970130.957139] ? mb_update_avg_fragment_size+0x35/0x120 [970130.957139] ? mb_set_largest_free_order+0x11c/0x130 [970130.957139] mb_free_blocks+0x24d/0x5e0 [970130.957139] ? ext4_validate_block_bitmap.part.0+0x29/0x3e0 [970130.957139] ? __getblk_gfp+0x33/0x3b0 [970130.957139] ext4_mb_release_inode_pa.isra.0+0x12e/0x350 [970130.957139] ext4_discard_preallocations+0x22e/0x490 [970130.957139] ext4_clear_inode+0x31/0xb0 [970130.957139] ext4_evict_inode+0xba/0x750 [970130.989137] evict+0xd0/0x180 [970130.989137] dispose_list+0x39/0x60 [970130.989137] evict_inodes+0x18e/0x1a0 [970130.989137] generic_shutdown_super+0x46/0x1b0 [970130.989137] kill_block_super+0x2b/0x60 [970130.989137] deactivate_locked_super+0x39/0x80 [970130.989137] deactivate_super+0x46/0x50 [970130.989137] cleanup_mnt+0x109/0x170 [970130.989137] __cleanup_mnt+0x16/0x20 [970130.989137] task_work_run+0x65/0xa0 [970130.989137] exit_to_user_mode_prepare+0x152/0x170 [970130.989137] syscall_exit_to_user_mode+0x2a/0x50 [970130.989137] ? __x64_sys_umount+0x1a/0x30 [970130.989137] do_syscall_64+0x6d/0x90 [970130.989137] ? syscall_exit_to_user_mode+0x38/0x50 [970130.989137] ? __x64_sys_newfstatat+0x22/0x30 [970130.989137] ? do_syscall_64+0x6d/0x90 [970130.989137] ? exit_to_user_mode_prepare+0x3d/0x170 [970130.989137] ? syscall_exit_to_user_mode+0x38/0x50 [970130.989137] ? __x64_sys_close+0x16/0x50 [970130.989137] ? do_syscall_64+0x6d/0x90 [970130.989137] ? 
exc_page_fault+0x8b/0x180 [970130.989137] entry_SYSCALL_64_after_hwframe+0x64/0xce [970130.989137] RIP: 0033:0x7f6d94925a3b [970130.989137] Code: fb 43 0f 00 f7 d8 64 89 01 48 83 c8 ff c3 90 f3 0f 1e fa 31 f6 e9 05 00 00 00 0f 1f 44 00 00 f3 0f 1e fa b8 a6 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 05 c3 0f 1f 40 00 48 8b 15 c1 43 0f 00 f7 d8 [970130.989137] RSP: 002b:00007ffdd60f7d08 EFLAGS: 00000246 ORIG_RAX: 00000000000000a6 [970130.989137] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00007f6d94925a3b [970130.989137] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 000055ca1c6f7d60 [970130.989137] RBP: 000055ca1c6f7b30 R08: 0000000000000000 R09: 00007ffdd60f6a90 [970130.989137] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000 [970130.989137] R13: 000055ca1c6f7d60 R14: 000055ca1c6f7c40 R15: 000055ca1c6f7b30 [970130.989137] </TASK> [970130.989137] Modules linked in: 88x2bu(OE) erofs dm_zero zram ext2 hfs hfsplus xfs kvdo(OE) dm_bufio mikasecfs(OE) simplefsplus(OE) melon(OE) mikatest(OE) iloveaki(OE) tls vboxnetadp(OE) vboxnetflt(OE) vboxdrv(OE) ip6t_REJECT nf_reject_ipv6 ip6t_rt ipt_REJECT nf_reject_ipv4 xt_recent xt_tcpudp nft_limit xt_limit xt_addrtype xt_pkttype nft_chain_nat xt_MASQUERADE xt_nat nf_nat xt_conntrack nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nft_compat nf_tables binfmt_misc nfnetlink nvidia_uvm(POE) nvidia_drm(POE) intel_rapl_msr intel_rapl_common nvidia_modeset(POE) sb_edac nls_iso8859_1 x86_pkg_temp_thermal intel_powerclamp coretemp nvidia(POE) snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio snd_hda_codec_hdmi cfg80211 joydev snd_hda_intel input_leds snd_intel_dspcfg snd_intel_sdw_acpi snd_hda_codec kvm_intel snd_hda_core snd_hwdep kvm snd_pcm snd_seq_midi rapl snd_seq_midi_event snd_rawmidi intel_cstate serio_raw pcspkr snd_seq video wmi snd_seq_device snd_timer drm_kms_helper fb_sys_fops snd syscopyarea sysfillrect sysimgblt soundcore [970130.989137] ioatdma dca mac_hid sch_fq_codel dm_multipath scsi_dh_rdac 
scsi_dh_emc scsi_dh_alua msr parport_pc ppdev lp parport drm efi_pstore ip_tables x_tables autofs4 raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx raid1 raid0 multipath linear crct10dif_pclmul hid_generic crc32_pclmul ghash_clmulni_intel sha512_ssse3 sha256_ssse3 sha1_ssse3 usbhid cdc_ether aesni_intel usbnet uas hid crypto_simd r8152 cryptd usb_storage mii psmouse ahci i2c_i801 r8169 lpc_ich libahci i2c_smbus realtek [last unloaded: 88x2bu(OE)] [970131.024615] ---[ end trace 0000000000000000 ]--- [970131.203209] RIP: 0010:mb_update_avg_fragment_size+0x35/0x120 [970131.204344] Code: 41 54 53 4c 8b a7 98 03 00 00 41 f6 44 24 7c 80 0f 84 9a 00 00 00 8b 46 14 48 89 f3 85 c0 0f 84 8c 00 00 00 99 b9 ff ff ff ff <f7> 7e 18 0f bd c8 41 89 cd 41 83 ed 01 0f 88 ce 00 00 00 0f b6 47 [970131.207841] RSP: 0018:ffffb909e3123a28 EFLAGS: 00010202 [970131.209048] RAX: 000000000000082a RBX: ffff91140ac554d8 RCX: 00000000ffffffff [970131.210284] RDX: 0000000000000000 RSI: ffff91140ac554d8 RDI: ffff910ead74f800 [970131.211512] RBP: ffffb909e3123a40 R08: 0000000000000000 R09: 0000000000004800 [970131.212749] R10: ffff910ead74f800 R11: ffff9114b7126000 R12: ffff910eb31d2000 [970131.213977] R13: 0000000000000007 R14: ffffb909e3123b80 R15: ffff911d732beffc [970131.215181] FS: 00007f6d94ab4800(0000) GS:ffff911d7fcc0000(0000) knlGS:0000000000000000 [970131.216370] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [970131.217553] CR2: 00003d140602f000 CR3: 0000000365690002 CR4: 00000000001706e0 [970131.218740] note: umount[4118804] exited with preempt_count 1 Machine still works, it's possible to sync other filesystems: root# sync -f / root# But not global sync: root# sync (goes D state forever) The dirty cache related to that ghost filesystem is not gone, the filesystem still "mounted" What can be the cause of these issues?
Disclaimer: I can't and won't explain in this answer why a partial kernel failure was triggered. This looks like a kernel bug, possibly triggered by the I/O error conditions. TL;DR Having a filesystem still in use can happen when a new mount namespace inherits a mounted filesystem from the original mount namespace, but the propagation settings between both didn't make the unmount in the original namespace propagate to the new namespace. The command findmnt -A -o +PROPAGATION also displays the propagation status of every visible mountpoint in its output. Normally this is not supposed to happen in a systemd environment, because systemd very early makes / a shared mount (rather than the kernel default of private), thus allowing unmounts to propagate within their shared group. I would thus expect this to happen more easily in a non-systemd environment, or anyway if a tool explicitly uses --make-private on some mounts. --make-private still has its use, especially for virtual pseudo-filesystems. One way to prevent this from happening could be, before a new mount namespace is created, to make such a mountpoint shared with mount --make-shared .... I made an experiment to illustrate what happens with shared versus non-shared mounts. I attempted to make sure the experiment works the same in a systemd and a non-systemd environment.
Experiment This can be reproduced like below (some values such as /dev/loop0 have to be adapted): # truncate -s $((2**20)) /tmp/test.raw # mkfs.ext4 -Elazy_itable_init=0,lazy_journal_init=0 -L test /tmp/test.raw mke2fs 1.47.0 (5-Feb-2023) Filesystem too small for a journal Discarding device blocks: done Creating filesystem with 1024 1k blocks and 128 inodes Allocating group tables: done Writing inode tables: done Writing superblocks and filesystem accounting information: done # losetup -f --show /tmp/test.raw /dev/loop0 # mkdir -p /mnt/propagation/test This will allow to change later the propagation for the experiment without having to alter the whole system by turning a directory into a mountpoint: # mount --bind /mnt/propagation /mnt/propagation Now different experiments can have different outcomes. unshare(1) tells: unshare since util-linux version 2.27 automatically sets propagation to private in a new mount namespace to make sure that the new namespace is really unshared. It’s possible to disable this feature with option --propagation unchanged. Note that private is the kernel default. Other tools might do otherwise. Here we'll change the underlying /mnt/propagation mountpoint instead and always use --propagation unchanged. This avoids getting different results for this experiment on non-systemd (kernel default: / is private) and systemd (systemd default: / is shared) systems. 
with shared # mount --make-shared /mnt/propagation # mount /dev/loop0 /mnt/propagation/test # ls /mnt/propagation/test lost+found # cat /proc/self/mountinfo | grep /mnt/propagation/test 862 854 7:0 / /mnt/propagation/test rw,relatime shared:500 - ext4 /dev/loop0 rw Have a second (root) shell and unshare into a new mount namespace (I'll change the prompt to NMNS# to distinguish it): # unshare -m --propagation unchanged -- NMNS# cat /proc/self/mountinfo | grep /mnt/propagation/test 1454 1453 7:0 / /mnt/propagation/test rw,relatime shared:500 - ext4 /dev/loop0 rw NMNS# cd /mnt/propagation/test The same shared:500 links the mount in the two namespaces: umounting from one will unmount it from the other. In the original shell (in the original mount namespace) unmount it: # umount /mnt/propagation/test umount: /mnt/propagation/test: target is busy. Free the resource usage: NMNS# cd / # umount /mnt/propagation/test # This time it worked. And observe it also disappeared in the new mount namespace. NMNS# cat /proc/self/mountinfo | grep /mnt/propagation/test NMNS# The kernel dmesg will have logged the filesystem is unmounted (everywhere), eg: EXT4-fs (loop0): unmounting filesystem e74e0353-ace0-4eff-86ae-30e288db853e. Quit the shell in the new mount namespace to clean up. with private # mount --make-private /mnt/propagation # mount /dev/loop0 /mnt/propagation/test # cat /proc/self/mountinfo | grep /mnt/propagation/test 857 854 7:0 / /mnt/propagation/test rw,relatime - ext4 /dev/loop0 rw Not shared anymore. Elsewhere: # unshare -m --propagation unchanged -- NMNS# cat /proc/self/mountinfo | grep /mnt/propagation/test 1454 1453 7:0 / /mnt/propagation/test rw,relatime - ext4 /dev/loop0 rw NMNS# echo $$ 232529 # umount /mnt/propagation/test # e2fsck /dev/loop0 e2fsck 1.47.0 (5-Feb-2023) /dev/loop0 is in use. e2fsck: Cannot continue, aborting. # The filesystem stayed mounted in the new mount namespace. 
To find the rogue namespace(s) from the original, one can run something like this:

# for pid in $(lsns --noheadings -t mnt -o PID); do nsenter -t "$pid" -m -- findmnt /mnt/propagation/test && echo $pid; done
nsenter: failed to execute findmnt: No such file or directory
TARGET                SOURCE     FSTYPE OPTIONS
/mnt/propagation/test /dev/loop0 ext4   rw,relatime
232529
#

Note: nsenter: failed to execute findmnt: No such file or directory happened where the mount namespace was for a running LXC container where findmnt was not available. The loop did find the PID of the process in the new namespace having the mountpoint (note: in real cases, this could be another PID in the same mount namespace; it doesn't matter). In extreme cases, a dedicated command able to change mount namespace, check mounts and perform (u)mounts all-in-one would be required. This mount can be removed either by removing the remaining holding resource (PID 232529), which might be needed if the process actively uses the mounted filesystem (preventing umount from succeeding), or by unmounting it in this namespace:

# nsenter -t 232529 -m -- umount /mnt/propagation/test
# e2fsck /dev/loop0
e2fsck 1.47.0 (5-Feb-2023)
test: clean, 11/128 files, 58/1024 blocks

Useful references:

Mount namespaces and shared subtrees [LWN.net]
Mount namespaces, mount propagation, and unbindable mounts [LWN.net]
Why a filesystem is unmounted but still in use?
1,470,498,701,000
I used dd to clone a smaller disk onto a larger disk, however now when booting I'm getting dmesg errors of: [Fri Sep 30 11:48:43 2022] GPT:Primary header thinks Alt. header is not at the end of the disk. [Fri Sep 30 11:48:43 2022] GPT:1953525167 != 3907029167 [Fri Sep 30 11:48:43 2022] GPT:Alternate GPT header not at the end of the disk. [Fri Sep 30 11:48:43 2022] GPT:1953525167 != 3907029167 [Fri Sep 30 11:48:43 2022] GPT: Use GNU Parted to correct GPT errors. How can I resolve this? The error indicates to use parted, but I'm unsure as to what commands to run?
You don't need to do anything special, just use p to print information about the disk, parted will tell you the partition table is wrong and ask you what to do so simply tell it to Fix it: # parted /dev/loop0 GNU Parted 3.5 Using /dev/loop0 Welcome to GNU Parted! Type 'help' to view a list of commands. (parted) p Warning: Not all of the space available to /dev/loop0 appears to be used, you can fix the GPT to use all of the space (an extra 10485760 blocks) or continue with the current setting? Fix/Ignore? Fix ... (of course replace /dev/loop0 with your disk, e.g. /dev/sda).
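To see concretely what the kernel message is comparing, here is a small Python sketch (not part of parted; the header field offsets follow the UEFI GPT layout, and the image is synthetic) that reads the primary GPT header of a raw image and checks whether its "alternate LBA" field points at the last sector, which is exactly the 1953525167 != 3907029167 mismatch you get after dd-ing a smaller disk onto a larger one:

```python
import struct

SECTOR = 512

def gpt_alternate_mismatch(image_bytes):
    """Return (alternate_lba, last_lba) for a GPT image.

    The primary GPT header lives in LBA 1; its 'alternate LBA' field
    (offset 32, little-endian u64) must point at the device's last sector,
    otherwise the kernel logs 'Alternate GPT header not at the end of the disk'.
    """
    hdr = image_bytes[SECTOR:2 * SECTOR]
    if hdr[:8] != b"EFI PART":
        raise ValueError("no GPT signature in LBA 1")
    (alt_lba,) = struct.unpack_from("<Q", hdr, 32)
    last_lba = len(image_bytes) // SECTOR - 1
    return alt_lba, last_lba

# Toy image: the header claims the alternate header is at LBA 1999
# (as if cloned from a smaller disk), but the image has 4000 sectors.
img = bytearray(4000 * SECTOR)
img[SECTOR:SECTOR + 8] = b"EFI PART"
struct.pack_into("<Q", img, SECTOR + 32, 1999)

print(gpt_alternate_mismatch(bytes(img)))   # (1999, 3999)
```

When parted "fixes" the table, it essentially rewrites that field (and moves the backup header) so both numbers agree again.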
Fix GPT after using dd to clone a smaller disk onto a larger disk
1,470,498,701,000
My objective is to free some space from the lvm and create new partition. Below is the devices and lvm ontop of it. # lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT vda 252:0 0 200G 0 disk |-vda1 252:1 0 512M 0 part /boot `-vda2 252:2 0 199.5G 0 part |-vg-lv_root 253:0 0 15.5G 0 lvm / |-vg-lv_pwcfg 253:1 0 10.9G 0 lvm /opt/pwcfg |-vg-lv_var 253:2 0 12.5G 0 lvm /var/log |-vg-lv_pw 253:3 0 118.4G 0 lvm /pw `-vg-lv_opt 253:4 0 42.2G 0 lvm /opt I want to make vg-lv_pw to 50 gb. I am doing it with the following command: # lvreduce --resizefs -L 50G /dev/vg/lv_pw fsck from util-linux 2.23.2 mkfs_lv_pw: 11/7007616 files (0.0% non-contiguous), 445504/31039488 blocks resize2fs 1.42.9 (28-Dec-2013) Resizing the filesystem on /dev/mapper/vg-lv_pw to 13107200 (4k) blocks. The filesystem on /dev/mapper/vg-lv_pw is now 13107200 blocks long. Size of logical volume vg/lv_pw changed from <118.41 GiB (30312 extents) to 50.00 GiB (12800 extents). Logical volume vg/lv_pw successfully resized. Yes lvm size is set to 50GB. # lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT vda 252:0 0 200G 0 disk |-vda1 252:1 0 512M 0 part /boot `-vda2 252:2 0 199.5G 0 part |-vg-lv_root 253:0 0 15.5G 0 lvm / |-vg-lv_pwcfg 253:1 0 10.9G 0 lvm /opt/pwcfg |-vg-lv_var 253:2 0 12.5G 0 lvm /var/log |-vg-lv_pw 253:3 0 50G 0 lvm `-vg-lv_opt 253:4 0 42.2G 0 lvm /opt Now I have to take that Pfree 68.41GB and create new partition out of it. # pvs PV VG Fmt Attr PSize PFree /dev/vda2 vg lvm2 a-- <199.50g <68.41g # vgs VG #PV #LV #SN Attr VSize VFree vg 1 5 0 wz--n- <199.50g <68.41g How can I use that free space and create a new partition vda3?
First, you need to reduce the size of the physical volume, using pvresize: pvresize /dev/vda2 --setphysicalvolumesize 132g This ensures that all the data and metadata end up inside the first 132GiB of /dev/vda2. I’m playing it safe size-wise here. Then you need to shrink the /dev/vda2 partition entry, using fdisk or a similar tool — delete the partition entry and re-create it with the same starting sector and the appropriate size (slightly more than 132GiB, 276,824,064 512-byte sectors, to stay safe). This will allow you to create a new partition. Finally, resize the PV again, this time using pvresize /dev/vda2 so that it uses all the available space in the partition.
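As a sanity check of the sector figure above, here is a trivial helper (plain arithmetic, nothing LVM-specific) converting GiB to 512-byte sectors:

```python
def gib_to_sectors(gib, sector_size=512):
    """Convert a size in GiB (2**30 bytes) to a count of sectors."""
    return gib * 1024**3 // sector_size

# The safe 132 GiB size used above, expressed as 512-byte sectors:
print(gib_to_sectors(132))   # 276824064, the figure quoted above
```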
reduce lvm space and create new partition
1,470,498,701,000
I can usually see this log in dmesg:

sd 5:0:0:0: [sda] Attached SCSI disk

Can you please explain what these 4 numbers are? Will these numbers change after a reboot, or should they stay constant?
The four numbers represent a SCSI address, often referred to as H:C:T:L. The four components are host, channel (or bus), target, and LUN. With drives you’re likely to encounter on an end-user system (SATA, consumer NVMe, USB), the channel, target, and LUN will all be zero. The host number will depend on which port the drive is connected to, and how it’s enumerated; for fixed drives (SATA, NVMe), it won’t vary most of the time, but it’s not impossible for it to change.
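If you want to pull those addresses out of logs programmatically, a small sketch (the regex simply matches the dmesg format shown above; this is an illustrative helper, not a standard tool):

```python
import re

# Matches dmesg lines of the form "sd H:C:T:L: [sdX] ..."
HCTL_RE = re.compile(r"sd (\d+):(\d+):(\d+):(\d+): \[(\w+)\]")

def parse_hctl(dmesg_line):
    """Extract (host, channel, target, lun, name) from a dmesg line."""
    m = HCTL_RE.search(dmesg_line)
    if not m:
        return None
    h, c, t, l = (int(x) for x in m.group(1, 2, 3, 4))
    return h, c, t, l, m.group(5)

print(parse_hctl("sd 5:0:0:0: [sda] Attached SCSI disk"))
# (5, 0, 0, 0, 'sda')
```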
sd 5:0:0:0: [sda] Attached SCSI disk, what are these four numbers? Will they change?
1,470,498,701,000
In some cases, when we are not near the physical Linux machine, we are only able to see disks as the following:

/dev/sdd 20511312 199536 20295392 1% /grd/sdd
/dev/sdb 20511312 487852 20007076 3% /grd/sdb
/dev/sde 20511312 91572 20403356 1% /grd/sde
/dev/sdf 20511312 45192 20449736 1% /grd/sdf

but we cannot tell whether the disks are inside the HW machine or come from an external JBOD. How can we find out where the disks are located? Maybe with dmidecode or something else?
You can try hdparm -i {device}, like:

# hdparm -i /dev/sda

/dev/sda:

 Model=SAMSUNG MZ7TD512HAGM-000L1, FwRev=DXT05L0Q, SerialNo=S151NYADA01701
 Config={ Fixed }
 RawCHS=16383/16/63, TrkSize=0, SectSize=0, ECCbytes=0
 BuffType=unknown, BuffSize=unknown, MaxMultSect=16, MultSect=16
 CurCHS=16383/16/63, CurSects=16514064, LBA=yes, LBAsects=1000215216
 IORDY=on/off, tPIO={min:120,w/IORDY:120}, tDMA={min:120,rec:120}
 PIO modes: pio0 pio1 pio2 pio3 pio4
 DMA modes: mdma0 mdma1 mdma2
 UDMA modes: udma0 udma1 udma2 udma3 udma4 udma5 *udma6
 AdvancedPM=no WriteCache=enabled
 Drive conforms to: unknown: ATA/ATAPI-2,3,4,5,6,7

 * signifies the current active mode

The above is the output for a standard disk drive. I believe it should fail for virtual disks, like JBOD or RAID. Then it displays something like:

# hdparm -i /dev/sdb

/dev/sdb:
SG_IO: bad/missing sense data, sb[]: 70 00 05 00 00 00 00 0a 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 HDIO_GET_IDENTITY failed: Invalid argument

However, if your local disks are also of a RAID type provided by some HW controller, hdparm isn't very useful. Then you may try udevadm:

udevadm info -a -p $(udevadm info -q path -n /dev/sdb)

Its output is quite long, so I won't paste it all here, but there is enough information to distinguish the types of sdX devices on your node when you compare outputs.
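If you need to collect this across many hosts, the identify block printed by hdparm -i is easy to scrape. A minimal sketch (the field names Model/FwRev/SerialNo are as in the output above; everything else here is illustrative):

```python
import re

def parse_hdparm_identify(text):
    """Extract Model/FwRev/SerialNo from the first line of `hdparm -i` output."""
    fields = {}
    for key in ("Model", "FwRev", "SerialNo"):
        # Each field is "Key=value", delimited by commas or end of line.
        m = re.search(rf"{key}=([^,\n]+)", text)
        if m:
            fields[key] = m.group(1).strip()
    return fields

sample = " Model=SAMSUNG MZ7TD512HAGM-000L1, FwRev=DXT05L0Q, SerialNo=S151NYADA01701"
print(parse_hdparm_identify(sample))
# {'Model': 'SAMSUNG MZ7TD512HAGM-000L1', 'FwRev': 'DXT05L0Q', 'SerialNo': 'S151NYADA01701'}
```

A drive that fails HDIO_GET_IDENTITY (like the JBOD/RAID case above) simply yields an empty dict, which is itself a usable signal.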
how to know if disks are from Jbod or integral as part of the HW machine
1,470,498,701,000
For the last 2 weeks I have had a problem with my SSD in GNU/Linux. I think it's not a device problem, but I'm not sure. From time to time (every 1-2 days recently) I lose physical access to the disk, as if it were disconnected or powered off. The error: EXT4-fs error (device: sda2): ext4_find_entry:1465: inode #1308161: comm NetworkManager: reading directory lblock 0 I've typed this error from a photo so it may not be fully accurate. Notes: The device is always the same, "sda2"; I haven't noticed the error on the other (big home) partition. I will try to check this next time. The inode and process name change, but NetworkManager is quite common. lblock is always 0. Hardware: Dell E7270 with SSD disk LITEON CV3-8D512-11 SATA 512GB Software: Debian testing, kernel 4.11. smartctl brief output:

Device Model: LITEON CV3-8D512-11 SATA 512GB
Serial Number: TW0956WWLOH006CU022Z
LU WWN Device Id: 5 002303 100ce15e0
Firmware Version: T89110D
User Capacity: 512,110,190,592 bytes [512 GB]
Sector Size: 512 bytes logical/physical
Rotation Rate: Solid State Device
Form Factor: M.2
Device is: Not in smartctl database [for details use: -P showall]
ATA Version is: ATA8-ACS, ATA/ATAPI-7 T13/1532D revision 4a
SATA Version is: SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Wed Jul 5 12:32:39 2017 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
...
SMART Attributes Data Structure revision number: 1 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 5 Reallocated_Sector_Ct 0x0003 100 100 000 Pre-fail Always - 0 9 Power_On_Hours 0x0002 100 100 000 Old_age Always - 327 12 Power_Cycle_Count 0x0003 100 100 000 Pre-fail Always - 335 175 Program_Fail_Count_Chip 0x0003 100 100 000 Pre-fail Always - 0 176 Erase_Fail_Count_Chip 0x0003 100 100 000 Pre-fail Always - 0 177 Wear_Leveling_Count 0x0003 100 100 000 Pre-fail Always - 59 178 Used_Rsvd_Blk_Cnt_Chip 0x0003 100 100 000 Pre-fail Always - 0 179 Used_Rsvd_Blk_Cnt_Tot 0x0003 100 100 000 Pre-fail Always - 0 180 Unused_Rsvd_Blk_Cnt_Tot 0x0033 100 100 005 Pre-fail Always - 2688 181 Program_Fail_Cnt_Total 0x0003 100 100 000 Pre-fail Always - 0 182 Erase_Fail_Count_Total 0x0003 100 100 000 Pre-fail Always - 0 187 Reported_Uncorrect 0x0003 100 100 000 Pre-fail Always - 0 194 Temperature_Celsius 0x0003 100 100 000 Pre-fail Always - 76 195 Hardware_ECC_Recovered 0x0003 100 100 000 Pre-fail Always - 0 199 UDMA_CRC_Error_Count 0x0003 100 100 000 Pre-fail Always - 0 238 Unknown_Attribute 0x0003 097 100 000 Pre-fail Always - 3 241 Total_LBAs_Written 0x0003 100 100 000 Pre-fail Always - 4293005286 242 Total_LBAs_Read 0x0003 100 100 000 Pre-fail Always - 3510503294 SMART Error Log Version: 0 No Errors Logged SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Extended offline Completed without error 00% 298 - # 2 Short offline Completed without error 00% 294 - # 3 Offline Interrupted (host reset) 80% 294 - # 4 Offline Interrupted (host reset) 10% 294 - # 5 Short offline Completed without error 00% 294 - # 6 Short offline Completed without error 00% 1 - # 7 Short offline Aborted by host 90% 1 - Ideas: run bad block check check connections
I think I've fixed this by removing the SSD, blowing air into the M.2 connector, and reinserting it. When I booted into a rescue Debian from USB I noticed more detailed kernel debug information. While searching I noticed most solutions were to replace SATA cables; a laptop M.2 connection has no cables. The most important log lines (transcribed from a screenshot):

exception Emask 0x10 SAct ... SErr ... action 0xe frozen
interface fatal error, PHY RDY changed
SError: { PHYRdyChg LinkSeq }
failed command: WRITE FPDMA QUEUED
Emask 0x10 (ATA bus error)
hard resetting link
random SSD turn off - ext4_find_entry , reading directory lblock0
1,470,498,701,000
I was reading Fast File System for UNIX where it says the following on page 3: By ‘‘partition’’ here we refer to the subdivision of physical space on a disk drive. In the traditional file system, as in the new file system, file systems are really located in logical disk partitions that may overlap. This overlapping is made available, for example, to allow programs to copy entire disk drives containing multiple file systems I don't quite understand what "overlapping" means here. Here's my understanding of disk organization: Disks are divided into sectors (physical blocks) which are necessarily consecutive. Partitions are logical divisions of the disk, with blocks of size in integer multiples of the sector size, which have a file system installed. The partition itself necessarily resides in a contiguous chunk of the disk (although files inside the partition may be spread out randomly within the partition). Is my understanding of disk organization correct? What does the paper mean by partition overlapping?
Keep in mind that the text you are reading is from about 35 years ago, and while many of the characteristics of the "Fast File Systems" have survived e.g. in ext2, I assume you are doing that to study the history. Disks are divided into sectors (physical blocks) which are necessarily consecutive. Sort of. Physically, the harddisk is divided in platters, each with a read/write head. A concentric circle on one platter forms a track, and the set of tracks in the same position for each platter forms a cylinder. Tracks are divided into sectors. This is a 3D structure, not a linear one, so it can't be consecutive. However, each sector (on each cylinder, on each head) is given a block number, and these block numbers are consecutive, and consecutive physical blocks are located very close to each other. So from the OS point of view, a harddisk consists of a number of physical blocks, with consecutive physical block addresses (or sector addresses, because each block is a physical sector). On the PC (and not on the PDP-11/VAX as in the document), harddisk addressing went from a cylinder/head/sector scheme (CHS) to a block address scheme (LBA). Partitions are logical divisions of the disk, with blocks of size in integer multiples of the sector size, ... Yes. Block is a dangerous word, because it can mean different things in different contexts. Filesystems use "filesystem blocks" or "allocation blocks" which are multiples of the physical block size. Partitions, at least PC and BSD-style partitions, typically use physical blocks as "partition block size". ... which have a file system installed. Not necessarily. It can also be swap space, or a PC extended partition (a placeholder to allow more than four partitions), or a BSD raw partition (see below). The partition itself necessarily resides in a contiguous chunk of the disk (although files inside the partition may be spread out randomly within the partition). Yes. 
A partition is just a contiguous range of physical blocks (given by beginning and ending block, or beginning block and number of blocks in the partition). So nothing stops us from defining a partition that contains several other partitions. In fact, if you look at the BSD partition example in bsdlabel, 8 partitions: # size offset fstype [fsize bsize bps/cpg] a: 81920 16 4.2BSD 1024 8192 16 b: 160000 81936 swap c: 1173930 0 unused 0 0 # "raw" part, don't edit Partition a consists of blocks 16-81935, partition b of blocks 81936-241935, and partition c of blocks 0-1173929. So partition c "contains" partitions a and b (and some extra blocks). The last "raw" partition that spans the entire disk is for convenience: It allows the OS to access the complete disk, for example to copy it entirely. On Linux, this is not necessary, because the OS can access the block device that represents the whole disk. Note that it's impossible for a "container" partition to have a file system, because that would collide with the file systems or other data in the contained partitions.
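The containment relation in that bsdlabel example can be checked mechanically. A quick sketch using the (offset, size) pairs from the label above:

```python
def blocks(offset, size):
    """Half-open block range [offset, offset+size) of a partition."""
    return range(offset, offset + size)

def contains(outer, inner):
    """True if partition `outer` fully contains partition `inner`."""
    return outer.start <= inner.start and inner[-1] <= outer[-1]

# (offset, size) pairs straight from the bsdlabel example.
a = blocks(16, 81920)
b = blocks(81936, 160000)
c = blocks(0, 1173930)       # the "raw" partition spanning the disk

print(contains(c, a), contains(c, b))   # True True
print(contains(a, b))                   # False
```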
FFS: Logical and Physical Blocks in Partitions
1,470,498,701,000
After removing a bay mounted SATA connected drive the kernel will most of the time remove the mount. However, sometimes the mount remains even though the disk has been removed. Is there a way to avoid this?
As I replied to the original OP, you can always force an unmount with a lazy unmount

umount -l <filesystem|partition>

Nevertheless, the thing about a lazy umount is that it ignores the pending buffers to be written to that drive. I would recommend a sudo rule for the user, or the group of users that runs the app, allowing them only to run a script that unmounts the drive, and that can be invoked by the app. Or even a key on the console programmed to call a script (if it's a physical server).
Hot Removed SATA drive mount remains
1,470,498,701,000
I'm new and I hope to find an answer here. Please tell me if you need more information. I have disk encryption for my home partition on Linux 4.13.0-43-generic x86_64 (Ubuntu 16.04.4 LTS). Today when I started the laptop, I got the message that my disk is full and there is no space available any more. With the disk usage analyser I saw that the encryption directory (/home/.ecryptfs/bianca/.Private) is completely full, while the other partitions have enough space. I did not find any answer via Google, but I would like to know whether there may be encryption files which are no longer needed because they are outdated or old? If yes, is it possible to remove these files or directories from this directory? Is there any tool that can delete files if they are not used any more? Or do you have any other recommendation for what I can do? I would be glad if someone has already had experience with this and can share it with me. Thank you in advance. Bianca

edit: Output of lsblk:

$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
sda 8:0 0 465,8G 0 disk
├─sda2 8:2 0 1K 0 part
├─sda5 8:5 0 465,3G 0 part
│ ├─ubuntu--mate--vg-swap_1 253:1 0 15,7G 0 lvm
│ │ └─cryptswap1 253:2 0 15,7G 0 crypt [SWAP]
│ └─ubuntu--mate--vg-root 253:0 0 449,6G 0 lvm /
└─sda1 8:1 0 487M 0 part /boot
/home/.ecryptfs/bianca/.Private contains the encrypted versions of all your home files, when you're logged in they're decrypted on-the-fly to your home (~ or /home/bianca). It should be approximately the same size as your home when you're logged in. Delete (or backup/move) some files out of your home, not directly from /home/.ecryptfs/bianca/.Private since it's probably not clear which home files they really are. Disk Usage Analyzer / baobab is a tool I like, or just du (there are some commands to make it more readable & sorted, a web search or man has more info)
Disk encryption with ecryptfs - full disk
1,470,498,701,000
I've been looking around and I found out that you can find your disk space with df -hT. So I did use it to get the disk space (in total) and how much is left. But the thing is, I wonder if there are any other ways to get the information? The code I copied here, will give total disk space in GB's (I added the B in the end) with awk as you can see, also cut it with awk. This might be too messy for some of you (I'm still learning bash), so if you have any recommendations, then feel free to give them to me. Remember that I am looking for options that work in every server/machine, without software that has to be downloaded with apt-get. df -hT /home | awk '{print $3}' | awk 'FNR == 2 {print $1 "\B"}' awk: cmd. line:1: warning: escape sequence `\B' treated as plain `B' 912GB Also no clue how to get rid of the awk message. This might seem a bit weird, but I have to start from somewhere!
Removing the backslash works for me: df -hT /home | awk '{print $3}' | awk 'FNR == 2 {print $1 "B"}' and can be simplified to df -hT /home | awk 'FNR == 2 {print $3 "B"}'
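If the goal is just "a number usable in a script", you can also skip parsing df output entirely; Python's standard library exposes the same statvfs data (a sketch, and note that shutil.disk_usage reports bytes, while df -h scales by powers of 1024):

```python
import shutil

def disk_space_gib(path="/"):
    """Total/used/free space of the filesystem holding `path`, in GiB."""
    usage = shutil.disk_usage(path)
    gib = 1024**3
    return {k: round(getattr(usage, k) / gib, 1)
            for k in ("total", "used", "free")}

print(disk_space_gib("/"))
```

This sidesteps the locale and column-position issues that make df-plus-awk pipelines fragile.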
Linux disk space
1,470,498,701,000
I had an Ubuntu 14.04 running on a 1024 GB disk (disk A), which only used up to 130 GB spaces. I want to clone it to a 256 GB SSD disk ( disk B ). But failed. I used gparted to shrink the partitions on disk A to be only about 180 GB for the preparation for disk B successfully. Can you tell me where I was wrong? 1) restore the partition table I did backup the partition table of disk A. └──╼ $ sudo sfdisk -d /dev/sda # partition table of /dev/sda unit: sectors /dev/sda1 : start= 2048, size= 997376, Id=83, bootable /dev/sda2 : start= 999424, size= 15624192, Id=82 /dev/sda3 : start= 16623616, size=337020928, Id=83 /dev/sda4 : start= 0, size= 0, Id= 0 Tips /dev/sda1 for /boot, /dev/sda2 for swap, /dev/sda3 for /. Save partition table sudo sfdisk -d /dev/sda > partition.table Then I tried to restore the partition table to disk B ( /dev/sdc in this scenario ). I replaced sda with sdc in file partition.table. Then it looks like: # partition table of /dev/sdc unit: sectors /dev/sdc1 : start= 2048, size= 997376, Id=83, bootable /dev/sdc2 : start= 999424, size= 15624192, Id=82 /dev/sdc3 : start= 16623616, size=337020928, Id=83 /dev/sdc4 : start= 0, size= 0, Id= 0 Then do the restore successfully. sudo sfdisk /dev/sdc < partition.table 2) migrating disk partition content sudo dd if=/dev/sda1 of=/dev/sdc1 sudo dd if=/dev/sda2 of=/dev/sdc2 sudo dd if=/dev/sda3 of=/dev/sdc3 After migration, those partitions on /dev/sdc can be mounted and viewed. Failure But if I plugged the SSD disk (disk B) into my laptop, it would not boot up after some Thinkpad BIOS output. No error came out but a blinking cursor... I bet the BIOS even did not detect the /boot on disk B when doing booting. Can you help me? Many thanks! update Some one suggested me to use grub-install /dev/sdc to do the trick. I searched what grub-install is capable -- link Let me try. And I am pretty sure disk A ( had MBR installed ). 
Update After doing dd if=/dev/sda of=/dev/sdc bs=512 count=1 and booting with disk B only, it's still the same blinking cursor; nothing after the BIOS. After doing grub-install --boot-directory=/mnt/mypartition/boot /dev/sdc I went to boot it up, only disk B, but the grub console came out and reported an error (screenshot omitted). Update Now it is working!!! Here's how I did it, on the PC running disk A as the OS and disk B (/dev/sdc) as a USB hard drive. sudo mount /dev/sdc3 /mnt sudo mount /dev/sdc1 /mnt/boot sudo grub-install --boot-directory=/mnt/boot /dev/sdc3 Then I went to /mnt/boot/grub/grub.cfg and replaced 2 things in the file (remember to give write permission to grub.cfg): replace hd1 with hd0 replace /dev/sdc3 with /dev/sda3 Then save the file. -> Power off computer -> Insert disk B via SATA and take out disk A forever. -> Boot -> See grub error but still boot up If you meet the error Error: invalid environment block. Press any key to continue, please check this to solve it. Pressing any key will boot your system. https://askubuntu.com/questions/191852/error-invalid-environment-block-press-any-key-to-continue sudo -i Then, run each command, one-by-one. cd /boot/grub rm grubenv grub-editenv grubenv create grub-editenv grubenv set default=0 grub-editenv grubenv list update-grub Now go rebooting, it will work! This is how I shrank my 1024GB hard drive and migrated the entire system to a new 256GB SSD disk.
I am not familiar with sfdisk, but you could accomplish the same thing, partition table AND MBR back up using dd. This was in my notes and I am not the author... Backing up the MBR The MBR is stored in the the first 512 bytes of the disk. It consist of 3 parts: The first 446 bytes contain the boot loader. The next 64 bytes contain the partition table (4 entries of 16 bytes each, one entry for each primary partition). The last 2 bytes contain an identifier Clone the MBR as mbr.img: dd if=/dev/sdX of=/path/mbr_file.img bs=512 count=1 Clone partition as pX.img dd if=/dev/sdX of=/path/pX.img bs=1024 Restore the MBR to new disk dd if=/path/mbr_file.img of=/dev/sdY bs=512 Restore Partition to new disk dd if=/path/pX.img of=/dev/sdX bs=1024 OR You could use clonezilla to make an image of the OS and restore it on a disk that already has the partitions created. This way you might need to reinstall grub on the new disk grub-install grub-mkconfig and set your swap partition in the 'new' OS. mkswap swapon
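The 446/64/2 byte layout described above is simple enough to parse by hand. A sketch that decodes a synthetic 512-byte MBR (the field offsets follow the classic MBR partition-entry format; the partition values below are invented for the demo):

```python
import struct

def parse_mbr(mbr):
    """Split a 512-byte MBR into bootloader, 4 partition entries, signature."""
    assert len(mbr) == 512
    boot_code = mbr[:446]
    entries = []
    for i in range(4):
        e = mbr[446 + 16 * i: 446 + 16 * (i + 1)]
        status, ptype = e[0], e[4]                       # 0x80 = bootable
        lba_start, num_sectors = struct.unpack_from("<II", e, 8)
        entries.append((status, ptype, lba_start, num_sectors))
    signature = mbr[510:512]                             # must be 55 AA
    return boot_code, entries, signature

# Synthetic MBR: one bootable (0x80) Linux (0x83) partition at LBA 2048.
mbr = bytearray(512)
mbr[510:512] = b"\x55\xaa"
mbr[446] = 0x80
mbr[446 + 4] = 0x83
struct.pack_into("<II", mbr, 446 + 8, 2048, 997376)

_, parts, sig = parse_mbr(bytes(mbr))
print(sig == b"\x55\xaa", parts[0])   # True (128, 131, 2048, 997376)
```

This is essentially what you get by feeding the mbr_file.img from the dd backup above into a parser instead of restoring it.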
How to shrink, clone an entire Linux disk and boot it?
1,486,330,371,000
Midnight Commander uses virtual filesystem (VFS) for displying files, such as contents of a .tar.gz archive, or of .iso image. This is configured in mc.ext with rules such as this one (Open is Enter, View is F3): regex/\.([iI][sS][oO])$ Open=%cd %p/iso9660:// View=%view{ascii} isoinfo -d -i %f When I press Enter on an .iso file, mc will open the .iso and I can browse individual files. This is very useful. Now my question: I have also files which are disk images, i.e. created with pv /dev/sda1 > sda1.img I would like mc to "browse" the files inside these images in the same fashion as .iso. Is this possible ? How would such rule look like ?
This question is ancient, but I found myself wanting something similar, and found a possible solution (no one stop shop, but a way to solve it). See https://stackoverflow.com/questions/66754449/list-contents-of-floppy-image-file-without-mounting-in-linux?noredirect=1#comment118002316_66754449 Basically you'll need to put a config file in /usr/lib/mc/extfs.d/ which contains functions that perform directory listings, file IO etc. for your image file. If you can find a userspace program that does this, then it's possible. (for me, mtools was the solution, since I just need to read floppy images.)
midnight commander: rules for accessing archives through VFS
1,486,330,371,000
How to get total disk read/write in bytes per hdd device? for example if i have sda, sdb, and sdc, is there any file on /proc that i could use similar to /proc/net/dev for networking?
Found it: /proc/diskstats. The 6th and 10th columns are respectively sectors read and sectors written; to get the value in bytes, multiply by 512. /sys/block/sdX/stat: the 3rd and 7th values are respectively the same as above.
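As a quick illustration of that layout, a sketch parsing a /proc/diskstats-formatted line (the sample line below is made up; on a real system feed it open('/proc/diskstats').read()):

```python
def diskstats_bytes(text, device):
    """Return (bytes_read, bytes_written) for `device` from /proc/diskstats text.

    Columns 6 and 10 (1-based) are sectors read/written; these counters are
    always in 512-byte units, regardless of the device's real sector size.
    """
    for line in text.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[2] == device:
            return int(fields[5]) * 512, int(fields[9]) * 512
    raise KeyError(device)

# A sample line: major, minor, name, then the stat fields.
sample = "   8       0 sda 123 4 2048 55 67 8 4096 99 0 50 150"
print(diskstats_bytes(sample, "sda"))   # (1048576, 2097152)
```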
How to get total disk read/write in bytes per hdd device from /proc
1,486,330,371,000
I have three identical servers, all three with the same cabling and correctly placed hard disks. However, one of the servers got /dev/sdg /dev/sdh for the SATA SSDs while the other two servers got them on /dev/sda /dev/sdb - I'm using Proxmox with ZFS. The SATA SSDs are connected on board (SATA cabling to the board) while the hard disks are connected to a SAS HBA via a single cable (SAS). How do those names get assigned? Serial number? I couldn't find valid information that helped me here. And is there a way to change the device names after installation in Debian + ZFS?
The /dev/sd* names are simply assigned in detection order, which may vary from one boot to the next if storage driver module loading order is not exactly the same each time, or if disks are plugged or unplugged. The current wisdom is to use something else in your configuration: in /etc/fstab, you could use the UUID= or LABEL= syntax instead of device names if you're using LVM, it already includes a mechanism to auto-discover physical volumes regardless of device names, and to present the logical volumes using paths that are guaranteed to be persistent if using software RAID, it likewise includes a mechanism to find the RAID members based on what's actually on the disk, not by their device names if using multipathed SAN LUNs, device-mapper-multipath will auto-discover the individual /dev/sd* paths and build a persistent device name for accessing the disk using all those paths, either by WWID, by auto-generated persistent names or by customizable names according to your preference depending on what exactly you're looking for, you may find the disks/partitions named in a suitable way using the symbolic links in /dev/disk/by-*/ directories: /dev/disk/by-id/* by disk model name and serial number /dev/disk/by-uuid/* by filesystem UUIDs (effectively equivalent to the /etc/fstab UUID= syntax for uses that do not involve /etc/fstab) /dev/disk/by-label/* by filesystem labels (effectively equivalent to the /etc/fstab LABEL= syntax for uses that do not involve /etc/fstab) /dev/disk/by-path/* by hardware device path: "bus X, slot Y, function Z, controller slot N" (might be useful if you want a cabling-based name) on GPT-partitioned disks, partitions can also be found using /dev/disk/by-partuuid/* and /dev/disk/by-partlabel/* Some distributions (e.g. SuSE if I recall correctly) may also have a udev-rule-based mechanism that will tie a particular /dev/sd* to a disk with a particular serial number or other identifying information when it's first seen by the OS. 
Debian does not have that. When booting Debian, the disk controller for the root filesystem is loaded first when the system is still running on initramfs. If your system uses just one disk controller (e.g. AHCI SATA on a desktop, or a hot-plug aware SAS hardware RAID controller on a rack-mount server) it usually detects all the disks connected to it in some stable order (driver-specific, e.g. by SATA connector number or hot-plug slot order) and that's the end of it: such ordering may be quite stable. But if you have multiple different storage controllers, you may have a headache as systemd-based start-up process is not guaranteed to have any persistent deterministic order, meaning that small time differences in an earlier part of boot process may change the ordering of latter parts. And at boot time, many things will be happening in parallel, so you should not rely on implicit ordering anyway. ZFS FAQ has quite a bit to say about choosing the right kind of device names on Linux. Basically: use /dev/sd* for small development/test setups only for small pools (less than about 10 disks), use /dev/disk/by-id/* for larger pools, the optimal solution is to set up an /etc/zfs/vdev.conf file to create nice short names that still reflect the underlying hardware layout alternative solution for large pools is /dev/disk/by-path/* although the names will be long and cumbersome. Fortunately, changing the names on an existing pool is not difficult: it's basically just exporting and re-importing the pool, while specifying the new name scheme on import. For example, if your pool is named zfspool, you could export it and then re-import using /dev/disk/by-id/* names like this: # <prepare pool for export, i.e. unmount mount points or stop VMs as necessary> zpool export zfspool zpool import -d /dev/disk/by-id zfspool # <resume using the pool> (This sort of suggests that ZFS actually may have a similar auto-discovery system as e.g. 
Linux LVM; it's just that the discovery happens on importing the pool, instead of at every startup.)
How does Debian or Linux in general assign device-names like /dev/sdX on ZFS?
1,486,330,371,000
I have to create a new logical volume of 5G with lvcreate, but the volume group doesn't have enough free space: lvcreate -n lv_new -L5G VGroup Volume group "VGroup" has insufficient free space (1279 extents): 1280 required. How do I modify the lvcreate command to use 1279 extents? I don't know if I have to put a suffix like K for kilobyte after the '-L' option, or apply the command without any suffix.
Instead of specifying the size using -L, you can tell lvcreate to allocate by extents with -l: instead of -L5G, try -l 1279 to use the remaining 1279 extents, or -l 100%FREE to use all remaining free extents (which seem to be 1279).
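The arithmetic behind the error message, assuming the common default extent size of 4 MiB (verify yours with vgdisplay VGroup | grep 'PE Size'):

```shell
extent_mib=4                          # assumed PE size; check with vgdisplay
needed=$((5 * 1024 / extent_mib))     # extents required for -L5G
echo "$needed extents required"       # → 1280 extents required
echo "$((1279 * extent_mib)) MiB from -l 1279"   # → 5116 MiB from -l 1279
```

So -l 1279 gives you a volume just one extent (4 MiB) short of 5 GiB, which is usually an acceptable trade for not having to grow the VG.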
How to know the suffix of size in lvcreate
1,486,330,371,000
Ubuntu Linux: ls -s disagrees with du & stat for the number of blocks used by a small file. ls -s ../nc2/.git/logs/refs/heads/ total 4 du ../nc2/.git/logs/refs/heads/ 8 ../nc2/.git/logs/refs/heads/ stat ../nc2/.git/logs/refs/heads/ File: ‘../nc2/.git/logs/refs/heads/’ Size: 4096 Blocks: 8 IO Block: 4096 ...... sudo blockdev --getbsz /dev/sda 4096 ls -s shows the file using 4 blocks. du & stat say it uses 8 blocks. Why is ls -s seemingly wrong? Can it not detect the correct block size? I can make it say the file uses 8 blocks by running 'ls -s --block-size 512'. This is NOT a size of file vs number of blocks question. All commands above are listing block counts, not file size. Edit: More info requested: ls --version ls (GNU coreutils) 8.21 type ls ls is aliased to `ls --color=auto' LS_BLOCK_SIZE=512 ls -s ../nc2/.git/logs/refs/heads/ total 8
ls -s reports the st_blocks member of the structure returned by the stat()/lstat() system calls. That's a number of 512-byte blocks. 512 bytes is usually the minimum storage granularity as that corresponds to early disk sectors. Or at least that's what most ls implementations do including the original Unix implementation, and what POSIX requires. The GNU implementation of ls (also busybox which mimics it, both found on Ubuntu) however changed that to 1024-byte blocks, but goes back to 512-byte blocks if the $POSIXLY_CORRECT (formerly $POSIX_ME_HARDER) variable is in the environment (not for busybox). I suppose that's to make it more readable to a human but that means we lose precision on filesystems that use 512-byte storage granularity and doesn't help with portability. From the ChangeLog: Wed Aug 21 13:03:14 1991 David J. MacKenzie (djm at wookumz.gnu.ai.mit.edu) Version 3.0. du.c, ls.c: Make 1K blocks the default size, and -k a no-op. Down with dumb standards! With GNU ls (not busybox), the block size can also be specified with the --block-size option or the $LS_BLOCK_SIZE environment variable. So you can use ls --block-size=1 -s or LS_BLOCK_SIZE=1 ls -s to get the disk usage in bytes. Other ls implementations like on BSDs use $BLOCKSIZE for that¹ (also recognised as well as $BLOCK_SIZE by GNU ls as shown by @yahol). POSIXly, you can use -k to get the count in kibibytes (which thankfully with GNU or BSD ls takes precedence over the $BLOCKSIZE environment variables). Portably (if you want to take into account busybox ls where the report in kibibytes is hard-coded), to get back to the st_blocks (or at least an approximation thereof), you'd need something like: blocks=$(ls -skd -- "$file" | awk '{print $1*2; exit}') With GNU find, -printf %b reports a number of 512-byte blocks, and -printf %k 1024-byte blocks, and it's not affected by the environment. -printf is GNU specific. In any case, nowadays, that has nothing to do with the filesystem block size.
¹ On BSDs, $BLOCKSIZE is rounded to a multiple of 512 (BLOCKSIZE=1023 is the same as BLOCKSIZE=512) and values below 512 are not allowed.
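You can also read st_blocks directly with GNU stat's -c %b, which is always in 512-byte units and immune to the block-size environment variables discussed above. A small sketch (the exact allocation is filesystem-dependent, but for freshly written non-sparse data it covers at least the file size):

```shell
f=$(mktemp)
head -c 5000 /dev/urandom > "$f"   # 5000 bytes of incompressible data
sync "$f"                          # make sure blocks are really allocated
blocks=$(stat -c %b "$f")          # st_blocks, always in 512-byte units
echo "$((blocks * 512)) bytes allocated for a 5000-byte file"
rm -f "$f"
```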
ls -s uses wrong block size
1,486,330,371,000
Good evening, I installed ArchLinux on a USB stick, and I would like to boot it in qemu. Unfortunately, most examples I found boot an image (iso..). The stick is definitely bootable as if I restart my computer, it shows a grub that can successfully start ArchLinux. I have tried things such as sudo qemu-system-x86_64 -usb -usbdevice disk:/media/louis/FlyinBaboon/boot/initramfs-linux.img -boot menu=on only to get boot errors. What is the correct way to start qemu by giving it a path to a linux root? (/media/louis/FlyingBaboon corresponding to my USB stick's root)
What kind of errors do you get? Ex: user@marconi ~ $ sudo qemu-system-x86_64 -usb -usbdevice disk:/mnt/usbdrive qemu-system-x86_64: -usbdevice disk:/mnt/usbdrive: could not open disk image /mnt/usbdrive: Is a directory qemu: could not add USB device 'disk:/mnt/usbdrive' If you see something similar, the problem is that you are providing a filesystem path, but 'qemu' wants a reference to a block device. Here is an example. I have a USB drive attached to my system. The block device is /dev/sdb, and the device is mounted at '/mnt/usbdrive' in the filesystem. You can see the relationship by looking at the system mount table: user@marconi ~ $ cat /proc/mounts |grep sdb /dev/sdb /mnt/usbdrive vfat rw,relatime,fmask=0022,dmask=0022,codepage=cp437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro 0 0 If you give qemu the block device name, instead of a path in the filesystem, it should boot as you desire. For my example, the correct invocation would be: user@marconi ~ $ sudo qemu-system-x86_64 -usb -usbdevice disk:/dev/sdb
start qemu by giving it a path to a linux root
1,486,330,371,000
If I need to use my USB drive on an Ubuntu machine I need to mount it first. If my USB device is /dev/sdb1 then I mount it with: mount /dev/sdb1 /home/some_folder Then use /home/some_folder to read and write data to the USB drive. But tools like dd can work directly with the device without a mount point: dd if=/dev/sdb1 ... Why is that? Why can't I use my USB without mounting it but dd works fine?
You can use your USB without mounting it. You can use dd or other tools to copy data to it, and copy data from it. It is also common to use storage devices as swap space without mounting them. "Mounting" means attaching a filesystem that resides on a separate storage device to a currently mounted filesystem. This requires a storage device that has been formatted with filesystem structures. After you have done this, you can access files and other objects on the mounted filesystem, without worrying on which device they reside. You just see one seamless filesystem tree. This is only possible when you mount that device.
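dd's view of a device is just raw bytes at offsets, and a regular file behaves identically, which makes it a safe stand-in for /dev/sdb1 to see the difference (a sketch only — nothing here touches a real disk):

```shell
img=$(mktemp)                                            # stand-in for /dev/sdb1
dd if=/dev/zero of="$img" bs=1024 count=8 2>/dev/null    # an 8 KiB "device"
printf 'hello' | dd of="$img" bs=1 seek=4096 conv=notrunc 2>/dev/null  # raw write at offset 4096
dd if="$img" bs=1 skip=4096 count=5 2>/dev/null          # raw read back → hello
```

mount, by contrast, would refuse this file until it contained filesystem structures — that is exactly the layer dd bypasses.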
Why some tools such as dd don't need a mount point to work with a device?
1,486,330,371,000
One of my attached disks had xfs filesystem. I formatted the disk to ext4 using: sudo mkfs.ext4 /dev/sdc1 Now when I run sudo -i blkid, I get this output: /dev/sdc1: UUID="df722345-7e80-4a08-8da1-e6046cc2b0e1" TYPE="ext4" PARTLABEL="xfspart" PARTUUID="1df243b5-2b64-4c39-bd45-4cb31d7ff58e" I can see that the PARTLABEL is xfspart. Before I make any changes to fstab, just want to make sure that PARTLABEL won't cause any problem, if I add this line to fstab UUID=df722345-7e80-4a08-8da1-e6046cc2b0e1 /disk3 ext4 defaults,nofail 1 2
The PARTLABEL is a property of the partition table (GPT), unrelated to partition content (any filesystem or lvm, luks, raid, etc.). Thus it's not overwritten when you mkfs partition content. If you are not using this value for anything, you can ignore it since it means nothing. Or, to avoid confusion, you can change it with any partition software of your choice. Example with parted: # parted /dev/loop0 print Number Start End Size File system Name Flags 1 1049kB 94.4MB 93.3MB xfspart # blkid /dev/loop0p1 /dev/loop0p1: PARTLABEL="xfspart" PARTUUID="a789cf0a-3a18-4b87-af2a-abfed6ca9028" Change the PARTLABEL (partition name in parted) of partition 1 to something else: # parted /dev/loop0 name 1 schnorrgiggl Afterwards: # blkid /dev/loop0p1 /dev/loop0p1: PARTLABEL="schnorrgiggl" PARTUUID="a789cf0a-3a18-4b87-af2a-abfed6ca9028" # parted /dev/loop0 print Number Start End Size File system Name Flags 1 1049kB 94.4MB 93.3MB schnorrgiggl These names also appear under /dev/disk/by-partlabel which can be a convenient way to refer to partition block devices. Consider meaningful names like grub, boot, root, home, ... instead of xfspart or extpart which could be anything at all. However, if you use duplicate labels on separate disks, it's unclear which one the partlabel will point to. PARTUUIDs exists to avoid such naming scheme conflicts, and filesystem UUID is the safest way to refer to a filesystem by content (regardless of where it is stored), so for /etc/fstab it's still best to use UUID= instead of any LABEL=, PARTLABEL=, PARTUUID= etc. alternatives.
Does "PARTLABEL" affects fstab behavior in Ubuntu16.04?
1,486,330,371,000
I'm using dd to overwrite my hard drive with zeros (before I recycle my laptop). Several minutes into running, the visual display was replaced with a black background and a flashing cursor in the top left of the screen. No text is being written to the screen, so it's not clear to me whether dd is still running or how to tell when it stops.
When dd has filled the device it will output a message: dd: writing to '/dev/full': No space left on device Sending the USR1 signal to the running dd process makes it output its current status. You can use kill to send the signal: kill -USR1 $PID More recent versions of GNU dd have an option status=progress which will show the current progress on the terminal. Wiping the disk is likely faster with cat instead of dd when dd's parameters aren't tuned: cat /dev/zero > /dev/sdX You can get a progress bar if you have pv installed: pv /dev/zero > /dev/sdX
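The USR1 trick can be rehearsed on a harmless /dev/zero → /dev/null copy before trying it on the real wipe (swap in your real devices; GNU dd prints its current record and byte counts to stderr when signalled):

```shell
dd if=/dev/zero of=/dev/null bs=1M count=100000 2>progress.log &
pid=$!
sleep 1
kill -USR1 "$pid"        # dd logs a progress line and keeps copying
sleep 1
kill "$pid"              # end the demo copy
wait "$pid" 2>/dev/null || true
grep 'records in' progress.log   # the status dd emitted on USR1
```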
How can I tell whether dd is done erasing my hard drive?
1,486,330,371,000
I am using GParted (0.28.1, Fedora 25) to format a external drive and noticed that the command displayed is: mkfs.ext4 -F -O ^64bit -L "INSTALL" /dev/sdd1 When making disks in the past from command line I have just used mkfs.ext4 DEVICE which seemed to work well for various architectures. However the above includes the option -O ^64bit, which I guess removes some default 64bit feature of the filesystem so it works with 32bit. Does it do this and is normally necessary to pass it on modern Linux OSs (to enable compatibility with 32bit etc systems), and what cost could it have other than probably reducing the volume size limit?
The default options for mke2fs, including those for ext4, can be found in /etc/mke2fs.conf. They could be different depending on the distro you're using. I'd take a look at that file on any distro you're curious about to see if the -O ^64bit param would be necessary. According to the man page the '^' is indeed the prefix used to disable a feature. The effect of not using 64bit ext4 is that you'll be limited to ~16TiB volumes. Whereas you can have 1 EiB volumes if you use the 64bit flag. HOWEVER, 16T is the recommended max volume size for ext4 anyway.
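To see what your distro would enable by default, grep the features lines. The config below is a canned sample, roughly matching what a recent e2fsprogs ships, so the snippet runs anywhere; on a real system read /etc/mke2fs.conf itself instead:

```shell
cat > sample-mke2fs.conf <<'EOF'
[defaults]
        base_features = sparse_super,large_file,filetype,resize_inode,dir_index
[fs_types]
        ext4 = {
                features = has_journal,extent,huge_file,flex_bg,metadata_csum,64bit,dir_nlink,extra_isize
        }
EOF
grep -n '64bit' sample-mke2fs.conf   # if this matches, -O ^64bit genuinely disables a default
```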
What does this mkfs.ext4 operand mean?
1,486,330,371,000
On a CentOS 5 server: fdisk -l 2>/dev/null shows many /dev/dm-XX disks. /proc/mdstat is empty, so they are not software RAIDs. ps -ef | grep -i multipath | grep -v grep and multipath -ll show NOTHING! So they are not disks from multipath. All have an old "last modification" timestamp: 1 year ago. strings /dev/dm-16 | head, for example, shows that there are/were (?) data on them! mount | grep dm gives nothing. pvs | grep dm gives nothing. Question: what are these dm- disks? The LVM Volume Group for the OS uses a local disk, that's ok; we are trying to find out what these "dm-*" disks are :)
How do you think you are mounting your filesystem(s)? $ findmnt / TARGET SOURCE FSTYPE OPTIONS / /dev/mapper/alan_dell_2016-fedora ext4 rw,relatime,seclabel,data=ordered $ ls -l /dev/mapper/alan_dell_2016-fedora lrwxrwxrwx. 1 root root 7 Mar 14 09:53 /dev/mapper/alan_dell_2016-fedora -> ../dm-0 $ ls -l /dev/dm-0 brw-rw----. 1 root disk 253, 0 Mar 14 09:53 /dev/dm-0 $ lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 465.8G 0 disk ├─sda4 8:4 0 450M 0 part ├─sda2 8:2 0 128M 0 part ├─sda7 8:7 0 371.4G 0 part │ └─alan_dell_2016-fedora 253:0 0 40G 0 lvm / $ cat /sys/class/block/dm-0/dm/name alan_dell_2016-fedora $ sudo dmsetup table alan_dell_2016-fedora: 0 83886080 linear 8:7 2048 P.S. I also tried running your fdisk -l command on my newer OS (Fedora Linux 30). Instead of showing the /dev/dm-* devices, it shows me the names of all the /dev/mapper/* links.
What are these DM devices?
1,486,330,371,000
The thing is that the owner had Windows 10 already installed and the disk was formatted with the dynamic layout (on an MBR scheme). Windows shows that there are 4 existing volumes (C, D, E, F) but GParted (on a live Linux) doesn't show the last three partitions (D, E, F) as separate partitions; instead it shows them as one whole partition (NTFS) beside the C partition and another partition used by Windows to manage the disk (100MB). At the beginning (I didn't know about that dynamic thing), I thought deallocating the last partition (F) from the Windows disk manager would solve the problem & Linux would read it as unallocated space & therefore install Linux on it. But I got confused when Linux didn't recognize the free partition and it was still showing the 3 partitions (including the one I freed) as one whole partition. So instead of messing with the disk, I decided to get informed about it. I read a lot of articles & I discovered the DYNAMIC & BASIC layout thing, & the only choice I have to install Linux on that system is to do a Dynamic disk to Basic disk conversion; but according to MSDN, I have to back up the whole disk, which is the thing I can't actually do. Some other resources recommended using EaseUS disk manager or MiniTool partition wizard to do a conversion without backups & of course without data loss. However, I'm still afraid to lose data & the backup thing is not really a choice for my situation. So is using third-party applications to do such a conversion a safe choice? & are there any better suggestions for installing Linux on such a disk (with Windows) or for performing that kind of conversion?
The problem is support for Windows' Dynamic disk format under Linux is weak. The Windows' Dynamic disk partitions won't show up under Linux until a tool like ldmtool is installed (which reads the metadata and maps them as device mapper disks). However your typical Linux distro installer is not going to run it and thus will be completely oblivious of said dynamic disks. Additionally you can't use Linux to modify dynamic disk partitions so you would be stuck manually trying to assign filesystems to existing partitions only. I strongly recommend you run Linux as a VM on your existing Windows install in your scenario. The approach you're trying to take is complicated and experience says dual booting in such an environment carries high risk (and you've mentioned you're trying to avoid risk).
How to install Ubuntu on a Windows dynamic disk (MBR scheme)?
1,486,330,371,000
Has anyone seen this issue and can help me solve it? I have got a preinstalled server (Debian GNU/Linux 7.6 (wheezy)), where the disk space was partitioned very badly... :-( The hard disk is very big but it is partitioned this way: rootfs 323M 320M 0 100% / udev 10M 0 10M 0% /dev tmpfs 406M 1012K 405M 1% /run /dev/disk/by-uuid/aa26072b-e0f4-4962-ba44-76d5e65346de 323M 320M 0 100% / tmpfs 5,0M 0 5,0M 0% /run/lock tmpfs 2,4G 0 2,4G 0% /run/shm /dev/sda9 531G 6,4G 498G 2% /home /dev/sda8 368M 11M 339M 3% /tmp /dev/sda5 8,3G 2,2G 5,8G 28% /usr TARGET SOURCE FSTYPE OPTIONS / /dev/disk/by-uuid/aa26072b-e0f4-4962-ba44-76d5e65346de ext4 rw,relatime,errors=remount-ro,user_xattr,barrier=1,data=order ├─/sys sysfs sysfs rw,nosuid,nodev,noexec,relatime ├─/proc proc proc rw,nosuid,nodev,noexec,relatime │ └─/proc/sys/fs/binfmt_misc binfmt_misc binfmt_misc rw,nosuid,nodev,noexec,relatime ├─/dev udev devtmpfs rw,relatime,size=10240k,nr_inodes=214285,mode=755 │ └─/dev/pts devpts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 ├─/run tmpfs tmpfs rw,nosuid,noexec,relatime,size=414996k,mode=755 │ ├─/run/lock tmpfs tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k │ └─/run/shm tmpfs tmpfs rw,nosuid,nodev,noexec,relatime,size=2507080k ├─/home /dev/sda9 ext4 rw,relatime,user_xattr,barrier=1,data=ordered ├─/tmp /dev/sda8 ext4 rw,relatime,user_xattr,barrier=1,data=ordered └─/usr /dev/sda5 ext4 rw,relatime,user_xattr,barrier=1,data=ordered /opt is linked to /home/opt and /var is linked to /home/var... opt -> /home/opt var -> /home/var But when I run apt-get upgrade or install some software it always fails... So can I expand the root partition in any way or create a symlink to some mount points somehow? Thank you very much for your help.
In order to recover this installation, I suggest: download & boot RIP Linux (11.7 is a version I prefer, although there is 13.7 available too); if you have problems booting the ISO, remember that for RIP Linux is enough to start the kernel and rootfs.cgz as initrd, making it very simple to boot even from an existing installation with gparted resize your /home partition to leave room for a new root partition create the new root partition, ext4 filesystem for example use rsync -aAXv --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} /mnt/your-old-root/* /mnt/your-new-root/ to clone your root partition edit the /mnt/your-new-root/etc/fstab file to correctly mount the new / and /home partitions edit your bootloader (GRUB/GRUB2 for example) kernel parameter (that reads root=UUID=xxxxx) to match with the new UUID of the new root partition (lookup in ls -l /dev/disk/by-uuid/) reboot your system, then verify it's using the new root partition NOTE: because of the critical nature of operations you would be making, you should consider making a backup and always referring to official documentation when you have a doubt. Stuff will break otherwise.
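Editing fstab and the bootloader's root= line both come down to swapping one UUID for another, which sed can do mechanically. Demonstrated on a throwaway fstab copy; the new UUID is made up — substitute the real one from ls -l /dev/disk/by-uuid/:

```shell
old=aa26072b-e0f4-4962-ba44-76d5e65346de   # current root UUID from the question
new=11111111-2222-3333-4444-555555555555   # hypothetical UUID of the new root partition
cat > fstab.sample <<EOF
UUID=$old / ext4 errors=remount-ro 0 1
/dev/sda9 /home ext4 rw,user_xattr 0 2
EOF
sed -i "s/UUID=$old/UUID=$new/" fstab.sample
grep '^UUID=' fstab.sample    # root line now points at the new partition
```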
full rootfs on big hardisk, installation or update software not possible
1,486,330,371,000
Pertaining to computer forensics, if the suspect's computer (which cannot be removed from the scene), is Linux, can you directly use tools such as dd or dcfldd on his computer to acquire the disk image? Or do you need to use forensic live cds like Helix, Penguin sleuth or FFCU on top of the existing OS?
While running a command like # dd if=/dev/sda of=/path/to/external/medium/file.img on a live system will work, it's going to result in a number of problems which you won't have if you boot into a separate OS and make the image(s) from there: If you image an entire disk, it probably contains a boot loader and a partition table. Those are going to get in your way when you go to try and do forensics/recovery on the image. What you really want is to image each filesystem independently: # dd if=/dev/sda1 of=/path/to/external/medium/filesystem1.img # dd if=/dev/sda2 of=/path/to/external/medium/filesystem2.img ...etc... Doing it this way makes mounting the filesystems trivial: # mount -oloop,ro filesystem1.img /mnt/fs1 (I show the mount done as root because on some Linuxes, loop devices are locked down, so regular users can't use them.) You're snapshotting live, mounted filesystems, so when you mount them later, it's effectively no different than if you had power-cycled the machine. The partitions will be "dirty," which can make mounting them without forensically damaging them difficult. You're using the suspect machine's copy of dd(1). If someone were trying to hide something from you, they could provide a sneaky or malicious copy of dd(1). Now, all that having been said, there are some good reasons to do an on-line clone. The best reason is that the system is using some form of filesystem or whole-disk encryption, and rebooting it will erase the decryption keys for the mounted volumes. dd is not the right tool for this job, however, since that will just get you a copy of the encrypted-at-rest data. A regular backup is a better idea. E.g., # tar -cvJf /path/to/external/medium/everything.tar.xz \ --exclude={'/proc/*','/sys/*','/tmp/*'} / That won't discover hidden partitions and such, but at least it will force the OS to decrypt every file accessible directly from the root of the filesystem.
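The per-filesystem imaging step can be rehearsed end to end on a scratch file standing in for /dev/sda1, adding the checksum any forensic copy should get so the image can later be shown to match its source (a sketch — real acquisition would also record the hashes somewhere tamper-evident):

```shell
part=$(mktemp)                                # stand-in for /dev/sda1
head -c 65536 /dev/urandom > "$part"          # fake partition contents
dd if="$part" of=filesystem1.img bs=4096 2>/dev/null
sha256sum "$part" filesystem1.img             # both hashes must agree
```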
How to acquire image of disk on Linux computer?
1,486,330,371,000
I have quite a lot of external hard-drives, and often have a hard time finding what I'm looking for - not to mention not storing the same file several times on different drives or finding free space when I need it. I was therefore wondering if there are any database-programs suitable for storing the content of disks; including filenames, size and modification dates? Generally, the more automated the program is in searching each disk and parsing information about the files, the better. Ideally, it should use a checksum (or something) to identify identical files. A database that also stores information about each disk - like partitioning, format (filesystems) and free space remaining (on each partition) - would be a plus. I have MySQL and PostgreSQL, as well as Apache with PHP, running on my computer, so I can use a solution based on these. Though I'm really after a more specialized stand-alone program; at least for managing collections, but preferably specialized for keeping track of files on multiple disks. I'm also open to unconventional approaches (using a program intended for something else). Has anybody had a similar problem and found a good solution?
It sounds like what you want is some sort of media content database. There are multiple such available; a few that you may want to have a look at are: Gnome Catalog Hyper's CdCatalog CDCollect Virtual Volumes View Since these are primarily meant for cataloging CDs and DVDs, they should have no problem even if the different hard disks are mounted at the same location.
Any programs suitable for making a database over disk-content?
1,486,330,371,000
I'm having strange problems with my PC and don't know what to do. It all started with the fact that certain characters (Devanagari characters) could no longer be displayed in my Emacs. Whenever I opened a document containing these characters and then scrolled to the place where the characters were supposed to be displayed, Emacs closed automatically. An error is then logged in the Linux Mint system reports; if I try to call it up, that programme also closes. I have tried everything possible. It is not due to my Emacs config. Firefox and Thunderbird also crash now and then, but I don't know exactly what causes those crashes. I bought new RAM, but the RAM is not causing the issue either. I figured out how to access the respective crash report, and it says (dmesg): Apr 9 17:29:01 enjotel-mint20 kernel: [ 1272.437715] ata2.00: exception Emask 0x0 SAct 0x80 SErr 0x0 action 0x0 Apr 9 17:29:01 enjotel-mint20 kernel: [ 1272.437722] ata2.00: irq_stat 0x40000008 Apr 9 17:29:01 enjotel-mint20 kernel: [ 1272.437726] ata2.00: failed command: READ FPDMA QUEUED Apr 9 17:29:01 enjotel-mint20 kernel: [ 1272.437733] ata2.00: cmd 60/08:38:58:01:17/00:00:00:00:00/40 tag 7 ncq dma 4096 in Apr 9 17:29:01 enjotel-mint20 kernel: [ 1272.437733] res 41/40:00:58:01:17/00:00:00:00:00/00 Emask 0x409 (media error) <F> Apr 9 17:29:01 enjotel-mint20 kernel: [ 1272.437736] ata2.00: status: { DRDY ERR } Apr 9 17:29:01 enjotel-mint20 kernel: [ 1272.437739] ata2.00: error: { UNC } Apr 9 17:29:01 enjotel-mint20 kernel: [ 1272.443486] ata2.00: configured for UDMA/133 Apr 9 17:29:01 enjotel-mint20 kernel: [ 1272.443504] sd 1:0:0:0: [sdb] tag#7 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE Apr 9 17:29:01 enjotel-mint20 kernel: [ 1272.443507] sd 1:0:0:0: [sdb] tag#7 Sense Key : Medium Error [current] Apr 9 17:29:01 enjotel-mint20 kernel: [ 1272.443509] sd 1:0:0:0: [sdb] tag#7 Add.
Sense: Unrecovered read error - auto reallocate failed Apr 9 17:29:01 enjotel-mint20 kernel: [ 1272.443512] sd 1:0:0:0: [sdb] tag#7 CDB: Read(10) 28 00 00 17 01 58 00 00 08 00 Apr 9 17:29:01 enjotel-mint20 kernel: [ 1272.443515] blk_update_request: I/O error, dev sdb, sector 1507672 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0 Apr 9 17:29:01 enjotel-mint20 kernel: [ 1272.443540] ata2: EH complete Apr 9 17:29:01 enjotel-mint20 kernel: [ 1272.485670] ata2.00: exception Emask 0x0 SAct 0x1000 SErr 0x0 action 0x0 Apr 9 17:29:01 enjotel-mint20 kernel: [ 1272.485675] ata2.00: irq_stat 0x40000008 Apr 9 17:29:01 enjotel-mint20 kernel: [ 1272.485679] ata2.00: failed command: READ FPDMA QUEUED Apr 9 17:29:01 enjotel-mint20 kernel: [ 1272.485684] ata2.00: cmd 60/08:60:58:01:17/00:00:00:00:00/40 tag 12 ncq dma 4096 in Apr 9 17:29:01 enjotel-mint20 kernel: [ 1272.485684] res 41/40:00:58:01:17/00:00:00:00:00/00 Emask 0x409 (media error) <F> Apr 9 17:29:01 enjotel-mint20 kernel: [ 1272.485687] ata2.00: status: { DRDY ERR } Apr 9 17:29:01 enjotel-mint20 kernel: [ 1272.485689] ata2.00: error: { UNC } Apr 9 17:29:01 enjotel-mint20 kernel: [ 1272.491673] ata2.00: configured for UDMA/133 Apr 9 17:29:01 enjotel-mint20 kernel: [ 1272.491691] sd 1:0:0:0: [sdb] tag#12 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE Apr 9 17:29:01 enjotel-mint20 kernel: [ 1272.491696] sd 1:0:0:0: [sdb] tag#12 Sense Key : Medium Error [current] Apr 9 17:29:01 enjotel-mint20 kernel: [ 1272.491699] sd 1:0:0:0: [sdb] tag#12 Add. 
Sense: Unrecovered read error - auto reallocate failed Apr 9 17:29:01 enjotel-mint20 kernel: [ 1272.491703] sd 1:0:0:0: [sdb] tag#12 CDB: Read(10) 28 00 00 17 01 58 00 00 08 00 Apr 9 17:29:01 enjotel-mint20 kernel: [ 1272.491707] blk_update_request: I/O error, dev sdb, sector 1507672 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0 Apr 9 17:29:01 enjotel-mint20 kernel: [ 1272.491739] ata2: EH complete Apr 9 17:29:01 enjotel-mint20 systemd[1]: Started Process Core Dump (PID 6655/UID 0). Apr 9 17:29:02 enjotel-mint20 systemd-coredump[6656]: Process 6620 (emacs28) of user 1000 dumped core.#012#012Stack trace of thread 6620:#012#0 0x00007f4758af12ab raise (libpthread.so.0 + 0x142ab)#012#1 0x0000000000426a3d n/a (emacs-28.128 + 0x26a3d)#012#2 0x0000000000426efc n/a (emacs-28.128 + 0x26efc)#012#3 0x000000000052eecd n/a (emacs-28.128 + 0x12eecd)#012#4 0x000000000052efbf n/a (emacs-28.128 + 0x12efbf)#012#5 0x00007f4758af1420 __restore_rt (libpthread.so.0 + 0x14420)#012#6 0x00007f4758d280f0 n/a (libotf.so.0 + 0xb0f0)#012#7 0x00007f4758d2a013 OTF_get_features (libotf.so.0 + 0xd013)#012#8 0x00007f4758d2a078 OTF_check_features (libotf.so.0 + 0xd078)#012#9 0x000000000060ba72 n/a (emacs-28.128 + 0x20ba72)#012#10 0x00000000005af03c n/a (emacs-28.128 + 0x1af03c)#012#11 0x00000000005af9d8 n/a (emacs-28.128 + 0x1af9d8)#012#12 0x0000000000610859 n/a (emacs-28.128 + 0x210859)#012#13 0x00000000006110db n/a (emacs-28.128 + 0x2110db)#012#14 0x00000000006115f9 n/a (emacs-28.128 + 0x2115f9)#012#15 0x00000000005ae249 n/a (emacs-28.128 + 0x1ae249)#012#16 0x00000000005fa4be n/a (emacs-28.128 + 0x1fa4be)#012#17 0x00000000005fc793 n/a (emacs-28.128 + 0x1fc793)#012#18 0x00000000005fd518 n/a (emacs-28.128 + 0x1fd518)#012#19 0x00000000005268cd n/a (emacs-28.128 + 0x1268cd)#012#20 0x00000000005942e7 n/a (emacs-28.128 + 0x1942e7)#012#21 0x000000000051650a n/a (emacs-28.128 + 0x11650a)#012#22 0x0000000000594229 n/a (emacs-28.128 + 0x194229)#012#23 0x00000000005164a6 n/a (emacs-28.128 + 
0x1164a6)#012#24 0x000000000051be50 n/a (emacs-28.128 + 0x11be50)#012#25 0x000000000051c1a6 n/a (emacs-28.128 + 0x11c1a6)#012#26 0x000000000042e465 n/a (emacs-28.128 + 0x2e465)#012#27 0x00007f4758751083 __libc_start_main (libc.so.6 + 0x24083)#012#28 0x000000000042eb3e n/a (emacs-28.128 + 0x2eb3e)#012#012Stack trace of thread 6623:#012#0 0x00007f475883fbbf __GI___poll (libc.so.6 + 0x112bbf)#012#1 0x00007f475fa0f36e n/a (libglib-2.0.so.0 + 0x5236e)#012#2 0x00007f475fa0f4a3 g_main_context_iteration (libglib-2.0.so.0 + 0x524a3)#012#3 0x00007f475fa0f4f1 n/a (libglib-2.0.so.0 + 0x524f1)#012#4 0x00007f475fa38ae1 n/a (libglib-2.0.so.0 + 0x7bae1)#012#5 0x00007f4758ae5609 start_thread (libpthread.so.0 + 0x8609)#012#6 0x00007f475884c353 __clone (libc.so.6 + 0x11f353)#012#012Stack trace of thread 6624:#012#0 0x00007f475883fbbf __GI___poll (libc.so.6 + 0x112bbf)#012#1 0x00007f475fa0f36e n/a (libglib-2.0.so.0 + 0x5236e)#012#2 0x00007f475fa0f6f3 g_main_loop_run (libglib-2.0.so.0 + 0x526f3)#012#3 0x00007f475fc6bf8a n/a (libgio-2.0.so.0 + 0x11ef8a)#012#4 0x00007f475fa38ae1 n/a (libglib-2.0.so.0 + 0x7bae1)#012#5 0x00007f4758ae5609 start_thread (libpthread.so.0 + 0x8609)#012#6 0x00007f475884c353 __clone (libc.so.6 + 0x11f353)#012#012Stack trace of thread 6625:#012#0 0x00007f475883fbbf __GI___poll (libc.so.6 + 0x112bbf)#012#1 0x00007f475fa0f36e n/a (libglib-2.0.so.0 + 0x5236e)#012#2 0x00007f475fa0f4a3 g_main_context_iteration (libglib-2.0.so.0 + 0x524a3)#012#3 0x00007f47528ef99d n/a (libdconfsettings.so + 0xa99d)#012#4 0x00007f475fa38ae1 n/a (libglib-2.0.so.0 + 0x7bae1)#012#5 0x00007f4758ae5609 start_thread (libpthread.so.0 + 0x8609)#012#6 0x00007f475884c353 __clone (libc.so.6 + 0x11f353) Apr 9 17:29:02 enjotel-mint20 systemd[1]: [email protected]: Succeeded. 
This is the result of enjotel@enjotel-mint20:~$ sudo smartctl -x /dev/sdb: smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.4.0-176-generic] (local build) Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org === START OF INFORMATION SECTION === Model Family: Marvell based SanDisk SSDs Device Model: SanDisk SSD PLUS 1000GB Serial Number: 20504Z800982 LU WWN Device Id: 5 001b44 8be8fa562 Firmware Version: UH5100RL User Capacity: 1.000.207.286.272 bytes [1,00 TB] Sector Size: 512 bytes logical/physical Rotation Rate: Solid State Device Form Factor: 2.5 inches Device is: In smartctl database [for details use: -P show] ATA Version is: ACS-2 T13/2015-D revision 3 SATA Version is: SATA 3.2, 6.0 Gb/s (current: 6.0 Gb/s) Local Time is: Tue Apr 9 18:35:18 2024 CEST SMART support is: Available - device has SMART capability. SMART support is: Enabled AAM feature is: Unavailable APM level is: 254 (maximum performance) Rd look-ahead is: Enabled Write cache is: Enabled DSN feature is: Unavailable ATA Security is: Disabled, frozen [SEC2] Wt Cache Reorder: Unavailable === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED General SMART Values: Offline data collection status: (0x00) Offline data collection activity was never started. Auto Offline Data Collection: Disabled. Self-test execution status: ( 0) The previous self-test routine completed without error or no self-test has ever been run. Total time to complete Offline data collection: ( 120) seconds. Offline data collection capabilities: (0x15) SMART execute Offline immediate. No Auto Offline data collection support. Abort Offline collection upon new command. No Offline surface scan supported. Self-test supported. No Conveyance Self-test supported. No Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. 
General Purpose Logging supported. Short self-test routine recommended polling time: ( 2) minutes. Extended self-test routine recommended polling time: ( 182) minutes. SMART Attributes Data Structure revision number: 1 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAGS VALUE WORST THRESH FAIL RAW_VALUE 5 Reallocated_Sector_Ct -O--CK 100 100 000 - 0 9 Power_On_Hours -O--CK 100 100 000 - 9891 12 Power_Cycle_Count -O--CK 100 100 000 - 1411 165 Total_Write/Erase_Count -O--CK 100 100 000 - 1774 166 Min_W/E_Cycle -O--CK 100 100 --- - 5 167 Min_Bad_Block/Die -O--CK 100 100 --- - 0 168 Maximum_Erase_Cycle -O--CK 100 100 --- - 26 169 Total_Bad_Block -O--CK 100 100 --- - 1501 170 Unknown_Attribute -O--CK 100 100 --- - 0 171 Program_Fail_Count -O--CK 100 100 000 - 0 172 Erase_Fail_Count -O--CK 100 100 000 - 0 173 Avg_Write/Erase_Count -O--CK 100 100 000 - 5 174 Unexpect_Power_Loss_Ct -O--CK 100 100 000 - 20 184 End-to-End_Error -O--CK 100 100 --- - 0 187 Reported_Uncorrect -O--CK 100 100 000 - 11155 188 Command_Timeout -O--CK 100 100 --- - 0 194 Temperature_Celsius -O---K 073 054 000 - 27 (Min/Max 2/54) 199 SATA_CRC_Error -O--CK 100 100 --- - 0 230 Perc_Write/Erase_Count -O--CK 100 100 000 - 1286 256 1286 232 Perc_Avail_Resrvd_Space PO--CK 100 100 005 - 100 233 Total_NAND_Writes_GiB -O--CK 100 100 --- - 5636 234 Perc_Write/Erase_Ct_BC -O--CK 100 100 000 - 27722 241 Total_Writes_GiB ----CK 100 100 000 - 11337 242 Total_Reads_GiB ----CK 100 100 000 - 12756 244 Thermal_Throttle -O--CK 000 100 --- - 0 ||||||_ K auto-keep |||||__ C event count ||||___ R error rate |||____ S speed/performance ||_____ O updated online |______ P prefailure warning General Purpose Log Directory Version 1 SMART Log Directory Version 1 [multi-sector log support] Address Access R/W Size Description 0x00 GPL,SL R/O 1 Log Directory 0x01 SL R/O 1 Summary SMART error log 0x02 SL R/O 1 Comprehensive SMART error log 0x03 GPL R/O 16 Ext. 
Comprehensive SMART error log 0x04 GPL,SL R/O 8 Device Statistics log 0x06 SL R/O 1 SMART self-test log 0x07 GPL R/O 1 Extended self-test log 0x10 GPL R/O 1 NCQ Command Error log 0x11 GPL R/O 1 SATA Phy Event Counters log 0x30 GPL,SL R/O 9 IDENTIFY DEVICE data log 0x80-0x9f GPL,SL R/W 16 Host vendor specific log 0xa1 GPL,SL VS 1 Device vendor specific log 0xa2 GPL,SL VS 2 Device vendor specific log 0xa3-0xa4 GPL,SL VS 1 Device vendor specific log 0xa7 GPL,SL VS 1 Device vendor specific log 0xa9 GPL,SL VS 3 Device vendor specific log SMART Extended Comprehensive Error Log Version: 1 (16 sectors) Device Error Count: 11155 (device log contains only the most recent 64 errors) CR = Command Register FEATR = Features Register COUNT = Count (was: Sector Count) Register LBA_48 = Upper bytes of LBA High/Mid/Low Registers ] ATA-8 LH = LBA High (was: Cylinder High) Register ] LBA LM = LBA Mid (was: Cylinder Low) Register ] Register LL = LBA Low (was: Sector Number) Register ] DV = Device (was: Device/Head) Register DC = Device Control Register ER = Error register ST = Status register Powered_Up_Time is measured from power on, and printed as DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes, SS=sec, and sss=millisec. It "wraps" after 49.710 days. Error 11155 [4] occurred at disk power-on lifetime: 9890 hours (412 days + 2 hours) When the command that caused the error occurred, the device was active or idle. 
After command completion occurred, registers were: ER -- ST COUNT LBA_48 LH LM LL DV DC -- -- -- == -- == == == -- -- -- -- -- 40 -- 51 00 00 00 00 6b b4 7c 6c a0 00 Error: UNC Commands leading to the command that caused the error were: CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name -- == -- == -- == == == -- -- -- -- -- --------------- -------------------- 2f 00 00 00 01 00 00 00 00 08 30 a0 00 02:05:58.722 READ LOG EXT ea 00 00 00 00 00 00 00 00 00 00 a0 00 02:05:58.667 FLUSH CACHE EXT ea 00 00 00 00 00 00 00 00 00 00 a0 00 02:05:49.830 FLUSH CACHE EXT 2f 00 00 00 01 00 00 00 00 08 30 a0 00 02:05:28.826 READ LOG EXT 2f 00 00 00 01 00 00 00 00 08 30 a0 00 02:05:28.781 READ LOG EXT Error 11154 [3] occurred at disk power-on lifetime: 9890 hours (412 days + 2 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER -- ST COUNT LBA_48 LH LM LL DV DC -- -- -- == -- == == == -- -- -- -- -- 40 -- 51 00 00 00 00 6b b4 7c 6c a0 00 Commands leading to the command that caused the error were: CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name -- == -- == -- == == == -- -- -- -- -- --------------- -------------------- ea 00 00 00 00 00 00 00 00 00 00 a0 00 02:05:58.667 FLUSH CACHE EXT ea 00 00 00 00 00 00 00 00 00 00 a0 00 02:05:49.830 FLUSH CACHE EXT 2f 00 00 00 01 00 00 00 00 08 30 a0 00 02:05:28.826 READ LOG EXT 2f 00 00 00 01 00 00 00 00 08 30 a0 00 02:05:28.781 READ LOG EXT 2f 00 00 00 01 00 00 00 00 08 30 a0 00 02:05:28.721 READ LOG EXT Error 11153 [2] occurred at disk power-on lifetime: 9890 hours (412 days + 2 hours) When the command that caused the error occurred, the device was active or idle. 
After command completion occurred, registers were: ER -- ST COUNT LBA_48 LH LM LL DV DC -- -- -- == -- == == == -- -- -- -- -- 40 -- 51 00 28 00 00 6b b4 7c 94 a0 00 Commands leading to the command that caused the error were: CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name -- == -- == -- == == == -- -- -- -- -- --------------- -------------------- ea 00 00 00 00 00 00 00 00 00 00 a0 00 02:05:49.830 FLUSH CACHE EXT 2f 00 00 00 01 00 00 00 00 08 30 a0 00 02:05:28.826 READ LOG EXT 2f 00 00 00 01 00 00 00 00 08 30 a0 00 02:05:28.781 READ LOG EXT 2f 00 00 00 01 00 00 00 00 08 30 a0 00 02:05:28.721 READ LOG EXT 2f 00 00 00 01 00 00 00 00 08 30 a0 00 02:05:28.666 READ LOG EXT Error 11152 [1] occurred at disk power-on lifetime: 9890 hours (412 days + 2 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER -- ST COUNT LBA_48 LH LM LL DV DC -- -- -- == -- == == == -- -- -- -- -- 40 -- 51 00 00 00 00 6b b4 7c 6c a0 00 Error: UNC Commands leading to the command that caused the error were: CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name -- == -- == -- == == == -- -- -- -- -- --------------- -------------------- 2f 00 00 00 01 00 00 00 00 08 30 a0 00 02:05:28.826 READ LOG EXT 2f 00 00 00 01 00 00 00 00 08 30 a0 00 02:05:28.781 READ LOG EXT 2f 00 00 00 01 00 00 00 00 08 30 a0 00 02:05:28.721 READ LOG EXT 2f 00 00 00 01 00 00 00 00 08 30 a0 00 02:05:28.666 READ LOG EXT 2f 00 00 00 01 00 00 00 00 08 30 a0 00 02:05:28.621 READ LOG EXT Error 11151 [0] occurred at disk power-on lifetime: 9890 hours (412 days + 2 hours) When the command that caused the error occurred, the device was active or idle. 
After command completion occurred, registers were: ER -- ST COUNT LBA_48 LH LM LL DV DC -- -- -- == -- == == == -- -- -- -- -- 40 -- 51 00 00 00 00 6b b4 7c 6c a0 00 Error: UNC Commands leading to the command that caused the error were: CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name -- == -- == -- == == == -- -- -- -- -- --------------- -------------------- 2f 00 00 00 01 00 00 00 00 08 30 a0 00 02:05:28.781 READ LOG EXT 2f 00 00 00 01 00 00 00 00 08 30 a0 00 02:05:28.721 READ LOG EXT 2f 00 00 00 01 00 00 00 00 08 30 a0 00 02:05:28.666 READ LOG EXT 2f 00 00 00 01 00 00 00 00 08 30 a0 00 02:05:28.621 READ LOG EXT 2f 00 00 00 01 00 00 00 00 08 30 a0 00 02:05:28.566 READ LOG EXT Warning! SMART Extended Comprehensive Error Log Structure error: invalid SMART checksum. Error 11150 [63] occurred at disk power-on lifetime: 61166 hours (2548 days + 14 hours) When the command that caused the error occurred, the device was in an unknown state. After command completion occurred, registers were: ER -- ST COUNT LBA_48 LH LM LL DV DC -- -- -- == -- == == == -- -- -- -- -- 00 -- 00 00 00 00 00 00 00 00 00 00 00 Commands leading to the command that caused the error were: CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name -- == -- == -- == == == -- -- -- -- -- --------------- -------------------- 00 00 00 00 00 00 00 00 00 00 00 00 45 16d+13:40:55.765 NOP [Abort queued commands] 00 00 00 00 00 00 00 00 00 00 00 00 44 13d+06:08:44.612 NOP [Abort queued commands] 00 00 00 00 00 00 00 00 00 00 00 00 43 9d+22:36:33.459 NOP [Abort queued commands] 00 00 00 00 00 00 00 00 00 00 00 00 42 6d+15:04:22.306 NOP [Abort queued commands] 00 00 00 00 00 00 00 00 00 00 00 00 41 3d+07:32:11.153 NOP [Abort queued commands] Error 11149 [62] occurred at disk power-on lifetime: 61166 hours (2548 days + 14 hours) When the command that caused the error occurred, the device was in an unknown state. 
After command completion occurred, registers were: ER -- ST COUNT LBA_48 LH LM LL DV DC -- -- -- == -- == == == -- -- -- -- -- 00 -- 00 00 00 00 00 00 00 00 00 00 00 Commands leading to the command that caused the error were: CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name -- == -- == -- == == == -- -- -- -- -- --------------- -------------------- 00 00 00 00 00 00 00 00 00 00 00 00 35 16d+13:40:55.765 NOP [Abort queued commands] 00 00 00 00 00 00 00 00 00 00 00 00 34 13d+06:08:44.612 NOP [Abort queued commands] 00 00 00 00 00 00 00 00 00 00 00 00 33 9d+22:36:33.459 NOP [Abort queued commands] 00 00 00 00 00 00 00 00 00 00 00 00 32 6d+15:04:22.306 NOP [Abort queued commands] 00 00 00 00 00 00 00 00 00 00 00 00 31 3d+07:32:11.153 NOP [Abort queued commands] Error 11148 [61] occurred at disk power-on lifetime: 61166 hours (2548 days + 14 hours) When the command that caused the error occurred, the device was in an unknown state. After command completion occurred, registers were: ER -- ST COUNT LBA_48 LH LM LL DV DC -- -- -- == -- == == == -- -- -- -- -- 00 -- 00 00 00 00 00 00 00 00 00 00 00 Commands leading to the command that caused the error were: CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name -- == -- == -- == == == -- -- -- -- -- --------------- -------------------- 00 00 00 00 00 00 00 00 00 00 00 00 25 16d+13:40:55.765 NOP [Abort queued commands] 00 00 00 00 00 00 00 00 00 00 00 00 24 13d+06:08:44.612 NOP [Abort queued commands] 00 00 00 00 00 00 00 00 00 00 00 00 23 9d+22:36:33.459 NOP [Abort queued commands] 00 00 00 00 00 00 00 00 00 00 00 00 22 6d+15:04:22.306 NOP [Abort queued commands] 00 00 00 00 00 00 00 00 00 00 00 00 21 3d+07:32:11.153 NOP [Abort queued commands] SMART Extended Self-test Log Version: 1 (1 sectors) No self-tests have been logged. 
[To run self-tests, use: smartctl -t] Selective Self-tests/Logging not supported SCT Commands not supported Device Statistics (GP Log 0x04) Page Offset Size Value Flags Description 0x01 ===== = = === == General Statistics (rev 1) == 0x01 0x008 4 1411 --- Lifetime Power-On Resets 0x01 0x010 4 9891 --- Power-on Hours 0x01 0x018 6 23777035689 --- Logical Sectors Written 0x01 0x028 6 26752324966 --- Logical Sectors Read 0x01 0x038 6 9891 --- Date and Time TimeStamp 0x05 ===== = = === == Temperature Statistics (rev 1) == 0x05 0x008 1 27 --- Current Temperature 0x05 0x010 1 - --- Average Short Term Temperature 0x05 0x018 1 - --- Average Long Term Temperature 0x05 0x020 1 53 --- Highest Temperature 0x05 0x028 1 13 --- Lowest Temperature 0x05 0x030 1 27 --- Highest Average Short Term Temperature 0x05 0x038 1 27 --- Lowest Average Short Term Temperature 0x05 0x040 1 - --- Highest Average Long Term Temperature 0x05 0x048 1 - --- Lowest Average Long Term Temperature 0x05 0x050 4 0 --- Time in Over-Temperature 0x05 0x058 1 95 --- Specified Maximum Operating Temperature 0x05 0x060 4 0 --- Time in Under-Temperature 0x05 0x068 1 0 --- Specified Minimum Operating Temperature 0x07 ===== = = === == Solid State Device Statistics (rev 1) == 0x07 0x008 1 1 N-- Percentage Used Endurance Indicator |||_ C monitored condition met ||__ D supports DSN |___ N normalized value Pending Defects log (GP Log 0x0c) not supported SATA Phy Event Counters (GP Log 0x11) ID Size Value Description 0x0003 2 0 R_ERR response for device-to-host data FIS 0x0004 2 0 R_ERR response for host-to-device data FIS 0x0006 2 0 R_ERR response for device-to-host non-data FIS 0x0007 2 0 R_ERR response for host-to-device non-data FIS 0x0009 2 43 Transition from drive PhyRdy to drive PhyNRdy 0x000a 2 44 Device-to-host register FISes sent due to a COMRESET 0x000f 2 0 R_ERR response for host-to-device data FIS, CRC 0x0012 2 0 R_ERR response for host-to-device non-data FIS, CRC 0x0001 2 0 Command failed due to ICRC error
The messages in the crash report indicate that the sdb disk had a read error and returned bad data. It says auto reallocate failed, so it might be that any spare capacity for replacing detected bad blocks is already used up. You should run sudo smartctl -x /dev/sdb and see what it reports. Also, you should definitely keep your backups up to date, in case the sdb disk suddenly dies completely (as failing SSDs sometimes do). Your disk is dying: the data has to be copied off and the disk replaced ASAP. If you attempt data recovery using GNU ddrescue (the gddrescue package in the Debian/Ubuntu repositories) and it fails, and presuming you need the data from this failing disk, please contact a reputable data recovery centre. There is nothing you can do with software or hardware to revert this SATA SSD damage; total failure is imminent.
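If you do attempt the copy yourself, the key idea is to read around bad sectors instead of aborting on the first error. GNU ddrescue does this properly, with retries and a mapfile; as a rough illustration of the same idea using only plain dd, here run against an ordinary file standing in for the failing /dev/sdb (the file names are made up for this sketch):

```shell
# Create a small sample "disk image" as a harmless stand-in for /dev/sdb
dd if=/dev/urandom of=sample.img bs=1K count=64 2>/dev/null

# Rescue-style copy: don't abort on read errors (noerror), and pad
# unreadable blocks with zeros (sync) so offsets stay aligned in the output
dd if=sample.img of=rescued.img bs=4K conv=noerror,sync 2>/dev/null

cmp -s sample.img rescued.img && echo copies match
```

On a real failing disk you would run something like ddrescue -d /dev/sdb sdb.img sdb.map instead, since ddrescue records which regions were recovered and can retry just the bad ones; the dd form above loses that bookkeeping.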
Editors crashing with certain characters, programs crashing, how to diagnose?
1,486,330,371,000
Model: ATA Samsung SSD 850 (scsi) Sector size (logical/physical): 512B/512B Partition Table: gpt Disk Flags: Number Start End Size File system Name Flags 1 24576B 1048575B 1024000B bios_grub 2 1048576B 537919487B 536870912B fat32 boot, esp 3 537919488B 1611661311B 1073741824B zfs 4 1611661312B 500107845119B 498496183808B zfs parted /dev/sda align-check optimal 1 > 1 not aligned parted /dev/sda align-check optimal 2 > 2 aligned parted /dev/sda align-check optimal 3 > 3 aligned parted /dev/sda align-check optimal 4 > 4 aligned The sector size says 512B, but internally I am guessing it is 4096B because it is an SSD; either way it should be divisible: 24576 / 512 = 48, 24576 / 4096 = 6. Is there any reason why parted says it is not aligned? I am aware that this current config should not have any effect on performance, as it is only read (if at all) at boot, but I am just curious why it is reported as it is. For reference, the partition layout is the one suggested by Debian ZFS on Root (https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Buster%20Root%20on%20ZFS.html)
Try align-check minimal 1, it should be OK with that. Irrespective of what parted would consider optimal for your hardware, optimizing the partition layout for flash memory should take into account that the chips are organized into pages and erase blocks. You simply cannot overwrite a page in place: the processor controlling the device must first erase it, and erasing is only possible in units much larger than the page size. While the page size for your device is likely to be in the 2KB-32KB range, the erase-block size will typically be somewhere between 128KB and 2MB (64 times more) depending on the capacity of the disk; 4MB is not unusual in the multi-GB capacity class. ** All operations on the drive can only happen in these units ** => Forget the page size, open the datasheet of your particular device, find the size of the erase blocks, then align your partitions accordingly. Take care: there was a time when Samsung was playing with very surprising values and was not particularly eager to disclose this information.
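The arithmetic behind parted's complaint can be checked by hand. parted's "optimal" criterion is usually a 1 MiB grain (taken from the device's reported optimal I/O size, falling back to a 1 MiB default); the 1 MiB figure below is that common default, not something read from this particular disk:

```shell
start=24576                # partition 1 start, in bytes, from the parted output
grain=$((1024 * 1024))     # 1 MiB, parted's usual "optimal" alignment grain

if [ $(( start % grain )) -eq 0 ]; then
  echo "1 aligned"
else
  echo "1 not aligned (start is $(( start % grain )) bytes past a 1 MiB boundary)"
fi
```

By the same check, partition 2's start (1048576 bytes) divides evenly into 1 MiB, which matches parted reporting it as aligned.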
Parted says not aligned, but should be correct
1,486,330,371,000
As UUIDs are assigned in the partitioning phase (by software), there is no UUID available for the entire disk. However, /dev/disk/by-id/ contains promising information for the same purpose. We can get the "UUID" path for e.g. /dev/sdb by: $ disk=sdb; ls /dev/disk/by-id/ -l | grep "/$disk$" | awk '{print "/dev/disk/by-id/"$9}' /dev/disk/by-id/ata-ST1000LM048-2E7172_WKP6XK95 /dev/disk/by-id/wwn-0x5000c500ccbb7485 However, as you can see, there is more than one entry for the same drive. The one ending with WKP6XK95 makes more sense, since it is physically written on the product tag, in the "Serial Number" section. How can I get the only value that is possibly written on the disk? In other words, how is the wwn-... id generated, and can I safely ignore this entry? Would ignoring this entry by ... | grep -v wwn be safe?
The wwn- entry is the World Wide Name of the disk. It is technically not a UUID, because it follows neither the UUID format nor its generation rules. On stand-alone SATA and SAS disks, it is reported by the disk firmware and assigned at the factory. On SAN storage systems it might be more complicated: as the storage is presented as LUNs (Logical UNits), the storage system assigns WWNs for them. It's like a MAC address, but for disks: the idea is that you should practically never have the same WWN on two different pieces of storage (unless you play tricks with SAN storage virtualization hardware). lsscsi -UU should also display the WWN, although prefixed with naa. instead of wwn-0x. lsblk -o +WWN can also display it. In /dev/disk/by-id/, you should pay attention to the prefixes: you can find the disk WWN string prefixed with wwn-0x and/or scsi-3, depending on the version of udev used by your distribution. The concept of a whole-disk UUID assigned when the partition table is written exists for the GPT partitioning scheme. You can see it in e.g. fdisk -l output: # fdisk -l /dev/sda Disk /dev/sda: 465.8 GiB, 500107862016 bytes, 976773168 sectors Disk model: Samsung SSD 850 Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: gpt Disk identifier: 2B05CCE8-92BC-4308-B770-174CE63D044F <--- Here! Other partitioning schemes, like MBR, won't necessarily have anything applicable, and even if they do, it is not necessarily in the form of a valid UUID. For the MBR partitioning scheme, the closest equivalent is the Windows Disk Signature (offset 0x1B8 in the actual MBR), but it's only four bytes long and not guaranteed to exist on all MBR-partitioned disks. I think it was introduced in Windows NT.
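Given the two possible prefixes, it is safer to filter out all WWN-derived aliases rather than blacklisting wwn- alone. A sketch, run here against a hard-coded sample of names (the first two entries are copied from the question; the scsi-3 alias is a hypothetical example, since the real directory contents vary by system):

```shell
# Sample /dev/disk/by-id names; on a live system you would list them
# with:  ls /dev/disk/by-id
ids='ata-ST1000LM048-2E7172_WKP6XK95
wwn-0x5000c500ccbb7485
scsi-35000c500ccbb7485'

# Drop WWN-based aliases (both the wwn-0x and scsi-3 spellings),
# keeping the vendor_model_serial style name
printf '%s\n' "$ids" | grep -v -e '^wwn-' -e '^scsi-3'
```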
How to correctly get the UUID of the entire disk?
1,486,330,371,000
I created a persistent Debian 9 live usb. The persistence is configured with / union. An unexpected consequence, although obvious in hindsight, is the system lags on non-cached reads: holmes@bakerst:~$ # WRITE to disk holmes@bakerst:~$ dd if=/dev/zero of=tempfile bs=1M count=1024; sync 1024+0 records in 1024+0 records out 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.417477 s, 2.6 GB/s holmes@bakerst:~$ # READ from buffer holmes@bakerst:~$ dd if=tempfile of=/dev/null bs=1M count=1024 1024+0 records in 1024+0 records out 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.0907808 s, 11.8 GB/s holmes@bakerst:~$ # Clear cache, non-cached READ speed holmes@bakerst:~$ sudo /sbin/sysctl -w vm.drop_caches=3 vm.drop_caches = 3 holmes@bakerst:~$ dd if=tempfile of=/dev/null bs=1M count=1024 1024+0 records in 1024+0 records out 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 15.3935 s, 69.8 MB/s There is a 169X difference between cached and non-cached read operations! What can I do, if anything, to improve performance?
Get a faster USB 3 pendrive, or maybe even a USB SSD :-) You can easily improve reading from the image of the iso file (after a slow start) by putting all the content of the squash file system into RAM with the boot option toram, but I don't think it is easy or meaningful to do that with the content of the file/partition used for persistence. See this link for more details. The following screenshot of the grub menu of a persistent live system made by mkusb is from Ubuntu, but looks very similar for Debian. There is already a menuentry for toram.
Speed up persistent live usb disk operations
1,486,330,371,000
As far as I know, in Linux systems, when the content of a file is modified, the pages containing that content in the page cache are marked dirty and will eventually be flushed to disk. What I'm wondering is: when these pages are flushed to disk, are they flushed in blocks? For example, if the block size is 4kB and I need to flush 1024kB of content, is the disk written 1024 / 4 = 256 times?
That's a pretty complex topic, and depends on the disk, the disk controller and kernel settings. In general, the kernel will attempt to be as efficient as it can. For example, if you update the same block multiple times within an adjustable time window (typically 30 seconds or so), and don't explicitly force syncing all the way to the disk each time, most of your write operations will only update the data in the cache and only the ultimate result will actually go to the disk. If you write a long series of consecutive blocks, the kernel will certainly attempt to execute it in as few and as large chunks as the storage controller and the disk itself will allow. The kernel's I/O scheduler may also optimize the ordering of disk operations to achieve most efficient disk access. This optimization can be mostly irrelevant in virtual machines and on SSDs, and so it can be switched off. (SSDs are plenty fast even if you access random blocks in a shotgun fashion; on virtual machines, the hypervisor will usually redo the optimization based on the entire set of VMs and all their disk operations anyway, so trying to micro-optimize on the level of a single VM is wasted effort.) Some disks may have restrictions or recommendations on I/O operation sizes: # fdisk -l /dev/sdb Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes For example, this HDD internally uses 4k sector size, although it emulates traditional 512-byte disk sectors. As a result, a minimum I/O size of 4k is specified.
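To put numbers on the question itself: 256 block-sized writes is only the upper bound, and with merging the same 1024 kB can leave the queue as far fewer, larger requests. A tiny arithmetic sketch (the 512 kB maximum request size is a made-up example value; the real cap depends on the device's limits under /sys/block/<dev>/queue/):

```shell
dirty_kb=1024          # dirty data to write back
block_kb=4             # filesystem block size
max_request_kb=512     # hypothetical controller/device request-size cap

echo "worst case: $(( dirty_kb / block_kb )) separate block-sized writes"
echo "best case (contiguous, fully merged): $(( (dirty_kb + max_request_kb - 1) / max_request_kb )) requests"
```

The real number of disk operations for a given flush lands somewhere between those two figures, depending on how contiguous the dirty pages are.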
Does linux systems flush dirty pages to disk one block by one block?
1,486,330,371,000
I am unpacking a tar file when I have error messages like this: [xxxxx@lo-login-02 ~]$ tar -xvf ontonotes-release-5.0_LDC2013T19.tgz ...... (omitted lines) tar: ontonotes-release-5.0/tools/ontonotes-db-tool-v0.999b/src/on/__init__.py: Cannot open: No such file or directory ontonotes-release-5.0/tools/ontonotes-db-tool-v0.999b/LICENSE tar: ontonotes-release-5.0/tools: Cannot mkdir: Disk quota exceeded tar: ontonotes-release-5.0/tools/ontonotes-db-tool-v0.999b/LICENSE: Cannot open: No such file or directory ontonotes-release-5.0/tools/ontonotes-db-tool-v0.999b/INSTALL tar: ontonotes-release-5.0/tools: Cannot mkdir: Disk quota exceeded tar: ontonotes-release-5.0/tools/ontonotes-db-tool-v0.999b/INSTALL: Cannot open: No such file or directory ontonotes-release-5.0/tools/ontonotes-db-tool-v0.999b/setup.py tar: ontonotes-release-5.0/tools: Cannot mkdir: Disk quota exceeded tar: ontonotes-release-5.0/tools/ontonotes-db-tool-v0.999b/setup.py: Cannot open: No such file or directory ontonotes-release-5.0/index.html tar: ontonotes-release-5.0/index.html: Cannot open: Disk quota exceeded tar: Exiting with failure status due to previous errors I have checked my file system quota, Inodes and disk space, but they all seem to be fine: [xxxxx@lo-login-02 ~]$ quota Disk quotas for user xxxxx (uid 198587): Filesystem blocks quota limit grace files quota limit grace lo-ne-home3:/home3 9005516 16777216 20971520 100000* 80000 100000 [xxxxx@lo-login-02 ~]$ df -i Filesystem Inodes IUsed IFree IUse% Mounted on lo-ne-home3:/home3/xxxxx 21251126 122640 21128486 1% /cluster/home/xxxxx [xxxxx@lo-login-02 ~]$ df -h Filesystem Size Used Avail Use% Mounted on lo-ne-home3:/home3/xxxxx 1.0T 8.5G 1016G 1% /cluster/home/xxxxx
You have reached the maximum number of files: not on the filesystem itself, but in your usage quota. Your own quota output shows it: 100000* files used against a hard limit of 100000, and the asterisk marks the exceeded limit. Delete some files (of any size) and new files will be able to be created again.
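When hunting for where the file count went, counting files per directory is more useful than du, which only tracks bytes. A sketch against a throwaway sample tree (the directory names are made up for the demo; on the real system you would loop over ~/*/ instead):

```shell
# Build a tiny sample tree standing in for a home directory
mkdir -p demo/src demo/cache
touch demo/src/a.py demo/cache/1 demo/cache/2 demo/cache/3

# Count regular files under each top-level subdirectory, biggest first
for d in demo/*/; do
  printf '%s\t%s\n' "$(find "$d" -type f | wc -l)" "$d"
done | sort -rn
```

Here demo/cache shows up first with 3 files; a real cache or build directory with tens of thousands of small files is the usual culprit for an exhausted file quota.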
Cannot mkdir: Disk quota exceeded, but iNodes and space are far from exceeding
1,486,330,371,000
I have a big problem when I try to boot my OpenBSD 5.7 VMware guest; I get this: I can still see all my data by running cat /root/mydata. There is nothing important to me on sd0k (/home). I've tried fsck etc., but the problem remains. What can I do to boot?
At that prompt, press Return to get a root shell. Then run # fsck sd0k This ought to repair the inconsistencies found. If you have nothing important on sd0k and want to reformat the partition, then, as root, make sure that the filesystem that sd0k contains is no longer mounted, then do # newfs sd0k ... taking great precaution to enter the correct device name and making sure you understand that the partition's contents will be lost forever. You should then recreate your home directory there and use that, rather than working with your OpenBSD system as root (this is a bad idea on any Unix system). You should also upgrade to a newer version of OpenBSD. The current stable release is 6.1 and 6.2 is just around the corner. Release 5.7 was released in mid 2015 and is no longer supported.
OpenBSD /dev/sd0k unexpected inconsistency - bad super block
1,486,330,371,000
I know of various ways in which to check when the last fsck occurred on a file system. e.g. $ sudo dumpe2fs -h /dev/sda1 | grep 'Mount count' -A3 dumpe2fs 1.42.12 (29-Aug-2014) Mount count: 74 Maximum mount count: -1 Last checked: Thu Dec 11 21:37:56 2014 Check interval: 0 (<none>) This updates for automatic, fstab-initiated fscks. However, it doesn't seem to take into account manual fscks. $ sudo fsck /dev/sda1 fsck from util-linux 2.25.2 e2fsck 1.42.12 (29-Aug-2014) <VOLUME_NAME>: clean, 1066411/183140352 files, 572576302/732557824 blocks $ sudo dumpe2fs -h /dev/sda1 | grep 'Mount count' -A3 dumpe2fs 1.42.12 (29-Aug-2014) Mount count: 74 Maximum mount count: -1 Last checked: Thu Dec 11 21:37:56 2014 Check interval: 0 (<none>) Is there a way to either update this value, or to find the real last time fsck was run? This is an ext4 volume.
When the partition is in a clean state, no actual check is run, which is why the date isn't updated. If you want to force a full check, the -f option does just that: sudo fsck -f /dev/sda1.
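You can watch this behaviour safely on a throwaway filesystem image instead of a real partition (no root needed; requires e2fsprogs, and the file name below is arbitrary):

```shell
# Create an 8 MiB ext4 filesystem inside an ordinary file
dd if=/dev/zero of=fs.img bs=1M count=8 2>/dev/null
mke2fs -q -F -t ext4 fs.img

# Without -f, a clean filesystem is just reported clean and skipped;
# with -f, a full check runs and the "Last checked" timestamp is updated
e2fsck -f -y fs.img >/dev/null

dumpe2fs -h fs.img 2>/dev/null | grep 'Last checked'
```

The same e2fsck/dumpe2fs pair applies to a real /dev/sda1, just with sudo and while the filesystem is unmounted.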
How can I tell when my file system was last fsck-ed at all?
1,390,366,002,000
In my laptop (running Linux) I have only one SSD, connected to the SATA3 port. Why do I have two sdx entries in the /dev directory? In particular I see /dev/sda and /dev/sdb, and /dev/sda is the SSD: # fdisk -l Disk /dev/sda: 128.0 GB, 128035676160 bytes, 250069680 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00034e4b Device Boot Start End Blocks Id System /dev/sda1 * 2048 125954047 62976000 7 HPFS/NTFS/exFAT /dev/sda2 125954048 190466047 32256000 83 Linux /dev/sda3 190466048 222210047 15872000 83 Linux /dev/sda4 222210048 250068991 13929472 7 HPFS/NTFS/exFAT $ cat /sys/block/sda/queue/rotational 0 The surprising thing is the following: $ cat /sys/block/sdb/queue/rotational 1 So it looks like /dev/sdb is considered to be a magnetic hard disk drive. Why is that? EDIT: # lshw -C disk *-disk description: SCSI Disk product: xD/SD/M.S. vendor: Generic- physical id: 0.0.0 bus info: scsi@8:0.0.0 logical name: /dev/sdb version: 1.00 serial: 3 capabilities: removable configuration: sectorsize=512 *-medium physical id: 0 logical name: /dev/sdb *-disk description: ATA Disk product: SAMSUNG SSD 830 physical id: 0.0.0 bus info: scsi@0:0.0.0 logical name: /dev/sda version: CXM0 serial: S0Z3NSAC905663 size: 119GiB (128GB) capabilities: partitioned partitioned:dos configuration: ansiversion=5 sectorsize=512 signature=00034e4b
With the update (lshw … output) there is the answer now: product: xD/SD/M.S. That's your laptop's card reader. Also note capabilities: removable. UPDATE: As for the mentioned /sys/block/sdb/queue/rotational value being 1, this parameter actually influences the I/O scheduling algorithm in Linux. Probably it should have been named something like 'minimize-seek' or similar, because that is what it is intended to do. I'm not sure why it has been set to 1 for your particular device; I'm not that well versed in flash memory architecture and technologies. But I can easily imagine an implementation for which accessing adjacent memory units is quicker than jumping here and there over the medium (roughly equivalent to seeking).
Why do I have two /dev/sdx entries with a single disk?
1,390,366,002,000
I was flashing a new operating system to a device, but after the process was complete, I couldn't boot into it. I then checked the disk and noticed something strange: different tools report different sizes for the disk, even though it should be approximately 120GB in size. So I decided to run: dd if=/dev/zero of=/dev/sda But the results were almost the same. With lsblk: $ lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS sda 8:0 1 115.2G 0 disk ├─sda1 8:1 1 243M 0 part └─sda2 8:2 1 2.2G 0 part With fdisk: $ fdisk -l /dev/sda Disk /dev/sda: 7.66 GiB, 8225689600 bytes, 16065800 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: dos Disk identifier: 0xaeaa381a With cfdisk: Disk: /dev/sda Size: 7.66 GiB, 8225689600 bytes, 16065800 sectors Label: dos, identifier: 0xaeaa381a Device Boot Start End Sectors Size Id Type >> Free space 2048 16065799 16063752 7.7G I didn't touch the hardware, so the disk should still be ~120G. What should I do to get the free space back?
By "flashing a new operating system to a device", did you mean writing a complete disk image, including partition tables? Note that your dd if=/dev/zero of=/dev/sda certainly should have overwritten the DOS-type partition table, but if you did not trigger a rescan of the disk afterwards, the system will still be using the information from the old partition table. As the cached partition table information also includes the size of the original disk the image was created from, it tricks the fdisk and cfdisk tools. On the other hand, lsblk looks at the actual size of the disk, ignoring the total size reported in the partition table. Try partprobe /dev/sda if available, or echo 1 > /sys/block/sda/device/rescan (both commands require root privileges). If you need to non-destructively update an existing partition table to e.g. recognize the full size of the disk after an imaging operation, the growpart tool might be the easiest way to do that.
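The sysfs rescan can also be scripted; here is a minimal Python sketch (the device name and the sysfs root are illustrative assumptions) that writes 1 to the device's rescan attribute, the equivalent of the echo command above:

```python
from pathlib import Path

def rescan_device(name: str, sys_block: Path = Path("/sys/block")) -> bool:
    """Ask the kernel to re-read a disk's size by writing '1' to
    /sys/block/<name>/device/rescan (requires root on a real system).
    Returns False when the driver does not expose the attribute."""
    rescan = sys_block / name / "device" / "rescan"
    if not rescan.exists():
        return False
    rescan.write_text("1")
    return True
```

After a successful rescan the kernel refreshes its cached view of the disk, and the size-reporting tools should agree again.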
Disk size is variable
1,390,366,002,000
Due to a shortage of free built-in SATA 3.0 ports (6 in total) on my motherboard (Gigabyte 970A-DS3 rev.3), I got an Adaptec RAID 5405 (3G SAS/SATA RAID) to move all "slow" SATA 1.0/2.0 devices to this card, without creating any RAID array. The Adaptec RAID 5405 has one SFF-8087 connector and allows connecting up to 4 devices using an SFF-8087-to-4×SATA cable. Now I have two devices connected to this controller using this type of cable: a DVD-RW (Plextor PX-891SA) and a SATA 2.0 HDD (Hitachi HDP725050GLA360). For some reason, the connected HDD is not visible as a block device, so I can't mount its existing partition either by the non-persistent /dev/sdXX names or by UUID (the device/partition is missing not only from /dev/disk/by-uuid but from the entire /dev/disk/by-* subtree). I'm running oldstable Debian Stretch 9.13. uname -a: Linux tekomspb 4.9.0-11-amd64 #1 SMP Debian 4.9.189-3+deb9u2 (2019-11-11) x86_64 GNU/Linux lspci | grep -i adaptec shows me: 06:00.0 RAID bus controller: Adaptec AAC-RAID (rev 09) First, I checked the output of lsscsi -g: [0:1:1:0] disk Hitachi HDP725050GLA360 GM4O - /dev/sg0 [0:3:0:0] cd/dvd PLEXTOR DVDR PX-891SA 1.06 /dev/sr0 /dev/sg1 [1:0:0:0] disk ATA PLEXTOR PX-128M5 1.05 /dev/sda /dev/sg2 [2:0:0:0] disk ATA Hitachi HDP72505 A50E /dev/sdb /dev/sg3 <more disks, attached to the MB SATA connectors> The first row's sixth column shows - (nothing), despite the fact that the sg device is present in the /dev tree. I did some further research and found that although the drive is detected by the HBA (both by the HBA BIOS at startup and from the shell using Adaptec's arcconf utility), is visible in /dev as /dev/sg0, and is visible to smartctl (using smartctl -d sat -a /dev/sg0), it is not present as a block device in /sys. On the other hand, the optical drive is properly detected as a block device in both /sys and /dev (as /dev/sr0 and /dev/sg1). Following is the output of tree -F -d -L 3 --noreport. 
It is clearly visible that the optical drive is detected as a block device, but the HDD is not. /sys/devices/pci0000:00/0000:00:15.0/0000:06:00.0/host0/ ├── power ├── scsi_host │   └── host0 │       ├── device -> ../../../host0 │       ├── power │       └── subsystem -> ../../../../../../../class/scsi_host ├── subsystem -> ../../../../../bus/scsi ├── target0:1:1 │   ├── 0:1:1:0 │   │   ├── bsg │   │   ├── generic -> scsi_generic/sg0 │   │   ├── power │   │   ├── scsi_device │   │   ├── scsi_generic │   │   └── subsystem -> ../../../../../../../bus/scsi │   ├── power │   └── subsystem -> ../../../../../../bus/scsi └── target0:3:0     ├── 0:3:0:0     │   ├── block     │   ├── bsg     │   ├── driver -> ../../../../../../../bus/scsi/drivers/sr     │   ├── generic -> scsi_generic/sg1     │   ├── power     │   ├── scsi_device     │   ├── scsi_generic     │   └── subsystem -> ../../../../../../../bus/scsi     ├── power     └── subsystem -> ../../../../../../bus/scsi Output from arcconf getconfig 1: ---------------------------------------------------------------------- Physical Device information ---------------------------------------------------------------------- Device #0 Device is a Hard drive State : Ready Supported : Yes Transfer Speed : SATA 3.0 Gb/s Reported Channel,Device(T:L) : 0,1(1:0) Reported Location : Connector 0, Device 1 Vendor : Hitachi Model : HDP725050GLA360 Firmware : GM4OA52A Serial number : GEAXXXXXXXXXXX Size : 476940 MB Write Cache : Enabled (write-back) FRU : None S.M.A.R.T. : No S.M.A.R.T. warnings : 0 Power State : Full rpm Supported Power States : Full rpm,Powered off,Reduced rpm SSD : No MaxCache Capable : No MaxCache Assigned : No NCQ status : Enabled Device #1 Device is a CD ROM Supported : Yes Transfer Speed : SATA 1.5 Gb/s Reported Channel,Device(T:L) : 2,0(0:0) Vendor : PLEXTOR Model : DVDR PX-891SA Firmware : 1.06 How can I fix this so that the HDD is presented as a block device and can thus be mounted?
It is not possible to expose disk drives directly as block devices through an Adaptec RAID controller. Almost all Adaptec controllers lack this feature - at least the 5405 and 5805, and more generally the whole 3 and 5 series (there is no information about the 6 series of RAID controllers). The controller's BIOS doesn't allow it - it doesn't support HBA functionality at all. Several folks have tried to do this, but were unsuccessful. The closest workaround is to create a JBOD volume consisting of a single disk. The only exceptions that support HBA mode are the Adaptec Series 7 and Adaptec Series 8 controllers (see manual). More explanation from Adaptec here. You can determine whether your controller supports this feature by looking at its BIOS menus. Only if the following (or a similar) option is present: Controller Mode can you turn your RAID controller into a simple HBA. If no such option exists, there is nothing you can do here.
SATA disk drive behind Adaptec RAID 5405 can't be detected as block device
1,390,366,002,000
When I run ras-mc-ctl --summary I get the following output: No Memory errors. No PCIe AER errors. No Extlog errors. No devlink errors. Disk errors summary: 0:0 has 15356 errors 0:2064 has 4669 errors 0:2816 has 594 errors No MCE errors. Now, I'm not particularly concerned about these errors, since presumably even my CD/DVD drive, which I haven't used, has them (I only have 3 SATA devices and it is one of them). But I am nonetheless curious: how does this number notation line up with my physical drives? If I run lsblk I see a similar syntax under the header MAJ:MIN (presumably Major:Minor), but the numbers there don't line up at all with the ones here. The numbers in lsblk have 8 as major for all my disks and 11 as major for my CD/DVD drive, which does not line up with the numbers given to me by ras-mc-ctl. How do I figure out which drives the numbers in ras-mc-ctl --summary correspond to, and what do they mean?
lsblk will give you MAJ:MIN numbers, while ras-mc-ctl packs both into a single number d. To go from lsblk to ras-mc-ctl, compute: d = (MAJ * 256) + MIN To go from ras-mc-ctl back to lsblk, compute (with integer division): MAJ = d / 256 MIN = d % 256 For your case: MAJ = 2064 / 256 = 8 MIN = 2064 % 256 = 16 so 0:2064 is device 8:16.
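The same arithmetic can be expressed as a tiny Python helper (an illustration of the decoding, not part of ras-mc-ctl itself):

```python
def split_dev(d: int):
    """Decode a composite number from ras-mc-ctl into lsblk-style (MAJ, MIN)."""
    return d // 256, d % 256

def join_dev(major: int, minor: int) -> int:
    """Encode lsblk-style MAJ:MIN back into ras-mc-ctl's single number."""
    return major * 256 + minor

# The three devices from the summary above decode to:
#   2064 -> (8, 16)  i.e. 8:16
#   2816 -> (11, 0)  i.e. 11:0 - major 11 is the CD/DVD drive, matching
#                    the major number the question saw in lsblk
#      0 -> (0, 0)
```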
Interpret disk errors output from ras-mc-ctl --summary