Why do the df command and gparted show different free space? In my case, I have: $ df -hT Filesystem Type Size Used Avail Use% Mounted on [...] /dev/sda4 ext4 184G 173G 1.6G 100% /home and the same results from Nautilus, while gparted shows: Size Used Unused 186.47 GiB 175.58 GiB (94%) 10.90 GiB (6%)
Previously answered here: GParted looks at actual inodes and disk-level information to determine how much is free, whereas df queries the filesystem itself [...]
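A quick way to see where most of the gap comes from, assuming ext4's default 5% reserved-block pool (adjustable with tune2fs -m): the space that df counts neither as Used nor as Avail is roughly that root reservation.

```shell
# Plug df's own numbers (Size, Used, Avail in GiB) into awk: the
# unaccounted space is close to ext4's default 5% root reservation.
echo "184 173 1.6" | awk '{
    gap = $1 - $2 - $3
    printf "unaccounted: %.1fG (%.1f%% of %sG)\n", gap, 100 * gap / $1, $1
}'
```

For the numbers in the question this prints `unaccounted: 9.4G (5.1% of 184G)`, which is the default reservation plus rounding; gparted, reading the block counts directly, reports that space as unused.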
free space inconsistency
S.M.A.R.T. has found an unrecoverable read error on one of my disks, but zpool status lists all disks as ONLINE (i.e. not DEGRADED). Do you know why that might be? I thought ZFS would know of any errors as soon as anyone... Do I need to run a scrub in order for it to recheck the status of all disks? Can I have S.M.A.R.T. automatically report to ZFS somehow?
Do you know why that might be? No, ZFS does not check all blocks all the time; it just makes sure that each written block can be accounted for (and restored, if redundancy is available) as soon as it is needed/accessed. Empty space is not checked at all (because you don't have valuable data there, so it would be a waste of time), and normal data is only checked when it is read (as write is append-only). As mmusante correctly said, you will only get error messages if the error is critical and cannot be recovered from automatically (otherwise, you just see a notice and error counts in zpool status).

Do I need to run a scrub in order for it to recheck the status of all disks? Yes. It may be easier to just scrub the pool regularly (via a cron job). Commonly recommended intervals are about once a month for enterprise-quality disks and once a week for consumer-level disks.

Can I have S.M.A.R.T. automatically report to ZFS somehow? You could start a manual scrub with a script from smartmontools. Most of the time, you only need to place a script in /etc/smartmontools/run.d/. Whenever smartd wants to send a report, it will execute smartd-runner, and the latter will run your script. You have several variables available to your script (again, see the smartd manpage). These come from a test run:

SMARTD_MAILER=/usr/share/smartmontools/smartd-runner
SMARTD_SUBJECT=SMART error (EmailTest) detected on host: XXXXX
SMARTD_ADDRESS=root
SMARTD_TFIRSTEPOCH=1267409738
SMARTD_FAILTYPE=EmailTest
SMARTD_TFIRST=Sun Feb 28 21:45:38 2010 VET
SMARTD_DEVICE=/dev/sda
SMARTD_DEVICETYPE=sat
SMARTD_DEVICESTRING=/dev/sda
SMARTD_FULLMESSAGE=This email was generated by the smartd daemon running on:
SMARTD_MESSAGE=TEST EMAIL from smartd for device: /dev/sda

Your script also has a temporary copy of the report available as "$1". It will be deleted after you finish, but the same content is written to /var/log/syslog.
You then just need to map from the device name to your pool (you can parse zpool status).
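That device-to-pool mapping can be sketched in a few lines of awk. The zpool status layout assumed here (a "pool:" line followed by indented device rows, with bare device names) matches the output shown above, but verify it against your own system before relying on it:

```shell
# Print the pool that contains a given device name by remembering the
# most recent "pool:" line while scanning `zpool status` output.
pool_for_device() {
    awk -v dev="$1" '
        $1 == "pool:" { pool = $2 }
        $1 == dev     { print pool; found = 1 }
        END           { exit !found }
    '
}

# Canned example; a real smartd hook would instead run:
#   zpool status | pool_for_device "$(basename "$SMARTD_DEVICE")"
pool_for_device sda <<'EOF'
  pool: tank
 state: ONLINE
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sda     ONLINE       0     0     0
EOF
```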
Why does ZFS not report disk as degraded?
I am wondering what would happen if I tried to dd a disk image into a partition. Let me explain: I have an SSD drive that contains two partitions: /dev/sda sda1 sda2 And I have a disk image, made from a vdi file (a VirtualBox virtual disk), that contains: /dev/sdb sdb1 sdb2 Now, what would happen if I launched: sudo dd if=raw.img of=/dev/sda2 Would I get: /dev/sda sda1 sda2 sda21 sda22 Or would I get: /dev/sda sda1 sda21 sda22 Or would it just not work?
It would not work, in the sense that you would get sda2 with garbage inside, but a small change can make it work: you need to find the offset of each partition in the image and dd each one into its own (larger) partition on the destination.
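A toy illustration of the offset arithmetic with plain files (for a real image you would take each partition's Start sector from `fdisk -l raw.img` and use `bs=512 skip=<start> count=<sectors>`):

```shell
# A 12-byte "disk image" holding three 4-byte "partitions"; carve out
# the middle one by skipping one block of the chosen block size.
img=$(mktemp); part=$(mktemp)
printf 'AAAABBBBCCCC' > "$img"
dd if="$img" of="$part" bs=4 skip=1 count=1 2>/dev/null
cat "$part"      # prints the second "partition": BBBB
rm -f "$img" "$part"
```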
What will happen if I dd an image of a disk into a partition
I tried googling it and only found this https://ubuntuforums.org/showthread.php?t=2234886 and this https://bugs.launchpad.net/ubuntu/+source/gnome-disk-utility/+bug/1165437, but neither is very clear. I thought the star icon represented a boot drive: on my first drive, the first partition is 1.1 GB ext4 and bootable, and the second partition is an LVM2 PV. But when I added my secondary internal drive, backed up the data, and converted it from NTFS to ext4, all three partitions on the second drive got the star icon. It seems the star icon only shows if the partition is mounted at startup: when I remove the entries from /etc/fstab, the star icon is gone. So what is the star icon really? If it is for the boot drive, my secondary drive is just for data; I will not boot from it. And how can I remove the star icon without removing the entry from /etc/fstab?
I was hoping that reading the manual would be enough, but the manual is very limited, as is the other documentation, so the source code had to come to the rescue. Grepping through the code for the "icon" keyword showed a few occurrences which sound like these icons: src/disks/gduvolumegrid.c: g_ptr_array_add (icons_to_render, (gpointer) "user-bookmarks-symbolic"); Checking the icon confirms it is the one we are looking for. The code shows what triggers this icon to get rendered: if (element->show_configured) g_ptr_array_add (icons_to_render, (gpointer) "user-bookmarks-symbolic"); The show_configured flag is assigned when the device is "configured", whatever that means: element->show_configured = is_block_configured (block); We can probably simplify that to "gnome-disks knows about this drive and about its configuration".
What is the star icon on a partition in the GNOME disk utility?
Condition: reliably find the device name where the disk label (MasiWeek) and disk size (2 TB) are known. Motivation: trying to determine what Ubuntu's GUI Mount button does. Characteristics of the system: the disk label is the name of the disk given by the user. It is listed under /media/masi/ if mounted correctly. The command lsblk -no name,label,partlabel gives sda ├─sda1 ├─sda2 └─sda3 sdb └─sdb1 MasiWeek I know the disk label is MasiWeek and its size is 2 TB, visible in the command output as 1.8T. I want to reliably find such a disk so that I can do the following, where I need the info for the variable $label # https://askubuntu.com/a/593375/25388 partition=$(basename $(readlink $label)) sudo mkdir /media/$USER/$label sudo mount /dev/$partition /media/$USER/$label System: Linux Ubuntu 16.04 64 bit Related: What is the Equivalent Command to Ubuntu's GUI “Mount”?
Use mount's -L option or specify the mount device with LABEL=name, e.g. mount LABEL=MasiWeek /media/masi/MasiWeek or mount -L MasiWeek /media/masi/MasiWeek mount also has a -U option and understands UUID=uuid if you prefer to use the block device's UUID. The easiest way to get a list of all block devices, along with the LABEL and/or UUID details (if any), is to use blkid, e.g. # blkid /dev/sda1: LABEL="kaliboot" UUID="c0182339-da69-4f30-b131-c2fdb778f6b0" TYPE="ext3" PARTUUID="6fb80985-01" /dev/sda2: UUID="4c367cee-8bed-41d5-b466-38c7f3a03330" TYPE="swap" PARTUUID="6fb80985-02" /dev/sda3: LABEL="kaliroot" UUID="6bb6d228-0581-49ae-9d49-dd148c273ecc" TYPE="xfs" PARTUUID="6fb80985-03" Note that the swap partition has a UUID, but doesn't have a label. That's because I didn't bother to use the -L option when I created it with mkswap. Note also that this can be slow and produce lots of output (one line per block device) if you have lots of LVM LVs or ZFS ZVOLs (as I do on my main machine, which is why I used the output from another machine) or similar.
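If you would rather resolve the label to a kernel device name yourself (for example, to fill the $partition variable in the question), udev's /dev/disk/by-label symlinks make it a one-liner. The optional directory argument below exists only so the function can be exercised against a fake tree:

```shell
# Resolve a filesystem label to its kernel device name (e.g. sdb1)
# via the /dev/disk/by-label symlinks maintained by udev.
label_to_dev() {
    basename "$(readlink -f "${2:-/dev/disk/by-label}/$1")"
}

# Usage sketch: partition=$(label_to_dev MasiWeek)
```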
Find kernel name for a partition when only the label is known
Suse Tumbleweed attempted a mega update of the system (~6000 packages at once) and filled the root filesystem, which, according to the installer recommendations, was 35 Gb. I attempted ① to delete the cache of RPM files, but zypper/rpm notified me that it needed to create some temporary files on the root partition and failed, ② to uninstall a largish package that has no reverse dependencies (Zoom), but rpm notified me that it needed to create some temporary files on the root partition and failed, ③ I used btrfs filesystem resize +5G / but I was told ERROR: unable to resize '/': no enough free space ④ so I shrank the /home partition by 20Gb and tried again; same problem. This is from df localhost:~ # df / Filesystem 1K-blocks Used Available Use% Mounted on /dev/mapper/system-root 36700160 32168820 0 100% / … from the above it looks like there are 4531340 1K blocks free (≈ 4.5Gb) and while I understand that a filesystem needs some elbow space … I am really tempted to copy my user files to a USB key and install Debian, because apt duly informed me of problems with disk space every single time I tried to shoot myself in the foot, but I'd rather wait for an informed suggestion on my next course of action. E.g., that bunch of /.snapshots/xyz directories looks like a promising target for a rm -fr … but — I don't know, I really need some guidance! PS I have learnt something about snapper in the last hour, at least as much as I need to leave /.snapshots alone until an expert unveils to me a different perspective. This is the output of a more appropriate command, localhost:/ # btrfs filesystem df / Data, single: total=33.21GiB, used=28.92GiB System, single: total=32.00MiB, used=16.00KiB Metadata, single: total=1.76GiB, used=1.69GiB GlobalReserve, single: total=73.45MiB, used=0.00B again, there are more than 4 Gb of not used (available?) space and everything fails due to full disk.
I'd like to mention that I can boot Windows; if some Windows tool supports manipulating Btrfs file systems, that could help, couldn't it?
I have openSUSE Tumbleweed on a small SSD - 80GB (will be replacing it in the near future), so I totally understand your space issue. The best way to update openSUSE Tumbleweed is: zypper ref && zypper dup --no-allow-vendor-change You need to check your snapshots with snapper. snapper is really useful when things go south - it makes snapshots of your system. I have to prune the snapshots regularly as I don't have enough space. Here is how you do it. To list current snapshots: sudo snapper ls which gives you a table with all the snapshots. You can't delete the first one (the root one, type single). The subsequent ones you can delete by number. To delete snapshots 2 to 11, do a: sudo snapper rm 2-11 To disable RPM caching you can configure zypper: sudo zypper modifyrepo -K --all The modifyrepo command provides further options to tune the behavior per repository. -K, --no-keep-packages Disable RPM files caching. --all applies this to all repositories.
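The pruning can be scripted too. This sketch assumes the pipe-separated table layout that snapper ls prints (two header lines, snapshot number in the second column) and contiguous numbering; check your own output first. It keeps snapshots 0 and 1 and emits a single range delete:

```shell
# Read a `snapper ls` table on stdin and print the `snapper rm` command
# covering every snapshot number above 1.
prune_cmd() {
    awk -F'|' 'NR > 2 { n = $2 + 0; if (n > 1) { if (!lo) lo = n; hi = n } }
               END    { if (hi) printf "snapper rm %d-%d\n", lo, hi }'
}

# Usage sketch: snapper ls | prune_cmd      (prints e.g. "snapper rm 2-11")
```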
LARGE zypper dup → Root partition full → (Heaven knows) I'm miserable now
I know this type of question has been asked frequently, but I cannot seem to figure out what is happening. tl;dr: I cloned an existing disk onto a larger disk, but df is only showing this at the size of the original disk, even though the partition table looks OK. I have a 10TB backup drive on my Debian system at /dev/sda, and added a 12TB drive to serve as an additional backup at /dev/sdc. Eventually I will remove the first backup, to offsite storage. I used parted to create a new partition, using up the entire free space, and then mkfs.ext4 to create a filesystem on it. I then mounted this filesystem, and df -h showed me the expected result: The original disk was 9.1T, the new one was 11T. I copied the original onto the new drive with pv < /dev/sda1 > /dev/sdc1. Since this was a clone, I then created a new UUID for this partition with uuidgen, and used this to mount the disk in /etc/fstab. The new drive has the files I expect. However, df now shows the two drives as being identical: # df -h Filesystem Size Used Avail Use% Mounted on [...] /dev/sda1 9.1T 6.5T 2.6T 72% /mnt/Backup1 /dev/sdc1 9.1T 6.5T 2.6T 72% /mnt/Backup2 This is the case when the disk is first mounted; it's not like any existing operation is holding a file open. 
The output of fdisk shows that the partition is the expected size: # fdisk -l /dev/sdc Disk /dev/sdc: 10.9 TiB, 12000105070592 bytes, 23437705216 sectors Disk model: Elements 25A3 Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: gpt Disk identifier: 96102C84-3B01-4361-A9C2-B44455AEC02E Device Start End Sectors Size Type /dev/sdc1 2048 23437703167 23437701120 10.9T Linux filesystem as does lsblk: # lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 9.1T 0 disk └─sda1 8:1 0 9.1T 0 part /mnt/Backup1 sdc 8:32 0 10.9T 0 disk └─sdc1 8:33 0 10.9T 0 part Running parted also seems to confirm that the partition is the correct size: # parted /dev/sdc GNU Parted 3.2 Using /dev/sdc Welcome to GNU Parted! Type 'help' to view a list of commands. (parted) print Model: WD Elements 25A3 (scsi) Disk /dev/sdc: 12.0TB Sector size (logical/physical): 512B/4096B Partition Table: gpt Disk Flags: Number Start End Size File system Name Flags 1 1049kB 12.0TB 12.0TB ext4 primary I then tried to let fsck have a go at it, and got this: # fsck.ext4 /dev/sdc e2fsck 1.44.5 (15-Dec-2018) ext2fs_open2: Bad magic number in super-block fsck.ext4: Superblock invalid, trying backup blocks... fsck.ext4: Bad magic number in super-block while trying to open /dev/sdc The superblock could not be read or does not describe a valid ext2/ext3/ext4 filesystem. If the device is valid and it really contains an ext2/ext3/ext4 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock: e2fsck -b 8193 <device> or e2fsck -b 32768 <device> Found a gpt partition table in /dev/sdc I tried the e2fsck options, but got the same result. 
I have searched for discussions of this fsck issue, without finding anything useful, and I have looked at some of the many discussions of discrepancies between df output and other indications of disk size, also without much luck: usually the reason in this circumstance is that the new disk had an exact copy of the original partition. But my partition does seem to be the correct size. I'd be grateful for any suggestions here. My files do seem to be on the new disk, so wiping it out and starting over again will take up many, many hours of recopying.... Edit: per request, output of gdisk: # gdisk -l /dev/sdc GPT fdisk (gdisk) version 1.0.3 Partition table scan: MBR: protective BSD: not present APM: not present GPT: present Found valid GPT with protective MBR; using GPT. Disk /dev/sdc: 23437705216 sectors, 10.9 TiB Model: Elements 25A3 Sector size (logical/physical): 512/4096 bytes Disk identifier (GUID): 96102C84-3B01-4361-A9C2-B44455AEC02E Partition table holds up to 128 entries Main partition table begins at sector 2 and ends at sector 33 First usable sector is 34, last usable sector is 23437705182 Partitions will be aligned on 2048-sector boundaries Total free space is 4029 sectors (2.0 MiB) Number Start (sector) End (sector) Size Code Name 1 2048 23437703167 10.9 TiB 8300 primary
It seems as though you have a misconception as to the relationship between partitions and filesystems. Your partition is actually the correct size, but your filesystem is not. When you ran pv < /dev/sda1 > /dev/sdc1, you copied the filesystem byte by byte from sda1 to sdc1. The filesystem was created on sda1, so mkfs.ext4 made the filesystem take up the exact size of sda1. However, sdc1 is larger than sda1. So the result is that you have a 10TB filesystem inside of a 12TB partition. The solution is to use resize2fs to resize the filesystem such that it takes up the entire partition. You could pass an exact desired filesystem size to resize2fs, but this is unnecessary if you simply want it to be resized to the size of the partition. With /dev/sdc1 unmounted, simply run resize2fs /dev/sdc1 as root, and it should resize your filesystem to be 12TB. Note: You should use this type of filesystem copying sparingly; both the original and the copy will have the same UUID. If both partitions are in the system at once, the identifiers are no longer unique. Thus, either use this method when you're going to wipe the source drive (i.e. you're just moving the partition to a new disk, not copying it), or if you plan to manually change the UUID of the copied partition.
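The size mismatch itself can be modeled with plain files, no disks required: copying a small image into a larger container leaves the extra space untouched, just as the byte-for-byte copy left the ext4 size field describing only the old, smaller filesystem.

```shell
# A 1 KiB "filesystem" copied into a 2 KiB "partition": the payload is
# still only 1 KiB; the remaining 1 KiB of the partition stays zeroed.
fs=$(mktemp); part=$(mktemp)
head -c 1024 /dev/zero | tr '\0' 'x' > "$fs"       # the old, smaller filesystem
head -c 2048 /dev/zero > "$part"                   # the new, larger partition
dd if="$fs" of="$part" conv=notrunc 2>/dev/null    # same effect as pv < old > new
tr -d '\0' < "$part" | wc -c                       # prints 1024, not 2048
rm -f "$fs" "$part"
```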
Incorrect partition size on new disk
I have a Raspberry Pi (running Raspbian) that is booting from a microSD card. Since it's acting as a home server, naturally I want to monitor the microSD card for errors. Unfortunately though, microSD cards don't support SMART like other disks I have, so I am unsure how to monitor the disk for errors. How can I monitor / check disks that do not support SMART for errors when they are still in use / have partitions mounted?
You can replace smartctl -t long selftests with badblocks (no parameters). It performs a simple read-only test. You can run it while filesystems are mounted. (Do NOT use the so-called non-destructive write test.) # badblocks -v /dev/loop0 Checking blocks 0 to 1048575 Checking for bad blocks (read-only test): done Pass completed, 0 bad blocks found. (0/0/0 errors) Note you should only use this if you don't already suspect there are bad sectors; if you already know it's going bad, use ddrescue instead (badblocks throws away all data it reads; ddrescue makes a copy that may come in useful later). Other than that, you can do things that SMART doesn't do: use a checksumming filesystem, or a dm-integrity layer, or backups & compare, to actually verify contents. Lacking those, just run regular filesystem checks. MicroSD cards also have failure modes that are hard to detect. Some cards may eventually discard writes and keep returning old data on reads. Even simple checksums might not be enough here - if the card happens to return both older data and older checksums, it might still match even if it's the wrong data... Then there are fake-capacity cards that just lose data once you've written too much. Neither returns any read or write errors, and it can't be detected with badblocks, not even in its destructive write mode (since the patterns it writes are repetitive). For this you need a test that uses non-repetitive patterns, e.g. by putting an encryption layer on it (a badblocks write on LUKS detects fake-capacity cards when a badblocks write on the raw device does not).
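The encryption-layer trick aside, the core idea of a non-repetitive check is simple enough to sketch with a checksum over random data. Shown here on a scratch file; on a real card you would write to a file on the mounted filesystem and re-read it after remounting the card or dropping caches, so the data actually comes off the flash:

```shell
# Write 1 MiB of non-repeating data, remember its hash, read it back
# and compare: a card that drops or aliases writes will not match.
scratch=$(mktemp)
head -c 1048576 /dev/urandom > "$scratch"
want=$(sha256sum < "$scratch" | cut -d' ' -f1)
# ... remount the card / drop caches here on real hardware ...
got=$(sha256sum < "$scratch" | cut -d' ' -f1)
[ "$want" = "$got" ] && echo "read-back OK"
rm -f "$scratch"
```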
How to test a disk that does not support SMART for errors?
I have a Samsung SSD 970 EVO Plus NVMe M.2 500GB mounted on the motherboard and it works fine, but when I open, for example, gnome-disks or parted to get more info about the disk, the system doesn't recognise the model of the disk. SMART data & self-tests are also disabled. This only happens with the M.2 disk; a normal SSD works fine. Is there any kernel option or system configuration that can cause this? I have the linux-4.19.44-gentoo kernel.
I just noticed that in the newer version of gnome-disks the model is already displayed correctly.
System doesn't recognise model of my SSD disk
There are loads of ways a system might uniquely identify a disk or partition: GUID/UUID, how it's connected ('usb-...'), and the traditional directory structure ('/dev/sda'). zpool seems to choose randomly between them. How can I get zpool status to list the array using the directory structure, as it is the only thing other tools know about? Further information: the history reveals how the pool was created: zpool history XX History for 'XX': YYYY-MM-DD.HH:MM:SS zpool create -f XX -m /XX raidz sda sdb sdc sdd sde However, status now reads: zpool status XX pool: XX ... STATE READ WRITE CKSUM XX 0 0 0 raidz1-0 0 0 0 ata-WDC_WD10EFRX-68PJCN0_WD-XXXXXXXXXXXX ONLINE 0 0 0 ... The names used at build time are not the same as those now listed. The array has been moved around a lot since it was created, however. Update and conclusion: it looks like most utilities can use the long name ZFS uses in place of the short one, via /dev/disk/by-id/*, e.g. smartctl --all /dev/disk/by-id/ata-WDC_... While more cumbersome, I agree it is more precise.
zpool uses the device names you have given at pool creation time and when modifying devices (for example attaching disks or adding vdevs to the pool). Therefore, you can either destroy/recreate the pool with your chosen names, or detach/attach all devices one after another (this is only possible with pool layouts that have enough redundancy, of course). This is how it works on Solaris; there may be specific caveats on other systems like Linux or BSD.
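For the reverse direction mentioned in the update (mapping the long by-id names back to short kernel names), a small readlink loop suffices. The directory is a parameter purely so the sketch is easy to exercise against a fake tree:

```shell
# Print "long-id -> sdX" for every entry in a by-id directory
# (defaults to udev's /dev/disk/by-id).
map_ids() {
    for link in "${1:-/dev/disk/by-id}"/*; do
        printf '%s -> %s\n' "$(basename "$link")" \
                            "$(basename "$(readlink -f "$link")")"
    done
}
```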
How to make zpool refer to a disk using a readable string?
I have a Linux system with kernels 3.10.17 and 4.8.4 installed, but only the older kernel can boot. Trying the newer one, "Gave up waiting for root device" occurs along with a bunch of "modprobe: Can't load module". Since the root device in fact contains the modules, I am inclined to think the former causes the latter. Both GRUB Legacy menu.lst entries are identical, and blkid and /dev/disk-by-uuid/ confirm that they contain the correct UUID. Adding a rootdelay does not help (and is anyway not needed for the older kernel to boot). The other common problem helpfully mentioned in the error text is missing modules. The location of usb-common.ko did change between these two kernels; but modinfo agrees that the usb-common module at the path given is for kernel 4.8.4. Also, if the disk is missing, how could the module format have even been assessed? What's stopping the system from booting kernel 4.8.4? Booting the kernel. Loading, please wait... modprobe: Can't load module usb_common (kernel/drivers/usb/common/usb-common.ko): invalid module format Gave up waiting for root device. Common problems: - Boot args (cat /proc/cmdline) - Check rootdelay= (did the system wait long enough?) - Check root= (did the system wait for the right device?) - Missing modules (cat /proc/modules; ls /dev) ALERT! /dev/disk/by-uuid/f0b6aabc-433a-46b6-9e03-1aba89384d48 does not exist. Dropping to a shell! modprobe: Can't load module usb_common (kernel/drivers/usb/common/usb-common.ko): invalid module format modprobe: module ehci-orion not found in modules.dep modprobe: Can't load module usb_common (kernel/drivers/usb/common/usb-common.ko): invalid module format ...
I upgraded GRUB, compiled the kernel again, rebuilt the initramfs, and it works. I don't know what the problem was, but a new kernel worked around it.
"Gave up waiting for root device" with one kernel but not another
I have a server which has gone rogue (not really). I ran nmon and saw that its disk utilization is bad: it's 100% busy writing! Can someone tell me what is keeping my disk busy?
iotop is your friend (assuming your server runs Linux).
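If iotop isn't installed, the counters it reads live in /proc/<pid>/io (reading other users' processes usually requires root); a sketch of pulling the per-process write totals, heaviest writers first:

```shell
# Print write_bytes per process from /proc/<pid>/io, sorted descending.
for pid in /proc/[0-9]*; do
    [ -r "$pid/io" ] || continue
    awk -v p="${pid#/proc/}" '/^write_bytes/ { print $2, "bytes, PID", p }' "$pid/io"
done | sort -rn | head
```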
How to know what exactly is being written or which process is keeping my storage disk busy?
To do some tests on an SSD disk, I need to monitor operations such as the number of reads and writes, access timestamps, which addresses are accessed, write policy and so on. I know of these commands: $ vmstat $ blktrace Which other commands are available? I want a set of commands of this type, so I can compare them and use the best ones.
In case the SSD 'test' actually means that you have to test and report the performance of the disk, I'd go for collectd. It's a system performance statistics collection daemon, highly configurable, that has a disk plugin. There are multiple output options like CSV or RRDtool to make nice graphs if needed.
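For completeness, the raw counters that vmstat and similar tools consume live in /proc/diskstats; fields 4 and 8 of each line are completed reads and writes (per the kernel's iostats documentation). A minimal reader:

```shell
# Show completed read/write counts for whole disks (sdX / nvmeXnY),
# straight from /proc/diskstats.
awk '$3 ~ /^(sd[a-z]+|nvme[0-9]+n[0-9]+)$/ {
    printf "%-12s reads=%s writes=%s\n", $3, $4, $8
}' /proc/diskstats
```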
which unix commands for monitoring disk operating are available?
I have a 1KB partition on my drive, sda4. Here is the output of lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 698.7G 0 disk ├─sda1 8:1 0 500M 0 part /boot ├─sda2 8:2 0 5.8G 0 part [SWAP] ├─sda3 8:3 0 50G 0 part / ├─sda4 8:4 0 1K 0 part └─sda5 8:5 0 642.4G 0 part /home sr0 11:0 1 1024M 0 rom Is there any reason for this? Can it be gotten rid of? Is it a potential problem?
Mi casa, su casa On my Ubuntu 14.04 system I have the exact same situation. $ lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 465.8G 0 disk ├─sda1 8:1 0 462G 0 part / ├─sda2 8:2 0 1K 0 part └─sda5 8:5 0 3.8G 0 part [SWAP] sr0 11:0 1 1024M 0 rom Assuming the drive was partitioned using MBR, you can use fdisk to interrogate the drive further. $ sudo fdisk -l Disk /dev/sda: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000df6c7 Device Boot Start End Blocks Id System /dev/sda1 * 2048 968912895 484455424 83 Linux /dev/sda2 968914942 976771071 3928065 5 Extended /dev/sda5 968914944 976771071 3928064 82 Linux swap / Solaris So the 1K partition is an extended partition, and in this scenario, no, you cannot delete it. Extended partitions In an MBR-partitioned HDD, an extended partition is a special partition which can contain logical partitions. In my case, /dev/sda5 is a logical partition that's contained within the extended partition, /dev/sda2. MBR has two types of partitions: primary and extended. With MBR-style partitioning, you're only allowed 4 primaries. By utilizing extended partitions, you can increase the number of partitions allowed above that limit. Why? I have no idea why Ubuntu does it this way. As far as I can remember, I went with the default options when I set that system up, so it's just how that particular distro opted to do it.
In Fedora, they do things with an LVM - Logical Volume Manager, for example: $ lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 465.8G 0 disk ├─sda1 8:1 0 500M 0 part /boot └─sda2 8:2 0 465.3G 0 part ├─fedora_greeneggs-swap 253:0 0 7.7G 0 lvm [SWAP] ├─fedora_greeneggs-root 253:1 0 50G 0 lvm / └─fedora_greeneggs-home 253:2 0 407.6G 0 lvm /home sr0 11:0 1 233.3M 0 rom Here Fedora defaults to setting up 2 partitions. 1 for /boot, and another for everything else. Within that single partition, logical volumes using LVM are used for the various partitions, /, swap, and /home. $ sudo fdisk -l /dev/sda Disk /dev/sda: 500.1 GB, 500107862016 bytes, 976773168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk label type: dos Disk identifier: 0x0000ccbe Device Boot Start End Blocks Id System /dev/sda1 * 2048 1026047 512000 83 Linux /dev/sda2 1026048 976773119 487873536 8e Linux LVM References MBR - Master Boot Record - Wikipedia LVM - Logical Volume Management - Wikipedia
1kb partition: is it a problem, can it be removed?
I am trying to write a script to programmatically expand a disk (with LVM enabled). The Ubuntu Server 22.04 image will be used to create automatically provisioned VMs on a host. I know of the sudo lvextend -L +[size] [partition] command, but I want to automatically query the free space on /dev/sda and then pass it to lvextend. In all scenarios, there will only be 1 disk with variable free space, and I want to automatically expand the Linux partition to take up all of it. My df -h looks like this: runner@hya-worker-temp:~$ df -h Filesystem Size Used Avail Use% Mounted on tmpfs 91M 928K 90M 2% /run /dev/mapper/ubuntu--vg-ubuntu--lv 7.6G 3.9G 3.3G 55% / tmpfs 453M 0 453M 0% /dev/shm tmpfs 5.0M 0 5.0M 0% /run/lock /dev/sda2 1.7G 126M 1.5G 8% /boot /dev/sda1 537M 5.3M 532M 1% /boot/efi tmpfs 91M 4.0K 91M 1% /run/user/1001 How can I query the free space and pass it on to lvextend from a .sh script, or is there an even simpler way to expand the storage?
I created the following script that does exactly what I need:

growpart /dev/sda 3        # grow partition 3 to fill the remaining disk space
pvresize /dev/sda3         # tell LVM the physical volume has grown
lvresize -l+100%FREE --resizefs /dev/mapper/ubuntu--vg-ubuntu--lv   # extend the LV and resize its filesystem
Writing a script to automatically expand disk using all free space with LVM for server workloads
I have hundreds of disks that need to be plugged in to several Ubuntu desktops. Currently, the disks are not automatically mounted under /media/user/ (but can be found under /dev/sd*). However, in the GUI I can use the file explorer's Other Locations to show all the plugged-in disks. If I click one, it is mounted and can be found at /media/user/Disk-UUID. The problem is, there are many disks that need to be clicked, and everything reverts to the original state after a reboot. So, how can I write a script to mimic the behavior of clicking on the disks, to automatically mount all disks that have been plugged in? When I look up the method online, it seems that most people are talking about editing /etc/fstab. However, I do not want to do it this way, because I treat these disks as temporarily plugged in, do not want to name them, and do not want to make permanent changes to the system. Besides, after the disks are mounted, I see no entry in /etc/fstab, so Ubuntu itself is doing it by other means. How can I achieve the same?
Based on the suggestion of @fra-san, I found that for a disk such as /dev/sdn, udisksctl mount --block-device /dev/sdn is a simple way to achieve the goal. However, by default, that will require authentication. To avoid this, on Ubuntu 20.04 LTS, one needs to edit file /usr/share/polkit-1/actions/org.freedesktop.UDisks2.policy, and change the entries under org.freedesktop.udisks2.filesystem-mount-system (notice that there is another similar entry) to yes: <defaults> <allow_any>yes</allow_any> <allow_inactive>yes</allow_inactive> <allow_active>yes</allow_active> </defaults> Then udisksctl mount --block-device /dev/sdn will no longer require authentication and immediately mount the disk to /media/user/Disk-UUID.
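With the polkit change in place, the clicking itself reduces to a loop over the block devices. A sketch; the device list is passed as arguments so the loop can be dry-run or tested without real disks:

```shell
# Mount every device given on the command line the same way the file
# manager would, i.e. via udisks2 (ends up under /media/$USER/...).
mount_all() {
    for dev in "$@"; do
        udisksctl mount --block-device "$dev"
    done
}

# Usage sketch: mount_all /dev/sd[b-z]*
```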
How can I automatically mount multiple disks on Ubuntu 20.04 LTS without editing fstab?
I have the following df -h output: Filesystem Size Used Avail Use% Mounted on dev 7,8G 0 7,8G 0% /dev run 7,8G 1,8M 7,8G 1% /run /dev/nvme0n1p5 93G 40G 49G 45% / tmpfs 7,8G 157M 7,7G 2% /dev/shm tmpfs 7,8G 48M 7,8G 1% /tmp /dev/nvme0n1p4 204G 173G 31G 86% /run/media/test/drive /dev/nvme0n1p1 96M 33M 64M 35% /boot/efi tmpfs 1,6G 68K 1,6G 1% /run/user/1000 I want to print only the root partition (/dev/nvme0n1p5) with headers - i.e. Size, Used Space, Available Space, etc.
You can ask df to only show the free space on a given device: $ df -h /dev/nvme0n1p5 Filesystem Size Used Avail Use% Mounted on /dev/nvme0n1p5 93G 40G 49G 45% /
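GNU df can additionally trim the columns, not just the rows, via --output (a GNU coreutils extension):

```shell
# Header plus exactly one data row, with only the requested columns.
df -h --output=size,used,avail,pcent /
```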
Print disk space with df, with headers, but only one particular row
We know there are many files that are pseudo files, i.e. not real files, e.g.: /sys/xxx /proc/xxx /dev/xxxx In my understanding, open() calls x86 assembly code, and that code raises an interrupt to access the disk. The problem is: if open() eventually accesses the disk, how can a pseudo file still be accessed via open()?
As Fox noted in their answer, the kernel handles the open() syscall, while filesystems implement file operations in their own way. Filesystems, in other words, provide their own implementations of the file operations, and that's what the kernel ends up calling. Consider, for instance, ext4's call to open a directory, the file operations in procfs (which notably has no .open mentioned anywhere), and pipefs, which handles named and unnamed pipes. The general idea is that the open() syscall is not necessarily implemented in assembly, nor is it guaranteed to be specific to a particular architecture. And to quote an answer by the user Warren Young, who noted this well before this question appeared: there is no single file where mkdir() exists. Linux supports many different file systems and each one has its own implementation of the "mkdir" operation. The abstraction layer that lets the kernel hide all that behind a single system call is called the VFS. So, you probably want to start digging in fs/namei.c, with vfs_mkdir(). The actual implementations of the low-level file system modifying code are elsewhere. For instance, the ext4 implementation is called ext4_mkdir(), defined in fs/ext4/namei.c. As for why open() works this way, this is also due to the Unix design philosophy that everything is a file. If you're using an API, you want to deal with a consistent interface and not reinvent the wheel for every filesystem in existence (unless you are a kernel developer, in which case you have our gratitude and respect).
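The on-the-fly nature of such files is easy to observe from the shell: procfs reports a size of zero for files whose content is generated only at read time.

```shell
# stat sees no stored bytes, yet reading the file yields data that the
# kernel's procfs read handler produces on demand.
stat -c 'apparent size: %s' /proc/version
wc -c < /proc/version     # non-zero: content generated when read
```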
If some files are pseudo, why can the open() function still access them?
1,596,405,443,000
When I do sync as a regular user, does this flush all the buffers belonging to other users including root or just my own? man doesn't provide this info. I'm asking about Debian 9 in particular, but more general answers on Linux and Unix are welcome.
The sync command uses the sync system call. The manual of the sync system call says: sync() causes all pending modifications to filesystem metadata and cached file data to be written to the underlying filesystems. So sync will flush all the buffers. The term "belonging to users" doesn't apply to the buffers: buffers belong to files and to filesystem metadata, not to users. It is possible for multiple users to modify the same file, and it makes no sense for the filesystem and buffer subsystem to track the changes per user.
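Modern coreutils (8.24 and later, an assumption about your system) also lets sync target a single file or its containing filesystem, which is handy when you don't want to flush everything:

```shell
echo data > /tmp/syncdemo
sync /tmp/syncdemo     # flush just this file (fsync)
sync -f /tmp/syncdemo  # flush the whole filesystem containing it (syncfs)
sync                   # flush everything, regardless of which user wrote it
rm /tmp/syncdemo
```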
What does the sync command sync in fact?
1,596,405,443,000
The question: How do I create and mount a fake block device (using a large file/disk image) that passes as a legitimate unformatted disk? Backstory: I am trying to set up rook with ceph (a distributed storage system) in my hobby kubernetes cluster. Ceph requires an unformatted block device that it will partition and use for storage as it sees fit. I don't have any spare disks I can use, so I thought: why don't I just create a loopback device and use that? Since my host OS disk has plenty of free space, I should be able to create a large file on there and mount that as a loopback device. There are two problems with this (as I understand it): 1. Loopback devices have to be formatted with some sort of filesystem in order to be mounted, which will not work with Ceph, since Ceph requires an unformatted block device. 2. Loopback devices do not seem to count as block devices. Ceph docs use lsblk -f to test if a device is eligible for Ceph. The device has to show up in the output AND not have any filesystem formatted on it.
losetup will do this for you. If you have an unused loop device /dev/loop0:

# Make the file
head -c 10240 /dev/zero > /tmp/zeroes

# Use it as a block device
sudo losetup /dev/loop0 /tmp/zeroes

# Remove the device
sudo losetup -d /dev/loop0
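If you don't want to pre-allocate the whole backing file, a sparse file works too (a sketch; the losetup step itself still needs root, and -f picks the first free loop device):

```shell
# A sparse file has a large apparent size but allocates almost nothing
truncate -s 10G /tmp/disk.img
stat -c %s /tmp/disk.img    # apparent size: 10737418240 bytes
du -k /tmp/disk.img         # actual allocation: ~0 KiB
# Attach it (needs root); --show prints the loop device chosen:
# sudo losetup -f --show /tmp/disk.img
rm /tmp/disk.img
```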
How do I create a raw (no filesystem) loopback device that passes as a legitimate blockdevice?
1,596,405,443,000
Is it true that a RAID 1 can only be as big as the smallest disk? If it is true, then ten 1GB disks in RAID 1 should amount to only 1GB of total available space. Is it true that RAID 1 capacity is limited to 50% of the available drives? If it is true, then ten 1GB disks in RAID 1 should amount to 5GB of total available space. On some sites I read that RAID 1 capacity is 50%, on other sites I read that RAID 1 capacity won't ever be larger than the smallest of the disks. So if you got asked, what's the capacity of ten 1GB disks in RAID 1, what would you answer and why? I need clarification. Thanks
It all depends on how you set it up. With ten disks in RAID 1, you get a single disk capacity, with plenty of redundancy: if any 9 disks fail, the data is still there on the single remaining disk. This sounds weird but sometimes you see it as a mdadm RAID 1 for the /boot partition, while everything else is RAID 5/6/10 - which the bootloader might not know how to handle. You could also do five separate RAID 1, with 1GB capacity each and a single level of redundancy. But if that's your goal, you'd probably just go with RAID 10 instead. If you have disks of different sizes, you can partition them and then choose an appropriate RAID level for each set of partitions (akin to Synology Hybrid RAID). If you don't do that, yes it's possible to have (a lot of) wasted capacity in a RAID set. With the exception of mdadm raid0, that additional capacity simply is unavailable until you replace the smaller disks with larger ones. That's why it's common to use same-sized disks for RAID.
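The arithmetic behind the two readings, for ten 1 GB disks:

```shell
n=10; size_gb=1
raid1=$size_gb                 # one copy, mirrored to all ten disks
raid10=$(( n / 2 * size_gb ))  # five mirrored pairs, striped together
raid0=$(( n * size_gb ))       # no redundancy at all
echo "RAID1=${raid1}G RAID10=${raid10}G RAID0=${raid0}G"
# prints: RAID1=1G RAID10=5G RAID0=10G
```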
Confused about RAID 1 capacity
1,596,405,443,000
I am currently learning about how HDDs function, and in particular I am having trouble understanding what a cylinder in an HDD is. I have read online, and my current understanding is that it is when a portion of a hard disk track is aligned with another track from another platter which contains similar data, such as a file, but I'm not quite sure if that is even correct.
If you look at the wikipedia explanation, a harddisk consists of several platters. Each platter has concentric tracks with data. The set of all tracks in the same position, for all platters, makes up a cylinder. It's called cylinder because it has the geometrical shape of a cylinder (well, more or less). There is no relation to "containing similar data". None at all. At least for early harddisks, the movement of the read-write-heads was coupled, so "cylinder number" was really a description for "how far do the read-write-heads have to move inside on all the platters". Today, the head/sector/cylinder addressing is obsolete, and everyone uses logical block addresses (LBAs). The harddisk firmware is responsible for translating a LBA into head movements etc.
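For what it's worth, the obsolete cylinder/head/sector addressing maps to a logical block address like this; a quick sketch using the classic 16-heads, 63-sectors-per-track geometry (note that sectors are numbered from 1, while heads and cylinders start at 0):

```shell
heads=16 spt=63
chs_to_lba() { echo $(( ($1 * heads + $2) * spt + $3 - 1 )); }
chs_to_lba 0 0 1   # first sector of the disk    -> 0
chs_to_lba 0 1 1   # same cylinder, next head    -> 63
chs_to_lba 1 0 1   # first sector of cylinder 1  -> 1008
```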
Hard disk Cylinder explaination
1,596,405,443,000
I am using an AWS Free tier Ubuntu instance. It says that the disk /dev/xvda1, which is mounted on /, is full. I am using the df command to check it. I went to the directory / and used ls -l to check which file is taking up so much space, but none of them is taking that much within /. What is the recommended way to find out which files are taking up the space? I have attached the screenshot, please check it. You can see none of them is taking 99892768 space.
To find the largest files and directories that exist on your system, you can run:

du / | sort -n

The largest entries will be printed at the end (add -a if you want individual files listed too, not just directories). Use tail to see only the nth largest:

du / | sort -n | tail -n 20

This will print the 20 largest entries on the system.
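A variant worth knowing: du -h prints human-readable sizes, and sort -h keeps those units (K, M, G) in the right order; sketched here on a throwaway directory rather than /:

```shell
dir=$(mktemp -d)
head -c 1048576 /dev/zero > "$dir/big"    # 1 MiB file
head -c 4096    /dev/zero > "$dir/small"  # 4 KiB file
du -ah "$dir" | sort -h | tail -n 3       # largest entries come last
rm -r "$dir"
```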
Check the disk space in Ubuntu (with ls)
1,596,405,443,000
What, if any, is the disk size limit for mdadm? I've been using mdadm to manage disk arrays for our Mac network for a number of years now. Our storage requirements have grown to the point I'm considering using 4TB or 8TB disks. Is mdadm known to work with these, and if so, what is its limit?
Yes, mdadm is known to work with 4TB and 8TB disks. Its limits (with superblock format 1.0) are as follows: up to at least 384 component devices (256 in RAID6); component devices with up to 2⁶⁴ 512-byte sectors, i.e. 8 ZiB. So you’ve still got lots of breathing space.
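The 8 ZiB figure follows directly from the sector count: 2⁶⁴ sectors of 2⁹ bytes each is 2⁷³ bytes, and a ZiB is 2⁷⁰ bytes, so the exponents can be checked without overflowing shell arithmetic:

```shell
# 2^(64+9) bytes expressed in ZiB (2^70 bytes): just shift the exponents
echo $(( 1 << (64 + 9 - 70) ))   # -> 8
```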
What is the disk size limit for mdadm?
1,596,405,443,000
I'm testing a large ZFS pool at the minute and am documenting the process for replacing a failed drive before our environment moves into production. I've built the ZFS volume 'diskpool', which is 3 nested vdevs of 20 x 8TB drives. Everything is working fine. To simulate a disk failure, I've disconnected one of the 8TB drives. I'm a little worried, because with the drive disconnected, if I run 'zpool status', I'm still shown 'ONLINE' as the state against all of my disks. The disk controller that all of the devices are connected to has reporting mechanisms in place, and that immediately alerted me to say a disk has either failed or been removed, but ZFS doesn't seem any the wiser. Can someone shed some light on why it would still report a 'failed' disk as 'ONLINE'?
The ZFS implementation you use doesn't poll its underlying devices unless there is some activity going on. Removing a disk from a pool that is not accessed will then remain unnoticed until you access it.
Checking for a failed drive in a ZFS pool
1,596,405,443,000
I had a USB drive mounted on /dev/sdb1 and I want to reformat from NTFS to ext3. I did umount -l which unmounted the disk. I deleted the old partition using cfdisk. I ran mkfs.ext3, but got the error: /dev/sdb1 is apparently in use... After googling I tried to cat /proc/mounts and found it there: /dev/sdb1 /media/moviesold How do I remove the reference from there? How did it get there? More specifically what can I run to "really umount"?
Use umount. You ran umount -l, which specifically tells umount to leave the filesystem mounted until all processes are done with it. You really shouldn't need umount -l most of the time; the only purpose it serves is freeing up the mount point so you can mount something new there while the currently mounted partition is still in use. Now that you've already lazy-unmounted it, if you figure out which process still has a file open on the filesystem and close it, the filesystem will unmount automatically.
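fuser -vm /media/moviesold or lsof /media/moviesold will list the processes keeping the filesystem busy; the same information can also be scraped from /proc directly (a minimal sketch, using a background sleep as a stand-in for the process holding a file open):

```shell
target=$(mktemp)
sleep 30 > "$target" &    # stand-in: this process now holds the file open
pid=$!
for fd in /proc/[0-9]*/fd/*; do
  if [ "$(readlink "$fd" 2>/dev/null)" = "$target" ]; then
    p=${fd#/proc/}
    echo "held open by PID ${p%%/*}"
  fi
done
kill "$pid"; rm -f "$target"
```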
Really umount external drive
1,596,405,443,000
There were 3 disk drives on the server, but one failed with an input/output error and is not recoverable. When trying to boot with the remaining 2 drives, I get:

Welcome to emergency mode! After logging in, type "journalctl -xb" to view system logs, "systemctl reboot" to reboot, "systemctl default" or ^D to try again to boot into default mode.
Give root password for maintenance (or type Control-D to continue):

I have tried fsck, but it says nothing besides the drives being clean. How could I proceed in order to have a bootable system again, without resorting to "format and reinstall"? Or, at least, be sure that this system is not recoverable?

Are you using Ubuntu? Sure! cat /etc/issue: Ubuntu 16.04.6 LTS \n \l

Is it a RAID? Human owner said: "no". cat /etc/mdadm.conf: No such file or directory; cat /proc/mdstat: Personalities: [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]. unused devices: <none>.

Have you deleted the bad 3rd one from /etc/fstab? I don't think there is anything to be deleted there, as cat /etc/fstab only lists volumes of the devices sda and sdb, the same devices listed by lsblk. So there is no 3rd drive in /etc/fstab. The only curious thing is that /dev/sda1, listed as <mount point> /, has this <option>: errors=remount-ro. And the suggestion to use touch /forcefsck doesn't solve anything, because this file does not exist, as confirmed by nano /forcefsck.

As it asks, have you run journalctl -xb? I had not, because most of my work experience is with Windows, where any suggestion on a crash screen can, and should, be completely ignored, as they are not helpful. But I found out that journalctl -xb is very helpful, and even interesting to read. I ran it now and found these 3 lines in RED color:

line 636 Feb 17 07:08:04 ██████ kernel: ERST: Can not request [mem 0xd7e6e000-0xd7e6ffff] for ERST.
line 1249 Feb 17 07:09:34 ██████ systemd[1]: Timed out waiting for device dev-disk-by\x2duuid-82271ee0\x2dc055\x2d497a\x2db52f\x2d566c8e456f29.device.
line 1879 Feb 17 07:09:36 ██████ iscsid[1580]: iSCSI daemon with pid=1581 started!

I also found these errors, which seem interesting:

line 1248 Feb 17 07:09:34 ██████ systemd[1]: dev-disk-by\x2duuid-82271ee0\x2dc055\x2d497a\x2db52f\x2d566c8e456f29.device: Job dev-disk-by\x2duuid-82271ee0\x2dc055\x2d497a\x2db52f\x2d566c8e456f29.device/start timed out.
line 1249 Feb 17 07:09:34 ██████ systemd[1]: Timed out waiting for device dev-disk-by\x2duuid-82271ee0\x2dc055\x2d497a\x2db52f\x2d566c8e456f29.device.
line 1250 -- Subject: Unit dev-disk-by\x2duuid-82271ee0\x2dc055\x2d497a\x2db52f\x2d566c8e456f29.device has failed [...]
line 1256 -- The result is timeout.
line 1257 Feb 17 07:09:34 ██████ systemd[1]: Dependency failed for File System Check on /dev/disk/by-uuid/82271ee0-c055-497a-b52f-566c8e456f29. [...]
line 1265 Feb 17 07:09:34 ██████ systemd[1]: Dependency failed for /data. [...]
line 1273 Feb 17 07:09:34 ██████ systemd[1]: Dependency failed for Local File Systems. [...]
line 1281 Feb 17 07:09:34 ██████ systemd[1]: local-fs.target: Job local-fs.target/start failed with result 'dependency'.
line 1282 Feb 17 07:09:34 ██████ systemd[1]: local-fs.target: Triggering OnFailure= dependencies.
line 1283 Feb 17 07:09:34 ██████ systemd[1]: data.mount: Job data.mount/start failed with result 'dependency'.
line 1284 Feb 17 07:09:34 ██████ systemd[1]: systemd-fsck@dev-disk-by\x2duuid-82271ee0\x2dc055\x2d497a\x2db52f\x2d566c8e456f29.service: Job systemd-fsck@dev-disk-by\x2duuid-82271ee0\x2dc055\x2d497a\x2db52f\x2d566c8e456f29.service/start failed with result 'dependency'.
line 1285 Feb 17 07:09:34 ██████ systemd[1]: dev-disk-by\x2duuid-82271ee0\x2dc055\x2d497a\x2db52f\x2d566c8e456f29.device: Job dev-disk-by\x2duuid-82271ee0\x2dc055\x2d497a\x2db52f\x2d566c8e456f29.device/start failed with result 'timeout'.

Post not suitable for "Ask Ubuntu", because, as said there, "16.04 is EOL and therefore off topic here".
Linux startup will bail out for a number of reasons resulting in emergency mode. One possibility is that /etc/fstab has a mount configured which either no longer exists or is corrupted in some way. In this instance, /dev/disk/by-uuid/82271ee0-c055-497a-b52f-566c8e456f29 which is mounted on /data is not functioning correctly. Therefore, /etc/fstab should be edited with the offending line either deleted or commented out. This should result in the server being brought up as normal from which any additional investigation can begin.
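A sketch of the edit: in /etc/fstab, comment out (or add nofail to) the line for the dead disk's UUID, taken from the journal above. The filesystem type and options shown here are assumptions, since the original line isn't quoted in the question:

```
# /etc/fstab -- disable the mount that depends on the failed disk:
# UUID=82271ee0-c055-497a-b52f-566c8e456f29  /data  ext4  defaults  0  2
#
# or, to keep it but tolerate the disk being absent at boot:
# UUID=82271ee0-c055-497a-b52f-566c8e456f29  /data  ext4  defaults,nofail  0  2
```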
Ubuntu gives message "Welcome to emergency mode !", fsck just says "clean"
1,596,405,443,000
I recognize this is a stretch, but I have a shadow of a memory of a way to do it, and I'm hoping someone here will recognize what I'm talking about and help jog it. Traditional programs, both by terminal and GUI, have a starting sequence and a closing sequence of instructions, which can sometimes take some time to execute. Additionally, they may have other constraints, such as retrieving data from a database which may not always be available. A "memory image" of a program, I am defining as its footprint in RAM while running. That is, a one-to-one mapping of all memory allocated to the program. It is possible to image (and even mount the image of) a physical disk, to create an AppImage for an entire dependency structure, and to keep a virtual machine state (effectively an image) of the state of a virtual OS; so I may be wrong; but is it possible to save the memory state of a program (warts, inefficiency, and all) to a binary file? (One of my concerns is that some references kept by the image may change between boots, but this is technically a solvable problem and might not disqualify the idea.) If so, how would I go about doing this on a *nix system? It clearly isn't always (or even usually) an advantage, but I feel it bears investigation. To illustrate what I am trying to do: Open Vim Write a lengthy amount of text in Vim Serialize Vim to disk without formally closing it Wait several days Deserialize instance of Vim from disk Continue writing the same text So, save the state of the whole program, instead of just the file. One workaround I thought of is to run the program in a virtual instance, but this feels like it might typically be excessive.
Yes. CRIU is the acronym of the technology under Linux that allows that. There are severe restrictions, though, which arise from the logical necessity that the files opened by the "frozen" program have not changed, that sockets are still there, and that any external state is the same, or the loss of state is at least recoverable. This effectively rules out X11 programs, and certainly didn't make curses/graphical-tty programs easy to push through CRIU. Where you find this very commonly is containerized server workloads: the whole file system is private anyway, and network connections are commonly lost anyway, so recovery from that is usually built into server software.
Is it possible to serialize a running program's memory image to disk, instead of closing it?
1,596,405,443,000
I am reading many tutorials about the dd command. Some examples include the bs and count parameters; in some, each one is used in isolation, in others both are used together, but the explicit relation between their values is not very clear. At first glance it seems enough to use only bs, taking into consideration that bs defaults to 512 bytes, such as: bs=512 count=#.

Question #1: When and why is it mandatory to use both together?

According to some research, a block has a size of 512 bytes. Some examples (I am not sure if they are valid):

bs=1M count=10
bs=1M count=5
bs=1.5M count=7

Extra Question #2: Is there an explicit relation or ratio for the values when they are used together? For example, to know whether each of the examples above is correct or not, and why.

Note: I am assuming there is some ratio or rule to define the values when they are used together, and I therefore want to avoid putting in random values to see what happens (and harming the disk in a failed experiment). Correct me if I am wrong.

Reason: because the dd command must be used very carefully, I want to be very clear about the use of these parameters, in isolation and together, with the correct values. That is the reason for this question.

Goal: until now, dd is mentioned in many tutorials about creating a swap file, in my case for Ubuntu. As the dd syntax was not clear, I researched this command and learned about its other features, convert and copy.
When and why is mandatory use both together? It's important to know what your goal is. Without supplying count, dd will copy until EOF is reached (which for some block devices, like /dev/zero, will never be the case). Otherwise, dd will copy count blocks of size bs. Is there an explicit relation and rate for the values used for them together? This once again depends on your task. It can be beneficial to tune bs for speed, and count can be used to only copy a part of something. To make things maybe a bit more clear, the following examples will all write a 1024 byte file. dd if=/dev/zero of=/tmp/testfile count=2 dd if=/dev/zero of=/tmp/testfile bs=1k count=1 dd if=/dev/zero of=/tmp/testfile bs=128 count=8 For more information, see the coreutils manual entries regarding block size and dd invocation
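The equivalence of those three spellings is easy to check; each writes exactly 1024 bytes:

```shell
dd if=/dev/zero of=/tmp/t1 count=2        2>/dev/null   # 2 blocks x 512 B (default bs)
dd if=/dev/zero of=/tmp/t2 bs=1k count=1  2>/dev/null   # 1 block  x 1024 B
dd if=/dev/zero of=/tmp/t3 bs=128 count=8 2>/dev/null   # 8 blocks x 128 B
stat -c %s /tmp/t1 /tmp/t2 /tmp/t3   # prints 1024 three times
rm /tmp/t1 /tmp/t2 /tmp/t3
```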
dd: when is it mandatory to use bs and count together?
1,596,405,443,000
Due to hardware problems, there are some Input/output errors. Some of my hard disk sectors are bad. # find . -name 'cpp-service.zip' find: ‘./.cache/chromium’: Input/output error find: ‘./.config/chromium/ShaderCache/GPUCache’: Input/output error find: ‘./.config/chromium/Safe Browsing’: Input/output error find: ‘./.config/chromium/Subresource Filter/Unindexed Rules’: Input/output error find: ‘./.config/chromium/CertificateRevocation’: Input/output error find: ‘./.config/chromium/Crowd Deny’: Input/output error find: ‘./.config/chromium/AutofillRegex’: Input/output error find: ‘./.config/chromium/GrShaderCache/GPUCache’: Input/output error Question How can I mark the above files/folders having Input/output error as void? I mean, I intend to tell the filesystem to completely ignore the above files/folders. As if they don't exist. How can I do that? Move I cannot move them: # mkdir ~/badblocks # mv .cache/chromium ~/badblocks/ mv: cannot stat '.cache/chromium': Input/output error
If your file system is Ext3 or Ext4, you can run a file system check combined with a check for bad blocks, and any bad blocks will be excluded from future use: e2fsck -c -f -k /path/to/device -f will force a check, -c will check for bad blocks (double it to perform a non-destructive write test instead of the default read-only test), -k will keep any existing bad block information. It will be simplest to run this from a recovery environment, e.g. from a bootable live system. Note that a disk with bad blocks is reaching the end of its life and can no longer be trusted.
Files having Input/output error: how to completely ignore them as if they don't exist
1,596,405,443,000
Recently we get a lot of kernel messages on our RHEL VM server, such as:

[Mon Oct 4 11:33:32 2021] EXT4-fs error (device sdb): htree_dirblock_to_tree:914: inode #397095: block 1585151: comm du: bad entry in directory: rec_len is smaller than minimal - offset=0(4096), inode=0, rec_len=0, name_len=0

so we decided to run fsck automatically with the -a option (after umount, of course):

$ fsck -a /dev/sdb
fsck from util-linux 2.23.2
/dev/sdb contains a file system with errors, check forced.
/dev/sdb: Directory inode 397095, block #1, offset 0: directory corrupted

/dev/sdb: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
        (i.e., without -a or -p options)

Despite declaring the -a option, fsck insists on not using it. So the last option is to do it manually:

# fsck /dev/sdb
fsck from util-linux 2.23.2
e2fsck 1.42.9 (28-Dec-2013)
/dev/sdb contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Inode 2134692 ref count is 2, should be 1.  Fix<y>? yes
Unattached inode 2134798
Connect to /lost+found<y>? yes
Inode 2134798 ref count is 2, should be 1.  Fix<y>? yes
Unattached inode 2135050
Connect to /lost+found<y>? yes
Inode 2135050 ref count is 2, should be 1.  Fix<y>? yes
Unattached inode 2135058
Connect to /lost+found<y>? yes
Inode 2135058 ref count is 2, should be 1.  Fix<y>? yes

and as you can see above, it takes time. Any idea how to force fsck to use the -a flag, or how to run fsck without manual steps?
The -a (or -p) option tells fsck to try to fix the filesystem without user interaction. If this is not possible (there is a risk of losing data or further corrupting the filesystem by choosing a wrong option), fsck -a will fail and tell you to run it in manual mode and decide yourself how each error should be fixed. From the e2fsck man page:

Automatically repair ("preen") the file system. This option will cause e2fsck to automatically fix any filesystem problems that can be safely fixed without human intervention. If e2fsck discovers a problem which may require the system administrator to take additional corrective action, e2fsck will print a description of the problem and then exit with the value 4 logically or'ed into the exit code. (See the EXIT CODE section.) This option is normally used by the system's boot scripts.

If you want to run fsck completely non-interactively, you can use the -y option to answer yes to all questions, but I would advise against doing that.
fsck: why does fsck insist on not using the "-a" flag?
1,596,405,443,000
So something strange is going on with my partitions.

root@rescue ~ # mdadm -A --scan
mdadm: WARNING /dev/sdb1 and /dev/sdb appear to have very similar superblocks.
      If they are really different, please --zero the superblock on one
      If they are the same or overlap, please remove one from the
      DEVICE list in mdadm.conf.

root@rescue ~ # lsblk
NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0     7:0    0     4G  1 loop
sda       8:0    0   2.7T  0 disk
├─sda1    8:1    0     1G  0 part
├─sda2    8:2    0    64G  0 part
├─sda3    8:3    0   200G  0 part
├─sda4    8:4    0     1M  0 part
└─sda5    8:5    0   2.5T  0 part
sdb       8:16   0   2.7T  0 disk
└─sdb1    8:17   0   2.7T  0 part
sdc       8:32   0 223.6G  0 disk

On a RAID check I'm getting an error like this. Is there any way to fix it without data loss?

P.S. Added new output:

root@rescue ~ # fdisk -l /dev/sdb
Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: ST3000NM0033-9ZM
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: A93A2325-8454-A346-8133-2ACDF59BE163

Device     Start        End    Sectors  Size Type
/dev/sdb1   2048 5860533134 5860531087  2.7T Linux RAID

root@rescue ~ # mdadm --examine /dev/sdb
/dev/sdb:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 1ac1670b:7c95ed23:0028a58b:a51e25d4
  Creation Time : Mon Dec  2 20:14:13 2019
     Raid Level : raid0
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Update Time : Mon Dec  2 20:14:13 2019
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : a194544e - correct
         Events : 1
     Chunk Size : 8K

      Number   Major   Minor   RaidDevice State
this     1       8       17        1      active sync   /dev/sdb1
   0     0       8        5        0      active sync   /dev/sda5
   1     1       8       17        1      active sync   /dev/sdb1

root@rescue ~ # mdadm --examine /dev/sdb1
/dev/sdb1:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 1ac1670b:7c95ed23:0028a58b:a51e25d4
  Creation Time : Mon Dec  2 20:14:13 2019
     Raid Level : raid0
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Update Time : Mon Dec  2 20:14:13 2019
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0
       Checksum : a194544e - correct
         Events : 1
     Chunk Size : 8K

      Number   Major   Minor   RaidDevice State
this     1       8       17        1      active sync   /dev/sdb1
   0     0       8        5        0      active sync   /dev/sda5
   1     1       8       17        1      active sync   /dev/sdb1
This is a very common problem with old mdadm 0.90 metadata. This metadata is located somewhere at the end of the device, not in the very last sector, but at a 64K-aligned offset:

The superblock is 4K long and is written into a 64K aligned block that starts at least 64K and less than 128K from the end of the device (i.e. to get the address of the superblock round the size of the device down to a multiple of 64K and then subtract 64K).

Source: https://raid.wiki.kernel.org/index.php/RAID_superblock_formats#The_version-0.90_Superblock_Format

Unfortunately, for a whole-disk device that is not a multiple of 64K large, and has a partition extending to very near the end of the disk (into the last partial 64K block), it means the superblock position for the last partition and the superblock position for the whole drive turn out to be completely identical. The mdadm manpage also mentions this issue:

0, 0.90 Use the original 0.90 format superblock. This format limits arrays to 28 component devices and limits component devices of levels 1 and greater to 2 terabytes. It is also possible for there to be confusion about whether the superblock applies to a whole device or just the last partition, if that partition starts on a 64K boundary.

Indirectly it also suggests another workaround: just don't make the partition 64K-aligned; then the superblock on the partition won't be 64K-aligned to the disk, and as such it couldn't be seen as a superblock for the whole disk. But in your case, your partition is MiB-aligned, which also makes it 64K-aligned. The superblock position for the partition is 2048(start) + 5860531087(size) - 15(size%128) - 128 = 5860532992, and the superblock position for the disk is 5860533168(size) - 48(size%128) - 128 = 5860532992 (all in 512-byte sectors, 128 sectors being 64K). In other words, you don't have two superblocks here; it's one and the same. If you mdadm --zero-superblock one as the message suggested, you end up losing both. So please, don't do that.
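The collision is easy to verify with the 0.90 placement rule (round the device size down to a 64 KiB boundary, i.e. 128 sectors, then subtract 128 sectors), using the sizes from the fdisk output:

```shell
part_start=2048; part_size=5860531087; disk_size=5860533168
sb_offset() { echo $(( $1 / 128 * 128 - 128 )); }   # in 512-byte sectors
part_sb=$(( part_start + $(sb_offset $part_size) ))
disk_sb=$(sb_offset $disk_size)
echo "$part_sb $disk_sb"   # both 5860532992: the very same sector on disk
```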
Adding a DEVICE line in mdadm.conf is an easy workaround for one system, but once you boot a live CD or initramfs that doesn't have your mdadm.conf, the problem just resurfaces. This is one of several reasons why 0.90 metadata should be avoided. Use 1.x metadata instead. mdadm allows converting from 0.90 to 1.0 metadata format, for example like this:

mdadm --stop /dev/mdX
mdadm --assemble /dev/mdX --update=metadata /dev/sdx1 /dev/sdy1

From the manpage:

The metadata option only works on v0.90 metadata arrays and will convert them to v1.0 metadata. The array must not be dirty (i.e. it must not need a sync) and it must not have a write-intent bitmap.

Using the default 1.2 metadata (located at the start instead of the end) would be even better, but it would require all data to be shifted and can't be converted in-place.
Getting an error "appear to have very similar superblocks". Ways to fix it?
1,596,405,443,000
I am trying to understand some basics in the I/O area. What I want to know is: when people talk about disk I/O speeds on a standard HDD, are the quoted speeds measured at the physical disk level or per logical partition? The reason I am asking is that in many places where I researched, the question is asked about a disk, especially with Linux, and in the answers people usually refer to the mount points that exist in the system, and both parties are in consensus. I feel the terms disk and partition are used interchangeably, which is what confuses me. I am using some distributed applications for which disk I/O speed is vital. Right now, I have one single physical disk with multiple mount points on it. So here, if the disk's maximum I/O capacity is, say, 70 MB/s, does it mean that all mounts put together can only get up to 70 MB/s (meaning the limit is per physical disk, not per partition), even if the application can push data in parallel to multiple mount points faster than that? Or can each partition max out at 70 MB/s? I am inclined to think it is per physical disk. I just need some additional validation, and maybe some material where I can gain more knowledge on this topic. If it is per disk, I am considering adding more disks so that the application can get some extra speed.
The disk I/O speed is determined by the physical characteristics of the hardware: The speed of the bus (SCSI, USB, whatever) where the disk is attached, and the speed of the disk itself (which is always an average, and you have to take disk-internal buffers into account). From an I/O speed aspect, it doesn't make any difference if you do I/O inside logical partitions, inside files, or do raw I/O on the whole disk. So yes, if you have a single physical disk with many partitions ("mount points"), the total speed will be capped by the physical I/O speed of the single disk. It will actually be worse, because the partitions are on different areas of the platter, and the head will have to move ("seek") if you write to several partitions at once. If instead you have multiple physical disks, each for one of your mountpoints, in theory they can work in parallel, and the I/O speed of the physical disks adds up. This only works to a certain point, of course, your computer internal busses also have a bandwidth limit, and you can't feed arbitrarily many disk controllers with data at full speed. So yes, adding more disks can get you extra speed (unless the limitation is not in the physical disk I/O, but somewhere else).
Are I/O speeds based on Physical disks or logical partitions?
1,596,405,443,000
I use this command:

df -h

and my result is

Filesystem      Size  Used Avail Use% Mounted on
/dev/simfs       60G   50G  8.6G  86% /

How can I see which files occupy that space (file name, path, and size)? Thanks.
Use the du command instead? E.g. try du -sh ./* to see the totals for each file/directory within the current directory. This can take some time when run from the top of a large/complex filesystem.
show disk usage
1,496,336,101,000
I have a 2TB disk that I have overwritten with random data. fdisk confirms that the device has no recognized partition table. Yet, I see these 5 device files for the disk: /dev/sdc{,1,2,3,4} i.e. # for i in /dev/sdc{,1,2,3,4} ; do fdisk -l -u $i ; done Disk /dev/sdc: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk /dev/sdc1: 555.1 GiB, 595985804288 bytes, 1164034774 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk /dev/sdc2: 1.6 TiB, 1781956913152 bytes, 3480384596 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk /dev/sdc3: 928.5 GiB, 997001973760 bytes, 1947269480 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk /dev/sdc4: 1 TiB, 1153125198336 bytes, 2252197653 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Again, the device has no partition table: # fdisk /dev/sdc Welcome to fdisk (util-linux 2.25.2). Changes will remain in memory only, until you decide to write them. Be careful before using the write command. Device does not contain a recognized partition table. Created a new DOS disklabel with disk identifier 0x56b93c1d. Command (m for help): p Disk /dev/sdc: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: dos Why are there partition devices--i.e. why is there /dev/sdc{1,2,3,4} and not just /dev/sdc? 
Further, why do these partition devices have sizes that do not add up to the 1.8 TiB of the disk?
Linux doesn't re-read the partition table except on boot (or disk connect) or when explicitly told to do so (e.g., by fdisk after writing one, or by using partx or blockdev --rereadpt). So until you do one of those, sdc[1-4] will continue to exist. The easiest fix would be to call partprobe to instruct the kernel to re-read the partition table on all devices, or partprobe /dev/sdc to re-read the partition table only on that disk. Or you could use fdisk to write that empty partition table, then fdisk will do the same thing as partprobe. Note also that the kernel won't re-read it if the disk (or rather any of its partitions) are in use (e.g., as a filesystem, swap, LVM PV, etc.). Of course, if any were in use, you've got a problem as you just wiped them. Finally, if you've already tried forcing a reread, it's possible your random data just happens to match a partition table signature. Linux supports a lot of different partition table formats (the list is chosen when compiling the kernel), and the signature on some of them is as small as one byte—so there is a 1/256 chance that random data matches. Others have longer signatures, so much lower chance. I'm not sure what the overall chance is, but a quick check of the kernel logs will show which partition table format the kernel recognized.
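For scripting the reread, the fallback chain above can be wrapped in a small helper; this is a hedged sketch (the function name is mine, not a standard tool):

```shell
# reread_ptable DEV — ask the kernel to re-read DEV's partition table,
# trying partprobe, then blockdev --rereadpt, then partx. All of these
# fail if any partition on DEV is still in use (mounted, swap, LVM PV...).
reread_ptable() {
    dev="$1"
    [ -b "$dev" ] || { echo "not a block device: $dev" >&2; return 1; }
    partprobe "$dev" 2>/dev/null \
        || blockdev --rereadpt "$dev" 2>/dev/null \
        || partx -u "$dev"
}
# Usage:  reread_ptable /dev/sdc
```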
Why is Linux Showing Device Partition Block Files for a Disk with no Partitions?
1,496,336,101,000
How much time can I expect this disk (1 TB) to function? I have already made a backup of my important data. But I will be unable to buy a new HDD until February. Is there any way I can extend its life? Perhaps by formatting the disk or something else?
There is no precise time; it could be a minute from now or a month from now. But if it's bound to happen, it will, and you can't really prolong the disk's life, so make sure you have backed up everything and stop relying on it.
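If you want to watch how fast the drive is degrading while you migrate off it, smartmontools can help; a hedged sketch (the filter function is mine, the attribute names are the usual SMART ones):

```shell
# smart_red_flags — keep only the SMART attributes that most directly
# predict imminent failure. Feed it the output of `smartctl -A /dev/sdX`.
smart_red_flags() {
    grep -Ei 'reallocated_sector|current_pending|offline_uncorrectable'
}
# Usage (as root, with smartmontools installed):
#   smartctl -A /dev/sda | smart_red_flags
```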
Disk is likely to fail soon [closed]
1,496,336,101,000
I have a Dell PowerEdge R820 server which is under maintenance by a third party. There are 6 SAS disks (10K RPM, 6 Gbps) and they are configured as RAID 5 using a PERC controller. Currently I am facing a performance issue with the server; basically it is with the disks. When I tried to write 4GB of data, it took 12 minutes to complete. I am using a Linux server. Please see the output of the dd command: # # time dd if=/dev/zero of=TestFile bs=4096 count=1024000 1024000+0 records in 1024000+0 records out real 12m 3.56s user 0m 7.94s sys 0m 0.00s I have also checked with another desktop-class server, where RAID 5 is configured with 4 SATA (7.2K RPM) disks. It takes only 19 seconds to write 4GB of data to the disk. I can clearly see a disk I/O performance problem. But the third party is denying it; they say this is the normal time. I refuse to agree with them. Can you please tell me what the normal time should be to write 4GB of data to a volume configured with 6 SAS (10K RPM) disks?
That does seem like a disk performance issue. You should get something between 20 MB/s and 80 MB/s depending on block size, I think. I found this old 10K-RPM disk comparison where you can see how different drives perform: http://techreport.com/review/5236/10k-rpm-hard-drive-comparison/7 . I also found a thread on the Dell forum where someone is facing the same kind of issue: http://en.community.dell.com/support-forums/servers/f/906/t/19475037 To answer your question: no, 5-6 MB/s is not normal.
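Part of the measured slowness may also come from the benchmark itself: the question's dd uses a tiny 4 KiB block size and never flushes, which both amplifies the RAID 5 write penalty and lets the page cache distort the timing. A fairer sequential-write test (a sketch; the helper name and paths are mine) uses 1 MiB blocks and fdatasync:

```shell
# write_test SIZE_MB PATH — perform a sequential write of SIZE_MB
# megabytes, flushing to disk at the end (conv=fdatasync) so the page
# cache can't flatter the result. Prints dd's transfer summary.
write_test() {
    mb="$1"; path="$2"
    dd if=/dev/zero of="$path" bs=1M count="$mb" conv=fdatasync 2>&1
    rm -f "$path"
}
# e.g.  time write_test 4096 /data/TestFile   # ~4 GiB on the array under test
```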
Disk I/O performance issue [closed]
1,496,336,101,000
I'm looking at one of those detachable hybrid laptops, which has extra hard drive in the keyboard base; this laptop runs Ubuntu. Sometimes these drives mount at startup, sometimes not - and in inspecting, I just noticed something that I don't understand. So when this drive is mounted and working properly, here is the relevant output of lshw: $ sudo lshw -businfo | grep 'disk\|volume' scsi@4:0.0.0 /dev/sdb disk 500GB HTS545050A7E380 scsi@4:0.0.0,1 /dev/sdb1 volume 222GiB EXT4 volume scsi@4:0.0.0,2 /dev/sdb2 volume 222GiB EXT4 volume scsi@4:0.0.0,3 /dev/sdb3 volume 20GiB Windows NTFS volume With lshw -v, I get the following for this drive: *-scsi:1 physical id: 2 bus info: usb@2:1.2 logical name: scsi4 capabilities: emulated scsi-host configuration: driver=usb-storage *-disk description: SCSI Disk product: HTS545050A7E380 vendor: Hitachi physical id: 0.0.0 bus info: scsi@4:0.0.0 logical name: /dev/sdb version: AD04 serial: TE85313R0LU5JK size: 465GiB (500GB) capabilities: gpt-1.00 partitioned partitioned:gpt configuration: ansiversion=6 guid=d0ba2288-a760-46db-8675-fe22d9becf8e sectorsize=512 So, it does tell me this drive is connected somehow through USB; and that it is a Hitachi. However, when I do lsusb, it is not listed at all: $ sudo lsusb Bus 004 Device 005: ID 03eb:8808 Atmel Corp. Bus 004 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 004 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 003 Device 004: ID 114d:0140 Alpha Imaging Technology Corp. Bus 003 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 003: ID 05e3:0735 Genesys Logic, Inc. Bus 002 Device 002: ID 05e3:0612 Genesys Logic, Inc. Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 001 Device 003: ID 0483:91d1 STMicroelectronics Sensor Hub Bus 001 Device 005: ID 2a47:0c02 Bus 001 Device 004: ID 05e3:0606 Genesys Logic, Inc. 
USB 2.0 Hub / D-Link DUB-H4 USB 2.0 Hub Bus 001 Device 002: ID 05e3:0610 Genesys Logic, Inc. 4-port hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub In other words, no Hitachi here. Relevant entries from the system log /var/log/syslog: kernel: [ 2.963255] scsi 4:0:0:0: Direct-Access Hitachi HTS545050A7E380 kernel: [ 2.963490] sd 4:0:0:0: Attached scsi generic sg1 type 0 kernel: [ 2.964196] sd 4:0:0:0: [sdb] 976773152 512-byte logical blocks: (500 GB/465 GiB) kernel: [ 2.966060] sd 4:0:0:0: [sdb] Write Protect is off kernel: [ 2.966063] sd 4:0:0:0: [sdb] Mode Sense: 5f 00 10 08 kernel: [ 2.967007] sd 4:0:0:0: [sdb] Write cache: enabled, read cache: enabled, supports DPO and FUA kernel: [ 3.019250] sdb: sdb1 sdb2 sdb3 kernel: [ 3.021523] sd 4:0:0:0: [sdb] Attached SCSI disk kernel: [ 3.381991] clocksource: Switched to clocksource tsc And finally I checked with udevadm info -a -n sdb; here it finds "Hitachi" as the vendor of the device, and in the parent walk it comes to usb-storage, which is a child of vendor/product 05e3 0735, which is listed by lsusb (Genesys Logic, Inc.), and for which lsusb -v reports: iManufacturer 1 USB Storage iProduct 2 USB3.0 SATA Bridge So, since lsusb will typically show the vendor/product of, say, USB thumb drives - why doesn't it show this drive, even though it is connected through the USB bus?
This drive isn't a USB device, but a SATA device which is accessed via the Genesys bridge (which is a USB device). Since it isn't itself a USB device, it doesn't show up in lsusb's output. USB thumb drives are USB devices without bridges (well, usually), so they do show up as-is on the USB bus and in lsusb's output.
lsusb not listing a SCSI drive, connected through USB (SATA bridge)?
1,496,336,101,000
I've got a large ZFS disk pool; 3 nested RAIDZ2 vdevs. I am documenting the process for replacing a failed disk for my colleagues and so simulated a disk failure by removing a disk from the host. Sure enough, the vdev to which the disk belonged became degraded and the disk unavailable. I offlined the disk like so... zpool offline diskpool sdo A quick 'zpool status' shows the disk as offline... so far so good. I replaced the disk and confirmed on my SATA controller that the new disk was detected, which it was. Then I tried to get Linux to rescan the SCSI bus to detect the disk. This is where my first problem occurs. As far as I know, the following command is used to find the correct host bus to rescan... grep mpt /sys/class/scsi_host/host?/proc_name However, on my CentOS 7.2 system this command has no output. It doesn't error, it just gives me null output and waits for my next command. I'm using several specialist cards that allow me to connect many SATA devices. I would normally rescan the bus with echo "- - -" > /sys/class/scsi_host/hostX/scan where hostX is the correct host bus, but as I cannot find the host bus, I cannot complete this step. Is there another way to get this info, or has the command changed in CentOS 7.2 or something? Furthermore, I opted to reboot the machine to allow me to continue testing. Following a reboot, the ZFS pool was not attached. I had to manually import it with 'zpool import diskpool'. That worked fine, but strangely, once it's imported, if I do 'zpool status', I no longer see the device IDs like it showed me before... raidz2-2 ONLINE 0 0 0 /dev/sdd ONLINE 0 0 0 /dev/sde ONLINE 0 0 0 /dev/sdf ONLINE 0 0 0 /dev/sdg ONLINE 0 0 0 Instead, it seems to have the drive serial numbers... 
raidz2-2 ONLINE 0 0 0 ata-ST8000AS0002-1NA17Z_Z840DG66 ONLINE 0 0 0 ata-ST8000AS0002-1NA17Z_Z840DVE0 ONLINE 0 0 0 ata-ST8000AS0002-1NA17Z_Z840CQFB ONLINE 0 0 0 ata-ST8000AS0002-1NA17Z_Z840DP2V ONLINE 0 0 0 This will cause a problem in the future as if a further disk fails, I will struggle to identify the correct disk to replace. Is there a way I can switch this back so I'm shown the device id again? Thanks in advance!
ZFS detects disks not by their name in the filesystem, but by their UUID that is written onto the disk (or at least something similar -- not 100% sure that it's actually a UUID). When zpool import runs, the disks are enumerated, ZFS rebuilds all the pools, and then uses the device name (without actually including any directory IME, usually it's something like sda rather than /dev/sda) in the zpool status output. As such, if you move the drives around (or if the kernel moves the drives around, which can happen with modern kernels on modern hardware), zpool will still detect the disks in the same order as it did before; disks that appeared first in the output will again appear first in the output, even if the kernel doesn't enumerate them in that order anymore. What happened to you here is probably that, because the original zpool import didn't work, the kernel could complete its boot, udev did a lot more work, and then by the time you did the manual zpool import, the default enumeration of all your disks turned out to have the serial number-based names first, rather than the sdX-based ones. Most likely, the next time you reboot the machine, the used names will be back to the sdX scheme. Luckily, resolving the names from one naming scheme to the other is fairly straightforward: wouter@gangtai:/dev/disk/by-id$ ls -l total 0 lrwxrwxrwx. 1 root root 9 Mar 31 18:15 ata-SAMSUNG_MZ7TE256HMHP-00004_S1RKNSAFC04685 -> ../../sda lrwxrwxrwx. 1 root root 10 Mar 31 18:15 ata-SAMSUNG_MZ7TE256HMHP-00004_S1RKNSAFC04685-part1 -> ../../sda1 lrwxrwxrwx. 1 root root 9 Mar 31 18:15 wwn-0x50025388a089e89c -> ../../sda lrwxrwxrwx. 1 root root 10 Mar 31 18:15 wwn-0x50025388a089e89c-part1 -> ../../sda1 There are multiple naming schemes (by-id, by-uuid, and by-path), all of which can be found under /dev/disk. Having said all that, I must say I don't agree with your claim that it would be easier to figure out which disk is which by looking at the sdX names. 
Modern kernels no longer assign static device names to particular devices; this is why modern distributions use UUID-based fstab files, rather than sdX-based ones. The serial number, in fact, is a far more reliable way to figure out which is the broken disk; after all, it's written on the actual disk, in contrast to the sdX name, which may differ from boot to boot (I've actually encountered that on a ZFS box with sixteen hard disks). Any one of the other methods (by-uuid, by-id, and especially by-path in the enterprise-level multi-disk enclosures) is much more reliable than that.
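If you ever need to map one naming scheme onto the other by hand, the /dev/disk symlinks make it mechanical; a hedged sketch (the helper name is mine). Incidentally, if you want ZFS itself to show the by-id names consistently, the usual approach is `zpool export diskpool` followed by `zpool import -d /dev/disk/by-id diskpool`.

```shell
# id_to_kernel NAME — resolve a /dev/disk/by-id entry (e.g.
# ata-ST8000AS0002-1NA17Z_Z840DG66) to its current kernel name (sdX).
id_to_kernel() {
    link="/dev/disk/by-id/$1"
    [ -L "$link" ] || { echo "no such id: $1" >&2; return 1; }
    basename "$(readlink -f "$link")"
}
```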
Replacing a failed disk in a ZFS pool
1,496,336,101,000
I have a Linux box on my school's server. When I was trying to install Protocol Buffers, disk usage was reported as exceeded. So I checked my disk usage with two commands in my home directory: du -h 535M . df -ha home.XXX:/export/home/XXX 9.7T 1.5T 8.3T 15% /home/XXX Are they supposed to be the same number? Which one is the real usage of my disk on the Linux box?
du tells you how much data there is in the directory where you ran it. df tells you how much data is in total on the volume where your home directory is located. Your home directory is mounted remotely (over NFS); it is likely that it is on the same volume as other home directories, so df reports the data used by all the home directories on the same volume. Your disk space may be exceeded even if there is room left on the device. A school environment is highly likely to have quotas in place. If you got a message “quota exceeded” as opposed to “no space left on device”, then you've exceeded your quota. Run the command quota to see what your quota is and how much of it you're using.
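You can see the distinction side by side; a quick illustration (the paths are examples, point them at your own home directory):

```shell
du -sh "$HOME"   # space used by your own tree only
df -h  "$HOME"   # used/free on the whole underlying volume
# per-user quota, if quotas are enabled on this system
command -v quota >/dev/null && quota -s || true
```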
How to find the true disk usage on my Linux box?
1,496,336,101,000
I am attempting to install OpenBSD 7.4 on an amd64 arch with a 1TB drive. The machine will run an X window system and needs plenty of room to store files. I selected "whole disk GPT" at the prompt (though I'm unsure whether MBR might be the better option). An auto-allocated layout is created: 12 partitions are created, a through l. A summary: partition size (M) fstype mount point(I think) a: 1024 4.2BSD / b: 4129 swap c: 915715 unused d: 4096 4.2BSD /tmp e: 11842 4.2BSD /var f: 30720 4.2BSD /usr g: 1024 4.2BSD /usr/X11R6 h: 20480 4.2BSD /usr/local i: 260 MSDOS j: 3072 4.2BSD /usr/src k: 6144 4.2BSD /usr/obj l: 307200 4.2BSD /home c is ~915GB and is marked as unused. I'd like to adjust the layout to make use of it. Decanting from the man pages I can see the following definitions: partition/mount point Summary from hier / root /tmp Temporary files that are not preserved between system reboots. /var Multi-purpose log, temporary, transient, and spool files. /usr Contains the majority of user utilities and applications /usr/X11R6 Files required for the X11 window system. /usr/local Local executables, libraries, etc. /usr/src BSD and/or local source files. /usr/obj Architecture specific target tree produced by building the /usr/src tree. /usr/home Default location for user home directories. However, I'm struggling to reason about this and have the following questions (with current best guess answers in italics): 1. Why does the automatic layout pick the above mount points in particular? Evolved generic allocation based on historical use and estimates. 2. Why is there an unused partition? An artifact of the automatic disk allocator which sets a maximum limit for the size of partitions - leftovers are unused. 3. Is it a good idea to put it all on a single partition instead? No idea! 4. What might be a good practice allocation for a general use PC (with X windows) - where should I re-allocate the c partition? 
I guess I should reallocate the unused c to l (/home), given that I may be saving a lot of files in userspace. Perhaps there's an obvious man page I missed. Here's what I've seen: https://www.openbsd.org/faq/faq4.html#Partitioning https://man.openbsd.org/disklabel#AUTOMATIC_DISK_ALLOCATION https://man.openbsd.org/hier https://www.openbsdhandbook.com/disk_operations/
1. Why does the automatic layout pick the above mount points in particular? This layout is suggested based on the experience of the developers and the needs of the system. For example, the / partition contains the minimum required for the system to work, and it works even in case of issues with other partitions. /tmp and /var are often written to and so are more subject to problems. A problem on these partitions must not prevent the system from booting. As you mentioned, hier(7) describes the filesystem layout. 2. Why is there an unused partition? Quoted from disklabel(8): disklabel supports 15 configurable partitions, ‘a’ through ‘p’, excluding 'c'. The ‘c’ partition describes the entire physical disk, is automatically created by the kernel, and cannot be modified or deleted by disklabel. By convention, the ‘a’ partition of the boot disk is the root partition, and the ‘b’ partition of the boot disk is the swap partition, but all other letters can be used in any order for any other partitions as desired. The c partition is special: it represents the whole device, it's not an unused partition. As a comparison: on Linux, /dev/sdX represents the whole device, and /dev/sdX1 a partition on the device. On OpenBSD, /dev/sdXc represents the whole device, and /dev/sdXa a partition on the device. 3. Is it a good idea to put it all on a single partition instead? You could use a different partitioning, depending on your needs. But using a single partition is probably not a good idea. If everything is on the same partition, any problem with the filesystem could prevent the system from booting. On the other hand, having at least a separate root partition allows the system to boot in single user mode in case of issues. Some filesystems are mounted with different options, as you can see in the /etc/fstab file (see mount(8) and fstab(5)). All partitions except / are mounted with the option nodev. /tmp is also mounted with the option nosuid, which is good for security reasons. 
With a single partition, you couldn't benefit from this. 4. What might be a good practice allocation for a general use PC (with X windows) - where should I re-allocate the c partition? As explained above, you don't need to re-allocate the c partition since it represents the whole device. Your ~915GB are shared this way: ~1GB to /, to contain /bin, /sbin, and maybe more. ~4GB to the swap. ~4GB to /tmp to contain temporary files. ~11GB to /var to contain logs, backups and more. ~30GB to /usr to contain user utilities and more. ~1GB to /usr/X11R6 to contain the X window system's files. ~20GB to /usr/local to contain the programs and libraries installed by the user. ~260MB to the boot partition. ~3GB to /usr/src to contain the source code of OpenBSD. ~6GB to /usr/obj to contain the results when building /usr/src. ~307GB to /home to contain your personal files and more. From the "AUTOMATIC DISK ALLOCATION" section of disklabel(8), you can see that in the automatic layout, the /home partition can be assigned up to 300GB (columns are the free disk space available):

Partition    > 10GB         > 2.5GB        > 700MB        < 700MB
/            150MB – 1GB    800MB – 2GB    700MB – 4GB    1MB – 2GB
swap         80MB – 256MB   80MB – 256MB   1MB – 256MB
/usr         1.5GB – 30GB   1.5GB – 30GB
/home        1GB – 300GB    256MB – 2GB
/tmp         120MB – 4GB
/var         80MB – 4GB
/usr/X11R6   384MB – 1GB
/usr/local   1GB – 20GB
/usr/src     1.5GB – 3GB
/usr/obj     5GB – 6GB

If you want to use the unused ~528GB, you could increase the size of the /home partition, or reinstall OpenBSD and manually adjust the disk layout. Besides these ~528GB, there is enough space for the system to run on a desktop with graphical applications. I recently installed OpenBSD 7.4 on a laptop, and started to take some notes about running OpenBSD on desktop.
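As a sanity check on the leftover space, you can sum the auto-allocated sizes from the question (MiB figures, everything except the whole-disk c partition) and subtract from c; the exact figure differs slightly from the ~528GB quoted, depending on MB vs MiB rounding:

```shell
# Sum the allocated partitions a,b,d..l (MiB values from the question's
# disklabel summary) and subtract from c, the whole disk.
awk 'BEGIN {
    total = 915715   # partition c, the whole disk, in MiB
    split("1024 4129 4096 11842 30720 1024 20480 260 3072 6144 307200", p)
    for (i in p) used += p[i]
    printf "allocated: %d MiB, unused: %d MiB (~%.0f GiB)\n",
           used, total - used, (total - used) / 1024
}'
# → allocated: 389991 MiB, unused: 525724 MiB (~513 GiB)
```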
Understanding the auto-allocated disk layout on OpenBSD
1,496,336,101,000
I can set the spindown timeout of a disk via hdparm -S <timeout> /dev/sda. How can I verify, i.e. read out, what timeout /dev/sda has set at any given moment? (I read man hdparm to no avail.)
hdparm is a thin wrapper around various drive command sets, in particular ATA/ATAPI. These command sets don’t provide a way to retrieve timeouts; see for example the draft ATA8-ACS — the only “idle” or “standby”-related commands are commands to immediately place the drive in a given power mode or to set the corresponding timeout. Even the “check power mode” (E5h) command only returns the current power mode, it doesn’t provide any information on timeouts (either their current threshold, or their current value).
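What you can read back is the drive's current power state, via the "check power mode" (E5h) command mentioned above, exposed as `hdparm -C`; a sketch (the parsing helper is mine):

```shell
# parse_power_mode — extract the state from `hdparm -C` output, which
# looks like:
#   /dev/sda:
#    drive state is:  active/idle
parse_power_mode() {
    awk -F': *' '/drive state is/ {print $NF}'
}
# Usage (as root):  hdparm -C /dev/sda | parse_power_mode
```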
Read currently set spindown timeout of disk
1,496,336,101,000
After changing a disk size in VMware (for example, increasing it by 10 more GB), the next step is to rescan it in Linux so that the kernel picks up the size change. For this we use this command: echo 1 > /sys/class/block/sda/device/rescan In our scripts, we rescan every couple of minutes from a cron job, in order to verify whether we need to resize the relevant disks. I want to know if there is some way to identify whether a disk size was changed without rescanning, and to rescan only if the disk size really was changed. So far we have not found a way to verify whether the disk size was changed without rescanning, but we hope we can get answers here. The reason for my question is that we do not feel comfortable with rescanning every couple of minutes, even though this activity isn't risky. Reference: https://kerneltalks.com/disk-management/how-to-rescan-disk-in-linux-after-extending-vmware-disk/
The way to identify that a block device has been resized is to rescan it. That’s it. There’s no need to find another way of rescanning the block device in order to decide whether to rescan it. In a virtualised environment it should be safe to run this every two minutes; there will be a very slight performance hit whenever a rescan is run, because the rescan acquires interrupt locks, but rescanning a virtual block device is very fast. If you’re uncomfortable with rescans every two minutes, you can reduce the frequency — do you really need to react to disk resizes within two minutes? (Note that some paravirtualised storage drivers automatically update the size seen by the guest, so rescanning isn’t necessary; this is apparently not the case for VMware.) You may want to look at open-vm-tools which supposedly allows workloads to be triggered inside guests from the host: that way, you could resize the disk externally, and trigger a job inside the guest to rescan and resize the volumes. I’ve never done this so I don’t know if it’s actually possible.
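To make the cron job at least report what it did, you can compare the kernel's cached size before and after the rescan; a hedged sketch (the helper name, and the optional sysfs-root argument used here to keep the function testable, are mine):

```shell
# check_resize DEV [SYSROOT] — rescan DEV and print a line only when its
# size (in 512-byte sectors) actually changed. SYSROOT defaults to the
# real sysfs tree; passing another path allows a dry run against a fake tree.
check_resize() {
    dev="$1"; sys="${2:-/sys/class/block}"
    before=$(cat "$sys/$dev/size") || return 1
    echo 1 > "$sys/$dev/device/rescan"
    after=$(cat "$sys/$dev/size")
    [ "$before" != "$after" ] && echo "$dev resized: $before -> $after sectors"
    return 0
}
# Usage from cron:  check_resize sda
```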
On Linux, is it possible to know if disk size was changed without rescanning the disk?
1,496,336,101,000
My concern is about the /usr and /home sizes, as you can see in the screenshots (Home and Root). It seems that they have really little space, but when I was installing Ubuntu I gave enough space to / (the root, which contains /usr) and also to /home. df -h shows: Root: /dev/nvme0n1p5 32G 6.9G 23G 24% / Home: /dev/nvme0n1p8 142G 311M 135G 1% /home So where is the problem? Why am I seeing nearly full storage in both? And in case I reinstall Ubuntu, what should I do?
The screens you’re looking at don’t show the proportion of disk space used compared to the disk space available; they only show the share of disk space used. Your /home volume is 142GiB in size, out of which 311MiB are used; your home directory uses 260.3MB of that. The “Home folder” screenshot shows a full bar because the only directory being shown accounts for all the used disk space in the folders being displayed. Likewise, your / volume is 32GiB in size, but only 6.9GiB is in use. / accounts for all of that, so it gets a full bar in the second screenshot; /usr accounts for most of that, so it gets a red bar itself, but not quite full.
Why are /usr and /home very small even though I gave /home and / (root) enough size?
1,496,336,101,000
I am trying to image my Ubuntu disk using Clonezilla and it fails because I get an error saying: error cannot have overlapping partitions Below is how my disk is set up, and the lsblk output: NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop1 7:1 0 42,2M 1 loop /snap/snapd/14066 nvme0n1 259:0 0 953,9G 0 disk ├─nvme0n1p5 259:3 0 976M 0 part [SWAP] └─nvme0n1p1 259:1 0 952,9G 0 part / And here is the output of fdisk -l /dev/nvme0n1 Disk /dev/nvme0n1: 953,9 GiB, 1024209543168 bytes, 2000409264 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: dos Disk identifier: 0x6e617337 Device Boot Start End Sectors Size Id Type /dev/nvme0n1p1 * 2048 1998407679 1998405632 952,9G 83 Linux /dev/nvme0n1p2 1998409726 2000397734 1988009 970,7M 5 Extended /dev/nvme0n1p5 1998409728 2000408575 1998848 976M 82 Linux swap / Solaris And here is how it appears in gparted: Any advice on how to fix this error so I can image/save my disk?
Answer adapted from: how-to-fix-overlapped-partitions-in-the-mbr-table. You can try this, but I think a much easier solution is to just delete the swap and extended partitions and recreate them. Fixing the partition table with sfdisk: Boot with a live Ubuntu disk; Confirm the problem on your disk device, here /dev/sda, with parted, e.g. sudo parted /dev/sda unit s print which should show: Error: Can't have overlapping partitions. Partition details can be checked with: sudo fdisk -l -u /dev/sda which, for you, according to your post is: Disk /dev/nvme0n1: 953,9 GiB, 1024209543168 bytes, 2000409264 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: dos Disk identifier: 0x6e617337 Device Boot Start End Sectors Size Id Type /dev/nvme0n1p1 * 2048 1998407679 1998405632 952,9G 83 Linux /dev/nvme0n1p2 1998409726 2000397734 1988009 970,7M 5 Extended /dev/nvme0n1p5 1998409728 2000408575 1998848 976M 82 Linux swap / Solaris Checking the overlaps: You can see that your extended partition /dev/nvme0n1p2 is smaller than your swap partition /dev/nvme0n1p5. To make things clearer: your swap partition lives inside that extended partition, so its size should be smaller than the extended partition's size. But in your case the swap size is greater than the extended partition size itself. Device Size /dev/nvme0n1p2 970,7M /dev/nvme0n1p5 976M In other words, the end sector of nvme0n1p2 should be greater than the end sector of nvme0n1p5, but in your case nvme0n1p2end = 2000397734 nvme0n1p5end = 2000408575 and hence the problem. Now you can solve it simply by reducing your swap partition size using GParted 
(to ~600MB - 700MB), or you can use the command-line tool sfdisk. Using sfdisk: As suggested in the documentation - "In cases where we do not know if the starting or ending sector is the problem, we assume that the starting sector of each partition is correct, and that the ending sector might be in error" - we assume that the starting sector of the extended partition nvme0n1p2 is correct. Hence we will be looking to change the end sector of the swap partition nvme0n1p5. Calculations: nvme0n1p5newEnd = nvme0n1p2end - 1 = 2000397734 - 1 = 2000397733 nvme0n1p5newSize = nvme0n1p5newEnd - nvme0n1p5start = 2000397733 - 1998409728 = 1988005 Dumping a copy of the partition table to a file using the sfdisk command: sudo sfdisk -d /dev/sda should dump the partition table details. This can be dumped to a file which, after the necessary corrections are made, can be fed back to sfdisk. [To OP: Please edit your question and include the output of sudo sfdisk -d /dev/sda] Dump a copy of the partition table with: sudo sfdisk -d /dev/sda > sda-backup.txt Open the file created in the previous step with root privileges, using the text editor of your choice. In the example I'll use nano: sudo nano sda-backup.txt (assuming the file is in the current directory; otherwise replace it with the file's absolute path.) Change the old size of nvme0n1p5 (1998848) to the corrected size (1988005) so that your new partition table dump would look something like: output not attached by OP Save the file (Ctrl+O for nano) and close the editor (Ctrl+X for nano). Feed the corrected partition details back to the partition table using the sfdisk command: sudo sfdisk /dev/sda < sda-backup.txt Confirm that the problem is resolved by running parted on your disk device: sudo parted /dev/sda unit s print If step 9 confirms that the partition table is fixed, you can then use GParted or other partition editors with the device. 
The GParted documentation also suggests an alternative method: using testdisk to scan the disk device and rebuild the partition table. The testdisk application is included on GParted Live. So if you are not comfortable with the command-line way, you can try the alternative. source Using GParted (the screenshots walk through these steps): Unmount your swap partition before continuing. Resize the root partition (screenshots show the root partition before and after the resize, with empty space created after it). Delete the swap partition, then delete the extended partition, leaving only the root partition. Create a new extended partition, leaving some free space before it (so it doesn't overlap), and select "Extended partition" as the partition type. Create the swap partition inside it, leaving some free space after it (so it doesn't exceed the extended partition), and select "linux-swap" as the filesystem. Copy the UUID of your new swap and replace it in your /etc/fstab.
Clonezilla: cannot have overlapping partitions
1,496,336,101,000
Show disk space with fdisk: sudo fdisk -l /dev/sda Disk /dev/sda: 465.8 GiB, 500107862016 bytes, 976773168 sectors Disk model: ST500DM002-1BD14 Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: gpt Disk identifier: 0C7DCAA1-CBD0-4A33-B210-F8D027B84A09 Device Start End Sectors Size Type /dev/sda1 2048 390819839 390817792 186.4G Linux filesystem /dev/sda2 390819840 422070271 31250432 14.9G Linux swap /dev/sda3 422070272 423120895 1050624 513M EFI System /dev/sda4 423120896 423153663 32768 16M Microsoft reserved /dev/sda5 423153664 628613119 205459456 98G Microsoft basic data The free, unused, available space is about: total space for /dev/sda - space for /dev/sda1,/dev/sda2,/dev/sda3,/dev/sda4,/dev/sda5 = 465.8G - 186.4G - 14.9G - 513M - 16M - 98G = 166G How can I get this number directly with a command? Preferably without parsing all the numbers from fdisk and combining them into a calculation expression like 465.8 - 186.4 - 14.9 - (513 + 16)/1000 - 98.
sfdisk -F /dev/sdX will print both sum of the free space and list of free space areas: # sfdisk -F /dev/sde Unpartitioned space /dev/sde: 477.77 MiB, 500973568 bytes, 978464 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Start End Sectors Size 22528 32527 10000 4.9M 53248 69391 16144 7.9M 71680 479231 407552 199M 479232 1023999 544768 266M So if you are interested only in the sum, you can parse it from the first line: # sfdisk -F /dev/sde | head -1 | cut -d":" -f2 | cut -d"," -f1 477.77 MiB Note that calculating free space like this is tricky. Here I have 478 MiB of free space but that doesn't mean I can use it all, I made the partitions in a way that makes the first two free regions unusable (too small to be used) and the space at the end of the disk is not a single continuous free space. This is an exaggerated example, but I've seen worse partitioning than this :-) If you want to get "biggest free continuous space usable for a partition" you'll need to check starts and ends, partition types etc. and that would be hard to parse from a bash output, you'll probably need to use a library (e.g. libfdisk or libblockdev) to get exact information (which means programming in C or Python).
How to get free available disk space with command?
1,496,336,101,000
I cloned my Btrfs drive with OpenSUSE Tumbleweed installed on it to a larger drive with Clonezilla; I've even tried doing this sector by sector. Yet, even though the process appears to have finished successfully every time, when I try to boot from the new drive, a Ubuntu GRUB appears, and of course nothing loads. I assume that the Ubuntu menu is coming from Clonezilla itself, but why wouldn't Clonezilla copy everything identically? Also, I cannot mount the new drive from OpenSUSE. The drive is seen in the partition manager, but no mount option is available. Can someone please clarify what it is that is causing the drive clone not to boot and even not to mount? I presume there are some peculiar specifics of Btrfs, but I have no idea why sector-by-sector cloning would not produce an identical copy of everything, making the disk bootable and mountable. Would appreciate some help. Update: I was able to make the partition mountable with the help of Error for `mount`: `system call failed: File exists.` But I still can't boot from it. Also, for some reason there is no user directory under /home/ on the new disk, despite the supposedly identical copy.
Okay, so according to your lsblk output and your /etc/fstab, you have essentially an all-btrfs system, with the exception of the EFI system partition. Note that a single btrfs filesystem can extend beyond a single partition or even to multiple disks: since your lsblk output does not say what your /dev/sdc is being used for, it might be used as an extension of your btrfs that contains your /home subvolume. That might explain why it is not there on the clone, or perhaps you simply failed to mount all the different subvolumes. You could use btrfs filesystem show to see which devices/partitions belong to each mounted btrfs filesystem. When you ran btrfstune -m /dev/sdb3 as you mentioned in the comments to the other question you linked, it changed the UUID of the cloned filesystem, so the UUID entries on the /etc/fstab on the cloned filesystem are no longer correct. You'll have to fix them in the /etc/fstab file of the clone, and possibly also in its GRUB configuration and/or initramfs. You can use lsblk -o +UUID to view the new filesystem UUID. This UUID is used by GRUB and the Linux kernel, but not by the UEFI firmware. It is stored within the filesystem metadata. 
You would have to do something like this:

mount /dev/sdb3 /mnt
mount -o subvol=/@/boot/grub2/x86_64-efi /dev/sdb3 /mnt/boot/x86_64-efi
mount /dev/sdb1 /mnt/boot/efi

and then:

- edit /mnt/etc/fstab to replace the filesystem UUID on every line referring to the btrfs filesystem
- edit /mnt/boot/grub/grub.cfg (or maybe /mnt/boot/efi/EFI/opensuse/grub.cfg, depending on where OpenSuSE places its actual GRUB configuration) to replace the filesystem UUID on the kernel boot options line
- edit /mnt/etc/default/grub to replace the filesystem UUID so that the old UUID does not accidentally come back when installing a kernel update or regenerating the GRUB configuration for some other reason
- maybe completely recreate your initramfs file

If it turns out the initramfs file needs to be recreated (if it completely relies on kernel boot parameters to find the root filesystem, it might not be necessary), you can do it like this at this point:

mount -t proc none /mnt/proc
mount -t sysfs none /mnt/sys
mount -o bind /dev /mnt/dev
chroot /mnt /bin/bash
mkinitrd   # or whatever is the appropriate command for OpenSuSE
exit

Finally, unmount everything you mounted.
If you plan to keep both disks in the same computer, you might have to change the partition UUID using sgdisk --partition-guid=1:R /dev/sdb (this command will generate a new random partition UUID for partition #1 on /dev/sdb). Once complete, you would need to create a new UEFI boot variable for the cloned disk. The command for that would be something like efibootmgr -c -d /dev/sdb -l \\EFI\\opensuse\\shim.efi -L opensuse-clone. Note the doubled backslashes, because the backslash is a special escape character for the shell; the ESP filesystem is FAT32, so the UEFI firmware uses MS-DOS/Windows style backslashes as path separators instead of Unix-style forward slashes. Helpfully, this command will automatically read the partition UUID from the specified drive, so you won't have to type it. (You may want to use efibootmgr -B -b XXXX where XXXX is the BootXXXX number of one of your past Linux installations, to clean up the obsolete UEFI boot variables from the system NVRAM.) But if you plan to move the disk to another computer, changing the partition UUID should not be necessary, but the UEFI boot variable should be created on the system that is the recipient of the cloned disk. You might use some Linux Live boot media to do that, but ensure that you boot from the media specifically in UEFI style, or you won't be able to access the UEFI boot variables. Alternatively, if you need the cloned disk to be bootable on any UEFI system with no significant preparations, you should set up a copy of the UEFI bootloader at \EFI\Boot\bootx64.efi, the fallback/removable media bootloader path on the ESP partition of the cloned disk. Unfortunately I don't have information of the exact set-up of the OpenSuSE UEFI bootloader at hand, so I cannot give you exact steps for that. 
To access the ESP on the cloned disk, you would have to first mount it, for example: mount /dev/sdb1 /mnt and then you could place the fallback bootloader at /mnt/EFI/BOOT/bootx64.efi, which now corresponds to the DOS-style pathname \EFI\BOOT\bootx64.efi used by the UEFI firmware.
Clonezilla and BTRFS, GRUB and boot
1,496,336,101,000
I want to get the read and write rates, service time, queue length and wait time of my disks. The OS is CentOS 6 and I use iostat. When I run this command:

iostat -x -d /dev/sda

the output is:

Device:  rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s avgrq-sz avgqu-sz await svctm %util
sda        4.04 272.41 21.63 58.30 7565.96 3037.79   132.66     0.06  0.74  0.66  5.26

which returns the totals since the system has been up. But I want the disk information right now; for that I have to run iostat -x -d /dev/sda 1 2. The output is:

Device:  rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s avgrq-sz avgqu-sz await svctm %util
sda        4.04 272.40 21.63 58.30 7565.86 3037.75   132.66     0.06  0.74  0.66  5.26

Device:  rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s avgrq-sz avgqu-sz await svctm %util
sda        0.00  11.00  0.00 12.00    0.00  184.00    15.33     0.32 26.75  3.08  3.70

where the second part shows what I want. Is there any way to get that information directly, without the 1 2? I searched the man page but didn't find anything. Or is there any other way to get that information instead of iostat? (And I can't install new packages on these systems -_-)
Why not pipe it through sed:

iostat -x -d /dev/sda 1 2 | sed '1,5d'

Device:  rrqm/s wrqm/s  r/s   w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
sda        0.00  11.00 0.00 12.00   0.00 184.00    15.33     0.32 26.75  3.08  3.70

Here sed '1,5d' drops the banner line, the blank line after it, and the first report (the since-boot totals), leaving only the interval report you want; adjust the line count if your iostat prints a different number of header lines.
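If even iostat is unavailable, the same counters can be read straight from /proc/diskstats (after the major/minor/name columns, fields 4 and 8 are reads and writes completed). A sketch, with the parsing kept in a separate helper; the function names are made up:

```shell
# Print "reads writes" completed so far for one disk, given
# /proc/diskstats-format input on stdin ($3 is the device name).
diskstats_counts() {
    awk -v d="$1" '$3 == d { print $4, $8 }'
}

# Delta over an interval -- roughly what iostat's second report shows.
# Defined here but not started automatically.
disk_delta() {   # usage: disk_delta sda 1
    dev=$1; secs=${2:-1}
    set -- $(diskstats_counts "$dev" < /proc/diskstats); r1=$1; w1=$2
    sleep "$secs"
    set -- $(diskstats_counts "$dev" < /proc/diskstats); r2=$1; w2=$2
    echo "reads/s: $(( (r2 - r1) / secs ))  writes/s: $(( (w2 - w1) / secs ))"
}
```

This needs no extra packages at all, since /proc/diskstats is provided by the kernel itself.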
iostat - print only current stats without the summary since boot
1,496,336,101,000
We have a Linux machine with the OS on sda and another disk for data:

df
Filesystem                 1K-blocks      Used Available Use% Mounted on
/dev/mapper/vg55-lvm_root   41932800  19731580  22201220  48% /
devtmpfs                    16372376         0  16372376   0% /dev
tmpfs                       16387592       108  16387484   1% /dev/shm
tmpfs                       16387592   1741416  14646176  11% /run
tmpfs                       16387592         0  16387592   0% /sys/fs/cgroup
/dev/mapper/vg55-lvm_var   105756672  54652856  51103816  52% /var
/dev/sdb                    72117368  72100984         0 100% /data
/dev/sda1                     508588    160024    348564  32% /boot

PV         VG   Fmt  Attr PSize   PFree
/dev/sda2  vg55 lvm2 a--  149.51g 92.00m

The problem is that /data is full, and we want to add another new disk from vCenter in order to extend the sdb disk to 200G. Please advise how to perform the steps.
It seems that /data is not managed with LVM, so you might simply add space to /dev/sdb via vCenter and then grow the filesystem in CentOS (xfs_growfs operates on a mounted XFS filesystem, so you pass the mount point):

xfs_growfs /data

LVM

If you want to have LVM for /data, that will take a bit longer. Add the disk to VMware, then make it show up in CentOS.

List the host bus numbers:

ls /sys/class/scsi_host/

For each host bus, scan the bus (where [hostX] is a name you got from the previous command):

echo "- - -" > /sys/class/scsi_host/[hostX]/scan

Check the names of your SCSI devices:

ls /sys/class/scsi_device/

Rescan the SCSI buses (names are of the form X:X:X:X):

echo 1 > /sys/class/scsi_device/X\:X\:X\:X/device/rescan

Then you can fdisk -l to see your disk.

First, you need to create a new Physical Volume on the new disk. I assume that the disk will be sdc:

pvcreate /dev/sdc

Then you can create a new VG or use the existing one; I assume you'll use the existing one:

vgextend vg55 /dev/sdc

You now need to create a Logical Volume:

lvcreate -L200G -n lvm_data vg55

and a filesystem on this volume:

mkfs.xfs /dev/mapper/vg55-lvm_data

You now have a 200GB volume that can be mounted. You might mount lvm_data somewhere, copy /data to the new volume, unmount /data and lvm_data, and then mount /dev/mapper/vg55-lvm_data /data.

To add /dev/sdb to the LVM as well (if needed), once you have your data moved elsewhere:

pvcreate /dev/sdb

Confirm you want to wipe the filesystem on /dev/sdb with y. Add /dev/sdb to the existing VG:

vgextend vg55 /dev/sdb

Then you can allocate that space to the LV you want (add -r to also grow the filesystem in the same step):

lvextend -L68G /dev/vg55/lvm_data
LVM + add another new disk in order to extend current sdb disk size
1,496,336,101,000
I've an image of a bootable 16GB SD card. I created the image with:

cat /dev/sdd | gzip >sdcard.img.gz

And I was happy because:

$ du -h sdcard.img.gz
482M    sdcard.img.gz

482MB instead of 16GB, yay! Here are the details of the (uncompressed) image:

$ du -h sdcard.img
15G     sdcard.img

$ partx -s sdcard.img
NR START     END SECTORS SIZE NAME UUID
 1 16384   81919   65536  32M      6e1be81b-01
 2 81920 3588095 3506176 1.7G      6e1be81b-02

However, now I need to write this image back to the SD card but I don't want to write 14GB of trailing zeros/junk! That'd take ages. How can I create an image without copying what's after the last partition? And when I've already created an image of the whole SD card, how can I truncate it to not include the useless junk? The point is, I don't care about the size the image takes in the backup, but I care about the size that's transferred back to the SD card, because copying to the SD card is slow and copying 14GB of useless data is pointless. So compressing the disk image or copying to a sparse-aware filesystem, as other answers on the Internet suggest, is not what I'm looking for.
Answering your first question: given you have an MBR there, I suggest you do something like dd'ing the first megabyte of the original drive (that contains the boot record and possibly the boot loader), then iterating over the partitions contained therein:

dev=/dev/sda
fdisk -l "$dev" |
  sed -ne '/^\//s,\(^[^ ]*\) .*,\1,p' |
  while read part
  do
      dd "if=$part" "of=$(basename "$part")"
  done

And after you record the first megabyte to the target drive, ask the kernel to re-read the partition table with partprobe or kpartx. After this you should be able to dd the corresponding image contents to your new partitions.
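For the second question (truncating an image you already made), the end sector of the last partition is all you need to compute the cut point. A sketch, assuming 512-byte sectors and that partx can read the image's partition table (it can for plain MBR/GPT images); the guard means it only acts if the image file actually exists:

```shell
# Bytes needed to keep everything up to and including the given END sector
# (512-byte sectors assumed, as in the partx output above).
truncate_size() {
    echo $(( ($1 + 1) * 512 ))
}

img=sdcard.img
if [ -e "$img" ]; then
    # partx: -s show, -g no headings, -o END print only the END column
    last_end=$(partx -sgo END "$img" | sort -n | tail -n 1)
    truncate -s "$(truncate_size "$last_end")" "$img"
fi
```

For the example image above (last END sector 3588095) this would shrink the file to about 1.8 GB, and writing it back then only transfers that much.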
disk image without free space
1,496,336,101,000
I have built a little NAS-like device on armbian, that uses external harddisks for its file serving purposes. The (hardware) interface only provides a reduced SATA-command set and overrides some APM/AAM/standby functions, but I would like to have a longer interval until standby. I am succesfully able to keep the drives awake by repeatedly issuing some SATA commands, but I have trouble implementing a certain logic. I would like to mimic, disk-standby after xx minutes of last activity. Is there any clever way or monitoring utility that would tell me the last time, when either SMBD, ZFS or ideally the harddrive itself performed some read/write activity? Something like the interval in ifplugd... Should I get to know "dtrace"?
Perhaps you can just poll the counters of the number of read/write operations on the block device, and do your action when they no longer change. For a block device like sda, the statistics are in /sys/block/sda/stat, and the columns are described in the kernel Documentation/iostats.txt. In particular columns 1 and 5 added together give the total completed i/o operations.
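That polling idea can be sketched as a small loop. Here hdparm -y (STANDBY IMMEDIATE) stands in for whatever spin-down action your hardware accepts, and the stat-file path is passed into the helper so the counting logic stays generic; the monitor loop is defined but deliberately not started:

```shell
# Sum of completed reads + writes (columns 1 and 5 of a block-device
# stat file, per Documentation/iostats.txt).
io_count() {
    awk '{ print $1 + $5 }' "$1"
}

# Check once a minute; after <limit> idle minutes, issue standby.
# Usage (not run here): monitor sda 20
monitor() {
    dev=$1; limit=$2; idle=0
    prev=$(io_count "/sys/block/$dev/stat")
    while sleep 60; do
        cur=$(io_count "/sys/block/$dev/stat")
        if [ "$cur" -eq "$prev" ]; then idle=$((idle + 1)); else idle=0; fi
        prev=$cur
        if [ "$idle" -ge "$limit" ]; then
            hdparm -y "/dev/$dev"   # spin down; swap in your own command
            idle=0
        fi
    done
}
```

Run it in the background (monitor sda 20 &) or from a simple systemd service; no dtrace needed.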
How can I detect time of last harddisk/samba or zfs activity for a ifplugd like action?
1,496,336,101,000
I have installed Ubuntu on my machine alongside Windows and allocated 55GB for it. After a while, I decided to raise the space allocated to Ubuntu to 60GB. After this enlargement (using gparted) I found that there is some inaccessible space in this partition and I can't find the problem. The df -h command shows me:

Filesystem     1K-blocks      Used Available Use% Mounted on
udev             3868944         8   3868936   1% /dev
tmpfs             776332      1240    775092   1% /run
/dev/sda5       60057700  55396248   1587616  98% /
none                   4         0         4   0% /sys/fs/cgroup
none                5120         0      5120   0% /run/lock
none             3881640       536   3881104   1% /run/shm
none              102400        52    102348   1% /run/user
/dev/sdb1      488384000 464704372  23679628  96% /media/salah/LaCie

sudo fdisk -l shows:

Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x26464dec

   Device Boot     Start       End    Blocks  Id System
/dev/sda1   *       2048    206847    102400   7 HPFS/NTFS/exFAT
/dev/sda2         206848 409806847 204800000   7 HPFS/NTFS/exFAT
/dev/sda3      409806848 829968383 210080768   7 HPFS/NTFS/exFAT
/dev/sda4      850939904 976771071  62915584   5 Extended
/dev/sda5      850941952 973240319  61149184  83 Linux
/dev/sda6      973242368 976771071   1764352  82 Linux swap / Solaris

The properties of the computer disk (the root filesystem) show me a usage chart [screenshot omitted]. As I can see, the total capacity is not accounted for by the used plus the free space. Is that problem a result of changing the partition capacity? Thanks
The missing (grey) space in your diagram is the 5% that is reserved by your filesystem. 5% of 61.5GB = 3.1 GB 56.7 GB + 1.7 GB + 3.1 GB = 61.5 GB There's a very good answer to a similar question, Reserved space for root on a filesystem - why?, which explains the reasoning behind this reservation. It also gives a command that will reduce the reservation percentage, but I would strongly advise you not to change it unless you understand and accept the problems this may give you later on.
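For reference, the reservation can be inspected (and, if you accept the risks described in the linked answer, changed) with tune2fs; the underlying arithmetic is just a percentage of the filesystem size:

```shell
# Reserved space in bytes for a given filesystem size and reservation
# percentage (integer arithmetic, so the result is truncated).
reserved_bytes() {           # usage: reserved_bytes <fs-size-bytes> <percent>
    echo $(( $1 * $2 / 100 ))
}

# e.g. a 61.5 GiB filesystem with the default 5% reservation:
reserved_bytes 66035122176 5

# To see (and, carefully, change) the real value -- device from the
# question, needs root:
# tune2fs -l /dev/sda5 | grep -i 'reserved block count'
# tune2fs -m 1 /dev/sda5   # would shrink it to 1%; read the caveats first
```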
unused space in my filesystem diskspace on ubuntu [closed]
1,496,336,101,000
Currently, my whole system is located at the end of my hdd. I'd like to move that data to the beginning and still have booting and other details working. dd seems to do exactly what I want (to copy my data exactly how it is placed), but I'm not sure about things like booting, grub configs and so on. Will I need to set these things later, or will dd do this job for me?
(Warning: this is very dangerous if you do not know what you are doing.) Yes, you can, but I do not recommend it (though I did it a few times, mostly to transfer a partition to another HDD). dd if=/dev/sdaA of=/dev/sdaB will transfer the data from sdaA to sdaB, but no checking will be done, the whole partition will be copied (even the empty space), you must be sure that sdaB is at least as large as sdaA (otherwise you overwrite the beginning of the following partition), and the system most likely won't boot: you'd have to boot from a rescue CD/USB, mount /dev/sdaB, modify the grub configuration and re-run grub-install. And optionally resize the filesystem to reclaim any remaining space. It is much better to create the filesystem on /dev/sdaB and copy the filesystem contents. You'd still have to re-run grub-install, but at least this is much safer.
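A sketch of that safer route (all device names are placeholders from the answer; the device-touching part only acts if the target partition actually exists):

```shell
# Copy the full contents of one directory tree into another,
# preserving permissions, ownership and timestamps.
copy_tree() {                # usage: copy_tree <srcdir> <dstdir>
    cp -a "$1/." "$2/"
}

if [ -b /dev/sdaB ]; then
    mkfs.ext4 /dev/sdaB            # fresh filesystem on the target
    mkdir -p /mnt/src /mnt/dst
    mount /dev/sdaA /mnt/src
    mount /dev/sdaB /mnt/dst
    copy_tree /mnt/src /mnt/dst    # or: rsync -aHAX /mnt/src/ /mnt/dst/
    umount /mnt/src /mnt/dst
fi
```

Unlike dd, this copies only the used data, works with a smaller target, and leaves you with a clean, correctly-sized filesystem; only the grub steps above remain.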
Can I use dd to move my system to another partition?
1,496,336,101,000
I want to do all my archiving and compression on the command line. I have a USB flash drive located at /dev/disk/by-label/SanDiskData (which links to /dev/sdc1). I thought I could simply do: $ cd /dev/disk/by-label $ sudo tar cfv ~/data.tar SanDiskData But then the size of /home/data.tar is only 10 kB. Where can I tar entire USB flash drives on the command line?
You are supposed to mount the drive before you can access its filesystem. Only once it is mounted can you perform filesystem operations (that is, access individual directories and files), such as running tar on the mountpoint, rather than on the device node (or some symlink to it).
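Concretely, the sequence looks like this (the mount point is arbitrary; the label is the one from your question). The tar step is wrapped in a tiny helper, and the device-touching part only runs if the drive is actually present:

```shell
# Archive the contents of a directory into a tar file.
archive_mounted() {          # usage: archive_mounted <dir> <tarfile>
    tar cf "$2" -C "$1" .    # -C: store paths relative to the mount point
}

if [ -e /dev/disk/by-label/SanDiskData ]; then
    mkdir -p /mnt/usb
    mount /dev/disk/by-label/SanDiskData /mnt/usb
    archive_mounted /mnt/usb "$HOME/data.tar"
    umount /mnt/usb
fi
```

The resulting data.tar then contains the files on the drive, not a 10 kB archive of the /dev symlink.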
Where to tar entire USB flash drives?
1,496,336,101,000
I’m trying to re-partition from an Arch Linux install disc, because my current OS is corrupted. When running fdisk -l this is what it looks like:

/dev/sda1 *  start=2048     end=1026047   blocks=512000    id=83 system=linux
/dev/sda2    start=10264048 end=625141759 blocks=312057856 id=8e system=linux LVM

I need to wipe everything clean, partition sda1 as 15GB for the OS install, and then give sda2 the rest. What is the cleanest and least time-consuming way to accomplish this? One option I've found is this type of command:

dd if=/dev/zero of=/dev/sda

However, I'm not entirely sure if I should do this to both partitions, or one, or the other. I'm not sure it would solve my problem, and I know it is very time-consuming. Can anyone provide some guidance on how to handle this situation?
Wouldn't you just do the following:

1. Boot the Arch Linux install disc.
2. Run fdisk /dev/sda.

linux-1reo:~ # fdisk /dev/sda

The number of cylinders for this disk is set to 9729.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help):

3. Delete the existing partitions using the d option.
4. Then with all the partitions deleted, create a new one using the n option. This will be the 1st partition, /dev/sda1.

Command (m for help): n
First cylinder (7921-9729, default 7921):
Using default value 7921
Last cylinder or +size or +sizeM or +sizeK (7921-9729, default 9729): +15G

5. Repeat step 4, but this time just go with the default choices.
6. Next double-check that the partition types are set correctly. They both should be "Linux". For example:

Command (m for help): p

Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        1402    11261533+   7  HPFS/NTFS
/dev/sda2            1403        1415      104422+  83  Linux
/dev/sda3            1416        1546     1052257+  82  Linux swap / Solaris
/dev/sda4            1547        9729    65729947+   5  Extended
/dev/sda5            1547        7920    51199123+  8e  Linux LVM
/dev/sda6            7921        8045     1004031   83  Linux

7. Once everything is set, write the changes to disk using the w option.

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.

8. The final steps will include formatting the partitions using mkfs.ext4 /dev/sda1 and mkfs.ext4 /dev/sda2.

References
Manually Partitioning Your Hard Drive with fdisk
Repartition multiple disks and reallocate storage
1,496,336,101,000
I have been wondering how all these small nas boxes that run linux can share over network and usb. The network part is totally under control, but I am completely at a loss on how I could hook up a computer to my server through USB cable, and get a share. Is this done with some specific hardware or is this done through software ?
Most of the NAS boxes that I've encountered make use of Samba, sharing the USB-mounted disks out as Samba shares. Rules like this can be put in your /etc/fstab file:

$ blkid
/dev/sda2: LABEL="OS" UUID="DAD9-00EF" TYPE="vfat"

This can be adapted into an fstab line (note the filesystem type matches what blkid reported):

/dev/sda2    /export/somedir    vfat    defaults    1 2

Once this USB drive is mounted at boot-up, Samba can be used to share out /export/somedir.

# /etc/samba/smb.conf
[xbox_videos]
   comment = Videos for Xbox
   path = /export/somedir
   browseable = yes
;  available = yes
   guest ok = no
;  read only = yes
   public = yes
   inherit permissions = yes
   writeable = yes
   hosts allow = 192.168.0. 192.168.1. localhost
Create my own disk server
1,496,336,101,000
I am running Crunchbang, a Debian variant with OpenBox WM. I have deleted a large number of files via browsing in thunar file manager and pressing del key. They disappear from view. I then go to ~/.local/share/Trash/files/ and delete them there too. The filesystem still doesn't report the freed space though. df -h Filesystem Size Used Avail Use% Mounted on /dev/sdb5 61G 57G 371M 100% /
I once had a similar issue trying to track down what was taking up space on my root partition but not being reported by Baobab, a disk usage analyzer. In the end I found the files in my the trash folder of the root user, /root/.local/share/Trash. The reason Baobab and other utilities wouldn't show these files is because I was running them as a non-root user and they didn't have the necessary permissions to read /root folder. A quick su allowed me to enter the necessary directory and rm * the files away.
Deleted files from home, deleted them from .local/share/Trash/files, System doesn't report back free space
1,496,336,101,000
I am currently using a simple way to back up the drive dd if=/dev/sda of=/dev/sdb. However, before each operation, I have to check fdisk -l to see if sda and sdb have been swapped during boot. This is quite inconvenient and error prone. Is using symbolic identifiers from /dev/disk/by-id/ instead of sda and sdb completely safe and bulletproof so much that it doesn't require fdisk -l or lsblk checking? It is clear that if a disk is confused and if= is replaced with of=, the consequences are catastrophic.
Note the files in /dev/disk/by-id are actually just links to a device file.

$ ls -l /dev/disk/by-id/
total 0
lrwxrwxrwx 1 root root  9 Aug 1 18:31 ata-Hitachi_HDS723030ALA640_MK0331YHG99T1A -> ../../sdb
lrwxrwxrwx 1 root root 10 Aug 1 18:31 ata-Hitachi_HDS723030ALA640_MK0331YHG99T1A-part1 -> ../../sdb1
lrwxrwxrwx 1 root root  9 Aug 1 18:31 ata-TOSHIBA_DT01ACA300_43NNVJMYS -> ../../sda
lrwxrwxrwx 1 root root 10 Aug 1 18:31 ata-TOSHIBA_DT01ACA300_43NNVJMYS-part1 -> ../../sda1

This is handled by udev and meant precisely for the issue you are concerned about: having a more absolute way to reference a physical disk. Some more documentation here on how it works.
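So a backup script can refuse to run unless both links resolve. A sketch using the example IDs from the listing above (status=progress needs a reasonably recent coreutils dd):

```shell
# Stable names: these never swap, unlike sda/sdb.
SRC=/dev/disk/by-id/ata-TOSHIBA_DT01ACA300_43NNVJMYS
DST=/dev/disk/by-id/ata-Hitachi_HDS723030ALA640_MK0331YHG99T1A

resolve() { readlink -f "$1"; }    # canonicalize a symlink

if [ -e "$SRC" ] && [ -e "$DST" ]; then
    echo "copying $(resolve "$SRC") -> $(resolve "$DST")"
    dd if="$SRC" of="$DST" bs=1M status=progress
fi
```

If either disk is unplugged, the by-id link simply does not exist and the script does nothing, which removes the swapped-of=/if= failure mode entirely.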
Is it safe to use /dev/disk/by-id/ instead of /dev/sda?
1,496,336,101,000
I reformatted a USB disk that I used to install Linux Mint 21 XFCE. After reformatting, the disk does not appear in Nemo or Nautilus. It does appear in the disks manager software, in lsusb, and in sudo fdisk -l. I am not completely sure if the above confirms that the disk is mounted, but the disk manager lists it as /dev/sdb, and when I press the eject button in the disk manager some of the disk information blanks out. So I'm pretty sure it mounts. I don't know how to access the drive via command line, but even if there is a way to do so, I vastly prefer to access it via the GUI. .
lsusb lists the physical devices connected to your USB subsystem; storage devices are usually associated with a /dev/sd* device node. For standard use, Linux needs the physical device to be organized in some standard way: a filesystem (ext2/3/4, zfs, btrfs…). Linux will only mount filesystems, and these are commonly associated with the special files /dev/sdXN, for the filesystems found on device sdX, with N a number starting from 1. You therefore need to create at least one filesystem on your sdb device. This can be done with the mkfs command-line utility or via your graphical disk manager… but since you do not name it, I cannot help more. Your comment is however explicit: you managed to find your way by clicking on the + sign, congrats!
Linux Mint File Manager Not Recognizing USB Drive After Formatting
1,496,336,101,000
local-fs.target seems only to check when all mount blocks are ready. How can I ensure all disks are ready if there are many disks that didn't mount during/after boot? Which target can be used to confirm all disks ( mount and didn't mount ) are ready to use/mount? boot-complete.target basic.target ... ?
There is no such target, because there is no way for systemd to know when it has indeed seen "all" disks. The difference is that all mounts that local-fs.target waits for are explicitly listed in /etc/fstab or elsewhere, while disks arrive out of nowhere as SATA and SAS and USB ports get enumerated; there's no definite list of them. However, if you know exactly which disks you need to wait for, you can create your own target that depends on the .device units representing those disks. By default, both the basic /dev/sdX name and its various symlinks under /dev/disk/by-id will have corresponding .device units, so you can list them in After= for your custom target. (Or even better, if there's a task that's done for each disk separately, then instead of having it wait for all disks you could create a templated service with instances for each disk – similar to how systemd-fsck@ runs separately for each device without waiting for "all of them".)
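As a sketch, a custom target for two specific disks might look like this (the unit name and device names are made up; for stable /dev/disk/by-id paths, get the escaped unit name with systemd-escape -p <path> and append ".device"):

```ini
# /etc/systemd/system/my-disks.target (hypothetical)
[Unit]
Description=All expected disks are present
Requires=dev-sda.device dev-sdb.device
After=dev-sda.device dev-sdb.device
```

Services that need every disk to exist can then declare Requires=my-disks.target and After=my-disks.target instead of guessing at device readiness.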
Which systemd target makes sure that all disks are ready?
1,496,336,101,000
The man page of sync can be found on die.net. I wonder if it only synchronizes the disk of my current working directory (cwd) or all disks? The man page does not say anything about it.
The sync utility is mostly a wrapper around the sync system call; the manual page for the latter says sync() causes all pending modifications to filesystem metadata and cached file data to be written to the underlying filesystems. All pending changes are written, across all devices. The manual page you found is somewhat outdated; sync also supports explicitly writing a specific file or file system, and you can see that in more recent versions of the manual page. So the answer to your question is “all disks” by default, but it can be finer-grained if you specify additional parameters.
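The finer-grained modes look like this in a recent coreutils (paths here are just examples):

```shell
sync                          # flush everything, on all devices
sync -f /tmp                  # flush only the filesystem containing /tmp
touch /tmp/example && sync /tmp/example   # flush just this one file
```

sync -f maps to the syncfs() system call and sync FILE to fsync(), so you can avoid stalling on unrelated slow devices when you only care about one filesystem.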
Which disks does "sync" synchronize?
1,496,336,101,000
in RHEL/CentOS 7.9 anyway, when running gnome-disks which is under the Applications-Utilities-Disks menu, for a recognized SSD it offers the enabling of write-cache. I would like to know what technically is happening when turning this on, that wasn't already happening. I was under the impression, whether it was an SSD or a conventional spinning hard disk, that linux inherently does disk caching. This impression mainly comes from reading that www.linuxatemyram.com page years ago.
This controls the cache setting on the disk so it's not related to Linux or RAM, this controls how the disk itself caches the data in its internal memory before writing them to the disk permanent storage. GNOME Disks (or UDisks to be correct) just sends an ATA command to the disk telling it to enable/disable the feature called volatile write cache. (Btw. in CLI hdparm -W <0/1> does the same thing.) It's similar to any other write cache -- if you enable this, the disk will tell the OS the data have been written after saving it to the cache and the disk will write them to disk later (that's where the warning about data loss comes from).
gnome-disks write caching
1,613,345,574,000
My root partition is reaching its limits, which is annoying since most package managers install packages somewhere on my root partition. I have to fix this in some way but I'm not really sure what the best approach is. I could reformat some partitions into a bigger root partition. I think I have two options for this:

1. delete nvme0n1p2 ([SWAP]) and merge it with nvme0n1p3 (/)
2. split nvme0n1p4

However, merging nvme0n1p2 and nvme0n1p3 would mean losing my swap partition (which I do occasionally use if my RAM runs out). I don't use it often, though, so I could split sdb (an old SSD) and use part of it for swap. Splitting nvme0n1p4 would require copying a lot of data to and from sda (an old and slow HDD). While I was writing this I also wondered whether I could move pacman's default install location (I'm running Manjaro) to nvme0n1p4, which would probably solve a lot of space problems as well. However, I'm not experienced enough to see what potential problems this would cause to my system. I know I didn't ask a specific question yet, so I guess my questions are:

1. What is the most durable solution to my space problem that is least likely to break my system?
2. Did I miss a good alternative solution to my problem?
$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda           8:0    0 931,5G  0 disk
└─sda1        8:1    0 931,5G  0 part /mnt/data
sdb           8:16   0 111,8G  0 disk
└─sdb1        8:17   0 111,8G  0 part /opt
nvme0n1     259:0    0 931,5G  0 disk
├─nvme0n1p1 259:1    0   300M  0 part /boot/efi
├─nvme0n1p2 259:2    0    16G  0 part [SWAP]
├─nvme0n1p3 259:3    0    32G  0 part /
└─nvme0n1p4 259:4    0 883,2G  0 part /mnt/nvme0n1p4

$ df
Filesystem      Size  Used Avail Use% Mounted on
dev              16G     0   16G   0% /dev
run              16G  1,7M   16G   1% /run
/dev/nvme0n1p3   32G   30G  503M  99% /
tmpfs            16G  498M   16G   4% /dev/shm
tmpfs           4,0M     0  4,0M   0% /sys/fs/cgroup
tmpfs            16G   53M   16G   1% /tmp
/dev/sdb1       110G   26G   79G  25% /opt
/dev/nvme0n1p4  869G  419G  406G  51% /mnt/nvme0n1p4
/dev/nvme0n1p1  300M  312K  300M   1% /boot/efi
/dev/sda1       916G  113G  757G  13% /mnt/data
tmpfs           3,2G   60K  3,2G   1% /run/user/1000
One simple solution would be to remove your swap by "merging" your / and swap partitions (more precisely: removing nvme0n1p2 and extending nvme0n1p3), then finally creating a swap file there. Now, if you want to create a swap file using only a GUI, you can use gnome-disks, step by step:

1. Click on gnome-disks' top-left drive icon.
2. Create a New Disk Image...
3. Set the size you want for your swap file, its name and where you want to put it, then click on Attach new Image...
4. gnome-disks should now take you right to the newly created swap file section, which should be identified as xx GB Loop Device. Then, just click on the "Wheels" button.
5. Go to Format Partition...
6. Give your swap partition a label (e.g. "swap0").
7. Check the Erase switch (makes the swap contiguous on the drive, for better performance).
8. Choose Other as the partition type.
9. Click on Next.
10. Choose Linux Swap Partition.
11. Then Next again.
12. And Format, which will take some time.
13. Now, click on the "Wheels" button again.
14. Go this time to Edit mount options...
15. Disable the User Session Defaults switch.
16. Make sure that Mount at system startup is checked, Show in user interface is unchecked, and that Identify As is set to /dev/disk/by-uuid/xxx, just to be sure. Then press OK.
17. Finally, click on the "Play" button to mount it, and you're done!

Using a fully dedicated swap partition has been less and less necessary for a long time now. Also, I don't know how many GBs of RAM and VRAM you have, but 16GB of swap is really huge. If you're really not using that much swap, you should decrease that amount. Now here's a trick: if you still want that much swap space but only use it from time to time, you could make a small swap file first of, say, 2GB to 4GB, then create the second one on / when you really need it, or simply put it somewhere else (e.g. your sda1, mounted at /mnt/data) with much more free space, to make it available at all times. A few things to consider:

A.
Please note that you really should not put swap on an SSD or any flash-based drive, since it will noticeably decrease its lifespan: swap is "fake RAM" after all (RAM is made to withstand far more I/O than any storage drive), and a smaller SSD (typically under 400GB or even under 1TB) has a lower lifespan than its bigger counterparts due to smaller over-provisioning. I mean, look at how many GBs you write per week: simply web browsing, streaming and watching YouTube videos writes a lot (caching). It's very easy to reach 250GB of writes a week, even for a granny. Now add the fact that you're likely a power user (well, you are a data science student, so that's one of the worst cases) and that modern smaller SSDs are only granted a mere 100TB of writes before the warranty is void (yes, void; do check the products' datasheets), and you'll quickly reach those 100TB even before you reach the years-based warranty. So do consider putting your swap on either spinning disks or throwaway flash storage (USB keys, memory cards, etc.), as long as it's not USB 1.x-based (or maybe even 2.x, if you really need it to stay fast), since it's mostly not the raw GB/s that makes flash-based drives fast, but the cells' access times.

B. Depending on which drives you put your swaps on and how you split them (e.g. one swap file per drive, etc.), you can even end up with a faster swap than you ever had.

C. You can even hibernate with a swap file: https://askubuntu.com/a/892410

D. gnome-disks is a very powerful yet straightforward tool, which can check your drives' status (temperature, lifespan, bad sectors, etc.), set some drive settings (caching, standby times, etc.), and also cleanly unmount, eject and then power off your devices (drives, USB keys, etc.) better than any other GUI tool.
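For reference, the CLI equivalent of the GUI swap-file steps above, wrapped in a function that is defined but not run here. The path and size are examples; it needs root, fallocate may not be supported on every filesystem (dd from /dev/zero is the fallback), and on btrfs or other copy-on-write filesystems a swap file needs extra care:

```shell
# Create, register and enable a swap file (run as root).
make_swapfile() {            # usage: make_swapfile /swapfile 4G
    fallocate -l "$2" "$1" || return 1
    chmod 600 "$1"           # swap files must not be world-readable
    mkswap "$1"
    swapon "$1"
    printf '%s none swap defaults 0 0\n' "$1" >> /etc/fstab
}

# make_swapfile /swapfile 4G
```

Afterwards, free or swapon --show should list the new swap space.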
root partition runs out of space
1,613,345,574,000
Recently I had an fsck error on boot, where I had to run fsck manually. I did, and the problem disappeared. After that I ran smartctl -a /dev/sda and saw the following attribute:

ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
201 Lifetime_Remaining%     0x0023   100   100   005    Pre-fail  Always       -       6

What does it mean? Is it something I should worry about? I searched other similar questions but didn't find an answer. My operating system is Ubuntu 20.04. Here is the full smartctl output:

smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.4.0-58-generic] (local build)
Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Marvell based SanDisk SSDs
Device Model:     SanDisk SD8SN8U-512G-1006
Serial Number:    174668800539
LU WWN Device Id: 5 001b44 8b6963006
Firmware Version: X4120006
User Capacity:    512.110.190.592 bytes [512 GB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    Solid State Device
Form Factor:      < 1.8 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-2 T13/2015-D revision 3
SATA Version is:  SATA 3.2, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Sat Jan 2 19:29:46 2021 EET
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x82) Offline data collection activity
                                        was completed without error.
                                        Auto Offline Data Collection: Enabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever
                                        been run.
Total time to complete Offline
data collection:                 ( 120) seconds.
Offline data collection
capabilities:                    (0x53) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new command.
                                        No Offline surface scan supported.
                                        Self-test supported.
                                        No Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   2) minutes.
Extended self-test routine
recommended polling time:        (  43) minutes.
SCT capabilities:              (0x003d) SCT Status supported.
                                        SCT Error Recovery Control supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 32
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   100   100   050    Pre-fail  Always       -       19828
  5 Reallocated_Sector_Ct   0x0033   100   100   005    Pre-fail  Always       -       0
  9 Power_On_Hours          0x0032   100   100   ---    Old_age   Always       -       6477
 12 Power_Cycle_Count       0x0032   100   100   ---    Old_age   Always       -       2401
171 Program_Fail_Count      0x0032   100   100   ---    Old_age   Always       -       0
172 Erase_Fail_Count        0x0032   100   100   ---    Old_age   Always       -       0
173 Avg_Write/Erase_Count   0x0032   100   100   005    Old_age   Always       -       6
174 Unexpect_Power_Loss_Ct  0x0032   100   100   ---    Old_age   Always       -       145
176 Erase_Fail_Count_Chip   0x0022   100   100   ---    Old_age   Always       -       4352
181 Program_Fail_Cnt_Total  0x0022   100   100   ---    Old_age   Always       -       0
183 Runtime_Bad_Block       0x0032   100   100   ---    Old_age   Always       -       0
184 End-to-End_Error        0x003b   100   100   097    Pre-fail  Always       -       0
187 Reported_Uncorrect      0x0032   100   100   ---    Old_age   Always       -       0
188 Command_Timeout         0x0032   100   100   ---    Old_age   Always       -       0
194 Temperature_Celsius     0x0022   032   061   ---    Old_age   Always       -       32 (Min/Max 17/61)
198 Offline_Uncorrectable   0x0030   100   100   ---    Old_age   Offline      -       0
199 SATA_CRC_Error          0x0032   100   100   ---    Old_age   Always       -       0
201 Lifetime_Remaining%     0x0023   100   100   005    Pre-fail  Always       -       6
230 Perc_Write/Erase_Count  0x0032   100   100   ---    Old_age   Always       -       578 578 85
241 Total_Writes_GiB        0x0032   100   100   ---    Old_age   Always       -       430151
242 Total_Reads_GiB         0x0032   100   100   ---    Old_age   Always       -       344983
243 Unknown_Attribute       0x0032   100   100   ---    Old_age   Always       -       743567
244 Thermal_Throttle        0x0032   000   100   ---    Old_age   Always       -       0
249 TLC_NAND_GB_Writes      0x0032   100   100   ---    Old_age   Always       -       3155

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
That says your drive is not worn out at all and has 100% of its life ahead of it: the normalized VALUE is 100, well above the THRESH of 005. When the normalized value drops to 5, that's when it's time to panic.

As for 194 Temperature_Celsius: the current value of 32 °C looks fine, but the lifetime maximum of 61 °C is a bit high - make sure your case has sufficient ventilation.
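If you want to keep an eye on this attribute over time without reading the whole report, you can pull just the normalized value and threshold out with awk. A sketch, run here against a captured sample line so it works without a real drive:

```shell
# Sample line from the report above; on a live system feed the pipeline from:
#   sudo smartctl -A /dev/sda
sample='201 Lifetime_Remaining%     0x0023   100   100   005    Pre-fail  Always       -       6'
echo "$sample" | awk '$1 == 201 { print "value=" $4, "threshold=" $6 }'
# prints: value=100 threshold=005
```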
What is the meaning of "Lifetime_Remaining%" in smartctl?
1,613,345,574,000
We are building a data archiving system where we want to have the option to store redundant copies of data on different storage types: tape/disk/cloud, and also with the option of using different archive formats: tar, zip etc Many of the files are fairly large (50GB+), and once data is archived it is never modified. We have found that tape is ideal for this use case. We store the block offsets for each archived file in our database, and once a tape is almost full we "finalize" it by writing an index of all the file block offsets to the end (so it is self describing). I would like to know if it would be possible to do the same thing using a hard disk by using it unformatted (i.e. without any file system), and reading/writing to it as a block device. We would append archives to the disk starting from the first block and write one archive immediately after the other until the disk is almost full. In this way we would avoid any file fragmentation issues, make full use of the disk capacity, increase read/write speeds, and it would be much easier to do data recovery if ever needed. Tapes allow us to easily seek to the end of the written data, whereas with disk I guess we would have to record the number of the last block that was used, and ensure we started writing the next archive starting from the next block. I would like to know how we could calculate that in a rigorous way, where we could be sure we would not be overwriting any previously written data. Using dd, I think this would be fairly straightforward using the seek option. However we would like to use the archiving tools (tar, zip etc) to write the data directly to disk (like we do with tape), to avoid the time taken to copy the files into an intermediate archive file (on our staging disk), and then using dd to write this file to the archive disk. However tar, zip don't have any option to seek like dd does. I suppose they would just open the block device and start writing from the beginning. 
I would like to know if anybody has done anything like this before, or has any other thoughts on the idea - especially potential pitfalls to be aware of. Also, should I fill the drive with zeros before writing anything to it?
'tar' has the option to output to stdout which can then be piped into another program (e.g. 'dd') to go to the desired device. Many years ago, I worked for a company that manufactured tape and optical backup mechanisms. Once the driver was written for the optical device, it looked just like a tape device to the rest of the backup software.
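The tar-to-stdout-to-dd approach from the question can be sketched like this. An ordinary image file stands in for the raw disk so the example is safe to run, and the offset value is hypothetical (in practice it would come from your database of block offsets):

```shell
img=/tmp/fake-disk.img
dd if=/dev/zero of="$img" bs=512 count=4096 status=none   # 2 MiB stand-in "disk"
mkdir -p /tmp/payload
echo hello > /tmp/payload/a.txt

offset=128   # next free 512-byte block, tracked externally
# conv=notrunc is essential: without it, dd would truncate the "device"
tar -C /tmp -cf - payload | dd of="$img" bs=512 seek="$offset" conv=notrunc status=none

# Reading back: seek to the same offset and hand the stream to tar
dd if="$img" bs=512 skip="$offset" status=none | tar -tf -
```

tar tolerates the trailing zeros after the archive (it treats them as end-of-archive blocks), so reading from an offset to end-of-device works without knowing the archive's exact length.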
Use disk like tape
1,613,345,574,000
We need to create an XFS filesystem on a Kafka disk. The special thing about this disk is its size: 20 TB in our case. I'm not sure about the following mkfs invocation, and I'd like advice on whether it is good enough to create an XFS filesystem on such a huge disk (a Kafka machine):

DISK=sdb
mkfs.xfs -L kafka /dev/$DISK -f

Kafka best practice - filesystem selection:

Kafka uses regular files on disk, and as such it has no hard dependency on a specific file system. We recommend EXT4 or XFS. Recent improvements to the XFS file system have shown it to have the better performance characteristics for Kafka's workload without any compromise in stability.

Note: Do not use mounted shared drives or any network file systems. In our experience Kafka is known to have index failures on such file systems. Kafka uses memory-mapped files to store the offset index, which has known issues on network file systems.
From the documentation you cited: The XFS filesystem [...] does not require any change in the default settings, either at filesystem creation time or at mount. Source: https://kafka.apache.org/documentation/#xfs So it should just work. Also there is nothing special anymore about a 20TB device size. Consider adding a partition table and then use /dev/sdb1 instead of /dev/sdb.
What is the right mkfs cli in order to create xfs file-system on huge disk
1,613,345,574,000
We have a Linux machine with two disks - sda and sdb (sda holds the OS):

lsblk -d -e 11,1
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
fd0    2:0   1   4K  0 disk
sda    8:0   0 150G  0 disk
sdb    8:16  0  70G  0 disk /GHT

When we run sar -d, we get:

12:00:01 AM DEV       tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz  await  svctm  %util
12:10:01 AM dev8-16  0.00      0.00      0.00      0.00      0.00   0.00   0.00   0.00
12:10:01 AM dev8-0   0.12      0.07      1.93     16.87      0.00   0.48   0.28   0.00
12:10:01 AM dev253-0 0.01      0.07      0.04      8.00      0.00   0.38   0.37   0.00
12:10:01 AM dev253-1 0.00      0.00      0.00      0.00      0.00   0.00   0.00   0.00
12:10:01 AM dev253-2 0.12      0.00      1.89     15.53      0.00   0.51   0.23   0.00
12:20:01 AM dev8-16  0.00      0.00      0.00      0.00      0.00   0.00   0.00   0.00

The DEV column does not show the disks as sda or sdb, so how do we know which is which? Is it possible to make sar display the real disk names, sda and sdb?
From man sar:

-d     Report activity for each block device. <...> Device names may also be pretty-printed if option -p is used.

-p     Pretty-print device names. Use this option in conjunction with option -d. By default names are printed as dev m-n where m and n are the major and minor numbers for the device. Use of this option displays the names of the devices as they (should) appear in /dev. Name mappings are controlled by /etc/sysconfig/sysstat.ioconf.

sar -p -d 1 1
07:16:35 PM DEV                  tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz  await  svctm  %util
07:16:36 PM sda                13.00      0.00    120.00      9.23      0.04   3.08   1.38   1.80
07:16:36 PM vg_livecd-lv_root  15.00      0.00    120.00      8.00      0.05   3.07   1.27   1.90
07:16:36 PM vg_livecd-lv_swap   0.00      0.00      0.00      0.00      0.00   0.00   0.00   0.00
07:16:36 PM vg_livecd-lv_home   0.00      0.00      0.00      0.00      0.00   0.00   0.00   0.00
sar: why doesn't sar display the real disks under the DEV column?
1,613,345,574,000
I have Ubuntu 18.04 server with 500G SSD disk which has LVM and dm-crypt on it. I recently noticed that the number of bytes written to disk (as reported by vmstat -d or iostat) is unrealistically high. After monitoring the system I/O I found that the giant spike in disk writes happens once a week when fstrim.service runs: From the logs, it looks like every week when fstrim runs it reports that basically all free space was written to the disk, even though the system is almost at idle and has just under 10Gb written in a week at most. Is this an expected behavior? I always thought that only new free blocks since last fstrim run should be discarded, but not the entire free space each time. This puts absurdly high wear on SSD (judging by media wearout value as reported by disk). Or is it somehow related to the presence of dm-crypt? The disk does support TRIM: hdparm -I /dev/sda | grep TRIM * Data Set Management TRIM supported (limit 8 blocks) * Deterministic read ZEROs after TRIM And discard pass through is also enabled in dm-crypt: dmsetup table silverbox--vg-swap: 0 19529728 linear 253:0 917964800 silverbox--vg-root: 0 917962752 linear 253:0 2048 sda3_crypt: 0 937496576 crypt aes-xts-plain64 00...0 0 8:3 4096 1 allow_discards
The ATA TRIM command only changes metadata in the disk drive; it definitely does not do any low-level writes to the memory cells. If the disk supports deterministic TRIM, a trimmed block is returned as zeros, but this is done by the controller based on the new metadata status, not because the cells were actually erased at the time of the TRIM command.

TRIM commands are unfortunately counted as writes in all kernel statistics I know about. So iostat, sar, and /sys/fs/ext4/*/lifetime_write_kbytes all report the sum of true writes and trims. See also this question on Super User.

fstrim, when run once a week, releases the whole unused disk space again each time. If you have, for example, a 1 TB disk which is 50% used, the default weekly fstrim run appears in the statistics as 500 GB written per week, or about 70 GB/day.

Bottom line: the write statistics are easily dominated by trims counted as writes, especially for moderately filled filesystems.
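To distinguish trims from true writes, kernels since 4.18 expose separate discard counters in /proc/diskstats. A hedged sketch using a fabricated sample line so the column positions are visible (column 10 is sectors written, column 17 is sectors discarded):

```shell
sample='   8       0 sda 1000 10 80000 500 2000 20 160000 900 0 1200 1500 300 5 976000 100'
echo "$sample" | awk '{ printf "dev=%s written=%s discarded=%s (sectors)\n", $3, $10, $17 }'
# prints: dev=sda written=160000 discarded=976000 (sectors)
# On a live system: awk '$3 == "sda" { print $10, $17 }' /proc/diskstats
```

If the discarded-sectors column dwarfs the written-sectors column, the "writes" you are seeing are mostly fstrim at work.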
Does `fstrim` cause writes to all free blocks every time?
1,613,345,574,000
Inside vmware ESXI I have a CentOS virtual machine, and I resized the virtual disk from around 30 GB to 120 GB using vmware "edit" vm menu. Then I booted using a gparted bootable ISO and resized the partition from 30GB to maximum (120 GB) But now when I boot I still see the main partition (/root) as around 25 GB. From what I can tell (below code) the disk is seen as ~120GB but not the partitions ? What commands to run in order to safely expand the partition ? I think that is /root that needs to be expanded. [root@localhost ~]# df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/centos-root 26G 1.6G 25G 6% / devtmpfs 3.9G 0 3.9G 0% /dev tmpfs 3.9G 0 3.9G 0% /dev/shm tmpfs 3.9G 8.9M 3.9G 1% /run tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup /dev/sda1 1014M 145M 870M 15% /boot tmpfs 783M 0 783M 0% /run/user/0 [root@localhost ~]# fdisk -l Disk /dev/sda: 128.8 GB, 128849018880 bytes, 251658240 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk label type: dos Disk identifier: 0x000d5212 Device Boot Start End Blocks Id System /dev/sda1 * 2048 2099199 1048576 83 Linux /dev/sda2 2099200 251658239 124779520 8e Linux LVM Disk /dev/mapper/centos-root: 27.9 GB, 27913093120 bytes, 54517760 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk /dev/mapper/centos-swap: 3221 MB, 3221225472 bytes, 6291456 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes EDIT: [root@localhost ~]# lsblk -f NAME FSTYPE LABEL UUID MOUNTPOINT sda ├─sda1 xfs 2499226d-4c93-4ef1-b4ab-1055f8bab7cd /boot └─sda2 LVM2_member 49Sk0d-ClAm-FGza-9HrJ-hYGP-V1Zn-UlrgaO ├─centos-root xfs f78ccb25-5dcc-49fc-81b8-5c33e6b5e9ef / └─centos-swap swap 100d2a33-ab8d-4dd8-8e6c-19d51ad53a40 [SWAP] sr0 [root@localhost ~]# pvs 
PV VG Fmt Attr PSize PFree /dev/sda2 centos lvm2 a-- <119.00g 90.00g EDIT 2: [root@localhost ~]# vgdisplay --- Volume group --- VG Name centos System ID Format lvm2 Metadata Areas 1 Metadata Sequence No 4 VG Access read/write VG Status resizable MAX LV 0 Cur LV 2 Open LV 2 Max PV 0 Cur PV 1 Act PV 1 VG Size <119.00 GiB PE Size 4.00 MiB Total PE 30463 Alloc PE / Size 7423 / <29.00 GiB Free PE / Size 23040 / 90.00 GiB VG UUID dpAjcO-xazq-6sJZ-PA23-N0a0-Zcz3-iRVloi [root@localhost ~]# lvdisplay --- Logical volume --- LV Path /dev/centos/swap LV Name swap VG Name centos LV UUID ZuJyt6-YDaV-1kw7-Zjzl-4gPX-vkzH-dfmV7y LV Write Access read/write LV Creation host, time localhost, 2019-04-01 19:44:02 -0400 LV Status available # open 2 LV Size 3.00 GiB Current LE 768 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:1 --- Logical volume --- LV Path /dev/centos/root LV Name root VG Name centos LV UUID KkcOnV-OQvj-lpmc-5Eiz-2hfd-6mcV-30zWvW LV Write Access read/write LV Creation host, time localhost, 2019-04-01 19:44:03 -0400 LV Status available # open 1 LV Size <26.00 GiB Current LE 6655 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:0
Edit: It looks like you've already created the partition and physical volume and added the space to the volume group, so I've removed the parts about adding a new partition to the disk and creating a new physical volume.

To expand your root logical volume:

lvextend -l +100%FREE /dev/centos/root

To grow the filesystem into the new space (your root filesystem is XFS):

xfs_growfs /dev/centos/root

Run df -h to see the new size with the extra space, and lvdisplay | sed -n '/root/,$p' to see the increased size of the root logical volume itself.
How to enlarge this partition?
1,613,345,574,000
I want to find the head movement using the first-come-first-served (FCFS) disk scheduling algorithm:

queue = 98, 183, 37, 122, 14, 124, 65, 67
head starts at 53

I am confused because two books give different answers for this same question: one gives a head movement of 236 cylinders, the other 640 cylinders. I don't know which answer is correct.
From   53  to   98  =   98−53 =   45 From   98  to 183  = 183−98 =   85 From 183  to   37  = 183−37 = 146 From   37  to 122  = 122−37 =   85 From 122  to   14  = 122−14 = 108 From   14  to 124  = 124−14 = 110 From 124  to   65  = 124−65 =   59 From   65  to   67  =   67−65 =     2 45 + 85 + 146 + 85 + 108 + 110 + 59 + 2 = 640 You would get different results if you stipulated that the 98, 183, 37, 122, 14, 124, 65, and 67 were track numbers, with multiple tracks per cylinder, but (after a few minutes of trying various assumptions), I couldn’t get it to come out to 236.
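The 640-cylinder total is easy to verify mechanically: sum the absolute distances over the FCFS order, head start first, then the queue. A small sketch:

```shell
echo "53 98 183 37 122 14 124 65 67" | awk '{
  total = 0
  for (i = 2; i <= NF; i++) {
    d = $i - $(i - 1)
    total += (d < 0 ? -d : d)   # absolute head movement per request
  }
  print total
}'
# prints 640
```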
FCFS disk scheduling [closed]
1,613,345,574,000
I've got a server on Debian 8.1 (3.16.0-4-amd64) which is telling me that disk used is high and I can not see why. Disks usage df -h : Filesystem Size Used Avail Use% Mounted on /dev/sda1 440G 298G 120G 72% / udev 10M 0 10M 0% /dev tmpfs 3,2G 337M 2,8G 11% /run tmpfs 7,9G 0 7,9G 0% /dev/shm tmpfs 5,0M 0 5,0M 0% /run/lock tmpfs 7,9G 0 7,9G 0% /sys/fs/cgroup 192.168.10.50:/c/logs 5,5T 1,9T 3,7T 34% /mnt/nas 192.168.11.250:/data/logs_hotspots 8,2T 1,6T 6,6T 20% /mnt/NAS tmpfs 1,6G 0 1,6G 0% /run/user/1000 Size of each folder : du -sh : 11M /bin 46M /boot 0 /dev 37M /etc 464K /home 0 /initrd.img 0 /initrd.img.old 312M /lib 4,0K /lib64 16K /lost+found 16K /media 8,0K /opt 64K /root 337M /run 5,1M /sbin 4,0K /srv 0 /sys 24K /tmp 447M /usr 233M /var 0 /vmlinuz 0 /vmlinuz.old It is increasing slowly since last year and since it is a production server, I do not want (can not) to restart it. Note : NFS mounts are only for syslog-ng. If anyone has an idea...
Have a look at the output of lsof | grep deleted (or lsof +L1) to see whether deleted files are still taking up disk space. You can then decide to restart or reload the processes still holding those deleted files open, which will free the used disk space.
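To see how much space those deleted-but-open files are actually holding, you can sum lsof's SIZE/OFF column (column 7). A sketch run against captured sample output, since the live numbers depend on your system; on the real box you would pipe sudo lsof +L1 (files with link count 0, i.e. deleted) into the same awk:

```shell
sample='COMMAND  PID USER  FD TYPE DEVICE SIZE/OFF NLINK NODE NAME
syslogd  812 root  3w  REG    8,1 52428800     0 1311 /var/log/big.log (deleted)
java     901 app   7r  REG    8,1 10485760     0 2200 /tmp/cache.bin (deleted)'
echo "$sample" | awk 'NR > 1 { sum += $7 }
                      END { printf "%.0f MiB held by deleted files\n", sum / 1048576 }'
# prints: 60 MiB held by deleted files
```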
Disk is used but I can not see where
1,613,345,574,000
I wish to mount a new disk at /disk on my raspberry server. My /home/iago-lito folder and the /root folder are standing on the SD card, which is fine to me. But I would like subsequent new user home folders to be stored on the disk. Instead of messing around with partitions and /home/newuser{1,2,3,..} mounting points, I intend to simply link each home folder to the disk, with: ln -s /disk/home/newuser1 /home/newuser1 Is it okay to do something that simple? Are there downsides in terms of safety, performance or security?
Creating symbolic links for each user home directory might get confusing if the number of users grows and you have to take care of creating the links yourself.

Alternatively, you could configure a HOME directory other than /home/<user> when creating the user; it could just as well be /disk/home/<user>. For already existing users, the home directory can also be changed or moved afterwards, either by editing /etc/passwd or by using usermod.

Depending on your distribution, the option to override the default home directory when creating the user is --home or --home-dir; if in doubt, look it up in the corresponding man page. An example for your case would be:

adduser newuser --home /disk/home/newuser
Linking user's home to another disk
1,613,345,574,000
I have a lot of available space in /home. df -h output:

Filesystem  Size  Used Avail Use% Mounted on
/dev/sda2   8,9G  2,1G  6,4G  25% /
tmpfs       499M  4,0K  499M   1% /dev/shm
/vol/home   2,7T  2,3T  403G  86% /home

Inside /home/user/project I have the following directories:

$ ls /home/user/project
log data bin

Is it possible to "mount" these directories? I want to achieve this:

$ df -h
/dev/sda2      8,9G  2,1G  6,4G  25% /
tmpfs          499M  4,0K  499M   1% /dev/shm
/vol/home      2,7T  2,3T  403G  86% /home
/vol/home/user 2,7T  2,3T  403G  86% /project/data
/vol/home/user 2,7T  2,3T  403G  86% /project/log
A symbolic link can be used to implement your filesystem map:

cd /
ln -s home/user/project
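One caveat: a symlink will not produce the extra df lines sketched in the question, because df lists mounts, not links. If those entries matter to you, bind mounts give exactly that. A hedged /etc/fstab sketch, assuming the paths from the question (applying it needs root and pre-created /project/data and /project/log mount points):

```
# /etc/fstab - bind /home/user/project subdirectories onto /project/*
/home/user/project/data  /project/data  none  bind  0  0
/home/user/project/log   /project/log   none  bind  0  0
```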
Mount and/or simulate volumes with existing directories?
1,613,345,574,000
I'm doing a warranty replacement on my Chromebook Pixel 2, and want to transfer data from one laptop to another. The Pixel 2 has two USB type-C ports. I'm running Arch Linux on the old one. I'd like to just be able to dd my partition data over from the old laptop to the new one. I have a C-to-C cable and a C-to-A cable (both male-to-male). Can this be done?
USB does not do host-to-host transfers; one of the two devices must act in device (client) mode. This requires hardware that supports client mode, a kernel with USB gadget support enabled, and suitable client-side software.
Is host-to-host file transfer possible using USB Type-C?
1,613,345,574,000
I have a server running webmin/virtualmin in debian 7 with one 80GB hard drive. I want to increase disk space adding a second 1TB disk, but without losing any data as I'm hosting a few websites. Any method and help is appreciated, Thank you.
As you chose the word ADD as opposed to REPLACE, my impression is that you wish to add the 1 TB drive to your system without removing the 80 GB one. Most people wish to replace, but I can see why, if you are hosting webspace, you would not want the service interruption.

To add the drive, connect it to your system and open GParted. When you have located the drive, format the 1 TB drive as ext4 and set the mount point to something that will be recognized as part of your system, such as /home/Terabyte. When you are done with your configuration for this drive, be sure to click the checkmark to apply your settings. When it completes, your new TB drive should appear as a folder in the directory structure, probably in alphabetical order between Pictures and Videos.

You don't actually need to configure the drive to appear as a folder in your directory structure, but it makes the addition rather seamless.
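The same job can also be done from the command line. A sketch demonstrated on an image file so it is safe to run as-is; on the real 1 TB disk you would partition first and run the commands as root against the real device (the name /dev/sdb is an assumption):

```shell
img=/tmp/newdisk.img
truncate -s 64M "$img"               # stand-in for the 1 TB drive
# On real hardware, partition first (as root):
#   parted -s /dev/sdb mklabel gpt
#   parted -s /dev/sdb mkpart primary ext4 1MiB 100%
mkfs.ext4 -q -F -L terabyte "$img"   # real disk: mkfs.ext4 -L terabyte /dev/sdb1
# Mount at every boot by adding a line like this to /etc/fstab:
#   LABEL=terabyte  /home/Terabyte  ext4  defaults  0  2
e2label "$img"
# prints: terabyte
```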
Increase disk space without losing any data
1,613,345,574,000
I have a system with unaligned disk partitions that need to be aligned without losing any data on any partition (including the MBR), and with minimal to no downtime.
Create new aligned partitions (assuming the new disk is /dev/sdd):

DISK=/dev/sdd
dd if=/dev/zero of=$DISK count=1 bs=1M
parted -s -- $DISK mklabel msdos
parted -s -- $DISK mkpart primary ext3 64s 401624s
parted -s -- $DISK mkpart primary 401628s 6144866s
parted -s -- $DISK mkpart primary 6144868s 100%
parted $DISK unit s print
(echo t; echo 1; echo 83; echo t; echo 2; echo 82; echo t; echo 3; echo 8e; echo w) | fdisk $DISK
(echo a; echo 1; echo w) | fdisk $DISK

Install GRUB:

mkfs -t ext3 -L /boot ${DISK}1
mount ${DISK}1 /mnt
cd /mnt
dump -0 -b 1024 -f - /boot/ | restore -r -f - -b 1024
cd /
umount /mnt

grub:

grub> device (hd1) /dev/sdd
grub> root (hd1,0)
 Filesystem type is ext2fs, partition type 0x83
grub> setup (hd1)
 Checking if "/boot/grub/stage1" exists... no
 Checking if "/grub/stage1" exists... yes
 Checking if "/grub/stage2" exists... yes
 Checking if "/grub/e2fs_stage1_5" exists... yes
 Running "embed /grub/e2fs_stage1_5 (hd1)"... 15 sectors are embedded. succeeded
 Running "install /grub/stage1 (hd1) (hd1)1+15 p (hd1,0)/grub/stage2 /grub/grub.conf"... succeeded
Done.
grub> quit

Add the new disk to LVM:

pvcreate /dev/sdd3

Extend the myvg volume group onto the new PV:

vgextend myvg /dev/sdd3

Move the data to the new disk (assuming the old disk is /dev/sda):

pvmove --verbose /dev/sda3 /dev/sdd3

If other old PVs (e.g. /dev/sda1) should be combined into the single new partition, move them too:

pvmove --verbose /dev/sda1 /dev/sdd3

Remove the old disk from LVM:

vgreduce myvg /dev/sda3
vgreduce myvg /dev/sda1
pvremove /dev/sda1
pvremove /dev/sda3

Note that there is no immediate need to reboot the system, but you may want to reboot to confirm the system still boots after removing the unaligned disk.
How to realign a disk on logical volume with minimum to zero down time
1,621,109,064,000
One month ago I bought a new SSD disk for my PC, that I connected with the SATA3 port to my motherboard. I created a partition /dev/sda3 on that disk and I named it PatriotSata3. It was working correctly through all month. I had to mount it manually before using it and then it was visible in /media/username/PatriotSata3. Yesterday I moved all the Docker data to this PatriotSata3 disk with this tutorial as I ran out of space on my main disk. Today when I logged on, I saw that the path /media/username/PatriotSata3 is visible before my manual mounting but is empty and I get ls: Cannot open directory '.': Access denied When I try to list the directory. But then when I mount the disk manually it's visible under the path /media/username/PatriotSata31 and has all the files. The /media/username/PatriotSata3 is also visible but the access is still denied. How can I go back to the previous state with all the files seen under /media/username/PatriotSata3 ?
Your disk wasn't renamed; it is only mounted at a different location/mountpoint, which doesn't really make a difference.

From the /media/<username>/<label> mountpoint, I'm guessing you are mounting either from a GUI (e.g. from the Nautilus file manager) or from a terminal using udisksctl, so the mountpoint selection is done by UDisks. Older versions of UDisks have a bug where the mountpoint directory is not automatically removed after unmounting the drive; the next time, the existing directory is not reused, and a new mountpoint with 1 appended to the name is used instead.

You can simply remove the old directory (but please don't remove the one currently in use) and next time it will be created again.

If you plan to use the new disk more often and don't want to mount it manually every time, I recommend adding it to fstab so it is mounted automatically at every boot (you can then also choose a better mountpoint that won't change).
Disk renamed automatically
1,621,109,064,000
I have a tmpfs named /rtmp/ with 1 GB of RAM allocated to it on Ubuntu. Using a bash script, I am testing whether it is faster to write a small text file to the hard disk or to this RAM drive /rtmp/.

Bash script writing to the hard disk:

#!/bin/bash
URL="http://some.website/some.txt"
wget -O ~/current/axis_tmp ${URL}
cat ~/current/axis_tmp | grep "^pattern" | tail -n 1 | awk -F',' '{printf("%.0f\n", $3)}' | sed 's/ //g' > ~/current/tmp.txt
sed -i 's/^/X,/' ~/current/tmp.txt
sed -i 's/$/,Y/' ~/current/tmp.txt
exit 0

Bash script writing to tmpfs:

#!/bin/bash
URL="http://some.website/some.txt"
wget -O /rtmp/axis_tmp ${URL}
cat /rtmp/axis_tmp | grep "^pattern" | tail -n 1 | awk -F',' '{printf("%.0f\n", $3)}' | sed 's/ //g' > /rtmp/tmp.txt
sed -i 's/^/X,/' /rtmp/tmp.txt
sed -i 's/$/,Y/' /rtmp/tmp.txt
exit 0

After running the time command, I have the following results.

Writing to disk:
real 0m0.554s
user 0m0.022s
sys  0m0.003s

Writing to tmpfs:
real 0m0.614s
user 0m0.023s
sys  0m0.002s

Why was writing the text file to tmpfs slower than writing it to disk? Shouldn't writing to tmpfs have been faster?
Your script is not a proper way to test I/O. As hardillb pointed out, it has several problems, and there are more besides: most of the measured time is likely spent in wget waiting on the network (note how tiny user and sys are compared to real), and a single small file mostly lands in the page cache anyway. This is why there are dedicated tools for this job. IMHO the best tool for it is fio. You can try it like this:

fio --name=fio-rand-write --rw=randwrite --bs=4k --direct=0 --numjobs=4 \
    --size=512M --ioengine=libaio --iodepth=16

Just cd into a folder on the partition you want to test, e.g. /rtmp, and launch the command. Feel free to read its documentation or other threads here for more info.
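To see concretely why the original comparison was unfair, separate the download from the local work: fetch once, then time only the processing. A sketch with a fabricated input file standing in for the download (the URL in the question is a placeholder anyway); it also collapses the grep/awk/sed chain into a single awk:

```shell
# Stand-in for: wget -O /tmp/axis_tmp "$URL"
printf 'pattern,foo,123.4\npattern,bar,567.8\nother,x,9\n' > /tmp/axis_tmp

# Time only this part (e.g. prefix the pipeline with `time` in bash):
grep '^pattern' /tmp/axis_tmp | tail -n 1 \
  | awk -F',' '{ printf("X,%.0f,Y\n", $3) }' > /tmp/tmp.txt
cat /tmp/tmp.txt
# prints: X,568,Y
```

Timed this way, the disk-vs-tmpfs difference for one tiny file is buried in noise, which is exactly why a tool like fio with a realistic workload is needed.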
Downloading and writing a text file onto tmpfs is slower than writing to disk, why?
1,621,109,064,000
We suspect a faulty cable connecting the SAN to a direct I/O LDOM. This is a snippet of the errors shown when running iostat -En:

c5t60060E8007C50E000030C50E00001067d0 Soft Errors: 0 Hard Errors: 696633 Transport Errors: 704386
Vendor: HITACHI  Product: OPEN-V  Revision: 8001  Serial No: 504463
Size: 214.75GB <214748364800 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 6 Recoverable: 0
Illegal Request: 1 Predictive Failure Analysis: 0

What does No Device: 6 mean here?
A search through the Illumos fiber-channel device code for ENODEV shows 13 uses of ENODEV in the source code that originated as OpenSolaris. Of those instances, I suspect this is the one most likely to cause your "No Device" errors:

pd = fctl_hold_remote_port_by_pwwn(port, &pwwn);
if (pd == NULL) {
    fcio->fcio_errno = FC_BADDEV;
    return (ENODEV);
}

That code is in the fp_fcio_login() function, where the code appears to be trying to log in to a remote WWN. It seems appropriate to assume a bad cable could prevent that from happening. Note that the fiber-channel error code is FC_BADDEV, which also seems consistent with a bad cable.

In short, a review of the source code indicates that ENODEV errors are consistent with a bad cable. You can use DTrace to more closely identify the association if necessary. Given that both hard and transport errors occur about 5 or 6 orders of magnitude more frequently, IMO that effort isn't necessary unless the ENODEV errors continue after the other errors are addressed and no longer occur.
What does "no device" mean when running iostat -En
1,621,109,064,000
We have a Linux machine running Red Hat 7.2 with an LVM disk, sdd, of only 2G:

/dev/sdd 2.0G 9.2M 1.9G 1% /data

We want to extend the size of /data to 20G by adding another disk, sde. Expected result:

/dev/sdd 20.0G 9.2M 1.9G 1% /data

What is the procedure for adding the new disk and joining its capacity to "sdd"?
It does not look like your /dev/sdd is a member of an LVM volume group, so you cannot just extend the device; the expected result as shown cannot be achieved directly.

From the subject, I assume you would like to use LVM to extend the space and keep the data from /dev/sdd available, spread across /dev/sdd and /dev/sde, at the /data mount point. If so, the procedure looks like this:

1. Make a reliable backup of /data.
2. Create a physical volume from /dev/sdd (this will wipe out the data on /dev/sdd).
3. Create a physical volume from /dev/sde (this will wipe out the data on /dev/sde).
4. Create a volume group consisting of the two PVs, /dev/sdd and /dev/sde.
5. Create a linear logical volume, create a filesystem on it, and mount it on /data.
6. Restore your backup.

References:
https://wiki.archlinux.org/index.php/LVM
https://linux.die.net/man/8/lvm
Linux + LVM + extend a small disk by adding another disk
1,621,109,064,000
On my Red Hat Linux machine we have the following details:

# pvs
  PV         VG   Fmt  Attr PSize   PFree
  /dev/sda2  vg00 lvm2 a--  149.51g 944.00m

# lsblk
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
fd0                  2:0    1     4K  0 disk
sda                  8:0    0   150G  0 disk
├─sda1               8:1    0   500M  0 part /boot
└─sda2               8:2    0 149.5G  0 part
  ├─vg00-lv_root   253:0    0    40G  0 lvm  /
  ├─vg00-lv_swap   253:1    0   7.7G  0 lvm  [SWAP]
  └─vg00-lv_var    253:2    0 100.9G  0 lvm  /var

I want to add a new partition on the OS disk (sda) in the same VG, so that pvs shows something like this:

# pvs
  PV         VG   Fmt  Attr PSize   PFree
  /dev/sda2  vg00 lvm2 a--  149.51g 944.00m
  /dev/sda.. vg00 lvm  ........

Please advise on the steps needed to create the new partition on the OS disk.
How would you create a new partition on your hard drive? You have no space left. However, if you want to create a new logical volume, then run: lvcreate -L <The size of the LV> -n <The name of the LV> <name of vg> mkfs.<format type> <LV path>
LVM + add new partition on the OS disk
1,498,831,337,000
Good day, I have a 120GB SSD and a 1TB HDD in my system, and on the SSD I have Windows installed. Afterwards, I added Linux Mint as dual boot. Before installing, I shrank space on the HDD and created all of /, /home, swap etc. on the HDD. But when I launch Windows vs. Mint, I really feel the difference, so I thought it would be cool to move my Mint to the SSD. I have a few questions about it. Will it increase my boot and general speed if I somehow move / to the SSD, or will I have to move all files there? Would it really be a good idea to shrink a 120GB SSD? If the 2nd question's answer is yes, would it make a difference to do this operation (moving Mint to the SSD) as dual boot rather than a normal installation? (following this guide http://blog.oaktreepeak.com/2012/03/move_your_linux_installation_t.html) OS: Windows 10 64bit, Linux Mint 18.1 64bit
Ok, since you might get confused from the comments, I've decided to write an answer. Though what I'm suggesting is not a simple procedure and if you're not experienced enough, you will end up with unbootable linux, or even worse - broken partitions and lost data. I would not suggest you to follow it unless you're experienced enough and know how to recover from boot failures later. I.e. booting from USB, mounting, chrooting, etc... These steps are not a copy/paste howto, so if you have doubts or question on any of these steps, do not start with this. You can create one new partition (5GB for example) on your SSD and move some parts of your linux there. Then format it with Ext4 or whatever FS you prefer. Copy all folders except "/home", "/var", "/media", "/run", "/opt", "/boot", "/mnt", "/proc", "/dev", "/sys". Actually you should be copying "/lib*" folders, "/bin", "/sbin" folders, "/usr", "/etc" folders and some more probably. Then create "/sys", "/dev", "/proc" empty folders on the SSD. You should update the ROOT in your bootloader config and fstab. Here you should find a way to get the rest of the folders mounted, but since they are on a single partition on HDD, it's not that easy. you can mount them in /storage folder for example and make symlinks to the root fs. or mount them in /storage folder and then bind mount them to their root fs folders (mount -o bind) in both cases you should later update fstab to do the mounting. Note: there are probably many other ways achieve what you want. On my linux, I have everything on the SSD and a (HDD mounted partition) /storage folder to hold my /home/user/[some sub folders] /var/cache and some other data-huge folders with symlinks to the root fs.
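The copy step in the procedure above can be rehearsed safely on scratch directories before touching the real disks. This is only a sketch of the selection logic (the excluded directory names come from the answer; the scratch paths and fstab content are made up for illustration), not a drop-in migration script — on the real machine the equivalent runs as root against the mounted SSD partition:

```shell
# Rehearsal on scratch directories (NOT the real system).
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/etc" "$src/usr" "$src/home" "$src/proc"
echo "UUID=... / ext4 defaults 0 1" > "$src/etc/fstab"

# Copy everything except the folders that stay on the HDD or are virtual.
( cd "$src" && for d in */ ; do
    case "$d" in
      home/|var/|media/|run/|opt/|boot/|mnt/|proc/|dev/|sys/) continue ;;
    esac
    cp -a "$d" "$dst/"
  done )

# Recreate the virtual filesystem mount points as empty directories.
mkdir -p "$dst/proc" "$dst/dev" "$dst/sys"
ls "$dst"
```

After this, "$dst" holds etc/ and usr/ but no home/, and has empty proc/, dev/ and sys/ — the same shape the SSD root partition should end up with.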
Dual Boot System, Moving Linux Mint to SSD
1,498,831,337,000
A few days ago, I changed some files in the Windows directory using Debian. After that it would cause a kernel panic in my Windows OS. Is there any way to install a new Windows on the same corrupt disk partition? Please help me.
Insert your Windows disc, do manual partitioning, and choose to install to ONLY that partition. I think you'll then have problems with GRUB, and you will need to repair it with a live CD. In any case, before you do anything, make a backup of your data.
Remove corrupt windows from laptop and install new windows again using linux [closed]
1,498,831,337,000
I've followed these instructions and now I'm able to access my Macintosh HD from Ubuntu. The problem is that I can't access the folder I need (for example Desktop); it says that I don't have enough permissions to access the folder, and I see the folder with an "X" on it. I've tried to use gksudo nautilus It works, but I can't run the terminal every time I have to access a file on my HD. Is there a solution to edit the permissions permanently?
You could change the ownership of the folder with: sudo chown -R username:groupname /mount/mac/Desktop Replace username and groupname with your own user and group names, and adjust the path to wherever the Mac partition is mounted.
Access Macintosh HD from Ubuntu - Permission denied
1,498,831,337,000
Trying to clean the mbr code part on a disk using the pfsense 2.7.0 live disk (pfsense is based on freebsd) under shell command. being /dev/da0 my drive following the suggested code for clean just the mbr code keeping the partitions the command should be: dd if=/dev/zero of=/dev/da0 bs=446 count=1 however... the result is: dd: /dev/da0: Invalid argument 1+0 records in 0+0 records out 0 bytes transferred in 0.000089 secs (0 bytes/sec) instead... if I use as code just dd if=/dev/zero of=/dev/da0 it just erases everything without errors :( I'm doing this tests in a vm so I can recover the hd many times to test this passage... however this thing is giving me headaches... EDIT: It seems that if I use bs=512 or bs=1M it doesn't gives errors. However doing so also the partitions table part would be deleted... EDIT2: I tried to use the command dd if=/dev/da0 of=/tmp/mbr_file bs=512 count=1 and it create for me a file with the mbr, I wonder what commands can I use for edit in binary mode the file filling the first 446 bytes with 0 and then use dd if=/tmp/mbr_file of=/dev/da0 bs=512 count=1 to restore it. What could I use? vi?
Ok, I did many tests and came to this conclusion... Because pfSense is a damn stripped version of FreeBSD and a LOT of tools are missing, I had to do the following to clear the first 446 bytes of the disk while preserving the partition table located in the last 66 bytes of the first 512-byte block. dd if=/dev/da0 of=/tmp/mbr_file_original bs=512 count=1 dd if=/dev/zero of=/tmp/mbr_file_zerofilled bs=446 count=1 cat /tmp/mbr_file_original | ( dd of=/dev/null bs=446 count=1; dd bs=66 count=1 ) > /tmp/mbr_file_partitions_table cat /tmp/mbr_file_partitions_table >> /tmp/mbr_file_zerofilled mv /tmp/mbr_file_zerofilled /tmp/mbr_file_new dd if=/tmp/mbr_file_new of=/dev/da0 bs=512 count=1 then for testing the copied mbr content dd if=/dev/da0 of=/tmp/mbr_file_test bs=512 count=1 hexdump /tmp/mbr_file_test | less In short what I did is: copied the MBR to mbr_file_original created a zero-filled 446-byte file called mbr_file_zerofilled then, BECAUSE SOMEONE (YEAH, I'M LOOKING AT YOU) removed practically every useful tool, even hex editors, leaving just this hack available, used the line cat /tmp/mbr_file_original | ( dd of=/dev/null bs=446 count=1; dd bs=66 count=1 ) > /tmp/mbr_file_partitions_table to extract the last 66 bytes from the original MBR file. At this point, using cat, I concatenated the two files and renamed the result to mbr_file_new for the sake of clarity, and then used dd to write the MBR back to the da0 device, this time without errors because I wrote all 512 bytes in one go.
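For what it's worth, dd's conv=notrunc flag does the in-place overwrite without any file juggling. It would not have helped directly on /dev/da0 (FreeBSD raw devices reject non-sector-aligned writes, which is presumably why bs=446 gave "Invalid argument" there), but on a regular file, as in this sketch on a scratch 512-byte "sector", the technique is easy to demonstrate:

```shell
# Scratch 512-byte "sector" filled with 0xFF so changes are visible.
img=$(mktemp)
tr '\000' '\377' < /dev/zero | dd of="$img" bs=512 count=1 2>/dev/null

# Zero only the first 446 bytes in place; conv=notrunc keeps the rest intact.
dd if=/dev/zero of="$img" bs=446 count=1 conv=notrunc 2>/dev/null

# Verify: no 0xFF left in the first 446 bytes, no zeroes in the last 66.
head_ff=$(dd if="$img" bs=1 count=446 2>/dev/null | tr -d '\000' | wc -c)
tail_zero=$(dd if="$img" bs=1 skip=446 count=66 2>/dev/null | tr -d '\377' | wc -c)
echo "stray bytes in head: $head_ff, in tail: $tail_zero"
rm -f "$img"
```

Both counts come back 0: the MBR code area is cleared and the partition-table area is untouched.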
using dd for clean MBR code doesn't work on pfSense
1,498,831,337,000
I am facing a very weird problem. On my Fedora PC I have 12 GB swap file. But when I issue free -h I get : total used free shared buff/cache available Mem: 7.7Gi 1.8Gi 3.7Gi 409Mi 2.2Gi 5.2Gi Swap: 11Gi 0B 11Gi As you can see my swap is shown as 11 GB, where in reality I have 12 GB. This is corrected when I use free --giga total used free shared buff/cache available Mem: 8 1 3 0 2 5 Swap: 12 0 12 Here I get the correct output. Why is there a difference when I use -h vs --giga ? What is going on here ?
Both commands use different units: -h shows values in powers of 1024 (note the “Gi” suffix, for gibibytes), --giga shows values in powers of 1000. In your case, 12GB (gigabytes) is 12,000,000,000 bytes, which equals 11.2GiB (rounded to the closest 0.1) which free rounds down to 11. You can force free -h to use powers of 10 with the --si flag: free -h --si units can perform such conversions for you: $ units 12GB GiB * 11.175871 / 0.089478485
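The arithmetic behind that conversion is easy to check yourself; this one-liner (plain awk, nothing system-specific) converts 12 decimal gigabytes to binary gibibytes:

```shell
# 12 GB = 12 * 10^9 bytes; 1 GiB = 2^30 bytes.
gib=$(awk 'BEGIN { printf "%.1f", 12 * 10^9 / 2^30 }')
echo "12 GB = $gib GiB"   # -> 12 GB = 11.2 GiB
```

which matches the 11.2 GiB that free -h rounds down to 11Gi.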
Why is the output of free -h different from free --giga?
1,498,831,337,000
Is it possible to capture the UUID number before creating a file system on a disk? If yes, how - with which command? blkid ( before run mkfs.ext4 on sdb disk ) <no output> blkid ( after run mkfs.ext4 on sdb disk ) /dev/sdb: UUID="9bb52cfa-0070-4824-987f-23dd63efe120" TYPE="ext4" Goal - we want to capture the UUID number of the disks on the Linux machines before creating the file system
No and yes. The command that creates the filesystem is the one that generates the UUID, so before running it there is no UUID to use to name the filesystem. However, it is possible to use a specific UUID to create the filesystem: $ uuid=$(uuidgen) $ echo "$uuid" 9a7d78e5-bc6c-4b19-94da-291122af9cf5 $ mkfs.ext4 -U "$uuid" /dev/sdb The uuidgen program is part of the util-linux package (historically it shipped with e2fsprogs).
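A sketch of the capture-first workflow (non-destructive as shown: the mkfs line is commented out because it would format a real device, and /dev/sdb is simply the device from the question):

```shell
# Capture a UUID up front; uuidgen comes with util-linux, and the kernel
# can hand one out too if uuidgen is missing.
uuid=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)
echo "captured UUID: $uuid"

# Record it wherever you track your machines, then later:
# mkfs.ext4 -U "$uuid" /dev/sdb   # destructive! run only on the intended disk

# Sanity check: a UUID is 8-4-4-4-12 hex digits.
echo "$uuid" | grep -Eq '^[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$' && echo "format ok"
```

This way the UUID is known (and recorded) before the filesystem exists, and blkid will report exactly that value afterwards.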
is it possible to capture the UUID number before creating file system on disk
1,498,831,337,000
We have a Linux RHEL server, version 7.6. The disks in the server are: lsblk -S NAME HCTL TYPE VENDOR MODEL REV TRAN sda 0:2:0:0 disk DELL PERC FD33xD 4.27 sdb 1:0:0:0 disk ATA INTEL SSDSC1BG40 DL2B sata sdc 2:0:0:0 disk ATA INTEL SSDSC1BG40 DL2B sata sdc and sdb are the OS disks. sda is a disk presented by RAID, so sda comprises a number of disks. The question is how to count the number of disks in the RAID. We tried the following, but we are not sure whether this CLI output describes the number of disks in the RAID: smartctl --scan /dev/sda -d scsi # /dev/sda, SCSI device /dev/sdb -d scsi # /dev/sdb, SCSI device /dev/sdc -d scsi # /dev/sdc, SCSI device /dev/bus/0 -d megaraid,0 # /dev/bus/0 [megaraid_disk_00], SCSI device /dev/bus/0 -d megaraid,1 # /dev/bus/0 [megaraid_disk_01], SCSI device /dev/bus/0 -d megaraid,2 # /dev/bus/0 [megaraid_disk_02], SCSI device /dev/bus/0 -d megaraid,3 # /dev/bus/0 [megaraid_disk_03], SCSI device /dev/bus/0 -d megaraid,4 # /dev/bus/0 [megaraid_disk_04], SCSI device /dev/bus/0 -d megaraid,5 # /dev/bus/0 [megaraid_disk_05], SCSI device /dev/bus/0 -d megaraid,6 # /dev/bus/0 [megaraid_disk_06], SCSI device /dev/bus/0 -d megaraid,7 # /dev/bus/0 [megaraid_disk_07], SCSI device /dev/bus/0 -d megaraid,8 # /dev/bus/0 [megaraid_disk_08], SCSI device /dev/bus/0 -d megaraid,9 # /dev/bus/0 [megaraid_disk_09], SCSI device /dev/bus/0 -d megaraid,10 # /dev/bus/0 [megaraid_disk_10], SCSI device /dev/bus/0 -d megaraid,11 # /dev/bus/0 [megaraid_disk_11], SCSI device /dev/bus/0 -d megaraid,12 # /dev/bus/0 [megaraid_disk_12], SCSI device /dev/bus/0 -d megaraid,13 # /dev/bus/0 [megaraid_disk_13], SCSI device /dev/bus/0 -d megaraid,14 # /dev/bus/0 [megaraid_disk_14], SCSI device /dev/bus/0 -d megaraid,15 # /dev/bus/0 [megaraid_disk_15], SCSI device lspci -vv | grep -i raid 06:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS-3 3108 [Invader] (rev 02) Kernel driver in use: megaraid_sas mdadm --detail /dev/sda mdadm: /dev/sda does not appear to be an md device cat
/proc/mdstat Personalities : [raid1] md1 : active raid1 sdb2[0] sdc2[1] 390054912 blocks super 1.2 [2/2] [UU] bitmap: 2/3 pages [8KB], 65536KB chunk md0 : active raid1 sdb1[0] sdc1[1] 524224 blocks super 1.0 [2/2] [UU] bitmap: 0/1 pages [0KB], 65536KB chunk unused devices: <none> lsscsi [0:2:0:0] disk DELL PERC FD33xD 4.27 /dev/sda [1:0:0:0] disk ATA INTEL SSDSC1BG40 DL2B /dev/sdb [2:0:0:0] disk ATA INTEL SSDSC1BG40 DL2B /dev/sdc cat /proc/partitions major minor #blocks name 8 0 13670809600 sda 8 16 390711384 sdb 8 17 524288 sdb1 8 18 390185984 sdb2 8 32 390711384 sdc 8 33 524288 sdc1 8 34 390185984 sdc2 9 0 524224 md0 9 1 390054912 md1 253 0 104857600 dm-0 253 1 16777216 dm-1 253 2 104857600 dm-2 253 3 10485760 dm-3 ll /sys/block/ total 0 lrwxrwxrwx 1 root root 0 Oct 17 07:27 dm-0 -> ../devices/virtual/block/dm-0 lrwxrwxrwx 1 root root 0 Oct 17 07:27 dm-1 -> ../devices/virtual/block/dm-1 lrwxrwxrwx 1 root root 0 Oct 17 07:27 dm-2 -> ../devices/virtual/block/dm-2 lrwxrwxrwx 1 root root 0 Oct 17 07:27 dm-3 -> ../devices/virtual/block/dm-3 lrwxrwxrwx 1 root root 0 Oct 17 07:27 md0 -> ../devices/virtual/block/md0 lrwxrwxrwx 1 root root 0 Oct 17 07:27 md1 -> ../devices/virtual/block/md1 lrwxrwxrwx 1 root root 0 Oct 17 07:27 sda -> ../devices/pci0000:00/0000:00:03.0/0000:02:00.0/0000:03:01.0/0000:04:00.0/0000:05:01.0/0000:06:00.0/host0/target0:2:0/0:2:0:0/block/sda lrwxrwxrwx 1 root root 0 Oct 17 07:27 sdb -> ../devices/pci0000:00/0000:00:11.4/ata1/host1/target1:0:0/1:0:0:0/block/sdb lrwxrwxrwx 1 root root 0 Oct 17 07:27 sdc -> ../devices/pci0000:00/0000:00:11.4/ata2/host2/target2:0:0/2:0:0:0/block/sdc ll /sys/block/ |grep 'primary' no output
The mdadm command will handle Linux Software RAID only. In case of hardware RAID, such as your Dell PERC FD33xD / LSI MegaRAID SAS-3 3108, you'll need a tool that will be able to communicate with the RAID controller using vendor-specific protocols to query the information. Unfortunately, since the ownership of that RAID controller product line has passed from Symbios to LSI to Avago to (current) Broadcom, it can be quite difficult to find the management tools for some RAID controller models from the original equipment manufacturer. But Dell is actually supporting a version of the management tool, known as perccli, for their branded versions of the RAID controllers. But you apparently cannot use an identifier like "PERC FD33xD" or "LSI MegaRAID SAS-3 3108" to search for drivers on Dell's support site: you need either the name of a server model that contains the RAID controller in question, or some Dell product name or support identifier that unfortunately won't appear in lsblk/lsscsi/lspci outputs. By some quick Googling, it appears that "PowerEdge FD332" is one of the models that might contain that RAID controller. So go to Dell support page, type in "PowerEdge FD332" (or your actual Dell server model, if applicable) and select "Drivers & Downloads". You'll see a box titled with "Find a driver for your PowerEdge FD332" (or whatever model you picked) with four drop-down menus. From the "Operating System" drop-down, pick your operating system, "RedHat Enterprise Linux 7" in this case. Then from the "Category" drop-down, pick "SAS RAID". And the list of downloadable drivers updates, and somewhere near the top (currently the very first entry for me!) should be "Linux PERCCLI Utility for all Dell HBA/PERC controllers". Download and install it: it will be a .tar.gz package containing both a .rpm file for RedHat and other distributions, and a .deb file for Debian and related distributions. 
After that, you should have the tool available in the /opt/MegaCLI/perccli/ directory, as either perccli or perccli64. The first command you should use with the tool should probably be: /opt/MegaCLI/perccli/perccli64 /show This will display the installed compatible RAID controllers and identify the numbers this tool will use for each. If there is just one RAID controller, it presumably is number 0. To get the list of actual physical disks from RAID controller #0: /opt/MegaCLI/perccli/perccli64 /c0 /eall /sall show all The list should look similar to this: ------------------------------------------------------------------------------ EID:Slt DID State DG Size Intf Med SED PI SeSz Model Sp ------------------------------------------------------------------------------ 252:0 7 Onln 0 465.25 GB SATA HDD N N 512B WDC WD5003ABYX-01WERA1 U 252:1 6 Onln 1 465.25 GB SATA HDD N N 512B WDC WD5003ABYX-01WERA1 U 252:2 5 Onln 2 74.0 GB SATA SSD N N 512B INTEL SSDSC2BB080G4 U 252:3 4 Onln 2 74.0 GB SATA SSD N N 512B INTEL SSDSC2BB080G4 U ------------------------------------------------------------------------------ EID-Enclosure Device ID|Slt-Slot No.|DID-Device ID|DG-DriveGroup DHS-Dedicated Hot Spare|UGood-Unconfigured Good|GHS-Global Hotspare UBad-Unconfigured Bad|Onln-Online|Offln-Offline|Intf-Interface Med-Media Type|SED-Self Encryptive Drive|PI-Protection Info SeSz-Sector Size|Sp-Spun|U-Up|D-Down|T-Transition|F-Foreign UGUnsp-Unsupported The numbers in the DID column are the numbers you can use with the smartctl command, e.g. smartctl -a -d megaraid,<DID value> /dev/sda Reference: https://www.thomas-krenn.com/en/wiki/Smartmontools_with_MegaRAID_Controller Note: Older and/or non-Dell-specific versions of these tools used to be known as MegaCLI and/or storcli, but those seem to be behind a labyrinth of stale web links and revised product naming schemes. 
The only link for MegaRAID SAS-3 3108 Linux tools on Broadcom's pages I managed to find currently points to a page in avago.com that no longer exists. So, I say this based on my 20 years of experience with enterprise-grade computer hardware: if you have systems with hardware RAID controllers, make sure you download any vendor-specific controller configuration tools from the vendor support site when initially setting up the server, and save them. And even if you have no problems with the controller, check for updates once in a while. If the product line is sold to a different company or the hardware vendor simply decides that their support site needs a new design, some tools may go missing for a while: in the case of RAID controller configuration tools, it is indeed very much better to have them and not need them, than vice versa. If you are planning to use old server models beyond their vendor support lifetime for any reason (even as test servers only!), make sure you download all the applicable vendor-specific tools and drivers before the end-of-support date, and archive them in a safe location. After the support ends, the downloads may vanish from the vendor's website without any warning.
how to verify number of disks in RAID but from OS
1,498,831,337,000
We have a RHEL 7.2 server. The server is a VM, and we added a new disk - sde. With the following example we create an ext filesystem with the label disk2: mkfs.ext4 -L disk2 /dev/sde mke2fs 1.42.9 (28-Dec-2013) /dev/sde is entire device, not just one partition! Proceed anyway? (y,n) y Discarding device blocks: done Filesystem label=disk2 OS type: Linux Block size=4096 (log=2) Fragment size=4096 (log=2) Stride=0 blocks, Stripe width=0 blocks 262144 inodes, 1048576 blocks 52428 blocks (5.00%) reserved for the super user First data block=0 Maximum filesystem blocks=1073741824 32 block groups 32768 blocks per group, 32768 fragments per group 8192 inodes per group Superblock backups stored on blocks: 32768, 98304, 163840, 229376, 294912, 819200, 884736 Allocating group tables: done Writing inode tables: done Creating journal (32768 blocks): done Writing superblocks and filesystem accounting information: done so we get lsblk -o +FSTYPE,LABEL | grep sde sde 8:64 0 4G 0 disk ext4 disk2 Is it possible to give the new disk only the label, without creating a file system? Example of expected output ( without a file system on the disk ): lsblk -o +FSTYPE,LABEL | grep sde sde 8:64 0 4G 0 disk disk2
The labels shown by lsblk (or rather, blkid) in its LABEL column are the file system labels, which are only available on file systems capable of storing a label. A block device with no file system can’t have such a label. GPT partitions can also be labeled, and lsblk shows that with PARTLABEL. But that’s not an option for whole disks either.
how to create disk label without creation filesystem on new disk
1,498,831,337,000
The following is net-snmp output and, as you can see, diskIOLA is not available: SNMP table: UCD-DISKIO-MIB::diskIOTable diskIOIndex diskIODevice diskIONRead diskIONWritten diskIOReads diskIOWrites diskIOLA1 diskIOLA5 diskIOLA15 diskIONReadX diskIONWrittenX 25 sda 845276160 2882477056 576632 42597061 ? ? ? 5140243456 883350772736 According to the definitions here http://www.net-snmp.org/docs/mibs/ucdDiskIOMIB.html: diskIOLAx means the x-minute average load of the disk (%). The other values in the table are: diskIONRead - The number of bytes read from this device since boot. diskIONWritten - The number of bytes written to this device since boot. diskIOReads - The number of read accesses from this device since boot. diskIOWrites - The number of write accesses to this device since boot So, how can this load be calculated manually, given that it is not collected on the server? In the end, we want to show graphs to users where they can see whether disk IO is heavy or not. We can either display this using read/write bytes/sec or read/write requests/sec. If we display read/write requests/sec alone, we can see that heavy I/O is going on, but we won't know whether the disk R/W speed is affected by it. And displaying R/W speed alone can't tell us why the speed is affected - whether it is because of too many I/O operations or not enough buffer memory for asynchronous writes. Hence, we need to display both. But what does the other value, diskIOLoad, mean, how can we calculate it, and why is it not being collected in snmp? Does collecting it cause a heavy load? If it does, then we can calculate it manually - but what's the formula?
The information you indicate you have is not enough to calculate disk utilization %. Disk utilization % is calculated as disk_time_spent_in_io / elapsed_time. For example, if your disk spends 0.25 seconds performing IO in a 1 second period, then your disk is 25% utilized. The number of operations is meaningless when it comes to utilization %. Depending on your disk, and the type of IO you're performing (bulk vs random), it could be 100% utilized at 10 IOPS, or 10000 IOPS. The only way to know is by how long the disk is taking to perform those IOPs.
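To make the formula concrete: on Linux, the kernel exposes "milliseconds spent doing I/O" per device as field 13 of /proc/diskstats, so utilization is the delta of that counter over the sampling interval. With hypothetical counter values (made up for illustration) sampled 1000 ms apart:

```shell
# Hypothetical readings of the "time spent doing I/O (ms)" counter
# (field 13 in /proc/diskstats), taken twice, 1000 ms apart:
io_ms_t0=18250
io_ms_t1=18500
elapsed_ms=1000

util=$(awk -v a="$io_ms_t0" -v b="$io_ms_t1" -v e="$elapsed_ms" \
       'BEGIN { printf "%.0f", (b - a) * 100 / e }')
echo "disk utilization: ${util}%"   # -> disk utilization: 25%
```

That is exactly the 0.25 s of I/O per 1 s window = 25% case from the answer; a real collector would just replace the hypothetical values with two timed reads of /proc/diskstats.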
How to calculate disk IO load percentage?
1,498,831,337,000
I need to have a very fast disk for keeping cache. How can I do that in Linux?
Thanks to @Mat: # mkdir -p /mnt/ram # mount -t ramfs -o size=20m ramfs /mnt/ram (Note: ramfs does not actually enforce the size option and can grow until memory is exhausted; if you want the 20m limit enforced, use -t tmpfs instead.)
How to create memory-based disk in linux? [duplicate]
1,498,831,337,000
on our RHEL servers , RHEL version - 7.2 , we saw many dmesg lines as: example about sdb disk ( hard drive ) [Thu Dec 30 13:07:48 2021] EXT4-fs (sdb): error count since last fsck: 1329 [Thu Dec 30 13:07:48 2021] EXT4-fs (sdb): initial error at time 1614482941: ext4_find_entry:1312: inode 67240512 [Thu Dec 30 13:07:48 2021] EXT4-fs (sdb): last error at time 1640670898: ext4_find_entry:1312: inode 67240512 [Thu Dec 30 13:12:19 2021] sd 0:0:1:0: [sdb] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE [Thu Dec 30 13:12:19 2021] sd 0:0:1:0: [sdb] tag#0 Sense Key : Medium Error [current] [Thu Dec 30 13:12:19 2021] sd 0:0:1:0: [sdb] tag#0 Add. Sense: Unrecovered read error [Thu Dec 30 13:12:19 2021] sd 0:0:1:0: [sdb] tag#0 CDB: Read(10) 28 00 80 41 13 38 00 00 08 00 [Thu Dec 30 13:12:19 2021] blk_update_request: critical medium error, dev sdb, sector 2151748408 [Thu Dec 30 13:14:38 2021] EXT4-fs warning (device sdb): __ext4_read_dirblock:902: error reading directory block (ino 67240512, block 0) [Thu Dec 30 13:17:05 2021] NOHZ: local_softirq_pending 08 [Thu Dec 30 13:21:26 2021] NOHZ: local_softirq_pending 08 [Thu Dec 30 13:21:59 2021] sd 0:0:1:0: [sdb] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE [Thu Dec 30 13:21:59 2021] sd 0:0:1:0: [sdb] tag#0 Sense Key : Medium Error [current] [Thu Dec 30 13:21:59 2021] sd 0:0:1:0: [sdb] tag#0 Add. 
Sense: Unrecovered read error [Thu Dec 30 13:21:59 2021] sd 0:0:1:0: [sdb] tag#0 CDB: Read(10) 28 00 80 41 13 38 00 00 08 00 [Thu Dec 30 13:21:59 2021] blk_update_request: critical medium error, dev sdb, sector 2151748408 [Thu Dec 30 13:21:59 2021] EXT4-fs warning (device sdb): __ext4_read_dirblock:902: error reading directory block (ino 67240512, block 0) [Thu Dec 30 13:25:32 2021] NOHZ: local_softirq_pending 08 [Thu Dec 30 13:27:19 2021] NOHZ: local_softirq_pending 08 [Thu Dec 30 13:29:14 2021] NOHZ: local_softirq_pending 08 The question, based on the above messages: is the most likely cause a hard drive dying of old age? If yes, what should we do - replace the disk(s)? references - https://access.redhat.com/solutions/35465
“Dying of old age” implies that the drive is old, which we can’t determine from the information in the logs. However I’m assuming this is in a professional setting; if so, in my opinion, any disk medium error should trigger a disk replacement. The “critical medium error” message indicates that this is a disk error, not related to a failure between the disk and the system (e.g. a cable failure). The logs in your question only show a single failed sector, so it might well be a localised failure, but it’s not worth taking the chance if you rely on your data storage. If there’s just one (or a few) failed sectors, you can try remapping them to continue using the drive (temporarily); see smartctl retest bad sectors for example.
HDD IO errors from kernel messages + is this definitely a HDD failure
1,498,831,337,000
When we do the following on a RHEL lab machine: lsblk | grep sdd sdd 8:48 0 1.8T 0 disk we get the sdd disk, but when we run blkid as follows: blkid | grep sdd we do not get any output. We re-scan the disk with: echo 1 > /sys/class/block/sdd/device/rescan but blkid still does not recognize the sdd disk. blkid | grep sdd Why is that, and what can we do about it?
About blkid: When device is specified [… (irrelevant)]. If none is given, all partitions or unpartitioned devices which appear in /proc/partitions are shown, if they are recognized. While lsblk lists information about all available or the specified block devices. [Emphasis mine] Specify a device: blkid /dev/sdd Empty output indicates there is no structure from which blkid could read attributes (e.g. after wipefs -a /dev/sdd).
Disk doesn't appear in blkid but does appear in lsblk
1,498,831,337,000
I have been doing research on partitions lately and I'm quite confused about a few things: what is a partition table and what is it used for? what is a partitioning scheme (GPT and MBR) and what are they used for? Lastly, I have seen the terms 'MBR' and 'GPT' used to describe partition tables, so my last question is: are MBR and GPT just other names for a partition?
Partitions Let's start with another question: What is a disk (from a software point of view)? A disk is a piece of memory. It has a start and an end. It holds pieces of data, enumerated starting at 0 (you call this an address). One piece of data is usually called a sector, which is commonly 512 bytes. Imagine a world without file-systems. You can totally use a disk by just directly writing your data to it. Your data is then located on the disk. It has a certain length. It starts at address a and takes up space up to address b. Now you probably want to have more than one set of data and you want to have your data organized in some way. You may say: I want to split the memory into smaller parts with fixed sizes. I call these parts partitions. I use them to organize my data. So you come up with the concept of a partition table. The partition table is a well-specified list of integer numbers characterizing the disk's partitions (start, end, designated usage type). The MBR is actually much more than just a partition table, but it contains a partition table. The MBR also contains some executable code involved with booting the system. You could say, the MBR is one widely used implementation of the concept of a partition table. The MBR is expected to be found at sector 0. It is made to fit into that one sector of 512 bytes. As a result, there is a limit regarding the number and size of partitions it can describe. GPT is another implementation, but it is larger and consequently able to describe more and larger partitions. Etymology To understand the etymology of the term MBR, we need to consider the history. Before you can even think about how to organize the data, you want your system to boot. Powered off, a computer is pretty much "broken" as it cannot do anything. To become useful after power on, the very first program needs to be loaded from a well known location.
This well known location can be the first sector of the hard-drive (this is a gross simplification of the boot process). The very first program is referred to as boot-loader. Add a few standards and the MBR (master boot record) is born. From this point of view, having a partition table in the MBR was a nice add-on more than a necessity. The boot-loader usually reads the partition table, looks at the first bootable partition, and continues to load the actual operating system. This is why the MBR partition scheme usually comes with one partition for the operating system. With the GPT (GUID Partition Table), there is one designated partition for the boot process, the ESP (EFI system partition). The ESP is usually formatted with a FAT file system. The boot loader is stored in a file. The actual operating system typically resides in another partition. This is why the GPT partition scheme usually comes with at least two partitions: One for the boot-loader, one for the operating system.
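To make the "list of integer numbers" concrete, here is a sketch that fakes a classic MBR partition table in a scratch file: one 16-byte entry at byte 446 (bootable flag 0x80, type 0x83 for Linux at entry offset 4, start LBA stored as a little-endian 32-bit integer at entry offset 8) plus the 0x55AA boot signature at bytes 510-511. The values are made up for illustration:

```shell
# 512-byte scratch "disk", all zeroes.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=512 count=1 2>/dev/null

# Partition entry #1 starts at byte 446.
printf '\200'             | dd of="$img" bs=1 seek=446 conv=notrunc 2>/dev/null  # bootable flag
printf '\203'             | dd of="$img" bs=1 seek=450 conv=notrunc 2>/dev/null  # type 0x83 (Linux)
printf '\000\010\000\000' | dd of="$img" bs=1 seek=454 conv=notrunc 2>/dev/null  # start LBA 2048, LE
printf '\125\252'         | dd of="$img" bs=1 seek=510 conv=notrunc 2>/dev/null  # 0x55AA signature

# Read the start LBA back as an unsigned 32-bit integer (od uses host byte
# order, which matches the on-disk little-endian layout on x86/ARM).
lba=$(od -An -tu4 -j 454 -N 4 "$img" | tr -d ' ')
echo "partition 1 starts at LBA $lba"   # -> partition 1 starts at LBA 2048
rm -f "$img"
```

Tools like fdisk do nothing more mysterious than read and write these fixed-offset integers, which is why a partition table can describe partitions without touching the data inside them.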
partitions (in general)
1,543,156,167,000
We want to capture the disk device that belongs to the OS ( Linux ). Since each Linux machine has a list of disks that are not the OS, we want to capture the disk that belongs to the OS. By fdisk we can see that boot is on sda1: # sfdisk -l | grep Linux /dev/sda1 * 0+ 63- 64- 512000 83 Linux /dev/sda2 63+ 19581- 19518- 156773376 8e Linux LVM So according to that I created the following command, in order to capture the disk that belongs to the OS ( Linux ): # OS_DISK=` sfdisk -l | grep Linux | awk '$2 == "*" {print $1}' | sed s'/\// /g' | awk '{print $2}' | sed 's/[0-9]*//g' ` # echo $OS_DISK sda The command seems to do the job, but I feel this CLI is too long and a little clumsy.
I find the simplest command to identify the operating system disk to be df /.  Unfortunately, it produces a lot of output (by which I mean a header line and many fields), so you would still need to do some filtering to get just the device name. You're right; your command is overly long and somewhat clumsy.  awk is a very powerful program; you rarely need to combine it with grep and/or sed, and having multiple awk commands in the same pipeline is almost never necessary.  Your pipeline can be replaced withsfdisk -l | awk '/Linux/ && $2 == "*" { gsub("[0-9]", "", $1); split($1, a, "/"); print a[3]; }' OK, it's only about a dozen characters shorter, but it's one command instead of five. P.S. sed 's/[0-9]*//g' is a slightly dangerous command.  Because of the g, it doesn't really make sense to have the * also.  To see what I mean, try sed 's/[0-9]*/X/g' with various inputs, and compare to s/[0-9]/X/g and s/[0-9]\+/X/g. OS_DISK=` command ` can be changed to OS_DISK=$(command), and the second form (with the parentheses) is preferred.
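As a side note to the df / suggestion and the sed caveat, here is a small sketch. The df call assumes GNU coreutils (for --output), and the anchored sed pattern shows why digit stripping needs care: anchoring with $ fixes the dangers of 's/[0-9]*//g', but device names like md127 still get mangled because their digits are not a partition number:

```shell
# The device backing / (in a container this may be "overlay" or a mapper path).
root_src=$(df --output=source / | tail -n 1)
echo "root filesystem source: $root_src"

# Strip only a TRAILING partition number.
strip_part() { printf '%s\n' "$1" | sed 's/[0-9]\{1,\}$//'; }
s1=$(strip_part sda1)    # -> sda
s2=$(strip_part md127)   # -> md   (caveat: those digits are part of the name!)
echo "$s1 $s2"
```

So the digit-stripping trick is fine for plain sdXN names, but for LVM, md or NVMe devices you would want lsblk's parent-device lookup instead of string surgery.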
Linux + how to capture the OS disk device [closed]
1,543,156,167,000
I'm stuck in the Linux installation process. I've resized the Windows partition in order to be able to install Linux (dual boot). Here is a screenshot of the computer manager tool: Once I've booted from the live USB, gparted shows me wrong information: there is only one partition which takes up the whole disk. Here is the screenshot of gparted (from the live USB): Do you have any idea? Thank you in advance for your help :-)
This looks like the USB disk you used to boot Linux. I assume there is a third drive in the drop-down. In the live environment, do the following in the terminal (you can leave gparted open): sudo fdisk -l It should spit out three drives: two that are around 120Gb and one 1Tb drive ...
Gparted see NTFS windows partition as fat32
1,543,156,167,000
I have a 500Gb hard drive with 20 bad sectors according to gnome-disk, and when I try to run a SMART self-test, it fails at reading. But gnome-disk shows that every other SMART attribute is OK, and this SMART failure doesn't kick in my motherboard's UEFI SMART failure warning on boot. Is the hard drive unreliable? Can I still use it safely? Is there anything I can do to fix it or prevent failure?
You should replace your hard drive if you value your data. SMART in consumer-grade hardware is usually not very useful, and the firmware mostly reports that everything is OK; in business/server-grade hardware it is usually more informative. Bad sectors are also usually masked internally by the hard drive up to a point. By the time they start showing up/being visible to the outside, it is time to dump the media/hard drive and replace it with a new one. See Google Says Diagnostics Don't Catch Many PC Drive Failures
Bad sectors and SMART failure on hard drive
1,543,156,167,000
I encrypted a disk using cryptsetup. I want to be able to visualize that a known text before encrypting the disk became gibberish after encrypting. How do I do such comparison? Here's an example for the best scenario: In a decrypted disk, assume I make a text file that has the word "test string" inside it. I will somehow be able to visualize "test string" before encryption, and then after encryption, visualize that the "test string" became gibberish. I would want to use the same methods to visualize "test string" and the gibberish so that I can be sure that it's "test string" that became gibberish. If it means I have to find "test string" in hex, then so be it. I just need to be able to see that there's "test string" and then "test string" is nowhere to be found (and instead there are other gibberish). Any idea what kind of methods I should use to probe the disk to find "test string"?
For example, consider the server I work on. The hard disk has a small /boot partition, /dev/sda1, which is by necessity not encrypted, and a large encrypted partition, /dev/sda2, which hosts a LUKS container, which, when opened by cryptsetup automatically at boot after entering the passphrase, appears as /dev/mapper/Serverax. In the container there is a LVM physical volume, on which lives a LVM volume group; the volume group contains the logical volumes Root, Home, Srv and Swap. $ lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 20G 0 disk ├─sda1 8:1 0 294M 0 part /boot └─sda2 8:2 0 19.7G 0 part └─Serverax 252:0 0 19.7G 0 crypt ├─Serverax-Root 252:1 0 10.7G 0 lvm / ├─Serverax-Swap 252:2 0 1G 0 lvm [SWAP] ├─Serverax-Srv 252:3 0 6G 0 lvm /srv └─Serverax-Home 252:4 0 2G 0 lvm /home To see the raw data on the disk, read some blocks directly from /dev/sda2. In the example, the skip=$((2*1024)) skips over the 2 MiB LUKS header, and lands in the LVM header: $ sudo dd if=/dev/sda2 bs=1K count=1 skip=$((2*1024)) 2>/dev/null | hd 00000000 33 b2 f7 1b 03 ce a6 3a 87 b4 03 98 7d a7 b1 cc |3......:....}...| 00000010 1a c9 99 80 01 19 c0 db f0 54 a7 4c 1c 2b 9c ea |.........T.L.+..| 00000020 f3 84 b0 d8 0c 54 c0 fe ec c0 06 a8 8c c0 6b 10 |.....T........k.| ... 
00000200 d4 0b 67 3b ba d1 21 06 58 ce 84 b4 3b 3b e0 f2 |..g;..!.X...;;..| 00000210 4d eb 99 d3 15 63 81 f3 92 b7 ff c2 17 95 ed b3 |M....c..........| 00000220 92 51 ab dc 29 84 9b 6f 68 cc a9 fe 35 cd e0 08 |.Q..)..oh...5...| 00000230 1f d1 e0 52 34 46 13 90 38 c4 3d 18 30 1a 1d c8 |...R4F..8.=.0...| 00000240 1c 05 2f 17 0b ad 39 6f 56 9c 28 71 e3 f7 78 10 |../...9oV.(q..x.| 00000250 97 09 cb 49 50 f5 b1 06 a1 8a e0 4d 7a 0e 39 94 |...IP......Mz.9.| 00000260 15 2d 05 b5 94 75 c0 a2 d1 bf 78 3d ba 30 06 61 |.-...u....x=.0.a| 00000270 e6 82 8d 4a 60 90 81 e7 0a 34 5a f8 03 fc a6 89 |...J`....4Z.....| 00000280 12 11 19 b2 2b 44 9b 0a 07 c1 40 d9 4b df bd 54 |[email protected]| 00000290 0a 40 2b 4f 1f 55 f5 e2 fa 10 41 3b f9 58 5a 2f |.@+O.U....A;.XZ/| ... The same data, decrypted, can be read from /dev/mapper/Serverax; note that this time there is no skip=: $ sudo dd if=/dev/mapper/Serverax bs=1K count=1 2>/dev/null | hd 00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| * 00000200 4c 41 42 45 4c 4f 4e 45 01 00 00 00 00 00 00 00 |LABELONE........| 00000210 be af fb 35 20 00 00 00 4c 56 4d 32 20 30 30 31 |...5 ...LVM2 001| 00000220 47 41 70 58 43 62 74 55 65 6b 33 41 6b 53 54 73 |GApXCbtUek3AkSTs| 00000230 4f 6b 6a 49 49 72 6e 53 66 54 41 77 6e 31 53 6e |OkjIIrnSfTAwn1Sn| 00000240 00 00 60 ed 04 00 00 00 00 00 20 00 00 00 00 00 |..`....... .....| 00000250 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 00000260 00 00 00 00 00 00 00 00 00 10 00 00 00 00 00 00 |................| 00000270 00 f0 1f 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 00000280 00 00 00 00 00 00 00 00 01 00 00 00 00 00 00 00 |................| 00000290 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| * 00000400
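If you first want to convince yourself of the principle without touching a block device, you can reproduce it with a plain file and openssl (an illustration only, not the actual LUKS cipher setup; assumes an openssl build with -pbkdf2 support, and "demo" is a throwaway passphrase):

```shell
printf 'test string\n' > plain.bin
# encrypt the file; the passphrase stands in for your LUKS key
openssl enc -aes-256-cbc -pbkdf2 -pass pass:demo -in plain.bin -out cipher.bin
grep -a -c 'test string' plain.bin     # 1 -- the plaintext is there
grep -a -c 'test string' cipher.bin    # 0 -- only gibberish remains
```

On the real disk the same idea applies: grep -a (or hexdump) against /dev/mapper/Serverax will find your string, while against /dev/sda2 it will not, though scanning a whole device this way is slow.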
After encrypting a disk, how do you check that a known plain-text inside the disk became gibberish? [closed]
1,543,156,167,000
I'm seeing that all my EFI disks have a 1M partition that goes just before the EFI partition: Device Start End Sectors Size Type /dev/sda1 34 2047 2014 1007K BIOS boot /dev/sda2 2048 1050623 1048576 512M EFI System /dev/sda3 1050624 ... I have tried to mount that partition to explore it, but I haven't been able to, nor have I been able to find information online. What's the purpose of this partition and what's inside?
It is a BIOS boot partition, used by the "legacy" BIOS boot method – with EFI being the "new" method. EFI systems ignore this partition. The legacy boot method usually employs an MBR and its partition table. However, disks larger than 2 TB are usually formatted with GPT, and some users still want to boot such a disk the legacy way. On GPT, the BIOS boot partition makes explicit where the legacy bootloader shall store its code; GRUB is a notable example. This partition holds raw bootloader code and has no file system, hence it cannot be mounted.
What's the small 1M partition that goes before the EFI partition?
1,543,156,167,000
From the sar command on saX files we can get the disk utilization as the following: sar -d -f /var/log/sa/sa18 | grep Average Average: dev8-0 1.24 0.00 150.06 121.40 0.04 30.40 4.72 0.58 Average: dev253-0 0.32 0.00 3.75 11.83 0.01 17.95 3.48 0.11 Average: dev253-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 Average: dev253-2 1.12 0.00 146.31 130.68 0.04 31.79 4.46 0.50 Average: dev8-16 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 Average: dev8-32 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 Average: dev8-48 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 Average: dev253-3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 We can see that the disks are identified by MAJ:MIN (dev8-16, dev8-48, etc.). Is it possible to get the real disk names, such as sdb, sdc, etc., using the sar CLI (sar -d -f /var/log/sa/sa18 | grep Average)?
Try this:

#! /bin/bash
# resolve a sar token such as "dev8-0" to the kernel device name via sysfs
devrez() {
    l=/sys/dev/block/$(echo "$1" | sed 's/^dev//;s/-/:/')
    if [ ! -L "$l" ]; then
        printf '\t[%s] not found' "$1"
        return 1    # "return -1" is not a valid exit status in bash
    fi
    readlink -f "$l" | awk -F / '{ORS="";print "\t"$NF}'
}
export -f devrez

sar -d -f /var/log/sa/sa18 | awk '{OFS="\t";ORS="";print $1; system("/bin/bash -c '\''devrez "$2"'\''");$1="";$2="";print "";print;print "\n"}'
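The core of the trick is just the sysfs lookup: /sys/dev/block/MAJ:MIN is a symlink to the device directory, so turning sar's devMAJ-MIN token into a name is a two-step string substitution plus a readlink (a minimal sketch; the final resolution of course only yields a real name if that major:minor pair exists on your machine):

```shell
tok=dev253-2
# "dev253-2" -> "253:2"
majmin=$(echo "$tok" | sed 's/^dev//; s/-/:/')
echo "$majmin"
# resolve to a kernel name such as dm-2 or sdb (device must exist):
readlink -f "/sys/dev/block/$majmin" 2>/dev/null | awk -F/ '{print $NF}'
```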
convert MAJ:MIN – device numbers to real disks names