Columns: date (int64) · question_description (string) · accepted_answer (string) · question_title (string)
1,333,326,770,000
One argument I hear often about systemd is that it is better adapted to current hardware needs, e.g. here: Computers changed so much that they often don't even look like computers. And their operating systems are very busy: GPS, wireless networks, USB peripherals that come and go, tons of software and services running ...
Systemd reimplements many functionalities previously scattered over the whole OS (e.g. in the udev daemon), and is able to recognize that a device was just plugged in or out. At the same time, systemd holds all system service configuration: what needs to be run, how to run it, etc. And, simply, it has all the knowledge needed to s...
systemd in the era of hotplugable devices
1,333,326,770,000
I have been filling a hard drive and a couple of backup drives with the family pictures and videos. Once a video goes into the archive, it remains there in a folder correctly labeled with the date. The data collection has grown up to the point where I need a new drive (and new backup drives). But I wonder why should I...
I assume that you are using an ext4 file system: You can modify the size of the reserved space with tune2fs. The following command line reduces the reserved space to 1% (from default 5%). sudo tune2fs -m 1 /dev/sdxn where x is the drive letter and n is the partition number (of a partition with an ext file system). F...
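As a rough illustration of what those percentages mean in practice (the 4 TB drive size below is hypothetical; only the 5% default and the -m 1 target come from the answer):

```shell
# Back-of-the-envelope arithmetic: ext4 reserves 5% for root by default;
# tune2fs -m 1 drops that to 1%. On a hypothetical 4 TB archive drive:
disk_bytes=$((4 * 1000 * 1000 * 1000 * 1000))   # 4 TB as drives are sold (decimal)
reserved_default=$((disk_bytes * 5 / 100))
reserved_low=$((disk_bytes * 1 / 100))
freed=$((reserved_default - reserved_low))
echo "space returned to the user by -m 1: $((freed / 1000 / 1000 / 1000)) GB"
```

On a pure archive drive the reserved blocks mostly exist for root daemons and fragmentation avoidance, so reclaiming most of them is usually a reasonable trade-off.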
Is it wrong to fill the reserved space in an external USB drive for archiving purposes?
1,333,326,770,000
Is there a way to distinguish between internal hard drives and external hard drives? I need to see how many external hard drives we have and which server they are connected to. This is the screenshot I took, and judging by its name, sde is an external hard drive. But I'm not sure, so help me out. Furth...
@umair I am not sure why sdb is showing as removable; could you post the output of this script:
for device in /sys/block/*
do
    if udevadm info --query=property --path=$device | grep -q ^ID_BUS=usb
    then
        echo $device
    fi
done
How to check how many External Hard Drives are connected to Linux Server
1,333,326,770,000
I've got a Drobo in three partitions on Linux Mint, and it periodically drops off the filesystem, losing its mount points. Upon return it disregards /etc/fstab and mounts as a new device under /media--as if I'd inserted a new USB stick. AFAICT, the fstab declarations are correct--they work manually--but maybe I've mi...
My main concern is why /etc/fstab is disregarded ... The manual mount immediately put them right back where they should be The auto-mounting you refer to is performed by udisks. As you desire, it's supposed to defer to the entry in /etc/fstab, if there is one. But if there isn't one, it mounts under /media. It so...
Drobo filesystem ignores /etc/fstab, automounts in the wrong place after connection is interrupted
1,481,580,329,000
I have a horrible situation where I have to restore data from a damaged RAID system in a rescue Debian Linux. I just want to mount them all to /mnt/rescue in read-only mode to be able to copy VMWare GSX images to another machine and migrate them to ESXi later on. The output for relevant commands is as follows. fdisk -l...
In my case I brought up CentOS 7 and tried following everyone's instructions on this page. I kept running into a device busy message. The reason in my opinion why you are getting the mdadm: cannot open device /dev/sda1: Device or resource busy error message is because the device is already mounted as something else...
How to mount a disk from destroyed raid system?
1,481,580,329,000
I am trying to customize the initramfs rescue environment and would like to force the kernel to fail mounting / and drop into the (initramfs) rescue shell, as opposed to single user mode. How can I do that? NB: I know how to hook into initramfs-tools to achieve the customization steps, but I need to be able to verify ...
This will drop you into an initramfs shell: Start your computer. Wait until the Grub menu appears. Hit e to edit the boot commands. Append break=mount to your kernel line. Hit F10 to boot. Within a moment, you will find yourself in a initramfs shell. If you want to make this behavior persistent, add GRUB_CMDLINE_LIN...
How can I force a Ubuntu kernel to fail mounting / and drop into the initramfs rescue shell?
1,481,580,329,000
On the Internet I've only found this: /etc/kernel/postinst.d/51-dracut-rescue-postinst.sh $(uname -r) /boot/vmlinuz-$(uname -r) but it doesn't work in Fedora 36 and soon to be released version 37, because this file is missing, in fact the entire /etc/kernel/postinst.d/ directory is empty. I've also found dnf reinstal...
Rescue kernels use a general-purpose initramfs, so you have to regenerate it. (Compare the sizes of your initramfses to see the impact of this.) To create a new rescue kernel using the currently-running kernel, on Fedora 36, run sudo rm /boot/*rescue* sudo /usr/lib/kernel/install.d/51-dracut-rescue.install add "$(unam...
How to manually regenerate the rescue kernel from the running/installed kernel in Fedora in 2022?
1,481,580,329,000
This is mostly aimed at Debian/Ubuntu, but I feel savvy enough on a variety of distros to be able to adapt the solution for one distro to another. Here's my scenario. There are a few situations when the boot process will drop you to the shell (usually busybox) of the initrd. Most notably whenever you run a hardware RA...
Ubuntu 16.04 contains a package called dropbear-initramfs which is supposed to provide this feature. Lightweight SSH2 server and client - initramfs integration dropbear is a SSH 2 server and client designed to be small enough to be used in small memory environments, while still being functional and secure enoug...
Are there any canned solutions for running sshd in the initrd?
1,481,580,329,000
To make a long story short, my (CentOS 7) server's /boot is too small (100MiB) to hold 2 kernels plus the automatically generated rescue image. I want to avoid the hassle of repartitioning and reinstalling my server by preventing the rescue image from being generated. This would leave enough space for at least 2 kerne...
Open the file /usr/lib/dracut/dracut.conf.d/02-rescue.conf and change dracut_rescue_image="yes" to dracut_rescue_image="no" This seems to be the only way for CentOS 7.
How do I disable the creation of the rescue boot image on CentOS?
1,481,580,329,000
I have created a new Fedora live USB with the intention of booting into rescue mode and fixing the bootloader, so that I can dualboot win7 and Fedora 20. However, I do not understand how I am to boot into rescue mode, seeing as the installation boot prompt is not shown as described by the guide, I am taken directly to...
When you boot the live distros you'll typically get a graphical boot menu (screenshot omitted). When you get to this screen just hit the Esc key, which will bring up the grub boot prompt from where you can type linux rescue. Additional boot options are covered here in this Fedora document titled: 7.1.3. Additional Boot Options. References...
Booting Fedora in rescue mode
1,481,580,329,000
When I enter the grub menu, I get two entries: CentOS Linux (3.10.0-514.21.1.el7.x86_64) 7 (Core) CentOS Linux (0-rescue-e1ac24cbe9f94f2caa228d77e027be8b) 7 (Core) When I boot into the second line (the rescue one), I get a normal prompt as if I had booted into the first line. I was expecting something like a rescue ...
it still asking me for root password, root FS is not in read only mode This is the norm for systemd's rescue mode and thus for systemd operating systems. For not (re-)mounting filesystems and a read-only / mount, you should look to emergency mode, which is not the same as rescue mode. Both emergency and rescue mod...
Why booting into rescue mode menu doesn't do anything?
1,481,580,329,000
I tried to help a friend with hard drive boot problems. I first asked her to make a rescue disk (Ubuntu 12.04.3), and boot from it. Then I asked her to open a console (Alt+F1) and use sudo to become root. All OK. Then I told her to install openssh-server - so I can remotely login and look at the system - but that does...
This is a problem with apt-get: it knows about dependencies, but does not know how to upgrade a dependency. And since 12.04.3 was released, both openssh-client and openssh-server have been updated, but the first is already installed on the rescue DVD. You can have your friend do a complete upgrade of all packages befo...
Installing openssh-server after Rescue Disk boot
1,481,580,329,000
I have a computer with Ubuntu 13.10 installed. The user (say Walesa) has changed the ownership of the /etc folder and all its subfolders from root to Walesa using a privileged file manager. As sudo was disabled, he rebooted hoping it would be re-enabled again. But security does not allow log-in after entering username and p...
Doing: sudo chown -R root.root /etc on the commandline will set /etc and everything underneath to owner root and group root However on my system (Ubuntu 12.04) not everything under /etc is in group root. The following list might help (generated with sudo find /etc ! -gid 0 -ls | cut -c 29-): root dovecot 534...
Ownership of /etc folder was changed; how to restore it using the command line?
1,481,580,329,000
What can I use to create a backup image of my entire system that will be saved on a LAN computer via SSH? If I break anything later, I want to be able to restore my entire system as it was before the backup in minutes. Is there a Live CD that can "save backup image to ssh://..." and "restore from backup image ssh://.....
Clonezilla would be a suitable product for a whole-disk image. It works in a fashion similar to Ghost.
How do I backup everything?
1,481,580,329,000
I have a centOS 7.5 server that does not boot up. Only boots up to rescue mode. This happened after a forced reboot of the server. I got the following error on CentOS 7.5 after checking the journalctl -p err grub2 was installed after getting the correct x86_64 file into the system, tried to mount the boot/efi, but g...
Some security hardening manuals suggest disabling the loading of unnecessary filesystem types. The examples typically include vfat among the types to be disabled. But for systems using UEFI, vfat is a necessary filesystem type: the EFI System Partition (ESP) that contains the bootloader *.efi files is typically a FAT3...
/boot/efi failed to mount due to unknown file system "vfat" : CentOS 7.5
1,481,580,329,000
I hope you're doing well. I work as a technician in an IT company focused on Windows systems and cloud stuff, hence my knowledge of Linux is sadly very limited. So please excuse any dumb questions, but I'll try to be as helpful as possible. Also this is my first time posting here, so please tell me if I do something wr...
The main problem here is not the RAID but a bogus partition table. The partition table is made for 512 byte sectors however the drive is detected as 4K native sectors. So all partition offsets and sizes are completely wrong. You might be able to work around it with losetup: losetup --find --show --read-only --sector-s...
Recover files from Linux Raid1 member disk - as bad as it gets
1,481,580,329,000
I installed RHEL7 using vmware and at some point two boot options appeared, one of them appears to be a rescue option (second in the image). What is this option and how can I remove it? Should it be removed?
The second GRUB option is to boot in rescue mode, when something has gone haywire. To remove it: 1) Remove the kernel image file rm -rf /boot/vmlinuz-0-rescue-6b78... 2) Remove the boot option from GRUB grubby --remove-kernel=/boot/vmlinuz-0-rescue-6b78... (obviously, complete the commands with the correct number) Y...
What is the rescue boot option in RHEL7?
1,481,580,329,000
I have a faulty 320GB drive which has reading errors at roughly the same GB positions, but the exact positions vary. I am OK with the probability of errors; that is out of the question here. First of all I was surprised that I need conv=sync for conv=noerror to actually be useful, but OK, I have spare time to grow a new foot. I found ...
Try ddrescue (gddrescue in most distros): GNU ddrescue - Data recovery tool. Copies data from one file or block device to another, trying to rescue the good parts first in case of read errors.
Blindly dd'ing faulty drive to new drive
1,481,580,329,000
I have an interesting case, where e2fsck refuses to recognize the file system inside a qcow2 image file. Using testdisk I am able to see the partition, so some markers would be left. The reason this problem occurred in the first place was because the host of the virtual machine died. So I choose None as the "type" of ...
Okay, sorry for answering my own question so soon, but I noticed something flabbergasting. The .qcow2 file was of size 120400379904 Bytes, whereas the conversion of the image with qemu-img convert -O raw gave me an image of size 128849018880 Bytes. Quite a difference. Now, if we take the size in sectors found by testd...
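The size mismatch can be checked with plain shell arithmetic (both byte counts are taken from the answer itself): the raw conversion is exactly 120 GiB, while the qcow2 file is smaller because qcow2 stores only allocated clusters.

```shell
# Numbers from the answer: the qcow2 file vs. the qemu-img raw conversion.
qcow2_bytes=120400379904
raw_bytes=128849018880
gib=$((1024 * 1024 * 1024))
echo "raw image: $((raw_bytes / gib)) GiB"               # exactly 120 GiB
echo "difference: $((raw_bytes - qcow2_bytes)) bytes"    # space the qcow2 never allocated
```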
How to find alternative superblocks in ext3 file system of partition-less qcow2?
1,481,580,329,000
On booting the Rescue System from an openSUSE DVD, I find myself at a "rescue login" prompt: What are the default login details?
The rescue login: text is a login prompt expecting you to type in a username. Enter root and press Enter, that should give you a root shell. If it asks you for a password, you press Enter again. Further reading: https://doc.opensuse.org/documentation/leap/startup/single-html/book.opensuse.startup/index.html#sec.troub...
What is the default openSUSE Rescue login?
1,481,580,329,000
So I'm having a problem with kimsufi server. I was installing windows by using this command: wget -O- ...url.../server.gz | gunzip | dd of=/dev/sda And I messed up and accidentally ran that command on already existing windows installation, now I can't use RDP anymore, I guess it's all gone now, it somehow wrote over ...
I recovered the partly overwritten partition with testdisk. In case someone has the same problem, here's the solution (use testdisk): Intel/PC Partition > Analyse > Quick search > And there I found the deleted partition [1.8 TB] > Enter to continue > [Write] (Write partition structure to disk) > And now the partit...
How to recover overwritten partition?
1,481,580,329,000
I'm looking for a Linux live system which allows me to investigate boot failures via SSH on my server, which sits under my desk and doesn't have a graphics card for energy-saving reasons. I sometimes make configuration/administration errors which lead to boot failures before the SSH server starts. In this case I'd l...
As @RuiFRibeiro said in his comments, this is what serial consoles are for. USB to RS-232 serial adaptors are cheap ($5-$10), and so are null-modem cables. BTW, according to the ASRock X99 Extreme specs page, your motherboard has a COM port header on it. Most motherboards do. All you need is the cable kit to extend...
Linux live system for headless rescue
1,481,580,329,000
I would like to mount an Apple iPad to my Linux device, to make a jpeg or ddrescue recovery on it. How would I do this with an Apple device?
You can't access the block device on Apple directly, it is forbidden by the OS, on which you don't have root access, despite the fact that you've purchased it and it is yours. To be able to do this, you have to jailbreak it (I intentionally don't use the word "crack", because it is your property). It is hard. Although the O...
How to mount an Apple device from Linux?
1,481,580,329,000
I have received a computer on which the previous owner had attempted to install some Linux OS, I don't know which particular one. I have both an Ubuntu and a Windows bootable USB drive and I have attempted to boot off of them with priority set to boot off USBs in the BIOS; however, when the computer boots it leads me to grub resc...
If you're getting the same error on both a Windows USB and a Linux USB stick then it's unlikely that the USB stick is being used to boot. The 'no such device' error message should reference a UUID, which should be different between the two operating systems (that, and Windows doesn't use GRUB). To me this indicates one of two t...
Inherited computer, trying to boot off USB but not working
1,481,580,329,000
I only have access to my vserver via a minimal rescue system over ssh. It does not have scp or ftp installed. Is there an easy way to backup the files, preferably directly to an ftp server, but to my local machine would also be fine. Maybe this helps showing the capabilities of the rescue system: uname -a Linux cust...
tar cvzf - file1 file2 dir1 dir2 | ssh user@remotesystem "cat > /big/partition/rescue.tgz" would be my preference. You could even unpack on the fly: tar cvzf - file1 file2 dir1 dir2 | ssh user@remotesystem "cd /big/partition; tar xvzfp -" But as fuero points out, one could also rsync -avz -e "ssh user@remotesystem"...
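A local dry run of that tar pipeline, with the ssh stage replaced by a plain redirect so it can be tried anywhere (all paths here are made up; on the real rescue system the redirect would be the quoted ssh "cat > ..." stage):

```shell
# Stage a throwaway directory and pack it the same way the answer does.
workdir=$(mktemp -d)
mkdir "$workdir/data"
echo "important" > "$workdir/data/file1"

# On the rescue system this redirect would instead be:
#   | ssh user@remotesystem "cat > /big/partition/rescue.tgz"
tar czf - -C "$workdir" data > "$workdir/rescue.tgz"

# Verify the archive really contains the file
tar tzf "$workdir/rescue.tgz"
```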
How to backup files with only a minimal rescue system?
1,481,580,329,000
I accidentally deleted the /usr/bin directory. Using a bootable usb, is it possible to rescue my machine?
This is possible, but you might be better off re-installing. If you want to try, I would first try to copy enough of dpkg to your filesystem that dpkg will run. There are a bunch of files from dpkg that are in /usr/bin/. Copy those in. For convenience, the list is /usr/bin/dpkg-trigger /usr/bin/dpkg-deb /usr/bin/dpkg ...
Rescue /usr/bin on Debian Wheezy?
1,403,200,807,000
I recently resized the hard drive of a VM from 150 GB to 500 GB in VMWare ESXi. After doing this, I used Gparted to effectively resize the partition of this image. Now all I have to do is to resize the file system, since it still shows the old value (as you can see from the output of df -h): Filesystem ...
If you only changed the partition size, you're not ready to resize the logical volume yet. Once the partition is the new size, you need to do a pvresize on the PV so the volume group sees the new space. After that you can use lvextend to expand the logical volume into the volume group's new space. You can pass -r to t...
Can't resize a partition using resize2fs
1,403,200,807,000
I want to shrink an ext4 filesystem to make room for a new partition and came across the resize2fs program. The command looks like this: resize2fs -p /dev/mapper/ExistingExt4 $size How should I determine $size if I want to subtract exactly 15 GiB from the current ext4 filesystem? Can I use the output of df somehow?
You should not use df because it shows the size as reported by the filesystem (in this case, ext4). Use the dumpe2fs -h /dev/mapper/ExistingExt4 command to find out the real size of the partition. The -h option makes dumpe2fs show super block info without a lot of other unnecessary details. From the output, you need the ...
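The arithmetic this answer is heading toward can be sketched with hypothetical dumpe2fs numbers (a 200 GiB filesystem with 4k blocks; only the "-15 GiB" goal is from the question):

```shell
# Hypothetical values as dumpe2fs -h would report them:
#   Block count: 52428800
#   Block size:  4096
block_count=52428800
block_size=4096
current_bytes=$((block_count * block_size))
new_bytes=$((current_bytes - 15 * 1024 * 1024 * 1024))   # subtract exactly 15 GiB
new_blocks=$((new_bytes / block_size))
# Without a unit suffix, resize2fs takes the size in filesystem blocks:
echo "resize2fs -p /dev/mapper/ExistingExt4 $new_blocks"
```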
How do I determine the new size for resize2fs?
1,403,200,807,000
For resizing LVM2 partition, one needs to perform the following 2 commands: # lvextend -L+1G /dev/myvg/homevol # resize2fs /dev/myvg/homevol However, when I perform lvextend, I see that the changes are already applied to the partition (as shown in Gnome Disks). So why do I still need to do resize2fs?
The lvextend command (without the --resizefs option) only makes the LVM-side arrangements to enlarge the block device that is the logical volume. No matter what the filesystem type (or even whether or not there is a filesystem at all) on the LV, these operations are always similar. If the LV contains an ext2/3/4 files...
Why do I need to do resize2fs after lvextend?
1,403,200,807,000
What does the resize2fs command do when we extend or reduce a logical volume? Is its function the same or different when using the lvextend and lvreduce commands?
There are actually four different behaviors resize2fs can have (one of them trivial). It depends on if the filesystem is mounted or unmounted and if you're shrinking or extending. Mounted, Extending Here, resize2fs attempts an online resize. More or less, this just tells the kernel to do the work. The kernel then beg...
What does the resize2fs command do in Linux?
1,403,200,807,000
I need to shrink a large ext4 volume, and I would like to do it with as little downtime as possible. With the testing I've done so far it looks like it could be unmounted for the resize for up to a week. Is there any way to defragment the filesystem online ahead of time so that resize2fs won't have to move so many bloc...
From what I can tell, ext4fs supports online defragmentation (it's listed under "done", but the status field is empty; the original patch is from late 2006) through e4defrag in e2fsprogs 1.42 or newer which when running on Linux 2.6.28 or newer allows you to query status for directories or possibly file systems, and a...
Decrease time to shrink ext4 filesystem
1,403,200,807,000
Can someone help me with this, because it's confusing me: I have a 1.8T disk (it's a VM virtual disk), here is a snippet of df: df -TH Filesystem Type Size Used Avail Use% Mounted on /dev/sdb ext4 1.8T 1.6T 91G 95% /af Here is the partition info: parted /dev/sdb...
That would work; ext4 doesn't care about whether the block device it resides on is a partition, a whole hard drive, an LVM volume, a network block device, an iSCSI target… All it sees is that there are blocks.
filesystem on disk without partition
1,403,200,807,000
I added a new disk (/dev/vdb) of 2TB with existing data from the previous 1TB disk. I used fdisk /dev/vdb to extend its only partition /dev/vdb1 to full capacity of 2TB from previous 1TB. (In other words, I deleted vdb1, and then re-created it to fill the disk. See How to Resize a Partition using fdisk - Red Hat Custo...
I used fdisk /dev/vdb to extend its only partition /dev/vdb1 to full capacity of 2TB from previous 1TB... See How to Resize a Partition using fdisk - Red Hat Customer Portal. And then I did [resize2fs /dev/vdb1]... We can see this did not change the size of your filesystem. Here is why: resize2fs reads the size of t...
resize2fs fails to resize partition to full capacity?
1,403,200,807,000
I need to resize my first disk (/dev/xvda) from 40 GB to 80 GB. I'm using XEN virtualization, and the disk is resized in XenCenter, but I need to resize its partitions without losing any data. The virtual machine is running Debian 8.6. Disk /dev/xvda: 80 GiB, 85 899 345 920 bytes, 167 772 160 sectors Units: sector...
This should be relatively easy, since you're using LVM: First, as always, take a backup. Resize the disk in Xen (you've already done this; despite this, please re-read step 1). Use parted to resize the extended partition (xvda2); run parted /dev/xvda, then at the parted prompt resizepart 2 -1s to resize it to end at ...
How to resize LVM disk in Debian 8.6 without losing data
1,403,200,807,000
I inherited an old PC-server (quad Pentium 4) that only had partitions for /, /boot and swap (RAID1 with 2 1T SATA disks), but needed to update the distro (from CentOS 6.9). I decided to create a new partition so that the one containing / could be formatted. But I forgot to add the -p flag to resize2fs and now it's si...
Definitely an interesting question, and while your result was pretty good (and as I would have hoped, since catching SIGINT is not exactly rocket science, and pausing halfway through merely relocating some data blocks doesn't seem hard either), there are enough non-success stories as well, like for example this 10-year-old Debian bug https://bu...
Just how dangerous is sending SIGINT to resize2fs tasked with shrinking?
1,403,200,807,000
I've previously used growpart and resize2fs to resize a mounted online ext4 partition in a Linux system. Currently I have an Ubuntu guest running in VirtualBox in which I'd like to resize the partition /dev/sda5. I've already extended the virtual disk on the host via vboxmanage modifyhd --resize..., however after running ...
The commands did not work as expected as they were contained within an extended partition as described here: https://askubuntu.com/a/365953/585364 Instead I had to first extend the /dev/sda2 extended partition that was the parent of /dev/sda5. So all the commands that were required (in my specific case): growpart /de...
Ubuntu ext4 partition is not being extended or resized as expected with growpart or resize2fs
1,403,200,807,000
If I create a small filesystem, and grow it when I need to, will the number of inodes increase proportionally? I want to use Docker with the overlay storage driver. This can be very inode hungry because it uses hardlinks to merge lower layers. (The original aufs driver effectively stacked union mounts, which didn't ...
Yes. See man mkfs.ext4: -i bytes-per-inode Specify the bytes/inode ratio. mke2fs creates an inode for every bytes-per-inode bytes of space on the disk. The larger the bytes-per-inode ratio, the fewer inodes will be created. This value generally shouldn't ...
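To make the ratio concrete, a sketch with assumed numbers (16384 is the usual ext4 inode_ratio default from /etc/mke2fs.conf; the 100 GiB size is made up):

```shell
# mke2fs creates about (filesystem size / bytes-per-inode) inodes, and
# growing the filesystem keeps roughly the same ratio.
fs_bytes=$((100 * 1024 * 1024 * 1024))   # hypothetical 100 GiB filesystem
bytes_per_inode=16384                    # common ext4 default inode_ratio
echo "approx inodes: $((fs_bytes / bytes_per_inode))"
```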
If I grow an ext4 partition, will it increase the number of inodes available?
1,403,200,807,000
I have a 4 GB SD card. Before the image load root@ubuntu# fdisk -l Disk /dev/sdb: 3965 MB, 3965190144 bytes 49 heads, 48 sectors/track, 3292 cylinders, total 7744512 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Di...
You must tell apart the resizing of a block device (here: /dev/sdb4) from the resizing of a file system. A file system can be smaller but not bigger than the underlying block device. You should make a backup of the partition table: sfdisk -d /dev/sdb > ~/sfdisk_sdb.txt Then you make a copy of that file and adapt the ...
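A toy version of the dump-edit cycle, with an invented single-partition dump so the edit itself can be demonstrated without touching a real disk (the real file would come from sfdisk -d /dev/sdb and, once adapted, be written back with sfdisk /dev/sdb < file):

```shell
# Fabricated sfdisk-style dump standing in for ~/sfdisk_sdb.txt
dump=$(mktemp)
cat > "$dump" <<'EOF'
label: dos
device: /dev/sdb
unit: sectors

/dev/sdb4 : start=2048, size=7742464, type=83
EOF

# Grow the size= field of /dev/sdb4 (numbers are made up)
sed -i 's/size=7742464/size=7744464/' "$dump"
grep 'size=' "$dump"
```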
extending a partition by resize2fs
1,403,200,807,000
I have a partition which contains MySQL data which is constantly growing. My LVM PV has precious little free space remaining and therefore I find I'm frequently adding additional space to my /var partition using lvextend and resize2fs in smallish increments (250-500 MB at a time) so as not to give too much space to /v...
Beyond the wear and tear on the HDDs I can't see any reason why this would be dangerous. I've never come across an EXT3/EXT4 parameter that limits the number of times you can do this. There isn't any counter I've seen either. In looking through the output from tune2fs I see nothing that I would find alarming which woul...
Is there a problem using resize2fs too often?
1,403,200,807,000
Mount has the option offset to specify that a file system does not start at the beginning of a device but some specific amount of bytes after. How can I use resize2fs, which does not have that option, to resize such a file system which does not start at the device's beginning?
The offset option of mount does not get passed to mount directly, but to losetup, which sets up a loop device that refers to the offset location on the underlying block device. Mount then performs its operations on that loop device rather than the raw block device itself. You can also use losetup to make resize2fs...
Using resize2fs with file system offset
1,403,200,807,000
Apologies for this question but I am very new to Linux. When I installed my Fedora distribution I only allocated 20GB of my hard drive space for its partition. I recently used GParted and tried to increase the size of the partition to around 40GB. I was under the impression that I was successful but today I tried to ...
In this case, your file system is on the LV (Logical Volume), which is on the partition. If you expand the partition, your LV will not be expanded. Please run these commands: pvresize <device name> <-- This will let the Physical Volume know that the partition it is on has been expanded. And: lvextend -l +100%FREE /d...
How to increase size of filesystem to match partition
1,403,200,807,000
So, I have a 120 GB SSD (/dev/sdb) on which I have a dual boot of Windows 7 and Fedora 17. When I first started I only had a 60 GB SSD so my space was very limited. I have a partition on my SSD (/dev/sdb4), which I created with gparted, that shows a "Partition 5 LVM2" (/dev/sdb5) below it, which I believe is what the LVM i...
Extend your physical volume first, and then the logical volume: pvresize /dev/sdb4 lvextend /dev/vg_mine/lv_root Note that I've left off the -L+16G — this will use all free space.
Extend my LVM After Upgrading SSD
1,403,200,807,000
Good afternoon! I am attempting to shrink an ext4 partition and I have found many tutorials online to achieve this, however, when implementing the actual changes, resize2fs is telling me wrong information! Here is the scenario: # parted -s /dev/sdb unit GB print Model: Hitachi HTS725050A7E630 (scsi) Disk /dev/sdb: 5...
469G is 469*1024*1024k, which is 491782144k. 122945536 blocks of 4k is also 491782144k. Parted uses G in terms of 1000, not 1024. Try unit Gi with parted.
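The unit mismatch is easy to confirm numerically (the block and size figures are taken from the answer): resize2fs's 122945536 4k blocks and 469 GiB are the same number of KiB, while parted's default G is decimal.

```shell
# 122945536 blocks of 4 KiB vs. 469 GiB expressed in KiB:
blocks_kib=$((122945536 * 4))
gib_kib=$((469 * 1024 * 1024))
echo "$blocks_kib KiB == $gib_kib KiB"   # identical; the "mismatch" is only units
```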
why is resize2fs telling me wrong information
1,403,200,807,000
I ordered a dedicated server and it came with a primary partition of 20gb and a second partition of 1.8TB. I see no point in this as I plan to use it as a web server. As such I need to put pretty much everything into /var. I have rebooted in rescue mode and I have deleted the 1.8TB partition. My FS now looks like this...
resize2fs complains it has nothing to do because it only operates at the filesystem level. First you have to grow the partition underneath it with fdisk, cfdisk or parted. https://geekpeek.net/resize-filesystem-fdisk-resize2fs/ It is similar with LVM: it needs more free partition space to grow, or a new partition added to th...
Can't resize main partition on CentOS 7
1,403,200,807,000
I am trying to understand what I did wrong with the following mount command. Take the following file from here: http://elinux.org/CI20_Distros#Debian_8_2016-02-02_Beta Simply download the img file from here. Then I verified the md5sum is correct per the upstream page: $ md5sum nand_2016_06_02.img 3ad5e53c7ee89322ff8...
Once you have extracted the filesystem you are interested in (using dd), simply adapt the file size (967424*4096=3962568704): $ truncate -s 3962568704 trunc.img And then simply: $ sudo mount -o loop trunc.img /tmp/img/ $ sudo find /tmp/img/ /tmp/img/ /tmp/img/u-boot-spl.bin /tmp/img/u-boot.img /tmp/img/root.ubifs.9 /...
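Where the truncate size comes from: the block count reported in the error times the 4096-byte ext block size (both numbers are from the question):

```shell
# 967424 blocks * 4096 bytes/block = the exact byte size the image needs
fs_bytes=$((967424 * 4096))
echo "truncate -s $fs_bytes trunc.img"
```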
bad geometry: block count 967424 exceeds size of device (415232 blocks)
1,403,200,807,000
When I try to resize the disk I get: resize2fs /dev/sdb resize2fs 1.42.9 (28-Dec-2013) Please run 'e2fsck -f /dev/sdb' first. So when I try to run e2fsck I get the following: e2fsck -f /dev/sdb e2fsck 1.42.9 (28-Dec-2013) Pass 1: Checking inodes, blocks, and sizes Deleted inode 142682 has zero dtime. Fix<y>? is ...
It’s OK to let fsck fix this, it refers to a deleted inode — the data has already been deleted, nothing more will be deleted.
rhel + efsck + Deleted inode xxxxx has zero dtime
1,403,200,807,000
I have a RAID 6 array set up under CentOS 7 which originally had four 1TB drives assigned, resulting in a total capacity of 2TB. After much fussing about as described here, I was able to add a fifth drive to the array successfully, growing it out to 3TB. The confusion now is how to get the partition to grow out to the...
The crux of the problem is that a filesystem can only be expanded into the space that's seen as available on the block device you've put it onto. With partitions, that means the partition's starting and ending sector. As it stands now the kernel knows the space is there but your partition's end sector is essentially t...
Partition resize in CentOS 7
1,403,200,807,000
I want to shrink my LVM physical volume and use this free space to create another partition for another OS. I resized my root and home logical volumes using lvresize and now I'm trying to use pvresize, but I get the following error: /dev/sda2: cannot resize to xxxxx extents as later ones are allocated. This PV's free...
pvmove moves segments, not free space. You need to move this range /dev/sda2:97280-114339 to start at segment 59392 Those are 17061 segments. According to this you should: # pvmove --alloc anywhere /dev/sda2:97280-114339 /dev/sda2:59392-76453 Then resize PV, then partition, and then enjoy your free space. While LVM t...
How to shrink LVM physical volume with free space
1,403,200,807,000
I tried the process from this post resize partition on an image file. I didn't succeed in understanding why it goes wrong in my case. I produced a 8GB image using dd. The image contains two partitions. I map the image with losetup -P /dev/loop0 $image-file. Then: resize2fs /dev/loop0p2 4000M resize2fs 1.44.1 (24-Mar-2...
Thank you @sudodus and @fra-san. I think there is a compatibility issue when combining resize2fs and parted for shrinking a fs/partition: resize2fs works in 4k blocks, while parted works in bytes or MB, GB, etc. I eventually found another way to shrink the 2nd partition: gnome-disks. It is provided with Linux Mint and works pret...
How to shrink a file image, produced with dd?
1,403,200,807,000
Is there any way to tell if a file system (regardless of its type) has been resized? Specifically shrunk?
As far as I know, there is no direct way for this purpose. The only idea which sprang to my mind was to examine the partition's contents - e.g. the filesystem metadata. If the size recorded in the metadata does not match the size of the partition, it may have been resized. Even if the contents have been re...
How to tell if a file system has been shrunk?
1,403,200,807,000
I've shrunk my /home from 2.7TB to 100G, and I've extended /root, /usr, /tmp and /var, but I have been looking for a way to create an /opt partition for 3 hours now and can't find it. The setup is a 3TB luks encrypted partition on /dev/sdb3 (container); inside it are my lvm partitions /root, /usr, /tmp, /var and /home in a ...
The apparent answer is to run these two commands lvcreate --name opt --size 23Gi group mkfs -t ext4 -L opt /dev/group/opt However, via the comments thread it became apparent that lvcreate threw an error message, /dev/group/opt: not found: device not cleared Aborting: Failed to wipe start of new LV A search on Google...
How to create an /opt partition on an existing installation without losing data?
1,403,200,807,000
For resize2fs, if the size parameter is not specified, it will default to the size of the partition: "The size of a filesystem is by default the size of its underlying partition." So by default, resize2fs doesn't change the size of a filesystem. Does it do nothing? Thanks.
If the underlying partition is larger than the filesystem within it, resize2fs will, by default, attempt to expand the filesystem to fill the partition. For example, if /dev/sdd3 is a 1TB partition, and we were to run: # mke2fs /dev/sdd3 500G We will have a 500GB partition within a 1TB partition. If we then resize2f...
Does `resize2fs` by default do nothing?
1,426,750,682,000
I have a problem with my remote server hosted by my provider; I have only SSH access. The problem is that I keep getting the error "file system rootfs has reached critical status", which causes problems with several services like smtp, so I want to resize my partitions. I want to: - Decrease the size of /home - Increase the size of...
Given your comment on Anthon's answer, I think the actual solution to your problem may be to tighten down your OS's logrotate configuration. While it is possible to move /var/log per Anthon's answer, I wouldn't recommend it.
Live resizing of an ext3 filesytem on CentOS6.5
1,426,750,682,000
I shrank an ext4 filesystem with resize2fs: resize2fs -p /dev/sdn1 3500G (the FS is in use for 2.3 TB). Then I resized the partition with parted and left a 0.3% margin (~10 GB) when setting the new end: (parted) resizepart 1 3681027097kb Eventually, this turned out to be too tight: # e2fsck -f /dev/sdn1 e2fsck 1.42.9 (4-...
Has resizing the partition to a too-small value corrupted the fs? It's unlikely in your case, especially since you were kind enough to stop that fs(c)killer, but you can't rule out the possibility entirely. For example, corruption happens when it's a logical partition inside the extended partition of a msdos parti...
Resized partition to too small value after shrinking filesystem
1,426,750,682,000
I am not too familiar with how volume sizing works, but I have a VPS running Ubuntu 14.04, and I noticed the home directory is all used up. I have a 1TB drive on this machine, how can I allocate more space to /home? $ df Filesystem 1K-blocks Used Available Use% Mounted on udev 8186844 ...
With a VPS, I assume you do not have physical access to the machine, so the usual approach to resizing an in-use filesystem will not work (that would be to use a rescue cdrom). In your listing, the /dev/mapper/vgxxx mountpoints are the way LVM volumes are mounted. Tutorials on LVM are fairly easy to find. The problem...
Resizing directories
1,426,750,682,000
I'm working on a script for automatically setting up Amazon Linux servers. I create them with 100gb virtual disks, but the main partition is always 8gb. No problem, I call sudo resize2fs /dev/sda1 at the start of the script to expand it to the full 100gb. The process is fairly slow, though. Later on in my script I dow...
Enlarging a mounted volume has been officially supported for ext3 and ext4 for some time now. I don't know of any strong assessment regarding a change in safety. Obviously both the resizing and the other activities take even longer when done in parallel. But it seems strange to me that this takes so long. In my experi...
Is it safe to resize a partition while writing to it?
1,426,750,682,000
Trying to upgrade from F15 to F17. I need to find a way to increase /boot size without destroying data. Details: I tried upgrading using the preupgrade process and via booting from the Net iso on USB, and both lead to the same thing: an 'Error' message in the first package (filesystem) transaction indicating the installer nee...
Use gparted to move sdb2 toward the end of the disk, so that the free space is before it. Then you can resize sdb1.
/boot too small to upgrade
1,426,750,682,000
My Debian vmware image has run out of space. I've expanded the disk image but now need to increase my root partition to see the additional space. My volume is setup as follows Disk /dev/sda: 50 GiB, 53687091200 bytes, 104857600 sectors Disk model: VMware Virtual S Units: sectors of 1 * 512 = 512 bytes Sector size (l...
UPDATE - I found this answer, and the others, to be quite helpful. You may want to compare those too. You need to do it like this:
1. swapoff, thus "freeing" the swap partition
2. fdisk, and delete both the extended partition and the physical partition. You are now left with just /dev/sda1.
3. You can now enlarge the image usin...
How to resize root ext3 file system without LVM
1,426,750,682,000
I have a 16GB msata but my rootfs is only 4GB. I need to increase the size of a volume group /dev/mapper/vg-var/ for my embedded system to full capacity. $ lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 14.9G 0 disk |-sda1 8:1 0 39.2M 0 part `-sda2 8:2 0...
1. Resize the /dev/sda2 partition with fdisk or parted
2. Resize the PV format on /dev/sda2 with pvresize /dev/sda2
3. Resize the logical volumes you want to resize with lvresize -L+<size> --resizefs /dev/vg/var (to resize your /var) or lvresize --resizefs -L+<size> /dev/vg/system1 (to resize your /). <size> can be either 100...
Increase size of a volume group
1,426,750,682,000
Does parted's resizepart command by default not modify or remove existing files on a partition? Furthermore, does it never modify or remove existing files on a partition (even by some option)? Similar questions for resize2fs? Thanks.
When shrinking a filesystem, resize2fs first checks if the part of the filesystem that is going to be cut away is free. If not, it can try to move those files out of the area that will be cut away, if there is space to do so. If this cannot be done, it stops and reports an error without shrinking the filesystem. resiz...
Do `parted resizepart` and `resize2fs` not modify or remove existing files on a partition?
1,426,750,682,000
I just installed Ubuntu 24.04, and I made a mistake: I put the /var directory on its own partition, and its size is 10 GB. After a few days it is already full. Is there a way to fix this problem without reinstalling the OS from scratch? Is it possible to resize a partition, even losing its contents? What is a suggest...
The size of /var greatly depends on what the system is doing. For example, if the system is a mail server, /var/mail and /var/spool could grow arbitrarily large, depending on the size of the user base; those directories would then effectively be the main reason of the system's existence. The size of /var/lib depends ...
Can I expand my /var partition?
1,426,750,682,000
I was trying to shrink my home partition. I followed this ArchWiki article for that. According to this I first resized my filesystem using resize2fs and then resized my physical device using parted. In resize2fs parameter I gave my intended size as XG and after resizing, it reported that new size is Y (4k blocks). Fro...
From what you write, you have accidentally shrunk a partition smaller than the file system it contains. On its own this shouldn't lose any data, but almost every action you might take after that could have. This definitely includes resize2fs, e2fsck and mount. It appears you were very lucky since the two commands ...
How to recover filesystem and physical size mismatch
1,426,750,682,000
I need to move a Pop-OS installation from a 250GB HDD to a 128GB SSD. So far I have been trying to use GParted (which worked for moving my Ubuntu installation between drives of the same size). The recovery and boot partitions copied properly, but to copy the main (root) partition I need to shrink it first (there is en...
The solution is simple: don't shrink the partition and copy it. Instead, make a new partition on the target SSD, and copy over the files from the old partition. There's no reason why you couldn't do that, and it's both easier and safer.
Moving Pop-OS installation to a smaller drive (using GParted?)
1,426,750,682,000
How am I able to reduce /var/lib/vz logical volume (/dev/vg/data) and use it/increase the current swap size? /etc/fstab UUID=c4408a1c-aa5b-4ce2-a9e8-1673660331e9 / ext4 defaults 0 1 LABEL=EFI_SYSPART /boot/efi vfat defaults 0 1 UUID=c90b3083-1b43-427c-8016-1d2406...
Easy: lvresize to, say, 350 GB (I'm assuming df -h /var/lib/vz gives you something like 340GB; if it's far less, you can of course shrink it much further). Since you need to shrink the file system, you first have to unmount it: umount /var/lib/vz Then resize the logical volume; we can ask the LVM tools to correctly r...
How can I shrink/use a Logical Volume and use it as swap
1,426,750,682,000
After a failed resize operation, mount fails with: Failed to read last sector (718198764): Invalid argument The partition is not accessible with GParted and other GUI tools. How can we fix such an issue?
Analyse: ntfsfix -n /dev/sda5 (the -n parameter makes the tool print the repair it would apply without applying it; be very prudent using such tools, as automated repair can choose the wrong way to fix a partition). ntfsresize -if /dev/sda5 will tell us what's going on exactly... Backup: First thing first...
Partition mounting/resizing failed to read last sector?
1,426,750,682,000
I have Ubuntu 16.04 installed on a remote server and I have requested another 20GB for my /dev/vda2 partition (it's now 20GB), so the total size would be 40GB. Since vda2 is full of very valuable data (disk usage is 100%), I want to extend it. Now, I have searched for ways to do it but I found out my LVM is not configure...
You can simply use sfdisk to resize the 2nd partition.
# write the current partition table into a machine readable text file
sfdisk --dump /dev/vda > /var/tmp/vda.old
cp /var/tmp/vda.old /var/tmp/vda.new
# also copy vda.old to another machine to have a safe backup
# edit the dump to set the new size for partition 2
# ...
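The "edit the dump" step can be scripted; here is a sketch against a made-up dump (the device names, offsets and the sed expression are illustrative, not taken from the answer). In sfdisk dumps, size= is in 512-byte sectors, so 20 GiB is 41943040 sectors and 40 GiB is 83886080:

```shell
# Hypothetical sfdisk dump for a 2-partition disk.
cat > vda.old <<'EOF'
label: dos
device: /dev/vda
unit: sectors

/dev/vda1 : start=2048, size=1048576, type=83
/dev/vda2 : start=1050624, size=41943040, type=83
EOF

# Rewrite only partition 2's size, leaving everything else untouched.
sed 's|^\(/dev/vda2 : start=1050624, size=\)41943040|\183886080|' \
    vda.old > vda.new
grep vda2 vda.new   # /dev/vda2 : start=1050624, size=83886080, type=83
```

On the real machine, the truncated answer presumably continues by feeding the edited dump back to sfdisk; keep the vda.old backup so the original table can be restored.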
Resize partition without using LVM
1,426,750,682,000
I extended an lvm from the terminal in system rescue live CD using the commands: # pvcreate /dev/sda7 # vgextend fedora /dev/sda7 # lvextend -l +100%FREE /dev/fedora/root The above worked but when I try to check the LV file system or resize it I get the following errors: # e2fsck -f /dev/fedora/root e2fsck: No su...
It is not enough that an LV exists on the PV; it must also be active to be usable, i.e. the device mapper device (/dev/mapper/fedora-root) must be created: lvchange -ay fedora/root or vgchange -ay fedora
LVM not able to be resized or checked with resize2fs and e2fsck
1,426,750,682,000
I have a vmware ext4 file system, a non-lvm, non-partitioned file system that resides on a virtual 300GB disk. In other words, there is no partition and the file system was probably created by: mkfs.ext4 /dev/sdd1 The disk is barely used (1%) but I would like to keep the data on it. Is there a safe way to shrink i...
Your question is inconsistent: if there's a partition, the filesystem was created by a command like mkfs.ext4 /dev/sdd1. If there's no partition, the filesystem was created by a command like mkfs.ext4 /dev/sdd. Check the output of df /path/to/some/directory/on/that/filesystem to see which one it is. Either way, you ca...
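Either way, the shrink can be rehearsed risk-free on a plain image file first, since e2fsprogs operates on regular files just as well as on block devices. A sketch (assumes mkfs.ext4, e2fsck and resize2fs are installed; uses a scratch file, not your real disk):

```shell
# Create a 64 MiB scratch image with an ext4 filesystem on it.
truncate -s 64M scratch.img
mkfs.ext4 -F -q scratch.img

# Shrinking requires a clean forced fsck first, exactly as on a real device.
e2fsck -fp scratch.img
resize2fs scratch.img 32M

# Only once the filesystem is smaller is it safe to shrink its container.
truncate -s 32M scratch.img
e2fsck -fn scratch.img    # verify: still clean
```

The order matters: shrink the filesystem first, then the device, and always in that order; growing is done in the opposite order.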
Shrink/reduce non-lvm disk file system
1,426,750,682,000
I'm trying to shrink a partition on a 64GB SD Card down so that I can fit it on a 32GB USB thumb drive, but I'm not having any success. I have the SD card plugged into a USB adapter, which is plugged into a Raspberry Pi running Raspbian. Here is the output of fdisk -l: Disk /dev/mmcblk0: 7948 MB, 7948206080 bytes 4 he...
I don't understand the problem. If the motivation for shrinking the partition is that you want to move it to another physical storage then the "shrinking magic" is:
1. create the partition on the target storage
2. format the new partition
3. mount the partition (and the source partition)
4. cp -a /path/to/source/. /path/to/targe...
Shrinking a partition
1,426,750,682,000
What am I doing wrong? I have an image, I added it as a loop device: losetup -P /dev/loop13 ./my_image.img gparted screenshot: Then I try to change the FS size for the partition first: e2fsck -f /dev/loop13p1 resize2fs /dev/loop13p1 7G It outputs: Resizing the filesystem on /dev/loop13p1 to 1835008 (4k) blocks. The...
In order to use parted correctly, you unfortunately have to do a little math sometimes.
parted /dev/loop13p1 resizepart 1 7G
This command probably does not do what you expect. parted works with block devices that have partition tables on them. So in the case of /dev/loop13p1 it would be a partition table on a parti...
Change the size of the partition using parted
1,458,655,019,000
I just formatted microSD card, and would like to run a dd command. Unfortunately dd command fails: $ sudo dd bs=1m if=2016-02-26-raspbian-jessie-lite.img of=/dev/rdisk2 dd: /dev/rdisk2: Resource busy $ Everyone on the internet says I need to unmount the disk first. Sure, can do that and move on. But I want to underst...
Apple court, Apple rules. Try diskutil: $ diskutil list ... # if mounted somewhere $ sudo diskutil unmount $device # all the partitions (there's also a "force" option, see the manual) $ sudo diskutil unmountDisk $device # remember zip drives? this would launch them. good times! $ sudo diskutil eject $device (In th...
Running dd. Why resource is busy?
1,458,655,019,000
There is a guide to cgroups from Red Hat which is maybe sort of kind of helpful (but doesn't answer this question). I know how to limit a specific process to a specific CPU, during the command to start that process, by: First, putting the following* in /etc/cgconfig.conf: mount { cpuset = /cgroup/cpuset; cpu = ...
UPDATE: Note that the answer below applies to RHEL 6. In RHEL 7, most cgroups are managed by systemd, and libcgroup is deprecated. Since posting this question I have studied the entire guide that I linked to above, as well as the majority of the cgroups.txt documentation and cpusets.txt. I now know more than I ever...
How to use cgroups to limit all processes except whitelist to a single CPU?
1,458,655,019,000
If I kill a program that is listening on a TCP port, it takes up to several minutes until the port is reclaimed by the system and usable again. I've seen several Q/A mentioning this phenomenon, but without an explanation. Why does that happen, why doesn't the system reclaim the port right away? Does it also happen on ...
The idea behind this is to ensure you don't receive packets targeted at the previous program listening on that port. This TIME_WAIT state is defined in RFC 793 as two times the maximum segment lifetime. I don't know about other operating systems, but I assume they all have some kind of similar behavior. A work...
Why does it take up to several minutes to clean a listening TCP port after a program dies?
1,458,655,019,000
My laptop (an HP with an i3 chip) overheats like crazy every time I run a resource heavy process (like a large compilation, extracting large tarballs or ... playing Flash). I am currently looking into some cooling solutions but got the idea of limiting global CPU consumption. I figured that if the CPU is capped, chanc...
I don't know of a way to limit CPU for the whole system without a lot of hacking, but you can easily limit the amount of CPU used by a single process using cpulimit. The only way I can think of to use this effectively is writing a wrapper script (can't really call it a script, it'...
Is there a way to limit overall CPU consumption?
1,458,655,019,000
There are plenty of questions and answers about constraining the resources of a single process, e.g. RLIMIT_AS can be used to constrain the maximum memory allocated by a process that can be seen as VIRT in the likes of top. More on the topic e.g. here Is there a way to limit the amount of memory a particular process c...
I am not sure if this answers your question, but I found this perl script that claims to do exactly what you are looking for. The script implements its own system for enforcing the limits by waking up and checking the resource usage of the process and its children. It seems to be well documented and explained, and has...
How to limit the total resources (memory) of a process and its children
1,458,655,019,000
I'm working on an embedded Linux system (128MB RAM) without any swap partition. Below is its top output: Mem: 37824K used, 88564K free, 0K shrd, 0K buff, 23468K cached CPU: 0% usr 0% sys 0% nic 60% idle 0% io 38% irq 0% sirq Load average: 0.00 0.09 0.26 1/50 1081 PID PPID USER STAT VSZ %MEM CPU %C...
The man page you refer to comes from the procps version of top. But you're on an embedded system, so you have the busybox version of top. It looks like busybox top calculates %MEM as VSZ/MemTotal instead of RSS/MemTotal. The latest version of busybox calls that column %VSZ to avoid some confusion. commit log
What do top's %MEM and VSZ mean?
1,458,655,019,000
We can get the same result using the following two in bash: echo 'foo' | cat and cat <<< 'foo' My question is: what is the difference between these two as far as resource usage is concerned, and which one is better? My thought is that while using the pipe we use an extra process, echo, plus the pipe itself, while with the here s...
The pipe is a file opened in an in-kernel file-system and is not accessible as a regular file on-disk. It is automatically buffered only to a certain size and will eventually block when full. Unlike files sourced on block devices, pipes behave very much like character devices, and so generally do not support lseek() and da...
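The difference is easy to observe from the shell by asking what stdin actually is in each case. A small sketch; note that recent bash releases may back small here-strings with a pipe instead of a temporary file, so the second result can vary by version:

```shell
# With a pipe, stdin is a FIFO, so the -p test succeeds:
echo 'foo' | sh -c 'if [ -p /dev/stdin ]; then echo "stdin is a pipe"; fi'

# With a here-string, stdin has traditionally been a seekable temp file;
# inspect it directly to see what your bash does:
bash -c 'ls -l /dev/fd/0 <<< "foo"'
```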
Resource usage using pipe and here string
1,458,655,019,000
This article claims that the -m flag to ulimit does nothing in modern Linux. I can find nothing else to corroborate this claim. Is it accurate? You may try to limit the memory usage of a process by setting the maximum resident set size (ulimit -m). This has no effect on Linux. man setrlimit says it used to work onl...
It says right there in the article: This has no effect on Linux. man setrlimit says it used to work only in ancient versions. The setrlimit man page says: RLIMIT_RSS Specifies the limit (in pages) of the process's resident set (the number of virtual pages resident in RAM). This limit has e...
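This is straightforward to confirm: the shell records whatever value you set, but the kernel never enforces it. A sketch in pure shell (the 50 MB figure is arbitrary):

```shell
# "Limit" RSS to 1 MiB, then build a ~50 MB string anyway.
bash -c '
  ulimit -S -m 1024    # the shell accepts and reports this value...
  ulimit -S -m
  x=$(head -c 50000000 /dev/zero | tr "\0" a)
  echo "allocated ${#x} bytes anyway"    # ...but nothing stops this
'
```

Contrast this with ulimit -v (RLIMIT_AS), which is enforced and would make a comparable allocation fail.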
Does 'ulimit -m' not work on (modern) Linux?
1,458,655,019,000
To prevent fork bombs I followed this: http://www.linuxhowtos.org/Tips%20and%20Tricks/ulimit.htm ulimit -a reflects the new settings, but when I run (as root in bash) :(){ :|:&};: the VM still goes to max CPU+RAM and the system freezes. How do I ensure users will not bring down the system by using fork bombs or running...
The superuser or any process with the CAP_SYS_ADMIN or CAP_SYS_RESOURCE capabilities are not affected by that limitation, that's not something that can be changed. root can always fork processes. If some software is not trusted, it should not run as root anyway.
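For ordinary users, though, a persistent per-user process cap is still worthwhile; a sketch of an /etc/security/limits.conf fragment (the numbers are illustrative). Note that the * wildcard deliberately does not apply to root, consistent with the point above:

```
# /etc/security/limits.conf - cap process counts for regular users
*        soft    nproc   2048
*        hard    nproc   4096
```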
How to prevent fork bomb?
1,458,655,019,000
I want to do research on the evolution of Linux. Therefore it would be nice if I could download the sources of Linux at several moments in time (from 1991 till now). Is there a site where one can find those sources? Similar sites for other Unix based operating systems are also welcome.
I suggest these two: http://www.oldlinux.org/ and a more straightforward one from this site that contain Linux kernel 0.01, 0.10, 0.11,...,0.98: http://www.oldlinux.org/Linux.old/ and the other: http://www.codeforge.com/article/170371
Where can I find the historical source code of the Linux sources
1,458,655,019,000
Let's assume a process runs in a ulimit-restricted environment: ( ulimit ... -v ... -t ... -x 0 ... ./program ) The program is terminated. There might be many reasons: a memory/time/file limit exceeded; just a simple segfault; or even normal termination with return code 0. How to check what was the reason for the program's termination, wit...
Generally speaking, I don't think you can unfortunately. (Some operating systems might provide for it, but I'm not aware of the ones I know supporting this.) Reference doc for resource limits: getrlimit from POSIX 2008. Take for example the CPU limit RLIMIT_CPU. If the process exceeds the soft limit, it gets sent a S...
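What the parent can always do is decode the exit status: a value of 128+N from the shell means the child died on signal N. A sketch (SIGSEGV is signal 11 on Linux, so the status is 139; SIGXCPU from a soft RLIMIT_CPU would similarly show as 128+24):

```shell
# Kill a throwaway shell with SIGSEGV and read back the wait status.
sh -c 'kill -SEGV $$'
echo "exit status: $?"    # prints: exit status: 139
```

But as the answer notes, a program that simply exits non-zero after hitting a resource limit is indistinguishable from any other ordinary failure.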
How to check, which limit was exceeded? (Process terminated because of ulimit. )
1,458,655,019,000
I want to enable core dump generation by default upon reboot. Executing: ulimit -c unlimited in a terminal seems to work until the computer is rebooted.
Think I figured out something that works. I used a program called LaunchControl to create a file called enable core dumps.plist at /System/Library/LaunchDaemons with the following contents: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyLi...
How to add persist shell ulimit settings on Mac? [duplicate]
1,458,655,019,000
I was wondering whether there is a "canonical" way to have this? Background & description: I have to install some program on a live server. Although I do trust the vendor (FOSS, Github, multiple authors...) I would rather guard against the not entirely impossible scenario of the script running into trouble and exhausting system ...
Alternative #1: Monitor your process with monit. Install M/Monit and create a configuration file based on this template:
check process myprogram matching "myprogram.*"
    start program = "/usr/bin/myprogram" with timeout 10 seconds
    stop program = "/usr/bin/pkill thatscript"
    if cpu > 99% for 2 cycles then stop
    if loadavg (...
Prevent a script exhausing system resources and crashing entire system
1,458,655,019,000
Using pacemaker in a 2 nodes master/slave configuration. In order to perform some tests, we want to switch the master role from node1 to node2, and vice-versa. For instance if the current master is node1, doing # crm resource migrate r0 node2 does indeed move the resource to node2. Then, ideally, # crm resource migra...
I know this is a bit old, but it seems like no one answered this satisfactorily, and the requester never posted whether the problem was solved or not. So here is an explanation. When you perform: # crm resource migrate r0 node2 a cli-prefer-* rule is created. Now when you want to move r0 back to node1, you don't do: # crm ...
Pacemaker: migrate resource without adding a "prefer" line in config
1,458,655,019,000
When I first borrowed an account on a UNIX system in 1990, the file limit was an astonishing 1024, so I never really saw that as a problem. Today 30 years later the (soft) limit is a measly 1024. I imagine the historical reason for 1024 was that it was a scarce resource - though I cannot really find evidence for that....
@patbarron has still not posted his comments as an answer, and they are really excellent. So for anyone looking for the answer it is here. He writes: You can look at the source code from Seventh Edition, for example (minnie.tuhs.org/cgi-bin/utree.pl?file=V7/usr/sys/h/user.h) to see how this was implemented originally...
What is the historical reason for limits on file descriptors (ulimit -n)
1,458,655,019,000
I'm working on a piece of software that requires me to know what files and resources any certain launched process are accessing. I'm not planning on attempting to track what every single script, application, and daemon is accessing, just a certain process provided by the user. Is there any way to do this in Python (o...
You can trace the system calls that a program makes. This is the usual method to find out what files it accesses. The tool to do this is called truss in many Unix systems, dtruss on OSX, strace on Linux. I'll describe Linux usage here; check the manual on other systems. The simplest form is strace myprogram arg1 arg2...
Is there any way to tell exactly what files a command is accessing?
1,458,655,019,000
OK, to get my hands dirty with cgroups and systemd, I wrote the most moronic C program I could think of (just a timer and a spinlocking while loop) and named it idiot, which I accompanied with the following idiot.service file in /etc/systemd/system/:
[Unit]
Description=Idiot - pretty idiotic imo
[Service]
Type=simp...
According to man systemd.resource-control, CPUShares=weight would work as follows: "The available CPU time is split up among all units within one slice relative to their CPU time share weight." Since you've told us nothing about other members of the same slice, I presume there are no other members, thus it would be ap...
Why isn't this systemd service resource limited when using CPUShares property?
1,458,655,019,000
Say one has a resource hungry command that users on a server need to run. I want to wrap said command with a wrapper script that will parse the arguments passed and ensure that the command is only being used under certain conditions or times. The problem is that if the program itself is not executable the wrapper won...
The simple answer is: It is not possible to force your users to use your wrapper script. The reason for this is fairly simple; a shell script is an interpreted program. That means that bash (or some other shell process) must read the file in order to run the commands that are called in it. This in turn means that a u...
Allow only wrapper script but not command
1,458,655,019,000
I'm running a very time-consuming script which takes many hours to end. Watching top I see that it's only taking 5% of the CPU at best, usually around 3%. Is there any way to force the script to use more CPU in order to end faster? Edit: Basically the script is brute-forcing 5-character passwords given the salt an...
Improvement #1 - Loops
Your looping structure seems completely unnecessary; if you use brace expansions instead, it can be condensed like so:
$ more pass.bash
#!/bin/bash
for str in $(echo {a..z}{a..z}{a..z}); do
    pass=$(openssl passwd -salt $1 $str)
    if [[ "$pass" == "$2" ]]; then
        echo "Password: $str"
        exi...
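The loop shape can be tried without openssl at all; a toy sketch where plain string comparison stands in for the hash check, just to show how brace expansion enumerates the keyspace (the target value is made up):

```shell
#!/bin/bash
# Enumerate every 2-letter candidate via brace expansion.
target="mn"
for str in {a..z}{a..z}; do
    if [[ "$str" == "$target" ]]; then
        echo "Password: $str"
        exit 0
    fi
done
echo "not found"
exit 1
```

With {a..z}{a..z}{a..z} the same loop walks 17576 candidates; the expansion is generated by the shell once, with no extra processes per candidate.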
How can I force a script to use more resources?
1,458,655,019,000
Is there some way to inspect which .Xresources settings are in effect at the moment (unlike xrdb -query)? For example, I'm on a host which doesn't seem to respect *reverseVideo: true, but I don't know whether that is because I wrote it the wrong way (even *florb: glorb doesn't raise an error when running xrdb -merge $...
xrdb -query lists the resources that are explicitly loaded on the X server. appres lists the resources that an application would receive. This includes system defaults (typically found in a directories like /usr/X11R6/lib/X11/app-defaults or /etc/X11/app-defaults) as well as the resources explicitly set on the server ...
.Xresources settings in effect
1,458,655,019,000
I am looking for one or possibly more commands, or a combination of commands, to get my PC to use as much resources as possible. I want to check how my computer behaves when subjected to the maximum amount of data it can handle. I've tried running multiple programs such as browsers, graphic and system tools one by on...
You could probably use stress ("stress: tool to impose load on and stress test systems"). If you want to stress memory you could use: stress --vm 2 --vm-bytes 512M --timeout 10s to spawn 2 VM workers, each using 512MB of RAM, for 10 seconds. If you want to stress the CPU, add: stress --cpu ## -t 10s with ## equal to your number...
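If stress isn't packaged for your system, a rough shell-only stand-in for the CPU case is one busy loop per core for a fixed time (a sketch; it exercises only the CPU, not memory or I/O):

```shell
#!/bin/sh
# Spawn one spinning shell per CPU, let them run, then kill them all.
duration=${1:-10}
n=$(nproc 2>/dev/null || echo 1)
pids=""
i=0
while [ "$i" -lt "$n" ]; do
    sh -c 'while :; do :; done' &
    pids="$pids $!"
    i=$((i + 1))
done
sleep "$duration"
kill $pids
```

Run it as e.g. ./burn.sh 30 to load every core for 30 seconds.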
Is there a command or a series of commands to make the computer use as much resources as possible?
1,458,655,019,000
From man time: M Maximum resident set size of the process during its lifetime, in Kilobytes. From ulimit -a: max memory size (kbytes, -m) unlimited But a "kilobyte" may mean either 1000 or 1024 bytes. I guess here it is a round 1024, but I want to be sure. Authoritative reference would be appreciated....
It is kibibytes (1024 bytes); those are raw interfaces to the getrusage()/setrlimit() APIs. Those documentations are inaccurate (or old-school, as you say). Also note that resource limits/accounting and their units vary between systems; you'll find that it's not uncommon for shells to get it wrong on some systems (don'...
Is the kilobyte used by time and ulimit commands either 1000 (SI) or 1024 (old school) bytes?
1,458,655,019,000
I have some process running on my system. I need to list out which of the process at a moment has acquired/is using one or more of these in my system: Ethernet Camera USB Bluetooth WiFi File System etc. Is there a way to find this out ? Platform : Ubuntu/Fedora (Allowed to have SELinux as well if required to implem...
You should use a combination of lsof (to find out which process opened which file or port) and strace (to attach to and follow a process's system calls). Use the man pages for each to find out how to use them in your case
To check which resource is being accessed by which process
1,458,655,019,000
I have a mystery: what is using 6GB of my swap? My kernel version is 4.15.9-300.fc27.x86_64. This happened following some crashes. dmesg shows I had a segfault in a gnome-shell process (which belonged to gdm) and later some firefox processes (Chrome_~dThread, in libxul.so). coredumpctl -r shows no other crashes on ...
EDIT1 After stopping systemd-logind - which native Xorg responds to by dying - and restarting Xorg, I see the entire 6GB of swap wiped out. After the second time, I can confirm that this is a bug in systemd-logind. logind remembers to close the copy of the DRM fd which it holds, but it fails to close the copy which...
What could be using 6GB of my swap?
1,458,655,019,000
I had to delete a specific user whose processes were taking a lot of resources on the server. When I list the processes on the server, the deleted user now shows as "1001" instead of the name it used to show before I deleted it. %Cpu(s): 19.8 us, 29.5 sy, 0.0 ni, 50.7 id, 0.0 wa, 0....
Just for anybody following my issue: it was a little odd what was happening, but the user running that process happened to have the same ID inside the docker container as on the host, so when I listed all the processes, the user ID of the user inside the container was getting mapped to a specific user I h...
Docker user executing a process cannot be removed
1,458,655,019,000
My computer has been freezing a lot lately, for no apparent reason. It freezes even when my usage is 3% CPU and 9% RAM. I was using Windows 8 until I installed Ubuntu 14.04. It was really slow, and after some research I came to believe that Ubuntu 14.04 wasn't really that stable, so I decided I'd download a le...
Your problem is that you don't have any swap space. Operating systems use swap space so that they can free up RAM by moving inactive pages out to the hard drive. What you are going to need to do is repartition your hard drive. Red Hat has a suggested swap size chart here. Load up the Arch live CD and repartition and...
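As an alternative to repartitioning, most Linux distributions can also use a swap file. A sketch (run as root; the 2G size and `/swapfile` path are arbitrary examples, pick a size from the distribution's chart):

```shell
#!/bin/sh
# Sketch (as root): add swap without repartitioning, via a swap file.
fallocate -l 2G /swapfile   # or: dd if=/dev/zero of=/swapfile bs=1M count=2048
chmod 600 /swapfile         # swap files must not be world-readable
mkswap /swapfile            # write the swap signature
swapon /swapfile            # enable it immediately
swapon --show               # verify it is active

# Make it permanent across reboots:
echo '/swapfile none swap defaults 0 0' >> /etc/fstab
```

One caveat: on btrfs and some other filesystems, swap files need extra steps or are unsupported, in which case a swap partition is the safer route.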
Linux freezing randomly
1,458,655,019,000
I ran into this problem: fork: Resource temporarily unavailable. I know that nproc is the problem. Some suggested increasing the soft limit of nproc, while others suggested the hard limit. Which should I increase? Isn't the soft limit there just to warn the user, while the hard limit is the one that really limits event...
It's actually the other way around. The soft limit's value is the one actually enforced, i.e. in use; you can increase it up to the relevant hard limit's value (assuming you are not the superuser and do not have the CAP_SYS_RESOURCE capability, in which case you could also raise the hard limit).
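A quick demonstration, assuming bash and using the open-files limit (`-n`) since it is safe to play with; the same rules apply to `nproc` (`-u`). The numeric values are arbitrary examples:

```shell
#!/bin/bash
# Sketch: the soft limit is the enforced one; an unprivileged process
# may move it anywhere up to the hard limit, but never beyond it.
echo "nofile hard=$(ulimit -Hn) soft=$(ulimit -Sn)"

ulimit -Sn 256                # lowering the soft limit: always allowed
echo "soft now $(ulimit -Sn)"

ulimit -Sn "$(ulimit -Hn)"    # raising it back up to the hard limit: allowed
echo "soft now $(ulimit -Sn)" # (use a numeric value if hard is 'unlimited')

# Raising the soft limit past the hard limit fails (EINVAL), even for root:
ulimit -Sn unlimited 2>/dev/null || echo "soft limit cannot exceed the hard limit"
```

So for the `fork` error: raise the soft `nproc` limit first; raise the hard limit (in limits.conf or as root) only if the soft limit has already reached it.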
Soft limit vs hard limit
1,458,655,019,000
This is the situation: I have a PHP/MySQL web application that does some PDF processing and thumbnail creation. This is done by using some 3rd party command line software on the server. Both kinds of processing consume a lot of resources, to the point of choking the server. I would like to limit the amount of resource...
Run it with nice -n 19 ionice -c 3 (19 is the lowest CPU priority; ionice class 3 is "idle"). That will make it use only the CPU cycles and I/O bandwidth not needed by other processes. For RAM, all you can do is limit the process with ulimit, so that it is denied memory (and typically dies) when it tries to use more than the amount you allow.
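A minimal wrapper sketch. `convert input.pdf out.png` is a placeholder for whatever third-party tool the server actually invokes, and the 1 GiB cap is an arbitrary example:

```shell
#!/bin/sh
# Sketch: run a heavy PDF/thumbnail job at the lowest CPU and I/O
# priority, with a cap on its virtual memory.

ulimit -v 1048576     # cap address space at ~1 GiB (kB units);
                      # allocations past the cap fail, usually killing the job

nice -n 19 \
    ionice -c 3 \
    convert input.pdf out.png   # placeholder for the real converter
```

`ulimit` applies to the current shell and everything it spawns, so put the wrapper in the script that the web application calls, not in the application's own environment.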
How to constrain the resources an application can use on a linux web server
1,458,655,019,000
I just installed atop, waited half an hour, and looked at the logs with atop -r /var/log/atop/atop_20180216. Why does my systemd --user instance show hundreds of megs of disk usage, including tens of megs of writes, during one ten minute interval? What can systemd possibly be doing? PID TID RDDSK W...
[RDDSK / WRDSK] When the kernel maintains standard io statistics (>= 2.6.20): The [read / write] data transfer issued physically on disk (so writing to the disk cache is not accounted for). This counter is maintained for the application process that writes its data to the cache (assuming that this data is physically ...
systemd shows as reading 300M in atop?
1,458,655,019,000
I am supposed to track how a file system's usage of resources (inodes, blocks) changes before I start a program, after I start it, after I delete its executable file, and finally after I kill its last process. The problem is that I can't seem to register any change in resources, even at the very first stage....
stat -f /dev/mapper/fedora_12345-root returns information about the filesystem containing the device node, which is /dev. To return information about a mounted filesystem, you need to look at a file on that filesystem: stat -f /. The df utility automatically translates mounted block devices to a mount point for them, ...
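A sketch of the difference, assuming GNU coreutils; `/dev/mapper/fedora_12345-root` is the device path from the question and stands in for whatever the root filesystem's device actually is:

```shell
#!/bin/sh
# Sketch: the same device, queried two different ways.

# This reports on the filesystem that *contains the node itself*,
# i.e. /dev (usually devtmpfs), not the filesystem stored on the device:
stat -f /dev/mapper/fedora_12345-root

# To see the mounted filesystem's free blocks and inodes, stat a path
# *inside* it, or let df resolve the device to its mount point:
stat -f /
df -i /dev/mapper/fedora_12345-root   # df translates device -> mount point
```

With that fixed, creating or deleting a file under `/` and re-running `stat -f /` should show the free-inode and free-block counters move.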
How to track resources' (inodes, blocks) usage change upon starting a program