| date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,521,302,084,000 |
I ran some commands without completely understanding them while trying to get screen brightness working and now I'm stuck with a nasty symlink in '/sys/class/backlight/asus_laptop' that I am trying to get rid of.
I have tried
sudo rm /sys/class/backlight/asus_laptop
sudo rm '/sys/class/backlight/asus_laptop'
su root
rm /sys/class/backlight/asus_laptop
sudo rm /sys/class/backlight/asus_laptop
Going right into the directory and typing rm asus_laptop, changing ownership, and using Thunar to try to remove it.
I get
rm: cannot remove '/sys/class/backlight/asus_laptop': Operation not permitted
Same goes for unlink, rmdir doesn't work, and Thunar fails.
The permissions on it are lrwxrwxrwx
How can I remove it?
|
The sysfs file system, typically mounted on /sys, is, just like the /proc file system, not a typical file system: it's a so-called pseudo file system. It is populated by the kernel, and you can't delete files from it directly.
So, if the ASUS laptop support isn’t appropriate for you, then you have to ask the kernel to remove it. To do so, remove the corresponding module:
sudo rmmod asus-laptop
That will remove the relevant /sys entry.
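The same restriction can be demonstrated safely on /proc, the other common pseudo file system, without touching anything on your machine:

```shell
# Pseudo file systems refuse unlink even for root; /proc behaves
# just like /sys here:
rm /proc/uptime 2>&1 || true    # rm: cannot remove '/proc/uptime': Operation not permitted
test -e /proc/uptime && echo 'still there'
```

The entry survives because the kernel, not the file system on disk, owns it; the only way to make it go away is to remove whatever kernel component created it.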
| Debian: cannot remove symlink in /sys/: operation not permitted |
1,521,302,084,000 |
I am on Ubuntu 12.04, and the ip utility does not have the ip netns identify <pid> option. I tried installing a newer iproute, but the identify option still doesn't seem to work.
If I were to write a script (or code) to list all processes in a network namespace, or, given a PID, show which network namespace it belongs to, how should I proceed?
(I need info on a handful of processes, to check if they are in the right netns)
|
You could do something like:
netns=myns
find -L /proc/[1-9]*/task/*/ns/net -samefile /run/netns/"$netns" | cut -d/ -f5
Or with zsh:
print -l /proc/[1-9]*/task/*/ns/net(e:'[ $REPLY -ef /run/netns/$netns ]'::h:h:t)
It checks the inode of the file which the /proc/*/task/*/ns/net symlink points to against those of the files bind-mounted by ip netns add in /run/netns. That's basically what ip netns identify or ip netns pid in newer versions of iproute2 do.
That works with the 3.13 kernel as from the linux-image-generic-lts-trusty package on Ubuntu 12.04, but not with the 3.2 kernel from the first release of 12.04 where /proc/*/ns/* are not symlinks and each net file there from every process and task gets a different inode which can't help determine namespace membership.
Support for that was added by that commit in 2011, which means you need kernel 3.8 or newer.
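On those 3.8+ kernels, namespace membership can also be eyeballed directly: the symlink target encodes the namespace inode, and two processes in the same network namespace resolve to the same target. A quick sketch:

```shell
# Two processes in the same netns see the same net:[inode] target:
readlink /proc/self/ns/net     # e.g. net:[4026531992]
readlink "/proc/$$/ns/net"     # same value for this shell
```

Comparing those strings across PIDs is the poor man's version of the -samefile check above.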
With older kernels, you could try and run a program listening on an ABSTRACT socket in the namespace, and then try to enter the namespace of every process to see if you can connect to that socket there like:
sudo ip netns exec "$netns" socat abstract-listen:test-ns,fork /dev/null &
ps -eopid= |
while read p; do
nsenter -n"/proc/$p/ns/net" socat -u abstract:test-ns - 2> /dev/null &&
echo "$p"
done
| How to list processes belonging to a network namespace? |
1,521,302,084,000 |
I'm trying to write a bash script that shows a user's name based on the uid the user provides:
#!/bin/bash
read -p "donner l UID" cheruid
if [ $(grep -w $cheruid) -n ]
then
grep -w $cheruid /etc/passwd | cut -d ":" -f "1" | xargs echo "user is : "
else
echo "user not found"
fi
When I execute this, the terminal only shows the prompt message and then stops working.
Am I missing something?
|
With GNU id, you can do:
id -un -- "$cheruid"
That will query the account database (whether it's stored in /etc/passwd, LDAP, NIS+, an RDBMS...) for the first user name with that uid.
Generally, there's only one user name per uid, but that's not guaranteed: the key in the user account database is the user name, not the user id.
If you want to know all the user names for a given uid, you can do:
getent passwd | ID=$cheruid awk -F: '$3 == ENVIRON["ID"] {print $1}'
But that may not work for some account databases that are not enumerable (as sometimes the case for large LDAP-based ones).
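Both approaches can be sanity-checked with uid 0, which maps to root on virtually every system (the uid value here is just an example):

```shell
cheruid=0    # example uid; on virtually every system this resolves to root
id -un -- "$cheruid"
getent passwd | ID=$cheruid awk -F: '$3 == ENVIRON["ID"] {print $1}'
```

If the two commands disagree, you likely have duplicate uids in the account database, and only the getent/awk form shows all of them.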
| How to get user's name from uid |
1,521,302,084,000 |
The openssl passwd command computes the hash of a password typed at
run-time or the hash of each password in a list. The password list is
taken from the named file for option -in file, from stdin for option
-stdin, and from the command line otherwise. The UNIX standard algorithm crypt and the MD5-based BSD password algorithm 1 and its
Apache variant apr1 are available.
I understand the term "hash" to mean "turn an input into an output from which it is difficult/impossible to derive the original input." More specifically, the input:output relationship after hashing is N:M, where M<=N (i.e. hash collisions are possible).
Why is the output of "openssl passwd" different run successively with the same input?
> openssl passwd
Password:
Verifying - Password:
ZTGgaZkFnC6Pg
> openssl passwd
Password:
Verifying - Password:
wCfi4i2Bnj3FU
> openssl passwd -1 "a"
$1$OKgLCmVl$d02jECa4DXn/oXX0R.MoQ/
> openssl passwd -1 "a"
$1$JhSBpnWc$oiu2qHyr5p.ir0NrseQes1
I must not understand the purpose of this function, because it looks like running the same hash algorithm on the same input produces multiple unique outputs. I guess I'm confused by this seeming N:M input:output relationship where M>N.
|
> openssl passwd -1 "a"
$1$OKgLCmVl$d02jECa4DXn/oXX0R.MoQ/
This is the extended Unix-style crypt(3) password hash syntax, specifically the MD5 version of it.
The first $1$ identifies the hash type, the next part, OKgLCmVl, is the salt used in encrypting the password, and everything after the following $ separator up to the end of the line is the actual password hash.
So, if you take the salt part from the first encryption and use it with the subsequent ones, you should always get the same result:
> openssl passwd -1 -salt "OKgLCmVl" "a"
$1$OKgLCmVl$d02jECa4DXn/oXX0R.MoQ/
> openssl passwd -1 -salt "OKgLCmVl" "a"
$1$OKgLCmVl$d02jECa4DXn/oXX0R.MoQ/
When you're changing a password, you should always switch to a new salt. This prevents anyone finding out after the fact whether the new password was actually the same as the old one. (If you want to prevent the re-use of old passwords, you can of course hash the new password candidate twice: once with the old salt and then, if the result is different from the old password and thus acceptable, again with a new salt.)
If you use openssl passwd with no options, you get the original crypt(3)-compatible hash, as described by dave_thompson_085. With it, the salt is the first two letters of the hash:
> openssl passwd "a"
imM.Fa8z1RS.k
> openssl passwd -salt "im" "a"
imM.Fa8z1RS.k
You should not use this old hash style in any new implementation, as it restricts the effective password length to 8 characters, and has too little salt to adequately protect against modern methods.
(I once calculated the amount of data required to store a full set of rainbow tables for every classic crypt(3) hash. I don't remember the exact result, but assuming my calculations were correct, it was on the order of "a modest stack of multi-terabyte disks". In my opinion, that places it within the "organized criminals could do it" range.)
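The salt-extraction step is easy to script. Here is a sketch that verifies a candidate password against an existing MD5-crypt hash (the hash is the one from the example above) by re-using its salt:

```shell
# $1$<salt>$<hash> : the salt is the third '$'-separated field
hash='$1$OKgLCmVl$d02jECa4DXn/oXX0R.MoQ/'
salt=$(printf '%s' "$hash" | cut -d'$' -f3)
if [ "$(openssl passwd -1 -salt "$salt" a)" = "$hash" ]; then
    echo 'password matches'
fi
```

This is exactly what login-time password checking does: hash the candidate with the stored salt and compare the results.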
| Why is the output of "openssl passwd" different each time? |
1,521,302,084,000 |
I totally understand that --dig-holes creates a sparse file in-place. That is, where the file contains runs of zeros, the --dig-holes option turns them into holes:
Let's take it in a very simplified way, let's say we have a huge file named non-sparse:
non-sparse:
aaaaaaaaaaaaaaaaaaaaaaaaaaaa
\x00\x00\x00\x00\x00\x00\x00
\x00\x00\x00\x00\x00\x00\x00
\x00\x00\x00\x00\x00\x00\x00
bbbbbbbbbbbbbbbbbbbbbbbbbbbb
\x00\x00\x00\x00\x00\x00\x00
\x00\x00\x00\x00\x00\x00\x00
cccccccccccccccccccccccccccc
non-sparse has many zeros in it; assume the interleaved runs of zeros are gigabytes long. fallocate --dig-holes deallocates the space occupied by the zeros (holes) while the apparent file size remains the same (preserved).
Now, there's --punch-hole. What does it really do? I read the man page and still don't understand:
-p, --punch-hole
Deallocates space (i.e., creates a hole) in the byte range
starting at offset and continuing for length bytes. Within
the specified range, partial filesystem blocks are zeroed,
and whole filesystem blocks are removed from the file.
After a successful call, subsequent reads from this range
will return zeroes.
Creating a hole seems to be the opposite of the --dig-holes option, so how come digging a hole isn't the same as creating a hole?! Help! We need a logician :).
The names of the two options are linguistically synonymous, which perhaps causes the confusion.
What's the difference between --dig-holes and --punch-holes operationally (not logically or linguistically please!)?
|
--dig-holes doesn’t change the file’s contents, as determined when the file is read: it just identifies runs of zeroes which can be replaced with holes.
--punch-hole uses the --offset and --length arguments to punch a hole in a file, regardless of what the file contains at that offset: it works even if the file contains non-zeroes there, but the file’s contents change as a result. Considering your example file, running fallocate --punch-hole --offset 2 --length 10 would replace ten a characters with zeroes, starting after the second one.
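What both share is the defining property of a hole: it reads back as zeroes while occupying no disk blocks. A quick sketch, using truncate to create a file that is one big hole (seek-created sparse files work on any file system that supports sparseness):

```shell
# A hole reads back as zeroes but uses no disk blocks:
truncate -s 1M sparse.bin                  # 1 MiB file, entirely a hole
stat -c 'apparent: %s bytes, blocks: %b' sparse.bin
cmp -n 1048576 sparse.bin /dev/zero && echo 'reads as all zeroes'
```

--dig-holes finds ranges that already read as zeroes and deallocates them; --punch-hole deallocates the range you name and makes it read as zeroes from then on.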
| What is the difference between `fallocate --dig-holes` and `fallocate --punch-hole` in Linux? |
1,521,302,084,000 |
I want to change a word in a .docx file using a shell command.
I tried using the sed command, but it is not working.
Does anyone know a solution for this?
For example, I want to change a word (e.g. exp5) and replace that with another (exp3) in the file exo.docx.
|
So, you want to replace things in a brand-specific format? At first look it seems bad, but the new docx format is a bit better for this than the old doc format, because it's actually a ZIP file containing XML files.
So the answer lies in unzipping it; then you'll have to rummage through the files, figure out which one to call sed on, and zip it up again.
Check out the file word/document.xml in the ZIP file.
| How to replace a word inside a .DOCX file using Linux command line? |
1,521,302,084,000 |
I have done several searches and I cannot find anything on Google about why, but Arch has allocated 7.7 GiB to RAM and 7.9 GiB to swap.
I only have 8 GiB of RAM.
It allocated more space to swap than to regular RAM.
How can I change the allocations?
output of cat /proc/meminfo:
MemTotal: 8091960 kB
MemFree: 4925736 kB
MemAvailable: 6131188 kB
Buffers: 268936 kB
Cached: 1219460 kB
SwapCached: 0 kB
Active: 1527516 kB
Inactive: 1301140 kB
Active(anon): 768904 kB
Inactive(anon): 711440 kB
Active(file): 758612 kB
Inactive(file): 589700 kB
Unevictable: 32 kB
Mlocked: 32 kB
SwapTotal: 8300540 kB
SwapFree: 8300540 kB
Dirty: 1960 kB
Writeback: 0 kB
AnonPages: 1306968 kB
Mapped: 382800 kB
Shmem: 140100 kB
Slab: 197964 kB
SReclaimable: 163104 kB
SUnreclaim: 34860 kB
KernelStack: 6864 kB
PageTables: 29200 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 12346520 kB
Committed_AS: 3927808 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
HardwareCorrupted: 0 kB
AnonHugePages: 186368 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 584316 kB
DirectMap2M: 7716864 kB
DirectMap1G: 0 kB
|
What this is telling you is that you have 16GB of virtual memory.
Virtual memory is the total of physical RAM and swap space added up.
It's a way of letting your system run more programs than it physically has the space for.
How much swap should be allocated to a machine is a complicated and opinionated question; ask 2 people and get 3 answers :-)
Your setup isn't bad, and I wouldn't recommend making changes to it until you learn a lot more about how virtual memory works and how to tune it. It's a good starting point.
| Arch Linux thinks I have about 16 gigs of ram when I only have 8 |
1,521,302,084,000 |
In Linux, from /proc/PID/stat, I can get the start_time (22nd) field, which indicates how long after the kernel booted the process was started.
What is a good way to convert that to a seconds-since-the-epoch format? Adding it to the btime of /proc/stat?
Basically, I'm looking for the age of the process, not exactly when it was started. My first approach would be to compare the start_time of the process being investigated with the start_time of the current process (assuming it has not been running for long).
Surely there must be way better ways.
I didn't find any obvious age-related parameters when looking at https://www.kernel.org/doc/Documentation/filesystems/proc.txt
So, What I have currently is:
process age = (current_utime - ([kernel]btime + [process]start_time))
Any alternative ways that are more efficient from within a shell script? (Ideally correct across DST changes)
|
Since version 3.3.0, the ps of procps-ng on Linux has an etimes output field that gives you the elapsed time in seconds since the process was started. (Note that this is not necessarily the same thing as the elapsed time since the last time that process executed a command (if at all!), i.e. the time the process has been running the command shown in its name, so it may not be as useful as you thought.)
So you can do:
ps -o etimes= -p "$pid"
For the start time as Unix epoch time (with GNU date):
(export TZ=UTC0 LC_ALL=C; date -d "$(ps -o lstart= -p "$pid")" +%s)
Note that you cannot use the modification time of /proc/$pid. That is just the time those files were instantiated, which has nothing to do with the start time of the process.
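If ps is too old for etimes, the question's own formula can be sketched directly against /proc (this parsing is naive: taking field 22 with awk breaks if the command name in field 2 contains spaces):

```shell
# process age = uptime - (start_time / clock ticks per second)
pid=$$                                                # example: this shell
start_ticks=$(awk '{print $22}' "/proc/$pid/stat")    # ticks since boot
ticks=$(getconf CLK_TCK)                              # usually 100
up=$(awk '{print int($1)}' /proc/uptime)              # seconds since boot
echo "age: $(( up - start_ticks / ticks )) seconds"
```

Because both start_time and /proc/uptime count from boot, the result is immune to DST and wall-clock changes.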
| Get process age from command line [duplicate] |
1,521,302,084,000 |
I'm running Ubuntu Linux. Suppose there is a program called myprogram. This program prompts the user for input; specifically, the user must type an integer when prompted and press Enter. I would like to automate this process using a bash script. In particular, I would like to execute myprogram, say, 100 times (using a counter i which goes from 1 to 100). On each execution of myprogram, I would like to enter the current value of i when prompted.
(By the way, myprogram takes options/switches -options, all of which will be constant and thus specified within the bash script.)
An incomplete skeleton of this bash script might be:
#!/bin/bash
for i in {1..100}
do
myprogram -options
done
Now I would like to modify the above code so that the current value of i is entered when prompted by the program. What is the best way to do this?
The website of the software I am using suggests using <<EOF at the end of the myprogram -options line. I think that this tells bash to look at the "end of the file" for the input to use. But what if I don't want to place the input at the end of the file? What if I would like to put it immediately after the << or <?
The reason is that things will get more complicated. For example, I may introduce an integer counter j that changes in some non-linear, non-sequential way. I would then want to feed the current value of j to myprogram on each iteration, but the value of j may change between the call to myprogram -options and the end of the file EOF.
Do you have any suggestions?
|
For nearly all programs, both echo $i | myprogram -options and myprogram -options <<<$i should work, by feeding the program $i through standard input.
<foo will use the contents of the file named foo as stdin.
<<foo will use the text between that and a line consisting solely of foo as standard input. This is a here document (heredoc), as Gilles said; EOF doesn't actually mean the end of the file, it's just a common heredoc delimiter (we use "foo" instead in this example).
<<<foo will use the string "foo" as standard input. You can also specify a variable $foo, and the shell will use its contents as stdin, as I showed above. This is called a herestring, as it uses a short string in contrast to a whole block, as in a heredoc. Herestrings work in bash, but not in /bin/sh.
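The three forms side by side, each feeding the same value to cat (herestrings need bash; plain sh only has the first two):

```shell
#!/bin/bash
i=42
cat <<EOF        # heredoc: stdin is everything up to the EOF line
$i
EOF
cat <<< "$i"     # herestring: stdin is the expansion of the string
echo "$i" | cat  # pipe: equivalent for simple one-line input
```

All three print 42, so for the loop in the question, myprogram -options <<< "$i" is the most compact choice.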
| Automating textual input from a bash script without using EOF |
1,521,302,084,000 |
I'm using a script to delete old files in a directory, but the script kept showing
/usr/bin : Argument list too long
It turned out there were more than 40,000 files, so I want to know the maximum number of files that can be listed. Is there any way to find it? Is it system-specific?
|
This message is not from ls, but from execve(). You can see the maximum size of the argument list by running getconf ARG_MAX.
It's not about how many files the application can handle, but about the size of the arguments that can be passed using exec to the operating system, which returns E2BIG if the size of the arguments is beyond the acceptable range.
This limit was traditionally (until Linux 2.6.23) defined by the kernel constant ARG_MAX, found in linux/limits.h. However, nowadays it is specific to the environment you are running in. Typically, the maximum length of the arguments can now be as big as a quarter of the userspace stack size.
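Both the limit and the standard workaround are easy to check. xargs batches its input into argument lists that fit under the limit, which is the usual fix for "Argument list too long" (the file names below are illustrative):

```shell
getconf ARG_MAX    # the current argument-list limit, in bytes

# The usual fix: let xargs batch the arguments, e.g.
#   find . -name '*.log' -mtime +30 -print0 | xargs -0 rm --
printf '%s\n' file1 file2 | xargs echo would-remove
```

The find | xargs pipeline never builds one giant command line, so it works regardless of how many files match.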
| maximum number of files ls can list [duplicate] |
1,521,302,084,000 |
I am having lots of issues with tmux on Mac.
One problem I have is that I cannot bind a key in my tmux.conf to resize my panes.
What I need is the CTRL-b: resize-pane -U 10. I'm trying to increase the size of the pane upwards ten cells (or downwards or left or right) using a key shortcut instead of having to type this over and over (which I currently do unfortunately).
But I cannot find a way to configure this, since on Mac it seems that CTRL and other keys work differently than on Linux.
|
In ~/.tmux.conf:
bind e resize-pane -U 10
Then reload the configuration with tmux source-file ~/.tmux.conf (another command worth binding to a key, using the same principle).
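If you resize in bursts, tmux's -r flag on bind makes the key repeatable, so one prefix press allows several resizes (a small optional addition to the binding above):

```tmux
# -r: after one prefix, 'e' can be pressed repeatedly within repeat-time
bind -r e resize-pane -U 10
```

This avoids having to hit the prefix before every single resize step.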
| How can I create a shortcut to resize panes in a tmux window? |
1,521,302,084,000 |
I have a 32 GB USB flash drive. When deleting files from the drive where the drive is plugged into a Ubuntu 16 laptop, it creates a folder called '.Trash-1000'
This .Trash-1000 folder contains two folders which are 'file' and 'info' where file contains the files I have deleted and info contains metadata about those files.
The issue is that this .Trash-1000 folder takes up space because it holds a copy of the deleted files. I then have to delete the .Trash-1000 folder itself when it starts filling up after multiple deletes.
Is there a way to disable this feature on the USB drive?
|
Have a look at this article.
According to the article, Ubuntu will create such folders when a file is deleted from a USB drive. Presumably this would allow a file to be restored if you accidentally deleted it.
It contains the following solution:
Don't use the Delete key on its own (otherwise the .Trash-1000 folder will be created).
Press the key combination Shift+Delete to delete instead; then Ubuntu won't create a .Trash-1000 folder. (Note: files and folders deleted this way are gone forever!)
As an alternative, you can also use the command line's rm command, which likewise deletes the file directly.
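The rm route is easy to verify; nothing ends up in any trash folder:

```shell
f=$(mktemp)    # create a scratch file
rm "$f"        # deleted directly, no .Trash-1000 copy is made
test ! -e "$f" && echo 'gone for good'
```

This is exactly why rm is the safer habit on removable media where trash copies silently eat space.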
| How to disable creation of .Trash-1000 folder? |
1,521,302,084,000 |
I have a CentOS 8 guest running on a Fedora 31 host. The guest is attached to a bridge network, virbr0, and has address 192.168.122.217. I can log into the guest via ssh at that address.
If I start a service on the guest listening on port 80, all connections from the host to the guest fail like this:
$ curl 192.168.122.217
curl: (7) Failed to connect to 192.168.122.217 port 80: No route to host
The service is bound to 0.0.0.0:
guest# ss -tln
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 5 0.0.0.0:80 0.0.0.0:*
LISTEN 0 128 [::]:22 [::]:*
Using tcpdump (either on virbr0 on the host, or on eth0 on the guest), I see that the guest appears to be replying with an ICMP "admin prohibited" message.
19:09:25.698175 IP 192.168.122.1.33472 > 192.168.122.217.http: Flags [S], seq 959177236, win 64240, options [mss 1460,sackOK,TS val 3103862500 ecr 0,nop,wscale 7], length 0
19:09:25.698586 IP 192.168.122.217 > 192.168.122.1: ICMP host 192.168.122.217 unreachable - admin prohibited filter, length 68
There are no firewall rules on the INPUT chain in the guest:
guest# iptables -S INPUT
-P INPUT ACCEPT
The routing table in the guest looks perfectly normal:
guest# ip route
default via 192.168.122.1 dev eth0 proto dhcp metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.122.0/24 dev eth0 proto kernel scope link src 192.168.122.217 metric 100
SELinux is in permissive mode:
guest# getenforce
Permissive
If I stop sshd and start my service on port 22, it all works as expected.
What is causing these connections to fail?
In case someone asks for it, the complete output of iptables-save on the guest is:
*filter
:INPUT ACCEPT [327:69520]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [285:37235]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
COMMIT
*security
:INPUT ACCEPT [280:55468]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [285:37235]
COMMIT
*raw
:PREROUTING ACCEPT [348:73125]
:OUTPUT ACCEPT [285:37235]
COMMIT
*mangle
:PREROUTING ACCEPT [348:73125]
:INPUT ACCEPT [327:69520]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [285:37235]
:POSTROUTING ACCEPT [285:37235]
COMMIT
*nat
:PREROUTING ACCEPT [78:18257]
:INPUT ACCEPT [10:600]
:POSTROUTING ACCEPT [111:8182]
:OUTPUT ACCEPT [111:8182]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A DOCKER -i docker0 -j RETURN
COMMIT
|
Well, I figured it out. And it's a doozy.
CentOS 8 uses nftables, which by itself isn't surprising. It ships with the nft-based versions of the iptables commands, which means that when you use the iptables command it actually maintains a set of compatibility tables in nftables.
However...
Firewalld -- which is installed by default -- has native support for nftables, so it doesn't make use of the iptables compatibility layer.
So while iptables -S INPUT shows you:
# iptables -S INPUT
-P INPUT ACCEPT
What you actually have is:
chain filter_INPUT {
type filter hook input priority 10; policy accept;
ct state established,related accept
iifname "lo" accept
jump filter_INPUT_ZONES_SOURCE
jump filter_INPUT_ZONES
ct state invalid drop
reject with icmpx type admin-prohibited <-- HEY LOOK AT THAT!
}
The solution here (and honestly probably good advice in general) is:
systemctl disable --now firewalld
With firewalld out of the way, the iptables rules visible with iptables -S will behave as expected.
| Why are my network connections being rejected? |
1,521,302,084,000 |
After a recent upgrade on my local Debian install, Guake does not drop down when I press F12.
Operating System: Debian GNU/Linux buster/sid
Kernel: Linux 4.12.0-1-686-pae
Architecture: x86
Guake version: 0.8.8-1
Once I have started it manually I can call it with F12, but only on the virtual desktop where it was started; as soon as I change to a different desktop, I can't call it with F12 anymore.
As you can see in the screenshot, I'm on the second desktop and the Guake symbol shows up in the bottom left. I can call the terminal by clicking there with the mouse, but F12 does not work.
If I start it from a terminal I get the following output:
(guake:3387): libglade-WARNING **: unknown attribute `swapped' for <signal>.
INFO:guake.guake_app:Logging configuration complete
/usr/lib/python2.7/dist-packages/guake/guake_app.py:865: GtkWarning: IA__gtk_window_set_type_hint: assertion '!gtk_widget_get_mapped (GTK_WIDGET (window))' failed
self.window.set_type_hint(gtk.gdk.WINDOW_TYPE_HINT_DOCK)
/usr/lib/python2.7/dist-packages/guake/guake_app.py:866: GtkWarning: IA__gtk_window_set_type_hint: assertion '!gtk_widget_get_mapped (GTK_WIDGET (window))' failed
self.window.set_type_hint(gtk.gdk.WINDOW_TYPE_HINT_NORMAL)
^CTraceback (most recent call last):
File "/usr/lib/python2.7/runpy.py", line 174, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/usr/lib/python2.7/dist-packages/guake/main.py", line 253, in <module>
exec_main()
File "/usr/lib/python2.7/dist-packages/guake/main.py", line 250, in exec_main
gtk.main()
|
I found a workaround for this issue on the official Guake GitHub page:
You have to go to the GNOME applications menu and click on the keyboard symbol.
This will list all the default GNOME keyboard shortcuts.
Scroll down and click the + at the bottom of the page to add a new shortcut:
Name: Guake
Command: guake -t
Shortcut: F12
In my case no reboot was needed.
Starting Guake via this workaround is not very smooth, but it is usable...
from the guake man-page:
-t, --toggle-visibility
Toggle the visibility of guake
| Guake terminal does not drop down with F12 after upgrade |
1,521,302,084,000 |
I have 3 USB cameras on a single PC; one camera will be unused and can be ignored.
I need to force the two identical cameras to be mapped to constant device names (like /dev/video1 and /dev/video2). The cameras should not change their order (say, camera 1 mapped to /dev/video1 and camera 2 to /dev/video2; after a disconnect they should keep the same order, not camera 1 --> /dev/video2 and camera 2 --> /dev/video1). How can I make this setup work?
I will feed camera output to Gstreamer. Cameras are using v4l2 and uvc driver interface. Linux distro in question is Archlinux, cameras - some Logitech webcams.
|
I suggest you autocreate /dev symlinks using udev, using unique properties (serial number? port number?) of your USB cameras. See this (should apply to Arch as well) tutorial about udev rules. Or maybe this tutorial is clearer.
You can get the list of properties for your devices using:
sudo udevadm info --query=all --name=/dev/video1
then
sudo udevadm info --query=all --name=/dev/video2
Find what's different and create a .rules file out of it inside /etc/udev/rules.d (you can use 99-myvideocards.rules as a filename, say); if, for instance, you want to use the serial number, you'd get a ruleset that looks like:
ATTRS{ID_SERIAL}=="0123456789", SYMLINK+="myfirstvideocard"
ATTRS{ID_SERIAL}=="1234567890", SYMLINK+="mysecondvideocard"
After unplugging/replugging your devices (or after a reboot), you'll get /dev/myfirstvideocard and /dev/mysecondvideocard that always point to the same devices.
| How to bind v4l2 USB cameras to the same device names even after reboot? |
1,521,302,084,000 |
I currently have a NAS box running under port 80. To access the NAS from the outside, I mapped port 8080 to port 80 on the NAS as follows:
iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to-destination 10.32.25.2:80
This is working like a charm. However, it works only if I am accessing the website from outside the network (at work, at a different house, etc.). So when I type mywebsite.com:8080, iptables does the job correctly and everything works fine.
Now, the problem is: how can I redirect this port from the inside of the network? My domain name mywebsite.com points to my router (my Linux server) from the inside (10.32.25.1), but I want to redirect port 8080 to port 80 on 10.32.25.2 from the inside.
Any clue?
Edit #1
Attempting to help facilitate this question I put this diagram together. Please feel free to update if it's incorrect or misrepresenting what you're looking for.
iptables
| .---------------.
.-,( ),-. v port 80 |
.-( )-. port 8080________ | |
( internet )------------>[_...__...°]------------->| NAS |
'-( ).-' 10.32.25.2 ^ 10.32.25.1 | |
'-.( ).-' | | |
| '---------------'
|
|
__ _
[__]|=|
/::/|_|
|
I finally found out how. First, I had to add -i eth1 to my "outside" rule (eth1 is my WAN connection). I also needed to add two other rules. Here is what I ended up with:
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 8080 -j DNAT --to 10.32.25.2:80
iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to 10.32.25.2:80
iptables -t nat -A POSTROUTING -p tcp -d 10.32.25.2 --dport 80 -j MASQUERADE
| IPTables - Port to another ip & port (from the inside) |
1,521,302,084,000 |
I am using Fedora 17 and over the last few days I have been having an issue with my system. Whenever I try to start httpd it shows me:
Error: No space left on device
When I execute systemctl status httpd.service, I receive the following output:
httpd.service - The Apache HTTP Server (prefork MPM)
Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled)
Active: inactive (dead) since Tue, 19 Feb 2013 11:18:57 +0530; 2s ago
Process: 4563 ExecStart=/usr/sbin/httpd $OPTIONS -k start (code=exited, status=0/SUCCESS)
CGroup: name=systemd:/system/httpd.service
I tried to Google this error and all links point to clearing the semaphores. I don't think this is the issue as I tried to clear the semaphores but that didn't work.
Edit 1
Here is the output of df -h:
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
rootfs 50G 16G 32G 34% /
devtmpfs 910M 0 910M 0% /dev
tmpfs 920M 136K 920M 1% /dev/shm
tmpfs 920M 1.2M 919M 1% /run
/dev/mapper/vg-lv_root 50G 16G 32G 34% /
tmpfs 920M 0 920M 0% /sys/fs/cgroup
tmpfs 920M 0 920M 0% /media
/dev/sda1 497M 59M 424M 13% /boot
/dev/mapper/vg-lv_home 412G 6.3G 385G 2% /home
Here is the tail of the httpd error log:
[root@localhost ~]# tail -f /var/log/httpd/error_log
[Tue Feb 19 11:45:53 2013] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Tue Feb 19 11:45:53 2013] [notice] Digest: generating secret for digest authentication ...
[Tue Feb 19 11:45:53 2013] [notice] Digest: done
[Tue Feb 19 11:45:54 2013] [notice] Apache/2.2.23 (Unix) DAV/2 PHP/5.4.11 configured -- resuming normal operations
[Tue Feb 19 11:47:23 2013] [notice] caught SIGTERM, shutting down
[Tue Feb 19 11:48:00 2013] [notice] SELinux policy enabled; httpd running as context system_u:system_r:httpd_t:s0
[Tue Feb 19 11:48:00 2013] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Tue Feb 19 11:48:00 2013] [notice] Digest: generating secret for digest authentication ...
[Tue Feb 19 11:48:00 2013] [notice] Digest: done
[Tue Feb 19 11:48:00 2013] [notice] Apache/2.2.23 (Unix) DAV/2 PHP/5.4.11 configured -- resuming normal operations
tail: inotify resources exhausted
tail: inotify cannot be used, reverting to polling
Edit 2
Here is the output of df -i:
[root@localhost ~]# df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
rootfs 3276800 337174 2939626 11% /
devtmpfs 232864 406 232458 1% /dev
tmpfs 235306 3 235303 1% /dev/shm
tmpfs 235306 438 234868 1% /run
/dev/mapper/vg-lv_root 3276800 337174 2939626 11% /
tmpfs 235306 12 235294 1% /sys/fs/cgroup
tmpfs 235306 1 235305 1% /media
/dev/sda1 128016 339 127677 1% /boot
/dev/mapper/vg-lv_home 26984448 216 26984232 1% /home
Thanks
|
Here we see evidence of a problem:
tail: inotify resources exhausted
By default, Linux only allocates 8192 watches for inotify, which is ridiculously low. And when it runs out, the error is also No space left on device, which may be confusing if you aren't explicitly looking for this issue.
Raise this value with the appropriate sysctl:
fs.inotify.max_user_watches = 262144
(Add this to /etc/sysctl.conf and then run sysctl -p.)
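You can check the current value before and after the change; the sysctl is just a view onto a file under /proc:

```shell
# The current inotify watch limit (often 8192 by default):
cat /proc/sys/fs/inotify/max_user_watches
# Equivalent, where procps is installed: sysctl fs.inotify.max_user_watches
```

If the number is still at the old default, the sysctl.conf change hasn't been applied yet.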
| Httpd : no space left on device |
1,521,302,084,000 |
How can I list scsi device ids under Linux?
|
cat /proc/scsi/scsi
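Note that /proc/scsi/scsi only exists when the kernel is built with CONFIG_SCSI_PROC_FS; on modern systems, sysfs and util-linux's lsblk expose the same information:

```shell
# Modern alternative (util-linux); shows each block device with its
# host:channel:id:lun SCSI address:
lsblk --scsi
# The raw sysfs view: ls /sys/class/scsi_device/
```

Each entry under /sys/class/scsi_device/ is named after the host:channel:id:lun tuple, which is the SCSI device id.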
| Find scsi device ids under Linux? |
1,521,302,084,000 |
For example, I was reading the setuid man page today. It says:
If the effective UID of the caller is root, the real UID and saved set-user-ID are also set.
I don't know what set-user-ID is. How can I get more information about it if I don't have Internet connection?
One thing I can do is to open some books and search for it.
What are other places on my Linux system where I can search for more information?
|
Use apropos
apropos - search the manual page names and descriptions
Try apropos 'set user id' for an example.
| Where to search for more info if I don't have Internet? |
1,521,302,084,000 |
The Linux foundation list of standard utilities includes getopts but not getopt. Similar for the Open Group list of Posix utilities.
Meanwhile, Wikipedia's list of standard Unix Commands includes getopt but not getopts. Similarly, the Windows Subsystem for Linux (based on Ubuntu based on Debian) also includes getopt but not getopts (and it is the GNU Enhanced version).
balter@spectre:~$ which getopt
/usr/bin/getopt
balter@spectre:~$ getopt -V
getopt from util-linux 2.27.1
balter@spectre:~$ which getopts
balter@spectre:~$
So if I want to pick one that I can be the most confident that anyone using one of the more standard Linux distros (e.g. Debian, Red Hat, Ubuntu, Fedora, CentOS, etc.), which should I pick?
Note:
thanks to Michael and Muru for explaining about builtin vs executable. I had just stumbled across this as well which lists bash builtins.
|
which is the wrong tool. getopts is usually also a builtin:
Since getopts affects the current shell execution environment, it is
generally provided as a shell regular built-in.
~ for sh in dash ksh bash zsh; do "$sh" -c 'printf "%s in %s\n" "$(type getopts)" "$0"'; done
getopts is a shell builtin in dash
getopts is a shell builtin in ksh
getopts is a shell builtin in bash
getopts is a shell builtin in zsh
If you're using a shell script, you can safely depend on getopts. There might be other reasons to favour one or the other, but getopts is standard.
See also: Why not use "which"? What to use then?
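As a quick illustration, here is a hedged sketch of what portable option parsing with the getopts builtin looks like in a script (the -v/-o options and variable names are purely illustrative):

```shell
# Minimal getopts sketch: -v is a flag, -o takes an argument
parse() {
    OPTIND=1            # reset so the function can be called repeatedly
    verbose=0 outfile=
    while getopts 'vo:' opt; do
        case $opt in
            v) verbose=1 ;;
            o) outfile=$OPTARG ;;
            *) return 1 ;;
        esac
    done
    shift $((OPTIND - 1))   # drop the parsed options, keep the operands
    printf 'verbose=%s outfile=%s rest=%s\n' "$verbose" "$outfile" "$*"
}
parse -v -o out.txt file1 file2   # -> verbose=1 outfile=out.txt rest=file1 file2
```

Because getopts is a builtin specified by POSIX, this works unchanged in dash, ksh, bash and zsh.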
| Which is the more standard package, getopt or getopts (with an "s")? |
1,521,302,084,000 |
I'm trying to run ADB on a linux server with multiple users where I am not root (to play with my android emulator). The adb daemon writes its logs to the file /tmp/adb.log which unfortunately seems to be hard-coded into ADB and this situation is not going to change.
So, adb is failing to run, giving the obvious error: cannot open '/tmp/adb.log': Permission denied. This file is created by another user and /tmp has sticky bit on. If I start adb with adb nodaemon server making it write to stdout, no errors occur (I also set up its port to a unique value to avoid conflicts).
My question is: is there some way to make ADB write to another file than /tmp/adb.log? More generally, is there a way to create a sort of a process-specific symlink? I want to redirect all file accesses to /tmp/adb.log to, saying, a file ~/tmp/adb.log.
Again, I am not root on the server, so chroot, mount -o rbind and chmod are not valid options. If possible, I'd like not to modify ADB sources, but surely if there are no other solutions, I'll do that.
P.S. For the specific ADB case I can resort to running adb nodaemon server with nohup and output redirection, but the general question is still relevant.
|
Here is a very simple example of using util-linux's unshare to put a process in a private mount namespace and give it a different view of the same filesystem its parent currently has:
{ cd /tmp #usually a safe place for this stuff
echo hey >file #some
echo there >file2 #evidence
sudo unshare -m sh -c ' #unshare requires root by default
mount -B file2 file #bind mount there over hey
cat file #show it
kill -TSTP "$$" #suspend root shell and switch back to parent
umount file #unbind there
cat file' #show it
cat file #root shell just suspended
fg #bring it back
cat file2 #round it off
}
there #root shell
hey #root shell suspended
hey #root shell restored
there #rounded
You can give a process a private view of its filesystem with the unshare utility on up-to-date linux systems, though the mount namespace facility itself has been fairly mature for the entire 3.x kernel series. You can enter pre-existing namespaces of all kinds with nsenter utility from the same package, and you can find out more with man.
| Is it possible to fake a specific path for a process? |
1,521,302,084,000 |
I have a directory called outer.
outer contains a directory named inner (which contains lots of files of same extension)
I cd to outer. How can I delete all the files within inner but leave the directory inner remaining (but empty)?
|
If you want to delete a directory's contents and not the directory itself, all you need to do is tell rm to delete the contents:
rm inner/*
That will delete all non-hidden files in ./inner and leave the directory intact. To also delete any subdirectories, use -r:
rm -r inner/*
If you also want to delete hidden files, you can do (assuming you are using bash):
shopt -s dotglob
rm -r inner/*
That last command will delete all files and all directories in inner, but will leave inner itself intact.
Finally, note that you don't need to cd to outer to run any of these:
$ tree -a outer/
outer/
├── dir
└── inner
├── dir
├── file
└── .hidden
3 directories, 2 files
I can now run rm -r outer/inner/* from my current directory, no need to cd outer, and it will remove everything except the directory itself:
$ shopt -s dotglob
$ rm -r outer/inner/*
$ tree -a outer/
outer/
├── dir
└── inner
2 directories, 0 files
| Delete all files within a directory, without deleting the directory |
1,521,302,084,000 |
I'm trying to implement code that enumerate all existing TCP connections per process (similar to netstat -lptn).
I prefer to implement it myself and not to rely on netstat.
In order to do that, I'm parsing data from /proc/<PID>/net/tcp.
I saw that a number of TCP connections are listed under /proc/<PID>/net/tcp but not listed by netstat -lptn command.
For example I see that /proc/1/net/tcp and /proc/2/net/tcp have several TCP connections (tried on Ubuntu 16).
As I understand, /proc/1/net/tcp is related to the /sbin/init process which should not have any TCP connection.
The /proc/2/net/tcp is related to kthreadd which also should not have any TCP connection.
|
There are many misunderstandings in your approach. I'll go over them one by one.
Sockets are not associated with a specific process. When a socket is created its reference count is 1. But through different methods such as dup2, fork, and file descriptor passing it's possible to create many references to the same socket causing its reference count to increase. Some of these references can be from an open file descriptor table, which itself can be used by many threads. Those threads may belong to the same thread group (PID) or different thread groups. When you use the -p flag for netstat it will enumerate the sockets accessible to each process and try to find a process for each known socket. If there are multiple candidate processes, there is no guarantee that it shows the process you are interested in.
/proc/<PID>/net/tcp does not only list sockets related to that process. It lists all TCPv4 sockets in the network namespace which that process belongs to. In the default configuration all processes on the system will belong to a single network namespace, so you'll see the same result with any PID. This also explains why a thread/process which doesn't use networking has contents in this file. Even if it doesn't use networking itself it still belongs to a network namespace in which other processes may use networking.
/proc/<PID>/net/tcp contains both listening and connected sockets. When you pass -l to netstat it will show you only listening sockets. To match the output more closely you'd need -a rather than -l.
/proc/<PID>/net/tcp contains only TCPv4 sockets. You need to use /proc/<PID>/net/tcp6 as well to see all TCP sockets.
If you are only interested in sockets in the same namespace as your own process you don't need to iterate through different PIDs. You can instead use /proc/net/tcp and /proc/net/tcp6 since /proc/net is a symlink to /proc/self/net.
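As a side note for the parsing itself: the local_address and rem_address columns of these files are hex-encoded, with the 4-byte IPv4 address in host byte order (little-endian on x86) and the port as plain hex. A small sketch (an assumed helper, not part of any standard tool) of decoding one such field:

```shell
# Decode a /proc/net/tcp address field, e.g. "0100007F:0016" -> 127.0.0.1:22
decode_addr() {
    local hexip=${1%:*} hexport=${1#*:}
    # bytes are stored little-endian, so print them in reverse order
    printf '%d.%d.%d.%d:%d\n' \
        "0x${hexip:6:2}" "0x${hexip:4:2}" "0x${hexip:2:2}" "0x${hexip:0:2}" \
        "0x$hexport"
}
decode_addr 0100007F:0016   # -> 127.0.0.1:22
```

The tcp6 file uses the same scheme with a 16-byte address, decoded 32-bit word by 32-bit word.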
| reading TCP connection list from /proc |
1,521,302,084,000 |
Will tinyCore let me use APT? I am looking forward to use a very light version of Linux on a netbook or something. For the last few years I been using Linux Mint and I like it, but I am getting tired of it, since it is too big to my taste now, and I want an easier, faster and more lightweight one, however I am used to apt-get in Ubuntu and Debian.
I only use my computers for coding and web development stuff, so I don't want to get my hands dirty on a Linux stuff for long, all I need is a lightweight OS that doesn't look stupid and big, vim and a browser to test. However, APT is important to me; will it work with TinyCore or something similar?
|
You could probably compile dpkg/apt on TinyCore, but then you'd have to package stuff on your own.
But really, you are optimizing the wrong thing here. The major factors that define memory usage are the applications you use; you could probably do with Debian + a minimal window manager (e.g. Openbox). It should be lightweight enough.
| Can I use APT on TinyCore? |
1,521,302,084,000 |
Let's say I have this virtual machine running:
[root@centos ~]# fdisk -ul
Disk /dev/sda: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 63 417689 208813+ 83 Linux
/dev/sda2 2522205 13799834 5638815 83 Linux
/dev/sda3 13799835 16771859 1486012+ 8e Linux LVM
/dev/sda4 417690 2522204 1052257+ 5 Extended
/dev/sda5 417753 2522204 1052226 82 Linux swap / Solaris
How can I know how much free space left for more partitions on the disk?
|
As root, type in a shell:
# cfdisk /dev/sdX #Where /dev/sdX is the device
it will show you something like this:
cfdisk (util-linux-ng 2.18)
Disk Drive: /dev/sdb
Size: 3926949888 bytes, 3926 MB
Heads: 255 Sectors per Track: 63 Cylinders: 477
Name Flags Part Type FS Type [Label] Size (MB)
sdb1 Primary vfat [ABDEL] 1998.75
sdb2 Boot Primary ext3 [linx] 1924.72
if the device has free space it will be shown.
Note: cfdisk in fact is a terminal based partition editor.
| How do I check how much free space left on a device to create a partition |
1,521,302,084,000 |
I have a random speaker and I want to develop a driver for it so I can report statistics, battery, etc to the dev file system. However, I'm having a hard time finding the speaker's vendor and device id in order to properly associate it with the driver.
I don't even know who the manufacturer is and my Linux machine doesn't detect it (lsusb, and other utils), T&G has a speaker that looks exactly like mine, but mine has a different logo on it (orange flower), not T&G's one.
Does every device have a vendor and product id associated with it?
if so, how do you find it if you don't know your device, and your machine doesn't recognize it?
Is it even possible to report battery and other stats to my machine through a USB port of my speaker, which I believe is supposed to be used with a USB card?
|
Every device that communicates via USB has a VID (Vendor ID) and PID (Product ID). A vendor ID is obtained via the USB implementer forum (USB.org), which more or less guarantees its uniqueness.
When you plug in a USB device, you should see it in the output of dmesg, even if the device is not supported. I have not yet seen a USB device that does not show up this way.
The alternative is that the device does not communicate via USB at all, but only uses the USB connector for charging.
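For reference, lsusb prints the pair as ID vendor:product; the line below is illustrative output for the standard Linux root hub (1d6b is the Linux Foundation's vendor ID):

```
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
```

If a plugged-in device produces no dmesg output and no lsusb line at all, that is a strong hint it only uses the connector for power.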
| Does every USB device have a vendor id and product id? |
1,521,302,084,000 |
split splits a file into pieces which in total consume the same storage space (doubling the consumed disk space).
ln can create a symbolic link (symlink) to other (target) file while not duplicating the file and thus does not consume double the space of the target file.
due to the lack of storage space, can a file be split by reference/symbolically (i.e. virtually splitting the file) into pieces that point to specific offsets in the big file?
for example, given a file which is 2 MB, break it into 2 pieces, where each piece references 1 MB of the big file (in the same spirit as a symlink), such that each piece:
does not overlap other pieces (pieces will not reference the same data in the big file)
does not consume the same storage size as the big file portion it references
piece_1.file -> 2mb.file 1st MB
piece_2.file -> 2mb.file 2nd MB
and the storage size of each piece is much less than 1MB
|
due to the lack of storage space, can a file be split by reference/symbolically (i.e. virtually splitting the file) into pieces that point to specific offsets in the big file?
Not directly, no. Files don't work that way in the thinking of POSIX. They're more independent, atomic units of data.
Two options:
Loopback devices
This is a runtime solution, meaning that it's not an on-disk solution, but needs to be set up manually. That might be an advantage or a disadvantage!
You can set up a loopback device quite easily; if you're using a freedesktop system message bus-compatible session manager (i.e., you're logged in to your machine graphically and are running gnome, xfce4, kde,…), udisks is your friend:
blksize=$((2**20))
udisksctl loop-setup -s $blksize -f /your/large/file
udisksctl loop-setup -s $blksize -o $blksize -f /your/large/file
The first command gives you a /dev/loopN which starts at the 0th byte and goes on for 2²⁰ bytes (i.e., a megabyte).
The second command gives you a /dev/loopN+1 which starts at the 2²⁰th byte and goes on for 2²⁰ bytes (i.e., a megabyte). (Note how we're starting to count at 0, so that this is actually exactly after the first chunk.)
You can then use these two loopback devices, e.g. /dev/loop0 and /dev/loop1. They literally describe a "view" into your file. You change something in these block devices, you change it in your large file.
If you're not logging in graphically, the exact same can be achieved, but you need root privileges:
blksize=$((2**20))
sudo losetup --sizelimit $blksize -f /your/large/file
sudo losetup --sizelimit $blksize -o $blksize -f /your/large/file
Reflinking the storage blocks
This is an on-disk solution. It will also make a "logical" copy, i.e., if you change something in your small files, it will not be reflected in the large file.
You must use a file system that supports reflink (to the best of my knowledge, these are XFS, Btrfs and some network file systems). (File system blocks through which a split goes would need to be duplicated, but for most filesystems, we're talking less than 4 kB here.)
In that case, your file system can copy files without the copy using any space of its own, as long as the copy or original aren't changed (and even if they are, only the affected parts are duplicated).
So, on such a file system, we have two options:
make the split into the first and the second half "normally", and then ask a utility (duperemove) to compare the three files and deduplicate.
make the copy of the two halves of your file in a manner that hints the file system to directly avoid using twice the space.
Since the first option temporarily needs twice the space, let's do the second right away. I wrote a small program to do that split for you (full source code) (Attention: Found a bug, which I'm not going to fix. This is just an example). The relevant excerpt is this:
// this is file mysplit.c , a C99 program
// SPDX-License-Identifier: Linux-man-pages-copyleft
// (heavily based on the copy_file_range(2) man page example)
// compile with `cc -o mysplit mysplit.c`
// […]
int main(int argc, char* argv[])
{
[…]
ret = copy_file_range(fd_in /*input file descriptor*/,
&in_offset /*address of input offset*/,
fd_out /*output file descriptor*/,
NULL /*address of output offset*/,
len /*amount of bytes to copy*/,
0 /*flags (reserved, must be ==0)*/);
[…]
}
Let's try that out. We'll have to:
get and compile my program
make a XFS file system first, as that's one of the file systems that supports reflinks
mount it
put a large file filled with random data inside
check the free space on that file system
add splits of the file (using my program)
check the free space again.
The script below does just that.
# replace "apt" with "dnf" if you're on some kind of redhat/fedora
# replace "apt install" with "pacman" followed by a randomly guessed amount of options involving the letters {y, s, S, u} if you're on arch or manjaro
apt install curl gcc xfsprogs
# Download the C program from above
curl https://gist.githubusercontent.com/marcusmueller/d1e0235f9a484cb44626e35460a5c0ac/raw/6295f9f6371b916b87a5d0a5a6edad65f9ea8627/mysplit.c > mysplit.c
# Compile
cc -o mysplit mysplit.c
# Demo: (doesn't need root privileges if you got udisks)
# Make file system in a file system image,
# Mount that image
# create a 300 MB file in there,
# split it,
# show there's still nearly the same amount of free space
# Make file system in a file system image
fallocate -l 1G filesystemimage # 1 GB in size
## this is a bit confusing, but to make the root of the file system
## world-writable, we do:
echo "thislinejustforbackwardscompatibility/samefornextline
1337 42
d--777 ${UID} ${GID}" > /tmp/protofile
mkfs.xfs -p /tmp/protofile -L testfs filesystemimage
# Mount that image
loopdev=$(LANG=C udisksctl loop-setup -f filesystemimage | sed 's/.* as \(.*\).$/\1/')
## on most systems, that new device is automatically mounted.
sleep 3
## udisksctl mount -b "${loopdev}"
# create a 300 MB file in there,
target="/run/media/$(whoami)/testfs"
rndfile="${target}/largefile"
dd if=/dev/urandom "of=${rndfile}" bs=1M count=300
## Check free space
echo "free space with large file"
df -h "${target}"
# split it,
## (copy the first 100 MB)
./mysplit "${rndfile}" "${target}/split_1" 0 $((2**20 * 100))
## (copy the next 120 MB, just showing off that splits don't need to be of uniform size)
./mysplit "${rndfile}" "${target}/split_2" "$((2**20 * 100))" "$((2**20 * 120))"
# show there's still nearly the same amount of free space
## Check free space
echo "free space with large file + splits"
df -h "${target}"
| split/reference big file by offset reference |
1,521,302,084,000 |
I've faced a really strange issue today, and am totally helpless about it.
Some of the servers I manage are monitored with Nagios. Recently I saw a disk usage probe failing with this error:
DISK CRITICAL - /sys/kernel/debug/tracing is not accessible: Permission denied
I wanted to investigate and my first try was to check this directory permissions, and compare these with another server (who's working well). Here are the commands I ran on the working server and you'll see that as soon as I cd into the directory, its permissions are changed:
# Here we've got 555 for /sys/kernel/debug/tracing
root@vps690079:/home/admin# cd /sys/kernel/debug
root@vps690079:/sys/kernel/debug# ll
total 0
drwx------ 30 root root 0 Jul 19 13:13 ./
drwxr-xr-x 13 root root 0 Jul 19 13:13 ../
…
dr-xr-xr-x 3 root root 0 Jul 19 13:13 tracing/
drwxr-xr-x 6 root root 0 Jul 19 13:13 usb/
drwxr-xr-x 2 root root 0 Jul 19 13:13 virtio-ports/
-r--r--r-- 1 root root 0 Jul 19 13:13 wakeup_sources
drwxr-xr-x 2 root root 0 Jul 19 13:13 x86/
drwxr-xr-x 2 root root 0 Jul 19 13:13 zswap/
# I cd into the folder, and it (./) becomes 700!!
root@vps690079:/sys/kernel/debug# cd tracing/
root@vps690079:/sys/kernel/debug/tracing# ll
total 0
drwx------ 8 root root 0 Jul 19 13:13 ./
drwx------ 30 root root 0 Jul 19 13:13 ../
-r--r--r-- 1 root root 0 Jul 19 13:13 available_events
-r--r--r-- 1 root root 0 Jul 19 13:13 available_filter_functions
-r--r--r-- 1 root root 0 Jul 19 13:13 available_tracers
…
# Next commands are just a dumb test to double-check what I'm seeing
root@vps690079:/sys/kernel/debug/tracing# cd ..
root@vps690079:/sys/kernel/debug# ll
total 0
drwx------ 30 root root 0 Jul 19 13:13 ./
drwxr-xr-x 13 root root 0 Sep 27 10:57 ../
…
drwx------ 8 root root 0 Jul 19 13:13 tracing/
drwxr-xr-x 6 root root 0 Jul 19 13:13 usb/
drwxr-xr-x 2 root root 0 Jul 19 13:13 virtio-ports/
-r--r--r-- 1 root root 0 Jul 19 13:13 wakeup_sources
drwxr-xr-x 2 root root 0 Jul 19 13:13 x86/
drwxr-xr-x 2 root root 0 Jul 19 13:13 zswap/
Have you got any idea what could causes this behavior?
Side note, using chmod to re-etablish permissions does not seems to fix the probe.
|
/sys
/sys is sysfs, an entirely virtual view into kernel structures in memory that reflects the current system kernel and hardware configuration, and does not consume any real disk space. New files and directories cannot be written to it in the normal fashion.
Applying disk space monitoring to it does not produce useful information and is a waste of effort. It may have mount points for other RAM-based virtual filesystems inside, including...
/sys/kernel/debug
/sys/kernel/debug is the standard mount point for debugfs, which is an optional virtual filesystem for various kernel debugging and tracing features.
Because it's for debugging features, it is supposed to be unnecessary for production use (although you might choose to use some of the features for enhanced system statistics or similar).
Since using the features offered by debugfs will in most cases require being root anyway, and its primary purpose is to be an easy way for kernel developers to provide debug information, it may be a bit "rough around the edges".
When the kernel was loaded, the initialization routine for the kernel tracing subsystem registered /sys/kernel/debug/tracing as a debugfs access point for itself, deferring any further initialization until it's actually accessed for the first time (minimizing the resource usage of the tracing subsystem in case it turns out it's not needed). When you cd'd into the directory, this deferred initialization was triggered and the tracing subsystem readied itself for use. In effect, the original /sys/kernel/debug/tracing was initially a mirage with no substance, and it only became "real" when (and because) you accessed it with your cd command.
debugfs does not use any real disk space at all: all the information contained within it will vanish when the kernel is shut down.
/sys/fs/cgroup
/sys/fs/cgroup is a tmpfs-type RAM-based filesystem, used to group various running processes into control groups. It does not use real disk space at all. But if this filesystem is getting nearly full for some reason, it might be more serious than just running out of disk space: it might mean that
a) you're running out of free RAM,
b) some root-owned process is writing garbage to /sys/fs/cgroup, or
c) something is causing a truly absurd number of control groups to be created, possibly in the style of a classic "fork bomb" but with systemd-based services or similar.
Bottom line
A disk usage probe should have /sys excluded because nothing under /sys is stored on any disk whatsoever.
If you need to monitor /sys/fs/cgroup, you should provide a dedicated probe for it that will provide more meaningful alerts than a generic disk space probe.
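The exclusion can conveniently be done by filesystem type; a sketch with GNU df (Nagios' check_disk plugin offers an analogous exclude-type option):

```shell
# Report usage only for real, disk-backed filesystems by excluding
# the RAM-based pseudo-filesystem types
df -h -x sysfs -x debugfs -x tmpfs -x devtmpfs
```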
| "cd" into /sys/kernel/debug/tracing causes permission change |
1,521,302,084,000 |
I want to copy a file from A to B, which may be on different filesystems.
There are some additional requirements:
The copy is all or nothing, no partial or corrupt file B left in place on crash;
Do not overwrite an existing file B;
Do not compete with a concurrent execution of the same command; at most one can succeed.
I think this gets close:
cp A B.part && \
ln B B.part && \
rm B.part
But 3. is violated by the cp not failing if B.part exists (even with -n flag). Subsequently 1. could fail if the other process 'wins' the cp and the file linked into place is incomplete. B.part could also be an unrelated file, but I'm happy to fail without trying other hidden names in that case.
I think bash noclobber helps, does this work fully? Is there a way to get without the bash version requirement?
#!/usr/bin/env bash
set -o noclobber
cat A > B.part && \
ln B.part B && \
rm B.part
Followup, I know some file systems will fail at this anyway (NFS). Is there a way to detect such filesystems?
Some other related but not quite the same questions:
Approximating atomic move across file systems?
Is mv atomic on my fs?
is there a way to atomically move file and directory from tempfs to ext4 partition on eMMC
https://rcrowley.org/2010/01/06/things-unix-can-do-atomically.html
|
rsync does this job. A temporary file is O_EXCL created by default (only disabled if you use --inplace) and then renamed over the target file. Use --ignore-existing to not overwrite B if it exists.
In practice, I never experienced any problems with this on ext4, zfs or even NFS mounts.
| How to copy a file transactionally? |
1,521,302,084,000 |
I am trying to combine a few programs like so (please ignore any extra includes, this is heavy work-in-progress):
pv -q -l -L 1 < input.csv | ./repeat <(nc "host" 1234)
Where the source of the repeat program looks as follows:
#include <errno.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/epoll.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>
#include <iostream>
#include <string>
inline std::string readline(int fd, const size_t len, const char delim = '\n')
{
std::string result;
char c = 0;
for(size_t i=0; i < len; i++)
{
const int read_result = read(fd, &c, sizeof(c));
if(read_result != sizeof(c))
break;
else
{
result += c;
if(c == delim)
break;
}
}
return result;
}
int main(int argc, char ** argv)
{
constexpr int max_events = 10;
const int fd_stdin = fileno(stdin);
if (fd_stdin < 0)
{
std::cerr << "#Failed to setup standard input" << std::endl;
return -1;
}
/* General poll setup */
int epoll_fd = epoll_create1(0);
if(epoll_fd == -1) perror("epoll_create1: ");
{
struct epoll_event event;
event.events = EPOLLIN;
event.data.fd = fd_stdin;
const int result = epoll_ctl(epoll_fd, EPOLL_CTL_ADD, fd_stdin, &event);
if(result == -1) std::cerr << "epoll_ctl add for fd " << fd_stdin << " failed: " << strerror(errno) << std::endl;
}
if (argc > 1)
{
for (int i = 1; i < argc; i++)
{
const char * filename = argv[i];
const int fd = open(filename, O_RDONLY);
if (fd < 0)
std::cerr << "#Error opening file " << filename << ": error #" << errno << ": " << strerror(errno) << std::endl;
else
{
struct epoll_event event;
event.events = EPOLLIN;
event.data.fd = fd;
const int result = epoll_ctl(epoll_fd, EPOLL_CTL_ADD, fd, &event);
if(result == -1) std::cerr << "epoll_ctl add for fd " << fd << "(" << filename << ") failed: " << strerror(errno) << std::endl;
else std::cerr << "Added fd " << fd << " (" << filename << ") to epoll!" << std::endl;
}
}
}
struct epoll_event events[max_events];
while(int event_count = epoll_wait(epoll_fd, events, max_events, -1))
{
for (int i = 0; i < event_count; i++)
{
const std::string line = readline(events[i].data.fd, 512);
if(line.length() > 0)
std::cout << line << std::endl;
}
}
return 0;
}
I noticed this:
When I just use the pipe to ./repeat, everything works as intended.
When I just use the process substitution, everything works as intended.
When I encapsulate pv using process substitution, everything works as intended.
However, when I use the specific construction, I appear to lose data (individual characters) from stdin!
I have tried the following:
I have tried to disable buffering on the pipe between pv and ./repeat using stdbuf -i0 -o0 -e0 on all processes, but that doesn't seem to work.
I have swapped epoll for poll, doesn't work.
When I look at the stream between pv and ./repeat with tee stream.csv, this looks correct.
I used strace to see what was going on, and I see lots of single-byte reads (as expected) and they also show that data is going missing.
I wonder what is going on? Or what I can do to investigate further?
|
Because the nc command inside <(...) will also read from stdin.
Simpler example:
$ nc -l 9999 >/tmp/foo &
[1] 5659
$ echo text | cat <(nc -N localhost 9999) -
[1]+ Done nc -l 9999 > /tmp/foo
Where did the text go? Through the netcat.
$ cat /tmp/foo
text
Your program and nc compete for the same stdin, and nc gets some of it.
| Why do I seem to lose data using this bash pipe construction? |
1,521,302,084,000 |
I have a little problem. I have a brand new Red Hat Linux Server and I installed Docker CE for CentOS/Red Hat with the official repositories for Docker CE. Now I see that docker creates the container under /var/lib/docker, but my problem is that I use an extra partition for my data under /data/docker. How can I change the default root directory for Docker in CentOS/Red Hat?
I tried a few HOWTOs but I get the same problem. I can’t find the configuration. For example, I search for the following files:
/etc/default/docker (I think only for Debian/Ubuntu)
/etc/systemd/system/docker.service.d/override.conf (I can't find on my system)
/etc/docker/daemon.json (I can't find on my system)
If I get the docker info I see:
Docker Root Dir: /var/lib/docker
|
Stop all running Docker containers and then the Docker daemon. Move the "/var/lib/docker" directory to the place where you want to have this data.
For you it would be:
mv /var/lib/docker /data/
and then create symlink for this docker directory in /var/lib path:
ln -s /data/docker /var/lib/docker
Start docker daemon and containers.
| Change Docker Root Dir on Red Hat Linux? |
1,521,302,084,000 |
It seems that my server keeps restarting. I want to know why.
How can I know when the last time server was rebooted and why?
root pts/0 139.193.156.125 Thu Aug 8 21:10 still logged in
reboot system boot 2.6.32-358.11.1. Thu Aug 8 20:38 - 21:11 (00:33)
reboot system boot 2.6.32-358.11.1. Thu Aug 8 20:15 - 21:11 (00:56)
reboot system boot 2.6.32-358.11.1. Thu Aug 8 19:16 - 21:11 (01:55)
reboot system boot 2.6.32-358.11.1. Thu Aug 8 18:56 - 21:11 (02:14)
reboot system boot 2.6.32-358.11.1. Thu Aug 8 18:24 - 21:11 (02:47)
root pts/1 139.193.156.125 Thu Aug 8 18:16 - crash (00:07)
root pts/0 195.254.135.181 Thu Aug 8 18:10 - crash (00:13)
reboot system boot 2.6.32-358.11.1. Thu Aug 8 17:52 - 21:11 (03:19)
root pts/0 195.254.135.181 Thu Aug 8 17:38 - crash (00:13)
reboot system boot 2.6.32-358.11.1. Thu Aug 8 17:08 - 21:11 (04:02)
reboot system boot 2.6.32-358.11.1. Thu Aug 8 16:58 - 21:11 (04:12)
reboot system boot 2.6.32-358.11.1. Thu Aug 8 16:45 - 21:11 (04:26)
reboot system boot 2.6.32-358.11.1. Thu Aug 8 16:35 - 21:11 (04:36)
reboot system boot 2.6.32-358.11.1. Thu Aug 8 16:27 - 21:11 (04:44)
reboot system boot 2.6.32-358.11.1. Thu Aug 8 15:59 - 21:11 (05:12)
reboot system boot 2.6.32-358.11.1. Thu Aug 8 06:15 - 21:11 (14:56)
root pts/1 208.74.121.102 Wed Aug 7 06:03 - 06:04 (00:00)
root pts/1 208.74.121.102 Tue Aug 6 15:34 - 17:40 (02:05)
root pts/0 139.193.156.125 Tue Aug 6 11:28 - 04:40 (1+17:11)
In Linux is there ANY WAY to know why the system rebooted? Specifically did high load cause it? If not that then What?
|
/var/log/messages
That is the main log file you should check for messages related to this. Additionally either /var/log/syslog (Ubuntu) or /var/log/secure (CentOS)
To find out when your server was last rebooted, just type uptime to see how long it has been up.
| How to know why server keeps restarting? |
1,521,302,084,000 |
I am the owner of a NAS, running some Linux distribution. It comes with a web administration frontend, where I can manage several services, user rights and also when it should go to sleep. My problem is, for some reason, when the NAS has gone to sleep, the hard drive turns on again after a couple of minutes. Then it will spin for some time, then sleep again. This keeps going on indefinitely.
How can I try to determine the cause for this? I am very new to Linux, but I managed to get root access, and now have an SSH connection.
|
inotify-tools is a simple way of doing this. There are several examples on their site that would be able to do what you want (see the inotifywatch example for a really basic one).
| Finding out what is spinning up harddrive |
1,521,302,084,000 |
What is the use of option -o for command useradd? What is a good use case of this option?
|
useradd’s -o option, along with its -u option, allows you to create a user with a non-unique user id. One use case for that is to create users with identical privileges (since they share the same user id) but different passwords, and if appropriate, home directories and shells. This can be useful for service accounts (although typically you’d achieve the same result using sudo nowadays); it can also be useful for rescue purposes with a root-equivalent account using a statically-linked shell such as sash.
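A sketch of the rescue-account use case described above (run as root; the name toor and the shell choice are illustrative, and the passwd step is left commented out because it is interactive):

```shell
# Create a second uid-0 account "toor" sharing root's privileges
if [ "$(id -u)" -eq 0 ] && command -v useradd >/dev/null 2>&1; then
    useradd -o -u 0 -g 0 -M -d /root -s /bin/sh toor
    # passwd toor    # then set a password distinct from root's
    id toor          # both "root" and "toor" resolve to uid 0
else
    echo "needs root and the useradd tool"
fi
```

Without -o, the second useradd call would be rejected because uid 0 is already taken.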
| What is the use of option -o in the useradd command? |
1,521,302,084,000 |
I'm trying to add Qemu to my continuous integration pipeline to test various initrd artifacts. I've already discovered that I can run Qemu like this:
qemu-system-x86_64 \
-machine q35 \
-drive if=pflash,format=raw,file=OVMF_CODE.fd,readonly \
-drive if=pflash,format=raw,file=OVMF_VARS.fd \
-kernel vmlinuz-4.4.0-121-generic \
-initrd my-initramfs.cpio.xz \
-nographic
...and cause qemu-system-x86_64 to exit with status 0 if I do this in my init script:
# poweroff -f
This works because the init script doesn't exit -- it invokes poweroff -f and sleeps "forever", or until Qemu does a "Power Down":
ACPI: Preparing to enter system sleep state S5
reboot: Power down
I would like to be able to detect problems in the init script by forcing an exit on error via set -eu. Exiting the init script (correctly) causes a kernel panic but the qemu-system-x86_64 process hangs forever.
How can I keep it from hanging forever? How do I get the Qemu host to detect a kernel panic in the Qemu guest?
Further clarification:
The nature of my application is security-sensitive; i.e., configuring/compiling the linux kernel is "allowed", but passing kernel parameters is not. To put a fine point on it, CMDLINE_OVERRIDE is enabled.
|
I've got something that's working:
Configure (and build) the kernel with CONFIG_PVPANIC=y; this produces a kernel with compiled-in support for the pvpanic device.
Invoke qemu-system-x86_64 with the -device pvpanic option; this instructs Qemu to catch (and exit on) a kernel panic.
A kernel panic causes qemu-system-x86_64 to exit successfully (return status 0), but at least it's not hanging anymore.
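Putting the two steps together, the invocation from the question just gains the extra device (a sketch; it assumes the same OVMF, kernel, and initramfs files as above):

```shell
qemu-system-x86_64 \
    -machine q35 \
    -drive if=pflash,format=raw,file=OVMF_CODE.fd,readonly \
    -drive if=pflash,format=raw,file=OVMF_VARS.fd \
    -kernel vmlinuz-4.4.0-121-generic \
    -initrd my-initramfs.cpio.xz \
    -device pvpanic \
    -nographic
```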
Many thanks to @dsstorefile1 for pointing me in the right direction.
References:
https://cateee.net/lkddb/web-lkddb/PVPANIC.html
https://github.com/qemu/qemu/blob/master/docs/specs/pvpanic.txt
| Can I make Qemu exit with failure on kernel panic? |
1,521,302,084,000 |
I am trying to create a random 1G test file via the dd command.
dd status=progress if=/dev/zero of=/tmp/testfile.zer bs=100M count=10
dd status=progress if=/dev/urandom of=/tmp/testfile1.ran bs=100M count=10
dd status=progress if=/dev/urandom of=/tmp/testfile2.ran bs=100M count=20
The output is:
-rw-rw-r-- 1 dorinand dorinand 320M dub 21 12:37 testfile1.ran
-rw-rw-r-- 1 dorinand dorinand 640M dub 21 12:37 testfile2.ran
-rw-rw-r-- 1 dorinand dorinand 1000M dub 21 12:37 testfile.zer
Why is the output testfile generated from /dev/urandom three times smaller? I would expect the size of testfile1.ran to be 1000M and the size of testfile2.ran to be 2000M. Could anybody explain why this is happening? How should I generate a random test file?
|
With larger blocksize, there is a risk of getting incomplete reads. This also happens a lot when reading from a pipe, rather than a block device.
If you expect to receive a certain size (count*bs) you also have to supply iflag=fullblock.
It might not be necessary for bs=1M or smaller, but it's still recommended either way.
dd will also try to show you how many incomplete reads it got. It copies n+m blocks, n complete and m incomplete ones. When copying files that are not multiple of blocksize, it's normal for the last block to be incomplete.
Example:
$ dd status=progress if=/dev/urandom of=/dev/null bs=100M count=20
dd: warning: partial read (33554431 bytes); suggest iflag=fullblock
536870896 bytes (537 MB, 512 MiB) copied, 2 s, 254 MB/s
0+20 records in
0+20 records out
671088620 bytes (671 MB, 640 MiB) copied, 2.64391 s, 254 MB/s
In this case it got only incomplete reads and not a single full 100M block. Obviously /dev/urandom is unwilling to serve that much data in a single read. My version of dd even tells you to use iflag=fullblock directly.
With fullblock everything is OK:
$ dd status=progress if=/dev/urandom of=/dev/null bs=100M count=20 iflag=fullblock
2097152000 bytes (2.1 GB, 2.0 GiB) copied, 8 s, 255 MB/s
20+0 records in
20+0 records out
2097152000 bytes (2.1 GB, 2.0 GiB) copied, 8.22914 s, 255 MB/s
It takes longer because it actually copies more than twice the amount of data.
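As an aside (not part of the answer above), if the goal is simply a file of an exact size, GNU head reads until it has the requested byte count, so partial reads cannot shrink the file:

```shell
# head -c keeps reading until it has exactly the requested byte count
head -c 104857600 /dev/urandom > /tmp/testfile.ran   # 100 MiB
wc -c < /tmp/testfile.ran                            # prints 104857600
```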
| Different size of file from /dev/zero and /dev/urandom [duplicate] |
1,521,302,084,000 |
I have a set of files that contain some information.
I am interested in the subset of files that do not contain a specific pattern at all.
E.g.
cat file.txt
foo
bar
trivial information
some customer data
let's say that I am interested in files that do not have the line 'trivial information'.
How would I do that?
If I do:
grep -v 'trivial information'
it will not work, because the rest of the lines in the file match this inverted search, so file.txt will end up in the result.
So how do I do an invert match on the whole file and not line by line?
|
You are looking for the -L flag:
grep -L 'trivial information' *
From man grep:
-L, --files-without-match
Suppress normal output; instead print the name of each input file from which no output would normally have been printed. The scanning will stop on the first match.
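A quick demonstration (the file names are made up):

```shell
cd "$(mktemp -d)"
printf 'foo\ntrivial information\n' > has.txt
printf 'foo\nbar\n'                 > clean.txt

grep -L 'trivial information' *.txt   # prints: clean.txt
```

Only the file that never contains the pattern is listed, which is exactly the inverse-match-per-file behaviour the question asks for.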
| grep inverted match on the file and not on a line by line match [duplicate] |
1,521,302,084,000 |
I have a custom kernel module that I compiled from this patch that adds support for the logitech G19 keyboard among other G series devices. I compiled it just fine against Ubuntu's maverick kernel's master branch (2.6.35).
I can boot and load the module, but I'm running into a really strange situation. As soon as I load the module (either on boot or through modprobe), I get a black screen and my console locks up.
The weird part is that it doesn't lock my system up, it's just the current console session. I can SSH into my box, and it gives me a terminal and a session. And I can type, and I can even run a command and it gives me the output. It then draws my next prompt and immediately locks up.
I see in dmesg that there's a null pointer, and I get the following stacktrace:
[ 956.215836] input: Logitech G19 Gaming Keyboard as /devices/pci0000:00/0000:00:1d.7/usb1/1-2/1-2.1/1-2.1.2/1-2.1.2:1.1/input/input5
[ 956.216023] hid-g19 0003:046D:C229.0004: input,hiddev97,hidraw3: USB HID v1.11 Keypad [Logitech G19 Gaming Keyboard] on usb-0000:00:1d.7-2.1.2/input1
[ 956.216065] input: Logitech G19 as /devices/pci0000:00/0000:00:1d.7/usb1/1-2/1-2.1/1-2.1.2/1-2.1.2:1.1/input/input6
[ 956.216128] Registered led device: g19_97:orange:m1
[ 956.216146] Registered led device: g19_97:orange:m2
[ 956.216178] Registered led device: g19_97:orange:m3
[ 956.216198] Registered led device: g19_97:red:mr
[ 956.216216] Registered led device: g19_97:red:bl
[ 956.216235] Registered led device: g19_97:green:bl
[ 956.216259] Registered led device: g19_97:blue:bl
[ 956.216872] Console: switching to colour frame buffer device 40x30
[ 956.216899] BUG: unable to handle kernel NULL pointer dereference at 000000000000001c
[ 956.216903] IP: [<ffffffffa040b21b>] sys_imageblit+0x21b/0x4ec [sysimgblt]
[ 956.216911] PGD 273554067 PUD 2726ca067 PMD 0
[ 956.216914] Oops: 0000 [#1] SMP
[ 956.216917] last sysfs file: /sys/devices/pci0000:00/0000:00:1d.7/usb1/1-2/1-2.1/1-2.1.2/1-2.1.2:1.1/usb/hiddev1/uevent
[ 956.216921] CPU 5
[ 956.216922] Modules linked in: hid_g19(+) led_class hid_gfb fb_sys_fops sysimgblt sysfillrect syscopyarea btrfs zlib_deflate crc32c libcrc32c ufs qnx4 hfsplus hfs minix ntfs vfat msdos fat jfs xfs exportfs reiserfs snd_hda_codec_atihdmi snd_hda_intel snd_hda_codec snd_hwdep snd_pcm snd_seq_midi snd_rawmidi snd_seq_midi_event snd_seq snd_timer snd_seq_device ioatdma snd i5000_edac soundcore snd_page_alloc psmouse edac_core i5k_amb shpchp serio_raw dca ppdev parport_pc lp parport usbhid hid floppy e1000e
[ 956.216953]
[ 956.216956] Pid: 3147, comm: modprobe Not tainted 2.6.35-26-generic #46 DSBF-DE/System Product Name
[ 956.216959] RIP: 0010:[<ffffffffa040b21b>] [<ffffffffa040b21b>] sys_imageblit+0x21b/0x4ec [sysimgblt]
[ 956.216963] RSP: 0018:ffff8802766db738 EFLAGS: 00010246
[ 956.216965] RAX: 0000000000000000 RBX: ffff880273e71000 RCX: ffff880272e93b40
[ 956.216968] RDX: 0000000000000007 RSI: 0000000000000010 RDI: ffff880272e93b40
[ 956.216970] RBP: ffff8802766db7d8 R08: 0000000000000000 R09: ffff880272e93b98
[ 956.216972] R10: 0000000000000000 R11: 0000000000000001 R12: 0000000000000000
[ 956.216974] R13: 0000000000000010 R14: 0000000000000008 R15: ffff8802766db8c8
[ 956.216977] FS: 00007fcae7725700(0000) GS:ffff880001f40000(0000) knlGS:0000000000000000
[ 956.216979] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[ 956.216981] CR2: 000000000000001c CR3: 000000026ba26000 CR4: 00000000000006e0
[ 956.216983] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 956.216986] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[ 956.216988] Process modprobe (pid: 3147, threadinfo ffff8802766da000, task ffff8802696a16e0)
[ 956.216990] Stack:
[ 956.216991] ffff8802766db778 ffffffff810746ae ffff8802766db700 ffff88026b2cadc0
[ 956.216994] <0> ffff8802766db778 ffffffff812beef9 ffff8802f66db947 ffff8802766db94f
[ 956.216998] <0> ffff8802766db848 00000000812bf22e ffff880272e93b40 ffffffff812feb40
[ 956.217001] Call Trace:
[ 956.217011] [<ffffffff810746ae>] ? send_signal+0x3e/0x90
[ 956.217018] [<ffffffff812beef9>] ? put_dec+0x59/0x60
[ 956.217023] [<ffffffff812feb40>] ? fbcon_resize+0xd0/0x230
[ 956.217027] [<ffffffffa04175da>] gfb_fb_imageblit+0x1a/0x30 [hid_gfb]
[ 956.217031] [<ffffffff813051b9>] soft_cursor+0x1c9/0x270
[ 956.217034] [<ffffffff81304e8b>] bit_cursor+0x65b/0x6c0
[ 956.217037] [<ffffffff812c1796>] ? vsnprintf+0x316/0x5a0
[ 956.217043] [<ffffffff81061045>] ? try_acquire_console_sem+0x15/0x60
[ 956.217046] [<ffffffff81300ca8>] fbcon_cursor+0x1d8/0x340
[ 956.217049] [<ffffffff81304830>] ? bit_cursor+0x0/0x6c0
[ 956.217054] [<ffffffff81368139>] hide_cursor+0x29/0x90
[ 956.217057] [<ffffffff8136b078>] redraw_screen+0x148/0x240
[ 956.217060] [<ffffffff8136b42e>] bind_con_driver+0x2be/0x3b0
[ 956.217063] [<ffffffff8136b569>] take_over_console+0x49/0x70
[ 956.217066] [<ffffffff812ff7fb>] fbcon_takeover+0x5b/0xb0
[ 956.217069] [<ffffffff81303ca5>] fbcon_event_notify+0x5c5/0x650
[ 956.217076] [<ffffffff8158e7f6>] notifier_call_chain+0x56/0x80
[ 956.217080] [<ffffffff8108510a>] __blocking_notifier_call_chain+0x5a/0x80
[ 956.217084] [<ffffffff81085146>] blocking_notifier_call_chain+0x16/0x20
[ 956.217089] [<ffffffff812f366b>] fb_notifier_call_chain+0x1b/0x20
[ 956.217092] [<ffffffff812f4c8c>] register_framebuffer+0x1ec/0x2e0
[ 956.217098] [<ffffffff814084f8>] ? usb_init_urb+0x28/0x40
[ 956.217101] [<ffffffffa041790f>] gfb_probe+0x21f/0x4f0 [hid_gfb]
[ 956.217107] [<ffffffffa0425778>] g19_probe+0x558/0xedc [hid_g19]
[ 956.217115] [<ffffffff811c059c>] ? sysfs_do_create_link+0xec/0x210
[ 956.217128] [<ffffffffa00330c7>] hid_device_probe+0x77/0xf0 [hid]
[ 956.217131] [<ffffffff81388aa2>] ? driver_sysfs_add+0x62/0x90
[ 956.217134] [<ffffffff81388bc8>] really_probe+0x68/0x190
[ 956.217138] [<ffffffff81388d35>] driver_probe_device+0x45/0x70
[ 956.217140] [<ffffffff81388dfb>] __driver_attach+0x9b/0xa0
[ 956.217143] [<ffffffff81388d60>] ? __driver_attach+0x0/0xa0
[ 956.217146] [<ffffffff81388008>] bus_for_each_dev+0x68/0x90
[ 956.217149] [<ffffffff81388a3e>] driver_attach+0x1e/0x20
[ 956.217151] [<ffffffff813882fe>] bus_add_driver+0xde/0x280
[ 956.217154] [<ffffffff81389140>] driver_register+0x80/0x150
[ 956.217157] [<ffffffff8158e7f6>] ? notifier_call_chain+0x56/0x80
[ 956.217161] [<ffffffffa042a000>] ? g19_init+0x0/0x20 [hid_g19]
[ 956.217166] [<ffffffffa0032913>] __hid_register_driver+0x53/0x90 [hid]
[ 956.217169] [<ffffffff81085115>] ? __blocking_notifier_call_chain+0x65/0x80
[ 956.217173] [<ffffffffa042a01e>] g19_init+0x1e/0x20 [hid_g19]
[ 956.217178] [<ffffffff8100204c>] do_one_initcall+0x3c/0x1a0
[ 956.217184] [<ffffffff8109bd9b>] sys_init_module+0xbb/0x200
[ 956.217192] [<ffffffff8100a0f2>] system_call_fastpath+0x16/0x1b
[ 956.217195] Code: 83 e1 fc 48 89 4d c8 eb d3 8b 83 14 01 00 00 83 f8 04 74 09 83 f8 02 0f 85 7b 01 00 00 48 8b 4d b0 48 8b 83 00 04 00 00 8b 51 10 <44> 8b 04 90 8b 51 14 8b 3c 90 44 8b 4d ac 45 85 c9 75 16 41 b9
[ 956.217218] RIP [<ffffffffa040b21b>] sys_imageblit+0x21b/0x4ec [sysimgblt]
[ 956.217221] RSP <ffff8802766db738>
[ 956.217223] CR2: 000000000000001c
[ 956.217227] ---[ end trace 95d6c6d6913ccc79 ]---
Can anyone point me in the right direction as to how to go about debugging this?
The stacktrace leads me to believe that it's not the hid-g15 driver but the hid-gfb driver, which creates a frame buffer for the LCD on the keyboard. This makes sense since it's locking up my display/console but digging into the kernel code isn't really going anywhere. So much of it is assembly and macro functions.
The last function on the stacktrace that involves my new code is gfb_fb_imageblit. The entirety of that function is
struct gfb_data *par = info->par;
sys_imageblit(info, image);
gfb_fb_update(par);
Am I reading the stacktrace wrong? Am I missing something? Any tips on how to debug this?
|
First things first: debug the module. See if you can load it up in gdb; it might point you straight at a line that uses the relevant variable (or close to it).
Oh, and you might find this article useful.
| How do I debug a kernel module in which a NULL pointer appears? |
1,558,028,264,000 |
I wrote the code:
// a.c
#include <stdlib.h>
int main () {
system("/bin/sh");
return 0;
}
compiled with command:
gcc a.c -o a.out
added setuid bit on it:
sudo chown root.root a.out
sudo chmod 4755 a.out
On Ubuntu 14.04, when I run as general user, I got root privilege.
but on Ubuntu 16.04, I still got current user's shell.
Why is it different?
|
What changed is that /bin/sh either became bash, or stayed dash, which gained an additional flag, -p, mimicking bash's behaviour.
Bash requires the -p flag to not drop setuid privilege as explained in its man page:
If the shell is started with the effective user (group) id not equal to the real user
(group) id, and the -p option is not supplied, no startup files are read, shell functions
are not inherited from the environment, the SHELLOPTS, BASHOPTS, CDPATH, and GLOBIGNORE
variables, if they appear in the environment, are ignored, and the effective user id is
set to the real user id. If the -p option is supplied at invocation, the startup behavior
is the same, but the effective user id is not reset.
Before, dash didn't care about this and allowed setuid execution (by doing nothing to prevent it). But Ubuntu 16.04's dash's manpage has an additional option described, similar to bash:
-p priv
Do not attempt to reset effective uid if it does not match uid. This is not set by default to help avoid incorrect usage by
setuid root programs via system(3) or popen(3).
This option didn't exist upstream (which may not have been receptive to a proposed patch*) nor in Debian 9, but it is present in Debian buster, which has carried the patch since 2018.
NOTE: as explained by Stéphane Chazelas, it's too late to invoke "/bin/sh -p" in system() because system() runs anything given through /bin/sh and so the setuid is already dropped. derobert's answer explains how to handle this, in the code before system().
* more details on history here and there.
| Why does the setuid bit work inconsistently? |
1,558,028,264,000 |
You cannot use pivot_root on an initramfs rootfs; you will get Invalid argument. You can only pivot real filesystems.
Indeed:
Fedora Linux 28 - this uses the dracut initramfs.
Boot into an initramfs shell, by adding rd.break as an option on the kernel command line.
cd /sysroot
usr/bin/pivot_root . mnt
-> pivot_root fails with "Invalid argument", corresponding to an errno value of EINVAL.
There is no explanation for this in man 2 pivot_root:
EINVAL put_old is not underneath new_root.
Why does it fail? And as the next commenter replied, "Then how would Linux exit early user space?"
|
Unlike the initrd, Linux does not allow the initramfs rootfs to be unmounted. Apparently this helped keep the kernel code simple.
Instead of pivot_root, you can use the switch_root command. It implements the following procedure. Notice that switch_root deletes all the files on the old root, to free the initramfs memory, so you need to be careful where you run this command.
initramfs is rootfs: you can neither pivot_root
rootfs, nor unmount it. Instead delete everything out of rootfs to
free up the space (find -xdev / -exec rm '{}' ';'), overmount rootfs
with the new root (cd /newmount; mount --move . /; chroot .), attach
stdin/stdout/stderr to the new /dev/console, and exec the new init.
Note the shell commands suggested are only rough equivalents to the C code. The commands won't really work unless they are all built in to your shell, because the first command deletes all the programs and other files from the initramfs :-).
Rootfs is a special instance of ramfs (or tmpfs, if that's enabled), which is always present in 2.6 systems. You can't unmount rootfs for approximately the same reason you can't kill the init process; rather than having special code to check for and handle an empty list, it's smaller and simpler for the kernel to just make sure certain lists can't become empty.
https://github.com/torvalds/linux/blob/v4.17/Documentation/filesystems/ramfs-rootfs-initramfs.txt
| pivot_root from initramfs to new root - error "Invalid argument" |
1,558,028,264,000 |
I'm trying to run the samba service on an Ubuntu server and it gives me errors saying it's masked and dead. How do I fix that? What causes it to be like this?
Here is the error i get:-
Failed to start samba.service: Unit samba.service is masked.
If I'm running the Ubuntu server in VirtualBox, would that be the issue? Thanks.
|
This is not a bug.
What you describe is intentional on the parts of the Debian people.
You are not supposed to control samba services this way on a Debian/Ubuntu systemd operating system. You are supposed to manipulate the smbd, nmbd, and samba-ad-dc services as needed.
There is no umbrella samba.target to do the original job of the old Debian/Ubuntu samba van Smoorenburg rc script; which was starting/stopping these three en bloc. Other people created them, for other systemd operating systems. The Debian people did not.
So where you read Debian/Ubuntu doco saying things like service samba action remember that that is not an available thing any more, now that Debian Linux and Ubuntu Linux are systemd operating systems.
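A sketch of the commands that replace the old service samba habit (unit names per the explanation above; requires a running systemd and root):

```shell
sudo systemctl enable --now smbd nmbd   # file server + NetBIOS name service
systemctl status smbd nmbd --no-pager
# for an Active Directory domain controller, manage samba-ad-dc instead
```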
Just to add to the confusion …
What the rest of the world outwith Debian knows as samba.service is called samba-ad-dc.service in the Debian world. Similarly, what Debian calls nmbd.service and smbd.service are nmb.service and smb.service outwith Debian.
So where you read generic systemd operating system doco about samba talking about nmb, smb, and samba services, you must mentally perform the translation for Debian/Ubuntu, particularly for the latter name.
Further reading
Liang Guo (2014-03-06). /etc/init.d/samba forbit systemd shutdown system. 740942. Debian bugs.
Mask /etc/init.d/samba init script for systemd. Ivo De Decker. 2014-10-24.
Ivo De Decker (2014-10-24). samba init script should not be started after upgrade. 766690. Debian bugs.
Ivo De Decker (2014-11-15). samba: unit samba.service is masked. 769714. Debian bugs.
Wulf C. Krueger (2010). samba.target
| Ubuntu Service samba is masked and can't start |
1,558,028,264,000 |
I have an Acer Aspire on which I installed Linux Mint 17.2. The touchpad does not work at all; xinput doesn't even list any touchpad unit at all. Probably a driver issue, is there some way to make it work?
|
The solution: add i8042.nopnp to the kernel command line. To do this:
sudoedit /etc/default/grub
and add:
GRUB_CMDLINE_LINUX="i8042.nopnp"
If there's already a line with GRUB_CMDLINE_LINUX=…, add i8042.nopnp inside the quotes, separated from any other word within the quotes by a space, e.g.
GRUB_CMDLINE_LINUX="some-other=option i8042.nopnp"
Then run
sudo update-grub
and reboot. Hope it works, it worked for me!
| Touchpad does not work on Acer Aspire |
1,558,028,264,000 |
Is there a command similar to netstat -np but grouped by state and PID?
I'd like to know the current count of server connections in a particular state grouped by Programs.
similar to,
102 squid ESTABLISHED
32 httpd ESTABLISHED
I use RHEL5.
|
You can use sort to reorganize the output of netstat in any format you like.
$ netstat -anpt 2>&1 | tail -n +5 | sort -k7,7 -k 6,6
This will sort the output using the 7th column first (the process name/PID) followed by the state (ESTABLISHED, LISTEN, etc.).
NOTE: The first part of the command, netstat -anpt 2>&1 | tail -n +5 .. will direct all the output that may occur on STDOUT to STDIN as well and then chop off the first 5 lines which are boilerplate output from netstat which we are uninterested in.
Example
$ netstat -anpt 2>&1 | tail -n +5 | sort -k7,7 -k 6,6
tcp 0 0 192.168.1.20:49309 192.168.1.103:631 ESTABLISHED 2077/gnome-settings
tcp 0 0 192.168.1.20:38393 204.62.14.135:443 ESTABLISHED 2260/mono
tcp 0 0 192.168.1.20:39738 74.125.192.125:5222 ESTABLISHED 2264/pidgin
tcp 0 0 192.168.1.20:40097 87.117.201.130:6667 ESTABLISHED 2264/pidgin
tcp 0 0 192.168.1.20:53920 217.168.150.38:6667 ESTABLISHED 2264/pidgin
...
tcp 1 0 192.168.1.20:50135 190.93.247.58:80 CLOSE_WAIT 24714/google-chrome
tcp 1 0 192.168.1.20:44420 192.168.1.103:631 CLOSE_WAIT 24714/google-chrome
tcp 0 0 192.168.1.20:36892 74.125.201.188:5228 ESTABLISHED 24714/google-chrome
tcp 0 0 192.168.1.20:43778 74.125.192.125:5222 ESTABLISHED 24714/google-chrome
tcp 0 0 192.168.1.20:33749 198.252.206.140:80 ESTABLISHED 24714/google-chrome
...
You can use a similar approach to get the counts using various tools such as wc or uniq -c.
Changing the output
If you'd really like to get the output of netstat looking like this:
102 squid ESTABLISHED
32 httpd ESTABLISHED
You can do some further slicing and dicing using awk & sed. This can be made more compact, but should get you started and does the job.
$ netstat -anpt 2>&1 | tail -n +5 | awk '{print $7,$6}' | sort -k1,1 -k3,3 \
| sed 's#/# #' | column -t
2264 pidgin ESTABLISHED
2264 pidgin ESTABLISHED
24714 google-chrome CLOSE_WAIT
24714 google-chrome CLOSE_WAIT
24714 google-chrome ESTABLISHED
24714 google-chrome ESTABLISHED
...
24714 google-chrome ESTABLISHED
26358 ssh ESTABLISHED
26358 ssh ESTABLISHED
26358 ssh ESTABLISHED
26358 ssh LISTEN
26358 ssh LISTEN
26358 ssh LISTEN
NOTE: column -t simply aligns all the output in nice columns.
Counting the connections
Finally to do what you want in terms of tallying the occurrences:
$ netstat -anpt 2>&1 | tail -n +5 | awk '{print $7,$6}' | sort -k1,1 -k3,3 \
| sed 's#/# #' | column -t | uniq -c
6 - LISTEN
8 - TIME_WAIT
1 2077 gnome-settings ESTABLISHED
1 2260 mono ESTABLISHED
10 2264 pidgin ESTABLISHED
2 24714 google-chrome CLOSE_WAIT
27 24714 google-chrome ESTABLISHED
3 26358 ssh ESTABLISHED
4 26358 ssh LISTEN
1 26359 ssh ESTABLISHED
4 3042 thunderbird ESTABLISHED
1 32472 monodevelop ESTABLISHED
2 32472 monodevelop LISTEN
1 32533 mono ESTABLISHED
1 32533 mono LISTEN
1 3284 monodevelop LISTEN
1 3365 mono LISTEN
1 4528 mono LISTEN
1 8416 dropbox ESTABLISHED
1 8416 dropbox LISTEN
The first column represents the counts.
| Command similar to netstat -np but grouped by state and PID? |
1,558,028,264,000 |
I'm trying to figure out how I can use AWK to subtract lines. For example, imagine the input file is:
30
20
The output would be:
10
Now, as a test I am trying to calculate the "Used" memory column from:
$ cat /proc/meminfo
So at the moment I have written this:
$ grep -P 'MemTotal|MemFree' /proc/meminfo | \
-- Here comes the calculation using AWK
I have tried the following:
$ grep -P 'MemTotal|MemFree' /proc/meminfo | \
awk '{print $2}' | awk '{$0-s}{s=$0} END {print s}'
But this just gives me the last row of data.
I've found a working solution, but I doubt it's the most optimal one. All my coding experience tells me that hard-coding the number of rows is terrible :P
$ grep -P 'MemTotal|MemFree' /proc/meminfo | \
awk '{print $2}' | awk 'NR == 1{s=$0} NR == 2 {s=s-$0} END {print s}'
|
You can also do this using awk, paste, and bc. I find this approach easier to remember; the syntax of awk always requires me to look things up to confirm.
NOTE: This approach has the advantage of being able to contend with multiple lines of output, subtracting the 2nd, 3rd, 4th, etc. numbers from the 1st.
$ grep -P 'MemTotal|MemFree' /proc/meminfo | \
awk '{print $2}' | paste -sd- - | bc
7513404
Details
The above uses awk to select the column that contains the numbers we want to subtract.
$ grep -P 'MemTotal|MemFree' /proc/meminfo | \
awk '{print $2}'
7969084
408432
We then use paste to combine these 2 values and add the minus sign in between them.
$ grep -P 'MemTotal|MemFree' /proc/meminfo | \
awk '{print $2}'| paste -sd- -
7969084-346660
When we pass this to bc it performs the calculation.
$ grep -P 'MemTotal|MemFree' /proc/meminfo | \
awk '{print $2}'| paste -sd- - | bc
7513404
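For completeness (not part of the accepted answer), the subtraction can also stay inside a single awk pass, with no hard-coded row numbers:

```shell
# -E matches here just as well as the -P used above;
# remember the first value, subtract every later one from it
grep -E 'MemTotal|MemFree' /proc/meminfo |
    awk 'NR == 1 {s = $2; next} {s -= $2} END {print s}'
```

This also generalises to any number of lines: the second, third, etc. values are all subtracted from the first.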
| How to subtract rows (lines) with AWK |
1,558,028,264,000 |
There are some tools inside the kernel source tree,
<kernel source root directory>/tools
perf is one of them.
In Ubuntu I think the tools inside this folder are available as the linux-tools package.
How can I compile it from source, install it, and run it?
|
what's wrong with the following?
make -C <kernel source root directory>/tools/perf
| How can I compile, install and run the tools inside kernel/tools? |
1,558,028,264,000 |
I have an AutoDome Junior HD IVA camera. How do I play its RTSP stream in my Linux distro? I tried VLC, but it fails. Is there any other reliable tool?
Follow up:
Try 1: fail
$ mplayer rtsp://192.168.1.10:554
MPlayer SVN-r33251-4.6.0 (C) 2000-2011 MPlayer Team
mplayer: could not connect to socket
mplayer: No such file or directory
Failed to open LIRC support. You will not be able to use your remote control.
Playing rtsp://192.168.1.10:554.
Connecting to server 192.168.1.10[192.168.1.10]: 554...
rtsp_session: unsupported RTSP server. Server type is 'unknown'.
STREAM_LIVE555, URL: rtsp://192.168.1.10:554
Stream not seekable!
file format detected.
Initiated "video/H264" RTP subsession on port 43230
demux_rtp: Failed to guess the video frame rate
VIDEO: [H264] 0x0 0bpp 0.000 fps 0.0 kbps ( 0.0 kbyte/s)
FPS not specified in the header or invalid, use the -fps option.
Failed to open VDPAU backend libvdpau_nvidia.so: cannot open shared object file: No such file or directory
[vdpau] Error when calling vdp_device_create_x11: 1
==========================================================================
Opening video decoder: [ffmpeg] FFmpeg's libavcodec codec family
Selected video codec: [ffh264] vfm: ffmpeg (FFmpeg H.264)
==========================================================================
Audio: no sound
Starting playback...
V: 0.0 0/ 0 ??% ??% ??,?% 0 0
Exiting... (End of file)
Try 2: fail
$ mplayer mms://192.168.1.10:554
MPlayer SVN-r33251-4.6.0 (C) 2000-2011 MPlayer Team
mplayer: could not connect to socket
mplayer: No such file or directory
Failed to open LIRC support. You will not be able to use your remote control.
Playing mms://192.168.1.10:554.
STREAM_ASF, URL: mms://192.168.1.10:554
Connecting to server 192.168.1.10[192.168.1.10]: 554...
Connected
read error:: Resource temporarily unavailable
pre-header read failed
Connecting to server 192.168.1.10[192.168.1.10]: 554...
unknown ASF streaming type
Failed, exiting.
Connecting to server 192.168.1.10[192.168.1.10]: 554...
Cache size set to 320 KBytes
Cache fill: 0.00% (0 bytes) nop_streaming_read error : Resource temporarily unavailable
Stream not seekable!
Cache fill: 0.00% (0 bytes) nop_streaming_read error : Resource temporarily unavailable
Cache fill: 0.00% (0 bytes)
Cache not filling, consider increasing -cache and/or -cache-min!
nop_streaming_read error : Resource temporarily unavailable
Cache not filling, consider increasing -cache and/or -cache-min!
nop_streaming_read error : Resource temporarily unavailable
Cache not filling, consider increasing -cache and/or -cache-min!
nop_streaming_read error : Resource temporarily unavailable
Cache not filling, consider increasing -cache and/or -cache-min!
nop_streaming_read error : Resource temporarily unavailable
Cache not filling, consider increasing -cache and/or -cache-min!
Invalid seek to negative position!
Exiting... (End of file)
|
Try mplayer; it's usually the audio and video player that supports the widest range of formats.
If you have a supposedly RTSP source which is actually an HTTP URL, first retrieve the contents of the URL; you'll get a file containing just another URL, this time rtsp:// (sometimes you get another HTTP URL that you need to follow too). Pass the rtsp:// URL to mplayer on its command line.
There are servers out there (and, for all I know, hardware devices too) that serve files containing a rtsp:// URL over HTTP, but then serve content in the MMS protocol¹. This is for compatibility with some older Microsoft players (my memory is hazy over the details), but it breaks clients that believe that RTSP is RTSP and MMS is MMS. If you obtain an rtsp:// URL that doesn't work at all, try replacing the scheme with mms://.
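The scheme swap itself is mechanical; for example, with shell parameter expansion (using the camera URL from the question):

```shell
url='rtsp://192.168.1.10:554'
mms_url="mms://${url#rtsp://}"
echo "$mms_url"        # mms://192.168.1.10:554
# mplayer "$mms_url"   # then hand the rewritten URL to mplayer
```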
¹ No relation with Multimedia Messaging Service a.k.a. video SMS.
| How to play RTSP stream under Linux for the BOSCH AutoDome Junior HD IVA camera? |
1,558,028,264,000 |
I am just wondering. KDE sends me a notification if there is 10% battery life left on my keyboard, which is wireless. But is there a way to get the whole battery status data?
|
Battery information is provided to desktop environments by UPower; this includes the battery information for some keyboards and mice. You can see what your computer knows about its batteries by running
upower --dump
For example, on my desktop with a wireless Logitech mouse, it shows (among other things)
Device: /org/freedesktop/UPower/devices/mouse_0003o046Do101Bx0006
native-path: /sys/devices/pci0000:00/0000:00:14.0/usb1/1-1/1-1:1.2/0003:046D:C52B.0003/0003:046D:101B.0006
vendor: Logitech, Inc.
model: M705
serial: XXXXXXXX
power supply: no
updated: Mon 27 Aug 2018 15:41:36 CEST (106 seconds ago)
has history: yes
has statistics: no
mouse
present: yes
rechargeable: yes
state: discharging
warning-level: none
percentage: 25%
icon-name: 'battery-low-symbolic'
On my laptop, it shows the laptop batteries, and the battery status of connected battery-powered devices.
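To inspect a single device rather than everything, upower can enumerate the object paths and then query one of them (the path below is the mouse from the dump above; yours will differ):

```shell
upower -e    # one object path per power device
upower -i /org/freedesktop/UPower/devices/mouse_0003o046Do101Bx0006
```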
| Is there a way to see the remaining battery life of your keyboard/mouse on Ubuntu/Linux? |
1,558,028,264,000 |
I received an assignment to install Fedora 24-1.2 in VirtualBox with these specifications, and I'm running into issues that make me question how realistic this is.
For one, I'm not even able to create 8 CPUs. VirtualBox doesn't even give me the option. The most I can do is 4.
Secondly, it won't allow me to create that much RAM with only a 10 GB hard drive.
I've double-checked the assignment about a dozen times now and yes, that is what my instructor wants. I'll figure it out if need be... I just want to confirm: are these settings plausible?
|
10GB HD, 8 CPUs, 4GB of RAM: those settings could well be plausible for a Linux VM; I have had VMs smaller than that, and far larger too (on server-grade hardware).
The needed resources depend wildly on what the server is supposed to do, however the requested resources are not probably plausible for the (host) machine/computer you are using.
You are sharing/"stealing" resources that the host is not using to give to the VM; you cannot steal what is not there. To give virtual CPUs, disk space and RAM to a VM in VirtualBox, you have to have the matching (free) physical resources in the host computer you are using.
If you have only 4GB or even 8GB of physical RAM, the OS also needs a good chunk to work properly, and won't allow you to use your full RAM for VirtualBox; the same comment applies roughly to your CPUs.
What I advise is creating a VM constrained to your hardware limits, and explaining in a report why you were not able/why it does not make sense creating the resources as asked. It will probably get you some brownie points showing you understood the challenge and still managed to create the VM.
| Is a VM with 10 GB hard drive, 8 CPUs, and 4 GB of RAM plausible?
1,558,028,264,000 |
I'm trying to reduce the size of a Linux image running SuSE, and thought about running strip on all of the system's executables. Even though I may not regain much disk space this way, would there be any harm in doing so?
|
It's not the case for Linux (just checked...), but on other systems (such as BSDs, e.g., OSX) doing this will remove any setuid/setgid permissions as a side-effect. Also (still looking at OSX), the ownership of the file may change (to the user doing the writing).
For Linux, I recall that early on, stripping a shared library would prevent linking to it. That is not a problem now, though as the Program Library HOWTO notes, it will make debuggers not useful. It prevents linking to static libraries.
Further reading:
24.14 Don't Use strip Carelessly (Unix Power Tools)
How do I strip local symbols from linux kernel module without breaking it?
What Linux and Solaris can learn from each other
| What harm would there be in running strip on all files? |
1,558,028,264,000 |
I'm pretty sure that all Red Hat and Debian based distributions follow the convention of shipping the kernel configuration in /boot/config-*, but what of other distributions? Or, if this convention is extremely common, which distributions don't follow it?
|
Debian and derivatives (Ubuntu, Linux Mint, …)
The configuration for the kernel /boot/vmlinuz-VERSION is stored in /boot/config-VERSION. The two files ship in the same package, linux-VERSION or kernel-VERSION.
Arch Linux, Gentoo (if enabled)
The configuration for the running kernel is stored in the kernel binary and can be retrieved with zcat /proc/config.gz.
This file exists when the CONFIG_IKCONFIG option is set when compiling the kernel - and so can be true (or not) regardless of distribution, though the default kernel configuration for the two named does enable it.
Incidentally, arch linux's default configuration does not name the kernel (or its initramfs image) by version even in /boot - the files there are named only for their corresponding packages. For example, a typical arch linux boot kernel is named /boot/vmlinuz-linux where linux is the package one installs for the default kernel.
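The lookup order can be scripted. A hedged sketch (the function name is mine, and the optional prefix argument exists only so the function can be exercised against a fake root tree):

```shell
# Locate the configuration for a given kernel version, preferring the
# in-kernel copy at /proc/config.gz, then falling back to /boot.
find_kernel_config() {
    ver=$1
    prefix=${2:-}                     # optional root prefix, for testing
    for candidate in "$prefix/proc/config.gz" "$prefix/boot/config-$ver"; do
        if [ -r "$candidate" ]; then
            printf '%s\n' "$candidate"
            return 0
        fi
    done
    return 1
}

# On a live system one would call: find_kernel_config "$(uname -r)"
# and read a config.gz result with zcat.
```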
| Where can I find the kernel configuration on each Linux distribution? |
1,558,028,264,000 |
I will be backing up a large (750GB) disk to an external USB disk using dd.
Should I be using redirection or piping? Which is more efficient? Or is there a difference?
Also, what is the best block size? USB is likely to be the bottleneck here.
dd if=/dev/sda bs=1M | gzip -c > /mnt/sdb1/backups/disk.img.gz
gzip -dc /mnt/sdb1/backups/disk.img.gz | dd of=/dev/sda bs=1M
vs
dd if=/dev/sda bs=1M | gzip -c | dd of=/mnt/sdb1/backups/disk.img.gz
dd if=/mnt/sdb1/backups/disk.img.gz | gzip -dc | dd of=/dev/sda bs=1M
Thanks.
|
You don't need to use dd or piping at all.
</dev/sda gzip >/mnt/sdb1/backups/disk.img.gz
</mnt/sdb1/backups/disk.img.gz gunzip >/dev/sda
I once made a benchmark and found using dd slower than cat for a straight copy between different disks. I would expect the pipe to make any solution involving dd even slower in this case.
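For what it's worth, the two styles produce equivalent results. Here is a small self-contained check (file names are mine, and a small test file stands in for the disk) showing that plain redirection and an extra dd stage decompress to identical data:

```shell
# Make a small test file standing in for the disk.
dd if=/dev/zero of=testfile bs=1024 count=64 2>/dev/null

gzip -c < testfile > redirected.gz                        # plain redirection
dd if=testfile bs=1024 2>/dev/null | gzip -c > piped.gz   # extra dd in the pipe

# Both archives decompress to the same bytes; the dd only adds overhead.
gzip -dc redirected.gz > out1
gzip -dc piped.gz > out2
cmp out1 out2 && echo identical
```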
| gzip - redirection or piping? |
1,558,028,264,000 |
When I run the command
./program
I get the error:
bash: ./program: cannot execute binary file: Exec format error
When I run uname -a I get:
4.4.0-21-generic #37-Ubuntu SMP Mon Apr 18 18:34:49 UTC 2016 i686 i686 i686 GNU/Linux
Also I checked the information about the program that I was trying to run and I got:
ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.18, BuildID[sha1]=c154cb3d21f6bbd505d165aed3aa6ed682729441, not stripped
/proc/cpuinfo shows
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx rdtscp lm constant_tsc arch_perfmon pebs bts xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm epb tpr_shadow vnmi flexpriority ept vpid xsaveopt dtherm ida arat pln pts
How can I run the program?
|
You have a 64-bit x86 CPU (indicated by the lm flag in /proc/cpuinfo), but you’re running a 32-bit kernel. The program you’re trying to run requires a 64-bit runtime, so it won’t work as-is: even on a 64-bit CPU, a 32-bit kernel can’t run 64-bit programs.
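You can read the mismatch straight off the ELF header: the fifth byte (the EI_CLASS field) is 1 for 32-bit and 2 for 64-bit binaries. A hedged sketch (the helper name is mine):

```shell
# Report whether an ELF file is 32-bit or 64-bit by reading its fifth
# byte (the EI_CLASS field of the ELF header).
elf_class() {
    case $(od -An -tu1 -j4 -N1 "$1" | tr -d ' ') in
        1) echo 32-bit ;;
        2) echo 64-bit ;;
        *) echo "not ELF or unknown class" ;;
    esac
}

# Compare against the kernel: `uname -m` prints x86_64 on a 64-bit
# kernel, i686 (or similar) on a 32-bit one.
```

`elf_class ./program` printing `64-bit` while `uname -m` says `i686` reproduces exactly the situation in the question.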
If you can find a 32-bit build of the program (or build it yourself), use that.
Alternatively, you can install a 64-bit kernel, reboot, and then install the 64-bit libraries required by your program.
To install a 64-bit kernel, run
sudo dpkg --add-architecture amd64
sudo apt-get update
sudo apt-get install linux-image-generic:amd64
This will install the latest 64-bit Xenial kernel, along with various supporting 64-bit packages. Once you reboot, you should find that uname -a shows x86_64 rather than i686. If you attempt to run your program again, it might just work, or you’ll get an error because of missing libraries; in the latter case, install the corresponding packages (use apt-file to find them) to get the program working.
| Getting the error: bash: ./program: cannot execute binary file: Exec format error |
1,558,028,264,000 |
I'm investigating a very strange effect on some Beagle Bone Black (BBB) boards. We're seeing occasional jumps of a few months in the system clock which always correlate with systemd-timesyncd updating the system clock. We see 2 to 3 of these a week across a fleet of 2000 devices in diverse locations.
We've spent a lot of time checking SNTP but that appears to be behaving normally.
We've finally come up with a hardware issue with the on-board real time clock that can cause it to randomly jump 131072 seconds (36 hours) due to electronic noise. This doesn't immediately sit right, the reported time jump is quite specific and much less than we've observed, however deeper reading on the issue suggests jumps may be more random and may even go backwards.
My question is... How does linux use a real time clock to maintain the system clock?
I want to know if an error with the real time clock would only present itself in the system clock when a timesync agent (ntpd or systemd-timesyncd) updates. Is there any direct link between the system clock and an RTC or is it only used by an agent?
Note: In the first paragraph I mentioned that we're seeing jumps of a few months in the system clock which always correlate with systemd-timesyncd updating the system clock. By this I mean that the very first syslog message after a time jump is a Time has been changed syslog message:
grep 'Time has been changed' /var/log/syslog
Oct 2 23:53:33 hostname systemd[1]: Time has been changed
Nov 21 00:07:05 hostname systemd[1]: Time has been changed
Nov 21 00:05:17 hostname systemd[1]: Time has been changed
Nov 21 00:03:29 hostname systemd[1]: Time has been changed
Nov 21 00:01:43 hostname systemd[1]: Time has been changed
Oct 3 02:07:20 hostname systemd[1]: Time has been changed
Oct 3 06:37:04 hostname systemd[1]: Time has been changed
To the best of my knowledge the only thing that emits these messages is systemd-timesyncd (see source code). Obviously if anyone else knows of other regular systemd syslog messages matching these, I'm open to suggestions.
|
Thanks very much to sourcejedi for this answer. This really led me to find the right answer.
Answer to the question
How does Linux use a real time clock to maintain the system clock?
It does so only once, during boot. It will not query the RTC again until the next reboot. This is configurable, but will do so by default on most kernel builds.
I want to know if an error with the real time clock would only present
itself in the system clock when a timesync agent (ntpd or
systemd-timesyncd) updates.
Unless the system is rebooted, the time in the RTC is unlikely to get into the system clock at all. Some agents like ntpd can be configured to use an RTC as a time source but this is not usually enabled by default. It's inadvisable to enable it unless you know the RTC is a very good time source.
Is there any direct link between the system clock and an RTC?
It appears the time is copied the other way: the RTC is periodically updated with the system time. As per sourcejedi's answer, this is done by the kernel when CONFIG_RTC_SYSTOHC (system-to-hardware-clock) is set; CONFIG_RTC_HCTOSYS governs the opposite, boot-time copy.
This can be tested:
Set the RTC
# hwclock --set --date='18:28'
Then check the RTC time every few minutes with:
# hwclock
The result will be that the system time does not change at all,
and the RTC will eventually revert to the system time.
The cause of time jumps on the BBB
As sourcejedi pointed out, the messages were not being triggered by systemd-timesyncd. They were being triggered by connman. The evidence was (should be) a spurious log message in /var/log/syslog:
Oct 3 00:10:37 hostname connmand[1040]: ntp: adjust (jump): -27302612.028018 sec
...
Nov 21 00:07:05 hostname systemd[1]: Time has been changed
Prior to version 1.37, connman is hard-coded to promiscuously poll the default gateway for the time. It does not need to be DHCP-configured to do this, and if connman's NTP client is enabled (it is by default) then it will do this regardless of any other configuration.
In our case some home routers were actually responding to these NTP requests, but the results were highly unreliable. Particularly where the router was rebooted, it continued to hand out the time without actually knowing the correct time.
For example we know that at least one version of the BT Home Hub 5 will, when rebooted, default to 21st November 2018 and give out this date over NTP. It's own NTP client will then correct the problem but there's a window where it hands out 21st November 2018.
That is, this issue was ultimately caused by our customers rebooting their router and connman just accepting this time.
I'll express my frustration here, it seems the belligerence of some has left this "feature" in connman for far too long. It was reported as a problem as early as 2015. And it's a really well hidden "feature". There are no timeservers configured and no log message to explain what connman is doing or documentation as to why. If your test rigs have no NTP server on the default gateway you'll never see this in testing.
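Across a fleet, affected devices can be found by scanning syslog for large connman step adjustments. A hedged sketch (the one-hour threshold and the number-matching heuristic are mine), demonstrated against log lines quoted earlier rather than a live syslog:

```shell
# Flag connman NTP step adjustments larger than one hour in a syslog stream.
scan_connman_jumps() {
    awk '/connmand.*ntp: adjust/ {
        for (i = 1; i <= NF; i++)
            if ($i ~ /^-?[0-9]+\.[0-9]+$/) {
                d = ($i < 0) ? -$i : $i
                if (d > 3600) { print; next }
            }
    }'
}

# Fed the two log lines quoted above, only the huge connman jump is flagged:
printf '%s\n' \
    'Oct  3 00:10:37 hostname connmand[1040]: ntp: adjust (jump): -27302612.028018 sec' \
    'Nov 21 00:07:05 hostname systemd[1]: Time has been changed' \
    | scan_connman_jumps
```

On a real device: `scan_connman_jumps < /var/log/syslog`.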
How to Fix
We are looking at two options which both appear to work:
Remove connman completely. It seems the network works just fine without it; we've not yet found the reason for it being there in the first place.
apt-get remove connman
Disable NTP in connman by editing /var/lib/connman/settings to include:
[global]
TimeUpdates=manual
| How does Linux use a real time clock? |
1,558,028,264,000 |
We use the following script:
more test.sh
#!/bin/bash
while read -r line
do
echo $line
done < /tmp/file
This is the file:
kafka-broker,log.retention.hours,12
kafka-broker,default.replication.factor,2
fefolp-defaults,fefolp.history.fs.cleaner.interval,1d
fefolp-defaults,fefolp.history.fs.cleaner.maxAge,2d
fefolp-env,fefolp_daemon_memory,10000
blo-site,blo.nodemanager.localizer.cache.target-size-mb,10240
blo-site,blo.nodemanager.localizer.cache.cleanup.interval-ms,300000
ams-env,metrics_collector_heapsize,512
fefolp,hbase_master_heapsize,1408
fefolp,hbase_regionserver_heapsize,512
fefolp,hbase_master_xmn_size,192
core-site,blolp.proxyuser.ambari.hosts,*
core-site,Hadoop.proxyuser.root.groups,*
core-site,Hadoop.proxyuser.root.hosts,*
blo-site,blo.scheduler.minimum-allocation-mb,1024
blolp-env,fefolp_heapsize,4096
Remark - after the last line - there are no space!
But the script prints only these lines (except the last line):
./test.sh
kafka-broker,log.retention.hours,12
kafka-broker,default.replication.factor,2
fefolp-defaults,fefolp.history.fs.cleaner.interval,1d
fefolp-defaults,fefolp.history.fs.cleaner.maxAge,2d
fefolp-env,fefolp_daemon_memory,10000
blo-site,blo.nodemanager.localizer.cache.target-size-mb,140
blo-site,blo.nodemanager.localizer.cache.cleanup.interval-ms,300
ams-env,metrics_collector_heapsize,51
fefolp,hbase_master_heapsize,1408
fefolp,hbase_regionserver_heapsize,542
fefolp,hbase_master_xmn_size,19
core-site,blolp.proxyuser.ambari.hosts,*
core-site,Hadoop.proxyuser.root.groups,*
core-site,Hadoop.proxyuser.root.hosts,*
blo-site,blo.scheduler.minimum-allocation-mb,1024
Why does this happen?
|
Your input text contains an incomplete line as its last line. The last line is not terminated by a newline.
while IFS= read -r line || [ -n "$line" ]; do
printf '%s\n' "$line"
done <file
The above loop will read unmodified lines¹ (without stripping whitespaces or interpreting backslashed control sequences) from the file called file and print them to standard output.
When an incomplete line is read, read will fail, but $line will still contain data. The extra -n test will detect this so that the loop body is allowed to output the incomplete line. In the iteration after that, read will fail again and $line will be an empty string, thus terminating the loop.
¹ assuming they don't contain NUL characters in shells other than zsh and assuming they don't contain sequences of bytes not forming part of valid characters in the yash shell, both of which shouldn't happen if the input is valid text, though that missing line delimiter on the last line already makes it invalid text.
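A quick self-contained demonstration of the difference:

```shell
# Create a file whose last line is missing its newline terminator.
printf 'one\ntwo' > nofinal.txt

plain=0
while IFS= read -r line; do plain=$((plain + 1)); done < nofinal.txt

fixed=0
while IFS= read -r line || [ -n "$line" ]; do fixed=$((fixed + 1)); done < nofinal.txt

echo "plain=$plain fixed=$fixed"   # plain=1 fixed=2: the plain loop misses "two"
```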
| Why does this 'while' loop not recognize the last line? |
1,558,028,264,000 |
I wrote a little bash script that made me stumble across the "Year 2038 Bug". I did not know about this problem before, so I'm posting the --debug output I got from date when my script tried to calculate across this magic date (03:14:07 UTC on 19 January 2038).
date -d "20380119" --debug
date: parsed number part: today/this/now
date: input timezone: +01:00 (set from system default)
date: warning: using midnight as starting time: 00:00:00
date: starting date/time: '(Y-M-D) 2038-01-19 00:00:00 TZ=+01:00'
date: '(Y-M-D) 2038-01-19 00:00:00 TZ=+01:00' = 2147468400 epoch-seconds
date: output timezone: +01:00 (set from system default)
date: final: 2147468400.000000000 (epoch-seconds)
date: final: (Y-M-D) 2038-01-18 23:00:00 (UTC0)
date: final: (Y-M-D) 2038-01-19 00:00:00 (output timezone TZ=+01:00)
Tue Jan 19 00:00:00 CET 2038
date -d "20380119 + 1 days" --debug
date: parsed hybrid part: +1 day(s)
date: input timezone: +01:00 (set from system default)
date: warning: using midnight as starting time: 00:00:00
date: starting date/time: '(Y-M-D) 2038-01-19 00:00:00 TZ=+01:00'
date: warning: when adding relative days, it is recommended to specify 12:00pm
date: error: adding relative date resulted in an invalid date: '(Y-M-D) 2038-01-20 00:00:00 TZ=+01:00'
date: invalid date '20380119 + 1 days'
date -d "20380120" --debug
date: parsed number part: today/this/now
date: input timezone: +01:00 (set from system default)
date: warning: using midnight as starting time: 00:00:00
date: error: invalid date/time value:
date: user provided time: '(Y-M-D) 2038-01-20 00:00:00 TZ=+01:00'
date: normalized time: '(Y-M-D) 2038-01-20 00:00:00 TZ=+01:00'
date:
date: possible reasons:
date: numeric values overflow;
date: missing timezone
date: invalid date '20380120'
Is there any way to make GNU date calculate across this date?
(on a LINUX 32 bit system)
Operating System: Debian GNU/Linux buster/sid
Kernel: Linux 4.12.0-2-686-pae
Architecture: x86
|
If you want to stick to GNU date on 32-bit Linux, there’s no easy way to get it to work with dates after 2038. The coreutils maintainers don’t consider this a coreutils bug, so don’t expect a fix there — the fix will have to come from the C library and the kernel. If you want to play around with the work in progress, you’ll need:
Arnd Bergmann’s kernel patches (most of which are merged or close to being merged, as of version 5.1 of the kernel),
Albert Aribaud’s glibc patches (based on the design outlined here),
and a decent amount of skill and patience.
For more on the way 2038 was planned to be handled in the 32-bit Linux world, see LWN and the write-up of the 2038 BoF at DebConf 17 (with the follow-up comments there and on LWN). This January 2019 LWN article describes the changes which are being implemented in the kernel.
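If all you need is day arithmetic (as in the `+ 1 days` example above), you can sidestep `time_t` entirely and do the calendar math yourself. A sketch using the standard Fliegel-Van Flandern Julian Day Number conversions in awk; the `dateadd` helper is my own invention, not a GNU tool:

```shell
# Add N days to a YYYYMMDD date without ever touching time_t, so it
# works past 2038 even on 32-bit systems.
dateadd() {
    awk -v d="$1" -v n="$2" '
    function jdn(y, m, dd,    a, yy, mm) {
        a = int((14 - m) / 12); yy = y + 4800 - a; mm = m + 12 * a - 3
        return dd + int((153 * mm + 2) / 5) + 365 * yy + int(yy / 4) \
               - int(yy / 100) + int(yy / 400) - 32045
    }
    BEGIN {
        j = jdn(substr(d,1,4)+0, substr(d,5,2)+0, substr(d,7,2)+0) + n
        # inverse conversion: Julian Day Number back to a Gregorian date
        a = j + 32044; b = int((4*a + 3) / 146097); c = a - int(146097*b / 4)
        e = int((4*c + 3) / 1461);  f = c - int(1461*e / 4)
        g = int((5*f + 2) / 153)
        printf "%04d%02d%02d\n", 100*b + e - 4800 + int(g/10),
               g + 3 - 12*int(g/10), f - int((153*g + 2) / 5) + 1
    }'
}

dateadd 20380119 1    # 20380120 -- no year-2038 failure here
```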
| Bash - date, working around the 2038 bug on 32bit LINUX system |
1,558,028,264,000 |
On a Mac I use purge to free up some memory. What is the equivalent in Linux (Ubuntu Server)? apt-get install purge gave me nothing. If you are not familiar with Mac's purge, here is its man page:
purge(8) BSD System Manager's Manual purge(8)
NAME
purge -- force disk cache to be purged (flushed and emptied)
SYNOPSIS
purge
DESCRIPTION
Purge can be used to approximate initial boot conditions with a cold disk
buffer cache for performance analysis. It does not affect anonymous
memory that has been allocated through malloc, vm_allocate, etc.
SEE ALSO
sync(8), malloc(3)
September 20, 2005
|
This does the same thing as purge:
sync && echo 3 > /proc/sys/vm/drop_caches
From man proc:
/proc/sys/vm/drop_caches (since Linux 2.6.16)
Writing to this file causes the kernel to drop clean caches,
dentries and inodes from memory, causing that memory to become
free.
To free pagecache, use echo 1 > /proc/sys/vm/drop_caches; to
free dentries and inodes, use echo 2 > /proc/sys/vm/drop_caches;
to free pagecache, dentries and inodes, use echo 3 >
/proc/sys/vm/drop_caches.
Because this is a nondestructive operation and dirty objects are
not freeable, the user should run sync(8) first.
And from man sync:
NAME
sync - flush file system buffers
DESCRIPTION
Force changed blocks to disk, update the super block.
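One gotcha if you wrap this in a script: `sudo echo 3 > /proc/sys/vm/drop_caches` does not work, because the redirection is performed by your unprivileged shell, not by sudo. A hedged sketch of a purge-like helper (the optional target argument exists only so it can be tested against an ordinary file):

```shell
# A minimal purge(8) equivalent. Writing the real target requires root;
# run the whole function as root, or wrap the write in `sudo sh -c '...'`.
purge() {
    target=${1:-/proc/sys/vm/drop_caches}
    sync                      # flush dirty pages first (nondestructive)
    echo 3 > "$target"        # drop pagecache, dentries and inodes
}
```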
| What is equivalent to Mac's purge in Linux? [duplicate] |
1,558,028,264,000 |
I installed Linux on another computer and I want to move my /home directory to that computer. I want to back up that directory preserving file permissions, symlinks, etc.
How should I do it? Are there any parameters for tar gzip?
|
If you mean you want to include the files that the symlinks point to, use -h.
tar -chzf foo.tar.gz directory/
Permissions and ownership are preserved by default. If you just want to include the symlinks as symlinks, leave out -h. Small -z is for gzip.
This is all spelled out in man tar; you can search for terms (such as "symlink") in man via the forward-slash key /.
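A quick self-contained demonstration of what -h changes:

```shell
# Build a directory containing a regular file and a symlink to it.
mkdir -p demo
echo hello > demo/real.txt
ln -sf real.txt demo/link.txt

tar -czf as-links.tar.gz demo/    # symlink stored as a symlink
tar -chzf deref.tar.gz demo/      # symlink replaced by the file it points to

tar -tzvf as-links.tar.gz | grep 'link.txt'   # shows "link.txt -> real.txt"
tar -tzvf deref.tar.gz   | grep 'link.txt'    # shows an ordinary file
```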
When you extract the archive (tar -xzf foo.tar.gz), ownership will only be preserved if you unpack as root, otherwise, all the files will be owned by you. This is a feature, since otherwise it would often be impossible for a normal user to access files in an archive they (e.g.) download. Modal permissions (read/write/execute) will remain the same. If as a regular user you want the ownership preserved anyway, you can use the -p switch (so tar -xzpf foo.tar.gz).
However, there is a catch.
File ownership is actually stored as a number, not a name; the system reports names by correlating them with a value from /etc/passwd. You can find the number corresponding to your user name with:
grep yourusername < /etc/passwd
Which will print something like:
piotek:x:1001:1001::/home/piotek:/bin/bash
The first number is your user number, the second one is your group number (they are usually the same). The other fields are explained in man 5 passwd (note the 5).
As a consequence, if you tar up some files and unpack them on another system as root, or using -p (so that ownership is preserved), and there is a user on that system whose user number is 1001, those files will be owned by that user, even if their name is not piotek (and even if there is a piotek user on the system with a different corresponding number).
The tar man page is a little confusing in this regard, since it refers to the -p switch as involving file permissions. This is a common *nix ambiguity in a context where the state of the read/write/execute bits are referred to as mode.
| Symlinks and permissions in backup archives |
1,558,028,264,000 |
I'm currently engaged in a non-work related homework exercise. I have an ext4 filesystem sitting on a logical volume. I'm testing different performance tuning strategies and this idea occurred to me. Since pvmove can move individual and ranges of extents, is there a way to map out what physical extents hold a particular file (in theory it can be backing files for a database, or a large commonly accessed file share) and move them to a particular storage device (for example I have a regular HDD and an SSD drive in the same LVM Volume Group)?
I thought of using "filefrag", but then it occurred to me that I wasn't 100% sure whether the extent numbers would necessarily be used in sequential order (so knowing how many sectors ext4 sees in a file isn't necessarily going to let me figure out which extent numbers/volumes the file is physically sitting on).
Any ideas?
|
The two main ingredients are hdparm --fibmap file, which tells you where the file is physically located within the LV, and lvs -o +seg_pe_ranges,vg_extent_size which tells you where the LV is physically located on your device(s).
The rest is math.
So, for example:
# hdparm --fibmap linux-3.8.tar.bz2
linux-3.8.tar.bz2:
filesystem blocksize 4096, begins at LBA 0; assuming 512 byte sectors.
byte_offset begin_LBA end_LBA sectors
0 288776 298511 9736
4984832 298520 298623 104
5038080 298640 298695 56
5066752 298736 298799 64
5099520 298824 298895 72
[...]
I don't know why this is so fragmented - downloaded with wget. May be a good example because as you see, you get a headache without scripting this somehow, at least for fragmented files. I'll just take the first segment 288776-298511 (9736 sectors). The count is wrong since it's not 512 byte sectors, but anyhow.
First check that this data is actually correct:
# dd if=linux-3.8.tar.bz2 bs=512 skip=0 count=9736 | md5sum
9736+0 records in
9736+0 records out
4984832 bytes (5.0 MB) copied, 0.0506548 s, 98.4 MB/s
7ac1bb05a8c95d10b97982b07aceafa3 -
# dd if=/dev/lvm/src bs=512 skip=288776 count=9736 | md5sum
9736+0 records in
9736+0 records out
4984832 bytes (5.0 MB) copied, 0.123292 s, 40.4 MB/s
7ac1bb05a8c95d10b97982b07aceafa3 -
Wheeee.That's identical so we are reading the LV-src at the right place. Now where's the source-LV located?
# lvs -o +seg_pe_ranges,vg_extent_size
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert PE Ranges Ext
[...]
src lvm -wi-ao--- 4.00g /dev/dm-1:5920-6047 32.00m
[...]
Now that's boring, this LV isn't fragmented. No headache here. Anyway.
It says src is on /dev/dm-1 and starts at PE 5920 and ends at PE 6047. And PE size is 32 MiB.
So lets see if we can read the same thing from /dev/dm-1 directly. Math-wise, this is a little bleargh since we used 512 byte blocksize earlier... :-/ but I'm lazy so I'll just calculate the MiB and then divide by 512! Ha! :-D
# dd if=/dev/dm-1 bs=512 skip=$((1024*1024/512 * 32 * 5920 + 288776)) count=9736 | md5sum
9736+0 records in
9736+0 records out
4984832 bytes (5.0 MB) copied, 0.0884709 s, 56.3 MB/s
3858a4cd75b1cf6f52ae2d403b94a685 -
Boo-boo. This isn't what we're looking for. What went wrong? Ah! We forgot to add the offset that's occupied by LVM at the beginning of a PV for storing LVM metadata and crap. Usually this is MiB-aligned, so just add another MiB:
# dd if=/dev/dm-1 bs=512 skip=$((1024*1024/512 * 32 * 5920 + 288776 + 1024*1024/512)) count=9736 | md5sum
9736+0 records in
9736+0 records out
4984832 bytes (5.0 MB) copied, 0.0107592 s, 463 MB/s
7ac1bb05a8c95d10b97982b07aceafa3 -
There you have it.
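The arithmetic in that last dd generalizes. A hedged helper (the name is mine) that turns a sector offset inside the LV into a sector offset on the underlying PV, given the LV's first PE and the PE size, plus the usual 1 MiB LVM metadata offset:

```shell
# All quantities are in 512-byte sectors except pe_size_mib (MiB).
lv_to_pv_sector() {
    lv_sector=$1 pe_start=$2 pe_size_mib=$3
    pe_size_sectors=$((pe_size_mib * 1024 * 1024 / 512))
    metadata_offset=$((1024 * 1024 / 512))    # 1 MiB LVM header, typical
    echo $((pe_start * pe_size_sectors + lv_sector + metadata_offset))
}

# The example above: file data at LV sector 288776, LV starting at PE 5920,
# PE size 32 MiB:
lv_to_pv_sector 288776 5920 32    # 388263944
```

The 1 MiB metadata offset is typical but not guaranteed; `pvs -o +pe_start` should report the actual value on your system.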
| Determining LVM Extent numbers for given file |
1,550,506,300,000 |
I've installed Ubuntu Server and the drive configuration is confusing me.
sda 8:0 0 447.1G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 2G 0 part /boot
└─sda3 8:3 0 445.1G 0 part
└─ubuntu--vg-ubuntu--lv 253:0 0 100G 0 lvm /
sr0 11:0 1 1024M 0 rom
My docker applications are reporting the drive as 100G and downloading media will quickly fill up the drive. I would like to either:
Extend the main partition to 550G
Relocate the /media folder to the 445G partition
I'm getting out of my depth when researching as gets pretty complex very quickly.
How to resize a partition
How to resize a logical partition
Can anyone tell me with some certainty what i do to make the additional space usable, and avoid losing all my data.
|
your 100G "partition" is not a partition. It's an LVM2 logical volume.
That's good. You can increase its size trivially, for example:
lvresize -L +50G -r ubuntu-vg/ubuntu-lv
That will increase the size by 50 GB, using 50 of the probably 345GB free space in your ubuntu-vg volume group.
| Expanding a drive partition, without data loss |
1,550,506,300,000 |
When I'm installing Fedora (using automatic partitioning) on ~140 GB of unallocated space, it creates the following partitions:
From what I can see, I'm only able to use the /home partition (for personal files, installations, etc.), so I have only 73.4 GB for personal use, which is too small. I also did this on my laptop, which has 500 GB of disk space in total; it turns out there I also got 73.4 GB to use, and the remaining ~400 GB is unreachable for me.
What is the /root partition even used for? And why does it need that kind of space? Can someone explain this for me in a comprehensive way?
|
The partition labeled …/root here is /, the root of your filesystem hierarchy. That would be where installed programs go, as well as global configuration files.
Then …/home is where user home directories go, so it's where your personal documents and data, and user-local configurations will go.
For more information about how the filesystem hierarchy is divided, see the FHS.
| Why does Linux split the selected partition into /root and /home?
1,550,506,300,000 |
I have a 1 Terabyte HDD which is currently empty storage space for Windows which is on SSD. I want to split the HDD to two 500GB partitions and use first half as Windows storage space and the other half for Linux distro.
During Linux install, when choosing to create the partitions yourself, there is an option to select the device for bootloader install. In this case I want to avoid dual-boot, so I would choose the HDD for the Linux bootloader (so if I want to boot Linux, I would have to go through BIOS and change the boot order between SSD and HDD).
I have done this operation before, but i used the whole disk back then for Linux.
What i want to know is if i install Linux on that other half of HDD, will it compromise my Windows install even if the bootloaders and installs are both on different disks?
Windows + Windows bootloader on SSD, and Linux + Linux bootloader on HDD.
Can the Linux install (other half) mess up Windows through the empty Windows storage partition (first half) on the HDD?
|
...if I install Linux on that other half of HDD, will it compromise my Windows install even if the bootloaders and installs are both on different disks?
No.
Can the Linux install (other half of HDD) mess up Windows through the empty Windows storage partition (first half) on the HDD?
Not if you define you want a side-by-side installation on the HDD. Examples cannot be provided since you did not specify the distro you wish to use, so we can't know which installer you will see and what specific steps to take. That's another question to submit, anyway.
| Windows on SSD, Linux on HDD. No dual-boot. Separate bootloader devices. Will there be problems? |
1,550,506,300,000 |
I have been doing research on partitions lately and I'm quite confused on a few things:
what is a partition table and what is it used for
what is a partitioning scheme (GPT and MBR) and what are they used for
Lastly, I have done some research and have seen the terms 'MBR' and 'GPT' being used to describe partition tables. My last question is: are MBR and GPT other names for a partition?
|
Partitions
Let's start with another question: What is a disk (from a software point of view)?
A disk is a piece of memory. It has a start and an end. It holds pieces of data, enumerated starting at 0 (you call this an address). One piece of data usually is called a sector which commonly yields 512 bytes.
Imagine a world without file-systems. You can totally use a disk by just directly writing your data to it. Your data is then located on the disk. It has a certain length. It starts at address a and takes up space up to address b. Now you probably want to have more than one set of data and you want to have your data organized in some way. You may say: I want to split the memory into smaller parts with fixed sizes. I call these parts partitions. I use them to organize my data.
So you come up with the concept of a partition table. The partition table is a well-specified list of integer numbers characterizing (start, end, designated usage type) the disk's partitions.
The MBR is actually much more than just a partition table, but it contains a partition table. The MBR also contains some executable code involved with booting the system. You could say, the MBR is one widely used implementation of the concept of a partition table. The MBR is expected to be found at sector 0. It is made to fit into that one sector of 512 bytes. As a result, there is a limit regarding the number and size of partitions it can describe.
GPT is another implementation, but it is larger and consequently able to describe more and larger partitions.
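To make "a well-specified list of integer numbers" concrete, here is a hedged sketch (the helper name is mine) that decodes an MBR partition entry straight from a raw disk image: the table starts at byte 446, and within each 16-byte entry the type byte sits at offset 4 and the little-endian 32-bit starting LBA at offset 8.

```shell
# Print the type byte and starting LBA of MBR partition entry n (0-3)
# from a raw disk image.
mbr_entry() {
    img=$1 n=$2
    off=$((446 + n * 16))
    ptype=$(od -An -tu1 -j $((off + 4)) -N1 "$img" | tr -d ' ')
    set -- $(od -An -tu1 -j $((off + 8)) -N4 "$img")
    echo "type=$ptype start_lba=$(( $1 + $2*256 + $3*65536 + $4*16777216 ))"
}

# Toy sector: one Linux (type 0x83) partition starting at LBA 2048.
dd if=/dev/zero of=mbr.img bs=512 count=1 2>/dev/null
printf '\203' | dd of=mbr.img bs=1 seek=450 conv=notrunc 2>/dev/null
printf '\000\010\000\000' | dd of=mbr.img bs=1 seek=454 conv=notrunc 2>/dev/null
mbr_entry mbr.img 0    # type=131 start_lba=2048
```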
Etymology
To understand the etymology of the term MBR, we need to consider the history. Before you can even think about how to organize the data, you want your system to boot. Powered off, a computer is pretty much "broken" as it cannot do anything. To become useful after power on, the very first program needs to be loaded from a well known location. This well known location can be the first sector of the hard-drive (this is a gross simplification of the boot process). The very first program is referred to as boot-loader. Add a few standards and the MBR (master boot record) is born. From this point of view, having a partition table in the MBR was a nice add-on more than a necessity.
The boot-loader usually reads the partition table, looks at the first bootable partition, and continues to load the actual operating system. This is why the MBR partition scheme usually comes with one partition for the operating system.
With the GPT (GUID Partition Table), there is one designated partition for the boot process, the ESP (EFI system partition). The ESP is usually formatted with a FAT file system. The boot loader is stored in a file. The actual operating system typically resides in another partition. This is why the GPT partition scheme usually comes with at least two partitions: One for the boot-loader, one for the operating system.
| partitions (in general) |
1,550,506,300,000 |
I had someone plug in a brand new 4TB drive via usb in an external enclosure in a data centre. I remotely used disk to create a partition with the default parameters to use up the entire drive. I then ran mkfs.ext4. After copying lots of data, I had the drive shipped to me.
When plugged into the computer at home (via SATA, internally), I can't mount the drive.
I'm getting the wrong fs type, bad superblock error that you see many questions about. The difference is that I know I did format it with ext4.
I did see one question that mentioned something about the partition starting too early. Here is my fdisk -l output:
Disk /dev/sda: 4000.8 GB, 4000787030016 bytes
42 heads, 63 sectors/track, 2953150 cylinders, total 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0xfb4c8856
Device Boot Start End Blocks Id System
/dev/sda1 256 976754645 488377195 83 Linux
Is there something I can do to avoid losing the data? Or do I have to ship it back, start over, and ship it back to me once again?
|
Quick answer
Is there something I can do to avoid losing the data?
Yes. You can access the data after mounting like this:
mount -o ro,offset=$((256*4096)) /dev/sda /path/to/mountpoint
(ro just in case; if files look right, you can remount with -o rw).
Explanation
This answer explains what happened:
The enclosure exposes the drive to the computer as an Advanced Format 4Kn device, allowing the use of MBR for compatibility with Windows XP systems. When the drive is removed from the enclosure, the change in logical sector format results in an invalid partition table.
Your drive now reports the capacity of 7814037168 logical sectors of 512 bytes each. When in the enclosure, it was 976754646 logical sectors of 4096 bytes each.
The current partition entry was valid in terms of 4096-byte sectors. It says the partition spans from the sector number 256 to 976754645, which was the last sector (keep in mind sectors are numbered from 0; N sectors take numbers from 0 to N-1).
And I can tell this is an MBR (DOS) partition table. GPT needs a few sectors at the end of the device for its backup table; you had no unused sectors there, so it must be MBR.
But now any tool sees the device with 512-byte logical sectors. The partition table again says the only partition spans from the sector number 256 to 976754645 and this is wrong.
The proper values now are:
256*8 = 2048
(976754645+1)*8-1 = 7814037167
Note the latter is the very last sector (your fdisk says there are 7814037168 sectors).
You cannot fix the MBR partition table, because the partition would now have to span more sectors than MBR can address. Compare what Wiki says:
Since block addresses and sizes are stored in the partition table of an MBR using 32 bits, the maximal size, as well as the highest start address, of a partition using drives that have 512-byte sectors (actual or emulated) cannot exceed 2 TiB−512 bytes (2,199,023,255,040 bytes or 4,294,967,295 sectors × 512 bytes per sector). Alleviating this capacity limitation was one of the prime motivations for the development of the GPT.
It would not be easy to fully convert to GPT, because you have no room for the secondary (backup) partition table at the end of the device. MBR lives at the beginning of the device only; GPT requires room at the beginning and at the end.
Still you can tell mount at what offset the filesystem starts, this is what my command does. The offset is 256*4096 bytes (or 2048*512 bytes, it's the same number). The command given above uses the shell to calculate the offset. The offset counts from the beginning of the entire device, therefore the command uses /dev/sda, not /dev/sda1.
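The arithmetic above can be checked directly in the shell, using the numbers from the question:

```shell
# Numbers from the question: the partition spanned 4096-byte logical
# sectors 256..976754645; the drive now presents 512-byte logical sectors.
start_4k=256
end_4k=976754645
offset=$((start_4k * 4096))                    # byte offset for mount -o offset=
start_512=$((offset / 512))                    # first 512-byte sector of the filesystem
end_512=$(( (end_4k + 1) * 4096 / 512 - 1 ))   # last 512-byte sector
echo "offset=$offset start=$start_512 end=$end_512"
```

The offset is the same number of bytes whichever logical sector size you count it in, which is why the `mount` command works without caring about the enclosure.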
My tests indicate ext4 doesn't rely on the logical sector size of the underlying device, so you should be OK mounting this way.
Now it should be clear that "shipping it back, starting over, and shipping it back once again" wouldn't help. The enclosure would translate the logical sector size again and you would be surprised the filesystem mounts. On the other hand if you clear the disk now, create a GPT partition table and a filesystem anew, and then ship the drive, it won't mount in the data centre, if they connect it via the same enclosure.
Hint
If you need to ship the disk back and forth, consider a superfloppy, i.e. a filesystem on the entire device, without any partition table (e.g. mkfs.ext4 /dev/sda). You mount such filesystem with mount /dev/sda /path/to/mountpoint regardless of whether there is an enclosure that interferes or not.
| 4TB drive formatted with ext4 can't mount due to wrong fs type |
1,550,506,300,000 |
I've been Googling about, and it seems the answer is 'no' from anecdotal reports for gparted. However does this apply to parted as well?
I'm not talking about risk factors here involved by inputting the wrong partition, fat fingering a button, power cuts etc - I mean direct effects only.
How does parted know how much 'space' is available? I just see the following output:
GNU Parted 3.2
Using /dev/sdd
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: ATA OCZ-VERTEX3 (scsi)
Disk /dev/sdd: 60.0GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 1049kB 316MB 315MB primary ext4
2 316MB 60.0GB 59.7GB primary ext4
While on gparted I see the following (It's showing up with this drive as /dev/sde after a reboot):
It has some functionality to prevent me from 'resizing' the partition too small (and hence prevent data loss - I assume).
|
gparted and parted may have similar names but they do (very) different things. gparted is standalone software with a distinct set of features and explicitly not (just) a GUI frontend to parted, even though it's labelled as such in many places.
How does parted know how much 'space' is available?
parted does not know nor care (anymore) in the least about your filesystems. It might display the filesystem type for convenience and orientation of the user only, but there is no filesystem related functionality.
When you grow a partition in parted, the filesystem does not grow along with it. You have to do this yourself. (e.g. resize2fs after growing the partition).
When you shrink a partition in parted, you have to make sure beforehand that the filesystem will not take offense. (e.g. resize2fs before shrinking the partition).
If you want to move a partition to a different start sector, then parted does nothing to help you with the relocation logic whatsoever. (if you REALLY know what you're doing, you could do this manually, but you probably shouldn't).
Is it possible to lose data with any of these? Yes, of course.
You should always have a backup.
| Does parted have the same functionality as gparted for shrinking an ext4 partition? |
1,550,506,300,000 |
I just received my brand new Lenovo ThinkPad X280. I will be installing Arch Linux on it. I will not ever be using Windows. Is there any value in keeping any of the existing disk partitions?
At some date in the future (e.g., when I sell the laptop) I would like to restore the disk to the factory default state with the original Windows 10 it came with. Will keeping some of the existing partitions make that process easier?
Here are the partitions I show on this brand new system:
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 119.2G 0 disk
├─sda1 8:1 0 260M 0 part
├─sda2 8:2 0 16M 0 part
├─sda3 8:3 0 118G 0 part
└─sda4 8:4 0 1000M 0 part
gdisk /dev/sda
Number Start (sector) End (sector) Size Code Name
1 2048 534527 260.0 MiB EF00 EFI system partition
2 534528 567295 16.0 MiB 0C01 Microsoft reserved ...
3 567296 248020991 118.0 GiB 0700 Basic data partition
4 248020992 250068991 1000.0 MiB 2700 Basic data partition
For example, if I blow away partition 3 and use that space to make my new Linux partitions, but I keep partitions 1, 2 and 4, will those partitions somehow enable me to restore the original system? If not, is there any value in keeping them at all?
|
The presence of EFI system partition (ESP) indicates this laptop has UEFI, and the disk has a GPT partition table instead of traditional MBR. If you want to use native UEFI boot method, ESP is a required item: UEFI bootloaders for Linux will install in there too. However, you can delete the ESP and let the Linux distribution installer recreate it.
Currently, most new x86 systems will have both UEFI and a compatibility layer for legacy-style boot methods. When installing an operating system, there is one thing you should be aware of: you should take care to boot the OS installer using the same style you wish the installed OS to eventually use (either legacy or UEFI). If an installation media is bootable in both legacy and UEFI styles, you may see two options for it in the BIOS boot menus to represent the two possible boot methods.
Your partition #4 is probably the Windows recovery partition, which will be recreated if you let Windows 10 completely repartition your disk on install. Partition #2 is reserved by Microsoft to aid in partition type conversions (either to Dynamic Disk or maybe the future WinFS). If you are not going to use Windows, there will be no point in keeping them.
Bottom line: if you delete partition #1 and make an UEFI-style installation, the installer will recreate partition #1 for you (possibly with slightly different size). Partitions #2, #3 and #4 look like they are part of Windows 10 default layout for UEFI and will be automatically recreated if/when you reinstall Windows.
| Installing Linux: keep existing partitions (such as Microsoft reserved partition)? |
1,550,506,300,000 |
I have one SSD and one HDD in my computer. I use the SSD as the drive which holds the OSes (Ubuntu and Windows). I installed them on my SSD in BIOS/legacy mode (MBR format) and I still have an ESP on my HDD where I had the OSes before, is it safe to remove it? Will it cause any issues with other partitions on the HDD? I'm only using the HDD as a data disk.
|
It should be safe, but if you want to be absolutely sure, temporarily comment out any /etc/fstab entries referring to the data disk in Ubuntu, shutdown your system, temporarily unplug the data cable from the HDD and then verify your OSes are still bootable. If both OSes still boot just fine, then you can be absolutely sure the ESP can be removed. If there are problems, just plug the data cable back in.
After testing, remember to uncomment the /etc/fstab entries in Ubuntu you did comment out at the start.
If your system is really booting in legacy mode, then the ESP partition should absolutely not be a factor: it is essentially just a plain FAT32 partition in a GPT-partitioned disk with a specific partition type UUID.
In theory, an UEFI-capable system could be configured to first read an UEFI driver from ESP on one disk, and then start up a legacy compatibility layer to boot in legacy BIOS mode from another disk the UEFI firmware cannot directly access without the driver; but that would be a very special configuration and I don't think most UEFI firmwares allow that kind of configuration to happen. So if you had such a special configuration, you would probably already be aware of more details of that specific UEFI implementation than possibly anyone else outside the engineering team that developed the system/motherboard.
Removing the ESP may or may not affect the partition numbering, depending on whether the tool you use for the removal just marks the ESP's slot in the GPT partition table as unused or completely rebuilds the GPT partition table. In practice, it means if you're referring to the partitions on the HDD by the /dev/sd* device name in Ubuntu, you may have to adjust the partition number(s) for that disk by one. If you're using volume labels or UUIDs to identify the partitions in Ubuntu's /etc/fstab, you don't have to do anything in any case.
| Removing EFI System Partition from data disk |
1,550,506,300,000 |
Somehow I have an ext4 filesystem in /dev/sdb. I expected it to be /dev/sdb1.
I can mount it manually; I can access the data; I can reference it in /etc/fstab; etc, but I want it in a standard partition.
I don't want to lose the data, and there isn't enough room on the drive to duplicate it into another partition. It's not a lot of data: I can always move it to an external device, fix the fs, then move it back, but now I'm curious :)
Is there a method to either remap or move the data into /dev/sdb1 (which, as of now, doesn't exist)?
fdisk gives the following:
$ sudo fdisk /dev/sdb1
fdisk: cannot open /dev/sdb1: No such file or directory
$ sudo fdisk /dev/sdb
Device /dev/sdb already contains a ext4 signature.
The signature will be removed by a write command.
Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x4096cdf8.
Command (m for help): p
Disk /dev/sdb: 200GiB, 214748364800 bytes, 419430400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x4096cdf8
Yes, this is a very small drive! I'm using Debian Stretch in a VirtualBox vm.
The response to df:
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdb 196G 116G 71G 63% /media/mymountdir
I understand that, as @MarkPlotnick states, I can do this fairly painlessly since I'm in a vm. But I would like to know if there is a cli-based method. Thanks!
|
You can write an ext4 (or any other) filesystem on a whole disk (instead of a partition), but of course doing so means that there is no partition table; you are using the whole, raw device.
This is possible if you start with a disk with a partition table, and then mistakenly (with the disk info still in memory) format it as ext4, overwriting the partition table (i.e. use mkfs.ext4 /dev/sdb instead of mkfs.ext4 /dev/sdb1). The result is a disk with no valid partition table (it has an ext4 begin block now instead), but the filesystem stores its size independently, so it will still work (this is also done on some external disks). You can mount the device the same as a partition - just use sdb where you would have used sdb1.
What follows is risky as all hell, as you may well imagine. You should already have a backup, and if you don't, get one now. On the other hand if you didn't have a backup, it means that you're not very interested in that data (which was at risk of a hardware failure, a software glitch or, depending on the scenario, spilled coffee, burst pipe, burglary and disasters both natural and unnatural), so if the worst should happen, still it would be no great mischief.
UPDATE: if you have the space somewhere, do the backup, reformat and reinstall. Same exact time as the shifting method, but one hundred percent more data safety. And if you then do not delete the backup copy, you have an update backup image thrown in for free.
First step: resize the ext4 file system so that it's one whole disk cylinder shorter. Get the cylinder size from hdparm since the partition table, well, just isn't there (fdisk will tell you the total number of sectors, not how they're organised). On some external USB drives, you might need to read the disk make and model, and use this to search for information on the Internet. SATA drives should be OK.
Now that you know by how much, you can resize the file system and shift your whole partition "to the right", towards the end of the disk, thus freeing one cylinder at the beginning, which is where the partition table and start blank space go (I don't exactly know why on LBA disks sdX1 should start one cylinder - or one track - after the partition table, but I never found worthwhile experimenting).
To shift the partition, you can use an exceedingly dangerous buffering strategy or the reverse option to dd_rescue (I seem to have seen some bug report in which this option was said not to work).
You may want to experiment with a largish file - say one gigabyte - to see whether the two options work; shift the content so that it moves data sixteen megabytes from the beginning inside the file, leaving the file size the same; then inspect the contents to verify this is what happened. Afterwards, repeat with /dev/sdb and the appropriate offsets.
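The suggested experiment on a regular file can be sketched like this (sizes are illustrative — a real partition shift would use the cylinder size from hdparm and operate on /dev/sdb; note the data is staged through a copy, because an in-place forward dd would overwrite regions it has not yet read):

```shell
# Shift a file's contents 1 MiB toward the end, keeping the size.
shift=$((1024 * 1024))
truncate -s $((8 * 1024 * 1024)) testfile
printf 'MARKER' | dd of=testfile conv=notrunc status=none
# Stage through a copy to avoid overwriting not-yet-read overlapping data:
head -c $(( $(stat -c %s testfile) - shift )) testfile > tmp_shift
dd if=tmp_shift of=testfile bs=1M seek=1 conv=notrunc status=none
# The marker should now sit at offset 1 MiB:
marker=$(dd if=testfile bs=1 skip=$shift count=6 status=none)
echo "$marker"
rm testfile tmp_shift
```

Only after the experiment behaves as expected should you repeat it with /dev/sdb and the real offsets.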
After that, use fdisk to re-create the partition table.
Good luck!
| How do I move an ext4 filesystem from /dev/sdb to /dev/sdb1? |
1,550,506,300,000 |
I'm stuck in the Linux installation process.
I've resized the Windows partition in order to be able to install Linux (dual boot).
Here is a screenshot of the computer manager tool:
Once I've booted on the live USB, gparted tells me wrong information: there is only one partition which takes the whole disk.
Here is the screenshot of gparted (from live usb) :
Do you have any idea ?
Thank you in advance for your help :-)
|
This looks like your USB disk you use to boot Linux. I assume there is a third drive in the drop-down.
In the livecd, do the following in the terminal (you can leave gparted open):
sudo fdisk -l
It should spit out three drives: two that are around 120 GB and one 1 TB drive ...
| Gparted see NTFS windows partition as fat32 |
1,550,506,300,000 |
I'm going to write a disk partition creator program for flash memory removable devices, mostly controlled by SCSI based I/O and accessed with LBA address.
For reference, I'm researching the partition table on the SD cards that were partitioned and formatted by the disk utility of Ubuntu.
I used the 'unit' command of 'parted' software in Linux to watch the parameters of the cards with CHS unit and byte unit.
The following is for a 8GB sd card with 15122432 sectors of LBA:
pi@raspberrypi:~ $ sudo parted /dev/sda
GNU Parted 3.2
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) unit chs print
Model: Generic CRM01 SD Reader (scsi)
Disk /dev/sda: 1020,130,11
Sector size (logical/physical): 512B/512B
BIOS cylinder,head,sector geometry: 1020,239,62. Each cylinder is 7587kB.
Partition Table: msdos
Disk Flags:
Number Start End Type File system Flags
1 0,1,0 1019,238,61 primary ext3
(parted) unit b print
Model: Generic CRM01 SD Reader (scsi)
Disk /dev/sda: 7742685184B
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 31744B 7738552319B 7738520576B primary ext3
The following is for a 4GB sd card with 7585792 sectors of LBA:
(parted) unit chs print
Model: Generic CRM01 SD Reader (scsi)
Disk /dev/sda: 1019,71,29
Sector size (logical/physical): 512B/512B
BIOS cylinder,head,sector geometry: 1019,120,62. Each cylinder is 3809kB.
Partition Table: msdos
Disk Flags:
Number Start End Type File system Flags
1 0,1,0 1018,119,61 primary ext3
(parted) unit b print
Model: Generic CRM01 SD Reader (scsi)
Disk /dev/sda: 3883925504B
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 31744B 3881656319B 3881624576B primary ext3
From my observation, the disk geometry values (C/H/S) are different on different-capacity SD cards, and the geometry values seem to be associated with the CHS address of the end of the partition. It seems like:
A card whose partition ends at CHS tuple (c, h, s) has disk geometry (c + 1 / h + 1 / s + 1). Are they related?
But how are the values determined? Do they depend on the OS or the file system?
|
In the 1980s, the BIOS in a PC needed to know the geometry of a hard disk in order to operate it properly. Users had to enter the correct number of cylinders, heads and sectors.
At some point (with the introduction of the IDE interface? My memory is rather fuzzy on this topic), disks became capable of reporting their geometry to the computer.
By the early to mid-1990s, disk firmware managed the disk geometry on its own. The BIOS still needed to have a value for C/H/S, because it used that to calculate the disk size. But the breakdown into C/H/S values could be arbitrary. So disks reported values of C/H/S that had nothing to do with the disk's actual geometry but fitted the allowed range and gave the correct total size.
A storage medium to which the concept of cylinders and heads doesn't apply would likewise make up some value that resulted in the correct total size.
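You can see this "made-up but size-consistent" behaviour in the question's own 8GB card numbers: the reported geometry 1020/239/62 addresses almost all of, and never more than, the real LBA capacity.

```shell
# Geometry reported by parted for the 8GB card (from the question):
c=1020 h=239 s=62
chs_total=$((c * h * s))     # sectors addressable via this CHS geometry
lba_total=15122432           # the card's real LBA sector count
echo "$chs_total CHS-addressable of $lba_total LBA sectors"
```

The CHS product falls just short of the LBA total, because C/H/S values are integers and cannot hit an arbitrary sector count exactly.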
| How the disk geometry (C/H/S) was determined on the partition table of flash memory storage? |
1,550,506,300,000 |
I have an arch install with the root partition on a flash drive.
Additionally, I have an external hard drive which has my HOME and SWAP partitions on it.
I have read online that using a flash drive for swap can prematurely wear it out; is this also the case for the OS install? Are there any advantages/disadvantages to moving my root partition to the HDD, or can flash drives be used over a longer time frame to store an OS?
|
Flash drives are based on flash memory which has a limited number of write-erase cycles. See https://en.wikipedia.org/wiki/Flash_memory#Memory_wear
However, I believe this is less of a problem with modern flash memory.
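To put the wear concern in perspective, here is a back-of-the-envelope estimate; every number is an illustrative assumption, not a spec for any particular drive:

```shell
# Rough flash endurance estimate (all values are assumptions):
capacity_gb=16      # drive capacity
pe_cycles=3000      # rated program/erase cycles per cell
written_gb_day=1    # average data written per day
amplification=2     # write amplification factor
days=$((capacity_gb * pe_cycles / (written_gb_day * amplification)))
echo "roughly $days days (~$((days / 365)) years) before wear-out"
```

Even with pessimistic inputs the result is usually years, which matches the experience reported at the end of this answer — swap and /var simply tilt the `written_gb_day` term upward.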
Most Linux distributions follow the Filesystem Hierarchy Standard (FHS, https://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard). If you read through the descriptions you can see that /var is used for variable data. Variable data, as the name states, changes often and therefore causes lots of writes to the storage device. To reduce wear on your flash drive you should move /var to a separate storage device.
Depending on the amount of memory your computer has you could also consider disabling swapping. This can be done using the swapoff command.
You may be interested in reading the following pages on the Arch Wiki:
Installing Arch Linux on a USB key: https://wiki.archlinux.org/index.php/Installing_Arch_Linux_on_a_USB_key
and
Improving performance (section about reducing disk read/writes): https://wiki.archlinux.org/index.php/Improving_performance#Reduce_disk_reads.2Fwrites
I have been running an Arch Linux installation on a cheap Kingston flash drive for a little over two years now without a problem, without any extra storage!
| Can linux be run on a flash drive long term |
1,550,506,300,000 |
I am using Ubuntu 22.04 LTS in a windows dual boot setup. This is the state of the partitions at the moment. (Windows Screenshot)
On my Ubuntu, I have the following
df -H
Filesystem Size Used Avail Use% Mounted on
tmpfs 3.4G 3.0M 3.4G 1% /run
efivarfs 263k 138k 120k 54% /sys/firmware/efi/efivars
/dev/nvme0n1p6 51G 42G 6.2G 88% /
tmpfs 17G 1.1M 17G 1% /dev/shm
tmpfs 5.3M 4.1k 5.3M 1% /run/lock
/dev/nvme0n1p7 160G 57G 96G 38% /home
/dev/nvme0n1p1 101M 35M 67M 34% /boot/efi
tmpfs 3.4G 177k 3.4G 1% /run/user/1000
Partition 6 (/) and partition 7 (/home) are the ones I am using for Ubuntu and want to expand into the unallocated space.
How can I safely resize my partition 6 to take up available space on the left?
|
There are multiple things you need to understand:
a partition contains a file-system, usually but not necessarily with the same size as the partition
partitions can be expanded (or reduced) only at their end, because the file-system always has to start at the beginning of the partition. If you change the start point of the partition, the filesystem cannot be found anymore.
a file-system can be expanded to an increased partition size very easily, and with current filesystems even in live mode
you must never decrease the size of a partition without decreasing the filesystem size beforehand, or you will lose data.
That being said, you can move your p6 (/) to the “left”, i.e. move not only the boundaries of the partition but also all its data, including the filesystem. This means the whole partition will be copied, no matter how much data it contains. (This may differ if you are using modern partitioning tools on SSDs, because SSDs, unlike hard drives, use another layer of abstraction; the behaviour is the same, it's only faster.)
After you moved your partition to the left, you can increase its size to the right. Depending on your partitioning tool, it will automatically increase the size of the file-system as well.
Long story short: If you want to increase a partition with a filesystem to the “left”, you have to move it so you can extend it to the “right” :-).
Because Windows can't handle Linux filesystems and you cannot move a filesystem that is in use by a running Linux system (only extend it to the right), I recommend using an Ubuntu live session and a tool like GNOME Disks or GParted.
| Ubuntu resize partition in in backward direction |
1,550,506,300,000 |
I have a VM on Promox running Debian 11. I need to increase disk space, so I've resized disk in Proxmox GUI. But now, I need to enlarge root partition.
This wouldn't be a problem if the root partition were the last one on the disk, but it's a partition in the middle of the disk. There is a swap partition that needs to be moved to the end of the disk before I can resize the root partition. But there are two conditions to be fulfilled:
The swap partition has to keep the same size
The final swap partition has to be aligned properly to disk layout
As I'm working on a headless server, we have to use the CLI. This is what parted gives me:
(parted) print
Model: QEMU QEMU HARDDISK (scsi)
Disk /dev/sda: 68.7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 2097kB 1049kB bios_grub
2 2097kB 33.3GB 33.3GB ext4
3 33.3GB 34.4GB 1022MB linux-swap(v1) swap
(parted) print free
Model: QEMU QEMU HARDDISK (scsi)
Disk /dev/sda: 68.7GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
17.4kB 1049kB 1031kB Free Space
1 1049kB 2097kB 1049kB bios_grub
2 2097kB 33.3GB 33.3GB ext4
3 33.3GB 34.4GB 1022MB linux-swap(v1) swap
34.4GB 68.7GB 34.4GB Free Space
I know that I have to remove the existing swap partition, using swapoff /dev/sda3 and deleting it in parted.
My question is: What do I have to type in for creating a properly aligned swap partition with exactly the same size at the end of this disk?
BTW: I don't want to replace swap with a file.
|
You can align by using flags like this:
parted /dev/sda --align optimal
and then setting the units as your first step
unit MiB
print
Personally I have a script called pdisk that handles this and I run parted non-interactively:
#!/bin/sh
#
dev="$1"
shift
test 0 -eq $# && set -- print
parted "$dev" --align optimal unit MiB "$@"
And then
pdisk /dev/sda [<command>]
| Move partition to end of disk |
1,550,506,300,000 |
I'm copying an image file (size: 2GB) to an USB disk on /dev/sda (size: 2TB) using dd:
sudo dd if=2023-05-03-raspios-bullseye-arm64-lite.img of=/dev/sda bs=4M status=progress conv=fdatasync
After dd, the USB disk has the same "size" as the image (2GB), at least that is what fdisk says:
$ sudo fdisk -l /dev/sda
Disk /dev/sda: 1,96 GiB, 2101346304 bytes, 4104192 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x544c6228
Device Boot Start End Sectors Size Id Type
/dev/sda1 8192 532479 524288 256M c W95 FAT32 (LBA)
/dev/sda2 532480 4104191 3571712 1,7G 83 Linux
How can I regain the full disk space of my USB device after I ran dd?
I want to keep the partitions+data created by dd.
I would like to be able to do either one of these operations after fixing the partitions:
Create a new partition within the (not-visible) remaining disk space
Resize /dev/sda2 to the full space
|
If /dev/sda was a block device, then fdisk -l /dev/sda would show its full size. There would be no need to "regain the full disk space".
Hypothesis: your /dev/sda is not the USB disk. It's a regular file dd created because /dev/sda did not exist at all when you ran dd. The regular file is just a copy of the source image.
For a regular file fdisk -l … shows the size of the file. This is what you got.
You can reject or confirm the hypothesis by running ls -l /dev/sda. If the first character is b then the file is a block device, so most likely the disk you mentioned. If the first character is - then the file is a regular file.
If /dev/sda is a regular file, remove it; it does not belong there. In this case the USB disk must be something else (e.g. /dev/sdb) or for some reason there is no corresponding node in /dev/ at all. lsblk may help you find the right node (if it exists).
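The same check that `ls -l` does by eye can be scripted with test(1) operators: `-b` matches a block device, `-f` a regular file. Demonstrated here on a scratch file standing in for a mistakenly created /dev/sda:

```shell
# Distinguish a block device from a regular file.
f=./fake_sda
truncate -s 1M "$f"          # scratch regular file, like dd would create
if [ -b "$f" ]; then
    kind="block device: this really is the disk"
else
    kind="regular file: dd created it by mistake; remove it"
fi
echo "$kind"
rm "$f"
```

On a real system you would run the test against /dev/sda itself.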
| Regain full disk space after copying a small image file to a large USB device |
1,550,506,300,000 |
I'm seeing that all my EFI disks have a 1M partition that goes just before the EFI partition:
Device Start End Sectors Size Type
/dev/sda1 34 2047 2014 1007K BIOS boot
/dev/sda2 2048 1050623 1048576 512M EFI System
/dev/sda3 1050624 ...
I have tried to mount that partition to explore it but I haven't been able nor I have been able to find information online.
What's the purpose of this partition and what's inside?
|
It is a BIOS boot partition. It is the "legacy" method to boot your system – with EFI being the "new" method. EFI systems ignore this partition.
The legacy boot method usually employs a MBR and its partition table. However, disks larger than 2 TB are usually formatted with GPT. Some users want a way to use the legacy boot method with a big disk. The GPT uses the BIOS boot partition to make explicit where the legacy bootloader shall be stored. GRUB is a notable example. This partition has no file-system, hence it cannot be mounted.
| What's the small 1M partition that goes before the EFI partition? |
1,550,506,300,000 |
I am trying to create a script to create an image of an entire partition, restore the image in another partition and boot from the new partition.
I am having problems with the last part, i.e. making the changes to boot from the new partition.
For this I install ubuntu/debian using auto partitioning and configuring the hdd like this
/dev/sda
/dev/sda1 - /boot/efi
/dev/sda2 - / (Ubuntu/debian)
/dev/sda3 - SWAP
/dev/sda4 - Not mounted - Target partition to copy/restore the image of sda2
So what I want to do is to create an image of dev/sda2 and restoring it to /dev/sda4 and booting then from /dev/sda2.
The reason for this is to be able to supply complete images of a Unix installation and "update" some IoT devices without an internet connection. So every time we supply a new image, the image gets restored into one of the partitions and that partition becomes the boot partition. This process applies every time we supply a new image, i.e. every time the boot partition switches.
If something goes wrong by applying/installing the new image, the boot partition should not change and instead boot from the "old working" partition.
As for now I succeeded creating the image using dump and restoring it to the target partition.
I am having problems with the changes to tell the grub to boot from the other partition, where the dump was restored.
I tried various things, like grub-install, update-grub and chrooting into the restored installation and running those commands, but I never got it working.
Could someone explain what is to be done to achieve what I am looking for?
|
Modern Linux distributions tend to configure GRUB to identify the filesystem to boot from using filesystem UUIDs (or equivalents). When you clone sda2 to sda4, you'll now have two filesystems with the same UUID. It is likely that GRUB will either boot the first one it finds with a matching UUID, or just stop if it detects multiple matching UUIDs.
So, the first thing you'll need to do after cloning the filesystem is to assign a new UUID to the new clone on sda4, to make the UUIDs unique again. Here's a question that includes answers specifying how to change the UUIDs of many filesystem types. My guess would be that you have missed this step.
The second step would be to update the mini-grub.cfg on sda1 located at /boot/efi/EFI/<name of distribution>/grub.cfg. It contains the search.fs_uuid command that will look for the filesystem that contains /boot/grub and the main GRUB configuration file. Once you have updated the UUID there, GRUB will be looking for its configuration in the new clone instead of the original filesystem.
And finally, you'll need to update both the /boot/grub/grub.cfg and /etc/fstab of the new clone on sda4 to actually use the new UUID of sda4.
(And even if you chose to use partition/device names instead of UUIDs, you would have to make changes to all those same places.)
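A sketch of the UUID rewrite (steps 2 and 3), demonstrated on scratch files so it can be run safely. On the real system the files would be the ESP's mini grub.cfg, the clone's /boot/grub/grub.cfg and /etc/fstab, and the new UUID would first be assigned with something like `tune2fs -U random /dev/sda4` and read back with blkid; every name below is a placeholder:

```shell
# Swap the old filesystem UUID for the clone's new one in config files.
old_uuid='11111111-2222-3333-4444-555555555555'
new_uuid='aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee'
printf 'search.fs_uuid %s root hd0,gpt4\n' "$old_uuid" > grub.cfg.test
printf 'UUID=%s / ext4 errors=remount-ro 0 1\n' "$old_uuid" > fstab.test
sed -i "s/$old_uuid/$new_uuid/" grub.cfg.test fstab.test
hits=$(grep -l "$new_uuid" grub.cfg.test fstab.test | wc -l)
echo "$hits files updated"
rm grub.cfg.test fstab.test
```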
| Copy partition and boot from new copied partition |
1,550,506,300,000 |
I often need to mount my Windows partition (which mounts to the C:/ location by default) and I have to navigate to partition/Users/[username]/Desktop manually.
I think I can't just create a shortcut in Debian because the absolute path will change every time I mount the partition.
How can I create a shortcut that lives in C:/ and points to the desktop so every time I mount it from Linux I can just double click it and go to the folder I want?
Thanks
|
You need to think in terms of the directory paths in your Debian system. Let's assume the Windows C:\ gets mounted as /media/c. It doesn't really matter but we'll use that as the example. Let's further assume that in the Windows context you want to create a shortcut (symbolic link) to that partition's \Users\roaima\Desktop.
cd /media/c
ln -s Users/roaima/Desktop roaima_Desktop
That's it. The symbolic link (shortcut) is called roaima_Desktop, and you can now cd /media/c/roaima_Desktop.
If the mount point changes to /mnt/wincdrive you can still cd /mnt/wincdrive/roaima_Desktop because the symbolic link was created with a path relative to its starting point: it contains only the path necessary to get from the point it was created to a specific subdirectory, and has no information about the directory path associated with the mount point itself.
The symbolic link will not work in the Windows context. It will appear in the filesystem as roaima_Desktop but Windows will not be able to do anything with it. (NTFS has symbolic links but Windows does not.)
| How to create a shortcut to a Windows partition folder? |
1,550,506,300,000 |
These are the steps I am trying from this answer:
Steps I ran:
Run below command to get PV (Physical Volume) name (Ex: /dev/sda1)
sudo pvs
tini-wini # pvs
PV VG Fmt Attr PSize PFree
/dev/sda3 ubuntu-vg lvm2 a-- <9,00g 0
tini-wini #
Resize the PV
sudo pvresize /dev/sda3
tini-wini # sudo pvresize /dev/sda3
Physical volume "/dev/sda3" changed
1 physical volume(s) resized or updated / 0 physical volume(s) not resized
tini-wini #
Run below command to get root logical volume name (Filesystem value of / row; ex: /dev/mapper/ubuntu--vg-root)
df -h
tini-wini # df -h
Filesystem Size Used Avail Use% Mounted on
udev 948M 0 948M 0% /dev
tmpfs 199M 1,1M 198M 1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 8,8G 8,2G 211M 98% /
tmpfs 992M 0 992M 0% /dev/shm
tmpfs 5,0M 0 5,0M 0% /run/lock
tmpfs 992M 0 992M 0% /sys/fs/cgroup
/dev/loop0 45M 45M 0 100% /snap/snapd/15314
/dev/loop1 62M 62M 0 100% /snap/core20/1405
/dev/loop2 62M 62M 0 100% /snap/core20/1376
/dev/loop3 68M 68M 0 100% /snap/lxd/22753
/dev/loop4 56M 56M 0 100% /snap/core18/2344
/dev/loop5 68M 68M 0 100% /snap/lxd/22526
/dev/loop6 44M 44M 0 100% /snap/snapd/15177
/dev/loop7 56M 56M 0 100% /snap/core18/2253
/dev/sda2 976M 207M 703M 23% /boot
tmpfs 199M 0 199M 0% /run/user/1000
tini-wini #
Expand logical volume:
sudo lvextend -r -l +100%FREE /dev/mapper/ubuntu--vg-root
tini-wini # lvextend -r -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
Size of logical volume ubuntu-vg/ubuntu-lv unchanged from <9,00 GiB (2303 extents).
Logical volume ubuntu-vg/ubuntu-lv successfully resized.
resize2fs 1.45.5 (07-Jan-2020)
The filesystem is already 2358272 (4k) blocks long. Nothing to do!
tini-wini #
With lsblk command it´s show me that the space in disk are there.
tini-wini # lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 44,7M 1 loop /snap/snapd/15314
loop1 7:1 0 61,9M 1 loop /snap/core20/1405
loop2 7:2 0 61,9M 1 loop /snap/core20/1376
loop3 7:3 0 67,8M 1 loop /snap/lxd/22753
loop4 7:4 0 55,5M 1 loop /snap/core18/2344
loop5 7:5 0 67,9M 1 loop /snap/lxd/22526
loop6 7:6 0 43,6M 1 loop /snap/snapd/15177
loop7 7:7 0 55,5M 1 loop /snap/core18/2253
sda 8:0 0 21G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 1G 0 part /boot
└─sda3 8:3 0 9G 0 part
└─ubuntu--vg-ubuntu--lv 253:0 0 9G 0 lvm /
sr0 11:0 1 1024M 0 rom
tini-wini #
What do I need to do in this case?
Update
fdisk -l
Disk /dev/loop0: 55,5 MiB, 58183680 bytes, 113640 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/loop1: 67,94 MiB, 71221248 bytes, 139104 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/loop2: 67,83 MiB, 71106560 bytes, 138880 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/loop3: 44,65 MiB, 46804992 bytes, 91416 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/loop4: 61,92 MiB, 64901120 bytes, 126760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/loop5: 55,53 MiB, 58212352 bytes, 113696 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/loop6: 43,64 MiB, 45748224 bytes, 89352 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/loop7: 61,92 MiB, 64901120 bytes, 126760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sda: 21 GiB, 22548578304 bytes, 44040192 sectors
Disk model: VBOX HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 363D35B2-1971-493B-B67D-6C40297B89AB
Device Start End Sectors Size Type
/dev/sda1 2048 4095 2048 1M BIOS boot
/dev/sda2 4096 2101247 2097152 1G Linux filesystem
/dev/sda3 2101248 20969471 18868224 9G Linux filesystem
Disk /dev/mapper/ubuntu--vg-ubuntu--lv: 8,102 GiB, 9659482112 bytes, 18866176 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
tini-wini #
|
Based on the comments, it looks like the procedure was incomplete. This is the sequence of events that need to happen when resizing an LV on a VirtualBox VM.
Start at the VirtualBox level. VirtualBox keeps VM disks as individual files on the host system. Update the size of the disk to the desired value.
On the guest VM, the kernel will recognize the increase in disk size, but it's not usable yet. You need to use a tool such as fdisk, gdisk, or gparted (or other GUI equivalents) to create a partition from the extra space. You can also extend an existing partition, if the free space immediately follows the existing partition.
Next, the lowest LVM level - physical volumes. If a new partition was created, run pvcreate to create a new PV from it. If you extended a partition, run pvresize to pick up the changes.
On to the second layer of the LVM - volume groups. If a PV was added, you'll need to add it to the volume group using vgextend. If a PV was resized, the changes should be picked up automatically.
At this point, you should see some free space available under the target volume group.
Next, the top layer of LVM - logical volumes. Extend the desired LV using lvextend to allocate the free space from your volume group. Use commands such as lsblk to figure out which LV to extend.
The last step is to tell the filesystem on the LV to take up the new space. lvextend can manage this for several filesystems with the --resizefs option. Otherwise, you can use filesystem-specific tools to extend it (resize2fs for ext2/3/4, xfs_growfs for XFS etc).
You should now have a filesystem with the desired amount of extra storage space.
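Putting this together for the question's layout, a hedged sketch of the missing steps might look like this (growpart comes from the cloud-guest-utils package, and fdisk or parted can grow the partition instead; device and LV names are taken from the question's output):

```shell
sudo growpart /dev/sda 3    # step 2: grow partition 3 to fill the disk
sudo pvresize /dev/sda3     # step 3: let the PV pick up the new partition size
sudo lvextend -r -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv   # steps 5-6: grow LV and fs
```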
| Grow partition on ubuntu server 18.04 inside virtualbox doesn't seem to work |
1,550,506,300,000 |
Scenario: for simplicity, consider a 500GB hard disk used only to install Linux (for example Ubuntu, Debian or Fedora); and if there is a 750GB or 1TB hard disk, then 500GB is dedicated to Linux (as in the first case) and the rest of the disk to Windows.
I have read many tutorials about best practices for defining partitions to install Linux; for the most common or general scenario the following is suggested:
/boot 100MB
/swap 2x the current RAM, if RAM is less than or equal to 4GB
/ 50GB to 100 GB
/home 50GB to 100 GB
Note: for the above, it is important to consider the order of the partitions too.
Until here all is OK and gparted can be used in peace.
Now, considering security and administration aspects/concerns, the following is also available:
#1
/boot 100MB
/swap 2x the current RAM, if RAM is less than or equal to 4GB
/ 50GB to 100 GB
/home 50GB
#2
/var
/usr
/tmp
Question 1: Is the order of part #2 correct, or does it not matter? In many tutorials the partitions of part #2 appear or are only mentioned, but their order is never indicated, so I am not sure whether it is important. It could be problematic later.
Question 2: What are the recommended sizes for the /var, /usr and /tmp partitions? This question may be tricky, but I assume there is some kind of guidance or rule of thumb for them.
|
100 MiB for /boot is not enough, I recommend 1 GiB. It varies in different distributions, but on my system initramfs is 36 MiB and vmlinuz 11 MiB. So with 100 MiB you probably wouldn't be able to fit more than one bootable kernel+init on your system.
I would recommend 1 GiB, 500 MiB minimum.
Don't forget that you'll also need /boot/efi if you are on a UEFI system. Recommended sizes for /boot/efi vary; it's usually between 100-200 MiB and 550-600 MiB.
If you have enough RAM you don't necessarily need swap (unless you plan using suspend to disk). Some distributions don't create swap by default and either just don't use swap or create swap on zram.
50 GiB for / is good.
50 GiB for /home depends on what you plan to do with the system. For me (desktop with a single OS) it wouldn't be enough but for a server it's not really necessary to create a separate /home (but it definitely won't hurt) -- I wouldn't expect to put anything else than SSH key in my home on a server but that again might depend on the "type" of the server. If you are running server for multiple users and plan to set up something like per-user web directories with Apache (with /home/<user>/public_html) it makes sense to have separate /home. In general the size of /home depends on how much data will the user (users) store there.
/tmp shouldn't be on disk, most distribution now use tmpfs and store /tmp in RAM.
I don't really see a use case for separate /usr. But if you want, you can do that. Edit: As @telcoM pointed out in the comments, having a separate /usr isn't a good idea and your system may be unbootable with a separate /usr.
Separate /var is useful for servers that store a lot of things in /var, like webservers that use /var/html, or for virtualization and other similar applications that use /var a lot. So it again depends on what you plan to do with the system.
Separate /var can be also useful for systems with flatpak which installs the applications to /var/lib. It can also prevent /var/log from eating all space in / if something goes wrong when logging (but journald limits can also prevent that).
You should also consider using something more modern than plain partitions. Especially if you plan to create multiple mount points. Dealing with running out of space on one of them is really painful with fixed partitions but with technologies like LVM or btrfs (sub)volumes you can make the system future proof more easily. Moving free space from filesystem to another (e.g. shrinking /home to make more space for /var) isn't trivial with partitions (because they cannot be resized to the left), but relatively easy with LVM and with btrfs this isn't an issue at all.
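For example, with LVM, moving 10 GiB from /home to /var might look like the following sketch (the volume group name myvg is hypothetical; -r resizes the filesystem together with the LV, and shrinking only works for filesystems that support it, such as ext4; XFS cannot be shrunk):

```shell
sudo lvreduce -r -L -10G myvg/home   # shrink /home's filesystem and LV by 10 GiB
sudo lvextend -r -L +10G myvg/var    # grow /var's LV and filesystem by 10 GiB
```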
| Define partitions to install Linux but considering Security and Administration aspects/concerns |
1,550,506,300,000 |
I have a 2TB hard drive onto which I restored a Clonezilla image of an Ubuntu OS. The image was of a 512GB disk, so now my system sees the 2TB drive as 512GB with 1.5TB of free space. I looked into it, and some suggestions are to boot from a USB, install gparted and then merge the partitions, but it is said that this might mess up GRUB, and since it's the OS drive I don't want to break it. That's why I am seeking advice before trying things.
And here is the result of sudo lsblk
So I wonder how I can allocate the 1.5TB to the rest of the OS harddrive so my system sees the drive as 2 TB not 512GB without messing up GRUB?
|
Unmount the swap and logical partition
use gparted to delete the partitions
extend your root partition using gparted
finally create a logical partition and a swap partition in it
Verify/modify /etc/fstab so that the UUID matches
Here is a small illustration; modify it for your case accordingly. Feel free to ask if you face any errors/doubts, and accept the answer if it helped.
If you are doing this from the system itself, unmount the swap partition before continuing.
If doing this from a live USB, make sure every partition is unmounted, of course :)
Current situation: disk in MBR and swap in a logical partition
Start by deleting the swap partition
Delete the logical partition
Resize your root ext4 partition
Here you might get something like
Well, read it carefully; basically if anything goes wrong during resizing then you are done and your data is gone :(
Root partition size before resizing
Root partition size after resizing
Now it should look like this
Create your logical partition
Allocate the size of the logical partition
Create a swap partition in the logical partition
Allocate the swap partition size
Finally it should look like this; check the list of pending operations
Apply the changes
All done, this is how it should look
Correct the UUID of the swap partition in /etc/fstab
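For that last step, a sketch (the device name /dev/sdXN is a placeholder for your newly created swap partition):

```shell
sudo blkid -s UUID -o value /dev/sdXN   # UUID of the freshly created swap partition
# put that UUID into the swap line of /etc/fstab, e.g.:
# UUID=<new-uuid>  none  swap  sw  0  0
sudo swapon -a                          # re-enable swap from fstab
```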
| How can I expand a partition (with my Grub boot loader on it) and make sure it still boots after the fact? |
1,550,506,300,000 |
What I did
Use the graphical installer of Debian buster to create the following partitions on one hard drive.
primary partition /boot
primary partition (for lvm)
Create one volume group G.
Create logical volume X in volume group G.
Create logical volume Y in volume group G.
Create logical volume Z in volume group G.
X, Y, Z are not the actual names of the logical volumes.
Surprise
When I create the logical volumes with the menus of the installer, the installer would list the logical volumes that I created.
Somehow in the list, the ordering of X, Y, and Z is scrambled. I expect to see X first, then Y, then Z. But in the list, I might see another ordering (something like Y, X, Z).
My expectation
I expect that the ordering of the physical locations of X, Y, and Z (on the hard drive) is same as the order that I create them:
Order of creation: X, Y, Z
Expected order of physical location on hard drive:
+---+---+---+
| X | Y | Z |
+---+---+---+
There is no gap between X and Y and no gap between Y and Z.
Question
Will I get what I want?
|
Logical volumes are not partitions, their order is not important. Actually, there is nothing like "order" with LVs. You can have multiple physical volumes in the volume group and LVs allocated on multiple PVs and even with one PV LVs can be allocated in multiple "segments" -- e.g. you can get something like x1 | y1 | x2 | y2 written on the disk. (This doesn't normally happen, but it's possible to create LVs like that. That's why they are called logical volumes, you don't really care about their physical allocation.)
LVM tools like lvs print logical volumes in alphabetical order:
$ sudo lvs test
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
a test -wi-a----- 12,00m
b test -wi-a----- 12,00m
if you want to see where the LVs are really allocated, you can use pvdisplay -m
--- Physical Segments ---
Physical extent 0 to 2:
Logical volume /dev/test/b
Logical extents 0 to 2
Physical extent 3 to 5:
Logical volume /dev/test/a
Logical extents 0 to 2
Physical extent 6 to 24:
FREE
In this example you can see that LV b is allocated on the first three physical extents and a is "second".
I'm not sure how Debian installer works internally, but if you want for some reason be sure that certain LVs are allocated in a specific way (maybe on a specific PV), you can create them manually first and tell the installer to reuse existing LVs. You can select both specific PV and extent ranges when creating LVs using lvcreate (see lvcreate man page) but if you are not trying to do something special, you don't need to care about this. Just trust LVM, it will allocate the LVs in some logical way.
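For illustration, a hedged sketch of such manual placement, using the VG/LV names from the question (the PV path and extent numbers are assumptions; with the default 4 MiB extents, 10 GiB corresponds to 2560 extents):

```shell
sudo lvcreate -L 10G -n X G /dev/sda3            # allocate X only on the PV /dev/sda3
sudo lvcreate -L 10G -n Y G /dev/sda3:2560-5119  # pin Y to a specific extent range
```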
| Does the Debian graphical installer scramble the ordering of logical volumes in a volume group? |
1,550,506,300,000 |
My HDD is divided into two partitions and they are placed in the mnt folder under the root directory, located inside new and windows respectively. Whenever I attempt to rename the second partition using root access in the terminal, I get an error message that states: cannot move '/mnt/windows' to '/mnt/Main Volume': Device or resource busy. Let me share the full terminal command that I executed.
mv /mnt/windows /mnt/Main\ Volume
mv: cannot move '/mnt/windows' to '/mnt/Main Volume': Device or resource busy
Can anyone tell me why this is happening? And is there any other way to rename the partition folders?
|
Your partitions are not placed in /mnt, they are mounted there -- by mounting a partition (disk/volume...) to a specific location, you are saying "I want content of this device here" and you can't simply change it by renaming the mountpoint, you need to unmount the device and mount it again to a new location.
I assume your Windows partition is mounted during boot automatically thanks to a fstab entry, so you need to edit /etc/fstab and change the mountpoint there and either reboot or run sudo umount /mnt/windows and sudo mount -a to change the mountpoint without rebooting.
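As a sketch (the UUID is a placeholder for whatever your fstab entry actually uses; note that a space in a mount point must be written as the octal escape \040 in /etc/fstab):

```shell
sudo mkdir '/mnt/Main Volume'
# in /etc/fstab, change the mountpoint field of the Windows entry, e.g.:
# UUID=XXXX-XXXX  /mnt/Main\040Volume  ntfs  defaults  0  0
sudo umount /mnt/windows
sudo mount -a    # remount everything from fstab, now at the new location
```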
| Cannot rename directory in the root directory |
1,550,506,300,000 |
I was trying to shrink my home partition. I followed this ArchWiki article for that. According to this I first resized my filesystem using resize2fs and then resized my physical device using parted. In resize2fs parameter I gave my intended size as XG and after resizing, it reported that new size is Y (4k blocks). From this info I calculated my partition size is (Y * 4) KiB and when resizing physical partition using parted I used this size. But in reality it is (Y * 4) KB. So now total block number of the filesystem is higher than the total block number of the physical device.
In resize2fs man page it is stated that if size isn't specified it will take up whole space from the device. So to solve this problem I ran resize2fs again so that it match the fs size with physical size. But it gave following error:
resize2fs 1.45.6 (20-Mar-2020)
Resizing the filesystem on /dev/sda3 to 159907584 (4k) blocks.
resize2fs: Can't read a block bitmap while trying to resize /dev/sda3
Please run 'e2fsck -fy /dev/sda3' to fix the filesystem
after the aborted resize operation.
But when I issue e2fsck it reported the mismatch and suggested to abort. So, now I am stuck in a loop:
e2fsck 1.45.6 (20-Mar-2020)
The filesystem size (according to the superblock) is 186122240 blocks
The physical size of the device is 159907584 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort<y>?
Is there any way to recover from this? Is it safe to mount and access the partition so that I can take backup?
Thanks!
|
From what you write, you have accidentally shrunk a partition to be smaller than the file system it contains. On its own this shouldn't lose any data, but almost every action you might take after that could. This definitely includes resize2fs, e2fsck and mount. It appears you were very lucky, since the two commands you executed both detected the problem and aborted.
The big question is did you do anything with the extra space you created by shrinking the partition? If you did, if you made an additional partition and formatted it, then you may have damaged your data beyond repair. If not then you may be okay.
To fix the immediate issue you must use a tool such as parted to increase the partition back to its original size. If you did nothing with that free space already then your data should be right where you left it. This will fix the immediate problem and you can use e2fsck to double check. Do abort if it gives you a similar warning to the first time.
The root cause of your problem is that you have not properly shrunk the file system with resize2fs before you shrink the partition with parted. This is necessary to move any file data out of the space you are going to remove from the partition.
I note that the wiki you reference correctly indicates that you should specify the size in resize2fs....
... Please be very careful with units and take the time to understand the numbers you are entering. 4k blocks in an ext2/3/4 means 4096 bytes. Elsewhere the term "block" can mean something completely different. Also many partitioning programs including parted make a distinction between KB, MB... and KiB, MiB. Make sure you know which units you intend:
1 KB = 1,000 bytes; 1 MB = 1,000,000 bytes
1 KiB = 1,024 bytes; 1 MiB = 1,048,576 bytes
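To see how large the discrepancy gets, here is a small arithmetic sketch with a hypothetical resize2fs block count:

```shell
# Hypothetical: resize2fs reports the new size as 39976896 (4k) blocks
blocks=39976896
bytes=$((blocks * 4096))
echo "${bytes} bytes = $((bytes / 1024)) KiB = $((bytes / 1000)) KB"
# prints: 163745366016 bytes = 159907584 KiB = 163745366 KB
```

Entering the KiB figure into a tool that expects KB (or vice versa) produces exactly the kind of size mismatch seen here.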
| How to recover filesystem and physical size mismatch |
1,550,506,300,000 |
I don't know how to expand my kali linux partition.
I want to add the space marked unallocated to /dev/sda7. If I try to resize /dev/sda7 I don't see any free space on my disk, but if I try to resize some other partition I see 143GB of free space. How can I resize it without reinstalling?
|
My approach would be removing the swap partition first, and then moving/resizing the ext4 partition backward. You can recreate swap partition later.
By the way, you cannot resize your ext4 partition while you are using it. Therefore you should perform this operation on another system, I recommend creating a live USB and using it.
| I don't know how to expand my kali linux partition |
1,550,506,300,000 |
I'm trying to mount an external Toshiba USB drive in Ubuntu 19.04. No entry appears in the file manager gui when the drive is plugged in. fdisk shows...
ewan@tiny:~$ sudo fdisk -l
...
Disk /dev/sdb: 698.7 GiB, 750156374016 bytes, 1465149168 sectors
Disk model: External USB 3.0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX [redacted]
I used gdisk to partition the drive, and listing partitions shows:
Command (? for help): p
Disk /dev/sdb: 1465149168 sectors, 698.6 GiB
Model: External USB 3.0
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 1465149134
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)
Number Start (sector) End (sector) Size Code Name
1 2048 1465149134 698.6 GiB 8300 Linux filesystem
The partition changes were saved with the gdisk 'w' command.
Using lsblk shows (drive info redacted):
ewan@tiny:~$ lsblk -fa
...
loop22 squashfs 0 100% /snap/gnome-system-m
loop23 squashfs 0 100% /snap/gnome-characte
sda
├─sda1 vfat XXXXXXXXX 503.4M 1% /boot/efi
├─sda2 ext4 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 412.8M 34% /boot
└─sda3 crypto_L XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
└─sda3_crypt
LVM2_mem XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
├─ubuntu--vg-root
│ ext4 XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 110.5G 47% /
└─ubuntu--vg-swap_1
swap XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX [SWAP]
sdb
└─sdb1
When I try to mount:
ewan@tiny:~$ sudo mount /dev/sdb /media/usb/
NTFS signature is missing.
Failed to mount '/dev/sdb': Invalid argument
The device '/dev/sdb' doesn't seem to have a valid NTFS.
Maybe the wrong device is used? Or the whole disk instead of a
partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?
Any suggestions please?
|
gdisk just modifies the partition table; it does not actually create the filesystem metadata structures in new partitions (aka "formatting" the partition). For that, you'll need some variant of the mkfs command.
As you've created a partition /dev/sdb1 and marked it as a Linux filesystem, you should now create the filesystem of the desired type on it. For example, if you choose to use the ext4 filesystem type, you should run sudo mkfs.ext4 /dev/sdb1; if you chose XFS instead, you should run sudo mkfs.xfs /dev/sdb1 instead.
After the mkfs command is successfully executed, the filesystem should be ready for mounting. And you should use the partition device (/dev/sdb1), not the whole-disk device (/dev/sdb) for mounting.
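The full sequence might look like this sketch (WARNING: mkfs destroys any data on the partition; double-check device names with lsblk first):

```shell
sudo mkfs.ext4 /dev/sdb1          # format the partition (destroys its contents)
sudo mkdir -p /media/usb
sudo mount /dev/sdb1 /media/usb   # mount the partition, not the whole disk
lsblk -f /dev/sdb                 # sdb1 should now show FSTYPE ext4 and the mountpoint
```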
| Unable to mount a gpt partitioned external USB drive in Ubuntu 19.04 |
1,550,506,300,000 |
I have installed Ubuntu on my old laptop to give it a second life. I installed Ubuntu on the 24GB SSD, but the old Windows 10 partition is on the 1TB HDD.
Now every time I boot, the system asks which one I want to boot from. I don't want this; I want to just delete everything from that disk (the Windows one, of course) and start with a clean disk. How do I do this?
|
Open gnome-disks, or gparted.
I recommend gparted.
Execute sudo apt install gparted, if you don't have it installed.
Select the drive you want to clean in the upper right corner, and press Device -> Create partition table.
You would usually pick msdos type as it is readable by all modern OSes.
Now your disk is clean, you may create new partitions of any type. Ubuntu is familiar with ext4 or btrfs. I recommend the latter if you'd like to use snapshot functionality (restore backed-up file states).
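If you prefer the command line, an equivalent sketch (/dev/sdX is a placeholder for the 1TB disk; WARNING: this destroys everything on it, so verify the device name with lsblk first):

```shell
sudo wipefs -a /dev/sdX                             # clear old partition/filesystem signatures
sudo parted /dev/sdX mklabel msdos                  # new empty msdos partition table
sudo parted /dev/sdX mkpart primary ext4 1MiB 100%  # one partition spanning the disk
sudo mkfs.ext4 /dev/sdX1                            # create the filesystem
```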
| New ubuntu install need to clean second disk |
1,550,506,300,000 |
First of all, this question is NOT the duplicate of this one, which compares lower-case "sdx" with mixed-case "sdX". Back to the question.
If I have a formatted memory stick which is composed of only a single partition and I use the following
dd if=input_file of=/dev/sdXY bs=ZZ ...
How will it be different from using
dd if=input_file of=/dev/sdX bs=ZZ ...
The question came to my mind because when I had to make a bootable Linux USB, I had to specify the drive and NOT the partition number, i.e.
dd if=Linux.iso of=/dev/sdX bs=ZZ ...
I would be glad if someone could explain the difference, considering a single-partition drive/stick etc.
|
Most ISOs nowadays are "hybrid ISO" images containing CD/DVD boot code (the El Torito standard, an extension to the ISO 9660 CD-ROM standard) and an MBR (including boot code). This makes them bootable from CD/DVDs or USB storage devices.
By copying the ISO to the device (and not a partition), you create a USB bootable version of the ISO (like the read-only version on optical media) including the MBR and partition(s).
If you copy an image to a partition, you generally copy a "partition image" (without MBR / GPT).
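You can observe this directly by pointing the usual tools at the ISO file itself (a sketch; exact output depends on the image):

```shell
fdisk -l Linux.iso   # a hybrid ISO shows an MBR partition table inside the file
file Linux.iso       # typically reports "DOS/MBR boot sector" along with ISO 9660 data
```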
| What is the difference between using "of=/dev/sdX" and "of=/dev/sdXY" in dd |
1,550,506,300,000 |
Installed Debian 9.6 in a VirtualBox 25GB VHD, and during partitioning it said:
"Maximum size for this partition is 26.8GB"
"New partition size 26.8GB".
df shows 25669860, which seems to be correct.
Where did that extra 1.6GB come from during installation?
|
That's because the installer displays the size in decimal gigabytes, whereas other utilities use binary gigabytes.
In bytes, 25 * 2^30 = 26,843,545,600; or, in larger units, 25 GiB = 26.8 GB.
One binary gigabyte (sometimes called a gibibyte) is 2^30 = 1,073,741,824 bytes.
One decimal gigabyte is 10^9 = 1,000,000,000 bytes.
(This is an old dilemma. Generally, the prefixes kilo-, mega-, giga-, tera- etc. refer to powers of 10. By abuse of language, it became usual to use them loosely to refer to powers of 2 which have values close to the corresponding powers of 10. Commonly, the binary meaning is almost always meant when speaking of memory capacity; and the decimal meaning is almost always meant when speaking of bandwidth. But for disk storage capacity some utilities default to the decimal meaning, and others to the binary meaning.)
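The arithmetic for this particular case:

```shell
gib_bytes=$((25 * 1024 * 1024 * 1024))   # 25 binary gigabytes in bytes
echo "$gib_bytes"                        # prints 26843545600
# the same value in decimal gigabytes, truncated to one decimal place:
echo "$((gib_bytes / 1000000000)).$(( (gib_bytes % 1000000000) / 100000000 )) GB"   # prints 26.8 GB
```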
| Wrong partition size during install? |
1,542,760,058,000 |
I have a Lenovo Thinkpad W550s that already has Windows 7 on it. I would like to install Fedora 29 Workstation alongside Windows 7, but I have run into some problems.
The hard drive was formatted with MBR (not GPT) and three partitions. Using the fdisk -l command from a Fedora 29 LiveUSB yields the following information:
Disk /dev/sda: 465.8 GiB, 500107862016 bytes, 976773168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x7a8dee3d
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 3074047 3072000 1.5G 7 HPFS/NTFS/exFAT
/dev/sda2 3074048 944916479 941842432 449.1G 7 HPFS/NTFS/exFAT
/dev/sda3 944916480 976771071 31854592 15.2G 7 HPFS/NTFS/exFAT
The motherboard has UEFI. However, Legacy BIOS is enabled, and Secure Boot is disabled.
In the Fedora 29 Workstation installer, I could shrink the /dev/sda2 partition, and use that for root, home, whatever, and delete the /dev/sda3 partition to satisfy the four partition limit with MBR. But when I try to install the OS, the installer gives an error about requiring a /boot/efi partition. Even when I try deleting /dev/sda1 (still from within the Fedora installer), formatting that and installing the EFI to /dev/sda1, the installer still won't proceed.
Is there a way to install Fedora 29 on this laptop without removing Windows 7? I need it for work, and can't do a reinstall of Windows 7.
|
One of two things is the issue:
You created a UEFI-only installer USB.
You're booting in UEFI mode and need to boot in MBR/Legacy mode.
If you can get to the CLI, try this:
https://askubuntu.com/questions/162564/how-can-i-tell-if-my-system-was-booted-as-efi-uefi-or-bios
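A common check for the current boot mode from a shell on the booted system:

```shell
# /sys/firmware/efi only exists when the kernel was booted via UEFI
[ -d /sys/firmware/efi ] && echo UEFI || echo BIOS
```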
Update:
When I have a USB/ISO that is both UEFI/MBR compatible it usually shows two boot options in the BIOS/BootLoader. See if a second option shows up and try that and/or try messing with BIOS settings to force MBR/Legacy mode only.
I have also had it where Rufus (a Windows ISO-to-USB writing tool) asks "Do you want to use ISO Mode (Recommended)" or "DD Mode", and I generally use ISO mode. But I remember a case where that created a UEFI-only USB; I then tried DD mode and got a hybrid USB which was both MBR and UEFI compatible. Try using DD mode to create the installer USB and then check for a new boot entry.
| Can't install Fedora 29 on Thinkpad W550s due to GPT |
1,542,760,058,000 |
I learnt yesterday that I could pvcreate directly on /dev/sdb instead of /dev/sdb1. I thought that you could only pvcreate on a existing partition. Doing it on a partition adds a level and operations so what are the benefits of creating a partition before doing pvcreate?
|
There are two reasons to do so.
If the partition does not allocate 100% of the space of the device, this allows you to assign only a part of the device to LVM, hence leaving the rest of the device available for other uses.
In the case of a partition allocating all the device space, the reason is that if the disk is accessed by other non-Linux OSes, they might not recognize LVM and see the unpartitioned disk as a clean slate. Making a partition on it signals that the disk is being used for something.
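A sketch of the two approaches (device names are placeholders):

```shell
# whole-disk PV: no partition table, so other OSes may see the disk as empty
sudo pvcreate /dev/sdb
# partition-backed PV: create a partition first (set its type to "Linux LVM"),
# which signals to other tools that the disk is in use
sudo pvcreate /dev/sdb1
```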
| What are the benefits of doing pvcreate on a partition instead of on device? [duplicate] |
1,542,760,058,000 |
Not really familiar with LVM and partitionning, usually, I :
extend a virtual disk (where there is already a PV on it)
create a new partition with the new free space
create a new PV from it
add it to my volum group
My question is, can I, instead :
extend a virtual disk (where there is already a PV on it)
delete the existing partition and recreate it adding the new free space
extend the existing PV
extend the VG
?
|
I'm not quite sure what you mean by a virtual disk, but if you have a way of increasing the block device with a PV on it, you can use the pvresize command to grow the PV to the new size of the block device. Once the PV has grown, you will need to use lvextend to give more space to your selected LV; and finally, use resize2fs (assuming ext2/3/4) to grow the filesystem to use the new LV space.
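As a sketch with placeholder names (vg/lv and /dev/sdb1 are assumptions; adjust to your setup):

```shell
sudo pvresize /dev/sdb1           # grow the PV to match the enlarged block device
sudo lvextend -l +100%FREE vg/lv  # hand the new space to the logical volume
sudo resize2fs /dev/vg/lv         # grow the ext2/3/4 filesystem to fill the LV
```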
| Can I extend a partition that is already used as a LVM PV? |
1,542,760,058,000 |
I have installed ubuntu server 16.04 as a webserver at work. I had initially allocated 100GB to it. For some reason, some of the space has been eaten up by tmpfs and I am not able to claim it back.
Here is what i get when I run df -h
Filesystem Size Used Avail Use% Mounted on
udev 31G 0 31G 0% /dev
tmpfs 6.2G 8.9M 6.2G 1% /run
/dev/mapper/filesystem--vg-root 36G 34G 238M 100% /
tmpfs 31G 0 31G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 31G 0 31G 0% /sys/fs/cgroup
/dev/sda1 472M 57M 391M 13% /boot
tmpfs 6.2G 0 6.2G 0% /run/user/1000
It appears that my / folder is full. Which is not supposed to be the case. The server has already used up the 36GB only. Please help someone.
|
As I understand from the comments, the OP has a very large swap partition. Here is the procedure to shrink it to a reasonable size:
Disable swap:
swapoff /dev/mapper/filesystem--vg-swap_1
Change the swap LV to 4GB:
lvreduce -L 4G /dev/mapper/filesystem--vg-swap_1
Recreate new swap (just in case):
mkswap /dev/mapper/filesystem--vg-swap_1
Add new swap:
swapon /dev/mapper/filesystem--vg-swap_1
All those commands need to be executed as root
For moving the free diskspace to the other LV/filesystem you should follow those steps:
Extend the other filesystem:
lvextend -L+4G /dev/mapper/filesystem--vg-root
Extend the filesystem
resize2fs /dev/mapper/filesystem--vg-root
Again all those commands need to be executed as root
| Ubuntu Server 16.04 filesystem usage |
1,542,760,058,000 |
Before:
memory 8G, swap:8G
After:
after adding 8G of memory, the swap no longer works
how can I activate it?
OS: CentOS 7, 64bit.
|
You can confirm that swap is activated and how much is being used with:
swapon --show
As an aside, if you want to check memory usage without htop you can use:
free
That said, what you have already shown indicates that you likely do not have any problems with your swap.
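If swapon --show prints nothing, the swap was probably just never re-activated after the hardware change. Assuming the swap partition is /dev/sdXn (a placeholder you must replace), you can re-enable it and make it persistent:

```
swapon /dev/sdXn    # activate it now
# make sure /etc/fstab still has a matching line, e.g.:
# UUID=<swap-uuid>  swap  swap  defaults  0  0
swapon -a           # activate everything listed in fstab
```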
| How to activate swap after adding more memory? |
1,542,760,058,000 |
Can I have separate / and /tmp but /home + /var on one partition somehow?
Separate /tmp is good because I can set it up with some quick unreliable filesystem. I often change distributions therefore separate / is a blessing - quick re-install and I'm good as long as /home and /var are untouched.
The problem is, I don't want to designate space for any of the last three - I want them to share available resources. I sometimes need more space in /var and I can see there's available space in /home that I cannot use, sometimes it's the other way around. It's frustrating. Any ideas?
|
You can always mount your third partition somewhere (like /mnt/combo or something), and then bind-mount subdirectories from this mountpoint to the three designated directories.
In fstab, this would look something like
UUID=...         /mnt/combo  auto  defaults  0  2
/mnt/combo/usr   /usr        none  bind      0  0
/mnt/combo/var   /var        none  bind      0  0
/mnt/combo/home  /home       none  bind      0  0
Also consider this: /home makes sense to live on a separate partition, or even better a separate drive, which can be protected somehow (RAID, backups, ...). /var would make sense to be separate if you really have something personal in there (websites and such); otherwise it makes no difference. /usr can definitely be part of /; it makes no sense to have it separate, because on a modern system the distinction between /bin and /usr/bin is blurred and no one cares about it anymore, and segmenting a system only creates problems if one of the partitions somehow doesn't mount.
/tmp should normally be ram-backed anyway (tmpfs), unless you really are running out of RAM, and most distros do that by default unless you change it.
Big picture: separate /home if you have to, the rest is just overhead - you probably have no reason to have different filesystem types or different permissions on any of these, and partitioning doesn't usually mean physical separation (same hard drive?).
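The same layout can be tried one-off from the command line before committing it to fstab (the partition name is a placeholder):

```
mount /dev/sdXn /mnt/combo
mount --bind /mnt/combo/home /home
mount --bind /mnt/combo/var  /var
```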
| Multiple mountpoints on one partition? |
1,542,760,058,000 |
Can I convert Free space in a LVM partition to a ext3 partition?
If I run pvs:
PV VG Fmt Attr PSize PFree
/dev/sda3 ubuntu-vg lvm2 a-- 297,11g 30,02g
So I have 30GB unused, I would like to take them out of the LVM partition to convert it in ext3 partition. Is it possible? Or is it better to just partition these 30 GB in a new Logical Volume?
|
Since you have LVM set up, just use that — you can either extend an existing LV (and the filesystem it hosts), or create a new LV. See lvextend(8) and lvcreate(8) for details.
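Using the volume group from your pvs output, both options would look roughly like this (the LV names data and root are made up; adjust to your system, and note resize2fs assumes an ext2/3/4 filesystem):

```
# Option 1: a new 30GB LV with its own ext3 filesystem
lvcreate -L 30G -n data ubuntu-vg
mkfs.ext3 /dev/ubuntu-vg/data

# Option 2: grow an existing LV and its filesystem instead
lvextend -L +30G /dev/ubuntu-vg/root
resize2fs /dev/ubuntu-vg/root
```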
| Convert free space LVM to ext |
1,542,760,058,000 |
System:
Laptop with Linux Mint 17.3, 1x SSD for system and 2x HDD intended for RAID1 using mdadm.
Situation:
Without knowing how to create RAID1 properly, I created it badly.
GParted showed a warning that a primary gpt partition table is not there, and that it is using the backup one, I think it showed this twice
GParted showed the partition on both HDDs contained ext4 filesystem, instead of linux-raid filesystem
GParted did not show the raid flag on neither HDDs
Reboot caused the array not to work, I mean not only it did not mount automatically, it could not be mounted without stopping the array and re-assembling it
There were probably other things I did not notice like I don't know if the array, I mean the mirroring, even worked properly
|
In this answer, let it be clear that all of your data will be destroyed on both of the array members (drives), so back it up first!
Open a terminal and become root (su); if you have sudo enabled, you may also use, for example, sudo -i (see man sudo for all options):
sudo -i
Check what number (mdX) the array has:
cat /proc/mdstat
Suppose it is md0 and it is mounted on /mnt/raid1, first we have to unmount and stop the array:
umount /mnt/raid1
mdadm --stop /dev/md0
We need to erase the super-block on both drives, suppose sda and sdb:
mdadm --zero-superblock /dev/sda1
mdadm --zero-superblock /dev/sdb1
Let's get to work; we should erase the drives, if there were any data and filesystems before, that is. Suppose we have 2 members: sda, sdb:
pv < /dev/zero > /dev/sda
pv < /dev/zero > /dev/sdb
If you were to skip the previous step for your reasons, you need to wipe all filesystems on both of the drives. Then check if there is nothing left behind, you may peek with GParted on both of the drives, and if there is any filesystem other than unknown, wipe it.
First, we wipe all existing partitions, suppose sda contains 3 partitions, then:
wipefs --all /dev/sda3
wipefs --all /dev/sda2
wipefs --all /dev/sda1
Use this on both of the drives and do all partitions there are.
Then, we wipe the partition scheme with:
wipefs --all /dev/sda
wipefs --all /dev/sdb
Then, we initialize both drives with GUID partition table (GPT):
gdisk /dev/sda
gdisk /dev/sdb
In both cases use the following:
o Enter for new empty GUID partition table (GPT)
y Enter to confirm your decision
w Enter to write changes
y Enter to confirm your decision
Now, we need to partition both of the drives, but don't do this with GParted, because it would create a filesystem in the process, which we don't want, use gdisk again:
gdisk /dev/sda
gdisk /dev/sdb
In both cases use the following:
n Enter for new partition
Enter for first partition
Enter for default of the first sector
Enter for default of the last sector
fd00 Enter for Linux RAID type
w Enter to write changes
y Enter to confirm your decision
To triple-check if there is nothing left behind, you may peek with GParted on both of the newly created partitions, and if they contain any filesystem other than unknown, wipe it:
wipefs --all /dev/sda1
wipefs --all /dev/sdb1
You can examine the drives now:
mdadm --examine /dev/sda /dev/sdb
It should say:
(type ee)
If it does, we now examine the partitions:
mdadm --examine /dev/sda1 /dev/sdb1
It should say:
No md superblock detected
If it does, we can create the RAID1 array:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
We shall wait until the array is fully created, this process we may watch with:
watch -n 1 cat /proc/mdstat
After creation of the array, we should look at its detail:
mdadm --detail /dev/md0
It should say:
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Now we create a filesystem on the array. If you use ext4, the plain command below is better avoided, because ext4lazyinit would take a noticeable amount of time to finish initializing in the background (hence the name "lazyinit"); I therefore recommend avoiding it:
mkfs.ext4 /dev/md0
Instead, you should force a full instant initialization with:
mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 /dev/md0
Specifying these options initializes the inodes and the journal immediately during creation, which is useful for larger arrays.
If you chose to take a shortcut and created the ext4 filesystem with the "better avoided command", note that ext4lazyinit will take noticeable amount of time to initialize all of the inodes, you may watch it until it is done, e.g. with:
iotop
Either way you choose to make the file system initialization, you should mount it after it has finished its initialization.
We now create some directory for this RAID1 array:
mkdir --parents /mnt/raid1
And simply mount it:
mount /dev/md0 /mnt/raid1
Since we are essentially done, we may use GParted again to quickly check if it shows linux-raid filesystem, together with the raid flag on both of the drives.
If it does, we properly created the RAID1 array with GPT partitions and can now copy files on it.
See what UUID the md0 filesystem has:
blkid /dev/md0
Copy the UUID to clipboard.
Now we need to edit fstab, with your favorite text editor:
nano /etc/fstab
And add an entry to it:
UUID=<the UUID you have in the clipboard> /mnt/raid1 ext4 defaults 0 0
You may check if it is correct, after you save the changes:
mount --all --verbose | grep raid1
It should say:
already mounted
If it does, we save the array configuration; if you don't have any other md arrays configured yet, you can simply do:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
In case there are arrays already existent, just run the previous command without redirection to the conf file:
mdadm --detail --scan
and add the new array to the mdadm.conf file manually.
In the end, don't forget to update your initramfs:
update-initramfs -u
Check if you did everything according to plan, and if so, you may restart:
reboot
| How to re-create RAID1 really properly |
1,542,760,058,000 |
I'm running a dual boot Windows/Mint partition. Everything is going fine until things suddenly get REALLY screwy and slow. I get a popup that "Filesystem root only has 297.8 MB disk space remaining."
I ran df -k and got this output:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda5 19091584 17807900 290816 99% /
none 4 0 4 0% /sys/fs/cgroup
udev 4038960 4 4038956 1% /dev
tmpfs 811000 1704 809296 1% /run
none 5120 0 5120 0% /run/lock
none 4054984 15288 4039696 1% /run/shm
none 102400 12 102388 1% /run/user
/dev/sda6 57433348 8825192 45667588 17% /home
/dev/sda2 97280 23312 73968 24% /boot/efi
/home/jd/.Private 57433348 8825192 45667588 17% /home/jd
/dev/sdb1 976760404 1861952 974898452 1% /media/jd/TOSHIBA EXT
This is what my du -hs /* looks like when run from /:
9.8M /bin
72M /boot
4.0K /cdrom
4.0K /dev
27M /etc
17G /home
0 /initrd.img
327M /lib
4.0K /lib64
16K /lost+found
1.6G /media
4.0K /mnt
374M /opt
du: cannot access ‘/proc/3134/task/3134/fd/4’: No such file or directory
du: cannot access ‘/proc/3134/task/3134/fdinfo/4’: No such file or directory
du: cannot access ‘/proc/3134/fd/4’: No such file or directory
du: cannot access ‘/proc/3134/fdinfo/4’: No such file or directory
0 /proc
5.4M /root
du: cannot access ‘/run/user/1000/gvfs’: Permission denied
2.4M /run
16M /sbin
4.0K /srv
0 /sys
64K /tmp
5.0G /usr
12G /var
0 /vmlinuz
I'm pretty lost on how to go about resizing the partitions or how that partition even filled up so fast. Deleting large files from my Home folder doesn't seem to do anything, as there's still a lot of space in there.
|
I have solved the issue. My /var/log directory contained two log files, including kern.log, that were over 5.7 GB each. It seems that my machine was logging the same lengthy error thousands of times over, which quickly filled the root partition.
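A quick way to track down this kind of culprit is to rank files by size under the suspect directory, then truncate the offender rather than deleting it (deleting a log file that a daemon still holds open does not free the space until that process is restarted):

```shell
# list the ten largest entries under /var/log
du -ah /var/log 2>/dev/null | sort -rh | head -n 10

# truncate a runaway log in place (as root), keeping the open file handle valid
: > /var/log/kern.log
```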
| / directory at 100% capacity. Not sure where I went wrong in my partitioning or how to proceed |
1,542,760,058,000 |
I'm downloading a huge file using a torrent client and I was wondering if I can use GParted to partition around 100gb of my 1tb to save time because I plan to dual boot this computer right after the download finishes.
Below is my current partition.
|
As the volume/partition that you wish to modify is mounted, you should not modify it. In fact, GParted will not let you modify mounted partitions:
Why are some menu items disabled?
The partition is mounted and modifying a mounted partition is DANGEROUS. Just unmount the partition…
To use GParted on the boot volume, you'll need to stop/finish the torrent, then reboot from another volume. Hence, the one that you wish to modify will not be in use. The simplest way is to download the GParted Live image, then boot from USB or DVD.
| Is it okay to partition my drive while I'm downloading a huge file (4gb) |
1,542,760,058,000 |
I have a machine running Centos 6.x, with only one hard drive, 2 TB in size. I found that it had almost all of that space partitioned to the /home directory, so I decided I wanted to reduce that directory to only 200GB, and use the rest of that space to create a new partition.
I used this guide to do the reduction which worked fine:
http://www.linuxtechi.com/reduce-size-lvm-partition/
The new size of the /home directory is correct, however I can't find that space I freed up so I can partition it (should be well over 1TB free space).
If I run
lsblk
I get:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 1.8T 0 disk
├─sda1 8:1 0 500M 0 part /boot
└─sda2 8:2 0 1.8T 0 part
├─vg_testbox1-lv_root (dm-0) 253:0 0 50G 0 lvm /
├─vg_testbox1-lv_swap (dm-1) 253:1 0 31.5G 0 lvm
└─vg_testbox1-lv_home (dm-2) 253:2 0 200G 0 lvm /home
sr0 11:0 1 1024M 0 rom
If I go into parted and run
print free
I get this:
Model: ATA Hitachi HUA72302 (scsi)
Disk /dev/sda: 2000GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number Start End Size Type File system Flags
32.3kB 1049kB 1016kB Free Space
1 1049kB 525MB 524MB primary ext4 boot
2 525MB 2000GB 2000GB primary lvm
2000GB 2000GB 90.1kB Free Space
Obviously I'm missing something here, just not sure what.
|
You are running your disk usage via LVM, the Logical Volume Manager. Almost the entire disk is given over to LVM. Your "partitions" for / and /home are allocated out of the LVM space.
You can see the usage with the pvdisplay, vgdisplay and lvdisplay commands (run these as root). If you want a new logical "partition" for your CentOS system you create one like this:
lvcreate --size 50G --name lv_somelabel /dev/vg_testbox1
Here, the partition would be 50GB and would be named "lv_somelabel". The volume group "vg_testbox1" already exists on your system. You can then create a filesystem on it, mount it, etc:
mkfs -L somelabel -t ext4 /dev/vg_testbox1/lv_somelabel
mkdir -p /mnt/somelabel
mount /dev/vg_testbox1/lv_somelabel /mnt/somelabel
| Where did my free space go after reducing a partition? |
1,542,760,058,000 |
I have a Debian system on which we migrated to a SSD for faster execution. Before that we had a 2.0Tb hard disks in RAID. Now we want to use the RAID drives to perform storage generated by the application.
I tried using the mount command to mount one of the disks, but it failed.
fdisk -l output :
Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00089ca4
Device Boot Start End Blocks Id System
/dev/sdb1 2048 33556480 16777216+ fd Linux raid autodetect
/dev/sdb2 33558528 34607104 524288+ fd Linux raid autodetect
/dev/sdb3 34609152 3907027120 1936208984+ fd Linux raid autodetect
Disk /dev/sdc: 480.1 GB, 480103981056 bytes
255 heads, 63 sectors/track, 58369 cylinders, total 937703088 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00047ef7
Device Boot Start End Blocks Id System
/dev/sdc1 2048 33556480 16777216+ 82 Linux swap / Solaris
/dev/sdc2 33558528 34607104 524288+ 83 Linux
/dev/sdc3 34609152 937701040 451545944+ 83 Linux
Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000275d2
Device Boot Start End Blocks Id System
/dev/sda1 2048 33556480 16777216+ fd Linux raid autodetect
/dev/sda2 33558528 34607104 524288+ fd Linux raid autodetect
/dev/sda3 34609152 3907027120 1936208984+ fd Linux raid autodetect
As you can see there are two 2Tb hard disks in RAID. Is there any way I can format them to one single partition on both drives and mount them to lets say /media/attachment?? Any help would be nice. Thanks a lot. :-)
|
there are two 2Tb hard disks in RAID. Is there any way I can format them to one single partition on both drives and mount them to lets say /media/attachment
For the purposes of this answer I am using /dev/sda and /dev/sdb. It is your responsibility to ensure that this matches your situation.
You can do this provided you are happy to erase all the data on these two disks.
Ensure the disks are unused and you have taken a backup of any data on them that you wanted to keep
Using fdisk or your preferred alternative, erase the partition table and create a single partition covering the entire disk. This will leave you with partitions /dev/sda1 and /dev/sdb1
EITHER
Create a RAID 1 device, which we will identify as /dev/md1, using these two physical partitions
mdadm --create /dev/md1 --level=raid1 --raid-devices=2 /dev/sda1 /dev/sdb1
OR
Create a RAID 0 device, also identified as /dev/md1
mdadm --create /dev/md1 --level=raid0 --raid-devices=2 /dev/sda1 /dev/sdb1
Save the metadata for boot time
mdadm --examine --brief /dev/sda1 /dev/sdb1 >> /etc/mdadm/mdadm.conf
Create the filesystem. Notice that the RAID device is /dev/md1 and from this point on you rarely need to reference /dev/sda1 or /dev/sdb1
mkfs -t ext4 -L bigdisk /dev/md1
Mount it. Don't forget to update /etc/fstab if you want this configured permanently
mkdir -p /media/attachment
mount /dev/md1 /media/attachment
You can cat /proc/mdstat to see the state of the RAID device. If you are running as RAID 1 this will show you the synchronisation status.
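For the permanent /etc/fstab entry mentioned above, a line along these lines should work (the label comes from the mkfs -t ext4 -L bigdisk step; you could equally use the UUID reported by blkid /dev/md1):

```
LABEL=bigdisk  /media/attachment  ext4  defaults  0  2
```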
| Debian : Mounting a raid array |
1,542,760,058,000 |
My version of fdisk man page states default is DOS mode is disabled, which is what I want.... but why is it stating 'Created a new DOS disklabel'?
localhost four # fdisk /dev/sdc
Welcome to fdisk (util-linux 2.24.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x780f1aa7.
Command (m for help):
|
The "DOS mode" that the man page is referring to is a mode that keeps partitions aligned on cylinder boundaries, which have been an anachronism since the late 90's. In other words, it defaults to letting partitions start and end on any sector. The DOS disklabel, otherwise known as MBR, is the conventional PC partition table, as opposed to GPT, which is used in modern computers that boot using UEFI instead of bios, and is needed for drives > 2 TiB.
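If you would rather have a GPT disklabel than the default DOS/MBR one, the fdisk version shown above (util-linux 2.24) can create it directly; inside fdisk the keystrokes are:

```
g    # create a new empty GPT partition table
n    # add a new partition
w    # write the table to disk and exit
```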
| fdisk - defaults to non Dos but yet uses Dos disklabel |