Source code is text; text is architecture-independent data (i.e., /usr/shareable). Would it be, for some reason, a bad idea to have /usr/src/ link into /usr/share/src in a Linux distro?
In Unix and clones like Linux, /usr/src is not just for any source code. It is traditionally where you put the source code for the running operating system. The basic build tools for each OS and their configuration files are set to use /usr/src, and if all you did was ln -fs /usr/share/src /usr/src then those tools would still find it in the expected spot. That is the whole point of file linking. However, every un*x system I have worked on in the past 25 years has let you set where the build tools find the source, place the compiled object files, place the man pages, place the linked binaries (programs and libraries) and so on. This is often used for installing the newly built system to a second hard drive that you could then detach and pop into a second computer to boot from, instead of updating your own OS. Or for having multiple source archives available at the same time for multiple versions or different architectures, where you just run the build tools with different config files. This means that there is no need to link things to or from different locations. So no, it's not a bad idea, per se, but why bother changing things from the usual unless you have a need to do so? It's just one more thing to potentially forget and be the cause of mistakes later on.
/usr/share/src?
I'm trying to make a program (Thunderbird) run at startup, but when I go to usr/bin it seems empty. However, if I run $ which Thunderbird in terminal, it tells me /usr/bin/thunderbird. The usr/bin folder has the option to display hidden files checked. What could be the reason I can't find any files in that folder?
Make sure you are typing /usr/bin, not usr/bin. The latter means "look for usr/bin starting from the current directory." For example, if your current directory is your home directory (~), then it will look for ~/usr/bin. The former means "look for /usr/bin starting from the root directory /." To put this another way: if the path doesn't start with a /, it is resolved relative to the current directory.
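The difference is easy to demonstrate in a terminal. A small sketch, using a throwaway directory so nothing real is touched:

```shell
# Same name, two different resolutions: a scratch "usr/bin" created under
# a temporary directory vs. the real /usr/bin.
scratch=$(mktemp -d)
mkdir -p "$scratch/usr/bin"
cd "$scratch"
relative=$(cd usr/bin && pwd)   # resolved against the current directory
absolute=$(cd /usr/bin && pwd)  # resolved against the root directory
echo "$relative"                # -> $scratch/usr/bin
echo "$absolute"                # -> /usr/bin
```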
usr/bin appears empty
I have a couple of command-line PHP scripts and also PHP pages that call the same class file. Instead of having multiple copies of it floating around, I want to put class.myApi.php in one central location on the filesystem. All files will then reference that one single file. Where does it belong? Does it belong in /usr/bin?
I would suggest either /usr/local/share/php/your-app-name or /opt/local/share/php/your-app-name.
where on the filesystem do user-generated system scripts go?
I'm building a control device that runs Debian Jessie on an ARM-based Linux SBC. I'm curious what the recommended location is for application files. To date, I've been placing things in a root-level directory, e.g. /MyApplication, but I was toying with moving it to /root/ since it's a single-user deployment, e.g. /root/MyApplication. I know if I were on a more conventional multi-use(r) system, I'd place it in /usr/local/ or maybe /opt/local/. But I'm thinking that perhaps the guidelines/practices might be different for embedded single-use devices?
Certainly you have tighter control of the environment on an embedded system than you do on a desktop or server, and you can probably get away with putting your files anywhere you like (subject to constraints like avoiding read-only filesystems, which embedded systems often have). That being said, I would definitely avoid /root. That's root's home directory, and application files that belong to the application and not to the system administrator emphatically do not belong there. On an embedded system, /MyApplication is probably just fine. It has the advantage of being obvious to anyone who inherits management of the system. /usr/local and /opt/local are fine too, but they lump your application's files together with any other software that might be installed in those directories (which might occur because it's not packaged with the operating system distribution). I would consider /opt/MyApplication as an alternative to /MyApplication, but not with any very strong preference.
Where to place application files in embedded linux deployment?
I need to increase the logical volume of the var directory; the maximum size of var right now is 10GB, and I need to make it 50GB. I have a CentOS 6 server. The output of df -h is:

Filesystem      Size  Used Avail Use% Mounted on
rootfs           10G   10G     0 100% /
/dev/root        10G   10G     0 100% /
none            991M  312K  990M   1% /dev
/dev/sda2       455G  3.6G  429G   1% /home
tmpfs           991M     0  991M   0% /dev/shm
/dev/root        10G   10G     0 100% /var/named/chroot/etc/named
/dev/root        10G   10G     0 100% /var/named/chroot/var/named
/dev/root        10G   10G     0 100% /var/named/chroot/etc/named.conf
/dev/root        10G   10G     0 100% /var/named/chroot/etc/named.rfc1912.zones
/dev/root        10G   10G     0 100% /var/named/chroot/etc/rndc.key
/dev/root        10G   10G     0 100% /var/named/chroot/usr/lib64/bind
/dev/root        10G   10G     0 100% /var/named/chroot/etc/named.iscdlv.key
/dev/root        10G   10G     0 100% /var/named/chroot/etc/named.root.key

I followed this tutorial. In order to increase the volume you have to do:

lvextend -L +40G /Path/To/var

My problem is simple: I don't know where my var is located. If I do lvextend -L +40G /dev/root/var I get:

Volume group "root" not found

If I do lvextend -L +40G /dev/var I get:

Path required for Logical Volume "var"
Please provide a volume group name
Run `lvextend --help' for more information.

I tried every possible path and still can't find the right path to var, so where is my var located?

EDIT: If I do lvextend -L +40G /dev/root I get:

Path required for Logical Volume "root"
Please provide a volume group name
Run `lvextend --help' for more information.

pvs gives no output at all. lvs gives this output:

No volume groups found
As I expected from the name /dev/root, you're not using LVM. You have a few options:

1. Reinstall.
2. Hope that your partitioning scheme allows you to grow the root partition with (g)parted.
3. Create a new partition as an LVM physical volume, create a VG and an LV for /var, and move /var over.
4. Clean up the current system so you don't need the space.

Options 2 and 3 are best done when booting from a rescue CD or rescue netboot.
increasing a logical volume
I am looking for a command that will write the names of all directories, subdirectories and file names to a text file. Example format:

directory1
|_subdirectory1
| |_filename1.mp4
|_subdirectory2
| |_filename2.txt
| |_filename3.jpg
|
directory2
| ...

So the text file will only show what the directories, subdirectories and file names are. The lines don't have to exist, but they graphically give a better view; not sure if that part is possible or not. Thanks!
Use the tree command with the -o option. For example, to output the contents of /home to file.txt:

tree /home -o file.txt

You will probably need to install the tree package, since it is usually not installed by default. There are other options available, which you can see with:

tree --help
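If installing tree isn't an option, a rough substitute can be sketched with find and sed (no branch lines, just indentation by depth; list_tree is a made-up name for this sketch):

```shell
# Rough substitute for tree: each path component before the last
# becomes two spaces of indentation.
list_tree() {
    find "$1" -print | sed -e 's|[^/]*/|  |g'
}

demo=$(mktemp -d)
mkdir -p "$demo/directory1/subdirectory1"
touch "$demo/directory1/subdirectory1/filename1.mp4"
list_tree "$demo" > file.txt
cat file.txt
```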
Write directories, subdirectories and file names to .txt file
I have a shell function which operates in-folder (in my case, it looks for *.eps files and converts them to PDF without blank borders). Let us abstract this with the function below:

function do_here() {
    echo $(pwd)
}

I am looking for an intelligent way to traverse directories and paths given a root path ROOT_PATH and run do_here on each tree leaf. There may be symbolic leaves, but handling those seems a plus at this point. The rationale seems to be:

1. Traverse directories from ROOT_PATH, with alias to_path;
2. run cd $to_path;
3. run do_here;
4. Go to step 2.

I do not know how to obtain the list of directory paths for step 1.
Bash also supports ** for recursive globbing, provided shopt -s globstar is set. And you can get only directories if you suffix the glob with a /. Add dotglob so you don't miss directories with names starting with a dot. So, e.g.:

#!/bin/bash
shopt -s globstar
shopt -s dotglob

do_here() { echo "working in $(pwd)..."; }

# cd ROOT_PATH
for d in **/; do
    (
        cd "$d"
        do_here
    )
done

Note that there are some differences in how ** works between the shells, namely with regard to following symlinks inside the directory tree. If the tree contains a link to another directory, at least some versions of Bash would follow it, possibly going outside the tree. (I'm not sure what you mean by "symbolic leaves", so I don't know if that's a problem.)
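The same traversal can also be sketched with find instead of **, which avoids the globstar requirement. This assumes bash (export -f is a bashism); ROOT_PATH here is a scratch stand-in for the real root:

```shell
#!/bin/bash
# Run do_here inside every directory under ROOT_PATH using find; the
# function is exported so the child bash spawned by -exec can call it.
do_here() { echo "working in $PWD..."; }
export -f do_here

ROOT_PATH=$(mktemp -d)          # stand-in for the real root path
mkdir -p "$ROOT_PATH/a/b" "$ROOT_PATH/c"
find "$ROOT_PATH" -type d -exec bash -c 'cd "$1" && do_here' _ {} \;
```

Unlike the glob, find visits every directory (not just leaves), which matches the accepted answer's **/ behaviour.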
Operate and output inplace results
I have 2 independent folders, A and B. B has many files with the extension .build. Across A there are a few subdirectories that have the same structure as subdirectories of B. For example, A has some_path/Tools/Camera/ and B has different_path/Tools/Camera/. Say I manually identified 2 subdirectories, one in A and one in B, that have the same structure; I need to copy all .build files from the subdirectory of B into the one in A. How would I do this?
Enable the globstar Bash shell option:

shopt -s globstar

Now change directory into B and run:

for path in **/; do
    [ -d "<A-dir>/$path" ] && cp -n "$path/"*.build "<A-dir>/$path"
done

This will recursively check each subdirectory in B and see if there is an equivalent subdirectory in A. If there is, it will copy all .build files from the B subdirectory over to A.
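A quick way to sanity-check the loop, with throwaway A and B trees standing in for the real <A-dir> and B (the paths here are hypothetical):

```shell
# Miniature A and B trees: A mirrors only part of B's layout.
shopt -s globstar
top=$(mktemp -d)
A="$top/A"; B="$top/B"
mkdir -p "$A/Tools/Camera" "$B/Tools/Camera" "$B/Tools/Lens"
touch "$B/Tools/Camera/x.build" "$B/Tools/Lens/y.build"
cd "$B"
for path in **/; do
    # Copy only where A has the matching subdirectory; the stderr
    # redirect silences cp for directories with no *.build files.
    [ -d "$A/$path" ] && cp -n "$path"*.build "$A/$path" 2>/dev/null
done
ls "$A/Tools/Camera"    # x.build was copied; Tools/Lens had no match in A
```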
How to copy all files with same extension between identical directory structures?
I'm trying to write a find command that searches for all files or directories with the name andreas in them (trailing slashes added for clarity).

~$ find . -iname '*andreas*'
./Documents/Resume - Andreas/
./Documents/Resume - Andreas/Resume-Andreas-Renberg.odf
./Documents/Resume - Andreas/Resume-Andreas-Renberg.pdf
./Documents/Resume - Andreas/Resume-Andreas-Renberg-v2.pdf
./Documents/Resume - Andreas/Resume-Andreas-Renberg-v3.pdf
./Documents/Resume - Andreas/Resume-Andreas-Renberg-final.pdf
./Pictures/Trip with Andreas to Isengard/
./Pictures/Profiles/andreas-renberg.jpg
./.cache/junk/nothing here/hide-from-andreas.tar.gz

I almost have exactly what I need, but there is a huge amount of redundancy in that list that is causing problems in my other scripts. find already told me about the directory ./Documents/Resume - Andreas/, so I don't want find to list all the matching contents of that directory too. I want the output to look like this:

~$ find . -iname '*andreas*' [command]
./Documents/Resume - Andreas/
./Pictures/Trip with Andreas to Isengard/
./Pictures/Profiles/andreas-renberg.jpg
./.cache/junk/nothing here/hide-from-andreas.tar.gz
You could prune matches of type directory:

find . -iname '*andreas*' -print -type d -prune

From man find:

-prune  True; if the file is a directory, do not descend into it.
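A small reproduction on a scratch tree shows the pruned directory is listed once while its matching contents are skipped:

```shell
# A matching directory with a matching file inside it, plus a
# matching file elsewhere.
demo=$(mktemp -d)
mkdir -p "$demo/Resume - Andreas"
touch "$demo/Resume - Andreas/Resume-Andreas.pdf" "$demo/andreas.jpg"
find "$demo" -iname '*andreas*' -print -type d -prune
# lists the directory and andreas.jpg, but not the pdf inside the pruned dir
```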
Find all files OR directories (without contents) that match a name
On PCs, the XDG Desktop specifications allow the individual desktops to set different folders for various data. When writing an application that will only ever be run on servers, should one simply hardcode /etc/myapp, /var/cache/myapp etc, or are there potential environment variables or similar that should be checked?
Most Linux applications that I know of indeed hardcode paths, but sometimes allow redefining them using environment variables and command-line arguments. I see nothing wrong with hardcoding these paths, but maybe you'll make the lives of your potential app users easier if you allow specifying the configuration file location as a command-line argument, and in this conf file allow changing the location of /var/cache/myapp. In case you hardcode everything, at least make sure the existing packages in your distro don't conflict with your locations. For DNF-based distros it will be (this is an example):

dnf whatprovides '/var/cache/dnf'

dnf-4.9.0-1.fc35.noarch : Package manager
Repo         : @System
Matched from:
Filename     : /var/cache/dnf

dnf-4.9.0-1.fc35.noarch : Package manager
Repo         : fedora
Matched from:
Filename     : /var/cache/dnf

So you obviously cannot use /var/cache/dnf. In the end it's going to be much easier to just have everything installed in /opt/appname or /usr/local/opt/appname. No native Linux applications use these directories.
What directory specifications need to be followed on servers?
I need to draw the tree structure resulting from the following code:

cd /; mkdir a b c a/a b/a; cd a; mkdir ../e ../a/f ../b/a/g; cd ../b/./; mkdir /a/k a/b ../a/./b /c

I know that cd / goes to the root, and that mkdir creates the directories a, b and c, but I can't understand the rest of the line. Any thoughts would be really helpful.
This is written in a confusing manner and I assume it comes from a basic Linux/Unix test. I can explain; it will seem clearer split over multiple lines. The ; character means the end of a command, and one mkdir invocation can create several directories.

cd /
You will be in / as your current working directory.

mkdir a b c a/a b/a
Creates directories relative to your cwd: /a, /b, /c, /a/a, /b/a.

cd a
Your cwd becomes /a.

mkdir ../e ../a/f ../b/a/g
Creates directories relative to the current location. The .. means to go up one level. Above your cwd of /a is /, so you create /e, then /a/f, then /b/a/g.

cd ../b/./
While .. means the parent directory, . means this directory. So from /a you go up one (..), then into /b, then stay where you are (.). A trailing / after a directory name means only that it is a directory and is optional.

mkdir /a/k a/b ../a/./b /c
Again this needs to be broken up, since it is obviously written to be confusing. It creates /a/k, since the leading / means an absolute path; then /b/a/b, since you are already in /b and the path is relative (does not start with /); next /a/b, since you are already in /b and the . does nothing. Then it tries to create /c, but this already exists. I would suggest working through this yourself on a command line and seeing if it makes sense.
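The walkthrough above can be replayed safely under a scratch directory instead of /, substituting $root for the leading slash (the final "mkdir /c" is the one that fails because it already exists, so it is omitted here):

```shell
# Replay of the exercise, rooted at a throwaway directory.
root=$(mktemp -d)
cd "$root"
mkdir a b c a/a b/a             # creates a b c a/a b/a
cd a
mkdir ../e ../a/f ../b/a/g      # creates e a/f b/a/g
cd ../b/./                      # cwd is now $root/b
mkdir "$root/a/k" a/b ../a/./b  # creates a/k b/a/b a/b
cd "$root" && find . -type d | sort
```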
Tree structures and directories [closed]
I have a directory tree which contains files with the same name, e.g.:

a/file
a/...
a/b/file
x/a/file
x/a/...
x/b/file

I'm trying to use find (with -exec and -prune) to get all paths where file occurs first when recursively searching the tree. My example would return:

a/file
x/a/file
x/b/file

It does not return a/b/file because a file called file is already in a/. How can I achieve that with Bash?
find . -type d -exec sh -c '
    f="$1"/file
    [ -e "$f" ] && { printf "%s\n" "$f"; true; }
' find-sh {} \; -prune

Instead of finding files, I find directories and test the existence of file in each directory. If file exists, the path to it is printed and -prune is activated for the directory. Notes:

- true is there to make sure -prune does its job even if printf fails for whatever reason.
- In this solution find never processes any file; it processes directories. If you want to do something with file, you can do it from within the inner shell, using $f. Example: if I used … -prune -print, the tool would print the relevant directory pathname, not the file. To print the path to file I used printf from within the inner shell.
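Reproducing the question's tree confirms the behaviour (a/b/file is skipped because a/file already exists one level up):

```shell
# Build the example tree and run the command.
demo=$(mktemp -d); cd "$demo"
mkdir -p a/b x/a x/b
touch a/file a/b/file x/a/file x/b/file
find . -type d -exec sh -c '
    f="$1"/file
    [ -e "$f" ] && { printf "%s\n" "$f"; true; }
  ' find-sh {} \; -prune | sort
# ./a/file
# ./x/a/file
# ./x/b/file
```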
Recursively search directories for a filename, stop descending if found
I have a directory containing about 7k folders. These folders were extracted from zips, and some of the extraction was done using Python scripts. Some of these folders are extracted in such a way that:

Main Dir
|
---------------
|             |
fold1         fold2
|             |
------        fold2
|    |        |
.pngs .txts   ------
              |    |
              .pngs .txts

The requirement is to move the fold2 category of folders into a directory structure similar to that of fold1, where a folder contains the data directly instead of another same-named folder containing the data. How may I do it using bash or the command line, so that I have all 7k folders in a homogeneous structure similar to fold1?
The following script will search the current working directory for paths of the form a/B/B/c and compress them to a/B/c. This also compresses a/B/B/B/B/c to a/B/c, and a/B/B/c/D/D/e to a/B/c/D/e. You need GNU find to use the -regextype option and an implementation of mv supporting -n. If you don't have these, please have a look at the unsafe POSIX version at the end.

shopt -s dotglob failglob
find . -depth -regextype egrep -type d -regex '.*/([^/]*)/\1' -print0 |
while IFS= read -r -d '' path; do
    mv -n -t "$path/.." "$path"/* && rmdir "$path"
done

Arbitrary path names (whitespace, special symbols like *, and even line breaks) are supported. The command makes sure not to overwrite or delete any files. In a situation as in the "before" tree below, the repeated sub-directory has to be kept; you will get the error message rmdir: failed to remove './A/A'. The result can be seen in the "after" tree.

. (before)              . (after)
└── A                   └── A
    ├── someFile            ├── someFile
    ├── collision           ├── collision
    └── A                   ├── anotherFile
        ├── collision       └── A
        └── anotherFile         └── collision

Hidden files are copied too.

Bad POSIX Version

A more portable version of the script, which cannot handle line breaks inside paths, may overwrite files in situations like the one shown above, and cannot move hidden files (the sub-directory is kept if there are hidden files inside):

find . -depth -type d |
grep -E -x '.*/([^/]*)/\1' |
while IFS= read -r path; do
    mv "$path"/* "$path/.." && rmdir "$path"
done
How to move some specific folder keeping a predefined directory structure?
I am wondering if there is a convention on where to store the data folder containing all the files for the tables and rows of a relational database system such as Postgres.
On OpenBSD, the (OpenBSD) postgresql-server package will be preconfigured to use /var/postgresql/data for its databases. It also adds a _postgresql service user with /var/postgresql as its home directory. Storing databases under /var makes sense as they generally contain variable data. If your /var partition is not big enough, you may consider changing this to some other location where you have more space, or mounting a separate filesystem at /var/postgresql. Unfortunately, I'm not a FreeBSD user and can not tell you how to do that in the most convenient way for PostgreSQL on FreeBSD. On OpenBSD, changing the location of the data directory would involve changing a datadir variable in the rc script /etc/rc.d/postgresql (this particular variable does not seem to be configurable through the native rcctl utility for whatever reason). From a comment by JdeBP: For FreeBSD, the default location for the databases is /var/db/postgres/data10 (presumably this is for PostgreSQL 10). This is configurable by changing/setting the value of the postgresql_data variable in /etc/rc.conf.
On FreeBSD, or other BSDs, what directory is commonly used for data folder storing the content of a database such as Postgres
I have a stand-alone web app, it's an executable file. At this point it doesn't require nginx or apache in front of it because it has a built-in webserver. Where should I put it on a server? In the directory of my user? Or in /opt/something or somewhere else? Also, it's controlled by systemd.
Traditionally, the location for this would be /usr/local (resp. /usr/local/bin if this is just one executable). See What is /usr/local/bin? and What is the difference between /opt and /usr/local? .
Where should I put a standalone web app which doesn't require an external web server?
I need to know how to get the 4th directory name in the tree. This directory var changes while working through it using a loop, so while the loop is running, always updating this directory structure, I need to update this and keep it current:

/dir 1/dir 2/dir 3/dir 4/dir 5/

I only need to get the name that is in that 4th dir into a var to use it elsewhere. I am using a loop and extraction like this:

#!/bin/bash
working_dir="/media/data/temp1"
script_dir="$HOME/working"

find "$working_dir" -type f -name "*.*" | while [ $xf -lt $numberToConvert ]; do
    read FILENAME
    j=$FILENAME
    xpath=${j%/*}
    xbase=${j##*/}
    xfext=${xbase##*.}
    xpref=${xbase%.*}
    path1=${xpath}
    pref1=${xpref}
    ext1=${xfext}
    echo
    echo
    echo "xpath is -> "$xpath""
    echo "xbase is -> "xbase""
    echo "xfext is -> "$xfext""
    echo "xpref is -> "$xpref""
    echo "path is -> "$path1""
    echo "pref is -> "$pref1""
    echo "ext is -> "$ext1""
    echo
    getme="$(basename "$working_dir/")"
    echo "GET ME "$getme""
    echo
    # Code that does stuff to files in dirs after the 4th dir is here but hidden
    let xf++
done

The output of all that is:

xpath is -> /media/data/temp1/Joe Jackson - The Ultimate Collection/CD2
xbase is -> xbase
xfext is -> mp3
xpref is -> Jumpin' Jive Live - Joe Jackson
path is -> /media/data/temp1/Joe Jackson - The Ultimate Collection/CD2
pref is -> Jumpin' Jive Live - Joe Jackson
ext is -> mp3
GET ME temp1

basename returns the 3rd dir, not the 4th one after it. What I need to get is the name of the one between /temp1/ and /CD2:

/media/data/temp1/Joe Jackson - The Ultimate Collection/CD2

That Joe Jackson ... dir name will change when the script is done with it, going into another 4th dir with a different name, so I need to be able to have it update and keep current, getting JUST that 4th dir name. All I need is a line something like this:

4thDirNameIs=${code goes here}

It is 4 deep into the directory structure, not 5 or 3 or any other dir or file name.
The title tells it all: "into" is the key word. One has to be outside of a directory in order to go into it; I need the 4th directory name in the path. I hope that is clear enough this time.
If your file is saved as $FILENAME, these will all give you the 4th directory. In these examples, I am manually setting:

FILENAME="/media/data/temp1/Joe Jackson - The Ultimate Collection/CD2/Jumpin' Jive Live - Joe Jackson.mp3"

Use the shell:

$ dir=${FILENAME#*/*/*/*/}; echo ${dir//\/*}
Joe Jackson - The Ultimate Collection

Here, I am first removing everything up to and including the first 4 slashes, leaving the 4th directory and file name, and then removing the file name.

Shell and basename:

$ dir=${FILENAME#*/*/*/*/}; basename "$dir"
Jumpin' Jive Live - Joe Jackson.mp3

Same idea as above really, just using basename for the final step.

Parse it with Perl:

$ echo "$FILENAME" | perl -pe 's#(.*?/){4}(.*?)/.*#$2#'
Joe Jackson - The Ultimate Collection

The regular expression matches 4 repetitions of "0 or more characters followed by a slash", then everything up to the next slash, and then everything else. The parentheses let you capture a matched pattern, so we replace everything with the second pattern matched, the directory name. Alternatively, you can split the line into an array on slashes and print the 5th field (5th because the 1st is empty, due to the / at the beginning of the variable):

$ echo "$FILENAME" | perl -F"/" -lane 'print $F[4]'
Joe Jackson - The Ultimate Collection

Parse it with awk:

$ echo "$FILENAME" | awk -F"/" '{print $5}'
Joe Jackson - The Ultimate Collection

Same idea, but in awk.
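Another pure-shell option, a sketch assuming bash: split the path on slashes with read and take the fifth field (the first is empty because the path starts with /). The sample path is the hypothetical one from the question:

```shell
# Split on "/" and pick the 4th directory component.
FILENAME="/media/data/temp1/Joe Jackson - The Ultimate Collection/CD2/Jumpin' Jive Live - Joe Jackson.mp3"
IFS=/ read -r _ _ _ _ fourthDir _ <<< "$FILENAME"
echo "$fourthDir"
# Joe Jackson - The Ultimate Collection
```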
how do I extract the SubDir name 4 deep & put into a Var 4 later use?
I'm building several (> 10) packages as a non-root user from source, and want to install them (for the sake of this question, choose the --prefix= value). Two obvious options are:

1. Use my home directory, which means I'll have etc/, usr/, var/ and similar subdirs, and within each of them will be the files for different packages. But most of the packages don't interact, so there isn't much sense in having their files in the same subdirs, and it will make uninstalling kind of a pain, I think.
2. Use a separate dir for each package, e.g. --prefix=$HOME/myfooapp, --prefix=$HOME/mybarlib and so on. This will keep everything separate, but now my home dir will fill up with multiple subdirs I don't want to see. Also, some of the packages do interact, so I don't mind them being together in the same place (no need to make the PATH super-long).

Is there some other alternative I'm missing? I mean, I could do:

3. Like option 2, but in a subdirectory of my home directory, e.g. --prefix=$HOME/opt/myfooapp, --prefix=$HOME/opt/mybarlib.

Is this the best I can do?
Eventually I decided on $HOME/opt/myfooapp. The reasons are: Options 2 and 3 allow me to separate the file hierarchies of the different packages; that's less of a consideration when you're working on / as the root user - since you have a package manager to help you. I don't... Only option 3 avoids cluttering my home directory with multiple directories having to do with software package installations.
Where should I install packages built from source as a non-root user?
I have a directory (INPUTDIR) with sample names as subdirectories (508_C, 540_C, 570_D, etc.). Within each of those subdirectories there is another directory called FASTQ, which contains two kinds of files, e.g.:

540_Ct_1.fastq.gz
540_Ct_2.fastq.gz

I want to create two lists: the first having all _1.fastq.gz filenames with paths, and the other having all _2.fastq.gz filenames with paths. The directory structure is:

INPUTDIR > 508_C > FASTQ > 508_1.fastq.gz, 508_2.fastq.gz
INPUTDIR > 540_C > FASTQ > 540_Ct_1.fastq.gz, 540_Ct_2.fastq.gz
INPUTDIR > 570_D > FASTQ > 570_Ct_1.fastq.gz, 570_Ct_2.fastq.gz

INPUTDIR is the main directory. I want to create TWO lists in this directory. One list has:

/home/user/INPUTDIR > 508_C > FASTQ > 508_1.fastq.gz
/home/user/INPUTDIR > 540_C > FASTQ > 540_Ct_1.fastq.gz
/home/user/INPUTDIR > 570_D > FASTQ > 570_Ct_1.fastq.gz

The second list has:

/home/user/INPUTDIR > 508_C > FASTQ > 508_2.fastq.gz
/home/user/INPUTDIR > 540_C > FASTQ > 540_Ct_2.fastq.gz
/home/user/INPUTDIR > 570_D > FASTQ > 570_Ct_2.fastq.gz

Thanks,
Ron
cd INPUTDIR
find . -name \*1.fastq.gz > list1
find . -name \*2.fastq.gz > list2

The paths in the "list" files will be relative to the current directory. If you want absolute paths, use:

find "$PWD" -name \*1.fastq.gz > list1
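A quick check with a throwaway tree (sample names borrowed from the question, third sample omitted):

```shell
# Miniature INPUTDIR layout, then the two find commands.
cd "$(mktemp -d)"
mkdir -p INPUTDIR/508_C/FASTQ INPUTDIR/540_C/FASTQ
touch INPUTDIR/508_C/FASTQ/508_1.fastq.gz INPUTDIR/508_C/FASTQ/508_2.fastq.gz
touch INPUTDIR/540_C/FASTQ/540_Ct_1.fastq.gz INPUTDIR/540_C/FASTQ/540_Ct_2.fastq.gz
cd INPUTDIR
find "$PWD" -name '*1.fastq.gz' > list1
find "$PWD" -name '*2.fastq.gz' > list2
cat list1   # two absolute paths ending in _1.fastq.gz
```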
Creating a list containing filenames with paths
When I first log in to Unix on my Mac, I usually see a ~ after my username in the command line. However, if I look at the apps contained in the directory ~/Applications, they are not all of my applications, but rather just some of my Chrome apps. If I cd to my / directory and then go to /Applications, there I see all of my applications, so I am wondering what the difference is. Also, if I go to ~/MY-USERNAME/Applications I see the same as if I were just in ~. So what is the difference? Lastly, how come I can cd into my USERNAME directory endlessly? (See screenshot; "startec" is my username.)
~ is your home directory; / is the root directory. ~ is where you keep your personal files and directories, and by default other users can't see or access them. Files and directories in / are system-wide and accessible to all users who have the right permissions. startec is a link, which allows you to have two pointers to the same directory (in this case it points to your home directory); that is why you can cd into it endlessly. I know that most people draw the file system strictly as a tree, but with links (hard or soft) this isn't completely true. To create links you can use the ln command; to see more about those, see man 1 ln. Here is a diagram of a Unix file system with links shown in dotted lines and directories with solid lines. Source: http://users.aber.ac.uk/cwl/UNIX/notes/filesystem/fs.html
What is the difference between ~ and / in paths [duplicate]
Long story short, I just accidentally deleted my entire home folder. Thankfully, it seems like the hidden files are still there. I'm not sure, but aren't all of the folders within the home folder (Desktop, Downloads and whatever else is in there) empty by default? If that is the case, could someone just name all of the folders located in the home folder so that I can rebuild it? Thanks a bunch in advance.
How about making a new user, and then copying all the hidden files to this new user. You could then rename the new user to your old one. I don't know the specifics of your situation, but I think this is better than manually recreating the default folders.
Home folder structure in Ubuntu 12.04.1?
Using the terminal on macOS, I need to return the path along with the file name for everything in a directory and all sub-directories, but only if a file has a specific file extension (e.g. .txt). I tried this, but it does not filter by file extension:

find $PWD/* -maxdepth 20

I also tried this, but it does not return the directory path:

ls my-dir | egrep '\.txt$'
If you're on macOS, then your shell is likely zsh; then:

print -rC1 -- $PWD/**/*.txt(N)

would print raw on 1 Column the full paths of the non-hidden files (of any type, including regular, symlink, fifo...) whose name ends in .txt in the current working directory or below, sorted lexically.

Add the D qualifier (inside the (...) above) to include hidden ones, . to restrict to regular files only, om to sort by age, :q to quote special characters in the file paths if any...

For head and tail of those paths in 2 separate columns:

() { print -rC2 -- $@:h $@:t; } $PWD/**/*.txt(N)

Here we pass that list of paths to an anonymous function which prints the heads and tails of its @rguments on 2 Columns.
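If a portable, shell-agnostic answer is preferred, the original find attempt only needed a -name filter (a sketch, run here in a scratch directory):

```shell
# find with -name filters by extension and prints full paths.
demo=$(mktemp -d); cd "$demo"
mkdir -p sub/deeper
touch notes.txt sub/deeper/todo.txt sub/image.jpg
find "$PWD" -type f -name '*.txt'
# full paths of notes.txt and todo.txt only
```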
Return all files of a specific extension with directory and subdirectories paths
Here is my file structure. I am currently residing in the test2/ directory, and all commands must run from there.

test/
|_test2/

publish/
|_subfolder1/
| |_f1
| |_f2
| |_f3
| |_subfolder2/
|   |_f4
|   |_f5
|   |_f6
|   |_f7

I need to do the following:

1. Create a tar.gz of subfolder2 (named the same)
2. remove subfolder2
3. create a tar.gz of subfolder1 (named the same), including subfolder2.tar.gz
4. remove subfolder1

In the end the tar structure should look like this:

publish/
|_subfolder1.tar.gz
  |_subfolder2.tar.gz

When I untar each tar.gz in its current path, the output should look like this:

publish/
|_subfolder1.tar.gz
|_subfolder1/
| |_f1
| |_f2
| |_f3
| |_subfolder2.tar.gz
| |_subfolder2/
|   |_f4
|   |_f5
|   |_f6
|   |_f7

I can accomplish this by doing the following:

cd ../../publish/subfolder1
tar -zcf subfolder2.tar.gz subfolder2/
rm -rf subfolder2
cd ../
tar -zcf subfolder1.tar.gz subfolder1/
rm -rf subfolder1
cd ../test/test2

I really don't want my script to be hopping around folders with cd, so I tried using the following commands instead:

tar -zcf ../../publish/subfolder1/subfolder2.tar.gz -C ../../publish/subfolder1/subfolder2/ .
rm -rf ../../publish/subfolder1/subfolder2/
tar -zcf ../../publish/subfolder1.tar.gz -C ../../publish/subfolder1/ .
rm -rf ../../publish/subfolder1/

This WILL create tarballs, but gets rid of the directories. After untarring them I see the following:

publish/
|_subfolder1.tar.gz
|_subfolder2.tar.gz
|_f1
|_f2
|_f3
|_f4
|_f5
|_f6
|_f7

How can I keep the folder paths in the tarball structure without using a bunch of cd commands?
To archive the contents of foo, change to foo and tar the current directory: -C /some/dir/foo . To archive the foo directory itself, change to foo's parent directory and tar foo: -C /some/dir foo. So the commands would look like:

tar -zcf ../../publish/subfolder1/subfolder2.tar.gz -C ../../publish/subfolder1 subfolder2
rm -rf ../../publish/subfolder1/subfolder2/
tar -zcf ../../publish/subfolder1.tar.gz -C ../../publish subfolder1
rm -rf ../../publish/subfolder1/
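Replaying the sequence on a scratch copy of the layout shows the directory names are preserved inside the archives (f2/f3 and f5-f7 omitted for brevity, and the paths simplified since everything runs from one directory here):

```shell
# Scratch copy of the publish/ layout, then the two tar -C steps.
cd "$(mktemp -d)"
mkdir -p publish/subfolder1/subfolder2
touch publish/subfolder1/f1 publish/subfolder1/subfolder2/f4
tar -zcf publish/subfolder1/subfolder2.tar.gz -C publish/subfolder1 subfolder2
rm -rf publish/subfolder1/subfolder2
tar -zcf publish/subfolder1.tar.gz -C publish subfolder1
rm -rf publish/subfolder1
tar -tzf publish/subfolder1.tar.gz
# lists subfolder1/, subfolder1/f1 and subfolder1/subfolder2.tar.gz
```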
Create a tar.gz file from a different directory, tar must keep files in specific directory structure
Regarding sharing data (files and directories) among users within the same machine: does it make sense to use the /srv directory, according to "What's the most appropriate directory where to place files shared between users?"? I am assuming it is still valid or recommendable; correct me if that changed. But what should be the directory shared between users/groups for software oriented toward development, i.e. Java, Maven, Gradle (all available from a .tar.gz file)? It makes no sense to have the same unpacked directory repeated for each user.
There are 2 kinds of programs:

Linux-like: their files are spread across the filesystem according to file type/usage. No issues with this kind of program; they are managed well by package managers.

Windows-like: every program file lives in one program-specific folder, like C:/Program Files/XXX. Not the ideal way, but if you have one or a few such programs on Linux, place them in /opt. Or you can create a directory in /home for such purposes, /home/opt for example. That case might be useful if you are afraid of conflicts between your software and software that automatically chooses /opt.

Software that is to be used by only one person might be installed into ${HOME}/bin, but that is not your case.
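As a sketch of the /opt pattern: unpack the tool into its own directory and put one symlink on PATH. The gradle-8.0 name is invented, and a scratch directory stands in for /opt so the example runs without root:

```shell
prefix=$(mktemp -d)                      # stands in for /opt
mkdir -p "$prefix/gradle-8.0/bin" "$prefix/bin"

# Fake launcher so the example is self-contained; a real install would
# come from extracting the vendor .tar.gz here instead.
printf '#!/bin/sh\necho gradle ok\n' > "$prefix/gradle-8.0/bin/gradle"
chmod +x "$prefix/gradle-8.0/bin/gradle"

# One symlink exposes the tool; upgrading means repointing the link.
ln -s "$prefix/gradle-8.0/bin/gradle" "$prefix/bin/gradle"
"$prefix/bin/gradle"
```

With the real /opt you would add /opt/bin (or the tool's own bin directory) to every user's PATH once, and all users share the single unpacked copy.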
What's the most appropriate directory where to place development software shared between users?
1,346,614,573,000
What is the prevailing (or correct) convention on where to install cross-platform libraries? E.g. libfoo.so.1.0.0 compiled for the host might typically be located at /usr/local/lib/. If I also had to install libfoo for a non-host architecture, e.g. ARM, where should it go? Some reading leads me to /usr/local/lib/aarch64-linux-gnu/, but other reading leads me to /usr/local/aarch64-linux-gnu/lib/. I've recently started learning about the "configure; make; make install" recipe; configure takes a --prefix argument, so it ends up following the latter convention. Does that mean that is the prevailing/correct convention? OTOH, people have claimed that the former is the "debian convention," but I'm having a hard time finding evidence to back that up.
It's really up to you as long as it's a separate directory not used by your system. You can even use something like /opt/arm64 or even /arm64.
What is convention on where to install cross-platform libraries?
1,346,614,573,000
This is easier to explain with an example. Imagine I have a directory structure as follows:

pics/cats/png/01.png
pics/cats/png/02.png
pics/cats/jpg/01.jpg
pics/cats/jpg/02.jpg
pics/dogs/png/01.png
pics/dogs/png/02.png
pics/dogs/jpg/01.jpg
pics/dogs/jpg/02.jpg

I would like to rsync the "pics" directory to a destination, but on the destination I would like the following result, assuming the filter string for my leaf directories is "png":

pics/cats/png/01.png
pics/cats/png/02.png
pics/dogs/png/01.png
pics/dogs/png/02.png

In addition, I would like to accomplish the following result as well (as the png dirs are no longer necessary):

pics/cats/01.png
pics/cats/02.png
pics/dogs/01.png
pics/dogs/02.png

It might be important to note that any directory might have the string "png" in it, but I only want to "filter" on the leaf directories, i.e. directories that do not contain another directory. It might be important to note also that I want to keep the contents of the "png" directories, even if they contain non-png files. I.e.:

pics/cats/png/01.png
pics/cats/png/02.txt
pics/cats/jpg/01.jpg
pics/cats/jpg/02.jpg
pics/dogs/png/01.txt
pics/dogs/png/02.png
pics/dogs/jpg/01.jpg
pics/dogs/jpg/02.jpg

Becomes:

pics/cats/png/01.png
pics/cats/png/02.txt
pics/dogs/png/01.txt
pics/dogs/png/02.png

Or:

pics/cats/01.png
pics/cats/02.txt
pics/dogs/01.txt
pics/dogs/02.png

Last item to note: the directory structure might be "n" deep. I.e.:

pics/cats/house/tabby/png/01.png
pics/cats/house/tabby/png/02.txt
pics/cats/house/tabby/jpg/01.jpg
pics/cats/house/tabby/jpg/02.jpg

Becomes:

pics/cats/house/tabby/png/01.png
pics/cats/house/tabby/png/02.txt

Or:

pics/cats/house/tabby/01.png
pics/cats/house/tabby/02.txt

If no easy way exists I'm sure I can just write a bash script to do it, but this seems like a use case that, while not common, I'm sure crops up every now and then, and perhaps there is a name and flag for this operation.
You can get all the leaf nodes, filter them with grep, and save the result to a file. Then you run rsync with the --files-from option. That is just the basics: you could filter directly in awk and/or pipe directly to xargs, for example. I'm not trying to be concise or performant, but to show the steps involved. If you are at the root of the hierarchy:

$ find . -type d | sort | awk '$0 !~ last "/" {print last} {last=$0} END {print last}' | grep '/png$' > /tmp/dirs_rsync.txt
$ rsync -av --files-from=/tmp/dirs_rsync.txt . /your/destination/folder
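The leaf-directory pipeline can be tried on a scratch copy of the pics tree before pointing rsync at real data (the tree below is fabricated to match the question):

```shell
root=$(mktemp -d)
mkdir -p "$root/pics/cats/png" "$root/pics/cats/jpg" "$root/pics/dogs/png"

# Sorted directory list: a directory is a leaf when the next entry does not
# extend it with a "/", which is what the awk filter checks.
leaves=$(cd "$root" &&
  find . -type d | sort |
  awk '$0 !~ last "/" {print last} {last=$0} END {print last}' |
  grep '/png$')
echo "$leaves"
```

Only ./pics/cats/png and ./pics/dogs/png survive; ./pics/cats/jpg is a leaf too but is dropped by the grep, and non-leaf directories never reach it.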
How to rsync a large directory tree, but only leaf directories that match a regex?
1,346,614,573,000
I've got 2 very old backups of a friend's computer. They were simply copied into a folder each on an external hard drive. Both are about 300GB in size and the contents are very much alike but not identical, and the folder structure is different. I want to free that space and make one single backup out of the two. I think about 90% of the files are duplicates, but I don't want to miss the files that are not. So what I need is a program that compares the files in two directories with all their subdirectories, but ignoring these subdirectories. All files within Folder A should be compared with all files in Folder B. All exact duplicates in Folder B should be marked/moved(/deleted). I will handle the remains in Folder B manually. I've tried meld, and I've tried Gnome-Commander (I'm using Xubuntu with XFCE). I would enjoy a GUI solution, but I should be able to handle the terminal and scripts too. I thought it may be possible to build a file list for both sides and pipe these to some diff program, but how to do it exactly is out of my capabilities. Well, looking forward to your answers, turtle purple
If the aim is to preserve file content (avoid losing data), I would concentrate on file equality, not the naming of directories of files. Start with running this on each of the top-level folders, and save the output (it will run for a while!):

find FolderA -type f -print0 | xargs -0 cksum > FoldA.cksum
find FolderB -type f -print0 | xargs -0 cksum > FoldB.cksum

Sort the two outputs together, which brings any identical file contents together. Then start writing awk to group identical content based on the first two fields (checksum and size). (a) Any one-line group is a unique file to be kept. (b) Any larger group is a list of identical files. May as well keep the top one, and write the other names to a list for deletion (these may be duplicates between A and B, or within A, or within B, or both). All duplicate files will now only be in your FolderA name, as will be about half of the unique files. What do you do with files where the selected copy is from FolderB (assuming you need to merge the remnants)? If their pathname (from after FolderB down to the lowest directory) exists in FolderA, that's probably where you need to mv them to, via another output list. If their pathname at FolderA does not exist, you would be guessing where they really belong. You could make the appropriate directory (with all its parents) and risk mislaying it, or eyeball it to see if it corresponds to anywhere else. Both those last steps need an extra check: the possibility that you have two (or more) non-identical files with the same name. In that case, you need to choose some resolution (like always keep the later version), or extend the filename to make it unique, or examine each case individually. My approach would be to work this incrementally: deal with the exact duplicate files first (90% in your estimation), then evaluate the discrepancies for any pattern you can use to reconcile the remainder.
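A toy run of the cksum-and-group idea; the awk keeps the first pathname in each (checksum, size) group and writes the rest to a deletion list. Folder and file names are invented:

```shell
tree=$(mktemp -d)
mkdir -p "$tree/FolderA" "$tree/FolderB"
echo same      > "$tree/FolderA/keep.txt"
echo same      > "$tree/FolderB/dup.txt"     # identical content, different name
echo different > "$tree/FolderB/unique.txt"

# cksum prints: checksum size pathname.  Sorting brings identical
# (checksum, size) pairs together; later members of a group are duplicates.
find "$tree" -type f -print0 | xargs -0 cksum | sort |
  awk '{key = $1 FS $2}
       key == prev {print $3}   # not the first of its group: delete list
       {prev = key}' > "$tree/to_delete.txt"

cat "$tree/to_delete.txt"
```

Note that awk's $3 assumes pathnames without whitespace; real archives with spaces in names need null-separated handling all the way through.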
Compare large directories recursively - but ignoring sub-directories - compare two backups - with gui
1,346,614,573,000
If I have a program that copies and modifies scripts or code-files as part of its operation, where should these other "default" or "template" scripts be installed?
/usr/local/share, /opt, or the user's directory. /usr/local follows the same structure as /usr; however, it is for you to put your own stuff into. /usr is for the system installer to manage. Don't get in its way. /usr/local will not be touched by built-in tools, so it can be backed up and managed more easily. /opt is also (like /usr/local) for your own use. However, it follows a different pattern: one directory per package, everything in one directory. You can also put it in your own directory. This needs no special permission, but it will be harder for other users to know it is there.
Where should I install template scripts? [closed]
1,346,614,573,000
Is there a Linux utility for creating templates for directory structures, outside of the scripts packaged with a DE like Gnome or LXDE? (I run openbox and would like to avoid a DE at all costs.) There is this post talking about template files, which I will absolutely use, and there is the template toolkit, but that's more for documents and dynamic web content (although it can be easily extended). I'm looking for something to encompass whole directory structures as well. Say I have a C template directory structure like so:

prj_?\
    Doxyfile
    Makefile
    README
    bin\
    data\
    doc\
    include\
    src

Or a nodejs template structure like:

index.html
js/
    main.js
    models/
    views/
    collections/
    templates/
    libs/
        backbone/
        underscore/
        ...
css/
...

Surely there must be something available, given that folders like /skel exist... or is that yet another manually scripted section of Linux? If this doesn't yet exist, I would love to contribute to the community by writing and maintaining a tool of this nature, or at the very least writing a vim plugin to handle such functionality (spoiler alert: doesn't exist), but that would be something to discuss on the vim exchange.
The mtree(1) utility is standard on BSD systems, and I'm pretty sure it ought to be available for Linux as well somewhere. It reads a directory hierarchy specification and compares it with what's found on disk, optionally deleting files (or updating their permissions or ownership) and creating missing directories as needed. Its specification does not quite look like what you have. The specification for an /etc directory, as found on an OpenBSD base system, would look like

/set type=dir uname=root gname=wheel mode=0755
.
    etc
        X11
            app-defaults
            ..
            twm
            ..
            xenodm
                pixmaps
                ..
            ..
            xinit
            ..
            xsm
            ..
        ..
        fonts
            conf.avail
            ..
            conf.d
            ..
        ..
    ..

That would be the tool for creating empty directories (mtree -d -u -f spec_file or something similar). If you want to create something like a populated directory hierarchy from some skeleton directory, I would use a tar archive of the skeleton directory and just unpack that at the destination. An existing skeleton directory can be copied with pax easily:

( cd -- "$skel_dir" && pax -rw -p p . "$dest_dir" )
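Where pax is not installed, plain tar can do the same skeleton copy; the skeleton layout here is invented:

```shell
skel=$(mktemp -d)   # the template directory
dest=$(mktemp -d)   # where a new project should appear
mkdir -p "$skel/src" "$skel/doc" "$skel/bin"
touch "$skel/Makefile" "$skel/README"
chmod 755 "$skel/bin"

# Stream the skeleton through a pipe; -p preserves permissions on extract.
(cd "$skel" && tar cf - .) | (cd "$dest" && tar xpf -)
ls "$dest"
```

This copies the whole tree, dotfiles included, without needing the destination to be empty first.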
Tool for creating and calling folder structure templates?
1,346,614,573,000
When browsing to trash:/// in Nautilus, I see a long list of files and directories that I recognize and remember placing in the trash. However, attempting to delete them from Nautilus either results in a Preparing message indefinitely, or an error message Error while deleting. You do not have sufficient permissions to delete the file "_____". I've already emptied ~/.local/share/Trash/files as my regular user, and that directory does not exist as the root user. I've downloaded the trash tool from the AUR to confirm my findings: running trash -l or trash -e as my regular user and root both confirm that the trash can is empty. I can clearly see that it is not empty though. I'm able to browse through the directories using nautilus and open these files. How can I locate these files in order to permanently delete them?
This problem was actually being caused by having a .Trash-1000 folder on the external drive itself, with all the contents I was seeing in Nautilus within it. In order to delete these files, I had to remount that drive as writable: sudo mount -o remount,rw /partition/identifier /mount/point After this point, I was able to rm -rf .Trash-1000/ on the drive, and the files were no longer visible as part of the Trash within Nautilus.
Nautilus shows files in Trash, can't be located on cli
1,346,614,573,000
My main directory is /home/hts/.hts/tvheadend/input/dvb/networks/1d38df81855dee2d39e692ecc4caf05c/muxes. In there are many more directories with randomly generated names. Example:

/027941cc4936a3a3515c78487abc5445/
/4ab4097f4089f9e6d3c062a96f027707/
/8224af212d24d291570864021d9107a3/
/bffd49d7d6af0f6405b1dba81df70d89/

In each of those folders there is one file (named "config") and one subfolder (named "services"). In the subfolder "services" there are files with randomly generated names. Example:

/home/hts/.hts/tvheadend/input/dvb/networks/1d38df81855dee2d39e692ecc4caf05c/muxes/027941cc4936a3a3515c78487abc5445/services/02f1f0807a9228c6543425c4f47312e0

I want a simple script that enters every single folder under "muxes", and for each of those folders enters the subfolder "services" and replaces the term "enabled": true, with "enabled": false, in every file of that folder.
As has been posted in the comments, there are many options;

#!/bin/bash
find /home/hts/.hts/tvheadend/input/dvb/networks/1d38df81855dee2d39e692ecc4caf05c/muxes -mindepth 1 -maxdepth 1 -type d | while read ad; do
    find "$ad/services/" -type f -exec sed -i 's/"enabled": true,/"enabled": false,/' '{}' \;
done

or

#!/bin/bash
dirarr=($(find /home/hts/.hts/tvheadend/input/dvb/networks/1d38df81855dee2d39e692ecc4caf05c/muxes -mindepth 1 -maxdepth 1 -type d))
for dir in "${dirarr[@]}"; do
    for editfile in "$dir"/services/*; do
        sed -i 's/"enabled": true,/"enabled": false,/' "$editfile"
    done
done
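The find/sed approach can be rehearsed on a disposable imitation of the muxes tree (hashes shortened and file contents invented):

```shell
base=$(mktemp -d)
mkdir -p "$base/muxes/027941cc/services" "$base/muxes/4ab4097f/services"
echo '"enabled": true,' > "$base/muxes/027941cc/services/svc1"
echo '"enabled": true,' > "$base/muxes/4ab4097f/services/svc2"

# Touch only files exactly at muxes/<mux>/services/<file>; the per-mux
# "config" files, if any, sit at depth 2 and are never matched.
find "$base/muxes" -mindepth 3 -maxdepth 3 -type f -path '*/services/*' \
    -exec sed -i 's/"enabled": true,/"enabled": false,/' {} +

cat "$base/muxes/027941cc/services/svc1"
```

Note that sed -i with no suffix argument is a GNU sed feature; BSD sed needs -i ''.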
Execute command in dynamic directories via shell script
1,346,614,573,000
I have certain directory paths like /backup/data/cm/au and /backup/data/cm/ds. I have to recreate the paths /cm/au and /cm/ds under /tmp with the same owner and group as the originals.
Your question is a little vague, but it looks like you want something like

# cd /backup/data
# cp -a cm /tmp

If you don't want to copy all of the cm directory, do

# cd /backup/data
# cp -a cm/au cm/ds /tmp

The -a option tells cp to preserve as much meta-data as possible.  For it to preserve ownership (rather than make the copies owned by you), you must run this as root (e.g., run it under sudo). (-a also implies -R; i.e., recursive copy.)
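cp -a's metadata preservation is easy to verify for permission bits without root (full ownership preservation does need root); the paths are a scratch imitation of the question's layout:

```shell
src=$(mktemp -d)
dst=$(mktemp -d)
mkdir -p "$src/cm/au" "$src/cm/ds"
chmod 750 "$src/cm/au"          # a non-default mode to check preservation

cp -a "$src/cm" "$dst/"
stat -c '%a' "$dst/cm/au"       # GNU stat; prints the octal mode
```

Running the same copy under sudo additionally carries over the owner and group of each directory.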
Clone directories [closed]
1,346,614,573,000
When installing a library with configure, make and make install, does make install copy <libraryname>.pc to some place? If yes: where is it? If no: should we copy it somewhere rather than leave it where it is? I am asking because, on an old notebook of mine, I saw: add the directory containing libraryname.pc to the PKG_CONFIG_PATH environment variable, so that pkg-config --cflags libraryname and pkg-config --libs libraryname can find it. Will dpkg -l be able to track the installed library depending on whether its path was added to PKG_CONFIG_PATH? Once I copy <libraryname>.pc to some place, will dpkg -l be able to track the installed packages?
In general, ./configure && make && make install without any parameters sticks everything under /usr/local, which would place foo.pc in /usr/local/lib/pkgconfig/foo.pc. To make use of this, you'd need to do basically

PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:${PKG_CONFIG_PATH} pkg-config --cflags foo

or compile in this manner:

./configure --prefix=/usr    # places built binaries under /usr instead of /usr/local
make
make install

Now the foo.pc file will be where it's expected. Note: this places stuff in system folders, so realize you can overwrite important things if you're not careful. And to answer the dpkg question: no. Package managers can only track files installed by them. Now, if you're feeling adventurous, you can write up the files needed to wrap the standard ./configure && make && make install build process to produce a dpkg-installable deb package, which would be tracked :) It's been some time since I last made a Debian package, a few years, and to be honest I rather hated the process, so don't expect info from me on that front. I've since switched to Arch Linux, and writing PKGBUILDs (scripts that build Arch packages using makepkg) is quite a simple task :)
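For reference, this is roughly what a minimal .pc file dropped into $prefix/lib/pkgconfig looks like; every name and path below is illustrative, not taken from any particular library:

```
prefix=/usr/local
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir=${prefix}/include

Name: foo
Description: Hypothetical example library
Version: 1.0.0
Libs: -L${libdir} -lfoo
Cflags: -I${includedir}
```

With the directory containing foo.pc on PKG_CONFIG_PATH, pkg-config --cflags foo prints the -I flag and pkg-config --libs foo prints the -L/-l flags derived from these variables.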
Where is <libraryname>.pc?
1,346,614,573,000
I'm trying to write a command that uses mv to move files two directory levels up. So if the folder order goes like this: ~/Test/2020-08-01/001/002/file.txt, I want to move file.txt from directory 002 to directory 2020-08-01. When I type out this command from my home directory mv ~/Test/2020-08-01/001/002/* ../.. I get an error that says: mv: cannot move '/home/user/Test/2020-08-01/001/002/file1.txt' to '../../file1.txt': Permission denied I don't understand why I'm getting a "Permission denied" error and I don't think it's sudo-related. I also don't want to try sudo in case I mess something up. If anyone has any insight please let me know. Thank you.
The issue is that ../.. in your command is relative to your current directory. If your current directory is /home/user, then ../.. refers to the root of the directory hierarchy (where your non-privileged user can't write). To move file.txt from ~/Test/2020-08-01/001/002 to ~/Test/2020-08-01, use

mv ~/Test/2020-08-01/001/002/file.txt ~/Test/2020-08-01

If you want to use relative directory paths, then ensure that you are first in the correct directory, then perform the move:

cd ~/Test/2020-08-01/001/002
mv file.txt ../..

or,

cd ~/Test/2020-08-01
mv 001/002/file.txt .

... or some combination thereof.
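The fix is easy to convince yourself of in a scratch tree (same layout as the question, different root):

```shell
top=$(mktemp -d)
mkdir -p "$top/Test/2020-08-01/001/002"
touch "$top/Test/2020-08-01/001/002/file.txt"

# Spell out both source and destination: the result no longer depends on
# whatever the current directory happens to be.
mv "$top/Test/2020-08-01/001/002/file.txt" "$top/Test/2020-08-01"
ls "$top/Test/2020-08-01"
```

The file now sits two levels up, and the command works identically from any working directory.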
Moving folder contents up 2 directory levels
1,346,614,573,000
I am trying to make a photo organizer with a zsh shell script, but I am having trouble creating subdirectories within each main directory (based on date). Currently the script starts from a folder I created and gets one argument (the file it needs to edit, hence the first cd $1). Secondly, I do some name changing, which is irrelevant for my question. Next I create a directory for each date and move the photo to the correct directory. The issue is, I want to loop through each date folder and make 2 new subdirectories (jpg and raw). But when I run the code I get an error that there is no such file or directory. Here is my current script:

#!/bin/zsh
cd $1
for i in *.JPG; do
    mv $i $(basename $i .JPG).jpg
done
for i in *; do
    d=$(date -r "$i" +%d-%m-%Y)
    mkdir -p "$d"
    mv -- "$i" "$d/"
done
for d in *; do
    cd $d
    for i in *.jpg; do
        mkdir -p "jpg"
        mv -- "$i" "jpg"
    done
    for i in *.NEF; do
        mkdir -p "raw"
        mv -- "$i" "raw"
    done
done

If anyone knows where I made a mistake, that would be really helpful, since I have no clue what goes wrong and there is no debugger in nano as far as I know.

Error:

➜ files sh test2.sh sdcard1
test2.sh: line 16: cd: 05-03-2022: No such file or directory
mv: rename *.jpg to jpg/*.jpg: No such file or directory
mv: rename *.NEF to raw/*.NEF: No such file or directory
test2.sh: line 16: cd: 23-10-2021: No such file or directory
mv: rename *.jpg to jpg/*.jpg: No such file or directory
mv: rename *.NEF to raw/*.NEF: No such file or directory
for d in *; do
    cd $d

As far as I can tell this is the error. You've created a loop over your directories d. You cd into $d, but your loop never cd's back out:

    done
    cd ..
done

So on the second iteration with the second $d, you're still in the first subdir, which of course does not contain the second $d as a subsubdir. Incidentally, you're ordering by increasing day, %d-%m-%Y. You're free to do that of course, but you might find ordering by year organises the dirs more tidily: %Y-%m-%d.
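An alternative to adding cd .. is to run each iteration's cd inside a subshell, so the working directory resets automatically when the subshell ends; the sample dates and files below are made up:

```shell
photos=$(mktemp -d)
mkdir -p "$photos/05-03-2022" "$photos/23-10-2021"
touch "$photos/05-03-2022/a.jpg" "$photos/23-10-2021/b.NEF"

for d in "$photos"/*/; do
  (                                   # subshell: its cd dies with it
    cd "$d" || exit 1
    for f in *.jpg; do
      if [ -e "$f" ]; then mkdir -p jpg; mv -- "$f" jpg/; fi
    done
    for f in *.NEF; do
      if [ -e "$f" ]; then mkdir -p raw; mv -- "$f" raw/; fi
    done
  )
done
```

The [ -e "$f" ] guard also covers the case where a date folder has no .jpg (or no .NEF) files, in which case the unexpanded glob pattern would otherwise be passed to mv.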
How can i create sub directories within a directory? [closed]
1,346,614,573,000
I am using Debian 11. I would like to move the /var and /home directories to an NVMe partition (nvme1n1p1) I have attached to the server. EDIT: I am able to move the home folder and bind it to the partition, but it seems I have not done something correctly, because I see the read/write speed is not high. How do I do this correctly? Please note, I am not a Linux expert. This is what I found online:

cd /
sudo fdisk /dev/nvme1n1
sudo mkfs.ext4 /dev/nvme1n1p1
sudo mount /dev/nvme1n1p1 /data/
sudo mkdir /data/var/
sudo mkdir /data/home/
sudo rm -rf /data/lost+found
sudo cp -rp /home/* /data/home/
sudo cp -rp /var/* /data/var/
sudo mv /home /home.orig
sudo mv /var /var.orig
sudo mkdir /home
sudo mkdir /var
sudo mount --bind /data/home /home/
sudo mount --bind /data/var /var/
sudo umount /dev/nvme1n1p1
sudo mount /dev/nvme1n1p1 /data/
sudo nano /etc/fstab

/data/home /home none rw,bind 0 0
/data/var /var none rw,bind 0 0

sudo mount -a
I was able to fix it. I was not mounting the NVMe at startup. Here's the revised script that I used; posting in case anyone needs it.

lsblk
sudo -s
cd /
sudo fdisk /dev/nvme1n1
sudo mkfs.ext4 /dev/nvme1n1p1
sudo mount /dev/nvme1n1p1 /mnt/
sudo mkdir /mnt/var/
sudo mkdir /mnt/home/
sudo rm -rf /mnt/lost+found
sudo cp -rp /home/* /mnt/home/
sudo cp -rp /var/* /mnt/var/
sudo mv /home /home.orig
sudo mv /var /var.orig
sudo mkdir /home
sudo mkdir /var
sudo mount --bind /mnt/home /home/
sudo mount --bind /mnt/var /var/
sudo blkid /dev/nvme1n1p1    (copy the UUID and use it in the fstab entry)
sudo umount /dev/nvme1n1p1
sudo mount /dev/nvme1n1p1 /mnt/
sudo nano /etc/fstab

Add the following lines to the file:

UUID=aa6155a0-2a66-4c3a-977b-4976d47c5eb3 /mnt ext4 defaults 0 2
/mnt/home /home none rw,bind 0 0
/mnt/var /var none rw,bind 0 0

sudo mount -a

Explanation: we are creating 2 folders in the /mnt directory, then mounting the disk at nvme1n1, copying all items to the 2 folders in /mnt, renaming the original home & var folders, creating new /home & /var folders at root, binding the newly created root folders to the folders in /mnt, obtaining the UUID for the partition, and adding the partition and the mount points to fstab. DONE!
Move /var and /home directory on a separate NVME partition
1,346,614,573,000
How can I inspect two directory trees to determine if they are identical? Are there good free tools that allow tree merging? I am using Ubuntu Linux 20.04 LTS on an x86_64 architecture. I am interested in whether the contents of the files and directories are identical (in regard to source code or text).
A simple Linux diff command will be sufficient. Suppose we create a structure of a folder and sub-folders: two sub-folders, folder1 and folder2, each with two text files (file1.txt and file2.txt) to be compared, where file1.txt is identical in folder1 and folder2. The following compares both folders with all their files:

diff ./folder1 ./folder2

As expected, the output reports only the files that differ. (The original answer illustrated the folder layout and the diff output with screenshots.)
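To compare whole trees rather than a single directory level, add -r (recursive) and optionally -q (report names only); a fabricated pair of trees shows the behaviour:

```shell
a=$(mktemp -d)
b=$(mktemp -d)
mkdir -p "$a/sub" "$b/sub"
echo hello     > "$a/sub/file1.txt"
echo hello     > "$b/sub/file1.txt"   # identical on both sides
echo only-in-b > "$b/extra.txt"       # present on one side only

# diff exits non-zero when the trees differ, so guard it in scripts.
diff -rq "$a" "$b" || true
```

Identical subtrees produce no output and exit status 0, which also makes diff -rq usable as a yes/no "are these trees the same" test in a script.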
Inspecting two directory trees
1,346,614,573,000
I would like to use dmenu to open files. I figured out how to get this to work if the file is in my home directory: #!/bin/sh FILE="workbook.pdf" zathura "$FILE" However, I cannot get it to work with files from my sd card. I tried: #!/bin/sh FILE="mnt/School/Latin/Lingua\ Latina/Workbook/workbook.pdf" zathura "$FILE" This opens Zathura, but not the pdf. I tried other pdf files, too, but they would not open. I tried having the backslash (to represent a space) and not having it. I tried zathura "mnt/School/Latin/Lingua\ Latina/Workbook/workbook.pdf" with and without a "$" after the first quotation mark. In short, I've tried just about everything I can think of. I can't find any examples of people using dmenu to open files, let alone open them from another partition. Any help would be appreciated (including finding a different way to quickly open these files). Thank you so much!
You are missing one slash at the beginning, the path should start with: /mnt/... instead of mnt/... It is formally called the root directory and with it (/) every full path begins. More info here for example.
Opening file from sd card using dmenu
1,346,614,573,000
I have a scenario where I want to create a main folder, a sub folder, a child folder, and a file in that child folder. My text file contains the format below:

ABC|A1|B1|A.txt
ABC|BB|CD|AF|JIDS.txt
ABC|BB|CDE|AFD|KL|JI.pdf

This file needs to be processed in a for loop. Let's take the example of the first record, ABC|A1|B1|A.txt:

ABC   -> Main folder
A1    -> Sub folder
B1    -> Child folder
A.txt -> Is a file

This format of folders should be created:

ABC {Main folder}
 |=> A1 {Subfolder}
      |=> B1 {child folder}
           |=> A.txt {file inside child folder}
For each line, change all |-characters to / to form a pathname, create the directory, create the file at the end of the pathname.

tr '|' '/' <file | while IFS= read -r pathname; do
    mkdir -p "$(dirname "$pathname")" && touch "$pathname"
done

Result:

$ tree ABC
ABC
|-- A1
|   `-- B1
|       `-- A.txt
`-- BB
    |-- CD
    |   `-- AF
    |       `-- JIDS.txt
    `-- CDE
        `-- AFD
            `-- KL
                `-- JI.pdf

8 directories, 3 files

Explanation: The tr command will replace each |-character with /, and produce something like the following:

ABC/A1/B1/A.txt
ABC/BB/CD/AF/JIDS.txt
ABC/BB/CDE/AFD/KL/JI.pdf

The above output from tr is read, line by line, by the while loop. In each iteration $pathname will be a line from this output. In each iteration, dirname "$pathname" returns the directory part of $pathname (e.g. ABC/A1/B1 for the first iteration). This is used in a call to mkdir -p, which creates the directories that might be missing. The -p option to mkdir creates intermediate directories. touch is used to actually create the file referenced by $pathname in the directory that was just created by mkdir. The && between mkdir and touch ensures that we don't try to create a file in a directory that mkdir (for whatever reason) failed to create.
How do i create main folder subfolder and file name
1,346,614,573,000
I would like to standardize on a directory where normal users would put programs/installs which would be used by other users. What would be the best practices? For instance, I could create a /users/shared_binaries/ directory with fairly open permissions. Is there a convention/best practice for doing something like this? Assume I don't have any network drive to place them. Basically, I want a location to share binaries without needing to install in the users' home directory nor have higher access than standard user.
There's no standard location for this. The standard way to do things is that you have to be privileged to install a program in a place where other users would run it. It's your choice between a subtree of /usr/local like /usr/local/users/bin, a subdirectory of /home like /home/shared/bin, a subdirectory of /opt like /opt/users/bin, etc. Do use path ending in /bin. This makes it more evident that it's meant for executable program, and it lets you put other things in sibling directories, such as libraries, documentation and data files used by these programs.
Linux path for users' shared binaries [duplicate]
1,346,614,573,000
I have a directory structure produced during archiving. The archiving system can't be changed. Whenever a duplicate directory occurs, the archiver appends an integer suffix to the directory name. My goal is to collapse these so that there is only one directory per name (no suffixes) and sub-directories are merged.

# Current Structure
/
+-- Folder_Name/
    +-- 20170913/
        +-- File One
+-- Folder_Name_1/
    +-- 20170913/
        +-- File One
        +-- File Two
+-- Folder_Name_2/
    +-- 20170915/
        +-- File Three
+-- Folder_Name_3/
    +-- 20170918/
        +-- File Four

# What I would like to achieve
/
+-- Folder_Name/
    +-- 20170913/
        +-- File One
        +-- File Two
    +-- 20170915/
        +-- File Three
    +-- 20170918/
        +-- File Four

Lots of folders are created during archiving (it is an ultrasound machine saving a monthly backup of all images) and there are often duplicates produced. At the same time I am using ImageMagick to convert the bitmap files to png in order to keep file sizes down. I have tried to do this with rsync using the --remove-source-files option, but I haven't been able to find a way to match these directories for rsync (I don't think it supports regex). I then tried to use find to pipe the output into rsync but can't work out a) the regex that would achieve that; and b) how to provide the suffixed and un-suffixed directory names. I am thinking I need to use variables for this, but am not experienced with shell scripting. Am I taking the right approach here? If so, could someone point me to a resource or tutorial that can walk me through setting this up? If not, then what tools should I be looking at using?
I'm pretty sure a simple

cp -R Folder_Name_*/* Folder_Name

will do all the merge for you. After you have checked the result, you can remove the source directories. If you have multiple folders to merge, build a for loop around it:

for folder in *_1; do
    folder="${folder%_*}"
    cp -R "$folder"_*/* "$folder"
    # rm -rf "$folder"_*
done

(rm commented out until verified)
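Rehearsing the merge on a throwaway copy of the structure (file names shortened) before running it on the archive:

```shell
arch=$(mktemp -d)
mkdir -p "$arch/Folder_Name/20170913" \
         "$arch/Folder_Name_1/20170913" \
         "$arch/Folder_Name_2/20170915"
touch "$arch/Folder_Name/20170913/File_One" \
      "$arch/Folder_Name_1/20170913/File_Two" \
      "$arch/Folder_Name_2/20170915/File_Three"

(
  cd "$arch"
  cp -R Folder_Name_*/* Folder_Name/   # existing date dirs are merged into
  rm -rf Folder_Name_*                 # only after checking the copy!
)
find "$arch/Folder_Name" -type f
```

The glob Folder_Name_* matches only the suffixed directories (the underscore is required), so the destination Folder_Name is never deleted.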
Merging Directories Based on their Suffix
1,352,400,120,000
I'd like to do some general disk io monitoring on a debian linux server. What are the tools I should know about that monitor disk io so I can see if a disk's performance is maxed out or spikes at certain time throughout the day?
For disk I/O trending there are a few options. My personal favorite is the sar command from sysstat. By default, it gives output like this:

09:25:01 AM  CPU  %user  %nice  %system  %iowait  %steal  %idle
09:35:01 AM  all   0.11   0.00     0.01     0.00    0.00  99.88
09:45:01 AM  all   0.12   0.00     0.01     0.00    0.00  99.86
09:55:01 AM  all   0.09   0.00     0.01     0.00    0.00  99.90
10:05:01 AM  all   0.10   0.00     0.01     0.02    0.01  99.86
Average:     all   0.19   0.00     0.02     0.00    0.01  99.78

The %iowait is the time spent waiting on I/O. Using the Debian package, you must enable the stat collector via the /etc/default/sysstat config file after package installation. To see current utilization broken out by device, you can use the iostat command, also from the sysstat package:

$ iostat -x 1
Linux 3.5.2-x86_64-linode26 (linode)  11/08/2012  _x86_64_  (4 CPU)

avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
           0.84   0.00     0.08     1.22    0.07  97.80

Device:  rrqm/s  wrqm/s   r/s   w/s  rsec/s  wsec/s  avgrq-sz  avgqu-sz   await  svctm  %util
xvda       0.09    1.02  2.58  0.49  112.79   12.11     40.74      0.15   48.56   3.88   1.19
xvdb       1.39    0.43  4.03  1.82   43.33   18.43     10.56      0.66  112.73   1.93   1.13

Some other options that can show disk usage in trending graphs are munin and cacti.
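If sysstat isn't installed, the kernel's raw counters in /proc/diskstats are always available on Linux; field 3 is the device name, and fields 6 and 10 are sectors read and written since boot (per the kernel's iostats documentation). A quick awk view:

```shell
# Print per-device cumulative sector counts.  These are monotonically
# increasing counters, so a monitoring tool samples them twice and divides
# the difference by the sampling interval to get a rate.
awk '{printf "%-12s read_sectors=%-12s written_sectors=%s\n", $3, $6, $10}' \
    /proc/diskstats
```

This is essentially the data source that iostat, sar, munin, and cacti all read under the hood.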
How can I monitor disk io?
1,352,400,120,000
Is there a command that will list all partitions along with their labels? sudo fdisk -l and sudo parted -l don't show labels by default. EDIT: (as per comment below) I'm talking about ext2 labels - those that you can set in gparted upon partitioning. EDIT2: The intent is to list unmounted partitions (so I know which one to mount).
With udev, You can use ls -l /dev/disk/by-label to show the symlinks by label to at least some partition device nodes. Not sure what the logic of inclusion is, possibly the existence of a label.
List partition labels from the command line
1,352,400,120,000
I sometimes need to plug a disk into a disk bay. At other times, I have the very weird setup of connecting an SSD using a SATA-eSATA cable on my laptop while pulling power from a desktop. How can I safely remove the SATA disk from the system? This Phoronix forum thread has some suggestions:

justsumdood wrote:

An(noymous)droid wrote: What then do you do on the software side before unplugging? Is it a simple "umount /dev/sd"[drive letter]?

after unmounting the device, to "power off" (or sleep) the unit:

hdparm -Y /dev/sdX

(where X represents the device you wish to power off. for example: /dev/sdb) this will power the drive down allowing for its removal w/o risk of voltage surge.

Does this mean that the disk caches are properly flushed and powered off thereafter? Another suggestion from the same thread:

chithanh wrote: All SATA and eSATA hardware is physically able to be hotplugged (ie. not damaged if you insert/pull the plug). How the chipset and driver handles this is another question. Some driver/chipset combinations do not properly handle hotplugging and need a warmplug command such as the following one:

echo 0 - 0 > /sys/class/scsi_host/hostX/scan

Replace X with the appropriate number for your SATA/eSATA port.

I doubt whether this is the correct way to do so, but I cannot find any proof against it either. So, what is the correct way to remove an attached disk from a system? Assume that I have already unmounted every partition on the disk and run sync. Please point to some official documentation if possible; I could not find anything in the Linux documentation tree, nor the Linux ATA wiki.
Unmount any filesystems on the disk. (umount ...) Deactivate any LVM groups. (vgchange -an) Make sure nothing is using the disk for anything. You could unplug the HDD here, but it is recommended to also do the last two steps: Spin the HDD down. (irrelevant for SSDs) (sudo hdparm -Y /dev/(whatever)) Tell the system that we are unplugging the HDD, so it can prepare itself. (echo 1 | sudo tee /sys/block/(whatever)/device/delete) If you want to be extra cautious, do echo 1 | sudo tee /sys/block/(whatever)/device/delete first. That'll unregister the device from the kernel, so you know nothing's using it when you unplug it. When I do that with a drive in an eSATA enclosure, I can hear the drive's heads park themselves, so the kernel apparently tells the drive to prepare for power-down. If you're using an AHCI controller, it should cope with devices being unplugged. If you're using some other sort of SATA controller, the driver might be confused by hotplugging. In my experience, SATA hotplugging (with AHCI) works pretty well in Linux. I've unplugged an optical drive, plugged in a hard drive, scanned it for errors, made a filesystem and copied data to it, unmounted and unplugged it, plugged in a different DVD drive, and burned a disc, all with the machine up and running.
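The sequence above can be sketched as a small shell function. This is only a definition, not something that runs here: it needs root and a real device, and the bare device name ("sdb") is an example you must adjust:

```shell
# safe_remove sdb  -- run as root, after unmounting everything on the disk
safe_remove() {
    dev=$1                                    # bare name, e.g. "sdb"
    sync                                      # flush any pending writes
    hdparm -Y "/dev/$dev"                     # spin the drive down (skip on SSDs)
    echo 1 > "/sys/block/$dev/device/delete"  # unregister it from the kernel
}
```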
How can I safely remove a SATA disk from a running system?
1,352,400,120,000
I used to think that file changes are saved directly into the disk, that is, as soon as I close the file and decide to click/select save. However, in a recent conversation, a friend of mine told me that is not usually true; the OS (specifically we were talking about Linux systems) keeps the changes in memory and it has a daemon that actually writes the content from memory to the disk. He even gave the example of external flash drives: these are mounted into the system (copied into memory) and sometimes data loss happens because the daemon did not yet save the contents into the flash memory; that is why we unmount flash drives. I have no knowledge about operating systems functioning, and so I have absolutely no idea whether this is true and in which circumstances. My main question is: does this happen like described in Linux/Unix systems (and maybe other OSes)? For instance, does this mean that if I turn off the computer immediately after I edit and save a file, my changes will be most likely lost? Perhaps it depends on the disk type -- traditional hard drives vs. solid-state disks? The question refers specifically to filesystems that have a disk to store the information, even though any clarification or comparison is well received.
if I turn off the computer immediately after I edit and save a file, my changes will be most likely lost? They might be. I wouldn't say "most likely", but the likelihood depends on a lot of things. An easy way to increase performance of file writes is for the OS to just cache the data, tell (lie to) the application the write went through, and then actually do the write later. This is especially useful if there's other disk activity going on at the same time: the OS can prioritize reads and do the writes later. It can also remove the need for an actual write completely, e.g., in the case where a temporary file is removed quickly afterwards. The caching issue is more pronounced if the storage is slow. Copying files from a fast SSD to a slow USB stick will probably involve a lot of write caching, since the USB stick just can't keep up. But your cp command returns faster, so you can carry on working, possibly even editing the files that were just copied. Of course, caching like that has the downside you note: some data might be lost before it's actually saved. The user will be miffed if their editor told them the write was successful, but the file wasn't actually on the disk. Which is why there's the fsync() system call, which is supposed to return only after the file has actually hit the disk. Your editor can use that to make sure the data is fine before reporting to the user that the write succeeded. I said "is supposed to", since the drive itself might tell the same lies to the OS and say that the write is complete, while the file really only exists in a volatile write cache within the drive. Depending on the drive, there might be no way around that. In addition to fsync(), there are also the sync() and syncfs() system calls that ask the system to make sure all system-wide writes or all writes on a particular filesystem have hit the disk. The utility sync can be used to call those.
Then there's also the O_DIRECT flag to open(), which is supposed to "try to minimize cache effects of the I/O to and from this file." Removing caching reduces performance, so that's mostly used by applications (databases) that do their own caching and want to be in control of it. (O_DIRECT isn't without its issues, the comments about it in the man page are somewhat amusing.) What happens on a power-out also depends on the filesystem. It's not just the file data that you should be concerned about, but the filesystem metadata. Having the file data on disk isn't much use if you can't find it. Just extending a file to a larger size will require allocating new data blocks, and they need to be marked somewhere. How a filesystem deals with metadata changes and the ordering between metadata and data writes varies a lot. E.g., with ext4, if you set the mount flag data=journal, then all writes – even data writes – go through the journal and should be rather safe. That also means they get written twice, so performance goes down. The default options try to order the writes so that the data is on the disk before the metadata is updated. Other options or other filesystem may be better or worse; I won't even try a comprehensive study. In practice, on a lightly loaded system, the file should hit the disk within a few seconds. If you're dealing with removable storage, unmount the filesystem before pulling the media to make sure the data is actually sent to the drive, and there's no further activity. (Or have your GUI environment do that for you.)
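As a small illustration of forcing data to stable storage from the shell: GNU dd's conv=fsync calls fsync() on the output file before dd exits, so when the command returns, the data has been handed to the drive rather than merely parked in the page cache (the file path is just an example):

```shell
# Write a file and fsync() it in one step.
printf 'saved for good' | dd of=/tmp/edit-demo.txt conv=fsync 2>/dev/null
cat /tmp/edit-demo.txt   # -> saved for good
```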
Are file edits in Linux directly saved into disk?
1,352,400,120,000
I'm using Ubuntu 12.04, and when I right-click on my flash drive icon (in the Unity left bar) I get two options that have me confused: eject and safely remove. The closest I came to an answer was this forum thread, which concludes that (for a flash drive) they are both equal and also equivalent to using the umount command. However, this last assertion seems to be false. If I use umount from the console to unmount my flash drive, and then I use the command lsblk, I still see my device (with nothing under MOUNTPOINT, of course). On the other hand, if I eject or safely remove my flash drive, lsblk does not list it anymore. So, my question is, what would be the console command/commands that would really reproduce the behaviour of eject and safely remove?
If you are using systemd then use the udisksctl utility with the power-off option: power-off Arranges for the drive to be safely removed and powered off. On the OS side this includes ensuring that no process is using the drive, then requesting that in-flight buffers and caches are committed to stable storage. I would recommend first unmounting all filesystems on that USB drive. This can also be done with udisksctl, so the steps would be: udisksctl unmount -b /dev/sda1 udisksctl power-off -b /dev/sda If you are not using systemd then the good old udisks should work: udisks --unmount /dev/sda1 udisks --detach /dev/sda
Eject / safely remove vs umount
1,352,400,120,000
Can someone please explain to me, what the difference is between creating mdadm array using partitions or the whole disks directly? Supposing I intend to use the whole drives. Imagine a RAID6 created in two ways, either: mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 or: mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd What is the difference, and possible problems arising from any of the two variants? For example, I mean the reliability or manageability or recovery operations on such arrays, etc.
The most important difference is that it increases your flexibility for disk replacement. This is detailed below, along with a number of other recommendations. One should consider using a partition instead of the entire disk. This falls under the general recommendations for setting up an array and may certainly spare you some headaches in the future when further disk replacements become necessary. The most important argument is: Disks from different manufacturers (or even different models of the "same" capacity from the same manufacturer) don't necessarily have exactly the same disk size, and even the smallest size difference will prevent you from replacing a failed disk with a newer one if the second is smaller than the first. Partitioning allows you to work around this; Side note on why to use disks from different manufacturers: Disks will fail; this is not a matter of "if" but "when". Disks of the same manufacturer and the same model have similar properties, and so higher chances of failing together under the same conditions and time of use. The suggestion is therefore to use disks from different manufacturers, different models and, in particular, ones that do not belong to the same batch (consider buying from different stores if you are buying disks of the same manufacturer and model). It is not uncommon for a second disk failure to happen during a restore after a disk replacement when disks of the same batch are used. You certainly don't want this to happen to you. So the recommendations: 1) Partition the disks that will be used with a slightly smaller capacity than the overall disk space (e.g., I have a RAID5 array of 2TB disks and I intentionally partitioned them wasting about 100MB on each).
Then, use /dev/sd?1 of each one for composing the array - This will add a safety margin in case a new replacement disk has less space than the original ones used to assemble the array when it was created; 2) Use disks from different manufacturers; 3) Use disks of different models if different manufacturers are not an option for you; 4) Use disks from different batches; 5) Proactively replace disks before they fail, and not all at the same time. This may be a little paranoid and really depends on the criticality of the data you have. I keep disks that differ in age from each other by about six months; 6) Make regular backups (always, regardless of whether you use an array or not). RAID doesn't serve the same purpose as backups. Arrays give you high availability; backups allow you to restore lost files (including ones that get accidentally deleted or are damaged by viruses, two examples of things that using arrays will not protect you from). OBS: Apart from all the non-negligible rationale above, there aren't many further technical differences between using /dev/sd? vs /dev/sd?#. Good luck
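Recommendation 1 can be sketched with parted like this. The device names are placeholders, and the commands destroy data on the named disks, so only a function is defined here, nothing runs:

```shell
# Leave ~100 MiB unused at the end of each member disk, then build the
# array from the partitions rather than the raw disks.
make_member() {
    parted --script "$1" \
        mklabel gpt \
        mkpart raid 1MiB -100MiB \
        set 1 raid on
}
# for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do make_member "$d"; done
# mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[abcd]1
```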
What's the difference between creating mdadm array using partitions or the whole disks directly
1,352,400,120,000
When monitoring disk IO, most of the IO is attributed to jbd2, while the original process that caused the high IO is attributed a much lower IO percentage. Why? Here's iotop's example output (other processes with IO<1% omitted):
jbd2 is a kernel thread that updates the filesystem journal. Tracing filesystem or disk activity with the process that caused it is difficult because the activities of many processes are combined together. For example, if two processes are reading from the same file at the same time, which process would the read be accounted against? If two processes write to the same directory and the directory is updated on disk only once (combining the two operations), which process would the write be accounted against? In your case, it appears that most of the traffic consists of updates to the journal. This is traced to the journal updater, but there's no tracing between journal updates and the process(es) that caused the write operation(s) that required this journal update.
Why is most of the disk IO attributed to jbd2 and not to the process that is actually using the IO?
1,352,400,120,000
I need to securely erase harddisks from time to time and have used a variety of tools to do this: cat /dev/zero > /dev/disk cat /dev/urandom > /dev/disk shred badblocks -w DBAN All of these have in common that they take ages to run. In one case cat /dev/urandom > /dev/disk killed the disk, apparently overheating it. Is there a "good enough" approach to ensure that any data on the disk is made unusable in a timely fashion? Overwriting superblocks and a couple of strategically important blocks or somesuch? The disks (both spinning and SSD) come from donated computers and will be used to install Linux desktops on them afterwards, handed out to people who can't afford to buy a computer, but need one. The disks of the donated computers will usually not have been encrypted. And sometimes donors don't even think of deleting files beforehand. Update: From the answers that have come in so far, it seems there is no cutting corners. My best bet is probably setting up a lab computer to erase multiple disks at once. One more reason to ask big companies for donations :-) Thanks everyone!
Overwriting the superblock or partition table just makes it inconvenient to reconstruct the data, which is obviously still there if you just do a hex dump. Hard disks have a built-in erasing feature: ATA Secure Erase, which you can activate using hdparm: Pick a password (any password): hdparm --user-master u --security-set-pass hunter1 /dev/sdX Initiate erasure: hdparm --user-master u --security-erase hunter1 /dev/sdX Since this is a built-in feature, it is unlikely that you'll find a faster method that actually offers real erasure. (It's up to you, though, to determine whether it meets your level of paranoia.) Alternatively, use the disk with full-disk encryption, then just throw away the key when you want to dispose of the data.
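Before step 1, it's worth confirming that the drive actually supports the feature and is not in the "frozen" state, since a frozen drive rejects the erase command; hdparm -I reports this in its Security section. A sketch (function only, since it needs root and a real disk):

```shell
# Print the drive's Security section; look for "supported" and
# "not frozen" before attempting Secure Erase.
check_erase_ready() {
    hdparm -I "$1" | grep -A8 '^Security:'
}
# check_erase_ready /dev/sdX
```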
How can I speed up secure erasing of a disk?
1,352,400,120,000
How do I get read and write IOPS separately in Linux, using the command line or in a programmatic way? I have installed the sysstat package. Please tell me how to calculate these separately using sysstat package commands. Or, is it possible to calculate them using the file system? e.g. /proc or /sys or /dev
iostat is part of the sysstat package, which is able to show overall iops if desired, or show them separated by reads/writes. Run iostat with the -d flag to only show the device information page, and -x for detailed information (separate read/write stats). You can specify the device you want information for by simply adding it afterwards on the command line. Try running iostat -dx and looking at the summary to get a feel for the output. You can also use iostat -dx 1 to show a continuously refreshing output, which is useful for troubleshooting or live monitoring. Using awk, field 4 will give you reads/second, while field 5 will give you writes/second. Reads/second only: iostat -dx <your disk name> | grep <your disk name> | awk '{ print $4; }' Writes/sec only: iostat -dx <your disk name> | grep <your disk name> | awk '{ print $5; }' Reads/sec and writes/sec separated with a slash: iostat -dx <your disk name> | grep <your disk name> | awk '{ print $4"/"$5; }' Overall IOPS (what most people talk about): iostat -d <your disk name> | grep <your disk name> | awk '{ print $2; }' For example, running the last command with my main drive, /dev/sda, looks like this: dan@daneel ~ $ iostat -dx sda | grep sda | awk '{ print $4"/"$5; }' 15.59/2.70 Note that you do not need to be root to run this either, making it useful for non-privileged users. TL;DR: If you're just interested in sda, the following command will give you overall IOPS for sda: iostat -d sda | grep sda | awk '{ print $2; }' If you want to add up the IOPS across all devices, you can use awk again: iostat -d | tail -n +4 | head -n -1 | awk '{s+=$2} END {print s}' This produces output like so: dan@daneel ~ $ iostat -d | tail -n +4 | head -n -1 | awk '{s+=$2} END {print s}' 18.88
How to get total read and total write IOPS in Linux?
1,352,400,120,000
I have a single disk that I want to create a mirror of; let's call this disk sda. I have just bought another identically-sized disk, which we can call sdb. sda and sdb have one partition called sda1 and sdb1 respectively. When creating a raid, I don't want to wipe my sda clean and start again, I just want it to start mirroring with sdb. My train of thought was to do: mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=1 /dev/sda1 ... to create the array without sdb disk, then run something like (I'm thinking the following command out loud, because I am not sure how to achieve this step) mdadm /dev/md0 --add /dev/sdb1 Note sdb1 is assumed to be formatted similarly to sda1 Is this possible?
The simple answer to the question in the title is "Yes". But what you really want to do is the next step, which is getting the existing data mirrored. It's possible to convert the existing disk, but it's risky, as mentioned, due to the metadata location. Much better to create an empty (broken) mirror with the new disk and copy the existing data onto it. Then, if it doesn't work, you just boot back to the un-mirrored original. First, initialize /dev/sdb1 as the new /dev/md0 with a missing drive and initialize the filesystem (I'm assuming ext3, but the choice is yours) mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=2 /dev/sdb1 missing mkfs -t ext3 /dev/md0 Now, /dev/sda1 is most likely your root file system (/) so for safety you should do the next step from a live CD, rescue disk or other bootable system which can access both /dev/sda1 and /dev/md0 although I have successfully done this by dropping to single user mode. Copy the entire contents of the filesystem on /dev/sda1 to /dev/md0. For example: mount /dev/sda1 /mnt/a # only do this if /dev/sda1 isn't mounted as root mount /dev/md0 /mnt/b cd /mnt/a # or "cd /" if it's the root filesystem cp -dpRxv . /mnt/b Edit /etc/fstab or otherwise ensure that on the next boot, /dev/md0 is mounted instead of /dev/sda1. Your system is probably set to boot from /dev/sda1 and the boot parameters probably specify this as the root device, so when rebooting you should manually change this so that the root is /dev/md0 (assuming /dev/sda1 was root). After reboot, check that /dev/md0 is now mounted (df) and that it is running as a degraded mirror (cat /proc/mdstat). Add /dev/sda1 to the array: mdadm /dev/md0 --add /dev/sda1 Since the rebuild will overwrite /dev/sda1, which metadata version you use is irrelevant. As always when making major changes, take a full backup (if possible) or at least ensure that anything which can't be recreated is safe.
You will need to regenerate your boot config to use /dev/md0 as root (if /dev/sda1 was root) and probably need to regenerate mdadm.conf to ensure /dev/md0 is always started.
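On Debian-style systems, that last step might look like the sketch below; paths and tooling differ per distro, and it is wrapped in a function so nothing runs by accident:

```shell
# Record the array so it assembles at boot, then rebuild the initramfs
# so /dev/md0 exists early enough to serve as the root device.
persist_array() {
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u
}
# persist_array   # run as root once the mirror has finished rebuilding
```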
Can I create a software RAID 1 with one device
1,352,400,120,000
Edited: do not run this to test it unless you want to destroy data. Could someone help me understand what I got? dd if=/dev/zero of=/dev/sda bs=4096 count=4096 Q: Why specifically 4096 for count? dd if=/dev/zero of=/dev/sda bs=512 count=4096 seek=$(expr blockdev --getsz /dev/sda - 4096) Q: What exactly does this do? Warning: the above code will render some/all of the specified device/disk's data useless!
dd if=/dev/zero of=/dev/sda bs=4096 count=4096 Q: Why specifically 4096 for count? This will zero out the first 16 MiB of the drive. 16 MiB is probably more than enough to nuke any "start of disk" structures while being small enough that it won't take very long. dd if=/dev/zero of=/dev/sda bs=512 count=4096 seek=$(expr blockdev --getsz /dev/sda - 4096) Q: What exactly does this do? blockdev --getsz gets the size of the block device in "512 byte sectors". So this command looks like it was intended to zero out the last 2 MiB of the drive. Unfortunately this command is syntactically broken. I expect the command was originally intended to be dd if=/dev/zero of=/dev/sda bs=512 count=4096 seek=$(expr `blockdev --getsz /dev/sda` - 4096) and the backticks got lost somewhere along the line of people copy/pasting it between different environments. Old partition tables, LVM metadata, RAID metadata, etc., can cause problems when reusing a drive. Zeroing out sections at the start and end of the drive will generally avoid these problems while being much faster than zeroing out the whole drive.
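The same pair of commands can be written with modern $(...) substitution and shell arithmetic instead of backticks and expr, which avoids the copy/paste breakage entirely. Still destructive, so only a function definition:

```shell
# Zero the first 16 MiB and the last 2 MiB of a disk.
wipe_ends() {
    disk=$1
    dd if=/dev/zero of="$disk" bs=4096 count=4096
    dd if=/dev/zero of="$disk" bs=512 count=4096 \
        seek=$(( $(blockdev --getsz "$disk") - 4096 ))
}
# wipe_ends /dev/sdX   # run as root; destroys partition tables and metadata
```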
What does `dd if=/dev/zero of=/dev/sda` do
1,352,400,120,000
I'm having a little issue. I have a live system running RHEL 6.7 (a VM) on VMware 6.5 (which is not managed by our group). The issue is that the other group tried to extend the capacity of an existing disk on a VM. After that, I ran a scan command to detect the new disk as usual with echo "- - -" > /sys/class/scsi_host/host0/scan, but nothing happened. They added 40G to the sdb disk, which should now be 100G, and I saw that it changed on the VM but not in Linux. So where is the problem? As I said, this is a live system, so I don't want to reboot it. Here is the system: # df -h /dev/mapper/itsmvg-bmclv 59G 47G 9.1G 84% /opt/bmc # lsblk sdb 8:16 0 60G 0 disk └─itsmvg-bmclv (dm-2) 253:2 0 60G 0 lvm /opt/bmc # vgs VG #PV #LV #SN Attr VSize VFree itsmvg 1 1 0 wz--n- 59.94g 0 # pwd /sys/class/scsi_host # ll lrwxrwxrwx 1 root root 0 Nov 13 16:18 host0 -> ../../devices/pci0000:00/0000:00:07.1/host0/scsi_host/host0 lrwxrwxrwx 1 root root 0 Nov 13 16:19 host1 -> ../../devices/pci0000:00/0000:00:07.1/host1/scsi_host/host1 lrwxrwxrwx 1 root root 0 Nov 13 16:19 host2 -> ../../devices/pci0000:00/0000:00:15.0/0000:03:00.0/host2/scsi_host/host2
Below is the command that you need to run to scan the hosts so that newly attached disks show up: echo "- - -" >> /sys/class/scsi_host/host$i/scan where $i is the host number (note that the directory is hostN, with no underscore).
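A loop over every host saves guessing the number, and for a disk that was merely grown (as in the question above) the kernel also needs a device-level rescan to re-read the new size. Sketch, wrapped in a function since it writes to /sys and needs root; "sdb" is an example name:

```shell
rescan_disks() {
    for h in /sys/class/scsi_host/host*; do
        echo '- - -' > "$h/scan"          # look for newly attached devices
    done
    echo 1 > /sys/class/block/sdb/device/rescan   # re-read sdb's capacity
}
# rescan_disks   # run as root, then check lsblk again
```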
How to detect new hard disk attached without rebooting?
1,352,400,120,000
Possible Duplicate: How to know if /dev/sdX is a connected USB or HDD? The output of ls /dev/sd* on my system is - sda sda1 sda2 sda3 sda4 sda5 sda6 sda7 sdb sdc sdc1 sdc2 How should I determine which drive is which?
Assuming you're on Linux. Try: sudo /lib/udev/scsi_id --page=0x80 --whitelisted --device=/dev/sdc or: cat /sys/block/sdc/device/{vendor,model} You can also get information (including labels) from the filesystems on the different partitions with sudo blkid /dev/sdc1 The path ID will help to determine the type of device: readlink -f /sys/class/block/sdc/device See also: find /dev/disk -ls | grep /sdc which, with a properly working udev, would give you all the information from the other commands above. The content of /proc/partitions will give you information on size (though not in as friendly a format as lsblk, already mentioned by @Max). sudo blockdev --getsize64 /dev/sdc will give you the size in bytes of the corresponding block device. sudo smartctl -i /dev/sdc (cross-platform) will also give you a lot of information including make, model, size, serial numbers, firmware revisions...
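On reasonably recent systems, lsblk can answer the "which one is USB" question directly via its TRAN (transport) column; this assumes a util-linux new enough to know that column:

```shell
# -d lists whole disks only; TRAN shows the bus: sata, usb, nvme, ...
lsblk -d -o NAME,TRAN,VENDOR,MODEL,SIZE
```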
How to determine which sd* is usb? [duplicate]
1,352,400,120,000
I have a minimalist busybox system that I was recently trying to use, and I found a small problem: it has no lsblk command. Is there another command to list disks, partitions, and sizes like lsblk? Some that also don't work: lsblk lsusb fdisk -l cfdisk
Looking through the BusyBox wiki page, I see it supports the df command to find disk usage. You can try the command below. df -h - Show free space on mounted file systems. The BusyBox man page gives examples of how to use the df command. However, as @nwildner pointed out, df shows storage on mounted filesystems, not the partition scheme. To find that out, you can check the file below. cat /proc/partitions Since fdisk -l is not working for you, the above file might contain the partition information. Testing fdisk -l produced the below output on my system. Device Boot Start End Blocks Id System /dev/sda1 * 1 13 104391 83 Linux /dev/sda2 14 9726 78019672+ 8e Linux LVM Now, I can get the partition information if I use cat /proc/partitions. The output is, major minor #blocks name 8 0 78125000 sda 8 1 104391 sda1 8 2 78019672 sda2 253 0 78019156 dm-0 253 1 72581120 dm-1 253 2 5406720 dm-2 The major number is 8, which indicates a disk device. The minor ones are your partitions on the same device: 0 is the entire disk, 1 is the primary partition, 2 is the extended partition and 5 is a logical partition. The rest is of course block size and name of disk/partition. Not sure if this is an intelligent suggestion, but did you try sudo fdisk -l to see if it works? EDIT#1 You can also run $ df -T. This is another command that does not require super user privileges to execute. However, this will report for every mount point. Another command that can come in handy is # file -sL /dev/sdXY. This has one downside in that it does not work with the full block device; it requires the exact device to be passed. The output is quite neat though. References How to determine the filesystem of an unmounted device?
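The #blocks column in /proc/partitions is in 1 KiB blocks, so a little awk (which BusyBox also ships) turns it into friendlier figures:

```shell
# Skip the two header lines, print each name with its size in MiB.
awk 'NR > 2 { printf "%-8s %10.1f MiB\n", $4, $3 / 1024 }' /proc/partitions
```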
Stat disk drives without lsblk?
1,352,400,120,000
I have no free space left in the /tmp folder in Fedora 26. That causes several problems. Folder /tmp takes up all the free RAM: $ df -h /tmp Filesystem Size Used Avail Use% Mounted on tmpfs 3.9G 3.9G 12k 100% /tmp How can I increase the size of this folder manually to get more free space without adding more RAM or deleting files from it? My question is similar to this and this but it is different: In my question I need to increase the size of the folder while I've reached maximal size
mount -o remount,size=5G /tmp/
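After the remount, df should reflect the new size; to make the larger size survive a reboot, the usual approach is pinning it in /etc/fstab (the fstab line below is an example, shown as a comment rather than applied):

```shell
# Confirm the remounted size took effect.
df -h /tmp
# Example /etc/fstab entry to make the size permanent:
#   tmpfs  /tmp  tmpfs  size=5G,mode=1777  0  0
```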
How to increase /tmp folder size manually
1,352,400,120,000
I woke up this morning to a notification email with some rather disturbing system log entries. Dec 2 04:27:01 yeono kernel: [459438.816058] ata2.00: exception Emask 0x0 SAct 0xf SErr 0x0 action 0x6 frozen Dec 2 04:27:01 yeono kernel: [459438.816071] ata2.00: failed command: WRITE FPDMA QUEUED Dec 2 04:27:01 yeono kernel: [459438.816085] ata2.00: cmd 61/08:00:70:0d:ca/00:00:08:00:00/40 tag 0 ncq 4096 out Dec 2 04:27:01 yeono kernel: [459438.816088] res 40/00:00:00:4f:c2/00:00:00:00:00/40 Emask 0x4 (timeout) Dec 2 04:27:01 yeono kernel: [459438.816095] ata2.00: status: { DRDY } (the above five lines were repeated a few times at a short interval) Dec 2 04:27:01 yeono kernel: [459438.816181] ata2: hard resetting link Dec 2 04:27:02 yeono kernel: [459439.920055] ata2: SATA link down (SStatus 0 SControl 300) Dec 2 04:27:02 yeono kernel: [459439.932977] ata2: hard resetting link Dec 2 04:27:09 yeono kernel: [459446.100050] ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 2 04:27:09 yeono kernel: [459446.314509] ata2.00: configured for UDMA/133 Dec 2 04:27:09 yeono kernel: [459446.328037] ata2.00: device reported invalid CHS sector 0 ("reported invalid CHS sector 0" repeated a few times at a short interval) I make full nightly backups of my entire system to an external (USB-connected) drive, and the above happened right in the middle of that backup run. (The backup starts at 04:00 through cron, and tonight's logged completion just before 04:56.) The backup process itself claims to have completed without any errors. There are two internally connected SATA drives and two externally (USB) connected drives on my system; one of the external drives is currently dormant. I don't recall off the top of my head which physical SATA ports are used for which of the internal drives. When googling I found the AskUbuntu question Is this drive failure or something else? 
which indicates that a very similar error occurred after 8-10 GB had been copied to a drive, but the actual failure mode was different as the drive switched to a read-only state. The only real similarity is that I did add on the order of 7-8 GB of data to my main storage last night, which would have been backed up around the time that the error occurred. smartd is not reporting anything out of the ordinary on either of the internal drives. Unfortunately smartctl doesn't speak the language of the external backup drive's USB bridge, and simply complains about Unknown USB bridge [0x0bc2:0x3320 (0x100)]. Googling for that specific error was distinctly unhelpful. My main data storage as well as the backup is on ZFS and zpool status reports 0 errors and no known data errors. Nevertheless I have initiated a full scrub on both the internal and external drives. It is currently slated to complete in about six hours for the internal drive (main storage pool) and 13-14 hours for the backup drive. It seems that the next step should be to determine which drive was having trouble, and possibly replace it. The ata2.00 part probably tells me which drive was having problems, but how do I map that identifier to a physical drive?
I wrote a one-liner based on Tobi Hahn's answer. For example, if you want to know which device corresponds to ata3: ata=3; ls -l /sys/block/sd* | grep $(grep $ata /sys/class/scsi_host/host*/unique_id | awk -F'/' '{print $5}') It will produce something like this lrwxrwxrwx 1 root root 0 Jan 15 15:30 /sys/block/sde -> ../devices/pci0000:00/0000:00:1f.5/host2/target2:0:0/2:0:0:0/block/sde
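The same unique_id trick can be looped to map every ATA port at once. This is a sketch that only makes sense on a machine with libata disks, so it is left as a function definition; the grep on the symlink target is a heuristic:

```shell
map_ata_ports() {
    for f in /sys/class/scsi_host/host*/unique_id; do
        host=${f%/unique_id}; host=${host##*/}      # e.g. "host2"
        printf 'ata%s -> ' "$(cat "$f")"            # unique_id is the ata number
        ls -l /sys/block/sd* 2>/dev/null | grep "/$host/" || echo '(no disk)'
    done
}
# map_ata_ports
```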
Given a kernel ATA exception, how to determine which physical disk is affected? [duplicate]
1,352,400,120,000
I am running some heavy I/O processes on my workstation and recently installed iotop to monitor them. Here's a recent screenshot: I'm a bit confused about the readings in the IO> column. It indicates my disk is running at around ~1500% I/O activity. Is that even possible? How to figure out the maximum possible I/O of my disk from these readings? And how does iotop calculate the relative I/O activity?
iotop shows statistics from several different origins; take care when adding up stuff. This previous discussion covers the difference between per-process read/write amounts and the system total read/write amounts: they cover different stuff since the per-process amounts include all I/O (whether to disk, to cache, to network, etc.) whereas the system total is between RAM and disk (including swap, delayed cache writes, etc.). You can't add up numbers from the IO> column. They show what fraction of each process's time is spent on I/O, not what fraction of total I/O comes from each process. 99.9% means that this process is pretty much always blocked on I/O. Accounting for I/O by process is difficult since a lot of I/O is shared between processes (cache of files used by multiple processes, a process requesting RAM causing another process to be swapped out, etc.) I don't think there's a useful definition of the “maximum possible I/O” of a disk. There's a maximum sequential write speed and a maximum sequential read speed at different points of the chain (hdparm -t displays some of these values), but that isn't really indicative of actual usage. Reading and writing files is typically not sequential; on a hard disk, moving heads to access a different location is often what takes the most time.
How does iotop calculate the relative I/O activity?
1,352,400,120,000
UPDATE: No, it is not safe to delete these snaps. I deleted them and can no longer open three of my applications. Attempt at opening Visual Studio Code: ~$ code internal error, please report: running "code" failed: cannot find installed snap "code" at revision 33: missing file /snap/code/33/meta/snap.yaml The snaps in /var/lib/snapd/snaps are taking up 2.0 GB of space on my disk right now. I want to clear up space, but I'm not sure if deleting these snaps is safe (if so, can I just run sudo rm -rf *?) This is what I see when I run snap list: code_32.snap gnome-3-28-1804_116.snap gnome-logs_93.snap code_33.snap gnome-3-34-1804_27.snap gnome-system-monitor_135.snap core18_1705.snap gnome-3-34-1804_33.snap gnome-system-monitor_145.snap core18_1754.snap gnome-calculator_730.snap gtk-common-themes_1502.snap core_8935.snap gnome-calculator_748.snap gtk-common-themes_1506.snap core_9066.snap gnome-characters_495.snap partial discord_109.snap gnome-characters_539.snap spotify_36.snap gnome-3-28-1804_110.snap gnome-logs_100.snap spotify_41.snap What are the gnome, code, and core snaps? I've installed Discord and Spotify. Will deleting the discord and spotify snaps lead to any issues with opening those applications? I'm using Ubuntu 18.04.3 LTS.
So, there's a couple questions here and I'll try to address them in an order that makes sense: What are snaps? Snaps are a way to package software, like deb packages or flatpaks. They work across linux distros and have become popular because of how easy they are to maintain and use. You can find more here: https://snapcraft.io/ What are the gnome, code, and core snaps? Core is required for snap to function, it has the program's core runtime. The gnome snaps are a pack of basic apps (calculator, system-monitor, etc). The base gnome-3-34 snaps are dependencies for the various gnome apps. Code is vscode. The snaps in /var/lib/snapd/snaps are taking up 2.0 GB of space on my disk right now. I want to clear up space Snap lets you easily roll-back to previous versions in case you want to. This leads to a lot of disk space being taken up, especially if an app and its dependencies are heavy. The other answer details how to limit this. I've installed discord and spotify. Will deleting the discord and spotify snaps lead to any issues with opening those applications Yes, if discord and spotify are installed via snap removing those files will result in the applications being removed (or broken, in this case). I'm not sure if deleting these snaps is safe (if so, can I just run sudo rm -rf *?) If you delete the snaps properly (through snap remove) yes, most of them can be removed. Removing files manually with sudo rm is dangerous. Some programs have files littered around the system and removing only part of them can cause issues and, sometimes, may need a reinstall to fix. If a package is installed through a manager (snap in this case), you should always uninstall it via the same manager. Since you removed files manually, snap can't find all the parts it needs to function and fails. 
You'll want to reinstall it with the following (note this will likely remove config files for snap and its programs, if that is an issue back them up): sudo apt purge snapd sudo apt install snapd snap install discord spotify code [...]
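Rather than deleting snap files by hand, old superseded revisions can be removed through snap itself; they show up as "disabled" in snap list --all. A sketch of that cleanup (the snap list --all column layout is assumed, a here-document stands in for live output, and the destructive snap remove loop is left commented out):

```shell
# Old, superseded revisions are marked "disabled" in `snap list --all`.
# The awk filter picks out their name ($1) and revision ($3); the
# here-document stands in for real `snap list --all` output.
disabled=$(awk '/disabled/ { print $1, $3 }' <<'EOF'
Name     Version   Rev   Tracking  Publisher  Notes
code     1.44.0    32    stable    vscode     disabled,classic
code     1.45.1    33    stable    vscode     classic
core18   20200427  1705  stable    canonical  disabled
core18   20200606  1754  stable    canonical  base
EOF
)
echo "$disabled"
# To actually reclaim the space (needs snapd, run as root):
# snap list --all | awk '/disabled/ { print $1, $3 }' |
#     while read -r name rev; do snap remove "$name" --revision="$rev"; done
```

Only the disabled revisions (code 32 and core18 1705 in this sample) are selected; the active ones stay installed.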
Is it safe to delete these snaps? [closed]
1,352,400,120,000
If I know that a partition is for example /dev/sda1 how can I get the disk name (/dev/sda in this case) that contains the partition ? The output should be only a path to disk (like /dev/sda). It shouldn't require string manipulation, because I need it to work for different disk types.
You can observe in /sys the block device for a given partition name. For example, /dev/sda1: $ ls -l /sys/class/block/sda1 lrwxrwxrwx 1 root root /sys/class/block/sda1 -> \ ../../devices/pci0000:00/.../ata1/host0/target0:0:0/0:0:0:0/block/sda/sda1 A script to take arg /dev/sda1 and print /dev/sda is: part=$1 part=${part#/dev/} disk=$(readlink /sys/class/block/$part) disk=${disk%/*} disk=/dev/${disk##*/} echo $disk I don't have lvm etc to try out, but there is probably some similar path. There is also lsblk: $ lsblk -as /dev/sde1 NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sde1 8:65 1 7.4G 0 part `-sde 8:64 1 7.4G 0 disk and as @don_crissti said you can get the parent directly by using -o pkname to get just the name column, -n to remove the header, and -d to not include holder devices or slaves: lsblk -ndo pkname /dev/sda1
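The string manipulation in the script above can be rehearsed without touching a real /sys; here a canned path stands in for the result of $(readlink /sys/class/block/sda1):

```shell
# Canned readlink target; in real use: target=$(readlink /sys/class/block/sda1)
target='../../devices/pci0000:00/ata1/host0/target0:0:0/0:0:0:0/block/sda/sda1'
disk=${target%/*}       # drop the partition component -> .../block/sda
disk=/dev/${disk##*/}   # keep only the last component -> /dev/sda
echo "$disk"
```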
How to get disk name that contains a specific partition
1,352,400,120,000
I mount /tmp on tmpfs using: sudo systemctl enable tmp.mount sudo systemctl start tmp.mount But this way /tmp takes up all the free RAM: $ df -h /tmp Filesystem Size Used Avail Use% Mounted on tmpfs 3.9G 12K 3.9G 1% /tmp $ How do I tell systemd tmp.mount to use only 1G? I know I can alternatively not use systemd and manually add an entry to /etc/fstab and specify the size there. But I don't want to do that. I want to use systemd backed tmpfs.
The systemd way of overriding tmp.mount, or extending it, is to add a local override in /etc/systemd/system. You can either copy the existing tmp.mount (from /lib/systemd/system or /usr/share/systemd probably) and edit the copy, or better yet, add configuration snippets to only change the mount options, by running sudo systemctl edit tmp.mount and entering [Mount] Options=mode=1777,strictatime,nosuid,nodev,size=1G in the editor which opens. This will create a directory called /etc/systemd/system/tmp.mount.d and, inside that directory, add a file called override.conf containing the text above. Note that systemd.mount still says that In general, configuring mount points through /etc/fstab is the preferred approach. so you may just want to do that, i.e. edit /etc/fstab to add the size=... option on the /tmp line (adding it if necessary): tmpfs /tmp tmpfs mode=1777,strictatime,nosuid,nodev,size=1G 0 0 In fact, this is the recommended approach to change mount options for any of systemd’s “API file systems”: Even though normally none of these API file systems are listed in /etc/fstab they may be added there. If so, any options specified therein will be applied to that specific API file system. Hence: to alter the mount options or other parameters of these file systems, simply add them to /etc/fstab with the appropriate settings and you are done. Using this technique it is possible to change the source, type of a file system in addition to simply changing mount options. That is useful to turn /tmp to a true file system backed by a physical disk. API file systems include the following: /sys, /proc, /dev, /run, /tmp, /sys/fs/cgroup, /sys/kernel/security, /sys/kernel/debug, /sys/kernel/config, /sys/fs/selinux, /dev/shm, /dev/pts, /proc/sys/fs/binfmt_misc, /dev/mqueue, /dev/hugepages, /sys/fs/fuse/connections, /sys/firmware/efi/efivars. systemd ensures they are mounted even if they are not specified in /etc/fstab or a mount unit.
Be careful when sizing tmpfs file systems: they will end up competing with whatever else in your system needs memory (including swap), and can result in memory exhaustion when you don’t expect it; at worst this can result in deadlocks.
Systemd backed tmpfs | How to specify /tmp size manually
1,352,400,120,000
I am in a frustrating situation - no matter how I try, gparted won't let me assign the empty space to the first partition: The middle partition is blocking me from expanding /dev/sda1. I need to move partition /dev/sda2 to the end of the drive, like so (fabricated image): Then I will be able to expand first partition: How to do that? I assume data from /dev/sda2 must be physically copied to end of the drive.
It is recommended that, before making any changes, you make backups of any data you do not want to lose in case anything goes wrong. Before you start, both partitions will need to be unmounted; if you cannot unmount one of them (e.g. because it is your root partition), use a live CD with GParted (e.g. the GParted live CD or the Ubuntu live CD) and resize them that way. Select the second (extended) partition, and click on Resize/Move. Use the right handle to extend the partition to the end of the free space, and then click on Resize/Move. Select the contained swap partition, and click on Resize/Move. Drag the partition to the end of the extended partition, and then click on Resize/Move. You can safely click OK on this warning message, as we are only moving a swap partition. Select the extended partition again and click on Resize/Move. Use the left handle to shrink the partition to the end of the free space, and then click on Resize/Move. Then, select the first partition, and click on Resize/Move. Use the right handle to extend the partition to the end of the free space, and then click on Resize/Move. Finally, click on Apply and your partitions should be resized.
Move free space from end of the drive to first partition with gparted
1,352,400,120,000
I can use a variety of tools to measure the volume of disk I/O currently flowing through the system (such as iotop and iostat) but I'm curious if it's possible to easily detect if a disk is seeking a lot with only a small amount of I/O. I know it's possible to extract this information using blktrace and then decode it using btt but these are somewhat unwieldy and I was hoping there was a simpler alternative?
The ratio (rkB/s + wkB/s)/%util of the iostat -x output should give you some insight: Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0.04 3.65 7.16 6.37 150.82 212.38 53.71 0.03 1.99 0.82 3.31 0.76 1.03 I'm not sure how exactly this ratio corresponds to the disk seek. But the idea is that, if the disk is busy and does not have a high throughput it is probably seeking. However, it's not guaranteed. Broken disks sometimes show a high utilisation and have almost no throughput. But it's at least an indicator. You can also provide a number to iostat (e.g. iostat -x 5) to specify the update interval. That way you can monitor continuously.
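That ratio is easy to compute mechanically with awk. A sketch, run here on the sample line above (in real use, pipe live iostat -x output in; the field numbers assume the layout shown, with rkB/s in field 6, wkB/s in field 7 and %util in field 14):

```shell
# Throughput per unit of utilisation: (rkB/s + wkB/s) / %util.
# A low value at high %util suggests the disk is spending its time seeking.
ratio=$(awk '$1 == "sda" { printf "%s %.1f\n", $1, ($6 + $7) / $14 }' <<'EOF'
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
sda 0.04 3.65 7.16 6.37 150.82 212.38 53.71 0.03 1.99 0.82 3.31 0.76 1.03
EOF
)
echo "$ratio"
```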
How to detect if a disk is seeking?
1,352,400,120,000
Many times, especially when messing around with boot-loaders, I'll see numerical drive and partition numbers used. For instance, in my /boot/grub/grub.cfg I see set root='hd0,gpt2', my UEFI boot entries often reference drive/partition numbers, and it seems to crop up in almost any context where bootloaders are concerned. Now that we have UUID and PARTUUID, addressing partitions in this manner seems incredibly unstable (afaik, drives are not guaranteed to be mounted in the same order always, a user may move the order of drives being plugged into their mobo, etc.) My questions therefore are twofold: Is this addressing scheme as unstable as I have outlined above? Am I missing something in the standard that means this scheme is far more reliable than I expect, or will this addressing scheme truly render your system unbootable (until you fix your boot entries at least) as a result of your drives simply being recognized in a different order or plugging them into different slots on your motherboard? If the answer to the question above is yes, then why does this addressing scheme continue to be used? Wouldn't using UUID or PARTUUID for everything be far more stable, and consistent?
The plain numbering scheme is not actually used in recent systems (with "recent" being Ubuntu 9 and later; other distributions adapted in that era, too). You are correct in observing that the root partition is set with the plain numbering scheme. But this is only a default or fall-back setting which is usually overridden with the very next command, such as: search --no-floppy --fs-uuid --set=root 74686973-6973-616e-6578-616d706c650a This selects the root partition based on the file system's UUID. In practice, the plain numbering scheme is usually stable (as long as there are no hardware changes). The only instance where I observed non-predictable numbering was a system with many USB drives, which were enumerated on a first-come-first-served basis and then emulated as IDE drives. None of these processes are inherently chaotic, so I assume a problem in that particular system's BIOS implementation. Note: "root partition" in this context means the partition to boot from, it may be different from the partition containing the "root aka. / file system".
Why is drive/partition number still used?
1,352,400,120,000
How to get a list of all disks, like this? /dev/sda /dev/sdb
ls (shows individual partitions though) # ls /dev/sd* /dev/sda /dev/sda1 ls (just disks, ignore partitions) # ls /dev/sd*[a-z] /dev/sda fdisk # fdisk -l 2>/dev/null |awk '/^Disk \//{print substr($2,0,length($2)-1)}' /dev/xvda
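lsblk can produce the same list without name-pattern tricks, which also catches NVMe and virtio disks that the sd* glob misses. A sketch, with the TYPE filter demonstrated on canned lsblk -dno NAME,TYPE output:

```shell
# Real use:  lsblk -dno NAME,TYPE | awk '$2 == "disk" { print "/dev/" $1 }'
# (-d: no partitions, -n: no header, -o: pick columns)
disks=$(awk '$2 == "disk" { print "/dev/" $1 }' <<'EOF'
sda      disk
sdb      disk
sr0      rom
nvme0n1  disk
EOF
)
echo "$disks"
```

Only the whole-disk entries survive; the CD-ROM device (TYPE rom) is filtered out.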
Get simple list of all disks [duplicate]
1,352,400,120,000
Specifically, in smartctl output, how is LifeTime(hours) calculated? I'm assuming it's one of the following: The difference (in hours) between the time of the test and the manufacture date of the drive. The difference (in hours) between the time of the test and the first powered-on date of the drive. The difference (in hours) between the time of the test (in terms of "drive running hours") and the total number of "drive running hours". *By "drive running hours", I mean a running total of the number of hours a drive has been powered on. (Analogy: Airplane engines don't have odometers like cars. Rather, they usually show the number of hours the engines have been running. I'm using "drive running hours" to mean something similar, but for hard drives) Example smartctl output: === START OF READ SMART DATA SECTION === SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Short offline Completed without error 00% 22057 - # 2 Short offline Completed without error 00% 22057 - # 3 Extended offline Completed without error 00% 22029 - # 4 Extended offline Completed without error 00% 21958 -
If I remember correctly this can vary from drive to drive. Most brands: Once testing is done at the manufacturer the firmware is loaded, and it begins monitoring the first time the drive is started by the user. The firmware does not monitor actual calendar time. It works exactly like the hour meter on a plane. The only difference is that some brands might do testing with the firmware active, so a brand new drive might show 1-2 hours where others will show 0 (unless the test takes over an hour). If you run smartctl -A /dev/sdX, replacing X with your drive, you can see the attributes that your HDD is reporting. There is a power-on time attribute (Power_On_Hours) which is where this value comes from.
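That attribute can be read out directly. A sketch parsing a canned smartctl -A attribute line (the standard attribute-table layout is assumed, with the raw value in the last column; in real use, pipe smartctl -A /dev/sdX in):

```shell
# Real use:  smartctl -A /dev/sdX | awk '/Power_On_Hours/ { print $NF }'
hours=$(awk '/Power_On_Hours/ { print $NF }' <<'EOF'
ID# ATTRIBUTE_NAME   FLAG     VALUE WORST THRESH TYPE     UPDATED  WHEN_FAILED RAW_VALUE
  9 Power_On_Hours   0x0032   075   075   000    Old_age  Always   -           22057
EOF
)
echo "$hours"   # matches the LifeTime(hours) of the most recent self-tests above
```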
In smartctl output, what does LifeTime(hours) mean?
1,352,400,120,000
I have two VMs in VirtualBox. For example, VM 1 runs Red Hat, and VM 2 runs Ubuntu. For the Red Hat VM I have redhat.vdi and redhat2.vdi, and for the Ubuntu VM I have ubuntu.vdi and ubuntu2.vdi. Each VM can access its own virtual disks without problem. How can I access ubuntu.vdi from the Red Hat virtual machine, using VirtualBox?
This is how you add another virtual hard disk to a VM in VirtualBox. Go into the VirtualBox Manager and make sure both VMs are shut down Right-click on the VM in question and pick Settings Go into the Storage category Select the controller on which you want to connect the virtual hard disk Click the "Add attachment" button and select "Add hard disk" from the popup menu Pick "Choose existing disk" Tell VirtualBox which hard disk file you want to add, and click Open When you start the VM the next time, the disk will be available just as if you had installed a second physical hard disk in a real computer.
Mounting another VM's .vdi in VirtualBox
1,352,400,120,000
We have RedHat 7.2 OS. /dev/sdc is mounted to /bla/appLO Is it possible to run fsck on mounted disks (without umount /bla/appLO) and to see only the errors if they exist? Example: e2fsck -n /dev/sdc e2fsck 1.42.9 (28-Dec-2013) Warning! /dev/sdc is mounted. Warning: skipping journal recovery because doing a read-only filesystem check. /dev/sdc: clean, 11/1310720 files, 126322/5242880 blocks Does fsck -n show the error even though the disk is mounted?
No. You should never run fsck on a mounted filesystem. Correcting errors on a live filesystem will mess up your disk. Even if you run the tool in read-only mode (without error correction) the results can't be trusted. This is true even if the filesystem is mounted read-only. From man e2fsck: Note that in general it is not safe to run e2fsck on mounted filesystems. The only exception is if the -n option is specified, and -c, -l, or -L options are not specified. However, even if it is safe to do so, the results printed by e2fsck are not valid if the filesystem is mounted. If e2fsck asks whether or not you should check a filesystem which is mounted, the only correct answer is ``no''. Only experts who really know what they are doing should consider answering this question in any other way. From man fsck: For some filesystem-specific checkers, the -n option will cause the fs-specific fsck to avoid attempting to repair any problems, but simply report such problems to stdout. This is however not true for all filesystem-specific checkers. In particular, fsck.reiserfs(8) will not report any corruption if given this option. fsck.minix(8) does not support the -n option at all. You should take the time to unmount the disk and do a proper filesystem check; results that cannot be trusted aren't useful at all.
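A maintenance script can guard against this mistake by consulting the mount table before ever invoking fsck. A minimal sketch of that check; the mount table is canned here, while real use would read /proc/mounts instead:

```shell
# Canned mount table; real use: mounts=$(cat /proc/mounts)
mounts='/dev/sda1 / ext4 rw 0 0
/dev/sdc /bla/appLO ext4 rw 0 0'

# Succeeds (exit 0) when the device appears in the first column.
is_mounted() {
    printf '%s\n' "$mounts" | awk -v dev="$1" '$1 == dev { found = 1 } END { exit !found }'
}

if is_mounted /dev/sdc; then
    verdict='mounted - unmount before fsck'
else
    verdict='safe to fsck'
fi
echo "$verdict"
```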
Is it possible to run fsck to only see errors on mounted disk
1,352,400,120,000
My goal is to get the disks greater than 100G from lsblk. I have it working, but it's awkward. I'm pretty sure it can be shortened. Either by using something totally different than lsblk, or maybe I can filter human readable numbers directly with awk. Here's what I put together: lsblk | grep disk | awk '{print$1,$4}' | grep G | sed 's/.$//' | awk '{if($2>100)print$1}' It outputs only the sdx and nvmexxx part of the disks larger than 100G. Exactly what I need. I am happy with it, but am eager to learn more from you Gurus 😉
You can specify the form of output you want from lsblk: % lsblk -nblo NAME,SIZE mmcblk0 15931539456 mmcblk0p1 268435456 mmcblk0p2 15662038528 Options used: -b, --bytes Print the SIZE column in bytes rather than in human-readable format. -l, --list Use the list output format. -n, --noheadings Do not print a header line. -o, --output list Specify which output columns to print. Use --help to get a list of all supported columns. Then the filtering is easier: % lsblk -nblo NAME,SIZE | awk '$2 > 4*2^30 {print $1}' # greater than 4 GiB mmcblk0 mmcblk0p2 In your case, that'd be 100*2^30 for 100GiB or 100e9/1e11 for 100GB.
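For reference, here is the same filter end-to-end on canned lsblk -nblo NAME,SIZE output (in real use, adding -d would restrict the listing to whole disks, as the original pipeline's grep disk did):

```shell
# Keep only devices larger than 100 GiB (100 * 2^30 bytes)
big=$(awk '$2 > 100 * 2^30 { print $1 }' <<'EOF'
sda       512110190592
sda1      536870912
sda2      511571329024
nvme0n1   2000398934016
mmcblk0   15931539456
EOF
)
echo "$big"
```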
Can I shorten this filter, that finds disk sizes over 100G?
1,352,400,120,000
I want to create a new encrypted LUKS-partition in GParted. I've searched the UI and the help, but the only thing I can find is how to open and close an existing LUKS partition and how to copy and paste an existing one. However, I can find no way to create a new one. I can create a new partition e.g. for btrfs, but it is never encrypted. So it seems for that only task of creating a new partition I have to resort to other tools like GNOME Disks (GNOME Disk Utility), which easily allows this when creating a new partition, or fallback to the commandline, which I'd like to avoid. Or is there any way to create a new encrypted partition? Broader use case Actually, I want to do what is described in the GParted help: Copy an encrypted partition and „maintaining an encrypted” partition on a new disk. However, to do so (i.e. to not decrypt the data while copying), I have to paste it „into an existing open LUKS encrypted partition”, i.e. I need to have an encrypted partition first. So, finally, is there any way to create a new encrypted partition in GParted?
GParted doesn't support creating of encrypted partitions, you'll need to use either GNOME Disks or blivet-gui (shameless plug for my project) or you can just use cryptsetup directly if you are ok with using command line tools. See GParted Features page for details about supported features, LUKS is listed as not supported in the Create column.
How to create a new encrypted LUKS-partition in GParted?
1,352,400,120,000
I'm not sure what is wrong here but when running fdisk -l I don't get an output, and when running fdisk /dev/sdb # I get this fdisk: unable to open /dev/sdb: No such file or directory I'm running Ubuntu 12.10 Server Can someone please tell me what I'm doing wrong? I want to delete /dev/sdb2-3 and just have one partition for sdb The only thing I've done differently with the setup of this server is use ext4 instead of ext3, I figured the extra speed of ext4 would help since I am using SSDs now root@sb8:~# ll /dev/sd* brw-rw---- 1 root disk 8, 1 Nov 23 14:58 /dev/sda1 brw-rw---- 1 root disk 8, 2 Nov 23 14:55 /dev/sda2 brw-rw---- 1 root disk 8, 17 Nov 23 19:20 /dev/sdb1 brw-rw---- 1 root disk 8, 18 Nov 23 15:45 /dev/sdb2 brw-rw---- 1 root disk 8, 19 Nov 23 14:51 /dev/sdb3 brw-rw---- 1 root disk 8, 33 Nov 23 15:47 /dev/sdc1 brw-rw---- 1 root disk 8, 49 Nov 23 15:48 /dev/sdd1 root@sb8:~# cat /proc/partitions major minor #blocks name 8 0 117220824 sda 8 1 112096256 sda1 8 2 5119968 sda2 8 16 117220824 sdb 8 17 20971520 sdb1 8 18 95718400 sdb2 8 19 526304 sdb3 8 48 1953514584 sdd 8 49 1863013655 sdd1 8 32 1953514584 sdc 8 33 1863013655 sdc1 root@sb8:~# ll /dev/disk/by-path/ total 8 drwxr-xr-x 2 root root 4096 Nov 23 15:48 ./ drwxr-xr-x 5 root root 4096 Nov 23 15:42 ../ lrwxrwxrwx 1 root root 10 Nov 23 14:58 pci-0000:00:1f.2-scsi-0:0:0:0-part1 -> ../../sda1 lrwxrwxrwx 1 root root 10 Nov 23 19:20 pci-0000:00:1f.2-scsi-1:0:0:0-part1 -> ../../sdb1 lrwxrwxrwx 1 root root 10 Nov 23 15:45 pci-0000:00:1f.2-scsi-1:0:0:0-part2 -> ../../sdb2 lrwxrwxrwx 1 root root 10 Nov 23 15:47 pci-0000:00:1f.2-scsi-2:0:0:0-part1 -> ../../sdc1 lrwxrwxrwx 1 root root 10 Nov 23 15:48 pci-0000:00:1f.2-scsi-3:0:0:0-part1 -> ../../sdd1 root@sb8:~# df -T /dev Filesystem Type 1K-blocks Used Available Use% Mounted on /dev/root ext4 111986032 1993108 104388112 2% /
On most non-embedded Linux installations, and many embedded installations, /dev is on a RAM-backed filesystem, not on the root partition. Most current installations have /dev as a tmpfs filesystem, with the udev daemon creating entries when notified by the kernel that some hardware is available. Recent kernels offer the possibility of having /dev mounted as the devtmpfs filesystem, which is directly populated by the kernel. I think Ubuntu 12.10 still uses udev. Either way, /dev should not be on the root partition (as shown by the output of df /dev); it should be on its own filesystem. Did you accidentally unmount /dev? The first thing you should try is to reboot: this should mount /dev properly. Before that, check that you haven't added an entry for /dev in /etc/fstab (there should be no line with /dev in the second column). Even with /dev on the root partition, you can create /dev/sdb by running cd /dev sudo MAKEDEV sdb But not having /dev managed dynamically isn't a stable configuration; you'll run into similar problems for a lot of other hardware.
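If MAKEDEV is not available, mknod can recreate the node by hand; the major and minor numbers come straight from /proc/partitions. A sketch, with the parsing run on the question's own listing (the mknod itself needs root and touches /dev, so it is left commented):

```shell
# Real use:  awk '$4 == "sdb" { print $1, $2 }' /proc/partitions
mm=$(awk '$4 == "sdb" { print $1, $2 }' <<'EOF'
major minor  #blocks  name
   8        0  117220824 sda
   8       16  117220824 sdb
   8       17   20971520 sdb1
EOF
)
echo "$mm"
# sudo mknod /dev/sdb b $mm    # unquoted $mm word-splits into major (8) and minor (16)
```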
/dev/sdb: No such file or directory (but /dev/sdb1 etc. exists)
1,352,400,120,000
fdisk -l output: . . Disk label type: dos Disk identifier: 0x0006a8bd . . What are Disk label type and Disk identifier? Also, apart from the manuals, where else can I find more information about disk management / partitioning etc..?
The disk label type is the type of Master Boot Record. See http://en.wikipedia.org/wiki/Master_boot_record. The disk identifier is a randomly generated number stuck onto the MBR. In terms of tools for looking at disks, fdisk is on its way to being deprecated if it isn't already so. parted is the replacement for fdisk and gparted can be used to provide a graphical interface to parted (although certainly other tools exist as well).
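The disk identifier is simply the four bytes at offset 0x1B8 (440 decimal) in the MBR, stored little-endian. A sketch reading it back with dd from a throwaway image file rather than a real disk (GNU dd's status=none is assumed):

```shell
img=$(mktemp)
dd if=/dev/zero of="$img" bs=512 count=1 status=none   # blank 512-byte fake MBR
# Stamp identifier 0x0006a8bd (little-endian bytes bd a8 06 00) at offset 440
printf '\xbd\xa8\x06\x00' | dd of="$img" bs=1 seek=440 conv=notrunc status=none
# Read it back as hex
id=$(dd if="$img" bs=1 skip=440 count=4 status=none | od -An -tx1 | tr -d ' \n')
echo "$id"    # bytes bd a8 06 00 -> identifier 0x0006a8bd, as in the question
rm -f "$img"
```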
"fdisk -l" output: what are Disk label type" and "Disk identifier"
1,352,400,120,000
OS: Debian Bullseye, uname -a: Linux backup-server 5.10.0-5-amd64 #1 SMP Debian 5.10.24-1 (2021-03-19) x86_64 GNU/Linux I am looking for a way of undoing this wipefs command: wipefs --all --force /dev/sda? /dev/sda while the former structure was: fdisk -l /dev/sda Disk /dev/sda: 223.57 GiB, 240057409536 bytes, 468862128 sectors Disk model: CT240BX200SSD1 Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disklabel type: gpt Disk identifier: 8D5A08BF-0976-4CDB-AEA2-8A0EAD44575E Device Start End Sectors Size Type /dev/sda1 2048 1050623 1048576 512M EFI System /dev/sda2 1050624 468860927 467810304 223.1G Linux filesystem and the output of that wipefs command (is still sitting on my terminal): /dev/sda1: 8 bytes were erased at offset 0x00000052 (vfat): 46 41 54 33 32 20 20 20 /dev/sda1: 1 byte was erased at offset 0x00000000 (vfat): eb /dev/sda1: 2 bytes were erased at offset 0x000001fe (vfat): 55 aa /dev/sda2: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef /dev/sda: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 /dev/sda: 8 bytes were erased at offset 0x37e4895e00 (gpt): 45 46 49 20 50 41 52 54 /dev/sda: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa I might have found an article hosted on https://sysbits.org/, namely: https://sysbits.org/undoing-wipefs/ I will quote the wipe and undo parts from there, I want to know if it's sound and I can safely execute it on my server, which I did not yet reboot, and since then trying to figure out a work-around from this hell of a typo: wipe part wipefs -a /dev/sda /dev/sda: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 /dev/sda: 8 bytes were erased at offset 0x3b9e655e00 (gpt): 45 46 49 20 50 41 52 54 /dev/sda: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa undo part echo -en '\x45\x46\x49\x20\x50\x41\x52\x54' | dd of=/dev/sda bs=1 conv=notrunc 
seek=$((0x00000200)) echo -en '\x45\x46\x49\x20\x50\x41\x52\x54' | dd of=/dev/sda bs=1 conv=notrunc seek=$((0x3b9e655e00)) echo -en '\x55\xaa' | dd of=/dev/sda bs=1 conv=notrunc seek=$((0x000001fe)) partprobe /dev/sda Possibly alternative solution Just now, I ran the testdisk on that SSD drive, and it found many partitions, but only these two match the original: TestDisk 7.1, Data Recovery Utility, July 2019 Christophe GRENIER <[email protected]> https://www.cgsecurity.org Disk /dev/sda - 240 GB / 223 GiB - CHS 29185 255 63 Partition Start End Size in sectors 1 P EFI System 2048 1050623 1048576 [EFI System Partition] [NO NAME] 2 P Linux filesys. data 1050624 468860927 467810304 Can I / Should I just hit Write (Write partition structure to disk)? If not, why not?
You're lucky that wipefs actually prints out the parts it wipes. These, wipefs -a /dev/sda /dev/sda: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 /dev/sda: 8 bytes were erased at offset 0x3b9e655e00 (gpt): 45 46 49 20 50 41 52 54 /dev/sda: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa echo -en '\x45\x46\x49\x20\x50\x41\x52\x54' | dd of=/dev/sda bs=1 conv=notrunc seek=$((0x00000200)) echo -en '\x45\x46\x49\x20\x50\x41\x52\x54' | dd of=/dev/sda bs=1 conv=notrunc seek=$((0x3b9e655e00)) echo -en '\x55\xaa' | dd of=/dev/sda bs=1 conv=notrunc seek=$((0x000001fe)) do look sensible to me in general. But note that the offsets there are different from the ones in your case! You'll need to use the values you got from wipefs. Based on the offset values (0x3b9e655e00 vs 0x37e4895e00), they had a slightly larger disk than you did (~256 GB vs ~240 GB). Using their values would mean that the backup GPT at the end of disk would be left broken. That shouldn't matter much, in that any partitioning tool should be able to rewrite it as long as the first copy is intact. But if it was the other way around, and the wrong offset you used happened to be within the size of your disk, you'd end up overwriting some random part of the drive. Not good. Also, the magic numbers for the filesystems of course need to be in the right places. I tested wiping and undoing it with a VFAT image, and wrote this off the top of my head before reading your version too closely: printf "$(printf '\\x%s' 46 41 54 31 36 20 20 20)" | dd bs=1 conv=notrunc seek=$(printf "%d" 0x00000036) of=test.vfat that's for the single wipefs output line (repeat for others): test.vfat: 8 bytes were erased at offset 0x00000036 (vfat): 46 41 54 31 36 20 20 20 The nested printf at the start allows to copypaste the output from wipefs, without having to manually change 46 41 54 31... to \x46\x41\x54\x31.... Again, you do need to take care to enter the correct values in the correct offsets! 
It probably wouldn't be too bad to automate that further, but what with the risk involved, I'm not too keen to post such a script publicly without significant testing. If you can, take a copy of the disk contents before messing with it.
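The printf-and-dd restore pattern is easy to rehearse on a scratch file before touching the disk. This sketch writes the GPT signature bytes from the wipefs output (45 46 49 20 50 41 52 54, i.e. "EFI PART") into a temp file at offset 0x200 and reads them back (bash's printf \xHH escapes are assumed):

```shell
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1024 count=1 status=none
# Copypaste the byte list from wipefs; the inner printf turns it into \x45\x46...
printf "$(printf '\\x%s' 45 46 49 20 50 41 52 54)" |
    dd of="$f" bs=1 conv=notrunc seek=$((0x00000200)) status=none
sig=$(dd if="$f" bs=1 skip=$((0x00000200)) count=8 status=none)
echo "$sig"    # EFI PART
rm -f "$f"
```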
Undoing wipefs --all --force /dev/sda? /dev/sda
1,352,400,120,000
I've got a USB pen drive and I'd like to turn it into a bootable MBR device. However, at some point in its history, that device had a GPT on it, and I can't seem to get rid of that. Even after I ran mklabel dos in parted, grub-install still complains about Attempting to install GRUB to a disk with multiple partition labels. This is not supported yet.. I don't want to preserve any data. I only want to clear all traces of the previous GPT, preferably using some mechanism which works faster than a dd if=/dev/zero of=… to zero out the whole drive. I'd prefer a terminal-based (command line or curses) approach, but some common and free graphical tool would be fine as well.
If you want a single command, instead of navigating interactive menus in gdisk, try: $ sudo sgdisk -Z /dev/sdx substituting sdx with the name of your disk in question. (obviously - don't wipe out the partition information on your system disk ;)
Remove all traces of GPT disk label
1,352,400,120,000
I have a disk image, it's a "whole" disk image, e.g., contains multiple partitions, and I want to clone just one of them (not the first one..) onto a partition on an external drive with multiple partitions on it (I'm also not cloning it onto the first partition of the disk...) FDisk'ing the image gives this: # fdisk -l 2013-02-09-wheezy-raspbian.img Disk 2013-02-09-wheezy-raspbian.img: 1939 MB, 1939865600 bytes 255 heads, 63 sectors/track, 235 cylinders, total 3788800 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00014d34 Device Boot Start End Blocks Id System 2013-02-09-wheezy-raspbian.img1 8192 122879 57344 c W95 FAT32 (LBA) 2013-02-09-wheezy-raspbian.img2 122880 3788799 1832960 83 Linux # and the block device looks like this: # fdisk -l /dev/sdc Disk /dev/sdc: 8014 MB, 8014266368 bytes 247 heads, 62 sectors/track, 1022 cylinders, total 15652864 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Device Boot Start End Blocks Id System /dev/sdc1 2048 131071 64512 e W95 FAT16 (LBA) /dev/sdc2 131072 15652863 7760896 83 Linux # I want the second partition of the image to replace the second partition of the block device. Don't worry about the trailing corrupted free space, I'll use GParted to clean that up, and I need it for something else anyway.
# losetup --find --show --partscan --read-only 2013-02-09-wheezy-raspbian.img /dev/loop7 # dd if=/dev/loop7p2 of=/dev/narnia bs=1M If --partscan doesn't work, you can also use one of: # partx -a /dev/loop7 # kpartx /dev/loop7 or similar partition mapping solutions. You should probably mount it first just to see if it's the right thing or what. Of course you can also read the fdisk output and give dd the skip=131072 or whatever directly, i.e. make it skip that many blocks of input so it starts reading at where the partition is located; but it's nicer to see actual partitions with a loop device.
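If a loop device is not an option, the skip/count arithmetic from the fdisk listing goes like this (sector numbers taken from the question's output; the final dd is left commented since it writes to a device):

```shell
start=122880      # first sector of ...img2 (fdisk "Start" column)
end=3788799       # last sector of ...img2 (fdisk "End" column)
sectors=$((end - start + 1))
echo "$sectors"   # 3665920 sectors = 1832960 1K-blocks, matching fdisk's Blocks column
# dd if=2013-02-09-wheezy-raspbian.img of=/dev/sdc2 bs=512 skip=$start count=$sectors
```

skip applies to the input file, so no seek is needed when the output is the partition device itself.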
How to use DD to clone a partition off a disk image?
1,352,400,120,000
Currently working on a project where I'm dealing with an arbitrary group of disks in multiple systems. I've written a suite of software to burn-in these disks. Part of that process was to format the disks. While testing my software, I realized that if at some point during formatting the disks, the process stops/dies, and I want to restart the process, I really don't want to reformat all of the disks in the set, which have already been successfully formatted. I'm running this software from a ramfs with no disks mounted and none of the disks I am working on ever get mounted and they will not be used by my software for anything other than testing, so anything goes on these bad boys. There's no data about which to be concerned. EDIT: No, I'm not partitioning. Yes, ext2 fs. This is the command I'm using to format: (/sbin/mke2fs -q -O sparse_super,large_file -m 0 -T largefile -T xfs -FF $drive >> /tmp/mke2fs_drive.log 2>&1 & echo $? > $status_file &) SOLUTION: Thanks to Jan's suggestion below: # lsblk -f /dev/<drv> I concocted the following shell function, which works as expected. SOURCE is_formatted() { drive=$1 fs_type=$2 if [[ ! -z $drive ]] then if [[ ! -z $fs_type ]] then current_fs=$(lsblk -no KNAME,FSTYPE $drive) if [[ $(echo $current_fs | wc -w) == 1 ]] then echo "[INFO] '$drive' is not formatted. Formatting." return 0 else current_fs=$(echo $current_fs | awk '{print $2}') if [[ $current_fs == $fs_type ]] then echo "[INFO] '$drive' is formatted with correct fs type. Moving on." return 1 else echo "[WARN] '$drive' is formatted, but with wrong fs type '$current_fs'. Formatting." return 0 fi fi else echo "[WARN] is_formatted() was called without specifying fs_type. Formatting." return 0 fi else echo "[FATAL] is_formatted() was called without specifying a drive. Quitting."
return -1 fi } DATA sdca ext2 46b669fa-0c78-4b37-8fc5-a26368924b8c sdce ext2 1a375f80-a08c-4889-b759-363841b615b1 sdck ext2 f4f43e8c-a5c6-495f-a731-2fcd6eb6683f sdcn sdby ext2 cf276cce-56b1-4027-a795-62ef62d761fa sdcd ext2 42fdccb8-e9bc-441e-a43a-0b0f8d409c71 sdci ext2 d6e7dc60-286d-41e2-9e1b-a64d42072253 sdbw ext2 c3986491-b83f-4001-a3bd-439feb769d6a sdch ext2 3e7dba24-e3ec-471a-9fae-3fee91f988bd sdcq sdcf ext2 8fd2a6fd-d1ae-449b-ad48-b2f9df997e5f sdcs sdco sdcw ext2 27bf220e-6cb3-4953-bee4-aff27c491721 sdcp ext2 133d9474-e696-49a7-9deb-78d79c246844 sdcx sdct sdcu sdcy sdcr sdcv sdde sddc ext2 0b22bcf1-97ea-4d97-9ab5-c14a33c71e5c sddi ext2 3d95fbcb-c669-4eda-8b57-387518ca0b81 sddj sddb sdda ext2 204bd088-7c48-4d61-8297-256e94feb264 sdcz sddk ext2 ed5c8bd8-5168-487f-8fee-4b7c671ef2cb sddl sddn sdds ext2 647d2dea-f71d-4e87-bbe5-30f6424b36c9 sddf ext2 47128162-bcb7-4eab-802d-221e8eb36074 sddo sddh ext2 b7f41e1a-216d-4580-97e6-f2df917754a8 sddg ext2 39b838e0-f0ae-447c-8876-2d36f9099568 Which yielded: [INFO] '/dev/sdca' is formatted with correct fs type. Moving on. [INFO] '/dev/sdce' is formatted with correct fs type. Moving on. [INFO] '/dev/sdck' is formatted with correct fs type. Moving on. [INFO] '/dev/sdcn' is not formatted. Formatting. [INFO] '/dev/sdby' is formatted with correct fs type. Moving on. [INFO] '/dev/sdcd' is formatted with correct fs type. Moving on. [INFO] '/dev/sdci' is formatted with correct fs type. Moving on. [INFO] '/dev/sdbw' is formatted with correct fs type. Moving on. [INFO] '/dev/sdch' is formatted with correct fs type. Moving on. [INFO] '/dev/sdcq' is not formatted. Formatting. [INFO] '/dev/sdcf' is formatted with correct fs type. Moving on. [INFO] '/dev/sdcs' is not formatted. Formatting. [INFO] '/dev/sdco' is not formatted. Formatting. [INFO] '/dev/sdcw' is formatted with correct fs type. Moving on. [INFO] '/dev/sdcp' is formatted with correct fs type. Moving on. [INFO] '/dev/sdcx' is not formatted. Formatting. 
[INFO] '/dev/sdct' is not formatted. Formatting. [INFO] '/dev/sdcu' is not formatted. Formatting. [INFO] '/dev/sdcy' is not formatted. Formatting. [INFO] '/dev/sdcr' is not formatted. Formatting. [INFO] '/dev/sdcv' is not formatted. Formatting. [INFO] '/dev/sdde' is not formatted. Formatting. [INFO] '/dev/sddc' is formatted with correct fs type. Moving on. [INFO] '/dev/sddi' is formatted with correct fs type. Moving on. [INFO] '/dev/sddj' is not formatted. Formatting. [INFO] '/dev/sddb' is not formatted. Formatting. [INFO] '/dev/sdda' is formatted with correct fs type. Moving on. [INFO] '/dev/sdcz' is not formatted. Formatting. [INFO] '/dev/sddk' is formatted with correct fs type. Moving on. [INFO] '/dev/sddl' is not formatted. Formatting. [INFO] '/dev/sddn' is not formatted. Formatting. [INFO] '/dev/sdds' is formatted with correct fs type. Moving on. [INFO] '/dev/sddf' is formatted with correct fs type. Moving on. [INFO] '/dev/sddo' is not formatted. Formatting. [INFO] '/dev/sddh' is formatted with correct fs type. Moving on. [INFO] '/dev/sddg' is formatted with correct fs type. Moving on. Do note that the magic potion was extending Jan's suggestion to simply output what I cared about: lsblk -no KNAME,FSTYPE $drive
Depending on how you access the drives, you could use blkid -o list (deprecated) on them and then parse the output. The command outputs, among other things, an fs_type column that shows the filesystem. blkid -o list has since been superseded by lsblk -f.
Method to test if disks in system are formatted
1,352,400,120,000
I have a btrfs RAID1 system with the following state:

# btrfs filesystem show
Label: none  uuid: 975bdbb3-9a9c-4a72-ad67-6cda545fda5e
        Total devices 2 FS bytes used 1.65TiB
        devid    1 size 1.82TiB used 1.77TiB path /dev/sde1
        *** Some devices missing

The missing device is a disk drive that failed completely and which the OS could not recognize anymore. I removed the faulty disk and sent it for recycling. Now I have a new disk installed under /dev/sdd. Searching the web, I fail to find instructions for such a scenario (bad choice of search terms?). There are many examples of how to save a RAID system when the faulty disk is still somewhat accessible to the OS. The btrfs replace command requires a source disk. I tried the following:

# btrfs replace start 2 /dev/sdd /mnt/brtfs-raid1-b
# btrfs replace status /mnt/brtfs-raid1-b
Never started

No error message, but the status indicates it never started. I cannot figure out what the problem with my attempt is. I am running Ubuntu 16.04 LTS Xenial Xerus, Linux kernel 4.4.0-57-generic.

Update #1

Ok, when running the command in "non-background mode" (-B), I see an error that did not show up before:

# btrfs replace start -B 2 /dev/sdd /mnt/brtfs-raid1-b
ERROR: ioctl(DEV_REPLACE_START) failed on "/mnt/brtfs-raid1-b": Read-only file system

/mnt/brtfs-raid1-b is mounted RO (Read Only). I have no choice; Btrfs does not allow me to mount the remaining disk as RW (Read Write). When I try to mount the disk RW, I get the following error in syslog:

BTRFS: missing devices(1) exceeds the limit(0), writeable mount is not allowed

When in RO mode, it seems I cannot do anything: I cannot replace, add, or delete a disk. But there is no way for me to mount the disk as RW. What option is left? It shouldn't be this complicated when a simple disk fails. The system should continue running RW and warn me of a failed drive. I should be able to insert a new disk and have the data copied over to it, while the applications remain unaware of the disk issue.
That is a proper RAID.
Update: According to @mkudlacek, this problem has been fixed. For posterity, here is my answer to why, in 2017, I could not rebuild a RAID with a missing drive. It turns out that this was a limitation of btrfs as of the beginning of 2017. To get the filesystem mounted rw again, one needed to patch the kernel. I have not tried it though. I am planning to move away from btrfs because of this; one should not have to patch a kernel to be able to replace a faulty disk. Click on the following links for details:

Kernel patch here
Full email thread

Please leave a comment if you still suffer from this problem as of 2020. I believe that people would like to know whether this has been fixed or not.

Update: I moved to good old mdadm and lvm and am very happy with my RAID10 of 4×4 TB drives (8 TB usable space), as of 2020-10-20. It is proven, works well, is not resource-intensive, and I have full trust in it.
Btrfs RAID1: How to replace a disk drive that is physically no more there?
1,352,400,120,000
Is there any way to find a process which is periodically writing to disk (according to the hdd led) on FreeBSD 10 with ZFS (maybe by turning on some verbose ZFS logging mode)? lsof and other instantly-aggregating statistics utilities seem unable to catch anything, because each disk access lasts only a brief moment.
DTrace is able to report on vfs information in FreeBSD (as well as a raft of other probes). DTrace is enabled by default in the 10 kernel so all you need to do is load the module then run the dtrace script. Load the DTrace module kldload dtraceall Get the vfssnoop.d script from the FreeBSD forums. The whole thread is a treasure trove for disk monitoring. Run it: ./vfssnoop.d Watch the output for what is accessed: # ./vfssnoop.d cc1: warning: is shorter than expected TIMESTAMP UID PID PROCESS CALL SIZE PATH/FILE 1555479476691083 0 1225 nfsd vop_getattr - /share/netboot 1555479478601010 0 1225 nfsd vop_inactive - /share/netboot 1555479482457241 0 1225 nfsd vop_getattr - /share/wpad.dat 1555480557262388 0 1432 cron vop_getattr - /var/cron/tabs 1555480557302178 0 1432 cron vop_inactive - /var/cron/tabs 1555480557336414 0 1432 cron vop_inactive - /etc 1555480557346224 0 1432 cron vop_getattr - /etc/crontab
FreeBSD 10 trace disk activity
1,352,400,120,000
My program creates many small short-lived files. They are typically deleted within a second of creation. The files are in an ext4 file system backed by a real hard disk. I know that Linux periodically flushes (pdflush) dirty pages to disk. Since my files are short-lived, most likely they are deleted before pdflush ever gets to them. My question is: does my program cause a lot of disk writes? My concern is my hard disk's life. Since the files are small, let's assume the sum of their sizes is smaller than dirty_bytes and dirty_background_bytes. Ext4 has its default journal turned on, i.e. a metadata journal. I also want to know whether the metadata or the data is written to disk.
A simple experiment using ext4:

Create a 100MB image...

# dd if=/dev/zero of=image bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.0533049 s, 2.0 GB/s

Make it a loop device...

# losetup -f --show image
/dev/loop0

Make a filesystem and mount...

# mkfs.ext4 /dev/loop0
# mount /dev/loop0 /mnt/tmp

Generate some kind of workload with short-lived files. (Change this to any method you prefer.)

for ((x=0; x<1000; x++))
do
    (
        echo short-lived-content-$x > /mnt/tmp/short-lived-file-$x
        sleep 1
        rm /mnt/tmp/short-lived-file-$x
    ) &
done

Umount, sync, unloop.

# umount /mnt/tmp
# sync
# losetup -d /dev/loop0

Check the image contents.

# strings image | grep short-lived-file | tail -n 3
short-lived-file-266
short-lived-file-895
short-lived-file-909
# strings image | grep short-lived-content | tail -n 3

In my case it listed all the file names, but none of the file contents. So the file-name metadata was written to disk, but the file contents were not.
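A complementary check to the strings experiment above: while such a workload runs, the pending writes show up in the Dirty counter of /proc/meminfo rather than hitting the disk. A hedged sketch (file names, counts, and the /tmp/slf directory are arbitrary choices of mine):

```shell
# current amount of dirty (not yet written back) page cache, in kB
dirty_kb() { awk '/^Dirty:/ {print $2}' /proc/meminfo; }

before=$(dirty_kb)
mkdir -p /tmp/slf
for x in $(seq 1 50); do
    echo "short-lived-content-$x" > /tmp/slf/short-lived-file-$x
done
after=$(dirty_kb)
rm -r /tmp/slf   # files die long before the writeback interval expires

echo "Dirty before: $before kB, after: $after kB"
```

If the files are deleted before the writeback interval (vm.dirty_expire_centisecs) elapses, the Dirty figure rises and falls without the data ever being written out.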
Are short-lived files flushed to disk?
1,352,400,120,000
On my system (Debian), I can see the UUID identifier for all of my disk partitions (i.e. /dev/sda1, /dev/sda2, ...):

ls /dev/disk/by-uuid/

However, I don't see a UUID identifier for /dev/sda itself. Is it possible to reference the whole disk with a UUID? I need this because I want to reference a particular disk, and I cannot rely on it being called /dev/sda.

EDIT

The solution suggested by @don_crissti is great. However, I would like the UUID to be the same for all hard disks of the same model/manufacturer, not unique per serial number. Using udevadm, I can see the disk attributes:

udevadm info -n /dev/sda -a

ATTRS{model}=="Samsung SSD 840 "
ATTRS{vendor}=="0x8086"
ATTRS{class}=="0x010700"
ATTRS{device}=="0x1d6b"
....

How can I generate a UUID from these attributes, so that disks of the same model/manufacturer will have the same UUID?
The symlinks under /dev/disk/by-uuid/ are created by udev rules based on filesystems UUIDs. If you look at /usr/lib/udev/rules.d/60-persistent-storage.rules you will find entries like: ...... ENV{ID_FS_UUID_ENC}=="?*", SYMLINK+="disk/by-uuid/$env{ID_FS_UUID_ENC}" To reference a disk you could use the disk serial number and the ENV{ID_SERIAL_SHORT} key. The following udev rule matches the drive with serial no. 0000000013100925DB96 and creates a symlink with the same name under /dev/disk/by-uuid/: KERNEL=="sd*", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL_SHORT}=="0000000013100925DB96", SYMLINK+="disk/by-uuid/$env{ID_SERIAL_SHORT}" As to your other question... sure, you could always use ENV{ID_MODEL} instead of ENV{ID_SERIAL_SHORT} and use a custom string for your symlink name. The following rule matches any drive with ID_MODEL = M4-CT128M4SSD2 and creates a symlink M4-SSD-1234567890 under /dev/disk/by-uuid/: KERNEL=="sd*", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", ENV{ID_MODEL}=="M4-CT128M4SSD2", SYMLINK+="disk/by-uuid/M4-SSD-1234567890" Note that this works fine as long as there's only one drive matching the ID_MODEL. If there are multiple drives of the same model, the rule is applied again for each of them and the symlink will point to the last detected/added drive.
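For the EDIT in the question - making the identifier a pure function of vendor/model rather than of the serial number - one option is a name-based UUID (version 5, RFC 4122), which always yields the same UUID for the same input string. A hedged sketch, not udev functionality: it assumes bash and sha1sum, and the helper name and namespace choice are mine:

```shell
# name-based (version 5) UUID: SHA-1 over namespace bytes + name,
# with the version and variant bits patched in afterwards (RFC 4122)
uuid5() {                              # uuid5 <namespace-hex-32> <name>
    local ns=$1 name=$2 hash variant
    hash=$( { printf "$(printf '%s' "$ns" | sed 's/../\\x&/g')"
              printf '%s' "$name"; } | sha1sum | cut -c1-32 )
    variant=$(printf '%x' $(( (0x${hash:16:1} & 0x3) | 0x8 )))
    hash=${hash:0:12}5${hash:13:3}${variant}${hash:17}
    printf '%s-%s-%s-%s-%s\n' "${hash:0:8}" "${hash:8:4}" \
           "${hash:12:4}" "${hash:16:4}" "${hash:20:12}"
}

NS_DNS=6ba7b8109dad11d180b400c04fd430c8   # RFC 4122 DNS namespace

# every disk with this vendor+model, on every machine, gets the same UUID:
uuid5 "$NS_DNS" "0x8086/Samsung SSD 840"

# recent util-linux can do the same directly:
#   uuidgen --sha1 --namespace @dns --name "0x8086/Samsung SSD 840"
```

The resulting string can then be used as the SYMLINK name in a udev rule like the ones above.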
reference whole disk (/dev/sda) using UUID
1,352,400,120,000
I have an 8G usb stick (I'm on linux Mint), and I'm trying to copy a 5.4G file into it, but getting No space left on device The filesize of the copied file before failing is always 3.6G An output of the mounted stick shows.. df -T /dev/sdc1 ext2 7708584 622604 6694404 9% /media/moo/ba20d7ab-2c46-4f7a-9fb8-baa0ee71e9fe df -h /dev/sdc1 7.4G 608M 6.4G 9% /media/moo/ba20d7ab-2c46-4f7a-9fb8-baa0ee71e9fe du -h --max-depth=1 88K ./.ssh ls -h myfile -rw-r--r-- 1 moo moo 5.4G May 26 09:35 myfile So a 5.4G file, won't seem to go on an 8G usb stick. I thought there wasn't issues with ext2, and it was only problems with fat32 for file sizes and usb sticks ? Would changing the formatting make any difference ? Edit: Here is an report from tunefs for the drive sudo tune2fs -l /dev/sdd1 Filesystem volume name: Last mounted on: /media/moo/ba20d7ab-2c46-4f7a-9fb8-baa0ee71e9fe Filesystem UUID: ba20d7ab-2c46-4f7a-9fb8-baa0ee71e9fe Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: ext_attr resize_inode dir_index filetype sparse_super large_file Filesystem flags: signed_directory_hash Default mount options: (none) Filesystem state: not clean with errors Errors behavior: Continue Filesystem OS type: Linux Inode count: 489600 Block count: 1957884 Reserved block count: 97894 Free blocks: 970072 Free inodes: 489576 First block: 0 Block size: 4096 Fragment size: 4096 Reserved GDT blocks: 477 Blocks per group: 32768 Fragments per group: 32768 Inodes per group: 8160 Inode blocks per group: 510 Filesystem created: Mon Mar 2 13:00:18 2009 Last mount time: Tue May 26 12:12:59 2015 Last write time: Tue May 26 12:12:59 2015 Mount count: 102 Maximum mount count: 26 Last checked: Mon Mar 2 13:00:18 2009 Check interval: 15552000 (6 months) Next check after: Sat Aug 29 14:00:18 2009 Lifetime writes: 12 GB Reserved blocks uid: 0 (user root) Reserved blocks gid: 0 (group root) First inode: 11 Inode size: 256 Required extra isize: 28 Desired extra isize: 28 Default 
directory hash: half_md4 Directory Hash Seed: 249823e2-d3c4-4f17-947c-3500523479fd FS Error count: 62 First error time: Tue May 26 09:48:15 2015 First error function: ext4_mb_generate_buddy First error line #: 757 First error inode #: 0 First error block #: 0 Last error time: Tue May 26 10:35:25 2015 Last error function: ext4_mb_generate_buddy Last error line #: 757 Last error inode #: 0 Last error block #: 0
Your 8GB stick has approximately 7.5 GiB, and even with some file system overhead it should be able to store the 5.4GiB file. You can use tune2fs to check the file system status and properties:

tune2fs -l /dev/<device>

By default, 5% of the space is reserved for the root user. Your output lists 97894 reserved blocks, which corresponds to approximately 382 MiB and appears to be the default value. You might want to adjust this value using tune2fs if you don't need that much reserved space. Nevertheless, even with those ~382 MiB reserved, the file should fit on the file system.

Your tune2fs output shows an unclean file system with errors, so please run fsck on it. This will fix the errors and possibly place some files in the lost+found directory; you can delete them if you're not intending to recover the data. This should fix the file system, and copying the file will then succeed.
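The reserved-space figure can be recomputed from the tune2fs output directly (the two constants below are the "Reserved block count" and "Block size" lines from the question):

```shell
RESERVED_BLOCKS=97894   # "Reserved block count" from tune2fs -l
BLOCK_SIZE=4096         # "Block size" from tune2fs -l

RESERVED_MIB=$((RESERVED_BLOCKS * BLOCK_SIZE / 1024 / 1024))
echo "reserved for root: $RESERVED_MIB MiB"

# to give that space back to ordinary users (run against the right device!):
#   tune2fs -m 0 /dev/sdc1
```

Note that 97894 is exactly 5% of the 1957884 total blocks shown, which confirms it is the default reservation.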
Unable to copy large file onto ext2 usb stick [closed]
1,352,400,120,000
I want to create a fixed size Linux ramdisk which never swaps to disk. Note that my question is not "why" I want to do this (let say, for example, that it's for an educative purpose or for research): the question is how to do it. As I understand ramfs cannot be limited in size, so it doesn't fit my requirement of having a fixed size ramdisk. It also seems that tmpfs may be swapped to disk. So it doesn't fit my requirement of never swapping to disk. How can you create a fixed size Linux ramdisk which does never swap to disk? Is it possible, for example, to create tmpfs inside a ramfs (would such a solution fit both my requirements) and if so how? Note that performances are not an issue and the ramdisk getting full and triggering "disk full" errors ain't an issue either.
This is just a thought and has more than one downside, but it might be usable enough anyway. How about creating an image file and a filesystem inside it on top of ramfs, then mounting the image as a loop device? That way you could limit the size of the ramdisk by simply limiting the image file size. For example:

$ mkdir -p /ram/{ram,loop}
$ mount -t ramfs none /ram/ram
$ dd if=/dev/zero of=/ram/ram/image bs=2M count=1
1+0 records in
1+0 records out
2097152 bytes (2.1 MB) copied, 0.00372456 s, 563 MB/s
$ mke2fs /ram/ram/image
mke2fs 1.42 (29-Nov-2011)
/ram/ram/image is not a block special device.
Proceed anyway? (y,n) y
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=0 blocks, Stripe width=0 blocks
256 inodes, 2048 blocks
102 blocks (4.98%) reserved for the super user
First data block=1
Maximum filesystem blocks=2097152
1 block group
8192 blocks per group, 8192 fragments per group
256 inodes per group

Allocating group tables: done
Writing inode tables: done
Writing superblocks and filesystem accounting information: done

$ mount -o loop /ram/ram/image /ram/loop
$ dd if=/dev/zero of=/ram/loop/test bs=1M count=5
dd: writing `/ram/loop/test': No space left on device
2+0 records in
1+0 records out
2027520 bytes (2.0 MB) copied, 0.00853692 s, 238 MB/s
$ ls -l /ram/loop
total 2001
drwx------ 2 root root   12288 Jan 27 17:12 lost+found
-rw-r--r-- 1 root root 2027520 Jan 27 17:13 test

In the (somewhat too long) example above, the image file is created to be 2 megabytes, and when something tries to write more than 2 megabytes to it, the write simply fails because the filesystem is full. One obvious downside to all this is of course the added complexity, but at least for academic purposes this should suffice.
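One small refinement to the recipe above: the backing file can be created sparse with truncate instead of dd'ing zeros, so it only consumes ramfs pages as blocks are actually written (a sketch run on /tmp here; mke2fs and mount work on the sparse file exactly as in the example):

```shell
# 2 MiB sparse image: full apparent size, but no blocks allocated yet
truncate -s 2M /tmp/image
stat -c 'apparent size: %s bytes' /tmp/image
du -k /tmp/image    # allocated space in KiB: stays 0 until data is written
```

The size cap still holds: the filesystem inside the image can never grow past the 2 MiB apparent size.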
How to create a fixed size Linux ramdisk which does never swap to disk?
1,507,025,938,000
I have a VPS. I might be able to encrypt my partition but I haven't tried. I believe my VPS company can reset my root password, although the only SSH key I see is my own. All my data is encrypted with encfs. Just in case a hacker gets some kind of access, encfs will only mount the drive if my password is correct (SSH keys won't mount it, and resetting the root password won't either, since the new password would be an incorrect passphrase). My question is: can my VPS host break into my box? Physically the data is encrypted. I believe root can be changed without resetting the box? If so, can they then access my already-mounted filesystem? If another user with no permission is logged in, can that user do something to access the RAM and dump sensitive data? Can the VPS host easily read the contents of my RAM? Note: This is hypothetical. I'm thinking about how much security I can promise if I have big clients, and this popped into mind. I'd rather not host a box at home, nor do I have the pipes to support it.
As a general rule, physical access to the machine is all that's ever needed to compromise it. You are, after all, trusting that what the machine tells you is true; a person with physical access can void that trust. Consider that an attacker with physical access can theoretically do anything (including installation of hardware/firmware rootkits, etc). If the data is encrypted, that's a good first step, but at every step (when you are entering your authentication to decrypt the volume, etc) you are trusting the computer not to lie to you. That is much more difficult when you do not have personal control over the physical machine. As for some of your specific queries: If another user with no permission is logged in can the user do something to access the ram and dump sensitive data? In general, no. Raw memory access is a privileged operation. can the vps host easily read the contents of my ram? Yes. Isolation in a virtual environment means that you have no control over the external operating environment that the VPS is running within. This operating environment could indeed do this.
Physically break into box? (Memory & disk)
1,507,025,938,000
I have a 128GB Somnambulist SSD. I know this brand is one of the worst. I measured the speed using GNOME Disk Utility, and it showed a read/write speed of 420/340 MB/s. After encrypting the SSD with Debian 12, the read speed, as measured by GNOME Disk Utility, dropped to 13.5 MB/s! Is this drop in speed normal, or is the issue likely related to the SSD itself?
If your CPU is old enough it may not support AES-NI instructions, so encryption/decryption will be slow. grep -qw aes /proc/cpuinfo && echo Supported || echo Unsupported Will tell you everything.
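The check is easy to wrap for scripting; a hedged sketch (the fixture files stand in for /proc/cpuinfo so the logic can be demonstrated on any machine):

```shell
has_aesni() {
    # AES-NI shows up as the "aes" flag in /proc/cpuinfo
    grep -qw aes "${1:-/proc/cpuinfo}"
}

# illustrative fixtures instead of the live /proc/cpuinfo:
printf 'flags\t\t: fpu vme aes sse2\n' > /tmp/cpu_with_aesni
printf 'flags\t\t: fpu vme sse2\n'     > /tmp/cpu_without_aesni

has_aesni /tmp/cpu_with_aesni    && echo "hardware AES: fast dm-crypt"
has_aesni /tmp/cpu_without_aesni || echo "software AES only: expect slow encrypted I/O"

# for actual per-cipher throughput numbers, compare the aes-xts lines of:
#   cryptsetup benchmark
```

If the aes flag is missing, switching to a cipher with cheap software implementations may help more than any SSD tuning.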
Low performance of encrypted SSD
1,507,025,938,000
I have a 3TB drive which I have partitioned using GPT: $ sudo sgdisk -p /dev/sdg Disk /dev/sdg: 5860533168 sectors, 2.7 TiB Logical sector size: 512 bytes Disk identifier (GUID): 2BC92531-AFE3-407F-AC81-ACB0CDF41295 Partition table holds up to 128 entries First usable sector is 34, last usable sector is 5860533134 Partitions will be aligned on 2048-sector boundaries Total free space is 2932 sectors (1.4 MiB) Number Start (sector) End (sector) Size Code Name 1 2048 10239 4.0 MiB 8300 2 10240 5860532216 2.7 TiB 8300 However, when I connect it via a USB adapter, it reports a logical sector size of 4096 and the kernel no longer recognizes the partition table (since it's looking for the GPT at sector 1, which is now at offset 4096 instead of 512): $ sudo sgdisk -p /dev/sdg Creating new GPT entries. Disk /dev/sdg: 732566646 sectors, 2.7 TiB Logical sector size: 4096 bytes Disk identifier (GUID): 2DE535B3-96B0-4BE0-879C-F0E353341DF7 Partition table holds up to 128 entries First usable sector is 6, last usable sector is 732566640 Partitions will be aligned on 256-sector boundaries Total free space is 732566635 sectors (2.7 TiB) Number Start (sector) End (sector) Size Code Name Is there any way to force Linux to recognize the GPT at offset 512? Alternatively, is there a way to create two GPT headers, one at 512 and one at 4096, or will they overlap? EDIT: I have found a few workarounds, none of which are very good: I can use a loopback device to partition the disk: $ losetup /dev/loop0 /dev/sdg Loopback devices always have a sector size of 512, so this allows me to partition the device how I want. However, the kernel does not recognize partition tables on loopback devices, so I have to create another loopback device and manually specify the partition size and offset: $ losetup /dev/loop1 /dev/sdg -o $((10240*512)) --sizelimit $(((5860532216-10240)*512)) I can write a script to automate this, but it would be nice to be able to do it automatically. 
I can run nbd-server and nbd-client; NBD devices have 512-byte sectors by default, and NBD devices are partitionable. However, the NBD documentation warns against running the nbd server and client on the same system; When testing, the in-kernel nbd client hung and I had to kill the server. I can run istgt (user-space iSCSI target), using the same setup. This presents another SCSI device to the system with 512-byte sectors. However, when testing, this failed and caused a kernel NULL pointer dereference in the ext4 code. I haven't investigated devmapper yet, but it might work.
I found a solution: A program called kpartx, which is a userspace program that uses devmapper to create partitions from loopback devices, which works great: $ loop_device=`losetup --show -f /dev/sdg` $ kpartx -a $loop_device $ ls /dev/mapper total 0 crw------- 1 root root 10, 236 Mar 2 17:59 control brw-rw---- 1 root disk 252, 0 Mar 2 18:30 loop0p1 brw-rw---- 1 root disk 252, 1 Mar 2 18:30 loop0p2 $ $ # delete device $ kpartx -d $loop_device $ losetup -d $loop_device This essentially does what I was planning to do in option 1, but much more cleanly.
Recognizing GPT partition table created with different logical sector size
1,507,025,938,000
I've seen some disk formatting/partitioning discussions that mention destroying existing GPT/MBR data structures as a first step: sgdisk --zap-all /dev/nvme0n1 I wasn't previously aware of this, and when I've set up a disk, I've generally used: parted --script --align optimal \ /dev/nvme0n1 -- \ mklabel gpt \ mkpart ESP fat32 1MiB 512MiB \ set 1 boot on \ name 1 boot \ mkpart primary 512MiB 100% \ set 2 lvm on \ name 2 primary Should I have cleared things out (e.g. sgdisk --zap-all) first? What are the downsides to not having done that?
This advice is from the time when other tools didn't properly support GPT and were not removing all the pieces of the GPT metadata. From the sgdisk man page, for the --zap/--zap-all option: Use this option if you want to repartition a GPT disk using fdisk or some other GPT-unaware program. That's no longer true. Both fdisk and parted support GPT now, and if you create a new partition table, they will remove both GPT headers (GPT has a backup header at the end of the disk that can cause problems when not removed) and the Protective MBR header. That being said, it is generally not a bad idea to properly remove all headers/signatures when removing a preexisting storage layout. I personally use wipefs to remove signatures from all devices before removing them, just to make sure there's nothing left that could be unexpectedly discovered later -- I've been in situations where a newly created MD array or LVM logical volume suddenly has a filesystem on it just because it was created on the same (or close enough) offset where a previous device was. Storage tools usually try to detect filesystem signatures when creating a new partition/device and can wipe them for you, but doing that manually never hurts.
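The wipefs habit can be practiced risk-free on a plain image file; a hedged sketch (mkswap is used only as a convenient way to plant a known signature - any mkfs would do the same job):

```shell
# 1 MiB file standing in for a disk
dd if=/dev/zero of=/tmp/disk.img bs=1M count=1 2>/dev/null

# plant a swap signature on it
mkswap /tmp/disk.img >/dev/null 2>&1

# wipefs lists the signature...
wipefs /tmp/disk.img | grep -qi swap && echo "swap signature found"

# ...and -a erases every signature it recognizes
wipefs -a /tmp/disk.img >/dev/null
[ -z "$(wipefs /tmp/disk.img)" ] && echo "no signatures left"
```

The same two invocations (list first, then -a) are exactly what you would run against a real block device before reusing it.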
Is it important to delete GPT/MBR labels before reformatting/repartitioning?
1,507,025,938,000
All of the tools I've tried until now were only capable of creating a dual (GPT & MBR) partition table, where the first 4 of the GPT partitions were mirrored to compatible MBR partitions. This is not what I want. I want a pure GPT partition table, i.e. one where there is no MBR table on the disk, and thus no synchronizing between the two either. Is it somehow possible?
TO ADDRESS YOUR EDIT: I didn't notice the edit to your question until just now. As written now, the question is altogether different than when I first answered it. The mirror you describe is not in the spec, actually, as it is instead a rather dangerous and ugly hack known as a hybrid-MBR partition format. This question makes a lot more sense now - it's not silly at all, in fact. The primary difference between a GPT disk and a hybrid MBR disk is that a GPT's MBR will describe the entire disk as a single MBR partition, while a hybrid MBR will attempt to hedge for (extremely ugly) compatibility's sake and describe only the area covered by the first four partitions. The problem with that situation is the hybrid-MBR's attempts at compatibility completely defeat the purpose of GPT's Protective MBR in the first place. As noted below, the Protective MBR is supposed to protect a GPT-disk from stupid applications, but if some of the disk appears to be unallocated to those, all bets are off. Don't use a hybrid-MBR if it can be at all helped - which, if on a Mac, means don't use the default Bootcamp configuration. In general, if looking for advice on EFI/GPT-related matters go nowhere else (excepting maybe a slight detour here first) but to rodsbooks.com. ahem... This (used to be) kind of a silly question - I think you're asking how to partition a GPT disk without a Protective MBR. The answer to that question is you cannot - because the GPT is a disk partition table format standard, and that standard specifies a protective MBR positioned at the head of the disk. See? What you can do is erase the MBR or overwrite it - it won't prevent most GPT-aware applications from accessing the partition data anyway, but the reason it is included in the specification is to prevent non-GPT-aware applications from screwing with the partition-table. 
It prevents this by just reporting that the entire disk is a single MBR-type partition already, and nobody should try writing a filesystem to it because it is already allocated space. Removing the MBR removes that protection. In any case, here's how: This creates a 4G ./img file full of NULs... </dev/zero >./img \ dd ibs=4k obs=4kx1k count=1kx1k 1048576+0 records in 1024+0 records out 4294967296 bytes (4.3 GB) copied, 3.38218 s, 1.3 GB/s This writes a partition table to it - to include the leading Protective MBR. Each of printf's arguments is followed by a \newline and written to gdisk's stdin. gdisk interprets the commands as though they were typed at it interactively and acts accordingly, to create two GPT partition entries in the GUID Partition Table it writes to the head of our ./img file. All terminal output is dumped to >/dev/null (because it's a lot and we'll be having a look at the results presently anyway). printf %s\\n o y n 1 '' +750M ef00 \ n 2 '' '' '' '' \ w y | >/dev/null \ gdisk ./img This gets pr's four-columned formatted representation of the offset-accompanied strings in the first 2K of ./img. <./img dd count=4 | strings -1 -td | pr -w100 -t4 4+0 records in 4+0 records out 2048 bytes (2.0 kB) copied, 7.1933e-05 s, 28.5 MB/s 451 * 1033 K 1094 t 1212 n 510 U 1037 > 1096 e 1214 u 512 EFI PART 1039 ;@fY 1098 m 1216 x 524 \ 1044 30 1153 = 1218 529 P 1047 L 1158 rG 1220 f 531 ( 1050 E 1161 y=i 1222 i 552 " 1065 w 1165 G} 1224 l 568 V 1080 E 1170 $U.b 1226 e 573 G 1082 F 1175 N 1228 s 575 G 1084 I 1178 C 1230 y 577 y 1086 1180 b 1232 s 583 G 1088 S 1185 x 1234 t 602 Ml 1090 y 1208 L 1236 e 1024 (s* 1092 s 1210 i 1238 m You can see where the MBR ends there, yeah? Byte 512. This writes 512 spaces over the first 512 bytes in ./img. <>./img >&0 printf %0512s And now for the fruits of our labor. This is an interactive run of gdisk on ./img. 
gdisk ./img GPT fdisk (gdisk) version 1.0.0 Partition table scan: MBR: not present BSD: not present APM: not present GPT: present Found valid GPT with corrupt MBR; using GPT and will write new protective MBR on save. Command (? for help): p Disk ./img: 8388608 sectors, 4.0 GiB Logical sector size: 512 bytes Disk identifier (GUID): 0528394A-9A2C-423B-9FDE-592CB74B17B3 Partition table holds up to 128 entries First usable sector is 34, last usable sector is 8388574 Partitions will be aligned on 2048-sector boundaries Total free space is 2014 sectors (1007.0 KiB) Number Start (sector) End (sector) Size Code Name 1 2048 1538047 750.0 MiB EF00 EFI System 2 1538048 8388574 3.3 GiB 8300 Linux filesystem
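The protective MBR that gdisk restores on save can also be built and inspected by hand, which makes its layout concrete: per the GPT spec it is an ordinary MBR whose single partition entry has type 0xEE, starts at LBA 1, and covers the (32-bit-capped) rest of the disk. A hedged sketch (the 4 GiB / 8388608-sector size matches the image above; assumes bash's printf \x escapes):

```shell
IMG=/tmp/pmbr.img

# bytes 0-445: boot code area, all zeros here
dd if=/dev/zero of="$IMG" bs=446 count=1 2>/dev/null

# bytes 446-461: the one protective partition entry
#   00            status (non-bootable)
#   00 02 00      CHS start (conventional encoding of LBA 1)
#   ee            partition type: GPT protective
#   ff ff ff      CHS end, capped
#   01 00 00 00   start LBA = 1, little-endian
#   ff ff 7f 00   size = 0x007FFFFF = 8388607 sectors (whole disk minus the MBR)
printf '\x00\x00\x02\x00\xee\xff\xff\xff\x01\x00\x00\x00\xff\xff\x7f\x00' >> "$IMG"

# bytes 462-509: three unused entries; bytes 510-511: boot signature 55 AA
dd if=/dev/zero bs=48 count=1 2>/dev/null >> "$IMG"
printf '\x55\xaa' >> "$IMG"

# sanity checks: total size, type byte at offset 450, signature at 510
stat -c %s "$IMG"
od -An -tx1 -j450 -N1 "$IMG"
od -An -tx1 -j510 -N2 "$IMG"
```

This is also a handy way to read an existing disk: od -An -tx1 -j450 -N1 on a device shows ee for a GPT disk with an intact protective MBR.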
How to construct a GPT-only partition table on Linux?
1,507,025,938,000
I noticed today while setting up a new system that the disk as represented in /dev/disk/by-id also had a WWN link that I was not familiar with. My research suggests that this is a "World Wide Name", some kind of ID unique to the drive. What is the use case for this, and when would it be appropriate to use it versus the drive's other represented ID, serial number, and UUID (from /dev/disk/by-uuid)? Links to documentation for this WWN identifier would also be useful. I'm getting a lot of hits on Fibre Channel material on Google. Is this a related use case or something else entirely?
The WWN (World Wide Name) identifies the actual physical hardware disk; in simple cases it is derived from the hardware serial number. (Except when it doesn't: a WWN can also identify storage which is not an actual hardware disk but just pretends to be one for the sake of the operating system, for example a volume presented by a storage array.) UUIDs etc. identify logical structures (partitions, logical volumes, filesystems).

[Storage arrays, for example an HPE 3PAR, are indeed usually connected to the computers which use them over a Fibre Channel Storage Area Network aka SAN. Infrastructure administrators configure the storage arrays to present logical disks to the hosts; each such logical disk is identified by a WWN.]

So the use cases where a WWN is appropriate are those where the administrator is interested in monitoring or configuring the actual physical (or, in the case of a logical disk presented by a storage array, pseudo-physical) disk; for example, before any partition table or LVM metadata is written to the disk, the only honest identifier is the WWN. On the other hand, UUIDs can be copied from one physical disk to another, for example in order to perform a hardware upgrade.
What are the use cases for a hard disk's WWN?
1,507,025,938,000
I have a couple of large disks with backup/archive material on them. They're ext4. For the ones that will be stored for a couple of years without the whole disk being read again, I've been thinking of a way to refresh the disks' magnetic state. Shelf life of drives seems to be a matter of debate everywhere I've looked for an answer, but it seems that after a couple of years (say 5 or so) of storage it would be wise to refresh the data in some way (?)

I've seen this suggested:

dd if=/dev/sda of=/dev/sda

Is it safe? Is it useful? What I'm looking for is something other than an fsck or a dd if=/dev/sda of=/dev/null, both of which will only discover existing magnetic dropouts on the disk. What I want is to refresh the magnetic data before the magnetic charges on the disk decay below a readable level. How can I do this?
Generally you can't really refresh the whole disk without reading/writing all of it. fsck is unlikely to provide what you need: it works with the file system, not the underlying device, and hence mostly just scans file system metadata (inodes and other file system structures). badblocks -n might be an alternative to dd if=X of=X. In any case you probably want to use large blocks to speed things up (for dd something like bs=16M; for badblocks this would read -b 16777216, or -b $((1<<24)) in reasonable shells). You'll probably also want to use conv=fsync with dd.

As for the safety of dd with the same input and output device: it reads a block from the input and writes it to the output, so it should be safe (I have re-encrypted an encrypted partition like this on several occasions, by creating loop devices with the same underlying device and different passwords and then dd'ing from one to the other). At least for some types of physical media, that is: with shingled drives, for example, it is definitely not obvious to me that it is 100% failure-proof.
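The rewrite-in-place idea can be sketched with dd reading and writing the same node; conv=notrunc is essential so the output isn't truncated before the input is read. The demo below exercises it on a scratch image rather than a real disk; substitute the actual device (e.g. /dev/sdX) at your own risk:

```shell
# Fill a scratch image with random data, then "refresh" it in place.
dd if=/dev/urandom of=/tmp/refresh.img bs=1M count=4 status=none
sum_before=$(md5sum < /tmp/refresh.img)
# Read every block and write it back to the same offset.
dd if=/tmp/refresh.img of=/tmp/refresh.img bs=1M conv=notrunc,fsync status=none
sum_after=$(md5sum < /tmp/refresh.img)
[ "$sum_before" = "$sum_after" ] && echo "data intact after refresh"
```

Because dd reads each block before writing the block at the same offset, the sequential in-place copy does not clobber unread data.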
How do I refresh the magnetic state on a disks with backups?
1,507,025,938,000
Consider you've been informed about a bad sector like this: [48792.329933] Add. Sense: Unrecovered read error - auto reallocate failed [48792.329936] sd 0:0:0:0: [sda] CDB: [48792.329938] Read(10): ... [48792.329949] end_request: I/O error, dev sda, sector 1545882485 [48792.329968] md/raid1:md126: sda: unrecoverable I/O read error for block 1544848128 [48792.330018] md: md126: recovery interrupted. How do I find out which file might include this sector? How to map a sector to file? Or how to find out if it just maps to free filesystem space? The mapping process should be able to deal with the usual storage stack. For example, in the above example, the stack looks like this: /dev/sda+sdb -> Linux MD RAID 1 -> LVM PV -> LVM VG -> LVM LV -> XFS But, of course, it could even look like this: /dev/sda+sdb -> Linux MD RAID 1 -> DM_CRYPT -> LVM PV -> LVM VG -> LVM LV -> XFS
The traditional way is to copy all files elsewhere and see which one triggers a read error. Of course, this does not answer the question at all if the error is hidden by the redundancy of the RAID layer. Apart from that I only know the manual approach. Which is way too bothersome to actually go through with, and if there is a tool that does this magic for you, I haven't heard of it yet, and I'm not sure if more generic tools (like blktrace) would help in that regard. For the filesystem, you can use filefrag or hdparm --fibmap to determine block ranges of all files. Some filesystems offer tools to make the lookup in the other direction (e.g. debugfs icheck) but I don't know of a syscall that does the same, so there seems to be no generic interface for block->file lookups. For LVM, you can use lvs -o +devices to see where each LV is stored; you also need to know the pvs -o +pe_start,vg_extent_size for Physical Extent offset/sizes. It may actually be more readable in the vgcfgbackup. This should allow you to translate the filesystem addresses to block addresses in each PV. For LUKS, you can see the offset in cryptsetup luksDump. For mdadm, you can see the offset in mdadm --examine. If the RAID level is something other than 1, you will also need to do some math, and more specifically, you need to know the RAID layout in order to understand which address on the md device may translate to which block of which RAID member device. Finally you will need to take partition offsets into account, unless you were using the disks directly without any partitioning.
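For ext2/3/4, the block-to-inode-to-path lookup can be exercised end to end with debugfs on a scratch image, no root needed. A sketch; on a real system you would feed icheck the filesystem block number computed from the reported sector via the offsets described above:

```shell
# Build a small ext4 image and put one file on it.
dd if=/dev/zero of=/tmp/fs.img bs=1M count=8 status=none
mke2fs -q -F -t ext4 -b 1024 /tmp/fs.img
echo "payload" > /tmp/payload
debugfs -w -R "write /tmp/payload victim" /tmp/fs.img 2>/dev/null
# Pick one of the file's blocks, then map block -> inode -> path.
blk=$(debugfs -R "blocks victim" /tmp/fs.img 2>/dev/null | awk '{print $1}')
ino=$(debugfs -R "icheck $blk" /tmp/fs.img 2>/dev/null | awk 'NR==2 {print $2}')
debugfs -R "ncheck $ino" /tmp/fs.img 2>/dev/null
```

The final ncheck prints the inode alongside its pathname (/victim here), which is exactly the block-to-file direction the question asks about.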
How to find out which file is affected by a bad sector?
1,507,025,938,000
Is there a way to make sar (from sysstat) collect free disk space data?
This question is old, but with a current version of sysstat (11.1.1) you can get free disk space with sar -F.

Example:

[root@ipboss-linux ~]# sar -F 1 1
Linux 2.6.32-358.el6.x86_64 (ipboss-linux.qualif.fr)    09/23/2014      _x86_64_        (8 CPU)

11:21:16 AM  MBfsfree  MBfsused   %fsused  %ufsused     Ifree     Iused   %Iused FILESYSTEM
11:21:17 AM      4974      4869     49.47     54.55    441591    199257    31.09 /dev/sda3
11:21:17 AM        65        32     32.71     37.87     25649        39     0.15 /dev/sda1
11:21:17 AM    215269      1278      0.59      5.67  14068929     13119     0.09 /dev/sda2
Using SAR to monitor free disk space data
1,507,025,938,000
How do I find the maximum I/O a physical disk can support? My application is doing I/O, and I can find the actual throughput (Blk_wrtn/s) using Linux commands. But how can I find the maximum limit I can reach? I want to know whether the disk can be loaded further.
Obviously using Unix tools is the easiest way to do it. You can measure the maximum by creating a test case and using appropriate tools to measure its performance. A good resource can be found here: LINUX - Test READ and WRITE speed of Storage

As a read test, for example:

sudo hdparm -tT /dev/sdX

And to measure write speed:

dd if=/dev/random of=<some file on the hd> bs=8k count=10000; sync;
# Hit CONTROL-C after 5 seconds to get results
# 65994752 bytes (66 MB) copied, 21.8919 s, 3.0 MB/s

Note: as pointed out in the comments, the dd command also measures the performance of the file system and even of /dev/random. It measures the write performance of an environment, which heavily depends on the hard disk's performance, though.
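A variant that keeps the slow kernel RNG out of the measurement (a sketch; the target path is an example, point it at a file on the filesystem you want to test):

```shell
# /dev/zero removes the RNG from the measurement; fdatasync makes dd
# include the flush to disk in its reported timing.
dd if=/dev/zero of=/tmp/ddtest bs=8k count=10000 conv=fdatasync 2>&1 | tail -n 1
rm -f /tmp/ddtest
```

The last line of dd's stderr output reports bytes copied, elapsed time, and throughput.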
How to find the MAX IO a physical disk can support
1,507,025,938,000
I need to list the partition type GUIDs from the command line. Note: this is not the same as the partition UUID. Basically I need to search for all disks that have the Ceph OSD type GUID: 4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D. The intention is to emulate some things done by ceph-disk (Python) in a bash script on CoreOS. Why? So I can mount them to the appropriate place automatically with ceph-docker.
This was my ultimate solution using blkid -p:

function find_osds() {
    local osds
    declare -a dev_list
    # note: -I not available in all versions of lsblk, use --exclude instead
    mapfile dev_list < <(lsblk -l -n -o NAME --exclude 1,7,11)
    for dev in "${dev_list[@]}"; do
        dev=/dev/$(trim "$dev")
        if blkid -p "$dev" | fgrep -q '4fbd7e29-9d25-41b8-afd0-062c0ceff05d'; then
            osds+=($dev)
        fi
    done
    echo "${osds[@]}"
}
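On a reasonably recent util-linux (an assumption), lsblk can print the partition type GUID directly via the PARTTYPE column, which shortens the whole loop to one pipeline. A sketch, with the GUID match factored into a helper so it can be tested on fabricated input:

```shell
# Filter "NAME PARTTYPE" lines down to Ceph OSD partitions (case-insensitive).
CEPH_OSD_GUID='4fbd7e29-9d25-41b8-afd0-062c0ceff05d'
match_osds() {
    awk -v g="$CEPH_OSD_GUID" 'tolower($2) == g { print "/dev/" $1 }'
}
# Usage on a live system:
#   lsblk -l -n -o NAME,PARTTYPE | match_osds
```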
List partition type GUID's for all disks from command line?
1,507,025,938,000
I have two hard drives of different physical sector size. I would like to create an LVM volume group with them, however, when I do so with vgcreate, I get a warning telling me that the two disks have different physical sector size. Is there something to be concerned about?
You don't want to mix different sector sizes in a single VG. Newer versions of LVM don't even allow creating a VG on PVs with mixed sector sizes by default (older versions show only the warning message you saw).

The problem is not with the VG itself, but with the LVs and filesystems: if you resize or move an LV onto the PV with the larger sector size, the filesystem can get corrupted. You can create the VG, but you need to make sure your LVs are allocated only on PVs with the same sector size, and remember to keep this setup in the future (you can specify which PV will be used with lvcreate). Still, I recommend creating two separate VGs.

If one of your disks is a 512-byte-sector NVMe, you might also be able to switch it to 4096-byte sectors using nvme-cli (or vice versa, to give both disks the same sector size).

A few related links:

LVM Bugzilla: FS fails to mount if we lvextend the LV with a new PV with different sector size
LVM ML: Filesystem corruption with LVM's pvmove onto a PV with a larger physical block size
Commit changing the default behaviour
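Before creating the VG you can check exactly what you would be mixing by reading the queue attributes from sysfs. A sketch; the sysfs root is a parameter so the function can be exercised against a fake tree, and on a real system you would call it with no argument:

```shell
# Report logical and physical sector size for each disk under a sysfs root.
sector_sizes() {
    root="${1:-/sys/block}"
    for q in "$root"/*/queue; do
        [ -d "$q" ] || continue
        dev="${q%/queue}"; dev="${dev##*/}"
        printf '%s: logical=%s physical=%s\n' "$dev" \
            "$(cat "$q/logical_block_size")" "$(cat "$q/physical_block_size")"
    done
}
```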
Is there any risk to create an LVM group with two disks of different physical sector size?
1,507,025,938,000
When we perform this (on Linux Red Hat 7.x):

umount /grop/sdc
umount: /grop/sdc: target is busy.
        (In some cases useful info about processes that use
         the device is found by lsof(8) or fuser(1))

we can see that the umount failed because the target is busy. But when we do a remount, the remount succeeds:

mount -o rw,remount /grop/sdc
echo $?
0

Very interesting. Does remount use an option like umount -l does? What is the difference between a remount and an umount/mount?
From man mount:

remount
    Attempt to remount an already-mounted filesystem. This is commonly used to change the mount flags for a filesystem, especially to make a readonly filesystem writeable. It does not change device or mount point.
    The remount functionality follows the standard way the mount command works with options from fstab. This means the mount command doesn't read fstab (or mtab) only when both a device and dir are fully specified.

The remount option changes the mount options of an already-mounted file system (e.g. from ro to rw) without detaching it, so it works even while the file system is in use. That is why your remount succeeds where umount fails with:

target is busy.

If the file system is in use you can't unmount it properly; you need to find the processes accessing your files (fuser -mu /path/), kill those processes, and then unmount the file system.
what the difference between remount to umount/mount?
1,507,025,938,000
OS: CentOS 7
Test file: a.txt, 1.2G
Monitor command: iostat -xdm 1

The first scenario:

cp a.txt b.txt   # b.txt does not exist

The second scenario:

cp a.txt b.txt   # b.txt exists

Why does the first scenario not consume I/O, while the second one does?
It could well be that the data had not been flushed to disk during the first cp operation, but was during the second. Try setting vm.dirty_background_bytes to something small, like 1048576 (1 MiB), to see if this is the case: run sysctl -w vm.dirty_background_bytes=1048576, and then your first cp scenario should show I/O.

What's going on here?

Except in cases of synchronous and/or direct I/O, writes to disk get buffered in memory until a threshold is hit, at which point they begin to be flushed to disk in the background. This threshold doesn't have an official name, but it's controlled by vm.dirty_background_bytes and vm.dirty_background_ratio, so I'll call it the "Dirty Background Threshold." From the kernel docs:

dirty_background_bytes

Contains the amount of dirty memory at which the background kernel flusher threads will start writeback. Note: dirty_background_bytes is the counterpart of dirty_background_ratio. Only one of them may be specified at a time. When one sysctl is written it is immediately taken into account to evaluate the dirty memory limits and the other appears as 0 when read.

dirty_background_ratio

Contains, as a percentage of total available memory that contains free pages and reclaimable pages, the number of pages at which the background kernel flusher threads will start writing out dirty data. The total available memory is not equal to total system memory.

vm.dirty_bytes and vm.dirty_ratio

There's a second threshold, beyond this one. Well, more a limit than a threshold, and it's controlled by vm.dirty_bytes and vm.dirty_ratio. Again, it doesn't have an official name, so we'll call it the "Dirty Limit". Once enough data has been "written" but not committed to the underlying block device, further attempts to write will have to wait for write I/O to complete. (The precise details of which data they'll have to wait on are unclear to me, and may be a function of the I/O scheduler.)

Why?

Disks are slow. Spinning rust especially so: while the R/W head on a disk is moving to satisfy a read request, no write requests can be serviced until the read request completes and the write request can be started (and vice versa).

Efficiency

This is why we buffer write requests in memory and cache data we've read; we move work from the slow disk to faster memory. When we eventually go to commit the data to disk, we've got a good quantity of data to work with, and we can try to write it in a way that minimizes seek time. (If you're using an SSD, replace the concept of disk seek time with reflashing of SSD blocks; reflashing consumes SSD life and is a slow operation, which SSDs attempt, with varying degrees of success, to hide with their own write caching.) We can tune how much data gets buffered before the kernel attempts to write it to disk using vm.dirty_background_bytes and vm.dirty_background_ratio.

Too much write data buffered!

If the amount of data you're writing is too great for how quickly it's reaching disk, you'll eventually consume all your system memory. First, your read cache will go away, meaning fewer read requests will be serviced from memory and will have to go to disk instead, slowing down your writes even further! If your write pressure still doesn't let up, eventually memory allocations will have to wait on some of your write cache getting freed up, and that'll be even more disruptive. So we have vm.dirty_bytes (and vm.dirty_ratio); they let us say, "hey, wait up a minute, it's really time we got data to the disk, before this gets any worse."

Still too much data

Putting a hard stop on I/O is very disruptive, though; disk is already slow from the perspective of reading processes, and it can take several seconds to several minutes for that data to flush. Consider vm.dirty_ratio's default of 20 (percent of available memory): on a system with 16GiB of RAM and no swap, you might find your I/O blocked while you wait for 3.4GiB of data to get flushed to disk. On a server with 128GiB of RAM? You're going to have services timing out while you wait on 27.5GiB of data! So it's helpful to keep vm.dirty_bytes (or vm.dirty_ratio, if you prefer) fairly low, so that if you hit this hard threshold, it will only be minimally disruptive to your services.

What are good values?

With these tunables, you're always trading between throughput and latency. Buffer too much, and you'll have great throughput but terrible latency. Buffer too little, and you'll have terrible throughput but great latency. On workstations and laptops with only single disks, I like to set vm.dirty_background_bytes to around 1MiB, and vm.dirty_bytes to between 8MiB and 16MiB. I very rarely find a throughput benefit beyond 16MiB for single-user systems, but the latency hangups can get pretty bad for any synchronous workloads like web browser data stores.

On anything with a striped parity array, I find some multiple of the array's stripe width to be a good starting value for vm.dirty_background_bytes; it reduces the likelihood of needing to perform a read/update/write sequence while updating parity, improving array throughput. For vm.dirty_bytes, it depends on how much latency your services can suffer. Myself, I like calculating the theoretical throughput of the block device, using that to work out how much data it could move in 100ms or so, and setting vm.dirty_bytes accordingly. A 100ms delay is huge, but it's not catastrophic (in my environment).

All of this depends on your environment, though; these are only a starting point for finding what works well for you.
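The 100ms sizing rule is just arithmetic; a sketch with a hypothetical throughput figure (measure your own device rather than reusing this number):

```shell
# Assumed sustained write throughput of the device, bytes/s (hypothetical).
THROUGHPUT=$((200 * 1024 * 1024))   # 200 MiB/s
# Buffer at most ~100 ms worth of writes before blocking writers.
DIRTY_BYTES=$((THROUGHPUT / 10))
echo "vm.dirty_bytes=$DIRTY_BYTES"
# Apply with (requires root): sysctl -w vm.dirty_bytes=$DIRTY_BYTES
```

For a 200 MiB/s device this yields vm.dirty_bytes=20971520 (20 MiB).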
why linux cp command don't consume disk IO?
1,507,025,938,000
I'm looking for an easy solution to protect against random bit flips (so-called bit rot) of data stored on various drives. They are not disk arrays, just single disks, that I backup once a week. So I'm not looking for redundancy, but for file integrity -- i.e. I want to know if my files that I haven't accessed in a long time got randomly damaged or not, and hopefully repair them if possible. Please note that I want a generic solution, I'm specifically not looking for a filesystem like ZFS or btrfs (which I'm already aware of), partly because they have way too much overhead just for checksumming, and they're too complex / unstable (btrfs case). It doesn't have to be an automatic thing. That is, if I have to run a command to generate checksums (and maybe recovery) for new written files, that's fine, but it should be simple to use, not something like manually storing checksums and verifying and then copying the bad files back etc (which I'm doing already, that's why I ask for something simpler, less manual). At a glance, SnapRAID seems to do what I want, except it's made for disk arrays, which is my problem. I think that it could work with just 1 data disk and 1 parity disk, in which case the parity disk will probably be a mirror (backup) of the data disk, but I'm not sure. Other than that it does what I need: checksumming files, ability to verify this, and even repair them from the backup (parity). I'll still run a weekly backup on external media, but this local backup needs to be less manual because it's starting to be a pain to manage. Are there other tools like SnapRAID which are made for just 1 data disk or filesystem that they protect with automatic checksumming/backup, or should I just use SnapRAID? Does it work fine with just 1 disk? Because it uses a parity disk for the backup, I'll have to completely wipe my local backup disk before using it with SnapRAID, so I'm hesitant to just "test" it for myself without confirmation. 
One downside to this is that the parity disk will not be accessible as a normal disk, even though in this case it's not really a parity disk but just a mirror. So if there's another similar easy-to-use tool for dealing with just backing up and integrity of files of 1 disk instead of a disk array, I'd like to know. Thanks.
You should have a look at bup: a very efficient backup system based on the git packfile format, providing fast incremental saves and global deduplication (among and within files, including virtual machine images). bup supports bup-fsck (with par2) to verify or repair a bup repository.
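For comparison, the manual baseline the asker wants to automate looks roughly like this; bup-fsck with par2 folds the verification (plus optional repair data) into the repository itself:

```shell
# Manual integrity baseline: record checksums once, verify later.
mkdir -p /tmp/bitrot-demo && cd /tmp/bitrot-demo
printf 'important data\n' > a.txt
sha256sum a.txt > SHA256SUMS
# ...much later, check whether anything has silently changed:
sha256sum -c SHA256SUMS
# a.txt: OK
```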
Easy backup solution to protect against bit rot (or verify)
1,507,025,938,000
We have a Red Hat 7 machine, and the filesystem on device /dev/sdc is ext4.

When we perform:

mount -o rw,remount /grop/sdc

we get a write-protected error:

/dev/sdc read-write, is write-protected

despite /etc/fstab allowing read and write, and all subfolders under /grop/sdc having full read/write permissions:

/dev/sdc /grop/sdc ext4 defaults,noatime 0 0

Then we do umount -l /grop/sdc, and from df -h we see that the disk is no longer mounted. Then we perform mount /grop/sdc but get "busy". :-( So we have no choice and perform a reboot. And from the shell history we do not see that anyone remounted the disk read-only.

This is very strange: how did the disk device become write-protected? After a full reboot the disk is read/write as it should be. Checking dmesg after the reboot we see the following:

EXT4-fs warning (device sdc): ext4_clear_journal_err:4698: Marking fs in need of filesystem check.
EXT4-fs (sdc): warning: mounting fs with errors, running e2fsck is recommended
EXT4-fs (sdc): recovery complete

Can we say that e2fsck was performed during boot?

dmesg | grep sdc
[sdc] Disabling DIF Type 2 protection
[sdc] 15628053168 512-byte logical blocks: (8.00 TB/7.27 TiB)
[sdc] 4096-byte physical blocks
[sdc] Write Protect is off
[sdc] Mode Sense: d7 00 10 08
[sdc] Write cache: disabled, read cache: enabled, supports DPO and FUA
sdc: unknown partition table
[sdc] Attached SCSI disk
EXT4-fs warning (device sdc): ext4_clear_journal_err:4697: Filesystem error recorded from previous mount: IO failure
EXT4-fs warning (device sdc): ext4_clear_journal_err:4698: Marking fs in need of filesystem check.
EXT4-fs (sdc): warning: mounting fs with errors, running e2fsck is recommended
EXT4-fs (sdc): recovery complete
EXT4-fs (sdc): mounted filesystem with ordered data mode. Opts: (null)
EXT4-fs (sdc): error count since last fsck: 5
EXT4-fs (sdc): initial error at time 1510277668: ext4_journal_check_start:56
EXT4-fs (sdc): last error at time 1510496990: ext4_put_super:791
It appears your filesystem has become corrupt somehow. Most filesystems switch to read-only mode once they encounter an error. Perform the following commands in a terminal:

umount /dev/sdc
e2fsck /dev/sdc
mount /dev/sdc

If /dev/sdc is the hard disk which has your operating system on it, use a startup DVD or USB stick to boot from.
How an ext4 disk became suddenly write protected in spite configuration is read/write?
1,507,025,938,000
I use dd a lot. I live in constant fear of making a mistake one day, for example writing to sda (the computer's disk) instead of sdb (a USB disk) and then erasing everything I have on my computer. I know dd is supposed to be a power-user tool, but still, it doesn't make sense to me that you can basically wreck your whole computer by hitting the wrong key. Why isn't there a safety measure that prevents dd from writing to the disk it gets the command from? Not sure how anyone would do that on purpose. Please note that I didn't try this myself, I've only read about it, so I could be wrong about all that.
It's reasonable to ask why the dd command doesn't first check whether its target contains a mounted filesystem, and then prompt for confirmation or require a special flag. One simple answer is that it would break any scripts that expect to be able to use dd in this way, and that aren't designed to handle interactive input. For instance, it can be reasonable to modify the partition table of a raw device while a partition of that same device is mounted; you just have to be careful to only modify the first sector. There are a huge number of Linux systems out there in the wild, and it's impossible to know what kind of crazy setups people have come up with. So the maintainers of dd are very unlikely to make a backwards-incompatible change that would cause problems for an unknown number of environments.
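Nothing stops you from adding such a guard yourself, though. A hypothetical wrapper (the name and behaviour are my own invention, not a real tool) that checks /proc/mounts before handing the target to dd:

```shell
# Hypothetical "safedd": refuse to write to a device that is mounted.
safedd() {
    target="$1"; shift
    # awk exits 0 (success) only if the target appears as a mount source.
    if awk -v t="$target" '$1 == t { found=1 } END { exit !found }' /proc/mounts
    then
        echo "refusing: $target is mounted" >&2
        return 1
    fi
    dd of="$target" "$@"
}
# Usage: safedd /dev/sdb if=image.img bs=4M
```

This only catches mounted targets; a device that is your boot disk but currently unmounted would still slip through, which illustrates why no check of this kind can be fully general.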
Why is dd not protected against writing on the active disk ?
1,507,025,938,000
I wish to clone a large disk (a 500GB SSD, for what it's worth), and I am leaning toward using cat, as suggested by Gilles here. But what gave me pause is that I do not really know what cat does upon read errors. I know how dd behaves in these cases, i.e. the command dd if=/dev/sda of=/dev/sdb bs=64K conv=noerror,sync status=progress does not stop for errors on read, and pads the read error with zeroes (the sync option) so that data stays in sync. Unfortunately, it does so by padding the zeroes at the end of the block to be written, so that a single error in an early 512-byte read messes up the whole 64K of data (even worse with larger, faster block sizes). So I am wondering: can I do better/differently with cat? Or should I just move on to Clonezilla?
cat stops if it encounters a read or write error. If you’re concerned there might be unreadable sectors on your source drive, you should look at tools such as ddrescue.
Errors on cloning disk with cat
1,507,025,938,000
If I put /var as the first partition, then /home and /, will the partition for /var have better performance than if I put the other partitions closer to the start of the disk? Does the sector position on the disk matter? I have heard some people say that swap should be placed closer to the start of the disk to get higher performance.
Yes, it is true. The platters in a disk rotate at a fixed speed (7200 RPM in the common case). As such, when the head is over the outer portion of the platter, more surface area passes under the head per rotation than on the inside track. Thus more I/O per rotation is possible. (The 'beginning' of the drive is the outside tracks of the platters.) Now, whether this is going to be at all noticeable, especially for swap, which you shouldn't be using extensively anyway, is debatable.
Does the position of partition on disk affect speed?