1,336,040,124,000 |
I am trying to copy a file from root's directory over to my home directory. I had to extract a file as the root user. I can see it is extracted into root's directory, but I now want to access it from my home directory. I have tried a number of commands to do this but they all seem to fail.
|
Assuming your username is deirdre:
As root, you have to move the file from root's homedir to deirdre's homedir and change its owner to deirdre:
mv /root/somefile ~deirdre/
chown deirdre: ~deirdre/somefile
Once you've done this, you will be able to login as deirdre and access the file.
It isn't clear from your question whether the file is in root's home directory or in the filesystem root; in the latter case, the commands are instead:
mv /somefile ~deirdre/
chown deirdre: ~deirdre/somefile
The deirdre: form isn’t supported by all chown implementations; if this doesn’t work for you, use chown deirdre ~deirdre/somefile. chown deirdre: automatically changes the file’s group to deirdre’s login group.
| Copying a file from ROOT to home directory |
1,336,040,124,000 |
I'd like to use cp -b to copy a file to a destination, possibly creating a backup file of the destination path if it already exists. But, if the backup file already exists, I'd like to have cp fail with an error.
I know I can use -n to avoid clobbering the target file, but I want to instead refuse to clobber the backup file.
Is there a way to do that? I happen to be using GNU cp on Linux, and I'm willing to accept an answer that is specific to Linux if no POSIX option is available.
|
If you want to avoid clobbering any backup files with GNU cp, you can use numbered backups:
cp --backup=t source destination
Rather than overwrite a backup, this creates additional backups.
Example
As an example, let's consider a directory with two files:
$ ls
file1 file2
Now, let's copy file1 over file2:
$ cp --backup=t file1 file2
$ ls
file1 file2 file2.~1~
As we can see, a backup was made.
Let's copy it again:
$ cp --backup=t file1 file2
$ ls
file1 file2 file2.~1~ file2.~2~
Another backup was made.
Documentation
From man cp, just before the end of the "description" section, the various possible options for --backup are itemized:
The backup suffix is '~', unless set with --suffix or
SIMPLE_BACKUP_SUFFIX. The version control method may be selected via
the --backup option or through the VERSION_CONTROL environment
variable. Here are the values:
none, off
never make backups (even if --backup is given)
numbered, t
make numbered backups
existing, nil
numbered if numbered backups exist, simple otherwise
simple, never
always make simple backups
As a special case, cp makes a backup of SOURCE when the force and
backup options are given and SOURCE and DEST are the same name for
an existing, regular file.
| Can I use the 'cp' command with the -b option, but have it fail if the backup file already exists? |
1,336,040,124,000 |
I have made my peace with the fact that NTFS(-3g) on Linux will be slower than NTFS performance on Windows. I can write to my external NTFS-formatted USB 3.0 HDD at about 100+ MB/s on Windows while I have to settle for 30 MB/s (give or take) on my Debian (Wheezy) box.
That's not the problem, however. I found (empirically) that if I want to copy, say, 20 files from my box to the HDD, the copying starts off at the "normal" 30 MB/s but gradually slows down to a miserly 4 MB/s!! Whereas, if I do the copy, say, 5 files at a time (serially) the copying speed remains at 30 MB/s for all four copy processes. This is not specific to Debian, by the way. I've observed similar behavior on Fedora and Ubuntu.
My question is, is this behavior normal or should I be concerned? If something is wrong, how should I go about debugging it/fixing it?
|
You are seeing the effects of drive head seek latency when running the parallel copies.
With most file systems, including NTFS and ext[234], data is stored in distinct locations on the drive. File system information here, block allocation data over there and file data way over there.
When writing a single file, the metadata changes relatively infrequently, so the head is mostly in the right place to be writing data blocks. When running 20 concurrent writes, the head has to dash between the block allocation areas and the data areas about 20 times as often and disk head seeks are measured in the 10s of milliseconds.
When writing to a native filesystem, some liberties can be taken with the amount of seeking done (for example, keeping a copy of part of the free-list in memory and only writing that out infrequently, thus saving a bunch of seeks). I expect the same applies to NTFS under Windows, but the NTFS Linux filesystem developers can't afford to be as cavalier, opting for consistency over performance.
| Copying to an external NTFS partition: slows down when I copy many files at once |
1,336,040,124,000 |
I don’t want the “last modified” attribute to be changed to the current date when copying files to a mounted Samba folder. How can I do this?
This behavior occurs with (K)Ubuntu 12.04 and Ubuntu 15.10. It can be reproduced using GUI browsers (tested with nautilus 3.4.2 and dolphin 2.0) and using cp -p in terminal.
The Samba folder was mounted to the local file system either with:
sudo mount -t smbfs //mynas/folder /mnt/nas/ -o user=username
or
sudo mount -t cifs //mynas/folder /home/mnt/nas/ -o user=username.
Notes:
When connecting to the same Samba folder (either with nautilus or dolphin) using a URL like smb://username@mynas/folder/, then I can copy files to it without having the “modified time” replaced with the current time!
But mounting a Samba folder is more convenient, also not all tools support the smb protocol. This is why using the URI smb:// is not a workaround for me.
|
A note on option case: cp -P (uppercase) is not what you want here; it means "never follow symbolic links". It is the lowercase option that preserves timestamps: cp -p.
As described in the comment section of the question, using the correct gid and uid solved the problem:
sudo mount -t cifs //mynas/folder /home/mnt/nas/ -o user=username -o gid=1000,uid=1000
| Prevent updates to 'modified time' when copying files to a mounted Samba folder |
1,336,040,124,000 |
In my first question: How do I get the creation date of a file on an NTFS logical volume, I asked how to get the "Date created" field in NTFS-3G. Now, that I know I can get the "Date created", I have started adding files onto my NTFS-3G partition and would like to set the "Date created" of each file to its "Date modified" value.
Since this needs to be done on a whole repository of files, I would like to recursively apply it to a single directory on down. If I know how to do this for a single file, I could probably do the recursion myself, but if you want to add that in I would be more than happy.
|
The system.ntfs_times extended attribute contains 32 bytes that consist of the btime, mtime, atime, ctime as 64bit integers.
You can list them with for instance:
getfattr --only-values -n system.ntfs_times -- "$file" |
perl -MPOSIX -0777 -ne 'print ctime $_/10000000-11644473600 for unpack("Q4",$_)'
So you can just copy the second integer to the first with something like:
getfattr -n system.ntfs_times -e hex -- "$file" |
sed '2s/0x.\{16\}\(.\{16\}\)/0x\1\1/' |
setfattr --restore=-
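To make the substitution concrete, here it is applied to a fabricated getfattr dump (the hex values below are made up for illustration): each 16-hex-digit field is one of the four timestamps, and the sed expression overwrites the first field (btime) with a copy of the second (mtime).

```shell
printf '# file: demo\nsystem.ntfs_times=0x1111111111111111222222222222222233333333333333334444444444444444\n' |
sed '2s/0x.\{16\}\(.\{16\}\)/0x\1\1/'
```

On line 2 of the output, the first 16 hex digits have been replaced by a copy of the second 16. For the recursion asked about in the question, the real getfattr | sed | setfattr pipeline can be wrapped in a per-file find ... -exec sh -c '...' loop (an untested sketch: it requires an NTFS-3G mount with extended-attribute support).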
| How do I recursively set the date created attribute to the date modified attribute on NTFS-3G? |
1,336,040,124,000 |
I am searching for files that have either been created or modified within the last 60 minutes. I find these via
find ~/data/ -cmin -60 -mmin -60 -type f
where ~ is the home directory, /usr/wg.
After that I want to copy these files and preserve the main folder structure...
The results of the find command are for instance...
/usr/wg/data/foo1/file1.txt
/usr/wg/data/foo2/bar2/file2.txt
...
Now when I use
rsync -a `find ~/data/ -cmin -60 -mmin -60 -type f` ~/vee/
In the folder ~/vee/ I get
/usr/wg/vee/usr/wg/data/foo1/file1.txt
/usr/wg/vee/usr/wg/data/foo2/bar2/file2.txt
...
While I want
/usr/wg/vee/foo1/file1.txt
/usr/wg/vee/foo2/bar2/file2.txt
...
How do I achieve this? I looked at
How to copy modified files while preserving folder structure
https://serverfault.com/questions/180853/how-to-copy-file-preserving-directory-path-in-linux
https://stackoverflow.com/questions/1650164/bash-copy-named-files-recursively-preserving-folder-structure
and several other answers, but I do not seem to get it right.
|
You should rewrite your command this way:
cd ~/data; find . -cmin -60 -mmin -60 -type f
so that find outputs paths relative to ~/data.
Then copy while recreating the subdirectory structure, for example with GNU cp's --parents option:
cd ~/data; find . -cmin -60 -mmin -60 -type f -exec cp --parents {} ../vee/ \;
(A plain -exec cp -r {} ../vee/ \; would flatten the files into ~/vee, losing the foo1 and foo2/bar2 subdirectories.)
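As a self-contained illustration of the relative-path idea, here is a throwaway run using GNU cp's --parents option, which recreates the leading directories at the destination (directory and file names are made-up examples):

```shell
cd "$(mktemp -d)"                       # throwaway playground
mkdir -p data/foo1 data/foo2/bar2 vee
echo 1 > data/foo1/file1.txt
echo 2 > data/foo2/bar2/file2.txt
cd data
# find emits ./foo1/file1.txt etc.; cp --parents recreates those paths under ../vee
find . -type f -exec cp --parents {} ../vee/ \;
ls ../vee/foo1 ../vee/foo2/bar2
```

Note that --parents is a GNU cp extension; on other systems, cpio -pdm or rsync --relative achieve the same effect.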
| Finding files and copy with folder structure intact |
1,336,040,124,000 |
In directory /source I'd like to copy all files with a file name more than 15 characters to directory /dest. Is there a UNIX command to do this?
EDIT: Although this question explains how to search for a filename of a certain length, my question also asks how to copy the file.
|
You can make a pattern with 16-or-more characters and copy those files.
A simple (but not elegant) approach, using 16 ? characters, each matching any single character:
for n in /source/????????????????*;do [ -f "$n" ] && cp "$n" /dest/; done
After the 16th ?, use * to match any number of characters. The pattern might not really match anything, so a test -f to ensure it is a file is still needed.
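The same length test can be done with find's -name option and the same pattern of 16 ?s followed by * (exercised here on throwaway files; -maxdepth is a GNU/BSD extension, not POSIX):

```shell
cd "$(mktemp -d)"
mkdir -p source dest
touch source/short.txt source/a_name_longer_than_15_chars.txt
# only basenames of 16 or more characters match the pattern
find source -maxdepth 1 -type f -name '????????????????*' -exec cp {} dest/ \;
ls dest/          # only the long name is copied
```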
| Copy All Files With Certain Length File Name |
1,336,040,124,000 |
Can we set permissions, either system-wide or for a particular directory, on a Linux box (Scientific Linux in my case), such that one can read files but cannot copy, move or delete them?
Update: my scenario is as follows.
We developed a GUI program that ships with some images we created with a lot of time and effort. Our directory structure is:
/GUI/program/GUI.exe
/GUI/images/A/A1.jpeg A2.png .... A200.png
/GUI/images/B/B1.png B2.png .... B200.png
.
.
/GUI/images/I/I1.png I2.png .... I200.png
Needless to say, ./GUI.exe loads these images as the user interacts with it.
Now I need to hide /GUI/images/*.
Is there any way?
|
If a file can be read, it can be copied. You can, however, stop the file from being deleted or moved, by not giving write permissions to the directory where the file resides.
Edited with additional info since the question has been amended:
Given the scenario you've now added to the question, you could do this:
create a user that will be used only for this program, e.g. guiuser
change the ownership of the /GUI/images directory to e.g. guiuser
change the permissions of the directory and files inside it so that only guiuser has read permission
change the owner of the program GUI.exe to be owned by guiuser
change permissions of the program to run setuid (chmod u+s /GUI/program/GUI.exe)
When your users run the program, that program will have the access rights of guiuser, so the program will be able to read the files even though the ordinary user doesn't have permission.
| How do I disable copy permissions? |
1,336,040,124,000 |
When using rsync to copy files over the network, I give a path so that rsync will know where to put the file on the remote server.
rsync -av /home/ME/myfile user@remoteserver:/home/ME/
If I leave off the remote path, where will rsync put the file? Eg:
rsync -av /home/ME/myfile user@remoteserver
|
rsync -av /home/ME/myfile user@remoteserver
This command will not send the file to your remote server at all. Because there is no colon after the hostname, rsync treats user@remoteserver as a local path, so it just makes a duplicate of /home/ME/myfile in your current working directory, named user@remoteserver. (With the trailing colon, rsync -av /home/ME/myfile user@remoteserver: would copy the file into the remote user's home directory.)
Just like when you want to create a backup of a file before editing it with cp, you do
cp -a /etc/fstab /etc/fstab.org
| Where does rsync copy a file if I don't specify the remote path? |
1,336,040,124,000 |
I have the following folder and files:
.
├── photos
│ ├── photo-a
│ │ ├── photo-a.meta.json
│ │ └── photo-a.json
│ ├── photo-b
│ │ ├── photo-b.meta.json
│ │ └── photo-b.json
...
There are more folders and files in the photos folder in the same structure
I would want to copy all the files photo-a.json, photo-b.json and others into another folder called photos-copy. Basically, I want to exclude all the files that end with .meta.json but copy only those that end with .json files.
So, if done correctly, photos-copy folder would look like this:
.
├── photos-copy
│ ├── photo-a.json
│ └── photo-b.json
...
I tried something along cp -a ./photos/*.json ./photos-copy but this will end up copying everything because the .meta.json files also end with .json extensions.
How can I do this?
|
A couple of options spring to mind.
rsync --dry-run -av --prune-empty-dirs --exclude '*.meta.json' --include '*.json' photos/ photos-copy/
(As written this only previews what would be transferred; drop --dry-run to actually copy. Note that rsync keeps the photo-a/photo-b subdirectories rather than flattening them into photos-copy.)
Or if you don't have rsync (and why not!?), this will copy the files retaining the structure
cp -av photos/ photos-copy/
rm -f photos-copy/*/*.meta.json
This variant will flatten the files into a single directory
cp -av photos/*/*.json photos-copy/
rm -f photos-copy/*.meta.json
You can do more fancy things with bash and its extended pattern matching, which here tells the shell to match everything that does not contain .meta in its name:
shopt -s extglob
cp -av photos/*/!(*.meta).json photos-copy/
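To see the extglob variant in action on a throwaway copy of the layout from the question (bash only; -O enables the shell option before the command line is parsed, which matters for extglob patterns):

```shell
cd "$(mktemp -d)"
mkdir -p photos/photo-a photos-copy
touch photos/photo-a/photo-a.json photos/photo-a/photo-a.meta.json
# !(*.meta).json matches photo-a.json but rejects photo-a.meta.json
bash -O extglob -c 'cp -v photos/*/!(*.meta).json photos-copy/'
ls photos-copy/   # photo-a.json only
```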
| How can I copy all files while excluding files with a certain pattern? |
1,336,040,124,000 |
I want to copy an entire file structure (with thousands of files and hundreds of directories); it's a hierarchy of directories, and there are node_modules directories in it that I want to exclude from the copying process.
Is there a Unix command to copy from a directory and all of its files and sub-directories recursively with an option to say don't include the directories with the name <name> ?
Something like :
cp root/ rootCopy/ --except node_modules
?
If not, is there a simple way to do that from the command line without writing a bash script or something?
|
You can try the rsync or tar commands.
From rsync man page
--exclude=PATTERN exclude files matching PATTERN
--exclude-from=FILE read exclude patterns from FILE
rsync -av --exclude 'node_modules' root/ rootCopy/
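Since tar is mentioned but not shown, here is a tar-based sketch of the same exclusion (GNU tar; by default --exclude patterns are unanchored, so 'node_modules' matches a directory at any depth). The tree below is a throwaway example:

```shell
cd "$(mktemp -d)"
mkdir -p root/src/node_modules root/src/lib rootCopy
touch root/src/node_modules/dep.js root/src/lib/app.js
# pack root/ minus node_modules, unpack into rootCopy/
tar cf - --exclude='node_modules' -C root . | tar xf - -C rootCopy
find rootCopy     # node_modules is gone, everything else survives
```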
| Take all the file structure but these directories |
1,336,040,124,000 |
I have a folder root_folder containing a lot of subfolders. Each of these subfolders contains a small number of files (between 1 and 20) and I want to copy all the subfolders containing at least 5 files into another folder new_folder. I have found how to print the folders that interest me: https://superuser.com/questions/617050/find-directories-containing-a-certain-number-of-files but not how to copy them.
|
You can loop over the find result and copy each qualifying folder with cp -R:
IFS=$'\n'
for source_folder in $(find . -maxdepth 1 -type d -exec sh -c 'printf "%s\t" "$1"; ls "$1" | wc -l' sh {} \; |
    awk -F'\t' '$NF >= 5 {print $1}'); do
  if [ "$source_folder" != "." ]; then
    cp -R "$source_folder" /destination/folder
  fi
done
(Setting IFS to a newline lets names with spaces survive the unquoted expansion, and passing the directory name as an argument to sh -c rather than embedding {} in the script avoids quoting problems; names containing newlines or tabs will still break this.)
| Copy subfolders containing at least n files |
1,336,040,124,000 |
I have a folder with a lot of files. I want to copy all files which begin with one of these names (separated by spaces):
abc abd aer ab-x ate
to another folder. How can I do that?
|
With csh, tcsh, ksh93, bash, fish or zsh -o cshnullglob, you can use brace expansion and globbing to do that (-- is not needed for these filenames, but I assume they are just examples):
cp -- {abc,abd,aer,ab-x,ate}* dest/
If you'd rather not use brace expansion, you can use a for loop (here POSIX/Bourne style syntax):
for include in abc abd aer ab-x ate; do
cp -- "$include"* dest/
done
If you have a very large number of files, this might be slow due to the invocation of cp once per include pattern. Another way to do this would be to populate an array, and go from there (here ksh93, zsh or recent bash syntax):
files=()
includes=(abc abd aer ab-x ate)
for include in "${includes[@]}"; do
files+=( "$include"* )
done
cp -- "${files[@]}" dest/
| copying files with particular names to another folder |
1,336,040,124,000 |
How can I compare all files in two directories and copy the ones that are different to another directory? For example, say we have dir1 and dir2:
dir1:
build.gradle
gradle.properties
somejar.jar
javacode.java
anotherjar.jar
dir2:
build.gradle <-- different from build.gradle in dir1
gradle.properties
somejar.jar
javacode.java <-- different from javacode.java in dir1
yetanotherjar.jar
How may I create a new directory dir3 that contains the different files from dir2, the common files in dir1 and dir2 and all uncommon files in both dir1 and dir2? dir3 should contain:
dir3:
build.gradle <-- from dir2
gradle.properties <-- these are common files both in dir1 and dir2
somejar.jar <--
javacode.java <-- from dir2
anotherjar.jar <-- from dir1
yetanotherjar.jar <-- from dir2
|
All you need is
cp -n dir2/* dir1/* dir3/
With -n (no-clobber), cp copies dir2's files first; any file from dir1 whose name already exists in dir3 is then skipped. The result is dir2's versions of the common and differing files, plus the files unique to either directory.
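A throwaway demonstration of why the source ordering matters (file contents are made up; newer GNU cp returns a nonzero status when -n skips a file, hence the || true):

```shell
cd "$(mktemp -d)"
mkdir -p dir1 dir2 dir3
echo from-dir1 > dir1/build.gradle        # differs between the two dirs
echo common    > dir1/somejar.jar         # only in dir1
echo from-dir2 > dir2/build.gradle
echo only2     > dir2/yetanotherjar.jar   # only in dir2
cp -n dir2/* dir1/* dir3/ || true         # dir2 listed first, so its versions win
cat dir3/build.gradle                     # → from-dir2
```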
| Comparing each file in two directories and copying it to another if it differs from its counterpart |
1,336,040,124,000 |
I have a statistical application which runs every minute and creates charts accordingly.
In order to make these charts available to other users, I need to copy the whole folder containing the charts and paste it to a shared folder where other users can see the contents.
How can I automate this process so that e.g each 5 minutes the files and folders are updated?
|
This sounds like something which could be solved nicely with rsync. In its simplest form for a directory it can be called like this:
rsync -a sourceFolder/ destinationFolder/
(The -a option makes rsync recurse into the directory and preserve attributes; without -r or -a, rsync skips directories.)
Called from a crontab entry every 5 minutes:
*/5 * * * * /usr/bin/rsync -a sourceFolder/ destinationFolder/
For options, permissions, and excluding particular files or directories, see man rsync.
| How can I automate the process of copying files from one folder to another in Centos |
1,336,040,124,000 |
I have the string xyz which is a line in file1.txt, I want to copy all the lines after xyz in file1.txt to a new file file2.txt. How can I achieve this?
I know about the cat command, but how do I specify the starting line?
|
Using GNU sed
To copy all lines after xyz, try:
sed '0,/xyz/d' file1.txt >file2.txt
0,/xyz/ specifies a range of lines starting at the very beginning of the input and ending with the first line matching xyz (the 0 start address is a GNU extension that lets the range end on line 1 if necessary). d tells sed to delete those lines.
Note: For BSD/MacOS sed, one can use sed '1,/xyz/d' file1.txt >file2.txt but this only works if the first appearance of xyz is in the second line or later. (Hat tip: kusalananda.)
Another approach, as suggested by don_crissti, should work for all sed:
{ printf %s\\n; cat file1.txt; } | sed '1,/xyz/d' >file2.txt
Example
Consider this test file:
$ cat file1.txt
a
b
xyz
c
d
Run our command:
$ sed '0,/xyz/d' file1.txt >file2.txt
$ cat file2.txt
c
d
Using awk
The same logic can used with awk:
awk 'NR==1,/xyz/{next} 1' file1.txt >file2.txt
NR==1,/xyz/{next} tells awk to skip over all lines from the first (NR==1) to the first line matching the regex xyz. 1 tells awk to print any remaining lines.
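The awk variant gives the same result on the sample file used above (run here in a throwaway directory):

```shell
cd "$(mktemp -d)"
printf 'a\nb\nxyz\nc\nd\n' > file1.txt
# skip lines from the first through the first match of /xyz/, print the rest
awk 'NR==1,/xyz/{next} 1' file1.txt > file2.txt
cat file2.txt     # → c, then d
```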
| How to copy the rest of lines of a file to another file [duplicate] |
1,336,040,124,000 |
I have a directory that have zillions of files. As soon as I try to copy this directory, I receive a message that a file could not be copied (corrupt ?) and the copy stops.
Is there any command I can type on terminal that can check all files on that directory tree and list all files that could not be copied or that are corrupt? I would not like to copy the files at this time, just to check if files would not be possible to copy/read if I try.
I am on OS X Mountain Lion but any unix command should work fine, I hope.
thanks.
NOTE: by corrupt I mean a file that cannot be read or copied.
|
rsync can be used to copy directories, and is capable of restarting the copy from the point at which it terminated if any error causes the rsync to die.
Using rsync's --dry-run option you can see what would be copied without actually copying anything. The --stats and --progress options would also be useful, and --human-readable (or -h) makes the output easier to read.
e.g.
rsync --dry-run -avh --stats --progress /path/to/src/ /path/to/destination/
I'm not sure if rsync is installed by default on Mac OS X, but I have used it on Macs so I know it's definitely available.
For a quick-and-dirty check on whether files in a subdirectory can be read or not, you could use grep -r XXX /path/to/directory/ > /dev/null. The search regexp doesn't matter, because output is being discarded anyway.
STDOUT is being redirected to /dev/null, so you'll only see errors.
The only reason I chose grep here was because of its -r recursion option. There are many other commands that could be used instead of grep here, and even more if used with find.
| Finding corrupted files |
1,336,040,124,000 |
I am looping through a number of files, and $file represents the current file.
How can I make copies or renames and keep the extension the same for this current file?
e.g. if $file = x.jpg
How to make a copy of $file's with a filename of x_orig.jpg
So far I have:
for file in /tmp/p/DSC*.JPG; do
cp $file $file+'_orig'.JPG
done
but that copies the file
DSCF1748.JPG
to
DSCF1748.JPG+_orig.JPG
whereas I want a copy named
DSCF1748_orig.JPG
Similarly, using cp $file "$file"_orig.JPG
results in files such as DSCF1748.JPG_orig.JPG
I want to get the _orig in the middle of the filename...
|
You can use bash's string substitution features for that:
for file in /tmp/p/DSC*.JPG; do
cp "$file" "${file%.JPG}"_orig.JPG
done
The general format is ${string%substring} which will remove substring from the end of string. For example:
$ f=foobar.JPG; echo "${f%.JPG}"
foobar
| How to copy multiple files but keep their extensions the same? |
1,336,040,124,000 |
I am having to put together a script that will ssh into devices to run a command such as "show running-config" and save the output to a file on my local machine. I have done similar tasks like this right from the command line and have it save the file to my local system. For example,
ssh test@remote_sys ls > ls_from_remotes_sys
And the file ls_from_remotes_sys is on my local system. However, I will need to script this and the only way I know how to do that is with expect. So I have put together this:
#!/usr/bin/expect -f
spawn ssh test@remote_sys ls > ls_from_remotes_sys
expect "test@remote_sys's password:"
send "password\r"
interact
The expect script works but the file gets saved to the remote system, which is not what I want.
Question 1 - Why does the file get saved to the local system from command line and why does it get saved to the remote system with expect?
Question 2 - Is there a way to send the file to my local system? (ssh back is not an option)
I was thinking that maybe instead of redirecting into a file I could just have the script output the command results to my screen. So,
Question 3 - If I do this, how can I capture the stdout on my screen from the remote system and send it to a file on the local system?
|
The expect command you use:
spawn ssh test@remote_sys ls > ls_from_remotes_sys
This, effectively calls
exec("ssh","test@remote_sys","ls",">","ls_from_remotes_sys")
That means the three parameters (ls, > and the filename) are sent to the remote system; ie the redirection happens on the remote system.
A kludge could be to call it via sh -c "ssh test@....".
Another alternative would be to have the redirection done outside of the expect script
e,g: if you called this "get_ls"
#!/usr/bin/expect -f
spawn ssh test@remote_sys ls
expect "test@remote_sys's password:"
send "password\r"
interact
Then you could do get_ls > ls_from_remotes_sys.
| Capture stdout from ssh session to local machine |
1,336,040,124,000 |
There is a collection of .doc files, in addition to other types of files, on a remote server (which supports SCP).
I am trying to write a script to retrieve the latest (most recently modified) .doc file from the remote server. The path to my current working directory cannot be absolute, since my script may be deployed on another server.
I am able to solve the problem partially in two steps:
Copy all the .doc files from the remote server to my local ~/Downloads folder:
scp -i key.pem abc@xyz:/tmp/*.doc ~/Downloads/
Select the latest file from ~/Downloads and copy it to the required folder:
cd ~/Downloads
latest_file=$(ls -t *.doc | head -n 1)
cp -p "$latest_file" /current working directory
How can I copy the latest .doc file present in the remote server xyz under the folder /tmp to my local machine in a single statement without downloading all of them into an intermediate folder?
|
I am not really clear what your problem is, but if you're trying to copy to the current directory then just use . to refer to the current directory so that your command is:
scp -i key.pem abc@xyz:/tmp/*.doc .
| Copying latest file from remote server |
1,336,040,124,000 |
I just wanted to confirm this behaviour. If I copy something in Midnight Commander with the option to put it in the background and then I start a second process of copying something in that same Midnight Commander console, does this break the first process? I have the feeling it does.
And does somebody know how to get a visual output about the state of the background process?
|
I think I found out what happened. Midnight Commander can handle several background processes. But it might stop them if Midnight Commander is exited. It will, however, resume on restart.
| Does Midnight Commander cancel a background copy if another background copy is initiated? |
1,336,040,124,000 |
I often transfer large files from a remote server using rsync with the following command :
rsync -rvhm \
--progress \
--size-only \
--stats \
--partial-dir=.rsync-partial \
"user@server::module/some_path" "some_path"
That way, even if the transfer fails, I can resume it later and I know that I'll only have complete files in some_path on the destination, since all partial transfers will stay in some_path/.rsync-partial.
When a transfer resumes, rsync first checks the partial file to determine where exactly to resume (I guess) and I'm fine with that. The problem is that when it's done with this check, the partial file gets copied outside of the .rsync-partial folder for the resume. Therefore, I'm left with a partial transfer (that will be replaced or deleted at the next pause or when the transfer finishes) along with the ongoing one.
This is inconvenient since:
I don't have much free space on the destination;
The files are quite large;
If the partial transfer went "far enough", I might not be able to resume it since rsync will try to copy it first and will complain that there isn't enough space available to resume;
There is no reason that I can think of to keep a copy of the partial file to resume the transfer: the partial file itself should be used.
Is there a way to avoid this behavior or is this by design ? And if so, why would we want it to work this way ?
|
It doesn't look like this is possible yet although there is a patch available allowing the use of the --inplace option in conjunction with --partial-dir to avoid this copy.
Refer to Bug 13071 for further details but from the description:
If --inplace is used with --partial-dir, or with any option implying it (--delay-update), the inplace behavior will take place in the partial directory, each file being moved to its final destination once completed.
Unfortunately, so far this patch has not been applied.
| rsync keeps previous partial file when resuming |
1,336,040,124,000 |
I have a notebook which can't boot up Ubuntu and I want to copy all of the files to an external hard drive with USB connection to save them. After I want to install a new Ubuntu to the notebook, how can I copy all of the files to an external hard drive using just the grub console, or (initramfs) console?
Update:
sudo lsblk
sudo: lsblk: command not found
sudo vgscan
No volume groups found
sudo lvs
No volume groups found
Is this possible?
|
After I want to install a new Ubuntu to the notebook, how can I copy...
I think you mean first copy the files to an external drive, then reinstall Ubuntu.
You can't copy files with the GRUB console: the filesystem drivers of GRUB are basically read-only. (You can write into e.g. the /boot/grub/grubenv file, but only by overwriting its existing contents - you cannot increase the size of the file.)
With the initramfs console, you would first mount the necessary filesystems (= at least the filesystem you want to copy from and the filesystem of the external hard drive you want to copy to.), and then use the regular cp command to copy the files.
But there is probably a better option: prepare an Ubuntu Live DVD/USB, boot from it, and then you have all the basic filesystem manipulation tools (even GUI!) available to you.
| How to copy files with grub console? |
1,425,620,266,000 |
I get all the modified svn files using the svn st | grep ^M command:
M student/includes/class_student_promotion.php
M student/includes/class_student_report.php
M student/resources/js/student_co_scholistic_activities.js
M staff/php/edit_staff_details.php
M library/includes/class_book_return.php
M library/includes/class_book_item_stock_entry.php
M library/includes/class_library_common_function.php
M library/includes/auto_book_name_list.php
M library/includes/class_book_issue.php
M library/php/book_item_details_entry.php
M includes/connection.php
M includes/links.php
M staff_student/php/student_time_table.php
M assignment/php/ajax_created_assignment_report.php
M assignment/php/ajax_submitted_assignment_report.php
M student_attendance/php/date_wise_attendance_summary_report.php
M student_attendance/resources/js/holiday_master.js
and my requirement is that the files listed in the output be copied with the same folder structure preserved: for example, the includes folder should be created, and only the files listed by svn st copied into it.
This command
cp `svn st | ack '^M' | cut -b 8-` backup
can copy all the modified files to a directory, but what I expect is that the folders holding the modified files should also be created.
|
Assuming that with folder you mean directory and assuming you have no spaces or special characters in your file and directory names:
svn st | ack '^M' | cut -b 8- | cpio -pdmv backup
This is cpio in pass-through mode (-p). It takes a list of filenames to copy from stdin. -d allows it to create directories, -m preserves modification times and -v makes it verbose. Any filenames on stdin are copied to the destination directory, student/includes/class_student_promotion.php will be copied to backup/student/includes/class_student_promotion.php
| Find all the modified svn files and copy files modified in the same folder structure |
1,425,620,266,000 |
Today I found a little bug in my own written CMS.
Now I want to rewrite all the files on the server with the same filename.
However, I do have some customers with the specific file customized so I can't overwrite all of the files.
Since some customers have a customized file, I need to check the filesize.
When the filesize is different than it's source, it should overwrite.
When it's the same filesize, it should skip the file.
A couple of months ago I asked a somewhat similar question, about how to overwrite multiple files from one source. I managed to fix this with:
echo /home/*/*/phpthumb | xargs -t -n 1 cp -r /home/test/testuser/phpthumb/*
Can I use this in a .sh script with a small if / else for rewriting or skipping the file?
If yes, how can I accomplish this?
My unix experience is average so if something is not clear about my question, feel free to ask.
Kind Regards.
|
I assume you changed the source code in one of your source files on one of your servers and want to port the fix onto other web sites, right? If you kept a copy of the original file (I always do when I don't know the project by heart), use your best asset: diff:
diff -Nau old/file new/file > file.patch
You will have a differential between the original and corrected source files, which you can patch all your servers with, through SSH, for instance:
# See what's to be done, no action
cat file.patch | ssh server "patch -d <directory> --dry-run"
# Apply the patch if all seems good
cat file.patch | ssh server "patch -d <directory>"
You might have to tweak the patch header if your unchanged copy is in a different directory than the corrected file. But basically, if both are in the same directory, file.php being the fixed file and file.php.orig the original, cd
to the root directory of your web site and run the diff command, e.g.:
cd /home/www/htdocs/www.mysite.tld
diff -Nau some/deep/dir/file.php.orig some/deep/dir/file.php > /tmp/file.patch
Then, from the root directory of the other web sites:
# See what's to be done, no action
patch -d /home/www/htdocs/www.my.other.site.tld --dry-run < /tmp/file.patch
# Apply the patch if all seems good
patch -d /home/www/htdocs/www.my.other.site.tld < /tmp/file.patch
Another useful argument is -p. Both diff and patch can work on a directory tree to apply fixes recursively. You can also build a more complex patch by concatenating several into one file. See patch --help and diff --help for more details on how to use them.
| Find file, check size and overwrite when filesize is different |
1,425,620,266,000 |
If process A copies files to some location loc and process B regularly copies the files from loc to some other location, can B read a file that is currently in the process of being copied to loc by A?
I'm using Ubuntu Linux 12.04 if that's important.
Background information: I want to continuously backup a PostgreSQL cluster. PostgreSQL provides WAL archiving for that. It works by having the database call a script that copies a completed WAL file to some backup location.
I want another process to regularly copy the backed up WAL files to another server. If a WAL file is currently being copied by the database, can the second process still read the file without running into some EOF condition before the file is copied as a whole?
In other words: Can I do the following with no synchronization between A and B?
A B
cp pg_xlog/some_wal_file /backup/ scp /backup/* user@remote-machine:/backups/
|
I think the best thing to do is to ensure that process B only copies files which have been fully transferred by process A. One way to do this is to use a combination of cp and mv in process A, since mv uses the rename system call (provided the files are on the same filesystem), which is atomic. This means that from the perspective of process B, the files appear in their fully formed state.
One way to implement this would be to have a partial directory inside your /backup directory which is ignored by process B. For process A you could do something like:
file="some_wal_file"
cp pg_xlog/"$file" /backup/partial
mv /backup/partial/"$file" /backup
And for process B (using bash):
shopt -s extglob
scp /backup/!(partial) user@remote-machine:/backups/
Although the program that you probably want to look into, both for process A and process B, is rsync. rsync creates partial files and atomically moves into place by default (although usually the partial files are hidden files rather than being in a specific directory). Rsync will also avoid transferring files that it doesn't need to and has a special delta algorithm for transferring only the relevant parts of files that need to be updated over the network (rsync must be installed in both locations, although transfers still go over ssh by default). Using rsync for process A:
rsync -a --partial-dir=/backup/partial pg_xlog/some_wal_file /backup/
For process B:
rsync -a --exclude=/partial/ /backup/ user@remote-machine:/backups/
| Can I safely read a file that is appended to by another process? |
1,425,620,266,000 |
I have 3 distinct folders: history, inbox, backup.
I need to copy all the files from 'history' to 'inbox', only if they are not present in 'backup'.
How to do this?
|
Here is an example; it assumes there are no subdirectories in history:
for x in history/*;
do
[[ -f backup/"$(basename "$x")" ]] || cp "$x" inbox
done
This script loops through all the files in the history folder and extracts the basename of each (e.g. the basename of /bin/ls is ls), then checks whether a file of that name exists in the backup folder; if not, it copies the file to inbox.
| Advanced file filtering |
1,425,620,266,000 |
Where do I find documentation of behavior of cp and rsync commands when the destination path shares the inode with another path? In other words, when I do
$ cp [options] src dest
$ rsync [options] src dest
and when there is dest2 that is a hard link to the same inode as dest, do these commands modify the content at this inode [Let's call it behavior (i)], or do they create a new inode, fill this inode with the content of src, and link dest to the new inode [behavior (ii)]?
In the case of behavior (i), I will see the results of cp and rsync by accessing dest2, whereas I will not see them in the case of behavior (ii).
I observed that the behavior depends on the options. In particular, with -a option, both cp and rsync took behavior (ii) as far as I experimented. [cp without any option took behavior (i).] I would like to know if this is guaranteed, and I wish someone to kindly point to documentation. Or, I would appreciate it if someone could kindly suggest some words or phrases to search for.
Below are my experiment.
First, I create a test sample:
$ ls
work
$ ls -li work/
total 12
23227072 -rw-rw-r-- 1 norio norio 17 Oct 19 00:52 file000.txt
23227071 -rw-rw-r-- 1 norio norio 17 Oct 19 00:52 file001.txt
23227073 -rw-rw-r-- 1 norio norio 17 Oct 19 00:52 file002.txt
$ cat work/file000.txt
This is file000.
$ cat work/file001.txt
This is file001.
$ cat work/file002.txt
This is file002.
$ cp -r work backup
$ ls
backup work
$ ls -li backup/
total 12
23227087 -rw-rw-r-- 1 norio norio 17 Oct 19 00:53 file000.txt
23227065 -rw-rw-r-- 1 norio norio 17 Oct 19 00:53 file001.txt
23227092 -rw-rw-r-- 1 norio norio 17 Oct 19 00:53 file002.txt
$ cat backup/file000.txt
This is file000.
$ cat backup/file001.txt
This is file001.
$ cat backup/file002.txt
This is file002.
$
$ cp -al backup backupOld
$ ls
backup backupOld work
$ ls -li backupOld/
total 12
23227087 -rw-rw-r-- 2 norio norio 17 Oct 19 00:53 file000.txt
23227065 -rw-rw-r-- 2 norio norio 17 Oct 19 00:53 file001.txt
23227092 -rw-rw-r-- 2 norio norio 17 Oct 19 00:53 file002.txt
$ cat backupOld/file000.txt
This is file000.
$ cat backupOld/file001.txt
This is file001.
$ cat backupOld/file002.txt
This is file002.
$
Thus far, I created files under backupOld as hard links to the files of the same name under backup.
Now, I modify files under work and copy them to backup by cp, cp -a, and rsync -a.
$ echo "Hello work 000." >> work/file000.txt
$ echo "Hello work 001." >> work/file001.txt
$ echo "Hello work 002." >> work/file002.txt
$ cat work/file000.txt
This is file000.
Hello work 000.
$ cat work/file001.txt
This is file001.
Hello work 001.
$ cat work/file002.txt
This is file002.
Hello work 002.
$
$ cat backup/file000.txt
This is file000.
$ cat backup/file001.txt
This is file001.
$ cat backup/file002.txt
This is file002.
$ cat backupOld/file000.txt
This is file000.
$ cat backupOld/file001.txt
This is file001.
$ cat backupOld/file002.txt
This is file002.
$
$ ls -li work/
total 12
23227072 -rw-rw-r-- 1 norio norio 33 Oct 19 00:57 file000.txt
23227071 -rw-rw-r-- 1 norio norio 33 Oct 19 00:57 file001.txt
23227073 -rw-rw-r-- 1 norio norio 33 Oct 19 00:57 file002.txt
$ ls -li backup/
total 12
23227087 -rw-rw-r-- 2 norio norio 17 Oct 19 00:53 file000.txt
23227065 -rw-rw-r-- 2 norio norio 17 Oct 19 00:53 file001.txt
23227092 -rw-rw-r-- 2 norio norio 17 Oct 19 00:53 file002.txt
$ ls -li backupOld/
total 12
23227087 -rw-rw-r-- 2 norio norio 17 Oct 19 00:53 file000.txt
23227065 -rw-rw-r-- 2 norio norio 17 Oct 19 00:53 file001.txt
23227092 -rw-rw-r-- 2 norio norio 17 Oct 19 00:53 file002.txt
$
$ cp work/file000.txt backup
$ cp -a work/file001.txt backup
$ rsync -a work/file002.txt backup
$
$ ls -li backup
total 12
23227087 -rw-rw-r-- 2 norio norio 33 Oct 19 01:00 file000.txt
23227094 -rw-rw-r-- 1 norio norio 33 Oct 19 00:57 file001.txt
23227095 -rw-rw-r-- 1 norio norio 33 Oct 19 00:57 file002.txt
$
$ ls -li backupOld
total 12
23227087 -rw-rw-r-- 2 norio norio 33 Oct 19 01:00 file000.txt
23227065 -rw-rw-r-- 1 norio norio 17 Oct 19 00:53 file001.txt
23227092 -rw-rw-r-- 1 norio norio 17 Oct 19 00:53 file002.txt
$
cp overwrote the content of inode 23227087 shared by backup/file000.txt and backupOld/file000.txt, whereas cp -a and rsync -a created new inodes for backup/file001.txt and backup/file002.txt, respectively, to hold the new contents copied from work/file001.txt and work/file002.txt.
$ cat backup/file000.txt
This is file000.
Hello work 000.
$ cat backup/file001.txt
This is file001.
Hello work 001.
$ cat backup/file002.txt
This is file002.
Hello work 002.
$ cat backupOld/file000.txt
This is file000.
Hello work 000.
$ cat backupOld/file001.txt
This is file001.
$ cat backupOld/file002.txt
This is file002.
$
|
cp’s behaviour is specified by POSIX. -a isn’t specified by POSIX, but it implies -R which is. When copying a single file, without -R, and the target exists,
A file descriptor for dest_file shall be obtained by performing actions equivalent to the open() function defined in the System Interfaces volume of POSIX.1-2017 called using dest_file as the path argument, and the bitwise-inclusive OR of O_WRONLY and O_TRUNC as the oflag argument.
Thus the target is opened, truncated, and its contents replaced with those of the source; the inode doesn’t change, and any hard linked file is affected.
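A quick way to see behaviour (i) in action, as a throwaway demo in an empty directory:

```shell
printf 'old\n' > dest
ln dest dest2                 # dest2 is a hard link to the same inode
printf 'new\n' > src
cp src dest                   # plain cp: open(dest, O_WRONLY|O_TRUNC)
cat dest2                     # prints "new": the hard-linked name sees the change
```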
With -R,
The dest_file shall be created with the same file type as source_file.
A new file is created.
rsync’s behaviour is documented in the description of the --inplace option (see man rsync):
This option changes how rsync transfers a file when its data needs to be updated: instead of the default method of
creating a new copy of the file and moving it into place when it is complete, rsync instead writes the updated
data directly to the destination file.
Thus by default, rsync creates new files instead of updating existing ones.
| Hard link as destination of cp and rsync |
1,425,620,266,000 |
I have a question related to the serial terminal. It is sometimes possible to connect to a device using a command such as screen. One example would be screen /dev/ttyUSB0 115200.
I can connect to a Linux ARM device with it (even getting past the login prompt). Thus I can easily transfer anything that is text. Now I would like to copy a binary file over the same connection. How can it be done?
|
Instead of using screen, you might want to use a dedicated serial terminal emulator program, such as minicom, since it has built-in support for the local side of serial-port binary transfer protocols like ZMODEM.
To transfer a file from local system to an ARM device, you would need to have the command-line tool for the ZMODEM protocol installed on both devices. At least on Debian, it comes in package named lrzsz.
First, you would login to the ARM device and run the rz (Receive Zmodem) command on it. It will output a special "waiting to receive" character sequence which can be detected by a ZMODEM-aware terminal emulator program, such as minicom. At that point, the terminal emulator program should automatically allow you to select a file for sending to the remote ARM device. If that does not happen, you can still select the "send file using ZMODEM" (or "upload file...") function manually from your terminal emulator.
Some terminal emulators may have full internal implementation of the ZMODEM protocol, but minicom just uses the sz (Send Zmodem) command-line tool to do the actual file transfer, so you'll need to have the lrzsz package installed locally too.
Transferring from the remote ARM to the local system works essentially the same: you run the sz <filename> command at the remote end, and the incoming transfer ("download") should be automatically detected by your terminal emulator.
Since the sz and rz tools are designed to be used at the remote end and will transfer the file over what is essentially the standard input and output of your shell session, using the commands at the local side requires specific input/output redirections and the terminal emulator must stop reading the serial port while the file transfer program is running. All these things would make it extremely inconvenient to use the sz/rz tools on the local side with a program like screen that does not have the necessary features for accommodating external file transfer tools.
| Send a binary file through serial terminal |
1,425,620,266,000 |
What I have done is clone a small 32GB flash module that had three partitions. I happened to have a 32GB USB lying around, so I thought it might just work; it did not. It seems 32GB from Toshiba is a bit different than 32GB from Sandisk.
Anyway, so then took to a 2TB external drive and did the exact same thing. Specifically, I did the following:
dd if=/dev/sdX of=/dev/sdY bs=100M
(Aside: does the final block come across as a partial copy, or is it dropped if EOF is reached first?)
So as to essentially clone the entire flash module -- partition table and all. The 32GB -> 2TB was easy enough since the dd utility properly halted after reading through the end of the final (third) partition.
So, what I want to do now is just create a simple binary blob containing the entire flash image. My 2TB drive is now identically partitioned with respect to the original drive: sdx1, sdx2, sdx3. So, once again I just took to dd with the following:
dd if=/dev/sdx of=firmware.bin bs=100M
Doing so will not only copy the first 32GB I am interested in, but it will also continue on through and clone the entire 2TB drive, or so it did when I tried it. I can find the exact byte-length of the partitions of interest by the following:
$ lsblk -b
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdc 8:32 0 2000398933504 0 disk
├─sdc1 8:33 0 134217728 0 part
├─sdc2 8:34 0 2147483648 0 part
└─sdc3 8:35 0 29734297600 0 part
A definite way to solve this would be to set the blocksize of dd to one byte and then set the number of blocks to read as the sum of the three sizes above:
dd if=/dev/sdc of=firmware.bin bs=1 count=32015998976
But I can't imagine how long that would actually take.
EDIT: A quick test out of curiosity showed a solid ~150KB/s transfer rate with the above.
tl;dr How can I exclusively copy the first three partitions of a disk that is much larger than the sum of the partition sizes?
|
Just copy the partitions that you need and the MBR if you need it too.
The MBR is stored in the first 512 bytes of the disk.
dd if=/dev/sdX of=/path/to/mbr_file.img bs=512 count=1
Copy each partition
dd if=/dev/sdX1 of=/path/to/partition1.img bs=512
dd if=/dev/sdX2 of=/path/to/partition2.img bs=512
dd if=/dev/sdX3 of=/path/to/partition3.img bs=512
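If you still want a single image covering all three partitions, you don't need bs=1: pick any block size that divides the byte count evenly (or, with GNU dd, pass the byte count directly via iflag=count_bytes). Note that the sum of the partition sizes ignores the partition table and any alignment gaps before and between partitions, so the safer upper bound is (end sector of the last partition + 1) × 512 from fdisk -l. A sketch of the arithmetic, using the lsblk figures above:

```shell
# Sum of the three partition sizes reported by `lsblk -b`
total=$((134217728 + 2147483648 + 29734297600))
echo "$total"            # 32015998976 bytes

# Any block size that divides the total evenly avoids bs=1; 4096 works here:
bs=4096
echo "$((total % bs))"   # 0, so the division is exact
echo "$((total / bs))"   # 7816406 blocks
# dd if=/dev/sdc of=firmware.bin bs=4096 count=7816406
# GNU dd can also take a byte count directly with a large block size:
# dd if=/dev/sdc of=firmware.bin bs=64M count="$total" iflag=count_bytes
```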
| How to clone an entire disk to a larger disk and then offload? |
1,425,620,266,000 |
We are using rsync to sync two folders on same machine.
Files will be written to a source folder from another application. We have the problem that, even if a file is not completely written/copied to the source folder, rsync copies that file to destination.
Is there any way/option to check/transfer only complete files from the source folder
|
If the sizes of the files are fixed (after the application has finished writing), you can transfer files based on size, so that files which are not done being written yet will not be copied:
--max-size=SIZE don't transfer any file larger than SIZE
--min-size=SIZE don't transfer any file smaller than SIZE
options of rsync provides that.
Alternatively, you can use fuser or lsof to check whether the application still has the file open at the moment the transfer would start:
if fuser /path/to/file.txt >/dev/null 2>&1; then
    sleep 10    # the file is still open; try again later
else
    rsync ....
fi
| rsync option to exclude partial files |
1,425,620,266,000 |
Copy files to a destination folder only if the files already exist there, but the source files have a different file extension.
i.e
I have a backup of some files with ".desktop.in" extension and I want to copy to destination where the files extensions are ".desktop" and only the files that already exist in the destination.
source folder contain:
a.desktop.in
b.desktop.in
c.desktop.in
destination folder contain:
a.desktop
b.desktop
Want only to overwrite a.desktop and b.desktop files
Tried this:
for file in /destination/*.desktop;do cp /src/"${file##*/}".in "$file";done
But that doesn't look optimized for that task.
Do you know a better way to do that?
|
Found two ways:
for file in /src/*.desktop.in; do
    base=$(basename "$file" .in)        # e.g. a.desktop
    if test -e "/dest/$base"
    then cp "$file" "/dest/$base"
    fi
done
rsync and --existing (remove --dry-run once the output looks right):
for file in /src/*.desktop.in; do
    rsync --dry-run --existing --verbose "$file" "/dest/$(basename "$file" .in)"
done
| Copy files to a destination folder only if the files already exist. but the source files have a different file extension |
1,425,620,266,000 |
I have files with random numbers in /home/user/files folder (every 4 days I have new ones).
for example:
john20
john25
john40
tom12
tom32
simon2
simon8
simon53
I want to take only last modified files (the newest) and copy them to different location (/home/user/fixed) without that numbers in file name.
I know how to filter and copy using the find command, but I don't know how to copy all of them without those numbers.
find $files_are_here -maxdepth 1 -mtime -2 -type f -exec cp {} $new_path \;
This will copy all of the files modified in the last 2 days to the new path but with original name. In my case:
john40
tom32
simon53
but I would like to have only john, tom and simon inside that folder. So after I run my script again they will be replaced by the newest one.
|
The following is hopefully self-explanatory:
find . -maxdepth 1 -mtime -2 -type f -exec bash -c 'name=${1##*/}; cp "$1" "/some/other/dir/${name%%[0-9]*}"' _ {} \;
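The key piece is the parameter expansion that strips everything from the first digit onward:

```shell
name=john40
echo "${name%%[0-9]*}"   # prints: john
name=simon53
echo "${name%%[0-9]*}"   # prints: simon
```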
| find file, copy but with different name |
1,425,620,266,000 |
I want to using crontab to synchronize two directory between my linux partion and windows partion like this:
24 9 * * * cp -r /home/fan/Data /media/T/Data
But this creates a directory named Data inside the destination Data directory, instead of copying the missing files into it. I can't find an option in the cp manual that solves this cleanly. How can I copy only the missing files (some files already exist at the destination) from the source to the destination?
Also, it seems the T disk needs to be mounted before running the copy command; how can I automatically mount the disk when the command needs to run (the mount command must run as root)?
And how can I capture the error message if the command fails?
|
To address the error-message portion of the question, you might choose to run a script from cron instead of the system command.
24 9 * * * /usr/local/sbin/sync_data.sh
Create the file as /usr/local/sbin/sync_data.sh, giving root ownership and execute permission: chown root:root /usr/local/sbin/sync_data.sh && chmod 0700 /usr/local/sbin/sync_data.sh. The contents of the script are below.
#!/usr/bin/env bash
if [[ $EUID -ne 0 ]]; then
    echo "This script must be run as root." 1>&2
    exit 1
fi
# [ ] vs [[ ]] : http://mywiki.wooledge.org/BashFAQ/031
dir_src="/home/fan/Data/"
dir_dst="/media/T/"
# A Boolean to know if running interactively or not
b_interactive=0
if [[ -v PS1 ]] ; then  # "-v PS1" requires bash 4.2+; [[ -n "$PS1" ]] also works
    b_interactive=1
fi
# Send messages to console or syslog.
function notify() {
    message="$1"
    if [[ $b_interactive -eq 1 ]] ; then
        echo "$message"
    else
        logger -p user.err "$0: $message"
    fi
}
# If the mount point is not currently mounted...
# (strip the trailing slash: /proc/mounts lists mount points without it)
if ! grep -q " ${dir_dst%/} " /proc/mounts ; then
    # Try to mount the directory (requires a matching /etc/fstab entry)
    # and send a message to console or syslog on failure.
    if ! mount "$dir_dst" ; then
        notify "$0: failed to mount $dir_dst"
        exit 1
    fi
fi
# Create the backup directory if it does not exist
if [[ ! -d $dir_dst ]] ; then
    mkdir -p "$dir_dst" 2>/dev/null
fi
# A one-way sync to dir_dst, deleting files that no longer exist in dir_src...
# then check the return status of the rsync command.
if rsync -a --delete "$dir_src" "$dir_dst" &> /dev/null ; then
    notify "$0: the rsync process succeeded."
else
    notify "$0: the rsync process failed."
fi
unset dir_src
unset dir_dst
unset b_interactive
| how to synchronize two directory? |
1,425,620,266,000 |
I need to copy huge files in my Linux machine.
Example:
cp source.txt target.txt
I want to create a progress bar that shows, for each file being copied, that the copy is still in progress.
Examples:
cp file file1
copy file > file1 .........
cp moon mars
copy moon > mars .......
|
In short, you won't find cp native functionality for progress bar output. Why? Many reasons. However, you have some options:
Use a different tool. rsync, as mentioned by @user1404316, has --progress (the -P option below is shorthand for --partial --progress):
rsync -P largeFile copyLocation
If you don't need the extra semantics that cp and rsync take care of, create a new file with pv ("Pipe Viewer") by redirecting stdout:
pv < largeFile > copyLocation
If you do need the extra semantics, you can use progress, though it doesn't give the bar specifically. It attaches to already running processes, so you would invoke it like:
# In one shell
$ cp largeFile copyLocation
# In another shell
$ progress -m
[ 4714] cp /home/hunteke/largeFile
1.1% (114 MiB / 10.2 GiB) # -m tells progress to continually update
Another option is gcp, which does exactly what you've requested with a progress bar:
gcp largeFile copyLocation
Another option abuses curl's ability to handle file:// urls:
curl -o copyLocation file:///path/to/largeFile
Finally, you can write your own shell script that polls the size of the destination file.
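As a sketch of such a script: run cp in the background and poll the destination file's size (GNU stat assumed; file names are placeholders). The demo below copies a small generated file so it can run anywhere:

```shell
# demo setup: a 1 MiB stand-in for the real large file
head -c 1048576 /dev/zero > largeFile

total=$(stat -c %s largeFile)          # GNU stat; use `stat -f %z` on BSD/macOS
cp largeFile copyLocation &
pid=$!
while :; do
    done_bytes=$(stat -c %s copyLocation 2>/dev/null || echo 0)
    printf '\rcopying: %3d%%' $(( done_bytes * 100 / total ))
    [ "$done_bytes" -ge "$total" ] && break   # note: loops forever if cp dies
    sleep 1
done
wait "$pid"
printf '\n'
cmp -s largeFile copyLocation && echo "copy complete"
```

For real use you would add error handling around cp's exit status instead of relying on the size check alone.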
| how to create progress bar during copy files |
1,425,620,266,000 |
I have more than 90 subdirectories and inside each one, there will be a number of .txt files.
What I need to do is to copy all those txt files out to one single directory.
How can I do that?
|
Use the command:
find . -name "*.txt" -exec cp {} /path/to/destination \;
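One caveat: files with the same name in different subdirectories will overwrite each other at the destination. With GNU cp you can keep numbered backups instead of silently clobbering (a hedged variant; the plain command is fine when names are unique):

```shell
# demo: two subdirectories containing a file with the same name
mkdir -p d1 d2 dest
echo one > d1/a.txt
echo two > d2/a.txt
find d1 d2 -name "*.txt" -exec cp --backup=numbered -t dest {} +
ls dest      # a.txt  a.txt.~1~  (the clobbered copy is kept as a backup)
```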
| How use minimum number of commands to copy all .txt files from all subdirectories to one directory? |
1,425,620,266,000 |
EDIT: Total rewrite of question for clarity.
I have a directory tree (new) with a bunch of files of with an extension of .new. I have an identical tree (old) where many of the files have names identical to those in the new tree except that the extension is .old. I would like to copy all of the .new files from the new directory tree into the old directory tree which contains the .old files. As a file with a .new extension is written into the old directory tree, I would like to delete any file with the same name but a .old extension.
So, if in the new directory tree, there is a file named new/foo/bar/file.new, it will be copied to the old directory tree as old/foo/bar/file.new and then the file old/foo/bar/file.old will be deleted if it exists.
EDIT #1
This answer was hashed out below (using the old question that had extraneous background information that was confusing). See the actual solution that I worked out below as one of the answers.
|
This was the final answer that got hashed out in the comments for terdons answer.
cd new
for i in */*/*.new; do cp "$i" "path/to/old/${i}" && rm -f "path/to/old/${i%.new}.old"; done
Note the ${i%.new}.old expansion: the original ${i//new/old} would have replaced every occurrence of "new" in the path (including directory names), not just the extension.
| Need to copy files to existing directory and remove files already there with the same name but different extension |
1,425,620,266,000 |
I'm trying to copy files from an USB stick to another drive. At least the file names appear to be corrupt, ls shows them as:
'ZHECMIv'$'\027''.PDF'
'ZHEKMI>2.P─F'
ZHENIL~1.PDF
'эeloѤyfɯrɥvdr.2uOroä䁲igez_o_聴eŢe'$'\340\240\256''Ű聤f'
'ၙanPѥòѳen-ၐoint-M䁯rѴ&`df'
Copying fails with errors like these:
cp: error reading '/media/pg/VERBATIM/2012/03/MVANES~0.PDF': Input/output error
cp: cannot create regular file '/media/pg/Elements SE/verba/2012/03/ERANmS~3.P'$'\004''B': Invalid argument
cp: cannot stat '/media/pg/VERBATIM/2014/09/f5'$'\004''7'$'\004''0'$'\004''.': No such file or directory
On the chance that only the filenames are corrupt, I tried this:
pg@TREX:~$ cp /media/pg/VERBATIM/2012/02/'YQ83A1'$'\177''0.╨DF' ./1.pdf
cp: error reading '/media/pg/VERBATIM/2012/02/YQ83A1'$'\177''0.╨DF': Input/output error
fsck.vfat -n shows:
fsck.fat 4.2 (2021-01-31)
There are differences between boot sector and its backup.
This is mostly harmless. Differences: (offset:original/backup)
65:01/00
Not automatically fixing this.
FATs differ but appear to be intact.
Using first FAT.
Cluster 113918 out of range (67222785 > 1968189). Setting to EOF.
Cluster 113928 out of range (2211211 > 1968189). Setting to EOF.
Cluster 113929 out of range (67222860 > 1968189). Setting to EOF.
Cluster 113937 out of range (2211092 > 1968189). Setting to EOF.
...
Cluster 657871 out of range (1). Setting to EOF. (Several)
...
Cluster 1940714 out of range (1342259Internal error: next_cluster on bad cluster
52 > 1968189). Setting to EOF. (once)
fdisk -l output:
pg@TREX:~$ sudo fdisk -l /dev/sde
Disk /dev/sde: 30.05 GiB, 32262586368 bytes, 63012864 sectors
Disk model: STORE N GO
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000
Device Boot Start End Sectors Size Id Type
/dev/sde1 32 63012863 63012832 30G c W95 FAT32 (LBA)
Are these files lost for good, or is there a tool I can try to recover them with?
Debian Bullseye fully up to date
$LANG is set to en_US.UTF.8
USB stick 32GB, VFAT
Data added to drive with Win10 Pro
|
cp: error reading <filename>: Input/output error indicates there is corruption in locations other than filenames too.
The fdisk output is normal for a GPT-partitioned external disk. The size does not match the 32 GB you said, but 1.82 TiB is consistent with the disk model Elements SE 25FD reported by fdisk: are you really targeting the correct disk with your fdisk command?
Based on the cp commands output, it looks like you listed the fdisk output of the disk you are recovering files to, rather than the USB stick you are recovering them from.
I would suggest using ddrescue to image the failing USB stick and working on the copy, and/or using PhotoRec to try and recover the files since it is designed for rescuing files from failing flash-type media.
| Copying files from USB drive fails - I/O error or Invalid Argument |
1,425,620,266,000 |
When I don't need to adjust destination filenames I can do something like this:
$ find -type f -name '*.pat' -print0 | xargs -0 cp -t /path/to/dest
It is safe because the filenames may even contain newline characters.
An alternative:
$ find -type f -name '*.pat' -print0 | cpio -p -0 -d /path/to/dest
Now I have the problem that the destination is a VFAT filesystem ... thus certain characters are just not allowed in filenames (e.g. '?'). That means that I have to adjust the destination filenames.
Something like
for i in `find -type f -name '*.pat'` ; do
    cp "$i" "$(echo "$i" | sed 's/?/_/g')"
done
works only for filenames without spaces - I could change IFS to just newline - but how to set '\0' as IFS?
And still, the for loop leads to as many forks/execs (of cp/sed) as there are files, which is far more than the few forks/execs needed for the two examples at the beginning.
What are the alternatives to solve that problem?
|
With pax as found on Debian, Suse, OpenBSD, NetBSD at least:
find . -type f -name '*.pat' -print0 | pax -0rws'/?/_/gp' /path/to/dest/
pax is a standard utility (contrary to tar or cpio), but its -0 option is not, though can be found in a few implementations.
If there are both a ?.pat and a _.pat file, they will end up with the same name, so one will overwrite the other in the destination. Similarly, if there are both _ and ? directories, their contents will be merged into the _ directory in the destination.
With GNU sort and GNU uniq, you can check for conflicts beforehand with:
find . -type f -name '*.pat' -print0 |
tr '?' _ |
sort -z |
uniq -zd |
tr '\0' '\n'
Which would report conflicting files (but not directories).
You could use zsh's zmv which would take care of conflicts, but that would still mean one mkdir and one cp per file:
autoload zmv
mkdir-and-cp() {mkdir -p -- $3:h && cp $@}
zmv -n -Qp mkdir-and-cp '(**/)*.pat(D.)' '/path/to/dest/$f:gs/?/_/'
(remove -n when happy).
| How to copy a list of files and adjust destination filenames on the fly? |
1,425,620,266,000 |
Suppose I have host A from which I ssh to host B, where I sudo -U some_role and from under it ssh to host C. My goal is an interactive shell on C.
Assume that from C I cannot ssh back to A.
What is the best way to copy a file from A to C using the connection built above? What preparations / changes should I introduce into the chain to make infrequent, simple file copying possible?
Of course I can run cat > target_file inside the interactive shell and copy-paste via the terminal, but for large binary files this is not exactly convenient.
|
I take it host B is e.g. a gateway in an intranet and can connect to host A and C, e.g. like this:
-----------------------------------------------------------
| |
| |
--------- ----------- -------------- |
|A |------------------------------| B |------------------| | |
--------- | | | C | |
----------- -------------- |
| |
-----------------------------------------------------------
In this case the best solution is ssh port forwarding. I describe it at http://www.linuxintro.org/wiki/Tunneling_with_OpenSSH
First, on A, you forward A's local port 2222 through B to C's port 22 with the command:
ssh -L 2222:C:22 B
then you copy to C's port 22, by connecting to port 2222 on A (localhost) with the command:
scp -P 2222 file root@localhost:
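If B can reach C directly under your own account (i.e. the sudo -u some_role hop is not strictly required for the copy), newer OpenSSH (7.3+) offers a shorter route with ProxyJump; this is an alternative sketch with placeholder host names, not a drop-in for the sudo-based chain above:

```
# ~/.ssh/config on A
Host C-via-B
    HostName C
    ProxyJump B
```

After that, scp somefile user@C-via-B: (or scp -o ProxyJump=B somefile user@C:) copies in one step.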
| Copy files via a complex ssh connection? |
1,425,620,266,000 |
General question
Assuming two directories with identical content are stored on different devices. Is there a way to calculate the size of the directories and always get the exact same number for both?
In other words, is there such a thing as a "real size" of a directory irrespective of where it is stored?
Practical example
I transferred directories between two storage devices using rsync -ahP /dir1/ /dir2/.
After the transfer finished without errors, I checked the sizes of the directories on each device using du -s --apparent-size. For some directories I got the exact same number on both devices, but not for all of them.
Specific questions
Is it possible that rsync with the chosen options didn't produce an exact copy of the directory? If yes, would there be a way to get an exact copy?
Does the used du command not give the "real size" of the directory irrespective of the storage device. If yes, would there be a way to calculate such a size?
|
Note that du, even GNU's one with its --apparent-size option will include the apparent size (as reported by lstat()) of all types of files, including regular files, devices, symlinks, fifos, directories. GNU du like many other implementations will try to not count the same file several times (like when there are several hard links to the same file).
Here, since you're not passing the -H option to rsync, hard links won't be preserved so that exclusion of duplicates in du's account would cause a discrepancy if there were hard links in the source.
The apparent size of a file of type directory does represent the real size of its data: a list of file names alongside information on where to find them, but the format and size of that list depends on the type of file system, how it was configured, and how the directory was populated.
For device files, fifos, sockets for which rsync doesn't transfer any data, some systems (like Linux) always return 0 as the apparent size, some will return the amount of data that could be read from them (for block device files for instance, it could be the size of the corresponding storage).
So here, probably the best you can do is compute the sum of the apparent size of regular and symlink files which are the ones consistent from one system to another¹.
You could do that with GNU find with:
find . -type f,l -printf '%s\n' | awk '{s+=$0}; END{print s}'
If you find the same number on the source and destination that would be an indication that rsync may have managed to transfer all the data (the contents of regular files, and symlinks (their target path)). It may not have managed to transfer all metadata like extended attributes, ACLs (both of which you're not preserving anyway since you didn't pass the -X and -A options), file names, empty files...
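The per-file sum can be wrapped in a small helper and sanity-checked on a throwaway tree first (a sketch; `-type f,l` and `-printf` are GNU find extensions, as in the answer above):

```shell
# Sum the apparent sizes of regular files and symlinks under a directory
# (GNU find, as used in the answer above).
sum_sizes() {
    find "$1" -type f,l -printf '%s\n' | awk '{s += $0} END {print s + 0}'
}

# Throwaway tree: a 5-byte file plus a symlink whose target name is 4 bytes.
d=$(mktemp -d)
printf 'hello' > "$d/file"   # apparent size 5
ln -s file "$d/link"         # apparent size 4 (length of the target string)
sum_sizes "$d"               # 5 + 4 = 9
```

Running the helper against the source and the destination and comparing the two numbers is the actual check.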
As a consistent representation of the amount of directories data (assuming no encoding issue¹), you could use find . | wc -c (the sum of all file paths length + 1).
You could also re-run the same rsync command with -n (dry-run) and -v (verbose) to check if things are missing, maybe adding a --delete to also check for files that are on the destination and not the source.
¹ Strictly speaking, symlink sizes could vary if there were some transformations operated on file names like in some cases of character encoding transformations for non-ASCII characters especially if there are non-Unix or macOS file systems involved
| Get size of directory (including all its content) irrespective of disk usage |
1,425,620,266,000 |
The Situation:
I want to copy a directory recursively to an external hard drive. The directory contains a lot of files (at least 100,000).
The Problem:
External hard drives tend to get quite hot when heavily used like in my task (usage over several hours). That is bad for the life expectancy. So since time is not an issue in my case I would like that the copying would take some rest between copying the files to allow the drive to cool down a bit.
At the moment I use ionice -c3 nice nice cp -r which at least reduced performance loss to all other running tasks. But it does not address the heat problem.
Any suggestions?
Using other commands than cp like rsync would be ok to (if applicable from command line) but so far I could not find any command or option that allows me to wait x seconds between each file copy.
Additional Information:
The external hard drive is the final destination of the data not a way to transfer the data to another computer.
|
Not particularly elegant, but you could run your copy command and then run a loop that pauses it for, say 3 minutes every 20 minutes:
Launch your copy command in the background
cp -r /path/to/dir /path/to/external/drive &
pid=$!
Run this loop which will stop/restart it:
while ps -p "$pid" >/dev/null; do
    kill -CONT "$pid"
    sleep 20m
    kill -STOP "$pid"
    sleep 3m
done
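The stop/continue signals can be tried on a harmless background job first, to see the pattern in miniature (a sketch; `T` is how Linux ps reports a stopped process):

```shell
# Start a dummy long-running job in the background.
sleep 30 &
pid=$!

kill -STOP "$pid"                            # pause the process
sleep 1                                      # let the signal be delivered
state=$(ps -o stat= -p "$pid" | tr -d ' ')   # 'T' means stopped

kill -CONT "$pid"                            # resume it
kill "$pid"                                  # clean up the demo job
echo "state while paused: $state"
```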
| Copy a lot of files recursively leads to a heat problem on my external hard drive |
1,425,620,266,000 |
I've been searching and I cannot find an answer on any of these sites that will work. I want to copy all of the contents of this folder to my server. Which Linux command should I use to do that? I don't want to manually run through all of the objects in that folder, one at a time per se. I've seen everything from scp to rsync, but can't get either to work.
Thanks,
|
you can use something like
cd directory-where-you-want-to-put-the-files
wget -r ftp://ftp.eso.org/pub/qfits/
| Copy contents of remote server folder to current server? |
1,425,620,266,000 |
I need to copy a particular named file from multiple directories and need to add number in the file prefix sequentially. For example I have the following directories, gene1, gene2, gene3 ..... gene100 and each directory having a file namely protein.fasta. I need to copy all the protein.fasta file from each directory and paste in another directory namely output.
I have tried the following script, but it is not serving my purpose: it copies only one file, the rest are not copied or renamed, and it finally ends with an error. Kindly help me to do the same.
a=1
for i in **/protein.fasta
do
cp "$i" "$a"_"$i" output/
a=`expr $a + 1`
done
Detailed example and expected output is given below,
Directories
gene1, gene2, gene3....gene100
file to be extracted from each file is protein.fasta
Expected output in the output directory
1_protein.fasta
2_protein.fasta
3_protein.fasta
.
.
100_protein.fasta
Thank you in advance.
|
You should be able to loop over directories and remove the gene prefix from the current directory name to use as the prefix for the target file name:
for d in gene*; do
echo cp "$d/protein.fasta" "output/${d#gene}_protein.fasta"
done
Remove the echo once you are satisfied that it is doing the right thing.
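The loop can be rehearsed end-to-end in a scratch directory (directory and file names below are the ones from the question):

```shell
# Self-contained rehearsal in a scratch directory.
cd "$(mktemp -d)"
mkdir -p output gene1 gene2 gene10
for d in gene1 gene2 gene10; do
    printf '>seq\n' > "$d/protein.fasta"
done

# The answer's loop: strip the 'gene' prefix to build the numbered name.
for d in gene*; do
    cp "$d/protein.fasta" "output/${d#gene}_protein.fasta"
done
ls output
```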
| copy a file from multiple directories and add numbers in the prefix of each file? |
1,425,620,266,000 |
The sequence of commands
mkdir -p BASE/a/b/c && cp -a --target-directory=BASE/a/b/c /a/b/c/d
creates a subdirectory a/b/c under BASE, and then copies the directory tree at /a/b/c/d to BASE/a/b/c.
One problem with it is that it entails a lot of repetition, which invites errors.
I can roll a shell function that encapsulates this operation; for example, here's a sketch of it without any error checking/handling:
copytwig () {
local source=$1
local base=$2
local target=$base/$( dirname $source )
mkdir -p $target && cp -a --target-directory=$target $source
}
...but I was wondering if this could already be done with a "standard" Unix utility (where I'm using "standard" as shorthand for "likely to be found in a Unix system, or at least likely to be found in a Linux system").
|
With pax (a mandatory POSIX utility, though not installed by default in some GNU/Linux distributions yet):
pax -s':^:BASE/:' -pe -rw /a/b/c/d .
(note that neither --target-directory nor -a are standard cp options. Those are GNU extensions).
Note that with -pe (similar to GNU's -a), pax will try and copy the metadata (times, owner, permissions...) of the directory components as well (while BASE's metadata will be as if created with mkdir BASE).
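For reference, a more carefully quoted variant of the question's helper can be exercised on a scratch tree (a sketch; `-a` and `--target-directory` are the GNU cp extensions already used in the question):

```shell
# Quoted variant of the helper from the question (GNU cp assumed).
copytwig() {
    local source=$1 base=$2
    local target="$base/$(dirname -- "$source")"
    mkdir -p -- "$target" && cp -a --target-directory="$target" -- "$source"
}

cd "$(mktemp -d)"
mkdir -p a/b/c/d
touch a/b/c/d/file
copytwig a/b/c/d BASE
find BASE -type f
```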
| "standard" single-command alternative for `mkdir -p BASE/a/b/c && cp -a -t BASE/a/b/c /a/b/c/d`? |
1,425,620,266,000 |
I'm rsync'ing a huge folder from an external to an internal 3,5" HDD, both 5.400 rpm. When using dstat to have a look at the current throughput, I regularly see patterns like this:
--total-cpu-usage-- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai stl| read writ| recv send| in out | int csw
20 6 34 40 0| 98M 90M|1272B 3652B| 0 0 |1702 4455
21 6 37 37 0| 121M 90M|1646B 4678B| 0 0 |2057 6488
17 24 29 30 0| 77M 95M| 630B 2416B| 0 0 |1581 4644
20 5 33 43 0| 86M 84M|1372B 2980B| 0 0 |1560 4146
20 6 30 44 0| 80M 75M| 700B 2722B| 0 0 |1484 3942
11 2 47 39 0| 39M 65M| 642B 1332B| 0 0 | 765 1507
0 0 50 50 0| 0 91M| 70B 354B| 0 0 | 136 70
0 0 50 49 0| 0 71M| 306B 346B| 0 0 | 146 119
0 0 50 50 0| 0 83M| 70B 346B| 0 0 | 145 60
0 0 50 50 0| 0 0 | 70B 346B| 0 0 | 36 84
0 0 50 50 0| 0 0 | 164B 646B| 0 0 | 35 71
0 0 50 50 0| 0 0 | 140B 802B| 0 0 | 30 64
0 0 50 50 0| 0 0 | 70B 346B| 0 0 | 27 68
0 0 50 50 0| 0 34M| 134B 346B| 0 0 | 86 68
0 0 50 50 0| 0 0 | 70B 346B| 0 0 | 30 71
0 0 50 50 0| 0 0 |2320B 346B| 0 0 | 40 76
0 0 50 50 0| 0 0 | 70B 346B| 0 0 | 29 71
0 0 50 50 0| 0 0 | 70B 346B| 0 0 | 25 50
-----------------------------[snip]------------------------------
0 0 50 50 0| 0 0 |2230B 346B| 0 0 | 35 61
0 0 50 50 0| 0 60M| 70B 346B| 0 0 | 118 83
1 7 42 50 0| 256k 104M| 230B 500B| 0 0 | 281 480
21 5 31 42 0| 117M 76M|1120B 3392B| 0 0 |1849 4309
23 5 36 36 0| 137M 56M|1202B 3958B| 0 0 |2149 5782
24 5 36 35 0| 138M 100M|1284B 4112B| 0 0 |2174 6021
Say, for several seconds up to a minute, both read and write throughput drop to zero. What's the bottleneck here?
I mean, since both drives are about the same speed, none of them should be idle for too long. Even further, at least one drive should be always reading or writing. What is the system waiting for?
System is idle, only thing eating cpu is the rsync task. Memory is 8GB, CPU is a 7th-gen i5 quad-core. The internal HDD is hooked via SATA to the mainboard, a Gigabyte G170X-Ultra Gaming. Filesystem is ext4 in both cases, encrypted with dmcrypt/LUKS on the internal (write) side. Might that be the cause? If so, how to check the performance of dmcrypt? I see, CPU is 50% idle 50% waiting when the transfer drops occur. What may I conclude from that?
It's an up-to-date Arch Linux with kernel version 4.13.11-1-ARCH. Anything to look out for? Thanks in advance.
UPDATE: iotop was pointed out to be more accurate than dstat. Unfortunately, iotop shows zero thoughput as well when dstat drops to zero. I've done a screencast to show it.
|
There are 2 sets of tools to get some block-level device statistics. The first is iolatency from Brendan Gregg's perf tools. It produces a simple histogram of disk operation latency such as:
>=(ms) .. <(ms) : I/O |Distribution |
0 -> 1 : 1913 |######################################|
1 -> 2 : 438 |######### |
2 -> 4 : 100 |## |
4 -> 8 : 145 |### |
8 -> 16 : 43 |# |
16 -> 32 : 43 |# |
32 -> 64 : 1 |# |
Another script in the toolset is iosnoop which shows commands and their operations, eg:
COMM PID TYPE DEV BLOCK BYTES LATms
/usr/bin/mon 31456 R 8,0 9741888 4096 2.14
/usr/bin/mon 31456 R 8,0 9751408 4096 0.16
/usr/bin/mon 31456 R 8,0 20022728 4096 1.44
/usr/bin/mon 31456 R 8,0 19851752 4096 0.26
jbd2/sda3-41 416 WS 8,0 130618232 65536 1.89
jbd2/sda3-41 416 WS 8,0 209996928 65536 1.92
jbd2/sda3-41 416 WS 8,0 210006528 8192 1.94
Then there is the blktrace package which records low-level block operations with blktrace and then shows all sorts of information with blkparse, and many other commands, including the simple summary from btt (pdf user guide):
$ sudo blktrace /dev/sda # ^C to stop
=== sda ===
CPU 0: 180 events, 9 KiB data
CPU 1: 1958 events, 92 KiB data
Total: 2138 events (dropped 0), 101 KiB data
$ ls -ltra # one file per cpu
-rw-r--r-- 1 root root 8640 Nov 5 10:16 sda.blktrace.0
-rw-r--r-- 1 root root 93992 Nov 5 10:16 sda.blktrace.1
$ blkparse -O -d combined.output sda.blktrace.* # combine cpus
$ btt -i combined.output
ALL MIN AVG MAX N
Q2Q 0.000001053 0.106888548 6.376503027 253
Q2G 0.000000795 0.000002266 0.000011060 184
G2I 0.000000874 0.000979485 0.002588781 328
Q2M 0.000000331 0.000000599 0.000002716 70
I2D 0.000000393 0.000480112 0.002435491 328
M2D 0.000002044 0.000028418 0.000126845 70
D2C 0.000080986 0.001925224 0.010111418 254
Q2C 0.000087025 0.002603157 0.010120629 254
...
D2C, for example, is how long it takes the hardware device to do an operation.
You might also run sudo smartctl -a /dev/sda on each disc to see if there are any failures showing.
| Causes for inefficient I/O? |
1,425,620,266,000 |
I was trying to copy one folder from one location to another. The folder is about 6.4 Gb.
So I did
cp -r source_folder level1/val
after that, I went into the level1 folder and checked:
level1$ ls
val
But If I try to cd into val, an error is raised:
level1$ cd val
-bash: cd: val: No such file or directory
And it does not appear to be copying anything, either:
level1$ du -sh val
0 val
I also checked with python if the directory exists or not, but it also says that it does not exist
>>> import os
>>> os.path.exists('level1/val')
False
I can't even delete the folder that has been created:
level1$ rmdir val
rmdir: failed to remove 'val': Not a directory
On the other hand, I was able to delete it as if it was a file:
level1$ rm val
level1$ ls
level1$
What is going on? And how can I make sure to copy the folder correctly?
EDIT
Added the output of ls -ld source_folder level1/val which returns
lrwxrwxrwx 1 user1 dinfk 4 Jun 20 12:05 source_folder -> test
drwxr-sr-x 2 user2 systems 4096 Aug 27 19:02 level1/val
|
Evidently, the val that resulted from the copy the first time round is a broken symbolic link.
ls lists val because it exists: there is a directory entry called val.
cd val complains “No such file or directory” because val is a broken symbolic link. cd needs to access the target of the link, but the target doesn't exist (that's the definition of a broken symlink).
du val shows 0 because a symbolic link doesn't use any storage space. (The space for the name and metadata is not counted.)
os.path.exists returns False for broken symbolic links.
rmdir val rightfully complains that val is not a directory, since it's a symbolic link.
rm val deletes val normally, since val is a file that isn't a directory.
You report:
lrwxrwxrwx 1 user1 dinfk 4 Jun 20 12:05 source_folder -> test
The command cp -r copies the symbolic link as a symbolic link. Since source_folder is a symbolic link whose target is test, this results in level1/val being a symbolic link whose target is test. The target of a symbolic link is a simple string, it doesn't “track” anything. Symbolic links that don't start with a / are relative. level1/val is a symbolic link whose target is test so it points to level1/test. Since level1/test doesn't exist, the symbolic link is broken.
Later you saw:
drwxr-sr-x 2 user2 systems 4096 Aug 27 19:02 level1/val
This time you did something different and copied a directory tree.
To copy the target of the link rather than the link itself, you can use
cp -r source_folder/ level1/val
The trailing slash tells the cp command to act on the directory that the link points to rather than on the symbolic link itself. If the argument is a directory, this doesn't make any difference.
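The whole situation can be reproduced in a scratch directory to see both behaviors side by side (GNU cp, as in the question):

```shell
# Reproduce the setup from the question.
cd "$(mktemp -d)"
mkdir test
touch test/inside
ln -s test source_folder            # source_folder -> test, as in the question

cp -r source_folder copy_of_link    # no slash: GNU cp copies the symlink itself
cp -r source_folder/ copy_of_dir    # trailing slash: copies the directory it points to
ls -l copy_of_link copy_of_dir
```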
| ls shows a directory but it is inaccessible |
1,425,620,266,000 |
I have an approximately 1TB directory with subdirectories dir1. I have made an rsync backup copy dir1.back.
How can I efficiently restore dir1 to the state of dir1.back - that is, replace files in dir1 with those of dir1.back if they've changed and delete any new files in dir1? Given its large size, cp/rsync of the entire dir1.back is highly infeasible.
If I'm not mistaken, this might be possible using rsync --checksum?
|
Reverse the RSYNC. So swap around dir1 and dir1.back then add the delete flag to ensure it removes files as appropriate to make sure it properly syncs and not just ignore files that weren't present in dir1 originally.
rsync -avz --delete dir1.back/ dir1/
| Restore directory to previous state |
1,425,620,266,000 |
I have several audio devices (car radio, portable radio, MP3 player) that take SD cards and USB sticks with a FAT file system on it. Because these devices have limited intelligence they do not sort filenames on the FAT FS by name but merely play them in the order in which they have been copied to the SD card.
In MS DOS and MS Windows this was not a problem; using a simple utility that sorted files alphabetically and then copied them across in that order did the trick. However, on Linux the files copied from the ext4 file system do not end up on the FAT FS in the same order as in which they were read and copied across, presumably because there is a buffering mechanism in the way which improves efficiency but does not worry too much about the physical order in which the files end up on the target device.
I have also tried to use Windows in a Virtual Box VM but still the files end up being written in a different order than the one they were read from the Linux file system.
Is there a way (short of copying them across manually one by one and waiting for all write buffers to be flushed) to ensure that files end up on the FAT SD target in the order in which they were read from the ext4 file system?
|
I remember asking this a long time ago (you are welcome to search for it). My guess at this long future time is: mount the device with option sync (removes the buffering), sort the list to ensure that they are copied in order.
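Since shell globs expand in sorted (collation) order, a plain loop already copies files alphabetically; combined with a sync mount, each file should land on the stick in order. A sketch with stand-in directories (the real destination would be the FAT stick mounted with -o sync):

```shell
# Globs expand sorted, so this loop copies files alphabetically.
src=$(mktemp -d); dst=$(mktemp -d)   # stand-ins for the music dir and the stick
touch "$src/02 second.mp3" "$src/01 first.mp3"

for f in "$src"/*.mp3; do
    cp -- "$f" "$dst"/
done
ls "$dst"
```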
| File order on FAT/FAT32/VFAT file systems |
1,425,620,266,000 |
I have a long list of data files that I need to copy over to my server, they have the names
data_1.dat
data_2.dat
data_3.dat
...
data_100.dat
Starting from data_1.dat, I would like to get all the files where the number is increased by 3, i.e. data_4.dat, data_7.dat, data_10.dat, ...
Is there a way to specify this? Right now I am doing in manually using get data_4.dat, but there must be a way to automatize this.
|
On Linux:
printf -- '-get data_%d.dat\n' $(seq 1 3 100) | sftp -b - [email protected]
On BSD (with no seq(1) in sight):
printf -- '-get data_%d.dat\n' $(jot - 1 100 3) | sftp -b - [email protected]
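It is worth previewing the generated batch commands before piping them into sftp -b - (a short sketch; the question's files end in .dat, and a smaller range keeps the output readable):

```shell
# Preview the batch file that would be fed to sftp -b -.
printf -- '-get data_%d.dat\n' $(seq 1 3 10)
```

This prints one -get line each for data_1.dat, data_4.dat, data_7.dat and data_10.dat; the leading dash tells sftp to continue past errors on that command.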
| sftp: command to select desired files to copy |
1,425,620,266,000 |
I want to find images, from a directory, that were added in last 1 year and copy them to a new directory preserving original folder structure.
I am using find but it is not copying anything. is there any way it can be done using 1 line command?
find image/* -mtime +356 -exec cp {} modified-last-year/ \;
I am in the image directory when running this command and i want to only search image folder recursively.
[EDIT] After the two answers I did following.
1. find image/* -mtime +356 | cpio -pd /target_dir
I get 0 Blocks.
find /full/path/to/image -mtime 365 -type f \( -name "*.jpg" -o -name "*.gif" \) -execdir cp {} /full/path/to/image_target_dir/modified-last-year \;
AND
find /full/path/to/image -mtime 365 -type f -execdir cp {} /full/path/to/image_target_dir/modified-last-year \;
Nothing copied.
AND simply find to get count of files with and without -type f.
find /full/path/to/image -mtime 365 -type f | wc -l
i get 0.
I could verify that there are indeed files with in image dir and in sub directories which were added in last 1 yr. infact there should be more than 200 images.
[EDIT 2]
I have to also exclude one directory from find so the following code worked fine.
Thanks to 1st answer, i was able to create this.
find /full/path/to/image/* -path /full/path/to/image/ignored_dir -prune -o -print -mtime -365 | cpio -pd /full/path/to/target_dir/modified-last-year
|
You can use find and cpio in passthrough mode:
find image/ -mtime -365 | cpio -pd /target_dir
EDIT: removed unnecessary * from the find path.
| find modified files recursively and copy with directory preserving directory structure |
1,425,620,266,000 |
I have a working directory: /home/myusername/projectdir
The working directory contains files and sub-directories. The depth of sub-directories is not known.
I want to put all the *.log files into the same output directory, with the base-names prefixed with the sub-directory path (substituting / with #).
Example:
/home/myusername/projectdir/file1.log -> /home/myusername/output/file1.log
/home/myusername/projectdir/subdir/file2.log -> /home/myusername/output/#subdir#file2.log
/home/myusername/projectdir/subdir/subsubdir/file3.log -> /home/myusername/output/#subdir#subsubdir#file3.log
I tried this:
cd "$PROJECT_DIR"
CDIR=""
for x in **/*.log; do
if [ "$CDIR" != "$PROJECT_DIR/${x%/*}" ]; then
CDIR="$PROJECT_DIR/${x%/*}"
SUBDIR="${x%/*}"
PREFIX=${SUBDIR//'/'/'#'}
cd "$CDIR"
for FILENAME in *.log; do
NEWNAME="#$PREFIX#$FILENAME"
cp "$FILENAME" "$OUTPUT_DIR/$NEWNAME"
done
fi
done
How can I do this more elegantly?
|
By using \0-delimited strings, this can handle spaces and \n in file names.
cd "${PROJECT_DIR%/*}"
outdir="output"; mkdir -p "$outdir"
find "$PROJECT_DIR" -type f -name '*.log' -printf "%p\0${outdir}/%P\0" |
awk 'BEGIN{FS="/";RS=ORS="\0"}
NR%2||NF==2 {print; next}
{gsub("/","#"); sub("#","/#"); print}' |
xargs -0 -n2 cp -T
mkdir -p creates the destination directory (no error if it already exists).
find prints \0-delimited file-paths (%P means the path without find's starting-directory argument).
awk creates the 2 file-paths required by cp, as two \0 delimited records.
xargs reads \0-delimited file-paths, 2 at a time, and passes them to cp -T
Here is the tree of the test source directory:
projectdir
├── file0.txt
├── file1.log
└── subdir
├── file2.log
└── subsubdir
└── file3.log
2 directories, 4 files
Here is the tree of the destination directory:
output
├── file1.log
├── #subdir#file2.log
└── #subdir#subsubdir#file3.log
0 directories, 3 files
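A simpler alternative, closer to the question's own attempt, is a pure-bash loop: globstar walks the tree and parameter expansion builds the '#'-separated names (a sketch; needs bash for shopt -s globstar, and the scratch tree mirrors the example layout):

```shell
# Pure-bash alternative on a scratch copy of the example tree.
shopt -s globstar nullglob
cd "$(mktemp -d)"
mkdir -p projectdir/subdir/subsubdir output
touch projectdir/file1.log \
      projectdir/subdir/file2.log \
      projectdir/subdir/subsubdir/file3.log

cd projectdir
for x in **/*.log; do
    if [[ $x == */* ]]; then
        name="#${x//\//#}"     # subdir/file2.log -> #subdir#file2.log
    else
        name=$x                # top-level files keep their name
    fi
    cp -- "$x" "../output/$name"
done
ls ../output
```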
| Copy filenames, and add path prefix, in a directory, recursively |
1,425,620,266,000 |
If I decide I want to copy a folder that is sufficiently large using cp, then half way through the copy I decide to abort or pause the process, will this ever cause corruption? Would it be better to let the copy finish and then delete the files?
|
If you pause the process and resume it later, nothing bad will happen. As long as nothing else writes to the input or output file in the middle, the output will be a faithful copy of the original.
If you kill the copy process, then you'll end up with a partial copy in the output file. There's no point in waiting: if you want to cancel the copy, cancel it as soon as you've decided and remove the partial output.
The input file will never be affected by the copy operation except that its last-read timestamp may be updated.
| can cancelling a copy cause corruption? |
1,425,620,266,000 |
I accidentally moved an entire directory of ~100GB to trash. I was trying to place it in bookmarks but dragged it into trash. It's there in the trash, but when I try to restore I run out of space on the disk.
Prior to deletion I had less than 50GB free on disk; if I restore the normal way I need about 68GB more free on the disk. That is, if I have to restore, I have to delete every file from trash immediately after restoring it, so I can revert back to the initial state. I tried to use "rsync -av --remove-source-files /Trash/file /Dest" but it also doesn't work.
Any suggestions to solve the problem ?
I use MX17 beta 2 based on Debian stable. The disk is NTFS formatted.
|
if you are moving to the same partition then
mv /source/* /dest/
should work without creating a copy or consuming more space
Alternatively, just do the same exercise with /dest/ on an external drive or partition then copy them back once you have cleared space in your original location.
| Accidentally trashed large file |
1,508,107,905,000 |
I have a hard time using Linux' built-in tools to rip an audio cd (sound juicer, rhythmbox). The reason likely being my drive, which vibrates a lot and cannot read the disk continuously. Playing the disk in any audio player results in short pauses and stutter-y playback. Ripping the CD results in noticeable artefacts. I would have thought there's some validation going on in those tools, say for example a buffer that's safe to convert, but apparently that's not the case and data is converted as it comes from the drive. This phenomenon occurred on several CDs to different extents.
To work around the drive, I copied the .wav files over to disk (using thunar file browser). To double check if at least that worked, I found the location of the CD files, cd'd into that directory and used diff to compare the first file to the copied on in my music directory:
/run/user/1000/gvfs/cdda:host=sr0$ diff Track\ 1.wav ~/Music/Artist/Album/Track\ 1.wav
Binary files Track 1.wav and /home/me/Music/Artist/Album/Track 1.wav differ
Ok, so they are different. Why is this the case? How can I copy the file correctly without getting a different one? Or is the problem with my verification? Is diff a valid way to to compare the two files?
Ideally, I'd love to just rip a CD to flac files renamed to match the track titles like sound juicer would do, but more reliably.
|
To rip an audio CD you should really use a tool such as cdparanoia.
This will handle jitter and error correction, will retry as necessary, and try to create a "perfect" datastream.
Typically you would use this to create the wav files, which can then be converted to FLAC format as necessary.
There are other tools, including some front end GUIs, that can talk to external databases like CDDB to automatically work out the album and track names, but for raw audio ripping cdparanoia is hard to beat.
| How can I copy a .wav file from an audio cd and verify it? |
1,508,107,905,000 |
I've little inconvenience while copying image files from another Android Project to my current one.
Suppose I've files called nice_little_icon.png in each of the directories drawable-ldpi, drawable-mdpi, drawable-hdpi, drawable-xhdpi and drawable-xxhdpi, which are under res directory of Project1.
Now how do I copy these files into project2's res directory using single Linux/Unix command?
so my end result will look like
Project1/../res/drawable-ldpi/nice_little_icon.png -> Project2/../res/drawable-ldpi/nice_little_icon.png
Project1/../res/drawable-mdpi/nice_little_icon.png -> Project2/../res/drawable-mdpi/nice_little_icon.png
Project1/../res/drawable-hdpi/nice_little_icon.png -> Project2/../res/drawable-mdpi/nice_little_icon.png
Project1/../res/drawable-xhdpi/nice_little_icon.png -> Project2/../res/drawable-xhdpi/nice_little_icon.png
Project1/../res/drawable-xxhdpi/nice_little_icon.png -> Project2/../res/drawable-xxhdpi/nice_little_icon.png
|
You can use the pax command (a standardized replacement for tar and cpio). This command is present on all POSIX-compliant systems, but beware that some Linux distributions omit it from their default installation. pax copies each path under the destination directory.
pax -rw -pe drawable-*/nice_little_icon.png ../../Project2/res/
Instead of relying on wildcards in the shell, you can use the -s option to ignore some files.
pax -rw -pe -'s!^drawable-[^/]*/nice_little_icon\.png$!&!' -'s!.*/.*!!' drawable-* ../../Project2/res/
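If pax is not available, a plain-cp loop over the density directories does the same job; a sketch with a scratch layout mimicking the two projects from the question:

```shell
# Plain-cp alternative on a scratch copy of the Project1/Project2 layout.
cd "$(mktemp -d)"
for sub in ldpi mdpi hdpi xhdpi xxhdpi; do
    mkdir -p "Project1/res/drawable-$sub" "Project2/res/drawable-$sub"
    touch "Project1/res/drawable-$sub/nice_little_icon.png"
done

for d in Project1/res/drawable-*; do
    cp "$d/nice_little_icon.png" "Project2/res/${d##*/}/"
done
find Project2 -name '*.png'
```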
| How to copy multiple files with a same name from children directory to another directory without losing the parent directory? |
1,508,107,905,000 |
I want to find all files in dir1 having corresponding same file names in dir2, and replace them with the files from dir2.
For example:
dir1: first.txt second.txt
dir2: third.txt first.txt
So I want to remove from dir1 the old first.txt file and replace it with first.txt from dir2.
How to achieve this using Bash terminal?
|
Actually, there's a single command that does exactly what you're asking.
rsync -av --existing dir2/ dir1/
This will recursively copy the files from dir2 into dir1 only if the file already exists in dir1.
The -av options are the options you'll usually use for copying files using rsync.
The --existing option tells rsync to skip creating new files on receiver.
You must have the trailing slash on dir2/ on the command line, because rsync behaves differently from most commands in that the trailing slash has a meaning to rsync.
rsync can also be used over the network similar to scp.
rsync can handle many other types of file synchronization, updating, and backup tasks.
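The --existing behavior can be demonstrated on throwaway directories matching the example in the question (a sketch; rsync must be installed):

```shell
# Throwaway demo: only files already present in dir1 are updated.
cd "$(mktemp -d)"
mkdir dir1 dir2
echo old   > dir1/first.txt
echo new   > dir2/first.txt
echo other > dir2/third.txt     # exists only in dir2

rsync -a --existing dir2/ dir1/
ls dir1
```

Afterwards dir1/first.txt holds the dir2 version, and third.txt has not been created in dir1.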
| Find and replace all same files between 2 directories |
1,508,107,905,000 |
I am doing backups of a directory with a simple:
cp -R /production/directory /backups/location
Sometimes I need to restore a backup, but doing:
cp -R /backups/location/* /production/directory
or
cp -RT /backups/location /production/directory
has the unwanted (in my case) effect, of keeping files present in /production/directory
but not in the backup, while I want them removed, to get things in the exact same state as when the backup was done.
Is there any magic switch, or other simple command, to perform that, or do I need to manually remove the whole directory first ?
|
You can use rsync to achieve what you want.
rsync -r --delete-during /backups/location/ /production/directory
For more, see man rsync
| How to copy a folder by another, but NOT by merging/adding |
1,508,107,905,000 |
Assume an rsync --recursive --ignore-existing foo bar copy command was being run for a large directory tree named foo, but that that command got prematurely interrupted. For example, because of a sudden power failure on the target machine.
Running the above command again can save a lot of time for any files/dirs already fully copied to the destination bar. But what about any file(s) that was right in the middle of being copied? Presumably, that file exists at the destination in an incomplete/broken state? Having it simply ignored as a "file that's already done" would be very undesirable, of course.
Does rsync handle this (extremely run of the mill) scenario intelligently without being directed to do so? (If so, how?)
Or does it only do so if specifically instructed by means of more or different flags?
Frustratingly, neither the man page nor any vaguely related questions or answers on this site seem to address this very important detail in a clear way. (Sorry if I just didn't look very well.)
|
You can use the -c option to make rsync skip files by calculating and comparing their checksum instead of date and size:
-c, --checksum
skip based on checksum, not mod-time & size
That should copy all files again that doesn’t have the right contents.
You also have to remove the --ignore-existing flag, otherwise files that already exist on the destination will not be transferred.
| rsync --recursive --ignore-existing: what (if any) additional flags are needed to resume/repair/overwrite any partially copied files? |
1,508,107,905,000 |
RHEL 7.9 x86-64
high end dell servers with Xeon cpu's, 512gb ram, intel nic card
I am the only user on the server(s), and there is no other work load on them
cisco 1gbps wired LAN
data.tar is ~ 50 gb
/bkup is NFS mounted as vers=4.1 and sync
a scp data.tar backupserver:/bkup/ runs at 112 MB/sec consistently; I've seen this for 5+ years and believe this to be correct
a rsync -P data.tar /bkup runs at 55 MB/sec consistently; this one is copying over NFS
running both the copies at the same time, scp drops from 112 to 55, and rsync over NFS drops from 55 to 35
when one finishes, the other copy speed resumes to the original rate
why? and how can I improve speed over NFS?
|
The primary cause is probably the fact that the NFS share is mounted with the sync option. This theoretically improves data safety in scenarios where the server might suddenly disappear or the client might unexpectedly disconnect, but it also hurts performance.
Using the sync mount option is equivalent to the application calling fsync() on the file it is writing to after every call to write(). IOW, every time the client submits an IO request, it has to wait for that to finish before it can submit another. This has a nontrivial impact on how fast data can be written even when used with local filesystems, but network filesystems make it much worse (because at least a full network round-trip is required after each IO request before the next one can be issued). If you have the time, you can actually see this type of effect yourself by trying to copy a file that is a few hundred MB in size using TFTP as compared to SCP. TFTP bakes this type of synchronization into the protocol at a basic level, and does so in a way that each individual packet has to be acknowledged before the next can be sent, so it will likely get even less performance than you’re seeing from NFS.
Provided you are using responsible software that atomically replaces files and handles copies sanely, you can probably safely switch to async mode for NFS to avoid this issue.
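If testing shows that trade-off is acceptable, the change is a single mount option. A hypothetical fstab entry might look like this (the server name, export path and mount point are placeholders; only the async-vs-sync choice is the point):

```
# /etc/fstab — placeholder names; note async instead of sync
backupserver:/export/bkup  /bkup  nfs4  rw,async,vers=4.1  0  0
```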
| why is NFS copy speed half that of SSH scp? |
1,508,107,905,000 |
When copying large files (1-2 GB per file) between file systems, file fragmentation can happen if the destination file system is nearly full.
Our C++ application code uses fallocate() to pre-allocate space when creating and writing data files but I'm wondering how the linux copy command /bin/cp handles that.
Does cp just copy bytes or chunks of data in a loop (and let the file system deal with it)? Or does cp first call fallocate() or posix_fallocate() with the size of the source file?
I haven't found anything on this subject searching the internet.
The filesystem could be ext3, ext4, or xfs.
Centos 8.1, kernel 4.18.0-147.el8.x86_64 #1 SMP
EDIT I
As background, the actual application reads a constant bit rate network stream and pre-allocates a file for N seconds of content. If the actual bitrate is higher, the file naturally grows. ftruncate() is called when the file is closed, which handles if the actual bitrate is lower. cp is only used to move those files between filesystems, hence my question.
And the reasoning for that is to avoid fragmentation. Without fallocate the file system will become increasingly fragmented over time. (Of course fallocate() doesn't completely prevent the problem but certainly mitigates it)
According to Uninitialized blocks and unexpected flags, fallocate() results in "efficient" allocation of contiguous blocks (for most filesystems):
The fallocate() system call is meant to be a way for an application to request the efficient allocation of blocks for a file. Use of fallocate() allows a process to verify that the required disk space is available, helps the filesystem to allocate all of the space in a single, contiguous group, and avoids the overhead that block-by-block allocation would incur.
So I was wondering if copying a large, heavily fragmented file ends up contiguous or fragmented at the destination. Since cp doesn't use fallocate() to pre-allocate space, the answer appears to be "possibly yes".
|
The version of cp provided by GNU coreutils does use fallocate, but only to punch holes in files, not to pre-allocate space for copy targets.
There are a couple of mentions of adding support for fallocate, so it appears there were at least vague plans for something like this at some point.
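Since cp itself won't pre-allocate, here is a hedged sketch of doing it from a wrapper script instead: fallocate(1) (from util-linux) reserves the blocks, and dd's conv=notrunc keeps the copy from truncating away the reserved extent (cp would open the target with O_TRUNC and discard it). All paths are invented for the example.

```shell
src=/tmp/big.src
dst=/tmp/big.dst
dd if=/dev/urandom of="$src" bs=1k count=64 status=none   # sample source file
size=$(stat -c %s "$src")
fallocate -l "$size" "$dst"                 # reserve the blocks up front
dd if="$src" of="$dst" conv=notrunc status=none   # copy without truncating the allocation
cmp "$src" "$dst" && echo identical
```

This only helps on filesystems that actually support fallocate() (ext4 and xfs do; ext3 falls back to writing zeros).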
| Does cp (copy) pre-allocate space with fallocate()? |
1,508,107,905,000 |
I have to move my user home to another device, it occupies several gigabytes, I would like to avoid losing something, at the moment I think I will use rsync rsync --progress -avh --remove-source-files $SRC/ $DST/, Is there anything better?
|
If you want to be on the really safe side (depending on your level of paranoia):
Burn in the new device
Perform surface test of new device
Copy files to new device (i.e. not using the option --remove-source-files)
Mount the new device in its intended place and make sure everything works correctly
Only when everything works fine, discard the source files.
In case you would rather keep a backup of the source you could instead of using rsync for copying in point 3 create a tar archive of your home, extract that to the new device, and keep the archive when discarding the old device.
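A minimal sketch of that tar variant, with throw-away paths standing in for the old and new devices (-p preserves permissions):

```shell
# Archive the home dir on the old device, extract it on the new one,
# and keep the tarball around as a backup until everything checks out.
mkdir -p /tmp/olddisk/home/alice /tmp/newdisk
echo data > /tmp/olddisk/home/alice/notes.txt
tar -C /tmp/olddisk/home -cpf /tmp/newdisk/alice-home.tar alice
tar -C /tmp/newdisk -xpf /tmp/newdisk/alice-home.tar
cmp /tmp/olddisk/home/alice/notes.txt /tmp/newdisk/alice/notes.txt && echo OK
```

Only after verifying the extracted tree would you discard the source and, eventually, the tarball.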
| Linux, safest way to move a file system |
1,508,107,905,000 |
I know how to use the cp command to recursively (-r) copy nested folders while preserving (-p) file meta-data such as modification times:
cp -pr "/path/from" "path/to"
Is there a way to omit one particular folder nested at the first level deep, by name?
For example, a folder named Photos is being copied. I want to skip the Dogs folder nested within. I want everything but not /Photos/Dogs folder nor its contents.
I suppose I could let it copy, then delete. But that is inefficient. Is there a way to avoid the folder copy in the first place?
I am working on macOS Mojave currently, and FreeBSD 12 later.
|
Yes, you can fairly easily make a copy of a file structure while avoiding copying one or several of the subdirectories, but you would not do it with cp.
With rsync, you can exclude files and/or directories using an exclusion pattern. In your case, it looks like you'd want to use
rsync -av --exclude='Photos/Dogs/' /path/from/ path/to
This would make path/to an exact copy of /path/from while avoiding any directory called Dogs at any path matching Photos/Dogs. If you remove the trailing / on the source directory, you'll instead get path/to/from as an exact copy of the source folder.
The exclude pattern used would make rsync ignore any subdirectory called Dogs located in a directory called Photos, for example /path/from/Photos/Dogs and /path/from/holidays/2013/Photos/Dogs. Using --exclude=Dogs/ would exclude any subdirectory called Dogs regardless of what its parent directory is called, and using --exclude=Dogs would exclude anything (regardless of file type) that is called Dogs. To match only a Photos/Dogs directory directly under /path/from, use --exclude='/Photos/Dogs/'.
See the section called "INCLUDE/EXCLUDE PATTERN RULES" in the rsync manual on your system.
The -a (--archive) option will make sure timestamps, permissions etc. are also copied, and also enable recursive copying. The -v (--verbose) option enables verbose operation.
Add -H (--hard-links) if you want to preserve hard linking between names (again, see the rsync manual).
| Omitting one particular nested folder while copying a folder |
1,508,107,905,000 |
I have a folder that contains 1208 folders. In each of these folders, I have 6 different files which follow a special naming convention.
What I need to do is to get only one of these files from all the 1208 folders if it contains the following in its name: _fa_a
The hard way is to go into each of the folders and copy that file to my destination folder.
Is there an easier way to do so? or I need to do it manually?
|
find your_folder -type f -name "*_fa_a*" | while read filename; do echo mv "$filename" destination_folder; done
This find command finds the matching files and moves them to destination_folder.
I added the echo command so you can verify the results before anything is moved. Once you are happy with the output, remove the echo so that mv actually runs.
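If the file names may contain spaces or other odd characters, a while read loop over unquoted find output can misbehave; letting find invoke cp (or mv) directly avoids that. A self-contained sketch with invented paths (-t is GNU cp):

```shell
mkdir -p /tmp/tree/d1 /tmp/tree/d2 /tmp/collected
touch /tmp/tree/d1/x_fa_a_1.txt /tmp/tree/d2/y_fa_a_2.txt /tmp/tree/d2/other.txt
# Collect every matching file from the whole tree into one destination folder.
find /tmp/tree -type f -name '*_fa_a*' -exec cp -t /tmp/collected {} +
ls /tmp/collected
```

Swap cp for mv once you are satisfied with what gets collected.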
| Extract a single file from a list of directories in Linux |
1,508,107,905,000 |
I've copied a large directory to another location (through a network). I needed to preserve all the timestamps (especially ctime and mtime).
However somewhere in the process I screwed things up. (I probably made a typo in the flags.) And all the files have new timestamps now.
I still have the directory with the correct timestamps. But I don't want to copy it all again because it took me days. Can I somehow just sync the timestamps, e.g. with rsync?
Note that this has to be done through a ssh tunnel over a network that is rather slow. The PCs on both ends however are quite fast.
|
Yes, rsync is your best bet. Something like this should work:
rsync -vr --size-only --times <source> <dest>
--size-only tells rsync not to copy the files again, --times tells it to update timestamps.
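If rsync isn't available on one end, the same timestamp-only repair can be sketched with touch -r, which copies a reference file's timestamps onto another file. This is a local sketch with throw-away paths; over ssh the touch would have to run on the remote side.

```shell
mkdir -p /tmp/ref /tmp/tgt
echo a > /tmp/ref/f; echo a > /tmp/tgt/f
touch -d '2001-02-03 04:05:06' /tmp/ref/f      # pretend this is the correct mtime
# Walk the reference tree and stamp each same-named file in the target tree.
(cd /tmp/ref && find . -type f) | while IFS= read -r p; do
    if [ -e "/tmp/tgt/$p" ]; then touch -r "/tmp/ref/$p" "/tmp/tgt/$p"; fi
done
stat -c %y /tmp/tgt/f
```

Note that this (like rsync) can restore mtime and atime, but not ctime, which the kernel updates itself.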
| Fit timestamps of multiple files/folders to existing ones |
1,508,107,905,000 |
I'm trying to replace my cp function by an rsync function
My cp function is the following
find /home/odroid/USBHD/Movies/ -iname "*.mkv" -mtime '-1' -exec cp -n {} /home/odroid/NASVD/Movies \;
Do you guys have any idea how to do this? (Note the mtime test can also be replaced by --ignore-existing.)
|
find /home/odroid/USBHD/Movies/ -iname "*.mkv" -mtime '-1' -print0 | xargs -0 -I{} rsync -a --ignore-existing {} /home/odroid/NASVD/Movies/
| Replace cp function by rsync [duplicate] |
1,508,107,905,000 |
I wanted to generate a waste file of 50 GB, so I wrote this:
eightnoteight@mr:~/ while true; do
> cat txt >> tmp
> cat tmp >> txt
> done
and when I ran top and watch to observe, I noticed that in top the memory consumption of cat is 0.0.
If cat is not consuming my memory, who is doing the work? (Is it direct kernel calls?)
|
I believe you are getting misled by the rounding in the %MEM column. If you note the VIRT and RSS columns, they report the amount of virtual memory and resident memory used. In both cases you can see that they are non-zero.
Virtual memory is the amount of virtual memory the process has, including shared libraries and pages that have been swapped out. Resident memory (RSS) is the amount of non-swapped physical memory that the process has in use.
Because cat is a small executable with a simple job and low memory requirements, the memory it takes up rounds to 0.0% on your system with 4 GB of main memory.
Your instinct isn't far off, however, your kernel is doing most of the work that is actually involved in writing the file to disk.
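Incidentally, if the goal is just a large throw-away file, the kernel can hand one over without any copy loop at all. Sizes are reduced here for the example; scale up as needed.

```shell
truncate -s 10M /tmp/waste.sparse    # sparse: instant, occupies no real blocks
fallocate -l 10M /tmp/waste.real     # actually reserves the blocks (Linux)
stat -c '%n %s' /tmp/waste.sparse /tmp/waste.real
```

A sparse file reads back as zeros without using disk space, so use fallocate (or dd from /dev/zero) when the space genuinely has to be consumed.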
| top not showing the memory usage of cat |
1,508,107,905,000 |
I have a bunch of files with a pattern like this:
{UID}-YYMMDD-HHMMSSMM-NEW.xml
Or a real example:
56959-140918-12465122-NEW.XML
I want to copy these files to another directory in date and time order contained within the filename. I also only want to apply this to filenames that match the above pattern.
Are there any tools that exist to do this? If not I imagine a script could be used, something like.
Match filenames by regex
match date section by regex then list by date ascending
for each date match time by regex then list by time ascending add each file to global list.
Once finished with organising file list copy files
|
If all the matching files are in the current directory (and not in any subdirectory or if the subdirectory names do not contain -), you can use for step 1 to 3:
find -regex '.*/[0-9]+-[0-9]+-[0-9]+-NEW\.XML' | sort --field-separator=- --key=2 > filelist
and for step 4:
while IFS= read -r line; do
    cp -v "$line" /PATH/TO/DESTFOLDER/
done < filelist
Explanation: The regexp pattern of find matches all files with the described pattern. sort separates the fields by - and sorts first according to the second field (date), then according to the following fields, here the third field (time).
The way to process lines in a shell is described here. Each line is stored in the $line variable and copied to the destination folder. The -v option of cp displays which file is currently copied.
| Copy files by date/time order contained within filename? |
1,508,107,905,000 |
I'm not entirely sure where to ask this question, so if this is not the right place just let me know and I'll move it.
Today I was trying to using recursive scp to copy a directory between two Linux servers and made a typo when referring to the destination server. Instead of transferring the directory to the other machine, it began to transfer the file back into itself. It took a while for me to notice what it was doing since the directory was so large, but then I started seeing the same files popping up again and again on the transfer status until I killed it. It confused me that this was allowed and I have a few questions.
I guess my main question I would like to ask you all, though, is if there is ever an instance when a user would need to make a secure copy to himself on the same machine or is it an unnecessary side effect of scp that was just shrugged off?
Edit - Sorry guys, here's the actual command
scp -r /path/to/dir SameUser@sameserver:/path/to/dir
|
For copying files on the same machine you wouldn't need scp at all. Anyway, if you specify a directory or file as destination instead of a hostname and a path, it will copy it for you locally, which seems to be what happened. If you supply the command line you used, we can point out exactly what happened.
EDIT:
With the supplied command line, what it does is to go over the network interface, connect to the SSHD server on your local machine and then make the copy. There is no good reason for that since you can copy it locally with cp.
| Interesting Secure Copy Behavior |
1,508,107,905,000 |
I have two directories that look something like the following but with many more files.
folder1/pic1.png
folder1/test/readme.txt
folder2/guest.html
folder2/backup/notes.txt
I want to "merge" these two so all the contents of folder2 end up in folder1 and folder2 gets removed. They are on the same filesystem and disk (ext4). I know all of the files are unique, would mv work fine here?
|
The "rsync" command is useful for this. I do something like this:
rsync -PHACcviuma --copy-unsafe-links --exclude="*~" folder2/ folder1/ && rm -fr folder2
All the flags are documented in the rsync man page; basically rsync won't replace newer files with older ones, and won't bother to copy any files that are duplicated in the destination. Otherwise it will copy things with the original metadata (timestamp, permissions, etc) preserved.
The rsync program will also include "hidden files" (names starting with "."), backups (ending with "~", etc) so I use the --exclude option to skip certain uninteresting file patterns.
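Since the question asks whether plain mv would work: on the same filesystem, with no name collisions, it does, and it is cheap because entries are renamed rather than copied. A minimal sketch with throw-away paths — just mind that a * glob skips hidden files unless the shell's dotglob option is set.

```shell
mkdir -p /tmp/folder1/test /tmp/folder2/backup
touch /tmp/folder1/pic1.png /tmp/folder2/guest.html /tmp/folder2/backup/notes.txt
mv /tmp/folder2/* /tmp/folder1/   # note: skips dot-files unless dotglob is set
rmdir /tmp/folder2                # only succeeds if folder2 is now empty
ls /tmp/folder1
```

The rsync approach above is still the safer choice when the trees may overlap or contain duplicates.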
| How should I merge two folders on the same filesystem? |
1,508,107,905,000 |
Is there a Linux command to copy a_b_c_d.jpg into a.jpg, b.jpg, c.jpg, and d.jpg so that each file is a copy of the original?
It should extract the name from the original name, separated by _ and ending with the first ..
|
Mayhap something along this line:
for FN in *.jpg; do IFS="_."; AR=($FN); for i in "${AR[@]}"; do [ ! "$i" = "jpg" ] && echo cp "$FN" "$i".jpg; done; done
cp a_b_c_d.jpg a.jpg
cp a_b_c_d.jpg b.jpg
cp a_b_c_d.jpg c.jpg
cp a_b_c_d.jpg d.jpg
You may want to add some more error checking, IFS safeguarding, remove the echo once you're happy with it, etc...
EDITed according to Kusalananda's comment.
for FN in *.jpg; do IFS="_"; for i in ${FN%.jpg}; do echo cp "$FN" "$i".jpg; done; done
| Is there a linux command to copy a file to multiple other files based on the filename? [closed] |
1,508,107,905,000 |
Referencing this post to find and delete duplicate files based on checksum, I would like to modify the approach to perform a copy operation followed by a file integrity check on the destination file.
SOURCE=/path/to/Source
DEST=/path/to/Destination
# filecksums containing the md5 of the copied files
declare -A filecksums
for file in "$@"
do
[[ -f "$file" ]] || continue
# Generate the checksum
cksum=$(cksum <"$file" | tr ' ' _)
# Can an exact duplicate be found in the destination directory?
if [[ -n "${filecksums[$cksum]}" ]] && [[ "${filecksums[$cksum]}" != "$file" ]]
then
rm -f "$file"
else
echo " '$file' is not in '$DEST'" >&2
fi
done
I want to use the result of the md5 checksum comparison to allow rm -f of the source file only if the checksums are equivalent. If there is a difference, I want to echo the result and escape. rsync might be another option, but I think I would have problems forcing a checksum comparison for local-local file transfer.
UPDATE
I have looked into using rsync per @Lucas's answer. It appears that there are options to transfer files more stably, with checks, rather than a bulk mv /data1/* /data2/, and to report what was done and delete the sources after a check. This might narrow the question as indicated by community members.
|
Implementing something like this might be hard as a first try if you care about the files and don't want to mess up. So here are some alternatives to writing a full script in bash. These are more or less complex command lines (oneliners) that might help in your situation.
There is one uncertainty in your question: do you want to compare each file in source with every file in dest or only those with "matching" file names? (That would be compare /path/to/src/a with /path/to/dest/a and /path/to/src/b with /path/to/dest/b but not /path/to/src/a with /path/to/dest/b and so on)
I will assume that you only want to compare files with matching paths!!
first idea: diff
The good old diff can compare directories recursively. Also use the -q option to just see which files differ and not how they differ.
diff -r -q /path/to/source /path/to/dest
cons
This can take a long time depending on the size of your hard disk.
This doesn't delete the old files.
The output is not easily parseable
pros
This doesn't delete any files :)
So after you manually/visually confirmed that there are no differences in any files you care about you have to manually delete the source with rm -rf /path/to/source.
second idea: rsync (edit: this might be the best now)
rsync is the master of all copying command line tools (in my opinion ;). As mentioned in the comments to your question it has a --checksum option, but it has a bulkload of other options as well. It can transfer files from local to remote, from remote to local, and from local to local. One of the most important features in my opinion is that if you give the correct options you can abort and restart the command (execute the same command line again) and it will continue where it left off!
For your purpose the following options can be interesting:
-v: verbose, show what happens can be given several times but normally one is enough
-n: dry run, very important to test stuff but don't do anything (combine with -v)!!
-c: use checksum to decide what should be copied
--remove-source-files: removes files that were successfully transferred (pointed out by @brawny84; I did not know it and did not find it in the man page on my first read)
So this command will overwrite all files in dest which have a different checksum than the corresponding file in source (corresponding by name).
rsync -a -c -v --remove-source-files -n /path/to/source /path/to/dest
rsync -a -c -v --remove-source-files /path/to/source /path/to/dest
pros
works with checksums
has a dry run mode
will actually copy all missing files and files that differ from source to dest
can be aborted and restarted
has an exclude option to ignore some files in src if you don't want to copy all files
can delete transferred source files
cons
??
third idea: fdupes
The program fdupes is designed to list duplicate files. It checks the md5sums by default.
pros
it uses md5 to compare files
it has a --delete option to delete one of the duplicates
cons
it compares each file to every other file so if there are duplicate files inside dest itself it will also list them
delete mode seems to be interactive, you have to confirm for every set of equal files, that might not be feasible for large directory trees
the non-interactive mode will delete all but the first file from each set of equal files. But I have no idea which file counts as the first (the one in source or the one in dest?)
last idea: go through the pain of actually writing and debugging your own shell script
I would start with something like this if it has to be done manually. I did not test this, so try it with the ls first and make sure it won't break anything!
#!/bin/bash
# first require that the source and dest dirs
# are given as arguments to the script.
src=${1:?Please give the source dir as first argument}
dest=${2:?Please give the destination dir as second argument}
# go to the source directory
cd "$src"
# This assumes that there are no newlines in filenames!
# first find all plain files in the current dir
# (which should be $src)
# then use xargs to hand the filenames to md5sum
# pipe the md5 sums into a subshell
# go to the dest in the subshell
# read the md5sums from stdin and use md5sum -c to check them
# After the subshell filter lines to only keep those that end in "OK"
# and at the same time remove the "OK" stuff after the file name
# use xargs to hand these file names to ls or rm.
find . -type f | \
xargs md5sum | \
( cd "$dest" && md5sum -c ) | \
sed -n 's/: OK$//p' | \
xargs ls
The ls in the last line is to list all files that passed the check. If you replace it with rm they are removed from the source dir (the current dir after the cd "$src").
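The core trick of that pipeline — generating checksums in one directory and verifying them from another with md5sum -c — can be tried in isolation before trusting the full script:

```shell
mkdir -p /tmp/msrc /tmp/mdst
echo hello > /tmp/msrc/a.txt
cp /tmp/msrc/a.txt /tmp/mdst/a.txt
# Sums are computed relative to msrc, then checked relative to mdst,
# so only the relative file names have to match.
(cd /tmp/msrc && md5sum a.txt) | (cd /tmp/mdst && md5sum -c -)
```

md5sum -c prints one "NAME: OK" line per matching file, which is exactly what the sed in the script filters on.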
| Help with script/rsync command to move file with md5 sum comparison before deleting the source file/ [closed] |
1,508,107,905,000 |
Is there any difference between:
cp -R /a/* /b
and
cp -R /a/. /b
The original idea was to copy anything from folder /a into folder /b.
|
The only difference is that the command
cp -R /a/. /b
copies hidden files and directories from /a to /b, while the command
cp -R /a/* /b
does not.
The reason for the second command not copying hidden files is that the * expands to all the non-hidden names in /a (unless the shell option dotglob is set in bash, or the equivalent option in whatever shell is being used, if available).
The original question used -r in the second command instead of -R:
The flag -r is kept in some implementations of cp (GNU cp for example) for backwards compatibility. It is a non-standard flag for the cp command and on implementation that have it, it is similar to -R.
In GNU and AIX cp, -r and -R are the same. In some historical implementations of cp, it handles special files such as FIFOs and sockets differently. Solaris' implementation of cp -r/-R is only different for FIFOs (-R recreates them, -r reads from them). None of the free BSDs have -r in their cp implementations.
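The hidden-file difference is easy to demonstrate with throw-away directories:

```shell
mkdir -p /tmp/a /tmp/b1 /tmp/b2
touch /tmp/a/.hidden /tmp/a/visible
cp -R /tmp/a/. /tmp/b1     # the . form copies .hidden and visible
cp -R /tmp/a/* /tmp/b2     # the glob does not match .hidden
ls -A /tmp/b1 /tmp/b2
```

Only /tmp/b1 ends up with the .hidden file; /tmp/b2 gets just the visible one.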
| Difference in cp -R argument? |
1,508,107,905,000 |
I want to copy and rename multiple files from one directory to another.
Particularly, I want something like this:
/tmp/tmp.XXX/aaa.original.txt
/tmp/tmp.XXX/bb5.original.txt
/tmp/tmp.XXX/x2x.original.txt
copied into
/root/hello/dump-aaa.txt
/root/hello/dump-bb5.txt
/root/hello/dump-x2x.txt
I've tried some things like these which don't work:
cp /tmp/tmp.XXX/*.original.txt /root/hello/*.txt
find /tmp/tmp.XXX/ -name '*.original.txt' | xargs -i cp /root/hello/dump-{}.txt
for f in /tmp/tmp.XXX/*.original.txt; do cp -- "$f" "/root/hello/dump-$f.txt"; done
Usually the above codes result in error:
cp: cannot create regular file '/root/hello/dump-/tmp/tmp.XXX/aaa.original.txt.txt': No such file or directory
|
bash solution:
for f in /tmp/tmp.XXX/*.original.txt; do
bn="${f##*/}" # extracting file basename
cp "$f" "/root/hello/dump-${bn%%.*}.txt"
done
| How to copy and add prefix to file names in one step from another directory? |
1,508,107,905,000 |
We want to migrate the contents of an old server to a new one, but rather than copying everything, we want to exclude any directory that has a .swf file in it. We are aware of the --exclude flag, but it will only exclude the file(s), not the parent directory (and its contents), if there is a .swf file.
If this cannot be done with rsync, is there a bash script we can use to copy files from one server to another, with this condition?
|
You could generate a list of directories which contain *.swf files, and then convert that list into an exclude file for rsync.
e.g.
find /path/to/topdir/ -iname '*.swf' -printf "%h\n" |
sort -u |
sed -e 's/^/- /; s:$:/**:' > rsync-exclude.txt
output will look something like this, which can be used with rsync's --exclude-from=FILE option:
- /path/to/topdir/directory1/**
- /path/to/topdir/directory2/**
- /path/to/topdir/directory3/**
- /path/to/topdir/directory5/subdir1/**
Note that rsync's include and exclude options can be difficult to master, they do not work quite like either globs or regular expressions. You will want to practice with --dry-run and --verbose first until you get the patterns exactly right.
| rsync - copy contents of the directory only if a certain file doesn't exist |
1,508,107,905,000 |
I want to copy a Windows 7 partition that came installed on my laptop to my desktop computer.
I've tried:
# bzip2 -c /dev/sda5 | nc 192.168.1.1 2222 # on laptop
# nc -l 2222 | bzip2 -d > /dev/sda1 # on desktop
But gparted tells me the partition is corrupted with a lot of error messages.
I also tried:
# dd if=/dev/sda1 | gzip -1 - | ssh user@hostname dd of=image.gz # on laptop
# dd if=image.gz | gunzip -1 - | dd of=/dev/sda5 # on desktop
It worked for a small partition (35 MB), but didn't for larger ones (18 GB and 120 GB). The ssh pipe keeps breaking, and one attempt that completed gave errors in gparted.
What would be a better way of copying the partitions?
|
I finally copied using a tar pipe.
# cd /mnt/sda1/ && tar cf - * | nc 192.168.1.1 2222 # on laptop
# cd /mnt/sda5/ && nc -l 2222 | tar x # on desktop
Copying was way faster and seemed to work.
I wasn't able to boot into Windows 7 though. I only saw a black screen when booting into it, and the recovery partition freezes at Starting Windows.
I think that you need two licences to use Windows 7 on two computers, so I don't think it would have worked anyway.
| How to copy a partition over network |
1,508,107,905,000 |
I'm looking to copy files from subdirectories that match this pattern
vendor/plugin/*/tasks/*.rake
into a folder
lib/tasks
|
Sounds pretty easy:
cp vendor/plugin/*/tasks/*.rake lib/tasks
Or if the first * should match a whole subtree, use something like:
find vendor/plugin -path "*/tasks/*.rake" -exec cp -t lib/tasks {} +
| How to copy files nested in directories that match a pattern? |
1,508,107,905,000 |
How can I copy files between two hosts the first primary host is running Linux and the secondary host is running Windows.
I am looking for a correct command line to use it in terminal/Linux?
I tried
scp user1@remote1:/home/file user2@remote:/home/file
But it didn't work.
Any suggestions ?
|
On Linux, install and run the SSH daemon sshd (the package is openssh-server in most distributions). Then from Windows download and use WinSCP to connect to the Linux machine and transfer your files in both directions.
Or - to do this the other way around - install the SSH server freeSSHd on Windows, then from Linux run the command scp user1@linuxbox:/home/user1/myfile user2@winbox: (adapt as appropriate).
All these tools are available for free.
| Copy files between two hosts |
1,425,693,332,000 |
I am installing an rpm package and it appears to be skipping certain files without giving me any notice as to what the issue is.
When I execute
rpm -ivh package_name.rpm
the rpm provides me with no indication that the installation failed.
After executing this, I verify the installation:
rpm -V package_name
And I see that some files are reported as missing
missing /path/to/some/crucial/file
When I look into my / directory, I see that a few files were created which start with u2dtmp*. These are the files which do not get created.
I have attempted to remove old locks from my rpm installation and cleaned the database rpm --rebuilddb, but nothing seems to allow these files to be installed successfully.
This issue only appears on a single machine. It installs successfully on other linux machines which have the same os.
|
After some work a solution was found. Inside the rpm, a few dos2unix calls were made. A coworker of mine was able to determine that the version of dos2unix that was installed had some issues.
After upgrading to the latest version, the u2dtmp* files disappeared.
| RPM skipping files on install |
1,425,693,332,000 |
I have a file test.txt in directory A/B. I want to copy test.txt to A/C and rename it newtest.txt.
I know I can use the cp and mv commands to do this, but the issue is that there already is a test.txt in A/C, and there's already a newtest.txt in A/B and I don't want to overwrite either of those files.
I know I technically can do what I want with mv test.txt ./verynewtest.txt && cp verynewtest.txt ../C && mv verynewtest.txt test.txt && cd ../C && mv verynewtest.txt newtest.txt, but that seems really long.
Is there a faster/better way of doing this?
|
Just do
$ cd A/B
$ cp test.txt ../C/newtest.txt
Use
$ cp -i test.txt ../C/newtest.txt
to check whether ../C/newtest.txt (i.e., A/C/newtest.txt) already exists
and ask for confirmation.
(I almost never deliberately overwrite files,
so I alias cp to cp -i to get this protection every time I do a cp.
But it’s also wise just to be careful not to clobber files
you don’t want to clobber, and not rely on aliases to save you.)
| How can I copy a file and paste it as a different name? |
1,425,693,332,000 |
I need to make a copy of my home directory and place that copy in the same home directory. The exit code of the command must be 0. Currently, my home directory does not contain any other directories.
Is there a better way than the following? (pwd is the home directory)
mkdir /tmp/temp && cp * /tmp/temp && mv /tmp/temp .
|
Call rsync and exclude the directory where you're putting the copy.
cd
mkdir copy
rsync -a --exclude=copy . copy
Copying * excludes dot files (files whose name begins with a .), which are common and important in a home directory.
| How to copy the home directory in the home directory? |
1,425,693,332,000 |
I was copying a very big file and I accidentally stopped it. Can I resume copying the data without needing to delete the copy and start over?
Command I used:
pv original.data > copy.data
|
Continue with dd:
dd if=original.data of=copy.data ibs=512 obs=512 seek=NNN skip=NNN status=progress
First get the byte count of copy.data, then replace the NNNs with that byte count divided by 512 (the value set for ibs and obs); integer division rounds down, so the last, possibly partial, block is rewritten.
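The block arithmetic can be scripted instead of filled in by hand. A sketch with small throw-away files, simulating an interrupted copy; conv=notrunc makes sure the partial copy is not truncated before resuming:

```shell
# Build a sample "interrupted" copy: 3000 of 10000 bytes made it across.
head -c 10000 /dev/urandom > /tmp/original.data
head -c 3000  /tmp/original.data > /tmp/copy.data

blocks=$(( $(stat -c %s /tmp/copy.data) / 512 ))   # whole 512-byte blocks already copied
dd if=/tmp/original.data of=/tmp/copy.data bs=512 \
   skip="$blocks" seek="$blocks" conv=notrunc status=none
cmp /tmp/original.data /tmp/copy.data && echo resumed OK
```

The division rounds down, so any trailing partial block is simply rewritten.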
| Continue copying file |
1,425,693,332,000 |
I would like to copy files in an remote server from one directory (in the remote server) to another (also in the same remote server). I tried this:
scp -r [email protected]:/folder_a/*myfiles* ../folder_b
This did not give any error message, but it did not work. I also tried this:
scp -r [email protected]:/folder_a/*myfiles* [email protected]:/folder_b
which gave an error message:
Permission denied, please try again.
Permission denied, please try again.
Received disconnect from : 2: Too many
authentication failures for myacc lost connection
How to copy files within a same server from one directory to another?
|
You should use ssh and do:
ssh [email protected] "cp /folder_a/*myfiles* /folder_b"
| How to copy files within a remote server? |
1,425,693,332,000 |
So I was trying to download a file from a remote host connected with SSH to a local folder on my Mac using rsync. The command I used is:
$ rsync --progress -avz -e "ssh [email protected] -i ~/.ssh/keyFile" [email protected]:/path/to/files/ ~/Downloads/
And here is what is prompted:
[email protected]'s password: [I typed the password here]
bash: 1.2.3.4: command not found
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: error in rsync protocol data stream (code 12) at io.c(226) [Receiver=3.1.3]
It seems odd that the remote host address 1.2.3.4 is interpreted as a command?! I am not sure how and why this happens. How could I update the command to achieve expected result and start to copy the file?
|
I'm pretty sure that you don't want to include the user@host part of the ssh command within the -e flag. Try
rsync --progress -avz -e "ssh -i ~/.ssh/keyFile" [email protected]:/path/to/files/ ~/Downloads/
| rsync thinks the source host is a command |
1,425,693,332,000 |
I am running a PHP script on my Apache server and from the script I need to copy some files (to run a Bash script that copies files). I can copy to a directory /tmp with no problems, but when I want to copy to /tmp/foo then I get this error:
cp: cannot create regular file '/tmp/foo/file.txt': Permission denied
even though the permissions for the directory /tmp and /tmp/foo are set to the same value.
Do you know what is the problem?
|
The /tmp directory has read/write permissions for all users, but if you created /tmp/foo under your own account, its permissions apply only to you! If you want to make it writable for other users (or programs), change its permissions with this command:
chmod 777 /tmp/foo
If you have any other files inside this directory from before, add the -R flag to the above command.
Update:
Use this command to change /tmp/foo owner from your own to apache default user:
sudo chown www-data:www-data /tmp/foo -R
also please check your apache2 configuration to see which user it has for running the php scripts.
| cp: cannot create regular file: Permission denied |
1,425,693,332,000 |
I have one file, let's call it image.png. I also have a folder of files like so:
picture.png
file.png
screenshot.png
art.png
painting.png
And so on.
What I want to do is replace each file inside the folder with image.png, but I want to keep the name of the original (so picture.png is still called picture.png, but when viewed contains image.png).
image.png does not have to be located inside the folder.
I've tried this so far, but it doesn't seem to work:
for file in 'folder'
do
cp -f 'image.png' $file
done
|
cp should do what you want. The problem is that you are not iterating through a folder: the loop runs exactly once, with the literal string 'folder' as the value of $file. Try iterating over a file glob instead, like this:
for file in folder/*
do
cp -vf 'image.png' "$file"
done
I added a -v so you can get more verbose output to see any error, but you could leave that off once you get the correct results.
| How do I replace all files in a folder with one file? |
1,425,693,332,000 |
I got a git directory with plenty of python files (and some special files like .git).
I'd like to copy only these python files to another directory with the directory structure unchanged.
How to do that?
|
With this you will receive, in destination_dir, the files with their full path from /:
find /path/git_directory -type f -iname "*.py" \
-exec cp --parents -t /path/destination_dir {} +
Other solution is rsync
rsync -Rr --prune-empty-dirs \
--include="*.py" \
--include="**/" \
--exclude="*" \
/path/git_directory /path/destination_dir
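A quick self-contained check of the find + cp --parents variant (GNU cp; all paths are invented for the demo):

```shell
mkdir -p /tmp/git/src/pkg /tmp/out
touch /tmp/git/src/pkg/mod.py /tmp/git/src/pkg/readme.md
cd /tmp
# --parents recreates the relative directory structure under the target.
find git -type f -iname '*.py' -exec cp --parents -t /tmp/out {} +
find /tmp/out -type f    # only the .py file, with its directory structure
```

Running find with a relative starting path keeps the recreated tree rooted at the destination rather than replicating the full absolute path.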
| How to copy a directory with only specified type of files? |
1,425,693,332,000 |
I installed CentOS 8 with Xfce and there are not many GTK themes in it. Can I simply copy them from Mint 20 Xfce? Is it legal to copy the contents of the two folders /usr/share/icons/ and /usr/share/themes/ between different Linux distros?
|
There shouldn’t be any problem in the majority of cases; exceptions might include themes with branding (but I haven’t checked).
You’ll find the license terms for the various files involved on most if not all distributions, included in the distribution. For Debian-based distributions, find the packages involved using dpkg -S ${file}, then look at /usr/share/doc/${package}/copyright. For Fedora- or RHEL-based distributions, find the packages involved using rpm -q --whatprovides ${file}, then look at /usr/share/licenses/${package} (check the output of rpm -qL ${package} if necessary).
| Is it legal to copy themes between different linux distros? |
1,425,693,332,000 |
I have a file in a very long path, for example:
/opt/very/long/path/file1
I want to copy the file in this directory:
cp /opt/very/long/path/file1 /opt/very/long/path/file2
I don't want repeat this long path. I can go to the destination folder and copy:
cd /opt/very/long/path/
cp file1 file2
But I don't want change directory. One of the reason is: if I had many long paths I would have to go to directories every time.
cd /opt/very/long/path/
cp file1 file2
cd /opt/other/very/very/long/path/
cp fileA fileB
Other reasons: I want to keep context and a clear history (every command says what was copied and where).
Therefore, better is not change directory:
cp /opt/very/long/path/file1 /opt/very/long/path/file2
cp /opt/other/very/very/long/path/fileA /opt/other/very/very/long/path/fileB
But I have to repeat paths.
Is there a shortcut like this?
cp /opt/very/long/path/file1 ./file2
cp /opt/other/very/very/long/path/fileA ./fileB
But a dot . means "current directory". Is there any character that means "destination directory" or "source directory"?
cp /opt/very/long/path/file1 <destination>/file2
cp /opt/other/very/very/long/path/fileA <destination>/fileB
|
Brace expansion is nice for this:
cp /opt/other/very/very/long/path/{fileA,fileB}
... will expand to:
cp /opt/other/very/very/long/path/fileA /opt/other/very/very/long/path/fileB
when it actually executes.
The command will show up in your history as you typed it, which preserves the paths:
$ history
# ...
508 cp /opt/other/very/very/long/path/{fileA,fileB}
509 history
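Since brace expansion is performed by the shell (bash, zsh) before cp ever runs, you can preview the resulting command line with echo; the path here is just the one from the question:

```shell
# echo prints the argument list cp would receive after brace expansion
bash -c 'echo cp /opt/very/long/path/{file1,file2}'
# -> cp /opt/very/long/path/file1 /opt/very/long/path/file2
```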
| Copy a file in a destination folder |
1,425,693,332,000 |
I'm using the following command to get the most recent file in a directory
/usr/bin/find /home/user1/folder1/ -type f -printf '%T@ %p\n' | sort -n | tail -1 | cut -f2- -d" " | cut -f5 -d"/"
This returns only the file name, not the entire path.
I then want to copy the file I found into another folder, so I append the following to the previous find command:
-exec cp {} /home/user2/folder2 \;
So the full command looks like this:
/usr/bin/find /home/user1/folder1/ -type f -printf '%T@ %p\n' | sort -n | tail -1 | cut -f2- -d" " | cut -f5 -d"/ -exec cp {} /home/user2/folder2 \;
But this returns
cut: invalid option -- 'e'
What am I doing wrong here?
|
Your command appears to have two issues, the first of which may not matter much in your case, but is nevertheless worth pointing out: (i) it is not generic in the sense that it will not be able to process arbitrary filenames, in particular filenames that contain newlines (i.e. \n), and (ii) as already noted by Kusalananda, the -exec option belongs to the find command, and can thus not be separated therefrom as you have tried.
Using the GNU utilities, these issues can be fixed with the following pipeline, which will find the most recent file in (or below) the directory /home/user1/folder1/ and copy it to /home/user2/folder2/:
find /home/user1/folder1/ -type f -printf '%T@ %p\0' 2>/dev/null |
sort -znk1,1 | tail -zn1 | cut -zf2- -d' ' |
xargs -0 cp -t /home/user2/folder2/
As to issue (i): note the \0 at the end of the -printf format string, and the -z and -0 options to the various commands in the pipeline, which ensure that the identified filename is passed in a NUL-delimited fashion, and thus enable it to include blanks and/or newlines.
As to issue (ii): you can use the xargs command to collect arguments from stdin and to build a new commandline with them. Part of the trick here is to use the -t option to the cp command, to specify the target directory before providing any filename to be copied there, since xargs will build a commandline by appending any arguments it receives on stdin to a given command.
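A self-contained run of the pipeline (file names and dates made up; needs GNU findutils and coreutils 8.25+ for the -z/-0 options):

```shell
# Three files with distinct modification times
mkdir -p srcdir destdir
touch -d '2020-01-01' srcdir/old.txt
touch -d '2021-01-01' srcdir/mid.txt
touch -d '2022-01-01' srcdir/new.txt

# Copy only the most recently modified file into destdir
find srcdir -type f -printf '%T@ %p\0' |
    sort -znk1,1 | tail -zn1 | cut -zf2- -d' ' |
    xargs -0 cp -t destdir/

ls destdir   # new.txt
```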
| Find and copy with exec not working |
1,425,693,332,000 |
I can use rm to delete the old folder, then use cp to copy the new folder. But how to do it in one go?
|
Using rsync:
rsync -av --delete source/ target
This would delete all the contents of the directory target that does not match the contents of the directory source, and would additionally copy the contents of source there.
The trailing / at the end of source/ is significant as without it, you would get a directory at target/source instead of making target a copy of source.
The -a (or --archive) option makes rsync copy timestamps and other metadata, and the -v (or --verbose) option makes rsync operate verbosely. Without --delete, no existing contents in target would be deleted (unless it had the same name as things in source in which case it would be updated).
| How to copy a folder by overwriting an existing folder and delete all the old content in Linux? |
1,425,693,332,000 |
I am trying to copy files from one path to another path. I have a text file which has all names of files in the following pattern:
file-1.txt
file-2.pdf
file-3.ppt
....
I created a .sh file with the following code:
#!/bin/bash
file=`cat filenames.txt`;
fromPath='/root/Backup/upload/';
toPath='/root/Desktop/custom/upload/';
for i in $file;
do
filePath=$fromPath$i
#echo $filePath
if [ -e $filePath ];
then
echo $filePath
yes | cp -rf $filePath $toPath
else
echo 'no files'
fi
done
The above code is copying only the last file name from the text instead of all to the destination path.
|
file=/path/to/filenames.txt
fromPath=/root/Backup/upload/
toPath=/root/Desktop/custom/upload/
cd "$fromPath" && xargs cp -t "$toPath" < "$file"
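Since the question is about copying, the same xargs pattern with cp -t can be checked with a throwaway fixture (all names made up):

```shell
# List of file names, one per line, as in the question
mkdir -p from to
printf 'a\n' > from/file-1.txt
printf 'b\n' > from/file-2.pdf
printf 'file-1.txt\nfile-2.pdf\n' > filenames.txt

# Run from the source directory so the bare names resolve
( cd from && xargs cp -t ../to < ../filenames.txt )
```

Note that xargs splits on whitespace, so this assumes the listed file names contain no spaces.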
| copy files from one path to another path in linux |
1,425,693,332,000 |
I have 2 external drives and I want to use rsync to copy the files that have been updated (modified timestamp) in the source directory to a target directory.
The files have the same filename but the timestamp is different i.e. some files have been recently updated but the filename has remained the same.
However,
rsync -rv --ignore-existing --progress /Volumes/vol1/Data/ /Volumes/vol2/Data/
does not do anything. The result is null nothing is transferred.
sending incremental file list
sent 68 bytes received 12 bytes 160.00 bytes/sec total size is
20,634 speedup is 257.93
How can I solve this?
|
Well, Reading The Fine Manual I find this:
--ignore-existing skip updating files that exist on receiver
So by definition, the options you're using are explicitly asking NOT to update already existing files.
I think you simply want to use "-a" (archive) option:
rsync -av --progress /Volumes/vol1/Data/ /Volumes/vol2/Data/
| Rsync unexpected behavior |
1,425,693,332,000 |
I am trying to switch from one hard drive to another. So I decided to boot in Linux, hook both hard drives up, and copy all files from one hard drive to the other.
However, when I try to copy protected Windows-10 files, such as C:\Windows\explorer.exe or C:\Windows\notepad.exe, I get the following error:
cp: cannot access 'explorer.exe': Input/output error
The same happens regardless of the command that I run on the file -- even ls, or including sudo. Clearly, the file is present, since Windows boots normally. Also, the hard drive is not damaged.
How do I bypass this error, and copy these Windows files onto the new hard drive?
|
If you just copy files from one NTFS partition to another there's a high chance your Windows won't boot at all. You'll need to use ntfsclone for that.
Speaking of your specific error: you're most likely missing an NTFS-3G compression plugin. It's not clear what your distro is but in Fedora the package is called ntfs-3g-system-compression. According to repology it's not even available in Ubuntu and its derivatives, so you might want to install it manually:
https://github.com/ebiggers/ntfs-3g-system-compression
| How to copy Windows 10 system files in Linux? |
1,425,693,332,000 |
I am writing a shell script to do some complex task which is repetitive in nature.
To simplify the problem statement: the last step in my complex task is to copy a file from a particular path (which is found based on some complex steps) to a pre-defined destination path. The file that gets copied will have permissions 600. I want this to be changed to 644.
I am looking at an option where in I can instruct the system to copy that file and change the permission all in one command. Something like -
cp -<some_flag> 644 <source_path> <destination_path>
You may say I can change the permission of the file once it is copied, in a two-step process. However, there is a problem with that as well: the source path is obtained as the output of another command, so I get the absolute path for the file and not just the file name needed to call chmod in my script.
My command last segment looks like -
...| xargs -I {} cp {} /my/destination/path/
So I don't know the name of the file to call chmod on after the copy.
|
Just include the chmod in your xargs call:
...| xargs sh -c 'for file; do
cp -- "$file" /my/destination/path/ &&
chmod 644 /my/destination/path/"$file";
done' sh
See https://unix.stackexchange.com/a/156010/22222 for more on the specific format used.
Note that if your input to xargs is a full path and not a file name in the local directory, you will need to use ${file##*/} to get the file
name only:
...| xargs sh -c 'for file; do
cp -- "$file" /my/destination/path/ &&
chmod 644 /my/destination/path/"${file##*/}";
done' sh
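As an aside, GNU coreutils also ships install, which copies a file and sets its mode in one step; this is not what the pipeline above uses, but it fits the original "one command" wish (the paths below are invented):

```shell
# Create a source file with restrictive permissions
src=$(mktemp)
destdir=$(mktemp -d)
printf 'data\n' > "$src"
chmod 600 "$src"

# Copy and set mode 644 in a single command
install -m 644 "$src" "$destdir/copied.txt"

stat -c '%a' "$destdir/copied.txt"   # 644
```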
| How to copy a file and change destination file permission in one step? |
1,425,693,332,000 |
I use the following command to find a file and copy it somewhere else,
find /search/ -name file.txt -exec cp -Rp {} /destination \;
How can I copy all files and subdirectories in the parent directory of file.txt?
Example,
/search/test/sub
/search/test/sub2
/search/test/file.txt
/search/test/file.doc
They should be copied as
/destination/sub
/destination/sub2
/destination/file.txt
/destination/file.doc
|
With -execdir (not a standard predicate, but often implemented), the given utility would execute in the directory where the file was found.
This means that you could do
find /search -name file.txt -execdir cp -Rp . /destination \;
Without -execdir:
find /search -name file.txt -exec sh -c 'cp -Rp "${1%/*}/." /destination' sh {} \;
or,
find /search -name file.txt -exec sh -c 'cd "${1%/*}" && cp -Rp . /destination' sh {} \;
These last two variations execute a short in-line script for each found file. The script takes the pathname of the file as its first argument (in $1), and strips the filename off of the pathname using ${1%/*} (a standard parameter substitution). Then it applies the same cp command as in the first variation with -execdir.
The code that does the cd emulates a bit more faithfully what the -execdir variation at the top actually does, while the middle variation bypasses changing the directory by referring to . in the source directory at the end of the path instead.
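The ${1%/*} step can be checked in isolation; it removes the shortest trailing /* match, leaving the parent directory:

```shell
# Strip the filename from a pathname (example path from the question)
set -- /search/test/file.txt
echo "${1%/*}"   # /search/test
```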
| How to find a file and copy its directory? |
1,425,693,332,000 |
I am using Cygwin as Linux shell, I have following contents in my current working directory:
Files :
Abc.dat
123.dat
456.dat
Directories:
W_Abc_w
W_123_w
W_456_w
Now I want to copy files as below:
Abc.dat -> W_Abc_w
123.dat -> W_123_w
456.dat -> W_456_w
How can I achieve this in a single-line Linux command?
I need a generic solution which can be used for similar cases in the future.
The destination directory always exists, but the number of characters in the file name will vary. The destination directory name will always contain the file name of the file to be copied, along with other extra characters.
Destination directory names have a unique pattern, e.g. Abc_sa_file_name_1; the second directory name will be Abc_sa_file_name_2. File names also follow a pattern, e.g. kim_1, kim_2.
I will be moving or copying file kim_1 to Abc_sa_kim_1_1. I wish to handle the complete pattern in one command.
|
In one command (line):
cp Abc.dat W_Abc_w/; cp 123.dat W_123_w/; cp 456.dat W_456_w/
The trailing slashes are not required, but are a habit to indicate that the intention is to put the file into a destination directory, not to create a new file with that name.
As a generic loop with a pattern:
for f in ???.dat
do
[ -d W_"${f%.dat}"_w ] && cp -- "$f" W_"${f%.dat}"_w
done
This picks up every filename that has three characters followed by .dat and copies it into the correspondingly-named directory, if that directory already exists. The parameter expansion inside the cp command strips off the trailing .dat.
If you were interested in a command-line approach that does not use a loop -- but also moves the files instead of copies them -- you could use zsh:
autoload zmv
zmv '(???).dat' 'W_$1_w'
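A quick fixture for the loop variant (names taken from the question; W_456_w is deliberately missing to show the existence check):

```shell
# Files and (some of) their destination directories
mkdir -p W_Abc_w W_123_w
touch Abc.dat 123.dat 456.dat

# Copy each ???.dat file into its matching directory, if it exists
for f in ???.dat
do
    if [ -d W_"${f%.dat}"_w ]; then
        cp -- "$f" W_"${f%.dat}"_w
    fi
done
```

Here 456.dat is skipped because W_456_w does not exist.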
| Copy files such that individual files gets copied to the folder having file name as a string within complete folder name |
1,425,693,332,000 |
I have a symlink to a file on my Ubuntu system, and I need to copy the original file to a different directory and have a new name there. I am able to copy it to a different directory using
readlink -ne my_symlink | xargs -0 cp -t /tmp/
But I am not able to give a new name in the destination directory.
Basically, I am looking for a command that could look like:
readlink -ne base.txt | xargs -0 cp -t /tmp/newnametofile
When I try the exact same command above, it gives me file or directory not found error.
Anyway to achieve this?
|
cp will dereference symlinks with the -L option.
This should work:
cp -L my_symlink /tmp/newnametofile
Regarding your xargs: the -t, --target-directory option of cp only takes a DIRECTORY as input. You could make it work using xargs -I{} cp {} /tmp/newnametofile (but I'd use cp -L anyway).
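A minimal check of the dereferencing behaviour (file names from the question, copied into the current directory instead of /tmp):

```shell
# base.txt is the real file, my_symlink points at it
printf 'hello\n' > base.txt
ln -s base.txt my_symlink

# -L follows the link, so the copy is a regular file with the new name
cp -L my_symlink newnametofile
```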
| copying a symlink to a target file using cp -t |
1,425,693,332,000 |
I am using Ubuntu 18.04 LTS and
I wish to copy files from one folder to another and along with it also wish to store the filepath of each file that's being copied from folder1 to folder2 in a third file with space as a separator.
NOTE: since the cp command does not print any output, capturing its output in a file won't work.
Any mix of commands or script is welcome which can be run through terminal.
Please do not suggest any additional software.
|
A combination of find and cpio can be used to recursively copy all files and subfolders from folder1 to folder2. With tee in between you can write all file names (relative to folder1) into outputfile.
cd folder1 && find . -depth | tee outputfile | cpio -pdm folder2
The command cd folder1 is necessary because cpio wants to get file names relative to the source folder.
folder2 must be specified either absolute or relative to folder1.
To copy only the files in the top-level directory, you could modify the find command (note that -maxdepth must come before other tests):
... find . -maxdepth 1 -type f ...
| Get filepath of every file that's copied from one folder to another |
1,425,693,332,000 |
We need to do a once-only archive copying of users' home folders to an archive server (pending final deletion) when they leave, in case they later discover that they may still require some of their files (although we do of course very strongly encourage them to take their own backup of everything they might still need before they go).
We had been using scp for this, but have now got inadvertently snared by a former user who had installed some software which had created an unusual symlink structure in one of their folders, which seemed to result in scp looking ever upwards and then trying to copy rather more than was expected, before being stopped.
Unfortunately, it turns out that scp seems to always follow symlinks and does not appear to have any option to prevent this.
I am looking for an alternative way to backup a user folder that avoids this problem (and ideally is no more complicated than it absolutely needs to be).
tar could be a possibility, but I am slightly concerned that the creation of a tarball locally before copying it to the archive server could use a not insignificant amount of storage space, and might pose some difficulties in the event that our fileserver becomes rather more full at some point in the future.
Another possibility might be to use rsync, but this seems possibly over-the-top for a once-only file transfer, and I know from previous experience that tuning rsync's own options can sometimes be fiddly in itself.
Can anyone suggest a reliable and simple alternative to scp for this?
|
If you like tar except for the temp file, this is easy: don't use a temp file. Use a pipe.
cd /home ; tar cf - user | gzip | ssh -l archiveuser archivehost 'cat > user.archived.tar.gz'
Substitute xz or whatever you prefer for gzip. Or move it over to the other side of the connection, if saving CPU cycles on the main server is more important than saving network bandwidth (and CPU on the archive server)
cd /home ; tar cf - user | ssh -l archiveuser archivehost 'gzip > user.archived.tar.gz'
You could stick a gpg in there too. Have a key pair that's just for these archives, encrypt with the public key when storing, use the private key when you need to recover something.
More details as requested:
I intend user to be the user whose home directory /home/user you are archiving. archivehost is the server where you're going to store the archives, and archiveuser is an account on the archive server that will own the archives.
tar cf - user means "create a tar archive of user and write it to stdout". The -c is "create", and -f - is "use stdin/stdout as the file". It will probably work with just tar c user since -f - is likely to be the default, but originally the default action of tar was to read or write a tape device. Using an explicit -f - may just be a sign that I'm old.
The tar z flag would be fine, except then I couldn't show how to move it to the other side of the ssh. (Also, connecting gzip and tar with an explicit pipe is one of those "old people" things - tar didn't always have that option.) Plus I can substitute bzip2, lzop, xz -3v, or any other compression program without needing to remember the corresponding tar options.
I never heard of --checkpoint before, so you'll just have to rely on your own tests for that one.
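The tar-to-gzip pipe can be exercised locally before involving ssh at all (paths invented for the demo):

```shell
# Stand-in for /home/user
mkdir -p home/user
echo data > home/user/notes.txt

# Same pipe as above, writing the archive locally instead of over ssh
tar cf - -C home user | gzip > user.archived.tar.gz

tar tzf user.archived.tar.gz   # lists user/ and user/notes.txt
```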
| Archiving user home folder to remote server, without following symlinks? |