| date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,387,878,201,000 |
How is install different from a simple copy, cp or dd? I just compiled a little utility and want to add it to /usr/sbin so it becomes available via my PATH variable. Why use one vs the other?
|
To "install" a binary compiled from source the best-practice would be to put it under the directory:
/usr/local/bin
On some systems that path is already in your PATH variable; if not, you can add it by adapting the PATH variable in one of your profile configuration files (~/.bashrc or ~/.profile):
PATH=${PATH}:/usr/local/bin
dd is a low-level copy tool that is mostly used to copy exactly sized blocks from a source, which could be, for example, a file or a device.
cp is the common command to copy files and directories, recursively with the option -r, and preserving permissions with the option -p.
install is mostly similar to cp but additionally provides the option to set the destination file's permissions directly, without having to run chmod separately.
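For the single-binary case from the question, the practical difference looks like this (a sketch in a scratch directory, no root needed; myutil is a placeholder name):

```shell
tmp=$(mktemp -d)
printf '#!/bin/sh\necho hi\n' > "$tmp/myutil"

# With cp, setting the mode is a separate step:
cp "$tmp/myutil" "$tmp/copied" && chmod 755 "$tmp/copied"

# install does both in one step (and can set owner/group with -o/-g when root):
install -m 755 "$tmp/myutil" "$tmp/installed"

stat -c '%a' "$tmp/copied" "$tmp/installed"   # both print 755
```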
cp your files to /usr/local/bin and adapt the PATH variable if needed. That's what I would do.
| How is install different from cp? [duplicate] |
1,387,878,201,000 |
I have a huge file tree. Some files have same name but in different case, e.g., some_code.c and Some_Code.c.
So when I'm trying to copy it to an NTFS/FAT filesystem, it asks me about whether I want it to replace the file or skip it.
Is there any way to automatically rename such files, for example, by adding (1) to the name of conflict file (as Windows 7 does)?
|
Many GNU tools such as cp, mv and tar support creating backup files when the target exists. That is, when copying foo to bar, if there is already a file called bar, the existing bar will be renamed, and after the copy bar will contain the contents of foo. By default, bar is renamed to bar~, but the behavior can be modified:
# If a file foo exists in the target, then…
cp -r --backup source target # rename foo → foo~
cp -r --backup=t source target # rename foo → foo.~1~ (or foo.~2~, etc)
There are other variants, such as creating numbered backups only when one already exists. See the coreutils manual for more details.
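A small demonstration of the numbered-backup behavior on scratch directories:

```shell
tmp=$(mktemp -d)
mkdir "$tmp/src" "$tmp/dst"
echo new > "$tmp/src/foo"
echo old > "$tmp/dst/foo"

# --backup=t makes numbered backups of anything that would be overwritten:
cp -r --backup=t "$tmp/src/." "$tmp/dst/"

ls "$tmp/dst"        # foo  foo.~1~
cat "$tmp/dst/foo"   # new
```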
| Copy files with renaming |
1,387,878,201,000 |
I have moved (mv) a pretty large directory on my NAS (Linux based), but had to interrupt the procedure. Not being a regular Linux user, I thought I could just continue and merge the rest in later.
mv /oldisk/a /newdisk
The procedure is halfway done, so the rest of /oldisk/a still exists, and /newdisk/a with the already-copied files is already present. I have no idea which files have already been copied. BTW, /oldisk/a, of course, contains plenty of subdirectories.
What would be the best way to move / merge the remaining files to /newdisk/a ?
|
rsync --verbose --archive --dry-run /oldisk/a/ /newdisk/a/
The --dry-run (or -n) will do a dry run, showing you what it would do without actually doing anything.
If it looks ok, run the rsync without the -n option.
This will be a copy, not a move, which isn't quite what you're doing, but is safer. The --archive (or -a) ensures all the ownership and timestamps metadata is preserved (which a regular copy would not).
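If you do want to finish the interrupted move rather than keep a copy, rsync's --remove-source-files option deletes each source file once it has been transferred. A sketch, demonstrated here on scratch directories (rsync leaves the emptied source directories behind, which find can then remove):

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/oldisk/a/sub" "$tmp/newdisk/a"
echo data > "$tmp/oldisk/a/sub/file"

# Transfer, deleting each source file after it has been copied:
rsync -a --remove-source-files "$tmp/oldisk/a/" "$tmp/newdisk/a/"

# Remove the now-empty source directory tree:
find "$tmp/oldisk/a" -depth -type d -empty -delete
```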
| Best way to continue stopped move (mv) by merging directories? |
1,387,878,201,000 |
So I have a repo with some of my config files and I'm trying to create a makefile to install them in the homedir. The problem I have is that when I run the following command straight in bash
install -m 755 -d ~/path/to/dotfilesDir/ ~/
seemingly nothing happens while
install -m 755 ~/path/to/dotfilesDir/{file1,file2,...} ~/
works as intended.
Why doesn't the first (easier and cleaner) solution work?
|
From a look at the man page, it seems that install will not do what you want.
Indeed, the Synopsis section indicates a usage of the form:
install [OPTION]... -d DIRECTORY...
and later on, the man page says:
-d, --directory
treat all arguments as directory names; create all components of
the specified directories
So it seems to me that the point of this option is to be able to install a complicated (but empty) directory structure à la mkdir -p ....
You can accomplish what you want with a loop:
for file in /path/to/DotFiles/dir/*; do
    install -m 755 "$file" ~/
done
Or, if there are many levels under /path/to/DotFiles/dir, you can use find:
find /path/to/DotFiles/dir/ -type f -exec install -m 755 -t ~/ {} +
| Problem with install command to copy a whole directory |
1,387,878,201,000 |
I'm doing this sync locally on Ubuntu 12.04. The files are generally small text files (code).
I want to copy (preserving mtime stamp) from source directory to target but I only want to copy if the file in target already exists and is older than the one in source.
So I am only copying files that are newer in source, but they must exist in target or they won't be copied. (source will have many more files than target.)
I will actually be copying from source to multiple target directories. I mention this in case it impacts the choice of solution. However, I can easily run my command multiple times, specifying the new target each time, if that's what is required.
|
I believe you can use rsync to do this. The key observation would be in needing to use the --existing and --update switches.
--existing skip creating new files on receiver
-u, --update skip files that are newer on the receiver
A command like this would do it:
$ rsync -avz --update --existing src/ dst
Example
Say we have the following sample data.
$ mkdir -p src/; touch src/file{1..3}
$ mkdir -p dst/; touch dst/file{2..3}
$ touch -d 20120101 src/file2
Which looks as follows:
$ ls -l src/ dst/
dst/:
total 0
-rw-rw-r--. 1 saml saml 0 Feb 27 01:00 file2
-rw-rw-r--. 1 saml saml 0 Feb 27 01:00 file3
src/:
total 0
-rw-rw-r--. 1 saml saml 0 Feb 27 01:00 file1
-rw-rw-r--. 1 saml saml 0 Jan 1 2012 file2
-rw-rw-r--. 1 saml saml 0 Feb 27 01:00 file3
Now if I were to sync these directories nothing would happen:
$ rsync -avz --update --existing src/ dst
sending incremental file list
sent 12 bytes received 31 bytes 406.00 bytes/sec
total size is 0 speedup is 0.00
If we touch a source file so that it's newer:
$ touch src/file3
$ ls -l src/file3
-rw-rw-r--. 1 saml saml 0 Feb 27 01:04 src/file3
Another run of the rsync command:
$ rsync -avz --update --existing src/ dst
sending incremental file list
file3
sent 115 bytes received 31 bytes 292.00 bytes/sec
total size is 0 speedup is 0.00
We can see that file3, since it's newer and already exists in dst/, gets sent.
Testing
To make sure things work before you cut the command loose, I'd suggest using another of rsync's switches, --dry-run. Let's add another -v too so rsync's output is more verbose.
$ rsync -avvz --dry-run --update --existing src/ dst
sending incremental file list
delta-transmission disabled for local transfer or --whole-file
file1
file2 is uptodate
file3 is newer
total: matches=0 hash_hits=0 false_alarms=0 data=0
sent 88 bytes received 21 bytes 218.00 bytes/sec
total size is 0 speedup is 0.00 (DRY RUN)
| Best way to sync files - copy only EXISTING files and only if NEWER than target |
1,387,878,201,000 |
I'm trying to upload some big files (around 10GB) with a slow upload speed (200kb/s) over an often-disconnected SSH connection (due to poor network conditions).
I'm trying to use scp, but if there is a better way over SSH, I'm OK with it.
What is the best way to do it?
I've tried to split the files up using split, but it's not really efficient, as it requires a lot of manual work before and after the transfer.
|
Use rsync with the --partial option
rsync -av --partial sourcedir user@desthost:/destinationdir
The --partial option keeps partially transferred files. When you resume the rsync transfer after a broken SSH connection, a partially transferred file will resume from the point where the connection was lost, and files that were already transferred successfully will not be transferred again.
Also consider passing the -z option if you believe the files you are transferring can be compressed significantly; for example, log files consisting of repeated text.
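On a connection that drops repeatedly, the resume can be automated with a retry loop. This is a sketch, demonstrated locally (substitute user@desthost:/destinationdir/ for a real remote transfer):

```shell
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/bigfile" bs=1024 count=64 2>/dev/null
mkdir "$tmp/dest"

# Keep retrying until rsync reports success; --partial preserves whatever
# made it across each failed attempt, so progress accumulates.
until rsync -a --partial --timeout=60 "$tmp/bigfile" "$tmp/dest/"; do
  echo "transfer interrupted, retrying in 10s..." >&2
  sleep 10
done
```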
| Transfer a file over a unstable SSH connection |
1,387,878,201,000 |
This is a very basic question I am just quite new to bash and couldn't figure out how to do this. Googling unfortunately didn't get me anywhere.
My goal is to connect with sftp to a server, upload a file, and then disconnect.
I have the following script:
UpdateJar.sh
#!/bin/bash
sftp -oPort=23 [email protected]:/home/kalenpw/TestWorld/plugins
#Change directory on server
#cd /home/kalenpw/TestWorld/plugins
#Upload file
put /home/kalenpw/.m2/repository/com/Khalidor/TestPlugin/0.0.1-SNAPSHOT/TestPlugin-0.0.1-SNAPSHOT.jar
exit
The issue is that this script will establish an sftp connection and then do nothing. Once I manually type exit in the connection, it tries to execute the put command, but because the sftp session has been closed, it just says put: command not found.
How can I get this to work properly?
Thanks
|
You can change your script to pass commands in a here-document, e.g.,
#!/bin/bash
sftp -oPort=23 [email protected]:/home/kalenpw/TestWorld/plugins <<EOF
put /home/kalenpw/.m2/repository/com/Khalidor/TestPlugin/0.0.1-SNAPSHOT/TestPlugin-0.0.1-SNAPSHOT.jar
exit
EOF
The << marker followed by the name (EOF) tells the script to pass the following lines until the name is found at the beginning of the line (by itself).
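A batch file passed with sftp's -b option is an alternative to the here-document; -b also makes sftp abort on the first failing command, which is useful in a Makefile. This sketch only writes the batch file; the sftp invocation itself is shown as a comment since it needs the real server:

```shell
cat > upload.batch <<'EOF'
put /home/kalenpw/.m2/repository/com/Khalidor/TestPlugin/0.0.1-SNAPSHOT/TestPlugin-0.0.1-SNAPSHOT.jar
EOF

# Then run:
#   sftp -oPort=23 -b upload.batch [email protected]:/home/kalenpw/TestWorld/plugins
```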
| Execute command in sftp connection through script |
1,387,878,201,000 |
I'd like to copy a content of directory 1 to directory 2.
However,
I'd like to only copy files (and not directories) from my directory 1.
How can I do that?
cp dir1/* dir2/*
then I still have the directories issue.
Also, all my files don't have any extension, so *.* won't do the trick.
|
cp dir1/* dir2
cp will not copy directories unless explicitly told to do so (with --recursive for example, see man cp).
Note 1: cp will most likely exit with a non-zero status, but the files will have been copied anyway. This may be an issue when chaining commands based on exit codes: &&, ||, if cp -r dir1/* dir2; then ..., etc. (Thanks to contrebis for their comment on that issue.)
Note 2: cp expects the last parameter to be a single file name or directory. There really should be no wildcard * after the name of the target directory. dir2/* will be expanded by the shell just like dir1/*. Unexpected things will happen:
If dir2 is empty, then depending on your shell and settings:
you may just get an error message, which is the best case scenario.
dir2/* will be taken literally (looking for a file/directory named *), which will probably lead to an error too, unless a file named * actually exists.
dir2/* will just be removed from the command entirely, leaving cp dir1/*. Which, depending on the expansion of dir1/*, may even destroy data:
If dir1/* matches only one file or directory, you will get an error from cp.
If dir1/* matches exactly two files, one will be overwritten by the other (bad).
If dir1/* matches multiple files and the last match is a file, you will get an error message.
If the last match of dir1/* is a directory, all other matches will be copied into it.
If dir2 is not empty, it again depends:
If the last match of dir2/* is a directory, dir1/* and the other matches of dir2/* will be copied into it.
If the last match of dir2/* is a file, you will probably get an error message, unless dir1/* matches only one file.
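If you also want to pick up dot files (which dir1/* misses with default shell settings), or want to be explicit about copying only regular files, a find-based sketch:

```shell
tmp=$(mktemp -d)
mkdir "$tmp/dir1" "$tmp/dir2" "$tmp/dir1/subdir"
touch "$tmp/dir1/a" "$tmp/dir1/.hidden"

# -maxdepth 1 stays in dir1 itself; -type f selects only regular files:
find "$tmp/dir1" -maxdepth 1 -type f -exec cp -t "$tmp/dir2" {} +

ls -A "$tmp/dir2"    # .hidden and a; subdir is not copied
```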
| Copy only regular files from one directory to another |
1,387,878,201,000 |
I drag and drop a folder into another by mistake in FileZilla.
~/big_folder
~/some_other_folder
The folder that got moved is a huge one. It includes hundreds of thousands of files (node_modules, small image files, a lot of folders).
What is so weird is that after I release my mouse, the move is already done. The folder "big_folder" has been moved into "some_other_folder".
~/some_other_folder/big_folder
(there is no big_folder in ~/ after the move)
Then I realized the mistake and tried to move it back, but it fails both in FileZilla and in the terminal.
Then I have to cp -r to copy the files back, because there is server-side code accessing those files in ~/big_folder.
And it takes like forever to wait ...
What should I do?
BTW, here are the output from FileZilla (it's the failure of the moving back):
Status: Renaming '/root/big_folder' to '/root/some_other_folder/big_folder'
Status: /root/big_folder -> /root/some_other_folder/big_folder
Status: Renaming '/root/some_other_folder/big_folder' to '/root/big_folder'
Command: mv "big_folder" "/root/big_folder"
Error: mv /root/some_other_folder/big_folder /root/big_folder: received failure with description 'Failure'
|
If a directory is moved within the same filesystem (the same partition), then all that is needed is to rename the file path of the directory. No data apart from the directory entry for the directory itself has to be altered.
When copying directories, the data for each and every file needs to be duplicated. This involves reading all the source data and writing it at the destination.
Moving a directory between filesystems would involve copying the data to the destination and removing it from the source. This would take about as long time as copying (duplicating) the data within a single filesystem.
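You can see that a same-filesystem mv only renames the directory entry by checking the directory's inode number before and after, here on a scratch directory:

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/some_other_folder/big_folder"
touch "$tmp/some_other_folder/big_folder/file"

ino_before=$(stat -c '%i' "$tmp/some_other_folder/big_folder")
mv "$tmp/some_other_folder/big_folder" "$tmp/big_folder"
ino_after=$(stat -c '%i' "$tmp/big_folder")

# The inode number is unchanged: no data was copied, only the name moved.
echo "$ino_before $ino_after"
```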
If FileZilla successfully renamed the directory from ~/big_folder to ~/some_other_folder/big_folder, then I would revert that using
mv ~/some_other_folder/big_folder ~/big_folder
... after first making sure that there was no directory called ~/big_folder (if there was, the move would put big_folder from some_other_folder into the ~/big_folder directory as a subfolder).
| Why is mv so much faster than cp? How do I recover from an incorrect mv command? |
1,387,878,201,000 |
I'm on a macbook running Lion. In Terminal I'm connected to my schools server with ssh. I navigated to a folder on the server and have a file I want to copy to my local machine, but I don't know what the IP address of my local machine is. How can I get it? I'm in the folder on the server, and I want to copy read.txt onto my local machine's hard drive. I've tried scp ./read.txt [my computer name].local/newRead.txt but it doesn't work.
|
You don't need to know your own host's IP address in order to copy files to it. Simply use scp to copy the file from the remote host:
$ scp [email protected]:path/to/read.txt ~/path/to/newRead.txt
If you want to copy to your local host from your remote host, get your own IP address with ifconfig and issue the following:
$ scp path/to/read.txt [email protected]:path/to/newRead.txt
where 1.2.3.4 is your local IP address. A convenient way to extract a host's IP address is using this function:
ipaddr() { ifconfig eth0 | awk '/inet /{print $2}'; }
where eth0 is your network interface. Stick it in ~/.bash_profile in order to run it as a regular command - ipaddr.
| How can I get the address of my local machine? |
1,387,878,201,000 |
You've got three folders:
folder current, which contains your current files
folder old, which contains an older version of the same files
folder difference, which is just an empty folder
How do you compare old with current and copy the files which are different (or entirely new) in current to difference?
I have searched all around and it seems like a simple thing to tackle, but I can't get it to work in my particular example. Most sources suggested the use of rsync so I ended up with the following command:
rsync -ac --compare-dest=../old/ new/ difference/
What this does however, is copies all the files from new to difference, even those which are the same as in old.
In case it helps (maybe the command is fine and the fault lies elsewhere), this is how I tested this:
I made the three folders.
I made several text files with different contents in old.
I copied the files from old to new.
I changed the contents of some of the files in new and added a few additional files.
I ran the above command and checked the results in difference.
I have been looking for a solution for the past couple of days and I'd really appreciate some help. It doesn't necessarily have to be using rsync, but I'd like to know what I'm doing wrong if possible.
|
I am not sure whether you can do it with any existing Linux commands such as rsync or diff. In my case I had to write my own script in Python, as Python has the filecmp module for file comparison. I have posted the whole script and usage on my personal site - http://linuxfreelancer.com/
Its usage is simple: give it the absolute path of the new directory, the old directory, and the difference directory, in that order.
#!/usr/bin/env python3
import os, sys
import filecmp
import re
import shutil

holderlist = []

def compareme(dir1, dir2):
    dircomp = filecmp.dircmp(dir1, dir2)
    only_in_one = dircomp.left_only
    diff_in_one = dircomp.diff_files
    for x in only_in_one:
        holderlist.append(os.path.abspath(os.path.join(dir1, x)))
    for x in diff_in_one:
        holderlist.append(os.path.abspath(os.path.join(dir1, x)))
    if len(dircomp.common_dirs) > 0:
        for item in dircomp.common_dirs:
            compareme(
                os.path.abspath(os.path.join(dir1, item)),
                os.path.abspath(os.path.join(dir2, item)),
            )
    return holderlist

def main():
    if len(sys.argv) > 3:
        dir1 = sys.argv[1]
        dir2 = sys.argv[2]
        dir3 = sys.argv[3]
    else:
        print("Usage:", sys.argv[0], "currentdir olddir difference")
        sys.exit(1)
    if not dir3.endswith("/"):
        dir3 = dir3 + "/"
    source_files = compareme(dir1, dir2)
    dir1 = os.path.abspath(dir1)
    dir3 = os.path.abspath(dir3)
    destination_files = []
    new_dirs_create = []
    for item in source_files:
        destination_files.append(re.sub(dir1, dir3, item))
    for item in destination_files:
        new_dirs_create.append(os.path.split(item)[0])
    for mydir in set(new_dirs_create):
        if not os.path.exists(mydir):
            os.makedirs(mydir)
    # pair up source and destination paths and copy each file
    copy_pair = zip(source_files, destination_files)
    for item in copy_pair:
        if os.path.isfile(item[0]):
            shutil.copyfile(item[0], item[1])

if __name__ == "__main__":
    main()
| How do you compare two folders and copy the difference to a third folder? |
1,387,878,201,000 |
I am investigating the behavior of a binary on Oracle Linux 9 (XFS filesystem). This binary, when called by a process, creates a directory under /tmp and copies some files to it. This directory gets a randomized name each time the process runs (a keyword + a GUID).
Immediately after, it deletes the directory. I want to access the files contained in this directory before it is deleted, but the whole process ends too fast for any of my commands.
Is there any way I could "intercept" and copy this directory before it is deleted?
|
I found this shell script that uses inotify-tools, and it did exactly what I was looking for (author: https://unix.stackexchange.com/a/265995/536771):
#!/bin/sh
TMP_DIR=/tmp
CLONE_DIR=/tmp/clone
mkdir -p $CLONE_DIR

wait_dir() {
    inotifywait -mr --format='%w%f' -e create "$1" 2>/dev/null | while read file; do
        echo "$file"
        DIR=`dirname "$file"`
        mkdir -p "${CLONE_DIR}/${DIR#$TMP_DIR/}"
        cp -rl "$file" "${CLONE_DIR}/${file#$TMP_DIR/}"
    done
}

trap "trap - TERM && kill -- -$$" INT TERM EXIT

inotifywait -m --format='%w%f' -e create "$TMP_DIR" | while read file; do
    if ! [ -d "$file" ]; then
        continue
    fi
    echo "setting up wait for $file"
    wait_dir "$file" &
done
A simpler solution that worked for me even better than the script:
chattr +a /tmp
This is because the script fails if the binary creates a single file under /tmp instead of a folder. It also fails if the binary creates more than one folder under /tmp.
Edit: an even simpler solution that worked was to run:
cp -rp /source /clone
chattr interfered with what I was checking, and the first script works fine for directories created under /tmp, but not for files created directly under /tmp.
| How can I copy a /tmp/ directory that is created & deleted by a process? |
1,387,878,201,000 |
Some file copying programs like rsync and curl have the ability to resume failed transfers/copies.
Noting that there can be many causes of these failures, in some cases the program can do "cleanup" some cases the program can't.
When these programs resume, they seem to just calculate the size of the file/data that was transferred successfully and start reading the next byte from the source, appending it to the file fragment.
e.g. the size of the file fragment that "made it" to the destination is 1378 bytes, so they just start reading from byte 1379 on the original and adding to the fragment.
My question is, knowing that bytes are made up of bits and not all files have their data segmented in clean byte-sized chunks, how do these programs know that the point they have chosen to start adding data at is correct?
When writing the destination file, is some kind of buffering or "transactions" mechanism, similar to SQL databases, occurring at the program, kernel or filesystem level to ensure that only clean, well-formed bytes make it to the underlying block device?
Or do the programs assume the latest byte could be incomplete, so they delete it on the assumption it's bad, recopy that byte, and start appending from there?
Knowing that not all data is represented as bytes, these guesses seem incorrect.
When these programs "resume", how do they know they are starting at the right place?
|
For clarity's sake - the real mechanics are more complicated, to give even better security - you can imagine the write-to-disk operation like this:
application writes bytes (1)
the kernel (and/or the file system IOSS) buffers them
once the buffer is full, it gets flushed to the file system:
the block is allocated (2)
the block is written (3)
the file and block information is updated (4)
If the process gets interrupted at (1), you don't get anything on the disk, the file is intact and truncated at the previous block. You sent 5000 bytes, only 4096 are on the disk, you restart transfer at offset 4096.
If at (2), nothing happens except in memory. Same as (1).
If at (3), the data is written but nobody remembers about it. You sent 9000 bytes, 4096 got written, 4096 got written and lost, the rest just got lost. Transfer resumes at offset 4096.
If at (4), the data should now have been committed on disk. The next bytes in the stream may be lost. You sent 9000 bytes, 8192 get written, the rest is lost, transfer resumes at offset 8192.
This is a simplified take. For example, each "logical" write in stages 3-4 is not "atomic", but gives rise to another sequence (let's number it #5) whereby the block, subdivided into sub-blocks suitable for the destination device (e.g. hard disk) is sent to the device's host controller, which also has a caching mechanism, and finally stored on the magnetic platter. This sub-sequence is not always completely under the system's control, so having sent data to the hard disk is not a guarantee that it has been actually written and will be readable back.
Several file systems implement journaling, to make sure that the most vulnerable point, (4), is not actually vulnerable, by writing meta-data in, you guessed it, transactions that will work consistently whatever happens in stage (5).
If the system gets reset in the middle of a transaction, it can resume its way to the nearest intact checkpoint. Data written is still lost, same as case (1), but resumption will take care of that. No information actually gets lost.
| How do programs that can resume failed file transfers know where to start appending data? |
1,387,878,201,000 |
I'm not very familiar with all the tricks of grep/find/awk/xargs quite yet.
I have some files matching a particular pattern, say *.xxx. These files are in random places throughout a certain directory. How can I find all such files, and move them to a folder in my home directory on Unix (that may not exist yet)?
|
mkdir ~/dst
find source -name '*.xxx' -exec mv -i -t ~/dst {} +
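For example, with a scratch tree (the -i prompt is omitted here so it runs unattended):

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/source/a/b" "$tmp/dst"
touch "$tmp/source/one.xxx" "$tmp/source/a/b/two.xxx" "$tmp/source/keep.txt"

# find descends into all subdirectories; mv -t gathers the matches into dst:
find "$tmp/source" -name '*.xxx' -exec mv -t "$tmp/dst" {} +

ls "$tmp/dst"    # one.xxx  two.xxx
```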
| How can you move (or copy) all files of a certain type to a directory in Unix? |
1,387,878,201,000 |
What are the consequences for a ext4 filesystem when I terminate a copying cp command by typing Ctrl + C while it is running?
Does the filesystem get corrupted? Is the partition's space occupied by the incomplete copied file still usable after deleting it?
And, most importantly, is terminating a cp process a safe thing to do?
|
This is safe to do, but naturally you may not have finished the copy.
When the cp command is run, it makes syscalls that instruct the kernel to make copies of the file. A syscall, or system call, is a function that an application can use to request a service from the kernel, such as reading or writing data to the disk. The userspace process simply waits for the syscall to finish. If you were to trace the calls from cp ~/hello.txt /mnt, it would look like:
open("/home/user/hello.txt", O_RDONLY) = 3
open("/mnt/hello.txt", O_CREAT|O_WRONLY, 0644) = 4
read(3, "Hello, world!\n", 131072) = 14
write(4, "Hello, world!\n", 14) = 14
close(3) = 0
close(4) = 0
This repeats for each file that is to be copied. No corruption will occur because of the way these syscalls work. When syscalls like these are entered, the fatal signal will only take effect after the syscall has finished, not while it is running (in fact, signals only arrive during a kernelspace-to-userspace context switch). Note that some syscalls, like read(), can be interrupted and return early.
Because of this, forcibly killing the process will only cause it to terminate after the currently running syscall has returned. This means that the kernel, where the filesystem driver lives, is free to finish the operations that it needs to complete to put the filesystem into a sane state. Any I/O of this kind will never be terminated in the middle of operation, so there is no risk of filesystem corruption.
| What happens when I kill 'cp'? Is it safe and does it have any consequences? |
1,387,878,201,000 |
What is the effect of copying a file say fileA.big (900mb) from location B to location C, if during that cp operation, say 35% through the process, fileA.big is appended with new information and grows from 900MB to 930MB?
What is the result of the end copy (i.e. fileA.big at location C)?
What if the copy is about 70% through, and the original file is updated but this time truncated to 400MB (i.e. the progress of the copy is beyond the truncation point), what is the result of the end copy?
Referring to a Linux OS on an ext3/ext4 filesystem. No volume shadow magic etc. Just plain old cp. Curiosity sparked by copying live CouchDB files for backup, but more interested in general scenarios rather than specific use case.
|
If fileA.big is grown during the copy, the copy will include the data that was appended.
If the file is truncated shorter than where the copy currently is, the copy will stop right there, and the destination file will contain whatever was copied up to the time it stopped.
| What happens if a file is modified while you're copying it? |
1,387,878,201,000 |
I have a home file server that I use Ubuntu on.
Recently, one of my drives filled up so I got another and threw it in there.
I have a very large folder, the directory is about 1.7 T in size and contains a decent amount of files.
I used gcp to copy the files from the old drive to the new one, and it seems to have worked fine.
I want to now validate the new directory on the new drive against the original directory on the old drive before I delete the data from the old drive to free up space. I understand that I can do a CRC check to do this.
How, specifically, can I do this?
|
I’d simply use the diff command:
diff -rq --no-dereference /path/to/old/drive/ /path/to/new/drive/
This reads and compares every file in the directory trees and reports any differences. The -r flag compares the directories recursively while the -q flag just prints a message to screen when files differ – as opposed to printing the actual differences (as it does for text files). The --no-dereference flag may be useful if there are symbolic links that differ, e.g., in one directory, a symbolic link, and in its corresponding directory, a copy of the file that was linked to.
If the diff command prints no output, that means the directory trees are indeed identical; you can run echo $? to verify that its exit status is 0, indicating that both sets of files are the same.
I don’t think computing CRCs or checksums is particularly beneficial in this case. It would make more sense if the two sets of files were on different systems and each system could compute the checksums for their own set of files so only the checksums need to be sent over the network. Another common reason for computing checksums is to keep a copy of the checksums for future use.
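That said, if you do want a checksum-based check (for instance, to keep the manifest around for future verification), here is a sketch with md5sum, demonstrated on scratch directories:

```shell
tmp=$(mktemp -d)
mkdir "$tmp/old" "$tmp/new"
echo data > "$tmp/old/f"
cp -p "$tmp/old/f" "$tmp/new/f"

# Record checksums relative to the old tree...
( cd "$tmp/old" && find . -type f -exec md5sum {} + > "$tmp/manifest.md5" )

# ...and verify them against the new tree; --quiet prints only mismatches.
( cd "$tmp/new" && md5sum -c --quiet "$tmp/manifest.md5" ) && echo "all files match"
```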
| Verifying a large directory after copy from one hard drive to another |
1,387,878,201,000 |
Is there a multi-threaded cp command on Linux?
I know how to do this on Windows, but I don't know how this is approached in a Linux environment.
|
As Celada mentioned, there would be no point to using multiple threads of execution since a copy operation doesn't really use the cpu. As ryekayo mentioned, you can run multiple instances of cp so that you end up with multiple concurrent IO streams, but even this is typically counter-productive. If you are copying files from one location to another on the same disk, trying to do more than one at a time will result in the disk wasting time seeking back and forth between each file, which will slow things down. The only time it is really beneficial to copy multiple files at once is if you are, for instance, copying several files from several different slow, removable disks onto your fast hard disk, or vice versa.
| Multithreaded cp on linux? [duplicate] |
1,387,878,201,000 |
I have a large directory containing subdirectories and files that I wish to copy recursively.
Is there any way to tell cp that it should perform the copy operation in order of file size, so that the smallest files get copied first?
|
This does the whole job in one go - in all child directories, all in a single stream without any filename problems. It'll copy from smallest to largest every file you have. You will need to mkdir ${DESTINATION} if it doesn't already exist.
find . ! -type d -print0 |
du -b0 --files0-from=/dev/stdin |
sort -zk1,1n |
sed -zn 's/^[^0-9]*[0-9]*[^.]*//p' |
tar --hard-dereference --null -T /dev/stdin -cf - |
tar -C"${DESTINATION}" --same-order -xvf -
You know what, though? What this doesn't do is empty child directories. I could do some redirection over that pipeline, but it's just a race condition waiting to happen. Simplest is probably best. So just do this afterwards:
find . -type d -printf 'mkdir -p "'"${DESTINATION}"'/%p"\n' |
. /dev/stdin
Or, since Gilles makes a very good point in his answer about preserving directory permissions, recreate each directory with its original mode instead (find prints parent directories before their children, so each mkdir -m gets the right mode; note this swaps the original cp-based line, which would fail on directories without -r, for mkdir):
find . -type d -printf 'mkdir -p -m%m "'"${DESTINATION}"'/%p"\n' |
. /dev/stdin
| copy smallest files first? |
1,387,878,201,000 |
I want to cp aaa/deep/sea/blob.psd to bbb/deep/sea/blob.psd
How do I do the copy if the deep and sea directories don't exist under bbb so that the copy both creates the directories that are needed and copies the file?
Right now I get
No such file or directory as deep and sea don't exist.
I looked thru the man help pages and other questions but nothing jumps out at me.
The closest I've got is using rcp for the directory:
rcp -r aaa/deep/sea/ bbb/deep/sea/
though this copies the whole directory and its contents, and I just want the one file. Trying to do that, however, gave: cp: cannot create regular file 'bbb/deep/sea/blob.psd': No such file or directory
|
You can use a function like the following for this situation:
copy_wdir() { mkdir -p -- "$(dirname -- "$2")" && cp -- "$1" "$2" ; }
and use it as
copy_wdir aaa/deep/sea/blob.psd bbb/deep/sea/blob.psd
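For example, on a scratch tree:

```shell
# dirname extracts the target's directory part; mkdir -p creates it as needed.
copy_wdir() { mkdir -p -- "$(dirname -- "$2")" && cp -- "$1" "$2" ; }

tmp=$(mktemp -d)
mkdir -p "$tmp/aaa/deep/sea"
echo psd-data > "$tmp/aaa/deep/sea/blob.psd"

copy_wdir "$tmp/aaa/deep/sea/blob.psd" "$tmp/bbb/deep/sea/blob.psd"
ls "$tmp/bbb/deep/sea"    # blob.psd
```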
By the way, GNU cp has a --parents option. It's really close to what you want, but not exactly:
it will also create the aaa directory, which it seems you don't need. However, you can first cd into aaa and copy like this:
cd aaa && cp --parents deep/sea/blob.psd ../bbb/
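As a quick sanity check (using throwaway paths under mktemp purely for illustration), the helper can be exercised like this:

```shell
# Define the helper from the answer: create the target's parent
# directory (if missing), then copy the file into place.
copy_wdir() { mkdir -p -- "$(dirname -- "$2")" && cp -- "$1" "$2" ; }

# Build a small throwaway tree to try it on.
src=$(mktemp -d)
dst=$(mktemp -d)
echo hello > "$src/blob.psd"

# Neither deep/ nor sea/ exists under $dst yet; copy_wdir creates them.
copy_wdir "$src/blob.psd" "$dst/deep/sea/blob.psd"

ls -l "$dst/deep/sea/blob.psd"
```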
| How can I copy a file and create the target directories at the same time? |
1,387,878,201,000 |
I want to rsync only certain file types (e.g. .py) and I want to exclude files in some directories (e.g. venv).
This is what I have tried:
rsync -avz --include='*/' --exclude='venv/' --include='*.py' --exclude='*' /tmp/src/ /tmp/dest/
But it doesn't work.
What am I missing?
I also followed the answer to this question but it didn't help.
|
venv/ needs to be excluded before */ is included:
rsync -avz --exclude='venv/' --include='*/' --include='*.py' --exclude='*' /tmp/src/ /tmp/dest/
The subtlety is that rsync processes rules in order and the first matching rule wins. So, if --include='*/' is before --exclude='venv/', then the directory venv/ is included by --include='*/' and the exclude rule is never consulted.
Could we simplify this?
Why do we need --include='*/' and --exclude='*'? Why isn't --exclude=venv/ --include='*.py' sufficient?
The default is to include files/directories. So, consider:
rsync -avz --exclude='venv/' --include='*.py' source target
This would include everything except files or directories under venv/. You, however, only want .py files. That means that we have to explicitly exclude other files with --exclude='*'.
--exclude='*' excludes both files and directories. So, if we specify --exclude='*', then all directories would be excluded and only the .py files in the root directory would be found. .py files in subdirectories would never be found because rsync does not look into directories that are excluded. Thus, if we have --exclude='*', we need to precede it with --include='*/' to ensure that the contents of all directories are explored.
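To see the ordering rule in action, here is a small self-contained experiment (the mktemp paths are illustrative only); with the exclude first, venv/ is skipped while .py files elsewhere still arrive:

```shell
# Build a sample tree: a .py at top level, one in a subdirectory,
# one inside venv/, plus a non-.py file.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/pkg" "$src/venv"
touch "$src/main.py" "$src/pkg/util.py" "$src/venv/skip.py" "$src/notes.txt"

# Exclude venv/ before including */ so the exclude rule actually wins.
rsync -a --exclude='venv/' --include='*/' --include='*.py' --exclude='*' \
      "$src/" "$dst/"

find "$dst" -type f   # expect main.py and pkg/util.py only
```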
| Rsync, include only certain files types excluding some directories |
1,387,878,201,000 |
My computer has one 500GB drive.
I want to move 400GB of data from /unencrypted to /encrypted.
Both directories are on the same partition, but /encrypted is handled by ecryptfs, so mv /uncrypted/* /encrypted would:
Copy all files to destination
Then remove them from source
...which I can't afford, because it requires 800GB.
If files were moved one-by-one, there would be no problem (the ecryptfs zone is dynamic).
Is there an mv option or another tool, that moves a directory file-by-file?
There is a huge number of files, so ARG_MAX might be a problem for script-based solutions.
|
If you have rsync (remove --dry-run to do it for real):
rsync --dry-run --remove-source-files -avHAX /unencrypted/ /encrypted
Otherwise, using bash4+ and GNU stat:
#!/bin/bash
set -e
shopt -s nullglob globstar
for from in /unencrypted/**/*; do
to="${from/\/un//}"
if [[ -d "$from" ]]; then
echo mkdir -p "$to"
echo chmod "$(stat -c %a "$from")" "$to"
echo chown "$(stat -c %u:%g "$from")" "$to"
else
echo cp -a "$from" "$to" && echo rm "$from"
fi
done
echo rm -r /unencrypted
To run it for real, remove echo from each command.
| How to move a directory, file by file? (instead of "copy then remove") |
1,387,878,201,000 |
I'm playing with btrfs, which allows cp --reflink to copy-on-write. Other programs, such as lxc-clone, may use this feature as well. My question is, how to tell if a file is a CoW of another? Like for hardlink, I can tell from the inode number.
|
Good question. Looks like there aren't currently any easy high-level ways to tell.
One problem is that a file may only share part of the data via Copy-on-Write. This is called a physical extent, and some or all of the physical extents may be shared between CoW files.
There is nothing analogous to an inode which, when compared between files, would tell you that the files share the same physical extents. (Edit: see my other answer).
The low level answer is that you can ask the kernel which physical extents are used for the file using the FS_IOC_FIEMAP ioctl, which is documented in Documentation/filesystems/fiemap.txt. In principle, if all of the physical extents are the same, then the file must be sharing the same underlying storage.
Few things implement a way to look at this information at a higher level. I found some go code here. Apparently the filefrag utility is supposed to show the extents with -v. In addition, btrfs-debug-tree shows this information.
I would exercise caution however, since these things may have had little use in the wild for this purpose, you could find bugs giving you wrong answers, so beware relying on this data for deciding on operations which could cause data corruption.
Some related questions:
How to find out if a file on btrfs is copy-on-write?
How to find data copies of a given file in Btrfs filesystem?
| How to verify a file copy is reflink/CoW? |
1,387,878,201,000 |
I've got two issues with my script that copies files and adds a timestamp to the name.
cp -ra /home/bpacheco/Test1 /home/bpacheco/Test2-$(date +"%m-%d-%y-%T")
The above adds Test2 as the filename, but I want it to keep the original source file's file name which in this example is named Test.
cp -ra /home/bpacheco/Test1 /home/bpacheco/Test2-$(date +"%m-%d-%y-%r")
The other issue is when I add the %r as the timestamp code I get the error stating that target "PM" is not a directory. I'm trying to get the timestamp as 12-hour clock time.
|
One of your problems is that you left out the double quotes around the command substitution, so the output from the date command was split at spaces. See Why does my shell script choke on whitespace or other special characters? This is a valid command:
cp -a /home/bpacheco/Test1 "/home/bpacheco/Test2-$(date +"%m-%d-%y-%r")"
If you want to append to the original file name, you need to have that in a variable.
source=/home/bpacheco/Test1
cp -a -- "$source" "$source-$(date +"%m-%d-%y-%r")"
If you're using bash, you can use brace expansion instead.
cp -a /home/bpacheco/Test1{,"-$(date +"%m-%d-%y-%r")"}
If you want to copy the file to a different directory, and append the timestamp to the original file name, you can do it this way — ${source##*/} expands to the value of source without the part up to the last / (it removes the longest prefix matching the pattern */):
source=/home/bpacheco/Test1
cp -a -- "$source" "/destination/directory/${source##*/}-$(date +"%m-%d-%y-%r")"
If Test1 is a directory, it's copied recursively, and the files inside the directory keep their name: only the toplevel directory gets a timestamp appended (e.g. Test1/foo is copied to Test1-05-10-15-07:19:42 PM). If you want to append a timestamp to all the file names, that's a different problem.
Your choice of timestamp format is a bad idea: it's hard to read for humans and hard to sort. You should use a format that's easier to read and that can be sorted easily, i.e. with parts in decreasing order of importance: year, month, day, hour, minute, second, and with a separation between the date part and the time part.
cp -a /home/bpacheco/Test1 "/home/bpacheco/Test2-$(date +"%Y%m%d-%H%M%S")"
cp -a /home/bpacheco/Test1 "/home/bpacheco/Test2-$(date +"%Y-%m-%dT%H%M%S%:z")"
| Copy a file and append a timestamp |
1,387,878,201,000 |
Instead of using the following command:
cp {source file} {dest file}
I want to be able to copy a file into the clipboard, and paste it somewhere else, in another directory. something like this:
/usr/local/dir1# cp {source file}
/usr/local/dir1# cd /usr/local/dir2
/usr/local/dir2# paste
Is it possible?
|
I think you should do something like the GUI applications do. My idea is to write two functions, Copy and Paste, where Copy writes the paths of the files to be copied to a temporary file and Paste reads those paths and simply calls the cp command. My implementation (to be put in the .bashrc file) is as below:
function Copy {
touch ~/.clipfiles
for i in "$@"; do
if [[ $i != /* ]]; then i=$PWD/$i; fi
i=${i//\\/\\\\}; i=${i//$'\n'/$'\\\n'}
printf '%s\n' "$i"
done >> ~/.clipfiles
}
function Paste {
while IFS= read src; do
cp -Rdp "$src" .
done < ~/.clipfiles
rm ~/.clipfiles
}
Better scripts could be written to implement this idea; I tested my own and it works very well for files and folders (I don't know how xclip could handle copying folders!)
For example:
/usr/local/dir1# Copy a.txt *.cpp
/usr/local/dir1# cd /usr/local/dir2
/usr/local/dir2# Paste
/usr/local/dir1# Copy *.h *.cpp b.txt subdir1
/usr/local/dir1# cd /usr/local/dir2
/usr/local/dir2# Paste
/usr/local/dir1# Copy a.txt b.txt
/usr/local/dir1# cd /usr/local/dir2
/usr/local/dir2# Copy c.txt d.txt
/usr/local/dir2# cd /usr/local/dir3
/usr/local/dir3# Paste
| Copy and paste a file/directory from command line |
1,387,878,201,000 |
I have no experience with btrfs, but it's advertised to be able to
de-duplicate files.
In my application, I'd need to duplicate whole directory trees.
From what I learned, btrfs only de-duplicates in some post scan, not
immediately. Even just using cp doesn't seem to trigger any
de-duplication (at least, df shows an increased disk usage in the
size of the copied files).
Can I avoid moving data around altogether and tell btrfs directly to
duplicate a file at another location, essentially just cloning its
metadata?
In essence, similar to a hardlink, but with independent metadata
(permissions, mod. times, ...).
|
There are two options:
cp --reflink=always
cp --reflink=auto
The second is almost always preferable to the first. Using auto means it will fallback to doing a true copy if the file system doesn't support reflinking (for instance, ext4 or copying to an NFS share). With the first option, I'm pretty sure it will outright fail and stop copying.
If you are using this as part of a script that needs to be robust in the face of non-ideal conditions, auto will serve you better.
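A minimal way to see the auto fallback (the throwaway paths are illustrative; on ext4 or tmpfs GNU cp silently degrades to a normal copy, on btrfs/XFS it reflinks):

```shell
src=$(mktemp); dst=$(mktemp -u)   # throwaway paths for illustration
echo 'some data' > "$src"

# Succeeds everywhere: reflink where supported, plain copy otherwise.
cp --reflink=auto "$src" "$dst"
cmp "$src" "$dst" && echo 'copies match'
```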
| How to duplicate a file without copying its data with btrfs? |
1,387,878,201,000 |
I want to copy the attributes (ownership, group, ACL, extended attributes, etc.) of one directory to another but not the directory contents itself.
This does not work:
cp -v --attributes-only A B
cp: omitting directory `A'
Note: It does not have to be cp.
|
After quite a bit of trial and error on the commandline, I think I've found the answer. But it isn't a cp-related answer.
rsync -ptgo -A -X -d --no-recursive --exclude=* first-dir/ second-dir
This does:
-p, --perms preserve permissions
-t, --times preserve modification times
-o, --owner preserve owner (super-user only)
-g, --group preserve group
-d, --dirs transfer directories without recursing
-A, --acls preserve ACLs (implies --perms)
-X, --xattrs preserve extended attributes
--no-recursive disables recursion
For reference
--no-OPTION turn off an implied OPTION (e.g. --no-D)
-r, --recursive recurse into directories
| How to clone/copy all file/directory attributes onto different file/directory? |
1,513,855,139,000 |
I have to move some files from one filesystem to another under Ubuntu. However, it is very important that the files never exist as partial or incomplete files at the destination, at least not under the correct file name.
So far, my only solution is to write a script that takes each file, copies it to a temporary name at the destination, then renames it (which I believe should be atomic) at the destination to the original filename and finally deletes the originating file.
However, writing and debugging a script seems like it is overkill for this task. Is there a way or tool that already does this natively?
|
rsync copies to temporary filenames (e.g. see Rsync temporary file extension and rsync - does it create a temp file during transfer?) unless you use the --inplace option. It renames them only after the file has been transferred successfully. rsync also deletes any destination files that were only partially transferred (e.g. due to disk full or other error).
There is also a --remove-source-files option which deletes the source file(s) after they've been successfully transferred. See the rsync man page for more details.
Putting that all together, you could use something like:
rsync -ax --remove-source-files source/ target/
This option is particularly useful for tasks like moving files out of an "incoming" queue or similar to the directory where they will be processed.
Alternatively, if this is a once-off mirror, maybe just use rsync without the --remove-source-files option. You can delete the source files later if you want/need to.
| Approximating atomic move across file systems? |
1,513,855,139,000 |
I'm writing a script to publish a webapp. While copying files, I must replace a placeholder current_date in a file with the current date.
I would start with something like this to define the date string
date=`date +%Y%m%d`
The copy and replace part is where I don't know how to start.
|
Use sed. Here is an example:
sed "s/current_date/`date +%Y%m%d`/" infile > copyfile
| Copy file while replacing text in it |
1,513,855,139,000 |
I want to copy and rename multiple c source files in a directory.
I can copy like this:
$ cp *.c $OTHERDIR
But I want to give a prefix to all the file names:
file.c --> old#file.c
How can I do this in 1 step?
|
a for loop:
for f in *.c; do cp -- "$f" "$OTHERDIR/old#$f"; done
I often add the -v option to cp to allow me to watch the progress.
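A quick self-contained run (the directory names are made up for the demo):

```shell
srcdir=$(mktemp -d); OTHERDIR=$(mktemp -d)
touch "$srcdir/file.c" "$srcdir/main.c"
cd "$srcdir"

# Copy every .c file, prefixing the destination name with "old#".
for f in *.c; do cp -v -- "$f" "$OTHERDIR/old#$f"; done

ls "$OTHERDIR"   # old#file.c  old#main.c
```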
| How to copy and add prefix to file names in one step? |
1,513,855,139,000 |
I am totally new to Unix. I am writting a script which will copy files from a Windows shared folder to Unix.
In Windows, when I type \\Servername.com\testfolder in Run command I am able to see testfolder. The directory testfolder is a shared folder through the whole network.
Now I want to copy some files from that testfolder to a Unix machine. Which command should I use? I know the IP Address of server but I don't know the username.
|
From your UNIX server you need to mount the Windows share using the procedure laid out in this link.
Basically you create a directory on your UNIX machine that is called the mount point. You then use the mount command to mount the Windows share on that mount point. Then when you go to the directory that you have created you see the files that are in the Windows share.
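As a rough sketch (the mount point, username and password are placeholders you must adapt; the share is the //Servername.com/testfolder from the question; requires the cifs-utils package and root):

```shell
# Create the mount point, then mount the Windows share over CIFS/SMB.
mkdir -p /mnt/winshare
mount -t cifs //Servername.com/testfolder /mnt/winshare \
      -o username=WINUSER,password=WINPASS

# Files on the share are now ordinary files under /mnt/winshare:
cp /mnt/winshare/somefile /home/youruser/
```

This is a non-runnable configuration sketch, since it needs a reachable Windows server and valid credentials.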
| Copy file from Windows shared folder to Unix |
1,513,855,139,000 |
consider the following example:
/source
/source/folder1
/source/folder2
/source/folder3
/destination
/destination/folder2
/destination/folder3
/destination/folder3/mytestfolder1
/destination/folder4
/destination/folder4/mytestfolder1
/destination/folder4/mytestfolder2
I want to sync source to destination but "/destination/folder4/mytestfolder1" must be ignored.
I tried using the exclude parameter
rsync -av --delete --progress --exclude "mytestfolder1" /source/ /destination/
but this ignores all folders named "mytestfolder1".
When I supplied the full path, nothing is ignored since it seems rsync thinks the path is in the source and not on the destination.
rsync -av --delete --progress --exclude "/destination/folder4/mytestfolder1" /source/ /destination/
rsync -av --delete --progress --exclude "destination/folder4/mytestfolder1" /source/ /destination/
I've searched the net but didn't find anything helpful.
Thanks for the help. :)
|
If you use an absolute path in a filter (include/exclude), it's interpreted starting from the root of the synchronization. You aren't excluding a directory in the source, or a excluding a directory in the destination, you're excluding a directory in the tree to synchronize.
Thus:
rsync -av --delete --progress --exclude "/folder4/mytestfolder1" /source/ /destination/
| how do I exclude a specific folder on the destination when using rsync? |
1,513,855,139,000 |
How would I copy (archive style where date isn't changed) all the files in a backup directory to the user's directory while renaming each file to remove the random string portion from the name (i.e., -2b0fd460_1426b77b1ee_-7b8e)?
cp from:
/backup/path/data/Erp.2014.02.16_16.57.03-2b0fd460_1426b77b1ee_-7b8e.etf
to:
/home/user/data/Erp.2014.02.16_16.57.03.etf
Each file will always start with "Erp." followed by the date-time stamp string followed by the random string and then the extension ".etf". I want to keep all name elements including the date-time stamp. I just want to remove the random string.
The random string allows multiple backups of the same file. However, in this case, I just ran fdupes and there are no duplicates. So I can simply restore all the files, removing the random string.
I'm looking for a one-line bash command to do it.
If that won't work, I could do it in two or more steps. I normally use KRename, but in this case I need to do it in bash. (I'm working remotely.)
|
pax can do this all at once. You could do:
cd /backup/path/data && pax -wrs'/-.*$/.etf/' Erp*etf /home/user/data
pax preserves times by default, but you can add -pe to preserve everything (best done as root) or -pp to preserve permissions, e.g.:
cd /backup/path/data && pax -wrs'/-.*$/.etf/' -pe Erp*etf /home/user/data
Otherwise (pax isn't usually available by default), surely it is better to do a copy then a rename:
cp -a /backup/path/data/Erp*.etf /home/user/data
rename 's/-.*$/.etf/' /home/user/data/Erp*.etf
This way there is not a different process started for each file.
| how to rename files while copying? |
1,513,855,139,000 |
I have a problem with the timestamps of files copied from my PC or laptop to USB drives: the last modification time of the original file and that of the copied file are different. Therefore, synchronizing files between my PC and my USB drive is quite cumbersome.
A step by step description
I copy an arbitrary file from my PC/laptop to a USB drive using the GUI or with the command
cp -a file.txt /media/gabor/CORSAIR/
I check the last modification time of the original file:
$ ls -l --time-style=full-iso file.txt
-rw-rw-r-- 1 gabor gabor 0 2018-09-22 15:09:23.317098281 +0200 file.txt
I check the last modification time of the copied file:
$ ls -l --time-style=full-iso /media/gabor/CORSAIR/file.txt
-rw-r--r-- 1 gabor gabor 0 2018-09-22 15:09:23.000000000 +0200 /media/gabor/CORSAIR/file.txt
As you can see, the seconds in the last modification time of the copied file are truncated to zero decimal digits. However, if I enter the command
if ! [ file.txt -nt /media/gabor/CORSAIR/file.txt ] && ! [ file.txt -ot /media/gabor/CORSAIR/file.txt ]; then echo "The last modification times are equal."; fi
I get the output The last modification times are equal.
The situation changes if I unmount and remount the USB drive and I execute the last two commands again:
$ ls -l --time-style=full-iso /media/gabor/CORSAIR/file.txt
-rw-r--r-- 1 gabor gabor 0 2018-09-22 15:09:22.000000000 +0200 /media/gabor/CORSAIR/file.txt
$ if [ file.txt -nt /media/gabor/CORSAIR/file.txt ]; then echo "The file is newer on the PC."; fi
The file is newer on the PC.
So after remount, the last modification time of the copied file is further reduced by one second. Further unmounting and remounting, however, doesn't affect the last modification time any more. Besides, the test on the files now shows that the file on the PC is newer (although it isn't).
The situation is further complicated by the fact that the last modification time of files is shown differently on my PC and on my laptop, the difference being exactly 2 hours, although the date and time setting is the same on my PC and on my laptop!
Further information
Both my PC and laptop show the behaviour, described above. I have Ubuntu 14.04.5 (trusty) on my PC and Ubuntu 16.04.2 (xenial) on my laptop.
My USB drives have vfat file system. The output of mount | grep CORSAIR on my PC is
/dev/sdb1 on /media/gabor/CORSAIR type vfat (rw,nosuid,nodev,uid=1000,gid=1000,shortname=mixed,dmask=0077,utf8=1,showexec,flush,uhelper=udisks2)
The output of mount | grep CORSAIR on my laptop is
/dev/sdb1 on /media/gabor/CORSAIR type vfat (rw,nosuid,nodev,relatime,uid=1000,gid=1000,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,showexec,utf8,flush,errors=remount-ro,uhelper=udisks2)
My other USB drives show the same behaviour.
Question
Can the difference in the last modification times be eliminated somehow? For example, using other parameters at mounting/unmounting? Or is it a bug in Ubuntu?
I would like to achieve that the timestamps of the original and copied files are exactly the same, so that synchronization can be done more efficiently. Also, I would like to keep the vfat file system on my USB drives, so that I can use them under Windows, too.
|
The problem with the timestamp seconds changing comes from the fact that a VFAT (yes, even FAT32) filesystem stores the modification time with only 2-second resolution.
Apparently, as long as the filesystem is mounted, the filesystem driver caches timestamps with 1-second resolution (probably to satisfy POSIX requirements), but once the filesystem is unmounted, the caches are cleared and you'll see what is actually recorded in the on-disk directory entry.
The two-hour difference between the PC and the laptop are probably caused by different timezone settings and/or different default mount options for VFAT filesystem. (I'm guessing that you're located in a timezone whose UTC offset is currently 2 hours, either positive or negative.)
Internally, Linux uses UTC timestamps on Unix-style filesystems; but on VFAT filesystems, the (current) default is to use local time on VFAT filesystem timestamps, because that is what MS-DOS did and Windows still does. But there are two mount options that can affect this: you can specify the mount option tz=UTC to use UTC-based timestamps on VFAT filesystems, or you can use time_offset=<minutes> to explicitly specify the timezone offset to be used with this particular filesystem.
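For example (the device name and mount point are placeholders), a manual mount or fstab line using UTC-based timestamps could look like:

```shell
# Manual mount with UTC-based timestamps on the VFAT stick:
mount -t vfat -o uid=1000,gid=1000,tz=UTC /dev/sdb1 /media/usb

# Or the equivalent /etc/fstab entry:
# /dev/sdb1  /media/usb  vfat  uid=1000,gid=1000,tz=UTC  0  0
```

This is a configuration sketch only; it needs root and an actual VFAT device to run.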
It might be that the default mount options for VFAT have changed between Ubuntu 14.04 and 16.04, either within the kernel or the udisks removable-media helper service, resulting in the two-hour difference you see.
| Timestamps of files copied to USB drive |
1,513,855,139,000 |
I have a text file containing a list of directories with its absolute path
$ cat DirectoriesToCopy.txt
/data/Dir1
/data/Dir2
I want to use rsync to copy all these directories preserving its absolute path to another location.
I tried the following rsync command, but it doesn't work
rsync -avr --include-from=DirectoriesToCopy.txt --exclude='*/' --exclude='/*' / /media/MyDestination/
What is going wrong here?
|
Use the following command:
rsync -av --include-from=DirectoriesToCopy.txt --include /data/ --exclude='/data/*' --exclude='/*/' / /media/MyDestination/
You need to include /data/ explicitly, you could also have added that to the list in the file. Then exclude all other directories (order is important with includes/excludes).
Note that your usage of -r was redundant as that's included in -a.
EDIT:
You could also accomplish the same result with:
rsync -av --relative /data/Dir1 /data/Dir2 /media/MyDestination/
It's not rsync that's forcing you to do difficult things just to copy a couple of directories, it just gives you multiple ways of doing the same thing; in some cases going the include/exclude way may be more suited, here I'd do the --relative thing above (without --relative you'd end up with /media/MyDestination/Dir1 and /media/MyDestination/Dir2, with the --relative the whole source path is copied to the destination).
| rsync a list of directories with absolute path in text file |
1,513,855,139,000 |
Besides the explanations from the man pages and --help information, in which way do the commands dd, cp and rsync differ when used to copy files? In which context is each of these superior to the others, for some definition of 'superior', so that it should get preference of use?
|
They are completely different animals, each better suited to different file or device manipulation cases:
dd
This command was created as a "copy and convert" utility, originally intended for converting files between the ASCII, little-endian, byte-stream world of DEC computers and the EBCDIC, big-endian world of IBM mainframes, first appearing in Unix Version 5. It became the de facto command-line utility for manipulating everything that can be mapped as a file inside your Unix-like operating system (cloning a disk, backing up an MBR, cloning a disk to a file, copying some blocks of a device file, writing an image to a USB stick...), and piped to other commands, the sky is the limit. One alternative to this software is the dcfldd command.
dd related stuff:
Is there a way to determine the optimal value for the bs parameter to dd?
11 Awesome DD Commands
6 Examples to Backup Linux Using dd Command (Including Disk to Disk)
What does dd stand for? - Could be "Dataset Definition" or "Copy and Convert"; it was renamed to dd only because cc was reserved for the C compiler. It's up to you to pick one of the naming theories ;)
cp
Makes copies of files and directories. This is a "higher" level of abstraction, where you can copy directories recursively without caring about block size, file conversion, etc. It is a better tool for "1-to-many" cases of file copying, with control over ownership, symbolic-link following, recursion, and verbosity. However, it has its limitations, like dealing with file changes and remote copies, which are better handled by rsync.
cp related stuff:
Is it possible to see cp speed and percent copied? - Specific case there rsync is a better tool to local copy files ;)
Difference Between cp -r and cp -R (copy command)
rsync
It can copy files within the same computer, but its features are more useful in remote-copy scenarios. Some of those features are ownership handling/manipulation, easier "exclude" expressions for finer-grained copies, file checksums to see whether a file was already copied, deleting source files during or after the copy, the use of a "transparent shell" by invoking the desired protocol via a specific URI (ssh://, rsync://...), pipelining, and other things that create an optimized environment for remote mirroring.
rsync related stuff:
rsync all files of remote machine over SSH without root user?
Rsync filter: copying one pattern only
Rsync - 10 Practical Examples
How to Backup Linux? 15 rsync Command Examples
Further Reading:
dd vs cat — is dd still relevant these days?
| What is the difference between `dd`, `cp` and `rsync`? [closed] |
1,513,855,139,000 |
I recently did a clean install of Linux Mint 17.3 with Cinnamon on my machine.
Before the clean install, if I right clicked on a file or folder in nemo, the menu would have a 'create shortcut' option. Now after the clean install, that option isn't there. I've gone through the nemo preferences and I can't find any option to enable it.
After some searching I found out a keyboard shortcut for making file shortcuts in nemo (ctrl+shift+click and drag), but I'd much rather the more intuitive (and memorable) right click menu option.
Similarly, other right click options that are now missing are
copy to
other pane
home
Downloads
etc
move to
other pane
home
Downloads
etc
How can I get those options back as well?
I've tried searching through the Nemo preferences, but to no avail.
|
When you right click the file or folder in Nemo, there will be a + sign at the top. Clicking that will expand the menu and give you the options you want.
For what it's worth, I found this functionality at this link:
https://forums.linuxmint.com/viewtopic.php?t=212256
| Right click menu in Nemo missing 'create shortcut' and 'copy/move to' |
1,513,855,139,000 |
I have files created in my home directory with only user read permission (r-- --- ---). I want to copy such a file to another directory, /etc/test/, which has the folder permission 744 (rwx r-- r--). I need the file I am copying to inherit the permissions of the folder it is copied into, because so far when I copy it, the file's permissions stay the same (r-- --- ---). I have tried the setfacl command, but it did not work. Please help.
PS. I can't just chmod -R /etc/test/ because there are many files which will be copied into this folder over time, and I don't want to run the chmod command every time a file is copied over.
|
Permissions are generally not propagated by the directory that files are being copied into, rather new permissions are controlled by the user's umask. However when you copy a file from one location to another it's a bit of a special case where the user's umask is essentially ignored and the existing permissions on the file are preserved. Understanding this concept is the key to getting what you want.
So to copy a file but "drop" its current permissions you can tell cp to "not preserve" using the --no-preserve=all switch.
Example
Say I have the following file like you.
$ mkdir -m 744 somedir
$ touch afile
$ chmod 400 afile
$ ll
total 0
-r--------. 1 saml saml 0 Feb 14 15:20 afile
And as you've confirmed if we just blindly copy it using cp we get this:
$ cp afile somedir/
$ ls -l somedir/
total 0
-r--------. 1 saml saml 0 Feb 14 15:20 afile
Now let's repeat this but this time tell cp to "drop permissions":
$ rm -f somedir/afile
$ cp --no-preserve=all afile somedir/
$ ls -l somedir/
total 0
-rw-rw-r--. 1 saml saml 0 Feb 14 15:21 afile
So the copied file now has its permissions set to 664, where did it get those?
$ umask
0002
If I changed my umask to something else we can repeat this test a 3rd time and see the effects that umask has on the un-preserved cp:
$ umask 037
$ rm somedir/afile
$ cp --no-preserve=all afile somedir/
$ ls -l somedir/
total 0
-rw-r-----. 1 saml saml 0 Feb 14 15:29 afile
Notice the permissions are no longer 664, but are 640? That was dictated by the umask. It was telling any commands that create a file to disable the lower 5 bits in the permissions ... these guys: (----wxrwx).
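The arithmetic can be checked directly on temporary files (new files start from mode 666, minus whatever the umask masks off):

```shell
d=$(mktemp -d); cd "$d"

# File creation starts from mode 666; the umask bits are cleared.
umask 0002; touch f1; stat -c '%a %n' f1   # 664 f1
umask 037;  touch f2; stat -c '%a %n' f2   # 640 f2
```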
| File inheriting permission of directory it is copied in? |
1,513,855,139,000 |
We have multiple deployment of an application on servers such as app00, app01 and so on. I need to copy a single log file from all these servers onto my local mac so I can perform some grepping and cutting.
I used csshX for viewing this file but I cannot find an equivalent for scp. I basically want two things:
Ability to connect to n numbers of such servers and copy the file
Avoid naming conflicts locally perhaps by prefixing the log file with the server hostname
How do I do this?
|
This is trivial to do with a little script. For example:
for server in app0 app1 app4 app5 appN; do
scp user@$server:/path/to/log/file /local/path/to/"$server"_file
done
The above will copy the file from each of the servers sequentially and name it SERVERNAME_file. So, the file from app0 will be app0_file etc. You can obviously change the names to whatever you would like.
| How do I copy a file from multiple servers to my local system? |
1,513,855,139,000 |
To my understanding, for manipulating files there is only the sys_write syscall in Linux, which overwrites the file content (or extends it, if at the end).
Why are there no syscalls for inserting or deleting content in files in Linux?
As all current file systems do not require the file to be stored in a continuous memory block, an efficient implementation should be possible.
(The files would get fragmented.)
With file system features as "copy on write" or "transparent file compression", the current way of inserting content seems to be very inefficient.
|
On recent Linux systems that is actually possible, but with block (4096 most of the time), not byte granularity, and only on some filesystems (ext4 and xfs).
Quoting from the fallocate(2) manpage:
int fallocate(int fd, int mode, off_t offset, off_t len);
[...]
Collapsing file space
Specifying the FALLOC_FL_COLLAPSE_RANGE flag (available since Linux
3.15) in mode removes a byte range from a file, without leaving a hole.
The byte range to be collapsed starts at offset and continues for len
bytes. At the completion of the operation, the contents of the file
starting at the location offset+len will be appended at the location
offset, and the file will be len bytes smaller.
[...]
Increasing file space
Specifying the FALLOC_FL_INSERT_RANGE flag (available since Linux 4.1)
in mode increases the file space by inserting a hole within the file
size without overwriting any existing data. The hole will start at
offset and continue for len bytes. When inserting the hole inside
file, the contents of the file starting at offset will be shifted
upward (i.e., to a higher file offset) by len bytes. Inserting a hole
inside a file increases the file size by len bytes.
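The same flags are exposed by the util-linux fallocate(1) command-line tool, so a collapse can be sketched from the shell. This is only a sketch with a made-up file name; it assumes util-linux's fallocate, an ext4 or xfs filesystem, and a 4096-byte block size. On other filesystems the call fails with "Operation not supported".

```shell
# build a three-block file, then remove its middle block in place
dd if=/dev/zero of=demo.bin bs=4096 count=3 2>/dev/null
fallocate --collapse-range --offset 4096 --length 4096 demo.bin
stat -c %s demo.bin    # on ext4/xfs: 8192, i.e. one block shorter
```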
| Why are there no file insertion syscalls |
1,513,855,139,000 |
I have a directory on an nfs mount, which on the server is at /home/myname/.rubies
Root cannot access this directory:
[mitchell.usher@server ~]$ stat /home/mitchell.usher/.rubies
File: `/home/mitchell.usher/.rubies'
Size: 4096 Blocks: 8 IO Block: 32768 directory
Device: 15h/21d Inode: 245910 Links: 3
Access: (0755/drwxr-xr-x) Uid: ( 970/mitchell.usher) Gid: ( 100/ users)
Access: 2016-08-22 15:06:15.000000000 +0000
Modify: 2016-08-22 14:55:00.000000000 +0000
Change: 2016-08-22 14:55:00.000000000 +0000
[mitchell.usher@server ~]$ sudo !!
sudo stat /home/mitchell.usher/.rubies
stat: cannot stat `/home/mitchell.usher/.rubies': Permission denied
I am attempting to copy something from within that directory to /opt which only root has access to:
[mitchell.usher@server ~]$ cp .rubies/ruby-2.1.3/ -r /opt
cp: cannot create directory `/opt/ruby-2.1.3': Permission denied
[mitchell.usher@server ~]$ sudo !!
sudo cp .rubies/ruby-2.1.3/ -r /opt
cp: cannot stat `.rubies/ruby-2.1.3/': Permission denied
Obviously I can do the following (and is what I've done for the time being):
[mitchell.usher@server ~]$ cp -r .rubies/ruby-2.1.3/ /tmp/
[mitchell.usher@server ~]$ sudo cp -r /tmp/ruby-2.1.3/ /opt/
Is there any way to do this that wouldn't involve copying it as an intermediary step or changing permissions?
|
You can use tar as a buffer process
cd .rubies
tar cf - ruby-2.1.3 | ( cd /opt && sudo tar xvfp - )
The first tar runs as you and so can read your home directory; the second tar runs under sudo and so can write to /opt.
| How to copy a directory which root can't access to a directory that only root can access? |
1,513,855,139,000 |
I have millions of files with the following nomenclature on a Linux machine:
1559704165_a1ac6f55fef555ee.jpg
The first 10 digits are timestamp and the ones followed by a _ are specific ids. I want to move all the files matching specific filename ids to a different folder.
I tried this on the directory with files
find . -maxdepth 1 -type f | ??????????_a1ac*.jpg |xargs mv -t "/home/ubuntu/ntest"
However I am getting an error indicating:
bash 1559704165_a1ac6f55fef555ee.jpg: command not found
When I tried mv ??????????_a1ac*.jpg, I got an "argument list too long" error. I have at least 15 different filename patterns. How do I move them?
|
You should use:
find . -maxdepth 1 -type f -name '??????????_a1ac*.jpg' \
-exec mv -t destination "{}" +
-maxdepth 1 means search only the current directory, not its subdirectories.
-type f means find only regular files.
-name '??????????_a1ac*.jpg' is the pattern the file names must match.
mv -t destination "{}" + moves the matched files to destination. The + makes find append as many matched files as possible to a single mv invocation (which also sidesteps the "argument list too long" error), like:
mv -t dest a b c d
Here a b c d are different files.
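For the fifteen-odd different id patterns mentioned in the question, one sketch is to loop over a list of patterns and run one find per pattern. The ids below are made up; substitute your real prefixes and destination:

```shell
# hypothetical id list; replace with your real prefixes
for pat in '??????????_a1ac*.jpg' '??????????_b2bd*.jpg' '??????????_c3ce*.jpg'; do
    find . -maxdepth 1 -type f -name "$pat" -exec mv -t /home/ubuntu/ntest "{}" +
done
```

Because find feeds the file names to mv directly, each pattern is handled without ever building one giant argument list.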
| Moving millions of files to a different directory with specfic name patterns |
1,513,855,139,000 |
Recently because of some problem, I lost all my server files and requested the hosting team to provide me my files from a backup. They will provide me a link from which I have to download a compressed file and upload it again on server.
Is there any way to download that file directly onto the server? I have full access on the server.
|
You may use the wget utility.
It has a really simple syntax; all you need to do is run:
wget http://link.to.file
and the file will be stored in the directory where you run wget. If you'd like to store the downloaded file somewhere else, use the -P option, e.g.
wget -P /path/to/store http://link.to.file
| download file from Internet to server using SSH |
1,513,855,139,000 |
I have a binary that creates some files in /tmp/*some folder* and runs them. This same binary deletes these files right after running them. Is there any way to intercept these files?
I can't make the folder read-only, because the binary needs write permissions. I just need a way to either copy the files when they are executed or stop the original binary from deleting them.
|
You can use the inotifywait command from inotify-tools in a script to create hard links of files created in /tmp/some_folder. For example, hard link all created files from /tmp/some_folder to /tmp/some_folder_bak:
#!/bin/sh
ORIG_DIR=/tmp/some_folder
CLONE_DIR=/tmp/some_folder_bak
mkdir -p $CLONE_DIR
inotifywait -mr --format='%w%f' -e create $ORIG_DIR | while read file; do
echo $file
DIR=`dirname "$file"`
mkdir -p "${CLONE_DIR}/${DIR#$ORIG_DIR/}"
cp -rl "$file" "${CLONE_DIR}/${file#$ORIG_DIR/}"
done
Since they are hard links, they should be updated when the program modifies them but not deleted when the program removes them. You can delete the hard linked clones normally.
Note that this approach is nowhere near atomic so you rely on this script to create the hard links before the program can delete the newly created file.
If you want to clone all changes to /tmp, you can use a more distributed version of the script:
#!/bin/sh
TMP_DIR=/tmp
CLONE_DIR=/tmp/clone
mkdir -p $CLONE_DIR
wait_dir() {
inotifywait -mr --format='%w%f' -e create "$1" 2>/dev/null | while read file; do
echo $file
DIR=`dirname "$file"`
mkdir -p "${CLONE_DIR}/${DIR#$TMP_DIR/}"
cp -rl "$file" "${CLONE_DIR}/${file#$TMP_DIR/}"
done
}
trap "trap - TERM && kill -- -$$" INT TERM EXIT
inotifywait -m --format='%w%f' -e create "$TMP_DIR" | while read file; do
if ! [ -d "$file" ]; then
continue
fi
echo "setting up wait for $file"
wait_dir "$file" &
done
| Watch /tmp for file creation and prevent deletion of files? [duplicate] |
1,513,855,139,000 |
I have an Asustor NAS that runs on Linux; I don't know what distro they use.
I'm able to log in it using SSH and use all Shell commands. Internal Volume uses ext2, and external USB HDs use NTFS.
When I try to use cp command to copy any file around, that file's date metadata is changed to current datetime.
For example, if I use Windows to copy the file from SMB and the file was modified in 2007, the new file is marked as created now in 2017 but modified in 2007. But with the Linux cp command, its modified date is changed to 2017 too.
This modified date is very relevant to me because it allows me to sort files on Windows Explore by their modified date. If it's overridden, I'm unable to sort and they all seem to have been created now. I also use modified date to know when I acquired some rare old files.
Is there any parameter I can use in cp command to preserve original file metadata?
Update: I tried cp --preserve=timestamps but it didn't work, it printed:
cp: unrecognized option '--preserve=timestamps'
BusyBox v1.19.3 (2017-03-22 17:23:49 CST) multi-call binary.
Usage: cp [OPTIONS] SOURCE DEST
Copy SOURCE to DEST, or multiple SOURCE(s) to DIRECTORY
-a Same as -dpR
-R,-r Recurse
-d,-P Preserve symlinks (default if -R)
-L Follow all symlinks
-H Follow symlinks on command line
-p Preserve file attributes if possible
-f Overwrite
-i Prompt before overwrite
-l,-s Create (sym)links
If I try just -p it says cp: can't preserve permissions of '...': Operation not permitted, but as far as I've tested, timestamps are being preserved.
|
If you use man cp to read the manual page for the copy command you'll find the -p and --preserve flags.
-p same as --preserve=mode,ownership,timestamps
and
--preserve[=ATTR_LIST] preserve the specified attributes (default: mode,ownership,timestamps), if possible additional attributes: context, links, xattr, all
What this boils down to is that you should use cp -p instead of just cp. On BusyBox (as on your NAS), -p is the only preserve flag available; the "can't preserve permissions" warning just means ownership and mode could not be kept because you are not root, but, as you observed, timestamps are still preserved.
| cp losing file's metadata |
1,513,855,139,000 |
I installed Linux on another computer and I want to move my /home directory to that computer. I want to back up that directory preserving all file permissions, symlinks, etc.
How should I do it? Are there any parameters for tar gzip?
|
If you mean you want to include the files that the symlinks point to, use -h.
tar -chzf foo.tar.gz directory/
Permissions and ownership are preserved by default. If you just want to include the symlinks as symlinks, leave out -h. Small -z is for gzip.
This is all spelled out in man tar; you can search for terms (such as "symlink") in man via the forward-slash key /.
When you extract the archive (tar -xzf foo.tar.gz), ownership will only be preserved if you unpack as root, otherwise, all the files will be owned by you. This is a feature, since otherwise it would often be impossible for a normal user to access files in an archive they (e.g.) download. Modal permissions (read/write/execute) will remain the same. If as a regular user you want the ownership preserved anyway, you can use the -p switch (so tar -xzpf foo.tar.gz).
However, there is a catch.
File ownership is actually stored as a number, not a name; the system reports names by correlating them with a value from /etc/passwd. You can find the number corresponding to your user name with:
grep yourusername < /etc/passwd
Which will print something like:
piotek:x:1001:1001::/home/piotek:/bin/bash
The first number is your user number, the second one is your group number (they are usually the same). The other fields are explained in man 5 passwd (note the 5).
As a consequence, if you tar up some files and unpack them on another system as root, or using -p (so that ownership is preserved), and there is a user on that system whose user number is 1001, those files will be owned by that user, even if their name is not piotek (and even if there is a piotek user on the system with a different corresponding number).
The tar man page is a little confusing in this regard, since it refers to the -p switch as involving file permissions. This is a common *nix ambiguity in a context where the state of the read/write/execute bits are referred to as mode.
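One way to sidestep the name-to-number remapping when restoring as root is GNU tar's --numeric-owner option, which makes tar use the stored numeric uid/gid instead of looking names up in the destination's /etc/passwd. A small self-contained sketch with hypothetical paths (run the extraction as root if you want ownership actually applied):

```shell
mkdir -p demo/home/piotek demo/restore
echo notes > demo/home/piotek/notes.txt
tar -czpf home-backup.tar.gz -C demo/home piotek
# as root, --numeric-owner would keep uid 1001 as 1001 even if the
# destination maps that number to a different user name
tar -xzpf home-backup.tar.gz --numeric-owner -C demo/restore
ls demo/restore/piotek    # notes.txt
```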
| Symlinks and permissions in backup archives |
1,513,855,139,000 |
I have a top folder with many sub-folders. It's named "a". There are many .png and .jpg files in there. I'd like to recursively copy "a" into a new folder "b", but only copy the .png and .jpg files. How do I achieve that?
|
find a \( -name "*.png" -or -name "*.jpg" \) -exec cp {} b \;
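Note that the command above drops all matches flat into b, losing a's subdirectory layout. If the folder structure should be recreated under b, a hedged alternative (assuming GNU cp, whose --parents option rebuilds each file's source path under the target) is:

```shell
# demo tree (made up): a/top.png, a/sub/pic.jpg, a/sub/deep/skip.txt
mkdir -p a/sub/deep b
touch a/top.png a/sub/pic.jpg a/sub/deep/skip.txt
# --parents recreates each file's directory path under b
(cd a && find . -type f \( -name '*.png' -o -name '*.jpg' \) \
    -exec cp --parents -t ../b {} +)
ls b/sub    # pic.jpg
```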
| Copy only certain file types from a folder structure to another |
1,513,855,139,000 |
How do I securely and reliably resume the process of copying a file $A into backup location $B, done with pv "$A" > "$B" or cat "$A" > "$B"?
(let's assume file $A is very big, e.g. LVM2 snapshot file)
Is it achievable with dd ?
Preferred: bash or python (preferably python3) solutions.
Example scenario: pv "$A" > "$B" was interrupted after copying 90%. How do I resume it so that the copy finishes without repeating the work already done?
|
Yes you can use dd to skip the blocks.
A="file1"
B="file2"
BLOCKSIZE=512 # default bs for dd
size_b=$(stat -c "%s" "$B")
skip_blocks=$((size_b / BLOCKSIZE))
dd if="$A" of="$B" skip=$skip_blocks seek=$skip_blocks bs=$BLOCKSIZE
The important parameters here are skip as well as seek:
skip: skip BLOCKS ibs-sized blocks at start of input
seek: skip BLOCKS obs-sized blocks at start of output
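Putting it together, a self-contained sketch (made-up file names) that simulates an interrupted copy and then resumes it with the skip/seek parameters above:

```shell
A=src.bin; B=dst.bin; BLOCKSIZE=512
dd if=/dev/urandom of="$A" bs=512 count=100 2>/dev/null  # 100-block source
dd if="$A" of="$B" bs=512 count=37 2>/dev/null           # "interrupted" at 37 blocks
skip_blocks=$(( $(stat -c %s "$B") / BLOCKSIZE ))
dd if="$A" of="$B" skip=$skip_blocks seek=$skip_blocks bs=$BLOCKSIZE 2>/dev/null
cmp "$A" "$B" && echo "resume complete, files identical"
```

Here the interruption lands on a block boundary, so skip/seek line up exactly; if a copy dies mid-block, the integer division rounds down and the partial last block is simply rewritten (GNU dd truncates the output at the seek offset unless conv=notrunc is given).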
| Resume interrupted copying process |
1,513,855,139,000 |
I need to recursively copy a folder from a Ubuntu remote server where I have ssh access. I don't want to follow symbolic links, nor to copy permissions/owner/group, because my client system (Ubuntu too) doesn't have the same users as the server.
This rsync solution could be the best one.
But the server does not have rsync and I can't install it there; so that command gives me error.
Is there another way to copy the remote folder?
|
If you have the permission to use FUSE on your local machine, install the sshfs package. SSHFS lets you access remote files via normal filesystem access: it mounts a directory tree accessed over SFTP. You only need to have SFTP access on the remote side (which is enabled by default with OpenSSH on Ubuntu). Once the remote directory is mounted, you can use the tools of your choice to manipulate files, without having to care whether they're local or remote.
mkdir ~/net/remote-server
sshfs remote-server:/ ~/net/remote-server
rsync -rlt ~/net/remote-server/remote/path/ /local/path/
fusermount -u ~/net/remote-server
| Copy from remote server which doesn't have rsync |
1,513,855,139,000 |
While moving a big chunk of data between two external USB drives, I notice my laptop is slowed down. It was my understanding that the files are not written to any intermediate location (such as /tmp or similar) unless there is a shortage of free RAM. Am I wrong?
|
If you have a copy such as this, or its GUI equivalent,
cp -a /media/external/disk1/. /media/external/disk2/
the data is read from the first disk's filesystem and written directly to the second. There is no intermediate write to another storage location. If you are seeing slow speeds it may be that the two disks are sharing the same USB controller and contending for access to the bus.
Anything more than that and you will have to provide further details, such as the make/model of computer, its bus topology, etc.
| When moving files between two external drives, are they temporarily written to the internal hdd of the computer? |
1,513,855,139,000 |
For a simple transfer of /home to another disk I use cp -a, which seems extremely slow to me. I would like to know a more efficient way to complete the task. I have /home mounted as a logical volume, but the target disk is not an LVM system.
|
Try tar, pax, cpio, with something buffering.
(cd /home && bsdtar cf - .) |
pv -trab -B 500M |
(cd /dest && bsdtar xpSf -)
I suggest bsdtar instead of tar because at least on some Linux distributions tar is GNU tar, which contrary to bsdtar (from libarchive) doesn't preserve extended attributes, ACLs or Linux attributes by default.
pv will buffer up to 500M of data so can better accommodate fluctuations in reading and writing speeds on the two file systems (though in reality, you'll probably have a disk slower that the other and the OS' write back mechanism will do that buffering as well so it will probably not make much difference). Older versions of pv don't support -a (for average speed reporting), you can use pv -B 200M alone there.
In any case, those will not have the limitation of cp, that does the reads and the writes sequentially. Here we've got two tar working concurrently, so one can read one FS while the other one is busy waiting for the other FS to finish writing.
For ext4 and if you're copying onto a partition that is at least as large as the source, see also clone2fs, which works like ntfsclone, that is, it copies only the allocated blocks, and sequentially, so on rotational storage it is probably going to be the most efficient.
partclone generalises that to a few different file systems.
Now a few things to take into consideration when cloning a file system.
Cloning would be copying all the directories, files and their contents... and everything else. Now the everything else varies from file system to file system. Even if we only consider the common features of traditional Unix file systems, we have to consider:
links: symbolic links and hard links. Sometimes, we'll have to consider what to do with absolute symlinks or symlinks that point out of the file system/directory to clone
last modification, access and change times: only the first two can be copied using filesystem API (cp, tar, rsync...)
sparseness: you've got that 2TB sparse file which is a VM disk image that only takes 3GB of disk space, the rest being sparse, doing a naive copy would fill up the destination drive.
Then if you consider ext4 and most Linux file systems, you'll have to consider:
ACLs and other extended attributes (like the ones used for SELinux)
Linux attributes like immutable or append-only flags
Not all tools support all of those, or when they do, you have to enable it explicitly like the --sparse, --acls... options of rsync, tar... And when copying onto a different filesystems, you have to consider the case where they don't support the same feature set.
You may also have to consider attributes of the file system themselves like the UUID, the reserved space for root, the fsck frequency, the journalling behavior, format of directories...
Then there are more complex file systems, where you can't really copy the data by copying files. Consider for example zfs or btrfs when you can take snapshots of subvolumes and branch them off... Those would have their own dedicated tools to copy data.
The byte to byte copy of the block device (or at least of the allocated blocks when possible) is often the safest if you want to make sure that you copy everything. But beware of the UUID clash problem, and that implies you're copying onto something larger (though you could resize a snapshot copy of the source before copying).
| faster alternative to cp -a |
1,513,855,139,000 |
When I import pictures from my camera in Shotwell, it also imports the video clips. This is somewhat annoying, as I would like to store my videos in another folder. I've tried to write a bash command to do this, but have not had success.
I need a command that meets the following requirements:
Locate all files in a directory structure that do not have an extension of .jpg, .png, .gif, or .xcf (case insensitive).
Move all of these files into a target directory, regardless of whether the file names or directory paths contain spaces or special characters.
Any help would be appreciated!
EDIT: I'm using the default shell in Ubuntu, meaning that some commands are aliased, etc.
EDIT 2: I've attempted this myself (not the copy part, just the listing of files part). I turned on extglob and ran the following command:
$ ls -R /path | awk '
/:$/&&f{s=$0;f=0}
/:$/&&!f{sub(/:$/,"");s=$0;f=1;next}
NF&&f{ print s"/"$0 }'
This lists everything. I tried using grep on the end of it, but haven't the foggiest idea of how to get it to not match a pattern I give it. The extglob switch didn't help much with grep, even though it does help with other commands.
|
You can use find to find all files in a directory tree that match (or don't match) some particular tests, and then to do something with them. For this particular problem, you could use:
find -type f ! \( -iname '*.png' -o -iname '*.gif' -o -iname '*.jpg' -o -iname '*.xcf' \) -exec echo mv {} /new/path \;
This limits the search to regular files (-type f), and then to files whose names do not (!) have the extension *.png in any casing (-iname '*.png') or (-o) *.gif, and so on. All the extensions are grouped into a single condition between \( ... \). For each matching file it runs a command (-exec) that moves the file, the name of which is inserted in place of the {}, into the directory /new/path. The \; tells find that the command is over. (The echo makes this a dry run that only prints what would be moved; remove it once the output looks right.)
The name substitution happens inside the program-execution code, so spaces and other special characters don't matter.
If you want to do this just inside Bash, you can use Bash's extended pattern matching features. These require that shopt extglob is on, and globstar too. In this case, use:
mv **/!(*.[gG][iI][fF]|*.[pP][nN][gG]|*.[xX][cC][fF]|*.[jJ][pP][gG]) /new/path
This matches all files in subdirectories (**) that do not match *.gif, *.png, etc, in any combination of character cases, and moves them into the new path. The expansion is performed by the shell, so spaces and special characters don't matter again.
The above assumes all files are in subdirectories. If not, you can repeat the part after **/ to include the current directory too.
There are similar features in zsh and other shells, but you've indicated you're using Bash.
(A further note: parsing ls is never a good idea - just don't try it.)
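A self-contained sketch of the extglob variant, with made-up file names and a demo target directory; the loop form also skips directories, which the bare mv glob would otherwise try to move:

```shell
shopt -s extglob globstar
mkdir -p demo/sub demo/target
touch demo/pic.PNG demo/clip.mov demo/sub/song.mp3
cd demo
for f in **/!(*.[pP][nN][gG]|*.[gG][iI][fF]|*.[jJ][pP][gG]|*.[xX][cC][fF]); do
    [ -f "$f" ] && mv "$f" target/
done
ls target    # clip.mov and song.mp3 were moved; pic.PNG stayed behind
```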
| Bash copy all files that don't match the given extensions |
1,513,855,139,000 |
I want to duplicate a directory on an FTP server I'm connected to from my Mac via the command-line
Let's say I have file. I want to have files2 with all of file's subdirectories and files, in the same directory as the original. What would be the simplest way to achieve this?
EDIT:
With mget and mput you could download all files and upload them again into a different folder but this is definitely NOT what i want/need (I started this question trying to avoid duplicating with this download upload method from the dektop client)
|
What you have is not a unix command line, what you have is an FTP session. FTP is designed primarily to upload and download files, it's not designed for general file management, and it doesn't let you run arbitrary commands on the server. In particular, as far as I know, there is no way to trigger a file copy on the server: all you can do is download the file then upload it under a different name.
Some servers support extensions to the FTP protocol, and it's remotely possible that one of these extensions lets you copy remote files. Try help site or remotehelp to see what extensions the server supports.
If you want a unix command line, you need remote shell access, via rsh (remote shell) or more commonly in the 21st century ssh (secure shell). If this is a web host, check if it provides ssh access. Otherwise, contact the system administrator. But don't be surprised if the answer is no: command line access would be a security breach in some multi-user setups, so there may be a legitimate reason why it's not offered.
| Easiest way to duplicate directory over FTP |
1,513,855,139,000 |
Using Bash
So let's say I have a bunch of files randomly placed in a parent directory ~/src, I want to grab all the files matching a certain suffix and move (or copy) them to a ~/dist directory.
Let's assume for this purpose that all filenames have this naming convention:
<filename_prefix>.<filename_suffix>
I found out that this was a quick way to get all files with a particular filename_suffix and put them in a dist folder:
mkdir ~/dst
find source -name "*.xxx" -exec mv -i {} -t ~/dst \;
Now a step further... how can I use the output of find, in this case filename, and use the filename_prefix to generate a directory of the same name in ~/dist and then move (or copy) all the files with that prefix into the appropriate directory?
mkdir ~/dst
find source -name "*.xrt,*.ini,*.moo" -exec mv -i {} -t ~/dst \;
Essentially, how do I change the above command (or maybe use another command), to create a structure like this
(OUTPUT)
~/dist/people/people.xrt
~/dist/games/games.xrt
~/dist/games/games.moo
~/dist/games/games.ini
~/dist/monkeys/monkeys.ini
~/dist/monkeys/monkeys.xrt
from a directory tree like this?
(INPUT)
~/src/xrt/people.xrt
~/src/xrt/games.xrt
~/src/conf/games.ini
~/src/pack/monkeys.xrt
~/src/e344/games.moo
~/src/e344/monkeys.moo
~/src/en-us/monkeys.ini
|
It would be a hell to tell find what to do in this case.
Better use the shell:
for i in **/*.{xrt,ini,moo}; do
FILE=$(basename "$i")
DIR=~/dst/${FILE%.*}
echo mkdir -p -- "$DIR"
echo mv -i -t "$DIR" -- "$i"
done
Use shopt -s globstar to make the ** glob work (or use zsh!).
And remove the echos later if the command prints what you want.
| How can you move (or copy) all files to a directory with the same filename prefix? |
1,513,855,139,000 |
I need to copy one very large file (3TB) on the same machine from one external drive to another. This might take (because of low bandwidth) many days.
So I want to be prepared when I have to interrupt the copying and resume it after, say, a restart.
From what I've read I can use
rsync --append
for this (with rsync version>3). Two questions about the --append flag here:
Do I use rsync --append for all invocations? (For the first invocation when no interrupted copy on the destination drive yet exists and for the subsequent invocations when there is an interrupted copy at the destination.)
Does rsync --append resume for the subsequent invocations the copying process without reading all the already copied data? (In other words: Does rsync mimic a dd-style seek-and-read operation?)
|
Do I use rsync --append for all invocations?
Yes, you would use it each time (the first time there is nothing to append, so it's a no-op; the second and subsequent times it's actioned). But do not use --append at all unless you can guarantee that the source is unchanged from the previous run (if any), because it turns off the checking of what has previously been copied.
Does rsync --append resume for the subsequent invocations… without reading all the already copied data?
Yes, but note that without --partial, rsync would probably have deleted the partially-transferred target file when the first run was interrupted, leaving nothing to append to.
The correct invocation would be something like this:
rsync -a -vi --append --inplace --partial --progress /path/to/source/ /path/to/target
You could remove --progress if you didn't want to see a progress indicator, and -vi if you are less bothered about a more informational result (you'll still get told if it succeeds or fails). You may see -P used in other situations: this is the same as --partial --progress and can be used for that here too.
--append to continue after a restart without checking previously transferred data
--partial to keep partially transferred files
--inplace to force the update to be in-place
If you are in any doubt at all that the source might have changed since the first attempt at rsync, use the (much) slower --append-verify instead of --append. Or better still, remove the --append flag entirely and let rsync delete the target and start copying it again.
| Is rsync --append able to resume an interrupted copy process without reading all the copied data? |
1,513,855,139,000 |
So I'm trying to share files between the Samsung Galaxy S5 with Android and my Debian9/KDE machine using MTP instead of KDE Connect.
The problem is that I keep getting:
The process for the mtp protocol died unexpectedly.
When trying to copy over files.
It also often says
No Storages found. Maybe you need to unlock your device?
I can view some of the phone's contents in dolphin after trying for a while: pressing "Allow" whenever the dialog on the phone asks for it while trying to open it in dolphin which correctly detects it as Samsung Galaxy S5.
I once could successfully copy over a bunch of images.
I already tried sudo apt-get install --reinstall libmtp-common. syslog has things like the following:
usb 1-5: usbfs: process 7907 (mtp.so) did not claim interface 0 before use
usb 1-5: reset high-speed USB device number 35 using xhci_hcd
usb 1-5: usbfs: process 7909 (mtp.so) did not claim interface 0 before use
colord-sane: io/hpmud/pp.c 627: unable to read device-id ret=-1
usb 1-5: USB disconnect, device number 35
usb 1-5: new high-speed USB device number 36 using xhci_hcd
usb 1-5: usbfs: process 7930 (mtp.so) did not claim interface 0 before use
usb 1-5: usbfs: process 7930 (mtp.so) did not claim interface 0 before use
usb 1-5: usbfs: process 7930 (mtp.so) did not claim interface 0 before use
|
Install the jmtpfs package
apt install jmtpfs
Edit your /etc/fuse.conf as follows
# Allow non-root users to specify the allow_other or allow_root mount options.
user_allow_other
Create an udev rule. Use lsusb or mtp-detect to get the ID of your device
nano /etc/udev/rules.d/51-android.rules
with the following line:
SUBSYSTEM=="usb", ATTR{idVendor}=="04e8", ATTR{idProduct}=="6860", MODE="0666", OWNER="[username]"
Replace 04e8 and 6860 with your device's vendor and product IDs, then run:
udevadm control --reload
Reconnect your device, open the terminal and run:
mkdir ~/mtp
jmtpfs ~/mtp
ls ~/mtp
sample output:
Card Phone
To unmount your device use the following command:
fusermount -u ~/mtp
Also you can use the go-mtpfs tool:
Mount MTP devices over FUSE
mkdir ~/mtp
go-mtpfs ~/mtp
A graphical tool to mount your device : gmtp:
simple file transfer program for MTP based devices
sudo apt install gmtp
gmtp
kio-mtp
access to MTP devices for applications using the KDE Platform
| How to get the Samsung Galaxy S5 to work with MTP on Debian 9? |
1,513,855,139,000 |
Let's say I have a large file (8GB) called example.log on ZFS.
I do cp example.log example.bak to make a copy. Then I add or modify a few bytes in original file. What will happen?
Will ZFS copy the entire 8GB file or only the blocks that changed (and all the chain of inodes pointing from the file descriptor to that block)?
|
As far as I could tell, FreeBSD ZFS does not support copy-on-write
using cp; the native cp does not seem to have an option for such
lightweight copies, and trying GNU cp with --reflink errors out on
the ZFS system I tried with error message "cp: failed to clone
'example.bak' from 'example.log': Operation not supported".
A commenter mentions that Solaris cp has a -z switch to do such
copies.
However, and I hope this answers your underlying question,
copy-on-write is used for filesystem snapshots: let's say you have
900GB used out of 1000GB available, nothing prevents you from making a
snapshot of that filesystem, the snapshot will not occupy 900GB; in
fact, it will initially not occupy any new data blocks at all.
After creating a snapshot of your original filesystem containing
example.log, you end up with two "copies": the read-only version in
the snapshot, and your live version in its original location. What
happens when the copy is modified, be it by appending or by being
altered in-place? That is where the magic happens: only those blocks
that are altered get copied and start using up space. It is not the
case that the entire file gets copied as soon as it gets altered.
| How does ZFS copy on write work for large files |
1,513,855,139,000 |
I have directory loaded with thousands of sub directories:
/home/tmp/
1
12
123
1234
2345
234
3456
345
34
Each subdirectory in turn has hundreds of subdirectories that I want to rsync if the first level subdirectory matches...
What I need is a way to copy/rsync only the directories that start with a given digit [1-9]...
What I think I want is basically something that would allow me to use wild cards to match
rsync -rzvvhP remotehost:/home/tmp/1* /home/tmp/
I want rsync to sync up the
/home/tmp/1/
/home/tmp/12/
/home/tmp/123/
/home/tmp/1234/
directories and any child subdirectories they have but not any of the first level directories that start with a different digit...
/home/tmp/234/
/home/tmp/2345/
........./3*/
........./4*/ etc..
What I've tried:
rsync -rzvvhP --exclude='*' --include-from=1.txt remotehost:/home/tmp/ /home/tmp/
where 1.txt contains:
1
12
123
1234
When I do this with 2.txt though rsync still seems to run through all the directories that start with 1 and 3 etc...
How can I do this so that I can have one command to rsync only the directories that start with any given digit?
|
What you proposed should actually work:
rsync -rzvvhP remotehost:/home/tmp/1\* /home/tmp/
(You can get away with not quoting the * in most circumstances, as the pattern remotehost:/home/tmp/1\* is unlikely to match any file so it will be left alone with most shell setups.)
Your attempt with --exclude='*' failed because the first match applies, and your first match for everything (*) says to exclude.
See this guide for some general principles about rsync filters. Here, to include only directories beginning with 1 at the toplevel, and copy everything in included subdirectories, include /1* then exclude /*.
rsync -rzvvhP --include='/1*' --exclude='/*' remotehost:/home/tmp/ /home/tmp/
| rsync all directories that start with a specific digit |
1,513,855,139,000 |
I have two folders:
ORIGINAL/
ORIGINAL_AND_MY_CHANGES/
My friend has a copy of ORIGINAL/. I would like to generate MY_CHANGES.tgz -- it should contain only new/changed files from ORIGINAL_AND_MY_CHANGES/ comparing to ORIGINAL/. So my friend can unpack it into his copy of ORIGINAL/ and get ORIGINAL_AND_MY_CHANGES/.
How can I do this?
P.S. I tried diff but it can't save binary data and rsync --link-dest -- it generates hard links which are useless in the archive.
P.P.S. In my case modification time can't be used to decide which file was changed.
|
With rsync
What you're doing is essentially an incremental backup: your friend (your backup) already has the original files, and you want to make an archive containing the files you've changed from that original.
Rsync has features for incremental backups.
cd ORIGINAL_AND_MY_CHANGES
rsync -a -c --compare-dest=../ORIGINAL . ../CHANGES_ONLY
-a means to preserve all attributes (times, ownership, etc.).
-c means to compare file contents and not rely on date and size.
--compare-dest=/some/directory means that files which are identical under that directory and the source tree are not copied. Note that the path is relative to the destination directory.
Rsync copies all directories, even if no files end up there. To get rid of these empty directories, run find CHANGES_ONLY -depth -type d -empty -delete (or, if your find doesn't have -delete and -empty, run find CHANGES_ONLY -depth -type d -exec rmdir {} + 2>/dev/null).
Then make the archive from the CHANGES_ONLY directory.
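Since the question asked for MY_CHANGES.tgz, here is a small self-contained sketch of that last step (the fixture files below merely stand in for the real CHANGES_ONLY tree):

```shell
# demo fixture standing in for the real CHANGES_ONLY tree
mkdir -p CHANGES_ONLY/subdirA1
echo changed > CHANGES_ONLY/subdirA1/file.c
# -C enters the directory first, so paths inside the archive are relative
tar -czf MY_CHANGES.tgz -C CHANGES_ONLY .
# your friend unpacks it on top of his copy of ORIGINAL:
mkdir -p ORIGINAL_COPY
tar -xzf MY_CHANGES.tgz -C ORIGINAL_COPY
```

Because the archive stores relative paths, extracting it with -C in the root of the original tree overlays just the changed files.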
The pedestrian way
Traverse the directory with your file. Skip files that are identical with the original. Create directories in the target as necessary. Copy changed files.
cd ORIGINAL_AND_MY_CHANGES
find . \! -type d -exec sh -c '
for x; do
if cmp -s "$x" "../ORIGINAL/$x"; then continue; fi
[ -d "../CHANGES_ONLY/${x%/*}" ] || mkdir -p "../CHANGES_ONLY/${x%/*}"
cp -p "$x" "../CHANGES_ONLY/$x"
done
' sh {} +
| How do I save changed files? |
1,513,855,139,000 |
POSIX filenames may contain all characters except /, but some filesystems reserve characters like ?<>\\:*|". Using pax, I can copy files while replacing these reserved characters:
$ pax -rw -s '/[?<>\\:*|\"]/_/gp' /source /target
But pax lacks an --delete option like rsync and rsync cannot substitute characters. I'm looking for a simple way to backup my music collection to an external hard drive on a regular basis.
|
You can make a view of the FAT filesystem with POSIX semantics, including supporting file names with any character other than / or a null byte. POSIXovl is a relatively recent FUSE filesystem for this.
mkdir backup-fat
mount.posixovl -S /media/sdb1 backup-fat
rsync -au /source backup-fat/target
Characters in file names that VFAT doesn't accept are encoded as %(XX) where XX are hexadecimal digits. As of POSIXovl 1.2.20120215, beware that a file name like %(3A) is encoded as itself, and will be decoded as :, so there is a risk of collision if you have file names containing substrings of the form %(XX).
Beware that POSIXovl does not cope with file names that are too long. If the encoded name doesn't fit in 255 characters, the file can't be stored.
POSIXovl stores unix permissions and ownership in files called .pxovl.FILENAME.
| What is the best way to synchronize files to a VFAT partition? |
1,513,855,139,000 |
$ cp --no-preserve=mode --parents /sys/power/state /tmp/test/
$ cp --no-preserve=mode --parents /sys/bus/cpu/drivers_autoprobe /tmp/test/
The second of the two lines will fail with
cp: cannot make directory ‘/tmp/test/sys/bus’: Permission denied
And the reason is that /tmp/test/sys is created without write permission (as is the original /sys); a normal mkdir /tmp/test/sys2 would not have done this:
$ ls -la /tmp/test/
total 32
drwxr-xr-x 3 robert.siemer domain^users 4096 Oct 11 13:56 .
drwxrwxrwt 13 root root 20480 Oct 11 13:56 ..
dr-xr-xr-x 3 robert.siemer domain^users 4096 Oct 11 13:56 sys
drwxr-xr-x 2 robert.siemer domain^users 4096 Oct 11 13:59 sys2
How can I instruct cp to not preserve the mode, apart from --no-preserve=mode, which does not work as I believe it should...?
Or which tool should I use to copy a list of files without preserving “anything” except symlinks?
|
In case you are using GNU coreutils: this is a bug, fixed in version 8.26.
https://lists.gnu.org/archive/html/bug-coreutils/2016-08/msg00016.html
So the alternative tool would be an up-to-date coreutils, or for example rsync which is able to do that even with preserving permissions:
$ rsync -a --relative /sys/power/state /tmp/test
$ rsync -a --relative /sys/bus/cpu/drivers_autoprobe /tmp/test/
Though I see rsync has other problems for this particular sysfs files, see
rsync option to disable verification?
Another harsh workaround would be to chmod all the dirs after each cp command.
$ find /tmp/test -type d -exec chmod $(umask -S) {} \;
(The find/chmod command above would also not work for any combination of existing permissions and umask.)
BTW you could report this bug to your Linux distribution and they might fix your 8.21 package via maintenance updates.
| Why does cp --no-preserve=mode preserves the mode? Alternative tools available? |
1,513,855,139,000 |
How would I go about to copy files over a very unstable internet connection?
Sometimes, the connection is lost, other times the IP of one machine or the other machine is changed, sometimes both, though dynamic DNS will catch it.
Which tool or command would you suggest?
I've heard that rsync is pretty nifty in copying only the diff, but that means a lot of work either restarting it again and again or putting it into a while or cronjob.
I was hoping for something easier and foolproof.
Addendum:
It's about copying every now and then a couple of directories with a few very large files >5GB in them from one site to the other. After the copy, both are moved locally to different locations.
I can't do anything on the networking level, I wouldn't have the knowledge to do so.
I'd rather not set up a web server in order to use wget. That is not secure and seems like a circuitous route.
I have already established an SSH connection and could now rsync, as rsync is already installed on both machines (I wouldn't be able to get an rsync daemon up and running).
Any hints on how I could make an intelligent rsync over ssh so that it tries to continue when the line is temporarily cut? But rsync won't be the problem when the ssh connection dies. So something like this (https://serverfault.com/questions/98745/) probably won't work:
while ! rsync -a .... ; do sleep 5 ; done
Any ideas?
Thanks a lot!
Gary
|
OK, I have found the solution in my case. I am indeed using the suggested while loop. It now looks like this:
while ! \
rsync -aiizP --append --stats . -e ssh [email protected]:./path/rfiles ; \
do now=$(date +"%T") ; echo · Error at $now · ; sleep 5 ; done
Without the while loop, I would have to manually start the rsync again. Now, it works just like a charm.
The interesting thing is: I get the error exactly ten minutes after the connection is lost and about 9 minutes after the connection is up and running again! In the meantime, nothing is happening in the terminal window. I wonder where this 10 minute timeout comes from.
Thank you very much for your help.
Gary
FYI: This is the timeout error that I receive (10 mins after the fact):
...
thedirectory/afile.ext
Read from remote host myhost.com: Operation timed out
rsync: writefd_unbuffered failed to write 16385 bytes [sender]: Broken pipe (32)
rsync: connection unexpectedly closed (394 bytes received so far) [sender]
rsync error: unexplained error (code 255) at /SourceCache/rsync/rsync-40/rsync/io.c(452) [sender=2.6.9]
| The most *robust* remote file copy? |
1,513,855,139,000 |
I'm looking for some sort of a command that I can use, to copy/append multiple files into one; but without shell redirection (I'd like to try it in call_usermodehelper, see similar issue in call_usermodehelper / call_usermodehelperpipe usage - Stack Overflow). I know I could otherwise use:
cat file1 file2 > file.merge
... but that requires shell redirection.
My findings so far:
Cannot use cat, as its default stdout output cannot be redefined (through, say, a command line argument) - and other than that, it's shell redirection
Cannot use dd in single invocation, as it can only accept one (and only one) if= input file argument
Cannot use cp, as it will treat multiple files individually, and cannot copy them all "merged" into a single location
So - is there any standard tool, that would allow me to do something like (pseudocode):
copytool -i file1 -i file2 -o file.merge
... such that the output file.merge represents file2 appended to file1 contents?
|
You can do:
sed -n wfile.merge file1 file2
Or:
awk '{print > "file.merge"}' file1 file2
Or:
sh -c 'cat file1 file2 > file.merge'
(note that depending on the implementation, the first two may not work properly with binary files).
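If binary safety matters and two invocations of a single tool are acceptable, GNU dd can also append without any shell redirection (oflag=append, conv=notrunc and status=none are GNU extensions):

```shell
printf 'AB' > file1
printf 'CD' > file2
# the first call creates/truncates the output, the second appends to it
dd if=file1 of=file.merge status=none
dd if=file2 of=file.merge oflag=append conv=notrunc status=none
```

The question noted that dd takes only one if= per invocation, so this sidesteps the single-invocation requirement rather than meeting it.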
| Copy multiple files into one (append, merge) in single invocation without shell redirection? |
1,334,477,754,000 |
I am trying to find the simplest way to upload a file using ssh and after that run a command on the remote machine within the same ssh session for some post-processing, so that I don't need to login again. The upload should, if possible, show some progress indicator.
So far I looked into scp and rsync, and neither is capable of running hooks. (I could use the --rsync-path parameter to execute some script before rsync) but I want to do post-processing. Is there any way to open an ssh session, upload, execute a command and close it again?
|
You might want the ControlMaster mechanism in ssh.
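A minimal sketch of that mechanism, with a placeholder host alias myserver and made-up file names; the scp upload (which prints a progress meter) and the follow-up ssh command reuse a single authenticated connection:

```shell
# put this in ~/.ssh/config (the alias "myserver" is a placeholder):
#
#   Host myserver
#       ControlMaster auto
#       ControlPath ~/.ssh/cm-%r@%h-%p
#       ControlPersist 10m
#
# upload with a progress indicator, then post-process over the same session:
scp build.tar.gz myserver:incoming/
ssh myserver 'tar -xzf incoming/build.tar.gz && ./post-process.sh'
```

The first command authenticates once and leaves a master connection open for ten minutes; the second piggybacks on it without logging in again.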
| Upload file over ssh and execute command on the remote machine |
1,334,477,754,000 |
When moving or copying files as root I often want to set the ownership for those files based on the owner of the directory I am moving the files to.
Before I go off and write a script that parses the rsync output for all the files that were copied over and then runs chown on each of those files, is there a better/existing way to do this?
As an example, say I need to copy/move/sync the folder tmp/ftp/new-assests/ to ~user1/tmp/ and to ~user2/html-stuff/ the originals are owned by the user _www and I want the target files and the folder containing them and any other folders to be owned by user1 and user2, respectively and the target directories have existing files in them.
Yes, the users could copy the files themselves if they had read access to that folder, but that is irrelevant in this case. Let’s assume these are all nologin users and they do not have access to the source files, if that helps.
|
Using rsync:
rsync -ai --chown=user1 tmp/ftp/new-assests/ ~user1/tmp/
This would copy the directory to the given location and at the same time change the ownership of the files to user1, if permitted.
The general form of the argument to --chown is USER:GROUP, but you may also use just USER to set a particular user as owner, as above, or just :GROUP to set a particular group (the : is not optional if you leave the user ID out). Note that --chown requires rsync 3.1.0 or newer.
| Setting ownership when copying or syncing files |
1,334,477,754,000 |
I'm looking for java code to copy files to a remote linux system. I have tried Runtime.getRuntime().exec() function by passing an scp command, but each time I run the program it is asking for the remote system password. I'd like to avoid that.
I looked at the Jsch library -- using this I can login to a remote system -- but I can't copy the files to the remote system. Once I login I can do scp to my host but again it requires the host system username and password. However, I only have the remote system's information.
|
Copying a file from one host to another requires a daemon on the remote host, implementing some application-level file transmission protocol. This is a requirement no matter from which language you are going to talk to that remote daemon.
Your options for Linux systems are:
SSH. This requires a SSH daemon (say openssh-server) on the remote side. Because ssh is designed for security you will have to configure the remote host to authenticate you with either a password or a private key. Actually copying the file can be done via the scp utility or ssh client library (jsch would be an example of such).
SMB/CIFS. The remote host runs a file-sharing daemon (for example samba) and shares some directories. Your local computer (the cifs-utils package is capable of that) can then mount a remote location on the local file system. This way you can copy a file to the remote host by just copying the file locally. Authentication is optional, files are sent in plaintext over the network.
FTP. An ftp server is installed on the remote side and configured to permit access to certain locations for certain users. You can then use any ftp client or some ftp client library (commons-net from the Apache project, for instance) to connect to the remote ftp server and copy the files. Authentication is optional, files are sent in plaintext over the network.
All of this seems like a lot of work, and indeed it is, because there is not a single widely-adopted and standardized protocol that is implemented and configured out-of-the-box on most systems.
| Java code to copy files from one linux machine to another linux machine |
1,334,477,754,000 |
Is it possible to paste the selected filename into the copy popup, so when I hit F5 this filename will be in the 'to' section, so I can adjust it?
For example:
I want to copy /home/piotr/testFile.log to /home/piotr/testFile2.log.
I open both panels in the same directory and hit F5, but the 'to' value is: /home/piotr and I would like it to be /home/piotr/testFile.log, so I can simply adjust the name instead of typing it from the scratch.
|
Use Shift-F5 instead (or Shift-F6 for renaming) – the dialog will have the to field filled with current file's name (without the path).
Sadly those combinations do not work in certain circumstances. No idea if it depends on MC build, terminal or some used library. So I also added this in ~/.mc/menu as an alternative:
5 Copy
read -e -i "%f" -p 'Copy file : ' name
[[ "$name" && "$name" != "%f" ]] && cp "%f" "$name"
6 Rename
read -e -i "%f" -p 'Rename file : ' name
[[ "$name" && "$name" != "%f" ]] && mv "%f" "$name"
Then I just select the file, hit F2, 5 (or 6 for renaming), then edit the name and press Enter. It requires bash 4 or newer, due to read's -i option.
| Insert selected filename while copying in Midnight Commander |
1,334,477,754,000 |
I'm trying to move around 4.5 million files (size ranges from 100 - 1000 bytes) from one partition to another. The total size of the folder is ~2.4 GB
First I tried to zip it and move the zipped file to the new location. It was able to write only ~800k files and then showed an "out of space" error.
Next I tried the mv command and it also resulted in same condition.
Using rsync also resulted in the same error with only ~800k files being moved.
I checked the disk free status and it is way under the limit. ( The new partition has ~700 GB free space and the required space is ~2.4 GB).
I checked the free inode for that partition it is the same. It is using only ~800k out of the maximum possible 191 M inodes. ( I had actually formatted the partition with 'mkfs.ext4 -T small /dev/sdb3' )
I have no idea of what is going wrong here. Everytime it is only able to copy or move ~800k files only.
|
I have found the reason for the error (found it on a different forum).
The error was due to the hashing algorithm used by ext4 which is enabled by "dir_index" parameter. There were too many hashing collisions for me so I disabled it by the following command:
tune2fs -O "^dir_index" /dev/sdb3
The downside is that my partition is slower than before due to no indexing.
For more information on the problem :
ext4: Mysterious “No space left on device”-errors
| Moving millions of small files results in "out of space" error |
1,334,477,754,000 |
I have some files on my laptop which I want to copy them on a remote cluster. To this end, I use PuTTy to SSH the remote cluster. Then to copy files, I use PuTTy terminal and after logging to the remote system, I write the below instruction,
scp -r ~/Desktop/AFU/ username@host:~/SVM
aiming to copy all files in folder C:\Users\name\Desktop\AFU on my laptop to a folder named SVM on the remote cluster.
However, it does not work and I get the error:
/home/username/Desktop/AFU: No such file or directory.
Could you please help me?
The operating system on my laptop is Windows 8.1.
|
The scp command you're trying to run is not only wrong, but won't work anyway because it presumes your laptop is running a SSH server.
To do what you want, there's a much simpler way: use WinSCP on your laptop to connect to the remote cluster (it works similarly to PuTTY), then upload the files you want -- in your case, files from C:\Users\name\Desktop\AFU on your laptop to ~/SVM on the remote cluster. Alternatively, PuTTY also ships a command-line copier called pscp.exe, which works like scp from a Windows command prompt: pscp -r C:\Users\name\Desktop\AFU username@host:SVM
| Copying files from my (windows) computer to a remote system over ssh [closed] |
1,334,477,754,000 |
I have a folder with more than 30 subdirectories and I want to get the list of the files which were modified after a specified date (say Sep 8, which is the real case) and have them copied with the same tree structure, keeping only the modified files in each folder.
From those ~30 directories I already have the list of the files I need, found using the last-modified date.
Find command output
a/a.txt
a/b/b.txt
a/www.txt
etc..
For example I want the folder "a" created and only a.txt in it... likewise for the others: "a/b" to be created and b.txt inside it...
|
Assuming you have your desired files in a text file, you could do something like
while IFS= read -r file; do
mkdir -p /target/"${file%/*}"
cp /source/"$file" /target/"$file"
done < files.txt
That will read each line of your list, extract the directory and the file name, create the directory and copy the file. You will need to change source and target to the actual parent directories you are using. For example, to copy /foo/a/a.txt to /bar/a/a.txt, change source to foo and target to bar.
I can't tell from your question whether you want to copy all directories and then only specific files or if you just want the directories that will contain files. The solution above will only create the necessary directories. If you want to create all of them, use
(cd /source && find . -type d -exec mkdir -p /target/{} \;)
That will create the directories. Once those are there, just copy the files:
while IFS= read -r file; do
cp /source/"$file" /target/"$file"
done
Update
This little script will move all the files modified after September 8. It assumes the GNU versions of find and touch. Assuming you're using Linux, that's what you will have.
#!/usr/bin/env bash
## Create a file to compare against.
tmp=$(mktemp)
touch -d "September 8" "$tmp"
## Define the source and target parent directories
source=/path/to/source
target=/path/to/target
## move to the source directory
cd "$source"
## Find the files that were modified more recently than $tmp and copy them
find ./ -type f -newer "$tmp" -print0 |
while IFS= read -rd '' file; do
path=${file%/*}
mkdir -p "$target"/"$path"
cp "$file" "$target"/"$path"
done
Strictly speaking, you don't need the tmp file. However, this way, the same script will work tomorrow. Otherwise, if you use find's -mtime, you would have to calculate the right date every day.
Another approach would be to first find the directories, create them in the target and then copy the files:
Create all directories
find ./ -type d -exec mkdir -p ../bar/{} \;
Find and copy the relevant files
find ./ -type f -newer "$tmp" -exec cp {} ../bar/{} \;
Remove any empty directories
find ../bar/ -depth -type d -empty -delete
| Clone directory tree structure and copy files to the corresponding directories modified after a specific date |
1,334,477,754,000 |
cp --reflink=auto shows the following output on macOS:
cp: illegal option -- -
Is copy-on-write or deduplication supported for HFS? How can I COW huge files with HFS?
|
Apple's new APFS filesystem supports copy-on-write; CoW is automatically enabled in Finder copy operations where available, and when using cp -c on the command line.
Unfortunately, cp -c is equivalent to cp --reflink=always (not auto), and will fail when copy-on-write is not possible with
cp: somefile: clonefile failed: Operation not supported
I'm not aware of a way to get auto behavior. You could make a shell script or function with automatic fallback a la
cpclone() { cp -c "$@" || cp "$@"; }
but it'll be difficult to make it entirely reliable for all edge cases.
| cp --reflink=auto for MacOS X |
1,334,477,754,000 |
I need to copy all files and directories from a source, let's say /var/www/html/test/, to a destination, /var/www/html/test2/. The destination can already have extra files and folders which I need to remove after copying the files from the source.
I cannot delete everything from destination before copying it.
UPDATE
I tried following :
1) Copied the files from source to destination using the cp command
cp -R source destination
which working fine.
2) I tried to iterate over all the files in the destination to check whether each file exists in the source. If not, remove the file from the destination
for file in /var/www/html/test2/*;
do filestr=`basename $file`;echo $file;
if [ `ls /var/www/test1/ | grep -c $filestr` -eq 0 ];
then rm $file; fi;
done;
which working fine for the root files in the destination only.
I need to find out how to recursively check whether all files and directories match the source or not.
|
#!/bin/bash
SOURCE="/var/www/html/test/"
DESTINATION="/var/www/html/test2/"
cp -pRu "$SOURCE"* "$DESTINATION"
HITSDIR=`find $DESTINATION -type d | sed -e 's|'$DESTINATION'\(.*\)|\1|g'`
for i in $HITSDIR; do
if [ -e "$SOURCE$i" ]; then
echo Yes $SOURCE$i exists
else
echo Nope delete $DESTINATION$i.
#rm -r $DESTINATION$i
fi
done
HITSFILES=`find $DESTINATION -type f | sed -e 's|'$DESTINATION'\(.*\)|\1|g'`
for i in $HITSFILES; do
if [ -e "$SOURCE$i" ]; then
echo Yes $SOURCE$i exists
else
echo Nope delete $DESTINATION$i.
#rm $DESTINATION$i
fi
done
This should do what you want, just uncomment the rm once you did a dry run.
| Copy whole folder from source to destination and remove extra files or folder from destination |
1,334,477,754,000 |
I want to copy a directory into another directory.
For example, cp -r dir1 dir2 copies the contents of dir1 into dir2. I want to copy dir1 itself into dir2 so that if I ls dir2 it will output dir1 and not whatever was inside of dir1.
|
Just do as you did:
cp -r dir1 dir2
and you will have dir1 (with its content as well) inside dir2. Try if you don't believe ;-).
The command that would copy content of dir1 into dir2 is:
cp -r dir1/* dir2
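A quick self-contained demonstration of the difference, run in a scratch directory:

```shell
mkdir -p demo/dir1 demo/dir2
touch demo/dir1/file
cp -r demo/dir1 demo/dir2       # dir1 itself ends up inside dir2
cp -r demo/dir1/* demo/dir2     # only dir1's contents end up in dir2
ls demo/dir2                    # dir1 and file are both listed now
```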
| Copy directory not just the contents |
1,334,477,754,000 |
I often need to move files between two Linux computers via USB. I use gparted to format the USB drives. When I formatted the USB to use FAT32, the USB was unable to hold symlinks, so I had to recreate the symlinks on the other computer after copying the files. When I formatted the USB to use EXT3, a lost+found directory was created on the USB, and the filesystem prevented me from copying files to the USB unless I became root.
Is there a preferred file system to use when transferring files between two Linux computers?
How can I copy files without running into the problems presented by the FAT32 and EXT3 filesystems?
|
What I do is to store tarballs on the USB drive (formatted as VFAT). I'm wary of reformatting USB drives: they are built/optimized for VFAT so as to level wear, and I'm afraid a drive will die much sooner with other filesystems. Besides, formatting another way will make it useless for ThatOtherSystem...
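The tarball approach also sidesteps the symlink problem from the question, since symlinks are stored inside the archive and never touch the FAT filesystem directly; a small sketch with made-up file names:

```shell
# sample tree with a symlink
mkdir -p music
echo data > music/song.ogg
ln -s song.ogg music/latest
# pack on the source machine; backup.tgz can sit on a FAT32 USB drive
tar -czf backup.tgz -C music .
# unpack on the destination machine
mkdir -p restored
tar -xzf backup.tgz -C restored
```

After extraction the symlink comes back intact, with no manual recreation needed.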
| What filesystem should be used when transferring files between Linux systems? |
1,334,477,754,000 |
Is there a way to do this: "How can I copy a subset of files from a directory while preserving the folder structure?", but without using rsync?
Is it possible to be done using only cp, find and grep?
I have a very limited bash shell (i.e. it's a Git shell under Windows :( ), so rsync is not an option.
I know I can get cygwin, but I was wondering what I can do in this limited situation.
|
You can use tar or cpio or pax (if any of these is available) to copy certain files, creating target directories as necessary. With GNU tar, to copy all regular files called *.txt or README.* underneath the current directory to the same hierarchy under ../destination:
find . -type f \( -name '*.txt' -o -name 'README.*' \) |
tar -cf - -T - |
tar -xf - -C ../destination
With just find, cp, mkdir and the shell, you can loop over the desired files with find and launch a shell command to copy each of them. This is slow and cumbersome but very portable. The shell snippet receives the destination root directory as $0 and the path to the source file as $1; it creates the destination directory tree as necessary (note that directory permissions are not preserved by the code below) then copies the file. The snippet below works on any POSIX system and most BusyBox installations.
find . -type f \( -name '*.txt' -o -name 'README.*' \) -exec sh -c '
mkdir -p "$0/${1%/*}";
cp -p "$1" "$0/$1"
' ../destination {} \;
You can group the sh invocations; this is a little complicated but may be measurably faster.
find . -type f \( -name '*.txt' -o -name 'README.*' \) -exec sh -c '
for x; do
mkdir -p "$0/${x%/*}";
cp -p "$x" "$0/$x";
done
' ../destination {} +
If you have bash ≥4 (I don't know whether Git Bash is recent enough), you don't need to call find, you can use the ** glob pattern to recurse into subdirectories.
shopt -s globstar extglob
for x in **/@(*.txt|README.*); do
d=${x%/*}
[ "$d" = "$x" ] || mkdir -p "../destination/$d"
cp -p -- "$x" "../destination/$x"
done
| How to copy only matching files, preserving subdirectories |
1,334,477,754,000 |
How would it be possible to copy all new contents from one directory to another so that only new files are copied from the source directory (both directories have the same naming tree). For example, here is the layout of directory A:
/dirA
a.php
b.txt
subdirA1/
readme.txt
config
source_file1.c
/dirB
c.php
subdirA1/
readme.txt
at the end dirB should have all the new files in dirA. Assume that there are only new files in dirA and its sub directories. The result should be the union of the two directories:
/dirB
a.php
b.txt
c.php
subdirA1/
readme.txt
config
source_file1.c
I have tried using cp -ra:
cp -ra dirA/* dirB/
but dirB is completely overwritten by dirA.
|
rsync was designed exactly to solve this problem:
[$]> rsync -av --ignore-existing dirA/ dirB/
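If rsync is not available, GNU cp's -n/--no-clobber option gives a similar result; note that recent coreutils versions make cp -n exit non-zero when it skips files, hence the || true in this self-contained sketch:

```shell
mkdir -p dirA/subdirA1 dirB/subdirA1
echo new > dirA/a.php
echo original > dirA/subdirA1/readme.txt
echo mine > dirB/subdirA1/readme.txt
# dirA/. (rather than dirA/*) also picks up dot files;
# -n skips anything that already exists in the destination
cp -rn dirA/. dirB/ || true
```

Files that already exist in dirB are left untouched, while new files from dirA are copied over.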
| How to copy files from one directory to another so that only new files are copied? [duplicate] |
1,334,477,754,000 |
I'm trying to write a Makefile to install the content of my folder into another on the system.
I would like to keep the same directory structure, like this.
localfolder
├── a
└── b
├── c
└── d
├── e
└── f
I tried different options, but none of them works:
$ install -d localfolder /opt/folder
(does nothing)
$ install -t localfolder /opt/folder
install: omitting directory '/opt/folder'
$ install -D localfolder /opt/folder
install: omitting directory 'localfolder'
Can anyone point me in the right direction? Googling 'linux install command' does not bring up any pertinent information.
Thanks!
|
For those who want a solution, here you go:
The install command doesn't work recursively, so I wrote a shell script that does the trick.
The first argument is the folder you want to copy, and the second is the target directory
#!/bin/sh
# Program to use the command install recursivly in a folder
magic_func() {
echo "entering ${1}"
echo "target $2"
for file in $1; do
if [ -f "$file" ]; then
echo "file : $file"
echo "installing into $2/$file"
install -D "$file" "$2/$file"
elif [ -d "$file" ]; then
echo "directory : $file"
magic_func "$file/*" "$2"
else
echo "not recognized : $file"
fi
done
}
magic_func "$1" "$2"
It is also available as a gist here
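For the record, the same can be done without a recursive function by letting find drive install's -D option (which creates missing parent directories); sketched here with a throw-away tree and a hypothetical target directory standing in for /opt/folder:

```shell
# throw-away tree standing in for "localfolder"
mkdir -p localfolder/b/d
echo a > localfolder/a
echo c > localfolder/b/c
echo e > localfolder/b/d/e
dest=$PWD/opt-folder   # hypothetical stand-in for /opt/folder
# -D creates all missing parents of each destination path
(cd localfolder && find . -type f -exec sh -c \
    'install -D -m 644 "$1" "$0/${1#./}"' "$dest" {} \;)
```

Each file found is installed at the same relative path under $dest, with the directory structure created on demand.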
| Install the content of a folder into another |
1,334,477,754,000 |
I have a machine to which files are uploaded by FTP. From this machine I would like to run a cronjob and scp/rsync (simply copy) them to a different machine on the same network.
The problem is I don't want to copy files which are not complete (still being transferred).
Is there a possibility to check whether a file is complete and only then copy to another server?
|
You can use lsyncd:
Lsyncd watches a local directory tree's event monitor interface
(inotify or fsevents). It aggregates and combines events for a few
seconds and then spawns one (or more) process(es) to synchronize the
changes. By default this is rsync.
You can specify the time out after which a file has changed is to be synced. Set it to e.g. five times the typical upload time and you're probably fine.
| Operations only on complete files [duplicate] |
1,334,477,754,000 |
I'd like to copy a firmware update file to my Canon 7D camera, connected via USB.
After it was auto-mounted by thunar + thunar-volman + gvfs-gphoto2 I tried the following:
$ cp eos7d-v205-win/7D000205.FIR /run/user/1000/gvfs/gphoto2\:host\=%5Busb%3A001%2C012%5D/
$ echo $?
0
$ ls /run/user/1000/gvfs/gphoto2\:host\=%5Busb%3A001%2C012%5D/
DCIM MISC
So that went into a black hole.
The first time I try to copy it with Ctrl-c and Ctrl-v, Thunar prints the following error message when pasting the file:
Error writing file.
-108: No such file or directory.
Do you want to skip it?
If I try again after that it simply crashes:
$ thunar
Segmentation fault (core dumped)
$ echo $?
139
The Gphoto 2 shell has an undocumented put function which I also tried:
$ sudo umount /run/user/1000/gvfs
$ gphoto2 --shell
gphoto2: {.../eos7d-v205-win} /> help put
Help on "put":
Usage: put [directory/]filename
Description:
Upload a file
* Arguments in brackets [] are optional
So this function takes a single argument with an optional directory. Weird, but should be doable. Some attempts at making it work:
$ gphoto2 --shell
gphoto2: {.../eos7d-v205-win} /> ls
store_00010001/
gphoto2: {.../eos7d-v205-win} /> put 7D000205.FIR
*** Error ***
You need to specify a folder starting with /store_xxxxxxxxx/
*** Error (-1: 'Unspecified error') ***
gphoto2: {.../eos7d-v205-win} /> put /store_00010001/7D000205.FIR
*** Error ***
PTP Access Denied
*** Error (-1: 'Unspecified error') ***
gphoto2: {.../eos7d-v205-win} /> put /store_00010001/MISC/7D000205.FIR
*** Error ***
PTP Access Denied
*** Error (-1: 'Unspecified error') ***
Maybe it's not supported?
Digikam has an upload feature, but that just reported 'Failed to upload file "7D000205.FIR".' Running it from the shell produced no more information.
man gvfs-copy doesn't explicitly say it can't copy to a camera, but I can't figure out how:
$ gvfs-copy "file://${HOME}/7D000203.FIR" /run/user/1000/gvfs/gphoto2\:host\=%5Busb%3A004%2C006%5D/
Error copying file file:///[...]/7D000203.FIR: Error writing file: -1: Unspecified error
$ gvfs-copy "file://${HOME}/7D000203.FIR" file:///run/user/1000/gvfs/gphoto2\:host\=%5Busb%3A004%2C006%5D/
Error copying file file:///[...]/7D000203.FIR: Error opening file '/run/user/1000/gvfs/gphoto2:host=[usb:004,006]': No such file or directory
$ gvfs-copy "file://${HOME}/7D000203.FIR" gphoto2://host\=%5Busb%3A004%2C006%5D/
Error copying file file:///[...]/7D000203.FIR: The specified location is not mounted
gphoto2 says it should be possible to upload files to the camera:
$ gphoto2 --port usb: --abilities
Abilities for camera : Canon EOS 7D
Serial port support : no
USB support : yes
Capture choices :
: Image
: Preview
Configuration support : yes
Delete selected files on camera : yes
Delete all files on camera : no
File preview (thumbnail) support : yes
File upload support : yes
The gphoto2 manual says it supports "uploading" files. It doesn't work. Trying a command reported as working elsewhere:
$ gphoto2 --upload-file 7D000203.FIR --folder /store_00010001
*** Error ***
PTP Access Denied
*** Error (-1: 'Unspecified error') ***
For debugging messages, please use the --debug option.
Debugging messages may help finding a solution to your problem.
If you intend to send any error or debug messages to the gphoto
developer mailing list <[email protected]>, please run
gphoto2 as follows:
env LANG=C gphoto2 --debug --debug-logfile=my-logfile.txt --upload-file 7D000203.FIR --folder /store_00010001
Please make sure there is sufficient quoting around the arguments.
After trying the debug command I get the following relevant log lines:
ptp_usb_getresp [usb.c:434] (0): PTP_OC 0x100c receiving resp failed: PTP Access Denied (0x200f)
put_file_func [library.c:5940](0): 'ptp_sendobjectinfo (params, &storage, &parent, &handle, &oi)' failed: 'PTP Access Denied' (0x200f)
gp_context_error (0): PTP Access Denied
gp_camera_folder_put_file [gphoto2-camera.c:1248](0): 'gp_filesystem_put_file (camera->fs, folder, filename, type, file, context)' failed: -1
gp_camera_free (2): Freeing camera...
gp_camera_exit (2): Exiting camera ('Canon EOS 7D')...
ptp_usb_sendreq (2): Sending PTP_OC 0x1003 / Close session request...
gp_port_write (3): Writing 12 = 0xc bytes to port...
gp_port_write (3): Wrote 12 = 0xc bytes to port: (hexdump of 12 bytes)
0c 00 00 00 01 00 03 10-0c 00 00 00 ............
Too many WTFs per minute. What do I need to do to copy a file to my camera in Arch Linux?
In case it's relevant: I tried copying the file on Windows 7, and it also fails:
You do not have permission to create this item.
|
I'm not sure you will like this answer, but, in my experience too, using PTP has always caused a high WTF/min. Presumably the camera itself restricts writing in the root folder, or something equally sensical.
I would suggest getting your hands on a CompactFlash reader, mounting the filesystem directly, and using that type of access to copy your firmware file to the card's root folder (e.g., mount /dev/sdc1 /mnt/camera and then cp eos7d-v205-win/7D000205.FIR /mnt/camera/).
I've found card readers to feel a whole lot faster than PTP, presumably because the former can benefit from your computer's filesystem read-ahead capabilities, while PTP doesn't, so I consider them a worthy (and small) expense.
| How to copy files *to* a camera? |
1,334,477,754,000 |
Please excuse me if this is too basic and you're tempted to throw an RTFM at me.
I want to prevent users from copying certain files while granting them read access to the same files. I thought this was impossible until I came across this example in the SELinux Wiki:
allow firefox_t user_home_t : file { read write };
So I was thinking, is it possible to give the files in question a mode of 0700 for instance, and use SELinux to grant read access only to the application that the users will normally be using to read the files?
Again, I'm sorry if this is too basic, it's just that I'm on a tight schedule and I want to give an answer to my boss one way or the other (if it's possible or not) as soon as possible and I know nothing about SELinux so I'm afraid reading on my own to determine whether it's possible or not would take me too much time. Please note that I'm not averse to reading per se and would hugely appreciate pointers to the relevant documentation if it exists.
So basically, my question is, is there a way to do this in SELinux or am I wasting my time pursuing such an alternative?
P.S. I'm aware that granting read access can allow users who are really intent on copying the files to copy and paste them from within the application they'll read them with; I'm just looking for a first line of defense.
EDIT
To better explain my use case:
The files in question are a mixture of text and binaries.
They need to be read by proprietary commercial software: they are simulation models for an electronics simulation software.
These models are themselves proprietary and we don't want the users simulating with them leaking them out for unauthorized use.
The software only needs to read the models and run a few scripts from these files; it will not write their contents anywhere.
In short, I want only the simulation software to have read and execute access to these files while preventing read access for the users.
|
I think it's important to note that the cat isn't the problem in my comment above, but shell redirection. Are you trying to restrict copying of binaries or text files? If it's binaries, then I believe you can work something out with rbash (see http://blog.bodhizazen.net/linux/how-to-restrict-access-with-rbash/).
However, if it's text files, I'm not sure how you can prevent someone from just copying from their local terminal.
I'm not sure any general SELinux solution would be helpful here. Does your application that reads files need to write data anywhere? If not, and these files only need to be read by your application, you could just give your application's SELinux type read-only access to files of the type you would like it to read, and not grant it write access anywhere.
I think some more information on the exact permissions required by your use-case might be helpful, sorry for the vague answer.
UPDATE - MORE SPECIFIC ANSWER
I think you can achieve what you want without SELinux, as this is how many things are handled (e.g. normal users changing their password in /etc/shadow via the passwd command):
Make a separate user and/or group for your commercial software (might already be done)
Give the files read-only access by said user and/or group
Make sure normal users do not have access to those files
Make your executable setuid or setgid (depending on whether you used a user or a group), e.g. chmod u+s or chmod g+s
When users run the application, they will now have the same permissions that the application user or group has, thereby allowing read access to those specific files within the desired application.
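Concretely, the steps above could be sketched like this; the group name simgrp and the /opt/sim paths are hypothetical placeholders, and the runnable part below only demonstrates the setgid permission bit (the real groupadd/chgrp/chmod setup needs root):

```shell
#!/bin/sh
# Real setup (as root, with hypothetical names) would be roughly:
#   groupadd simgrp
#   chgrp -R simgrp /opt/sim/models && chmod -R o-rwx /opt/sim/models
#   chgrp simgrp /opt/sim/bin/simulate && chmod g+s /opt/sim/bin/simulate
# Below, the setgid permission bit itself is demonstrated on a scratch file.
f=$(mktemp)
chmod 750 "$f"       # rwxr-x--- : "other" users get nothing
chmod g+s "$f"       # setgid: a process exec'ing this runs with the file's group
stat -c '%A' "$f"    # prints -rwxr-s--- ("s" marks the setgid bit)
rm -f "$f"
```

When the simulation software is started, it then runs with the group simgrp and can read the models, while a user's ordinary shell cannot.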
UPDATE2 - MULTIPLE USERS AND GROUPS
If you have multiple applications and groups, you can likely achieve the functionality you are looking for with sudo. Many people are aware of its ability to let you run commands as root, but its usefulness goes far beyond that example. I'm not sure this is an ideal setup, but it's one way to do what you're attempting.
You can still make all the application files owned by the application, but then you can make separate groups for each set of files.
This is what your /etc/sudoers or a file in /etc/sudoers.d/ could look like:
User_Alias FILEGROUP1 = user1, user2
User_Alias FILEGROUP2 = user3, user4
Cmnd_Alias MYEDITOR = /usr/local/bin/myeditor, /usr/local/bin/mycompiler
FILEGROUP1 ALL=(:fileset1) NOPASSWD: MYEDITOR
FILEGROUP2 ALL=(:fileset2) NOPASSWD: MYEDITOR
Where user1 and user2 need access to files owned by the group fileset1 and user3 and user4 need access to files owned by the group fileset2. You could also use groups instead of users.
The users could access their files through the editor by doing sudo -g fileset1 /usr/local/bin/myeditor or something similar.
It might help to create some wrapper scripts for the necessary sudo -g commands for your users, especially since it sounds like it may be a graphical application.
More details:
http://www.garron.me/linux/visudo-command-sudoers-file-sudo-default-editor.html
https://serverfault.com/questions/166254/change-primary-group-with-sudo-u-g
http://linux.die.net/man/8/sudo
| SELinux: Can I disable copying of certain files? |
1,334,477,754,000 |
I have two folders on two different servers.
I want to sync files between A and B; however, I only want to copy files that don't already exist in folder B, because these files are huge. I don't care about updating files. I simply want a copy of each in folder B.
How do I do this on Linux? (I suppose it'd be nice to know how to update files that have changed too)
|
rsync is able to do this.
rsync --ignore-existing <src> <dest>
You can also perform various kinds of updates. Just have a look at the man page.
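As a side note, when both folders are local and rsync isn't available, GNU cp's -n (--no-clobber) option behaves similarly, skipping anything that already exists at the destination. A throwaway demonstration (all paths are scratch files, not from the question):

```shell
#!/bin/sh
# Demo: files already present in dst/ are left untouched; new ones are copied.
tmp=$(mktemp -d)
mkdir "$tmp/src" "$tmp/dst"
echo new     > "$tmp/src/a.txt"
echo keep-me > "$tmp/dst/a.txt"       # already exists at the destination
echo fresh   > "$tmp/src/b.txt"
cp -n "$tmp/src/"* "$tmp/dst/"        # -n: never overwrite an existing file
cat "$tmp/dst/a.txt"                  # keep-me (not overwritten)
cat "$tmp/dst/b.txt"                  # fresh (newly copied)
rm -rf "$tmp"
```

Unlike rsync, this only works for a one-shot local copy and cannot resume partial transfers over a network.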
| How do I only copy files to a remote folder on another server that don't already exist in the folder... from the command line in linux? |
1,334,477,754,000 |
In a directory, I have hundreds of sub-directories. Each sub-directory has hundreds of jpg pictures. If the folder is called "ABC_DEF", the files inside the folder will be called "ABC_DEF-001.jpg", "ABC_DEF-002.jpg", so on and so forth.
For example:
---Main Directory
------Sub-Directory ABC_DEF
----------ABC_DEF-001.jpg
----------ABC_DEF-002.jpg
------Sub-Directory ABC_GHI
----------ABC_GHI-001.jpg
----------ABC_GHI-002.jpg
From each of the sub-directories I want to copy only the first file, i.e. the file whose name ends in -001.jpg, to a common destination folder called DESTDIR.
I have changed the code given here to suit my use case. However, it always prints the first directory along with the filenames and I am not able to copy the files to the desired destination. The following is the code:
DIR=/var/www/html/beeinfo.org/resources/videos/
find "$DIR" -type d |
while read d;
do
files=$(ls -t "$d" | sed -n '1h; $ { s/\n/,/g; p }')
printf '%s,%s\n' "$files";
done
How can I fix this code?
|
Why find when all files are in directories of the same depth?
cd -- "$DIR" &&
cp -- */*-001.jpg /destination/path
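To convince yourself this does what the question asks, the layout can be rebuilt in a scratch directory (DESTDIR here is just a stand-in for the real destination folder):

```shell
#!/bin/sh
# Recreate the layout from the question in a throwaway directory.
tmp=$(mktemp -d)
mkdir -p "$tmp/ABC_DEF" "$tmp/ABC_GHI" "$tmp/DESTDIR"
touch "$tmp/ABC_DEF/ABC_DEF-001.jpg" "$tmp/ABC_DEF/ABC_DEF-002.jpg" \
      "$tmp/ABC_GHI/ABC_GHI-001.jpg" "$tmp/ABC_GHI/ABC_GHI-002.jpg"
cd "$tmp" && cp -- */*-001.jpg DESTDIR/   # the one-liner from the answer
ls DESTDIR                                # only the two *-001.jpg files
cd / && rm -rf "$tmp"
```

The glob `*/*-001.jpg` matches exactly one file per sub-directory, and since those filenames already carry the folder name, no extra renaming is needed.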
| Shell Script: Copy first file from multiple folders into one single folder |
1,334,477,754,000 |
I am trying to copy a large folder structure between machines. I want to maintain the ownership/rights during the copy as it is not reasonable to 'fix' the privs afterwards.
Therefore, I am using the following command to tar the file with privs intact and transfer the data to the destination machine. The same users exist on both machines.
tar cfzp - foldertocopy | ssh me@machine "cat > /applications/incoming/foldertocopy.tar.gz"
The transfer works fine, and the next step is to su to root on the remote machine and untar the file.
The problem is: there isn't enough disk space to store both the compressed and uncompressed data at the same time.
I could use rsync/recursive scp but my user doesn't have the rights to create the files with the correct privs itself and root can't log in remotely.
What are my options? The source machine is RHEL4 and the destination is RHEL5.
|
As root, set up a named pipe:
# mkfifo /tmp/fifo
# chmod o+w /tmp/fifo
Then, transfer your data as me:
$ tar cfzp - foldertocopy | ssh me@machine "cat > /tmp/fifo"
But read it as root:
# tar -xzpf /tmp/fifo
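The whole relay can be rehearsed locally before involving ssh; here both ends run as the same user, though the extracting end could just as well be a root shell:

```shell
#!/bin/sh
# Local demo of the fifo relay: one process writes a tar stream into the
# fifo, another extracts from it, with no intermediate archive on disk.
tmp=$(mktemp -d)
mkdir "$tmp/src" "$tmp/out"
echo hello > "$tmp/src/file.txt"
mkfifo "$tmp/fifo"
( cd "$tmp" && tar -czpf - src > fifo ) &     # "sender" side (would be ssh+cat)
( cd "$tmp/out" && tar -xzpf "$tmp/fifo" )    # "receiver" side (would be root)
wait
cat "$tmp/out/src/file.txt"                   # hello
rm -rf "$tmp"
```

Both sides block on the fifo until the other end opens it, so the data flows straight from the sender's tar into the receiver's tar without touching the disk.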
| Copying large tree from one machine to another, maintaining ownership |
1,334,477,754,000 |
Copying the files manually from a shell works, as shown below:
userx@x:~$ cp -rv /opt/test-bak/* /opt/test/
'/opt/test-bak/file1' -> '/opt/test/file1'
'/opt/test-bak/file2' -> '/opt/test/file2'
'/opt/test-bak/file3' -> '/opt/test/file3'
'/opt/test-bak/subdir1/subfile1' -> '/opt/test/subdir1/subfile1'
'/opt/test-bak/subdir2/subfile2' -> '/opt/test/subdir2/subfile2'
However, installing it as a system service returns the "cannot stat '/opt/test-bak/*': No such file or directory" error
● testcopy.service - test usage of /bin/cp in systemd
Loaded: loaded (/etc/systemd/system/testcopy.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2019-04-21 14:55:16 +08; 4min 28s ago
Process: 7872 ExecStart=/bin/cp -rv /opt/test-bak/* /opt/test/ (code=exited, status=1/FAILURE)
Main PID: 7872 (code=exited, status=1/FAILURE)
Apr 21 14:55:15 userx@x systemd[1]: Started test usage of /bin/cp in systemd.
Apr 21 14:55:15 userx@x cp[7872]: /bin/cp: cannot stat '/opt/test-bak/*': No such file or directory
Apr 21 14:55:16 userx@x systemd[1]: testcopy.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 14:55:16 userx@x systemd[1]: testcopy.service: Unit entered failed state.
Apr 21 14:55:16 userx@x systemd[1]: testcopy.service: Failed with result 'exit-code'.
My service file as below:
[Unit]
Description=test usage of /bin/cp in systemd
[Service]
Type=simple
ExecStart=/bin/cp -rv /opt/test-bak/* /opt/test/
[Install]
WantedBy=multi-user.target
Journal shows the below
Apr 21 15:05:12 x systemd[1]: Started test usage of /bin/cp in systemd.
Apr 21 15:05:12 x cp[9892]: /bin/cp: cannot stat '/opt/test-bak/*': No such file or directory
Apr 21 15:05:12 x systemd[1]: testcopy.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 15:05:12 x systemd[1]: testcopy.service: Unit entered failed state.
Apr 21 15:05:12 x systemd[1]: testcopy.service: Failed with result 'exit-code'.
Can anyone shed some light on this?
|
When using filename globbing patterns on the command line, it's the shell that expands the globbing patterns to filenames that match them, creating a list of pathnames that is then passed on to the utility that you are calling (cp here).
The command that you specify with ExecStart in your service file will not run in a shell. This means that the filename globbing pattern * will not be expanded and that cp will be called with /opt/test-bak/* as the single literal source path to copy.
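The difference is easy to reproduce outside systemd: without a shell, the * survives as a literal argument, while sh -c expands it first. A small demonstration (using echo instead of the unit's cp):

```shell
#!/bin/sh
# Show what an ExecStart-style command sees with and without a shell.
tmp=$(mktemp -d)
touch "$tmp/a" "$tmp/b"
/bin/echo "$tmp"/'*'          # literal: the single "path" cp was given
/bin/sh -c "echo $tmp/*"      # shell-expanded: the two real paths
rm -rf "$tmp"
```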
You could try wrapping your command in an in-line shell script:
ExecStart=/bin/sh -c '/bin/cp -rv /opt/test-bak/* /opt/test/'
Or, wrap your command in a short shell script,
#!/bin/sh
/bin/cp -rv /opt/test-bak/* /opt/test/
and then call that instead.
Since I know virtually nothing about systemd, there may be better ways to do this though.
Note that if the * glob does not match anything (because the directory is empty), then you would have the same issue as before. An unmatched globbing pattern will by default be left unexpanded.
Personally, I would use either
cd /opt/test-bak && /bin/cp -Rp -v . /opt/test
or
rsync -ai /opt/test-bak/ /opt/test
Since neither of these rely on the shell doing filename generation, they could both run without a wrapping shell. Not relying on a shell glob would also, in this case, ensure that hidden files and directories in test-bak will get copied.
| systemd and copy (/bin/cp): no such file or directory |
1,334,477,754,000 |
I have a very large video file on a USB external hard disk. It is about 13GB. I can play the video directly, and it seems there's no problem. But if I try to copy the file to other places, I got a strange error and the USB device is disconnected automatically, then connect back again.
I tried copying from KDE, with cp, and with rsync; no luck. I am running out of ideas. I have never seen this kind of problem before.
P.S.
The file is on a LVM partition.
I don't have the error message now, but I remember it was something like
Failed to read block ...
|
You could try your luck with ddrescue. It's usually used for whole disks or partitions, but it also works with single files. It keeps a logfile for retries.
ddrescue /source/your_video.avi /target/your_copy.avi /target/your_copy.logfile
If the disc vanishes in this process, just remount it and start the command again, and it should resume where it left off.
ddrescue also has a bunch of options, use info ddrescue for a manual and more detailed usage examples.
If you have a disk to spare, making a whole disk copy might give better results. It depends on what's damaged exactly - the file itself or just filesystem metadata.
| How to copy a very large video file with error in it? |
1,334,477,754,000 |
I'm doing a file transfer using sftp. Using the get -r folder command, I'm surprised by the order in which the program downloads the content.
It looks as if it selects the files to download at random. I can't believe that this is actually the case, so I'm asking myself what's going on behind the scenes.
What's the order that sftp follows when downloading a folder with its content?
From what I can see so far, it is not by name nor by size.
|
When you list the directory contents with the ls command, it will sort the listing into alphanumeric order according to current locale's sorting rules by default. It is easy to assume that this is the "natural order" of things within the filesystem - but this isn't true.
Most filesystems don't sort their directories in any way: when adding a new file to a directory, the new file basically gets the first free slot in the directory's metadata structure. Sorting is only done when displaying the directory listing to the user. If a single directory has hundreds of thousands or millions of files in it, this sorting can actually require non-trivial amounts of memory and processing power.
When the order in which the files are processed does not matter, the most efficient way is to just read the directory metadata in order and process the files in the order encountered without any explicit sorting. In most cases this would mean the files will be processed basically in the order they were added to the directory, interspersed with newer files in cases where an old file was deleted and a later-added file reclaimed its metadata slot.
Some filesystems might use tree structures or something else in their internal design that might enforce a particular order for their directory entries as a side effect. But such an ordering might be based on inode numbers of the files or some other filesystem-internal detail, and so would not be guaranteed to be useful for humans for any practical purpose.
As @A.B said in the question comments, a find command or a ls -f or ls --sort=none would list the files without any explicit sorting, in whatever order the filesystem stores its directories.
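The sorted-versus-raw distinction can be seen locally; only the sorted listing is predictable, since the raw order depends on the filesystem:

```shell
#!/bin/sh
# ls sorts its listing for display; ls -f shows the raw directory order.
tmp=$(mktemp -d)
touch "$tmp/banana" "$tmp/apple" "$tmp/cherry"
ls "$tmp"                      # sorted: apple, banana, cherry
ls -f "$tmp" | grep -v '^\.'   # raw order: whatever the filesystem stored
rm -rf "$tmp"
```

The second listing often comes out in creation order on ext4 and friends, but nothing guarantees that, which is why an sftp recursive download can look "random".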
| In what order does sftp fetch files when using "get -r folder"? |
1,334,477,754,000 |
I'm finding myself in a terminal more and more often these days as I learn to do certain types of things quicker or more conveniently.
However, when it comes to copying a large amount of data (i.e. hundreds of gigabytes) from one HDD to another, I always revert to the GUI (Nautilus or Finder in my case; the file systems are ext4 or HFS+).
What I have in mind is the initial copying of data to a new larger HDD that's replacing an older one, or to an external back-up HDD.
Are there any tangible benefits to be had using terminal commands in this setting? If so, what are they?
EDIT
Sometimes with these large GUI copies it'll get tripped up somewhere along the way due to a corrupt file or for some other reason. I guess I was wondering if terminal commands, rather than the GUI method, can avoid this problem. It's often quite difficult to determine where the GUI copy has got to, where to resume, and which files are causing the issues.
To my eyes at least, these copies seem a little bit random as to where they start and end.
|
I don't really see a difference between copying many files and other tasks; usually what makes the command line more attractive is
simple tasks which are trivial enough for you to do on the command line, so that using the GUI would be a waste of time (faster to type a few characters than click in menus, if you know what characters to type);
very complex tasks which the GUI just isn't capable of doing.
There's another benefit I see to the command line in one very specific circumstance. If you're performing a very long operation, like copying many files, and you may want to check the progress while logged into your machine remotely, then it's convenient to see the task's progress screen. Then it's convenient to run the task in a terminal multiplexer like Screen or Tmux. Start Screen, start the task inside Screen, then later connect to your machine with SSH and attach to that Screen session.
| Terminal command vs. GUI drag & drop when copying large no. of files: Any tangible benefit? |
1,334,477,754,000 |
I'm taking some early forays into setting up a basic LAMP box. It's my first time setting up the software I'll use as opposed to just being handed a working environment, so go easy on me :)
I have installed Apache, and the corresponding htdocs folder has permissions of drwxr-xr-x. I can copy from remote to local fine, but when trying to copy a small directory I get permission denied.
I should mention I am logging in using my own admin user account on the box, and of course htdocs is not owned by me.
So I figure, in my naivety, that I just need to sudo the command; that didn't work. Okay, next I'll "fix" the permissions to 774 based on what I read on the web. Nope, that did not work either. I am thinking: do I need to add write access to the third "user" class? That seems a weird one.
Then I read a forum thread where the guy was told that because the folder was root owned, he'd have to scp the files into his home/ dir on the remote host, then sudo cp them to the apache folder.
Seems a longwinded method to me, but before I try and do that, I thought I'd ask here whether that is true, and whether there were any best practices here and whether any of my assumptions were wrong?
Secondly - what is appropriate permissions for htdocs?
I'm still in the early stages and will probably eventually setup some FTP access, but I'd would be good to know.
|
There are many ways to skin this cat. Here are some for you to consider:
The htdocs tree almost certainly doesn't have to be owned by root. What matters is that it be readable by the Apache user. Depending on the *ix system in question, that may be apache, www-data, or something else. The default file mode you give above, drwxr-xr-x (abbreviated 755) is fine for this.
So, the question is, who should own this tree, and which group should it belong to. This may be enough:
$ sudo chown -R dan:apache /var/www
This says user dan owns /var/www and everything under it (-R, recursive) and that group apache has some permissions to it, too. If httpd is running as group apache, it probably gets enough permission to read files in the tree and change directories within it, sufficient for most sites.
Another way is to go with whatever permissions you have and simply tell scp to impersonate the owner of the /var/www/ tree:
mybox$ scp ~/site-mirror/index.html [email protected]:/var/www/htdocs
That copies the local copy of the root index.html file to the appropriate location on example.com, logging in as user www. You can use whatever user name and host name you need here. You just need the ability to login as the /var/www/ tree owner's user remotely. If you can't do that, consider going with option #1, at least to get things set up in a way that does allow you to scp files directly.
If you set up pre-shared keys for SSH, you won't even have to give a password.
Instead of scp, I recommend you use rsync for web site development:
mybox$ rsync -ave ssh --delete ~/site-mirror [email protected]:/var/www/htdocs
This mirrors the contents of ~/site-mirror on mybox (your local work machine) into /var/www/htdocs on example.com, logging in as user www. The advantage of using rsync over raw scp is that you don't have to copy and re-copy files that haven't changed. The Rsync algorithm computes the changes and sends only that.
| Permissions "problem" using SCP to copy to root owned folder from local |
1,334,477,754,000 |
I have two servers (A and B) and my local machine. I'm trying to transfer a file from server A to server B.
From server A:
scp ./backup.tar [email protected]:/home/public/
Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey, password).
lost connection
From server B:
scp [email protected]:/home/public/backup.tar .
Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey, password).
lost connection
Same error message when I try from my local computer. What's going on?
This is what I get when I try to ssh from Server A to Server B with the debug flag:
debug1: Authentications that can continue: publickey,password
debug1: Next authentication method: publickey
debug1: Trying private key: /home/private/.ssh/identity
debug1: Trying private key: /home/private/.ssh/id_rsa
debug1: Trying private key: /home/private/.ssh/id_dsa
debug1: Next authentication method: password
debug1: read_passphrase: can't open /dev/tty: No such file or directory
debug1: Authentications that can continue: publickey,password
Permission denied, please try again.
debug1: read_passphrase: can't open /dev/tty: No such file or directory
debug1: Authentications that can continue: publickey,password
Permission denied, please try again.
debug1: read_passphrase: can't open /dev/tty: No such file or directory
debug1: Authentications that can continue: publickey,password
debug1: No more authentication methods to try.
Permission denied (publickey,password).
Does this mean it can't find my terminal? I should mention that server B is a subdomain of server A. My hosting provider however sees them as completely different entities and they are not hosted on the same LPAR.
Conclusion
I've emailed my hosting provider and it seems that there is a small bug related to the version of ssh and the OS (FreeBSD). Currently, my workaround is to (1) scp the file from the first server to my local machine, then (2) scp it from my machine to the second server. This is what scp -3 is supposed to do, but that fails as well.
|
This looks like there is a problem with ssh configuration on the servers - you cannot ssh from any of them (probably for security reasons).
You can try Stephane's suggestion to do the transfer from your local machine (scp [email protected]:/home/public/backup.tar [email protected]:/home/public/). This should rule out the problem with taking input from a terminal (a restriction which might be purposely created on the servers).
If that doesn't help, it will mean that the provider probably disallows outgoing ssh connections. In that case, you'll be left with two options:
ask the provider to enable outgoing ssh connections
or
transfer the files through your local machine:
scp -3 [email protected]:/home/public/backup.tar [email protected]:/home/public/
| "Permission denied, try again" while transferring files with scp |
1,334,477,754,000 |
Every once in a while, I find the need to do:
cp /really/long/path/to/file.txt /totally/different/long/path/to/copy.txt
Since I use autojump, getting to the directories is very fast and easy. However, I'm at a loss when it comes to copying from one directory to the other without having to type out at least one of the full paths.
In a GUI filesystem navigator, this is easy: navigate to the first directory; Copy the original file; navigate to the second directory; and Paste. But with cp, it seems like I can't do the copy in two steps.
I'm looking to do something like the following:
(use autojump to navigate to the first directory)
$ copy file.txt
(use autojump to navigate to the second directory)
$ paste copy.txt
Instead of the longer-to-type:
(use autojump to navigate to the first directory)
$ cp file.txt /totally/different/long/path/to/copy.txt
Is there a tool that provides the functionality I'm looking for? I'm using Zsh on OS X El Capitan.
|
The below works in bash. I haven't tried it in zsh.
Try:
echo ~- # Just to make sure you know what the "last directory" is
Then:
cp file.txt ~-/copy.txt
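A self-contained illustration in bash (zsh behaves the same, since ~- expands to $OLDPWD in both shells); the scratch directories stand in for the two long paths:

```shell
#!/bin/bash
# ~- expands to $OLDPWD, the directory you were in before the last cd.
tmp=$(mktemp -d)
mkdir "$tmp/src" "$tmp/dst"
echo data > "$tmp/src/file.txt"
cd "$tmp/src"             # "autojump" to the first directory
cd "$tmp/dst"             # "autojump" to the second directory
cp ~-/file.txt copy.txt   # ~- is the first directory, no retyping needed
cat copy.txt              # data
cd / && rm -rf "$tmp"
```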
Also see:
More examples of use of ~- (and its interaction with pushd and popd)
Is it possible to name a part of a command to reuse it in the same command later on?
| How to cp in two steps |
1,334,477,754,000 |
I use the following command to synchronize two folders :
rsync -avhiu --progress --stats folder1/ folder2/
But unfortunately I have a bunch of files which differ only by their time stamps, and rsync transfers the whole file only to modify the time...
The man page of rsync says the following:
sending only the differences between the source files and the existing files in the destination
So I assume I'm doing something the wrong way. How can I make rsync copy only the time (when it is the only attribute changing, of course)?
|
The -W option is implied if you use rsync without copying to/from a remote system (i.e. only between two local folders):
-W, --whole-file
With this option rsync’s delta-transfer algorithm is not used
and the whole file is sent as-is instead. The transfer may be
faster if this option is used when the bandwidth between the
source and destination machines is higher than the bandwidth to
disk (especially when the "disk" is actually a networked
filesystem). This is the default when both the source and
destination are specified as local paths, but only if no
batch-writing option is in effect.
Try running with --no-whole-file or --no-W:
rsync -avhiu --no-whole-file --progress --stats folder1/ folder2/
| rsync update only timestamp |
1,334,477,754,000 |
Many times I find GNOME file copy GUI tool (Nautilus) irritating when it stops working. It happens when:
I cancel the copy or move
I try to copy to blue tooth exchange folder to my friends laptop(when he forgets to permit the operation or if the file is big)
some other times, it just hangs while copying
So I obviously get irritated with this and I want to quit the operation immediately. Unfortunately, whenever I try to cancel it, it never works. This has happened to me several times, so I tried to find the process with ps aux | grep copy (or cp, or something like that), but I'm never successful. Maybe it has become a zombie process, I guess.
|
As far as I know, there is no way to rescue Nautilus (file manager for GNOME) when it hangs. You are left only with the option of killing it, so go to the command line and run:
killall nautilus
After that, it should automatically restart, and then you can try again.
This is just a bug in it. Try to avoid copying several things in parallel, though I'm not sure if that's what triggers the behavior; parallel copying tends to be slower than serial copying anyway.
Note that Nautilus doesn't invoke the shell's copy commands, that's why your ps attempts didn't help. It uses different technology (GIO and/or GVFS).
| How to quit GNOME file copy GUI after it hangs |
1,334,477,754,000 |
For application specific reasons, I need to copy an entire parent directory into a subdirectory. E.g.,
cp -r ../ tmp/
The problem with this is that it gets stuck in an infinite loop, recursively copying the contents of tmp/ over and over again.
There are a number of ways around this, like tarring the directory and unpacking it in tmp, but I'm wondering if there are any particularly elegant/unixy solutions.
(note: I'm using Apple OS/X).
|
If you start from the parent directory, you can do this with find and GNU cp. Assuming the directory you're in currently (the one containing tmp) is called folder, and that tmp is empty, you'd run
cd ..
find . -path ./folder/tmp -prune -o -type f -exec cp --parents -t folder/tmp {} +
This asks find to list everything under . (which is your old ..), exclude it if it matches ./folder/tmp, and otherwise if it's a file pass it to cp with the --parents option (which tells cp to reconstruct the source hierarchy).
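Run against a scratch copy of the layout (GNU cp is required for --parents), the command behaves like this; here folder plays the role of the parent directory:

```shell
#!/bin/sh
# Scratch demo: "folder" is the parent directory being copied, and
# folder/tmp is the destination that must not be recursed into.
top=$(mktemp -d)
mkdir -p "$top/folder/tmp" "$top/folder/sub"
echo x > "$top/folder/file.txt"
echo y > "$top/folder/sub/inner.txt"
cd "$top"
find . -path ./folder/tmp -prune -o -type f -exec cp --parents -t folder/tmp {} +
find folder/tmp -type f | sort   # the source tree, rebuilt under folder/tmp
cd / && rm -rf "$top"
```

Because folder/tmp is pruned, the freshly copied files are never revisited, which is exactly what breaks the infinite loop.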
If tmp already has files which also need to be copied, the following variant is slightly less Unix-y (since it uses moreutils' sponge) but avoids skipping the contents of tmp:
cd ..
find . -type f -print0 | sponge | xargs -0 cp --parents -t folder/tmp
You could avoid the use of sponge by saving the list of files to copy in a file somewhere else (but then things get rather less elegant):
cd ..
find . -type f -print0 > /tmp/filelist
xargs -0 cp --parents -t folder/tmp < /tmp/filelist
rm /tmp/filelist
You can avoid the requirement on GNU cp by using cpio instead:
cd ..
find . -type f -print0 > /tmp/filelist
cpio -pmd0 folder/tmp < /tmp/filelist
rm /tmp/filelist
or
cd ..
find . -type f -print0 | sponge | cpio -pmd0 folder/tmp
If tmp is empty you can also avoid the requirement on moreutils:
cd ..
find . -path ./folder/tmp -prune -o -type f -print0 | cpio -pmd0 folder/tmp
| Copying a Parent Directory into a Subdirectory Without an Infinite Loop |
1,334,477,754,000 |
I'm trying to write a script to copy files recursively from a particular folder except files A.extn, B/*.extn and C/* where B and C are directories and extn is just some generic extension. This is what I have:
#!/usr/local/bin/zsh
setopt EXTENDED_GLOB
TMPDIR=/tmp/test
cp -pR $(dirname $0)/**~(*.foo/*|*/bar.txt|*.abc|qux.txt) $TMPDIR
However this doesn't do the negation of the pattern as expected. I think I do know why — although the pattern is correct (as seen with echo), cp -R is not aware of the pattern, and enters a directory that it is "not supposed to", and once in there, the pattern is no longer valid.
How do I modify the above to do what I want? I guess it is possible with find and xargs, but I'm drawn towards the clarity of the above and would prefer something similar (but if it's the wrong way to do it, I'd be perfectly happy with a different solution).
|
You are correct that the pattern is expanded before cp is run, so is unknown to that command.
You may be able to accomplish what you want by using the --parents option to cp rather than -R. That will only copy the files which match your pattern, but will use the full path name as supplied rather than only the trailing file name.
But, this option isn't portable. AFAIK, it's only supported by the GNU version of cp.
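A minimal sketch of what --parents does with the paths the pattern expands to, shown with one match spelled out explicitly since the glob itself only works in zsh (GNU cp assumed; file names invented):

```shell
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/a/b"
echo keep > "$src/a/b/keep.txt"
echo skip > "$src/a/b/skip.abc"

cd "$src"
# In zsh the **~(...) pattern would expand to the matching relative
# paths; here one such match is written out by hand. --parents makes
# cp recreate the full relative path under $dst instead of flattening.
cp --parents a/b/keep.txt "$dst"
```

The excluded file (skip.abc) is simply never passed to cp, so it never reaches the destination.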
| Modifying zsh globbing patterns to use with cp |
1,334,477,754,000 |
If I have an input folder files_input that has subfolders like 01-2015, 02-2015, 03-2015 etc., and all these subfolders have other subfolders. Each subfolder has only one file called index.html.
How can I copy all these index.html files into one folder called files_output so that they end up as separate files in the same folder. They should of course be renamed, and I have tried to use --backup for that...
I have tried
find files_input -name \*.html -exec cp --backup=t '{}' files_output \;
to get them numbered but that copies only one file and nothing else.
I don't know does that change anything but I'm using zsh, here are the versions:
$ zsh --version | head -1
zsh 5.0.2 (x86_64-pc-linux-gnu)
$ bash --version | head -1
GNU bash, version 4.3.11(1)-release (x86_64-pc-linux-gnu)
$ cp --version | head -1
cp (GNU coreutils) 8.21
$ find --version | head -1
find (GNU findutils) 4.4.2
Ideas?
Edit:
Trying to run e.g. following
cp --backup=t files_input/01-2015/index.html files_output
five times in a row still gives me one index.html in the files_output folder! Is cp broken? Why don't I have five different files?
|
As you're a zsh user:
$ tree files_input
files_input
|-- 01_2015
| |-- subfolder-1
| | `-- index.html
| |-- subfolder-2
| | `-- index.html
| |-- subfolder-3
| | `-- index.html
| |-- subfolder-4
| | `-- index.html
| `-- subfolder-5
| `-- index.html
|-- 02_2015
| |-- subfolder-1
| | `-- index.html
| |-- subfolder-2
| | `-- index.html
| |-- subfolder-3
| | `-- index.html
| |-- subfolder-4
| | `-- index.html
| `-- subfolder-5
| `-- index.html
(etc.)
$ mkdir -p files_output
$ autoload -U zmv
$ zmv -C './files_input/(*)/(*)/index.html' './files_output/$1-$2-index.html'
$ tree files_output
files_output
|-- 01_2015-subfolder-1-index.html
|-- 01_2015-subfolder-2-index.html
|-- 01_2015-subfolder-3-index.html
|-- 01_2015-subfolder-4-index.html
|-- 01_2015-subfolder-5-index.html
|-- 02_2015-subfolder-1-index.html
|-- 02_2015-subfolder-2-index.html
(etc.)
What's happening here is that we make the command zmv available with autoload -U zmv. This command is used for renaming, copying or linking files matching a zsh extended globbing pattern.
We use zmv with its -C option, telling it to copy the files (as opposed to moving them, which is the default). We then specify a pattern that matches the files we'd want to copy, ./files_input/(*)/(*)/index.html. The two (*) matches the two levels of subdirectory names, and we put them within parentheses for use in the new name of each file. The new name of each file is the second argument, ./files_output/$1-$2-index.html, where $1 and $2 will be the strings captured by the parentheses in the pattern, i.e. back-references to the subdirectory names. Both arguments should be single quoted.
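If zmv isn't available, the same flatten-and-rename copy can be sketched with a plain shell loop (directory names as in the example tree above):

```shell
demo=$(mktemp -d)
mkdir -p "$demo/files_input/01_2015/subfolder-1" "$demo/files_input/02_2015/subfolder-2"
echo one > "$demo/files_input/01_2015/subfolder-1/index.html"
echo two > "$demo/files_input/02_2015/subfolder-2/index.html"

cd "$demo"
mkdir -p files_output
for f in files_input/*/*/index.html; do
    rest=${f#files_input/}                     # e.g. 01_2015/subfolder-1/index.html
    # Replace the remaining slashes with dashes to build the flat name
    cp "$f" "files_output/$(printf '%s' "$rest" | tr / -)"
done
```

This produces the same files_output/A-B-index.html naming as the zmv one-liner.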
| Recursive copy files with rename |
1,334,477,754,000 |
I have few big folders "cosmo_sim_9", "cosmo_sim_10".... in one of my external hard disk, and a old copy of this on another external hard disk.
I want to synchronize the old directories with the new ones (recursively), but without overwriting already-existing files (to save time).
How can I do this? My OS is Fedora 20.
|
use rsync:
rsync -a --ignore-existing cosmo_sim_9 /dest/disk/cosmo_sim_9
--ignore-existing will cause it to skip files that already exist on the destination; -a will make it recursive, preserving (where possible) permissions, ownership, group, timestamps, links and special devices.
you can do it for all directories by using a bash for loop:
for dir in cosmo_sim_* ; do
rsync -a --ignore-existing "$dir" "/dest/disk/$dir"
done
| How To Synchronize Directories in two different external hard disks? |
1,334,477,754,000 |
I am new to these commands. I am trying to gzip a local folder and unzip it on the remote server. The thing is, gzipping and unzipping must happen on the fly. I tried many commands, and I believe this is one of the closest:
tar cf dist.tar ~/Documents/projects/myproject/dist/ | ssh [email protected]:~/public_html/ "tar zx ~/Documents/projects/myproject/dist.tar"
As you can see above, I am trying to send the dist folder to the remote server, but before that I am trying to compress the folder on the fly (which doesn't seem to be happening in the above command).
local folder: ~/Documents/projects/myproject/dist/
remote folder: ~/public_html (directly deploying to live)
Of course, no gzipped file should be left behind on disk; compression should happen on the fly.
My intention is to run the above from a script, e.g. with sh file.command. In other words, I am trying to deploy my compiled project, which is in the dist folder, to the live site when the script is executed. I don't want to do this manually every time I make a change in my project.
|
If you have rsync then use that instead, as it makes use of existing files to allow it to transfer only differences (that is, parts of files that are different):
rsync -az ~/Documents/projects/myproject/dist/ [email protected]:public_html/
Add the --delete flag to also remove files from the target that no longer exist in the source, making the target an exact mirror. If you want to see what's going on, add -v.
If you don't have rsync, then this less efficient solution using tar will suffice:
( cd ~/Documents/projects/myproject/dist && tar czf - . ) |
ssh [email protected] 'cd public_html && tar xzf -'
Notice that the writing and reading of the compressed tarball is via stdout and stdin (the - filename). If you were using GNU tar you could use the -C option to set the correct directory before processing, whereas here we've used old-fashioned (traditional?) cd. Add the v flag (on the receiving side) to see what's going on, i.e. tar xzvf ....
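The same pipe can be tried locally with the ssh hop removed, which is a handy way to verify the cd-and-tar bookkeeping before pointing it at a real server (paths invented):

```shell
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/css"
echo 'body{}' > "$src/css/app.css"
echo '<html>' > "$src/index.html"

# One tar writes the compressed stream to stdout, the other reads it
# from stdin; nothing is ever written to disk in between
( cd "$src" && tar czf - . ) | ( cd "$dst" && tar xzf - )
```

With ssh back in the middle, the right-hand side simply runs on the remote host instead.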
| gzip compress a local folder and extract it to remote server |
1,336,040,124,000 |
I have this situation
$ ls -la
-rw-r--r-- 1 user user 123 Mar 5 19:32 file-a
-rwx---rwx 1 user user 987 Mar 5 19:32 file-b
I would like to overwrite file-b with file-a but I would like to preserver all permissions and ownership of file-b.
This does not work, because it uses permissions of file-a
cp file-a file-b # << edit: this works as expected! My fault!
mv file-a file-b
This works, but it can only be called from a shell. Imagine a situation where I can only call execve or a similar function.
cat file-a > file-b
I know, that I can execute something like
sh -c "cat file-a > file-b"
but this introduces difficulties with escaping filenames, so I don't want to use it.
Is there some common command that can do this, or should I write my own helper C program for this task?
|
A simple command to copy a file without copying the mode is
dd if=file-a of=file-b
but then you get dd’s verbose status message written to stderr.
You can suppress that
by running the command in a shell and adding 2> /dev/null,
but then you’re back to square 1.
If you have GNU dd, you can do
dd if=file-a of=file-b status=none
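To see that this really preserves the target's mode, here is a small demo (GNU dd and GNU stat assumed; file names as in the question):

```shell
demo=$(mktemp -d)
echo payload > "$demo/file-a"
echo old     > "$demo/file-b"
chmod 644 "$demo/file-a"
chmod 707 "$demo/file-b"        # deliberately unusual mode on the target

# Overwrites file-b's contents in place: the existing inode is reused,
# so its mode and ownership are untouched
dd if="$demo/file-a" of="$demo/file-b" status=none
```

Afterwards file-b holds file-a's contents but keeps its own 707 permissions.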
| Overwrite file preserving target permissions without shell invocation |
1,336,040,124,000 |
I have a folder like this:
./folder-a/index.html
./folder-b/index.html
./folder-c/subdir/index.html
./new-content/folder-a/index.html
./new-content/folder-b/index.html
./new-content/folder-c/subdir/index.html
The new-content folder contains work-in-progress content that I keep updating. When I want to publish my updates, I copy the files over, overwriting the existing ones, like this:
\cp -rf new-content/* ./
But how can I set up a backup of the files that are about to be overwritten?
Any simple way to achieve this?
|
From man cp (the GNU version, found on Linux and Cygwin)
--backup[=CONTROL]
make a backup of each existing destination file
-b like --backup but does not accept an argument
Example
touch 1 2
cp -bv 2 1
‘2’ -> ‘1’ (backup: ‘1~’)
Note that this does not check for existing backup files, i.e. if 1~ exists it will be overwritten. Using the long version you can avoid this. E.g.
cp -v --backup=numbered 2 1
‘2’ -> ‘1’ (backup: ‘1.~1~’)
cp -v --backup=numbered 2 1
‘2’ -> ‘1’ (backup: ‘1.~2~’)
cp -v --backup=numbered 2 1
‘2’ -> ‘1’ (backup: ‘1.~3~’)
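A self-contained sketch of the numbered-backup behaviour applied to the question's scenario (paths invented):

```shell
demo=$(mktemp -d)
cd "$demo"
echo v1 > index.html
echo v2 > new-index.html

# First overwrite: the old copy is saved as index.html.~1~
cp --backup=numbered new-index.html index.html

echo v3 > new-index.html
# Second overwrite: the v2 copy is saved as index.html.~2~
cp --backup=numbered new-index.html index.html
```

Each overwrite leaves the previous version behind under a fresh .~N~ suffix, so nothing is ever silently lost.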
| How to backup all the files that I'm copying before being overwritten? |
1,336,040,124,000 |
Here's the problem I'm attempting to solve:
Let's say I have a directory "A", containing some files as well as some other directories.
I want to copy all the files directly under directory A to directory B.
I want to recursively copy all the folders inside folder A to folder C.
What is the shortest and least platform-specific way to accomplish this in UNIX/Linux?
|
Probably something like this
find A -maxdepth 1 -type f -exec cp {} B/ \;
And
find A -mindepth 1 -maxdepth 1 -type d -exec cp -r {} C/ \;
Where -type is a test selecting what you're looking for (file or directory), -maxdepth limits how deep find descends into the directory, and -exec executes a command on each result. (GNU find warns if -maxdepth is placed after tests such as -type, so it's best given first.)
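A quick demo of the two commands on a throwaway tree (note that GNU find wants -maxdepth before the tests, so it is placed first here):

```shell
demo=$(mktemp -d)
mkdir -p "$demo/A/sub" "$demo/B" "$demo/C"
echo top  > "$demo/A/top.txt"
echo deep > "$demo/A/sub/deep.txt"

cd "$demo"
# Files directly under A go to B; deeper files are left alone
find A -maxdepth 1 -type f -exec cp {} B/ \;
# Top-level directories inside A (excluding A itself, via -mindepth 1)
# are copied recursively into C
find A -mindepth 1 -maxdepth 1 -type d -exec cp -r {} C/ \;
```

Afterwards B contains only top.txt, while C/sub contains the recursively copied deep.txt.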
| Copying files based on a condition |