Scenario:
Source dirs:
/day1/hour1/instance1/files.ext
/day1/hour1/instance2/files.ext
/day1/hour1/instance3/files.ext
/day1/hour2/instance1/files.ext
/day1/hour2/instance2/files.ext
etc..
Target dir (already exist):
/day1/hour1/instance4/files.ext
/day1/hour1/instance5/files.ext
/day1/hour1/instance6/files.ext
/day1/hour2/instance6/files.ext
/day1/hour2/instance7/files.ext
I have to copy all files from source to target.
As you can see I have the same tree, that is, the same days and hours, but different instances in source and target.
I need to copy all dirs and files from source into exactly the same tree in target, while preserving all files that are already in the target folders.
How could I achieve this?
cp -R is what I need? or do I need to add some more parameters?
|
With rsync:
rsync --archive --ignore-existing source_dir/ target_dir/
This will copy the source_dir hierarchy into target_dir, but will not overwrite any files in target_dir that already exist.
| Copy all files recursively without replacing |
How can I gzip and copy files to another folder keeping its directory structure with one command in Linux?
For example, I have:
/dir1
/dir1/file1.fit
/dir1/file2.fit
/dir1/file3.fit
/dir1/dir2/file1.fit
/dir1/dir2/file2.fit
/dir1/dir2/file3.fit
After I use a command (let's say I copy /dir1 to /another_dir), I want to get:
/another_dir/dir1
/another_dir/dir1/file1.fit.gz
/another_dir/dir1/file2.fit.gz
/another_dir/dir1/file3.fit.gz
/another_dir/dir1/dir2/file1.fit.gz
/another_dir/dir1/dir2/file2.fit.gz
/another_dir/dir1/dir2/file3.fit.gz
Here /another_dir is actually another hard drive. Since there is not enough space on the target drive (it's 2 TB of data!), please do not suggest copying the files first and then gzipping them all (or vice versa). Similarly, the gz files should not remain in the source folder after the operation.
|
Assuming you're in the root folder where all the directories for compression are located (in your case /), you can use find along with the xargs command, e.g.
find dir1/ -name "*.fit" -print0 | xargs -i% -r0 sh -c 'mkdir -vp "$(dirname "/another_dir/%")" && gzip -vc "%" | tee "/another_dir/%".gz > /dev/null && rm -v "%"'
Note: You can also replace | tee "/another_dir/%".gz > /dev/null with > "/another_dir/%".gz.
This will find all .fit files in dir1/ and pass them to the xargs command for processing, where % is replaced with each of your files.
The xargs command will:
create the empty folder (mkdir) with its parents (-p) as a placeholder,
compress given file (%) into standard output (-c) and redirect compressed output to tee,
tee will save the compressed input into .gz file (since tee by default prints the input to the terminal screen, sending it to /dev/null will suppress it, but it'll still save the content into the given file).
After successful compression, remove the original (rm). You can always remove that part, in order to remove them manually after verifying your compressed files.
It is important that you're in the folder containing dir1/, so that all paths returned by find are relative to the current folder and you don't have to convert absolute paths into relative ones (this can still be done with realpath, e.g. realpath --relative-to=$absolute $current, but it would just overcomplicate the above command).
On macOS, to use the -r argument for xargs, you need to install GNU xargs (part of brew install findutils) and use the gxargs command instead. Similarly on other BSD systems.
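The same idea can also be written with find -exec instead of xargs; this is a minimal sketch of that variant, using made-up demo paths under /tmp:

```shell
mkdir -p /tmp/gz-demo/dir1/dir2
echo data1 > /tmp/gz-demo/dir1/file1.fit
echo data2 > /tmp/gz-demo/dir1/dir2/file2.fit
cd /tmp/gz-demo
# recreate each file's parent directory under out/, gzip into it, remove the original
find dir1 -type f -name '*.fit' -exec sh -c '
  for f in "$@"; do
    mkdir -p "/tmp/gz-demo/out/${f%/*}" &&
    gzip -c "$f" > "/tmp/gz-demo/out/$f.gz" &&
    rm "$f"
  done' sh {} +
```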
Related question: gzip several files in different directories and copy to new directory.
| How to gzip and copy files keeping its directory structure? |
In a directory, I have about 150 files with a certain extension, for example:
abc.ext
def.ext
ghi.ext
...
Now I want to copy all of these files also to a new filename (without extension):
abc
def
ghi
What is the shortest way to get this done? Is it possible without writing a loop in a bash file?
Edit: Thank you for your answers (dirkt, John Newman).
After seeing the answers I'll stick with the shorter one, although it has a loop.
|
It's possible without writing a script ("bash file"), but not without using a loop:
for f in *.ext ; do cp -- "$f" "$(basename "$f" .ext)" ; done
basename can be used to remove the suffix.
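An equivalent loop uses parameter expansion instead of basename; here it is run against throwaway files in /tmp:

```shell
mkdir -p /tmp/ext-demo && cd /tmp/ext-demo
touch abc.ext def.ext ghi.ext
# ${f%.ext} strips the trailing .ext, just like basename "$f" .ext
for f in *.ext ; do cp -- "$f" "${f%.ext}" ; done
ls   # abc  abc.ext  def  def.ext  ghi  ghi.ext
```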
| copy files to a new name - shortest way |
Considering folderA containing those files:
foo
bar
baz
and folderB containing those:
foo
baz
foobar
qux
How can I copy foo and baz from folderA to a new folderC?
Note that I'm comparing only their names, not their contents.
|
Use a for loop over the files. Parameter expansion can be used to extract parts of the path:
#! /bin/bash
for file in folderA/* ; do
basename=${file##*/}
if [[ -f folderB/$basename ]] ; then
cp "$file" folderC/"$basename"
fi
done
You can loop over files in folderB, too, and I'd recommend it if folderB contains significantly fewer files than folderA.
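The loop can be tried end-to-end on throwaway data (demo paths under /tmp; POSIX [ is used here instead of bash's [[ so it also runs in plain sh):

```shell
mkdir -p /tmp/fc-demo/folderA /tmp/fc-demo/folderB /tmp/fc-demo/folderC
cd /tmp/fc-demo
touch folderA/foo folderA/bar folderA/baz
touch folderB/foo folderB/baz folderB/foobar folderB/qux
for file in folderA/* ; do
  basename=${file##*/}              # strip the leading folderA/
  if [ -f "folderB/$basename" ] ; then
    cp "$file" "folderC/$basename"  # copy only names present in both
  fi
done
ls folderC   # baz  foo
```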
| Copy files from a folder if they're in another folder too |
I'm trying to download a large number of files from a remote server. Part of the path is known, but there's a folder name that's randomly generated, so I think I have to use wildcards. The path is something like this:
/home/myuser/files/<random folder name>/*.ext
So was trying this:
rsync -av [email protected]:~/files/**/*.ext ./
This is giving me following error:
bash: /usr/bin/rsync: Argument list too long
I also tried scp instead of rsync but got the same error. It seems bash interprets the wildcard as the full list of files.
What's the right way to achieve this?
|
Instead of letting the remote shell expand a glob that results in a too long list of arguments, use --include and --exclude filters to transfer only the files that you want:
rsync -aim --include='*/' --include='*.ext' --exclude='*' \
[email protected]:files ./
This would give you a directory called files in the current directory. Beneath it, you will find the parts of the remote directory structure that contain the .ext files, including the files themselves. Directories without .ext files would not appear on the target side as we use -m (--prune-empty-dirs).
With the --include and --exclude filters, we include any directory (needed for recursion) and any name matching *.ext. We then exclude everything else. These filters work on a "first match wins" basis, which means the --exclude='*' filter must be last. The rsync utility evaluates the filters as it traverses the source directory structure.
If you then want to move all the synced files into the current directory (ignoring the possibility of name clashes), you could use find like so:
find files -type f -name '*.ext' -exec mv {} . \;
This looks for regular files in or beneath files, whose names match the pattern *.ext. Matching files are moved to the current directory using mv.
| Retrieve large number of files from remote server with wildcards |
Why does a file with a permission of 0664/-rw-rw-r-- become 0777/-rwxrwxrwx when copied onto an external hard drive? The external drive is NTFS-formatted - does this matter?
|
It does matter, because the set of attributes and metadata supported for a file varies widely across the various types of filesystems.
Specifically, the file-system permissions (and ownership, for that matter) you are referring to here originate in the traditional Unix user management framework and are therefore a feature of the filesystems developed for, or usually used in, Unix/Linux operating systems, like the EXT family of filesystems. They are stored in the inode, a low-level data structure describing each filesystem object.
NTFS comes from the Windows world where users and permissions are handled very differently; in particular, NTFS uses access-control lists to determine which user may do what with a certain file (1). So, when an NTFS drive is mounted on a Linux/Unix system, the file system driver has to "translate" the properties of that drive into something understandable to the Linux tools for handling filesystems, which sometimes can mean substituting data that simply isn't present on the actual filesystem with default values.
So, since
NTFS has no notion of your local users, and
it doesn't control access via ownership/group membership
copying a file from a Unix/Linux-type file system to an NTFS filesystem leads to a loss of metadata, which is then substituted with a default "everyone can do everything".
See also
External drive chmod does nothing
Will permission bits set on a directory on an external hard drive be respected under Windows?
(1) And although filesystems used in the Linux world now also support access-control lists, they are added "on top" of the traditional permissions, which still form the basis for access handling.
| File permission change when copying to external hard drive |
I am newbie here, please be patient.
I have a directory containing thousands of files. Filenames always start with 1 or 2 letters and have 4 characters before the underscore "_". The number of files for each pattern can differ, and the part of the file name after the underscore varies.
Sample:
Parentdir:
->AA01_*.pdf
->AA01_*.html
->AA01_*.txt
...
->ZZ99_*.pdf
->ZZ99_*.html
->ZZ99_*.txt
...
->A001_*.pdf
->A001_*.html
->A001_*.txt
...
->Z999_*.pdf
->Z999_*.html
->Z999_*.txt
I would like to run a command that would create new directories using only letters from file-name and populate them with files starting with these letters.
If there is a file with the same name in the destination directory (an updated file in the source dir), I'd like to keep the most recent one. So:
New dir/files:
->AA
AA01_*.pdf
AA01_*.html
AA01_*.txt
...
->ZZ
ZZ99_*.pdf
ZZ99_*.html
ZZ99_*.txt
...
->A
A001_*.pdf
A001_*.html
A001_*.txt
...
->Z
Z999_*.pdf
Z999_*.html
Z999_*.txt
Can this be accomplished?
Thanks!
|
Loop across the set of files. Pick off the alphabetic prefix. Create the directory (if necessary) and move the file into it.
#!/bin/sh
for item in *
do
if [ -f "$item" ]
then
prefix="$(echo "$item" | sed 's/[^A-Z].*//')"
echo mkdir -p "$prefix"
echo mv "$item" "$prefix/"
fi
done
Remove the echo prefixes from mkdir and mv when you're happy it's going to do what you want.
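With the echo prefixes removed, the script behaves like this on some throwaway sample names in /tmp (file names here are made up for the demo):

```shell
mkdir -p /tmp/sort-demo && cd /tmp/sort-demo
touch AA01_report.pdf ZZ99_index.html A001_data.txt
for item in *
do
  if [ -f "$item" ]
  then
    # delete everything from the first non-uppercase character onwards
    prefix="$(echo "$item" | sed 's/[^A-Z].*//')"
    mkdir -p "$prefix"
    mv "$item" "$prefix/"
  fi
done
find . -type f   # files now live under A/, AA/ and ZZ/
```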
| Sorting and copying files |
I have several folders within a parent folder, which all have the structure below, and am struggling to create a specific loop.
parentfolder/folder01/subfolder/map.png
parentfolder/folder02/subfolder/map.png
parentfolder/folder03/subfolder/map.png
parentfolder/folder04/subfolder/map.png
etc...
so each subfolder contains a file called map.png (i.e. same filename in all subfolders, but they are different files).
I would like to copy each map.png file and place it into the overall Parentfolder, but at the same time I want the copy to be renamed based on the Folder above 'subfolder'.
So for example, I want to copy map.png from parentfolder/folder01/subfolder to parentfolder whilst renaming it folder01.png (and for this then to be done accordingly for all others, using a loop).
I have tried something along these lines but am obviously struggling to get it to do what I want:
for i in parentfolder/*; do
cd $i
cd subfolder
cp map.png ../../"$i".png
cd -
done
I am still a beginner and very new to this so would appreciate any help. Thanks so much.
|
You may try something like the following for loop,
for d in parentfolder/* ; do
cp "$d/subfolder/map.png" "$d.png"
done
You should run it from the directory that contains parentfolder.
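A quick demonstration with a miniature copy of the layout (all names under /tmp are made up): "$d.png" expands to parentfolder/folder01.png, so the copies land inside parentfolder, named after the folder above subfolder.

```shell
mkdir -p /tmp/map-demo/parentfolder/folder01/subfolder \
         /tmp/map-demo/parentfolder/folder02/subfolder
touch /tmp/map-demo/parentfolder/folder01/subfolder/map.png \
      /tmp/map-demo/parentfolder/folder02/subfolder/map.png
cd /tmp/map-demo
for d in parentfolder/* ; do
  cp "$d/subfolder/map.png" "$d.png"
done
ls parentfolder   # folder01  folder01.png  folder02  folder02.png
```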
| cp command based on parent directory |
Summary
I have copied (rsync --archive) a folder from an ext4 filesystem to a zfs file system with compression on. Now, I'm trying to verify that both folders are identical so that I can safely delete the source folder.
When re-running rsync, no additional bytes are transferred. So, rsync is convinced that both folders are identical.
However, using du, du -b, or md5sum yield different results for both folders.
How can I convince myself that both folders are identical before deleting the source folder?
Examples
I've uploaded a test folder 883 containing four files.
fiedl@ext4 ▶ du 883
20 883
fiedl@zfs ▶ du 883
57 883
fiedl@ext4 ▶ du -s 883
20 883
fiedl@zfs ▶ du -s 883
57 883
fiedl@ext4 ▶ du -sb 883
4660 883
fiedl@zfs ▶ du -sb 883
570 883
fiedl@ext4 ▶ du 883/*
4 883/big_image001.gif
4 883/image001.gif
4 883/medium_image001.gif
4 883/thumb.png
fiedl@zfs ▶ du 883/*
10 883/big_image001.gif
10 883/image001.gif
10 883/medium_image001.gif
10 883/thumb.png
fiedl@ext4 ▶ tar -cf - 883 | md5sum
7c8a4ff31fdf594b04173789b23c7bb8 -
fiedl@zfs ▶ tar -cf - 883 | md5sum
f207dbadd75126665af300705774c97f -
|
md5sum and diff
Under the assumption that the observed differences are the result of different metadata and zfs compression, the respective md5sums of the individual files should still be the same.
cd /path/to/ext4 && find . -type f -print0 | xargs -0 md5sum | sort > ~/md5sum-index-ext4
cd /path/to/zfs && find . -type f -print0 | xargs -0 md5sum | sort > ~/md5sum-index-zfs
diff ~/md5sum-index-ext4 ~/md5sum-index-zfs
This workaround recursively lists all files within the directories using find, adds the md5sum for each file, sorts the results because find may return the contents in different order. Then the results for both folders can be compared. If they are identical, the diff is empty. Otherwise, the files with different binary content will show up in the diff.
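The index-and-diff idea can be tried end-to-end on throwaway data first (demo paths under /tmp are made up):

```shell
mkdir -p /tmp/cmp-demo/a/sub /tmp/cmp-demo/b/sub
echo same > /tmp/cmp-demo/a/sub/f.txt
echo same > /tmp/cmp-demo/b/sub/f.txt
# build a sorted checksum index per tree, then compare the indexes
( cd /tmp/cmp-demo/a && find . -type f -print0 | xargs -0 md5sum | sort ) > /tmp/cmp-demo/sum-a
( cd /tmp/cmp-demo/b && find . -type f -print0 | xargs -0 md5sum | sort ) > /tmp/cmp-demo/sum-b
diff /tmp/cmp-demo/sum-a /tmp/cmp-demo/sum-b && echo identical
```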
diff -r
I was able to compare smaller folders directly using diff -r. However, this has crashed for large folders, maybe due to the lack of memory.
diff -r /path/to/ext4 /path/to/zfs
If both folders are identical, the diff is empty.
| Make sure two folders are identical on compressed zfs and ext4 |
In the following makefile
InputLocation:=./Test
OutputLocation:=$(InputLocation)/Output
Input:=$(wildcard $(InputLocation)/*.md)
Output:=$(patsubst $(InputLocation)/%, $(OutputLocation)/%, $(Input:Input=Output))
.PHONY: all
all: $(Output)
$(OutputLocation)/%.md : $(InputLocation)/%.md
cp -rf $< $@;
ActualFilePath="$<"
InterimFile1Path="$@"
#cp -rf $(ActualFilePath) $(InterimFile1Path);
cp -rf $< $@; copies the file successfully.
While cp -rf $(ActualFilePath) $(InterimFile1Path) gives an error cp: missing file operand
Why is it so?
|
Run make -n to see the commands that would be executed, or run make without options and look at the commands that are executed. Doing this would probably already answer your question, and if not, it would allow us to know what happens.
From the fragment you show, it seems you want to assign shell variables and later use make variables. So TargetLocation seems to be a make variable, while ActualFilePath="$<" seems to be a command meant for the shell.
Depending on the rest of the file, this may work:
ActualFilePath="$<"; \
InterimFile1="tempHTML.md"; \
InterimFile1Path="$(TargetLocation)/$${InterimFile1}" ; \
cp -rf $${ActualFilePath} $${InterimFile1Path};
Edit
In the indented part of the rules, you are not assigning make variables; you are specifying shell commands. Each recipe line runs in its own shell, so a shell variable assigned on one line is gone on the next, and $(ActualFilePath) expands as an undefined (hence empty) make variable, which is why cp complained about a missing file operand.
This should work:
$(OutputLocation)/%.md : $(InputLocation)/%.md
cp -rf $< $@;
ActualFilePath="$<"; \
InterimFile1Path="$@"; \
cp -rf $${ActualFilePath} $${InterimFile1Path}
And this should work, too:
ActualFilePath="$<"
InterimFile1Path="$@"
$(OutputLocation)/%.md : $(InputLocation)/%.md
cp -rf $(ActualFilePath) $(InterimFile1Path);
| Makefile: Copy using make variables --> error; Not so without variables! |
I have a remote machine with a large number of numbered directories, like so:
dir1 dir2 dir3 ... dir40
each of which contain several numbered files:
file1 file2 file3 ... file2530
I want to copy only a selected range of the files in each directory. Since the files' names are identical in each directory, I want to re-create the directory hierarchy on my local machine. But since I don't want every file, I can't just use scp -r to copy every file in the directory.
I can't set up an automated connection with ssh keys on the remote machine, so I would prefer a method that doesn't involve repeated calls to a remote copy command. The files are also big, so I don't want to just copy the whole thing over and delete the ones I don't want with rm and brace expansion.
How can I copy a set of files from a remote machine, along with those files' parent directories, while preserving the directory structure and without copying every file in those directories?
|
You could use rsync, which will do only one ssh to the remote, and provide it with either a complete list of files, or a list of glob patterns of files to copy or not copy. For example,
rsync -navR --exclude='*-[4-9]?.out' --exclude='*-3[3-9].out' --exclude='*-???*.out' myremote:'dir*' mylocaldir
This would exclude filenames like file-40.out up to file-99.out (two digits, the first from 4 to 9), file-33.out up to file-39.out, and file-100.out or bigger (three or more characters before .out). Run the command with the -n option as shown to collect the list of names that would be transferred, and if this is OK remove the option to actually do the copy.
Note, rsync does not support braces {} in its glob patterns. Alternative ways of specifying the files to copy depend on how exotic your exclusion pattern is, but a foolproof method is to use -n and no exclude patterns to get the complete list of names, then edit this list and provide it as an --files-from list of files. You would also need to remove the dir* from the remote source:
rsync -av --files-from=list myremote: mylocaldir
| Selectively copy from a collection of remote directories |
How can I move files from a full partition to one with more room?
Backstory:
Centos7 partitioned the 1TB hard drive on installation. I didn't realize that the partition mysql installs and runs from only has 50G. It reached maximum capacity and now the mysql service will not start so I can't simply drop or truncate tables. After I get this running I'll be searching online how to keep mysql tables on the large partition. I don't actually know why linux centos needs so many little partitions or what they are for. I don't have an internet browser on the linux machine so I can not copy and paste the output of df -h.
The partition mounted on / only has 50G while the partition mounted on /home has another 800G free.
Thanks for your help.
|
You should copy the contents of /var/lib/mysql onto a larger partition, remove the old copy from the space-constrained partition, and create a soft-link to the new location at /var/lib/mysql so that the system will find and use the new location instead.
As requested, here are actual commands, but as always please exercise extreme caution before running rm commands (i.e., check to make sure that your files have copied correctly -- using, for example, du -shx . in both places to check that the total size is approximately the same):
mkdir /home/var-lib-mysql
cp -ax /var/lib/mysql/. /home/var-lib-mysql/
rm -rf --one-file-system /var/lib/mysql
ln -sf -T /home/var-lib-mysql/ /var/lib/mysql
And, of course, keep in mind that this is a hack and you should refrain from ever creating a user account literally called "var-lib-mysql".
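The copy-then-symlink technique itself can be rehearsed safely on dummy directories before touching /var/lib/mysql (all /tmp paths below are made up for the demo):

```shell
mkdir -p /tmp/mv-demo/small/data /tmp/mv-demo/big/data
echo contents > /tmp/mv-demo/small/data/file
cp -ax /tmp/mv-demo/small/data/. /tmp/mv-demo/big/data/   # copy preserving attributes
rm -rf /tmp/mv-demo/small/data                            # remove the old copy
ln -s /tmp/mv-demo/big/data /tmp/mv-demo/small/data       # leave a symlink behind
cat /tmp/mv-demo/small/data/file   # contents -- served through the symlink
```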
| How can I move files from a full partition to one with more room? |
I am running the command: sudo rsync -Hva --delete --progress --append-verify "/mnt/1/" "/mnt/2/". I went ahead and modified a text file in /mnt/2/. I then ran the command and I got the following output:
sending incremental file list
sent 13,320,053 bytes received 60,989 bytes 198,237.66 bytes/sec
total size is 1,745,978,866,295 speedup is 130,481.53
I checked the text file in /mnt/2/ and it still has my modification. Have I misunderstood the command append-verify? Does it not check file checksums? I also modified the file's time stamp and increased its file size.
To clarify, I do not want to sync from DEST to SRC. I simply want the sync from SRC to DEST to overwrite the change I made in DEST.
|
Your changed timestamp and file size don't matter here, because the --append options use a different rule to decide what to copy.
The manpage says about --append:
If a file needs to be transferred and its size on the receiver is the same or longer than the size on the sender, the file is skipped.
It shares this quality with --append-verify. The extra verification you were hoping for only happens after an actual append action (which probably never happens if you, for instance, added something to the file instead of deleting from it).
In this case, you probably want the -I flag, so as to ignore time and size of the file.
The append options are meant mostly to speed up updating large files that only change at the end (like logfiles).
| Rsync's append-verify isn't mirroring directories |
I am trying to write a script, that will copy file or directory with added timestamp to filename/dirname
Something like:
cover.jpg --> cover_18-01-2014_17:37:32.jpg
directory --> directory_18-01-2014_17:37:32
I don't know how to add the timestamp to filename/dirname. Can anybody help?
Timestamp
now="$(date +'%d-%m-%Y_%T')"
|
I managed to write the script on my own after a while:
#!/bin/bash
# timestamp
now="$(date +'%d-%m-%Y_%T')"
filename=$(basename "$1")
cp -r "$1" "$2"
if [ -f "$1" ]; then
extension="${filename##*.}"
name="${filename%.*}"
mv -f "$2/$filename" "$2/${name}_${now}.$extension"
else
mv -f "$2/$filename" "$2/${filename}_${now}"
fi
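The core rename logic can also be checked standalone; this sketch uses a hypothetical file name and just prints the name that would be produced:

```shell
now="$(date +'%d-%m-%Y_%T')"
filename="cover.jpg"            # hypothetical input name
extension="${filename##*.}"     # jpg
name="${filename%.*}"           # cover
newname="${name}_${now}.${extension}"
echo "$newname"                 # e.g. cover_18-01-2014_17:37:32.jpg
```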
| Adding timestamp to file/dir name |
I have multiple files with an example of how they look like shown below.
-rw-r--r-- 1 my_user users 12 Dec 13 09:56 Example_30_001_20130913175000.DAT
-rw-r--r-- 1 my_user users 12 Dec 13 09:57 Example_30_002_20130913180854.DAT
-rw-r--r-- 1 my_user users 12 Dec 13 09:58 Example_30_003_20130913180857.DAT
-rw-r--r-- 1 my_user users 12 Dec 13 09:58 Example_30_004_20130913180901.DAT
-rw-r--r-- 1 my_user users 12 Dec 13 09:59 Example_30_005_20130913180904.DAT
-rw-r--r-- 1 my_user users 12 Dec 13 10:02 Example_30_006_20130913180907.DAT
-rw-r--r-- 1 my_user users 12 Dec 13 09:59 Example_30_007_20130913180911.DAT
My question is how do I copy them in the same directory and rename the copied files using a sh script such that they start with something like the filenames shown below?
Ex_Example_001.DAT
Ex_Example_002.DAT
Ex_Example_003.DAT
Ex_Example_004.DAT
Ex_Example_005.DAT
Ex_Example_006.DAT
Ex_Example_007.DAT
|
Execute this in the folder of your files:
find . -type f -name "Example_30*.DAT" | awk -F\_ '{printf "cp -v %s Ex_Example_%s.DAT\n", $0, $3}' | bash
find . -type f: search only for files
-name "Example_30*.DAT": file beginning with "Example_30" and ending with ".DAT"
| awk -F\_: pipe this to awk and set the delimiter to _
'{printf "cp -v %s Ex_Example_%s.DAT\n", $0, $3}': generate a command like this: cp -v oldname newname
| bash: and pipe this to bash to execute it
Output should look like this:
'./Example_30_002_20130913180854.DAT' -> 'Ex_Example_002.DAT'
'./Example_30_005_20130913180904.DAT' -> 'Ex_Example_005.DAT'
'./Example_30_003_20130913180857.DAT' -> 'Ex_Example_003.DAT'
'./Example_30_006_20130913180907.DAT' -> 'Ex_Example_006.DAT'
'./Example_30_007_20130913180911.DAT' -> 'Ex_Example_007.DAT'
'./Example_30_004_20130913180901.DAT' -> 'Ex_Example_004.DAT'
'./Example_30_001_20130913175000.DAT' -> 'Ex_Example_001.DAT'
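The same rename can be done with a plain shell loop and parameter expansion, without generating commands through awk; a self-contained sketch on made-up files in /tmp:

```shell
mkdir -p /tmp/rn-demo && cd /tmp/rn-demo
touch Example_30_001_20130913175000.DAT Example_30_002_20130913180854.DAT
for f in Example_30_*.DAT ; do
  num=${f#Example_30_}   # strip the prefix -> 001_20130913175000.DAT
  num=${num%%_*}         # keep only the sequence number -> 001
  cp -v "$f" "Ex_Example_${num}.DAT"
done
ls Ex_Example_*.DAT   # Ex_Example_001.DAT  Ex_Example_002.DAT
```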
Edit:
What if I want it to be put in a script outside of that folder? How
would I go about doing it?
Create a file called script. Add the following lines into that file:
#!/bin/bash
DIRECTORY=/path/to/dir/
cd "$DIRECTORY"
find . -type f -name "Example_30*.DAT" | awk -F\_ '{printf "cp -v %s Ex_Example_%s.DAT\n", $0, $3}' | bash
cd -
Make the script executable:
chmod u+x script
And then call the script:
./script
or
/absolute/path/to/script
| Copying and renaming files to a single directory by a single shell script |
I want to list the files in a directory, using ls. Typically, the files in that directory have the same name (except for the extension - one has extension .rej, one has .failed) .
If a pair of files with similar names have the same size, move the .failed file to a specific directory, and leave the .rej alone.
How can I do this?
|
You don't want to use ls, you want to use shell globbing and string manipulation:
$ for f in *.rej; do
size=$(stat --printf "%s" "${f%.rej}.failed") &&
if [ $(stat --printf "%s" "$f") -eq "$size" ]; then
mv "${f%.rej}.failed" backup/;
fi; done 2>/dev/null
Explanation
The stat --printf "%s" command prints the size of a file in bytes. ${f%.rej}.failed expands to the name of the current .rej file, but with the .failed extension instead of .rej. If that file exists, size=$(...) exits successfully and the script continues (&&). So, if the $size of the .failed file is the same as the size of the .rej file, the .failed file is moved to the directory backup/.
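Run against some made-up pairs in /tmp, the loop moves only the .failed file whose size matches its .rej partner:

```shell
mkdir -p /tmp/rej-demo/backup && cd /tmp/rej-demo
printf 12345 > a.rej;  printf abcde > a.failed   # same size (5 bytes): move
printf 12345 > b.rej;  printf x     > b.failed   # different size: keep
for f in *.rej; do
  size=$(stat --printf "%s" "${f%.rej}.failed") &&
  if [ "$(stat --printf "%s" "$f")" -eq "$size" ]; then
    mv "${f%.rej}.failed" backup/
  fi
done 2>/dev/null
ls backup   # a.failed
```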
| How to ignore files if there is an similar file with the same size? |
On my Ubuntu server there are about 150 shell accounts. All usernames begin with the prefix u12.. I have root access and I am trying to copy a directory named "somefiles" to all the home directories. After copying the directory the user and group ownership of the directory should be changed to user's. Username, group and home-dir name are same. How can this be done?
|
Do the copying as the target user. This will automatically make the target files owned by that user. Make sure that the original files are world-readable (or at least readable by all the target users). Run chmod afterwards if you don't want the copied files to be world-readable.
getent passwd |
awk -F : '$1 ~ /^u12/ {print $1}' |
while IFS= read -r user; do
su "$user" -c 'cp -Rp /original/location/somefiles ~/'
done
| Copying a directory to multiple users home dir and changing user/group ownership |
I have a file that's being constantly held open and continuously modified by another process. This process is continuously seeking to different parts of the file and writing new blocks. I'd like to be able to make a copy of that file but as a snapshot of the file at a single instance of time.
What I don't want to happen is that I copy the first block of bytes, the file changes and then I copy the second block including the newly modified bytes.
Can Linux help me out here?
|
Can Linux help me out here?
Yes. But not in the way you probably hoped it would.
So, file systems on Linux generally follow the semantics that a change to a file should be reflected as instantly as possible to all readers of that file, however they read it. Notice how that is at odds with what you want.
What you can very well do is tell your filesystem or block device layer to make a snapshot, and here the semantics are different, namely, demanding consistency at the point of snapshotting.
So, you need to have either
a file system that supports snapshots, or
a block device sublayer that supports snapshots.
Consistency
As doneal very correctly points out:
None of this helps you if a snapshot is taken while a single "unit of consistency" (i.e., a seek-and-write that leads to another consistent file state, probably what you consider a "change of datum" from an application view) is ongoing.
The approaches below both assume you're taking a snapshot at an instant when no write is ongoing. What they avoid is the copy not being exactly what the file was at the instant the copy started. If you make a snapshot in the middle of writing 8 kB of data, your snapshot contains 4 kB of new and 4 kB of old data in the 8 kB chunk you meant to overwrite. That's what I'd call "data inconsistency".
This does in fact mean that there's no generic help: your operating system cannot bring a file into a consistent state if you don't have a logical way of ensuring consistency for your file.
If you need that, you will have to look into how "proper" database systems ensure that a database is not in an unrecoverably broken state when a storage device suddenly is removed from the system.
A plain file can't do that. You yourself can never guarantee consistency unless you restrict snapshot points in time to happen only when there's no unfinished write. To achieve that, on any storage medium or operating system, you will have to change your file architecture.
The most common way to do that is to implement a strictly ordered mechanism: 1. write the change (position, length and data to be applied to the main file) to a log at the end of the file (or to another file), with a trailing checksum that allows you to check whether the entry was written completely; 2. only after that has been fully completed, write the change to the main file; 3. occasionally, go back and clean up the successfully committed log entries.
When you now do a snapshot while writing to the main data file, the data in the log doesn't agree with the data in the main file. You can replay the write from the log and be back in a consistent state. If you do your snapshot while writing to the log, you can notice when reading the log that the checksum of the log entry is not correct, hence that log entry cannot be complete, and hence the main data is not yet affected by what the log entry describes. You delete the broken log entry from the snapshotted log.
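As a toy illustration of that write-ahead-log scheme (all file names and the checksum format are made up for the demo; cksum stands in for a real checksum):

```shell
mkdir -p /tmp/wal-demo && cd /tmp/wal-demo
printf AAAA > main.dat
# step 1: record the intended change (offset 2, data "XX") plus a checksum of the entry
printf '2 XX %s\n' "$(printf '2 XX' | cksum | cut -d' ' -f1)" >> wal.log
# step 2: replay the log -- apply only entries whose checksum verifies
while read -r off data sum ; do
  if [ "$sum" = "$(printf '%s %s' "$off" "$data" | cksum | cut -d' ' -f1)" ] ; then
    printf %s "$data" | dd of=main.dat bs=1 seek="$off" conv=notrunc 2>/dev/null
  fi
done < wal.log
cat main.dat   # AAXX
# step 3: the committed entry may now be removed from the log
: > wal.log
```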
Filesystem snapshots
To the best of my knowledge, under Linux, only the btrfs and openZFS file systems support snapshots. Btrfs is part of the linux kernel, so that's probably easiest to work with.
In btrfs, your file system (say, a filesystem that is mounted on /srv/data) can have subvolumes. You can either access these as subdirectories or separately mount them.
A btrfs snapshot is just a subvolume that is identical to the current volume. That's "easy" for btrfs to implement, because it is a copy-on-write file system: normally, whenever you modify a file, a copy of the affected storage block is made, with the modified data inside. Then the file metadata is updated: a file's content is just the data in a list of blocks, and if you change something in, say, the 4th block, then the fourth entry in that list is replaced with a reference to the freshly copied and modified block. This comes at very low overhead, as storage devices work in blocks anyway: you can never read a single byte, you read a block, and writing a single byte takes the same time as writing a block. So read, modify and write to a different position is as expensive as modification in place.
Now, when a snapshot is made, all that happens is that any modifications to the file metadata after the snapshot go to a separate data structure. So, basically, the snapshot and the currently "active" working view of the file system 100% share the data for everything that has not changed, but things that have changed are there twice: once in the modified version, and once as it was at the time of the snapshot.
So, put your file on a btrfs subvolume, make a snapshot, mount that snapshot: you got your file "frozen" in time.
Block Device Mapper Snapshots
These basically work with any modern Linux filesystem. In the world where these are common, XFS is a popular file system choice (but any file system should do).
LVM is the linux volume manager. That basically means it's a part of the kernel that you can give one or many block devices ("physical volumes"), tell it to assemble these to a "volume group", and from the accumulated storage pool then create "logical volumes".
A special case of these are "thin volumes", which basically means you say: "OK, I have 512 GB of storage in my volume group. I want to make a logical volume that I can format with a file system. I want that to eventually be 1 TB, maybe (if my customer actually uses it, for example), but for now I don't even have that much space (the new SSD hasn't even been ordered yet, and all of that 1 TB isn't needed yet)." LVM will then create a volume that looks like it has the full 1 TB, while only allocating physical space from the pool as data is actually written to it.
| Can I copy a snapshot of a file that's being constantly modified? |
There are lots of similar questions out there, but none seems to address my problem: every time, the culprit is a legitimate permission issue, or an incompatible filesystem, none of which makes any sense here.
I'm transferring a file locally, on an ext4 filesystem, using rsync. A minimal example is:
cd /tmp
touch blah
mkdir test
rsync -rltDvp blah test
which returns the error:
rsync: [receiver] failed to set permissions on "/tmp/test/.blah.Gyvvbw": Function not implemented (38)
and the files have different permissions:
-rw-r--r-- 1 ted ted 0 Sep 29 15:49 blah
-rw------- 1 ted ted 0 Sep 29 15:49 test/blah
I'm running rsync as user ted and the filesystem is ext4, so it should support permissions just fine. Here is the corresponding line from df -Th:
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/c--3px--vg-root ext4 936G 395G 494G 45% /
I'm running rsync 3.2.3 protocol version 31 on Debian Sid, kernel 5.10.0-6-amd64.
|
The OP wrote,
apt-get update && apt-get upgrade, which apparently upgraded rsync (to version 3.2.3-8), fixed the problem.
The error was presumably caused by a change to lchmod and fchmodat in the GNU C library.
| rsync failed to set permissions for a local copy ("Function not implemented") |
1,537,101,239,000 |
I want to duplicate a folder and all the subfolders, but I do not want to duplicate the contents of the files in this directory.
Let's say the folder I want to duplicate is
Folder0
Folder00
File000.x 1GB
File001.x 500MB
Folder01
File010.x 600MB
I want to create a duplicate that is like
Folder0
Folder00
File000.x 1KB
File001.x 1KB
Folder01
File010.x 1KB
How would you advise I go about this?
Alternatively, I can first create a regular duplicate of the folder, and then scrub the contents of each file.
|
You can use find:
find src/ -type d -exec mkdir -p dest/{} \; \
-o -type f -exec touch dest/{} \;
Find directories (-type d) under src/ and create them (mkdir -p) under dest/, or (-o) find files (-type f) and touch them under dest/.
This will result in:
dest/src/<file-structure>
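A throwaway run of the command above (GNU find, which substitutes {} anywhere in an argument; the paths are made up):

```shell
cd "$(mktemp -d)"
mkdir -p src/Folder0/Folder00
echo data > src/Folder0/Folder00/File000.x
find src/ -type d -exec mkdir -p dest/{} \; -o -type f -exec touch dest/{} \;
find dest -type f -empty    # -> dest/src/Folder0/Folder00/File000.x
```

The copy in dest/ has the same skeleton, but every file is empty (touched), while the originals keep their content.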
You can use mv creatively to resolve this issue.
Other (partial) solution can be achieved with rsync:
rsync -a --filter="-! */" source_dir/ target_dir/
The trick here is the --filter=RULE option that excludes (-) everything that is not (!) a directory (*/)
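A quick check of the rule on a throwaway tree (directory names are made up) shows why this is only a partial solution - the directories come across, but no file placeholders are created:

```shell
cd "$(mktemp -d)"
mkdir -p src/d1 src/d2
echo x > src/d1/f
rsync -a --filter='-! */' src/ dst/
find dst -type f    # prints nothing: only the directory skeleton came across
```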
| Duplicating a directory skeleton - Only folders and file names, but not file contents |
1,537,101,239,000 |
I would like to only copy files from S3 that are from today out of a certain bucket with 100s of files. I tried the following: $ aws s3 ls s3://cve-etherwan/ --recursive --region=us-west-2 | grep 2018-11-06 | awk '{system("aws s3 sync s3://cve-etherwan/$4 . --region=us-west-2") }' but it doesn't quite work, I also get files from other dates.
How do I do this correctly?
|
That's because in the second aws s3 you use sync. Try cp instead. Also you can merge the "grep" and "awk" together.
$ aws s3 ls s3://cve-etherwan/ --recursive | awk '/^2018-11-06/ { system("aws s3 cp s3://cve-etherwan/" $4 " .") }'
| Only copy files from a particular date from s3 storage |
1,537,101,239,000 |
This is my first post here, thanks for helping out! I have two external hard drives, HD #1 is NTFS and HD #2 is Mac OS Extended (I think this is the same as HFS+). I am copying many files from #1 to #2 (docs, pics, etc). I want to verify that all items copied correctly.
On #1 (NTFS), folder A reports this size: 8,137,638,456 bytes (8.14 GB on disk) for 2,721 items
On #2 (HFS+), folder A' reports this size: 8,137,677,392 bytes (8.14 GB on disk) for 2,721 items
How can I verify that everything copied correctly? Kaleidoscope isn't helpful for this, since it just shows that the folders differ, without specifying how.
Diff reports only this: that every subfolder in A' has .DS_STORE :
diff -r "/Volumes/WD Passport/A" "/Volumes/My Passport/A'"
Only in A': .DS_Store
Only in A'/SUBFOLDER: .DS_Store
Only in A'/SUBFOLDER: .DS_Store
...
How can I verify that everything copied correctly? And is there something about NTFS and HFS+ file systems such that copying from one to the other results in different binary representations of files?
|
You are comparing directory sizes on two different operating systems with two different file systems. There is no reason to expect them to be the same.
Your real question is how to verify that the data on drive 1 is identical to drive 2. The best tool I have found for accomplishing this is called hashdeep. For more than 12 years it has been my go-to tool to accomplish file integrity verification.
The Windows binaries and source code are available here.
For MacOS X, if you don't want to compile your own copy, you can obtain it from the Fink project. Sorry for not posting the link, but I am new here and don't have the requisite 10 reputation points to post more than two links. Fink can be found at www finkproject com
The way this tool works is that you create a list of file hashes from a source and then use the resultant hashes to verify that the copied files match. It's really straightforward, but you should be able to find some how-tos and videos via Google.
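If hashdeep is unavailable, the same create-then-verify idea can be sketched with plain sha256sum from GNU coreutils (throwaway directories stand in for the two drives here):

```shell
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/sub" && echo data > "$src/sub/f.txt"
cp -R "$src/." "$dst"                                   # the copy to be verified
( cd "$src" && find . -type f -exec sha256sum {} + ) > "$src.hashes"
( cd "$dst" && sha256sum -c "$src.hashes" )             # prints "./sub/f.txt: OK"
```

Note that sha256sum -c only checks the files listed in the hash file, so extra files on the destination such as .DS_Store are simply ignored - which matches the situation in the question.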
| Size difference copying from NTFS to HFS+ |
1,537,101,239,000 |
I have a web server and it using by few developers.
Web-site is under website user. Other users are like user1, user2 etc.
I have given sudo access to user1, user2.. to access website.
The issue I'm having now is that users fail to copy scripts from website because some scripts are not directly readable by the users. And even if I try to cp using sudo, it fails because website doesn't have write permission to the users' directories.
I do not want to change the file permissions due to some security reasons.
I saw somewhere that I can do this using tar, but couldn't figure it out.
Can someone help...
Thanks!
|
You can do (as user1) something like
sudo -u website cat ~website/somefile > ~user1/somefile
Note that ~user1/somefile will first be created by the user running the shell (user1), and the cat will be executed as website.
You can use tar(1) with same trick, for multiple files:
sudo -u website tar cf - ~website/foo ~website/bar | tar xf -
Run as user1 in their directory, that will "create" a tar archive on stdout as website, and the other tar (without sudo, so running as the same user as the one running the shell, that is user1) will unpack that virtual tar file to the current directory (to which user1 can write).
UPDATE Note that tar will create the subdirectories leading to a file; you can avoid that behaviour by specifying -C so tar will enter the specified directory before starting:
sudo -u website tar -C ~website -cf - foo bar | tar xf -
This way, foo and bar will be created in current directory without leading subdirectories (but if you added blah/baz, it would create blah as subdir in which baz resides)
| How to copy file from one user to another? |
1,537,101,239,000 |
So for example I have one folder called "test". Inside that folder, I created another folder called "player" and a lot of text files, let's say 50 files.
[root@ip-10-0-7-70 test]# ls
kaka.txt player rooney.txt
Now i want to move all of that text file into "player" folder. What's the best way to do this?
I tried
cp -r ls | ^egrep 'player'
but it didn't work.
|
From the test directory, do:
mv -t player *.txt
Assuming all text files end in .txt.
This will mv all .txt files from current directory (test/) to player/ subdirectory.
| Is there any ways to copy all file to one specific sub-folder under the same parent-folder? [duplicate] |
1,537,101,239,000 |
I have 81 files in .fasta format that contain (up to) 53 items. Such as:
/User/MyData/Sample_1.fasta
/User/MyData/Sample_2.fasta
....
/User/MyData/Sample_81.fasta
Each .fasta file contains a name ID and string of characters delimited as:
>AT1G00001
ATCCACTGCTGTGTACCTGATCAGTGCTGACCCAYTGTGACACTGTG
>AT2G00002
AAAAATTTTGCCCGTGTGGGCCAAACTGTCATGCATGCACCGTACGTGCATGCAT
....
>ATXGXXXXX(up to 53)
AAACCCTCTTTGTGCCTGTGCATGCA
I would like to copy strings from each of my 81 .fasta files into a new .fasta file such that:
/User/MyData/AT1G00001.fasta
/User/MyData/AT2G00002.fasta
....
/User/MyData/ATXGXXXXX.fasta
And the content of one of these contains (after copying from all 'Sample_X.fasta' files in the directory):
>Sample_1
ATCCACTGCTGTGTACCTGATCAGTGCTGACCCAYTGTGACACTGTG
>Sample_2
ATCGACTCCCGTAGGACTGATTTTTCTGACCCCATTGTGACACTGTG
....
>Sample_81
TTCTGACCCCATTGTGACACTGTGATCGACTCCCGTAGGACTGATTT
I've come across one or two similar questions, but nothing with exactly the nuance of preserving the SampleName in the copied output file, and I'm having some difficulty getting examples from similar but different questions to work.
Thank you so much for any help!
|
I have the following code for you; below it there's an explanation of how it works.
First go into the working directory (cd /User/MyData/) to run this program:
awk '
FNR==1 { sample = FILENAME ; sub(/\.fasta$/, "", sample) }
/^>/ { target = substr($0,2)".fasta" ; next }
{ print ">" sample > target ; print > target }
' Sample_*.fasta
The awk program iterates over all Sample_*.fasta files. At start of each input file (FNR==1) it extracts the sample name from the current filename by removing the suffix ".fasta". If a line starts with > then the target filename for that record is taken from after the > character, and the filename suffix ".fasta" appended. For the other type of lines the previously extracted sample name is written to the target file, and in a second line the current data is written.
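As a quick sanity check, the program can be exercised on two fabricated one-record sample files:

```shell
cd "$(mktemp -d)"
printf '>AT1G00001\nAAA\n' > Sample_1.fasta
printf '>AT1G00001\nCCC\n' > Sample_2.fasta
awk '
FNR==1 { sample = FILENAME ; sub(/\.fasta$/, "", sample) }
/^>/   { target = substr($0,2)".fasta" ; next }
       { print ">" sample > target ; print > target }
' Sample_*.fasta
cat AT1G00001.fasta
# >Sample_1
# AAA
# >Sample_2
# CCC
```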
Note: If you observe problems with "too many open file descriptors" then the best choice is to switch to GNU awk if possible!
If GNU awk is not or can not be made available on your platform then you need a couple of additional changes; the key is to close each file after writing to it, by using the close() function, with the consequence that you have to append to the closed files. (This is more complex and also less performant, so it's worth thinking about getting GNU awk and use the first variant.)
Those changes would then result in a program like:
# because of the append operation you need to empty the file targets
# before calling subsequent awk code, e.g. by: rm -f AT???????.fasta
awk '
FNR==1 { sample = FILENAME ; sub(/\.fasta$/, "", sample) }
/^>/ { target = substr($0,2)".fasta" ; next }
{ printf ">%s\n%s\n", sample, $0 >> target ; close(target) }
' Sample_*.fasta
Note that before you call the awk program you have to make sure to remove or empty any existing output files from previous calls (otherwise your new output would get appended to the data already in the respective output file(s)).
| How to copy lines from multiple files into one new file and keep file name? |
1,537,101,239,000 |
I'm using a script to download files in two steps:
First, I'm downloading a file containing a list of files from a server to my host machine using rsync.
Then, I'm using rsync to download the actual files (quite a lot) given in the list from the server to my host computer.
The problem is that the script is asking for the password regularly i.e. it keeps on asking for the password of my account on the server. The files are downloading without any problem, so I'm guessing that the for loop is causing the issue as it is asking for the password while downloading every single file in the list from the server.
If I'm correct then what could be a possible solution so that the script will ask for the password only once? if I'm wrong then please do correct me.
NOTE: BTW, key based authentication is not allowed.
#!/bin/bash
rsync --partial -z --remove-source-files server:~/list ~/.
for i in $(cat ~/list)
do
rsync --partial -z server:/some/location/$i ~/someplace/$i
done
|
Your theory sounds right to me. Each time through the for loop when you invoke rsync, it's reconnecting to the server and causing you to be re-prompted.
Rather than loop through the file, ~/list using for you could give this list directly to rsync using the --files-from= switch.
Example
$ rsync --partial -z --files-from=/some/list server:/some/location/ ~/someplace/
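The mechanism can be demonstrated locally (no ssh involved; the file names are made up):

```shell
cd "$(mktemp -d)"
mkdir src dst
echo 1 > src/a; echo 2 > src/b; echo 3 > src/c
printf 'a\nc\n' > list                      # the downloaded file list
rsync --partial -z --files-from=list src/ dst/
ls dst                                      # only a and c were copied
```

Over ssh the single rsync invocation means a single connection, so you are prompted for the password only once.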
| Repetition of password while rsync-ing files? |
1,537,101,239,000 |
I have a directory ~/dir that contains a bunch of random folders like: ~/dir/av 801 and ~/dir/lm 320. I want to copy the contents of every inner folder (ie: av 801) into a different directory. The contents of that folder can consist of folders or files.
This is what I guessed the bash command would be:
cp ~/dir/*/* ~/target/
But it gives this error:
cp: when copying multiple files, last argument must be a directory
Is there a bash command that can do such a thing?
|
To copy directories, you need to tell cp to copy recursively by passing it the -R flag.
cp -R ~/dir/*/* ~/target/
If ~/target does not exist, you need to create it first.
mkdir ~/target
| Bash command that uses wildcard in place of folder to copy folder contents of multiple files into one directory? |
1,537,101,239,000 |
Suppose I use cp to copy a directory to another place. If the process takes long, and I create a new file under the source directory, will it be copied, or does it depend on something?
|
If you create a new file while cp is operating, it's likely that it won't be picked up. That might depend on the cp implementation: some gather a list of files when they start, others do it by chunks. If it's a recursive copy, all the cp implementations I've seen work directory by directory, so if you add the file to a directory that cp hasn't traversed yet, it will be copied.
If you add a file midway and you want to copy it, run rsync afterwards (after the copy is finished and the new file is fully written). Rsync will only copy the added file.
You can run rsync from the start (use rsync -a to do a recursive copy and preserve metadata). Rsync can do pretty much everything cp can do and much more, so you can ignore cp and always use rsync if you like (except on embedded systems that don't have rsync).
| Will files created under source directory after running `cp` be copied? |
1,337,549,815,000 |
I've been trying to copy some remote files to localhost using a regex expression, but it treats the regex as if it were a regular string and does not find any matching files.
Any ideas why?
file-download
#!/bin/bash
scp "[email protected]:/home/student/download-this/[a-zA-Z0-9]+\.tar\.gz" .
I have also tried copying from localhost to remote device, but still the regex won't work with scp
file-upload
scp "/root/scripts/this[XYZ]_[0-9]{1,5}\.txt" [email protected]:/home/fs/upload
|
As said in comments, scp is not regex capable. Better use a glob like this:
scp '[email protected]:/home/student/download-this/[a-zA-Z0-9]*.tar.gz' .
# ^ ^
# need these single quotes
You can't use regexes; instead, scp can handle globs, check
man bash | less +/'Pattern Matching'
To 'upload' a file by wildcard *:
scp [a-zA-Z0-9]*.tar.gz [email protected]:/home/student/upload-this/
| SCP not working with RegEx |
1,337,549,815,000 |
I'm trying to copy a command (istioctl) into my home directory on Debian so that I can always use it, as it will be added to my PATH variable automatically.
I tried ("link1" is a symbolic link to a hard drive containing the istioctl):
TestUser@ComputerName:~$ cp ~/link1/istio-1.12.2/bin/istioctl ~/cmd
and
TestUser@ComputerName:~$ cp ~/link1/istio-1.12.2/bin/istioctl ~/bin
neither directory existed in ~ before that. At least ll and ls didn't show them.
but this is what I get:
TestUser@ComputerName:~$ ll ~
total 171856
-rwxr-xr-x 1 TestUser users 87990272 Jan 24 19:47 bin
-rwxr-xr-x 1 TestUser users 87990272 Jan 24 19:50 cmd
lrwxrwxrwx 1 TestUser users 38 Jan 13 18:16 link1 -> /some/path1
lrwxrwxrwx 1 TestUser users 39 Jan 13 18:10 link2 -> /some/path2
lrwxrwxrwx 1 TestUser users 38 Jan 13 18:17 link3 -> /some/path3
lrwxrwxrwx 1 TestUser users 38 Jan 13 18:15 link4 -> /some/path4
lrwxrwxrwx 1 TestUser users 38 Jan 13 18:15 link5 -> /some/path5
TestUser@ComputerName:~$
TestUser@ComputerName:~$ ll ~/bin
-rwxr-xr-x 1 TestUser users 87990272 Jan 24 19:47 /home/TestUser/bin
TestUser@ComputerName:~$
TestUser@ComputerName:~$ ll ~/cmd
-rwxr-xr-x 1 TestUser users 87990272 Jan 24 19:50 /home/TestUser/cmd
I don't understand why the cmd and bin folders behave this way and why they don't contain the file.
Also tried as root:
root@ComputerName:~# cp ~/link1/istio-1.12.2/bin/istioctl /home/TestUser/bin
same thing.
|
You mistakenly thought that the cp command would create a directory at the destination and then put the source file in that directory. It doesn't work like that -- to put a source file into a destination directory, that directory has to already exist; otherwise, cp will simply create a destination file of that name.
This behavior is described (among man cp and other places) in the POSIX standard for cp:
cp [-Pfip] source_file target_file
cp [-Pfip] source_file... target
The first synopsis form is denoted by two operands, neither of which are existing files of type directory. The cp utility shall copy the contents of source_file ... to the destination path named by target_file.
The second synopsis form is denoted by two or more operands where the -R option is not specified and the first synopsis form is not applicable. It shall be an error if any source_file is a file of type directory, if target does not exist, or if target does not name a directory. The cp utility shall copy the contents of each source_file ... to the destination path named by the concatenation of target, a single slash character if target did not end in a slash, and the last component of source_file.
Essentially, you need to cp source-file destination-file or cp source-file pre-existing-directory.
To achieve what you want, mkdir ~/bin or mkdir ~/cmd and then cp ~/link1/istio-1.12.2/bin/istioctl ~/bin. It sounds like you may have added your $HOME directory to your PATH. That's legal, but slightly less common. More common would be to add a ~/bin or ~/cmd directory to your PATH. Ensure you've added that directory to your PATH for success.
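The two behaviours can be seen side by side with a throwaway file standing in for istioctl:

```shell
cd "$(mktemp -d)"
touch istioctl
cp istioctl cmd             # "cmd" does not exist: cp creates a *file* named cmd
mkdir bin
cp istioctl bin             # "bin" is an existing directory: cp copies *into* it
ls -ld cmd bin/istioctl     # cmd is a regular file; istioctl landed inside bin/
```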
| Can't get contents of directory in home and can't copy file there |
1,337,549,815,000 |
Example
Suppose I have 2 directories a/ and b/. Let's say they contain the following files:
a/
a/foo/1
a/bar/2
a/baz/3
b/
b/foo/1
b/bar/2
such that a/foo/1 and b/foo/1 are identical but a/bar/2 and b/bar/2 are different.
After merging a/ into b/, I want to get:
a/
a/bar/2
b/
b/foo/1
b/bar/2
b/baz/3
Explanation
a/foo/ and b/foo/ are (recursively) identical, so we remove a/foo/.
a/bar/2 and b/bar/2 are different, so we do nothing.
a/baz/ exists only in a/ but not in b/, so we move it to b/baz/.
Is there a ready-made shell command for this? I have a feeling that rsync may work, but I am unfamiliar with rsync.
|
Can't say that I know of a particular command that would do this for you. But you could accomplish this just using hashing.
Naïve example below:
#!/bin/bash
# ...some stuff to get the files...
# Get hashes for all source paths (keep only the hash field, since
# md5sum also prints the pathname)
for srcFile in "${srcFileList[@]}"
do
    srcHashList+=( "$(md5sum "$srcFile" | cut -d' ' -f1)" )
done
# Get hashes for all destination paths
for dstFile in "${dstFileList[@]}"
do
    dstHashList+=( "$(md5sum "$dstFile" | cut -d' ' -f1)" )
done
# Compare hashes, exclude identical files, regardless of their path.
for srci in "${!srcHashList[@]}"
do
    match=0
    for dsti in "${!dstHashList[@]}"
    do
        if [ "${srcHashList[$srci]}" == "${dstHashList[$dsti]}" ]
        then
            match=1
            break
        fi
    done
    if [ $match != 1 ]
    then
        newSrcList+=( "${srcFileList[$srci]}" )
    fi
done
# ...move files afterwards based on the new list
This could definitely be done cleaner, especially if you only care about files with identical paths to each other. It could also probably be done in linear time, but the general concept will work.
| How to merge 2 directories without overwriting *different* files with same relative path? |
1,337,549,815,000 |
Apologies if this seems too basic or has been asked before (I didn't find an answer when I searched but this is also my first time using this website) but I'm a complete beginner who is using Unix for lab research work (in biology- I've never taken a computer science course in my life) at university and this is my literal third day using the system.
I need to save a .txt output file that I have on our server to my personal computer and I'm not sure how. I've seen some resources online already that said I could use the command scp but I'm not sure if that does the action I'm wanting (essentially saving the document to my desktop like you would do with a .doc on word) or how to make sure I'm directing it to the right place. These files are very large and take multiple days to run through all of the data so it's important that whatever command I use doesn't risk data loss so I don't ruin the data and have to start over. This is my best attempt at what I think the command should look something like-
scp myname@host:FileName.txt ~/Desktop/
However when I tried this my computer told me it couldn't find the file I had requested. Any advice? I'm not married to using the scp command either if that isn't the best way of doing it. I don't think I can download any additional programs to save the file for me so that isn't an option, but I do know there's a get command and I'm sure there's other ways too. It just feels really frustrating that I can't seem to figure out something so simple that there isn't a tutorial for it.
I'd really appreciate your feedback, thanks
|
I know it was quick, but a friend of mine who actually works with computers just happened to get back to me not long after I posted here. I thought I would put the answer so anybody who is having the same problem as me in the future could find it too. The code that worked for me looked like this:
scp myname@host:Directory/FileName.txt ~/Desktop/
Like Freddy said up above, you need to specify the path. Because I'm new at this I didn't actually know what that meant I needed to do - if you're totally new like me, it just means you need to list out all of the directories and other subfiles/folders your file is in so the computer can find it. In my case it was just in the directory without a bigger folder. If your file is in a folder, it should work as long as you just keep adding the folders to the line in the order you go through them with a slash in between. Also important is that apparently you're usually supposed to write the command this way-
scp myname@host:/Directory/FileName.txt ~/Desktop/
That didn't work for me. My computer liked it better without the first slash after the colon - the path after the colon is taken relative to your remote home directory, while a leading slash makes it absolute from the filesystem root, so host:/Directory/... only works if Directory really sits at the root. If the version without the slash doesn't work for you, try it with the slash and the full absolute path.
I also needed to log out of the server (ssh) I was on, which was the one hosting my file. I'm sure this is obvious to anyone who knows how these things work, but again for anyone that might need help in the future: scp has to be run from your own computer, not from inside the ssh session, because otherwise both paths in the command get interpreted on the server. (Again, I'm sure there's a better explanation for this, I just don't know it yet.)
Anyways thanks again! I know this might not have been the most important question but it was definitely a good learning experience for me, and I hope it can be a useful resource for someone else. I'm especially glad I found this site so I can come back to it in the future.
| How to save/download a .txt file from terminal to personal computer (OSX) |
1,337,549,815,000 |
I am using the following rsync command which includes the "update" option, meaning it will skip files at the receiver which are newer. It works, except that I need it to tell me the files which were skipped because they are newer on the receiver.
rsync -ahHX --delete --itemize-changes --stats --update /path/to/source/ --exclude=/dir1/ --exclude=/dir2/ --exclude=/dir3/ /path/to/receiver/
I have reviewed the man page and I don't see such an option. I hope I just missed it or didn't understand something.
If rsync will not do this, what other tools can I use? I tried diff -rqw /path/to/source/ /path/to/receiver/ but that takes far too long. It is doing more than I need.
The total file size is 24.60 GB in 71,835 files.
|
I hope there is a better solution, but this is what I came up with:
First, run this check in the opposite direction of my copy operation:
rsync --dry-run -ahvP --itemize-changes --stats /path/to/receiver/ /path/to/source/
That will tell me which files are newer on the receiver (and will therefore be skipped by my rsync command). I can manually address those files, then run my original rsync command:
rsync -ahHX --delete --itemize-changes --stats --update /path/to/source/ --exclude=/dir1/ --exclude=/dir2/ --exclude=/dir3/ /path/to/receiver/
This accomplishes my goal, but it involves waiting for rsync to make the file list twice. It takes about 15 seconds each time. I can live with that if there is no better solution.
| rsync: notify of any files that were skipped |
1,337,549,815,000 |
Right now I have following command to copy all contents of the current directory to sub-directory, provided if the subdirectory is created in advance:
cp -p !($PWD/bakfiles2) bakfiles2/
But I sometimes have to visit folders which I have never visited before, so the sub-directory "bakfiles2" may not exist there. Can I somehow create that backup directory with the current timestamp (so as to avoid conflicts with any existing directory), on the fly, with a single copy command or bash script?
It would be great if the script can ignore any sub-directory starting with a particular pattern which could then be reserved for backup directory names like _bak_* (Note: * means any number of any characters).
|
The cp command doesn't have an option to create the destination directory if it doesn't exist while copying, but you can achieve this with scripting.
Or simply use the rsync command, which can create the destination directory if it doesn't exist (only on the last level, though).
rsync -rv --exclude='_bak_*/' /path/in/source/ /path/to/destination
Note that the trailing / in /path/in/source/ will prevent copying the source directory itself, and adding the --exclude option keeps directories with matching names from being synced.
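Putting both pieces together, a small sketch that creates a timestamped _bak_* directory on the fly and copies everything else into it (the --exclude also stops the new directory from being copied into itself):

```shell
cd "$(mktemp -d)"                        # stands in for the directory to back up
echo hello > file1; mkdir -p _bak_old    # an earlier backup that must be skipped
bak="_bak_$(date +%Y%m%d_%H%M%S)"
mkdir -p "$bak"
rsync -a --exclude='_bak_*/' ./ "$bak/"
ls "$bak"                                # only file1
```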
| Backup all contents of current directory to a subdirectory inside the current directory, which will be created if not exists |
1,337,549,815,000 |
I want to connect from ServerA to ServerB, check Oracle Database Status and PendingLogs, record the results, then use the result on ServerA, compare with the result on ServerA, and generate logs on ServerA.
I used ssh -q [email protected] sh -s < /root/script.sh > /root/output.txt
but I still have to enter password manually.
is there any way to turn off interactive login?
how can I run script file via spawn ssh?
|
1- is there any way to turn off interactive login?
Yes, use public key authentication or sshpass to enter password.
2- how can I run script file via spawn ssh ?
Yes, use expect script. If you want to run some other script inside (awk), you need to escape the special characters (\$).
| run script remotely and use result locally with ssh auto login |
1,337,549,815,000 |
#!/bin/bash
while IFS='' read -r line || [[ -n "$line" ]]; do
cd /home/Sud/Minimal\ Packages/All/
if [ -d $line ]
then
cp $line*.rpm /home/Sud/NewFolder/rpms/
else
echo $line>>/home/Sud/NewFolder/notfound.txt
fi
done < "$1"
I am trying to run the above code to
Read a text file line by line
Search a folder if there is a directory by that name
a) if yes; copy contents of that directory to another directory
b) if not, copy the directory name to another text file.
Each time I run the script, it copies all names to notfound.txt even though they are present in the folder I'm searching.
Where am I going wrong?
|
if cp "$line"*.rpm destination/ ; then
    echo "$line successful!"
else
    echo "$line not found!"
    echo "$line" >> /home/Sud/notfound.txt
fi
This worked for me.
| Copy files from a directory if name present in a text file |
1,430,920,409,000 |
ftp -n ${FTP_HOST} << STOP
user ${FTP_USERNAME} ${FTP_PASSWORD}
binary
lcd ${FTP_FROM_DIR}
cd ${FTP_TO_DIR}
put ${reportFileName}
STOP
That is my code which is not successfully copying the file to remote host but using it manually it successfully copies the file to remote host.
When run from a script, "(local-file) usage:put local-file remote-file" appears in the console.
what could be the problem?
|
I suspect part of your issue is with the way you've constructed your heredoc. Try it like so:
$ ftp -n ${FTP_HOST} << STOP
user ${FTP_USERNAME} ${FTP_PASSWORD}
binary
lcd ${FTP_FROM_DIR}
cd ${FTP_TO_DIR}
put ${reportFileName}
STOP
If you truly want to keep the indentation in the script then you'll need to change to this form of the heredoc:
$ ftp -n ${FTP_HOST} <<-STOP
user ${FTP_USERNAME} ${FTP_PASSWORD}
binary
lcd ${FTP_FROM_DIR}
cd ${FTP_TO_DIR}
put ${reportFileName}
STOP
Whenever you indent your heredoc you need to make use of the <<- form to tell the shell to strip the leading tabs (it only strips tab characters, not spaces) before running the commands that are contained within.
A sidebar on using echo
When you're parsing variables within scripts and you innocently use echo to print messages you're typically getting more than just the string you're interested in. echo will also include a trailing newline character. You can see it here in this example:
$ echo $(echo hi) | hexdump -C
00000000 68 69 0a |hi.|
00000003
The 0a is the ASCII code for a newline. So you'll typically want to disable this behavior by using the -n switch to echo:
$ echo -n $(echo hi) | hexdump -C
00000000 68 69 |hi|
00000002
Or even better upgrade to using printf.
References
How to use ftp in a shell script
| FTP "put" not copying file to remote host when ran from shell script but copies the file to remote host when ran manually |
1,430,920,409,000 |
I'm using CentOS 6.5 and Putty.
My problem is that directory file names are shown in dark blue color which is hard to read. I google searched and found this link; basically it's copying the DIR_COLORS file from /etc to the home directory so changes will only affect the user instead of everyone. The real problem is that whenever I run this command cp /etc/DIR_COLORS ~/.dir_colors, no .dir_colors file is created. And there is no error message too. I ran it using sudo too but also no file is created.
When I named the file dir_colors (without the dot), the file was created, but when I changed the color from 1;34 to 1;33 in DIR # directory, the dark blue color didn't change to the new color. I'm guessing that's because the dot is missing before the file name. Any ideas why no file is created when I use .dir_colors?
|
Once the file is copied as per the instructions, just do cd to make sure you're in your home directory, then name_of_your_text_editor .dir_colors and edit the file all you want, and save it. Restart your terminal to see the results. Note that file names starting with a dot are hidden from a plain ls; use ls -a to confirm that the copy actually succeeded.
If for some reason you can't find the .dir_colors file, you could always use whatever file you want that contains the color definition you want like so:
eval `dircolors yourcolorfile`
Effect is immediate on your next ls. This won't persist across sessions so you could add that line to your .bashrc file and achieve the same result as having the .dir_colors file in your home directory. Usage is not limited to CentOS
| cp /etc/DIR_COLORS ~/.dir_colors not responding |
1,430,920,409,000 |
I have a question related to the serial terminal. I have a Linux computer and I connect to an ARM mini-computer running debian with screen /dev/ttyUSB0 115200. From there, I can login and type commands.
That means I can virtually transfer any text file, by creating a file on the ARM through the serial terminal, opening the file on the Linux computer, and copying all the text into the open terminal.
Now, I would like to transfer binary data. My ARM computer doesn't have access to internet and thus, can't install packages. That includes minicom, sadly. Besides, it doesn't have rz out of the box.
In summary, I can easily transfer everything that is text. Now, I would like to copy a binary file through it. How can it be done?
P.s.: This question extend Send a binary file through serial terminal, with a bit more constraints.
|
Use shar from GNU sharutils to make a text file out of the binary file, then copy and paste that text file as you’ve been doing. The text file will be a shell script which will recover the original binary file when executed. https://en.wikipedia.org/wiki/GNU_Sharutils
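If sharutils turns out to be unavailable on one end as well, a rougher equivalent of the same trick can be improvised with base64 from coreutils, which is equally text-safe (the file names here are placeholders):

```shell
# On the sending machine: turn the binary into pasteable text
base64 myprogram > myprogram.b64

# Paste the contents of myprogram.b64 into a file on the target
# through the serial terminal, then decode it back into a binary:
base64 -d myprogram.b64 > myprogram
chmod +x myprogram
```

Comparing cksum output on both ends afterwards is a cheap way to confirm the paste survived intact.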
| Send a binary file though serial terminal |
1,430,920,409,000 |
I have a directory structure like this
Main Directory
    SubDirectory
        SubsubDirectory1
            xx.jpg
            xx,jpg
        SubsubDirectory2
            xx.jpg
            xx,jpg
    SubDirectory
        SubsubDirectory1
            xx.jpg
            xx,jpg
        SubsubDirectory2
            xx.jpg
            xx,jpg
    SubDirectory
        SubsubDirectory1
            xx.jpg
            xx,jpg
        SubsubDirectory2
            xx.jpg
            xx,jpg
I want to copy all files in all SubsubDirectory1 to a destination directory
|
If the files have non-unique names, then
cp "Main Directory"/*/Subsubdirectory1/* destdir
would overwrite some of the files at the destination. This would also fail if there are thousands of matching pathnames.
To get around this, using GNU cp:
for pathname in "Main Directory"/*/Subsubdirectory1/*; do
cp --backup=numbered "$pathname" destdir
done
This would create numbered backups of the files that would otherwise have been overwritten.
The same thing but using non-GNU cp:
for pathname in "Main Directory"/*/Subsubdirectory1/*; do
# create first stab at destination pathname
dest="destdir/${pathname##*/}"
i=0
while [ -e "$dest" ]; do
# destination name exists, remove backup number from end of
# pathname and replace with next one in the sequence
i=$(( i + 1 ))
dest="${dest%.~*~}.~$i~"
done
cp "$pathname" "$dest"
done
| copy files from multiple sub-directories to the same destination directory |
1,430,920,409,000 |
I want to copy files that require root to read/write from one system to another.
My current solution is to use sudo on each system and use tee as shown.
ssh host sudo cat /etc/somefile | sudo tee /etc/somefile > /dev/null
This works but tee sends its input to stdout so I have to send tee's output to /dev/null.
I looked to the UNIX cat and copy command cp and did not find an answer.
See https://man7.org/linux/man-pages/man1/cp.1.html
and https://man7.org/linux/man-pages/man1/cat.1.html
UPDATED: I now realize that I should have stated that the solution needs to support sudo so the simple solution of using cat won't work.
For example,
ssh host sudo cat /etc/somefile | sudo cat > /etc/somefile
fails because the directory /etc can only be written by root (in my case) and the re-direction to the file > /etc/somefile runs under the current user (who doesn't have access to write to /etc).
|
There's:
sh -c 'exec cat > file'
Or for arbitrary $files, passed either as environment variables:
sudo FILE="$file" sh -c 'exec cat > "$FILE"'
Or as argument:
sudo sh -c 'exec cat > "$1"' sh "$file"
(where sh goes into $0, and the contents of $file in $1 for that inline script).
(see also >> in place of > to open in append mode, or 1<> to open without truncation (and in read+write mode) to overwrite the file in place, similar to dd's conv=notrunc).
In any case, do not do:
sudo sh -c "exec cat > $file"
As that fails if $file contains any character special in the shell syntax such as space, ;, $... and introduces a command injection vulnerability.
There's also:
dd bs=65536 of=file
(with status=none with the GNU implementation of dd or compatible to suppress the transfer report. There are more options to control how the file is opened, the list of which varies with the dd implementation).
On most systems:
cp /dev/stdin file
There's also moreutils's
sponge file
With GNU sort at least and on text input:
sort -mo file
With text input (or sed implementations that can cope with non-text):
sed -n 'w file'
With text input:
awk '{print > "file"}'
Or with GNU awk:
gawk -v ORS= '{print $0 RT > "file"}'
Here, you could also open the local file as root and then run ssh as the regular user:
sudo zsh -c '
  USERNAME=$SUDO_USER ssh host "sudo cat /etc/somefile" > /etc/somefile'
That means the data doesn't have to be shoved through an extra pipe.
You could also compress on the fly with xz and decompress on the local end with pixz which supports uncompressing into a file:
ssh host 'sudo xz -0 -c /etc/somefile' | sudo pixz -tdo /etc/somefile
| Does Unix have a command to read from stdin and write to a file (like tee without sending output to stdout)? |
1,430,920,409,000 |
I want to copy files in multiple subfolders with variable subfolder names.
Example:
mkdir rootdir
mkdir rootdir/dir1
mkdir rootdir/dir2
mkdir rootdir/dir3
touch rootdir/dir1/foo.txt
touch rootdir/dir2/foo.txt
touch rootdir/dir3/foo.txt
With known subfolder names, I can copy each file individually.
cp rootdir/dir1/foo.txt rootdir/dir1/bar.txt
cp rootdir/dir2/foo.txt rootdir/dir2/bar.txt
cp rootdir/dir3/foo.txt rootdir/dir3/bar.txt
But with an unknown number of subfolders with unknown subfolder names (I know the filenames), I can't do it anymore.
I can find the files...
ls ./**/foo.txt
find . -name foo.txt
... but I don't find the syntax which allows piping this information into cp (or into an alternative tool).
|
There are a few options:
find rootdir -type f -name foo.txt -execdir cp {} bar.txt \;
This searches for regular files called foo.txt anywhere in or under rootdir, and when one is found, cp is used to copy it to the name bar.txt in the same directory. The -execdir option is non-standard but commonly implemented and will execute the given utility in the directory where the file was found. The {} will be replaced by the found file's name.
Alternatively,
find rootdir -type f -name foo.txt -exec sh -c '
for pathname do
cp "$pathname" "${pathname%/*}/bar.txt"
done' sh {} +
This does basically the same thing, but calls a short in-line sh -c script with batches of found foo.txt files. The cp in the loop will copy each of these to the same directory as the found file, but with the filename part of the pathname replaced by bar.txt.
Using **, as you mention it in the question (assuming a bash shell):
shopt -s globstar nullglob dotglob
for pathname in rootdir/**/foo.txt; do
cp "$pathname" "${pathname%/*}/bar.txt"
done
In bash, setting the globstar shell option enables the use of ** for matching into subdirectories recursively, and dotglob will enable patterns to also match hidden names. The nullglob shell option makes patterns disappear completely instead of remaining unexpanded if there is no match.
Again, but with zsh (explicitly asking for regular files and enabling the equivalent handling of the globbing as dotglob and nullglob would do for bash):
for pathname in rootdir/**/foo.txt(.ND); do
cp $pathname $pathname:h/bar.txt
done
Here, $pathname:h would be the same as $pathname but with the filename portion of the pathname removed (:h as in "only the head", not the trailing bit).
| Copy file in multiple (variable) folders |
1,430,920,409,000 |
Is there any way to archive a folder and keep the owners and permissions intact? I'm doing a backup of some files, which I want to move using a usb-stick, which has a FAT filesystem. So the idea was to keep all this information and file setting within an archive.
I know that the -p option for tar keeps the permissions, but still not the ownership.
|
tar's default mode is to preserve ownership and permissions on archive creation; I don't believe there's even an option not to store the data. When you extract an archive, if you're a normal user, the default is to use stored permissions minus the umask and set the owner to whoever's extracting; if you're superuser, the default is to use stored permissions and ownership verbatim. There are options to control how these metadata are restored on extraction (see the man page).
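A minimal round trip might look like this — backup.tar and somefolder are placeholder names, and --same-owner is the GNU tar spelling (it is already the default when extracting as root):

```shell
# Create the archive; ownership and permissions are stored automatically
tar -cpf backup.tar somefolder/

# ...move backup.tar via the FAT-formatted stick, then restore as root:
sudo tar -xpf backup.tar --same-owner
```

Extracting with -p (--preserve-permissions) applies the recorded modes instead of filtering them through the umask.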
| How do I archive a folder keeping owners and permissions intact? |
1,430,920,409,000 |
I'm trying to copy a folder (SRC) containing some files and subfolders.
The content and SRC itself have setgid bit enabled (that is the s in place of the x in the group triplet). Furthermore, the group of the whole content is srcgrp while the files have different owners (let's say me, she and they).
Now, I want to copy all the folder (SRC included) into another folder (let's say /mnt/d/SRC to /home/dog/data/SRC).
The problem is as follows: when I run cp -Rp /mnt/d/SRC /home/dog/data/SRC the folder is copied to /home/dog/data/SRC, but the owner of all the contents becomes me, even if I run chmod g-s /home/dog/data beforehand.
I'd like to keep the owners of source files. How could I get it?
Thank you.
|
You must run the copy command as root as otherwise the owner will be reset to you, and the group may be reset.
sudo cp -a /mnt/d/SRC /home/dog/data/SRC
The full rules are considered in order:
If you are root then all owner/group and permissions are kept
If you are a member of the group then the group name and permissions are kept
Otherwise owner and group are reset to you and your primary group
These rules are honoured using rsync. However, using (GNU) cp a further restriction is applied in that setuid/setgid bits are removed if the owner or group cannot be kept.
Example using a copy from src to dst, and then sdiff to list differences between the two directories.
Initial state for each attempt:
ls -l src
total 36
drwxr-xr-x 2 chris chris 4096 May 6 15:16 chris-dir
-rwxr-xr-x 1 chris chris 0 May 6 15:16 chris-file
drwxr-sr-x 2 chris chris 4096 May 6 15:16 chris-sgid-dir
-rwxr-sr-x 1 chris chris 0 May 6 15:16 chris-sgid-file
drwsr-xr-x 2 chris chris 4096 May 6 15:16 chris-suid-dir
-rwsr-xr-x 1 chris chris 0 May 6 15:16 chris-suid-file
drwxr-xr-x 2 root root 4096 May 6 15:16 root-dir
-rwxr-xr-x 1 root root 0 May 6 15:16 root-file
drwxr-sr-x 2 root root 4096 May 6 15:16 root-sgid-dir
-rwxr-sr-x 1 root root 0 May 6 15:16 root-sgid-file
drwsr-xr-x 2 root root 4096 May 6 15:16 root-suid-dir
-rwsr-xr-x 1 root root 0 May 6 15:16 root-suid-file
drwxr-xr-x 2 test1 test1 4096 May 6 15:16 test1-dir
-rwxr-xr-x 1 test1 test1 0 May 6 15:16 test1-file
drwxr-sr-x 2 test1 test1 4096 May 6 15:16 test1-sgid-dir
-rwxr-sr-x 1 test1 test1 0 May 6 15:16 test1-sgid-file
drwsr-xr-x 2 test1 test1 4096 May 6 15:16 test1-suid-dir
-rwsr-xr-x 1 test1 test1 0 May 6 15:16 test1-suid-file
Using (GNU) cp
cp -a src/. dst/ && sdiff -lw132 <(ls -l src | sort -k9) <(ls -l dst | sort -k9)
total 36 (
drwxr-xr-x 2 chris chris 4096 May 6 15:16 chris-dir (
-rwxr-xr-x 1 chris chris 0 May 6 15:16 chris-file (
drwxr-sr-x 2 chris chris 4096 May 6 15:16 chris-sgid-dir (
-rwxr-sr-x 1 chris chris 0 May 6 15:16 chris-sgid-file (
drwsr-xr-x 2 chris chris 4096 May 6 15:16 chris-suid-dir (
-rwsr-xr-x 1 chris chris 0 May 6 15:16 chris-suid-file (
drwxr-xr-x 2 root root 4096 May 6 15:16 root-dir | drwxr-xr-x 2 chris chris 4096 May 6 15:16 root-dir
-rwxr-xr-x 1 root root 0 May 6 15:16 root-file | -rwxr-xr-x 1 chris chris 0 May 6 15:16 root-file
drwxr-sr-x 2 root root 4096 May 6 15:16 root-sgid-dir | drwxr-xr-x 2 chris chris 4096 May 6 15:16 root-sgid-dir
-rwxr-sr-x 1 root root 0 May 6 15:16 root-sgid-file | -rwxr-xr-x 1 chris chris 0 May 6 15:16 root-sgid-file
drwsr-xr-x 2 root root 4096 May 6 15:16 root-suid-dir | drwxr-xr-x 2 chris chris 4096 May 6 15:16 root-suid-dir
-rwsr-xr-x 1 root root 0 May 6 15:16 root-suid-file | -rwxr-xr-x 1 chris chris 0 May 6 15:16 root-suid-file
drwxr-xr-x 2 test1 test1 4096 May 6 15:16 test1-dir | drwxr-xr-x 2 chris chris 4096 May 6 15:16 test1-dir
-rwxr-xr-x 1 test1 test1 0 May 6 15:16 test1-file | -rwxr-xr-x 1 chris chris 0 May 6 15:16 test1-file
drwxr-sr-x 2 test1 test1 4096 May 6 15:16 test1-sgid-dir | drwxr-xr-x 2 chris chris 4096 May 6 15:16 test1-sgid-dir
-rwxr-sr-x 1 test1 test1 0 May 6 15:16 test1-sgid-file | -rwxr-xr-x 1 chris chris 0 May 6 15:16 test1-sgid-file
drwsr-xr-x 2 test1 test1 4096 May 6 15:16 test1-suid-dir | drwxr-xr-x 2 chris chris 4096 May 6 15:16 test1-suid-dir
-rwsr-xr-x 1 test1 test1 0 May 6 15:16 test1-suid-file | -rwxr-xr-x 1 chris chris 0 May 6 15:16 test1-suid-file
Using rsync:
rsync -a src/ dst && sdiff -lw132 <(ls -l src | sort -k9) <(ls -l dst | sort -k9)
total 36 (
drwxr-xr-x 2 chris chris 4096 May 6 15:17 chris-dir (
-rwxr-xr-x 1 chris chris 0 May 6 15:17 chris-file (
drwxr-sr-x 2 chris chris 4096 May 6 15:17 chris-sgid-dir (
-rwxr-sr-x 1 chris chris 0 May 6 15:17 chris-sgid-file (
drwsr-xr-x 2 chris chris 4096 May 6 15:17 chris-suid-dir (
-rwsr-xr-x 1 chris chris 0 May 6 15:17 chris-suid-file (
drwxr-xr-x 2 root root 4096 May 6 15:17 root-dir | drwxr-xr-x 2 chris chris 4096 May 6 15:17 root-dir
-rwxr-xr-x 1 root root 0 May 6 15:17 root-file | -rwxr-xr-x 1 chris chris 0 May 6 15:17 root-file
drwxr-sr-x 2 root root 4096 May 6 15:17 root-sgid-dir | drwxr-sr-x 2 chris chris 4096 May 6 15:17 root-sgid-dir
-rwxr-sr-x 1 root root 0 May 6 15:17 root-sgid-file | -rwxr-sr-x 1 chris chris 0 May 6 15:17 root-sgid-file
drwsr-xr-x 2 root root 4096 May 6 15:17 root-suid-dir | drwsr-xr-x 2 chris chris 4096 May 6 15:17 root-suid-dir
-rwsr-xr-x 1 root root 0 May 6 15:17 root-suid-file | -rwsr-xr-x 1 chris chris 0 May 6 15:17 root-suid-file
drwxr-xr-x 2 test1 test1 4096 May 6 15:17 test1-dir | drwxr-xr-x 2 chris chris 4096 May 6 15:17 test1-dir
-rwxr-xr-x 1 test1 test1 0 May 6 15:17 test1-file | -rwxr-xr-x 1 chris chris 0 May 6 15:17 test1-file
drwxr-sr-x 2 test1 test1 4096 May 6 15:17 test1-sgid-dir | drwxr-sr-x 2 chris chris 4096 May 6 15:17 test1-sgid-dir
-rwxr-sr-x 1 test1 test1 0 May 6 15:17 test1-sgid-file | -rwxr-sr-x 1 chris chris 0 May 6 15:17 test1-sgid-file
drwsr-xr-x 2 test1 test1 4096 May 6 15:17 test1-suid-dir | drwsr-xr-x 2 chris chris 4096 May 6 15:17 test1-suid-dir
-rwsr-xr-x 1 test1 test1 0 May 6 15:17 test1-suid-file | -rwsr-xr-x 1 chris chris 0 May 6 15:17 test1-suid-file
Note that where rsync has maintained the group it is because the non-root user running the command is a member of that group. Otherwise the group will be reset to the user's primary group.
| Keeping owners in a folder copy |
1,430,920,409,000 |
I found some bottleneck on my workflow which is as follows. I do have one master computer which needs to send data to other node machines. This is done in a for loop such as:
for all nodes: rsync <Options> <Master> <Node>
This works quite nicely if the number of nodes is not too large, e.g. 4 or 8 (copying time around 2 min). However, it's a linear curve. For 16 nodes it takes around 3.5 minutes and for 128 nodes it's already 20 minutes (and then this stuff becomes almost the bottleneck of my workflow).
My intention is to get rid of the stupid for all loop and do things more like that:
1. copy master to node1
wait & check if successfull
2. copy master to node2 && copy node1 to node3
wait & check if successfull
3. copy master to node4 && copy node1 to node5 && copy node3 to node6
wait & check if successfull
...
My question: Is there anything in bash available which would help me to do something like that or are there more reliable solutions? I am a bit restricted based on the IT in terms of tools to use. Any suggestion is warmly welcomed.
Another nice thing I have in mind which might be even better:
1. copy master data to memory
2. send the memory stuff to all nodes simultaneously (if that is possible) and all nodes write simultaneously
Maybe rsync can handle something like that as well? Any idea is warmly welcomed.
For information purposes, I already created a test script, which is given below and can also be optimised. E.g., the master part is not necessary, as we can treat node0 as the master node; hence, the code could be further reduced. Also, the check whether the copying was successful is still missing, etc. - it is just an example of my idea. You can copy it, create a master folder, put some files in, and create 128 node folders. Then run the script and compare with a serial copy (the standard approach given in the script). It's faster even though we work on one machine/HDD only (hence, not really representative).
#!/bin/bash
#
# Tobias Holzman
# 25.03.2023
#
# Description
# Script that speeds-up the function serial copying from master to nodes
# by using other nodes at which the data are already copied.
#
# Standard Approach:
# forAll(nodes, nodei)
# {
# rsync <masterData> <nodei_path>
# }
#
#------------------------------------------------------------------------------
function copyToNode ()
{
# Simple check if folder exist
if [ ! -d $1/triSurface ]
then
echo "Error"
return 0
fi
rsync -av --progress $1/triSurface $2/ > /dev/null &
return 0
}
#------------------------------------------------------------------------------
t1=$(date +%s)
# How many nodes we have (for test purpose)
nCopy=$(ls -d node* | wc -l)
nodesEmpty=()
nodesCopied=()
copyNodes=()
# Create a string array that includes all node names
for i in $(seq 0 $nCopy)
do
nodesEmpty+=("node$i")
done
echo "We need to copy the data from master to $nCopy nodes"
echo "----------------------------------------------------"
i=0
done=0
dataAtHowManyNodes=0
while true
do
i=$((i+1))
# Copy array nodesCopied which we work with in one loop
# as we dont want to change the original array
copyNodes=("${nodesCopied[@]}")
echo ""
echo " ++ Copy run #$i"
echo " ++ Remaining nodes to which we need to copy: ${nodesEmpty[@]}"
echo " ++ Available nodes used for copy : ${nodesCopied[@]}"
echo " |-> parallel copy sequences: $dataAtHowManyNodes"
# Only master copy
if [ $dataAtHowManyNodes -eq 0 ]
then
nodeToCopy=${nodesEmpty[0]}
echo " | |-> master to $nodeToCopy"
copyToNode "master" "$nodeToCopy"
# Add node to nodesCopied
nodesCopied+=("$nodeToCopy")
# Remove node from nodesEmpty
unset nodesEmpty[0]
# Update the index
nodesEmpty=(${nodesEmpty[*]})
else
for ((j=0; j<$dataAtHowManyNodes; j++))
do
echo " |-> copy sequenz $j"
nodeToCopy=${nodesEmpty[0]}
# Master to node copy
if [ $j -eq 0 ]
then
echo " | |-> master to $nodeToCopy"
copyToNode "master" "$nodeToCopy"
# Add node to nodesCopied
nodesCopied+=("$nodeToCopy")
# Remove node from nodesEmpty
unset nodesEmpty[0]
# Update the index
nodesEmpty=(${nodesEmpty[*]})
# Node to node copy
else
nodeMaster=${copyNodes[0]}
# Remove copyNode to ensure its not used again
unset copyNodes[0]
# Update the index
copyNodes=(${copyNodes[*]})
echo " | |-> $nodeMaster to $nodeToCopy"
copyToNode "$nodeMaster" "$nodeToCopy"
# Add node to nodesCopied
nodesCopied+=("$nodeToCopy")
# Update the index
nodesCopied=(${nodesCopied[*]})
# Remove node from nodesEmpty
unset nodesEmpty[0]
# Update the index
nodesEmpty=(${nodesEmpty[*]})
fi
# Check if still remaining emptyNodes
if [ ${#nodesEmpty[@]} -eq 0 ]
then
echo " ++ Done ..."
done=1
break
fi
done
fi
wait
if [ $done -eq 1 ]
then
echo ""
break
fi
dataAtHowManyNodes=$(echo "scale=0; 2^$i" | bc)
echo " |-> Data are now on $dataAtHowManyNodes nodes"
done
t2=$(date +%s)
dt=$(echo "scale=0; ($t2 - $t1)" | bc)
t=$(echo "scale=2; $dt/60" | bc)
echo "Time = $dt s ($t min)"
#------------------------------------------------------------------------------
|
You should have a look at programs which use multicast or broadcast dataframes. Then the master won't be limited much by the network bandwidth, since all files are transmitted only once.
mrsync can be interesting here. There is also uftp.
See https://serverfault.com/questions/173358/multicast-file-transfers
| Methodology to copy data from one machine to many others in a fast way |
1,430,920,409,000 |
We need to periodically archive some big files older than 2 days to a NAS while keeping their directory tree structure. Those files are kept for 7 days in the source directory.
At first we used find for this:
find ${SOURCE_DIR} -type f -mtime +2 -exec ksh -c 'mkdir -p $(dirname ${DEST_NAS_DIR}$0) && cp -p $0 ${DEST_NAS_DIR}$0' {} \;
However we noticed that the script is copying already archived files, thus each execution takes too much time.
cp doesn't have the -n / --no-clobber option. So, how can I avoid overwriting the same files in destination? any idea?
Regards!
|
If rsync is available on your system, you may use its --ignore-existing flag:
find ${SOURCE_DIR} -type f -mtime +2 \
-exec rsync --ignore-existing '{}' ${DEST_NAS_DIR} \;
Possibly the -u flag might be interesting - it would check if the sender has newer versions of existing files, too, and update them if so.
See if you want --archive mode activated: it means to be recursive and preserve several information regarding times, ownership and more. Check man rsync for more details.
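If rsync cannot be installed on the AIX host at all, the existence check can instead go into the inline ksh script from the question (a sketch using the same variables as there, which the question's command already assumes are exported to the child shell):

```shell
find ${SOURCE_DIR} -type f -mtime +2 -exec ksh -c '
    dest=${DEST_NAS_DIR}$0
    mkdir -p "$(dirname "$dest")"
    [ -e "$dest" ] || cp -p "$0" "$dest"' {} \;
```

The [ -e "$dest" ] || ... short-circuit skips the cp entirely when the file is already on the NAS, which is the part that was costing time.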
| In AIX, how to avoid overwrite a file with cp? |
1,430,920,409,000 |
How can I copy a folder that contains symlinks and retain the symlinks in the destination folder? I'm doing something like this with PHP/Bash:
system("cp -r production-clone-target production-sites/{$instanceName}");
but the symlinks do not appear in the destination folder.
|
Try adding the --preserve=links switch to your cp command.
From man cp:
--preserve[=ATTR_LIST]
preserve the specified attributes (default: mode,ownership,timestamps),
if possible additional attributes: context, links, xattr, all
Edit: If under OS X; use cp -a.
| Copy folder with symlinks |
1,430,920,409,000 |
I heard that rsync isn't the best one when creating the first backup in terms of performance. Instead it is the best for the later backups. So I wonder what are some better commands for creating the first backup, and what your usages for them are? Thanks!
Reference:
rsync isn't a good option for copying files to an empty destination.
If you're migrating data to an empty destination, you already know all
files need to be copied so the checking rsync does is
counterproductive and actually increases the time the transfer takes.
This is mentioned somewhere in the rsync man page or FAQ or something,
I think.
|
rsync is great for keeping two directories up to date by comparing them and only moving over what has changed. You could totally use rsync for the first time. It just will obviously have nothing to compare against and will move everything over. So with that thought in mind you could just use cp, or scp if you're moving the files to a remote server.
If you want more backup options like detailed changes and more control over your backups check out https://help.ubuntu.com/community/BackupYourSystem
| Creating the first backup |
1,430,920,409,000 |
I want to copy new files from my NAS to my external hard drive. I did copy the new files manually up to this point. Now I want to use a script to automate this for me. However everything I tried so far with rsync seems to copy all files and not only the new files.
I tried with the following command:
rsync -ar \
--ignore-existing \
--size-only \
--progress --info=progress2 \
--dry-run \
"/Volumes/NAS" "/Volumes/G-DRIVE mobile SSD R-Series"
I also tried without size-only, with checksum etc. Can somebody explain the behaviour to me or what I need to do in order to complete my task? When printing stats it always shows that it would transfer all the files. Also with the -i parameter I get the following information for every file: >f++++++++++
|
The correct command should probably be this:
rsync --archive --progress --info=progress2 \
'/Volumes/NAS/' '/Volumes/G-DRIVE mobile SSD R-Series'
The -a (--archive) flag implies -t (--times) and -r (--recursive) so you don't need to specify those explicitly. Using --size-only is a poor choice when you have size and timestamp available from using -a (--archive), but if you're copying to a limited filesystem such as FAT you may need a time window such as --modify-window=2 to overcome the two second time granularity. You shouldn't use --ignore-existing unless you want to ignore changes to the source files once you've got a copy of any sort on the destination.
Definitely use --dry-run (-n) while testing, but remember to remove it when you're ready for the rsync action to be applied.
Finally, the source directory should usually have a trailing slash. The documentation describes it like this,
A trailing slash on the source changes this behavior to avoid creating an additional directory level at the destination. You can think of a trailing / on a source as meaning "copy the contents of this directory" as opposed to "copy the directory by name".
| Copy new files from NAS to External Hard Drive with rsync |
1,430,920,409,000 |
I wish to get a file /export/home/remoteuser/stopforce.sh from remotehost7 to localhost /tmp directory.
I fire the below command to establish that the file exists on the remote host:
[localuser@localhost ~]$ ssh remoteuser@remotehost7 ' ls -ltr /export/home/remoteuser/stopforce.sh'
This system is for the use by authorized users only. All data contained
on all systems is owned by the company and may be monitored, intercepted,
recorded, read, copied, or captured in any manner and disclosed in any
manner, by authorized company personnel. Users (authorized or unauthorized)
have no explicit or implicit expectation of privacy. Unauthorized or improper
use of this system may result in administrative, disciplinary action, civil
and criminal penalties. Use of this system by any user, authorized or
unauthorized, constitutes express consent to this monitoring, interception,
recording, reading, copying, or capturing and disclosure.
IF YOU DO NOT CONSENT, LOG OFF NOW.
##################################################################
# *** This Server is using Centrify *** #
# *** Remember to use your Active Directory account *** #
# *** password when logging in *** #
##################################################################
lrwxrwxrwx 1 remoteuser oinstall 65 Aug 30 2015 /export/home/remoteuser/stopforce.sh -> /u/marsh/external_products/apache-james-3.0/bin/stopforce.sh
From the above we are sure that the file exist on remote although it is softlink.
I now try to get the actual file using rsync but it gives error.
[localuser@localhost ~]$ /bin/rsync --delay-updates -F --compress --copy-links --archive remoteuser@remotehost7:/export/home/remoteuser/stopforce.sh /tmp/
This system is for the use by authorized users only. All data contained
on all systems is owned by the company and may be monitored, intercepted,
recorded, read, copied, or captured in any manner and disclosed in any
manner, by authorized company personnel. Users (authorized or unauthorized)
have no explicit or implicit expectation of privacy. Unauthorized or improper
use of this system may result in administrative, disciplinary action, civil
and criminal penalties. Use of this system by any user, authorized or
unauthorized, constitutes express consent to this monitoring, interception,
recording, reading, copying, or capturing and disclosure.
IF YOU DO NOT CONSENT, LOG OFF NOW.
##################################################################
# *** This Server is using Centrify *** #
# *** Remember to use your Active Directory account *** #
# *** password when logging in *** #
##################################################################
rsync: [sender] link_stat "/export/home/remoteuser/stopforce.sh" failed: No such file or directory (2)
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1651) [Receiver=3.1.2]
rsync: [Receiver] write error: Broken pipe (32)
The localhost is linux while the remotehost7 is solaris.
Can you please suggest why i get this error and what is the fix to the problem?
|
You are using the --copy-links option. This is documented with the text
When symlinks are encountered, the item that they point to (the
referent) is copied, rather than the symlink. [...]
If your symbolic link does not point to a file that exists, then the --copy-links option would make rsync complain that it can't find the file. If --copy-links was not used, the symbolic link itself would be copied.
The fix to this problem depends on what it is you want to achieve. Either make sure that the file referenced by the symbolic link exists, or do not use the --copy-links option.
Personally, if I was using --archive I would probably be trying to make an as true copy as possible of the file hierarchy or file, in which case I would not use --copy-links (to be able to preserve symbolic links).
| rsync not working even when the destination file exists |
1,430,920,409,000 |
I'm a relatively new Linux user and was transferring over to a new computer, so I decided to copy some files (some configs, downloads, and home files) over to a hard drive (from a previous laptop). Instead of using sudo nautilus I used sudo cp -r as I thought it would be faster. But when I transfer these files to the new PC I can't do anything with them unless I use sudo (NOTE: the passwords of the two devices are different). Is there a way I can 'truly' copy these files, ignoring these privileges? The command or technique can use sudo, as long as I don't need to use sudo ever again to access these files.
I have the old computer from which I got these files, but I don't currently have access to it, so I'm looking for methods that can be done on the new computer.
For example, I copied over some Projects that use make, and when I run make/make clean, it says that permission is denied, even though the folder is in my home folder. So I need to prepend sudo to each of these commands for them to work.
Any sort of help will be appreciated, including guides of any sort. Thanks!
|
As you have used sudo cp -r the new files belong to the root user. If you want to copy files while maintaining the original permissions use sudo cp -p -r.
To "fix" your files now, try sudo chown -R yourusername filepath, where yourusername should be the proper owner of the files, instead of root, and filepath is the root folder where these files are.
Note that some filesystems (mostly legacy ones) may lack the support for such permissioning.
Try reading about file permissions and file ownership in linux. You will find out that you can do much more complex permissioning once you get to know how.
| Used `sudo cp -r` instead of `sudo nautilus`, can't access files without using `sudo`. How do I copy them ignoring privileges? |
1,430,920,409,000 |
I have many files (with a .fna extension). I need to move them, if they contain the character > a certain number of times.
I know which files contain > a certain number of times (9 times in this example) with this command.
grep -c ">" *.fna | grep ":9"
Which gives an output like this
5242_ref.fna:9
9418_ref.fna:9
But I don't know how to move those files to another folder.
Thanks.
|
You could use zsh with its expression-as-a-glob-qualifier to select files that only have nine such symbols:
$ hasnine() { [[ $(tr -dc '>' < "$REPLY" | wc -c) -eq 9 ]]; }
$ mv *.fna(+hasnine) location/
The first line defines a function whose purpose is to create a true/false filter for files that have nine > symbols in them. The tr command acts on its input, expected in the REPLY variable, deleting anything that is not a >, then asks wc to count the number of resulting characters, and then compares that output to 9.
The second line executes the mv command for matching *.fna files to the (example) location directory. Matching *.fna files also have to pass the expression qualifier, which is given as the name of the function we defined.
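If zsh is not an option, the same filter can be sketched in plain sh around grep -c from the question — note that grep -c counts matching lines rather than individual > characters, which is the semantics the question's own command relied on (location is the example destination directory from above):

```shell
# move each .fna file containing exactly nine lines with '>' to location/
for f in *.fna; do
    if [ "$(grep -c '>' "$f")" -eq 9 ]; then
        mv -- "$f" location/
    fi
done
```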
| move files that contain a pattern a certain number of times |
1,430,920,409,000 |
sorry for my English...
I normally copy a file without options,
cp origin destination
but sometimes I see
cp -u origin destination
The man page for cp describes the options.
My question: can anyone please explain the difference between using -u and not using it? Must I use an option when I do an ordinary copy from A to B?
Thanks
|
cp -u will only update the destination if it does not exist or the source is newer than the destination.
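A quick demonstration of that behaviour (the file names here are arbitrary):

```shell
printf 'v1\n' > dest
sleep 1
printf 'v2\n' > src
cp -u src dest    # src is newer than dest, so dest is updated to v2

sleep 1
printf 'v3\n' > dest
cp -u src dest    # dest is now newer than src, so nothing is copied
```

Running cat dest afterwards shows v3: the second cp -u was a no-op.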
| cp without options |
1,430,920,409,000 |
I'm writing a bash script and in it I'm doing something like the following:
#!/bin/sh
read -p "Enter target directory: " target_dir
cp some/file.txt $target_dir/exists/for/sure/
When I run this shell script I see and input:
./my_script.sh
Enter target directory: ~/my_dir
But I get the error/output:
cp: directory ~/my_dir/exists/for/sure/ does not exist
And, as I'm trying to make obvious: That directory 100% exists. i.e. I can run the following without receiving any error:
cd ~/my_dir/exists/for/sure/
What's going on here?
|
The problem is that ~ is taken literally and not expanded when you type it as input for read.
Test it:
$ read target
~
$ ls $target
ls: cannot access '~': No such file or directory
(note the quotes around ~ in ls's error message)
Use this:
eval target=$target # unsafe
or better, but expands just ~:
target="${target/#\~/$HOME}"
or, even better, do not type ~ or variable references into read in the first place.
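If you do want to accept a typed ~, a small POSIX-sh helper can expand just a leading tilde (the function name expand_tilde is my own invention, not a standard command):

```shell
expand_tilde() {
    # replace a leading ~ with $HOME, leave everything else alone
    case $1 in
        "~")   printf '%s\n' "$HOME" ;;
        "~/"*) printf '%s\n' "$HOME${1#"~"}" ;;
        *)     printf '%s\n' "$1" ;;
    esac
}
expand_tilde '~/my_dir'      # prints something like /home/you/my_dir
```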
| cp command saying directory does not exist when it does [duplicate] |
1,430,920,409,000 |
I have millions of xml files in a folder. The name of the files follow a specific pattern:
ABC_20190101011030931_6049414.xml
In this I am interested only in the last set of digits before xml 6049414. I have a list of around 8000 such numbers in a text file. The details in the text file is as follows - a number in a line:
104638
222885
108880071
I am using the following code to move the files from the folder that matches the number given in the text file:
#folder where the xml files are stored
cd /home/iris/filesToExtract
SECONDS=0
#This line reads each number in the hdpvr.txt file and if a match is found moves that file to another folder called xmlfiles.
nn=($(cat /home/iris/hdpvr.txt));for x in "${nn[@]}";do ls *.xml| grep "$x"| xargs -I '{}' cp {} /home/iris/xmlfiles;done
#this line deletes all the other xml files from filesToExtract folder
find . -name "*.xml" -delete
echo $SECONDS
I am facing two issues: 1. some of the files are not getting moved despite there being a match, and 2. a file is also moved when the match is found in the middle part of the file name, for example
from this ABC_20190101011030931_6049414.xml -> this 20190101011030931
How can I restrict it to exact matches and move only those files?
|
Would something like this do the job?
pushd /home/iris/filesToExtract
for i in $(</home/iris/hdpvr.txt); do find . -mindepth 1 -maxdepth 1 -type f -name "*_$i.xml" -print0 | xargs -r -0 -i mv "{}" /home/iris/xmlfiles; done
find . -mindepth 1 -maxdepth 1 -type f -name "*.xml" -delete
popd
pushd will move you in the specified directory
for+find line will get the ID from your text file, find files ending like _ID.xml and move them in the /home/iris/xmlfiles folder
the last find line will delete the non-moved files, but only in this folder and not in subdirectories
popd will put you back in your original directory
You can also do it the brutal way with mv, but it will throw errors when nothing matches (note that the glob must stay unquoted so the shell expands it):
pushd /home/iris/filesToExtract
for i in $(</home/iris/hdpvr.txt); do mv *_"$i".xml /home/iris/xmlfiles; done
find . -mindepth 1 -maxdepth 1 -type f -name "*.xml" -delete
popd
| Copying files based on partial names in a file |
1,430,920,409,000 |
Let's assume that I have these files:
/1/tEst.mp4
/1/Test.mP4
/1/subdirectory/TEST2.mp4
/1/.20181106Test2.mp4
How can I copy all of these files into /2/Videos with a single command line?
All files that end with “mp4” and have “test” inside the name should be included. Case-insensitive, if possible.
I could use the file explorer to search for all files named “test” and filter by video, but is there any way to do it from the terminal?
|
This seems doable in bash:
shopt -s nocaseglob dotglob globstar
cp /1/**/*test*.mp4 /2/Videos/
(nocaseglob makes globbing case-insensitive, dotglob includes hidden files, and globstar enables ** recursion; these are shopt options, not set -o options.)
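An alternative that doesn't need bash 4's globbing options is find with -iname (case-insensitive matching, a GNU/BSD extension) plus GNU cp's -t, which names the target directory first. A sketch using scratch directories in place of /1 and /2/Videos:

```shell
src=$(mktemp -d)    # stands in for /1
dst=$(mktemp -d)    # stands in for /2/Videos
mkdir -p "$src/subdirectory"
touch "$src/tEst.mp4" "$src/subdirectory/TEST2.mp4" "$src/.20181106Test2.mp4" "$src/notes.txt"
# copy every *test*.mp4, any case, from anywhere under $src
find "$src" -type f -iname '*test*.mp4' -exec cp -t "$dst" {} +
ls -A "$dst"
```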
| How to copy all files in all directories with specific filename to one destination? |
1,430,920,409,000 |
I was wondering: can a file that has an apparent size of 1 GiB but an actual size of 0 B be copied to, for example, a USB flash stick with 512 KiB of free space?
You can create a file as such using:
dd if=/dev/null of=big-file bs=1 seek=1GiB
Now you can see the apparent and actual sizes:
du --apparent-size -hs big-file
Note: remove the --apparent-size option to get the actual size.
So, the main question is: which size is taken into consideration when copying a file to some directory, be that a hard disk drive, USB flash stick, DVD, etc?
|
Actual size.
Of the destination file.
The simplest implementation is to not try to predict this in advance.
In addition to that, a small fraction is required as overhead. This overhead would be even more complex to predict.
Support for sparse files varies, depending on both the filesystem, and the program making the copy. Most notably, FAT filesystems do not support sparse files.
GNU cp automatically detects and handles sparse files. It does not check for space in advance. (Based on the documentation for GNU coreutils 8.29)
By default, sparse SOURCE files are detected by a crude heuristic and
the corresponding DEST file is made sparse as well. That is the behavior selected by --sparse=auto. Specify --sparse=always to create a
sparse DEST file whenever the SOURCE file contains a long enough
sequence of zero bytes.
Notice that filesystems can have different block sizes. This can cause the amount of space used by the sparse file to increase, or even decrease!
( Writing file data generally also requires internal filesystem structures which take up space of their own. This is not shown in the file's size - neither the actual nor the apparent size. In traditional UNIX filesystem design, you also need a free inode. The space available for inodes is shown by df -i. And most obviously, you need some space to store the file name :-) ; this is often shown in the size of the parent directory. This is discussed specifically in the question Can I run out of disk space by creating a very large number of empty files? )
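The difference between the two sizes is easy to see with GNU coreutils' truncate and stat (a quick sketch; the exact block count depends on the filesystem, but for a fresh sparse file it is far below the apparent size):

```shell
f=$(mktemp)
truncate -s 1M "$f"      # 1 MiB apparent size, no data blocks written
stat -c 'apparent: %s bytes, actual: %b blocks of %B bytes' "$f"
rm -f "$f"
```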
| On copying, which is considered: actual or apparent size? |
1,430,920,409,000 |
I have been facing this issue ever since I started using Linux distributions. While copying/moving large content (a big text file, a tar.gz archive, an ISO image, a movie) either graphically or with cp, some of the data is written to disk and some is cached in memory (RAM). During the copy, the amount of shared and cached memory increases dramatically (checked with free -m).
After some time the file manager (like Dolphin or PCManFM) or cp shows that copying is finished, but the data is not actually written to disk until I run sync.
I think this is not a hardware issue.
I checked with many internal and external hard drives, and USB flash drives of various brands, but all with the same result.
Not a hard drive APM issue. I always disable hard drive power management.
The problem is same with dd and cat, like dd if=live.iso of=/dev/sdb.
Not a distribution-specific issue. I checked with Debian, Fedora, Ubuntu, Slax, etc.
I have not crosschecked with other Unix-like OSes. If anyone have/had the same issue with FreeBSD, OpenBSD, etc. please let me know.
What is the problem and how can I solve it?
|
As I commented (and for obvious performance reasons) the kernel is using a page cache. So this is a feature, not a problem. See http://linuxatemyram.com/ for more.
You could (though I don't recommend it) use some mount options to disable, or lower the use of, the page cache, and you need to umount any device (e.g. a USB key) before unplugging or removing it. The kernel then flushes all the data before unmounting.
You can also do a sync.
| Content cached in RAM while writing to disk - Linux |
1,430,920,409,000 |
I am trying to execute ssh-copy-id on a port different from 22 (the default). I researched and found the command below
$ ssh-copy-id -i ~/.ssh/id_rsa.pub "[email protected] -p 22001"
but when I execute the command, I get this error:
/usr/bin/ssh-copy-id: ERROR: ssh: connect to host 192.168.0.1 -p 22001 port 22: Connection refused
It seems that the command doesn't understand the port.
|
$ ssh-copy-id
Usage: /usr/bin/ssh-copy-id [-h|-?|-n] [-i [identity_file]] [-p port] [[-o <ssh -o options>] ...] [user@]hostname
So in your case simply use:
$ ssh-copy-id -i ~/.ssh/id_rsa.pub -p 22001 [email protected]
Because of your usage of quotes, the -p 22001 part became part of the hostname which explains the error you got.
| ssh-copy-id different port |
1,430,920,409,000 |
What I want to do is basically
cp long/directory/path/file long/directory/path/file-copy
This or similar operations are pretty common. Typing out the whole path twice is obviously awkward, even with auto-completion or copy-paste. There are a couple of simple ways to do it, but none seems ideal:
First cd into the directory, then simply cp file file-copy. Least keystrokes, but I end up in the directory, which I didn't really want.
Wrap the above in a subshell, sh -c 'cd dir-path; cp file file-copy', to make the cp local. Fair enough, but that means I have to type the commands in a string literal, which disables auto-completion and isn't overall nice.
Define the dir as a variable, then just cp "$dir/file" "$dir/file-copy". Can work nicely, but I'm kind of paranoid about namespace pollution...
echo dir-path | while read p; do cd "$p"; cp file file-copy; done. Basically combines subshell-wrapping with variable definition, by emulating a lambda-calculus stype let-binding with read. I quite like it, but it's just a bit weird, and involves a lot of extra characters.
Anything I can come up with that uses sed 's/file/file-copy/' needs even more boilerplate around it.
Open the file with an editor, then save-as. Not well applicable to non-text files, and often resource-costly.
Is there a more elegant alternative? Ideally one that works with mv as well, analogously.
|
As per this question, you can use:
cp /long/path/to/file{,-copy}
This is a shell expansion done by bash (not sure about other shells), so it will work with any command.
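You can preview exactly what the shell will hand to cp with echo, since brace expansion happens before the command runs:

```shell
echo /long/directory/path/file{,-copy}
# in bash this prints: /long/directory/path/file /long/directory/path/file-copy
```

The same works for mv, e.g. `mv /long/directory/path/{file,file-renamed}`.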
| How to copy a file within its original directory, most easily? [duplicate] |
1,430,920,409,000 |
I guess the question first has to learn from your comments, before it grows into a grown-up question.
Here is the tricky situation:
I have a folder, the destination, with many files (PDFs) that unfortunately all carry the same recent timestamp (date of last change). These timestamps are wrong; they merely reflect the date of a copy. I also have a backup folder, the source, containing some of these files with their older timestamps.
Now I wish to have the old timestamp on my destination folder IFF the destination file and the source file are otherwise the same.
How to fix the metadata of those files at the destination that are younger than the same file at the source?
|
It seems to me that you don't really want to copy the files at all, but just fix up the metadata (date).
Accordingly you can use something like this:
rsync --dry-run -av --existing --size-only src/ dst
The directories src and dst are the source and intended destination directories. When you're happy it looks like it's going to work, remove the --dry-run flag.
Note that the --size-only flag tells rsync to compare only by file size (and name); it does not check the content of each file. If you want to check the content, you could use --checksum (-c), though at that point you might as well just copy the correct files in the first place.
| fixing metadata if destination file is younger than source file |
1,430,920,409,000 |
For example, I have these files:
/sdcard/testfolder/file1
/sdcard/testfolder/file2
/sdcard/testfolder/file3
/sdcard/testfolder/file4.ext
I would like to create .sha256 files for each
/sdcard/testfolder/file1.sha256
/sdcard/testfolder/file2.sha256
/sdcard/testfolder/file3.sha256
/sdcard/testfolder/file4.ext.sha256
The method should work on every possible valid character in the file names and folder names
My starting point was to try and use find
NOTE : I am using find from "toybox 0.8.0-android" and cannot change it (immutable unrootable file system) however it does seem to be fully featured. My shell is "MirBSD Korn Shell" 2014 https://launchpad.net/mksh and I also cannot change it
find /sdcard/testfolder -type file -exec echo {} \;
for example returns the file list, one file per line
So one way to do this would be to replace 'echo {}' with the equivalent of
sha256sum /sdcard/testfolder/file4.ext > /sdcard/testfolder/file4.ext.sha256
Maybe something like
find /sdcard/testfolder -type file -exec sha256sum {} > {}.sha256 \;
Unfortunately, find -exec does not work with that specific syntax: the shell handles the > redirection before find ever runs, so the whole of find's output would be sent to a single file literally named {}.sha256.
Looking at an extensive list of find command example
https://sysaix.com/43-practical-examples-of-linux-find-command
It does seem that -exec command parameter1 {} parameter2 \; is the only format, but I could be wrong?
If possible I would like to keep this as single command
Another avenue might be to pipe into another command; however, I can't find how to refer to the filename from the pipe as a command-line argument. Maybe it's not possible?
find /sdcard/testfolder -type file | sort | sha256sum $filename? > $filename?.sha256
|
What I would do:
find /sdcard/testfolder -type f ! -name '*.sha256' -exec sh -c '
    sha256sum "$1" > "$1.sha256"
' sh {} \;
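If starting one sh per file is too slow, the same idea can be batched with `-exec … +` so each sh handles many files (a sketch; the scratch directory below stands in for /sdcard/testfolder, and `-type f` is the portable spelling of the type test):

```shell
dir=$(mktemp -d)                 # stands in for /sdcard/testfolder
echo data > "$dir/file1"
echo more > "$dir/file4.ext"
find "$dir" -type f ! -name '*.sha256' -exec sh -c '
    for f do
        sha256sum "$f" > "$f.sha256"
    done
' sh {} +
ls "$dir"
```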
| How to recursively create .sha256 hash files for every file in a folder? |
1,430,920,409,000 |
I need to find all project.updated files in some nested directory and use these files for content replacement of project.json file in the same directory.
I'm using BusyBox (1.33.1).
/apps
/project1
project.json
project.updated
/project2
project.json
As you can see, there is a project.updated file in the project1 folder. The content of this file should replace the project.json file.
This is only working, if I know the exact file:
cat /apps/project1/project.updated > /apps/project1/project.json
How do I do this in a dynamic way as there are many projects and only a few of them have an project.updated file?
|
Use find and -execdir which executes the given command in the directory where the file is found:
find /apps -type f -name '*updated'\
-execdir bash -c 'cat "$0" > "$(basename "$0" .updated).json"' {} \;
For a dry-run, just echo the command first and maybe print the found file:
find /apps -type f -name '*updated' -print\
-execdir bash -c 'echo cat "$0" \> "$(basename "$0" .updated).json"' {} \;
Please do not forget to escape the redirection here \>!
(works with sh, too and is POSIX-compliant, if you need it portable)
If -execdir is not available, -exec will do, but one needs to define the dirname of the target file, too:
find /apps -type f -name '*updated'\
-exec bash -c 'cat "$0" > "$(dirname "$0")/$(basename "$0" .updated).json"' {} \;
Or simpler, but not available in sh:
find /apps -type f -name '*updated'\
-exec bash -c 'cat "$0" > "${0/%updated/json}"' {} \;
Where ${0/%updated/json} matches updated at the END of string $0 and replaces it with json. Since $0 contains the whole path as the result from find, -execdir is not necessary.
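Since the question mentions BusyBox, here is a sketch that needs only POSIX sh and plain -exec (no bash, no -execdir); the scratch directory stands in for /apps:

```shell
apps=$(mktemp -d)                        # stands in for /apps
mkdir -p "$apps/project1" "$apps/project2"
echo new  > "$apps/project1/project.updated"
echo old  > "$apps/project1/project.json"
echo keep > "$apps/project2/project.json"
find "$apps" -type f -name 'project.updated' -exec sh -c '
    for f do
        cat "$f" > "${f%updated}json"    # .../project.updated -> .../project.json
    done
' sh {} +
cat "$apps/project1/project.json"        # now holds the updated content
```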
| How to replace content of nested files? |
1,628,689,358,000 |
I have a directory structure like this;
dir
├── dirA
│ └── file1
│ └── subdir
└── dirB
└── file2
└── subdir
I need to move file1 to dirA/subdir and file2 to dirB/subdir. How can I do it in Linux?
|
With GNU find:
find dir \
-mindepth 2 -maxdepth 2 -type f \
-execdir sh -c '
mv -t ./*/ "$1"
' find-sh {} \;
original directory structure
dir
├── dirA/
│ ├── fileA
│ └── subdir/
│ ├── e
│ ├── q
│ └── w
└── dirB/
├── fileB
└── subdir/
├── c
├── x
└── z
After the move operation
dir
├── dirA/
│ └── subdir/
│ ├── e
│ ├── fileA
│ ├── q
│ └── w
└── dirB/
└── subdir/
├── c
├── fileB
├── x
└── z
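Without GNU find, a plain-shell loop does the same, assuming each directory under dir contains exactly one subdirectory (a sketch):

```shell
for d in dir/*/; do
    for f in "$d"*; do
        if [ -f "$f" ]; then
            mv -- "$f" "$d"*/    # the glob expands to the single subdir
        fi
    done
done
```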
| Move multiple files to subdirectories in linux |
1,628,689,358,000 |
I have a directory that has many sub-folders, which in turn contain other sub-folders. I want to copy the whole directory tree to another location, but copy only the files with certain names in these directories (while preserving the hierarchy).
Let's say, copy all the directory and sub-directories, then if these directories have files with .txt extensions, copy them too.
What is the best way to do this on a Unix/Linux system?
|
Using rsync:
rsync -a --include='*/' --include='*.txt' --exclude='*' source_dir/ target_dir
This should create a copy of the source_dir directory hierarchy as target_dir, with files whose names match *.txt copied (only).
The --include and --exclude options are handled in a left-to-right fashion and the first pattern that matches a name "wins". The way these options are used here ensures that all directories are processed, as well as any name matching *.txt, but ignores everything else.
The -a (--archive) option ensures that the source_dir hierarchy is processed recursively, and that as much as possible of file meta data is preserved in the copy (see the rsync manual for details).
| Create the same subfolders in another folder with copying files with certain names |
1,628,689,358,000 |
I'd like to move a folder that contains multiple files from my local machine to an ssh [email protected] machine's temp directory. What would be the best method of doing this? Thanks for your help.
|
You will need to be able to ssh as a user who has write permissions to that system's /tmp directory, or wherever you are trying to copy the files. Assuming that you can:
rsync -avhH /directory/to/copy user@system:/tmp
scp -r /directory/to/copy user@system:/tmp
If the user can't write to the directory, assuming it's /tmp, you can create a directory for the user in /tmp with (although /tmp is normally world-writable, in case it isn't for some reason on your system):
mkdir /tmp/directory
And then give write permissions by making the user the owner:
chown username /tmp/directory
After that, you can use the rsync or scp commands above.
| Moving a file from local drive to a remote machine |
1,628,689,358,000 |
I have a folder (A) which is structured like this:
Main Directory(A)
|
|
Subdir------Subdir2-----Subdir3
| | |
| | |
f0--f1 f0--f1 f0--f1
I want to copy all files (recursively) from A to a new directory B, BUT I don't want to preserve the directory structure, i.e. all files in A should end up directly in B without any child directories.
|
Simple, in a shell:
$ find A -type f -exec cp {} B \;
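One caveat of flattening: files with the same name in different subfolders will overwrite each other. With GNU cp you can batch the copies and keep numbered backups instead (a sketch; scratch directories stand in for A and B):

```shell
A=$(mktemp -d); B=$(mktemp -d)
mkdir -p "$A/sub1" "$A/sub2"
echo one > "$A/sub1/f"
echo two > "$A/sub2/f"
# -t names the target directory, --backup=numbered keeps clashing names as f.~1~ etc.
find "$A" -type f -exec cp --backup=numbered -t "$B" {} +
ls -A "$B"      # f plus a numbered backup like f.~1~
```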
| Copy files from folders and sub folders without preserving directory structure |
1,628,689,358,000 |
I am trying to copy/move a large file (15 GB) to a directory in Linux and want to have a dependency on that event.
Now let's say I have a file named abc.txt, and I am running the command below:
mv /usr/tmp/abc.txt /usr/data/
When the move process starts, I see a file in the data directory with the actual file name, i.e. abc.txt, but with the data still in transit. Because the data directory already lists abc.txt, my dependent process thinks the file is available and starts; however, the file is not completely moved, so the dependent process triggers prematurely.
Is there a way I can move a file under a transient name, i.e. while the data transfer is going on it uses a temporary name (some swap file name) and is renamed to the actual name once the transfer is complete?
|
You must be moving between two different filesystems, so in effect the file is copied. Copy it first, then, and after that's done, rename it within the destination. This should do:
mv /usr/tmp/abc.txt /usr/data/.abc.txt && mv /usr/data/.abc.txt /usr/data/abc.txt
I assume your watching process won't recognise the hidden file. Otherwise you could make a temp directory at the target location or something similar.
| Copy or move large file with transient name till the file is completely transfered to destination in linux |
1,628,689,358,000 |
I've been trying to find a way to free up some space on a drive by moving folders to another one. I have a folder full of stuff I can move (it doesn't need to be all of it) to free up some space.
In Windows I'd just select a bunch of folders, open their properties to see how much space they were taking up, select more or fewer, and then move them. I can't do that from a bash terminal, and I have no idea how to go about it. Google searching keeps leading me down the path of moving all files over a certain size, which isn't what I'm trying to do.
|
On a GNU system, you could script it as:
#! /bin/bash -
usage() {
printf >&2 '%s\n' "Usage: $0 <destination> <size> [<file1> [<file2>...]]"
exit 1
}
(($# >= 2)) || usage
dest=$1
size=$(numfmt --from=iec <<< "$2") || usage
shift 2
(($# == 0)) && exit
selected=()
sum=0
shopt -s lastpipe
LC_ALL=C du -s --null --block-size=1 -- "$@" |
while
((sum < size)) &&
IFS= read -rd '' rec &&
s=${rec%%$'\t'*} &&
file=${rec#*$'\t'}
do
selected+=("$file")
((sum += s))
done
((${#selected[@]} == 0)) ||
exec mv -t "$dest" -- "${selected[@]}"
Used for instance as:
that-script /dest/folder 1G *
To move as many files as necessary from the expansion of that * glob to make up at least 1GiB.
| How to find the first X gb of data? |
1,628,689,358,000 |
What is the most accurate way to copy a file or a folder from one linux machine to another using commands?
|
There are various options like ftp, rsync, etc., but the most useful of these is scp, which comes preinstalled with the openssh package. The syntax is simple:
scp file.txt user@host:/folder/to/which/user/has/permissions
There are some other flags; for example, if you are using a port other than 22 for ssh, you'd need to mention that in the command with the -P option.
scp -P PORT file.txt user@host:/folder/to/which/user/has/permissions
For directories, it is advisable to archive the folder(s) in some container. The easiest one is tar:
tar -cvf myfolder.tar folder1 folder2 folderN
And then use scp to send it across to another Linux machine (just replace file.txt with myfolder.tar).
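A third option avoids creating the intermediate .tar file at all: stream tar through a pipe, with ssh in the middle for the remote case. The sketch below runs the pipe locally in scratch directories; the commented line shows the assumed ssh form (user, host and paths are placeholders).

```shell
# Remote form (assumed host/paths):
#   tar -cf - myfolder | ssh user@host 'tar -xf - -C /destination'
src=$(mktemp -d); dst=$(mktemp -d)       # local demo directories
mkdir -p "$src/myfolder"
echo hi > "$src/myfolder/a.txt"
( cd "$src" && tar -cf - myfolder ) | tar -xf - -C "$dst"
ls "$dst/myfolder"
```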
| Methods to copy a file or a folder one linux server to another linux server [closed] |
1,628,689,358,000 |
A friend of mine gave me her laptop because it was behaving weirdly. This was (still is) a hard drive failure, and I suggested she back up her files before installing a new hard drive.
I booted the computer with Ubuntu on a flash drive to back up her files (mainly family pictures and movies), but I still struggle to copy the files: when a file is corrupted, the copy operation waits for the copy to complete.
Is there a way I can tell cp to skip damaged files? (Maybe telling it to skip files when the copy speed is too low.)
If it isn't possible, can you suggest me a tool which may be useful in that peculiar case ?
Thanks in advance.
|
If you have a spare disk of equal or greater size you could use ddrescue to clone the drive which will attempt to recover good parts during a read error.
WARNING: these are destructive commands, if you get them wrong you could lose data - double check you have the correct drives before you run them.
Assuming /dev/sdf is the failing drive and /dev/sdw is a working drive you can:
sudo ddrescue --force /dev/sdf /dev/sdw rescue.map
Or to copy it to a file on a mounted drive (assuming the mounted drive is big enough):
sudo ddrescue /dev/sdf /path/to/file.img rescue.map
The optional last argument is a mapfile (here rescue.map); with it, ddrescue can resume an interrupted run and retry the bad areas later.
Once you have it on a working drive you should be able to mount/copy the files as normal even if some might be corrupted.
| Copy files from a failing drive |
1,628,689,358,000 |
I have a directory with many files in it. The only ones I care about are those with the extension .jar. However, there are other files in the directory as well.
I have a source directory with only .jar files. I want to achieve the equivalent of:
rm destdir/*.jar
cp sourcedir/*.jar destdir
However, I want to do this without actually removing the files that are already there, if they are the right files already.
In other words, I want it to be possible to run a single command that will:
For any .jar files present in sourcedir but not in destdir, copy them over.
For any .jar files present in both sourcedir and destdir, ensure that the copy in destdir matches the copy in sourcedir and overwrite it if it doesn't.
For any .jar files present in destdir but not in sourcedir, delete the file.
For any other files present in destdir (without a .jar extension), ignore them—do not delete them or change them.
It seems that this should be possible with rsync. How can I do this?
|
You can use:
rsync -av --include='*.jar' --exclude='*' --delete \
sourcedir/ destdir/
The -a option is archive mode, it preserves things like links and timestamps (check the man page for a full explanation). The -v is for verbosity, remove if you don't care about logs.
That should handle your first three options. The --delete option will take care of your fourth requirement while ignoring excluded files.
If you wish to include directories that contain .jar files you can use
rsync -av --include='*.jar' --include='*/' --exclude='*' --delete \
sourcedir/ destdir/
This is based on the answer at Rsync filter: copying one pattern only; it should help if you have further questions.
| How to use rsync to force all *.jar files to match? |
1,628,689,358,000 |
I'm working on a bash script to copy files from a single USB drive to multiple others.
I'm currently using rsync that copies from the source to a single destination, going through all of the output drives in a loop one at a time:
for line in $(cat output_drives_list); do
rsync -ah --progress --delete mountpoints/SOURCE/ mountpoints/$line/
done
I'm trying to optimize the process to make maximum use of the USB bandwidth, avoiding the bottleneck of a single drive's write speed.
Is it possible to do something like rsync, but with multiple output directories, that will write to all output drives at once, but read only once from the input?
I guess that some of this is already taken care of by the system cache, but that only optimizes for read.
If I run multiple rsync processes in parallel, this might optimize the write speed, but I'm also afraid it'll butcher the read speed.
Do I need to care about single-read when copying in parallel?
|
I cannot test it, but starting the processes in the background might be the solution:
START=$(date +%s)
for line in $(cat output_drives_list); do
rsync -ah --progress --delete mountpoints/SOURCE/ mountpoints/$line/ &
done
jobs # get a list of running jobs
wait # wait for all processes to complete
sync
echo It took: $(( $(date +%s)-START )) seconds
Edit:
Added date-stuff after 'benchmarking' was mentioned.
| Fastest way to duplicate files from one USB flash drive to multiple others? |
1,628,689,358,000 |
I have a long list of folders as follows:
001_bat_3513
002_mon_3213
003_bat_3515
004_btt_3540
005_bat_4513
055_bpt_8523
056_bot_3513
058_bat_1513
.
.
From this list:
How can I copy the folders (and all their contents) that have the part "bat" in their names?
|
You can use shell globbing for this:
cp -rp *bat*/ /destination/
Here *bat*/ will expand to directories having bat in their names.
Or using find, which will work even if there are so many files that you get an error because the command line is too long:
find . -maxdepth 1 -type d -name '*bat*' -exec cp -rpt /destination {} +
| Copy folders has specific part of name and it is content |
1,628,689,358,000 |
I had an external hard drive that I mounted internally. It came formatted with NTFS, and I wanted to move to ext4. So I copied everything I wanted to keep onto other drives, created a brand new partition table (GPT) with a single ext4 partition, and now I'm trying to copy everything back. I'm using rsync -a --info=progress2 for most of the copy operations.
My problem is that after 100 GB or so, I tend to get weird errors:
rsync: write failed on "somepath": Read-only file system (30)
rsync error: error in file IO (code 11) at receiver.c(389) [receiver=3.1.0]
If I try to list the directory that rsync was working on when it failed, I see weird results:
drwx------ 3 pdaddy pdaddy 4096 Aug 28 2011 subdirectory1
drwx------ 3 pdaddy pdaddy 4096 Mar 12 2014 subdirectory2
d????????? ? ? ? ? ? subdirectory3
d????????? ? ? ? ? ? subdirectory4
Trying to list the directories with question marks in their listings, and even some of them without, gives me:
ls: reading directory subdirectory3: Input/output error
total 0
Even fdisk has errors:
~ % fdisk /dev/sde
fdisk: unable to read /dev/sde: Input/output error
If I try to unmount the drive, the umount command hangs. I ran htop and saw that umount was using 100% of one CPU core. I assumed it was committing journals or some such, so I let it go all night once, but it was in the same state in the morning. Issuing sudo reboot or sudo init 6 while umount is hung results in yet another hung terminal. I have to hold the power button. Just now I tried rebooting without explicitly unmounting, and it hung with a black screen (the monitor went to sleep), and no response via ssh or the keyboard.
After a hard power cycle, I unmounted the disk and did sudo fsck.ext4 -f /dev/sde1, and there were no errors. I checked the files, and they seemed to all be there and a sample of them were correct.
I assumed the errors had something to do with the journal being too large (maybe it's limited to a maximum size?), so I remounted with -o data=writeback. I figured it's a good idea anyway to mount this way temporarily while restoring terabytes worth of files.
This helped to marginally speed the copy, but did not help with the errors. Twice more, I've gotten into the same state. A hard power cycle is the only thing I can do, and afterward, a disk check shows no errors, the files seem okay, and I can copy another 100 GB or so.
What's going on? I think the disk itself is healthy. I had no problems with it before reformatting. Should I do a sector scan on the disk? It's 5 TB, so I'm hesitant to do that.
I've restored some more files, watching the kernel logs, as suggested by Stephen Kitt. Before rsync failed, I started seeing some funky errors:
[ 8807.572286] ata4.00: exception Emask 0x0 SAct 0x7fffffff SErr 0x0 action 0x6 frozen
[ 8807.572290] ata4.00: failed command: WRITE FPDMA QUEUED
[ 8807.572293] ata4.00: cmd 61/40:00:c0:57:b6/05:00:b7:00:00/40 tag 0 ncq 688128 out
[ 8807.572293] res 40/00:00:00:4f:c2/00:00:00:00:00/40 Emask 0x4 (timeout)
[ 8807.572295] ata4.00: status: { DRDY }
The last three messages repeat many times, then I get:
[ 8807.572412] ata4: hard resetting link
[ 8808.060464] ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[ 8808.062462] ata4.00: configured for UDMA/133
[ 8808.076459] ata4.00: device reported invalid CHS sector 0
The last message repeats 20 times or so, and then I get:
[ 8808.076526] ata4: EH complete
47 seconds later, the sequence repeats itself. And again 81 seconds after that, and 120 seconds after that, except this time, it starts with:
[ 9160.779935] ata4.00: NCQ disabled due to excessive errors
The next time, it's different. It starts the same, but then I see:
[ 9235.819291] ata4: hard resetting link
[ 9241.181501] ata4: link is slow to respond, please be patient (ready=0)
[ 9245.839449] ata4: COMRESET failed (errno=-16)
This repeats a couple of times, and then:
[ 9290.922301] ata4: limiting SATA link speed to 1.5 Gbps
[ 9290.922303] ata4: hard resetting link
[ 9295.948393] ata4: COMRESET failed (errno=-16)
[ 9295.948400] ata4: reset failed, giving up
[ 9295.948401] ata4.00: disabled
There are some new errors:
[ 9295.948522] sd 3:0:0:0: [sdf] FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[ 9295.948524] sd 3:0:0:0: [sdf] CDB:
[ 9295.948525] Write(16): 8a 00 00 00 00 00 b9 0c fd 00 00 00 40 00 00 00
[ 9295.948538] blk_update_request: I/O error, dev sdf, sector 3104636160
[ 9295.948542] EXT4-fs warning (device sdf1): ext4_end_bio:317: I/O error -5 writing to inode 49807774 (offset 155189248 size 4194304 starting block 388079688)
[ 9295.948543] Buffer I/O error on device sdf1, logical block 388079264
(Note that I've shuffled some drives since I started this post, and this drive is now sdf instead of sde.)
This last error repeats several times with different logical blocks, and then I get this an equal number of times:
[ 9295.948585] EXT4-fs warning (device sdf1): ext4_end_bio:317: I/O error -5 writing to inode 49807774 (offset 155189248 size 4194304 starting block 388079856)
There's more of the same, and all the while the copy is still going on without complaining. Finally I get:
[ 9295.950321] Aborting journal on device sdf1-8.
[ 9295.950345] Buffer I/O error on dev sdf1, logical block 610304000, lost sync page write
[ 9295.950361] EXT4-fs (sdf1): Delayed block allocation failed for inode 49807775 at logical offset 0 with max blocks 1024 with error 30
[ 9295.950362] Buffer I/O error on dev sdf1, logical block 0, lost sync page write
[ 9295.950365] EXT4-fs (sdf1): This should not happen!! Data will be lost
[ 9295.950365]
[ 9295.950366] EXT4-fs error (device sdf1) in ext4_writepages:2421: Journal has aborted
[ 9295.950368] EXT4-fs error (device sdf1): ext4_journal_check_start:56: Detected aborted journal
[ 9295.950370] JBD2: Error -5 detected when updating journal superblock for sdf1-8.
[ 9295.950371] EXT4-fs (sdf1): Remounting filesystem read-only
[ 9295.950372] EXT4-fs (sdf1): previous I/O error to superblock detected
[ 9295.950379] Buffer I/O error on dev sdf1, logical block 0, lost sync page write
[ 9295.950394] Buffer I/O error on dev sdf1, logical block 0, lost sync page write
[ 9326.009002] scsi_io_completion: 10 callbacks suppressed
[ 9326.009007] sd 3:0:0:0: [sdf] FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[ 9326.009009] sd 3:0:0:0: [sdf] CDB:
[ 9326.009011] Write(16): 8a 00 00 00 00 00 00 00 0f b8 00 00 00 08 00 00
[ 9326.009018] blk_update_request: 10 callbacks suppressed
[ 9326.009020] blk_update_request: I/O error, dev sdf, sector 4024
[ 9326.009023] Buffer I/O error on dev sdf1, logical block 247, lost async page write
(Note that this time I did not unmount and remount with data=writeback, so it was doing its default journaling.)
After this, the rsync failed, presumably because the file system was remounted read-only.
I'm sorry for the log dump. I've tried to pare it down to the essentials, but I'm afraid I'm not familiar enough with what's going on here to pare it down any further.
|
This looks like a hardware issue, rather than a kernel bug. You can try the following:
re-seat the SATA cable
use another SATA cable
run SMART diagnostics (the self-tests, see smartmontools)
run a destructive badblocks scan
If you have a spare drive or computer you could also try switching (use another drive in the same computer, use the troublesome drive in another computer) to check whether the motherboard's at fault. Since the drive seems to have issues under load a simple dd if=/dev/zero of=... with appropriate size parameters might be enough to reproduce the errors.
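As a concrete sketch of that idea (the mount point and size are placeholders; pick a size large enough to keep the drive busy for a while):

```shell
# Sequentially write a large file to the suspect filesystem to try to
# reproduce the errors under sustained load. /mnt/suspect is a placeholder.
dd if=/dev/zero of=/mnt/suspect/testfile bs=1M count=10240 conv=fsync status=progress
rm /mnt/suspect/testfile
```

If the link resets appear again in dmesg during this run, the problem is reproducible independently of rsync or cp.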
I'm not sure if your drive's warranty would apply since it was originally an external drive...
| Filesystem errors when restoring many files |
1,628,689,358,000 |
I want to copy the last used (or maybe created) files of a total size to another folder. Is this possible without additional tools?
I have a USB drive whose size is less than the total size of a folder. Since I can't copy all the files to the USB drive, I would like to copy them based on latest usage until there is no more space. Ideally the method also supports updating the files without the need to erase all files and re-copy them.
|
On the assumption (based on the [linux] tag) that you have bash available, as well as the stat and sort commands; on the further assumption that you want to sync the most-recently-modified files first (see man stat for other timestamp options), then here is a bash script that will loop through all the files in the current directory (for f in * is the key line for that), gathering their last-modified timestamps into an array, then it loops through the sorted timestamps and prints -- a sample! -- rsync command for each file (currently has timestamp debugging information attached as proof).
You'll have to adjust the rsync command for your particular situation, of course. This script will output rsync commands for every file in the current directory; my suggestion would be to either execute these rsync's "blindly", letting the ones at the end fail, or to put them into a script to execute separately.
This script does not attempt to optimize the space utilization of the destination in any way -- the only ordering it does is the last-modification timestamp (and the arbitrary ordering of the associative array in case there are multiple files modified in the same second).
#!/usr/bin/env bash
declare -A times
# gather the files and their last-modified timestamp into an associative array,
# indexed by filename (unique)
for f in *
do
[ -f "$f" ] && times[$f]=$(stat -c %Y "$f")
done
# get the times in (unique) sorted order
for ts in "${times[@]}"
do
    echo "$ts"
done | sort -run | while read t
do
# then loop through the array looking for files with that modification time
for f in "${!times[@]}"
do
if [[ ${times[$f]} = $t ]]
then
echo rsync "$f" -- timestamp ${times[$f]}
fi
done
done
| Copy last used files of total size |
1,628,689,358,000 |
I have like 5GB of data and a very slow USB-Drive.
Should I use dd, cp or rsync?
Should I compress them first into for example 7z / tar or not?
In short: what is the best way, and what are the best practices, to copy all those files to the USB drive?
Thanks for your help :)
|
One good way is to compress them (on the same media where they are) and then transfer the archive to the USB drive. This way the USB drive sees far fewer filesystem updates and far fewer per-file create/open/close operations.
As for the copy operation itself, plain cp will be fine.
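For instance, along those lines (the paths are assumptions), packing the tree into a single archive first turns thousands of small writes into one large sequential one:

```shell
# Create the archive on the source media, then copy the single file
# to the mounted USB drive in one sequential write.
tar -czf /tmp/backup.tar.gz -C /path/to/data .
cp /tmp/backup.tar.gz /media/usb/
```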
| How to Transfer huge amount of Files to slow USB Drive (dd, cp, rsync, 7z, tar) |
1,628,689,358,000 |
I have computer 1 logging voltage data to a file volts.json every second.
My second computer connects via ssh and grabs that file every 5 minutes. Splunk indexes that file for a dashboard.
Is scp efficient in this manner? If so, then OK. Next, how do I manage the file and keep it small, without it growing past, let's say, 2 MB? Is there a command to roll off the earlier logs and keep the newest?
the json looks like this right now:
{
"measuredatetime": "2022-06-27T18:00:10.915668",
"voltage": 207.5,
"current_A": 0.0,
"power_W": 0.0,
"energy_Wh": 2,
"frequency_Hz": 60.0,
"power_factor": 0.0,
"alarm": 0
}
{
"measuredatetime": "2022-06-27T18:00:11.991936",
"voltage": 207.5,
"current_A": 0.0,
"power_W": 0.0,
"energy_Wh": 2,
"frequency_Hz": 59.9,
"power_factor": 0.0,
"alarm": 0
}
|
To keep directories synchronized through ssh, the typical tool is rsync.
To roll log files and save space, logrotate is well dedicated.
To secure an unattended simple task through ssh, .ssh/authorized_keys with forced command is an excellent practice.
Example:
set /etc/logrotate.d/volts file (imitate classical syslog settings)
create a task-dedicated key pair with ssh-keygen; in this particular case, you do not want a passphrase; security is ensured by the authorized_keys restrictions
in .ssh/authorized_keys, set:
command="rsync --server --sender -logDtpre.iLsf . /path/to/volts/" ssh-rsa AAAAB3NzaC1yc2E[...pubkey...] blabla
on the other side, in crontab, set
rsync -e "ssh -i /path/to/privatekey" -a otherhost:/path/to/volts/ /path/to/volts
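A minimal /etc/logrotate.d/volts along those lines might look like the following sketch (the path, size threshold and retention count are assumptions to adapt; copytruncate keeps the logging process writing to the same file):

```
/path/to/volts/volts.json {
    size 1M
    rotate 7
    compress
    delaycompress
    missingok
    notifempty
    copytruncate
}
```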
On computer 1, you could also replace the log file by a named pipe, make a daemon script that consumes the stream and writes safely to a file (e.g using a semaphore to manage concurrent I/O), so that you have a good control over the data integrity.
| How can I optimize the transfer of files between two systems and also trim the file |
1,628,689,358,000 |
I want a script to copy the same file from source folder to target folder by incrementing the filename (ex. file1, file2, file3, file4,....). This would be for some performance testing.
I've got this code so far, but how best can it be achieved?
#!/bin/sh
for i in 1 2
do cp /tmp/ABC*/folder1/ABC*$i
done
|
You want a forever-repeating loop
#!/bin/sh
n=1
while true
do
cp /tmp/ABCfile "/mnt/share/reception/13_calling_cards_data/ABCfile.$n"
n=$(( n+1 ))
done
| script to copy same file to a folder continuously in linux |
1,628,689,358,000 |
I'm a first time poster & new-ish to coding:
I need to copy a file ING_30.tif into 100s of folders, so I'm hoping to use a terminal command to do this quickly.
Both ING_30.tif and the folders I need to copy it into are in the same parent folder, ING/.
I don't want to copy ING_30.tif into every folder in ING/, just the folders in ING/ that start with AST (e.g. AST_1113054).
These AST* folders already have other files in them which I don't want to remove.
Help would be appreciated. I found posts online that copied a file into multiple folders, but not code that let me specify that I only want to copy into the AST* folders. I also saw that some commands copied over the file but removed existing files already in the folder, which I need to avoid. Thanks!
|
Another option:
for d in AST*/; do cp ING_30.tif "$d"; done
This will loop over all directories (enforced by the trailing /) that match the glob pattern AST* and copy the file there.
This is safer than using xargs in case your directory names can contain spaces or other funny characters (see answers to this question for more insight), unless you have the GNU version which accepts the -0 option, and in addition have a means to feed the directory names as NULL-terminated strings to xargs.
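An equivalent sketch using find, which also copes with directory names containing spaces or other unusual characters:

```shell
# Copy ING_30.tif into every immediate subdirectory whose name starts with
# AST; existing files in those directories are left untouched.
find . -maxdepth 1 -type d -name 'AST*' -exec cp ING_30.tif {} \;
```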
| Copy file into multiple folders, but only when starting with AST |
1,628,689,358,000 |
This is something I imagine I might have to submit a patch or feature request for, but I'd like to know if it is possible to create a hardlink to a file, that when that hardlink which was not the original file is editted, that it would be copied first before it was actually editted?
Which major filesystem would this apply to?
Thanks.
|
After you create a hard link to a file, there are just two links to one file. While you may remember which link was first and which was second, the filesystem doesn't.
So it is just possible for an editor to determine whether there is more than one link to a file or not. An editor may or may not preserve the link when it saves the new file.
What you may want is a filesystem that supports cp --reflink. That way you get a space efficient copy, but when you change the copy, your original file is not modified.
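For example, with GNU cp (the filenames are placeholders):

```shell
# --reflink=auto makes a copy-on-write clone on filesystems that support it
# (e.g. Btrfs, XFS); elsewhere it silently falls back to an ordinary copy.
# Editing the clone never modifies the original.
cp --reflink=auto original.txt edit-me.txt
```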
| How can I have it so, that when hardlinks which are not the original, are editted, that they would first be copied then editted? |
1,628,689,358,000 |
I am trying to back up data (230 GB, 160k files), over USB3.0 to a newly bought external Seagate Expansion Portable Drive of 4 TB, formatted as NTFS. I am running Ubuntu 18.04.3 LTS.
I first tried using a simple cp command in the terminal, but after only copying a few percent, the copying started stuttering and became slow. After some time the disk became unresponsive. Remounting the disk did not work. I tried connecting the disk to another computer, and was first unable to mount it, and then after a few attempts, it would mount but read/write would be very slow.
Once the cp starts failing, I get the following errors in dmesg (all these messages are repeating multiple times but with some different numbers):
[67598.098118] sd 4:0:0:0: [sdb] tag#18 uas_zap_pending 0 uas-tag 19 inflight: CMD
[67598.098122] sd 4:0:0:0: [sdb] tag#18 CDB: Write(16) 8a 00 00 00 00 01 1c 75 24 18 00 00 04 00 00 00
[67598.225621] usb 1-9: reset high-speed USB device number 5 using xhci_hcd
[67598.378202] scsi host4: uas_eh_device_reset_handler success
[67598.378466] sd 4:0:0:0: [sdb] tag#14 FAILED Result: hostbyte=DID_RESET driverbyte=DRIVER_OK
[67598.378468] sd 4:0:0:0: [sdb] tag#14 CDB: Write(16) 8a 00 00 00 00 01 1c 74 d0 18 00 00 04 00 00 00
[67598.378470] blk_update_request: I/O error, dev sdb, sector 4772384792 op 0x1:(WRITE) flags 0x104000 phys_seg 128 prio class 0
[67598.378473] buffer_io_error: 246 callbacks suppressed
[67598.378474] Buffer I/O error on dev sdb2, logical block 596515075, lost async page write
[67635.212662] scsi host4: uas_eh_device_reset_handler start
[67635.213657] sd 4:0:0:0: [sdb] tag#28 uas_zap_pending 0 uas-tag 10 inflight: CMD
[67635.213658] sd 4:0:0:0: [sdb] tag#28 CDB: Write(16) 8a 00 00 00 00 01 1c 75 e8 18 00 00 04 00 00 00
[67635.340988] usb 1-9: reset high-speed USB device number 5 using xhci_hcd
[67635.490335] scsi host4: uas_eh_device_reset_handler success
I left the disk for a week, and then did a SMART scan using the Seagate Bootable Tool, which showed no issues.
Thus, I attempted copying the data again. The disk would mount properly now and I could read/write without issues, so I started an rsync command. First I did
rsync -avh source dest
It worked, albeit slowly, for about 20 % of the data, then it started stuttering so I stopped the transfer. I restarted the transfer using
rsync -avhW source dest --inplace
to try and make it faster. It ran great, much faster than the first attempt, but after a few minutes, I received errors:
rsync: recv_generator: failed to stat "..." : Input/output error (5)
rsync: write failed on "...": Input/output error (5)
rsync error: error in file IO (code 11) at receiver.c(393) [receiver=3.1.2]
In dmesg I see the following:
[ 6772.890553] buffer_io_error: 1092735 callbacks suppressed
[ 6772.890556] Buffer I/O error on dev sdb2, logical block 874428, async page read
Once this happened, the disk became unresponsive. After a few minutes I could remount it, and the folder to which I was copying data was completely empty, including the files that had been properly copied during my first rsync attempt. I did not try to restore any data to see if it was still intact; I suppose that only the file table has been corrupted.
The files that were being copied at the time of failure are of type .mat.gz, on the order of 1 MB each.
As a sidenote, an old external Seagate disk recently broke when I was copying small amounts of data from it, on this computer (the infamous click of death...), which was also my first HDD ever to die.
I have no idea what to make of this, if the problem lies with how I am copying data (can copying data destroy disks?), if the problem lies with hardware (computer, HDD, USB-SATA converter, ...) or if it has to do with Ubuntu... Normally I only run Manjaro and I never experienced this kind of issues.
|
Thank you for all your help.
I have now solved the issue. I reformatted the drive to ext4 and after that I used the command
rsync -avhW source dest --inplace --exclude=".*/"
where
-a is for archive, which preserves ownership, permissions etc.
-v is for verbose, to see what is happening
-h is for human-readable, so the transfer rate and file sizes are easier to read
-W is for copying whole files only, without delta-xfer algorithm which should reduce CPU load
--inplace tells rsync to not create a temporary copy of the file to be transferred, which is then copied to the destination. This should speed up the process.
--exclude=".*/" is for excluding all hidden folders
Average data rate for 392 GB was 81.3 MB/s, which is much faster than what I achieved before reformatting the drive.
dmesg was clear of errors this time.
Note that I did not attempt to make a fresh NTFS partition on the drive to see if it was the particular NTFS partition, from the factory, that was the problem or if it is NTFS in itself that was causing the issues. This unfortunately means that I do not have a complete answer to what the problem was. I also did not attempt to increase the timeout thresholds, but given the much faster data rate this time around, I would say that increased timeout thresholds would at best be a workaround, and not a solution.
| copying files (cp and rsync) to external HDD causes i/o errors and loss of data on destination |
1,628,689,358,000 |
Like many teams, we now have people working from home. These remote clients are behind firewalls (which we do not control), and they do not have static IP addresses. In short, we cannot directly SSH into these clients. However, the clients can SSH into our server. (Hardened SSH is already set up on all clients and the server for other reasons.)
Our requirement is to keep a set of files (in a few different directories) in sync on each client, and to do it efficiently. I am trying to avoid having each client run an rsync command every NN seconds. It is preferable for the client to be notified when any of the relevant files on the server have been changed.
Furthermore, our implementation can use only SSH, rsync, inotify tools, and either bash or Python (as well as tools like awk, cut, etc.). Specifically, we cannot use NextCloud, OwnCloud, SyncThing, SeaFile, etc.
The only open incoming port on the server is for SSH and the only packages we wish to maintain or update are core packages from our distro's repository.
Our idea is to have each client establish a reverse SSH tunnel to the server. Then the server can run a script like this:
#!/bin/bash
while true; do
inotifywait -r -e modify,attrib,close_write,move,create,delete /path/to/source/folder
for port_user in "$(netstat -Wpet | grep "ESTABLISHED" | grep 'localhost.localdomain:' | grep 'sshd:' | cut -d ':' -f2-3 | cut -d ' ' -f1,4)"; do
uport=$(echo $port_user | cut -d ' ' -f1)
uu=$(echo $port_user | cut -d ' ' -f2)
sudo -u $uu rsync -avz -e "ssh -p $uport -i /home/$uu/.ssh/id_ed25519" /path/to/source/folder $uu@localhost:/path/to/destination/folder
done
done
I am seeking feedback. First, can the above bash script be improved or cleaned up? It seems like I had to use far too many cut statements, for example.
EDIT: here are the responses to the excellent questions and comments by roaima.
The script on the file server is running as root. The script on the client is not.
& 7. This is example output of my netstat command
netstat -Wpetl
tcp 0 0 localhost.localdomain:22222 0.0.0.0:* LISTEN myuser 42137 8381/sshd: myuser
"You have a race condition..." - Thank you. We are going to ignore this issue for now.
"You have an omission problem..." - Thank you again. I believe this is easily remedied on the client side. Here is the client side script that will launch upon user login:
#!/bin/bash
synchost=sync.example.com
syncpath="path/to/sync/folder"
uu=$(logname)
uport=222222 #hard code per client device
# initial sync upon connecting:
rsync -avzz -e "ssh -i /home/$uu/.ssh/id_ed25519" /"$syncpath"/ $uu@$synchost:/"$syncpath"
# loop until script is stopped when user logs out
while true; do
inotifywait -r -e modify,attrib,close_write,move,create,delete /"$syncpath"
rsync -avzz -e "ssh -i /home/$uu/.ssh/id_ed25519" /"$syncpath"/ $uu@$synchost:/"$syncpath"
done
There is also an on-demand script the user can run at any time for force a sync. It is the above script without the while loop.
Here is the current version of the server script:
syncpath="path/to/sync/folder"
while true; do
inotifywait -r -e modify,attrib,close_write,move,create,delete /"$syncpath"
netstat -Wpetl | grep "LISTEN" | grep 'localhost.localdomain:' | grep 'sshd:' | while read proto rq sq local remote state uu inode prog
do
uport=${local#*:}
sudo -u $uu rsync -avzz -e "ssh -p $uport -i /home/$uu/.ssh/id_ed25519" /"$syncpath"/ $uu@localhost:/"$syncpath"
done
done
"You should consider a timeout for each ssh/rsync to the client so that if they disconnect while you're attempting a transfer you don't end up blocking everyone else".
This is a good suggestion. However, some valid rsync updates may run for a much longer time than average. Can you suggest a proper way to deal with the normal & necessary long rsync updates at times while also handling the rare situation of a client disconnecting during an update?
I have an idea for one approach that may solve the timeout as well as (most of) the race condition in a very, very simple way. First, the initial client side sync upon each user login should take care of long-running update operations. Therefore, server side sync operation times will not have such a long right tail. We can optimize the timeout parameters and sleep time and use an approach like the following:
syncpath="path/to/sync/folder"
while true; do
inotifywait -r -e modify,attrib,close_write,move,create,delete /"$syncpath"
netstat -Wpetl | grep "LISTEN" | grep 'localhost.localdomain:' | grep 'sshd:' | while read proto rq sq local remote state uu inode prog
do
uport=${local#*:}
timeout 300s sudo -u $uu rsync -avzz -e "ssh -p $uport -i /home/$uu/.ssh/id_ed25519" /"$syncpath"/ $uu@localhost:/"$syncpath"
done
sleep 90
netstat -Wpetl | grep "LISTEN" | grep 'localhost.localdomain:' | grep 'sshd:' | while read proto rq sq local remote state uu inode prog
do
uport=${local#*:}
timeout 900s sudo -u $uu rsync -avzz -e "ssh -p $uport -i /home/$uu/.ssh/id_ed25519" /"$syncpath"/ $uu@localhost:/"$syncpath"
done
done
One last comment. The parameters shown for the rsync commands are not final. Suggestions are appreciated, but we also intend to spend some time evaluating all the options for the rsync commands.
|
A number of thoughts
Your script is (presumably) running as root, so that netstat -Wpet can run and sudo -u ${user} operation is simplified.
Using a reverse connection such as ssh -R 20202:localhost:22 centralserver I cannot get a port and user combination from the netstat | grep | grep | cut ... line.
netstat -Wpet | grep "ESTABLISHED" | grep sshd:
tcp 0 36 centralserver:ssh client:37226 ESTABLISHED root 238622975 15198/sshd: roaima
As a result I can't usefully test possible changes to your script. What are you expecting to see here?
You have a race condition, such that if a second file is changed after the inotifywait has completed it may not get propagated to all your target systems until another file has been changed.
A fix for this might be to listen for events from a single instance of inotifywait and run the set of rsync transfers on each event. However, depending on the frequency of updates this might saturate your clients' network connections
You have an omission problem, in that a client connecting after a set of changes will not receive those changes until the next file change. If the updates are this critical you need to consider some way of updating the client copy immediately they have connected
You should consider a timeout for each ssh/rsync to the client so that if they disconnect while you're attempting a transfer you don't end up blocking everyone else
Given a snippet of bash code such as this, you may be able to replace cut statements with variable manipulation (%, #, and / operators)
while read -r proto recvq sendq localaddrport foreignaddrport state user inode pidprogram name
do
localaddr="${localaddrport%:*}" localport="${localaddrport#*:}"
foreignaddr="${foreignaddrport%:*}" foreignport="${foreignaddrport#*:}"
pid="${pidprogram%/*}" program="${pidprogram#*/}"; program="${program%:}"
echo "Foreign address = $foreignaddr and port = $foreignport"
echo "PID = $pid, program = $program"
echo "Name = $name"
done < <(netstat -Wpet | grep '\<localhost.localdomain:.*\<ESTABLISHED\>.*/sshd:')
If we could see expected output of your netstat command it might be possible to use awk to simplify the line processing
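As a sketch of that awk simplification (the field positions are taken from the sample netstat -Wpetl output in the question, so treat them as an assumption):

```shell
# Print "port user" for each sshd LISTEN line fed on stdin.
parse_listeners() {
    awk '/LISTEN/ && /sshd:/ {
        n = split($4, a, ":")   # field 4 is the local address:port
        print a[n], $7          # port number and owning user
    }'
}

# Intended usage: netstat -Wpetl | parse_listeners
```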
| efficiently rsync multiple clients (which are behind firewalls) to a server |
1,628,689,358,000 |
I have Ubuntu installed on a VM on my PC.
I need to copy some folder ( recursively ) from Ubuntu to the server.
If I want to download the folders from the server to my VM machine I did this:
rsync -av [email protected]:/home/my_folder ./
my_folder contains other folders and files.
What if I want to upload a folder to the server ?
Hope is clear
|
You'd just need to switch the arguments (which represent source and destination) like
rsync -av ./newfolder [email protected]:/home/my_folder
Have a look at man rsync which explains that a trailing slash after newfolder would only transfer the contents of said directory - in this case, you might want to change the upload destination to .../my_folder/newfolder.
| Push folders with rsync on a server |
1,628,689,358,000 |
I am using Manjaro Gnu/Linux and I have a directory named files. Under this directory, I have around 650 sub directories, with names such as: dir1, dir2, dir3, dir4, ...
Under each sub directory there are varying number of .jpg images (say, from 2 to 11).
Say as an example, under dir1 subdirectory, the images are imgaf001.jpg and srep0001.jpg.
I want to write a command/script to copy all such images to a new directory names all_images such that the images are renamed to the names of their sub directories.
For example: for the dir1 sub directory, imgaf001.jpg changes to dir1_1.jpg and srep0001.jpg changes to dir1_2.jpg (after the underscore comes the image count).
How can I achieve this?
Thanks
|
You could run this script in the directory named files:
mkdir all_images
find -type f -name '*.jpg' -exec sh -c '
c=1
for f in "$@"; do
pdir=${f%/*}
    pdir=${pdir##*/} #Now pdir contains the parent directory name
cp -- "$f" "all_images/${pdir}_${c}.jpg"
c=$((c+1))
done
' findsh {} +
Sample directories with images:
$ ls dir*
dir1:
asj.jpg assa.jpg
dir2:
kasj.jpg kkl.jpg
After script execution:
$ ls all_images/
dir1_1.jpg dir1_2.jpg dir2_3.jpg dir2_4.jpg
If you prefer the counter to be restarted upon source directory change, so that the result is dir1_1.jpg dir1_2.jpg dir2_1.jpg dir2_2.jpg, then do a little adaptation in the for loop:
mkdir all_images
find -type f -name '*.jpg' -exec sh -c '
for f in "$@"; do
pdir=${f%/*}
    pdir=${pdir##*/} #Now pdir contains the parent directory name
[ "$pdir" != "$oldpdir" ] && c=1
cp -- "$f" "all_images/${pdir}_${c}.jpg"
oldpdir=$pdir
c=$((c+1))
done
' findsh {} +
| Copying and renaming images |
1,628,689,358,000 |
I have a case when I need to move data from an old server: host1 to a new server: host2.
The problem is host1 cannot see host2, but I can use another server (localhost) to SSH to both host1 and host2.
Imagine it should work like this: host1 -> localhost -> host2
How can I use rsync to copy files between host1 and host2? I tried this command on localhost server but it says The source and destination cannot both be remote.
rsync -avz host1:/workspace host2:/rasv1/old_code-de
|
I ended up with the solution from https://unix.stackexchange.com/users/312074/eblock
with scp's -3 option, which routes the data through the local host so the two remotes never need to reach each other directly:
scp -3 -r host1:/workspace host2:/rasv1/old_code-de
| How to use rsync to copy files between 2 remote servers based on the localhost server? [duplicate] |
1,628,689,358,000 |
I have the following directory structure:
top_dir
|________AA
|_______f1.json
|_______f2.json
|________BB
|_______f1.json
|_______f2.json
|________CC
|_______f1.json
|_______f2.json
I would like to write a script / command line command to get the following structure
new_dir
|_______f1_AA.json
|_______f2_AA.json
|_______f1_BB.json
|_______f2_BB.json
|_______f1_CC.json
|_______f2_CC.json
I tried reading into some solutions for renaming files and copying moving files with the same. However, I am not yet able to solve this.
Thanks!
|
Using a loop:
mkdir /path_to/new_dir
cd /path_to/top_dir
for i in */*.json; do
cp "$i" "/path_to/new_dir/$(basename "$i" .json)_$(dirname "$i").json"
done
$(basename "$i" .json) prints the filename without suffix, e.g. f1
$(dirname "$i") prints the directory name, e.g. AA
| Copy files with the same name but in different dirs into a new dir while renaming them |
1,628,689,358,000 |
I regularly update my system running KDE Neon, but this time after the update something broke in the "file copy" process. The system slows down during copying to external hdd or pendrive so much so that the system becomes unusable, CPU usage runs too high. Initially after reading some online forums I thought it was some taskbar animation issue, but after I tried to copy big files using terminal and tty, the results are the same in both cases, so the problem is not with the animations. Any ideas on what's causing the issue?
My system specs:
CPU: Intel i5-7200U
RAM: 8GB
HDD: 1TB
|
After a bit of research I noticed that the kernel had been updated from 5.0 to 5.3 by the system updates. After downgrading the kernel to 5.0, everything came back to normal. I don't know what is wrong with version 5.3, but it resulted in very high CPU usage, especially from the mount.ntfs process, which ran at around 60 to 70 percent. The whole KDE desktop seemed to freeze while large files were being copied, and the problem was present even on a FAT32 filesystem. I also tried kernel 5.4 and saw the same issue.
| Copying files slows down the system, making it unusable (KDE Neon) |
1,558,719,020,000 |
I have a deeply nested folder structure in which there are hundreds of files called data.log. I need a script to rename each of these data.log files according to the name of the parent folder they are in and then move the renamed file to a defined target folder. The original data.log files should remain in place.
Example:
The file /opt/slm/data/system/amd-823723/data.log needs to be renamed to amd-823723 and then moved to /opt/slm/output/, whereby the original data.log file remains in place.
|
#!/bin/bash
OUTDIR=/opt/slm/output/
find /opt/slm/data -type f -name data.log |
while IFS= read -r FILE; do
    OUTFILE="$(basename "$(dirname "$FILE")")"
    cp -p "$FILE" "$OUTDIR$OUTFILE"
done
| Special "copy and rename" case |
1,558,719,020,000 |
Is there an elegant and fast way to copy a certain directory structure and only select a random amount of files to be copied with it. So for example you have the structure:
--MainDir
--SubDir1
--SubSubDir1
--file1
--file2
--...
--fileN
--...
--SubSubDirN
--file1
--file2
--...
--fileN
--...
I want to copy the entire folder structure but choose only a specific number of random files from {files1-filesN} of each SubSubDir to be copied along.
|
Since you tagged this as linux I'll assume GNU utilities.
Copy directory structure from $src to $dest:
find "$src" -type d -print0 | cpio -padmv0 "$dest"
Also copy a random sample of $nfiles files from each leaf subdirectory of $src:
find "$src" -type d -links 2 -exec \
sh -c 'find "$1" -type f -print0 | shuf -z -n "$2"' sh {} "$nfiles" \; | \
cpio -padmv0 "$dest"
Here the first find finds leaf subdirectories (-links 2), then the second find finds files in each of these subdirectories. shuf chooses a random sample of files, and finally cpio copies them.
| Copy directory structure with random number of files |
1,558,719,020,000 |
I want to move files from directory A to directory B. But there are some conditions.
directory A structure:
a.txt_20170502
b.txt_20170502
a.txt_20170507
asd.txt_20170509
asd.txt_20170522
So, I want to rename a.txt_20170502 to a.txt and move that file to directory B, but if a.txt is present in directory B, it will not move that file.
Example:
a.txt
asd.txt
This process continue until all the candidate files are moved from directory A to B.
I am confused about how to check whether a file is already in the target directory, so that such a file is not moved.
Condition :-
There is another script running in the background which fetches data from directory B.
So, if any files are present in directory B, they will be automatically copied to a mainframe server.
|
for file in A/*.txt_*; do
newfile="B/${file##*/}" # remove A path, add B path
newfile="${newfile%_*}" # remove trailing suffix
if [[ ! -f "$newfile" ]]; then
mv "$file" "$newfile"
fi
done
This will iterate over all files in A that matches *.txt_*. It will construct a new file path by replacing the A path with the B path and strip the trailing _xxxxxxxx suffix from the filename. If the new filename is not already present under B, the file will be moved there.
| move only uniq files from one directory to other [closed] |
1,558,719,020,000 |
Very simple script to copy a file
#!/bin/bash
#copy file
mtp-getfile "6" test2.jpg
I set it as executable and run it using
sudo sh ./test.sh
It gives me a file called test2.jpg that has no icon and I cannot open
I get a 'Failed to open input stream for file' error
However, if I simply issue the following from the command line
mtp-getfile "6" test2.jpg
It works as expected.
What is wrong with my script?
I checked and the resulting .jpg file in each case has the same number of bytes. Very strange.
|
Need to do
sudo chown <user> <copied file name>
Not sure why permissions would be different in each case
| copying binary file(.jpg) works from command line but not from script |
1,558,719,020,000 |
We are using the following rsync command in a script to copy files from source to destination.
rsync -av --exclude 'share/web/sessions/' --rsync-path "sudo rsync" /sdata/ 172.31.X.X:/sdata/ &>/home/fsync/rsyncjob/output
Now, we have a cleanup script on source host which is removing some of the files after some particular no of days based on our requirement. We want that the files once they are removed from source host , rsync should also get them removed from destination host .
For that, I can see rsync provides --delete-before and --delete-after options to remove files from the destination host once they have been removed from the source host. But I am a little skeptical about using these options, as the man page says: "This option can be dangerous if used incorrectly! It is a very good idea to first try a run using the --dry-run option (-n) to see what files are going to be deleted."
My updated command is as follows
rsync -av --exclude 'share/web/sessions/' --delete-after --rsync-path "sudo rsync" /sdata/ 172.31.X.X:/sdata/ &>/home/fsync/rsyncjob/output
Are these options correct? These are production hosts for us and I want to be sure before using them. Any expert advice is also welcome.
|
I would not use --delete-after because it forces rsync to rescan the file list.
Best option today is to use --delete-during (or --del for short). If you want to retain the "delete after" effect due to I/O error concerns, use --delete-delay.
See the man page for reference:
Some options require rsync to know the full file list, so these options disable the incremental recursion mode. These include: --delete-before, --delete-after, --prune-empty-dirs, and --delay-updates. Because of this, the default delete mode when you specify --delete is now --delete-during when both ends of the connection are at least 3.0.0 (use --del or --delete-during to request this improved deletion mode explicitly). See also the --delete-delay option that is a better choice than using --delete-after.
And of course the relevant portions for each method.
| Using --delete option with rsync |
1,558,719,020,000 |
I want to create an alias called 'bu' (backup). The bu alias would use the copy tool to copy any file passed to it into a directory I will manually set up at /root/backup/
$bu testfile.txt
cp testfile.txt /root/backup/
So I think I need to create a bash script and point the alias to that script (I could be wrong here), but I'm not sure how to approach the bash script to achieve this.
|
You can use a bash script called bu. Put this code inside a file bu:
#!/bin/bash
cp "$1" /root/backup
and then save it in a directory that is in your $PATH, or add the directory where you put the file to your $PATH. Lastly make the script executable: chmod +x bu.
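An alternative is a shell function in your ~/.bashrc, which also handles several files at once (the BU_DIR override is my addition for flexibility; the default matches the /root/backup directory from the question):

```shell
#!/bin/bash
# bu as a shell function: copies every file given as an argument
# into the backup directory.
bu() {
    local backup_dir="${BU_DIR:-/root/backup}"
    mkdir -p "$backup_dir"           # create the directory if needed
    cp -- "$@" "$backup_dir"/        # -- guards against names starting with -
}
```

Then `bu testfile.txt other.txt` backs up both files in one call.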
| attempting to create a script, with an alias, that will backup a single file |
1,558,719,020,000 |
We are using rsync to copy files from one location to another location on a different host. We want to exclude a directory from the source location so that files from this directory are not copied. The files inside this directory are session data, and this data is removed once the session completes, so we are excluding this directory.
For this we modified our rsync to exclude the directory as follows:
rsync -av --exclude '/share/web/moodle/moodledata/sessions/' --rsync-path "sudo rsync" /share/ 172.31.X.X:/share/ &>/home/fsync/rsyncjob/output."$datetime";
But I can see that the directory is not getting excluded, as the log file shows rsync trying to copy the session files:
rsync: send_files failed to open "/share/web/moodle/moodledata/sessions/sess_bnktcuvnv965dj4qtv1438o651": Permission denied (13)
rsync: send_files failed to open "/share/web/moodle/moodledata/sessions/sess_d2vvo9ip79jc13qafgq5of50s2": Permission denied (13)
Please suggest a fix to exclude the directory.
|
Always try to see if there's already a similar discussion before posting the question.
With rsync, all exclude (or include) paths beginning with / are anchored to the root of the transfer. The root of the transfer in this case is /share. Use a relative path instead of an absolute path and it should work.
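The anchoring rule can be demonstrated locally with throw-away directories (a sketch; paths are made up):

```shell
# Demo of exclude anchoring: a relative pattern matches under the
# transfer root, while a leading-/ pattern is anchored to that root,
# NOT to the filesystem root.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/web/sessions"
touch "$src/web/sessions/sess_x" "$src/web/ok.txt"

# Relative pattern: web/sessions is skipped, web/ok.txt is copied.
rsync -av --exclude 'web/sessions/' "$src/" "$dst/"
find "$dst"
```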
For further reference,
rsync exclude directory not working
| rsync failing to exclude directory [duplicate] |
1,558,719,020,000 |
I have OpenBSD 5.6 installed on my notebook computer and would like to copy files from my USB flash drive to the root of the installed OS. I figured out how to mount the USB drive using these commands:
# mkdir /mnt/usb
# mount /dev/sd1i /mnt/usb
# cd /mnt/usb
How do I copy files from /mnt/usb to the root of the installed OS?
|
The basic command to copy files is cp. You might want to use ls first to get a list of files (and directories):
cp some_file ~/new_name
copies a file under /mnt/usb to the file new_name in your home directory.
If you want to copy all files ending in .jpg and .JPEG to a new directory pictures under your home directory, you can use e.g.:
mkdir ~/pictures/
cp *.jpg *.JPEG ~/pictures/
I recommend you keep your root directory clean, but if you have to copy things there, / is the path to root, so the last command would be:
cp *.jpg *.JPEG /
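To copy the whole USB stick including subdirectories rather than picking individual files, cp's -R flag works; here is a throw-away demo (temp dirs stand in for /mnt/usb and the real target):

```shell
# Demo: the trailing /. copies a directory's *contents*,
# preserving the directory layout, rather than the directory itself.
usb=$(mktemp -d); target=$(mktemp -d)
mkdir -p "$usb/photos"
touch "$usb/photos/a.jpg" "$usb/readme.txt"

cp -R "$usb/." "$target/"
find "$target"
```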
| Copy files from USB flash drive to root in OpenBSD |
1,558,719,020,000 |
I did this to copy files between Windows and Linux.
C:\Documents and Settings\668340\My Documents\putty>pscp "C:\Documents and Settings\563456\abc.txt" "[email protected]:/home/auto/"
But it prompts for a password. How do I automate this task using private/public key pairs between Windows and Linux?
WinSCP also works, but the problem is that I have to hard-code the password in the code, which I don't want. Instead I want to use the private/public key concept, but I am not aware of how to do that on Windows.
EDIT: I mean I can go with passwordless authentication, but it still involves the manual process of generating public/private keys (.ppk) using puttygen.exe on Windows and then
copying the public key to .ssh\authorized on Linux. Is there any way to automate this?
EDIT: If there is any way of copying files to Windows by running some shell script or command in Unix, that could also be useful, as I can create public/private keys in Unix automatically.
|
As you want to authenticate without prior key exchange, I see no other option than using password authentication (at least the first time).
So you need to hard-code the password in your script. You can give the password to pscp with its -pw option. But I do not know how safe this is (at least on Linux, all users can normally see the command lines of all processes).
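If you can script things on the Unix side first (the question's edit notes keys can be created there automatically), key generation itself is non-interactive; a sketch, where pscp_key is a made-up filename:

```shell
# Generate an RSA key pair without a passphrase, with no prompts.
keydir=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -N "" -q -f "$keydir/pscp_key"
ls "$keydir"
# pscp_key.pub then goes into ~/.ssh/authorized_keys on the Linux host;
# puttygen (from putty-tools) can convert pscp_key to .ppk for pscp -i.
```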
| how to copy files from windows to linux witout password using a script or program? |
1,558,719,020,000 |
I have an issue where I want to connect a very old system (UNIX) to a new machine. This old machine logs in via scp using a hardcoded oldsystemaccess user and copies a file into a subdirectory of the new server's webroot, /var/www/newserver/test/import. The new machine is running an Ubuntu 18 LAMP stack with no advanced configuration done to Apache, which runs as www-data in that same group.
On the new server, I added this user oldsystemaccess to the www-data group using sudo usermod -aG www-data oldsystemaccess and I changed the permissions on the directory to rwxrwxr-x with sudo chmod 775 /var/www/newserver/test/import. After this change, the file was successfully copied over to the new system.
Now the actual issue I am trying to fix: when the files are owned by oldsystemaccess, my PHP scripts cannot read or handle them properly. Since I am unable to change the way the files are copied from the old server, or to run additional commands when they are, I am looking for a way to make files created in the import directory be owned by the www-data user and group on creation.
For now I added a crontab entry running chown -R www-data:www-data /var/www/newserver/test/import every minute but I feel like it is a really bad solution. I am looking for something like umask but for the file ownership.
|
On some systems (like FreeBSD, if you believe Wikipedia), the setuid bit on a directory would have my desired behaviour (Link). But since that is not the default behaviour on the Ubuntu I am running, I found the solution for my problem was the following:
Set the setgid bit using chmod so new files are owned by the www-data group.
sudo chmod 2775 /var/www/newserver/test/import
Apparently the script copying the files has an umask of 007 set, which caused files to be created with rwxrwx--- permissions, but I overlooked this when asking the question.
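A quick local check that the setgid bit took effect (a throw-away path standing in for /var/www/newserver/test/import):

```shell
# Create a directory and set the setgid bit as in the answer above.
d=$(mktemp -d)/import
mkdir -p "$d"
chmod 2775 "$d"
stat -c '%a' "$d"    # prints 2775 - the leading 2 is the setgid bit
# New files created inside now inherit the directory's group.
```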
If you need the write-permission bit set on files inside the directory you can then set up an ACL to automatically make files writeable by the group that owns them:
sudo setfacl -d -m group:www-data:rwx /var/www/newserver/import
| Change owning user and group on file creation |
1,558,719,020,000 |
I have two directories, lets call them X and Y
Within them I have 100k+ files, .jpg files in X and .txt files in Y
I want to randomly select N files from X and copy to folder Z
This should be manageable using find + shuffle.
I then want to find all of the files in Y with the same names as the files that were copied to Z (but with .txt extensions) and copy them to directory W
To visualize:
N files from X >> Z
Same N files from Y >> W
How would I go about doing that?
|
#!/bin/bash
X=/path/to/X
Y=/path/to/Y
Z=/path/to/Z
W=/path/to/W
mapfile -d '' -t files < <(find "$X" -type f -name '*.jpg' -print0 |
                             shuf -z -n 10 -)
for f in "${files[@]}"; do
  echo cp "$f" "$Z"
  bn=$(basename "$f" ".jpg")
  echo cp "$Y/$bn.txt" "$W"
done
This script is untested but should do a dry-run of what you want. Set the X, Y, Z, and W variables to the correct values, then run it to see what it would do, adjust as needed and when it works as required, remove the echo from both of the cp lines.
It works by first populating an array ($files) with 10 random .jpg filenames from directory $X. It uses NUL as the filename separator, so will work with any filenames, even those including annoying characters like spaces, tabs, and shell meta-characters.
Then it iterates over each of those filenames to 1. copy it to directory $Z, 2. extract the basename portion of the filename, 3. copy the basename + .txt from directory $Y to directory $W.
BTW, this requires bash version 4.4-alpha (released late 2015) or newer, because that's when the -d option was added to mapfile.
| Selecting n random files from one directory and copying them to another folder + other files with the same name, but different filetype |
1,624,962,712,000 |
I need to copy some files that are like this:
folder1/name1.csv
folder1/name2.csv
folder2/name1.csv
folder2/name2.csv
folder3/name1.csv
folder3/name2.csv
All folder* are subdirectory of a directory.
What I want to do is to copy all the files "name*" into a new directory new_dir, but I have to change their names.
Looking for help I tried
find . -name 'name*.csv' -exec cp --backup=t '{}' new_dir/ \;
But I obtain "cp: './new_dir/name1.csv' and './new_dir/name1.csv' are the same file".
How can I add a prefix or suffix to the name so that I can copy them?
Adding an integer is OK, so that the files in new_dir would be:
new_dir/name10.csv
new_dir/name21.csv
new_dir/name12.csv
new_dir/name23.csv
new_dir/name14.csv
new_dir/name25.csv
...
Or, even better, if I can rename the file adding the name of the folder from where they are copied, as:
new_dir/name1folder1.csv
new_dir/name2folder1.csv
new_dir/name1folder2.csv
new_dir/name2folder2.csv
new_dir/name1folder3.csv
new_dir/name2folder3.csv
...
Thanks in advance.
|
Given
$ tree folder*
folder1
├── name1.csv
├── name2.csv
└── name3.csv
folder2
├── name1.csv
├── name2.csv
└── name3.csv
folder3
├── name1.csv
├── name2.csv
└── name3.csv
0 directories, 9 files
then using a shell loop with parameter expansion to slice'n'dice the names (shown with echo as a dry run; remove the echo to actually copy the files)
$ for f in folder*/name*.csv; do
b="${f##*/}";
echo cp "$f" new_dir/"${b%.csv}${f%/*}.csv"
done
cp folder1/name1.csv new_dir/name1folder1.csv
cp folder1/name2.csv new_dir/name2folder1.csv
cp folder1/name3.csv new_dir/name3folder1.csv
cp folder2/name1.csv new_dir/name1folder2.csv
cp folder2/name2.csv new_dir/name2folder2.csv
cp folder2/name3.csv new_dir/name3folder2.csv
cp folder3/name1.csv new_dir/name1folder3.csv
cp folder3/name2.csv new_dir/name2folder3.csv
cp folder3/name3.csv new_dir/name3folder3.csv
| Copy and rename files adding a dynamic prefix/suffix |