| date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,302,715,144,000 |
On my Fedora 20 system I use scp a lot, and this is the second time I encounter this. When I run this command:
scp -r -P PORT user@host:/home/user/something/{file1,folder1,folder2,folder3,folder4} folder/folder2/
it asks me for the password for each file/directory it transfers.
user@host's password: "password here"
Question:
What is happening here?
Is this normal? I would think this is very peculiar behavior.
|
Your local shell (probably bash) is expanding
user@host:/home/user/something/{file1,folder1,folder2,folder3,folder4}
into:
user@host:/home/user/something/file1 user@host:/home/user/something/folder1 user@host:/home/user/something/folder2 user@host:/home/user/something/folder3 user@host:/home/user/something/folder4
Instead, you can do:
scp -r -P PORT user@host:"/home/user/something/file1 /home/user/something/folder1 /home/user/something/folder2 /home/user/something/folder3 /home/user/something/folder4" folder/folder2/
or, if you know user's login shell on the remote end is bash, you can use brace expansion too:
scp -r -P PORT user@host:"/home/user/something/{file1,folder1,folder2,folder3,folder4}" folder/folder2/
to have the remote shell split the string into arguments instead of the local shell.
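The password-per-file behaviour follows directly from that expansion: each expanded word is a separate remote operand, and scp opens a fresh connection for each one. You can see the expansion itself without running scp (hypothetical user@host; bash is invoked explicitly since brace expansion is a bash feature):

```shell
# Show the words scp actually receives after local brace expansion
bash -c 'printf "%s\n" user@host:/home/user/something/{file1,folder1}'
```

Each printed line is a separate argument, hence a separate connection and a separate password prompt.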
| SCP command with selected file and directories for download asks for password for each new file or directory |
1,302,715,144,000 |
So I understand that rsync works by using a checksum algorithm that updates files according to file size and date. But isn't this synchronization dependent on how source and destination are called? Doesn't the order of source to destination change the behavior of what gets synced?
Let me get to where I am going...
Obviously this input1
rsync -e ssh -varz ~/local/directory/ [email protected]:~/remoteFolder/
is not the same as input2
rsync -e ssh -varz [email protected]:~/remoteFolder/ ~/local/directory/
But I thought that either way you called rsync, the whole point of it was so that any file that is newer is updated between the local and remote destinations.
However, it seems that it depends on whether your new file is located under the source rather than the destination location. If I updated file1 on my local machine (with the same, but older, file1 on the server), saved it, and then used input2 to sync between locations, I noticed that the new version of file1 on my local machine was not uploaded to the server; it actually overwrote my updated local file with the older version. If I use input1, it does upload the modified file to the server.
Is there a way to run rsync in which it literally synchronizes new and modified files no matter whether source or destination locations are called first according to newer/modified file location?
|
No, rsync only ever syncs files in one direction. Think of it as a smart and more capable version of cp (or mv if you use the --remove-source-files option): smart in the sense that it tries to skip data which already exists at the destination, and capable in the sense that it can do a few things cp doesn't, such as removing files at the destination which have been deleted at the source. But at its core, it really just does the same thing, copying from a source to a destination.
As Patrick mentioned in a comment, unison will do what you're looking for, updating in both directions. It's not a standard tool, though - many UNIX-like systems will have rsync installed, but usually not unison. But on a Linux distribution you can probably find it in your package manager.
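If unison isn't available, a rough approximation of two-way sync is two one-way rsync passes with --update (-u), which skips files that are newer on the receiving side. This is only a sketch, demonstrated here with two local stand-in directories: it never propagates deletions, and simultaneous edits on both sides can still be lost.

```shell
# Two --update passes: the newer copy of each file wins on both sides
mkdir -p sideA sideB
echo old > sideA/f
sleep 1
echo new > sideB/f              # sideB now holds the newer version
rsync -au sideA/ sideB/         # skips f: the receiver's copy is newer
rsync -au sideB/ sideA/         # copies the newer f back to sideA
```

After both passes, both directories hold the newer version of f.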
| Is rsync really bidirectional or more unidirectional? |
1,302,715,144,000 |
I want to duplicate a directory on an FTP server I'm connected to from my Mac via the command-line
Let's say I have a directory called file. I want to have files2 with all of file's subdirectories and files, in the same directory as the original. What would be the simplest way to achieve this?
EDIT:
With mget and mput you could download all the files and upload them again into a different folder, but this is definitely NOT what I want/need (I started this question trying to avoid duplicating with this download/upload method from the desktop client).
|
What you have is not a unix command line, what you have is an FTP session. FTP is designed primarily to upload and download files, it's not designed for general file management, and it doesn't let you run arbitrary commands on the server. In particular, as far as I know, there is no way to trigger a file copy on the server: all you can do is download the file then upload it under a different name.
Some servers support extensions to the FTP protocol, and it's remotely possible that one of these extensions lets you copy remote files. Try help site or remotehelp to see what extensions the server supports.
If you want a unix command line, you need remote shell access, via rsh (remote shell) or more commonly in the 21st century ssh (secure shell). If this is a web host, check if it provides ssh access. Otherwise, contact the system administrator. But don't be surprised if the answer is no: command line access would be a security breach in some multi-user setups, so there may be a legitimate reason why it's not offered.
| Easiest way to duplicate directory over FTP |
1,302,715,144,000 |
I have an embedded Linux on a custom board and I would like to send and receive files over its serial port.
The only way to communicate with this device is over serial and the device offers a console on this serial port.
This board has neither kermit, busybox rx, nor lrzsz.
- Sending file to remote
I was able to send file to the board following this thread.
1. Host: cat file | base64 > file_b64
2. Remote: cat > file_b64
3. Host: minicom's Ctrl-A S => send 'file_b64'
4. Remote: cat file_b64 | base64 --decode > file
- Getting file from remote
Now I would like to retrieve a file from remote system.
Minicom has a tool for receiving files but as I only have the serial port's console using minicom to issue commands on remote side, I can't find how to do it.
I have tried using kermit on host side but it seems that I also needs to have kermit on the remote side.
EDIT:
I have also tried to reverse the sending method, but with no success, as I receive nothing from the serial port on the host side.
1. Remote: cat file | base64 > file_b64
2. Remote: (sleep 10; cat file_b64 > /dev/ttyS0) &
3. Host: minicom's Ctrl-A X => exit minicom
4. Host: cat /dev/ttyUSB0 > file_b64
I can't use minicom's receive tool because it only supports xmodem, ymodem, zmodem and kermit transfers, not ASCII.
Is there a way to retrieve files from remote without having to type commands into its console?
|
Finally found out that I was issuing the wrong command on receiver's side.
The receive command should be: cat < /dev/ttyUSB0 > file_b64
Summary
To receive from remote :
Remote side:

#Encode to base64
cat file | base64 > file_b64

#Send after timeout
(sleep 10; cat file_b64 > /dev/ttyS0) &

Host side:

### Exit minicom but keep configuration (Ctrl-A Z, then Q) ###

#Receive file
cat < /dev/ttyUSB0 > file_b64

#Decode file
cat file_b64 | base64 -di > file
| Retrieve file over serial without kermit and lrzsz |
1,302,715,144,000 |
I have a large file (2-3 GB, binary, undocumented format) that I use on two different computers (normally I use it on a desktop system but when I travel I put it on my laptop). I use rsync to transfer this file back and forth.
I make small updates to this file from time to time, changing less than 100 kB. This happens on both systems.
The problem with rsync, as I understand it, is that if it thinks a file has changed between source and destination it transfers the complete file. In my situation it feels like a big waste of time when just a small part of a file has changed. I envisage a protocol where the transfer agents on source and destination first checksum the whole file and then compare the results. When they realise that the checksums for the whole file differ, they split the file into two parts, A and B, and checksum them separately.
Aha, B is identical on both machines, let's ignore that half. Now it splits A into A1 and A2. OK, only A2 has changed. Split A2 into A2I and A2II and compare, etc. Do this recursively until it has found, e.g., three parts of 1 MB each that differ between source and destination, and then transfer just these parts and insert them at the right positions in the destination file. Today, with fast SSDs and multicore CPUs, such parallelisation should be very efficient.
So my question is: are there any tools that work like this (or in another manner I couldn't imagine, but with a similar result) available today?
A request for clarification has been posted. I mostly use Mac so the filesystem is HFS+. Typically I start rsync like this
rsync -av --delete --progress --stats. In this case I sometimes use SSH and sometimes rsyncd. When I use rsyncd I start it like this: rsync --daemon --verbose --no-detach.
Second clarification: I am asking either for a tool that just transfers the delta for a file that exists in two locations with small changes, and/or whether rsync really offers this. My experience with rsync is that it transfers the files in full (but now there is an answer that explains this: rsync needs a second rsync process on the remote end to be able to transfer just the deltas; otherwise, e.g. when copying to a mounted path, it transfers the whole file however much has changed).
|
Rsync will not use deltas but will transmit the file in its entirety if it, as a single process, is responsible for both the source and destination files. It can transmit deltas when there are separate client and server processes running on the source and destination machines.
The reason that rsync will not send deltas when it is the only process is that in order to determine whether it needs to send a delta it needs to read the source and destination files. By the time it's done that it might as well have just copied the file directly.
If you are using a command of this form you have only one rsync process:
rsync /path/to/local/file /network/path/to/remote/file
If you are using a command of this form you have two rsync processes (one on the local host and one on the remote) and deltas can be used:
rsync /path/to/local/file remote_host:/path/to/remote/file
| Smarter filetransfers than rsync? |
1,302,715,144,000 |
I need to copy some files from multiple local directories to multiple remote directories.
The command:
scp -v /file/source1/* username@host_server:/file/destination1
scp -v /file/source2/* username@host_server:/file/destination2
scp -v /file/source3/* username@host_server:/file/destination3
It asks for password again and again.
The command:
scp file1 file2 ... fileN user@host:/destination/directory/
Will put all the files into one destination directory.
But my destination are different for all the files.
|
You can't have multiple destinations in one scp command. If you want to make a single SSH connection, you'll need to use some other tool.
The simplest solution is to mount the remote filesystem over SSHFS and then use the cp command. This requires SFTP access.
mkdir host_server
sshfs username@host_server:/file host_server
cp /file/source1/* host_server/destination1
cp /file/source2/* host_server/destination2
cp /file/source3/* host_server/destination3
fusermount -u host_server
rmdir host_server
Another solution is to first organize the files locally, then copy the hierarchy. This requires rsync.
mkdir destination1 destination2 destination3
ln -s /file/source1/* destination1
ln -s /file/source2/* destination2
ln -s /file/source3/* destination3
rsync -a --copy-unsafe-links destination1 destination2 destination3 username@host_server:/file
rm -r destination1 destination2 destination3
Another solution is to keep using scp, but first open a master connection to the server. This is explained in Using an already established SSH channel
Alternatively, just bear with it and make three scp connections. But don't use your password to log in; instead, create a key pair and load the private key into your key agent (ssh-add ~/.ssh/id_rsa), then you won't have to type anything on each connection.
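One more single-connection option is streaming a tar archive through one ssh session; over the network it would look like tar cf - -C /file source1 source2 source3 | ssh username@host_server 'tar xf - -C /file' (paths assumed from the question). The pipe itself can be sketched locally:

```shell
# Pack several directories into one stream and unpack them elsewhere,
# preserving the directory layout
mkdir -p src/source1 src/source2 dst
echo a > src/source1/f1
echo b > src/source2/f2
tar cf - -C src source1 source2 | tar xf - -C dst
```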
| scp files from multiple directories on local to multiple directories on remote in one command |
1,302,715,144,000 |
My problem is that I need to backup the files on my Linux machine to my Windows laptop. My external hard drive died, and so backing up to an external drive is out of the question for the time being.
These are the methods I've tried:
Samba
Samba with Gadwin GUI
Windows Shared Folder, Wirelessly (I can't access it, even though both machines indicate a connection)
I don't want to try Samba again, because it's just too complex for me -- the 15-odd tutorials I used were either incomplete or assumed too much knowledge on the part of the reader. I've spent about 8 hours trying to make it work and I give up.
I've heard that you can connect two computers with an ethernet cable. The only problem is that mine is not a cross-over cable, and I don't have a router, so they would have to be directly connected with a regular RJ-45 cable.
I don't want to upload files to the cloud, because I have a lot of files to transfer and want it to be speedy.
|
NitroShare may be able to do what you're looking for. It is a small app that allows files to quickly be sent between machines on the same network.
Once installed on both your Linux and Windows machines, the two machines should automatically discover each other. Use the menu in the system tray to send a file or directory to a specific machine on the network.
Download links are available here.
| Transfer files between Windows and Linux machines? |
1,302,715,144,000 |
Is an implementation of Microsoft's Background Intelligent Transfer Service (BITS) available for Linux systems?
I'm looking at my options for transferring large files to a remote Linux server over the internet and I don't want it to eat all of my (limited!) upstream bandwidth.
I've successfully used BITS on Windows systems in the past but this time I'll need to be transferring to and from Linux servers.
If it makes any difference both systems are likely to be running Ubuntu based systems although ideally I'd like a solution that is distro independent.
|
First, the easy way: rsync has a --bwlimit parameter. That's a constant rate, but you can use that to easily throttle it down.
Now, if you want an adaptive rate, there is the Linux traffic control framework, which is actually fairly complicated. There are several references I'm aware of:
Linux Advanced Routing & Traffic Control
Traffic Control HOWTO
A Practical Guide to Linux Traffic Control
Personally, when I have to set this up, I use tcng to simplify the task. Here is an example:
dev office {
egress {
class ( <$ssh> )
if ip_tos_delay == 1 && tcp_sport == PORT_SSH ;
class ( <$kyon> )
if ip_dst == 172.16.1.62; // monitoring host
class ( <$fast> )
if ip_tos_delay == 1;
class ( <$default> )
if 1;
htb() {
class ( rate 1440kbps, ceil 1440kbps ) {
$ssh = class ( rate 720kbps, ceil 1440kbps ) { sfq; };
$kyon = class ( rate 360kbps, ceil 1440kbps ) { sfq; };
$fast = class ( rate 180kbps, ceil 1440kbps ) { sfq; };
$default = class ( rate 180kbps, ceil 1440kbps ) { sfq; };
}
}
}
}
In that example, traffic being sent out over the office interface is being classified into several classes: ssh, kyon, fast, and default. The link (a T1, when this was in use) is capped at 1440kbps (this must be slightly lower than the actual link rate, so that buffering happens on the Linux box, not a router). You can see that ssh is assigned 720kbps, kyon 360, etc. All can burst to the full rate (the ceil). When there is contention, the 'rate' acts as a ratio, so ssh would be given 1/2, kyon 1/4, etc. The 'sfq' says how to handle multiple ssh sessions; sfq is a form of round-robin.
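For reference, a roughly equivalent setup in plain tc might look like the untested sketch below (the device name, rates, and the single ssh filter are assumptions; tcng generates something more complete, including all four classes and filters):

```shell
# HTB root qdisc: unclassified traffic falls into class 1:40
tc qdisc add dev eth0 root handle 1: htb default 40
tc class add dev eth0 parent 1: classid 1:1 htb rate 1440kbit ceil 1440kbit
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 720kbit ceil 1440kbit   # ssh
tc class add dev eth0 parent 1:1 classid 1:40 htb rate 180kbit ceil 1440kbit   # default
tc qdisc add dev eth0 parent 1:10 sfq
tc qdisc add dev eth0 parent 1:40 sfq
# Classify outbound traffic with TCP source port 22 into the ssh class
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip sport 22 0xffff flowid 1:10
```

These commands require root and a real interface, so treat them as a configuration sketch rather than something to paste verbatim.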
| Transfer large files without hogging the bandwidth (is there a BITS equivalent for Linux?) |
1,302,715,144,000 |
Had a question on NFS mounts and how they interact with transferring files on a low level. I'm trying to understand the latency involved with transferring files from within the same mount.
Say you SSH into a VM that has a mount setup. The VM is in USA and the mount is in Europe. Now execute the following command:
sudo mv /mnt/serverInEurope/dir1/file.txt /mnt/serverInEurope/dir2/file.txt
Does the VM in USA read the file, just to write it back to the Europe mount?
The second question is very similar:
sudo mv /mnt/serverOneInEurope/file.txt /mnt/serverTwoInEurope/file.txt
If I'm transferring from one mounted server in Europe to another using a VM in the USA, will the VM read the data locally before it transfers from Europe mount to Europe mount? Or is the mv'ing of a file intelligent enough to execute the transfer entirely between the mounts in Europe?
These are very crucial distinctions because I'm transferring petabytes or more of information between different servers in Europe.
Thanks for your time.
|
Using mv for a file or folder within an NFS mount will apply the operation remotely. (See this list of API functions or this overview.) This example will execute almost immediately regardless of the size of the file, provided that dir1 and dir2 are part of the same mountpoint:
mv /mnt/serverInEurope/dir1/file.txt /mnt/serverInEurope/dir2/file.txt
Using mv to move a file or folder between mountpoints will require the client to process the data. In this scenario the data will perform a double hop across the Atlantic, even if serverOneInEurope and serverTwoInEurope are in the same physical rack:
mv /mnt/serverOneInEurope/file.txt /mnt/serverTwoInEurope/file.txt
In this second instance it would be preferable to gain access to serverOneInEurope and have it directly transfer data to serverTwoInEurope. Failing that, spin up a VM in the same datacentre and mount both shares "locally".
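The same distinction can be felt locally: within a single filesystem (or a single NFS mount) mv issues one rename(2) call and no file data moves, while across mount points it degrades to a copy followed by a delete. A small local sketch:

```shell
# Within one filesystem, mv is a metadata-only rename(2): effectively instant
mkdir -p mnt/dir1 mnt/dir2
head -c 1048576 /dev/urandom > mnt/dir1/file.txt
mv mnt/dir1/file.txt mnt/dir2/file.txt
```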
| Unix NFS Mounts and Moving Files |
1,302,715,144,000 |
I have a device running Raspbian that does not have the lrzsz package installed on it. I only have a serial port to the device, and can connect to the device using screen or minicom, but unfortunately I cannot find a way to send files over. Also, the device does not have an internet connection.
Is there some way of either transferring files serially without lrzsz, or some way of getting lrzsz to the device serially?
|
There might be simpler and more robust ways to transfer files, but this should
work:
base64 encode your file on the host system
base64 file > file.64
Redirect the serial output to a file on the Pi:
cat < /dev/ttyAMA0 > file.64
Use minicom's paste feature: Ctrl + A, Y, then select the file to be transferred. Press Ctrl + D on the Pi after the transfer is finished.
The file is then transferred to the Pi as file.64
Now base64 decode it:
base64 -d file.64 > file
The base64 conversion is required because binary files are transmitted and echoed, and some sequences might alter or terminate the session, mess with the terminal, or corrupt the transfer. Any other conversion that prevents “unsafe” characters from being echoed back to the screen will do as well, but base64 seems like a good fit here and it's installed on the Pi by default.
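The round trip is easy to verify locally before trusting it over the serial line; arbitrary binary data survives the encode/decode pair unchanged:

```shell
# Random binary bytes survive the text-only base64 round trip
head -c 1024 /dev/urandom > payload
base64 payload > payload.64
base64 -d payload.64 > payload.out
cmp payload payload.out && echo intact
```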
| Serial File Transfer without lrzsz |
1,302,715,144,000 |
I have a ~35 GB file on a remote Linux Ubuntu server. Locally, I am running Windows XP, so I am connecting to the remote Linux server using SSH (specifically, I am using a Windows program called SSH Secure Shell Client version 3.3.2).
Although my broadband internet connection is quite good, my download of the large file often fails with a Connection Lost error message. I am not sure, but I think that it fails because perhaps my internet connection goes out for a second or two every several hours. Since the file is so large, downloading it may take 4.5 to 5 hours, and perhaps the internet connection goes out for a second or two during that long time. I think this because I have successfully downloaded files of this size using the same internet connection and the same SSH software on the same computer. In other words, sometimes I get lucky and the download finishes before the internet connection drops for a second.
Is there any way that I can download the file in an intelligent way -- whereby the operating system or software "knows" where it left off and can resume from the last point if a break in the internet connection occurs?
Perhaps it is possible to download the file in sections? Although I do not know if I can conveniently split my file into multiple files -- I think this would be very difficult, since the file is binary and is not human-readable.
As it is now, if the entire ~35 GB file download doesn't finish before the break in the connection, then I have to start the download over and overwrite the ~5-20 GB chunk that was downloaded locally so far.
Do you have any advice? Thanks.
|
rsync --partial is one simple way to do it if you have rsync, since it runs over ssh just fine. What --partial does is keep a partially downloaded file, so you can just resume from where you got interrupted.
| Is it possible to download extremely large files intelligently or in parts via SSH from Linux to Windows? |
1,302,715,144,000 |
I'm doing a file transfer using sftp. Using the get -r folder command, I'm surprised by the order in which the program downloads the content.
It looks like it selects the files it needs to download randomly. I can't believe that this is actually the case, so I'm asking myself: what's going on behind the scenes?
What's the order that sftp follows when downloading a folder with its content?
From what I can see so far, it is not by name nor by size.
|
When you list the directory contents with the ls command, it will sort the listing into alphanumeric order according to current locale's sorting rules by default. It is easy to assume that this is the "natural order" of things within the filesystem - but this isn't true.
Most filesystems don't sort their directories in any way: when adding a new file to a directory, the new file basically gets the first free slot in the directory's metadata structure. Sorting is only done when displaying the directory listing to the user. If a single directory has hundreds of thousands or millions of files in it, this sorting can actually require non-trivial amounts of memory and processing power.
When the order in which the files are processed does not matter, the most efficient way is to just read the directory metadata in order and process the files in the order encountered without any explicit sorting. In most cases this would mean the files will be processed basically in the order they were added to the directory, interspersed with newer files in cases where an old file was deleted and a later-added file reclaimed its metadata slot.
Some filesystems might use tree structures or something else in their internal design that might enforce a particular order for their directory entries as a side effect. But such an ordering might be based on inode numbers of the files or some other filesystem-internal detail, and so would not be guaranteed to be useful for humans for any practical purpose.
As @A.B said in the question comments, a find command, or ls -f, or ls --sort=none, would list the files without any explicit sorting, in whatever order the filesystem stores its directories.
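The difference between the raw directory order and the sorted view is easy to see locally (the unsorted order is whatever readdir() returns, so it varies by filesystem and history):

```shell
# ls sorts for display; ls -f shows the raw readdir() order
mkdir -p order-demo
touch order-demo/b order-demo/c order-demo/a
ls order-demo       # sorted: a b c
ls -f order-demo    # raw order (plus . and ..), often insertion order
```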
| In what order does sftp fetch files when using "get -r folder"? |
1,302,715,144,000 |
I can transfer file /tmp/file using rsync to remote server:
rsync -R -av /tmp/file root@server:/
but how can I provide the list of files to be transferred from a pipe? I tried using the --files-from= option with /dev/stdin, but that does not work:
echo /tmp/file | rsync -R -av --files-from=/dev/stdin root@server:/
(neither does it work with regular file)
How can I make rsync read from a pipe, so that I can use the output of find, for example?
|
You still need to specify both source and target arguments to rsync, even when you're reading pathnames from a file:
echo /tmp/file | rsync -av --files-from=- / user@server:/
The pathnames read by rsync would be relative to the / source directory. The -R option is implied when using --files-from, and standard input may be specified with -.
See also
man rsync
Why is looping over find's output bad practice?
You may want to ask a separate question about your intention to pass output from find to rsync using --files-from. There is probably a much safer way of doing what you intend to do. For example, you may want to use -print0 with find and pair that up with --from0 in rsync. See the manuals for rsync and find on your system.
| rsync: read list of files from pipe |
1,302,715,144,000 |
I want to let netcat on my server execute a script that works on a file that has just been sent and have the output of this script be sent as the response to the client. My approach is:
On the receiving side:
nc.traditional -l -p 2030 -e "./execute.sh" > file.iso
On the sending side:
cat file.iso - | nc.traditional -w 1 serverAddress 2030
Right now, the receiving side executes the script before the file has been transferred completely, but it sends the output of the script back to the sending side and then closes the connection. I'd like the receiving side to wait until the file has been completely transferred before executing the script.
|
You need some way for the receiving end to recognize the end of the transferred file. With cat file - | nc on the sending side, the data stream through the pipe will make no separation between the contents of the file and whatever the user types on the terminal (cat - reads the terminal). Also, by default, netcat doesn't react to an EOF on its input, but we can use -q to have it send the EOF along.
So, the receiving script could be something like this:
#!/bin/bash
filename=$1
cat > "$filename"
hash=$(shasum "$filename" |sed -e 's/ .*//')
printf "received file \"%s\" with SHA-1 hash %s\n" "$filename" "$hash"
The cat reads the input until EOF, saving the file. Whatever follows is executed after the file is received. Here, the script sends back the SHA-1 hash of the data it received.
Then, on the receiving side, run:
$ nc.traditional -l -p 12345 -c "./receivefile.sh somefilename"
and on the sending side:
$ cat sourcefile | nc.traditional -q 9999 localhost 12345
received file "somefilename" with SHA-1 hash 3e8a7989ab68c8ae4a9cb0d64de6b8e37a7a42e5
The script above takes the file name as an argument, so I used -c instead of -e, but of course you can also redirect the output of nc to the destination file, as you did in the question.
If you want to send the filename from the sending side too, you could do something like this on the receiving side:
#!/bin/bash
read -r filename
[[ $filename = */* ]] && exit 1 # exit if the filename contains slashes
cat > "$filename"
echo "received $filename"
and then send with (echo filename; cat filename) | nc -q 9999 localhost 12345. (You may want to be way more careful with the remote-supplied filename, here. Or just use something designed for file transfer, like scp.)
Instead of using nc, you could do this similarly with SSH:
cat filename | ssh user@somehost receive.sh
| How to let Netcat execute command after file transfer is complete? |
1,302,715,144,000 |
Summary:
I can't understand a peculiar discrepancy in network transfer.
Why is there such a discrepancy in syncing from one machine to the other and vice versa?
Also:
Given the maximum network transfer speed is about 110 M/sec, and the local disk speed for a similar operation is about 200 M/sec (so, no bottleneck there), why is there a much lower speed rsyncing between the two machines, than the theoretical 100M/sec?
Details:
First of all, server is
# uname -a
FreeBSD das 10.1-RELEASE-p8 FreeBSD 10.1-RELEASE-p8 #25 d0fb866(releng/10.1): Thu Nov 13 07:57:26 EST 2014 root@bellicose:/usr/obj/root/pcbsd-build-10-STABLE/git/freebsd/sys/GENERIC amd64
Client is:
# uname -a
Darwin compute.internal 13.4.0 Darwin Kernel Version 13.4.0: Sun Aug 17 19:50:11 PDT 2014; root:xnu-2422.115.4~1/RELEASE_X86_64 x86_64
Both machines have 16GB of ram.
By doing a local rsync on the server, one kind of knows what to expect as disk speed, at least in that circumstance.
Using a binary file, test.bin, 732M, local rsync on FreeBSD server shows about 200 M/sec:
# rsync --stats -h test.bin copy.bin
[....]
sent 732.54M bytes received 35 bytes 209.30M bytes/sec
total size is 732.36M speedup is 1.00
That is about 200 M/sec.
On the mac mini client I have almost 70M/sec:
# rsync --progress --stats -h test.bin copy.bin
test.bin
732.36M 100% 70.06MB/s 0:00:09 (xfr#1, to-chk=0/1)
[....]
sent 732.54M bytes received 35 bytes 69.77M bytes/sec
total size is 732.36M speedup is 1.00
Now, doing a network speed test with iperf:
On the server (the FreeBSD server):
# iperf -f M -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 0.06 MByte (default)
------------------------------------------------------------
[ 4] local 192.168.1.5 port 5001 connected with 192.168.1.226 port 50757
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-30.0 sec 3356 MBytes 112 MBytes/sec
On the client (OS X mac mini):
# iperf -f M -M 9000 -c 192.168.1.5 -t 30 -w 80K
WARNING: attempt to set TCP maxmimum segment size to 9000 failed.
Setting the MSS may not be implemented on this OS.
------------------------------------------------------------
Client connecting to 192.168.1.5, TCP port 5001
TCP window size: 0.08 MByte (WARNING: requested 0.08 MByte)
------------------------------------------------------------
[ 4] local 192.168.1.226 port 50757 connected with 192.168.1.5 port 5001
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-30.0 sec 3356 MBytes 112 MBytes/sec
So, I could assume that network connection (a straight cat 7 cable between the two nics) is about 110 M/sec.
Now, this is the puzzling situation:
If I rsync from the FreeBSD server to the mac mini, I get a transfer speed of about 50 M/sec:
# rsync --progress --stats -h test.bin -e "ssh -l gsl" '192.168.1.226:/tmp/'
Password:
test.bin
732.36M 100% 57.10MB/s 0:00:12 (xfr#1, to-chk=0/1)
[....]
sent 732.45M bytes received 46 bytes 50.51M bytes/sec
total size is 732.36M speedup is 1.00
But rsync in the opposite direction gives a much lower transfer rate, 20M/sec:
# rsync --progress --stats -h test.bin -e "ssh -l gsl" '192.168.1.6:/tmp/'
test.bin
732.36M 100% 19.55MB/s 0:00:35 (xfr#1, to-chk=0/1)
[....]
sent 732.54M bytes received 35 bytes 20.07M bytes/sec
total size is 732.36M speedup is 1.00
My two questions:
Why is there such a discrepancy in syncing from one machine to the other and vice versa?
Also:
Given the maximum network transfer speed is about 110 M/sec, and the local disk speed for a similar operation is about 200 M/sec (so, no bottleneck there), why is there a much lower speed rsyncing between the two machines, than the theoretical 100M/sec?
Could somebody please help understand this, perhaps offering some advice on how to improve the transfer speed?
Update: Based on @dhag's answer, I tried to copy a file with netcat, i.e., using no encryption:
On the "server" (pushing) side:
time cat test.bin | nc -l 2020
nc -l 2020 0.25s user 6.29s system 77% cpu 8.462 total
On the receiving side (FreeBSD):
time nc 192.168.1.226 2020 > test.bin
nc 192.168.1.226 2020 > test.bin 0.09s user 4.00s system 62% cpu 6.560 total
If I am not mistaken, that should mean 732M/6.29s = 117M/sec, which kind of exceeds the iperf stats. Perhaps a caching issue?
Update 2: Using rsync with no encryption at all (only possible if using a daemon, and the rsync:// protocol):
# rsync --progress --stats -h test.bin rsync://[email protected]/share
test.bin
732.36M 100% 112.23MB/s 0:00:06 (xfer#1, to-check=0/1)
[....]
sent 732.45M bytes received 38 bytes 112.69M bytes/sec
total size is 732.36M speedup is 1.00
This also confirms @dhag's ideas.
|
I can only provide a guess, which is that the discrepancy is explained by varying computational, memory, caching or disk characteristics of the two hosts:
If we assume that CPU is a bottleneck, then it would make some sense if the slower machine were slower at sending (this assumes that encrypting is more computationally heavy than decrypting). This can be tested by switching to a cipher that is lighter to compute (try adding -c arcfour to your SSH command line; in this case, passing --rsh="ssh -c arcfour" to rsync).
If we assume that files are being read / written straight from / to the disk, then the disk could conceivably be a bottleneck; read speeds of 100 MBps are well within reach of more modern computers, but not of older ones, or computers running on laptop-class drives (such as, I believe, the Mac Mini).
If we further assume that the operating system uses filesystem caches, the situation could be complicated further:
If the source file is contained in the filesystem cache, in RAM, then it can be read much faster than 100 MBps;
if the target system applies write-caching and is able to fit a significant part of the file in RAM (in your case it should be, since RAM is much bigger than your test file), then it can claim the write is complete before it has actually reached the disk (this could mean that your measured 200 MBps is really the speed of the cache, not of the disk).
The disk-versus-cache unknown can be tested by flushing the filesystem cache prior to reading (how to do so is OS-dependent): then sending the file will be at least as slow as the disk dictates. Conversely, by reading the file completely before sending (perhaps with cat file.bin >/dev/null), one can influence the OS into caching it.
To further investigate whether CPU is an issue, it would make sense to run top while the transfer is ongoing; if the rsync or ssh process is taking 100% of a core, then that would be a bottleneck.
| Understanding why such a discrepancy in network transfer? |
1,302,715,144,000 |
I have around 50 gigabytes that I would like to move. I want to do it over TCP/IP (hence network in the title) optimized for a local area network. My problem is that the connection occasionally gets interrupted and I never seem to get all of the data reliably to its destination. I'd like this thing to
not give up so easily
keep retrying automatically (assuming that both machines are powered up).
My approach would be to use rsync.
SOURCE=/path/to/music/ # trailing slash excludes the "music" dir itself
DESTINATION=/path/to/destination

# --archive          archive mode; equals -rlptgoD (no -H,-A,-X)
# --compress         compress file data during the transfer
# --progress         show progress during transfer
# --partial          keep partially transferred files after an interrupt
# --human-readable   output numbers in a human-readable format
rsync \
    --archive \
    --compress \
    --progress \
    --partial \
    --human-readable \
    "$SOURCE" \
    "$DESTINATION"
Are there other parameters that I should consider?
|
Rsync Parameters
It would seem that my rsync parameters are fine.
I had to add a parameter to deal with files that exist after a connection failure. The choices were --ignore-existing or --update to avoid rewriting things already written. I am still not sure which one is better (perhaps someone knows), but in this case I went with --update after reading https://askubuntu.com/questions/399904/rsync-has-been-interrupted-copy-from-beginning
Compare:
--update skip files that are newer on the receiver
--ignore-existing skip updating files that already exist on receiver
Connection Interruptions
The connection problem conundrum (flaky wifi etc.) was solved by continually calling rsync when an exit code is not zero, thereby forcing my process to continue until the transfer is a success. (unless I cut the power, lightning strikes my power lines, or I kill it using a signal)
To handle network disconnects, I used a while loop.
while [ 1 ]
do
# STUFF
done
while [ 1 ] has a caveat: using a signal like ctrl c for an interrupt (SIGINT) will not work unless you add an explicit check for any exit codes above 128 that calls break.
if [ "$?" -gt 128 ] ; then break
then you can check for rsync's exit code. Zero means all files have been moved.
elif [ "$?" -eq 0 ] ; then exit
Otherwise, the transfer is not complete.
else sleep 5
Script Example sync-music.sh
The rsync script assumes ssh passwordless key authentication.
#!/bin/bash
SOURCE="/path/to/Music/"
DESTINATION="[email protected]:/media/Music"
while [ 1 ]
do
    rsync -e 'ssh -p22' \
--archive \
--compress \
--progress \
--partial \
--update \
--human-readable \
"$SOURCE" \
"$DESTINATION"
    rsync_exit="$?"   # capture rsync's status now; the tests below would overwrite $?
    if [ "$rsync_exit" -gt 128 ] ; then
        echo "SIGINT detected. Breaking while loop and exiting."
        break
    elif [ "$rsync_exit" -eq 0 ] ; then
        echo "rsync completed normally"
        exit
    else
        echo "rsync failure. reestablishing connection after 5 seconds"
        sleep 5
    fi
done
| How can I move (rsync) a huge quantity of data reliably that can handle network interruptions?
1,302,715,144,000 |
I use Fedora 13 and very recently I brought a new Apple iPod shuffle. I would like to know whether I can transfer music into my iPod without using iTunes. I tried using gtkpod and RhythmicBox, but that is of no avail.
|
I'm not sure about the current state of things, but Apple is known for playing cat and mouse. You may find one day that an iPod software update has broken compatibility with Linux by completely redesigning the iPod's database format.
That lasts until someone reverse-engineers the new format and supplies patches to the relevant projects, and only until Apple decides to switch formats again.
In short: an iPod is not the best player for a Linux enthusiast, but since you already have one, you may well be able to use it.
P.S. Banshee also has iPod support.
| Configuring iPod on Linux |
1,302,715,144,000 |
I tried to copy a large video from my server to my local device with
rsync -aP remotefile.mov localfile.mov
But the local file does not show up unless I stop the rsync process.
I can then watch the partial video with no problem in VLC.
How can I watch it while still rsyncing?
|
You can use the option
--inplace
This option changes how rsync transfers a file when its data needs to be updated: instead of the default
method of creating a new copy of the file and moving it into place when it is complete, rsync instead writes
the updated data directly to the destination file.
so this will copy whith you being able to watch it while copying:
rsync -aP --inplace remotefile.mov localfile.mov
Also worth mentioning here would be a transferlimit if you don't want to use up all the upload trafic on your server with:
--bwlimit=KBPS
And when copying folders, copy small files first with
--max-size=10m
I would use this altogether (capping at 7.5 MB/s of my 10 MB/s bandwidth):
rsync -aP --max-size=10m --inplace [email protected]:/pathto/remotefolder/ localfolder/
rsync -aP --bwlimit=7.5m --inplace [email protected]:/pathto/remotefolder/ localfolder/
see explainshell.com
| rsync and watch partially transferred video file |
1,302,715,144,000 |
I was using a script to copy the contents of a folder via SCP, without copying the folder itself. Something like this:
scp -i id_rsa -P "$PORT" -r "$HOST:/folder1/folder2/." "backup"
(I'm not able to use * because I want to include dot files, too.)
This has recently stopped working and I'm getting the following error:
scp: error: unexpected filename: .
I think the cause for this are these changes to SCP from November 2018.
Does this mean I'm no longer able to copy contents of a folder via SCP without copying the folder itself?
|
I think your interpretation is correct. It was probably an undocumented feature, removed from the undocumented api (see the web archive of the protocol). One workaround is for you to create a symbolic link in the backup directory before the copy.
ln -s . folder2
| Copying contents of a folder via SCP results in `scp: error: unexpected filename: .` |
1,302,715,144,000 |
Someone else is giving me a dataset that is too large to send via email, Dropbox, etc., so I'm thinking we can use sftp or scp. But how will she be able to do this without my giving her the password to my machine? This is a one-time transfer of data, so I'd rather not go through a lot of trouble -- if it's too much work I'll just give her my password and then change it when she's done transferring the files.
|
Is your machine accessible over the Internet?
The first hurdle is that your machine may not be accessible over the Internet at all!
Most client machines cannot be accessed directly over the Internet because they don't have a public IP address. It's like having a phone that can call out, but can't be called. This came about mainly because there's a limited supply of IP addresses; unless your ISP supports IPv6 or you have a very atypical configuration, you have a single IP address at home, and that's the address of your home router. Your computers can make outgoing connections because the home router provides NAT functionality.
Most home routers can be configured to allow incoming connections to be routed to a particular machine on the local network. To allow incoming SSH connections, route port 22 to your computer. See your router's documentation for how to do this.
If you're unlucky and your ISP doesn't give a public IP address, you won't be able to make incoming connections. To check whether you have a public IP address, connect to your router's administrative interface and check whether its external address is in the private range (internal addresses are in the private range except in atypical configurations).
Giving shell or file access to your machine
The (relatively) easy way to give someone access to your machine is to create a user account for them. With an ordinary user account, they'll be able to see a lot of things, but they won't be able to modify your files (unless you went out of your way to make them world-writable), and they won't be able to see the files that are in a private directory (drwx------ permissions).
For better security, configure the account to be usable only to manipulate files in a particular directory over SFTP. This is a bit more difficult (I kind of expected OSX to provide an easy-to-use GUI for that, but apparently not); see Create a remote only user in OS X? or How to set up an SFTP server on a Mac & then enable a friend to upload files to it from their iPhone, iPad, or other iDevice for instructions.
You'll need to enable remote access. There is an OSX knowledge base entry for that. Enable only the one user who is supposed to have remote access. Do not enable remote access for an account that may have a weak password!
Set a random password on the account and tell them to copy-paste it and save it in a file. Don't expose a machine with weak, human-chosen passwords to the Internet. You can use the following command to generate the password:
</dev/urandom tr -dc A-Za-z0-9 | head -c 16; echo
Transfering files piece by piece
So yeah, sending files over the Internet is still difficult.
The low-tech solution is to use one of the many file sharing websites. They make their money through ads, so don't even think of visiting one without an ad blocker, and be very careful where you click because they're likely to try to serve you malware. After downloading a file, check that it's the right file: calculate its SHA-2 checksum with
shasum -a 256 /path/to/file
on OSX, sha256sum /path/to/file on Linux.
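For instance, hashing a short known input; the exact digest isn't important here, only that both sides produce the same 64-hex-character value:

```shell
printf 'hello\n' | sha256sum
# prints a 64-character hex digest, two spaces, then "-" (for stdin)
```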
| Transferring files to someone else via sftp |
1,302,715,144,000 |
I have a Canon EOS 350D which has a bent pin in the CF slot. Therefore I do not want to take out the card more often then needed now, retrieving the images via the mini-USB port should be possible, in principle.
There are two options I can set the camera to:
Print/PTP
PC Connection
See this screenshot of the camera menu:
Neither option lets anything appear in Dolphin (file manager) or /dev. How can I retrieve the images from my camera?
|
You could try to use Digikam. With Digikam you can import from a variety of cameras, the 350D should be supported.
| Retrieve photos via USB from Canon EOS 350D |
1,302,715,144,000 |
I have the string xyz which is a line in file1.txt, I want to copy all the lines after xyz in file1.txt to a new file file2.txt. How can I achieve this?
I know about cat command. But how to specify the starting line?
|
Using GNU sed
To copy all lines after xyz, try:
sed '0,/xyz/d' file1.txt >file2.txt
0,/xyz/ specifies a range of lines ending with the first occurrence of a line matching xyz; the GNU-specific start address 0 allows the range to end even when the match is on the very first line. d tells sed to delete those lines.
Note: For BSD/MacOS sed, one can use sed '1,/xyz/d' file1.txt >file2.txt but this only works if the first appearance of xyz is in the second line or later. (Hat tip: kusalananda.)
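The difference shows up when xyz sits on the very first line. A small demonstration with GNU sed:

```shell
printf 'xyz\nc\nd\n' | sed '0,/xyz/d'   # GNU 0 address: deletes only the matching line 1
# c
# d
printf 'xyz\nc\nd\n' | sed '1,/xyz/d'   # /xyz/ is searched from line 2 on; no match,
# (no output)                           # so the range runs to end-of-file
```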
Another approach, as suggested by don_crissti, should work for all sed:
{ printf %s\\n; cat file1.txt; } | sed '1,/xyz/d' >file2.txt
Example
Consider this test file:
$ cat file1.txt
a
b
xyz
c
d
Run our command:
$ sed '0,/xyz/d' file1.txt >file2.txt
$ cat file2.txt
c
d
Using awk
The same logic can used with awk:
awk 'NR==1,/xyz/{next} 1' file1.txt >file2.txt
NR==1,/xyz/{next} tells awk to skip over all lines from the first (NR==1) to the first line matching the regex xyz. 1 tells awk to print any remaining lines.
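Run against the same test file as in the sed example, the awk version produces identical output:

```shell
printf 'a\nb\nxyz\nc\nd\n' > file1.txt
awk 'NR==1,/xyz/{next} 1' file1.txt
# c
# d
```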
| How to copy the rest of lines of a file to another file [duplicate] |
1,302,715,144,000 |
I’ve made a script that would work on rhel distros and forks. It’s for personal use to automatically download repositories and software that I use. When I make the script executable on the host machine I can right click on the script and choose run as a program. When I copy the script to a flash drive and then copy it from a flash drive to another computer running the same operating system I have to make it executable again to give back the function to right click and run as a program. There are obvious workarounds to still use the script but being able to right click and run as a program is the most streamlined and useful for what my script is doing. So how do I make my script keep that functionality when I transfer it to another pc via usb?
|
When I copy the script to a flash drive and then copy it from a flash drive to another computer running the same operating system I have to make it executable again to give back the function to right click and run as a program.
Execute permissions are not preserved when you copy files to and from the flash drive because the file system on the drive does not support unix-style permissions. Most likely, the flash drive is formatted with exFAT or vFAT.
Potential solutions:
Format the drive with a Linux file system, like Ext2/3/4 or XFS. There too many to list all of them here. This is the only viable solution if you want to run the script directly from the USB drive.
Use a container that supports Linux permissions, like tar, to hold the file while it is on the drive. zip also supports Linux permissions to an extent. 7z does not.
Bypass the USB drive by transferring the files over the network, using tools like scp and rsync.
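A minimal sketch of option 2; the filenames are placeholders, and tar's -p flag asks for permissions to be preserved on extraction:

```shell
# on the source machine: pack the script with its mode bits intact
chmod +x myscript.sh
tar -cpf transfer.tar myscript.sh    # put transfer.tar on the flash drive

# on the destination machine: unpack; the execute bit comes back with it
tar -xpf transfer.tar
ls -l myscript.sh                    # owner still has rwx
```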
| How to make my script stay executable on different devices? |
1,302,715,144,000 |
I would like to request information on using rsync. I tried reading the manuals, but the examples are few and confusing for me.
I do not need advanced features or live sync or remote sources or remote destinations. Everything is with ext4. Just using my laptop's HDD and an external HDD over USB. On Ubuntu.
My ultimate object is to move the contents of my /home to an external drive. Wipe my laptop, switch it to LVM, re-install Ubuntu, update, install same programs I had before, then boot a live USB and copy the contents of my backed up /home (now on my external HDD) onto the /home of the new installation (installed with same username and UID as last time).
I would like to keep all permissions and ownership the same.
I tried copy-pasting everything onto the external drive, but I got error messages. I know that doing a copy-paste from the GUI on a live USB will change everything to root ownership (which would be double plus not good).
I see all of these flags in the man page ... and all I understand is
rsync /home/jonathan /media/jonathan/external-drive/home/jonathan
from
rsync /source/file/path /destination/file/path
I already use this hard drive to back up most folders and big files like Movies, etc. Is there a way to copy what I want while preserving permissions, only adding the hitherto-ignored .config files and only updating files that have changed? I would like to be able to do this manually about once a week to back up settings AND my personal files in case I ever need to reinstall in an emergency or my hard drive fails.
|
Here is a quick rsync setup and what it does.
rsync -avz /home/jonathan /media/jonathan/external-drive/home/jonathan
This will recursively copy the files, preserve attributes permissions ownership etc. from /home/jonathan to the external folder.
For safekeeping, you could also create a tar archive to bundle everything together and then send a single file over.
tar zcvf /media/jonathan/external-drive/home/jonathan/jonathansFiles.tgz /home/jonathan
then uncompress later.
tar zxvf jonathansFiles.tgz
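Before wiping the laptop, it's worth confirming the archive is actually readable; tar's t mode lists the contents without extracting anything:

```shell
tar ztvf /media/jonathan/external-drive/home/jonathan/jonathansFiles.tgz | head
```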
| Using rsync to back up /home folder with same permissions, recursive |
1,462,812,663,000 |
I'm trying to write a simple alternative script for uploading files to the transfer.sh service. One of the examples on the website mentions a way of uploading multiple files in a single "session":
$ curl -i -F filedata=@/tmp/hello.txt \
-F filedata=@/tmp/hello2.txt https://transfer.sh/
I'm trying to make a function that would take any number of arguments (files) and pass them to cURL in such fashion. The function is as follows:
transfer() {
build() {
for file in $@
do
printf "-F filedata=@%q " $file
done
}
curl --progress-bar -i \
$(build $@) https://transfer.sh | grep https
}
The function works as expected as long as no spaces are in the filenames.
The output of printf "-F filedata=@%q " "hello 1.txt" is -F filedata=@hello\ 1.txt, so I expected the special characters to be escaped correctly. However, when the function is called with a filename that includes spaces:
$ transfer hello\ 1.txt
cURL does not seem to interpret the escapes and reports an error:
curl: (26) couldn't open file "test\"
I also tried quoting parts of the command, e.g. printf "-F filedata=@\"%s\" " "test 1.txt", which produces -F filedata=@"test 1.txt".
In such case cURL returns the same error: curl: (26) couldn't open file ""test". It seems that it does not care about quotes at all.
I'm not really sure what causes such behaviour. How could I fix the function to also work with filenames that include whitespace?
Edit/Solution
It was possible to solve the issue by using an array, as explained by @meuh. A solution that works in both bash and zsh is:
transfer () {
if [[ "$#" -eq 0 ]]; then
echo "No arguments specified."
return 1
fi
local -a args
args=()
for file in "$@"
do
args+=("-F filedata=@$file")
done
curl --progress-bar -i "${args[@]}" https://transfer.sh | grep https
}
The output in both zsh and bash is:
$ ls
test space.txt test'special.txt
$ transfer test\ space.txt test\'special.txt
######################################################################## 100.0%
https://transfer.sh/z5R7y/test-space.txt
https://transfer.sh/z5R7y/testspecial.txt
$ transfer *
######################################################################## 100.0%
https://transfer.sh/4GDQC/test-space.txt
https://transfer.sh/4GDQC/testspecial.txt
It might be a good idea to pipe the output of the function to the clipboard with xsel --clipboard or xclip on Linux and pbcopy on OS X.
The solution provided by @Jay also works perfectly well:
transfer() {
printf -- "-F filedata=@%s\0" "$@" \
| xargs -0 sh -c \
'curl --progress-bar -i "$@" https://transfer.sh | grep -i https' zerop
}
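That null-delimited handoff can be checked without touching the network; here printf stands in for curl to show how xargs -0 splits the arguments:

```shell
printf -- "-F filedata=@%s\0" "test space.txt" "plain.txt" \
    | xargs -0 sh -c 'printf "[%s]\n" "$@"' argv0
# [-F filedata=@test space.txt]
# [-F filedata=@plain.txt]
```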
|
One way to avoid having bash word-splitting is to use an array to carry each argument without any need for escaping:
push(){ args[${#args[*]}]="$1"; }
build() {
args=()
for file
do push "-F"
push "filedata=@$file"
done
}
build "$@"
curl --progress-bar -i "${args[@]}" https://transfer.sh | grep https
The build function creates an array args and the push function adds a new value to the end of the array. The curl simply uses the array.
The first part can be simplified, as push can also be written simply as args+=("$1"), so we can remove it and change build to
build() {
args=()
for file
do args+=("-F" "filedata=@$file")
done
}
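Printing the array one element per line shows that a filename containing a space stays a single argument (bash assumed):

```shell
build() {
    args=()
    for file
    do args+=("-F" "filedata=@$file")
    done
}

build "hello 1.txt" "plain.txt"
printf '<%s>\n' "${args[@]}"
# <-F>
# <filedata=@hello 1.txt>
# <-F>
# <filedata=@plain.txt>
```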
| posting data using cURL in a script |
1,462,812,663,000 |
The main reason I want this is my heavy use of dircolors, especially for ls --color=auto.
For example, whenever a .mp3 file is copied from NTFS, it will have permissions set by umask 022, which ought to be the standard value in most modern distros.
However, for audio files this makes no sense: because their permissions get set to 755 (rwxr-xr-x), they get the same color as an executable shell script, while I'd really like to have this color reserved for true executables. This is not Windows; even with the x permission set for owner/group/other, you cannot expect ./track1.mp3 to work in a terminal and make it pick a default console player.
So I'd like to have a certain umask ONLY for audio files, i. e. that any files like .mp3, .wav, .ogg and so on would always get set their mode to 644, while leaving all other files copied to this place at their default umask of 022.
Is there any way to accomplish this?
(NOTE: cp --preserve will NOT preserve original permissions on NTFS either, since NTFS is notoriously ignorant about *NIX permission systematic.)
|
I would use the install tool to copy from NTFS.
install -m644 file1 ... fileN destination_directory
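The resulting mode can be verified with stat (GNU coreutils assumed); the filenames here are placeholders:

```shell
mkdir -p music
touch track1.mp3 && chmod 755 track1.mp3   # pretend this came off the NTFS mount
install -m644 track1.mp3 music/
stat -c '%a' music/track1.mp3
# 644
```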
| Setting correct permissions automatically for certain file type when file is copied from non-Linux file system |
1,462,812,663,000 |
I need to find a way to copy files from mymachine to a server priv-server sitting on a private NATted network via a server pub-server with a public IP. The behind-NAT machine priv-server only has certs for user@mymachine, so the certs need to be forwarded from mymachine via pub-server to priv-server
So in order to log on with SSH with just one command, I use:
$ ssh -tA user@pub-server 'ssh user@priv-server'
— this works perfectly well. The certs are forwarded from mymachine to priv-server via pub-server, and all is set up nicely.
Now, I'd normally use scp for any file transfer needs but I'm not aware of a way to pass all of the tunneling information to scp.
|
Instead use a more low level form of copying files by catting them locally, and piping that into a remote cat > filename command on priv-server:
$ cat file1.txt | ssh -A user@pub-server 'ssh user@priv-server "cat > file1.txt"'
or with compression:
$ gzip -c file1.txt | ssh -A user@pub-server 'ssh user@priv-server "gunzip -c > file1.txt"'
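The compression leg of that pipeline can be sanity-checked locally; cmp confirms the round trip is byte-identical:

```shell
printf 'some database contents\n' > file1.txt
gzip -c file1.txt | gunzip -c | cmp - file1.txt && echo 'round trip OK'
# round trip OK
```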
Outtake from man ssh:
-A Enables forwarding of the authentication agent connection. This can also be specified on a per-host basis in a configuration file.
-t Force pseudo-tty allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g. when implementing menu services. Multiple -t options force tty allocation, even if ssh has no local tty.
I initially wasn't aware of an answer, but after a good night's sleep and writing this question, I saw a problem with the command I was trying initially, fixed it, and it worked. But as this seems like a useful thing to do, I decided to share the answer.
| Copying a file using SSH over a tunnel with cert forwarding |
1,462,812,663,000 |
I often transfer large files from a remote server using rsync with the following command :
rsync -rvhm \
--progress \
--size-only \
--stats \
--partial-dir=.rsync-partial \
"user@server::module/some_path" "some_path"
That way, even if the transfer fails, I can resume it later and I know that I'll only have complete files in some_path on the destination, since all partial transfers will stay in some_path/.rsync-partial.
When a transfer resumes, rsync first checks the partial one to determine where exactly to resume (I guess) and I'm fine with that. The problem is that when it's done with this check, the partial file gets copied outside of the .rsync-partial folder for the resume. Therefore, I'm left with a partial transfer (that will be replaced or deleted at the next pause or when the transfer finishes) along with the ongoing one.
This is inconvenient since :
I don't have much free space on the destination ;
The files are quite large ;
If the partial transfer went "far enough", I might not be able to resume it since rsync will try to copy it first and will complain that there isn't enough space available to resume ;
There is no reason that I can think of to keep a copy of the partial file to resume the transfer : the partial file itself should be used.
Is there a way to avoid this behavior or is this by design ? And if so, why would we want it to work this way ?
|
It doesn't look like this is possible yet although there is a patch available allowing the use of the --inplace option in conjunction with --partial-dir to avoid this copy.
Refer to Bug 13071 for further details but from the description:
If --inplace is used with --partial-dir, or with any option implying it (--delay-update), the inplace behavior will take place in the partial directory, each file being moved to its final destination once completed.
Unfortunately, so far this patch has not been applied.
| rsync keeps previous partial file when resuming |
1,462,812,663,000 |
Using scp command I want to move files from local system to a remote system. I'm doing something like this:
$ scp file1 root@abc:root /root/tmp
With this command I'm able to upload file1 to abc:/root, but the problem is that it changes the names to tmp like in my case, I want to keep the name the same as the original and just copy it to the folder.
How can I do this?
|
Do this:
$ scp file1 root@abc:/root/tmp/
This would also work:
$ scp file1 root@abc:~/tmp/
If the directory /root/tmp isn't on the remote system abc, you can do this, and rsync will create the remote directory for you:
$ rsync -ave ssh file1 root@abc:/root/tmp
Lastly if you have to use ssh you can do this:
$ cat file1 | ssh root@abc "mkdir /root/tmp; cat >> /root/tmp/file1"
| using scp command to transfer files keeping the same names intact? |
1,462,812,663,000 |
I am running a 32-bit Linux virtual machine on KVM. The host machine is a 64-bit Linux machine connected to a LAN. Attempting to transfer files with scp from the KVM machine to a server on the LAN gives abysmal performance, about 500kB/s over gigabit Ethernet. Around 1% of the expected rate. Any suggestions?
|
Consider using virtio. It allows a direct connection between the VM and the host without the need to emulate (slow) hardware. I measured substantial network performance improvements with it.
For example, you can enable the virtio network device by the kvm command line parameter "-net nic,model=virtio".
If you are using the virtio block devices, please note that the new device names are then "vda1" etc., but this should be not a problem since current Linux distributions detect the partitions according to their UUIDs.
| Poor network performance from KVM virtual machine |
1,462,812,663,000 |
I have to transfer a 400Gb database consisting of a single file over the Internet from a server where I have full control to an other computer at the opposite border of the ocean (but which uses a slow connection). The transfer should take a full week and in order to reduce all protocol overhead (even using ftp would had 10Min overhead), I’m using a raw tcp connection.
I already transferred 10% of the file, but I learned there will be a scheduled outage in some hours.
How can I ask netcat or socat or curl to serve the FILE:database.raw at the offset where it was interrupted?
|
If your command, as stated in the comments is:
socat -u FILE:somedatabase.raw TCP-LISTEN:4443,keepalive,tos=36
you can, on the sending side, do a seek and start serving from there:
socat -u gopen:somedatabase.raw,seek=1024 TCP-LISTEN:4443,keepalive,tos=36
on the receiving side you also need to seek:
socat tcp:example.com:4443 gopen:somedatabase.raw,seek=1024
Check the manpage for socat, there are other options you might be interested in.
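The resume offset is simply the size of the partial file on the receiving side; stat can supply it (GNU stat shown; BSD/macOS stat uses -f %z instead):

```shell
head -c 40000 /dev/zero > somedatabase.raw   # stand-in for the partial file
offset=$(stat -c '%s' somedatabase.raw)
echo "$offset"
# 40000

# then plug it into both ends (hosts are placeholders, not run here):
#   sender:   socat -u gopen:somedatabase.raw,seek="$offset" TCP-LISTEN:4443,keepalive,tos=36
#   receiver: socat tcp:example.com:4443 gopen:somedatabase.raw,seek="$offset"
```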
| How to resume a file transfer using netcat or socat or curl? |
1,462,812,663,000 |
I have a file on a remote server that I want to transfer to my android device over ssh, only using the android device in the process.
Using this setup, I tried an scp from the android device
scp remote_user@remote_host:file file
After being prompted for the password I got permission denied.
I then tried to transfer it from the remote server
scp -P 2222 file root@SSHDroid-ip:/mnt/extSdCard/file
Without being prompted for the password I now get the message that the network (of the android device) is unreachable: lost connection.
Is this a permission problem? I have transferred files over ssh from the remote server before, so I suppose the problem is on the side of the android device.
Edit.
I can transfer the file, from the remote server to the android device via scp, to the home path of the SSHDroid server on the android device. This home path is very cumbersome and deep, and can not be reached with the regular android API of the device.
So I can transfer it to the home path of the SSHDroid server, but not to the path of my SD card on the android device. Where can I change/check the permission settings of the android device?
|
Physically go to the remote_host and change the file owner to remote_user.
sudo chown remote_user /path/to/file
Then you should have permissions to copy the file.
| Using scp to transfer files to an android device |
1,462,812,663,000 |
When running ifconfig from hour to hour, I notice that the RX/TX byte counters reset:
RX bytes:921640934 (921.6 MB) TX bytes:4001470884 (4.0 GB)
How come? I would like to keep track how much data i transfer from day to day but they keep resetting.
|
Seems like counters are 32bit integers so they "wrap around" at ~4GB.
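The wrap point is 2^32 bytes. On Linux, the per-interface counters under /sys are a common alternative (interface name is a placeholder; on 64-bit kernels they are 64-bit and effectively never wrap):

```shell
echo $(( 4096 * 1024 * 1024 ))   # 4294967296: where a 32-bit byte counter wraps
# cat /sys/class/net/eth0/statistics/rx_bytes   # raw byte counter from sysfs
```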
| Somehow the rx/tx-counters on the interface reset
1,462,812,663,000 |
I want to transfer gigabytes of data from server A to B. They are in different networks, but both reachable over SSH. I am in another network (neither A nor B's) so instead of tunneling through my system I'd prefer to do the transfer server to server. How can I best do this over an encrypted channel like SSH?
The simplest option might be an encrypted tar pipe (tar | openssl enc | netcat), but it doesn't feel very neat.
Another option is to temporarily add a user and use that for an ssh pipe (tar | ssh), or temporarily add an authorized key to an existing user and do the same thing. This does allow for a race condition, even if the odds are negligible that one of the servers has malware that is waiting for specific conditions such as these. It also doesn't feel entirely clean.
What is the best way for such a one-off data transfer?
|
If I understand correctly, you want to establish an SSH connection between A and B, let's say from A to B. It is possible to establish a TCP connection from A to B (B isn't behind a firewall that makes this impossible), but you're concerned that allowing a user from A to log in to B might allow a security breach on B if A isn't trustworthy.
OpenSSH provides a simple solution for that. In ~/.ssh/authorized_keys, you can restrict a key to be valid to execute one command only. Create a new key pair on A:
serverA$ ssh-keygen -N '' -f ~/.ssh/copy_to_B.id_rsa
serverA$ cat ~/.ssh/copy_to_B.id_rsa.pub
Take the generated public key and add a forced command at the end of the line. Also add the option restrict to prevent things like port forwarding (which could e.g. allow A to make requests that come from inside B's firewall perimeter). The new line in ~/.ssh/authorized_keys on B would look like this:
ssh-rsa AAAA… luc@serverA restrict command="cat >~/backups/B/latest.tgz"
Now, when using this key to log in to B, the command cat >~/backups/B/latest.tgz will be executed regardless of what you pass to ssh. This way, all A can do on B with this key is write to that one file:
serverA$ tar … | ssh -i ~/.ssh/copy_to_B.id_rsa luc@serverB whatever
If the SSH server on B is not recent enough, it might not have restrict; in that case, use no-port-forwarding instead, plus all the other no-… options that are available (check man sshd on B).
You can refer to the original command in the forced command through the variable SSH_ORIGINAL_COMMAND, but beware that if you do something complicated here it would be difficult to ensure that you aren't opening a security hole. Here's a simple wrapper that allows A to write to any file in ~/backups/B on B by passing the desired file name as a command. Note that the wrapper whitelists characters, to avoid things like writing to ../../.ssh/authorized_keys.
ssh-rsa AAAA… luc@serverA restrict command="case $SSH_ORIGINAL_COMMAND in *[!-.A-Za-z0-9_]*) echo >&2 'bad target file'; exit 120;; [!-.]*) cat >~/backups/B/\"$SSH_ORIGINAL_COMMAND\";; *) echo >&2 'bad target file'; exit 120;; esac"
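The whitelist logic in that case statement can be exercised locally, without an SSH round trip, by wrapping the same patterns in a small function. This is just a sketch for testing the patterns; the real check runs inside the forced command on B:

```sh
#!/bin/sh
# Local simulation of the forced command's filename whitelist.
# Uses the same case patterns as the authorized_keys line above.
check_target() {
    case "$1" in
        *[!-.A-Za-z0-9_]*) echo "bad target file: $1"; return 120 ;;  # odd characters (/, ;, spaces, ...)
        [!-.]*)            echo "accepted: $1";        return 0   ;;  # plain names not starting with - or .
        *)                 echo "bad target file: $1"; return 120 ;;  # leading - or . (dotfiles, option-like names)
    esac
}

check_target "latest.tgz"                              # prints "accepted: latest.tgz"
check_target "../../.ssh/authorized_keys" || true      # rejected: contains /
check_target ".hidden" || true                         # rejected: leading dot
```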
| How to make a temporary ssh pipe? |
1,462,812,663,000 |
When downloading /var/log/apache2/other_vhosts_access.log (100 MB) from a distant server to my local computer via SFTP, I noticed that the network transfer was not compressed.
Indeed similar compressed files are ~ 10 MB and it would have taken 1/10th of the downloading time I observed.
Is there an option in SSH/SFTP settings to auto-compress file transfer to reduce bandwidth and uploading/downloading time?
(The server has Ubuntu and the local computer is using Win + WinSCP).
|
On WinSCP, transport compression can be enabled in the SSH page of the Advanced Site Settings dialog.
For an OpenSSH command line client, the -C option to sftp (passed through as the -C option to ssh) provides transport compression for the session.
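For an OpenSSH client, compression can also be switched on per-host in the client configuration instead of remembering -C every time. A sketch, where the host alias and hostname are placeholders:

```
# ~/.ssh/config (hypothetical entry)
Host myserver
    HostName host.example.com
    Compression yes
```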
| Auto-compress SFTP file-transfer to reduce network data usage |
1,462,812,663,000 |
First, I will explain the scenario.
We have two servers, both running Ubuntu 14.04 LTS, with a 70 TB drive mounted at /storage/. It holds many files of about 30 GB each, among others. Since both are remote servers, I want to move all this data to the same /storage/ drive on my other remote server.
Is there any way to do it fastly and stably so that there will be no data loss in that?
Once I tried to move only one file from one server to the other, which worked fine with this link. Any help would be appreciated.
|
Is there any way to do it fastly
It depends on the network connection speed between the source and destination servers.
70 TB is a lot of data. It might be worth physically disconnecting the drive from the source server and remounting it on the destination server.
and stably so that there will be no data loss in that?
If you copy the files via scp, their integrity is ensured by the cryptographic protocols used by the program itself. So as long as scp finishes with a zero exit status, you know that everything went well.
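Independently of the transport, you can verify the whole tree after the copy with a checksum manifest. The sketch below runs entirely locally (scratch directories stand in for /storage on each server, and plain cp stands in for scp/rsync), but the manifest steps are what you would run on the real hosts:

```sh
#!/bin/sh
# End-to-end integrity check for a copied tree.
SRC=$(mktemp -d); DST=$(mktemp -d); MANIFEST=$(mktemp)

echo "some data"  > "$SRC/file1"
mkdir "$SRC/dir" && echo "more data" > "$SRC/dir/file2"

# 1. On the source host: checksum every file, paths relative to the tree root.
(cd "$SRC" && find . -type f -print0 | xargs -0 sha256sum) > "$MANIFEST"

# 2. Transfer the tree (scp -r or rsync in practice; plain cp for this demo).
cp -R "$SRC"/. "$DST"/

# 3. On the destination host: re-check every file against the manifest.
(cd "$DST" && sha256sum -c "$MANIFEST") && echo "all files verified"
```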
| Transferring 70TB data from one remote server to another |
1,462,812,663,000 |
A service on a linux server is only able to do full backups, where each backup is a .tar archive (no compression). Many contents of the archive do not change from day to day. Each .tar file size is about 3GB (slowly increasing from day to day).
I want to transfer the backups to another server, which archives them. The transfer is done through the internet.
A requirement is that the backups are not altered (the result is again a list of .tar files, whose md5 sum is still identical to the original files on the server).
I'm currently using rsync to transfer the files, which works great, but all files are transferred with their full size. As far as I know rsync does some kind of deduplication on transfers, but only on a per-file level (right?).
Is there any way to transfer a few similar files through an SSH connection without retransmitting identical chunks of the files (so some kind of deduplication), that
does not require write access on the server (no unpacking of the tar files)
is tolerant to connection losses (does not leave temp files on abortions and detects not correctly transmitted files)
is able to resume the transfer after connection losses (do not retransmit all files if connection aborts)
does not require any additional tools on the server (besides the standard unix toolchain including rsync)
still uses a client-initiated SSH connection for the transfer
|
One thing you might do is to (on the receiving side) copy the last backup file to the new name before starting rsync. Then it will transfer only the diffs between what you have and what you should have.
If you do this, and you use rsync -u (update only, based on timestamp), be careful to ensure that your seeded copy is older than the new source file.
| transfer many similar files over ssh |
1,462,812,663,000 |
How can I transfer data by excluding files over 100MB but including files that are over 100MB if they match a pattern of known file extensions?
I've read through rsync options and I don't think I can achieve this with rsync, because --max-size= is not flexible like this, even in combination with --include or --exclude.
|
In two steps (for simplicity, even though these steps can definitely be combined).
First transfer "small" files:
find /source/path -type f -size -100M -print0 |
rsync -av -0 --files-from=- / user@server:/destination/
Then transfer "big" files whose filenames match pattern:
find /source/path -type f -size +99M -name 'pattern' -print0 |
rsync -av -0 --files-from=- / user@server:/destination/
This is, however, untested.
-print0 in GNU find (and others) will print the found names with a nul delimiter, and -0 with rsync will make --files-from=- interpret this standard input stream in that particular way.
The file paths read with --files-from should be relative to the specified source, that's why I use / as the source in rsync (I'm assuming /source/path in find is an absolute path).
Combined variation (also not tested):
find /source/path -type f \
\( -size -100M -o -name 'pattern' \) -print0 |
rsync -av -0 --files-from=- / user@server:/destination/
With more than one allowable pattern string for "big" files:
find /source/path -type f \
\( -size -100M -o -name 'pattern1' -o -name 'pattern2' -o -name 'pattern3' \) -print0 |
rsync -av -0 --files-from=- / user@server:/destination/
Each pattern may be something like *.mp4 or whatever file extensions you use. Note that these need to be quoted, as in -name '*.mp4'.
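Before wiring the find expression to rsync, the selection can be sanity-checked locally. A sketch using sparse files (truncate writes no real data), with *.mp4 standing in for the pattern:

```sh
#!/bin/sh
# Verify which files the size/name expression selects.
d=$(mktemp -d)
truncate -s 10M  "$d/small.bin"   # under 100M: selected by -size -100M
truncate -s 150M "$d/big.bin"     # over 100M, name does not match: skipped
truncate -s 150M "$d/big.mp4"     # over 100M but matches -name '*.mp4': selected

find "$d" -type f \( -size -100M -o -name '*.mp4' \) -print
# prints small.bin and big.mp4, but not big.bin
```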
| How can I transfer data by excluding files over 100MB but including files that are over 100MB if they match a pattern of known file extensions? |
1,462,812,663,000 |
I have two computers connected to the same router (so they are essentially connected in a LAN). Both run some GNU+Linux distribution. I have a bunch of files, in a directory ~/A/ on my first computer that I would like to transfer to my second computer.
The names of the files in A are contained in a certain list, say names_list. Now I would like for each of these files to be accessible via a local address, provided with reference to the router (such as 192.168.2.1:2112/name_of_file or something similar), so that the second computer may simply download each file one-by-one when given the names_list.
How can I do this? The downloading part is trivial, I am asking mainly regarding setting up the host computer to provide files at specific local addresses.
|
Plenty of remote filesystems exist. There are three that are most likely to be useful to you.
SSHFS accesses files via an SSH shell connection (or more precisely, via SFTP). You don't need to set up anything exotic: just install the OpenSSH server on one machine, install the client on the other machine, and set up a way to log in from the client to the server (either with a password or with a key). Then mount the remote directory on the first computer:
mkdir ~/second-computer-A
sshfs 192.168.2.1:A ~/second-computer-A
SSHFS is the easiest one to set up as long as you have access to all the files through your user account on the second computer.
NFS is Unix's traditional network filesystem protocol. You need to install an NFS server on the server. Linux provides two: one built into the kernel (though you still need userland software to manage the underlying RPC protocol and the additional lock protocol) and one as pure userland software. Pick either; the kernel one is slightly faster and slightly easier to set up. On the server, you need to export the directory you want to access remotely, by adding an entry to /etc/exports:
/home/zakoda/A 192.168.2.2(rw,sync)
On the second computer, as root:
mkdir /media/second-computer-A
mount -t nfs 192.168.2.1:/home/zakoda/A /media/second-computer-A
By default NFS uses numerical user and group IDs, not user and group names. So this only works well if you have the same user IDs on the server and on the client. If you don't, set up nfsidmap on the server.
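If the NFS mount should come back automatically after a reboot, it can also go into /etc/fstab on the client instead of a manual mount command. A sketch using the same paths as above:

```
# /etc/fstab on the second computer (the client)
192.168.2.1:/home/zakoda/A  /media/second-computer-A  nfs  defaults  0  0
```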
Samba is Windows's network filesystem protocol (rather, it's an open-source implementation of the protocol, which was called SMB and is now called CIFS). It's also available on Linux and other Unix-like systems. It's mainly useful to mount files from a Windows machine on a Unix machine or vice versa, but it can also be used between Unix machines. It has the advantage that matching accounts is easier to set up than with NFS. The initial setup is a bit harder but there are plenty of tutorials, e.g. server and client.
| Make files available through local address |
1,462,812,663,000 |
A vendor has provided these FTP connection params so I can upload some data for them...
Host: host.com
Port: 46800
Protocol: FTP – File Transfer Protocol
Encryption: Require implicit FTP over TLS
Logon Type: Normal
User: [ username ]
Password: [ password ]
It isn't working for me...
$ ftp -p host.com 46800
Connected to host.com
421 Service not available, user interrupt. Connection closed.
ftp>
I suspect the "Require implicit FTP over TLS" param might be the issue? (Maybe?)
TLS isn't mentioned in the FTP man page.
What would be a command that would allow me to connect and upload?
|
The ftp program is for the insecure ftp protocol. Your vendor has specified that you use Implicit FTP over TLS which is a way to encrypt the connection and keep your credentials and data private over the Internet.
Fortunately, there is a program called lftp which understands this protocol.
lftp
open -u [username] ftps://host.com:46800
Password: [enter your password]
ls
[your remote files should be listed]
lftp supports many protocols. This webpage lists them in an easy to read table.
| How can I connect and upload to this FTP host on the console? |
1,462,812,663,000 |
I've never seen or heard anything like this before, and I can't find anything else online that is at all similar.
I've upgraded my network to gigabit and have been transferring large files lately (this one in question is a bunch of DVD images totaling over 200GB). Whenever I try to copy a set of files a few gigs and larger, I've noticed this odd behavior where Mint will load a chunk of the data into RAM-- usually about 1.2 GB or smaller-- sometimes only a few hundred megs-- and then start transferring. When it gets done transferring that, it will literally halt the transfer, spit out the old hunk of data, and wait to continue transferring until the next chunk of data is loaded into the RAM. Then it will resume transferring across the network. Then it repeats. And repeats. And repeats. Until the data is all done.
Here is a screenshot of the System Monitor during one of these weird moments.
You can see the death of the transfer at the precise moment the RAM dumps the data, and then you see the RAM level out at the same moment that the transfer resumes again. What's also funny is that I actually have six gigs of RAM, not 3.2 as Sys. Monitor would have you believe-- this is the second time Mint hasn't reported it all of a sudden. But that's a question for another day.
It's not the worst thing in the world, but it is a little annoying when every other OS I've used simultaneously loads data in and out of the RAM while it's transferring across the network. It doesn't have to pause to think about it. It would save me time while I'm moving these large sets of data if I could remedy this.
Are there any suggestions, remedies, diagnoses, or theories?
|
Marco's comment inspired me to try a few things that I didn't think of, and I discovered the answer. Well, I guess I discovered an alternative. If anyone knows more about this, please add an answer.
I ought to have specified beforehand how I was transferring the file. This was done over the network (of course) via a WebDAV connection to my Synology NAS.
After Marco's comment, I tested copying about 11.7 GB to the NAS using several different methods:
Samba: Not only was the average speed much faster, but it didn't have the waiting-for-data-to-load problem.
FTP: The average speed was faster, the transfer didn't stop to wait for data to load into the ram, but sometimes the CPU would get a little funny... and by that I mean that it maxed out one of the cores, and I had to kill the FTP process because it kept eating up the CPU even after I cancelled the transfer.
WebDAV: Same as before-- the RAM would grab a bunch of data, data would transfer, then RAM would dump it and grab more, transfer that, etc.
So I have discovered that Samba is the better method in this instance. I did a little Googling and saw that some people feel that WebDAV is a clunky protocol especially for LANs.
Still, I don't know if this is just the way WebDAV is-- if other people have the same problem-- or if it's something wrong with Mint, or if it's just my particular setup of Mint. So I think I'll give this a few days before I select this as the best answer just to see if others have better solutions/more to add that I can't add.
| Linux Mint Stops Network File Transfers to Load Data into RAM |
1,462,812,663,000 |
I know I can send files to a remote server using ssh, but I don't know how to specify the destination directory.
|
There are a few methods.
The simplest way, if you're just transferring a file once in a while, is scp:
scp myfile.txt [email protected]:/home/user/
scp stands for secure copy and it transfers over SSH.
There is also sftp
sftp [email protected]
> cd /home/user/
> put myfile.txt
I guess the only real advantage to using this is that you can transfer multiple files without typing in your SSH password all the time. (If you don't use a keyring that is)
If you're going to be transferring files regularly, take a look at rsync. A simple usage of rsync might look like:
rsync -a mydir/ [email protected]:/home/user/
But take a look at the man page as there are tons of options.
Finally, there is sshfs. With this method you can mount an SSH server on your local filesystem like any other filesystem, and then you can just copy files into it.
sshfs [email protected]:/home/user/ /mnt/ssh/
cp myfile.txt /mnt/ssh/
| How to specify where files are transferred to using ssh |
1,462,812,663,000 |
I have tried, unsuccessfully, to transfer about 50 GB of files from a Red Hat Linux variant to my Debian 8.1 machine.
I would like to find ways other than an external HDD to move the data. Both machines have USB 3 and HDMI connections, but nothing else.
I am not allowed to install BTSync to transfer the files quickly between them.
How can I easily mass-transfer big files between two Linux boxes of different variants?
|
The fact that one machine is running Red Hat and the other Debian won't cause you any problems. For most intents and purposes, the differences between distributions are insignificant.
Realistically, you have two options for your data transfer:
Using a removable disk, connected using USB or eSATA or similar.
Using the network. Once both machines can connect to one another over the network, you can use any one of a variety of tools to do the file transfer. You mentioned that you cannot use BitTorrent Sync, but something like rsync may well be an option or, failing that, sftp or scp.
| Mass transfer big files from one Linux box to another Linux box? |
1,462,812,663,000 |
Is there any way to automate the process of copying files between Windows and Unix,
without doing it manually, using tools such as WinSCP?
I need to copy files from Unix to Windows by executing some commands on Windows. I googled it and found these tools that can do this:
ftp
sftp
scp
pscp
winscp console.
EDIT: Can I get something using pscp? I just found out that FTP is not enabled on my servers.
Please suggest a way of doing it; I need to run the script/command on Windows only to copy the Unix files.
EDIT 2 : Getting this error in winscp console for sftp :
winscp> get abc.sh c:\
abc.sh | 0 KiB | 0.0 KiB/s | binary | 0%
Can't create file 'c:\abc.sh'.
System Error. Code: 2.
The system cannot find the file specified
(A)bort, (R)etry, (S)kip, Ski(p) all:
|
I would use a WinSCP script for this. Here is some good documentation on how to do this. Example script:
# Automatically abort script on errors
option batch abort
# Disable overwrite confirmations that conflict with the previous
option confirm off
# Connect using a password
# open sftp://user:[email protected] -hostkey="ssh-rsa 1024 xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx"
# Connect
open sftp://[email protected] -hostkey="ssh-rsa 1024 xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx"
# Change remote directory
cd /home/user
# Force binary mode transfer
option transfer binary
# Download file to the local directory d:\
get examplefile.txt d:\
# Disconnect
close
# Connect as a different user
open [email protected]
# Change the remote directory
cd /home/user2
# Upload the file to current working directory
put d:\examplefile.txt
# Disconnect
close
# Exit WinSCP
exit
Then save it to example.txt and use this command:
winscp.exe /console /script=example.txt
| copying files from unix to windows? |
1,462,812,663,000 |
Assume I run a Windows guest in a QEMU virtual machine on a Debian host, where the Debian host is a common desktop computer with internet access.
How can I set up a SFTP file exchange between guest and host but prevent the guest (= Windows) from accessing the internet?
Set up a virtual network interface (NIC) for the Windows machine in question in virt-manager (the default setting is NAT with device virtio)
Install network driver in guest machine (Windows)
Install WinSCP in the guest machine (Windows)
But what then? Where can I prevent public internet access only for this guest? Is this already possible in virt-manager without messing up the host firewall?
Several other guest machines should not be affected by this.
|
Create a new virtual network in virt-manager with its connectivity set to Isolated virtual network.
In this configuration, VMs on this network can only access other VMs on the same network and the host (using only the host's IP address for the isolated network).
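The same isolated network can be created from the terminal with virsh instead of the virt-manager GUI. A sketch; the network name and address range are arbitrary placeholders. The key point is that the XML has no <forward> element, which is what makes the network isolated:

```
<!-- isolated-net.xml
     virsh net-define isolated-net.xml && virsh net-start isolated
     then attach the Windows guest's NIC to the "isolated" network -->
<network>
  <name>isolated</name>
  <ip address="192.168.100.1" netmask="255.255.255.0">
    <dhcp>
      <range start="192.168.100.10" end="192.168.100.100"/>
    </dhcp>
  </ip>
</network>
```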
| QEMU: Enable SFTP file exchange (guest ⇆ host) but prohibit guests access to public internet? |
1,498,149,937,000 |
We need to do a once-only archive copying of users' home folders to an archive server (pending final deletion) when they leave, in case they later discover that they may still require some of their files (although we do of course very strongly encourage them to take their own backup of everything they might still need before they go).
We had been using scp for this, but have now got inadvertently snared by a former user who had installed some software which had created an unusual symlink structure in one of their folders, which seemed to result in scp looking ever upwards and then trying to copy rather more than was expected, before being stopped.
Unfortunately, it turns out that scp seems to always follow symlinks and does not appear to have any option to prevent this.
I am looking for an alternative way to backup a user folder that avoids this problem (and ideally is no more complicated than it absolutely needs to be).
tar could be a possibility, but I am slightly concerned that the creation of a tarball locally before copying it to the archive server could use a not insignificant amount of storage space, and might pose some difficulties in the event that our fileserver becomes rather more full at some point in the future.
Another possibility might be to use rsync, but this seems possibly over-the-top for a once-only file transfer, and I know from previous experience that tuning rsync's own options can sometimes be fiddly in itself.
Can anyone suggest a reliable and simple alternative to scp for this?
|
If you like tar except for the temp file, this is easy: don't use a temp file. Use a pipe.
cd /home ; tar cf - user | gzip | ssh archivehost -l archiveuser 'cat > user.archived.tar.gz'
Substitute xz or whatever you prefer for gzip. Or move it over to the other side of the connection, if saving CPU cycles on the main server is more important than saving network bandwidth (and CPU on the archive server):
cd /home ; tar cf - user | ssh archivehost -l archiveuser 'gzip > user.archived.tar.gz'
You could stick a gpg in there too. Have a key pair that's just for these archives, encrypt with the public key when storing, use the private key when you need to recover something.
More details as requested:
I intend user to be the user whose home directory /home/user you are archiving. archivehost is the server where you're going to store the archives, and archiveuser is an account on the archive server that will own the archives.
tar cf - user means "create a tar archive of user and write it to stdout". The -c is "create", and -f - is "use stdin/stdout as the file". It will probably work with just tar c user since -f - is likely to be the default, but originally the default action of tar was to read or write a tape device. Using an explicit -f - may just be a sign that I'm old.
The tar z flag would be fine, except then I couldn't show how to move it to the other side of the ssh. (Also, connecting gzip and tar with an explicit pipe is one of those "old people" things - tar didn't always have that option.) Plus I can substitute bzip2, lzop, xz -3v, or any other compression program without needing to remember the corresponding tar options.
I never heard of --checkpoint before, so you'll just have to rely on your own tests for that one.
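The pipe itself is easy to rehearse locally before pointing it at the archive host: same tar | gzip stages, with the ssh leg replaced by a plain redirect, and tar tzf to confirm the archive is intact afterwards. A sketch with a throwaway directory:

```sh
#!/bin/sh
# Dry run of the archive pipe on a scratch "home" directory.
home=$(mktemp -d)
mkdir "$home/user" && echo "important" > "$home/user/notes.txt"

# Same shape as the real command, minus the ssh hop:
cd "$home" && tar cf - user | gzip > user.archived.tar.gz

# What you would run on the archive host to check the result:
tar tzf "$home/user.archived.tar.gz"
# lists user/ and user/notes.txt
```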
| Archiving user home folder to remote server, without following symlinks? |
1,498,149,937,000 |
How can I enable a hotspot on my laptop using the terminal? I don't need an internet connection for that. I want to set up a server so that I can transfer files from my laptop to my mobile or to other laptops. Is this possible using the terminal? I have used python -m SimpleHTTPServer, but for that I have to connect to my mobile hotspot or some other shared network, and with it I can only download from the laptop, not upload to it.
(The main problem is creating the hotspot from the terminal. I use Deepin 15.3, which is Debian-based.)
|
There is a snap package (it's a new packaging technique created by Ubuntu developers) called wifi-ap. You can use it from terminal to create a wireless network, and even share internet if you want.
To install the package : snap install wifi-ap.
Then you have to configure the access point with this command wifi-ap.config.
To be able to install and manage snap packages, you first have to install snapd with the command sudo apt-get install snapd.
Then, add this line to your ~/.bashrc : export PATH=/snap/bin:$PATH, to be able to call the installed snaps from terminal.
| Wireless Transfer ,How to create hotspot |
1,498,149,937,000 |
I have been trying to get this to work for a few days now, but I can't figure it out. I am trying to scp a folder full of .tar files from my Ubuntu-Server Server to my Windows Desktop. I want to push it from my server to my Desktop, because I'd like to automate the process via a bash script.
I am using a command like this:
scp -r path/to/folder Username@Windowsmachineip:C:/path/to/folder/
When I execute the command, I get the error no such file or directory, but it does create a folder with the name of the folder on my server.
What I can do is copy single files, but only if I specify a name for the file on my desktop, like this:
scp -r path/to/folder/file Username@Windowsmachineip:C:/path/to/folder/file
If I try it without the filename at the end, I get the same error. I also tried it with the -p flag, following a suggestion, but that throws the same error. I tried pulling from the Server to my desktop, but get the same error. I also tried sftp which gives this output:
dest open "/E:/backup/minecraft/backup_minecraft_24_03_2024_06:01:04.tar": No such file or directory
upload "backup/minecraft/backup_minecraft_24_03_2024_06:01:04.tar" to "/E:/backup/minecraft/backup_minecraft_24_03_2024_06:01:04.tar" failed
The error I get with scp and the -v flag is:
scp: debug1: fd 3 clearing O_NONBLOCK
Sending file modes: C0666 471111680 backup_minecraft_23_03_2024_22:29:33.tar
Sink: C0666 471111680 backup_minecraft_23_03_2024_22:29:33.tar
scp: E:/backup/minecraft/backup_minecraft_23_03_2024_22:29:33.tar: No such file or directory
Why doesn't this work?
|
You have colons in your filenames: backup_minecraft_23_03_2024_22:29:33.tar
These are not permitted characters for Windows systems. Remove (or replace) the colons and the files will transfer correctly.
The following [are] reserved characters:
< (less than)
> (greater than)
: (colon)
" (double quote)
/ (forward slash)
\ (backslash)
| (vertical bar or pipe)
? (question mark)
* (asterisk)
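Since the names come out of a backup script, the simplest fix is either to generate them without colons in the first place (e.g. date +%H-%M-%S instead of %H:%M:%S) or to rename existing archives before scp runs. A portable renaming sketch, using the filename pattern from the question:

```sh
#!/bin/sh
# Rename files so their names contain no colons (illegal on Windows).
d=$(mktemp -d)
touch "$d/backup_minecraft_24_03_2024_06:01:04.tar"

for f in "$d"/*; do
    case $f in
        *:*) mv -- "$f" "$(printf '%s' "$f" | tr ':' '-')" ;;
    esac
done

ls "$d"   # prints backup_minecraft_24_03_2024_06-01-04.tar
```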
| scp copies folder but not contents of the folder |
1,498,149,937,000 |
I've been using Linux (mostly Ubuntu, but also Manjaro and Fedora) for over 10 years. Lately I've toyed with the idea of moving to Mac OSX. From what I've read, ext4 and HFS+ don't really talk well together. Is there a 'relatively' efficient and painless way of transferring the linux files to OSX?
|
ftp, sshfs, scp, cifs/samba, rsync, netcat/nc, http (nginx/apache/python3 -m http.server) - just make sure you've checksummed all the files on both ends and the hashes match. All of them are supported on both Mac and Linux.
| What is a good way to transfer files from linux to OSX? |
1,498,149,937,000 |
The initial problem was that when performing git clone via ssh the transfer rate is very slow, then it pauses and finally fails with
connection reset by peer
Background
ssh server is a Raspberry Pi running Raspbian
ssh client: I have tried both an OSX machine and another Raspberry Pi with Raspbian, but I have the same issue
git clone on the LAN is never an issue, but it shows this problem when attempted over the WAN. I have an OpenWrt router with port forwarding to expose the Raspberry Pi's ssh port for WAN access
I do have a firewall running on the router that is visible from the Internet.
IPv4 is being used
RPi is connected to the router via a wired connection
The following ssh clients were used:
OSX: OpenSSH_8.1p1, LibreSSL 2.7.3
RPi: openssh-client/stable,now 1:7.9p1-10 armhf
scp observations
I said let me try scp to make sure this is working fine before I look at git clone. Here are my observations:
scp of files smaller than 64KB is very fast and done under a second.
scp -P 31415 user@host:/tmp/64KB /dev/null
64KB 100% 64KB 310.4KB/s 00:00
scp of files larger than 64KB is very slow, even if I just have 1 extra KB, and sometimes fails
scp -P 31415 user@host:/tmp/65KB /dev/null
65KB 100% 65KB 284.2KB/s 00:00
Connection to xxxxxxx closed by remote host.
I tried to do an scp -vvv and did a diff of the two transfers and I see the following differences.
-64KB 100% 64KB 288.5KB/s 00:00
+65KB 100% 65KB 267.3KB/s 00:00
debug3: receive packet: type 96
debug2: channel 0: rcvd eof
debug2: channel 0: output open -> drain
@@ -190,6 +190,18 @@ debug2: channel 0: chan_shutdown_read (i0 o3 sock -1 wfd 4 efd 6 [write])
debug2: channel 0: input open -> closed
debug3: receive packet: type 97
debug2: channel 0: rcvd close
+debug3: receive packet: type 98
+debug1: client_input_channel_req: channel 0 rtype [email protected] reply 1
+debug3: send packet: type 100
+debug3: receive packet: type 98
+debug1: client_input_channel_req: channel 0 rtype [email protected] reply 1
+debug3: send packet: type 100
+debug3: receive packet: type 98
+debug1: client_input_channel_req: channel 0 rtype [email protected] reply 1
+debug3: send packet: type 100
+debug3: receive packet: type 98
+debug1: client_input_channel_req: channel 0 rtype [email protected] reply 1
+debug3: send packet: type 100
debug3: channel 0: will not send data after close
debug2: channel 0: almost dead
debug2: channel 0: gc: notify user
I do see some additional +debug3: receive packet: type 98 lines with the 65 KB transfer, but I lack the understanding to interpret this.
I have already gone through a few solutions, like turning off TCP timestamps, changing the MTU size, etc., but none of them helped.
|
Setting "IPQoS" to "none" fixed the problem. Thank you so much! I seemed to need to set the option on both the client and server.
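To make the fix permanent, the option can be set in the OpenSSH configuration on both ends (the none keyword needs a reasonably recent OpenSSH; older versions accept only explicit classes such as lowdelay or throughput). A sketch:

```
# Client side: ~/.ssh/config (or /etc/ssh/ssh_config)
Host *
    IPQoS none

# Server side: /etc/ssh/sshd_config (restart sshd afterwards)
IPQoS none
```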
| scp transfer becomes very slow when the file size is greater than 64KB |
1,498,149,937,000 |
The data transfer to my mp3 player is very slow via USB connection.
I have an mp3 player from Samsung (YP-M1JCB/EDC) which I have connected to my Fedora Linux machine (the PC connection setting on the player is set to MSC, i.e. mass storage class).
When I connect the mp3 player to my computer with a usb cable, with dmesg I see this:
[1351555.669080] usb 2-2: new high-speed USB device number 17 using ehci-pci
[1351555.812993] usb 2-2: New USB device found, idVendor=04e8, idProduct=5123
[1351555.813047] usb 2-2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[1351555.813059] usb 2-2: Product: YP-M1
[1351555.813065] usb 2-2: Manufacturer: Samsung Electronics
[1351555.813071] usb 2-2: SerialNumber: b37c03ac0f1647c2a9720aae4e913080
[1351555.896394] scsi47 : usb-storage 2-2:1.0
[1351556.899771] scsi 47:0:0:0: Direct-Access Samsung YP-M1 1.0 PQ: 0 ANSI: 0
[1351556.900481] scsi 47:0:0:1: Direct-Access Samsung microSD 1.0 PQ: 0 ANSI: 0
[1351556.902422] sd 47:0:0:0: Attached scsi generic sg3 type 0
[1351556.904403] sd 47:0:0:0: [sdc] 1896703 4096-byte logical blocks: (7.76 GB/7.23 GiB)
[1351556.904617] sd 47:0:0:1: Attached scsi generic sg4 type 0
[1351556.904922] sd 47:0:0:0: [sdc] Write Protect is off
[1351556.904930] sd 47:0:0:0: [sdc] Mode Sense: 00 06 00 00
[1351556.907342] sd 47:0:0:0: [sdc] Asking for cache data failed
[1351556.907361] sd 47:0:0:0: [sdc] Assuming drive cache: write through
[1351556.910613] sd 47:0:0:1: [sdd] Attached SCSI removable disk
[1351556.911467] sd 47:0:0:0: [sdc] 1896703 4096-byte logical blocks: (7.76 GB/7.23 GiB)
[1351556.912448] sd 47:0:0:0: [sdc] Asking for cache data failed
[1351556.912457] sd 47:0:0:0: [sdc] Assuming drive cache: write through
[1351556.913372] sdc: sdc1
[1351556.916978] sd 47:0:0:0: [sdc] 1896703 4096-byte logical blocks: (7.76 GB/7.23 GiB)
[1351556.919093] sd 47:0:0:0: [sdc] Asking for cache data failed
[1351556.919111] sd 47:0:0:0: [sdc] Assuming drive cache: write through
[1351556.919120] sd 47:0:0:0: [sdc] Attached SCSI removable disk
Seems ok to me.
I then mount the device /dev/sdc1:
sudo mount -o uid=erik /dev/sdc1 /mnt/usb-stick/
When I create a small text file on the device, and then unmount it, there seems to be no problem. I can read the text file on the device (it has a text reading app).
But when I copy some bigger files (mp3 files) to the device, it takes forever. Well, the command line
cp supermusic.mp3 /mnt/usb-stick/Music/
finishes after a few seconds. But when I try to unmount the device, it never finishes. dmesg shows:
[1352056.822086] usb 2-2: reset high-speed USB device number 18 using ehci-pci
[1352087.878103] usb 2-2: reset high-speed USB device number 18 using ehci-pci
[1352118.854062] usb 2-2: reset high-speed USB device number 18 using ehci-pci
[1352149.830105] usb 2-2: reset high-speed USB device number 18 using ehci-pci
[1352180.870081] usb 2-2: reset high-speed USB device number 18 using ehci-pci
[1352211.846060] usb 2-2: reset high-speed USB device number 18 using ehci-pci
[1352211.969584] sd 48:0:0:0: [sdc] Unhandled error code
[1352211.969601] sd 48:0:0:0: [sdc]
[1352211.969607] Result: hostbyte=DID_ABORT driverbyte=DRIVER_OK
[1352211.969612] sd 48:0:0:0: [sdc] CDB:
[1352211.969617] Write(10): 2a 00 00 1b 51 02 00 00 1e 00
[1352211.969634] end_request: I/O error, dev sdc, sector 14321680
[1352242.822056] usb 2-2: reset high-speed USB device number 18 using ehci-pci
[1352273.862064] usb 2-2: reset high-speed USB device number 18 using ehci-pci
[1352304.838066] usb 2-2: reset high-speed USB device number 18 using ehci-pci
[1352335.814100] usb 2-2: reset high-speed USB device number 18 using ehci-pci
[1352366.854074] usb 2-2: reset high-speed USB device number 18 using ehci-pci
[1352397.830096] usb 2-2: reset high-speed USB device number 18 using ehci-pci
[1352397.954124] sd 48:0:0:0: [sdc] Unhandled error code
[1352397.954141] sd 48:0:0:0: [sdc]
[1352397.954147] Result: hostbyte=DID_ABORT driverbyte=DRIVER_OK
[1352397.954153] sd 48:0:0:0: [sdc] CDB:
[1352397.954157] Write(10): 2a 00 00 1b 51 20 00 00 1e 00
[1352397.954174] end_request: I/O error, dev sdc, sector 14321920
[1352428.870469] usb 2-2: reset high-speed USB device number 18 using ehci-pci
[1352459.846068] usb 2-2: reset high-speed USB device number 18 using ehci-pci
[1352490.822088] usb 2-2: reset high-speed USB device number 18 using ehci-pci
[1352521.862078] usb 2-2: reset high-speed USB device number 18 using ehci-pci
[1352552.838052] usb 2-2: reset high-speed USB device number 18 using ehci-pci
[1352583.878077] usb 2-2: reset high-speed USB device number 18 using ehci-pci
[1352584.005386] sd 48:0:0:0: [sdc] Unhandled error code
[1352584.005401] sd 48:0:0:0: [sdc]
[1352584.005407] Result: hostbyte=DID_ABORT driverbyte=DRIVER_OK
[1352584.005413] sd 48:0:0:0: [sdc] CDB:
[1352584.005417] Write(10): 2a 00 00 1b 51 3e 00 00 1e 00
[1352584.005434] end_request: I/O error, dev sdc, sector 14322160
[1352614.854055] usb 2-2: reset high-speed USB device number 18 using ehci-pci
[1352628.359667] usb 1-2: USB disconnect, device number 46
[1352645.830068] usb 2-2: reset high-speed USB device number 18 using ehci-pci
[1352676.870073] usb 2-2: reset high-speed USB device number 18 using ehci-pci
[1352707.846090] usb 2-2: reset high-speed USB device number 18 using ehci-pci
[1352738.822066] usb 2-2: reset high-speed USB device number 18 using ehci-pci
[1352769.862077] usb 2-2: reset high-speed USB device number 18 using ehci-pci
[1352769.985579] sd 48:0:0:0: [sdc] Unhandled error code
[1352769.985596] sd 48:0:0:0: [sdc]
[1352769.985601] Result: hostbyte=DID_ABORT driverbyte=DRIVER_OK
[1352769.985608] sd 48:0:0:0: [sdc] CDB:
[1352769.985611] Write(10): 2a 00 00 1b 51 5c 00 00 1e 00
[1352769.985630] end_request: I/O error, dev sdc, sector 14322400
There seem to be some problems, but why? I have connected many other mass storage devices and never had a problem.
How can I see whether data is still being transferred, and how fast? If I go to /mnt/usb-stick/Music the file seems to be there already (full size).
PS: When I just remove the USB cable after waiting a very long time, then reconnect it and run fsck.vfat on the partition, it finds a lot of errors (filenames full of Chinese glyphs) which take a very long time to fix.
|
The USB connection resets indicate that there is something physically wrong with your USB device (or maybe the electrical connection). The fact that other drives do not exhibit the problem supports this theory.
The logs confirm this. (“end_request: I/O error, dev sdc, …”, etc.)
Discard it or do not use it for important things anymore.
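On the "how do I see whether data is still being transferred" part: cp returning after a few seconds only means the data landed in the page cache; the kernel writes it out to the slow device in the background. On Linux you can watch the amount of not-yet-written data in /proc/meminfo (a quick sketch; these field names are standard on Linux):

```shell
# How much dirty (not yet written back) data the kernel still holds:
grep -E '^(Dirty|Writeback):' /proc/meminfo

# To watch it fall while the flush is in progress:
# watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'
```

When Dirty and Writeback approach zero, the data has really reached the device. With the I/O errors above, the writeback can never complete, which is why the unmount hangs.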
| where or how to see if data is still being transferred over usb to mass storage |
1,498,149,937,000 |
I have some internally available servers (all Debian) that share a LetsEncrypt wildcard certificate (*.local.example.com). One server (Server1) keeps the certificate up to date and now I'm looking for a process to automatically distribute the .pem files from Server1 to the other servers (e.g. Server2 and Server3).
I don't allow root logins via SSH, so I believe I need an intermediary user.
I've considered using a cronjob on Server1 to copy the updated .pem-files to a users directory, where
a unprivileged user uses scp or rsync (private key authentication) via another cronjob to copy the files to the Server2/3. However, to make this a more secure process, I wanted to restrict the user's privileges on the Server2/3 to chroot to their home directory and only allow them to use scp or rsync. It seems like this isn't a trivial configuration and most methods are outdated, flawed or requite an extensive setup (rbash, forcecommand, chroot, ...).
I've also considered to change the protocol to sftp, which should allow me to use the restricted sftp environment, via OpenSSH but I have no experience.
An alternative idea was to use an API endpoint (e.g. FastAPI, which is already running on Server1) or simply a webserver via HTTPS with custom API-Secrets or mTLS on Server1 to allow Server2/3 to retrieve the .pem-files.
At the moment, the API/webserver approach seems most reasonable and least complex, yet feels unnecessarily convoluted. I'd prefer a solution that doesn't require additional software.
Server1 has .pem-files (owned by root) and Server2/3 need those files updated regularly (root-owned location). What method can I use to distribute those files automatically in a secure manner?
|
I've settled on an rsync-only user, who can only rsync data to a predefined directory using ssh keys (https://gist.github.com/jyap808/8700714). I rsync the files with a script that runs after successful letsencrypt deployments. On the receiving servers, I have an inotifywait service running that moves the files to the appropriate locations right after they've been synced onto the server.
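A sketch of the receiving side. Both fragments are assumptions to adapt (paths, user name, key): the authorized_keys line uses the rrsync helper that ships with rsync (on Debian typically under /usr/share/doc/rsync/scripts/) to confine the key to a single write-only directory, and the mover runs as a small systemd unit:

```ini
; ~syncuser/.ssh/authorized_keys on Server2/3 (one line): the key can only rsync into one dir
; command="rrsync -wo /home/syncuser/incoming",restrict ssh-ed25519 AAAA... cert-sync

; /etc/systemd/system/cert-mover.service: move freshly synced files into place
[Unit]
Description=Move synced certificate files into place

[Service]
; %% is a literal % and $$ a literal $ under systemd's escaping rules
ExecStart=/bin/sh -c 'inotifywait -m -q -e close_write --format %%f \
  /home/syncuser/incoming | while read -r f; do \
  mv "/home/syncuser/incoming/$$f" /etc/ssl/local/; done'

[Install]
WantedBy=multi-user.target
```

The restrict option requires OpenSSH 7.2 or newer; on older servers the equivalent no-pty/no-port-forwarding options can be listed individually.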
| How to distribute HTTPS certificate/key securely and automatically on internal servers |
1,498,149,937,000 |
Here I have command to transfer inputFile.tar to another bluetooth device (10:68:3F:57:7D:B6).
obexftp -b 10:68:3F:57:7D:B6 -p inputFile.tar
However, is it possible to use stdout as input for obexftp?
For example, I want to do something like this:
cat inputFile.tar | obexftp -b 10:68:3F:57:7D:B6 ...
gzip -fc inputFile.tar | obexftp -b 10:68:3F:57:7D:B6 ...
How can I do this? How do I tell obexftp to read from stdin?
|
I can't find a way with the pipe but you can try this:
obexftp -b 10:68:3F:57:7D:B6 -p $(cat inputFile.tar)
This passes the stdout of the command inside $( ) on the command line.
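Note that command substitution mangles binary data (word splitting, NUL bytes dropped), and -p expects a file name, so the above is unlikely to work for a .tar. If a pipe really isn't supported, a more robust workaround is to stage the stream in a temporary file and send that (a sketch; the obexftp line is commented out since it needs a reachable device, and the demo input stands in for inputFile.tar):

```shell
printf 'hello\n' > inputFile.tar        # demo stand-in for the real archive
tmp=$(mktemp /tmp/obex.XXXXXX)
gzip -fc inputFile.tar > "$tmp"         # stage the stream as a real file
gzip -t "$tmp"                          # sanity check before sending
# obexftp -b 10:68:3F:57:7D:B6 -p "$tmp"
```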
| Pipe stdout to obexftp bluetooth transfer? |
1,498,149,937,000 |
I have a USB drive with Manjaro ARM on it (which is used for Raspberry Pi 4 system) and an empty SD card.
Is there a possible way to transfer the OS from USB drive to SD card, while preserving the partitions?
If it is possible, can it be done while Manjaro is running?
Here is the output of lsblk:
$ lsblk # partitions on USB Drive
sda 8:0 1 14.9G 0 disk
├─sda1 8:1 1 213.6M 0 part /boot
└─sda2 8:2 1 14.7G 0 part /
zram0 254:0 0 11.2G 0 disk [SWAP]
|
Is there a possible way to transfer the OS from USB drive to SD card, while preserving the partitions?
Yes, assuming the sd card is at least as big as the usb drive. You can run blockdev --getsize64 /dev/sda to get the size of your usb drive in bytes, and by changing the device path to the sd card you can ensure it has at least as many bytes.
It is perhaps not very likely that the devices are exactly the same size, so I would preferably create an identical partition table (with the same partition numbers for minimum hassle) manually on the sd card. This ensures that the extra disk space potentially available on the sd card can later be used for e.g. extending the root partition or creating new partitions. Use sfdisk -l /dev/sda to get a list of partitions on /dev/sda in units of sectors, and then use fdisk /dev/sdb to create the same partitions on the sd card (assuming your sd card device is /dev/sdb, please update as necessary).
After recreating the partitions you can copy the contents of each partition one at a time.
If it is possible, can it be done while Manjaro is running?
Yes, but in that case you should mount the filesystems read only to avoid the risk of the operating system corrupting the copy should it write anything to the disk while you are copying.
Here are the commands to do just that and to copy the two partitions you listed in your question, assuming you have created the partitions as described above, and again assuming your sd card is /dev/sdb:
mount /dev/sda1 -oremount,ro
dd if=/dev/sda1 of=/dev/sdb1 bs=1048576
mount /dev/sda1 -oremount,rw
mount /dev/sda2 -oremount,ro
dd if=/dev/sda2 of=/dev/sdb2 bs=1048576
mount /dev/sda2 -oremount,rw
Possibly some software might not like that the root filesystem is temporarily mounted read-only; a reboot will fix that.
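After copying, it is worth verifying that source and copy are byte-identical before relying on the sd card. The check is just cmp on the two devices; the same idea can be rehearsed safely on scratch files (a sketch; on the real system the arguments would be /dev/sda1 and /dev/sdb1):

```shell
src=$(mktemp); dst=$(mktemp)
head -c 1048576 /dev/urandom > "$src"            # 1 MiB of stand-in partition data
dd if="$src" of="$dst" bs=1048576 status=none    # same dd invocation as above
cmp "$src" "$dst" && echo "copies are identical"
```

On the real devices, cmp /dev/sda1 /dev/sdb1 should be silent if the partitions were created with identical sizes.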
| Transferring OS from one medium to another |
1,498,149,937,000 |
I'm trying to copy the x86_64 parts of the CentOS 8 repository from http://mirror.centos.org/centos/8/ directly to Artifactory. I've successfully copied some functions (BaseOS, extras, etc) into the local file system, then using jfrog to upload them into Artifactory, but would prefer to copy directly from the CentOS web site into the Artifactory repository.
I've tried compiling httpfs and httpfs2 on the machine Artifactory runs on, with the idea that I could mount the CentOS 8 web site locally and use jfrog rt u to copy from the “local” (fuse file system) into Artifactory, but they failed to compile (if httpfs/httpfs2 is considered a very good way to do this I'll detail the compile errors).
I've used the jfrog rt command successfully (in spite of its strange references), so am comfortable with that, but am open to any method that works.
The target in Artifactory is a directory below the RPM repository defined in Artifactory, so that rules out a couple possibilities (making a remote Artifactory repository and copying from there to the subdirectory, copying into a new Artifactory repository).
One possibility I haven't tried is copying (curl, wget, etc) directly into some OS directory that Artifactory owns (and automatically indexes?), so if that's a possibility let me know.
|
Rclone is a wonderful way of mounting all sorts of things, including http read-only, so the CentOS 8 repo could be mounted on the Artifactory host, and jfrog could upload it from there.
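For example, rclone can mount an HTTP site on the fly through its :http: backend, no config file needed (a sketch; the mount point and the jfrog target path are placeholders):

```shell
mkdir -p /mnt/centos8
rclone mount :http: /mnt/centos8 \
    --http-url http://mirror.centos.org/centos/8/ --read-only &
# ...then upload from the mount as if it were local, e.g.:
# jfrog rt u "/mnt/centos8/BaseOS/x86_64/os/Packages/*" rpm-local/centos/8/BaseOS/x86_64/os/Packages/
fusermount -u /mnt/centos8    # when finished
```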
| How does one copy parts of a repository to Artifactory? |
1,498,149,937,000 |
Trying to transfer a file from my raspberryPi to my laptop (in the same network) via the terminal, but it takes a while and then says
ssh: connect to host 192.168.1.121 port 22: Connection timed out
lost connection
The command in the terminal reads:
$ scp examplefile.txt [email protected]:C:\Users\bruno\Documents
When I enter $ ping 192.168.1.121 I get PING 192.168.1.121 (192.168.1.121) 56(84) bytes of data. and the transfer in the other direction:
scp C:\Users\Documents\examplefile.txt openhabian@openhabianpi:home
works fine. What's the issue?
|
I'm guessing your laptop is running Windows 10. For the command you mentioned to work, you need to set up an SSH server on the laptop.
The point is, scp relies on an SSH server running on the remote host to copy files to and from it. In your case, the Raspberry Pi has an SSH server running on it, so you can always copy files to and from it with scp from another machine.
| Network file transfer not going through |
1,498,149,937,000 |
Hi, I have two Linux machines: Server A (Lubuntu) and Server B (Raspbian). What I want to do is have Server A check a specific directory (NFS-mounted on A) from Server B and, IF there are any files, transfer them to a location on Server A. Ideally, after the transfer, any transferred files on B are deleted. Note that I do not want to sync A and B, and I do not want any transfer from A--->B, but only from B--->A. Also, ideally, I do not want to transfer duplicate files but just delete them.
What is the best way to achieve this without too much scripting?
Since A is not online all the time, resuming file transfer is important.
Are there any existing software to do this or do I have to script everything myself.
Thanks
|
The basic flaw in this scheme is that there is no way for host B to know when it is safe to copy the files. What if A is writing a file to the NFS-shared dir, and host B picks up on that file before it is completely written? Host B will copy the partial file, and if that (partial) copy is successful, it will delete the entire file, including the part that it didn't copy, because Host A hadn't finished writing it yet.
That said, this seems like a fairly simple application for rsync. Read the man page, particularly the --partial option, perhaps the --checksum option, and especially the caveat in the section about the --remove-source-files option. The suggested workaround of using filenames such as *.new when host A writes files to be copied by host B, then renaming the .new files to whatever its name is supposed to be, may work for you. Renaming within a single filesystem is generally an atomic operation, so host B just has to be patient and ignore any *.new files. Once they get renamed, host B will transfer them the next time the cron job runs.
| Automatic file transfer between two linux machines |
1,498,149,937,000 |
I need to transfer 80 GB of unsorted text documents from one computer to another, and all I have is a 32 GB USB stick. Is there an option to make rsync automatically pause when the USB is full without losing its place?
Manually watching and pausing it is not an option.
|
I don't think you can do that just like this.
You will have to be more creative and e.g. split manually. Note that a nominal "32 GB" stick holds only about 29.8 GiB, so the chunks below are 29 GiB (bs=1M count=29k). Also note that this only works if repeated runs of tar | gzip produce byte-identical output, so nothing in the source tree may change between runs.
Fill the USB stick with the first 29 GiB:
tar czf - / | dd of=/usbstick/bla bs=1M count=29k iflag=fullblock
Write that first chunk of the resulting tar to the destination:
dd if=/usbstick/bla of=/tarfile bs=1M count=29k
Fill the USB stick with the next 29 GiB (skip discards the part already transferred):
tar czf - / | dd of=/usbstick/bla bs=1M count=29k skip=29k iflag=fullblock
Write the second chunk to the destination at the right offset:
dd if=/usbstick/bla of=/tarfile bs=1M count=29k seek=29k
Do this (incrementing skip and seek by 29k each time) until the tar file on the destination is complete.
Finally, extract the tar:
tar -xzf /tarfile -C /destination
Man, I'd just connect a crossover cable.
| Automatically pauseing rsync if target is full |
1,498,149,937,000 |
I've found the following command pretty convenient for transferring files to servers I have ssh access to:
cat myfile.txt | ssh my.host.edu "cat > myfile.txt"
Is there a similarly convenient command for bringing files to my local computer?
If it would make it any easier, I don't mind installing command line utilities from the standard Ubuntu repos.
|
Try this one:
ssh my.host.edu "cat myfile.txt" > myfile.txt
But if you want to do file transfer over ssh, use sftp, which is a tool dedicated to this.
| Transfering files using ssh |
1,498,149,937,000 |
I have a command-line program that continuously generates output on the shell.
I would like to be able to transfer the output to another unix host for which I know IP, username and password.
Since the program does not terminate, I would also like to continuously update the file, without removing the previous output.
Is there a way to do it from command line?
|
I think an ideal method would involve sending it to the remote server via syslog. This simple ssh command should also work:
somecommand | ssh somehost 'cat - >> file.log'
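The >> is what keeps repeated runs from clobbering earlier output: the remote cat appends. The remote leg can be simulated locally to see the semantics (a sketch; file name and messages are made up):

```shell
log=$(mktemp)
# Each pipeline below plays the role of: somecommand | ssh somehost "cat - >> file.log"
printf 'first run\n'  | sh -c "cat - >> $log"
printf 'second run\n' | sh -c "cat - >> $log"
cat "$log"   # both lines survive: appended, not overwritten
```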
| transfer file to remote host and append to file if existing |
1,498,149,937,000 |
I have a Solaris 11 system which has several NFS exports which are accessed by other systems within my LAN. I use a Linux system as a client for testing.
I wrote a quick script to test read speed and I average around 110Mbps (or 13MB/s) on a Gigabit LAN, and I would have thought it could get much faster. SSH (scp) only gives me 3.8MB/s, but that's with encryption.
HTTP gives me 11.5MB/s, similar to NFS then. Isn't this low?
What could be the bottleneck, given these numbers?
|
NFS can't really maximise the throughput, because the client keeps sending "please send me this much data" requests to the server (this much being limited to a few kilobytes) and waits for the full answer before asking for more, which means dead times when all queues are empty. All filesystems over the network (CIFS, SSHFS) have the same kind of issue (and IIRC scp as well, or maybe it's only sftp, I can't remember).
Beside the encryption overhead, ssh also has some more performance limitations (see here for details).
HTTP, unless you use a client that performs chunked requests, should be straight TCP, so shouldn't have this kind of limitation. TCP should use its congestion control algorithm to maximize the throughput while avoiding congestion. While the first few kilobytes may be transferred slowly, you should be able to maximize your bandwidth within a few tenths of a second if the two machines are connected via the same switch. It's possible that there is some poor network quality (like the odd packet loss).
Things you may want to try:
Transfer the content of /dev/zero over a plain TCP connection (using socat or nc for instance), to rule out the bottleneck being FS access.
Check your network interface statistics for transmission errors, and TCP stack statistics (netstat -s)
Test with iperf (both TCP and UDP).
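Concretely, those three tests could look like this (a sketch; host name, port and interface are placeholders, and iperf3 is assumed for the last one):

```shell
# 1. Raw TCP throughput, no filesystem involved:
#    server:  socat -u TCP-LISTEN:5001,reuseaddr /dev/null
#    client:  socat -u /dev/zero TCP:server:5001

# 2. Per-interface error/drop counters (safe to run anywhere):
ip -s link
#    plus TCP stack counters:  netstat -s

# 3. iperf3, TCP then UDP:
#    server:  iperf3 -s
#    client:  iperf3 -c server && iperf3 -c server -u -b 900M
```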
| targeted network speed over LAN |
1,498,149,937,000 |
I wish to get a file /export/home/remoteuser/stopforce.sh from remotehost7 to localhost /tmp directory.
I fire the below command to establish that the file exists on the remote host:
[localuser@localhost ~]$ ssh remoteuser@remotehost7 ' ls -ltr /export/home/remoteuser/stopforce.sh'
This system is for the use by authorized users only. All data contained
on all systems is owned by the company and may be monitored, intercepted,
recorded, read, copied, or captured in any manner and disclosed in any
manner, by authorized company personnel. Users (authorized or unauthorized)
have no explicit or implicit expectation of privacy. Unauthorized or improper
use of this system may result in administrative, disciplinary action, civil
and criminal penalties. Use of this system by any user, authorized or
unauthorized, constitutes express consent to this monitoring, interception,
recording, reading, copying, or capturing and disclosure.
IF YOU DO NOT CONSENT, LOG OFF NOW.
##################################################################
# *** This Server is using Centrify *** #
# *** Remember to use your Active Directory account *** #
# *** password when logging in *** #
##################################################################
lrwxrwxrwx 1 remoteuser oinstall 65 Aug 30 2015 /export/home/remoteuser/stopforce.sh -> /u/marsh/external_products/apache-james-3.0/bin/stopforce.sh
From the above we are sure that the file exists on the remote host, although it is a symlink.
I now try to get the actual file using rsync but it gives error.
[localuser@localhost ~]$ /bin/rsync --delay-updates -F --compress --copy-links --archive remoteuser@remotehost7:/export/home/remoteuser/stopforce.sh /tmp/
This system is for the use by authorized users only. All data contained
on all systems is owned by the company and may be monitored, intercepted,
recorded, read, copied, or captured in any manner and disclosed in any
manner, by authorized company personnel. Users (authorized or unauthorized)
have no explicit or implicit expectation of privacy. Unauthorized or improper
use of this system may result in administrative, disciplinary action, civil
and criminal penalties. Use of this system by any user, authorized or
unauthorized, constitutes express consent to this monitoring, interception,
recording, reading, copying, or capturing and disclosure.
IF YOU DO NOT CONSENT, LOG OFF NOW.
##################################################################
# *** This Server is using Centrify *** #
# *** Remember to use your Active Directory account *** #
# *** password when logging in *** #
##################################################################
rsync: [sender] link_stat "/export/home/remoteuser/stopforce.sh" failed: No such file or directory (2)
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1651) [Receiver=3.1.2]
rsync: [Receiver] write error: Broken pipe (32)
The localhost is linux while the remotehost7 is solaris.
Can you please suggest why I get this error and what the fix to the problem is?
|
You are using the --copy-links option. This is documented with the text
When symlinks are encountered, the item that they point to (the
referent) is copied, rather than the symlink. [...]
If your symbolic link does not point to a file that exists, then the --copy-links option would make rsync complain that it can't find the file. If --copy-links was not used, the symbolic link itself would be copied.
The fix to this problem depends on what it is you want to achieve. Either make sure that the file referenced by the symbolic link exists, or do not use the --copy-links option.
Personally, if I was using --archive I would probably be trying to make an as true copy as possible of the file hierarchy or file, in which case I would not use --copy-links (to be able to preserve symbolic links).
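The difference is easy to reproduce locally with a dangling symlink (a sketch using throwaway directories):

```shell
src=$(mktemp -d); dst=$(mktemp -d); dst2=$(mktemp -d)
ln -s /nonexistent/target "$src/stopforce.sh"   # dangling link, like on the remote host

rsync -a "$src/" "$dst/"                        # fine: copies the symlink itself
rsync -a --copy-links "$src/" "$dst2/" \
  || echo "fails, as in the question: the referent does not exist"
```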
| rsync not working even when the destination file exists |
1,498,149,937,000 |
Suppose, I have some file with path %p which I need to gzip on the fly and send to a remote server and I'm not allowed to use rsync and similar mirroring tools.
I do the following:
gzip -c -9 %p | ssh user@server "cat > backupPath"
In the normal case it works fine, but I'm wondering what happens when the connection to the remote server fails while the file is being sent, because I'd like to be sure the file is fully sent and saved.
Will just the part of the file be written to "backupPath" or will it follow the "all or nothing" strategy - i.e. error happens, file with "backupPath" address is not created on the remote host (which is more suitable for me)?
|
It looks like renaming the file after the download completes is the option, since renaming is an atomic operation. As far as I understand, checking the gzip integrity doesn't guarantee the file was downloaded completely: there is a very small probability that a truncated file is also a valid gzip file. But anyway, I like drewbenn's idea of checking the integrity for additional safety.
So, thanks to icarus' and drewbenn's answers, I've come to the final decision:
gzip -c -9 %p | ssh user@server 'set -e; cat > /var/tmp/file.txt.part; gzip -t /var/tmp/file.txt.part; sync /var/tmp/file.txt.part; mv /var/tmp/file.txt.part /var/tmp/file.txt'
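The pattern can be rehearsed end-to-end locally, with plain sh standing in for the ssh leg (file names are made up):

```shell
printf 'important data\n' > /tmp/source.txt
gzip -c -9 /tmp/source.txt | sh -c '
  set -e
  cat > /tmp/file.txt.gz.part
  gzip -t /tmp/file.txt.gz.part      # refuse to promote a truncated download
  sync /tmp/file.txt.gz.part
  mv /tmp/file.txt.gz.part /tmp/file.txt.gz
'
gzip -dc /tmp/file.txt.gz            # round-trips cleanly
```

If the connection (here: the pipe) dies mid-transfer, set -e stops at the failing gzip -t and only the .part file is left behind; anything that looks only for file.txt.gz never sees a partial file.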
| What happens on ssh file transfer when disconnects? |
1,473,877,642,000 |
I am trying to automate a simple process, but I'm new and stuck. I have a number of bash scripts that, when run, zip and move files to specific directories on the Linux box. I want to create a bash script that will transfer said files to a specific disk of a Windows box on the same network.
IE :
From Linux Box : [email protected]
To Windows Box : [email protected]
I've seen several commands that can do this. I would like this done by a bash script so that I can implement some simple controls on what was moved successfully and so on. Which of these tools should I use?
ssh / sftp / scp
Or should I prefer some solution like WinSCP?
|
I guess you need an SSH server running on your Windows box in order to do it this way. AFAIK WinSCP is only a client, which means that your script would have to run on your Windows box and copy the files from your Linux box.
I would use something like Bitvise SSH Server, exchange ssh keys between the Windows and Linux boxes, and run the script on Linux (with scp) as you planned.
| Copy files from Linux Server to Windows - bash script |
1,473,877,642,000 |
I was in the middle of a file-transfer about an hour ago and, when I came back after a while, the PC had crashed, so I'd like to see if all the files were transferred successfully.
I'm using Arch + KDE.
EDIT: I just used plain old dolphin with Ctrl+X and Ctrl+V. I have a TrueNAS set up, which runs ZFS RAIDZ2, so I just enabled NFS shares and mounted it on my desktop PC.
|
The only way to see which processes were running at the time of the crash would be if you had kdump or some other crash-dump mechanism set up ahead of time, and the dump was actually performed successfully. Then you could use the ps command of the crash utility to get the list of processes at the time the crash happened.
A mounted NFS filesystem is supposed to be a very close equivalent to a local filesystem. But you apparently were doing a move operation from one filesystem to another, which is always going to be implemented as a copy+delete, and any sane implementation will delete the original only after the copy has been successfully completed.
So, if the transferred files are no longer present in their original location, you can be sure that the files were successfully transferred; if the copying part of the operation were interrupted, one or more of the files would still be present at the original location.
| Is it possible to see which processes were running when the PC crashed last time? |
1,473,877,642,000 |
So I made a simple Bash script that can use your keyboard LEDs (numlock and capslock) to transmit data (inspired by LTT from their "Do NOT Plug This USB In! – Hak5 Rubber Ducky" video). This is the script I have:
#!/bin/bash
for i in `cat /dev/stdin | perl -lpe '$_=unpack"B*"' | sed 's/./\ &/g'`
do export E=`expr $E + 1`
echo "Bit number $E has a value of $i"
if (( $i == 0 ))
then
xdotool key Caps_Lock
sleep 0.1
xdotool key Caps_Lock
else
xdotool key Num_Lock
sleep 0.1
xdotool key Num_Lock
fi
done
It does something slightly different, which is to keep the keyboard LEDs common low instead of common high (meaning the LEDs are off longer compared to Linus' video). However, it has a flaw: it waits for an EOF from stdin, which isn't what I'd like. I'd like it to act like minimodem, reading data while it's being written to stdin (well, after a newline, at least). Is there a way I can do this without:
changing programming languages, and
without breaking the entire script?
Thank you in advance.
|
There are many ways to do this. Since you're already using perl for part of the job, probably the easiest method is to do the entire thing in perl, with the Term::Readkey module. For example:
#!/usr/bin/perl -l
use Term::ReadKey;
# trap INT so we can reset the terminal on ^C
$SIG{INT} = sub { exit };
ReadMode 3;
while ($_ = ReadKey 0) {
last if m/\cD/;
@bits = split //, unpack "B*";
for my $i (0..$#bits) {
print "Bit number $i has a value of $bits[$i]";
if ($bits[$i] == 0) {
system("xdotool key Caps_Lock; sleep 0.1; xdotool key Caps_Lock");
} else {
system("xdotool key Num_Lock; sleep 0.1; xdotool key Num_Lock");
};
};
};
END {
ReadMode 0;
};
Alternatively, if you don't want to install a module from CPAN, you can use stty as described in perldoc -f getc to make getc read a single character at a time. Or use the setattr() function from the POSIX module (which is included with perl) instead of running stty.
However, since you want to do it in bash (and cat & perl & sed), you could try something based on this:
First, realise that you never need to use cat to pipe data into a program which can already read from stdin (as perl and sed and almost everything else can do).
Then realise that whenever you're piping perl's output into sed, you're probably doing it wrong and can do whatever you're doing in sed in the perl script instead. perl also has a s/// operator, just like sed.
Remember that bash has arrays, and you can read the output of a program into an array with the bash mapfile built-in and process substitution.
and
bash can read a single character at a time with read -n 1.
#!/bin/bash
while read -n 1 char ; do
case "$char" in
$'\004') break ;; # Ctrl-D
esac
# This uses perl to print each bit separated by a newline. we could do it with s/// in perl,
# but here i'm using split and join. the output from perl is read into bash array $bits.
mapfile -t bits < <(printf '%s' "$char" | perl -lne 'print join("\n", split //, unpack"B*")')
# that expr stuff is incredibly ugly. and decades obsolete for shell arithmetic.
# i'm going to use let instead because I also find (( i=i+1 )) to be incredibly ugly.
count=0
for i in "${bits[@]}" ; do
let count+=1
echo "Bit number $count has a value of $i"
if [ "$i" -eq 0 ] ; then
xdotool key Caps_Lock
sleep 0.1
xdotool key Caps_Lock
else
xdotool key Num_Lock
sleep 0.1
xdotool key Num_Lock
fi
done
done
| How to make Bash not wait for EOF? |
1,473,877,642,000 |
I have SUSE Linux Enterprise Server (SP 3). I am trying to install the MZ 510 driver by copying the files from my PC to a directory on the server, but I get "permission denied". Which command should I use to get read and write permission so I can upload my files?
|
In comments you say you're trying to put the files in "the root directory". It's unclear whether this is / or /root, but in any case, writing to either of these two directories requires root permissions, and I suspect that you don't log in as root with WinSCP (you don't say).
You really only need to offload the files on your SuSE system somewhere. Later, if the files needs to be in some specific place, you may log in on the machine and, as root, move the files to wherever you need them to be.
Two suggestions for where to put the files:
In the /tmp directory. All users have the ability to create files there.
In the home directory of the user that you use to connect with winscp as. The user trivially has write permissions in their own home directory.
As for how to do this, I don't know, as I've never used winscp and don't know how it works.
With ordinary OpenSSH scp, you would do
scp the files username@hostname:
where the files are the names of the files that you need to transfer. This would put the files in the home directory of the username user on the host hostname. Using ...hostname:/tmp at the end would put the files in the /tmp directory.
| Copying some files from mypc to a host using winscp |
1,473,877,642,000 |
I am trying to ssh to another machine in my lab using the IP address. The IP address is 137.84.4.211 and let's say the host name is MyName. My IP address is 137.82.81.10. I don't know whether this means the two computers are on the same local network or not.
I tried
$ ssh -vv [email protected]
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 21: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to 137.84.4.211 [137.84.4.211] port 22.
debug1: connect to address 137.84.4.211 port 22: Operation timed out
ssh: connect to host 137.84.4.211 port 22: Operation timed out
Of course, it took about a minute before returning Operation timed out.
I also did
$ sudo tcptraceroute 137.84.4.211 22
Selected device en0, address 137.82.81.10, port 55360 for outgoing packets
Tracing the path to 137.84.4.211 on TCP port 22 (ssh), 30 hops max
1 137.82.81.253 0.488 ms 0.370 ms 0.357 ms
2 a0-a1.net.ubc.ca (142.103.78.250) 0.648 ms 0.660 ms 0.683 ms
3 anguborder-a0.net.ubc.ca (137.82.123.137) 132.591 ms 1.959 ms 1.525 ms
4 343-oran-cr1-ubcab.vncv1.bc.net (134.87.2.234) 0.549 ms 0.639 ms 0.625 ms
5 cr1-bb3900.vantx1.bc.net (206.12.0.33) 0.747 ms 0.547 ms 0.620 ms
6 vncv1rtr1.canarie.ca (205.189.32.172) 0.700 ms 1.426 ms 1.126 ms
7 abilene-1-lo-jmb-706.sttlwa.pacificwave.net (207.231.240.8) 3.955 ms 4.010 ms 4.080 ms
8 198.71.46.246 29.355 ms 29.405 ms 29.261 ms
9 159.238.0.10 29.510 ms 29.404 ms 29.613 ms
10 159.238.0.9 32.455 ms 36.140 ms 32.574 ms
11 * * *
12 * * *
13 * * *
14 * * *
15 * * *
16 * * *
17 * * *
18 * * *
19 * * *
20 * * *
21 * * *
22 * * *
23 * * *
24 * * *
25 * * *
26 * * *
27 * * *
28 * * *
29 * * *
30 * * *
Destination not reached
As both computers are in Vancouver, BC (less than 50 meters apart), I am a little surprised to see that the route goes through nodes in Ontario (205.189.32.172) and Wyoming (159.238.0.9).
I made sure the firewall is off and all connections are allowed. Both computers run Mac OS X El Capitan.
|
The IP-addresses 137.84.4.211 and 137.82.81.10 are not on the same network. Most likely, it should be either 84 or 82 in both of the addresses.
| `ssh` fails to reach destination |
1,473,877,642,000 |
I have a Windows Server 2008 R2 machine and an AIX 6.1 server. Now I would like to map 2 AIX folders to Windows for an application to access - The application on Windows is IBM Connect:Direct and needs to permanently transfer files to and from the Directories on the Unix server. On windows-to-windows this is easy, you just specify in the UNC paths in the Connect:Direct config files eg: \\192.168.30.30\d$\BARC\Input\
STEP1 COPY
FROM (
FILE="&F"
)
TO (
FILE="\\192.168.30.30\d$\BARC\Input\&DF"
DISP=RPL
)
How is this done?
|
Install Samba. It provides Windows-compatible SMB/CIFS file and print sharing.
| How to map Unix Directories to Windows Server |
1,473,877,642,000 |
I have been messing around with BBSes lately and want to download some things off them, but I can't figure out how on Ubuntu Server 18.04. I have tried quite a few things. I know that these are modem downloads, so I tried getting Irzsz with the command
sudo apt-get install - y Irzsz
and it won't work when I go to download things off of the BBS server. The message I get back is "failed to download 123.zip". The install and download of Irzsz worked fine; did I forget to configure something for Irzsz to work?
The downloads are Xmodem and Ymodem. What am I doing wrong?
|
In order to use modem-style protocols, you need a communications program that can run an external rx/rb/sx/sb utility from the lrzsz package and temporarily pass the communication stream to it until the utility exits, or has the equivalent functionality built-in. The ordinary telnet command can't do that, and so it's unsuitable for downloading files from BBSes.
C-Kermit (package name ckermit in Ubuntu) is a communications program that supports both the Telnet network protocol and Xmodem file transfers.
For Xmodem and Ymodem file transfers, you typically have to first give the BBS a command to prepare for the file transfer, and then another command to your local communications program to actually transfer the file (before the BBS's file transfer function times out). The Zmodem protocol includes a feature that allows communications programs that support it to auto-detect the beginning of a Zmodem file transfer, and so it would be easier to use.
Also note that the package that contains the stand-alone Xmodem/Ymodem/Zmodem tools is not "Irzsz" but lrzsz (lower-case L instead of upper-case I).
This old list of Linux telnet clients for BBS access might be useful to you.
| BBS downloads on Ubuntu Server [closed] |
1,473,877,642,000 |
I have a seriously big problem: when I download any file (for example myfile.zip) or load a page (for example myserver.com/welcome.html), the response headers do not include Content-Length.
In other words, the download shows neither the size of the file nor the progress, because my website does not send Content-Length for any file.
The same problem occurs with a normal page.
My .htaccess code:
Header set Content-Length %{HTTP:Content-Length}
ErrorDocument 404 /error/NotFound(.html)
ErrorDocument 403 /error/Forbidden(.html)
|
Missing Content-Length headers are almost always caused by on-the-fly compression: when Apache's mod_deflate (or a front-end proxy at your hosting provider) compresses the response, it streams it with Transfer-Encoding: chunked and omits Content-Length. Your line
Header set Content-Length %{HTTP:Content-Length}
does not help: %{HTTP:Content-Length} reads the request header, which is empty for a download, and Apache sets the response header itself whenever it knows the final size. Instead, disable compression for the files you serve:
RewriteRule ^(.*)$ $1 [NS,E=no-gzip:1,E=dont-vary:1]
SetEnv no-gzip 1
If the compression or buffering happens in a front-end proxy run by your hosting provider, you may need to ask them to disable it for your site.
| How to enable the Content-Length header using .htaccess? |
1,473,877,642,000 |
I have a script that exports logs from log management server and send those exports to the archiver server.
When I run this script manually, it completes its task without any problem: it downloads the exports and sends the files to my other server. The thing is, when I write a cronjob to automate this workflow, it just downloads the files from the log management server but it can't send the files to my archiver server.
The script is as follow:
#!/bin/bash
/opt/splunk/bin/splunk search "(sourcetype=*) earliest=-15m" -output rawdata -maxout 0 > /opt/access_archive/archive_ALL_EXPORTS.dmp
cdate=$(date +"%Y%m%d_%H%M%S")
shopt -s extglob
exported_file=archive_ALL_EXPORTS.dmp
mv "$exported_file" "${cdate}_$exported_file"
scp ${cdate}_$exported_file root@<IP_ADDRESS>:/root
As you can see in the script, I download the dmp file with the name archive_ALL_EXPORTS.dmp and put an exact date-time prefix on the name of this file. Then when I try to send this file with scp, it doesn't do anything.
Crontab is as follow:
* * * * * /usr/bin/bash /opt/access_archive/export.sh
I also tried:
* * * * * /opt/access_archive/export.sh
Also cron use bin/bash: (/etc/crontab)
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
EDIT:
I also tried with this script, in case there was a mistake because of escape characters, but the result is the same.
#!/bin/bash
/opt/splunk/bin/splunk search "(sourcetype=*) earliest=-15m" -output rawdata -maxout 0 > /opt/access_archive/archive_ALL_EXPORTS.dmp
scp archive_ALL_EXPORTS.dmp root@<IP_ADDRESS>:/root
|
There was no need for escape characters or anything else. I had forgotten to give the full path for the file, as in the command below; cron runs the script from a different working directory, so the relative path did not resolve. After that, it worked.
mv "/opt/access_archive/$exported_file" "/opt/access_archive/${cdate}_$exported_file"
| crontab and scp file transfer via script |
1,321,145,927,000 |
I'm looking for the simplest method to print the longest line in a file. I did some googling and surprisingly couldn't seem to find an answer. I frequently print the length of the longest line in a file, but I don't know how to actually print the longest line. Can anyone provide a solution to print the longest line in a file? Thanks in advance.
|
cat ./text | awk ' { if ( length > x ) { x = length; y = $0 } }END{ print y }'
UPD: summarizing all the advices in the comments
awk 'length > max_length { max_length = length; longest_line = $0 } END { print longest_line }' ./text
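As a quick sanity check (the sample text here is invented), the summarized one-liner can be exercised like this:

```shell
# Three lines of different lengths; the longest one wins.
printf 'short\nthe longest line here\nmedium line\n' |
  awk 'length > max_length { max_length = length; longest_line = $0 } END { print longest_line }'
# -> the longest line here
```

Note that on ties it keeps the first longest line seen, since later lines only replace it when strictly longer.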
| How to print the longest line in a file? |
1,321,145,927,000 |
I have a command that produces output in color, and I would like to pipe it into a file with the color codes stripped out. Is there a command that works like cat except that it strips color codes? I plan to do something like this:
$ command-that-produces-colored-output | stripcolorcodes > outfile
|
You'd think there'd be a utility for that, but I couldn't find it. However, this Perl one-liner should do the trick:
perl -pe 's/\e\[?.*?[\@-~]//g'
Example:
$ command-that-produces-colored-output | perl -pe 's/\e\[?.*?[\@-~]//g' > outfile
Or, if you want a script you can save as stripcolorcodes:
#! /usr/bin/perl
use strict;
use warnings;
while (<>) {
s/\e\[?.*?[\@-~]//g; # Strip ANSI escape codes
print;
}
If you want to strip only color codes, and leave any other ANSI codes (like cursor movement) alone, use
s/\e\[[\d;]*m//g;
instead of the substitution I used above (which removes all ANSI escape codes).
| Program that passes STDIN to STDOUT with color codes stripped? |
1,321,145,927,000 |
How can I ask ps to display only user processes and not kernel threads?
See this question to see what I mean...
|
This should do (under Linux):
ps --ppid 2 -p 2 --deselect
kthreadd (PID 2) has PPID 0 (on Linux 2.6+) but ps does not allow to filter for PPID 0; thus this work-around.
See also this equivalent answer.
| Can ps display only non kernel processes on Linux? |
1,321,145,927,000 |
Let's say we have a text file of forbidden lines forbidden.txt. What is a short way to filter all lines of a command output that exist in the text file?
cat input.txt | exclude-forbidden-lines forbidden.txt | sort
|
Use grep like this:
$ grep -v -x -F -f forbidden.txt input.txt
That long list of options to grep means
-v Invert the sense of the match, i.e. look for lines not matching.
-x When matching a pattern, require that the pattern matches the whole line, i.e. not just anywhere on the line.
-F When matching a pattern, treat it as a fixed string, i.e. not as a regular expression.
-f Read patterns from the given file (forbidden.txt).
Then pipe that to sort or whatever you want to do with it.
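A quick self-contained demonstration (the file names and contents are made up):

```shell
# Two files: the input and the forbidden lines to drop.
printf 'keep me\ndrop me\nkeep too\n' > input.txt
printf 'drop me\n' > forbidden.txt
grep -v -x -F -f forbidden.txt input.txt | sort
# -> keep me
# -> keep too
```

Without -x, a forbidden line such as "drop" would also remove "drop me"; with it, only whole-line matches are excluded.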
| How to filter out lines of a command output that occur in a text file? |
1,321,145,927,000 |
I'm using less to view log files quite a lot and every so often I'd like to filter the output by hiding lines which contains some keywords.
In less it's possible to filter-out lines with &!<keyword> but that only works for one keyword at a time.
I'd like to specify a list of keywords to filter-out. Is this at all possible?
|
You can use a regular expression:
&!cat|dog|fish
| Hide lines based on multiple patterns |
1,321,145,927,000 |
I have a directory in which lots of files (around 200) with the name temp_log.$$ are created, along with several other important files which I need to check.
How can I easily list out all the files and exclude the temp_log.$$ files from getting displayed?
Expected output
$ ls -lrt <exclude-filename-part>
-- Lists files not matching the above given string
I have gone through ls man page but couldn't find anything in this reference. Please let me know if I have missed any vital information here.
Thanks
|
With GNU ls (the version on non-embedded Linux and Cygwin, sometimes also found elsewhere), you can exclude some files when listing a directory.
ls -I 'temp_log.*' -lrt
(note the long form of -I is --ignore='temp_log.*')
With zsh, you can let the shell do the filtering. Pass -d to ls so as to avoid listing the contents of matched directories.
setopt extended_glob # put this in your .zshrc
ls -dltr ^temp_log.*
With ksh, bash or zsh, you can use the ksh filtering syntax. In zsh, run setopt ksh_glob first. In bash, run shopt -s extglob first.
ls -dltr !(temp_log.*)
| List files not matching given string in filename |
1,321,145,927,000 |
I have a .CSV file with the below format:
"column 1","column 2","column 3","column 4","column 5","column 6","column 7","column 8","column 9","column 10
"12310","42324564756","a simple string with a , comma","string with or, without commas","string 1","USD","12","70%","08/01/2013",""
"23455","12312255564","string, with, multiple, commas","string with or, without commas","string 2","USD","433","70%","07/15/2013",""
"23525","74535243123","string , with commas, and - hypens and: semicolans","string with or, without commas","string 1","CAND","744","70%","05/06/2013",""
"46476","15467534544","lengthy string, with commas, multiple: colans","string with or, without commas","string 2","CAND","388","70%","09/21/2013",""
5th column of the file has different strings. I need to filter out the file based on the 5th column value. Lets say, I need a new file from the current file which has records only with the value "string 1" in its fifth field.
For this I tried the below command,
awk -F"," ' { if toupper($5) == "STRING 1") PRINT }' file1.csv > file2.csv
but it was throwing me an error as following:
awk: { if toupper($5) == "STRING 1") PRINT }
awk: ^ syntax error
awk: { if toupper($5) == "STRING 1") PRINT }
awk: ^ syntax error
I then used the following which gives me an odd output.
awk -F"," '$5="string 1" {print}' file1.csv > file2.csv
Output:
"column 1" "column 2" "column 3" "column 4" string 1 "column 6" "column 7" "column 8" "column 9" "column 10
"12310" "42324564756" "a simple string with a comma" string 1 without commas" "string 1" "USD" "12" "70%" "08/01/2013" ""
"23455" "12312255564" "string with string 1 commas" "string with or without commas" "string 2" "USD" "433" "70%" "07/15/2013" ""
"23525" "74535243123" "string with commas string 1 "string with or without commas" "string 1" "CAND" "744" "70%" "05/06/2013" ""
"46476" "15467534544" "lengthy string with commas string 1 "string with or without commas" "string 2" "CAND" "388" "70%" "09/21/2013" ""
P.S: I used the toupper command to be on the safe side, as I am not sure if the string will be in lower or higher case. I need to know what is wrong with my code and if the space in the string matters while searching for a pattern using AWK.
|
awk -F '","' 'BEGIN {OFS=","} { if (toupper($5) == "STRING 1") print }' file1.csv > file2.csv
Output
"12310","42324564756","a simple string with a , comma","string with or, without commas","string 1","USD","12","70%","08/01/2013",""
"23525","74535243123","string , with commas, and - hypens and: semicolans","string with or, without commas","string 1","CAND","744","70%","05/06/2013",""
The trick is using the three-character string '","' as the field separator, so the commas embedded inside the quoted fields no longer split them; only the quote-comma-quote sequences between fields do. I think this is what you want.
| Filter a .CSV file based on the 5th column values of a file and print those records into a new file |
1,321,145,927,000 |
Given a file L with one non-negative integer per line and text file F, what would be a fast way to keep only those lines in F, whose line number appears in file L?
Example:
$ cat L.txt
1
3
$ cat F.txt
Hello World
Hallo Welt
Hola mundo
$ command-in-question -x L.txt F.txt
Hello World
Hola mundo
I'm looking for a command that can handle a file L with 500 million or more entries; file L is sorted numerically.
Note: I'm halfway through an implementation for a command-in-question but I just wondered, whether one might be able to use some Unix tools here as well.
Update: Thank for all the answers, I learned a lot today! I would like to accept more than one answer, but that's not possible.
I took the fastest solution from the current answers an put them into a standalone tool: filterline.
|
grep -n | sort | sed | cut
( export LC_ALL=C
grep -n '' | sort -t: -nmk1,1 ./L - |
sed /:/d\;n | cut -sd: -f2-
) <./F
That should work pretty quickly (some timed tests are included below) with input of any size. Some notes on how:
export LC_ALL=C
Because the point of the following operation is to get the entire file of ./F stacked up inline with its ./L lineno's file, the only characters we'll really need to worry about are ASCII [0-9]digits and the :colon.
For that reason it is more simple to worry about finding those 11 characters in a set of 128 possibles than it is if UTF-8 is otherwise involved.
grep -n ''
This inserts the string LINENO: into the head of every line in stdin - or <./F.
sort -t: -nmk1,1 ./L -
sort neglects to sort its input files at all, and instead (correctly) presumes they are presorted and -merges them in -numerically sorted order, ignoring basically anything beyond any possible -k1,1st occurring -t:colon character anyway.
While this may require some temp space to do (depending on how far apart some sequences may occur), it will not require much as compared to a proper sort, and it will be very fast because it involves zero backtracking.
sort will output a single stream where any lineno's in ./L will immediately precede the corresponding lines in ./F. ./L's lines always come first because they are shorter.
sed /:/d\;n
If the current line matches a /:/colon delete it from output. Else, auto-print the current and next line.
And so sed prunes sort's output to only sequential line pairs which do not match a colon and the following line - or, to only a line from ./L and then the next.
cut -sd: -f2-
cut -suppresses from output those of its input lines which do not contain at least one of its -d:elimiter strings - and so ./L's lines are pruned completely.
For those lines which do, their first : colon-delimited -field is cut away - and so goes all of grep's inserted lineno's.
small input test
seq 5 | sed -ne'2,3!w /tmp/L
s/.*/a-z &\& 0-9/p' >/tmp/F
...generates 5 lines of sample input. Then...
( export LC_ALL=C; </tmp/F \
grep -n '' | sort -t: -nmk1,1 ./L - |
sed /:/d\;n | cut -sd: -f2-
)| head - /tmp[FL]
...prints...
==> standard input <==
a-z 1& 0-9
a-z 4& 0-9
a-z 5& 0-9
==> /tmp/F <==
a-z 1& 0-9
a-z 2& 0-9
a-z 3& 0-9
a-z 4& 0-9
a-z 5& 0-9
==> /tmp/L <==
1
4
5
bigger timed tests
I created a couple of pretty large files:
seq 5000000 | tee /tmp/F |
sort -R | head -n1500000 |
sort -n >/tmp/L
...which put 5mil lines in /tmp/F and 1.5mil randomly selected lines of that into /tmp/L. I then did:
time \
( export LC_ALL=C
grep -n '' | sort -t: -nmk1,1 ./L - |
sed /:/d\;n | cut -sd: -f2-
) <./F | wc -l
It printed:
1500000
grep -n '' \
0.82s user 0.05s system 73% cpu 1.185 total
sort -t: -nmk1,1 /tmp/L - \
0.92s user 0.11s system 86% cpu 1.185 total
sed /:/d\;n \
1.02s user 0.14s system 98% cpu 1.185 total
cut -sd: -f2- \
0.79s user 0.17s system 80% cpu 1.184 total
wc -l \
0.05s user 0.07s system 10% cpu 1.183 total
(I added the backslashes there)
Among the solutions currently offered here, this is the fastest of all of them but one when pitted against the dataset generated above on my machine. Of the others only one came close to contending for second-place, and that is meuh's perl here.
This is by no means the original solution offered - it has dropped a third of its execution time thanks to advice/inspiration offered by others. See the post history for slower solutions (but why?).
Also, it is worth noting that some other answers might very well contend better if it were not for the multi-cpu architecture of my system and the concurrent execution of each of the processes in that pipeline. They all work at the same time - each on its own processor core - passing around the data and doing their small part of the whole. It's pretty cool.
but the fastest solution is...
But it is not the fastest solution. The fastest solution offered here, hands-down, is the C program. I called it cselect. After copying it to my X clipboard, I compiled it like:
xsel -bo | cc -xc - -o cselect
I then did:
time \
./cselect /tmp/L /tmp/F |
wc -l
...and the results were...
1500000
./cselect /tmp/L /tmp/F \
0.50s user 0.05s system 99% cpu 0.551 total
wc -l \
0.05s user 0.05s system 19% cpu 0.551 total
| Filter file by line number |
1,321,145,927,000 |
I have some text-files I use to take notes in - just plain text, usually just using cat >> file. Occasionally I use a blank line or two (just return - the new-line character) to specify a new subject/line of thought. At the end of each session, before closing the file with Ctrl+D, I typically add lots (5-10) blank lines (return-key) just to separate the sessions.
This is obviously not very clever, but it works for me for this purpose. I do however end up with lots and lots of unnecessary blank lines, so I'm looking for a way to remove (most of) the extra lines. Is there a Linux command (cut, paste, grep, ...?) that could be used directly with a few options? Alternatively, does anybody have an idea for a sed, awk or perl (well, any scripting language really, though I'd prefer sed or awk) script that would do what I want? Writing something in C++ (which I actually could do myself) just seems like overkill.
Case #1: What I need is a script/command that would remove more than two (3 or more) consecutive blank lines, and replace them with just two blank lines. Though it would be nice if it also could be tweaked to remove more than one line (2 or more) and/or replace multiple blank lines with just one blank line.
Case #2: I could also use a script/command that would remove a single blank line between two lines of text, but leave multiple blank lines as is (though removing one of the blank lines would also be acceptable).
|
Case 1:
awk '!NF {if (++n <= 2) print; next}; {n=0;print}'
Case 2:
awk '!NF {s = s $0 "\n"; n++; next}
{if (n>1) printf "%s", s; n=0; s=""; print}
END {if (n>1) printf "%s", s}'
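To see case 1 in action (sample input invented here), five consecutive blank lines are squeezed down to two:

```shell
printf 'session one\n\n\n\n\n\nsession two\n' |
  awk '!NF {if (++n <= 2) print; next}; {n=0;print}'
# prints "session one", two blank lines, then "session two" (4 lines total)
```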
| How to remove multiple blank lines from a file? |
1,321,145,927,000 |
I am running a utility that doesn't offer a way to filter its output. Nothing in the text of the output indicates that a particular function failed but it does show in red. The output is so long that at the end when it reports some # of errors I can't always scroll to see the output where the error occurred.
How can I filter out non-red text?
pseudo code:
dolongtask | grep -color red
Edit
The command outputs other colors as well and I need to be able to filter out all text that isn't red. Also the text coloring is multiline.
|
Switching the color is done through escape sequences embedded in the text. Invariably, programs issue ANSI escape sequences, because that's what virtually all terminals support nowadays.
The escape sequence to switch the foreground color to red is \e[31m, where \e designates an escape character (octal 033, hexadecimal 1b, also known as ESC, ^[ and various other designations). Numbers in the range 30–39 set the foreground color; other numbers set different attributes. \e[0m resets all attributes to their default value. Run cat -v to check what the program prints, it might use some variant such as \e[0;31m to first reset all attributes, or \e[3;31 to also switch italics on (which many terminals don't support).
In ksh, bash or zsh, you can use $'…' to enable backslash escapes inside the quotes, which lets you type $'\e' to get an escape character. Note that you will then have to double any backslash that you want to pass to grep. In /bin/sh, you can use "$(printf \\e)" or type a literal escape character.
With the GNU grep -o option, the following snippet filters red text, assuming that it starts with the escape sequence \e[31m, ends with either \e[0m or \e[30m on the same line, and contain no embedded escape sequence.
grep -Eo $'\e\\[31m[^\e]*\e\\[[03]?m'
The following awk snippet extracts red text, even when it's multiline.
awk -v RS='\033' '
match($0, /^\[[0-9;]*m/) {
color = ";" substr($0, 2, RLENGTH-2) ";";
$0 = substr($0, RLENGTH+1);
gsub(/(^|;)0*[^03;][0-9]*($|;)/, ";", color);
red = (color ~ /1;*$/)
}
red'
Here's a variation which retains the color-changing commands, which could be useful if you're filtering multiple colors (here red and magenta).
awk -v RS='\033' '
match($0, /^\[[0-9;]*m/) {
color = ";" substr($0, 2, RLENGTH-2) ";";
printf "\033%s", substr($0, 1, RLENGTH);
$0 = substr($0, RLENGTH+1);
gsub(/(^|;)0*[^03;][0-9]*($|;)/, ";", color);
desired = (color ~ /[15];*$/)
}
desired'
| Filter output of command by color |
1,321,145,927,000 |
I have a huge csv file with 10 fields separated by commas. Unfortunately, some lines are malformed and do not contain exactly 10 commas (which causes some problems when I want to read the file into R). How can I filter out only the lines that contain exactly 10 commas?
|
Another POSIX one:
awk -F , 'NF == 11' <file
If the line has 10 commas, then there will be 11 fields in this line. So we simply make awk use , as the field delimiter. If the number of fields is 11, the condition NF == 11 is true, awk then performs the default action print $0.
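A minimal illustration (the lines are invented); only the line with exactly 10 commas, i.e. 11 fields, survives:

```shell
printf 'a,b,c\n1,2,3,4,5,6,7,8,9,10,11\n' | awk -F , 'NF == 11'
# -> 1,2,3,4,5,6,7,8,9,10,11
```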
| Keep only the lines containing exact number of delimiters |
1,321,145,927,000 |
If I have a directory full of files and sub directories. What is the best way to list just the regular files which fall alphabetically before a given string?
Currently the best I can do using bash is the following:
for x in `find . -maxdepth 1 -type f | sort`
do
if [[ "$x" > './reference' ]]
then
break
fi
echo $x
done
I feel like there is a more concise way to do this, but I'm not sure what it is. Any ideas?
|
if you need all the files that sort before the given string
find . -maxdepth 1 -type f | sort | awk '$0 > "./reference" {exit} {print}'
if you need the first file that sorts after it
find . -maxdepth 1 -type f | sort | awk '$0 > "./reference" {print; exit}'
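The comparison can be checked without find at all, by piping a sorted, invented file list straight into awk. The question's bash loop keeps every name up to and including ./reference, which corresponds to exiting at the first name that sorts after the string:

```shell
printf './alpha\n./beta\n./reference\n./zulu\n' |
  awk '$0 > "./reference" {exit} {print}'
# -> ./alpha
# -> ./beta
# -> ./reference
```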
| Find files alphabetically before a given string |
1,321,145,927,000 |
Using jq, how can we select json elements from an array based on inclusion/exclusion of each element's key in some allowlist/blocklist?
I want to do a case-insensitive contains (so allowlist/blocklist case would not matter).
Here is what I tried (not implemented blocklist):
allowlist='["happy", "good"]'
blocklist='["sad", "bad"]'
jq --argjson allowlist "$allowlist" \
--argjson blocklist "$blocklist" \
'.[]
| select(.my_key | ascii_downcase
| contains($allowlist[]))' \
<<< '[{"my_key": "neutral"}, {"my_key": "neutral good"},
{"my_key": "neutral bad"}, {"my_key": "good"},
{"my_key": "bad"}, {"my_key": "happy sad bad"},
{"my_key": "neutral happy sad"}]'
Expected output:
{"my_key": "neutral good"}
{"my_key": "good"}
|
Using select, any and all, your filter comes down to
jq --argjson allowlist "$allowlist" \
--argjson blocklist "$blocklist" '.[] |
select( any ( .my_key ; contains( $allowlist[] ) ) ) |
select( all ( .my_key ; contains( $blocklist[] ) | not ) )'
Add ascii_downcase to the value of my_key in the above filter if you need the comparisons to be case-insensitive.
| How to select based on an allowlist/blocklist using jq |
1,321,145,927,000 |
I often need to read log files generated by java applications using log4j. Usually, a logged message (let's call it a log entry) spans over multiple lines. Example:
INFO 10:57:01.123 [Thread-1] [Logger1] This is a multi-line
text, two lines
DEBUG 10:57:01.234 [Thread-1] [Logger2] This entry takes 3 lines
line 2
line 3
Note that each log entry starts at a new line and the very first word from the line is TRACE, DEBUG, INFO or ERROR and at least one space.
Here, there are 2 log entry, the first at millisecond 123, the other at millisecond 234.
I would like a fast command (using a combination of sed/grep/awk/etc) to filter log entries (grep only filters lines), eg: remove all the log entries containing text 'Logger2'.
I considered doing the following transformations:
1) join lines belonging to the same log entries with a special sequence of chars (eg: ##); this way, all the log entries will take exactly one line
INFO 10:57:01.123 [Thread-1] [Logger1] This is a multi-line##text, two lines
DEBUG 10:57:01.234 [Thread-1] [Logger2] This entry takes 3 lines##line 2##line 3
2) grep
3) split the lines back (ie: replace ## with \n)
I had trouble at step 1; I do not have enough experience with sed.
Perhaps the 3 steps above are not required and sed can do all the work.
|
There is no need to mix several tools; the task can be done by sed alone:
sed '/^INFO\|^DEBUG\|^TRACE\|^ERROR/{
/Logger2/{
:1
N
/\nINFO\|\nDEBUG\|\nTRACE\|\nERROR/!s/\n//
$!t1
D }
}' log.entry
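Running the script over the sample log from the question (GNU sed assumed, since the \| alternation is a GNU extension) drops the whole three-line Logger2 entry:

```shell
printf '%s\n' \
  'INFO 10:57:01.123 [Thread-1] [Logger1] This is a multi-line' \
  'text, two lines' \
  'DEBUG 10:57:01.234 [Thread-1] [Logger2] This entry takes 3 lines' \
  'line 2' \
  'line 3' |
sed '/^INFO\|^DEBUG\|^TRACE\|^ERROR/{
  /Logger2/{
    :1
    N
    /\nINFO\|\nDEBUG\|\nTRACE\|\nERROR/!s/\n//
    $!t1
    D }
}'
# -> INFO 10:57:01.123 [Thread-1] [Logger1] This is a multi-line
# -> text, two lines
```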
| Filtering multi-lines from a log |
1,321,145,927,000 |
For a few days now, my web/mail server (CentOS 6.4) has been sending out spam mails by the bunch, and only stopping the postfix service puts an end to it.
SMTP is set up to only accept connections over SSL using username/password, and I already changed the password of the (suspected) compromised email account.
Email was set up via iRedMail.
Any help on identifying and stopping this is more than welcome!
ADDED:
Some logs excerpts:
Mar 23 05:01:52 MyServer postfix/smtp[9494]: 4E81026038: to=<[email protected]>, relay=mail.suddenlinkmail.com[208.180.40.132]:25, delay=3, delays=0.07/0/2.4/0.5, dsn=2.0.0, status=sent (250 Message received: [email protected])
Mar 23 05:02:01 MyServer postfix/smtp[9577]: 209BA26067: to=<[email protected]>, relay=127.0.0.1[127.0.0.1]:10024, delay=14, delays=12/0/0/2, dsn=2.0.0, status=sent (250 2.0.0 from MTA(smtp:[127.0.0.1]:10025): 250 2.0.0 Ok: queued as B654226078)
Mar 23 05:02:01 MyServer postfix/smtp[9495]: 8278726077: to=<[email protected]>, relay=mx-biz.mail.am0.yahoodns.net[98.139.171.245]:25, delay=0.88, delays=0.25/0/0.47/0.14, dsn=4.7.1, status=deferred (host mx-biz.mail.am0.yahoodns.net[98.139.171.245] said: 421 4.7.1 [TS03] All messages from [IPADDRESS] will be permanently deferred; Retrying will NOT succeed. See http://postmaster.yahoo.com/421-ts03.html (in reply to MAIL FROM command))
A mailheader of an undeliverable report:
Return-Path: <MAILER-DAEMON>
Delivered-To: [email protected]
Received: from localhost (icantinternet.org [127.0.0.1])
by icantinternet.org (Postfix) with ESMTP id 4669E25D9D
for <[email protected]>; Mon, 24 Mar 2014 14:20:15 +0100 (CET)
X-Virus-Scanned: amavisd-new at icantinternet.org
X-Spam-Flag: YES
X-Spam-Score: 9.501
X-Spam-Level: *********
X-Spam-Status: Yes, score=9.501 tagged_above=2 required=6.2
tests=[BAYES_99=3.5, BAYES_999=0.2, RAZOR2_CF_RANGE_51_100=0.5,
RAZOR2_CF_RANGE_E8_51_100=1.886, RAZOR2_CHECK=0.922, RDNS_NONE=0.793,
URIBL_BLACK=1.7] autolearn=no
Received: from icantinternet.org ([127.0.0.1])
by localhost (icantinternet.org [127.0.0.1]) (amavisd-new, port 10024)
with ESMTP id FOrkYnmugXGk for <[email protected]>;
Mon, 24 Mar 2014 14:20:13 +0100 (CET)
Received: from spamfilter2.webreus.nl (unknown [46.235.46.231])
by icantinternet.org (Postfix) with ESMTP id D15BA25D14
for <[email protected]>; Mon, 24 Mar 2014 14:20:12 +0100 (CET)
Received: from spamfilter2.webreus.nl (localhost [127.0.0.1])
by spamfilter2.webreus.nl (Postfix) with ESMTP id 7FB2EE78EFF
for <[email protected]>; Mon, 24 Mar 2014 14:20:13 +0100 (CET)
X-Virus-Scanned: by SpamTitan at webreus.nl
Received: from mx-in-2.webreus.nl (mx-in-2.webreus.nl [46.235.44.240])
by spamfilter2.webreus.nl (Postfix) with ESMTP id 3D793E78E5A
for <[email protected]>; Mon, 24 Mar 2014 14:20:09 +0100 (CET)
Received-SPF: None (mx-in-2.webreus.nl: no sender authenticity
information available from domain of
[email protected]) identity=pra;
client-ip=62.146.106.25; receiver=mx-in-2.webreus.nl;
envelope-from=""; x-sender="[email protected]";
x-conformance=sidf_compatible
Received-SPF: None (mx-in-2.webreus.nl: no sender authenticity
information available from domain of
[email protected]) identity=mailfrom;
client-ip=62.146.106.25; receiver=mx-in-2.webreus.nl;
envelope-from=""; x-sender="[email protected]";
x-conformance=sidf_compatible
Received-SPF: None (mx-in-2.webreus.nl: no sender authenticity
information available from domain of
[email protected]) identity=helo;
client-ip=62.146.106.25; receiver=mx-in-2.webreus.nl;
envelope-from=""; x-sender="[email protected]";
x-conformance=sidf_compatible
Received: from athosian.udag.de ([62.146.106.25])
by mx-in-2.webreus.nl with ESMTP; 24 Mar 2014 14:20:03 +0100
Received: by athosian.udag.de (Postfix)
id 3B16E54807C; Mon, 24 Mar 2014 14:19:59 +0100 (CET)
Date: Mon, 24 Mar 2014 14:19:59 +0100 (CET)
From: [email protected] (Mail Delivery System)
Subject: ***Spam*** Undelivered Mail Returned to Sender
To: [email protected]
Auto-Submitted: auto-replied
MIME-Version: 1.0
Content-Type: multipart/report; report-type=delivery-status;
boundary="36D9C5488E5.1395667199/athosian.udag.de"
Content-Transfer-Encoding: 7bit
Message-Id: <[email protected]>
|
Pravin offers some good general points, but doesn't really elaborate on any of them and doesn't address your likely actual problems.
First, you need to find out how postfix is receiving those messages and why it's choosing to relay them (the two questions are very likely related).
The best way to do it is by looking at the message ID of any one of the messages and then grepping the mail.log file for all log entries regarding it. This will tell you at the very least where the message came from and what postfix did with it right up until it left its care and went on into the world. Here's a (redacted) sample excerpt:
Mar 26 00:51:13 vigil postfix/smtpd[9120]: 3B7085E038D: client=foo.bar.com[1.2.3.4]
Mar 26 00:51:13 vigil postfix/cleanup[9159]: 3B7085E038D: message-id=<------------@someserver>
Mar 26 00:51:13 vigil postfix/qmgr[5366]: 3B7085E038D: from=<[email protected]>, size=456346, nrcpt=2 (queue active)
Mar 26 00:51:13 vigil postfix/lmtp[9160]: 3B7085E038D: to=<[email protected]>, relay=127.0.0.1[127.0.0.1]:10024, delay=0.3, delays=0.11/0/0/0.19, dsn=2.0.0, status=sent (250 2.0.0 Ok, id=04611-19, from MTA([127.0.0.1]:10025): 250 2.0.0 Ok: queued as 6EA115E038F)
Mar 26 00:51:13 vigil postfix/qmgr[5366]: 3B7085E038D: removed
This tells me the following things:
The message came in from foo.bar.com, a server with IP address 1.2.3.4 calling itself foo.bar.com
(Implied by the lack of warnings) According to forward and reverse DNS, that address does indeed match that name.
The message was meant for a user named [email protected], which the server decided was an acceptable destination address.
As per its configuration, the mail server relayed the message through 127.0.0.1:10024 (our spam/virus filter) for further processing.
The filter said "Okay, I'll queue this as message with ID 6EA115E038F and handle it from here."
Having received this confirmation, the main server declared it was done and removed the original message from the queue.
Now, once you know how the message got into the system you can start finding out where the problem lies.
If it came from elsewhere and was relayed to somewhere else entirely, postfix is currently functioning as an open relay. This is very, very bad and you should tighten up your smtpd_recipient_restrictions and smtpd_client_restrictions settings in /etc/postfix/main.cf.
If it came in from localhost, it's very likely that one webhosting user or another has been compromised with a php script that sends out spam on demand. Use the find command to look for .php files that were recently added or altered, then take a good look at any suspicious names.
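That sweep for recently changed PHP files can be done with find's -mtime test. A sandboxed sketch (the webroot path and the 7-day window are assumptions; point it at your real hosting directory):

```shell
# Build a pretend webroot: one fresh script, one month-old script
mkdir -p webroot/site
touch webroot/site/index.php
touch -d '30 days ago' webroot/site/old.php   # GNU touch

# List .php files modified within the last 7 days, newest first
find webroot -name '*.php' -mtime -7 -printf '%T@ %p\n' | sort -rn
# → shows webroot/site/index.php, but not old.php
```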
Anything more specific will depend too much on the outcome of the above investigation, so it's pointless to attempt to elaborate. I will leave you with the more general admonishment to at the very least install and configure postgrey at the earliest opportunity.
| My Postfix installation is sending out spam; how to stop it? |
1,321,145,927,000 |
I am attempting to write a filter using something like sed or awk to do the following:
If a given pattern does not exist in the input, copy the entire input to the output
If the pattern exists in the input, copy only the lines after the first occurrence to the output
This happens to be for a "git clean" filter, but that's probably not important. The important aspect is this needs to be implemented as a filter, because the input is provided on stdin.
I know how to use sed to delete lines up to a pattern, e.g. 1,/pattern/d, but that deletes the entire input if /pattern/ is not matched anywhere.
I can imagine writing a whole shell script that creates a temporary file, does a grep -q or something, and then decides how to process the input. I'd prefer to do this without messing around creating a temporary file, if possible. This needs to be efficient because git might call it frequently.
|
If your files are not too large to fit in memory, you could use perl to slurp the file:
perl -0777pe 's/.*?PAT[^\n]*\n?//s' file
Just change PAT to whatever pattern you're after. For example, given these two input files:
$ cat file
1
2
3
4
5
11
12
13
14
15
$ cat file1
foo
bar
$ perl -0777pe 's/.*?5[^\n]*\n?//s' file
11
12
13
14
15
$ perl -0777pe 's/.*?10[^\n]*\n?//s' file1
foo
bar
Explanation
-pe : read the input file line by line, apply the script given by -e to each line and print.
-0777 : slurp the entire file into memory.
s/.*?PAT[^\n]*\n?//s : remove everything up to and including the first occurrence of PAT and the rest of that line.
For larger files, I don't see any way to avoid reading the file twice. Something like:
awk -vpat=5 '{
if(NR==FNR){
if($0~pat && !a){a++; next}
if(a){print}
}
else{
if(!a){print}
else{exit}
}
}' file file
Explanation
awk -vpat=5 : run awk and set the variable pat to 5.
if(NR==FNR){} : if this is the 1st file.
if($0~pat && !a){a++; next} : if this line matches the value of pat and a is not defined, increment a by one and skip to the next line.
if(a){print} : if a is defined (if this file matches the pattern), print the line.
else{ } : if this is not the 1st file (so it's the second pass).
if(!a){print} if a is not defined, we want the whole file, so print every line.
else{exit} : if a is defined, we've already printed in the 1st pass so there's no need to reprocess the file.
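If reading the input twice is a problem (for instance because the data really does arrive on stdin), a one-pass variant can buffer the lines in memory instead. This is a sketch, not from the original answer, and like the perl approach it holds the whole input in memory:

```shell
# Sample input matching the 'file' example above
printf '%s\n' 1 2 3 4 5 11 12 13 14 15 > sample

awk -v pat=5 '
    { line[NR] = $0 }                     # buffer every line
    $0 ~ pat && !hit { hit = NR }         # remember where the first match is
    END {
        start = hit ? hit + 1 : 1         # resume after the match, or at the top
        for (i = start; i <= NR; i++) print line[i]
    }' sample
# → 11 12 13 14 15, one per line
```

A file that never matches the pattern is passed through untouched, since hit stays unset and printing starts at line 1.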
| Remove lines from file up to a pattern, unless the pattern doesn't exist |
1,321,145,927,000 |
I would like to take the output from a program and interactively filter which lines to pipe to the next command.
ls | interactive-filter | xargs rm
For example, I have a list of files that no single pattern can match to narrow down. I would like a command interactive-filter that pages the file list and lets me interactively indicate which lines to forward to the next command. In this case each selected line would then be removed.
|
iselect presents an up-down list (read as input from the prior pipe) in which the user can tag multiple entries (written as output to the next pipe):
# show some available executables ending in '*sh*' to run through `whatis`
find /bin /sbin /usr/bin -maxdepth 1 -type f -executable -name '*sh' |
iselect -t "select some executables to run 'whatis' on..." -a -m |
xargs -d '\n' -r whatis
Output after pressing the spacebar to tag a few on my system:
dash (1) - command interpreter (shell)
ssh (1) - OpenSSH SSH client (remote login program)
mosh (1) - mobile shell with roaming and intelligent local echo
yash (1) - a POSIX-compliant command line shell
vipe allows interactively editing (with one's favorite text editor) what goes through a pipe. Example:
# take a list of executables with long names from `/bin`, edit that
# list as needed with `mcedit`, and run `wc` on the output.
find /bin -type f | grep '...............' | EDITOR=mcedit vipe | xargs wc
Output (after deleting some lines while in mcedit):
378 2505 67608 /bin/ntfs-3g.secaudit
334 2250 105136 /bin/lowntfs-3g
67 952 27152 /bin/nc.traditional
126 877 47544 /bin/systemd-machine-id-setup
905 6584 247440 total
Note on push & pull:
iselect starts with a list in which nothing is selected.
vipe starts with a list in which every item shown will be sent through the pipe, unless the user deletes it.
In Debian-based distros, both utils can be installed with apt-get install moreutils iselect.
| Is there a interactive filter tool when paging output? |
1,321,145,927,000 |
I'm using nfsen and I need to apply a filter to capture a specific IP range, and I can't find the syntax. I searched the nfdump and tcpdump docs without luck.
Right now the captured netflows come from many addresses; the range I want to keep (and only those addresses) is 130.190.0.0 to 130.190.127.255, i.e. a /17 mask.
Put another way, I only want addresses that start with 130.190; I don't care about others such as 216.58, 51.254, etc. (there are many more).
|
If you want a filter that captures only packets matching 130.190.0.0/17:
tcpdump net 130.190.0.0/17
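To convince yourself what a /17 covers, here is a small POSIX-sh sketch. The ip2int/in_net helpers are made up for illustration; they are not part of tcpdump or nfdump:

```shell
# Convert a dotted quad to a 32-bit integer
ip2int() {
    oldIFS=$IFS; IFS=.
    set -- $1
    IFS=$oldIFS
    echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

# Succeed if address $1 falls inside network $2 with prefix length $3
in_net() {
    mask=$(( (0xffffffff << (32 - $3)) & 0xffffffff ))
    [ $(( $(ip2int "$1") & mask )) -eq $(( $(ip2int "$2") & mask )) ]
}

in_net 130.190.127.255 130.190.0.0 17 && echo in    # → in  (last address of the range)
in_net 130.190.128.0   130.190.0.0 17 || echo out   # → out (first address past it)
```

A /17 keeps the top 17 bits fixed, so the third octet may run from 0 through 127, which is exactly the 130.190.0.0 to 130.190.127.255 range you describe.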
| tcpdump ip range |
1,321,145,927,000 |
I'd like to extract only a specific value from command output.
The string that the command returns is something like this:
Result: " 5 Secs (11.2345%) 60 Secs (22.3456%) 300 Secs (33.4567%)"
And I want to extract only the "60 Secs" value between the parentheses:
22.3456%
How can I do that?
|
If that is the exact string that the command returns, then sed will work.
command | sed 's/.*60 Secs..\(.*\)..300.*/\1/'
That prints everything between 60 Secs ( and ) 300.
Result:
22.3456%
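An equivalent sed that anchors on the literal parentheses (rather than matching them with ..) may be easier to read; using the sample string from the question:

```shell
out='Result: " 5 Secs (11.2345%) 60 Secs (22.3456%) 300 Secs (33.4567%)"'
printf '%s\n' "$out" | sed 's/.*60 Secs (\([^)]*\)).*/\1/'
# → 22.3456%
```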
| Extract values from a string that follow a specific word using sed |
1,321,145,927,000 |
I wanted to download all graphic files from our organisation's graphics repository web page. They are Illustrator (.ai) format and Corel Draw (.cdr) format.
They are directly hyperlinked (i.e. <a href="http://server/path-to-file.ai">...</a>).
|
wget includes features to support this directly:
wget -r -A "*.ai,*.cdr" 'address-of-page-with-hyperlinks'
-r enables recursive mode so it will download more than the given URL, and -A limits the files it will download and keep in the end.
| Filter hyperlinks from web page and download all that match a certain pattern |
1,321,145,927,000 |
Reading the manpage of tcpdump I found this example
tcpdump 'tcp[tcpflags] & (tcp-syn|tcp-fin) != 0 and not src and dst net localnet'
but I don't understand it, especially the last part.
The tcp[tcpflags] & (tcp-syn|tcp-fin) != 0 part filters all the packets having either the SYN or the FIN bit set.
What does not src and dst net localnet filter?
The explanation in the same manpage says
To print the start and end packets (the SYN and FIN packets) of each
TCP conversation that involves a non-local host.
but in my opinion src is not an expression by itself.
|
You can parse the second part of that filter thusly:
not ( (src and dst) net localnet )
Here "src and dst" is a single pcap direction qualifier, so by De Morgan's law the filter is shorthand for:
not src net localnet or not dst net localnet
That is, it matches packets where at least one endpoint is outside the local network.
| What does this tcpdump line means? |
1,321,145,927,000 |
ls -lrt |grep 'Jun 30th' | awk '{print $9}' | xargs mv -t /destinationFolder
I'm trying to move files selected by a certain date, pattern, or owner. It doesn't work properly without the -t option.
Could someone please enlighten on xargs -n and -t in move command?
|
From the mv man page
-t, --target-directory=DIRECTORY
move all SOURCE arguments into DIRECTORY
mv's default behavior is to move everything into the last argument. Without the -t, xargs would execute the command as mv /destinationFolder pipedArgs, and mv would try to move everything into the last argument piped to xargs. With the -t you're explicitly saying to move it HERE.
From the xargs man page
-n max-args
Use at most max-args arguments per command line. Fewer than max-args arguments will be used if the size (see the -s option) is exceeded, unless the -x option is given, in which case xargs will exit.
Normally xargs will pass all the args at once to the command. For example
echo 1 2 3 4 |xargs echo
1 2 3 4
executes
echo 1 2 3 4
While
echo 1 2 3 4 |xargs -n 1 echo
executes
echo 1
echo 2
echo 3
echo 4
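A throwaway sandbox makes the -t behaviour easy to verify (all names here are made up):

```shell
# Throwaway sandbox
mkdir -p sandbox/dest
cd sandbox
touch a b c

# xargs appends the filenames, so this runs: mv -t dest a b c
printf '%s\n' a b c | xargs mv -t dest

ls dest    # → lists a, b and c
```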
| What does '-t' do in mv command? Example below |
1,321,145,927,000 |
I'm trying to build a comprehensive filter / exclude file, to avoid backing up stuff that doesn't make sense to back up, such as temporary / cache data or easily recreatable files.
I would appreciate if you could share (part of) your exclude list for rsync backups.
Here is what I have so far:
## Universal excludes
- lost+found
- ld.so.cache
# backup text files (e.g. from Emacs)
- *~
- \#*\#
# Commonly distributed Mac OS X cache
- .DS_Store
# Commonly distributed Windows cache
- Thumbs.db
## Root file system
- /dev/
- /etc/modules.conf
- /media/
- /proc/
- /sys/
- /tmp/
- /usr/portage/
- /usr/src/
- /var/tmp/
- /var/log/
# Of the mounted stuff, whitelist only my two data partitions
- /mnt/
+ /mnt/data1
+ /mnt/data2
# Common package managers (apt, yum)
- /var/cache/apt/
- /var/cache/yum/
## Filters for home dirs (assumes /home/<user> dir structure)
# Cache
- /home/*/.cache/
# Downloads
- /home/*/Downloads/
+ /home/*/Downloads/src/
# Dropbox
- /home/*/Dropbox
# Temporary files / cache
- /home/*/.local/share/Trash
- /home/*/.cache
- /home/*/.Trash
# X Windows System
- /home/*/.xsession-errors*
# Gnome temp stuff
- /home/*/.compiz*/session
- /home/*/.gksu.lock
- /home/*/.gvfs
# Common Applications
# Adobe Reader
- /home/*/.adobe/**/AssetCache/
- /home/*/.adobe/**/Cache/
- /home/*/.adobe/**/Temp/
- /home/*/.adobe/**/UserCache.bin
# Dropbox temp stuff
- /home/*/.dropbox/
- /home/*/.dropbox-dist/
# Gimp
- /home/*/.gimp-*/tmp
- /home/*/.gimp-*/swap
# Mozilla Firefox
- /home/*/.mozilla/firefox/*/Cache/
- /home/*/.mozilla/firefox/*/lock
- /home/*/.mozilla/firefox/*/.parentlock
# Mozilla Thunderbird
- /home/*/.mozilla-thunderbird/*/lock
- /home/*/.mozilla-thunderbird/*/.parentlock
# Pidgin (accounts.xml contains passwords in clear text)
- /home/*/.purple/accounts.xml
What else do you think would be useful to add to this filter file, or what do you use and why?
|
I usually also exclude:
/etc/mtab
/run/
/var/run/
/var/cache/pacman/pkg/ (on Arch Linux systems)
| What do you filter / exclude list when doing backup with rsync? [closed] |
1,321,145,927,000 |
[root@localhost ~]# ps aux | grep ata
root 19 0.0 0.0 0 0 ? S 07:52 0:00 [ata/0]
root 20 0.0 0.0 0 0 ? S 07:52 0:00 [ata_aux]
root 1655 0.0 2.6 22144 13556 tty1 Ss+ 07:53 0:18 /usr/bin/Xorg :0 -nr -verbose -auth /var/run/gdm/auth-for-gdm-t1gMCU/database -nolisten tcp vt1
root 3180 0.0 0.1 4312 728 pts/0 S+ 14:09 0:00 grep ata
[root@localhost ~]# ps aux | grep ata | cut -d" " -f 2
[root@localhost ~]#
I would expect the second column in the output, but I'm not getting anything. Any ideas?
|
With -d " ", the field separator is one (and only one) space character. Unlike shell word splitting, cut doesn't treat a space any differently from any other character. So cut -d " " -f2 returns "" in root 19, just like it would return "" for cut -d: -f2 in root:::19.
You'd need to either squeeze the blanks, turning any sequence of spaces into a single space:
ps aux | grep ata | tr -s ' ' | cut -d ' ' -f2
Or use awk, which in its default splitting mode doesn't use a fixed separator but splits the line into its sequences of non-blank characters:
ps aux | awk '/ata/{print $2}'
In this case though, you may want to use:
pgrep -f ata
Or at least:
ps -eo pid= -o args= | awk '/ata/{print $1}'
To match against the arguments only.
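To see the single-character-separator issue in isolation, with a ps-style sample line hard-coded:

```shell
# A ps-style line with a run of spaces between the fields
line='root        19  0.0  0.0'

# Field 2 is the empty string between the first two spaces:
printf '%s\n' "$line" | cut -d ' ' -f2
# → (empty line)

# After squeezing runs of spaces into one, field 2 is the PID:
printf '%s\n' "$line" | tr -s ' ' | cut -d ' ' -f2
# → 19
```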
| not getting desired output with cut command? |
1,321,145,927,000 |
I have a file in this format:
[#]   OWNER_NAME     NAME                      SIZE
[6]   Robottinosino  Software                  200
[42]  Robottinosino  Ideas worth zero          188
[12]  Robottinosino  Ideas worth zero or more  111
[13]  I am Batman    Hardware                  180
[25]  Robottinosino  Profile Pictures          170
and I would like to be able to do the following using command line tools:
my_command "Ideas worth zero"
and get this result:
42
and not risk getting this result:
12
I have thought of using grep to identify the line, awk to get the 1st field but I am not sure how to reliably and efficiently match on the whole 'NAME' field short of counting at which column the text 'OWNER_NAME' and 'SIZE' appear in the header and get everything in-between with some whitespace trimming.
Notice 'OWNER_NAME' could be more than one word: e.g. 'OWNER_NAME' = "I am Batman".
Any ideas with accompanying implementation?
What I have to go by here, is just the old family of cat, head, tail, awk, sed, grep, cut, etc.
|
It's not like I haven't tried before asking... here's my attempt... but it looks way too complicated to me. Disregard the logic that handles dirty files gracefully, it was not part of the question and it's not the focus of the text look-up anyway. It just so happens that the files I have sometimes do not start with "HEADER" but with some garbage, with all the rest of the data being absolutely fine, always.
#!/bin/bash
file_to_scan="${1}"
name_to_lookup="${2}"
ASSUME_FIRST_LINE_IS_HEADER="false" # Sometimes input files begin with spurious lines
FILE_HEADER_REGEX='^\[#\][[:blank:]]+OWNER_NAME[[:blank:]]+NAME[[:blank:]]+SIZE\s*$'
FIELD_HEADER_NAME=' NAME'
FIELD_HEADER_SIZE=' SIZE'
if [ "$ASSUME_FIRST_LINE_IS_HEADER" == "true" ]; then
header_line=$(head -n 1 "${file_to_scan}")
else
header_line="$(
grep \
--colour=never \
--extended-regexp \
"${FILE_HEADER_REGEX}" \
"${file_to_scan}"
)"
fi
colstartend=($(
printf '%s' "${header_line}" \
| \
awk \
-v name="${FIELD_HEADER_NAME}" \
-v size="${FIELD_HEADER_SIZE}" \
'{
print index($0, name)+1;
print index($0, size);
}'
))
sed -E "1,/${FILE_HEADER_REGEX}/d" "${file_to_scan}" \
| \
awk \
-v name_to_lookup="${name_to_lookup}" \
-v colstart="${colstartend[0]}" \
-v offset="$(( ${colstartend[1]} - ${colstartend[0]} ))" \
'{
name_field = substr($0, colstart, offset);
sub(/ *$/, "", name_field);
if (name_field == name_to_lookup) {
print substr($1, 2, length($1)-2)
}
}'
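For comparison, the same column-extraction idea fits in a single awk invocation if we assume the header is the file's first line and the columns are aligned. The leading-space " NAME"/" SIZE" trick from the script above avoids matching the NAME inside OWNER_NAME; the file name and lookup function here are made up for illustration:

```shell
# Sample data, aligned as in the question
cat > inventory.txt <<'EOF'
[#]   OWNER_NAME     NAME                      SIZE
[6]   Robottinosino  Software                  200
[42]  Robottinosino  Ideas worth zero          188
[12]  Robottinosino  Ideas worth zero or more  111
[13]  I am Batman    Hardware                  180
[25]  Robottinosino  Profile Pictures          170
EOF

lookup() {  # usage: lookup "exact NAME value" file
    awk -v name="$1" '
        NR == 1 { s = index($0, " NAME") + 1; e = index($0, " SIZE"); next }
        {
            f = substr($0, s, e - s)          # slice out the NAME column
            sub(/ +$/, "", f)                 # trim the right padding
            if (f == name) print substr($1, 2, length($1) - 2)
        }' "$2"
}

lookup "Ideas worth zero" inventory.txt    # → 42
```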
| Text file look-up by column |
1,321,145,927,000 |
I want to find all directories that are named Contents and then from the list of found results, I want to filter them and only include those that have a file named Database.json directly inside them.
find / -type d -name Contents 2>/dev/null |
while read dir;
do
if [ -f $dir/Database.json ]; then
echo $dir
fi
done
This code works. But I think it's overkill. I think there should be an easier way to do it.
Can I rewrite this code to become simpler?
|
With GNU find, you can do:
LC_ALL=C find . -path '*/Contents/Database.json' -printf '%h\n'
Where %h gives you the head of the path, that is the dirname.
LC_ALL=C is needed for * to match any sequence of bytes regardless of what the user's locale regards as characters, as file paths can be made of any sequence of bytes other than 0 which don't have to make up characters in the user's locale.
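For example, building a tiny throwaway tree (names made up) shows what the -printf '%h\n' output looks like:

```shell
# Only App1 has Database.json directly under a Contents directory
mkdir -p tree/App1/Contents tree/App2/Contents tree/App3/Other
touch tree/App1/Contents/Database.json tree/App3/Other/Database.json

find tree -path '*/Contents/Database.json' -printf '%h\n'
# → tree/App1/Contents
```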
Similar with zsh, also giving you a sorted list, excluding hidden ones, add also reporting symlinks to directories that contain a Database.json:
print -rC1 -- **/Contents/Database.json(N:h)
For a closer equivalent to GNU find's approach:
print -rC1 -- **/Contents(NDoN/e['
[[ -e $REPLY/Database.json || -L $REPLY/Database.json ]]'])
Where:
D includes hidden files (Dot files)
oN does Not order the list.
/ selects the files of type directory.
e['code'] checks that the dirs contain a Database.json file of any type including broken symLink.
In any case, you cannot post-process the list like you do there. Instead, you'd do:
With GNU find and zsh or bash:
while IFS= read -ru3 -d '' file; do
something with "$file"
done 3< <(LC_ALL=C find . -path '*/Contents/Database.json' -printf '%h\0')
With zsh:
for file (**/Contents/Database.json(N:h)) something with $file
See also:
Why is printf better than echo?
When is double-quoting necessary?
| Find directories with name, but filter them if they contain a specific top-level file in them |
1,321,145,927,000 |
How can I remove a single filter?
tc filter show dev peth1
shows
filter parent 1: protocol ip pref 16 u32
filter parent 1: protocol ip pref 16 u32 fh 800: ht divisor 1
filter parent 1: protocol ip pref 16 u32 fh 800::800 order 2048 key ht 800 bkt 0 flowid 1:2 match 5bd6aaf9/ffffffff at 12
Why does that not work?:
tc filter del dev peth1 pref 1 protocol ip handle 800:800 u32
|
Old post, but for reference, it wouldn't work for a few reasons:
The priority should be 16 and not 1
The filter handle should be 800::800 and not 800:800
You must supply the parent qdisc that the filter is attached to
This should work:
tc filter del dev peth1 parent 1: handle 800::800 prio 16 protocol ip u32
| remove tc filter (Traffic Shaping) |
1,321,145,927,000 |
I installed Getmail to retrieve emails from another email server and Procmail to filter the incoming emails. (I am running Debian/Squeeze.)
The recipe I created has this code:
:0:
* ^[email protected]
Xyz
I thought this will make sure that all incoming emails will be saved in ~/Maildir/Xyz/ as individual files. Instead, it seems to be creating a file called Xyz (not a directory) inside ~/Maildir/ and appending new emails to the same file.
How do I save incoming mails as individual files to a folder, instead of a single file?
|
The top level of a procmail rc file is reserved for assignments of procmail variables. Add the following to the top of your procmail recipe file.
MAILDIR="$HOME/Maildir/"
When defining where the mail should be delivered, you have defined Xyz as a file, not a directory. It should instead read:
:0:
* ^[email protected]
Xyz/
procmail is extremely powerful with many options. I'm always amazed at what it can do.
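Putting both corrections together, a minimal ~/.procmailrc for this case would look like the following (keeping the lock flag from the original recipe):

```
MAILDIR="$HOME/Maildir/"

:0:
* ^[email protected]
Xyz/
```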
| Savings emails as individual files using Procmail |