date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,446,123,631,000 |
I'm setting up a proFTPd server so I can upload files to my webserver, but I've never tried this before. I've installed proftpd, added a user with a home folder, /home/FTP-shared, and gave it /bin/false as a shell as well.
But what do I do configuration-wise now in proftp to be able to login with this user, and up and d... |
For your first question, you can read it here.
For your second question, I'm currently using mount --bind.
| Creating a proFTPd user |
1,446,123,631,000 |
I need to upgrade proftpd that is running on my Ubuntu 14.04 server.
Since I want to keep all configfiles as they are I thought the best option would be to compile a newer version 1.3.5b and just copy in the binary to replace the current running.
Would work fine in theory but I am running into issues because I probabl... |
Castaglia's answer is easier to use with ProFTPd, and works on any system.
As a more general solution for Debian packages (including Ubuntu), you can find the configure options in the debian/rules file (that link takes you directly to the version used in 14.04):
CONF_ARGS := --prefix=/usr \
--with-includes=$(... | How to find proftpd compile options Ubuntu 14.04 |
1,446,123,631,000 |
I am trying to set up an FTP server on one of my devices that runs DietPi and I selected proFTPD as a server.
I have installed the software and followed some set-up information I found here. But then I noticed that the service was not running. After trying to find it via ps aux | grep proftpd, I did not succeed.
Aft... |
The strace output indicates that the error is caused by the attempt to create /run/proftpd.sock, which apparently already exists.
Try fuser /run/proftpd.sock to see if any process is holding onto it; it will report the PID numbers of any such processes. Then use ps -fp <PID number here> to get more information about t... | proFTPD not working due to socket bind error |
1,446,123,631,000 |
I have so far: installed ProFTPD, turned IPv6 off in the conf, changed the server name from Debian to jon-virtual-machine, and jailed users to their home folders. But it says it can't determine the IP address or process the conf file.
|
It's difficult to answer this question because of the lack of information provided (proftpd.conf, /etc/hosts, output of ifconfig and hostname).
I guess it's a problem related to your changed hostname. If so, try to modify your /etc/hosts from:
x.y.z.t Debian
where x.y.z.t is your actual IP address, to:
x.y.z.t jon-... | ProFTPD won't start or restart [closed] |
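If that diagnosis is right, the resulting /etc/hosts would look something like this sketch (the IP address is a documentation placeholder, not taken from the question; the hostname is the one the asker chose):

```
127.0.0.1    localhost
192.0.2.10   jon-virtual-machine
```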
1,323,179,527,000 |
I don't understand the need for an rsync server in daemon mode. What are the benefits of it if I can use rsync with SSH or telnet?
|
Many, but I will cite a few off the top of my head.
What if ssh/rsh are not available on the remote server or if they are broken in terms of configuration or stricter network rules? Using rsh/ssh still would require the client (depends on the sender or receiver role), the remote side would have to however fork the rs... | What is the need for rsync server in daemon mode |
1,323,179,527,000 |
It's nothing new, to me at least, that SATA actually "talks" SCSI, which is why SATA devices show up as SCSI devices in Linux.
A related question has been asked before, e.g. Why do my SATA devices show up under /proc/scsi/scsi?
However what fails to be mentioned where I've seen this discussed before is exactly in... |
SCSI and ATA are entirely different standards. They are currently both developed under the aegis of the INCITS standards organization but by different groups. SCSI is under technical committee T10, while ATA is under T13.1
ATA was designed with hard disk drives in mind, only. SCSI is both broader and older, being a st... | In what sense does SATA "talk" SCSI? How much is shared between SCSI and ATA? |
1,323,179,527,000 |
I'm interested in a single command that would download the contents of a torrent (and perhaps participate as a seed following the download, until I stop it).
Usually, there is a torrent-client daemon which should be started separately beforehand, and a client to control (like transmission-remote).
But I'm looking for ... |
I gave a try to lftp:
lftp -c "torrent $1"
where $1 is the .torrent file.
Unlike
lftp -e "torrent $1"
lftp -c must exit when the command is done (lftp -e leaves you in its command prompt).
It also does seeding. (I don't know yet how seeding interacts with -c.)
Seeding after the command finished
This is actually don... | command-line tool for a single download of a torrent (like wget or curl) |
1,323,179,527,000 |
I'm reading a book on network programming with Go. One of the chapters deals with the /etc/services file. Something I noticed while exploring this file is that certain popular entries like HTTP and SSH, both of which use TCP at the transport layer, have a second entry for UDP. For example on Ubuntu 14.04:
ubuntu@vm1:~... |
Basically, it's because that was the tradition from way back when port numbers started being assigned through until approximately 2011. See, for example, §7.1 “Past Principles” of RFC 6335:
TCP and UDP ports were simultaneously assigned when either was
requested
It's possible they will be un-allocated someday, of co... | Why do popular TCP-using services have UDP as well as TCP entries in /etc/services? |
1,323,179,527,000 |
This is more like a conceptual question. I need some clarifications.
Today I was learning some socket programming stuff and wrote a simple chat server and chat client based on Beej's Guide to Network Programming. (The chat server receives clients' messages and sends them to all the other clients.)
I copied the chat se... |
Telnet is defined in RFC 854. What makes it (and anything else) a protocol is a set of rules/constraints. One such rule is that Telnet is done over TCP, and assigned port 23 - this stuff might seem trivial, but it needs to be specified somewhere.
You can't just send whatever you want, there are limitations and special... | Why telnet is considered to be a protocol? Isn't it just a simple TCP send/echo program? |
1,323,179,527,000 |
$ netstat -nat
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:53 0.0.0.0:* LISTEN
tcp ... |
By default sshd uses ipv4 and ipv6. You can configure the protocol sshd uses through the AddressFamily directive in /etc/ssh/sshd_config
For ipv4 & ipv6 (default)
AddressFamily any
For ipv4 only
AddressFamily inet
For ipv6 only
AddressFamily inet6
After you make any changes to sshd_config restart sshd for the chan... | Why does SSH show protocol as tcp6 *and* tcp in netstat? |
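A minimal sketch of the relevant sshd_config stanza, IPv4-only as an example (the service name used for the restart may differ by distribution):

```
# /etc/ssh/sshd_config
AddressFamily inet
```

Then restart, e.g. with `systemctl restart sshd` (or `service ssh restart` on older Debian/Ubuntu systems).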
1,323,179,527,000 |
How can I copy a folder from http://public.me.com/ (a service related to iDisk, or MobileMe) to my local filesystem with a Unix tool (like wget, a command-line non-interactive tool)?
The problem is that the web interface is actually a complex Javascript-based thing rather than simply exposing the files. (Even w3m can'... |
That server is clearly running a partial or broken implementation of WebDAV. Note that you need to connect to a URL like https://public.me.com/ix/rudchenko, not the normal URL https://public.me.com/rudchenko. I tried several clients:
With a normal HTTP downloader such as wget or curl, I could download a file knowing... | How to copy someone's else folders from public.me.com with a wget-like tool? |
1,323,179,527,000 |
Why does the Linux syslog file /var/log/syslog not follow the timestamp format defined in the protocol https://www.rfc-editor.org/rfc/rfc5424#page-11?
|
From RFC 5424 (which lays down the syslog protocol and refers to RFC 3339 for timestamps) "1. Introduction":
This document describes the standard format for syslog messages and
outlines the concept of transport mappings. It also describes
structured data elements, which can be used to transmit easily
parseable... | Why Linux syslog file does not follow the RFC3339 protocol? |
1,323,179,527,000 |
One of my professors was telling us about scalability problems, and said that the X protocol was a prime example of a protocol that doesn't scale. Why is that? Is it because it is very hardware dependent? I know that X is used in modern unix/linux environments; if it's not scalable, then why is it used so widely?
|
One reason he may have said this is that if you look at the traffic that flows back and forth between a client and a server, it's fairly verbose. This doesn't present an issue when the traffic is only having to go locally on a single box between the 2, however when the traffic needs to go over a network connection, th... | Does the X windowing system suffer from scalability? |
1,323,179,527,000 |
I've read a little about Internet protocols and deduced that on a local area network there is no need to use the IP protocol, though it's normally used.
Is there a possibility to turn off the IP protocol in Linux and use only MAC (ethernet) addresses for frame delivering? How would you do it?
I guess there will be a pr... |
There are protocols like AoE (ATA over Ethernet) that allow communication without IP. The problem is that such protocols aren't that common. In fact, I can't see any at the moment, except for dinosaurs such as the old file sharing protocols of yore like Banyan Vines, DECNET, etc.
There's a reason why IP took over aft... | Local area network without using the IP protocol in Linux |
1,323,179,527,000 |
What is the use case and usage of /etc/protocols?
I can see it lists the number of protocols available. But what is the significance?
For example, my Linux machine is not running OSPF but I see OSPF in /etc/protocols.
What does it mean? What is the significance of that file? Do we edit that file?
|
The file is documented in man 5 protocols:
This file is a plain ASCII file, describing the various DARPA
internet protocols that are available from the TCP/IP subsystem.
It should be consulted instead of using the numbers in the ARPA
include files, or, even worse, just guessing them. These numbers
will occur in the ... | What is the significance of /etc/protocols in Linux? |
1,323,179,527,000 |
In RHEL 8.8, when I try to test NFS over TCP versus UDP, and versions 3 versus 4.0, 4.1 and 4.2, I observe on my NFS client a mountproto= in addition to proto= when typing mount. What is the significance of this and what does it mean?
Should I be able to, in RHEL 8.8, have NFS operating showing specifically vers=3 and... |
In older NFS versions (v2 and v3) there are two distinct RPC services handled by separate software: the "MOUNT" protocol that's used only to obtain the initial filesystem handles from rpc.mountd, and the "NFS" protocol that's used for everything else.
So the mountproto= option defines the transport used to access rpc.... | difference NFS proto and mountproto |
1,323,179,527,000 |
Where do application layer protocols reside? Are they part of library routines of language e.g. C, C++, Java?
As goldilocks says in his answer, this is about the implementation of application layer protocols.
|
Where do application layer protocols reside?
Protocols are an abstraction, so they don't really "reside" anywhere beyond specifications and other documentation.
If you mean, where are they implemented, there's a few common patterns:
They may be implemented first in native C as libraries which can be wrapped by for... | Are application layer protocols part of library routines? |
1,323,179,527,000 |
I was looking at this link here: http://www.debianadmin.com/linux-snmp-oids-for-cpumemory-and-disk-statistics.html and noticed that the OIDs are the same ones I see for the same stats for our appliance. Is this some kind of standard with SNMP maybe an RFC or something? Does anyone know where I can find the list that t... |
The list you're looking for is most probably at http://www.oid-info.com/
Yes, this is some kind of standard: OIDs are objects in the MIB, the global root MIB was defined in RFC 1155. It has since been extended, the SNMP MIB is RFC 1157.
| Where do I find the OID descriptions for SNMPv2 in Linux? |
1,323,179,527,000 |
I have a Linux machine running Red Hat 5.X.
Please advise: with which command can I identify whether someone is trying to
copy files from my machine via SFTP or FTP?
Is it possible to verify this on my Linux machine?
Thanks
|
Sure, you can use lsof to see what activity is currently taking place on the server. Here's what the output would look like for an idle connection to an SFTP server.
$ sudo /usr/sbin/lsof -p $(pgrep sftp)
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
sftp-serv 30268 sam cwd DIR 0,19 ... | how to know on my linux machine if connection VIA sftp is active |
1,323,179,527,000 |
I know that ssh-add is a "front-end" to ssh-agent.
But ssh-agent on my computer is already running (I can find it in top). When I type ssh-add, it says "Could not open a connection to your authentication agent".
How does ssh-add communicate with ssh-agent, in detail?
My situation is
#! /bin/sh
# hello.sh
eval $(ssh-agent... |
The output of ssh-agent -s is some environment variable assignments, something like SSH_AUTH_SOCK=blahblah; export SSH_AUTH_SOCK etc. When you run eval $(ssh-agent -s), the shell executes that as code, and those variables get set in that shell. The variables there contain the information ssh-add needs to contact the ... | ssh-add not able to connect to ssh-agent |
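The mechanism can be demonstrated without a real agent: a child process prints shell variable assignments, and eval executes them in the calling shell. In this sketch, fake_agent_output and the socket path are invented stand-ins for ssh-agent -s and its real output:

```shell
#!/bin/sh
# Stand-in for `ssh-agent -s`: emit variable assignments as shell code.
fake_agent_output() {
    printf 'SSH_AUTH_SOCK=/tmp/agent.1234; export SSH_AUTH_SOCK;\n'
}

# eval runs that text as code, so the variable lands in *this* shell,
# where child processes (like ssh-add) can then inherit it.
eval "$(fake_agent_output)"
echo "$SSH_AUTH_SOCK"   # prints /tmp/agent.1234
```

Running the printed assignments through eval is the whole trick: without it, the output of ssh-agent -s is just text on stdout and nothing in the environment changes.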
1,323,179,527,000 |
When I write programs for my own FPGA, I must select UART to emulate a terminal and for my FPGA design but I don't know exactly what that means.
I believe that UART is a basic serial transmission protocol, isn't it? And is that the protocol between the program and the terminal and therefore I must choose UART from my... |
A UART (Universal Asynchronous Receiver Transmitter) is not a protocol, it's a piece of hardware capable of receiving and transmitting data over a serial interface. I presume you are selecting some design block for your FPGA design implementing a UART.
| What is the relation between UART and the tty? |
1,323,179,527,000 |
I am reading how the TCP states work and especially the connection termination part.
All of the books and online material I have read show that the side that initiated (active) the connection termination goes through these states:
ESTABLISHED, FIN-WAIT-1, FIN-WAIT-2, TIME-WAIT, CLOSED
And the... |
The Linux TCP stack and conntrack have two different visions of the TCP connection. What you're seeing in /proc/net/ip_conntrack is different from what the kernel sees. The kernel state is stored in /proc/net/tcp and /proc/net/tcp6 and can be displayed with netstat.
As seen here: https://serverfault.com/questions/3130... | Why TCP TIME-WAIT State is present at both ends after a connection termination? |
1,323,179,527,000 |
Let's say:
$ ls -l /dev/input/by-id
lrwxrwxrwx 1 root root 10 Feb 10 03:47 usb-Logitech_USB_Keyboard-event-if01 -> ../event22
lrwxrwxrwx 1 root root 10 Feb 10 03:47 usb-Logitech_USB_Keyboard-event-kbd -> ../event21
$ ls -l /dev/input/by-path/
lrwxrwxrwx 1 root root 10 Feb 10 03:47 pci-0000:00:14.0-usb-0:1.1:1.0-event-... |
The kernel does not decide the bInterfaceProtocol. The value is received from the connected USB device.
A variety of protocols are supported HID devices. The bInterfaceProtocol
member of an Interface descriptor only has meaning if the bInterfaceSubClass
member declares that the device supports a boot interface, other... | Is value of bInterfaceProtocol fixed or decided by Kernel? |
1,323,179,527,000 |
I have a desktop-server Debian Jessie machine that has been running for testing purposes for just 19 hours now. I have already set a few rules, as you can see below. But I am not really into networking, so it needs some revision.
Here is my iptables -L -v:
Chain INPUT (policy DROP 1429 packets, 233K bytes)
pkts bytes target pr... |
The above iptables config will only let TCP and UDP packets get past the firewall (unless they came from loopback). The default rule of the INPUT chain has been set to DROP, meaning that every packet that isn't explicitly ACCEPTed will be discarded. There should be no weird packets from loopback, so only TCP/UDP pac... | iptables - how to drop protocols [closed] |
1,440,012,027,000 |
Background
I'm copying some data CDs/DVDs to ISO files to use them later without the need of them in the drive.
I'm looking on the Net for procedures and I found a lot:
Use of cat to copy a medium: http://www.yolinux.com/TUTORIALS/LinuxTutorialCDBurn.html
cat /dev/sr0 > image.iso
Use of dd to do so (apparently the m... |
All of the following commands are equivalent. They read the bytes of the CD /dev/sr0 and write them to a file called image.iso.
cat /dev/sr0 >image.iso
cat </dev/sr0 >image.iso
tee </dev/sr0 >image.iso
dd </dev/sr0 >image.iso
dd if=/dev/cdrom of=image.iso
pv </dev/sr0 >image.iso
cp /dev/sr0 image.iso
tail -c +1 /dev/s... | Is it better to use cat, dd, pv or another procedure to copy a CD/DVD? |
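The claimed equivalence is easy to sanity-check on an ordinary file in place of /dev/sr0: each command must produce byte-identical output (file names and the test size here are arbitrary):

```shell
#!/bin/sh
# Verify that cat, dd and tail -c +1 produce byte-identical copies.
set -e
src=$(mktemp); a=$(mktemp); b=$(mktemp); c=$(mktemp)
trap 'rm -f "$src" "$a" "$b" "$c"' EXIT

head -c 100000 /dev/urandom > "$src"   # stand-in for /dev/sr0

cat "$src"        > "$a"
dd if="$src"      > "$b" 2>/dev/null   # dd's transfer stats go to stderr
tail -c +1 "$src" > "$c"               # "from byte 1" = the whole file

cmp "$a" "$b" && cmp "$a" "$c" && echo identical
```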
1,440,012,027,000 |
If myfile is growing over time, I can get the number of lines per second using
tail -f | pv -lr > /dev/null
It gives instantaneous speed, not average.
How can I get the average speed (i.e. the integral of the speed function v(t) over the monitoring time, divided by that time)?
|
With pv 1.2.0 (December 2010) and above, it's with the -a option:
Here with both current and average, line-based:
$ find / 2> /dev/null | pv -ral > /dev/null
[6.28k/s] [70.1k/s]
With 1.3.8 (October 2012) and newer, you can also use -F/--format with %a:
$ find / 2> /dev/null | pv -lF 'current: %r, average: %a' > /dev... | How to get an average pipe flow speed |
1,440,012,027,000 |
I need to encrypt a large file using gpg. Is it possible to show a progress bar like when using the pv command?
|
progress can do this for you — not quite a progress bar, but it will show progress (as a percentage) and the current file being processed (when multiple files are processed):
gpg ... &
progress -mp $!
| How to show progress with GPG for large files? |
1,440,012,027,000 |
I want to install Scientific Linux from USB. I don't know why unetbootin doesn't work but I am not curious to find out: after all, I transferred to Linux from Windows to see and learn the underlying procedures. I format my USB drive to FAT32 and run this command as root:
# pv -tpreb /path/to/the/downloaded/iso | sudo ... |
A CD-ROM and USB stick use entirely different methods to boot. For an ISO9660 image on a CD-ROM, it's the El Torito Specification that makes it bootable; for a USB stick, it needs a Master Boot Record style boot sector.
ISOLINUX, the bootloader that is used in ISO9660 CD-ROM images to boot Linux, has recently added a... | Creating a bootable Linux installation USB without unetbootin |
1,440,012,027,000 |
I used md5sum with pv to check 4 GiB of files that are in the same directory:
md5sum dir/* | pv -s 4g | sort
The command completes successfully in about 28 seconds, but pv's output is all wrong. This is the sort of output that is displayed throughout:
219 B 0:00:07 [ 125 B/s ] [> ] 0% ... |
The pv utility is a "fancy cat", which means that you may use pv in most situations where you would use cat.
Using cat with md5sum, you can compute the MD5 checksum of a single file with
cat file | md5sum
or, with pv,
pv file | md5sum
Unfortunately though, this does not allow md5sum to insert the filename into its ... | Using pv with md5sum |
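The filename behaviour is easy to reproduce on a throwaway file:

```shell
#!/bin/sh
# md5sum prints the file name only when it opens the file itself;
# fed from a pipe (as it is by pv), it prints "-" instead.
f=$(mktemp); trap 'rm -f "$f"' EXIT
printf 'hello\n' > "$f"

md5sum "$f"         # <hash>  /tmp/tmp.XXXX
cat "$f" | md5sum   # <hash>  -   (same hash, no file name)
```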
1,440,012,027,000 |
I was able to backup a drive using the following command.
pv -EE /dev/sda > disk-image.img
This is all well and good, but now I have no way of seeing the files unless I use this command
pv disk-image.img > /dev/sda
This, of course, writes the data back to the disk which is not what I want to do. My question is what... |
You backed up the whole disk including the MBR (512 bytes), and not a simple partition which you can mount, so you have to skip the MBR.
Please try with:
sudo losetup -o 512 /dev/loop0 disk-image.img
sudo mount -t ntfs-3g /dev/loop0 /mnt
Edit: as suggested by @grawity:
sudo losetup --partscan /dev/loop0 disk-image.i... | With the command pv it is possible to clone a drive, how do I mount it? [duplicate] |
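The 512 passed to -o generalises: the offset is the partition's start sector (as reported by fdisk -l on the image) multiplied by the sector size. A sketch of the arithmetic, where the start sector 2048 is an assumed typical value, not taken from the question:

```shell
#!/bin/sh
# Compute the byte offset to hand to `losetup -o` for a partition that
# starts at sector 2048 of an image with 512-byte sectors.
start_sector=2048
sector_size=512
offset=$((start_sector * sector_size))
echo "$offset"   # 1048576; usage: losetup -o "$offset" /dev/loop0 image.img
```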
1,440,012,027,000 |
I'm trying to use pv, but I want to hide the output of the command I piped in while still being able to see pv's output. Using command &> /dev/null | pv doesn't work (as in, pv doesn't receive any data). command produces output on both standard output and standard error, and I don't want to see either.
I tried using a grep pi... |
man pv says:
To use it, insert it in a pipeline between two processes, with the appropriate options. Its standard input will be passed through to its standard output and progress will be shown on standard error.
The output you see comes from pv. The progress bar is on stderr, and the content you piped in is on stdou... | Pipe a command to pv but hide all the original command's output |
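The routing is easy to see with a stand-in producer (noisy is invented for this demo): silence only the producer's stderr, leave its stdout on the pipe, and discard the pipeline's stdout at the end if you don't want the data either.

```shell
#!/bin/sh
# stdout is the data channel; stderr carries diagnostics (like pv's bar).
noisy() {
    echo "data"              # goes down the pipe
    echo "diagnostic" >&2    # would clutter the terminal
}

# Drop only stderr: the pipe reader still receives the data.
noisy 2>/dev/null | wc -l    # prints 1
```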
1,440,012,027,000 |
I want to run a sequence of command pipelines with pv on each one. Here's an example:
for p in 1 2 3
do
cat /dev/zero | pv -N $p | dd of=/dev/null &
done
The actual commands in the pipe don't matter (cat/dd are just an example)...
The goal being 4 concurrently running pipelines, each with their own pv output. Howev... |
Found that I can do this with xargs and the -P option:
josh@subdivisions:/# seq 1 10 | xargs -P 4 -I {} bash -c "dd if=/dev/zero bs=1024 count=10000000 | pv -c -N {} | dd of=/dev/null"
3: 7.35GiB 0:00:29 [ 280MiB/s] [ <=> ... | How can I run multiple pv commands in parallel? |
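The same shape works with a trivial payload; here echo stands in for the dd | pv | dd pipeline, and -P 4 keeps four jobs running at once:

```shell
#!/bin/sh
# Run four commands concurrently; xargs replaces {} with each input line.
# (Substituting {} into sh -c is fine for trusted input like digits,
# but unsafe for arbitrary strings.)
seq 1 4 | xargs -P 4 -I {} sh -c 'echo "job {}"' | sort
```

The jobs finish in arbitrary order, which is why the output is piped through sort for a stable result.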
1,440,012,027,000 |
I would like to track progress of a slow operation using pv. The size of the input of this operation is known in advance, but the size of its output is not. This forced me to put pv to the left of the operation in the pipe.
The problem is that the long-running command immediately consumes its whole input because of bu... |
In your setup the data has passed pv while it is still processed on the right side. You could try to move pv to the rightmost side like this:
seq 20 | while read line; do sleep 1; echo ${line}; done | pv -l -s 20 > /dev/null
Update:
Regarding your update, maybe the easiest solution is to use a named pipe and a subshe... | How track progress of a command in a pipe if only the size of its input is known in advance? |
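The named-pipe idea can be sketched with cat standing in for pv at the reading end (the fifo path is a throwaway):

```shell
#!/bin/sh
# A fifo decouples the producer from the consuming/measuring process:
# whatever sits at the reading end sees data only as it is produced.
fifo=$(mktemp -u)      # unused temp path for the fifo
mkfifo "$fifo"

cat "$fifo" &          # reader in the background (pv in the answer)
seq 3 > "$fifo"        # producer; blocks until the reader drains it
wait
rm -f "$fifo"
```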
1,440,012,027,000 |
I've been able to define a physical volume (LVM) in two ways:
Creating a 8e (Linux LVM type) partition and then # pvcreate /dev/sdb1
Usign pvcreate directly using a non-partitioned disk and then # pvcreate
/dev/sdc <-- note the lack of number since there aren't any partitions.
My disks are not local, I use both sc... |
This was asked recently but it was in the context of local disks. In that situation, there is a good reason to use a partition table on the disk even if you only intend to make it a single big partition spanning the entire disk: documenting the fact that the disk is actually in use, thus preventing accidents.
I believ... | Define physical volume inside non-partitioned disk |
1,440,012,027,000 |
I ran sed on a large file, and used the pv utility to see how quickly it's reading input and writing output. Although pv showed that sed read the input and wrote the output within about 5 seconds, sed did not exit for another 20-30 seconds. Why is this?
Here's the output I saw:
pv -cN source input.txt | sed "24629045,... |
There are two reasons. In the first place, you don't tell it to quit.
Consider:
seq 10 | sed -ne1,5p
In that case, though it only prints the first half of input lines, it must still read the rest of them through to EOF. Instead:
seq 10|sed 5q
It will quit right away there.
You're also working with a delay between e... | Why doesn't sed exit immediately after writing the output? |
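The difference is easy to reproduce: both commands print the same five lines, but only the second stops reading its input early.

```shell
#!/bin/sh
# -n with 1,5p prints lines 1-5 yet still consumes input to EOF;
# 5q prints lines 1-5 and then quits, closing the pipe early.
seq 10 | sed -ne '1,5p'
seq 10 | sed '5q'
```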
1,440,012,027,000 |
I need to encrypt and be able to decrypt files with openssl, currently I do this simply with:
openssl enc -aes-256-cbc -salt -in "$input_filename" -out "$output_filename"
and the decryption with:
openssl enc -aes-256-cbc -d -salt -in "$input_filename" -out "$output_filename"
But with large files, I would like to se... |
You should try
openssl enc -aes-256-cbc -d -salt -in "$input_filename" | pv -W >> "$output_filename"
From the Manual:
-W, --wait:
Wait until the first byte has been transferred before showing any progress information or calculating any ETAs. Useful if the program you are piping to or from requires extra information ... | How to use pv to show progress of openssl encryption / decryption? |
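Since pv only relays bytes, the enc/dec pair itself can be checked with a quick round trip. Here -pass pass:demo (a made-up passphrase) replaces the interactive prompt, and stderr is silenced because newer OpenSSL releases warn about the legacy key derivation:

```shell
#!/bin/sh
# Encrypt and decrypt in one pipeline; the original text must come back.
printf 'secret payload\n' |
  openssl enc -aes-256-cbc -salt -pass pass:demo 2>/dev/null |
  openssl enc -aes-256-cbc -d -salt -pass pass:demo 2>/dev/null
```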
1,440,012,027,000 |
I have a script on a Linux machine with a fancy pv piped to a second pv that counts a subset of the output lines.
Here's the script:
max=1000
for (( i=0; i<max; i++ )); do
[[ $(shuf -i 1-100 -n 1) -lt 20 ]] && echo REMOVE || echo LEAVE
done | pv -F "%N %b / $(numfmt --to=si $max) %t %p %e" -c -N 'Lookups' -l -s $m... |
The two pv processes in the pipe may start in any order. The output from the latest pv will be in the bottom line.
Delay pv you want in the bottom line. Instead of pv … (where … denotes all its arguments) use a subshell:
( </dev/null sleep 1; exec pv … )
In theory the other pv may still start after the delayed one, b... | Multiple pv order |
1,440,012,027,000 |
I was testing different methods to produce random garbage and comparing their speed by piping output to pv, as in:
$ cmd | pv -s "$size" -S > /dev/null
I also wanted a "baseline reference", so I measured the fastest "generator", cat, with the fastest source, /dev/zero:
$ cat /dev/zero | pv -s 100G -S > /dev/null... |
The killer is the use of two processes.
With cat | pv, cat reads and writes, and pv reads and writes, and both processes need to run:
$ perf stat sh -c 'cat /dev/zero | pv -s 100G -S > /dev/null'
100GiB 0:00:26 [3.72GiB/s] [====================================================================================>] 100% ... | pipe and redirection speed, `pv` and UUOC |
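The two shapes look like this on an ordinary file (sizes arbitrary; this illustrates the process count, it is not a benchmark):

```shell
#!/bin/sh
# Identical result, different plumbing: the pipe form runs two processes
# and copies every byte through a pipe; the redirection form runs one.
f=$(mktemp); trap 'rm -f "$f"' EXIT
head -c 4096 /dev/zero > "$f"

cat "$f" | wc -c    # two processes, extra copy through the pipe
wc -c < "$f"        # one process reading the file directly
```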
1,440,012,027,000 |
How does the following command work?
pv file.tar.gz | tar -xz
From my understanding the pipe operator | creates a pipe and stdout of pv is mapped to the O_WRONLY end of the pipe and tar's stdin is mapped to the O_RDONLY with both O_WRONLY and O_RDONLY existing in pipefs
This is all well and good, but the following is... |
The progress bar is a feature of pv, it is written on standard error. From the pv manual:
pv shows the progress of data through a pipeline by giving information
such as time elapsed, percentage completed (with progress bar), current
throughput rate, total data transferred, and ETA.
To use it, insert... | How does pv work? |
1,440,012,027,000 |
Executing this command displays the output on console. But when output is piped to another command it does not work. See below.
(pv -F $'%t %r %e\n' /dev/nvme0n1p1 | gzip -c >/run/test.img )
0:00:01 [25.2MiB/s] ETA 0:00:18
0:00:02 [23.7MiB/s] ETA 0:00:18
0:00:03 [ 100MiB/s] ETA 0:00:07
0:00:04 [ 199MiB/s] ETA 0:00:01... |
Use pv -f …
From man 1 pv:
-f, --force
Force output. Normally, pv will not output any visual display if standard error is not a terminal. This option forces it to do so.
(pv -fF $'%t %r %e\n' /dev/nvme0n1p1 | gzip -c >/run/test.img ) 2>&1 | tr -d ':[]'
| pv not printing to a pipe |
1,440,012,027,000 |
I use the following command to verify ~700 GiB of backed-up files:
$ find -type f -exec md5sum {} + | sort > ~/checksums
This takes many hours, so I would like to integrate pv into the command to show the progress.
I could do this:
$ find -type f -exec pv {} + | md5sum
But it concatenates all of the files, resulting... |
Your first command should not be able to run at all as you can't use a pipe in an -exec like that (this was apparently a typo in the original question).
Instead:
find . -type f -exec md5sum {} + | sort -o ~/checksums
or, with pv,
find . -type f -exec md5sum {} + | pv | sort -o ~/checksums
In both of the above, md5su... | Use pv with find -exec |
1,440,012,027,000 |
I'm using pv for sending files via ssh.
While an active pv is under 100M, I can change the limit without any problem.
When I set an active pv process to 100M or 1G or higher, I can't change the rate anymore...
But if I toggle 5-10 times between 1M and 2M, pv sometimes accepts the new rate.
I couldn't find any solution for the problem... |
This is caused by accounting in pv, which effectively means its rate-limiting is read-limited rather than write-limited. Looking at the source code shows that rate-limiting is driven by a “target”, which is the amount remaining to send. If rate-limiting is on, once per rate limit evaluation cycle, the target is increa... | Pipe-Viewer problem with changing Rate-Limit |
1,440,012,027,000 |
I am writing a batch script to sort through gigs and gigs of data. All of the data is text but the script will take a long long time to execute. I would like to give some visual indication that the script is running. I found the program called pv which allows you to create progress bars and other nice CLI progress in... |
There will typically (lacking zero-copy trickery) be measurable overhead due to the extra IPC: copying the data from one process to another, rather than the "workhorse" process reading files directly. A pipe may also result in loss of performance (or functionality) for other reasons: with piped input a process cannot ... | Pipe Viewer - Progress monitor performance consequence |
1,440,012,027,000 |
Which command produces more data per second? This could be useful to quickly fill a file with garbage data or to test data transfer rates. So far, I found that "/dev/zero" is the quickest one.
$ cat /dev/urandom | pv > /dev/null
3,04GO 0:08:22 [5,83MB/s] [ <=> ]
$ yes | pv > /dev/null... |
The system interprets /dev/zero as literally just an endless stream of zeroes, and I believe this is the fastest way to obtain useless information. In all likelihood, you're going to be bottlenecked by your physical disk speed, and so this should be as fast as you'd ever need even if there are any faster methods.
Also... | Which command produces more data per second? |
1,440,012,027,000 |
I'm using PV for my ZFS send-recv replication.
I use the ZFS resume token too, but I want to pause and resume in the style of SIGSTOP and SIGCONT,
because using the resume token means sending the same data again.
So how do you manage pause and resume with pv?
BTW: "pv - monitor the progress of data through a pipe"
|
Sending SIGSTOP/SIGCONT to the ZFS send or receive process causes an error.
The only way these signals work is when you apply them to pv.
You can stop and continue pv, but while pv is stopped, zfs still tries to send, and I don't know the consequences yet, whether it causes any problem with CPU or I/O usage on the host.
I stopped a few hours ... | What is the best way pause "zfs send via PV" and resume |
1,440,012,027,000 |
I've just written over the wrong hard drive using the command:
sudo sh -c 'pv /dev/sdb >/dev/sdc'
How do I go about undoing this?
I was creating the first ever backup of the drive, and I backed up over the wrong drive... The drive which got written over also has no backups; I was going to back up that drive next.
Bot... |
If you do not have backups, your data wasn't important.
It's gone. There is no undo. Especially not with encryption involved.
something that produces output > /dev/somedisk overwrites data on the device. Whatever is overwritten can not be restored, so your only chance would be if you noticed and cancelled it right awa... | Backed up over wrong hard drive |
1,440,012,027,000 |
I was copying a very big file and I accidentally stopped it. Can I resume copying the data without needing to delete the partial copy and copy everything again?
Command I used:
pv original.data > copy.data
|
Continue with dd:
dd if=original.data of=copy.data ibs=512 obs=512 seek=NNN skip=NNN status=progress
First find the byte count of copy.data, then replace the NNNs with that byte count divided by 512 (the value set for ibs and obs).
| Continue copying file |
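The seek/skip arithmetic can be rehearsed on throwaway files; conv=notrunc is added here so dd leaves the already-copied prefix alone (the file names and sizes are invented):

```shell
#!/bin/sh
# Simulate an interrupted copy, then resume it with dd seek/skip.
set -e
src=$(mktemp); dst=$(mktemp)
trap 'rm -f "$src" "$dst"' EXIT

head -c 4096 /dev/urandom > "$src"
head -c 1024 "$src" > "$dst"               # the partial copy (1024 bytes)

blocks=$(( $(stat -c %s "$dst") / 512 ))   # NNN = copied bytes / 512
dd if="$src" of="$dst" ibs=512 obs=512 \
   seek="$blocks" skip="$blocks" conv=notrunc 2>/dev/null

cmp "$src" "$dst" && echo resumed
```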
1,440,012,027,000 |
So when I do (as a root)
fdisk -l
I see /dev/sda1 and /dev/sda2
Now I am practicing creating logical volumes. When I tried partitioning
/dev/sda2
I got two new partitions, /dev/sda2p1 and /dev/sda2p2,
and then I ran
partprobe
but when I try creating a PV on
/dev/sda2p1 /dev/sda2p2
it says these devices are not fou...
What am I doing wrong?
LVM Logical Volumes are not created with fdisk. You need to use pvcreate, vgcreate and lvcreate instead.
I did choose type 8e when creating these LVM partitions
Setting the partition type using fdisk lets you hint that a partition may contain an LVM Physical Volume. Like setting any other partition type, this does...
1,440,012,027,000 |
This is my dd command which I need to modify:
dd if=/tmp/nfs/image.dd of=/dev/sda bs=16k
Now I would like to use pv to limit the speed of copying from NFS server. How can I achieve that? I know that --rate-limit does the job, but I am not sure how to construct pipes.
|
If for some reason you must read the block device using a block size of 16K:
dd if=/mnt/nfs bs=16k | pv -L <rate> > /dev/sda
Where <rate> is the maximum allowed number of bytes per second to be transferred, or the maximum allowed number of kibibytes, mebibytes, gibibytes, [...] per second to be transferred if K, M, G... | How to redirect dd to pv? [duplicate]
1,440,012,027,000 |
I created an image from a 256GB HDD using following command:
dd if=/dev/sda bs=4M | pv -s 256G | gzip > /mnt/mydrive/img.gz
later I tried to restore the image to another 512GB HDD on another computer using following command:
gzip -d /mnt/mydrive/img.gz | pv -s 256G | dd of=/dev/sda bs=4M
the 2nd command shows very l... |
The following command is not doing exactly what you intend:
gzip -d /mnt/mydrive/img.gz > /dev/sda
The command is decompressing the file /mnt/mydrive/img.gz and creating a file called img which is the ungzipped copy of img.gz. The > /dev/sda is not doing anything useful because nothing is sent to /dev/sda via... | restoring hdd image using gzip throws error no space left on device |
1,440,012,027,000 |
I have this perl script, and I discovered the pv command and decided to use it to get some feedback on what is going on with the randomness in terms of throughput. After a few tests I decided to throttle the command, like so:
perl_commands < /dev/urandom | pv -L 512k | tr -cd SET
5.5MiB 0:00:11 [ 529kiB/s] [ ... |
pv doesn't know about the system power states. All it sees is that the clock changed by a very large amount at some point.
My guess is that pv doesn't care if the amount of time between two clock readouts suddenly gets large and just calculates the throughput based on the time interval. Since the interval is very larg... | Why does it temporarily look like the pv command transfer limit is no longer enforced when I come out of suspend to ram? |
1,440,012,027,000 |
How can I monitor a netcat transfer from Android to my Linux machine?
I used this command on the Android device (the sender) to make a full dump of my device:
dd if=/dev/block/mmcblk0 | busybox nc -l -p 8888
On the receiver side I use this command:
nc 127.0.0.1 8888 > device_image.dd
I need to watch the progress with ...
Inserting pv in your receive-side pipeline should allow you to observe progress:
nc 127.0.0.1 8888 | pv >device_image.dd
If you had pv available on the sending side, you could also use it there:
dd if=/dev/block/mmcblk0 | pv | busybox nc -l -p 8888
But pv probably won't be available on your Android device unless you... | watch netcat transfer dump from android to pc |
1,440,012,027,000 |
In pv, the rate meter is displayed as
47.5MiB 0:00:00 [ 165MiB/s] [================================>] 100%
where the unit used for the transfer stats is MiB (base 1024). Is it possible to change this unit to MB (base 1000)?
|
The nice thing about Linux is that you have access to the sources, so it is pretty much always possible to change something to do what you would like it to do, if you make the effort.
In this case, it is not too difficult to download the sources, and look through them to see if it is obvious what to change. Then jus... | Can the unit displayed in the transfer rate meter in pv be changed? |
1,440,012,027,000 |
I am trying to understand how the redirection works exactly in this command
# tar -czf - ./Downloads/ | (pv -p --timer --rate --bytes > backup.tgz)
What is the English translation?
All the data from tar is piped as input to pv, and then pv redirects it to backup.tgz?
Then why is the bracket around pv require... |
See What are the shell's control and redirection operators? and Order of redirections for background.
tar -czf - ./Downloads/ | (pv -p --timer --rate --bytes > backup.tgz)
tells the shell to run tar -czf - ./Downloads/ with its standard output piped into the subshell (pv -p --timer --rate --bytes > backup.tgz).
(pv -p --timer --ra... | How does redirection to pv actually work? |
1,440,012,027,000 |
I'm running bash scripts for my zfs send jobs, and this is an example:
zfs send -Rc tank/test@snap | pv -fs datasize -F "%p***%t***%e***%r***%b" |
mbuffer -q -s 128k -m 1G -O ip:port
When I start the script I want to know pv's PID.
I couldn't figure out how to get the pv PID.
|
Pipe viewer has an option for this job.
You can save the PID to a file with the -P (--pidfile) option, e.g. by adding pv -P /tmp/pv.pid to the pipeline.
-P FILE, --pidfile FILE
Save the process ID of pv in FILE. The file will be truncated if it already exists, and will be removed when pv exits. While pv is running, it will contain a single number - the process ID of pv - followed by ... | How to get PID when start bash script |
1,388,399,518,000 |
I'm planning to use rsnapshot to backup my whole Linux system, though I'm confused by -x option (same of one_fs in rsnapshot.conf). The man page says:
-x one filesystem, don't cross partitions within each backup point
I understand it's not a specific rsnapshot option since rsync, cp, tar and other commands provide t... |
“Do not cross filesystem boundaries” means “do not look inside mount points”. A boundary between filesystems is a mount point. Effectively, this means “only act on the specified partition”, except that not all filesystems are on a partition. See What mount points exist on a typical Linux system?
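A cheap way to see the effect is with find, whose -xdev flag applies the same rule as rsync's -x / --one-file-system (a sketch; GNU find assumed):

```shell
# List top-level directories without descending into other mounted filesystems;
# mount points like /proc, /sys or a separately-mounted /home are listed
# but never entered.
find / -xdev -maxdepth 1 -type d
```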
When you make a backup... | Meaning of Crossing filesystem boundaries, --one-file-system, etc |
1,388,399,518,000 |
I've looked around a bit without finding an exact answer to my question, which is how to specify a directory to be excluded from only the remote filesystem backup.
Say I've got two machines: a desktop (server) and a laptop. My home directory on each of them is /home/tom. rsnapshot lives on the desktop (localhost) with... |
There is a fourth field for the backup line, which can be used for such tasks. So your line should look as follows.
backup tom@laptop:/home/tom/ laptop/ exclude=/home/tom/music
You can add more per backup options by separating these with a comma. For further reading consult the man page of rsnapshot.
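For example, several such options can be combined on one line, separated by commas (paths hypothetical; note that rsnapshot.conf fields must be separated by tabs):

```
backup	tom@laptop:/home/tom/	laptop/	exclude=/home/tom/music,exclude=/home/tom/.cache
```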
| Specifying remote directories to be excluded from rsnapshot backup |
1,388,399,518,000 |
I'm planning a backup strategy based on rsnapshot.
I want to do a full system backup, excluding files and directories that would be useless for restoring a working system. I already excluded:
# System:
exclude /dev/*
exclude /proc/*
exclude /sys/*
exclude /tmp/*
exclude /run/*
exclude /mnt/*
exclude /me... |
First off, you should read up a little on rsync's include/exclude syntax. I get the feeling that what you want to do is better done using ** globs than * globs. (** expands to any number of entries, whereas * expands only to a single entry possibly matching multiple directory entries. The details are in man rsync unde... | Entries I can safely exclude doing backups |
1,388,399,518,000 |
I have two Debian 8 servers:
Server A: at home, lots of storage
Server B: vps at commercial host, running web and mail services
Both are pet projects, not business stuff.
Server B runs rsnapshot which works fine. Server A and B can SSH to each other passwordlesly with certificates, that also works fine. They do not ... |
You will be running as root on server A, which runs rsnapshot, and ssh-ing to your dedicated user backupmaker on B. Normally, you will want this user to be able to sudo rsync, so that you can read all the files to send back to A.
Assume, for example, you have a user on A who can sudo, and another user on B who can sud... | Proper way to set up rsnapshot over ssh |
1,388,399,518,000 |
I occasionally make changes to my rsnapshot.conf and I'm wondering if there's any way I can do a test run that is sync-ed to a location other than the normal flow... something that's not an interval. Is this possible? how?
|
I don't have an rsnapshot setup to test this on. Be careful.
Personally, I think that the best thing to do is to carefully evaluate the output of rsnapshot -t interval. However if you want to actually move files, one way to do it might be to create an alternate config file that is identical to your real config file b... | Can I do a "test run" with rsnapshot? |
1,388,399,518,000 |
I was running rsnapshot as root and I got the following error. Why would this happen? What is .gvfs?
rsnapshot weekly slave-iv
rsync: readlink_stat("/home/griff/.gvfs") failed: Permission denied (13)
IO error encountered -- skipping file deletion
rsync... |
.gvfs directories are mount points (sometimes). You may want to use the one_fs option in your rsnapshot configuration (so that it passes --one-file-system to rsync).
Gvfs is a library-level filesystem implementation, implemented in libraries written by the Gnome project (in particular libgvfscommon). Applications lin... | root user denied access to .gvfs in rsnapshot? |
1,388,399,518,000 |
Consider the following scenario:
I use Linux device mapper to create a snapshot of an ext4 file system.
The snapshot is mounted as read-only; the source volume is mounted as read-write.
I read the snapshot, and simultaneously write (too much) to the source volume. Eventually, the copy-on-write table fills up.
Now ex... |
When the COW fills up, you start getting I/O errors on write operations.
LVM2 allows you to check the size and usage of the COW, and resize it if necessary.
| Linux device-mapper & ext4: what happens when the COW table fills up? |
1,388,399,518,000 |
Drive A is 2TB in a closet at home.
Drive B is 2TB in my office at work.
I'd like drive A to be the one I use regularly and
to have rsync mirror A to B nightly/weekly.
The problem I have with this is that multiple users have
stuff on A.
I have root run rsync -avz from A to $MYNAME:B
Root can certainly read everything ... |
If it is intended as a backup (I'm looking at the tag), not as a remote copy of a working directory, you should consider using tools like dar or good old tar. If some important file gets deleted and you don't notice it, you will have no chance to recover it after the weekly sync.
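For instance, one dated archive per run (paths hypothetical) keeps deleted files recoverable from older archives until you prune them yourself:

```shell
# Demo tree standing in for the real data:
mkdir -p demo/home && echo hello > demo/home/file.txt

# One compressed, dated archive per backup run:
tar -czpf "demo/home-$(date +%F).tar.gz" -C demo home

ls demo/        # demo now holds home/ plus the dated archive
rm -rf demo
```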
Second advantage is that using tar/dar w... | How do I backup via rsync to a remote machine, preserving permissions and ownership? |
1,388,399,518,000 |
I have a local rsnapshot server that takes snapshots of folders from miscellaneous computers on the local LAN.
There are daily, weekly, monthly and yearly snapshots.
So somebody puts a file into one of those folders being monitored by Rsnapshot and then some hours later the Rsnapshot server takes his daily snap... |
Rsnapshot takes a snapshot every day and every seven days the oldest daily snapshot becomes the new weekly snapshot. The other dailies are discarded. That's the basic idea, to store a relatively low number of snapshots, but with high granularity for the recent days and decreasing granularity for older data.
If I under... | Control how long Rsnapshot keeps a file after being deleted |
1,388,399,518,000 |
I am using rsnapshot to make daily backups of a MYSQL database on a server. Everything works perfectly except the ownership of the directory is root:root. I would like it to be root:backups to enable me to easily download these backups to a local computer over an ssh connection. (My ssh user has sudo permissions but I... |
In the default /etc/rsnapshot configuration file is the following:
# Specify the path to a script (and any optional arguments) to run right
# after rsnapshot syncs files
# cmd_postexec /path/to/postexec/script
You can use cmd_postexec to run a chgrp command on the resulting files which need their group ownership ch... | Rsnapshot: folder ownership permissions to 'backups' group instead of root |
1,388,399,518,000 |
I have the following problem. I currently need to store my backup on a cloud solution like Dropbox, since my local NAS is broken. That's why I have to encrypt my backup. I'm using rsnapshot to generate it.
On the NAS I didn't encrypt it, so I'm not experienced with encryption. What I've done is, I've zipped the latest backup...
The time it takes to encrypt is proportional to the size of the data, plus some constant overhead. You can't save time for the whole operation by splitting the data, except by taking advantage of multiple cores so that it takes the same CPU time overall (or very slightly more) but less wall-clock time. Splittin...
1,388,399,518,000 |
I use rsnapshot to make regular backups of my systems' filesystems to a remote server.
(For those familiar with rsync but less used to rsnapshot here is a brief introduction to its workings. A backup is a file-by-file copy of a source file-system tree, much like cp -a would produce. The "current" backup is always hour... |
After the snapshot, you can use rsnapshot diff which calls rsnapshot-diff to note the differences between two snapshots. It just compares inode numbers so is fairly efficient.
Alternatively, before each backup create a file outside the backup tree to note the time, touch timestamp. Then before a new backup, create a n... | Backup with rsnapshot only if there are changes |
1,388,399,518,000 |
On my Xen host I first create an up-to-date snapshot of all VMs, and then I use rsnapshot to back up all my important folders daily.
Secondly, I back up the same folders on an external server via rsync.
How can I ensure all those folders are successfully backed up on the external server?
|
Before I do the external backup, I create a definition file /root/folders_to_backup_external in each VM and a cronjob in each VM to create a hidden file .backupped_folder that contains the current date in all folders, that are defined in rsnapshot with
# create hidden files with date to check in external server
19 2 ... | Ensure all rsnapshot folders from all VMs in a Xen host are successfully backupped via rsync |
1,388,399,518,000 |
I am in the process of setting up a (proper) backup system which is built upon my NAS and rsnapshot. The NAS has two drives in case one dies (which in itself is not a backup), but I am also taking daily, weekly and monthly rsnapshots of the NAS, which are stored on an external HDD.
I want to have an e... |
It could be hard work trying to rename remote daily.0 directories to keep in sync with renaming done locally by rsnapshot. This might be needed to avoid an rsync of the entire snapshot directory from local to remote having to do a lot of work. It would be much simpler to have separate snapshots independently generated... | Remote Syncing RSnapshots |
1,388,399,518,000 |
CRONTABS
I'm using rsnapshot with cron. Here's what sudo crontab -l shows me.
0 */4 * * * /usr/bin/rsnapshot hourly
30 3 * * * /usr/bin/rsnapshot daily
0 3 * * 1 /usr/bin/rsnapshot weekly
OUTPUT
I went to check on the backup folder to see if everything is working correctly, but ... |
After some extended discussion it appears that the filesystem may be corrupted. As an example, rm -rf fails - as root - on a normal tree of files.
After unmounting the filesystem, fsck identified it as NTFS.
Frustratingly I have seen NTFS fail on other Linux-based platforms under the heavy loads incurred from rsnapshot. ... | What the heck is going on with my cron scheduler? (rsnapshot) |
1,388,399,518,000 |
I am using rsnapshot to manage my backups. The file /home/foo/bar is a symbolic link to a folder, and I want to exclude it. The --exclude option does not work, because if the pattern ends with a / then it will only match a directory, not a symlink.
How could I do this?
The rsync man page says:
if the pattern ends wi... |
Unless you set rsync_short_args or rsync_long_args in the configuration file for rsnapshot, you'll use rsync -a to perform the copy.
The default action for rsync -a is to copy a symlink as a symlink, and to ignore the target of the symlink. (Obviously, if the target of the symlink is within your source file tree it wi... | How to exclude symbolic link in rsync/rsnapshot |
1,388,399,518,000 |
I am trying to set up SSH-key-based login for a remote server so rsnapshot can run daily. The key I am using is a normal user key, and it obviously doesn't have root access, so when rsnapshot connects to the server with the user key, /root for example won't be backed up. What is the best way to set up rsnapshot in the simplest ...
A perhaps not so simple way is to connect as root but to limit the key used to connect to only run specific invocations of rsync; this requires an /root/.ssh/authorized_keys entry along the lines of
from="192.0.2.*",command="/root/limit-rsnap" ssh-rsa AAAAB3N...
which limits both where the backup is expected to orig... | rsnapshot a remote server - best practice for permissions |
1,388,399,518,000 |
I'm trying to set remote backup for my website.
In /etc/rsnapshot.conf I've set the following things, but it's still not working.
snapshot_root /abc_backups/
backup [email protected]:/var/www/abc.com/html/
Can anyone help me out on how to set this?
My website server is on 1.2.3.4 and the source is /var/www/ab... |
Are you sure you have an abc_backups directory at the root of your filesystem? I really doubt it (and even if you did, this is not good practice). Also, backup takes 2 arguments, not one as in your example: first the source (what you back up) and then the destination.
Based on your description, change your backup line ... | rsnapshot settings confusion |
1,388,399,518,000 |
I'd like to cron a local backup to a locally mounted USB drive. I'm using rsnapshot and want it to back up an LVM snapshot onto the USB drive. But, unless I run the cron as root it complains that I can't make an LVM snapshot because I don't have permission to look at /dev/mapper/control. Am I missing something?
Thi... |
Take a look at this topic in the CentOS wiki, titled: rsnapshot Backups. It has examples that show how to backup using rsnapshot:
excerpt from that page
# crontab -e
#MAILTO="" ##Supresses output
MAILTO=me
###################################################################
#minute (0-59), ... | rsnapshot LVM without root |
1,388,399,518,000 |
My goal is to back up a remote server. However, I first want to get just a local backup working, running on Ubuntu 20.
For this, my /etc/rsnapshot.conf file is the following:
config_version 1.2
snapshot_root /var/backupsFromRsnapshot/
cmd_rsync /usr/bin/rsync
# The retain arguments define the number of snap... |
You've tested the configuration (-t) but you haven't yet run it. Here's what the man page (see man rsnapshot) says,
-t test, show shell commands that would be executed
Use this to run the rsnapshot backup, optionally with -v to see what's going on:
rsnapshot alpha
Don't mix retain and interval; they mean the same t... | Problems getting Rsnapshot to work, even just for a local backup |
1,388,399,518,000 |
EDIT: the solution to this problem is the marked solution underneath + enabling PermitRootLogin without-password in /etc/ssh/sshd_config
I'm trying to back up my entire system to my local server, but even though I'm running rsnapshot as sudo, I get permission errors in /var/, /etc/ and /usr/. Is there a way to fix t...
Here's your relevant backup line
backup [email protected]:/ popbackup/
You're running the source backup as gisbi rather than as root, so it cannot open the problematic files listed as errors.
I'd be inclined to run the source sender as root, and use --fake-super on the receiving side with a non-root account. This ... | Permission errors backing up entire system using rsnapshot over local server |
1,388,399,518,000 |
For backing up data in my office I use a Raspberry Pi model B (I had a spare one) running rsnapshot. Basically, every night it copies data from a bunch of smb mounted folders to a couple of external hard drives (fuseblk).
I gradually added data to back up and recently the whole process became really slow: it takes som... |
There are a couple of problems here slowing the backup solution down.
You're using rsync to copy between two "local" filesystems.
The fact that one of them happens to be SMB is irrelevant to rsync. If the filesystem is mounted as part of the local system then rsync has to treat it as local. This means that any changed...
1,388,399,518,000 |
The basic problems is that I have a Domain Connected QNAP and want to publish the RSnapshot snapshots via Samba so users can recover their own files from backups. (As per the original RSnapshot HowTo: http://rsnapshot.org/rsnapshot/docs/docbook/rest.html#restoring-backups)
However unless I set a Default ACL (setfacl -... |
Instead of trying to solve this through Samba, I've reset the samba configuration to the default that the QNAP created. (I.e. un-commented the commented out lines. This also seems safer in the long run since the Web GUI can potentially overwrite the tuned smb.conf file if new shares etc are created by myself or other ... | Unable to explore sub-directory in Samba share with Linux ACLs |
1,388,399,518,000 |
** edit 8/6/15 *
So the crux of my problem turned out not to be some quirkiness with the config file. In the end it turned out I simply had multiple ssh directories in 2 different places, and was using the wrong one. It's an embarrassing mistake to make, but live and learn, right?
I'm trying to do a backup of my N... |
If you have put this command in your cmd_ssh line, like this:
cmd_ssh /usr/bin/ssh -p 22 -i /home/thelemur/.ssh/id_rsa_n900
then you have unfortunately tickled an interesting almost-bug in rsnapshot. The problem is that the cmd_ssh parameter takes the entire value - including spaces - as the ssh alternative to ru... | Having trouble with rsnapshot via ssh (from debian laptop) of Nokia n900 |
1,431,168,210,000 |
I am testing a hard disk with SmartMonTools.
Hard disk status prior to the tests (only one short test performed days ago):
$ sudo smartctl -l selftest /dev/sda
smartctl 6.2 2013-07-26 r3841 [i686-linux-3.16.0-30-generic] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org
==... |
In smartctl -a <device> look for Self-test execution status.
Example when no test is running:
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Example while ... | SmartMonTools: How can I know if there is any smartctl test running on my hard disk? |
1,431,168,210,000 |
The tl;dr: how would I go about fixing a bad block on 1 disk in a RAID1 array?
But please read this whole thing for what I've tried already and possible errors in my methods. I've tried to be as detailed as possible, and I'm really hoping for some feedback
This is my situation: I have two 2TB disks (same model) set up... |
All these "poke the sector" answers are, quite frankly, insane. They risk (possibly hidden) filesystem corruption. If the data were already gone, because that disk stored the only copy, it'd be reasonable. But there is a perfectly good copy on the mirror.
You just need to have mdraid scrub the mirror. It'll notice the... | Linux - Repairing bad blocks on a RAID1 array with GPT |
1,431,168,210,000 |
Sometimes I have strange trouble booting my computer (which runs Debian), so I issued the "dmesg" command. In its output I saw a lot of errors. However, when I run an extended SMART test on the hard disks (using the "smartctl -t long /dev/sda" command), the result is that my disks are not broken.
What can be the reason of those err... |
First, keep in mind that SMART saying that your drive is healthy doesn't necessarily mean that the drive is healthy. SMART reports are an aid, not an absolute truth.
If all you are interested in is what to do, rather than why, then feel free to scroll down to the last few paragraphs; however, the interim text will tel... | according to SMART hard disk is not broken, but I have errors in dmesg |
1,431,168,210,000 |
I have an external HDD which does not report SMART information properly (it gives nonsense results).
As such, the smartd daemon (part of smartmontools) keeps giving false alarms on how the device might be failing.
In /etc/smartmontools/smartd.conf (I'm using the default, here) I see a bunch of options but none that r... |
You need to comment out the DEVICESCAN line, and put in lines for individual devices. Mine, for example, looks like this:
/dev/sda -d removable -n standby,8 -S on -o on -a \
-m root -M exec /usr/share/smartmontools/smartd-runner \
-r 194 -R 5 -R 183 -R 187 -s L/../../6/01
/dev/sdb -d removable -n sta... | How to get smartd to ignore an HDD? |
1,431,168,210,000 |
I have a server with three identical SATA/600 3TB drives: /dev/sda, /dev/sdb, /dev/sdc. The drives are partitioned, using GPT with three partitions each:
1 MB: Reserved partition for boot loader
1 GB: RAID1 /dev/md0 ( ext2 ( /boot ) )
3 TB: RAID1 /dev/md1 ( encrypted volume ( LVM ( volume group ( Swap, /, /etc, /home... |
# DEVICESCAN For all disks with SMART capabilities.
#
# -o off Turn off automatic running of offline tests. An offline test
# is a test which may degrade performance.
#
# -n standby Do not spin up the disk for the periodic 30 minute (default)
# SMART status polling, i... | Monitor disk health using smartd (in smartmontools) on a high availability software RAID 1 server |
1,431,168,210,000 |
I'd like to start storing the SMART data over time and see any trends based on disk ID/serial number: something that would let me, for example, just get the SMART information from the disks once a day and put it in a database. Is there already a tool for this on Linux, or do I have to roll my own?
|
There are already tools which can do this, often as part of a more general monitoring tool. One I find useful is Munin, which has a SMART plugin to graph the available attributes.
Munin is available in many distributions.
smartmontools itself contains a tool which can log attributes periodically, smartd. You might fi... | Are there any tools available to store SMART data over time? |
1,431,168,210,000 |
I use my recently bought 1T Seagate Backup Plus Slim external hard disk ID 0bc2:ab24 Seagate RSS LLC (NTFS filesystem) as a backup tool.
I want to run the Smartmontools software on this disk, but when I tried to enable it using
smartctl -s on -d scsi /dev/sdb (as a root)
I got the following response:
smartctl 6.6 201... |
Have you checked this question on askubuntu? https://askubuntu.com/questions/207573/how-to-enable-smart
If this fails, it could be that your USB enclosure doesn't support SMART; I experienced this with one enclosure of mine. In that case you would need to connect the drive directly via SATA or use a different enclosur...
1,431,168,210,000 |
I have an external USB-drive which is giving me the following output on running the command
$ smartctl /dev/sdb -H
on it:
SMART Status not supported: Incomplete response, ATA output registers missing
SMART overall-health self-assessment test result: PASSED
Warning: This result is based on an Attribute check.
Cou... |
I haven't seen this kind of warning before, but apparently it means that smartctl only evaluated the attribute table (see below), because there is no explicit health information from SMART, which is typically part of the ATA protocol. The response overall is considered not reliable in this...
1,431,168,210,000 |
In 2013 I used a program that analyzes the HDD and gives detailed and deep information about the hard disk. However, that program, CrystalDiskInfo, only works on Windows.
Is there a GUI that is similar to the CrystalDiskInfo which displays the information based on the S.M.A.R.T characteristics?
I am looking specifical... |
Yes, there’s GSmartControl, which provides a GUI showing the SMART information from all the drives attached to the system it’s run on.
In Mint it’s packaged as gsmartcontrol.
| is there a way to get detailed information about HDD in linux |
1,431,168,210,000 |
I've just set up CentOS 7 on a server with NVMe drives, and was suprised not to be able to run smartctl on them:
# smartctl -a /dev/nvme0
/dev/nvme0: Unable to detect device type
Please specify device type with the -d option.
# smartctl -a /dev/nvme0 -d nvme
/dev/nvme0: Unknown device type 'nvme'
Then I noticed that... |
OK, I found 2 alternatives.
Getting a precompiled binary that works on CentOS 7
Even though their packages page only offers Smartmontools 6.2 for CentOS 7, their SVN builds page offers binaries that do work on CentOS.
The proper archive has a .linux suffix, for example I chose:
smartmontools-6.6-0-20170503-r4430.linu... | Smartmontools with NVMe support on CentOS 7 |
1,431,168,210,000 |
I'm having a weird problem with my laptop. It works fine, but almost every hour the screen freezes. When I force a shutdown and start it again, I see problems similar to this:
The only solution I found is turning the laptop over for a few seconds before starting it again. This helps me see my Ubuntu work normally withou...
Backup Immediately
Go buy an additional external HDD/SSD and make a full CloneZilla Live backup right now! The dead giveaway that your drive is in imminent danger of failing is the following parameter:
184 End-to-End_Error 0x0032 096 096 099 Old_age Always FAILING_NOW 4
Especially as you've been ... | Random EXT4 FS errors |
1,431,168,210,000 |
I'm testing SMART support on some Compact Flash cards. After running smartctl -A on my card I'm getting the output below (also available here: http://pastebin.com/BX8GcLCX). The UPDATED column says offline; does anyone know exactly what that means? UPDATE: it means the data is only collected offline.
Also all the v... |
Most of the standard fields for SMART data were defined with only rotational, magnetic hard drives in mind. None of these really appear appropriate for your CF card.
Vendors are able to define their own attributes as well, and those are not standardized. smartmontools is distributed with a database (it's stored in /var/lib/smar...
1,431,168,210,000 |
I use Marvell 88SE9230 controller on my home Linux server. HP does have utility to setup raid and get some stats. But I'm wondering how to get any status from a Linux system. Quick googling shows only Linux drivers for accessing array itself on previous versions of kernel, but I want to know SMART status of drives.
Sm... |
Can confirm, same lack of support here (exact same output as the OP when attempting to get SMART stats off a device through the Marvell chipset).
:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9230 PCIe SATA 6Gb/s Controller (rev 11)
Linux fermmy 5.13.0-39-generic #44~20.04.1-Ubuntu SMP Thu Mar 24 16:43:35 UTC 2022 ... | Linux on Marvell 88SE9230. How to get stats? |
1,431,168,210,000 |
Ubuntu 17.04; ext4 filesystem on 4TB WD green SATA [WDC WD40EZRX-22SPEB0]
Mount (on startup, from fstab) failed with bad superblock. fsck reported / inode damaged, but repaired it. 99% of files restored (the few that are lost are available in backup). Repaired volume mounts and operates normally.
Looking at the SMAR... |
According to the SMART readings, the disk seems fine at the moment.
The exciting ones for disk sectors are these
5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0
197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 0
198 Offline_Uncorrectable 0x003... | ext4 : bad block fixed, but is this disk dying? |
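A hedged sketch of the rule of thumb in the answer above: the disk looks healthy as long as the raw values of the three sector-health attributes stay at zero. The values below are the ones quoted in the answer:

```shell
# Raw values of the three sector-health attributes from the smartctl output above.
reallocated=0            # Reallocated_Sector_Ct
pending=0                # Current_Pending_Sector
offline_uncorrectable=0  # Offline_Uncorrectable

if [ $((reallocated + pending + offline_uncorrectable)) -eq 0 ]; then
    echo "no bad-sector activity reported"
else
    echo "drive is remapping or failing sectors"
fi
```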
1,431,168,210,000 |
I have a zpool (3x 3TB Western Digital Red) that I scrub weekly for errors that comes up OK, but I have a recurring error in my syslog:
Jul 23 14:00:41 server kernel: [1199443.374677] ata2.00: exception Emask 0x0 SAct 0xe000000 SErr 0x0 action 0x0
Jul 23 14:00:41 server kernel: [1199443.374738] ata2.00: irq_stat 0x400... |
In the end, it's your data, so you would be the one to say whether the drive should be replaced or not. After all, it's just spinning rust.
Though, I should point out that it appears you've created a concat/RAID0 pool, so if a drive fails, you'll lose everything. And without a mirror, ZFS is unable to repair any faile... | ZFS - "Add. Sense: Unrecovered read error - auto reallocate failed" in syslog, but SMART data looks OK |
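The warning above — that a pool whose disks sit directly under the pool name is a stripe with no redundancy — can be checked from `zpool status` output. The excerpt below is invented (pool name `tank`, plain `sda`/`sdb` vdevs) just to show the shape of the check:

```shell
# Invented `zpool status` excerpt for a striped (non-redundant) pool.
status='  pool: tank
 state: ONLINE
config:
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          sda       ONLINE       0     0     0
          sdb       ONLINE       0     0     0'

# mirror-N / raidzN lines indicate redundancy; their absence means a stripe.
if echo "$status" | grep -qE 'mirror-[0-9]|raidz[123]?'; then
    echo "pool has redundancy: ZFS can self-heal"
else
    echo "no redundancy: a failed drive loses the pool"
fi
```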
1,431,168,210,000 |
For a long time, SMART data told me:
Now, I got this:
So my question is: What happened to that bad sector? How did it "go away" seemingly in its own?
|
The firmware of your drive mistakenly "thought" a certain sector electrical/mechanical parameters were out of normal but subsequent accesses made it "think" otherwise, so the error disappeared. I've seen it many times.
As the units of data are becoming physically smaller and smaller it's bound to happen more often tha... | How did my disk change from "One bad sector" to "Disk OK"? |
1,431,168,210,000 |
S.M.A.R.T. has found an unrecoverable read-error on one of my disks, but zpool status lists all disks as ONLINE (I.E. not DEGRADED).
Do you know why that might be? I thought ZFS would know of any errors as soon as anyone...
Do I need to run a scrub in order for it to recheck the status of all disks?
Can I have S.M.A.R... |
Do you know why that might be? I thought ZFS would know of any errors as soon as anyone...
Do I need to run a scrub in order for it to recheck the status of all disks?
Can I have S.M.A.R.T. automatically report to ZFS somehow?
No, it does not check all blocks all the time, it just makes sure that each written bloc... | Why does ZFS not report disk as degraded? |
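As the answer above notes, ZFS only verifies blocks as they are read, so a periodic `zpool scrub <pool>` is what forces a full recheck; afterwards the result shows up in the `scan:` line of `zpool status`. A sketch of reading that line out of captured output (the text is invented, since no real pool is available here):

```shell
# Invented `zpool status` excerpt after a completed scrub.
status='  pool: tank
 state: ONLINE
  scan: scrub repaired 0B in 02:11:33 with 0 errors on Sun May 10 03:11:34 2015'

# Pull out just the scrub summary.
echo "$status" | sed -n 's/^ *scan: *//p'
```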
1,431,168,210,000 |
My daily driver (Debian Bookworm RC3 + KDE Plasma) is configured to send me emails containing error notifications.
Today, I received the following email:
This message was generated by the smartd daemon running on:
host name: desk
DNS domain: local.lan
The following warning/error was logged by the smartd daemo... |
Try installing the nvme-cli package with
apt-get install nvme-cli
and then retrieve the errors using
nvme error-log /dev/nvme0
| How can I view the smart logs for an NVMe disk in Linux when smartclt is showing there are errors? |
1,431,168,210,000 |
I have a Raspberry Pi (running Raspbian) that is booting from a microSD card. Since it's acting as a home server, naturally I want to monitor the microSD card for errors. Unfortunately though, microSD cards don't support SMART like other disks I have, so I am unsure how to monitor the disk for errors.
How can I monito... |
You can replace smartctl -t long selftests with badblocks (no parameters). It performs a simple read-only test. You can run it while filesystems are mounted. (Do NOT use the so-called non-destructive write test).
# badblocks -v /dev/loop0
Checking blocks 0 to 1048575
Checking for bad blocks (read-only test): done
Pass... | How to test a disk that does not support SMART for errors? |
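In the same read-only spirit as the `badblocks` suggestion above, a plain sequential read of the whole device also surfaces I/O errors on media where `badblocks` isn't available. The sketch below runs against a small throwaway image file instead of a real card; on the actual device it would be something like `dd if=/dev/mmcblk0 of=/dev/null bs=4M` (device name assumed):

```shell
# Create a 4 MiB throwaway image standing in for the card.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=4 2>/dev/null

# Read every byte; a failing medium would surface read errors here.
if dd if="$img" of=/dev/null bs=1M 2>/dev/null; then
    echo "read test passed"
else
    echo "read errors encountered"
fi
rm -f "$img"
```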
1,431,168,210,000 |
I use smartctl -t long to execute a full surface test on a drive; smartctl returns immediately and the test runs in the background.
Then I use smartctl -H to view the result, but it doesn't say how long ago the reported test was done, or whether one is running at the moment.
Is there any way to know it?
|
smartctl -a will show you the relevant information, including in particular the drive’s age (in power-on hours) and the times at which the last self-tests ran; this will give you some idea of how long ago they ran. For example,
...
9 Power_On_Hours 0x0032 080 080 000 Old_age Always - 14... | When was smartctl last run? |
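The comparison the answer describes can be done by hand: subtract the `LifeTime(hours)` value of the most recent entry in `smartctl -l selftest` output from the current `Power_On_Hours` raw value. A sketch with made-up numbers:

```shell
# Made-up values: Power_On_Hours raw value and the LifeTime(hours) column
# of the newest self-test log entry.
power_on_hours=14200
last_test_hours=14050

echo "last self-test ran about $((power_on_hours - last_test_hours)) hours ago"
```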
1,431,168,210,000 |
I keep receiving mails from smartctl related to unreadable and uncorrectable sectors (these are the two errors that I get):
Device: /dev/sdb [SAT], 209 Currently unreadable (pending) sectors
Device: /dev/sdb [SAT], 200 Offline uncorrectable sectors
Is there a way to fix those errors? I also did a conveyance smart test ... |
Unreadable sectors are a major sign that the drive is on its way out. Drives can die without showing bad sectors beforehand, but if a drive starts showing this kind of error, it's almost guaranteed that it's not long for this world.
A 'short' SMART test doesn't actually verify the entire disk, so it can miss things t... | Smartctl utility giving uncorrectable and unreadable sectors error on HDD |
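A dry-run sketch of the fuller check the answer hints at — a long self-test followed by reading the self-test log. The script only prints the commands, since they need a real disk; `/dev/sdb` is the device from the question:

```shell
DEV=/dev/sdb

# Full-surface self-test; smartctl returns immediately and the drive
# runs the test in the background.
printf 'smartctl -t long %s\n' "$DEV"

# Afterwards, the results (and the LBA of the first error, if any) appear here.
printf 'smartctl -l selftest %s\n' "$DEV"
```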