1,300,135,516,000
I have two Linux servers. Let's say they are C and S, where C is a client of S. On my S machine, I type: $ netstat -an | grep ESTABLISHED tcp 0 0 192.168.1.220:3306 C:57010 ESTABLISHED Then I know that C is connected. On the C machine, I'd also like to know the name of the process which has opened port 57010 and is connecting to the S server. How can I do that? Of course, I have root permission on C.
One way is to say lsof -i:57010 -sTCP:ESTABLISHED. This walks the kernel's open file handle table looking for processes with an established TCP connection using that port. (Network sockets are file handles on *ix type systems.) You'd use -sTCP:LISTEN on the server side to filter out only the listener socket instead. Because of the way lsof works, it can only see processes your user owns unless you run it as root. It's also fairly inefficient, since a typical *ix system has a lot of file handles open at any given time. The netstat method given in another answer is faster and usually has lower access requirements. The lsof method has one great advantage, however: not all *ix type OSes have a netstat flag for including the process name in the output, whereas lsof has been ported to every *ix type OS you're likely to use. OS X's netstat is this way, for example. It has a -p option, but it does something different from netstat -p on Linux. For an uncommon port number like the one in your question, you can typically get away without adding lsof's -s flag, because a given machine is unlikely to have programs both connecting to the port and listening on it. It can be helpful to add it with port numbers like HTTP's 80, where it is likely you'll have multiple programs using that port at once. It's fortunate that the -s flag is optional in many situations, because that usage only works with lsof version 4.81 and newer. In older versions, -s meant something else entirely! That's a 2008 vintage change, but it can still bite unexpectedly. RHEL 5 ships with lsof 4.78, for example.
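On the client C, extracting just the process name from that lsof invocation can be sketched like this. The sample output below is fabricated for illustration (a real run would of course query the live fd table on C):

```shell
# On C you would run (port number taken from the question):
#   lsof -i :57010 -sTCP:ESTABLISHED
# Here we only sketch parsing such output; the sample line is made up.
sample='COMMAND   PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
mysql   12345 alice   3u  IPv4  99999      0t0  TCP C:57010->192.168.1.220:3306 (ESTABLISHED)'
# Skip the header line (NR > 1) and print the first column, the command name.
proc_name=$(printf '%s\n' "$sample" | awk 'NR > 1 { print $1 }')
echo "$proc_name"
```

The same awk filter works on real lsof output, since the COMMAND column is always first.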
How can I know the process name which is opening a tcp port?
1,300,135,516,000
Both show the number of files we're using, yet they both show different results. root@host [~]# lsof /home4 root@host [~]# lsof /home2 root@host [~]# lsof /home4 Then we ran fuser -uvm /home4 root 2621 Frce. (root)crond root 2635 Frce. (root)atd root 4554 frce. (root)S99firstboot root 4768 .rce. (root)firstboot root 6533 .rce. (root)setup root 11042 Frce. (root)leechprotect root 11049 Frce. (root)httpd root 17885 Frce. (root)httpd root 18163 .rce. (root)sshd root 18184 .rce. (root)bash nobody 18619 Frce. (nobody)httpd nobody 18679 Frce. (nobody)httpd nobody 18812 Frce. (nobody)httpd nobody 18821 Frce. (nobody)httpd nobody 18841 Frce. (nobody)httpd nobody 18843 Frce. (nobody)httpd nobody 18850 Frce. (nobody)httpd nobody 18869 Frce. (nobody)httpd nobody 18885 Frce. (nobody)httpd nobody 18901 Frce. (nobody)httpd nobody 18914 Frce. (nobody)httpd root 18932 .rc.. (root)flush-7:0 root 30728 Frce. (root)cphulkd root 30756 Frce. (root)cpsrvd-ssl root 30806 Frce. (root)cpdavd root 30833 .rce. (root)queueprocd root 30904 Frce. (root)tailwatchd root 30913 Frce. (root)cpanellogd Where did I go wrong?
The usage of the two is different. For lsof, to show opened files for certain paths only, put -- in front of the first path specified: lsof -- /home4 lsof -- /home4 /home2 lsof will show all opened files under the paths. fuser, on the other hand, shows the processes opening the file you specify: fuser -uv <filename> To show processes accessing a particular path, use -m: fuser -uvm /home3 fuser is more useful for identifying the process ids opening a particular file. lsof is useful for finding out all files opened by a particular process.
What's the difference between lsof and fuser -uvm
1,300,135,516,000
When I try to remount a partition as read-only, I get an error: /foo is busy. I can list all files open from /foo with lsof /foo, but that does not show me whether files are open read-only or read-write. Is there any way to list only files which are open as read-write?
To answer this question specifically, you could do: lsof /foo | awk 'NR==1 || $4~/[0-9]+u/' This will show files which are opened read-write under the mount point /foo. However, what you likely really want is to list all files which are open for writing. This would include files which are opened write-only as well as those opened read-write. For this you would do: lsof /foo | awk 'NR==1 || $4~/[0-9]+[uw]/' These commands should work provided FD is the 4th field in the output and none of the other fields are blank. This is the case for me on Debian when I include a path in the lsof command; however, if I don't, it prints an extra TID field which is sometimes blank (and will confuse awk). Mileage may vary between distros or lsof versions.
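A quick way to convince yourself what the FD-field regex keeps: a sketch run against fabricated lsof-style lines (the NR==1 clause in the commands above merely preserves the header; here we just count matches):

```shell
# Fabricated lsof-style lines: the FD field (4th column) ends in
# u (read/write), r (read-only) or w (write-only).
sample='vi      100 bob  4u  REG  8,1  1024  55  /foo/a
tail    101 bob  3r  REG  8,1  2048  56  /foo/b
logger  102 bob  5w  REG  8,1   512  57  /foo/c'
# Count lines opened read-write, then lines open for writing at all.
rw=$(printf '%s\n' "$sample" | awk '$4 ~ /[0-9]+u/    { n++ } END { print n+0 }')
wr=$(printf '%s\n' "$sample" | awk '$4 ~ /[0-9]+[uw]/ { n++ } END { print n+0 }')
echo "read-write: $rw, open for writing: $wr"
```

Only vi (4u) matches the first pattern; vi and logger (5w) match the second.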
lsof: show files open as read-write
1,300,135,516,000
I'm trying (with minimal success at present) to set up the Dovecot mail server on my Fedora 24 server. I've installed Dovecot and set the conf file up, all fine. But when I run: systemctl restart dovecot after editing the conf file, I get this message: Job for dovecot.service failed because the control process exited with error code. See "systemctl status dovecot.service" and "journalctl -xe" for details Running systemctl status dovecot.service gives me a different error: [root@fedora app]# systemctl status dovecot.service ● dovecot.service - Dovecot IMAP/POP3 email server Loaded: loaded (/usr/lib/systemd/system/dovecot.service; disabled; vendor preset: disabled) Active: failed (Result: exit-code) since Tue 2016-08-16 15:02:30 UTC; 37min ago Docs: man:dovecot(1) http://wiki2.dovecot.org/ Process: 11293 ExecStart=/usr/sbin/dovecot (code=exited, status=89) Process: 11285 ExecStartPre=/usr/libexec/dovecot/prestartscript (code=exited, status=0/SUCCESS) Aug 16 15:02:30 fedora dovecot[11293]: Error: service(imap-login): listen(*, 993) failed: Address already in use Aug 16 15:02:30 fedora dovecot[11293]: master: Error: service(imap-login): listen(*, 993) failed: Address already in use Aug 16 15:02:30 fedora dovecot[11293]: Error: service(imap-login): listen(::, 993) failed: Address already in use Aug 16 15:02:30 fedora dovecot[11293]: master: Error: service(imap-login): listen(::, 993) failed: Address already in use Aug 16 15:02:30 fedora dovecot[11293]: Fatal: Failed to start listeners Aug 16 15:02:30 fedora dovecot[11293]: master: Fatal: Failed to start listeners Aug 16 15:02:30 fedora systemd[1]: dovecot.service: Control process exited, code=exited status=89 Aug 16 15:02:30 fedora systemd[1]: Failed to start Dovecot IMAP/POP3 email server. Aug 16 15:02:30 fedora systemd[1]: dovecot.service: Unit entered failed state. Aug 16 15:02:30 fedora systemd[1]: dovecot.service: Failed with result 'exit-code'. I tried running lsof -i | grep 993 but this yields no processes.
Any idea how to fix this?
netstat is your friend when you're trying to troubleshoot a lot of network-related problems. To find a listening port, I would use netstat -tulpn | grep :<port number> For example, to find what pids are listening on port 22, I would run: netstat -tulpn | grep :22 tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 3062/sshd That tells me that sshd with pid 3062 is listening on port 22.
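The PID sits in the last column as "<pid>/<program>". A sketch of splitting that pair apart, run against a copy of the sample line above rather than live netstat output:

```shell
# Sample netstat -tulpn line (from the answer above).
line='tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 3062/sshd'
# $NF is the last field; split it on "/" into pid and program name.
pid=$(printf '%s\n' "$line"  | awk '{ split($NF, a, "/"); print a[1] }')
prog=$(printf '%s\n' "$line" | awk '{ split($NF, a, "/"); print a[2] }')
echo "$prog listens with pid $pid"
```

On a live system you would pipe `netstat -tulpn | grep :22` into the same awk program.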
Finding the process id by port number
1,300,135,516,000
This is a hypothetical question, not a problem I currently have. How do you detect which process has used a file now or in the past? To find out which process is accessing filename right now, lsof filename or fuser filename will do the work. But what if one wanted to know which processes accessed filename in the last 24 hours? One could get away with this ugly (*) hack while true; do fuser filename; sleep 1; done and let it run for 24 hours in another term. But is there actually a better system, without setting up a whole audit framework? (*) not to mention that fuser could fail to detect the access if it took less than 1 sec...
If your system has audit enabled, you can use that subsystem to audit access to specific files. For example, to audit files opening (or trying to open) the /etc/shadow file, you can use the following rule: auditctl -a exit,always -S open -F path=/etc/shadow Later on, you can then use this command to list the audited events corresponding to accesses to this file: ausearch -f /etc/shadow Note you need to be root to configure and query the audit system. See the auditctl(8) man page for more details on how to set rules and the ausearch(8) man page for details on how to query the audit logs. If you don't have audit enabled, you should look up information on how to do that specific to your Linux distribution, since details will vary.
How to detect which process has used a file now or in the past
1,300,135,516,000
I am trying to get a list of open files per process. I ran the following one-liner from PerlMonks: lsof | perl -lane '$x{"$F[0]:$F[1]"}++; END { print "$x{$_}\t$_" for sort {$x{$a}<=>$x{$b}} keys %x}' It returns the total count of open files, and the process name and pid. The result is sorted in ascending order, and the last line is as follows: 1065702 java:15437 So when I run lsof -p 15437, I would expect it to return the same number, however I'm getting: $ lsof -p 15437 | wc -l 403 Why the discrepancy? Addendum A third source of discrepancy: $ cd /proc/15437/fd $ ls -1 | wc -l 216
lsof without arguments gives you the information for all the threads of every process, while lsof -p "$pid" only lists open files for the process. To get the same number, you'd need: lsof -a -K -p "$pid" Also note that lsof doesn't only list files open on file descriptors; it also lists mmapped files (as seen in /proc/*/task/*/maps), the current working directory (as seen in /proc/*/task/*/cwd) and the root directory (/proc/*/task/*/root).
Discrepancy with lsof command when trying to get the count of open files per process
1,300,135,516,000
Our memcache daemon reports non-zero 'curr_connections'... $ telnet memcache-server 11211 Escape character is '^]'. stats ... STAT curr_connections 12 ... ...and yet, lsof shows no socket connections: $ ssh memcache-server # lsof -P -i -n | grep memcache memcached 1759 memcached 26u IPv4 11638 0t0 TCP *:11211 (LISTEN) memcached 1759 memcached 27u IPv6 11639 0t0 TCP *:11211 (LISTEN) memcached 1759 memcached 28u IPv4 11642 0t0 UDP *:11211 memcached 1759 memcached 29u IPv6 11643 0t0 UDP *:11211 I am guessing 'curr_connections' does not mean what I think it does...
I think your logic is correct: the curr_connections stat is the number of connections that are current. curr_connections - Number of open connections to this Memcached server, should be the same value on all servers during normal operation. This is something like the count of mySQL's "SHOW PROCESSLIST" result rows. Source: Memcached statistics (stats command) When I set up memcached I noticed that it always maintained 10 as the minimum curr_connections too. $ echo stats | nc localhost 11211 | grep curr_conn STAT curr_connections 10 But why 10? If you run memcached in verbose mode you'll notice the following output: $ memcached -vv ... <26 server listening (auto-negotiate) <27 server listening (auto-negotiate) <28 send buffer was 212992, now 268435456 <28 server listening (udp) <28 server listening (udp) <28 server listening (udp) <29 send buffer was 212992, now 268435456 <28 server listening (udp) <29 server listening (udp) <29 server listening (udp) <29 server listening (udp) <29 server listening (udp) If you tally up the listening servers (8) + 2 servers (auto-negotiated), you'll discover why there are 10 base servers to start, at least that's what I thought at first. But you need to dig deeper to better understand what's going on. It would appear that memcached is multi-threaded, and so the output that you were taking notice of isn't really how one would tally up the connections. Notice the threads The output of ps -eLf shows the threads: $ ps -eLf | grep memc saml 20036 4393 20036 0 6 20:07 pts/7 00:00:00 memcached -vv saml 20036 4393 20037 0 6 20:07 pts/7 00:00:00 memcached -vv saml 20036 4393 20038 0 6 20:07 pts/7 00:00:00 memcached -vv saml 20036 4393 20039 0 6 20:07 pts/7 00:00:00 memcached -vv saml 20036 4393 20040 0 6 20:07 pts/7 00:00:00 memcached -vv saml 20036 4393 20041 0 6 20:07 pts/7 00:00:00 memcached -vv There is one master node and 5 worker threads.
Here's what the output of lsof looks like when I've made 3 connections to memcached using the same method as you, telnet localhost 11211. So it would appear that each thread is maintaining a connection (or a reference to each connection) as they're made and kept open. As soon as the telnet connections are closed, they go away from this list. So how can we count the connections? If you wanted to tally up the connections, you could either subtract 10 from the curr_connections result, or you could run lsof and count the number of connections that are associated with the main PID. Notice this output: $ lsof |grep memcac | grep IPv memcached 20036 saml 26u IPv4 7833065 0t0 TCP *:memcache (LISTEN) memcached 20036 saml 27u IPv6 7833066 0t0 TCP *:memcache (LISTEN) memcached 20036 saml 28u IPv4 7833069 0t0 UDP *:memcache memcached 20036 saml 29u IPv6 7833070 0t0 UDP *:memcache memcached 20036 saml 30u IPv6 7962078 0t0 TCP localhost:memcache->localhost:38728 (ESTABLISHED) That last line is an active connection. So we could count them like this: $ lsof -p $(pgrep memcached) | grep "memcache->" | wc -l 1 But your output shows IPv4 & IPv6, what's up with that? To simplify the example even more, let's start memcached up and force it to listen only on a single IPv4 address, 192.168.1.20. In the above examples we were starting memcached up on all interfaces and connecting to it using localhost, so let's take another look. $ memcached -vv -l 192.168.1.20 ... <26 server listening (auto-negotiate) <27 send buffer was 212992, now 268435456 <27 server listening (udp) <27 server listening (udp) <27 server listening (udp) <27 server listening (udp) Notice that we're only getting half as many? Previously we had 2 auto-negotiated servers and 8 UDP; this time we have 1 auto and 4 UDP. Why? Well, we've told memcached to listen only on the IPv4 interface; previously it was listening on everything, IPv4 and IPv6 alike.
We can convince ourselves of this by attempting to connect to the server on localhost: $ telnet localhost 11211 Trying ::1... telnet: connect to address ::1: Connection refused Trying 127.0.0.1... telnet: connect to address 127.0.0.1: Connection refused See, we can't connect. But we can using the IPv4 address: $ telnet 192.168.1.20 11211 Trying 192.168.1.20... Connected to 192.168.1.20. Escape character is '^]'. What does stats say now? STAT curr_connections 5 See? The curr_connections is showing the number 5 (1 auto + 4 UDP). 4 UDPs? Why are we running 4 UDPs? This would appear to be a default setting in memcached. You can see the settings using the stats settings command when you telnet to the server: stats settings STAT maxbytes 67108864 STAT maxconns 1024 STAT tcpport 11211 STAT udpport 11211 STAT inter 192.168.1.20 STAT verbosity 2 ... STAT num_threads 4 STAT num_threads_per_udp 4 ... Can we change that value? Sure, it's the -t # switch to memcached. $ memcached -vv -l 192.168.1.20 -t 1 ... <11 server listening (auto-negotiate) <12 send buffer was 212992, now 268435456 <12 server listening (udp) So now we only have the main listener (auto) and 1 UDP thread. If we check stats now: STAT curr_connections 2 Incidentally, we can't set the number of threads to 0. $ memcached -vv -l 192.168.1.20 -t 0 Number of threads must be greater than 0 So the lowest we can get curr_connections to go is 2.
If we open up 6 telnets (5 from ourself - greeneggs and 1 from another remote server named skinner) we'll see the following in our lsof output: $ lsof -p $(pgrep memcached) | grep ":memcache" memcached 24949 saml 11u IPv4 847365 0t0 TCP greeneggs.bubba.net:memcache (LISTEN) memcached 24949 saml 12u IPv4 847366 0t0 UDP greeneggs.bubba.net:memcache memcached 24949 saml 13u IPv4 855914 0t0 TCP greeneggs.bubba.net:memcache->greeneggs.bubba.net:48273 (ESTABLISHED) memcached 24949 saml 14u IPv4 872538 0t0 TCP greeneggs.bubba.net:memcache->skinner.bubba.net:41352 (ESTABLISHED) memcached 24949 saml 15u IPv4 855975 0t0 TCP greeneggs.bubba.net:memcache->greeneggs.bubba.net:48298 (ESTABLISHED) memcached 24949 saml 16u IPv4 855992 0t0 TCP greeneggs.bubba.net:memcache->greeneggs.bubba.net:48305 (ESTABLISHED) memcached 24949 saml 17u IPv4 859065 0t0 TCP greeneggs.bubba.net:memcache->greeneggs.bubba.net:48316 (ESTABLISHED) memcached 24949 saml 18u IPv4 859077 0t0 TCP greeneggs.bubba.net:memcache->greeneggs.bubba.net:48326 (ESTABLISHED) So we still have the auto + the 1 UDP and 6 other connections. Our stats command shows this: STAT curr_connections 8 So no surprise there.
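The counting trick generalizes: established client connections are exactly the lines whose NAME column contains an -> arrow. A sketch against a fabricated excerpt of the output above (a live run would pipe `lsof -p $(pgrep memcached)` instead):

```shell
# Four sample lsof lines: one TCP listener, one UDP socket,
# and two established client connections.
sample='memcached 24949 saml 11u IPv4 847365 0t0 TCP greeneggs.bubba.net:memcache (LISTEN)
memcached 24949 saml 12u IPv4 847366 0t0 UDP greeneggs.bubba.net:memcache
memcached 24949 saml 13u IPv4 855914 0t0 TCP greeneggs.bubba.net:memcache->greeneggs.bubba.net:48273 (ESTABLISHED)
memcached 24949 saml 14u IPv4 872538 0t0 TCP greeneggs.bubba.net:memcache->skinner.bubba.net:41352 (ESTABLISHED)'
# grep -c counts matching lines, i.e. the live client connections.
established=$(printf '%s\n' "$sample" | grep -c 'memcache->')
echo "$established"
```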
Memcache 'stats' reports non-zero 'curr_connections' - but lsof shows no socket connections
1,300,135,516,000
Here is an abridged output of lsof -i tcp:XXXXXX: COMMAND PID USER FD TYPE DEVICE python3 9336 root 3u IPv4 3545328 python3 9336 root 5u IPv4 3545374
$ man 8 lsof | grep -A 10 '^\s\{7\}DEVICE' DEVICE contains the device numbers, separated by commas, for a character special, block special, regular, directory or NFS file; or ``memory'' for a memory file system node under Tru64 UNIX; or the address of the private data area of a Solaris socket stream; or a kernel reference address that identifies the file (The kernel reference address may be used for FIFO's, for example.); or the base address or device name of a Linux AX.25 socket device. Usually only the lower thirty two bits of Tru64 UNIX kernel addresses are displayed. Or type man 8 lsof. Inside man you can search with /. Then type a regex directly (without a space). In your case, ^\s*DEVICE will jump you straight to DEVICE.
What does the DEVICE field stand for in lsof? [closed]
1,300,135,516,000
I am using the command below and trying to separate the columns. I only want to get the PID, to use it in my Python script. I can easily get this line by line, but then how do I separate it into columns in a non-hacky way? I can easily split by space, but let's face it, that is a terrible idea. Any suggestions? root@python-VirtualBox:/var/python# lsof | grep TCP lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs Output information may be incomplete. COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME sshd 3449 root 3u IPv4 24248 0t0 TCP *:22 (LISTEN) sshd 3449 root 4u IPv6 24257 0t0 TCP *:22 (LISTEN)
I think awk is good for this because it splits fields for you: lsof | awk '$8 == "TCP" { print $2 }' If field 8 is "TCP", then print field 2.
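Applied to the sample output from the question, the filter looks like this; a sketch with the output inlined rather than piped live from lsof:

```shell
# The two sshd lines from the question, header included.
sample='COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
sshd 3449 root 3u IPv4 24248 0t0 TCP *:22 (LISTEN)
sshd 3449 root 4u IPv6 24257 0t0 TCP *:22 (LISTEN)'
# NODE is the 8th whitespace-separated field; PID is the 2nd.
# sort -u collapses the duplicate PID from the two listener lines.
pids=$(printf '%s\n' "$sample" | awk '$8 == "TCP" { print $2 }' | sort -u)
echo "$pids"
```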
separate lsof output by column
1,300,135,516,000
I'm working on a Linux (Scientific Linux CERN SLC release 6.9 (Carbon)) machine on which I am unable to install programs and on which the lsof or fuser commands are not available. I'm trying to remove an NFS dotfile on this machine but I keep getting the Device or resource busy error so I'd like to find out which process (I suspect it might be one I have previously started with nohup) still has a file descriptor to this file. How can I achieve this?
Use /proc/<PID>/fd. Example: we want to figure out which pid has /var/log/audit/audit.log open. fuser tells us it's pid 255. [root@instance-1 ~]# fuser /var/log/audit/audit.log /var/log/audit/audit.log: 255 [root@instance-1 ~]# So using a non-fuser solution: [root@instance-1 ~]# find /proc/*/fd -ls|grep /var/log/audit/audit.log 188652 0 l-wx------ 1 root root 64 Jul 1 06:22 /proc/255/fd/5 -> /var/log/audit/audit.log [root@instance-1 ~]#
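The same idea can be sketched with readlink instead of find -ls: open a file from the current shell, then scan every /proc/<PID>/fd symlink for one pointing at it. The temporary file name is created on the fly; everything else is standard /proc machinery (Linux only):

```shell
target=$(mktemp)
exec 9>"$target"                 # hold the file open from this very shell
found=""
for fd in /proc/[0-9]*/fd/*; do
  if [ "$(readlink "$fd" 2>/dev/null)" = "$target" ]; then
    pid=${fd#/proc/}             # strip the /proc/ prefix...
    found="$found ${pid%%/*}"    # ...and keep only the PID component
  fi
done
exec 9>&-                        # close the descriptor again
rm -f "$target"
echo "holders:$found"            # includes this shell's own PID
```

Unreadable fd directories (other users' processes) are skipped silently, which is also why this needs root to see everything, just like lsof.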
Find processes which have a file open without lsof or fuser
1,300,135,516,000
I have CentOS 6.7 running a Java application via a wrapper programme. So first I ran this: lsof -p 15200 | wc -l and I got the result immediately: 200. Next I ran this: lsof -p 15232 | wc -l It keeps taking too long and never generates any results. What other method can I use to get the total open files? I need to know because my system keeps hanging after a certain time. I may need to increase the open file limit.
You can get the number of files opened by a process identified by a PID, for instance 15232, by doing: ls /proc/15232/fd | wc -l (use plain ls here: with ls -l, the leading "total" line would inflate the count by one) from the Debian lists: I am trying to figure out the meaning of: /proc/$PID/fd/* files. These are links that point to the open files of the process whose pid is $PID. Fd stands for "file descriptor", an integer that identifies any program input or output in UNIX-like systems. This is also where the lsof command gets its information. This is a feature of the Linux kernel, and is distribution agnostic.
lsof command taking too long for a particular process id
1,300,135,516,000
I am looking at the output of lsof -i and I am getting confused! For example, the following connection between my java process and the database shows as IPv6: [me ~] % lsof -P -n -i :2315 -a -p xxxx COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME java xxxx me 93u IPv6 2499087197 0t0 TCP 192.168.0.1:16712->192.168.0.2:2315 (ESTABLISHED) So the output type is IPv6 but it clearly shows an IPv4 address in the NAME column. Furthermore, the connection was configured with an IPv4 address! (In this example, 192.168.0.2) Thanks very much for any insight!
In Linux, IPv6 sockets may be both IPv4 and IPv6 at the same time. An IPv6 socket may also accept packets from an IPv4-mapped IPv6 address. This feature is controlled by the IPV6_V6ONLY socket option, whose default is controlled by the net.ipv6.bindv6only sysctl (/proc/sys/net/ipv6/bindv6only). Its default is 0 (i.e. it's off) on most Linux distros. This could be easily reproduced with: [prompt] nc -6 -l 9999 & nc -4 localhost 9999 & [4] 10892 [5] 10893 [prompt] lsof -P -n -i :9999 COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME nc 10892 x 3u IPv6 297229 0t0 TCP *:9999 (LISTEN) nc 10892 x 4u IPv6 297230 0t0 TCP 127.0.0.1:9999->127.0.0.1:41472 (ESTABLISHED) nc 10893 x 3u IPv4 296209 0t0 TCP 127.0.0.1:41472->127.0.0.1:9999 (ESTABLISHED) [prompt] kill %4 %5 The client socket is IPv4, and the server socket is IPv6, and they're connected.
Why does lsof indicate my IPv4 socket is IPv6?
1,521,959,301,000
I found a Unix socket being used in the output of the lsof command: COMMAND PID TID TASKCMD USER FD TYPE DEVICE SIZE/OFF NODE NAME screen 110970 username 4u unix 0xffff91fe3134c400 0t0 19075659 socket The "DEVICE" column holds what looks like a memory address. According to the lsof man page: DEVICE contains the device numbers, separated by commas, for a character special, block special, regular, directory or NFS file; or ``memory'' for a memory file system node under Tru64 UNIX; or the address of the private data area of a Solaris socket stream; or a kernel reference address that identifies the file (The kernel reference address may be used for FIFO's, for example.); or the base address or device name of a Linux AX.25 socket device. Usually only the lower thirty two bits of Tru64 UNIX kernel addresses are displayed. My question is, which of these am I looking at with the value 0xffff91fe3134c400? Also, how can I interact with it? I know I can use netcat to connect to a Unix domain socket, but from reading examples online it looks like you have to specify a file.
To find the file associated with the UNIX socket, you can use the +E flag for lsof to show the endpoint of the socket. From the man pages: +|-E +E specifies that Linux pipe, Linux UNIX socket and Linux pseudoterminal files should be displayed with endpoint information and the files of the endpoints should also be displayed For instance, here's an example from a question where someone was trying to find out the endpoint of fd 6 of a top process: # lsof -d 6 -U -a +E -p $(pgrep top) COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME dbus-daem 874 messagebus 12u unix 0xffff9545f6fee400 0t0 366381191 /var/run/dbus/system_bus_socket type=STREAM ->INO=366379599 25127,top,6u top 25127 root 6u unix 0xffff9545f6fefc00 0t0 366379599 type=STREAM ->INO=366381191 874,dbus-daem,12u The -U flag for lsof shows only Unix socket files. Notice that you will only see the name of the socket file for the listening processes. The other process will not show the name of the unix socket file, but with +E lsof will show the inode of the listening socket file, and will also add a line for the process listening to this socket (along with the socket file name). In this example, notice that we only asked lsof to show the file descriptors of the top command, but lsof added another line for dbus-daem, which is the listening process, and the socket file it listens to is /var/run/dbus/system_bus_socket. Pid 25127 (inode 366379599) interacts with inode 366381191 (type=STREAM ->INO=366381191 874,dbus-daem,12u). Inode 366381191 belongs to pid 874, and you can see this process has the fd that is the listening side for the second process (/var/run/dbus/system_bus_socket type=STREAM ->INO=366379599 25127,top,6u), and there you can see that the socket file name is /var/run/dbus/system_bus_socket. Also, how can I interact with it?
Now that you have the filename of the UNIX socket, you can interact with it in various ways, such as: socat - UNIX-CONNECT:/run/dbus/system_bus_socket nc -U /run/dbus/system_bus_socket For additional information: How can I communicate with a Unix domain socket via the shell on Debian Squeeze?
Interacting with Unix Socket found in lsof
1,521,959,301,000
Finding the PID of an established connection is trivial using netstat or lsof. However, I have a process which is creating a connection every 60 seconds to our database and locking it up by maxing out the failed connection attempt limit. I can increase the failed connection limit to something extremely high on the database, or I can try to track down what is making the connection, and I have chosen the latter. Based on tcpdump/wireshark, I can see that what is happening is that a connection is established and then the connecting server immediately closes the connection before the server can even respond. What I don't know is why. The first step is to find out what PID is opening the connection. Unfortunately, this seems easier said than done. The problem is that when a connection goes into the TIME_WAIT state, it is no longer associated with a PID. Since my connection has a lifetime of less than a tenth of a second, is there any way to record this information? netstat and lsof appear to be able to poll every second, but this simply isn't fast enough for the connection attempt I am dealing with. Is there a hook that I can connect to to dump this information to a log? Or is my only option to brute force it with a loop and some coding? I use CentOS 6.
Consider using SystemTap. It is a dynamic instrumentation engine that patches the kernel so you can trace any in-kernel event, such as opening a socket. It is actively developed by Red Hat, so it is supported on CentOS. Installing To install SystemTap on CentOS 6: Enable the debuginfo repository: sed -i 's/^enabled=0/enabled=1/' /etc/yum.repos.d/CentOS-Debuginfo.repo Install SystemTap: yum install systemtap Install debuginfo packages for the kernel. It can be done manually, but there is a tool that can do it automatically: stap-prep Tracing SystemTap has no tapset probe for TCP connections, but you may bind directly to kernel functions! You can also do it on the socket level. I.e. create a script called conn.stp: probe kernel.function("tcp_v4_connect") { printf("connect [%s:%d] -> %s:%d\n", execname(), pid(), ip_ntop(@cast($uaddr, "struct sockaddr_in")->sin_addr->s_addr), ntohs(@cast($uaddr, "struct sockaddr_in")->sin_port)); } This will give you the following output: # stap conn.stp connect [nc:2552] -> 192.168.24.18:50000 connect [nc:2554] -> 192.168.24.18:50000 connect [nc:2556] -> 192.168.24.18:50000 However, tracking disconnection events seems to be trickier.
log PID of each connection attempt
1,521,959,301,000
The lsof manpage says the following about the TYPE column. TYPE is the type of the node associated with the file - e.g., GDIR, GREG, VDIR, VREG, etc. Can someone please explain (or point me to a link which explains) what these mean. I have tried googling on these but all the links take me to the lsof man page only. If you find a link, do tell me how you googled it :)
Types starting with V are virtual types. That is, there is no corresponding inode on any physical disk, but only a vnode in a virtual filesystem (like /proc). It seems those types only belong to BSD-like systems (AIX, Darwin, FreeBSD, HPUX, Sun etc.) and won't occur on a Linux system. As with the non-virtual types, DIR stands for directory and REG for a regular file. I couldn't find the meaning of GDIR and GREG, as they don't even appear in the lsof source code. But I guess they just stand for the non-virtual (generic?) directories and files.
What do GDIR, GREG, VDIR, VREG mean in lsof output
1,521,959,301,000
I use wget to download files (most are zip files) automatically for me during the night. However, sometimes in the morning I find that a few files cannot be unzipped. I don't know why this is happening; perhaps something is wrong with the remote server. I want to write a script to test the zip files in my download folder periodically using 'unzip -t', but I don't want to test files that are still being downloaded. So how can I tell if a file is being used by wget?
You can use fuser, or lsof. fuser foo.zip The output looks like so: $ fuser archlinux-2013.02.01-dual.iso /home/chris/archlinux-2013.02.01-dual.iso: 22506 $ awk -F'\0' '{ print $1 }' /proc/22506/cmdline wget
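/proc/<pid>/cmdline is NUL-separated, which is why the awk invocation above uses -F'\0' to pull out the first field (the program name). The same idea can be sketched with tr, demonstrated here on the current shell's own cmdline (so the exact name printed depends on how the shell was started):

```shell
# Turn NULs into newlines and keep the first field: the program name.
argv0=$(tr '\0' '\n' < /proc/$$/cmdline | head -n 1)
echo "$argv0"
```

For the question's use case you would substitute the PID that fuser reported and check whether the result is wget.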
How to tell if a file is being downloaded by wget?
1,521,959,301,000
I have a Raspberry Pi running Debian Jessie. I have pi-hole installed to block ad-serving domains (https://pi-hole.net). Going through the logs, I noticed a lot of queries from a Chinese domain. lsof -i shows me the following list that I feel is suspected: COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME sshd 1742 root 3u IPv4 16960 0t0 TCP raspberrypi:ssh->116.31.116.47:50600 (ESTABLISHED) sshd 1743 sshd 3u IPv4 16960 0t0 TCP raspberrypi:ssh->116.31.116.47:50600 (ESTABLISHED) sshd 1774 root 3u IPv4 16990 0t0 TCP raspberrypi:ssh->183.214.141.105:56265 (ESTABLISHED) sshd 1775 sshd 3u IPv4 16990 0t0 TCP raspberrypi:ssh->183.214.141.105:56265 (ESTABLISHED) sshd 1869 root 3u IPv4 17068 0t0 TCP raspberrypi:ssh->116.31.116.47:33525 (ESTABLISHED) sshd 1870 sshd 3u IPv4 17068 0t0 TCP raspberrypi:ssh->116.31.116.47:33525 (ESTABLISHED) sshd 1910 root 3u IPv4 17122 0t0 TCP raspberrypi:ssh->116.31.116.47:35816 (ESTABLISHED) sshd 1911 sshd 3u IPv4 17122 0t0 TCP raspberrypi:ssh->116.31.116.47:35816 (ESTABLISHED) sshd 1931 root 3u IPv4 17158 0t0 TCP raspberrypi:ssh->116.31.116.47:49492 (ESTABLISHED) sshd 1932 sshd 3u IPv4 17158 0t0 TCP raspberrypi:ssh->116.31.116.47:49492 (ESTABLISHED) sshd 1935 root 3u IPv4 17163 0t0 TCP raspberrypi:ssh->183.214.141.105:23828 (ESTABLISHED) sshd 1936 sshd 3u IPv4 17163 0t0 TCP raspberrypi:ssh->183.214.141.105:23828 (ESTABLISHED) sshd 1937 root 3u IPv4 17168 0t0 TCP raspberrypi:ssh->116.31.116.47:53628 (ESTABLISHED) sshd 1938 sshd 3u IPv4 17168 0t0 TCP raspberrypi:ssh->116.31.116.47:53628 (ESTABLISHED) sshd 1940 root 3u IPv4 17176 0t0 TCP raspberrypi:ssh->116.31.116.47:57858 (ESTABLISHED) sshd 1941 sshd 3u IPv4 17176 0t0 TCP raspberrypi:ssh->116.31.116.47:57858 (ESTABLISHED) sshd 1944 root 3u IPv4 17194 0t0 TCP raspberrypi:ssh->183.214.141.105:28355 (ESTABLISHED) sshd 1945 sshd 3u IPv4 17194 0t0 TCP raspberrypi:ssh->183.214.141.105:28355 (ESTABLISHED) I already changed my password, restarted my Pi and checked for any unknown users (which there 
were none). How do I go about making my Pi secure again?
There may or may not be a security breach. It may just be someone trying to brute-force passwords. If they connect, try a password, fail, and then neither try another nor close the connection, you will see these connections, which will eventually be closed by sshd. /var/log/auth.log should have some information on the login attempts. The last command may show you successful logins.
Suspicious network activity: sshd process showing up with lsof
1,521,959,301,000
I need a list of the opened files, ports and so on by a process. Now whenever I use lsof -p <PID> I can parse the output, in a python script, but the problem is that sometimes I am getting some columns that are empty. Therefore I am getting bad results while parsing the output. I know that I can manually look for the FDs in /proc for each process, but this has to be in POSIX standard. So my question is, is there any way to make lsof print just the list of the opened files and nothing else? I am thinking of something like the user-specific ps command (ps -eopid,user,comm,command), where we can specify what columns come in the output. In this case I want to specify only the 'Name' column in the lsof -p <PID> output.
lsof has a "post-processable" output format with the -F option (see the OUTPUT FOR OTHER PROGRAMS section in the manual).

lsof -nPMp "$pid" -Fn | sed '
  \|^n/|!d
  s/ type=STREAM$//; t end
  s/ type=DGRAM$//; t end
  s/ type=SEQPACKET$//
  :end
  s|^n||'

will list the open files that resolve to a path on the file system. -nPM disables some of the processing that lsof does by default and which we don't care about here, like resolving IP addresses, port or RPC names. -p "$pid" specifies the process whose open files to list. -Fn selects field output and asks for the name field. The sed script post-processes that output to select only the part we're interested in: \|^n/|!d skips anything that doesn't start with n/; the s/ type=...$// substitutions remove those strings at the end of the line and branch to the end label if successful; :end is that label; s|^n|| removes the leading n character that lsof inserts to identify the field being output. However, note that non-printable characters in file names are encoded (like \n for newline, ^[ for ESC...) in an ambiguous way (^[ could mean either the two characters ^ and [ or an actual ESC). Also, for deleted files, on Linux at least, you'll still get a file path, but with (deleted) appended; again, there is no way to discriminate between a deleted file and a file whose name ends in (deleted). Looking at the link count will not necessarily help, as the deleted file could be linked elsewhere. The same goes for the removal of type=*, which we do for Unix domain sockets but which may actually have occurred in the file name. So while it will work in most cases, you can't post-process that output reliably in the general case. Not to mention that lsof itself may fail to parse the information returned by the kernel correctly, or that the kernel may fail to provide that information in a reliably parseable format.
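To see what the pipeline does without a live process, you can feed the sed script canned field output. The lines below are fabricated in the shape lsof -Fn emits (one p-prefixed PID line, then n-prefixed name lines):

```shell
# Fabricated `lsof -Fn`-style output: p<PID>, then n<name> lines.
# Non-path entries (the PID line, the TCP socket) should be dropped,
# and the " type=STREAM" suffix on the Unix socket stripped.
sample='p1234
n/usr/bin/python3
n/tmp/data.log
n/run/user/1000/bus type=STREAM
nlocalhost:631->localhost:42996'

paths=$(printf '%s\n' "$sample" | sed '
  \|^n/|!d
  s/ type=STREAM$//; t end
  s/ type=DGRAM$//; t end
  s/ type=SEQPACKET$//
  :end
  s|^n||')
printf '%s\n' "$paths"
```

In a script you would replace the sample with the real command, `lsof -nPMp "$pid" -Fn`.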
Custom lsof output
1,521,959,301,000
When I run lsof I see the output java 1752736 user 9995u sock 0,8 0t0 1432559505 protocol: TCPv6 java 1752736 user *527u sock 0,8 0t0 1444900878 protocol: TCPv6 What does the * in front of the file descriptor indicate?
Newer versions of the man page explain it (addressing this issue). Checking out its roff source code (emphasis mine): The FD column contents constitutes a single field for parsing in post-processing scripts. FD numbers larger than 9999 are abbreviated to a * followed by the last three digits. E.g., 10001 appears as *001.
lsof output file descriptor with asterisk not documented
1,521,959,301,000
I know that lsof can list all files which are being opened by running processes. If a process opens a file and then terminates quickly, I don't think I can catch the file that is opened by the process with lsof, because the process exits too fast. So I'm looking for a tool (named XXX) that lets me do the following: XXX ./my_process args And the output of the command should be like this: file1 file2 file3 meaning that my_process opens three files: file1, file2 and file3 while running.
You could use strace: strace -e trace=open -o trace.log ./my_process args
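One caveat worth adding: on current glibc, most opens are issued through the openat(2) syscall, so tracing only open may show nothing; trace both, and add -f to follow children: strace -f -e trace=open,openat -o trace.log ./my_process args. The sketch below parses a fabricated trace.log excerpt (the lines are made up for the demo) and extracts the paths of the calls that succeeded:

```shell
# Fabricated strace output lines; a real trace.log has the same shape.
sample='openat(AT_FDCWD, "file1", O_RDONLY) = 3
openat(AT_FDCWD, "file2", O_WRONLY|O_CREAT, 0644) = 4
openat(AT_FDCWD, "missing", O_RDONLY) = -1 ENOENT (No such file or directory)
open("file3", O_RDONLY) = 5'

# Skip failed calls (return value -1), then take the quoted path,
# which is the second field when splitting on double quotes.
files=$(printf '%s\n' "$sample" \
  | awk -F'"' '/= -1 /{next} /^open(at)?\(/{print $2}')
printf '%s\n' "$files"
```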
Is it possible to list all files that are opened by a process [duplicate]
1,521,959,301,000
I've noticed, if a file is renamed, lsof displays the new name. To test it out, created a python script: #!/bin/python import time f = open('foo.txt', 'w') while True: time.sleep(1) Saw that lsof follows the rename: $ python test_lsof.py & [1] 19698 $ lsof | grep foo | awk '{ print $2,$9 }' 19698 /home/bfernandez/foo.txt $ mv foo{,1}.txt $ lsof | grep foo | awk '{ print $2,$9 }' 19698 /home/bfernandez/foo1.txt Figured this may be via the inode number. To test this out, I created a hard link to the file. However, lsof still displays the original name: $ ln foo1.txt foo1.link $ stat -c '%n:%i' foo* foo1.link:8429704 foo1.txt:8429704 $ lsof | grep foo | awk '{ print $2,$9 }' 19698 /home/bfernandez/foo1.txt And, if I delete the original file, lsof just lists the file as deleted even though there's still an existing hard link to it: $ rm foo1.txt rm: remove regular empty file ‘foo1.txt’? y $ lsof | grep foo | awk '{ print $2,$9,$10 }' 19698 /home/bfernandez/foo1.txt (deleted) So finally... My question What method does lsof use to keep track open file descriptors that allow it to: Keep track of filename changes Not be aware of existing hard links
You are right in assuming that lsof uses the inode from the kernel's name cache. Under Linux platforms, the path name is provided by the Linux /proc file system. The handling of hard links is better explained in the FAQ: 3.3.4 Why doesn't lsof report the "correct" hard linked file path name? When lsof reports a rightmost path name component for a file with hard links, the component may come from the kernel's name cache. Since the key which connects an open file to the kernel name cache may be the same for each differently named hard link, lsof may report only one name for all open hard-linked files. Sometimes that will be "correct" in the eye of the beholder; sometimes it will not. Remember, the file identification keys significant to the kernel are the device and node numbers, and they're the same for all the hard linked names. The fact that the deleted node is displayed at all is also specific to Linux (and later builds of Solaris 10, according to the same FAQ).
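The device+inode keying is easy to verify directly. This sketch creates a file and a hard link to it in a throwaway directory and compares inode numbers (GNU stat assumed, as on Linux):

```shell
# Hard links share one inode, which is why lsof (keyed on device+inode)
# can only ever report one of the names.
tmpdir=$(mktemp -d)
echo data > "$tmpdir/foo1.txt"
ln "$tmpdir/foo1.txt" "$tmpdir/foo1.link"

ino1=$(stat -c %i "$tmpdir/foo1.txt")
ino2=$(stat -c %i "$tmpdir/foo1.link")
links=$(stat -c %h "$tmpdir/foo1.txt")   # link count is now 2
echo "inode: $ino1 = $ino2, links: $links"

rm -r "$tmpdir"
```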
How does `lsof` keep track of open file descriptors' filenames?
1,521,959,301,000
I was going through the questions of this site. In this particular question, I see the command lsof being used to list the files that are open for a particular user. I ran the below command in my terminal. lsof -a -u root -d txt I am seeing a long output which are completely irrelevant (at least to me). I am finding it hard to understand the output produced. This is mapping-d 3992 root txt REG 8,5 29728 7169187 /usr/libexec/mapping-daemon clock-app 4005 root txt REG 8,5 88048 7169216 /usr/libexec/clock-applet mixer_app 4007 root txt REG 8,5 53484 7169197 /usr/libexec/mixer_applet2 gnome-scr 4078 root txt REG 8,5 168628 1337742 /usr/bin/gnome-screensaver notificat 4081 root txt REG 8,5 34732 1324168 /usr/libexec/notification-daemon gnome-ter 4219 root txt REG 8,5 309400 1318348 /usr/bin/gnome-terminal gnome-pty 4221 root txt REG 8,5 12384 1899888 /usr/lib/vte/gnome-pty-helper bash 4222 root txt REG 8,5 735932 196459 /bin/bash firefox 15551 root txt REG 8,5 121288 2424613 /usr/lib/firefox/firefox npviewer. 15790 root txt REG 8,5 174364 1996912 /usr/lib/nspluginwrapper/npviewer.bin bash 15908 root txt REG 8,5 735932 196459 /bin/bash lsof 16014 root txt REG 8,5 129820 1323233 /usr/sbin/lsof lsof 16015 root txt REG 8,5 129820 1323233 /usr/sbin/lsof I was expecting, if I had opened a txt file as the root user, only that file's information would be displayed, if I run the lsof command. Can someone please help me in understanding what this command actually produces as the output?
lsof lists all files that are accessed by a program one way or another. The fourth column (FD) describes the way in which the program is accessing the file. Here are some common cases (there are others): A number: the file is opened by the process, and the number is the file descriptor. Letters after the number indicate the opening mode (e.g. r for read-only, w for write-only, u for both). cwd: the file is the process's current working directory. txt: the file is the process's executable. mem: the file is mapped to the process's virtual memory space. The descriptor type txt has nothing to do with “text files” in the sense of containing human-readable text or of having a name ending with .txt. Here “text” is an odd bit of terminology referring to executable code, as in the text segment of an executable file, which is the section that contains the code. This strange name comes from a now-defunct programming community which predates Unix (General Electric, whose other naming legacy in the Unix world is the “GECOS field”). Thus what you're seeing is the main executable of each process.
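As a rough illustration, here is a small awk classifier run over a few fabricated lsof rows (invented for the demo), tallying numeric descriptors separately from the cwd/txt/mem pseudo-descriptors:

```shell
# Fabricated lsof rows; field 4 is the FD column described above.
sample='bash  4222 root cwd DIR 8,5 4096 2 /root
bash  4222 root txt REG 8,5 735932 196459 /bin/bash
bash  4222 root mem REG 8,5 14696 7208 /lib/libdl.so.2
bash  4222 root 0u  CHR 136,0 0t0 3 /dev/pts/0
bash  4222 root 1w  REG 8,5 0 123 /tmp/out.log'

# Numeric FDs (possibly suffixed with r/w/u) are real open descriptors;
# everything else is a pseudo-descriptor like cwd, txt or mem.
summary=$(printf '%s\n' "$sample" | awk '
  $4 ~ /^[0-9]+[rwu]?$/ {fds++; next}
  {other[$4]++}
  END {printf "descriptors=%d cwd=%d txt=%d mem=%d",
       fds, other["cwd"], other["txt"], other["mem"]}')
echo "$summary"
```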
lsof - debug the output information
1,521,959,301,000
Given a pid, I can get all the files open for writing something like: lsof -p 28827 | awk '$4 ~ "[wW]"{print $(NF-1), $NF}' One of those ends up being a pipe: 28827 232611 pipe I want to look-up all the files open by that pipe. If I just do: lsof | grep 232611 That gives me a bunch of processes, one of which is a tee: COMMAND PID TID USER FD TYPE DEVICE SIZE/OFF NODE NAME <app> 28827 <me> 1w FIFO 0,8 0t0 232611 pipe <app> 28827 28836 <me> 1w FIFO 0,8 0t0 232611 pipe <app> 28827 28901 <me> 1w FIFO 0,8 0t0 232611 pipe .... tee 28828 <me> 0r FIFO 0,8 0t0 232611 pipe How can I programmatically find the PID for the tee (or generally, any process open with r access)? I can't simply check $4 ~ "r" since for most of the rows, $4 isn't even the FD column.
It should be enough to just grep for digits followed by one or more rs: lsof | grep -P '\b\d+r+\b' Or, if you don't have GNU grep: lsof | grep -E '\b[0-9]+r+\b' The \bs mark word boundaries and ensure that only entire fields are matched. Alternatively, if your grep supports it, you can use the -w flag: lsof | grep -wE '[0-9]+r+' So, using that, you can get the relevant PIDs with lsof | grep -wE '[0-9]+r+' | awk '{print $2}' @derobert pointed out in the comments below that, had I taken the time to actually read through the 2562 lines of man lsof, I would have found that it offers an -F option that lets you choose the fields printed. To get the file's access type, use a: lsof -p 28827 -F a
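Building on the -F idea: with field output, each value arrives on its own line prefixed by a letter (p for PID, f for descriptor, a for access mode), which sidesteps the column-alignment problem entirely. A sketch over canned field output (the PIDs and modes below are fabricated, but the shape matches what lsof -F pfa emits):

```shell
# Fabricated `lsof -F pfa`-style output: p<PID>, then f<fd>/a<mode> pairs.
sample='p28827
f1
aw
p28828
f0
ar'

# Remember the current PID; print it whenever an access-mode field
# contains "r" (open for reading).
readers=$(printf '%s\n' "$sample" | awk '
  /^p/ {pid = substr($0, 2)}
  /^a/ && index($0, "r") {print pid}' | sort -u)
echo "$readers"
```

In the question's scenario this would print the PID of the tee, which holds the pipe open with read access.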
Get all files open for writing with pid, recursively
1,521,959,301,000
It threw errors about kernel sources missing. So, I looked and sure enough this box doesn't have them. The documentation I have says to install them via sysinstall. That failed both automatically and manually configured server references. I then found elsewhere that sysinstall is no longer supported and that sources should be pulled with Subversion. I pulled the sources into /usr/src with subversion. Lsof still pukes on compile. The Makefile dependency that it is missing appears to be /usr/src/sys/kern/kern_lockf.c. I've got a /usr/src/sys/kern with several files, but no kern_lockf.c anywhere to be found. Supposedly I have the current sources and the current ports. What's going on?
Ultimately, the following command line appears to have solved the problem. I don't recall the original source (or command line) I had used, so I don't know if the documentation I was using was wrong or it was a problem with the mirror: svn checkout svn://svn.freebsd.org/base/release/9.2.0/ /usr/src
How do I update lsof port on FreeBSD 9.2?
1,521,959,301,000
I'm running qemu's like this: $ sudo qemu -boot d -m 1024 \ -netdev tap,id=tap0 \ -device virtio-net-pci,netdev=tap0,id=vth0 \ -drive file=ubuntu.iso,media=cdrom,cache=none,if=ide \ -monitor pty \ -serial pty \ -parallel none \ -nographic When I check /dev/pts/: $ sudo lsof +d /dev/pts/ Qemu pty's do not show up, although they do work using for example: $ sudo screen /dev/pts/8 How can I figure out which pty's are from which qemu?
You can do it this way using virsh along with some scripting: $ for i in `virsh list | awk '{print $2}' | egrep -v "^$|Name"`; do printf "%-14s:%s\n" $i $(virsh ttyconsole $i | grep -v "^$"); done cobbler :/dev/pts/1 xwiki :/dev/pts/3 fan :/dev/pts/4 mercury :/dev/pts/5 mungr :/dev/pts/0 win2008R2-01 :/dev/pts/7 The left column shows the names of the VMs and the right column the pts. Incidentally, here are those same VMs through an lsof command: $ lsof|grep qemu|grep ptmx qemu-kvm 3796 root 14u CHR 5,2 0t0 993 /dev/ptmx qemu-kvm 3895 root 14u CHR 5,2 0t0 993 /dev/ptmx qemu-kvm 3972 root 14u CHR 5,2 0t0 993 /dev/ptmx qemu-kvm 4294 root 15u CHR 5,2 0t0 993 /dev/ptmx qemu-kvm 11897 root 14u CHR 5,2 0t0 993 /dev/ptmx qemu-kvm 16250 root 15u CHR 5,2 0t0 993 /dev/ptmx It doesn't look like lsof shows which pty they're using, just that they're using the ptmx. See the ptmx man page for more info. References Setting up a serial console in qemu and libvirt
How can I figure out which pty's are from which qemu?
1,521,959,301,000
We've been working to clean up some space in our /opt mount. A large culprit of space consumption was log files for some processes we run (to the tune of 2 to 12 GB each). We've cleaned these up by truncating them. However, this seems to be skewing our output from df -H. /dev/mapper/Sys-opt 76G 72G 0 100% /opt When running du -sh * on this directory, the sizes don't add up. When running lsof | grep log, I see that many of the files we've deleted still show up with (deleted) appended to the end. My question is, (a) should I be concerned with this?, and (b) is there a way to get my df -H back to normal, without restarting the box / these processes? Will restarting the processes even fix it (I see several entries for the same logs, which are from processes I know have been restarted recently)?
Unix uses reference counting to figure out whether a file is in use or whether the data can be deleted/reused. An open filehandle counts as a reference, so until it is closed, this space will stay occupied. Restarting the process with the open filehandle will close the filehandle, and since the file has been removed from the directory structure, it will therefore disappear when its reference count drops to zero. So yes, restarting your process will make that file vanish. This can happen with overly verbose logging from a daemon: it's still writing to its log file, despite some well-intentioned individuals having had a clearout.
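The reference counting is easy to demonstrate from a shell: the sketch below unlinks a file while holding a descriptor on it, then reads the data back through /proc (Linux assumed):

```shell
# Hold a descriptor on a file, unlink it, and read the data back:
# the blocks are only freed when the last reference goes away.
tmp=$(mktemp)
exec 3<> "$tmp"              # fd 3 is now a reference to the inode
echo "still here" >&3
rm "$tmp"                    # gone from the directory tree...

content=$(head -n 1 /proc/$$/fd/3)   # ...but still reachable via the fd
echo "$content"
exec 3>&-                    # closing the last reference frees the space
```

This is also why, as an alternative to restarting the process, truncating the deleted file through its descriptor (e.g. `: > /proc/PID/fd/N` as root) is a commonly used trick to reclaim the space immediately.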
Should I be concerned with large (deleted) files in lsof? [duplicate]
1,521,959,301,000
I was trying to figure out what ports are in use on my Linux Ubuntu machine. I was reading the article How to check if port is in use on Linux or Unix and saw one of their commands was: sudo lsof -i -P -n | grep LISTEN I am still getting my feet wet with a lot of Linux commands, but I had just recently learned about lsof for listing all open files, so I wanted to understand what these flags were for. (And what are the -P and -n flags for? I have looked at the manual, but it’s simply not helping. It seems that the -i flag is the most important one here.) I found that if I did a grep for 'LISTEN' without the -i flag, I got totally different results than with. In the manual, it says this: -i [i] This option selects the listing of files any of whose Internet address matches the address specified in i. If no address is specified, this option selects the listing of all Internet and x.25 (HP-UX) network files. I really don't understand what this means, and definitely don't understand how it helps me figure out what ports are in use.
-i selects Internet files or sockets. It works with an optional address parameter. Without that parameter, it selects all sockets. You can use additional filters with this option to select by IPv4/IPv6, by TCP/UDP and so on. The manpage lists several examples: -i 4 to select IPv4 sockets, -i 6 to select IPv6 sockets. -i TCP or -i UDP to select by protocol. -i @hostname or -i @ipaddress to select by the name/IP of the interface the socket is bound to. -i :port to select sockets bound to a specific port. To illustrate the other two options, consider the following example. This entry is from my system, showing two CUPS ports: cupsd 855 root 9u IPv6 25870 0t0 TCP localhost:ipp (LISTEN) cupsd 855 root 10u IPv4 25871 0t0 TCP localhost:ipp (LISTEN) You'll notice that the port is specified as ipp - the Internet Printing Protocol. To turn that back to a number, the -P option is used: cupsd 855 root 9u IPv6 25870 0t0 TCP localhost:631 (LISTEN) cupsd 855 root 10u IPv4 25871 0t0 TCP localhost:631 (LISTEN) The hostname is displayed as localhost here. In larger networks, lsof will make an effort to list the hostnames by looking them up. As an optimization, you can skip that hostname resolution step with -n. With -n, the IP addresses are shown instead of the hostnames: cupsd 855 root 9u IPv6 25870 0t0 TCP [::1]:631 (LISTEN) cupsd 855 root 10u IPv4 25871 0t0 TCP 127.0.0.1:631 (LISTEN) As an aside, I like to use the ss command to keep track of listening ports. The syntax I commonly use is ss -ltnp, which says: 1) show listening ports, 2) only TCP ports, 3) no hostname lookup, 4) show process IDs. The result looks like this (same CUPS ports): LISTEN 0 5 127.0.0.1:631 0.0.0.0:* users:(("cupsd",pid=855,fd=10)) LISTEN 0 5 [::1]:631 [::]:* users:(("cupsd",pid=855,fd=9))
What does the "i" flag mean in lsof?
1,521,959,301,000
I am trying to find the reason why my long-running app sometimes busts the maximum open file descriptor limit (ulimit -n). I would like to periodically log how many file descriptors the app has open so that I can see when the spike occurred. I know that lsof includes a bunch of items that are excluded from /proc/$PID/fd... Are those items relevant with regard to the open file descriptor limit? I.e. should I be logging info from lsof or from /proc/$PID/fd?
tl;dr ls -U /proc/PID/fd | wc -l will tell you the number that should be less than ulimit -n. /proc/PID/fd should contain all the file descriptors opened by a process, including but not limited to strange ones like epoll or inotify handles, "opaque" directory handles opened with O_PATH, handles opened with signalfd() or memfd_create(), sockets returned by accept(), etc. I'm not a great lsof user, but lsof is getting its info from /proc, too. I don't think there's another way to get the list of the file descriptors a process has opened on Linux other than procfs, or by attaching to a process with ptrace. Anyways, the current and root directory, mmapped files (including its own binary and dynamic libraries) and controlling terminal of a process are not counted against the limit set with ulimit -n (RLIMIT_NOFILE), and they also don't appear in /proc/PID/fd unless the process is explicitly holding open handles to them.
lsof versus /proc/$PID/fd versus ulimit -n
1,521,959,301,000
I am trying to repair permissions on my external HD. I cannot empty my trash when it is plugged in, because I get a bunch of "such file is in use". I read online that this might be resolved by repairing permissions on the drive. I am currently unable to unmount the drive because it is in use the second I restart or unplug and replug it in. I used lsof to see what is using it but I am unable to understand this and can't seem to find a clear guide to learn what this means. The output is below: COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME mds 59 root 23r DIR 1,9 1701 5 /Volumes/SEAGATE mds 59 root 31r DIR 1,9 1701 5 /Volumes/SEAGATE Command ps ax | egrep '[ /](PID|mds)' Output PID TT STAT TIME COMMAND 660 ?? Ss 0:12.49 /System/Library/Frameworks/CoreServices.framework/Frameworks/Metadata.framework/Support/mds 673 ?? Ss 0:08.68 /System/Library/Frameworks/CoreServices.framework/Frameworks/Metadata.framework/Versions/A/Support/mds_stores Command /usr/bin/sudo kill 660 Output //new line$ Command sudo lsof /dev/disk2s2 Output COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME mds 1599 root 11r DIR 1,8 1764 5 /Volumes/SEAGATE In that order If I run the bash file several times in a row I can get PID TT STAT TIME COMMAND 1737 ?? Ss 0:00.69 /System/Library/Frameworks/CoreServices.framework/Frameworks/Metadata.framework/Support/mds But the drive is still locked by mds Just to show that the exception was added, here are screenshots:
Too fast diagnosis "I read online that this might be resolved by repairing permissions on the drive." Unfortunately, from the description of your problem, this is wrong. What needs to be repaired is the filesystem on your external disk SEAGATE. Analysis of lsof The output of your lsof command tells you that the command mds (1st column) is actually reading your filesystem /Volumes/SEAGATE (last column). To learn more about this fantastic command, just read the manual which comes with macOS: man lsof. mds is a macOS server in charge of providing access to the metadata of all your filesystems. Its most important clients are Finder and Spotlight. If you can't eject your external disk, this is legitimate and due to mds still reading it. If you nonetheless pull it out, you will surely corrupt its filesystem. Free and repair the filesystem Now that it is corrupted, here is how to fix it.

1. Open System Preferences > Spotlight, select the Privacy tab and add (+) your SEAGATE external disk to stop Spotlight from trying to index it.

2. If mds is still running: ps ax | egrep '[ /](PID|mds)' you will have to kill it:

_pid_to_kill=`ps ax | egrep '[ /]mds' | awk '{print $1}'`
if [ "${_pid_to_kill}" ] ; then
    echo "${_pid_to_kill}" | while read _pid ; do
        /usr/bin/sudo kill ${_pid}
    done
fi

Check with lsof that your SEAGATE disk is now free: lsof /Volumes/SEAGATE. If this is OK, go to step 4.

3. If killing mds doesn't free /Volumes/SEAGATE, then there is another process accessing this filesystem through mds. (This might be an anti-virus or crapware, and that is quite another size of investigation.) In this case, the fast path is to stop launchd from starting mds. Proceed as follows:

cd /System/Library/LaunchDaemons
/usr/bin/sudo launchctl unload com.apple.metadata.mds.plist

Check that you don't have any mds process anymore: ps ax | egrep '[ /](PID|mds)'. Check with lsof that your SEAGATE disk is now free: lsof /Volumes/SEAGATE. This should be OK; go to step 4.

4. Start Disk Utility and check your disk SEAGATE. I suspect that some repairs will be needed; in that case, repair it. Eject it, and check that you no longer get any "file in use" error message.

5. Open System Preferences > Spotlight, select the Privacy tab and remove (-) your SEAGATE external disk to let Spotlight index it again.

6. If you went through step 3, where you had to stop launchd from starting mds, you will have to enable this function again (otherwise a lot of things managing your filesystems will fail). Proceed as follows:

cd /System/Library/LaunchDaemons
/usr/bin/sudo launchctl load com.apple.metadata.mds.plist
Mac drive in use, understanding lsof
1,521,959,301,000
Sometimes I need to kill a process (the reasons why are not important). And I know I can find that process with the following command: lsof -i :8080, being my candidate the last process in the output table. For example, if I run the command, an output like this will appear: COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME java 15112 dani 70u IPv4 3178183 0t0 TCP localhost:39998->localhost:http-alt (CLOSE_WAIT) java 15112 dani 137u IPv4 3181607 0t0 TCP localhost:39999->localhost:http-alt (CLOSE_WAIT) java 15112 dani 138u IPv4 3181608 0t0 TCP localhost:40000->localhost:http-alt (CLOSE_WAIT) java 15112 dani 139u IPv4 3181609 0t0 TCP localhost:40001->localhost:http-alt (CLOSE_WAIT) java 15112 dani 140u IPv4 3181610 0t0 TCP localhost:40002->localhost:http-alt (CLOSE_WAIT) java 19509 dani 50u IPv6 4617361 0t0 TCP *:http-alt (LISTEN) java 19509 dani 52u IPv6 6642445 0t0 TCP localhost:http-alt->localhost:42996 (CLOSE_WAIT) So, my target will be that PID 19509. Using pipes, how can I cherry-pick last line's PID? I want to reach a command like lsof -i :8080 | something here to get the PID | kill -9 I'm running Linux Mint KDE x64
Answering your question literally, here's one way to list the last PID displayed by lsof: lsof … | awk 'END {print $2}' Awk is a text processing language which reads input and processes it line by line. In the code, END {…} executes the code in the braces after the whole input is processed, and effectively operates on the last line. $2 is the second whitespace-delimited field on the line. And here are some ways to kill it (each line works on its own): kill $(lsof … | awk 'END {print $2}') lsof … | awk 'END {print $2}' | xargs kill lsof … | awk 'END {system("kill " $2)}' However, I dispute your assertion that the right process to kill is always the last one. lsof displays processes by increasing PID, which is meaningless. Even on systems where process IDs are assigned sequentially (which is not the case on all Unix variants, not even on all Linux installations), they wrap once they reach the maximum value (commonly 32767). Thus deciding between processes by comparing PIDs is meaningless. You need some other information to decide which process to kill. Depending on what kind of information you're after and on whether you might have output that contains “weird” characters (like spaces in file or program names), you may use a tool like awk to process the output of lsof, or you may use the -F option to lsof which produces output that's a bit harder to parse in simple cases but (almost) not prone to ambiguity and easier to parse robustly. For example, if you want to kill any process that's listening on port 8080, here's how you can do it: lsof -n -i :8080 -F | awk ' sub(/^p/,"") {pid = $0} $0 == "n*:http-alt" {print pid} ' | xargs kill The call to the sub function replaces p at the beginning of a line by an empty string. If this replacement is performed, the code block {pid = $0} is executed; this way the pid variable contains the last PID value displayed by lsof. 
The second awk line prints the value of the pid variable if the line is exactly "n*:http-alt", which is lsof's way to report a socket listening on port 8080 on all interfaces. This particular criterion actually doesn't require any parsing (I only showed it above as an example). You can make lsof display just processes listening on the specified port: lsof -n -a -iTCP:8080 -sTCP:LISTEN -Fp | sed 's/^p//' | xargs kill Or, for this, you can use netstat instead. netstat -lnpt | awk '$4 ~ /:8080$/ {sub(/\/.*/, "", $7); print $7}' Explanation of the awk code: if the 4th column ends with :8080, replace everything after the first / in the 7th column (to remove the process name part and keep only the PID part), and print it.
Command for killing specific PID provided by previous command
1,521,959,301,000
This command is doing the rounds for detecting currently running processes using glibc: lsof | grep libc | awk '{print $2}' | sort | uniq I find it intensely annoying, since /libc/ matches not only libc, but on my system: /lib/x86_64-linux-gnu/libcap.so.2.24 /lib/x86_64-linux-gnu/libcgmanager.so.0.0.0 /lib/x86_64-linux-gnu/libcom_err.so.2.1 /lib/x86_64-linux-gnu/libcrypt-2.19.so /lib/x86_64-linux-gnu/libcrypto.so.1.0.0 /usr/lib/libcamel-1.2.so.45.0.0 /usr/lib/unity-settings-daemon-1.0/libcolor.so /usr/lib/unity-settings-daemon-1.0/libcursor.so /usr/lib/x86_64-linux-gnu/colord-plugins/libcd_plugin_camera.so /usr/lib/x86_64-linux-gnu/colord-plugins/libcd_plugin_scanner.so /usr/lib/x86_64-linux-gnu/gtk-3.0/modules/libcanberra-gtk3-module.so /usr/lib/x86_64-linux-gnu/libcairo-gobject.so.2.11301.0 /usr/lib/x86_64-linux-gnu/libcairo.so.2.11301.0 /usr/lib/x86_64-linux-gnu/libcanberra-0.30/libcanberra-pulse.so /usr/lib/x86_64-linux-gnu/libcanberra-gtk3.so.0.1.9 /usr/lib/x86_64-linux-gnu/libcanberra.so.0.2.5 /usr/lib/x86_64-linux-gnu/libcap-ng.so.0.0.0 /usr/lib/x86_64-linux-gnu/libck-connector.so.0.0.0 /usr/lib/x86_64-linux-gnu/libcolordprivate.so.1.0.23 /usr/lib/x86_64-linux-gnu/libcolord.so.1.0.23 /usr/lib/x86_64-linux-gnu/libcroco-0.6.so.3.0.1 /usr/lib/x86_64-linux-gnu/libcupsmime.so.1 /usr/lib/x86_64-linux-gnu/libcups.so.2 /usr/lib/x86_64-linux-gnu/samba/libcliauth.so.0 /usr/lib/x86_64-linux-gnu/samba/libcli_cldap.so.0 /usr/lib/x86_64-linux-gnu/samba/libcli-ldap-common.so.0 /usr/lib/x86_64-linux-gnu/samba/libcli-nbt.so.0 /usr/lib/x86_64-linux-gnu/samba/libcli_smb_common.so.0 /usr/lib/x86_64-linux-gnu/samba/libcli_spoolss.so.0 /usr/lib/x86_64-linux-gnu/samba/liblibcli_lsa3.so.0 /usr/lib/x86_64-linux-gnu/samba/liblibcli_netlogon3.so.0 I understand that libc is used by pretty much everything, but I'd like to refine this command. Consider its applications for other libraries. How can I do this? 
The flaw in this approach, in my opinion, is that the package providing libc may include a whole host of other shared library files (all of which may or may not be covered under the umbrella term that is the library name, like glibc, or libboost-python). Using a word boundary or - to mark the end of libc doesn't really fix this. This may not be a flaw, perhaps, as Braiam points out, since many or all of these .so files may link to a core file in which the flaw resides, as is the case for libc6 on Debian. If a specific OS/distro is needed, assume Debian Linux. Also annoying is that this command could be condensed to lsof | awk '/libc/{print $2}' | sort -u.
That's a roundabout and grossly inaccurate method. You know the location of the library file, so you don't need to use heuristics to match it; you can search for the exact path. There's a very simple way to list the processes that have a file open: fuser /lib/x86_64-linux-gnu/libc.so.6 However this lists the processes that have the current version of the file open, i.e. the processes that use the new copy of the library. If you want to list the processes that have the deleted copy, you can use lsof, but searching for an exact path. Restrict lsof to the filesystem containing the deleted file for performance (and possibly to avoid blocking on e.g. temporarily inaccessible network filesystems). lsof -o / | awk '$4 == "DEL" && $8 == "/lib/x86_64-linux-gnu/libc-2.13.so" {print $2}' If the upgraded package contains several libraries and you want to detect any of the libraries, list all the library files from the package. Here's a way to do it programmatically (for a Debian package, adjust according to your distribution, e.g. rpm -ql glibc on Red Hat). lsof -o / | awk ' BEGIN { while (("dpkg -L libc6:amd64 | grep \\\\.so\\$" | getline) > 0) libs[$0] = 1 } $4 == "DEL" && $8 in libs {print $2}'
How do I detect running processes using a library package?
1,521,959,301,000
I just performed a fresh Ubuntu install and I am seeing the following in lsof: userA@az1:~$ lsof COMMAND PID TID USER FD TYPE DEVICE SIZE/OFF NODE NAME init 1 root cwd unknown /proc/1/cwd (readlink: Permission denied) init 1 root rtd unknown /proc/1/root (readlink: Permission denied) init 1 root txt unknown /proc/1/exe (readlink: Permission denied) init 1 root NOFD /proc/1/fd (opendir: Permission denied) kthreadd 2 root cwd unknown /proc/2/cwd (readlink: Permission denied) kthreadd 2 root rtd unknown /proc/2/root (readlink: Permission denied) kthreadd 2 root txt unknown /proc/2/exe (readlink: Permission denied) kthreadd 2 root NOFD /proc/2/fd (opendir: Permission denied) Is this normal? If not, how do I fix it? Trying to search for this particular error has led me nowhere. I am concerned that something is wrong because root is getting Permission denied errors. ls -la result for the proc folder: dr-xr-xr-x 145 root root 0 Jan 13 17:33 proc ls -la results for contents are: dr-xr-xr-x 9 root root 0 Jan 13 17:34 1 and for the contents of process 1. sudo ls -la /proc/1/ total 0 dr-xr-xr-x 9 root root 0 Jan 13 17:34 . dr-xr-xr-x 145 root root 0 Jan 13 17:33 ..
dr-xr-xr-x 2 root root 0 Jan 13 17:42 attr -rw-r--r-- 1 root root 0 Jan 13 17:42 autogroup -r-------- 1 root root 0 Jan 13 17:42 auxv -r--r--r-- 1 root root 0 Jan 13 17:34 cgroup --w------- 1 root root 0 Jan 13 17:42 clear_refs -r--r--r-- 1 root root 0 Jan 13 17:34 cmdline -rw-r--r-- 1 root root 0 Jan 13 17:42 comm -rw-r--r-- 1 root root 0 Jan 13 17:42 coredump_filter -r--r--r-- 1 root root 0 Jan 13 17:42 cpuset lrwxrwxrwx 1 root root 0 Jan 13 17:35 cwd -r-------- 1 root root 0 Jan 13 17:35 environ lrwxrwxrwx 1 root root 0 Jan 13 17:34 exe dr-x------ 2 root root 0 Jan 13 17:35 fd dr-x------ 2 root root 0 Jan 13 17:42 fdinfo -r-------- 1 root root 0 Jan 13 17:42 io -r--r--r-- 1 root root 0 Jan 13 17:42 latency -r--r--r-- 1 root root 0 Jan 13 17:35 limits -rw-r--r-- 1 root root 0 Jan 13 17:42 loginuid dr-x------ 2 root root 0 Jan 13 17:42 map_files -r--r--r-- 1 root root 0 Jan 13 17:35 maps -rw------- 1 root root 0 Jan 13 17:42 mem -r--r--r-- 1 root root 0 Jan 13 17:42 mountinfo -r--r--r-- 1 root root 0 Jan 13 17:42 mounts -r-------- 1 root root 0 Jan 13 17:42 mountstats dr-xr-xr-x 5 root root 0 Jan 13 17:42 net dr-x--x--x 2 root root 0 Jan 13 17:42 ns -r--r--r-- 1 root root 0 Jan 13 17:42 numa_maps -rw-r--r-- 1 root root 0 Jan 13 17:42 oom_adj -r--r--r-- 1 root root 0 Jan 13 17:42 oom_score -rw-r--r-- 1 root root 0 Jan 13 17:42 oom_score_adj -r--r--r-- 1 root root 0 Jan 13 17:42 pagemap -r--r--r-- 1 root root 0 Jan 13 17:42 personality lrwxrwxrwx 1 root root 0 Jan 13 17:35 root -rw-r--r-- 1 root root 0 Jan 13 17:42 sched -r--r--r-- 1 root root 0 Jan 13 17:42 schedstat -r--r--r-- 1 root root 0 Jan 13 17:42 sessionid -r--r--r-- 1 root root 0 Jan 13 17:42 smaps -r--r--r-- 1 root root 0 Jan 13 17:42 stack -r--r--r-- 1 root root 0 Jan 13 17:35 stat -r--r--r-- 1 root root 0 Jan 13 17:42 statm -r--r--r-- 1 root root 0 Jan 13 17:35 status -r--r--r-- 1 root root 0 Jan 13 17:42 syscall dr-xr-xr-x 3 root root 0 Jan 13 17:35 task -r--r--r-- 1 root root 0 Jan 13 17:42 timers 
-r--r--r-- 1 root root 0 Jan 13 17:42 wchan
It appears that you did not run lsof as root, given that you show a prompt with $. Run sudo lsof to execute the lsof command as root.

Some information about a process, such as its current directory (cwd), its root directory (root), the location of its executable (exe) and its file descriptors (fd), can only be viewed by the user running the process (or root). That's normal behavior.

Sometimes the permission to access files in /proc doesn't match the permission shown in the directory entries; it's finer-grained (for example, it depends on the process's effective UID as well as its real UID). You might get "permission denied" as root in some unusual circumstances, when you're root only in a namespace. If you just installed a new machine, you won't be seeing this.
root permission denied on /proc/1/exe
1,521,959,301,000
What does this mean? How does lsof find such FDs? I.e. compared to normal FDs, which can be found easily like ls -l /proc/$PID/fd/$FD.

$ lsof -p $(pgrep pulseaudio) | head -n1
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
$ lsof -p $(pgrep pulseaudio) | grep DEL
pulseaudi 25911 alan-sysop DEL REG 0,5 2334404 /memfd:pulseaudio
pulseaudi 25911 alan-sysop DEL REG 0,5 2340448 /memfd:pulseaudio
pulseaudi 25911 alan-sysop DEL REG 0,5 2335426 /memfd:pulseaudio
pulseaudi 25911 alan-sysop DEL REG 0,5 2340018 /memfd:pulseaudio
pulseaudi 25911 alan-sysop DEL REG 0,5 2340021 /memfd:pulseaudio
pulseaudi 25911 alan-sysop DEL REG 0,5 2334322 /memfd:pulseaudio
pulseaudi 25911 alan-sysop DEL REG 0,5 2336421 /memfd:pulseaudio

man lsof only documents a DEL value for the TYPE column, not the FD column.
lsof usually reports entries from the Linux /proc/<PID>/maps file with mem in the FD column. However, when lsof can't stat(2) a path in the process maps file and the maps file entry contains (deleted), indicating the file was deleted after it had been opened, lsof reports the file as DEL. https://stackoverflow.com/a/37160579/1601027, as pointed out by @don_crissti.

lsof cannot show the size of these files, even when run as root. (My lsof is version 4.89). However, if you have both a new enough kernel and root access, you can both see the maps in ls -l /proc/$PID/map_files/, and you can run stat --dereference on individual files to show their size. This can be used to inspect the resources used by "deleted" mapped files, in particular memfds, which never appear in the filesystem and are always considered as (deleted) files.

$ ls -l /proc/$(pgrep pulseaudio)/map_files | head
total 0
lr--------. 1 alan-sysop alan-sysop 64 Mar 18 23:50 562004ac5000-562004ada000 -> /usr/bin/pulseaudio
lr--------. 1 alan-sysop alan-sysop 64 Mar 18 23:50 562004cda000-562004cdb000 -> /usr/bin/pulseaudio
lr--------. 1 alan-sysop alan-sysop 64 Mar 18 23:50 562004cdb000-562004cdc000 -> /usr/bin/pulseaudio
lrw-------. 1 alan-sysop alan-sysop 64 Mar 18 23:50 7fab98000000-7fab9c000000 -> /memfd:pulseaudio (deleted)
lrw-------. 1 alan-sysop alan-sysop 64 Mar 18 23:50 7fab9c000000-7faba0000000 -> /memfd:pulseaudio (deleted)
lrw-------. 1 alan-sysop alan-sysop 64 Mar 18 23:50 7faba0000000-7faba4000000 -> /memfd:pulseaudio (deleted)
lrw-------. 1 alan-sysop alan-sysop 64 Mar 18 23:50 7faba4000000-7faba8000000 -> /memfd:pulseaudio (deleted)
lrw-------. 1 alan-sysop alan-sysop 64 Mar 18 23:50 7faba8000000-7fabac000000 -> /memfd:pulseaudio (deleted)
lrw-------. 1 alan-sysop alan-sysop 64 Mar 18 23:50 7fabac000000-7fabb0000000 -> /memfd:pulseaudio (deleted)

$ sudo stat --dereference /proc/$(pgrep pulseaudio)/map_files/7fab98000000-7fab9c000000
  File: /proc/25911/map_files/7fab98000000-7fab9c000000
  Size: 67108864  Blocks: 0  IO Block: 4096  regular file
Device: 5h/5d  Inode: 2399078  Links: 0
Access: (0777/-rwxrwxrwx)  Uid: ( 1000/alan-sysop)  Gid: ( 1000/alan-sysop)
Context: unconfined_u:object_r:user_tmp_t:s0
Access: 2018-03-18 23:47:48.714061694 +0000
Modify: 2018-03-18 23:47:48.713061683 +0000
Change: 2018-03-18 23:47:48.713061683 +0000
 Birth: -

E.g. at least it was possible to see that no individual memfd, at least held directly by an FD or by a memory mapping, was consuming gigabytes on its own. It would still be nice to have some better tooling or scripts around this though.

$ sudo du -aLh /proc/*/map_files/ /proc/*/fd/ | sort -h | tail
du: cannot access '/proc/self/fd/3': No such file or directory
du: cannot access '/proc/thread-self/fd/3': No such file or directory
108M /proc/10397/map_files/7f1e141b4000-7f1e1ad84000
111M /proc/14862/map_files/
112M /proc/10397/map_files/
113M /proc/18324/map_files/7efdda2fb000-7efddaafb000
121M /proc/18324/map_files/7efdea2fb000-7efdeaafb000
129M /proc/18324/map_files/7efdc82fb000-7efdc8afb000
129M /proc/18324/map_files/7efdd42fb000-7efdd4afb000
129M /proc/18324/map_files/7efde52fb000-7efde5afb000
221M /proc/26350/map_files/
3.9G /proc/18324/map_files/

$ ps -x -q 18324
  PID TTY   STAT   TIME COMMAND
18324 pts/1 S+     0:00 journalctl -b -f

$ ps -x -q 26350
  PID TTY   STAT   TIME COMMAND
26350 ?     Sl     4:35 /usr/lib64/firefox/firefox

$ sudo ls -l /proc/18324/map_files/7efde52fb000-7efde5afb000
lr--------. 1 root root 64 Mar 19 00:32 /proc/18324/map_files/7efde52fb000-7efde5afb000 -> /var/log/journal/f211872a957d411a9315fd911006ef03/user-1001@c3f024d4b01f4531b9b69e0876e42af8-00000000002e2acf-00055bbea4d9059d.journal
FD column of `lsof` shows DEL in some cases, instead of an FD number
1,521,959,301,000
I'm trying to use lsof in an IF statement to tar a directory ONLY if other programs are NOT using files within the directory. I can get this info at a shell prompt with lsof +d /mydir/ If it doesn't return any output, I know the directory is not in use. How do I format that into a conditional statement so that if (lsof returns anything) then echo "$folder in use" else tar $folder Thanks
Count the characters in the answer, like this:

#!/bin/bash
a="$(lsof +d "${folder}")"
if [[ "${#a}" -gt 0 ]]
then
    echo "${folder} in use"
else
    tar "${folder}"
fi
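A slightly more direct form tests for non-empty output with [ -n ... ] instead of counting characters. The sketch below factors the decision into a helper (check_busy is a hypothetical name, not anything from lsof) so the logic can be exercised without a live lsof:

```shell
# check_busy decides purely from the captured lsof output string:
# non-empty output means some process holds a file in the directory.
check_busy() {
    if [ -n "$1" ]; then
        echo "in use"
    else
        echo "not in use"
    fi
}

# Real use would be: check_busy "$(lsof +d "$folder")"
check_busy ""                                      # prints: not in use
check_busy "bash 123 user cwd DIR 8,1 4096 /x"     # prints: in use
```

Either way the branch condition is the same idea as the ${#a} test in the answer; [ -n ] just says "non-empty" directly.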
How to test if processes are using files in directory
1,521,959,301,000
So, I've been able to figure out bits of this myself but am having trouble piecing them together. I have a task I need to automate - I have folders filled with gigabytes of obsolete files, and I want to purge them if they meet two criteria. Files must not have been modified in the past 14 days - for this, I'm using find - find /dir/* -type f -mtime +14 And the files cannot be in use, which can be determined by lsof /dir/* I don't know bash quite well enough yet to figure out how to combine these commands. Help would be appreciated. I think I essentially want to loop through each line of output from find, check if it is present in output from lsof, and if not, rm -f -- however, I am open to alternative methodologies if they accomplish the goal!
The following should work:

for x in `find <dir> -type f -mtime +14`; do lsof "$x" >/dev/null && echo "$x in use" || echo "$x not in use" ; done

Instead of the echo "$x not in use" command, you can place your rm "$x" command.

How does it work:

find files last modified 14 days or longer ago: find <dir> -type f -mtime +14
loop over items in a list: for x in <list>; do <command>; done
execute command 1 if lsof exits with 0 (the file is open), else execute command 2: lsof "$x" && <command 1> || <command 2>

This relies on the lazy evaluation of Bash to execute command 1 or command 2. On my system (Ubuntu 14.04) this works with file names with spaces in them and even for file names with ? and * in them. This is however no guarantee that it will work with every shell on any system. Please test before replacing the echo command with the rm command.
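One caveat: an unquoted backtick loop like the one above splits file names on whitespace in many shells. A bash sketch that is safe for arbitrary names follows; the directory and file are made-up demo paths, and echo keeps it a dry run until you swap in rm:

```shell
#!/bin/bash
# Demo setup: an example directory with a stale file whose name
# contains a space (both paths are assumptions for illustration).
dir=/tmp/purge-demo
mkdir -p "$dir"
touch -d "30 days ago" "$dir/old file.log"

# NUL-delimited find output survives spaces and newlines in names.
find "$dir" -type f -mtime +14 -print0 |
while IFS= read -r -d '' f; do
    if lsof -- "$f" >/dev/null 2>&1; then
        echo "in use: $f"
    else
        echo "would remove: $f"   # replace echo with: rm -- "$f"
    fi
done
```

lsof exits non-zero when no process has the file open, so the else branch is the "safe to delete" case.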
How to iterate through files and delete those older than x days but NOT in use
1,521,959,301,000
I've got output coming from lsof -F that looks like this:

p7646
g7646
R8300
csocat
u1000
Labe
f3
au
l
tIPv4
G0x80002;0x0
d640391
o0t0
PTCP
n*:51352
TST=LISTEN
TQR=0
TQS=0

I am trying to capture the value 51352, which is a bound port I'm interested in knowing. I'm close in that I can get the n*:51352 value with this:

awk '/^n/ { print $1 }'

and can actually get the exact 51352 value I need with two separate awks:

awk '/^n/ { print $1 }' | awk -F':' '{print $2}'

but is there a better way, either a single-command awk or a cleaner non-awk solution? There should only ever be one line starting with n* so I don't need to worry about handling multiple lines.
Combine the two: awk -F: '/^n\*:/ {print $2}'
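You can verify the combined one-liner against the exact record from the question by feeding the fields in literally, so no live socket or running lsof is needed:

```shell
# Replay the sample lsof -F record (one field per line, as lsof -F
# emits it) through the single combined awk.
printf '%s\n' p7646 g7646 R8300 csocat u1000 Labe f3 au l \
    tIPv4 'G0x80002;0x0' d640391 o0t0 PTCP 'n*:51352' \
    TST=LISTEN TQR=0 TQS=0 |
awk -F: '/^n\*:/ {print $2}'
# prints: 51352
```

The pattern anchors on the n* network field and the -F: split does the extraction in one pass.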
How to capture the port number from this `lsof -F` output using awk (or something better)?
1,521,959,301,000
Output of lsof on my RHEL7 shows that one file with file descriptor mem is used by 40 processes. Does it mean that this file is mapped in memory 40 times or what? Could someone please explain what memory mapped files mean? Does it mean it is in my memory 40 times?

# lsof /usr/lib/locale/locale-archive
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
vmtoolsd 605 root mem REG 8,5 106070960 50808629 /usr/lib/locale/locale-archive
agetty 656 root mem REG 8,5 106070960 50808629 /usr/lib/locale/locale-archive
tuned 963 root mem REG 8,5 106070960 50808629 /usr/lib/locale/locale-archive
iostat 1199 adm mem REG 8,5 106070960 50808629 /usr/lib/locale/locale-archive
chkMtaMem 1205 root mem REG 8,5 106070960 50808629 /usr/lib/locale/locale-archive
snmpd 4704 root mem REG 8,5 106070960 50808629 /usr/lib/locale/locale-archive
sleep 5461 root mem REG 8,5 106070960 50808629 /usr/lib/locale/locale-archive
cmsubagt 6487 root mem REG 8,5 106070960 50808629 /usr/lib/locale/locale-archive
sleep 6649 root mem REG 8,5 106070960 50808629 /usr/lib/locale/locale-archive
proc1 6803 root mem REG 8,5 106070960 50808629 /usr/lib/locale/locale-archive
proc2 6835 root mem REG 8,5 106070960 50808629 /usr/lib/locale/locale-archive
proc3 6836 root mem REG 8,5 106070960 50808629 /usr/lib/locale/locale-archive
proc4 6856 root mem REG 8,5 106070960 50808629 /usr/lib/locale/locale-archive
proc5 6884 root mem REG 8,5 106070960 50808629 /usr/lib/locale/locale-archive
proc6 6889 usr mem REG 8,5 106070960 50808629 /usr/lib/locale/locale-archive
proc7 6893 usr1 mem REG 8,5 106070960 50808629 /usr/lib/locale/locale-archive
cmfpagt 7704 root mem REG 8,5 106070960 50808629 /usr/lib/locale/locale-archive
proc8 7943 root mem REG 8,5 106070960 50808629 /usr/lib/locale/locale-archive
crond 8001 root mem REG 8,5 106070960 50808629 /usr/lib/locale/locale-archive
sh 8005 adm mem REG 8,5 106070960 50808629 /usr/lib/locale/locale-archive
iostat 8014 adm mem REG 8,5 106070960 50808629 /usr/lib/locale/locale-archive
crond 20427 root mem REG 8,5 106070960 50808629 /usr/lib/locale/locale-archive
proc9 20648 root mem REG 8,5 106070960 50808629 /usr/lib/locale/locale-archive
proc10 20649 root mem REG 8,5 106070960 50808629 /usr/lib/locale/locale-archive
proc10 20760 usr2 mem REG 8,5 106070960 50808629 /usr/lib/locale/locale-archive
proc9 20777 usr2 mem REG 8,5 106070960 50808629 /usr/lib/locale/locale-archive
proc11 21353 root mem REG 8,5 106070960 50808629 /usr/lib/locale/locale-archive
proc12 21354 root mem REG 8,5 106070960 50808629 /usr/lib/locale/locale-archive
proc13 21355 root mem REG 8,5 106070960 50808629 /usr/lib/locale/locale-archive
proc14 21356 root mem REG 8,5 106070960 50808629 /usr/lib/locale/locale-archive
proc15 21357 root mem REG 8,5 106070960 50808629 /usr/lib/locale/locale-archive
proc16 21358 root mem REG 8,5 106070960 50808629 /usr/lib/locale/locale-archive
proc17 21554 root mem REG 8,5 106070960 50808629 /usr/lib/locale/locale-archive
proc18 21569 usr2 mem REG 8,5 106070960 50808629 /usr/lib/locale/locale-archive
proc19 21590 usr2 mem REG 8,5 106070960 50808629 /usr/lib/locale/locale-archive
proc20 21647 usr2 mem REG 8,5 106070960 50808629 /usr/lib/locale/locale-archive
proc21 22016 root mem REG 8,5 106070960 50808629 /usr/lib/locale/locale-archive
proc22 22017 root mem REG 8,5 106070960 50808629 /usr/lib/locale/locale-archive
proc23 22104 usr2 mem REG 8,5 106070960 50808629 /usr/lib/locale/locale-archive
proc24 22122 usr2 mem REG 8,5 106070960 50808629 /usr/lib/locale/locale-archive
Have a look at the difference between virtual and physical memory. Many processes can map the same physical memory: if 10 processes map the same file, at most one copy of its pages will be cached in RAM.

If a mapping is private rather than shared, then when one process changes a page, that one page (containing the change) is duplicated before the write is committed, so not all of the memory is copied. This is called copy on write (COW).

A memory-mapped file is a file you have asked the OS to map into your address space. The OS does not load the file until you start reading or writing, and then only the pages that are needed. Memory mapping is just a different interface to the same data as read/write/seek.

So how is this done? By realizing that some things are the same: swap and files are the same, and RAM and the file cache are the same. When you map a file it is treated like swap (don't worry, it won't be used to swap out other things). When you first read a mapped page, a fault is generated and the OS pages it in from the file; the fault is handled by the OS, so it is not passed on to the process (and it is a page fault, not a segmentation fault).

It does not matter which interface you use, memory or file; they are just interfaces over the same functionality.
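The demand-paged, file-backed mappings described above can be observed directly on Linux through /proc/<pid>/maps. A small sketch (Linux-specific; note it inspects the maps of whichever process opens /proc/self, here awk itself):

```shell
# List every file-backed mapping of the reading process. If another
# process maps one of these same files (libc and locale-archive are
# classic cases), the kernel keeps a single cached copy of the pages.
# Field 6 of each maps line is the pathname, when one exists.
awk '$6 ~ /^\// { print $6 }' /proc/self/maps | sort -u
```

Each pathname printed is paged in on demand rather than read up front, which is why 40 processes mapping locale-archive does not cost 40 copies.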
Linux memory mapped files
1,521,959,301,000
I was trying to see the progress of a already running rsync and cp task and found this answer that allowed me to see what was currently happening When i went to the man page for lsof and see that -c is what is used to select the process (cp in the example below) to look at and: -a causes list selection options to be ANDed, as described above. -b causes lsof to avoid kernel functions that might block - lstat(2), readlink(2), and stat(2). But I don't really understand the combination with 3-999 What does lsof -ad3-999 -c cp do?
-d3-999 just excludes the standard file descriptors (0,1,2) from the listing. The -d is used to specify a list or range of fds: -d s specifies a list of file descriptors (FDs) to exclude from or include in the output listing. The file descriptors are spec- ified in the comma-separated set s - e.g., ``cwd,1,3'', ``^6,^2''. (There should be no spaces in the set.) The list is an exclusion list if all entries of the set begin with `^'. It is an inclusion list if no entry begins with `^'. Mixed lists are not permitted. A file descriptor number range may be in the set as long as neither member is empty, both members are numbers, and the ending member is larger than the starting one - e.g., ``0-7'' or ``3-10''. Ranges may be specified for exclusion if they have the `^' prefix - e.g., ``^0-7'' excludes all file descriptors 0 through 7. The guy who wrote that probably gave up on understanding why negated ranges do not work (just like me) and wrote it that way instead.
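The effect of excluding the low descriptors can be mimicked without lsof by walking /proc/self/fd (a Linux-specific sketch; the exact descriptors listed vary from shell to shell):

```shell
# Print this process's open descriptors, skipping fds 0-2, which is
# the same record selection that -d3-999 performs in lsof's output.
for fd in /proc/self/fd/*; do
    n=${fd##*/}
    if [ "$n" -gt 2 ]; then
        echo "fd $n -> $(readlink "$fd")"
    fi
done
```

Combined with -a and -c, lsof applies the same idea as an AND across its selection options: process name cp AND descriptor in the 3-999 range.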
What does lsof -ad3-999 -c rsync do?
1,521,959,301,000
In the lsof man page, there is the following sentence: An open file may be a regular file, a directory, a block special file, a character special file, an executing text reference, a library, a stream or a network file (Internet socket, NFS file or UNIX domain socket.) What is an executing text reference?
The portions of an executable file that contain the machine instructions are called the text sections, and taken together they're called the text segment. On modern Unix and Unix-like systems, the file containing the text segment is kept open while the process is running so that pages full of machine instructions can be read (paged) into memory when necessary (see Demand Paging). $ lsof -p $$ | grep txt bash 3117 me txt REG 8,1 1021112 393938 /bin/bash If all copies of the executable file happen to get deleted (more precisely, unlinked) while the process is still running, the reference will be sufficient to ensure that the file's contents remain accessible for as long as the process is running. This is why you can (usually) install system updates and not break any running processes.
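The keep-the-text-reachable behaviour is easy to reproduce. A sketch, assuming a Linux /proc and a standalone /bin/sleep binary (/tmp/mysleep is just a scratch name):

```shell
# A deleted executable's text stays accessible while a process runs it.
cp /bin/sleep /tmp/mysleep
/tmp/mysleep 30 &
pid=$!
sleep 1                      # give the child a moment to exec
rm /tmp/mysleep
readlink "/proc/$pid/exe"    # shows: /tmp/mysleep (deleted)
kill "$pid"
```

The (deleted) suffix marks exactly the situation described above: no directory entry remains, but the text reference keeps the inode alive until the process exits.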
What is an 'executing text reference'?
1,521,959,301,000
I often have trouble unnounting filesystems because compton is keeping a subdirectory open. Here is one line from lsof I have right now: compton 30043 valmi cwd DIR 254,0 32768 7485 /media/truecrypt1/videos I cannot for the life of me figure what it is doing with this directory (it is not used by any other process nor was it ever opened in any application besides bash). So far, I always just ended up restarting X when this happened, but I would love for someone to tell me how to make compton understand that it should let go my directory, or as a consolation tell me what it is doing with it. If this is relevant, this is compton-git 20121102-2 from Debian stable, with 3.5.0-7 and everything up-to-date.
The fourth column of lsof's output tells you that this directory is the current working directory (cwd) of the process. Most probably compton was started in this directory. You could kill the process and restart it in another directory (e.g. /).

You might also try forcing it to leave the directory with this hack: attach GDB to the process by issuing

$ gdb -p <pid>

where <pid> is the PID of the process. Inside gdb issue

> p chdir("/")
> detach
> quit

$ and > are the respective programs' prompts.

Note: in case compton has a particular reason for being in this directory, this might crash the process in a horrible way. I didn't find any calls in compton's source code that suggest it is there on purpose, but be warned. On the other hand… that would also solve your problem. ;)
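To see concretely what that cwd entry pins, here is a small sketch; /tmp/busy-dir is just an example path, and sleep stands in for compton:

```shell
# Start a process whose working directory is the directory we care
# about, then read its cwd back through /proc.
mkdir -p /tmp/busy-dir
( cd /tmp/busy-dir && exec sleep 30 ) &
pid=$!
sleep 1
readlink "/proc/$pid/cwd"    # prints: /tmp/busy-dir
kill "$pid"
```

As long as that symlink points into a mounted filesystem, umount of it will fail with "target is busy", which is why the chdir("/") trick above releases the mount.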
Forcing compton to free directory
1,521,959,301,000
I am reading in man lsof that +L enables the listing of file link counts. A specification of the form "+L1" will select open files that have been unlinked. I don't understand why deleted files should have count 1. Should not the count for deleted files be 0 ?
Well, yes. The manpage on my Debian system says "When +L is followed by a number, only files having a link count less than that number will be listed." So +L1 selects files with a link count less than 1, i.e. 0, which is exactly the count an unlinked (deleted) file has. The option is an exclusive upper bound, not an exact match.
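A quick Linux-only sketch confirms the arithmetic (/tmp/linkdemo is a scratch file, and sleep stands in for a process holding the file open):

```shell
# Link count is 1 while the name exists, 0 once unlinked; +L1
# ("less than 1") therefore matches exactly the deleted files.
echo hi > /tmp/linkdemo
stat -c %h /tmp/linkdemo            # prints: 1
( exec 3</tmp/linkdemo; exec sleep 30 ) &
pid=$!
sleep 1
rm /tmp/linkdemo
stat -L -c %h "/proc/$pid/fd/3"     # prints: 0  (unlinked but open)
kill "$pid"
```

stat -L dereferences the /proc fd symlink so %h reports the link count of the underlying (now nameless) inode.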
link count of deleted files
1,521,959,301,000
I want to remove my external HDD as safely as possible. I want to use umount --lazy: Lazy unmount. Detach the filesystem from the file hierarchy now, and clean up all references to this filesystem as soon as it is not busy anymore. (Requires kernel 2.4.11 or later.) Then after a short delay I plan to kill any processes with open files on the device where the filesystem is still quasi-mounted. I can't use lsof for an accurate list of the open files as the filesystem has become invisible to new processes. If I use lsof before umount -l, there is a race contition of a new file being opened in between the two invocations. Is there any way of finding out which processes are accessing a DEVICE rather than a filesystem?
You could try to remount the volume read-only. This works only if nothing on that volume is opened for writing. You will probably not get rid of the race condition that a file could be opened read-only or that a process could have its current working directory on that volume but if you detach the hardware then you can at least be sure that its file system is in order.
List processes accessing device after `umount --lazy`
1,521,959,301,000
I tried with lsof -F c somefile but I get

p1
cinit
p231
cmountall
p314
cupstart-udev-br
p317
cudevd

instead of

init
mountall
...

Any way to get just the command?
The man page says the process ID is always selected. Depending on your needs, you can use awk to filter out just the command names:

lsof -F c somefile | awk '/^c/ { print substr($0,2)}'
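Applied to the sample output from the question (fed in literally, so no running lsof is needed):

```shell
# Replay the lsof -F c fields and keep only the command lines,
# stripping the leading 'c' field tag.
printf '%s\n' p1 cinit p231 cmountall p314 cupstart-udev-br p317 cudevd |
awk '/^c/ { print substr($0, 2) }'
# prints:
# init
# mountall
# upstart-udev-br
# udevd
```

Each lsof -F field line starts with a one-letter tag (p for PID, c for command), so matching /^c/ and dropping the first character recovers the bare names.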
List only commands with lsof
1,521,959,301,000
I'm really sorry if this question was asked before. I really need help with this as it involves very important data and I've been unable to do it for the past 2 hours. Essentially I accidentally removed the wrong folder due to a typo when running rm -r. From what I know the files are still there, but the links are removed. Since the process that was using these files is still running it seems like they can still be accessed for now, until the process is ended (then they should be lost forever). I have been trying for the past 2 hours or so to write the data from the process to a folder to restore the files. But can only find how to do it for single files. The output when running lsof | grep ./ still shows all the files and sizes. Making me believe that it's still possible to recover it. The process itself also seems to be able to access the files just fine, as the program is still working as it should even though the links are gone. I tried the method where you use cp /proc/32184/fd/103 ./resoration, but this only restores a single file it seems. Is there a way to quickly restore all the files without having to shut down the server and risk losing it? I've made some manual backups of some of the data, but the size is too big to do it for all of it. Thanks a whole whole whole lot to the person who can help me with this, and once again sorry if it's been asked before.
You could try scripting it yourself:

PID=..your process..
export RESTORE_TO_DIR=some_place
find "/proc/$PID/fd" -lname '* (deleted)' -printf '%p %l\0' |
  xargs -0 sh -c '
    for l; do
      f=${l%% *}
      t="$RESTORE_TO_DIR${l#* }"
      echo mkdir -p "${t%/*}" &&
        echo cp -vb "$f" "${t% (deleted)}"
    done
  ' sh

Remove the echos from before mkdir and cp if it looks OK. If you set RESTORE_TO_DIR to an empty string, it should restore the deleted files at their original place.

Contrary to some opinions, you cannot create hard links to deleted files (inodes not referenced from any directory).
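For a single file, the mechanism the script relies on looks like this end to end (a sketch: the paths are scratch names, and sleep stands in for your long-running server process holding the descriptors):

```shell
# Hold an fd open, delete the name, then copy the data back out
# through the /proc/<pid>/fd magic symlink.
echo "important data" > /tmp/doomed
( exec 3</tmp/doomed; exec sleep 30 ) &
pid=$!
sleep 1
rm /tmp/doomed                      # the "accidental" rm
cp "/proc/$pid/fd/3" /tmp/restored
cat /tmp/restored                   # prints: important data
kill "$pid"
```

cp dereferences the magic symlink and reads the still-live inode, which is all the larger find/xargs script does, once per open descriptor.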
Recover deleted folder that's still loaded by an active process
1,396,980,259,000
I recently discovered that a buggy program called pcmanfm was writing 200 MB per second to its run.log file, so I had to find ways to combat that. I discovered what file it was that it was writing to in a laborious manner: du -h for various directories trying to find the offending file. I'm now faced with another similar situation. Something is filling up my hard drive and I have no idea what it is, although I can guess. Is there a way to use lsof to find out what 1 or 2 files are being written to at a high rate? Can I sort the file list by file size? Can I sort the file list by writing rate i.e. bytes/second?
I am finding that iotop is quite effective, however it updates its display too rapidly to allow for cut-and-paste of anything like PIDs and program paths. UPDATE: This requires use of the -d option to specify an update delay. UPDATE 2: On Raspbian, sysdig is not available and fatrace is broken.
How to use lsof to find high activity file writing?
1,396,980,259,000
I've got a server with 2 network interfaces. Due to a restrictive NAT firewall, it establishes an SSH tunnel to a server on the internet:

ssh -fNTMS "/tmp/tunnel.socket" host; ssh -S "/tmp/tunnel.socket" -O forward -R "0:localhost:22" placeholder

Normally it connects via a wired 1GB ethernet connection (eth0); but it's unreliable, as it's in an office where people move stuff around, and the cable "falls out" (unfortunately I can't use glue). It also has a mobile 4G internet connection (eth1), which is slower, and more expensive.

To ensure the tunnel is still working, I'm periodically using the -O check command:

ssh -S "/tmp/tunnel.socket" -O check placeholder
Master running (pid=3430)
echo $?
0

If this -O check fails, the socket will be closed (via -O exit), and a new SSH connection will be established. If the failure was due to the eth0 network cable "falling out", then Linux will automatically use eth1. This works really well.

But, when eth0 is back up again, I'd like to switch back to it. So I'm thinking, when running -O check, I could see if the tunnel is currently using eth1 (the point of this question), and if eth0 is back, re-connect.

Routing information:

ip route
default via 192.168.1.1 dev eth0 proto dhcp src 192.168.1.225 metric 100
default via 192.168.2.1 dev eth1 proto dhcp src 192.168.2.241 metric 200

Note how eth1 has a metric of 200, so eth0 gets priority when it's working. I can't find anything in /proc/3430/. I can list connections with lsof:

lsof -ai -p 3430 -n -P
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
ssh 3430 craig 3u IPv4 69362 0t0 TCP 192.168.1.225:43878->1.1.1.1:22 (ESTABLISHED)

And netstat -tpln does not show interfaces when listing sockets.
I'm being stupid... the tunnel uses the default route with the lowest metric; it's not tied to a single interface. As soon as eth0 comes back up, the data is sent via eth0. This causes the existing connection to effectively fail, so I need to use ServerAliveInterval/ServerAliveCountMax to notice that and close the connection.
Check which network interface an SSH tunnel is using
1,396,980,259,000
when we run lsof on port 6060 as the following:

# lsof -i TCP:6060 | more
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
app_lot 3495 root 12u IPv6 9238779 1t0 TCP *:krb0934 (LISTEN)
app_lot 3495 root 13u IPv6 9208460 1t0 TCP linux_server45:krb0934->43.55.3.22:5992 (CLOSE_WAIT)
app_lot 3495 root 21u IPv6 9402392 1t0 TCP linux_server45:krb0934->34.22.50.28:6005 (CLOSE_WAIT)
app_lot 3495 root 28u IPv6 9208462 0t0 TCP linux_server45:krb0934->54.33.6.161:23096 (CLOSE_WAIT)

we see many CLOSE_WAIT connections, and we do not want to kill PID 3495. Is it possible to force closing the CLOSE_WAIT connections without restarting the application app_lot?
There is no way to close a socket in the CLOSE_WAIT state (or any other state) externally. If a misbehaving program is accumulating CLOSE_WAIT connections, the only way to free those connections is to kill it. This is a bug in the application, the best solution is to get it fixed. (I’m not saying that’s feasible or realistic.) Alternatively, you could connect to it with a debugger and close the connections from the debugger...
Is it possible to force ending of (close wait) connections?
1,396,980,259,000
I know that I can give a specific name to the TUN interface using the --dev option, but I didn't, and I now have on a router machine something like a hundred client configs. With fewer clients I was able to dig through the log to search for the name of the interface and link it to a named config file, but now there is too much activity. I have played for a while with lsof and udevadm but I'm still not able to link a specific tunX interface with an OpenVPN instance. I would like to know which OpenVPN instance/config-name/process is linked to a specific TUN, like tun4 for example. Is there a solution for that?
So I came up with a solution inspired by A.B's comment:

$ ps ax | awk '/[o]penvpn/{print $7" "$1;system("grep iff /proc/"$1"/fdinfo/*")}'

which gives me both the running config and its linked TUN interface.
How to link a tunX interface to a specific OpenVPN instance?
1,396,980,259,000
I'm trying to change the output of lsof -i4TCP:PORT to include a custom name. This will help me identify the server process as started by my daemon. Below is a picture with arrow pointing to what I'd like to control. I've created a custom gem, executed the process there, and it still says Ruby. Rather than go down the 'rabbit hole', wondering if anyone else has had this need. I'd essentially like to do exactly what docker has done, and show the process tagged with my program name.
Thanks to @Stéphane for the great answer. But in my case the best solution was to bundle my scripts as a Mac OSX app. You can control your process name in your project's Info.plist.
How can I change the command title when running a server?
1,396,980,259,000
we have kafka service ( as systemctl service ) and we configured in that service the number of open files, example:

[Service]
LimitMEMLOCK=infinity
LimitNOFILE=1500000
Type=forking
User=root
Group=kafka

now when the service is up, we want to understand the consumption of the number of files by the kafka service. From googling, I understand from https://www.cyberciti.biz/faq/howto-linux-get-list-of-open-files/ that we can use the command fstat in order to capture the number of open files, as fstat -p {PID}. Since we are using a production RHEL 7.6 secured server, it's not clear if fstat can be installed on our server; therefore we want to learn about other ideas.

Another approach, as suggested, is ls "/proc/$pid/fd", but here is a real example from my machine:

ls /proc/176909/fd | more
0 1 10 100 1000 10000 10001 10002 10003 10004 10005 10006 10007 10008 10009 1001 10010 10011 10012
...

so we get a long list of numbers; so how to find the count of open files?
The LimitNOFILE directive in systemd (see man systemd.exec) corresponds to the RLIMIT_NOFILE resource limit as set with setrlimit() (see man setrlimit), which can be set in some shells with ulimit -n or limit descriptors:

This specifies a value one greater than the maximum file descriptor number that can be opened by this process. Attempts (open(2), pipe(2), dup(2), etc.) to exceed this limit yield the error EMFILE. (Historically, this limit was named RLIMIT_OFILE on BSD.)

So it's not strictly speaking the limit on the number of open file descriptors (let alone open files): a process with that limit could have more open files if it had fds above the limit prior to the limit being set (or inherited upon creation (clone() / fork())), and could fail to get a fd above the limit even if it had very few open fds.

On Linux, /proc/<pid>/fd is a special directory that contains one magic symlink file for each fd that the process has opened. You can get their number by counting them:

() {print $#} /proc/$pid/fd/*(NoN)

in zsh for instance (or ls "/proc/$pid/fd" | wc -l as already shown by Romeo). You can get the highest fd value by sorting them numerically in reverse and taking the first:

() {print $1} /proc/$pid/fd/*(NnOn:t)

Or with GNU ls:

ls -rv "/proc/$pid/fd" | head -n1

To get a report of the number of open fds for all processes, you could do something like:

(for p (/proc/<->) () {print -r $# $p:t $(<$p/comm) $p/exe(:P)} $p/fd/*(NoN)) | sort -n

More portably, you could resort to lsof:

lsof -ad0-2147483647 -Ff -p "$pid" | grep -c '^f'

for the number of open file descriptors, and:

lsof -ad0-2147483647 -Ff -p "$pid" | sed -n '$s/^f//p'

for the highest.
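For the simple count and highest-fd questions, a portable sh sketch that avoids zsh-isms (shown against the shell's own pid; in real use substitute the kafka PID):

```shell
# Count open descriptors of a process and find the highest-numbered
# one, using only /proc and standard tools.
pid=$$
count=$(ls "/proc/$pid/fd" | wc -l)
highest=$(ls "/proc/$pid/fd" | sort -n | tail -n 1)
echo "pid $pid: $count open fds, highest is $highest"
```

Comparing count against the LimitNOFILE value tells you how close the service is to its fd budget; the highest fd number can exceed the count when there are gaps from closed descriptors.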
how to find the number of open files per process
1,396,980,259,000
On Solaris, when I type the command lsof -l I encountered this error: lsof: can't read namelist from /dev/ksyms Anyone knows what this error means and how I can get open the list of open FD's using lsof in Solaris?
From the lsof FAQ:

17.12.7 Why does lsof on my Solaris 7, 8 or 9 system say, "can't read namelist from /dev/ksyms?"

You're probably trying to use an lsof executable built for an earlier Solaris release on a 64 bit Solaris 7, 8 or 9 kernel. The output from lsof -v will tell you the build environment of your lsof executable. You should also have gotten a warning message that lsof is compiled for a different Solaris version than the one under which it is running -- something like this:

lsof: WARNING: compiled for Solaris release X; this is Y

You need to build lsof on the system where you want to use it. For 64 bit Solaris 7, 8 and 9 you need a compiler that can generate 64 bit Solaris executables -- e.g., the Sun Workshop 5 C compiler or later, or a recent gcc version like 3.2.
"lsof: can't read namelist from /dev/ksyms" on Solaris
1,396,980,259,000
I just wonder if somewhere we have some command that gives the following we have server - RHEL 7.2 redhat version usually before umount on mount point folder , we need to kill the PIDS that related mount point folder example lets say we want to perform umount /golden/mnt1 So we need to kill all PID that related to /golden/mnt1 example lsof /golden/mnt1 gives the PIDS as 54642 5459 65753 6581 94763 4826 so we need to kill all above PIDS only then we can do safety umount /golden/mnt1 what we are searching , is maybe command that do both approach of lsof and kill all PIDS appreciate to get ideas about this
fuser -k -m /golden/mnt1 See man fuser
find & kill the processes on mount point folder before umont
1,396,980,259,000
I want to know whether killing the processes that hold deleted files can help clean the memory cache. Sometimes lsof shows many deleted files, so does killing the processes holding them give more available memory? Example: lsof | grep delete lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/42/gvfs Output information may be incomplete. cupsd 1619 root 10r REG 253,0 2979 38250477 /etc/passwd+ (deleted) gnome-set 5731 gdm 14r REG 253,0 65536 51102558 /etc/pki/nssdb/cert8.db;5c644c01 (deleted) gnome-set 5731 gdm 16r REG 253,0 16384 51197440 /etc/pki/nssdb/key3.db;5c644c01 (deleted) pool 5731 5795 gdm 14r REG 253,0 65536 51102558 /etc/pki/nssdb/cert8.db;5c644c01 (deleted) kill 1619 We are worried that processes with deleted files also consume memory. We can also see the following: lsof | grep deleted | wc -l 3421
Often processes expressly create and open files and delete them directly, so that the file can be used more securely and is guaranteed to be removed when the process ends. In short, this is a feature, especially with the files you show in your question. Killing these processes will disturb the workings of your system. There are cases where having deleted files still open could be a bug (e.g. a process is writing to a log file, but the log file is compressed and deleted without signalling the process to reopen it). You can truncate such files via the /proc/ filesystem: find the file descriptor number with lsof as shown above, and then do > /proc/12345/fd/123 to truncate the file (12345 is the process ID, 123 is the file descriptor number). However, this is very seldom necessary. Note that the files' data isn't gone as long as a process holds them open: only the directory entries referencing those files have been removed.
kill deleted files and clean the memory cache
1,396,980,259,000
I don't know quite how to ask this question and I'm not even sure this is the place to ask it. It seems rather complex and I don't have a full understanding of what is going on. Frankly, that's why I'm posting - to get some help wrapping my head around this. My end goal is to learn, not to solve my overall problem. I want to understand when I can expect to encounter the situation I'm about to describe and why it happens. I have a perl module which I've been developing. One of the things it does is it detects whether there is input on standard in (whether that's via a pipe or via a redirect (i.e. <)). To catch redirects, I employ a few different checks for various cases. One of them is looking for 0r file descriptors in lsof output. It works fairly well and I use my module in a lot of scripts without issue, but I have 1 use-case where my script thinks it's getting input on STDIN when it is not - and it has to do with what I'm getting in the lsof output. Here are the conditions I have narrowed down the case to, but these are not all the requirements - I'm missing something. Regardless, these conditions seem to be required, but take my intuition with a hefty grain of salt, because I really don't know how to make it happen in a toy example - I have tried - which is why I know I'm missing something: When I run a perl script from within a perl script via backticks, (the inner script is the one that thinks it has been intentionally fed input on STDIN when it has not - though I should point out that I don't know whether it's the parent or child that actually opened that handle) An input file is supplied to the inner script call that resides in a subdirectory The file with the 0r file descriptor that lsof is reporting is: /Library/Perl/5.18/AppendToPath This file does not show up in the lsof output under other conditions. And if I do eof(STDIN) before and after the lsof call, the result is 1 each time. -t STDIN is undefined. fileno(STDIN) is 0. 
I read about this file here and if I cat it, it has: >cat /Library/Perl/5.18/AppendToPath /System/Library/Perl/Extras/5.18 It appears this is a macOS-perl-specific file meant to append to the @INC perl path, but I don't know if other OSes provide analogous mechanisms. I'd like to know more about when that file is present/opened and when it's closed. Can I close it? It seems like the file content has already been read in by the interpreter maybe - so why is it hanging around in my script as an open file handle? Why is it on STDIN? What happens in this case when I actually redirect a file in myself? Is the child process somehow inheriting it from the parent under some circumstance I'm unaware of? UPDATE: I figured out a third (possibly final) requirement needed to make that AppendToPath file handle be open on STDIN during script execution of the child script. It turns out I had a line of code at the top of the parent script (probably added to try and solve a similar problem when I knew even less than I know now about detecting input on STDIN) that was closing STDIN. I commented out that close and everything started working without any need to exclude that weird file (i.e. that file: /Library/Perl/5.18/AppendToPath no longer shows as open on STDIN in lsof). This was the code I commented out: close(STDIN) if(defined(fileno(STDIN)) && fileno(STDIN) ne '' && fileno(STDIN) > -1); It had a comment above it that read: #Prevent the passing of active standard in handles to the calls to the script #being tested by closing STDIN. So I was probably learning about standard input detection at the time I wrote that years ago. My module probably ended up using -t STDIN and -f STDIN, etc, but I'd switched those out to work around a problem like this one using lsof so I could see better what was going on. So the current module (using either lsof or my new(/reverted?) streamlined version using -t/-f/-p) works just fine (as intended) when I don't close STDIN in the parent.
However, I would still like to understand why that file is on STDIN in a child process when the parent closes STDIN...
When I run a perl script from within a perl script via backticks, (the inner script is the one falsely thinking there is input on STDIN) The inner script RIGHTLY thinks there's input on STDIN; it's just that another file open got file descriptor 0 (which, to perl, is always given the file handle STDIN). As you know, programs run via qx{...} or `...` in perl inherit the stdin file descriptor from the outer script, just like any other subprocess. Because the inner script inherits the raw file descriptor 0, not the perl STDIN file handle, this creates problems with buffering, as either the inner or the outer script may end up reading more input than it needs, up to leaving nothing for the other. Consider the example: $ echo text | perl -e '$junk=`perl -e "eof(STDIN)"`; print while <>' $ # nothing! Just by "testing for EOF", the inner script will leave no input for the outer script. Doing an unbuffered read with sysread in the inner script will however work as expected: $ cat inner.pl sysread STDIN, $d, 2 $ echo text | perl -e '$junk = `perl inner.pl`; print while <>' xt [from the other answer] With: your-script <&- stdin will be closed. Closing file descriptors like stdin is never a good idea (daemons redirect them from /dev/null, they never close them), but is especially bad when running a script written in a language like perl or python, because that may cause stdin to end up open (and referring to the script) instead of closed: $ cat script.pl seek STDIN, 0, 0; print while <STDIN>; $ perl script.pl <&- seek STDIN, 0, 0; print while <STDIN>; That happens because system calls like open(2) or socket(2) return the first free file descriptor; if stdin is closed, the returned fd will "become" the stdin.
Edge case - detecting input on STDIN in perl
1,396,980,259,000
I have SSH'd into a remote machine. I would like to get the current working directory (and ideally execute commands like ls) on that remote machine, but from outside this process. Here are my processes $ ps 49100 ttys001 0:00.21 -zsh 52134 ttys002 0:00.21 -zsh 52171 ttys002 0:00.05 ssh [email protected] Terminal 2 (ttys002) is where I am currently SSH'd into a remote machine. Is it possible to get the current working directory of the remote host from the client computer? ie without just typing pwd into Terminal 2. If I run lsof, I can get the current working directory on the local machine of the process, but not the current working directory of the remote machine. ~ $ lsof -p 52171 COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME ssh 52175 falky cwd DIR 1,4 2816 994619 /Users/falky If this just isn't possible, would there be something I could do before SSHing into the remote machine that would allow me to do this? For example, could I set up a pseudo terminal? Or could I install something on the remote machine that sends a ping back to my local machine? Any advice/direction here would be helpful.
If this just isn't possible, would there be something I could do before SSHing into the remote machine that would allow me to do this? You could start the ssh client in the "connection sharing mode": ssh -M -S ~/.ssh/%r@%h:%p user@localhost user@localhost's password: ... user@localhost$ echo $$ 5555 user@localhost$ cd /some/path In another terminal: ssh -S ~/.ssh/%r@%h:%p user@localhost <no need to enter the password again> user@localhost$ ls -l /proc/5555/cwd <listing of /some/path> Refer to the ssh(1) manpage for the -S and -M options, and to ssh_config(1) for the Control* config options.
Get working directory inside SSH client process from outside process
1,396,980,259,000
I need to write lsof output for several network ports into a bash variable. A simple $(lsof -i :5555) doesn't work - lsof waits for a quit command (Ctrl-C) every time I call it. I can't figure out how to solve my task.
A long-running lsof process usually means that DNS resolution is timing out or not working correctly, which delays its output. You can disable DNS resolution by adding the -n option. Of course, you might also want to check why DNS resolution is taking so long on your server.
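A sketch with the port from the question (adding -P as well, which skips port-to-service name lookups; those go through /etc/services rather than DNS, but are usually worth disabling too):

```shell
# -n: no host name resolution, -P: no port name resolution.
# With a slow or broken resolver this returns promptly instead of hanging.
listeners=$(lsof -nP -i :5555)
printf '%s\n' "$listeners"
```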
run lsof in non-interactive mode
1,396,980,259,000
I am looking for a way to track which files are used by program installer(InstallAnywhere). I cannot use lsof because as far as I know it works on active processes and I want a tool which will work something like that: Time: -------------------------------------------------------- Tool start here: |-------------------------------------------| Installer starts here: |-----------------------------------|
You can also consider invoking your command under strace: strace -f -e trace=file -o /path/to/logfile your_command logfile would contain every file-related operation performed by your_command or its child processes.
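A quick sketch of what a run looks like; `cat /dev/null` is just a stand-in for the installer, and the log path is arbitrary:

```shell
# Trace every file-related syscall made by a command and its children.
strace -f -e trace=file -o /tmp/files.log cat /dev/null > /dev/null
# Each open/stat/access appears in the log with its path (on modern
# kernels open() shows up as openat()):
grep open /tmp/files.log
```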
Tracking which files are used by program
1,396,980,259,000
pfiles on Solaris suspends the target process for a short period while examining it; lsof, however, does not. How does lsof work such that it can retrieve this information without suspending the process?
Well, lsof reads the kernel's volatile memory, while pfiles reads directly through the process interface, which causes the process to be suspended for a short period of time. For that reason, lsof does not truly provide an accurate picture of the system, but that is better than the alternative of freezing the process while inspecting it.
lsof compared to pfiles: what's the difference?
1,396,980,259,000
Per the EXAMPLES section in the lsof(8) man page on manpages.ubuntu.com, I should be able to run a command/take action if a process has a file open: To take action only if a process has /u/abe/foo open, use: lsof /u/abe/foo echo "still in use" When I try this syntax (in repeat mode), it doesn't work: $ lsof -r /u/abe/foo tput bel lsof: status error on tput: No such file or directory lsof: status error on bel: No such file or directory ======= I reviewed the man page, but it's lengthy and I guess I've overlooked something. What am I missing?
Even though the manpage lists it, that is not a valid lsof invocation. lsof will consider echo and still in use as names: lsof /u/abe/foo echo "still in use" To have a result from lsof and act on it you could use the exit code. In bash: lsof filename && echo "still in use" or: lsof filename || echo "not in use" If you want this to happen repeatedly you can put the above code in a loop or put lsof in repeat mode and then process its standard output, for example in bash: lsof -t -r2 filename | grep --line-buffered -v '=======' | while read pid do echo Process $pid is using the file done
Have lsof take action if a process has a file open — and, ideally, do so repeatedly
1,396,980,259,000
I'm hitting an issue where I need to get the unresolved symlink of a shell process. For example given a symlink ~/link -> ~/actual, if bash is launched with a $PWD of ~/link, I need to fetch that from outside the bash process. Getting the resolved cwd is possible using lsof or /proc as called out in https://unix.stackexchange.com/a/94359/115410 but I'm beginning to think it's not possible to get the unresolved path. I have tried to use lsof -b to not use readlink but the logging says that path never tries to use readlink anyway. It does appear possible to read the environment via /proc/.../environ and parse out PWD but this is slow, /proc may not exist on the system and I believe there are some security implications to trying to read a processes' environment. Here is the code in question I'm trying to fix: lsof on macOS: exec('lsof -OPln -p ' + this._ptyProcess.pid + ' | grep cwd', (error, stdout, stderr) => { ... }); /proc on Linux: Promises.readlink(`/proc/${this._ptyProcess.pid}/cwd`);
The logical value of the current working directory (logical cwd, what you call “unresolved pwd”) is an internal concept of the shell, not a concept of the kernel. The kernel only remembers the fully resolved path (physical cwd). So you won't get the information you want through generic system interfaces. You have to get the shell's cooperation. The PWD environment variable is how shells transmit the logical cwd to their subprocesses. When a shell runs another shell, the parent sets PWD to the logical cwd (it does this whenever it runs a program), and the child checks that the value of PWD is sensible and, if so, uses it as its logical cwd, falling back to the physical cwd if $PWD is missing or wrong. Observe: #!/bin/sh mkdir /tmp/dir ln -sf dir /tmp/link cd /tmp/link sh -c 'echo Default behavior: "$PWD"' env -u PWD sh -c 'echo Unset PWD: "$PWD"' PWD=/something/fishy sh -c 'echo Wrong PWD: "$PWD"' rm /tmp/link rmdir /tmp/dir Output: Default behavior: /tmp/link Unset PWD: /tmp/dir Wrong PWD: /tmp/dir Reading the process's environment doesn't have any particular security implications, but what it tells you is what the value of PWD was when the process started. If the process has changed to another directory, the value in the environment is no longer relevant. What you need is the value that would be in the environment if the shell ran another process now. But the only way for this to appear is to actually make the shell run another process. The typical way for GUIs to find the cwd of a shell that they run is to make the shell print it out. If you need the information occasionally and want to leave maximum control over the shell configuration to the user, ensure that the shell is displaying a prompt and issue the pwd command. This is simple, works even in “exotic” shells like csh and fish, but is ambiguous in corner cases (e.g. a directory name containing newlines).
If it's ok to tweak the shell configuration, you can make the shell print an escape sequence each time it displays a prompt (PS1 for many shells, but the way to make it include the current directory varies), or when it changes directories (chpwd_functions in zsh, more invasive ways in other shells). Note that if a component of the logical cwd has been moved or removed, or if the symbolic link has been changed to point elsewhere, the value may be wrong. On the other hand the physical cwd will always be correct. (If the directory has been removed, Linux's /proc/PID/cwd, which is where all programs such as ps and lsof get their information, will appear as a broken symlink whose target ends in (deleted). I don't know what lsof reports on macOS for a deleted current directory.) So if you do find out the logical cwd, you should probably ensure that it matches the physical cwd and fall back to the physical cwd if it doesn't.
Get the unresolved pwd of a shell from another process
1,396,980,259,000
Under the folder /home/testing/scripts on a Linux machine, we have 234 different scripts that do sanity and testing, such as /home/testing/scripts/test.network.py /home/testing/scripts/test.hw.py /home/testing/scripts/test.load.sh . . . In some cases we want to kill all running scripts, so in order to find the running scripts' PIDs we do lsof /home/testing/scripts/ and to kill all PIDs we use: for proccess in `lsof /home/testing/scripts/ | awk '{print $2}' | grep -v proccess`; do kill $proccess; done Let's say we run only the script /home/testing/scripts/test.network.py; then from ps -ef |grep "testing/scripts" we get root 5793 17546 84 09:20 ? 00:00:00 python3 -u /home/testing/scripts/test.network.py so from lsof we should get the same PID number as lsof /home/testing/scripts/ Now I just want to know whether my approach for proccess in `lsof /home/testing/scripts/ | awk '{print $2}' | grep -v proccess`; do kill $proccess; done is good enough to kill all running scripts under /home/testing/scripts/, or maybe there are other suggestions?
You could tidy up the command a little, but it seems to me that it's reasonably accurate already. I'd match the filename more tightly to reduce potential mismatches, and I'd ensure that each candidate PID was actually numeric, lsof | awk -v p='^/home/testing/scripts/' '$9~p && $2+0 {print $2}' | sort -u | xargs echo kill Notice the ^ at the beginning of the directory assignment to the awk variable p. You should also escape characters that could be interpreted as part of a Regular Expression (i.e. . should be represented as \., the * as \*, etc.) Remove echo when you're ready to have the script really perform the kill operation.
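One complementary approach worth mentioning (an assumption about your interpreters, not part of the lsof pipeline above): depending on the interpreter, the script file may not stay open for the whole life of the process, in which case any lsof-based match can miss it. Matching the command line instead sidesteps that:

```shell
# Preview which processes mention the scripts directory on their command
# line (-a shows the full command line, -f matches against it)...
pgrep -af '/home/testing/scripts/'
# ...then kill them by the same full-command-line match.
pkill -f '/home/testing/scripts/'
```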
what is the right way to find the PIDs of all scripts running from a folder
1,396,980,259,000
On an Ubuntu VM iotop shows me that some "apache2 -k start" processes are producing a total disk read load of constantly between 4 M/s and 7 M/s even while no requests are being logged. lsof shows me about 5000 regular files being used by www-data. How can I determine what is causing so much disk IO while there shouldn't be any at all?
Indications of high I/O will likely require a tracing tool to dig into the details of what that I/O is; strace is a common way to do this: strace -e trace=file -ff -o output -y -p $some_httpd_pid_here -e trace=file traces file related operations (there are other handy specifiers; see the fine manual) though will not show read calls that may be necessary to figure out which file descriptors are being read from; for that use -e trace=open,read or instead just trace everything and then grep the output... -ff follows forks, good if CGI or such are being spawned, or if you're instead tracing the httpd master process as it starts. -o output interacts with -ff and produces output or output.* files to be poked at later. -y isn't portable to older versions of strace but does save the trouble of finding out what file descriptor number 42 or whatever refers to. (strace can also be horribly slow; see also on Linux sysdig or SystemTap for alternative takes on tracing things or otherwise debugging what the kernel is doing...)
Apache: High disk read load, no requests
1,396,980,259,000
I need to check the number of file descriptors which are open by a Java process. The output of lsof is almost 40000 lines long. Here's just the beginning: COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME java 12003 jboss cwd DIR 253,7 4096 1835012 /obcdn/Jboss/bin java 12003 jboss rtd DIR 253,0 4096 2 / java 12003 jboss txt REG 253,7 7336 2621459 /obcdn/java1.8/bin/java java 12003 jboss mem REG 253,2 111080 171382 /usr/lib64/libresolv-2.17.so java 12003 jboss mem REG 253,2 27776 133531 /usr/lib64/libnss_dns-2.17.so java 12003 jboss mem REG 253,7 278078 1966631 /obcdn/Jboss/modules/system/layers/base/org/yaml/snakeyaml/main/snakeyaml-1.15.0.redhat-1.jar java 12003 jboss mem REG 253,7 360979 1835896 /obcdn/Jboss/modules/system/layers/base/org/apache/james/mime4j/main/apache-mime4j-0.6.0.redhat-5.jar java 12003 jboss mem REG 253,7 32957 1835471 /obcdn/Jboss/modules/system/layers/base/com/fasterxml/jackson/jaxrs/jackson-jaxrs-json-provider/main/jackson-module-jaxb-annotations-2.5.4.redhat-1.jar java 12003 jboss mem REG 253,7 28742 1835469 /obcdn/Jboss/modules/system/layers/base/com/fasterxml/jackson/jaxrs/jackson-jaxrs-json-provider/main/jackson-jaxrs-base-2.5.4.redhat-1.jar java 12003 jboss mem REG 253,7 16843 1835470 /obcdn/Jboss/modules/system/layers/base/com/fasterxml/jackson/jaxrs/jackson-jaxrs-json-provider So it appears that the process is using almost 40000 file descriptors, which seems a bit too much. What also worries me is that ulimit -a shows this maximum number of open files: open files (-n) 40000 Just to confirm my analysis: is each line of the lsof output actually an open FD, or should I grep the output somehow to get the correct count? Thanks
mem aren't FDs, they're from mmap(). So I would grep -v " mem " to be sure. cwd, rtd, and txt are not FDs either, but there should be exactly one of each, so they won't have a very significant effect on your numbers.
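On Linux you can also cross-check against the kernel's own per-process list: /proc/<pid>/fd holds exactly one entry per open descriptor and none of the mem/cwd/rtd/txt rows. A minimal sketch, using the current shell's pid as a stand-in for the Java process:

```shell
#!/bin/sh
# Count real file descriptors straight from /proc; this is the number the
# filtered lsof output should agree with.
pid=$$
ls "/proc/$pid/fd" | wc -l
```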
Count the number of file descriptors opened by a process with lsof
1,396,980,259,000
As a follow-up to an answer about unsynced files, I was wondering whether lsof counts delayed writes as open files. If an application has closed a file, but the file is not yet physically on the device (it is still in the kernel buffer, pending a delayed write to the actual device), does lsof list such a file as open, or is it closed and invisible to lsof? And if not, is there a way to determine whether a manual sync is needed?
It's considered closed, and will not be shown. If it considered it open, what file descriptor would you expect it to report? Closing a file removes the file descriptor. I don't think there's any command that will tell if there are buffered writes to a file. But as mentioned in the other question, the eject command on removable media will sync it before returning. Shutting down the system will also sync all files. This is why you should not physically remove a device without first using a command like eject.
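On triggering the writes yourself: plain sync flushes everything, and GNU coreutils sync (8.24 or later, which is an assumption about your version; the file path below is a placeholder) can also flush a single file or its filesystem:

```shell
# Flush all dirty buffers system-wide:
sync
# With GNU coreutils >= 8.24, sync also accepts file arguments:
sync /path/to/file        # fsync() just this one file
sync -f /path/to/file     # syncfs() the filesystem containing it
```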
Do `lsof` listings include unsynced/delayed writes?
1,396,980,259,000
I use Linux Mint 13, and sometimes (rarely) I find myself not being able to list my home directory contents. When I try to do so: $ cd $ ls then, ls just waits indefinitely. The same with any other application when it tries to read directory contents: I have to kill that application eventually. I have used this linux distribution for about a year, my machine is typically always on (24/7), and I first faced this issue a couple of weeks ago. Then, I just tried to close all applications, that didn't help, then I rebooted the machine, and it helped: problem was "fixed". Today I faced it again. This time I tried to find a bit more about the reason: I googled lsof, tried to use it, but... it waits indefinitely, too! More, it waits even if I try to lsof any directory, not just home directory. Say, $ lsof /path/to/any/file causes lsof to wait indefinitely. Just in case, I tried to use lsof on remote machine via ssh, it works. So, it seems like deeper problem on my local machine. (I'm not going to reboot the machine now, I hope to catch the reason) UPD: parts of dmesg output: Nov 12 14:35:36 dimon-progr kernel: [1305000.288107] INFO: task lsof:32463 blocked for more than 120 seconds. Nov 12 14:35:36 dimon-progr kernel: [1305000.288112] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. Nov 12 14:35:36 dimon-progr kernel: [1305000.288116] lsof D c1044aa0 0 32463 1 0x00000084 Nov 12 14:35:36 dimon-progr kernel: [1305000.288122] f10f3dc0 00000086 f10f3d68 c1044aa0 00000001 f3108ca0 c18e43c0 c18e43c0 Nov 12 14:35:36 dimon-progr kernel: [1305000.288132] eea0a18a 0004a2af f45073c0 ee00a5e0 ed9c25e0 ee00a5e0 f10f3db4 f10f3d84 Nov 12 14:35:36 dimon-progr kernel: [1305000.288141] c105be37 ee00a5e0 f10f3d9c c105c535 00000296 f10f3d9c f10f3d9c c1027378 Nov 12 14:35:36 dimon-progr kernel: [1305000.288150] Call Trace: Nov 12 14:35:36 dimon-progr kernel: [1305000.288160] [<c1044aa0>] ? 
try_to_wake_up+0x140/0x190 Nov 12 14:35:36 dimon-progr kernel: [1305000.288167] [<c105be37>] ? recalc_sigpending+0x17/0x40 Nov 12 14:35:36 dimon-progr kernel: [1305000.288172] [<c105c535>] ? __set_task_blocked+0x35/0x80 Nov 12 14:35:36 dimon-progr kernel: [1305000.288178] [<c1027378>] ? default_spin_lock_flags+0x8/0x10 Nov 12 14:35:36 dimon-progr kernel: [1305000.288183] [<c1576d2d>] ? _raw_spin_lock_irqsave+0x2d/0x40 Nov 12 14:35:36 dimon-progr kernel: [1305000.288188] [<c1575135>] schedule+0x35/0x50 Nov 12 14:35:36 dimon-progr kernel: [1305000.288193] [<c121755d>] request_wait_answer+0x6d/0x1f0 Nov 12 14:35:36 dimon-progr kernel: [1305000.288198] [<c106a390>] ? add_wait_queue+0x50/0x50 Nov 12 14:35:36 dimon-progr kernel: [1305000.288203] [<c1217758>] fuse_request_send+0x78/0xb0 Nov 12 14:35:36 dimon-progr kernel: [1305000.288208] [<c121bd6c>] fuse_do_getattr+0x12c/0x280 Nov 12 14:35:36 dimon-progr kernel: [1305000.288213] [<c113d80d>] ? complete_walk+0x7d/0x100 Nov 12 14:35:36 dimon-progr kernel: [1305000.288219] [<c121c381>] fuse_update_attributes+0x41/0xa0 Nov 12 14:35:36 dimon-progr kernel: [1305000.288224] [<c121c684>] fuse_getattr+0x44/0x50 Nov 12 14:35:36 dimon-progr kernel: [1305000.288228] [<c11370e2>] vfs_getattr+0x42/0x70 Nov 12 14:35:36 dimon-progr kernel: [1305000.288233] [<c121c640>] ? fuse_listxattr+0x130/0x130 Nov 12 14:35:36 dimon-progr kernel: [1305000.288237] [<c113716c>] vfs_fstatat+0x5c/0x80 Nov 12 14:35:36 dimon-progr kernel: [1305000.288241] [<c11371e0>] vfs_stat+0x20/0x30 Nov 12 14:35:36 dimon-progr kernel: [1305000.288245] [<c1137456>] sys_stat64+0x16/0x30 Nov 12 14:35:36 dimon-progr kernel: [1305000.288251] [<c100ceec>] ? syscall_trace_enter+0x15c/0x170 Nov 12 14:35:36 dimon-progr kernel: [1305000.288256] [<c1576ed4>] syscall_call+0x7/0xb Nov 12 14:35:36 dimon-progr kernel: [1305000.288260] [<c1570000>] ? encode+0x26/0x2b
Processes attempting to access a filesystem block indefinitely if the filesystem driver never responds. For a filesystem that is stored on a storage device, the main cause for not responding is that the underlying hardware is not responding or is faulty. This usually produces copious messages in the kernel logs (visible with dmesg on Linux or in the appropriate log file such as /var/log/kern.log), and eventually causes a timeout and an I/O error (EIO). Network-backed filesystems might not respond because no response from the server is coming, which could be because the network is down, or the server machine is down, or the server program isn't running or configured properly. Depending on the filesystem type, on the driver and on its configuration, this can result in a timeout or in an infinite wait. NFS, in particular, defaults to an infinite wait: it's stateless (if the server goes down in the middle of an operation, the operation can resume when the server comes back), so clients block until the server responds (because if the server does come back eventually then the filesystem will behave correctly). For FUSE filesystems, it's up to the program implementing the filesystem. FUSE is very flexible since it can be implemented by arbitrary programs. The reverse side of the coin is that sometimes FUSE filesystems aren't very robust internally or are dependent on a lot of other components that might misbehave. If a filesystem isn't responding, first check what type of filesystem it is. On Linux, look for the mount point in /proc/mounts; the mount point is the second field and the filesystem type is the third field. This tells you where to look for more clues: For filesystems on a storage device, look in the kernel logs. For network-backed filesystems, check network connectivity and check if the server is responding. Relevant logs are typically in service logs (e.g. /var/log/syslog or /var/log/daemon.log or a log that's specific to the network service). 
For FUSE filesystems, check if the process is responding. If you have processes blocked in I/O and you've given up on waiting for the filesystem to come back up, you may want to forcibly unmount the filesystem. If it's a FUSE filesystem, killing the process that provides it will do the trick. For any type of filesystem, on Linux, you can perform a “lazy unmount” with umount -l: this detaches the filesystem from its mount point, even if the filesystem driver is stuck; the driver keeps operating (e.g. it keeps communicating with the hardware if that's what it's doing).
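The /proc/mounts lookup described above is easy to script; the mount point "/" here is just an example:

```shell
#!/bin/sh
# Print the filesystem type of a given mount point.
# /proc/mounts fields: device, mount point, fstype, options, dump, pass.
mnt=/
awk -v m="$mnt" '$2 == m { print $3; exit }' /proc/mounts
```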
Failed to list directory contents: process waits infinitely
1,396,980,259,000
root@host [~]# fsck /home2 fsck from util-linux-ng 2.17.2 e2fsck 1.41.12 (17-May-2010) /dev/sdb1: clean, 6018617/91578368 files, 54524459/366284000 blocks root@host [~]# fsck /home4 fsck from util-linux-ng 2.17.2 e2fsck 1.41.12 (17-May-2010) /dev/sdd1: clean, 8094369/91578368 files, 75999625/366284000 blocks fsck returns no error root@host [~]# lsof /home4 root@host [~]# lsof /home2 lsof returns no user root@host [~]# mount /dev/mapper/VolGroup-lv_root on / type ext4 (rw,relatime,usrjquota=quota.user,jqfmt=vfsv0) proc on /proc type proc (rw) sysfs on /sys type sysfs (rw) devpts on /dev/pts type devpts (rw,gid=5,mode=620) tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0") /dev/sda1 on /boot type ext4 (rw) /dev/mapper/VolGroup-lv_home on /home type ext4 (rw,relatime,usrjquota=quota.user,jqfmt=vfsv0) /dev/sdc1 on /home3 type ext3 (rw,relatime) none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw) /usr/tmpDSK on /tmp type ext3 (rw,noexec,nosuid,loop=/dev/loop0) /tmp on /var/tmp type none (rw,noexec,nosuid,bind) sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw) root@host [~]# iostat -xk Linux 2.6.32-279.19.1.el6.x86_64 (host.buildingsuperteams.com) 01/06/2013 _x86_64_ (16 CPU) avg-cpu: %user %nice %system %iowait %steal %idle 18.91 0.02 39.17 20.22 0.00 21.67 mount shows that there is sdd1 and sdb1 is not mounted Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.16 11.93 1.35 3.30 53.91 59.92 48.95 0.10 21.87 3.70 1.72 sdb 0.49 219.57 22.00 99.14 224.17 1275.44 24.76 7.44 61.38 7.45 90.24 sdd 0.46 226.39 23.26 92.71 260.61 1277.34 26.52 0.67 5.77 7.71 89.40 sdc 0.00 1.79 0.28 0.05 5.03 7.38 74.28 0.00 14.34 2.05 0.07 dm-0 0.00 0.00 1.45 14.91 53.66 59.50 13.83 1.56 95.36 1.06 1.73 dm-1 0.00 0.00 0.04 0.10 0.18 0.41 8.00 0.00 21.25 2.44 0.04 dm-2 0.00 0.00 0.01 0.00 0.05 0.01 8.49 0.00 7.32 1.84 0.00 iostat report huge writes What would the reason be? I will replace the hard disk anyway. 
But this puzzles me to no end. This has already caused a server crash. I unmounted the drive, and now iostat -x 1 shows nothing for it, which is what's expected. So was I seeing stale data all this time?
It is possible your VolGroup-lv_root logical volume is created on that drive. Check the output of the following command: pvs It displays information about the physical volumes backing your logical volumes. More info about LVM (1), (2), (3)
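A sketch of the mapping commands (they need root, and of course the LVM tools installed):

```shell
# Which disks/partitions are LVM physical volumes, and in which volume group:
pvs
# Which physical devices each logical volume is actually laid out on:
lvs -o +devices
```

If /dev/sdb1 and /dev/sdd1 show up in the devices column for a mounted LV, that would explain the I/O on "unmounted" drives.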
iostat reports huge writes to drives that aren't even mounted
1,396,980,259,000
Possible Duplicate: Access history of a file I know if a file is "being accessed" I can use lsof to see who (which process) is accessing it, but lsof is slow and heavy and I don't think I would be able to run it fast enough to see if a file is accessed or not. So it there a way to watch a file, and see if it ever get accessed and if yes by who?
Assuming you're running Linux: You can use the audit subsystem to monitor access to a particular file. You can use the inotify subsystem to watch for activity on files. There is a nice API for inotify, which makes it more useful for some things than the audit subsystem, but inotify does not provide you with any information about who made the change that triggered a notification.
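Sketches of both approaches; the file path is a placeholder, inotifywait comes from the inotify-tools package, and the audit commands need root:

```shell
# inotify: see *that* the file was touched (but not by whom).
# -m keeps watching instead of exiting after the first event.
inotifywait -m -e open,access,close /path/to/file

# audit: log *who* touched it.  -w watches the path, -p rwa logs
# read/write/attribute accesses, -k tags the rule for later searching.
auditctl -w /path/to/file -p rwa -k file-watch
ausearch -k file-watch
```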
How can I monitor if anybody (any process) accesses a certain file [duplicate]
1,396,980,259,000
I run a python code inside docker container performing the following calls import socket as s,subprocess as sp;s1=s.socket(s.AF_INET,s.SOCK_STREAM); s1.setsockopt(s.SOL_SOCKET,s.SO_REUSEADDR, 1);s1.bind(("0.0.0.0",9001));s1.listen(1);c,a=s1.accept(); I'm trying to get info using ss and see the open sockets, but can't get them docker run --rm --publish 9001:9001 -it --name python-app sample-python-app reverseshell.py docker inspect --format='{{.State.Pid}}' python-app 1160502 > sudo ss -a -np | grep 9001 tcp LISTEN 0 4096 0.0.0.0:9001 0.0.0.0:* users:(("docker-proxy",pid=1160459,fd=4)) tcp LISTEN 0 4096 [::]:9001 [::]:* users:(("docker-proxy",pid=1160467,fd=4)) however lsof gives me more info: > sudo lsof -p 1160502 lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs Output information may be incomplete. lsof: WARNING: can't stat() fuse.portal file system /run/user/1000/doc Output information may be incomplete. COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME python 1160502 dmitry cwd DIR 0,1364 108 19497 /workspace python 1160502 dmitry rtd DIR 0,1364 188 256 / python 1160502 dmitry txt REG 0,1364 6120 6529 /layers/paketo-buildpacks_cpython/cpython/bin/python3.10 python 1160502 dmitry mem REG 0,30 6529 /layers/paketo-buildpacks_cpython/cpython/bin/python3.10 (stat: No such file or directory) python 1160502 dmitry mem REG 0,30 9492 /layers/paketo-buildpacks_cpython/cpython/lib/python3.10/lib-dynload/_posixsubprocess.cpython-310-x86_64-linux-gnu.so (stat: No such file or directory) python 1160502 dmitry mem REG 0,30 9518 /layers/paketo-buildpacks_cpython/cpython/lib/python3.10/lib-dynload/fcntl.cpython-310-x86_64-linux-gnu.so (stat: No such file or directory) python 1160502 dmitry mem REG 0,30 9514 /layers/paketo-buildpacks_cpython/cpython/lib/python3.10/lib-dynload/array.cpython-310-x86_64-linux-gnu.so (stat: No such file or directory) python 1160502 dmitry mem REG 0,30 9527 
/layers/paketo-buildpacks_cpython/cpython/lib/python3.10/lib-dynload/select.cpython-310-x86_64-linux-gnu.so (stat: No such file or directory) python 1160502 dmitry mem REG 0,30 9520 /layers/paketo-buildpacks_cpython/cpython/lib/python3.10/lib-dynload/math.cpython-310-x86_64-linux-gnu.so (stat: No such file or directory) python 1160502 dmitry mem REG 0,30 9499 /layers/paketo-buildpacks_cpython/cpython/lib/python3.10/lib-dynload/_socket.cpython-310-x86_64-linux-gnu.so (stat: No such file or directory) python 1160502 dmitry mem REG 0,30 634 /lib/x86_64-linux-gnu/libm-2.27.so (stat: No such file or directory) python 1160502 dmitry mem REG 0,30 692 /lib/x86_64-linux-gnu/libutil-2.27.so (stat: No such file or directory) python 1160502 dmitry mem REG 0,30 619 /lib/x86_64-linux-gnu/libdl-2.27.so (stat: No such file or directory) python 1160502 dmitry mem REG 0,30 670 /lib/x86_64-linux-gnu/libpthread-2.27.so (stat: No such file or directory) python 1160502 dmitry mem REG 0,30 609 /lib/x86_64-linux-gnu/libc-2.27.so (stat: No such file or directory) python 1160502 dmitry mem REG 0,30 6705 /layers/paketo-buildpacks_cpython/cpython/lib/libpython3.10.so.1.0 (stat: No such file or directory) python 1160502 dmitry mem REG 0,30 591 /lib/x86_64-linux-gnu/ld-2.27.so (stat: No such file or directory) python 1160502 dmitry mem REG 0,30 3735 /usr/lib/locale/locale-archive (path dev=0,32, inode=1544914) python 1160502 dmitry mem REG 0,30 1365 /usr/lib/x86_64-linux-gnu/gconv/gconv-modules.cache (stat: No such file or directory) python 1160502 dmitry mem REG 0,30 1091 /usr/lib/locale/C.UTF-8/LC_CTYPE (stat: No such file or directory) python 1160502 dmitry 0u CHR 136,0 0t0 3 /dev/pts/0 python 1160502 dmitry 1u CHR 136,0 0t0 3 /dev/pts/0 python 1160502 dmitry 2u CHR 136,0 0t0 3 /dev/pts/0 python 1160502 dmitry 3u sock 0,8 0t0 75159952 protocol: TCP at least I have this line showing that fd=3 opens socket [75159952] but without actual port number. 
python 1160502 dmitry 3u sock 0,8 0t0 75159952 protocol: TCP
So how can I find, with ss, information about the open socket on port 9001 that is not docker-proxy?
You have to switch to the correct network namespace first, because socket state is per-namespace (namely, per network namespace), for example by using nsenter. sudo moves to the front, wrapping nsenter, because nsenter also requires privileges. In one line (and using ss's own filtering features) this becomes:
sudo nsenter -t $(docker inspect --format='{{.State.Pid}}' python-app) --net -- \
    ss -a -np sport == 9001
ss doesn't display socket info related to the process opening SOL_SOCKET
1,396,980,259,000
Have a file File.txt and if any process is trying to copy the file from /source/ to /destination/ Is there a way to identify if the file File.txt (or any other file ) available in /destination/ is completely copied, or the process is still going on. I tried lsof but its not working **Error** : lsof: WARNING: can't stat() nfs file system Any suggestions
Try to use rsync with the --progress option next time. Here is a little snippet that will print the progress status every second until it reaches 100%; just replace your source and destination file paths. (Sizes are taken in kilobytes with du -sk so the two numbers are always in the same unit.)
sourcesize="$(du -sk <path_to_your_source_file> | awk '{print $1}')"
destdir=<path_to_your_dest_file>
while true; do
    copyprogress="$(du -sk "$destdir" | awk '{print $1}')"
    echo "scale=3 ; $copyprogress / $sourcesize * 100" | bc | xargs echo -n
    echo "% completed"
    sleep 1
done
You can also check whether a cp command is still running by using pidof, and use ps to check the full command that is being executed.
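A minimal sketch of that pidof/ps check (assuming Linux's pidof and a procps-style ps):

```shell
# is any cp still running, and what exactly is it copying?
if pids=$(pidof cp); then
    ps -o pid,etime,args -p "$pids"   # elapsed time and full command line
else
    echo "no cp process running"
fi
```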
is there any way to identify if a file is still being copied from one directory to another
1,396,980,259,000
I am running Jenkins with lots of jobs which require lots of open files so I have increased file-max limit to 3 million. It still hits 3 million sometimes so I am wondering how far I can go. Can I set /proc/sys/fs/file-max to 10 million? How do I know what the hard limit of file-max is? I am running CentOS 7.7 (3.10.X kernel)
The kernel itself doesn’t impose any limitation on the value of file-max, beyond that imposed by its type (unsigned long, so 4,294,967,295 on typical 32-bit systems, and 18,446,744,073,709,551,615 on typical 64-bit systems). However each open file consumes around one kilobyte of memory, so you’ll be limited by the amount of physical RAM installed; ten million open files would consume approximately ten gigabytes of memory. The kernel initialises file-max to 10% of the usable memory at boot, which means the “hard” limit on any given system is approximately ten times the default value.
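In practice you can inspect and raise the limit like this (the sysctl line needs root and is shown for reference only):

```shell
# current ceiling and current usage:
cat /proc/sys/fs/file-max    # system-wide maximum number of file handles
cat /proc/sys/fs/file-nr     # allocated, unused-but-allocated, maximum

# raising it (as root) -- shown for reference:
#   sysctl -w fs.file-max=10000000
# to persist across reboots, add to /etc/sysctl.conf (or a file in /etc/sysctl.d/):
#   fs.file-max = 10000000
```

Comparing the first field of file-nr against file-max tells you how close the system actually gets to the ceiling.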
How to find max limit of /proc/sys/fs/file-max
1,396,980,259,000
I have a computer periodically syncing folders of content with another computer using Resilio Sync. The receiving computer has a sorting process on an hourly cron, which analyses the folders and their contents before moving & cataloging them in a separate filesystem. My issue is that the hourly cron will run and process folders without their full contents if the sync is incomplete. The hourly cron process requires the entire contents of the folder to process them correctly. Is there a simple way of checking that the contents of the receiving sync folder aren't open? I've looked into lsof, but perhaps there's an easier way? I could switch from the Resilio Sync process to rsync if that would help.
I suspect there are many ways to do this. The first that came to me is to include a checksum. On the sending server you can run: tar -cf - FILES | md5sum > my_sum.md5 Here tar creates (c) an archive and writes it (f -) to stdout, from FILES, which can be a glob, a directory, or a space-delimited list of files; that stream is piped to md5sum and the hash is saved in my_sum.md5. On the receiving side you can add a check to your cron job to first try to find my_sum.md5 (if it's not there then obviously we haven't copied everything); if it is there, check that an identical generation of the checksum on the receiving side produces a matching hash.
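A sketch of that receiving-side check (filenames are placeholders; it assumes the tar invocation is byte-identical on both ends, i.e. the same file list, order, and metadata):

```shell
# receiving side (run in the sync directory); FILES must name the same set,
# in the same order, as on the sender so the archives are byte-identical
if [ -f my_sum.md5 ]; then
    want=$(awk '{print $1}' my_sum.md5)
    have=$(tar -cf - FILES | md5sum | awk '{print $1}')
    if [ "$want" = "$have" ]; then
        echo "sync complete - safe to process"
    else
        echo "sync incomplete (or contents differ)"
    fi
else
    echo "my_sum.md5 not received yet"
fi
```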
How do I confirm a file sync has completed before executing a command?
1,524,503,234,000
I set my TCP server to localhost 127.0.0.1, but instead of seeing 127.0.0.1 in the host portion of the lsof output, I see an asterisk. After running lsof -i... my_process 66666 root 5u IPv4 0xffff...c0 0t0 TCP *:5001 (LISTEN) What does this asterisk mean? Is my socket bound to localhost, does it not have an address or something else?
That means that limiting your server to localhost was not successful (maybe you have to restart it?): the asterisk is the wildcard address, so the socket is listening on all interfaces and accepting all destination IP addresses.
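One way to double-check after restarting the server, using the port from your output (-n/-P keep the output numeric, -sTCP:LISTEN shows listeners only):

```shell
# verify after restarting the server:
lsof -nP -iTCP:5001 -sTCP:LISTEN || echo "nothing listening on TCP 5001"
# '*:5001'          -> bound to the wildcard address (all interfaces)
# '127.0.0.1:5001'  -> bound to loopback only
```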
What does the asterisk (*) mean in lsof output?
1,524,503,234,000
Looking for a way to pass the second column of output to geoiplookup, ideally on the same line, but not necessarily. This is the best I can muster. It's usable, but the geoiplookup results are unfortunately below the list of connections. I wanted more integrated results. If anyone can suggest improvements, they would be welcome. ns () { echo "" while sleep 1; do lsof -Pi | grep ESTABLISHED | sed "s/[^:]*$//g" | sed "s/^[^:]*//g" | sed "s/://g" | sed "s/->/\t/g" | grep -v localdomain$ | tee >(for x in `grep -o "\S*$"`; do geoiplookup $x | sed "s/GeoIP.*: /\t/g"; done) done } The results currently look something like this: <Port> <URL or IP if no reverse available #1> <Port> <URL or IP if no reverse available #2> <geoiplookup trimmed result #1> <geoiplookup trimmed result #2>
In terms of improvements, it'd depend on what data from lsof -Pi you're interested in. Here's a "one liner" (not so much..) that prints command, PID, user, node, IP, port & geoip:
echo -e "COMMAND\tPID\tUSER\tNODE\tIP\tPORT\tGEO"; IFS=$'\n'; for line in $(lsof -Pi | grep ESTABLISHED | grep -E '(([0-9]{1,3})\.){3}[0-9]{1,3}'); do cmdpidusr=$(echo $line | awk '{print $1,$2,$3}'); node=$(echo $line | awk '{print $8}'); ipadd=$(echo $line | awk '{print $9}' | cut -d ">" -f 2 | cut -d ":" -f 1); port=$(echo $line | awk '{print $9}' | cut -d ">" -f 2 | cut -d ":" -f 2); geoip=$(geoiplookup $ipadd | cut -d : -f 2); echo -e "$cmdpidusr\t$node\t$ipadd\t$port\t$geoip"; done | column -t; unset IFS
e.g. output:
┌─[root@Fedora]─[~]─[10:22 am]
└─[$]› echo -e "COMMAND\tPID\tUSER\tNODE\tIP\tPORT\tGEO"; IFS=$'\n'; for line in $(lsof -Pi | grep ESTABLISHED | grep -E '(([0-9]{1,3})\.){3}[0-9]{1,3}'); do cmdpidusr=$(echo $line | awk '{print $1,$2,$3}'); node=$(echo $line | awk '{print $8}'); ipadd=$(echo $line | awk '{print $9}' | cut -d ">" -f 2 | cut -d ":" -f 1); port=$(echo $line | awk '{print $9}' | cut -d ">" -f 2 | cut -d ":" -f 2); geoip=$(geoiplookup $ipadd | cut -d : -f 2); echo -e "$cmdpidusr\t$node\t$ipadd\t$port\t$geoip"; done | column -t; unset IFS
COMMAND   PID    USER   NODE  IP         PORT   GEO
synergys  16444  user1  TCP   172.1.1.1  59116  IP Address not found
ssh       21557  root   TCP   1.2.3.4    2291   GB, United Kingdom
if you wanted it as a function in your .bashrc or something, you can make it look a bit nicer:
iplookup() {
    echo -e "COMMAND\tPID\tUSER\tNODE\tIP\tPORT\tGEO"
    IFS=$'\n'  # set field separator to new line
    for line in $(lsof -Pi | grep ESTABLISHED | grep -E '(([0-9]{1,3})\.){3}[0-9]{1,3}'); do
        # this is just a regex grep to pull lines with valid IPv4 addresses only
        cmdpidusr=$(echo $line | awk '{print $1,$2,$3}')
        node=$(echo $line | awk '{print $8}')
        ipadd=$(echo $line | awk '{print $9}' | cut -d ">" -f 2 | cut -d ":" -f 1)
        port=$(echo $line | awk '{print $9}' | cut -d ">" -f 2 | cut -d ":" -f 2)
        geoip=$(geoiplookup $ipadd | cut -d : -f 2)
        echo -e "$cmdpidusr\t$node\t$ipadd\t$port\t$geoip"
    done | column -t  # organise the columns
    unset IFS         # set field separator to default
}
note that this wouldn't work for IPv6 lookups, because you'd need to use geoiplookup6 for that. You could add a conditional that checks the IP type then runs geoiplookup/6 depending on the output. e.g:
...
type=$(echo $line | awk '{print $5}')
if [ "$type" = "IPv4" ]; then
    geoip=$(geoiplookup $ipadd | cut -d : -f 2)
else
    geoip=$(geoiplookup6 $ipadd | cut -d : -f 2)
fi
...
but to use that with the above code, you'd need to either remove the IPv4 regex, or add to it to include IPv6
Trying to pass some output of lsof -Pi to geoiplookup
1,524,503,234,000
When I use ss (socket statistics) to show the usages of port 5432 I get: $ sudo ss -ln | grep -E 'State|5432' Netid State Recv-Q Send-Q Local Address:Port Peer Address:PortProcess u_str LISTEN 0 244 /var/run/postgresql/.s.PGSQL.5432 54481 * 0 tcp LISTEN 0 244 127.0.0.1:5432 0.0.0.0:* When using lsof (list of open files) instead I get no result: $ sudo lsof -i tcp:5432 Why is that? Related to: Why do nmap, ss (netscan?) and lsof give different results? Difference between lsof -i : & socket statistics ss -lp | grep ? Edit with answers from comments: sudo ss -lnp does not show the pid of the process(es) that have that listening socket the 127.0.0.1:5432 0.0.0.0:* on the last line was a copy-paste error, sorry about that, I have removed it I am running those commands in a WSL terminal, Postgres is not running anywhere Edit with new findings: I have found out this is happening only when Docker Desktop is running (even though there is no container running): ss doesn't output anything once I quit Docker Desktop. It looks like this might be an issue somehow related with Docker Desktop: I have reported it in this GitHub issue.
It turned out it was, after all, a problem on my machine. I had another instance of WSL running side by side that I had forgotten about, and that one had a Postgres server running and listening on that port. I wrongly assumed they were running in isolation from each other, while in fact they are not. Uninstalling Postgres from that WSL instance fixed my issue.
Why ss show a port is in use but lsof doesn't?
1,524,503,234,000
I am attempting to find the best way to determine when the second file (of a matching criterion) is created. The context is audit log rotation. Given a directory where audit logs are created every hour, I need to execute a parsing awk script upon the audit log that has been closed off. What I mean by that is that every hour a new audit log is created and the old audit log is closed, containing up to an hour's worth of information. The new log file is to be left alone until it too is closed and a new one created. I could create a bash shell script that uses find /home/tomcat/openam/openam/log -name amAuthentication.* -mmin -60 and have it executed every 10 minutes via a crontab entry, but I'm not sure how to write the rest of it. I suppose the script could start off by saving the contents of that find to a temp file, and upon every crontab execution compare the new find command's contents; when it changes, use the temp file as the input to the awk script. When the awk script is complete, save the contents of the new find to the file. A colleague has suggested I use the old-school 'sticky bit' to flag processed files. Perhaps this is the way to proceed.
For closure, this is the start of the script I am going to use. It needs more work to make it robust and do logging, but you should get the general idea.
#!/bin/sh
# This script should be executed from a crontab that runs every 5 or 10 minutes.
# The find below looks for all log files that do NOT have the sticky bit set.
# You can see the sticky bit as a "T" in "ls -l" output.
for x in $(find /home/tomcat/openam/openam/log/ -name "log-*" -type f ! -perm -1000 -print)
do
    # Look for open files. For safety, log that we are skipping them.
    if lsof | grep "$x" > /dev/null; then
        # create a log entry on why I'm not processing this file...
        echo "$x is open"
    else
        # $x is closed and not sticky
        # run the awk scripts to process the file!
        echo "$x processing with awk..."
        # Set the sticky bit to indicate we have processed this file
        chmod +t "$x"
    fi
done
How to determine the newly closed file within a continuous audit log rotation?
1,524,503,234,000
When we run lsof to capture deleted files, we see the following (example):
lsof +L1
java 193699 yarn 1760r REG 8,16 719 0 93696130 /grid/sdb/hadoop/hdfs/data/current/PLP-428352611-43.21.3.46-1502127526112/current/path/nbt/dir37/blk_1186014689_112276769.meta (deleted)
Why is the PID still holding these files open even though they have already been deleted?
lsof +L1 | awk '{print $2}' | sort | uniq
193699
Is it possible to avoid this scenario?
Too long to put in a comment, so adding as an answer: That's a Java application keeping those files open, so yes, this scenario can be avoided by using a proper programming style and closing streams deterministically, for example with try-with-resources around the ObjectOutputStream:
// create a Serializable list
List<String> lNucleotide = Arrays.asList(
        "adenine", "cytosine", "guanine", "thymine", "sylicine");

// serialize the list; try-with-resources closes the stream (and the
// underlying file descriptor) even if writeObject() throws
try (ObjectOutputStream output = new ObjectOutputStream(
        new BufferedOutputStream(new FileOutputStream("lNucleotide.ser")))) {
    output.writeObject(lNucleotide);
} catch (IOException ex) {
    logger.log(Level.SEVERE, "Cannot create Silicon life form.", ex);
}
By closing the file at the application level you will avoid this problem. So this is not a result of Unix or Linux doing anything wrong, but inherent to your application.
is it possible to avoid open files? [closed]
1,524,503,234,000
My server is compromised, and when I try to get the open files of the malicious user, it says that the user does not have a UID, so it cannot find anything. I can see processes that are running as that user, but I cannot get the files they have opened.
it says that the user does not have a UID I can certainly agree with that. A user as defined by "malicious user running things on a server" does not have a UID, but that is because that user is a person who may (perhaps) run things as different UIDs on your machine (from now on I'll stick to the term "attacker"). Therefore the correct path is to try to figure out what path the attacker took to gain control of a process on your machine that could perform fork() and exec(). That is a pretty good starting point because, at the end of the day, a successful attack starts by gaining control (possibly in a very convoluted way) of a process that can do that. How the attacker manages to get to that point is what varies between attacks. That said, the canonical answer on what to do with a compromised server is to nuke it from orbit. It is not your computer anymore, and you cannot be sure how much access the attacker achieved. If the attacker achieved root privileges, then you may as well be connecting to a VM running on what once was your server. Or you could even be dealing with a rootkit. (Note: "nuke from orbit" means a "fresh install" to most people) Back to the actual question: why does the username not have a UID? Or better, the question implied by its text: how is a process running without a UID? The idea that a process is running without a UID is absurd. The kernel structure that maintains the existence of a process (actually the KSE) contains a UID field which must be populated (even if it is populated with rubbish in case of, say, a kernel bug). Therefore every process always has a UID. What you are most likely dealing with is that the UID is not listed in /etc/passwd, which, although strange for a process, is no different from doing touch leet; chown 1337 leet (assuming that you do not have a user leet with that UID, that is). Pretty much all standard *nix tools work on UIDs as they would for usernames, i.e.
lsof -u username is equivalent to lsof -u `id -u username` And find . -user username is also equivalent to find . -user `id -u username` Therefore back to the first quote (emphasis mine): it says that the user does not have a UID Whatever that "it" is, it is not a standard *nix tool. Or you're really more screwed than you believe, and are running inside some rather strange environment created by the attacker.
Why does the username not have a UID?
1,524,503,234,000
If I try to umount a mounted disk, it says I can't because it is used by another process, which is strange because I have nothing accessing it that I can find. So I tried using lsof to find what is using it, and the result is as below: I can't because of bash. Well, that's the most generic info ever. How can I find what specifically is using it?
The problem is you are currently "in" the mounted drive. It says that in your screenshot here: [root@localhost vldsk_damo] If you issue pwd it will (at a guess) say: /mnt/vldsk_damo Best fix: type cd (to send you to your $HOME) or cd /, then try umount ...
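Putting that together (the mount point is the guess above; fuser, from psmisc, will list the PIDs actually holding the mount if umount still refuses):

```shell
cd /                                  # step out of the mounted tree first
umount /mnt/vldsk_damo 2>/dev/null \
  || fuser -vm /mnt/vldsk_damo 2>/dev/null \
  || echo "still busy (check fuser/lsof output), or not mounted here"
```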
A mounted device is busy because bash is using the volume
1,524,503,234,000
Using the lsof command, I would like to print out TCP connections in the ESTABLISHED state while ignoring the ones with localhost. I tried:
lsof -itcp@^127.0.0.1 -stcp:established
lsof -itcp@(^127.0.0.1) -stcp:established
lsof -itcp -i ^@127.0.0.1 -stcp:established
and other similar variants, but I always get a syntax error response. What is the correct syntax?
It doesn't look like you can negate network addresses in lsof. If on Linux, you could use lsfd from util-linux instead: lsfd -Q '(type =~ "^TCP") and (name =~ "state=established") and (name !~ "addr=(\[::1\]|127)")' Or as mentioned by @A.B ss from iproute2: ss -tp state established not dst 127.0/8 not dst '[::1]'
how can I list, with the lsof command, TCP ESTABLISHED connections ignoring localhost?
1,524,503,234,000
Using the -F option for lsof, I can specify which fields are printed:
lsof -w -F pcfn
However, the output is split over multiple lines, i.e. one line per field:
p23022
csleep
fcwd
n/home/testuser
frtd
n/
ftxt
n/usr/bin/sleep
fmem
n/usr/lib/locale/locale-archive
fmem
n/usr/lib/x86_64-linux-gnu/libc-2.28.so
fmem
n/usr/lib/x86_64-linux-gnu/ld-2.28.so
f0
n/dev/pts/20
f1
n/dev/pts/20
f2
n/dev/pts/20
How can I get custom fields printed on one line?
The lsof -F output is meant to be post-processable. AFAICT, lsof renders backslashes and control characters including TAB and newline¹ at least when they're found in one of the fields with some \x notation (\\, \t, \n for backslash, TAB and newline respectively here)², so it should be possible to format that output using TAB-separated values for each of the opened files and that to still be post-processable: LC_ALL=C lsof -w -F pcfn | LC_ALL=C awk -v OFS='\t' ' {t = substr($0, 1, 1); f[t] = substr($0, 2)} t == "n" {print f["p"], f["c"], f["f"], f["n"]}' On your sample, that gives: 23022 sleep cwd /home/testuser 23022 sleep rtd / 23022 sleep txt /usr/bin/sleep 23022 sleep mem /usr/lib/locale/locale-archive 23022 sleep mem /usr/lib/x86_64-linux-gnu/libc-2.28.so 23022 sleep mem /usr/lib/x86_64-linux-gnu/ld-2.28.so 23022 sleep 0 /dev/pts/20 23022 sleep 1 /dev/pts/20 23022 sleep 2 /dev/pts/20 And on lsof -w -F pcfn -a -d3 -p "$!" after: perl -e '$0 = "a\nb\t"; sleep 999' 3> $'x\ny z\tw' & That gives: 7951 a\nb\t 3 /home/stephane/x\ny z\tw To get the actual file names from that output you'd still need to decode those \x sequences. Note that with that lsof command, you get records for every thread of every process, but you don't include the thread id in your list of fields, so you won't know which thread of the process has the file opened, maybe not a problem as it's rare for threads of a same process to have different opened files, but that still means you'll get some duplication in there which you could get rid of by piping to LC_ALL=C sort -u. You can also disable thread reporting with lsof 4.90 or newer with -Ki. You may also want to include the TYPE field to know how to interpret the NAME field. Beware lsof appends  (deleted) when the opened file has been deleted, and AFAICT, there's no foolproof way to disambiguate that from a file whose name ends in  (deleted) ¹ That doesn't necessarily mean that lsof can safely cope with filenames that contain newline characters. 
For instance, on Linux, it still uses the old /proc/net/unix API instead of the netlink one to retrieve information about Unix/Abstract domain sockets, and that one falls apart completely if socket file paths contain newline characters. One can easily trick lsof into thinking a process has some socket opened instead of another by binding to sockets with forged file paths. ² it leaves non-control characters as-is though, and the encoding of some characters (such as α encoded as 0xa3 0x5c in BIG5) in some locales does include the 0x5c byte which is the encoding of backslash as well. So here, we're forcing the locale to C to make sure all bytes above 0x7f are rendered as \xHH to avoid surprises when post-processing.
lsof: print custom fields on one line
1,524,503,234,000
I have the following output from lsof -i:portnumber
[ztao@MongoDB ~]$ lsof -i:6379
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
redis-ser 5341 ztao 4u IPv6 23457 0t0 TCP *:6379 (LISTEN)
redis-ser 5341 ztao 5u IPv4 23459 0t0 TCP *:6379 (LISTEN)
redis-ser 5341 ztao 6u IPv4 23533 0t0 TCP localhost:6379->localhost:6633 (ESTABLISHED)
redis-ser 5341 ztao 7u IPv4 23535 0t0 TCP localhost:6379->localhost:6634 (ESTABLISHED)
redis-ser 5341 ztao 8u IPv4 23538 0t0 TCP localhost:6379->localhost:6635 (ESTABLISHED)
redis-ser 5341 ztao 9u IPv4 23540 0t0 TCP localhost:6379->localhost:6636 (ESTABLISHED)
redis-ser 5341 ztao 10u IPv4 23839 0t0 TCP localhost:6379->localhost:6747 (ESTABLISHED)
redis-ser 5341 ztao 11u IPv4 23842 0t0 TCP localhost:6379->localhost:6748 (ESTABLISHED)
newsProvi 5349 ztao 6u IPv4 23530 0t0 TCP localhost:6633->localhost:6379 (ESTABLISHED)
newsProvi 5349 ztao 7u IPv4 23532 0t0 TCP localhost:6634->localhost:6379 (ESTABLISHED)
newsProvi 5349 ztao 8u IPv4 23536 0t0 TCP localhost:6635->localhost:6379 (ESTABLISHED)
newsProvi 5349 ztao 9u IPv4 23539 0t0 TCP localhost:6636->localhost:6379 (ESTABLISHED)
newsDistr 5456 ztao 12u IPv4 23838 0t0 TCP localhost:6747->localhost:6379 (ESTABLISHED)
newsDistr 5456 ztao 13u IPv4 23841 0t0 TCP localhost:6748->localhost:6379 (ESTABLISHED)
I am having trouble understanding what localhost:6379->localhost:6633 (ESTABLISHED) means. I tried to search but could not find an answer. This must be some really basic knowledge, but I am new to this. Any help would be appreciated.
localhost:6379->localhost:6633 (ESTABLISHED) means that there’s an established connection between localhost’s ports 6379 and 6633. (“Established” is a state in the TCP/IP state machine; other protocols have similar states.) The arrow doesn’t represent a direction in terms of communications; it reflects the ports’ ownership. The left side of the arrow is the port belonging to the listed process (Redis), the right side of the arrow is the port belonging to the remote end of the connection. Since both sides of the connection are local, you can see the other end of the connection: newsProvi 5349 ztao 6u IPv4 23530 0t0 TCP localhost:6633->localhost:6379 (ESTABLISHED)
How to interpret the port mapping strings under name column of lsof results?
1,524,503,234,000
I am working with CentOS 7 OS hosting a set of docker containers. By using a web browser I can reach a service on port 80 and I get back a response. A bit of local knowledge helps me understand that the response comes from one of the docker containers. However, I have a big problem: I can't seem to find a way for the OS to indicate that port 80 is open. Here is what I have tried (all with root user): netstat -tulnp | grep 80 lists nothing listening on port 80 ss -nutlp | grep 80 lists nothing listening on port 80 lsof -i -P | grep 80 also lists nothing listening on port 80 wget 127.0.0.1 successfully fetches index.html Interrogating Docker directly through docker ps is not really the answer I am looking for, because we must be able to interrogate the OS and see what process is responsible for treating requests to port 80. It's also not helpful, because docker ps returns several containers that have the following entry in the PORTS column: PORTS 80/tcp 8080/tcp 80/tcp Again, I don't want to go to docker for answers, because there must be a way to interrogate the OS and identify the process responsible for handling port 80. My only guess is that docker installs some sort of low-level driver that intercepts such network requests. Any suggestions on how to get CentOS to hand out this information, accompanied with command line commands, would be greatly appreciated!
Are you certain the docker host is listening on port 80? It might be redirected from port 80 to whatever port it is listening on using the built-in firewall. If you are running IPTABLES, you could check this by using: iptables -L -t nat You would then see a chain named DOCKER which will tell you what redirects are in place, similar to this: Chain DOCKER (2 references) target prot opt source destination RETURN all -- anywhere anywhere RETURN all -- anywhere anywhere DNAT tcp -- anywhere anywhere tcp dpt:http-alt to:172.17.0.3:80 DNAT tcp -- anywhere anywhere tcp dpt:4433 to:172.17.0.3:443 DNAT tcp -- anywhere anywhere tcp dpt:1688 to:172.17.0.4:1688
How can I get CentOS to correctly list open ports?
1,524,503,234,000
A strange situation. I started telnet 0 8081 and lsof -i (run under root) doesn't list this connection, but netstat -n does. Why can this be?
I just simulated your scenario and was able to get 8081 in both netstat and lsof. lsof -i displays 8081 as tproxy and so your grep might not be finding it. Try this with -P which shows the numerical ports: lsof -i -P | grep 8081
Why can `lsof -i` not show an open connection which `netstat -n` lists?
1,524,503,234,000
I have several problems:
- some process is attempting to send data and the firewall is rejecting it
- at the rate it is sending it out, firewall logs flood the system (may need to rate-limit the logging)
- lsof -i :port does not list the process, but there has to be something causing the packets to keep being sent
- netstat -patune lists it in the SYN_SENT state, not listening
The port that it is using does not make sense to me, so that is one oddity; the other being how traffic continues to be sent.
I couldn't resolve what was trying to connect, but what killed whatever process was trying to connect was simply ifconfig interface_name down.
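For what it's worth, sockets stuck in SYN_SENT do have an owning process, and ss from iproute2 can usually name it (run as root to get the process column); the iptables line is a sketch for the log-flooding side:

```shell
# SYN_SENT sockets still have an owning process; ss (iproute2) can name it:
ss -tnp state syn-sent 2>/dev/null || echo "ss not available"

# rate-limiting the firewall's own logging -- shown for reference:
#   iptables -A OUTPUT ... -m limit --limit 5/min -j LOG
```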
what process is listening on a given port
1,524,503,234,000
When I use lsof as a regular user, I get the following warning:
lsof: WARNING: can't stat() tmpfs file system /home/testuser/.cache
testuser is another user on my system, and my own user has no access to the tmpfs filesystem mounted at /home/testuser/.cache. I suspect lsof found in /etc/fstab (or in /proc/mounts) that this tmpfs exists, tries to search it, and fails because it has no permission on the other user's home:
$ grep /home/testuser/.cache /proc/mounts
tmpfs /home/testuser/.cache tmpfs rw,nosuid,nodev,noexec,noatime,size=4194304k,mode=700,uid=1001,gid=1001 0 0
Anyway, how can I suppress these warnings, or tell lsof not to search paths of other users, or something else that would get rid of this warning?
You can disable warnings with -w: lsof -w
lsof: WARNING: can't stat() tmpfs file system
1,524,503,234,000
Is there any way to combine the -i and -p options of lsof in a logical conjunction? It seems to me that the default behavior is to show files which satisfy one or the other condition, which I think is a bit odd.
Use the -a option as shown in one of the examples in the lsof man page: To list all open IPv4 network files in use by the process whose PID is 1234, use: lsof -i 4 -a -p 1234 The “Options” section explains: Normally list options that are specifically stated are ORed - i.e., specifying the -i option without an address and the -ufoo option produces a listing of all network files OR files belonging to processes owned by user ``foo''. The exceptions are: […] The -a option may be used to AND the selections. For example, specifying -a, -U, and -ufoo produces a listing of only UNIX socket files that belong to processes owned by user ``foo''. Caution: the -a option causes all list selection options to be ANDed; it can't be used to cause ANDing of selected pairs of selection options by placing it between them, even though its placement there is acceptable. Wherever -a is placed, it causes the ANDing of all selection options. Items of the same selection set - command names, file descriptors, network addresses, process identifiers, user identifiers, zone names, security contexts - are joined in a single ORed set and applied before the result participates in ANDing. Thus, for example, specifying [email protected], [email protected], -a, and -ufff,ggg will select the listing of files that belong to either login ``fff'' OR ``ggg'' AND have network connections to either host aaa.bbb OR ccc.ddd. […] -a causes list selection options to be ANDed, as described above.
Combine multiple lsof options
1,524,503,234,000
We can list only files opened by a specific PID as lsof -p 1000 lsof -p 1000 | wc -l How can we list/count the files opened by a specific program/COMMAND (e.g., java)? Even better if we can group the number of open files for each program. I want to inspect which programs have high numbers of open files. I want something like lsof -c "java" # -c is an imaginary argument similar to -p for process I use Ubuntu 20.04.
lsof does in fact support selecting by command name with -c, so the flag imagined in the question exists: lsof -c java lists files opened by commands whose name begins with "java", and lsof -c java | wc -l counts them. For the grouping part, you could achieve what you want with something like this, maybe replacing the head with a 'grep java': lsof | awk '{print $1}' | sort | uniq -c | sort -rn | head lsof: basically I'm listing all the opened files awk '{print $1}': printing only the first column, which is the process name sort: you need to sort before applying uniq, otherwise it will split the count, since java may appear several times depending on the order lsof prints. uniq -c: group by process name and count lines The last two are for readability. The problem with this is that all the java instances will be combined; I suppose you could apply the same logic for PIDs and then filter your java instances and child processes by PID. Hope it helps.
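To see how that pipeline behaves without running lsof itself, here it is applied to a few canned lsof-style lines (the sample rows are invented for illustration):

```shell
# A few fake lines shaped like lsof output: column 1 is the command name.
sample='java    1000 user 1u REG 8,1 0 100 /tmp/a
java    1000 user 2u REG 8,1 0 101 /tmp/b
nginx   2000 user 1u REG 8,1 0 102 /tmp/c
java    1001 user 1u REG 8,1 0 103 /tmp/d'

# Count open files per command name, highest count first.
printf '%s\n' "$sample" | awk '{print $1}' | sort | uniq -c | sort -rn
```

Against real output you would simply drop the printf and start the pipeline at lsof.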
How can we count/list all files opened by a specific program/COMMAND? [duplicate]
1,524,503,234,000
I need to test whether any process is listening on a specific socket; fuser does not exist on the target system but lsof does. I run this command: lsof -tU /path/to/socket It lists the PID of the listener, which is great but lsof exits with a status of 1. I change the command to see what's wrong: lsof -tUV /path/to/socket It again lists the PID but also adds this: lsof: no file use located: /path/to/socket Is there any way to suppress this extra check of 'file use' so that it exits with 0 when it does find listeners on the socket? I've looked through the man page but can't find what I'm after. I'd like to use it sensibly like this: sock=/path/to/socket if [[ ! -S $sock ]] || ! lsof -tU $sock &>/dev/null; then # task to activate socket listener fi
If you're on a system with a recent version of ss (like that from iproute2-ss190107 on Debian 10), you can use ss instead of lsof: sock=/path/to/socket ino=$(stat -c 'ino:%i' "$sock") && ss -elx | grep -w "$ino" sock=/path/to/socket if ino=$(stat -c 'ino:%i' "$sock") && ss -elx | grep -qw "$ino" then # yeah, somebody's listening on $sock fi There are two important things to notice here: The real address of a Unix socket is the device,inode number tuple, not the pathname. If a socket file is moved, whichever server was listening on it will be accessible via the new path. If a socket file is removed, another server can listen on the same path (that's why the directory permissions of a Unix socket are important, security-wise). lsof isn't able to cope with that, and may return incomplete / incorrect data. ss is itself buggy: the unix_diag netlink interface it uses returns the device number in the format used internally by the Linux kernel, but ss assumes that it's in the format used by system-call interfaces like stat(2), so the dev: entry in the ss -elx output above will be mangled. However, de-mangling it may be unwise, because one day they may just decide to fix it. So, the only course of action is to treat dev: as pure garbage, and live with the risk of having two socket files with the same inode, but on different filesystems, which the test above is not able to handle. If all of the above doesn't matter for you, you can do the same lousy thing lsof does (matching on the path the socket was first bound to), with: sock=/path/to/socket ss -elx | grep " $sock " which should also work on older systems like Centos 7. At least this does have the advantage of only listing the listening sockets ;-)
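The stat step above can be sanity-checked on any file, not just a socket; this sketch uses a throwaway temp file just to show the ino:&lt;number&gt; string that gets grepped for in the ss -elx output:

```shell
# Print a file's inode in the same "ino:<number>" form used above.
f=$(mktemp)
stat -c 'ino:%i' "$f"
rm -f "$f"
```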
How do I get lsof to stop complaining when testing for a socket?
1,524,503,234,000
On Linux we can run ss -x or lsof -U +E and see what type a unix socket has. But on macOS there is no ss, and lsof -U only shows TYPE as unix; I would like some utility that tells me exactly which so_type a unix socket has.
macOS does support the netstat command. netstat has long been deprecated on Linux because of the kernel interface it used; ss is a syntactically similar replacement on Linux.
How can I find out what so_type an existing unix socket has in macOS?
1,524,503,234,000
from lsof we can see the following output lsof /var | grep delete rsyslogd 9664 root 4w REG 253,2 25589554694 67267903 /var/log/messages-20210513 (deleted) rsyslogd 9664 root 7w REG 253,2 9865832185 67294059 /var/log/secure-20210619 (deleted) libvirtd 9666 root 21r REG 253,2 10406312 134328488 /var/lib/sss/mc/initgroups (deleted) qmgr 10241 postfix 8r REG 253,2 10406312 134328488 /var/lib/sss/mc/initgroups (deleted) gdm-sessi 13304 root 8r REG 253,2 10406312 134328488 /var/lib/sss/mc/initgroups (deleted) <----------------------- dbus-daem 14198 gdm 4r REG 253,2 10406312 134328488 /var/lib/sss/mc/initgroups (deleted) dbus-daem 14535 gdm 5r REG 253,2 10406312 134328488 /var/lib/sss/mc/initgroups (deleted) sssd 16743 root 15r REG 253,2 10406312 134328488 /var/lib/sss/mc/initgroups (deleted) sssd_be 16746 root 22r REG 253,2 10406312 134328488 /var/lib/sss/mc/initgroups (deleted) After investigation we saw that gdm-session was holding ~40G of deleted files on /var, so after we killed PID 13304 we brought /var down from 98% used to 59.4G used. Since we are dealing with a very important production server, we want to know how to avoid this behavior, where deleted-but-still-open files held by a process such as gdm-session can fill /var to 100% and crash the OS. /var is 100G. We'd appreciate any useful suggestions.
You may define a maximum file size limit for a process via prlimit prlimit --fsize=1G:2G -p 12345 sets soft and hard file size limits for the process with PID 12345 to 1 and 2 gigabytes (or gibi ... not entirely sure), respectively. This may be done even after the process started. Be aware that this will kill the process if the limit is reached. More in the info pages.
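If prlimit isn't available, a similar soft limit can be set with the shell's ulimit builtin for any processes that shell starts; a minimal sketch (the 2048-block value is arbitrary):

```shell
# Set a soft file-size limit in a subshell and read it back.
# Children of that shell inherit the limit and receive SIGXFSZ
# if they try to write a file beyond it.
bash -c 'ulimit -S -f 2048; ulimit -S -f'   # prints: 2048
```

Unlike prlimit, ulimit can't attach to an already-running process; it only affects the shell and its children.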
How to avoid deleted-but-open files from gdm-session filling up /var
1,524,503,234,000
Is there a way to list all the files being opened on the system? Either display all the currently opened files (with their full path, probably some lsof option), or, more interesting in my case, just list the pathnames as they are being opened (in the manner of tail -f), which I don't think lsof is able to do.
The program inotifywait is intended for performant file monitoring such as what you are looking for. Here is a proof of concept: $ inotifywait -qrm -e open -e access --format "%e %f" tmp/ OPEN hello OPEN,ISDIR ACCESS,ISDIR OPEN hello The output comes from running touch tmp/hello, followed by less tmp/h<TAB> (which tab-completes to less tmp/hello), followed by Enter to open the file with Less. When run system wide, you will probably want to exclude places such as /proc and /sys. You might also want to pipe it through grep -vE '^(OPEN|ACCESS),ISDIR$' to exclude directories. Finally, you should also have a look at the caveat given for recursive monitoring: Warning: If you use this option while watching the root directory of a large tree, it may take quite a while until all inotify watches are established, and events will not be received in this time. Also, since one inotify watch will be established per subdirectory, it is possible that the maximum amount of inotify watches per user will be reached. The default maximum is 8192; it can be increased by writing to /proc/sys/fs/inotify/max_user_watches.
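The directory-filtering grep suggested above, shown against a few canned inotifywait-style lines:

```shell
# Two file events and two pure directory events, as inotifywait prints them.
sample='OPEN hello
OPEN,ISDIR
ACCESS,ISDIR
ACCESS hello'

# Drop the directory-only events, keep the file events.
printf '%s\n' "$sample" | grep -vE '^(OPEN|ACCESS),ISDIR$'
```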
Is there an open filename monitor?
1,524,503,234,000
On one session I append some text to a file as below : while true;do echo some_text >> file1 ; done On another session from same dir I run : lsof file1 which returns no output. Any idea why ? Shouldn't lsof report the process writing to the file ? I'm on RHEL 7.2
It's just "bad luck" (or if you prefer, a very narrow time window): the redirection in your loop opens the file, appends, and closes it again on every iteration, so lsof almost never catches it open. You can slow the process with pv to throttle the writes and lengthen the time during which the file is open: echo "00000000000000000000000000000000000000000000000000000000000" | pv -L 2 >> opened.dat and in another terminal: lsof opened.dat COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME pv 30636 me 1w REG 253,1 60 24642407 opened.dat
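You can watch the same effect from a shell on Linux: while a descriptor is held open on the file, /proc shows it, which is exactly the state lsof can catch; between the loop's short-lived appends there is nothing to see. A sketch using a throwaway temp file:

```shell
f=$(mktemp)
# Hold fd 3 open on the file, as a long-running writer would.
exec 3>>"$f"
# While fd 3 is open, /proc lists the file among this shell's descriptors.
readlink /proc/self/fd/3
# Close it again; from here on lsof would show nothing for the file.
exec 3>&-
rm -f "$f"
```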
While appending text to a file, lsof doesn't show the file as being open / accessed
1,524,503,234,000
This is a problem I often encounter, this time with the output of lsof, but I am searching for a general solution for such problems: selecting a column. Here I try to get the TYPE column of the output of lsof COMMAND PID TID USER FD TYPE DEVICE SIZE/OFF NODE NAME lsof 16113 root cwd DIR 0,58 40960 7602184 /home/rubo77 lsof 16113 root rtd DIR 259,7 4096 2 / lsof 16113 root 4r FIFO 0,12 0t0 294763 pipe lsof 16113 root 7w FIFO 0,12 0t0 294764 pipe lsof 16648 root rtd DIR 259,7 4096 2 / riot-web 4399 4424 ruben 25u unix 0xffff9543f9ad7000 0t0 53133 type=STREAM thunderbi 4650 5835 ruben DEL REG 259,7 2752546 /usr/share/icons/hicolor/icon-theme.cache ... I tried lsof|perl -lane 'print $F[5]' But this sometimes gets the 6th column, sometimes the 5th. I get it with lsof|cut -c50-54|sort|uniq -c 375 CHR 610 DIR 211 FIFO ... But this seems a bit unclean because you have to fix the character position. The main problem is that in some lines the 5th column is empty. Is there a solution that really selects only the 6th column of an output? The best solution would be a tool where you just say show the Xth column, where the tool would analyse the first line and, by analysing the following lines, automatically detect whether each column is aligned right, centre or left, and then select the content of that column.
You can use the -F option to get output more suitable for parsing e.g. lsof -F t | awk '/^t/ {print substr($0,2)}' See the OUTPUT FOR OTHER PROGRAMS section of man lsof More generally, unless your fields are delimited unambiguously you may need to resort to searching for the character position in the header line e.g. awk -v field="TYPE" 'NR==1 {c = index($0,field)} {print substr($0,c,length(field))}'
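Here is the header-position awk from above run on a small canned table with a sometimes-empty column (the rows are invented):

```shell
sample='COMMAND   PID TYPE NAME
lsof    16113 DIR  /home
lsof    16113      pipe
riot     4399 unix sock'

# Find where TYPE starts in the header, then cut the same character
# range out of every row; rows with an empty column print blanks.
printf '%s\n' "$sample" |
  awk -v field="TYPE" 'NR==1 {c = index($0,field)} {print substr($0,c,length(field))}'
```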
Get a certain column of an output with content aligned right and some columns not always filled
1,524,503,234,000
I am exploring oracle processes and their lsof output, and I am wondering what /proc/<pid>/cmdline is. The same 145 cmdline open files are displayed for each Oracle process. So what exactly is this? Eg: #lsof -u oracle | grep cmdline oracle 2664 oracle 17r REG 0,3 0 9492 /proc/1/cmdline oracle 2664 oracle 18r REG 0,3 0 9495 /proc/2/cmdline . . . oracle 12586 oracle 160r REG 0,3 0 20528 /proc/2614/cmdline oracle 12586 oracle 161r REG 0,3 0 20529 /proc/2662/cmdline # lsof -u oracle | grep cmdline | awk '{print $2}' | sort | uniq -c 145 12297 145 2664 145 2666 145 2670 145 2672 145 2674 145 2676 145 2678 145 2680 145 2682 145 2684 145 2686 145 2688 145 2690 145 2692 145 2694 145 2696 145 2698 145 2700 145 2702 145 2775 145 2777 145 2795 145 2799 145 2900 145 6323
From the manpages for proc(5): /proc/[pid]/cmdline This read-only file holds the complete command line for the process, unless the process is a zombie. In the latter case, there is nothing in this file: that is, a read on this file will return 0 characters. The command-line arguments appear in this file as a set of strings separated by null bytes ('\0'), with a further null byte after the last string.
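Because the arguments are null-separated, cat shows them run together; translating the null bytes makes the file readable. For example, reading the current shell's own command line:

```shell
# Map the '\0' separators in /proc/<pid>/cmdline to spaces for display.
tr '\0' ' ' < /proc/self/cmdline
echo    # add a trailing newline for the terminal
```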
Why are all oracle processes reading /proc/<pid>/cmdline of multiple system processes?
1,524,503,234,000
I want maximize the number of folders encrypted with ecryptfs and decrypted at login with the module pam_ecryptfs.so. Which folders cannot possibly be encrypted before login in? I guess a lsof ran by pam_exec.so should give me the answer. Do you have a better strategy? Example of whitelisted folders: /boot, /etc/pam.d PS: I am using Ubuntu Desktop 16.10. Please don't mention full disk LUKS encryption (which is done).
Ecryptfs is designed to encrypt a user's home directory. Although it can be used otherwise, it wasn't designed for that and won't be easy to set up. Ecryptfs normally gets mounted at login time, so it can only encrypt the user's data. A user's data is normally under the user's home directory, that's what the home directory is for. System-wide files cannot be encrypted with ecryptfs unless you mount it before logging in. But if that's what you want, then there's no point in using ecryptfs: it would be harder to set up and would be slower than using LUKS/dmcrypt to encrypt the whole filesystem, and there would be no security benefit whatsoever. Using ecryptfs to protect a home directory has an advantage over whole disk encryption when you want the machine to boot unattended, but want the user's file to be protected until the user logs in. If the decryption happens before logging in then there's no point in using ecryptfs. Looking for dependencies of pam_exec.so is futile. What you need before logging in is a whole lot of system services. The pam_exec module is just one part of the login process, the rest of the login process also needs to be available, as do the login program, the logging subsystem, all the programs used to initialize devices, and many, many other system services. If you want all of these to be encrypted, then you need something that requests the decryption key very early in the boot process, and the way to do that is whole-disk encryption, with LUKS.
What are the folders that I cannot set as encrypted folders decrypted at login? [closed]
1,524,503,234,000
I found that this command line does not work: ssh i01n10 "/usr/sbin/lsof -p $(pgrep -nf a.out)" it shows the error lsof: no process ID specified However ssh i01n10 "$(pgrep -nf a.out)" correctly gives the PID Why is lsof not seeing the PID?
The lsof command can't see your PID because of shell expansion. That means $(pgrep -nf a.out) will be executed on your local server, not the remote one. To avoid this expansion, use single quotes instead of double quotes. Simple example: $ foo=local $ ssh debian8 "foo=remote; echo $foo" local $ ssh debian8 'foo=remote; echo $foo' remote You might also have a problem with your pgrep command. This is my simple test using -of instead of -nf flags (Use -af flag to see full command): // on remote server # sleep 200 & [1] 27228 # exit // on local host $ ssh debian8 'echo $(pgrep -nf sleep)' 27244 <-- not expected pid $ ssh debian8 'echo $(pgrep -of sleep)' 27228 <-- this one The $() actually launches a subshell; pgrep doesn't report itself as a match, but it does match that parent subshell, whose command line also contains the pattern. Hence the -n (newest) option will not give you the actual pid but the pid of the subshell, while -o (oldest) returns the real process.
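The quoting effect is easy to reproduce without any remote host by letting bash -c stand in for the remote shell:

```shell
foo=local
# Double quotes: $foo is expanded by the outer shell before bash -c runs.
bash -c "foo=remote; echo $foo"   # prints: local
# Single quotes: $foo reaches the inner shell untouched.
bash -c 'foo=remote; echo $foo'   # prints: remote
```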
Why is "lsof" not working when used in ssh? [duplicate]
1,524,503,234,000
pidof returns a space separated list of pids. lsof -p requires a comma separated one. This can be solved with sed via: lsof -p `pidof postgres| sed -r 's/ /,/g'` However, the extra pipe seems a bit much for a simple operation. Is there a simpler way?
A good alternative is pgrep -d ,: lsof -p $(pgrep -d , postgres) -d specifies the delimiter.
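If your pgrep doesn't have -d, the space-to-comma join can also be done with tr, a little shorter than the sed from the question. Shown on canned PIDs for illustration:

```shell
# pidof prints space-separated PIDs; tr converts them to the
# comma-separated list lsof -p expects.
echo "101 202 303" | tr ' ' ','   # prints: 101,202,303

# With real processes it would be:
#   lsof -p "$(pidof postgres | tr ' ' ',')"
```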
More succinct alternative to "lsof -p $(pidof postgres| sed -r 's/ /,/g')"
1,395,122,243,000
I need to write a bash script wherein I have to create a file which holds the details of IP addresses of hosts and their mapping to the corresponding MAC addresses. Is there any way I can find out the MAC address of a (remote) host when the IP address of the host is available?
If you just want to find out the MAC address of a given IP address you can use the command arp to look it up, once you've pinged the system 1 time.
Example
$ ping skinner -c 1 PING skinner.bubba.net (192.168.1.3) 56(84) bytes of data. 64 bytes from skinner.bubba.net (192.168.1.3): icmp_seq=1 ttl=64 time=3.09 ms --- skinner.bubba.net ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 3.097/3.097/3.097/0.000 ms
Now look up in the ARP table:
$ arp -a skinner.bubba.net (192.168.1.3) at 00:19:d1:e8:4c:95 [ether] on wlp3s0
fing
If you want to sweep the entire LAN for MAC addresses you can use the command line tool fing to do so. It's typically not installed so you'll have to go download it and install it manually.
$ sudo fing 10.9.8.0/24
Using ip
If you find you don't have the arp or fing commands available, you could use iproute2's command ip neigh to see your system's ARP table instead:
$ ip neigh 192.168.1.61 dev eth0 lladdr b8:27:eb:87:74:11 REACHABLE 192.168.1.70 dev eth0 lladdr 30:b5:c2:3d:6c:37 STALE 192.168.1.95 dev eth0 lladdr f0:18:98:1d:26:e2 REACHABLE 192.168.1.2 dev eth0 lladdr 14:cc:20:d4:56:2a STALE 192.168.1.10 dev eth0 lladdr 00:22:15:91:c1:2d REACHABLE
References
Equivalent of iwlist to see who is around?
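For the script in the question, the ip neigh output can be reduced to an IP-to-MAC map with a small awk filter. Shown on canned sample lines (the addresses are invented):

```shell
sample='192.168.1.61 dev eth0 lladdr b8:27:eb:87:74:11 REACHABLE
10.0.0.9 dev eth0 FAILED
192.168.1.2 dev eth0 lladdr 14:cc:20:d4:56:2a STALE'

# Keep only entries that actually carry a MAC; FAILED rows have none.
printf '%s\n' "$sample" | awk '$4 == "lladdr" {print $1, $5}'
```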
Resolving MAC Address from IP Address in Linux
1,395,122,243,000
Say I create a bridge interface on linux (br0) and add to it some interfaces (eth0, tap0, etc.). My understanding is that this interface acts like a virtual switch with all the interfaces/ports that I add to it. What is the meaning of assigning a MAC and an IP address to that interface? Does the interface act as an additional port on the switch/bridge which allows other ports to access the host machine? I have seen some pages talk about assigning an IP address to a bridge. Is the MAC assignment implied (or automatic)?
Because a bridge is an ethernet device it needs a MAC address. A linux bridge can originate things like spanning-tree protocol frames, and traffic like that needs an origin MAC address. A bridge does not require an ip address. There are many situations in which you won't have one. However, in many cases you may have one, such as: When the bridge is acting as the default gateway for a group of containers or virtual machines (or even physical interfaces). In this case it needs an ip address (because routing happens at the IP layer). When your "primary" NIC is a member of the bridge, such that the bridge is your connectivity to the outside world. In this case, rather than assigning an ip address to (for example) eth0, you would assign it to the bridge device instead. If the bridge is not required for ip routing, then it doesn't need an ip address. Examples of this situation include: When the bridge is being used to create a private network of devices with no external connectivity, or with external connectivity provided through a device other than the bridge.
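As an illustration of the second case above (the bridge is the host's connectivity to the outside world), a Debian-style /etc/network/interfaces sketch might look like this; the interface names and addresses are assumptions, not taken from any particular setup:

```
# eth0 carries no address of its own; it is enslaved to the bridge,
# and the host's IP address lives on br0.
auto br0
iface br0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge_ports eth0
```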
Why assign MAC and IP addresses on Bridge interface
1,395,122,243,000
After substantial research I still haven't found an answer to this query, how can I modify the command 'ifconfig' to show my computer's MAC address?
The command that you want on MacOS, FreeBSD, and TrueOS is: ifconfig -a link OpenBSD's ifconfig doesn't have this. Further reading ifconfig. Mac OS 10 Manual Pages. Apple corporation. 2008. ifconfig. FreeBSD Manual Pages. 2015. https://unix.stackexchange.com/a/319354/5132
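If you then want just the MAC addresses out of that output in a script, a regex match is usually enough. Shown here on a canned line rather than live ifconfig output (the interface and address are invented):

```shell
sample='en0: flags=8863<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
        ether a4:5e:60:e5:3b:12'

# Match six colon-separated hex pairs, i.e. a MAC address.
printf '%s\n' "$sample" | grep -oE '([0-9a-f]{2}:){5}[0-9a-f]{2}'
```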
How to view your computer's MAC address using 'ifconfig'?