| date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,472,759,778,000 |
Possible Duplicate:
Creating a UNIX account which only executes one command
I am trying to set up a user account that has as few rights as possible.
The user should be able to log in via SSH and then use the "su" command to get root access BUT nothing else.
Is this even possible? No basic commands like "cd", "ls" or "mkdir" should be available! The user should only be able to see one empty folder and then be able to use the "su" command to gain full access to the real OS (in case chroot is used to achieve this).
Any ideas how this could be achieved?
Thanks in advance!
|
I would suggest allowing connections only via public key. Then you can associate that public key with your own command by supplying it in ~/.ssh/authorized_keys like this:
command="/path/to/mycommand" ssh-rsa ...
Whenever the user logs into that account with that key, your command is executed instead of the usual shell. That command can, for example, be a shell script, or even just something like su -.
That should do what you asked for. But please think again about whether that is really what you want.
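If you go this route, it may be worth tightening the key further with the other authorized_keys options. A sketch (the command path and key material are placeholders; see the AUTHORIZED_KEYS FILE FORMAT section of sshd(8) for your version):

```
command="/usr/bin/su -",no-port-forwarding,no-agent-forwarding,no-X11-forwarding ssh-rsa AAAA... user@host
```

The extra options stop the key from being reused for port or agent forwarding even though it still grants a login.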
| SSH minimal rights user - su only [duplicate] |
1,472,759,778,000 |
I'm setting up the *ARR suite apps in jails (using the Bastille manager). I used to do this on Debian with Docker, but this time I moved to FreeBSD to try out its native ZFS support.
While setting up, I need to create a uniform user, set up external mounts (the involved bit) and install the apps in each jail. I did this manually on a trial system and it works perfectly (finally!).
In Docker this was all automated in the form of compose scripts. I write it up once and then don't need to worry about it when I reinstall/upgrade the host.
Is there any automation tool I can use in my case?
|
The automation tool would be simple shell scripts, or, since you are already using Bastille, an equally simple template.
You can either do your actions from "outside" the jail, by directly modifying the filesystem with a script run on the host system, or do them "within" the context of the jail (container) using jexec(8):
jexec myjail /bin/sh
This will start a shell within the container context. Rather than starting an interactive shell you could just start a shell script.
The same principle applies when you wrap everything with the Bastille tool. Look for CMD in the template section.
At the top of that page they link to a repo with a lot of ready to use examples. If we take a quick look at apache we see the following content in the Bastillefile:
PKG apache24
SYSRC apache24_enable=YES
SYSRC apache24_flags=""
CMD httpd -t
SERVICE apache24 start
The Bastillefile is then a list of Template Automation Hooks, which are typical FreeBSD primitives, and CMD lets you run any command/script.
Most user setup on FreeBSD is typically done automatically via ports/packages (e.g. apache), which is why you do not see a lot of recipes doing it. If you want to look further into that, check out the ports tree and see how packages are created; this is documented in the FreeBSD Porter's Handbook. So on FreeBSD it would not be unusual to roll your own packages as part of your automation (typically using poudriere). That would be overkill for a home setup, but it is "how things are done".
But we can still set up users without much ado. We simply use pw and expand the example given in its man page with the -u option:
pw useradd -u 1001 -n gsmith -c "Glurmo Smith" -s csh -m -w random
You can run that from the host into the jail using jexec or as a CMD in the Bastillefile.
If you want to run pw outside the jail you should use the options -R rootdir and -V etcdir. You would then point those into the chroot location of your jail.
Remember that when you run things with jexec or CMD you only have access to the files within the chroot assigned to the jail. That is: if you want to run a script inside the jail, it needs to be within the jail's filesystem.
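Putting this together for one of the *ARR apps, a minimal Bastillefile sketch might look like the following (the package/service name, user name and UID are my assumptions for illustration; check that the app is actually packaged for FreeBSD before relying on this):

```
PKG sonarr
CMD pw useradd -u 1001 -n media -c "Media user" -s /bin/sh -m
SYSRC sonarr_enable=YES
SERVICE sonarr start
```

You would then apply the template to each jail with bastille, and keep the external-mount setup alongside it.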
| Automate deploying a bunch of thin jails |
1,472,759,778,000 |
I'm trying to use FreeNAS on my office along with Syncthing, and some things are giving me headache.
FreeNAS installed Syncthing as a jail, and I used the jexec command to log into the jail, but now I can't figure out a way to get back to my server. I'm root@syncthing instead of root@freenas. How do I undo the jexec on the shell?
|
Were you unable to just:
$ exit
| Logout from jail |
1,472,759,778,000 |
Jailed a user using https://olivier.sessink.nl/jailkit/ on CentOS 7:
sudo jk_jailuser -m -j /home/jail -v -s /bin/bash testing
sudo jk_cp -v -f /home/jail /bin/bash
Then I made /home/jail/home/testing 0777 so I can sftp in as my own user and fetch files. All working.
Then I attempted to su testing as a local user, or to connect via PuTTY as testing, and the shell instantly closed.
Changed home to 0700, and now can do so again.
Why does this happen?
|
For security reasons, ssh and jailkit do not allow the jail's root directory / to be writable, and abort if they detect bad permissions.
For example, if the root is writable, users could provide their own configuration files in /etc (including their own /etc/passwd) or their own dynamic libraries in /lib to abuse setuid binaries (or already-running privileged processes using the same chroot) for privilege escalation.
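To reconcile the two needs (a non-writable home so the shell works, a writable area for sftp uploads), the usual pattern is a root-safe home with a writable subdirectory. A small sketch on a scratch directory (paths mirror the question's layout; the real fix needs root and your actual jail path):

```shell
# Demonstrate the permission pattern on a throwaway directory:
# the home itself stays 0755, only an upload/ subdirectory is writable.
mkdir -p /tmp/jaildemo/home/testing/upload
chmod 755 /tmp/jaildemo/home/testing
chmod 777 /tmp/jaildemo/home/testing/upload
stat -c '%a' /tmp/jaildemo/home/testing /tmp/jaildemo/home/testing/upload
# prints 755 then 777
```

With this layout the shell no longer aborts, and sftp still has somewhere to drop files.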
| Why will shell automatically close for jailed user with 777 home? |
1,472,759,778,000 |
How can I run firejail as a root user?
# firejail --seccomp firefox
Reading profile /etc/firejail/firefox.profile
Reading profile /etc/firejail/disable-common.inc
Reading profile /etc/firejail/disable-programs.inc
Reading profile /etc/firejail/disable-devel.inc
Error: --noroot option cannot be used when starting the sandbox as root.
|
Edit /etc/firejail/firefox.profile and comment out or delete the noroot line.
| How to run firejail as root? |
1,459,275,092,000 |
I work in a large office building with hundreds of other computers on the same LAN. I don't have any reason to communicate with most of these computers, and when I do, it's always on an "opt-in" basis (like adding a network mount to my fstab). But Linux Mint is automatically adding printers throughout the building, and the "Network" sidebar in my file manager is filled with computers that belong to people I don't know. Finally, /var/log/syslog is filled with entries like the following, that make it difficult to find issues of real importance:
org.gtk.vfs.Daemon[2500]: ** (process:6388): WARNING **: Failed to resolve service name 'XXX': Too many objects
avahi-daemon[872]: dbus-protocol.c: Too many objects for client ':1.65', client request failed.
I would like to disable this automatic discovery of services, especially printers and network shares. I also would like to ensure that my computer is not automatically broadcasting any information about itself to the rest of the LAN.
What steps should I take to do this? Is it sufficient to disable avahi-daemon?
|
Stop the CUPS service (embodied by a process called cupsd), for example
sudo service cups stop
Open /etc/cups/cupsd.conf in your favorite editor, for example
sudo vim /etc/cups/cupsd.conf
Check whether there is a line in this file saying
Browsing Yes
and change this line to
Browsing No
This should disable the sharing of your own locally installed print queues with the other computers on the same network. (I'm simply assuming you do not want this, given that you also do not want to 'see' printers shared by other computers...)
Likewise, make sure that file has the following lines:
BrowseLocalProtocols none
BrowseDNSSDSubTypes none
DefaultShared No
The first two should disable the automatic addition of printers shared on the network.
Now start the CUPS service again, for example
sudo service cups start
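These edits can also be scripted with sed. A hedged sketch that works on a scratch copy first, so nothing touches the real /etc/cups/cupsd.conf until you have checked the result:

```shell
# Illustrate the substitutions on a throwaway copy; once happy,
# point the same sed (as root) at /etc/cups/cupsd.conf instead.
printf 'Browsing Yes\nDefaultShared Yes\n' > /tmp/cupsd.conf.demo
sed -i -e 's/^Browsing Yes$/Browsing No/' \
       -e 's/^DefaultShared Yes$/DefaultShared No/' /tmp/cupsd.conf.demo
cat /tmp/cupsd.conf.demo   # prints: Browsing No / DefaultShared No
```

Remember to stop cupsd before editing the real file and start it again afterwards, as described above.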
| How can I limit engagement with a large office LAN? |
1,459,275,092,000 |
I used to be able to ssh [email protected] between machines on my LAN, but it is no longer working. I can ssh using the IP of course, but it's DHCP so it may change from time to time. Both machines run Debian 9.12; one is a VM in a Windows host, but still, it DID work. I haven't fooled around with the config files, just regular updates.
ping hostname.local
ping: hostname.local: Name or service not known
(it might not be exactly that message as I translate from French)
ssh hostname.local
ssh: Could not resolve hostname hostname.local: Name or service not known
(ssh outputs in English)
From avahi.org :
Avahi is a system which facilitates service discovery on a local network via the mDNS/DNS-SD protocol suite
I've looked into /etc/resolv.conf, /etc/avahi/avahi-daemon.conf, /etc/nsswitch.conf but it's standard out-of-the-box config.
/etc/resolv.conf (reset by network-manager each time it starts)
# Generated by NetworkManager
search lan
nameserver xx.xx.xx.xx # DNS IPs obtained from DHCP
nameserver xx.xx.xx.xx
man resolv.conf says that the search list contains only the local domain name by default (something like that; I translated from the French man page); shouldn't it be local instead of lan?
I tried to change it and ping or ssh another host on my lan right away (without restarting network-manager), it didn't work. And when I restart network-manager, it rewrites /etc/resolv.conf and sets search lan.
/etc/nsswitch.conf (default, I haven't made any change)
# /etc/nsswitch.conf
#
# Example configuration of GNU Name Service Switch functionality.
# If you have the `glibc-doc-reference' and `info' packages installed, try:
# `info libc "Name Service Switch"' for information about this file.
passwd: compat
group: compat
shadow: compat
gshadow: files
hosts: files mdns4_minimal [NOTFOUND=return] dns myhostname
networks: files
protocols: db files
services: db files
ethers: db files
rpc: db files
netgroup: nis
I've tried to discover hosts and services with avahi-browse (which relies on avahi / zeroconf / Bonjour) and nbtscan, but they seem to find only the host on which they run.
(I know this is a possible duplicate of other questions, but I didn't find any answer and I don't have enough reputation to do anything)
|
Found it!
It turns out my router does indeed have a DNS server:
nslookup host_ip router_ip
Server: 192.168.1.254
Address: 192.168.1.254#53
69.1.168.192.in-addr.arpa name = hostname.lan.
So that answers the .local vs .lan question. In recent Debian, the local domain is .lan.
Still, ping hostname.lan returns unknown host.
Thanks to https://askubuntu.com/questions/623940/network-manager-how-to-stop-nm-updating-etc-resolv-conf, I found out that /etc/resolv.conf is a symlink to /var/run/NetworkManager/resolv.conf, so I had to replace it with my own resolv.conf:
search lan
nameserver 192.168.1.254
so that it uses the router's DNS (which will route the queries if necessary).
After restarting network-manager with systemctl restart network-manager, it works like a charm:
$ ping hostname.lan
PING hostname.lan (192.168.1.69) 56(84) bytes of data.
64 bytes from hostname.lan (192.168.1.69): icmp_seq=1 ttl=64 time=2.02 ms
(ping google.fr to make sure WAN queries are processed)
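An alternative to replacing the symlink by hand (an assumption on my part: this needs a reasonably recent NetworkManager) is to tell NetworkManager to stop managing resolv.conf via a drop-in file, and then maintain /etc/resolv.conf yourself; the drop-in's file name is arbitrary:

```
# /etc/NetworkManager/conf.d/90-dns-none.conf
[main]
dns=none
```

After restarting NetworkManager, your hand-written /etc/resolv.conf is left untouched.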
| Can't resolve hostname.local on LAN |
1,459,275,092,000 |
How do I disable the LAN connection at startup in Debian Jessie? I do not know what establishes that connection: is it some configuration file, or wicd, which starts on boot? When I open wicd I can see that the wired connection is established after system startup. I do not want this connection because the modem connection does not work then.
How to disable LAN on boot?
interfaces file:
root@debian:cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
allow-hotplug eth0
iface eth0 inet dhcp
root@debian:/home/gameboy#
|
These two lines define the actions to be applied to the eth0 interface:
allow-hotplug eth0
iface eth0 inet dhcp
The man page for interfaces (man interfaces) will describe it in glorious detail, but essentially, it's saying
If we have an eth0 interface, allow it to be defined
When we find eth0 bring it up with DHCP
In older versions of Debian you could simply comment out both lines. However, in newer versions this tells other network managers to take control of the interface, so that's not advisable.
Instead, change dhcp to manual, which tells the other network managers that you want control of the interface retained in this file, but that you don't want it brought up automatically:
allow-hotplug eth0
iface eth0 inet manual
| Disable LAN connection at startup in Debian |
1,459,275,092,000 |
I have a newly built computer with an MSI MEG X399 CREATION motherboard, which includes two Ethernet ports with Intel i211 Gigabit Ethernet. I am running Debian 10, but the Ethernet doesn't work. I can't ping any host: sometimes the lights on the Ethernet plug light up, sometimes they don't and I need to restart the router; I can never get a connection. At boot, only the enp8s0 interface is visible via ifconfig, and enp9s0 isn't, although it can be brought up with ifconfig enp9s0 up.
I also have tried to live boot into Ubuntu 19.04 and Kali 19.3, but the LAN isn't working.
Wifi is running ok. I also downloaded the igb driver from the Intel website and compiled it myself, but still no success.
|
Well, I managed to resolve the problem. Running sudo ethtool -s enp8s0 speed 10 duplex half autoneg off, I was able to get a connection (although slow and unstable). I figured out the problem was an ancient long CAT5 cable in the wall, and potentially also the low-quality router.
| Debian 10 intel i211 LAN not working |
1,459,275,092,000 |
As you may have heard, Microsoft announced that Windows 10 will allow you to download updates not just from their servers, but from multiple sources using peer-to-peer over LAN and the Internet.
My question is: Is there such feature that exists for Linux?
source: http://wccftech.com/windows-10-lets-updates-torrents/
|
To my knowledge no distro uses peer-to-peer for downloading packages. If you have lots of computers running the same Linux distribution, most package management systems allow you to run your own repositories and mirrors. This is useful if you have a slow internet connection or a very large number of machines running the same distribution.
I did a quick Google search, however, and found apt-p2p. This looks like a peer-to-peer solution for Debian (and possibly Debian derivatives like Ubuntu) repositories. I have never tried it, but it might be worth looking into.
Edit: I also found p2pacman for Arch Linux, this piece of software seems to be more of an experiment than something ready for wide usage.
| Download updates from peer computer over LAN |
1,459,275,092,000 |
For instance, if one wants to access the account bob on a machine on a local network behind a router, they would simply type:
$ ssh -p xx [email protected]
However, how does ssh handle the possibility of two machines on the local network having the same username? Is there a flag to differentiate between user bob on machine A vs a different user bob on machine B, or does ssh throw an error?
|
Why would ssh care about the same username recurring on different hosts? It is absolutely expected that this will happen. Hint: the root user is omnipresent, is it not?
So the answer to your questions is: ssh handles it the same way everything else would handle it: by not caring about which user is being referenced until talking to the host in question.
A simplified expansion on the above:
The first thing that happens is that the ssh client attempts to establish a conversation with the remote ssh server. Once a communications channel is opened, the client checks whether it is a known host (e.g. an entry is present in ~/.ssh/known_hosts), and handles things appropriately if it's either an unknown host or a known host with invalid credentials (e.g. the host key has changed).
Now that all that is out of the way and a line of communication is properly open between the ssh server and client, the client will say to the server "I would like to authenticate for the user bob". Naturally, the server won't care about any other bobs on the network; only itself.
| How does ssh handle 2 computers on the local network with the same username? |
1,459,275,092,000 |
There's a specific remote subdomain, mymachine.home.com, whose IP changes all the time, and I need to update DNSMasq so that it resolves correctly.
But I tested pinging xxxxxx.home.com, which doesn't exist, and it all seems to point to my NginX reverse proxy. (A lot of my entries point there.)
How can I stop DNSMasq from resolving non-existent subdomains to a local IP? (It's always the same IP.)
OR
How can I tell DNSMasq to always use external DNS for a specific entry?
This is my configuration:
#DNS_Config
#Dont use external file for custom dns
no-hosts
#Use this file for DNS nameservers (contains googles DNS)
resolv-file=/etc/resolv.dnsmasq.conf
##CUSTOM_DNS
#PING home.com = 192.168.1.210
address=/home.com/192.168.1.210
##DHCP##
##DHCP_Config
#Listen only on eth0
interface=eth0
#DHCP Range 192.168.1.2-127 (12hr lease time)
dhcp-range=192.168.1.2,192.168.1.127,12h
#RESERVE DNSMasq SERVER IP
dhcp-host=00:0c:29:c2:56:bf,192.168.1.149
#Change DEFAULT GATEWAY (Default is same IP as DNSMasq Server)
dhcp-option=option:router,192.168.1.1
##STATIC_DHCP
#STATIC IP to 2 MAC Address that WILL NOT be used simultaneously [IE: Laptop with WiFi + LAN]
dhcp-host=00:00:00:00:00:00,00:00:00:00:00:00,192.168.1.210
#Assign STATIC IP by MAC
dhcp-host=00:00:00:00:00:00,192.168.1.211
|
The most common reason why you get a bogus IP address for a nonexistent domain is that your ISP converts negative answers into the address of their ad servers, to serve you more ads when you make a typo in the address of a website. This is definitely a shady practice, but unfortunately some ISPs do it.
You can commonly counter that by using different upstream DNS servers, such as OpenDNS (server=208.67.222.222 and server=208.67.220.220) or Google (server=8.8.8.8 and server=8.8.4.4). If you need your ISP's servers for their own customer services, you can use them only in a specific domain (server=/myisp.com/203.0.113.1).
This being said, in your case, the problem is that your configuration doesn't do what you want it to do.
How can I stop DNSMasq from resolving non-existent subdomains to a local IP?
By not declaring the domain to be existing. Dnsmasq does not substitute an IP address for non-existent domains.
How can I tell DNSMasq to always use external DNS for a specific entry?
By not declaring a value for that host name, or a wildcard that encompasses that host name.
The problem in your configuration is the line
address=/home.com/192.168.1.210
This is a wildcard: it declares that 192.168.1.210 is the address of home.com and all host names under that domain. You don't want this wildcard, so remove it.
Instead, declare home.com and any individual host under home.com as host names, not as wildcards, by listing them in /etc/hosts. Remove the no-hosts line: you want to declare specific hosts, so you need /etc/hosts! Or, if you prefer not to use /etc/hosts, keep that line and add a line pointing to a different file declared with addn-hosts=/path/to/hosts-file. Or, if you want to keep it all inside dnsmasq.conf, replace the address line with host-record=home.com,192.168.1.210
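Concretely, the relevant part of the configuration from the question would then become something like the following (addresses are the question's; nginx.home.com is a hypothetical local host for illustration). Note that mymachine.home.com is deliberately not declared, so queries for it go to the upstream servers:

```
# was: address=/home.com/192.168.1.210   <- wildcard, remove it
host-record=home.com,192.168.1.210
# any other fixed local hosts you want served by dnsmasq:
host-record=nginx.home.com,192.168.1.210
```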
If you want to have a wildcard with exceptions (return 192.168.1.210 for all xxx.home.com except for mymachine.home.com, or always query the upstream servers and substitute 192.168.1.210 only for negative answers), I don't think dnsmasq can do this.
| Dnsmasq points nonexistent subdomains to local IP |
1,459,275,092,000 |
This is the equivalent of a question asked here for OS X. What is the easiest way to find out the NetBIOS name of a Windows PC on my LAN by MAC address, and vice versa?
It can be done by IP with:
nmblookup -A a.b.c.d
nmblookup pc_netbios_name
Is there a similar command for MAC address?
|
You can find out the MAC address of a recently contacted device by its IP address using the arp table:
ping -c1 -w1 10.0.2.2
PING 10.0.2.2 (10.0.2.2) 56(84) bytes of data.
64 bytes from 10.0.2.2: icmp_seq=1 ttl=63 time=0.785 ms
--- 10.0.2.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.785/0.785/0.785/0.000 ms
arp -n 10.0.2.2
Address HWtype HWaddress Flags Mask Iface
10.0.2.2 ether 52:54:00:12:35:02 C eth0
You could wrap this in a little function:
iptoarp() {
    local ip="$1"
    ping -c1 -w1 "$ip" >/dev/null
    arp -n "$ip" | awk -v ip="$ip" '$1 == ip {print $3}'
}
iptoarp 10.0.2.2 # --> 52:54:00:12:35:02
I know of no easy way to get an IP address or NetBIOS name from a MAC address. Either run arpwatch and scan its log file for chatter from that device, or ping each IP address on your LAN in turn and look at the ARP responses.
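That last suggestion (ping every address, then look for the MAC in the ARP table) can be sketched as a shell function. The /24 prefix default is an assumption, and the whole thing is best-effort:

```shell
mactoip() {
    # Sweep-ping a /24 so live hosts populate the ARP cache,
    # then print the IP whose hardware address matches.
    local mac="$1" prefix="${2:-192.168.1}" i
    for i in $(seq 1 254); do
        ping -c1 -w1 "$prefix.$i" >/dev/null 2>&1 &
    done
    wait
    arp -n | awk -v m="$mac" 'tolower($3) == tolower(m) {print $1}'
}
# usage: mactoip 52:54:00:12:35:02 10.0.2
```

Once you have the IP, nmblookup -A as above gives you the NetBIOS name.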
| How can I find out the name of a Windows PC on my LAN by MAC address? |
1,459,275,092,000 |
I'm using dnsmasq as a DNS server only (no DHCP), mapping the LAN's hostnames to their respective IPs using /etc/hosts, but on the same LAN some IPs are assigned dynamically by a router (and I'd like to keep it that way: I don't want to use dnsmasq's DHCP, I want to keep the IPs dynamic).
Any way to map a MAC-address with a hostname so the DNS can respond correctly for a dynamically assigned ip?
I've read the dnsmasq documentation and played around with /etc/ethers and dhcp-host=, but the former only maps MAC to IP (not hostname) and the latter only works when DHCP is enabled; so far I've found nothing else.
|
Solved, see poor-mans-device-discovery-dns.
Use dnsmasq's addn-hosts=/etc/dyn.hosts option to read an additional hosts file, which is regenerated periodically using arp-scan (and cron or whatever).
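A generator for that extra hosts file might look like the sketch below (the MACs and hostnames are assumptions; arp-scan needs root, and dnsmasq re-reads its hosts files on SIGHUP). You would run it from cron every few minutes:

```shell
gen_dyn_hosts() {
    # Map known MACs to the hostnames we want dnsmasq to serve;
    # arp-scan prints "IP<TAB>MAC<TAB>Vendor" lines, so $2 is the MAC.
    arp-scan --localnet 2>/dev/null | awk '
        $2 == "52:54:00:12:35:02" { print $1, "printer" }
        $2 == "52:54:00:12:35:03" { print $1, "nas" }
    ' > /etc/dyn.hosts
    pkill -HUP dnsmasq   # SIGHUP makes dnsmasq re-read addn-hosts
}
```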
| dnsmasq as dns only, map mac-address to hostname for dynamic ip |
1,459,275,092,000 |
I have two computers connected to the same router (so they are essentially connected in a LAN). Both run some GNU+Linux distribution. I have a bunch of files, in a directory ~/A/ on my first computer that I would like to transfer to my second computer.
The names of the files in A are contained in a certain list, say names_list. Now I would like for each of these files to be accessible via a local address, provided with reference to the router (such as 192.168.2.1:2112/name_of_file or something similar), so that the second computer may simply download each file one-by-one when given the names_list.
How can I do this? The downloading part is trivial, I am asking mainly regarding setting up the host computer to provide files at specific local addresses.
|
Plenty of remote filesystems exist. There are three that are most likely to be useful to you.
SSHFS accesses files via an SSH shell connection (or more precisely, via SFTP). You don't need to set up anything exotic: just install the OpenSSH server on one machine, install the client on the other machine, and set up a way to log in from the client to the server (either with a password or with a key). Then mount the remote directory on the first computer:
mkdir ~/second-computer-A
sshfs 192.168.2.1:A ~/second-computer-A
SSHFS is the easiest one to set up as long as you have access to all the files through your user account on the second computer.
NFS is Unix's traditional network filesystem protocol. You need to install an NFS server on the server. Linux provides two, one built into the kernel (but you still need userland software to manage the underlying RPC protocol and the additional lock protocol) and one as a pure userland software. Pick either; the kernel one is slightly faster and slightly easier to set up. On the server, you need to export the directory you want to access remotely, by adding an entry to /etc/exports:
/home/zakoda/A 192.168.2.2(rw,sync)
On the second computer, as root:
mkdir /media/second-computer-A
mount -t nfs 192.168.2.1:/home/zakoda/A /media/second-computer-A
By default NFS uses numerical user and group IDs, not user and group names. So this only works well if you have the same user IDs on the server and on the client. If you don't, set up nfsidmap on the server.
Samba is Windows's network filesystem protocol (rather, it's an open-source implementation of the protocol, which was called SMB and is now called CIFS). It's also available on Linux and other Unix-like systems. It's mainly useful to mount files from a Windows machine on a Unix machine or vice versa, but it can also be used between Unix machines. It has the advantage that matching accounts is easier to set up than with NFS. The initial setup is a bit harder but there are plenty of tutorials, e.g. server and client.
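If mounting a filesystem is more than you need and you literally want each file reachable at an address like 192.168.2.1:2112/name_of_file, a throwaway HTTP server over the directory also works. A self-contained local sketch (the served directory, port, and loopback address are demo choices; python3 is assumed to be installed):

```shell
# Serve a scratch directory over HTTP and fetch a file back by URL.
# On the real LAN you would serve ~/A and use the first computer's
# address (e.g. 192.168.2.1) instead of 127.0.0.1.
mkdir -p /tmp/A-demo && echo hello > /tmp/A-demo/file1
python3 -m http.server 2112 --directory /tmp/A-demo --bind 127.0.0.1 &
srv=$!
sleep 1
got=$(python3 -c 'import urllib.request as u; print(u.urlopen("http://127.0.0.1:2112/file1").read().decode().strip())')
kill "$srv"
echo "$got"   # prints: hello
```

On the second computer the download step is then just a loop over the list, e.g. while read -r f; do wget "http://192.168.2.1:2112/$f"; done < names_list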
| Make files available through local address |
1,459,275,092,000 |
I am running a local network behind a home router with DHCP enabled, so after any reboot of the router my devices get some random IP within a given range. Is there any way to check the IPs of the other devices without logging into the router? I have an app on my phone called "Network Scanner" which, within a given network, checks and shows the IPs of all the other devices connected to it. Is it possible to do this from a desktop GNU/Linux machine by any means, so that I can reach specific devices by their IP more easily without logging into the router?
|
Sure you can. Just install the nmap tool:
yum install -y nmap
then run:
nmap -sn 10.42.0.0/24
Of course you'll need to replace the IP range with the appropriate values for your network.
| Is there any way to scan a lan network with dhcp enabled to see the IP of the connected devices from a GNU/Linux machine? |
1,459,275,092,000 |
I can connect to the Internet without a problem by using wireless networks.
I have a DSL connection defined which can connect to the Internet when the LAN is connected. When I plug in the network cable, the network-manager icon keeps waiting for an address from the LAN, and connecting the DSL does nothing. What should I do?
Output of ifconfig:
eth0 Link encap:Ethernet HWaddr 28:d2:44:ce:05:84
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:22 errors:0 dropped:2 overruns:0 frame:0
TX packets:45 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1320 (1.2 KiB) TX bytes:15390 (15.0 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:252 errors:0 dropped:0 overruns:0 frame:0
TX packets:252 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:18528 (18.0 KiB) TX bytes:18528 (18.0 KiB)
wlan0 Link encap:Ethernet HWaddr 30:10:b3:14:07:6b
inet addr:192.168.1.4 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::3210:b3ff:fe14:76b/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:5330 errors:0 dropped:0 overruns:0 frame:0
TX packets:5671 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:3872796 (3.6 MiB) TX bytes:1117262 (1.0 MiB)
|
It looks like your cabled connection is not obtaining an IP address from the DSL connection.
This can be either because there is no DHCP server running on the network you are connecting to or because the interface is not properly configured.
Try doing the following:
Install the ifupdown-extra package (as root): apt-get install ifupdown-extra
Disable the radio interface by using the hardware on/off radio switch.
Check whether your LAN interface (eth0) obtains an IP address.
If it does not, force it to obtain an IP address using DHCP: dhclient eth0
If the above does not work, try setting up a static IP address for your interface by running: ip addr add 192.168.1.15 dev eth0; ip route add default via 192.168.1.1
If step 4 or 5 works, test your network connection by running network-test (a tool provided by the ifupdown-extra package).
If network-manager was not able to configure your LAN interface (i.e. step 3 failed but the others worked), maybe your system is not properly configured to let it manage interfaces. Review the following entry in the Debian Wiki: https://wiki.debian.org/NetworkManager and make sure your /etc/network/interfaces does not have any eth0 entry, which would prevent it from working.
| Debian Jessie: My LAN is not working! |
1,459,275,092,000 |
I just set up my Raspberry Pi. It is working great and I can easily access it from my local Windows machine using SSH. I gave it a custom hostname. I can access the Pi by this hostname only when the Samba server is running on it. When I stop Samba, I can no longer use the custom hostname and have to use the IP.
I was wondering if there is another way for Windows systems to recognize the hostname of a Linux system that does not involve setting up a Samba server.
This is also relevant for me since I plan on creating a dual-boot system on my local machine and accessing it from another Windows system within the LAN.
|
You can either set up a DNS server and add an entry for your Pi's hostname + IP to it; all the systems that need to resolve this hostname will then have to use this DNS server.
Your other option is to add an entry to each system's hosts file that specifies the Pi's hostname + IP address:
1.2.3.4 pi-host
NOTE: Yes, Windows systems do have a hosts file, just like Linux/Unix systems.
You can see details about how to do this here: http://www.rackspace.com/knowledge_center/article/how-do-i-modify-my-hosts-file.
NOTE: On Vista/Windows 7 systems this file is generally located at C:\Windows\System32\Drivers\etc\hosts.
| Create FQDN in LAN for Windows systems without samba |
1,459,275,092,000 |
I'm really new to all of this, and I'm not really sure how to do it, but I have an HDD connected to the LAN; from Windows, I run:
net use y: \\192.168.1.200\my-path\ condor /user:admin
I'm trying to do the same on Linux, "translating" the line above to work on my Fedora, but I'm not really sure how to do it. I tried:
sudo mount //192.168.1.200/my-path/ -t cifs /mnt/y -o "username=admin"
but this doesn't work, and I can't find anything about it because I'm not sure what to search for.
|
You could install Gigolo and see what it returns. Gigolo is a graphical front-end for the GIO/GVfs virtual filesystem, which manages files over a TCP/IP network.
I think there's even a package for Fedora, in which case you can install it via
yum install gigolo
Here's its manpage.
| mount lan hdd into linux fedora |
1,459,275,092,000 |
I am running a RPi 4b with Raspbian (Bullseye), Logitech Media Server (LMS) 8.2.0 and Squeezelite 1.9.9. For automated startup of the Squeezelite process whenever a certain USB device is connected, I have defined the following udev rules:
SUBSYSTEM=="usb", ACTION=="add", ENV{DEVTYPE}=="usb_device", ENV{PRODUCT}=="154e/300a/3", RUN+="/usr/bin/DAC_start.sh"
SUBSYSTEM=="usb", ACTION=="remove", ENV{DEVTYPE}=="usb_device", ENV{PRODUCT}=="154e/300a/3", RUN+="/usr/bin/DAC_stop.sh"
This is my DAC_start.sh script:
#!/bin/sh
######### DAC_start.sh #########
date >> /tmp/udev.log
echo "Starting Squeezelite" >> /tmp/udev.log
sleep 5s
/usr/bin/squeezelite -o hw:CARD=ND8006,DEV=0 -D -n MediaPlayer -d all=debug -f /tmp/sq.log | at now
###############################
This is my DAC_stop.sh script:
#!/bin/sh
######### DAC_stop.sh #########
date >> /tmp/udev.log
echo "Stopping Squeezelite ..." >> /tmp/udev.log
pkill squeezelite
###############################
Both scripts work fine when I execute them manually (both as pi and root): Squeezelite successfully connects to the LMS and USB device, music can be played.
The udev rules also work and get fired when I connect my USB DAC (which I can see from the log files).
However, when squeezelite gets started by udev, it seems unable to connect to my LMS server, which is on the same LAN, actually on the same machine. This is the Squeezelite logfile (I think the more important messages are at the very bottom, but I copied all messages for your convenience in case I am overlooking something):
/usr/bin/squeezelite -o hw:CARD=ND8006,DEV=0 -D -n MediaPlayer -d all=debug -f /tmp/sq.log
[16:22:50.362611] stream_init:454 init stream
[16:22:50.362971] stream_init:455 streambuf size: 2097152
[16:22:50.376806] output_init_alsa:936 init output
[16:22:50.377007] output_init_alsa:976 requested alsa_buffer: 40 alsa_period: 4 format: any mmap: 1
[16:22:50.377081] output_init_common:360 outputbuf size: 3528000
[16:22:50.377333] output_init_common:384 idle timeout: 0
[16:22:50.410804] test_open:301 sample rate 1536000 not supported
[16:22:50.410907] test_open:301 sample rate 1411200 not supported
[16:22:50.411049] test_open:301 sample rate 32000 not supported
[16:22:50.411085] test_open:301 sample rate 24000 not supported
[16:22:50.411118] test_open:301 sample rate 22500 not supported
[16:22:50.411151] test_open:301 sample rate 16000 not supported
[16:22:50.411184] test_open:301 sample rate 12000 not supported
[16:22:50.411216] test_open:301 sample rate 11025 not supported
[16:22:50.411249] test_open:301 sample rate 8000 not supported
[16:22:50.411330] output_init_common:426 supported rates: 768000 705600 384000 352800 192000 176400 96000 88200 48000 44100
[16:22:50.500287] output_init_alsa:1002 memory locked
[16:22:50.500456] output_init_alsa:1008 glibc detected using mallopt
[16:22:50.501072] output_init_alsa:1026 unable to set output sched fifo: Operation not permitted
[16:22:50.501080] output_thread:685 open output device: hw:CARD=ND8006,DEV=0
[16:22:50.501156] decode_init:153 init decode
[16:22:50.502046] alsa_open:354 opening device at: 44100
[16:22:50.502132] register_dsd:908 using dsd to decode dsf,dff
[16:22:50.502166] register_alac:549 using alac to decode alc
[16:22:50.502198] register_faad:663 using faad to decode aac
[16:22:50.502229] register_vorbis:385 using vorbis to decode ogg
[16:22:50.502325] register_opus:328 using opus to decode ops
[16:22:50.502361] register_flac:336 using flac to decode ogf,flc
[16:22:50.502392] register_pcm:483 using pcm to decode aif,pcm
[16:22:50.502433] register_mad:423 using mad to decode mp3
[16:22:50.502463] decode_init:194 include codecs: exclude codecs:
[16:22:50.503117] alsa_open:425 opened device hw:CARD=ND8006,DEV=0 using format: S32_LE sample rate: 44100 mmap: 1
[16:22:50.503159] discover_server:795 sending discovery
[16:22:50.503272] alsa_open:516 buffer: 40 period: 4 -> buffer size: 1764 period size: 441
[16:22:50.503349] discover_server:799 error sending disovery
[16:22:55.504955] discover_server:795 sending discovery
[16:22:55.505246] discover_server:799 error sending disovery
[16:23:00.510091] discover_server:795 sending discovery
[16:23:00.510360] discover_server:799 error sending disovery
[16:23:05.515053] discover_server:795 sending discovery
[16:23:05.515329] discover_server:799 error sending disovery
[16:23:10.519882] discover_server:795 sending discovery
[16:23:10.520185] discover_server:799 error sending disovery
[16:23:15.528387] discover_server:795 sending discovery
[16:23:15.528659] discover_server:799 error sending disovery
[16:23:20.535819] discover_server:795 sending discovery
[16:23:20.536007] discover_server:799 error sending disovery
[16:23:25.541079] discover_server:795 sending discovery
[16:23:25.541333] discover_server:799 error sending disovery
[16:23:30.549470] discover_server:795 sending discovery
[16:23:30.549640] discover_server:799 error sending disovery
[16:23:35.559568] discover_server:795 sending discovery
[16:23:35.559857] discover_server:799 error sending disovery
[16:23:40.568356] discover_server:795 sending discovery
[16:23:40.568646] discover_server:799 error sending disovery
[16:23:45.576730] discover_server:795 sending discovery
[16:23:45.577009] discover_server:799 error sending disovery
[16:23:50.586202] discover_server:795 sending discovery
[16:23:50.586502] discover_server:799 error sending disovery
[16:23:55.596574] discover_server:795 sending discovery
[16:23:55.596872] discover_server:799 error sending disovery
[16:24:00.604989] discover_server:795 sending discovery
[16:24:00.605269] discover_server:799 error sending disovery
[16:24:05.615978] discover_server:795 sending discovery
[16:24:05.616278] discover_server:799 error sending disovery
[16:24:10.625168] discover_server:795 sending discovery
[16:24:10.625472] discover_server:799 error sending disovery
[16:24:15.633952] discover_server:795 sending discovery
[16:24:15.634246] discover_server:799 error sending disovery
[16:24:20.642357] discover_server:795 sending discovery
[16:24:20.642648] discover_server:799 error sending disovery
[16:24:25.650821] discover_server:795 sending discovery
[16:24:25.651113] discover_server:799 error sending disovery
[16:24:30.662745] discover_server:795 sending discovery
[16:24:30.663055] discover_server:799 error sending disovery
[16:24:35.670289] discover_server:795 sending discovery
[16:24:35.670566] discover_server:799 error sending disovery
[16:24:40.674134] discover_server:795 sending discovery
[16:24:40.674460] discover_server:799 error sending disovery
[16:24:45.679650] discover_server:795 sending discovery
[16:24:45.679984] discover_server:799 error sending disovery
[16:24:50.689070] discover_server:795 sending discovery
[16:24:50.689366] discover_server:799 error sending disovery
[16:24:55.697415] discover_server:795 sending discovery
[16:24:55.697709] discover_server:799 error sending disovery
[16:25:00.705845] discover_server:795 sending discovery
[16:25:00.706128] discover_server:799 error sending disovery
[16:25:05.714279] discover_server:795 sending discovery
[16:25:05.714583] discover_server:799 error sending disovery
[16:25:10.723306] discover_server:795 sending discovery
[16:25:10.723601] discover_server:799 error sending disovery
[16:25:15.728709] discover_server:795 sending discovery
[16:25:15.728977] discover_server:799 error sending disovery
It seems as if Squeezelite, when started by udev, has no access to the LAN? I also tried starting Squeezelite with the -s 192.168.1.20 parameter (which is the IP of my LMS) -- but without success. It still cannot connect to the LMS server. Any ideas what I am doing wrong?
I used the approach described above on an RPi with piCore OS (which is a Tiny Core Linux distribution), and it worked like a charm...
|
As already suggested by @dirkt, starting Squeezelite from udev as a process is not a good idea: LAN access under udev is somewhat limited, and the process will get killed after some time anyway. The preferred way is to start Squeezelite as a service.
To do so, the udev rules need to be defined as follows:
# cat /etc/udev/rules.d/50-DAC.rules
SUBSYSTEM=="usb", ACTION=="bind", ENV{PRODUCT}=="154e/300a/3", TAG+="systemd", ENV{SYSTEMD_WANTS}="DAC_sql_start.service"
SUBSYSTEM=="usb", ACTION=="unbind", ENV{PRODUCT}=="154e/300a/3", TAG+="systemd", ENV{SYSTEMD_WANTS}="DAC_sql_stop.service"
Note that I used the bind and unbind actions for the USB device, which (in my case) proved to be more robust than add and remove. Also, I had to put ENV{...} around the SYSTEMD_WANTS for a reason I don't understand...
The corresponding services need to be defined as follows:
# cat /lib/systemd/system/DAC_sql_start.service
[Unit]
Description=Squeezelite by DAC script (start)
[Service]
ExecStart=/usr/bin/DAC_sql.sh start
# cat /lib/systemd/system/DAC_sql_stop.service
[Unit]
Description=Squeezelite by DAC script (stop)
[Service]
ExecStart=/usr/bin/DAC_sql.sh stop
I modified the shell script DAC_sql.sh as follows:
# cat /usr/bin/DAC_sql.sh
#!/bin/sh
######### DAC_sql.sh #########
#sleep 5s
case $1 in
"start")
#date >> /tmp/DAC.log
#echo "Starting Squeezelite..." >> /tmp/DAC.log
/usr/bin/squeezelite -o hw:CARD=ND8006,DEV=0 -s 127.0.0.1 -D -n MediaPlayer #-d all=debug -f /tmp/sq.log
;;
"stop")
#date >> /tmp/DAC.log
#echo "Stopping Squeezelite..." >> /tmp/DAC.log
systemctl stop DAC_sql_start
;;
esac
##############################
It now works as expected: Whenever the USB device is switched on (bound into the system), Squeezelite is started as a service. When the USB device is switched off (unbound from the system), the Squeezelite service is stopped.
| udev without LAN access? |
1,459,275,092,000 |
I have 2 computers PC1 and PC2
1) PC1 can't boot from CD-ROM or USB, the only way is to boot from LAN
2) PC2 is running debian jessie (which I want to use as a server) and contains the ISO image debian8.iso,
How to install debian jessie on my PC1 through LAN using the debian8.iso ?
|
You could build a PXE Boot server. You can do this using several operating systems, including Debian.
PC2 is configured as a PXE Boot server, potentially using resources from the ISO image. PC1 is then configured in the BIOS to boot from those resources.
There's a lot of stuff required on PC2 to support this, and PC1 needs to support network booting from the BIOS as well.
There's a long write-up on the Debian Wiki, I'll try to summarise it (but the process is quite complex).
PC2 is your Server.
You need to install a DHCP service on your server, and configure it to allow booting.
You need to install a TFTP service on your server.
You need to get a network boot image, and configure it as a resource within the TFTP service. You can do this using apt on PC2 (the package is known as debian-installer-$VERSION-netboot-$ARCH, where $VERSION in your case is 8 and $ARCH needs to match your target machine's architecture).
On your client (PC1) you need to configure the network boot to point at PC2, reboot PC1 and if you've got everything configured correctly it should boot.
Read the Wiki I linked, it has more detail.
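All three server-side pieces (boot-capable DHCP, TFTP, and the netboot image) can be provided by a single dnsmasq instance. A minimal sketch, assuming an amd64 Jessie target; dnsmasq runs in proxy-DHCP mode so your existing router keeps handing out addresses, and the 192.168.1.0/24 subnet is a placeholder for your LAN:

```shell
# On PC2: install dnsmasq plus the Jessie netboot image package
sudo apt-get install dnsmasq debian-installer-8-netboot-amd64

# Proxy-DHCP + TFTP config: dnsmasq only supplies the PXE boot information
sudo tee /etc/dnsmasq.d/pxe.conf >/dev/null <<'EOF'
# disable the DNS part, we only want PXE/TFTP
port=0
# proxy mode on the LAN's subnet (adjust to your network)
dhcp-range=192.168.1.0,proxy
dhcp-boot=pxelinux.0
pxe-service=x86PC,"Install Debian",pxelinux
enable-tftp
tftp-root=/usr/lib/debian-installer/images/8/amd64/text
EOF
sudo systemctl restart dnsmasq
```

With that in place, PC1's BIOS network boot should pick up the installer from PC2 without any changes to the existing DHCP server.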
| How to install debian through LAN? |
1,459,275,092,000 |
We have a number of linux devices, each living in its own LAN (each LAN with its own router, connected to internet). Each device can connect to internet but cannot be directly reached (i.e. no public ip address, no possibility to configure the router to forward incoming traffic). We are on public infrastructures using mainly IPv4.
We are looking for an (existing?) infrastructure/service, where:
each device could be part of a virtual WAN, where each device gets a private ip address, possibly with a static assignment
it is ok to share WAN ip addresses with other entities, we do not need to have a reserved WAN
each private ip address can connect to the other ip addresses of the WAN
cryptography/anonymization is not necessary
decent latency and throughput: would like max 200-300 ms ping roundtrip, and would like at least 64-128 kbit/s on average
WAN with multiple access points, if possible.
ok to pay for the service, of course
We have already tested TOR: it fits quite well, in particular hidden services allow each device to be reached by the others; but the performance is really bad on average, with very high latency and very low bandwidth.
We have already tested OpenVPN in client/server mode: pretty good, but we would need to maintain an OpenVPN server in the cloud. And we would get only a single entry point (unless we set up more servers...).
Any other ideas?
Are there any usable virtual WAN infrastructures ready to use (beside tor and others, which unluckily are not so good for us in terms of performance)?
|
Provided you can get at least one central system that is internet routable, you might try tinc. It's a bare-bones minimalistic mesh-style VPN with very low overhead and reasonably good security. Provided you're not constantly saturating the link, it's sufficiently light-weight to run with no issues on an AWS EC2 t2.micro instance or a 5$/month instance from most other providers, and it only needs an absolute bare minimum of other stuff on the system. Latency can be as low as 1-2% higher than a direct link, and throughput is typically only a few percent at most below idealized for the path.
In my own experience, it's a bit more involved to set up than OpenVPN and has somewhat sub-par support for Windows and Android, but provides a bit better performance for equivalent security and is somewhat more resilient to failures (if you've got multiple entry points and one goes down, stuff that was already connected to that one will fail over to other ones automatically without any need for administrative intervention), and provides somewhat easier-to-use IPv6 support (both for the outer transport, and the WAN itself).
| virtual WAN for devices in different LANs |
1,459,275,092,000 |
I wanted to assign Unique Local Addresses (ULA) to a couple of machines inside my LAN alongside the link-local ones and the globally routable ones. I am currently running dual stack. I also wanted to keep them short, like fd69:6666::.
One machine is running Debian Jessie (kernel 3.16.0-4-amd64) and the other one Linux Mint 17.2 (kernel 3.16.0-38-generic x86_64).
After following this guide: Set Up An IPv6 LAN with Linux. I ended up with the following configuration:
/etc/network/interfaces:
allow-hotplug eth0
iface eth0 inet static
address 192.168.1.100
netmask 255.255.255.0
broadcast 192.168.1.255
gateway 192.168.1.1
dns-nameservers 192.168.1.1
auto eth0
iface eth0 inet6 static
address fd69:6666:: #fd69:7777:: on the other machine.
netmask 64
/etc/radvd.conf:
interface eth0
{
AdvSendAdvert on;
prefix fd69:6666::/64 { #fd69:7777:: on the other machine.
AdvOnLink on;
AdvAutonomous on;
};
}
Problem is that I end up having both machines with the fd69:6666 prefix and nothing else! IPv6 connectivity stops working. What am I doing wrong?
|
Try using fd69::6666 instead. Using fd69:6666:: only sets the network part of the address. Remember to change the netmask too! This should be the result:
/etc/network/interfaces
auto eth0
iface eth0 inet6 static
address fd69::6666
netmask 64
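After rewriting the stanza, a quick way to verify the address actually landed on the interface (a sketch assuming the interface is named eth0 as in the question, with fd69::7777 on the second machine):

```shell
# Reload the interface configuration
sudo ifdown eth0 && sudo ifup eth0

# Should now list fd69::6666/64 alongside the fe80:: link-local address
ip -6 addr show dev eth0

# Reach the other machine by its ULA (use "ping -6" on newer systems)
ping6 -c 3 fd69::7777
```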
| Cannot make ipv6 ULA addresses work |
1,459,275,092,000 |
My question regards ssh, sftp, scp, rsync, etc. commands where you connect to another machine which can be in your Local Area Network (LAN) or at some remote location you would typically connect via the Wide Area Network (WAN) or greater internet.
For instance, if you were to use each of these two commands,
ssh user@publicIP.com
ssh user@192.168.0.100
whereby you connect namely to a remote machine or a LAN machine respectively, does the actual connection path change if you are in the same LAN? Note publicIP.com represents the IP address or domain name that applies to both the host and the machine executing this command.
As an example, consider the case you are at home and have two machines connected to the internet through the same router. I would expect the second command to send data from machine1-->router-->machine2. Does the first command do the same or does it do machine1-->router-->some remote path-->router-->machine2? And in the second case, will this contribute to the bandwidth your ISP monitors and caps?
|
First, your router is not just a router; it is also an ethernet switch, a DHCP server, a wifi hotspot, a modem, …
Second, traffic is routed the best way: if both machines are on the same subnet 192.168.0.x, then it will be routed by the machines themselves and not go through the router (that is, not through the routing part of the router, just its ethernet switch).
What happens when you use a domain name e.g. publicIP.com
First the name is looked up: this may be done using /etc/hosts, bonjour/avahi, DNS, or other resolver. (This step may involve asking a public DNS server, so some public traffic. But it is cached for several minutes.)
Then an attempt is made to connect to the ip address.
e.g. If we do ssh user@publicIP.com and the DNS A record of publicIP.com is 192.168.0.100, then the DNS lookup returns 192.168.0.100. Then ssh makes the connection ssh user@192.168.0.100, which is therefore routed the same as if you had specified 192.168.0.100.
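You can check for yourself which address a name resolves to, and therefore which path ssh will take, before connecting. A small sketch; localhost is used here only because it resolves everywhere, so substitute your own hostname:

```shell
# Resolve a name the same way client programs do (NSS: /etc/hosts, DNS, ...)
getent hosts localhost

# Capture just the first resolved address for scripting
ip=$(getent ahosts localhost | awk 'NR==1{print $1}')
echo "resolved to: $ip"
```

If the printed address falls inside your LAN's subnet, the connection never leaves your LAN.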
A note on http
In http the original name is also passed to the server (after the connection is made); this is sometimes used to distinguish which virtual server to connect to (at the same IP address).
| If you ssh to another device in your LAN, can you accidentally connect through a less direct pathway (WAN)? |
1,459,275,092,000 |
I have installed LAN messenger on linux mint.
It was installed successfully but was not running.
So I tried "sudo lmc" in a terminal, and then it gave the following error in a popup window:
A port address conflict has been detected. LAN Messenger will close now.
|
The following command in terminal worked for me.
lmc /noconfig
| A port address conflict has been detected. LAN Messenger will close now |
1,540,996,919,000 |
On a separate machine I've set up a www server based on Debian 8 for testing purposes. It also has Samba and a Git server installed. Today we got a public IP from our ISP and I redirected port 80 to our LAN server machine.
LAN SERVER [*.*.0.200:8010] <= INSIDE ROUTER [*.*.1.100] (:80 redirection to *.0.200:8010 ) <= OUTSIDE ROUTER(:80 redirection to *.*.1.100:80)
Our project already has some authorisation and login mechanisms applied, but it is all at a very early stage (pre-alpha), so we are worried about security. But the server needs to be exposed for our boss (for progress tracking) and for people who work in offices in another city (testers), who are the target audience of this project.
So we thought about setting up a VPN tunnel for outside users to our LAN server. I already read some articles, like this one...
https://www.digitalocean.com/community/tutorials/how-to-set-up-an-openvpn-server-on-debian-8
And now we (me, because I will have to configure it) are thinking about setting up redirections this way
LAN SERVER [*.*.0.200:8010] <= VPN Server [*.*.1.210] (:80 redirection to *.0.200:8010 ) <= INSIDE ROUTER [*.*.1.100] (:80 redirection to *.0.210:80 ) <= OUTSIDE ROUTER(:80 redirection to *.*.1.100:80)
where the VPN server will be either a separate machine or a virtual machine with its own IP, running on the LAN SERVER machine. All outside traffic will have to pass through the VPN, but LAN users can directly access the server via the 0.200:8010 address.
What do you think about this idea?
|
OK, my idea was wrong because I hadn't fully understood how VPNs work, especially the fact that OpenVPN creates its own interface, so there is no need to create another one. I went through the whole tutorial as it was written, only changing minor things, and it worked.
I recommend the Hak5 video, which is very similar to (or, strangely, the same as) the tutorial above, but a little more detailed.
https://www.youtube.com/watch?v=XcsQdtsCS1U
ADVICE: In my opinion it is better, after installation, to begin with generating the certificate and server key and then configure the server and firewall, because then you have a better understanding of the server.conf file.
| Securing LAN www server with OpenVPN? |
1,540,996,919,000 |
I have connected 2 PCs using an ethernet cable and set up FTP on one of them to transfer some >100GB of files. However, when trying to download, I run into a problem: the speed is no more than 50kB/s. It happens whether I download through Nautilus or Filezilla.
However, if I try to download a large file using Google Chrome, it downloads at speed around 50MB/s, which is pretty good. But Chrome cannot download directories.
What can be a solution to either speed up LAN or download a directory through Chrome?
UPD: I tried to create a torrent and send it that way, but it's no better, stays around 100kB/s...
UPD1: I changed the cable and it didn't change anything; also, the transfer stops completely if I turn on WiFi in parallel with the cable.
UPD2: I found a suggestion to edit /etc/default/grub to disable IPv6, but it didn't help either.
A small detail: both sender and receiver file systems are NTFS, does it make a difference?
|
I found it. The problem was not in the connection at all; it was in the file system. I was trying to copy from NTFS to NTFS. When I formatted the receiving FS to ext4, the speed increased to 40+MB/sec.
| Super-slow LAN speed, unless downloading through Chrome's FTP client [closed] |
1,540,996,919,000 |
BACKGROUND DETAILS:
I've inherited a QNAP TS-459U-RP.
I currently have it connected to my home router via an ethernet/rj45 cable. My computer is connected to the same router. How do I access the QNAP from my computer?
If I open nautilus and go to + Other Locations, I now see 2 new options under Networks which only become visible if the QNAP is connected to the router and switched on. The 2 options which become available in + Other locations are:
NASC3C6BC(FTP)
NASC3C6BC(SAMBA)
THE MESSAGES:
When I click on the 2 options within Nautilus, I get the following messages:
For the FTP option I get Opening "qnap-001.local (ftp) You can stop this operation by clicking cancel."
Nothing happens if I do not click Cancel.
For the SAMBA option I get Unable to access location - Failed to retrieve share list from server: Connection timed out
I then Get an OK button.
THE RESET BUTTON:
I do not know the username and password. I have checked the manual and it says that I should press the reset button at the back for 3 seconds to reset the admin password and that after 3 seconds it will make a beeping sound. When I press the reset button at the back for 3 seconds or longer, it makes no beeping sound.
|
You need to configure the QNAP using its web GUI. There are lots of manuals, tutorials, etc. at https://qnap.com. Default username is admin, possibly with password admin.
To find its IP address you can either use the QNAP Finder tool (Windows, downloadable from QNAP's website), or look at whatever issues DHCP addresses on your network.
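If the Windows-only finder tool is not an option, the NAS can also be located from Linux by scanning the local subnet (a sketch; adjust 192.168.1.0/24 to your LAN's actual range):

```shell
# Ping-scan the subnet; the NAS should show up as a live host
sudo nmap -sn 192.168.1.0/24

# Alternatively, inspect the neighbour (ARP) cache for recently seen hosts
ip neigh show
```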
| How do I connect to a QNAP [closed] |
1,540,996,919,000 |
I have two network interfaces on a client PC, one wired, one wireless. The
wired is connected directly to a server PC running DHCP, which gives the
client PC a 10.0... address. The client PC also connects to a wireless
router, which gives it a 192.168... address. This was all set up and
detected automatically by NetworkManager. I am able to ping the server PC
and wireless router.
The problem is I want to use the wired connection as a LAN only, and the
wireless for WAN access to the wider Internet. But NetworkManager thinks the
opposite, and tries to use the LAN to go to the outside Internet. If I
unplug the cable it correctly uses wireless. But I'd like to have them both
connected at the same time.
I'm running Debian unstable if that matters.
Is there some way to configure this?
|
Ok, I figured it out by looking at the answer to this question:
NetworkManager changes default routing policy
To summarize:
Open up NetworkManager's graphical connection editor
$ nm-connection-editor
In the GUI:
Click on "Wired connection 1".
Click on the gear button for settings.
Click on the "IPv4 Settings" tab.
Click on the "Routes..." button.
Check the "Use this connection only for resources on its network" box.
| Set up wired ethernet LAN to coexist with wireless WAN using NetworkManager |
1,540,996,919,000 |
I'm running the latest version of Pop!_OS.
I have a laptop connected to my main computer via an ethernet cable.
The laptop IP is 169.254.83.40 and my main computer IP is 169.254.83.50 on the connected interface. My wifi interface is 192.168.0.20 on the main computer.
When I am connected to wifi and I attempt to ping my laptop from my main computer I get the following.
PING 169.254.83.40 (169.254.83.40) 56(84) bytes of data.
From 192.168.0.20 icmp_seq=1 Destination Host Unreachable
From 192.168.0.20 icmp_seq=2 Destination Host Unreachable
From 192.168.0.20 icmp_seq=3 Destination Host Unreachable
When disconnected from wifi I can ping successfully. Below is my output from ifconfig.
enp3s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.0.20 netmask 255.255.255.0 broadcast 192.168.0.255
ether e8:4e:06:7d:d7:8f txqueuelen 1000 (Ethernet)
RX packets 38305 bytes 36156135 (36.1 MB)
RX errors 0 dropped 1 overruns 0 frame 0
TX packets 26255 bytes 3680006 (3.6 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
enp4s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 169.254.83.50 netmask 255.0.0.0 broadcast 169.255.255.255
ether a8:a1:59:2b:6c:ee txqueuelen 1000 (Ethernet)
RX packets 1294 bytes 87685 (87.6 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1818 bytes 121833 (121.8 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 12029 bytes 1257930 (1.2 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 12029 bytes 1257930 (1.2 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
netstat -nr
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 192.168.0.1 0.0.0.0 UG 0 0 0 enp3s0
0.0.0.0 169.254.83.50 0.0.0.0 UG 0 0 0 enp4s0
169.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 enp4s0
169.0.0.0 169.254.83.50 255.0.0.0 UG 0 0 0 enp4s0
169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 enp3s0
192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 enp3s0
|
The reason that you are seeing this is that the wifi connection added a route that was more specific than the one on your wired connection (I've notated the important stuff with asterisks):
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 192.168.0.1 0.0.0.0 UG 0 0 0 enp3s0
0.0.0.0 169.254.83.50 0.0.0.0 UG 0 0 0 enp4s0
**169.0.0.0 0.0.0.0 255.0.0.0 U 0 0 0 enp4s0
169.0.0.0 169.254.83.50 255.0.0.0 UG 0 0 0 enp4s0
**169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 enp3s0
192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 enp3s0
(enp4s0 is wired, enp3s0 is wireless)
The route that your wired network takes to get to 169.254.83.40 is 169.0.0.0/255.0.0.0 (aka /8), while the wifi network adds a route that is 169.254.0.0/255.255.0.0 (aka /16). The /16 is considered more specific because it refers to a smaller network, so it takes priority over the wired network.
As to how to fix the problem, I'd check the wireless network configuration or DHCP server to see why the additional routes are being pushed. In the meantime, you can manually remove the erroneous route with
ip route del 169.254.0.0/16 dev enp3s0
but that isn't a real fix, just a band-aid.
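Rather than reading the whole table by eye, you can also ask the kernel directly which route it would pick for a given destination with ip route get (a diagnostic sketch; loopback is queried here only because it exists on every machine, so substitute the laptop's 169.254.83.40 to see which interface actually wins):

```shell
# Ask the kernel which route it would select for a destination
# (e.g.: ip route get 169.254.83.40)
ip route get 127.0.0.1
# loopback traffic is always reported via the "lo" device
```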
| When connected to Wifi I am unable to ping directly connected computer |
1,540,996,919,000 |
I have a measurement device (an Agilent frequency counter) that I want to communicate with directly via it's LAN interface. However, my Ubuntu 20.04 laptop does not seem to want to establish a connection. I'm afraid there is some basic networking concept that I am not applying.
What I've tried: I've tested the device's LAN connection with a couple Windows PC's, and they connected (nearly) automatically. Curiously, when connected to my Ubuntu machine, the device still self-reports a fully functional LAN connection, and even displays its DHCP-assigned IP address. However, my laptop's wired network status oscillates between off and "Connecting", but never connects. Pinging the device's IP results in 100% loss. Yet more curiously, I am able to successfully connect to the device through a Windows VM on my Ubuntu machine.
My only lead is that the ip route show line for the device's network prefix is:
169.254.0.0/16 dev wlp0s20f3 scope link metric 1000
where I would expect the ethernet device here, not the wlan.
Any pointers are very appreciated, as are resources related to basic networking that would help me here.
|
Turn DHCP off on your device, and assign the device the address 192.168.2.2 with subnet 255.255.255.0 and gateway 192.168.2.1. Then on your Ubuntu machine, create a new manual connection without dhcp for your Ethernet port, and set the address to 192.168.2.3, with subnet 255.255.255.0 and gateway 192.168.2.1. You should now be able to ping your device from the computer.
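The Ubuntu side of this manual setup can also be done from the command line with nmcli instead of the GUI (a sketch; the interface name enp3s0 and the profile name instrument are placeholders, check yours with nmcli device):

```shell
# List interfaces to find the wired port's name
nmcli device

# Create a static-IP profile for it, no DHCP
nmcli connection add type ethernet ifname enp3s0 con-name instrument \
    ipv4.method manual \
    ipv4.addresses 192.168.2.3/24 \
    ipv4.gateway 192.168.2.1
nmcli connection up instrument

# The device (configured as 192.168.2.2) should now answer
ping -c 3 192.168.2.2
```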
Check out pages 30-33 in this link for more details.
| Trouble connecting to a LAN device |
1,540,996,919,000 |
What is the advantage of configuring the network with a UUID?
uuidgen eno093736
1b64d026-c6a6-447c-85aa-40c5d2db2c4e
For example, we want to configure a bond under /etc/sysconfig/network-scripts.
What is the benefit of configuring the bond with a UUID?
UUID=ef23ca7b-93d6-4b40-9fa0-4a9b208914e50
|
That's because if you rescan your NIC ports or update your OS, the device names may change. It's like filesystems: in fstab you should always enter the UUID of a LUN, because sometimes after a reboot the LUN names will change, but the UUIDs of ports or LUNs never change.
| what is the advantage to configure the network ETH with UUID? |
1,540,996,919,000 |
I removed gnome (running bare openbox) and now LAN and WLAN both do not work anymore.
I used this to set up my wifi before, but now it does not work anymore, and insmod tells me that the file already exists, even though I did not insert the module yet:
How to check if USB WiFi-Adapter is not working or router is out of range?
For LAN, it could be caused by a cable defect, but that is pretty unlikely. Unfortunately, I cannot test that right now.
Anyone has any idea on how to fix this?
(Sorry for the bad writing, forced to type on a tablet right now which is a pain in the neck.)
|
Check all interfaces by running ifconfig -a and then run the ifup command on disabled interfaces, e.g. ifup eth0
| No internet connection after uninstalling gnome [Fedora + Openbox] |
1,540,996,919,000 |
My goal is to setup a firewall & Intrusion Prevention system using Snort. I have a spare pc available with at least 2 physical NIC's, which ran pfSense having a firewall with Snort, but this time I want to do the setup myself.
So far I managed to install Debian 9 as a headless system with ssh login (and if really needed I could add a keyboard and screen temporary).
I wanted to start with just a firewall, without Snort.
How to I achieve the following:
- is it possible to put the firewall just in between my ISP cable modem router and my LAN? The ISP router has DHCP/NAT enabled, which I can't turn off.
- I want to achieve a "plug&play" firewall that I could just put in between, without turning it into a double NAT (which I had before using pfSense). I mean, if possible I don't want to have different networks, e.g. a 192.168.x.x one and, for example, a 10.x.x.x one.
- the firewall is headless, logging in via ssh
Internet
WAN
|
|
ISP Cable Modem & Router with DHCP
gateway 192.168.0.1
|
|
[eth0]
Firewall
[eth1]
| ________ Wireless AP
| /
|_____ Switch__/_________ PC1
\
\________ ...
I tried to setup a bridge on br0 (via /etc/network/interfaces) adding eth0 and eth1. The bridge had an IP address and it worked fine, where I could still connect to the internet from devices behind the switch via the AP.
So I learned bridges don't care about IP addresses... which doesn't sound good for building a firewall with, eventually, Snort (IPS).
I've read about iptables and using the "physical dev".
Maybe I'm force to do double NAT and setup routing?
The problem is I don't know enough to know what is best and how to go about it. Sure, I've googled (a lot) and found, for example on aboutdebian.org, articles about proxy/NAT and firewalling... but most articles assume you have a modem only, whereas I can't turn off DHCP nor can I configure its range. It's always the full 255.255.255.0 range.
|
Seems I've found a working solution... maybe trivial once you know it, but keep in mind I didn't know Linux nor much networking. So, here's what I learned:
- you need to use a bridge if you want "plug&play", because it just passes traffic. You could set up a router, but then what comes behind the firewall would need a different LAN (e.g. 10.x.x.x instead of 192.168.x.x). I would also end up with double NAT and would need to run a DHCP server to provide all devices behind the router/firewall with an IP address. So, that's why I went with a bridge: no need to change the existing setup, just put the bridge in between.
Now, getting firewalling to work on a bridge can be done using iptables. Since a bridge doesn't look at level 3 (IP), but only at level 2 (MAC address/ethernet frame), I've found that using the iptables extension "physdev" is needed. The man page about it gave me some info.
So far I was able to block a ping or ports 80, 443, etc. just for testing... but it proves this approach would work out OK. It is important to use the FORWARD chain. For example:
iptables -A FORWARD -m physdev --physdev-in eth0 --physdev-out eth1 -p icmp --icmp-type echo-request -j DROP
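One caveat worth checking before relying on such rules: bridged frames only traverse the iptables FORWARD chain when the kernel's bridge netfilter hook is active. A sketch of enabling it (and keeping it enabled) on Debian 9:

```shell
# Make iptables see bridged traffic
sudo modprobe br_netfilter
sudo sysctl net.bridge.bridge-nf-call-iptables=1

# Persist across reboots
echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
echo 'net.bridge.bridge-nf-call-iptables = 1' | sudo tee /etc/sysctl.d/99-bridge.conf
```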
Next things to find out:
- how to block IPv6... not sure if I need to add rules via ip6tables or just disable it altogether on the host. In my internal LAN only IPv4 addresses would be needed. Would I miss out on anything if I blocked/didn't use IPv6?
- check out ebtables
- get into Snort
... but I feel I got where I wanted to be.
| How to setup a firewall between my ISP cable modem/router and my LAN? |
1,327,678,484,000 |
A huge (up to 2 GiB) text file of mine contains about 100 exact duplicates of every line in it (useless in my case, as the file is a CSV-like data table).
What I need is to remove all the repetitions while (preferably, but this can be sacrificed for a significant performance boost) maintaining the original sequence order. In the result each line is to be unique. If there were 100 equal lines (usually the duplicates are spread across the file and won't be neighbours) there is to be only one of the kind left.
I have written a program in Scala (consider it Java if you don't know about Scala) to implement this. But maybe there are faster C-written native tools able to do this faster?
UPDATE: the awk '!seen[$0]++' filename solution seemed to work just fine for me as long as the files were near 2 GiB or smaller, but now that I have to clean up an 8 GiB file it doesn't work any more. It seems to take forever on a Mac with 4 GiB RAM, and a 64-bit Windows 7 PC with 4 GiB RAM and 6 GiB swap just runs out of memory. And I don't feel enthusiastic about trying it on Linux with 4 GiB RAM given this experience.
|
An awk solution seen on #bash (Freenode):
awk '!seen[$0]++' filename
If you want to edit the file in-place, you can use the following command (provided that you use a GNU awk version that implements this extension):
awk -i inplace '!seen[$0]++' filename
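As a quick sanity check (with made-up sample lines), the awk filter keeps only the first occurrence of each line and preserves the original order:

```shell
# Hypothetical sample input; duplicates are spread around, as in the question
printf 'b\na\nb\nc\na\n' > /tmp/dedup-demo.txt
awk '!seen[$0]++' /tmp/dedup-demo.txt
# prints:
# b
# a
# c
```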
| How to remove duplicate lines inside a text file? |
1,327,678,484,000 |
Say I have a huge text file (>2GB) and I just want to cat the lines X to Y (e.g. 57890000 to 57890010).
From what I understand I can do this by piping head into tail or vice versa, i.e.
head -A /path/to/file | tail -B
or alternatively
tail -C /path/to/file | head -D
where A,B,C and D can be computed from the number of lines in the file, X and Y.
But there are two problems with this approach:
You have to compute A,B,C and D.
The commands could pipe to each other many more lines than I am interested in reading (e.g. if I am reading just a few lines in the middle of a huge file)
Is there a way to have the shell just work with and output the lines I want? (while providing only X and Y)?
|
I suggest the sed solution, but for the sake of completeness,
awk 'NR >= 57890000 && NR <= 57890010' /path/to/file
To stop processing after the last wanted line:
awk 'NR < 57890000 { next } { print } NR == 57890010 { exit }' /path/to/file
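With a toy input (seq 100 standing in for the huge file, and a small line range for illustration), the early-exit variant prints just the requested lines and stops reading:

```shell
# prints lines 5 through 8 and exits without reading the rest of the input
seq 100 | awk 'NR < 5 { next } { print } NR == 8 { exit }'
# prints:
# 5
# 6
# 7
# 8
```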
Speed test (here on macOS, YMMV on other systems):
100,000,000-line file generated by seq 100000000 > test.in
Reading lines 50,000,000-50,000,010
Tests in no particular order
real time as reported by bash's builtin time
4.373 4.418 4.395 tail -n+50000000 test.in | head -n10
5.210   5.179   6.181  sed -n '50000000,50000010p;50000010q' test.in
5.525 5.475 5.488 head -n50000010 test.in | tail -n10
8.497 8.352 8.438 sed -n '50000000,50000010p' test.in
22.826 23.154 23.195 tail -n50000001 test.in | head -n10
25.694 25.908 27.638 ed -s test.in <<<"50000000,50000010p"
31.348  28.140  30.574  awk 'NR<50000000{next}1;NR==50000010{exit}' test.in
51.359  50.919  51.127  awk 'NR >= 50000000 && NR <= 50000010' test.in
These are by no means precise benchmarks, but the difference is clear and repeatable enough* to give a good sense of the relative speed of each of these commands.
*: Except between the first two, sed -n p;q and head|tail, which seem to be essentially the same.
| cat line X to line Y on a huge file |
1,327,678,484,000 |
I have a huge (70GB), one line, text file and I want to replace a string (token) in it.
I want to replace the token <unk>, with another dummy token (glove issue).
I tried sed:
sed 's/<unk>/<raw_unk>/g' < corpus.txt > corpus.txt.new
but the output file corpus.txt.new has zero-bytes!
I also tried using perl:
perl -pe 's/<unk>/<raw_unk>/g' < corpus.txt > corpus.txt.new
but I got an out of memory error.
For smaller files, both of the above commands work.
How can I replace a string in such a file?
This is a related question, but none of the answers worked for me.
Edit:
What about splitting the file in chunks of 10GBs (or whatever) each and applying sed on each one of them and then merging them with cat? Does that make sense? Is there a more elegant solution?
|
The usual text processing tools are not designed to handle lines that don't fit in RAM. They tend to work by reading one record (one line), manipulating it, and outputting the result, then proceeding to the next record (line).
If there's an ASCII character that appears frequently in the file and doesn't appear in <unk> or <raw_unk>, then you can use that as the record separator. Since most tools don't allow custom record separators, swap between that character and newlines. tr processes bytes, not lines, so it doesn't care about any record size. Supposing that ; works:
<corpus.txt tr '\n;' ';\n' |
sed 's/<unk>/<raw_unk>/g' |
tr '\n;' ';\n' >corpus.txt.new
You could also anchor on the first character of the text you're searching for, assuming that it isn't repeated in the search text and it appears frequently enough. If the file may start with unk>, change the sed command to sed '2,$ s/… to avoid a spurious match.
<corpus.txt tr '\n<' '<\n' |
sed 's/^unk>/raw_unk>/g' |
tr '\n<' '<\n' >corpus.txt.new
Alternatively, use the last character.
<corpus.txt tr '\n>' '>\n' |
sed 's/<unk$/<raw_unk/g' |
tr '\n>' '>\n' >corpus.txt.new
Note that this technique assumes that sed operates seamlessly on a file that doesn't end with a newline, i.e. that it processes the last partial line without truncating it and without appending a final newline. It works with GNU sed. If you can pick the last character of the file as the record separator, you'll avoid any portability trouble.
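A round-trip on a tiny made-up "one-line" input shows the idea: swap ; and newline, let sed work on the now-short lines, then swap back:

```shell
# hypothetical one-line input containing the token and a few ';' separators
printf 'x;y <unk> z;w\n' |
  tr '\n;' ';\n' |
  sed 's/<unk>/<raw_unk>/g' |
  tr '\n;' ';\n'
# prints: x;y <raw_unk> z;w
```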
| Replace string in a huge (70GB), one line, text file |
1,327,678,484,000 |
I have a fairly large file (35Gb), and I would like to filter this file in situ (i.e. I don't have enough disk space for another file), specifically I want to grep and ignore some patterns — is there a way to do this without using another file?
Let's say I want to filter out all the lines containing foo: for example...
|
At the system call level this should be possible. A program can open your target file for writing without truncating it and start writing what it reads from stdin. When reading EOF, the output file can be truncated.
Since you are filtering lines from the input, the output file write position should always be less than the read position. This means you should not corrupt your input with the new output.
However, finding a program that does this is the problem. dd(1) has the option conv=notrunc that does not truncate the output file on open, but it also does not truncate at the end, leaving the original file contents after the grep contents (with a command like grep pattern bigfile | dd of=bigfile conv=notrunc)
Since it is very simple from a system call perspective, I wrote a small program and tested it on a small (1MiB) full loopback filesystem. It did what you wanted, but you really want to test this with some other files first. It's always going to be risky overwriting a file.
overwrite.c
/* This code is placed in the public domain by camh */
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <errno.h>
int main(int argc, char **argv)
{
int outfd;
char buf[1024];
int nread;
off_t file_length;
if (argc != 2) {
fprintf(stderr, "usage: %s <output_file>\n", argv[0]);
exit(1);
}
if ((outfd = open(argv[1], O_WRONLY)) == -1) {
perror("Could not open output file");
exit(2);
}
while ((nread = read(0, buf, sizeof(buf))) > 0) {
if (write(outfd, buf, nread) == -1) {
perror("Could not write to output file");
exit(4);
}
}
if (nread == -1) {
perror("Could not read from stdin");
exit(3);
}
if ((file_length = lseek(outfd, 0, SEEK_CUR)) == (off_t)-1) {
perror("Could not get file position");
exit(5);
}
if (ftruncate(outfd, file_length) == -1) {
perror("Could not truncate file");
exit(6);
}
close(outfd);
exit(0);
}
You would use it as:
grep pattern bigfile | overwrite bigfile
I'm mostly posting this for others to comment on before you try it. Perhaps someone else knows of a program that does something similar that is more tested.
| Is there a way to modify a file in-place? |
1,327,678,484,000 |
I have two big files (6GB each). They are unsorted, with linefeeds (\n) as separators. How can I diff them? It should take under 24h.
|
The most obvious answer is just to use the diff command and it is probably a good idea to add the --speed-large-files parameter to it.
diff --speed-large-files a.file b.file
You mention unsorted files so maybe you need to sort the files first
sort a.file > a.file.sorted
sort b.file > b.file.sorted
diff --speed-large-files a.file.sorted b.file.sorted
you could avoid creating an extra output file by piping the second sort's output directly into diff
sort a.file > a.file.sorted
sort b.file | diff --speed-large-files a.file.sorted -
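A tiny worked example of the sort-then-diff approach, using made-up two-line files:

```shell
printf 'b\na\n' > a.file
printf 'c\na\n' > b.file
sort a.file > a.file.sorted
sort b.file | diff --speed-large-files a.file.sorted -
# prints:
# 2c2
# < b
# ---
# > c
```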
Obviously these will run best on a system with plenty of available memory and you will likely need plenty of free disk space too.
It wasn't clear from your question whether you have tried these before. If so then it would be helpful to know what went wrong (took too long etc.). I have always found that
the stock sort and diff commands tend to do at least as well as custom commands unless
there are some very domain specific properties of the files that make it possible to
do things differently.
| Diffing two big text files |
1,327,678,484,000 |
I tried it with SCP, but it says "Negative file size".
>scp matlab.iso xxx@xxx:/matlab.iso
matlab.iso: Negative file size
Also tried using SFTP, worked fine until 2 GB of the file had transferred, then stopped:
sftp> put matlab.iso
Uploading matlab.iso to /home/x/matlab.iso
matlab.iso -298% 2021MB -16651.-8KB/s 00:5d
o_upload: offset < 0
Any idea what could be wrong? Don't SCP and SFTP support files that are larger than 2 GB? If so, then how can I transfer bigger files over SSH?
The destination file system is ext4. The Linux distribution is CentOS 6.5. The filesystem currently has (accessible) large files on it (up to 100 GB).
|
The original problem (based on reading all comments to the OP question) was that the scp executable on the 64-bit system was a 32-bit application. A 32-bit application that isn't compiled with "large-file support" ends up with seek pointers that are limited to 2^31 ≈ 2 GB (signed 32-bit file offsets), which matches where the transfer failed.
You may tell if scp is 32-bit by using the file command:
file `which scp`
On most modern systems it will be 64-bit, so no file truncation would occur:
$ file `which scp`
/usr/bin/scp: ELF 64-bit LSB shared object, x86-64 ...
A 32-bit application should still be able to support "large files", but it has to be compiled from source with large-file support, which in this case apparently wasn't done.
The recommended solution is perhaps to use a full standard 64-bit distribution where apps are compiled as 64-bit by default.
| Transferring large (8 GB) files over ssh |
1,327,678,484,000 |
Is it useful to use the -T largefile flag when creating a filesystem for a partition with big files, like video and audio in FLAC format?
I tested the same partition with that flag and without it, and using tune2fs -l [partition], I checked in "Filesystem features" that both have "large_file" enabled. So, is it not necessary to use the -T largefile flag?
|
The -T largefile flag adjusts the amount of inodes that are allocated at the creation of the file system. Once allocated, their number cannot be adjusted (at least for ext2/3, not fully sure about ext4). The default is one inode for every 16K of disk space. -T largefile makes it one inode for every megabyte.
Each file requires one inode. If you don't have any inodes left, you cannot create new files. But these statically allocated inodes take space, too. You can expect to save around 1.5 gigabytes for every 100 GB of disk by setting -T largefile, as opposed to the default. -T largefile4 (one inode per 4 MB) does not have such a dramatic effect.
If you are certain that the average size of the files stored on the device will be above 1 megabyte, then by all means, set -T largefile. I'm happily using it on my storage partitions, and think that it is not too radical of a setting.
However, if you unpack a very large source tarball of many files (think hundreds of thousands) to that partition, you have a chance of running out of inodes for that partition. There is little you can do in that situation, apart from choosing another partition to untar to.
You can check how many inodes you have available on a live filesystem with the dumpe2fs command:
# dumpe2fs /dev/hda5
[...]
Inode count: 98784
Block count: 1574362
Reserved block count: 78718
Free blocks: 395001
Free inodes: 34750
Here, I can still create 34 thousand files.
Here's what I got after doing mkfs.ext3 -T largefile -m 0 on a 100-GB partition:
Filesystem 1M-blocks Used Available Use% Mounted on
/dev/loop1 102369 188 102181 1% /mnt/largefile
/dev/loop2 100794 188 100606 1% /mnt/normal
The largefile version has 102,400 inodes while the normal one created 6,553,600 inodes, and saved 1.5 GB in the process.
If you have a good clue on what size files you are going to put on the file system, you can fine-tune the amount of inodes directly with the -i switch. It sets the bytes per inode ratio. You would gain 75% of the space savings if you used -i 65536 while still being able to create over a million files. I generally calculate to keep at least 100 000 inodes spare.
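The arithmetic behind those numbers can be checked quickly (assuming 256-byte inodes, the ext3/ext4 default; ext2 historically used 128 bytes):

```shell
disk=$((100 * 1024 * 1024 * 1024))         # 100 GB partition
echo $((disk / (16 * 1024)))               # default ratio: 6553600 inodes
echo $((disk / (1024 * 1024)))             # -T largefile:  102400 inodes
# space saved by the lower inode count, in bytes (~1.5 GB):
echo $(( (disk / (16 * 1024) - disk / (1024 * 1024)) * 256 ))
```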
| largefile feature at creating file-system |
1,327,678,484,000 |
I have a few files sized > 1 GB each. I need to remove last few bytes from the files. How can I do it? I prefer to edit file in place to save disk space.
I am on HP-UX.
|
Try using hexedit. I haven't tried it on HP-UX but it should work. It allows you to move to a location in a file and truncate. I'm pretty sure that it does not read the whole file in but just seeks to the appropriate location for display.
Usage is fairly simple: once you have launched it, the arrow keys let you move around. F1 gives help. Ctrl-G moves to a location in the file (hint: to move to the end, use the size of the file from the bottom row of the display). Position the cursor on the first byte that you want to truncate, then press Escape T; once you confirm, the truncation will be done. Ctrl-X exits.
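Where GNU coreutils happens to be available (it is not part of stock HP-UX, so treat this only as a sketch, with a hypothetical file name), the same thing can be done non-interactively with truncate(1), which edits in place without a second copy:

```shell
# remove the last 10 bytes of bigfile in place
f=bigfile
size=$(wc -c < "$f")
truncate -s "$((size - 10))" "$f"
```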
| How can I edit a large file in place? |
1,327,678,484,000 |
I have a 900GB ext4 partition on a (magnetic) hard drive that has no defects and no bad sectors. The partition is completely empty except for an empty lost+found directory. The partition was formatted using default parameters except that I set the number of reserved filesystem blocks to 1%.
I downloaded the ~900MB file xubuntu-15.04-desktop-amd64.iso to the partition's mount point directory using wget. When the download was finished, I found that the file was split into four fragments:
filefrag -v /media/emma/red/xubuntu-15.04-desktop-amd64.iso
Filesystem type is: ef53
File size of /media/emma/red/xubuntu-15.04-desktop-amd64.iso is 1009778688 (246528 blocks of 4096 bytes)
ext: logical_offset: physical_offset: length: expected: flags:
0: 0.. 32767: 34816.. 67583: 32768:
1: 32768.. 63487: 67584.. 98303: 30720:
2: 63488.. 96255: 100352.. 133119: 32768: 98304:
3: 96256.. 126975: 133120.. 163839: 30720:
4: 126976.. 159743: 165888.. 198655: 32768: 163840:
5: 159744.. 190463: 198656.. 229375: 30720:
6: 190464.. 223231: 231424.. 264191: 32768: 229376:
7: 223232.. 246527: 264192.. 287487: 23296: eof
/media/emma/red/xubuntu-15.04-desktop-amd64.iso: 4 extents found
Thinking this might be releated to wget somehow, I removed the ISO file from the partition, making it empty again, then I copied the ~700MB file v1.mp4 to the partition using cp. This file was fragmented too. It was split into three fragments:
filefrag -v /media/emma/red/v1.mp4
Filesystem type is: ef53
File size of /media/emma/red/v1.mp4 is 737904458 (180153 blocks of 4096 bytes)
ext: logical_offset: physical_offset: length: expected: flags:
0: 0.. 32767: 34816.. 67583: 32768:
1: 32768.. 63487: 67584.. 98303: 30720:
2: 63488.. 96255: 100352.. 133119: 32768: 98304:
3: 96256.. 126975: 133120.. 163839: 30720:
4: 126976.. 159743: 165888.. 198655: 32768: 163840:
5: 159744.. 180152: 198656.. 219064: 20409: eof
/media/emma/red/v1.mp4: 3 extents found
Why is this happening? And is there a way to prevent it from happening? I thought ext4 was meant to be resistant to fragmentation. Instead I find that it immediately fragments a solitary file when all the rest of the volume is unused. This seems to be worse than both FAT32 and NTFS.
|
3 or 4 fragments in a 900 MB file is very good. Fragmentation becomes a problem when a file of that size has more like 100+ fragments. It isn't uncommon for FAT or NTFS to fragment such a file into several hundred pieces.
You generally won't see better than that at least on older ext4 filesystems because the maximum size of a block group is 128 MB, and so every 128 MB the contiguous space is broken by a few blocks for the allocation bitmaps and inode tables for the next block group. A more recent ext4 feature called flex_bg allows packing a number of ( typically 16 ) block groups' worth of these tables together, leaving longer runs of allocatable blocks but depending on your distribution and what version of e2fsprogs was used to format it, this option may not have been used.
You can use tune2fs -l to check the features enabled when your filesystem was formatted.
| Why are these files in an ext4 volume fragmented? |
1,327,678,484,000 |
I have a directory with about 100000 small files (each file is from 1-3 lines, each file is a text file). In size the directory isn't very big (< 2GB). This data lives in a professionally administered NFS server. The server runs Linux. I think the filesystem is ext3, but I don't know for sure. Also, I don't have root access to the server.
These files are the output of a large scale scientific experiment, over which I don't have control. However, I have to analyze the results.
Any I/O operation/processing in this directory is very, very slow. Opening a file (fopen in python), reading from an open file, closing a file, are all very slow. In bash ls, du, etc. don't work.
The question is:
What is the maximum number of files in a directory in Linux in such a way that it is practical to do processing, fopen, read, etc? I understand that the answer depends on many things: fs type, kernel version, server version, hardware, etc. I just want a rule of thumb, if possible.
|
As you surmise, it does depend on many things, mostly the filesystem type and options and to some extent the kernel version. In the ext2/ext3/ext4 series, there was a major improvement when the dir_index option appeared (some time after the initial release of ext3): it makes directories be stored as search trees (logarithmic time access) rather than linear lists (linear time access). This isn't something you can see over NFS, but if you have some contact with the admins you can ask them to run tune2fs -l /dev/something |grep features (perhaps even convince them to upgrade?). Only the number of files matters, not their size.
Even with dir_index, 100000 feels large. Ideally, get the authors of the program that creates the files to add a level of subdirectories. For no performance degradation, I would recommend a limit of about 1000 files per directory for ext2 or ext3 without dir_index and 20000 with dir_index or reiserfs. If you can't control how the files are created, move them into separate directories before doing anything else.
| Number of files per directory |
1,327,678,484,000 |
I have a log of 55GB in size.
I tried:
cat logfile.log | tail
But this approach takes a lot of time. Is there any way to read huge files faster or any other approach?
|
cat logfile.log | … here is superfluous, and actually contributes to it being slow. tail logfile.log without the cat would make a lot more sense!
It's going to be much faster, because when the input is not seekable, what tail needs to do is read all the standard input, line by line, keep the last 10 lines around in a buffer (in case they should be the last 10 lines); and having the input come from a pipe through your cat mechanism makes sure it's not seekable.
That is slow, and unless a line in your file can have gigabytes in size, pretty stupid: just skip the first 54.9 GB. The remaining 100 MB will certainly not be less than the last 10 lines! And getting the last 10 lines from 100 MB should be fast enough.
tail --bytes 100M logfile.log | tail
However, if you're using the GNU coreutils¹ implementation of tail, it already does this (i.e., it seeks to the end of the file minus 2.5 kB, and looks from there). By not abusing cat here but letting tail read the file itself (or just using redirection, which works the same!), you get a much faster result.
¹ GNU coreutils and modern busybox are the two implementations of tail that I've checked; both do this. Stéphane points out below that even the original 1970s PWB Unix implementation does this – but it's still merely an implementation detail.
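A toy version of the skip-most-of-the-file idea, with seq 100 standing in for the 55 GB log and 40 bytes standing in for the 100 MB tail:

```shell
seq 100 > test.log                 # 292-byte stand-in for the huge log
tail -c 40 test.log | tail -n 3    # read only the last 40 bytes
# prints:
# 98
# 99
# 100
```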
| How do I read the last lines of a huge log file? |
1,327,678,484,000 |
I have ~30k files. Each file contains ~100k lines. A line contains no spaces. The lines within an individual file are sorted and duplicate free.
My goal: I want to find all duplicate lines across two or more files, and also the names of the files that contained duplicated entries.
A simple solution would be this:
cat *.words | sort | uniq -c | grep -v -F '1 '
And then I would run:
grep 'duplicated entry' *.words
Do you see a more efficient way?
|
Since all input files are already sorted, we may bypass the actual sorting step and just use sort -m for merging the files together.
On some Unix systems (to my knowledge only Linux), it may be enough to do
sort -m *.words | uniq -d >dupes.txt
to get the duplicated lines written to the file dupes.txt.
To find what files these lines came from, you may then do
grep -Fx -f dupes.txt *.words
This will instruct grep to treat the lines in dupes.txt (-f dupes.txt) as fixed string patterns (-F). grep will also require that the whole line matches perfectly from start to finish (-x). It will print the file name and the line to the terminal.
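A minimal illustration of the whole pipeline, with two made-up, pre-sorted word files:

```shell
printf 'apple\nbanana\n' > a.words
printf 'banana\ncherry\n' > b.words
sort -m a.words b.words | uniq -d > dupes.txt
cat dupes.txt                        # prints: banana
grep -Fx -f dupes.txt a.words b.words
# prints:
# a.words:banana
# b.words:banana
```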
Non-Linux Unices (or even more files)
On some Unix systems, 30000 file names will expand to a string that is too long to pass to a single utility (meaning sort -m *.words will fail with Argument list too long, which it does on my OpenBSD system). Even Linux will complain about this if the number of files is much larger.
Finding the dupes
This means that in the general case (this will also work with many more than just 30000 files), one has to "chunk" the sorting:
rm -f tmpfile
find . -type f -name '*.words' -print0 |
xargs -0 sh -c '
if [ -f tmpfile ]; then
sort -o tmpfile -m tmpfile "$@"
else
sort -o tmpfile -m "$@"
fi' sh
Alternatively, creating tmpfile without xargs:
rm -f tmpfile
find . -type f -name '*.words' -exec sh -c '
if [ -f tmpfile ]; then
sort -o tmpfile -m tmpfile "$@"
else
sort -o tmpfile -m "$@"
fi' sh {} +
This will find all files in the current directory (or below) whose names matches *.words. For an appropriately sized chunk of these names at a time, the size of which is determined by xargs/find, it merges them together into the sorted tmpfile file. If tmpfile already exists (for all but the first chunk), this file is also merged with the other files in the current chunk. Depending on the length of your filenames, and the maximum allowed length of a command line, this may require more or much more than 10 individual runs of the internal script (find/xargs will do this automatically).
The "internal" sh script,
if [ -f tmpfile ]; then
sort -o tmpfile -m tmpfile "$@"
else
sort -o tmpfile -m "$@"
fi
uses sort -o tmpfile to output to tmpfile (this won't overwrite tmpfile even if this is also an input to sort) and -m for doing the merge. In both branches, "$@" will expand to a list of individually quoted filenames passed to the script from find or xargs.
Then, just run uniq -d on tmpfile to get all line that are duplicated:
uniq -d tmpfile >dupes.txt
If you like the "DRY" principle ("Don't Repeat Yourself"), you may write the internal script as
if [ -f tmpfile ]; then
t=tmpfile
else
t=/dev/null
fi
sort -o tmpfile -m "$t" "$@"
or
t=tmpfile
[ ! -f "$t" ] && t=/dev/null
sort -o tmpfile -m "$t" "$@"
Where did they come from?
For the same reasons as above, we can't use grep -Fx -f dupes.txt *.words to find where these duplications came from, so instead we use find again:
find . -type f -name '*.words' \
-exec grep -Fx -f dupes.txt {} +
Since there is no "complicated" processing to be done, we may invoke grep directly from -exec. The -exec option takes a utility command and will place the found names in {}. With + at the end, find will place as many arguments in place of {} as the current shell supports in each invocation of the utility.
To be totally correct, one may want to use either
find . -type f -name '*.words' \
-exec grep -H -Fx -f dupes.txt {} +
or
find . -type f -name '*.words' \
-exec grep -Fx -f dupes.txt /dev/null {} +
to be sure that filenames are always included in the output from grep.
The first variation uses grep -H to always output matching filenames. The last variation uses the fact that grep will include the name of the matching file if more than one file is given on the command line.
This matters since the last chunk of filenames sent to grep from find may actually only contain a single filename, in which case grep would not mention it in its results.
Bonus material:
Dissecting the find+xargs+sh command:
find . -type f -name '*.words' -print0 |
xargs -0 sh -c '
if [ -f tmpfile ]; then
sort -o tmpfile -m tmpfile "$@"
else
sort -o tmpfile -m "$@"
fi' sh
find . -type f -name '*.words' will simply generate a list of pathnames from the current directory (or below) where each pathname is that of a regular file (-type f) that has a filename component at the end matching *.words. If only the current directory is to be searched, one may add -maxdepth 1 after the ., before -type f.
-print0 will ensure that all found pathnames are output with a \0 (nul) character as delimiter. This is a character that is not valid in a Unix path and it enables us to process pathnames even if they contain newline characters (or other weird things).
find pipes its output to xargs.
xargs -0 will read the \0-delimited list of pathnames and will execute the given utility repeatedly with chunks of these, ensuring that the utility is executed with just enough arguments to not cause the shell to complain about a too long argument list, until there is no more input from find.
The utility invoked by xargs is sh with a script given on the command line as a string using its -c flag.
When invoking sh -c '...some script...' with arguments following, the arguments will be available to the script in $@, except for the first argument, which will be placed in $0 (this is the "command name" that you may spot in e.g. top if you are quick enough). This is why we insert the string sh as the first argument after the end of the actual script. The string sh is a dummy argument and could be any single word (some seem to prefer _ or sh-find).
| How to find duplicate lines in many large files? |
1,327,678,484,000 |
I have a 250 MB text file, all in one line.
In this file I want to replace a characters with b characters:
sed -e "s/a/b/g" < one-line-250-mb.txt
It fails with:
sed: couldn't re-allocate memory
It seems to me that this kind of task could be performed inline without allocating much memory.
Is there a better tool for the job, or a better way to use sed?
GNU sed version 4.2.1
Ubuntu 12.04.2 LTS
1 GB RAM
|
Yes, use tr instead:
tr 'a' 'b' < file.txt > output.txt
sed deals in lines so a huge line will cause it problems. I expect it is declaring a variable internally to hold the line and your input exceeds the maximum size allocated to that variable.
tr on the other hand deals with characters and should be able to handle arbitrarily long lines correctly.
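Since tr streams byte by byte, line length is irrelevant; a quick check on a small "one-line" input:

```shell
printf 'abracadabra' | tr 'a' 'b'
# prints: bbrbcbdbbrb
```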
| Basic sed command on large one-line file: couldn't re-allocate memory |
1,327,678,484,000 |
In the same spirit as this other question: cat line X to line Y on a huge file:
Is there a way to open from within Emacs (and
show in a buffer) a given set of lines (e.g. all lines between line X and Y) from a huge text file?
E.g. Open and show in a buffer all lines between lines 57890000 and 57890010 from file huge.txt
Update:
I am interested in a solution that at least can open the lines in read-only (just for display purposes), although it would be great if I can also edit the lines (and save to the original file).
|
If you want to open the whole file (which requires enough memory), but show only part of it in the editor window, use narrowing. Select the part of the buffer you want to work on and press C-x n n (narrow-to-region). Say “yes” if you get a prompt about a disabled command. Press C-x n w (widen) to see the whole buffer again. If you save the buffer, the complete file is saved: all the data is still there, narrowing only restricts what you see.
If you want to view a part of a file, you can insert it into the current buffer with shell-command with a prefix argument (M-1 M-!); run the appropriate command to extract the desired lines, e.g. <huge.txt tail -n +57890000 | head -n 11.
There is also a Lisp function insert-file-contents which can take a byte range. You can invoke it with M-: (eval-expression):
(insert-file-contents "huge.txt" nil 456789000 456791000)
Note that you may run into the integer size limit (version- and platform-dependent, check the value of most-positive-fixnum).
In theory it would be possible to write an Emacs mode that loads and saves parts of files transparently as needed (though the limit on integer sizes would make using actual file offsets impossible on 32-bit machines). The only effort in that direction that I know of is VLF.
| Emacs: Open a buffer with all lines between lines X to Y from a huge file |
1,327,678,484,000 |
We would like to understand how copytruncate works when rotating files with logrotate, using the configuration below:
/app/syslog-ng/custom/output/all_devices.log {
size 200M
copytruncate
dateext
dateformat -%Y%m%d-%s
rotate 365
sharedscripts
compress
postrotate
/app/syslog-ng/sbin/syslog-ng-ctl reload
endscript
}
RHEL 7.x, 8GB RAM, 4 VCpu
Question:
How does logrotate truncate the file when syslog-ng has already opened it for logging? Isn't that a contention for the resource? Does syslog-ng close the file immediately when it has nothing to log?
|
Truncating a logfile actually works because the writers open the files for writing using O_APPEND.
From the open(2) man page:
O_APPEND: The file is opened in append mode. Before each write(2), the
file offset is positioned at the end of the file, as if with lseek(2).
The modification of the file offset and the write operation are
performed as a single atomic step.
As mentioned, the operation is atomic, so whenever a write is issued, it will append to the current offset matching the end of file, not the one saved before the previous write operation completed.
This makes an append work after a truncate operation, writing the next log line to the beginning of the file again, without the need to reopen the file.
(The same feature of O_APPEND also makes it possible to have multiple writers appending to the same file, without clobbering each other's updates.)
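This behaviour is easy to demonstrate from the shell, since >> opens with O_APPEND; here the writer keeps one file descriptor open across the truncation (file names are made up):

```shell
cd "$(mktemp -d)"
exec 3>>app.log          # open once with O_APPEND, like a long-running logger
echo 'first line' >&3
: > app.log              # what copytruncate effectively does to the file
echo 'second line' >&3   # lands at offset 0, not at the stale offset
cat app.log              # prints: second line
exec 3>&-                # close the writer's descriptor
```

Without O_APPEND, the second write would land at the old offset, leaving a hole of NUL bytes at the start of the file.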
The loggers also write a log line using a single write(2) operation, to prevent a log line from being broken in two during a truncate or concurrent write operation.
Note that loggers like syslog, syslog-ng or rsyslog typically don't need copytruncate, since they have support for reopening their log files, usually on receiving a SIGHUP. logrotate's copytruncate exists to cater for other loggers that append to logfiles but don't necessarily have a good way to reopen them (so rotation by renaming doesn't work in those cases).
Please note also that copytruncate has an inherent race condition: it's possible that the writer appends a line to the logfile just after logrotate finishes the copy and before it issues the truncate operation. That race condition would lose those lines of log forever. That's why rotating logs with copytruncate is usually not recommended, unless it's the only possible way to do it.
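The effect is easy to see with two shell redirections, since `>>` opens with O_APPEND (the file name below is just a throwaway example):

```shell
# Demonstrate that a write after truncation lands at offset 0 again
# when the writer's fd was opened with O_APPEND (as ">>" does).
log=/tmp/appendtest.log
rm -f "$log"

exec 3>>"$log"          # keep one append-mode fd open, like a long-lived logger

echo "line 1" >&3       # written at offset 0
echo "line 2" >&3       # appended at end of file

: > "$log"              # the "copytruncate" step: file shrinks to 0 bytes

echo "line 3" >&3       # O_APPEND seeks to the new EOF (0) before writing

cat "$log"              # prints only: line 3
exec 3>&-
```

Without O_APPEND the writer would keep its old offset and the file would get a hole of NUL bytes before "line 3".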
| How copytruncate actually works? |
1,327,678,484,000 |
I need to copy one very large file (3TB) on the same machine from one external drive to another. This might take (because of low bandwidth) many days.
So I want to be prepared when I have to interrupt the copying and resume it after, say, a restart.
From what I've read I can use
rsync --append
for this (with rsync version>3). Two questions about the --append flag here:
Do I use rsync --append for all invocations? (For the first invocation when no interrupted copy on the destination drive yet exists and for the subsequent invocations when there is an interrupted copy at the destination.)
Does rsync --append resume for the subsequent invocations the copying process without reading all the already copied data? (In other words: Does rsync mimic a dd-style seek-and-read operation?)
|
Do I use rsync --append for all invocations?
Yes, you would use it each time (the first time there is nothing to append, so it's a no-op; the second and subsequent times it's actioned). But do not use --append at all unless you can guarantee that the source is unchanged from the previous run (if any), because it turns off the checking of what has previously been copied.
Does rsync --append resume for the subsequent invocations… without reading all the already copied data?
Yes, but without --partial, rsync would probably have first deleted the target file.
The correct invocation would be something like this:
rsync -a -vi --append --inplace --partial --progress /path/to/source/ /path/to/target
You could remove --progress if you didn't want to see a progress indicator, and -vi if you are less bothered about a more informational result (you'll still get told if it succeeds or fails). You may see -P used in other situations: this is the same as --partial --progress and can be used for that here too.
--append to continue after a restart without checking previously transferred data
--partial to keep partially transferred files
--inplace to force the update to be in-place
If you are in any doubt at all that the source might have changed since the first attempt at rsync, use the (much) slower --append-verify instead of --append. Or better still, remove the --append flag entirely and let rsync delete the target and start copying it again.
| Is rsync --append able to resume an interrupted copy process without reading all the copied data? |
1,327,678,484,000 |
I have a large folder with 30M small files. I hope to backup the folder into 30 archives, each tar.gz file will have 1M files. The reason to split into multi archives is that to untar one single large archive will take month.. pipe tar to split also won't work because when untar the file, I have to cat all archives together.
Also, I hope not to mv each file to a new dir, because even ls is very painful for this huge folder.
|
I wrote this bash script to do it.
It basically forms an array containing the names of the files to go into each tar, then starts tar in parallel on all of them.
It might not be the most efficient way, but it will get the job done as you want.
I can expect it to consume large amounts of memory though.
You will need to adjust the options in the start of the script.
You might also want to change the tar options cvjf in the last line (like removing the verbose output v for performance or changing compression j to z, etc ...).
Script
#!/bin/bash
# User configuration
#===================
files=(*.log) # Set the file pattern to be used, e.g. (*.txt) or (*)
num_files_per_tar=5 # Number of files per tar
num_procs=4 # Number of tar processes to start
tar_file_dir='/tmp' # Tar files dir
tar_file_name_prefix='tar' # prefix for tar file names
tar_file_name="$tar_file_dir/$tar_file_name_prefix"
# Main algorithm
#===============
num_tars=$((${#files[@]}/num_files_per_tar)) # the number of tar files to create
tar_files=() # will hold the names of files for each tar
tar_start=0 # gets update where each tar starts
# Loop over the files, adding their names to be tarred
for i in `seq 0 $((num_tars-1))`
do
tar_files[$i]="$tar_file_name$i.tar.bz2 ${files[@]:tar_start:num_files_per_tar}"
tar_start=$((tar_start+num_files_per_tar))
done
# Start tar in parallel for each of the strings we just constructed
printf '%s\n' "${tar_files[@]}" | xargs -n$((num_files_per_tar+1)) -P$num_procs tar cjvf
Explanation
First, all the file names that match the selected pattern are stored in the array files. Next, the for loop slices this array and forms strings from the slices. The number of the slices is equal to the number of the desired tarballs. The resulting strings are stored in the array tar_files. The for loop also adds the name of the resulting tarball to the beginning of each string. The elements of tar_files take the following form (assuming 5 files/tarball):
tar_files[0]="tar0.tar.bz2 file1 file2 file3 file4 file5"
tar_files[1]="tar1.tar.bz2 file6 file7 file8 file9 file10"
...
The last line of the script, xargs is used to start multiple tar processes (up to the maximum specified number) where each one will process one element of tar_files array in parallel.
Test
List of files:
$ls
a c e g i k m n p r t
b d f h j l o q s
Generated Tarballs:
$ls /tmp/tar*
tar0.tar.bz2 tar1.tar.bz2 tar2.tar.bz2 tar3.tar.bz2
| how to create multi tar archives for a huge folder |
1,327,678,484,000 |
I need to view a large (50000x40000 px) png image on Linux. Unfortunately most tools (eog, convert etc.) either crashes or fail with note about too little memory.
Is there a way to view this image (I would prefer to see both the resized image and details)?
|
I would try viewing it in gimp. Should be in your distros' repositories, main website's here. Lots of tutorial are available through a simple google search.
When I tried to open your image size I needed to up Gimp's default paging limit so that it could accommodate it. It's under the menu Edit -> Preferences.
If Gimp can't handle the image or you want something lighter then you might want to try feh. Feh's main web site is here. Again should be in repositories. You can run it from the terminal like this:
feh -F <image>
This will size it to fit the screen.
| Viewing large image on Linux |
1,327,678,484,000 |
Here is what I do right now,
sort -T /some_dir/ --parallel=4 -uo file_sort.csv -k 1,3 file_unsort.csv
the file is 90GB,I got this error message
sort: close failed: /some_dir/sortmdWWn4: Disk quota exceeded
Previously, I didn't use the -T option and apparently the tmp dir is not large enough to handle this. My current dir has free space of roughly 200GB. Is it still not enough for the sorting temp file?
I don't know if the parallel option affect things or not.
|
The problem is that you seem to have a disk quota set up and your user doesn't have the right to take up so much space in /some_dir. And no, the --parallel option shouldn't affect this.
As a workaround, you can split the file into smaller files, sort each of those separately and then merge them back into a single file again:
## split the file into 100M pieces named fileChunkNNNN
split -b100M file fileChunk
## Sort each of the pieces and delete the unsorted one
for f in fileChunk*; do sort "$f" > "$f".sorted && rm "$f"; done
## merge the sorted files
sort -T /some_dir/ --parallel=4 -muo file_sort.csv -k 1,3 fileChunk*.sorted
The magic is GNU sort's -m option (from info sort):
‘-m’
‘--merge’
Merge the given files by sorting them as a group. Each input file
must always be individually sorted. It always works to sort
instead of merge; merging is provided because it is faster, in the
case where it works.
That will require you to have ~180G free for a 90G file in order to store all the pieces. However, the actual sorting won't take as much space since you're only going to be sorting in 100M chunks.
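As a tiny illustration of the merge step (file names and contents are throwaway examples), two already-sorted chunks combine in a single pass without a full re-sort:

```shell
# Two pre-sorted chunks, as produced by the per-chunk sort step above.
printf 'a\nc\ne\n' > /tmp/chunk1.sorted
printf 'b\nd\nf\n' > /tmp/chunk2.sorted

# -m merges the sorted inputs instead of sorting from scratch.
sort -m /tmp/chunk1.sorted /tmp/chunk2.sorted
# prints a, b, c, d, e, f on separate lines
```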
| Sort large CSV files (90GB), Disk quota exceeded |
1,327,678,484,000 |
I have two partial disk images from a failing hard drive. File B contains the bulk of the disk's contents, with gaps where sector reads failed. File A is the result of telling ddrescue to retry all the failed sectors, so it is almost entirely gaps, but contains a few places where rereads succeeded. I now need to merge the interesting contents of File A back into File B. The algorithm is simple:
while not eof(A):
read 512 bytes from A
if any of them are nonzero:
seek to corresponding offset in B
write bytes into B
and I could sit down and write this myself, but I would first like to know if someone else has already written and debugged it.
(To complicate matters, due to limited space, File B and File A are on two different computers -- this is why I didn't just tell ddrescue to attempt to fill in the gaps in B in the first place -- but A can be transferred over the network relatively easily, being sparse.)
|
Your algorithm is implemented in GNU dd.
dd bs=512 if=A of=B conv=sparse,notrunc
Please verify this beforehand with some test files of your choice. You don't want to inadvertently damage your File B. A better algorithm would be to check whether B also has zeroes at that position, alas that's something dd does not do.
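One way to convince yourself on throwaway files first (contents and names below are invented for the demo; `bs=1` keeps it tiny — use 512 for the real images):

```shell
# B holds the bulk data; A is the overlay with one recovered byte amid zeroes.
printf 'BBBB' > /tmp/B
printf '\000A\000\000' > /tmp/A

# conv=sparse seeks over all-NUL input blocks instead of writing them;
# conv=notrunc keeps the rest of B intact.
dd bs=1 if=/tmp/A of=/tmp/B conv=sparse,notrunc 2>/dev/null

cat /tmp/B    # prints: BABB
```

Only the non-zero byte of A overwrote B; the zero blocks were skipped, exactly as the algorithm in the question requires.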
As for two different computers, you have several options. Use a network filesystem that supports seeks on writes (not all do); transfer the file beforehand; or pipe through SSH like so:
dd if=A | ssh -C B-host dd of=B conv=sparse,notrunc
# or the other way around
ssh -C A-host dd if=A | dd of=B conv=sparse,notrunc
The ssh -C option enables compression, you'd be transferring gigabytes of zeroes over the network otherwise.
| Merge nonzero blocks of huge (sparse) file A into huge file B |
1,327,678,484,000 |
How do I delete this large directory?
stat session/
File: ‘session/’
Size: 321540096 Blocks: 628040 IO Block: 4096 directory
Device: 903h/2307d Inode: 11149319 Links: 2
Access: (0755/drwxr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2022-09-29 14:34:40.910894275 +0200
Modify: 2022-09-29 14:35:09.598400050 +0200
Change: 2022-09-29 14:35:09.598400050 +0200
Birth: -
Note that the size of directory (not the content, but the directory entry itself) is over 300MB.
Number of inodes is over 11 million.
The directory has no subdirectories, only large number of files.
None of the usual commands work. I have tried these:
rsync -a --delete empty_dir/ session/
rm -rf session
find . -type f -delete
If I run ls -f1 inside, it hangs.
If I run mv -- * ../.tmp_to_delete inside, it hangs.
If I run du inside, it hangs.
At the moment the rsync --delete is running since two days, reading at the rate of up to 7MB/s, and I see no change in the stat output for the directory.
I assume the large size of the directory is the problem.
|
Solved: 4 days later the rsync finished the job - all files have been deleted - but it needed at least 2 days to read directory info, before deleting a single file.
This is information for anyone who might have a similar problem: use screen and be patient.
| Delete huge directory that causes all commands to hang |
1,327,678,484,000 |
I have a large tar file (60GB) containing image files. I'm using mmap() on this entire file to read in these images, which are accessed randomly.
I'm using mmap() for the following reasons:
Thread safety -- I cannot seek an ifstream from multiple threads.
I can avoid extra buffering.
I get some caching (in the form of a requested page already being resident.)
The question is what happens when I've read every image in that 60GB file? Certainly not all
of the images are being used at once -- they're read, displayed, and then discarded.
My mmap() call is:
mmap(0, totalSize, PROT_READ, MAP_SHARED | MAP_NORESERVE, fd, 0);
Here's the question: does the kernel see that I've mapped read-only pages backed by a file and simply purges the unused pages on memory pressure? I'm not sure if this case is recognized. Man pages indicate that MAP_NORESERVE will not require backing swap space, but there doesn't seem to be any guarantee of what happens to the pages under memory pressure. Is there any guarantee that the kernel will purge my unneeded pages before it, say, purges the filesystem cache or OOM's another process?
Thanks!
|
A read-only mmap is largely equivalent to open followed by lseek and read. If a chunk of memory that's mapped in a process is backed up by a file, the copy in RAM is considered part of the disk cache, and will be freed under memory pressure, just like a disk cache entry created by reading from a file.
I haven't checked the source, but I believe MAP_NORESERVE makes no difference for read-only mappings.
| Behavior of mmap'd memory on memory pressure |
1,327,678,484,000 |
I have FILE_A which has over 300K lines and FILE_B which has over 30M lines.
I created a bash script that greps each line in FILE_A over in FILE_B and writes the result of the grep to a new file.
This whole process is taking over 5+ hours.
I'm looking for suggestions on whether you see any way of improving the performance of my script.
I'm using grep -F -m 1 as the grep command.
FILE_A looks like this:
123456789
123455321
and FILE_B is like this:
123456789,123456789,730025400149993,
123455321,123455321,730025400126097,
So with bash I have a while loop that picks the next line in FILE_A and greps for it in FILE_B. When the pattern is found in FILE_B, I write it to result.txt.
while read -r line; do
grep -F -m1 $line 30MFile
done < 300KFile
Thanks a lot in advance for your help.
|
The key to performance is reading the huge file only once.
You can pass multiple patterns to grep by putting them on separate lines. This is usually done by telling grep to read patterns from a file:
grep -F -f 300KFile 30MFile
This outputs the matches in the order of the large file, and prints lines that match multiple patterns only once. Furthermore, this looks for patterns anywhere in the line; for example, if the pattern file contains 1234, then lines such as 123456,345678,2348962342 and 478912,1211138,1234 will match.
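A small reproduction of that caveat (file names and contents invented for the demo):

```shell
# One pattern, and a "big file" with two lines containing it as a substring.
printf '1234\n' > /tmp/patterns.txt
printf '123456,345678,2348962342\nabc,def,ghi\n478912,1211138,1234\n' > /tmp/big.txt

# -F treats patterns as fixed strings matched anywhere in the line,
# so both the leading-field and trailing-field lines match:
grep -F -f /tmp/patterns.txt /tmp/big.txt
# prints the 123456,... line and the 478912,...,1234 line
```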
You can restrict to exact column matches by preprocessing the pattern. For example, if the patterns do not contain any special character ()?*+\|[]{}:
<300KFile sed -e 's/^/(^|,)/' -e 's/$/($|,)/' |
grep -E -f - 30MFile
If retaining only the first match for each pattern is important, make a first pass to extract only the relevant lines as above, then do a second pass in awk or perl that tracks patterns that have already been seen.
<300KFile sed -e 's/^/(^|,)/' -e 's/$/($|,)/' |
grep -E -f - 30MFile |
perl -l -F, -ape '
BEGIN {
open P, "300KFile" or die;
%patterns = map {chomp; $_=>1} <P>;
close P;
}
foreach $c (@F) {
if ($patterns{$c}) {
print;
delete $patterns{$c};
}
}
'
| Grepping over a huge file performance |
1,327,678,484,000 |
I'm looking for an editor that will open the file in chunks (not try to read the whole file into memory) as I'm trying to hand-edit a 200G file.
|
This may not be exactly what you're looking for, but hexedit will open large files like that. It only has what's displayed in the buffer (and maybe a little bit more) in memory. It's made for editing raw disk files (e.g., /dev/sda) and will display in hex side by side with ASCII or EBCDIC.
| What linux editor can open a 200G file for editing within a minute or two? |
1,327,678,484,000 |
I have a file that is 12 GB that I am trying to copy from a MacBook Air to a Debian computer, using a USB. I tried formatting the USB in many different ways, such as NTFS, FAT32, OS X Journalizing but either the MacBook Air complained it couldn't copy such a large file, it only had read-only access, or when I formatted it from the MacBook, the Linux computer couldn't recognize the file system.
Is there a file system type recognized by both systems that can be used to transfer large files?
|
HFS+ can handle large files, and Linux has write support for it.
Although macOS has only read-only NTFS support, there are third-party tools for read-write operations with it.
You can use split to split a large file into smaller ones, and later reassemble them with cat. To get better command-line tools than macOS ships with, you can use brew.
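A scaled-down sketch of the split/reassemble round trip (sizes shrunk for the demo; for a FAT32 stick you would pick a chunk size under its 4 GiB file-size limit, e.g. -b 2G):

```shell
# Stand-in for the large file.
head -c 1000000 /dev/urandom > /tmp/bigfile

# Cut it into 300 kB pieces named /tmp/bigfile.part_aa, _ab, ...
split -b 300000 /tmp/bigfile /tmp/bigfile.part_

# After copying the pieces across, reassemble them in glob (= alphabetical) order:
cat /tmp/bigfile.part_* > /tmp/bigfile.joined
cmp /tmp/bigfile /tmp/bigfile.joined && echo OK    # prints OK
```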
Nothing prevents you from partitioning a USB drive, except that Windows won't see the extra partitions, for its own hilarious reasons. Fortunately, your current setup doesn't involve Windows, so that's not a concern.
| What filesystem allows transferring files between Linux and OS X? |
1,327,678,484,000 |
Access logs are more or less sorted by time, but to aggregate connections by time (uniq -c), you need to sort them a bit more. For a huge access log sort is very inefficient, because it buffers and sorts the whole file before printing out anything.
Do you know of any option for sort, or a version of sort, that could sort only a given amount of lines at once, then print that block?
I have searched for the following keywords: "streaming sort", "block sort", "approximate sort". I have read the whole manual, to no avail. Setting the buffer size (-S) did not influence this.
|
Try split --filter:
split --lines 1000 --filter 'sort ... | sed ... | uniq -c' access.log
This will split access.log into chunks of 1000 lines and pipe each chunk through the given filter.
If you want to save the results for each chunk separately, you can use $FILE in the filter command and possibly specify a prefix (default is x):
split --lines 1000 --filter '... | uniq -c >$FILE' access.log myanalysis-
This will generate a file myanalysis-aa containing the result of processing the first chunk, myanalysis-ab for the second chunk, etc.
The --filter option to split was introduced in GNU coreutils 8.13 (released in September 2011).
| how to sort access log efficiently in blocks |
1,327,678,484,000 |
This is a follow up question from Sort large CSV files (90GB), Disk quota exceeded.
So now I have two CSV files sorted, as file1.csv and file2.csv
each CSV file has 4 columns, e.g.
file 1:
ID Date Feature Value
01 0501 PRCP 150
01 0502 PRCP 120
02 0501 ARMS 5.6
02 0502 ARMS 5.6
file 2:
ID Date Feature Value
01 0501 PRCP 170
01 0502 PRCP 120
02 0501 ARMS 5.6
02 0502 ARMS 5.6
Ideally, I want to diff the two files in such a way that if two rows in the two files have the same ID, Date and Feature, but different values, then output something like:
ID Date Feature Value1 Value2
Of course, this might be asking too much. Something like
ID1 Date1 Feature1 Value1 ID2 Date2 Feature2 Value2
also works.
In the above example, I would like to output
01 0501 PRCP 150 170
or
01 0501 PRCP 150 01 0501 PRCP 170
I think the main question is how to compare in such a way and how to output to a csv file. Thanks.
Sample output from Gilles' answer:
The output from comm is
$ head -20 comm_output.txt
ACW00011604,19490101,PRCP,0
AE000041196,20070402,TAVG,239
AE000041196,20070402,TAVG,244
AE000041196,20080817,TMIN,282
AE000041196,20130909,TAVG,350
AE000041196,20130909,TMAX,438
AE000041196,20130909,TMIN,294
AE000041196,20130910,TAVG,339
AE000041196,20130910,TAVG,341
AE000041196,20150910,TAVG,344
The output of awk is
$ head awk_output.csv
,
ACW00011604,19490101,PRCP,0,,,
AE000041196,20070402,TAVG,239,,,
AE000041196,20070402,TAVG,244,,,
AE000041196,20080817,TMIN,282,,,
AE000041196,20130909,TAVG,350,,,
AE000041196,20130909,TMAX,438,,,
AE000041196,20130909,TMIN,294,,,
AE000041196,20130910,TAVG,339,,,
AE000041196,20130910,TAVG,341,,,
AE000041196,20150910,TAVG,344,,,
Here is the sample input, if you insist
head file1.csv
ACW00011604,19490101,PRCP,0
ACW00011604,19490101,SNOW,0
ACW00011604,19490101,SNWD,0
ACW00011604,19490101,TMAX,289
ACW00011604,19490101,TMIN,217
ACW00011604,19490102,PRCP,30
ACW00011604,19490102,SNOW,0
ACW00011604,19490102,SNWD,0
ACW00011604,19490102,TMAX,289
ACW00011604,19490102,TMIN,228
head file2.csv
ACW00011604,19490101,SNOW,0
ACW00011604,19490101,SNWD,0
ACW00011604,19490101,TMAX,289
ACW00011604,19490101,TMIN,217
ACW00011604,19490102,PRCP,30
ACW00011604,19490102,SNOW,0
ACW00011604,19490102,SNWD,0
ACW00011604,19490102,TMAX,289
ACW00011604,19490102,TMIN,228
ACW00011604,19490102,WT16,1
|
Let's review tools that combine two files together line by line in some way:
paste combines two files line by line, without paying attention to the contents.
comm combines sorted files, paying attention to identical lines. This can weed out identical lines, but subsequently combining the differing line would require a different tool.
join combines sorted files, matching identical fields together.
sort can merge two files.
awk can combine multiple files according to whatever rules you give it. But with such large files, you're likely to get the best performance by using the most appropriate special-purpose tools rather than generalist ones.
I'll assume that there are no duplicates, i.e. within one file there are no two lines with the same ID, date and feature. If there are duplicates then how to cope with them depends on how you want to treat them. I also assume that the files are sorted, that your shell has process substitution (e.g. bash or ksh rather than plain sh), and that you have GNU coreutils (which is the case on non-embedded Linux and Cygwin).
I don't know if your separators are whitespace or tabs. I'll assume whitespace; if the separator is always exactly one tab then declaring tab as the separator character (cut -d $'\t', join -t $'\t', sort -t $'\t') and using \t instead of [ \t]\+ should squeeze a tiny bit of performance.
Set the locale to pure ASCII (LC_ALL=C) to avoid any performance loss related to multibyte characters.
Since join can only combine rows based on one field, we need to arrange for fields 1–3 to appear as a single field. To do that, change the separator, either between 1 and 2 and 2 and 3 or between 3 and 4. I'll change 1–3 to use ; instead of whitespace. That way you get all the line combinations, whether they're identical or not. You can then use sed to remove lines with identical values.
join -a 1 -a 2 <(sed 's/[ \t]\+/;/; s/[ \t]\+/;/' file1.csv) <(sed 's/[ \t]\+/;/; s/[ \t]\+/;/' file2.csv) |
sed '/[ \t]\(.*\)[ \t]\+\1$/d' |
tr ';' '\t'
Note that unpairable lines end up as a 4-column line with no indication as to whether they came from file 1 or file 2. Remove -a 1 -a 2 to suppress all unpairable lines.
If you have a majority of identical lines, this wastes time joining them and weeding them out. Another approach would be to use comm -3 to weed out the identical lines. This produces a single output stream where the lines are in order, but lines from file 2 have a leading tab. You can then use awk to combine consecutive lines where the two files have the same fields 1–3. Since this involves awk, it might well end up being slower if there are a lot of non-identical lines.
comm -3 file1.csv file2.csv |
awk '
$1 "\t" $2 "\t" $3 == k { if ($4 != v) print k "\t" v "\t" $4; next; }
{ print k "\t" v }
{ k=$1 "\t" $2 "\t" $3; v=$4; }
'
| diff two large CSV files (each 90GB) and output to another csv |
1,327,678,484,000 |
Essentially, I'm hoping someone with an advanced knowledge of tar/bz2 can answer whether this is possible.
The situation is that we have a periodic 24GB feed of data from a vendor, as a .tbz file. (tar+bzip2). This is downloaded from the vendor via curl. To dramatically speed up this slow process, I'd like to obtain:
A list of files contained in the .tbz file
the byte ranges of the specific files that we care about (a small subset of the whole archive).
Curl has the ability specify a byte range for downloading a file, so my hope is that if we download the first x bytes of a file, it might have an index of where we need to seek those relevant files from. From what I understand, tar itself has this information, but I'm not sure if the bzip2 compression allows for this in addition.
|
No.
Tar is a concatenation of files data, interleaved with files metadata (tar headers). That alone wouldn't necessarily be the dead end, since one could read the header, find out data length and (if the server allowed for that) skip to the next header (e.g. via the same functionality that allows to resume HTTP transmissions).
What really makes this difficult is the compression - the de-/compressed data usually depends on the preceding ones, thus on everything that precedes it. Now, for bzip2 everything is a block of 100kB to 900kB (with 100kB steps IIUC). Thus your algorithm would have to:
get the beginning of file;
read decompressed chunk length L from the header;
decompress the block - that means download the data as needed, until end of
the bz2 block is reached;
check the tar header and lengths H of the first file's tar header and D of its data;
skip to next file: either it is in the decoded block (H + D < L) or additional compressed data has to be fetched (H + D > L).
And this is exactly where it breaks - if I understand the bzip2 format correctly, the header doesn't contain the compressed block length (only the uncompressed one). Hence if you need to fetch another block, you can't really seek in the stream even if the underlying medium allowed you to.
Summary: if you can negotiate change of format to something that contains compressed block size in its header, it is solvable. On the other hand, one 24GB compressed tar file is a rather insane format for distribution of anything - it is one single-layer BD and I don't think a reasonable person would think of compressing contents to go onto a disc into a single file instead of splitting it into parts of at most 1-2 GB of size. So if negotiation is possible try asking about that (splitting into smaller pieces).
Another thing that could help you a little bit would be getting the list of files together with their sizes separately - that would allow you to make at least some guesses about what to download (and you could always fetch the blocks around it if needed). Such a list can be produced easily - just by redirecting tar's verbose listing (which goes to stderr when the archive goes to stdout) to a file:
tar cvvf - all_the_uncompressed_gigabytes 2>list.txt | bzip2 -9 > data.tar.bz2
| Is it possible to get file list, byte range from the head of a tar+bzip2 file? |
1,327,678,484,000 |
I am currently trying to remove all newlines that are not preceded by a closing parenthesis, so I came up with this expression:
sed -r -i -e ":a;N;$!ba;s/([^\)])\n/\1/g;d" reallyBigFile.log
It does the job on smaller files, but on this large file I am using (3GB), it works for a while then returns with an out of memory error:
sed: Couldn't re-allocate memory
Is there any way I could do this job without running into this issue. Using sed itself is not mandatory, I just want to get it done.
|
Your first three commands are the culprit:
:a
N
$!ba
This reads the entire file into memory at once. The following script should only keep one segment in memory at a time:
% cat test.sed
#!/usr/bin/sed -nf
# Append this line to the hold space.
# To avoid an extra newline at the start, replace instead of append.
1h
1!H
# If we find a paren at the end...
/)$/{
# Bring the hold space into the pattern space
g
# Remove the newlines
s/\n//g
# Print what we have
p
# Delete the hold space
s/.*//
h
}
% cat test.in
a
b
c()
d()
e
fghi
j()
% ./test.sed test.in
abc()
d()
efghij()
This awk solution will print each line as it comes, so it will only have a single line in memory at a time:
% awk '/)$/{print;nl=1;next}{printf "%s",$0;nl=0}END{if(!nl)print ""}' test.in
abc()
d()
efghij()
| Out of memory while using sed with multiline expressions on giant file |
1,327,678,484,000 |
The default reserved blocks percentage for ext filesystems is 5%. On a 4TB data drive this is 200GB which seems excessive to me.
Obviously this can be adjusted with tune2fs:
tune2fs -m <reserved percentage> <device>
however the man page for tune2fs states that one of the reasons for these reserved blocks is to avoid fragmentation.
So given the following (I have tried to be specific to avoid wildly varying opinions):
~4TB HDD
Used to store large files (all >500mb)
Once full, Very few writes (maybe once a month 1-5 files are replaced)
Data only (no OS or applications running from the drive)
Moderate reads (approx 20tb a week and the whole volume read every 3 months)
HDD wear is of concern and killing a HDD for the sake of saving 20GB is not the desired outcome (Is this even a concern?)
What is the maximum percentage that the drive can be filled to without causing (noticeable from a performance and/or hdd wear perspective) fragmentation?
Are there any other concerns with filling a large data hdd to a high percentage and/or setting the reserved blocks count to say 0.1%?
|
The biggest problem with fragmentation is free space fragmentation, which means that when your filesystem gets full and there are no longer big chunks of free space left, your filesystem performance falls off a cliff. Each new file can allocate only small chunks of space at a time, so is very fragmented. Even when other files are deleted, the previously written files are splattered all over the disk, causing new files to be fragmented again.
In the usage case you describe above (~500MB files, relatively few overwrites or new files being written, old ~500MB files being deleted periodically, I'm assuming some kind of video storage system) you will get relatively little fragmentation - assuming your file size remains relatively constant. This is especially true if your writes are single-threaded, since multiple write threads will not be competing for the small amount of free space and interleaving their block allocations. For every old file deleted from disk, you will get a few hundred MB of contiguous space (assuming the file was not fragmented to begin with), and it would be filled up again.
If you do have multiple concurrent writers, then using fallocate() to reserve large chunks of space for each file (and truncate() at the end to free up any remaining space) will avoid fragmentation as well. Even without this, ext4 will try to reserve (in memory) about 8MB of space for a file while it is being written, to avoid the worst fragmentation.
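The fallocate()/truncate() pattern from the previous paragraph, sketched in shell (file name and sizes are invented for the demo):

```shell
f=/tmp/capture.bin

# Reserve 8 MiB up front so the allocator can hand out a contiguous extent.
fallocate -l 8M "$f"

# ...the writer fills the file in place (one tiny write here as a stand-in)...
printf 'data' | dd of="$f" conv=notrunc 2>/dev/null

# Recording stopped after 3 MiB: release the unused tail of the reservation.
truncate -s 3M "$f"

stat -c %s "$f"    # prints 3145728
```

This way concurrent writers each grab their own large extent instead of interleaving small allocations in the shared free space.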
I'd recommend that you keep at least a decent multiple of your file size free (e.g. 16GB or more) so that you don't ever get to the point of consuming all the dribs and drabs of free blocks and introducing permanent free space fragmentation.
| Recommended maximum percentage to fill a large ext4 data drive |
1,327,678,484,000 |
The GNU parallel grepping n lines for m regular expressions example states the following:
If the CPU is the limiting factor parallelization should be done on
the regexps:
cat regexp.txt | parallel --pipe -L1000 --round-robin grep -f - bigfile
This will start one grep per CPU and read bigfile one time per CPU,
but as that is done in parallel, all reads except the first will be
cached in RAM
So in this instance GNU parallel round robins regular expressions from regex.txt over parallel grep instances with each grep instance reading bigfile separately. And as the documentation states above, disk caching probably ensures that bigfile is read from disk only once.
My question is this - the approach above appears to be seen as better performance-wise than another that involves having GNU parallel round robin records from bigfile over parallel grep instances that each read regexp.txt, something like
cat bigfile | parallel --pipe -L1000 --round-robin grep -f regexp.txt -
Why would that be? As I see it assuming disk caching in play, bigfile and regexp.txt would each be read from disk once in either case. The one major difference that I can think of is that the second approach involves significantly more data being passed through pipes.
|
It is due to GNU Parallel --pipe being slow.
cat bigfile | parallel --pipe -L1000 --round-robin grep -f regexp.txt -
maxes out at around 100 MB/s.
In the man page example you will also find:
parallel --pipepart --block 100M -a bigfile grep -f regexp.txt
which does close to the same, but maxes out at 20 GB/s on a 64 core system.
parallel --pipepart --block 100M -a bigfile -k grep -f regexp.txt
should give exactly the same result as grep -f regexp.txt bigfile
| GNU Parallel - grepping n lines for m regular expressions |
1,327,678,484,000 |
I am currently using rsync to copy a 73GB file from a Samsung portable SSD T7 to an HPC cluster.
rsync -avh path/to/dataset [email protected]:/path/to/dest
The following applies:
My local machine (where my T7 is connected) is a VirtualBox VM running Ubuntu 20.
The T7 transfer speeds should be up to approx. 1000MB/s.
Network gives me an approximate upload speed of 7.9Mbps.
Rsync transfer speed is probably bottlenecking this to 1-5MB/s according to this answer.
The problem is that the move is still not done after 9 hours. According to 1, using cp instead is better with an empty directory (for the first time). I do not understand this or whether it is actually true. Can someone explain this?
|
You say that,
Your network speed is approximately 8 Mb/s.
Your rsync runs at about 1-5 MB/s.
Given that 1 MB/s is approximately 10 Mb/s I'd say that rsync is doing you a big favour.
Me, I'd probably have added compression with -z, and since you're using rsync with sensible flags over a network connection it's probably worth interrupting it and restarting with compression. It'll just pick up where it left off.
Quick calculation: 73 GB is 73,000 MB*, which is 730,000 Mb‡. Approximately. You've got a network speed of 8 Mb/s, so that means the copy should take around 730,000/8 = 91,250 seconds. 25 hours, assuming theoretical maximum use of network bandwidth.
* Why 1000:1 instead of 1024:1? Partly because GB:MB strictly is 1000:1, but largely because this is an approximation calculation
‡ Why 10:1 instead of 8:1 as suggested by 8 bits to 1 byte? Two reasons: (a) it's an O(n) approximation, and (b) there's packet/protocol overhead to consider.
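The back-of-the-envelope arithmetic above can be reproduced in a couple of shell lines (same rough 1000:1 and 10:1 conversions):

```shell
size_gb=73      # file size in GB
link_mbps=8     # upload speed in Mb/s
# GB -> MB at 1000:1, MB -> Mb at ~10:1 (the extra 2 bits cover overhead)
seconds=$(( size_gb * 1000 * 10 / link_mbps ))
echo "~$(( seconds / 3600 )) hours"   # prints "~25 hours"
```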
Now to try and answer the question strictly as asked, which is, "Is cp faster than rsync during the first run?". If you're using cp you need to have something managing the transport between the local and remote servers; that could be something like sshfs or NFS, or alternatively you might mean scp.
cp over an NFS mounted filesystem. With a well-tuned network this is probably fairly efficient.
cp over sshfs. This will include not only encryption overheads of ssh but also the translation between kernel and userspace for the FUSE implementation of the filesystem. Inefficient.
scp (implicitly over ssh). This will include encryption overhead from ssh. Acceptable, and might further benefit from link compression (-C).
rsync (implicitly over ssh since we've not mentioned rsyncd). This will include encryption overhead from ssh. Acceptable, and might further benefit from protocol compression (-z).
I've not taken quantitative performance metrics; these are qualitative assessments. However, note the comments on a similar answer Why is scp so slow and how to make it faster?, although to be fair that's discussing comparative transfer rates of multiple files rather than one large one.
However, where rsync wins across a network connection is that it's restartable. With the right flags (i.e. --partial) even mid-way through a transfer. For a single 73GB file taking around 25 hours to transfer that's a massive potential advantage.
| Is cp faster than rsync during the first run? |
1,327,678,484,000 |
How can I edit a really big file with vi? e.g.: a 20 GByte log file. Are there any modifications for it to handle this size?
|
If you're using Vim on a 32-bit system, I don't think there is a way to make it do this. It has a hard file size limit based on the size of a 32-bit integer.
If you can do your editing on a 64-bit system, Vim's file size limit becomes something in the exabytes range.
Vim also has an arbitrary lower limit, over which it will page chunks of the file into RAM from its swap file. On the 64-bit CentOS 5.x system I'm currently using, the Vim default for this is approximately 3 GB. You can raise the maxmem and maxmemtot limits in order to avoid swapping, if you have enough real RAM to load the entire file. If you do not, you'll end up using the OS's general-purpose swap space instead, which probably won't be any faster. To be clear, you do not need to raise this limit. It just allows Vim to use more real RAM, if you have it.
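For example, a hedged sketch of raising those limits in a vimrc (values are in KiB — 4194304 KiB = 4 GiB — so adjust to the RAM you actually have; only useful on a 64-bit Vim):

```vim
" in ~/.vimrc
set maxmem=4194304      " per-buffer memory limit before paging to the swap file
set maxmemtot=4194304   " total memory limit across all buffers
```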
If you mean some other implementation of vi, please say which. File size limits are one of the areas where vi implementations differ.
| Edit really big files with vi |
1,327,678,484,000 |
My Uni account is over disk quota and requesting more quota takes time. Unfortunately, I need disk space now and I noticed that the Trash file in my home directory is quite a large file.
I’m assuming that this somehow relates to my mail inbox? (I do have some doubts, since although this file clearly contains email messages, tail Trash reveals that the last message is from 2006).
Furthermore, my mail client (connected via IMAP) reveals that my trash folder is empty.
Can I just delete the Trash file or do I need to fear dire consequences? The very first message in that file reads as follows:
From MAILER-DAEMON Sun Dec 3 13:40:15 2006
Date: 03 Dec 2006 13:40:15 +0100
From: Mail System Internal Data <MAILER-DAEMON@human>
Subject: DON'T DELETE THIS MESSAGE -- FOLDER INTERNAL DATA
Message-ID: <1165149615@human>
X-IMAP: 1135694114 0000000927 $NotJunk $Junk JunkRecorded
Status: RO
This text is part of the internal format of your mail folder, and is not
a real message. It is created automatically by the mail system software.
If deleted, important folder data will be lost, and it will be re-created
with the data reset to initial values.
Furthermore, there’s another file in my home folder called Deleted Messages – what’s the difference between the two?
|
This Trash file is unrelated to Postfix. It's also probably not what you see over IMAP: while the IMAP server could be configured to serve this file as a mail folder called Trash, it's likely that it's showing a directory called Trash near other directories that you see over IMAP.
The mail you show is an oddity of Pine. It's likely that you used Pine at some point, and that you or your administrator configured it to save deleted mails into this Trash file. Deleting this file is unlikely to cause any trouble (of course, make a backup on your own PC or wherever just in case one of your old deleted mails turned out to be important).
| Can I safely delete ~/Trash file? |
1,327,678,484,000 |
I have a file that is roughly 30GB. I am using,
awk ' $1 <= 2000 ' myfile.txt >> myfile_new.txt
But, I get this error after a while -
awk: read error (Bad address)
I guess this is because the file is too large. Is there any way to get past this? How else can I do this?
What I need to do is quite simple, I have to extract all records that have value less than 2000 in a column.
|
You're probably using mawk, which has some optimisations that can result in errors like this when dealing with particularly large data. gawk will likely not have these issues when running it in the same way.
| awk read error bad address |
1,389,235,186,000 |
As I know, the last command shows recent logins by all users. But my PC (CentOS) has been used for roughly a year, and many users have logged in. I tried to use the last command, but it just showed from June 2013 to September 2013.
My question is: how can I show the latest users that have logged in recently, say (December 2013 through Jan 2014)?
|
last reads from a log file, usually /var/log/wtmp, and prints the entries of successful login attempts made by users in the past. The output is such that the most recently logged-in user's entry appears at the top. In your case, perhaps the recent entries escaped notice because of this.
In order to check when was the file /var/log/wtmp was written last, you can use stat command:
stat /var/log/wtmp
In order to print the output in reverse, you can pipe the output of last to GNU tac (the opposite of cat) as follows:
last | tac
| "Last" command: How to show latest user login? |
1,389,235,186,000 |
As I was investigating a server that is rebooting in a regular fashion, I started looking through the "last" utility but the problem is that I am unable to find what the columns mean exactly. I have, of course, looked through the man but it does not contain this information.
root@webservice1:/etc# last reboot
reboot system boot 3.2.13-grsec-xxx Thu Apr 12 09:44 - 09:58 (00:13)
reboot system boot 3.2.13-grsec-xxx Thu Apr 12 09:34 - 09:43 (00:08)
reboot system boot 3.2.13-grsec-xxx Thu Apr 12 09:19 - 09:33 (00:13)
reboot system boot 3.2.13-grsec-xxx Thu Apr 12 08:51 - 09:17 (00:25)
reboot system boot 3.2.13-grsec-xxx Thu Apr 12 00:11 - 09:17 (09:05)
reboot system boot 3.2.13-grsec-xxx Wed Apr 11 19:40 - 09:17 (13:36)
reboot system boot 3.2.13-grsec-xxx Sun Apr 8 22:06 - 09:17 (3+11:10)
reboot system boot 3.2.13-grsec-xxx Sat Apr 7 14:31 - 09:17 (4+18:45)
reboot system boot 3.2.13-grsec-xxx Fri Apr 6 10:20 - 09:17 (5+22:56)
reboot system boot 3.2.13-grsec-xxx Thu Apr 5 00:16 - 09:17 (7+09:01)
reboot system boot 3.2.13-grsec-xxx Tue Apr 3 07:34 - 09:17 (9+01:42)
reboot system boot 3.2.13-grsec-xxx Tue Apr 3 02:31 - 09:17 (9+06:45)
reboot system boot 3.2.13-grsec-xxx Mon Apr 2 23:17 - 09:17 (9+09:59)
The first columns makes sense up to the kernel versions included. What do these times represent exactly ? The last one seems to be the uptime.
Secondly, this is supposed to be a server running 24/7, except the times don't seem to match, which could mean that it is experiencing downtime or something similar. For example, if we look at the last two lines, does it mean that my server was off from Apr 2 09:17 until Apr 3 02:31?
As for the background information, this is a Debian Squeeze server.
EDIT
If the last colums are start time, stop time and uptime, how can you interpret these two lines :
reboot system boot 3.2.13-grsec-xxx Tue Apr 3 07:34 - 09:17 (9+01:42)
reboot system boot 3.2.13-grsec-xxx Tue Apr 3 02:31 - 09:17 (9+06:45)
The second session seems to end after the first one starts which doesn't make sense to me.
|
I guess this is a three year old post, but I'll respond anyway, for the benefit of anyone else who happens across it in the future, like I just did recently.
From reading other posts and monitoring the output myself over a period of time, it looks like each line lists the start date and time of the session, the end time of the session (but not the end date), and the duration of the session (how long they were logged in) in a format like
(days+hours:minutes)
The reboot user appears to be noted as having logged in whenever the system is started, and off when the system was rebooted or shutdown, and on those lines, the "session duration" information is the length of time (days+hours:minutes) that "session" lasted, that is, how long the system was up before it was shutdown.
For me, the most recent reboot entry shows the current time as the "logged off" time, and the session duration data for that entry matches the current uptime output.
So on this line:
reboot system boot 3.2.13-grsec-xxx Tue Apr 3 07:34 - 09:17 (9+01:42)
The system was started on Tuesday, April 3rd, at 7:34 am, and it was shutdown 9 days and 1 hour and 42 minutes later (on April 12th), at 9:17 in the morning. (Or, this output was gathered at that time, and this is the most recent reboot entry, and "reboot" hasn't actually "logged off" yet. In which case the output will change if you run the last command again.)
Why you would have 2 entries for the reboot user, on April 3rd, that were both 9 days long, is a mystery to me; my systems don't do that.
| Meanings of the columns in "last" command |
1,389,235,186,000 |
Can anybody explain me what is the meaning of the last column of the output of the last command? I'm particularly interested in its meaning with respect to the reboot pseudo-user.
reboot system boot 2.6.32-28-generi Sat Feb 12 08:31 - 18:09 (9+09:37)
What does that 9+09:37 mean?
|
reboot and shutdown are pseudo-users for system reboot and shutdown, respectively. That's the mechanism for logging that information, with kernel versions to same place, without creating any special formats for the wtmp binary file.
Quote from man wtmp:
The wtmp file records all logins and logouts. Its format is exactly like utmp except that a null username indicates a logout on the associated terminal. Furthermore, the terminal name ~ with username shutdown or reboot indicates a system shutdown or reboot and the pair of terminal names | / } logs the old/new system time when date(1) changes it.
wtmp binary file do not save other than timestamp for events. For example, last calculates additional things, such as login times.
reboot system boot 2.6.32-28-generi Mon Feb 21 17:02 - 18:09 (01:07)
...
user pts/0 :0.0 Sat Feb 12 18:52 - 18:52 (00:00)
user tty7 :0 Sat Feb 12 18:52 - 20:53 (02:01)
reboot system boot 2.6.32-28-generi Sat Feb 12 08:31 - 18:09 (9+09:37)
The last column (in parentheses) is the length of event. For the user reboot, it's uptime.
After the latest reboot, time is current uptime. For earlier reboots, time is uptime after that reboot (so in the last line of my example it's uptime until the first line; there were no reboots in between). Number(s) before + means number of days. In the last line, it's 9 days, 9 hours and 37 minutes, and in the first line current uptime is 1 hour and 7 minutes.
Note, however, that this time is not always accurate — for example, after a system crash and unusual restart sequence. last calculates it as the time between that boot and the next recorded reboot/shutdown.
| Output of the "last" command |
1,389,235,186,000 |
Last shows "crash" at 12:02 and 14:18, but the system didn't stop working at that time. The reboot at 15:03, on the other hand, was to recover from an actual crash - our system stopped responding at 14:46. Why does last show two "crashes" prior to the actual crash of the machine?
[admin@devbox log]$ last | head
myuser pts/2 myhostname Wed Sep 28 15:12 still logged in
myuser pts/2 myhostname Wed Sep 28 15:09 - 15:12 (00:02)
myuser pts/2 myhostname Wed Sep 28 15:07 - 15:09 (00:01)
myuser pts/1 myhostname Wed Sep 28 15:06 still logged in
myuser pts/0 myhostname Wed Sep 28 15:04 still logged in
reboot system boot 2.6.18-274.el5PA Wed Sep 28 15:03 (00:09)
myuser pts/1 myhostname Wed Sep 28 14:18 - crash (00:44)
myuser pts/0 myhostname Wed Sep 28 12:02 - crash (03:01)
EDIT: The reboot at 15:03 is real enough - but the two "crash" entries at 14:18 and 12:02 I can't explain.
|
last prints crash as the logout time when there is no logout entry in the wtmp database for a user session.
The last entry in last output means that myuser logged in on pts/0 at 12:02 and, when the system crashed between 14:18 and 15:03, that user should be still logged in.
Usually, in wtmp there are two entries for each user session. One for the login time and one for the logout time. When a system crashes, the second entry could be missing. So last supposes that the user was still logged on when the system crashed and prints crash as the logout time.
To be clearer, those two "crash" lines are just the two sessions that were active when the system crashed around 15:00, not two system crashes.
| Can't explain "crash" entries in output of the 'last' command |
1,389,235,186,000 |
In Mac OS X, if I don't touch it for a while, it will lock the screen and one must use password to unlock it, but this kind of log in is not recorded by last command. I want to know if anybody tried to break into my MacBook when I am not in front of it. Is there any way I can log such attempts?
|
If you suspect that someone has correctly guessed your password and got in, you can check this via the Console. To access Console press ⌘+space and type 'console' in the Spotlight box that appears. Click return.
Click on 'Diagnostic and Usage Messages' on the left panel. At the time of the correct login attempt you see something like this:
Note: 'screen locked, user typed correct password'.
Now if someone tried, yet failed, you'd see something like this under system.log (also accessible via Console):
I hope that's of some assistance to you.
| How to know when and which user logged into the system under Mac OS X? Last is not enough! |
1,389,235,186,000 |
What can be a reason for last -x reboot showing two last entries as still running? Besides, I'm pretty sure I did not reboot this server December, 16, though it could be something with power on hosting provider. If that was the case, would this be expected result?
root@cthulhu-new ~ # last -x reboot
reboot system boot 4.13.0-0.bpo.1-a Sat Dec 16 06:26 still running
reboot system boot 4.13.0-0.bpo.1-a Thu Dec 7 07:56 still running
reboot system boot 4.13.0-0.bpo.1-a Wed Dec 6 11:10 - 07:55 (20:44)
wtmp begins Tue Dec 5 19:59:27 2017
Here's the output of last -x shutdown which has no entry for Dec 16:
root@cthulhu-new ~ # last -x shutdown
shutdown system down 4.13.0-0.bpo.1-a Thu Dec 7 07:55 - 07:56 (00:00)
shutdown system down 4.13.0-0.bpo.1-a Wed Dec 6 11:10 - 11:10 (00:00)
wtmp begins Tue Dec 5 19:59:27 2017
|
Yes, it would be the expected result. last doesn’t know that your system was shut down on December 16, presumably because it wasn’t an orderly shut down (power loss or something like that). Because of the way it displays boots, it considers that the last two boots are still running.
Things will sort themselves out to some extent the next time the system is shut down; last will then use the new shut down time for both December 7 and 16 boots.
| Debian - two entries in `last reboot` in `still running` |
1,389,235,186,000 |
I have a problem: I need to find the directories that got updated yesterday. I tried using the find command, but it's listing all the files that got updated in the directories. I need only the directory names.
|
You can use -type d in the find string:
find /path/to/target -type d -mtime 1
| How to find Directories that updated last day in linux? |
1,389,235,186,000 |
I have a notebook and want to store information for every day about when the computer is running or not (precise to minutes).
Here is output from last command:
dima tty1 :0 Sat Apr 14 21:56 gone - no logout
reboot system boot 4.15.15-1-ARCH Sat Apr 14 21:56 still running
root tty2 Sat Apr 14 21:18 - 21:56 (00:37)
dima tty1 :0 Sat Apr 14 20:38 - down (01:17)
reboot system boot 4.15.15-1-ARCH Sat Apr 14 20:38 - 21:56 (01:17)
dima tty1 :0 Sat Apr 14 12:36 - down (06:19)
reboot system boot 4.15.15-1-ARCH Sat Apr 14 12:36 - 18:56 (06:19)
dima tty1 :0 Thu Apr 12 20:08 - down (1+16:28)
reboot system boot 4.15.15-1-ARCH Thu Apr 12 20:07 - 12:36 (1+16:28)
dima tty1 :0 Thu Apr 12 13:33 - down (06:34)
reboot system boot 4.15.15-1-ARCH Thu Apr 12 13:32 - 20:07 (06:34)
I want something like that, but also with information about when my notebook suspends/resumes.
Could you say, please, which command should I use?
|
If systemd is your init system, you can see it like this (works as root only):
[root@centos7 src]# journalctl -t systemd-sleep
-- Logs begin at Sat 2018-04-14 23:06:52 MSK, end at Sun 2018-04-15 01:30:01 MSK. --
Apr 15 00:18:55 centos7.localdomain systemd-sleep[3365]: Suspending system...
Apr 15 00:23:14 centos7.localdomain systemd-sleep[3365]: System resumed.
If you use a traditional init instead, you can grep your dmesg for one of these patterns (output depends on the kernel version and distribution):
# entering to suspend state
kernel: PM: Preparing system for freeze sleep
# exit from suspend state
kernel: Suspending console(s) (use no_console_suspend to debug)
kernel: PM: suspend of devices complete after 60.341 msecs
| How to make `last` shows also suspend/resume times? |
1,389,235,186,000 |
I see a suspicious pattern in a last command output on RHEL:
$ last reboot
reboot system boot 3.10.0-514.21.1. Wed Dec 13 10:25 - 11:53 (01:28)
reboot system boot 3.10.0-514.21.1. Mon Oct 30 16:23 - 11:53 (43+20:30)
reboot system boot 3.10.0-514.21.1. Fri Oct 20 16:53 - 11:53 (53+20:00)
reboot system boot 3.10.0-514.21.1. Mon Oct 16 09:21 - 11:53 (58+03:32)
reboot system boot 3.10.0-514.21.1. Fri Aug 25 15:53 - 11:53 (109+21:00)
reboot system boot 3.10.0-514.21.1. Tue Aug 22 15:36 - 11:53 (112+21:16)
reboot system boot 3.10.0-514.21.1. Fri Jul 21 16:38 - 11:53 (144+20:15)
reboot system boot 3.10.0-514.21.1. Fri Jun 9 15:00 - 16:18 (42+01:17)
reboot system boot 3.10.0-514.21.1. Mon Jun 5 11:20 - 16:18 (46+04:57)
reboot system boot 3.10.0-514.21.1. Thu Jun 1 09:49 - 16:18 (50+06:28)
reboot system boot 3.10.0-514.el7.x Wed May 31 17:46 - 09:49 (16:02)
Namely, the 10th column shows the same time datum on several rows (e.g., 11:53 seven times, and 16:18 three times).
The man page does not explain what each column should represent.
Do you know the purpose of the 10th column of the last command's output?
|
When listing reboots, the tenth column shows the last “down time” following the boot, i.e. the time at which the system was shut down, as far as last can determine. This actually involves combining multiple records from the information stored in the system; to do so, last keeps track of the last down time it’s seen, and uses that blindly when it displays a “reboot” line.
Thus if the system is shut down abruptly, the shut down time won’t be stored, and last will use the previous record instead. Looking at your results:
reboot system boot 3.10.0-514.21.1. Wed Dec 13 10:25 - 11:53 (01:28)
reboot system boot 3.10.0-514.21.1. Mon Oct 30 16:23 - 11:53 (43+20:30)
reboot system boot 3.10.0-514.21.1. Fri Oct 20 16:53 - 11:53 (53+20:00)
reboot system boot 3.10.0-514.21.1. Mon Oct 16 09:21 - 11:53 (58+03:32)
reboot system boot 3.10.0-514.21.1. Fri Aug 25 15:53 - 11:53 (109+21:00)
reboot system boot 3.10.0-514.21.1. Tue Aug 22 15:36 - 11:53 (112+21:16)
reboot system boot 3.10.0-514.21.1. Fri Jul 21 16:38 - 11:53 (144+20:15)
reboot system boot 3.10.0-514.21.1. Fri Jun 9 15:00 - 16:18 (42+01:17)
reboot system boot 3.10.0-514.21.1. Mon Jun 5 11:20 - 16:18 (46+04:57)
reboot system boot 3.10.0-514.21.1. Thu Jun 1 09:49 - 16:18 (50+06:28)
reboot system boot 3.10.0-514.el7.x Wed May 31 17:46 - 09:49 (16:02)
last found a record indicating a shut down at 11:53 on December 13, and then several records indicating a start time; so it used that single shut down time for all of them. Then it found a shut down record for 42 days after June 9, at 16:18, and used that, again several times because it didn’t find any other shut down record until 09:49 on June 1.
You can see this in the last source code; search for “lastdown” to find where it’s updated (and used).
| What is the purpose of the 10th column of the `last` command's output? |
1,389,235,186,000 |
I was looking into: 'last -d' command.
-d: For non-local logins, Linux stores not only the host name of the remote host but its IP number as well. This option translates the IP number back into a hostname.
At first, I was looking at similar questions, This one in particular:
'last -d' is REALLY slow
Before I updated my hosts file and added 0.0.0.0 localhost, I received fewer hostnames and more IP addresses. So that means Linux stores the hostnames somewhere in the OS. If that's the case, is there any way of reaching the hostnames without the command last -d?
|
According to man last, my Arch Linux system stores login info in /var/log/wtmp. It looks to be in a binary format - that is, the usual text tools will only show you parts of it.
This command: xxd /var/log/wtmp | more shows me both text-format dotted-quad IP addresses, and fully-qualified DNS names.
I wrote the following little program to show me what was in /var/log/wtmp. It appears that not every entry has a hostname/IP address, and that the binary format only has a small, fixed amount of room for the hostname.
#include <stdio.h>
#include <utmp.h>

int
main(int ac, char **av)
{
    struct utmp *utmpp;

    utmpname("/var/log/wtmp");
    while (NULL != (utmpp = getutent())) {
        printf("%s\n", utmpp->ut_host);
    }
    endutent();
    return 0;
}
| Where does 'last' store user hostnames? |
1,389,235,186,000 |
I just found some odd behaviour on one of our servers that I can't explain myself.
It is about both middle lines. I would assume that the timespans for the boot user must not overlap, however, they do:
$ last reboot -F
reboot system boot 4.4.44-39.55.amz Wed Feb 15 09:16:30 2017 - Wed Feb 15 09:36:53 2017 (00:20)
reboot system boot 4.4.41-36.55.amz Fri Feb 10 20:16:26 2017 - Wed Feb 15 09:16:00 2017 (4+12:59)
reboot system boot 4.4.41-36.55.amz Fri Feb 10 14:33:56 2017 - Wed Feb 15 09:16:00 2017 (4+18:42)
reboot system boot 4.4.35-33.55.amz Fri Jan 20 17:06:05 2017 - Wed Feb 15 09:16:00 2017 (25+16:09)
Does this mean the machine was not properly shut down before being rebooted, so there is no logout entry for the boot user in wtmp?
Thanks for any hints.
|
These are not entries of a boot user logging in or out, it's the system writing an entry upon reboot.
The entries are written when a reboot occurs, however, if the system was brought down in some other way (by unplugging the power or whatever), an entry would not have been written. I presume that the next orderly shutdown would therefore produce the effect that you are seeing.
Rebooting with reboot -d will also not update the wtmp database.
| last reboot -F shows overlapping timespans |
1,389,235,186,000 |
I know that the command last | tac is enough, but I want to do it using the sort command. I cannot sort it by column, it always sorts the first one only.
Using bash on Arch Linux.
|
Looks like you can't rely on fields, so you'd need to rely on character column
1 2 3 4 5 6 7
1234567890123456789012345678901234567890123456789012345678901234567890123456789
stephane pts/0 :0 Fri Aug 1 09:48 - 14:34 (17+04:45)
stephane pts/13 :0 Fri Aug 1 16:27 - 13:51 (20+21:24)
From that:
last | sort -k1.44,1.46M -k1.48,1.49n -k51
Note that the M flag to sort on month names is not standard but available in several sort implementations including GNU sort (the one typically found on ArchLinux). Note that sort interprets the month names in the current locale, while last always outputs English month names, so you may want to run sort under LC_TIME=C if in a non-English locale.
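In a non-English locale, that could look like the following (LC_TIME=C applies only to the sort invocation, leaving your session locale alone):

```shell
# Force English month-name parsing for sort's M flag only.
last | LC_TIME=C sort -k1.44,1.46M -k1.48,1.49n -k51
```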
| Sort the 'last' output by month |
1,389,235,186,000 |
I noticed that on one of my machines the last command returned nothing. I determined the cause of this to be an empty /var/log/wtmp file. What would cause this to be empty? I assume the "tmp" means temporary, but what and where decides how temporary this log file is?
|
/var/log/wtmp is usually rotated (or just cleared) by a monthly cron job, or with a config file in /etc/logrotate.d/
For example: on my Debian system, all the lines in /etc/logrotate.d/wtmp are commented out, but /etc/cron.monthly/acct (from the acct GNU Accounting Utilities package) rotates it and generates a monthly report (/var/log/wtmp.report).
Check to see if you have /var/log/wtmp.1, /var/log/wtmp.2, etc. Possibly compressed with .gz filename extensions.
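A typical stanza in /etc/logrotate.d/wtmp looks roughly like this (a hedged sketch — the exact directives vary by distribution, and as noted above Debian may ship them commented out):

```
/var/log/wtmp {
    monthly
    create 0664 root utmp
    rotate 1
}
```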
You can use last's -f option to view the records in other wtmp files. e.g.
last -f /var/log/wtmp.1
From man last:
-f, --file file
Tell last to use a specific file instead of /var/log/wtmp.
The --file option can be given multiple times, and all of
the specified files will be processed.
BTW, last -f can't read a compressed wtmp file. If it's compressed, you'll have to gunzip it first.
| What causes wtmp to be cleared? |
1,389,235,186,000 |
I often use the last command to check my systems for unauthorized logins, this command:
last -Fd
gives me the logins where I have remote logins showing with ip.
From man last:
-F Print full login and logout times and dates.
-d For non-local logins, Linux stores not only the host name of the remote host but its IP number as well. This option translates the IP
number back into a hostname.
Question:
One of my systems is only showing a few days worth of logins. Why is that? What can I do when last only gives me few days?
Here is the output in question:
root ~ # last -Fd
user pts/0 111-111-111-111. Wed Oct 8 20:05:51 2014 still logged in
user pts/0 host.lan Mon Oct 6 09:52:01 2014 - Mon Oct 6 09:53:41 2014 (00:01)
user pts/0 host.lan Sat Oct 4 10:11:39 2014 - Sat Oct 4 10:12:13 2014 (00:00)
user pts/0 host.lan Sat Oct 4 09:31:07 2014 - Sat Oct 4 10:11:00 2014 (00:39)
user pts/0 host.lan Sat Oct 4 09:26:04 2014 - Sat Oct 4 09:28:16 2014 (00:02)
wtmp begins Sat Oct 4 09:26:04 2014
|
It is likely that logrotate has archived the log(s) of interest and opened a new one. If you have older wtmp files, specify one of those, as for example:
last -f /var/log/wtmp-20141001
| last command only shows few days worth of logins |
1,389,235,186,000 |
This was a weird output I came across. Running the "last" command gave me this output
gryphon pts/0 192.168.0.108 Wed Mar 19 21:04 still logged in
gryphon pts/0 s0106d850e695678 Wed Mar 19 13:53 - 13:54 (00:01)
gryphon pts/0 192.168.0.108 Tue Mar 18 22:57 - 23:03 (00:06)
I'm pretty sure that the weird IP address was me but what is up with the garbage for the IP address?
I do not have an IPv6 address. It's just a weird hostname; I don't have anything named that. I might have been VPN'ing in; would that have anything to do with it?
|
As Ricky Beam mentioned, it is a hostname, but sadly, it was too long and therefore cut off. If you want to display the entire host name without any trimming, run the command with the -F flag (capitalization matters).
$ last
andreas pts/15 123-27-247-110.c Fri Jun 13 00:24 - 00:33 (00:08)
$ last -w -F
andreas pts/15 123-27-247-110.client.mchsi.com Fri Jun 13 00:24:24 2014 - Fri Jun 13 00:33:03 2014 (00:08)
| Linux Last command weird output? Garbage for IP address |
1,389,235,186,000 |
We recently set up a new FreeBSD 7.2 machine on March 11. When looking at last, we noticed some funny characters for the TTY field.
What do these characters mean? Note that at this time, the sysadmin was battling with NTP.
# last |grep date
date { Fri Mar 10 17:26
date | Fri Mar 10 15:26
date { Fri Mar 10 15:26
date | Fri Mar 10 03:14
date { Fri Mar 10 03:11
date | Sat Mar 11 15:22
date { Sat Mar 11 15:21
date | Sat Mar 11 09:20
And according to man last, I can specify the tty using -t tty:
# last -t {
date { Fri Mar 10 17:26
date { Fri Mar 10 15:26
date { Fri Mar 10 03:11
date { Sat Mar 11 15:21
|
From utmp(5):
The system time has been manually or automatically updated (see
date(1)). The command name date is recorded in the field
ut_name. In the field ut_line, the character `|' indicates the
time prior to the change, and the character `{' indicates the
new time.
So the | and { are just there to indicate that the system time is being changed.
| Why does `last` show '{' and '|' in the TTY field? |
1,389,235,186,000 |
Is there a shortcut (keyboard) to automatically scroll back to the beginning of a response to a command?
Example: I open the terminal and type the 'last' command and press enter.
A long lost of previously logged in users appears but I am at the end (i.e. oldest users). Scrolling back up to the top can be a bit of a pain.
Is there a way to jump to the top of the output of this command?
I checked the man page for 'last' doesn't seem to contain such a function.
|
Not that I'm aware of; you'd normally pipe the output into a pager like less.
$ last | less
| Does a shortcut exist to auto scroll to the top of a command response in terminal? |
1,389,235,186,000 |
last
...
date { Sun Mar 31 12:00
date | Sun Mar 31 12:00
date { Sun Mar 31 00:00
date | Sun Mar 31 00:00
...
Why are there entries like "{" in the last command's output? Could some sort of script be doing this? (on OpenBSD 5.1)
|
The lines you are seeing indicate the system time has been automatically updated. The '|' character indicates the time prior to the change and the '{' character indicates the new time.
Source: man utmp (5)
| How to detect what is causing entries in the "last" cmd's output? [duplicate] |
1,389,235,186,000 |
What is the meaning of "crash" from the output of the last command?
root pts/0 mastrt03 Wed Jan 24 11:54 - crash (07:12)
We have couple of lines with "crash" on our mastrt03 machine.
|
The last command shows crash as the logout time when no logout time is recorded in the wtmp database for a user session.
Normally last shows each user's login time, logout time, and session duration.
If no logout entry is found for a session in the wtmp database (typically because the system went down uncleanly before the session ended), last reports it as crash.
| What is the meaning of "crash" from the output of the last command? |
1,389,235,186,000 |
So I'm creating a simple grep command that gets only the last logged in people whose username starts with 161 followed by 3 digits:
last | grep "^161[0-9]{3}"
However, it doesn't print anything even though these usernames are on the list.
What's even weirder is that if I use egrep instead of grep
last | egrep "^161[0-9]{3}"
The command works.
Can anyone explain what is the difference?
|
As steeldriver already pointed out, grep uses basic regular expressions (BRE), whereas grep -E and egrep use extended regular expressions (ERE). In BRE the interval braces must be backslash-escaped; in ERE they are special as written:
last | grep '^161[0-9]\{3\}'
last | egrep '^161[0-9]{3}'
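A quick self-contained check of both forms, using sample input instead of real last output:

```shell
# BRE: interval braces must be escaped
printf '161123 user\n161ab user\n' | grep '^161[0-9]\{3\}'
# ERE: braces are special without escaping
printf '161123 user\n161ab user\n' | grep -E '^161[0-9]{3}'
# both print only: 161123 user
```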
| Grep regex not working [duplicate] |
1,389,235,186,000 |
When I run the last command on one of my Raspberry Pi's running Raspbian I get this at the end:
wtmp begins Thu Jan 1 01:00:01 1970
When I run the last command on a proper operating system, such as Ubuntu or Fedora, I get a real date, not the epoch time. What's causing it and what does it mean (in both cases)?
|
You are seeing this because the wtmp file is rotated at the 1st of each month. (e.g. wtmp is moved to wtmp.1 and a new wtmp is created empty).
In the Raspberry PI, as you do not have a Real Time Clock to keep the time, each time you (re)boot it you are back to Epoch 0 which is Jan 1, 1970.
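Epoch 0 rendered as a human-readable date (GNU date shown here):

```shell
date -u -d @0
# Thu Jan  1 00:00:00 UTC 1970
```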
The good news is that you can buy an RTC module (DS3231) for the Raspberry Pi on AliExpress for less than 2 Euros, or in Europe for around 5-10 Euros from The Pi Hut. I myself bought one for my Lamobo R1 and another one for my rpi 3.
see Adding a Real Time Clock to your Raspberry Pi
As a side note, if it has an Internet connection, the rpi is supposed to pick up the correct time shortly after booting via the NTP protocol; however, having an RTC means it gets the correct time earlier in the bootup sequence. An RTC is also particularly handy for Raspberries/Arduinos that are not connected to the Internet.
| Why do I get this at the end of a `last` command? |
1,389,235,186,000 |
I have the following lines extracted from the output of last. It shows two reboots, and that userA was logged in right up to the reboot. So far, I am able to interpret the data.
However, what I do not understand right now is the login time of the pseudo user reboot. For any ordinary user, the two times are the times the user logged in and logged out. In the case of a reboot, the entry for the logout time is crash, indicating that the user was logged in right until the bitter end. No statement on whether the user is a victim or the culprit.
My guess is that the login time of the pseudo user reboot is the time when the system reboot was initiated. However, what determines the logout time of the user reboot?
reboot system boot 3.10.0-327.13.1. Mon Nov 28 08:08 - 10:35 (02:26)
userA pts/0 10.ZZ.YY.XX Sun Nov 27 08:01 - crash (1+00:06)
reboot system boot 3.10.0-327.13.1. Sun Nov 27 07:36 - 10:35 (1+02:58)
userA pts/9 10.ZZ.YY.XX Fri Nov 25 17:39 - crash (1+13:57)
userA pts/0 10.ZZ.YY.XX Fri Nov 25 16:17 - crash (1+15:18)
|
It refers to the time between reboots. I will explain with an example:
root pts/0 192.168.10.58 Mon Nov 28 10:53 still logged in
reboot system boot 2.6.32-642.11.1. Mon Nov 28 10:14 - 11:00 (00:45)
root pts/0 192.168.10.58 Mon Nov 28 10:11 - down (00:02)
reboot system boot 2.6.32-642.11.1. Mon Nov 28 10:09 - 10:14 (00:04)
root pts/0 192.168.10.58 Mon Nov 28 10:08 - down (00:01)
reboot system boot 2.6.32-642.11.1. Mon Nov 28 10:07 - 10:09 (00:01)
root pts/0 192.168.10.58 Mon Nov 28 10:06 - down (00:01)
reboot system boot 2.6.32-642.11.1. Mon Nov 28 10:05 - 10:07 (00:01)
root pts/0 192.168.10.58 Mon Nov 28 09:23 - down (00:41)
reboot system boot 2.6.32-642.11.1. Mon Nov 28 09:21 - 10:05 (00:43)
root pts/0 192.168.10.58 Mon Nov 28 04:42 - down (04:39)
The logout time of the most recent reboot entry keeps changing every minute: whenever you run last, the latest reboot shows the present time, while each earlier reboot entry shows the time at which the next reboot happened.
| last - user reboot - logged-in period |
1,389,235,186,000 |
Is there a way to tell this command to output the data in local time (not UTC)?
last reboot
Version:
last --version
last von util-linux 2.27.1
|
Use TZ if you need some (troublesome on account of the horrible daylight savings time swings) local timezone, as @Toby indicates:
% TZ=US/Pacific last | sed -n /reboot/p | sed -n 1p
reboot system boot 2.6.32-642.1.1.e Mon Jul 25 09:55 - 08:47 (29+22:51)
% last | sed -n /reboot/p | sed -n 1p
reboot system boot 2.6.32-642.1.1.e Mon Jul 25 16:55 - 15:47 (29+22:51)
%
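TZ only changes how the timestamp is rendered, not the stored value. A minimal illustration with GNU date, using a fixed-offset POSIX TZ string (PST8) so no tzdata lookup is needed:

```shell
TZ=UTC  date -d @0 '+%H:%M'   # 00:00
TZ=PST8 date -d @0 '+%H:%M'   # 16:00 (the previous day, local time)
```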
| last reboot in local time zone? |
1,389,235,186,000 |
I have several remote systems connecting to my server via SSH to establish tunnels.
They authenticate using a public key, their user is locked and their shell is set to /usr/sbin/nologin
It all works fine except with this setup the output from last username is empty for those accounts.
Is there a workaround to get last login info for those? I need the IP and connection time.
Thank you for reading.
|
You could use awk on /var/log/secure or /var/log/auth.log (depending on the distro).
On my CentOS 7 I get the following when I log in remotely:
Dec 8 21:58:20 <server hostname> sshd[8387]: Accepted publickey for gareth from 1.2.3.4 port 58392 ssh2: RSA 55:89:f9:20:db:c6:e0:6f:ff:d4:a7
The above was for interactive login but a similar entry was created for sftp login.
Therefore:
awk '/sshd.*Accepted/ {print $1,$2,$3,$9,$11}' /var/log/secure
should give you:
Dec 8 21:58:20 gareth 1.2.3.4
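To sanity-check the field numbering without a real log file (the host, user, and IP below are made up):

```shell
line='Dec  8 21:58:20 myhost sshd[8387]: Accepted publickey for gareth from 1.2.3.4 port 58392 ssh2: RSA 55:89'
printf '%s\n' "$line" | awk '/sshd.*Accepted/ {print $1,$2,$3,$9,$11}'
# Dec 8 21:58:20 gareth 1.2.3.4
```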
| Unable to get last connection info for SSH users (locked, no shell, public key) is there an alternative? |
1,389,235,186,000 |
Hello I would like to know how to scan and change the last modified date of all subfolders based on the oldest file in each subfolder.
Example of Ubuntu folder structure:
home/incoming/media/Something.something.1234/
or
/Soemthing Soemthing 1234/
Which means there are folders with dots and without dots. Same goes for files some with dots and some without.
Files are mostly MKV media files maybe some mp4.
Also, the script should skip all mkv or mp4 files in the root of the media folder because they are not inside any subfolder.
|
In zsh:
for dir in path/to/media/*(NF); do        # N: nullglob, F: non-empty directories only
    oldest=( $dir/*.(mp4|mkv)(N.Om[1]) )  # .: plain files, Om: oldest first, [1]: take the first
    if (( $#oldest )) touch -r $oldest -- $dir
done
Beware the last modification time of a directory is updated any time an entry is added, removed or renamed in it, so that touch may not hold for long.
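If zsh isn't available, here is a rough bash equivalent as a self-contained demo — a sketch only, assuming GNU find/sort/touch/date and filenames without newlines; the /tmp/media_demo tree and file names are made up:

```shell
media=/tmp/media_demo              # hypothetical demo tree
rm -rf "$media"
mkdir -p "$media/Show.1234"
touch -d '2020-01-01' "$media/Show.1234/old.mkv"
touch -d '2021-01-01' "$media/Show.1234/new.mp4"

for dir in "$media"/*/; do
  # list files as "<mtime> <path>", sort numerically, keep the oldest path
  oldest=$(find "$dir" -maxdepth 1 -type f \( -name '*.mkv' -o -name '*.mp4' \) \
             -printf '%T@ %p\n' | sort -n | head -n 1 | cut -d' ' -f2-)
  [ -n "$oldest" ] && touch -r "$oldest" -- "$dir"
done

date -r "$media/Show.1234" '+%Y'   # 2020
```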
| Change the last modification time of subfolders to that of the oldest file inside |
1,305,731,391,000 |
I have currently a strange problem on debian (wheezy/amd64).
I have created a chroot to install a server (i can't give any more detail about it, sorry). Let's call its path /chr_path/.
To make things easy, I have initialized this chroot with a debootstrap (also wheezy/amd64).
All seemed to work well inside the chroot, but when I started the installer script of my server I got:
zsh: Not found /some_path/perl (the installer includes a perl binary for some reason)
Naturally, I checked the /some_path/ location and I found the "perl" binary. file in the chroot environment returns:
/some_path/perl ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.2.5, not stripped
The file exists, seems ok, has correct rights. I can use file, ls, vim on it but as soon as I try to execute it - ./perl for example - I get : zsh: Not found ./perl.
This situation is quite understandable for me. Moreover :
I can execute other basic binaries (/bin/ls,...) in the chroot without getting errors
I have the same problems for other binaries that came with the project
When I try to execute the binary from the main root (/chr_path/some_path/perl), it works.
I have tried to put one of the binaries with a copy of my ls. I checked that the access rights were the same but this didn't change anything (one was working, and the other wasn't)
|
When you fail to execute a file that depends on a “loader”, the error you get may refer to the loader rather than the file you're executing.
The loader of a dynamically-linked native executable is the part of the system that's responsible for loading dynamic libraries. It's something like /lib/ld.so or /lib/ld-linux.so.2, and should be an executable file.
The loader of a script is the program mentioned on the shebang line, e.g. /bin/sh for a script that begins with #!/bin/sh. (Bash and zsh give a message “bad interpreter” instead of “command not found” in this case.)
The error message is rather misleading in not indicating that the loader is the problem. Unfortunately, fixing this would be hard because the kernel interface only has room for reporting a numeric error code, not for also indicating that the error in fact concerns a different file. Some shells do the work themselves for scripts (reading the #! line on the script and re-working out the error condition), but none that I've seen attempt to do the same for native binaries.
ldd won't work on the binaries either because it works by setting some special environment variables and then running the program, letting the loader do the work. strace wouldn't provide any meaningful information either, since it wouldn't report more than what the kernel reports, and as we've seen the kernel can't report everything it knows.
This situation often arises when you try to run a binary for the right system (or family of systems) and superarchitecture but the wrong subarchitecture. Here you have ELF binaries on a system that expects ELF binaries, so the kernel loads them just fine. They are i386 binaries running on an x86_64 processor, so the instructions make sense and get the program to the point where it can look for its loader. But the program is a 32-bit program (as the file output indicates), looking for the 32-bit loader /lib/ld-linux.so.2, and you've presumably only installed the 64-bit loader /lib64/ld-linux-x86-64.so.2 in the chroot.
You need to install the 32-bit runtime system in the chroot: the loader, and all the libraries the programs need. From Debian wheezy onwards, if you want both i386 and x86_64 support, start with an amd64 installation and activate multiarch support: run dpkg --add-architecture i386 then apt-get update and apt-get install libc6:i386 zlib1g:i386 … (if you want to generate a list of the dependencies of Debian's perl package, to see what libraries are likely to be needed, you can use aptitude search -F %p '~Rdepends:^perl$ ~ri386'). You can pull in a collection of common libraries by installing the ia32-libs package (you need to enable multiarch support first). On Debian amd64 up to wheezy, the 32-bit loader is in the libc6-i386 package. You can install a bigger set of 32-bit libraries by installing ia32-libs.
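As a rough way to double-check a binary's class without file or readelf: the fifth byte of an ELF header encodes the class (01 = 32-bit, 02 = 64-bit). This is only a sketch — file(1) is the friendlier tool:

```shell
# Print the ELF class byte of a binary (byte offset 4, after the 7f 45 4c 46 magic)
elf_class() { od -An -tx1 -j4 -N1 "$1" | tr -d ' '; }
elf_class /bin/sh   # 02 on an amd64 system, 01 on i386
```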
| Getting "Not found" message when running a 32-bit binary on a 64-bit system |
1,305,731,391,000 |
Assuming I want to test if a library is installed and usable by a program. I can use ldconfig -p | grep mylib to find out if it's installed on the system. but what if the library is only known via setting LD_LIBRARY_PATH?
In that case, the program may be able to find the library, but ldconfig won't. How can I check if the library is in the combined linker path?
I'll add that I'm looking for a solution that will work even if I don't actually have the program at hand (e.g. the program isn't compiled yet), I just want to know that a certain library exists in ld's paths.
|
ldconfig can list all the libraries it has access to. These libraries are also stored in its cache.
/sbin/ldconfig -v -N will crawl all the usual library paths and list all the available libraries, without rebuilding the cache (which is not possible if you're a non-root user). It does NOT take into account libraries in LD_LIBRARY_PATH (contrary to what this post said before the edit), but you can pass those directories on the command line by using the line below:
/sbin/ldconfig -N -v $(sed 's/:/ /g' <<< $LD_LIBRARY_PATH)
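The sed step just turns the colon-separated list into space-separated directory arguments for ldconfig; for example (the paths here are hypothetical):

```shell
LD_LIBRARY_PATH=/opt/mylibs:/home/user/lib
sed 's/:/ /g' <<< "$LD_LIBRARY_PATH"
# /opt/mylibs /home/user/lib
```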
| Find out if library is in path |
1,305,731,391,000 |
I know this question isn't new, but it seems I wasn't able to fix this problem on my own.
ldd generate the following output
u123@PC-Ubuntu:~$ ldd /home/u123/Programme/TestPr/Debug/TestPr
linux-vdso.so.1 => (0x00007ffcb6d99000)
libcsfml-window.so.2.2 => not found
libcsfml-graphics.so.2.2 => not found
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fcebb2ed000)
/lib64/ld-linux-x86-64.so.2 (0x0000560c48984000)
What is the correct way to tell ld the correct path?
|
If your libraries are not on a standard path, then you either need to move them to a standard path or add the non-standard path to LD_LIBRARY_PATH:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<Your_non-Standard_path>
Once you have done either of the above, update the dynamic linker's run-time bindings by executing the command below:
sudo ldconfig
UPDATE:
You can make the changes permanent by either writing the above export line into one of your startup files (e.g. ~/.bashrc), or, if the library doesn't conflict with any other, by putting it into one of the standard library paths (e.g. /lib, /usr/lib).
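When prepending, it's worth guarding against introducing an empty element, since an empty element in LD_LIBRARY_PATH is interpreted as the current directory. A small sketch using a hypothetical path:

```shell
# ${VAR:+:...} only adds the ':' separator when the variable was already set and non-empty
export LD_LIBRARY_PATH="/opt/mylibs${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
echo "$LD_LIBRARY_PATH"   # /opt/mylibs  (no stray ':' when the variable was unset)
```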
| ldd does not find path, How to add |
1,305,731,391,000 |
In my case, it seems as if LD_LIBRARY_PATH is set to the empty string. But all standard system tools still work fine, so I guess the dynamic linker checks for that case and uses some default for LD_LIBRARY_PATH in that case.
What is that default value? I guess it at least includes /usr/lib but what else? Is there any good systematic way in figuring out the standard locations where the dynamic linker would search?
This question is slightly different from asking which paths the dynamic linker will search in. Having a default value means that the linker would use the value of LD_LIBRARY_PATH if given, or the default value if not — which implies it would not use the default value when LD_LIBRARY_PATH is provided.
|
The usual dynamic linker on Linux uses a cache to find its libraries. The cache is stored in /etc/ld.so.cache, and is updated by ldconfig which looks on the paths it’s given in /etc/ld.so.conf (and nowadays typically files in /etc/ld.so.conf.d). Its contents can be listed by running ldconfig -p.
So there is no default value for LD_LIBRARY_PATH, default library lookup doesn’t need it at all. If LD_LIBRARY_PATH is defined, then it is used first, but doesn’t disable the other lookups (which also include a few default directories).
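To see what the cache currently resolves (this works as a non-root user; the exact paths and counts will differ per system):

```shell
/sbin/ldconfig -p | head -n 1         # summary line, e.g. "1234 libs found in cache ..."
/sbin/ldconfig -p | grep 'libc.so.6'  # where libc would be loaded from
```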
The ld.so(8) manpage has the details:
If a shared object dependency does not contain a slash, then it is
searched for in the following order:
(1) Using the directories specified in the DT_RPATH dynamic section
attribute of the binary if present and the DT_RUNPATH attribute does
not exist. Use of DT_RPATH is deprecated.
(2) Using the environment variable LD_LIBRARY_PATH, unless the
executable is being run in secure-execution mode (see below), in
which case it is ignored.
(3) Using the directories specified in the DT_RUNPATH dynamic section
attribute of the binary if present.
(4) From the cache file /etc/ld.so.cache, which contains a compiled
list of candidate shared objects previously found in the augmented
library path. If, however, the binary was linked with the -z nodeflib
linker option, shared objects in the default paths are skipped.
Shared objects installed in hardware capability directories (see
below) are preferred to other shared objects.
(5) In the default path /lib, and then /usr/lib. (On some 64-bit
architectures, the default paths for 64-bit shared objects are
/lib64, and then /usr/lib64.) If the binary was linked with the
-z nodeflib linker option, this step is skipped.
If LD_LIBRARY_PATH is not set or is empty, it is ignored. If it is set to empty values (with LD_LIBRARY_PATH=: for example), those empty values are interpreted as the current directory.
| What is the default value of LD_LIBRARY_PATH? [duplicate] |
1,305,731,391,000 |
I have found by coincidence that on my Debian Jessie there is no LD_LIBRARY_PATH variable (to be exact printenv | grep LD shows nothing related to linker and echo "$LD_LIBRARY_PATH" shows also nothing).
This is the case in an X terminal emulator (which might clear it due to setgid) as well as in a basic terminal (Ctrl+Alt+F1).
I know that LD_LIBRARY_PATH may be considered bad, so Debian may block it somehow, but on the other hand there are a few files in /etc/ld.so.conf.d/ that contain some directories to be added to the library search path. None of my rc files (that I know of) mess with LD_LIBRARY_PATH either.
Why I don't see an LD_LIBRARY_PATH variable?
|
Yes, it is normal that you don't have any explicit LD_LIBRARY_PATH. Read also ldconfig(8) and ld-linux(8) and about the rpath. Notice that ldconfig updates /etc/ld.so.cache, not the LD_LIBRARY_PATH. Sometimes you'll set the rpath of an executable explicitly with -Wl,-rpath,directory passed to gcc at link time.
If you need a LD_LIBRARY_PATH (but you probably should not), set it yourself (e.g. in ~/.bashrc).
If you need system wide settings, you could e.g. consider adding /usr/local/lib/ in /etc/ld.so.conf and run ldconfig after installation of every library there.
AFAIK $LD_LIBRARY_PATH is used only by the dynamic linker ld-linux.so (and by dlopen(3) which uses it) after execve(2). See also ldd(1).
Read Drepper's How To Write Shared Libraries for more.
| Is it normal that LD_LIBRARY_PATH variable is missing from an environment? |
1,305,731,391,000 |
I have a binary executable named "alpha" that requires a linked library (libz.so.1.2.7) which is placed at /home/username/myproduct/lib/libz.so.1.2.7
I export the same to my terminal instance before spawning my binary executable by executing the following command.
export LD_LIBRARY_PATH=/home/username/myproduct/lib/:$LD_LIBRARY_PATH
Now, when I spawn another application "bravo" that requires the same library but a different version, i.e. libz.so.1.2.8, which is available in
/lib/x86_64-linux-gnu/libz.so.1.2.8, the system throws the following error.
version `ZLIB_1.2.3.3' not found (required by /usr/lib/x86_64-linux-gnu/libxml2.so.2)
If I unset LD_LIBRARY_PATH, "bravo" starts up fine. I understand that the above behaviour occurs because LD_LIBRARY_PATH takes precedence over the directory paths defined in /etc/ld.so.conf when looking for linked libraries, which is why the error occurred. I am just curious why the developers of UNIX/Linux have not designed the OS to keep searching the other directories in the hierarchy if the first instance of a library found is of a different version.
Simply put, UNIX/Linux systems traverse through a set of directories until they find the required library. But why do they not do the same until they find the expected version, rather than accepting the first instance of the library irrespective of its version?
|
But why does it not do the same until it finds the expected version rather than accepting the first instance of library irrespective of its version?
It does, as far as it's aware. libz.so.1.2.7 and libz.so.1.2.8 both have a soname of libz.so.1, so your alpha and bravo binaries say they need libz.so.1. The dynamic loader loads the first matching library it finds; it doesn't know that version 1.2.8 provides additional symbols which bravo needs. (This is why distributions take pains to specify additional dependency information, such as zlib1g (>= 1.2.8) for bravo.)
You might think this should be easy to fix, but it isn’t, not least because binaries and libraries list the symbols they need separately from the libraries they need, so the loader can’t check that a given library provides all the symbols that are needed from it. Symbols can be provided in a variety of ways, and introducing a link between symbols and the libraries providing them could break existing binaries. There’s also the added fun of symbol interposition to complicate things (and make security-sensitive developers tear their hair out).
Some libraries provide version information which ends up being stored in .gnu.version_r, with a link to the providing library, which would help here, but libz isn’t one of them.
(Given the sonames, I'd expect your alpha binary to work fine with libz.so.1.2.8.)
| Why don't Unix/Linux systems traverse through directories until they find the required version of a linked library? |