1,625,864,176,000
Looking at the source of ssh-copy-id in the openssh/openssh-portable repository on GitHub, it looks like ssh-copy-id can copy the key to a user-specified authorized_keys file on the server, besides the default ~/.ssh/authorized_keys. However, there doesn't seem to be a command-line option for it, unless I'm looking at a different ssh-copy-id.
Looking at the source file you mentioned, the commands may be executed as a remote shell script that is built on the fly on the local host. This is only used when ssh-copy-id does not use SFTP and the remote version is not NetScreen. In that case it could be possible if you change the AUTH_KEY_FILE variable in the script.
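There is no flag for this, but the append that ssh-copy-id performs can also be done by hand against any target file. A minimal sketch (the authorized_keys2 target and all paths below are made up for illustration):

```shell
# Over SSH this would be something like:
#   ssh user@host 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys2' < ~/.ssh/id_rsa.pub
# The same append idiom, demonstrated locally with throwaway paths:
mkdir -p /tmp/demo_ssh
echo "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAA demo@example" > /tmp/demo_ssh/id_demo.pub
cat /tmp/demo_ssh/id_demo.pub >> /tmp/demo_ssh/authorized_keys2
chmod 600 /tmp/demo_ssh/authorized_keys2
```

The key point is that the file name on the right side of the redirection is entirely up to you, which is exactly what ssh-copy-id does not expose.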
Can ssh-copy-id be configured to save to a different authorized_keys file?
1,604,442,307,000
Let's say that I have a file /etc/ssh/sshd_config with RevokedKeys /etc/ssh/keys/cert.list. What is revoked when I add an SSH certificate (not the public key) to /etc/ssh/keys/cert.list? I've found that what is really revoked is the public key that the certificate is based on. I mean, if I issue a new certificate with the same public key, I cannot log in using the new certificate. So, is it true that what is revoked is the underlying public key?
It depends on what you list. As stated in ssh-keygen(1): The files may either contain a KRL specification (see below) or public keys, listed one per line. Plain public keys are revoked by listing their hash or contents in the KRL and certificates revoked by serial number or key ID (if the serial is zero or not available). So, you can revoke either a key or a certificate. If a key was compromised (or weak to begin with, see the Debian SSH keys fiasco), you wouldn't want a new certificate to use that same key.
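To make this concrete, here is a sketch of revoking a plain public key via a KRL with ssh-keygen (all file paths are throwaway names invented for the demo; requires OpenSSH to be installed):

```shell
# Generate a throwaway key to revoke
ssh-keygen -q -t ed25519 -N '' -f /tmp/krl_demo_key
# Build a KRL that revokes that public key (-k creates a KRL)
ssh-keygen -k -f /tmp/demo.krl /tmp/krl_demo_key.pub
# -Q tests keys against the KRL; revoked keys are reported as REVOKED
ssh-keygen -Q -f /tmp/demo.krl /tmp/krl_demo_key.pub > /tmp/krl_check.txt || true
cat /tmp/krl_check.txt
```

Revoking a certificate by serial number instead would use `ssh-keygen -k` with a KRL specification file containing a line like `serial: 42`, matching the man-page quote above.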
Rationale behind revocation of ssh certificates
1,604,442,307,000
I use CentOS 7 and would like to know if it is possible to block login via SSH (sshd) for a certain period of time after a certain number of attempts. EXAMPLE: after 3 wrong attempts, login is blocked for 15 minutes. Thanks! =D
As the comment above said, fail2ban is worth looking at. It prevents brute-force attempts and supports whitelisting, so it should fit your criteria.
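For the exact example in the question, a jail along these lines in /etc/fail2ban/jail.local would ban an IP for 15 minutes (900 s) after 3 failures within 10 minutes; the values shown are illustrative:

```ini
[sshd]
enabled  = true
maxretry = 3
findtime = 600
bantime  = 900
```

Whitelisting trusted addresses is then a matter of adding them to `ignoreip` in the same file.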
sshd - Block login via ssh (sshd) for a certain period of time
1,604,442,307,000
I have two machines A and B within the organization network. I have openssh-server installed and SSH running on both machines. Both machines run Ubuntu 16.04. I am able to ssh from A to B without any password. I tried setting up password-less SSH on B as well; ssh-copy-id does not work. Also, ssh from B to A does not work and always results in a timeout. The /etc/ssh/ssh_config is exactly the same on both machines. Also, I am able to ssh from B to C, where C is in the same network as A and B. Any pointers?

ssh -v [email protected]
OpenSSH_7.2p2 Ubuntu-4ubuntu2.8, OpenSSL 1.0.2g 1 Mar 2016
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: Connecting to 10.x.x.2 [10.x.x.2] port 22.
debug1: connect to address 10.x.x.2 port 22: Connection timed out
ssh: connect to host 10.x.x.2 port 22: Connection timed out
If the hosts are on the same network, the most probable causes are: the SSH service is not started, or traffic is blocked on the SSH server. According to the comments, the traffic was blocked by A, disallowing access from both B and C.
SSH: A to B works, B to A does not
1,604,442,307,000
I am trying to figure out an issue I have been having. I am using a web panel called TinyCP, which has a table displaying failed login attempts via SSH. All SSH attempts, failed or successful, show as coming from my NAT. This gives them all the same IP address and prevents me from using fail2ban to try to stop penetration attempts. I had this same issue on my web server and fixed it by using mod_remoteip for Apache to map X-Forwarded-For to the remote IP. Is it possible to do something like this with SSH, so I am able to retrieve the actual public IP instead of the NAT address? Example:
No. Performing network address translation (NAT) is conceptually very different from using an HTTP proxy server. There is no way to tell the remote's private local IPv4 address; it would defeat the very purpose of NAT (that is, aggregating a network into a single host). On a side note: FTP reveals the remote's address (and it has led to problems with NATed networks ever since). I just checked the SSH memo – the remote's private IP address is never transmitted (and there is no reason to do so). Change your network configuration so that you can perform your tests with direct (not NATed) connections.
Ubuntu SSH Auth Log Shows Proxy IP
1,604,442,307,000
If there's an SSH setup like this: Host A -> Host B <- Host C, where Host C is behind a NAT/firewall and opens a reverse tunnel to Host B, and Host A connects to Host C via Host B (ProxyCommand). Everything is IPv4-only. Is there a simple way to establish IP(v6) connectivity between Host A and Host C without requiring extensive root access and intrusive methods on any of those hosts? EDIT: Basically something similar to:

hostC$ ssh -R 1234:localhost:22 hostB    # establish reverse tunnel for Host C on Host B
hostA$ ssh -o"ProxyCommand=ssh hostB nc localhost 1234"    # connect via Host B to Host C
hostA$ ping6 <ipv6>    # this would validate the connectivity to Host C

And the idea would be for the IP(v6) network to be as transient as possible. A service listening on ::1 on Host C should be reachable from Host A.
It looks like you are confusing SSH port forwarding with VPNs. These are very different concepts with different capabilities. SSH port forwarding lets one server (IP and port) masquerade as another server (IP and port). From your diagram, this would let Host A think it was talking to Host B (port 1234) when it's actually talking to Host C (port 22), because Host B is forwarding the traffic. Note that Host A never knows Host C exists. I think what you are actually after is a VPN, where Host A can talk to Host C (knowing it is talking to Host C) and all the traffic is sent via Host B. You can't do this with ssh alone. Typically all hosts (A, B and C) would need the same VPN software installed (e.g. OpenVPN). B would be set up as the VPN server, A and C as clients. There is one exception: sshuttle will let an SSH server act like a VPN server, and as long as you have sshuttle on the client, the server doesn't need to know. However, it may not work in the configuration you're after; it probably won't let Hosts A and C talk to each other.
Using OpenSSH to create overlay IPv6 network [closed]
1,604,442,307,000
If I'd like another user to be able to SSH into my EC2 instance, are there any pros/cons between generating an SSH key for that user on the EC2 instance and then copying the private key to that user's local computer, versus the new user generating an SSH key on their local computer and copying the public key to the EC2 instance's authorized_keys? Would it be more secure if the user just generates the SSH key on their local computer, since you'd only be copying over the public key and wouldn't have to copy a private key at all?
Before I outline any benefits of either method, I will point out that the industry standard for this type of thing is to generate the key pair on the client system, not the server, largely because it provides a much better user experience. The advantages and disadvantages pretty much complement each other, so I'll just list the advantages of each method.

Generating the keys on the client:
- Direct, trivial integration with ssh. If you generate the key with ssh itself and it's your only key, it automatically ends up in the right place to be used by default, which simplifies things for non-technical users.
- In theory, the quality of entropy used to generate the key will probably be better on the client system than on the EC2 instance (VMs have notoriously low entropy).
- The user always knows exactly where the private key is, which is important for verifying the security of the key.
- If the user already has an SSH key (and as a general rule, people using SSH regularly often do), they can just send over the public key for that instead of having to deal with managing multiple key pairs (which is a pain).

Generating the keys on the EC2 instance:
- It's trivial to copy the key to the required location on the EC2 instance.
- Lower load on the client system.
- It's easier to enforce minimum security requirements.

Note that I explicitly did not list anything above about the security implications of copying a private key instead of a public one. This is because the security implications depend entirely on how secure the mechanism you use to copy them is.
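The client-side flow recommended above looks like this in practice (the key path is illustrative, and the ssh-copy-id target host is a placeholder):

```shell
# Generate the key pair on the client; the private key never leaves this machine
ssh-keygen -q -t ed25519 -N '' -f /tmp/ec2_demo_key
# Then, normally:
#   ssh-copy-id -i /tmp/ec2_demo_key.pub user@ec2-instance
# which appends only the .pub file to the instance's ~/.ssh/authorized_keys
ls -l /tmp/ec2_demo_key /tmp/ec2_demo_key.pub
```

Only the `.pub` half ever crosses the network, which is the security property the question is asking about.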
pros/cons between generating ssh keys on client vs server
1,604,442,307,000
The server I am logging into does not allow setting up SSH keys, so I have to type in a password + a code generated from a device every time I do an ssh. What I used to be able to do on my Mac is that I would ssh to the machine, and it only asks me to type in the password the first time. After that, whenever I open a new terminal window or tab, I can ssh to the machine without having to type the password again. I recently upgraded to the new Mac OSX. I don't know what has changed but now I have to type the password every time I open a new terminal window. I don't know anything about system security, so any help or explanation is greatly appreciated.
Enable multiple sessions over a single connection in ~/.ssh/config:

Host *
    ControlMaster auto
    ControlPath /tmp/ssh-%r@%h:%p
    ControlPersist yes
SSH without retyping password in new terminal window/tab
1,604,442,307,000
I have this weird situation: on a CentOS 6.6 box, in my sshd_config, I have the line AllowGroups foo bar baz, just like on all the other servers. However, when user baz (whose group is baz) tries to log in on this one server, login is denied. In the logs I see:

May 17 16:27:03 myserver sshd[6172]: User baz from other.server.com not allowed because none of user's groups are listed in AllowGroups

And the user cannot log in. Of course I have restarted the SSH daemon to make sure the config is picked up. Also, if I completely remove the AllowGroups line from sshd_config, login works. Now I am wondering why sshd is not picking up this group (also, users within the group bar can log in with AllowGroups active). Also, the (seemingly) exact same setup works on other hosts. The configuration is deployed via Puppet, so it really should be identical. Any ideas? Edit: as requested below, the output of:

root@foobar:/etc/ssh $ groups baz
baz : nburoot

Expected would be baz as the group... so now I need to check why the group is wrong (even though /etc/group has it right). I will post the solution as an answer.
Thanks to @Jakuje I checked the group membership with groups baz, which showed that the effective group is not what I was expecting from the entry in /etc/group. As we also have Centrify installed, I looked at its config, which seemed correct (baz was listed in /etc/centrifydc/groups.ignore). I restarted the agent (service centrifydc restart), which fixed the issue. baz now has the correct group and can therefore log in.
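A quick way to see this kind of discrepancy is to compare what the NSS stack (local files, Centrify, LDAP, ...) resolves against /etc/group alone. Shown here for root, since that user always exists; substitute the affected user (e.g. baz):

```shell
# What the name-service stack as a whole resolves
getent group root > /tmp/nss_group.txt
# What /etc/group alone says
grep '^root:' /etc/group > /tmp/file_group.txt
# The user's effective group membership
id -nG root > /tmp/eff_groups.txt
cat /tmp/eff_groups.txt
```

If the `getent` and `grep` lines disagree, something other than /etc/group (here, the Centrify agent) is supplying the group.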
sshd AllowGroups group not granting access [closed]
1,604,442,307,000
How do you delete ssh if you suspect a root administrator is controlling your MacBook, and you are under remote control via SSH?
How do you delete ssh if you suspect a root administrator is controlling your MacBook, and you are under remote control via SSH? Deleting (literally) ssh won't help. You would need to delete / disable sshd. The ssh command is for outgoing SSH sessions; you are dealing with incoming SSH sessions. But deleting / disabling sshd isn't sufficient. If someone has "owned" your system, then there is a good chance that they have installed other stuff that provides other ways of controlling your machine. The only way to be sure is to do a complete reinstall of all system software and check for damage / infections in non-system software. A short-term workaround is to disconnect from all networks. This will at least interrupt the (supposed) hacker's active control of your system. However, that could be tricky, given that most modern mobile systems are capable of using WiFi, etcetera, and you may not be able to control the access points that it is using.
Under remote (ssh) .. How do I remove all remotes and root logins [closed]
1,604,442,307,000
ssh is still asking for a password, even though I did everything by the book. I have included all output, right from the start. Any ideas? Thanks! Gary Generating public/private rsa key pair and checking permissions on local host Edit: => As it turned out, this was the problem. The key pair needed to be generated on the remote machine, not on the local machine, as «Mat» pointed it out in the very first comment. Please read the many comments in the solution if you need to know how we got there. on local computer: mms: admin$ ssh-keygen -t rsa Generating public/private rsa key pair. Enter file in which to save the key (/Users/admin/.ssh/id_rsa): Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /Users/admin/.ssh/id_rsa. Your public key has been saved in /Users/admin/.ssh/id_rsa.pub. The key fingerprint is: SHA256:ekIFdhbYVGnWsRcpyhPXRPDF5LTqYI+u6l3URsIjC90 [email protected] The key's randomart image is: +---[RSA 2048]----+ | o+=o.oo*+++| | ..+o B +oo=o| | ..* E.oo..| | .. * =.. | | . S. = + | | . . o * | | o . o o | | o. o | | .o.o.. | +----[SHA256]-----+ mms: admin$ pwd && ls -al /Users/admin/.ssh total 16 drwx------ 4 admin staff 136 Dec 26 09:37 . drwxr-xr-x+ 32 admin staff 1088 Dec 26 08:53 .. -rw------- 1 admin staff 1675 Dec 26 09:37 id_rsa -rw-r--r-- 1 admin staff 401 Dec 26 09:37 id_rsa.pub Copying public key:   (from remote host, because remote host cannot be remotely accessed) server:.ssh ahase$ scp [email protected]:.ssh/id_rsa.pub ~/.ssh/authorized_keys The authenticity of host 'domain-of-local-computer.com (123.456.789.012)' can't be established. RSA key fingerprint is 1f:14:32:84:c4:f8:4e:25:df:2d:56:49:e6:e5:79:1d. Are you sure you want to continue connecting (yes/no)? yes Warning: Permanently added 'domain-of-local-computer.com,123.456.789.012' (RSA) to the list of known hosts. 
Password: id_rsa.pub 100% 401 0.4KB/s 00:00 Copying private key and checking permissions:    Edit (as per suggestion) server:.ssh ahase$ scp [email protected]:.ssh/id_rsa ~/.ssh/id_rsa Password: id_rsa 100% 1675 1.6KB/s 00:00 server:.ssh ahase$ ls -al server:.ssh ahase$ scp [email protected]:.ssh/id_rsa ~/.ssh/id_rsa Password: id_rsa 100% 1675 1.6KB/s 00:00 server:.ssh ahase$ ls -al total 24 drwx------ 5 ahase staff 170 26 Dez 12:07 . drwxr-xr-x+ 18 ahase staff 612 10 Dez 09:19 .. -rw------- 1 ahase staff 401 26 Dez 09:58 authorized_keys -rw------- 1 ahase staff 1675 26 Dez 12:07 id_rsa -rw-r--r-- 1 ahase staff 410 26 Dez 09:58 known_hosts ssh still asking for password (-vvv output) [edit after suggested changes] server:.ssh ahase$ ssh -vvv [email protected] OpenSSH_5.2p1, OpenSSL 0.9.8k 25 Mar 2009 debug1: Reading configuration data /etc/ssh_config debug2: ssh_connect: needpriv 0 debug1: Connecting to domain-of-local-computer.com [123.456.789.012] port 22. debug1: Connection established. debug1: identity file /Users/ahase/.ssh/identity type -1 debug3: Not a RSA1 key file /Users/ahase/.ssh/id_rsa. 
debug2: key_type_from_name: unknown key type '-----BEGIN' debug3: key_read: missing keytype debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug2: key_type_from_name: unknown key type '-----END' debug3: key_read: missing keytype debug1: identity file /Users/ahase/.ssh/id_rsa type -1 debug1: identity file /Users/ahase/.ssh/id_dsa type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_6.9 debug1: match: OpenSSH_6.9 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.2 debug2: fd 3 setting O_NONBLOCK debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: 
aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: [email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256,ssh-ed25519 debug2: kex_parse_kexinit: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected] debug2: kex_parse_kexinit: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected] debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1 debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1 debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: found hmac-sha1 debug1: kex: server->client aes128-ctr hmac-sha1 none debug2: mac_setup: found hmac-sha1 debug1: kex: client->server aes128-ctr hmac-sha1 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<2048<8192) sent debug1: expecting 
SSH2_MSG_KEX_DH_GEX_GROUP debug2: dh_gen_key: priv key bits set: 158/320 debug2: bits set: 1048/2048 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug3: check_host_in_hostfile: filename /Users/ahase/.ssh/known_hosts debug3: check_host_in_hostfile: match line 1 debug3: check_host_in_hostfile: filename /Users/ahase/.ssh/known_hosts debug3: check_host_in_hostfile: match line 1 debug1: Host 'domain-of-local-computer.com' is known and matches the RSA host key. debug1: Found key in /Users/ahase/.ssh/known_hosts:1 debug2: bits set: 1023/2048 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /Users/ahase/.ssh/identity (0x0) debug2: key: /Users/ahase/.ssh/id_rsa (0x0) debug2: key: /Users/ahase/.ssh/id_dsa (0x0) debug1: Authentications that can continue: publickey,keyboard-interactive debug3: start over, passed a different list publickey,keyboard-interactive debug3: preferred publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Trying private key: /Users/ahase/.ssh/identity debug3: no such identity: /Users/ahase/.ssh/identity debug1: Trying private key: /Users/ahase/.ssh/id_rsa debug1: read PEM private key done: type RSA debug3: sign_and_send_pubkey debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey,keyboard-interactive debug1: Trying private key: /Users/ahase/.ssh/id_dsa debug3: no such identity: /Users/ahase/.ssh/id_dsa debug2: we did not send a packet, disable method debug3: authmethod_lookup keyboard-interactive 
debug3: remaining preferred: password debug3: authmethod_is_enabled keyboard-interactive debug1: Next authentication method: keyboard-interactive debug2: userauth_kbdint debug2: we sent a keyboard-interactive packet, wait for reply debug2: input_userauth_info_req debug2: input_userauth_info_req: num_prompts 1 Password: debug3: packet_send2: adding 32 (len 21 padlen 11 extra_pad 64) debug2: input_userauth_info_req debug2: input_userauth_info_req: num_prompts 0 debug3: packet_send2: adding 48 (len 10 padlen 6 extra_pad 64) debug1: Authentication succeeded (keyboard-interactive). debug1: channel 0: new [client-session] debug3: ssh_session2_open: channel_new: 0 debug2: channel 0: send open debug1: Requesting [email protected] debug1: Entering interactive session. debug1: client_input_global_request: rtype [email protected] want_reply 0 debug2: callback start debug2: client_session2_setup: id 0 debug2: channel 0: request pty-req confirm 1 debug2: channel 0: request shell confirm 1 debug2: fd 3 setting TCP_NODELAY debug2: callback done debug2: channel 0: open confirm rwindow 0 rmax 32768 debug2: channel_input_status_confirm: type 99 id 0 debug2: PTY allocation request accepted on channel 0 debug2: channel 0: rcvd adjust 2097152 debug2: channel_input_status_confirm: type 99 id 0 debug2: shell request accepted on channel 0 Last login: Sat Dec 26 12:22:40 2015 from 123.456.789.012 mms:~ admin$ I can't look into the log files (/var/log/auth.log or /var/log/daemon.log do not exist and I don't know where they are located). Local computer is a Mac running 10.10.5 and remote computer is a Mac running 10.6 (which can't be changed). Thanks!
(I'm wondering if the term "[email protected]" could cause any problems. That's the name of the local host computer in the local network; fritz.box is the router's name.) No, it is just a comment.

debug1: identity file /Users/name/.ssh/id_rsa type -1
[...]
debug1: Trying private key: /Users/name/.ssh/id_rsa
debug3: no such identity: /Users/name/.ssh/id_rsa

Your client is not using the key. As @Mat commented: on the client you need an accessible ~/.ssh/id_rsa, and on the server the ~/.ssh/authorized_keys. You set it up the other way round.
SSH still asking for password even after I have tried everything (that I know of)
1,604,442,307,000
How do I set up a remote internet web server using CentOS 7 so that it is completely shielded by a VPN? Here is the use case:
1.) ALL requests for ANY interaction with the web server would have to come through the VPN from a known user, including http/https requests.
2.) Each server might have only 20 users, who would connect to the server over the internet to work with highly sensitive data that needs tight security.
3.) The users would not actually log into the OS.
4.) Instead, the users would simply make http/https requests from pre-registered devices that also contain a unique key identifying the user.
5.) There would be one administrator who logs into the OS remotely, but all other users would just have secure http/https access.
6.) If any request came from a non-known device outside the VPN, the request would be bounced off as if the server were not there.
7.) But the server is connected to the internet, and all VPN connections come through the internet.
I have read that OpenSSH is a VPN in its current version. I like that it seems to let you use 1024-bit encryption keys, but I get the impression I can only use OpenSSH for remote logins to the OS and not for locking down all routes into the machine, including http/https requests. Is OpenSSH capable of locking up the entire server as described above? I am starting to read about OpenVPN. I see that OpenVPN requires a license fee. The license fee is not impossible, but I would like to know if there are free options before trying a paid option. I also want to make sure OpenVPN could do what I describe above. What I have read about PPTP seems to imply that it only supports 128-bit encryption keys, and it may not do everything described above.
From what I know, you can't implement every feature you require without custom / commercial products to lock down everything. Here are some thoughts to keep in mind while elaborating on how to meet the required criteria: Typical VPN solutions add routes. For routes to be added, the user needs administrative rights on the client computer. This isn't always the case, especially when using hardware provided by the employer. Depending on the implementation, existing VPN solutions have their workarounds regarding that topic. As far as I know, OpenVPN registers a Windows service while installing the client, which adds the routes as requested by the actual client operated by the user. Tunnelblick (an OS X client for OpenVPN) also does some voodoo to get the routing to work, and at some point needs administrative rights as well. OpenVPN therefore at least needs some effort while configuring the clients to work around that issue. Since at least OpenVPN can add routes post-connect, the user could actually add additional routes himself. Let's say the OpenVPN server pushes routes to a few hosts that should be accessible through OpenVPN. This results in the client adding the routes to those hosts. There is no mechanism prohibiting a user from adding even more routes to additional hosts. So if those routes are actually valid, access to unauthorized backends would be possible. To prevent this, the firewall on the server would have to be airtight, allowing nothing but traffic to permitted hosts. If the infrastructure you want to provide has to be available to employees who often access the server from hotels, VPN might not be usable at all. There are quite a few hotels which don't forward the traffic needed for VPN connections; in some cases, just ports 80 and 443 are allowed. Moving away from my OpenVPN thoughts: your requirement to only allow connections from "trusted" devices is not possible as far as I can tell.
This would be implemented through a firewall (usually iptables in the case of CentOS). The rule that you desire would DROP packets from untrusted devices. Unfortunately, iptables has no way of checking whether a device is trusted or not, since such information is not part of the packet. There are, however, solutions to filter in a similar way, at least within a compatible network (ask Google about PitBull Foundation by General Dynamics if you are interested in this topic). Now moving to thoughts on how I would actually implement your scenario, although lacking some of the required criteria. First of all, OpenSSH is not a VPN solution. It is able to establish a VPN-like tunnel, but to my knowledge other VPN solutions are superior to the OpenSSH approach regarding performance and functionality. Having said that, you mention that the users typically will just be using http/https services. Those can easily be accessed through SSH tunnels without any VPN-like functionality. I won't go into great detail on how to configure this, but I'll point out the general approach using OpenSSH. The first thing that might make sense would be to configure the SSH daemon to use port 80 or 443. This would enable users to connect to the server from almost anywhere, even hotels. Regarding security, the sshd configuration allows you to add rules matching users, hosts etc. This means you can actually limit the users' traffic to specific hosts behind the OpenSSH server without having to filter this in the firewall. Such a rule could look like this:

Match User JohnDoe
    AllowTcpForwarding yes
    PermitOpen internal.resource:80 other.internal:443

Secure authentication is mandatory. OpenSSH can make use of PAM modules to authenticate the user. There are several possibilities, like authenticating against LDAP, Radius etc. A nifty solution that provides 2-factor authentication in a simple way would be the Google Authenticator, which has a PAM module as well.
There are several resources pointing out how to configure OpenSSH to use the Google Authenticator. Actually, the hardest part of this solution is what happens on the client side. The client needs an SSH client which supports the usage of tunnels. On top of that, the user has to know how to use that client and how to connect to the established tunnels. I open my tunnels manually from a shell, but the typical user possibly won't be capable of such tasks without training. At first glance, this client might be of use: Auto(matic) SSH Tunnel Manager. To match my suggested solution to your requirement list:
1. All requests would go through the SSH tunnel; just one port of the SSH server would have to be accessible through the internet. Check.
2. If you just create and allow those 20 users, no one else can connect. Check.
3. If configured correctly, users won't get a shell. A shell is not needed to establish tunnels. Check.
4. Using 2-factor authentication would be my suggested alternative, but you could also do pubkey authentication with one key per device. But: those keys can be transferred to other devices. So without any further research: no check, but traffic could be limited to http/https.
5. root, or whatever user is specified, could be configured to get a shell upon connecting. Check.
6. Not possible.
7. In the case of my solution it's not a "VPN", but yes, access would work that way. Check.
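For the 2-factor suggestion, the wiring typically comes down to two small config fragments (the module name is the one shipped by the google-authenticator-libpam package; the exact lines below are illustrative, not a verified CentOS 7 recipe):

```
# /etc/pam.d/sshd
auth required pam_google_authenticator.so

# /etc/ssh/sshd_config
ChallengeResponseAuthentication yes
UsePAM yes
```

Each user then runs the google-authenticator setup tool once to create their per-user secret.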
CentOS 7 VPN server
1,604,442,307,000
When I open an SSH connection to our database server and leave it open for a longer time, the connection gets lost at some point. I don't know why or how, but this happens a lot. I still see the prompt, but can't enter anything; I usually kill the terminal and open a new one. If I wait much longer, I get an error message. However, when I log in to MySQL after logging in with SSH, this doesn't happen: the connection stays open. If this is possible, I guess it's a matter of keeping the connection alive. How does this work, and can I configure SSH to do this?
You can use ServerAliveInterval (on the client) and ClientAliveInterval / ClientAliveCountMax (on the server) to solve your timeout issue. Please see this post
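On the client side the keepalive settings go in ~/.ssh/config; with the illustrative values below, the client sends a keepalive probe every 60 seconds and gives up after 3 unanswered probes (ClientAliveInterval and ClientAliveCountMax are the server-side equivalents in sshd_config):

```
Host *
    ServerAliveInterval 60
    ServerAliveCountMax 3
```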
SSH connection gets lost, but not when logged in to MySQL
1,604,442,307,000
When booting my laptop today I found that ssh (openssh) simply refuses to accept my passphrase when reading my private key. My only idea of how that could've happened is if the HDD has somehow corrupted the key, but I figure that if that is the case I'd probably notice system instability in other ways too. This isn't that much of a chore for me, since I can access all the places where that key is used using some other means, and I've been meaning to generate a new (bigger) key anyway. I'd still like to figure out what happened though, so any ideas on what could've caused this are welcome. I'm absolutely sure that I'm not typoing the passphrase.
Two ideas: First, try to import the key into the ssh-agent with ssh-add $keyfile, to be sure it is really a problem with the key file and not something on the server side. Second, fetch a copy of your private key from your backup and use something like cmp to check whether the file really changed.
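As a sketch of the second idea, cmp exits 0 only when the two files are byte-identical. The paths below are placeholders for your key and its backup copy:

```shell
#!/bin/sh
# Compare the current private key against a backup copy.
# Placeholder paths; adjust to your setup.
keyfile=${1:-"$HOME/.ssh/id_rsa"}
backup=${2:-"/backup/ssh/id_rsa"}
if cmp -s "$keyfile" "$backup"; then
    echo "key file unchanged"
else
    echo "key file differs from backup (or a file is missing)"
fi
```

If the files differ, a corrupted key is plausible; if they are identical, the passphrase problem lies elsewhere (keyboard layout, agent state, server configuration).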
ssh private key stops accepting my passphrase
1,604,442,307,000
I am using a Docker Ubuntu container to set up an SSH lab on my local machine. I have interactively started the container and installed OpenSSH on it with other tools. I ensured that I forward the container's port 22 to my localhost:2222. docker pull ubuntu docker run -it --name ubunut-ssh-lab -p 2222:22 ubuntu /bin/bash apt-get update apt-get install -y openssh-server Then in sshd_config I have set PubkeyAuthentication to yes, and copied the public key from my local machine to the container's .ssh/authorized_keys file. Restarted the service. I still get root@localhost: Permission denied (publickey). I ran the ssh connection using the -v (verbose) flag. ssh -v -i id_ubuntu_lab -p 2222 root@localhost OpenSSH_9.3p2, LibreSSL 3.3.6 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 21: include /etc/ssh/ssh_config.d/* matched no files debug1: /etc/ssh/ssh_config line 54: Applying options for * debug1: Authenticator provider $SSH_SK_PROVIDER did not resolve; disabling debug1: Connecting to localhost port 2222. debug1: Connection established. 
debug1: identity file id_ubuntu_lab type 0 debug1: identity file id_ubuntu_lab-cert type -1 debug1: Local version string SSH-2.0-OpenSSH_9.3 debug1: Remote protocol version 2.0, remote software version OpenSSH_8.9p1 Ubuntu-3ubuntu0.4 debug1: compat_banner: match: OpenSSH_8.9p1 Ubuntu-3ubuntu0.4 pat OpenSSH* compat 0x04000000 debug1: Authenticating to localhost:2222 as 'root' debug1: load_hostkeys: fopen /Users/sourabhdhingra/.ssh/known_hosts2: No such file or directory debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts: No such file or directory debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts2: No such file or directory debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug1: kex: algorithm: [email protected] debug1: kex: host key algorithm: ssh-ed25519 debug1: kex: server->client cipher: [email protected] MAC: <implicit> compression: none debug1: kex: client->server cipher: [email protected] MAC: <implicit> compression: none debug1: expecting SSH2_MSG_KEX_ECDH_REPLY debug1: SSH2_MSG_KEX_ECDH_REPLY received debug1: Server host key: ssh-ed25519 SHA256:QanR0+0tbO3ombtQ17EvYU/yUoWTXJtBdZz7pPRHD7U debug1: load_hostkeys: fopen /Users/sourabhdhingra/.ssh/known_hosts2: No such file or directory debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts: No such file or directory debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts2: No such file or directory debug1: checking without port identifier debug1: load_hostkeys: fopen /Users/sourabhdhingra/.ssh/known_hosts2: No such file or directory debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts: No such file or directory debug1: load_hostkeys: fopen /etc/ssh/ssh_known_hosts2: No such file or directory debug1: hostkeys_find_by_key_hostfile: hostkeys file /Users/sourabhdhingra/.ssh/known_hosts2 does not exist debug1: hostkeys_find_by_key_hostfile: hostkeys file /etc/ssh/ssh_known_hosts does not exist debug1: hostkeys_find_by_key_hostfile: hostkeys file /etc/ssh/ssh_known_hosts2 does not exist The 
authenticity of host '[localhost]:2222 ([::1]:2222)' can't be established. ED25519 key fingerprint is SHA256:QanR0+0tbO3ombtQ17EvYU/yUoWTXJtBdZz7pPRHD7U. This key is not known by any other names. Are you sure you want to continue connecting (yes/no/[fingerprint])? yes Warning: Permanently added '[localhost]:2222' (ED25519) to the list of known hosts. debug1: rekey out after 134217728 blocks debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug1: SSH2_MSG_NEWKEYS received debug1: rekey in after 134217728 blocks debug1: get_agent_identities: bound agent to hostkey debug1: get_agent_identities: ssh_fetch_identitylist: agent contains no identities debug1: Will attempt key: id_ubuntu_lab RSA SHA256:XghbmDcG+wgAFNV4/BdCxjwRtsnlBsmq9BiKmxEj5hU explicit debug1: SSH2_MSG_EXT_INFO received debug1: kex_input_ext_info: server-sig-algs=<ssh-ed25519,[email protected],ssh-rsa,rsa-sha2-256,rsa-sha2-512,ssh-dss,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,[email protected],[email protected]> debug1: kex_input_ext_info: [email protected]=<0> debug1: SSH2_MSG_SERVICE_ACCEPT received debug1: Authentications that can continue: publickey debug1: Next authentication method: publickey debug1: Offering public key: id_ubuntu_lab RSA SHA256:XghbmDcG+wgAFNV4/BdCxjwRtsnlBsmq9BiKmxEj5hU explicit debug1: Authentications that can continue: publickey debug1: No more authentication methods to try. root@localhost: Permission denied (publickey). I got above logs and error! I am attaching my sshd_config file here for analysis. Please check and help what is going wrong here. PermitRootLogin without-password PubkeyAuthentication yes AuthorizedKeysFile .ssh/authorized_keys .ssh/authorized_keys2 PasswordAuthentication no PermitEmptyPasswords yes Here is the output for ls -la drwxr-xr-x 2 root root 4096 Oct 22 13:54 . drwxr-xr-x 1 root root 4096 Oct 22 13:44 .. 
-rw------- 1 501 dialout 602 Oct 22 13:54 authorized_keys Note: I have only included uncommented lines from sshd_config. What is the issue here? Why am I not able to connect?
When using a Docker Ubuntu container, it is possible the image does not contain a /root directory by default. In that situation, trying cd ~ in an interactive docker session gives the following: root@4e56fee1ea11:/# cd ~ bash: cd: /root: No such file or directory Therefore, perform these additional steps: mkdir /root mv .ssh /root service ssh restart And then log in from your local machine: ssh -p 2222 root@localhost
Setting up a SSH Lab using docker on my local machine
1,604,442,307,000
I have a remote SSH server with several custom utilities for my work. However, there are times when I feel lazy and don't want to start a full SSH session just to execute a single command. This motivated me to create a multi-call script à la BusyBox that I could make symlinks to which would look at the name it was called with and run the matching command on my server and forward any arguments I give it. I came up with this first: #!/bin/bash exec ssh -q -t my-server "$(basename "$0")" "${@@Q}" but that doesn't set my PATH correctly, so I can't use all my programs. Next, I tried going through bash as a login shell: exec ssh -q -t my-server bash -l "$(basename "$0")" "${@@Q}" but I got an error like this whenever I ran something that isn't a bash script: /bin/ls: /bin/ls: cannot execute binary file I tried passing a command string to bash: exec ssh -q -t my-server bash -l -c "$(basename "$0")" "${@@Q}" This properly sets the PATH and executes binaries, but any arguments I pass to the binary are lost. I tried to pass in the arguments as a herestring: exec ssh -q -t my-server bash -l <<< "$(basename "$0") ${*@Q}" but interactive programs close immediately. Is what I want to do possible? How can I run an arbitrary remote command over SSH that is not on the standard PATH and make it act like I'm logged in normally?
It looks like I needed to add another level of escaped quotes to make bash run the command properly. So far, this works exactly as I want it to. #!/bin/bash exec ssh -q -t my-server bash -l -c \""$(basename "$0") ${*@Q}"\"
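To see how the multi-call dispatch behaves without touching a server, here is a self-contained sketch; the echo stands in for the ssh invocation, and all paths are temporary:

```shell
#!/bin/sh
# Self-contained demo of BusyBox-style multi-call dispatch.
bindir=$(mktemp -d)
cat > "$bindir/remote-run" <<'EOF'
#!/bin/sh
# In the real script this line would be the exec ssh ... invocation.
echo "would run remotely: $(basename "$0") $*"
EOF
chmod +x "$bindir/remote-run"
ln -s "$bindir/remote-run" "$bindir/ls"   # one symlink per remote command
"$bindir/ls" -la /tmp                     # dispatches based on the link name
# prints: would run remotely: ls -la /tmp
```

In practice you would create one symlink per remote utility in a directory on your PATH, all pointing at the wrapper script.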
How to make a script to forward commands to an SSH server?
1,679,570,278,000
On a remote server, I would like access via ssh only with a pem key, which is password protected, but I'm being asked for the user account password as well. I created the account I want to use when logging in from my host with the following command: sudo useradd -d /home/admin -m -G sudo admin Since my pem key is protected by a password, I don't want to set a second one (the key password and the user password); I want only the key file password to be prompted, instead of both. How can I do that? EDIT1: I generated my pem key with PuttyGen, saved the public and private key, exported in OpenSSH key (pem) format. I then added the public key to the ~/.ssh/authorized_keys file. EDIT2: I can access the remote server, but I have to type in 2 passwords (for the key, and the user). I just want to type in one password, preferably the one for the pem key. $ ssh -i admin.pem [email protected] Enter passphrase for key 'admin.pem': [email protected]'s password: EDIT3: this is the result of ssh -vvv -i admin.pem [email protected] OpenSSH_8.4p1 Debian-5+deb11u1, OpenSSL 1.1.1n 15 Mar 2022 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files debug1: /etc/ssh/ssh_config line 21: Applying options for * debug3: expanded UserKnownHostsFile '~/.ssh/known_hosts' -> '/home/host/.ssh/known_hosts' debug3: expanded UserKnownHostsFile '~/.ssh/known_hosts2' -> '/home/host/.ssh/known_hosts2' debug2: resolving "dcimtest.cloud" port 22 debug2: ssh_connect_direct debug1: Connecting to dcimtest.cloud [12.12.123.123] port 22. debug1: Connection established. 
debug1: identity file admin.pem type -1 debug1: identity file admin.pem-cert type -1 debug1: Local version string SSH-2.0-OpenSSH_8.4p1 Debian-5+deb11u1 debug1: Remote protocol version 2.0, remote software version OpenSSH_7.6p1 Ubuntu-4ubuntu0.7 debug1: match: OpenSSH_7.6p1 Ubuntu-4ubuntu0.7 pat OpenSSH_7.0*,OpenSSH_7.1*,OpenSSH_7.2*,OpenSSH_7.3*,OpenSSH_7.4*,OpenSSH_7.5*,OpenSSH_7.6*,OpenSSH_7.7* compat 0x04000002 debug2: fd 3 setting O_NONBLOCK debug1: Authenticating to dcimtest.cloud:22 as 'admin' debug3: hostkeys_foreach: reading file "/home/host/.ssh/known_hosts" debug3: record_hostkey: found key type ECDSA in file /home/host/.ssh/known_hosts:5 debug3: load_hostkeys: loaded 1 keys from dcimtest.cloud debug3: order_hostkeyalgs: have matching best-preference key type [email protected], using HostkeyAlgorithms verbatim debug3: send packet: type 20 debug1: SSH2_MSG_KEXINIT sent debug3: receive packet: type 20 debug1: SSH2_MSG_KEXINIT received debug2: local client KEXINIT proposal debug2: KEX algorithms: curve25519-sha256,[email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,ext-info-c debug2: host key algorithms: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,[email protected],ssh-ed25519,[email protected],rsa-sha2-512,rsa-sha2-256,ssh-rsa debug2: ciphers ctos: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected] debug2: ciphers stoc: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected] debug2: MACs ctos: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1 debug2: MACs 
stoc: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1 debug2: compression ctos: none,[email protected],zlib debug2: compression stoc: none,[email protected],zlib debug2: languages ctos: debug2: languages stoc: debug2: first_kex_follows 0 debug2: reserved 0 debug2: peer server KEXINIT proposal debug2: KEX algorithms: curve25519-sha256,[email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1 debug2: host key algorithms: ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519 debug2: ciphers ctos: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected] debug2: ciphers stoc: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected] debug2: MACs ctos: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1 debug2: MACs stoc: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1 debug2: compression ctos: none,[email protected] debug2: compression stoc: none,[email protected] debug2: languages ctos: debug2: languages stoc: debug2: first_kex_follows 0 debug2: reserved 0 debug1: kex: algorithm: curve25519-sha256 debug1: kex: host key algorithm: ecdsa-sha2-nistp256 debug1: kex: server->client cipher: [email protected] MAC: <implicit> compression: none debug1: kex: client->server cipher: [email protected] MAC: <implicit> compression: none debug3: send packet: type 30 debug1: expecting SSH2_MSG_KEX_ECDH_REPLY debug3: receive packet: type 31 debug1: Server host key: ecdsa-sha2-nistp256 
SHA256:tXow8uxHUporGIyc1suFxLAT92JRXRHO0FHUxgnpwAQ debug3: hostkeys_foreach: reading file "/home/host/.ssh/known_hosts" debug3: record_hostkey: found key type ECDSA in file /home/host/.ssh/known_hosts:5 debug3: load_hostkeys: loaded 1 keys from dcimtest.cloud debug3: hostkeys_foreach: reading file "/home/host/.ssh/known_hosts" debug3: record_hostkey: found key type ECDSA in file /home/host/.ssh/known_hosts:6 debug3: load_hostkeys: loaded 1 keys from 12.12.123.123 debug1: Host 'dcimtest.cloud' is known and matches the ECDSA host key. debug1: Found key in /home/host/.ssh/known_hosts:5 debug3: send packet: type 21 debug2: set_newkeys: mode 1 debug1: rekey out after 134217728 blocks debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug3: receive packet: type 21 debug1: SSH2_MSG_NEWKEYS received debug2: set_newkeys: mode 0 debug1: rekey in after 134217728 blocks debug1: Will attempt key: admin.pem explicit debug2: pubkey_prepare: done debug3: send packet: type 5 debug3: receive packet: type 7 debug1: SSH2_MSG_EXT_INFO received debug1: kex_input_ext_info: server-sig-algs=<ssh-ed25519,ssh-rsa,rsa-sha2-256,rsa-sha2-512,ssh-dss,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521> debug3: receive packet: type 6 debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug3: send packet: type 50 debug3: receive packet: type 51 debug1: Authentications that can continue: publickey,password debug3: start over, passed a different list publickey,password debug3: preferred gssapi-with-mic,publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Trying private key: admin.pem Enter passphrase for key 'admin.pem': debug3: sign_and_send_pubkey: RSA SHA256:svjGE6KxfPWZ3wosEHHgyO6I2hVxxxxxxxxx/NLYBtM debug3: sign_and_send_pubkey: signing using rsa-sha2-512 
SHA256:svjGE6KxfPWZ3wosEHHgyO6I2hVxxxxxxxxx/NLYBtM debug3: send packet: type 50 debug2: we sent a publickey packet, wait for reply debug3: receive packet: type 51 debug1: Authentications that can continue: publickey,password debug2: we did not send a packet, disable method debug3: authmethod_lookup password debug3: remaining preferred: ,password debug3: authmethod_is_enabled password debug1: Next authentication method: password [email protected]'s password: debug3: send packet: type 50 debug2: we sent a password packet, wait for reply debug3: receive packet: type 52 debug1: Authentication succeeded (password). Authenticated to dcimtest.cloud ([12.12.123.123]:22). debug1: channel 0: new [client-session] debug3: ssh_session2_open: channel_new: 0 debug2: channel 0: send open debug3: send packet: type 90 debug1: Requesting [email protected] debug3: send packet: type 80 debug1: Entering interactive session. debug1: pledge: network debug3: receive packet: type 80 debug1: client_input_global_request: rtype [email protected] want_reply 0 debug3: receive packet: type 91 debug2: channel_input_open_confirmation: channel 0: callback start debug2: fd 3 setting TCP_NODELAY debug3: ssh_packet_set_tos: set IP_TOS 0x10 debug2: client_session2_setup: id 0 debug2: channel 0: request pty-req confirm 1 debug3: send packet: type 98 debug1: Sending environment. 
debug3: Ignored env SHELL debug3: Ignored env WSL_DISTRO_NAME debug3: Ignored env WT_SESSION debug3: Ignored env NAME debug3: Ignored env PWD debug3: Ignored env LOGNAME debug3: Ignored env HOME debug1: Sending env LANG = en_US.UTF-8 debug2: channel 0: request env confirm 0 debug3: send packet: type 98 debug3: Ignored env LS_COLORS debug3: Ignored env TERM debug3: Ignored env USER debug3: Ignored env SHLVL debug3: Ignored env WSLENV debug3: Ignored env PATH debug3: Ignored env HOSTTYPE debug3: Ignored env WT_PROFILE_ID debug3: Ignored env OLDPWD debug3: Ignored env _ debug2: channel 0: request shell confirm 1 debug3: send packet: type 98 debug2: channel_input_open_confirmation: channel 0: callback done debug2: channel 0: open confirm rwindow 0 rmax 32768 debug3: receive packet: type 99 debug2: channel_input_status_confirm: type 99 id 0 debug2: PTY allocation request accepted on channel 0 debug2: channel 0: rcvd adjust 2097152 debug3: receive packet: type 99 debug2: channel_input_status_confirm: type 99 id 0 debug2: shell request accepted on channel 0 Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 5.4.0-1098-azure x86_64)
I resolved it by adding my public key to ~/.ssh/authorized_keys using vi instead of nano; perhaps nano added some kind of formatting (such as line wrapping) that prevented the public key from being recognized.
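One way to confirm a pasted key survived the editor intact is to let ssh-keygen parse each entry; a line-wrapped or otherwise mangled key fails the check. This is a sketch, assuming ssh-keygen is available and the default authorized_keys path:

```shell
#!/bin/sh
# Validate every entry in authorized_keys; ssh-keygen -lf fails
# on a wrapped or corrupted public key line.
ak="$HOME/.ssh/authorized_keys"
[ -f "$ak" ] || { echo "no $ak found"; exit 0; }
while IFS= read -r line; do
    case "$line" in ''|'#'*) continue ;; esac   # skip blanks and comments
    printf '%s\n' "$line" | ssh-keygen -lf /dev/stdin >/dev/null 2>&1 \
        || echo "bad entry starting with: $(printf '%s' "$line" | cut -c1-30)..."
done < "$ak"
```

A valid entry prints nothing; any mangled line is flagged so you know exactly which key to re-paste.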
Access to remote server via SSH via pem key and not be asked for user password as well
1,679,570,278,000
My config: Debian 10.9, 4.19.0-19-amd64, OpenSSH_7.9p1 Debian-10+deb10u2, OpenSSL 1.1.1d 10 Sep 2019, autossh 1.4g, GNU bash 5.0.3(1)-release (x86_64-pc-linux-gnu). I start a service in /etc/systemd/system/myshh.service; its ExecStart=/home/user/ssh_run starts my script ssh_run. These are only two of the many ssh and autossh variants I tried. ssh_run script: #!/bin/bash ssh -f -NT -o "ServerAliveInterval=30" -o "ServerAliveCountMax=2" \ -R 5555:localhost:443 -l [USER] [IP] -p [PORT] -i [KEY-WITHOUT-PASSWORD] EXECUTE NEXT COMMAND or #!/bin/bash /usr/bin/autossh -f -NT -o "ExitOnForwardFailure=yes" \ -R 5555:localhost:443 -l [USER] [IP] -p [PORT] -i [KEY-WITHOUT-PASSWORD] EXECUTE NEXT COMMAND When I execute the command alone from the shell, it works: ssh -f -NT ...... When I execute only the script, it works: ./ssh_run When I start the service /etc/systemd/system/myshh.service without -f, it works. All three without executing the next commands. PROBLEM: When I start the service /etc/systemd/system/myshh.service with the script ./ssh_run and use -f for backgrounding, the next commands now execute, but the service exits with status code 0 and there is no ssh or autossh in ps aux | grep [s]sh or ps aux | grep [a]utossh. I checked systemctl status myshh.service and grepped journalctl too; without any error the service stops or restarts with the same result. I tried with and without Restart=30, Restart=always, or Environment="AUTOSSH_GATETIME=0" in the service. I read most of the posts on all the Stack sites and tried the search engines; no solution found. I tried (command), exec, bash -c .... without result. Now my problem/question: how can I execute an ssh/autossh remote port forwarding command in the background, inside a bash script started from a service, so that the next commands still execute?
If someone has the same problem, this post saved my day: How do I figure out why my systemctl service didn't start Add Type=forking to the service file /etc/systemd/system/myshh.service. Now I can execute an ssh/autossh remote port forwarding command inside a bash script with the -f option for background execution, started from a service, and still execute the next/further commands in the script. Other links to posts with relevant information: How to see full log from systemctl status service? Difference between nohup, disown and & Start a background process from a script and manage it when the script ends
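Putting the fix together, a unit along these lines might be used; the path and unit name are the ones from the question, and the sketch assumes the script backgrounds itself with -f:

```
# /etc/systemd/system/myshh.service
[Unit]
Description=Reverse SSH tunnel via ssh_run
After=network-online.target

[Service]
# The script forks (ssh/autossh -f), so tell systemd to expect a fork
# instead of treating the parent's exit as the service stopping:
Type=forking
ExecStart=/home/user/ssh_run
Restart=on-failure
RestartSec=30

[Install]
WantedBy=multi-user.target
```

With Type=simple (the default), systemd considers the ExecStart process itself to be the service, so when ssh -f detaches and the script exits, the unit is treated as finished and its children may be cleaned up.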
Start a service that runs a script with ssh or autossh remote port forwarding in the background ("-f") and then executes the next commands in the script
1,679,570,278,000
My goal is to increase my ssh connection timeout on my server, where I have limited permissions. I do not have permission to even read /etc/ssh/sshd_config (nor append/write) and I do not have sudo. Locally on my PC, I already did in ~/.ssh/config: Host * ServerAliveInterval 300 ServerAliveCountMax 2 However, even after reloading my local SSH daemon (sudo systemctl reload sshd), my connection would break much sooner than expected. My question is: Is there a different way to tell my server to keep my connection alive, other than the standard global solution of modifying /etc/ssh/sshd_config with ClientAliveInterval X?
You need to identify how long it takes for an idle session to get timed out. (If you're getting timeouts across sessions that are not idle then we're chasing the wrong type of solution.) For example, if it's one minute, then set the Keepalive to half that, i.e. 30 seconds. I'd recommend that you don't change the default setting (Host *) but instead create a specific entry for the target host and all the possible aliases that you might use to connect to it. If you also have a default setting (Host *), where for example you set the default target username, ensure that this specific entry is listed before the default. Host myServer myServerAlias myServer.my.domain 10.11.12.13 ServerAliveInterval 30 It's worth taking the time to try and identify the timeout period and halving that, as blindly sending a keepalive every 30 seconds is wasteful.
Increase SSH Connection timeout on server side without permissions
1,679,570,278,000
I have a custom command line script that I want the users to access over SSH. For example when the user logs in with ssh user@server the user should be able to interact with the command line application instead of directly accessing /bin/bash (or any other shell). Are there any configuration changes that should be done to achieve this with OpenSSH?
sshd_config's ForceCommand is what you're looking for. Also, caveat: you say it's a shell script, so no matter what you do, it will still be executed by a shell. This need not be problematic, but it certainly means you need to think about what you do with user input – e.g. eval'ing it should be out of the question.
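A sketch of what that could look like in /etc/ssh/sshd_config, restricted to one account; the username and script path are placeholders:

```
# Apply only to the dedicated application account:
Match User appuser
    ForceCommand /usr/local/bin/myapp
    # Optional hardening while you're at it:
    AllowTcpForwarding no
    X11Forwarding no
```

With this in place, whatever command the client requests is ignored and /usr/local/bin/myapp runs instead (the requested command is still exposed to the script via SSH_ORIGINAL_COMMAND, should it want to inspect it).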
Using ssh authentication with a custom application
1,679,570,278,000
SSH connection failure from a MacBook. This stopped working after trying to migrate to CentOS 8. MacBook-Pro:~ $ ssh -Y -vvv [email protected] -p 32 OpenSSH_8.6p1, LibreSSL 2.8.3 debug1: Reading configuration data /etc/ssh/ssh_config debug1: /etc/ssh/ssh_config line 21: include /etc/ssh/ssh_config.d/* matched no files debug1: /etc/ssh/ssh_config line 54: Applying options for * debug2: resolve_canonicalize: hostname 152.3.36.72 is address debug3: expanded UserKnownHostsFile '~/.ssh/known_hosts' -> '/Users/terry/.ssh/known_hosts' debug3: expanded UserKnownHostsFile '~/.ssh/known_hosts2' -> '/Users/terry/.ssh/known_hosts2' debug1: Authenticator provider $SSH_SK_PROVIDER did not resolve; disabling debug3: ssh_connect_direct: entering debug1: Connecting to 152.3.36.72 [152.3.36.72] port 32. debug3: set_sock_tos: set socket 3 IP_TOS 0x48 debug1: connect to address 152.3.36.72 port 32: Connection refused ssh: connect to host 152.3.36.72 port 32: Connection refused
152.3.36.72 port 32: Connection refused indicates that the host 152.3.36.72 doesn't have a ssh server running on port 32. As you were upgrading the system it is possible that the old ssh configuration with non-standard port 32 was overwritten with a new default configuration. Try connecting to the default port 22. Another possibility is that you have the firewall configured to block access and return ICMP port unreachable.
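To check quickly from the client which port actually has a listener, a small probe helps. This is a sketch using bash's /dev/tcp; the host and ports are the ones from the question:

```shell
#!/bin/sh
# Probe a TCP port: succeeds only if something accepts the connection.
check_port() {
    timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}
host=152.3.36.72
for port in 32 22; do
    if check_port "$host" "$port"; then
        echo "port $port: open"
    else
        echo "port $port: refused or filtered"
    fi
done
```

On the server itself, `ss -tln` (run as root) shows which port sshd is really bound to after the migration.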
Centos 7.9 ssh connection failure from macbook
1,679,570,278,000
For example, when I connect over ssh and execute /usr/libexec/openssh/sftp-server -d /opt/files, I can still reach the / root directory from an sshfs connection. For example: I have a test user, and authorized_keys grants two kinds of access, one entry with full access and one with limited access, e.g.: restrict,command="/usr/libexec/openssh/sftp-server -d /opt/files" ssh-rsa AAA... But with this key I can still mount the root directory: # mkdir /mnt/remote # sshfs test@hostname:/ /mnt/remote # ls /mnt/remote bin boot dev etc home ... I am trying to create an integration with custom software developed in Python; that is why I am trying to use a single user instead of chrooting with different users and different permissions. I want to do it with a single user, delegating access to different directories according to each key.
Just having the -d option for sftp-server will not cause the session to be chrooted. sftp-server has no special privileges on its own, and the chroot() system call requires root privileges (or a specific CAP_SYS_CHROOT capability, if separate capabilities are being used). So it is actually impossible for sftp-server to perform an actual chroot operation, unless it is being run as root. The sftp-server(8) man page says: -d start_directory specifies an alternate starting directory for users. [...] The default is to use the user's home directory. This option is useful in conjunction with the sshd_config(5) ChrootDirectory option. So your sftp-server -d /opt/files is not a "virtual chroot". It is nothing more than a cd /opt/files just before the control of the SFTP session is handed to the remote client. To chroot just one particular user, you could do something like this at the end of your /etc/ssh/sshd_config file: Match User test ChrootDirectory /opt/files If you plan to do actual chrooting, you should read the description of the ChrootDirectory option very carefully: the requirements for the directory used as ChrootDirectory are quite strict. Unfortunately it seems you cannot use the key to determine the directory the session will be chrooted to: although the ChrootDirectory option accepts some %-tokens, the sshd_config(5) man page of even the latest version of OpenSSH says: ChrootDirectory accepts the tokens %%, %h, %U, and %u. And those four tokens mean, respectively: a literal % character, user's home directory, the numeric user ID, and the username. If your goal is e.g. to have the Python application prepare files for multiple customers, that's what user groups are for! On many modern Linux distributions, each user is created along with a group dedicated for that user, with a group name equal to the username. You could set up your user accounts this way:

User        Primary group   Secondary groups
pythonapp   pythonapp       test1,test2,test3...
test1       test1           (none)
test2       test2           (none)
test3       test3           (none)
...         ...             ...

If the test1, test2, test3 etc. users have their home directories set up with permissions of at least 710 (drwx--x---) and each home directory has a group-writeable sub-directory with permissions 2770 (drwxrws---), then user pythonapp will have access to all those group-writeable sub-directories, but the test... users will have no access to each others' home directories nor the group-writeable directories within them, because there will be no common group membership between the test... users. The setgid bit on the group-writeable subdirectories will ensure that any files created by the pythonapp user will get assigned to the user-specific group so the test... users will never even see the name of the pythonapp group. Of course, if you have hundreds or thousands of customers, this approach can be difficult to scale that far.
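The permission bits described above can be demonstrated on a scratch directory. A real deployment would additionally create the users and groups (useradd/usermod, which needs root); this sketch only shows the modes:

```shell
#!/bin/sh
# Demonstrate the layout: home directory 710, group-writeable subdir 2770.
base=$(mktemp -d)                # stand-in for /home
mkdir -p "$base/test1/inbox"
chmod 710 "$base/test1"          # drwx--x---: group may traverse, not list
chmod 2770 "$base/test1/inbox"   # drwxrws---: setgid keeps files in the group
stat -c '%a %n' "$base/test1" "$base/test1/inbox"
```

In production, the home directory's group would be the per-user group, and pythonapp would be a secondary member of every such group, so the 710 traverse bit plus the 2770 subdirectory gives it exactly the access described.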
openssh/sftp-server virtual chroot does not work
1,679,570,278,000
I am trying to ssh from an Ubuntu machine ( VM on Win 10) to a linux server using public keys (DSA). However the OpenSSH client on the Ubuntu does not try public keys as authentication method even though I have added below lines to /etc/ssh/ssh_config: PubkeyAuthentication yes PubkeyAcceptedKeyTypes +ssh-dss. The permissions of .ssh directory is set to 700 and the id_dsa file is set to 600. Here is the debug log: The authenticity of host '************' can't be established. RSA key fingerprint is SHA256:cPAuJmw7PjOgBYDN2TYfFscDVTbcsj0rT6HFJH9SDFI. Are you sure you want to continue connecting (yes/no/[fingerprint])? yes Warning: Permanently added '*****************' (RSA) to the list of known hosts. debug2: bits set: 4095/8192 debug3: send packet: type 21 debug2: set_newkeys: mode 1 debug1: rekey out after 4294967296 blocks debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug3: receive packet: type 21 debug1: SSH2_MSG_NEWKEYS received debug2: set_newkeys: mode 0 debug1: rekey in after 4294967296 blocks debug1: Will attempt key: .ssh/id_dsa explicit debug2: pubkey_prepare: done debug3: send packet: type 5 debug3: receive packet: type 6 debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug3: send packet: type 50 debug3: receive packet: type 51 debug1: Authentications that can continue: gssapi-keyex,gssapi-with-mic debug3: start over, passed a different list gssapi-keyex,gssapi-with-mic debug3: preferred gssapi-with-mic,publickey,keyboard-interactive,password debug3: authmethod_lookup gssapi-with-mic debug3: remaining preferred: publickey,keyboard-interactive,password debug3: authmethod_is_enabled gssapi-with-mic debug1: Next authentication method: gssapi-with-mic debug1: Unspecified GSS failure. Minor code may provide more information No Kerberos credentials available (default cache: FILE:/tmp/krb5cc_1000) debug1: Unspecified GSS failure. 
Minor code may provide more information No Kerberos credentials available (default cache: FILE:/tmp/krb5cc_1000) debug2: we did not send a packet, disable method debug1: No more authentication methods to try. *****************: Permission denied (gssapi-keyex,gssapi-with-mic). Could someone explain why publickey is not among the authentication methods here: debug1: Authentications that can continue: gssapi-keyex,gssapi-with-mic Thanks in advance, Update: I started sshd on a different port (2222) on the server and then I was able to connect. So the issue is with port 22, for some reason the server is not allowing publickey authentication. I see this in the logs, when using port 22: debug1: Authentications that can continue: gssapi-keyex,gssapi-with-mic Here the server is not allowing publickey authentication for my user-id on port 22, however server allows publickey authentication on port 2222 : debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic What could be the reason that the server does not allow publickey authentication for my user id on port 22 ?
The issue was related to my internet connection: it was not allowing publickey authentication on port 22. SSH publickey authentication works fine after switching to another connection.
ssh client not trying publickey authentication on port 22
1,592,395,739,000
I have a system with both a public (e.g server1.foo.bar) and privately-resolvable (e.g. server1.internal.foo.bar) DNS name. SSH connections are only possible via the private IP, but I always think of these hosts in terms of their public name. I would like to: connect to the right IP regardless of whether I remember to use the *.internal.bar pattern save keystrokes I'm aware of the substitution tokens such as %h that can be used to modify the hostname given at the commandline, e.g. Host foo Hostname %h.some.other.domain The behavior I'm looking for would be something like: Host *.foo.bar Hostname %m.internal.foo.bar Where %m gets substituted with just the portion of the given hostname up to the first dot. I've read man 5 ssh_config as well as https://man.openbsd.org/ssh_config and couldn't find the answer, if one even exists. I'm using macOS 10.15.4: $ ssh -V OpenSSH_8.1p1, LibreSSL 2.7.3
I figured out a working method. It's a bit ugly, but it does work. It leverages the ProxyCommand directive (see the manpage) combined with a small bash helper script.

Add sections like this to ~/.ssh/config, one for each domain you want remapped:

    Match final host="!*.corp.foo.com,*.foo.com"
        ProxyCommand ssh_ext2int %h %p

    Match final host="*.quux.biz"
        ProxyCommand ssh_ext2int %h %p

Save this in your $PATH somewhere as ssh_ext2int, and chmod u+x it:

    #!/usr/bin/env bash
    [ $# -eq 2 ] || exit 1
    typeset -A dmap
    dmap=(
        [foo.com]=corp.foo.com
        [quux.biz]=internal.qux.lan
    )
    d=${1#*.}
    h=${1%%.*}
    nd=${dmap[$d]:-$d}
    /usr/bin/nc -w 120 $h.$nd $2 2>/dev/null

Now, ssh server158247.quux.biz should connect you to server158247.internal.qux.lan
How can I configure ~/.ssh/config such that `ssh foo.bar` results in a connection to `foo.internal.bar`?
1,592,395,739,000
I'm not really sure where or to whom I should ask this question, so hopefully you all can help me. I am working on a project in which I would like to add data to the end of each packet before encryption on an SSH client, then take that data off the packet right after decryption on the SSH target. I'd like to do this with all communication during the SSH session, as part of an authentication system I am working on that involves SSH. Currently I'm looking at the OpenSSH GitHub repository and beginning to look through the sshd.c file, but I'm hoping someone can point me to the right file to look at to help me achieve this, or perhaps to a better place to ask this question, where I could reach people who know more about this stuff. Any help would be greatly appreciated - thank you!
An alternative is to leave ssh alone (meaning for example that you can also apply updates and security fixes without recompiling). You then do your out-of-bound communication with a wrapper, using port forwarding instead. That's already been done, check the -M flag in autossh.
Working with SSH/SSL
1,592,395,739,000
I am trying to create a homemade server. I'm running Ubuntu 18.04 and have an Xfinity router. I've forwarded port 22 to a reserved local IP for my server. The problem is that I can connect via a different network (using my iPhone hotspot), but only for about 3 minutes after my router starts up. During that time the Xfinity gateway (https://internet.xfinity.com/network/advanced-settings) is unresponsive, which I guess means it's still starting up. After about three minutes the page loads, and I'm no longer able to connect via ssh. Any help would be super appreciated - thanks all!
Update: the problem was the router’s firewall. Even on low security it blocked incoming traffic. Seems like kind of a gaping vulnerability for the firewall to take longer to start up than the router @xfinity (which is why I was able to ssh only for a few minutes after start up). Oh well, thanks everybody!
SSH only working shortly after router reboot
1,592,395,739,000
Presently I am using this for ControlPath:

    ControlPath /home/user/.ssh/sockets/ssh_mux_%h_%p_%r

If I connect to the hostname 'redishost', it creates a socket named with redishost. If I connect to the same host 'redishost' by its IP address, it creates a socket named with the IP address. Is it possible to use the IP address for all ssh connections in ControlPath, instead of the hostname %h?
After checking the OpenSSH documentation and source, I found that OpenSSH doesn't have any token for the ControlPath expression through which the IP address can be specified. If you want to use one, you can use my repo, in which I edited the OpenSSH source and added the token %x for the resolved IP address. ControlPath then becomes:

    ControlPath /home/user/.ssh/sockets/ssh_mux_%x_%p_%r

ControlPath supported tokens (from the patched source):

    "l", thishost,
    "n", host_arg,
    "p", portstr,
    "x", hostip,
    "r", options.user,
    "u", pw->pw_name,
    "i", uidstr,
    "h", host,

https://github.com/akhilin/openssh-portable/commit/a2d95e090b73f36590e8c189685ce8cea810f49a
ssh ControlPath use ip address instead of hostame %h
1,592,395,739,000
I have the following set up in my ~/.ssh/config:

    match host devbox
        compression yes
        user hari
        port 22
        hostname 192.168.9.7

    match originalhost devbox exec "~/.ssh/check_if_outside_home.sh"
        hostname devbox.harisund.com

The idea is this:

Always connect to 192.168.9.7 (this will work if I am already on the home network)
Connect instead to devbox.harisund.com if I am not within the home LAN

However, with verbose logging, I see this:

     1 OpenSSH_7.2p2 Ubuntu-4ubuntu2.2, OpenSSL 1.0.2g  1 Mar 2016
     2 debug1: Reading configuration data /home/hsundararaja/.ssh/config
     3 debug2: checking match for 'host devbox' host devbox originally devbox
     4 debug3: /home/hsundararaja/.ssh/config line 734: matched 'host "devbox"'
     5 debug2: match found
     6 debug2: checking match for 'originalhost devbox exec "~/.ssh/check_if_outside_home.sh"' host 192.168.9.7 originally devbox
     7 debug3: /home/hsundararaja/.ssh/config line 744: matched 'originalhost "devbox"'
     8 debug1: Executing command: '~/.ssh/check_if_outside_home.sh'
     9 debug1: permanently_drop_suid: 14741
    10 debug3: command returned status 0
    11 debug3: /home/hsundararaja/.ssh/config line 744: matched 'exec "~/.ssh/check_if_outside_home.sh"'
    12 debug2: match found
    13 debug1: /home/hsundararaja/.ssh/config line 839: Applying options for *
    14 debug1: Reading configuration data /etc/ssh/ssh_config
    15 debug1: /etc/ssh/ssh_config line 19: Applying options for *
    16 debug2: resolving "192.168.9.7" port 22
    17 debug2: ssh_connect_direct: needpriv 0
    18 debug1: Connecting to 192.168.9.7 [192.168.9.7] port 22.
    19 debug2: fd 3 setting O_NONBLOCK
    20 debug1: connect to address 192.168.9.7 port 22: Connection timed out
    21 ssh: connect to host 192.168.9.7 port 22: Connection timed out

In line 4, it detects the first stanza in ~/.ssh/config. At this point, hostname gets changed to 192.168.9.7. All good so far. In line 7, it reaches the second stanza. In line 8, it checks if we are outside home, and it returns 0. As expected. Line 12 says it's a match, which means we should change our hostname to devbox.harisund.com. However, in line 16, we see it is still using the local hostname as it was set. Why? Is this expected behavior?
That's just as designed and documented in man 5 ssh_config:

    For each parameter, the first obtained value will be used. The
    configuration files contain sections separated by Host specifications,
    and that section is only applied for hosts that match one of the
    patterns given in the specification. The matched host name is usually
    the one given on the command line (see the CanonicalizeHostname option
    for exceptions).

    Since the first obtained value for each parameter is used, more
    host-specific declarations should be given near the beginning of the
    file, and general defaults at the end.

So in short, your match originalhost can't overwrite a pre-existing hostname; swap the order and you should be fine.
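Applied to the configuration from the question, a sketch of the swapped order (untested, same host names and keyword casing as the question) would be:

```
# ~/.ssh/config -- the conditional override comes first, so its
# hostname is the "first obtained value" when outside the home LAN
match originalhost devbox exec "~/.ssh/check_if_outside_home.sh"
    hostname devbox.harisund.com

match originalhost devbox
    compression yes
    user hari
    port 22
    hostname 192.168.9.7
```

When the exec check succeeds, the first stanza supplies the hostname and the later hostname 192.168.9.7 is ignored; when it fails, only the second stanza applies.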
Changing hostname more than once via ~/.ssh/config
1,592,395,739,000
To set up a dedicated sshd log file on the SSH server PC, I do this:

    vim /etc/rsyslog.conf
        local0.* /var/log/sshd.log

    vim /etc/ssh/sshd_config
        SyslogFacility local0

To create the log file:

    touch /var/log/sshd.log

To restart all services:

    systemctl restart rsyslog
    systemctl restart sshd

To get all the ssh history on the SSH client PC:

    sudo vim /etc/bash.bashrc
        HISTTIMEFORMAT="%Y-%m-%d:%H-%M-%S:whoami: "
        export HISTTIMEFORMAT
        PROMPT_COMMAND='history -a'
    source /etc/bash.bashrc

Now to get all ssh log records:

    history | grep ssh

My question is: how can I set up a dedicated ssh log file on the ssh client PC, instead of the sshd log file on the ssh server PC? Here is my try on the ssh client PC:

    vim /etc/rsyslog.conf
        local0.* /var/log/ssh.log

    vim /etc/ssh/ssh_config
        SyslogFacility local0

To create the log file:

    touch /var/log/ssh.log

To restart all services:

    systemctl restart rsyslog
    reboot

An error occurs when I log in to my VPS from the ssh client PC:

    /etc/ssh/ssh_config: line 56: Bad configuration option: syslogfacility
    /etc/ssh/ssh_config: terminating, 1 bad configuration options

Is there a way to set up a dedicated ssh log file on the ssh client PC instead of logging it in history? The history command can log all the ssh actions taken on my ssh client PC; I want all the log info in a single specified file such as /var/log/ssh.log. Please don't tell me this way:

    history | grep ssh >> /var/log/ssh.log
Do the following on the client:

    script logfile.log

Then:

ssh into the machine
do whatever you need
exit from ssh
exit from script

It will log everything that happens inside the script session to the file specified at the beginning, until you exit. Beware of commands that manipulate the whole screen (vi, top, etc...) as they will generate garbage in the log.
How to set the specified ssh log file in ssh client pc?
1,592,395,739,000
So I have this server that I log on to with my sudoer account via SSH using a key. I have also created another, restricted, account so that I can give another user access to SFTP files into a specific folder. Now, when I try to connect using sftp, it's still spitting "invalid key" back at me. I would like that specific user to be able to SFTP using only username and password. Any help here is appreciated, thanks! Daniel
In /etc/ssh/sshd_config, define the following exception in sshd by adding this block for each user name:

    Match User username
        ChrootDirectory %h
        ForceCommand internal-sftp
        PasswordAuthentication yes

If you have multiple accounts, it's easier to have a group catch:

    Match Group sftp
        ChrootDirectory %h
        ForceCommand internal-sftp
        PasswordAuthentication yes

For the sake of clarity, you meant: no public key requirement. A private key stays on the client's workstation/server. After this, reload sshd and you're good to go. Read more about it at https://wiki.archlinux.org/index.php/SFTP_chroot
Set up SFTP to not require private key [closed]
1,592,395,739,000
I have an openssh-server on Ubuntu 14.04 behind a firewall that I have no admin rights to. I want to create an ssh tunnel from my Kali Linux rolling machine, which uses a tether from a mobile device and is also behind a firewall I have no admin rights to. Is this possible? I have Chrome Remote Desktop, which allows command-line access to the Ubuntu machine; that machine runs an OpenVPN server and an OpenSSH server.
    Ubuntu----Firewall1----Internet----Firewall2----Kali

And you want to SSH to the Ubuntu box from the Kali box, with no access to either firewall. There are two things you need to have.

1) Some way for an inbound connection to pass through Firewall1 and get to the Ubuntu host. Which port is irrelevant, as long as something is NATted in. It should be a TCP port.

2) Firewall2 needs to allow the connection FROM internal TO internet on the same port as #1.

Without both of those, you're not going to achieve your goal.

Possible alternatives

Check IPv6 - if both ends have functional IPv6 then NAT becomes a non-issue. However firewall rules may still need adjusting.

Use a third party - a shell server somewhere on the internet; both Ubuntu and Kali maintain an ssh session with reverse tunnels configured. So from Kali you would SSH to shellbox with a -R option. More info on this at How does reverse SSH tunneling work?

Bypass the firewall by laying in and managing your own connectivity. This might mean installing a DSL or fibre connection at either end. The downside here is cost and permission. You might get away with using a cellular connection, but they get expensive really quickly.

Simply don't. If you're bored at work/school and this is merely a distraction, perhaps you need a more challenging and involving job.
Ssh tunneling fron behind nat firewalls
1,446,135,488,000
When I log in to my Raspberry Pi it shows me this message. I like the fact that it shows the last login, but the info is too long (also because I log in from mobile devices).

    Linux RaspberryPi 4.1.11+ #822 PREEMPT Fri Oct 23 16:14:56 BST 2015 armv6l

    The programs included with the Debian GNU/Linux system are free software;
    the exact distribution terms for each program are described in the
    individual files in /usr/share/doc/*/copyright.

    Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
    permitted by applicable law.
    Last login: Thu Oct 29 17:00:38 2015 from computer.local

Is there a possibility to show this instead?

    Last login: Thu Oct 29 17:00:38 2015 from computer.local

Also, can it show the IP address instead of the resolved name? I know I can see auth.log, but I wonder if it's possible.
This message comes from the standard file /etc/motd, where the administrator can put info for people logging in. Simply empty it, e.g.:

    sudo sh -c '>/etc/motd'

You can get the IP address of the last login with:

    last -i $USER | grep -v 'still logged' | head -1

From the comments below: PrintMotd no in /etc/ssh/sshd_config is the default in Debian, it seems, but has no effect: ssh still shows the /etc/motd file.
Show only last login in the SSH welcome message
1,446,135,488,000
I'm new to Linux, but learning bit by bit. So please bear with me, and give an explanation too, instead of only the answer, to help me understand why and how. Currently I want to build an Android ROM on Xubuntu. I have already set up OpenSSH to connect remotely. My remote PC runs Windows 10, using PuTTY for ssh. My questions are:

Can I use ssh to build an Android ROM? Can I run the "brunch" command and then disconnect the ssh connection? If so, how can I know whether the process has finished? If not, is there any other method to do that remotely, besides using VNC? Or is there any way to remote only the running terminal app on my server, so that what's on the remote screen equals the server's terminal screen?

Thanks
You may want to look into GNU screen or tmux. They both have the attach feature. A typical workflow looks like this:

Connect with SSH.
Start a screen/tmux session.
Do your work.
Disconnect.
Next time you log in, attach the last session and continue working.
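For instance, the brunch case from the question could look like this tmux transcript (the session name rom is arbitrary, and the commands are a sketch rather than a tested recipe):

```
$ ssh user@buildserver
$ tmux new -s rom          # start a named session
$ brunch <device>          # kick off the build inside tmux
# press Ctrl-b d to detach; the build keeps running
$ exit                     # now it is safe to drop the ssh connection

# later, from any machine:
$ ssh user@buildserver
$ tmux attach -t rom       # reattach and see how far the build got
```

With GNU screen the equivalents would be screen -S rom, Ctrl-a d to detach, and screen -r rom to reattach.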
can I use ssh to send build command for android building?
1,446,135,488,000
I have installed the g++ package on Cygwin, so I see the following output in Cygwin:

    Input:  $ which g++
    Output: /cygdrive/c/mingw-w64/x86_64-6.2.0-posix-seh-rt_v5-rev1/mingw64/bin

I have also installed the openssh package on Cygwin and configured an ssh server. However, when I connect remotely to my Cygwin I have the following output:

    Input:  $ which g++
    Output: no g++ in (/usr/local/bin:/usr/bin:/bin:/cygdrive/c/Program Files (x86)/Common Files/Intel/Shared Libraries/redis ....

How can I resolve the above issue? Is there a specific ./bash_profile file that is loaded in the case of remote ssh to Cygwin? If yes, can I fix the issue by adding the following line to this ./bash_profile?

    export PATH=$PATH:/cygdrive/c/mingw-w64/x86_64-6.2.0-posix-seh-rt_v5-rev1/mingw64/bin
Put the change of PATH in your ~/.bashrc. As reported in the file's header comments, it is executed for interactive shells:

    # base-files version 4.1-1
    # ~/.bashrc: executed by bash(1) for interactive shells.
Different bash profile files are loaded? For local and ssh-remote access to Cygwin
1,446,135,488,000
I am pretty new to Unix, so please bear with me. When I try to connect to my server running Ubuntu Server through a terminal on my Ubuntu workstation, I get this error:

    bash: 22: command not found

The command I type to connect is:

    ssh user@ip 22

Then I get the prompt that asks for the user's password; after I enter it, I get that error. I have tried reinstalling openssh-client with:

    sudo apt-get remove --purge openssh-client
    sudo apt-get install openssh-client

and also tried updating my repos:

    sudo apt-get update

Sadly, no result.
Port 22 is the standard port used to connect to sshd. It is used by default, so unless you have configured your remote host to listen on a non-standard port in your sshd_config, then all you need to do is ssh user@ip. If your remote host were listening on port 2222, for instance, the syntax for specifying that would be ssh -p 2222 user@ip. It looks like you are trying to do ssh -p 22 user@ip, but like I said, that is unnecessary because 22 is the default port that SSH tries to connect to. What your command is actually doing is trying to execute the non-existent command 22 on the remote host. Everything after the ssh command is interpreted as a command to be executed on the remote host (and for best practices, should normally be enclosed in quotes). For instance, ssh user@ip hostname would return the remote hostname, because it is executing that command on the remote machine and then exiting SSH back to your local shell.
ssh error bash: 22: command not found
1,446,135,488,000
Why is 50-redhat.conf listed twice, with different file sizes, in the output of this ls?

    ls -la /etc/ssh/ssh*
    -rw-r--r--. 1 root root     1921 Aug  2 02:58 /etc/ssh/ssh_config
    -rw-------. 1 root root     3667 Aug  2 02:58 /etc/ssh/sshd_config
    -rw-r-----. 1 root ssh_keys  480 May 20 02:38 /etc/ssh/ssh_host_ecdsa_key
    -rw-r--r--. 1 root root      162 May 20 02:38 /etc/ssh/ssh_host_ecdsa_key.pub
    -rw-r-----. 1 root ssh_keys  387 May 20 02:38 /etc/ssh/ssh_host_ed25519_key
    -rw-r--r--. 1 root root       82 May 20 02:38 /etc/ssh/ssh_host_ed25519_key.pub
    -rw-r-----. 1 root ssh_keys 2578 May 20 02:38 /etc/ssh/ssh_host_rsa_key
    -rw-r--r--. 1 root root      554 May 20 02:38 /etc/ssh/ssh_host_rsa_key.pub

    /etc/ssh/ssh_config.d:
    total 8
    drwxr-xr-x. 2 root root   28 Aug  4 10:14 .
    drwxr-xr-x. 4 root root 4096 Aug  4 10:14 ..
    -rw-r--r--. 1 root root  581 Aug  2 02:58 50-redhat.conf

    /etc/ssh/sshd_config.d:
    total 8
    drwx------. 2 root root   28 Aug  4 10:14 .
    drwxr-xr-x. 4 root root 4096 Aug  4 10:14 ..
    -rw-------. 1 root root  719 Aug  2 02:58 50-redhat.conf
The two files are different. The one occurring in ssh_config.d contains SSH client configuration. The one found in sshd_config.d contains SSH server configuration.
Why does ls show this file twice with different sizes?
1,446,135,488,000
I work in a government agency. We are running an sftp client and sftp-server on a DMZ system where we receive and send files over the internet. Our environment:

    OpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017
    Linux version 3.10.0-1160.102.1.el7.x86_64 ([email protected])
    (gcc version 4.8.5 20150623 (Red Hat 4.8.5-44) (GCC)) #1 SMP Mon Sep 25 05:00:52 EDT 2023
    Red_Hat_Enterprise_Linux-Release_Notes-7-en-US-7-2.el7.noarch
    redhat-release-server-7.9-8.el7_9.x86_64
    Intel(R) Xeon(R)

Our problem is that one of our partners wants to upgrade the OpenSSH version, but we do not know what version is the most stable and secure. I have been looking at the OpenSSH site and elsewhere but have not found any good answer. Our question: what OpenSSH version on Linux do you recommend when it comes to stability and security (sftp security)?

Regards, Anders
Since you are already using RHEL, I would suggest staying with it, although you should upgrade to version 8 or 9. As @tink has pointed out, RHEL backports security patches into its packages without changing the major/minor version numbers, so you will be very safe there. If possible, you might want to enable FIPS 140-2 to enforce a list of NIST-approved ciphers and MACs. Failing that, you can configure sshd to use a restricted set of ciphers that are known to be more secure. Finally, a question: if your partner wants to switch OpenSSH versions, how does that affect the server that you're running?
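To illustrate that last point, a restricted algorithm selection in sshd_config could look something like the following; these lists are only an example of commonly recommended modern algorithms, not a vetted policy, so check them against your agency's requirements before deploying:

```
# /etc/ssh/sshd_config -- example restriction to modern algorithms
Ciphers [email protected],[email protected],aes256-ctr
MACs [email protected],[email protected]
KexAlgorithms [email protected],diffie-hellman-group-exchange-sha256
```

After a restart, sshd -T can be used to print the effective ciphers/MACs and confirm the restriction took effect.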
What openssh version is best on Linux when it comes to stability and security ( sftp security )? [closed]
1,446,135,488,000
I was reading about OpenSSH and I have the following questions:

What is the difference between ssh and slogin?
What is the difference between scp and sftp?
Does sshd (on the server side) provide separate server processes to handle each client (ssh, slogin, sftp, scp) type of request, or just one process for all client types?
Is the secure shell a standalone shell like bash etc., or just a process that encrypts/decrypts traffic and communicates with an ordinary bash process locally?
Since the questions are listed, I'll list the respective answers: slogin is an alias for ssh (they are the same) further reading scp and sftp do similar things in that they transfer files, but scp is a separate program that uses ssh to copy files. sftp is an extension of ssh itself that transfers files similar to FTP, but over ssh further reading Yes, the sshd spins up a separate process for each connection. This can be verified by connecting multiple times to an ssh server and then doing ps aux | grep ssh on the server ssh is a protocol, not a shell. It gives access to the shell on the other side and encrypts the traffic between
Openssh Questions [closed]
1,446,135,488,000
I am setting up a Debian server through DigitalOcean, which I initialized with a SSH-key. Currently I can log onto the server as root or as a user. Normally when I do this type of configuration, I uncomment PermitRootLogin yes from /etc/ssh/ssh_config, and change it to PermitRootLogin no. This time, however, I saw a shorter ssh_config that contained no PermitRootLogin. When I tried to add it in, vim's syntax highlighting didn't recognize it, and restarting sshd didn't have any effect. I looked at the man page for ssh_config and the keyword wasn't listed. How do I prohibit logging in as root via SSH?
The ssh_config file is the default configuration for SSH clients. The server configuration will be found in sshd_config. The PermitRootLogin is a server setting (it modifies the behaviour of the SSH server) that should go in the server configuration file.
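So a minimal server-side change would be, for example:

```
# /etc/ssh/sshd_config  (server configuration, not ssh_config)
PermitRootLogin no
```

followed by restarting the daemon, e.g. with systemctl restart sshd (the service may be named ssh on Debian-based systems).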
Does OpenSSH no longer support the "PermitRootLogin" keyword? [closed]
1,446,135,488,000
I have a Red Hat server 7.5 with OpenSSH 7.4 installed. I need to upgrade OpenSSH 7.4 to 7.6, as Nessus, a vulnerability scanner, flagged this vulnerability (OpenSSH < 7.6), and thus I need to upgrade to 7.6 or later. The details are found here: https://www.tenable.com/plugins/nessus/103781.

I found the link https://ftp.openbsd.org/pub/OpenBSD/OpenSSH/openssh-7.6.tar.gz, but they said that this cannot be installed on Red Hat, as you can't install BSD on Linux. I referred to this link: https://superuser.com/questions/577389/make-command-for-installing-openssh-on-ubuntu/

Thus I went to http://www.openssh.com/portable.html to get a portable version, and used https://cdn.openbsd.org/pub/OpenBSD/OpenSSH/portable/openssh-7.6p1.tar.gz. I downloaded this file to the /tmp directory and performed the following commands:

    # tar xvfz …/openssh-7.6p1.tar.gz
    # cd ssh
    # make obj        <-- error message
    # make cleandir
    # make depend
    # make
    # make install

I got this error when I performed the command make obj:

    make: *** No rule to make target 'obj'.  Stop.
The issue linked to in the comments refers to CVE-2017-15906 (listed on the same page). Red Hat has documented this CVE along with its details, which is available here: https://access.redhat.com/security/cve/cve-2017-15906 According to the CVE page, an errata has been issued on this issue for RHEL 7 (RHSA-2018:0980). OpenSSH packages on RHEL 5 & 6 are not affected by this vulnerability. The errata is available here: https://access.redhat.com/errata/RHSA-2018:0980 As per the errata page, this security issue is fixed in the following version of OpenSSH and its related packages: 7.4p1-16.el7.x86_64. If you have this version of OpenSSH on RHEL 7, you should be safe from the vulnerability. More details on the change, including upgrade instructions, are available on the errata page linked above. The latest version of OpenSSH as per the project's homepage is 7.9. Although the RHEL package is of a lower version, it still contains all of the security patches from the upstream version. This is done by backporting - isolating the security fixes from the upstream source, and then applying them to the older package that Red Hat distributes. This allows RHEL customers to continue using a specific version of a package, while being protected against new security vulnerabilities. More of this process is explained on this page: Backporting Security Fixes. One of the problems with backporting is the version checks employed by security scanning tools (such as the Nessus Scanner, in this case). From the above link: Also, some security scanning and auditing tools make decisions about vulnerabilities based solely on the version number of components they find. This results in false positives as the tools do not take into account backported security fixes. This is precisely the case here, since the CVE-2017-15906 issue page for Nessus documents this explicitly: Note that Nessus has not tested for these issues but has instead relied only on the application's self-reported version number.
install openssh 7.6 on red hat 7.5 [duplicate]
1,446,135,488,000
How can I communicate with a Linux computer from my Android phone with the help of SSH? I have connected to the Android device from the Linux computer with SSH. How do I communicate with the Linux machine via SSH?
You need to make sure you have ssh access for the account you are trying to connect to. Open /etc/ssh/sshd_config (the server configuration, not ssh_config) and add:

    PermitRootLogin no
    AllowUsers username

Replace username with the user you would like to ssh into. Restart ssh with either sudo service ssh reload or systemctl restart sshd
How can I communicate with a Linux device via SSH? [closed]
1,446,135,488,000
I want to upgrade OpenSSH, as I have this error:

    error: Unsafe AuthorizedKeysCommand: bad ownership or modes for file /usr/bin/get_ldap_ssh_key.sh

In sshd_config:

    AuthorizedKeysCommand /usr/bin/get_ldap_ssh_key.sh
    AuthorizedKeysCommandUser nobody

The bug report says: "Set all RESOLVED bugs to CLOSED with release of OpenSSH 7.1". I have not found instructions on how to upgrade OpenSSH. How do I do it?
That's not a bug that upgrading will fix, but an error in your setup. The sshd_config(5) manpage says:

    AuthorizedKeysCommand
        Specifies a program to be used to look up the user's public keys.
        The program must be owned by root, not writable by group or others
        and specified by an absolute path.

You simply need to ensure that /usr/bin/get_ldap_ssh_key.sh satisfies these requirements:

    sudo chown root:root /usr/bin/get_ldap_ssh_key.sh
    sudo chmod 755 /usr/bin/get_ldap_ssh_key.sh
How to upgrade openssh? [closed]
1,446,135,488,000
echo $SSH_CONNECTION does not display anything on the SSH server. I use my laptop to connect to my server using a key-less SSH setup. After SSHing to my server, if I run echo $SSH_CONNECTION in a terminal on the server from the machine itself, I'm supposed to see the IP addresses and port numbers of my remote client and local server; however, nothing is displayed. I am wondering if anyone can guide me in fixing this issue? I have Ubuntu 16.04 running on both machines.
The SSH_CONNECTION environment variable will be set in the SSH session. It will not be set for any other process on the SSH server than for those started from the SSH connection from the client. If you are logged in directly on the SSH server (on the physical machine, not through SSH), and type echo $SSH_CONNECTION, then I'm expecting that to output nothing.

So, logging in with SSH and then printing the value of $SSH_CONNECTION ought to look something like:

    [client] $ ssh [email protected]
    [server] $ echo "$SSH_CONNECTION"
    xxx.xxx.xxx.xxx nnnnn yyy.yyy.yyy.yyy 22

Where x is your client's IP address, n is the port used on the client, and y is the server's IP address.
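As a small illustration of those four space-separated fields, they can be split in the shell like this; the variable is set here by hand with documentation addresses purely to simulate what sshd would provide inside a real session:

```shell
# Simulate the value sshd sets inside a real SSH session
SSH_CONNECTION='198.51.100.7 54321 203.0.113.5 22'

# Split the four space-separated fields into named variables
read -r client_ip client_port server_ip server_port <<EOF
$SSH_CONNECTION
EOF

echo "client: $client_ip port $client_port"
echo "server: $server_ip port $server_port"
```

Outside an SSH session the variable is simply unset, which is why the echo in the question printed nothing.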
$SSH_CONNECTION Does not display the IP addresses or port port numbers of ssh cnnections [closed]
1,283,455,842,000
I have written a script that runs fine when executed locally: ./sysMole -time Aug 18 18 The arguments "-time", "Aug", "18", and "18" are successfully passed on to the script. Now, this script is designed to be executed on a remote machine but, from a local directory on the local machine. Example: ssh root@remoteServer "bash -s" < /var/www/html/ops1/sysMole That also works fine. But the problem arises when I try to include those aforementioned arguments (-time Aug 18 18), for example: ssh root@remoteServer "bash -s" < /var/www/html/ops1/sysMole -time Aug 18 18 After running that script I get the following error: bash: cannot set terminal process group (-1): Invalid argument bash: no job control in this shell Please tell me what I'm doing wrong, this greatly frustrating.
You were pretty close with your example. It works just fine when you use it with arguments such as these. Sample script: $ more ex.bash #!/bin/bash echo $1 $2 Example that works: $ ssh serverA "bash -s" < ./ex.bash "hi" "bye" hi bye But it fails for these types of arguments: $ ssh serverA "bash -s" < ./ex.bash "--time" "bye" bash: --: invalid option ... What's going on? The problem you're encountering is that the argument, -time, or --time in my example, is being interpreted as a switch to bash -s. You can pacify bash by terminating it from taking any of the remaining command line arguments for itself using the -- argument. Like this: $ ssh root@remoteServer "bash -s" -- < /var/www/html/ops1/sysMole -time Aug 18 18 Examples #1: $ ssh serverA "bash -s" -- < ./ex.bash "-time" "bye" -time bye #2: $ ssh serverA "bash -s" -- < ./ex.bash "--time" "bye" --time bye #3: $ ssh serverA "bash -s" -- < ./ex.bash --time "bye" --time bye #4: $ ssh < ./ex.bash serverA "bash -s -- --time bye" --time bye NOTE: Just to make it clear that wherever the redirection appears on the command line makes no difference, because ssh calls a remote shell with the concatenation of its arguments anyway, quoting doesn't make much difference, except when you need quoting on the remote shell like in example #4: $ ssh < ./ex.bash serverA "bash -s -- '<--time bye>' '<end>'" <--time bye> <end>
How can I execute local script on remote machine and include arguments?
1,283,455,842,000
Currently I'm writing a Bash script which has the following requirements: it should run on a wide variety of Unix/Linux platforms it should support both short and (GNU) long options I know that getopts would be the preferred way in terms of portability but AFAIK it doesn't support long options. getopt supports long options but the BashGuide recommends strongly against it: Never use getopt(1). getopt cannot handle empty arguments strings, or arguments with embedded whitespace. Please forget that it ever existed. So, there still is the option of manual parsing. This is error-prone, produces quite some boilerplate code, and I need to handle errors by myself (I guess getopt(s) do error-handling by themselves). So, what would be the preferred choice in this case?
If it has to be portable to a range of Unices, you'd have to stick to POSIX sh. And AFAIU there you just have no choice but rolling argument handling by hand.
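For what it's worth, a hand-rolled POSIX sh parser usually follows the while/case pattern below. The option names (-v/--verbose, -o/--output) are just placeholders, and the positional parameters are set by hand so the sketch is self-contained:

```shell
# Simulated command line (in a real script these come from the caller)
set -- -v --output=out.txt file1 file2

verbose=0
output=

while [ $# -gt 0 ]; do
    case $1 in
        -v|--verbose) verbose=1 ;;
        -o|--output)
            # value in the next argument
            [ $# -ge 2 ] || { echo "missing argument for $1" >&2; exit 1; }
            output=$2
            shift ;;
        --output=*) output=${1#--output=} ;;   # --output=FILE form
        --) shift; break ;;                    # explicit end of options
        -*) echo "unknown option: $1" >&2; exit 1 ;;
        *)  break ;;                           # first operand ends parsing
    esac
    shift
done

echo "verbose=$verbose output=$output operands=$*"
```

This handles empty and whitespace-containing values correctly (the pitfalls that the BashGuide attributes to old getopt), at the cost of the boilerplate the question mentions.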
getopt, getopts or manual parsing - what to use when I want to support both short and long options?
1,283,455,842,000
Problem

When copying files with cp -H or cp -L, I get the same results:

    $ ls -l fileA
    fileA -> fileB
    $ cp fileA somewhere/ -H
    $ ls -l somewhere/
    fileA    # fileA is a copy of fileB, only renamed, with same properties!

This answer here describes both options as similar UNLESS used in combination with -R. Not for me. Both soft- and hardlinked files become renamed copies of the files they point to at the source.

Question: What is the proper use of cp -H and cp -L? Is this the expected behavior?

My attempt to solve: man cp tells me quite the same for both options, but info cp's wording makes it even more confusing for me. Maybe one can help me break this down a bit:

    -H   If a command line argument specifies a symbolic link, then copy the
         file it points to rather than the symbolic link itself. However,
         copy (preserving its nature) any symbolic link that is encountered
         via recursive traversal.

This sounds like a contradiction to me: I guess that »a symbolic link's nature« is that it points somewhere…

    -L, --dereference
         Follow symbolic links when copying from them. With this option,
         cp cannot create a symbolic link. For example, a symlink (to a
         regular file) in the source tree will be copied to a regular
         file in the destination tree.

I do know that a symlink isn't a regular file, but… I admit I'm overchallenged with this explanation here.
With symlinks, tools have two things they can do: Treat the symlink as a symlink ("preserving its nature"), or Treat the symlink as the type of file that it points to. Saying that -H "preserves its nature" is not a contradiction. Consider the alternative. If you use -L, any symlinks cp finds will be opened, and their contents copied to the target file name. So the source was a symlink, but its copy is not a symlink. So it "lost its nature as a symlink". Consider $ mkdir subdir $ echo "some contents" > subdir/file $ ln -s file subdir/link # definition of "list", the abbreviated ls -l output used below $ list() { ls -l "$@" | \ awk '$0 !~ /^total/ { printf "%s %s\t%s %s %s\n", $1, $5, $9, $10, $11 }' ; } $ list subdir -rw-rw-r-- 14 file lrwxrwxrwx 4 link -> file $ cp -rH subdir subdir-with-H $ list subdir-with-H -rw-rw-r-- 14 file lrwxrwxrwx 4 link -> file $ cp -rL subdir subdir-with-L $ list subdir-with-L -rw-rw-r-- 14 file -rw-rw-r-- 14 link
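The same experiment can be re-run as a self-contained check. This is a sketch assuming a cp that supports -H/-L together with -R (GNU and BSD cp both do); the subdir/file/link names mirror the listing above.

```sh
# -H dereferences only symlinks named on the command line;
# -L dereferences every symlink met during the copy.
dir=$(mktemp -d) && cd "$dir"
mkdir subdir
echo data > subdir/file
ln -s file subdir/link        # symlink encountered via recursive traversal

cp -RH subdir with-H          # 'link' keeps its nature as a symlink
cp -RL subdir with-L          # 'link' becomes a regular file with the same content
```

After this, `with-H/link` is still a symlink, while `with-L/link` is an ordinary file containing `data`.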
cp -L vs. cp -H
1,283,455,842,000
By reading this question, I have discovered that GNU grep has a -X option which expects an argument. Strangely, it is mentioned neither in the man page nor in the info page. Looking at the source code, there is that comment right in the middle of the --help output: /* -X is deliberately undocumented. */ Looking further, it appears that the -X matcher option sets the engine used for the regexp, matcher being one of grep, egrep, fgrep, awk, gawk, posixawk and perl (as of version 2.25). Some of those values are strictly identical to existing options (namely grep -G, grep -E, grep -F and grep -P). On the other hand, the three awk variants have no corresponding options. Does someone know what is the actual purpose of this option, especially with one of the awk regexp engines? Can someone tell me why it is purposely not documented?
Its purpose is to provide access to the various matchers implemented in GNU grep in one form or another, in particular AWK matchers which aren’t available otherwise, probably for testing purposes (see bug 16481 which discusses adding the gawk and posixawk matchers). However it is currently buggy, which is the reason why it’s documented as being undocumented: On Thu, Jan 27, 2005 at 04:06:04PM -0500, Charles Levert wrote: > The '-X' option, and in particular its use with the "awk" matcher > ("-X awk") is undocumented. please leave it undocumented. It doesn't provide any new functionality besides -X awk. And the implementation of awk regexps is not perfect, I think. The new GNU regex conatins some means to set AWK style syntax, yes. Yet gawk doesn't use it directly: it parses the regex first. In particular, awk regexps allow escape sequences \NNN, where NNN is an octal value. So /\040/ mathes space. grep -X awk doesn't seem to support this. I'm afraid that regex.c doesn't support these escape sequences. We would have to make sure that the regexes are fully compatible with awk regexes before we decided to document (and thus support) this feature. I think it's not worth the trouble. Stepan A follow-up asked for the comment to be added, and provided a bit more background on the -X option: My own inclination is to suggest just removing -X entirely. I suspect it was added by the original author mainly for testing purposes. If it's going to stay in, at least add a comment like this. /* -X is undocumented on purpose. */ to avoid future discussion of a resolved issue. Arnold which Stepan did shortly thereafter.
What is the actual purpose of GNU grep's -X option and why is it undocumented?
1,283,455,842,000
Where did the convention of using single dashes for letters and doubles dashes for words come from and why is continued to be used? For example if I type in ls --help, you see: -a, --all do not ignore entries starting with . -A, --almost-all do not list implied . and .. --author with -l, print the author of each file -b, --escape print octal escapes for nongraphic characters --block-size=SIZE use SIZE-byte blocks -B, --ignore-backups do not list implied entries ending with ~ ... I tried googling - and -- convention even with quotes with little success.
In The Art of Unix Programming Eric Steven Raymond describes how this practice evolved: In the original Unix tradition, command-line options are single letters preceded by a single hyphen... The original Unix style evolved on slow ASR-33 teletypes that made terseness a virtue; thus the single-letter options. Holding down the shift key required actual effort; thus the preference for lower case, and the use of “-” (rather than the perhaps more logical “+”) to enable options. The GNU style uses option keywords (rather than keyword letters) preceded by two hyphens. It evolved years later when some of the rather elaborate GNU utilities began to run out of single-letter option keys (this constituted a patch for the symptom, not a cure for the underlying disease). It remains popular because GNU options are easier to read than the alphabet soup of older styles. 1 [1] http://www.catb.org/esr/writings/taoup/html/ch10s05.html
Single dashes `-` for single-character options, but double dashes `--` for words?
1,283,455,842,000
when specifying ls --directory a* it should list only directories starting with a* BUT it lists files AND directories starting with a Questions: where might I find some documentation on this, other than man and info where I think I thoroughly looked? does this work in BASH only?
The a* and *a* syntax is implemented by the shell, not by the ls command. When you type ls a* at your shell prompt, the shell expands a* to a list of all existing files in the current directory whose names start with a. For example, it might expand a* to the sequence a1 a2 a3, and pass those as arguments to ls. The ls command itself never sees the * character; it only sees the three arguments a1, a2, and a3. For purposes of wildcard expansion, "files" refers to all entities in the current directory. For example, a1 might be a normal file, a2 might be a directory, and a3 might be a symlink. They all have directory entries, and the shell's wildcard expansion doesn't care what kind of entity those entries refer to. Practically all shells you're likely to run across (bash, sh, ksh, zsh, csh, tcsh, ...) implement wildcards. The details may vary, but the basic syntax of * matching zero or more characters and ? matching any single character is reasonably consistent. For bash in particular, this is documented in the "Filename expansion" section of the bash manual; run info bash and search for "Filename expansion", or see here. The fact that this is done by the shell, and not by individual commands, has some interesting (and sometimes surprising) consequences. The best thing about it is that wildcard handling is consistent for (very nearly) all commands; if the shell didn't do this, inevitably some commands wouldn't bother, and others would do it in subtly different ways that the author thought was "better". (I think the Windows command shell has this problem, but I'm not familiar enough with it to comment further.) On the other hand, it's difficult to write a command to rename multiple files. If you write: mv *.log *.log.bak it will probably fail, since *.log.bak is expanded based on the files that already exist in the current directory. There are commands that do this kind of thing, but they have to use their own syntax to specify how the files are to be renamed. 
Some commands (such as find) can do their own wildcard expansion; you have to quote the arguments to suppress the shell's expansion: find . -name '*.txt' -print The shell's wildcard expansion is based entirely on the syntax of the command-line argument and the set of existing files. It can't be affected by the meaning of the command. For example, if you want to move all .log files up to the parent directory, you can type: mv *.log .. If you forget the .. : mv *.log and there happen to be exactly two .log files in the current directory, it will expand to: mv one.log two.log which will rename one.log and clobber two.log. EDIT: And after 52 upvotes, an accept, and a Guru badge, maybe I should actually answer the question in the title. The -d or --directory option to ls doesn't tell it to list only directories. It tells it to list directories just as themselves, not their contents. If you give a directory name as an argument to ls, by default it will list the contents of the directory, since that's usually what you're interested in. The -d option tells it to list just the directory itself. This can be particularly useful when combined with wildcards. If you type: ls -l a* ls will give you a long listing of each file whose name starts with a, and of the contents of each directory whose name starts with a. If you just want a list of the files and directories, one line for each, you can use: ls -ld a* which is equivalent to: ls -l -d a* Remember again that the ls command never sees the * character. As for where this is documented, man ls will show you the documentation for the ls command on just about any Unix-like system. On most Linux-based systems, the ls command is part of the GNU coreutils package; if you have the info command, either info ls or info coreutils ls should give you more definitive and comprehensive documentation. Other systems, such as MacOS, may use different versions of the ls command, and may not have the info command; for those systems, use man ls. 
And ls --help will show a relatively short usage message (117 lines on my system) if you're using the GNU coreutils implementation. And yes, even experts need to consult the documentation now and then. See also this classic joke.
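The expansion-happens-in-the-shell point above is easy to verify with `set --`, which receives exactly the same expanded argument list that ls would. This is a sketch; the file names a1/a2/a3 are made up.

```sh
dir=$(mktemp -d) && cd "$dir"
touch a1 a2
mkdir a3

set -- a*          # the shell expands the pattern before any command runs
count=$#           # three arguments: a1 a2 a3 (regular files and the directory alike)
first=$1

set -- 'a*'        # quoted: no expansion, the command gets the literal pattern
literal=$1
```

The quoted form is why `find . -name '*.txt'` works: the quotes stop the shell so find can do its own matching.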
why does ls -d also list files, and where is it documented?
1,283,455,842,000
I usually use grep when developing, and there are some extensions that I always want to exclude (like *.pyc). Is it possible to create a ~/.egreprc or something like that, and add filtering to exclude pyc files from all results? Is this possible, or will I have to create an alias for using grep in this manner, and call the alias instead of grep?
No, there's no rc file for grep. GNU grep 2.4 through 2.21 applied options from the environment variable GREP_OPTIONS, but more recent versions no longer honor it. For interactive use, define an alias in your shell initialization file (.bashrc or .zshrc). I use a variant of the following: alias regrep='grep -Er --exclude=*~ --exclude=*.pyc --exclude-dir=.bzr --exclude-dir=.git --exclude-dir=.svn' If you call the alias grep, and you occasionally want to call grep without the options, type \grep. The backslash bypasses the alias.
Is there a 'rc' configuration file for grep/egrep? (~/.egreprc?)
1,283,455,842,000
cd - can switch between current dir and previous dir. It seems that I have seen - used as arguments to other commands before, though I don't remember if - means the same as with cd. I found that - doesn't work with ls. Is - used only with cd?
The POSIX utility syntax guidelines (specifically #13) specify that for utilities that expect a file name to read from, - means standard input, and for utilities that expect a file name to write to, - means standard output. For example, cat somefile - copies the content of somefile to its standard output, followed by what it reads on its standard input. This guideline doesn't apply to the cd command since it doesn't read or write to a file. cd does something different: the argument - means “the previous directory”. The command cd - is equivalent to cd "$OLDPWD" && pwd. This behavior is specific to the cd command, and to directly inspired commands such as pushd. Note that - is an operand, not an option. Only arguments that begin with - and are not just - or -- are options. The main implication of being an operand is that -- doesn't affect its special meaning. For example, cd -- -P changes to a subdirectory called -P, but cd -- - is the same as cd -, it doesn't change to a directory called -. Similarly, cat -- - doesn't read from a file called - but from standard input.
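The cd "$OLDPWD" equivalence can be checked directly. This is a sketch using /tmp and / as two arbitrary directories.

```sh
cd /tmp
cd /                 # now OLDPWD=/tmp
cd - > /dev/null     # same as: cd "$OLDPWD" && pwd (the pwd output silenced here)
```

After the last command the shell is back in /tmp.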
Is `-` used only with cd?
1,283,455,842,000
The Arch Wiki on fstab specifies the options of / to be defaults,noatime, but on my installation the default fstab is created with the options of rw,relatime. The Arch Wiki covers the atime issues. What I am curious about is the defaults option. The man page for mount says: defaults Use the default options: rw, suid, dev, exec, auto, nouser, and async. Note that the real set of all default mount options depends on kernel and filesystem type. See the beginning of this section for more details. Are the default options used only if the defaults option is provided or are they used in all cases? Do I need defaults in my fstab?
You only need defaults if the field would otherwise be empty. You can leave out the options field altogether if it's empty, unless the 5th or 6th fields are present. Field 5 is the dump frequency, rarely used nowadays. Field 6 is the fsck order, which should be 1 for /, 2 for other filesystems mounted at boot and 0 otherwise. Fields 5 and 6 can be omitted if their value is 0, except that field 5 needs to be present if field 6 is. Thus defaults is necessary in /dev/foo /foo somefs defaults 0 1 (though you can use some other option like rw or ro instead) But it can be omitted when you specify another option. E.g. the mounts below have the same effect. /dev/foo /foo somefs ro 0 1 /dev/foo /foo somefs defaults,ro 0 1 But these also have the same effect. /dev/foo /foo somefs defaults 0 0 /dev/foo /foo somefs
Do you need to specify the "defaults" option in fstab?
1,283,455,842,000
Just using kubectl as an example, I note that kubectl run --image nginx ... and kubectl run --image=nginx ... both work. For command-line programs in general, is there a rule about whether an equals sign is allowed/required between the option name and the value?
In general, the implementation of how command-line arguments are interpreted is left completely at the discretion of the programmer. That said, in many cases, the value of a "long" option (such as is introduced with --option_name) is specified with an = between the option name and the value (i.e. --option_name=value), whereas for single-letter options it is more customary to separate the flag and value with a space, such as -o value, or use no separation at all (as in -oValue). An example from the man-page of the GNU date utility: -d, --date=STRING display time described by STRING, not 'now' -f, --file=DATEFILE like --date; once for each line of DATEFILE As you can see, the value would be separated by a space from the option switch when using the "short" form (i.e. -d), but by an = when using the "long" form (i.e. --date). Edit As pointed out by Stephen Kitt, the GNU coding standard recommends the use of getopt and getopt_long to parse command-line options. The man-page of getopt_long states: A long option may take a parameter, of the form --arg=param or --arg param. So, a program using that function will accept both forms.
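GNU date is one such getopt_long user, so both spellings can be compared directly. This assumes GNU date is installed (the default on Linux systems).

```sh
# --date=value and --date value are parsed identically; -d is the short form.
a=$(date -u --date=@0 +%Y-%m-%d)
b=$(date -u --date @0 +%Y-%m-%d)
c=$(date -u -d @0 +%Y-%m-%d)
```

All three produce 1970-01-01, the Unix epoch in UTC.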
Do command line options take an equals sign between option name and value?
1,283,455,842,000
If I want to know the meaning of wget -b, I see the manual by man wget, then search the -b option. -b --background Go to background immediately after startup. If no output file is specified via the -o, output is redirected to wget-log. I want to get the result by a command like man wget -b. (Of course this doesn't work.) Is there a similar way to make it possible?
You could redirect the manpage to awk and extract the part: man wget | awk '/^ *-b *.*$/,/^$/{print}' -b --background Go to background immediately after startup. If no output file is specified via the -o, output is redirected to wget-log. That part is everything that is between a -b and an empty line.
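The same range pattern generalizes to a tiny helper. Below, manopt is a hypothetical wrapper (not a standard tool), and the matching is naive: it catches any option name beginning with the given string. The awk range is also exercised on a canned excerpt so it can be checked without any man page installed.

```sh
# Hypothetical helper: print the paragraph describing one option of a command.
manopt() { man "$1" | awk -v opt="$2" '$0 ~ "^[[:space:]]*"opt, /^$/'; }

# The same awk range applied to a canned man-page excerpt:
sample='       -b
       --background
           Go to background immediately after startup.

       -o logfile
           Log all messages to logfile.'
out=$(printf '%s\n' "$sample" | awk -v opt=-b '$0 ~ "^[[:space:]]*"opt, /^$/')
```

The range starts at the line whose first non-blank text is the option and stops at the first empty line, so `out` holds only the -b paragraph, not the -o one.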
Is there way to see `man` document only for specified option of a command
1,283,455,842,000
Apparently, running: perl -n -e 'some perl code' * Or find . ... -exec perl -n -e '...' {} + (same with -p instead of -n) Or perl -e 'some code using <>' * often found in one-liners posted on this site, has security implications. What's the deal? How to avoid it?
What's the problem First, like for many utilities, you'll have an issue with file names starting with -. While in: sh -c 'inline sh script here' other args The other args are passed to the inline sh script; with the perl equivalent, perl -e 'inline perl script here' other args The other args are scanned for more options to perl first, not to the inline script. So, for instance, if there's a file called -eBEGIN{do something evil} in the current directory, perl -ne 'inline perl script here;' * (with or without -n) will do something evil. Like for other utilities, the workaround for that is to use the end-of-options marker (--): perl -ne 'inline perl script here;' -- * But even then, it's still dangerous and that's down to the <> operator used whenever any of the -n (sed -n mode), -p (sed mode), -a / -F (awk mode) are used without -i (in-place). The issue is explained in perldoc perlop documentation. That special operator is used to read one line (one record, records being lines by default) of input, where that input is coming from each of the arguments in turn passed in @ARGV. In: perl -pe '' a b -p implies a while (<>) loop around the code (here empty). <> will first open a, read records one line at a time until the file is exhausted and then open b... The problem is that, to open the file, it uses the first, unsafe form of open: open ARGV, "the file as provided" With that form, if the argument is "> afile", it opens afile in writing mode, "cmd|", it runs cmd and reads its output. "|cmd", you've a stream open for writing to the input of cmd. So for instance: perl -pe '' 'uname|' Doesn't output the content of the file called uname| (a perfectly valid file name btw), but the output of the uname command. 
If you're running: perl -ne 'something' -- * And someone has created a file called rm -rf "$HOME"| (again a perfectly valid file name) in the current directory (for instance because that directory was once writeable by others, or you've extracted a dodgy archive, or you've run some dodgy command, or another vulnerability in some other software was exploited), then you're in big trouble. Areas where it's important to be aware of that problem are tools processing files automatically in public areas like /tmp (or tools that may be called by such tools). Files called > foo, foo|, |foo are a problem. But < foo and foo with leading or trailing ASCII spacing characters (including space, tab, newline, cr...) are a problem to a lesser extent as well, as that means those files won't be processed or the wrong one will be. Also beware that some characters in some multi-byte character sets (like ǖ in BIG5-HKSCS) end in byte 0x7c, the encoding of |. $ printf ǖ | iconv -t BIG5-HKSCS | od -tx1 -tc 0000000 88 7c 210 | 0000002 So in locales using that charset, perl -pe '' ./nǖ Would try to run the ./n\x88 command as perl would not try to interpret that file name in the user's locale!
That's useful when using such commands interactively as that alerts you that there's something dodgy going on. That may not be desirable when doing some automatic processing though, as that means someone can make that processing fail just by creating a file. If you do want to process every file, regardless of their name, you can use the ARGV::readonly perl module on CPAN (unfortunately usually not installed by default). That's a very short module that does: sub import{ # Tom Christiansen in Message-ID: <24692.1217339882@chthon> # reccomends essentially the following: for (@ARGV){ s/^(\s+)/.\/$1/; # leading whitespace preserved s/^/< /; # force open for input $_.=qq/\0/; # trailing whitespace preserved & pipes forbidden }; }; Basically, it sanitises @ARGV by turning " foo|" for instance into "< ./ foo|\0". You can do the same in a BEGIN statement in your perl -n/-p command: perl -pe 'BEGIN{$_.="\0" for @ARGV} your code here' ./* Here we simplify it on the assumption that ./ is being used. A side effect of that (and ARGV::readonly) though is that $ARGV in your code here shows that trailing NUL character. Update 2015-06-03 perl v5.21.5 and above have a new <<>> operator that behaves like <> except that it will not do that special processing. Arguments will only be considered as file names. So with those versions, you can now write: perl -e 'while(<<>>){ ...;}' -- * (don't forget the -- or use ./* though) without fear of it overwriting files or running unexpected commands. -n/-p still use the dangerous <> form though. And beware symlinks are still being followed, so that does not necessarily mean it's safe to use in untrusted directories.
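The danger, and the <<>> fix, can be demonstrated in an empty directory. This is a sketch: the file name `echo pwned|` is contrived, and the safe variant needs perl 5.22 or newer.

```sh
dir=$(mktemp -d) && cd "$dir"
: > 'echo pwned|'       # create an empty file whose name ends in a pipe

# <> hands the name to 2-argument open(), which runs `echo pwned`:
unsafe=$(perl -pe '' 'echo pwned|')

# <<>> (perl >= 5.22) treats the argument strictly as a file name:
safe=$(perl -e 'while (<<>>) { print }' 'echo pwned|')
```

The unsafe form prints `pwned` (the output of the command embedded in the file name), while the <<>> form reads the empty file and prints nothing.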
Security implications of running perl -ne '...' *
1,283,455,842,000
What's the difference between du -sh * and du -sh ./* ? Note: What interests me is the * and ./* parts.
$ touch ./-c $'a\n12\tb' foo $ du -hs * 0 a 12 b 0 foo 0 total As you can see, the -c file was taken as an option to du and is not reported (and you see the total line because of du -c). Also, the file called a\n12\tb is making us think that there are files called a and b. $ du -hs -- * 0 a 12 b 0 -c 0 foo That's better. At least this time -c is not taken as an option. $ du -hs ./* 0 ./a 12 b 0 ./-c 0 ./foo That's even better. The ./ prefix prevents -c from being taken as an option and the absence of ./ before b in the output indicates that there's no b file in there, but there's a file with a newline character (but see below1 for further digressions on that). It's good practice to use the ./ prefix when possible, and if not, for arbitrary data, you should always use: cmd -- "$var" or: cmd -- $patterns If cmd doesn't support -- to mark the end of options, you should report it as a bug to its author (except when it's by choice and documented like for echo). There are cases where ./* solves problems that -- doesn't. For instance: awk -f file.awk -- * fails if there is a file called a=b.txt in the current directory (sets the awk variable a to b.txt instead of telling it to process the file). awk -f file.awk ./* Doesn't have the problem because ./a is not a valid awk variable name, so ./a=b.txt is not taken as a variable assignment. cat -- * | wc -l fails if there is a file called - in the current directory, as that tells cat to read from its stdin (- is special to most text processing utilities and to cd/pushd). cat ./* | wc -l is OK because ./- is not special to cat. Things like: grep -l -- foo *.txt | wc -l to count the number of files that contain foo are wrong because it assumes file names don't contain newline characters (wc -l counts the newline characters, those output by grep for each file and those in the filenames themselves). 
You should use instead: grep -l foo ./*.txt | grep -c / (counting the number of lines with a / character is more reliable as there can only be one per filename). For recursive grep, the equivalent trick is to use: grep -rl foo .//. | grep -c // ./* may have some unwanted side effects though. cat ./* adds two more characters per file, so would make you reach the limit of the maximum size of arguments+environment sooner. And sometimes you don't want that ./ to be reported in the output. Like: grep foo ./* Would output: ./a.txt: foobar instead of: a.txt: foobar Further digressions 1. I feel like I have to expand on that here, following the discussion in comments. $ du -hs ./* 0 ./a 12 b 0 ./-c 0 ./foo Above, that ./ marking the beginning of each file means we can clearly identify where each filename starts (at ./) and where it ends (at the newline before the next ./ or the end of the output). What that means is that the output of du ./* (contrary to that of du -- *) can be parsed reliably, albeit not that easily in a script. When the output goes to a terminal though, there are plenty more ways a filename may fool you: Control characters, escape sequences can affect the way things are displayed. For instance, \r moves the cursor to the beginning of the line, \b moves the cursor back, \e[C forward (in most terminals)... many characters are invisible on a terminal starting with the most obvious one: the space character. There are Unicode characters that look just the same as the slash in most fonts $ printf '\u002f \u2044 \u2215 \u2571 \u29F8\n' / ⁄ ∕ ╱ ⧸ (see how it goes in your browser). An example: $ touch x 'x ' $'y\bx' $'x\n0\t.\u2215x' $'y\r0\t.\e[Cx' $ ln x y $ du -hs ./* 0 ./x 0 ./x 0 ./x 0 .∕x 0 ./x 0 ./x Lots of x's but y is missing. Some tools like GNU ls would replace the non-printable characters with a question mark (note that ∕ (U+2215) is printable though) when the output goes to a terminal. GNU du does not. 
There are ways to make them reveal themselves: $ ls x x x?0?.∕x y y?0?.?[Cx y?x $ LC_ALL=C ls x x?0?.???x x y y?x y?0?.?[Cx See how ∕ turned to ??? after we told ls that our character set was ASCII. $ du -hs ./* | LC_ALL=C sed -n l 0\t./x$ 0\t./x $ 0\t./x$ 0\t.\342\210\225x$ 0\t./y\r0\t.\033[Cx$ 0\t./y\bx$ $ marks the end of the line, so we can spot the "x" vs "x ", all non-printable characters and non-ASCII characters are represented by a backslash sequence (backslash itself would be represented with two backslashes) which means it is unambiguous. That was GNU sed, it should be the same in all POSIX compliant sed implementations but note that some old sed implementations are not nearly as helpful. $ du -hs ./* | cat -vte 0^I./x$ 0^I./x $ 0^I./x$ 0^I.M-bM-^HM-^Ux$ (not standard but pretty common, also cat -A with some implementations). That one is helpful and uses a different representation but is ambiguous ("^I" and <TAB> are displayed the same for instance). $ du -hs ./* | od -vtc 0000000 0 \t . / x \n 0 \t . / x \n 0 \t . 0000020 / x \n 0 \t . 342 210 225 x \n 0 \t . / y 0000040 \r 0 \t . 033 [ C x \n 0 \t . / y \b x 0000060 \n 0000061 That one is standard and unambiguous (and consistent from implementation to implementation) but not as easy to read. You'll notice that y never showed up above. That's a completely unrelated issue with du -hs * that has nothing to do with file names but should be noted: because du reports disk usage, it doesn't report other links to a file already listed (not all du implementations behave like that though when the hard links are listed on the command line).
What is the difference between "du -sh *" and "du -sh ./*"?
1,283,455,842,000
I have learned to use tar without '-' for options, like tar cvfz dir.tar.gz Directory/ but I recently came across the slightly different tar -czvf syntax (I think the 'f' must be the last option in this case). Both work on Linux and Mac OS. Is there a recommended syntax, with or without '-', which is more portable across Unix flavors?
tar is one of those ancient commands from the days when option syntax hadn't been standardized. Because all useful invocations of tar require specifying an operation before providing any file name, most tar implementations interpret their first argument as an option even if it doesn't begin with a -. Most current implementations accept a -; the only exception that I'm aware of is Minix. Older versions of POSIX and Single Unix included a tar command with no - before the operation specifier. Single Unix v2 had both traditional archivers cpio and tar, but very few flags could be standardized because existing implementations were too different, so the standards introduced a new command, pax, which is the only standard archiver since Single Unix v3. If you want standard compliance, use pax, but beware that many Linux distributions don't include it in their base installation, and there's no pax in Minix. If you want portability in practice, use tar cf filename.tar.
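Both spellings produce equivalent archives, which is easy to confirm in a throwaway directory (a sketch assuming any common tar implementation):

```sh
dir=$(mktemp -d) && cd "$dir"
echo hello > file.txt

tar cf  old-style.tar file.txt    # traditional: first argument parsed as flags
tar -cf dash-style.tar file.txt   # dash form: accepted by nearly every implementation

old=$(tar tf old-style.tar)
new=$(tar tf dash-style.tar)
```

Listing either archive with tar tf yields the same member, file.txt.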
tar cvf or tar -cvf?
1,283,455,842,000
Are there some built-in tools that will recognize -x and --xxxx as switches (flags or "boolean options", rather than "ordinary arguments"), or do you have to go through all the arguments, test for dashes, and then parse the rest thereafter?
Use getopts. It is fairly portable as it is in the POSIX spec. Unfortunately it doesn't support long options. See also: the Small getopts tutorial (in the Bash Hackers wiki) another question on StackOverflow. If you only need short options, typical usage pattern for getopts (using non-silent error reporting) is: # process arguments "$1", "$2", ... (i.e. "$@") while getopts "ab:" opt; do case $opt in a) aflag=true ;; # Handle -a b) barg=$OPTARG ;; # Handle -b argument \?) ;; # Handle error: unknown option or missing required argument. esac done
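Wrapping that pattern in a function makes it easy to exercise; the names aflag and barg come from the snippet above, while the parse function and the trailing-operand handling are additions for illustration.

```sh
parse() {
    aflag=false barg= rest=
    OPTIND=1                    # reset so the function can be called repeatedly
    while getopts "ab:" opt; do
        case $opt in
            a)  aflag=true ;;
            b)  barg=$OPTARG ;;
            \?) return 1 ;;     # unknown option or missing required argument
        esac
    done
    shift $((OPTIND - 1))       # drop the parsed options
    rest=$*                     # what remains are the operands
}

parse -a -b hello world
```

After the call, aflag is true, barg is hello, and rest holds the leftover operand world.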
How do I handle switches in a shell script?
1,283,455,842,000
Recently I got to know of -- that is double-hyphen which is used to tell a command that the option list has ended and what follows should not be considered as an option. So, grep -- 'search_word' * would search for the given search_word. But you could see an unexpected behavior if you remove -- and if there is a file whose name starts with -, which would switch on whatever options match the characters in the filename. What is this -- called? Is there any technical term for this?
The -- works for tools which use getopt(3) to process command-line arguments, and for the many APIs that parse POSIX-style options. From the manual page of getopt(3): The interpretation of options in the argument list may be cancelled by the option `--' (double dash) which causes getopt() to signal the end of argument processing and return -1. I would then say it is called a double dash
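A quick demonstration with grep (a sketch; the file name -n is contrived):

```sh
dir=$(mktemp -d) && cd "$dir"
printf 'needle\n' > ./-n        # a file whose name looks like grep's -n option

# Without --, `grep needle -n` would take -n as "number the lines" and
# then wait forever on stdin. With --, -n is just a file operand:
found=$(grep -- needle -n)
```

Here grep opens the file literally named -n and prints its matching line.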
What is `--` called?
1,283,455,842,000
I have this code - #getoptDemo.sh usage() { echo "usage: <command> options:<w|l|h>" } while getopts wlh: option do case $option in (w) name='1';; (l) name='2';; (h) name='3';; (*) usage exit;; esac done print 'hi'$name When I run bash getoptDemos.sh (without the option) it prints hi instead of calling the function usage. It calls usage when options other than w, h and l are given. So why doesn't it work when no options are specified? I have tried using ?, \?, : in place of * but I can't achieve what I wanted to. I mean all the docs on getopts say to use ?. What am I doing wrong?
When you run this script without any options, getopts will return false, so it won't enter the loop at all. It will just drop down to the print - is this ksh/zsh? If you must have an option, your best bet is to test $name after the loop. if [ -z "$name" ] then usage exit fi But make sure $name was empty before calling getopts (as there could have been a $name in the environment the shell received on startup) with unset name (before the getopts loop)
How can I detect that no options were passed with getopts?
1,283,455,842,000
Variants of this question have certainly been asked several times in different places, but I am trying to remove the last M lines from a file without luck. The second most voted answer in this question recommends doing the following to get rid of the last line in a file: head -n -1 foo.txt > temp.txt However, when I try that in OSX & Zsh, I get: head: illegal line count -- -1 Why is that? How can I remove the M last lines and the first N lines of a given file?
You can remove the first 12 lines with: tail -n +13 (That means print from the 13th line.) Some implementations of head like GNU head support: head -n -12 but that's not standard. tail -r file | tail -n +13 | tail -r would work on those systems that have tail -r (see also GNU tac) but is sub-optimal. Where n is 1: sed '$d' file You can also do: sed '$d' file | sed '$d' to remove 2 lines, but that's not optimal. You can do: sed -ne :1 -e 'N;1,12b1' -e 'P;D' But beware that won't work with large values of n with some sed implementations. With awk: awk -v n=12 'NR>n{print line[NR%n]};{line[NR%n]=$0}' To remove m lines from the beginning and n from the end: awk -v m=6 -v n=12 'NR<=m{next};NR>n+m{print line[NR%n]};{line[NR%n]=$0}'
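Both portable idioms can be spot-checked on seq output (assuming seq is available; on systems without it, any line generator works):

```sh
# Drop the first 12 lines: print from line 13 onward.
first_removed=$(seq 20 | tail -n +13)

# Drop the last 3 lines with the rolling-buffer awk from above.
last_removed=$(seq 10 | awk -v n=3 'NR>n{print line[NR%n]} {line[NR%n]=$0}')
```

The awk variant buffers the most recent n lines and only prints a line once n newer ones have arrived, so the final n lines never make it out.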
Negative arguments to head / tail
1,283,455,842,000
By reading the GNU coreutils man page for rm, one of the options is -f, which according to the manual, -f, --force ignore nonexistent files and arguments, never prompt Now, I made some tests and show that indeed if I use something like rm -f /nonexisting/directory/ it won't complain. What can someone really gain from such an option? Plus the most common examples of "deleting directories" using rm is something like rm -rf /delete/this/dir The -r option makes sense, but -f?
I find that the man page lacks a little detail in this case. The -f option of rm actually has quite a few use cases:

- To avoid an error exit code
- To avoid being prompted
- To bypass permission checks

You are right that it's pointless to remove a non-existent file, but in scripts it's really convenient to be able to say "I don't want these files, delete them if you find them, but don't bother me if they don't exist". Some people use set -e in their script so that it will stop on any error (to avoid any further damage the script can cause), and rm -rf /home/my/garbage is easier than if [[ -f /home/my/garbage ]]; then rm -r /home/my/garbage; fi.

A note about permission checks: to delete a file, you need write permission to the parent directory, not the file itself. So let's say somehow there is a file owned by root in your home directory and you don't have sudo access, you can still remove the file using the -f option. If you use Git you can see that Git doesn't leave the write permission on the object files that it creates:

    -r--r--r-- 1 phunehehe phunehehe 62 Aug 31 15:08 testdir/.git/objects/7e/70e8a2a874283163c63d61900b8ba173e5a83c

So if you use rm, the only way to delete a Git repository without using root is to use rm -rf.
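A small sketch of the exit-status point: under set -e a plain rm on a missing file would abort the script, while rm -f succeeds quietly.

```shell
set -e
tmp=$(mktemp -d)

rm -f "$tmp/does-not-exist"          # no prompt, no error, exit status 0
echo "still running after rm -f"

if rm "$tmp/does-not-exist" 2>/dev/null; then
    echo "unexpected success"
else
    echo "plain rm failed as expected"
fi

rmdir "$tmp"
```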
What's the real point of the -f option on rm?
1,283,455,842,000
ls --hide and ls --ignore provide the possibility to leave out files defined through regular expressions given after the --hide= or --ignore= part. The latter makes sure that this option isn't turned off via -a, -A. The command's man and info pages mention Regular Expressions. Question: which wildcards or Regular Expressions are supported in ls --hide= and ls --ignore=? I found out that * $ ? seem to be supported, as well as POSIX Bracket Expressions. But this doesn't seem to work properly all the time and is more a game of trial and error for me. Did I miss something important here?
From the manual:

    -I pattern, --ignore=pattern
        In directories, ignore files whose names match the shell pattern
        (not regular expression) pattern. As in the shell, an initial .
        in a file name does not match a wildcard at the start of pattern.
        Sometimes it is useful to give this option several times. For
        example,

            $ ls --ignore='.??*' --ignore='.[^.]' --ignore='#*'

        The first option ignores names of length 3 or more that start
        with ., the second ignores all two-character names that start
        with . except .., and the third ignores names that start with #.

You can use only shell glob patterns: * matches any number of characters, ? matches any one character, […] matches the characters within the brackets and \ quotes the next character. The character $ stands for itself (make sure it's within single quotes or preceded by a \ to protect it from shell expansion).
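A quick check of the glob (not regex) behavior, assuming GNU coreutils ls on a scratch directory:

```shell
tmp=$(mktemp -d)
touch "$tmp/keep.txt" "$tmp/skip.log" "$tmp/#autosave#"

# '*.log' and '#*' are shell patterns: '*' is a glob star, not a regex repeat
ls --ignore='*.log' --ignore='#*' "$tmp"    # prints: keep.txt

rm -r "$tmp"
```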
syntax of ls --hide= and ls --ignore=
1,283,455,842,000
It's said that compiling GNU tools and Linux kernel with -O3 gcc optimization option will produce weird and funky bugs. Is it true? Has anyone tried it or is it just a hoax?
It's used in Gentoo, and I didn't notice anything unusual.
Compiling GNU/Linux with -O3 optimization
1,283,455,842,000
I know that rm -f file1 will forcefully remove file1 without prompting me. I also know that rm -i file1 will first prompt me before removing file1. Now if you execute rm -if file1, this will also forcefully remove file1 without prompting me. However, if you execute rm -fi file1, it will prompt me before removing file1. So is it true that when combining command options, the last one takes precedence? Like rm -if, then -f will take precedence, but rm -fi, then -i will take precedence. With the ls command, for example, it doesn't matter if you say ls -latR or ls -Rtal. So I guess it only matters when you have contradictory command options like rm -if, is that correct?
When using rm with both -i and -f options, the first one will be ignored. This is documented in the POSIX standard:

    -f  Do not prompt for confirmation. Do not write diagnostic messages
        or modify the exit status in the case of nonexistent operands.
        Any previous occurrences of the -i option shall be ignored.

    -i  Prompt for confirmation as described previously. Any previous
        occurrences of the -f option shall be ignored.

and also in the GNU info page:

    '-f' '--force'
        Ignore nonexistent files and missing operands, and never prompt
        the user. Ignore any previous --interactive (-i) option.

    '-i'
        Prompt whether to remove each file. If the response is not
        affirmative, the file is skipped. Ignore any previous --force
        (-f) option.

Let's see what happens under the hood: rm processes its options with getopt(3), specifically getopt_long. This function will process the option arguments in the command line (**argv) in order of appearance:

    If getopt() is called repeatedly, it returns successively each of
    the option characters from each of the option elements.

This function is typically called in a loop until all options are processed. From this function's perspective, the options are processed in order. What actually happens, however, is application dependent, as the application logic can choose to detect conflicting options, override them, or present an error. For the case of rm and the i and f options, they perfectly overwrite each other. From rm.c:

    case 'f':
      x.interactive = RMI_NEVER;
      x.ignore_missing_files = true;
      prompt_once = false;
      break;

    case 'i':
      x.interactive = RMI_ALWAYS;
      x.ignore_missing_files = false;
      prompt_once = false;
      break;

Both options set the same variables, and the state of these variables will be whichever option is last in the command line. The effect of this is in line with the POSIX standard and the rm documentation.
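A sketch of the last-option-wins behavior (GNU rm assumed; stdin is redirected from /dev/null so an -i prompt reads EOF and is treated as "no" instead of hanging):

```shell
tmp=$(mktemp -d)
touch "$tmp/a" "$tmp/b"

rm -if "$tmp/a" </dev/null                        # -f wins: removed, no prompt
rm -fi "$tmp/b" </dev/null 2>/dev/null || true    # -i wins: prompt gets EOF, file kept

ls "$tmp"        # prints: b
rm -rf "$tmp"
```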
The precedence of command options?
1,283,455,842,000
I read that I should quote variables in bash, e.g. "$foo" instead of $foo. However, while writing a script, I found a case where it works without quotes but not with them:

    wget_options='--mirror --no-host-directories'
    local_root="$1"      # ./testdir received from command line
    remote_root="$2"     # ftp://XXX received from command line
    relative_path="$3"   # /XXX received from command line

This one works:

    wget $wget_options --directory_prefix="$local_root" "$remote_root$relative_path"

This one does not (note the double quotes around $wget_options):

    wget "$wget_options" --directory_prefix="$local_root" "$remote_root$relative_path"

What is the reason for this? Is the first line the correct version, or should I suspect that there is a hidden error somewhere that causes this behavior? In general, where do I find good documentation to understand how bash and its quoting work? While writing this script I feel that I started to work on a trial-and-error basis instead of understanding the rules.
Basically, you should double quote variable expansions to protect them from word splitting (and filename generation). However, in your example,

    wget_options='--mirror --no-host-directories'
    wget $wget_options --directory_prefix="$local_root" "$remote_root$relative_path"

word splitting is exactly what you want. With "$wget_options" (quoted), wget doesn't know what to do with the single argument --mirror --no-host-directories and complains

    wget: unknown option -- mirror --no-host-directories

For wget to see the two options --mirror and --no-host-directories as separate, word splitting has to occur. There are more robust ways of doing this. If you are using bash or any other shell that has arrays as bash does, see glenn jackman's answer. Gilles' answer additionally describes an alternative solution for plainer shells such as the standard /bin/sh. Both essentially store each option as a separate element in an array.

Related question with good answers: Why does my shell script choke on whitespace or other special characters?

Double quoting variable expansions is a good rule of thumb. Do that. Then be aware of the very few cases where you shouldn't do that. These will present themselves to you through diagnostic messages, such as the above error message. There are also a few cases where you don't need to quote variable expansions. But it's easier to continue using double quotes anyway as it doesn't make much difference. One such case is

    variable=$other_variable

Another one is

    case $variable in
      ...) ... ;;
    esac
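For a plain POSIX /bin/sh without arrays, the same idea can be sketched with the positional parameters: each option stays a separate word, and "$@" expands them intact without any word splitting (the option list here is hypothetical, standing in for the wget options above):

```shell
set -- --mirror --no-host-directories   # one positional parameter per option

echo "$# separate options"    # prints: 2 separate options
printf '<%s>\n' "$@"          # each option on its own line, unsplit
# wget "$@" "$url"            # how it would be passed on (not run here)
```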
Why do options in a quoted variable fail, but work when unquoted?
1,283,455,842,000
If I run the following .sh file:

    #!/bin/sh -a
    echo "a" | sed -e 's/[\d001-\d008]//g'

The result is an error:

    sed: -e expression #1, char 18: Invalid range end

But if I run the following .sh file:

    #!/bin/sh
    set -a
    echo "a" | sed -e 's/[\d001-\d008]//g'

It runs without error. Isn't the second code supposed to be equivalent to the first one? Why the error in the first one?
When bash is called with the name sh, it does this:

    if (shell_name[0] == 's' && shell_name[1] == 'h' && shell_name[2] == '\0')
      act_like_sh++;

and then later sets the POSIXLY_CORRECT shell variable to y:

    if (act_like_sh)
      {
        bind_variable ("POSIXLY_CORRECT", "y", 0);
        sv_strict_posix ("POSIXLY_CORRECT");
      }

bind_variable calls bind_variable_internal, which, if the shell attribute a is on at the time (which it would be if you invoked the shell with -a), marks the shell variable as exported. So in your first script:

    #!/bin/sh -a
    echo "a" | sed -e 's/[\d001-\d008]//g'

sed is invoked with POSIXLY_CORRECT=y in its environment, which will make it complain about [\d001-\d008]. (Same thing happens if sed is given the --posix option.) In GNU sed, \dNNN is an escape code for the character whose numerical value in base-10 is NNN, but in POSIX mode, this is disabled inside a bracket expression, so [\d001-\d008] means literally the characters \, d, etc., with the range being from 1 to \. In order of character codes, 1 comes before \ (and the range includes all digits except zero, plus all uppercase letters, plus some special characters). In the en_US.UTF-8 locale which you were using, \ sorts before 1, however, so the range is invalid.

In your second script:

    #!/bin/sh
    set -a
    echo "a" | sed -e 's/[\d001-\d008]//g'

even though POSIXLY_CORRECT is set in the shell, it isn't exported, so sed is invoked without POSIXLY_CORRECT in the environment, and sed runs with GNU extensions. If you add export POSIXLY_CORRECT near the top of your second script, you'll also see sed complain.
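The export mechanics can be checked directly: with allexport (set -a) on, an assignment lands in the environment of child processes; without it, the variable stays shell-local. A minimal sketch with throwaway variable names:

```shell
unset FOO BAR
FOO=1             # allexport off: shell-local only
set -a
BAR=2             # allexport on: exported automatically
set +a

# a child process sees BAR but not FOO
sh -c 'echo "FOO=${FOO-unset} BAR=${BAR-unset}"'
# prints: FOO=unset BAR=2
```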
Why does -a in "#!/bin/sh -a" affect sed and "set -a" doesn't?
1,283,455,842,000
For example:

    xargs -n 1

is the same as

    xargs -n1

But if you look at the man page, the option is listed as -n max-args, which means the space is supposed to be preserved. There is nothing about the abbreviated form -nmax-args. This also happens with many other Linux utilities. What is this called in Linux? Do all utilities support the abbreviated form (but never document it in the man page)?
When you write the command line parsing bit of your code, you specify what options take arguments and which ones do not. For example, in a shell script accepting an -h option (for help, for example) and an -a option that should take an argument, you do

    opt_h=0 # default value
    opt_a=""
    while getopts 'a:h' opt; do
      case $opt in
        h) opt_h=1 ;;
        a) opt_a="$OPTARG" ;;
      esac
    done
    echo "h: $opt_h"
    echo "a: $opt_a"

The a:h bit says "I'm expecting to parse two options, -a and -h, and -a should take an argument" (it's the : after a that tells the parser that -a takes an argument). Therefore, there is never any ambiguity in where an option ends, where its value starts, and where another one starts after that. Running it:

    $ bash test.sh -h -a hello
    h: 1
    a: hello
    $ bash test.sh -h -ahello
    h: 1
    a: hello
    $ bash test.sh -hahello
    h: 1
    a: hello

This is why you most of the time shouldn't write your own command line parser to parse options. There is only one case in this example that is tricky. The parsing usually stops at the first non-option, so when you have stuff on the command line that looks like options:

    $ bash test.sh -a hello -world
    test.sh: illegal option -- w
    test.sh: illegal option -- o
    test.sh: illegal option -- r
    test.sh: illegal option -- l
    test.sh: illegal option -- d
    h: 0
    a: hello

The following solves that:

    $ bash test.sh -a hello -- -world
    h: 0
    a: hello

The -- signals an end of command line options, and the -world bit is left for the program to do whatever it wants with (it's in one of the positional variables). That is, by the way, how you remove a file that has a dash at the start of its file name with rm.

EDIT: Utilities written in C call getopt() (declared in unistd.h) which works pretty much the same way. In fact, for all we know, the bash function getopts may be implemented using a call to the C library function getopt().
Perl, Python and other languages have similar command line parsing libraries, and it is most likely that they perform their parsing in similar ways. Some of these getopt and getopt-like library routines also handle "long" options. These are usually preceded by a double dash (--), and long options that take arguments often do so after an equal sign, for example the --block-size=SIZE option of [some implementations of] the du utility (which also allows for -B SIZE to specify the same thing). The reason manuals are often written to show a space between the short options and their arguments is probably for readability.

EDIT: Really old tools, such as the dd and tar utilities, have options without dashes in front of them. This is purely for historical reasons and for maintaining compatibility with software that relies on them working in exactly that way. The tar utility has gained the ability to take options with dashes in more recent times. The BSD manual for tar calls the old-style options "bundled flags".
Why can spaces between options and parameters be omitted?
1,283,455,842,000
I'm curious - is there a difference between ls -l and ls -lllllllllllllllllllllllllllll? The output appears to be the same and I'm confused on why ls allows duplicate switches. Is this a standard practice among most commands?
Short answer: Because it's programmed to ignore multiple uses of a flag.

Long answer: As you can see in the source code of ls, there is a part with the function getopt_long() and a huge switch case:

    int c = getopt_long (argc, argv,
                         "abcdfghiklmnopqrstuvw:xABCDFGHI:LNQRST:UXZ1",
                         long_options, &oi);
    ....
    switch (c)
      {
        ....
        case 'l':
          format = long_format;
          break;
        ....
      }

The function getopt_long() reads all parameters given to the program. In the case of -l, the variable format is set. So when you type multiple -l flags, that variable is set multiple times, but that does not change anything. Well, it changes one thing: this huge switch case statement must run through multiple times, because of the multiple -l flags, so ls takes longer to complete. But this time is not worth mentioning. =)
Why does ls accept duplicate switches?
1,283,455,842,000
I am trying to find files containing a specific word using grep. There are many files in the directory (> 500). Command I run:

    $ grep 'delete' *

Output:

    validate_data_stage1:0
    validate_data_stage2:0
    validate_data_stage3:0
    validate_data_stage4:0
    validate_data_stage5:0
    validate_input_stage1:0
    validate_input_stage2:0
    validate_input_stage3:0
    validate_input_stage4:0
    .... and hundreds of such lines

These are the files that don't contain the given match. I want to suppress those lines from displaying to stdout. I know of the -q switch, but that would suppress the complete output, which I don't want. How do I do that?
That's the behavior exhibited by grep -c. Probably you have a file whose name starts with - and contains a c character, and you're using GNU grep without setting the POSIXLY_CORRECT environment variable. Use:

    grep -- delete *

or better:

    grep delete ./*

-- marks the end of options so that that filename will not be considered as an option (with a POSIX grep, it wouldn't, since the non-option delete argument would have marked the end of options), but it wouldn't address the problem of a file called -. The grep delete ./* is more robust but has the drawback of outputting the extra ./ for matching files (though that may be considered a bonus since it helps identify file names that contain newline characters).
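A reproducible sketch of the failure mode and both fixes (GNU grep assumed; the file name "-c" is chosen precisely because it looks like the count option):

```shell
tmp=$(mktemp -d); cd "$tmp"
printf 'please delete me\n' > '-c'
printf 'please delete me\n' > normal.txt

grep delete *        # '-c' is taken as the count option: prints just "1"
grep -- delete *     # '-c' is now a file operand: both files are searched
grep delete ./*      # './-c' can't be mistaken for an option

cd /; rm -r "$tmp"
```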
grep how to suppress display of non-matched file?
1,283,455,842,000
In the code below, when I give option r, getopts requires one argument:

    while getopts ":hr::l:" opt; do
      case $opt in
        r ) echo "Run Numbers - argument = $OPTARG " ;;
        l ) echo "Latency range - argument = $OPTARG" ;;
        h ) helptext
            graceful_exit ;;
        * ) usage
            clean_up
            exit 1
      esac
    done

But I need to pass two arguments after the -r option, instead of one. Is there an easy way to do this?
You cannot pass two arguments with a single option using getopts. I recommend the following alternatives:

1. Put quotes around multiple arguments

In this case getopts will treat them as one argument, but you will be able to split it later on. You can even put all arguments in the array at once:

    #!/bin/bash
    while getopts ":hr:l:" opt; do
      case $opt in
        r ) echo "Run Numbers - argument = $OPTARG "
            set -f               # disable glob
            IFS=' '              # split on space characters
            array=($OPTARG) ;;   # use the split+glob operator
        l ) echo "Latency range - argument = $OPTARG" ;;
        h ) helptext
            graceful_exit ;;
        * ) usage
            clean_up
            exit 1
      esac
    done
    echo "Number of arguments: ${#array[@]}"
    echo -n "Arguments are:"
    for i in "${array[@]}"; do
      echo -n " ${i},"
    done
    printf "\b \n"

The example of a run:

    ./script -r "123 456 789"

And output:

    Run Numbers - argument = 123 456 789
    Number of arguments: 3
    Arguments are: 123, 456, 789

2. Use comma (or other preferred character) as a delimiter

    ./script -r 123,456,789

and you just replace IFS=' ' with IFS=, in the code above. That one has the advantage of allowing empty elements. As pointed out in the comments section, this solution is chosen by some common programs, e.g. lsblk -o NAME,FSTYPE,SIZE.

3. Allow multiple -r options

Multiple -r, but each taking only one argument:

    ./script -r 123 -r 456 -r 789

Then arguments would be added to the array one by one:

    array+=("$OPTARG")

That one has the advantage of not having limitations on what characters the elements may contain. This one is also used by some standard linux tools, e.g. awk -v var1=x -v var2=y.
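The splitting step from the first alternative can also be sketched without bash arrays, for a plain POSIX sh, by splitting the quoted argument into the positional parameters:

```shell
arg='123 456 789'   # what getopts would deliver in $OPTARG for: -r "123 456 789"

set -f              # disable globbing while splitting
old_ifs=$IFS
IFS=' '
set -- $arg         # unquoted on purpose: the split+glob operator
IFS=$old_ifs
set +f

echo "Number of arguments: $#"    # prints: Number of arguments: 3
printf '%s\n' "$@"
```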
Provide two arguments to one option using getopts
1,283,455,842,000
If I have a list of URLs separated by \n, are there any options I can pass to wget to download all the URLs and save them to the current directory, but only if the files don't already exist?
There is a -nc (--no-clobber) option for wget.
Download list of files if they don't already exist
1,283,455,842,000
I was reading the man page for rm when I came across this option:

    -P    Overwrite regular files before deleting them. Files are
          overwritten three times, first with the byte pattern 0xff,
          then 0x00, and then 0xff again, before they are deleted.

I guess -P is meant for thoroughly deleting a file, but wouldn't setting all the bytes to 0xff or 0x00 be enough? Why does it have to toggle between the two three times?
There is a technique called residual information retrieval that can read data after it has been deleted. It is based on the idea that when the drive is magnetized to store data, the parts of the platter close to the data are also affected, so it should be possible to re-read the old data that way. It is a costly technique, though — use it if you are paranoid ;) By writing the data three times (in this case), the parts next to the track on the drive should be re-set as well, in order to make it impossible to re-read the data this way.
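The -P flag comes from BSD rm; on systems without it, the same three-pass byte pattern can be sketched with standard tools (purely illustrative — dedicated tools like GNU shred do this properly, including syncing between passes):

```shell
f=$(mktemp)
printf 'top secret\n' > "$f"
size=$(wc -c < "$f")
size=$((size))                      # normalize possible leading spaces from wc

for pat in 377 000 377; do          # octal 377 = 0xff, 000 = 0x00
    dd if=/dev/zero bs="$size" count=1 2>/dev/null |
        tr '\000' "\\$pat" > "$f"
done

od -An -tx1 "$f"                    # nothing left but ff bytes
rm -f "$f"
```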
What's the purpose of `rm -P`?
1,283,455,842,000
When I grep the man page of the find command for matches to type, it returns a lot of search results that I don't want. Instead I want to use a command that returns only the search results for -type. The command

    man find | grep -type

doesn't work. It returns:

    grep: invalid option -- 't'
If you want to grep for a pattern beginning with a hyphen, use -- before the pattern you specify.

    man find | grep -- -type

If you want more info, for example the entire section describing an option, you could try using sed:

    $ man find | sed -n '/-mindepth/,/^$/p'
    -mindepth levels
           Do not apply any tests or actions at levels less than levels (a
           non-negative integer). -mindepth 1 means process all files
           except the command line arguments.

However, this won't work for every option you might search for. For example:

    $ man find | sed -n '/^[[:space:]]*-type/,/^$/p'
    -type c
           File is of type c:

Not very helpful. Worse, for some options you could be misled into thinking you'd read the whole text about the option when you really hadn't. For example, searching -delete omits the very important WARNING contained as a second paragraph under that heading. My recommendation is to use a standard call to man with the LESS environment variable set. I use it quite commonly in my answers on this site.

    LESS='+/^[[:space:]]*-type' man find

To learn more about how this works, see:

    LESS='+/^[[:space:]]*LESS ' man less
    LESS='+/\+cmd' man less
    LESS='+/\/' man less

If you just want to find the option quickly and interactively in the man page, learn to use less's search capabilities. And also see: How do I use man pages to learn how to use commands?
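The sed address range /option/,/^$/ ("from the heading to the next blank line") can be tried on a small stand-in for man output, so it works without any particular man page installed:

```shell
# prints the -type entry plus the blank line that closes the range
sed -n '/^ *-type/,/^$/p' <<'EOF'
 -name pattern
    Base of file name matches pattern.

 -type c
    File is of type c.

 -user uname
    File is owned by user uname.
EOF
```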
grep the man page of a command for hyphenated options
1,283,455,842,000
I am looking for the Linux command and option combination to display the contents of a given file byte by byte with the character and its numerical representation. I was under the impression that in order to do this, I would use the following: od -c [file] However, I have been told this is incorrect.
The key phrase is "the character and its numerical representation", so -c only gives you half of that. One solution is

    od -c -b file

but of course there are lots of different number representations to choose from.
Linux command to display the contents of a given file byte by byte with the character and its numerical representation displayed for each byte [closed]
1,283,455,842,000
I want to create a .tgz from the content of a directory. I also want to strip the leading "./" from the tar'ed content. I had done this as follows:

    cd /path/to/files/ && find . -type f | cut -c 3- | xargs tar czf /path/to/tgz/myTgz.tgz

I learned recently that using xargs may not be the best way to pull this off, because xargs may invoke tar multiple times if the command-line argument list gets too long, and I was advised to make use of tar's ability to read a list of input files from stdin. I ended up finding this article on how to do this. However, I find that the recommendation...

    cd /path/to/files/ && find . -type f | cut -c 3- | tar czf foo.tgz -T -

...seems to not be portable. It runs fine on my dev PC, but on a busybox target, I get the following error from running the same command:

    tar: can't open '-': No such file or directory

So, my question: is there a truly portable way to invoke tar to create a .tgz by feeding it input files from stdin (as opposed to command-line arguments)? (It is not an option available to me to install alternatives to tar such as gnutar/bsdtar/etc.)

(Secondary question: why does the "-T -" argument to tar denote "read files from stdin"? From the tar man page, all I could find was that "-T" means "get names to extract or create from FILE"... but I couldn't see any reference to a plain "-".)
It is a common convention to interpret - to mean standard input where an input file name is expected, and to mean standard output where an output file name is expected. Because this is a common convention, the short help summary in the GNU tar man page does not mention it, but the complete manual (usually available locally through info tar) does. The POSIX command line utility syntax guidelines include this convention, so it's pretty widespread (but it's always a choice on the part of the author of the program). BusyBox utilities do follow this convention. But the manual does not mention tar as supporting the option -T, and neither does the version on the machine I'm posting this from (1.27.2 on Ubuntu). I don't know why you're getting the error "can't open '-': No such file or directory" rather than "invalid option -- 'T'". It seems that your tar interprets -T as an option that does not take an argument, then sees - as a file name. Since in this context tar needs a file name to put in the archive, and not just some content that comes from a file, it would not make sense to use the stdin/stdout interpretation for -.

BusyBox utilities support a restricted set of functionality by design, because they're intended for embedded systems where the fancier features of GNU utilities wouldn't fit. Apparently -T is not a feature that the BusyBox designers considered useful. I don't think BusyBox tar has any way to read file names from stdin. If you need to archive a subset of the files in a directory and you don't need any symbolic links in the archive, a workaround is to create a forest of symbolic links in a temporary directory and archive this temporary directory.

It's not clear exactly why you're using find. If you only want the files in the current directory, why not

    tar czf /path/to/archive.tgz -- *
Your command does make sense if there are subdirectories and you want to archive the files in these subdirectories, but not the directories themselves (presumably to restore them in a place where the directory structure must exist but may have different permissions). In this case a leading ./ wouldn't do any harm.
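For completeness, here is the -T - form exercised end to end where GNU tar is available (assumed below); the archive is written outside the tree being scanned so find cannot race against its own output:

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/src/sub"
printf 'a\n' > "$tmp/src/file1"
printf 'b\n' > "$tmp/src/sub/file2"

cd "$tmp/src"
# cut -c 3- strips the leading "./" that find prepends to every path
find . -type f | cut -c 3- | tar czf "$tmp/out.tgz" -T -

tar tzf "$tmp/out.tgz" | sort     # prints: file1 and sub/file2, no leading ./
cd /; rm -r "$tmp"
```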
Portable way to invoke tar with a list of files from stdin
1,283,455,842,000
sort -o seems superfluous. What is the point of using it when we can use sort >? Is it sometimes impossible to use shell redirection?
Sort a file in-place:

    sort -o file file

Using sort file >file would start by truncating the file called file to zero size, then calling sort with that empty file, resulting in an empty output file no matter what the original file's contents were. Also, in situations where commands or lists of options are automatically generated by e.g. scripts, adding -o somefile to the end of the options would override any previously set output file, which allows controlling the output file location by way of appending options.

    sort_opt=( some list of options )
    if [ ... something ... ]; then
        # We don't need to go through and delete any old use of "-o"
        # because this later option would override it.
        sort_opt+=( -o somefile.out )
    fi
    sort "${sort_opt[@]}" "$thefile"

There might also be instances where the sort binary executable is called directly, without a shell to do any redirection to any file. Note that -o is a standard option whereas --output is a GNU extension.
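The difference is easy to demonstrate on scratch files — the redirection truncates the input before sort ever reads it, while -o does not:

```shell
tmp=$(mktemp -d)
printf 'b\na\nc\n' > "$tmp/f"
printf 'b\na\nc\n' > "$tmp/g"

sort -o "$tmp/f" "$tmp/f"     # safe in-place sort
cat "$tmp/f"                  # prints: a b c (one per line)

sort "$tmp/g" > "$tmp/g"      # shell truncates g first...
wc -c < "$tmp/g"              # ...so g is now empty: prints 0

rm -r "$tmp"
```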
Why does sort have an --output= option?
1,283,455,842,000
I am trying to understand info who but completely fail at the term "non-option argument". Can someone please explain this term to me in simple words or with an example?

UPDATE: from info who:

    If given no non-option arguments, who prints the following
    information for each user currently logged on: login name, terminal
    line, login time, and remote hostname or X display.

    If given one non-option argument, who uses that instead of a default
    system-maintained file (often /var/run/utmp or /etc/utmp) as the
    name of the file containing the record of users logged on.
    /var/log/wtmp is commonly given as an argument to who to look at who
    has previously logged on.

    If given two non-option arguments, who prints only the entry for the
    user running it (determined from its standard input), preceded by
    the hostname. Traditionally, the two arguments given are am i, as in
    who am i.

I [thought to] know the difference between an argument and an option, but this [again] nixes a lot.
The terminology is not completely fixed, so different documentation uses different terms, or worse, the same terms with different meanings. The terminology in the man page you're reading is a common one. It is the one used in the POSIX standard. In a nutshell, each word after the command is an argument, and the arguments that start with - are options.

Argument
    In the shell command language, a parameter passed to a utility as
    the equivalent of a single string in the argv array created by one
    of the exec functions. An argument is one of the options,
    option-arguments, or operands following the command name.

Operand
    An argument to a command that is generally used as an object
    supplying information to a utility necessary to complete its
    processing. Operands generally follow the options in a command line.

Option
    An argument to a command that is generally used to specify changes
    in the utility's default behavior.

"Utility" is what is generally called "command" (the standard uses the word utility to avoid ambiguity with the meaning of "command" that includes the arguments or even compound shell commands). Most commands follow the standard utility argument syntax, where options start with a - (dash, a.k.a. minus). So an option is something like -a (short option, follows the POSIX guidelines) or --all (long option, an extension from GNU). A non-option argument is an argument that doesn't begin with -, or that consists solely of - (which who treats as a literal file name but many commands treat as meaning either standard input or standard output). In addition, some options themselves have an argument. This argument can be passed in several ways:

- For a single-letter option, in the same argument to the utility: foo -obar: bar is the argument to the single-letter option -o.
- In the GNU long argument syntax, in the same argument, separated by an equal sign: foo --option=bar.
- In a separate argument: foo -o bar or foo --option bar.
If the option -o (or --option) takes an argument, then bar is the argument of the option -o (or --option). If -o (or --option) does not take an argument, then bar is an operand. Here's a longer example:

    tail -n 3 myfile

-n is an option, 3 is an argument to the option -n, and myfile is an operand. Terminology differs, so you may find documents that use argument in the sense where POSIX uses operand. But "non-option argument" is more common than either term for this meaning.
What is a "non-option argument"?
1,283,455,842,000
The following works: git -C ~/dotfiles status But this fails: git status -C ~/dotfiles Why is this?
This is because -C is a global option, and doesn't "belong" to the status action. This is a common pattern, resulting in synopses like the one below:

    command [global options] action [action-specific options]

git --help lists Git's global options, and man git goes into more detail.
Why does position of -C matter in git commands?
1,283,455,842,000
My nice is from GNU coreutils 9.1. I observed that nice -15 is equivalent to nice -n 15:

    nice                   # prints 0 for me, the base niceness is 0
    nice -n 15 nice        # prints 15, this is expected
    sudo nice -n -15 nice  # prints -15, this is expected
    nice -15 nice          # prints 15

-15 is a negative number. Why does it increment the niceness in the last example above? The manual (e.g. in Debian 12) does not explain this.
The portable syntax requires -n if you want to specify an increment. When in doubt, use nice -n. In nice -15 the argument is not really a negative number. It is a dash concatenated with a positive number. The leading dash indicates it is an option. Compare this e.g. to kill -15, which is equivalent to kill -s 15. Similarly nice -15 is equivalent to nice -n 15. kill -15 is not as confusing as nice -15, because you don't expect signal numbers to be negative. In the case of nice you were confused because an adjustment to niceness may be negative, and <dash><digit(s)> surely looks like a non-positive number. It's easy to interpret -15 as "minus fifteen" in cases where negative numbers make sense. Note that the number after the dash may carry an explicit sign:

    nice -+15 nice        # prints 15
    sudo nice --15 nice   # prints -15

Unfortunately --15 looks kind of like a long option (compare: --help) with a positive number, which may add to the confusion. It is another reason to prefer nice -n.
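A quick check that the dash is option syntax rather than a minus sign (no sudo needed for positive adjustments; the comments assume a base niceness of 0, as in the question):

```shell
nice                 # current niceness
nice -n 5 nice       # current + 5
nice -5 nice         # same thing: '-5' is parsed as '-n 5', not minus five
```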
Why does `nice` with a negative argument (e.g. `nice -15`) increment niceness?
1,283,455,842,000
I have the following situation: $ ls 0.txt aaa.txt a-.txt a.txt b.txt ddd.txt -hello.txt libs -.txt ,.txt !.txt ].txt \.txt $ ls [-a]*.txt ls: opzione non valida -- "e" Try 'ls --help' for more information. $ ls [a-]*.txt ls: opzione non valida -- "e" Try 'ls --help' for more information. The dash (-) creates some problems. How can I find a file starting with -?
Use -- to indicate the end of options for ls: ls -- -* or do the following to explicitly reference the current directory with ./: ls ./-* If you want to pass more options to ls, put them before -- or ./, e.g. ls -l -- -* ls -l ./-*
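For example (a small sketch; /tmp/dashdemo is just a scratch directory):

```shell
mkdir -p /tmp/dashdemo && cd /tmp/dashdemo
touch ./-hello.txt

ls -- -*    # -- marks the end of options, so -hello.txt is a filename
ls ./-*     # ./ keeps the name from starting with a dash at all
```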
Finding a file starting with '-' dash [duplicate]
1,283,455,842,000
Question very similar to How to append multiple lines to a file with bash but I want to start the file with --, and also append to the file, if possible. printf "--no-color\n--format-doc\n--no-profile\n" >> ~/.rspec-test The issue is starting the file with "--" gives me a: -bash: printf: --: invalid option printf: usage: printf [-v var] format [arguments] Is there a way to escape the --? Are there any alternatives? I'm not sure how to do multiple lines using echo, and cat isn't a good option, I'd like to have it in an automated script.
Most commands that accept --foo as an option also accept -- by itself as an "end of options, start of arguments" marker - so you could do: printf -- "--no-color\n--format-doc\n--no-profile\n" >> ~/.rspec-test But the more specific answer to your exact example is that the first argument to printf is a format specifier, and you're making things more difficult than necessary by not using printf for its formatting abilities. This would be a better way to do what you want: printf "%s\n" --no-color --format-doc --no-profile >> ~/.rspec-test That tells printf to take each argument it gets and print it, followed by a newline. Easier than repeating the \n yourself, and it avoids the leading -- problem you were facing. And it removes the need to escape any % signs that your strings might contain. As for how to do multiple lines with echo, you could use: echo -ne "--no-color\n--format-doc\n--no-profile\n" >> ~/.rspec-test Or, much more portably: { echo --no-color; echo --format-doc; echo --no-profile; } >> ~/.rspec-test Or using cat along with a here-doc: cat >>.rspec-test <<EOF --no-color --format-doc --no-profile EOF
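To see why the %s form sidesteps the problem entirely (the /tmp path below is just a stand-in for ~/.rspec-test):

```shell
# The first argument '%s\n' is clearly a format string, so the following
# dash-prefixed arguments are never mistaken for options
printf '%s\n' --no-color --format-doc --no-profile > /tmp/rspec-demo
cat /tmp/rspec-demo
```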
How to append multiple lines to a file with bash, with "--" in front of string
1,283,455,842,000
I have a folder with 137795 files in it. I need to delete all of them. When I run rm * I get -bash: /bin/rm: Argument list too long. How do I get past this error?
As far as I can see you don't need to remove the directory itself, only the files inside it. So you can recreate it: rm -r /path/to/dir && mkdir /path/to/dir or delete only the files inside: find /path/to/dir -type f -delete As far as I remember, the first one works faster. Update: note that the way with find might not be optimal from a space-consumption point of view, as the directory size will be reduced only after fsck.
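A small sketch of the find approach, with a toy directory (the point is that find builds no giant argument list, so it never hits the "Argument list too long" limit no matter how many files there are):

```shell
# Set up a directory with some files
mkdir -p /tmp/manyfiles
for i in 1 2 3 4 5; do touch "/tmp/manyfiles/f$i"; done

# Delete only the files; the directory (and its permissions) stay intact
find /tmp/manyfiles -type f -delete
```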
Remove many many many files from a folder
1,283,455,842,000
java -cp FILE.jar FILE.Main inputfile What does the -cp option mean? For that matter, what a negative sign in front mean? I've searched everywhere and couldn't find the answer.
In the example you gave, -cp is a parameter to the command, which is java. Parameters are generally program-specific; in this case cp stands for Class Path, which is another location java will search to find the class files as they are needed by the program.
The -cp option to the java command [closed]
1,393,281,409,000
I was trying out the tar command to make an archive with a .tar extension. Please have a look at the following series of commands I tried: $ ls abc.jpg hello.jpg xjf.jpg $ tar -cfv test.tar *.jpg tar: test.tar: Cannot stat: No such file or directory tar: Exiting with failure status due to previous errors $ ls abc.jpg hello.jpg v xjf.jpg $ rm v $ ls abc.jpg hello.jpg xjf.jpg $ tar -cvf test.tar *.jpg abc.jpg hello.jpg xjf.jpg $ ls abc.jpg hello.jpg test.tar xjf.jpg Why does it give a different response with a different sequence of options, i.e. -cfv vs -cvf? From what I have learnt in bash scripting, option sequence does not matter.
As @jcbermu said, for most programs and in most cases, the order of command line flags is not important. However, some flags expect a value. Specifically, tar's -f flag is: -f, --file ARCHIVE use archive file or device ARCHIVE So, tar expects -f to have a value and that value will be the name of the tarfile it creates. For example, to add all .jpg files to an archive called foo.tar, you would run tar -f foo.tar *.jpg What you were running was tar -cfv test.tar *.jpg tar understands that as "create (-c) an archive called v (-fv), containing files test.tar and any ending in .jpg". When you run tar -cvf test.tar *.jpg on the other hand, it takes test.tar as the name of the archive and *.jpg as the list of files.
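The same sequence can be reproduced safely in a scratch directory (all names below are placeholders):

```shell
mkdir -p /tmp/tardemo && cd /tmp/tardemo
touch a.jpg b.jpg

# -f consumes the next word as the archive name, so it must come last
# in the bundled options
tar -cvf test.tar a.jpg b.jpg
tar -tf test.tar    # list the archive contents to confirm
```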
Why does the specific sequence of options matter for tar command?
1,393,281,409,000
I have seen that people use ls -alh in the Linux terminal. However, when I see the manual, I don't see -alh (i.e. when I type man ls). Why do I not have it in the manual? Can someone explain what it does?
ls -alh is the same as ls -a -l -h. Multiple short options can be combined like this. Here are the meanings of those options from man ls: -a, --all do not ignore entries starting with . -l use a long listing format -h, --human-readable with -l, print sizes in human readable format (e.g., 1K 234M 2G)
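You can confirm that the bundled and separate forms behave identically by comparing their output on the same directory (using /usr here only because nothing in it changes between the two runs):

```shell
ls -alh /usr > /tmp/bundled.txt
ls -a -l -h /usr > /tmp/separate.txt
cmp /tmp/bundled.txt /tmp/separate.txt && echo identical
```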
What does ls -alh mean? [duplicate]
1,393,281,409,000
I'm having a problem where grep gets confused when the directory contains a file starting with dashes. For example, I have a file named "------.js". When I do something like grep somestring * I get the error: grep: unrecognized option '------.js' Usage: grep [OPTION]... PATTERN [FILE]... Try 'grep --help' for more information. This seems like the kind of question that would be asked all over the internet, but I can't find anything. I can manually resolve the problem with something like find . | while read f; do grep MYSTRING "$f"; done but I'm wondering if there's a simpler / more robust solution. I'm running Arch Linux.
As an addition to Romeo's answer, note that grep pattern --whatever is required by POSIX to look for pattern in the --whatever file. That's because no options should be recognised after non-option arguments (here pattern). GNU grep in that instance is not POSIX compliant. It can be made compliant by passing the POSIXLY_CORRECT environment variable (with any value) into its environment. That's the case of most GNU utilities and utilities using a GNU or compatible implementation of getopt()/getopt_long() to parse command-line arguments. There are obvious exceptions like env, where env VAR=x grep --version gets you the version of grep, not env. Another notable exception is the GNU shell (bash) where neither the interpreter nor any of its builtins accept options after non-option arguments. Even its getopts cannot parse options the GNU way. Anyway, POSIXLY_CORRECT won't save you if you do grep -e pattern *.js (there, pattern is not a non-option argument, it is passed as an argument to the -e option, so more options are allowed after that). So it's always a good idea to mark the end of options with -- when you can't guarantee that what comes after won't start with a - (or + with some tools): grep -e pattern -- *.js grep -- pattern *.js or use: grep -e pattern ./*.js (note that grep -- pattern * won't help you if there's a file called -, while grep pattern ./* would work. grep -e "$pattern" should be used instead of grep "$pattern" in case $pattern itself may start with -). There was an attempt in the mid-90s to have bash be able to tell getopt() which arguments (typically the ones resulting from a glob expansion) were not to be treated as options (via a _<pid>_GNU_nonoption_argv_flags_ environment variable), but that was removed as it was causing more problems than it solved.
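A quick reproduction of the safe forms (the scratch directory and the --weird.js name are placeholders):

```shell
mkdir -p /tmp/grepdemo && cd /tmp/grepdemo
printf 'MYSTRING here\n' > ./--weird.js

# Without -- or ./ grep would try to parse --weird.js as an option;
# both of these forms are safe:
grep -e MYSTRING -- *.js
grep MYSTRING ./*.js
```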
grep getting confused by filenames with dashes
1,393,281,409,000
man su says: You can use the -- argument to separate su options from the arguments supplied to the shell. man bash says: -- A -- signals the end of options and disables further option processing. Any arguments after the -- are treated as filenames and arguments. An argument of - is equivalent to --. Well then, let's see: [root ~] su - yuri -c 'echo "$*"' -- 1 2 3 2 3 [root ~] su - yuri -c 'echo "$*"' -- -- 1 2 3 2 3 [root ~] su - yuri -c 'echo "$*"' -- - 1 2 3 1 2 3 [root ~] su - yuri -c 'echo "$*"' - 1 2 3 1 2 3 What I expected (output of the second command differs): [root ~] su - yuri -c 'echo "$*"' -- 1 2 3 2 3 [root ~] su - yuri -c 'echo "$*"' -- -- 1 2 3 1 2 3 [root ~] su - yuri -c 'echo "$*"' -- - 1 2 3 1 2 3 [root ~] su - yuri -c 'echo "$*"' - 1 2 3 1 2 3 Probably not much of an issue. But what's happening there? The second and the third variants seem like the way to go, but one of them doesn't work. The fourth one seems unreliable, - can be treated as su's option.
What is happening is that the first argument you supply to the shell is the $0 parameter, (usually this would be the name of the shell). It is not included when you do echo $* since $* is every argument apart from $0. Example: # su - graeme -c 'echo "\$0 - $0"; echo "\$* - $*"' -- sh 1 2 3 $0 - sh $* - 1 2 3 Update Doing the following command: strace -f su graeme -c 'echo $0; echo "$*"' -- -- 1 2 3 yields the strace line: [pid 9609] execve("/bin/bash", ["bash", "-c", "echo $0; echo \"$*\"", "1", "2", "3"], [/* 27 vars */] <unfinished ...> So somehow it seems that in this case su is gobbling up the extra -- without passing it to bash, possibly due to a bug (or at least undocumented behaviour). It won't however eat up any more than two of the -- arguments: # su graeme -c 'echo $0; echo "$*"' -- -- -- 1 2 3 -- 1 2 3
Passing arguments to su-provided shell
1,393,281,409,000
I often find myself writing shell functions or shell scripts that are meant to be wrappers around other commands. It is also frequent that I want such a wrapper to support a few flags/options. The idea is that the wrapper should cull from the command-line arguments all the flags/options it supports (along with their arguments, when applicable), and pass the remaining arguments as the arguments to the wrapped command. Now, more often than not, the wrapped command also supports flags and options of its own. This means that, according to the scheme described above, the wrapper must be able to handle command-line arguments that include both its own flags/options as well as those supported by the wrapped command. One way to implement such a wrapper would be to specify both the wrapper's and the wrapped command's options in a call to GNU getopt, then collect all the latter, together with any non-option arguments, in some array WRAPPED_COMMAND_ARGUMENTS. Then, at some later time, the wrapped command gets invoked with "${WRAPPED_COMMAND_ARGUMENTS[@]}" as its command-line arguments. This approach has worked reasonably well for me, but it becomes prohibitively laborious when the wrapped command has a lot of options. Instead, I would like to find what in this post's title I refer to as a "permissive alternative to GNU getopt." By this I mean a tool that, like getopt, helps me parse the options that I explicitly tell it about, and treats all the other remaining arguments equally, i.e. making no distinction based on the presence or not of leading hyphens. Is there such a thing?
A common method for tasks like this is to use -- as a separator between options to be handled by the wrapper script and options to be passed on verbatim to the program being executed by the wrapper, e.g. ./my-wrapper -a -b -c -- -d -e -f -- marks the end of options for my-wrapper. All other arguments, and all options after the -- can be passed on to the program it wraps, your wrapper script doesn't need to handle them at all. It also allows for conflicting uses of options - e.g. grep and many other programs use -i to mean that searches should be case-insensitive, while you may want to use -i in your wrapper script to specify an input file. By using --, there's no conflict - -i before the -- means "input file", -i after the -- means "case insensitive". Also worth noting: the program you wrap may also interpret -- as the end of its options, with everything after -- being treated as filenames or strings or other non-option arguments (e.g. to prevent filenames beginning with - being treated as options). ./my-wrapper -a -b -c -- -d -e -f -- non-option args here BTW, -- has been used to mark the end of options since at least the late 1970s or early 1980s.
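A minimal sketch of such a wrapper in POSIX sh (my_wrapper, its -i option, and the echo standing in for the wrapped command are all hypothetical):

```shell
# Options before -- belong to the wrapper; everything after -- is
# passed through to the wrapped command untouched.
my_wrapper() {
    input=
    while [ $# -gt 0 ]; do
        case $1 in
            -i) input=$2; shift 2 ;;
            --) shift; break ;;
            *)  echo "unknown wrapper option: $1" >&2; return 1 ;;
        esac
    done
    echo "wrapper input: $input"
    echo "passed through: $*"
}

my_wrapper -i data.txt -- -d -e -f
```

Because the loop stops at --, the wrapper never has to know anything about the wrapped program's option set, conflicting letters included.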
Looking for a more permissive alternative to GNU getopt for wrapper script
1,393,281,409,000
When I type in the command nmap –Pn –sT -sV –p0-65535 192.168.1.100, my terminal responds: Starting Nmap 7.60 ( https://nmap.org ) at 2018-01-29 11:24 PST Failed to resolve "–Pn". Failed to resolve "–sT". Failed to resolve "–p0-65535". Nmap scan report for 192.168.1.100 Host is up (0.0075s latency). Not shown: 999 closed ports PORT STATE SERVICE VERSION 53/tcp filtered domain Service detection performed. Please report any incorrect results at https://nmap.org/submit/ . Nmap done: 1 IP address (1 host up) scanned in 1.73 seconds I'm confused as to why it is failing to resolve the flags. This used to work on my machines; I have a MacBook and am using bash, as well as Kali Linux. I have tried restarting both machines, and it continually fails to resolve flags regardless of which IP address I attempt to scan.
nmap did not recognize those options because they start with a unicode EN DASH (342 200 223, –) instead of a hyphen or regular dash (-). As a result, nmap interprets those "options" as names to resolve.
Nmap unable to resolve flags
1,393,281,409,000
In Firefox we have two options at the Firefox->Preferences->Fonts and colors->Colors menu, Use system colors and Sites can use other colors. I would like to keep the first one checked (and this is ok) and change the second in a quick way. A quick way could be pressing a shortcut on the keyboard, running a terminal command or changing the content of a config file (because I can do a shell script and use a keyboard command). My motivation is that I would like to always use my system colors, but if a webpage has strange visuals, I'd like to change it back to the original quickly. Any ideas?
I found a solution... I asked on the Mozilla forum and they returned an answer to me. The solution is: install an extension called PrefBar. With this extension we can put a checkbox on Firefox that will change the property browser.display.use_document_colors. We can set a shortcut too (for example, F1). With this extension we can enable several other options too.
How to change a firefox option on a quick way (via shortcuts, command line,..)?
1,393,281,409,000
GNU tar offers the option --checkpoint where a message should be printed at each checkpoint reached. Question: in what units does --checkpoint measure? My best guess is bytes. I wasn't able to find a hint in the man page, nor in info, nor in GNU's documentation on tar. The OS is Linux, I use Bash and I use tar (GNU tar) 1.26
From info tar: The checkpoint facility is enabled using the following option: --checkpoint[=N]: Schedule checkpoints before writing or reading each Nth record. The default value for N is 10. So the default value for N is 10 records. But what is a record anyway? In truth, the meaning of record above is not easily discerned. There are no hints or pointers in the checkpoint section of the info tar manual. Still if you go further on you'll eventually come to the section on blocks and blocking-factor. The data in an archive is grouped into blocks, which are 512 bytes. Blocks are read and written in whole number multiples called records. The number of blocks in a record (i.e., the size of a record in units of 512 bytes) is called the blocking factor. The --blocking-factor=512-SIZE (-b 512-SIZE) option specifies the blocking factor of an archive. The default blocking factor is typically 20 (i.e., 10240 bytes), but can be specified at installation. To find out the blocking factor of an existing archive, use tar --list --file=ARCHIVE-NAME. This may not work on some devices. So each checkpoint record is so-many blocks. This is definable via GNU tar's -b or --blocking-factor=[recordsize] option. If you do: tar --show-defaults You should get output like: --format=gnu -f- -b20 --quoting-style=escape --rmt-command=/usr/lib/tar/rmt Which would indicate that one record is 20 blocks. You can also directly specify the record size in terms of bytes like: --record-size=SIZE[SUF] Instructs tar to use SIZE bytes per record when accessing the archive. The argument can be suffixed with a size suffix, e.g. --record-size=10K for 10 Kilobytes.
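One way to see the record size in action (assuming a GNU tar built with the usual default blocking factor of 20): even a tiny archive is padded out to one full 10240-byte record.

```shell
mkdir -p /tmp/recdemo && cd /tmp/recdemo
echo hello > small.txt
tar -cf small.tar small.txt
wc -c < small.tar    # one full record with the default blocking factor
```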
in which units does tar --checkpoint measure?
1,393,281,409,000
I read many tutorials about the use of the kill command, mostly covering 3 approaches: kill -15 <pid> kill -SIGTERM <pid> kill -TERM <pid> For scripting purposes, and for portability with macOS too, the code numbers are not going to be used, because kill -l on macOS is different than on Linux. So here the signal names come into play. Question What is the correct or suggested approach to send the signal name through the kill command? I mean: SIGsomething or something? And why? These 2 approaches exist for a reason, right? Is there a mandatory reason to use one approach over the other? Environment This situation is for Ubuntu Desktop/Server and Fedora Workstation/Server
The most portable variant is kill -s TERM -- … That’s the form specified in POSIX with no extensions: -s followed by the signal’s name, as defined in signal.h, with no SIG prefix, or 0 for the “null” signal (used to check for the existence of a process with a given identifier). SIGTERM is the default signal, so sending that to processes or process groups can be done using the following canonical forms: kill <pid> kill -- -<pgid> On Linux in general, most implementations of kill (including shell builtins) accept signals as numbers or names with or without a SIG prefix; notable exceptions include the kill builtin of dash, which is the default /bin/sh in Debian-based distributions, and of the Schily Bourne Shell. There’s no “mandatory” reason to use one form rather than another, among whatever forms are supported by the tools you use. I would personally avoid numeric forms because they can appear to be ambiguous: is kill -9 -15 intended to send SIGTERM to process groups 9 and 15, or SIGKILL to process group 15? (It’s the latter, but some readers may wonder.) I would also omit the SIG prefix since that’s not recognised everywhere. Note that POSIX does specify a few numeric signal values: Number Signal 0 “Null” signal 1 HUP 2 INT 3 QUIT 6 ABRT 9 KILL 14 ALRM 15 TERM
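A small demonstration of the portable form, using the shell's 128+signal exit-status convention to confirm which signal was delivered:

```shell
sleep 30 &
pid=$!
kill -s TERM "$pid"
wait "$pid" && status=0 || status=$?
echo "$status"    # 143 = 128 + 15, i.e. terminated by SIGTERM
```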
What is correct or suggested approach to send the signal name through 'kill' command?
1,393,281,409,000
I'm preparing for the LPIC-1, exam 102, and was wondering what is the difference between these two commands with respect to options -G and -aG: usermod -G projectA, projectB jsmith usermod -aG projectA, projectB jsmith The user jsmith has its own default group, which is not listed above among the groups/projects. As I understand from the man pages of usermod, in (1) jsmith is taken off the listed groups/projects. In (2), the user is appended to those groups listed after -G and this does not affect its belonging to its default group. Do I correctly interpret the usage of these two options?
usermod -G sets the user’s supplementary groups to only the groups specified; so after running usermod -G projectA,projectB jsmith the jsmith user will belong to projectA, projectB, and its “primary” group. usermod -aG adds the specified groups to the user’s supplementary groups; so after running usermod -aG projectA,projectB jsmith the jsmith user will belong to projectA and projectB in addition to any groups it already belonged to (including its primary group).
Difference between "usermod -aG" and "usermod -G" options
1,393,281,409,000
Is it possible to change this value at runtime without rebooting? I don't always have this problem; when I suspend right now I'm getting a failure and Suspending console(s) (use no_console_suspend to debug) I would like to debug now, without having to reboot and recreate the problem.
Yes: echo N | sudo tee /sys/module/printk/parameters/console_suspend
Can no_console_suspend be set in runtime?
1,393,281,409,000
With curl it is possible to do curl http://somedomain.com/originalfilename.tar.gz -o newfilename.tar.gz curl http://somedomain.com/originalfilename.tar.gz -o /other/path/newfilename.tar.gz So with -o it is possible to rename the file to download and even define another directory to download to. Now, if I want to keep the same filename, it is mandatory to use -O: curl http://somedomain.com/originalfilename.tar.gz -O Then the originalfilename.tar.gz file is downloaded. Now Question How to download a file keeping its name (i.e. with -O) while defining the directory to download to? Something like curl http://somedomain.com/originalfilename.tar.gz -O <download to /some/path> This is for shell scripting purposes, where the script can be executed in any place; I want to download the originalfilename.tar.gz file explicitly to /some/path
You may specify the path to a directory where the document(s) you are fetching should be written using the --output-dir option. That is, curl -O --output-dir /some/path "$URL" From the curl manual: --output-dir <dir> This option specifies the directory in which files should be stored, when --remote-name or --output are used. The given output directory is used for all URLs and output options on the command line, up until the first -:, --next. If the specified target directory does not exist, the operation will fail unless --create-dirs is also used. If this option is used multiple times, the last specified directory will be used. Example: curl --output-dir "tmp" -O https://example.com See also -O, --remote-name and -J, --remote-header-name. Added in 7.73.0.
curl: download file with the same name with "-O" but defining a specific path directory
1,393,281,409,000
I am looking for an explanation of how grep --label=LABEL works. Maybe somebody can give me an example [or two] of what --label= is for. I understand what grep and zgrep are supposed to do – the latter is mentioned in the entry on --label= in the info page: ... especially useful when implementing tools like `zgrep' 98% of what I found so far is copy/paste from info grep, and in the other two percent the command is embedded in a script which I don't understand.
This feature makes reading the output of grep easier. If you want to check data that grep cannot read directly then you may end up using a pipe to feed grep instead of creating a temporary file which grep can read. If you don't want a temporary file (e.g. because it would be huge) then without --label you would have the problem that grep cannot print the information in which file the match was found. This echo $'fubar\nbaz\nbat' | grep --label=inputfile -H a inputfile:fubar inputfile:baz inputfile:bat is equivalent to echo $'fubar\nbaz\nbat' > inputfile grep -H a inputfile inputfile:fubar inputfile:baz inputfile:bat Without --label the first approach would not work (i.e. not deliver the wanted output) so you would have to do something like this: echo $'fubar\nbaz\nbat' | grep a | awk '{print "inputfile:" $0}' But this does not offer match highlighting in the console.
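To reproduce the effect (this assumes GNU grep, since --label is a GNU extension):

```shell
# The pipe has no filename of its own, so grep is told what to call it
printf 'fubar\nbaz\nbat\n' | grep --label=inputfile -H a
```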
Understanding grep --label=
1,393,281,409,000
From man tar: -f, --file ARCHIVE use archive file or device ARCHIVE Please consider: tar -zxvf myFile.tar.gz As far as I understand, z means "gzipped tarball", x means "extract", v means "verbose output" but about f I am not sure. If we already give the file name myFile.tar.gz, why is the f argument needed?
It's the option you use to specify the actual pathname of the archive you would want to work with, either for extracting from or for creating or appending to, etc. If you don't use -f archivename, different implementations of tar will behave differently (some may try to use a default device under /dev, the standard input or output stream, or the file/device specified by an environment variable). In the command line that you quote, tar -zxvf myFile.tar.gz which is the same as tar -z -x -v -f myFile.tar.gz you use this option with myFile.tar.gz as the option-argument to specify that you'd like to extract from a particular file in the current directory. Consult the manual for tar on your system to see what data stream or device the utility would use if you don't use the -f option. The GNU tar implementation, for example, has a --show-defaults option that will show the default options used by tar, and this will probably include the -f option (this default may be overridden by setting the TAPE environment variable).
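A related detail worth knowing: with most implementations, -f - means the standard input or output stream, which is what makes piping tar to tar possible (the scratch names below are placeholders):

```shell
mkdir -p /tmp/fdemo && cd /tmp/fdemo
echo data > file.txt

# Write the archive to stdout, list it from stdin
tar -cf - file.txt | tar -tf -
```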
What is the rationale for using -f in tar [duplicate]
1,393,281,409,000
I know the -v switch can be used to awk on the command line to set the value for a variable. Is there any way to set values for awk associative array elements on the command line? Something like: awk -v myarray[index]=value -v myarray["index two"]="value two" 'BEGIN {...}'
No. It is not possible to assign non-scalar variables like this on the command line. But it is not too hard to make it. If you can accept a fixed array name: awk -F= ' FNR==NR { a[$1]=$2; next} { print a[$1] } ' <(echo $'index=value\nindex two=value two') <(echo index two) If you have a file containing the awk syntax for array definitions, you can include it: $ cat <<EOF >file.awk ar["index"] = "value" ar["index two"] = "value two" EOF $ gawk '@include "file.awk"; BEGIN{for (i in ar)print i, ar[i]}' or $ gawk --include file.awk 'BEGIN{for (i in ar)print i, ar[i]}' If you really want, you can run gawk with -E rather than -f, which leaves you with an uninterpreted command line. You can then process those command line options (if it looks like a variable assignment, assign the variable). Should you want to go that route, it might be helpful to look at ngetopt.awk.
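The first approach also works with plain files instead of process substitution, which keeps it usable from POSIX sh (the /tmp paths are placeholders):

```shell
# First input populates the array, second input is looked up in it
printf 'index=value\nindex two=value two\n' > /tmp/defs
printf 'index two\n' > /tmp/keys
awk -F= 'FNR==NR { a[$1]=$2; next } { print a[$1] }' /tmp/defs /tmp/keys
```

FNR==NR is true only while awk reads the first file, so those lines go into the array; for every later file the lookup branch runs instead.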
Set awk array on command line?
1,393,281,409,000
I'm using getopts for all of my scripts that require advanced option parsing, and it's worked great with dash. I'm familiar with the standard basic getopts usage, consisting of [-x] and [-x OPTION]. Is it possible to parse options like this? dash_script.sh FILE -x -z -o OPTION ## Or the inverse? dash_script.sh -x -z -o OPTION FILE
Script arguments usually come after options. Take a look at any other commands such as cp or ls and you will see that this is the case. So, to handle: dash_script.sh -x -z -o OPTION FILE you can use getopts as shown below: while getopts xzo: option do case "$option" in x) echo "x";; z) echo "z";; o) echo "o=$OPTARG";; esac done shift $(($OPTIND-1)) FILE="$1" After processing the options, getopts sets $OPTIND to the index of the first non-option argument which in this case is FILE.
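Wrapped in a function for a self-contained demonstration (the parse name is hypothetical; the option letters mirror the answer, and OPTIND is reset at the top so the function can be called more than once):

```shell
parse() {
    OPTIND=1
    while getopts xzo: option; do
        case "$option" in
            x) echo "x" ;;
            z) echo "z" ;;
            o) echo "o=$OPTARG" ;;
        esac
    done
    shift $((OPTIND - 1))
    echo "FILE=$1"
}

parse -x -o OPT somefile
```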
Getopts option processing, Is it possible to add a non hyphenated [FILE]?