Dataset columns: date (int64, 1,220B–1,719B) · question_description (string, 28–29.9k chars) · accepted_answer (string, 12–26.4k chars) · question_title (string, 14–159 chars)
1,620,033,444,000
I want to restrict the internet access of a virtual Windows machine so that it can only reach one specific IP address and upload/download files from that server. I set up an OPNsense installation with two NICs: one is connected to the isolated virtual network containing the Windows PC, the other is a bridged adapter that is part of my LAN. How do I forward only packets from or to this specific IP and drop all other traffic? The isolated virtual network is 10.0.0.0/8 and my LAN is 192.168.178.0/24.
The solution was very easy. Set up an internal network in the VirtualBox adapter settings and connect both the firewall and the Windows PC to this network as their first network adapter. Connect a second, bridged adapter to the firewall. Start the OPNsense installation, assign the bridged adapter as WAN and the internal-network adapter as LAN. Assign IPs on the command line, then finish with the web UI installer. Afterwards, delete the default IPv4/IPv6 allow-everything rules in the Firewall → LAN section and add a new rule allowing inbound traffic with source lan-net and destination set to the IP in question. Works perfectly!
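OPNsense manages its pf ruleset through the web GUI, but purely as an illustration, the GUI rules described above correspond roughly to the following pf snippet (my sketch, not OPNsense output; 203.0.113.10 is a placeholder for "the IP in question"):

```shell
# Hypothetical pf equivalents of the GUI rules described above.
# Allow LAN hosts to reach only the one server...
pass in quick on lan from 10.0.0.0/8 to 203.0.113.10 keep state
# ...and drop everything else arriving on the LAN interface.
block in on lan from 10.0.0.0/8 to any
```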
block all VM connections but forward to a single host
1,620,033,444,000
I have a whitelist of IP addresses that I'm storing in an ipset. I want to craft an iptables rule for my INPUT chain where any IP NOT on the whitelist gets dropped immediately, with no rules further down the chain being considered. If an IP matches an address on the whitelist, it continues down the chain and is checked against the other rules. If I just set a default policy of DROP with an ALLOW rule based on the whitelist, IP addresses not on the whitelist might be compared against other rules in the chain and allowed through based on those criteria, which I do not want. I also don't want to immediately let through traffic matching the whitelist rule (I guess whitelist is a bit of a misnomer here) but rather subject that traffic to further scrutiny. Does iptables support this "DROP on no match" logic?
First create a whitelist with a name (identifier) of your choice. I named mine mylist in this example:

$ sudo ipset -N mylist iphash

In my whitelist I want to allow 10.10.10.0/24 and 10.80.80.0/24 (and then drop everything not listed):

$ sudo ipset -A mylist 10.10.10.0/24
$ sudo ipset -A mylist 10.80.80.0/24

Drop any traffic from any host not defined in the whitelist:

$ sudo iptables -A INPUT -m set ! --match-set mylist src -j DROP

Then allow the hosts defined in the whitelist whatever level of access your requirements call for.
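To sanity-check the set before relying on the DROP rule, ipset can list and test membership (my addition, not part of the original answer; the test address is just an example):

```shell
# Show the set's type and current members
sudo ipset list mylist

# Check whether a given address would match the set
sudo ipset test mylist 10.10.10.5
```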
iptables: drop any ip not on whitelist, short circuiting chain
1,620,033,444,000
Let's say I have a program running that is already using a random TCP/UDP port. If I deny connections now via iptables/ufw, the port is still open for that program until I close/reopen it. Is there any way to block the traffic without restarting that program? Thank you in advance.
iptables -I OUTPUT [-d DEST_IP] -p tcp -m tcp --dport PORT_NUMBER -j DROP

will stop the application from communicating on that port completely. The fact that the port is still open doesn't really mean anything - it will eventually get closed by timeout.
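One caveat worth adding (my addition, not from the original answer): if any earlier rule accepts ESTABLISHED traffic, packets of an already-tracked flow may keep flowing. A hedged sketch for also tearing down the tracked entries, assuming the conntrack tool from conntrack-tools is installed and the port is 55000:

```shell
# Drop new and existing outbound traffic to the port
iptables -I OUTPUT -p tcp -m tcp --dport 55000 -j DROP

# Remove conntrack entries for that port so established flows
# stop matching any state-based ACCEPT rules
conntrack -D -p tcp --dport 55000
```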
block port while it's already being used?
1,620,033,444,000
I am using the following rules to allow DNS requests and outgoing traffic to ports 443, 22 and 80. However, all the traffic to ports 443 and 80 is blocked for some reason.

# Allowing DNS lookups (tcp, udp port 53) to server '8.8.8.8'
/sbin/iptables -A OUTPUT -p udp -d 8.8.8.8 --dport 53 -m state --state NEW,ESTABLISHED -j ACCEPT
/sbin/iptables -A INPUT -p udp -s 8.8.8.8 --sport 53 -m state --state ESTABLISHED -j ACCEPT
/sbin/iptables -A OUTPUT -p tcp -d 8.8.8.8 --dport 53 -m state --state NEW,ESTABLISHED -j ACCEPT
/sbin/iptables -A INPUT -p tcp -s 8.8.8.8 --sport 53 -m state --state ESTABLISHED -j ACCEPT

# Allowing DNS lookups (tcp, udp port 53) to server '127.0.0.53'
/sbin/iptables -A OUTPUT -p udp -d 127.0.0.53 --dport 53 -m state --state NEW,ESTABLISHED -j ACCEPT
/sbin/iptables -A INPUT -p udp -s 127.0.0.53 --sport 53 -m state --state ESTABLISHED -j ACCEPT
/sbin/iptables -A OUTPUT -p tcp -d 127.0.0.53 --dport 53 -m state --state NEW,ESTABLISHED -j ACCEPT
/sbin/iptables -A INPUT -p tcp -s 127.0.0.53 --sport 53 -m state --state ESTABLISHED -j ACCEPT

# allow all and everything on localhost
/sbin/iptables -A INPUT -i lo -j ACCEPT
/sbin/iptables -A OUTPUT -o lo -j ACCEPT

# Allowing new and established incoming connections to port 22, 80, 443
/sbin/iptables -A INPUT -p tcp -m multiport --dports 22,80,443 -m state --state NEW,ESTABLISHED -j ACCEPT
/sbin/iptables -A OUTPUT -p tcp -m multiport --sports 22,80,443 -m state --state ESTABLISHED -j ACCEPT

# Allow all outgoing connections to port 22
/sbin/iptables -A OUTPUT -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
/sbin/iptables -A INPUT -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT

# Allow outgoing icmp connections (pings,...)
/sbin/iptables -A OUTPUT -p icmp -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
/sbin/iptables -A INPUT -p icmp -m state --state ESTABLISHED,RELATED -j ACCEPT

# Allow outgoing connections to port 123 (ntp syncs)
/sbin/iptables -A OUTPUT -p udp --dport 123 -m state --state NEW,ESTABLISHED -j ACCEPT
/sbin/iptables -A INPUT -p udp --sport 123 -m state --state ESTABLISHED -j ACCEPT

/sbin/iptables -A INPUT -j LOG -m limit --limit 12/min --log-level 4 --log-prefix "IP INPUT drop: "
/sbin/iptables -A INPUT -j DROP
/sbin/iptables -A OUTPUT -j LOG -m limit --limit 12/min --log-level 4 --log-prefix "IP OUTPUT drop: "
/sbin/iptables -A OUTPUT -j DROP

# Set default policy to 'DROP'
/sbin/iptables -P INPUT DROP
/sbin/iptables -P FORWARD DROP
/sbin/iptables -P OUTPUT DROP

I can see the following in syslog:

Sep 28 08:17:06 ip-172-31-57-142 kernel: [  486.605568] IP OUTPUT drop: IN= OUT=eth0 SRC=172.31.57.142 DST=172.217.7.206 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=30718 DF PROTO=TCP SPT=37026 DPT=443 WINDOW=62727 RES=0x00 SYN URGP=0
Sep 28 08:17:07 ip-172-31-57-142 kernel: [  487.617296] IP OUTPUT drop: IN= OUT=eth0 SRC=172.31.57.142 DST=172.217.7.206 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=30719 DF PROTO=TCP SPT=37026 DPT=443 WINDOW=62727 RES=0x00 SYN URGP=0

I am not sure what I am doing wrong.
Let's look at just one part of this, where you want to allow outbound connections for ssh on port tcp/22:

# Allowing new and established incoming connections to port 22, 80, 443
/sbin/iptables -A INPUT -p tcp -m multiport --dports 22,80,443 -m state --state NEW,ESTABLISHED -j ACCEPT
/sbin/iptables -A OUTPUT -p tcp -m multiport --sports 22,80,443 -m state --state ESTABLISHED -j ACCEPT

# Allow all outgoing connections to port 22
/sbin/iptables -A OUTPUT -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
/sbin/iptables -A INPUT -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT

The first pair of rules does not do what the comment says. The second pair should work but is overly generous. All four rules are nearly correct, but end up not being sufficiently correct. Instead, just keep it simple:

# Allow outgoing connections to port tcp/22
iptables -A OUTPUT -p tcp --dport 22 -j ACCEPT

# Allow return traffic for established connections
iptables -A INPUT -p tcp -m state --state ESTABLISHED -j ACCEPT
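Applying the same idea to all the services in the question, a minimal stateful ruleset might look like the following (a sketch of my own, not the answerer's script; default-DROP policies are assumed to be set afterwards, as in the question):

```shell
# Outbound client traffic to ssh/http/https
iptables -A OUTPUT -p tcp -m multiport --dports 22,80,443 -j ACCEPT

# DNS over UDP and TCP
iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
iptables -A OUTPUT -p tcp --dport 53 -j ACCEPT

# Return traffic for connections we initiated
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Loopback
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
```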
IPTables is blocking all outgoing traffic to http even though I allowed it
1,620,033,444,000
If I open a port to a single server on a private network, will that put every computer on my network at security risk? For example, if I have a company desktop and allow SSH into that desktop from a specific IP, will that specific IP be able to snoop around and sniff out data on my personal laptop and phones? If so, how would I secure the personal network?
Credit goes to @John1024. Steps: set up a DMZ with the HPC in it and port-forward SSH calls to it (using the main router), with added whitelisting of particular IPs - essentially those hosts are public. Then set up a secondary router to create a private network for my personal devices.
Securing private network during port forwarding?
1,620,033,444,000
I want the port my BitTorrent client works on to be open. My PC is connected directly to the WAN without a router. I turned off the firewall on the ISP's side, but the port is still closed. I didn't explicitly install any firewall on my PC either, at least I don't recall doing so. Is there any way to tell on which side the problem is? I checked the port through the Transmission BitTorrent client itself, through online port checkers, and through nmap on another PC connected through a separate channel. The nmap report is:

Host is up.
PORT      STATE    SERVICE
55133/tcp filtered unknown
Try telnet localhost 55133 on your PC, in a local terminal. If you cannot connect, something is blocking the port (or it is not open at all) on your PC.
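A complementary check (my addition, not from the original answer) is to confirm that something is actually listening on the port; assuming a reasonably modern Linux with iproute2 installed:

```shell
# Is anything listening on TCP 55133, and which process owns it?
ss -tlnp | grep 55133

# Fallback with classic net-tools, if installed
netstat -tlnp | grep 55133
```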
Port is closed, how am I to tell whether the cause is on my side or on my ISP side?
1,620,033,444,000
What's the difference between "modulate state" and "keep state" in the Packet Filter (pf) firewall? I'm using macOS Catalina 10.15.5. I have no idea how to check which version of pf is installed on this machine :\
Packet Filter originates from OpenBSD:

    The initial version of PF was written by Daniel Hartmeier. It appeared in OpenBSD 3.0, which was released on 1 December 2001.

OpenBSD 3.0:

    A new packet filter, PF, featuring NAT capabilities, with a mostly ipf-compatible syntax.

The commit about modulate state predates this release (2001-08-25), so any macOS version that uses PF has modulate state available.

As for modulate state vs keep state, it's all explained in OpenBSD's PF FAQ:

    keep state - works with TCP, UDP, and ICMP. This option is the default for all filter rules.
    modulate state - works only with TCP. PF will generate strong Initial Sequence Numbers (ISNs) for packets matching this rule.

    The modulate state option works just like keep state, except that it only applies to TCP packets. With modulate state, the Initial Sequence Number (ISN) of outgoing connections is randomized. This is useful for protecting connections initiated by certain operating systems that do a poor job of choosing ISNs. To allow simpler rulesets, the modulate state option can be used in rules that specify protocols other than TCP. In those cases, it is treated as keep state.

So modulate state does the default keep state and in addition alters some TCP packets for improved security. Example:

pass out on egress proto { tcp, udp, icmp } from any to any modulate state

which will "modulate" TCP, and just keep state for the others.

If you want to know more about TCP innards and TCP ISNs, there are RFCs about it, such as:

TRANSMISSION CONTROL PROTOCOL:

    Sequence Number: 32 bits
    The sequence number of the first data octet in this segment (except when SYN is present). If SYN is present the sequence number is the initial sequence number (ISN) and the first data octet is ISN+1.

Defending against Sequence Number Attacks:

    Unfortunately, the ISN generator described in [RFC0793] makes it trivial for an off-path attacker to predict the ISN that a TCP will use for new connections, thus allowing a variety of attacks against TCP connections [CPNI-TCP].
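Regarding the "how do I check my pf" part of the question: there is no pf version string as such on macOS, but the running pf can at least be inspected with pfctl (a diagnostic sketch; requires sudo):

```shell
# Show pf status and counters
sudo pfctl -s info

# Show the currently loaded filter rules
sudo pfctl -s rules
```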
pf: difference between 'modulate state' and 'keep state'
1,620,033,444,000
If I check how many connections serverA (192.168.1.1) has open to serverB (192.168.2.1), I get the following response:

[username@serverA ~] $ netstat -n | grep 192.168.2.1
tcp        0      0 192.168.1.1:51846  192.168.2.1:10001  ESTABLISHED
tcp        0      0 192.168.1.1:50872  192.168.2.1:10001  ESTABLISHED
tcp        0      0 192.168.1.1:51824  192.168.2.1:10001  ESTABLISHED
tcp        0      0 192.168.1.1:51848  192.168.2.1:10001  ESTABLISHED
[username@serverA ~] $ netstat -n | grep 10.79.165.145 | wc -l
4

However, if I do the opposite and check on serverB how many connections it has open to serverA, I get this:

[username@serverB ~] $ netstat -n | grep 192.168.1.1
tcp        0      0 192.168.2.1:10001  192.168.1.1:51846  ESTABLISHED
tcp        0      0 192.168.2.1:10001  192.168.1.1:55122  ESTABLISHED
tcp        0      0 192.168.2.1:10001  192.168.1.1:59930  ESTABLISHED
tcp        0      0 192.168.2.1:10001  192.168.1.1:50352  ESTABLISHED
tcp        0      0 192.168.2.1:10001  192.168.1.1:44142  ESTABLISHED
tcp        0      0 192.168.2.1:10001  192.168.1.1:57698  ESTABLISHED
tcp        0      0 192.168.2.1:10001  192.168.1.1:38268  ESTABLISHED
tcp        0      0 192.168.2.1:10001  192.168.1.1:41822  ESTABLISHED
... many more connections ...
tcp        0      0 192.168.2.1:10001  192.168.1.1:43840  ESTABLISHED
tcp        0      0 192.168.2.1:10001  192.168.1.1:50870  ESTABLISHED
tcp        0      0 192.168.2.1:10001  192.168.1.1:34100  ESTABLISHED
tcp        0      0 192.168.2.1:10001  192.168.1.1:34620  ESTABLISHED
tcp        0      0 192.168.2.1:10001  192.168.1.1:41126  ESTABLISHED
tcp        0      0 192.168.2.1:10001  192.168.1.1:49298  ESTABLISHED
tcp        0      0 192.168.2.1:10001  192.168.1.1:50004  ESTABLISHED
tcp        0      0 192.168.2.1:10001  192.168.1.1:51408  ESTABLISHED
[username@serverB ~] $ netstat -n | grep 192.168.1.1 | wc -l
104

I was not expecting a mismatch in the number of connections between the two servers: essentially, serverB thinks there are a lot more connections open to serverA than serverA does to serverB. These servers are on different VLANs and the connections do go through a firewall. Both serverA and serverB are RHEL 7 VMs running on ESXi. They don't run any containers or anything that would do NAT. What could be responsible for the mismatch in open connection numbers?
This was solved by a patch release for the application server in use.
Running netstat on 2 servers checking connections to the other one shows mismatch in number of connections
1,620,033,444,000
I was trying to set firewall rules for my website but I messed up the iptables rules and locked myself out. Now I can't access the VPS via SSH. When I try, I get this message: ssh: connect to host [IP address] port [Port]: Connection timed out. First I followed those steps for setting the firewall rules, and after iptables -A INPUT -j DROP I think I blocked myself, as I couldn't run any command after that. So, what can I do to fix this problem and add a strong firewall? Note 1: I'm new to this stuff and trying to learn it. Note 2: I don't have access to the instance's management page, but I can ask my friend who is hosting the website to do things for me, so the priority can be restoring SSH. Thanks beforehand!
There's no way to regain access via SSH. You have to contact your friend and ask him to reset the iptables rules. If I can give you a suggestion: configure a watchdog using cron or at while you work on iptables, so this doesn't happen again. You may find this answer and this answer on ServerFault useful.
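A minimal sketch of such a watchdog (my addition, not part of the original answer; the file name is hypothetical): before experimenting, install a cron entry that resets the firewall every 10 minutes, and remove it once the ruleset is known to be good.

```shell
# /etc/cron.d/iptables-watchdog  -- hypothetical file name
# Every 10 minutes, reset the INPUT policy and flush all rules,
# so a bad rule can lock you out for at most 10 minutes.
*/10 * * * * root /sbin/iptables -P INPUT ACCEPT; /sbin/iptables -F
```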
Messed up IP table rules and locked myself out while setting firewall on SSH
1,620,033,444,000
I have a VServer on which I installed the UFW firewall. I scanned the server with nmap, but it showed a lot of open ports which I didn't open. Is it a bug? Or did I install UFW incorrectly? Thank you. ufw status: http://prntscr.com/pgp5db nmap: nmap -T4 -A -v ********* //edit I solved the problem: I had just used the wrong nmap command, so the ports were already closed.
Installing UFW and activating the systemd unit is not sufficient; you need to configure it. The usual default is that everything is denied and UFW is therefore disabled after installation (including the start of the service). Check the ufw status output, and be careful not to enable the firewall configuration via UFW or other means without ensuring you still have SSH access, or whatever else you need to manage your server.
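A typical first configuration, sketched here purely as an illustration (note that the SSH rule is added before enabling, for exactly the lock-out reason mentioned above):

```shell
# Allow SSH first so enabling the firewall can't lock you out
sudo ufw allow 22/tcp

# Defaults: drop unsolicited incoming, allow outgoing
sudo ufw default deny incoming
sudo ufw default allow outgoing

# Enable and verify
sudo ufw enable
sudo ufw status verbose
```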
NMAP shows open ports even though I installed the UFW firewall
1,620,033,444,000
I have recently configured my Raspberry Pi 3 to only allow connections through VPN. I would however like to open it for SSH connections from anywhere. The rules below should allow traffic on port 22, however as soon as I enable ufw I can no longer connect from anywhere but a local IP (the rules configuring local access work fine). (The router firewall is configured correctly.)

root@raspberrypi:~# ufw status verbose
Status: active
Logging: on (low)
Default: deny (incoming), deny (outgoing), disabled (routed)
New profiles: skip

To                         Action      From
--                         ------      ----
192.168.178.0/24           ALLOW IN    Anywhere
22/tcp                     ALLOW IN    Anywhere
22/tcp (v6)                ALLOW IN    Anywhere (v6)
Anywhere                   ALLOW OUT   Anywhere on tun0
192.168.178.0/24           ALLOW OUT   Anywhere
31.13.190.247 443/tcp      ALLOW OUT   Anywhere
Anywhere (v6)              ALLOW OUT   Anywhere (v6) on tun0
My error was having OpenVPN active.
UFW denies SSH even though rules allow [closed]
1,558,456,132,000
Say I wanted to reject packets that are sent to my computer from a specific IP on the network using iptables. Do I need to define the destination of the packet in my command or is it sufficient to just include the source? For example, say I am working on 126.184.25.25 and I want to reject all TCP packets from 126.184.25.101 should I use: sudo iptables -t filter -A INPUT -p tcp -s 126.184.25.101 -d 126.184.25.25 -j REJECT or is it sufficient to remove the destination address and use: sudo iptables -t filter -A INPUT -p tcp -s 126.184.25.101 -j REJECT When testing both of the above on my network, the former appeared not to work, but both seem to make sense and I am unsure as to why the former may be incorrect.
Do I need to define the destination of the packet in my command, or is it sufficient to just include the source? You don't need to, and yes, it is sufficient. If you leave off a match it has a default: if you don't specify a protocol, the rule matches all protocols; if you don't specify the table (-t), it defaults to filter. The man page normally states the default for each option, if there is one.
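When one of two similar rules "appears not to work", it helps to look at the per-rule packet counters (a generic diagnostic tip, my addition rather than part of the original answer):

```shell
# List INPUT rules with packet/byte counters and rule positions;
# a rule whose counters never move is not matching anything.
iptables -L INPUT -n -v --line-numbers
```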
Rejecting TCP packets from certain IP on network using iptables
1,558,456,132,000
I need to block all INPUT traffic to port 8090 on an Ubuntu 16.04 server. I used iptables but it did not work. Commands I used:

iptables -A INPUT -p tcp --dport 8090 -j DROP
iptables -A INPUT -p tcp --dport 8090 -s <IP> -j ACCEPT

In the nat table I have:

Chain DOCKER (2 references)
target  prot opt source    destination
DNAT    tcp  --  anywhere  <VM local IP>  tcp dpt:8090 to:172.21.0.2:8080

The public interface is named eth0 and the Docker interface docker0.
Because of the DNAT you're now routing: your INPUT chain isn't used anymore for this DNATed traffic, and it's the FORWARD chain that is traversed instead. The new destination is 172.21.0.2:8080, and that's what the rules should now care about, not <VM local IP>:8090 anymore. So with DNAT in place, you should block your traffic with (in the right order: allow the exception, then forbid everything else):

iptables -A FORWARD -s <IP> -d 172.21.0.2 -p tcp --dport 8080 -j ACCEPT
iptables -A FORWARD -d 172.21.0.2 -p tcp --dport 8080 -j DROP

To be sure it's actually done before any system rule, you could do:

iptables -I FORWARD 1 -s <IP> -d 172.21.0.2 -p tcp --dport 8080 -j ACCEPT
iptables -I FORWARD 2 -d 172.21.0.2 -p tcp --dport 8080 -j DROP

These rules as-is might prevent other containers from reaching this container, depending on configuration, so you might have to adapt them (by stating the external input interface, for example). In any case, you have to find a way to integrate this nicely with the system's method of managing the firewall.
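For what it's worth, recent Docker versions (17.06 and later) provide a dedicated DOCKER-USER chain for exactly this kind of user rule; Docker evaluates it before its own FORWARD rules and does not overwrite it. A sketch along the same lines as the rules above (my addition; <IP> and the container address are taken from the question; explicit positions are used because DOCKER-USER ends with a RETURN rule):

```shell
# Rules in DOCKER-USER run before Docker's own FORWARD rules
# and survive Docker restarts.
iptables -I DOCKER-USER 1 -s <IP> -d 172.21.0.2 -p tcp --dport 8080 -j ACCEPT
iptables -I DOCKER-USER 2 -d 172.21.0.2 -p tcp --dport 8080 -j DROP
```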
Block Docker port and access it to few IP addresses
1,558,456,132,000
This is to document a problem I had with CSF (ConfigServer Firewall) today that cost me a good couple of hours. The problem was that my Ethereum node communicating on port 30303 was being blocked, even though I had added the ports to my config file. CSF seemed to be working just fine, blocking bad logins and other unwanted communications to and from my machine, but when I added new ports to the TCP_IN, TCP_OUT, UDP_IN and UDP_OUT lists and ran sudo systemctl restart csf, they wouldn't take effect. See answer for solution....
The solution was frustrating: systemctl didn't actually reload the iptables rules; only csf -ra would do that. So after several hours of beating my head against the wall, I finally ran csf -ra just for kicks and everything came up.
CSF blocking ethereum traffic, despite valid config
1,558,456,132,000
My goal is to set up a firewall & intrusion prevention system using Snort. I have a spare PC available with at least 2 physical NICs, which used to run pfSense as a firewall with Snort, but this time I want to do the setup myself. So far I have managed to install Debian 9 as a headless system with SSH login (and if really needed I could temporarily add a keyboard and screen). I want to start with just a firewall, without Snort. How do I achieve the following:

- Is it possible to put the firewall just in between my ISP cable modem/router and my LAN? The ISP router has DHCP/NAT enabled, which I can't turn off.
- I want a "plug & play" firewall that I can just put in between, without turning it into a double NAT (which I had before, using pfSense). I mean, if possible I don't want to have different networks, e.g. a 192.168.x.x one and a 10.x.x.x one.
- The firewall is headless, with login via SSH.

Internet WAN
     |
ISP cable modem & router with DHCP, gateway 192.168.0.1
     |
[eth0] firewall [eth1]
     |
   switch ---+--- Wireless AP
             +--- PC1
             +--- ...

I tried to set up a bridge on br0 (via /etc/network/interfaces) adding eth0 and eth1. The bridge had an IP address and it worked fine; I could still connect to the internet from devices behind the switch and via the AP. So I learned that bridges don't care about IP addresses... which doesn't sound good for building a firewall, eventually with Snort (IPS). I've read about iptables and using the "physdev" match. Maybe I'm forced to do double NAT and set up routing? The problem is I don't know enough to know what is best and how to go about it. Sure, I've googled (a lot) and found, for example on aboutdebian.org, articles about proxy/NAT and firewalling... but most articles assume you have a modem only, whereas I can't turn off DHCP nor configure its range. It's always the full 255.255.255.0 range.
Seems I've found a working solution... maybe trivial once you know it, but keep in mind I knew neither Linux nor much networking. So, here's what I learned:

- You need to use a bridge if you want "plug & play", because it just passes traffic. You could set up a router, but then whatever sits behind the firewall would need a different LAN (e.g. 10.x.x.x instead of 192.168.x.x). I would also end up with double NAT and would need to run a DHCP server to provide all devices behind the router/firewall with an IP address. That's why I went with a bridge: no need to change the existing setup, just put the bridge in between.

Now, getting the firewalling to work on a bridge can be done using iptables. Since a bridge doesn't look at layer 3 (IP) but only at layer 2 (MAC address/Ethernet frame), I've found that the iptables extension "physdev" is needed. The man page about it gave me some info. So far I was able to block a ping or ports 80, 443 etc. just for testing... but it proves this approach works. It is important to use the FORWARD chain. For example:

iptables -A FORWARD -m physdev --physdev-in eth0 --physdev-out eth1 -p icmp --icmp-type echo-request -j DROP

Next things to find out:
- How to block IPv6... not sure if I need to add rules via ip6tables or just disable IPv6 altogether on the host. In my internal LAN only IPv4 addresses would be needed. Would I miss out on anything if I blocked/didn't use IPv6?
- Check out ebtables.
- Get into Snort.
...but I feel I got where I wanted to be.
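For the IPv6 question left open above: bridged IPv6 traffic can be filtered with the same physdev approach via ip6tables (a sketch under the same bridge setup described above, dropping all forwarded IPv6 in both directions):

```shell
# Drop all IPv6 frames crossing the bridge, both ways
ip6tables -A FORWARD -m physdev --physdev-in eth0 --physdev-out eth1 -j DROP
ip6tables -A FORWARD -m physdev --physdev-in eth1 --physdev-out eth0 -j DROP
```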
How to setup a firewall between my ISP cable modem/router and my LAN?
1,558,456,132,000
I'm using Linux Mint and I want to block all incoming connections on port 5210 except from 3 IPs. I've searched and went through a lot of threads, but found only results allowing ranges of LAN IPs; I cannot find anything about allowing exactly 3 different IPs that are not in the LAN. How should I do this, or what should I search for?
Allow the three, reject/drop the rest. With iptables from the command line:

iptables -A INPUT -p tcp --dport 5210 --source "$addr1" -j ACCEPT
iptables -A INPUT -p tcp --dport 5210 --source "$addr2" -j ACCEPT
iptables -A INPUT -p tcp --dport 5210 --source "$addr3" -j ACCEPT
iptables -A INPUT -p tcp --dport 5210 -j REJECT

For, e.g., addr2, the first rule does not match and is ignored, while the second rule matches and accepts the packet.

Or, make a chain that does nothing for the three addresses and rejects the rest, then accept or do any further processing at the upper level:

iptables -N p5210
iptables -A p5210 --source "$addr1" -j RETURN
iptables -A p5210 --source "$addr2" -j RETURN
iptables -A p5210 --source "$addr3" -j RETURN
iptables -A p5210 -j REJECT
iptables -A INPUT -p tcp --dport 5210 -j p5210
# add whatever further limitations you want
iptables -A INPUT -p tcp --dport 5210 -j ACCEPT

Of course, putting the addresses in a variable and using a loop to run the same command for all of them is also an option:

#!/bin/bash
allowed_addresses=(1.2.3.4 4.5.6.7 7.8.9.0)
for addr in "${allowed_addresses[@]}" ; do
    iptables -A INPUT -p tcp --dport 5210 --source "$addr" -j ACCEPT
done
Linux - iptables allow only 3 IPs
1,558,456,132,000
I use svnserve on my CentOS server, and I have opened port 3690 on the server. The result of the command iptables -L is as follows:

Chain INPUT (policy ACCEPT)
target     prot opt source      destination
ACCEPT     all  --  anywhere    anywhere     state RELATED,ESTABLISHED
ACCEPT     icmp --  anywhere    anywhere
ACCEPT     all  --  anywhere    anywhere
ACCEPT     tcp  --  anywhere    anywhere     state NEW tcp dpt:ssh
ACCEPT     tcp  --  anywhere    anywhere     state NEW tcp dpt:webcache
ACCEPT     tcp  --  anywhere    anywhere     state NEW tcp dpt:mysql
ACCEPT     tcp  --  anywhere    anywhere     state NEW tcp dpt:5901
ACCEPT     tcp  --  anywhere    anywhere     state NEW tcp dpt:ddi-tcp-1
REJECT     all  --  anywhere    anywhere     reject-with icmp-host-prohibited
ACCEPT     tcp  --  anywhere    anywhere     tcp dpt:svn
ACCEPT     tcp  --  anywhere    anywhere     tcp dpt:search-agent

Chain FORWARD (policy ACCEPT)
target     prot opt source      destination
REJECT     all  --  anywhere    anywhere     reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT)
target     prot opt source      destination
ACCEPT     tcp  --  anywhere    anywhere     tcp dpt:svn

I have started svnserve on the server, and I can check out successfully on the server itself with svn co svn://ip address/name. Nevertheless, when I try to check out from my laptop, it says the connection is refused. I have also tested the connection with telnet ip port, which says telnet: Unable to connect to remote host. This is quite confusing, since I have opened port 3690 and my svn service is definitely listening on port 3690. What could be the reason, and what should I do to access the svn server remotely?
iptables processes the rules from top to bottom. Here is the issue - look at the following rules:

REJECT     all  --  anywhere    anywhere     reject-with icmp-host-prohibited
ACCEPT     tcp  --  anywhere    anywhere     tcp dpt:svn

After the reject-all rule, all remaining packets are discarded, so your svn rule is never reached. That is why you're unable to connect. Solution: make the reject-all rule the last one in your INPUT chain (after the svn ACCEPT), if you really want to discard all other packets:

iptables -A INPUT -j REJECT --reject-with icmp-host-prohibited
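One way to fix the ordering in place, sketched here as an illustration (the rule position is an assumption - check yours with --line-numbers first):

```shell
# See current rule positions in INPUT
iptables -L INPUT -n --line-numbers

# Insert the svn ACCEPT above the reject-all rule
# (assumed here to sit at position 9 -- adjust to your output)
iptables -I INPUT 9 -p tcp --dport 3690 -j ACCEPT
```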
Why I cannot access the svn server from remote?
1,558,456,132,000
Are things such as Chromium add-ons and UFW firewall settings automatically applied to all users, and to future users, on Linux distributions?
The firewall, and more generally the network configuration, is a system setting. It applies to all users. Creating new user accounts doesn't change the system settings (apart from the accounts, of course). Note that a web proxy setting is a user setting, because it's applied by each browser, not by the system. Chromium add-ons are installed as part of a Chromium profile (the same goes for other browsers: Chrome, Firefox, etc.). Each user has their own browser profile (and you can even create multiple profiles on your account). It's possible to install system-wide add-ons, but rarely done except in some enterprise setups.
Are Chromium addons and UFW Firewall settings set for all users? [closed]
1,558,456,132,000
I made a test machine with old Mandrake 9.0 for penetration testing (it has a lot of bugs, so it's ideal). For security reasons I use a "host only NIC" with VirtualBox. The firewall on the machine is disabled, but nmap reports all ports closed except 554!

PORT     STATE    SERVICE
22/tcp   filtered ssh
23/tcp   filtered telnet
135/tcp  filtered msrpc
139/tcp  filtered netbios-ssn
161/tcp  filtered snmp
445/tcp  filtered microsoft-ds
554/tcp  open     rtsp
1433/tcp filtered ms-sql-s
1434/tcp filtered ms-sql-m

What can it be? I also tried accepting all machines in /etc/hosts.allow, but no success. I want to ssh to the machine from my network. The machine has IP 10.1.1.2; my network is 192.168.0.0/24. I can ping the machine but not ssh.
Solution found: I used another IP class, thereby even avoiding routing, firewall, etc. I simply changed the guest's IP to VirtualBox's default host-only network (which is 192.168.56.0/24). Now all ports are accessible. I still don't understand why rtsp was open....
Why does my "host-only" NIC on VirtualBox refuse SSH?
1,558,456,132,000
The following is my network topology:

        (WAN)        (OPT)
         eth0         eth1
           |           |
      +----+-----------+----+
      |      firewall       |
      +----------+----------+
                 | eth4
           +-----+-----+
           |  switch   |
           +-+-+---+-+-+
             10 VLANs

I use the statistic extension and connection marks to load-balance the LAN network, but the mark and statistic modules do not work well. My iptables script is the following:

#!/bin/sh
#
# delete all existing rules.
#
IPT='/sbin/iptables'
LAN_IF='eth4'
WAN_IF='eth0'
OPT_IF='eth1'
LAN_NET='192.168.10.0/24'
VLAN1_NET='192.168.101.0/24'
VLAN2_NET='192.168.102.0/24'
VLAN3_NET='192.168.103.0/24'
VLAN4_NET='192.168.104.0/24'
VLAN5_NET='192.168.105.0/24'
VLAN6_NET='192.168.106.0/24'
VLAN7_NET='192.168.107.0/24'

$IPT -F
$IPT -t nat -F
$IPT -t mangle -F
$IPT -X

#$IPT -A INPUT -j LOG --log-level 4 --log-prefix 'NETFILTER'
#$IPT -A OUTPUT -j LOG --log-level 4 --log-prefix 'NETFILTER'
$IPT -A FORWARD -j LOG --log-level 4 --log-prefix 'NETFILTER '

$IPT -P INPUT DROP
$IPT -P OUTPUT DROP
$IPT -P FORWARD DROP

# Always accept loopback traffic
$IPT -A INPUT -i lo -j ACCEPT
$IPT -A OUTPUT -o lo -j ACCEPT

# Allow for lan net
$IPT -A OUTPUT -o $LAN_IF -j ACCEPT
$IPT -A INPUT -i $LAN_IF -j ACCEPT

# Allow from local to internet
$IPT -A OUTPUT -o $WAN_IF -j ACCEPT
$IPT -A OUTPUT -o $OPT_IF -j ACCEPT

# Allow established connections, and those not coming from the outside
$IPT -A INPUT -s $LAN_NET -p icmp -j ACCEPT
$IPT -A OUTPUT -s $LAN_NET -p icmp -j ACCEPT
$IPT -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
$IPT -A INPUT -m state --state NEW -i $LAN_IF -j ACCEPT

# Allow forward on both WAN and OPT
$IPT -A FORWARD -i $WAN_IF -o $LAN_IF -m state --state ESTABLISHED,RELATED -j ACCEPT
$IPT -A FORWARD -i $OPT_IF -o $LAN_IF -m state --state ESTABLISHED,RELATED -j ACCEPT

# Allow outgoing connections from the LAN side.
$IPT -A FORWARD -s $LAN_NET -o $WAN_IF -j ACCEPT
$IPT -A FORWARD -s $VLAN7_NET -o $WAN_IF -j ACCEPT
$IPT -A FORWARD -s $VLAN6_NET -o $WAN_IF -j ACCEPT
$IPT -A FORWARD -s $VLAN5_NET -o $WAN_IF -j ACCEPT
$IPT -A FORWARD -s $VLAN4_NET -o $WAN_IF -j ACCEPT
$IPT -A FORWARD -s $VLAN3_NET -o $WAN_IF -j ACCEPT
$IPT -A FORWARD -s $VLAN2_NET -o $WAN_IF -j ACCEPT
$IPT -A FORWARD -s $VLAN1_NET -o $WAN_IF -j ACCEPT
#$IPT -A FORWARD -s $LAN_NET -o $OPT_IF -j ACCEPT

# Allow outgoing connections from the LAN side.
$IPT -A FORWARD -s $LAN_NET -o $OPT_IF -j ACCEPT
$IPT -A FORWARD -s $VLAN5_NET -o $OPT_IF -j ACCEPT
$IPT -A FORWARD -s $VLAN6_NET -o $OPT_IF -j ACCEPT
$IPT -A FORWARD -s $VLAN7_NET -o $OPT_IF -j ACCEPT
$IPT -A FORWARD -s $VLAN4_NET -o $OPT_IF -j ACCEPT
$IPT -A FORWARD -s $VLAN3_NET -o $OPT_IF -j ACCEPT
$IPT -A FORWARD -s $VLAN2_NET -o $OPT_IF -j ACCEPT
$IPT -A FORWARD -s $VLAN1_NET -o $OPT_IF -j ACCEPT

$IPT -A FORWARD -i $LAN_IF -o $LAN_IF -s $VLAN1_NET -j ACCEPT
$IPT -A FORWARD -i $LAN_IF -o $LAN_IF -d $VLAN1_NET -j ACCEPT
$IPT -A FORWARD -i $LAN_IF -o $LAN_IF -s $VLAN2_NET -j ACCEPT
$IPT -A FORWARD -i $LAN_IF -o $LAN_IF -d $VLAN2_NET -j ACCEPT
$IPT -A FORWARD -i $LAN_IF -o $LAN_IF -s $VLAN3_NET -j ACCEPT
$IPT -A FORWARD -i $LAN_IF -o $LAN_IF -d $VLAN3_NET -j ACCEPT
$IPT -A FORWARD -i $LAN_IF -o $LAN_IF -s $VLAN4_NET -j ACCEPT
$IPT -A FORWARD -i $LAN_IF -o $LAN_IF -d $VLAN4_NET -j ACCEPT
$IPT -A FORWARD -i $LAN_IF -o $LAN_IF -s $VLAN5_NET -j ACCEPT
$IPT -A FORWARD -i $LAN_IF -o $LAN_IF -d $VLAN5_NET -j ACCEPT
$IPT -A FORWARD -i $LAN_IF -o $LAN_IF -s $VLAN6_NET -j ACCEPT
$IPT -A FORWARD -i $LAN_IF -o $LAN_IF -d $VLAN6_NET -j ACCEPT
$IPT -A FORWARD -i $LAN_IF -o $LAN_IF -s $VLAN7_NET -j ACCEPT
$IPT -A FORWARD -i $LAN_IF -o $LAN_IF -d $VLAN7_NET -j ACCEPT

# Masquerade.
$IPT -t nat -A POSTROUTING -o $WAN_IF -j MASQUERADE
$IPT -t nat -A POSTROUTING -o $OPT_IF -j MASQUERADE

# load balancing
$IPT -A PREROUTING -t mangle -j CONNMARK --restore-mark
$IPT -A PREROUTING -t mangle -m mark ! --mark 0 -j ACCEPT
$IPT -A PREROUTING -p icmp -t mangle -m statistic --mode nth --every 2 --packet 0 -j MARK --set-mark 2
$IPT -A PREROUTING -p icmp -t mangle -m statistic --mode nth --every 2 --packet 1 -j MARK --set-mark 3
$IPT -A PREROUTING -t mangle -j CONNMARK --save-mark

# Enable routing.
echo 1 > /proc/sys/net/ipv4/ip_forward

I debug with the commands:

cat /var/log/messages | grep 0x2 | wc -l

and

cat /var/log/messages | grep 0x3 | wc -l

But the numbers of packets marked 0x2 and packets marked 0x3 are not balanced. Why does this happen?
The load balancing part of your script says:

If I already know this connection, just let it go the same way as previously.
If I don't, send it half of the time on one interface, half of the time on the other one.

So you have an equal number of connections going out on each interface. But since some connections only have 3 packets while others have 1000, the packet counts will not be equal. Furthermore, if you check the number of open connections, that might be unequal too, because some connections last longer than others.

In order to load-balance packets rather than connections, you would have to delete these lines:

$IPT -A PREROUTING -t mangle -j CONNMARK --restore-mark
$IPT -A PREROUTING -t mangle -m mark ! --mark 0 -j ACCEPT
$IPT -A PREROUTING -t mangle -j CONNMARK --save-mark

It would be a terrible idea though, since your connections would then have different source IPs, and half of the packets wouldn't reach the destination services because of asymmetric routing. And balancing packets doesn't balance bits sent; even balancing bits sent doesn't balance bits received.

In my opinion, you should just leave your script as it is now; the more traffic there is, the more balanced your links will be.
iptables connection marks not balanced
1,558,456,132,000
I have a problem with the following thing: a "foreign" connection is a connection not initiated by my computer. I would like to accept only foreign connections whose port is 22 or in the interval [1000, 1100]. Could you help me please?
For accepting port 22:

-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT

For ports 1000-1100:

-A INPUT -m state --state NEW -m tcp -p tcp --dport 1000:1100 -j ACCEPT

If you enter these lines into the /etc/sysconfig/iptables file on your Linux machine and restart the iptables service with the service iptables restart command, you should be good to go.
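If everything else should be refused by default, those two rules need a default-deny policy around them. A minimal sketch of a complete /etc/sysconfig/iptables file in that style — the loopback and ESTABLISHED lines are assumptions added so that replies to the machine's own outgoing connections still get through:

```
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# always allow loopback traffic
-A INPUT -i lo -j ACCEPT
# allow replies to connections this host initiated
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# the two "foreign" cases: SSH and the 1000-1100 range
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 1000:1100 -j ACCEPT
COMMIT
```

Any new inbound connection that matches neither rule then falls through to the INPUT chain's DROP policy.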
iptables - how to configure connections into my computer
1,558,456,132,000
I’m doing some rules in a machine with the firewall disabled, but when I run rcSuSEfirewall2 a lot of rules and policies are applyed by default: iptables -L Chain INPUT (policy DROP) target prot opt source destination ACCEPT all -- anywhere anywhere ACCEPT all -- anywhere anywhere state ESTABLISHED ACCEPT icmp -- anywhere anywhere state RELATED input_ext all -- anywhere anywhere input_ext all -- anywhere anywhere LOG all -- anywhere anywhere limit: avg 3/min bu rst 5 LOG level warning tcp-options ip-options prefix `SFW2-IN-ILL-TARGET ' DROP all -- anywhere anywhere Chain FORWARD (policy DROP) target prot opt source destination LOG all -- anywhere anywhere limit: avg 3/min bu rst 5 LOG level warning tcp-options ip-options prefix `SFW2-FWD-ILL-ROUTING ' Chain OUTPUT (policy ACCEPT) target prot opt source destination ACCEPT all -- anywhere anywhere Chain forward_ext (0 references) target prot opt source destination Chain input_ext (2 references) target prot opt source destination DROP all -- anywhere anywhere PKTTYPE = broadcast ACCEPT icmp -- anywhere anywhere icmp source-quench ACCEPT icmp -- anywhere anywhere icmp echo-request LOG all -- anywhere anywhere limit: avg 3/min bu rst 5 PKTTYPE = multicast LOG level warning tcp-options ip-options prefix `SFW2- INext-DROP-DEFLT ' DROP all -- anywhere anywhere PKTTYPE = multicast DROP all -- anywhere anywhere PKTTYPE = broadcast LOG tcp -- anywhere anywhere limit: avg 3/min bu rst 5 tcp flags:FIN,SYN,RST,ACK/SYN LOG level warning tcp-options ip-options pre fix `SFW2-INext-DROP-DEFLT ' LOG icmp -- anywhere anywhere limit: avg 3/min bu rst 5 LOG level warning tcp-options ip-options prefix `SFW2-INext-DROP-DEFLT ' LOG udp -- anywhere anywhere limit: avg 3/min bu rst 5 state NEW LOG level warning tcp-options ip-options prefix `SFW2-INext-DROP -DEFLT ' DROP all -- anywhere anywhere Chain reject_func (0 references) target prot opt source destination REJECT tcp -- anywhere anywhere reject-with tcp-res et REJECT udp -- anywhere 
anywhere reject-with icmp-po rt-unreachable REJECT all -- anywhere anywhere reject-with icmp-pr oto-unreachable Resuming my question: How can I set my Suse firewall to when I start it show all policies in the ACCEPT chain? like this: iptables -L Chain INPUT (policy ACCEPT) target prot opt source destination (my custom DROP Rule) Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination PS: I know sounds no sense but it is because I’m adding extra rules in /etc/sysconfig/scripts/SuSEfirewall2-custom I'm using SuSE Linux Enterprise Server 11 Service Pack 3 UPDATE: I've rechecked if Yast Firewall haves an option to set policies to ACCEPT but nothing.
Well, i want to share this workarround, i think is not so elegant as i wanted but it works. First create a file and call it as you want i.e fwsrv #!/bin/bash # Author: Francisco Tapia # # /etc/init.d/fwsrv # ### BEGIN INIT INFO # Provides: fwsrv # Required-Start: network # Should-Start: $null # Required-Stop: $null # Should-Stop: $null # Default-Start: 5 # Default-Stop: 5 # Short-Description: Executes iptables rules. # Description: this is not a service. ### END INIT INFO . /etc/rc.status rc_reset case "$1" in start) # use colour for ease of spotting echo -e "\E[36mRunning $0 (start)...\E[0m"; /etc/init.d/fwsrv.d/start echo -e "\E[36mDone $0 \E[0m"; ;; stop) echo -e "\E[36mRunning $0 (stop)...\E[0m"; /etc/init.d/fwsrv.d/stop echo -e "\E[36mDone $0 \E[0m"; ;; restart) $0 stop $0 start rc_status ;; *) echo "Usage $0 (start|stop|restart)" exit 1; ;; esac rc_exit Then create 2 files one called start and other stop with this content in script. #!/bin/bash # run scripts with names starting 0-9 in foreground. if you want to # put a script in start.d and you care about when it gets run in relation # to other scripts, give it a name starting 0-9 for i in $(dirname $0)/start.d/[0-9]*;do test -x $i && echo -e "\E[36mRunning ${i} \E[0m" && $i done # run scripts with names starting a-z in the background # as this reduces the over all time this script takes to run. for i in $(dirname $0)/start.d/[a-z]*;do test -x $i && echo -e "\E[36mRunning ${i} \E[0m" && $i & done # wait for children to exit wait; and finally the last one called rules and it will have all my desired rules: #!/bin/bash rcSuSEfirewall2 start iptables -P INPUT ACCEPT iptables -P OUTPUT ACCEPT iptables -P FORWARD ACCEPT iptables -F #My Desired Rules Then execute the following commands in terminal. 
cp fwsrv /etc/init.d/fwsrv chmod u+x /etc/init.d/fwsrv mkdir -p /etc/init.d/fwsrv.d/start.d mkdir -p /etc/init.d/fwsrv.d/stop.d cp start /etc/init.d/fwsrv.d/start cp stop /etc/init.d/fwsrv.d/stop chmod u+x /etc/init.d/fwsrv.d/start chmod u+x /etc/init.d/fwsrv.d/stop cp rules /etc/init.d/fwsrv.d/start.d/rules chmod u+x /etc/init.d/fwsrv.d/start.d/rules insserv /etc/init.d/fwsrv Now the Machine will start the Firewall when starts and cleaning all rules and applying all custom rules. if you want to add more rules just edit rules file in /etc/init.d/fwsrv.d/start.d/
How to set up a clear SuSE-Firewall?
1,558,456,132,000
I want to deny all outgoing traffic by default, except SSH connections. I added these rules:

ufw default deny outgoing
ufw allow ssh

Then I restarted the firewall with:

ufw disable
ufw enable

Should this do the trick? I also want to deploy a Rails application on this server, and my hello-world Rails app server is bound to the server IP with a port; I chose 3000. After setting up the firewall I expected to no longer be able to reach my Rails app, but it is still there. What am I doing wrong?
OK. The solution is that my host provider also offers Plesk, and there is a firewall running there which somehow overrides everything I set up manually.
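When rules seem to have no effect like this, it can help to compare what ufw believes is active with what the kernel is actually enforcing — a quick sanity-check sketch, run as root (`3000` stands in for the app port):

```
ufw status verbose          # ufw's view of the active ruleset
iptables -S                 # the chains the kernel is really using
iptables -S | grep 3000     # any rule (e.g. from Plesk) mentioning the app port?
```

If `iptables -S` shows ACCEPT rules that ufw never created, some other tool (here, the provider's Plesk firewall) is managing the ruleset alongside ufw.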
ufw rule deny default outgoing
1,558,456,132,000
I have a Java application that I have been developing. I use an external DB server that I can't control. I want to simulate a connection error to it, but I'm unable to do so. I have tried to use iptables and tc to create the situation, but once the Java program is running it can still query the database. If I restart the application, then the blocking succeeds. Is there something I don't understand?
The first thing you don't understand is that we can't debug your iptables rules if you don't show them to us. That being said, I see a potential pitfall, though of course I don't know whether that is your problem. It's likely that the Java application establishes a TCP connection to the database once and for all when it starts. If your firewall merely blocks connection establishment packets and lets packets through when they're part of an established TCP session (e.g. iptables -A OUTPUT -p tcp -m state --state ESTABLISHED,RELATED -j ACCEPT), then your application will be able to continue communicating with the database. In order to block the communication, you need to either set up the firewall before starting the application, or block all TCP packets whose destination is the database port.
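A sketch of the second option — dropping all packets to the database regardless of connection state, so the simulated outage hits even an already-open session. Run as root; the address `10.0.0.5` and port `3306` are placeholders for your DB server:

```
# block everything to the DB, established sessions included;
# -I OUTPUT 1 puts the rule before any ESTABLISHED accept rule
iptables -I OUTPUT 1 -p tcp -d 10.0.0.5 --dport 3306 -j DROP

# ... observe how the running application reacts ...

# undo the simulated outage (spec must match the rule exactly)
iptables -D OUTPUT -p tcp -d 10.0.0.5 --dport 3306 -j DROP
```

With DROP (rather than REJECT) the application sees timeouts, which is usually the more realistic failure mode to test against.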
Block outgoing connection from running process
1,558,456,132,000
I am working at a company that uses Ubuntu Precise on the desktops behind a proxy. The proxy is available in /etc/environment and set as http_proxy, https_proxy, ftp_proxy, no_proxy and their uppercase versions. Now I have a problem using some applications inside Bash. If I use backportpackage (like backportpackage -s trusty -d precise fop) or bzr branch, I just get a timeout. I'm guessing that the programs try to use a port not allowed by the proxy's firewall. If I can find out which port they use, I can ask the admins to open it. Can anyone help with this?
When accessing a bzr branch/repository via the smart server directly (via bzr://, but not bzr+ssh:// and not http://), the default port is 4155 according to http://doc.bazaar.canonical.com/bzr.0.18/server.htm

When using bzr+ssh, it will use the SSH port (22). When using http(s), it will use 80 (443).

According to http://doc.bazaar.canonical.com/latest/en/user-guide/configuring_bazaar.html bzr should be respecting your HTTP proxy settings if the branches/repositories you are accessing are http URLs. The URL of the branch/repository can specify a specific port, and in that case the port used will be the one specified.

The manpage for backportpackage says it fetches a package from one distribution release or from a specified .dsc path or URL[...] so the port it uses depends on the URL specified or the package details.
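The scheme-to-port mapping above can be sketched as a small shell helper, which is handy when you have a list of branch URLs and want to know which ports to ask the admins about. The helper name `url_port` and the example hosts are made up for illustration; the default ports are the ones listed above, and an explicit `:port` in the URL wins:

```shell
# Guess which TCP port bzr will use for a given branch URL.
url_port() {
    url=$1
    # an explicit port in the URL (scheme://host:port/...) takes priority
    port=$(printf '%s\n' "$url" | sed -n 's|^[a-z+]*://[^/:]*:\([0-9]*\).*|\1|p')
    if [ -n "$port" ]; then
        printf '%s\n' "$port"
        return
    fi
    # otherwise fall back on the scheme's default
    case $url in
        bzr://*)               echo 4155 ;;   # bzr smart server
        bzr+ssh://*|sftp://*)  echo 22 ;;     # over SSH
        https://*)             echo 443 ;;
        http://*)              echo 80 ;;
        *)                     echo unknown ;;
    esac
}

url_port bzr://bzr.example.org/proj        # → 4155
url_port https://code.example.org/proj     # → 443
url_port bzr://bzr.example.org:10000/proj  # → 10000
```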
What port uses my backportpackage or bzr?
1,558,456,132,000
I am trying to harden my server. In doing so, I have a general question: should I install kernel security patches like SELinux and an anti-virus with an intrusion-detection firewall? Does it make sense to combine them, or just use one of them? I mean, the patches are known to secure local things like processes etc. from turning into zombies or stuff like that. But I don't think those patches also secure my Internet connection, do they?
If you are concerned about system integrity, then selinux or grsecurity (or the various similar security packages) are very powerful. Unfortunately, mastering their policies is far from trivial. (Any decent distro that includes SELinux will have predefined policies for all kinds of things, though.) Grsecurity policies are easier to create but still require some effort. Grsecurity has the big advantage over SELinux that it comes with several system hardening measures, like pax, which provides quite rigorous memory protection. On the downside, Grsecurity is not officially part of Linux and never will be (for, um, political reasons) and thus only few distros provide integration of Grsecurity. My personal view: The whole concept of AV is entirely rotten because they are - in essence - nothing more than giant black lists that need to be updated frequently. Because of this they grow ever larger and don't protect you from 0-day-exploits. Personally I believe in encapsulation and containment, which is what SELinux, Grsecurity, etc. achieve. IDS/IPS is useful to some degree, as long as you can keep it simple (like using iptables, fail2ban, or aide). "High-end" IDS/IPS work like AV and thus my view applies for them as well.
kernel security and IDS Firewall + AV together or not?
1,418,633,044,000
On a machine called ubuntu1, this is the iptables command:

sudo iptables -A INPUT -p icmp -j DROP

From the other computer (xp1) I cannot ping ubuntu1, so this is OK. But ubuntu1 can still ping xp1, and I think this is not OK. I do not have a problem with the ping request, but I have a problem with the ping reply from xp1. Why does that command not drop the ping reply, which is an ICMP packet?

UPDATE: I made a mistake. I did not see the reply in the terminal; I only saw the reply in Wireshark.
The command which you are entering only blocks incoming ICMP traffic. If you want to block outgoing ICMP traffic, you have to use the OUTPUT chain, i.e.

sudo iptables -A OUTPUT -p icmp -j DROP
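Putting the two together, a sketch of making ubuntu1 neither answer nor send pings, with a quick way to see the rules and packet counters afterwards:

```
sudo iptables -A INPUT  -p icmp -j DROP   # ignore pings from other hosts
sudo iptables -A OUTPUT -p icmp -j DROP   # stop our own echo requests leaving

sudo iptables -L -v -n                    # the pkts column shows what each rule dropped
```

Note that the INPUT rule alone already drops the echo *replies* coming back from xp1 — which is why, as the update to the question observes, the reply shows up in a packet capture on the wire but never reaches the ping process in the terminal.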
iptables INPUT command
1,418,633,044,000
In the past, I have used the following script to set up a stateful firewall (on a normal x64 Ubuntu machine) without issue: iptables -P INPUT DROP iptables -P FORWARD DROP iptables -N TCP iptables -N UDP iptables -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT iptables -A INPUT -i lo -j ACCEPT iptables -A INPUT -i eth0 -j ACCEPT iptables -A INPUT -i wlan0 -j ACCEPT iptables -A INPUT -m conntrack --ctstate INVALID -j DROP iptables -A INPUT -p icmp --icmp-type 8 -m conntrack --ctstate NEW -j ACCEPT iptables -A INPUT -p udp -m conntrack --ctstate NEW -j UDP iptables -A INPUT -p tcp --syn -m conntrack --ctstate NEW -j TCP iptables -A INPUT -p udp -j REJECT --reject-with icmp-port-unreachable iptables -A INPUT -p tcp -j REJECT --reject-with tcp-rst iptables -A INPUT -j REJECT --reject-with icmp-proto-unreachable iptables -A TCP -p tcp --dport 22 -j ACCEPT I am trying to accomplish the same on an am335x Starter Kit board, running the standard SDK 6 image. As such it runs TI's Arago OS. It fails on the lines involving '-m conntrack' with: iptables: No chain/target/match by that name. The output of 'iptables -S' is: -P INPUT ACCEPT -P FORWARD ACCEPT -P OUTPUT ACCEPT I tried 'modprobe nf_conntrack' to no avail (the command succeeded, but did not help.) I also tried purposefully misspelling parts of the command such as: iptables -A INPUT -m conntrack --ctstate BLAH -j ACCEPT gives error: iptables v1.4.15: Bad ctstate "BLAH" And: iptables -A INPUT -m conntrack --ctstate RELATED -j BLAH gives error: iptables v1.4.15: Couldn't load target `BLAH':No such file or directory So curiously, it seems to indicate it is complaining about the '-A INPUT' portion, which works perfectly fine in other commands such as: iptables -A INPUT -i lo -j ACCEPT
I had posted this question to the TI Forums as well, and got a response. Enabling CONFIG_NETFILTER_XT_MATCH_CONNTRACK in the kernel config solved this issue. I just set CONFIG_NETFILTER_XT_MATCH_CONNTRACK=y in the config, rebuilt with the SDK (instructions here). After installing the new kernel and modules, the -m conntrack command doesn't complain anymore.
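Before rebuilding, it can be worth confirming that the option really is absent from the running kernel. A sketch, run on the target board; which of these config sources exists depends on how the kernel was built:

```
# the running kernel's config, if exposed via procfs
zcat /proc/config.gz 2>/dev/null | grep XT_MATCH_CONNTRACK

# or a config file shipped next to the kernel image
grep XT_MATCH_CONNTRACK "/boot/config-$(uname -r)" 2>/dev/null

# if built as a module (=m), it should load on demand:
modprobe xt_conntrack && lsmod | grep xt_conntrack
```

`xt_conntrack` is the module that backs `-m conntrack`; if the option is `=y` or the module loads, the original "No chain/target/match by that name" error should disappear.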
How to set up a stateful firewall on an am335x Starter Kit board?
1,418,633,044,000
I have been testing my application, which uses TCP/UDP ports for peer-to-peer communication with the help of server signalling commands. It works when I have a public IP or LAN IP and no firewall/port blocks are involved. I will name my endpoints here:

a) PC1 - running in the European Commission, having a LAN IP and an unknown WAN IP
b) PC2 - running in the European Commission, having a LAN IP and an unknown WAN IP
c) Server 1 - running in Amazon with a public IP

Now I am testing the same application in European Commission/airport/railway networks, where the inbound/outbound internet traffic passes through firewalls and rules; as a result it fails to communicate with the server for mapping and the application's algorithms. On those PC1/PC2 machines, however, I tested Skype and it simply works, without caring about the firewall or any of those network issues.

So I was wondering: are there any third-party tools I can use on PC1/PC2 to make a list of ports remotely reachable via TCP/UDP (regardless of what firewall or network they sit behind), so that from the server I can do the port mapping and bridge or relay their packets? (For example, Skype works in such complicated networks; is there any tool in Linux to use as an external package?)
I think what you should really be asking is: "... how does Skype traverse complex network topologies where it would seem impossible to connect through these networks which have complex firewalling deployed?"

I'd take a look at this article directly from Skype, which explains in pretty good terms the methods they employ to make Skype just work: What are P2P communications?

If you read through that article, what they're basically saying is that they use a variety of techniques to circumvent complex networks. The key technologies that they leverage are as follows:

1. Firewall and NAT (Network Address Translation) traversal

excerpt from Wikipedia: Many techniques exist, but no single method works in every situation since NAT behavior is not standardized. Many NAT traversal techniques require assistance from a server at a publicly routable IP address. Some methods use the server only when establishing the connection, while others are based on relaying all data through it, which adds bandwidth costs and increases latency, detrimental to real-time voice and video communications.

2. Global decentralized user directory

This is a fancy way of saying "We use supernodes on the internet, which are computers that do allow Skype clients to connect ad hoc to any port of their choosing. These clients act as decentralized databases of user info which, taken as a whole, make up the Skype directory of users."

excerpt: Clearly, in order to deliver high-quality communications with the lowest possible costs, a third generation of P2P technology ("3G P2P") or Global Index (GI) was a necessary development and represents yet another paradigm shift in the notion of scalable networks. The Global Index technology is a multi-tiered network where supernodes communicate in such a way that every node in the network has full knowledge of all available users and resources with minimal latency.

3. How does Skype maintain call quality?
Their answer basically says: it's a secret, and we're not willing to share that bit of information with you.
Are there any tools which can be used to make ports reachable through any firewalled network?
1,418,633,044,000
I created a RedHat 6.1 VM with EC2. Logged in as root, I installed (unzipped) JDK1.7 in /root/bin/jdk1.7.0 and installed (unzipped) GlassFish 3.1.1 in /root/bin/glassfish3. I set JAVA_HOME and GLASSFISH_HOME in root's .bash_profile and I started GlassFish. It's definitely running, because if I do a wget localhost:8080 from the command line, wget downloads the index.html file. The problem is, when I browse to the machine at http://ec2-107-20-96-43.compute-1.amazonaws.com:8080, I get nothing. I added port 8080 to the VM's security group; is there something else I have to do there? Is there something else I have to do on the linux machine to make 8080 visible?
I spent some more time with the EC2 security groups. I allowed all incoming TCP ports and it looks like it's working now. I guess I didn't have port 8080 open correctly.
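Rather than allowing all incoming TCP ports, the security group can be narrowed to just 8080. A sketch using the AWS CLI; the group name `my-sg` is an assumption (the same change can be made in the EC2 web console under Security Groups):

```
aws ec2 authorize-security-group-ingress \
    --group-name my-sg \
    --protocol tcp \
    --port 8080 \
    --cidr 0.0.0.0/0
```

Opening only the needed port keeps the rest of the instance (GlassFish's admin port 4848, for instance) off the public internet.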
Can't connect remotely to server running redhat (ec2)
1,418,633,044,000
I m able to telnet locally to mysql process like below: I have also made sure MySQL process is listening on all IPs by setting bind-address = 0.0.0.0 as evident below: root@localhost:~# netstat -plutn | grep mysql tcp 0 0 0.0.0.0:33060 0.0.0.0:* LISTEN 39288/mysqld tcp 0 0 0.0.0.0:7306 0.0.0.0:* LISTEN 39288/mysqld and root@localhost:~# telnet 82.165.32.59 7306 Trying 82.165.32.59... Connected to 82.165.32.59. Escape character is '^]'. >Host 'linux' is not allowed to connect to this MySQL serverConnection closed by foreign host I opened the firewall port 7306 and reloaded the firewall using the below commands: root@localhost:~# firewall-cmd --zone=public --permanent --add-port=7306/tcp Warning: ALREADY_ENABLED: 7306:tcp success root@localhost:~# firewall-cmd --reload success root@localhost:~# firewall-cmd --list-all public target: default icmp-block-inversion: no interfaces: sources: services: dhcpv6-client ssh ports: 443/tcp 80/tcp 7306/tcp protocols: masquerade: no forward-ports: source-ports: icmp-blocks: rich rules: However, when telnet from a remote host it fails like below: $ telnet 82.165.32.59 7306 Trying 82.165.32.59... telnet: connect to address 82.165.32.59: Connection timed out My OS is: root@localhost:~# uname -a Linux localhost 5.4.0-89-generic #100-Ubuntu SMP Fri Sep 24 14:50:10 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux root@localhost:~# lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 20.04.3 LTS Release: 20.04 Codename: focal I tried restarting the firewall service as below: root@localhost:~# systemctl restart firewalld I also flushed the IPTABLES using the below script but it did not help: root@localhost:~# cat fw.stop #!/bin/sh echo "Stopping IPv4 firewall and allowing everyone..." ipt="/sbin/iptables" ## Failsafe - die if /sbin/iptables not found [ ! 
-x "$ipt" ] && { echo "$0: \"${ipt}\" command not found."; exit 1; } $ipt -P INPUT ACCEPT $ipt -P FORWARD ACCEPT $ipt -P OUTPUT ACCEPT $ipt -F $ipt -X $ipt -t nat -F $ipt -t nat -X $ipt -t mangle -F $ipt -t mangle -X $ipt -t raw -F $ipt -t raw -X I also checked if port 7306 is open for the outside world using the below website but it too says Port 7306 is closed on 82.165.32.59. https://www.yougetsignal.com/tools/open-ports/ Below is the output of iptables -L however, I do not have the expertise to understand derive from it. root@localhost:~# iptables -L Chain INPUT (policy ACCEPT) target prot opt source destination ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED,DNAT ACCEPT all -- anywhere anywhere INPUT_direct all -- anywhere anywhere INPUT_ZONES all -- anywhere anywhere DROP all -- anywhere anywhere ctstate INVALID REJECT all -- anywhere anywhere reject-with icmp-host-prohibited Chain FORWARD (policy ACCEPT) target prot opt source destination ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED,DNAT ACCEPT all -- anywhere anywhere FORWARD_direct all -- anywhere anywhere FORWARD_IN_ZONES all -- anywhere anywhere FORWARD_OUT_ZONES all -- anywhere anywhere DROP all -- anywhere anywhere ctstate INVALID REJECT all -- anywhere anywhere reject-with icmp-host-prohibited Chain OUTPUT (policy ACCEPT) target prot opt source destination ACCEPT all -- anywhere anywhere OUTPUT_direct all -- anywhere anywhere Chain FORWARD_IN_ZONES (1 references) target prot opt source destination FWDI_public all -- anywhere anywhere [goto] Chain FORWARD_OUT_ZONES (1 references) target prot opt source destination FWDO_public all -- anywhere anywhere [goto] Chain FORWARD_direct (1 references) target prot opt source destination Chain FWDI_public (1 references) target prot opt source destination FWDI_public_pre all -- anywhere anywhere FWDI_public_log all -- anywhere anywhere FWDI_public_deny all -- anywhere anywhere FWDI_public_allow all -- anywhere anywhere FWDI_public_post 
all -- anywhere anywhere ACCEPT icmp -- anywhere anywhere Chain FWDI_public_allow (1 references) target prot opt source destination Chain FWDI_public_deny (1 references) target prot opt source destination Chain FWDI_public_log (1 references) target prot opt source destination Chain FWDI_public_post (1 references) target prot opt source destination Chain FWDI_public_pre (1 references) target prot opt source destination Chain FWDO_public (1 references) target prot opt source destination FWDO_public_pre all -- anywhere anywhere FWDO_public_log all -- anywhere anywhere FWDO_public_deny all -- anywhere anywhere FWDO_public_allow all -- anywhere anywhere FWDO_public_post all -- anywhere anywhere Chain FWDO_public_allow (1 references) target prot opt source destination Chain FWDO_public_deny (1 references) target prot opt source destination Chain FWDO_public_log (1 references) target prot opt source destination Chain FWDO_public_post (1 references) target prot opt source destination Chain FWDO_public_pre (1 references) target prot opt source destination Chain INPUT_ZONES (1 references) target prot opt source destination IN_public all -- anywhere anywhere [goto] Chain INPUT_direct (1 references) target prot opt source destination Chain IN_public (1 references) target prot opt source destination IN_public_pre all -- anywhere anywhere IN_public_log all -- anywhere anywhere IN_public_deny all -- anywhere anywhere IN_public_allow all -- anywhere anywhere IN_public_post all -- anywhere anywhere ACCEPT icmp -- anywhere anywhere Chain IN_public_allow (1 references) target prot opt source destination ACCEPT tcp -- anywhere anywhere tcp dpt:ssh ctstate NEW,UNTRACKED ACCEPT tcp -- anywhere anywhere tcp dpt:https ctstate NEW,UNTRACKED ACCEPT tcp -- anywhere anywhere tcp dpt:http ctstate NEW,UNTRACKED ACCEPT tcp -- anywhere anywhere tcp dpt:mysql ctstate NEW,UNTRACKED ACCEPT tcp -- anywhere anywhere tcp dpt:7306 ctstate NEW,UNTRACKED Chain IN_public_deny (1 references) target prot opt 
source destination Chain IN_public_log (1 references) target prot opt source destination Chain IN_public_post (1 references) target prot opt source destination Chain IN_public_pre (1 references) target prot opt source destination Chain OUTPUT_direct (1 references) target prot opt source destination Can you please suggest?
The server was provisioned by https://cloudpanel.ionos.de/ After logging into the portal, there was a firewall option on the dashboard to allow access (incoming traffic) on port 7306. I don't understand what was blocking the port from being accessed by a remote host — if someone can shed some light on that, please do. After allowing the port I restarted the server, and now the connection works. Many thanks for your help.
Unable to connect telnet to mysql listen ip port from remote host
1,418,633,044,000
In my college, I can't use the web freely, because they use SonicWall network firewalls. How can I bypass it, so that I can use all websites in my college? (Please see the result when I access a blocked website.) I already tried using no proxy, but it didn't work. Thanks!
If outgoing SSH works, you can use SSH tunneling to set up a SOCKS proxy which will effectively bypass the firewall. You will obviously need the following: Make sure SonicWall doesn't block outgoing SSH connections (TCP port 22). If they do, and you have full control over the SSH server outside of this network (e.g. if you run one at home), try running it on a different port, or even on port 80 or 443. Use ssh with the -D8080 switch to log on to the remote SSH server. If the SSH server runs on a different port (e.g. 443) use -p443 to specify the port. For example: ssh -D8080 -p443 [email protected] Keep the SSH session open, and configure your browser to use a SOCKS proxy on localhost, TCP port 8080 (should be the default setting). Go to ipchicken.com or whatismyip.com to confirm that your browser is using your SSH tunnel/proxy. Enjoy unrestricted Internet access. I like this trick a lot, because it actually encrypts with SSH all traffic to/from your computer. Note, however, that some plugins, like Flash, will ignore your browser's proxy setting and will still try to connect directly. That means that sites using Flash video players may not work, however, browser-based HTML5 players should work.
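Besides the browser check, the tunnel can be verified from another terminal with curl, which speaks SOCKS directly. A sketch, reusing the example server from above; the IP printed should be the SSH server's, not the college network's:

```
# background the tunnel: -f fork, -N no remote command, -D SOCKS on 8080
ssh -f -N -D 8080 -p 443 user@myserver.com

# ask an echo service for our apparent public IP, via the tunnel
curl --socks5-hostname localhost:8080 https://ifconfig.me
```

`--socks5-hostname` also pushes DNS resolution through the tunnel, which matters if the firewall filters or logs DNS as well.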
How can I bypass SonicWall?
1,418,633,044,000
Here are 2 servers 192.168.0.12 192.168.0.21 there is a service running in 50070 port in server 192.168.0.12 when I do telnet from 192.168.0.21 server it fails: $telnet 192.168.0.12 50070 Trying 192.168.0.12... telnet: connect to address 192.168.0.12: Connection refused When I give hostname then also it fails: $telnet master1.mycluster 50070 Trying 192.168.0.12... telnet: connect to address 192.168.0.12: Connection refused Even when I try from 192.168.0.12 it fails if I give the IP address: $telnet 192.168.0.12 50070 Trying 192.168.0.12... telnet: connect to address 192.168.0.12: Connection refused But it works if I give the hostname: $telnet master1.mycluster 50070 Trying 127.0.0.1... Connected to master1.mycluster. Escape character is '^]'. I found this question can not telnet to a server connection refuse, but I tried all the possibilities. These are what I tried: Turned off iptables in both the servers Added ALL: ALL in cat /etc/hosts.allow Made sure that the service is running in that port But none of these work for me. Here is my /etc/hosts 127.0.0.1 master1.mycluster master1 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 192.168.0.12 master1.mycluster master1 192.168.0.21 slave1.mycluster slave1 Is there anything else that I am missing to make it working ?
Your service is listening on the loopback address only, 127.0.0.1. When you make a connection from 192.168.0.21 or when you specify the ip address it does not work, as your service is not listening on that ip. When you use the hostname from 192.168.0.12 it works because it is connecting to the loopback address. This is because it will look in your hosts file first, /etc/hosts, which has an entry pointing that hostname to your loopback ip: 127.0.0.1 master1.mycluster
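A quick way to confirm which address a service is bound to (here for port 50070); a local address of `0.0.0.0` or `*` means all interfaces, while `127.0.0.1` means loopback only:

```
netstat -tlnp | grep 50070
# or, with the newer tool:
ss -tlnp | grep 50070
```

In this particular case a likely culprit is the first line of /etc/hosts, which maps master1.mycluster to 127.0.0.1, so the daemon resolves its own hostname to loopback and binds there; removing that line (keeping the 192.168.0.12 entry) and restarting the service typically makes it listen on the LAN address instead — worth verifying with the commands above.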
Unable to telnet to a server
1,418,633,044,000
Below are my firewall rules (referred from multiple posts), where I want to allow sending mail through applications on my server and allow FTP access to the server. But mail has stopped passing through the application after adding these rules; maybe something is missing. Any help is appreciated.

*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [824:72492]
-A INPUT -p tcp -m tcp --dport 21 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --dport 20 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m tcp --sport 1024:65535 --dport 20:65535 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
-A INPUT -p tcp -m state --state ESTABLISHED -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-port-unreachable
-A OUTPUT -p tcp -m tcp --dport 21 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp -m tcp --dport 20 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp -m tcp --dport 22 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp -m tcp --sport 1024:65535 --dport 20:65535 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
-A INPUT -p tcp --dport 25 -j ACCEPT
-A INPUT -p tcp --dport 110 -j ACCEPT
-A INPUT -p tcp --dport 995 -j ACCEPT
-A INPUT -p tcp --dport 143 -j ACCEPT
-A INPUT -p tcp --dport 993 -j ACCEPT
-A INPUT -p tcp --dport 53 -j ACCEPT
-A INPUT -p udp --dport 53 -j ACCEPT
-A OUTPUT -p tcp --dport 53 -j ACCEPT
-A OUTPUT -p udp --dport 53 -j ACCEPT
-A INPUT -p tcp --dport 25 -j ACCEPT
-A OUTPUT -p tcp --dport 25 -j ACCEPT
-A OUTPUT -p tcp --dport 587 -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
COMMIT

The console error I get for this is:

email Failed to load resource: the server responded with a status of 500 (Internal Server Error)
You'll probably want that REJECT rule to be at the end of the INPUT chain. I.e., moved to be the last rule before COMMIT.
IpTables Rules for sending Mails on linux server
1,418,633,044,000
Because the ISP blocks port 80, this prevents my running a web server. As a work-around, is it possible to specify a different port for Apache? I believe I've seen mention of using port 81, or some lower ports. Not for production, just mucking around.
Assuming a recent version of Linux and Apache. To accomplish this configuration change, modify /etc/httpd/conf/httpd.conf replacing the Listen 80 directive with a different port. As far as ports go, I'd recommend higher. Check a list of TCP and UDP port numbers, go high and stay away from known ports.
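For example, a minimal httpd.conf change might look like the following (8080 here is an arbitrary unprivileged port chosen for illustration, not something mandated by Apache):

```
# /etc/httpd/conf/httpd.conf
# was: Listen 80
Listen 8080
```

If the configuration also contains <VirtualHost *:80> blocks, their port needs to be updated to match, and the site is then reached as http://example.org:8080/.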
running apache web server on an arbitrary port?
1,418,633,044,000
I need to open port 21 on a Linux (CentOS 5) virtual machine I have. I have tried several Google solutions, but none are working. I was wondering if someone could tell me how to do this. Below is the output of netstat -tulpn: tcp 0 0 127.0.0.1:2208 0.0.0.0:* LISTEN 3576/hpiod tcp 0 0 0.0.0.0:611 0.0.0.0:* LISTEN 3397/rpc.statd tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 3365/portmap tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 3020/cupsd tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 3629/sendmail: acce tcp 0 0 127.0.0.1:2207 0.0.0.0:* LISTEN 3582/python tcp 0 0 :::22 :::* LISTEN 3595/sshd udp 0 0 0.0.0.0:68 0.0.0.0:* 3278/dhclient udp 0 0 0.0.0.0:605 0.0.0.0:* 3397/rpc.statd udp 0 0 0.0.0.0:608 0.0.0.0:* 3397/rpc.statd udp 0 0 0.0.0.0:5353 0.0.0.0:* 3729/avahi-daemon: udp 0 0 0.0.0.0:111 0.0.0.0:* 3365/portmap udp 0 0 0.0.0.0:57333 0.0.0.0:* 3729/avahi-daemon: udp 0 0 0.0.0.0:631 0.0.0.0:* 3020/cupsd udp 0 0 192.168.201.90:123 0.0.0.0:* 3611/ntpd udp 0 0 127.0.0.1:123 0.0.0.0:* 3611/ntpd udp 0 0 0.0.0.0:123 0.0.0.0:* 3611/ntpd udp 0 0 :::5353 :::* 3729/avahi-daemon: udp 0 0 :::52217 :::* 3729/avahi-daemon: udp 0 0 fe80::20c:29ff:fe66:123 :::* 3611/ntpd udp 0 0 ::1:123 :::* 3611/ntpd udp 0 0 :::123 :::* 3611/ntpd And here is the output of iptables -L -n: Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination
I figured it out. I don't have an FTP server running on the machine I am trying to connect to.
How can I open port 21 on a Linux VM?
1,418,633,044,000
I am running CentOS 6 on my server. When I disable the firewall via the following commands, ssh starts working fine; however, when I turn the firewall back on, ssh stops:

service iptables save
service iptables stop
chkconfig iptables off

Here is the list of iptables rules:

[root@server1 ~]# sudo iptables -S
-P INPUT DROP
-P FORWARD DROP
-P OUTPUT ACCEPT
-N IPTABLES-UP
-A INPUT -s 127.0.0.1/32 -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -s 58.74.16.32/28 -j ACCEPT
-A INPUT -s 121.97.80.16/28 -j ACCEPT
-A INPUT -s 203.177.90.0/24 -j ACCEPT
-A INPUT -s 122.55.79.144/28 -j ACCEPT
-A INPUT -s 125.212.38.80/28 -j ACCEPT
-A INPUT -s 192.168.10.0/24 -j ACCEPT
-A INPUT -s 192.168.50.0/24 -j ACCEPT
-A INPUT -s 192.168.60.0/24 -j ACCEPT
-A INPUT -s 192.168.70.0/24 -j ACCEPT
-A INPUT -s 192.168.160.0/24 -j ACCEPT
-A INPUT -s 192.168.170.0/24 -j ACCEPT
-A INPUT -s 192.168.150.0/24 -j ACCEPT
-A INPUT -s 192.168.237.0/24 -j ACCEPT
-A INPUT -s 192.168.235.0/24 -j ACCEPT
-A INPUT -s 192.168.228.0/22 -j ACCEPT
-A INPUT -p tcp -m tcp --sport 25 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 25 -j ACCEPT
-A INPUT -p tcp -m tcp --sport 26 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 26 -j ACCEPT
-A INPUT -p tcp -m tcp --sport 587 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 587 -j ACCEPT
-A INPUT -p tcp -m tcp --sport 465 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 465 -j ACCEPT
-A INPUT -p udp -m udp --sport 53 -j ACCEPT
-A INPUT -p udp -m udp --dport 53 -j ACCEPT
-A INPUT -p udp -m udp --sport 123 -j ACCEPT
-A INPUT -p udp -m udp --dport 123 -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -j DROP
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A FORWARD -p icmp -j ACCEPT
-A FORWARD -s 192.168.0.0/16 -d 192.168.0.0/16 -j ACCEPT
-A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -j DROP
-A OUTPUT -d 127.0.0.1/32 -j ACCEPT
-A OUTPUT -p icmp -j ACCEPT
-A OUTPUT -j ACCEPT

The SSH port has also not been changed. Can you help me work out why I am not able to connect via SSH?
According to the rules you posted, you have several iptables rules, and one of them drops everything that has not been accepted so far:

iptables -A INPUT -j DROP

Your rule accepting SSH traffic on port 22 comes after that DROP rule, so it is never reached. Insert the SSH rule before the DROP rule, for example:

iptables -I INPUT -p tcp --dport 22 -j ACCEPT
Internet & Ping Working but can't connect via SSH
1,418,633,044,000
The server is supposed to silently ignore any requests (including ping) unless it receives correct credentials with initial request. Nobody else can tell that the host even exists. I am talking about a very basic "authentication server", a mechanism to selectively make a host visible only to some guests. When the client sends correct credentials, the firewall makes the host visible to its IP and begins to accept any connections from it.
If your host doesn't respond to anything, then your options to communicate with it are limited. You aren't going to be able to do things like send it a user name and password, since that would require opening a TCP connection, and that requires traffic in both directions. You can do port knocking to make the server start responding to a usable protocol such as SSH. Port knocking works by encoding a password into a series of packets that don't require a connected protocol, for example sending pings on a series of ports, or sending an ICMP or UDP packet to a specific port containing a password. Note that any method that requires the server to be stealthy has an inherent weakness: somebody observing the traffic can see the traffic that leads the server to reply, and replay that sequence. You can limit this by making the password depend on the time, but beware that this can lock you out if the clocks on the client and the server get out of synch. Obviously the server won't be stealthy while you're communicating with it. Setting up port knocking isn't that difficult, you can find tutorials for it on the web. But even so it's usually not worth the trouble. Port knocking does not improve privacy and does not defend against any serious threat. Its main advantage is to make your server invisible to generic probes that attempt to exploit security vulnerabilities, but those probes are harmless if you keep your server up-to-date with security patches.
How do I configure my server to stay hidden unless it receives correct password?
1,418,633,044,000
After upgrading from LMDE2 to LMDE3 Cindy I noticed I could no longer launch the gufw GUI. The error was: ** (gufw.py:20536): WARNING **: Failed to load shared library 'libwebkit2gtk-4.0.so.37' referenced by the typelib: libGLESv2.so.2: cannot open shared object file: No such file or directory /usr/share/gufw/gufw/gufw/view/gufw.py:117: Warning: cannot retrieve class for invalid (unclassed) type 'void' self.web_content = WebKit2.WebView() Traceback (most recent call last): File "/usr/share/gufw/gufw/gufw.py", line 30, in <module> gufw = Gufw(controler.get_frontend()) File "/usr/share/gufw/gufw/gufw/view/gufw.py", line 79, in __init__ self._set_objects_name() File "/usr/share/gufw/gufw/gufw/view/gufw.py", line 117, in _set_objects_name self.web_content = WebKit2.WebView() gufw version is 17.04.1-1.1 ufw version is 0.35-4
In hindsight I think the problem probably had nothing to do with the upgrade, and everything to do with changing to the non-free NVIDIA drivers. (I just hadn't needed to change any firewall settings until after the upgrade.) I checked all of the stated dependencies and everything required seemed to be installed, including libgles2-mesa and webkit2-4.0. I installed: libgles2-nvidia libgles-nvidia2 libgles2-glvnd-nvidia and was then able to launch the gufw GUI.
gufw GUI fails to launch on LMDE
1,418,633,044,000
Say I have a couple of servers at DigitalOcean and I want them to talk to each other. DigitalOcean offers a WAN connection and a LAN connection. Problem is that both are insecure. The WAN is the Internet and the LAN is shared by everyone who has a computer (VPS) at DigitalOcean. So I want to block everything except a few ports such as 53, 80, 443 on the WAN. That's the standard procedure. Then, maybe I have MySQL on the other computer so I want to open port 3306 for IP address 10.1.1.1 (sample IPs, not actually valid at DigitalOcean.) Now my problem is that I want the firewall rules to be in place before either interface gets started.

auto eth0 eth1

iface eth0 inet static
    address 8.8.8.2 # some Internet address
    netmask 255.255.255.255
    gateway 8.8.8.1 # some Internet address
    dns-nameservers 8.8.8.8 8.8.4.4
    pre-up /etc/network/firewall

iface eth1 inet static
    address 10.1.1.1
    netmask 255.0.0.0
    pre-up /etc/network/firewall

What I came up with is to add the pre-up to both interfaces. That way I'm sure it starts before either one; it also means the script will run twice. Is that the way to do it? Or would there be a better way to have the equivalent of pre-up that's global to all interfaces? Note: OS is Ubuntu 16.04.1, latest available.
I actually found a solution: since I'm on Ubuntu 16.04 and we have systemd, I just created a unit file, snapinitfirewall.service, and installed it alongside my firewall code.

# Documentation available at:
# https://www.freedesktop.org/software/systemd/man/systemd.service.html
[Unit]
Description=Snap! Websites firewall initialization
Before=network.target

[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/etc/network/firewall
#ExecStop=... -- why would you ever want to remove your firewall rules?

[Install]
WantedBy=multi-user.target

# vim: syntax=dosini

/etc/network/firewall is a script which restores all the rules at once on boot. Because I have another service file for that one package (i.e. one package offering two services) I have to include the following line to make sure the initialization is enabled and runs on reboot:

systemctl -q enable snapinitfirewall

This is a much better approach than using the pre-up capability since you can be sure it runs before the network gets started. For those interested in more, the snapfirewall project is on github.
Two interfaces, both require the firewall to be up before starting
1,418,633,044,000
What is the method to block a machine from establishing connection to an outside ftp server. Both ftp and sftp. inet, iptables, shutdown service?
I'm guessing, but it looks like you want to block connections to port 21 and port 22. This can be done on the host itself for ftp:

iptables -I OUTPUT -p tcp --dport 21 \
    -d the.rem.ote.ip \
    -m comment --comment "blocked as per ticket##" \
    -j REJECT

SFTP, though, is tricky: it shares a port with ssh. If you are okay with blocking outgoing ssh connections too, then repeat the command with 21 replaced by 22 (^21^22 in shell history). If you need to keep ssh and block sftp, you'd need to condition the remote end to never offer an sftp subsystem, or change your local config (not sure how) to prevent sftp of any kind (the challenge being that users can download sftp binaries at any time). Blocking ports 21 and 22 on an intervening firewall you control is much more reliable, but that's not going to be a CentOS/RHEL issue that you'd need to ask here. I think if you can't block all port-22 access via firewalls, then you're in for a rough time.
How to stop outbound ftp from being established. centos/ rhel
1,418,633,044,000
I would like to set up a firewall on Linux Debian for IPv6. It is important for me to use iptables. I have tried to change the ipv4 folder to ipv6. How can I set up iptables for IPv6?
In addition to the existing answer: if you prefer (like I do) the syntax of the iptables-save and iptables-restore commands, ip6tables-save and ip6tables-restore can be used. The convenient part is that you can share the same rule file for iptables-restore and ip6tables-restore by prefixing the version-specific lines with -4 and -6 respectively and leaving the prefix out on lines that apply to both IPv4 and IPv6. In order to check for the correct address family ($ADDRFAM) in your script, use:

set -e
case "$1" in
start)
    case $ADDRFAM in
    inet)
        iptables-restore < /etc/myfwrules.txt
        ;;
    inet6)
        ip6tables-restore < /etc/myfwrules.txt
        ;;
    esac
    # ...
    ;;
stop)
    # ...
    ;;
esac
exit 0
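As a sketch of such a shared rules file (the file name /etc/myfwrules.txt comes from the answer; the ports and policies below are illustrative choices, not taken from the question) — iptables(8) documents the -4/--ipv4 and -6/--ipv6 prefixes as being silently ignored by the restore tool of the other address family:

```
# /etc/myfwrules.txt -- loaded by both iptables-restore and ip6tables-restore
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# family-specific lines: -4 is skipped by ip6tables-restore, -6 by iptables-restore
-4 -A INPUT -p icmp -j ACCEPT
-6 -A INPUT -p ipv6-icmp -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
COMMIT
```

Everything without a -4/-6 prefix is applied by both tools, so the two firewalls stay in sync from one file.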
How does the iptables work with IPv6?
1,478,542,920,000
This question is inspired by Why is using a shell loop to process text considered bad practice ? I see these constructs for file in `find . -type f -name ...`; do smth with ${file}; done and for dir in $(find . -type d -name ...); do smth with ${dir}; done being used here almost on a daily basis even if some people take the time to comment on those posts explaining why this kind of stuff should be avoided... Seeing the number of such posts (and the fact that sometimes those comments are simply ignored) I thought I might as well ask a question: Why is looping over find's output bad practice and what's the proper way to run one or more commands for each file name/path returned by find ?
The problem

for f in $(find .) combines two incompatible things. find prints a list of file paths delimited by newline characters. While the split+glob operator that is invoked when you leave that $(find .) unquoted in that list context splits it on the characters of $IFS (by default includes newline, but also space and tab (and NUL in zsh)) and performs globbing on each resulting word (except in zsh) (and even brace expansion in ksh93 (even if the braceexpand option is off in older versions) or pdksh derivatives!). Even if you make it:

IFS='
' # split on newline only
set -o noglob # disable glob (also disables brace expansion
              # done upon other expansions in ksh)
for f in $(find .) # invoke split+glob

That's still wrong as the newline character is as valid as any in a file path. The output of find -print is simply not post-processable reliably (except by using some convoluted trick, as shown here). That also means the shell needs to store the output of find fully, and then split+glob it (which implies storing that output a second time in memory) before starting to loop over the files. Note that find . | xargs cmd has similar problems (there, blanks, newline, single quote, double quote and backslash (and with some xargs implementations bytes not forming part of valid characters) are a problem)

More correct alternatives

The only way to use a for loop on the output of find would be to use zsh that supports IFS=$'\0' and:

IFS=$'\0'
for f in $(find . -print0)

(replace -print0 with -exec printf '%s\0' {} + for find implementations that don't support the non-standard (but quite common nowadays) -print0). Here, the correct and portable way is to use -exec:

find . -exec something with {} \;

Or if something can take more than one argument:

find . -exec something with {} +

If you do need that list of files to be handled by a shell:

find . -exec sh -c '
  for file do
    something < "$file"
  done' find-sh {} +

(beware it may start more than one sh).
On some systems, you can use:

find . -print0 | xargs -r0 something with

though that has little advantage over the standard syntax and means something's stdin is either the pipe or /dev/null. One reason you may want to use that could be to use the -P option of GNU xargs for parallel processing. The stdin issue can also be worked around with GNU xargs with the -a option with shells supporting process substitution:

xargs -r0n 20 -P 4 -a <(find . -print0) something

for instance, to run up to 4 concurrent invocations of something each taking 20 file arguments. With zsh or bash, another way to loop over the output of find -print0 is with:

while IFS= read -rd '' file <&3; do
  something "$file" 3<&-
done 3< <(find . -print0)

read -d '' reads NUL delimited records instead of newline delimited ones. bash-4.4 and above can also store files returned by find -print0 in an array with:

readarray -td '' files < <(find . -print0)

The zsh equivalent (which has the advantage of preserving find's exit status):

files=(${(0)"$(find . -print0)"})

With zsh, you can translate most find expressions to a combination of recursive globbing with glob qualifiers. For instance, looping over find . -name '*.txt' -type f -mtime -1 would be:

for file (./**/*.txt(ND.m-1)) cmd $file

Or

for file (**/*.txt(ND.m-1)) cmd -- $file

(beware of the need of -- as with **/*, file paths are not starting with ./, so may start with - for instance). ksh93 and bash eventually added support for **/ (though not more advanced forms of recursive globbing), but still not the glob qualifiers which makes the use of ** very limited there. Also beware that bash prior to 4.3 follows symlinks when descending the directory tree. Like for looping over $(find .), that also means storing the whole list of files in memory¹. That may be desirable though in some cases when you don't want your actions on the files to have an influence on the finding of files (like when you add more files that could end-up being found themselves).
Other reliability/security considerations Race conditions Now, if we're talking of reliability, we have to mention the race conditions between the time find/zsh finds a file and checks that it meets the criteria and the time it is being used (TOCTOU race). Even when descending a directory tree, one has to make sure not to follow symlinks and to do that without TOCTOU race. find (GNU find at least) does that by opening the directories using openat() with the right O_NOFOLLOW flags (where supported) and keeping a file descriptor open for each directory, zsh/bash/ksh don't do that. So in the face of an attacker being able to replace a directory with a symlink at the right time, you could end up descending the wrong directory. Even if find does descend the directory properly, with -exec cmd {} \; and even more so with -exec cmd {} +, once cmd is executed, for instance as cmd ./foo/bar or cmd ./foo/bar ./foo/bar/baz, by the time cmd makes use of ./foo/bar, the attributes of bar may no longer meet the criteria matched by find, but even worse, ./foo may have been replaced by a symlink to some other place (and the race window is made a lot bigger with -exec {} + where find waits to have enough files to call cmd). Some find implementations have a (non-standard yet) -execdir predicate to alleviate the second problem. With: find . -execdir cmd -- {} \; find chdir()s into the parent directory of the file before running cmd. Instead of calling cmd -- ./foo/bar, it calls cmd -- ./bar (cmd -- bar with some implementations, hence the --), so the problem with ./foo being changed to a symlink is avoided. That makes using commands like rm safer (it could still remove a different file, but not a file in a different directory), but not commands that may modify the files unless they've been designed to not follow symlinks. -execdir cmd -- {} + sometimes also works but with several implementations including some versions of GNU find, it is equivalent to -execdir cmd -- {} \;. 
-execdir also has the benefit of working around some of the problems associated with too deep directory trees. In: find . -exec cmd {} \; the size of the path given to cmd will grow with the depth of the directory the file is in. If that size gets bigger than PATH_MAX (something like 4k on Linux), then any system call that cmd does on that path will fail with a ENAMETOOLONG error. With -execdir, only the file name (possibly prefixed with ./) is passed to cmd. File names themselves on most file systems have a much lower limit (NAME_MAX) than PATH_MAX, so the ENAMETOOLONG error is less likely to be encountered. Bytes vs characters Also, often overlooked when considering security around find and more generally with handling file names in general is the fact that on most Unix-like systems, file names are sequences of bytes (any byte value but 0 in a file path, and on most systems (ASCII based ones, we'll ignore the rare EBCDIC based ones for now) 0x2f is the path delimiter). It's up to the applications to decide if they want to consider those bytes as text. And they generally do, but generally the translation from bytes to characters is done based on the user's locale, based on the environment. What that means is that a given file name may have different text representation depending on the locale. For instance, the byte sequence 63 f4 74 e9 2e 74 78 74 would be côté.txt for an application interpreting that file name in a locale where the character set is ISO-8859-1, and cєtщ.txt in a locale where the charset is IS0-8859-5 instead. Worse. In a locale where the charset is UTF-8 (the norm nowadays), 63 f4 74 e9 2e 74 78 74 simply couldn't be mapped to characters! find is one such application that considers file names as text for its -name/-path predicates (and more, like -iname or -regex with some implementations). What that means is that for instance, with several find implementations (including GNU find on GNU systems²). find . 
-name '*.txt' would not find our 63 f4 74 e9 2e 74 78 74 file above when called in a UTF-8 locale as * (which matches 0 or more characters, not bytes) could not match those non-characters. LC_ALL=C find... would work around the problem as the C locale implies one byte per character and (generally) guarantees that all byte values map to a character (albeit possibly undefined ones for some byte values). Now when it comes to looping over those file names from a shell, that byte vs character can also become a problem. We typically see 4 main types of shells in that regard: The ones that are still not multi-byte aware like dash. For them, a byte maps to a character. For instance, in UTF-8, côté is 4 characters, but 6 bytes. In a locale where UTF-8 is the charset, in find . -name '????' -exec dash -c ' name=${1##*/}; echo "${#name}"' sh {} \; find will successfully find the files whose name consists of 4 characters encoded in UTF-8, but dash would report lengths ranging between 4 and 24. yash: the opposite. It only deals with characters. All the input it takes is internally translated to characters. It makes for the most consistent shell, but it also means it cannot cope with arbitrary byte sequences (those that don't translate to valid characters). Even in the C locale, it can't cope with byte values above 0x7f. find . -exec yash -c 'echo "$1"' sh {} \; in a UTF-8 locale will fail on our ISO-8859-1 côté.txt from earlier for instance. Those like bash or zsh where the multi-byte support has been progressively added. Those will fall back to considering bytes that can't be mapped to characters as if they were characters. They still have a few bugs here and there especially with less common multi-byte charsets like GBK or BIG5-HKSCS (those being quite nasty as many of their multi-byte characters contain bytes in the 0-127 range (like the ASCII characters)). Those like the sh of FreeBSD (11 at least) or mksh -o utf8-mode that support multi-bytes but only for UTF-8. 
Interrupted output Another problem with parsing the output of find or even find -print0 may arise if find is interrupted, for instance because it has triggered some limit or was killed for whatever reason. Example: $ (ulimit -t 1; find / -type f -print0 2> /dev/null) | xargs -r0 printf 'rm -rf "%s"\n' | tail -n 2 rm -rf "/usr/lib/x86_64-linux-gnu/guile/2.2/ccache/language/ecmascript/parse.go" rm -rf "/usr/" zsh: cpu limit exceeded (core dumped) ( ulimit -t 1; find / -type f -print0 2> /dev/null; ) | zsh: done xargs -r0 printf 'rm -rf "%s"\n' | tail -n 2 Here, find was interrupted because it reached the CPU time limit. Since the output is buffered (as it goes to a pipe), find had output a number of blocks to stdout and the end of the last block it had written at the time it was killed happened to be in the middle of some /usr/lib/x86_64-linux-gnu/guile... file path, here unfortunately just after the /usr/. xargs, just saw a non-delimited /usr/ record followed by EOF and passed that to printf. If the command had been rm -rf instead, it could have had severe consequences. Notes ¹ For completeness, we could mention a hacky way in zsh to loop over files using recursive globbing without storing the whole list in memory: process() { something with $REPLY false } : **/*(ND.m-1+process) +cmd is a glob qualifier that calls cmd (typically a function) with the current file path in $REPLY. The function returns true or false to decide if the file should be selected (and may also modify $REPLY or return several files in a $reply array). Here we do the processing in that function and return false so the file is not selected. ² GNU find uses the system's fnmatch() libc function to do the pattern matching, so the behaviour there depends on how that function copes with non-text data.
Why is looping over find's output bad practice?
1,478,542,920,000
I have the following bash script: #!/bin/bash upperlim=10 for i in {0..10} do echo $i done for i in {0..$upperlim} do echo $i done The first for loop (without the variable upperlim in the loop control) works fine, but the second for loop (with the variable upperlim in the loop control) does not. Is there any way that I can modify the second for loop so that it works? Thanks for your time.
The reason for this is the order in which things occur in bash. Brace expansion occurs before variables are expanded. In order to accomplish your goal, you need to use a C-style for loop:

upperlim=10
for ((i=0; i<=upperlim; i++)); do
    echo "$i"
done
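A small demonstration of that ordering, runnable in bash (the variable names here are arbitrary):

```shell
#!/usr/bin/env bash
# Brace expansion happens before parameter expansion, so {1..$n} is
# not a valid numeric range at the time braces are processed and is
# left alone; $n is only substituted afterwards:
n=3
literal=$(echo {1..$n})      # the literal string "{1..3}"

# The C-style loop evaluates $n arithmetically, so it counts as expected:
counted=""
for ((i = 1; i <= n; i++)); do
  counted+="$i "
done

echo "$literal"
echo "$counted"
```

Running it prints {1..3} for the brace attempt and 1 2 3 for the arithmetic loop.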
In bash, is it possible to use an integer variable in a brace expansion
1,478,542,920,000
What is the purpose of the do keyword in Bash for loop syntax? To me, it feels redundant. for i in `seq 1 2`; do echo "hi"; done Why isn't the syntax like this? for i in `seq 1 2`; echo "hi"; done I'm sure that it does fill a purpose. I just want to learn.
Note that that syntax is inherited from the Bourne shell. After the variable name, you can have either in to have the list of elements explicitly given, or do, to loop over the positional parameters. for i in 1 2 3 do echo "$i" done Or set 1 2 3 for i do echo "$i" done Having the do in both cases (even if it's not strictly necessary in the first one) makes for a more consistent syntax. It's also consistent with the while/until loops where the do is necessary. while cmd1 cmd2 do cmd3 cmd4 done You need the do to tell where the list of condition commands end. Note that the Bourne shell did not support for i; do. That syntax was also not POSIX until the 2016 edition of the standard (for i do has always been POSIX; see the related Austin group bug). zsh has a few shorthand forms like: for i in 1 2 3; echo $i for i (1 2 3) echo $i for ((i=1;i<=3;i++)) echo $i Or support for more than one variable: for i j (1 a 2 b) echo $i $j (though you can't use in or do as variable name in place of j above). Even if rarely documented, most Bourne-like shells (Bourne, ksh, bash, zsh, not ash nor yash) also support: for i in 1 2 3; { echo "$i";} The Bourne shell, ksh and zsh (but not bash) also support: for i { echo "$i"; } While bash, ksh and zsh (but not the Bourne shell) support: for i; { echo "$i"; } All (Bourne, bash, ksh, zsh) support: for i { echo "$i";} ksh93, bash, zsh support: for ((i=1;i<=3;i++)) { echo "$i"; }
What is the purpose of the "do" keyword in Bash for loops?
1,478,542,920,000
In bash, I know that it is possible to write a for loop in which some loop control variable i iterates over specified integers. For example, I can write a bash shell script that prints the integers between 1 and 10: #!/bin/bash for i in {1..10} do echo $i done Is it possible to instead iterate over a loop control variable that is a string, if I provide a list of strings? For example, suppose that I have a string fname that represents a file name. I want to call a set of commands for each file name. For example, I might want to print the contents of fname using a command like this: #!/bin/bash for fname in {"a.txt", "b.txt", "c.txt"} do echo $fname done In other words, on the first iteration, fname should have the value fname="a.txt", while on the second iteration, fname should have the value fname="b.txt", and so on. Unfortunately, it seems that the above syntax is not quite correct. I would like to obtain the output: a.txt b.txt c.txt but when I try the above code, I obtain this output: {a.txt, b.txt, c.txt} Can you please help me determine the correct syntax, so that I can iteratively change the value/contents of the variable fname? Thank you for your time.
The correct syntax is as follows:

#!/bin/bash
for fname in a.txt b.txt c.txt
do
    echo $fname
done
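If the names come from somewhere else or may contain spaces, a bash array with a quoted expansion is a safer variant of the same loop (the file names below are just placeholders):

```shell
#!/usr/bin/env bash
# Each array element stays one word, even "c file.txt" with its space;
# the quoted "${files[@]}" expansion prevents re-splitting.
files=("a.txt" "b.txt" "c file.txt")

result=""
for fname in "${files[@]}"; do
  result+="[$fname]"
done
echo "$result"
```

An unquoted word list would have split "c file.txt" into two iterations.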
In a bash shell script, writing a for loop that iterates over string values
1,478,542,920,000
${!FOO} performs a double substitution in bash, meaning it takes the (string) value of FOO and uses it as a variable name. zsh doesn’t support this feature. Is there a way to make this work the same in bash and zsh? Background: I’ve got a list of environment variables, like PATH MAIL EDITOR and want to first print the variable names and afterwards their values. This works in bash but not zsh: for VAR in LIST do echo $VAR echo ${!VAR} done It should be somehow possible “the old way” with eval, but I can’t get it to work: for VAR in LIST do echo $VAR echo `eval \$$VAR` done I’m never going to understand why I can’t simply do arbitrary deep substitutions like ${${VAR}} or even ${${${VAR}}} if need be, so an explanation for that would be nice, too.
Both bash and zsh have a way to perform indirect expansion, but they use different syntax. It's easy enough to perform indirect expansion using eval; this works in all POSIX and most Bourne shells. Take care to quote properly in case the value contains characters that have a special meaning in the shell.

eval "value=\"\${$VAR}\""
echo "$VAR"
echo "$value"

${${VAR}} doesn't work because it's not a feature that any shell implements. The thing inside the braces must conform to syntax rules which do not include ${VAR}. (In zsh, this is supported syntax, but does something different: nested substitutions perform successive transformations on the same value; ${${VAR}} is equivalent to $VAR since this performs the identity transformation twice on the value.)
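A minimal bash illustration of both forms (in zsh, the native equivalent of bash's ${!FOO} is ${(P)FOO}); the variable names here are made up:

```shell
#!/usr/bin/env bash
BAR="hello"
FOO=BAR            # FOO holds the *name* of another variable

indirect=${!FOO}   # bash-only indirect expansion -> value of BAR

# Portable eval version, quoting carefully so special characters
# in the value are not re-interpreted by the shell:
eval "value=\"\${$FOO}\""

echo "$indirect $value"
```

Both assignments end up holding hello, the value of the variable whose name is stored in FOO.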
What is the equivalent of bash indirect referencing ${!FOO} in zsh?
1,478,542,920,000
I'd like to find an equivalent of cmd 1 && cmd 2 && ... && cmd 20 but with commands expressed within a for loop like for i in {1..20} do cmd $i done What would you suggest to change in the second expression to find an equivalent of the first?
The equivalent to your original sequence would be:

for i in {1..20}
do
    cmd $i || break
done

The difference with Amit's answer is the script won't exit, i.e. it will execute potential commands that might follow the sequence/loop. Note that the return status of the whole loop will always be true with my suggestion; this might be fixed if relevant in your case.
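If you also need the failing command's exit status after the loop (the loop's own status is that of break, i.e. 0), you can record it yourself. In this sketch, test "$i" -lt 3 is just a stand-in for the real cmd:

```shell
#!/usr/bin/env bash
status=0
ran=""
for i in 1 2 3 4; do
  # stand-in command: succeeds for 1 and 2, fails at 3
  test "$i" -lt 3 || { status=$?; break; }
  ran+="$i"
done

# status now holds the exit code of the first failing command,
# and the remaining iterations (3 and 4's bodies) never ran.
echo "ran=$ran status=$status"
```

After the loop you can act on "$status", e.g. exit "$status" if the whole script should propagate the failure.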
How do I replace AND (&&) in a for loop?
1,478,542,920,000
In bash I often use for-loops such as the following for file in *.type; do somecommand "$file"; done; to perform an operation for all files matching *.type. If no file with this ending is found in the working directory the asterisk is not expanded and usually I will get an error message saying that somecommand didn't find the file. I can immediately think of several ways to avoid this error. But adding a conditional does not seem to be very elegant. Is there a short and clean way to achieve this?
Yes, run the following command:

shopt -s nullglob

It will nullify the match and no error will be triggered. If you want this behaviour by default, add the command in your ~/.bashrc. If you want to detect a null glob in POSIX shell, try:

for i in *.txt; do
    [ "$i" = '*.txt' ] && [ ! -e '*.txt' ] && continue
done

See http://mywiki.wooledge.org/NullGlob
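A self-contained bash illustration, using an empty temporary directory so the pattern is guaranteed not to match:

```shell
#!/usr/bin/env bash
tmpdir=$(mktemp -d)   # empty directory: *.type cannot match anything

shopt -s nullglob
count=0
for file in "$tmpdir"/*.type; do   # with nullglob, no match -> no words
  count=$((count + 1))
done
shopt -u nullglob

rmdir "$tmpdir"
echo "$count"
```

The loop body never runs, so count stays 0; without nullglob the loop would have run once with the literal, unexpanded pattern as $file.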
Avoiding errors due to unexpanded asterisk
1,478,542,920,000
I need to create a script with a loop through a list of items. I want to insert a string in the script. I tried:

for i in " a b c"; do
    echo "test "$i
done

But that only outputs one string: test a b c. How would I get this?

testa
testb
testc

(a, b and c are just examples for some longer words, which I got from an OpenOffice Calc file)
export a=" a b c "
for i in $a; do echo "test$i"; done
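The same loop written with a bash array avoids relying on word splitting of an unquoted variable, and keeps working even if the list items themselves contain spaces; a minimal sketch:

```shell
words=(a b c)                 # or e.g.: words=("long item one" "long item two")
for i in "${words[@]}"; do
    echo "test$i"
done
```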
for loop with multiline data
1,478,542,920,000
I want to print a list of numbers from 1 to 100 and I use a for loop like the following:

number=100
for num in {1..$number}
do
    echo $num
done

When I execute the command it only prints {1..100} and not the list of numbers from 1 to 100.
Yes, that's because brace-expansion occurs before parameter expansion. Either use another shell like zsh or ksh93 or use an alternative syntax:

Standard (POSIX) sh syntax

i=1
while [ "$i" -le "$number" ]; do
    echo "$i"
    i=$(($i + 1))
done

Ksh-style for ((...))

for ((i=1;i<=number;i++)); do
    echo "$i"
done

use eval (not recommended)

eval '
for i in {1..'"$number"'}; do
    echo "$i"
done
'

use the GNU seq command on systems where it's available

unset -v IFS # restore IFS to default
for i in $(seq "$number"); do
    echo "$i"
done

(that one being less efficient as it forks and runs a new command and the shell has to read its output from a pipe). Avoid loops in shells. Using loops in a shell script is often an indication that you're not doing it right. Most probably, your code can be written some other way.
bash variables in for loop range [duplicate]
1,478,542,920,000
I need to iterate through every file inside a directory. One common way I saw was using the for loop that begins with for file in *; do. However, I realized that it does not include hidden files (files that begin with a "."). The other obvious way is then do something like for file in `ls -a`; do However, iterating over ls is a bad idea because spaces in file names mess everything up. What would be the proper way to iterate through a directory and also get all the hidden files?
You just need to create a list of glob-matching files, separated by spaces:

for file in .* *; do echo "$file"; done

Edit: The above can be rewritten in a different form using brace expansion

for file in {.*,*}; do echo "$file"; done

or even shorter:

for file in {.,}*; do echo "$file"; done

Adding the path for selected files:

for file in /path/{..?,.[!.],}*; do echo "$file"; done

Adding the path for selected files:

for file in /path/{.,}*; do echo "$file"; done

If you want to be sophisticated and remove the usually unneeded . and .. from the list, just change {.,}* to {..?,.[!.],}*.

For completeness, it is worth mentioning that one can also set dotglob to match dot-files with a pure *:

shopt -s dotglob

In zsh, one additionally needs to set nullglob to prevent the error in case of no matches:

setopt nullglob

or, alternatively, add the glob qualifier N to the pattern:

for file in /path/{.,}*(N); do echo "$file"; done
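Here is a runnable check of the .* * form, using made-up file names in a throwaway directory; the special entries . and .. (which .* also matches in many shells) are skipped explicitly:

```shell
cd "$(mktemp -d)"
touch .hidden visible

for file in .* *; do
    # ".*" may also match the special entries "." and ".."; skip them
    if [ "$file" = . ] || [ "$file" = .. ]; then continue; fi
    echo "$file"
done
```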
proper way to iterate through contents in a directory [duplicate]
1,478,542,920,000
I have a folder named 'sample' and it has 3 files in it. I want to write a shell script which will read these files inside the sample folder and post them to an HTTP site using curl. I have written the following for listing files inside the folder:

for dir in sample/*; do
    echo $dir
done

But it gives me the following output:

sample/log
sample/clk
sample/demo

It is including the parent folder in the output. I want the output as follows (without the parent folder name):

log
clk
demo

How do I do this?
Use basename to strip the leading path off of the files: for file in sample/*; do echo "$(basename "$file")" done Though why not: ( cd sample; ls )
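The ${file##*/} parameter expansion does the same job as basename without spawning an extra process; a self-contained sketch using the question's file names in a scratch directory:

```shell
cd "$(mktemp -d)"
mkdir sample
touch sample/log sample/clk sample/demo

for file in sample/*; do
    echo "${file##*/}"   # strip the longest prefix ending in "/"
done
```

Note the glob result is sorted, so this prints clk, demo, log rather than the creation order.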
Loop through a folder and list files
1,478,542,920,000
I need my script to do something to every file in the current directory excluding any sub-directories. For example, in the current path, there are 5 files, but 1 of them is a folder (a sub-directory). My script should activate a command given as arguments when running said script. I.e. "bash script wc -w" should give the word count of each file in the current directory, but not any of the folders, so that the output never has any of the "/sub/dir: Is a directory" lines. My current script:

#!/bin/bash
dir=`pwd`
for file in $dir/*
do
    $* $file
done

I just need to exclude directories from the loop, but I don't know how.
#!/bin/bash -
for file in "$dir"/*
do
    if [ ! -d "$file" ]; then
        "$@" "$file"
    fi
done

Note that it also excludes files that are of type symlink and where the symlink resolves to a file of type directory (which is probably what you want). Alternative (from comments), check only for regular files:

for file in "$dir"/*
do
    if [ -f "$file" ]; then
        "$@" "$file"
    fi
done
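A runnable sketch of the -f variant, with made-up names in a scratch directory so the filtering is easy to see:

```shell
cd "$(mktemp -d)"
mkdir subdir
touch one two

for file in *; do
    [ -f "$file" ] || continue   # regular files only; subdir is skipped
    echo "file: $file"
done
```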
Loop through files excluding directories
1,478,542,920,000
I'm trying to write a simple script to retrieve memory and swap usage from a list of hosts. Currently, the only way I've been able to achieve this is to write 3 separate scripts: for a in {1..9}; do echo "bvrprdsve00$a; $(ssh -q bvrprdsve00$a "echo \$(free -m|grep Mem|/bin/awk '{print \$4}';free -m|grep Swap|/bin/awk '{print \$4}')")"; done > /tmp/svemem.txt; for a in {10..99}; do echo "bvrprdsve0$a; $(ssh -q bvrprdsve0$a "echo \$(free -m|grep Mem|/bin/awk '{print \$4}';free -m|grep Swap|/bin/awk '{print \$4}')")"; done >> /tmp/svemem.txt; for a in {100..218}; do echo "bvrprdsve$a; $(ssh -q bvrprdsve$a "echo \$(free -m|grep Mem|/bin/awk '{print \$4}';free -m|grep Swap|/bin/awk '{print \$4}')")"; done >> /tmp/svemem.txt The reason for this is that the hostname always ends in a 3 digit number and these hosts go from 001-218, so I've needed to do a different for loop for each set (001-009, 010-099, 100-218). Is there a way in which I can do this in one script instead of joining 3 together?
Bash brace expansion can generate numbers with leading zeros (since bash 4.0 alpha, ~2009-02-20):

$ echo {001..023}
001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023

So, you can do:

for a in {001..218}; do echo "bvrprdsve$a; $(ssh -q bvrprdsve$a "echo \$(free -m|grep Mem|/bin/awk '{print \$4}';free -m|grep Swap|/bin/awk '{print \$4}')")"; done >> /tmp/svemem.txt

But, let's look inside the command a little bit: you are calling free twice, using grep and then awk:

free -m|grep Mem |/bin/awk '{print \$4}'; free -m|grep Swap|/bin/awk '{print \$4}'

All of which could be reduced to this one call to free and awk:

free -m|/bin/awk '/Mem|Swap/{print \$4}'

Furthermore, the internal command could be reduced to this value:

cmd="echo \$(free -m|/bin/awk '/Mem|Swap/{print \$4}')"

Then, the whole script will look like this:

b=bvrprdsve; f=/tmp/svemem.txt
cmd="echo \$(free -m|/bin/awk '/Mem|Swap/{print \$4}')"
for a in {001..218}; do echo "$b$a; $(ssh -q "$b$a" "$cmd")"; done >> "$f"
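A quick check of the padded range, plus a printf-based fallback sketch for shells whose brace expansion cannot pad (the host prefix here is just illustrative):

```shell
echo {001..005}                   # bash 4+: zero-padded brace range

for a in 1 2 3; do
    padded=$(printf '%03d' "$a")  # portable zero-padding fallback
    echo "bvrprdsve$padded"
done
```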
How do I get 0-padded numbers in {} (brace expansion)?
1,478,542,920,000
Say I have a for-in loop like the one below:

for i in /apps/textfiles/*.txt
do
    # do something
done

Now say I have 50 files inside /apps/textfiles/. In what order will the files be picked?
Filename expansion in Bash sorts alphabetically. From the Bash Manual: Bash scans each word for the characters ‘*’, ‘?’, and ‘[’. If one of these characters appears, then the word is regarded as a pattern, and replaced with an alphabetically sorted list of filenames matching the pattern [...]. It doesn't make a difference here that your globbing context is part of the for loop. Note that alphabetical sorting still obeys the collation order defined by the LC_COLLATE variable: LC_COLLATE This variable determines the collation order used when sorting the results of filename expansion, and determines the behavior of range expressions, equivalence classes, and collating sequences within filename expansion and pattern matching (see Filename Expansion).
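This is easy to verify with throwaway files created deliberately out of order (in the C locale and typical others, a, b, c sort the same way):

```shell
cd "$(mktemp -d)"
touch b.txt c.txt a.txt      # created out of sorted order on purpose

for f in *.txt; do
    echo "$f"                # expansion order is sorted, not creation order
done
```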
In what order does a bash FOR IN loop pick up files in a folder?
1,478,542,920,000
Is there a way to specify multiple variables (not just integers) in for loops in bash? I may have 2 files containing arbitrary text that I would need to work with. What I functionally need is something like this:

for i in $(cat file1) and j in $(cat file2); do command $i $j; done

Any ideas?
First, Don't read lines with for, as there are several unavoidable issues with reading lines via word-splitting. Assuming files of equal length, or if you want to only loop until the shorter of two files are read, a simple solution is possible.

while read -r x && read -r y <&3; do
    ...
done <file1 3<file2

Putting together a more general solution is hard because of when read returns false and several other reasons. This example can read an arbitrary number of streams and return after either the shortest or longest input.

#!/usr/bin/env bash

# Open the given files and assign the resulting FDs to arrName.
# openFDs arrname file1 [file2 ...]
openFDs() {
    local x y i arr=$1
    [[ -v $arr ]] || return 1
    shift

    for x; do
        { exec {y}<"$x"; } 2>/dev/null || return 1
        printf -v "${arr}[i++]" %d "$y"
    done
}

# closeFDs FD1 [FD2 ...]
closeFDs() {
    local x
    for x; do
        exec {x}<&-
    done
}

# Read one line from each of the given FDs and assign the output to arrName.
# If the first argument is -l, returns false only when all FDs reach EOF.
# readN [ -l ] arrName FD1 [FD2 ...]
readN() {
    if [[ $1 == -l ]]; then
        local longest
        shift
    else
        local longest=
    fi

    local i x y status arr=$1
    [[ -v $arr ]] || return 1
    shift

    for x; do
        if IFS= read -ru "$x" "${arr}[i]" || { unset -v "${arr}[i]"; [[ ${longest+_} ]] && return 1; }; then
            status=0
        fi
        ((i++))
    done

    return ${status:-1}
}

# readLines file1 [file2 ...]
readLines() {
    local -a fds lines
    trap 'closeFDs "${fds[@]}"' RETURN
    openFDs fds "$@" || return 1

    while readN -l lines "${fds[@]}"; do
        printf '%-1s ' "${lines[@]}"
        echo
    done
}

{ readLines /dev/fd/{3..6} || { echo 'error occured' >&2; exit 1; } } <<<$'a\nb\nc\nd' 3<&0 <<<$'1\n2\n3\n4\n5' 4<&0 <<<$'x\ny\nz' 5<&0 <<<$'7\n8\n9\n10\n11\n12' 6<&0

# vim: set fenc=utf-8 ff=unix ts=4 sts=4 sw=4 ft=sh nowrap et:

So depending upon whether readN gets -l, the output is either

a 1 x 7
b 2 y 8
c 3 z 9
d 4   10
  5   11
      12

or

a 1 x 7
b 2 y 8
c 3 z 9

Having to read multiple streams in a loop without saving everything into multiple arrays isn't all that common. If you just want to read arrays you should have a look at mapfile.
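The simple equal-length case from the top of the answer, made runnable with two throwaway files (the names and contents are just for the demo):

```shell
cd "$(mktemp -d)"
printf '%s\n' a b c > file1
printf '%s\n' 1 2 3 > file2

# Read both files in lockstep; file2 is attached to fd 3.
while read -r x && read -r y <&3; do
    echo "$x $y"
done <file1 3<file2
```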
Multivariable For Loops
1,478,542,920,000
I have a problem with a for loop in bash. For example: I have an array ("etc" "bin" "var") and I iterate over this array, but in the loop I would like to append some value to the array. E.g.

array=("etc" "bin" "var")
for i in "${array[@]}"
do
    echo $i
done

This displays etc bin var (of course on separate lines). And if I append after do like that:

array=("etc" "bin" "var")
for i in "${array[@]}"
do
    array+=("sbin")
    echo $i
done

I want: etc bin var sbin (of course on separate lines). This is not working. How can I do it?
It will append "sbin" 3 times as it should, but it won't iterate over the newly added "sbin"s in the same loop. After the 2nd example: echo "${array[@]}" #=> etc bin var sbin sbin sbin
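A runnable demonstration of that behaviour: the list "${array[@]}" is expanded once, before the first iteration, so appending inside the loop grows the array but not the iteration list:

```shell
array=(etc bin var)
for i in "${array[@]}"; do
    array+=(sbin)            # grows the array, not the list being iterated
    echo "$i"
done
echo "${array[@]}"
```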
In a loop over an array, add an element to the array
1,478,542,920,000
I have a folder with several repositories inside. Is there any way I can run git branch or whatever git command inside each folder? $ ls project1 project2 project3 project4 And I'd like to have some kind of output like the following $ command project1 [master] project2 [dev] project3 [master] project4 [master]
Try this. $1 should be the parent dir containing all of your repositories (or use "." for the current dir):

#!/bin/bash

function git_branches() {
    if [[ -z "$1" ]]; then
        echo "Usage: $FUNCNAME <dir>" >&2
        return 1
    fi
    if [[ ! -d "$1" ]]; then
        echo "Invalid dir specified: '${1}'"
        return 1
    fi

    # Subshell so we don't end up in a different dir than where we started.
    (
        cd "$1"
        for sub in *; do
            [[ -d "${sub}/.git" ]] || continue
            echo "$sub [$(cd "$sub"; git branch | grep '^\*' | cut -d' ' -f2)]"
        done
    )
}

You can make this its own script (but replace $FUNCNAME with $0), or keep it inside a function and use it in your scripts.
Get git branch from several folders/repos
1,478,542,920,000
I have a bash script which simply docker pushes an image:

docker push $CONTAINER_IMAGE:latest

I want to retry this up to 3 times when it fails. How should I achieve this?
Use a for loop and && break:

for n in {1..3}; do
    docker push $CONTAINER_IMAGE:latest && break
done

break quits the loop, but only runs when docker push succeeds. If docker push fails, it will exit with an error status and the loop will continue.
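The same retry pattern can be tested without docker by substituting a hypothetical flaky command that fails twice and then succeeds:

```shell
# Hypothetical flaky command: fails twice, succeeds on the third call.
attempts=0
flaky_push() {
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]
}

for n in 1 2 3; do
    echo "attempt $n"
    flaky_push && break
done
echo "total attempts: $attempts"
```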
How to loop for 3 times in bash script when docker push fails?
1,478,542,920,000
I know that the SHELL allows variable assignment to take place immediately before a command, such that IFS=":" read a b c d <<< "$here_string" works... What I was wondering is do such assignments not work when done with compound statements such as loops? I tried something like IFS=":" for i in $PATH; do echo $i; done but it results in a syntax error. I could always do something like oldIFS="$IFS"; IFS=":"; for....; IFS="$oldIFS", but I wanted to know if there was any way I could make such inline assignments work for compound statements like for loops?
for is a reserved word and as such follows special rules:

The following words shall be recognized as reserved words: ! { } case do done elif else esac fi for if in then until while

This recognition shall only occur when none of the characters is quoted and when the word is used as:

- The first word of a command
- The first word following one of the reserved words other than case, for, or in
- The third word in a case command (only in is valid in this case)
- The third word in a for command (only in and do are valid in this case)

If you try

IFS=":" for i in $PATH; do echo $i; done

then by the rules above that is not a for loop, as the keyword is not the first word of the command. But you can get the desired output with

tr ':' '\n' <<< "$PATH"               # Bash, Ksh, Zsh
printf "%s\n" "$PATH" | tr ':' '\n'   # Any standard shell

where tr replaces each : by a newline. You may be familiar with this valid approach:

while IFS= read -r line; do

while is the first word of the command, and the IFS assignment applies to read, so all is OK.
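One other way to keep the IFS change local to the loop, not mentioned above: run the loop in a subshell, so the parent shell's IFS is untouched when it finishes. A sketch with a stand-in for $PATH:

```shell
path_sample='/usr/bin:/bin:/usr/local/bin'   # stand-in for $PATH

(
    IFS=:
    for dir in $path_sample; do
        echo "$dir"
    done
)
# IFS in the parent shell is unchanged here.
```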
changing IFS temporarily before a for loop [duplicate]
1,478,542,920,000
Is this the correct way to start multiple sequential processings in the background? for i in {1..10}; do for j in {1..10}; do run_command $i $j; done & done; All j should be processed after each other for a given i, but all i should be processed simultaneously.
The outer loop that you have is basically for i in {1..10}; do some_compound_command & done This would start ten concurrent instances of some_compound_command in the background. They will be started as fast as possible, but not quite "all at the same time" (i.e. if some_compound_command takes very little time, then the first may well finish before the last one starts). The fact that some_compound_command happens to be a loop is not important. This means that the code that you show is correct in that iterations of the inner j-loop will be running sequentially, but all instances of the inner loop (one per iteration of the outer i-loop) would be started concurrently. The only thing to keep in mind is that each background job will be running in a subshell. This means that changes made to the environment (e.g. modifications to values of shell variables, changes of current working directory with cd, etc.) in one instance of the inner loop will not be visible outside of that particular background job. What you may want to add is a wait statement after your loop, just to wait for all background jobs to actually finish, at least before the script terminates: for i in {1..10}; do for j in {1..10}; do run_command "$i" "$j" done & done wait
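A toy version of the pattern that can actually be run: each background job (one per outer iteration) writes its inner iterations sequentially to its own file, and wait blocks until every job has finished:

```shell
cd "$(mktemp -d)"
for i in 1 2; do
    for j in 1 2 3; do
        echo "$i-$j" >> "out$i"   # inner iterations stay sequential
    done &                        # each inner loop runs in the background
done
wait                              # block until both background jobs finish
cat out1
```

out1 always contains 1-1, 1-2, 1-3 in order, even though the two jobs ran concurrently.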
Bash: Multiple for loops in Background
1,478,542,920,000
In my directory I have two files with space, foo bar and another file. I also have two files without space, file1 and file2. The following script works:

for f in foo\ bar another\ file; do file "$f"; done

This script also works:

for f in 'foo bar' 'another file'; do file "$f"; done

But the following script doesn't work:

files="foo\ bar another\ file"
for f in $files; do file "$f"; done

Not even this script works:

files="'foo bar' 'another file'"
for f in $files; do file "$f"; done

But, if the files do not contain space, the script works:

files="file1 file2"
for f in $files; do file "$f"; done

Thanks!

Edit: Code snippet of my script:

while getopts "i:a:c:d:f:g:h" arg; do
    case $arg in
        i) files=$OPTARG;;
        # ...
    esac
done

for f in $files; do file "$f"; done

With files without spaces, my script works. But I would like to run the script passing files with spaces as argument in one of these ways:

./script.sh -i "foo\ bar another\ file"
./script.sh -i foo\ bar another\ file
./script.sh -i "'foo bar' 'another file'"
./script.sh -i 'foo bar' 'another file'
For your command line parsing, arrange for the pathname operands to always be the last ones on the command line:

./myscript -a -b -c -- 'foo bar' 'another file' file[12]

The parsing of the options would look something like

a_opt=false b_opt=false c_opt=false

while getopts abc opt; do
    case $opt in
        a) a_opt=true ;;
        b) b_opt=true ;;
        c) c_opt=true ;;
        *) echo error >&2; exit 1
    esac
done
shift "$(( OPTIND - 1 ))"

for pathname do
    # process pathname operand "$pathname" here
done

The shift will make sure to shift off the handled options so that the pathname operands are the only things left in the list of positional parameters. If that's not possible, allow the -i option to be specified multiple times and collect the given arguments in an array each time you come across it in the loop:

pathnames=()
a_opt=false b_opt=false c_opt=false

while getopts abci: opt; do
    case $opt in
        a) a_opt=true ;;
        b) b_opt=true ;;
        c) c_opt=true ;;
        i) pathnames+=( "$OPTARG" ) ;;
        *) echo error >&2; exit 1
    esac
done
shift "$(( OPTIND - 1 ))"

for pathname in "${pathnames[@]}"; do
    # process pathname argument "$pathname" here
done

This would be called as

./myscript -a -b -c -i 'foo bar' -i 'another file' -i file1 -i file2
Bash for loop with string var containing spaces
1,478,542,920,000
I have a script something like this:

for chain in http https ssh
do
    iptables -nvxL $chain | tail -1 | awk '{print $2}'
done

But what I actually want to do is capture the output of the iptables command for each iteration into a different variable, whose name should be equal to the current value of chain. So for the first loop iteration (where $chain is http) I want this to happen:

http=$(iptables -nvxL http | tail -1 | awk '{print $2}')

Then for the next I want this:

https=$(iptables -nvxL https | tail -1 | awk '{print $2}')

Hopefully you get the idea; not sure how to do this.
In your case, I would use an associative array for that:

declare -A rules

for chain in http https ssh
do
    rules[$chain]=$(iptables -nvxL $chain | tail -1 | awk '{print $2}')
done

You can then access the output by dereferencing, as in

printf -- "%s\n" "${rules['http']}"

or

for chain in http https ssh
do
    printf -- "%s\n" "${rules[$chain]}"
done
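The same pattern can be tried anywhere by swapping the iptables pipeline for a stand-in command (associative arrays need bash 4+):

```shell
declare -A rules
for chain in http https ssh; do
    # Stand-in for: iptables -nvxL $chain | tail -1 | awk '{print $2}'
    rules[$chain]=$(echo "packets-for-$chain")
done

echo "${rules[http]}"
echo "${rules[ssh]}"
```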
How can I assign the output of a command to different variables in each loop iteration?
1,478,542,920,000
In my bash script I try to use a number as an input variable for a for loop. I run the script as

./script.sh InputFolder/ Number_of_iterations

The script should work inside the given folder and run a for loop as many times as the Number_of_iterations variable is set to. But somehow I can't set the variable as an integer. This is an example of the loop in my script:

for i in {1..$(($2))}
do
    echo "Welcome $i times"
done

I have already tried the double brackets $(($...)) option as well as the double quotations "...", but the output I keep getting is

Welcome {1..5} times

which makes me think this is not an integer. I would appreciate any help in reading the input parameter as an integer into the script.
You can do this two ways: With ksh93-compatible shells (ksh93, zsh, bash):

for (( i=1; i<=$2; i++ ))
do
    echo "Welcome $i times"
done

Here we set i to 1 and loop while it is less than or equal to $2, incrementing it on each iteration, outputting:

Welcome 1 times
Welcome 2 times

With POSIX shells on GNU systems:

for i in $(seq "$2")
do
    echo "Welcome $i times"
done

The seq command (GNU-specific) will output numbers from 1 to the number specified in $2 on separate lines. Assuming you've not modified $IFS (which by default contains the line delimiter character), the command substitution will split that into as many elements for for to loop on.
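A self-contained check of the arithmetic form, simulating the script's positional parameters with set -- (the folder argument is just a placeholder):

```shell
set -- InputFolder/ 3        # simulate: ./script.sh InputFolder/ 3

for (( i = 1; i <= $2; i++ )); do
    echo "Welcome $i times"
done
```

Arithmetic context treats the string "3" in $2 as an integer, so no explicit conversion is needed.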
How to convert an input parameter to integer value in a for loop in bash? [duplicate]
1,478,542,920,000
I have this idea of running a bash script to check some conditions and using ffmpeg to convert all the videos in my directory from any format to .mkv, and it is working great! The thing is, I did not know that a for file in loop does not work recursively (https://stackoverflow.com/questions/4638874/how-to-loop-through-a-directory-recursively). But I barely understand "piping" and am looking forward to seeing an example and clearing some uncertainties. I have this scenario in mind that I think would help me a lot to understand. Suppose I have this bash script snippet:

for file in *.mkv *avi *mp4 *flv *ogg *mov; do
    target="${file%.*}.mkv"
    ffmpeg -i "$file" "$target" && rm -rf "$file"
done

What it does is, for the current directory, search for any *.mkv *avi *mp4 *flv *ogg *mov, then declare the output to have its extension be .mkv, then afterwards delete the original file; the output should be saved to the very same folder the original video is in. How can I convert this to run recursively? If I use find, where do I declare the variable $file? And where should I declare $target? Are all find commands really just one-liners? I really need to pass the file to a variable $file, because I will still need to run the condition check. And, assuming that (1) is successful, how do I make sure that the requirement "the output should be saved to the very same folder the original video is in" is satisfied?
You've got this code:

for file in *.mkv *avi *mp4 *flv *ogg *mov; do
    target="${file%.*}.mkv"
    ffmpeg -i "$file" "$target" && rm -rf "$file"
done

which runs in the current directory. To turn it into a recursive process you have a couple of choices. The easiest (IMO) is to use find as you suggested. The syntax for find is very "un-UNIX-like" but the principle here is that each argument can be applied with AND or OR conditions. Here, we're going to say "If this-filename matches OR that-filename matches Then print-it". The filename patterns are quoted so that the shell can't get hold of them (remember that the shell is responsible for expanding all unquoted patterns, so if you had an unquoted pattern of *.mp4 and you had janeeyre.mp4 in your current directory, the shell would replace *.mp4 with the match, and find would see -name janeeyre.mp4 instead of your desired -name *.mp4; it gets worse if *.mp4 matches multiple names...). The brackets are prefixed with \ also to keep the shell from trying to action them as subshell markers (we could quote the brackets instead, if preferred: '(').

find . \( -name '*.mkv' -o -name '*avi' -o -name '*mp4' -o -name '*flv' -o -name '*ogg' -o -name '*mov' \) -print

The output of this needs to be fed into the input of a while loop that processes each file in turn:

while IFS= read file    ## IFS= prevents "read" stripping whitespace
do
    target="${file%.*}.mkv"
    ffmpeg -i "$file" "$target" && rm -rf "$file"
done

Now all that's left is to join the two parts together with a pipe | so that the output of the find becomes the input of the while loop. While you're testing this code I'd recommend you prefix both ffmpeg and rm with echo so you can see what would be executed - and with what paths. Here is the final result, including the echo statements I recommend for testing:

find . \( -name '*.mkv' -o -name '*avi' -o -name '*mp4' -o -name '*flv' -o -name '*ogg' -o -name '*mov' \) -print |
    while IFS= read file    ## IFS= prevents "read" stripping whitespace
    do
        target="${file%.*}.mkv"
        echo ffmpeg -i "$file" "$target" && echo rm -rf "$file"
    done
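A more defensive variant of the same pipeline, not from the answer above: NUL-delimited names (find -print0 with bash's read -d '') survive filenames containing backslashes, leading spaces or even newlines. The file names here are made up, and the conversion is only echoed:

```shell
cd "$(mktemp -d)"
mkdir sub
touch movie.mp4 sub/clip.avi

find . -type f \( -name '*.mp4' -o -name '*.avi' \) -print0 |
while IFS= read -r -d '' file; do
    echo "would convert: $file -> ${file%.*}.mkv"
done | sort          # sort only to make the output order deterministic
```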
Converting `for file in` to `find` so that my script can apply recursively
1,478,542,920,000
I have a difficulty getting over this man bash passage. If the control variable in a for loop has the nameref attribute, the list of words can be a list of shell variables, and a name reference will be established for each word in the list, in turn, when the loop is executed. Array variables cannot be given the -n attribute. However, nameref variables can reference array variables and subscripted array variables. Could you give an example of this nameref variable in a loop with some explanation?
A nameref variable is to a “normal” variable what a symbolic link is to a regular file.

$ typeset -n ref=actual; ref=foo; echo "$actual"
foo

A for loop executes the body with the loop variable (the “control variable”) bound in turn to each word in the list.

$ for x in one two three; do echo "$x"; done
one
two
three

This is equivalent to writing out successive assignments:

x=one; echo "$x"
x=two; echo "$x"
x=three; echo "$x"

If the loop variable is a nameref, then the body is executed with the nameref targeting each element of the word list in turn. This is not equivalent to a series of assignments like above: an assignment ref=value where ref is a nameref would affect the variable that ref is pointing to, but the for loop changes where the nameref points rather than following the reference to change the variable that it points to.

$ original=0; one=1; two=2; three=3
$ typeset -n ref=original
$ echo $ref
0
$ for ref in one two three; do echo "$ref"; done
1
2
3
$ echo "$original"
0

The indirection can be observed through assignments as well, if you assign to the loop variable (which is uncommon, but permitted).

$ one=1; two=2; three=3
$ typeset -n ref
$ for ref in one two three; do ref=$((ref+10)); done
$ echo "$one $two $three"
11 12 13

The last sentence explains that the target of a nameref can be an array. The nameref itself isn't an array, it's still a scalar variable, but when it's used in an assignment or in a dereference, it acts as the same type as the variable it points to.

$ a=(0 1 2)
$ typeset -n ref=a
$ ref[1]=changed
$ echo "${a[@]}"
0 changed 2
Shell variables in a for loop
1,478,542,920,000
I am dealing with a situation where I need to create a comma-separated list from an array into a heredoc and remove the last comma. I am piping a bash for loop into sed, but it is erasing all the commas instead of just the last one. A simplified example is as follows:

x=$(for i in a b c; do echo "${i}",; done| sed 's/,$//')
echo $x
a b c

Desired output: a, b, c. Any suggestions appreciated.
The issue was that, by default, echo adds a new line each time it is called and sed was operating on those new lines. You couldn't see that because, when bash processes $(...), those new lines are converted to spaces. So, to fix the problem while making the smallest change to your approach: $ x=$(for i in a b c; do echo -n "${i}",; done| sed 's/,$//') ; echo $x a,b,c The option -n tells echo not to add new lines. If you want spaces between the items, they are easily added: $ x=$(for i in a b c; do echo -n "${i}, " ; done| sed 's/, $//') ;echo $x a, b, c
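An alternative that sidesteps sed entirely: print the separator before every element except the first, so there is never a trailing comma to strip.

```shell
sep=''
for i in a b c; do
    printf '%s%s' "$sep" "$i"
    sep=', '          # from the second element on, prefix with ", "
done
echo                  # final newline
```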
Remove last comma from a bash for loop generated string
1,478,542,920,000
for i in {0..999999999}; do echo "hi"; done takes a long time to write its output, as if all the echos are first executed and then sent to stdout. The effect is even more pronounced with cowfortune instead of echo and makes terminal buffering less likely to be the issue. So what precisely happens when I execute the above command, step by step? And why is there a delay?
Ordinary for loops always iterate over a static set of strings. This is regardless of whether the strings are generated by brace expansions or by filename globbing patterns, or some other expansion of a variable or command substitution, etc. For your for loop, you generate the strings to loop over using a brace expansion. That brace expansion has to be expanded before the first iteration of the loop can run. Since you generate such a huge list of words (each one of the one billion whole numbers in the range 0 to 999999999) this would likely take some time (and probably more than 8 gigabytes of RAM). If you really need to do that sort of iteration in bash, use an arithmetic for loop instead: for (( i=0; i <= 999999999; ++i )); do echo hi; done ... or solve the particular issue in some other way yes hi | head -n 1000000000 ... or consider using almost any other language for your task at hand, as shell scripting languages are rarely efficient for this sort of task.
How do bash loops work precisely?
1,478,542,920,000
This has probably been asked but the existing answers don't make sense to me. I am running a loop over multiple CSV files in a folder. This folder has non-csv files also and hence I must specify the csv files explicitly. I am modifying the csv files in several ways using awk, cut etc. After I am done, I want to redirect the output to new csv files with slightly modified names as follows: TLC_2017.csv > TLC_2017_prepped.csv Here's a MWE: for file in TLC*.csv; do cut -d, -f2- ${file} > ${file}_prepped done The problem is that I am getting new files with names such as TLC_2017.csv_prepped. In other words, the suffix is being added to the file extension and not the name. How do I add the suffix to the filename and not the extension? If an answer to an existing question solves my problem, kindly don't just link to it but also provide a little bit of explanation so that I can figure it out. Thanks!
for file in TLC*.csv; do
    cut -d, -f2- "${file}" > "${file%.*}_prepped.csv"
done

${file%.*} removes everything after the last dot of $file. If you want to remove everything after the first dot, then you would use %%. Likewise, ${file#*.} (${file##*.}) removes everything before the first (last) dot. Read more in Parameter Expansion, Bash manual. And remember, always quote your variables. You can use shellcheck to help you debug your scripts. It would warn you about unquoted variables. If looping over files with different extensions, the target file extension cannot be hardcoded as done above, so ${file##*.} is needed. As a minimal example, which you can try in an empty test directory, this makes a _prepped copy of every file with an extension:

touch A.file.txt B.file.dat C_file.csv D_file.ko E
for file in *.*; do
    noext=${file%.*}
    ext=${file##*.}
    cp "$file" "${noext}_prepped.${ext}"
done

After execution:

$ ls
A.file_prepped.txt  A.file.txt  B.file.dat  B.file_prepped.dat  C_file.csv  C_file_prepped.csv  D_file.ko  D_file_prepped.ko  E
Adding suffix to filename during for loop in bash
1,478,542,920,000
Hello everyone. I have a script that calls other scripts with a while loop, and I need to know how to calculate the time taken (elapsed time) by each step, like:

Starting NodeManager...
NodeManager Started
Elapsed time: 00:00:10

Starting AdminServer...
AdminServer Started
Elapsed time: 00:01:10

Here is the script:

#!/bin/bash
set -e
clear
TFILE=starting.log
#--------------------------------------------------------------------------------
Check_Status_NM () {
    tail -F ${TFILE} | while read LOGLINE
    do
        if [[ "${LOGLINE}" == *"Secure socket listener started on port"* ]]
        then
            pkill -P $$ tail
            break
        elif [[ "${LOGLINE}" == *"Address already in use"* ]]; then
            pkill -P $$ tail
            echo -e "Cannot Start Server\nSee starting.log for more info "
            exit 1
        fi
    done
}
#----------------------------------------------------------------
Check_Status () {
    tail -F ${TFILE} | while read LOGLINE
    do
        if [[ "${LOGLINE}" == *"The Network Adapter could not establish the connection"* ]] ; then
            echo -e "\e[5m\e[93mWARNING\e[0m Could not establish the connection\n\e[91mCheck Connection to Database\e[0m\n"
        elif [[ "${LOGLINE}" == *"<Server started in RUNNING mode>"* ]]
        then
            pkill -P $$ tail && cat /dev/null > ${TFILE}
            sleep 1
            break
        elif [[ "${LOGLINE}" == *"<Server state changed to FORCE_SHUTTING_DOWN>"* ]] || [[ "${LOGLINE}" == *"Address already in use"* ]]; then
            pkill -P $$ tail
            echo -e "\e[91mCannot Start Server\e[0m\nSee starting.log for more info "
            exit 1
        fi
    done
}

export JAVA_OPTIONS="-Dweblogic.management.username=weblogic -Dweblogic.management.password=oracle11g"
#-------Start NodeManager-------------------------------------------------------
echo -e "Starting NodeManager..."
nohup "$WLS_HOME"/server/bin/startNodeManager.sh > ${TFILE} 2>&1 &
Check_Status_NM
echo -e "NodeManager \e[92mStarted\e[0m\n"
#--------------------------Start WebLogic Domain------------------------------------------------
echo -e "Starting AdminServer..."
nohup "$DOMAIN_HOME"/bin/setDomainEnv.sh > ${TFILE} 2>&1 &
nohup "$DOMAIN_HOME"/bin/startWebLogic.sh > ${TFILE} 2>&1 &
Check_Status
echo -e "AdminServer \e[92mStarted\e[0m\n"
#----------- Start FORMS------------------------------
echo "Starting Forms Server..."
nohup "$DOMAIN_HOME"/bin/startManagedWebLogic.sh WLS_FORMS t3://$(hostname):7001 > ${TFILE} 2>&1 &
Check_Status
echo "Forms Server \e[92mStarted\e[0m\n"
#----------- Start Reports------------------------------
echo -e "Starting Reports Server..."
nohup "$DOMAIN_HOME"/bin/startManagedWebLogic.sh WLS_REPORTS t3://$(hostname):7001 > ${TFILE} 2>&1 &
Check_Status
echo -e "Reports Server \e[92mStarted\e[0m\n"
#---------------------Start anything remaining using OPMN------------------------
opmnctl startall ; opmnctl status ; emctl start agent
Bash has a built-in "timer". Set the SECONDS variable to 0 (zero) when you want to start timing, and read its value to get the number of seconds elapsed since it was last reset.
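A minimal sketch of how SECONDS can wrap each start step — here a sleep stands in for "start a server and wait for it", and the HH:MM:SS formatting is just one way to print it:

```shell
#!/bin/bash
# Sketch: reset SECONDS before the step, read it afterwards.
SECONDS=0
sleep 1                               # stands in for starting a server and waiting
elapsed=$SECONDS
printf 'Elapsed time: %02d:%02d:%02d\n' \
    $((elapsed / 3600)) $((elapsed % 3600 / 60)) $((elapsed % 60))
```

In your script you would put SECONDS=0 just before each nohup/Check_Status pair and print $SECONDS right after it.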
how to calculate time taken (elapsed time) by loop
1,478,542,920,000
I have a file with a lot of lines like this /item/pubDate=Sun, 23 Feb 2014 00:55:04 +010 If I execute this echo "/item/pubDate=Sun, 23 Feb 2014 00:55:04 +010" | grep -Po "(?<=\=).*" Sun, 23 Feb 2014 00:55:04 +010 I get the correct date (all in one line). Now I want to try this with a lot of dates in an XML file. I use this and it's ok. xml2 < date_list | egrep "pubDate" | grep -Po "(?<=\=).*" Fri, 22 Jan 2016 17:56:29 +0100 Sun, 13 Dec 2015 18:33:02 +0100 Wed, 18 Nov 2015 15:27:43 +0100 ... But now I want to use the date in a bash program and I get this output for fecha in $(xml2 < podcast | egrep "pubDate" | grep -Po "(?<=\=).*"); do echo $fecha; done Fri, 22 Jan 2016 17:56:29 +0100 Sun, 13 Dec 2015 18:33:02 +0100 Wed, 18 Nov 2015 15:27:43 +0100 I want the date output on one line (in the variable fecha) like in the first and second examples, but I don't know how to do it.
Do it this way instead: while IFS= read -r fecha; do echo "$fecha"; done < <(xml2 < podcast | egrep "pubDate" | grep -Po "(?<=\=).*") Bash will separate "words" to loop through by characters in the Internal Field Separator ($IFS). You can temporarily disable this behavior by setting IFS to nothing for the duration of the read command. The pattern above will always loop line-by-line. <(command) makes the output of a command look like a real file, which we then redirect into our read loop. $ while IFS= read -r line; do echo "$line"; done < <(cat ./test.input) Fri, 22 Jan 2016 17:56:29 +0100 Sun, 13 Dec 2015 18:33:02 +0100 Wed, 18 Nov 2015 15:27:43 +0100
for loop over input lines
1,478,542,920,000
I want a for loop analog for Vifm. When I don't select any file, I can type :!echo %f and I see the output of echo with the current file name as the argument. When I select several files, :!echo %f yields output of echo with all selected filenames joined with spaces as the argument. What if I want to apply any program (e.g. echo) to each selected file? So echo file1 echo file2 echo file3 ... instead of echo file1 file2 file3 What options do I have? P.S.: I want the Vifm analog of the following Bash code: for f in file1 file2 file3; do echo $f done
This will produce the desired output: :!echo %f | xargs -n 1 echo Of course you could define a command (e.g. for) for convenient usage: :com for echo %f | xargs -n 1 Then you can just type: :for echo
Vifm: run command on each selected file individually
1,478,542,920,000
I want to keep all files not ending with .bat I tried for f in $(ls | egrep -v .bat); do echo $f; done and for f in $(eval ls | egrep -v .bat); do echo $f; done But both approaches yield the same result, as they print everything. Whereas ls | egrep -v .bat and eval ls | egrep -v .bat work per se, if used apart from the for loop. EDIT It's interesting to see that if I leave out the -v flag, the loop does what it should and lists all files ending with .bat. Feel free to edit the question title, as I was not sure what the problem is. I'm using GNU bash, version 4.1.10(4)-release (i686-pc-cygwin). EXAMPLE $ ls -l | egrep -v ".bat" total 60K -rwx------+ 1 SYSTEM SYSTEM 5.3K Jun 6 20:31 fsc* -rwx------+ 1 SYSTEM SYSTEM 5.3K Jun 6 20:31 scala* -rwx------+ 1 SYSTEM SYSTEM 5.3K Jun 6 20:31 scalac* -rwx------+ 1 SYSTEM SYSTEM 5.3K Jun 6 20:31 scaladoc* -rwx------+ 1 SYSTEM SYSTEM 5.3K Jun 6 20:31 scalap* Command is working, but not in the for loop. $ for f in $(ls | egrep -v .bat); do echo $f; done fsc fsc.bat scala scala.bat scalac scalac.bat scaladoc scaladoc.bat scalap scalap.bat scalac scalac.bat scaladoc scaladoc.bat scalap scalap.bat DEBUG $ set -x mike@pc /cygdrive/c/Program Files (x86)/scala/bin $ for f in $(ls | egrep -v .bat); do echo $f; done ++ ls -hF --color=tty ++ egrep --color=auto -v .bat + for f in '$(ls | egrep -v .bat)' + echo fsc fsc + for f in '$(ls | egrep -v .bat)' + echo fsc.bat fsc.bat // and so on
A few things wrong in your code: Using unquoted command substitution ($(...)) without setting $IFS Leaving expansions unquoted is the split+glob operator. The default is to split on space, tab and newline. Here, you only want to split on newline, so you need to set IFS to that as otherwise that means that will not work properly if filenames contain space or tab characters Using unquoted command substitution without set -f. Leaving expansions unquoted is the split+glob operator. Here you don't want globbing, that is the expansion of wildcards such as scala* into the list of matching files. When you do not want the shell to do globbing, you have to disable it with set -f ls aliased to ls -F The issue above is aggravated by the fact that you have ls aliased to ls -F. Which adds / to directories and * to executable files. So, typically, because scala is executable, ls -F outputs scala*, and as a globbing pattern, it is expanded to all the filenames that start with scala which explains why it seems like egrep -v is not filtering files out. Assuming filenames don't contain newline characters newline is as valid a character as any in a filename. So parsing the output of ls typically doesn't work. As for instance the output of ls in a directory that contains a and b files is the same as in a directory that contains one file called a\nb. Above egrep will filter the lines of the filenames, not the filenames Using egrep instead of grep -E egrep is deprecated. grep -E is the standard equivalent. Not escaping the . regex operator. Above, you used egrep to enable extended regular expressions, but you don't use any of the extended RE specific operator. The only RE operator you're using is . to match any character, while it looks like that's not what you intended. So you might as well have used grep -F here. Or use grep -v '\.bat'. 
Not anchoring the regexp on end-of-line egrep .bat will match any line that contains any character followed by bat, so it matches anything that contains bat anywhere except at the very start. It should have been grep -v '\.bat$'. Leaving $f unquoted Leaving an expansion unquoted is the split+glob operator. There, you want neither, so $f should be quoted ("$f"). Use echo echo expands the ANSI C escape sequences in its arguments and/or treats strings like -n or -e specially depending on the echo implementation (and/or the environment). Use printf instead. So a better solution: for f in *; do case $f in (*.bat) ;; (*) printf '%s\n' "$f";; esac; done Though if there's no non-hidden file in the current directory, that will still output *. You can work around that in zsh by changing * to *(N) or in bash by running shopt -s nullglob.
Command substitution in for loop not working
1,478,542,920,000
How do you iterate through a loop n amount of times when n is specified by the user at the beginning? I have written a shell script and need to repeat a certain part of it n numbers of times (depending upon how many times the user wishes). My script so far looks like this: echo "how many times would you like to print Hello World?" read num for i in {1.."$num"} do echo "Hello World" done If I change "num" to a number such as "5" the loop works however I need to be able to let the user specify the amount of times to iterate through the loop.
You can use seq: for i in $(seq 1 "$num") or your shell may support C-style loops, e.g. in bash: for ((i=0; i<$num; i++))
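As a runnable sketch combining both forms (num is hardcoded here in place of the read num from the question, and the loop bodies count iterations in place of the echo):

```shell
#!/bin/bash
num=5                      # in the real script this comes from: read num

# seq-based form
count=0
for i in $(seq 1 "$num"); do
    count=$((count + 1))   # stands in for: echo "Hello World"
done

# C-style form (bash, ksh, zsh)
count2=0
for ((i = 1; i <= num; i++)); do
    count2=$((count2 + 1))
done
```

Both loops run exactly $num times, whatever the user typed.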
How do you create a for loop with a changeable number of iterations?
1,478,542,920,000
I found a for loop example online. Now I want to use it in my code, but I am not sure how this loop operates: for entry in "$search_dir"/*; do echo "$entry"; done Does it look in search_dir on each iteration, assigning the files in search_dir to the entry variable one file at a time? Or does it take a snapshot of all the contents of search_dir and then iterate over that snapshot with the entry variable? Is there any change in the output if someone inserts a file into search_dir while the loop is still running?
When the shell gets to the for-statement, it will expand the value of $search_dir and perform the file name globbing to generate a list of directory entries that will be iterated over. This happens only once, and if the things in $search_dir disappears or if there are new files/directories added to that directory while the loop is executing, these changes will not be picked up. If the loop operates on the directory entries whose names are in $entry, one might want to test for their existence in the loop, especially if the loop is known to take a long time to run and there are lots of files that are in constant flux for one reason or another: for entry in "$search_dir"/*; do if [ -e "$entry" ]; then # operate on "$entry" else # handle the case that "$entry" went away fi done As Stéphane rightly points out in comments, this is a superfluous test in most cases.
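The one-time expansion is easy to demonstrate in a scratch directory: a file created while the loop runs never shows up in that loop's list:

```shell
#!/bin/bash
search_dir=$(mktemp -d)              # scratch directory for the demo
touch "$search_dir/a" "$search_dir/b"

seen=0
for entry in "$search_dir"/*; do
    seen=$((seen + 1))
    touch "$search_dir/c"            # created mid-loop; the glob was already expanded
done

echo "$seen"                         # prints 2 -- the loop never sees "c"
rm -rf "$search_dir"
```

A fresh loop started afterwards would of course see all three files, because its glob is expanded anew.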
What happens when files are added/removed in the middle of a "for f in *" sh loop?
1,478,542,920,000
In the idiom for i in $directories; do ... done is the variable $i local or global? And what if there happens to be a global variable of the same name? Does bash work with the global variable or the one from the for ... in ... header?
for doesn’t introduce its own variable scope, so i is whatever it is on entry to the for loop. This could be global, or local to whatever function declared it as local, or even global but in a sub-shell. On exit from the for loop, the variable will have the last value it had in the loop, unless it ended up in a sub-shell. How much that affects depends on the variable’s scope, so it is a good idea to declare loop variables as local inside functions (unless the side-effect is desired).
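A quick demonstration of both points: the loop reuses whatever i already is, and local confines a function's loop variable:

```shell
#!/bin/bash
i="global"
for i in 1 2 3; do :; done
echo "$i"        # prints 3 -- the loop overwrote the existing variable

walk() {
    local i      # without this, the function's loop would clobber the global i
    for i in a b c; do :; done
}
walk
echo "$i"        # still prints 3 -- the function used its own local i
```

Remove the local line and the second echo would print c instead.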
In for loops in bash, is the counter variable local or global?
1,478,542,920,000
The following bash script #!/bin/bash startNumber=$(( 1 )) endNumber=$(( $startNumber + 3 )) #for number in {$startNumber..$endNumber} for number in {1..4} do echo $number done exit 0 gives the desired output 1 2 3 4 However, when I switch the uncommented and commented for loop, the output is {1..4} What am I doing wrong?
Variables won't expand inside brace expansion. You could do: for ((number=startNumber; number<=endNumber; number++)); do echo "$number"; done Also, there is no reason to use arithmetic expansion for startNumber; you should simply do: startNumber=1. Additionally, you don't need to use $ to expand variables inside arithmetic expansion, so endNumber could be: endNumber=$((startNumber+3))
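The ordering is easy to see interactively: brace expansion runs before parameter expansion, so the braces stay literal when a variable is inside them, while seq receives its argument after normal expansion:

```shell
#!/bin/bash
endNumber=4
echo {1..4}              # literal range expands: 1 2 3 4
echo {1..$endNumber}     # brace expansion runs first and fails, leaving: {1..4}
seq 1 "$endNumber"       # seq sees "4" after expansion, so this counts 1 to 4
```

This is why the commented-out loop in the question printed the literal string {1..4}.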
Bash script - variables in curly braces [duplicate]
1,478,542,920,000
Consider we have many photos with names like DSC_20170506_170809.JPEG. To rename the photos so that they follow the pattern Paris_20170506_170809.JPEG, I've written the following script, which works perfectly. for file in *.JPEG; do mv ${file} ${file/DSC/Paris}; done My question is, how can we write this script using a while loop instead of a for loop?
There's nothing wrong with using a while loop here. You just have to do it right: set -- *.jpeg; while (($#)); do mv -- "${1}" "${1/DSC/Paris}"; shift; done The while loop above is just as reliable as the for loop (it will work with any file names) and while the latter is - in many instances - the most appropriate tool to use, the former is a valid alternative1 that has its uses (e.g. the above could process three files at a time or process only a certain number of arguments etc). All these commands (set, while..do..done and shift) are documented in the shell manual and their names are self-explanatory... set -- *.jpeg # set the positional arguments, i.e. whatever that *.jpeg glob expands to while (($#)); do # execute the 'do...' as long as the 'while condition' returns a zero exit status # the condition here being (($#)) which is arithmetic evaluation - the return # status is 0 if the arithmetic value of the expression is non-zero; since $# # holds the number of positional parameters then 'while (($#)); do' means run the # commands as long as there are positional parameters (i.e. file names) mv -- "${1}" "${1/DSC/Paris}" # this renames the current file in the list shift # this actually takes a parameter - if it's missing it defaults to '1' so it's # the same as writing 'shift 1' - it effectively removes $1 (the first positional # argument) from the list so $2 becomes $1, $3 becomes $2 and so on... done 1: It's not an alternative to text-processing tools so NEVER use a while loop to process text.
Rename files using a WHILE loop instead of a FOR loop
1,478,542,920,000
I want to run YII_ENV=prod yii kw/test ten times. I tried $ YII_ENV=prod for x in 1..10 do; yii kw/test done; -bash: for: command not found 1304682651 (Seemed to run once.) I also tried $ for x in {1..10} do; YII_ENV=prod yii kw/test done; -bash: syntax error near unexpected token `YII_ENV=prod' GNU bash, version 4.3.39(2)-release (i686-pc-cygwin)
First, correct the syntax of your command by placing the semicolons correctly. Instead of: for x in 1..10 do; yii kw/test done; Use (also adding the correct brace expansion): for x in {1..10}; do yii kw/test; done Then, add the variable: for x in {1..10}; do YII_ENV=prod yii kw/test; done
How do I use a temporary environment variable in a bash for loop?
1,478,542,920,000
I'm having trouble understanding what I need to escape when using sh -c. Let's say I want to run the for loop for i in {1..4}; do echo $i; done. By itself, this works fine. If I pass it to eval, I need to escape the $: eval "for i in {1..4}; do echo \$i; done", but I cannot make it work for sh -c "[...]": $ sh -c "for i in {1..4}; do echo $i; done" 4 $ sh -c "for i in {1..4}; do echo \$i; done" {1..4} $ sh -c "for i in \{1..4\}; do echo \$i; done" {1..4} $ sh -c "for i in \{1..4\}\; do echo \$i\; done" sh: 1: Syntax error: end of file unexpected Where can I find more information about this?
The usual wisdom is to define the script (after the -c) inside single quotes. The other part you need is a shell where the {1..4} construct is valid: $ bash -c 'for i in {1..4}; do echo $i; done' # also works with ksh and zsh One alternative to get it working with dash (your sh) is to perform the expansion in the shell you are using interactively (I am assuming that you use bash or zsh as your interactive shell): $ dash -c 'for i do echo $i; done' mysh {1..4} 1 2 3 4
How to run a loop inside sh -c
1,478,542,920,000
#! /bin/bash for (( l = 1 ; l <= 50; ++l )) ; do for (( k = 1 ; k <= 1000; ++k )) ; do sed -n '$l,$lp' $k.dat >> ~/Escritorio/$l.txt done done The script is located in a folder together with 1000 dat files, each one having 50 lines of text. The dat files are called 1.dat, 2.dat, ..., 1000.dat. My purpose is to make files l.txt, where l.txt has line l of 1.dat, line l of 2.dat, etc. For that, I use the sed command to select line l of each dat file. But when I run the above script, the txt files created have nothing inside... Where is the mistake?
for LINE in {1..50}; do for FILE in {1..1000}; do sed -n "${LINE}p" "${FILE}.dat" >> "$HOME/Escritorio/${LINE}.dat"; done; done In your script you are using single quotes for the sed expression; variables don't expand inside single quotes, you need to use double quotes. Note that ~ also does not expand inside quotes, which is why $HOME is used above. Also there is a one-liner with awk that can do the same: awk 'FNR<=50 {filename=sprintf("results/%d.dat", FNR); print >> filename; close(filename)}' *.dat Just create the results directory, or change it in the command to another one; ~ does not expand to home there.
For loop inside another doesn't work
1,478,542,920,000
I have a user account on a Linux machine whose exact IP address I do not know, but I know a range of addresses that it falls within. I want to check which server is my desired server. There are some Microsoft servers, some Linux servers and some servers that are down as well. I have a shell script to check each server: #!/bin/sh for i in $(seq 1 127) do echo "192.168.1.$i:" ssh 192.168.1.$i -l user_name done This code goes to each IP. If it ran ssh, it prompts for a password; if ssh did not run, it tries the next IP; and if the server is down, it waits for a long time. Also, if the server has ssh and prompts for a password, I cannot escape from it with the keyboard. How can I escape from these two situations with the keyboard and go to the next IP without terminating the program? For example, CTRL + C terminates the program.
Ctrl-C sends the SIGINT signal to all the processes of the foreground job of your interactive shell. So that sends it to the sh running the script and ssh. You can make it not kill the shell by adding a: trap : INT to the beginning of your script. You may also want to use the ConnectTimeout option of ssh: ssh -o ConnectTimeout=2 ... Note that you're giving away your password to all those machines you're trying to connect to. Not a good idea if you don't trust their administrators.
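Put together, a sketch of the scan loop — the ssh call is replaced here by a no-op placeholder so the sketch runs anywhere; in the real script it would be ssh -o ConnectTimeout=2 "192.168.1.$i" -l user_name:

```shell
#!/bin/sh
scan() {
    trap : INT    # Ctrl-C now only kills the foreground command, not the loop
    for i in $(seq 1 3); do
        echo "192.168.1.$i:"
        true      # placeholder for: ssh -o ConnectTimeout=2 192.168.1.$i -l user_name
    done
}
scan
```

With the trap in place, Ctrl-C interrupts the current ssh (or a password prompt) and the loop moves on to the next address.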
Continue for loop by keyboard
1,515,439,233,000
I was trying to parse some nginx config λ tree sites-enabled/ sites-available/ sites-enabled/ ├── bank.cwrcoding.com.conf ├── calendar.cwrcoding.com.conf ├── cloud.cwrcoding.com.conf ├── cwrcoding.com.conf ├── drive.cwrcoding.com.conf ├── groups.cwrcoding.com.conf ├── mail.cwrcoding.com.conf ├── sites.cwrcoding.com.conf ├── studentenverwaltung.cwrcoding.com.conf ├── wekan.cwrcoding.com.conf └── www.cwrcoding.com.conf sites-available/ ├── bank.cwrcoding.com.conf ├── calendar.cwrcoding.com.conf ├── cloud.cwrcoding.com.conf ├── cwrcoding.com.conf ├── drive.cwrcoding.com.conf ├── groups.cwrcoding.com.conf ├── mail.cwrcoding.com.conf ├── sites.cwrcoding.com.conf ├── studentenverwaltung.cwrcoding.com.conf ├── wekan.cwrcoding.com.conf └── www.cwrcoding.com.conf The sites-enabled/* files each contain a single line: include sites-availabe/cwrcoding.com.conf; When trying to iterate over the sites-enabled/* files, cutting those and trying to read their contents as files, I got some weird error, so I tried a minimalistic working solution, and working my way up from there, but yet the following happens: λ for enabled in sites-enabled/* > do > echo "$(cat "$enabled") |" > echo ========== > done include sites-available/bank.cwrcoding.com.conf; | ========== |clude sites-available/calendar.cwrcoding.com.conf; ========== |clude sites-available/cloud.cwrcoding.com.conf; ========== |clude sites-available/cwrcoding.com.conf; ========== |clude sites-available/drive.cwrcoding.com.conf; ========== |clude sites-available/groups.cwrcoding.com.conf; ========== |clude sites-available/mail.cwrcoding.com.conf; ========== |clude sites-available/sites.cwrcoding.com.conf; ========== |clude sites-available/studentenverwaltung.cwrcoding.com.conf; ========== include sites-available/wekan.cwrcoding.com.conf; | ========== |clude sites-available/www.cwrcoding.com.conf; ========== As you can see, for most of the sites the first characters of the cat output are replaced by the text supposed to be after 
the command substitution. Can anyone explain what is happening? Or have I found some bug? If you want to take a look at the files: https://github.com/cwrau/nginx-config
The problem is that your files have DOS/Windows-style line-endings. As a quick work-around, replace: echo "$(cat "$enabled") |" With: echo "$(tr -d '\r' <"$enabled") |" Here, tr removes the carriage-return character before the file is displayed, avoiding the problem. If your files are intended to be used on a Unix system, however, you would be better off removing the carriage-returns from the files themselves using one the dos2unix or similar utilities. Example Let's create two DOS-style files: $ echo 'include sites-availabe/cwrcoding.com.conf;' | unix2dos > sites-enabled/file1 $ echo 'include sites-availabe/cwrcoding.com.conf;' | unix2dos > sites-enabled/file2 Let's run the original command: $ for enabled in sites-enabled/*; do echo "$(cat "$enabled") |"; echo ==========; done |clude sites-availabe/cwrcoding.com.conf; ========== |clude sites-availabe/cwrcoding.com.conf; ========== Note the mangled output. With tr applied, we receive the output that we expect: $ for enabled in sites-enabled/*; do echo "$(tr -d '\r' <"$enabled") |"; echo ==========; done include sites-availabe/cwrcoding.com.conf; | ========== include sites-availabe/cwrcoding.com.conf; | ==========
Why does bash replace text from command substitution with text thereafter
1,515,439,233,000
I have a list of values, separated by ':' and I want to process them one by one. When the delimiter is space, there are no problems: nuclear@korhal:~$ for a in 720 500 560 130; do echo $a; done 720 500 560 130 But after settings IFS (Internal Field Separator) to : , strange things start to happen: nuclear@korhal:~$ IFS=":" for a in 720:500:560:130; do echo $a; done; bash: syntax error near unexpected token `do' If I skip all semicolons, when IFS is set: nuclear@korhal:~$ IFS=":" for a in 720:500:560:130 do echo $a done; Command 'for' not found, did you mean: command 'vor' from deb vor (0.5.8-1) command 'fop' from deb fop (1:2.5-1) command 'tor' from deb tor (0.4.4.5-1) command 'forw' from deb mailutils-mh (1:3.9-3.2) command 'forw' from deb mmh (0.4-2) command 'forw' from deb nmh (1.7.1-7) command 'sor' from deb pccts (1.33MR33-6build1) command 'form' from deb form (4.2.1+git20200217-1) command 'fox' from deb objcryst-fox (1.9.6.0-2.2) command 'fort' from deb fort-validator (1.4.0-1) command 'oor' from deb openoverlayrouter (1.3.0+ds1-3) Try: sudo apt install <deb name> Bash does not recognize the for command at all. If there was no IFS set in this case, it will show the prompt, because it expects more output (normal behaviour) What is happening when the IFS is set to custom character? Why the for loop does not work with it? I am using Kubuntu 20.10 Bash version 5.0.17
Keywords aren't recognized after an assignment. So, the for in IFS=blah for ... just runs a regular command called for, if you have one: $ cat > ./for #!/bin/sh echo script for $ chmod +x ./for $ PATH=$PATH:. $ for x in a b c > ^C $ foo=bar for x in a b c script for But because Bash parses the whole input line before running it, the keyword do causes a syntax error before that happens. This is similar with redirections in place of the assignment: Can I specify a redirected input before a compound command? And also see Why can't you reverse the order of the input redirection operator for while loops? for the gory details about how the syntax is defined. Also see: How do I use a temporary environment variable in a bash for loop? My Zsh is stricter: $ zsh -c 'foo=bar for x in a b c' zsh:1: parse error near `for' But Zsh does allow redirections there before a compound command. This outputs the three lines to test.txt: $ zsh -c '> test.txt for x in a b c ; do echo $x; done ' Besides, note that IFS won't be used to split a static string like 720:500:560:130, word splitting only works for expansions. So: $ IFS=":" $ for a in 720:500:560:130; do echo "$a"; done; 720:500:560:130 but, $ IFS=":" $ s=720:500:560:130 $ for a in $s; do echo "$a"; done; 720 500 560 130
Weird bash behavior when IFS is set for a for loop
1,515,439,233,000
I am executing the script below: LOGDIR=~/curl_result_$(date |tr ' :' '_') mkdir $LOGDIR for THREADNO in $(seq 20) do for REQNO in $(seq 20) do time curl --verbose -sS http://dummy.restapiexample.com/api/v1/create --trace-ascii ${LOGDIR}/trace_${THREADNO}_${REQNO} -d @- <<EOF >> ${LOGDIR}/response_${THREADNO} 2>&1 {"name":"emp_${THREADNO}_${REQNO}_$(date |tr ' :' '_')","salary":"$(echo $RANDOM%100000|bc)","age":"$(echo $RANDOM%100000|bc)"} EOF echo -e "\n-------------------------------" >> ${LOGDIR}/response_${THREADNO} done 2>&1 | grep real > $LOGDIR/timing_${THREADNO} & done After some time, if I check the number of bash processes, it shows 20 (not 1 or 21): ps|grep bash|wc -l The question is: since I have not used brackets "()" to enclose the inner loop, a new shell process should not be spawned. I want to avoid creating new shells, as the CPU usage nears 100%. I don't know if it matters, but I am using Cygwin.
Because you have piped the loop into grep, it must be run in a subshell. This is mentioned in the Bash manual: Each command in a pipeline is executed in its own subshell, which is a separate process (see Command Execution Environment) It is possible to avoid that with the lastpipe shell option for the final command in the pipeline, but not any of the others. In any case, you've put the whole pipeline into the background, which also creates a subshell. There is no way around this. What you're doing inherently does require separate shell processes in order to work: even ignoring the pipeline, creating a background process requires creating a process. If your issue is the CPU usage, that's caused by running everything at once. If you remove the & after the grep, all the commands will run in sequence instead of simultaneously. There will still be subshells created (for the pipeline), but those are not themselves the main issue in that case. If you need them to run simultaneously, the increased CPU usage is the trade-off you've chosen.
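The lastpipe option mentioned above can be demonstrated: in a non-interactive bash (where job control is off), it makes the last command of a pipeline run in the current shell, so its variable assignments survive the pipeline:

```shell
#!/bin/bash
shopt -s lastpipe    # only takes effect when job control is off (non-interactive)

count=0
printf '%s\n' a b c | while read -r line; do
    count=$((count + 1))
done

echo "$count"        # prints 3; without lastpipe the while runs in a subshell and this prints 0
```

This only helps the final stage of a pipeline; the earlier stages (and anything placed in the background with &) still get their own processes.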
Inner for loop when run in background in bash spawns new bash process
1,515,439,233,000
Let's say I have two bash variables that contain binary values: a=0011 # decimal 3 b=1000 # decimal 8 Is there a way I can loop through all the possible values between $a and $b while keeping them binary? Something like: for blah in $(seq $a $b) ; do print "Blah is: $blah" done So it will output: Blah is: 0011 Blah is: 0100 Blah is: 0101 Blah is: 0110 Blah is: 0111 Blah is: 1000 I have tried: for blah in $(seq "$((2#$a))" "$((2#$b))") ; do But then $blah becomes decimal, and I'd like to keep it binary (I can always transform the decimal back to binary, but that seems a waste, since I already have the extremes in binary). This code must run on a limited Linux (OpenWRT) that doesn't have obase available. If the answer is that it's not possible to keep the binary value, that's a useful answer as well (I can create a function that converts decimal to binary without using obase). Besides, it can be a useful answer to people using regular bash.
seq is not a built-in. It's also not part of the POSIX standard. But the usual implementations of seq don't have any ability to sequence in bases other than 10. In bash, you can specify a range as {start..finish}. However, that also doesn't work in bases other than 10 (although it does work with letters: {a..f} expands to a b c d e f). And as far as I know, that's it for simple sequence generators, which leaves you with a couple of possibilities. The silly way to do it is to filter out the non-binary values. That's simple but hugely inefficient if a and b aren't tiny: for x in $(seq -w $a $b); do if [[ ! ($x =~ [2-9]) ]]; then echo $x; fi; done Here's a better solution. Assuming a and b are the same length (if not, you can use printf to fix that), the following will loop through all binary numbers from a to b, inclusive: # We need a string of 0s at least as long as a: z=${a//1/0} while [[ ! ($a > $b) ]]; do # do something with $a # The following "increments" a by removing the last 0 (and trailing 1s) # and replacing that with a 1 and the same number of 0s. a=$(printf "%.*s" ${#a} ${a%0*}1$z) done
Bash: For loop with binary range keeping control value binary
1,515,439,233,000
I am using the following counter functionality in a script: for ((i=1; i <= 100; i++ )); do printf "\r%s - %s" "$i" $(( 100 - i )); sleep 0.25; done Is there a way I can pause and resume the counter with keyboard input? (preferably with the same key, let's say with space)
Use read with a timeout -t and set a variable based on its output. #!/bin/bash running=true for ((i=1; i <= 100; i++ )); do if [[ "$running" == "true" ]]; then printf "\r%s - %s" "$i" $(( 100 - i )) fi if read -sn 1 -t 0.25; then if [[ "$running" == "true" ]]; then running=false else running=true fi fi done With this, you can press any key to pause or unpause the script. running stores true or false to tell whether we want the loop to do work or not. read -sn 1 -t 0.25 is the key: it reads one character (-n1), suppresses echoing of keypresses (-s) and waits for only 0.25s (-t 0.25). If read times out it returns a non-zero exit status, which we detect with the if, and only if a key was pressed do we toggle the status of running. You can also assign the read char to a variable and check for a specific character to limit it to only one key. if read -sn 1 -t 0.25 key && [[ "$key" = "s" ]] ; then Use "$key" == "" if you want to check for space or enter. Note that one side effect of the read + timeout is that if you hit a key the next loop will execute quicker than normal, which is made more obvious if you hold down a key. An alternative might be to use job flow control. ctrl + s will suspend a terminal and ctrl + q will resume it. This blocks the entire shell and does not work in all shells. You can use ctrl + z to suspend a process, giving you a prompt back, and use fg to resume the process again, which allows you to continue to use the shell.
Bash - pause and resume a script (for-loop) with certain key
1,515,439,233,000
I have a Java program that gets two arguments (a video file name and an image) and outputs a boolean (0 or 1) in the first line: java -jar myProgram video1.mp4 image.png > 0 >some extra information... >other extra information....going on Now using bash script, I need to iterate through all files in a folder (not files in nested folders), run the program with the file name passed to the first argument (video name changes everytime, and image is fixed), and if the output in the first line is 0, copy the file in folder0, and if the output is 1, copy the file to folder1. How can I achieve that in bash?
You have much better control when using conditional statements: for file in *; do if [[ -f "$file" ]]; then output=$(java -jar myProgram "$file" image.png | head -n 1); [[ $output = "0" ]] && cp -- "$file" folder0; [[ $output = "1" ]] && cp -- "$file" folder1; fi; done EDIT: if you still want to see the output of java, you can use this: output=$(java -jar myProgram "$file" image.png | tee /dev/tty | head -n 1)
How can I create a Bash conditional script, based on output from a command?
1,515,439,233,000
I frequently execute programs that take a long time on several remote servers: for NUM in {1..100}; do ssh host-${NUM}.mydomain.com /usr/bin/takesalongtime; done Most of the time I let this run in the background (i.e. in a terminal emulator while doing something else) and wait for it to finish. However, sometimes I need to break the loop and continue or re-run it later. Is there a way to stop such a loop running in an interactive bash after the current iteration, without killing the current ssh or takesalongtime program in the process? I.e. I want to do something so that the loop behaves as if I had inserted a break; between the ssh command and done, so that it breaks the loop after completing takesalongtime on the current remote host.
Find the PID of the shell and send it a SIGINT. This will not stop the ssh, but when it finishes the shell will handle the signal and end the loop. $ for i in {1..100}; do ssh -n localhost 'sleep 9;date'; done In another terminal: $ ps fax 10636 ? S 0:00 | \_ xterm 10638 pts/2 Ss 0:00 | \_ bash 12164 pts/2 S+ 0:00 | \_ ssh -n localhost sleep 9;date $ kill -int 10638 If you have backgrounded the command by adding & to the end, you can use kill -hup on the backgrounded shell.
Break running for-loop after current iteration
1,515,439,233,000
I have shell scripts in my ~/Shell directory that I want to be run whenever Bash is started up as my usual user account. So what I have done is added the following to ~/.bashrc:

    for i in `find ~/Shell/ -name "*.sh"`
    do
        sh $i
    done

but, for whatever reason, the functions contained in files with the .sh extension in my ~/Shell directory are not loaded automatically. For example, I have a function called abash in my ~/Shell/bash.sh file, and running abash from a new user terminal gave an error stating that the command was not found.

I know I can just manually list all the files in my ~/Shell directory with a dot before them to get them executed at Bash startup time. For example, I used to have this in my ~/.bashrc file:

    . ~/Shell/bash.sh
    . ~/Shell/cd.sh
    . ~/Shell/emerge.sh
    ...

and it worked fine, but I would rather have a for loop do this, as it would mean that if I add any new shell scripts to ~/Shell I do not have to worry about adding them to ~/.bashrc.

I have also now tried:

    for i in `find -name "~/Shell/*.sh"`
    do
        sh $i
    done

and:

    for i in "~/Shell/*.sh"
    do
        sh $i
    done

and:

    for i in `find -name '~/Shell/*.sh'`
    do
        sh $i
    done

with no success.
Put this in your .bashrc:

    for rc in ~/Shell/*.sh
    do
        . "$rc"
    done

And you're off to the races! A couple of notes:

The bash (and zsh etc.) source command, while readable, is not universal and does not exist in dash, the most POSIXly correct shell I know. As it stands, this same code can be used to load code into almost any Bourne-shell derivative.

The traditional naming convention for files to be sourced directly into the shell is to use a suffix of rc or .rc (as in .bashrc). rc stands for "run commands". The .sh extension is usually used for executable script programs. (These are only conventions -- not rules.)
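A runnable sketch of the same loop against a throwaway directory standing in for ~/Shell, showing that sourcing — unlike `sh $i`, which runs the file in a child process — makes the function available in the current shell:

```shell
# Build a disposable "~/Shell"-style directory with one function definition.
tmp=$(mktemp -d)
printf 'abash() { echo "hello from abash"; }\n' > "$tmp/bash.sh"

for rc in "$tmp"/*.sh
do
    . "$rc"    # sourced, so abash lands in *this* shell, not a child
done

out=$(abash)
rm -rf "$tmp"
```

Had the loop used `sh "$rc"` instead of `.`, the definition would have died with the child shell and `abash` would still be "command not found".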
How do I get ~/.bashrc to execute all scripts in my ~/Shell directory using a loop?
1,515,439,233,000
By using the find command I got multiple files. Now I want to add all of these files as mail attachments. How do I add these files as attachments in a single mail? I want to implement this in a script. Do I need to use a for loop and store the files in array variables?

For example, the following find returns 3 files:

    find . -type f -name "sum*"

result:

    sum123.pdf
    sum234.pdf
    sum453.pdf
You can do it with mutt like this:

    mutt -a $(find . -type f -name "sum*")

If you want to do it non-interactively, try

    mutt -s "Subject" -a $(find . -type f -name "sum*") -- [email protected] < /dev/null

If mutt is not installed, here is an example with mail and more tools (e.g. mpack). So it should be something like

    #!/bin/bash
    # This needs heirloom-mailx
    from="[email protected]"
    to="[email protected]"
    subject="Some fancy title"
    body="This is the body of our email"

    declare -a attargs
    for att in $(find . -type f -name "sum*"); do
        attargs+=( "-a" "$att" )
    done
    mail -s "$subject" -r "$from" "${attargs[@]}" "$to" <<< "$body"

For a sh environment without declare (note that plain sh has no arrays, no "${attargs[@]}", and no <<< here-strings, so the arguments are collected in a plain string and left unquoted on expansion, and the body is fed with a here-document):

    #!/bin/sh
    # This needs heirloom-mailx
    from="[email protected]"
    to="[email protected]"
    subject="Some fancy title"
    body="This is the body of our email"

    attargs=""
    for att in $(find . -type f -name "sum*"); do
        attargs="${attargs}-a $att "
    done
    mail -s "$subject" -r "$from" $attargs "$to" << EOF
    $body
    EOF
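For a strictly POSIX sh, the positional parameters can play the array's role. This sketch builds only the `-a file` argument list against throwaway files, with echo standing in for mail so nothing is actually sent (the directory and file names are illustrative):

```shell
tmp=$(mktemp -d)
touch "$tmp/sum123.pdf" "$tmp/sum234.pdf"

set --    # clear the positional parameters; they will hold "-a file" pairs
for att in "$tmp"/sum*; do
    set -- "$@" -a "$att"
done

# echo stands in for something like: mail -s "$subject" -r "$from" "$@" "$to"
line=$(echo "$@")
rm -rf "$tmp"
```

Unlike a space-joined string, `"$@"` keeps each filename as one argument even if it contains spaces.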
Attach files for sending mail which are the result set of find command
1,515,439,233,000
I have a question about range variables in for loops. In for loops I know you can use {..} to define a range. But I want it to be customer-definable. So the script asks for a range value, and I want it to be flexible, like this:

    #!/bin/bash
    while true; do
        echo "Some explanation..."
        read -p "Possible values are: ALL or RANGE (to define a specific range):" answer
        case $answer in
            #ALL
            [aA][lL][lL] )
                echo "ALL is selected, 1 to 250"
                RANGE="{1..250}"
                break
                ;;
            #RANGE
            [rR][aA][nN][gG][eE] )
                echo "Range is selected, please define a custom range. Notation should be like: <start>..<stop>"
                echo "or: <single> <single>"
                echo "or a combination of both. Like: 1..23 28 29 101..145 180 212"
                read RANGE
                break
                ;;
            #OTHERS
            * )
                echo "Typo error!! Not correct, type again ALL, PRI, SEC or RANGE"
                ;;
        esac
    done

    echo ""
    echo "Range for this execution is set to:"
    echo "$RANGE"

    for NEXI in $RANGE
    do
        sleep 0.2
        echo "Nexus number: $NEXI"
    done
    echo ...

But the output when I enter {1..10} 52 54 {120..128} is:

    Nexus number: {1..10}
    Nexus number: 52
    Nexus number: 54
    Nexus number: {120..128}

while I expected it to be like:

    Nexus number: 1
    Nexus number: 2
    Nexus number: 3
    Nexus number: 4
    Nexus number: 5
    Nexus number: 6
    Nexus number: 7
    Nexus number: 8
    Nexus number: 9
    Nexus number: 10
    Nexus number: 52
    Nexus number: 54
    Nexus number: 120
    Nexus number: 121
    ...

etc. fully written out. What can I change to achieve this? Thanks.
The problem is that brace expansion happens before variable expansion. As explained in man bash:

    The order of expansions is: brace expansion; tilde expansion, parameter and variable expansion, arithmetic expansion, and command substitution (done in a left-to-right fashion); word splitting; and pathname expansion.

This means that when you have something like $var which contains {1..3}, and do echo "$var", the shell looks for possible brace expansions before expanding the variable to its value. Since the braces are in the variable, at that point the shell sees no braces, so no brace expansion is performed.

One workaround would be to eval the variable if it is a range. Add these lines before your for loop:

    if [[ $RANGE =~ ^[0-9]+\.\.[0-9]+$ ]]; then
        RANGE=$(eval echo {$RANGE})
    fi

Now, on a more general note, it is usually a good idea to avoid using CAPITAL variable names in shell scripts. Environment variables are usually capitalized, and that can lead to mixups if you happen to use the same name as a defined env variable. So use lower-case variable names to be on the safe side.

Also, this is a really annoying program to debug and equally annoying to use. Don't prompt your user for input at run time! Instead, change your script to make it take arguments. Typing values is hard, annoying, error-prone, and can't be automated.
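If eval feels risky, seq (standard on most systems) offers an eval-free way to expand a single "start..stop" token; this is a sketch only, and the `expand_range` helper name is made up for illustration:

```shell
# Expand "start..stop" with seq; pass single numbers through unchanged.
expand_range() {
    case $1 in
        *..*) seq "${1%%..*}" "${1##*..}" ;;   # "3..6" -> 3 4 5 6, one per line
        *)    printf '%s\n' "$1" ;;            # single number passes through
    esac
}

out=$(expand_range "3..6" | tr '\n' ' ')
```

A mixed input like `1..23 28 29` could then be handled by calling the helper once per whitespace-separated token, with no eval anywhere.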
Range variables in for loop custom definable
1,515,439,233,000
I'm trying to trigger a beep on the PC speaker for every unique visitor of a website. After some brainstorming, it seemed to be possible with one line:

    for e in `ssh me@mymachine "tail -n 1 -f /var/log/apache2/test.log | awk '{print $1}' | uniq"`; do beep; done

However uniq doesn't output anything as long as stdin is open (it seems to wait for EOF). The same goes for the for loop. If I remove uniq from the chain, I still get no output with tail keeping the pipe open. This seems not to be because of buffering: even if I write >100,000 lines into the test file while this command is running, there's no output on the other end.

Is there a way to get that working without completely killing the beauty (simplicity) of the solution?

Update

I solved the first part. uniq is unblocked by prefixing the tail command with stdbuf -oL -eL (see https://unix.stackexchange.com/a/25378/109296). The same doesn't work for the loop.

Update 2

I got it working - but not exactly according to my spec and with 2 lines:

    while [ 1 -eq 1 ]; do ssh root@speedy "stdbuf -oL -eL tail -n 1 -f /var/log/apache2/www.access.log | stdbuf -oL -eL grep 'GET / '"; sleep 60; done > www.log

awk '{print $1}' is missing because it didn't work inside this construct (it just passed through the whole line). I don't know why. But I can live without it, because uniq turned out not to be so useful after all: it only looks at adjacent lines, which means that the request pattern ip1, ip2, ip1 would still let ip1 through twice. uniq -u would do what I expect, but it has the same problem as sort: it doesn't output anything as long as stdin is open (not even with stdbuf -oL). This command just writes all requests for the base URL (/) to another file. I wrapped it into a loop (and wait) in order to have it automatically retry if for some reason the pipe or connection interrupts.

    while inotifywait -e modify www.log; do beep -f 250; done

makes the sound!
I could not get the bash for loop to process line by line unbuffered, also tried while read with the same result. Thus I gave up and went on with inotifywait which however means that I need an intermediate file (maybe a named pipe would also work, didn't try. Doesn't really make a difference for me). I'd still be thankful to contributions which help to make the filtering of unique visitors work (without escalating complexity). This will be a nice surprise for my team members when they return to the office :-) I plan to extend this notification system to monitor several events, using different audio frequencies. That's the best job I've found so far for an old server collecting dust...
This is what I finally came up with, thanks to the neat Perl command contributed by JJoao:

    # kill everything on termination
    trap "kill 0" SIGINT SIGTERM

    # Make sure the remote processes are killed on exit, see
    # http://unix.stackexchange.com/questions/103699/kill-process-spawned-by-ssh-when-ssh-dies
    shopt -s huponexit

    ( while [ 1 -eq 1 ]; do
        ssh -t -t root@speedy "stdbuf -oL -eL tail -n 1 -f /var/log/apache2/www.access.log | stdbuf -oL -eL grep 'GET / ' | stdbuf -oL -eL perl -naE '(\$a{\$F[0]}++ == 0) and say \$F[0]'"
        sleep 60
    done > www.log ) &

    ( while inotifywait -e modify www.log; do beep -f 250; done ) &

(Note the backslashes before the $ signs in the Perl one-liner: the command is inside double quotes, so unescaped \$a and \$F[0] would be expanded by the local shell before ssh ever sees them.)
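The Perl one-liner prints a line only the first time its first field (the client IP) appears. The classic awk idiom `!seen[$1]++` does the same job, sketched here on canned log lines instead of a live tail:

```shell
# Keep each IP's first occurrence only; later repeats are suppressed,
# unlike plain uniq, which only compares adjacent lines.
out=$(printf '1.2.3.4 GET /\n5.6.7.8 GET /\n1.2.3.4 GET /\n' \
      | awk '!seen[$1]++ {print $1}')
```

For a live pipeline the same `stdbuf -oL` line-buffering prefix would be needed in front of awk, for the reasons described in the question.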
uniq and bash for loop not writing to stdout before stdin closing (for one-line website visitor notification system)
1,515,439,233,000
I've read up on a few answers here in reference to quoting variables on the shell. Basically, here's what I'm running: for f in "$(tmsu files)"; do echo "${f}"; done this has the expected outcome: a list of files. However, I'm pretty sure they're coming out as a single string. After some reading I realized that this is a zsh-ism; it deals with splitting differently than most shells. So I fired up bash and ran the same command. I was expecting it to split the list, but alas, I got exactly the same result. So now I'm really confused. Why doesn't bash behave as expected? How do I get zsh to split lines? I've tried (zsh): for f in "$(tmsu files)"; do echo "${=f}"; done as well as for f in "$(=tmsu files)"; do echo "${f}"; done neither worked, so obviously I'm mis-understanding how to get zsh to split. What makes this even worse is at some point I had this doing exactly what I wanted it to, and I can't remember how I did that.
However, I'm pretty sure they're coming out as a single string.

This is what you get when you use quotes. I can't speak to how zsh is supposed to do it, but the bash manual states as much (section "Word Splitting", under "EXPANSION"):

    The shell scans the results of parameter expansion, command substitution, and arithmetic expansion that did not occur within double quotes for word splitting.

(emphasis added). Note that word splitting also does not occur with single quotes; however, substitution and expansion do not occur within them either.

If you want separate arguments out of the command substitution, you need to remove the quotes. However, if tmsu produces any results with spaces, those will also yield undesirable splits (since both spaces and newlines are considered word delimiters), making things a little more complicated.

In bash, assuming tmsu gives one result per line, you can do this:

    tmsu files | while IFS= read -r f
    do
        printf "<%s>\n" "$f"
    done

The read builtin reads a whole line of input. You can give it any number of names. All but the last are assigned based on the word delimiter (space by default), and the last is assigned the remainder of the line, so it's a good way to get around bash's annoying treatment of whitespace in expansion and substitution.
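A runnable sketch with a stub in place of `tmsu files`, producing one result per line (one containing a space). It reads from a temporary file rather than the pipe shown above, because in bash each side of a pipeline runs in a subshell, so a counter incremented inside `cmd | while ...` would be lost:

```shell
# Stub for `tmsu files`: two results, the first with an embedded space.
fake_tmsu() { printf 'first file\nsecond\n'; }

tmpf=$(mktemp)
fake_tmsu > "$tmpf"

count=0
while IFS= read -r f
do
    count=$((count + 1))   # survives the loop: no pipeline, no subshell
    last=$f
done < "$tmpf"
rm -f "$tmpf"
```

With an unquoted `for f in $(fake_tmsu)` the same input would have produced three words, splitting "first file" in two.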
I don't really understand expansion and quoting on the shell (zsh/bash)
1,515,439,233,000
I want to rename all filenames that are exactly 16 characters long (digits and lower-case letters). I tried [0-9a-z]{16} and [0-9a-z]\{16\} as the placeholder regX in the following snippet, but it doesn't work:

    for file in <regX>
    do
        mv "$file" "${file}.txt"
    done
With extglob:

    shopt -s extglob
    for file in +([0-9a-z])
    do
        [[ ${#file} == 16 ]] && echo mv "$file" "${file}.txt"
    done

+([0-9a-z]) means one or more of the characters [0-9a-z]
${#file} gives the length of the filename
echo is for a dry run; remove it once things are fine
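The same filter can be rendered in plain POSIX sh (no extglob, no `[[`), which may help if the loop must run under /bin/sh; this sketch exercises it on throwaway names, one matching and two not:

```shell
tmp=$(mktemp -d)
touch "$tmp/abcdefgh12345678" "$tmp/too_short" "$tmp/BIGCASE_12345678"

matched=
for path in "$tmp"/*; do
    file=${path##*/}                              # strip the directory part
    case $file in *[!0-9a-z]*) continue ;; esac   # reject any disallowed char
    [ "${#file}" -eq 16 ] && matched=$file        # keep only 16-char names
done
rm -rf "$tmp"
```

The `case` test is the POSIX stand-in for `+([0-9a-z])`: a name survives only if it contains no character outside [0-9a-z].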
Fetch all filenames with specific number of characters
1,515,439,233,000
I've been given a script to run, but it produces an error when calling find . -depth 1 -type d:

    find: paths must precede expression: `1'

This is the line on which it fails:

    for dir in `find . -depth 1 -type d`
    do
        ....

I have tried quite a few things without success, and I don't really see why it gives the error, since it seems to me, at least, that the paths do indeed precede the "1".
The -depth switch does not take an argument, but -maxdepth does, so:

    for dir in `find . -depth -maxdepth 1 -type d`
    do
        ....

should work. The -depth argument, as per the man page, means "process directory contents first".
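A whitespace-safer variant of the same idea, sketched against a temporary tree: capture find's output and read it line by line instead of letting the backtick `for` loop word-split it (`-mindepth 1` replaces the `.` entry that `-maxdepth 1` alone would also list; both options are assumed available, as in GNU and modern BSD find):

```shell
tmp=$(mktemp -d)
mkdir "$tmp/dir one" "$tmp/dir_two"   # one name contains a space on purpose
touch "$tmp/a_file"                   # a plain file, filtered out by -type d

list=$(mktemp)
find "$tmp" -mindepth 1 -maxdepth 1 -type d > "$list"

dirs=0
while IFS= read -r dir
do
    dirs=$((dirs + 1))   # "dir one" counts once, not as two words
done < "$list"
rm -rf "$tmp" "$list"
```

The backtick version would have counted three words here, splitting "dir one" in half.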
Looping over dirs using `find . -depth 1 -type d`
1,515,439,233,000
I have a huge number of files that are numbered in a way such as file_01_01.out, where the first number is the group that the file belongs to, and the second is the number of the file in the group - so file_10_07.out is the 7th file in the 10th group. I want to copy some text from these files and group it in some output files. I have tried using this, and it doesn't really work, and I can't understand why:

    for i in {0..21}; do grep "text" file_$i_*out > out_$i.txt; done

Not sure why this doesn't work, but there is definitely logic to the output. It's just not the output I was going for, and some files are just completely skipped.
(In addition to @Philippos:) Bash is trying to expand the variable $i_ instead of $i. Try ...${i}_...:

    for i in {00..21}
    do
        grep "text" file_${i}_*out > out_$i.txt
    done
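The difference is easy to see in isolation: letters, digits, and underscores all belong to a variable name, so without braces the shell looks up a variable literally named `i_suffix` (unset, hence empty), while `${i}` stops the name at the brace:

```shell
i=7
without_braces="file_$i_suffix"    # expands the unset variable $i_suffix -> empty
with_braces="file_${i}_suffix"     # expands $i, then appends the literal _suffix
```

In the question's pattern `file_$i_*out`, the underscore is swallowed into the name the same way, so the glob becomes `file_*out` for every value of i.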
Using a For Loop to Sort and Save Files Using Grep
1,515,439,233,000
I'm trying, with no success, to use an awk command inside a for loop. I've got a variable which contains a series of strings that I want to cut with awk to get at the data. I know how to do that, but what I really want is to cut the data successively.

So I've got this variable:

    var="data1,data2,data3"

And here is where I am right now:

    for ((i=1; i<=3; i++))
    do
        echo $(awk -F, '{print $1}' <<< $var)
    done

I tried to replace the $1 with the loop variable $i, but without success.
You can accomplish what you're trying to do by using double quotes in the awk script to inject the shell variable into it. You still want to keep one literal $ in it, which you can do by escaping it with backslash: echo $(awk -F, "{print \$$i}" <<<$var) This will expand the $i to 1, 2 and 3 in each of the iterations, therefore awk will see $1, $2 and $3 which will make it expand each of the fields. Another possibility is to inject the shell variable as an awk variable using the -v flag: echo $(awk -F, -v i="$i" '{print $i}' <<<$var) That assigns the awk variable i to the contents of the shell variable with the same name. Variables in awk don't use a $, which is used for fields, so $i is enough to refer to the i-th field if i is a variable in awk. Assigning an awk variable with -v is generally a safer approach, particularly when it can contain arbitrary sequences of characters, in that case there's less risk that the contents will be executed as awk code against your intentions. But since in your case the variable holds a single integer, that's less of a concern. Yet another option is to use a for loop in awk itself. See awk documentation (or search this site) for more details on how to do that.
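The -v form from the answer, exercised on the sample string; printf replaces the bash-only `<<<` here-string here so the sketch also runs under plain sh:

```shell
var="data1,data2,data3"

# i is an awk variable (no $ when assigned); $i then means "field number i".
second=$(printf '%s\n' "$var" | awk -F, -v i=2 '{print $i}')
third=$(printf '%s\n' "$var" | awk -F, -v i=3 '{print $i}')
```

Because i arrives as data rather than as spliced-in code, there is no escaping to get wrong, which is the main argument for -v over the `"{print \$$i}"` form.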
Awk command inside a for loop
1,515,439,233,000
I'm trying to pass a list of files with a known set of characters to sed for a find and replace. For a directory containing multiple .xml files:

    ls -la
    file1.xml
    file2.xml
    file3.xml

Each containing a matching string:

    grep -i foo *
    file1.xml    <foo/>
    file2.xml    <foo/>
    file3.xml    <foo/>

Replace foo with bar using a for loop:

    for f in *.xml; do ls | sed -i "s|foo|bar|g" ; done

Returns:

    sed: no input files
    sed: no input files
    sed: no input files

I already figured out an alternative that works, so this is mostly for my own edification at this point:

    find /dir/ -name '*.xml' -exec sed -i "s|foo|bar|g" {} \;
You have a flaw in your for loop. Remove the ls command, and add the $f variable as the argument to sed -i, which will edit each filename.xml in place: for f in *.xml; do sed -i "s|foo|bar|g" "$f"; done
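The corrected loop, run against throwaway copies of the files from the question; GNU sed's -i is assumed here (BSD/macOS sed would need `-i ''`):

```shell
tmp=$(mktemp -d)
printf '<foo/>\n' > "$tmp/file1.xml"
printf '<foo/>\n' > "$tmp/file2.xml"

# Each filename is passed to sed -i directly; nothing is read from a pipe.
for f in "$tmp"/*.xml; do
    sed -i "s|foo|bar|g" "$f"
done

after=$(cat "$tmp"/*.xml)
rm -rf "$tmp"
```

The original `ls | sed -i ...` failed because -i demands filename arguments: with none given, sed reports "no input files" once per loop iteration and ignores the piped listing entirely.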
Use sed to find and replace a string in multiple files [duplicate]