I have a VPS where I changed the SSH port from the default 22. Unfortunately I forgot to allow the new port through the firewall. I don't have physical access to the server, and my host does not seem to offer any shell access to the VPS through their website. Result: I'm locked out of the server. Is there any way I can rectify this short of resetting the server?
If you made the changes by hand and re-invoked sshd, resetting might help; but if you changed sshd_config, resetting the server will not help you: the server will come back up and listen on the new, firewalled, port. You will have to access the VPS through a console or whatever other means your provider offers to rectify this kind of problem.

BTW, you can, and should, specify multiple ports in your sshd_config:

Port 22
Port 2222

That way you can test things on the new port before removing the old one, assuming you have to remove the old port in the first place. The only reason I ever had to set up sshd to listen on a different port is that a friend's internet provider blocks access to ports below 1025, and his router cannot map ports, only allow specific port traffic through to an internal address.
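A minimal sketch of the two-port transition described above. The file /tmp/sshd_config.demo is a throwaway stand-in for /etc/ssh/sshd_config, since editing the real file needs root; the validation/reload step is shown as a comment.

```shell
# Demo: keep both the old and the new port in sshd_config while testing.
cfg=/tmp/sshd_config.demo            # stand-in for /etc/ssh/sshd_config
printf 'Port 22\nPort 2222\n' > "$cfg"
grep -c '^Port ' "$cfg"              # prints 2: sshd would listen on both ports
# On the real file you would then validate the config and reload sshd:
#   sshd -t -f /etc/ssh/sshd_config && service sshd reload
```

Only after confirming you can log in on 2222 (and that the firewall allows it) would you remove the `Port 22` line.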
Changed SSH port without allowing it through firewall, locked out now - what to do?
For a while now (introduced in version 1.3, I believe), iptables' conntrack module can track two virtual states, SNAT and DNAT:

SNAT: A virtual state, matching if the original source address differs from the reply destination.
DNAT: A virtual state, matching if the original destination differs from the reply source.

On my router/firewall host, I have some rules for SNAT like this:

# SNAT
iptables -t filter -A FORWARD -i $FROM_IFACE -o $TO_IFACE -s $FROM_IP -d $TO_IP -m conntrack --ctstate NEW,ESTABLISHED,RELATED -j ACCEPT
iptables -t filter -A FORWARD -i $TO_IFACE -o $FROM_IFACE -s $TO_IP -d $FROM_IP -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -t nat -A POSTROUTING -o $TO_IFACE -s $FROM_IP -d $TO_IP -j SNAT --to-source $SNAT_IP

# DNAT
iptables -t nat -A PREROUTING -i $FROM_IFACE -d $FROM_IP -p $PROTO --dport $PORT -j DNAT --to-destination $TO_IP
iptables -t filter -A FORWARD -i $FROM_IFACE -o $TO_IFACE -d $TO_IP -p $PROTO --dport $PORT -j ACCEPT
iptables -t filter -A FORWARD -i $TO_IFACE -o $FROM_IFACE -s $TO_IP -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

After a fair bit of googling, I couldn't find any example of iptables rules using those "new" SNAT or DNAT states, but I tried anyway to replace ESTABLISHED,RELATED by SNAT or DNAT, like this:

# SNAT
iptables -t filter -A FORWARD -i $FROM_IFACE -o $TO_IFACE -s $FROM_IP -d $TO_IP -m conntrack --ctstate NEW,SNAT -j ACCEPT
iptables -t filter -A FORWARD -i $TO_IFACE -o $FROM_IFACE -s $TO_IP -d $FROM_IP -m conntrack --ctstate SNAT -j ACCEPT
iptables -t nat -A POSTROUTING -o $TO_IFACE -s $FROM_IP -d $TO_IP -j SNAT --to-source $SNAT_IP

# DNAT
iptables -t nat -A PREROUTING -i $FROM_IFACE -d $FROM_IP -p $PROTO --dport $PORT -j DNAT --to-destination $TO_IP
iptables -t filter -A FORWARD -i $FROM_IFACE -o $TO_IFACE -d $TO_IP -p $PROTO --dport $PORT -j ACCEPT
iptables -t filter -A FORWARD -i $TO_IFACE -o $FROM_IFACE -s $TO_IP -m conntrack --ctstate DNAT -j ACCEPT

It seemed to work, and this method has
at least one benefit I could notice: my firewall used to drop RST packets going from my internal hosts to the Internet (since they are in INVALID state), but with this new method they were allowed to pass. Unfortunately, although convenient, I'm not sure this method is really suitable, because my theoretical knowledge about networks is not sufficient to tell whether it's too permissive (i.e. allows some unwanted packets from outside my LAN to reach inside).

I think my question could be worded like this: can a packet have the SNAT or DNAT state while not also having the ESTABLISHED or RELATED state (except, obviously, the first packet, which has the NEW state)?

Note: I tried to log such packets, but to my knowledge it's impossible, since iptables accepts only one --ctstate option and ! can't be used within it (in other words, I can't say, or at least couldn't find a way to say, "log packets which have the SNAT state but not the ESTABLISHED or RELATED state"). If there is an alternate method to log them that I didn't think of, this would also be very welcome.

EDIT 1: after some trial and error, I realized I was wrong (hence the struck-out text): some packets are still in state INVALID and are thus finally dropped.

EDIT 2: if using SNAT/DNAT in place of ESTABLISHED,RELATED is not safe, please provide some concrete examples of cases where packets could be in the former states without being in the latter ones.
Thanks to @A.B's advice about logging, I could do some more tests. Here are the results, as well as answers to my own questions, in the hope this will help other people who, like me, didn't find anything on the web about the states SNAT and DNAT and/or their ability to replace ESTABLISHED,RELATED for matching.

On a moderately busy home network (a couple of hosts accessing the Internet over SNAT, as well as some virtual machines hosting servers (HTTP/HTTPS, SMTP, IMAP, etc.) publicly accessible over DNAT), in five days I didn't see a single line of log about a packet which was in SNAT or DNAT state and not also ESTABLISHED or RELATED. So, the answer to the question "can a packet have the SNAT or DNAT state while not also having the ESTABLISHED or RELATED state" is no.

Since my real worry was that matching against SNAT or DNAT instead of ESTABLISHED,RELATED to allow packets into my LAN could be too permissive, this would seem reassuring at first, but I found out that it's still not a good idea. In fact, it appears that this is, on the contrary, less permissive: during my tests with those rules, I saw a small but non-negligible number of packets in state RELATED that were dropped, mostly ICMP type 3, codes 1 and 3 (respectively destination host unreachable and destination port unreachable), coming from the Internet and destined for the hosts inside my LAN. In other words (if I understand networks correctly), my hosts tried to make some connections to the Internet, the remote routers responded that the connection couldn't be made, and my own firewall/router host blocked those responses. This can't be good.

So, the answer to the underlying question "Is it a good idea to replace ESTABLISHED,RELATED by SNAT or DNAT" is, again, no.
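The "log packets in SNAT state but not ESTABLISHED or RELATED" match that the question calls impossible with a single --ctstate can be expressed with a user-defined chain. This is a configuration sketch (needs root; the chain name CHECKSNAT is made up), not the exact rules used in the tests above:

```shell
# Log packets whose conntrack state is SNAT but NOT ESTABLISHED/RELATED,
# by splitting the negation across two rules in a dedicated chain.
iptables -N CHECKSNAT
iptables -A CHECKSNAT -m conntrack --ctstate ESTABLISHED,RELATED -j RETURN
iptables -A CHECKSNAT -j LOG --log-prefix "SNAT-not-est: "
iptables -A FORWARD -m conntrack --ctstate SNAT -j CHECKSNAT
```

The same pattern works for DNAT by swapping the --ctstate of the jump rule.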
iptables conntrack module: SNAT (or DNAT) state in place of ESTABLISHED/RELATED?
How do I list all loadable modules in iptables (given after the -m flag)? This post proposes listing loadable modules with ls /lib*/iptables/, but I don't have this folder with my version (v1.6.0).
Everything is described in the post you linked.

List all available modules:

:~# ls /lib/modules/`uname -r`/kernel/net/netfilter/

List all loaded modules (unless you use a specific module in a rule, it won't show up in this list):

:~# cat /proc/net/ip_tables_matches
comment
addrtype
mark
conntrack
conntrack
conntrack
recent
recent
addrtype
udplite
udp
tcp
multiport
icmp

The man page for iptables-extensions provides information on the extensions available in the standard iptables distribution:

:~# man 8 iptables-extensions
How to list iptables loadable match modules
As I notice more often with FreeBSD, there are always plenty of ways to reach a specific goal. After figuring out which firewall I wanted (I chose ipfw), I am now completely unsure about which way to do Network Address Translation (NAT). As I have discovered, there are two ways to do NAT: I could use the kernel-space ipfw nat, or I could use the userspace natd. The only one of these described in the FreeBSD Handbook is natd. What I would like to know is: what are the main differences between these, and which one is more popular? Of course I would also like to be able to fish, so how can I find out these differences in the manuals/handbooks?
ipfw nat is generally preferable, since it runs in kernel space and consumes less CPU than divert+natd. But natd can still be useful if you need to dynamically add rules for FTP connections (look for the -punch_fw option in natd(8)). The Handbook page is badly outdated.
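For comparison, an in-kernel ipfw nat setup can be sketched roughly as below. This is a hedged configuration fragment, not a tested ruleset; the interface name em0 is a placeholder, and a real ruleset needs matching pass rules around it:

```shell
# In-kernel NAT with ipfw (run as root on FreeBSD):
kldload ipfw ipfw_nat                           # load firewall + NAT modules
ipfw nat 1 config if em0 same_ports             # NAT instance 1 bound to em0
ipfw add 100 nat 1 ip from any to any via em0   # pass traffic through instance 1
```

With natd you would instead add a divert rule and run the natd daemon; see natd(8) and ipfw(8) for the authoritative syntax.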
FreeBSD kernel nat or natd?
I am trying to limit the number of ssh login attempts per time period. How might I do that? I have something like this in shorewall's rules:

#ACTION                SOURCE  DEST  PROTO  DEST    SOURCE   MARK
#                                           PORT    PORT(S)
...
Limit:info:SSHA,3,180  net     all   tcp    22

But it doesn't seem to work.
You can use iptables to limit connections to 3 attempts per minute:

iptables -I INPUT -p tcp --dport 22 -i eth0 -m state --state NEW -m recent --set
iptables -I INPUT -p tcp --dport 22 -i eth0 -m state --state NEW -m recent --update --seconds 60 --hitcount 4 -j DROP

Or use something like fail2ban, which bans an IP address for 15 minutes after 5 unsuccessful login attempts.
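The fail2ban policy mentioned above would look roughly like this jail fragment. The option names are fail2ban's standard ones, but treat the snippet as an illustrative sketch of a hypothetical /etc/fail2ban/jail.local, not a drop-in config:

```ini
# 5 failures within 60 seconds => ban the source IP for 15 minutes
[sshd]
enabled  = true
maxretry = 5
findtime = 60
bantime  = 900
```

fail2ban implements the ban by inserting firewall rules itself, so it composes naturally with either iptables or shorewall-managed hosts.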
How to limit the number of ssh login attempts per time interval
I have a machine (A) behind a firewall with no access to the Internet. On this machine I can NFS-mount directories from another machine (B), which can access the Internet and is accessible from the Internet, but I cannot install anything on machine (B). I want to keep a directory on (A) in sync with my Dropbox (which I use on all my other machines (not A or B); all of them connect to the Internet regularly). The solution I came up with is a cron job on (A) that calls two rsync commands to sync a directory on (A) with an NFS-mounted directory which is actually on (B). Then I can have a cron job on some other machine on the Internet that syncs my Dropbox with the directory on (B). Can anyone see any problems with this plan, or suggest something better? Are there any other unix utilities besides cron and rsync that would help?
You may be able to use Unison to synchronize your files. Unison uses the rsync protocol and can run over ssh. You may need to copy the executable into your directory on the remote system. Using plain rsync may cause problems, as it is difficult to synchronize file deletions.

EDIT: To sync a folder on A from system C (with a working Dropbox), a chosen directory on B becomes the hub, and A and C two spokes. Schedule the steps so that only one is running at a time:

Schedule Unison on system C to sync to the directory on B.
Schedule Unison on system A to sync to the directory on B. (This may require NFS-mounting the directory; if the directory on B is always mounted when you need it, you can skip that step. A symlink to an autofs NFS mount would handle this.)
Periodically, check Unison for conflicts if automatic resolution wasn't configured.

There are other ways to handle this.

P.S. I was working with WinSCP today and found it has a synchronize function. It appears useful for periodic use; Unison still seems better for automated updates.
Using rsync + cron to sync a machine behind a firewall with my Dropbox
Which tools should I research to intercept a plain-text packet, edit it, and then send it on its way? I am using Ubuntu 11.04 and Backtrack 5; my wifi is WPA2-Personal encrypted. I need to edit an item database of an iPod game :)
You don't want to intercept this in the air; that's very hard to do well. I suggest you change your network around a bit. You'll need a PC with two network interfaces and two routers to pull this off. Here's how I would do it:

Internet --> Router --> Ubuntu machine --(network port)--> Wifi router --> iPod

Ubuntu needs to "share" the Internet connection with the second router. This is actually dead simple with Network Manager: just create a new connection and, in the IPv4 tab, set "Method" to "Shared to other computers". The second router should then get traffic from your PC, and you should connect your iPod to the second router.

Now you just need to intercept and mangle the traffic. Let's deal with the mangling first. You want something like Hatkit. It's simple and fit for purpose. It won't handle tons of traffic, but it'll get you going. Set it to run on 127.0.0.1:8080 (the default). There are other similar proxies out there, including scripts you can hack on and customise.

To intercept, you just need a simple iptables rule to redirect incoming traffic from the second router through your proxy (replace the IP 10.42.43.2 with the one that Ubuntu has assigned your second router, most easily found by looking at the admin pages on the second router):

sudo iptables -t nat -A PREROUTING -p tcp -s 10.42.43.2 \
    --destination-port 80 -j REDIRECT --to-ports 8080

Now when you request things on port 80 from the second router, all the requests should fly through Hatkit, where you can alter them and the responses to your delight. Enjoy hacking your game :P

You could do this with a laptop: network cable in from the first router, and use the onboard wireless as an access point. I didn't suggest this because ad-hoc networking is, in my experience, extremely flaky in Ubuntu. I have two NICs and more routers than I can shake a stick at, so the other way is just easier for me.
What tools do I need to intercept and change an incoming packet on a WPA2-Personal wifi hub
What incoming TCP and UDP connections are permitted by the default firewall policies of Fedora Workstation and Fedora Server? I am interested in the current version, Fedora 28.
Look at the default zone definitions in /usr/lib/firewalld/zones/, and cross-reference them against /usr/lib/firewalld/services/.

FedoraWorkstation.xml

Unsolicited incoming network packets are rejected from port 1 to 1024, except for select network services. Incoming packets that are related to outgoing network connections are accepted. Outgoing network connections are allowed.

<service name="dhcpv6-client"/>  <!-- udp 546 from fe80::/64 only -->
<service name="ssh"/>            <!-- tcp 22 -->
<service name="samba-client"/>   <!-- udp 137,138, plus nf_conntrack_netbios_ns -->
<port protocol="udp" port="1025-65535"/>
<port protocol="tcp" port="1025-65535"/>

FedoraServer.xml

For use in public areas. You do not trust the other computers on networks to not harm your computer. Only selected incoming connections are accepted.

<service name="ssh"/>            <!-- tcp 22 -->
<service name="dhcpv6-client"/>  <!-- udp 546 from fe80::/64 only -->
<service name="cockpit"/>        <!-- tcp 9090 -->

("cockpit" is implemented as a web server running on TCP port 9090. It uses HTTPS and password authentication. There is also an alternative option to use SSH and SSH key authentication.)

Does it allow MDNS / avahi?

This is slightly confusing when you look at the package. The package includes a patch to enable MDNS by default, but it does not touch either of these files. Nevertheless, MDNS will be allowed on Fedora Workstation: the standard MDNS port is 5353, which falls in the "high ports" that Fedora Workstation allows (1025-65535).

The MDNS patch pre-dates FedoraWorkstation.xml and FedoraServer.xml, which appeared in Fedora 21 (2014-12-09), the first release of Fedora to be split into Workstation and Server editions. In Fedora 20, the default zone definition was public.xml and it allowed MDNS.
References:

Fedora 21 and its Workstation firewall -- LWN.net, 2014-12-17
https://src.fedoraproject.org/rpms/firewalld/tree/f28

From the MDNS patch:

Date: Mon, 6 Aug 2012 10:01:09 +0200
Subject: [PATCH] Make MDNS work in all but the most restrictive zones

MDNS is a discovery protocol, and much like DNS or DHCP should be available for the network to function as expected. Avahi (the main MDNS implementation) has taken steps to make sure no private information is published by default. See: https://fedoraproject.org/wiki/Desktop/Whiteboards/AvahiDefault
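Rather than reading the XML by hand, the same defaults can be checked on a running system with firewall-cmd. A sketch (requires firewalld running and root privileges):

```shell
# Inspect the effective defaults on a live Fedora system:
firewall-cmd --get-default-zone      # e.g. FedoraWorkstation or FedoraServer
firewall-cmd --list-all              # services, ports, etc. of the active zone
firewall-cmd --info-service=ssh      # which ports/protocols a named service covers
```

This reflects runtime state, so it also shows any local deviations from the shipped zone files.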
Which ports does the default firewall allow on Fedora Workstation and Fedora Server?
I have a Linux host running a 3.10 kernel with two interfaces, eth0 & eth1, bridged as brid00 with no IP. The bridge works fine, but now I want to filter some of the traffic going through it, and iptables' rules are not firing. I have enabled net.bridge.bridge-nf-call-iptables (all traffic is IPv4) and net.ipv4.ip_forward, and I'm using the physdev module for matching. For example, trying to block all ICMP requests with

iptables -A FORWARD -p icmp -m physdev --physdev-in eth0 --physdev-out eth1 -j DROP

has no effect. Any clue on what's happening? I think this kind of filtering was possible without using ebtables. (My future plan is to use nfqueue for some advanced filtering, so I need iptables to fire rules on the bridged traffic.)
This appears to be a bug in kernel 3.10 (maybe only on my architecture, arm64). It works fine in 4.x kernels, tested on a few of them. According to kernel diagrams and docs, the packet path is the same between kernels 3.x & 4.x and it should work in both, but it doesn't. Note that br_netfilter is a separate module in kernel 4.x: you have to modprobe br_netfilter to enable the functionality.
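On a 4.x kernel, the steps the answer describes look like this (run as root; br_netfilter has been a separate module since kernel 3.18):

```shell
# Enable iptables filtering of bridged IPv4 traffic on a 4.x kernel:
modprobe br_netfilter                              # load the bridge-netfilter glue
sysctl -w net.bridge.bridge-nf-call-iptables=1     # hand bridged IPv4 to iptables
lsmod | grep br_netfilter                          # confirm the module is loaded
```

Without the module loaded, the net.bridge.* sysctls don't even exist, which is an easy way to spot the missing piece.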
iptables not filtering bridged traffic
I am using CentOS 6.5 64-bit and use Xen to create a virtual machine (CentOS).

ifconfig

[root@CentOS ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 08:00:27:54:B3:FA
          inet6 addr: fe80::a00:27ff:fe54:b3fa/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:10087 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6094 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:763616 (745.7 KiB)  TX bytes:541789 (529.0 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:5 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:560 (560.0 b)  TX bytes:560 (560.0 b)

vif2.0    Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF
          inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:32 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3969 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:32
          RX bytes:2088 (2.0 KiB)  TX bytes:267825 (261.5 KiB)

xenbr0    Link encap:Ethernet  HWaddr 08:00:27:54:B3:FA
          inet addr:192.168.1.2  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe54:b3fa/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:9896 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1892 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:613149 (598.7 KiB)  TX bytes:284945 (278.2 KiB)

brctl show

[root@CentOS ~]# brctl show
bridge name   bridge id           STP enabled   interfaces
xenbr0        8000.08002754b3fa   yes           eth0
                                                vif2.0

xl network-list

[root@CentOS ~]# xl network-list xc
Idx BE Mac Addr.          handle state evt-ch tx-/rx-ring-ref BE-path
0   0  00:16:3e:22:4f:4b  0      4     10     768/769         /local/domain/0/backend/vif/2/0

brctl showmacs xenbr0

[root@CentOS ~]# brctl showmacs xenbr0
port no  mac addr           is local?  ageing timer
  2      00:16:3e:22:4f:4b  no         89.35
  1      00:1e:8c:19:62:67  no         0.00
  1      00:22:6b:fe:b9:36  no         4.92
  1      08:00:27:54:b3:fa  yes        0.00
  1      90:c1:15:c4:89:6d  no         25.00
  1      e0:2a:82:3d:c0:c5  no         3.78
  2      fe:ff:ff:ff:ff:ff  yes        0.00

Ping

Pinging the virtual machine from the same host works:

[root@CentOS ~]# ping 192.168.1.120
PING 192.168.1.120 (192.168.1.120) 56(84) bytes of data.
64 bytes from 192.168.1.120: icmp_seq=1 ttl=64 time=2.78 ms
64 bytes from 192.168.1.120: icmp_seq=2 ttl=64 time=0.916 ms
64 bytes from 192.168.1.120: icmp_seq=3 ttl=64 time=0.917 ms
^C
--- 192.168.1.120 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2370ms
rtt min/avg/max/mdev = 0.916/1.538/2.782/0.879 ms

Pinging the virtual machine from a device on the local network does not:

C:\Users\motaz>ping 192.168.1.120
Pinging 192.168.1.120 with 32 bytes of data:
Request timed out.
Request timed out.
Request timed out.

iptables

[root@CentOS ~]# iptables -L -v
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target  prot opt in  out source   destination
   11   700 ACCEPT  all  --  any any anywhere anywhere  state RELATED,ESTABLISHED
    0     0 ACCEPT  icmp --  any any anywhere anywhere
    0     0 ACCEPT  all  --  lo  any anywhere anywhere
    0     0 ACCEPT  tcp  --  any any anywhere anywhere  state NEW tcp dpt:ssh
    0     0 REJECT  all  --  any any anywhere anywhere  reject-with icmp-host-prohibited

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target  prot opt in  out source   destination
    0     0 ACCEPT  all  --  any any anywhere anywhere  PHYSDEV match --physdev-is-bridged
    0     0 REJECT  all  --  any any anywhere anywhere  reject-with icmp-host-prohibited

Chain OUTPUT (policy ACCEPT 8 packets, 864 bytes)
 pkts bytes target  prot opt in  out source   destination

If anyone can give me an idea how to solve this, I'll be grateful.

brctl showstp xenbr0

[root@CentOS images]# brctl showstp xenbr0
xenbr0
 bridge id              8000.080027798267
 designated root        8000.080027798267
 root port                 0    path cost                  0
 max age               20.00    bridge max age         20.00
 hello time             2.00    bridge hello time       2.00
 forward delay          2.00    bridge forward delay    2.00
 ageing time          300.00
 hello timer            0.65    tcn timer               0.00
 topology change timer  0.00    gc timer              109.38
 hash elasticity           4    hash max                 512
 mc last member count      2    mc init query count        2
 mc router                 1    mc snooping                1
 mc last member timer   1.00    mc membership timer   260.00
 mc querier timer     255.00    mc query interval     125.00
 mc response interval  10.00    mc init query interval 31.25
 flags

eth0 (0)
 port id                0000              state              forwarding
 designated root        8000.080027798267 path cost                  4
 designated bridge      8000.080027798267 message age timer       0.00
 designated port        8001              forward delay timer     0.00
 designated cost           0              hold timer              0.00
 mc router                 1
 flags
First of all, if you are using VirtualBox to host the Xen server, please make sure you use an Ethernet (not wireless) network and set Promiscuous Mode to "Allow All".

Secondly, just to make everything clean, let's start with a fresh installation of CentOS with Xen, and set up the bridge network and the CentOS VM on it. This assumes you have an external server 192.168.1.6 with the CentOS ISO extracted to /var/www/html/centos/6.3/os/i386/, a kickstart file at /var/www/html/centos/6.3/os/i386/ks.cfg, and /var/www/html/centos/6.3/os/i386/repodata with names matching those in the repodata/TRANS.TBL file.

On the Xen server (CentOS+Xen), install the following packages:

yum install -y rsync wget vim-enhanced openssh-clients
yum install -y libvirt python-virtinst libvirt-daemon-xen
yum install -y bridge-utils tunctl

Then edit the ifcfg-* files to create the bridge:

echo "DEVICE=br0
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes" > /etc/sysconfig/network-scripts/ifcfg-br0

echo "DEVICE=eth0
HWADDR=XX:XX:XX:XX:XX:XX
ONBOOT=yes
TYPE=Ethernet
IPV6INIT=no
USERCTL=no
BRIDGE=br0" > /etc/sysconfig/network-scripts/ifcfg-eth0

Edit the HWADDR=XX:XX:XX:XX:XX:XX line to match your MAC address.
Don't reboot from the ssh console; reboot from the VBox console. After the reboot, assuming you have a DHCP server, the Xen server will get a new IP; log in via the VBox console to find it. The ifconfig result should be similar to:

br0       Link encap:Ethernet  HWaddr 08:00:27:23:54:69
          inet addr:192.168.1.105  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe23:5469/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:5063 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3142 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:34251267 (32.6 MiB)  TX bytes:361205 (352.7 KiB)

eth0      Link encap:Ethernet  HWaddr 08:00:27:23:54:69
          inet6 addr: fe80::a00:27ff:fe23:5469/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:149910 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5045 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:182020057 (173.5 MiB)  TX bytes:493792 (482.2 KiB)

Now the bridge is ready, and you can use the IP of br0 to get an ssh console again.

To create a virtual machine on Xen which uses the previous bridge:

cd /var/lib/xen/images/

Create the virtual disk:

dd if=/dev/zero of=centos_1.img bs=4K count=0 seek=1024K
qemu-img create -f raw centos_1.img 8G

Then use virt-install to create the VM:

virt-install -d -n TestVM1 -r 512 --vcpus=1 \
    --bridge=br0 --disk /var/lib/xen/images/centos_1.img \
    --nographics -p -l "http://192.168.1.6/centos/6.3/os/i386" \
    --extra-args="text console=com1 utf8 console=hvc0 ks=http://192.168.1.6/centos/6.3/os/i386/ks.cfg"

Now the VM should start, get an IP from the DHCP server normally, and be able to complete the unattended remote installation.
The ifconfig result on the Xen host should then be similar to:

br0       Link encap:Ethernet  HWaddr 08:00:27:23:54:69
          inet addr:192.168.1.105  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe23:5469/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:10247 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8090 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:102264338 (97.5 MiB)  TX bytes:827859 (808.4 KiB)

eth0      Link encap:Ethernet  HWaddr 08:00:27:23:54:69
          inet6 addr: fe80::a00:27ff:fe23:5469/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:998780 errors:0 dropped:0 overruns:0 frame:0
          TX packets:37992 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:724701715 (691.1 MiB)  TX bytes:2897912 (2.7 MiB)

vif5.0    Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF
          inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:37 errors:0 dropped:0 overruns:0 frame:0
          TX packets:67 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:32
          RX bytes:4381 (4.2 KiB)  TX bytes:9842 (9.6 KiB)

After the installation completes, you can use the Xen console to get the VM's IP, and then get an ssh console on it.
Bridge does not forward packets (CentOS)
What is a practical way to manage a whitelist of outgoing firewall rules for http://security.debian.org (on a server that blocks all outgoing connections by default)? My understanding is that security.debian.org is a CNAME pointing to several mirror IPs, and it is advisable to use only IP addresses (not hostnames) in firewall rules. At the moment I simply add newly resolved IPs for security.debian.org to my firewall (ufw) outbound rules as I discover them. However, this is cumbersome and doesn't allow for automated apt-get updates. Can anyone suggest a better way?

PS: I found the following page somewhat relevant, but it did not provide a solution: http://www.debian.org/doc/manuals/securing-debian-howto/ap-fw-security-update.en.html
The real solution would be to check all DNS resolution. If you can make the (local) DNS server used by the restricted system log all activity, then you can grep the queries for security.debian.org, compare the results to your list of IPs, and update the firewall whenever an IP is new. That's probably not fast enough for the first connection attempt, but it should not cause problems.

An alternative would be to configure a static resolution for this FQDN (either in /etc/hosts or in the DNS server) and allow only the configured addresses. From time to time you would resolve the FQDN externally and update the local configuration if necessary.
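The first approach can be automated with a periodic job that re-resolves the name and adds any new addresses. A hedged sketch (needs root for ufw; dig comes from the dnsutils package; a production version would also de-duplicate against existing rules and handle port 443):

```shell
# Allow outbound HTTP to whatever security.debian.org currently resolves to.
for ip in $(dig +short security.debian.org A | grep -E '^[0-9.]+$'); do
    ufw allow out to "$ip" port 80 proto tcp
done
```

Run from cron, this keeps the whitelist roughly in step with the mirror rotation; stale IPs would still need occasional pruning.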
ufw firewall rules for security.debian.org
I need to add a rule (allow any to any on port 22) to my firewall so that I can ssh remotely into my machine. I have had a look at the SCO OSR600 documentation and I cannot find anything there.

Update: I have managed to enable the filter:

#ipfstat enable

and now my firewall is active; I just need to add rules. But where do I find the rules text file?
The rules should probably be:

pass in quick proto tcp from any to any port = 22 keep state
pass out quick proto tcp from any to any port = 22 keep state

in /etc/ipf.conf.
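After editing /etc/ipf.conf, the ruleset has to be reloaded into the running filter. A sketch using standard IP Filter tooling (run as root; exact paths may differ on SCO):

```shell
# Reload the IP Filter ruleset after editing the file:
ipf -Fa -f /etc/ipf.conf    # -Fa flushes all active rules, -f loads the file
ipfstat -io                 # list the currently loaded inbound/outbound rules
```

ipfstat -io is also a quick way to confirm the two pass rules actually made it into the active set.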
How to Add rules to IP Filter (Firewall in SCO)
When executing

wget https://docs.conda.io/projects/conda/en/4.6.0/_downloads/52a95608c49671267e40c689e0bc00ca/conda-cheatsheet.pdf

I get this error:

--2020-06-03 20:55:06--  https://docs.conda.io/projects/conda/en/4.6.0/_downloads/52a95608c49671267e40c689e0bc00ca/conda-cheatsheet.pdf
Resolving docs.conda.io (docs.conda.io)... 104.31.71.166, 104.31.70.166, 172.67.149.185, ...
Connecting to docs.conda.io (docs.conda.io)|104.31.71.166|:443... connected.
ERROR: cannot verify docs.conda.io's certificate, issued by ‘CN=SSL-SG1-GFRPA2,OU=Operations,O=Cloud Services,C=US’:
  Unable to locally verify the issuer's authority.
To connect to docs.conda.io insecurely, use `--no-check-certificate'.

What I have tried to solve this:

sudo update-ca-certificates -f

Exporting the certificate from the browser when opening the URL, saving it in a file conda.cer, then performing openssl x509 -in conda.cer -inform der -outform pem -out conda.pem, then executing:

wget --ca-certificate=conda.pem \
    https://docs.conda.io/projects/conda/en/4.6.0/_downloads/52a95608c49671267e40c689e0bc00ca/conda-cheatsheet.pdf

=> still the same error.

Putting the file under /etc/ssl/certs (sudo cp conda.pem /etc/ssl/certs) => still the same error.

I know I can use --no-check-certificate, but this is not what I want. This problem occurs for some other websites too. Does anyone know the reason? Thanks.

UPDATE 1: I have tried the following steps:

1) sudo cp conda.crt /usr/share/ca-certificates/mozilla/
2) sudo vi /etc/ca-certificates.conf and append mozilla/conda.crt at the end
3) run sudo update-ca-certificates -f
4) I can see a symbolic link created under /etc/ssl/certs which looks like: conda.pem -> /usr/share/ca-certificates/mozilla/conda.crt

However, it's still not working!

UPDATE 2: Deleted. Please refer to UPDATE 3.

UPDATE 3: The certificate chain in the URL above contains 4 certificates.
Just to be sure I wasn't missing any, I put all 4 certificates (namely conda1.crt, conda2.crt, conda3.crt, conda4.crt) in /usr/share/ca-certificates/mozilla/ and repeated the steps mentioned in UPDATE 1. The symbolic links are created successfully in /etc/ssl/certs.

Verification:

openssl verify -no-CAfile -no-CApath -partial_chain -CAfile conda1.pem conda2.pem
conda2.pem: OK
openssl verify -no-CAfile -no-CApath -partial_chain -CAfile conda2.pem conda3.pem
conda3.pem: OK
openssl verify -no-CAfile -no-CApath -partial_chain -CAfile conda3.pem conda4.pem
conda4.pem: OK

Result: wget still fails.

UPDATE 4: Part of the cause has been found. A Bluecoat service which intercepts the network is the root cause (though it causes a problem only in the Ubuntu VM; the Windows host machine works fine with SSL). Both of these work (conda1.crt is extracted from the browser, so it should come from the Bluecoat service):

wget --ca-certificate=/etc/ssl/certs/ca-certificates.crt https://docs.conda.io/projects/conda/en/4.6.0/_downloads/52a95608c49671267e40c689e0bc00ca/conda-cheatsheet.pdf
wget --ca-certificate=conda1.crt https://docs.conda.io/projects/conda/en/4.6.0/_downloads/52a95608c49671267e40c689e0bc00ca/conda-cheatsheet.pdf

CURRENT STATUS: I have installed conda1.crt in /etc/ssl/certs following the steps described in UPDATE 1. conda1.crt is believed to be the right one, as shown by the wget step in UPDATE 4. However, even after this step, the connection still fails. If I forcibly disable the Bluecoat service, the SSL problem disappears. However, I am required to use Bluecoat, so any help resolving the problem under Bluecoat is really appreciated!
meta: not really an answer, but too much for comments

UPDATE 1: I have tried the following steps: [using certs exported by the browser that are actually from Bluecoat, not really from conda]

1) sudo cp conda.crt /usr/share/ca-certificates/mozilla/
2) sudo vi /etc/ca-certificates.conf and append mozilla/conda.crt at the end
3) sudo update-ca-certificates -f
4) I can see a symbolic link created under /etc/ssl/certs which looks like: conda.pem -> /usr/share/ca-certificates/mozilla/conda.crt

However, it's still not working! [without specifying wget --ca-certificate=]

That's surprising. It works for me on Ubuntu 18.04, using my test certs (because I don't have yours, of course). Note every cert is placed in /etc/ssl/certs three ways: (1) a 'friendly' filename like Digicert_something or Go_Daddy_whatever that links back to /usr/share/ca-certificates/; (2) a 'hashname' that links to the 'friendly' name, like 3513523f.0 -> DigiCert_Global_Root_CA.pem; and (3) a single concatenated file ca-certificates.crt that contains the PEM blocks for all the certificates, with no human-readable names. wget uses OpenSSL, which uses the certificate data from (2) and/or (3) depending on the code -- which I don't have time to download and read through -- but never (1), so checking (1) doesn't prove much; check (2) and (3).

If those are correct, then I would try openssl s_client, which should use the same truststore and logic but gives more detailed information about any problems it finds (cluttered by a lot of other info). If that doesn't help, I think you'd have to get the source for wget, rebuild with symbols, and debug it, which is just too much work.
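The openssl s_client check suggested above could be sketched like this (it needs network access; s_client uses the same system truststore that wget consults):

```shell
# Show the chain the server actually presents, plus OpenSSL's verify verdict
# (look for the "Verify return code:" line in s_client's output):
openssl s_client -connect docs.conda.io:443 -servername docs.conda.io </dev/null \
  | openssl x509 -noout -subject -issuer
```

If the issuer printed here is the Bluecoat CN rather than the public CA, that confirms the interceptor is substituting its own chain.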
That said, and although the update-ca-certificates(1) manpage only mentions it in passing, I think they intend you to put additions in /usr/local/share/ca-certificates not /usr/share/ca-certificates/mozilla because the latter is maintained by the package manager while the former is in /usr/local which is the place for site or machine local additions both traditionally and per hier(7). /usr/share/doc/ca-certificates/README.Debian is more specific: If you want to install local certificate authorities to be implicitly trusted, please put the certificate files as single files ending with ".crt" into /usr/local/share/ca-certificates/ and re-run 'update-ca-certificates'. ... some minor points Export the certificate from browser ... and openssl x509 -inform der -outform pem .... Chrome on Windows, and Internet Explorer, use the Windows 'cert wizard' (or one of them) which allows exporting a single cert in DER or 'base-64' -- which is actually PEM -- as well as 'p7b' which allows the whole chain. (It also has options to include the privatekey for one's own cert but that doesn't apply here.) Firefox allows the same choices, plus chain in PEM. AFAIK only Edge is limited to exporting in DER and requires that conversion step -- at least the version I currently have; Edge was supposed to pumpkin into Chromium early this year, but I don't know if mine actually did, since the W10 philosophy is to prevent you knowing, much less controlling, what happens on your computer. [interception causes] problem to VM Ubuntu only though, the host machine windows works fine ... Is the host owned, or managed, by the network owner? E.g. are both the machine and network for a business? If so it is common when installing an interceptor like Bluecoat to automatically install the root cert in the machines that will need to trust it -- especially Windows machines that can easily be centrally managed using a 'domain' and 'group policy'. 
(Note this type of Windows domain has nothing to do with the 'domain names' and 'domain name system DNS' used on the Internet -- don't mix them up.) A simple signal of this is if your login name is not a simple name like fred or email like [email protected], but in the form domain\fred.
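The local-CA installation path recommended above (/usr/local/share/ca-certificates, not the mozilla directory) can be sketched as a short root shell session; the filename conda1.crt is the one from the question:

```shell
# Install a locally trusted CA the Debian/Ubuntu way (run as root).
# /usr/local/share/ca-certificates/ is the place for site-local additions;
# update-ca-certificates regenerates the symlinks and the bundle.
cp conda1.crt /usr/local/share/ca-certificates/
update-ca-certificates
# Sanity check: the subject hash is what OpenSSL actually looks up,
# so it should appear as one of the <hash>.0 links in /etc/ssl/certs
openssl x509 -noout -hash -in /usr/local/share/ca-certificates/conda1.crt
ls -l /etc/ssl/certs | grep -i conda
```

This checks item (2) of the three placements described above, which is the one wget/OpenSSL actually consults.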
Unable to locally verify the issuer's authority even after using certificate
1,425,403,167,000
I cannot get my Mac (10.10.3) machine to connect to my Oracle Linux 7 (CentOS/RH 7) server with its firewall up. (I am trying to configure for NFSv3 only; I don't need v4) I have verified that NFS is working by issuing this command on the Mac (firewall OFF on OL 7 server) showmount -e myserver.home And I get this back: Export list for myserver: /var/www 192.168.10.0/24 If I try connecting with Command-K and enter nfs://myserver.home it makes the connection and I can browse, edit and delete files as expected. Next, I enable the firewall on the OL7 server. I also open the ports as specified by Oracle OL 7 Documentation and when I issue the showmount command again, I get this error message: showmount: Cannot retrieve info from host: localhost: RPC: Program not registered If I turn off the firewall and it works again. So...what ports did I enable? #firewall-cmd --list-ports 32803/tcp 662/udp 2049/udp 662/tcp 111/udp 32769/udp 892/udp 2049/tcp 892/tcp 111/tcp I checked to see what RPC was listening on (according to the Admin guide link above, it should be 2049 and 111) # rpcinfo -p program vers proto port service 100000 4 tcp 111 portmapper 100000 3 tcp 111 portmapper 100000 2 tcp 111 portmapper 100000 4 udp 111 portmapper 100000 3 udp 111 portmapper 100000 2 udp 111 portmapper 100024 1 udp 47793 status 100024 1 tcp 52921 status 100005 1 udp 20048 mountd 100005 1 tcp 20048 mountd 100005 2 udp 20048 mountd 100005 2 tcp 20048 mountd 100005 3 udp 20048 mountd 100005 3 tcp 20048 mountd 100003 3 tcp 2049 nfs 100003 4 tcp 2049 nfs 100227 3 tcp 2049 nfs_acl 100003 3 udp 2049 nfs 100003 4 udp 2049 nfs 100227 3 udp 2049 nfs_acl 100021 1 udp 32769 nlockmgr 100021 3 udp 32769 nlockmgr 100021 4 udp 32769 nlockmgr 100021 1 tcp 32803 nlockmgr 100021 3 tcp 32803 nlockmgr 100021 4 tcp 32803 nlockmgr And finally my /etc/sysconfig/nfs file: # Note: For new values to take effect the nfs-config service # has to be restarted with the following command: # systemctl restart nfs-config # # 
Optional arguments passed to in-kernel lockd #LOCKDARG= # TCP port rpc.lockd should listen on. LOCKD_TCPPORT=32803 # UDP port rpc.lockd should listen on. LOCKD_UDPPORT=32769 MOUNTD_PORT=892 STATD_PORT=662 # # Optional arguments passed to rpc.nfsd. See rpc.nfsd(8) RPCNFSDARGS="--port 2049" # Number of nfs server processes to be started. # The default is 8. #RPCNFSDCOUNT=16 # # Set V4 grace period in seconds #NFSD_V4_GRACE=90 # # Set V4 lease period in seconds #NFSD_V4_LEASE=90 # # Optional arguments passed to rpc.mountd. See rpc.mountd(8) RPCMOUNTDOPTS="" # # Optional arguments passed to rpc.statd. See rpc.statd(8) STATDARG="" # # Optional arguments passed to sm-notify. See sm-notify(8) SMNOTIFYARGS="" # # Optional arguments passed to rpc.idmapd. See rpc.idmapd(8) RPCIDMAPDARGS="" # # Optional arguments passed to rpc.gssd. See rpc.gssd(8) RPCGSSDARGS="" # # Enable usage of gssproxy. See gssproxy-mech(8). GSS_USE_PROXY="yes" # # Optional arguments passed to rpc.svcgssd. See rpc.svcgssd(8) RPCSVCGSSDARGS="" # # Optional arguments passed to blkmapd. See blkmapd(8) BLKMAPDARGS=""
I have solved this issue and wanted to post the answer here in case anyone else had the same difficulties as the documentation on Oracle's Website is incomplete. We need to open a port for the mountd service. To do this, issue the following commands: firewall-cmd --permanent --zone=<zone> --add-service mountd Make sure to enter your zone name. Mine was "public" but you also have the option of leaving it out and it will select the default zone. This part was missing from the Oracle documentation. Once I did that, I was able to connect my iMac to my NFS share with no problems.
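Spelled out in full (using the "public" zone as in the answer; a --permanent change only takes effect after a reload):

```shell
firewall-cmd --permanent --zone=public --add-service=mountd
firewall-cmd --reload
# Verify that mountd is now in the zone's service list:
firewall-cmd --zone=public --list-services
```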
NFS Port Blocking Firewall Issue
1,425,403,167,000
Trying to disable the network access for the user: [root@notebook ~]# iptables -I OUTPUT -m owner --uid-owner tempuser -j DROP [root@notebook ~]# ip6tables -I OUTPUT -m owner --uid-owner tempuser -j DROP Could not open socket to kernel: Address family not supported by protocol [root@notebook ~]# [root@notebook ~]# iptables -I INPUT -m owner --uid-owner tempuser -j DROP iptables: Invalid argument. Run `dmesg' for more information. [root@notebook ~]# ip6tables -I INPUT -m owner --uid-owner tempuser -j DROP Could not open socket to kernel: Address family not supported by protocol [root@notebook ~]# Testing it: [root@notebook ~]# su - tempuser [tempuser@notebook ~]$ ping google.com ping: unknown host google.com [tempuser@notebook ~]$ [tempuser@notebook ~]$ ping 8.8.8.8 PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data. 64 bytes from 8.8.8.8: icmp_seq=1 ttl=56 time=4.80 ms 64 bytes from 8.8.8.8: icmp_seq=2 ttl=56 time=4.07 ms ^C --- 8.8.8.8 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1057ms rtt min/avg/max/mdev = 4.071/4.439/4.807/0.368 ms [tempuser@notebook ~]$ [tempuser@notebook ~]$ exit logout [root@notebook ~]# ping google.com PING google.com (216.58.209.174) 56(84) bytes of data. 64 bytes from bud02s21-in-f14.1e100.net (216.58.209.174): icmp_seq=1 ttl=55 time=5.05 ms ^C --- google.com ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 572ms rtt min/avg/max/mdev = 5.059/5.059/5.059/0.000 ms [root@notebook ~]# Question: how can I disable the network access for a given user under Linux? (INPUT/OUTPUT/IPv4/IPv6?) - why can I still ping IPv4 addresses with the user?
Try this, iptables -A OUTPUT -o ethX -m owner --uid-owner {USERNAME} -j DROP Where, --uid-owner { USERNAME } : Matches if the packet was created by a process with the given effective USERNAME. -A : Append rule to given table/chain -I : Insert rule to head of table/chain For example, my oracle user id is 1000 so I will append following rule: /sbin/iptables -A OUTPUT -o eth0 -m owner --uid-owner 1000 -j DROP service iptables save
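Two points worth adding to this (both hedged, since they go beyond the answer as written): the owner match is only valid in the OUTPUT and POSTROUTING chains, which is why the INPUT attempt in the question failed with "Invalid argument"; and the question also involves IPv6, which needs a mirrored ip6tables rule. Using the answer's example UID 1000:

```shell
# Block all IPv4 and IPv6 egress created by processes running as UID 1000
iptables  -A OUTPUT -m owner --uid-owner 1000 -j DROP
ip6tables -A OUTPUT -m owner --uid-owner 1000 -j DROP
```

(The question's ip6tables "Address family not supported by protocol" error suggests IPv6 support is missing from that kernel, in which case the second rule is unnecessary anyway.)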
How to disable network access for a user?
1,425,403,167,000
I am configuring a REDIS server and I want to allow connections only from a set of specific IP addresses. This is a Debian 10 server, and the recommended framework to use is nft, which I haven't used in the past. The default ruleset is this: #!/usr/sbin/nft -f flush ruleset table inet filter { chain input { type filter hook input priority 0; } chain forward { type filter hook forward priority 0; } chain output { type filter hook output priority 0; } } What rule do I need to add in that file to allow incoming connections to redis from IP 1.1.1.1 and 2.2.2.2, dropping everything else? REDIS is using port 6379.
In case someone else stumbles upon the same issue, my main problem was that I was using rules in the incorrect order. I was adding a drop rule before the accept rule, and this seems to work the other way around. This is a sample rule for dropping all IP addresses except 2: ip saddr 1.1.1.1 tcp dport 6379 accept ip saddr 2.2.2.2 tcp dport 6379 accept tcp dport 6379 drop Complete rules file: #!/usr/sbin/nft -f flush ruleset table inet filter { chain input { type filter hook input priority 0; # allow connection to redis from ip saddr 1.1.1.1 tcp dport 6379 accept ip saddr 2.2.2.2 tcp dport 6379 accept tcp dport 6379 drop } chain forward { type filter hook forward priority 0; } chain output { type filter hook output priority 0; } }
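To load and confirm the ruleset (standard nft invocations; the file path is assumed to be the Debian default, adjust if yours differs):

```shell
sudo nft -f /etc/nftables.conf   # load the rules file shown above
sudo nft list ruleset            # confirm what the kernel now holds
```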
nftables allow redis only from specific IP addresses
1,425,403,167,000
Problem: I have two web applications which were created by using JAVA and PYTHON respectively. The JAVA application runs using Tomcat server on the port number 8000. The PYTHON application uses web.py and runs on the port number 8080. The Python (API) performs a back-end job and Java (UI) acts a front-end guy. In my local Ubuntu machine, these applications were working perfectly. However, I have to make this application run in my QA machine in which only ports 80 and 443 are open and all the remaining ports are restricted. I tried using authbind to run java on port 80 but it failed. Is there any other ways to redirect the HTTP requests to their respective web services and port number internally using URL Filtering ? If there any other methods kindly share the information about it. Thanks in advance.
The standard solution for this is to use a front-end server which dispatches the requests to the appropriate “real” server, typically based on the host name. This is called a reverse proxy. Nginx is very often used for that. Start with the tutorial. Here's how the configuration (/etc/nginx/nginx.conf) of a reverse proxy with two backends looks (note that proxy_pass belongs inside a location block): server { listen 80; server_name java-app.example.com; location / { proxy_pass http://localhost:8000/; } } server { listen 80; server_name python-app.example.com; location / { proxy_pass http://localhost:8080/; } } Of course there are many more options that may be useful.
Two web servers running in one linux machine?
1,425,403,167,000
Linux box a is connected to a home DSL line with dynamic DNS registration, hosting a tmux session to which multiple clients connect in read-only mode over SSH. All users connect using the same credentials: user b. Example: ssh [email protected] tmux attach -t screencast It all works fine but I have had a user do "naughty" stuff from the box out to the Internet. That's unacceptable as I am responsible for my Internet contract with the ISP; how do I completely jail every user apart from granting the ability to use the account b, over ssh, using tmux to watch session screencast on my a machine? I am thinking about updating ipchains straight after a user connects over ssh, allowing traffic back to that ip address only but... with multiple viewers sharing the same account?
I don't completely understand your requirements: which machine are the users to be jailed on? Can they do anything that doesn't involve the network? Nonetheless I think I can tell you what the necessary building blocks are. To restrict a user to specific network connections, see How to restrict internet access for a particular user on the lan using iptables in Linux. In short, to restrict user 1234's network traffic to connecting to 192.0.2.42 (the IP address of machine A) via ssh: iptables -t mangle -A OUTPUT -o eth0 -m owner --uid-owner 1234 -p tcp -d 192.0.2.42 --dport 22 -j ACCEPT iptables -t mangle -A OUTPUT -o eth0 -m owner --uid-owner 1234 -j DROP Remember to block IPv6 as well if you have it. On machine A, to restrict the restricted users to the account B, the most effective method is to arrange for these users not to have credentials to other accounts. You can use Match directives in sshd_config to restrict connections from certain IP addresses to authenticating certain users, but this may not be a good thing as it would prevent you from obtaining administrative access. Match Address 192.0.2.99 AllowTCPForwarding No AllowUsers B PasswordAuthentication No X11Forwarding No To restrict account B to a single command, there are two ways: Give the users a private key (preferably one per user), and set restrictions for this key in the authorized_keys file with a command= directive. command="tmux attach-session -r -t screencast" ssh-rsa … Set the user's shell to a script that launches tmux with the right arguments. This has the advantage that you can allow password authentication, but may be harder to get right in a way that doesn't allow the user to break out to a shell prompt. I think that tmux doesn't allow shell escapes in a read-only session, but I'm not sure; check that users can't escape at that point.
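A minimal sketch of the second option (the launcher-script shell); the session name is the one from the question, and the path /usr/local/bin/screencast-shell is hypothetical. Set it as account B's shell with chsh or usermod:

```shell
#!/bin/sh
# /usr/local/bin/screencast-shell -- forced login shell for account B.
# Ignores any command the user tried to pass (e.g. "ssh b@host somecmd")
# and always attaches read-only to the shared tmux session.
exec tmux attach-session -r -t screencast
```

Because the script uses exec and never reads user input, there is no point at which the user reaches an interactive shell; when tmux exits, so does the SSH session.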
How to "jail" a user account's network capabilities on Linux?
1,425,403,167,000
At the moment I'm using a Python script to generate iptables rules. Each set of changes gets committed to a git repository before deployment so there's a trace of who changed what and why. What tools/processes do other people use to manage changes to their firewall rules? Is there a guide on best practice for firewall change control that anyone likes? UPDATE: I guess what I'm asking is for tools/processes around the area. For instance I find testing large firewall scripts quite difficult. Anyone use/written a test script or know of a unit testing type approach that's possible with iptables?
You could use a higher-level software that generates iptables rules, like shorewall. It has a command 'shorewall check' that checks the consistency and errors in your rules.
Firewall change control
1,425,403,167,000
I posted a question on ServerFault about a specialized Firewall setup, but as an avid software developer I am also considering rolling my own. I am only interested in using a high-level language, preferably Java or Node.JS. Is there some system for Linux or Illumos that will take all network packets, and provide them to my application to make a determination on whether they should be allowed, dropped or refused? (or re-written) I'm only interested in ICMP, UDP and TCP packets. I'm envisioning that I would write a Java application, that would allow me to sniff the traffic to make a determination on whether it should be allowed. For example, in HTTP traffic I may wish to check the Host header to determine what website the browser is attempting to visit. I realize this is likely to lower the potential throughput, but perhaps the solution you guys recommend will have documentation that will let me clarify the impact of that caveat. It's almost like I'm asking for FUSE, except for firewalls instead of filesystems. Is there such a program out there, or would I be stuck with writing C/C++ code for the firewall?
On Linux-based platforms there is a netlink socket that you can open from your Java program and determine whether or not to accept the packet. This socket can be included in the network stack with an iptables rule. Here of course you can also limit the types of packets to be passed to your usermode filter. Here's what the man page has to say on the matter: ULOG This target provides userspace logging of matching packets. When this target is set for a rule, the Linux kernel will multicast this packet through a netlink socket. One or more userspace processes may then subscribe to various multicast groups and receive the packets. Given the complexity and sophistication of the netfilter project, it might be worth asking for solutions to the problem you're trying to solve. (Or perhaps that's what your other SE question covered; I haven't looked yet )
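As a side note (a sketch going beyond the answer above): on current kernels the usual mechanism for this is NFQUEUE rather than ULOG. The rule below diverts packets to a userspace queue, where a process attached via libnetfilter_queue issues a per-packet accept/drop verdict; Java bindings to that library exist but need JNI/JNA glue:

```shell
# Divert forwarded TCP packets to userspace queue 0; each packet is held
# until the userspace program returns a verdict (NF_ACCEPT / NF_DROP).
iptables -A FORWARD -p tcp -j NFQUEUE --queue-num 0
```

Unlike ULOG, which only copies packets to userspace for logging, NFQUEUE lets userspace actually decide the packet's fate, which matches the firewall use case in the question.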
Build my own firewall, in Java or other high-level language?
1,425,403,167,000
Does iptables -I INPUT -j ACCEPT open all ports?
This will accept all traffic that is inbound and destined for this computer; all ports, all protocols, all users, all states. Remember that port isn't the only thing you have to think about when writing a firewall on a Linux System. Any traffic that is outbound or destined for another computer may have different rules. However if your rules are this, for example iptables -I INPUT -j ACCEPT iptables -I INPUT -j DROP the second rule will end up being before the first and all traffic will be dropped. If you want to allow all traffic. I generally recommend using -A instead of -I when writing a firewall script so the order is obvious. Also if you just want to allow everything, you'd be better off setting the policy for the chain. iptables -P INPUT ACCEPT will tell everything that hasn't matched another rule at the end of INPUT to be allowed.
What does this iptables command do?
1,425,403,167,000
I am using Linux at home and want to be able to configure firewall. I would like to understand what I am doing not just copy paste some rules from internet :).
If you want to really understand what you are doing, you've got your work cut out for you as iptables is massively complex: frozentux iptables tutorial. This is a highly recommended tutorial (it's free). If you're willing to break a few bucks: http://www.amazon.com/Linux-Firewalls-3rd-Steve-Suehring/dp/0672327716
Recommend a reading for learning Linux firewalls configuration for beginner? [closed]
1,425,403,167,000
I was trying to setup the firewall settings, and probably did something wrong. I don't have internet now unless I stop iptables service I tried flushing,and accepting everything sudo iptables -F sudo iptables -P OUTPUT ACCEPT sudo iptables -P INPUT ACCEPT But I still cannot access internet. if I stop the service, sudo service iptables stop then I can access inet. Starting again blocks it. Can you point me where the problem lies? (I'm running CentOS 6.4)
The iptables command per default only shows entries of the filter table. But there are also other tables: There are probably some entries in the nat table. Add -t nat to your commands to look at them.
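For example, to see whether rules are hiding in the other tables:

```shell
iptables -t nat -L -n -v      # list nat table rules with packet counters
iptables -t mangle -L -n -v   # mangle can also interfere with traffic
```

The -v counters are handy here: a nonzero packet count on a DROP or REJECT rule points at the culprit.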
No internet even with iptables ACCEPT all
1,425,403,167,000
My company PC is behind the firewall, I want to connect to my remote server. The port is open however I can not connect to it, does anyone know the root cause? From my company PC connect to my remote server: # telnet my-server 2221 Trying x.x.x.x... Connected to my-server. Escape character is '^]'. ^C^C^C # nc -vzw5 my-server 2221 Connection to my-server 2221 port [tcp/rockwell-csp1] succeeded! # ssh -vvv my-server -p 2221 OpenSSH_5.3p1, OpenSSL 1.0.1e-fips 11 Feb 2013 debug1: Reading configuration data /root/.ssh/config debug1: Applying options for * debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to my-server [x.x.x.x] port 2221. debug1: Connection established. debug1: permanently_set_uid: 0/0 debug1: identity file /root/.ssh/identity type -1 debug1: identity file /root/.ssh/identity-cert type -1 debug3: Not a RSA1 key file /root/.ssh/id_rsa. debug2: key_type_from_name: unknown key type '-----BEGIN' debug3: key_read: missing keytype debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug2: key_type_from_name: unknown key type '-----END' debug3: key_read: missing keytype debug1: identity file /root/.ssh/id_rsa type 1 debug1: identity file /root/.ssh/id_rsa-cert type -1 debug1: identity file /root/.ssh/id_dsa type -1 debug1: identity file /root/.ssh/id_dsa-cert type -1 debug1: identity file /root/.ssh/id_ecdsa type -1 debug1: identity file /root/.ssh/id_ecdsa-cert type -1 ^C The process will stuck forever. 
However at the same time, I check my remote server status, I can clearly saw the connection has been established: # netstat -at tcp 0 402 myserver:ssh x.x.x.x:11307 ESTABLISHED After a while, the connection status will change to FIN_WAIT1, then closed: # netstat -at tcp 0 402 myserver:ssh x.x.x.x:11307 FIN_WAIT1 Tcpdump on server side while client initiate a connection request: # tcpdump -i ppp0 port 2221 -vv tcpdump: listening on ppp0, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes 12:09:01.408239 IP (tos 0x10, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 60) server_ip.2221 > client_ip.20999: Flags [S.], cksum 0x21e6 (correct), seq 2805531925, ack 581774329, win 14400, options [mss 1452,sackOK,TS val 9959078 ecr 74287789,nop,wscale 4], length 0 12:09:01.424747 IP (tos 0x0, ttl 50, id 41302, offset 0, flags [DF], proto TCP (6), length 52) client_ip.20999 > server_ip.2221: Flags [.], cksum 0x8711 (correct), seq 1, ack 1, win 457, options [nop,nop,TS val 74287802 ecr 9959078], length 0 12:09:01.448272 IP (tos 0x10, ttl 64, id 62398, offset 0, flags [DF], proto TCP (6), length 454) server_ip.2221 > client_ip.20999: Flags [P.], cksum 0x5dba (correct), seq 1:403, ack 1, win 900, options [nop,nop,TS val 9959082 ecr 74287802], length 402 12:09:01.674641 IP (tos 0x10, ttl 64, id 62399, offset 0, flags [DF], proto TCP (6), length 454) server_ip.2221 > client_ip.20999: Flags [P.], cksum 0x5da3 (correct), seq 1:403, ack 1, win 900, options [nop,nop,TS val 9959105 ecr 74287802], length 402 12:09:01.904523 IP (tos 0x10, ttl 64, id 62400, offset 0, flags [DF], proto TCP (6), length 454) server_ip.2221 > client_ip.20999: Flags [P.], cksum 0x5d8c (correct), seq 1:403, ack 1, win 900, options [nop,nop,TS val 9959128 ecr 74287802], length 402 12:09:02.364225 IP (tos 0x10, ttl 64, id 62401, offset 0, flags [DF], proto TCP (6), length 454) server_ip.2221 > client_ip.20999: Flags [P.], cksum 0x5d5e (correct), seq 1:403, ack 1, win 900, options [nop,nop,TS val 
9959174 ecr 74287802], length 402 12:09:03.283694 IP (tos 0x10, ttl 64, id 62402, offset 0, flags [DF], proto TCP (6), length 454) server_ip.2221 > client_ip.20999: Flags [P.], cksum 0x5d02 (correct), seq 1:403, ack 1, win 900, options [nop,nop,TS val 9959266 ecr 74287802], length 402 12:09:05.122593 IP (tos 0x10, ttl 64, id 62403, offset 0, flags [DF], proto TCP (6), length 454) server_ip.2221 > client_ip.20999: Flags [P.], cksum 0x5c4a (correct), seq 1:403, ack 1, win 900, options [nop,nop,TS val 9959450 ecr 74287802], length 402 12:09:08.810407 IP (tos 0x10, ttl 64, id 62404, offset 0, flags [DF], proto TCP (6), length 454) server_ip.2221 > client_ip.20999: Flags [P.], cksum 0x5ad9 (correct), seq 1:403, ack 1, win 900, options [nop,nop,TS val 9959819 ecr 74287802], length 402 12:09:15.006311 IP (tos 0x10, ttl 64, id 17769, offset 0, flags [DF], proto TCP (6), length 52) server_ip.2221 > client_ip.4708: Flags [F.], cksum 0x0499 (correct), seq 1497941342, ack 2936162453, win 900, options [nop,nop,TS val 9960438 ecr 74001029], length 0 12:09:16.176090 IP (tos 0x10, ttl 64, id 62405, offset 0, flags [DF], proto TCP (6), length 454) server_ip.2221 > client_ip.20999: Flags [P.], cksum 0x57f8 (correct), seq 1:403, ack 1, win 900, options [nop,nop,TS val 9960556 ecr 74287802], length 402 12:09:30.927316 IP (tos 0x10, ttl 64, id 62406, offset 0, flags [DF], proto TCP (6), length 454) server_ip.2221 > client_ip.20999: Flags [P.], cksum 0x5234 (correct), seq 1:403, ack 1, win 900, options [nop,nop,TS val 9962032 ecr 74287802], length 402 12:10:00.429743 IP (tos 0x10, ttl 64, id 62407, offset 0, flags [DF], proto TCP (6), length 454) server_ip.2221 > client_ip.20999: Flags [P.], cksum 0x46ac (correct), seq 1:403, ack 1, win 900, options [nop,nop,TS val 9964984 ecr 74287802], length 402 12:10:59.354673 IP (tos 0x10, ttl 64, id 62408, offset 0, flags [DF], proto TCP (6), length 454) server_ip.2221 > client_ip.20999: Flags [P.], cksum 0x2fa4 (correct), seq 1:403, ack 1, win 
900, options [nop,nop,TS val 9970880 ecr 74287802], length 402 12:12:57.364324 IP (tos 0x10, ttl 64, id 62409, offset 0, flags [DF], proto TCP (6), length 454) server_ip.2221 > client_ip.20999: Flags [P.], cksum 0x0184 (correct), seq 1:403, ack 1, win 900, options [nop,nop,TS val 9982688 ecr 74287802], length 402 12:14:01.653934 IP (tos 0x10, ttl 64, id 62410, offset 0, flags [DF], proto TCP (6), length 52) server_ip.2221 > client_ip.20999: Flags [F.], cksum 0x0e69 (correct), seq 403, ack 1, win 900, options [nop,nop,TS val 9989120 ecr 74287802], length 0
debug1: Connection established. [...] debug1: identity file /root/.ssh/id_ecdsa-cert type -1 ^C When a client connects to an SSH server, the first data exchange is that the server and client send their version strings to each other. The OpenSSH client normally logs this immediately after the list of identity files, for example from my system: [...] debug1: identity file /home/devuser/.ssh/id_ed25519-cert type -1 debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2 debug1: Remote protocol version 2.0, remote software version OpenSSH_6.6.1p1 Ubuntu-2ubuntu2 Your client never logged receiving the SSH version string from the server. One of three things is probably happening: A firewall or something similar is blocking or dropping TCP data packets from the server to the client. The client is connecting to an SSH server, but it's malfunctioning. The client is connecting to something other than an ssh server. You'll need to troubleshoot this on the server. The OpenSSH server logs through syslog. You should start by checking the syslog logs to see what if anything sshd logged about the connection attempt.
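A quick way to distinguish these cases without the full ssh client is to read the banner by hand; a healthy SSH server sends its version string immediately after the TCP handshake completes:

```shell
nc my-server 2221
# a working SSH server replies at once with a line such as
#   SSH-2.0-OpenSSH_7.4
# prolonged silence means server-to-client data is being lost or filtered
```

Given that the server-side tcpdump above shows the 402-byte banner packet being retransmitted over and over with no ACK from the client, case 1 (something between the hosts dropping server-to-client data packets) is the most likely explanation here.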
Port is open but I can't ssh to it
1,425,403,167,000
My SMTP server is being probed. It looks like a brute force attach on SASL, where they're going through a password dictionary. Having seen thousands of these lines in the log files Sep 18 14:09:52 xxx postfix/smtpd[7412]: connect from ca255.calcit.fastwebserver.de[146.0.42.124] Sep 18 14:09:55 xxx postfix/smtpd[7412]: warning: ca255.calcit.fastwebserver.de[146.0.42.124]: SASL LOGIN authentication failed: authentication failure Sep 18 14:09:55 xxx postfix/smtpd[7412]: lost connection after AUTH from ca255.calcit.fastwebserver.de[146.0.42.124] Sep 18 14:09:55 xxx postfix/smtpd[7412]: disconnect from ca255.calcit.fastwebserver.de[146.0.42.124] I modified my main.cf like this: inet_interfaces = all smtpd_sasl_auth_enable=yes smtpd_helo_required = yes smtpd_sender_restrictions = reject_unknown_address smtpd_client_restrictions = check_client_access hash:/etc/postfix/maps/access_client, permit_mynetworks, reject smtpd_recipient_restrictions = check_client_access hash:/etc/postfix/maps/access_client, permit_mynetworks, reject_non_fqdn_sender, reject_non_fqdn_recipient, reject_unknown_sender_domain, reject_unknown_recipient_domain, permit_sasl_authenticated, reject_unauth_pipelining, reject_unauth_destination, reject_rbl_client zen.spamhaus.org, reject_rbl_client list.dsbl.org permit broken_sasl_auth_clients = yes And my /etc/postfix/maps/access_client only has this line: 146.0.42.124 REJECT However after restarting postfix there is still no change in behaviour, I still see the same error, so SASL is still being checked, even though I thought with these settings the client would be rejected based on its IP address before SASL even comes into the game ? A 2nd question is - I am relaying outgoing mail traffic from one machine to another on the internal network - apart from the 'relayhost' setting on the machine that just relays, can I keep the rest of the postfix settings the same on both ?
Postfix doesn't evaluate the smtpd_client_restrictions until the RCPT TO (or ETRN) command is sent. http://www.postfix.org/SMTPD_ACCESS_README.html#timing Current Postfix versions postpone the evaluation of client, helo and sender restriction lists until the RCPT TO or ETRN command. This behavior is controlled by the smtpd_delay_reject parameter. Restriction lists are still evaluated in the proper order of (client, helo, etrn) or (client, helo, sender, relay, recipient, data, or end-of-data) restrictions. When a restriction list (example: client) evaluates to REJECT or DEFER the restriction lists that follow (example: helo, sender, etc.) are skipped. Thus you can get around this by setting the following in your main.cf: smtpd_delay_reject = no   As for your second question, there are so many controls for postfix, this is near impossible to answer without having complete details of your network, postfix configuration, and client configuration. Best way is to just try it.
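For instance (postconf -e edits main.cf in place; reload Postfix afterwards so the setting takes effect):

```shell
postconf -e 'smtpd_delay_reject = no'
postfix reload
```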
Why doesn't Postfix reject a specific client's connection attempts?
1,425,403,167,000
Is there an infographic somewhere that describes the logical flow of iptables? Specifically, I'm looking for something that diagrams which point in the process ip_conntrack applies.
This netfilter diagram (svg) seems to fit.
iptables infographic
1,425,403,167,000
I run nginx on my local machine to dispatch to various internal applications that I use quite a lot. nginx makes it easy for me to give the applications easy local aliases. The problem is that the applications serve up very sensitive information, and so I would like to not have nginx listening on any of my public interfaces. My configuration blocks in /etc/nixos/configuration.nix look like this: networking.firewall = { enable = true; # allowedTCPPorts = [ 8081 ]; }; services.nginx = { enable = true; recommendedProxySettings = true; recommendedTlsSettings = true; virtualHosts."localhost" = { locations."/wiki".proxyPass = "http://localhost:8000"; locations."/weblog".proxyPass = "http://localhost:3001"; }; }; So, the question becomes, how can I firewall port 80?
So, it turns out that while nginx will listen to all interfaces, enabling nginx does not actually open up the port on the firewall. In my original test, I opened the firewall port, then accessed the service from a remote machine. This was my control test to verify that the service was truly accessible. Then I closed the firewall port. The service remained accessible, but only to that browser on that computer. Future tests showed that as soon as I closed the port, other browsers could not access the service, and that original browser eventually lost access, too.
How can I enable nginx on nixos for localhost only?
1,425,403,167,000
I'm using an iptables firewall on my local Linux server. The log tells me that there is a continuous multicast on the network by my router (Fritz box). Is this a normal behaviour. Should I allow this traffic? [633912.348130] IPTables Packet Dropped: IN=enp3s0 OUT= MAC=01:00:5e:00:00:01:xx:xx:xx:xx:xx:xx:xx:xx SRC=192.168.178.1 DST=224.0.0.1 LEN=36 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [634912.348130] IPTables Packet Dropped: IN=enp3s0 OUT= MAC=01:00:5e:00:00:01:xx:xx:xx:xx:xx:xx:xx:xx SRC=192.168.178.1 DST=224.0.0.1 LEN=36 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [635037.322691] IPTables Packet Dropped: IN=enp3s0 OUT= MAC=01:00:5e:00:00:01:xx:xx:xx:xx:xx:xx:xx:xx SRC=192.168.178.1 DST=224.0.0.1 LEN=36 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2 [635287.169456] IPTables Packet Dropped: IN=enp3s0 OUT= MAC=01:00:5e:00:00:01:xx:xx:xx:xx:xx:xx:xx:xx SRC=192.168.178.1 DST=224.0.0.1 LEN=36 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2
This is IGMP traffic, which unless you know why you'd want it, can be safely ignored. The 224.0.0.1 multicast subnet is defined as being for all hosts on the network segment. (See Notable IPv4 multicast addresses (Wikipedia).) Protocol 2 (See List of IP protocol numbers (Wikipedia)) is IGMP, the Internet Group Management Protocol (Wikipedia). Essentially, your router is asking if there are any other multicast-capable routers on the subnet.
Continuous multicast traffic from my router
1,425,403,167,000
I set the default policy of my machine: iptables -A INPUT -j REJECT #DROP ALL PACKETS TO INPUT CHAIN The INPUT chain is now blocked. Now I want to allow only some specific services; for example, I should still be able to access the internet. So what rule should I add? Port 80 is for HTTP, so I tried allowing that port with iptables -A INPUT -p tcp --dport 80 ACCEPT But it didn't work. How can I do this?
iptables rules are sequential, meaning the first matching rule a packet hits gets executed. Targets like ACCEPT, DROP, and REJECT are terminal, meaning the packet will not proceed further into the chain. -A means append. So what you've done is: match everything and REJECT it # everything stops here accept tcp port 80 # we never reach this because everything stopped there ^ unfortunately tcp port 80 is part of "everything", and thus you never reach your second rule. Flush your INPUT chain with -F and reverse the order in which you add your rules. I also recommend reading Dan Robbins' article on stateful firewall design, which is not just for Gentoo or 2.4 kernels.
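A minimal corrected sequence for the question's case might look like this (a sketch assuming the REJECT-everything setup from the question; run as root):

```shell
# Start from a clean INPUT chain.
iptables -F INPUT

# Accept HTTP first, then reject everything else. Order matters:
# REJECT is terminal, so appending it last keeps port 80 reachable.
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -j REJECT
```

Alternatively, -I INPUT 1 inserts a rule at the top of the chain, which lets you add an ACCEPT ahead of an already-present REJECT without flushing.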
iptables rule to allow access to internet
1,425,403,167,000
I used to use a piece of Linux software that monitors logs like HTTP, SSH, etc., and if it detects that someone is attempting a brute-force attack, it blocks that IP by adding a rule to iptables. I forgot what that software is called. It's open source and free.
fail2ban does that, although I don't think it's the only such tool. (Amazed mentioned DenyHosts, although it seems to be SSH specific.)
Linux App that monitors log and add rules to iptables
1,425,403,167,000
I have an old Dell Dimension with two ethernet cards, and I need to turn it into a firewall. I've tried FireStarter and IPCop, but I don't think I'm setting them up right. Here's the setup that I'm trying to accomplish: I need to have one ethernet card plugged directly into the modem, and the other ethernet card plugged into a 16-port switch/hub. Part of the problem is that I've never done this before and despite hours of Googling, I'm still not sure where to start. I don't need anything fancy or endlessly configurable - I just need some basic protection that I can tweak out later. The bottom line is that it's 10:45p, everyone else left the office at 5p, and I'm still trying to get this taken care of. Thanks in advance for any help.
For sheer ease of use, I can recommend Smoothwall. I used it many years ago, and it was great then. Just install it following the prompts, and you're done. Of course, you can tweak it and fiddle with it if you want to, but you probably won't need to unless you have unusual requirements. It's very good. :)
Turning an old computer into a network firewall
1,425,403,167,000
I am running an Apache web server on a desktop machine running Trisquel 8 (based on Ubuntu). I would like to make the server accessible to other machines/devices on my local network, but I can't figure out how. When I try to connect from another device, using the local IP address of the Apache server, I get error messages in the browser, such as: in Firefox on a Mac, I get 'Unable to connect. Firefox can't establish a connection to the server at localhost.'. If I try to connect using the DuckDuckGo browser on an Android phone, I get 'Webpage not available. The webpage at http://localhost/ could not be loaded because: net::ERR_CONNECTION_REFUSED'. One of the answers suggested using nmap to see which ports are open, which returned the following result: $ nmap [LOCAL IP ADDRESS] Starting Nmap 7.01 ( https://nmap.org ) at 2019-10-12 09:25 EDT Nmap scan report for [LOCAL IP ADDRESS] Host is up (0.00013s latency). Not shown: 998 closed ports PORT STATE SERVICE 22/tcp open ssh 80/tcp open http Nmap done: 1 IP address (1 host up) scanned in 0.09 seconds So, it shows that port 80 is open for http. It's probably also worth mentioning that I can ping the machine from another on the local network and, as the nmap output shows, I have another port open for ssh. I have been ssh-ing to this machine for several months and that works just fine. For that, I just installed the ssh-server and it pretty much worked out of the box. So, does that imply that something is wrong with the Apache2 setup (as opposed to iptables/firewall), given that ssh is working with no problems? Contents of iptables: $ sudo iptables -L Chain INPUT (policy ACCEPT) target prot opt source destination ACCEPT tcp -- anywhere anywhere tcp dpt:http ctstate NEW,ESTABLISHED Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination Contents of apache2.conf: # This is the main Apache server configuration file. 
It contains the # configuration directives that give the server its instructions. # See http://httpd.apache.org/docs/2.4/ for detailed information about # the directives and /usr/share/doc/apache2/README.Debian about Debian specific # hints. # # # Summary of how the Apache 2 configuration works in Debian: # The Apache 2 web server configuration in Debian is quite different to # upstream's suggested way to configure the web server. This is because Debian's # default Apache2 installation attempts to make adding and removing modules, # virtual hosts, and extra configuration directives as flexible as possible, in # order to make automating the changes and administering the server as easy as # possible. # It is split into several files forming the configuration hierarchy outlined # below, all located in the /etc/apache2/ directory: # # /etc/apache2/ # |-- apache2.conf # | `-- ports.conf # |-- mods-enabled # | |-- *.load # | `-- *.conf # |-- conf-enabled # | `-- *.conf # `-- sites-enabled # `-- *.conf # # # * apache2.conf is the main configuration file (this file). It puts the pieces # together by including all remaining configuration files when starting up the # web server. # # * ports.conf is always included from the main configuration file. It is # supposed to determine listening ports for incoming connections which can be # customized anytime. # # * Configuration files in the mods-enabled/, conf-enabled/ and sites-enabled/ # directories contain particular configuration snippets which manage modules, # global configuration fragments, or virtual host configurations, # respectively. # # They are activated by symlinking available configuration files from their # respective *-available/ counterparts. These should be managed by using our # helpers a2enmod/a2dismod, a2ensite/a2dissite and a2enconf/a2disconf. See # their respective man pages for detailed information. # # * The binary is called apache2. 
Due to the use of environment variables, in # the default configuration, apache2 needs to be started/stopped with # /etc/init.d/apache2 or apache2ctl. Calling /usr/bin/apache2 directly will not # work with the default configuration. # Global configuration # # # ServerRoot: The top of the directory tree under which the server's # configuration, error, and log files are kept. # # NOTE! If you intend to place this on an NFS (or otherwise network) # mounted filesystem then please read the Mutex documentation (available # at <URL:http://httpd.apache.org/docs/2.4/mod/core.html#mutex>); # you will save yourself a lot of trouble. # # Do NOT add a slash at the end of the directory path. # #ServerRoot "/etc/apache2" # # The accept serialization lock file MUST BE STORED ON A LOCAL DISK. # Mutex file:${APACHE_LOCK_DIR} default # # PidFile: The file in which the server should record its process # identification number when it starts. # This needs to be set in /etc/apache2/envvars # PidFile ${APACHE_PID_FILE} # # Timeout: The number of seconds before receives and sends time out. # Timeout 300 # # KeepAlive: Whether or not to allow persistent connections (more than # one request per connection). Set to "Off" to deactivate. # KeepAlive On # # MaxKeepAliveRequests: The maximum number of requests to allow # during a persistent connection. Set to 0 to allow an unlimited amount. # We recommend you leave this number high, for maximum performance. # MaxKeepAliveRequests 100 # # KeepAliveTimeout: Number of seconds to wait for the next request from the # same client on the same connection. # KeepAliveTimeout 5 # These need to be set in /etc/apache2/envvars User ${APACHE_RUN_USER} Group ${APACHE_RUN_GROUP} # # HostnameLookups: Log the names of clients or just their IP addresses # e.g., www.apache.org (on) or 204.62.129.132 (off). 
# The default is off because it'd be overall better for the net if people # had to knowingly turn this feature on, since enabling it means that # each client request will result in AT LEAST one lookup request to the # nameserver. # HostnameLookups Off # ErrorLog: The location of the error log file. # If you do not specify an ErrorLog directive within a <VirtualHost> # container, error messages relating to that virtual host will be # logged here. If you *do* define an error logfile for a <VirtualHost> # container, that host's errors will be logged there and not here. # ErrorLog ${APACHE_LOG_DIR}/error.log # # LogLevel: Control the severity of messages logged to the error_log. # Available values: trace8, ..., trace1, debug, info, notice, warn, # error, crit, alert, emerg. # It is also possible to configure the log level for particular modules, e.g. # "LogLevel info ssl:warn" # LogLevel warn # Include module configuration: IncludeOptional mods-enabled/*.load IncludeOptional mods-enabled/*.conf # Include list of ports to listen on Include ports.conf # Sets the default security model of the Apache2 HTTPD server. It does # not allow access to the root filesystem outside of /usr/share and /var/www. # The former is used by web applications packaged in Debian, # the latter may be used for local directories served by the web server. If # your system is serving content from a sub-directory in /srv you must allow # access here, or in any related virtual host. <Directory /> Options FollowSymLinks AllowOverride None Require all denied </Directory> <Directory /usr/share> AllowOverride None Require all granted </Directory> <Directory /var/www/> Options Indexes FollowSymLinks AllowOverride All # Require local # Require ip 192.168.1 Require all granted </Directory> #<Directory /srv/> # Options Indexes FollowSymLinks # AllowOverride None # Require all granted #</Directory> # AccessFileName: The name of the file to look for in each directory # for additional configuration directives. 
See also the AllowOverride # directive. # AccessFileName .htaccess # # The following lines prevent .htaccess and .htpasswd files from being # viewed by Web clients. # <FilesMatch "^\.ht"> Require all denied </FilesMatch> # # The following directives define some format nicknames for use with # a CustomLog directive. # # These deviate from the Common Log Format definitions in that they use %O # (the actual bytes sent including headers) instead of %b (the size of the # requested file), because the latter makes it impossible to detect partial # requests. # # Note that the use of %{X-Forwarded-For}i instead of %h is not recommended. # Use mod_remoteip instead. # LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %O" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent # Include of directories ignores editors' and dpkg's backup files, # see README.Debian for details. # Include generic snippets of statements IncludeOptional conf-enabled/*.conf # Include the virtual host configurations: IncludeOptional sites-enabled/*.conf # vim: syntax=apache ts=4 sw=4 sts=4 sr noet I need the AllowOverride All under /var/www because I am trying to run an instance of Wordpress, and it needs to be able to write to the Apache server. Apache2 is definitely running, as I can access the web content using 'localhost' from a browser on the local machine. 
Also, systemctl status apache2 shows it is running: ~$ systemctl status apache2 ● apache2.service - LSB: Apache2 web server Loaded: loaded (/etc/init.d/apache2; bad; vendor preset: enabled) Drop-In: /lib/systemd/system/apache2.service.d └─apache2-systemd.conf Active: active (running) since Thu 2019-10-10 20:01:44 EDT; 5min ago Docs: man:systemd-sysv-generator(8) Process: 1562 ExecStart=/etc/init.d/apache2 start (code=exited, status=0/SUCCESS) CGroup: /system.slice/apache2.service ├─1621 /usr/sbin/apache2 -k start ├─1624 /usr/sbin/apache2 -k start ├─1625 /usr/sbin/apache2 -k start ├─1626 /usr/sbin/apache2 -k start ├─1627 /usr/sbin/apache2 -k start ├─1628 /usr/sbin/apache2 -k start └─2102 /usr/sbin/apache2 -k start Oct 10 20:01:42 lee-Desktop systemd[1]: Starting LSB: Apache2 web server... Oct 10 20:01:42 lee-Desktop apache2[1562]: * Starting Apache httpd web server apache2 Oct 10 20:01:43 lee-Desktop apache2[1562]: AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to suppress this message Oct 10 20:01:44 lee-Desktop apache2[1562]: * Oct 10 20:01:44 lee-Desktop systemd[1]: Started LSB: Apache2 web server. As suggested in the comments, I tried netstat --inet -a | grep apache2, but it returned nothing. Apparently this is unusual if apache2 is running, as it should be listening on port 80. I ran netstat -plunt | grep :80 and got the following output: $ sudo netstat -plunt | grep :80 tcp6 0 0 :::80 :::* LISTEN 1557/apache2 Does this mean Apache is listening, but not hearing anything? In terms of the virtualhost configs, which were also requested, the only file in /etc/apache2/sites-enabled/ is 000-default.conf, the contents of which is: <VirtualHost *:80> # The ServerName directive sets the request scheme, hostname and port that # the server uses to identify itself. This is used when creating # redirection URLs. 
In the context of virtual hosts, the ServerName # specifies what hostname must appear in the request's Host: header to # match this virtual host. For the default virtual host (this file) this # value is not decisive as it is used as a last resort host regardless. # However, you must set it for any further virtual host explicitly. #ServerName www.example.com ServerAdmin webmaster@localhost DocumentRoot /var/www/html # Available loglevels: trace8, ..., trace1, debug, info, notice, warn, # error, crit, alert, emerg. # It is also possible to configure the loglevel for particular # modules, e.g. #LogLevel info ssl:warn ErrorLog ${APACHE_LOG_DIR}/error.log CustomLog ${APACHE_LOG_DIR}/access.log combined # For most configuration files from conf-available/, which are # enabled or disabled at a global level, it is possible to # include a line for only one particular virtual host. For example the # following line enables the CGI configuration for this host only # after it has been globally disabled with "a2disconf". #Include conf-available/serve-cgi-bin.conf </VirtualHost> # vim: syntax=apache ts=4 sw=4 sts=4 sr noet I have also tried running tail -f /var/log/apache2/*.log, but nothing is being printed to the logs when I try to connect from a remote machine. So, how can I troubleshoot what is blocking the connection? Is there a log anywhere that might enlighten me as to why the connection is being refused, and for what reason? I tried the suggestions made by Jacob in his answer, but unfortunately it didn't resolve the problem. Any other suggestions or guidance would be greatly appreciated!
When I try to connect from another device, using the local IP address of the Apache server Please post the output of the ip addr (or ifconfig) command run as root on the server. Please indicate which exact local IP address you tried to connect to from other devices at that time, and the LAN IP address of each other device you used at the time of connection. Reasons below; also see the Postscript for other, more far-fetched possibilities regarding your router and server configuration. I did type the local IP of the apache server into the browser, but for some reason, after it tried to connect and gave the error, it showed http://localhost/ in the address bar. This smells really fishy: it suggests that the local IP address you used for connecting is very, very wrong. An alternate explanation: the request successfully got through, but resulted in a nonsensical HTTP redirect; skip to Postscript item 3. If said local IP address closely resembles 127.0.0.1 or ::1, the address itself is likely to be the root cause of your problem, because that is not a "real" IP address... The IPv4 address 127.0.0.1 or the abbreviated IPv6 address ::1 means this device, and the host name localhost is always locally defined as a synonym of this address. When you tried to connect to this address from any device other than the server itself, you were instructing it to connect to port 80 of itself (not port 80 of the server machine). Since your other device does not run an HTTP server itself, the connection attempt will certainly end with a TCP RST failure, i.e. the "Connection refused" error you got in your browser. For a remedy: you must use a correct LAN IP address when connecting from other machines: certainly not localhost, not 127.0.0.1, and not ::1... Run the command ifconfig as root on the server, and look for a block that is NOT named lo. The IP address will be in a field called inet addr: .
If the server is connected to the local network by wire, the block to use will be named eth followed by a number, e.g. eth0. If the server is connected to the local network wirelessly, the block to use will be named wlan followed by a number, e.g. wlan0. The block you use must also show RUNNING status (which indicates that the LAN connection is enabled and usable). The address in question should look like 192.168.x.x, 10.x.x.x, or fall in the range 172.16.x.x to 172.31.x.x. Try starting a browser on the server, then enter the IP address you found in step 1 in the address bar and press Enter. If your web page shows up correctly, proceed to the next step. On the other device, make sure it is connected to the same LAN (not cellular Internet). Start a browser on the other device, then enter the IP address you found in step 1 and press "Go". Your web page should show up correctly. Postscript If the IP address you used is correct, then there might be other, less obvious reasons that could cause this problem, like: Your home router may be configured with "Virtual LAN" or port isolation, which would isolate each LAN port and each wireless LAN device in its own little network. Each device is confined and cannot connect to the others (it is only allowed to go straight to the Internet). In this configuration, it is theoretically possible that in each virtual LAN each device got assigned the same IP address. Thus when you entered the "server's IP address" into the other device, it turned out to "coincidentally" be the same as the device's own LAN IP address, instructing it to connect to itself, which doesn't run an HTTP server, resulting in the "Connection refused" error. If this is the case, disable the port isolation and virtual LAN options on your router. Your home router might incorporate layer 3 switch functionality and be configured with an access control list (i.e. a firewall) to reject any "incoming connection" to any private LAN IP address, no matter the traffic's origin.
Thus, when your device tried to connect to your server, the router (or rather, the switch) intercepted the connection and replied with a TCP RST instead, resulting in the "Connection refused" error. If this is the case, change the router's access control list to apply only to traffic originating from the Internet/PPPoE, or provide proper exceptions for local IP address ranges. There might be something on your server that produces an HTTP 301/302 redirect to http://localhost/. This could explain why your other device showed localhost in the address bar even when you entered a real LAN IP address. Namely, the first request went all right; but due to some misconfiguration or misperception on the server or in server-side scripts, the client got redirected to http://localhost/, which is an incorrect address for the reasons already outlined in the main section of the answer... The end result is a "Connection refused" error on the second request, with http://localhost/ in the address bar. Don't debug this with a browser, since an HTTP 301 redirect is cached. Use GNU wget or a similar tool to issue the request from the other device, and look at its output carefully. If you see a redirection status come up before the "Connection refused" error, then it is not a network problem, but rather a server problem. If this happens to be the case, you will need to find out what caused the server to produce the redirect, and fix it. If you got this web root from somewhere else, it might contain a configuration which produces a redirection when it finds that the client accessed it using a non-canonical host name. (This is very common: when you go to www.stackexchange.com, it produces an HTTP 301 redirect to stackexchange.com.) If your web application perceives its canonical host name to be just localhost, then it will inadvertently produce a problematic redirection to http://localhost/ . In this case, specifically check your .htaccess and your application's configuration, then disable said redirect.
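To make any redirect visible without a browser's caching getting in the way, wget can be told to print the raw response headers and not follow redirects. A sketch, with 192.168.1.10 as a placeholder for the server's real LAN address:

```shell
# Show the server's response headers, including any 301/302 redirect to
# http://localhost/ that a browser would silently follow and then cache.
# --max-redirect=0 stops wget from following the redirect itself.
wget --max-redirect=0 --server-response -O /dev/null http://192.168.1.10/
```

If the headers show a Location: http://localhost/ line, the problem is server-side configuration, not the network.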
Unable to access Apache webserver from local home network
1,425,403,167,000
I want to use ULOG and send firewall logs to ulogd2 iptables -A INPUT -i eth0 -j ULOG gives me following error: iptables: No chain/target/match by that name I have these LOG-related options enabled in my kernel: CONFIG_NETFILTER_NETLINK_LOG=y CONFIG_NF_LOG_COMMON=y CONFIG_NETFILTER_XT_TARGET_LOG=y CONFIG_NETFILTER_XT_TARGET_NFLOG=y CONFIG_NF_LOG_IPV4=y What else do I need for ULOG to work ? I don't see any ULOG options (nothing found when I search for ULOG) My kernel is 4.4.
ULOG has been deprecated, and if you don't have module ipt_ULOG you should move on to the newer NFLOG target. ulogd handles both of these, even though it is still called "ulog". Check out man iptables-extensions.
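The NFLOG equivalent of the question's rule might look like the following sketch; the group number 2 is an arbitrary choice here and must match whatever group ulogd2 is configured to listen on in ulogd.conf:

```shell
# Log incoming packets on eth0 via the netlink-based NFLOG target,
# replacing the removed ULOG target. Run as root.
iptables -A INPUT -i eth0 -j NFLOG --nflog-group 2
```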
iptables: No chain/target/match ULOG
1,425,403,167,000
I am building a router with a RPi (Raspbian). It has 3 network interfaces: eth0: Connected to the Internet (IP/Gateway from DHCP) wlan0, wlan1: local WLAN interfaces (each serving its own SSID as AP) Moreover, I have a VPN connection tun0 to a remote network, which is itself connected to the internet. Now I want: all traffic from wlan0 to be routed through tun0 and all traffic from wlan1 to be routed through eth0 As a result, I want two WLANs, one with direct internet access and one with internet access through the VPN connection. This was very easy using two different devices, but how can I do this with only one default gateway?
You need to create a second routing table and use policy based routing. Applied to your case you need to: Setup the first default route using the main routing table. This table will be used for the traffic generated locally and for the traffic from wlan1 : ip route add default via <gateway_reachable_by_eth0> table main Create a second routing table vpn: echo 200 vpn >> /etc/iproute2/rt_tables Add a default route to the new table: ip route add default via <gateway_reachable_by_tun0> table vpn Indicate that all traffic from wlan0 should use this new table: ip rule add from <wlan0_subnet> lookup vpn
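Once the rules are in place, the setup can be checked without generating any traffic (output shapes will vary with your addresses):

```shell
# The wlan0 subnet should appear with "lookup vpn" in the policy rules.
ip rule show

# Inspect each table's default route separately.
ip route show table main
ip route show table vpn
```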
Configure two Routers on one Device
1,425,403,167,000
I'm running FreeBSD 10.3-p4 and observed some strange behavior. When restarting the machine, pf starts due to this /etc/rc.conf entry # JAILS cloned_interfaces="${cloned_interfaces} lo1" gateway_enable="YES" ipv6_gateway_enable="YES" # OPENVPN -> jails cloned_interfaces="${cloned_interfaces} tun0" # FIREWALL pf_enable="YES" pf_rules="/etc/pf.conf" fail2ban_enable="YES" # ... other services ... # load ezjail ezjail_enable="YES" but ignores all rules concerning the jails. So I have to reload the rules manually to get things working: sudo pfctl -f /etc/pf.conf My pf.conf reads: #external interface ext_if = "bge0" myserver_v4 = "xxx.xxx.xxx.xxx" # internal interfaces set skip on lo0 set skip on lo1 # nat all jails jails_net = "127.0.1.1/24" nat on $ext_if inet from $jails_net to any -> $ext_if # nat and redirect openvpn vpn_if = "tun0" vpn_jail = "127.0.1.2" vpn_ports = "{8080}" vpn_proto = "{tcp}" vpn_network = "10.8.0.0/24" vpn_network_v6 = "fe80:dead:beef::1/64" nat on $ext_if inet from $vpn_network to any -> $ext_if rdr pass on $ext_if proto $vpn_proto from any to $myserver_v4 port $vpn_ports -> $vpn_jail # nsupdate jail nsupdate_jail="127.0.1.3" nsupdate_ports="{http, https}" rdr pass on $ext_if proto {tcp} from any to $myserver_v4 port $nsupdate_ports -> $nsupdate_jail # ... other jails ... # block all incoming traffic #block in # pass out pass out # block fail2ban table <fail2ban> persist block quick proto tcp from <fail2ban> to any port ssh # ssh pass in on $ext_if proto tcp from any to any port ssh keep state I had to disable blocking all incoming traffic as ssh via IPv6 stopped working. Any suggestions on how to fix the problem?
The problem here is that /etc/rc.d/pf runs before /usr/local/etc/rc.d/ezjail, so the kernel hasn't configured the jailed network by the time it tries to load the firewall rules. You might be tempted to alter the pf script to start after ezjail, but that's not a good idea - you want your firewall to start early in the boot process, but jails get started quite late on. service -r shows what order your rc scripts will run. You don't show any of your pf.conf rules, but my guess is that they use static interface configuration. Normally, hostname lookups and interface name to address translations are carried out when the rules are loaded. If a hostname or IP address changes, the rules need to be reloaded to update the kernel. However, you can change this behaviour by surrounding interface names (and any optional modifiers) in parentheses, which will cause the rules to update automatically if the interface's address changes. As a simple (and not very useful) example: ext_if="em0" pass in log on $ext_if to ($ext_if) keep state The pf.conf manpage is very thorough. In particular, the "PARAMETERS" section is relevant here.
Freebsd: pf firewall doesn't work on restart
1,425,403,167,000
I've got a raspberry pi set up to send me periodic emails. As it's connected to the internet 24/7, I need IPTables set up properly. I want to allow incoming SSH and allow emails to send out on port 587 via SMTP. I've came up with this IPTables script, is it correct? If not, can you tell me why. Thanks. sudo iptables -P INPUT DROP sudo iptables -P OUTPUT DROP sudo iptables -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT sudo iptables -A OUTPUT -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT sudo iptables -A OUTPUT -p tcp --dport 587 -j ACCEPT sudo iptables -A INPUT -p tcp --sport 587 -j ACCEPT
An iptables ruleset like this works fine: *filter :INPUT DROP [0:0] :FORWARD DROP [0:0] :OUTPUT ACCEPT [1:156] -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT -A INPUT -i lo -j ACCEPT -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT -A INPUT -p tcp -m state --state NEW -m tcp --dport 587 -j ACCEPT -A INPUT -j REJECT --reject-with icmp-host-prohibited -A FORWARD -j REJECT --reject-with icmp-host-prohibited COMMIT The first policy DROPs all incoming connections by default, the second DROPs all forwarding by default, and the third ACCEPTs output by default. Why accept? Leaving outbound connections open is not too unsafe, IMHO, and closing them can make the firewall configuration quite a bit more difficult. -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT accepts connections in the RELATED and ESTABLISHED states; the rest is easy: -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT -A INPUT -p tcp -m state --state NEW -m tcp --dport 587 -j ACCEPT -A INPUT -j REJECT --reject-with icmp-host-prohibited -A FORWARD -j REJECT --reject-with icmp-host-prohibited COMMIT accept TCP port 22, accept TCP port 587, and forbid all other connections. You can save this to a file and then load it with iptables-restore < firewall.file and check it with nmap -sS yourhost
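If you prefer to keep the question's strict DROP policy on OUTPUT as well, a stateful variant of the original attempt could look like this sketch (run as root; untested on the Pi itself):

```shell
# Default-deny in both directions.
iptables -P INPUT DROP
iptables -P OUTPUT DROP

# Allow replies belonging to already-tracked connections in both directions.
iptables -A INPUT  -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Incoming SSH.
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -j ACCEPT

# Outgoing mail submission on port 587.
iptables -A OUTPUT -p tcp --dport 587 -m state --state NEW -j ACCEPT
```

The conntrack state rules replace the question's manual --sport matching, which is both more fragile and easier to spoof.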
How can I allow SSH and SMTP only using IPTables?
1,425,403,167,000
So... we know that we can test whether a port is open on the firewall with: telnet SERVERIP PORT ...but AFAIK there are services that can't be tested with telnet because, e.g., telnet doesn't speak the protocol that the service is using, so telnet will report that the port is closed while in reality the service is up and running. Q: First, was I correct about telnet? Second, what should I use to test that a port is open on a server (i.e. not blocked by a firewall)? Are there any Unix tools for this?
On the first question, maybe the service does not wait for interactive input. There could be other explanations, too. On the second, nmap can be used to test the firewall. There are many options. Scan the first 1,000 ports (default): nmap -v -A -PN hostname.domainname.com Or perhaps a specific range: nmap -v -A -p 10000-11000 -PN hostname.domainname.com
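For checking a single port rather than a full scan, netcat is a lighter-weight alternative (hostname and port here are placeholders):

```shell
# -z: scan without sending data, -v: report whether the connection succeeded.
nc -zv hostname.domainname.com 443
```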
What to use for firewall testing (port opened or not)
1,314,836,079,000
I recently enabled IP Forwarding on a production web server that also acts as a VPN server. What, if any, security issues did I just set myself up for? And are there any iptables rules I should set up to limit forwarding to my VPN tunnel?
You will only have security issues if your server gets compromised. Anyone or anything that gets into your server will have access to machines on your network, as if it were the server (your machines probably trust your server). You should definitely have some iptables rules! Basically you want iptables to block everything from the outside world to your firewall, except your web server port(s) and VPN server port(s). I would recommend something like Shorewall (http://www.shorewall.net/) to set this up for you, but some people think coming up with the iptables rules yourself is better, since you'll understand more about what's going on.
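A minimal sketch of such rules, assuming the VPN tunnel interface is tun0, the web server listens on 443, and the VPN service uses 1194/udp (all assumptions; adjust names and ports to your setup):

```shell
# Restrict forwarding to traffic that crosses the VPN tunnel.
iptables -P FORWARD DROP
iptables -A FORWARD -i tun0 -j ACCEPT
iptables -A FORWARD -o tun0 -m state --state ESTABLISHED,RELATED -j ACCEPT

# Inbound: only replies, the web server, and the VPN service port.
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
iptables -A INPUT -p udp --dport 1194 -j ACCEPT
iptables -P INPUT DROP
```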
Security with Kernel IP Forwarding Enabled
1,314,836,079,000
When using the iptables recent module, I can see the module settings here: $ ls -1 /sys/module/xt_recent/parameters/ and list setting for particular parameter: $ cat /sys/module/xt_recent/parameters/ip_list_tot 100 I also know I can change the defaults when loading the module. My question is, what happens when the list reaches the size of ip_list_tot? Does the recent module stop adding new IP addresses or does it "rotate" the old ones out and replacing them with new ones? I looked in the help, but could not find any explanation iptables -m recent --help Also, what is a reasonable size for ip_list_tot in a production environment, where I want to block offending IPs? The default ip_list_tot size of 100 seems to me ridiculously small. Could I experience any negative effects if I set it to 10'000?
After a (very fast) read of the source code, I would say that the oldest entry is removed: if (t->entries >= ip_list_tot) { e = list_entry(t->lru_list.next, struct recent_entry, lru_list); recent_entry_remove(t, e); } To increase this value, you can set the parameter while loading the module manually: ~$ sudo modinfo -p xt_recent ip_list_tot:number of IPs to remember per list (uint) ip_list_hash_size:size of hash table used to look up IPs (uint) ip_list_perms:permissions on /proc/net/xt_recent/* files (uint) ip_list_uid:default owner of /proc/net/xt_recent/* files (uint) ip_list_gid:default owning group of /proc/net/xt_recent/* files (uint) ip_pkt_list_tot:number of packets per IP address to remember (max. 255) (uint) ~$ sudo modprobe xt_recent ip_list_tot=10000 ~$ sudo cat /sys/module/xt_recent/parameters/ip_list_tot 10000 Make sure the module isn't in use (disable the firewall or, at least, the rules that use the recent match) before unloading/loading. To make this setting persistent, you can put a file under /etc/modprobe.d/xt_recent with the following content: options xt_recent ip_list_tot=10000 (Note this method may not work as-is and may need to be adapted depending on your distro). Regarding the performance issues you might meet if you increase this parameter's value considerably, it's quite hard to tell. It depends on your hardware, on the other tasks running on the system, etc.
Still based on reading the source code and my own background in development, I would say that the main thing you may be afraid of is the introduction of latency if, for example, the currently tested IP is the last one on the list or isn't in the list (which may occur frequently): static struct recent_table *recent_table_lookup(struct recent_net *recent_net, const char *name) { struct recent_table *t; list_for_each_entry(t, &recent_net->tables, list) if (!strcmp(t->name, name)) return t; return NULL; } Given x the complexity of list_for_each_entry() + strcmp(), the extra "cost" of setting ip_list_tot to a huge value is the time to browse the list. Final complexity may vary between 1 * x and ip_list_tot * x. Nevertheless, I guess that linked lists in the kernel are well implemented, with performance and speed as requirements. To conclude, I would advise you to benchmark ... if possible.
settings for iptables "recent" module
1,314,836,079,000
I have 2 Servers and the network looks like this: Server_A (Ubuntu) -> Firewall/Router -> Internet Server_A can connect to any server on the internet. Server_B (Ubuntu) which is directly connected to the internet. No restrictions in port forwarding or any Firewalls on Server_B I can't connect to Server_A from the internet because it is inside a local network and port forwarding is not possible here, I don't have access to the router. Can Server_A connect to Server_B so I can connect via SSH to Server_B and communicate with Server_A?
If you can run an ssh server on Server_A, then you can ssh from Server_A to Server_B and have a port on Server_B forwarded back to Server_A's ssh server. Server_A$ ssh -R 12345:localhost:22 Server_B Password: Server_B$ And then on Server_B you can now ssh to Server_A by using localhost on port 12345: Server_B$ ssh -p 12345 localhost Password: Server_A$ If you really want to expose Server_A's ssh server on the Internet you can make the port forwarding accessible from other hosts, but this requires that Server_B's sshd_config file allows GatewayPorts: Server_A$ ssh -R '*:12345:localhost:22' Server_B Password: Server_B$ and then from anywhere: Anywhere$ ssh -p 12345 Server_B Password: Server_A$ But as mentioned in a comment, be very careful you do not step over some important security policies.
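To keep such a tunnel in place without retyping the command, the same reverse forward can go into Server_A's ~/.ssh/config — a sketch; the host alias and keepalive values here are assumptions, not anything from the original answer:

```
# ~/.ssh/config on Server_A (alias and keepalive values are illustrative)
Host tunnel-to-b
    HostName Server_B
    RemoteForward 12345 localhost:22
    ServerAliveInterval 30
    ServerAliveCountMax 3
```

Then ssh -N tunnel-to-b establishes the forward, and the keepalives make dead tunnels die quickly; a tool such as autossh can restart the connection automatically.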
How to connect to a SSH server behind a firewall using another server?
1,314,836,079,000
I have two desktops and a laptop running Arch Linux. I have not setup any iptable rules on any of the machines, and I am curious if I should. One desktop is only used in my home network and is "protected" by a cheap cable modem/router/firewall that is essentially unconfigured and unmaintained (I changed the default password when I got it and haven't looked at it since in 3 years). The desktop runs standard home user applications (e.g., web browsing and Skype) as well as an SSH server. As far as I can tell the firewall default configuration blocks all incoming connections since I can only connect to the SSH server from within my home network. The other desktop is behind my University's firewall. It is more permissive (I can SSH into the machine from off campus) and hopefully better maintained. Apart from the SSH server the machine doesn't do anything network related other than web browsing and Skype. The laptop is used behind the home firewall, the University firewall and sometime on unsecured public wireless and wired networks (e.g., hotels and coffee shops). Is it worth configuring IP tables and if so should the configuration vary across the machines? Potentially related, is it worth setting up a separate independent firewall to protect my home network?
Whether or not to use iptables is ultimately up to you. You have to look at what it's capable of and what you want it do to. At a very high level, iptables can do 3 things very well: Filter inbound network traffic Filter outbound network traffic Log traffic Typically, configuring iptables is unnecessary on systems that are behind a residential router because such a router should block inbound packets by default. If you decide to expose a service to the Internet (ssh for example) and forward a port on your router to an ssh server in your network, then using iptables becomes a little more relevant. It can be used to filter traffic based on source or destination ip/network block, control the icmp errors your system responds to unwanted traffic with or even log every packet sent to anyone, ever. The question really is: do you have any services on your computers that you don't want other people to interact with? are there any external services you don't want people in your network to be able to connect to? how much logging do you want to do? Additional info: CentOS iptables HOWTO Ulogd man iptables search for REJECT
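As a concrete starting point, here is a minimal ruleset of the kind described above in iptables-save format — a sketch, not a recommendation for any particular machine: default-deny inbound, allow loopback, established traffic, ICMP and ssh; adjust the ports to your own services:

```
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -p tcp --dport 22 -m state --state NEW -j ACCEPT
COMMIT
```

Load it with iptables-restore < rules-file (as root).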
Does IP tables need to be configured or is a stand alone firewall sufficient?
1,314,836,079,000
I need to ensure on my server that the maximum number of new ssh connections per minute is not more than 5. sudo /sbin/iptables -A INPUT -p tcp --syn --dport 22 -m connlimit --connlimit-above 5 -j REJECT The iptables rule above works for me, but it will not allow new connections after one minute. Any pointers how to achieve this?
Add --reject-with tcp-reset so that the rejected connections get closed gracefully otherwise you're going to have a bunch of SYN_WAITs sitting around. sudo /sbin/iptables -A INPUT -p tcp --syn --dport 22 -m connlimit --connlimit-above 5 -j REJECT --reject-with tcp-reset
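Note that connlimit limits concurrent connections, not connections per minute. If a true per-minute rate limit is wanted, the recent match can provide it — a sketch in iptables-save format (this counts per source address, which may or may not be what you want):

```
-A INPUT -p tcp --syn --dport 22 -m recent --name SSH --set
-A INPUT -p tcp --syn --dport 22 -m recent --name SSH --update --seconds 60 --hitcount 6 -j REJECT --reject-with tcp-reset
```

The second rule rejects the 6th new connection attempt from the same address within 60 seconds, so at most 5 per minute get through.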
IPtables : Limit number of new ssh connections per minute
1,314,836,079,000
GoogleBot is hitting my server hard - and even though I have set the CrawlRate in Webmaster Tools it is still hiking up the load on my server and slowing down Apache for the rest of the normal web traffic. Is it possible to limit / rate-limit connections per second / minute using UFW based on a user agent string? If not how can I do it for GoogleBot's IP ranges?
You cannot do this with ufw directly; you need to add the right iptables rules to /etc/ufw/before.rules. I suggest you learn iptables. As a (not optimized) starting point something like -A ufw-before-input -p tcp --syn --dport 80 -m recent --name LIMIT_BOTS --update --seconds 60 --hitcount 4 -j DROP -A ufw-before-input -p tcp --dport 80 -m string --algo bm --string "NotWantedUserAgent" -m recent --name LIMIT_BOTS --set -j ACCEPT could work, where you of course need to replace NotWantedUserAgent with the correct one. These rules should limit the number of new connections per minute from a specific bot - I have not tested them and do not know if they really reduce the workload from a specific bot.
Can I limit connections per second for certain UserAgents using UFW?
1,314,836,079,000
I have a box running a fresh install of Fedora 15. I've installed TigerVNC server on it and client on my Windows machine. I've added -A INPUT -m state --state NET -m tcp -p tcp --dport 5900 -j ACCEPT to /etc/sysconfig/iptables then, added to /etc/sysconfig/vncservers: VNCSERVER="1:UNAME" VNCSERVERARGS[1]="-geometry 1024x768" then I try to start the server, but I get job failed. See bla bla for details ((mentions some files i have no idea to find)) What am I doing wrong? -thanks! iptables -nvL: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 173 12044 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 0 0 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22 1518 85858 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:5900 Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 0 0 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited Chain OUTPUT (policy ACCEPT 1613 packets, 146K bytes) pkts bytes target prot opt in out source destination`` systemctl: vncserver.service loaded failed failed LSB: start|stop|restart|try-restart|status|force-reload vncserver
The "1:user" tells the vnc server that the username user is map to display 1, so the port number to access this user via vnc is 5901. Note: "By default, VNC uses ports numbered 5900 plus the display number. In this example, the display is 1, so the port number is 5901.
Problems setting up TigerVNC and firewall
1,314,836,079,000
A few months ago, I migrated the firewall of a debian laptop from iptables to nftables, using debian's recommended procedure, and all seems to have been fine. Now, months later, I'm scrutinizing the rule-set created by that migration procedure, trying to learn the nftables syntax, and see what seem to be several counter-based rules that I don't understand and suspect might not be correct. I haven't found the nftables wiki to be a helpful educational resource, and haven't found any other on-line educational resource that addresses this kind of question: The default auto-migrated rule-set included the following: table inet filter { chain INPUT { type filter hook input priority 0; policy drop; counter packets 123 bytes 105891 jump ufw-before-logging-input counter packets 123 bytes 105891 jump ufw-before-input counter packets 0 bytes 0 jump ufw-after-input counter packets 0 bytes 0 jump ufw-after-logging-input counter packets 0 bytes 0 jump ufw-reject-input counter packets 0 bytes 0 jump ufw-track-input } The first two counter statements are examples of what caught my eye. Am I correct that they are saying "jump to the rules in section ufw-before-foo, but only after the first 123 packets and the first 105891 bytes have been received". Why not start immediately from packet 0 byte 0? Why not use a syntax >= which seems to supported by nftables? Are these numbers arbitrary? Possibly due to a glitch in the migration? The above rule-set includes a jump to the following chain, with a possibly similar issue. Here's a snippet of it: chain ufw-before-input { iifname "lo" counter packets 26 bytes 3011 accept ct state related,established counter packets 64 bytes 63272 accept ct state invalid counter packets 0 bytes 0 jump ufw-logging-deny ct state invalid counter packets 0 bytes 0 drop ... } Why are the decisions to accept based upon the receiving 26 or 64 prior packets? 
A firewall can be flushed arbitrarily at any time after power-up and network discovery/connection, so why drop all those initial packets? As I mentioned above, these rules have been in place for months, so I'm wondering what negative effect they could possibly have had. The only candidate that I've come up with is that the laptop can sometimes have a difficult time making a wifi connection (especially after resuming from sleep) while a second nearby laptop has no such trouble. Could these rules dropping packets be the culprit for difficulty negotiating a wifi connection?
No, the explanation is much simpler: the counter statement has optional arguments packets and bytes which display the number of packets and bytes counted by the counter, i.e. how many packets reached that rule. Without a filter before the counter, any packet (including on loopback) will thus increase the values, so this can happen very early and fast. The conversion tool saw iptables' default counters and chose to also translate their values for fidelity. So usually when you write a rule you don't set those values; you put a simple counter alone, and they both get a default of 0. When packets traverse the rule those values increase. Optionally, especially when used as a named counter stateful object where it can even be displayed and reset (using something like nft reset counters) to do some form of accounting, one can set those values when writing the ruleset: usually when restoring the ruleset saved right before reboot. A counter can only be reset if used as the named variant, not as an "inlined" anonymous counter. Counters cannot be used to alter the match in a rule; there's no other option than to display them. Any wifi problem you have cannot be caused by any counter statement. If now you want to use packet counts to limit the usage of rules in an nftables firewall, there are a few different methods depending on needs: the other stateful object quota (which again can be used anonymously, but can only be reset if used named). You can then, for example, have a rule never match or start matching depending on its count. there's the limit statement to count rates. For example if you fear a log rule could flood log files, you can use this to limit the number of logs done. It can also limit the rate to some resource (usually used with other filters with conntrack). with a recent enough nftables (0.9.2?) 
and kernel (4.18?), there's the ct count conntrack expression to count the number of established connections using conntrack, usually to limit concurrent access to some resource (ssh, web server...)
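For illustration, the named counter, quota and limit statements mentioned above can be combined in a small ruleset — a sketch only; table/object names and thresholds are arbitrary, and the syntax assumes a reasonably recent nftables:

```
table inet demo {
    counter ssh_counter { }
    quota http_quota { over 500 mbytes }
    chain input {
        type filter hook input priority 0; policy accept;
        # named counter: counts ssh packets; list/reset it via `nft reset counters`
        tcp dport 22 counter name "ssh_counter"
        # named quota: start dropping HTTP traffic once 500 MB have been seen
        tcp dport 80 quota name "http_quota" drop
        # limit: log invalid packets at most 5 times per minute to avoid log floods
        ct state invalid limit rate 5/minute log prefix "invalid: "
        ct state invalid drop
    }
}
```

Load with nft -f rules-file; nft list counters and nft list quotas then show the accumulated values.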
firewall: nftable counter rules
1,314,836,079,000
I have two CentOS 7 boxes, called Turbo and Demo. I started a daemon/service on the first one, Turbo, and wish to call the service from the second one, Demo. I started the service to monitor port 8081. Another daemon uses port 8080, so I thought to use the next port, 8081. Neither box has the firewall daemon, FirewallD, running. I added an entry to the iptables and restarted the system. Here is the contents of the file. # Generated by iptables-save v1.4.21 on Tue Nov 10 13:00:36 2015 *filter :INPUT ACCEPT [900:143014] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [781:136535] -A INPUT -p tcp -m state --state NEW -m tcp --dport 8081 -m comment --comment "DataMover Service Port" -j ACCEPT COMMIT # Completed on Tue Nov 10 13:00:36 2015 Here is the expected result when trying to open the port using the Firewall (disabled). [root@Turbo Downloads]# firewall-cmd --permanent --add-port=8081/tcp FirewallD is not running The curl command that I used is: [hts@jmr-server2 raid1]$ curl -H "Accept: application/xml" "http://192.168.20.88:8081/base/TestService/sayHello" I did a Google search and most results indicate editing iptables or opening the port on the firewall. I looked at this site, which states common port numbers for CentOS. It states the following for the two ports: 8080 webcache World Wide Web (WWW) caching service 8081 tproxy Transparent Proxy Per request, here is the result of iptables -L. 
[root@Turbo Downloads]# service iptables restart Redirecting to /bin/systemctl restart iptables.service [root@Turbo Downloads]# iptables -L Chain INPUT (policy ACCEPT) target prot opt source destination ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:tproxy /* TestService Service Port */ Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination [root@Turbo Downloads]# Here is the netstat output: [root@Turbo Downloads]# netstat -nap|grep 8081 tcp6 0 0 127.0.0.1:8081 :::* LISTEN 7272/java [root@Turbo Downloads]# I am doing testing using localhost, but wanted to take the next step and go to a different machine. Here is a localhost result: [root@Turbo Downloads]# curl -H "Accept: application/xml" "http://localhost:8081/base/TestService/getHello" <?xml version="1.0" encoding="UTF-8"?> <note> <to>Tove</to> <from>Jani</from> <heading>Reminder</heading> <body>Don't forget me this weekend!</body> </note> I started the service from a terminal window. I am not yet at the point where I load the service automatically. Is curl access from one server to the other only possible on port 8080 and not on a different port, or is there some other problem?
This is the problem -- your netstat output shows the service listening on localhost instead of an externally-accessible IP: 127.0.0.1:8081 Edit the service to listen on an external (or the 'any', 0.0.0.0) address and restart it.
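How to change the bind address depends on the framework the Java service is built on, so the option name below is a hypothetical placeholder, not the real TestService setting — the point is only that the listen address must be 0.0.0.0 (or a LAN IP) rather than 127.0.0.1:

```
# hypothetical service configuration — the actual key name varies by framework
http.bind.address=0.0.0.0
http.bind.port=8081
```

After restarting, netstat -nap | grep 8081 should show 0.0.0.0:8081 (or :::8081) instead of 127.0.0.1:8081, and curl from the other machine will reach it on any open port, 8080 or otherwise.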
Connection refused on port 8081 using curl
1,314,836,079,000
Say there are several iptables scripts (run at boot time), all of which run something like iptables -A ... to add rules. I'm thinking this could be improved, turning all those shell scripts into text files generated by iptables-save. But I must be doing something wrong, trying to read all those rulesets. The script run at boot time would loop through those files and read them using iptables-restore. Of course with -n or --noflush. This works for some rules (stored in the default chains) but not for most of my rules which are in other chains. Below is an example of 2 rulesets that flush each other (reading set a, check; reading set b, check but set a is gone). How would you read a bunch of iptables rulesets? Example: $ cat fake1-a.rules *nat :PREROUTING ACCEPT [7:997] :INPUT ACCEPT [7:997] :OUTPUT ACCEPT [28:1810] :POSTROUTING ACCEPT [28:1810] COMMIT *mangle :PREROUTING ACCEPT [344:84621] :INPUT ACCEPT [344:84621] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [296:37971] :POSTROUTING ACCEPT [296:37971] COMMIT *filter :INPUT ACCEPT [102:26513] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [89:10767] :TESTCHAIN - [0:0] -A TESTCHAIN -p tcp -m tcp --dport 12345 -j DROP COMMIT $ cat fake1-b.rules *nat :PREROUTING ACCEPT [7:997] :INPUT ACCEPT [7:997] :OUTPUT ACCEPT [28:1810] :POSTROUTING ACCEPT [28:1810] COMMIT *mangle :PREROUTING ACCEPT [344:84621] :INPUT ACCEPT [344:84621] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [296:37971] :POSTROUTING ACCEPT [296:37971] COMMIT *filter :INPUT ACCEPT [102:26513] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [89:10767] :TESTCHAIN - [0:0] -A TESTCHAIN -p tcp -m tcp --dport 54321 -j DROP COMMIT # cat fake1-a.rules | iptables-restore --noflush # iptables -nL | grep DROP DROP tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:12345 # cat fake1-b.rules | iptables-restore --noflush # iptables -nL | grep DROP DROP tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:54321
The --noflush option for iptables-restore doesn't work for user-defined chains, such as TESTCHAIN, only builtin chains. Your best bet is to consolidate all of the TESTCHAIN rules into a single file and import that ruleset using iptables-restore. You could find all the rules with something along the lines of: egrep -r "\sTESTCHAIN\s" firewall_rules_directory/*
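A sketch of that consolidation as a small shell script — the directory and file names are illustrative, and the final iptables-restore step needs root so it is left as a comment:

```shell
# demo input: two per-service rule fragments (paths are illustrative)
mkdir -p rules.d
printf -- '-A TESTCHAIN -p tcp -m tcp --dport 12345 -j DROP\n' > rules.d/fake1-a.rules
printf -- '-A TESTCHAIN -p tcp -m tcp --dport 54321 -j DROP\n' > rules.d/fake1-b.rules

# merge every TESTCHAIN rule into one restore file, declaring the chain once
{
    printf '*filter\n:TESTCHAIN - [0:0]\n'
    grep -h '^-A TESTCHAIN' rules.d/*.rules
    printf 'COMMIT\n'
} > combined.rules

cat combined.rules
# load once with: iptables-restore --noflush < combined.rules
```

This way the user-defined chain is only declared (and hence flushed) a single time, so no ruleset file clobbers another.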
How to combine iptables rulesets
1,314,836,079,000
I am currently testing netfilter / nftables / nft. As a starting point, I have made a ruleset that drops nearly everything in and out, and have written the rules so that every dropped packet is logged. As always, and as it probably has to be, I don't understand the very first thing the machine tries to do and that I notice in the logs: ... IN= OUT=enp0s3 ARP HTYPE=37 PTYPE=0x90bd OPCODE=21 According to this document: Opcode 21 means MARS-Grouplist-Reply. Neither did I ever hear of it, nor did I find a single reference to it on the net, except in RFCs or IANA documents, but it is nowhere explained there. HTYPE 37 means HFI hardware. As with the opcode, I have never heard of such a thing, nor did I find any explanation on the net. I am pretty sure that I don't have that type of hardware. In this case, the networking hardware is a virtual NIC in QEMU. PTYPE 0x90bd: During today's research, I have seen a list of protocol types; unfortunately, I can't remember where. But anyway, 0x90bd for sure was not mentioned there. Could somebody please explain what the opcode, the hardware type and the protocol type mean, and why the system in question wants to send such packets? This happens in a vanilla debian Bullseye installation, up to date at the time of writing, in a virtual machine with virtualized standard x64 Intel hardware and virtio NIC.
This is a bug in Netfilter's ARP logs. There was a bug report about this problem. It was discovered that ARP didn't log using the correct data (it used data from link layer header instead of ARP's network layer). A patch was committed to fix this a few days later and appeared in kernel 5.19: netfilter: nf_log: incorrect offset to network header NFPROTO_ARP is expecting to find the ARP header at the network offset. In the particular case of ARP, HTYPE= field shows the initial bytes of the ethernet header destination MAC address. netdev out: IN= OUT=bridge0 MACSRC=c2:76:e5:71:e1:de MACDST=36:b0:4a:e2:72:ea MACPROTO=0806 ARP HTYPE=14000 PTYPE=0x4ae2 OPCODE=49782 NFPROTO_NETDEV egress hook is also expecting to find the IP headers at the network offset. Fixes: 35b9395104d5 ("netfilter: add generic ARP packet logger") Reported-by: Tom Yan Signed-off-by: Pablo Neira Ayuso It appears this fix was not backported to vanilla kernel 5.10, possibly because the file to patch was not yet consolidated elsewhere so was in a different place, or this patch was just missed for a backport. When fixed (eg: vanilla 5.19.17): ah = skb_header_pointer(skb, nhoff, sizeof(_arph), &_arph); When not fixed (eg: vanilla 5.10.174, so including Debian bullseye's unless patched): ah = skb_header_pointer(skb, 0, sizeof(_arph), &_arph); Someone has to make a bug report about it. Meanwhile you could try a bullseye-backports kernel (eg currently: 6.1.12-1~bpo11+1) which is guaranteed to not have it anymore. Tested affected on today's bullseye kernel (5.10.162). Just logging any ARP table arp t { chain cout { type filter hook output priority filter; policy accept; log } } will log HTYPE=65535 when trying to reach a non-existent IP address on the LAN because it incorrectly uses the start of the broadcast MAC address and that's what is used as described in the patch. The same test done with the kernel in package linux-image-6.1.0-0.deb11.5-amd64-unsigned logs instead HTYPE=1 as should be.
What are ARP hardware type 37, opcode 21 and protocol type 0x90bd?
1,314,836,079,000
Current system: Distro: Ubuntu 20.04 kernel: 5.4.0-124-generic nft: nftables v0.9.3 (Topsy) I am new and learning nftables, Here is my nft ruleset currently: $sudo nft list ruleset taxmd-dh016d-02: Wed Sep 21 12:09:08 2022 table inet filter { chain input { type filter hook input priority filter; policy accept; } chain forward { type filter hook forward priority filter; policy accept; } chain output { type filter hook output priority filter; policy accept; ip daddr 192.168.0.1 drop } } I want to delete ip daddr 192.168.0.1 drop from the output chain. I tried the following: sudo nft del rule inet filter output ip daddr 192.168.0.1 drop sudo nft delete rule inet filter output ip daddr sudo nft 'delete element ip daddr 192.168.0.1 drop' sudo nft 'delete element ip' sudo nft delete rule filter output ip daddr 192.168.0.1 drop But nothing works, I keep getting this error: Error: syntax error, unexpected inet delete inet filter chain output ip daddr 192.168.0.1 drop ^^^^ Why can't I delete a specific element? I would think this would be straight forward, but I am missing something.
The wiki says what you tried is not yet implemented: You have to obtain the handle to delete a rule. The example is: $ sudo nft -a list table inet filter table inet filter { ... chain output { type filter hook output priority 0; ip daddr 192.168.1.1 counter packets 1 bytes 84 # handle 5 } } The -a shows the assigned handle "5" as a comment, so you can $ sudo nft delete rule filter output handle 5
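Since the handle must be looked up before the delete, the two steps can be combined in a small shell sketch. Here the nft -a listing is captured in a variable so the parsing step is reproducible; on a live system you would pipe nft -a list chain inet filter output directly, and the delete itself (which needs root) is left as a comment:

```shell
# sample `nft -a` output for the chain in the question
listing='table inet filter {
    chain output {
        type filter hook output priority filter; policy accept;
        ip daddr 192.168.0.1 drop # handle 5
    }
}'

# the handle is the last field of the matching rule line
handle=$(printf '%s\n' "$listing" | awk '/ip daddr 192.168.0.1 drop/ {print $NF}')
echo "$handle"    # → 5
# then: sudo nft delete rule inet filter output handle "$handle"
```

This assumes the rule text matches exactly one rule in the chain; if it matches several, each has its own handle and must be deleted individually.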
How do I delete a specific element in a chain in nftables?
1,314,836,079,000
In Linux, you can do NAT port redirection with a command like this: iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8000 What is the equivalent with BSD's PF?
You can do it like this with Packet Filter : pass in on em0 proto tcp from any to any port 80 rdr-to 192.168.1.20 port 8000 Change em0 with your network interface, and change the IP address to suit your needs. Read more : http://www.openbsd.org/faq/pf/rdr.html#filter
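For comparison, on PF releases before OpenBSD 4.7 the same redirection was written with the older separate rdr rule syntax:

```
rdr pass on em0 proto tcp from any to any port 80 -> 192.168.1.20 port 8000
```

The pass keyword in the rdr rule makes the redirected traffic bypass the filter rules, just as pass ... rdr-to does in the newer syntax.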
How do you do NAT port redirection with PF?
1,314,836,079,000
I'm trying to properly setup the firewall on my gateway (OpenBSD 4.7) using pf to allow amule (on 10.0.0.104) to operate properly as discussed here: Now I know pf has changed a bit from OpenBSD 4.7 to 4.9 with some of the rules being rewritten and I believe the '->' symbol is no longer used with port forwarding. So my question is: Which rule set do I use for pf on OpenBSD 4.7 on the gateway? This: # pass in on $int_if proto tcp from any to any port 4662 rdr-to 10.0.0.104 # pass in on $int_if proto udp from any to any port 4672 rdr-to 10.0.0.104 # pass in on $int_if proto udp from any to any port 4665 rdr-to 10.0.0.104 or this: # rdr pass on egress proto tcp to port 4662 -> 10.0.0.104 # rdr pass on egress proto udp to port 4672 -> 10.0.0.104 # rdr pass on egress proto udp to port 4665 -> 10.0.0.104 The OBSD site has updated everything to reflect 4.9 now and no longer seems to have the old pf stuff up there.
You want the first set. Regarding the OpenBSD documentation, you can consult the man pages for the most accurate info; if you don't have them installed you can get them from OpenBSD's site: http://www.openbsd.org/cgi-bin/man.cgi?query=pf.conf&apropos=0&sektion=0&manpath=OpenBSD+4.7&arch=amd64&format=html They keep all versions and there is a drop down that lets you select the specific one you are using.
Setting up the firewall for amule on OpenBSD 4.7 gateway
1,314,836,079,000
I would like to limit the number of connections to some ports. For example: I would like to allow just 2 connections to ports between 2300 and 2500 on my server. But I don't know how I can do that using iptables. Is additional software needed? Any suggestion?
You can do this with nftables, using kernel >= 4.18 (tested here with kernel 5.3) and nftables >= 0.9.1 for its connlimit's count feature (and the dynamic flag used here). It's more flexible than iptables's connlimit because you can choose, when creating the meter set, the selector(s) and masks on which the limit will be applied, while only a few possible selectors (not including port) exist for iptables, which would have probably required to have one rule per port. Here's the adaptation of the wiki example, except we track incoming TCP ports instead of outgoing IP addresses to address OP's question. Any TCP connection to the same local port between 2300 and 2500 after having already established two, will be rejected with TCP RST. I understand ct state new is just an optimization to avoid having to try and add from packet path a new element for every incoming packet in the connection rather than just the first. nft rules file to load with nft -f: flush ruleset table ip my_filter_table { set my_connlimit { type inet_service size 65535 flags dynamic } chain my_input_chain { type filter hook input priority filter; policy accept; tcp dport 2300-2500 ct state new add @my_connlimit { tcp dport ct count over 2 } counter reject with tcp reset } } Matching connections will create entries in my_connlimit: the selector entries dynamically created, rather than the current counts (which are handled by connlimit using conntrack's entries). For this specific case setting the set's size to 2500-2300+1=201 would probably have been enough. The added elements will disappear automatically (at least on kernel 5.3) when there's no associated count anymore (ie: all connections on this port were closed). 
Example after having established one or two connections on port 2301: # nft list set ip my_filter_table my_connlimit table ip my_filter_table { set my_connlimit { type inet_service size 65535 flags dynamic elements = { 2301 ct count over 2 } } } UDP would have worked the same except conntrack will usually timeout the entry between 30s and 120s (was 180s before) after last activity since there's no actual connection. It's also possible to use a concatenation instead of a simple set as meter, for example to limit this per server's IP plus port rather than just per port for a server having multiple IPs, like this: flush ruleset table ip my_filter_table { set my_connlimit { type ipv4_addr . inet_service size 65535 flags dynamic } chain my_input_chain { type filter hook input priority filter; policy accept; tcp dport 2300-2500 ct state new add @my_connlimit { ip daddr . tcp dport ct count over 2 } counter reject with tcp reset } } Note: to avoid counting local connections in the meter, connections going through the lo device should be bypassed. For example: # nft insert rule ip my_filter_table my_input_chain iif lo accept
How to Limit Number of connection to specific port in linux?
1,314,836,079,000
I am seeing journal entries such as the following, which appear at regular 4-second intervals: Jan 22 19:31:00 tara kernel: OUT-global:IN= OUT=enp3s0f2 SRC=fe80:0000:0000:0000:56e4:c37c:30cc:668f DST=ff02:0000:0000:0000:0000:0000:0000:0002 LEN=48 TC=0 HOPLIMIT=255 FLOWLBL=158870 PROTO=ICMPv6 TYPE=133 CODE=0 Jan 22 19:31:04 tara kernel: OUT-global:IN= OUT=enp3s0f2 SRC=fe80:0000:0000:0000:56e4:c37c:30cc:668f DST=ff02:0000:0000:0000:0000:0000:0000:0002 LEN=48 TC=0 HOPLIMIT=255 FLOWLBL=158870 PROTO=ICMPv6 TYPE=133 CODE=0 Jan 22 19:31:08 tara kernel: OUT-global:IN= OUT=enp3s0f2 SRC=fe80:0000:0000:0000:56e4:c37c:30cc:668f DST=ff02:0000:0000:0000:0000:0000:0000:0002 LEN=48 TC=0 HOPLIMIT=255 FLOWLBL=158870 PROTO=ICMPv6 TYPE=133 CODE=0 Jan 22 19:31:12 tara kernel: OUT-global:IN= OUT=enp3s0f2 SRC=fe80:0000:0000:0000:56e4:c37c:30cc:668f DST=ff02:0000:0000:0000:0000:0000:0000:0002 LEN=48 TC=0 HOPLIMIT=255 FLOWLBL=158870 PROTO=ICMPv6 TYPE=133 CODE=0 RFC4890 - Recommendations for Filtering ICMPv6 Messages in Firewalls lists Router Solicitation (Type 133) in Section 4.4.1 - Traffic That Must Not Be Dropped. But it seems that my configuration is indeed dropping them. 
My iptables are generated by firehol, configured thus: version 6 # ssh on port 5090 (ssh is a built-in service name) server_ssh_hidden_ports="tcp/5090" client_ssh_hidden_ports="default" # mosh server_mosh_ports="udp/60001:60020" # Mosh uses 60001 to 60999 counting up client_mosh_ports="default" # NoMachine (nxserver is a built-in, but seemingly on incorrect ports) server_nomachine_ports="tcp/4000" client_nomachine_ports="default" # Deluge server_deluge_ports="tcp/8112" client_deluge_ports="default" # Zerotier-one interface zt0 zerotier policy reject # be nicer than default "drop" on internal network protection strong server "ssh_hidden mosh" accept with limit 8/min 10 # rate/period [burst] server "nomachine deluge" accept with limit 8/min 10 # rate/period [burst] #server "ssh_hidden nomachine" accept with recent recent-zerotier 30 6 # name, seconds, attempts per period client all accept # All interfaces - look at fallthrough if putting this non-last as it didn't work without it interface any global protection strong server ssh_hidden accept with limit 8/min 10 client all accept How do I remove these noisy log messages?
As mentioned in FireHOL IPv6 Setup, add the following to the top of your firehol.conf: ipv6 interface any v6interop proto icmpv6 client ipv6neigh accept server ipv6neigh accept client ipv6mld accept client ipv6router accept policy return
Prevent dropping of IPv6 Router Solicitation (Type 133) packets
1,314,836,079,000
I just got a new dedicated server with CentOS and I'm trying to debug some network problems. In doing so, I found over one thousand iptables entries. Is this the default on a CentOS system? Is there some firewall package that might be guilty of doing that?
Is this the default on a CentOS system? No. The default one is below. Is there some firewall package that might be guilty of doing that? Probably. You don't say what the entries are but if they're banning CIDR blocks I'd guess your server has a firewall like APF or CSF that can subscribe to blacklist like Spamhaus' DROP and that is how your rules are being generated. Alternatively, there might be some cron job which does it all. If you do a grep -rl iptables /etc/* that will tell you all the files that mention iptables and hopefully track down what is generating your entries. Here's the default iptables from /etc/sysconfig/iptables: # Firewall configuration written by system-config-securitylevel # Manual customization of this file is not recommended. *filter :INPUT ACCEPT [0:0] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [0:0] :RH-Firewall-1-INPUT - [0:0] -A INPUT -j RH-Firewall-1-INPUT -A FORWARD -j RH-Firewall-1-INPUT -A RH-Firewall-1-INPUT -i lo -j ACCEPT -A RH-Firewall-1-INPUT -p icmp --icmp-type any -j ACCEPT -A RH-Firewall-1-INPUT -p 50 -j ACCEPT -A RH-Firewall-1-INPUT -p 51 -j ACCEPT -A RH-Firewall-1-INPUT -p udp --dport 5353 -d 224.0.0.251 -j ACCEPT -A RH-Firewall-1-INPUT -p udp -m udp --dport 631 -j ACCEPT -A RH-Firewall-1-INPUT -p tcp -m tcp --dport 631 -j ACCEPT -A RH-Firewall-1-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT -A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited COMMIT
1000 iptables entries on CentOS?
1,474,297,842,000
I've installed a server (WSO2 ESB) on my laptop. When I run it, going to the browser and typing the console address https://10.13.6.75:9443/carbon works normally, but if I try it from another machine on the network, it times out. Some observations:

- ping works
- telnet responds on the SSH port (22) but not on 23, 8080, 9443, etc.

I've already executed service firewalld stop and service iptables stop, and disabled SELinux, but no results. Is there another process involved that is blocking the requests?
Fedora 19 uses the firewall firewalld.service. You can see if it's running from a terminal using this command:

    $ systemctl status firewalld.service
    firewalld.service - firewalld - dynamic firewall daemon
       Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled)
       Active: active (running) since Sat 2014-01-11 17:02:00 EST; 4 days ago
     Main PID: 564 (firewalld)
       CGroup: name=systemd:/system/firewalld.service
               └─564 /usr/bin/python /usr/sbin/firewalld --nofork --nopid

    Jan 11 17:02:00 greeneggs.bubba.net systemd[1]: Started firewalld - dynamic firewall d...n.

You can disable it, temporarily, like this:

    $ sudo systemctl stop firewalld.service

Now it's off:

    $ systemctl status firewalld.service
    firewalld.service - firewalld - dynamic firewall daemon
       Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled)
       Active: inactive (dead) since Thu 2014-01-16 10:20:42 EST; 11s ago
     Main PID: 564 (code=exited, status=0/SUCCESS)
       CGroup: name=systemd:/system/firewalld.service

    Jan 11 17:02:00 greeneggs.bubba.net systemd[1]: Started firewalld - dynamic firewall d...n.
    Jan 16 10:20:39 greeneggs.bubba.net systemd[1]: Stopping firewalld - dynamic firewall .....
    Jan 16 10:20:42 greeneggs.bubba.net systemd[1]: Stopped firewalld - dynamic firewall d...n.

To restart it:

    $ sudo systemctl start firewalld.service

If you're new to Fedora, or just new to 19, I highly suggest reading the Fedora Project's documentation on firewalld: https://fedoraproject.org/wiki/FirewallD

What ports are open? You can use the command line tool nmap to see what ports are accepting connections on your system:

    $ sudo nmap -sS -P0 192.168.1.161/32

    Starting Nmap 6.40 ( http://nmap.org ) at 2014-01-16 11:06 EST
    Nmap scan report for 192.168.1.161
    Host is up (0.000019s latency).
    Not shown: 998 closed ports
    PORT    STATE SERVICE
    22/tcp  open  ssh
    111/tcp open  rpcbind

    Nmap done: 1 IP address (1 host up) scanned in 0.11 seconds

Be sure to substitute your system's IP address in the above command.
netstat

Alternatively you can use the command netstat to see what ports are in use and which PIDs and IPs they're bound to:

    $ sudo netstat -anpt | grep :22
    tcp        0      0 0.0.0.0:22             0.0.0.0:*           LISTEN      894/sshd
    tcp        0      0 192.168.1.161:52732    67.253.170.83:22    ESTABLISHED 5023/ssh
    tcp6       0      0 :::22                  :::*                LISTEN      894/sshd

NOTE: This lists the IP addresses that have sshd bound to port 22.

Binding to ports on specific interfaces

Many people get confused by this, but each interface in IPv4 has its own set of ports. When you start a service, you need to make sure you tell the app whether you want it to bind to port XYZ on 127.0.0.1 or to port XYZ on your NIC card's IP address. Your application appears to provide this type of feature. It's discussed in this documentation: http://docs.wso2.org/display/ESB460/Setting+Up+Host+Names+and+Ports

There's an XML file, according to the docs, where you can change the IP to bind to:

    <parameter name="bind-address" locked="false">hostname or IP address</parameter>
Fedora 19: can't disable the firewall
1,474,297,842,000
I have an excellent book called "Linux Firewalls: Attack Detection and Response" by Michael Rash. I have a few questions before I begin. I want to build an enterprise-grade iptables firewall and was wondering whether I will need to do my own kernel compilation as the book describes, or whether nowadays it is OK to just download a Debian/Linux server OS, plainly install iptables onto it and start configuring. Also, since nftables is a newer, improved successor to iptables, is it installed the same way? (I did not find research material on nftables.)
As for firewalls, I would be worried about where they are placed, your Internet speeds, and how many rules you need on them. Those factors can pretty much dictate the kind of hardware you will need. Be aware that for more performance/higher speeds, you may need better NIC cards. In the past, I used top-tier Intel Pro cards.

About router/firewalls in ISP settings: at the ISP I was running, I used to have a Linux router with iptables for firewalling/accounting. In time, I replaced it with a Cisco ISP-grade router, created access lists to block the few ports I needed to cut (mostly Windows default ports, SQL Server and not much more), and started sending NetFlow to a Linux server to do the customer data accounting when our capacity started growing. Beware that if you are a cable plant outfit, layer 2/3 firewall rules can be added to the DOCSIS modem configurations. You can save significant upstream bandwidth that way.

As for an open source firewall, I recommend pfSense. I used it in the past to protect the corporate network of the ISP, and nowadays I use it to provide native client VPNs to OS X, Linux and Windows 7-10. It also supports full fail-over: if the master fails, the slave maintains the state of the connections over time and picks up everything. pfSense runs on top of FreeBSD and has a graphical management interface that is very flexible: https://www.pfsense.org

Concerning iptables/VPN on Linux, I am also using Debian as a firewall and VPN (with strongSwan) to secure a special network, and it is not necessary to mess with kernel compilation. As for layer-7 traffic shaping, we tried to do it for a while with Linux, but it was not very efficient and it was a time-consuming process. We ended up going with a NetEnforcer traffic shaper.
IPtables installation question
1,474,297,842,000
I am curious whether using negative vs. positive matching impacts the performance of the netfilter stack. E.g. is

    iptables -I INPUT -p tcp -s 192.168.0.0/16 -j DROP

equivalent to

    iptables -I INPUT -p tcp ! -s 192.168.0.0/16 -j ACCEPT

in terms of performance? I have been told that negative matching might yield far worse performance, but I do not think that claim is supported by any facts; for all I know, it is the number of matching iterations (i.e. rules) which affects performance most. Unfortunately I do not have a test setup at hand with which I could test that hypothesis. The reason I am asking is that the company I work for uses many small (both in size and in computing capabilities) MikroTik routers and I am trying to come up with a reasonable best-practices firewall policy. Apparently RouterOS, the proprietary OS MikroTik ships with those routers, is based on Linux kernel 2.6.16, so I believe the limitations of a vanilla 2.6.16 kernel apply there too. As the person who claims there is a difference in performance is my boss, I want to be sure that I may safely ignore the claim.
My first instinct is that in your example the cost and complexity of the rules is identical, and which is better is as much personal preference as anything else. The inversion is generally not more complex as a matching rule in netfilter.

The general consensus seems to be that the number and ordering of rules is much more important for optimal performance than how you craft individual rules, although you can make gains there too. The Linux netfilter firewall normally operates on a first-match basis and the rules in each chain are processed sequentially, so the fewer rules that need to be processed before hitting a match, the higher your performance will be. That is the reason most firewall configurations have something like

    -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT

as the first rule. Typically that single rule matches the majority of all traffic on reasonably busy sites. (As in upwards of 99% of all traffic...) The remaining firewall rules then only need to be triggered and processed for new connections, significantly reducing the amount of processing required.

Since the rules are processed in order, from a performance perspective it makes sense to order the rules by how likely they are to be triggered. E.g. on a webserver the vast majority of traffic will be on the default HTTP ports. Therefore a rule like

    -A INPUT -p tcp -m state --state NEW -m multiport --dports 80,443 -j ACCEPT

affecting the majority of your users should be your second rule, and not rule number 199 after a whole range of rules that are unlikely to match for anyone but a small number of specific users and/or uncommon protocols.

As far as crafting rules goes, by using the correct modules, for instance multiport and iprange, you can create smart rules rather than numerous individual ones.
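One way to sketch that consolidation (the chain policy and the 22,80,443 port list are illustrative, not taken from the question) is an iptables-restore fragment, which also applies the whole ruleset atomically:

```
*filter
:INPUT DROP [0:0]
# first rule: matches the bulk of traffic on a reasonably busy site
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
# one multiport rule instead of three separate --dport rules
-A INPUT -p tcp -m conntrack --ctstate NEW -m multiport --dports 22,80,443 -j ACCEPT
COMMIT
```

Fewer rules to walk per packet is exactly the property the ordering argument above relies on.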
Does negative vs positive matching impact firewall performance?
1,474,297,842,000
I am trying to set up a transparent firewall using Arch Linux. My setup looks like this:

    (ISP, IP: 10.90.10.254)
      \
       \
        \
         (eth0 -> ip: 10.90.10.1, gateway: 10.90.10.254)
        +-----------+
        |           |
        |    PC     |
        |(as server)|
        +-----------+
         (eth1 -> ip: 10.90.10.100)
          \
           \
            \ (10.90.10.101)
             | (wireless -> ip-range: 10.90.10.102-)
          +-------+
          |Router |
          +-------+

My router does not have firewalling capabilities, therefore I need to drop a firewall between the router and my ISP.
To accomplish that, you need to put eth0 and eth1 into bridge mode on the PC and give one IP to the bridge interface (not to the individual eths). Here are the basics about bridging on Linux, to get started: http://www.tldp.org/HOWTO/BRIDGE-STP-HOWTO/index.html Depending on your distro there might be a faster/better way to do bridging.

Now, the wireless IP range you mentioned cannot be specified via some configuration. It is up to you which IPs you will allocate where. Maybe you could control that via DHCP, but it depends on your overall setup and needs.
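A minimal sketch of that bridge setup using iproute2 (interface names and the 10.90.10.1/24 address are taken from the question; your distro's network tooling may want this expressed differently). Writing it to a script first lets it be syntax-checked before running it as root:

```shell
cat > /tmp/bridge-up.sh <<'EOF'
#!/bin/sh
# create the bridge, enslave both NICs, and give the single IP to br0
ip link add name br0 type bridge
ip link set dev eth0 master br0
ip link set dev eth1 master br0
ip addr flush dev eth0
ip addr flush dev eth1
ip addr add 10.90.10.1/24 dev br0
ip link set dev br0 up
ip route add default via 10.90.10.254
EOF
sh -n /tmp/bridge-up.sh && echo "syntax OK"
```

Once br0 carries the IP, ebtables/iptables rules on the bridged path can filter the traffic passing between the router and the ISP.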
How to setup transparent firewall using ArchLinux
1,474,297,842,000
I am curious if most Linux distros make it possible to intercept incoming network traffic as soon as it enters the system and filter its content based on some rules before any other client can use it or at least before it gets to a specified client. E.g., let's say I wanted to have a filter that intercepts all HTTP traffic before it gets to a specific client (e.g. Firefox) and, if some pattern is matched, modify the HTML. Or replace all content coming from a certain remote host. I would like to be able to do that before it hits any client, regardless of the client. Does Linux allow for that kind of packet filtering? Additionally, I would also like to know what the workflow of a network packet is once it enters the computer from the port, i.e. if there is a sequence of steps assigned that gets performed before it becomes available to the client app that invoked it.
You need content filtering, not packet filtering.

Packet filtering: works on ports, IPs, layers, redirecting, ICMP, UDP and whatever other protocols are necessary.

Content filtering: suppose a packet's payload contains a term you want to ban (an adult-content keyword, say); you need to drop it based on that content.

Content filtering software: DansGuardian, SquidGuard, HostsFile, OpenDNS, FoxFilter (a Firefox extension), WebCleaner.
Packet analyzer to intercept and filter incoming traffic before any client app
1,474,297,842,000
I have an OpenBSD 5.2 box that's running a webserver on port 80 and an SSHD server on port 2222. How can I configure OpenBSD's pf to only allow connections from given countries to ports 80 and 2222?
Short answer: The Internet Doesn't Work That Way.

Longer answer: IP address blocks are not neatly demarcated per country. As far as IPv4 is concerned, the parent organization IANA allocated (past tense -- they're out of blocks) address blocks to the various NICs, which operate in very wide regions, as you can see here. They then assign IP blocks to ISPs on a per-case basis depending on what the ISP says they'll need -- and at that level, the ISPs they trade with generally tend to straddle borders.

I'm not familiar with the exact specifics of pf as opposed to Linux's iptables, but I'm reasonably sure both of them work on an IP/netblock basis. Maybe you could massage a geolocation database to spit out a list of all netblocks it's almost certain are exclusive to one country or another, but I wouldn't bet on it.

On a more cynical note, you might want to ask the Iranian or Chinese governments for advice on how they try to handle it, but I'd hold up neither of them as role models for proper internet usage...
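That said, if you do assemble such a list of netblocks from a geolocation database, pf can consume it as a table. A pf.conf sketch (the table file path and the "egress" interface group are assumptions, and the result is only as good as the database the list came from):

```
# /etc/pf.conf fragment (sketch)
table <allowed> persist file "/etc/pf.allowed.zone"   # one CIDR block per line
pass in quick on egress proto tcp from <allowed> to any port { 80 2222 }
block in on egress proto tcp to any port { 80 2222 }
```

Reload with pfctl -f /etc/pf.conf; pfctl -t allowed -T show lists what was actually loaded.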
How to configure OpenBSD pf to only allow inbound from given countries?
1,474,297,842,000
I have an Ubuntu 11.04 server in a remote location on another continent, so I have no physical access to it. I only interact with it by ssh (and scp), and intend to only ever interact with it that way. For security purposes, I want to ensure that absolutely all ports on the server are closed, except for ssh. My understanding is still vague, despite having tried to find instructions on the web. What I've gathered so far is that I need to "flush" the "iptables", and also that I need to edit some files (/etc/hosts, maybe?), and reboot the machine. Obviously, though, I want to be very careful about this, because if I do it wrong, I could end up accidentally shutting down the ssh port, making the server inaccessible to me. I'm looking to establish a foolproof set of steps before I do this. So, how do I shut down all ports while still preserving my access? Bonus question: while doing this, should I, and can I, change the ssh port from 22 to a non-standard one? Does it really make a difference?
First write a little script to flush the iptables rules:

    #!/bin/bash
    echo "Stopping firewall and allowing everyone..."
    iptables -F
    iptables -X
    iptables -t nat -F
    iptables -t nat -X
    iptables -t mangle -F
    iptables -t mangle -X
    iptables -P INPUT ACCEPT
    iptables -P FORWARD ACCEPT
    iptables -P OUTPUT ACCEPT

(You probably don't need the 'nat' and 'mangle' commands.) Call it 'flush.sh' and put the script in the '/root' directory. Remember to 'chmod +x flush.sh'. Test the script by adding a harmless iptables rule such as

    iptables -A INPUT -p tcp -j ACCEPT

and then running the script from the command line. Verify that the rule that you added is gone.

Add the script to root's crontab to run every ten minutes:

    */10 * * * * /root/flush.sh

Add back the harmless iptables rule that you used to test the script. Wait ten minutes and verify that your cron job executed successfully and removed the rule. At this point you should be able to debug your iptables rule set with the flush.sh safety net running every ten minutes. When you are finished debugging your rules, comment out the line in crontab that runs the flush.sh script.

Where you put your rules is somewhat distro dependent. For Ubuntu, have a look at this link. Towards the end you will see two options for setting up your firewall rules permanently: /etc/network/interfaces, and the Network Manager configuration. Since you are running a server, the former option is probably better. You shouldn't ever need to reboot in order to change or flush your iptables rules, unless you lock yourself out.

It is best to configure sshd to only allow root login using public key authentication rather than by password. If you have a secure gateway available with a fixed IP address, such as a server at your office that you can log into from anywhere, it would be good to have an iptables rule on the remote server to allow SSH only from that gateway.
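A sketch of the matching sshd_config settings (standard OpenSSH keywords; keep an existing session open while you restart sshd and test, so a mistake doesn't lock you out):

```
# /etc/ssh/sshd_config fragment
# allow root in with a key only, never a password
PermitRootLogin without-password
# disable password authentication entirely once your key works
PasswordAuthentication no
```

Newer OpenSSH releases also accept the clearer alias prohibit-password for the same PermitRootLogin setting.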
Changing the SSH port from 22 to something else is of very limited value as most port scanners will find the new SSH port quickly.
How do I shut down ports remotely without shutting myself out?
1,474,297,842,000
I am seeing some log entries that look like hacking attempts.

Type 1:

    Dec 26 03:09:01 ... CRON[9271]: pam_unix(cron:session): session closed for user root
    Dec 26 03:17:01 ... CRON[9308]: pam_unix(cron:session): session opened for user root by (uid=0)

Type 2:

    Dec 26 03:27:11 ... sshd[9364]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=60.217.235.5 user=root
    Dec 26 03:27:12 ... sshd[9364]: Failed password for root from 60.217.235.5 port 47933 ssh2

What do these mean? They usually come in fives. I enabled ufw limit ssh, so I think UFW is throttling this. But I wonder why there are two kinds: one authentication failure, one session closed. I suppose I should respond by using DenyHosts, changing the SSH port and disabling root login? What else can I do? Last time I used DenyHosts, it blocked me too... Also, can I increase the UFW retry time, e.g. only allow a login one hour later? Perhaps reduce the allowed login attempts to 3?
As already mentioned, the first type does not come from anything SSH-related; it's an activity log of your cron daemon, and that is usual, not harmful. The second type might be login attempts from an attacker, and those should be followed up. denyhosts might help you, but disabling the possibility of root logins is also a very good idea (in addition to denyhosts, not instead of it!).
Making sense of auth log
1,474,297,842,000
The ip6tables command accepts icmp and icmpv6 protocols:

    $ sudo ip6tables -A INPUT -p icmp -j ACCEPT
    $ sudo ip6tables -A INPUT -p ipv6-icmp -j ACCEPT

However, when I test with the ping6 command:

    $ ping6 fe80::a00:1234:1234:1234%eth1

I never hit the icmp rule:

    Chain INPUT (policy ACCEPT 133 packets, 13501 bytes)
     pkts bytes target     prot   opt in   out   source   destination
        0     0 ACCEPT     icmp       *    *    ::/0     ::/0
      112 11488 ACCEPT     icmpv6     *    *    ::/0     ::/0

Why is the icmp protocol accepted by ip6tables if it can never be reached?
The protocol is just a number:

    $ grep icmp /etc/protocols
    icmp       1    ICMP        # internet control message protocol
    ipv6-icmp 58    IPv6-ICMP   # ICMP for IPv6

These numbers share the same "namespace": Internet Protocol. Some protocols are common to IPv4 and IPv6, e.g. UDP (17), TCP (6), SCTP (132), but others are not, especially where the differences between IPv4 and IPv6 matter. That's the case for ICMP: two different protocols. In a normal environment there will never be an IPv6 packet with ICMP (value 1) in its upper-layer protocol header. Likewise on IPv4 there should never be an IPv4 packet of type ICMPv6 (aka ipv6-icmp, value 58). Perhaps some environments using NAT64 could imperfectly leak such packets (ICMP over IPv6 or ICMPv6 over IPv4).

At the same time, ip6tables deals only with IPv6: it won't filter IPv4 packets at all, the same way iptables deals only with IPv4 and won't filter IPv6 packets. So the correct way to filter (or, here, count) both is to have one IPv4 rule and one IPv6 rule, each with its correct upper-layer protocol:

    sudo iptables -A INPUT -p icmp -j ACCEPT
    sudo ip6tables -A INPUT -p ipv6-icmp -j ACCEPT
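The same lookup can be done with getent, which queries the protocols database rather than grepping the file directly (a glibc tool; column spacing varies by system):

```shell
# resolve both protocol names to their IP protocol numbers
getent protocols icmp ipv6-icmp
```

Each output line shows the protocol name, its number (1 and 58 here) and any aliases.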
How do we access the "icmp" protocol in ip6tables?
1,474,297,842,000
I need to script some iptables rule changes involving NAT rules (-t nat) on Ubuntu 16 servers. It seems like the common way to drop a rule using -D [rule here] does not work with the -t identifier... I really do not want to complicate the scripting by having to identify which rule in my chain I'm looking for and get its associated line number... Any ideas?

In case it helps, the purpose of the rules below is to redirect traffic, both localhost and external, from one server to a backup during a crash or restart of a local MySQL database (basically).

My rules:

    iptables -t nat -A POSTROUTING -j MASQUERADE
    iptables -t nat -A PREROUTING -p tcp --dport 3306 -j DNAT --to-destination RMT_IP:3306
    iptables -t nat -I OUTPUT -p tcp -o lo --dport 3306 -j DNAT --to-destination RMT_IP:3306

My attempt to drop (works):

    iptables -t nat -D POSTROUTING -j MASQUERADE
    iptables -t nat -D PREROUTING -p tcp --dport 3306 -j DNAT --to-destination RMT_IP:3306

I cannot figure out how to drop this rule without using --line-number:

    iptables -t nat -I OUTPUT -p tcp -o lo --dport 3306 -j DNAT --to-destination RMT_IP:3306
Given any rule added with -I (insert) or -A (append), you can repeat the rule definition with -D to delete it. For your particular example, this will delete the first matching rule in the OUTPUT chain of the nat table:

    iptables -t nat -D OUTPUT -p tcp -o lo --dport 3306 -j DNAT --to-destination RMT_IP:3306
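For scripting it can also help to make the delete idempotent: iptables -C exits 0 only when an identical rule exists, so the -D can be guarded by it. A sketch written to a file so it can be syntax-checked without root (192.0.2.10 is a documentation address standing in for RMT_IP):

```shell
cat > /tmp/del-nat-rule.sh <<'EOF'
#!/bin/sh
# delete the DNAT rule only if it is present; safe to run repeatedly
# $RULE is deliberately unquoted below so it word-splits into arguments
RULE='-p tcp -o lo --dport 3306 -j DNAT --to-destination 192.0.2.10:3306'
if iptables -t nat -C OUTPUT $RULE 2>/dev/null; then
    iptables -t nat -D OUTPUT $RULE
fi
EOF
sh -n /tmp/del-nat-rule.sh && echo "syntax OK"
```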
iptables - Drop NAT rules based on rule/name, NOT rule number
1,474,297,842,000
I have two machines A and B which are in different subnets, both behind separate firewalls. Machine A can see B, but B cannot see A. I have a user account (non-root) on both machines, I can SSH B from A, and I would like to be able to SSH A from B instead, but that cannot be done directly. I have used tunnelling to hop through intermediate servers with SSH, but what I am asking is different here, and I don't know what it would be called. Is there a way to open a connection from A to B, which could in turn be used "in reverse" from machine B to run commands on A?
The short answer is yes you can; the how is:

    machine-A$ ssh -R 127.0.0.1:2222:127.0.0.1:22 [ip_or_name_of_B]

Then on B you can ssh to A with:

    machine-B$ ssh -p2222 127.0.0.1

This says the following: on A, create a tunnel on the remote side (-R), such that any traffic that goes to localhost (127.0.0.1) on port 2222 comes back through the tunnel and is sent to localhost (127.0.0.1, now on the local side) port 22. The B command simply says: ssh to localhost port 2222, which is the entrance to the tunnel.
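To make such a tunnel easier to keep alive, a few standard client options help; a sketch for ~/.ssh/config on machine A (the alias b-tunnel and the name b.example.com are made up):

```
Host b-tunnel
    HostName b.example.com
    # make B's local port 2222 reach A's sshd
    RemoteForward 127.0.0.1:2222 127.0.0.1:22
    # detect dead tunnels instead of hanging silently
    ServerAliveInterval 30
    # fail loudly if port 2222 is already taken on B
    ExitOnForwardFailure yes
```

Then running `ssh -N b-tunnel` (or wrapping it in autossh, if installed) maintains the forward without opening a shell.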
"Reverse" an SSH connection from destination to target [duplicate]
1,474,297,842,000
I've got netatalk running as an AFP server so I can make Time Machine backups on my LAN. It works perfectly as long as iptables accepts all incoming traffic on the LAN, but I'm trying to tighten up security on the server, so I set the default iptables input policy to REJECT, and now I need to open up the ports needed for Time Machine. I'm using ferm to configure iptables. I added the following rule in ferm.conf:

    proto tcp saddr $LAN_SUBNET dport afpovertcp ACCEPT;

which generates this iptables rule:

    -A INPUT --protocol tcp --source 192.168.42.0/24 --dport afpovertcp --jump ACCEPT

but the Time Machine server still isn't showing up when I browse Network in Finder. What other ports need to be open to traffic on the LAN?
I opened these ports, and Time Machine backups are now working: afpovertcp, mdns, svrloc, at-rtmp, at-nbp, at-echo, at-zis, 1900.

To generate the iptables rules I added the following to ferm.conf:

    # netatalk daemon ports for AFP Time Machine server
    @def $PORT_TIME_MACHINE = (afpovertcp mdns svrloc at-rtmp at-nbp at-echo at-zis 1900);

    # allow AFP connections for Time Machine on LAN
    proto (udp tcp) saddr $LAN_SUBNET dport $PORT_TIME_MACHINE ACCEPT;

New iptables rules:

    -A INPUT --protocol udp --source 192.168.42.0/24 --dport afpovertcp --jump ACCEPT
    -A INPUT --protocol udp --source 192.168.42.0/24 --dport mdns --jump ACCEPT
    -A INPUT --protocol udp --source 192.168.42.0/24 --dport svrloc --jump ACCEPT
    -A INPUT --protocol udp --source 192.168.42.0/24 --dport at-rtmp --jump ACCEPT
    -A INPUT --protocol udp --source 192.168.42.0/24 --dport at-nbp --jump ACCEPT
    -A INPUT --protocol udp --source 192.168.42.0/24 --dport at-echo --jump ACCEPT
    -A INPUT --protocol udp --source 192.168.42.0/24 --dport at-zis --jump ACCEPT
    -A INPUT --protocol udp --source 192.168.42.0/24 --dport 1900 --jump ACCEPT
    -A INPUT --protocol tcp --source 192.168.42.0/24 --dport afpovertcp --jump ACCEPT
    -A INPUT --protocol tcp --source 192.168.42.0/24 --dport mdns --jump ACCEPT
    -A INPUT --protocol tcp --source 192.168.42.0/24 --dport svrloc --jump ACCEPT
    -A INPUT --protocol tcp --source 192.168.42.0/24 --dport at-rtmp --jump ACCEPT
    -A INPUT --protocol tcp --source 192.168.42.0/24 --dport at-nbp --jump ACCEPT
    -A INPUT --protocol tcp --source 192.168.42.0/24 --dport at-echo --jump ACCEPT
    -A INPUT --protocol tcp --source 192.168.42.0/24 --dport at-zis --jump ACCEPT
    -A INPUT --protocol tcp --source 192.168.42.0/24 --dport 1900 --jump ACCEPT

These resources were helpful: the Netatalk article on the Arch wiki; TCP and UDP ports used by Apple software products.
What ports need to be open for netatalk to work as a Time Machine server on my LAN?
1,474,297,842,000
I've been reading about states in iptables. This page says:

    A connection is considered RELATED when it is related to another already ESTABLISHED connection. What this means, is that for a connection to be considered as RELATED, we must first have a connection that is considered ESTABLISHED. The ESTABLISHED connection will then spawn a connection outside of the main connection. The newly spawned connection will then be considered RELATED, if the conntrack module is able to understand that it is RELATED.

Suppose an httpd connection is allowed by iptables on TCP port A (e.g. http://www.example.com:8001). The response from the webserver is a 302 redirect instructing the browser to go to a URL on TCP port B on the same server (e.g. http://www.example.com:8002). Is iptables 'aware' of this relationship, and does it treat packets on the new connection with state RELATED? Or does iptables consider it a new connection and treat the packets with state NEW?
No. The HTTP redirect just tells the client that the page it searched for is now at another address (maybe on the same host, maybe not); the browser then simply opens a brand-new connection to port B, which conntrack tracks as NEW. iptables' RELATED state covers connections spawned alongside or in reply to an existing connection at the protocol level (an FTP data connection is the classic example), not a new connection the client itself initiates after reading an application-level response.
Is an HTTP redirect considered as a RELATED connection by iptables?
1,474,297,842,000
I have Debian 7, and I want to block all websites on my computer except my email, using iptables or some other firewall. How can I block all websites, including both HTTP and HTTPS?
    iptables -I OUTPUT -p tcp -m tcp --dport 443 -j REJECT --reject-with icmp-port-unreachable
    iptables -I OUTPUT -p tcp -m tcp --dport 80 -j REJECT --reject-with icmp-port-unreachable
    iptables -I OUTPUT -p tcp -m tcp -d youremailsiteIP/32 --dport 80 -j ACCEPT
    iptables -I OUTPUT -p tcp -m tcp -d youremailsiteIP/32 --dport 443 -j ACCEPT

where youremailsiteIP is the IP address of your mail site. Because each command uses -I (insert at the top of the chain), the ACCEPT rules added last end up above the REJECT rules, which is what makes the exception work.
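A mail site's hostname often maps to several IP addresses, so it can help to resolve the name and emit one ACCEPT per address. A dry-run sketch (the function name is made up, and it only prints the commands; drop the echo and run as root to actually apply them):

```shell
# print an ACCEPT rule for every IPv4 address a hostname resolves to
allow_mail_host() {
    getent ahostsv4 "$1" | awk '{print $1}' | sort -u |
    while read -r ip; do
        echo iptables -I OUTPUT -p tcp -m tcp -d "$ip/32" --dport 443 -j ACCEPT
    done
}

allow_mail_host localhost   # demo with a name that resolves everywhere
```

Keep in mind that large mail providers rotate addresses frequently, so IP-based exceptions may need periodic refreshing.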
iptables to block all websites
1,474,297,842,000
Is there a GUI to track any socket connection made from this computer and which program initiates it? Also, if possible, to track any incoming connection sent to this computer and which program handles it (as a realtime popup indicator if possible)? For example:

    "/bin/x owned by user x tries to connect to x.x.x.x:x"
    "x.x.x.x connected to your computer on port 80 handled by /usr/bin/apache"

Or, at minimum, what should I learn to create this kind of software?
There is an old-school console tool: nethogs, a "net top" tool grouping bandwidth per process. Run it in this manner:

    # nethogs eth0

    NetHogs version 0.8.0
      PID USER  PROGRAM          DEV   SENT     RECEIVED
    11173 user  rtorrent         eth0  111.001  4.358 KB/sec
    13159 user  rtorrent         eth0  125.673  3.734 KB/sec
     9737 user  irssi            eth0  0.027    0.1
     9687 user  chromium-browser eth0  0.000    0.000 KB/sec

You can browse the developer's site for more information and more such tools. If you grab the source, you can make your own fork and develop a GUI on top of it; adding per-socket details alongside the bandwidth figures would not be a great deal of extra work.
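For a one-shot (rather than realtime) view, ss from iproute2 lists sockets without any extra installation; with -p (usually as root) it also names the owning process:

```shell
# -t TCP sockets, -n numeric addresses, -a all states; add -p for process names
ss -tna
```

With -p, each line gains a users:(("prog",pid=...,fd=...)) field, which is the per-program mapping the question asks about.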
Linux GUI to track connections made from/to this computer
1,474,297,842,000
I have a range of IP addresses (10.13.13.10-19) for which I want to redirect all outgoing HTTP traffic to an internal webserver. So if someone in that range tried to access any site, the HTML from my webserver would be returned instead. However, I only want to affect that IP range. What iptables rules do I need on my router to make this happen?
You can use the iprange module to match a range of addresses. You want to DNAT the packets to your webserver:

    iptables --table nat --append PREROUTING --match iprange --src-range 10.13.13.10-10.13.13.19 --protocol tcp --dport 80 --jump DNAT --to-destination 1.2.3.4
Redirect http using iptables for an ip range
1,474,297,842,000
I would like to configure Squid in such a way that only a specific (public) IP (a reverse proxy) can connect to the server, but I don't know how... can someone tell me how to do this?
In Squid this is done by specifying the public IP address in http_port and using the loopback address for the web server. Apache can be configured in httpd.conf to listen on the loopback address:

    Port 80
    BindAddress 127.0.0.1

(Port and BindAddress are Apache 1.3 directives; on Apache 2.x the equivalent is a single Listen 127.0.0.1:80.)
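To let only the reverse proxy talk to Squid itself, the usual approach is a src ACL in squid.conf; a sketch with made-up addresses (203.0.113.10 as Squid's public address, 203.0.113.5 as the reverse proxy):

```
# squid.conf fragment
http_port 203.0.113.10:3128
acl revproxy src 203.0.113.5/32
http_access allow revproxy
http_access deny all
```

Squid evaluates http_access lines top-down and stops at the first match, so the allow must precede the final deny all.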
squid (reverse proxy) configuration
1,474,297,842,000
I'm trying to implement a way to prevent network scans from my notebook. One of the things I want is to allow ARP requests only to specific hosts, like my gateway. I added some rules using arptables and they seem to work (at first):

    arptables -A OUTPUT -d 192.168.1.30 -j DROP
    arptables -A INPUT -s 192.168.1.30 -j DROP

This actually blocks ARP requests to this host. If I run on the target host:

    tcpdump -n port not 22 and host 192.168.1.38

and on the notebook:

    arp -d 192.168.1.30; ping -c 1 192.168.1.30; arp -n

tcpdump shows no incoming packets on the target, and arp -n on the notebook shows (incomplete). But if I run nmap -sS 192.168.1.30 on my notebook, I get on the target host:

    22:21:12.548519 ARP, Request who-has 192.168.1.30 tell 192.168.1.38, length 46
    22:21:12.548655 ARP, Reply 192.168.1.30 is-at xx:xx:xx:xx:xx:xx, length 28
    22:21:12.728499 ARP, Request who-has 192.168.1.30 tell 192.168.1.38, length 46
    22:21:12.728538 ARP, Reply 192.168.1.30 is-at xx:xx:xx:xx:xx:xx, length 28

while arp -n on the notebook still shows incomplete, yet nmap detects the host. I also tried using nftables and ebtables with no success. How can I prevent nmap from sending ARP requests and finding the host?
I'll complete OP's setup: address 192.168.1.38/24 on eth0 and a gateway (not really needed) at 192.168.1.1. If the setup uses Wifi rather than actual Ethernet, the first method (bridge) won't be available without additional effort (probably easy when acting as an Access Point, very difficult to impossible otherwise).

nmap uses a packet socket (type AF_PACKET) to craft ARP requests, rather than using the kernel's network stack, which deals with the ARP cache and resolution. arping behaves similarly (and will be used instead, to simplify examples). tcpdump also uses AF_PACKET to capture. By contrast, even other special tools such as ping, when they merely use AF_INET, SOCK_DGRAM, IPPROTO_IP or AF_INET, SOCK_RAW, IPPROTO_ICMP rather than AF_PACKET, will be filtered by iptables. Their methods can be verified by running strace -e trace=%network (as the root user) on these commands.

As presented in Packet flow in Netfilter and General Networking, AF_PACKET happens before (at ingress) or after (at egress) most of Netfilter's subsystems -- ebtables (bridge), arptables or iptables and their equivalent nftables tables -- so the firewall is bypassed. tcpdump (or nmap) is able to read incoming packets because it captures them before the firewall; nmap is able to send ARP packets because it injects them after the firewall. So with a standard setup, any packet generated by nmap or any other tool using AF_PACKET can't be filtered with arptables (or iptables). There are ways to overcome this.

Old method: using a bridge and ebtables

It's bridging, thus in most cases not compatible with Wifi. For ebtables (or nftables using the bridge family) that bypass is usually not a problem: when the ARP or IP packet that couldn't be filtered is turned into an Ethernet frame, it re-enters the network stack at another layer. Now it's within the network stack and will be affected by all the facilities there, including bridge firewall rules created with ebtables (or nftables with a bridge family).
Using a bridge thus allows one to overcome the firewall bypass. Create a bridge, set eth0 as a bridge port and move the addresses and routes to br0 (of course this should be done by reconfiguring the adequate network tool in use, and/or not done remotely, because of the temporary loss of connectivity):

    ip link add name br0 up type bridge
    ip link set dev eth0 master br0
    ip addr flush dev eth0
    ip addr add 192.168.1.38/24 brd + dev br0
    ip route add default via 192.168.1.1 # not needed for this problem

Then transform the arptables rules into ebtables rules. They'll still use INPUT and OUTPUT because these are the chains between the routing stack and (for lack of a better term) the bridge stack:

    ebtables -A OUTPUT -p ARP --arp-ip-dst 192.168.1.30 -j DROP
    ebtables -A INPUT -p ARP --arp-ip-src 192.168.1.30 -j DROP

One roughly equivalent nftables ruleset could be (to load with nft -f somerulefile.nft):

    add table bridge t     # for idempotence
    delete table bridge t  # for idempotence

    table bridge t {
        chain out {
            type filter hook output priority 0; policy accept;
            arp daddr ip 192.168.1.30 drop
        }
        chain in {
            type filter hook input priority 0; policy accept;
            arp saddr ip 192.168.1.30 drop
        }
    }

(Additional filtering to limit the affected interface should probably be added.) With one of these rulesets in place, running two tcpdump processes concurrently, one on br0 and one on eth0, like this:

    tcpdump -l -n -e -s0 -i br0 arp &
    tcpdump -l -n -e -s0 -i eth0 arp &

will show emission on br0 but not on eth0 anymore: the ARP packet which couldn't be blocked when injected is effectively blocked by the bridge layer. If the rules are removed, both interfaces will show traffic. Likewise for the reverse test from remote: the packet will be captured on eth0 but won't reach br0: blocked.
As there's no bridge involved, there's no change of network layout needed, and this will work the same for Ethernet or for Wifi. In particular this commit presents use cases:

netfilter: Introduce egress hook

Support classifying packets with netfilter on egress to satisfy user requirements such as:
- outbound security policies for containers (Laura)
- filtering and mangling intra-node Direct Server Return (DSR) traffic on a load balancer (Laura)
- filtering locally generated traffic coming in through AF_PACKET, such as local ARP traffic generated for clustering purposes or DHCP (Laura; the AF_PACKET plumbing is contained in a follow-up commit)
[...]

nftables provides access to additional Netfilter hooks (not described in the previous schematic) in the netdev family, working at the interface level: ingress and egress. These hooks are near AF_PACKET for ingress and egress (staying fuzzy because implementation details with regard to ingress/egress and capture/injection have some subtleties): egress is able to affect packets injected at AF_PACKET. Base chains in netdev family tables must be linked to (an) interface(s). Using OP's initial setup, the previous nftables ruleset can be rewritten using the netdev family's syntax like this:

add table netdev t      # for idempotence
delete table netdev t   # for idempotence
table netdev t {
    chain out {
        type filter hook egress device eth0 priority 0; policy accept;
        arp daddr ip 192.168.1.30 drop
    }
    chain in {
        type filter hook ingress device eth0 priority 0; policy accept;
        arp saddr ip 192.168.1.30 drop
    }
}

tcpdump won't capture any injected ARP at egress: they were dropped before. Capture at ingress still happens first: tools relying on AF_PACKET (starting with tcpdump) can still capture them (but the firewall will drop them right after).

Misc: there's also tc, which is able to filter AF_PACKET sockets (if the tool doesn't use the option PACKET_QDISC_BYPASS). tc is more difficult to handle.
Here's a Q/A with an answer of mine (at the time I wrote it my understanding and overall explanation were less accurate) having a simple example without a filter.
arptables not working with nmap
1,474,297,842,000
I have the following in nftables.conf: table inet nat { set blocked { type ipv4_addr } chain postrouting { type nat hook postrouting priority 100; policy accept; ip daddr @blocked counter drop; oifname "ppp0" masquerade; iifname "br-3e4d90a574de" masquerade; } } The set blocked is a named set which can be updated dynamically. It is in this set I wish to have a collection of IPs to block, updated every n minutes. In order to preserve the atomicity, I am not using the following (updateblock.sh) to update the list: #!/bin/bash sudo nft flush set inet nat blocked sudo nft add element inet nat blocked {$nodes} But rather blockediplist.ruleset: #!/usr/sbin/nft -f flush set inet nat blocked add element inet nat blocked { <example_ip> } I use the following order of commands: nft -f /etc/nftables.conf nft -f blockediplist.ruleset However the changes in blockediplist.ruleset are not immediately applied. I know the ruleset now contains the new IPs because the IPs are present in nft list ruleset and nft list set inet nat blocked. Even just with nft add element inet nat blocked { <IP> } is the IP not being instantly blocked. An alternative method would be to define a new set and reload nftables.conf in its entirety, though I think this would be a poor and inefficient way of doing things. Is there a way to force the changes in blockediplist.ruleset to be applied immediately? UPDATE: I've just discovered that when I block an IP which I haven't pinged, it gets blocked instantly. However when adding an IP to the blocklist mid-ping it takes a while for it to be blocked. When I try a set with netdev ingress the IP gets blocked instantly. Maybe this avenue of investigation might reveal something.
The nat hook (as all other hooks) is provided by Netfilter to nftables. The NAT hook is special: only the first packet of a connection is traversing this hook. All other packets of a connection already tracked by conntrack aren't traversing any NAT hook anymore but are then directly handled by conntrack to continue performing the configured NAT operations for this flow. That explains why you should never use this hook to drop: it won't affect already tracked connections, NAT-ed or not. Just change the hook type from type nat to type filter for the part dropping traffic. Contrary to iptables a table is not limited to one hook type and actually has to use multiple types for this kind of case, because the set is local to a table and can't be shared across two tables. For the same reason, this table should logically not be called inet nat anymore because it's not just doing NAT (but I didn't rename it). So in the end: nftables.conf: table inet nat { set blocked { type ipv4_addr } chain block { type filter hook postrouting priority 0; policy accept; ip daddr @blocked counter drop } chain postrouting { type nat hook postrouting priority 100; policy accept; oifname "ppp0" masquerade iifname "br-3e4d90a574de" masquerade } } Now: all packets will be checked by the inet nat block chain allowing the blocked set to immediately affect the traffic rather than having to wait for the next flow to be affected. as usual only the first packet of a new flow (tentative conntrack state NEW) will traverse the inet nat postrouting chain. Please also note that iifname "br-3e4d90a574de" masquerade; requires a recent enough kernel (Linux kernel >= 5.5): before only filtering by outgoing interface was supported in a postrouting hook. Also, this looks like a Docker-related interface, and adding this kind of rule might possibly interact with Docker (eg: it might do NAT on traffic between two containers in the same network) because it's referencing a bridge interface. 
That's because Docker makes bridged traffic visible to nftables (as well as iptables) by loading the br_netfilter module.
nftables Named Set Update Delay
1,474,297,842,000
$ systemctl status ufw ● ufw.service - Uncomplicated firewall Loaded: loaded (/lib/systemd/system/ufw.service; enabled; vendor preset: enabled) Active: active (exited) since Sun 2021-10-03 15:43:42 +03; 1h 30min ago Docs: man:ufw(8) Main PID: 28326 (code=exited, status=0/SUCCESS) Tasks: 0 (limit: 4467) CGroup: /system.slice/ufw.service Oct 03 15:43:39 Cheetah systemd[1]: Starting Uncomplicated firewall... Oct 03 15:43:42 Cheetah systemd[1]: Started Uncomplicated firewall. Why is systemctl status ufw outputting "active (exited)" instead of "active (running)"? Should I be worried? $ sudo ufw status Status: active
From the original post: What does status "active (exited)" mean for a systemd service?

State active (exited) means that systemd has successfully run the commands but that it does not know there is a daemon to monitor.

If you're using ufw, you can check whether your firewall is active with:

sudo ufw status

ufw is a tool for managing the netfilter firewall; even when it is not running, the firewall keeps working as long as the tables are set (i.e. if you configure your tables manually with iptables, ufw doesn't need to be active).
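The "active (exited)" state follows directly from how the unit is declared: it is a oneshot service with RemainAfterExit, so systemd considers it active once its start command has returned, even though no process remains. A typical ufw.service looks roughly like the excerpt below (illustrative; exact paths and lines vary by distribution):

```ini
[Unit]
Description=Uncomplicated firewall
Documentation=man:ufw(8)

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/lib/ufw/ufw-init start quiet
ExecStop=/lib/ufw/ufw-init stop

[Install]
WantedBy=multi-user.target
```

RemainAfterExit=yes is exactly what produces "active (exited)" instead of "active (running)": the unit loads the netfilter rules and exits, and the rules stay in the kernel.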
Is my firewall configured OK?
1,474,297,842,000
I'm trying to connect to an FTP server behind a firewall that allows incoming connections in the range 6100-6200 only. I have successfully connected to this server using curl like this:

curl --ftp-port :6100-6200 --list-only ftp.server

But I'd like to reproduce the behaviour of this curl command with other clients that are friendlier to use from Python. In principle Linux's ftp, but I'm open to other options if someone suggests a good one. I tried ftplib but it seems that this library does not allow you to select ports; I've tried it unsuccessfully. Currently I cannot make it work with ftp:

230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> passive
Passive mode on.
ftp> ls
227 Entering Passive Mode (XXX,XXX,XXX,XXX,202,251).
ftp: connect: Connection refused

The same set of commands works from my laptop, therefore it seems clear that the problem is the firewall. How can I force ftp to negotiate a data connection on a port in the range 6100-6200, thus emulating the behaviour of curl?
When you use FTP in passive mode, the server tells the client which (server-side) data port to use. The well-known FTP protocol includes no way for the client to express requests on which port range to use at the server end. There could be some extensions that could change that, but those are not necessarily widely supported.

In your example, the message 227 Entering Passive Mode (XXX,XXX,XXX,XXX,202,251). comes directly from the FTP server, as it's telling the client: "I'm listening for a data connection from you at IP address XXX.XXX.XXX.XXX, port 51963" (= 202*256 + 251).

Each TCP connection has two port numbers: a local port number and a remote port number. Usually, an outgoing connection just picks the first free local port in the OS-specified range of ports to be used for outgoing connections, and the remote port is specified according to the service that's being used. In the case of passive FTP, the server will pick the remote port according to its configuration and will tell it to the client in the form of an FTP 227 response.

There are generally two ways to handle passive FTP in firewalls:

a) The firewall and the FTP server both need to be configured in cooperation to accept/use a specific range of ports for passive FTP data connections, so the server won't even try to select a port the firewall is not going to let through, or

b) the firewall needs to listen in on the FTP command channel traffic, determine the port numbers used for each data connection and dynamically allow passive FTP data connections between the FTP client and server using the port numbers declared on the command channel. If you are using the Linux iptables/netfilter firewall, this is exactly what the protocol-specific conntrack extension module for FTP does.
You'll just need to tell it what control connections it's allowed to listen to, since the previous policy of listening on all FTP control connections passing through the firewall system turned out to be exploitable by bad guys, and now such extensions will no longer be used automatically. For details, see this page or this question here on U&L SE. curl actually uses FTP in passive mode by default, but when you use the --ftp-port option it switches to active mode. From the man page (highlight mine): -P, --ftp-port (FTP) Reverses the default initiator/listener roles when connecting with FTP. This option makes curl use active mode. curl then tells the server to connect back to the client's specified address and port, while passive mode asks the server to setup an IP address and port for it to connect to. Regarding Python and ftplib, note that the question you referred to is more than 10 years old, and there's now a new answer added by Marcus Müller: Since Python 3.3, ftplib functions that establish connections take a source_addr argument that allows you to do exactly this.
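The port arithmetic in the 227 reply can be reproduced in shell, which is handy when debugging passive-mode firewall problems. A sketch (the IP address here is a made-up documentation address; the two port bytes are the ones from the question, giving 202*256 + 251 = 51963):

```shell
# Extract the host and port advertised in an FTP "227 Entering Passive
# Mode" reply. The reply text is hypothetical; only the parsing and the
# arithmetic are the point.
reply='227 Entering Passive Mode (192,0,2,10,202,251).'
nums=${reply#*\(}        # strip everything up to the opening parenthesis
nums=${nums%%\)*}        # strip from the closing parenthesis onwards
IFS=, read -r a b c d p1 p2 <<EOF
$nums
EOF
port=$(( p1 * 256 + p2 ))
echo "host=$a.$b.$c.$d port=$port"
```

The same decoding works for any 227 reply; active mode (what curl's --ftp-port switches on) reverses the roles, so there it is the client that announces an address and port.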
Setting which ports to use for passive FTP connection with Linux's ftp client
1,474,297,842,000
I'm trying to open a port using firewall-cmd but I get the error ModuleNotFoundError: No module named 'six'. I've tried to reinstall six using easy_install, pip, pip3 and pip3.6 but it did not work.

OS: CentOS 8
Python: 3.6.8
pip: 20.1.1
This issue happens because of a Python setuptools problem: after setuptools was upgraded (by mistake or intentionally), the six module got reinstalled, but not into the path where other modules, including the ones firewall-cmd depends on, look for it. All you need is to copy the library file to its expected path so the other modules can find it:

cp /usr/local/lib/python3.6/site-packages/six.py /usr/lib/python3.6/site-packages/
firewall-cmd (ModuleNotFoundError: No module named 'six')
1,474,297,842,000
I am trying to build a simple stateful firewall with nftables following the Arch Linux nftables guide. I posted this question on the Arch Linux forum and never received an answer. After completing the guide and rebooting my machine, systemd failed to load the nftables.service. To troubleshoot the error I ran: systemctl status nftables Here is the relevant output: /etc/nftables.conf:7:17-25: Error: conflicting protocols specified: inet-service v. icmp The error is complaining about a rule that I set for accepting new pings (icmp) in the input chain. Here is the rule and I don’t see anything wrong with it: icmp type echo-request ct state new accept If I remove the rule it will work. But I want the rule. Here is my ruleset in nftables.conf after completing the guide: table inet filter { chain input { type filter hook input priority 0; policy drop; ct state established,related accept iif "lo" accept ct state invalid drop icmp type echo-request ct state new accept ip protocol udp ct state new jump UDP tcp flags & (fin | syn | rst | ack) == syn ct state new jump TCP ip protocol udp reject ip protocol tcp reject with tcp reset meta nfproto ipv4 counter packets 0 bytes 0 reject with icmp type prot-unreachable } chain forward { type filter hook forward priority 0; policy drop; } chain output { type filter hook output priority 0; policy accept; } chain TCP { tcp dport http accept tcp dport https accept tcp dport ssh accept tcp dport domain accept } chain UDP { tcp dport domain accept } } What am I missing? Thank you in advance.
This was a syntax limitation of nftables 0.7 (or a few other versions): it didn't consider ICMP and ICMPv6 directly usable in the dual IPv4/IPv6 table inet without stating explicitly which IP protocol first: So the rule: icmp type echo-request ct state new accept to work both on IPv4 and IPv6 has to be written twice like this: UPDATE: actually one shouldn't rely for IPv6 on nexthdr pointing to the upper-layer protocol: there can be Extension Headers between the Fixed Header and the upper-layer header (which comes last). Adding the correct syntax (using the meta-informations already providing protocol informations), and leaving my original answer striked, because I don't know if the "correct" syntax is valid with nftables 0.7: meta nfproto ipv4 meta l4proto icmp icmp type echo-request ct state new accept meta nfproto ipv6 meta l4proto icmpv6 icmpv6 type echo-request ct state new accept ip protocol icmp icmp type echo-request ct state new accept ip6 nexthdr icmpv6 icmpv6 type echo-request ct state new accept giving the corresponding bytecode (displayed using nft --debug=netlink list ruleset -a): inet filter input 9 8 [ meta load nfproto => reg 1 ] [ cmp eq reg 1 0x00000002 ] [ payload load 1b @ network header + 9 => reg 1 ] [ cmp eq reg 1 0x00000001 ] [ payload load 1b @ transport header + 0 => reg 1 ] [ cmp eq reg 1 0x00000008 ] [ ct load state => reg 1 ] [ bitwise reg 1 = (reg=1 & 0x00000008 ) ^ 0x00000000 ] [ cmp neq reg 1 0x00000000 ] [ immediate reg 0 accept ] inet filter input 10 9 [ meta load nfproto => reg 1 ] [ cmp eq reg 1 0x0000000a ] [ payload load 1b @ network header + 6 => reg 1 ] [ cmp eq reg 1 0x0000003a ] [ payload load 1b @ transport header + 0 => reg 1 ] [ cmp eq reg 1 0x00000080 ] [ ct load state => reg 1 ] [ bitwise reg 1 = (reg=1 & 0x00000008 ) ^ 0x00000000 ] [ cmp neq reg 1 0x00000000 ] [ immediate reg 0 accept ] ICMP is IP protocol 1, echo-request value 8. ICMPv6 is IPv6 protocol 58 (0x3a), its echo-request value 128 (0x80). 
Newer nftables 0.9 accepts directly the rule icmp type echo-request ct state new accept, but its corresponding bytecode is then only: inet filter input 9 8 [ meta load nfproto => reg 1 ] [ cmp eq reg 1 0x00000002 ] [ meta load l4proto => reg 1 ] [ cmp eq reg 1 0x00000001 ] [ payload load 1b @ transport header + 0 => reg 1 ] [ cmp eq reg 1 0x00000008 ] [ ct load state => reg 1 ] [ bitwise reg 1 = (reg=1 & 0x00000008 ) ^ 0x00000000 ] [ cmp neq reg 1 0x00000000 ] [ immediate reg 0 accept ] meaning it's dealing only with ICMP, not also ICMPv6, which should still be added with an additional rule, simply as: icmpv6 type echo-request ct state new accept giving back the equivalent bytecode of former version: inet filter input 10 9 [ meta load nfproto => reg 1 ] [ cmp eq reg 1 0x0000000a ] [ meta load l4proto => reg 1 ] [ cmp eq reg 1 0x0000003a ] [ payload load 1b @ transport header + 0 => reg 1 ] [ cmp eq reg 1 0x00000080 ] [ ct load state => reg 1 ] [ bitwise reg 1 = (reg=1 & 0x00000008 ) ^ 0x00000000 ] [ cmp neq reg 1 0x00000000 ] [ immediate reg 0 accept ]
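Putting both rules into a minimal standalone ruleset for nftables >= 0.9 (a sketch showing only the relevant part, not the asker's full policy):

```nft
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        ct state invalid drop
        icmp type echo-request ct state new accept
        icmpv6 type echo-request ct state new accept
    }
}
```

Load it with nft -f; running nft --debug=netlink list ruleset -a should then show two separate bytecode blocks, one per protocol family, matching the dumps above.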
Nftables configuration error: conflicting protocols specified: inet-service v. icmp
1,474,297,842,000
I am using a box running Arch as my router and firewall (with shorewall). Recently, I tried to add another network onto the system, which failed horribly. After putting everything back where it was before this, and confirming that everything is exactly the same, I am having some issues with routing from my internal network (192.168.1.0/24) to the outside. Here is the current situation: I CAN ping the external network from my firewall I CAN ping the firewall from my internal network I CANNOT ping the external network from the internal network I have two network interfaces, enp5s0 (internal) and enp6s0 (external). Here are my routes from ip route ls (note my external ip ends in .78, I redacted the rest for obvious reasons): default via [redacted].1 dev enp6s0 src [redacted].78 metric 203 mtu 576 [redacted].0/24 dev enp6s0 proto kernel scope link src [redacted].78 [redacted].0/24 dev enp6s0 proto kernel scope link src [redacted].78 metric 203 mtu 576 192.168.1.0/24 dev enp5s0 proto kernel scope link src 192.168.1.1 metric 202 A traceroute from the machine on the internal network reveals that it gets to 192.168.1.1, then times out. I suspect I need to add another route allowing traffic coming from 192.168.1.0/24 to be routed through enp6s0 out to the net. I've tried different routes, and none have worked. Also, my dhcpcd.conf did change. If the commented line is uncommented, it creates a second default route which stops any connection at all. Previously, this was not an issue. interface enp5s0 static ip_address=192.168.1.1/24 #static routers=192.168.1.1 static domain_name_servers=192.168.1.1 # I have TOR DNS bound to this ip Any help would be much appreciated.
I've done something like that on an Arch server. The server had enp4s8 as the "external" network, and wlp1s0 as the "internal" network. enp4s8 had a statically-defined IP address of 10.0.0.3, and a default route to 10.0.0.1, the DSL modem.

/usr/bin/ip link set dev wlp1s0 up
/usr/bin/ip addr add 172.16.0.1/24 dev wlp1s0
sleep 10
modprobe iptable_nat
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -s 172.16.0.0/24 -j MASQUERADE
iptables -A FORWARD -o enp4s8 -i wlp1s0 -s 172.16.0.0/24 -m conntrack --ctstate NEW -j ACCEPT
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
dhcpd -cf /etc/dhcpd.wlp1s0.conf wlp1s0

You may not have IPv4 forwarding turned on, and getting the iptables rules right is sometimes difficult. The other part of the trick is in /etc/dhcpd.wlp1s0.conf. I think you have to tell machines on the "internal" network about their default route and router with DHCP:

option domain-name "fleegle";
option domain-name-servers 172.16.0.1;
option routers 172.16.0.1;
option ntp-servers 10.0.0.3;
default-lease-time 14440;
ddns-update-style none;
deny bootp;
shared-network intranet {
  subnet 172.16.0.0 netmask 255.255.255.0 {
    option subnet-mask 255.255.255.0;
    pool {
      range 172.16.0.50 172.16.0.200;
    }
  }
}
Add route from internal network to external network
1,474,297,842,000
I need to add some firewall rules in our QA environment using iptables. I have to do the changes remotely . Some of the changes also include disabling SSH for few Networks . What are best practices I can follow so that if SSH service is somehow get blocked , how can I restore my access back without rebooting the host. Some of the things I am planning are: chkconfig iptables off This is in case if we need to reboot the host , so that iptables is not started up. But this is for when we reboot for host , I am looking for something so that we can restore back without rebooting the host . BTW , console is not available for the server. Any suggestions ?
That's what iptables-apply is for. From the man page: iptables-apply will try to apply a new rulesfile (as output by iptables-save, read by iptables-restore) or run a command to configure iptables and then prompt the user whether the changes are okay. If the new iptables rules cut the existing connection, the user will not be able to answer affirmatively. In this case, the script rolls back to the previous working iptables rules after the timeout expires. The default timeout is 10 seconds. If this is too short it can be changed with --timeout 30 to reset the rule after 30 seconds if no confirmation has been received.
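If iptables-apply isn't available, the same safety net can be improvised with a timed rollback job. The sketch below stubs out the firewall commands with echo so the control flow can be followed (and exercised) without root; the comments show what each step would actually run on a real host:

```shell
#!/bin/sh
# Timed-rollback pattern: schedule a restore of the saved rules, apply
# the new (possibly lockout-inducing) rules, then cancel the restore once
# a fresh SSH login confirms you are not locked out.
echo 'saving current rules'        # real: iptables-save > /tmp/rules.bak
( sleep 2 && echo 'ROLLBACK' ) &   # real: sleep 60 && iptables-restore < /tmp/rules.bak
timer=$!
echo 'applying new rules'          # real: iptables-restore < /tmp/rules.new
# ...open a NEW ssh session to verify access, then cancel the rollback:
if kill "$timer" 2>/dev/null; then rolled_back=no; else rolled_back=yes; fi
echo "rolled back: $rolled_back"
```

The key detail is opening a new SSH session for the check: an already-established session may keep working under rules that would reject fresh connections.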
Best practises: Applying iptables firewall rules for SSH
1,474,297,842,000
I've installed Postgresql 9.4 on Ubuntu Trusty from the PGDG ppa. I've created a database and set it listen-addresses to '*'. I've made an entry in the pg_hba.conf file. I can connect locally with no trouble. Here is the entry from my pg_hba.conf: host all tarka 192.168.0.0/24 md5 The problem is that the port seems blocked by UFW. I've tried several variations of the ufw command to allow postgres such as sudo ufw allow postgresql/tcp sudo ufw allow 5432/tcp and most recently sudo ufw allow from 192.168.0.0/24 to any port 5432 I've restarted ufw each time. This is the status currently: sudo ufw status verbose Status: active Logging: on (low) Default: allow (incoming), allow (outgoing), disabled (routed) New profiles: skip To Action From -- ------ ---- 22 ALLOW IN Anywhere 5432 ALLOW IN 192.168.0.0/24 22 (v6) ALLOW IN Anywhere (v6) The entries in iptables seem valid: Chain ufw-user-input (1 references) target prot opt source destination ACCEPT tcp -- anywhere anywhere tcp dpt:ssh ACCEPT udp -- anywhere anywhere udp dpt:ssh ACCEPT tcp -- 192.168.0.0/24 anywhere tcp dpt:postgresql ACCEPT udp -- 192.168.0.0/24 anywhere udp dpt:postgresql Never the less, when I try to connect from a remote machine, ufw logs: Sep 2 13:55:28 estuary kernel: [242754.395342] [UFW BLOCK] IN=eth0 OUT= MAC=94:de:80:27:4a:7e:b4:75:0e:97:21:29:08:00 SRC=192.168.0.13 DST=192.168.0.12 LEN=60 TOS=0x00 PREC=0x00 TTL=64 ID=43525 DF PROTO=TCP SPT=36382 DPT=21 WINDOW=29200 RES=0x00 SYN URGP=0 In fact I can't even connect by disabling ufw. In all cases nmap reports the port 5432 is closed: nmap estuary -p5432 Starting Nmap 6.40 ( http://nmap.org ) at 2015-09-02 16:43 PDT Nmap scan report for estuary (192.168.0.12) Host is up (0.0059s latency). PORT STATE SERVICE 5432/tcp closed postgresql In addition, I'm running nginx as a web server and it is completely accessible from the other machine. How can I get ufw (or whatever is actually doing it) to stop blocking port 5432? 
Edit as requested: Active Internet connections (servers and established) Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 estuary:domain *:* LISTEN tcp 0 0 *:51413 *:* LISTEN tcp 0 0 *:ssh *:* LISTEN tcp 0 0 localhost:ipp *:* LISTEN tcp 0 0 localhost:postgresql *:* LISTEN tcp 0 0 *:smtp *:* LISTEN tcp 0 0 localhost:6010 *:* LISTEN tcp 0 0 *:49152 *:* LISTEN tcp 0 0 *:9091 *:* LISTEN tcp 0 0 *:5900 *:* LISTEN tcp 0 0 *:http *:* LISTEN tcp 0 0 *:http-alt *:* LISTEN tcp 0 0 192.168.0.12:ssh cutter:46943 ESTABLISHED tcp 1 0 192.168.0.12:46461 104.28.7.98:http CLOSE_WAIT tcp 1 0 192.168.0.12:59407 89.234.156.205:http CLOSE_WAIT tcp 0 0 localhost:38145 localhost:6010 ESTABLISHED tcp 1 1 192.168.0.12:59404 89.234.156.205:http LAST_ACK tcp 0 0 localhost:6010 localhost:38144 ESTABLISHED tcp 0 0 localhost:6010 localhost:38145 ESTABLISHED tcp 1 0 192.168.0.12:45068 89.218.2.238.stati:http CLOSE_WAIT tcp 0 0 192.168.0.12:9091 cutter:46825 ESTABLISHED tcp 0 0 localhost:38144 localhost:6010 ESTABLISHED tcp6 0 0 [::]:51413 [::]:* LISTEN tcp6 0 0 [::]:ssh [::]:* LISTEN tcp6 0 0 ip6-localhost:ipp [::]:* LISTEN tcp6 0 0 [::]:smtp [::]:* LISTEN tcp6 0 0 ip6-localhost:6010 [::]:* LISTEN tcp6 0 0 [::]:5900 [::]:* LISTEN tcp6 0 0 [::]:http [::]:* LISTEN My client (cutter) connects by wireless.
Just noticed in your question you have 'listen-addresses' with a hyphen - the documentation has an underscore ('listen_addresses')
ufw won't allow connections to port 5432
1,474,297,842,000
I'm trying to mark traffic. How should I write the following rules in terms of iptables?

/ip firewall mangle> add chain=prerouting src-address=10.1.1.1/32 action=mark-connection new-connection-mark=server_con
/ip firewall mangle> add chain=forward connection-mark=server_con action=mark-packet new-packet-mark=server
/ip firewall mangle> add chain=prerouting src-address=10.1.1.2 action=mark-connection new-connection-mark=workstation_con
/ip firewall mangle> add chain=prerouting src-address=10.1.1.3 action=mark-connection new-connection-mark=workstation_con
/ip firewall mangle> add chain=prerouting src-address=10.1.1.4 action=mark-connection new-connection-mark=workstation_con
/ip firewall mangle> add chain=forward connection-mark=workstation_con action=mark-packet new-packet-mark=workstations

The above rules were on a MikroTik firewall and I want to implement them on a Linux box.
You can translate MikroTik firewall rules to Linux iptables rules pretty easily. The only real difference is that iptables marking isn't quite as pretty, it likes 32 bit flags instead of nice long names, but "1" suffices most of the time. According to the iptables man pages: add chain=prerouting : -t mangle -A PREROUTING (Appends a new rule to the end of the mangle PREROUTING chain) src-address=10.1.1.1/32 : -s 10.1.1.1/32 (Triggers on packets with a source address of 10.1.1.1/32) action=mark-connection new-connection-mark=server_con : -j CONNMARK --set-mark 1 (marks these connections as "1") connection-mark=workstation_con action=mark-packet new-packet-mark=workstations : -m connmark --mark 1 -j MARK --set-mark 1 (Marks all packets associated with connection "1" with packet mark "1") You should be able to use these translations to create a set of rules to meet your needs.
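Putting those translations together, the whole MikroTik ruleset from the question might look like this in iptables-restore format (a sketch, untested; 1 and 2 stand in for the symbolic names server_con/workstation_con, since iptables marks are numeric):

```
*mangle
:PREROUTING ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
# server_con / server -> mark 1
-A PREROUTING -s 10.1.1.1/32 -j CONNMARK --set-mark 1
-A FORWARD -m connmark --mark 1 -j MARK --set-mark 1
# workstation_con / workstations -> mark 2
-A PREROUTING -s 10.1.1.2/32 -j CONNMARK --set-mark 2
-A PREROUTING -s 10.1.1.3/32 -j CONNMARK --set-mark 2
-A PREROUTING -s 10.1.1.4/32 -j CONNMARK --set-mark 2
-A FORWARD -m connmark --mark 2 -j MARK --set-mark 2
COMMIT
```

Load it with iptables-restore -n < rules (-n avoids flushing other tables), or issue each -A line as an individual iptables -t mangle command.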
How to Mark Traffic using IPtables?
1,474,297,842,000
I want to block intruders via psad, but HTTP and HTTPS should not be blocked. For example, if someone is scanning my dedicated server via nmap, psad should block him for 2 hours, but he should still see the contents from my domain. I set AUTO_BLOCK_TIMEOUT to a value of 7200, so everyone scanning me is completely blocked for 2 hours. Woefully the attacker is also blocked from seeing my webpage, which is not my intention. Is there any possibility to set up a partial blockage via psad?
Using SCAN_TIMEOUT I would assume that if psad detects scanning attacks from some nefarious IP address that it wholesale blocks it for the duration of time set in AUTO_BLOCK_TIMEOUT. If you just want to block scanning attacks then from the manual I would say you might want to use this timeout instead: SCAN_TIMEOUT 3600; excerpt SCAN_TIMEOUT Defines the number of seconds psad will use to timeout scans (or other suspect traffic) associated with individual IP addresses. The default value is 3600 seconds (one hour). Note the SCAN_TIMEOUT is only used if ENABLE_PERSISTENCE is set to "N". Autoblocking with psad If you look through the FAQ there's actually a section that discourages you from using psad in this manner. 3.3. Is it a good idea to set ENABLE_AUTO_IDS="Y" to automatically block scans? In general no, and this feature is disabled by default. The reason for this is that a scan can be spoofed from any IP address (see the -S option to nmap). If psad is configured to automatically block scans then an attacker can spoof a scan, say, from www.yahoo.com and then you will be parsing your firewall ruleset to discover why you can't browse Yahoo's website, (or you can just execute "psad --Flush" to remove any auto-generated firewall rules). Also, an advanced scanning technique called the TCP Idle Scan requires that scan packets are spoofed by the attacker from a seemingly unrelated IP address from the viewpoint of the target. Nmap implements the Idle scan with its -sI option, and a good explanation of the technique can be found here. So can I leave ports 80/443 unaffected? In looking at the documentation I don't see how you could achieve this, except by crafting your own rules that would get used when an attack is detected. excerpt IPTABLES_AUTO_RULENUM Defines the specific rule number that psad will use to add auto-generated iptables blocking rules in the INPUT, OUTPUT, and FORWARD chains (ENABLE_AUTO_IDS must be set to "Y" for this keyword to be used). 
The default value is "1".
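Combined, the relevant psad.conf settings for the asker's goal (time scanners out after two hours, without the wholesale auto-blocking that would also cut off ports 80/443) might look like this excerpt; values are illustrative:

```
### Excerpt of /etc/psad/psad.conf (illustrative values)
### Required so that SCAN_TIMEOUT is honored at all:
ENABLE_PERSISTENCE          N;
### Forget a scanner's data after two hours:
SCAN_TIMEOUT                7200;
### Do not auto-insert iptables blocking rules:
ENABLE_AUTO_IDS             N;
```

With auto-blocking disabled, psad only alerts and ages out scan data, so HTTP/HTTPS access is never affected; selective blocking of everything except 80/443 would require hand-written iptables rules instead.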
psad: do not block access to HTTP
1,474,297,842,000
I was wondering about the semantics of ipset(8). Is it possible to add a rule matching a set to iptables and then manipulate the set, or can I only create a set and swap it for an older set in order to apply it to an iptables rule matching the name? I.e. can I add/remove to/from an IP set ad hoc, or can I exchange whole sets while the sets are in active use? The reason I ask is this is as follows. Say I create a set ipset create PCs hash:ip ipset add PCs 1.1.1.1 ipset add PCs 2.2.2.2 ... et cetera. And a rule that allows access to HTTP: iptables -A INPUT -p tcp --dport 80 -m set --set PCs src -j ACCEPT What happens when I run: ipset add PCs 3.3.3.3 will the iptables rule now take immediate effect for IP 3.3.3.3 as well? I saw it's possible to use -j SET --add-set ... to manipulate IP sets ad hoc from within iptables rules. This makes me think it should work to manipulate a set at any given point. However, the ipset project site seems to suggest that swapping a new (adjusted) set for another is the better alternative. Be it via ipset swap or via ipset restore -!. Can anyone shed light on this?
You can add and remove IPs to your already defined sets on the fly. This is one of the ideas behind IPsets: if this wasn't possible, the whole set extension of iptables wouldn't make much sense. The primary goal of ipset was to enable you to define (also dynamically) classes of matches (e.g. for dynamically blacklisting malicious hosts without the need to magically add one rule for every single host). excerpt from the ipset homepage store multiple IP addresses or port numbers and match against the collection by iptables at one swoop dynamically update iptables rules against IP addresses or ports without performance penalty express complex IP address and ports based rulesets with one single iptables rule and benefit from the speed of IP sets
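To illustrate the swap alternative mentioned in the question: build a scratch set, fill it, and switch it in. Fed to ipset restore, the batch below is executed in one pass (the set name PCs is from the question; PCs-tmp is made up; this assumes your ipset version accepts swap/destroy lines in restore input, which recent versions do):

```
create PCs-tmp hash:ip
add PCs-tmp 1.1.1.1
add PCs-tmp 2.2.2.2
add PCs-tmp 3.3.3.3
swap PCs-tmp PCs
destroy PCs-tmp
```

Running ipset restore < pcs.restore then replaces the contents of PCs while any iptables rule matching the set keeps working throughout; for one-off changes, a plain ipset add PCs 3.3.3.3 is enough, as described above.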
Do I have to swap IP sets, or can I add/remove on the fly?
1,474,297,842,000
I have a small web server running on port 80, and I'd like to allow only Google Translate to have access to it. First I tried running dig translate.google.com and dig translate.googleusercontent.com to get the IP address for it, which returned addresses ranging from 74.125.234.74 to 74.125.234.110. Then I opened the mentioned ports, but it did not work. Looking at iptables logs, I found Google has very different IP addresses to open the target website for translation. In a few tests they ranged from 74.125.186.40 to 74.125.187.169. Now, what I've got so far is the following rule, that opens the door for this range: -A INPUT -p tcp --dport 80 -m iprange --src-range 74.125.186.40-74.125.187.169 -j ACCEPT The problems: Everytime Google tries to access the page, it uses a different IP address, and probably out of the range I mentioned previously. I'd have to try many many times to take note of all the range. I'm concerned about the security implications also, since I don't know whether all IPs in the range are safe to leave open (would they all belong to Google?) I tried nslookup 74.125.186.40 to see if I could get a domain name for it, but it returns: ** server can't find 40.186.125.74.in-addr.arpa.: NXDOMAIN
You can't do this effectively. You aren't going to be able to count on any particular block of IPs for any known length of time. Now, there are a couple of ways of solving your problem. cut&paste what you need to translate into Google Translate. You can use cat file | xclip -i to do this for even a fairly long file if you have the xclip package installed. If you are using a fairly solid web server, make a directory for it to serve that has a very long and random name. Use mkdir $(dd if=/dev/random bs=21 count=1 | base64) to create the directory so it has a truly random name. Make sure you have to include that directory name in the URL. This will set it up so someone has to know a secret in order to get at any of the files. Of course, you have to make sure the parent directory cannot be listed through the web server, otherwise it's trivial to figure out what the 'secret' directory name is.
Allow incoming connections from Google Translate only
1,474,297,842,000
I have a home server (with Slackware 13) with eth0 for the local network and eth1 for the internet (cable modem with dynamic IP). While I do want to learn more about iptables, I am still in the process of learning, and I need some rules done in the meantime, as I don't wish my server to get compromised at this stage. I currently have a VM where I play with my rules and everything, and would appreciate it if someone could write me a set of firewall rules for iptables to do the below: Allow all users from my DHCP server on eth0 to have full access to the internet and server; in other words, eth0 should have no restrictions within the network and server. Allow all users to be able to create a server; for example, if they are playing a game such as Warcraft and they create a game, the firewall should allow the negotiation of those connections to go through. Block any requests from the internet to the server unless they were initiated by the server or a user from the network.
So, basically your Linux box acts as a firewall? First, enable IP forwarding. echo 1 > /proc/sys/net/ipv4/ip_forward echo 1 > /proc/sys/net/ipv4/conf/all/forwarding Then, add some forwarding rules: iptables -A FORWARD -i eth0 -j ACCEPT iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT Secure the FORWARD chain: iptables -P FORWARD DROP Create a NAT rule: iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE Finally, don't forget to check that you have a default route: ip route show | grep default You should see something like: default via <IP_of_eth1's_gateway> dev eth1 If not, add one: ip route add default via <IP_of_eth1's_gateway> dev eth1 (Usually the DHCP client will automatically add one.)
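Note that the echo lines above take effect immediately but don't survive a reboot. On most distributions a sysctl drop-in makes them persistent (the file name below is illustrative); on Slackware, which may not read /etc/sysctl.d, you can instead put the echo commands in /etc/rc.d/rc.local:

```shell
# /etc/sysctl.d/99-forwarding.conf (or append to /etc/sysctl.conf)
net.ipv4.ip_forward = 1
net.ipv4.conf.all.forwarding = 1
```

Load it without rebooting with sysctl -p /etc/sysctl.d/99-forwarding.conf (or sysctl --system).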
iptables rules for a local network with unrestricted internet access, blocking unrequested connections from the internet to the server?
1,474,297,842,000
I am building a custom embedded Linux platform based on the NXP i.MX8 with Yocto. I want to use UFW to set up the firewall. When I boot the system and try to use UFW it returns an error Couldn't determine iptables version. I have the iptables and nftables packages installed. I have tried to manually change the iptables symlink to point to the iptables-legacy binary. It still fails. How can I fix this? Please see the versions below. root@iot-gate-imx8plus:~# iptables -v iptables v1.8.7 (legacy): no command specified Try `iptables -h' or 'iptables --help' for more information. root@iot-gate-imx8plus:~# nft -v nftables v1.0.2 (Lester Gooch) root@iot-gate-imx8plus:~# ufw version ufw 0.36.2 Copyright 2008-2023 Canonical Ltd. root@iot-gate-imx8plus:~# ufw status ERROR: Couldn't determine iptables version root@iot-gate-imx8plus:~# uname -r 5.15.32+g07c574e56d60 root@iot-gate-imx8plus:~# iptables: root@iot-gate-imx8plus:/usr/sbin# ls -lrt *iptables* lrwxrwxrwx 1 root root 20 Mar 9 2018 iptables-save -> xtables-legacy-multi lrwxrwxrwx 1 root root 20 Mar 9 2018 iptables-restore -> xtables-legacy-multi lrwxrwxrwx 1 root root 20 Mar 9 2018 iptables-legacy-save -> xtables-legacy-multi lrwxrwxrwx 1 root root 20 Mar 9 2018 iptables-legacy-restore -> xtables-legacy-multi lrwxrwxrwx 1 root root 20 Mar 9 2018 iptables-legacy -> xtables-legacy-multi lrwxrwxrwx 1 root root 20 Mar 9 2018 iptables -> xtables-legacy-multi UPDATE: Strace pointed out the problem. 
UFW makes an assumption of where the iptables binary is and Yocto installed it somewhere else: strace: Process 700 attached [pid 700] openat(AT_FDCWD, "/proc/self/fd", O_RDONLY|O_CLOEXEC) = 3 [pid 700] execve("/sbin/iptables", ["/sbin/iptables", "-V"], 0xffffec437b88 /* 21 vars */) = -1 ENOENT (No such file or directory) [pid 700] +++ exited with 255 +++ --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=700, si_uid=0, si_status=255, si_utime=0, si_stime=0} --- ERROR: Couldn't determine iptables version +++ exited with 1 +++ root@iot-gate-imx8plus:~# which iptables /usr/sbin/iptables root@iot-gate-imx8plus:~# ln -sf /usr/sbin/iptables /sbin/iptables root@iot-gate-imx8plus:~# ufw status Status: inactive
You could use strace -f -s 1000 -e trace=file ufw status to figure out which file paths are used; you may discover something wrong.
UFW Couldn't determine iptables version
1,614,612,997,000
I'm just getting started with tinkering with Linux / iptables and following this great tutorial here to successfully set up a Raspberry Pi device (running Raspberry Pi OS / Debian) as a VPN gateway. Any device on the same network can manually configure their network settings and set their routing to the IP address of the device, which will then process the information via VPN and then send it back (all from the same eth0 interface). The problem I'm having is the VPN software NordVPN for Linux removes the tun0 interface automatically when I disconnect / logout from VPN, resulting in internet no longer working on the laptop. I understand that this is because the packets are still being told to forward, and if the tun0 interface doesn't exist those packets have no way to return to the laptop. What would be the ideal procedure to simply return all incoming traffic on eth0 back out to eth0 when tun0 doesn't exist? Structure, e.g.: laptop → Raspberry Pi (eth0 > tun0 > eth0) → laptop The iptables rules on the Raspberry Pi are as follows: sudo iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE sudo iptables -A FORWARD -i eth0 -o tun0 -j ACCEPT sudo iptables -A FORWARD -i tun0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT sudo iptables -A INPUT -i lo -j ACCEPT sudo iptables -A INPUT -i eth0 -p icmp -j ACCEPT sudo iptables -A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT sudo iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT sudo iptables -P FORWARD DROP sudo iptables -P INPUT DROP So far my research has pointed me to a couple of answers: https://unix.stackexchange.com/a/322386 Always Forward Traffic Out The Interface It Originated On Maybe something like: ip route add table 2 default via 192.168.0.1 (IP address of my main router?) however I have not tried this yet and am not fully sure if this is the direction I should be headed.
Correcting a common misconception: iptables doesn't route. It just controls if routed flows are allowed to proceed or not, but doesn't change their direction (unless using things like NAT which can change the routing fate, still done by the routing stack). That's why the most important thing in the question should be to provide the routes and routing tables (and for this case it looks like the routing rules should have been provided too). Once the tunnel interface disappears, all routes using it will disappear too. You're getting a router routing on its single interface: its ingress interface is also its egress interface: eth0. You just need to enable the flow with iptables, since its default policy is to drop forwarded packets: sudo iptables -A FORWARD -i eth0 -o eth0 -j ACCEPT and that's about all you need. Normally your RPi has already its default route set to use the 192.168.0.1 router so there's nothing else to do. If the routes aren't standard, they'll have to be added in the question and this answer updated. Below are the fine prints: When a host node sends packets through the RPi, the RPi will detect it's not an optimal route and will send back a few ICMP redirect packets to warn the client that a more direct route is available: through 192.168.0.1 directly. The node can choose to cache the information and send its next packets directly to that router, or can keep using the RPi, which will anyway also keep forwarding these packets. The actual router will most certainly send replies back directly to the host node, creating an asymmetric route. As it's all happening in the same LAN and thus with the same interface this should be fine and not trigger things like Strict Reverse Path Forwarding anywhere. For this reason, you should not use additional conntrack rules in this case (those in place are fine). 
-m conntrack --ctstate NEW,ESTABLISHED is not equivalent to all packets, because from the point of view of conntrack there will be out of window TCP packets caused by the packets bypassing the RPi, that would be tagged INVALID instead (but must be allowed to be forwarded).
iptables / route for returning incoming traffic back out of originating interface (eth0)
1,614,612,997,000
In setting up dynamic blacklists for nftables, per A.B.'s excellent answer, I'm encountering an error when duplicating the blacklist for both ipv4 and ipv6. I perform the following command-line operation (debian nftables) (EDIT: The original question was for a prior version, 0.9.0; during the back-and-forth comment process, it was upgraded to the more current version 0.9.3, so the accepted answer below is valid for the version 0.9.3 API): nft flush ruleset && nft -f /etc/nftables.conf for a config file including: tcp flags syn tcp dport 8000 meter flood size 128000 { ip saddr timeout 20s limit rate over 1/second } add @blackhole_4 { ip saddr timeout 1m } drop tcp flags syn tcp dport 8000 meter flood size 128000 { ip6 saddr timeout 20s limit rate over 1/second } add @blackhole_6 { ip6 saddr timeout 1m } drop tcp flags syn tcp dport 8000 meter greed size 128000 { ip saddr ct count over 3 } add @blackhole_4 { ip saddr timeout 1m } drop tcp flags syn tcp dport 8000 meter greed size 128000 { ip6 saddr ct count over 3 } add @blackhole_6 { ip6 saddr timeout 1m } drop and get the following error response: /etc/nftables.conf:130:17-166: Error: Could not process rule: Device or resource busy tcp flags syn tcp dport 8000 meter flood size 128000 { ip6 saddr timeout 20s limit rate over 1/second } add @blackhole_6 { ip6 saddr timeout 1m } drop ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ /etc/nftables.conf:132:17-145: Error: Could not process rule: Device or resource busy tcp flags syn tcp dport 8000 meter greed size 128000 { ip6 saddr ct count over 3 } add @blackhole_6 { ip6 saddr timeout 1m } drop ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Also, I'm not sure what the size is a measure of; it's set to 128000 because that's what I saw somewhere authoritative. EDIT: Okay. 
I decided to continue playing, and see that creating separate meters for each ipv6 rule causes the error message to go away, but I don't understand why, so instead of answering my own question, I'll leave it open for someone who has a knowledgeable explanation why the meters can't be shared. The following produces no errors: tcp flags syn tcp dport 8000 meter flood_4 size 128000 { ip saddr timeout 20s limit rate over 1/second } add @blackhole_4 { ip saddr timeout 1m } drop tcp flags syn tcp dport 8000 meter flood_6 size 128000 { ip6 saddr timeout 20s limit rate over 1/second } add @blackhole_6 { ip6 saddr timeout 1m } drop tcp flags syn tcp dport 8000 meter greed_4 size 128000 { ip saddr ct count over 3 } add @blackhole_4 { ip saddr timeout 1m } drop tcp flags syn tcp dport 8000 meter greed_6 size 128000 { ip6 saddr ct count over 3 } add @blackhole_6 { ip6 saddr timeout 1m } drop EDIT: At the time of this writing, the man pages for nftables use the term meter, but according to the nftables wiki, the term has been deprecated in favor of set, which requires a definition including a specific protocol type (eg. ipv4_addr), so if nftables is currently mapping the term meter to the newer set, that would explain why a single meter can't currently be shared between ipv4_addr and ipv6_addr. However, the example given in the nftables wiki itself is also not up-to-date: it generates an error because dynamic is not currently (nftables v0.9.0) a valid flag type. Back to the man pages, and we can see that set has flags of either type constant, interval, or timeout, and I'm uncertain which would be appropriate for this purpose. EDIT: The "count" form of metering seems to have moved to a separate part of nftables: ct (connection tracking). 
It seems that one should now create definitions such as: set greed_4 { type ipv4_addr flags constant size 128000 } set greed_6 { type ipv6_addr flags constant size 128000 } and then the following rules may be close, but still generate errors: ct state new add @greed_4 { tcp flags syn tcp dport 8000 ip saddr ct count over 3 } add @blackhole_4 { ip saddr timeout 1m } drop ct state new add @greed_6 { tcp flags syn tcp dport 8000 ip6 saddr ct count over 3 } add @blackhole_6 { ip6 saddr timeout 1m } drop
Try this one: table inet filter { set blackhole_4 { type ipv4_addr flags timeout } set blackhole_6 { type ipv6_addr flags timeout } set greed_4 { type ipv4_addr flags dynamic size 128000 } set greed_6 { type ipv6_addr flags dynamic size 128000 } chain input { type filter hook input priority 0; ct state new tcp flags syn tcp dport 8000 add @greed_4 { ip saddr ct count over 3 } add @blackhole_4 { ip saddr timeout 1m } drop ct state new tcp flags syn tcp dport 8000 add @greed_6 { ip6 saddr ct count over 3 } add @blackhole_6 { ip6 saddr timeout 1m } drop } } EDIT: Explanation from @User1404316: Since @Zip May (correctly) asked for some explanation. As I understand it: ct introduces a connection tracking rule, in this case for new tcp connections, that if they are heading for port 8000 (dport is destination port), add the source IPv4 to the pre-defined collection set greed_4. If that happens, the rule continues with the first bracket condition, that if the source address has more than three active connections, add the source IPv4 to the second predefined set blackhole_4, but only keep it there for one minute, and if we have gotten this far along in the rule, then drop the connection. The original posted answer had two of its long lines truncated, but I figured out what I think they should be and inserted them above. The good news is that my testing indicates that this answer works! A remaining curiosity for me is how to decide when to set sizes for the collection sets, and how large to make them, so I just left things the way they are for now.
nftables dynamic blacklisting both IPv4 and IPv6
1,614,612,997,000
This is my /etc/sysconfig/nftables.conf #!/usr/sbin/nft -f flush ruleset table ip filter { chain input { type filter hook input priority filter; policy accept; ct state established,related counter packets 264 bytes 17996 accept ct state invalid drop tcp dport 22 ip saddr 192.168.0.0/16 accept udp sport 53 accept drop } chain forward { type filter hook forward priority filter; policy accept; } chain output { type filter hook output priority filter; policy accept; } } When I load it with nft -f /etc/sysconfig/nftables.conf I get exactly this table, but after I reboot, I also get these additional rules below the table I showed above: table bridge filter { chain INPUT { type filter hook input priority filter; policy accept; } chain FORWARD { type filter hook forward priority filter; policy accept; } chain OUTPUT { type filter hook output priority filter; policy accept; } } What is it that I do not understand? Additional question. I'm trying to harden a machine. The machine should be used for essentially browsing the web, so that has to be allowed. And I want to be able to ssh to it from the local network. Have I missed something essential?
That's the compatibility table and chains created by the newer version of the ebtables command, used to manipulate bridges, but using the nftables kernel API in ebtables compatibility mode. Something ran an ebtables command somewhere, even if just to verify there's no ebtables rule present, or maybe to auto-load some ebtables ruleset, which was converted into an nftables ruleset. You can know that's it by a few methods (here on CentOS8): actual executable # readlink -e /usr/sbin/ebtables /usr/sbin/xtables-nft-multi version displayed # ebtables -V ebtables 1.8.2 (nf_tables) rule monitoring term1: # nft -f /etc/sysconfig/nftables.conf # nft monitor #command waits in event mode term2: # ebtables -L Bridge table: filter Bridge chain: INPUT, entries: 0, policy: ACCEPT Bridge chain: FORWARD, entries: 0, policy: ACCEPT Bridge chain: OUTPUT, entries: 0, policy: ACCEPT term 1 again (Fedora's newer nftables version would display a bridge's -200 priority value with its symbolic equivalent filter): add table bridge filter add chain bridge filter INPUT { type filter hook input priority -200; policy accept; } add chain bridge filter FORWARD { type filter hook forward priority -200; policy accept; } add chain bridge filter OUTPUT { type filter hook output priority -200; policy accept; } # new generation 7 by process 16326 (ebtables) As the base chains include no rules and have an accept policy, nothing will be affected. The system also requires the presence of a bridge to have this table and chains used at all anyway. If CentOS8 and your current Fedora version are still close enough, this might be created by the use of the systemd ebtables service from the iptables-ebtables package. If you don't need bridge filtering, you can consider removing this package. You can still use nft for it if really needed. 
The fact that the added table is of family bridge tells it's ebtables rather than iptables, ip6tables or arptables, which would all give the same behaviour, creating if not already present a different table family (resp. ip, ip6 or arp) and its base chains. So one should avoid using the same table names to avoid any clash, or at least not the same table+chain combination (eg: an nft rule in the ip filter INPUT (uppercase) chain could clash with iptables etc.) More information about this here: Moving from iptables to nftables - nftables wiki Legacy xtables tools - nftables wiki Using iptables-nft: a hybrid Linux firewall - Red Hat About the additional question: Your rules appear to allow basic client usage (including SSH access from LAN), one important exception notwithstanding: udp sport 53 accept will allow access to any UDP port of your system, as long as the "scan" is made from UDP source port 53. Replace it with this more sensible rule: iif lo accept to allow local communication unhindered (including a possible local DNS server).
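Putting that fix into the ruleset from the question, the input chain would read roughly like this (a sketch of just the one chain, with the counter omitted):

```shell
chain input {
    type filter hook input priority filter; policy accept;
    iif lo accept
    ct state established,related accept
    ct state invalid drop
    tcp dport 22 ip saddr 192.168.0.0/16 accept
    drop
}
```

The udp sport 53 accept rule is gone entirely: with ct state established,related accept in place, replies from remote DNS servers are already let in, so it isn't needed even for DNS.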
nftables changes on reboot
1,614,612,997,000
I enabled ufw and I tried to block all the traffic from one server, but I can't. It only blocks ssh; all the other ports are open. I tested it with telnet. I want to allow all ports for some IPs, and block all ports if the IP is not there. I have these rules: sudo ufw status verbose Status: active Logging: on (low) Default: deny (incoming), allow (outgoing), deny (routed) New profiles: skip UPDATE Also, tested from iptables: iptables --policy INPUT DROP I tried telnet for ssh, which is blocked, but I can still access the other services. Any ideas? I don't want to create default deny for outgoing, and then whitelist every port I want. UPDATE The problem is that the services are running inside containers. If I create a new listener with nc the firewall is blocking that connection. How can I block the incoming traffic for containers?
Because Docker also installs its own iptables chains (and its rules sit ahead of your INPUT/FORWARD rules), you have to add your blocking rules in Docker's chains; specifically, the DOCKER-USER chain is reserved for user rules and is evaluated before Docker's own.
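A sketch of how such rules could look, assuming the external interface is eth0 and the allowed source is 198.51.100.7 (both illustrative, not from the question):

```shell
# DOCKER-USER ends in a RETURN rule, so insert (-I) rather than
# append (-A); each -I lands at position 1, so issue these bottom-up
iptables -I DOCKER-USER -i eth0 -j DROP
iptables -I DOCKER-USER -i eth0 -s 198.51.100.7 -j ACCEPT
iptables -I DOCKER-USER -i eth0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
```

Resulting chain order: established/related traffic first, then the allowed source, then drop everything else arriving on eth0 toward the containers.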
ufw & iptables don't block incoming connection
1,614,612,997,000
We run a FreePBX server on our LAN and softphones can register using the local SIP server IP. I need these softphones to be able to register over the internet too, so we have configured the firewall and created a DNS entry for sip.ourdomain.com. When the softphones are configured to use sip.ourdomain.com they can register over the internet fine; however, when they are in the office and connected to the wifi, they are unable to register. I suspect this is because when in the office they are trying to register to sip.ourdomain.com, which resolves to the public IP that redirects to the SIP server on the local LAN. How can this be resolved? Edit1 LAN is 192.168.1.X/24 & SIP Server is 192.168.1.8
What you may need is to define a split view (multiview) DNS architecture in your infrastructure. Thus in your internal network, your internal DNS server will resolve sip.ourdomain.com to 192.168.1.8, and externally it will resolve to the current public IP address. Another alternative is enforcing a public IP address for the SIP server instead of a private IP address. I usually advise network administrators to use public IP addresses for SIP servers and VPN servers, to avoid having to deal with some corner cases of NAT problems.
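If the internal resolver happens to be dnsmasq (an assumption; the question doesn't say what serves DNS on the LAN), the split view is a one-liner, while BIND would use view clauses for the same effect:

```shell
# /etc/dnsmasq.conf on the LAN's DNS server:
# hand out the internal address for this one name;
# everything else still resolves normally via upstream
address=/sip.ourdomain.com/192.168.1.8
```

After restarting dnsmasq, office clients resolving sip.ourdomain.com get 192.168.1.8 directly, while external clients keep getting the public record.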
Unable to register SIP via WiFi
1,614,612,997,000
I've installed a database on Ubuntu that I'll connect to it from my other server remotely on port 27017. This server I want to use only as a storage for my other server. I requested the host support to block all the connections except from one specified IP. He did, but I still could connect from home. He reasoned that he blocked all IPs from accessing the services running on all the familiar default ports like 80, 25, 21, etc. so did the same for 27017. With this setting, can I be sure that my database is not open to the world?! thanks.
You only need to block the port which the database uses. The more interesting question might be how that is done. iptables -F INPUT # deletes all rules iptables -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT # maybe allow other services here iptables -A INPUT -s $allowed_source_address -p tcp --dport 27017 -j ACCEPT iptables -A INPUT -j DROP This would block every new incoming connection except for those from the allowed IP to the database. Blocking the database port only: iptables -F INPUT # deletes all rules iptables -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT iptables -A INPUT -s $allowed_source_address -p tcp --dport 27017 -j ACCEPT iptables -A INPUT -p tcp --dport 27017 -j DROP
Reject all connections except from a specific IP
1,614,612,997,000
I am running a cloud service on my Raspberry Pi 3 and want to access it also from outside. Unfortunately, my ISP does not allow me to forward ports (this is another story) therefore I sometimes also need to access it over IPv6. To limit the access on IPv4 I have setup the following rules # /etc/iptables/rules.v4 # Generated by iptables-save v1.4.21 on Fri Mar 10 18:07:14 2017 *filter :INPUT DROP [10:3211] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [163:16092] -A INPUT -i lo -j ACCEPT -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT -A INPUT -p tcp -m tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT -A INPUT -p tcp -m tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT -A INPUT -p tcp -m tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT COMMIT # Completed on Fri Mar 10 18:07:14 2017 Everything works as expected. However, when I try to do the same for IPv6, i.e., use ip6tables instead of iptables and try to apply the same rules, I cannot access it anymore over IPv6 anymore. Are the rules for ip6tables setup differently? Thank you!
Ok, it seems that adding the following two rules helped. -A INPUT -p udp -m udp --dport 546 -m state --state NEW,ESTABLISHED -j ACCEPT -A INPUT -p ipv6-icmp -j ACCEPT Here the complete rules.v6 # Generated by ip6tables-save v1.4.21 on Wed May 17 10:14:19 2017 *filter :INPUT DROP [0:0] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [110:14552] -A INPUT -i lo -j ACCEPT -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT -A INPUT -p tcp -m tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT -A INPUT -p tcp -m tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT -A INPUT -p tcp -m tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT -A INPUT -p udp -m udp --dport 546 -m state --state NEW,ESTABLISHED -j ACCEPT -A INPUT -p ipv6-icmp -j ACCEPT COMMIT # Completed on Wed May 17 10:14:19 2017
How to convert iptables rules to ip6tables rules?
1,614,612,997,000
I use iptables-persistent to set firewall rules. This is my standard configuration: *filter :INPUT DROP [0:0] :FORWARD ACCEPT [0:0] :OUTPUT ACCEPT [0:0] -A INPUT -p tcp --dport 2123 -m mac --mac-source XX:XX:XX:XX:XX:XX -j ACCEPT COMMIT Problem is I can't download packages from debian servers and ping local and external IP addresses. INPUT is only for 'incoming' connections, is this correct? These are the rules for IPv6: *filter :INPUT DROP [0:0] :FORWARD DROP [0:0] :OUTPUT DROP [0:0] COMMIT
The problem you've got is that you're not allowing any incoming packets. So if you try and reach out to an external server then you can't receive the replies! This, typically, can be handled with an "established" rule -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT The idea, here, is that incoming packets that match an outgoing connection will be allowed back in. Now with default DROP for input chains, you may see other problems (eg ICMP packets) so you may also need to allow them in depending on your requirements.
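Folded back into the rules file from the question, the IPv4 side would look like this (the loopback rule is my addition, not from the question; local services often break without it):

```shell
*filter
:INPUT DROP [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p tcp --dport 2123 -m mac --mac-source XX:XX:XX:XX:XX:XX -j ACCEPT
COMMIT
```

With the ESTABLISHED rule in place, outgoing package downloads and ping replies get back in, while unsolicited inbound traffic is still dropped by the INPUT policy.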
iptables-persistent blocking any outbound connections
1,614,612,997,000
On my Linux machine I see the following: iptables -L Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination but in /etc/sysconfig/iptables I see many rules, so my question is: if from iptables -L I see the chains are empty with policy ACCEPT, does this mean the /etc/sysconfig/iptables rules are not in effect?
The rules in /etc/sysconfig/iptables are loaded when the iptables service is started. Since your firewall chains appear empty, there's probably nothing in there. Note that there could be rules set in other tables, like the nat and raw tables (iptables -L only shows the filter table; use iptables -t nat -L etc. to check the others). If you change the rules directly in /etc/sysconfig/iptables, you need to restart the iptables service; conversely, if you add rules dynamically with iptables, you may wish to save them with iptables-save.
linux + iptables + /etc/sysconfig/iptables
1,614,612,997,000
Say that i want to only allow username/password logins on my private network, but restrict all external sources on to key/cert login. I would do something like this: RSAAuthentication yes PubkeyAuthentication yes PasswordAuthentication no Match Address 10.0.0.* PasswordAuthentication yes But is there a way that an attacker would be able to fool this in order to appear inside my local IP range and be allowed username/password logins from the Internet?
IP spoofing is a technique where the attacker uses a forged IP source address with the purpose of concealing the identity of the sender or impersonating another computing system. However, this kind of attack will be nearly "impossible" from the internet because RFC1918 defines the following blocks that will be used only inside LAN environments: The Internet Assigned Numbers Authority (IANA) has reserved the following three blocks of the IP address space for private internets: 10.0.0.0 - 10.255.255.255 (10/8 prefix) 172.16.0.0 - 172.31.255.255 (172.16/12 prefix) 192.168.0.0 - 192.168.255.255 (192.168/16 prefix) This also means that ISPs on the internet will not route those requests back to the attacker if he somehow forges an IP address from inside your LAN. Since security layers are never enough, if you have control over the firewall, or if this machine is directly attached to an internet interface, I would suggest you enable Reverse Path Filtering in Linux (note: the value 1 enables strict mode; 0 would disable the filter, and the key=value must be written without spaces): # sysctl -w net.ipv4.conf.all.rp_filter=1 # sysctl -w net.ipv4.conf.default.rp_filter=1 This will make your kernel automatically discard packets that obviously don't belong to the same subnet as the interface where they are trying to ingress.
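To keep reverse path filtering enabled across reboots, put the values in sysctl's configuration rather than only setting them at runtime (strict mode is 1; loose mode, useful on multi-homed hosts with asymmetric routing, is 2):

```shell
# /etc/sysctl.conf (or a file under /etc/sysctl.d/)
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1
```

Apply without a reboot using sysctl -p (or sysctl --system if you used a drop-in file).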
Are there any loopholes in IP restricting sshd?
1,614,612,997,000
I'm not really sure what's going on here. I have the following: A fresh install of Ubuntu 12.04 LTS. Jira 6.1.2 installed as per instructions from Atlassian. Confluence 3.5.13 installed as per instructions from Atlassian. What's happening is that as long as I maintain an active SSH session with the server I can access Jira and Confluence just fine. From both the internal network and externally. The problem, however, is that as soon as I disconnect the SSH session I lose the ability to access both Jira and Confluence. It's just gone, poof. The browser displays a blank page/404 error. As soon as I log in again over SSH, the services are accessible again (I don't need to restart them or anything after logging back in...just authenticate via SSH and everything is magically working correctly again). I also have an Apache2 instance running on the same machine. This one works just fine even when I'm not logged in via SSH. It seems like it's only Jira/Confluence that are affected here. Any ideas on what might cause this, or how to fix it?
Do you have an encrypted home directory and JIRA or Confluence depend on files anywhere under your home directory? If so, when you log out, that directory is encrypted and only available again when it's unencrypted after you log back in.
Ubuntu blocks access to services when not logged in
1,614,612,997,000
I am not sure this is a Linux question directly ... I use Arch Linux which uses package signing. This requires me to download a set of pgp keys with the pacman-key program. This works off the presumably more general gpg program. If I can get gpg to work, I am guessing I can get pacman-key working. The error I am getting suggests that the firewall I am behind is blocking the port (or something isn't set correctly in my proxy). I am behind a pretty restrictive university firewall and proxy, but the ports for things like ssh, ftp, and http are open and working, but it appears port 11371 is closed. To debug my problem I tried going to http://pgp.mit.edu/, which works fine. When I try and download a key I get redirected to http://pgp.mit.edu:11371/ and then HTTP Error Status: 403 Forbidden Error Reason: Forbidden port I think I am looking for a pgp keyserver that uses a port that might already be open in the firewall. Is there a different keyserver that I can use that works on a more "universal" port?
Indeed, there are keyservers that listen on port 80. One such keyserver is hkp://keyserver.ubuntu.com:80. Indeed, pacman-key uses gpg under the hood. You might have tried specifying a keyserver by passing the --keyserver argument to pacman-key. This didn't work for me. You might have tried specifying a keyserver by creating or altering ~/.gnupg/gpg.conf. This didn't work for me. Turns out that pacman-key is a bash shell script. When it calls gpg, it specifies a config directory. On my system this is /etc/pacman.d/gnupg. Remove or comment out the existing keyserver and keyserver-options lines in /etc/pacman.d/gnupg/gpg.conf. Add these lines: keyserver hkp://keyserver.ubuntu.com:80 keyserver-options verbose timeout=10 Notes: You don't have to use that specific keyserver, there's more than one out there which both contains keys relevant to Arch Linux and listens on port 80 or some other port your firewall doesn't block. Directly editing system-wide configuration files is acceptable on Arch Linux--they have a different philosophy about this sort of thing. You should now be able to use the pacman-key command as expected.
PGP keyserver and proxy firewall issues
1,614,612,997,000
I changed my SSH port to XXX, then did ufw limit XXX/tcp. However, when I try to log in, after 5 or 6 failed attempts I get thrown back to the prompt, but I can try again. Is that the way it is supposed to work?
If I understand your description correctly, you're typing ssh myserver.example.com and making a few failed attempts, and getting your local prompt back. That is still one TCP connection, so it counts for one against ufw limit (the firewall doesn't know anything about authentication attempts, it works at a lower level). You need to make 6 separate ssh connections within 30 seconds to trigger the limit.
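To make the distinction concrete: ufw limit works at the connection level, via iptables' recent match. The sketch below prints (rather than loads, since loading needs root) a simplified version of the rules ufw generates for limit 2222/tcp; 2222 is a stand-in for the question's custom port, and the real generated chains differ in detail.

```shell
# Simplified sketch of the rules behind `ufw limit 2222/tcp`. The `recent`
# match counts NEW TCP connections per source address, so many failed
# passwords inside ONE ssh session still count only once.
port=2222   # stand-in for the custom ssh port
rules="-A ufw-user-input -p tcp --dport $port -m conntrack --ctstate NEW -m recent --set
-A ufw-user-input -p tcp --dport $port -m conntrack --ctstate NEW -m recent --update --seconds 30 --hitcount 6 -j ufw-user-limit
-A ufw-user-input -p tcp --dport $port -j ufw-user-limit-accept"
printf '%s\n' "$rules"   # printed only; applying them would require root
```

The middle rule is the interesting one: six NEW connections from one source within 30 seconds jump to the limit chain, which matches the behaviour described in the answer.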
UFW Limit does not appear to work
1,614,612,997,000
I'm running OpenSUSE 11.4. The problem is that I can easily set what to log, but not where to log to. Currently the same logs are written to both /var/log/firewall and /var/log/messages. I still want messages written into the first one, but not the second one: it is redundant and it is polluting regular system logs. So how do I stop the firewall from writing logs to /var/log/messages? I have /etc/rsyslog.conf, and its contents are:

##
## Note, that when the MYSQL, PGSQL, GSSAPI, GnuTLS or SNMP modules
## (provided in separate rsyslog-module-* packages) are enabled, the
## configuration can't be used on a system with /usr on a remote
## filesystem.
## [The modules are linked against libraries installed bellow of /usr
## thus also installed in /usr/lib*/rsyslog because of this.]
##
## You can change it by adding network-remotefs to the Required-Start
## and Required-Stop LSB init tags in the /etc/init.d/syslog script.
##

#
# if you experience problems, check
# http://www.rsyslog.com/troubleshoot for assistance
# and report them at http://bugzilla.novell.com/
#

# rsyslog v3: load input modules
# If you do not load inputs, nothing happens!

$ModLoad immark.so      # provides --MARK-- message capability (every 1 hour)
$MarkMessagePeriod 3600

$ModLoad imuxsock.so    # provides support for local system logging (e.g. via logger command)
# reduce dupplicate log messages (last message repeated n times)
$RepeatedMsgReduction on

$ModLoad imklog.so      # kernel logging (may be also provided by /sbin/klogd),
                        # see also http://www.rsyslog.com/doc-imklog.html.
$klogConsoleLogLevel 1  # set log level 1 (same as in /etc/sysconfig/syslog).

#
# Use traditional log format by default. To change it for a single
# file, append ";RSYSLOG_TraditionalFileFormat" to the filename.
#
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat

#
# Include config generated by /etc/init.d/syslog script
# using the SYSLOGD_ADDITIONAL_SOCKET* variables in the
# /etc/sysconfig/syslog file.
#
$IncludeConfig /var/run/rsyslog/additional-log-sockets.conf

#
# Include config files, that the admin provided? :
#
$IncludeConfig /etc/rsyslog.d/*.conf

###
# print most important on tty10 and on the xconsole pipe
#
if ( \
    /* kernel up to warning except of firewall  */ \
    ($syslogfacility-text == 'kern')      and      \
    ($syslogseverity <= 4 /* warning */ ) and not  \
    ($msg contains 'IN=' and $msg contains 'OUT=') \
   ) or ( \
    /* up to errors except of facility authpriv */ \
    ($syslogseverity <= 3 /* errors  */ ) and not  \
    ($syslogfacility-text == 'authpriv')           \
   ) \
then /dev/tty10
& |/dev/xconsole

# Emergency messages to everyone logged on (wall)
*.emerg                          *

# enable this, if you want that root is informed
# immediately, e.g. of logins
#*.alert                         root

#
# firewall messages into separate file and stop their further processing
#
if ($syslogfacility-text == 'kern') and \
   ($msg contains 'IN=' and $msg contains 'OUT=') \
then -/var/log/firewall
& ~

#
# acpid messages into separate file and stop their further processing
#
# => all acpid messages for debuging (uncomment if needed):
#if ($programname == 'acpid' or $syslogtag == '[acpid]:') then \
#   -/var/log/acpid

# => up to notice (skip info and debug)
if ($programname == 'acpid' or $syslogtag == '[acpid]:') and \
   ($syslogseverity <= 5 /* notice */) \
then -/var/log/acpid
& ~

#
# NetworkManager into separate file and stop their further processing
#
if ($programname == 'NetworkManager') or \
   ($programname startswith 'nm-') \
then -/var/log/NetworkManager
& ~

#
# email-messages
#
mail.*                          -/var/log/mail
mail.info                       -/var/log/mail.info
mail.warning                    -/var/log/mail.warn
mail.err                         /var/log/mail.err

#
# news-messages
#
news.crit                       -/var/log/news/news.crit
news.err                        -/var/log/news/news.err
news.notice                     -/var/log/news/news.notice

# enable this, if you want to keep all news messages
# in one file
#news.*                         -/var/log/news.all

#
# Warnings in one file
#
*.=warning;*.=err               -/var/log/warn
*.crit                           /var/log/warn

#
# the rest in one file
#
*.*;mail.none;news.none         -/var/log/messages

#
# enable this, if you want to keep all messages
# in one file
#*.*                            -/var/log/allmessages

#
# Some foreign boot scripts require local7
#
local0,local1.*                 -/var/log/localmessages
local2,local3.*                 -/var/log/localmessages
local4,local5.*                 -/var/log/localmessages
local6,local7.*                 -/var/log/localmessages
###
Gilles, no wonder you were puzzled (I wasn't; I just didn't understand the syntax ;-) ): it was a bug in rsyslog, https://bugzilla.novell.com/show_bug.cgi?id=676041. Luckily it has already been fixed.
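For readers on other setups hitting the same symptom: the idiom openSUSE's config relies on is "match, write, discard", and rule order matters: the discard must come before the *.* catch-all that feeds /var/log/messages. A minimal fragment in rsyslog's legacy filter syntax (this is what the config above already does; with the rsyslog bug fixed it behaves as written):

```
# Write kernel messages that look like firewall log lines to their own
# file, then discard them so the later *.* catch-all (/var/log/messages)
# never sees them. This block must appear before the catch-all rule.
if ($syslogfacility-text == 'kern') and \
   ($msg contains 'IN=' and $msg contains 'OUT=') \
then -/var/log/firewall
& ~
```

Here `&` repeats the previous filter for a second action, and `~` is the discard action that stops further processing of the matched message.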
How to stop firewall from writing logs to /var/log/messages?