date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,651,155,404,000 |
I have a Debian router/server (kernel 4.19.194-1) with LAN, WAN, and PPPoE (as gateway) interfaces, and COMPUTER1 on the LAN network, which should have internet access through the Debian router.
As a firewall I use nftables with these rules:
#!/usr/sbin/nft -f
flush ruleset
define EXTIF = "ppp0"
define LANIF = "enp1s0"
define WANIF = "enp4s0"
define LOCALIF = "lo"
table firewall {
chain input {
type filter hook input priority 0
ct state {established, related} counter accept
ct state invalid counter drop
ip protocol icmp counter accept
ip protocol igmp counter accept comment "Accept IGMP"
ip protocol gre counter accept comment "Accept GRE"
iifname { $LOCALIF, $LANIF } counter accept
tcp dport 44122 counter accept
udp dport 11897 counter accept
udp dport 1194 counter accept
udp dport {67,68} counter accept comment "DHCP"
counter reject
}
chain forwarding {
type filter hook forward priority 0
# teleguide.info for nft monitor
ip daddr 46.29.166.30 meta nftrace set 1 counter accept
ip saddr 46.29.166.30 meta nftrace set 1 counter accept
udp dport 1194 counter accept
tcp dport 5938 counter accept
udp dport 5938 counter accept
ip daddr 10.10.0.0/24 counter accept
ip saddr 10.10.0.0/24 counter accept
ip protocol gre counter accept comment "Accept GRE Forward"
counter drop comment "all non described Forward drop"
}
chain outgoing {
type filter hook output priority 0
oifname $LOCALIF counter accept
}
}
table nat {
chain prerouting {
type nat hook prerouting priority 0
iifname $EXTIF udp dport 1194 counter dnat to 10.10.0.4
}
chain postrouting {
type nat hook postrouting priority 0
ip saddr 10.10.0.0/24 oifname $EXTIF counter masquerade
}
}
lsmod:
tun 53248 2
pppoe 20480 2
pppox 16384 1 pppoe
ppp_generic 45056 6 pppox,pppoe
slhc 20480 1 ppp_generic
binfmt_misc 20480 1
i915 1736704 0
ppdev 20480 0
evdev 28672 2
video 49152 1 i915
drm_kms_helper 208896 1 i915
iTCO_wdt 16384 0
iTCO_vendor_support 16384 1 iTCO_wdt
parport_pc 32768 0
coretemp 16384 0
sg 36864 0
serio_raw 16384 0
pcspkr 16384 0
drm 495616 3 drm_kms_helper,i915
parport 57344 2 parport_pc,ppdev
i2c_algo_bit 16384 1 i915
rng_core 16384 0
button 20480 0
nft_masq_ipv4 16384 3
nft_masq 16384 1 nft_masq_ipv4
nft_reject_ipv4 16384 1
nf_reject_ipv4 16384 1 nft_reject_ipv4
nft_reject 16384 1 nft_reject_ipv4
nft_counter 16384 25
nft_ct 20480 2
nft_connlimit 16384 0
nf_conncount 20480 1 nft_connlimit
nf_tables_set 32768 3
nft_tunnel 16384 0
nft_chain_nat_ipv4 16384 2
nf_nat_ipv4 16384 2 nft_chain_nat_ipv4,nft_masq_ipv4
nft_nat 16384 1
nf_tables 143360 112 nft_reject_ipv4,nft_ct,nft_nat,nft_chain_nat_ipv4,nft_tunnel,nft_counter,nft_masq,nft_connlimit,nft_masq_ipv4,nf_tables_set,nft_reject
nf_nat 36864 2 nft_nat,nf_nat_ipv4
nfnetlink 16384 1 nf_tables
nf_conntrack 172032 8 nf_nat,nft_ct,nft_nat,nf_nat_ipv4,nft_masq,nf_conncount,nft_connlimit,nft_masq_ipv4
nf_defrag_ipv6 20480 1 nf_conntrack
nf_defrag_ipv4 16384 1 nf_conntrack
ip_tables 28672 0
x_tables 45056 1 ip_tables
autofs4 49152 2
ext4 745472 2
crc16 16384 1 ext4
mbcache 16384 1 ext4
jbd2 122880 1 ext4
fscrypto 32768 1 ext4
ecb 16384 0
crypto_simd 16384 0
cryptd 28672 1 crypto_simd
glue_helper 16384 0
aes_x86_64 20480 1
raid10 57344 0
raid456 172032 0
async_raid6_recov 20480 1 raid456
async_memcpy 16384 2 raid456,async_raid6_recov
async_pq 16384 2 raid456,async_raid6_recov
async_xor 16384 3 async_pq,raid456,async_raid6_recov
async_tx 16384 5 async_pq,async_memcpy,async_xor,raid456,async_raid6_recov
xor 24576 1 async_xor
raid6_pq 122880 3 async_pq,raid456,async_raid6_recov
libcrc32c 16384 3 nf_conntrack,nf_nat,raid456
crc32c_generic 16384 5
raid0 20480 0
multipath 16384 0
linear 16384 0
raid1 45056 2
md_mod 167936 8 raid1,raid10,raid0,linear,raid456,multipath
sd_mod 61440 6
ata_generic 16384 0
ata_piix 36864 4
libata 270336 2 ata_piix,ata_generic
psmouse 172032 0
scsi_mod 249856 3 sd_mod,libata,sg
ehci_pci 16384 0
i2c_i801 28672 0
uhci_hcd 49152 0
lpc_ich 28672 0
ehci_hcd 94208 1 ehci_pci
mfd_core 16384 1 lpc_ich
usbcore 299008 3 ehci_pci,ehci_hcd,uhci_hcd
r8169 90112 0
realtek 20480 2
libphy 77824 2 r8169,realtek
usb_common 16384 1 usbcore
nft monitor trace (verdict accept everywhere):
trace id 2c2a8923 ip firewall forwarding packet: iif "enp1s0" oif "ppp0" ether saddr xxx ether daddr xxx ip saddr 10.10.0.96 ip daddr 46.29.166.30 ip dscp cs0 ip ecn not-ect ip ttl 127 ip id 32611 ip length 52 tcp sport 62489 tcp dport https tcp flags == syn tcp window 8192
trace id 2c2a8923 ip firewall forwarding rule ip daddr 46.29.166.30 nftrace set 1 counter packets 0 bytes 0 accept (verdict accept)
trace id 2c2a8923 ip nat postrouting packet: oif "ppp0" @ll,xxx ip saddr 10.10.0.96 ip daddr 46.29.166.30 ip dscp cs0 ip ecn not-ect ip ttl 127 ip id 32611 ip length 52 tcp sport 62489 tcp dport https tcp flags == syn tcp window 8192
trace id 2c2a8923 ip nat postrouting rule ip saddr 10.10.0.0/24 oifname "ppp0" counter packets 0 bytes 0 masquerade (verdict accept)
trace id 73f8f405 ip firewall forwarding packet: iif "ppp0" oif "enp1s0" ip saddr 46.29.166.30 ip daddr 10.10.0.96 ip dscp af32 ip ecn not-ect ip ttl 58 ip id 0 ip length 52 tcp sport https tcp dport 62489 tcp flags == 0x12 tcp window 29200
trace id 73f8f405 ip firewall forwarding rule ip saddr 46.29.166.30 nftrace set 1 counter packets 0 bytes 0 accept (verdict accept)
trace id ca8ec4f5 ip firewall forwarding packet: iif "enp1s0" oif "ppp0" ether saddr xxx ether daddr xxx ip saddr 10.10.0.96 ip daddr 46.29.166.30 ip dscp cs0 ip ecn not-ect ip ttl 127 ip id 32612 ip length 40 tcp sport 62489 tcp dport https tcp flags == ack tcp window 256
I don't know why, but with these rules some sites work fine from COMPUTER1 and some don't.
For example: https://google.com works from both the server and COMPUTER1, but https://teleguide.info works from the server (wget) and does not work from COMPUTER1.
Any idea what's wrong?
|
The firewall rules did not cause the problem. Instead, it's due to the MTU difference between "plain" Ethernet and PPPoE. Since the PPPoE/PPP headers take up (at least) 8 bytes, and the usual MTU of Ethernet itself is 1500 bytes, the MTU of PPPoE in that case will be at most 1492 bytes.
I don't know MTU stuff well enough to tell all the details, but as far as I know, if a TCP SYN packet advertises an MSS larger than what can fit into the MTU of the interface the replies will come in through, the replying traffic can have trouble actually getting in.
AFAIK, the reason it works fine on the router/server itself is that there the MSS is derived from the MTU of its outbound interface (ppp0), while COMPUTER1's outbound interface is plain Ethernet.
For TCP traffic, one can work around the problem with a rule in a forward-hook chain:
tcp flags syn tcp option maxseg size set 1452
1452 comes from 1500 - 8 - 40, where the 40 is the combined size of the IPv4 header (20 bytes) and the TCP header (20 bytes). For IPv6 you may need 1500 - 8 - 60 = 1432 (40-byte IPv6 header plus 20-byte TCP header).
You might need to have the rule ordered before any accept rules. (It could depend on the whole structure of the ruleset though, I think.)
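In the asker's ruleset that could look roughly like the following (a sketch on my part, not tested there; note that tcp option maxseg size set rt mtu is a variant that clamps to the route's MTU instead of a hard-coded value):
chain forwarding {
type filter hook forward priority 0
# clamp the MSS of forwarded SYN packets, before any accept rules
tcp flags syn tcp option maxseg size set 1452
...
}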
P.S. I'm not sure whether you need any measure for UDP traffic.
Alternatively, you can probably just set the MTU of the Ethernet interfaces of all the LAN "clients" of this "router" (and that of its LANIF) to 1492. That's arguably less of a "workaround", but could be quite a hassle.
| Router with nftables doesn't work well |
1,651,155,404,000 |
Let's say I want to apply a rule to ip daddr { 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16 }, but I want to exclude two more specific IPv4 addresses from that. How do I do that?
I was hoping for some more elegant way of doing this:
ip daddr { 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16 } \
ip daddr != 10.0.1.2 \
ip daddr != 10.0.2.3
as explained in the nft manpage for negation of addresses or ranges, but it does not show a way to do that with sets.
|
It appears that negation of sets works as expected, although it's undocumented:
ip daddr { 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16 } \
ip daddr != { 10.0.1.2, 10.0.2.3 }
| Can I match a set negatively in nftables? |
1,651,155,404,000 |
Since I'm using a transparent proxy service, I use a raspberry pi as my home router. Its OS is plain Raspbian. Now I'm setting up a Minecraft server on 192.168.2.28, and am exposing it to WAN using NAT. Here's my /etc/nftables.conf:
#!/sbin/nft -f
flush ruleset
table ip filter {
chain output {
type filter hook output priority 0; policy accept;
tcp sport 25565 drop
}
}
table ip nat {
chain prerouting {
type nat hook prerouting priority 0; policy accept;
tcp dport 25565 dnat 192.168.2.28
}
chain postrouting {
type nat hook postrouting priority 0; policy accept;
tcp sport 25565 ip saddr 192.168.2.28 masquerade
}
}
However, I have the following issue:
On 192.168.2.28, I run
nc -l -p 25565
On 192.168.2.27, I run
echo "Hello, world!" | nc wan_ip 25565
The wanted behavior is that I get the "Hello, world!" message on 192.168.2.28.
However, when the first SYN packet goes through the router, it only has its daddr NATed, keeping its saddr equal to 192.168.2.27.
When 192.168.2.28 receives the packet, it replies to 192.168.2.27. Since they are in the same L2 network, the packet doesn't go through the router, hence not NATed.
Then 192.168.2.27 receives the packet from 192.168.2.28, but it doesn't know this is the reply from wan_ip.
How can I fix this issue and make port forwarding work everywhere, including from LAN hosts?
|
As a reminder, nftables (just like iptables) sees only the first packet of a flow for NAT purposes. Once NAT is decided, every other packet in the same flow is handled directly by Netfilter/conntrack without traversing nftables' NAT chains anymore: the return traffic is automatically un-NATed without further assistance.
The only part that matters is what happens to the very first packet (for TCP, the SYN packet). Further traffic of this flow, including reply packets, is handled automatically and bypasses the NAT hooks.
That means there should be no special postrouting rule to handle traffic emitted from 192.168.2.28. There should only be a generic rule to masquerade all of 192.168.2.0/24 when communicating towards the Internet. Anyway, that rule is not a problem in itself: it just won't be used as often as one thinks, because when the emitted traffic is reply traffic, it's part of a pre-existing flow and, as written above, such packets won't traverse nftables anymore.
What is important is to have proper NAT hairpinning support: the case where the client is in the same LAN as the server, and asymmetric traffic would happen without proper care.
Here:
when the router sees anything (coming from anywhere) destined to tcp destination port 25565, redirect the destination address to 192.168.2.28
As the redirection doesn't discriminate between outside and inside, this rule would have been enough on its own when combined with an adequate generic masquerade rule for the whole LAN. But OP didn't use such a rule.
Instead of a generic masquerade in postrouting,
OP used tcp sport 25565 ip saddr 192.168.2.28, which has no relation to the previous rule's filter part for the same first packet, tcp dport 25565 (a packet which has also received the new destination 192.168.2.28): NAT hairpinning is not achieved.
What should be done instead is one of the following:
1. Use generic masquerading for the whole LAN (adapted to the NAT hairpinning use case and the lack of additional information):
chain postrouting {
type nat hook postrouting priority 0; policy accept;
ip saddr 192.168.2.0/24 ip daddr != 192.168.2.0/24 masquerade
}
2. Use masquerading only when the server is the destination, i.e. for packets with a destination rather than a source port of 25565:
chain postrouting {
type nat hook postrouting priority 0; policy accept;
tcp dport 25565 ip daddr 192.168.2.28 masquerade
}
That means only traffic to the server, including LAN-to-LAN traffic for proper NAT hairpinning, is masqueraded (LAN systems won't be able to reach the Internet).
3. Instead of 2., apply masquerade only to traffic that was first DNATed:
chain postrouting {
type nat hook postrouting priority 0; policy accept;
ct status dnat masquerade
}
Same effect as 2., but more generic.
4. Combine 1 and 3 to masquerade only when needed.
Options 2. and 3. have the side effect of hiding the Internet source address, so it's better to apply that only for the LAN:
chain postrouting {
type nat hook postrouting priority 0; policy accept;
ip saddr 192.168.2.0/24 ip daddr != 192.168.2.0/24 ct status dnat masquerade
}
So use either 1., or 4. if you want to restrict LAN access to the Internet. There are other possible choices.
Note: there's a corner case for 1.: if the client uses the router's internal IP address as the target, it won't work without an additional rule for this case. But I doubt the client will do this, and I don't have enough information from OP (the router's internal IP address).
| How to configure port forwarding with nftables for a Minecraft server on Raspberry Pi? |
1,648,211,276,000 |
I am using Ubuntu 20.04 OS with dnsjava client library to query DNS servers.
I have an nftables rule on this machine which blocks all traffic on ports outside the ephemeral port range 32768-61000, which dnsjava uses to receive results from the DNS server.
table inet tb {
chain input {
type filter hook input priority 0; policy drop;
tcp dport 32768-61000 accept
udp dport 32768-61000 accept
....
....
}
chain forward {
....
}
chain output {
.....
}
}
It looks like allowing the 32768-61000 range might be a security flaw, but completely blocking this port range adds latency to DNS resolution and causes many failures due to timeouts.
Is there a way to avoid this port-range rule in nftables? Is there an nftables feature we can use to avoid it without impacting DNS resolution latency?
|
Use stateful firewall rules. Connection state for stateful rules is handled by Netfilter's conntrack subsystem and can be used from nftables.
The goal is to allow (select) outgoing packets, let them be tracked automatically by conntrack, and allow back as incoming packets only those that are part of a flow initially created by the outgoing side. conntrack works automatically as soon as a rule references it (any ct expression). In addition, it should work automatically in the initial (host) network namespace as soon as the module is loaded, even without a rule.
As OP didn't provide the complete ruleset, I'm just replacing rules and won't attempt to create a full ruleset (eg: allowing packets on the lo interface is quite common, or maybe the output chain could also get a drop policy). I'm also not attempting simplifications (eg: recent nftables/kernel combinations allow a single rule for both TCP and UDP).
This becomes:
table inet tb {
chain input {
type filter hook input priority 0; policy drop;
ct state established,related accept
....
....
}
chain forward {
....
}
chain output {
.....
ct state established accept
udp dport 53 accept
tcp dport 53 accept
}
}
The ephemeral ports aren't used anymore in the ruleset (there's no need to even specify source port 53). An incoming packet which is a reply to an outgoing packet to port 53 will be automatically accepted. The related part also allows related packets, such as ICMP errors when a destination is unreachable, to be accepted (thus preventing a timeout in that case).
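The simplification mentioned earlier (one rule for both TCP and UDP) could look like the following in the output chain. This is a sketch on my part and assumes a recent enough nftables/kernel (the th expression for matching the transport header appeared around nftables 0.9.2):
chain output {
.....
ct state established accept
# one rule for both TCP and UDP, matching on the transport header's destination port
meta l4proto { tcp, udp } th dport 53 accept
}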
One can now also follow flow states using these commands (to be run in the same network namespace as the application, in case containers are involved):
For a list:
conntrack -L
for (quasi-realtime) events:
conntrack -E
or more specifically with these two commands for example (running in two terminals):
conntrack -E -p tcp --dport 53
conntrack -E -p udp --dport 53
Of course there's much more about all this. Further documentation:
Stateful firewall
Connection Tracking System
Matching connection tracking stateful metainformation
| How to avoid allowing ephemeral port range rule in nftables |
1,648,211,276,000 |
This is on Ubuntu 20.04.
I am attempting to write a rule for nftables which will match all IP packets received on interface eth1 that have a specific TOS value (0x02). My attempt so far:
sudo nft add table raw
sudo nft -- add chain raw prerouting {type filter hook prerouting priority -300\;}
sudo nft add rule ip raw prerouting iifname eth1 ip dscp 2 counter
sudo nft add rule ip raw prerouting iifname eth1 udp dport 41378 counter
I am sending UDP packets from a separate computer to the computer running nftables. Here is the code that sets up the sending socket, including setting the TOS on those packets:
if ((sockfd = socket(AF_INET, SOCK_DGRAM, 0)) < 0)
{
perror("socket creation failed");
exit(EXIT_FAILURE);
}
int optval = 2;
setsockopt(sockfd, IPPROTO_IP, IP_TOS, &optval, sizeof(optval)); //Set TOS value
servaddr.sin_family = AF_INET;
servaddr.sin_port = htons(41378);
servaddr.sin_addr.s_addr = inet_addr("192.168.10.100");
I can see the packets arrive using sudo tcpdump -i eth1 -vv:
14:51:35.153295 IP (tos 0x2,ECT(0), ttl 64, id 7091, offset 0, flags [DF], proto UDP (17), length 50)
192.168.12.10.49089 > ubuntu.41378: [udp sum ok] UDP, length 22
The raw header of these is as follows:
IP Header
00 E0 4C 00 05 8B 3C 97 0E C7 E1 00 08 00 45 02 ..L...<.......E.
00 31 7E 52 .1~R
Decoded it shows:
IP Header
|-IP Version : 4
|-IP Header Length : 5 DWORDS or 20 Bytes
|-Type Of Service : 2
|-IP Total Length : 49 Bytes(Size of Packet)
|-Identification : 32338
|-TTL : 64
|-Protocol : 17
|-Checksum : 8873
|-Source IP : 192.168.12.10
|-Destination IP : 192.168.12.100
The problem is that when I run sudo nft list ruleset I see:
table ip raw {
chain prerouting {
type filter hook prerouting priority raw; policy accept;
iifname "eth1" ip dscp 0x02 counter packets 0 bytes 0
iifname "eth1" udp dport 41378 counter packets 8 bytes 392
}
}
The rule matching based on udp destination port is working well, but the rule matching on dscp of 0x02 is not.
How can I make a rule to match on a TOS of 0x02?
So far I have tried other values of TOS, in case 0x02 was special. I tried decimal 8, 16, 24, and 32. Each time I see the incoming packet with the TOS value I set, but the nftables rule never counts, which I believe means it never matched.
Handy nftables guide:
https://wiki.nftables.org/wiki-nftables/index.php/Quick_reference-nftables_in_10_minutes
A handy reference for DSCP values to names:
https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus1000/sw/4_0/qos/configuration/guide/nexus1000v_qos/qos_6dscp_val.pdf
|
Looking further into the make-up of an IPv4 header:
https://en.wikipedia.org/wiki/IPv4#Header
I see that TOS is the name given to the entire byte, while DSCP is the name for only its most-significant 6 bits.
Based on this I guessed TOS != DSCP.
I tried changing the sending code to use a TOS of 0x20, and then modified the nftables rule to look for 0x20 >> 2 == 0x08 (shifting the TOS right two bits to convert it into a DSCP value):
sudo nft add rule ip raw prerouting iifname eth1 ip dscp 0x8 counter
With this change I now see that counter increasing for that new rule.
table ip raw {
chain prerouting {
type filter hook prerouting priority raw; policy accept;
iifname "eth1" ip dscp cs1 counter packets 12 bytes 590
iifname "eth1" udp dport 41378 counter packets 12 bytes 590
}
}
TLDR:
TOS is not the same as DSCP.
The DSCP is the most-significant 6 bits of the TOS.
To match a TOS in nftables using ip dscp, shift the TOS right 2 bits and match on that value.
I'm positive I'm missing some core concepts with this answer, so I encourage anyone who understands this better to provide a more useful answer.
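As a side note (an assumption on my part, not verified on the asker's setup): nftables can also match the entire TOS byte with a raw payload expression, @nh,8,8, meaning 8 bits at bit offset 8 into the network header, which avoids the shift conversion entirely:
# match the full TOS byte (DSCP + ECN bits) of IPv4 packets on eth1
sudo nft add rule ip raw prerouting iifname eth1 @nh,8,8 0x02 counter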
| Nftables not matching TOS value in IP packets |
1,648,211,276,000 |
This is my /etc/nftables.conf
#!/usr/sbin/nft -f
flush ruleset
define wan = { eth0 }
table inet filter {
chain input {
type filter hook input priority 0; policy drop;
# allow everything from loopback interface
iif lo accept comment "Accept any localhost traffic"
# drop invalid connection attempts
ct state invalid drop comment "Drop all invalid connection attempts"
# allow established and related connections
ct state established,related accept comment "Accept all traffic initiated by us"
# allow explicitly allowed services/ports/protocols
iif $wan tcp dport 22 accept comment "wan"
# Apply extra (manual configured) rules
# reject everything that has not been accepted before
reject with icmpx type admin-prohibited comment "Drop everything, which is not explicitly allowed"
}
chain forward {
type filter hook forward priority 0; policy drop;
# allow everything from loopback interface
iif lo accept comment "Accept any localhost traffic"
# drop invalid connection attempts
ct state invalid drop comment "Drop all invalid connection attempts"
# Apply extra (manual configured) rules
# reject everything that has not been accepted before
reject with icmpx type admin-prohibited comment "Drop everything, which is not explicitly allowed"
}
chain output {
type filter hook output priority 0; policy accept;
# Apply extra (manual configured) rules
}
}
This is what I get from journalctl -u nftables.service, after running systemctl restart nftables.service:
Feb 01 18:54:40 mydomain.net systemd[1]: Starting nftables...
Feb 01 18:54:40 mydomain.net nft[1682]: /etc/nftables.conf:14:13-33: Error: Could not process rule: No such file or directory
Feb 01 18:54:40 mydomain.net nft[1682]: ct state invalid drop comment "Drop all invalid connection attempts"
Feb 01 18:54:40 mydomain.net nft[1682]: ^^^^^^^^^^^^^^^^^^^^^
Feb 01 18:54:40 mydomain.net nft[1682]: /etc/nftables.conf:16:13-47: Error: Could not process rule: No such file or directory
Feb 01 18:54:40 mydomain.net nft[1682]: ct state established,related accept comment "Accept all traffic initiated by us"
Feb 01 18:54:40 mydomain.net nft[1682]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Feb 01 18:54:40 mydomain.net nft[1682]: /etc/nftables.conf:21:13-51: Error: Could not process rule: No such file or directory
Feb 01 18:54:40 mydomain.net nft[1682]: reject with icmpx type admin-prohibited comment "Drop everything, which is not explicitly allowed"
Feb 01 18:54:40 mydomain.net nft[1682]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Feb 01 18:54:40 mydomain.net nft[1682]: /etc/nftables.conf:29:13-33: Error: Could not process rule: No such file or directory
Feb 01 18:54:40 mydomain.net nft[1682]: ct state invalid drop comment "Drop all invalid connection attempts"
Feb 01 18:54:40 mydomain.net nft[1682]: ^^^^^^^^^^^^^^^^^^^^^
Feb 01 18:54:40 mydomain.net nft[1682]: /etc/nftables.conf:32:13-51: Error: Could not process rule: No such file or directory
Feb 01 18:54:40 mydomain.net nft[1682]: reject with icmpx type admin-prohibited comment "Drop everything, which is not explicitly allowed"
Feb 01 18:54:40 mydomain.net nft[1682]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Feb 01 18:54:40 mydomain.net systemd[1]: nftables.service: Main process exited, code=exited, status=1/FAILURE
Feb 01 18:54:40 mydomain.net systemd[1]: nftables.service: Failed with result 'exit-code'.
Feb 01 18:54:40 mydomain.net systemd[1]: Failed to start nftables.
When I comment out the rules starting with "ct state", the service starts without an error. What is wrong here? The very same ruleset works fine on other machines.
System information:
OS: Debian 10
Kernel: 4.19.0-14-amd64
|
For anybody else encountering this problem, make sure that:
The "netfilter" (and corresponding) kernel options are compiled in, either directly or as modules (grep -i netfilter /proc/config* or grep -i netfilter /boot/config*).
If the options have been compiled as modules, make sure you do not have the sysctl option kernel.modules_disabled set to 1 (edit /etc/sysctl.conf).
| nftables: ct state rule produces "Error: Could not process rule: No such file or directory" |
1,648,211,276,000 |
nftables supports dynamically populated sets which are documented in nftables wiki. The first example on the wiki page is following:
table ip my_filter_table {
set my_ssh_meter {
type ipv4_addr
size 65535
flags dynamic
}
chain my_input_chain {
type filter hook input priority filter; policy accept;
tcp dport 22 ct state new add @my_ssh_meter { ip saddr limit rate 10/second } accept
}
}
nftables configuration above is explained as follows:
In this example we create a rule to match new TCP ssh (port 22) connections, which uses a dynamic set named my_ssh_meter to limit the traffic rate to 10 connections per second for each source IP address.
How should this rule be read or understood? I get that source IP addresses (ip saddr), with a limit rate of 10/second attached, are added to the named set my_ssh_meter when the Linux connection-tracking subsystem sees a new connection (ct state new) to TCP destination port 22 (tcp dport 22). However, it looks like the my_ssh_meter set is never used afterwards? And is the accept action executed when populating the set succeeds?
|
What happens:
when a new ssh connection arrives, its source address plus an attached meter is added to the set (if not already present: it's a set, so each element appears only once), and the element is evaluated to provide a boolean result, true/false
if the evaluation is true, the rule continues; otherwise the rule stops.
note: without the attached meter, adding an element would always evaluate true: adding to a set is not terminal.
Here:
if it evaluated true (ie: it didn't flood), the rule continues to its accept, ending the evaluation of the current input hook (ie: chain my_input_chain).
if it evaluated false, the rule ends before its final accept statement;
evaluation then continues with the next rule, or if there is none,
the default chain policy is used: accept.
So whatever happens, the new connection's packet is always accepted: this ruleset as-is has no visible effect other than populating the set.
The wiki's ruleset example is incomplete. It should be followed by a drop (or reject) under the same conditions (some rule factorization is certainly possible):
% nft add rule ip my_filter_table my_input_chain tcp dport 22 ct state new drop
So if the meter did overflow, the final accept part of the previous rule is not evaluated and the incoming connection is dropped by this next rule. As long as attempts are being made, they keep being accounted in the meter's token bucket, so as long as the attempts keep coming too fast, no new connection will be possible (note that with the default token bucket burst of 5, the first few connections will always succeed, even when immediately flooding).
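The factorization hinted at above could be a single rule using limit rate over, which makes the element evaluate true only when the rate is exceeded (a sketch on my part; it relies on the chain's accept policy to let under-limit connections through):
% nft add rule ip my_filter_table my_input_chain tcp dport 22 ct state new add @my_ssh_meter { ip saddr limit rate over 10/second } drop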
The important thing to know here is that, as with many other statements, the set statement (with its special syntax using @set) both performs an action with a side effect (an element is added to the set) and has a boolean result, true or false, which conditions further evaluation of the remaining part of the rule: both roles are in use at the same time.
To experiment, simply replace 10/second with 10/minute or 10/hour: the behavior will become more obvious (after the first 5 connections: by default, limit's token bucket allows a burst of 5 packets).
| How to understand the nftables "add @my_ssh_meter { ip saddr limit rate 10/second } accept" rule? |
1,648,211,276,000 |
From the nftables Quick reference:
family refers to a one of the following table types: ip, arp, ip6,
bridge, inet, netdev.
and
type refers to the kind of chain to be created. Possible types are:
filter: Supported by arp, bridge, ip, ip6 and inet table families.
route: Mark packets (like mangle for the output hook; for other hooks use the type filter instead), supported by ip and ip6.
nat: In order to perform Network Address Translation, supported by ip and ip6.
From another document which explains how to configure chains:
The possible chain types are:
filter, which is used to filter packets. This is supported by the arp, bridge, ip, ip6 and inet table families.
route, which is used to reroute packets if any relevant IP header field or the packet mark is modified. If you are familiar with iptables, this chain type provides equivalent semantics to the mangle table, but only for the output hook (for other hooks use type filter instead). This is supported by the ip, ip6 and inet table families.
nat, which is used to perform Network Address Translation (NAT). Only the first packet of a given flow hits this chain; subsequent packets bypass it. Therefore, never use this chain for filtering. The nat chain type is supported by the ip, ip6 and inet table families.
Hence, according to at least two authoritative references, no chain type is supported by the netdev family. Given that, how can we use the netdev family at all?
|
I'm new to this, but also interested in nftables rules. I found this in the nftables wiki: "The principal (only?) use for this (netdev) family is for base chains using the ingress hook, new in Linux kernel 4.2." More info here, at the end of the article: https://wiki.nftables.org/wiki-nftables/index.php/Nftables_families
The ingress hook allows you to filter L2 traffic. It comes before prerouting, right after the packet is passed up from the NIC driver. This means you can enforce very early filtering policies; this very early location in the packet path is ideal for dropping packets associated with DDoS attacks.
When adding a chain on the ingress hook, it is mandatory to specify the device the chain will be attached to.
Source: https://www.datapacket.com/blog/securing-your-server-with-nftables
How to specify the device can be found here:
How to use variable for device name when declaring a chain to use the (netdev) ingress hook?
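Putting it together, a minimal netdev table could look like this (a sketch on my part; the interface name eth0 and the drop rule are just placeholders):
table netdev filter {
chain ingress {
# the device is mandatory for the ingress hook
type filter hook ingress device "eth0" priority -500; policy accept;
# example: drop traffic from a documentation range very early, before prerouting
ip saddr 192.0.2.0/24 counter drop
}
}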
| What chain types are supported by the nftables NETDEV family? |
1,648,211,276,000 |
Here is a working example of how I currently use an unnamed set containing ICMP types:
#!/usr/sbin/nft -f
add table filter_4
add chain filter_4 icmp_out_4 {
comment "Output ICMPv4 traffic"
}
define response_icmp_4 = {
0, # Echo Reply
3, # Destination Unreachable
10, # Router Solicitation
11, # Time Exceeded
12, # Parameter Problem
14 # Timestamp Reply
}
# Block ICMP response from localhost
add rule filter_4 icmp_out_4 icmp type $response_icmp_4 drop
What I want to do is convert this unnamed set, called response_icmp_4, to a named set.
Here is my unsuccessful attempt:
# Declare named set
add set filter_4 response_icmp_4 { type inet_service }
# Add ICMP type elements
add element filter_4 response_icmp_4 {
0, # Echo Reply
3, # Destination Unreachable
10, # Router Solicitation
11, # Time Exceeded
12, # Parameter Problem
14 # Timestamp Reply
}
Creating the set works but it's refused during rule processing:
# Block ICMP response from localhost
add rule filter_4 icmp_out_4 icmp type @response_icmp_4 drop
With the following error output:
Error: datatype mismatch, expected ICMP type, expression has type internet network service
The error is self-explanatory, but the question is: what type should I specify? type inet_service does not work.
According to the docs, valid type expressions are ipv4_addr, ipv6_addr, ether_addr, inet_proto, inet_service, mark.
It's also possible to specify typeof to auto-derive the data type, but this doesn't work either, for ex:
add set filter_4 response_icmp_4 { typeof vlan id }
Which fails with a similar error:
Error: datatype mismatch, expected ICMP type, expression has type integer
Which is odd, because an ICMP type IS an integer.
If you can also link to documentation explaining this that would be helpful because I'm not able to find it.
|
The named set's declared type must match the type of the expression it will be compared with. An ICMP type is not an inet service:
# nft describe inet_service
datatype inet_service (internet network service) (basetype integer), 16 bits
which means a port. Compatible for example with:
# nft describe udp dport
payload expression, datatype inet_service (internet network service) (basetype integer), 16 bits
So when adding the rule, nftables complains with:
Error: datatype mismatch, expected ICMP type, expression has type internet network service
As for figuring out the ICMP type, start from the rule's payload expression (what typeof uses), icmp type:
# nft describe icmp type
payload expression, datatype icmp_type (ICMP type) (basetype integer), 8 bits
pre-defined symbolic constants (in decimal):
echo-reply 0
destination-unreachable 3
[...]
leading to (datatype/type) icmp_type:
# nft describe icmp_type
datatype icmp_type (ICMP type) (basetype integer), 8 bits
pre-defined symbolic constants (in decimal):
echo-reply 0
destination-unreachable 3
[...]
This:
add set filter_4 response_icmp_4 { type inet_service }
has to be replaced with:
add set filter_4 response_icmp_4 { type icmp_type; }
or, instead of type, using typeof (which usually matches what's used in the ruleset, so it's easier to figure out, and which can also be used as above directly with nft describe to find the equivalent type):
add set filter_4 response_icmp_4 { typeof icmp type; }
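Putting it together, a minimal sketch (using the table and chain names from the question, and the symbolic names for the same ICMP type values 0, 3, 10, 11, 12, 14):

```
# Declare the set with the correct datatype
add set filter_4 response_icmp_4 { type icmp_type; }
# Symbolic constants as listed by "nft describe icmp_type"
add element filter_4 response_icmp_4 { echo-reply, destination-unreachable, router-solicitation, time-exceeded, parameter-problem, timestamp-reply }
# The rule from the question now type-checks
add rule filter_4 icmp_out_4 icmp type @response_icmp_4 drop
```

This assumes table filter_4 and chain icmp_out_4 already exist, as in the question.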
| Declare and use a named set of ICMP types in nftables |
1,648,211,276,000 |
Here is a working example of how ct helper object is currently declared as per nftables docs
#!/usr/sbin/nft -f
add table filter_4 {
# TODO: Can helper object be declared outside the scope of table declaration scope?
# ct helper stateful object
# "ftp-standard" is the name of this ct helper stateful object
# "ftp" is the in-kernel name of the ct helper for ftp
ct helper ftp-standard {
type "ftp" protocol tcp;
}
}
The ct helper object is then used for ex. as follows:
add chain filter_4 new_out_4 {
comment "New output IPv4 traffic"
}
# FTP (active and passive)
# Rule for initial ftp connection (control channel), setting ct helper stateful object to use
# "ftp-standard" is the name of the ct helper stateful object
add rule filter_4 new_out_4 tcp sport >1023 tcp dport 21 ct helper set "ftp-standard" accept
What I want to achieve is to know the syntax for declaring a ct helper object outside the table, in a similar fashion to how sets are declared.
An example on how a set is declared outside table declaration scope
add set filter_4 multicast_proto { type inet_proto; comment "IPv4 multicast protocols"; }
add element ip filter_4 multicast_proto { udp, igmp }
In a similar fashion I want to declare the ct helper object, for instance:
# Table declaration
add table filter_4
# Declare ct helper separately
add ct helper ftp-standard {
type "ftp" protocol tcp;
}
This of course doesn't work; what's the syntax to add/declare a ct helper like this?
It seems a ct helper must be bound to a table (is this true?), therefore perhaps the table name should be specified in the above example.
|
The syntax is described in nft(8):
CT HELPER
add ct helper [family] table name { type type protocol protocol ; [l3proto family ;] }
delete ct helper [family] table name
list ct helpers
So for your case:
From the shell (including proper shell quoting using ' where appropriate; nft itself doesn't care: it parses parameters the same however they are split):
nft add ct helper filter_4 ftp-standard '{ type "ftp" protocol tcp; }'
Or if already within nft context, simply:
add ct helper filter_4 ftp-standard { type "ftp" protocol tcp; }
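Put together with the fragments from the question, a sketch of a standalone file for nft -f (note that the table name is indeed mandatory in the command, confirming the suspicion in the question that a ct helper is always bound to a table):

```
#!/usr/sbin/nft -f
# the table must exist first: a ct helper is always attached to a table
add table filter_4
# declared outside the table { } block, but still bound to table filter_4
add ct helper filter_4 ftp-standard { type "ftp" protocol tcp; }
add chain filter_4 new_out_4
add rule filter_4 new_out_4 tcp sport >1023 tcp dport 21 ct helper set "ftp-standard" accept
```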
| Declare ct helper object outside table declaration scope in nftables |
1,648,211,276,000 |
I've set up a rule to match multicast packets as follows:
add rule filter_4 new_out_4 meta pkttype multicast goto multicast_out_4
filter_4 is the IPv4 table, new_out_4 is the output chain and multicast_out_4 is a chain handling multicast-only traffic.
Here is a more complete picture of the IPv4 table excluding non-relevant portion:
#!/usr/sbin/nft -f
add table filter_4
add chain filter_4 output {
# filter = 0
type filter hook output priority filter; policy drop;
}
add chain filter_4 multicast_out_4 {
comment "Output multicast IPV4 traffic"
}
add chain filter_4 new_out_4 {
comment "New output IPv4 traffic"
}
#
# Stateful filtering
#
# Established IPv4 traffic
add rule filter_4 input ct state established goto established_in_4
add rule filter_4 output ct state established goto established_out_4
# Related IPv4 traffic
add rule filter_4 input ct state related goto related_in_4
add rule filter_4 output ct state related goto related_out_4
# New IPv4 traffic ( PACKET IS MATCHED HERE )
add rule filter_4 input ct state new goto new_in_4
add rule filter_4 output ct state new goto new_out_4
# Invalid IPv4 traffic
add rule filter_4 input ct state invalid log prefix "drop invalid_filter_in_4: " counter name invalid_filter_count_4 drop
add rule filter_4 output ct state invalid log prefix "drop invalid_filter_out_4: " counter name invalid_filter_count_4 drop
# Untracked IPv4 traffic
add rule filter_4 input ct state untracked log prefix "drop untracked_filter_in_4: " counter name untracked_filter_count_4 drop
add rule filter_4 output ct state untracked log prefix "drop untracked_filter_out_4: " counter name untracked_filter_count_4 drop
In the above setup new output traffic including multicast is matched by rule add rule filter_4 output ct state new goto new_out_4
Here is new_out_4 chain with only the relevant (non working) multicast rule that doesn't work:
# Multicast IPv4 traffic ( THIS RULE DOES NOT WORK, SEE LOG OUTPUT BELOW)
add rule filter_4 new_out_4 meta pkttype multicast goto multicast_out_4
#
# Default chain action ( MULTICAST PACKET IS DROPPED HERE )
#
add rule filter_4 new_out_4 log prefix "drop new_out_4: " counter name new_out_filter_count_4 drop
Here is what the log says about dropped multicast packet:
drop new_out_4: IN= OUT=eth0 SRC=192.168.1.100 DST=224.0.0.251 LEN=163 TOS=0x00 PREC=0x00 TTL=255 ID=27018 DF PROTO=UDP SPT=5353 DPT=5353 LEN=143
The packet that was dropped was sent to destination address 224.0.0.251, which is a multicast address; it was supposed to be matched by the multicast rule in the new_out_4 chain and then processed by the multicast_out_4 chain, but was not.
Instead the packet was not matched and fell through to the default drop rule in the new_out_4 chain above, see the comment (Default chain action).
Obviously this means that the multicast rule does not work.
Why doesn't the multicast rule work?
Expected:
meta pkttype multicast matches destination address 224.0.0.251
EDIT:
System info:
Kernel: 6.5.0-0.deb12.4-amd64
had the same problem with earlier kernel 6.1
nftables: v1.0.6 (Lester Gooch #5)
|
Having reproduced (and completed missing parts for) the setup with a few additional entries such as:
nft insert rule filter_4 new_out_4 counter meta pkttype host counter
indeed the property meta pkttype for this skbuff is host rather than the expected multicast for an outgoing multicast packet. Note that when this keyword was introduced, it was about input, not output:
src: Add support for pkttype in meta expresion
If you want to match the pkttype field of the skbuff, you have to use
the following syntax:
nft add rule ip filter input meta pkttype PACKET_TYPE
where PACKET_TYPE can be: unicast, broadcast and multicast.
Actually the direct equivalent with iptables is the pkttype match module:
pkttype
This module matches the link-layer packet type.
[!] --pkt-type {unicast|broadcast|multicast}
# iptables-translate -A OUTPUT -m pkttype --pkt-type multicast
nft 'add rule ip filter OUTPUT pkttype multicast counter'
Putting all this together: when an outgoing IP (routing: layer 3) packet is created, it has not yet reached layer 2 (link-layer), so its skbuff doesn't reflect what it might become, if it's even intended to reach layer 2 later.
What should actually be tested is the IP address property with regard to the routing stack, rather than the packet property with regard to Ethernet. iptables provides for this the addrtype match module:
addrtype
This module matches packets based on their address type. [...]
Its translation hints what should actually be used: the fib expression:
# iptables-translate -A OUTPUT -m addrtype --dst-type MULTICAST
nft 'add rule ip filter OUTPUT fib daddr type multicast counter'
FIB EXPRESSIONS
fib {saddr | daddr | mark | iif | oif} [. ...] {oif | oifname | type}
A fib expression queries the fib (forwarding information base) to
obtain information such as the output interface index a particular
address would use. The input is a tuple of elements that is used as
input to the fib lookup functions.
There's no direct example for multicast. The nearest example is a more complex one about dropping a packet not intended for an address on the incoming interface, where multicast is one of the 3 exceptions:
# drop packets to address not configured on incoming interface
filter prerouting fib daddr . iif type != { local, broadcast, multicast } drop
So just replace wherever used in an output (or postrouting) hook:
meta pkttype multicast
with:
fib daddr type multicast
In an input (or prerouting) hook, while the skbuff property probably matches the IP property, to be consistent, it should also be replaced exactly the same, also with:
fib daddr type multicast
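Applied to the ruleset from the question, the dispatch rule would then read (a sketch, same chain names as in the question):

```
# fib daddr type inspects the routing property of the destination address
# instead of the skbuff's link-layer pkttype
add rule filter_4 new_out_4 fib daddr type multicast goto multicast_out_4
```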
The test command below, used conjointly on another host in the LAN (to test input) and on this host (to test output):
socat -d -d UDP4-DATAGRAM:224.0.0.251:5555,bind=:5555,ip-add-membership=224.0.0.251:eth0 -
will properly match fib daddr type multicast in input and output.
Important remark:
I believe the above addressed the question, but note however that multicast cannot be properly tracked by Netfilter's conntrack, because it can't associate a reply using another unicast source address with the multicast destination address of the initial query: since they differ, it considers the reply to be another (new) flow instead of associating them and treating the reply as part of the previous flow. So such flows will never appear in ESTABLISHED state with conntrack or the conntrack -L command. The ruleset should be adapted for this: it can't rely only on ct state established,related kinds of rules, but that's beyond the scope of this question.
| nftables - multicast packets not matched |
1,648,211,276,000 |
This is a question specifically about nftables chain types in the Linux kernel.
I don't understand how they're processed. I've been staring at the kernel code for a while, and it looks to me like an nftables "chain" is attached to a netns as a hook entry (in e.g. struct netns_nf.hooks_ipv4 for IPv4).
I don't see anything that discriminates on the "type" of the chain—filter, nat, or route—while creating or processing the chain. It looks like all chain types would simply get stuffed in as hook entries, and only the struct nf_hook_entry.hook function would be type-specific. For example, I think nf_hook_entry.hook would be the function nft_nat_do_chain for a type nat chain.
Looking at this table of which combinations of family, hook, and type exist, let's say I added two chains to the input hook, one with type filter and one with type nat. Let's further say that both chains are created with the same priority.
Questions:
Is my hypothetical scenario even possible, two chains on the same hook, only varying by type? If not, where does the kernel prevent this?
If it is possible, what will determine the order that these two chains run in? Is there something I'm missing that runs e.g. chains of type nat before chains of type filter? Or will it be down to whichever chain was added first vs. second (and maybe kernel version, etc.)?
There is an excellent related answer that's about chains with the same priority, but the specific case there is with two chains of the same type.
I am asking this question with the ultimate intent of understanding why nftables has a concept of "type" at all.
I know, for example, that the handler for type route chains may call ip_route_me_harder (not a joke!) if certain fields of a packet are changed by a chain, and this is unique to chains of type route. I know type nat has a few restrictions on its priority. I have also read that type nat chains are only called for the first packet of a connection, but I haven't been able to locate that exact restriction anywhere in the code (though maybe it's nf_nat_inet_fn in nf_nat_core.c?).
I appreciate any pointers you can give me to help me understand how and where type is handled for nftables chains in the kernel!
Edit: This answer seems to suggest that nftables "types" are nearly a stylistic choice, though it does point out the special behaviors of the route type. Another answer there further muddies my waters by saying that a NAT rule cannot be added to a chain of type filter, which (if true) is very confusing to me. Where is such a restriction implemented? (Only in userspace?)
|
TL;DR
When doing an experiment where a network namespace receives traffic and does NAT on it, one can see that whatever priority is given to the type nat hook prerouting chain, it doesn't matter with regard to the filter chains' priorities: NAT always happens at exactly prerouting hook priority -100, aka NF_IP_PRI_NAT_DST. Priority between NAT chains themselves is preserved.
You looked at the .hook entries in definitions which are for actual actions during packet traversal, but overlooked the .ops_register/.ops_unregister entries defined only for NAT hooks which introduce a different behavior when the chain is registered.
Tests done with kernel 6.5.x and nftables 1.0.9, some links provided on https://elixir.bootlin.com/ with latest LTS kernel at this date without patch revision: 6.1 (not 6.1.x).
To summarize:
NAT acts at special hook priorities, and only these priorities (rather than the priority given when adding the chain) are relevant when comparing with other hook types such as filter or route: NAT chains register differently than other chains. Still the given priorities apply internally between different NAT chains hooking at the same place.
route follows normal priorities just like filter (no special registration).
don't use exact priorities such as NF_IP_PRI_NAT_DST (or various other NAT-related exact values) elsewhere because then the precise interaction between how nftables and NAT hook into Netfilter might be undefined (example: could change depending on order of creation, or behavior could change depending on kernel version) instead of deterministic. For example use -101 or less to be before DNAT or -99 or more to be after DNAT but don't ever use -100 to avoid undefined behavior.
the same warning applies for other special facilities' priorities, described for example there, such as NF_IP_PRI_CONNTRACK_DEFRAG or NF_IP_PRI_CONNTRACK etc. (and for iptables priorities when also interacting with iptables rules and needing a deterministic outcome).
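As a sketch of the priority guidance above (hypothetical table and chain names; the point is only the -101/-99 bracketing around NF_IP_PRI_NAT_DST = -100):

```
table ip t {
	chain before_dnat {
		# strictly less than -100: guaranteed to run before DNAT
		type filter hook prerouting priority -101; policy accept;
	}
	chain after_dnat {
		# strictly greater than -100: guaranteed to run after DNAT
		type filter hook prerouting priority -99; policy accept;
	}
}
```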
Experiment
I left aside cases such as family inet: one can just check it will behave the same with an adequate ruleset and test case.
Example ruleset (to be loaded using nft -f ...):
table t # for idempotence
delete table t # for idempotence
table t {
chain pf1 {
type filter hook prerouting priority -250; policy accept;
udp dport 5555 meta nftrace set 1 counter
}
chain pf2 {
type filter hook prerouting priority -101; policy accept;
udp dport 5555 counter accept
udp dport 6666 counter accept
}
chain pf3 {
type filter hook prerouting priority -99; policy accept;
udp dport 5555 counter accept
udp dport 6666 counter accept
}
chain pn1 {
type nat hook prerouting priority -160; policy accept;
counter
}
chain pn2 {
type nat hook prerouting priority 180; policy accept;
udp dport 5555 counter dnat to :6666
}
chain pn3 {
type nat hook prerouting priority -190; policy accept;
counter
}
chain pn4 {
type nat hook prerouting priority 190; policy accept;
udp dport 5555 counter dnat to :7777
udp dport 6666 counter dnat to :7777
}
}
This ruleset will change a received UDP port 5555 into port 6666 instead in pn2. pn1, pn3 and pn4 are here just for priority between NAT chains (pn4 also here to explain that NAT of a given type (DNAT, SNAT...) happens only once). There's a receiving application on UDP port 6666 (so the flow isn't deleted by an ICMP destination port unreachable), I used socat UDP4-LISTEN:6666,fork EXEC:date for this test and (interactively) sent two packets from a remote client using socat UDP4:192.0.2.2:5555 -.
One would believe that the NAT chain pn2 with priority 180 performing a DNAT would happen after filter chain pf3 with priority -99. But that's not what happens between type nat and other types: NAT is special. Using nft monitor trace like below:
# nft monitor trace
trace id 4ab9ba62 ip t pf1 packet: iif "lan0" ether saddr 8e:3e:82:1a:dc:87 ether daddr fa:2f:7e:2d:f1:03 ip saddr 192.0.2.1 ip daddr 192.0.2.2 ip dscp cs0 ip ecn not-ect ip ttl 64 ip id 49393 ip length 30 udp sport 58201 udp dport 5555 udp length 10 @th,64,16 0x610a
trace id 4ab9ba62 ip t pf1 rule udp dport 5555 meta nftrace set 1 counter packets 0 bytes 0 (verdict continue)
trace id 4ab9ba62 ip t pf1 verdict continue
trace id 4ab9ba62 ip t pf1 policy accept
trace id 4ab9ba62 ip t pf2 packet: iif "lan0" ether saddr 8e:3e:82:1a:dc:87 ether daddr fa:2f:7e:2d:f1:03 ip saddr 192.0.2.1 ip daddr 192.0.2.2 ip dscp cs0 ip ecn not-ect ip ttl 64 ip id 49393 ip length 30 udp sport 58201 udp dport 5555 udp length 10 @th,64,16 0x610a
trace id 4ab9ba62 ip t pf2 rule udp dport 5555 counter packets 0 bytes 0 accept (verdict accept)
trace id 4ab9ba62 ip t pn3 packet: iif "lan0" ether saddr 8e:3e:82:1a:dc:87 ether daddr fa:2f:7e:2d:f1:03 ip saddr 192.0.2.1 ip daddr 192.0.2.2 ip dscp cs0 ip ecn not-ect ip ttl 64 ip id 49393 ip length 30 udp sport 58201 udp dport 5555 udp length 10 @th,64,16 0x610a
trace id 4ab9ba62 ip t pn3 rule counter packets 0 bytes 0 (verdict continue)
trace id 4ab9ba62 ip t pn3 verdict continue
trace id 4ab9ba62 ip t pn3 policy accept
trace id 4ab9ba62 ip t pn1 packet: iif "lan0" ether saddr 8e:3e:82:1a:dc:87 ether daddr fa:2f:7e:2d:f1:03 ip saddr 192.0.2.1 ip daddr 192.0.2.2 ip dscp cs0 ip ecn not-ect ip ttl 64 ip id 49393 ip length 30 udp sport 58201 udp dport 5555 udp length 10 @th,64,16 0x610a
trace id 4ab9ba62 ip t pn1 rule counter packets 0 bytes 0 (verdict continue)
trace id 4ab9ba62 ip t pn1 verdict continue
trace id 4ab9ba62 ip t pn1 policy accept
trace id 4ab9ba62 ip t pn2 packet: iif "lan0" ether saddr 8e:3e:82:1a:dc:87 ether daddr fa:2f:7e:2d:f1:03 ip saddr 192.0.2.1 ip daddr 192.0.2.2 ip dscp cs0 ip ecn not-ect ip ttl 64 ip id 49393 ip length 30 udp sport 58201 udp dport 5555 udp length 10 @th,64,16 0x610a
trace id 4ab9ba62 ip t pn2 rule udp dport 5555 counter packets 0 bytes 0 dnat to :6666 (verdict accept)
trace id 4ab9ba62 ip t pf3 packet: iif "lan0" ether saddr 8e:3e:82:1a:dc:87 ether daddr fa:2f:7e:2d:f1:03 ip saddr 192.0.2.1 ip daddr 192.0.2.2 ip dscp cs0 ip ecn not-ect ip ttl 64 ip id 49393 ip length 30 udp sport 58201 udp dport 6666 udp length 10 @th,64,16 0x610a
trace id 4ab9ba62 ip t pf3 rule udp dport 6666 counter packets 0 bytes 0 accept (verdict accept)
trace id 46ad0497 ip t pf1 packet: iif "lan0" ether saddr 8e:3e:82:1a:dc:87 ether daddr fa:2f:7e:2d:f1:03 ip saddr 192.0.2.1 ip daddr 192.0.2.2 ip dscp cs0 ip ecn not-ect ip ttl 64 ip id 49394 ip length 30 udp sport 58201 udp dport 5555 udp length 10 @th,64,16 0x620a
trace id 46ad0497 ip t pf1 rule udp dport 5555 meta nftrace set 1 counter packets 0 bytes 0 (verdict continue)
trace id 46ad0497 ip t pf1 verdict continue
trace id 46ad0497 ip t pf1 policy accept
trace id 46ad0497 ip t pf2 packet: iif "lan0" ether saddr 8e:3e:82:1a:dc:87 ether daddr fa:2f:7e:2d:f1:03 ip saddr 192.0.2.1 ip daddr 192.0.2.2 ip dscp cs0 ip ecn not-ect ip ttl 64 ip id 49394 ip length 30 udp sport 58201 udp dport 5555 udp length 10 @th,64,16 0x620a
trace id 46ad0497 ip t pf2 rule udp dport 5555 counter packets 0 bytes 0 accept (verdict accept)
trace id 46ad0497 ip t pf3 packet: iif "lan0" ether saddr 8e:3e:82:1a:dc:87 ether daddr fa:2f:7e:2d:f1:03 ip saddr 192.0.2.1 ip daddr 192.0.2.2 ip dscp cs0 ip ecn not-ect ip ttl 64 ip id 49394 ip length 30 udp sport 58201 udp dport 6666 udp length 10 @th,64,16 0x620a
trace id 46ad0497 ip t pf3 rule udp dport 6666 counter packets 0 bytes 0 accept (verdict accept)
^C
one can see that all prerouting NAT hooks are happening between pf2 and pf3 ie between priorities -101 and -99: at priority -100 which is NF_IP_PRI_NAT_DST as used in Netfilter's own structures static const struct nf_hook_ops nf_nat_ipv4_ops[]. Chain ip t pf3 sees port 6666 and not 5555.
If a NAT statement has been applied, the following rules (in the same hook) are skipped by Netfilter, so pn4 never gets a chance to be traversed at all in the example above (with only 2 packets of the same flow initially to port 5555) and never appears: this behavior also differs from type filter, where the next hook is still traversed (eg: pf3 is still traversed after pf2).
As usual, the next packet in the flow doesn't trigger any NAT chain anymore, since only packets creating a new flow (conntrack state NEW) are sent to NAT chains, so the next packet doesn't even appear traversing the pnX chains. Priorities between the four prerouting NAT chains are honored: the order is pn3 (-190), pn1 (-160), pn2 (180) (and then there would be pn4 (190), but it doesn't get the chance).
Note: the fact that the packets/bytes counters don't appear increased in the same run of nft monitor trace looks like a bug or a missing feature to me (they are incremented when checking nft list ruleset).
type nat hooks use a different registering function than the default used for other nftables hooks, so they can be handled differently:
.ops_register = nf_nat_ipv4_register_fn,
.ops_unregister = nf_nat_ipv4_unregister_fn,
It's to be handled by NAT (which is managed by Netfilter) and in hook NF_INET_PRE_ROUTING (still provided by Netfilter to nftables) this will be done at priority NF_IP_PRI_NAT_DST.
This is not done for type filter (nor route) which will then use a common nftables method rather than the specified one.
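If the installed nftables is recent enough (the list hooks command was added in nftables 1.0.1), the kernel-side registration can be observed directly; a type nat hook prerouting chain should show up at priority -100 regardless of the priority it was declared with (command shown as an illustration, output varies per system):

```
# dump the hooks actually registered in the kernel, with their effective priorities
nft list hooks
```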
| nftables: Are chains of multiple types all evaluated for a given hook? |
1,648,211,276,000 |
I'm working from the answer of this question and man nft in order to create some dnat rules in my nftables config.
The relevant config extract is:
define src_ip = 192.168.1.128/26
define dst_ip = 192.168.1.1
define docker_dns = 172.20.10.5
table inet nat {
map dns_nat {
type ipv4_addr . ipv4_addr . ip_proto . inet_service : ipv4_addr . inet_service
flags interval
elements = {
$src_ip . $dst_ip . udp . 53 : $docker_dns . 5353,
}
}
chain prerouting {
type nat hook prerouting priority -100; policy accept;
dnat to ip saddr . ip daddr . ip protocol . th dport map @dns_nat;
}
}
When I apply this rule with nft -f, I see no command output so I presume it's succeeded. However when I inspect the ruleset using nft list ruleset the rules aren't present. When the dnat to ... line is commented out the rules appear to be applied, however when the line is present the rules are not applied.
The collection of rules in the prerouting chain I'm attempting to replace is:
ip saddr $src_ip ip daddr $dst_ip udp dport 53 dnat to $docker_dns:5353;
...
Version information:
# nft -v
nftables v1.0.6 (Lester Gooch #5)
# uname -r
6.1.0-11-amd64
Why might this not be working? Thanks
|
There are 3 problems.
no error is displayed
This looks to be a bug in nftables 1.0.6, see following bullets.
Here with the same version and OP's ruleset in /tmp/ruleset.nft:
# nft -V
nftables v1.0.6 (Lester Gooch #5)
[...]
# nft -f /tmp/ruleset.nft
/tmp/ruleset.nft:7:38-45: Error: unknown datatype ip_proto
type ipv4_addr . ipv4_addr . ip_proto . inet_service : ipv4_addr . inet_service
^^^^^^^^
/tmp/ruleset.nft:6:9-15: Error: set definition does not specify key
map dns_nat {
^^^^^^^
Error: unknown datatype ip_proto
The original linked Q/A used the correct type inet_proto. This should not have been replaced with ip_proto which is an unknown type. So replace back:
type ipv4_addr . ipv4_addr . ip_proto . inet_service : ipv4_addr . inet_service
with the correct original spelling:
type ipv4_addr . ipv4_addr . inet_proto . inet_service : ipv4_addr . inet_service
A list of available types can be found in nft(8) at PAYLOAD EXPRESSION and more precisely for this case at IPV4 HEADER EXPRESSION:
Keyword
Description
Type
[...]
protocol
Upper layer protocol
inet_proto
[...]
typeof ip protocol <=> type inet_proto (not type ip_proto).
Normally typeof should be preferred to type to avoid having to guess the correct type, but as I wrote in the linked Q/A, some versions of nftables might not cope correctly with this precise case. The replacement would have been:
typeof ip saddr . ip daddr . ip protocol . th dport : ip daddr . th dport
which is almost a cut/paste from the rule using it, but its behavior should be thoroughly tested.
no error is displayed - take 2
Once this previous error is fixed (and the result put in /tmp/ruleset2.nft), then, as OP wrote, trying the ruleset again fails silently:
# nft -V
nftables v1.0.6 (Lester Gooch #5)
cli: editline
json: yes
minigmp: no
libxtables: yes
# nft -f /tmp/ruleset2.nft
# echo $?
1
#
The only clue that it failed is the non-zero return code.
While with a newer nftables version:
# nft -V
nftables v1.0.8 (Old Doc Yak #2)
cli: editline
json: yes
minigmp: no
libxtables: yes
# nft -f /tmp/ruleset2.nft
/tmp/ruleset2.nft:16:9-12: Error: specify `dnat ip' or 'dnat ip6' in inet table to disambiguate
dnat to ip saddr . ip daddr . ip protocol . th dport map @dns_nat;
^^^^
#
Now the error is displayed. Whatever the issue was in 1.0.6, it has been fixed at least as of version 1.0.8.
Error: specify `dnat ip' or 'dnat ip6' in inet table to disambiguate
Because NAT is done in the inet family (combined IPv4+IPv6) rather than either the ip (IPv4) or ip6 (IPv6) family, one parameter which is usually optional becomes mandatory: the IP version NAT should be applied to (even though one could infer it here from the map's layout, which is IPv4-only). The documentation says:
NAT STATEMENTS
snat [[ip | ip6] to] ADDR_SPEC [:PORT_SPEC] [FLAGS]
dnat [[ip | ip6] to] ADDR_SPEC [:PORT_SPEC] [FLAGS]
masquerade [to :PORT_SPEC] [FLAGS]
redirect [to :PORT_SPEC] [FLAGS]
[...]
When used in the inet family (available with kernel 5.2), the dnat and
snat statements require the use of the ip and ip6 keyword in case an
address is provided, see the examples below.
So:
dnat to ip saddr . ip daddr . ip protocol . th dport map @dns_nat;
should be replaced with:
dnat ip to ip saddr . ip daddr . ip protocol . th dport map @dns_nat
The original Q/A didn't state the family, so it would be assumed it was the default ip family which wouldn't require this.
Of course, this will work with nftables 1.0.6, only the error reporting had a problem. The return code will now be 0.
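With both fixes applied, the fragment from the question would read (a sketch, tested only to the extent described above):

```
define src_ip = 192.168.1.128/26
define dst_ip = 192.168.1.1
define docker_dns = 172.20.10.5

table inet nat {
	map dns_nat {
		# inet_proto, not ip_proto
		type ipv4_addr . ipv4_addr . inet_proto . inet_service : ipv4_addr . inet_service
		flags interval
		elements = {
			$src_ip . $dst_ip . udp . 53 : $docker_dns . 5353,
		}
	}
	chain prerouting {
		type nat hook prerouting priority -100; policy accept;
		# dnat ip, not plain dnat, because the table is family inet
		dnat ip to ip saddr . ip daddr . ip protocol . th dport map @dns_nat
	}
}
```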
| nftables dnat map rule failing silently |
1,648,211,276,000 |
I would like to change source address of every packet generated by a process in given cgroup (version 2). Is that even possible?
I have:
nftables 1.0.2,
linux 5.15 (Ubuntu variant)
/system.slice/system-my-service.slice/[email protected] cgroup
I have tried to:
create a table nft add table ip myservice
create a postrouting nat chain nft add chain ip myservice postrouting { type nat hook postrouting priority 100 \; }
try to create postrouting rule nft add rule ip myservice postrouting socket cgroupv2 level 1 'system.slice' snat 10.0.0.1 (during experiments, I have used only 'system.slice', because nft has issues with @ in the cgroup name, which would be level 2 issue :-).
I have also found cgroup matching, which requires an int32 argument, from which I guess it's for cgroup version 1 (thus not applicable for me), because I have found no hint for converting the path-style cgroup to an int.
I suspect the socket expression is not applicable to the postrouting nat chain, as nft suggests with Error: Could not process rule: Operation not supported.
Do all the fails mean, that this is completely wrong approach?
Or have I just missed something obvious?
|
Here's an answer to your two problems:
syntax
cgroupv2 expects a path, which is a string. A string is always displayed with double-quotes, and requires double-quotes if it includes special characters. These double-quotes are for the nft command's consumption, not for the shell. With direct commands (ie: not in a file read using nft -f), these double-quotes themselves should be escaped or enclosed with single-quote, else the shell would consume them.
In addition this path is documented as relative and doesn't need a leading / (it's accepted anyway and removed when displayed back), giving, when typed directly from the shell:
socket cgroupv2 level 3 '"system.slice/system-my-service.slice/[email protected]"'
Finally, nft doesn't care if it's given a single parameter with multiple tokens or multiple parameters one token at a time: the line is assembled and parsed the same. So whenever there's a special character for the shell (here "), just single-quote the whole line rather than trying to figure out which characters to escape (eg \; in base chains).
work around the limitation
You can mark the packet in output hook and check the mark in postrouting hook to do NAT.
nft 'add chain ip myservice { type filter hook output priority 0; policy accept; }'
nft 'add rule ip myservice output socket cgroupv2 level 3 "system.slice/system-my-service.slice/[email protected]" meta mark set 0xcafe'
nft add rule ip myservice postrouting meta mark 0xcafe snat to 10.0.0.1
The final result can be put in its own file, with the two glue commands at the start for idempotence. Having its own file prevents any interaction with shell parsing, and gives atomic updates. It still requires "..." around the path parameter to keep nftables from interpreting the @ character.
myservice.nft:
table ip myservice
delete table ip myservice
table ip myservice {
chain postrouting {
type nat hook postrouting priority 100; policy accept;
meta mark 0xcafe snat to 10.0.0.1
}
chain output {
type filter hook output priority 0; policy accept;
socket cgroupv2 level 3 "system.slice/system-my-service.slice/[email protected]" meta mark set 0xcafe
}
}
which can then be loaded with nft -f myservice.nft, as long as the cgroup already exists (probably meaning that the service has to be started at boot before nftables loads this rule).
In the end:
every outgoing packet will traverse filter/output
if it's from a process in the adequate cgroup the packet will receive a mark 0xcafe
every first packet of a new flow will traverse the nat/postrouting rule.
if it matches the mark 0xcafe (meaning it was in the adequate cgroup) this will trigger the SNAT rule for the flow (then the mark doesn't matter anymore)
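To verify the result (commands shown as a hint; conntrack is from the conntrack-tools package, and both need root): generate traffic from a process inside the cgroup, then check that its flows carry the rewritten source address:

```
# list only the connections whose source address was NATed
conntrack -L --src-nat
# review the loaded table (add a "counter" statement to the rules if per-rule hit counts are wanted)
nft list table ip myservice
```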
| Can nftables perform postrouting matching on crgroupv2? |
1,648,211,276,000 |
I'm using nftables on a router running NixOS 22.11 (with the latest XanMod kernel patches and acpid as well as irqbalance enabled). The machine has 3 interfaces: enp4s0 which is connected to the internet and two local WiFi access points serving distinct IP LANs, wlp1s0 and wlp5s0.
My nftables configuration is the following: I just allow inbound DNS, DHCP and SSH traffic on the local networks, and allow outbound and forwarded traffic to the internet along with SNAT.
table ip filter {
chain conntrack {
ct state vmap { invalid : drop, established : accept, related : accept }
}
chain dhcp {
udp sport 68 udp dport 67 accept comment "dhcp"
}
chain dns {
ip protocol { tcp, udp } th sport 53 th sport 53 accept comment "dns"
}
chain ssh {
ip protocol tcp tcp dport 22 accept comment "ssh"
}
chain in_wan {
jump dns
jump dhcp
jump ssh
}
chain in_iot {
jump dns
jump dhcp
}
chain inbound {
type filter hook input priority filter; policy drop;
icmp type echo-request limit rate 5/second accept
jump conntrack
iifname vmap { "lo" : accept, "wlp1s0" : goto in_wan, "enp4s0" : drop, "wlp5s0" : goto in_iot }
}
chain forward {
type filter hook forward priority filter; policy drop;
jump conntrack
oifname "enp4s0" accept
}
}
table ip nat {
chain postrouting {
type nat hook postrouting priority srcnat; policy accept;
oifname "enp4s0" snat to 192.168.1.2
}
}
table ip6 global6 {
chain input {
type filter hook input priority filter; policy drop;
}
chain forward {
type filter hook forward priority filter; policy drop;
}
}
With this simple configuration, I expected KDE Connect to not work as it requires ports 1714-1764 to be open. And indeed, if I connect my computer to wlp1s0 and my phone to wlp5s0 (so different interfaces), the devices cannot see each other, and I can see the packets through tcpdump as well as through nftables, either using logging rules or nftrace.
But somehow if I now put both machines on the same interface, e.g. wlp1s0, KDE Connect works perfectly and the devices see each other. My best guess was that this happens because of connection tracking, but even if I add
chain trace_wan {
type filter hook prerouting priority filter - 1; policy accept;
iifname "wlp1s0" oifname "wlp1s0" meta nftrace set 1
}
to the filter table, I can't see any packets when running nft monitor trace. Similarly I can't see any packets in the system journal when inserting a logging rule at index 0 in the forward chain. And yet when running tcpdump -i wlp1s0 port 1716 I can see packets I expected nftables to see as well:
14:33:59.943462 IP 192.168.2.11.55670 > 192.168.2.42.xmsg: Flags [.], ack 20422, win 501, options [nop,nop,TS val 3319725685 ecr 2864656484], length 0
14:33:59.957101 IP 192.168.2.42.xmsg > 192.168.2.11.55670: Flags [P.], seq 20422:20533, ack 1, win 285, options [nop,nop,TS val 2864656500 ecr 3319725685], length 111
Why can nftables not see those packets when the two devices are connected on the same interface? How can I make nftables actually drop all these forwarded packets by default?
Additional information requested in the comments:
❯ ip -br link
lo UNKNOWN <LOOPBACK,UP,LOWER_UP>
enp2s0 DOWN <BROADCAST,MULTICAST>
enp3s0 DOWN <BROADCAST,MULTICAST>
enp4s0 UP <BROADCAST,MULTICAST,UP,LOWER_UP>
wlp5s0 UP <BROADCAST,MULTICAST,UP,LOWER_UP>
wlp1s0 UP <BROADCAST,MULTICAST,UP,LOWER_UP>
❯ ip -4 -br address
lo UNKNOWN 127.0.0.1/8
enp4s0 UP 192.168.1.2/24
wlp5s0 UP 192.168.3.1/24
wlp1s0 UP 192.168.2.1/24
❯ bridge link
❯ ip route
default via 192.168.1.1 dev enp4s0 proto static
192.168.1.0/24 dev enp4s0 proto kernel scope link src 192.168.1.2
192.168.1.1 dev enp4s0 proto static scope link
192.168.2.0/24 dev wlp1s0 proto kernel scope link src 192.168.2.1
192.168.3.0/24 dev wlp5s0 proto kernel scope link src 192.168.3.1
❯ sysctl net.bridge.bridge-nf-call-iptables
sysctl: error: 'net.bridge/bridge-nf-call-iptables' is an unknown key
|
Warning: this is a generic Linux answer. It won't cover specific integration with NixOS, its own method for configuring the network, or how to call arbitrary commands from its configuration.
Presentation
In OP's first case (two different interfaces), the router is actually routing between the two interfaces wlp1s0 and wlp5s0: forwarded IPv4 traffic is seen in nftables' family ip, filter forward hook.
In the second case, the traffic is bridged by the router's Access Point interface wlp1s0: nftables' family ip table doesn't see bridged traffic, only routed IPv4 traffic.
In addition this bridging doesn't even happen at the standard Linux bridge level, but is done directly by the Access Point (AP)'s driver (and/or hardware accelerated): two Wifi devices will communicate between themselves (still through the AP) without their frames reaching the actual network stack.
In order for the system to actually filter this traffic, three things must be done:
change the AP's settings so that traffic goes through the network stack
have a Linux bridge associated to the AP so that frames aren't dropped by the network stack and so that nftables can see them at the bridge level
have adequate nftables rules in the bridge family. For IP stateful firewalling in the bridge family, this also requires Linux kernel >= 5.3 (NixOS 22.11 is good enough).
Other options not pursued:
alternatively to 2+3, and without possible stateful firewalling, one might imagine using nftables' netdev family with ingress and possibly egress (requires Linux kernel >= 5.17 for egress forwarding), but there would be many corner cases to handle: better not
instead of 3, use old bridge netfilter code intended for stateful firewalling in bridge path by iptables (and used by Docker) to have all rules in the same table
nftables, which is also affected by it, aims not to depend on this code and thus lacks features for its proper use (mostly it lacks the equivalent of iptables' physdev module, one way to distinguish bridged traffic from routed traffic in the same ruleset). This would make things still rely on iptables and would thus still need multiple tables. (Example of such complex use along Docker: nftables whitelisting docker).
As a warning, should Docker be added on the router, expect disruption of the setup presented below.
Setup
change hostapd settings
Two related settings must be changed on the hostapd setup:
tell that frames must be handled by the network stack rather than being short-circuited by the driver
The configuration of hostapd for wlp1s0 must be changed. If somehow a single configuration existed for the two Wifi interfaces, chances are there should now be two separate configurations. I won't address such integration in this answer and will concentrate on the single interface wlp1s0.
AP isolation must be enabled in hostapd.conf:
ap_isolate=1
Now frames between two station clients (STA) will reach the network stack rather than being handled directly by the AP driver.
Configure hostapd to use a bridge and set the wireless interface as bridge port
With only the previous setting, only the frames to or from the router would be handled by the routing part of the network stack. Frames not intended for or coming from the router would simply be dropped, as would happen if an Ethernet interface received unicast frames not intended for its MAC address. That's also why the setting is named ap_isolate: with it enabled, STAs become isolated from each other.
A bridge is required to handle this. Tell hostapd to set wlp1s0 as bridge port as soon as it has configured it in AP mode. It will either create a bridge or (to be preferred) reuse an existing bridge with the provided name, and set the interface as bridge port when running. I chose the arbitrary name brwlan1.
Configure the bridge in hostapd.conf:
bridge=brwlan1
change network settings related to using a bridge
configure the bridge with no attached port and no delay
Manually that would just be:
ip link add name brwlan1 up type bridge forward_delay 0
Note: hostapd is the tool that will attach the Wifi interface to the bridge, because it must be set as AP first before being allowed to be set as bridge port.
move any routing (layer 3) setup about wlp1s0 to brwlan1:
ip addr delete 192.168.2.1/24 dev wlp1s0
ip addr add 192.168.2.1/24 dev brwlan1
This also includes changing any interface reference in various applications, for example in the DHCP settings.
...and also nftables but this will be dealt in the next part.
have hostapd running
Verify that once its wlp1s0 instance is running, wlp1s0 is set as brwlan1 bridge port:
One should see something similar to:
# bridge link show dev wlp1s0
6: wlp1s0 <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master brwlan1 state forwarding priority 32 cost 100
then enable hairpin mode on the single bridge port
For now, there still can't be STA-to-STA communication, but only STA-to-AP or AP-to-STA: STA-to-STA requires a frame arriving on the single bridge port wlp1s0 to be re-emitted on this same bridge port. Even if there's now a bridge to forward such frames, they won't be yet: by default an Ethernet bridge (or switch) disables forwarding back to the originating port because it doesn't make much sense in normal wired setup.
So hairpin must be enabled on wlp1s0 so a frame received on this port can be re-emitted on this same port. Currently only the development version (branch main) of hostapd accepts the new configuration parameter bridge_hairpin=1 to do this automatically (version 2.10 is not recent enough). This can be done manually using the bridge command (the obsolete brctl command doesn't support this feature):
bridge link set dev wlp1s0 hairpin on
This part requires proper OS integration: it must be done only after hostapd has attached wlp1s0 as bridge port, because hairpin is usable only on a bridge port. I would expect hostapd to set wlp1s0 as bridge port before daemonizing, letting the network configuration tool then run the command. Should that not be the case and a race condition happen, one can consider simply inserting a delay (eg: sleep 3; ) before this command to be sure the interface is a bridge port when the command runs. Should wlp1s0 be detached/reattached with the bridge (eg: restart of hostapd), this command must be run again: it should be called from the network configuration.
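As a sketch of such integration, instead of a fixed delay one can poll until the interface is actually a bridge port (the loop bound of ~10 seconds and the interface/bridge names follow the setup above; this is an assumption, not part of hostapd):

```shell
# Poll (up to ~10 s) until hostapd has attached wlp1s0 to brwlan1,
# then enable hairpin mode on the bridge port.
for _ in $(seq 1 20); do
    bridge link show dev wlp1s0 2>/dev/null | grep -q 'master brwlan1' && break
    sleep 0.5
done
bridge link set dev wlp1s0 hairpin on
```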
adapt nftables ruleset
... using the bridge family instead of the ip family. It's quite similar to routing. Frames intended for the router are seen in input hook, frames from the router are seen in output hook, STA-to-STA frames are seen in the forward hook.
As objects namespace is per table, there can't be any rule reused between both, so some duplication will be needed. I just copied and adapted the relevant parts of the routing rules related to forwarding. As an example I enabled ping and the ports for KDE connect, with a few counters. Some of the boiler-plate is not really needed (eg: ether type ip ip protocol tcp tcp dport 1714 can be replaced with just tcp dport 1714 if there's a generic drop rule for IPv6 first. Internally the nft command inserts any needed boiler-plate when presenting the rules to the kernel).
table bridge filter # for idempotence
delete table bridge filter # for idempotence
table bridge filter {
chain conntrack {
ct state vmap { invalid : drop, established : accept, related : accept }
}
chain kdeconnect {
udp dport 1714-1764 counter accept
tcp dport 1714-1764 counter accept
}
chain forward {
type filter hook forward priority filter; policy drop;
jump conntrack
ether type ip6 drop # just like OP did: drop any IPv6
icmp type echo-request counter accept
jump kdeconnect
ether type arp accept # mandatory for IPv4 connectivity
counter
}
}
Should wlp5s0 be later configured likewise with its own separate bridge, then filtering per bridge port or per bridge will become needed (eg: iifname wlp1s0 or ibrname brwlan1 etc. where needed).
Other cases are still handled by standard routing in OP's initial ruleset: the input and output filter hooks are not configured here, so they will accept traffic, either to/from the router, or to be routed to/from other interfaces.
OP's nftables for routing must be adapted too. Wherever in table ip filter the word wlp1s0 appears, it must be replaced by brwlan1 which is now the interface participating in routing.
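For instance, the inbound dispatch rule from the question would become the following (only the interface name changes; the rest of the table is assumed unchanged):

```
iifname vmap { "lo" : accept, "brwlan1" : goto in_wan, "enp4s0" : drop, "wlp5s0" : goto in_iot }
```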
| nftables doesn't see KDE Connect packets between two machines on the same interface |
1,648,211,276,000 |
When creating a dnat rule, you can specify the following command:
nft 'add rule ip twilight prerouting ip daddr 1.2.3.0/24 dnat ip prefix to ip daddr map { 1.2.3.0/24 : 2.3.4.0/24 }'
And then get dnat that maps addresses like 1.2.3.4 -> 2.3.4.4. This command runs as expected with nftables v1.0.4 (Lester Gooch #3) and matches the answer here.
If I try to do the same with ipv6, using the following commands:
nft 'add rule ip6 twilight prerouting ip6 daddr aa:bb:cc:dd::/64 dnat ip6 prefix to ip6 daddr map { [aa:bb:cc:dd::]/64 : [bb:cc:dd:ee::]/64 }'
nft 'add rule ip6 twilight prerouting ip6 daddr aa:bb:cc:dd::/64 dnat ip6 prefix to ip6 daddr map { aa:bb:cc:dd::/64 : bb:cc:dd:ee::/64 }'
nft 'add rule ip6 twilight prerouting ip6 daddr aa:bb:cc:dd::/64 dnat ip6 prefix to ip6 daddr map { "aa:bb:cc:dd::/64" : "bb:cc:dd:ee::/64" }'
Then, I get the following error messages:
Error: syntax error, unexpected newline
add rule ip6 twilight prerouting ip6 daddr aa:bb:cc:dd::/64 dnat ip6 prefix to ip6 daddr map { [aa:bb:cc:dd::]/64 : [bb:cc:dd:ee::]/64 }
^
Error: syntax error, unexpected newline
add rule ip6 twilight prerouting ip6 daddr aa:bb:cc:dd::/64 dnat ip6 prefix to ip6 daddr map { aa:bb:cc:dd::/64 : bb:cc:dd:ee::/64 }
^
Error: syntax error, unexpected newline
add rule ip6 twilight prerouting ip6 daddr aa:bb:cc:dd::/64 dnat ip6 prefix to ip6 daddr map { "aa:bb:cc:dd::/64" : "bb:cc:dd:ee::/64" }
^
Is there a way that I can make anonymous ipv6 maps in nftables?
|
TL;DR: You need nftables >= 1.0.5.
In version 1.0.5:
scanner: allow prefix in ip6 scope
Which matches this commit:
scanner: allow prefix in ip6 scope
'ip6 prefix' is valid syntax, so make sure scanner recognizes it also
in ip6 context.
Also add test case.
[...]
diff --git a/tests/shell/testcases/sets/0046netmap_0 b/tests/shell/testcases/sets/0046netmap_0
index 2804a4a2..60bda401 100755
--- a/tests/shell/testcases/sets/0046netmap_0
+++ b/tests/shell/testcases/sets/0046netmap_0
@@ -8,6 +8,12 @@ EXPECTED="table ip x {
10.141.13.0/24 : 192.168.4.0/24 > }
}
}
+ table ip6 x {
+ chain y {
+ type nat hook postrouting priority srcnat; policy accept;
+ snat ip6 prefix to ip6 saddr map { 2001:db8:1111::/64 : 2001:db8:2222::/64 }
+ }
+ }
"
set -e
The corresponding regression test is similar to OP's attempt. OP's syntax tested ok here with nftables 1.0.7.
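For reference, here is a minimal file-form ruleset using OP's map, which loads cleanly with nftables >= 1.0.5 (the table name and addresses come from OP's commands; the hook priority is an assumption):

```
table ip6 twilight {
    chain prerouting {
        type nat hook prerouting priority dstnat; policy accept;
        ip6 daddr aa:bb:cc:dd::/64 dnat ip6 prefix to ip6 daddr map { aa:bb:cc:dd::/64 : bb:cc:dd:ee::/64 }
    }
}
```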
| nftables anonymous map for ipv6 dnat |
1,648,211,276,000 |
Given the following:
One host called middleman has the interfaces:
Interface
Address
Location
Master VRF
enp1s0
192.168.2.99
Outer network
vrf-outer
enp2s0
192.168.2.1
Inner network
vrf-inner
Presume that the outer network has default gateway 192.168.2.1, and that we have another network - 192.168.3.0/24.
If we have machine 192.168.2.22 on the outer network, and a machine with the same IP on the inner network, we use the 192.168.3.0/24 network to enable them to talk to each other. If host 192.168.2.22 on the outer network wants to contact 192.168.2.22 on the inner network, we use the IP 192.168.3.22. This gets transported to enp1s0, DNAT-ed, routed via vrf-outer, SNAT-ed and routed to the correct machine through enp2s0. The response then takes the same path back, using conntrack NAT.
I currently have this working somewhat using a modified config from the question here, but cannot SSH to 192.168.2.99 from a machine on the outer network. Instead, I get a "connection refused" from vrf-outer (found through Wireshark). Pinging 192.168.2.99 from the same machine works, and all communication between machines on the inner network and the outer network (as well as the internet) works.
The packet flow is as follows when the command ssh 192.168.2.99 is executed on host 192.168.2.234 on the outer network:
outer host -> enp1s0 (192.168.2.234 -> 192.168.2.99) [SYN]
enp1s0 -> vrf-outer (192.168.2.234 -> 192.168.2.99) [SYN]
vrf-outer -> enp1s0 (192.168.2.99 -> 192.168.2.234) [RST, ACK]
enp1s0 -> outer host (192.168.2.99 -> 192.168.2.234) [RST, ACK]
conntrack -L shows no trace of this connection, and nft monitor trace shows nothing but verdict accept for all rules. Firewalld is configured to allow ssh on all interfaces and zones.
The configuration I am using is included below. Thank you for your time!
#!/bin/bash
TW_INT="vrf-tw-int"
TW_EXT="vrf-tw-ext"
EXT="enp1s0"
INT="enp2s0"
DESIRED_ZONE="FedoraServer"
############################### SET ECHO COMMAND WHEN EXECUTING ###################################
set -x
############################### ENABLE IP-FORWARDING ##############################################
sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv6.conf.all.forwarding=1
############################### REMOVE OLD VRF INTERFACES #########################################
nmcli con del ${TW_INT} || true
nmcli con del ${TW_EXT} || true
############################### ADD VRF INTERFACES ################################################
nmcli con add type vrf con-name ${TW_INT} ifname ${TW_INT} table 100 ipv4.method disabled ipv6.method disabled
nmcli con add type vrf con-name ${TW_EXT} ifname ${TW_EXT} table 200 ipv4.method disabled ipv6.method disabled
############################### SET VRF INTERFACES UP #############################################
nmcli con up ${TW_INT}
nmcli con up ${TW_EXT}
############################### ADD VRF INTERFACES TO ACTUAL INTERFACES ###########################
nmcli con mod ${INT} master ${TW_INT}
nmcli con mod ${EXT} master ${TW_EXT}
nmcli con up ${INT}
nmcli con up ${EXT}
############################### ADD IP-ADDRESSES ##################################################
nmcli con mod ${INT} ipv4.addresses 192.168.2.1/24
nmcli con mod ${INT} ipv4.method manual
nmcli con up ${INT}
ip route show table 100
ip route show table 200
ip route
############################### MOVE INTERFACES IN FIREWALLD AND SET FORWARD ######################
ZONE_INT=$(firewall-cmd --get-zone-of-interface=${INT})
ZONE_EXT=$(firewall-cmd --get-zone-of-interface=${EXT})
ZONE_TW_INT=$(firewall-cmd --get-zone-of-interface=${TW_INT})
ZONE_TW_EXT=$(firewall-cmd --get-zone-of-interface=${TW_EXT})
firewall-cmd --zone=${DESIRED_ZONE} --add-forward --permanent
firewall-cmd --zone=${ZONE_INT} --remove-interface ${INT} --permanent
firewall-cmd --zone=${ZONE_EXT} --remove-interface ${EXT} --permanent
firewall-cmd --zone=${ZONE_TW_INT} --remove-interface ${TW_INT} --permanent
firewall-cmd --zone=${ZONE_TW_EXT} --remove-interface ${TW_EXT} --permanent
firewall-cmd --zone=${DESIRED_ZONE} --add-interface ${INT} --permanent
firewall-cmd --zone=${DESIRED_ZONE} --add-interface ${EXT} --permanent
firewall-cmd --zone=${DESIRED_ZONE} --add-interface ${TW_INT} --permanent
firewall-cmd --zone=${DESIRED_ZONE} --add-interface ${TW_EXT} --permanent
firewall-cmd --reload
ip addr
############################### ADD CONNTRACK LABELS ##############################################
mkdir -p /etc/xtables
cat << EOF > /etc/xtables/connlabel.conf
1 INSIDE
2 OUTSIDE
EOF
ln -s /etc/xtables/connlabel.conf /etc/connlabel.conf
ln -s /etc/xtables/connlabel.conf /etc/nftables/connlabel.conf
############################### ADD ROUTING RULES FOR MARKED PACKETS ##############################
ip rule add prio 100 fwmark 100 lookup 200
ip rule add prio 200 fwmark 200 lookup 100
############################### ADD DEFAULT ROUTE FOR MACHINE #####################################
ip route add default via 192.168.2.1 dev ${EXT}
############################### FIX VRF ROUTE SOURCES #############################################
ip route del 192.168.2.0/24 dev ${EXT} proto kernel scope link src 192.168.2.99 metric 105 table 200 || true
ip route add 192.168.2.0/24 dev ${EXT} proto kernel scope link metric 105 table 200 || true
ip route del default via 192.168.2.1 dev ${EXT} proto dhcp src 192.168.2.99 metric 105 table 200 || true
ip route add default via 192.168.2.1 dev ${EXT} proto dhcp metric 105 table 200 || true
ip route del 192.168.2.0/24 dev ${INT} proto kernel scope link src 192.168.2.1 metric 106 table 100 || true
ip route add 192.168.2.0/24 dev ${INT} proto kernel scope link metric 106 table 100 || true
############################### ADD TABLES AND FLUSH OLD TABLES ###################################
nft 'add table ip twilight'
nft 'delete table ip twilight'
nft 'add table ip twilight'
############################### ADD PREROUTING CHAINS #############################################
nft 'add chain ip twilight prerouting { type nat hook prerouting priority -100; policy accept; }'
############################### ADD PREROUTING MANGLE CHAINS ######################################
nft 'add chain ip twilight premangle { type filter hook prerouting priority -180; policy accept; }'
############################### ADD RAW (PRE-CONNTRACK) CHAINS ####################################
nft 'add chain ip twilight raw { type filter hook prerouting priority raw; policy accept; }'
############################### ADD POSTROUTING CHAINS ############################################
nft 'add chain ip twilight postrouting { type nat hook postrouting priority 100; policy accept; }'
############################### ADD FORWARDING CHAINS #############################################
nft 'add chain ip twilight forward { type filter hook forward priority filter; policy accept; }'
############################### ADD EXTERNAL TRANSLATION MAPS #####################################
nft 'add map ip twilight from_192_168_3_0_to_192_168_2_0 { type ipv4_addr: ipv4_addr; }'
############################### ADD INTERNAL TRANSLATION MAPS #####################################
nft 'add map ip twilight from_192_168_2_0_to_192_168_3_0 { type ipv4_addr: ipv4_addr; }'
############################### ADD DEBUG RULES FOR ALL PACKETS ###################################
nft 'add rule ip twilight prerouting meta nftrace set 1'
nft 'add rule ip twilight postrouting meta nftrace set 1'
nft 'add rule ip twilight forward meta nftrace set 1'
nft 'add rule ip twilight raw meta nftrace set 1'
nft 'add rule ip twilight premangle meta nftrace set 1'
############################### ADD DNAT RULES ####################################################
nft 'add rule ip twilight prerouting ip daddr 192.168.3.0/24 meta nftrace set 1 dnat to ip daddr map @from_192_168_3_0_to_192_168_2_0'
############################### ADD ROUTING RULES - MARK PACKETS ##################################
nft "add rule ip twilight raw iif "${INT}" ip daddr 192.168.3.0/24 meta mark set 100"
nft "add rule ip twilight raw iif "${EXT}" ip daddr 192.168.3.0/24 meta mark set 200"
nft "add rule ip twilight prerouting iif "${INT}" ip daddr != 192.168.3.0/24 ct label set INSIDE"
nft "insert rule ip twilight premangle iif "${EXT}" ct label INSIDE meta mark set 200"
############################### TELL CONNTRACK ORIGINAL ZONES FOR MARKED PACKETS ##################
nft "add rule ip twilight raw iif "${INT}" ip saddr 192.168.2.0/24 ip daddr != 192.168.2.0/24 ct original zone set 100"
nft "add rule ip twilight raw iif "${EXT}" ip saddr 192.168.2.0/24 ip daddr != 192.168.2.0/24 ct original zone set 200"
############################### ADD SNAT RULES ####################################################
nft 'add rule ip twilight postrouting ip saddr 192.168.2.0/24 ip daddr 192.168.2.0/24 meta nftrace set 1 snat to ip saddr map @from_192_168_2_0_to_192_168_3_0'
############################### ADD TWILIGHT -> WORLD MASQUERADE ##################################
nft "add rule ip twilight postrouting iif "${TW_INT}" oif "${EXT}" ip daddr != 192.168.2.0/24 meta nftrace set 1 masquerade"
############################### ADD TWILIGHT -> REGLER FORWARDING #################################
nft "add rule ip twilight forward iif "${INT}" meta nftrace set 1 accept"
nft "add rule ip twilight forward iif "${TW_EXT}" meta nftrace set 1 accept"
nft "add rule ip twilight forward oif "${INT}" meta nftrace set 1 accept"
############################### ADD REGLER -> TWILIGHT FORWARDING #################################
nft "add rule ip twilight forward iif "${EXT}" ip daddr 192.168.3.0/24 meta nftrace set 1 accept"
############################### ADD ELEMENTS TO INTERNAL TRANSLATION MAP ##########################
nft 'include "/etc/nftables/nft-host-alias-twilight-192.168.2.0-192.168.3.0"'
############################### ADD ELEMENTS TO EXTERNAL TRANSLATION MAP ##########################
nft 'include "/etc/nftables/nft-host-alias-twilight-192.168.3.0-192.168.2.0"'
############################### MAKE FIREWALLD IGNORE TWILIGHT TRAFFIC ############################
FIREWALLD_CHAINS="mangle_PREROUTING nat_PREROUTING nat_POSTROUTING nat_OUTPUT filter_PREROUTING filter_INPUT filter_FORWARD filter_OUTPUT"
for CHAIN in ${FIREWALLD_CHAINS}; do
nft "insert rule inet firewalld "${CHAIN}" iif "${TW_INT}" accept"
nft "insert rule inet firewalld "${CHAIN}" iif "${TW_EXT}" accept"
nft "insert rule inet firewalld "${CHAIN}" iif "${EXT}" oif "${INT}" accept"
nft "insert rule inet firewalld "${CHAIN}" iif "${EXT}" ip daddr 192.168.3.0/24 accept"
nft "insert rule inet firewalld "${CHAIN}" iif "${EXT}" oif "${TW_INT}" accept"
nft "insert rule inet firewalld "${CHAIN}" iif "${EXT}" oif "${TW_EXT}" accept"
nft "insert rule inet firewalld "${CHAIN}" iif "${INT}" oif "${EXT}" accept"
nft "insert rule inet firewalld "${CHAIN}" iif "${INT}" ip daddr 192.168.221.0/24 accept"
nft "insert rule inet firewalld "${CHAIN}" iif "${INT}" oif "${TW_INT}" accept"
nft "insert rule inet firewalld "${CHAIN}" iif "${INT}" oif "${TW_EXT}" accept"
done
|
The config shown above works as intended, with one small caveat. VRF interfaces include VRF contexts, to handle multi-tenant applications. This means that applications can be VRF-aware and listen only in a specific VRF context, while all programs run in the default VRF context unless told otherwise. My traffic was being routed to the vrf-outer context, and was thus dropped because no SSH server was listening in that VRF context.
You can make programs that listen to the default vrf context work across all contexts by running:
sysctl -w net.ipv4.tcp_l3mdev_accept=1
sysctl -w net.ipv4.udp_l3mdev_accept=1
as documented in the kernel documentation for vrf interfaces.
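To make these settings persistent across reboots, they can go in a sysctl.d drop-in (the file name below is arbitrary):

```
# /etc/sysctl.d/90-vrf.conf
net.ipv4.tcp_l3mdev_accept = 1
net.ipv4.udp_l3mdev_accept = 1
```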
The same does not seem to apply to IPv6, as it has no l3mdev_accept options. It is not yet clear to me exactly why IPv6 does not need this.
| Routing between identical copies of network with asymmetric IPs on bridge machine
1,648,211,276,000 |
I have a weird scenario in which I need to create some firewall rules to flash a LED. So far, I've always been able to do that using iptables:
iptables -A INPUT -p tcp --dport 443 -j LED --led-trigger-id mytrigger
So far, so good. There is a new issue, however: For various reasons, I now need to create such a rule not in the input chain, but rather in an ingress chain, which I can (AFAIK) only create and manage using nftables. However, I cannot for the life of me figure out how to create LED rules using nft.
I have taken a look at the output of the rule created by iptables using nft list chain filter INPUT, which yields:
table ip filter {
chain INPUT {
type filter hook input priority filter; policy accept;
tcp dport 443 counter packets 0 bytes 0 # led-trigger-id:"myfirewalltrigger"
}
}
This is not helpful. Let's try iptables-translate -A INPUT -p tcp --dport 443 -j LED --led-trigger-id myfirewalltrigger:
nft # -A INPUT -p tcp --dport 443 -j LED --led-trigger-id myfirewalltrigger
This is not helpful either.
How come that nftables seemingly just cannot deal with LED rules?
|
I think you are having this problem because LED is an extension of iptables that is not supported in nftables.
https://wiki.nftables.org/wiki-nftables/index.php/Supported_features_compared_to_xtables#LED
That is too bad because it sounds like you are doing something cool. Please revisit this question if you find a workaround (like parsing live logs and triggering on that?)
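As an untested sketch of such a workaround: log matching packets from the ingress chain with a distinctive prefix, then follow the kernel log from userspace and pulse a LED. The device name eth0, the table name ledfw, the LED name myled and the log prefix are all assumptions to adapt to your setup:

```shell
# Create a netdev-family ingress chain and log HTTPS packets with a marker.
nft add table netdev ledfw
nft 'add chain netdev ledfw ingress { type filter hook ingress device eth0 priority 0; }'
nft 'add rule netdev ledfw ingress tcp dport 443 log prefix "LEDFW: "'

# Follow kernel messages and blink the LED on every match.
journalctl -kf | while read -r line; do
    case "$line" in
        *"LEDFW: "*)
            echo 1 > /sys/class/leds/myled/brightness
            sleep 0.1
            echo 0 > /sys/class/leds/myled/brightness
            ;;
    esac
done
```

This loses the kernel-side debouncing the LED target provided, but keeps the rule in an ingress chain as required.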
| nftables: Create LED rules |
1,648,211,276,000 |
The nftables status on my OS:
sudo systemctl status nftables
● nftables.service - nftables
Loaded: loaded (/lib/systemd/system/nftables.service; disabled; vendor preset: enabled)
Active: active (exited) since Fri 2022-11-04 11:01:47 HKT; 1s ago
Docs: man:nft(8)
http://wiki.nftables.org
Process: 3780 ExecStart=/usr/sbin/nft -f /etc/nftables.conf (code=exited, status=0/SUCCESS)
Main PID: 3780 (code=exited, status=0/SUCCESS)
CPU: 7ms
Nov 04 11:01:47 debian systemd[1]: Starting nftables...
Nov 04 11:01:47 debian systemd[1]: Finished nftables.
Now I want to log all incoming traffic:
sudo nft add rule filter input log
Error: Could not process rule: No such file or directory
add rule filter input log
^^^^^^
List the configuration file nftables.conf:
cat /etc/nftables.conf
#!/usr/sbin/nft -f
flush ruleset
table inet filter {
chain input {
type filter hook input priority 0;
}
chain forward {
type filter hook forward priority 0;
}
chain output {
type filter hook output priority 0;
}
}
How to fix it?
|
The table is in the inet family (representing the combination of IPv4+IPv6 together) so the family parameter inet is needed, else it defaults to ip:
If an identifier is specified without an address family, the ip family
is used by default.
As there's no ip filter table nor ip filter input chain, this command:
nft add rule filter input log
fails.
The proper command would be (all commands below are to be run as root or with sudo)...
nft add rule inet filter input log
... but the command above is usually dangerous because it can generate too many logs and flood the filesystem storing them; it should not be used as is unless you are prepared for that.
It could be better, for this case where no actual firewalling is done, to not log packets that are part of an already existing flow (ie: packets in the established conntrack state), leaving new, related (eg: ICMP errors sent back) and invalid packets. This is done with conntrack's help. For good measure, also limit the number of logs (eg: a maximum of 20 per minute). I'm also adding the counter statement three times, to be able to display back (using nft list ruleset) statistics about the difference in volume at each added filter before the log statement is reached. The counters are not needed.
nft add rule inet filter input counter ct state != established counter limit rate 20/minute counter log
To keep this for later reuse, the previous file /etc/nftables.conf can be edited using the output of nft -s list ruleset (but the flush ruleset command should not be removed), and the nftables service restarted (systemctl restart nftables) or the file directly reloaded (nft -f /etc/nftables.conf) to revert to its content.
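Putting it together, /etc/nftables.conf could then look like this (the question's original chains kept, with the logging rule added to input):

```
#!/usr/sbin/nft -f
flush ruleset

table inet filter {
    chain input {
        type filter hook input priority 0;
        counter ct state != established counter limit rate 20/minute counter log
    }
    chain forward {
        type filter hook forward priority 0;
    }
    chain output {
        type filter hook output priority 0;
    }
}
```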
| How can I add a rule to log all incoming traffic?
1,648,211,276,000 |
Update 2022-09-15:
It has turned out that what I was trying to achieve does not make much sense. Hence, actually this question should be deleted. However, there are some very enlightening comments to it; therefore I'll leave it as-is for the moment and leave the decision about its fate to the community.
Original question:
I am currently trying to learn nftables and have made some progress. Now I have the following problem (please bear with me if the question is dumb, but all references link to wiki.netfilter.org, which currently is down (my usual luck :-)):
I have an IPv4 network with some client PCs and a router / firewall PC which is running nftables. The router has two IP addresses, 192.168.20.253 and 192.168.20.254. The former is solely for management of the router (e.g. an SSH daemon is listening on the router on that address), while the latter is the gateway address the clients should use.
In the router's nftables ruleset, I would like to be able to distinguish between packets which came in through .253 (for such packets, I would allow only SSH when the daddr (destination address) actually is .253) and packets which came in through .254 (for such packets, I would allow only them if the daddr is outside the local network).
I know how to achieve that if .253 and .254 are assigned to two different interfaces. But this is not the case; both router IP addresses are assigned to the same interface.
Could anybody give me a tip? I didn't find hints in man nft. It mentions routing expressions like ip or nexthop, but that obviously doesn't help. Do I need to create two interfaces (on the same NIC) and assign .253 to one of them and .254 to the other?
|
The router has two IP addresses, 192.168.20.253 and 192.168.20.254
...both ... are assigned to the same interface
So we have something like:
# ip addr
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:f8:ed:0e brd ff:ff:ff:ff:ff:ff
inet 192.168.20.254/24 brd 192.168.20.255 scope global enp0s8
valid_lft forever preferred_lft forever
inet 192.168.20.253/24 brd 192.168.20.255 scope global secondary enp0s8
valid_lft forever preferred_lft forever
So both on the same physical ethernet port (L1) and the same MAC address (L2).
I'd like to prevent clients to reach the internet via .253
That is, clients can configure the gateway on their computers as 192.168.20.253 or as 192.168.20.254
In the router's nftables ruleset, I would like to be able to distinguish between packets which came in through .253
Let's see what the problem is here.
...
One of the reasons is that I would like to be able to change .253 to something else later without hassle (while .254 is "fixed").
Run a simple DNS server and let clients use DNS names instead of IPs to connect.
But I have understood that I can't achieve this the normal way
The normal way in your scenario is to check the destination IP and set up DNS. See above.
I'll try to create a second interface which I assign .253 to; this would solve the problem.
You only need a separate interface if you want to separate users at the physical and data link layers.
Also, both IP addresses you use are from the same IP subnet, so even with two interfaces it doesn't make sense, and you would also get routing problems when the same subnet is reachable via two interfaces.
You cannot put 192.168.20.253 and 192.168.20.254 into different subnets with any prefix shorter than /31.
| In nftables, how can we get the IP address via which a packet came in if the respective interface has multiple IP addresses assigned? |
1,648,211,276,000 |
I am trying to locate the file /etc/nftables/inet-filter which is referenced in the readme for a project I've inherited. When I installed nftables, the only files that existed in /etc/nftables were:
. .. main.nft nat.nft osf router.nft
I found an inet-filter.nft file at git.netfilter.org which consists of:
#!/usr/sbin/nft -f
table inet filter {
chain input { type filter hook input priority 0; }
chain forward { type filter hook forward priority 0; }
chain output { type filter hook output priority 0; }
}
but I'm not sure if this is the file that my project was referencing.
If anyone has actually used the inet-filter.nft file, does this look familiar? Or is inet-filter.nft obsolete for some reason?
Thanks.
Fedora system: Linux fedora 5.18.11-200.fc36.x86_64 #1 SMP PREEMPT_DYNAMIC Tue Jul 12 22:52:35 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Vagrant: Vagrant 2.3.0
nftables: nftables v1.0.1 (Fearless Fosdick #3)
|
It appears they are not packaged on Fedora since Fedora 36:
# drop vendor-provided configs, they are not really useful
rm -f $RPM_BUILD_ROOT/%{_datadir}/nftables/*.nft
Instead, a "more advanced default config" is shipped with the files /etc/nftables/main.nft, router.nft and nat.nft.
# Sample configuration for nftables service.
# Load this by calling 'nft -f /etc/nftables/main.nft'.
Anyway you should create your own tables, especially considering that having chains of different hook types in the same table (e.g. filter + nat) is how things should be done with nftables, because separating them would hinder functionality (e.g.: sharing the same set across chains of different types requires them to be in the same table). nftables' tables are not an exact equivalent of iptables' tables.
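As an illustration of that last point, here is a minimal sketch of a single inet table whose filter-type and nat-type chains share one set (table, chain and set names here are made up):

```nft
table inet myfw {
    set lan_hosts {
        type ipv4_addr
        flags interval
        elements = { 192.168.0.0/24 }
    }
    chain input {
        # filter-type chain using the set
        type filter hook input priority filter; policy accept;
        ip saddr @lan_hosts accept
    }
    chain postrouting {
        # nat-type chain in the same table, reusing the same set
        type nat hook postrouting priority srcnat; policy accept;
        ip saddr @lan_hosts masquerade
    }
}
```

Splitting these chains into two tables (iptables-style) would force you to maintain two copies of the set.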
If you need this file to follow some example then yes, the file you found is the one you were looking for. For 1.0.1 this file and other related files are found there instead.
| Where is /etc/nftables/inet-filter? |
1,648,211,276,000 |
I want to match all data from ens19 with the mac address 88:7e:25:d3:90:0b use table 147.
My idea is to mark the data from 88:7e:25:d3:90:0b and give it a fwmark 14. Then use ip rule to the specified route table
So I make this command
nft add rule filter input iif ens19 ether saddr = 88:7e:25:d3:90:0b mark set 147
Error: syntax error, unexpected '='
add rule filter input iif ens19 ether saddr = 88:7e:25:d3:90:0b mark set 147
What is the correct way to do this command?
Thanks
|
According to the documentation, the syntax you want is:
nft add rule filter input iif ens19 ether saddr 88:7e:25:d3:90:0b mark set 147
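If the end goal is the policy routing described in the question, note that the input hook runs after the routing decision, so for forwarded traffic the mark is usually set in a prerouting chain instead. A sketch (the prerouting chain is an assumption added here, and mark 147 is taken from the rule above; the question also mentions mark 14 and table 147, so adjust the numbers to what you actually want):

```shell
# mark the traffic before the routing decision
nft add chain filter prerouting '{ type filter hook prerouting priority mangle; }'
nft add rule filter prerouting iif ens19 ether saddr 88:7e:25:d3:90:0b meta mark set 147

# send marked traffic through routing table 147
ip rule add fwmark 147 lookup 147
```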
| matching source MAC address in nftables |
1,648,211,276,000 |
Hi dear esteemed community,
I'm having a hard time porting my very functional iptables firewall to nftables.
No issues with input/output/forward stuffs, it's mainly the conntrack marking.
What I currently do is the following:
1/ I create three routing tables with the ip command, along with rules and conntrack marks. Each of them has one default route, either my FDDI, my VPN or my 4G connection.
ip route add table maincnx default dev $WAN via 192.168.1.2
ip route add table maincnx 192.168.0.0/24 dev $LAN src 192.168.0.1
ip route add table maincnx 192.168.1.0/24 dev $WAN src 192.168.1.1
ip rule add from 192.168.1.2 table maincnx
[[ $VPN ]] && ip route add table vpnclient default dev $VPNIF via $VPNCLIENTIP
[[ $VPN ]] && ip route add table vpnclient $VPNCLIENTROUTE dev $VPNIF src $VPNCLIENTIP
[[ $VPN ]] && ip route add table vpnclient 192.168.0.0/24 dev $LAN src 192.168.0.1
[[ $VPN ]] && ip route add table vpnclient 192.168.1.0/24 dev $WAN src 192.168.1.1
ip rule add from $VPNCLIENTIP table vpnclient
ip route add table altcnx default dev $WAN2 via 192.168.2.2
ip route add table altcnx 192.168.0.0/24 dev $LAN src 192.168.0.1
ip route add table altcnx 192.168.1.0/24 dev $WAN src 192.168.1.1
ip route add table altcnx 192.168.2.0/24 dev $WAN2 src 192.168.2.1
ip rule add from 192.168.2.2 table altcnx
ip rule add from all fwmark 1 table maincnx
[[ $VPN ]] && ip rule add from all fwmark 2 table vpnclient
ip rule add from all fwmark 3 table altcnx
ip route flush cache
2/ Then, I put some iptables rules together:
(I left the comments if anyone is already struggling with the Iptables version)
$IPTABLES -t mangle -A PREROUTING -j CONNMARK --restore-mark # Restore mark previously set
$IPTABLES -t mangle -A PREROUTING -m mark ! --mark 0 -j ACCEPT # If a mark exists: skip
$IPTABLES -t mangle -A PREROUTING -s 192.168.0.5 -p tcp --sport 50001 -j MARK --set-mark 2 # route through VPN
$IPTABLES -t mangle -A PREROUTING -s 192.168.0.3 -j MARK --set-mark 2
$IPTABLES -t mangle -A PREROUTING -s 192.168.0.4 -j MARK --set-mark 3 # route through 4G
$IPTABLES -t mangle -A POSTROUTING -j CONNMARK --save-mark # save marks to avoid retagging
3/ The associated Postrouting:
$IPTABLES -t nat -A POSTROUTING -o $WAN -j SNAT --to-source 192.168.1.1
$IPTABLES -t nat -A POSTROUTING -o $WAN2 -j SNAT --to-source 192.168.2.1
[[ $VPN ]] && $IPTABLES -t nat -A POSTROUTING -o $VPNIF -j SNAT --to-source $VPNCLIENTIP
ps: $VPN is obviously a variable set to 1 if the VPN is up & running when the script is launched. There are a few other things to make this work like IP rules cleanup and some prerouting/forward, but it's not the point here, if you're interested, comment, I'll post them in full.
Topology: the gateway has 3 eth: 0/1/2, using IPs 192.168.1.1 (FDDI), 192.168.0.1 (LAN), 192.168.2.1 (4G); the gateways are 192.168.1.2 for FDDI and 192.168.2.2 for 4G, and the VPN sits on a TUN0 device whose IP is somewhere around 10.8.0.x.
So basically, when 192.168.0.5 is initiating a connection toward TCP port 50001, it is routed through the VPN. 192.168.0.3 is constantly using the VPN whatever it's trying to connect to, 192.168.0.4 is going through the 4G connection, and all others are, by default, using routing table 1 and going through the FDDI connection.
Question: I'm guessing the Ip part of the job stays the same with nftables but what are the equivalent command in nftables to have the mangling and postrouting done in the same as iptables does it here?
|
iptables-translate is provided along with any modern iptables installation (or might be packaged separately; search for it). It will attempt to translate iptables rules into nftables rules. That's an easy way if one doesn't want to read all the documentation (including for this tool) and man pages.
It doesn't require root to be used.
For this case (completing a few variables with dummy information to cope with the way OP creates rules):
$ export IPTABLES=/usr/sbin/iptables-translate
$ $IPTABLES -V
iptables-translate v1.8.7 (nf_tables)
$ export WAN=wan WAN2=wan2 VPNIF=vpnif VPNCLIENTIP=192.0.2.2 VPN=1
$ cat > /tmp/rules.bash <<'EOF'
$IPTABLES -t mangle -A PREROUTING -j CONNMARK --restore-mark # Restore mark previously set
$IPTABLES -t mangle -A PREROUTING -m mark ! --mark 0 -j ACCEPT # If a mark exists: skip
$IPTABLES -t mangle -A PREROUTING -s 192.168.0.5 -p tcp --sport 50001 -j MARK --set-mark 2 # route through VPN
$IPTABLES -t mangle -A PREROUTING -s 192.168.0.3 -j MARK --set-mark 2
$IPTABLES -t mangle -A PREROUTING -s 192.168.0.4 -j MARK --set-mark 3 # route through 4G
$IPTABLES -t mangle -A POSTROUTING -j CONNMARK --save-mark # save marks to avoid retagging
$IPTABLES -t nat -A POSTROUTING -o $WAN -j SNAT --to-source 192.168.1.1
$IPTABLES -t nat -A POSTROUTING -o $WAN2 -j SNAT --to-source 192.168.2.1
[[ $VPN ]] && $IPTABLES -t nat -A POSTROUTING -o $VPNIF -j SNAT --to-source $VPNCLIENTIP
EOF
result:
$ bash /tmp/rules.bash
nft add rule ip mangle PREROUTING counter meta mark set ct mark
nft add rule ip mangle PREROUTING mark != 0x0 counter accept
nft add rule ip mangle PREROUTING ip saddr 192.168.0.5 tcp sport 50001 counter meta mark set 0x2
nft add rule ip mangle PREROUTING ip saddr 192.168.0.3 counter meta mark set 0x2
nft add rule ip mangle PREROUTING ip saddr 192.168.0.4 counter meta mark set 0x3
nft add rule ip mangle POSTROUTING counter ct mark set mark
nft add rule ip nat POSTROUTING oifname "wan" counter snat to 192.168.1.1
nft add rule ip nat POSTROUTING oifname "wan2" counter snat to 192.168.2.1
nft add rule ip nat POSTROUTING oifname "vpnif" counter snat to 192.0.2.2
The translator might not know some parts or be able to translate them, and will leave the line commented out when this happens, for example:
$ /usr/sbin/iptables-translate -A INPUT -s 192.0.2.2 -j LED --led-trigger-id test
nft # -A INPUT -s 192.0.2.2 -j LED --led-trigger-id test
As this dumped translation didn't include any comment, one can assume the translation is OK. Just rename the tables and chains correctly (and move content back into variables) to be compatible with what was chosen in the nftables ruleset, and reuse it. Improve or simplify if needed (e.g.: the counter expression is only needed for debugging or statistics, not for actual operation).
It can often be improved with newer features provided by nftables (eg: using maps for better factorization) if needed, but that's not within the scope of this answer.
Caveat: don't just blindly trust the translation, nftables peculiarities must still be known.
The translation tool doesn't know the context of use, so OP must still understand the expressions and statements found in the translation, even if only checking them afterwards. I'm just giving an example not needed by OP: when a (conn)mark is set in the OUTPUT hook for the specific purpose of rerouting (nothing special for marks set in PREROUTING since that happens before routing, and this doesn't apply to FORWARD), the correct way to do this with nftables is to set the mark in a chain of type route hook output instead of type filter hook output, which wouldn't trigger a reroute:
Supported chain types
[...]
Type    Families   Hooks    Description
[...]
route   ip, ip6    output   If a packet has traversed a chain of this type and is
                            about to be accepted, a new route lookup is performed
                            if relevant parts of the IP header have changed. This
                            allows to e.g. implement policy routing selectors in
                            nftables.
... in order to have a reroute happen at all. Here nft add rule ip mangle OUTPUT wouldn't reveal this detail, because mangle is usually just turned into filter when using nftables. One has to understand how it works: translation can't do everything.
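Such a chain could be sketched like this (table/chain names mirror the translated ruleset; this is only needed if marks set on locally generated traffic must trigger a reroute, which is not OP's case):

```nft
table ip mangle {
    chain OUTPUT {
        # "type route" (instead of "type filter") makes the kernel redo the
        # route lookup when relevant parts of the packet, e.g. the mark, changed
        type route hook output priority mangle; policy accept;
        meta mark set ct mark
    }
}
```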
| Porting Iptables to Nftables firewall with conntrack marks |
1,648,211,276,000 |
I am finally switching from the old iptables to the new nftables (specifically using firewalld) to configure my computers and servers, but so far I have failed to find any newer alternative to the good old iptables -vnL for quickly getting current statistics.
What's the appropriate command to use here instead?
|
You can print all netfilter rules to check current counter values
nft list ruleset
Edit:
Since firewalld probably does not add counters to nft rules, you will not get traffic statistics using firewalld with nftables.
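If you manage rules yourself, you can get statistics by adding counter expressions to the rules; a minimal sketch (table/chain names are made up):

```shell
nft add table inet stats
nft add chain inet stats input '{ type filter hook input priority filter; policy accept; }'
nft add rule inet stats input tcp dport 22 counter
# the per-rule packet/byte counts then appear in:
nft list ruleset
```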
| Get netfilter statistics on the command line |
1,648,211,276,000 |
I've got my server set up with long list of services, and everything is working great... on IPv4. But when I test them against IPv6 nothing is resolving. After disabling nftables everything started working, so I turned it back on and through trial and error, was able to identify the two lines (identified with --> below) that were causing things to fail...
# Accept related or established, drop bad, accept everything else
add rule inet filter INPUT ct state related,established accept
add rule inet filter INPUT tcp flags & (fin|syn|rst|psh|ack|urg) == 0x0 drop
--> add rule inet filter INPUT tcp flags & (fin|syn|rst|ack) != syn ct state new drop
--> add rule inet filter INPUT tcp flags & (fin|syn|rst|psh|ack|urg) == fin|syn|rst|psh|ack|urg drop
add rule inet filter INPUT iifname "lo" accept
I'm not as educated as I should be on TCP, actually I'm just educated enough to get myself into trouble, so I'd appreciate some help interpreting what I'm doing here. My understanding is that I'm accepting all related/established traffic, and all traffic related to the loopback interface. I'm really just looking at "new" requests...
My problem is that I don't really understand the TCP flags other than syn and ack (and really only insofar as how they work in a three-way TCP handshake); the others I've added here just seemed to be common in the tutorials I was reviewing. My fear is I don't understand the implication of leaving them there, or taking them out, and what I'm opening myself up to. My goal is to allow IPv4 and IPv6 traffic, while eliminating any bad or unrelated packets from getting through.
This will eventually be tied to a commercial offering, so I'd like to understand better but need a little guidance. Would appreciate anyone helping clear this up with me.
EDIT: Turns out these were not the issue; the rules that were causing the issue were actually the icmp rules. IPv6 requires a handful of nd-* rules in order to operate properly. I'll provide details in an answer.
|
Turns out that the rules I posted were not the culprit, as I've noted in the question update. The actual issue was related to ICMP traffic and in particular several types related directly to IPv6. My ICMP rules were...
add rule inet filter INPUT ct state new icmp type { echo-request, echo-reply } accept
add rule inet filter INPUT ct state new icmpv6 type { echo-request, echo-reply } accept
But in order to operate properly IPv6 requires a number of Neighbour Discovery related rules (nd-*). I've included them as well as a few other types that are all part of being a "good network citizen". The ones I thought were the issue were actually important for attack mitigation and are working fine now that I've fixed my ICMP traffic.
The new ICMP rules are...
add rule inet filter INPUT ip protocol icmp icmp type { destination-unreachable, echo-reply, echo-request, source-quench, time-exceeded } accept
add rule inet filter INPUT ip6 nexthdr icmpv6 icmpv6 type { destination-unreachable, echo-reply, echo-request, nd-neighbor-solicit, nd-router-advert, nd-neighbor-advert, packet-too-big, parameter-problem, time-exceeded } accept
The original rules I thought were the issue are actually for mitigating malicious behaviour...
XMAS Attack This rule mitigates the XMAS attack, one that sets all of the TCP flag bits, lighting the packet up like a "Christmas tree", so that the slight differences in OS responses to such a request can be parsed to help a bad actor identify further avenues of attack...
add rule inet filter INPUT tcp flags & (fin|syn|rst|psh|ack|urg) == fin|syn|rst|psh|ack|urg drop
Force SYN check If I understand it right, this drops packets that claim to start a new connection without the initial SYN flag, lowering the processing load by eliminating useless packets that could be part of an attack on resources, like a denial of service via resource exhaustion...
add rule inet filter INPUT tcp flags & (fin|syn|rst|ack) != syn ct state new drop
This post helped me get a foothold and start searching out a better understanding.
Hope this provides a shortcut to the answer for someone else! :)
| nftables preventing services from resolving on IPv6 |
1,648,211,276,000 |
I have a working nftables rule-set. However it is very long, and has a lot of repeated code:
Exact (just a few characters different) duplicate for ip4 and ip6.
Chains of rules, that branch into near identical branches. I feel that some boolean logic would help here.
How can I reduce the repeated code, to make this rule-set more concise?
I was trying various things then realised I can do it like below, but it makes the programmer in me feel dirty. It has too much repeated code.
#!/usr/sbin/nft -f
table ip vnc_table {};
table ip6 vnc_table {};
flush table ip vnc_table;
flush table ip6 vnc_table;
table ip vnc_table {
# 3 near identical sets
set richardports {
type inet_service;
flags interval;
elements = { 5910-5919 };
}
set henryports {
type inet_service;
flags interval;
elements = { 5920-5929 };
}
set sholaports {
type inet_service;
flags interval;
elements = { 5930-5939 };
}
chain output {
type filter hook output priority 0; policy accept;
ip daddr 127.0.0.1 jump localhost;
}
chain localhost {
tcp dport @richardports jump richard_chain;
tcp dport @henryports jump henry_chain;
tcp dport @sholaports jump shola_chain;
}
# 3 near identical chains
chain richard_chain {
skuid "richard" accept;
reject;
}
chain henry_chain {
skuid "henry" accept;
reject;
}
chain shola_chain {
skuid "shola" accept;
reject;
}
}
#then we do it all again for ip6
table ip6 vnc_table {
set richardports {
type inet_service;
flags interval;
elements = { 5910-5919 };
}
set henryports {
type inet_service;
flags interval;
elements = { 5920-5929 };
}
set sholaports {
type inet_service;
flags interval;
elements = { 5930-5939 };
}
chain output {
type filter hook output priority 0; policy accept;
ip6 daddr ::1 jump localhost;
}
chain localhost {
tcp dport @richardports jump richard_chain;
tcp dport @henryports jump henry_chain;
tcp dport @sholaports jump shola_chain;
}
chain richard_chain {
skuid "richard" accept;
reject;
}
chain henry_chain {
skuid "henry" accept;
reject;
}
chain shola_chain {
skuid "shola" accept;
reject;
}
}
|
Joining IPv4 and IPv6 together
Join tables of families ip and ip6 into a single table of family inet:
remove the whole ip6 vnc_table table
change the ip vnc_table table into inet vnc_table table
Family inet can handle both IPv4 and IPv6 at the same time, and can still accept specific IPv4 or IPv6 rules when needed. So replace:
table ip vnc_table {
with:
table inet vnc_table {
adapt the flushes before
While answering this question, I discovered that flush table isn't adequate to be able to reload the rules (I had to amend my answer on this topic): as documented, it will "Flush all chains and rules of the specified table." but will not delete those objects themselves, leading to leftover or conflicting objects. delete table should be used instead:
delete table inet vnc_table
and of course ip vnc_table and ip6 vnc_table tables should be deleted manually once:
# nft delete table ip vnc_table
# nft delete table ip6 vnc_table
add the now missing ip6 rule, so that both IPv4 and IPv6 localhost will match
nft add inet vnc_table output ip6 daddr ::1 jump localhost
Bleeding edge required to do better
To go further the sets have to be organised differently but there's a bug/limitation which has been lifted only in nftables 0.9.4 released on 2020-04-01 along with libnftnl 1.1.6, while also requiring kernel 5.6:
Support for ranges in concatenations (requires Linux kernel >= 5.6),
Without these versions, nftables can't accept a concatenation which includes a range (here: the ports range). Using only kernel 5.5 rather than 5.6 led to a segmentation fault when handling the ruleset below (this is arguably a bug): the kernel upgrade too is required.
So two sets are made: the first is to tell what port ranges are filtered and need special treatment. The second (still having to reinclude the previous port ranges) is a concatenation of port ranges and user ids. Those two properties will be checked against the packet: if a port + user association exists, it will be accepted, else the packet is rejected for those filtered ports.
Result:
table inet vnc_table
delete table inet vnc_table
table inet vnc_table {
set filteredport {
type inet_service
flags interval
elements = { 5910-5919, 5920-5929, 5930-5939 }
}
set portuser {
type inet_service . uid
flags interval
elements = {
5910-5919 . "richard",
5920-5929 . "henry",
5930-5939 . "shola"
}
}
chain output {
type filter hook output priority filter; policy accept;
ip daddr 127.0.0.1 jump localhost
ip6 daddr ::1 jump localhost
}
chain localhost {
tcp dport @filteredport jump portuser_chain
}
chain portuser_chain {
tcp dport . meta skuid @portuser accept
reject with tcp reset
}
}
Notes:
the uid type in a named set is still not documented in the man page, but has existed for a very long time along with many other types. Source code defining it: here and there.
replacing reject (which sends ICMP) with reject with tcp reset (which sends a specific TCP RST) speeds up the port rejection detection on the client (and there's no need to specify it's TCP before, the check is added implicitly, matching only for TCP).
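To check and load the result (assuming the ruleset above is saved as /etc/nftables/vnc.nft; adjust the path to your setup):

```shell
nft -c -f /etc/nftables/vnc.nft   # parse-only check, changes nothing
nft -f /etc/nftables/vnc.nft      # load; the leading delete makes it re-runnable
nft list table inet vnc_table     # verify what the kernel actually holds
```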
| Simplify nftables (net filter tables) rules |
1,648,211,276,000 |
Set up/configuration:
I have a RHEL 8 server, running Asterisk 15.x, that has 2 NICs. NMCLI is used for networking
NIC0 (eno5np0) is on the trusted network and is configured as a static IPv4 and NIC1 (ens1f0) is on the untrusted side as a DHCP IPv4. Both are UP,BROADCAST,RUNNING,MULTICAST
NIC0 is where I access the server from, is an internal network and has an IP of 10.38.149.244/32 (GW is 10.38.149.241) NIC1 is supposed to allow access to the internet (for SIP calling) and has an IP of 10.0.0.91 (GW is 10.0.0.1)
Firewall status - inactive(dead)
SE Linux status - disabled
Server #1 interface configs:
TYPE=Ethernet
DEVICE=eno5np0
UUID=77c33e7a-7dba-4785-b749-dc0883b46cef
ONBOOT=yes
IPADDR=10.38.149.244
NETMASK=255.255.255.240
GATEWAY=10.38.149.241
NM_CONTROLLED=yes
BOOTPROTO=none
DOMAIN=comcast.net
DNS1=69.252.80.80
DNS2=69.252.81.81
DEFROUTE=yes
USERCTL=no
IPV4_FAILURE_FATAL=yes
TYPE=Ethernet
BOOTPROTO=dhcp
NM_CONTROLLED=yes
PEERDNS=no
DEFROUTE=no
NAME=ens1f0
UUID=249b95f0-d490-4402-b654-43695317d738
DEVICE=ens1f0
ONBOOT=yes
PROXY_METHOD=none
BROWSER_ONLY=no
IPV4_FAILURE_FATAL=no
IPV6_DISABLED=yes
IPV6INIT=no
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
Kernel IP routing table:
Destination    Gateway        Genmask          Flags  Metric  Ref  Use  Iface
0.0.0.0        10.38.149.241  0.0.0.0          UG     100     0    0    eno5np0
10.0.0.0       0.0.0.0        255.255.255.0    U      101     0    0    ens1f0
10.38.149.240  0.0.0.0        255.255.255.240  U      100     0    0    eno5np0
I do not have any nft tables/IP tables configured
I am SSH'd to the 10.38.149.244 interface (NIC0, aka eno5np0), have sudo access
I run the following command for NIC0: sudo traceroute -i eno5np0 8.8.8.8 and get a nice, completed trace to 8.8.8.8
I run the following command for NIC1: sudo traceroute -i ens1f0 8.8.8.8 and it times out, no packets received
I cannot ping/traceroute to any ip address through NIC1 (sudo ping -I and sudo traceroute -i) except 10.0.0.1, which is the gateway. It is almost like if it isn't the gateway the packets are not making it back into the server for processing?
Issue/Problem
So, after trying both ping and traceroute and not receiving a response, I opened a second SSH session to the server and did a tcpdump while running a ping to 8.8.8.8 over the NIC1 interface in my first SSH session:
TCP Dump
sudo tcpdump -vv --interface ens1f0 -c 10
dropped privs to tcpdump
tcpdump: listening on ens1f0, link-type EN10MB (Ethernet), capture size 262144 bytes
15:21:09.450739 IP6 (flowlabel 0x9b9b7, hlim 255, next-header ICMPv6 (58) payload length: 120) fe80::1256:11ff:fe86:6e92 > ff02::1: [icmp6 sum ok] ICMP6, router advertisement, length 120
hop limit 64, Flags [managed, other stateful], pref medium, router lifetime 180s, reachable time 0ms, retrans timer 0ms
rdnss option (25), length 40 (5): lifetime 180s, addr: device1.inetprovider.net addr: device2.inetprovider.net
0x0000: 0000 0000 00b4 2001 0558 feed 0000 0000
0x0010: 0000 0000 0001 2001 0558 feed 0000 0000
0x0020: 0000 0000 0002
prefix info option (3), length 32 (4): 2601:0:200:80::/64, Flags [onlink, auto], valid time 300s, pref. time 300s
0x0000: 40c0 0000 012c 0000 012c 0000 0000 2601
0x0010: 0000 0200 0080 0000 0000 0000 0000
route info option (24), length 24 (3): ::/0, pref=medium, lifetime=180s
0x0000: 0000 0000 00b4 0000 0000 0000 0000 0000
0x0010: 0000 0000 0000
source link-address option (1), length 8 (1): 10:56:11:86:6e:92
0x0000: 1056 1186 6e92
15:21:10.415419 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has dns.google tell 10.0.0.91, length 28
15:21:11.439570 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has dns.google tell 10.0.0.91, length 28
15:21:12.453262 IP6 (flowlabel 0x9b9b7, hlim 255, next-header ICMPv6 (58) payload length: 120) fe80::1256:11ff:fe86:6e92 > ff02::1: [icmp6 sum ok] ICMP6, router advertisement, length 120
hop limit 64, Flags [managed, other stateful], pref medium, router lifetime 180s, reachable time 0ms, retrans timer 0ms
rdnss option (25), length 40 (5): lifetime 180s, addr: device1.inetprovider.net addr: device2.inetprovider.net
0x0000: 0000 0000 00b4 2001 0558 feed 0000 0000
0x0010: 0000 0000 0001 2001 0558 feed 0000 0000
0x0020: 0000 0000 0002
prefix info option (3), length 32 (4): 2601:0:200:80::/64, Flags [onlink, auto], valid time 300s, pref. time 300s
0x0000: 40c0 0000 012c 0000 012c 0000 0000 2601
0x0010: 0000 0200 0080 0000 0000 0000 0000
route info option (24), length 24 (3): ::/0, pref=medium, lifetime=180s
0x0000: 0000 0000 00b4 0000 0000 0000 0000 0000
0x0010: 0000 0000 0000
source link-address option (1), length 8 (1): 10:56:11:86:6e:92
0x0000: 1056 1186 6e92
15:21:12.463417 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has dns.google tell 10.0.0.91, length 28
15:21:13.487416 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has dns.google tell 10.0.0.91, length 28
15:21:13.546246 IP (tos 0x0, ttl 4, id 8382, offset 0, flags [DF], proto UDP (17), length 219)
169.254.100.1.50760 > 239.255.255.250.ssdp: [udp sum ok] UDP, length 191
15:21:13.546273 IP (tos 0x0, ttl 4, id 8383, offset 0, flags [DF], proto UDP (17), length 223)
169.254.100.1.50760 > 239.255.255.250.ssdp: [udp sum ok] UDP, length 195
15:21:13.546320 IP (tos 0x0, ttl 4, id 8384, offset 0, flags [DF], proto UDP (17), length 227)
169.254.100.1.50760 > 239.255.255.250.ssdp: [udp sum ok] UDP, length 199
15:21:13.546419 IP (tos 0x0, ttl 4, id 8385, offset 0, flags [DF], proto UDP (17), length 220)
169.254.100.1.50759 > 239.255.255.250.ssdp: [udp sum ok] UDP, length 192
10 packets captured
10 packets received by filter
0 packets dropped by kernel
I am not understanding why, if the server is doing an ARP request, am I not getting a response? Is the issue on my server not knowing how to respond back to NIC0 with my ping request (where I am SSH'd into)? Is it the gateway being misconfigured? Do I need a NFT table/IP Table configured?
I am familiar with how to do this in RHEL 6.x, but not in RHEL 8 (configuration using IP route and IP tables was simpler I think?)
At the end of the day (for a broader picture) - I have Softphone clients to register to the Asterisk PBX on the internal/trusted network coming in over NIC0 (which works). They need to make phone calls to endpoints on the Internet, but only over NIC1 - and right now I cannot even ping to any location on the internet over the NIC1 interface.
Any help/guidance would be very much appreciated at this point - I am lost and desperate.
Edit/additional clarification:
I have a RHEL 6.x server, with the exact same physical connections and NICs, on which this does work. I have tried to use the iptables rules and routing table from that Server #2 on Server #1 above and it will not work (I get booted when I turn the interface back up, and have to reboot the device to clear out any unsaved changes before I can get back in). I did use the iptables-to-nft translate function, just as an FYI. I have also plugged my Server #1 NIC1 into the known-good modem/internet access port that Server #2 is using, and still no change.
Server #2 interface configs:
DEVICE=eth0
BOOTPROTO=none
NM_CONTROLLED=yes
ONBOOT=yes
TYPE=Ethernet
UUID="da71293d-4351-481e-a794-bc5850e29391"
IPADDR=10.38.149.243
DNS1=10.168.241.223
DOMAIN=comcast.net
DEFROUTE=no
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
#HWADDR=00:1C:23:CF:BC:E3
HWADDR=00:1c:23:cf:bc:e3
NETMASK=255.255.255.240
USERCTL=no
PEERDNS=yes
GATEWAY=10.38.149.241
DEVICE=eth1
BOOTPROTO=dhcp
HWADDR=00:1c:23:cf:bc:e5
NM_CONTROLLED=yes
ONBOOT=yes
DEFROUTE=yes
TYPE=Ethernet
UUID="78bc69cb-80ca-41d1-af9c-66703eb952d5"
USERCTL=no
PEERDNS=yes
IPV6INIT=no
Kernel Routing Table on Server #2
Destination    Gateway        Genmask          Flags  Metric  Ref  Use  Iface
0.0.0.0        10.0.0.1       255.255.255.255  UGH    0       0    0    eth1
10.38.149.240  0.0.0.0        255.255.255.240  U      0       0    0    eth0
10.0.0.0       0.0.0.0        255.255.255.0    U      0       0    0    eth1
10.0.0.0       10.38.149.241  255.0.0.0        UG     0       0    0    eth0
0.0.0.0        10.0.0.1       0.0.0.0          UG     0       0    0    eth1
iptables -L on Server #2
Chain INPUT (policy ACCEPT)
target  prot  opt  source                                         destination  status?
DROP    all   --   c-67-164-235-175.devivce1.mi.inetprovider.net  anywhere
DROP    all   --   c-67-164-235-175.devivce1.mi.inetprovider.net  anywhere
ACCEPT  all   --   anywhere                                       anywhere
ACCEPT  all   --   anywhere                                       anywhere     state RELATED,ESTABLISHED
ACCEPT  tcp   --   anywhere                                       anywhere     tcp dpt:ssh
ACCEPT  udp   --   anywhere                                       anywhere     udp dpt:sip
ACCEPT  udp   --   anywhere                                       anywhere     udp dpts:ndmp:dnp
DROP    all   --   106.0.0.0/8                                    anywhere
DROP    all   --   106.0.0.0/8                                    anywhere
DROP    all   --   host-87-0-0-0.retail.blockeddomain.notus/8     anywhere
DROP    all   --   113.0.0.0/8                                    anywhere
DROP    all   --   117.0.0.0/8                                    anywhere
DROP    all   --   p5b000000.dip0.blockeddomain.notus/8           anywhere

Chain FORWARD (policy ACCEPT)
target  prot  opt  source    destination
ACCEPT  all   --   anywhere  anywhere

Chain OUTPUT (policy ACCEPT)
target  prot  opt  source    destination
|
A gateway with a genmask of 0.0.0.0 is a "default gateway". In other words, it means "unless otherwise specified, the rest of the world is this way." In a simple multi-homed host configuration, there should be just one default gateway in the entire system at a time. You cannot really use two NATted internet connections in parallel, unless you at least have an exact control of how the NAT is done. The best you can probably do with two average consumer-grade Internet connections (with a provider-dictated NAT on each) is to use one as a primary, with an automatic fall-back to the second one if the first one loses a link.
You have a default gateway configured on eno5np0 interface, but not on the ens1f0 interface. There are no more specific routes either, just the auto-generated network entries for the local network segment of each interface. This is probably because your system's DHCP client detects you already have a statically-configured default gateway on eno5np0, so it won't mess things up by adding another.
As a result, the system has no clue that it should send outgoing traffic addressed to 8.8.8.8 via 10.0.0.1 if sending it out through ens1f0. By your routing table, only addresses in the form of 10.0.0.* should be reachable through that interface.
But because you are explicitly telling traceroute to try and reach 8.8.8.8 via ens1f0, it assumes you are trying to debug a possibly misconfigured server in your local network segment, and sends out direct ARP requests for that IP address.
You should never see an ARP request for 8.8.8.8 in your own network unless something is misconfigured (or unless you are next-door to a Google datacenter and have somehow managed to get a neighborly direct link to their network :-). Instead, you should see an ARP request for the default gateway in that segment, and then this system should send any outgoing traffic bound to 8.8.8.8 to the gateway.
Your system also probably has a IP Reverse Path Filtering in effect. Basically, since your routing table says that the ens1f0 interface has connectivity to the 10.0.0.* addresses only, any packets with source addresses not in that range coming in via that interface would get discarded as fakes. That would cause any responses from 8.8.8.8 coming in via 10.0.0.1 to be discarded as long as your current routing table is in effect.
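One common way to make traffic through the second interface work is source-based policy routing plus relaxing reverse-path filtering; a sketch using the addresses above (table number 100 is arbitrary, and these settings are not persistent across reboots):

```shell
# a dedicated routing table with its own default gateway for ens1f0
ip route add default via 10.0.0.1 dev ens1f0 table 100
ip rule add from 10.0.0.91 lookup 100

# loosen strict reverse-path filtering on that interface
sysctl -w net.ipv4.conf.ens1f0.rp_filter=2
```

With this, traffic sourced from 10.0.0.91 (including replies) is routed via 10.0.0.1, while everything else keeps using the eno5np0 default gateway.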
| RHEL 8 IP/Kernel Routing Multi-Homed Server Issue - Cannot get a response to ping, when trying to ping from 2nd Interface |
1,648,211,276,000 |
I am running AlmaLinux 9, and on boot I see warning
Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
but what is loading that driver? I have firewalld service disabled. I want to eliminate this warning (properly).
Additional info:
[root@server ~]# lsmod | grep nft_compat
nft_compat 20480 14
nf_tables 278528 98 nft_compat,nft_counter,nft_chain_nat
nfnetlink 20480 2 nft_compat,nf_tables
[root@server ~]# iptables -L -n
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
NETAVARK_FORWARD all -- 0.0.0.0/0 0.0.0.0/0 /* netavark firewall plugin rules */
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain NETAVARK_FORWARD (1 references)
target prot opt source destination
ACCEPT all -- 0.0.0.0/0 10.88.0.0/16 ctstate RELATED,ESTABLISHED
ACCEPT all -- 10.88.0.0/16 0.0.0.0/0
Chain NETAVARK_ISOLATION_2 (1 references)
target prot opt source destination
Chain NETAVARK_ISOLATION_3 (0 references)
target prot opt source destination
DROP all -- 0.0.0.0/0 0.0.0.0/0
NETAVARK_ISOLATION_2 all -- 0.0.0.0/0 0.0.0.0/0
|
About the message itself: it's not an upstream kernel message, it's a specific message added in AlmaLinux 9 (and probably inherited from RHEL 9).
The nft_compat module is the compatibility layer that allows iptables rules to run over nftables while still using the old x_tables (non-nftables) extension modules.
CONFIG_NFT_COMPAT: Netfilter x_tables over nf_tables module
[...]
modules built: nft_compat
Help text
This is required if you intend to use any of existing x_tables
match/target extensions over the nf_tables framework.
Any xtables module that is used by iptables-nft instead of having iptables-nft translate the iptables rule into a native-only nftables rule will need nft_compat to work.
Starting from a clean VM running nothing network-related and thus not having nft_compat loaded, almost anything that is not empty will load it.
This doesn't:
iptables-nft -A INPUT -j ACCEPT
As it's translated purely into nftables code:
# uname -r
6.1.0-0.deb11.5-amd64
# iptables -V
iptables v1.8.7 (nf_tables)
# iptables -A INPUT -j ACCEPT
# nft --debug=netlink list ruleset
ip filter INPUT 2
[ counter pkts 0 bytes 0 ]
[ immediate reg 0 accept ]
table ip filter {
chain INPUT {
type filter hook input priority filter; policy accept;
counter packets 0 bytes 0 accept
}
}
# lsmod | grep nft_compat
#
Almost anything else will (until the userspace and kernel versions allow the rule to be translated into native nftables, so for a given rule this possibly depends on both the iptables version and the kernel version).
# iptables -A INPUT -j REJECT
# lsmod | grep nft_compat
nft_compat 20480 1
nf_tables 286720 4 nft_compat
x_tables 61440 2 nft_compat,ipt_REJECT
nfnetlink 20480 2 nft_compat,nf_tables
# nft --debug=netlink list ruleset
ip filter INPUT 2
[ counter pkts 8 bytes 608 ]
[ immediate reg 0 accept ]
ip filter INPUT 3 2
[ counter pkts 0 bytes 0 ]
[ target name REJECT rev 0 ]
table ip filter {
chain INPUT {
type filter hook input priority filter; policy accept;
counter packets 8 bytes 608 accept
counter packets 0 bytes 0 reject
}
}
#
Any entry displayed by nft --debug=netlink ... that includes either target name (for iptables-nft .. -j target) or match name (for iptables-nft ... -m module) means it's using the corresponding x_tables kernel module through the nft_compat compatibility module.
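As a trivial illustration of that check, here is a hypothetical grep over a saved dump (the sample lines mimic the netlink-debug output shown earlier; only the target name line counts as x_tables usage):

```shell
# Two sample netlink-debug lines; only the second one indicates
# that an x_tables extension (and hence nft_compat) is involved.
dump='[ counter pkts 0 bytes 0 ]
[ target name REJECT rev 0 ]'
printf '%s\n' "$dump" | grep -cE 'target name|match name'   # prints 1
```

A count of zero over a full `nft --debug=netlink list ruleset` dump would mean no rule currently needs nft_compat.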
To get rid of it, don't use iptables-nft at all, nor tools using its libraries (libip4tc2, ...). Of course reverting to iptables-legacy (if it is even provided) would be worse: iptables-legacy is the one intended to be deprecated upstream, while the compatibility layer is intended to be kept around for a long time as its replacement.
This probably means: don't use Docker, don't use podman, don't use ...
Conclusion: I don't see how you can get rid of this message.
| What loads nft_compat |
1,648,211,276,000 |
Our router machine has multiple public IPs (/27) on its WAN interface. Now, I want to add dnat rules which match specific dport/saddr/daddr combinations.
My dream would be something like this:
map one2one_dnat {
# dst_addr . src_addr . proto . dst_port : dnat_to . dnat_to_port
type ipv4_addr . ipv4_addr . inet_proto . inet_service : ipv4_addr . inet_service
flags interval
counter
comment "1-1 dnat"
elements = {
42.42.42.5 . 0.0.0.0/0 . tcp . 8888 : 10.42.42.5 . 8888
}
}
# And then later in a chain
ip daddr . ip saddr . ip protocol . th dport dnat to @one2one_dnat
However, this results in:
root@XXX# nft -c -f assembled.nft
assembled.nft:252:59-60: Error: syntax error, unexpected to, expecting newline or semicolon
ip daddr . ip saddr . ip protocol . th dport dnat to @one2one_dnat
^^
The following syntax examples do work (however not with the intended fancy all-in-one map):
dnat ip addr . port to ip saddr . tcp dport map { 42.42.42.5 . 8888 : 10.42.42.5 . 8888}
# And even with saddr restrictions
ip saddr 0.0.0.0/0 dnat ip addr . port to ip saddr . tcp dport map { 42.42.42.5 . 8888 : 10.42.42.5 . 8888}
Any ideas/suggestions are highly appreciated
|
The idea was here but with a wrong syntax used for the named map case, while the proper syntax was used for the anonymous map case.
A map replaces a key with that key's value if found (otherwise the expression just evaluates to false, stopping further processing of the rule). Even when used in a dnat rule, a map named keytovalue must be used with the proper syntax: key map @keytovalue. These three parts will then be replaced with the value matching the packet's properties and consumed by the rest of the rule.
OP's attempt doesn't follow the syntax:
ip daddr . ip saddr . ip protocol . th dport dnat to @one2one_dnat
It should be written like this instead:
dnat to ip daddr . ip saddr . ip protocol . th dport map @one2one_dnat
No surprise here: it's the same syntax OP successfully used with anonymous maps: the key (made of concatenations) followed by the keyword map followed by the map reference (which is the definition in the anonymous case). dnat [to] will be the consumer of the resulting ip:port value (only when a match happened).
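Put together, a minimal table using the named map could look like this (element values taken from the question; the priority and chain name are illustrative, and the interval flag on a concatenated NAT map needs a recent nftables and kernel):

```
table ip nat {
    map one2one_dnat {
        # dst_addr . src_addr . proto . dst_port : dnat_to . dnat_to_port
        type ipv4_addr . ipv4_addr . inet_proto . inet_service : ipv4_addr . inet_service
        flags interval
        elements = { 42.42.42.5 . 0.0.0.0/0 . tcp . 8888 : 10.42.42.5 . 8888 }
    }
    chain prerouting {
        type nat hook prerouting priority dstnat; policy accept;
        dnat to ip daddr . ip saddr . ip protocol . th dport map @one2one_dnat
    }
}
```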
Further notes.
For other readers, this also requires recent enough nftables support, both in userland and kernel parts, for the purpose of doing NAT: nftables 0.9.4 and Linux kernel 5.6:
NAT mappings with concatenations. This allows you to specify the address and port to be used in the NAT mangling from maps, eg.
nft add rule ip nat pre dnat ip addr . port to ip saddr map { 1.1.1.1 : 2.2.2.2 . 30 }
You can also use this new feature with named sets:
nft add map ip nat destinations { type ipv4_addr . inet_service : ipv4_addr . inet_service \; }
nft add rule ip nat pre dnat ip addr . port to ip saddr . tcp dport map @destinations
Replacing the type syntax with a typeof syntax in the concatenations (usually preferable for readability, and to avoid having to figure out all the involved type names, some of them poorly documented) doesn't appear to work for OP's case: the use of ip protocol and th appears to clash between the map and the rule, at least with nftables 1.0.7 and kernel 6.1.x. So better keep type rather than typeof here, or else split the map into two separate maps, one for UDP and one for TCP, to avoid the clash.
Splitting would also probably be needed for a similar IPv6 setup, since ip6 nexthdr can't be used safely to replace ip protocol, and the correct replacement, meta l4proto, won't play along either.
| Nftables: Dnat with source address restriction and just one map |
1,648,211,276,000 |
My understanding from reading is that once a rule matches, no further rules are evaluated. However, my experience with the following example seems to indicate otherwise. I'm looking for some clarity on this.
table netdev retag {
chain tagin {
type filter hook ingress devices = $lan priority -149; policy accept;
ip saddr 10.0.0.0/8 ip daddr 10.0.0.0/8 ip dscp set af21 counter;
ip saddr 10.0.1.0/24 ip daddr 10.0.2.0/24 ip dscp set af31 counter;
}
}
If the above statement were true, then "nft list ruleset" should show hits against the first rule and 0 hits against the second rule, since the first rule would always match before the second. However, I see hits against both. Am I missing something silly here?
|
Some actions, such as accept and drop, prevent further rule processing. In the documentation such actions are called "terminal statements".
Other actions, such as counter and log, perform their task and continue to the next rule.
In this slightly modified example I added accept to the first rule, this will prevent further evaluation.
table netdev retag {
chain tagin {
type filter hook ingress devices = $lan priority -149; policy accept;
ip saddr 10.0.0.0/8 ip daddr 10.0.0.0/8 ip dscp set af21 counter accept;
ip saddr 10.0.1.0/24 ip daddr 10.0.2.0/24 ip dscp set af31 counter;
}
}
| Questions about nftables and rule processing order |
1,662,510,379,000 |
On Linux, I want to drop all packets that contain any obsolete tcp options. By obsolete options, I mean all those with a tcp option kind number above 8. How can I do this using nftables?
For example, if there is a way to check whether a tcp packet has an option with a given numeric kind in nftables, that would work. If nftables does not support this, can I use tc or another standard linux utility to do it?
|
Many keywords in nftables just represent a constant. That's the case for tcp options (UPDATE: but only since nftables 0.9.8).
Here's an excerpt for tcp options:
EXTENSION HEADER EXPRESSIONS
Extension header expressions refer to data from variable-sized
protocol headers, such as IPv6 extension headers and TCP options.
[...]
tcp option
{eol | noop | maxseg | window | sack-permitted | sack | sack0 | sack1
| sack2 | sack3 | timestamp} tcp_option_field
TCP Options
Keyword      Description                  TCP option fields
eol          End of option list           kind
noop         1 Byte TCP No-op options     kind
maxseg       TCP Maximum Segment Size     kind, length, size
window       TCP Window Scaling           kind, length, count
[...]
timestamp    TCP Timestamps               kind, length, tsval, tsecr
[...]
Boolean specification
The following expressions support a boolean comparison:
Expression   Behaviour
fib          Check route existence.
exthdr       Check IPv6 extension header existence.
tcp option   Check TCP option header existence.
[...]
# match if TCP timestamp option is present
filter input tcp option timestamp exists
Here eol stands for 0, noop for 1, ... timestamp for 8, etc.
UPDATE: since version 0.9.8 it's possible to specify an arbitrary numeric value instead of a keyword:
Add raw tcp option match support
... tcp option @42,16,4
where you can specify @kind,offset,length
Allow to check for the presence of any tcp option
... tcp option 42 exists
So with this kind of rule:
nft add table t
nft add chain t c '{ type filter hook input priority 0; policy accept; }'
nft add rule t c tcp option 8 exists drop
nft add rule t c tcp option 9 exists drop
nft add rule t c tcp option 10 exists drop
[...]
nft add rule t c tcp option 254 exists drop
one can filter all options with a kind of 8 (included to show later that 8 means timestamp) or above (the option kind is an 8-bit field).
Known values (here only timestamp) are displayed with a keyword, else they stay as number:
# nft -a --debug=netlink list ruleset
ip t c 2
[ exthdr load tcpopt 1b @ 8 + 0 present => reg 1 ]
[ cmp eq reg 1 0x00000001 ]
[ immediate reg 0 drop ]
ip t c 3 2
[ exthdr load tcpopt 1b @ 9 + 0 present => reg 1 ]
[ cmp eq reg 1 0x00000001 ]
[ immediate reg 0 drop ]
ip t c 4 3
[ exthdr load tcpopt 1b @ 10 + 0 present => reg 1 ]
[ cmp eq reg 1 0x00000001 ]
[ immediate reg 0 drop ]
ip t c 5 4
[ exthdr load tcpopt 1b @ 254 + 0 present => reg 1 ]
[ cmp eq reg 1 0x00000001 ]
[ immediate reg 0 drop ]
table ip t { # handle 21
chain c { # handle 1
type filter hook input priority filter; policy accept;
tcp option timestamp exists drop # handle 2
tcp option 9 exists drop # handle 3
tcp option 10 exists drop # handle 4
tcp option 254 exists drop # handle 5
}
}
I didn't find a way to factor this into sets or maps, nor to do comparisons on the option kind, which would have simplified this: the grammar requires the whole tcp option foo expression, not just tcp option. As far as I understand, this leads to useless features like:
Check if option timestamp which means kind 8 has a kind value of... 8. (always true if it exists, so equivalent to tcp option timestamp exists):
nft add rule t c tcp option timestamp kind == 8
or check if option timestamp which means kind 8 has a kind value greater than 8 (always false):
nft add rule t c 'tcp option timestamp kind > 8'
Considering that a tcp option is part of a list of options, I can understand implementation isn't that easy because it has to loop through all present options to select one. Which would be chosen when two could match?
Note: a few options above 8 are worth keeping. For example tcp option kind 30 is used for MPTCP, which was recently added to mainline Linux, while tcp option kinds 6 (echo) and 7 (echo reply) are obsolete.
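So a sketch of the complete filter requested in the question, generated in a loop with one rule per kind and keeping MPTCP's kind 30, might look like this (assumes nftables >= 0.9.8 as explained above; untested):

```shell
nft add table t
nft add chain t c '{ type filter hook input priority 0; policy accept; }'
for kind in $(seq 9 254); do
    [ "$kind" -eq 30 ] && continue   # keep MPTCP (kind 30)
    nft add rule t c tcp option "$kind" exists drop
done
```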
| blocking obsolete tcp options |
1,662,510,379,000 |
There is the requirement to set up a stateless NAT for two UDP connections from a physical network adapter in global network namespace via a linked pair of virtual network adapters to a service running in a special network namespace. This should be done on a CPU (Intel Atom) in an industrial device running Linux (Debian) with kernel 5.9.7.
Here is a scheme of the network configuration which should be set up:
===================== =====================================================
|| application CPU || || communication CPU ||
|| || || ||
|| || || global namespace | nsprot1 namespace ||
|| || || | ||
|| enp4s0 || || enp1s0 | enp3s0 ||
|| 0.0.0.5/30 ========== 0.0.0.6/30 | 192.168.2.15/24 =======
|| || || | ||
|| UDP port 50001 || || UDP port 50001 for sv1 | TCP port 2404 for sv2 ||
|| UDP port 50002 || || UDP port 50002 for sv1 | ||
|| UDP port 53401 || || UDP port 50401 for sv1 | ||
|| UDP port 53402 || || UDP port 50402 for sv1 | ||
|| || || | ||
|| || || vprot0 | vprot1 ||
|| || || 0.0.0.16/31 --- 0.0.0.17/31 ||
|| || || | ||
|| UDP port 53404 || || UDP port 50404 for sv2 - UDP port 50404 for sv2 ||
|| UDP port 53441 || || UDP port 50441 for sv2 - UDP port 50441 for sv2 ||
===================== =====================================================
The application CPU always starts first and opens several UDP ports for communication with service sv1 and service sv2 on the communication CPU via its physical network adapter enp4s0 with the IP address 0.0.0.5.
The output of ss --ipv4 --all --numeric --processes --udp executed on application CPU is:
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
udp UNCONN 0 0 0.0.0.0:50001 0.0.0.0:* users:(("sva",pid=471,fd=5))
udp UNCONN 0 0 0.0.0.0:50002 0.0.0.0:* users:(("sva",pid=471,fd=6))
udp ESTAB 0 0 0.0.0.5:53401 0.0.0.6:50401 users:(("sva",pid=471,fd=12))
udp ESTAB 0 0 0.0.0.5:53402 0.0.0.6:50402 users:(("sva",pid=471,fd=13))
udp ESTAB 0 0 0.0.0.5:53404 0.0.0.6:50404 users:(("sva",pid=471,fd=19))
udp ESTAB 0 0 0.0.0.5:53441 0.0.0.6:50441 users:(("sva",pid=471,fd=21))
The communication CPU starts second and has finally two services running:
sv1 in global namespace and
sv2 in special network namespace nsprot1.
The output of ss --ipv4 --all --numeric --processes --udp executed in global namespace of the communication CPU is:
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
udp UNCONN 0 0 0.0.0.0:50001 0.0.0.0:* users:(("sv1",pid=812,fd=18))
udp UNCONN 0 0 0.0.0.6:50002 0.0.0.0:* users:(("sv1",pid=812,fd=17))
udp UNCONN 0 0 0.0.0.6:50401 0.0.0.0:* users:(("sv1",pid=812,fd=13))
udp UNCONN 0 0 0.0.0.6:50402 0.0.0.0:* users:(("sv1",pid=812,fd=15))
The output of ip netns exec nsprot1 ss --ipv4 --all --numeric --processes --udp (nsprot1 namespace) is:
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
udp ESTAB 0 0 0.0.0.17:50404 0.0.0.5:53404 users:(("sv2",pid=2421,fd=11))
udp ESTAB 0 0 0.0.0.17:50441 0.0.0.5:53441 users:(("sv2",pid=2421,fd=12))
Forwarding for IPv4 is enabled in sysctl in general and for all physical network adapters.
Only broadcast and multicast forwarding are disabled, as they are not needed or wanted.
The network configuration is set up on communication CPU with the following commands:
ip netns add nsprot1
ip link add vprot0 type veth peer name vprot1 netns nsprot1
ip link set dev enp3s0 netns nsprot1
ip address add 0.0.0.16/31 dev vprot0
ip netns exec nsprot1 ip address add 0.0.0.17/31 dev vprot1
ip netns exec nsprot1 ip address add 192.168.2.15/24 dev enp3s0
ip link set dev vprot0 up
ip netns exec nsprot1 ip link set vprot1 up
ip netns exec nsprot1 ip link set enp3s0 up
ip netns exec nsprot1 ip route add 0.0.0.4/30 via 0.0.0.16 dev vprot1
The network address translation is set up with the following commands:
nft add table ip prot1
nft add chain ip prot1 prerouting '{ type nat hook prerouting priority -100; policy accept; }'
nft add rule prot1 prerouting iif enp1s0 udp dport '{ 50404, 50441 }' dnat 0.0.0.17
nft add chain ip prot1 postrouting '{ type nat hook postrouting priority 100; policy accept; }'
nft add rule prot1 postrouting ip saddr 0.0.0.16/31 oif enp1s0 snat 0.0.0.6
The output of nft list table ip prot1 is:
table ip prot1 {
chain prerouting {
type nat hook prerouting priority -100; policy accept;
iif "enp1s0" udp dport { 50404, 50441 } dnat to 0.0.0.17
}
chain postrouting {
type nat hook postrouting priority 100; policy accept;
ip saddr 0.0.0.16/31 oif "enp1s0" snat to 0.0.0.6
}
}
There is defined additionally in global namespace only the table inet filter with:
table inet filter {
chain input {
type filter hook input priority 0; policy accept;
}
chain forward {
type filter hook forward priority 0; policy accept;
}
chain output {
type filter hook output priority 0; policy accept;
}
}
That NAT configuration is a stateful NAT. It works for the UDP channel with the port numbers 50404 and 53404 because sv2, started last, opens 0.0.0.17:50404 and sends a UDP packet to 0.0.0.5:53404, on which source network address translation is applied in the postrouting hook for enp1s0 in the global namespace. The service sva of the application CPU sends back a UDP packet from 0.0.0.5:53404 to 0.0.0.6:50404 which reaches 0.0.0.17:50404. That packet does not pass through the prerouting rule for dnat to 0.0.0.17; it is sent directly to 0.0.0.17 via connection tracking, as I found out later.
But this stateful NAT configuration does not work for the UDP channel with the port numbers 50441 and 53441. The reason appears to be that sva on the application CPU already sends several UDP packets from 0.0.0.5:53441 to 0.0.0.6:50441 before the service sv2 is started at all and the destination port is opened in network namespace nsprot1. ICMP reports that the destination port is unreachable, which is no surprise given that the destination port is not yet opened at all. It is unfortunately not possible to hold back the UDP sends in service sva until service sv2 has started and opened the two UDP ports. Service sva sends UDP packets from 0.0.0.5:53441 to 0.0.0.6:50441 periodically, and sometimes spontaneously when triggered, independent of connection state.
So the problem with this configuration seems to be the stateful NAT, as the dnat rule in the prerouting hook is still not applied once the destination port is finally opened in network namespace nsprot1. The UDP packets continue to be routed to 0.0.0.6:50441, which results in the packets being dropped and ICMP destination-port-unreachable being returned.
Therefore the solution is maybe the usage of a stateless NAT. So there are executed additionally the commands:
nft add table ip raw
nft add chain ip raw prerouting '{ type filter hook prerouting priority -300; policy accept; }'
nft add rule ip raw prerouting udp dport '{ 50404, 50441, 53404, 53441 }' notrack
But the result was not as expected. The prerouting rule to change the destination address from 0.0.0.6 to 0.0.0.17 for UDP packets arriving on interface enp1s0 with destination port 50404 or 50441 is still not applied.
Next I executed:
nft add table ip filter
nft add chain filter trace_in '{ type filter hook prerouting priority -301; }'
nft add rule filter trace_in meta nftrace set 1
nft add chain filter trace_out '{ type filter hook postrouting priority 99; }'
nft add rule filter trace_out meta nftrace set 1
nft monitor trace
I looked at the trace and could see that the notrack rule is taken into account, but then the UDP packets with destination port 50441 are passed directly to the input hook. I don't know why.
I studied many, many hours very carefully following pages:
nft manual (read several times completely from top to bottom)
nftables wiki (most pages completely)
nftables on ArchWiki
and many, many other web pages regarding the usage of network namespaces and network address translation.
I tried many different configurations, used Wireshark, used nft monitor trace, but I cannot find a solution which works for the UDP channel with the ports 50441 and 53441 when sva sends UDP packets before the destination port 0.0.0.17:50441 is opened at all.
The stateful NAT configuration works if I manually terminate the service sva on the application CPU, set up the network configuration on the communication CPU (starting the two services sv1 and sv2), and finally start the service sva again manually once all UDP ports are already opened on the communication CPU. But this start order cannot be guaranteed in the industrial device. The application service sva must run independently of whether the communication services are ready for communication or not.
Which commands (chains/rules) are necessary to have a stateless NAT for the two UDP channels 0.0.0.5:53404 - 0.0.0.17:50404 and 0.0.0.5:53441 - 0.0.0.17:50441, independent of the open state of the destination ports and of which service sends the first UDP packet to the other?
PS: Depending on the configuration of the device, the service sv2 can also be started in the global namespace using a different physical network adapter, in which case no NAT and no network namespace are necessary. In that network configuration there is absolutely no problem with the UDP communication between the three services.
|
I finally found the solution by myself after many, many hours of reading documentation, tutorials and suggestions on various web pages, making lots of trials, and doing deep and comprehensive network and netfilter monitoring and analysis.
nft add table ip prot1
nft add chain ip prot1 prerouting '{ type filter hook prerouting priority -300; policy accept; }'
nft add rule ip prot1 prerouting iif enp1s0 udp dport '{ 50404, 50441 }' ip daddr set 0.0.0.17 notrack accept
nft add rule ip prot1 prerouting iif vprot0 ip saddr 0.0.0.17 notrack accept
nft add chain ip prot1 postrouting '{ type filter hook postrouting priority 100; policy accept; }'
nft add rule ip prot1 postrouting oif enp1s0 ip saddr 0.0.0.17 ip saddr set 0.0.0.6 accept
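For reference, the same ruleset written as a single declarative file loadable with nft -f (equivalent to the commands above; not independently tested in this form):

```
table ip prot1 {
    chain prerouting {
        type filter hook prerouting priority -300; policy accept;
        iif "enp1s0" udp dport { 50404, 50441 } ip daddr set 0.0.0.17 notrack accept
        iif "vprot0" ip saddr 0.0.0.17 notrack accept
    }
    chain postrouting {
        type filter hook postrouting priority 100; policy accept;
        oif "enp1s0" ip saddr 0.0.0.17 ip saddr set 0.0.0.6 accept
    }
}
```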
The netfilter hooks page should be opened and read first to understand the following explanation.
Explanation for the used commands:
A netfilter table is added for protocol ip (IPv4) with name prot1.
A chain is added to table prot1 with name prerouting of type filter for the hook prerouting with priority -300. It is important to use a priority lower than -200 to be able to bypass connection tracking (conntrack). That rules out using a chain of type nat for the destination network address translation, since nat chains are only run after conntrack has processed the packet.
A filter rule is added to table prot1, chain prerouting, which is applied only on IPv4 packets of protocol type udp received on input interface enp1s0 with destination port 50404 or 50441; it modifies the IP destination address of the packet from 0.0.0.6 to 0.0.0.17 and disables connection tracking for this UDP packet. The verdict is specified explicitly with accept, although not strictly necessary, to pass the UDP packet received from service sva of the application CPU for service sv2 of the communication CPU as fast as possible to the next hook, which in this case is the forward hook.
A second filter rule is added to table prot1, chain prerouting, which is applied on all IPv4 packets received on input interface vprot0, independent of protocol type (udp, icmp, ...), with IP source address 0.0.0.17; it disables connection tracking for these packets. It would of course also be possible to filter just on UDP packets with appropriate source or destination port numbers, but this additional limitation is not needed here, and this rule also covers ICMP packets sent back from 0.0.0.17 to 0.0.0.5 when the destination port is not yet opened because service sv2 is not running at the moment. The verdict is again specified explicitly with accept instead of the implicit default continue, to pass the packet as fast as possible to the forward hook.
A second chain is added to table prot1 with name postrouting of type filter for the hook postrouting with priority 100. It is important to use a chain of type filter and not of type nat to be able to apply a source address translation on the UDP (and ICMP) packets which bypassed the connection tracking.
A filter rule is added to the second chain postrouting of table prot1, which is applied only on IPv4 packets sent on output interface enp1s0, independent of protocol type (udp, icmp, ...), with source address 0.0.0.17; it modifies the IP source address of the packet from 0.0.0.17 to 0.0.0.6. The verdict is once more specified explicitly with accept, although not strictly necessary, to pass the UDP packet from service sv2 of the communication CPU to service sva of the application CPU as fast as possible. This rule also changes the source address to 0.0.0.6 of ICMP destination-unreachable packets sent from 0.0.0.17 while service sv2 is not yet running. So the application CPU never notices that it communicates with a different interface than 0.0.0.6 for the two UDP channels, which was a second (though less important) requirement to fulfill.
It was hard work to find out that a stateless network address translation was needed for this very special network configuration and kind of communication between the services sva and sv2, and that the NAT must be done without using the nat hook.
| How to set up stateless NAT for two UDP connections from a global network to special network namespace? |
1,662,510,379,000 |
I need some clarification. If I add this rule
nft add rule ip filter INPUT ip daddr 127.0.0.1 drop
will nftables ignore it and never apply it because it is at the end of the rules?
|
So I found the answer to my issue.
It is about chains: in nft terminology, a rule must be added to a specific chain; otherwise nft rejects the rule as nonsense.
So to use the rule nft add rule ip filter INPUT ip daddr 127.0.0.1 drop,
I first had to have created a chain named INPUT. Since nft is case sensitive, INPUT and input can be two different chains.
Rules, as far as I understood, take effect right after they are added to a chain.
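For example, the rule from the question only loads once the table and chain exist; the hook specification below is one plausible choice for an input filter, not necessarily what the original setup used:

```
nft add table ip filter
nft add chain ip filter INPUT '{ type filter hook input priority 0; policy accept; }'
nft add rule ip filter INPUT ip daddr 127.0.0.1 drop
```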
| NFTable clarification |
1,662,510,379,000 |
I have a problem with my nftables setup.
I have two tables, each one has a chain with the same hook but a different name and priority.
The tables are in different files which are loaded by an include argument.
Because of the priority,
I would think that the VPN-POSTROUTING chain will be executed before the INTERNET chain.
But in my setup, the INTERNET chain is executed first.
table ip nat {
chain INTERNET {
type nat hook postrouting priority srcnat + 1; policy accept;
oifname br2 masquerade
}
}
table ip vpn {
chain VPN-POSTROUTING {
type nat hook postrouting priority srcnat - 1; policy accept;
oifname br2 ip saddr 10.0.0.0/24 ip daddr 192.168.0.0/24 accept
}
}
where is my mistake?
Edit:
I changed the rules and add all chains to the same table,
with the same result.
In the next step, I followed A.B.'s advice and add counters and logs to the rules.
The order of the chains corresponds to the priority, but the accept rule for the VPN is not triggered.
When I add the VPN accept rule to the INTERNET chain, right before the masquerade rule, it works like expected.
|
Historically there used to be only one NAT chain in a given hook (prerouting, input, output, ...). Executing a nat statement, or simply accept-ing the packet, is terminal for the chain; with a single chain this also ended processing within the hook. With nftables allowing more than one chain in the same hook, terminating a chain just continues to the next chain. So if the first chain doesn't do anything (by accept-ing), the next chain gets its chance to do it instead, which is not what is intended.
To solve this, the first chain (or any other chain) can leave a message passed to each next chain so it can act upon it: set a mark (an arbitrary value) for the next chain to change its behavior.
Instead of accept, which has zero effect here (leaving the chain VPN-POSTROUTING empty would also just execute the default policy, accept), set a mark. So replace the rule in VPN-POSTROUTING with this one instead:
nft add rule ip vpn VPN-POSTROUTING oifname br2 ip saddr 10.0.0.0/24 ip daddr 192.168.0.0/24 mark set 0xdeaf
When this mark is set, it can then be used on the other chain to change behavior by not executing remaining rules. Insert this rule first in ip nat INTERNET:
nft insert rule ip nat INTERNET mark 0xdeaf accept
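With both changes applied, the two tables would end up looking like this (the mark value 0xdeaf is arbitrary, as above):

```
table ip vpn {
    chain VPN-POSTROUTING {
        type nat hook postrouting priority srcnat - 1; policy accept;
        oifname br2 ip saddr 10.0.0.0/24 ip daddr 192.168.0.0/24 mark set 0xdeaf
    }
}
table ip nat {
    chain INTERNET {
        type nat hook postrouting priority srcnat + 1; policy accept;
        mark 0xdeaf accept
        oifname br2 masquerade
    }
}
```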
| nftables table and chain priority |
1,662,510,379,000 |
I have a default gateway with IP address 192.168.1.1 and MAC address 5c:77:76:6e:0d:7b. It is my only wi fi modem router from which I receive internet.
But in input nftables logs I saw another one router with the same IP address and different MAC address 5c:77:77:6e:0d:7b. This unknown router sends pages which I didn't open (spam).
I tried two ways to solve this problem:
Set up static arp cache. Now my arp cache looks like this:
arp -a mw40.home (192.168.1.1) at 5c:77:76:6e:0d:7b [ether] PERM on wlo1
Drop packets from illegal router in etc/ nftables.conf:
ether saddr 5c:77:77:6e:0d:7b counter drop;
But after the second step, I lost my internet connection.
My questions:
What is happening in this piece of log (below)?
How are the two routers communicating with each other?
How I can remove from my network the illegal router with MAC address 5c:77:77:6e:0d:7b?
Sep 1 15:16:03 flower kernel: [ 133.359821] New Input packets: IN=wlo1 OUT= MAC=b8:81:98:cb:ef:a8:5c:77:77:6e:0d:7b:08:00 SRC=85.159.224.52 DST=192.168.1.2 LEN=76 TOS=0x18 PREC=0x60 TTL=49 ID=4873 DF PROTO=UDP SPT=123 DPT=47244 LEN=56
Sep 1 15:16:11 flower kernel: [ 141.053122] New Input packets: IN=wlo1 OUT= MAC=b8:81:98:cb:ef:a8:5c:77:77:6e:0d:7b:08:00 SRC=192.168.1.1 DST=192.168.1.2 LEN=185 TOS=0x00 PREC=0x00 TTL=64 ID=32498 DF PROTO=UDP SPT=53 DPT=56881 LEN=165
Sep 1 15:16:12 flower kernel: [ 141.660330] New Input packets: IN=wlo1 OUT= MAC=b8:81:98:cb:ef:a8:5c:77:77:6e:0d:7b:08:00 SRC=192.168.1.1 DST=192.168.1.2 LEN=111 TOS=0x00 PREC=0x00 TTL=64 ID=32521 DF PROTO=UDP SPT=53 DPT=36247 LEN=91
Sep 1 15:16:12 flower kernel: [ 141.694208] New Input packets: IN=wlo1 OUT= MAC=b8:81:98:cb:ef:a8:5c:77:77:6e:0d:7b:08:00 SRC=172.67.68.8 DST=192.168.1.2 LEN=52 TOS=0x18 PREC=0x60 TTL=56 ID=0 DF PROTO=TCP SPT=443 DPT=50048 WINDOW=65535 RES=0x00 ACK SYN URGP=0
Sep 1 15:16:12 flower kernel: [ 141.722991] New Input packets: IN=wlo1 OUT= MAC=b8:81:98:cb:ef:a8:5c:77:77:6e:0d:7b:08:00 SRC=192.168.1.1 DST=192.168.1.2 LEN=147 TOS=0x00 PREC=0x00 TTL=64 ID=32522 DF PROTO=UDP SPT=53 DPT=51721 LEN=127
Sep 1 15:16:12 flower kernel: [ 141.743011] New Input packets: IN=wlo1 OUT= MAC=b8:81:98:cb:ef:a8:5c:77:76:6e:0d:7b:08:00 SRC=172.67.68.8 DST=192.168.1.2 LEN=40 TOS=0x18 PREC=0x60 TTL=56 ID=3764 DF PROTO=TCP SPT=443 DPT=50048 WINDOW=66 RES=0x00 ACK URGP=0
Sep 1 15:16:12 flower kernel: [ 141.743028] New Input packets: IN=wlo1 OUT= MAC=b8:81:98:cb:ef:a8:5c:77:76:6e:0d:7b:08:00 SRC=172.67.68.8 DST=192.168.1.2 LEN=2840 TOS=0x18 PREC=0x60 TTL=56 ID=3765 DF PROTO=TCP SPT=443 DPT=50048 WINDOW=66 RES=0x00 ACK PSH URGP=0
More logs from router with illegal MAC address here:
nftables logs from illegal router
|
Routers often have multiple virtual network interfaces, and the MAC addresses assigned to these are derived by flipping some bits of the hardware MAC address.
So the "illegal router" you see is probably just another virtual interface of your "legal" router. And if you block its packets (which are legitimate packets), then your "legal" router will stop working.
If there really was an "illegal router" somewhere in your home, you should be able to physically see and touch it, shouldn't you?
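As a quick sanity check, the two sender MACs that appear in the log excerpts above (5c:77:77:6e:0d:7b and 5c:77:76:6e:0d:7b) differ by exactly one bit, which is consistent with one device deriving virtual-interface addresses from its hardware address rather than with two unrelated routers:

```python
# Compare the two MAC addresses seen in the kernel log. If they differ
# by only a bit or two, they are very likely interfaces of the same device.
def mac_to_int(mac: str) -> int:
    return int(mac.replace(":", ""), 16)

a = mac_to_int("5c:77:77:6e:0d:7b")
b = mac_to_int("5c:77:76:6e:0d:7b")

# XOR leaves only the differing bits set; count them.
diff_bits = bin(a ^ b).count("1")
print(diff_bits)  # -> 1: a single flipped bit, not an unrelated device
```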
| Delete illegal router from network |
1,662,510,379,000 |
In nftables I have this
table inet my_table {
chain badips {
ip saddr 185.165.190.17 counter packets 0 bytes 0 drop
}
type filter hook input priority filter; policy drop;
# Block badips
counter packets 0 bytes 0 jump badips
}
The plan is to put a long list of IPs in the badips chain. How can I add only unique IPs? Is there a way to avoid adding the same IP twice?
Now I run
nft add rule inet my_table badips "ip saddr <IP> counter packets 0 bytes 0 drop"
Looking at nft rule, I see there is no nft create rule subcommand, unlike for chains.
|
Don't add one rule per ip. Just create a single rule, and use a set. E.g.:
table inet my_table {
set badips_v4 {
type ipv4_addr
}
set badips_v6 {
type ipv6_addr
}
chain badips {
type filter hook input priority filter; policy accept;
counter ip saddr @badips_v4 drop
counter ip6 saddr @badips_v6 drop
}
}
With these rules in place, you block an ip by adding it to the appropriate set:
nft add element inet my_table badips_v4 { 185.165.190.17 }
If you run that statement multiple times, it doesn't add multiple items to the set -- a set is a collection of unique items, so if an item already exists in the set, adding it again is a no-op.
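If you prefer to keep the blocklist in a ruleset file instead of adding elements at runtime, a set can also be declared with initial elements (a sketch; the addresses here are just examples):

```
table inet my_table {
	set badips_v4 {
		type ipv4_addr
		elements = { 185.165.190.17, 203.0.113.25 }
	}
}
```

Loading this file with nft -f replaces the set contents atomically, so duplicates in the source file are harmless for the same reason: each element can exist in the set only once.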
| How to avoid inserting repeated rules in nftables
1,662,510,379,000 |
On the link below is an image explaining packet flow across chains in nftables
Netfilter hooks
I understand everything except one thing: the image does not explain at which stage the routing decision is made.
According to the image it should be done in 2 places:
After the prerouting hook but before the input and forward hooks
Before output hook when a packet leaves localhost process
The first point above makes sense because there are 2 possible hooks that follow prerouting. However, for point 2 it's not clear what routing decision is to be performed, since the only flow option from a local process is the output hook; there shouldn't be any routing decision to make, but the image says there is one.
This makes me believe that there is only one routing decision which is in point 1 above.
Why does the image specify routing decision node between local process and output hook?
SNAT is done in postrouting hook which according to docs "sees all packets after routing, just before they leave the local system"
But the question is: which routing? According to the image there are 2 routing decision points.
And by the way, what is the "routing decision" node in the image? I don't think it's NAT, because NAT is done in the prerouting and postrouting hooks.
|
The image is right.
The routing decision is made only once: since a Local Process is either a starting point or a termination point, a packet flow goes through only one routing decision.
For your second point, the flow will go through:
local process
routing decision
output
postrouting
egress
driver tx
There is nothing prior to this, so it will only go through one routing decision.
In this case, "routing decision" means choosing to which IP address, and using which interface, you will send your packet.
If I want to send a packet to 1.1.1.1, it would mean doing:
$ ip r get 1.1.1.1
1.1.1.1 via 192.168.1.1 dev wlp0s20f3 src 192.168.1.254 uid 1000
cache
So I will send my packet (srcip:192.168.1.254, dstip:1.1.1.1) to 192.168.1.1 on my Wi-Fi interface wlp0s20f3.
Once I have chosen this, I could change the source IP in the IP header if I want to do SNAT.
| nftables routing decision step(s)
1,662,510,379,000 |
I'm following the official Debian Wiki tutorial for setting up a VPN server on Debian 11.
Everything worked well except for the paragraph Forward traffic to provide access to the Internet at the end.
The following lines do not work:
IF_MAIN=eth0
IF_TUNNEL=tun0
YOUR_OPENVPN_SUBNET=10.9.8.0/24
#YOUR_OPENVPN_SUBNET=10.8.0.0/16 # if using server.conf from sample-server-config
nft add rule ip filter FORWARD iifname "$IF_MAIN" oifname "$IF_TUNNEL" ct state related,established counter accept
nft add rule ip filter FORWARD oifname "$IF_MAIN" ip saddr $YOUR_OPENVPN_SUBNET counter accept
nft add rule ip nat POSTROUTING oifname "$IF_MAIN" ip saddr $YOUR_OPENVPN_SUBNET counter masquerade
Here is the output :
root@server:/home/user# nft add rule ip filter FORWARD iifname "$IF_MAIN" oifname "$IF_TUNNEL" ct state related,established counter accept
Error: Could not process rule: No such file or directory
add rule ip filter FORWARD iifname enp1s0 oifname tun0 ct state related,established counter accept
^^^^^^
I get similar error with the 3 commands.
Am I missing something? Or is something missing in the tutorial?
|
It looks like Debian Wiki's instructions have been written to build on top of the compatibility tables and chains created by iptables-nft (or possibly the default stub /etc/nftables.conf included in the nftables package), which is the default version of iptables on Debian 10 and newer.
If you are starting from a completely blank nftables configuration, you must first create the tables and chains before adding rules into them:
IF_MAIN=eth0
IF_TUNNEL=tun0
YOUR_OPENVPN_SUBNET=10.9.8.0/24
# Create a rules table for IPv4, named "custom":
nft create table ip custom
# Create a forward filter chain with the standard priority and
# iptables-resembling name "FORWARD", into the "custom" table
# created above:
# (priority filter == priority 0, see "man nft")
nft add chain ip custom FORWARD { type filter hook forward priority filter\; }
# Create a NAT filter chain with the iptables-like name "POSTROUTING" too:
nft add chain ip custom POSTROUTING { type nat hook postrouting priority srcnat\; }
# now you can start adding your filter rules
nft add rule ip custom FORWARD iifname "$IF_MAIN" oifname "$IF_TUNNEL" ct state related,established counter accept
nft add rule ip custom FORWARD oifname "$IF_MAIN" ip saddr $YOUR_OPENVPN_SUBNET counter accept
nft add rule ip custom POSTROUTING oifname "$IF_MAIN" ip saddr $YOUR_OPENVPN_SUBNET counter masquerade
This places all your custom rules into a single table named custom. If you later add some other software that creates nftables rules of their own, they are likely to use their own table(s), which should remove the possibility of them wiping out your custom rules by accident. You'll just need to review the hook priorities to ensure a sensible processing order of the rule chains in different tables, and adjust if necessary.
Note: custom, FORWARD and POSTROUTING here are just names you could change to whatever you want, while everything else has a specific meaning.
This also allows you to delete or temporarily deactivate all your custom rules at once, with a single command:
nft add table ip custom { flags dormant\; } # temporarily disable
nft add table ip custom # re-enable
nft delete table ip custom # wipe custom rules completely
These might be helpful when troubleshooting your ruleset.
To make the rules persistent:
nft list ruleset > /etc/nftables.conf # save the current rules
systemctl enable nftables.service # enable loading rules at boot
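Put together, the saved /etc/nftables.conf would then look roughly like this (a sketch of the rules created above, not verbatim nft list ruleset output; note that flush ruleset clears every table at load time, so drop that line if other software manages its own tables):

```
#!/usr/sbin/nft -f
flush ruleset

table ip custom {
	chain FORWARD {
		type filter hook forward priority filter; policy accept;
		iifname "eth0" oifname "tun0" ct state related,established counter accept
		oifname "eth0" ip saddr 10.9.8.0/24 counter accept
	}
	chain POSTROUTING {
		type nat hook postrouting priority srcnat; policy accept;
		oifname "eth0" ip saddr 10.9.8.0/24 counter masquerade
	}
}
```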
| nftables "Error: Could not process rule: No such file or directory" when following the official Debian Wiki
1,662,510,379,000 |
I am trying to run the following nft commands:
nft add table netdev filter
nft -- add chain netdev filter input { type filter hook ingress device vlan100 priority -500 \; policy accept \; }
nft add rule netdev filter input ip daddr 198.18.0.0/24 udp dport 1234 counter drop
However, when I try to run the second command I keep getting this error:
Error: Could not process rule: No such file or directory
add chain netdev filter input { type filter hook ingress device vlan100 priority -500 ; policy accept ; }
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This error is too generic, and it's not clear what it means.
I checked to make sure the table netdev filter was created. (It is.)
I tried in interactive mode by just supplying add chain netdev filter input { type filter hook ingress device vlan100 priority -500 ; policy accept ; }, and I get this error
Error: Could not process rule: No such file or directory
add chain netdev filter input { type filter hook ingress device vlan100 priority -500 ; policy accept ; }
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Others suggested in this post that I need to "Enable Kernel Options", however it's not clear how to do this. How do I enable kernel options? I also read this post, and Question 2 on this post, but neither details how to actually enable these options. Do I need to recompile Linux from source with these options enabled? If so, how does one accomplish this? Can someone share a guide on how to do this, please?
I am running kernel version 5.4.0-91 on Ubuntu 20.04.
I am not sure if I am missing a kernel option. I can see the following when running cat /boot/config-5.4.0-91-generic | grep -i "Config_NF_\|Config_NETFILTER":
CONFIG_NETFILTER=y
CONFIG_NETFILTER_ADVANCED=y
CONFIG_NETFILTER_INGRESS=y
CONFIG_NETFILTER_NETLINK=m
CONFIG_NETFILTER_FAMILY_BRIDGE=y
CONFIG_NETFILTER_FAMILY_ARP=y
CONFIG_NETFILTER_NETLINK_ACCT=m
CONFIG_NETFILTER_NETLINK_QUEUE=m
CONFIG_NETFILTER_NETLINK_LOG=m
CONFIG_NETFILTER_NETLINK_OSF=m
CONFIG_NF_CONNTRACK=m
CONFIG_NF_LOG_COMMON=m
CONFIG_NF_LOG_NETDEV=m
CONFIG_NETFILTER_CONNCOUNT=m
CONFIG_NF_CONNTRACK_MARK=y
CONFIG_NF_CONNTRACK_SECMARK=y
CONFIG_NF_CONNTRACK_ZONES=y
# CONFIG_NF_CONNTRACK_PROCFS is not set
CONFIG_NF_CONNTRACK_EVENTS=y
CONFIG_NF_CONNTRACK_TIMEOUT=y
CONFIG_NF_CONNTRACK_TIMESTAMP=y
CONFIG_NF_CONNTRACK_LABELS=y
CONFIG_NF_CT_PROTO_DCCP=y
CONFIG_NF_CT_PROTO_GRE=y
CONFIG_NF_CT_PROTO_SCTP=y
CONFIG_NF_CT_PROTO_UDPLITE=y
CONFIG_NF_CONNTRACK_AMANDA=m
CONFIG_NF_CONNTRACK_FTP=m
CONFIG_NF_CONNTRACK_H323=m
CONFIG_NF_CONNTRACK_IRC=m
CONFIG_NF_CONNTRACK_BROADCAST=m
CONFIG_NF_CONNTRACK_NETBIOS_NS=m
CONFIG_NF_CONNTRACK_SNMP=m
CONFIG_NF_CONNTRACK_PPTP=m
CONFIG_NF_CONNTRACK_SANE=m
CONFIG_NF_CONNTRACK_SIP=m
CONFIG_NF_CONNTRACK_TFTP=m
CONFIG_NF_CT_NETLINK=m
CONFIG_NF_CT_NETLINK_TIMEOUT=m
CONFIG_NF_CT_NETLINK_HELPER=m
CONFIG_NETFILTER_NETLINK_GLUE_CT=y
CONFIG_NF_NAT=m
CONFIG_NF_NAT_AMANDA=m
CONFIG_NF_NAT_FTP=m
CONFIG_NF_NAT_IRC=m
CONFIG_NF_NAT_SIP=m
CONFIG_NF_NAT_TFTP=m
CONFIG_NF_NAT_REDIRECT=y
CONFIG_NF_NAT_MASQUERADE=y
CONFIG_NETFILTER_SYNPROXY=m
CONFIG_NF_TABLES=m
CONFIG_NF_TABLES_SET=m
CONFIG_NF_TABLES_INET=y
CONFIG_NF_TABLES_NETDEV=y
CONFIG_NF_DUP_NETDEV=m
CONFIG_NF_FLOW_TABLE_INET=m
CONFIG_NF_FLOW_TABLE=m
CONFIG_NETFILTER_XTABLES=m
CONFIG_NETFILTER_XT_MARK=m
CONFIG_NETFILTER_XT_CONNMARK=m
CONFIG_NETFILTER_XT_SET=m
CONFIG_NETFILTER_XT_TARGET_AUDIT=m
CONFIG_NETFILTER_XT_TARGET_CHECKSUM=m
CONFIG_NETFILTER_XT_TARGET_CLASSIFY=m
CONFIG_NETFILTER_XT_TARGET_CONNMARK=m
CONFIG_NETFILTER_XT_TARGET_CONNSECMARK=m
CONFIG_NETFILTER_XT_TARGET_CT=m
CONFIG_NETFILTER_XT_TARGET_DSCP=m
CONFIG_NETFILTER_XT_TARGET_HL=m
CONFIG_NETFILTER_XT_TARGET_HMARK=m
CONFIG_NETFILTER_XT_TARGET_IDLETIMER=m
CONFIG_NETFILTER_XT_TARGET_LED=m
CONFIG_NETFILTER_XT_TARGET_LOG=m
CONFIG_NETFILTER_XT_TARGET_MARK=m
CONFIG_NETFILTER_XT_NAT=m
CONFIG_NETFILTER_XT_TARGET_NETMAP=m
CONFIG_NETFILTER_XT_TARGET_NFLOG=m
CONFIG_NETFILTER_XT_TARGET_NFQUEUE=m
# CONFIG_NETFILTER_XT_TARGET_NOTRACK is not set
CONFIG_NETFILTER_XT_TARGET_RATEEST=m
CONFIG_NETFILTER_XT_TARGET_REDIRECT=m
CONFIG_NETFILTER_XT_TARGET_MASQUERADE=m
CONFIG_NETFILTER_XT_TARGET_TEE=m
CONFIG_NETFILTER_XT_TARGET_TPROXY=m
CONFIG_NETFILTER_XT_TARGET_TRACE=m
CONFIG_NETFILTER_XT_TARGET_SECMARK=m
CONFIG_NETFILTER_XT_TARGET_TCPMSS=m
CONFIG_NETFILTER_XT_TARGET_TCPOPTSTRIP=m
CONFIG_NETFILTER_XT_MATCH_ADDRTYPE=m
CONFIG_NETFILTER_XT_MATCH_BPF=m
CONFIG_NETFILTER_XT_MATCH_CGROUP=m
CONFIG_NETFILTER_XT_MATCH_CLUSTER=m
CONFIG_NETFILTER_XT_MATCH_COMMENT=m
CONFIG_NETFILTER_XT_MATCH_CONNBYTES=m
CONFIG_NETFILTER_XT_MATCH_CONNLABEL=m
CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=m
CONFIG_NETFILTER_XT_MATCH_CONNMARK=m
CONFIG_NETFILTER_XT_MATCH_CONNTRACK=m
CONFIG_NETFILTER_XT_MATCH_CPU=m
CONFIG_NETFILTER_XT_MATCH_DCCP=m
CONFIG_NETFILTER_XT_MATCH_DEVGROUP=m
CONFIG_NETFILTER_XT_MATCH_DSCP=m
CONFIG_NETFILTER_XT_MATCH_ECN=m
CONFIG_NETFILTER_XT_MATCH_ESP=m
CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=m
CONFIG_NETFILTER_XT_MATCH_HELPER=m
CONFIG_NETFILTER_XT_MATCH_HL=m
CONFIG_NETFILTER_XT_MATCH_IPCOMP=m
CONFIG_NETFILTER_XT_MATCH_IPRANGE=m
CONFIG_NETFILTER_XT_MATCH_IPVS=m
CONFIG_NETFILTER_XT_MATCH_L2TP=m
CONFIG_NETFILTER_XT_MATCH_LENGTH=m
CONFIG_NETFILTER_XT_MATCH_LIMIT=m
CONFIG_NETFILTER_XT_MATCH_MAC=m
CONFIG_NETFILTER_XT_MATCH_MARK=m
CONFIG_NETFILTER_XT_MATCH_MULTIPORT=m
CONFIG_NETFILTER_XT_MATCH_NFACCT=m
CONFIG_NETFILTER_XT_MATCH_OSF=m
CONFIG_NETFILTER_XT_MATCH_OWNER=m
CONFIG_NETFILTER_XT_MATCH_POLICY=m
CONFIG_NETFILTER_XT_MATCH_PHYSDEV=m
CONFIG_NETFILTER_XT_MATCH_PKTTYPE=m
CONFIG_NETFILTER_XT_MATCH_QUOTA=m
CONFIG_NETFILTER_XT_MATCH_RATEEST=m
CONFIG_NETFILTER_XT_MATCH_REALM=m
CONFIG_NETFILTER_XT_MATCH_RECENT=m
CONFIG_NETFILTER_XT_MATCH_SCTP=m
CONFIG_NETFILTER_XT_MATCH_SOCKET=m
CONFIG_NETFILTER_XT_MATCH_STATE=m
CONFIG_NETFILTER_XT_MATCH_STATISTIC=m
CONFIG_NETFILTER_XT_MATCH_STRING=m
CONFIG_NETFILTER_XT_MATCH_TCPMSS=m
CONFIG_NETFILTER_XT_MATCH_TIME=m
CONFIG_NETFILTER_XT_MATCH_U32=m
CONFIG_NF_DEFRAG_IPV4=m
CONFIG_NF_SOCKET_IPV4=m
CONFIG_NF_TPROXY_IPV4=m
CONFIG_NF_TABLES_IPV4=y
CONFIG_NF_TABLES_ARP=y
CONFIG_NF_FLOW_TABLE_IPV4=m
CONFIG_NF_DUP_IPV4=m
CONFIG_NF_LOG_ARP=m
CONFIG_NF_LOG_IPV4=m
CONFIG_NF_REJECT_IPV4=m
CONFIG_NF_NAT_SNMP_BASIC=m
CONFIG_NF_NAT_PPTP=m
CONFIG_NF_NAT_H323=m
CONFIG_NF_SOCKET_IPV6=m
CONFIG_NF_TPROXY_IPV6=m
CONFIG_NF_TABLES_IPV6=y
CONFIG_NF_FLOW_TABLE_IPV6=m
CONFIG_NF_DUP_IPV6=m
CONFIG_NF_REJECT_IPV6=m
CONFIG_NF_LOG_IPV6=m
CONFIG_NF_DEFRAG_IPV6=m
CONFIG_NF_TABLES_BRIDGE=m
CONFIG_NF_LOG_BRIDGE=m
CONFIG_NF_CONNTRACK_BRIDGE=m
Why can I not add the chain? What am I doing wrong?
UPDATE:
The problem was related to the interface name. I was following the instructions from this post (step 6) and did not realize they had created an interface named "vlan100". After closer inspection of the author's GitHub README.md (found here), it shows they created "vlan100" earlier in the setup. The fix for me was to attach the netdev chain to an existing network interface (primarily the one I was sending traffic to).
|
In the nftables wiki, the netdev family is described like this:
The netdev family is different from the others in that it is used to create base chains attached to a single network interface. Such base chains see all network traffic on the specified interface, with no assumptions about L2 or L3 protocols.
"No assumptions about L2 protocols" would probably mean that VLAN tags have not been processed yet at that point. That would seem to mean you can attach netdev chains to physical network devices only, not to VLAN devices.
Note: this is just my guess. But if it turns out to be correct, the error message would make sense: the vlan100 virtual network interface would not exist in the context of netdev nftables family, because it is not a physical network interface.
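If that guess is correct, attaching the chain to the underlying physical interface should work. An untested sketch, with eth0 standing in for whatever device carries the VLAN:

```
table netdev filter {
	chain input {
		type filter hook ingress device eth0 priority -500; policy accept;
		# untagged traffic
		ip daddr 198.18.0.0/24 udp dport 1234 counter drop
		# frames still carrying a VLAN tag need an explicit vlan match
		vlan id 100 ip daddr 198.18.0.0/24 udp dport 1234 counter drop
	}
}
```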
| nftables refuses to add chain |
1,662,510,379,000 |
I'm on CentOS 8 Stream, a rolling release, using kernel 4.18.0-301.1.el8.x86_64, and I find weird, inconsistent behavior.
Depending on how firewalld is started, it behaves differently.
When firewalld is started at boot, it adds LIBVIRT_* chains.
When firewalld is restarted with systemctl, all these chains disappear.
Why?
# nftables after the reboot
$ nft list tables
table ip filter
table ip nat
table ip mangle
table ip6 filter
table ip6 nat
table ip6 mangle
# nftables after the systemctl restart
$ nft list tables
table ip filter
table ip nat
table ip mangle
table ip6 filter
table ip6 nat
table ip6 mangle
table bridge filter
table ip security
table ip raw
table ip6 security
table ip6 raw
table bridge nat
table inet firewalld
table ip firewalld
table ip6 firewalld
$ sudo iptables -L -n
Chain INPUT (policy ACCEPT)
target prot opt source destination
LIBVIRT_INP all -- 0.0.0.0/0 0.0.0.0/0
Chain FORWARD (policy ACCEPT)
target prot opt source destination
LIBVIRT_FWX all -- 0.0.0.0/0 0.0.0.0/0
LIBVIRT_FWI all -- 0.0.0.0/0 0.0.0.0/0
LIBVIRT_FWO all -- 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
LIBVIRT_OUT all -- 0.0.0.0/0 0.0.0.0/0
Chain LIBVIRT_INP (1 references)
target prot opt source destination
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:53
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:53
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:67
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:67
Chain LIBVIRT_OUT (1 references)
target prot opt source destination
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:53
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:53
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:68
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:68
Chain LIBVIRT_FWO (1 references)
target prot opt source destination
ACCEPT all -- 192.168.122.0/24 0.0.0.0/0
REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable
Chain LIBVIRT_FWI (1 references)
target prot opt source destination
ACCEPT all -- 0.0.0.0/0 192.168.122.0/24 ctstate RELATED,ESTABLISHED
REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable
Chain LIBVIRT_FWX (1 references)
target prot opt source destination
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0
$ sudo systemctl restart firewalld
$ sudo iptables -L -n
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
That is very confusing and very hard to debug, especially after I start the openvpn-as service, which adds iptables chains of its own.
|
It's not precisely firewalld that's adding those chains. As the names of the chains suggest, it's libvirt adding them on top of your firewalld configuration.
CentOS 8 Stream's factory default configuration includes some preparations for running virtual machines (or for nested virtualization, if the CentOS 8 system itself is a VM). If you don't need to run VMs on your CentOS, you might want to disable those.
I don't have a CentOS 8 Stream test system at hand right now, but I think this should do it:
virsh net-destroy default # unconfigure what libvirtd has done for now
systemctl stop libvirtd.service # stop the service
systemctl disable libvirtd.service # persistently disable it
If you want to keep libvirtd running for some purpose, but want to disable its default network settings, this might do it (but I'm less sure if this will clear the iptables additions or not):
virsh net-destroy default # unconfigure what libvirtd has done for now
virsh net-undefine default # persistently remove libvirtd network config
Or maybe just the following (this one can be undone without reinstalling libvirtd or otherwise restoring the default configuration):
virsh net-destroy default # unconfigure what libvirtd has done for now
virsh net-autostart default --disable # tell libvirt to not autostart default config
To undo the 3rd version, just use virsh net-autostart default without the --disable option, and restart the libvirtd service or reboot.
| Why does firewalld behave differently after a reboot than after a restart?
1,662,510,379,000 |
I'm a bit frustrated by the lack of comprehensive documentation for nftables, and currently I'm failing to get even a simple example to work. I'm trying to just create an output rule. Here's my only table:
root@localhost ~ # nft list ruleset
table inet filter {
chain output {
type filter hook output priority 0; policy accept;
}
}
I wish to count the number of packets sent to 8.8.8.8. So I used the example command from the nftables wiki (https://wiki.nftables.org/wiki-nftables/index.php/Simple_rule_management):
root@localhost ~ # nft add rule filter output ip daddr 8.8.8.8 counter
Error: Could not process rule: No such file or directory
add rule filter output ip daddr 8.8.8.8 counter
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
But for some reason, I get a very uninformative error message. What am I doing wrong, and what is the correct way to add an output rule?
root@localhost ~ # uname -a
Linux localhost 4.15.3-2-ARCH #1 SMP PREEMPT Thu Feb 15 00:13:49 UTC 2018 x86_64 GNU/Linux
root@localhost ~ # nft --version
nftables v0.8.2 (Joe Btfsplk)
root@localhost ~ # lsmod|grep '^nf'
nfnetlink_queue 28672 0
nfnetlink_log 20480 0
nf_nat_masquerade_ipv6 16384 1 ip6t_MASQUERADE
nf_nat_ipv6 16384 1 ip6table_nat
nf_nat_masquerade_ipv4 16384 1 ipt_MASQUERADE
nf_nat_ipv4 16384 1 iptable_nat
nf_nat 36864 4 nf_nat_masquerade_ipv6,nf_nat_ipv6,nf_nat_masquerade_ipv4,nf_nat_ipv4
nft_reject_inet 16384 0
nf_reject_ipv4 16384 1 nft_reject_inet
nf_reject_ipv6 16384 1 nft_reject_inet
nft_reject 16384 1 nft_reject_inet
nft_meta 16384 0
nf_conntrack_ipv6 16384 2
nf_defrag_ipv6 36864 1 nf_conntrack_ipv6
nf_conntrack_ipv4 16384 2
nf_defrag_ipv4 16384 1 nf_conntrack_ipv4
nft_ct 20480 0
nf_conntrack 155648 10 nft_ct,nf_conntrack_ipv6,nf_conntrack_ipv4,ipt_MASQUERADE,nf_nat_masquerade_ipv6,nf_nat_ipv6,nf_nat_masquerade_ipv4,ip6t_MASQUERADE,nf_nat_ipv4,nf_nat
nft_set_bitmap 16384 0
nft_set_hash 28672 0
nft_set_rbtree 16384 0
nf_tables_inet 16384 2
nf_tables_ipv6 16384 1 nf_tables_inet
nf_tables_ipv4 16384 1 nf_tables_inet
nf_tables 106496 10 nft_ct,nft_set_bitmap,nft_reject,nft_set_hash,nf_tables_ipv6,nf_tables_ipv4,nft_reject_inet,nft_meta,nft_set_rbtree,nf_tables_inet
nfnetlink 16384 3 nfnetlink_log,nfnetlink_queue,nf_tables
|
The correct command is
root@localhost ~ # nft add rule inet filter output ip daddr 8.8.8.8 counter
Notice the inet prefix before the table name (filter). That's the table's family. It's optional, but if you omit it, nft assumes ip (IPv4 only), while the table here uses the inet pseudo-family (both IPv4 and IPv6).
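The family matters because same-named tables can coexist in different families: without a family, nft looks for a table filter in the ip family, which does not exist here, hence the unhelpful "No such file or directory". A sketch:

```
table ip filter {     # what plain "nft add rule filter ..." would refer to
}

table inet filter {   # what "nft add rule inet filter ..." refers to
	chain output {
		type filter hook output priority 0; policy accept;
		ip daddr 8.8.8.8 counter
	}
}
```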
I learned this thanks to the people in the #netfilter channel on Freenode.
Needless to say, nft error messages are anything but helpful. :-)
| nftables, add output rule syntax |
1,662,510,379,000 |
Here is a relevant example ruleset with 2 sample nftables NAT rules that masquerade IPs from a virtual machine to the LAN:
#!/usr/sbin/nft -f
add table nat_4
# Sees all packets after routing, just before they leave the local system
add chain nat_4 postrouting_nat_4 {
type nat hook postrouting priority srcnat; policy accept;
comment "Postrouting SNAT IPv4 traffic"
}
# Masquerade all packets going from VMs to the LAN/Internet
add rule nat_4 postrouting_nat_4 meta l4proto tcp ip saddr 192.168.122.0/24 ip daddr != 192.168.122.0/24 counter masquerade to :1024-65535
# Same rule as above but without [to :PORT_SPEC]
add rule nat_4 postrouting_nat_4 meta l4proto tcp ip saddr 192.168.122.0/24 ip daddr != 192.168.122.0/24 counter masquerade
The sample address 192.168.122.0/24 is the network the virtual machine operates on.
I understand what masquerade does: it performs SNAT by changing the source IP to the outbound interface's IP.
In that sense, I understand what the 2nd rule from the sample above does.
What I don't understand is the [to :PORT_SPEC] declaration (docs link) in the example (1st rule): masquerade to :1024-65535
What exactly does masquerade to :1024-65535 do and is it necessary to specify it?
Are those 2 rules basically the same or how are they different?
Which one is preferred to do NAT from virtual machine to internet over virtual switch on localhost?
|
I think things are clearer if you look at the documentation for the iptables MASQUERADE target. From iptables-extensions(8):
MASQUERADE
[...]
--to-ports port[-port]
This specifies a range of source ports to use, overriding
the default SNAT source port selection heuristics (see
above). This is only valid if the rule also specifies one
of the following protocols: tcp, udp, dccp or sctp.
The to :PORT_SPEC option in the NFT rule accomplishes the same thing: it specifies the source ports to use for masqueraded connections.
| usage and meaning of masquerade [to :PORT_SPEC] |
1,662,510,379,000 |
I want to match all traffic from a server, but it is on the same interface.
MAC 88:7e:25:d3:90:0b > ens19 > table 147
Therefore I made this nftables rule
table ip filter { # handle 3
chain input { # handle 1
type filter hook input priority filter; policy accept;
iif "ens19" ether saddr 88:7e:25:d3:90:0b meta mark set 0x00000093 # handle 2
iif "ens19" ether saddr 08:05:e2:04:ce:b3 meta mark set 0x00003417 # handle 3
}
}
And ip rule for specify routing table
ip rule add from all fwmark 0x93 lookup 147
ip rule add from all fwmark 13335 lookup 147
ip -6 rule add from all fwmark 0x93 lookup 147
ip -6 rule add from all fwmark 13335 lookup 147
But when I use tshark to see if it works, it shows there are no incoming packets, and I can't ping the address. So something is wrong with matching the incoming flow.
And if I use
from all iif ens19 lookup 147
instead of
ip -6 rule add from all fwmark 0x93 lookup 147
ip -6 rule add from all fwmark 13335 lookup 147
it works, so something must be wrong with my nftables rules.
Does anyone know why?
|
hook input is specifically meant for packets where the local host is the final destination – it is only reached after routing decisions have been made, as that's how netfilter knows which packets are processed through "hook input" chains and which ones go through "hook forward" chains. So at that point, your policy rules no longer matter.
Instead, I think you need hook prerouting (and probably with priority raw?).
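A sketch of the same marking rules moved to a prerouting chain (untested; the inet family is used so the marks also apply to IPv6 packets, which the original ip-family table would never match, and priority mangle is one reasonable choice for the hook priority):

```
table inet filter {
	chain prerouting {
		type filter hook prerouting priority mangle; policy accept;
		iif "ens19" ether saddr 88:7e:25:d3:90:0b meta mark set 0x00000093
		iif "ens19" ether saddr 08:05:e2:04:ce:b3 meta mark set 0x00003417
	}
}
```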
| MAC-based routing with nftables and IPv6
1,662,510,379,000 |
List the current ruleset:
sudo nft list ruleset
table inet filter {
chain input {
type filter hook input priority 0; policy accept;
iif "lo" accept
}
chain forward {
type filter hook forward priority 0; policy drop;
}
chain output {
type filter hook output priority 0; policy accept;
}
}
Delete the whole ruleset:
sudo nft flush ruleset
List the ruleset again:
sudo nft list ruleset
#nothing shown on the output
Reboot the PC and list the ruleset:
sudo nft list ruleset
table inet filter {
chain input {
type filter hook input priority 0; policy accept;
iif "lo" accept
}
chain forward {
type filter hook forward priority 0; policy drop;
}
chain output {
type filter hook output priority 0; policy accept;
}
}
I conclude that nft flush ruleset can't delete all rules permanently. How can I delete the whole ruleset permanently, then?
|
You said
Reboot pc and list all ruleset:
Check whether /etc/nftables.conf exists; you should empty or delete that too, and then run nft flush ruleset.
Depending on your distro, you may also want to get rid of packages like netfilter-persistent if you don't want persistent rules at all, and disable the service that loads the saved file at boot (e.g. systemctl disable nftables.service).
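For example, a minimal /etc/nftables.conf that restores nothing at boot (assuming your distro's nftables.service loads this file) would simply be:

```
#!/usr/sbin/nft -f
flush ruleset
```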
| `nft flush ruleset` can't delete all rules permanently? |
1,662,510,379,000 |
The documentation suggests that from nftables 0.9.4 onward it's possible to use typeof ip daddr (and similar) to combine IPv4 and IPv6 sets. Alas, the LTS Ubuntu version 20.04 is one release short of that version (it ships 0.9.3).
Quote:
The typeof keyword is available since 0.9.4 and allows you to use a high level expression, then let nftables resolve the base type for you
In the past with ipset I created a set of sets whenever I wanted to refer only to a single name. So I had a set specific to IPv4 (e.g. blackhole4), another to IPv6 (e.g. blackhole6) and then one containing those two (e.g. blackhole). The in-packet-path as well as the ipset CLI updating of the set elements worked fine against that set of sets. The elements would be inserted/updated in the appropriate "subset".
Is there a possibility to unify sets also for nftables 0.9.3?
NB: I'd be fine having to create separate IPv4 and IPv6-specific sets and then have some container set to achieve the feat. It's just that given the documentation I don't see how to achieve this.
PS: I saw this and this but they were for other nftables versions and the outcome is not what's desired.
|
Currently, as of 2021-05-01, no combination of kernel and nftables (up to and including 0.9.8) can do this.
Pablo Neira Ayuso, a lead developer, wrote on 2020-09-26 that there is no major architectural issue preventing its implementation, but it has not been done yet.
https://www.spinics.net/lists/netfilter/msg59761.html :
So you would like to consolidate:
tcp dport @b_t update @b_sa4 { ip saddr } drop
tcp dport @b_t update @b_sa6 { ip6 saddr } drop
In one single rule?
Something like (hypothetical syntax)
tcp dport @b_t update @b_sa { inet saddr } drop
where b_sa is a set with something like type inet_addr.
https://www.spinics.net/lists/netfilter/msg59761.html :
General set infrastructure that provides an abstraction for IPv4 and
IPv6 through inet is possible, yes.
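Until then, the usual workaround on 0.9.3 is to keep two typed sets and duplicate only the membership test, jumping to a shared chain for the common action (a sketch with hypothetical names):

```
table inet filter {
	set blackhole4 {
		type ipv4_addr
		flags interval
	}
	set blackhole6 {
		type ipv6_addr
		flags interval
	}
	chain blackhole_action {
		counter drop
	}
	chain input {
		type filter hook input priority 0; policy accept;
		ip saddr @blackhole4 jump blackhole_action
		ip6 saddr @blackhole6 jump blackhole_action
	}
}
```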
| Prior to nftables 0.9.4 is there a way to express a set of sets to unify IPv4 and IPv6 rules? |
1,662,510,379,000 |
I was following tutorial at https://wiki.nftables.org/wiki-nftables/index.php/Main_Page
Here is what I did.
#uname -a
Linux delor 4.9.0-0.bpo.6-amd64 #1 SMP Debian 4.9.88-1+deb9u1~bpo8+1 (2018-05-13) x86_64 GNU/Linux
# sudo nft add table ip filter
# sudo nft add chain ip filter output { type filter hook input priority 0 \; }
# sudo nft add chain ip filter input { type filter hook input priority 0 \; }
# sudo nft add rule filter output ip daddr 8.8.8.8 counter
# ping -c 1 8.8.8.8
# sudo nft -nn list table filter
table ip filter {
chain output {
type filter hook input priority 0; policy accept;
ip daddr 8.8.8.8 counter packets 0 bytes 0
}
chain input {
type filter hook input priority 0; policy accept;
}
}
We see that the tables are set up (as shown in the tutorial). However the counter did not increase.
Did I miss something? Was I supposed to do something else to enable it?
|
Your output chain is using the input hook, so it's actually a second chain handling input. Its name doesn't matter; what matters is its hook: input.
Use instead:
# sudo nft add chain ip filter output { type filter hook output priority 0 \; }
| nftables not working, am I doing it right? |
1,662,510,379,000 |
With a standard log rule matching "ct state new" we get the details about a new session; however, we only get the data size of the first packet, in the LEN field, i.e.
2024-06-15T10:11:31.829667+00:00 deepu kernel: ALLOW INPUT: IN=ens33 OUT= MAC=ff:ff:ff:ff:ff:ff:3a:f9:d3:87:89:65:08:cc SRC=172.16.0.1 DST=172.16.0.255 LEN=72 TOS=0x00 PREC=0x00 TTL=64 ID=32643 PROTO=UDP SPT=57621 DPT=57621
In this, we see 72 bytes.
How can we log the total volume of data transferred in that session? For example, if this was a 100 MB file download, I'd want to see the 100 MB download, plus the few small packets of TCP establishment, etc.
|
The basic iptables/nftables counters only track the number of bytes/packets that have matched a particular rule, regardless of which connection they belong to.
For per-session statistics, you would need to track individual connections and log connection statistics whenever a connection ends. Sounds like a job for the connection tracking subsystem!
At least on Debian, there is a conntrackd package, which includes a configuration example at /usr/share/doc/conntrackd/examples/stats/conntrackd.conf, for writing statistics about ending connections to /var/log/conntrackd-stats.log by default. This might be just about exactly what you're asking for.
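For a quick interactive look without setting up conntrackd, the conntrack CLI can print an event each time a connection is destroyed, including its total packet/byte counters once per-flow accounting is enabled (a sketch from memory; run as root, and note the sysctl must be set before the connections you want to measure are created):

```
# enable per-connection packet/byte accounting
sysctl -w net.netfilter.nf_conntrack_acct=1

# print a line for every connection as it ends; with accounting on,
# the DESTROY event includes the packets= and bytes= totals for the flow
conntrack -E -e DESTROY -o timestamp
```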
| How can I get nftables to log the data transferred per session? |
1,402,528,639,000 |
I have a question about closing a port; I think I'm seeing something strange.
When I execute
nmap --top-ports 10 192.168.1.1
it shows that port 23/tcp is open.
But when I execute
nmap --top-ports 10 localhost
it shows that port 23/tcp is closed.
Which of them is true? I want to close this port on my whole system; how can I do it?
|
Nmap is a great port scanner, but sometimes you want something more authoritative. You can ask the kernel what processes have which ports open by using the netstat utility:
me@myhost:~$ sudo netstat -tlnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:53 0.0.0.0:* LISTEN 1004/dnsmasq
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 380/sshd
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 822/cupsd
tcp6 0 0 :::22 :::* LISTEN 380/sshd
tcp6 0 0 ::1:631 :::* LISTEN 822/cupsd
The options I have given are:
-t TCP only
-l Listening ports only
-n Don't look up service and host names, just display numbers
-p Show process information (requires root privilege)
In this case, we can see that sshd is listening on any interface (0.0.0.0) port 22, and cupsd is listening on loopback (127.0.0.1) port 631. Your output may show that telnetd has a local address of 192.168.1.1:23, meaning it will not answer to connections on the loopback adapter (e.g. you can't telnet 127.0.0.1).
There are other tools that will show similar information (e.g. lsof or /proc), but netstat is the most widely available. It even works on Windows (netstat -anb). BSD netstat is a little different: you'll have to use sockstat(1) to get the process information instead.
Once you have the process ID and program name, you can go about finding the process and killing it if you wish to close the port. For finer-grained control, you can use a firewall (iptables on Linux) to limit access to only certain addresses. You may need to disable a service startup. If the PID is "-" on Linux, it's probably a kernel process (this is common with NFS for instance), so good luck finding out what it is.
Note: I said "authoritative" because you're not being hindered by network conditions and firewalls. If you trust your computer, that's great. However, if you suspect that you've been hacked, you may not be able to trust the tools on your computer. Replacing standard utilities (and sometimes even system calls) with ones that hide certain processes or ports (a.k.a. rootkits) is a standard practice among attackers. Your best bet at this point is to make a forensic copy of your disk and restore from backup; then use the copy to determine the way they got in and close it off.
| How to close ports in Linux? |
1,402,528,639,000 |
Can nmap list all hosts on the local network that have both SSH and HTTP open? To do so, I can run something like:
nmap 192.168.1.1-254 -p22,80 --open
However, this lists hosts that have ANY of the listed ports open, whereas I would like hosts that have ALL of the ports open. In addition, the output is quite verbose:
# nmap 192.168.1.1-254 -p22,80 --open
Starting Nmap 6.47 ( http://nmap.org ) at 2015-12-31 10:14 EST
Nmap scan report for Wireless_Broadband_Router.home (192.168.1.1)
Host is up (0.0016s latency).
Not shown: 1 closed port
PORT STATE SERVICE
80/tcp open http
Nmap scan report for new-host-2.home (192.168.1.16)
Host is up (0.013s latency).
PORT STATE SERVICE
22/tcp open ssh
80/tcp open http
Nmap done: 254 IP addresses (7 hosts up) scanned in 3.78 seconds
What I'm looking for is output simply like:
192.168.1.16
as the above host is the only one with ALL the ports open.
I certainly can post-process the output, but I don't want to rely on the output format of nmap, I'd rather have nmap do it, if there is a way.
|
There is not a way to do that within Nmap, but your comment about not wanting "to rely on the output format of nmap" lets me point out that Nmap has two stable output formats for machine-readable parsing. The older one is Grepable output (-oG), which works well for processing with perl, awk, and grep, but is missing some of the more advanced output (like NSE script output, port reasons, traceroute, etc.). The more complete format is XML output (-oX), but it may be overkill for your purposes.
You can either save these outputs to files with -oG, -oX, or -oA (both formats plus "normal" text output), or you can send either one straight to stdout:
nmap 192.168.1.1-254 -p22,80 --open -oG - | awk '/22\/open.*80\/open/{print $2}'
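As a quick sanity check of that awk filter (using a hypothetical grepable-output line, since the exact hosts will vary):

```shell
# One line of Nmap's grepable (-oG) output for a hypothetical host with
# both 22 and 80 open; with the default whitespace field splitting,
# $2 is the IP address:
line='Host: 192.168.1.16 (new-host-2.home)	Ports: 22/open/tcp//ssh///, 80/open/tcp//http///'

# The filter prints the IP only when both "22/open" and "80/open" appear:
printf '%s\n' "$line" | awk '/22\/open.*80\/open/{print $2}'
# → 192.168.1.16
```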
| Can nmap display only hosts with specific ports open? |
1,402,528,639,000 |
My rental Linux server doesn't respond to nmap the way I thought it would. When I run nmap it shows three open ports: 80, 443 and 8080. However, I know ports 2083, 22 and 2222 should all be open, as they're used for the web-based C-Panel, SSH and SFTP, respectively.
Has my server rental company not opened these ports fully, or does nmap not give a complete list (by default)?
|
By default, nmap scans the thousand most common ports. Ports 2083 and 2222 aren't on that list. In order to perform a complete scan, you need to specify "all ports" (nmap -p 1-65535, or the shortcut form nmap -p-).
Port 22, on the other hand, is on the list. If nmap isn't reporting it, it's because something's blocking your access, or the SSH server isn't running.
| nmap doesn't appear to list all open ports |
1,402,528,639,000 |
A few days ago I started to care a lot about my data security, and I ended up nmapping myself with: nmap 127.0.0.1
Surprise, surprise, I have lots of active services listening on localhost:
$ nmap 127.0.0.1
Starting Nmap 5.21 ( http://nmap.org ) at 2013-05-05 00:19 WEST
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00025s latency).
Not shown: 993 closed ports
PORT STATE SERVICE
22/tcp open ssh
25/tcp open smtp
53/tcp open domain
111/tcp open rpcbind
139/tcp open netbios-ssn
445/tcp open microsoft-ds
631/tcp open ipp
Nmap done: 1 IP address (1 host up) scanned in 0.05 seconds
The only one that I might use is ssh (although it is probably not well configured; I will leave that matter for another question).
As far as I know ipp protocol is used by CUPS to share my printers, I don't need to share them, just access printers from a server.
This is the output of netstat -lntup by the root user, removing the localhost addresses:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 497/sshd
tcp 0 0 0.0.0.0:17500 0.0.0.0:* LISTEN 2217/dropbox
tcp 0 0 0.0.0.0:445 0.0.0.0:* LISTEN 892/smbd
tcp 0 0 0.0.0.0:50022 0.0.0.0:* LISTEN 1021/rpc.statd
tcp 0 0 0.0.0.0:139 0.0.0.0:* LISTEN 892/smbd
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 906/rpcbind
tcp6 0 0 :::22 :::* LISTEN 497/sshd
tcp6 0 0 :::42712 :::* LISTEN 1021/rpc.statd
tcp6 0 0 :::445 :::* LISTEN 892/smbd
tcp6 0 0 :::139 :::* LISTEN 892/smbd
tcp6 0 0 :::111 :::* LISTEN 906/rpcbind
udp 0 0 0.0.0.0:51566 0.0.0.0:* 615/avahi-daemon: r
udp 0 0 0.0.0.0:68 0.0.0.0:* 7362/dhclient
udp 0 0 0.0.0.0:111 0.0.0.0:* 906/rpcbind
udp 0 0 192.168.1.255:137 0.0.0.0:* 1782/nmbd
udp 0 0 192.168.1.67:137 0.0.0.0:* 1782/nmbd
udp 0 0 0.0.0.0:137 0.0.0.0:* 1782/nmbd
udp 0 0 192.168.1.255:138 0.0.0.0:* 1782/nmbd
udp 0 0 192.168.1.67:138 0.0.0.0:* 1782/nmbd
udp 0 0 0.0.0.0:138 0.0.0.0:* 1782/nmbd
udp 0 0 0.0.0.0:655 0.0.0.0:* 906/rpcbind
udp 0 0 0.0.0.0:17500 0.0.0.0:* 2217/dropbox
udp 0 0 0.0.0.0:5353 0.0.0.0:* 615/avahi-daemon: r
udp 0 0 0.0.0.0:34805 0.0.0.0:* 1021/rpc.statd
udp6 0 0 :::40192 :::* 1021/rpc.statd
udp6 0 0 :::111 :::* 906/rpcbind
udp6 0 0 :::655 :::* 906/rpcbind
udp6 0 0 :::5353 :::* 615/avahi-daemon: r
udp6 0 0 :::42629 :::* 615/avahi-daemon: r
How do I configure those services so they only listen to the outside world when I'm actually using them?
|
Determine your exposure
Taking your output from the netstat command, what looks like a lot of services is actually a very short list:
$ netstat -lntup | awk '{print $6 $7}'|sed 's/LISTEN//'| cut -d"/" -f2|sort|uniq|grep -v Foreign
avahi-daemon:r
dhclient
dropbox
nmbd
rpcbind
rpc.statd
smbd
sshd
Getting a lay of the land
Looking at this list there are several services which I'd leave alone.
dhclient
The DHCP client daemon responsible for obtaining your IP address; you have to have this one.
dropbox
obviously Dropbox, have to have
Start reducing it - disable Samba
You can probably disable Samba right off the bat; it accounts for 2 of the above services, nmbd and smbd. It's questionable whether you really need it running on a laptop, whether bound to localhost or to your network-facing IP.
To check that they're running you can use the following command, status:
$ status nmbd
nmbd start/running, process 19457
$ status smbd
smbd start/running, process 19423
Turning services off can be confusing given all the flux between upstart, /etc/rc.d, and the rest, so it might be difficult to figure out which service is managed by which technology. For Samba you can use the service command:
$ sudo service nmbd stop
nmbd stop/waiting
$ sudo service smbd stop
smbd stop/waiting
Now they're off:
$ status nmbd
nmbd stop/waiting
$ status smbd
smbd stop/waiting
Keeping them off ... permanently
To make them stay off permanently, I've been using sysv-rc-conf to manage services from a console; it works better than most such tools. It allows you to check which services you want to run and in which runlevels they should be started/stopped:
$ sudo apt-get install sysv-rc-conf
Disabling the rest of what's NOT needed
So now Samba's off we're left with the following:
avahi-daemon
part of zeroconf (plug-n-play), turn it off
rpcbind
needed for NFS - turn it off
rpc.statd
needed for NFS - turn it off
For the remaining 3 you can do the same things we did for Samba to turn them off as well.
CUPS?
To turn CUPS off, which you don't really need by the way, you can follow the same dance of turning the service off and then disabling it from starting up. To be able to print you'll need to setup each printer individually on your system. You can do so through the system-config-printer GUI.
Making these services on demand?
This is really the heart of your question but there isn't really a silver bullet solution to making these services "smart" so that they run when they're being used, rather than all the time.
#1 - systemd vs. upstart
Part of it is the current split between systemd and upstart. There's a good overview of the 2 competing technologies here.
Both technologies are trying to do slightly different things. IMO, given their feature sets, systemd seems geared more towards servers whereas upstart seems geared more towards the desktop role. Over time this will work itself out and both services will become stable and feature-rich.
Eventually both services will offer on demand starting & stopping across the board for all the services they manage. Features such as StopWhenUnneeded=yes already exist in systemd for example, so it's only a matter of time until these capabilities get fleshed out.
#2 - service support
Some services don't support being stopped/started very well if at all. Services such as sshd seem to make little sense to run as on-demand, especially if they're used heavily. Also some services such as Apache provide mechanisms within themselves to spin up more or less of their own listeners managing themselves. So it's unclear how on-demand provided by systemd or upstart are going to integrate with these types of services.
starting sshd on first connection to port 22 with upstart's new socket bridge
CUPS and systemd: on demand start and stop
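As a concrete illustration of on-demand starting in systemd, here is a sketch of socket activation for sshd (unit names and contents are illustrative; a matching sshd@.service template is also required, and many distributions already ship such units):

```ini
# /etc/systemd/system/sshd.socket (illustrative sketch)
[Unit]
Description=OpenSSH per-connection server socket

[Socket]
# systemd listens on port 22 and only spawns sshd when a client connects
ListenStream=22
Accept=yes

[Install]
WantedBy=sockets.target
```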
Is this really necessary?
You'll hear from both sides that this is overkill or that you should take a minimalist's approach only installing what you absolutely need, but it's really a personal choice. Understanding that these services are there and what they do is really what's important. At the end of the day a computer is a tool, and by using a Unix system you're already saying that you're willing to peek behind the curtain and understand what makes your computer tick.
I'd say that this type of questioning is exactly the frame of mind one should strive for when dealing with computers and Unix in general.
References
Recommended way to enable & disable services
upstart - wikipedia
systemd - wikipedia
Running Daemons - http://tech.cueup.com/blog/2013/03/08/running-daemons/
| How to "close" open ports? |
1,402,528,639,000 |
The nmap man page has this to say about the -sn parameter:
-sn (No port scan) .
This option tells Nmap not to do a port scan after host
discovery, and only print out the available hosts that
responded to the scan.
The first half of the sentence mentions that there is no scan, but the second half says that there is a scan. Is there a different type of scan than a port scan that the second half is referring to? A host-discovery scan perhaps (guessing from the little that I know about nmap)?
|
You're right that the documentation is worded poorly. -sn means "skip the port scan phase," and was previously available as -sP, with the mnemonic "Ping scan".
Nmap scans happen in phases. These are:
Name resolution
NSE script pre-scan phase
Host discovery ("ping" scan, but not necessarily ICMP Echo request)
Parallel reverse name resolution
Port or Protocol scan
Service version detection
OS fingerprinting
Traceroute
NSE portrule and hostrule script scanning phase
NSE post-scan phase
Note that not all these phases get executed in every scan, depending on the arguments. This scan:
nmap -sn scanme.nmap.org
will run phases 1, 3, and 4.
EDIT: I corrected the documentation you referred to in r33790:
This option tells Nmap not to do a port scan after host
discovery, and only print out the available hosts that
responded to the host discovery probes.
| Nmap -sn: scan or no scan? |
1,402,528,639,000 |
Today the IT manager got angry because I used nmap on the 3 servers I manage to see what ports they had open. I know I could have used netstat inside the host' shell.
He told me that "If the network goes down because of nmap I would be punished". I would like to know, technically, how much network bandwidth / how many bytes a nmap 192.168.1.x takes, when it outputs:
Starting Nmap 6.40 ( http://nmap.org ) at 2015-05-11 13:33 ART
Nmap scan report for 192.168.x.53
Host is up (0.0043s latency).
Not shown: 983 closed ports
PORT STATE SERVICE
1/tcp open tcpmux
22/tcp open ssh
79/tcp open finger
80/tcp open http
111/tcp open rpcbind
119/tcp open nntp
143/tcp open imap
1080/tcp open socks
1524/tcp open ingreslock
2000/tcp open cisco-sccp
6667/tcp open irc
12345/tcp open netbus
31337/tcp open Elite
32771/tcp open sometimes-rpc5
32772/tcp open sometimes-rpc7
32773/tcp open sometimes-rpc9
32774/tcp open sometimes-rpc11
Nmap done: 1 IP address (1 host up) scanned in 3.28 seconds
|
This is easy enough to measure, at least if you nmap a host your machine is not otherwise communicating with. Just use tcpdump or wireshark to capture the traffic, limited to that IP address. You could also use iptables counters, etc.
I did so (using Wireshark); the machine I tested on has fewer open TCP ports (5), and the totals were 2009 packets and 118,474 bytes. That took 1.4 seconds, so 1435 pps or 677 kbps. Neither should take down a reasonably configured network.
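The arithmetic can be reproduced directly from the captured totals (re-checking the numbers above):

```shell
# 2009 packets and 118,474 bytes over 1.4 s:
awk -v bytes=118474 -v pkts=2009 -v secs=1.4 \
    'BEGIN { printf "%.0f pps, %.0f kbps\n", pkts/secs, bytes*8/secs/1000 }'
# → 1435 pps, 677 kbps
```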
Doing additional targets could potentially overwhelm a stateful firewall's connection tracking, if the scan went through a firewall. And of course running nmap is likely to cause any intrusion detection system to alarm—potentially wasting someone's time investigating.
Finally, nmap (by default) doesn't check all ports and host-based IDSs may detect and respond to the scan—both mean you don't necessarily get accurate answers.
| How many bytes occupy a simple nmap to a host? |
1,402,528,639,000 |
The following IP address is for my network interface
$ nmap 192.168.0.142
Starting Nmap 7.60 ( https://nmap.org ) at 2019-03-09 11:33 EST
Nmap scan report for ocean (192.168.0.142)
Host is up (0.00047s latency).
Not shown: 996 closed ports
PORT STATE SERVICE
22/tcp open ssh
80/tcp open http
111/tcp open rpcbind
3306/tcp open mysql
Nmap done: 1 IP address (1 host up) scanned in 0.97 seconds
Are those services shown below but not above exactly those that are closed to the outside but open within my local machine?
Are the services whose security that I should worry about exactly those listed above?
Thanks.
$ nmap localhost
Starting Nmap 7.60 ( https://nmap.org ) at 2019-03-09 11:34 EST
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00046s latency).
Other addresses for localhost (not scanned):
Not shown: 993 closed ports
PORT STATE SERVICE
22/tcp open ssh
80/tcp open http
111/tcp open rpcbind
631/tcp open ipp
3306/tcp open mysql
5432/tcp open postgresql
9050/tcp open tor-socks
Nmap done: 1 IP address (1 host up) scanned in 0.16 seconds
|
The other two answers both raise very important points. But in addition, you appear to have only scanned TCP and not UDP :-). So there might also be UDP services you want to worry about.
UDP scanning has a number of issues that do not apply to TCP scanning. In either case, I would start by querying the OS instead: How do I list all sockets which are open to remote machines?
Port scanning is still useful as a confirmation though. Port scanning from a different host is a particularly good idea if you have set up a firewall, to confirm that the firewall is doing what you want.
| Difference between `nmap local-IP-address` and `nmap localhost` |
1,402,528,639,000 |
When I type in the command nmap –Pn –sT -sV –p0-65535 192.168.1.100, my terminal responds:
Starting Nmap 7.60 ( https://nmap.org ) at 2018-01-29 11:24 PST
Failed to resolve "–Pn".
Failed to resolve "–sT".
Failed to resolve "–p0-65535".
Nmap scan report for 192.168.1.100
Host is up (0.0075s latency).
Not shown: 999 closed ports
PORT STATE SERVICE VERSION
53/tcp filtered domain
Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 1.73 seconds
I'm confused as to why it is failing to resolve the flags. This used to work on my machines; I have a MacBook and am using bash, as well as Kali Linux. I have tried restarting both machines, and it continually fails to resolve flags regardless of which IP address I attempt to scan.
|
nmap did not recognize those options because they start with a unicode EN DASH (342 200 223, –) instead of a hyphen or regular dash (-). As a result, nmap interprets those "options" as names to resolve.
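You can make the difference visible by dumping the raw bytes: '-' is a single ASCII byte, while the EN DASH is a three-byte UTF-8 sequence.

```shell
# Compare the raw bytes of the two dashes with od:
printf '%s' '-' | od -An -tx1    # → 2d        (ASCII hyphen-minus)
printf '%s' '–' | od -An -tx1    # → e2 80 93  (UTF-8 EN DASH, U+2013)
```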
| Nmap unable to resolve flags |
1,402,528,639,000 |
Per this comment, I'm going to take advice and ask this as a separate question. I am trying to learn more about networking and security and want to play with tools to help increase my understanding.
Fing seems like a pretty cool tool - finding devices on the network and their associated MAC addresses. One could easily implement any of the solutions that provide detection and alerting, but I would like to know how these tools are implemented. Is this a combination of low-level Linux utilities, or is there some custom programming going on?
If it is the second - what would that algorithm look like?
|
I just ran Fing against my wireless network. Using tcpdump, it appears that Fing generates Address Resolution Protocol (ARP) request packets. ARP is a pretty simple protocol that runs at the Ethernet Protocol level (Data Link, OSI level 2). An ARP request packet has the broadcast address (ff:ff:ff:ff:ff:ff) as the "to" address, the Android phone's MAC and IP address as the "from" information, and an IP address that Fing wants to know about. It appears that Fing just marches through whatever subnet it's on, in my case 172.31.0.0/24, so 255 IP addresses, from 172.31.0.1 to 172.31.0.254. After the march, it appears to try IP addresses that haven't responded a second time. This looks to me like Fing tries IP addresses in batches, and relies on the underlying Linux kernel to buffer ARP replies, for a Fing thread to deal with as fast as it can. If Fing decides that there's a timeout, it resends. It's not clear to me how Fing (a Java program) gets the phone's Linux kernel to generate ARP packets.
The notorious nmap, invoked with -sn, the "ping scan" flag, does the same thing. I did an strace on nmap -sn 172.31.0.0/24 to see how it gets the kernel to send ARP requests. It looks like nmap creates an ordinary TCP/IP socket, and calls connect() on the socket to TCP port 80, with an IP address. nmap must be doing this in non-blocking mode, as it does a large number of connect() calls sequentially, faster than it would take for Linux to decide to time out a connect() when there's no host with the IP address.
So there's your answer: create a TCP/IP socket, call connect() with a particular port and IP address, then see what the error is. If the error is ECONNREFUSED, it's a live IP address, but nothing is listening on that port. If you get a TCP connection, that IP address has a host. IP addresses that the connect() call times out for, don't have a machine associated. You need to batch the connect() calls for speed, and you need to wait for connect() calls to timeout to decide that an IP address does not have a machine associated with it.
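A rough sketch of that probe logic, using bash's /dev/tcp pseudo-device instead of raw non-blocking connect() calls (bash-specific and sequential rather than batched, so far slower than nmap or Fing, but it demonstrates the same principle):

```shell
# Exit status of the redirection tells us whether the TCP handshake
# completed: 0 = something accepted the connection; non-zero = refused
# (live host, closed port) or timed out (likely no host / filtered).
probe() {  # usage: probe HOST PORT
  if timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo "$1:$2 open"
  else
    echo "$1:$2 closed/filtered"
  fi
}

probe 127.0.0.1 22   # open if sshd is listening locally
probe 127.0.0.1 1    # almost certainly closed/filtered
```

Note that, unlike this sketch, this simple approach cannot distinguish "refused" from "timed out" without inspecting the errno, which is exactly why nmap checks for ECONNREFUSED specifically.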
| How does FING (or any of the IP/MAC Address Mappers) work? |
1,402,528,639,000 |
I know that i can use nmap to see which ports are open on specific machine.
But what i need is a way to get it from the host side itself.
Currently, if i use nmap on one of my machines to check the other one, i get for an example:
smb:~# nmap 192.168.1.4
PORT STATE SERVICE
25/tcp open smtp
80/tcp open http
113/tcp closed ident
143/tcp open imap
443/tcp open https
465/tcp open smtps
587/tcp open submission
993/tcp open imaps
Is there a way to do this on the host itself? Not from a remote machine to a specific host.
I know that i can do
nmap localhost
But that is not what i want to do as i will be putting the command into a script that goes through all the machines.
EDIT:
This way, nmap showed 22 5000 5001 5432 6002 7103 7106 7201 9200, but the lsof command showed me 22 5000 5001 5432 5601 6002 7102 7103 7104 7105 7106 7107 7108 7109 7110 7111 7112 7201 7210 11211 27017
|
On Linux, you can use:
ss -ltu
or
netstat -ltu
To list the listening TCP and UDP ports.
Add the -n option (for either ss or netstat) if you want to disable the translation from port number and IP address to service and host name.
Add the -p option to see the processes (if any, some ports may be bound by the kernel like for NFS) which are listening (if you don't have superuser privileges, that will only give that information for processes running in your name).
That would list the ports where an application is listening (for UDP, where it has a socket bound). Note that some may listen on a given address only (IPv4 and/or IPv6), which will show in the output of ss/netstat (0.0.0.0 means listen on any IPv4 address, [::] on any IPv6 address). Even then, that doesn't mean another host on the network can contact the system on that port and address, as any firewall, including the host firewall, may block or mask/redirect incoming connections on that port based on more or less complex rules (like only allowing connections from this or that host, from this or that source port, at this or that time, and only up to so many times per minute, etc.).
For the host firewall configuration, you can look at the output of iptables-save.
Also note that if a process or processes is/are listening on a TCP socket but not accepting connections there, once the number of pending incoming connection gets bigger than the maximum backlog, connections will no longer be accepted, and from a remote host, it will show as if the port was blocked. Watch the Recv-Q column in the output of ss/netstat to spot those situations (where incoming connections are not being accepted and fill up a queue).
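For use in a script across all your machines, here is a sketch of extracting just the listening TCP port numbers from ss output (the column layout is assumed to match current iproute2, where the local address is field 4; verify on your systems):

```shell
# Strip everything up to the last ':' in the local-address column to
# keep just the port number, then de-duplicate and sort numerically:
ss -ltn | awk 'NR > 1 { sub(/.*:/, "", $4); print $4 }' | sort -nu
```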
| A way to find open ports on a host machine |
1,402,528,639,000 |
According to https://networkengineering.stackexchange.com/a/57909/, a packet sent to 192.168.1.97 "doesn't leave the host but is treated like a packet received from the network, addressed to 192.168.1.97." So same as sending a packet to loop back 127.0.0.1.
Why does nmap 127.0.0.1 return more services than nmap 192.168.1.97?
Does nmap 127.0.0.1 necessarily also return those services returned by nmap 192.168.1.97? Does a server listening at 192.168.1.97 necessarily also listen at 127.0.0.1?
$ nmap -p0-65535 192.168.1.97
Starting Nmap 7.60 ( https://nmap.org ) at 2019-03-23 19:18 EDT
Nmap scan report for ocean (192.168.1.97)
Host is up (0.00039s latency).
Not shown: 65532 closed ports
PORT STATE SERVICE
22/tcp open ssh
111/tcp open rpcbind
3306/tcp open mysql
33060/tcp open mysqlx
Nmap done: 1 IP address (1 host up) scanned in 9.55 seconds
$ nmap -p0-65535 localhost
Starting Nmap 7.60 ( https://nmap.org ) at 2019-03-23 19:18 EDT
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00033s latency).
Other addresses for localhost (not scanned):
Not shown: 65529 closed ports
PORT STATE SERVICE
22/tcp open ssh
111/tcp open rpcbind
631/tcp open ipp
3306/tcp open mysql
5432/tcp open postgresql
9050/tcp open tor-socks
33060/tcp open mysqlx
Nmap done: 1 IP address (1 host up) scanned in 5.39 seconds
Thanks.
|
In short, they are two different interfaces (192.168.1.97 vs 127.0.0.1), and may have different firewall rules applied and/or services listening. Being on the same machine means relatively little.
| why `nmap 192.168.1.97` returns less services than `nmap 127.0.0.1`? [duplicate] |
1,402,528,639,000 |
I am testing my Debian Server with some Nmap port Scanning. My Debian is a Virtual Machine running on a bridged connection.
Classic port scanning using TCP SYN requests works fine and detects port 80 as open (which is correct):
nmap -p 80 192.168.1.166
Starting Nmap 6.47 ( http://nmap.org ) at 2016-02-10 21:36 CET
Nmap scan report for 192.168.1.166
Host is up (0.00014s latency).
PORT STATE SERVICE
80/tcp open http
MAC Address: xx:xx:xx:xx:xx:xx (Cadmus Computer Systems)
Nmap done: 1 IP address (1 host up) scanned in 0.51 seconds
But when running a UDP port scan, it fails and my Debian server answers with an ICMP "port unreachable" error:
nmap -sU -p 80 192.168.1.166
Starting Nmap 6.47 ( http://nmap.org ) at 2016-02-10 21:39 CET
Nmap scan report for 192.168.1.166
Host is up (0.00030s latency).
PORT STATE SERVICE
80/udp closed http
MAC Address: xx:xx:xx:xx:xx:xx (Cadmus Computer Systems)
Nmap done: 1 IP address (1 host up) scanned in 0.52 seconds
(Wireshark capture screenshot omitted.)
How is that possible? My port 80 is open, so why does Debian answer with an ICMP "port unreachable" error? Is that a security issue?
|
Although TCP and UDP are both part of the TCP/IP suite, sit at the same TCP/IP and OSI layer, and are both a layer above IP, they are different protocols.
http://www.cyberciti.biz/faq/key-differences-between-tcp-and-udp-protocols/
Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) are
two of the core protocols of the Internet Protocol suite. Both TCP and
UDP work at the transport layer of the TCP/IP model and both have a very
different usage.
TCP is a connection-oriented protocol. UDP is a connectionless protocol.
Some services do indeed answer on TCP and UDP ports at the same time, as is the case with DNS and NTP services; however, that is certainly not the case with web servers, which normally only answer by default on port 80/TCP (and do not work/listen at all on UDP).
You can list your UDP listenning ports in a linux system with:
$sudo netstat -anlpu
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
udp 0 0 0.0.0.0:1900 0.0.0.0:* 15760/minidlnad
udp 0 0 0.0.0.0:5000 0.0.0.0:* 32138/asterisk
udp 0 0 0.0.0.0:4500 0.0.0.0:* 1592/charon
udp 0 0 0.0.0.0:4520 0.0.0.0:* 32138/asterisk
udp 0 0 0.0.0.0:5060 0.0.0.0:* 32138/asterisk
udp 0 0 0.0.0.0:4569 0.0.0.0:* 32138/asterisk
udp 0 0 0.0.0.0:500 0.0.0.0:* 1592/charon
udp 0 0 192.168.201.1:53 0.0.0.0:* 30868/named
udp 0 0 127.0.0.1:53 0.0.0.0:* 30868/named
udp 0 0 0.0.0.0:67 0.0.0.0:* 2055/dhcpd
udp 0 0 0.0.0.0:14403 0.0.0.0:* 1041/dhclient
udp 17920 0 0.0.0.0:68 0.0.0.0:* 1592/charon
udp 0 0 0.0.0.0:68 0.0.0.0:* 1041/dhclient
udp 0 0 0.0.0.0:56417 0.0.0.0:* 2055/dhcpd
udp 0 0 192.168.201.1:123 0.0.0.0:* 1859/ntpd
udp 0 0 127.0.0.1:123 0.0.0.0:* 1859/ntpd
udp 0 0 192.168.201.255:137 0.0.0.0:* 1777/nmbd
udp 0 0 192.168.201.1:137 0.0.0.0:* 1777/nmbd
udp 0 0 0.0.0.0:137 0.0.0.0:* 1777/nmbd
udp 0 0 192.168.201.255:138 0.0.0.0:* 1777/nmbd
udp 0 0 192.168.201.1:138 0.0.0.0:* 1777/nmbd
udp 0 0 0.0.0.0:138 0.0.0.0:* 1777/nmbd
udp 0 0 192.168.201.1:17566 0.0.0.0:* 15760/minidlnad
And your listening TCP ports with the command:
$sudo netstat -anlpt
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:5060 0.0.0.0:* LISTEN 32138/asterisk
tcp 0 0 192.168.201.1:8200 0.0.0.0:* LISTEN 15760/minidlnad
tcp 0 0 192.168.201.1:139 0.0.0.0:* LISTEN 2092/smbd
tcp 0 0 0.0.0.0:2000 0.0.0.0:* LISTEN 32138/asterisk
tcp 0 0 192.168.201.1:80 0.0.0.0:* LISTEN 7781/nginx
tcp 0 0 192.168.201.1:53 0.0.0.0:* LISTEN 30868/named
tcp 0 0 127.0.0.1:53 0.0.0.0:* LISTEN 30868/named
tcp 0 0 192.168.201.1:22 0.0.0.0:* LISTEN 2023/sshd
tcp 0 0 0.0.0.0:8888 0.0.0.0:* LISTEN 1919/perl
tcp 0 0 127.0.0.1:953 0.0.0.0:* LISTEN 30868/named
tcp 0 0 192.168.201.1:445 0.0.0.0:* LISTEN 2092/smbd
tcp 0 224 192.168.201.1:22 192.168.201.12:56820 ESTABLISHED 16523/sshd: rui [pr
Now, normally nmap sends a SYN to the port being scanned and, per the TCP protocol, if a daemon/service is bound to the port, it answers with a SYN+ACK, and nmap shows it as open.
TCP/IP connection negotiation: 3 way handshake
To establish a connection, TCP uses a three-way handshake. Before a
client attempts to connect with a server, the server must first bind
to and listen at a port to open it up for connections: this is called
a passive open. Once the passive open is established, a client may
initiate an active open. To establish a connection, the three-way (or
3-step) handshake occurs:
SYN: The active open is performed by the client sending a SYN to the
server. The client sets the segment's sequence number to a random
value A.
SYN-ACK: In response, the server replies with a SYN-ACK.
However, if no service is listening there, TCP/IP defines that the kernel sends back an ICMP "port unreachable" message for UDP probes, and a TCP RST message for TCP probes.
ICMP Destination unreachable
Destination unreachable is generated by the host or its inbound
gateway[3] to inform the client that the destination is unreachable
for some reason. A Destination Unreachable message may be generated as
a result of a TCP, UDP or another ICMP transmission. Unreachable TCP
ports notably respond with TCP RST rather than a Destination
Unreachable type 3 as might be expected.
So indeed, your UDP scan of port 80/UDP simply receives an ICMP unreachable message back because no service is listening on that combination of protocol/port.
As for security considerations, those ICMP destination unreachable messages can certainly be blocked, if you define firewall/iptables rules that DROP all messages by default, and only allow in the ports that your machine serves to the outside. That way, nmap scans to all the open ports, especially in a network, will be slower, and the servers will use less resources.
As an additional advantage, if a daemon/service opens additional ports, or a new service is added by mistake, it won't be serving requests until it is expressly allowed by new firewall rules.
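For illustration, here is a minimal default-DROP policy in iptables-restore format (a sketch only; the allowed ports shown are assumptions to adapt to what your machine actually serves before loading with iptables-restore):

```conf
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# Keep established flows and local traffic working:
-A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
-A INPUT -i lo -j ACCEPT
# Only the ports this machine actually serves (adjust as needed):
-A INPUT -p tcp --dport 22 -j ACCEPT
-A INPUT -p tcp --dport 80 -j ACCEPT
COMMIT
```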
Please do note, that if instead of using DROP in iptables, you use REJECT rules, the kernel won't ignore the scanning/ TCP/IP negotiation tries, and will answer with ICMP messages of Destination unreachable, code 13: "Communication administratively prohibited (administrative filtering prevents packet from being forwarded)".
Block all ports except SSH/HTTP in ipchains and iptables
| ICMP : Port unreachable error even if port is open |
1,402,528,639,000 |
I noticed I have several networks with all ICMP messages blocked at the firewall level, except for ICMP echo and reply.
I know that at least ICMP type 3 messages have to be allowed in IPv4 for path MTU discovery (MTU negotiation) to work.
The packets can be sniffed with the command:
sudo tcpdump icmp
However, how do I generate ICMP type 3 packets from one remote point to run end-to-end tests?
|
You need ICMP type 3 "destination unreachable" packets to provide healthy IP connections.
The easiest way to generate ICMP packets type 3 for testing is by using the nping program.
The nping program is part of the nmap package, so it needs to be installed first:
sudo apt install nmap
After installing it, to test a remote Linux system, start by listening on the remote side for ICMP type 3 packets:
sudo tcpdump 'icmp[0] = 3'
or
sudo tcpdump '(icmp[0] = 3) and (host ip_or_dns_of_nping_sender)'
and then, on the other system/side, send the ICMP type 3 packets:
sudo nping --icmp-type 3 ip_or_dns_of_remote
Be sure to test them in both directions.
As an example, using the loopback interface to show the test in the local machine:
In the first terminal - listening for ICMP type 3 messages:
$ sudo tcpdump -i lo 'icmp[0] = 3'
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on lo, link-type EN10MB (Ethernet), capture size 262144 bytes
21:37:44.089420 IP localhost > localhost: [|icmp]
21:37:45.090092 IP localhost > localhost: [|icmp]
21:37:46.091289 IP localhost > localhost: [|icmp]
21:37:47.093095 IP localhost > localhost: [|icmp]
21:37:48.095019 IP localhost > localhost: [|icmp]
^C
5 packets captured
10 packets received by filter
0 packets dropped by kernel
In the second terminal - sending ICMP type 3 messages:
$ sudo nping --icmp-type 3 localhost
Starting Nping 0.6.47 ( http://nmap.org/nping ) at 2017-03-06 21:37 WET
SENT (0.0221s) ICMP 127.0.0.1 > 127.0.0.1 Destination unreachable (type=3/code=0) ttl=64 id=40477 iplen=28
RCVD (0.2088s) ICMP 127.0.0.1 > 127.0.0.1 Destination unreachable (type=3/code=0) ttl=64 id=40477 iplen=28
SENT (1.0228s) ICMP 127.0.0.1 > 127.0.0.1 Destination unreachable (type=3/code=0) ttl=64 id=40477 iplen=28
RCVD (1.2088s) ICMP 127.0.0.1 > 127.0.0.1 Destination unreachable (type=3/code=0) ttl=64 id=40477 iplen=28
SENT (2.0240s) ICMP 127.0.0.1 > 127.0.0.1 Destination unreachable (type=3/code=0) ttl=64 id=40477 iplen=28
RCVD (2.2088s) ICMP 127.0.0.1 > 127.0.0.1 Destination unreachable (type=3/code=0) ttl=64 id=40477 iplen=28
SENT (3.0258s) ICMP 127.0.0.1 > 127.0.0.1 Destination unreachable (type=3/code=0) ttl=64 id=40477 iplen=28
RCVD (3.2088s) ICMP 127.0.0.1 > 127.0.0.1 Destination unreachable (type=3/code=0) ttl=64 id=40477 iplen=28
SENT (4.0277s) ICMP 127.0.0.1 > 127.0.0.1 Destination unreachable (type=3/code=0) ttl=64 id=40477 iplen=28
RCVD (4.2088s) ICMP 127.0.0.1 > 127.0.0.1 Destination unreachable (type=3/code=0) ttl=64 id=40477 iplen=28
Max rtt: 186.715ms | Min rtt: 181.081ms | Avg rtt: 184.307ms
Raw packets sent: 5 (140B) | Rcvd: 5 (140B) | Lost: 0 (0.00%)
Nping done: 1 IP address pinged in 4.24 seconds
| MTU (IPv4) tests in Linux |
1,402,528,639,000 |
Nmap scanning network for SNMP enabled devices:
sudo nmap -sU -p 161 --script default,snmp-sysdescr 26.14.32.120/24
I'm trying to figure out how to make nmap return only devices that have a specific entry in the snmp-sysdescr object:
snmp-sysdescr: "Target device name"
Is that possible?
|
Nmap doesn't contain much in the way of output filtering options: --open will limit output to hosts containing open ports (any open ports). -v0 will prevent any output to the screen.
Instead, the best way to accomplish this is to save the XML output of the scan (using the -oX or -oA output options), which will contain all the information gathered by the scan in an easy-to-parse XML format. Then you can filter that with XML parsing tools to include the information you want.
One command-line XML parser is xmlstarlet. You can use this command to filter out only IP addresses for targets that have sysdescr containing the string "example":
xmlstarlet sel -t -m "//port/script[@id='snmp-sysdescr' and contains(@output,'example')]/../../../address[@addrtype='ipv4']" -v @addr -n output.xml
You can also do this with Ndiff, which is a tool and Python 2 library distributed with Nmap:
#!/usr/bin/env python
import ndiff

def sysdescr_contains (value, host):
    for port in host.ports:
        for script in filter(lambda x: x.id == u"snmp-sysdescr", port.script_results):
            if value in script.output:
                return True
    return False

def usage ():
    print """Look for <substring> in snmp-sysdescr output and print matching hosts.
Usage: {} <filename.xml> <substring>""".format(sys.argv[0])

if __name__ == "__main__":
    import sys
    if len(sys.argv) < 3:
        usage()
        exit(1)
    scan = ndiff.Scan()
    scan.load_from_file(sys.argv[1])
    for host in filter(lambda x: sysdescr_contains(sys.argv[2], x), scan.hosts):
        print host.format_name()
Other Nmap-output parsing libraries are available in most common programming languages.
| Nmap scan for SNMP enabled devices |
1,402,528,639,000 |
When I execute nmap -sn 192.168.1.1-255, I get:
Nmap scan report for router (192.168.1.1)
Host is up (0.037s latency).
Nmap scan report for 192.168.1.17 # This is my smart TV
Host is up (0.054s latency).
Nmap scan report for prometheus (192.168.1.164)
Host is up (0.0020s latency).
Nmap done: 255 IP addresses (3 hosts up) scanned in 9.71 seconds
This output is an accurate account of all of the online hosts on my home network at the time. However, when I run nmap -sL 192.168.1.1-255 immediately after, I get:
Nmap scan report for router (192.168.1.1)
…
Nmap scan report for android-cae4e91f5179dac4 (192.168.1.81)
…
Nmap scan report for Owner-PC (192.168.1.118)
Nmap scan report for prometheus (192.168.1.164)
…
Nmap scan report for kindle-707ed367c (192.168.1.187)
…
Nmap done: 255 IP addresses (0 hosts up) scanned in 8.33 seconds
The android, Owner-PC, and Kindle have all connected to my network previously (though at the time I ran nmap -sL, they were not up). Meanwhile, the smart TV (192.168.1.17) isn't listed. Reading the man page on these options doesn't shed any light for me on why this should be the case. Can anybody provide a more intuitive explanation of the difference between these two options?
|
As I read the "help" output of nmap, "-sL" lists out the possible scan targets. On my little intranet, nmap clearly does DNS lookups: in addition to IP addresses, it gives the DNS names I've assigned, even if those hosts aren't even plugged in. But in any case, nmap -sL just lists IP addresses and host names, it doesn't actually send a packet to see what happens. nmap -sn does a ping-scan of the specified subnet. nmap sends ICMP ECHO_REQUEST packets to IP addresses, and shows you what answered.
In your listing above, you've elided all the plain IP addresses, those without a DNS name. My guess is that 192.168.1.17 is one of those plain IP addresses - your "smart TV" didn't do DHCP with a nickname that got into the DNS/DHCP server's tables. Since all that nmap -sL does is list IP addresses, and possibly hostnames it gets from DNS, the "smart TV" doesn't show up as anything more than an IP address.
So that's the behavior one would want from nmap. Executed with "-sn" it shows you what IP addresses answer ICMP ECHO_REQUEST packets. Executed with "-sL" it just lists IP addresses. Try "-sL" with different subnets, like "192.168.1.0/24" and "192.168.1.0/28" to see the difference. Go crazy and do nmap -sL 10.0.0.0/24 to see what that says. Since your DHCP is giving out 192.168.1.x addresses, and 10.x.y.z addresses aren't routable, you should be convinced that you're just seeing a listing.
| nmap -sn lists all active hosts on my network, but nmap -sL does not |
1,402,528,639,000 |
I am interested in detecting any nmap scans directed on a (my) GNU/Linux host. I would like to use snort in combination with barnyard2 and snorby for this, or if possible to perform a signature-based detection on snort unified2 logs. I noticed a similar packet to the following pops up when performing a nmap -A scan:
[ 0] 00 00 FF FF 00 00 00 00 00 00 00 00 00 00 08 00 ................
[ 16] 45 00 00 A2 5C 63 40 00 78 FF 39 03 B9 1E A6 45 E...\[email protected]
[ 32] 05 27 08 D3 50 72 69 6F 72 69 74 79 20 43 6F 75 .'..Priority Cou
[ 48] 6E 74 3A 20 39 0A 43 6F 6E 6E 65 63 74 69 6F 6E nt: 9.Connection
[ 64] 20 43 6F 75 6E 74 3A 20 38 0A 49 50 20 43 6F 75 Count: 8.IP Cou
[ 80] 6E 74 3A 20 32 0A 53 63 61 6E 6E 65 72 20 49 50 nt: 2.Scanner IP
[ 96] 20 52 61 6E 67 65 3A 20 38 34 2E 32 34 32 2E 37 Range: 84.242.7
[ 112] 36 2E 36 36 3A 31 38 35 2E 33 30 2E 31 36 36 2E 6.66:185.30.166.
[ 128] 36 39 0A 50 6F 72 74 2F 50 72 6F 74 6F 20 43 6F 69.Port/Proto Co
[ 144] 75 6E 74 3A 20 31 30 0A 50 6F 72 74 2F 50 72 6F unt: 10.Port/Pro
[ 160] 74 6F to
What's the packet above? Does it have to do with nmap, solely? (I highly doubt that)
Unfortunately snort configured with sfPortscan isn't as effective and/or accurate as I want it to be (scans are detected, but for some reason I can't see details about them, such as source/destination: https://i.sstatic.net/uqess.png , https://i.sstatic.net/faulS.png . I have iptables configured with --hitcount and --seconds, which makes "stream5: Reset outside window" pop up, thus I can detect a few scans.).
What are my options here?
|
Have you looked at the emerging threats ruleset? Specifically their scan rules?
You will never detect scans with 100% accuracy. Generally speaking, thresholding is useful. On border firewalls on a large network, I look at e.g. the number of distinct hosts contacted by a given remote host over a certain period. On a single host, the number of distinct ports contacted on that host in a given period. On the iptables front, a good option is logging DROPped packets. You could do this in snort too. The basic idea is just to monitor some ports that you do not have open. Contact on those ports is by definition unsolicited. (Okay, so that deviates a little bit from the goal of just detecting nmap scans...)
| What's the most effective way to detect nmap scans? |
1,402,528,639,000 |
I have a project where I know a single computer and a single printer will be the only things on the network. What I want to do is detect when the printer is connected to the network. I also know that the computer is 192.168.3.1. However, with DHCP I won't know the printer's address (yes, it could be made static to make this easier, but 'they' don't like that; 'they' want it dynamic).
What I have is a script that does the following and it works.
nmap -sP 192.168.3.0/24 \
| awk '/192.168.3/ && !/192.168.3.1$/' \
| sed 's/Nmap scan report for //'
Nmap output
Nmap scan report for 192.168.3.1
Host is up (0.014s latency).
Nmap scan report for 192.168.3.100
Host is up (0.012s latency).
Nmap done: 256 IP addresses (2 hosts up) scanned in 2.54 seconds
Script output
192.168.3.100
It only takes a couple seconds to work but is there a better/cleaner/faster way?
|
There's no need to scan the entire subnet if you know that you're not interested in part of it. (Avoiding the computer means you don't need to discard its result.)
nmap -oG - -sn 192.168.3.2-254 | awk '$NF=="Up" {print $2}'
or if you prefer using the XML output instead of the grep output
nmap -oX - -sP 192.168.3.2-254 | xmlstarlet sel -t -m '//address[@addrtype="ipv4"]' -v '@addr' -n
Use -sP instead of the newer -sn if your version of nmap requires it.
Incidentally, although your system administrators may want you to have your printer on DHCP, there should be little reason why they can't arrange for it to have a known unchanging address. (I do that for printers on my networks so that printer software doesn't need to worry about IP addresses changing unexpectedly.) Sometimes this is known as a "sticky" address, to differentiate it from a static (non-DHCP) address or a pseudo-random dynamic (DHCP) address.
Are you sure the DHCP server itself won't be on your subnet? Otherwise, how is your printer going to get its dynamic address?
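As a quick offline sanity check of the Grepable filter, the lines below are fabricated sample output (note that with -oG, hosts that are down only appear when -v is also given):

```shell
out=$(printf 'Host: 192.168.3.100 ()\tStatus: Up\n# Nmap done at ...\n' |
  awk '$NF=="Up" {print $2}')
printf '%s\n' "$out"    # 192.168.3.100
```

The comment line is ignored because its last field is not "Up", so only the live host's address is printed.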
| nmap to awk to sed. is there a better way? |
1,402,528,639,000 |
Is it possible to run nmap via Tor?
When I googled around, I got the impression that Tor uses Polipo / Privoxy, which are socks5 proxies. So any TCP / UDP aware applications should be able to use them as a gateway to route their traffic.
But somewhere it also said that nmap uses raw packets, so it can't be run over Tor!
|
Short answer
Yes it is possible, use tsocks nmap -sT IP
Long answer
First of all, Tor doesn't use privoxy; Tor provides a SOCKS proxy for connecting via the Tor network.
This means you won't see any network routes or things like that on your system but you have to configure your applications to use the Tor socks proxy to connect via Tor.
Typical Tor installations have privoxy or other proxy servers to provide HTTP proxies, because some browsers try to resolve the hostname locally if they are using a SOCKS proxy. But these HTTP proxy servers have nothing to do with connecting arbitrary applications through Tor.
Applications like tsocks allow arbitrary applications to connect via the Tor network. This is done by hooking into specific syscalls like connect and routing them automatically via the SOCKS proxy. This only works if the program uses those specific syscalls and is dynamically linked.
To use nmap via Tor you have to use a program like tsocks to redirect the connections via the SOCKS proxy, and use a scanning option which uses the connect syscall. Fortunately nmap provides the -sT option:
-sT (TCP connect scan)
TCP connect scan is the default TCP scan type when SYN scan is not an option. This is the case when a user does not have raw packet privileges.
So yes, it is possible to run specific nmap scans (the TCP connect scan) via the Tor network if you use tsocks.
| Run nmap via Tor |
1,402,528,639,000 |
I am looking for a quick way to scan for commonly open ports on proxies. I am doing this through php and I have been using nmap and came up with this command:
<?php
system("nmap -PN -p U:1194,T:21,22,25,53,80,110,111,143,443,465,993,995,3306,8443,553,554,1080,3128,6515,6588,8000,8008,8080,8081,8088,8090,8118,8880,8909,1723,7080 {$_SERVER['REMOTE_ADDR']} 2>&1");
?>
The problem is that it typically takes 1-2 seconds for the scans to complete; even if I just define port 80, it's still around 1-2 seconds.
However doing this in PHP will return almost instant or timeout within 0.5 seconds:
if( @fsockopen( $_SERVER['REMOTE_ADDR'], $port, $errstr, $errno, 0.5 ) )
die("php_tests_callback({success: false, message: 'Client has port $port open'});");
So I am wondering if there is a more optimized way of using NMAP or an alternative program? I am almost tempted to write some sort of php forking process to run numerous fsockopens.
Edit:
Apparently I needed to read the nmap man page before posting a question. I came up with these arguments, which usually get the scan down to 0.50 seconds or a tad more:
system("nmap -T5 --host-timeout 4s --min-rate 1000 -PN -p U:1194,T:21,22,25,53,80,110,111,143,443,465,993,995,3306,8443,553,554,1080,3128,6515,6588,8000,8008,8080,8081,8088,8090,8118,8880,8909,1723,7080 {$_SERVER['REMOTE_ADDR']} 2>&1");
However, I am still open to other suggestions/applications.
|
Take a look at the Rainmap web-hosted Nmap scanner. It was developed as a Google Summer of Code project 2 years ago under the guidance of the Nmap development team.
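If you do end up rolling your own checker instead, the fsockopen approach can also be mirrored in plain bash via its /dev/tcp pseudo-device. This is only a sketch: it assumes bash and the coreutils timeout command are available, and port_check is a made-up helper name:

```shell
port_check() {
  # Open and immediately discard a TCP connection, capped at 0.5 s,
  # roughly like fsockopen($host, $port, ..., 0.5) in the question.
  if timeout 0.5 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo open
  else
    echo closed
  fi
}

port_check 127.0.0.1 1    # nothing normally listens on port 1, so this should say "closed"
```

Forking several of these in the background (with & and wait) gets you cheap parallelism without nmap.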
| Fastest way to port scan. NMAP? |
1,402,528,639,000 |
I used nmap to scan all the ports in the hosts in a network by using the command:
$ nmap 172.31.100.0/24
What I found is that it showed the following in the result:
Nmap scan report for 172.31.100.0
Host is up (0.0039s latency).
Not shown: 998 closed ports
PORT STATE SERVICE
23/tcp open telnet
80/tcp filtered http
What does 172.31.100.0 represent? Is it a specific interface or something else?
|
172.31.100.0 is the IP address of one of the hosts you scanned. If your network is actually 172.31.96.0/21 (or larger), then 100.0 is a perfectly valid IP address.
172.31.100.0 is part of the pre-CIDR Class B IP space, so you may have gotten a default network of 172.31.0.0/16 if you didn't configure otherwise (and 100.0 is completely valid on that network).
If you don't want to scan .0, you can invoke nmap with a range: nmap -sS 172.31.100.1-254.
| Doing nmap on a network |
1,402,528,639,000 |
Why am I getting the "operation not permitted" with nmap - even when executed as root?
$ sudo nmap 192.168.1.2
Starting Nmap 7.12 ( https://nmap.org ) at 2017-01-13 02:12 CST
sendto in send_ip_packet_sd: sendto(5, packet, 44, 0, 192.168.1.2, 16) => Operation not permitted
Offending packet: TCP 192.168.1.1:53769 > 192.168.1.2:2099 S ttl=47 id=47294 iplen=44 seq=2821662280 win=1024 <mss 1460>
...
Omitting future Sendto error messages now that 10 have been shown. Use -d2 if you really want to see them.
This is not an iptables issue - my OUTPUT chain is wide open:
$ sudo iptables -L OUTPUT
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Now, I do have a few different interfaces here, with VLANs and a bridge. This is working on some interfaces and not others:
br0: Bridge over eth0 (untagged) and vbox0 (using VirtualBox), has IP 192.168.1.1 -> Not working (above).
For kicks, removing vbox0 from the bridge doesn't fix anything.
eth0.2: VLAN 2, with IP 192.168.2.1. Executing nmap on addresses in this subnet works as expected -> working.
This seems significant, as this goes out the same physical NIC as eth0 (above).
vbox1: Has IP 192.168.3.1. Executing nmap on addresses in this subnet works as expected -> working.
This is a physical workstation - not being operated under any virtualization or container that might impose additional restrictions here.
Bridge:
$ brctl show
bridge name bridge id STP enabled interfaces
br0 8000.0015171970fc no eth0
vbox0
Granted, I can work-around this by using a less-privileged TCP connect scan (-sT) rather than the default TCP SYN scan (-sS) - but this doesn't explain why this is happening.
Are there any known limitations here with the Ethernet bridging, or anything else I should be looking at?
Edits (2017-01-14):
Attempting to replicate in a clean VM (2 vCPU on an i7 physical system). Even after setting up the bridge, unable to reproduce.
Disabling all Ethernet offloading options (using ethtool) does nothing to help.
Running the latest compiled-from-source Nmap 7.40 does nothing to help.
This appears to be a kernel issue? http://seclists.org/nmap-dev/2016/q4/131 . Not sure why I couldn't reproduce on the VM, despite the same versions. Possibly also hardware/driver-specific...
This looks to be an issue with the iptable_nat module in the current 4.8.x kernels: https://bugzilla.redhat.com/show_bug.cgi?id=1402695
The scan still runs. This only seems to impact the start of the scan - for which I remain concerned as I may be missing results.
It says that all messages after the first 10 have been omitted. However, even after repeating with the -d2 as prompted, I still see only 10. (Could be a bug in itself, however.)
If I repeat the scan for a given port as listed (e.g. with -p 2099 for the example shown above), it scans successfully for that port - so it isn't as if certain ports are blocked.
Running with --max-parallelism=1 drastically reduces the occurrence.
Setting to 50 doesn't seem to help.
Setting to 30 seems to work about half the time for a single host - but still eventually starts failing for a subnet scan.
Progressively lower values lengthen the time it takes into a subnet scan for any failures to be observed - but even using 1 fails eventually.
This does not appear to be a parallelism issue within nmap itself. Running multiple concurrent nmap scans with parallel and --max-parallelism=1 re-increases the occurrence of the issue.
Host info: Ubuntu 16.10, kernel 4.8.0-34-generic #36-Ubuntu. Intel(R) Core(TM) i7-2600S, 32 GB RAM.
|
This looks to be an issue with the iptable_nat module in the current 4.8.x Linux kernels (< 4.8.16), as per https://bugzilla.redhat.com/show_bug.cgi?id=1402695.
The 4.9 kernel includes a "real" fix - but as for Ubuntu, I am guessing we'll have to wait for Ubuntu 17.04 (Zesty Zapus) to see this officially. (4.9 will be included there, as per the release notes).
As for Ubuntu 16.10 (Yakkety Yak), it looks like a fixed 4.8.16 kernel is pending release per https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1654584, which includes the following fixes:
Revert "netfilter: nat: convert nat bysrc hash to rhashtable"
Revert "netfilter: move nat hlist_head to nf_conn"
I updated to this kernel using http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.8.16/ and the instructions at https://wiki.ubuntu.com/Kernel/MainlineBuilds. (I trust it will be further upgraded as usual as additional updates follow.) This not only resolved my issue, but resulted in a massive scanning performance improvement.
| nmap raw packet privileges not working ("operation not permitted", even as root) |
1,402,528,639,000 |
I would like to find a way to print for each IP address found to have at least one open port, to print that IP address, followed by a list of open ports separated by commas. The ports and IP address should be separated by a tab delimiter.
I can do this in an ugly fashion, by grepping only the ip address, writing that to file, then grepping the nmap file again using the ip address result file as the input file, and then trimming down the open ports with cut and sed, writing that to file, and then joining both files. This is an ugly process, and it does not work reliably for fringe situations.
Is there an easy way to do this in one line with awk? I would think I would need to have a function in awk to find all open ports and return them so they could be printed along with the IP address, but I have not found how to do that.
Example source data:
Host: 10.0.0.101 ()Ports: 21/closed/tcp//ftp///, 22/closed/tcp//ssh///, 23/closed/tcp//telnet///, 25/closed/tcp//smtp///, 53/closed/tcp//domain///, 110/closed/tcp//pop3///, 139/open/tcp//netbios-ssn///, 143/closed/tcp//imap///, 445/open/tcp//microsoft-ds///, 3389/closed/tcp//ms-wbt-server///
Expected output data:
10.0.0.101 139,445
|
This awk program should do it:
$ echo "Host: 10.0.0.101 ()Ports: 21/closed/tcp//ftp///, 22/closed/tcp//ssh///, 23/closed/tcp//telnet///, 25/closed/tcp//smtp///, 53/closed/tcp//domain///, 110/closed/tcp//pop3///, 139/open/tcp//netbios-ssn///, 143/closed/tcp//imap///, 445/open/tcp//microsoft-ds///, 3389/closed/tcp//ms-wbt-server///" |
awk '{printf "%s\t", $2;
for (i=4;i<=NF;i++) {
split($i,a,"/");
if (a[2]=="open") printf ",%s",a[1];}
print ""}' |
sed -e 's/,//'
10.0.0.101 139,445
Before you edited your question, I had assumed your output would be from a live nmap run in the shell, so I had prepared this answer:
$ nmap -sT 127.0.0.1-3 |
awk '/^Nmap scan report/{cHost=$5;}
/open/ { split($1,a,"/"); result[cHost][a[1]]=""}
END {
for (i in result) {
printf i;
for (j in result[i])
printf ",%s", j ;
print ""} }' |
sed -e 's/,/\t/'
localhost 445,25,139,631,22,80
127.0.0.2 445,139,22,80
127.0.0.3 445,139,22,80
If you need explanations for them, leave a comment. If you can help eliminate the trailing sed calls, or can enhance any of the invocations, please edit.
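On eliminating the trailing sed from the first invocation: one option is to print the comma only between open ports, using a separator variable (a sketch against the sample line from the question):

```shell
# Sample Grepable-output line, taken from the question (truncated to three ports)
line='Host: 10.0.0.101 ()Ports: 21/closed/tcp//ftp///, 139/open/tcp//netbios-ssn///, 445/open/tcp//microsoft-ds///'

out=$(printf '%s\n' "$line" |
  awk '{printf "%s\t", $2; sep="";
        for (i=4; i<=NF; i++) {
          split($i, a, "/")   # a[1]=port, a[2]=state
          if (a[2] == "open") { printf "%s%s", sep, a[1]; sep="," }
        }
        print ""}')
printf '%s\n' "$out"    # 10.0.0.101<TAB>139,445
```

The sep variable is empty before the first open port and "," afterwards, so no leading comma is ever produced and the sed step becomes unnecessary.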
| Parse (grepable) nmap output to print a list of IP\t[all open ports] with text utils like awk |
1,402,528,639,000 |
How can I use nmap to find out the utorrent version installed on a PC by scanning it from another PC on the same subnet?
|
Looking at the nmap-service-probes database, it looks like nmap can't detect which version of uTorrent is running.
| How to remotely detect another machine's utorrent version with nmap? |
1,402,528,639,000 |
Extracting results from a command in a terminal
I ran a nmap scan on my local network using this command:
nmap -sP 192.168.1.*
When I ran that command I get something that looks similar to this:
Nmap scan report for macbook.att.net (192.168.1.21)
Host is up (0.019s latency).
MAC Address: 71:DF:4B:44:80:F1 (Apple)
Nmap scan report for lenovo.att.net (192.168.1.15)
Host is up (0.045s latency).
MAC Address: 21:EA:7D:84:08:A1 (Liteon Technology)
How can I run that command, but only output the results like this:
1. Apple (192.168.1.21)
2. Liteon Technology (192.168.1.15)
What I have tried so far
So far, I have tried to use grep, but it's not working out as well as I expected. I just need to know how to take the results from that nmap scan and organize it in a list with just what's between the "( )" and also the IP Address.
|
You could try the awk command as follows:
nmap -sP 192.168.1.* | awk -F"[)(]" '/^Nmap/{Nmap=$(NF-1); C+=1} /^MAC Address/{print C". "$(NF-1)" ("Nmap")" }'
output,
1. Apple (192.168.1.21)
2. Liteon Technology (192.168.1.15)
explanations:
with awk's -F option you are telling awk that your input fields are delimited by ( and/or ), as specified in the group of delimiters -F"[)(]"
the '/.../{...} /.../{...}' is awk's script body; in your case it will run only the first block /^Nmap/{Nmap=$(NF-1); C+=1}, or the second block /^MAC Address/{...}, or neither, depending on whether the input line starts with the pattern Nmap (or, in the second block, MAC Address); ^ is the start-of-line anchor, pointing to the beginning of a line/record. Whenever a match is found, awk runs the code within that block's braces {...}
what is the first part doing?
As explained above, when a match is found, it holds the value of the second-to-last field in the variable Nmap via $(NF-1) (NF is the number of fields in the record, based on the defined delimiters, and $NF is the last field's value); C+=1 is a counter variable we use to count the matches, which also provides the numbered ID list in the output
what is the second part doing?
The same as above, but when a line matches ^MAC Address it first prints the counter C's value, then a separator, next the second-to-last field of the matched line (the vendor name), and finally the value of Nmap within parentheses, which is the IP from the previously matched line
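To see those two pattern/action blocks in action without touching the network, you can pipe a couple of fabricated nmap-style lines into the same kind of awk program (the print here includes the spaces shown in the desired output):

```shell
out=$(printf '%s\n' \
  'Nmap scan report for macbook.att.net (192.168.1.21)' \
  'Host is up (0.019s latency).' \
  'MAC Address: 71:DF:4B:44:80:F1 (Apple)' |
  awk -F"[)(]" '/^Nmap/{ip=$(NF-1); c+=1}
                /^MAC Address/{print c". "$(NF-1)" ("ip")"}')
printf '%s\n' "$out"    # 1. Apple (192.168.1.21)
```

The "Host is up" line matches neither pattern and is simply skipped.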
| Extracting results from a command in terminal |
1,402,528,639,000 |
$ nmap -p0-65535 192.168.0.142
Starting Nmap 7.60 ( https://nmap.org ) at 2019-03-10 17:53 EDT
Nmap scan report for ocean (192.168.0.142)
Host is up (0.000031s latency).
Not shown: 65531 closed ports
PORT STATE SERVICE
22/tcp open ssh
80/tcp open http
111/tcp open rpcbind
3306/tcp open mysql
33060/tcp open mysqlx
Nmap done: 1 IP address (1 host up) scanned in 11.06 seconds
What is service mysqlx?
Why is it not mapped in /etc/services?
$ cat /etc/services | grep mysql
mysql 3306/tcp
mysql 3306/udp
mysql-proxy 6446/tcp # MySQL Proxy
mysql-proxy 6446/udp
Why is it not part of the command for the process?
$ ps -A | grep mysqlx
Is it possible that nmap can report misleading information? How do you find out the services running on the local machine, if not with nmap?
Thanks.
|
The mysqlx service on port 33060 is the MySQL X DevAPI service.
nmap does not use /etc/services; it uses its own database of services.
Note that anything listening on port 33060 will be reported as the mysqlx service, and that the name of a service does not necessarily have to be part of the name of the command providing the service (both exim and postfix may provide an smtp service, for example).
To see what's listening on port 33060 on the local machine, you may use, on a Linux system,
sudo lsof -i :33060
or
sudo fuser -v 33060/tcp
| How can I find out information about a service? |
1,402,528,639,000 |
All of a sudden, nmap throws the following error after executing the canonical sudo nmap -sP 192.168.109.*:
nmap: Target.cc:503: void Target::stopTimeOutClock(const timeval*):
Assertion `htn.toclock_running == true' failed.
Tried to reboot PC, restart switch, router and grandma but none worked.
Nmap version is 7.80 on Ubuntu 20.
|
As stated here, this bug has been solved in version 7.90.
Since you won't get that via apt-get (7.80 is the most recent in the repo), I solved this by installing nmap via Snap as follows:
sudo apt install snapd
sudo snap install nmap
Check your new nmap version via:
sudo nmap --version
which should report:
Nmap version 7.91 ( https://nmap.org )
If you get the following error when testing nmap:
dnet: Failed to open device [device-name] QUITTING!
run the following:
sudo snap connect nmap:network-control
Cheers!
| Nmap 7.8 Assertion failed: htn.toclock_running == true |
1,402,528,639,000 |
I am running a clean install of Fedora 24, and I am not sure if I accidentally did some weird key combo, but I entered:
nmap -sT -Pn [IPaddress]
and, for some reason, this added a PCI network to my PC, switched to it, and did not let me get back to my usual eth0.
I restarted my PC, and am fine, now.
But what would have caused this network change?
|
AFAIK, by default Nmap does nothing to your network interfaces unless you want it to. I recommend you read Gordon (Fyodor) Lyon's Nmap Network Scanning book.
For instance, if you want to use a different network interface, you should pass the -e option followed by the required interface name, e.g. -e wlan0. So I don't think this can be caused by Nmap.
What you have used is including -sT and -Pn options:
-sT (TCP connect scan)
As we can see in both Nmap manual page and the book I just recommended, TCP connect scan is the default TCP scan type when SYN scan is not an option. This is the case when a user does not have raw packet privileges or is scanning IPv6 networks. Instead of writing raw packets as most other scan types do, Nmap asks the underlying operating system to establish a connection with the target machine and port by issuing the connect system call.
This is the same high-level system call that web browsers, P2P clients, and most other network-enabled applications use to establish a connection. Rather than read raw packet responses off the wire, Nmap uses this API to obtain status information on each connection attempt.
Note that this type of scan, and the FTP bounce scan (-b) are the only scan types available to unprivileged users.
Note that it is not a stealth scan.
-Pn (Ping-less Scan)
In previous versions of Nmap, -Pn was -P0, and -PN.
This option disables ping scan and skips the Nmap discovery stage.
There are many reasons for disabling ping, for instance intrusive vulnerability assessment, and it can be used to bypass when a host is protected by firewall.
CONCLUSION: It shouldn't have been caused by Nmap, at least not by this command you have used.
N.B. BTW, PCI (Peripheral Component Interconnect) is an industry specification for connecting hardware devices to a computer's central processor. So as we can guess both Ethernet and Wi-Fi network adapters for desktop and notebook computers commonly utilize PCI.
| Nmap Changed My Network? |
1,402,528,639,000 |
I am learning Nmap and a thought occurred to me with regards to a SYN scan...
A SYN scan sends an empty TCP packet with the SYN flag set to elicit a response from the target: either RST, indicating that the port is closed, or SYN/ACK, indicating that the port is open.
If the port is being firewalled by iptables then Nmap is either getting an active REJECT response, or iptables will DROP the packets and not respond at all. Either way, Nmap will designate the port as filtered, leading one to believe that the port may be open.
My question then is...Can you make IPTABLES send back an RST on a filtered port instead of just either REJECTING or DROPPING?
My thought is that if this is possible, then you can fool an Nmap -sS scan into reporting that filtered ports are actually closed.
Thanks for any help with this.
|
You can send back a RST with iptables -p tcp [...] -j REJECT --reject-with tcp-reset.
I doubt there is any real value to getting nmap to say a port is "closed" instead of "filtered", though. Mainly it's to get connections refused more quickly, instead of waiting for a timeout (e.g., with -j DROP) or sometimes-unreliable ICMP handling (with the other --reject-with options).
| Can you send a TCP packet with RST flag set using IPTABLES as a way to trick NMAP into thinking a port is closed? |
1,402,528,639,000 |
With nmap, I want to skip the scan on port 80. I'm sure this is in the man somewhere, but I haven't found it so far. My command is simple:
nmap 24.0.0.1/24
So this will scan ports in the 24.0.0.x range; I just need to avoid port 80 (for stealth reasons).
|
You can use the --exclude-ports option. Not sure why this wasn't mentioned earlier. Maybe it's new. I am using Nmap 7.01. So in your case you could simply do:
$ nmap 24.0.0.1/24 --exclude-ports 80
| How to skip (omit) a specific port in nmap |
1,444,532,918,000 |
I have the following command to process nmap output that contains a list of ips that I've been asked to scan:
cat ping-sweep.txt | grep "report for" | cut -d " " -f5
This is providing me a list of only the ip's (one per line) which I'd then like to scan for web servers.
I can scan an individual host with the following:
nmap -v -p 80,443,8080 10.1.1.1
I'd like to perform this scan on every ip in my list however piping it into nmap doesn't appear to work. Do I need to create a bash script with a foreach to do this or is there something simple that I'm missing?
|
The first step would be trying to use Nmap in the way it was designed. Since Nmap performs host discovery ("ping sweep") prior to each port scan, you can do both steps at once with this simple command:
nmap -p 80,443,8080 [TARGETS]
If you really do need to perform the host discovery separately from the port scan, then use Nmap's robust machine-readable output options like XML or Grepable output. For older versions of Nmap, the easiest way would be to do the host discovery and save the Grepable output like so:
nmap -sn [TARGETS] -oG ping-sweep.gnmap
Then you can extract the IP addresses easily with awk:
awk '/Status: Up/{print $2}' ping-sweep.gnmap > targets.txt
and import them directly to Nmap:
nmap -p 80,443,8080 -iL targets.txt
Alternatively, with Nmap 7.00 or newer, you can use the XML output format saved with -oX ping-sweep.xml and the targets-xml NSE script:
nmap -p 80,443,8080 --script targets-xml --script-args newtargets,iX=ping-sweep.xml
With any of these options, if your host discovery scan is recent enough, you can add the -Pn option to skip the host discovery phase of the port scan. This saves you a tiny bit of scan speed, since you ought to be able to count on those same hosts still being up.
What you really should not do is any solution involving loops or xargs, since these will end up launching a separate Nmap instance for each target. This is wasteful and unnecessary, since each instance will have to duplicate the work of loading data files and sending timing probes to monitor network capacity, and the separate instances will be competing with each other for network resources instead of cooperating. Also, you'll have to recombine their separate outputs in the end.
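The extraction step can be sanity-checked offline against simulated Grepable output; the addresses below are invented for the demo:

```shell
# Simulate two lines of Grepable (-oG) host-discovery output;
# the addresses are made up for this demo.
printf '%s\n' \
  'Host: 192.0.2.10 ()  Status: Up' \
  'Host: 192.0.2.11 ()  Status: Down' > ping-sweep.gnmap

# Hosts that answered: print the second field (the IP address).
awk '/Status: Up/{print $2}' ping-sweep.gnmap > targets.txt
cat targets.txt   # 192.0.2.10
```

Only the host marked "Up" survives, ready to be fed back to Nmap with -iL.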
| How to pass multiple lines to a parameter without a for loop? |
1,444,532,918,000 |
I am trying to use Nmap's host discovery function with the -PE (ICMP Echo Ping) option to discover hosts on my local network (virtual machines on a bridged connection).
So when I run:
nmap -PE 192.168.0.0/24
I expect Nmap to send ICMP Echo Pings, but Nmap only sends classic TCP requests:
Indeed, I found this in the documentation:
Also
note that ARP/Neighbor Discovery (-PR) is done by default against
targets on a local ethernet network even if you specify other -P*
options, because it is almost always faster and more effective.
But I really need to test the host discovery using ICMP Echo Request (-PE option).
How can I force Nmap to do an ICMP Echo Request discovery even if I am on a local ethernet network? Or is there another workaround?
|
You have to disable arp-ping:
nmap -sP -PE --disable-arp-ping 192.168.56.1
| How to force Nmap to use -PE option on local network? |
1,444,532,918,000 |
[root@notebook ~] lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 12.04.4 LTS
Release: 12.04
Codename: precise
[root@notebook ~] dpkg -l nmap | grep ^ii
ii nmap 5.21-1.1ubuntu1 The Network Mapper
[root@notebook ~] wget -q https://svn.nmap.org/nmap/scripts/ssl-heartbleed.nse -O /usr/share/nmap/nselib/ssl-heartbleed.nse
[root@notebook ~] wget -q https://svn.nmap.org/nmap/nselib/tls.lua -O /usr/share/nmap/nselib/tls.lua
[root@notebook ~] wget -q https://svn.nmap.org/nmap/nselib/sslcert.lua -O /usr/share/nmap/nselib/sslcert.lua
[root@notebook ~] wget -q https://svn.nmap.org/nmap/nselib/asn1.lua -O /usr/share/nmap/nselib/asn1.lua
[root@notebook ~] wget -q https://svn.nmap.org/nmap/nselib/stdnse.lua -O /usr/share/nmap/nselib/stdnse.lua
[root@notebook ~] nmap -p 443 --script ssl-heartbleed www.ssllabs.com
Starting Nmap 5.21 ( http://nmap.org ) at 2014-06-25 07:49 CEST
NSE: failed to initialize the script engine:
/usr/share/nmap/nselib/stdnse.lua:59: attempt to index field 'socket' (a nil value)
stack traceback:
/usr/share/nmap/nselib/stdnse.lua:59: in main chunk
[C]: in function 'require'
/usr/share/nmap/nse_main.lua:95: in main chunk
[C]: ?
QUITTING!
[root@notebook ~] cat /usr/share/nmap/nselib/stdnse.lua
...
50 --- Sleeps for a given amount of time.
51 --
52 -- This causes the program to yield control and not regain it until the time
53 -- period has elapsed. The time may have a fractional part. Internally, the
54 -- timer provides millisecond resolution.
55 -- @name sleep
56 -- @class function
57 -- @param t Time to sleep, in seconds.
58 -- @usage stdnse.sleep(1.5)
59 _ENV.sleep = nmap.socket.sleep;
...
My question: What is the problem?
The many "wget's" before the nmap is because nmap said before it's missing modules.
|
In version 6.25, Nmap switched the language of the Nmap Scripting Engine (NSE) from Lua 5.1 to Lua 5.2. This means that you must be using at least version 6.25 in order to use the scripts on nmap.org.
Ubuntu 12.04 only has Nmap 5.21 available in its repositories, but any release after 13.10 will have a compatible version (6.40 specifically). Upgrading your OS may be too much for your needs, so you may want to install from source instead.
I've put together a guide for scanning for Heartbleed with Nmap that many folks have found helpful.
| How to scan for heartbleed vulnerability with nmap from Ubuntu 12.04? |
1,444,532,918,000 |
I wrote a very basic python script to port scan my system. I'm running linux-mint lisa:
open_ports = []
for port in xrange(65536):
conn = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
conn.connect(('localhost', port))
open_ports.append(port)
conn.close()
except socket.error:
pass
I returned a list of 7 ports and I ran netstat on each one:
sudo netstat -anlp | grep :(each port here)
I found that the first four were for cups, mysql, polipo, and tor but, the last three [44269, 46284, 47650] were much higher numbers and I didn't get anything back. I ran the script a few times more after this but, I would only return the first four.
Any ideas what they could be and what they're being used for?
|
Outbound traffic is normally sent from the higher (ephemeral) ports. Your port scan happened while a TCP/UDP session was in progress, and the session ended before you ran the subsequent netstat commands.
| Running a local port scan and found open ports but, don't know what they're used for? |
1,444,532,918,000 |
Is there a way to find my type of router from the command line? My IP address is the standard 192.168.1.1 IP address.
|
You could use curl http://192.168.1.1 just to get the HTML of the login page. It probably says on it.
Also, you could use arp -a to get the MAC of the router, and then look up the first 6 digits to see what hardware vendor it is.
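For example, the vendor prefix (OUI) is just the first three octets of the MAC, which can be cut out of an arp -a line like this; the sample line below is invented:

```shell
# A made-up 'arp -a' line; field 4 is the MAC address.
line='router (192.168.1.1) at 10:7b:44:40:61:70 [ether] on wlan0'

# The first three octets form the OUI, which identifies the vendor.
echo "$line" | awk '{split($4, m, ":"); print m[1] ":" m[2] ":" m[3]}'
# prints: 10:7b:44
```

You can then look up the printed prefix in any public OUI database to find the hardware vendor.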
| Find type of router |
1,444,532,918,000 |
I have a Fedora 24 server, serving an Angular2 project. Angular2 automatically opens ports 3000 and 3001 once the service is started. However, although running nmap localhost shows the ports are open, when I run an nmap from a remote computer, these ports are showing as closed.
Is there an iptables setting I can use to open these ports publicly, perhaps? Or anything similar?
I tried running:
iptables -A OUTPUT -p tcp -m tcp --dport 3000 -j ACCEPT
iptables -A OUTPUT -p tcp -m tcp --dport 3001 -j ACCEPT
But this has not helped and the ports remain closed when scanned from outside, and I cannot view the served content (internal requests function fine).
Output of netstat -an | egrep "3000|3001":
tcp6 0 0 :::3000 :::* LISTEN
tcp6 0 0 :::3001 :::* LISTEN
A curl to the server's 'external' IP address works fine internally but won't work when run from other machines.
|
In the end the solution was to run the following command:
firewall-cmd --zone=FedoraServer --add-port=3000/tcp
It seems that on Fedora 24 (or perhaps Fedora 24 as set up on Linode), iptables rules alone don't take effect, because the firewall is managed by firewalld.
| Fedora 24: ports show as open when scanned from server, but closed when nmapped from outside |
1,444,532,918,000 |
Cannot seem to probe virtual guests from the virtual host.
These guests can be probed from other devices on the same LAN/Network, but not the host. I can understand why it might be struggling, but I am wondering if anyone ever found a way to make it work.
HOST: OSX 10.6
GUEST: FreeBSD 8 (two of them)
Edit:
Adding some finer details, I have networking set to "bridged", I can ping and regularly consume services running on TCP/IP on both guests. All of my nmap probe attempts are done from the root account on the host.
|
The KEXT for the network card somehow had gotten corrupt. Reloaded from the install CD and then I was able to hit the guests with various NMAP probes.
| NMAP probing VirtualBox Client |
1,444,532,918,000 |
$ nmap -n -iR 0 -sL > RANDOM-IPS-TMP.txt
$ grep -o "[0-9]*\.[0-9]*\.[0-9]*\.[0-9]*" RANDOM-IPS-TMP.txt |
egrep -v "10.*|172.[16-32].*|192.168.*|[224-255].*" > RANDOM-IPS.txt
egrep: Invalid range end
$
How can I generate random IP Addresses outside the private IP address range and the multicast IP Address range?
|
You misunderstand regex syntax. [16-32] does not mean "match 16, 17, ... or 32". It means "match one character which is either 1 or 2 or in the range 6-3" (which is not a valid range, hence the error).
It's possible to write a regex to match a range of integers, but it's complex and error prone. In your case, it would be much easier to use nmap's --exclude option to exclude the ranges you don't want. It understands CIDR notation, which is a much simpler way to describe the ranges you're talking about.
nmap -n -iR 0 --exclude 10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,224-255.-.-.- -sL >RANDOM-IPS.txt
You didn't mention the loopback block (127.0.0.0/8), but you probably ought to exclude that too.
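For comparison, here is what a correct extended regex for those ranges looks like when filtering with grep — noticeably more fiddly than CIDR notation. The sample addresses are made up:

```shell
# 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16 and 224-255.* spelled
# out as an extended regex; compare with the one-line CIDR version.
printf '%s\n' 203.0.113.7 10.1.2.3 172.20.0.5 192.168.1.9 224.0.0.1 8.8.8.8 |
grep -Ev '^10\.|^172\.(1[6-9]|2[0-9]|3[01])\.|^192\.168\.|^(22[4-9]|23[0-9]|24[0-9]|25[0-5])\.'
# prints only 203.0.113.7 and 8.8.8.8
```

Note how matching the numeric ranges 16-31 and 224-255 requires spelling out each band of tens, which is exactly why nmap's --exclude is the easier tool here.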
| How to generate random IP addresses |
1,444,532,918,000 |
I want to make a script which will check if a port is open on a server. If it is not open, stay in a while loop; if open, continue. The break conditions I use are that "Host is up" is present and "closed" is not, in which case I assume the connection is ok. The problem is that the grep is not working as expected.
I have tried with following:
while true; do
NMAP=$(nmap -p 1700 router.eu.thethings.network)
if [[$NMAP | grep "Host is up" -ne ""]] && [[$NMAP | grep "closed" -eq ""]]; then
echo "connection!!!"
break
fi
echo "waiting for connection"
done
I run it on a raspberry pi jessie system.
|
The problem is that [[$NMAP | grep "Host is up" -ne ""]] is very far from valid bash syntax. The error messages don't tell you exactly how to fix it, but they are a hint that something is seriously wrong.
[[ expression ]] requires spaces inside the brackets. See Brackets in if condition: why am I getting syntax errors without whitespace?
| is the pipe operator between commands. It isn't an operator in conditional expressions. In fact [[ foo | bar ]] is parsed as the command [[ foo piped into the command bar ]], which doesn't do anything useful.
The -eq operator compares integers, but what you put around it aren't integers.
To test whether a string contains a substring, you can either use the == operator in a conditional expression, or a pipe through grep (which doesn't involve a conditional expression). With grep, you're not running $NMAP as a command, you want to pass this as input to grep, so you need to echo it into the pipe. Pass -q to grep since you only care about its return status, not about its output.
if echo "$NMAP" | grep -q "Host is up" && ! echo "$NMAP" | grep -q "closed"; then …
With a conditional expression:
if [[ $NMAP == *"Host is up"* && $NMAP != *"closed"* ]]; then …
Do read Confused about operators [[ vs [ vs ( vs (( which explains about conditional expressions and how they aren't the only way to test whether a condition is true. Also, read Why does my shell script choke on whitespace or other special characters?
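Putting the pieces together, a minimal self-contained sketch of the break condition — the nmap output here is canned text, not a real scan:

```shell
# Canned text standing in for $NMAP; no real scan is run.
NMAP='Nmap scan report for router.example
Host is up (0.0042s latency).
PORT     STATE SERVICE
1700/tcp open  unknown'

# "Host is up" must be present and "closed" must be absent.
if echo "$NMAP" | grep -q "Host is up" && ! echo "$NMAP" | grep -q "closed"; then
    echo "connection!!!"
fi
```

Swap the canned text for the real nmap invocation and add the surrounding while loop to get the behaviour the question asks for.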
| Nmap check if port is open in bash |
1,444,532,918,000 |
I have a small snippet which gives me some ips of my current network:
#!/bin/bash
read -p "network:" network
data=$(nmap -sP $network | awk '/is up/ {print up}; {gsub (/\(|\)/,""); up = $NF}')
it returns ip addresses like this
10.0.2.1
10.0.2.15
and so on.
now I want to make them look like this:
10.0.2.1, 10.0.2.15, ...
I'm a total bash noob, please help me :)
|
If you need exactly ", " as separator, you could use
echo "$data" | xargs | sed -e 's/ /, /g'
or if you are enough with comma as separator, then
echo "$data" | paste -sd, -
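Both variants can be tried on a hard-coded list; the addresses here are placeholders:

```shell
# Stand-in for the nmap-derived list of addresses.
data='10.0.2.1
10.0.2.15
10.0.2.27'

echo "$data" | xargs | sed -e 's/ /, /g'   # 10.0.2.1, 10.0.2.15, 10.0.2.27
echo "$data" | paste -sd, -                # 10.0.2.1,10.0.2.15,10.0.2.27
```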
| separating an array into comma separated values |
1,444,532,918,000 |
I run nmap on a Lubuntu machine using its own private IP address.
What are those "unknown" services?
How can I find them out? Is fuser supposed to find that out?
Thanks.
$ nmap -p0-65535 192.168.1.198
Starting Nmap 7.60 ( https://nmap.org ) at 2019-03-19 23:32 EDT
Nmap scan report for olive.fios-router.home (192.168.1.198)
Host is up (0.00050s latency).
Not shown: 65526 closed ports
PORT STATE SERVICE
22/tcp open ssh
111/tcp open rpcbind
139/tcp open netbios-ssn
445/tcp open microsoft-ds
2049/tcp open nfs
5900/tcp filtered vnc
41441/tcp open unknown
43877/tcp open unknown
44847/tcp open unknown
55309/tcp open unknown
Nmap done: 1 IP address (1 host up) scanned in 6.22 seconds
|
What are those "unknown" services?
Those services are "unknown" because they are not listed in nmap's services file, which nmap uses to map port numbers to service names. On my system, nmap uses /usr/share/nmap/nmap-services.
I found out where the file is located by doing (I am on devuan, a debian-based system like ubuntu or mint):
$ dpkg -L nmap
On RedHat/Suse based systems, you use rpm -ql nmap.
How can I find them out? Is fuser supposed to find that out?
fuser is nmap's friend: for each unknown port, simply run fuser <port>/<protocol> (matching the first column of what nmap prints out):
$ fuser 41441/tcp
41441/tcp 1234
This will give you the pid of the process (above example, 1234) which you can pass to ps
$ ps 1234
PID TTY STAT TIME COMMAND
1234 ? Sl 169:39 /usr/lib/jvm/java-8-oracle/bin/java -Dnop -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager [...]
In my example, it is actually Apache Tomcat ...
Now, I searched for 41441 in /usr/share/nmap/nmap-services and replaced the "unknown" with "tomcat":
tomcat 41441/tcp
Now, nmap detects my tomcat:
$ nmap -p0-65535 192.168.1.198
Starting Nmap 7.60 ( https://nmap.org ) at 2019-03-19 23:32 EDT
Nmap scan report for olive.fios-router.home (192.168.1.198)
Host is up (0.00050s latency).
Not shown: 65526 closed ports
PORT STATE SERVICE
22/tcp open ssh
111/tcp open rpcbind
139/tcp open netbios-ssn
445/tcp open microsoft-ds
2049/tcp open nfs
5900/tcp filtered vnc
41441/tcp open tomcat
| What are those "unknown" services listed by nmap? |
1,444,532,918,000 |
I am using CentOS 6.5 and Nmap 5.51
I want to find all alive IPs in a LAN between two IPs
Easily get the answer
sudo nmap -sP 192.168.1.100-200
My problem: when my network can access the internet, the total time spent is 1.78 seconds, but when it can't access the internet, the total time spent is 17.79 seconds.
output with internet
[mgmt_user@Management root]$ sudo nmap -sP 192.168.1.100-200
Starting Nmap 5.51 ( http://nmap.org ) at 2014-05-21 23:05 EEST
Nmap scan report for 192.168.1.106
Host is up (0.00054s latency).
MAC Address: 08:00:27:93:2E:C5 (Cadmus Computer Systems)
Nmap scan report for 192.168.1.126
Host is up (0.0013s latency).
MAC Address: 00:16:3E:63:64:A0 (Xensource)
Nmap done: 101 IP addresses (2 hosts up) scanned in 1.78 seconds
output without internet
[mgmt_user@Management root]$ sudo nmap -sP 192.168.1.100-200
Starting Nmap 5.51 ( http://nmap.org ) at 2014-05-21 23:02 EEST
Nmap scan report for 192.168.1.106
Host is up (0.00042s latency).
MAC Address: 08:00:27:93:2E:C5 (Cadmus Computer Systems)
Nmap scan report for 192.168.1.126
Host is up (0.0011s latency).
MAC Address: 00:16:3E:63:64:A0 (Xensource)
Nmap done: 101 IP addresses (2 hosts up) scanned in 17.79 seconds
I repeated the command many times and got the same timings.
Is there any link between Nmap and the internet?
|
Your nmap is trying to query DNS servers to resolve the hostnames associated with the IP addresses you're scanning.
Because it cannot reach them, each lookup times out, and you get the extra delay in the meantime.
Use the -n option with nmap to avoid this. That would be:
sudo nmap -n -sP 192.168.1.100-200
If you had a properly configured local DNS server however, it would probably have answered quickly (usually saying that no hostname corresponds) and you wouldn't have noticed this problem in the first place.
| Nmap too slow with a network that can't access to internet |
1,444,532,918,000 |
I'm trying to connect to a second-hand external wifi camera. It has an ethernet slot and a sticker with the MAC address but no other branding or model/serial numbers.
I am trying to find its IP address.
My current plan is to connect an ethernet cable directly between my machine and this camera, then scan all reserved private IPv4 ranges with nmap:
ip addr add 10.0.0.1/8 dev eno2
ip addr add 172.16.0.1/12 dev eno2
ip addr add 192.168.0.1/16 dev eno2
nmap -sn 192.168.0.0/16 172.16.0.0/12 10.0.0.0/8
But this could take a long time (I'm guessing about 74 hours) and I can't be sure that this device isn't using IPv6. Is there a better solution?
|
You could install a DHCP server and then check its logs for the IP address the camera gets.
Alternatively you could run tcpdump to see any devices talking on your LAN.
You can monitor the ethernet port with tcpdump:
sudo tcpdump -A -i eno2
In my case, I got the following which seems to confirm that the device has no IP and is indeed communicating (MAC was correct)
11:26:29.247184 IP 0.0.0.0.bootpc > 255.255.255.255.bootps: BOOTP/DHCP, Request from xx:xx:xx:xx:xx:Xx (oui Unknown), length 291
Install and configure a DHCP server:
sudo apt install isc-dhcp-server
sudo ip addr add 192.168.2.1/24 dev eno2
Add the following to /etc/dhcp/dhcpd.conf:
subnet 192.168.2.0 netmask 255.255.255.0 {
range 192.168.2.10 192.168.2.20;
}
Set INTERFACESv4="eno2" in /etc/default/isc-dhcp-server.
sudo systemctl restart isc-dhcp-server.service
Now check the journal to see if any IP addresses were issued:
sudo journalctl -u isc-dhcp-server.service
Nov 24 11:31:11 simswe24 systemd[1]: Started LSB: DHCP server.
Nov 24 11:31:12 simswe24 dhcpd[14238]: DHCPOFFER on 192.168.2.10 to xx:xx:xx:xx:xx:xx (BV-CAM06S) via eno2
Nov 24 11:31:12 simswe24 dhcpd[14238]: DHCPREQUEST for 192.168.2.10 (172.16.0.1) from xx:xx:xx:xx:xx:xx (BV-CAM06S) via eno2
Nov 24 11:31:12 simswe24 dhcpd[14238]: DHCPACK on 192.168.2.10 to xx:xx:xx:xx:xx:xx (BV-CAM06S) via eno2
| How to get IP addr from MAC |
1,444,532,918,000 |
I'm trying to install Zenmap after installing Nmap, however it's not quite working. I tried the regular terminal command dnf install zenmap, but it tells me it is unable to find a match.
I then went to the official website to download the RPM file and tried using the command 'rpm -i filename.rpm' which told me I needed to download PyGTK which I did and it worked. However, now when I try to run zenmap, it shows me
File "/usr/bin/zenmap", line 182 except ImportError, e: SyntaxError: invalid syntax
When I try launching zenmap from the search, it shows Zenmap GUI port Scanner but when I try clicking on it, there's a brief flash on my screen and then it's gone. I tried looking for solutions but there's only 2 when I search and neither of them has an answer to it.
I'm using Fedora 31 with KDE Plasma.
|
Apparently Zenmap reached EOL in F28 because it relies on now deprecated Python 2.
See the issue on github:
Zenmap and Ndiff are python2 only #1176
You should still be able to make it work by installing (deprecated) Python 2 and the necessary modules.
If I look at the source code, zenmap relies on /usr/bin/env python, which on your system would normally default to Python 3 instead of Python 2, and that's probably why you are getting that syntax error. Either edit the launchers or explicitly call Zenmap with python2, e.g.: python2 /path/to/zenmap.
| How to install zenmap on Fedora |
1,444,532,918,000 |
I am remotely connected to a system using ssh and want to run nmap on a system from there. But every 5 minutes, my SSH connection breaks and so the process running on my shell stops.
How can I run nmap in the background so that any SSH session can interact with the process?
|
You could use nohup. But screen is what you are probably looking for.
| How to run Bash Script in background [duplicate] |
1,444,532,918,000 |
I just noticed that my server is being blocked for rsync from a firewall outside of my server, so I can't rsync to any target. Now, I would also like to know what are all the ports that are being blocked by that firewall.
Is there any way to use nmap to do that? I know I can use nmap to scan the opened ports in a specific target, but what I want is to know what ports are closed in my server to send packets out.
|
No, you cannot use nmap on a computer to see how that same computer looks from the outside.
By definition, the packets won't travel across the network, and packets traveling is the whole basis of the internet.
You need a router, printer, light-bulb or external computer that can run some commands, and use it to look back at your computer.
I believe you can send outgoing packets from any port to see which ports are blocked in the outgoing direction without any security issue. But even to do that, you need some external address to send your packets to.
Trying to scan such a firewalled computer from the outside may easily be seen as an attack, and you may get banned or blocked even more than you are now. To actually perform such a scan from the outside, it is best to inform the network managers of the system and ask for their permission.
| How to scan outbound closed ports with nmap? |
1,444,532,918,000 |
I have a list of IPs and I need to check them for opened ports using nmap.
So far, my script is like this:
#!/bin/bash
filename="$1"
port="$2"
echo "STARTING NMAP"
while IFS= read -r line
do
nmap --host-timeout 15s -n $line -p $2 -oN output.txt | grep "Discovered open port" | awk {'print $6'} | awk -F/ {'print $1'} >> total.txt
done <"$filename"
It works great but it's slow and I want to check, for example, 100 IPs from the file at once, instead of running them one by one.
|
Here's one way:
#!/bin/bash
filename="$1"
port="$2"
echo "STARTING NMAP"
## Read the file in batches of 100 lines; feeding the loop from
## process substitution keeps it in the current shell so that 'wait'
## can see the background jobs (a pipeline would run the loop in a
## subshell), and sed handles a final partial batch correctly.
total=$(wc -l < "$filename")
for ((i=1; i<=total; i+=100)); do
    while IFS= read -r line
    do
        ## Launch the command in the background
        nmap --host-timeout 15s -n "$line" -p "$port" -oN output.txt |
        grep "Discovered open port" | awk '{print $6}' |
        awk -F/ '{print $1}' >> total.txt &
    done < <(sed -n "${i},$((i+99))p" "$filename")
    ## Wait for this batch to finish before moving to the next one
    ## so you don't spam your CPU
    wait
done
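The batch-and-wait pattern can be exercised without nmap by substituting a harmless placeholder command; this sketch cuts each batch out of the file with sed and redirects it from a temp file so that wait sees the background jobs:

```shell
#!/bin/bash
# Demo of batched background jobs; 'echo' stands in for the nmap call.
printf 'host%d\n' 1 2 3 4 5 > hosts.txt
rm -f results.txt
batch=2
total=$(wc -l < hosts.txt)

for ((start=1; start<=total; start+=batch)); do
    # Select this batch of lines (handles a final partial batch too).
    sed -n "${start},$((start+batch-1))p" hosts.txt > batch.txt
    while IFS= read -r line; do
        echo "scanned $line" >> results.txt &   # placeholder for nmap
    done < batch.txt
    wait   # let this batch finish before launching the next
done
sort results.txt
```

Because the loop reads from a file rather than the end of a pipeline, the backgrounded jobs belong to the current shell and wait actually throttles them.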
| bash multi-threads |
1,444,532,918,000 |
I'd like to print all ips (IP space Port) which have open https ports given a gnmap file.
An example output for a line that has only https running on port 443:
123.123.123.123 443
A more elaborate example input and desired output (not all test-cases in there):
Host: 123.123.123.123 () Ports: 80/open/tcp//http?///, 443/open/tcp//https?///, 8083/closed/tcp//us-srv///, 65001/closed/tcp///// Ignored State: filtered (65531) Seq Index: 262 IP ID Seq: Randomized
Host: 123.123.123.124 () Ports: 80/open/tcp//http?///, 443/open/tcp//https?///, 10443/open/tcp//https///, 65001/closed/tcp///// Ignored State: filtered (65531) Seq Index: 262 IP ID Seq: Randomized
Host: 123.123.123.125 () Ports: 80/open/tcp//http?///, 443/open/tcp//https?///, 8083/closed/tcp//us-srv///, 8445/open/tcp//https///, 65001/closed/tcp///// Ignored State: filtered (65531) Seq Index: 262 IP ID Seq: Randomized
Host: 123.123.123.126 () Ports: 1337/open/tcp//https?///, 8083/closed/tcp//us-srv///, 65001/closed/tcp///// Ignored State: filtered (65531) Seq Index: 262 IP ID Seq: Randomized
The output for this file should be:
123.123.123.123 443
123.123.123.124 443
123.123.123.124 10443
123.123.123.125 443
123.123.123.125 8445
123.123.123.126 1337
What would be the awk solution for this?
|
If I understood the “more than one open https port per host” case correctly, this should be a generic solution handling it (the [?]? makes the trailing question mark that nmap appends to unconfirmed services optional):
awk '{for(i=5;i<=NF;i++)if($i~"/open/.+/https[?]?/"){sub("/.*","",$i); print $2" "$i}}' nmap-synscan.gnmap
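The approach can be checked against two of the sample lines from the question; the [?]? in the pattern below makes the trailing question mark on unconfirmed services optional:

```shell
# Two sample lines from the question, saved for testing.
cat > sample.gnmap <<'EOF'
Host: 123.123.123.124 () Ports: 80/open/tcp//http?///, 443/open/tcp//https?///, 10443/open/tcp//https///, 65001/closed/tcp///// Ignored State: filtered (65531)
Host: 123.123.123.126 () Ports: 1337/open/tcp//https?///, 8083/closed/tcp//us-srv///, 65001/closed/tcp///// Ignored State: filtered (65531)
EOF

# Walk the port fields (5..NF); for each open https entry, strip
# everything after the port number and print "IP port".
awk '{for(i=5;i<=NF;i++)if($i~"/open/.+/https[?]?/"){sub("/.*","",$i); print $2" "$i}}' sample.gnmap
```

This prints 443 and 10443 for the first host and 1337 for the second, matching the desired output.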
| Use awk to find all Ports for each IP that have https open |
1,444,532,918,000 |
This was a question on my study guide, and I believe that the script is pinging the ip addresses, so the 2nd choice. Can somebody confirm this for me?
The 5th column of NMAP output below is the IP address. Armed with that information, what does the script below do?
nmap -n -p80 -sS -PN --open 192.168.1.0/24 | grep "Nmap scan report" | awk '{print $5}' | while read IP; do ping -c 1 $IP && echo "I can ping $IP" ; done
Select one:
It will resolve the network names of the hosts running web servers
It will ping all of the IP addresses between 192.168.1.0 and 192.168.1.255
It will search the network for Telnet servers
It will see if ICMP ECHO REPLY is returned from all hosts running web servers in the 192.168.1.1-255 range
It will connect to every web page on the network and download the banner
|
Firstly, I like to check the linux man(ual) pages for questions like this. Also of note, is that this script uses piping: http://en.wikipedia.org/wiki/Pipeline_%28computing%29
For example, by opening terminal and typing man nmap, we can see what nmap does and what each argument means
From the man page for nmap
Nmap (“Network Mapper”) is an open source tool for network exploration
and security auditing.
-n/-R: Never do DNS resolution/Always resolve [default: sometimes]
-p <port ranges>: Only scan specified ports
-sS/sT/sA/sW/sM: TCP SYN/Connect()/ACK/Window/Maimon scans
-Pn (No ping) . This option skips the Nmap discovery stage altogether.
--open (Show only open (or possibly open) ports) .
So, from that, it seems we're doing some kind of scan without DNS resolution, only on port 80, with a SYN packet and while skipping the discovery stage. Also, it seems we're only interested in open ports, and we're doing this for everything that seems to match 192.168.1.*.
This is because the address is 192.168.1.0/24, where 24 corresponds to the number of fixed leading bits in the IP (another lesson!). 192.168.1.1/16 would mean anything on 192.168.*.*, 192.168.1.1/8 would mean anything on 192.*.*.*, and so on.
Grep will scan the input and print out the lines that match your query. In this case, it will run through everything that Nmap tells you, and print only the lines that contain "Nmap scan report".
When I run:
sudo nmap -n -p80 -sS -PN --open 192.168.1.1 | \
grep "Nmap scan report"
the result is: "Nmap scan report for 192.168.1.1"
From there, the line "Nmap scan report for 192.168.1.1" gets pushed into the awk input. From the manpage: "mawk - pattern scanning and text processing language"
In this context, awk is taking the "Nmap scan report for 192.168.1.1" that I mentioned, and isolating just the IP address. Indeed, we can cheat a little bit by testing this in terminal:
echo Nmap scan report for 192.168.1.1 | awk '{print $5}'
will spit out: 192.168.1.1
Now, there's the while loop going on:
while read IP; do ping -c 1 $IP && echo "I can ping $IP" ; done
The while loop says: while there's an IP to be read, ping it once (-c 1 means one packet), and on success (&&) print to the terminal (echo) "I can ping <IP>", before that iteration of the loop comes to an end. What does ping do?
Let's check the manpage: "ping, ping6 - send ICMP ECHO_REQUEST to network hosts"
So for all of the IP addresses that nmap determined had port 80 open, the computer should ping each one once (send an ICMP ECHO_REQUEST) and print that it has done so.
Now, port 80 is used for HTTP traffic, in other words, web server type stuff. Because nmap has been told to look at open port 80, a host (internet enabled computer) with a closed port 80 will be ignored. For that reason, I would say your best bet is likely "It will see if ICMP ECHO REPLY is returned from all hosts running web servers in the 192.168.1.1-255 range."
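The text-processing stages of the command can be tried in isolation, with a faked report line instead of a live scan:

```shell
# One faked line of nmap output; no network traffic involved.
echo "Nmap scan report for 192.168.1.1" |
grep "Nmap scan report" |
awk '{print $5}' |
while read IP; do
    echo "would ping $IP"   # the real script runs: ping -c 1 $IP
done
# prints: would ping 192.168.1.1
```

Each stage behaves exactly as it does in the full command; only the data source and the ping are stubbed out.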
| What does the following script do? [closed] |
1,444,532,918,000 |
I am having problem with viewing hostnames of devices located in my LAN.
On my first laptop (Ubuntu 18.04 LTS Desktop edition) result of following command:
arp -a
Is exactly what I want:
X (192.168.56.243) at 40:a3:cc:99:2d:66 [ether] on wlan0
test-test-test (192.168.56.146) at 48:bf:6b:e3:bf:5a [ether] on wlan0
TP-Link_Archer_ (192.168.56.1) at 10:7b:44:40:61:70 [ether] on wlan0
Using nmap, I am able to scan my LAN using this command:
nmap -sn 192.168.56.0/24
And I get the perfect results with hostnames:
Starting Nmap 7.60 ( https://nmap.org ) at 2018-08-07 09:07 EDT
Nmap scan report for TP-Link_Archer_ (192.168.56.1)
Host is up (0.0054s latency).
Nmap scan report for ZZ (192.168.56.156)
Host is up (0.00045s latency).
However, on another laptop with Debian 9 x64 Minimal installed, the whole hostnames part is missing.
Whenever I issue arp -a, I get the following:
? (192.168.56.243) at 40:a3:cc:99:2d:66 [ether] on wlan0
? (192.168.56.146) at 48:bf:6b:e3:bf:5a [ether] on wlan0
? (192.168.56.1) at 10:7b:44:40:61:70 [ether] on wlan0
Moreover, scanning with nmap -sn 192.168.56.0/24 produces this output:
Starting Nmap 7.60 ( https://nmap.org ) at 2018-08-07 09:17 EDT
Nmap scan report for 192.168.56.1
Host is up (0.0054s latency).
Nmap scan report for 192.168.56.156
Host is up (0.00045s latency).
I have honestly no idea what is going on, most likely I am missing something in Debian Minimal installation, which is installed in Ubuntu. But I have no clue where to find the missing part so Debian machine can start showing me hostnames.
Any ideas?
EDIT:
My /etc/nsswitch.conf is exactly this:
root@zxcv:/home/test# cat /etc/nsswitch.conf
# /etc/nsswitch.conf
#
# Example configuration of GNU Name Service Switch functionality.
# If you have the `glibc-doc-reference' and `info' packages installed, try:
# `info libc "Name Service Switch"' for information about this file.
passwd: compat
group: compat
shadow: compat
gshadow: files
hosts: files dns
networks: files
protocols: db files
services: db files
ethers: db files
rpc: db files
netgroup: nis
However I've setup a VM with bridged network adapter, installed same version of Debian Minimal, and I am able to get the hostnames for all IPs.
The question now is, which packets/modules/services is the failing Debian missing, which prevents from getting hostnames?
|
You don't have access to the DNS server that can translate IP addresses to names.
There are other ways to associate host names and IP addresses, but if you have more than one computer, DNS is your best answer.
And in reply to your comment, yes, /etc/resolv.conf should contain the address of the DNS server. In a small network this is often the same as the router.
| ARP hostnames problem |
1,444,532,918,000 |
Recently installed nmap but I don't know how to use it. I found the documentation, but I'd rather not have to leave the command line every time I try to learn something new with it. I'm still working through a book on Linux so I apologize if this is an easy thing to do.
|
You can use man nmap to read the nmap manual. If you want to read other documentation outside of the man pages, you can save it as a text file and read it in the console with cat or less.
| How do I download documentation and make it accessible from the command line? |
1,444,532,918,000 |
Closing ports except 22 and 443. This dramatically slows down the nmap scans:
-A INPUT -i eth0 -p tcp -m multiport --dports 22,443 -j ACCEPT
-A INPUT -i eth0 -p tcp -j REJECT --reject-with icmp-port-unreachable
If I remove the REJECT rule, nmap is fast.
So how to make other ports look like closed ports without slowing down the nmap?
|
The icmp-port-unreachable responses are rate-limited by the Linux kernel, so nmap often gets no answer at all and has to wait and retransmit its probes — that is what slows the scan down. The 'tcp-reset' reject type instead does exactly what the OS normally does with closed ports, sending an immediate TCP RST:
-A INPUT -i eth0 -p tcp -j REJECT --reject-with tcp-reset
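Putting it together with the accept rule from the question (the eth0 interface name is taken from there), the full pair would look like this:

```shell
# Accept SSH and HTTPS:
iptables -A INPUT -i eth0 -p tcp -m multiport --dports 22,443 -j ACCEPT
# Answer every other TCP port with an immediate RST, which is
# indistinguishable from a genuinely closed port and is not rate-limited
# the way ICMP errors are:
iptables -A INPUT -i eth0 -p tcp -j REJECT --reject-with tcp-reset
```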
| Why REJECT slows nmap? |
1,444,532,918,000 |
I want to set up Nmap on my Ubuntu 14.04 LTS system to detect the Heartbleed vulnerability. I followed the instructions here:
http://cyberarms.wordpress.com/2014/04/20/detecting-openssl-heartbleed-with-nmap-exploiting-with-metasploit/
to create the script files and place them in the proper directory. However, the script throws an execution error.
<error>
|_ssl-heartbleed: ERROR: Script execution failed (use -d to debug)
</error>
So I ran it with -d to debug and get this:
<error>
NSE: Starting ssl-heartbleed against "testsite".com (IP Address:443).
Initiating NSE at 08:28
NSE: ssl-heartbleed against testsite.com (IP Address:443) threw an error!
/usr/bin/../share/nmap/scripts/ssl-heartbleed.nse:77: variable 'keys' is not declared
stack traceback:
[C]: in function 'error'
/usr/bin/../share/nmap/nselib/strict.lua:80: in function '__index'
/usr/bin/../share/nmap/scripts/ssl-heartbleed.nse:77: in function 'testversion'
/usr/bin/../share/nmap/scripts/ssl-heartbleed.nse:232: in function </usr/bin/../share/nmap/scripts/ssl-heartbleed.nse:205>
(...tail calls...)
Completed NSE at 08:28, 0.01s elapsed
The host I scanned sits on public IP space, so I know it's not a firewall issue. I am also the owner of the script files and have execute permissions on them.
|
I wrote this script, and my official guide is available here. The simplest solution is to upgrade to the latest Nmap (version 6.47 as of this writing).
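After upgrading, you can confirm the version and run the bundled script directly — example.com below is a placeholder for your own target:

```shell
# Confirm the installed version is 6.47 or newer:
nmap --version

# Run only the heartbleed check against the TLS port:
nmap -p 443 --script ssl-heartbleed example.com
```

With the bundled script there is no need to copy anything into the scripts directory by hand, which avoids the version mismatch that caused the error above.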
| Nmap script execution to detect heartbleed is failing |
1,444,532,918,000 |
First I try running nmap -sn ip/24 to check for live hosts on a subnet. It reports that all 255 hosts are up, which I know is not true. Running fping -g ip/24 instead shows that 7 hosts are up, which makes more sense.
Now I'm trying to map the network topology with nmap -sn --traceroute ip/24, and again the entire 0-255 range is included. How can I restrict it to just the hosts that fping returned? I figure there has to be some way to pipe that output into the nmap traceroute command, but I have no idea how to do it.
|
You can use the fping output as nmap's target list:
fping -aqg ip/24 | xargs nmap -sn --traceroute
If your problem is that some gateway in your network is giving fake ARP responses (generating false positives), you can use -sn -PE to fix that:
nmap -sn -PE --traceroute ip/24
That way, nmap will only show a host (and trace a route to it) if the host replies to the ICMP echo request (ping).
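If the xargs pipeline is awkward (for example, you want to reuse the host list), nmap's -iL option reads targets from a file, so a two-step variant of the same idea would be:

```shell
# -a: print only alive hosts, -q: quiet, -g: generate targets from the range
fping -aqg ip/24 > live-hosts.txt

# Feed the saved list to nmap instead of a CIDR range:
nmap -sn --traceroute -iL live-hosts.txt
```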
| how to use only certain addresses in subnet for traceroute? |