date | question_description | accepted_answer | question_title
|---|---|---|---|
1,624,962,712,000 |
I am a novice to rsync and have a rather deep directory looking something like this:
source
|
+--foo
| |
| +--bar1
| | |
| | +--bla (more dirs down the line)
| +--bar2
| | |
| | +--...
| +--bar3
| |
| +--bla (more dirs down the line)
...
Now, I'd like to traverse/search the full path and recursively copy all csv files located somewhere inside/after a bla directory (i.e. in all directories sub to bla).
After reading some docs and posts on here, I thought that something similar to
rsync -ravm --include='/**/bla/' --include='*.csv' --exclude='*' source/ .
should do the trick, but whatever combination of includes and excludes I try, it either copies all CSVs from everywhere or nothing.
What am I not seeing here? Any help is greatly appreciated.
I am on bash 4.3.48 and rsync 3.1.1.
|
This seems to do the trick:
rsync -avm --include='*/' --include='foo/*/bla/**.csv' --exclude='*' source/ .
or
rsync -avm --include={'*/','foo/*/bla/**.csv'} --exclude='*' source/ .
--exclude='*' excludes all files
--include='*/' includes all directories
--include='foo/*/bla/**.csv' includes *.csv files in or below any bla directory
(a relative --include='bla/**.csv' would work too if bla is always directly below a foo/barX directory)
I removed the -r option since it is already implied by -a. The -m option prunes empty directories from the transfer.
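Before running rsync for real, the intended selection can be sanity-checked with find. This is only a sketch: it rebuilds a miniature version of the tree from the question, with invented directory and file names.

```shell
# Miniature copy of the tree from the question (names are hypothetical).
tmp=$(mktemp -d)
mkdir -p "$tmp/source/foo/bar1/bla/deep" "$tmp/source/foo/bar2"
touch "$tmp/source/foo/bar1/bla/deep/a.csv"   # below a bla directory
touch "$tmp/source/foo/bar2/b.csv"            # not below a bla directory

# List only the .csv files that sit somewhere below a 'bla' directory.
find "$tmp/source" -path '*/bla/*' -name '*.csv'
```

If find lists exactly the files you expect, the rsync include/exclude chain should select the same set.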
| rsync recursively copy files downstream of subdirectories matching pattern |
1,624,962,712,000 |
I have around 90 websites I need a plugin installed on (wordpress) and I was wondering if there's a way to copy the plugin folder to all of them in a single run (i.e., simultaneously / in parallel).
These are all on the same cPanel so same path, except for the domain name.
The paths look like this:
/home/user/site1/public_html/wp-content/plugins/
/home/user/site2/public_html/wp-content/plugins/
and so on.
I obviously tried the following, but it takes a lot of time :)
scp -r /path/to/local/dir /home/user/site1/public_html/wp-content/plugins/
|
If all destination folders are .../wp-content/plugins/, then you could iterate using the find command, for example like this (assuming you are using bash and the directory names contain no spaces):
for dir in $(find /home/user -name wp-content); do
    [ -d "${dir}/plugins" ] && scp -r /path/to/local/dir "${dir}/plugins/"
done
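If the directory names might contain spaces, a NUL-delimited variant is safer. The sketch below builds a throw-away sandbox standing in for /home/user and uses cp in place of scp purely for local illustration; all paths and names are invented.

```shell
# Sandbox mimicking the layout from the question (hypothetical names).
base=$(mktemp -d)
mkdir -p "$base/site 1/public_html/wp-content/plugins" \
         "$base/site2/public_html/wp-content/plugins"
src=$(mktemp -d)/myplugin
mkdir -p "$src"
touch "$src/plugin.php"

# -print0 and read -d '' survive spaces in directory names.
find "$base" -type d -name wp-content -print0 |
while IFS= read -r -d '' dir; do
    [ -d "$dir/plugins" ] && cp -r "$src" "$dir/plugins/"
done
```

For the real task, replace cp with scp and point find at /home/user.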
| Copy folder to multiple (similar) locations simultaneously |
1,624,962,712,000 |
Attempting to discover a command to copy files if the destination's (not source's) file has not been modified in the last hour.
|
I know of no command that will precisely match your requirement. Something like this should work (remove the --dry-run when you're sure you're happy with the result; replace the --verbose with --quiet if you want it to run more silently):
src=/path/to/source
dst=/path/to/target
comm -z -23 \
<(find "$src" -type f -printf '%P\0' | sort -z) \
<(find "$dst" -type f -mmin -60 -printf '%P\0' | sort -z) |
rsync --dry-run --verbose --archive --from0 --files-from - "$src" "$dst"
It assumes relatively recent utilities that understand how to handle NUL-terminated lines. If necessary, and provided that you can guarantee that no filenames contain newlines, you could remove the three -z flags and rsync's --from0 and replace the \0 in the find commands with \n.
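In isolation, the newline-based variant of that set difference looks like this (the file names are made up for the demonstration):

```shell
s=$(mktemp); d=$(mktemp)
printf '%s\n' a.txt b.txt c.txt | sort > "$s"   # every file under the source
printf '%s\n' b.txt | sort > "$d"               # destination files changed in the last hour
# comm -23 keeps lines unique to the first list: the files safe to copy.
comm -23 "$s" "$d"
```

Here b.txt is suppressed because it appears in the second list, leaving a.txt and c.txt to be fed to rsync.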
| Copy files from one directory to another, ignoring files where the destination's file has been modified in the last hour? |
1,624,962,712,000 |
In a Mac Terminal, I want to find all directories that contain at least one file with the specified extension and copy them somewhere else. I found find . -iname '*.jpg' -exec dirname {} \; which seems to find all directories containing a *.jpg file, but I'm not sure how to copy them. I tried combining it with rsync but couldn't get it to work. What's the best way to do this?
|
Here's a bash-friendly one-liner that copies any folders containing a .jpg into a folder called backup, while maintaining the directory structure:
mkdir backup; for folder in $(find . -type f -name '*.jpg' | sed -r 's|/[^/]+$||' |sort |uniq); do cp -r --parents "$folder" backup; done
First it creates an empty backup folder, then the find command looks for all the files in the current directory that end with .jpg. The sed, sort, and uniq commands trim the find output to just the directory names and remove repeats. Finally, the cp -r --parents in a loop copies the folders over recursively while creating any parent directories that are missing.
you can get around using sed -r by using the -printf flag with find like so
mkdir backup; for folder in $(find . -type f -name '*.jpg' -printf "%h\n" |sort |uniq); do cp -r --parents $folder backup; done
if your find doesn't support -printf you can try using grep
mkdir backup; for folder in $(find . -type f -name '*.jpg' | grep -o "\(.*\)/" |sort |uniq); do cp -r --parents $folder backup; done
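A NUL-safe version of the -printf variant, assuming GNU find, sort, xargs, and cp are available (the sample tree below is invented):

```shell
tmp=$(mktemp -d); cd "$tmp"
mkdir -p "photos/2021" notes
touch "photos/2021/pic.jpg" notes/todo.txt

mkdir backup
# GNU find's %h prints each match's directory; sort -zu de-duplicates NUL-safely.
find . -type f -name '*.jpg' -printf '%h\0' | sort -zu |
    xargs -0 cp -r --parents -t backup
```

Because nothing is split on whitespace, directory names with spaces or other odd characters survive intact. Note that -printf and cp --parents are GNU extensions, so on macOS this needs the GNU tools installed (e.g. gfind, gcp).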
| Copy all folders that contain file(s) with the given extension |
1,624,962,712,000 |
I would like to create a script that can build a LibreOffice Calc (or equivalent) table with the same columns as the following lines from a text file:
[1] 119.0(0.0) 73.0(0.0)
[2] 40.0(0.0) 17.0(0.0)
[3] 574.0(0.0) 469.0(0.0)
[4] 46.0(0.0) 47.0(0.0)
[5] 1.0(0.0) 2.0(0.0)
[6] 24001.0(0.0) 24618.0(0.0)
[7] 61.0(0.0) 91.0(0.0)
[8] 1.0(0.0) 1.0(0.0)
[9] 3910.0(0.0) 3491.0(0.0)
[10] 379.0(0.0) 381.0(0.0)
[11] 458.0(0.0) 445.0(0.0)
[12] 46.0(0.0) 48.0(0.0)
[13] 5598.0(0.0) 5619.0(0.0)
[14] 1653.0(0.0) 1644.0(0.0)
[15] 218.0(0.0) 223.0(0.0)
[16] 2.0(0.0) 2.0(0.0)
[17] 1.0(0.0) 1.0(0.0)
[18] 52.0(0.0) 50.0(0.0)
[19] 52.0(0.0) 55.0(0.0)
[20] 71.0(0.0) 72.0(0.0)
[21] 21.0(0.0) 21.0(0.0)
[22] 2193.0(0.0) 2151.0(0.0)
[23] 424.0(0.0) 433.0(0.0)
[24] 382.0(0.0) 369.0(0.0)
[25] 50.0(0.0) 49.0(0.0)
[26] 237.0(0.0) 233.0(0.0)
[27] 55.0(0.0) 57.0(0.0)
[28] 10539.0(0.0) 11519.0(0.0)
[29] 428.0(0.0) 422.0(0.0)
[30] 872.0(0.0) 897.0(0.0)
[31] 2219.0(0.0) 2198.0(0.0)
[32] 919.0(0.0) 946.0(0.0)
[33] 6.0(0.0) 5.0(0.0)
[34] 12.0(0.0) 12.0(0.0)
[35] 64.0(0.0) 60.0(0.0)
[36] 37.0(0.0) 34.0(0.0)
[37] 26.0(0.0) 27.0(0.0)
[38] 26.0(0.0) 29.0(0.0)
[39] 6.0(0.0) 6.0(0.0)
[40] 8.0(0.0) 7.0(0.0)
[41] 3371.0(0.0) 3366.0(0.0)
[42] 139.0(0.0) 140.0(0.0)
[43] 149.0(0.0) 147.0(0.0)
[44] 147.0(0.0) 151.0(0.0)
[45] 1047.0(0.0) 1027.0(0.0)
[46] 88.0(0.0) 93.0(0.0)
[47] 28.0(0.0) 27.0(0.0)
[48] 2904.0(0.0) 2945.0(0.0)
[49] 122.0(0.0) 114.0(0.0)
[50] 422.0(0.0) 413.0(0.0)
I know how to write content to a new file:
#!/bin/bash
echo "some file content" > /path/to/outputfile
But how do I actually read lines and automate writing them into different columns?
|
I am not sure whether I understood your question properly.
If your objective is to read the text file with spreadsheet software like LibreOffice, you can change the default delimiter from ',' to whitespace and read the fields as columns.
Solution Update:
To create a .csv out of a text file separated by whitespaces, use
sed -e 's/ \{1,\}/,/g' values.txt > values.csv
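Applied to one of the sample lines above, each run of one or more spaces collapses into a single comma:

```shell
# One of the sample lines, run through the same substitution.
printf '[1]    119.0(0.0)  73.0(0.0)\n' | sed -e 's/ \{1,\}/,/g'
# -> [1],119.0(0.0),73.0(0.0)
```

The resulting .csv can then be opened directly in LibreOffice Calc with ',' as the delimiter.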
| Copying a text file into a calculation sheet with a script |
1,624,962,712,000 |
I'm currently trying to organize several thousand files which are named according to what's in them, and the various "tags," if you will, are separated by spaces. So, for example:
foo_bar bar_foo.txt
I'm relatively new to Linux/Unix, and was wondering if there was a way to iterate through every file, create folders based on the tags, and copy the files to those folders?
So we'd end up with:
./foo_bar bar_foo.txt
./foo_bar/foo_bar bar_foo.txt
./bar_foo/foo_bar bar_foo.txt
So far, I've been manually doing everything like this:
mkdir foo_bar
cp *foo_bar* foo_bar/
mkdir bar_foo
cp *bar_foo* bar_foo/
...
Obviously this is pretty time-inefficient, so I'm just looking for a way to automatically do it.
Edit: Some more examples:
Input:
./a b c d.txt
./b a d.txt
./c d e.txt
./d a.txt
Output:
All original files still in parent directory, plus:
./a/a b c d.txt
./a/b a d.txt
./a/d a.txt
./b/a b c d.txt
./b/b a d.txt
./c/a b c d.txt
./c/c d e.txt
./d/a b c d.txt
./d/b a d.txt
./d/c d e.txt
./d/d a.txt
./e/c d e.txt
|
Try this; remove the echo once you are happy with the commands:
ls * | awk '{print $1}' | sort -u | while read a; do echo mkdir -p $a; echo mv ${a}*txt ${a}; done
modified answer
$ ls
a b c d.txt b a d.txt c d e.txt d a.txt
$ ls * | sed "s/.txt//;s/ /\n/g" | sort -u | while read file; do echo mkdir -p $file; echo mv *${file}*.txt ${file}; done
mkdir -p a
mv a b c d.txt b a d.txt d a.txt a
mkdir -p b
mv a b c d.txt b a d.txt b
mkdir -p c
mv a b c d.txt c d e.txt c
mkdir -p d
mv a b c d.txt b a d.txt c d e.txt d a.txt d
mkdir -p e
mv c d e.txt e
Note: as you want the file be present in your original directory.. use cp instead of mv
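An alternative sketch that does the splitting with bash parameter expansion and globbing, avoiding ls parsing entirely; cp keeps the originals in place, as the question asks (the file name here is hypothetical):

```shell
tmp=$(mktemp -d); cd "$tmp"
touch "foo_bar bar_foo.txt"

for f in *.txt; do
    base=${f%.txt}            # strip the extension
    for tag in $base; do      # unquoted expansion word-splits on spaces: the tags
        mkdir -p "$tag"
        cp "$f" "$tag/"
    done
done
```

Globbing with *.txt hands each whole filename (spaces included) to the loop, so only the deliberate unquoted $base is word-split.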
| Create folders from space-delimited filenames and copy files into them |
1,624,962,712,000 |
I am using Linux on my Android phone via use of Linux deploy.
I have some files in /root/android/ - folder of that linux partition.
And I want one of those files to move to my /storage/sdcard1/ - folder of my android.
Now the thing is, all the data of that Linux installation is stored in linux.img on sdcard1.
I literally have no way to move that file from "linux root" to "android sdcard".
|
I found the answer, which is number 3 below.
1 - by uploading/downloading
I went further to check whether any online upload/download site supports use of the command line, and came across
https://transfer.sh. Using it I transferred my file by uploading and downloading.
First, upload your file to that site using the command below, after changing to the folder that contains the file:
curl --upload-file ./yourfilename https://transfer.sh/yourfilename
When you execute this command, it will give you a new URL (e.g. https://transfer.sh/lPTH/yourfilename) from which you can download your file.
To download your file, use the following command:
curl -k "https://transfer.sh/lPTH/yourfilename" -o yourfilename
(Replace the URL in quotes with whatever URL you got when uploading.)
Another way is to just open the URL you got when uploading in any browser.
Enjoy.
2 - using USB
I still can't mount my sdcard (which is built in), but I found a way (actually a command) to mount a USB drive.
Use an OTG cable and connect the USB drive to your phone.
Then:
Use the lsusb command to check whether Linux can detect your USB drive.
Use the blkid command to find the device node of your USB drive (in my case it was /dev/block/sda), which normally will be somewhere under /dev. Also note down its partition type (normally vfat), because we will need it in the mount command.
Now mount the USB drive in Linux with this command:
mount -t vfat /dev/block/sda /mnt
It will mount your USB drive on /mnt, from where you can transfer your files. To check size and free space use df -h.
To unmount:
umount /dev/block/sda
3 - mount sdcard
Just like in number 2, I found my sdcard at
/dev/block/mmcblk1p1
using the blkid command.
So mount it (whatever your sdcard's device node is) on /mnt and exchange the data.
Enjoy.
| Transfer files from linux (of linuxdeploy in android) to android (in which linuxdeploy is installed) |
1,434,824,170,000 |
I want to copy the / directory to another directory, i.e. /Diskless-OS/centos-7/. I have tried using the cp -r command, but it is throwing a "Permission Denied" error.
I'm working on a project where I'm developing a Diskless Booting System. So here, the Diskless Booting Clients, boot using the /Diskless-OS/centos-7/ partition. Therefore I'm trying to copy the / partition.
Please provide me an appropriate command for performing the above.
Images are attached below:
|
As already mentioned by Kusalananda, there are some directories you should not copy.
Therefore, you need to create them manually after copying the ones you need.
If you use Bash as shell, the following two commands should do what you want:
sudo cp -a /{b,e,h,l,m,o,ro,sb,sr,tf,u,v}* /Diskless-OS/
sudo mkdir /Diskless-OS/{dev,proc,run,sys,tmp}
The first command will copy all directories except /dev, /Diskless-OS, /proc, /run, /sys and /tmp.
The second command will then create the directories dev, proc, run, sys and tmp within the Diskless-OS directory.
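The brace-expansion glob can be exercised in a throw-away sandbox before pointing it at /. This sketch assumes a typical set of root directory names (tftpboot stands in for whatever the tf* pattern matches on the real system):

```shell
# Sandbox mimicking a typical root layout (directory names are assumptions).
root=$(mktemp -d); dest=$(mktemp -d)
mkdir -p "$root"/{bin,boot,etc,home,lib,media,mnt,opt,root,sbin,srv,tftpboot,usr,var,dev,proc,run,sys,tmp}
touch "$root/etc/hosts"

# The same glob as above, anchored at the sandbox instead of /.
cp -a "$root"/{b,e,h,l,m,o,ro,sb,sr,tf,u,v}* "$dest"/
mkdir "$dest"/{dev,proc,run,sys,tmp}
```

Listing "$dest" afterwards shows which directories were copied with content and which were recreated empty.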
| Copying / directory to another directory [OS:Centos-7] |
1,434,824,170,000 |
Let's say I have a directory /data/something with the following subdirs:
/data/something/iowa
/data/something/wyoming
/data/something/burkinafaso
/data/something/slovenia
All four subdirs have content. burkinafaso and slovenia are mount points; iowa and wyoming are not. I want to copy the directory structure in such a way that iowa and wyoming get copied recursively with all their subtrees, but burkinafaso and slovenia are copied as empty. cp doesn't seem to have such a switch, unlike du -x and find -xdev. What's the best way to do what I want?
|
On a machine with GNU Coreutils (most Linux distros), the cp command has -x.
From the cp man page:
-x, --one-file-system
       stay on this file system
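Applied to the layout in the question, the copy becomes a one-liner. A minimal sketch follows; note that in this sandbox nothing is actually mounted, so -x has no visible effect here, but on the real tree it copies mount points as empty directories.

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/something/iowa" "$tmp/something/burkinafaso"
touch "$tmp/something/iowa/data"

# -a preserves attributes and recurses; -x stays on the source filesystem,
# so real mount points would be left as empty directories in the copy.
cp -ax "$tmp/something" "$tmp/copy"
```

For the real case this would be something like cp -ax /data/something /backup/.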
| Is there a variant of the "cp -a" command that avoids copying from other filesystems? |
1,434,824,170,000 |
First of all, this isn't homework. I just don't want to dig through a bunch of manuals and learn shell programming from A to Z to do this one thing.
I have a folder /mnt/hdd/files with lots of subfolders where I store files, and I would like to select and copy files randomly to, let's say, /mnt/hdd/temp/1.
Is there a way to select and copy files until the folder /mnt/hdd/temp/1 reaches a given size? (like 5 or 10Gbytes)
The average filesize is around 10mbyte so it'd be cool just to tell the command (script/batch?) the number of files (1000) to select randomly as well.
There are no duplicates and there is only one filetype in the dir tree.
|
One way to do it, assuming filenames don't have embedded newlines:
#! /bin/sh
dest=/mnt/hdd/temp/1
cd /mnt/hdd/files
find . -type f | \
shuf -n 1000 | \
while [ $(du -ks "$dest" | awk '{ print $1 }') -lt 10485760 ] && IFS= read -r fn; do
cp "$fn" "$dest"
done
This will copy random files from /mnt/hdd/files to /mnt/hdd/temp/1 as long as the du size of the destination is less than 10 GB, but not more than 1000 files.
| How to copy random files to a directory |
1,434,824,170,000 |
I have thousands of directories, within each directory is a subdirectory with a file called file.jpg. Since each subdirectory has a file called file.jpg I cannot transfer all those files into the same folder or they just overwrite each other until the last (one) file.jpg remains.
Instead, I want to paste some commands into terminal to do the following:
Append a {number} to each file.jpg upon copying to a 'review' folder. Note: there could be thousands of files so there should be no limit to where it stops. The last file could be file34634657.jpg for example.
Side question: will there be any way to trace back the copied file{number}.jpg to its original directory/subdirectory? These file.jpg are within directories that contain other subdirectories and files that may need to be reviewed.
|
If adding numbers is not a mandatory requirement, you can put the paths in your file names.
That way you solve both of your issues (overwriting files and not being able to trace back).
for ef in $(find * -name file.jpg); do cp "$ef" /path/to/dest_dir/"$(echo "$ef" | sed 's,/,--,g')"; done
- find * - to get rid of the ./ at the beginning of the paths,
- $(echo "$ef" | sed 's,/,--,g') - use sed to change each / to -- (with this you will be able to trace back, or you can use something more specific than --).
Edit (due to OP comment below):
If your directories contain blank/white spaces change your IFS (Internal Field Separator).
If you want, you can backup your current one and restore it at the end of your command.
old_IFS=$IFS; IFS=$'\n'; for ef in $(find * -name file.jpg); do cp $ef dest_dir/$(echo $ef | sed 's,/,--,g;s/ /_/g'); done; IFS=$old_IFS
old_IFS=$IFS - backup your current IFS (which is, by default, space, tab and newline),
IFS=$'\n' - set IFS to be newline (just newline) to avoid word splitting by space,
IFS=$old_IFS - restore IFS back to its default value.
Of course you can remove the second sed script s/ /_/g and keep just the first one if you want your files (at the destination) to include the spaces from the directory names.
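Instead of changing IFS, the same / → -- flattening can be done NUL-safely with bash parameter expansion; this sketch uses invented directory names, one of them containing a space:

```shell
tmp=$(mktemp -d); dest=$(mktemp -d)
mkdir -p "$tmp/dir a/sub" "$tmp/dir_b/sub"
touch "$tmp/dir a/sub/file.jpg" "$tmp/dir_b/sub/file.jpg"

cd "$tmp"
# -print0 plus read -d '' handles spaces without touching IFS globally.
find . -name file.jpg -print0 |
while IFS= read -r -d '' ef; do
    flat=${ef#./}          # drop the leading ./
    flat=${flat//\//--}    # replace every / with --
    cp "$ef" "$dest/$flat"
done
```

The resulting names (e.g. "dir a--sub--file.jpg") still encode the original path, so tracing back remains possible.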
| REQUEST: Appending {number} to end of file upon transfer |
1,434,824,170,000 |
Here at work our machines are required to run off of old Sun Sparc5's (1995) all running a UFS architecture on the hard drives due to a Solaris 5.3ish base. The way they are configured it is not possible (that I have found) to hook up a second SCSI hard drive so that it can recognize it and allow me to do an internal copy. In order to increase reliability of these computers we purchased a raid array with a SCSI to SATA converter inside to write to 2.5" internals. This raid setup has worked successfully on the computers that use IDE type connections, using the same arrays also a UFS architecture.
I have tried an OmniSCSI-One-to-One tool, it properly clipped down the size and copied all the directories but did not move any of the actual data. I was able to read the hard drive after with UFS Explorer and verified the directories were present. Found out when I took it to the machine that it had no data on it.
I have authorization to purchase a computer just for the task of copying and backing up these drives. What OS or program would you recommend to perform a copy that I know will work, without risking a machine due to a corrupted drive? The computer will not be attached to a network; it will be standalone. Keep in mind these are SCSI drives, so I have to use a PCI SCSI adapter too, if that is relevant. I have tried Ubuntu; it could not even mount the drives, they are so old. All of the drives work, as the machines are still running.
Thank you.
|
It is not very clear what you want to accomplish, but the best way (that I know of) to make a backup copy of a Solaris installation is to create flash archives:
Build a new machine (with Linux, for example), export some filesystem via NFS, mount it on the Solaris machine, and create the flash archives:
flarcreate -n flash_archive_root -c -R / -x /export/flash.....
Check the archives:
flar -i /export/flash/inst_x86
For more detailed info you can check my blog post (first part of the post).
If you only want to copy the information on them, why not use rsync or scp.
| How to externally copy an old UFS hard drive? |
1,434,824,170,000 |
First of all, I have the files but the old system is gone.
I took the folders in /var/lib/mysql/ and gave them the permissions that the mysql folder in there had: user mysql, group root, only accessible by that user.
Now when I want to access my localhost sites I get a MySQL database error. But I think everything is as it was before. What do I need to do to make this work?
Cannot find or open table blabla/wp_options from
the internal data dictionary of InnoDB though the .frm file for the
table exists. Maybe you have deleted and recreated InnoDB data
files but have forgotten to delete the corresponding .frm files
of InnoDB tables, or you have moved .frm files to another database?
or, the table contains indexes that this version of the engine
doesn't support.
See http://dev.mysql.com/doc/refman/5.5/en/innodb-troubleshooting.html
how you can resolve the problem.
The logfile shows me a hell of a lot of errors like this. Should I delete these .frm files now?
|
Turns out that I had not copied the actual databases. I realized this by looking into the folders: all that is in there are the .frm files with the table names and a 65-byte db.opt file, so I knew that couldn't be the actual database.
The databases are all inside the ibdata1 file in the root of /var/lib/mysql/, I guess; I had thought the databases were inside the folders named after the databases I created before.
So no, I didn't need to delete any .frm files or anything. I copied the ibdata1 file over (while the MySQL server was stopped) and now everything works.
| Copyed mysql files from previous system now error while connect, what to do? |
1,434,824,170,000 |
I'm looking to copy a specific folder inside another one, but I don't have the exact names. For example:
/volume1/User/save/01/**-**** ?/GROUPES **-****/
**-**** is a number reference that changes every time but is always formatted the same way (e.g. 75-1234), and ? stands for words (Lion Tiger Shark) --> /75-1234 Lion Tiger Shark/
And I want to copy /GROUPES **-****/ to /volume1/User/01/
My last try was
find /volume1/User/save/01/**-**** ?/ -iname "GROUPES **-*****" -exec cp -r /volume1/User/save/01/
And it's (obviously?) not working.
|
As @Jonas noted, you need a ? to match one character and a * to match multiple characters. A space character needs to be escaped with a \.
With the first pattern **-**** ? changed to ??-????\ * and the escaped space characters, the command should be:
cp -r /volume1/User/save/01/??-????\ */GROUPES\ ??-????/ /volume1/User/save/01/
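The glob can be verified against a throw-away tree before running it on the NAS; the names below are modeled on the question's example and are assumptions:

```shell
# Throw-away tree modeled on the question's layout.
tmp=$(mktemp -d)
mkdir -p "$tmp/save/01/75-1234 Lion Tiger/GROUPES 75-1234" "$tmp/01"
touch "$tmp/save/01/75-1234 Lion Tiger/GROUPES 75-1234/member.txt"

# Each ? matches exactly one character, * any run; spaces are escaped with \.
cp -r "$tmp"/save/01/??-????\ */GROUPES\ ??-????/ "$tmp/01/"
```

If the GROUPES directory lands under the destination with its contents, the same command with the real /volume1 paths should behave identically.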
| find and copy usage using wildcards |
1,434,824,170,000 |
I want to copy positionXYZ inside another directory, so that I have both of them. I typed:
Tutorials myname $ cp -r positionXYZ Documents/Gerris\ Programs/Tutorials/tutorial6/
Then it says :
cp: directory Documents/Gerris Programs/Tutorials/tutorial6 does not exist
Tutorials is positionXYZ's current parent directory; tutorial6 is the directory I want to copy the file into.
|
I assume you are already in
Documents/Gerris Programs/Tutorials/
so, all you need to do is:
cp -r positionXYZ tutorial6/
or if you want to use an absolute path (assuming that Documents is in your home directory ~):
cp -r positionXYZ ~/Documents/Gerris\ Programs/Tutorials/tutorial6/
| Copy a file into another directory's inside [closed] |
1,434,824,170,000 |
I want to copy directories listed in one variable to the directories listed in another, without a loop.
from="fromdir1 fromdir2"
to="todir1 todir2"
I mean fromdir1 to todir1, fromdir2 to todir2.
I think it can be done with xargs but I don't know how.
|
You could use GNU parallel with linked arguments:
parallel --link cp {1} {2} ::: from1 from2 from3 ::: to1 to2 to3
If the from and to directories are listed in text files, use
parallel --link cp {1} {2} :::: fromlist :::: tolist
Note the 4 colons vs. the 3 colons used previously. More info on GNU parallel can be found on its website.
For reading them from bash array variables, this will do:
parallel --link cp {1} {2} ::: "${from[@]}" ::: "${to[@]}"
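Without GNU parallel, paste and xargs can pair the two word lists positionally; this is a sketch that assumes, like the question, that the names contain no spaces (the directory names are invented):

```shell
tmp=$(mktemp -d); cd "$tmp"
mkdir fromdir1 fromdir2 todir1 todir2
touch fromdir1/a fromdir2/b

from="fromdir1 fromdir2"
to="todir1 todir2"
# paste zips the two word lists line by line; xargs runs cp -r per pair.
paste <(printf '%s\n' $from) <(printf '%s\n' $to) | xargs -n2 cp -r
```

Like the parallel version, this copies fromdir1 into todir1 and fromdir2 into todir2, one cp invocation per pair.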
| copy multiple directories in multiple directories without loop [duplicate] |
1,345,806,144,000 |
How do we allow only a certain set of private IPs to log in through SSH (with an RSA key pair) to a Linux server?
|
You can limit which hosts can connect by configuring TCP wrappers or filtering network traffic (firewalling) using iptables. If you want to use different authentication methods depending on the client IP address, configure SSH daemon instead (option 3).
Option 1: Filtering with IPTABLES
Iptables rules are evaluated in order, until first match.
For example, to allow traffic from 192.168.0.0/24 network and otherwise drop the traffic (to port 22). The DROP rule is not required if your iptables default policy is configured to DROP.
iptables -A INPUT -p tcp --dport 22 --source 192.168.0.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP
You can add more rules before the drop rule to match more networks/hosts. If you have a lot of networks or host addresses, you should use ipset module. There is also iprange module which allows using any arbitrary range of IP addresses.
iptables rules are not persistent across reboots. You need to configure some mechanism to restore them on boot.
iptables applies only to IPv4 traffic. On systems where ssh listens on an IPv6 address, the corresponding configuration can be done with ip6tables.
Option 2: Using TCP wrappers
Note: this might not be an option on modern distributions, as support for tcpwrappers was removed from OpenSSH 6.7
You can also configure which hosts can connect using TCP wrappers. With TCP wrappers, in addition to IP addresses you can also use hostnames in rules.
By default, deny all hosts.
/etc/hosts.deny:
sshd : ALL
Then list allowed hosts in hosts.allow. For example to allow network 192.168.0.0/24 and localhost.
/etc/hosts.allow:
sshd : 192.168.0.0/24
sshd : 127.0.0.1
sshd : [::1]
Option 3: SSH daemon configuration
You can configure ssh daemon in sshd_config to use different authentication method depending on the client address/hostname. If you only want to block other hosts from connecting, you should use iptables or TCP wrappers instead.
First remove default authentication methods:
PasswordAuthentication no
PubkeyAuthentication no
Then add desired authentication methods after a Match Address in the end of the file. Placing Match in the end of the file is important, since all the configuration lines after it are placed inside the conditional block until the next Match line. For example:
Match Address 127.0.0.*
PubkeyAuthentication yes
Other clients are still able to connect, but their logins will fail because no authentication methods are available.
Match arguments and allowed conditional configuration options are documented in sshd_config man page. Match patterns are documented in ssh_config man page.
| Limit SSH access to specific clients by IP address |
1,345,806,144,000 |
I am confused what's the actual difference between SNAT and Masquerade?
If I want to share my internet connection on local network, should I select SNAT or Masquerade?
|
The SNAT target requires you to give it an IP address to apply to all the outgoing packets. The MASQUERADE target lets you give it an interface, and whatever address is on that interface is the address that is applied to all the outgoing packets. In addition, with SNAT, the kernel's connection tracking keeps track of all the connections when the interface is taken down and brought back up; the same is not true for the MASQUERADE target.
Good documents include the HOWTOs on the Netfilter site and the iptables man page.
| Difference between SNAT and Masquerade |
1,345,806,144,000 |
I run a VPS which I would like to secure using UFW, allowing connections only to port 80.
However, in order to be able to administer it remotely, I need to keep port 22 open and make it reachable from home.
I know that UFW can be configured to allow connections to a port only from specific IP address:
ufw allow proto tcp from 123.123.123.123 to any port 22
But my IP address is dynamic, so this is not yet the solution.
The question is: I have dynamic DNS resolution with DynDNS, so is it possible to create a Rule using the domain instead of the IP?
I already tried this:
ufw allow proto tcp from mydomain.dyndns.org to any port 22
but I got ERROR: Bad source address
|
I don't believe this is possible with ufw. ufw is just a frontend to iptables which also lacks this feature, so one approach would be to create a crontab entry which would periodically run and check if the IP address has changed. If it has then it will update it.
You might be tempted to do this:
$ iptables -A INPUT -p tcp --src mydomain.dyndns.org --dport 22 -j ACCEPT
But this will resolve the hostname to an IP and use that for the rule, so if the IP later changes this rule will become invalid.
Alternative idea
You could create a script like so, called, iptables_update.bash.
#!/bin/bash
# Allow a dyndns name
HOSTNAME=HOST_NAME_HERE
LOGFILE=LOGFILE_NAME_HERE

Current_IP=$(host "$HOSTNAME" | cut -f4 -d' ')

if [ ! -f "$LOGFILE" ]; then
    iptables -I INPUT -i eth1 -s "$Current_IP" -j ACCEPT
    echo "$Current_IP" > "$LOGFILE"
else
    Old_IP=$(cat "$LOGFILE")
    if [ "$Current_IP" = "$Old_IP" ]; then
        echo "IP address has not changed"
    else
        iptables -D INPUT -i eth1 -s "$Old_IP" -j ACCEPT
        iptables -I INPUT -i eth1 -s "$Current_IP" -j ACCEPT
        /etc/init.d/iptables save
        echo "$Current_IP" > "$LOGFILE"
        echo "iptables have been updated"
    fi
fi
source: Using IPTables with Dynamic IP hostnames like dyndns.org
With this script saved you could create a crontab entry like so in the file /etc/crontab:
*/5 * * * * root /etc/iptables_update.bash > /dev/null 2>&1
This entry would then run the script every 5 minutes, checking to see if the IP address assigned to the hostname has changed. If so then it will create a new rule allowing it, while deleting the old rule for the old IP address.
| UFW: Allow traffic only from a domain with dynamic IP address |
1,345,806,144,000 |
I'm trying to connect to port 25 with netcat from one virtual machine to another, but it's telling me no route to host although I can ping. I do have my firewall default policy set to DROP, but I have an exception to accept traffic for port 25 on that specific subnet. I can connect from VM 3 to VM 2 on port 25 with nc, but not from VM 2 to VM 3.
Here's a preview of my firewall rules for VM2
Here's a preview of my firewall rules for VM 3
When I show the listening services I have *:25, which means it's listening on all IPv4 addresses, and :::25 for IPv6 addresses. I don't understand where the error is and why it's not working; both firewalls accept traffic on port 25, so the connection should succeed. I tried comparing both configurations to see why I can connect from VM 3 to VM 2, but they are the same. Any suggestions on what the problem could be?
Update: stopping the iptables service resolves the issue, but I still need those rules to be present.
|
Your no route to host while the machine is ping-able is the sign of a firewall that denies you access politely (i.e. with an ICMP message rather than just DROP-ping).
See your REJECT lines? They match the description (REJECT with ICMP xxx). The problem is that those seemingly catch-all REJECT lines sit in the middle of your rules, so the rules after them are never evaluated at all. (It is difficult to say whether they really are catch-all lines; the output of iptables -nvL would be preferable.)
Put those REJECT rules at the end and everything should work as expected.
| No route to host with nc but can ping |
1,345,806,144,000 |
I have docker installed on CentOS 7 and I am running firewallD.
From inside my container, going to the host (default 172.17.42.1)
With firewall on
container# nc -v 172.17.42.1 4243
nc: connect to 172.17.42.1 port 4243 (tcp) failed: No route to host
with firewall shutdown
container# nc -v 172.17.42.1 4243
Connection to 172.17.42.1 4243 port [tcp/*] succeeded!
I've read the docs on firewalld and I don't fully understand them. Is there a way to simply allow everything in a docker container (I guess on the docker0 adapter) unrestricted access to the host?
|
Maybe better than the earlier answer:
firewall-cmd --permanent --zone=trusted --change-interface=docker0
firewall-cmd --permanent --zone=trusted --add-port=4243/tcp
firewall-cmd --reload
| How to configure Centos 7 firewallD to allow docker containers free access to the host's network ports? |
1,345,806,144,000 |
I have a system that came with a firewall already in place. The firewall consists of over 1000 iptables rules. One of these rule is dropping packets I don't want dropped. (I know this because I did iptables-save followed by iptables -F and the application started working.) There are way too many rules to sort through manually. Can I do something to show me which rule is dropping the packets?
|
You could add a TRACE rule early in the chain to log every rule that the packet traverses.
I would consider using iptables -L -v -n | less to let you search the rules. I would look for port, address, and interface rules that apply. Given that you have so many rules, you are likely running a mostly-closed firewall and are missing a permit rule for the traffic.
How is the firewall built? It may be easier to look at the builder rules than the built rules.
| Is there a way to find which iptables rule was responsible for dropping a packet? |
1,345,806,144,000 |
I want to set up CentOS 7 firewall such that, all the incoming requests will be blocked except from the originating IP addresses that I whitelist. And for the Whitelist IP addresses all the ports should be accessible.
I'm able to find a few solutions (not sure whether they will work) for iptables, but CentOS 7 uses firewalld. I can't find a similar way to achieve this with the firewall-cmd command.
The interfaces are in Public Zone. I have also moved all the services to Public zone already.
|
I'd accomplish this by adding sources to a zone. First checkout which sources there are for your zone:
firewall-cmd --permanent --zone=public --list-sources
If there are none, you can start to add them, this is your "whitelist"
firewall-cmd --permanent --zone=public --add-source=192.168.100.0/24
firewall-cmd --permanent --zone=public --add-source=192.168.222.123/32
(That adds a whole /24 and a single IP, just so you have a reference for both a subnet and a single IP)
Set the range of ports you'd like open:
firewall-cmd --permanent --zone=public --add-port=1-22/tcp
firewall-cmd --permanent --zone=public --add-port=1-22/udp
This just does ports 1 through 22. You can widen this, if you'd like.
Now, reload what you've done.
firewall-cmd --reload
And check your work:
firewall-cmd --zone=public --list-all
Side note / editorial: It doesn't matter but I like the "trusted" zone for a white-listed set of IPs in firewalld. You can make a further assessment by reading redhat's suggestions on choosing a zone.
See also:
RHEL 7 using Firewalls article
Fedora FirewallD docs (fairly good, fedora's been using firewalld for some while)
If you'd like to DROP packets outside this source, here's an example for dropping those outside the /24 I used as an example earlier; you can use rich rules for this, I believe. This is conceptual (I have not tested it further than seeing that CentOS 7 accepts the command), but it should be easy enough to do a pcap and see if it behaves how you'd expect
firewall-cmd --zone=public --add-rich-rule='rule family="ipv4" source address="192.168.100.0/24" invert="True" drop'
| Whitelist source IP addresses in CentOS 7 |
1,345,806,144,000 |
There's an example of iptables rules on archlinux wiki:
# Generated by iptables-save v1.4.18 on Sun Mar 17 14:21:12 2013
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
:TCP - [0:0]
:UDP - [0:0]
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m conntrack --ctstate INVALID -j DROP
-A INPUT -p icmp -m icmp --icmp-type 8 -m conntrack --ctstate NEW -j ACCEPT
-A INPUT -p udp -m conntrack --ctstate NEW -j UDP
-A INPUT -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -m conntrack --ctstate NEW -j TCP
-A INPUT -p udp -j REJECT --reject-with icmp-port-unreachable
-A INPUT -p tcp -j REJECT --reject-with tcp-reset
-A INPUT -j REJECT --reject-with icmp-proto-unreachable
COMMIT
# Completed on Sun Mar 17 14:21:12 2013
A few days ago my friend asked me why is there REJECT in the last three rules. He told me that there should be DROP instead, and he mentioned something about better security in case of DROP.
So, I have two questions:
What do the three rules do?
Does it make any difference when I put DROP in place of REJECT --reject-with? If yes, what is the difference?
|
What do the three rules do?
Those 3 rules seem pretty self-explanatory:
Reject incoming UDP packets with an ICMP message "port unreachable"
Reject incoming TCP packets with "tcp reset"
Reject incoming packets (of any other protocol) with ICMP message "protocol unreachable"
If you're looking for more detail (about UDP/TCP packets, ICMP), you need to dig into networking docs, and perhaps the man iptables too.
Does it make any difference when I put DROP in place of REJECT --reject-with? If yes, could someone explain the difference to me? I'll really appreciate it.
It makes a difference. And contrary to popular belief, DROP does not give better security than REJECT. It inconveniences legitimate users, and it's effectively no protection from malicious ones. This post explains the reasoning in detail:
http://www.chiark.greenend.org.uk/~peterb/network/drop-vs-reject
A common reason for using DROP rather than REJECT is to avoid giving
away information about which ports are open, however, discarding
packets gives away exactly as much information as the rejection.
With REJECT, you do your scan and categorise the results into
"connection established" and "connection rejected".
With DROP, you categorise the results into "connection established"
and "connection timed out".
The most trivial scanner will use the operating system "connect" call
and will wait until one connection attempt is completed before
starting on the next. This type of scanner will be slowed down
considerably by dropping packets. However, if the attack sets a
timeout of 5 seconds per connection attempt, it is possible to scan
every reserved port (1..1023) on a machine in just 1.5 hours. Scans
are always automated, and an attacker doesn't care that the result
isn't immediate.
A more sophisticated scanner will send packets itself rather than
relying on the operating system's TCP implementation. Such scanners
are fast, efficient and indifferent to the choice of REJECT or DROP.
CONCLUSION
DROP offers no effective barrier to hostile forces but can
dramatically slow down applications run by legitimate users. DROP
should not normally be used.
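The distinction the article draws is easy to observe with a trivial connect-based probe. A hedged bash sketch (it relies on bash's /dev/tcp and coreutils' timeout, and is no replacement for a real scanner):

```shell
probe() {
  # Classify how a TCP port responds, the way a naive scanner would.
  if timeout 5 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo open       # connect() succeeded
  elif [ $? -eq 124 ]; then
    echo filtered   # no answer at all: packets were DROPped, we timed out
  else
    echo rejected   # immediate TCP RST: closed port, or REJECT with tcp-reset
  fi
}

probe 127.0.0.1 1   # a port nothing is likely listening on
```

With REJECT the answer comes back instantly; with DROP the probe burns the full timeout, which is exactly the "slows down legitimate users, not attackers" point made above.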
| Is it better to set -j REJECT or -j DROP in iptables? |
1,345,806,144,000 |
Do you need to run any of these commands:
sudo ufw reload
sudo ufw disable
sudo ufw enable
after adding a rule via sudo ufw allow?
|
No. It's enough to just add it. But if you add rules directly in the files (e.g. under /etc/ufw/), you need to run ufw reload for them to take effect.
You can check user rules, as they're called with:
ufw status
You can also add verbose for some details:
ufw status verbose
Or numbered to know which rule to remove with delete. The syntax for this one is this:
ufw delete <RULE NUMBER>
| Do you need to reload after adding a rule in ufw? |
1,345,806,144,000 |
I know linux has 3 built-in tables and each of them has its own chains as follow:
FILTER: PREROUTING, FORWARD, POSTROUTING
NAT: PREROUTING, INPUT, OUTPUT, POSTROUTING
MANGLE: PREROUTING, INPUT, FORWARD, OUTPUT, POSTROUTING
But I can't understand how they are traversed, in which order, if there is.
For example, how are they traversed when:
I send a packet to a pc in my same local network
when I send a packet to a pc in a different network
when a gateway receives a packet and it has to forward it
when I receive a packet destinated to me
any other case (if any)
|
Wikipedia has a great diagram to show the processing order.
For more details you can also look at the iptables documentation, specifically the traversing of tables and chains chapter. Which also includes a flow diagram.
The order changes dependent on how netfilter is being used (as a bridge or network filter and whether it has interaction with the application layer).
Generally (though there's more devil in the details in the chapter linked above) the chains are processed as:
See the INPUT chain as "traffic inbound from outside to this host".
See the FORWARD chain as "traffic that uses this host as a router" (source and destination are not this host).
see the OUTPUT chain as "traffic that this host wants to send out".
PREROUTING / POSTROUTING has different uses for each of the table types (for example for the nat tables, PREROUTING is for DNAT of inbound (routed/forwarded) traffic and POSTROUTING is for SNAT of outbound (routed/forwarded) traffic). Look at the docs for more specifics.
The various tables are:
Mangle is to change packets (Type Of Service, Time To Live etc) on traversal.
Nat is to put in NAT rules.
Raw is processed before connection tracking, and is typically used to mark packets (TRACE) or exempt them from tracking (NOTRACK).
Filter is for filtering packets.
So for your five scenarios:
If the sending host is your host with iptables, the OUTPUT chain
The same as above
The FORWARD chain (provided the gateway is the host with iptables)
If "me" is the host with iptables, INPUT
Look at the chain rules above (which are the general rule of thumb) and the flow diagram (this also varies depending on what you are trying to achieve with iptables)
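As a hedged sketch of those rules of thumb (the interface names and ports here are invented for illustration, not taken from the question):

```shell
# Scenarios 1 and 2: traffic this host originates -> OUTPUT
iptables -A OUTPUT  -p tcp --dport 80 -j ACCEPT
# Scenario 3: traffic routed through this host -> FORWARD
iptables -A FORWARD -i eth1 -o eth0   -j ACCEPT
# Scenario 4: traffic addressed to this host -> INPUT
iptables -A INPUT   -p tcp --dport 22 -j ACCEPT
```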
| How iptables tables and chains are traversed |
1,345,806,144,000 |
How do I set up the firewall on a system in a LAN so that some ports are only open to connections from the local area network, and not from the outside world?
For example, I have a box running Scientific Linux 6.1 (a RHEL based distro), and I want its SSH server to only accept connections from localhost or LAN. How do I do this?
|
With the kernel's iptables completely empty (iptables -F), this will do what you ask:
# iptables -A INPUT -p tcp --dport 22 -s 192.168.0.0/24 -j ACCEPT
# iptables -A INPUT -p tcp --dport 22 -s 127.0.0.0/8 -j ACCEPT
# iptables -A INPUT -p tcp --dport 22 -j DROP
This says that all LAN addresses are allowed to talk to TCP port 22, that localhost gets the same consideration (yes, 127.* not just 127.0.0.1), and packets from every other address not matching those first two rules get unceremoniously dropped into the bit bucket. You can use REJECT instead of DROP if you want an active rejection (TCP RST) instead of making TCP port 22 a black hole for packets.
If your LAN doesn't use the 192.168.0.* block, you will naturally need to change the IP and mask on the first line to match your LAN's IP scheme.
These commands may not do what you want if your firewall already has some rules configured. (Say iptables -L as root to find out.) What frequently happens is that one of the existing rules grabs the packets you're trying to filter, so that appending new rules has no effect. While you can use -I instead of -A with the iptables command to splice new rules into the middle of a chain instead of appending them, it's usually better to find out how the chains get populated on system boot and modify that process so your new rules always get installed in the correct order.
RHEL 7+
On recent RHEL type systems, the best way to do that is to use firewall-cmd or its GUI equivalent. This tells the OS's firewalld daemon what you want, which is what actually populates and manipulates what you see via iptables -L.
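A hedged sketch of how that might look with firewall-cmd, assuming the same 192.168.0.0/24 LAN as above (the stock internal/public zone names are assumptions; adapt to your actual zone layout):

```shell
# Let only the LAN source reach ssh, via the internal zone...
firewall-cmd --permanent --zone=internal --add-source=192.168.0.0/24
firewall-cmd --permanent --zone=internal --add-service=ssh
# ...and stop offering ssh in the default public zone.
firewall-cmd --permanent --zone=public --remove-service=ssh
firewall-cmd --reload
```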
RHEL 6 and Earlier
On older RHEL type systems, the easiest way to modify firewall chains when ordering matters is to edit /etc/sysconfig/iptables. The OS's GUI and TUI firewall tools are rather simplistic, so once you start adding more complex rules like this, it's better to go back to good old config files. Beware, once you start doing this, you risk losing your changes if you ever use the OS's firewall tools to modify the configuration, since it may not know how to deal with handcrafted rules like these.
Add something like this to that file:
-A RH-Firewall-1-INPUT -p tcp --dport 22 -s 192.168.0.0/24 -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp --dport 22 -s 127.0.0.0/8 -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp --dport 22 -j DROP
Where you add it is the tricky bit. If you find a line in that file talking about --dport 22, simply replace it with the three lines above. Otherwise, it should probably go before the first existing line ending in -j ACCEPT. Generally, you'll need to acquire some familiarity with the way iptables works, at which point the correct insertion point will be obvious.
Save that file, then say service iptables restart to reload the firewall rules. Be sure to do this while logged into the console, in case you fat-finger the edits! You don't want to lock yourself out of your machine while logged in over SSH.
The similarity to the commands above is no coincidence. Most of this file consists of arguments to the iptables command. The differences relative to the above are that the iptables command is dropped and the INPUT chain name becomes the special RHEL-specific RH-Firewall-1-INPUT chain. (If you care to examine the file in more detail, you'll see earlier in the file where they've essentially renamed the INPUT chain. Why? Couldn't say.)
| Set some firewall ports to only accept local network connections? |
1,345,806,144,000 |
In a CentOS 7 server, I type in firewall-cmd --list-all, and it gives me the following:
public (default, active)
interfaces: enp3s0
sources:
services: dhcpv6-client https ssh
ports:
masquerade: no
forward-ports:
icmp-blocks:
rich rules:
What is the dhcpv6-client service? What does it do? And what are the implications of removing it?
I read the wikipedia page for dhcpv6, but it does not tell me specifically what this service on CentOS 7 Firewalld does.
This server is accessible via https and email via mydomain.com, but it is a private server that can only be accessed via https by a list of known ip addresses. In addition, this server can receive email from a list of known email addresses. Is the dhcpv6-client service required to reconcile the domain addresses from the known ip https requests and for exchanging the email with known email addresses?
|
This is needed if you are using DHCP v6 due to the slightly different way that DHCP works in v4 and v6.
In DHCP v4 the client establishes the connection with the server and because of the default rules to allow 'established' connections back through the firewall, the returning DHCP response is allowed through.
However, in DHCP v6, the initial client request is sent to a statically assigned multicast address while the response has the DHCP server's unicast address as the source (see RFC 3315). As the source is now different to the initial request's destination, the 'established' rule will not allow it through and consequently DHCP v6 will fail.
To combat this, a new firewalld rule was created called dhcpv6-client which allows incoming DHCP v6 responses to pass - this is the dhcpv6-client rule. If you're not running DHCP v6 on your network or you are using static IP addressing, then you can disable it.
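If you decide it isn't needed (static addressing, or no DHCPv6 on the network), removing it is straightforward. A sketch assuming the service sits in the default public zone as in the question:

```shell
firewall-cmd --permanent --zone=public --remove-service=dhcpv6-client
firewall-cmd --reload
firewall-cmd --zone=public --list-services   # verify it is gone
```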
| what is dhcpv6-client service in firewalld, and can i safely remove it? |
1,345,806,144,000 |
My router sends out multicast packets in regular intervals that are blocked by UFW's standard policies. These events are harmless but spam my syslogs and ufwlogs. I can't change the router's behaviour as that would require installing a modified firmware and thus void the warranty.
So my question is: Is there any way I can prevent UFW from logging this particular event without changing the blocking policies? And, as a possible follow-up: If I can't define a custom logging policy, would allowing this incoming traffic pose a possible security risk?
|
Based on this answer from ServerFault,
ufw supports per rule logging. By default, no logging is performed when a packet matches a rule.
All you have to do is create a UFW deny rule to match those multicast packets.
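For example, a hedged sketch matching the usual IPv4 multicast range (adjust the match to the traffic your router actually sends): an explicit deny rule matches before the logging default policy and, since it has no log keyword, drops silently.

```shell
sudo ufw deny to 224.0.0.0/4
sudo ufw status verbose   # confirm the rule is in place
```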
| How can I disable UFW logging for a specific event? |
1,345,806,144,000 |
I set up some iptables rules so it logs and drops the packets that are INVALID (--state INVALID). Reading the logs how can I understand why the packet was considered invalid? For example, the following:
Nov 29 22:59:13 htpc-router kernel: [6550193.790402] ::IPT::DROP:: IN=ppp0 OUT= MAC= SRC=31.13.72.7 DST=136.169.151.82 LEN=40 TOS=0x00 PREC=0x00 TTL=242 ID=5104 DF PROTO=TCP SPT=80 DPT=61597 WINDOW=0 RES=0x00 ACK RST URGP=0
|
Packets can be in various states when using stateful packet inspection.
New: The packet is not part of any known flow or socket and the TCP flags have the SYN bit on.
Established: The packet matches a flow or socket tracked by CONNTRACK and has any TCP flags. After the initial TCP handshake is completed the SYN bit must be off for a packet to be in state established.
Related: The packet does not match any known flow or socket, but the packet is expected because there is an existing socket that predicates it (examples of this are data on port 20 when there is an existing FTP session on port 21, or UDP data for an existing SIP connection on TCP port 5060). This requires an associated ALG.
Invalid: If none of the previous states apply the packet is in state INVALID. This could be caused by various types of stealth network probes, or it could mean that you're running out of CONNTRACK entries (which you should also see stated in your logs). Or it may simply be entirely benign.
In your case, the packet that you cite shows that the TCP flags ACK and RST, and that the source port is 80. What that means is that the web server at 31.13.72.7 (which happens to be Facebook) sent a reset packet to you. It's entirely impossible to say why without seeing the packets that came before it (if any). But most likely it is sending you a reset for the same reason your computer thinks it's invalid.
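One cause mentioned above, running out of CONNTRACK entries, is easy to rule out. A hedged sketch (sysctl key names as on a stock Linux with the nf_conntrack module loaded):

```shell
sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max
# If count sits close to max, INVALID drops may be simple table
# exhaustion; raising the limit costs some kernel memory:
sysctl -w net.netfilter.nf_conntrack_max=262144
```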
| How to understand why the packet was considered INVALID by the `iptables`? |
1,345,806,144,000 |
I want to stop internet on my system using iptables so what should I do?
iptables -A INPUT -p tcp --sport 80 -j DROP
or
iptables -A INPUT -p tcp --dport 80 -j DROP ?
|
Reality is you're asking 2 different questions.
--sport is short for --source-port
--dport is short for --destination-port
Also, the internet is not simply the HTTP protocol, which is what typically runs on port 80. I suspect you're asking how to block HTTP requests. To do this you need to block port 80 on the outbound chain.
iptables -A OUTPUT -p tcp --dport 80 -j DROP
will block all outbound HTTP requests, going to port 80, so this won't block SSL, 8080 (alt http) or any other weird ports, to do those kinds of things you need L7 filtering with a much deeper packet inspection.
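If the goal really is "no web at all", HTTPS and the common alternate port have to be covered too. A hedged sketch, not an exhaustive list:

```shell
iptables -A OUTPUT -p tcp --dport 80   -j DROP   # HTTP
iptables -A OUTPUT -p tcp --dport 443  -j DROP   # HTTPS
iptables -A OUTPUT -p tcp --dport 8080 -j DROP   # common alt-HTTP
```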
| What is sport and dport? |
1,345,806,144,000 |
When I try to telnet to a port on a server, and if there is no program listening on that port telnet dies with a "Unable to connect ... " error. I understand that. But, why do we need a firewall if there is no program listening on any ports?
|
There may not be a service running right now, but how about tomorrow? You have them all turned off, but what about your users? Anyone on a unix/windows/mac system can open a port > 1024 on any machine they have access to. What about malware? What about a virus? They can also open up ports and start serving information to the world, or start listening for connections from the network.
A firewall's main purpose is not to block the ports for services you know are disabled, it is to block the ports on services you might not know about. Think of it as a default deny with only certain holes punched in for services you authorize. Any user or program started by a user can start a server on a system they have access to, a firewall prevents someone else from connecting to that service.
A good admin knows what services need to be exposed, and can enable them. A firewall is mostly to mitigate the risk from unknown servers running on your system or your network, as well as to manage what is allowed into the network from a central place.
It's important to know what is running on your machine/server and only enable what you need, but a firewall provides that extra bit of protection against the things you don't know about.
| Why do we need a firewall if no programs are running on your ports? |
1,345,806,144,000 |
I have a Quake 3 server. And it's launched successfully.
The problem is that no one can connect to that server.
I am running nmap -sU -p 27960 hostname and it shows the port state as open|filtered.
If I run that command right from the server, it shows open.
Also, I am making sure that it's binding to the right iface
I checked the iptables rules and couldn't find any filters related to it. Furthermore, I tried to open the port explicitly via iptables -A INPUT -p udp --dport 27960 -j ACCEPT
but this didn't help.
What it could be?
I called to ISP support center and they said they are not filtering anything.
|
Getting different nmap results from the local machine and remote machines means there is some kind of firewall (whether running locally or on some remote machine) which is blocking. According to the nmap documentation,
open|filtered
Nmap places ports in this state when it is unable to determine whether
a port is open or filtered. This occurs for scan types in which open
ports give no response. The lack of response could also mean that a
packet filter dropped the probe or any response it elicited. So Nmap
does not know for sure whether the port is open or being filtered. The
UDP, IP protocol, FIN, NULL, and Xmas scans classify ports this way.
I would recommend trying out the following tools to find out where exactly the problem exists:
Capture the UDP packets destined to port 27960 using tcpdump, and check whether they are reaching your machine or not.
Run the following command to capture the udp packets destined to port 27960 in a file tcpdump.out
$ sudo tcpdump -A 'udp and port 27960' -w tcpdump.out
Try connecting from other machine to port using netcat
$ nc -u <server-ip-address> 27960
Now stop the dump and check with wireshark whether any packet was captured in tcpdump.out.
$ wireshark tcpdump.out
If no packet got captured, this means some intermediate device (firewall) is preventing the communication. Otherwise, if packets were captured, check the reply the server gives in return. If it is any kind of ICMP reply with some error code, it means there is some local firewall which is blocking.
| nmap shows me that one service is "open|filtered" while locally it's "open", how to open? |
1,345,806,144,000 |
I have a firewall (csf) that lets you to separately allow incoming and outgoing TCP ports. My question is, why would anyone want to have any outgoing ports closed?
I understand that by default you might want to have all ports closed for incoming connections. From there, if you are running an HTTP server you might want to open port 80. If you want to run an FTP server (in active mode) you might want to open port 21. But if it's set up for passive FTP mode, a bunch of ports will be necessary to receive data connections from FTP clients... and so on for additional services. But that's all. The rest of ports not concerned with a particular service that the server provides, and especially if you are mostly a client computer, must be closed.
But what about outgoing connections? Is there any security gain in having destination ports closed for outbound connections? I ask this because at first I thought that a very similar policy of closing all ports as for incoming connections could apply. But then I realised that when acting as a client in passive FTP mode, for instance, random high ports try to connect to the FTP server. Therefore by blocking these high ports in the client side you are effectively disabling passive FTP in that client, which is annoying. I'm tempted to just allow everything outgoing, but I'm concerned that this might be a security threat.
Is this the case? Is it a bad idea, or has it noticeable drawbacks just opening all (or many) ports only for outgoing connections to facilitate services such as passive FTP?
|
There can be many reasons why someone might want to have outgoing ports closed. Here are some that I have applied to various servers at various times
The machine is in a corporate environment where only outbound web traffic is permitted, and that via a proxy. All other ports are closed because they are not needed.
The machine is running a webserver with executable code (think PHP, Ruby, Python, Perl, etc.) As part of a mitigation against possible code flaws, only expected outbound services are allowed.
A service or application running on the machine attempts to connect to a remote resource but the server administrator does not want it to do so.
Good security practice: what is not explicitly permitted should be denied.
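"What is not explicitly permitted should be denied" applied to egress might look like the following. A hedged sketch: the allowed ports are assumptions for a typical client, and note that forgetting DNS here will break almost everything else.

```shell
# Default-deny egress, then punch holes for known-good traffic.
iptables -P OUTPUT DROP
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -p udp --dport 53  -m conntrack --ctstate NEW -j ACCEPT   # DNS
iptables -A OUTPUT -p tcp --dport 80  -m conntrack --ctstate NEW -j ACCEPT   # HTTP
iptables -A OUTPUT -p tcp --dport 443 -m conntrack --ctstate NEW -j ACCEPT   # HTTPS
```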
| What's the point of firewalling outgoing connections? |
1,345,806,144,000 |
I want to open the following ports in my CentOS 7 firewall:
UDP 137 (NetBIOS Name Service)
UDP 138 (NetBIOS Datagram Service)
TCP 139 (NetBIOS Session Service)
TCP 445 (SMB)
I can guess that the service named samba includes TCP 445, but I don't know if the other ports have a service name preconfigured.
I can list supported services with:
$ firewall-cmd --get-services
But this doesn't tell me what ports are configured with the services.
Is there a way to list what ports belong to these services so that I can grep for the one that I need?
|
You can find the xml files this information is stored in in /usr/lib/firewalld/services/ (for distro-managed services) and/or /etc/firewalld/services/ for your own user-defined services.
For example, samba.xml reads (on my centos7):
<?xml version="1.0" encoding="utf-8"?>
<service>
<short>Samba</short>
<description>This option allows you to access and participate in Windows file and printer sharing networks. You need the samba package installed for this option to be useful.</description>
<port protocol="udp" port="137"/>
<port protocol="udp" port="138"/>
<port protocol="tcp" port="139"/>
<port protocol="tcp" port="445"/>
<module name="nf_conntrack_netbios_ns"/>
</service>
so it's easy to spot what ports are enabled by this service.
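Since these are plain XML files, the grep the asker wanted works directly on them. A hedged sketch (paths as on CentOS 7; the /etc directory only exists once you have defined your own services):

```shell
# Which service definitions open port 445?
grep -l 'port="445"' /usr/lib/firewalld/services/*.xml \
                     /etc/firewalld/services/*.xml 2>/dev/null

# Dump every service file's port lines for eyeballing:
grep '<port' /usr/lib/firewalld/services/*.xml
```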
| How do I get a list of the ports which belong to preconfigured firewall-cmd services? |
1,345,806,144,000 |
I'm trying to restrict access to a particular port for a particular user on my Debian.
Let's say user's id is 1000 and port I would like to block is 5000.
I tried using iptables with the following command :
iptables -I OUTPUT -o lo -p tcp --dport 5000 --match owner --uid-owner 1000 -j DROP
It works if the user does curl 127.0.0.1:5000 or curl <machine_ip>:5000 but not if the user execute curl localhost:5000.
I don't understand why it's not working. I thought localhost was converted to 127.0.0.1. What's the difference?
In my /etc/hosts file, I have
127.0.0.1 localhost
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
|
Do the same for IPv6 ... localhost resolves to both an IPv4 and IPv6 address, and v6 is preferred.
Edit 1:
ip6tables -I OUTPUT -o lo -p tcp --dport 5000 --match owner --uid-owner 1000 -j DROP
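You can see why curl localhost behaves differently by asking the resolver directly which addresses the name maps to (a hedged sketch; output depends on /etc/hosts and IPv6 availability):

```shell
# List every address "localhost" resolves to; with the hosts file from
# the question this typically includes ::1 (preferred) and 127.0.0.1.
getent ahosts localhost
```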
| Restrict local port access to a specific user |
1,345,806,144,000 |
I have just set up a DNS server for my own network, and many guides online suggest to make sure that port forwarding on port 53 is not enabled.
The thing that is not clear to me is this: should I configure this at the router level or at the firewall level? If I should do this on the firewall, how would I go about doing this on an Ubuntu Server 12.04?
My home network has a few clients, an ESXi server and a home router. One of the VMs inside ESXi is the DNS server (running on Ubuntu Server 12.04) which is used to handle local DNS requests but is also configured as to forward requests for external IPs to Google's DNS servers.
|
This should be configured on whatever equipment you have between the DNS server and the outside world. AFAIK port forwarding is disabled by default on pretty much everything so you shouldn't worry too much about it. If you're using residential network gear, there should be port forwarding configuration options in the web interface. To check the port forwarding settings on Ubuntu use iptables:
$ sudo iptables -t nat -vnL
To ultimately check your network for the forwarded port use netcat to connect to the port via your external IP:
$ nc -vu [external ip] 53
You'll have to monitor the connections on the DNS server to watch for the netcat connection because netcat may incorrectly report that the connection was successful due to the stateless nature of UDP
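For port 53 specifically, a real DNS query is a more reliable probe than raw netcat, since you only get an answer back if a resolver is actually listening. A hedged sketch (203.0.113.1 is a documentation placeholder, substitute your real external IP and run it from outside your network):

```shell
dig @203.0.113.1 example.com +short +time=3 +tries=1
```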
| How to check if port forwarding is enabled? |
1,345,806,144,000 |
When configuring a chain in nftables, one has to provide a priority value. Almost all online examples set a piority of 0; sometimes, a value of 100 gets used with certain hooks (output, postrouting).
The nftables wiki has to say:
The priority can be used to order the chains or to put them before or after some Netfilter internal operations. For example, a chain on the prerouting hook with the priority -300 will be placed before connection tracking operations.
For reference, here's the list of different priority used in iptables:
NF_IP_PRI_CONNTRACK_DEFRAG (-400): priority of defragmentation
NF_IP_PRI_RAW (-300): traditional priority of the raw table placed before connection tracking operation
NF_IP_PRI_SELINUX_FIRST (-225): SELinux operations
NF_IP_PRI_CONNTRACK (-200): Connection tracking operations
NF_IP_PRI_MANGLE (-150): mangle operation
NF_IP_PRI_NAT_DST (-100): destination NAT
NF_IP_PRI_FILTER (0): filtering operation, the filter table
NF_IP_PRI_SECURITY (50): Place of security table where secmark can be set for example
NF_IP_PRI_NAT_SRC (100): source NAT
NF_IP_PRI_SELINUX_LAST (225): SELinux at packet exit
NF_IP_PRI_CONNTRACK_HELPER (300): connection tracking at exit
This states that the priority controls interaction with internal Netfilter operations, but only mentions the values used by iptables as examples.
In which cases is the priority relevant (i.e. has to be set to a value ≠ 0)? Only for multiple chains with same hook? What about combining nftables and iptables? Which internal Netfilter operations are relevant for determining the correct priority value?
|
UPDATE: iptables-nft (rather than iptables-legacy) is using the nftables kernel API and in addition a compatibility layer to reuse xtables kernel modules (those described in iptables-extensions) when there's no native nftables translation available. It should be treated as nftables in most regards, except that for this question it has fixed priorities like the legacy version, so nftables' priorities still matter here.
iptables (legacy) and nftables both rely on the same netfilter infrastructure, and use hooks at various places. it's explained there: Netfilter hooks, or there's this systemtap manpage, which documents a bit of the hook handling:
PRIORITY is an integer priority giving the order in which the probe
point should be triggered relative to any other netfilter hook
functions which trigger on the same packet. Hook functions execute on
each packet in order from smallest priority number to largest priority
number. [...]
or also this blog about netfilter: How to Filter Network Packets using Netfilter–Part 1 Netfilter Hooks (blog disappeared, using a Wayback Machine link instead.)
All this together tells us that various modules/functionalities can register at each of the five possible hooks (for the IPv4 case), and in each hook they'll be called in order of the registered priority for this hook.
Those hooks are not only for iptables or nftables. There are various other users, like systemtap above, or even netfilter's own submodules. For example, with IPv4 when using NAT either with iptables or nftables, nf_conntrack_ipv4 will register in 4 hooks at various priorities for a total of 6 times. This module will in turn pull nf_defrag_ipv4 which registers at NF_INET_PRE_ROUTING/NF_IP_PRI_CONNTRACK_DEFRAG and NF_INET_LOCAL_OUT/NF_IP_PRI_CONNTRACK_DEFRAG.
So yes, the priority is relevant only within the same hook. But in this same hook there are several users, and they have already their predefined priority (with often but not always the same value reused across different hooks), so to interact correctly around them, a compatible priority has to be used.
For example, if rules have to be applied early on non-defragmented packets, then later (as usual) on defragmented packets, just register two nftables chains in prerouting, one <= -401 (eg -450), the other between -399 and -201 (eg -300). The best iptables could do until recently was -300, i.e. it couldn't see fragmented packets whenever conntrack (and thus early defragmentation) was in use (since kernel 4.15 with the option raw_before_defrag it will register at -450 instead, but can't do both; iptables-nft doesn't appear to offer such a choice).
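That two-chain example can be declared like this in nft syntax (a hedged sketch; the table and chain names are made up):

```shell
nft add table ip frag_demo
nft 'add chain ip frag_demo predefrag  { type filter hook prerouting priority -450 ; }'
nft 'add chain ip frag_demo postdefrag { type filter hook prerouting priority -300 ; }'
# Rules in "predefrag" run before defragmentation (-400) and still see
# individual fragments; "postdefrag" sees reassembled packets.
```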
So now about the interactions between nftables and iptables: both can be used together, with the exception of NAT in older kernels where they both compete over netfilter's nat resource: only one should register nat, unless using a kernel >= 4.18 as explained in the wiki. The example nftables settings just ship with the same priorities as iptables, with minor differences.
If both iptables and nftables are used together and one should be used before the other because there are interactions and an order of effect is needed, just slightly lower or raise nftables' priority accordingly, since iptables' can't be changed.
For example in a mostly iptables setting, one can use nftables with a specific match feature not available in iptables to mark a packet, and then handle this mark in iptables, because it has support for a specific target (eg the fancy iptables LED target to blink a LED) not available in nftables. Just register a slightly lower priority value for the nftables hook to be sure it's done before. For a usual input filter rule, that would be for example -5 instead of 0. Then again, this value shouldn't be lower than -149 or it will execute before iptables' INPUT mangle chain, which is perhaps not what is intended. That's the only other low value that would matter in the input case. For example there's no NF_IP_PRI_CONNTRACK threshold to consider, because conntrack doesn't register anything at this priority in NF_INET_LOCAL_IN, nor does SELinux register anything in this hook, so -225 has no special meaning here.
| When and how to use chain priorities in nftables |
1,345,806,144,000 |
I set up a bridge br0 "attached" to two interfaces:
eth0, my physical interface connected to the real LAN,
vnet0, a KVM virtual interface (connected to a Windows VM).
And I have this single firewall rule in the forward chain:
iptables -A FORWARD -j REJECT
Now, the only ping that is working is from the VM to the host.
The br0 interface owns the IP address of my host machine. eth0 and vnet0 do not "own" any IP, from the host point of view. The Windows VM has a static IP configuration.
If I change my iptables rule to ACCEPT (or even use a more restrictive iptables -A FORWARD -o br0 -j ACCEPT), everything works fine! (i.e. I can ping any LAN machine from the VM, and the other way round too).
All IP forwarding kernel options are disabled (like net.ipv4.ip_forward = 0).
So, how can the netfilter firewall block something that is not even enabled?
Furthermore, the VM - LAN traffic should only imply eth0 and vnet0. Yet it looks like allowing FORWARD traffic with -o br0 "works" (I did not check very carefully though).
|
The comment from Stéphane Chazelas provides the hint to the answer.
According to the Bridge-nf Frequently Asked Questions, bridge-nf enables iptables, ip6tables or arptables to see bridged traffic.
As of kernel version 2.6.1, there are five sysctl entries for bridge-nf behavioral control:
bridge-nf-call-arptables - pass bridged ARP traffic to arptables' FORWARD chain.
bridge-nf-call-iptables - pass bridged IPv4 traffic to iptables' chains.
bridge-nf-call-ip6tables - pass bridged IPv6 traffic to ip6tables' chains.
bridge-nf-filter-vlan-tagged - pass bridged vlan-tagged ARP/IP traffic to arptables/iptables.
net.bridge.bridge-nf-filter-pppoe-tagged - pass bridged pppoe-tagged IP/IPv6 traffic to {ip,ip6}tables
You can disable netfilter firewall blocking with:
# sysctl -w net.bridge.bridge-nf-call-iptables=0
# sysctl -w net.bridge.bridge-nf-call-ip6tables=0
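To make the change persist across reboots, the same keys can go into a sysctl drop-in file (the file name below is a conventional choice, not mandatory; the br_netfilter module must be loaded for these keys to exist):

```
# /etc/sysctl.d/99-bridge-nf.conf
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-arptables = 0
```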
| Why does my firewall (iptables) interfere in my bridge (brctl)? |
1,345,806,144,000 |
I've recently decided to do some security maintenance. I saw my logs, and there were some tries against my SSH server. At first, I moved away the SSH port from the default 22. After it, I read something about Fail2ban, BlockHosts and DenyHosts.
I took a look at the first: it is simple to configure, and everything is understandable; but when I tried to "probe its protection", the tests failed. Everything seems to be good, but I can still access the server.
I also tested the IPtables: # iptables -I INPUT -j DROP - after that my SSH connection was lost (so, what I wanted). Then # iptables -I INPUT -s 84.x.y.z -j DROP, which worked too.
But, what rules did the Fail2ban do, that doesn't work: ($ sudo iptables -L)
Chain INPUT (policy ACCEPT)
target prot opt source destination
fail2ban-apache tcp -- anywhere anywhere multiport dports www,https
fail2ban-ssh tcp -- anywhere anywhere multiport dports ssh
fail2ban-ssh-ddos tcp -- anywhere anywhere multiport dports ssh
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain fail2ban-apache (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
Chain fail2ban-ssh (1 references)
target prot opt source destination
DROP all -- 84.x.y.z anywhere
RETURN all -- anywhere anywhere
Chain fail2ban-ssh-ddos (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
Kernel modules loaded: ($ lsmod | grep ip)
iptable_nat 4680 0
nf_nat 15576 1 iptable_nat
nf_conntrack_ipv4 12268 3 iptable_nat,nf_nat
nf_conntrack 55540 4 xt_state,iptable_nat,nf_nat,nf_conntrack_ipv4
xt_multiport 2816 2
iptable_filter 2624 1
ip_tables 10160 2 iptable_nat,iptable_filter
x_tables 13284 5 xt_state,xt_tcpudp,iptable_nat,xt_multiport,ip_tables
ipv6 235396 24
Versions:
Debian Lenny 5.06, kernel 2.6.26-2-686
IPtables 1.4.2-6
Fail2ban 0.8.3-2sid1
openssh-server 1:5.1p1-5
Test #1 step by step:
Configure Fail2ban to low bantime. 60 secs. Then reload.
Attempt to login (with SSH), directly with wrong passwd.
For the 6th time enter the correct passwd (max tries is only 4 here). I logged in. I can also access the web page hosted by that server.
iptables -L shown me as its mentioned above. So the ban was active, when I connected, commanded my server.
Test #2 step by step:
Stop Fail2ban. Create an at script to remove the ban rule written below at a future time. (iptables -D INPUT 1)
Create a ban rule: iptables -I INPUT 1 -s 84.x.y.z -j DROP
I couldn't type in anything else, the SSH connection is unusable. I couldn't access the web page. So, what I wanted from iptables.
After the at script, I can access my server.
I don't see the solution, what should I do to make my IPtables ban (made by Fail2ban) work?
|
I found the problem: it was something I did before installing fail2ban. Sorry for your time.
For security reasons, I had moved my sshd away from port 22 to another port. The ssh reference in iptables refers to port 22 only. I thought it was a variable that always refers to the current sshd port, but it is NOT.
The exact solution (if you have moved your daemon away from its original port):
Open jail.local (or .conf).
Find your service (in braces).
Set the port directive to all. Example: port = all
Add or edit an existing banaction line after the port line, with value iptables-allports. Example: banaction = iptables-allports.
Restart the daemon. Example: # service fail2ban restart.
I couldn't find a solution for changing the ssh port directive, or putting a port number there. If you have a non-all-ports solution, I'd like to hear it!
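Put together, the steps above correspond to a jail.local fragment along these lines (the filter and logpath values shown are typical Debian defaults, assumed here rather than taken from the original setup):

```
[ssh]
enabled   = true
port      = all
banaction = iptables-allports
filter    = sshd
logpath   = /var/log/auth.log
maxretry  = 4
```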
| Fail2ban block with IPtables doesn't work on Debian Lenny. [moved ssh port] |
1,345,806,144,000 |
I have a CentOS 8 guest running on a Fedora 31 host. The guest is attached to a bridge network, virbr0, and has address 192.168.122.217. I can log into the guest via ssh at that address.
If I start a service on the guest listening on port 80, all connections from the host to the guest fail like this:
$ curl 192.168.122.217
curl: (7) Failed to connect to 192.168.122.217 port 80: No route to host
The service is bound to 0.0.0.0:
guest# ss -tln
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 0.0.0.0:22 0.0.0.0:*
LISTEN 0 5 0.0.0.0:80 0.0.0.0:*
LISTEN 0 128 [::]:22 [::]:*
Using tcpdump (either on virbr0 on the host, or on eth0 on the guest), I see that the guest appears to be replying with an ICMP "admin prohibited" message.
19:09:25.698175 IP 192.168.122.1.33472 > 192.168.122.217.http: Flags [S], seq 959177236, win 64240, options [mss 1460,sackOK,TS val 3103862500 ecr 0,nop,wscale 7], length 0
19:09:25.698586 IP 192.168.122.217 > 192.168.122.1: ICMP host 192.168.122.217 unreachable - admin prohibited filter, length 68
There are no firewall rules on the INPUT chain in the guest:
guest# iptables -S INPUT
-P INPUT ACCEPT
The routing table in the guest looks perfectly normal:
guest# ip route
default via 192.168.122.1 dev eth0 proto dhcp metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.122.0/24 dev eth0 proto kernel scope link src 192.168.122.217 metric 100
SELinux is in permissive mode:
guest# getenforce
Permissive
If I stop sshd and start my service on port 22, it all works as expected.
What is causing these connections to fail?
In case someone asks for it, the complete output of iptables-save on the guest is:
*filter
:INPUT ACCEPT [327:69520]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [285:37235]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
COMMIT
*security
:INPUT ACCEPT [280:55468]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [285:37235]
COMMIT
*raw
:PREROUTING ACCEPT [348:73125]
:OUTPUT ACCEPT [285:37235]
COMMIT
*mangle
:PREROUTING ACCEPT [348:73125]
:INPUT ACCEPT [327:69520]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [285:37235]
:POSTROUTING ACCEPT [285:37235]
COMMIT
*nat
:PREROUTING ACCEPT [78:18257]
:INPUT ACCEPT [10:600]
:POSTROUTING ACCEPT [111:8182]
:OUTPUT ACCEPT [111:8182]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A DOCKER -i docker0 -j RETURN
COMMIT
|
Well, I figured it out. And it's a doozy.
CentOS 8 uses nftables, which by itself isn't surprising. It ships with the nft version of the iptables commands, which means when you use the iptables command it actually maintains a set of compatibility tables in nftables.
However...
Firewalld -- which is installed by default -- has native support for nftables, so it doesn't make use of the iptables compatibility layer.
So while iptables -S INPUT shows you:
# iptables -S INPUT
-P INPUT ACCEPT
What you actually have is:
chain filter_INPUT {
type filter hook input priority 10; policy accept;
ct state established,related accept
iifname "lo" accept
jump filter_INPUT_ZONES_SOURCE
jump filter_INPUT_ZONES
ct state invalid drop
reject with icmpx type admin-prohibited <-- HEY LOOK AT THAT!
}
The solution here (and honestly probably good advice in general) is:
systemctl disable --now firewalld
With firewalld out of the way, the iptables rules visible with iptables -S will behave as expected.
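If disabling firewalld is not an option, the alternative is to open the port through firewalld itself rather than with raw iptables commands (standard firewall-cmd invocations):

```
firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --reload
```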
| Why are my network connections being rejected? |
1,345,806,144,000 |
I currently have a NAS box running under port 80. To access the NAS from the outside, I mapped the port 8080 to port 80 on the NAS as follow:
iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to-destination 10.32.25.2:80
This is working like a charm. However, it only works if I access the website from outside the network (at work, at a different house, etc). So when I type in mywebsite.com:8080, iptables does the job correctly and everything works fine.
Now, the problem I have is: how can I redirect this port from the inside of the network? My domain name mywebsite.com points to my router (my linux server) from the inside (10.32.25.1), but I want to redirect port 8080 to port 80 on 10.32.25.2 from the inside.
Any clue?
Edit #1
Attempting to help facilitate this question I put this diagram together. Please feel free to update if it's incorrect or misrepresenting what you're looking for.
iptables
| .---------------.
.-,( ),-. v port 80 |
.-( )-. port 8080________ | |
( internet )------------>[_...__...°]------------->| NAS |
'-( ).-' 10.32.25.1 ^ 10.32.25.2 | |
'-.( ).-' | | |
| '---------------'
|
|
__ _
[__]|=|
/::/|_|
|
I finally found out how to do it. First, I had to add -i eth1 to my "outside" rule (eth1 is my WAN connection). I also needed to add two other rules. Here is what I ended up with:
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 8080 -j DNAT --to 10.32.25.2:80
iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to 10.32.25.2:80
iptables -t nat -A POSTROUTING -p tcp -d 10.32.25.2 --dport 80 -j MASQUERADE
| IPTables - Port to another ip & port (from the inside) |
1,345,806,144,000 |
How can I do this in a single line?
tcp dport 53 counter accept comment "accept DNS"
udp dport 53 counter accept comment "accept DNS"
|
With a recent enough nftables, you can just write:
meta l4proto {tcp, udp} th dport 53 counter accept comment "accept DNS"
Actually, you can do even better:
set okports {
type inet_proto . inet_service
counter
elements = {
tcp . 22, # SSH
tcp . 53, # DNS (TCP)
udp . 53 # DNS (UDP)
    }
}
And then:
meta l4proto . th dport @okports accept
You can also write domain instead of 53 if you prefer using port/service names (from /etc/services).
| How to match both UDP and TCP for given ports in one line with nftables |
1,345,806,144,000 |
The help information doesn't seem to be very informative:
--list -L [chain [rulenum]]
List the rules in a chain or all chains
--list-rules -S [chain [rulenum]]
Print the rules in a chain or all chains
The only difference is in the choice of word: "list" vs. "print".
The manual is a bit more detailed but still doesn't help:
-L, --list [chain]
List all rules in the selected chain. If no chain is selected, all chains are listed. Like every other iptables command, it applies to the specified table (filter is the default), so NAT
rules get listed by
iptables -t nat -n -L
Please note that it is often used with the -n option, in order to avoid long reverse DNS lookups. It is legal to specify the -Z (zero) option as well, in which case the chain(s) will be atom‐
ically listed and zeroed. The exact output is affected by the other arguments given. The exact rules are suppressed until you use
iptables -L -v
-S, --list-rules [chain]
Print all rules in the selected chain. If no chain is selected, all chains are printed like iptables-save. Like every other iptables command, it applies to the specified table (filter is the
default).
It seems to me that -S is actually more detailed and printed out the exact ports that I allowed with a --dports argument. But why is that the case? I don't think the word "print" automatically suggests a higher level of detail than "list".
|
The difference is the output format. The -S option produces output in the fashion of iptables-save, and this can be reused with iptables-apply and iptables-restore. (Check their man page entries for details.) So you can think of the difference as:
-L is for reference, to get a clue of what's there
-S is for reusable output, which is for machine parsing
If you think the -S option gives more details, then you should learn other iptables parameters that provide more details in combination with -L.
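To see the difference concretely, here is roughly what the two listings produce for one and the same rule (illustrative output; exact columns vary with the iptables version):

```
# iptables -L INPUT
Chain INPUT (policy ACCEPT)
target     prot opt source      destination
ACCEPT     tcp  --  anywhere    anywhere      tcp dpt:ssh

# iptables -S INPUT
-P INPUT ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
```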
| What is the difference between iptables -S and iptables -L |
1,345,806,144,000 |
How can I permanently block any ipaddress who accesses known vulnerable pages such as /phpMyadmin/?
I am running a Debian server and I often see bots, or hackers scanning my server trying to find vulnerabilities.
73.199.136.112 - - [16/Oct/2017:05:18:05 -0700] "HEAD /phpMyadmin/ HTTP/1.0" 404 182 "-" "Mozilla/5.0 Jorgee"
73.199.136.112 - - [16/Oct/2017:05:18:05 -0700] "HEAD /phpMyAdmin/ HTTP/1.0" 404 182 "-" "Mozilla/5.0 Jorgee"
73.199.136.112 - - [16/Oct/2017:05:18:05 -0700] "HEAD /phpmyAdmin/ HTTP/1.0" 404 182 "-" "Mozilla/5.0 Jorgee"
73.199.136.112 - - [16/Oct/2017:05:18:05 -0700] "HEAD /phpmyadmin2/ HTTP/1.0" 404 182 "-" "Mozilla/5.0 Jorgee"
73.199.136.112 - - [16/Oct/2017:05:18:05 -0700] "HEAD /phpmyadmin3/ HTTP/1.0" 404 182 "-" "Mozilla/5.0 Jorgee"
73.199.136.112 - - [16/Oct/2017:05:18:05 -0700] "HEAD /phpmyadmin4/ HTTP/1.0" 404 182 "-" "Mozilla/5.0 Jorgee"
I have followed this stackoverflow question already: How to secure phpMyAdmin.
I am looking to start blocking bots from taking up bandwidth.
|
This may be more heavy weight than you're looking for, but you might consider using fail2ban (https://www.fail2ban.org). That's a tool that can monitor your log files and automatically ban addresses that generate logs that match a set of customizable patterns.
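A minimal jail for this case might look like the following sketch (the filter name apache-noscript and the logpath are assumptions: check the filters shipped in /etc/fail2ban/filter.d and your actual Apache log location before using it):

```
# /etc/fail2ban/jail.local
[apache-noscript]
enabled  = true
port     = http,https
filter   = apache-noscript
logpath  = /var/log/apache2/access.log
maxretry = 3
bantime  = 86400
```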
| How to block clients by IP address from accessing certain URLs on my web server? |
1,345,806,144,000 |
I'm reading iptables' man page at https://linux.die.net/man/8/iptables, and I've got a question regarding the use of user-defined chains:
In the Targets section, it says
If the end of a built-in chain is reached or a rule in a built-in chain with target RETURN is matched, the target specified by the chain policy determines the fate of the packet.
And underneath in the Options section, it also says
Only built-in (non-user-defined) chains can have policies, and neither built-in nor user-defined chains can be policy targets.
So my question is, what happens when a packet goes into a user-defined chain, and reaches the end without matching any of the rules? In other words, what is the default action of a user-defined chain?
The reason I'm asking is I'm wondering if it's necessary to add a catch-all rule to the end of every user-defined chain. Something like
iptables -A MY-CHAIN -j RETURN
Or is this the default behavior? Or something else?
|
If none of the rules in a user-defined chain match, the default behavior is effectively RETURN: processing will continue at the next rule in the parent chain.
When a packet matches a rule whose target is a user-defined chain, the packet begins traversing the rules in that user-defined chain. If that chain doesn't decide the fate of the packet, then once traversal on that chain has finished, traversal resumes on the next rule in the current chain.
(from the Linux 2.4 Packet Filtering HOWTO)
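A quick way to observe this behavior (chain name and address are hypothetical): the packet enters MY-CHAIN, matches nothing, implicitly returns, and is then caught by the next rule in INPUT:

```
iptables -N MY-CHAIN
iptables -A MY-CHAIN -s 192.0.2.1 -j ACCEPT   # only rule in the chain
iptables -A INPUT -j MY-CHAIN                 # jump into the user chain
iptables -A INPUT -j DROP                     # reached after MY-CHAIN falls through
```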
| iptables - default action at the end of user-defined chain |
1,345,806,144,000 |
Booting from a kernel which I recompiled with a custom .config, I got the following kmsg (i.e. dmesg) message:
systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:35 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
What kernel .config options do I need to fix this?
|
First enable CONFIG_BPF_SYSCALL=y
┌── Enable bpf() system call ─────────────────────────────────┐
│ │
│ CONFIG_BPF_SYSCALL: │
│ │
│ Enable the bpf() system call that allows to manipulate eBPF │
│ programs and maps via file descriptors. │
│ │
│ Symbol: BPF_SYSCALL [=y] │
│ Type : bool │
│ Prompt: Enable bpf() system call │
│ Location: │
│ -> General setup │
│ Defined at init/Kconfig:1414 │
│ Selects: ANON_INODES [=y] && BPF [=y] && IRQ_WORK [=y] │
│ Selected by [n]: │
│ - AF_KCM [=n] && NET [=y] && INET [=y] │
└─────────────────────────────────────────────────────────────┘
^ that allows you to then also enable CONFIG_CGROUP_BPF=y:
┌── Support for eBPF programs attached to cgroups ─────────────────┐
│ │
│ CONFIG_CGROUP_BPF: │
│ │
│ Allow attaching eBPF programs to a cgroup using the bpf(2) │
│ syscall command BPF_PROG_ATTACH. │
│ │
│ In which context these programs are accessed depends on the type │
│ of attachment. For instance, programs that are attached using │
│ BPF_CGROUP_INET_INGRESS will be executed on the ingress path of │
│ inet sockets. │
│ │
│ Symbol: CGROUP_BPF [=y] │
│ Type : bool │
│ Prompt: Support for eBPF programs attached to cgroups │
│ Location: │
│ -> General setup │
│ -> Control Group support (CGROUPS [=y]) │
│ Defined at init/Kconfig:845 │
│ Depends on: CGROUPS [=y] && BPF_SYSCALL [=y] │
│ Selects: SOCK_CGROUP_DATA [=y] │
└──────────────────────────────────────────────────────────────────┘
That's all that's necessary for those systemd messages to go away.
When you select the above, this is what happens in .config:
Before:
# CONFIG_BPF_SYSCALL is not set
After:
CONFIG_BPF_SYSCALL=y
# CONFIG_XDP_SOCKETS is not set
# CONFIG_BPF_STREAM_PARSER is not set
CONFIG_CGROUP_BPF=y
CONFIG_BPF_EVENTS=y
Two more options become available: CONFIG_XDP_SOCKETS and CONFIG_BPF_STREAM_PARSER but it's not necessary to enable them. But if you're wondering what they are about:
┌── XDP sockets ────────────────────────────────────────┐
│ │
│ CONFIG_XDP_SOCKETS: │
│ │
│ XDP sockets allows a channel between XDP programs and │
│ userspace applications. │
│ │
│ Symbol: XDP_SOCKETS [=n] │
│ Type : bool │
│ Prompt: XDP sockets │
│ Location: │
│ -> Networking support (NET [=y]) │
│ -> Networking options │
│ Defined at net/xdp/Kconfig:1 │
│ Depends on: NET [=y] && BPF_SYSCALL [=y] │
└───────────────────────────────────────────────────────┘
┌── enable BPF STREAM_PARSER ───────────────────────────────────────────┐
│ │
│ CONFIG_BPF_STREAM_PARSER: │
│ │
│ Enabling this allows a stream parser to be used with │
│ BPF_MAP_TYPE_SOCKMAP. │
│ │
│ BPF_MAP_TYPE_SOCKMAP provides a map type to use with network sockets. │
│ It can be used to enforce socket policy, implement socket redirects, │
│ etc. │
│ │
│ Symbol: BPF_STREAM_PARSER [=n] │
│ Type : bool │
│ Prompt: enable BPF STREAM_PARSER │
│ Location: │
│ -> Networking support (NET [=y]) │
│ -> Networking options │
│ Defined at net/Kconfig:301 │
│ Depends on: NET [=y] && BPF_SYSCALL [=y] │
│ Selects: STREAM_PARSER [=m] │
└───────────────────────────────────────────────────────────────────────┘
If wondering why CONFIG_BPF_EVENTS=y:
┌── Search Results ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ │
│ Symbol: BPF_EVENTS [=y] │
│ Type : bool │
│ Defined at kernel/trace/Kconfig:476 │
│ Depends on: TRACING_SUPPORT [=y] && FTRACE [=y] && BPF_SYSCALL [=y] && (KPROBE_EVENTS [=n] || UPROBE_EVENTS [=y]) && PERF_EVENTS [=y] │
└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
Kernel tested 4.18.5 on a Fedora 28 AppVM inside Qubes OS 4.0
| How to fix "File" *.service "configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling"? |
1,345,806,144,000 |
I have a server (Debian 7) set up at my university with a public IP.
when I ssh into the system (from outside the campus), I get a weird delay of 5-10 seconds before I get the password prompt. Why is that?
I run ssh -v to get verbose output:
debug1: Roaming not allowed by server
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug1: SSH2_MSG_SERVICE_ACCEPT received
.... delay of 5-10 seconds here
debug1: Authentications that can continue: publickey,password
debug1: Next authentication method: publickey
debug1: Trying private key: /home/nass/.ssh/id_rsa
debug1: Trying private key: /home/nass/.ssh/id_dsa
debug1: Trying private key: /home/nass/.ssh/id_ecdsa
debug1: Next authentication method: password
then I get the password prompt fine.
my resolv.conf looks like
domain <mydomain>.edu
nameserver <dns ip address>
firewall is controlled by webmin , and the config /etc/webmin/firewall/iptables.save looks like:
# Generated by iptables-save v1.4.14 on Mon Feb 10 17:41:38 2014
*filter
:FORWARD DROP [0:0]
:IP_TCP - [0:0]
:INPUT DROP [0:0]
:IP_UDP - [0:0]
:OUTPUT ACCEPT [0:0]
:IP_ICMP - [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state ESTABLISHED -j ACCEPT
-A INPUT -m state --state RELATED -j ACCEPT
-A INPUT -p tcp -m tcp --tcp-flags ACK ACK -j ACCEPT
-A INPUT -s 127.0.0.1/32 -i eth0 -j DROP
-A INPUT -p icmp -i eth0 -j IP_ICMP
-A INPUT -p udp -m udp -i eth0 -j IP_UDP
-A INPUT -p tcp -m tcp -i eth0 -j IP_TCP
-A INPUT -m limit --limit 3/second --limit-burst 3 -j ULOG --ulog-prefix "FW_INPUT: " --ulog-nlgroup 1
-A IP_ICMP -p icmp -m icmp --icmp-type 0 -j ACCEPT
-A IP_ICMP -p icmp -m icmp --icmp-type 3 -j ACCEPT
-A IP_ICMP -p icmp -m icmp --icmp-type 4 -j ACCEPT
-A IP_ICMP -p icmp -m icmp --icmp-type 11 -j ACCEPT
-A IP_ICMP -p icmp -m icmp --icmp-type 12 -j ACCEPT
-A IP_ICMP -p icmp -m icmp --icmp-type 8 -j ACCEPT
-A IP_ICMP -p icmp -j RETURN
-A IP_TCP -p tcp -m tcp --dport 2049:2050 -j DROP
-A IP_TCP -p tcp -m tcp --dport 6000:6063 -j DROP
-A IP_TCP -p tcp -m tcp --dport 7000:7010 -j DROP
-A IP_TCP -p tcp -m tcp --dport 19001 -j ACCEPT
-A IP_TCP -p tcp -m tcp --dport 12321 -j ACCEPT
-A IP_TCP -p tcp -m tcp --dport 80 -j ACCEPT
-A IP_TCP -p tcp -m tcp --dport 443 -j ACCEPT
-A IP_TCP -p tcp -m tcp -j RETURN
COMMIT
# Completed on Mon Feb 10 17:41:38 2014
# Generated by iptables-save v1.4.14 on Mon Feb 10 17:41:38 2014
*mangle
:PREROUTING ACCEPT [2386474:238877913]
:INPUT ACCEPT [2251067:225473866]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [1100410:5416839301]
:POSTROUTING ACCEPT [1100428:5416842284]
COMMIT
# Completed on Mon Feb 10 17:41:38 2014
# Generated by iptables-save v1.4.14 on Mon Feb 10 17:41:38 2014
*nat
:PREROUTING ACCEPT [211832:26633302]
:INPUT ACCEPT [444:26827]
:OUTPUT ACCEPT [1817:114098]
:POSTROUTING ACCEPT [1817:114098]
COMMIT
# Completed on Mon Feb 10 17:41:38 2014
Last but not least, a colleague who also has an account on the same system gets the prompt immediately!
|
As indicated in the comments, this is likely being caused by the UseDNS yes setting in the sshd_config on the server.
The UseDNS setting is a common culprit for this very issue. Basically what happens is that your IP netblock has either a defective or a missing reverse DNS server, so sshd tries to do a reverse lookup on your IP address and waits until it times out. Other people do not experience the delay because they have a functional DNS server for their netblock.
Most people turn this setting off for this very reason. While yes, the setting is there for security, it is pretty much useless.
The solution is simply to set the following in the sshd_config:
UseDNS no
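To confirm the setting took effect after editing sshd_config (sshd -T prints the effective configuration; the restart command matches Debian 7's sysvinit):

```
sshd -T | grep -i usedns     # expect: usedns no
service ssh restart
```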
| delay to get password prompt when ssh'ing to a public server |
1,345,806,144,000 |
I got an external Debian server. The problem is that my university campus doesn't allow connections to go outside when the port is different from TCP port 22, 80, 443, or UDP port 123. I tested them manually. On my Debian server I would like to listen on all my UDP and TCP ports so I can clearly figure out which TCP and UDP ports my university lets through their firewall. Nmap is wonderful on the client side to test that, but what should I do on the server side?
|
tcpdump usually comes as standard on Linux distros. It will log all packets visible at the server. Note that:
you probably want to set it running with a filter for your client IP to cut down on the noise
I think this includes packets not accepted by iptables on the local machine - but you might want to test this
e.g.
/usr/sbin/tcpdump -i eth0 -c 3000000 -np host client.example.com >tcp.log
Then just run nmap from your client.
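On the client side, a scan covering both protocols on all ports could then look like this (UDP scans require root and are slow; the hostname is a placeholder):

```
sudo nmap -sT -sU -p 1-65535 server.example.com
```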
| How to listen to all ports (UDP and TCP) or make them all appear open in Debian |
1,345,806,144,000 |
I've been on CentOS 7 for a long time and was used to building my custom iptables configurations on a variety of both personal and business boxes.
I've recently started working with CentOS 8 and learned of the move from iptables to nftables and so I was able to rewrite my rulesets and got everything up and running. The problem was that my custom nft rulesets were not persisting after a reboot, I had to manually systemctl restart nftables to get my rules back into force.
I learned that the culprit was firewalld, which from my understanding (because I never used it in CentOS 7), is a front end management tool for both iptables and nftables... correct? Once I systemctl disable firewalld and tried a reboot, my nftables rulesets were in place as expected. Problem solved.
My question is: what are the repercussions of not using firewalld? nftables is still running and active, so I'm assuming my actual firewall is still in place. Is there any reason why I should leave firewalld running and instead adjust a setting to ensure it uses my nftables ruleset? Any clarity on its use would be greatly appreciated!
|
I think the answer is fairly straightforward. First, you have done exactly the right thing...
Firewalld is a pure frontend. It's not an independent firewall by itself. It only operates by taking instructions, then turning them into nftables rules (formerly iptables), and the nftables rules ARE the firewall. So you have a choice between running "firewalld using nftables" and running "nftables only". Nftables in turn works directly as part of the kernel, using a number of modules there, which are partly new, and partly repeat the "netfilter" system of kernel hooks and modules which became part of the kernel around 2000.
It gets quite confusing to run firewalld and nftables (formerly, iptables) in parallel, though I believe some people do so. If you were accustomed to run your own iptables rules anyway, it is the perfect solution to have converted them to nftables rules, and let them be the rules of your firewall. The best thing indeed is to completely disable and preferably mask firewalld - to be slightly pedantic, you can run:
sudo systemctl stop firewalld
sudo systemctl disable firewalld
sudo systemctl mask --now firewalld
There is nothing else you need to do. I myself run directly with nftables too. I find that much more transparent than using a front end (there are others besides firewalld of course) - it gives you a complete understanding of what you are doing, and you can easily get a complete review of the effect of your rules by running sudo nft list ruleset > /etc/nftables.conf. And the use of separate nft tables in /etc/nftables.d is a nice and easy way of tracking what you have done, and where things are...
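To get the ruleset loaded at boot without firewalld, the nftables service has to be enabled; it reads a distribution-specific config file (on CentOS 8 that is /etc/sysconfig/nftables.conf, on Debian-style systems /etc/nftables.conf):

```
sudo systemctl enable --now nftables
# CentOS 8: add your rules (or an include line) to /etc/sysconfig/nftables.conf, e.g.
#   include "/etc/nftables/myrules.nft"
```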
It was subsequently asked in comments by @eriknelson why you would mask a service at all. This is done, afaik, for practical reasons of user experience, and to protect against mistakes and hard-to-find bugs. It is highly undesirable to have more than one firewall system running, as the results for most people would be unpredictable, and it is unlikely that you get clear error messages from any firewall about its interaction with another firewall that is not expected to be there. The kernel however tries to process whatever it is given. If you use either nftables or iptables, you should not use firewalld. Or ufw. Or any other higher level system. And if you mainly use a higher-level firewall like firewalld, you don't like to mess around with the detailed low-level instructions (although occasionally it is done for a particular difficult situation that you find it too difficult to specify in the more high-level firewall).
When you mask a systemd service, you can neither start nor enable it straightaway. If you find / realise that the service is masked, you can unmask it - and then do what you want. This is set up to prevent any such changes made inadvertently or automatically. You can see for yourself if you have masked services on your computer by running sudo systemctl list-unit-files | grep mask
So this situation that you may not necessarily want to remove firewalld completely, but equally don't want to run it perhaps inadvertently, is precisely one of the cases where using sudo systemctl mask xyz.service can come in handy.
I suppose from what you write that you know all this. But I am a bit of an evangelist for nftables, and if others read this answer, they might be helped by these small hints. The documentation of nftables is good, but not excessive.
| CentOS 8 firewalld + nftables or just nftables |
1,460,817,305,000 |
Let's say as the example that I have a firewall that blocks ALL ports from all sources/destinations.
What ports would I need to open to be able to successfully run:
ping google.com
...and are there any other ports I would have to open to be able to browse google.com via a browser?
I've tried opening port 53 (DNS), 80 (HTTP) and 443 (HTTPS); this is not enough. I am using iptables, but I am not asking how to configure this in iptables; I'm just asking which ports need to be open regardless of what port-based firewall you may be using.
|
For DNS, you need to allow UDP packets between any port on an IP address inside the firewall, and port 53 on an IP address outside the firewall.
For HTTPS, you need to allow TCP packets between any port on an IP address inside the firewall, and port 443 outside the firewall, or more rarely any port outside the firewall (some websites are not on the default port). For HTTP, it's the same with port 80.
TCP is a connected protocol; the two ends of the connection are not symmetric and firewalls usually make a difference between. There's rarely any security reason to prevent outgoing connections except maybe to force outgoing email to go through a dedicated relay (to prevent infected machines from sending spam undetected). A typical basic firewall for a client machine allows all or most outgoing connections, and blocks incoming connections.
For ping, allow ICMP. You should allow all ICMP unless you have a specific reason to block certain kinds of packets. Blocking ICMP indiscriminately can make network problems hard to diagnose and can cause floods due to applications not getting proper error replies.
Here's a simple Linux firewall configuration suitable for a typical client machine, that allows everything outgoing except SMTP to a machine other than smtp.example.com, and blocks incoming TCP connections except on port 22 (SSH).
iptables -F INPUT
# Accept everything on localhost
iptables -A INPUT -i lo -j ACCEPT
# Accept incoming packets on existing connections
iptables -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
# Accept DNS replies
iptables -A INPUT -p udp --sport 53 -j ACCEPT
# Accept incoming SSH connections
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
# Reject everything else that's incoming
iptables -A INPUT -j REJECT
iptables -F OUTPUT
# Forbid outgoing SMTP except to a known relay
iptables -A OUTPUT -p tcp --dport 25 ! -d smtp.example.com -j REJECT
# Allow everything else that's outgoing
iptables -P OUTPUT ACCEPT
| What ports need to be open on a firewall to access the internet? |
1,460,817,305,000 |
Is there a way to set a keepalive in the command-line MySQL client on Linux?
Our network recently moved to a VLAN setup, and our systems department no longer has control of the firewall. The powers-that-be decided to set a rule in their firewall to kill all connections after 30 minutes if no data has passed through (something about having to keep track of connections and limited resources). I have no control over this.
As a server admin and programmer, my problem is that this means I can not have a mysql command-line client running in the background while I do programming. If I don't send a query (or otherwise send data), when I try 35 minutes later, the client detects it no longer has a connection. I must reconnect every time.
In SSH, I am able to set a keepalive so that I can keep connections open throughout the day. Is there a similar configurable option for the mysql command-line client? I am having trouble finding anything, because "Mysql Keepalive" and similar searches all return results for a backend connection within a programming language.
|
You can set generic TCP keepalives, I think there is a kernel setting for that. But they're usually much less frequent (hours). There is a TCP Keepalive HOWTO which appears to have details.
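Those kernel settings can be inspected and tuned via sysctl; a read-only sketch (the defaults shown in the comments are typical for Linux, and these settings only affect sockets that enable SO_KEEPALIVE):

```shell
# Read the current kernel-wide TCP keepalive settings (values in seconds,
# except probes, which is a count)
cat /proc/sys/net/ipv4/tcp_keepalive_time    # idle time before the first probe (default 7200)
cat /proc/sys/net/ipv4/tcp_keepalive_intvl   # interval between probes (default 75)
cat /proc/sys/net/ipv4/tcp_keepalive_probes  # unanswered probes before the connection drops (default 9)

# To send the first probe after 10 minutes of idle time instead of 2 hours
# (requires root; only applies to sockets that set SO_KEEPALIVE):
# sysctl -w net.ipv4.tcp_keepalive_time=600
```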
Alternatively, why not just tunnel the MySQL connection over SSH, then you can use SSH keepalives?
$ ssh -L1234:127.0.0.1:3306 -o ServerAliveInterval=300 user@mysql-server
... (in another terminal)
$ mysql -u user -p -h 127.0.0.1 -P 1234
Third option would be to set up a tunnel (e.g., OpenVPN) to your server, and then that'll handle keepalive (and transparently reconnecting, etc.).
PS: The "our precious resources" argument for expiring established TCP connections in 30 minutes is BS. Unfortunately—I've been in their shoes before—some equipment is BS too. For example, BCP 142/RFC 5382 gives a minimum time of 2 hours 4 minutes (and that's only if the firewall isn't able to determine if the host is still up).
| MySQL Linux Client Timeout/Keepalive |
1,460,817,305,000 |
I think there is no iptables/pf solution to only allow an XY application on e.g.: outbound tcp port 80, eth0. So if I have a userid: "500" then how could I block any other communications then the mentioned on port 80/outbound/tcp/eth0? (e.g.: just privoxy is using port 80 on eth0)
Extra: virtualbox uses port 80 too when a browser on the guest OS visits a site... how to declare that? - allowing the normal user in general would be too big a hole
|
here's the iptables command to allow for a certain uid through a certain port.
iptables -A OUTPUT -p tcp -m tcp --dport 80 -m owner --uid-owner username -j ACCEPT
from the man page
[!] --uid-owner userid[-userid]
Matches if the packet socket’s file structure (if it has one) is owned by the given user. You may also specify a numerical UID, or an UID range.
as far as virtualbox.. I believe it runs its own kernel... so you might want to use the --uid-owner of virtualbox on the host OS, but then have a --uid-owner rule on the virtual machine as well.
It might also be useful to note that --gid-owner also exists, and you could create a group browser and sgid your browser apps so it runs with an effective group browser and then only put users who you want to have browsing in that group... this would not be a perfect solution... but most of the users wouldn't try to run any other apps as that group, thus generally restricting the outbound to that application I believe. I haven't tried this, so I'm not 100% that it would work as I've described.
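A sketch of that group-based approach (the group name "browser" and the user "alice" are hypothetical; the commands require root):

```shell
# Create a "browser" group and add the users who are allowed to browse
groupadd browser
usermod -aG browser alice

# Allow outbound HTTP on eth0 only for processes running with that group,
# then reject any other outbound HTTP attempts
iptables -A OUTPUT -o eth0 -p tcp --dport 80 -m owner --gid-owner browser -j ACCEPT
iptables -A OUTPUT -o eth0 -p tcp --dport 80 -j REJECT
```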
| iptables/pf rule to only allow XY application/user? |
1,460,817,305,000 |
Platform: RHEL 5.10
netcat Version: 1.84-10.fc6
I was trying to figure out if my inability to ssh was TCP-level and usually I use nc for this. This time, however, I got something unexpected.
[bratchley@ditirlns01 ~]$ nc -vz dixxxldv02.xxx.xxx 22 -w 15
nc: connect to dixxxldv02.xxx.xxx port 22 (tcp) timed out: Operation now in progress
Connection to dixxxldv02.ncat.edu 22 port [tcp/ssh] succeeded!
Normally if it can't connect within the specified timeout it just prints the first line. Thinking it was just some weird race condition (like the TCP connection kept completing just as I was approaching timeout) I lengthened the timeout period to 30 seconds but got the same exact results.
Telnet also fails so I think there is an IDS/Network Firewall blocking the traffic. I was just curious if anyone has seen this before or what it mean.
|
Shortly after posting, I found the problem:
[bratchley@ditirlns01 ~]$ host ditirldv02.ncat.edu
ditirldv02.ncat.edu has address 152.8.143.20
ditirldv02.ncat.edu has address 152.8.143.5
[bratchley@ditirlns01 ~]$
So it appears that nc will cycle through all A records for a given host and test each one individually. The first failure was for the incorrect IP address, the success was for the correct one.
| nc both fails and succeeds |
1,460,817,305,000 |
I have two interfaces on my VPS: eth0 and eth0:0. I want to block incoming packets on port 80 on eth0:0 using iptables. I tried this, but it doesn't work:
iptables -A INPUT -i "eth0:0" -p tcp --destination-port 80 -j DROP
If I change eth0:0 to eth0 it works correctly. What is the problem?
|
The short story: the way you did that is correct (as per your comment to the question).
The long story: on Linux, a network ‘device’ called foo:bar is an alias of ‘foo’ used when we need to assign multiple network settings to the ‘foo’ interface, e.g. to have it respond on multiple subnets on the same wire.
This is a kludgy way of doing this, and inconsistent to boot. For IPv6, all the addresses assigned to the interface eth0 are listed together under the eth0 entry. There's a more modern method of doing this (via the ip addr command).
You can spot alias interfaces because they have a colon : in their names, the part to the left of the colon is an extant interface name, and the interface stanza when you do ifconfig is very short. The HWaddr should also be identical to that of the ‘parent’ interface. They also won't be listed in /proc/net/dev. If you were to say ip addr, eth0:0 would show as the second address of interface eth0. (look for the indented line starting with inet)
Aliases and their parents share a lot of the settings and fields, since they share the physical layer. The kernel doesn't treat them as entirely separate interfaces. For one, traffic appears on the parent interface, not the alias. You may have noticed the alias doesn't even have packet/byte counters!
If you need to sniff traffic, firewall, etc on an alias interface, you have to use its parent instead. Since the only difference an alias has from its parent is its IPv4 settings, the only way to match traffic on an alias is to use those IP settings. With iptables, you match the alias's IPv4 address just like you did in the comment to your answer.
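For example, to get the effect the question was after, match on the alias's address instead of its name. A sketch, with 192.0.2.5 standing in for whatever address is assigned to eth0:0:

```shell
# Find the alias's IPv4 address (the second "inet" line under eth0)
ip addr show eth0

# Drop incoming port-80 traffic addressed to the alias's IP, rather than
# trying to match on the interface name "eth0:0"
iptables -A INPUT -d 192.0.2.5 -p tcp --dport 80 -j DROP
```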
| Why doesn't my iptables rule work? |
1,460,817,305,000 |
I am using iptables with ipset on an Ubuntu server firewall. I am wondering if there is a command for importing a file containing a list of IPs to ipset. To populate an ipset, right now, I am adding each IP with this command:
ipset add manual-blacklist x.x.x.x
It would be very helpful if I could add multiple IPs with a single command, like importing a file or so.
With the command
for ip in `cat /home/paul/ips.txt`; do ipset add manual-blacklist $ip;done
I get this response
resolving to IPv4 address failed to parse 46.225.38.155
for each ip in ips.txt
I do not know how to apply it.
|
You can use ipset save/restore commands.
ipset save manual-blacklist
You can run above command and see how you need to create your save file.
Example output:
create manual-blacklist hash:net family inet hashsize 1024 maxelem 65536
add manual-blacklist 10.0.0.1
add manual-blacklist 10.0.0.2
And restore it with below command.
ipset restore -! < ips.txt
Here we use -! to ignore errors mostly because of duplication.
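If you start from a plain list of addresses (one per line, as in the question), you can generate the restore file with a one-liner. A sketch, assuming the file has no trailing carriage returns (strip them with tr -d '\r' first if the list came from a Windows machine):

```shell
# Prefix each line with the "add <setname>" command that ipset restore expects
sed 's/^/add manual-blacklist /' ips.txt > ips.restore
# then load it:
# ipset restore -! < ips.restore
```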
| How to import multiple ip's to Ipset? |
1,460,817,305,000 |
Sometimes I upload an application to a server that doesn't have external internet access.
I would like to create the same environment in my machine for testing some features in the application and avoid bugs (like reading a rss from an external source).
I thought about just unplugging my ethernet cable to simulate this, but that seems archaic and I don't know if I'm going to get the same exceptions (especially in Python) when doing this compared to the limitations at the server.
So, how do I simulate "no external access" in my development machine? Will "deactivating" my ethernet interface and reactivating later (with a "no hassle" command) have the same behavior as the server with no external access?
I'm using Ubuntu 10.04. Thanks!
|
Deleting the default route should do this. You can show the routing table with /sbin/route, and delete the default with:
sudo /sbin/route del default
That'll leave your system connected to the local net, but with no idea where to send packets destined for beyond. This probably simulates the "no external access" situation very accurately.
You can put it back with route add (remembering what your gateway is supposed to be), or by just restarting networking. I just tried on a system with NetworkManager, and zapping the default worked fine, and I could restore it simply by clicking on the panel icon and re-choosing the local network. It's possible that NM might do this by itself on other events, so beware of that.
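A sketch of that save-and-restore cycle (the gateway address 192.168.1.1 is a placeholder; note your real one before deleting the route):

```shell
# Note the current default gateway before removing it
/sbin/route -n | grep '^0.0.0.0'

# Cut external access
sudo /sbin/route del default

# ... test the application with no external access ...

# Restore it, substituting your real gateway
sudo /sbin/route add default gw 192.168.1.1
```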
Another approach would be to use an iptables rule to block outbound traffic. But I think the routing approach is probably better.
| Is it possible to simulate "no external access" from a Linux machine when developing? |
1,460,817,305,000 |
I am doing some research to figure out which distros of Linux contain kernel packet filtering and are compatible with BPF.
http://kernelnewbies.org/Linux_3.0
http://lwn.net/Articles/437981/
These two articles lead me to believe there is a package somewhere that includes the libraries and binaries?
I am specifically looking for the "pfctl" command like I have in FreeBSD
Thanks
|
I think you have mixed two different things:
The OpenBSD packet filter facilities (sometimes called pf, and mostly controlled by pfctl). These are the basis of OpenBSD firewalling, the Linux equivalent is netfilter, mostly controlled by the iptables command. Comparable, but not compatible (and most say that OpenBSD is superior to Linux in this aspect).
The (Berkeley) packet filter (mostly controlled by the libpcap library). This is a feature of the kernel that allows an application to be notified of packets flowing through a network interface. Since usually any client is only interested in a subset of all packets, most of the library is about filtering which packets should be forwarded to the application and which shouldn't. It's used for network analyzers like tcpdump and Wireshark.
The articles you link are not about a port of the OpenBSD pf, instead they describe a new JIT that optimizes the kernel-resident filters used by libpcap.
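So on Linux there is no pfctl; what you can do is exercise the Berkeley packet filter through a libpcap-based tool such as tcpdump. A sketch (the interface name is an assumption):

```shell
# Capture with a BPF filter expression; libpcap compiles the expression into
# a kernel-resident filter so only matching packets are copied to user space
sudo tcpdump -i eth0 'tcp port 80 and not host 192.0.2.1'

# Dump the compiled BPF instructions instead of capturing (-d)
sudo tcpdump -i eth0 -d 'tcp port 80'
```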
| Is berkeley packet filter ported to linux? |
1,460,817,305,000 |
We have a problem with doing SSH connections through remote port forwarding.
The scenario is an enterprise network, where a server on the internal network (let's call it "origin") must log in via SSH to a server in the DMZ ("target"). Since the target server in the DMZ is locked down for connections from the internal network (and cannot even be seen from the internal network), we have a jump host in the DMZ that we go through ("jumphost"). We do this by setting up remote port forwarding on the jumphost.
We run this command from the origin server on the internal network, to the jumphost:
origin> ssh -R *:1234:target:22 myusername@jumphost
This is to establish an SSH session on the jumphost, make it start listening on port 1234 (just an example arbitrary port number), and forward connections on that port to the target server port 22 (SSH).
Then we establish a second SSH session, still from the origin server to the jumphost, on port 1234, which then actually gets connected to the target server on port 22 - this is our 'real' SSH session where we can do our work on the target server:
origin> ssh -p 1234 jumphost
Configuration
The jump host has been configured to allow remote port forwarding, with the following settings in sshd_config:
AllowTcpForwarding yes
GatewayPorts yes
Also, firewall openings are in place between the origin server and the jump host, for port 22 (for the initial SSH connection to set up the remote port forwarding) and port 1234 (for the subsequent SSH connection on the forwarded port). There is also a firewall between the jumphost and the target, which has been opened on port 22.
Outcome
When we establish the second connection (the one through the forwarded port), the connection is immediately closed ('connection closed by remote host').
Running tcpdump on the target server shows no activity, i.e. it seems the connection gets blocked.
However, we are able to successfully establish a regular SSH session from the jumphost to the target. Only when coming in through the forwarded port is the connection closed, although both connect to the target on port 22.
What's more, if we make the port forwarding point to a server on the internal network (i.e. a connection from origin on the internal network, to the jumphost in the DMZ, and back to a third server on the internal network), then the SSH session is established successfully.
Speculation and questions
All this leads me to believe that some network security setting is at play, which prevents connecting via the forwarded port on the jump server to the target server within the DMZ. Unfortunately I am not knowledgeable enough to know:
(1) Is an SSH connection coming from the origin server, through a forwarded port on the jump server, 'different', from a network security policy point of view, that it could technically be blocked, and if so, how? And what would need to be done to lift that restriction?
(2) Any other reasons that this connection is not allowed through - firewall configuration, router configuration, SSH settings on the origin or jumphost, anything else?
(3) Could it fail because the origin server does not know the target server, and thus the first ssh command does not work as intended? In other words, is the hostname specified in the first ssh command ("target") interpreted on the client (origin) or on the server we are connecting to to create the tunnel (the jumphost)?
What stumps me the most is that a regular SSH session can be established from the jumphost to the target, I would think that the SSH connection coming in over the forwarded port would be the same, but somehow it isn't.
Any input very much appreciated.
|
It looks like you should be using local port-forwarding instead of remote port-forwarding. You may want to refer to the following helpful blog post by Dirk Loss:
SSH port forwarding visualized
It includes the following illustrative diagram:
In order to read the diagram you need to know that it describes the relationships between 4 different roles involved in creating and utilizing an SSH tunnel:
an ssh client (i.e. ssh - the OpenSSH command-line client) used to establish the tunnel;
an ssh server (i.e. sshd - the OpenSSH server daemon) used to maintain the other end of the tunnel;
an application server (e.g. another ssh server or an http server);
an application client (e.g. another ssh client or a web-browser) which wants to access the application server via the tunnel.
It's also important to understand that the two different types of forwarding correspond to two different use cases:
Local Forwarding: where the application client connects via the ssh client
Remote Forwarding: where the application client connects via the ssh server
Remote forwarding is so called because the forwarding is performed remotely (at the ssh server) rather than locally (at the ssh client). I also find "remote forwarding = reverse forwarding" to be a useful mnemonic.
So as you can see, in order to initiate a connection from an ssh client at the origin host through an sshd server on a proxy jumphost to a third target host you would have to use local port-forwarding. Remote port-forwarding is for the case in which you want the entry point to the tunnel to be located at the host running the sshd server rather than at the host running the ssh client.
In the man page the local port-forwarding syntax is written as follows:
ssh -L [bind_address:]port:host:hostport user@remote
This can be written more intuitively as the following:
ssh -L [local_bind_address:]local_port:remote_host:remote_host_port user@proxy_host
Or, using your naming conventions:
ssh -L [origin_bind_address:]origin_port:target_host:target_host_port user@jump_host
If we modify your command to use local port-forwarding instead then we end up with the following:
user@origin:~$ ssh -L *:1234:target:22 myusername@jumphost
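With local forwarding, the tunnel's entry point listens on the origin host itself, so the second ("real") session then connects to that local port (myusername here stands for your account on the target host):

```shell
# On origin, in another terminal: reach the target's sshd through the tunnel
ssh -p 1234 myusername@localhost
```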
| SSH session through jumphost via remote port forwarding |
1,460,817,305,000 |
I have been reading for a while now.
What I understood is:
nftables is the modern Linux kernel packet classification framework. nftables is the successor to iptables. It replaces the existing iptables, ip6tables, arptables, and ebtables framework.
x_tables is the name of the kernel module carrying the shared code portion used by iptables, ip6tables, arptables and ebtables thus, Xtables is more or less used to refer to the entire firewall (v4, v6, arp, and eb) architecture. As a system admin, I should not worry about xtables / x_tables (some people use the underscore, so not sure whether xtables is same as x_tables or not) which is actually some code in the kernel.
nftables uses nf_tables, where nf_tables is the name of the kernel module. As a system admin, I should not worry about nf_tables which is actually some code in the kernel.
iptables-nft is something that looks like iptables but acts like nftables. Its whole purpose is to migrate from iptables to nftables.
iptables-nft uses xtables-nft, where xtables-nft is the name of the kernel module. As a system admin, I should not worry about xtables-nft.
Please let my know whether the above statements are right or wrong. If wrong then please give me the correct statement.
|
My view is that iptables, ip6tables, ebtables and arptables are a frontend tool-set to Netfilter.
They are user-space tools that format and compile the rules to load them into the core Netfilter that runs in the kernel. You can find all the kernel parts of Netfilter in your modules directory with ls /lib/modules/$(uname -r)/kernel/net/netfilter/; they take the form of the nf_*, nft_*, xt_* kernel modules you mentioned.
The problem with these tools is that they operate with rule granularity, so every rule adjustment implies downloading ALL the kernel rules, making your modification on a binary blob, then uploading it back to the kernel. This process becomes very CPU-intensive when the rules grow too numerous.
nftables is a rewrite of this tool series into one unique tool (to rule them all ... ahem), which makes it simpler to use and more performant, but it is still a frontend to Netfilter. The main differences are that it has a smooth syntax able to address every part of Netfilter, and that it can modify the binary set of rules as a whole, directly inside Netfilter, without having to download and then upload them one by one, which represents a big gain in performance.
This also explains why you can use both iptables and nftables to modify the rules, but it is not recommended, because you can't see the precedence between the different rules, or this precedence may not be what you wanted.
Now, depending on distribution policy one can find different set of packages to works with the new Netfilter core kernel set modules.
You mentioned xtables-nft: it is just a shortcut to designate an intermediary set (or package) of userspace tools made of {ip,ip6,eb,arp}tables with the ability to work on the new Netfilter core the same way the older tools did, so it helps ease the migration from the old way to nftables (the new way).
There is also a package named iptables-legacy to keep the traditional ip{,6}tables tool-set working the same way, without the ability to translate the rules directly to nftables, so firewall scripting tools like ferm can keep working on new installations of modern kernels.
Bridging the two, one can for example run iptables-legacy-save | iptables-nft-restore to directly translate an old set of iptables rules to a new nftables ruleset.
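To see which backend a given system's iptables binary is actually using, you can check its version string (a sketch; the exact wording and the alternatives mechanism vary by distribution):

```shell
# Recent builds report their backend in the version output, e.g.
# "iptables v1.8.4 (nf_tables)" or "iptables v1.8.4 (legacy)"
iptables --version

# On Debian/Ubuntu the choice between backends is managed with alternatives
update-alternatives --display iptables
```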
regards.
| What is the relationship or difference among iptables, xtables, iptables-nft, xtables-nft, nf_tables, nftables |
1,460,817,305,000 |
I am using fail2ban with ipfw on FreeBSD. Is there a way to ignore a specific ip address, making sure that fail2ban never blocks or reports it?
|
See whitelisting on the fail2ban website:
# This will ignore connection coming from common private networks.
# Note that local connections can come from other than just 127.0.0.1, so
# this needs CIDR range too.
ignoreip = 127.0.0.0/8 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16
Another reference here:
First, find ignoreip. It's always important for you to have a way in! These are IPs are fail2ban will ignore - IPs listed here can always have invalid login attempts and still not be blocked. In my file, I'm putting down the network ranges for my internal network (192.168.1.0/24) as well as one other trusted IP address of a machine that I will be able to SSH into if need be. These need to be space separated! If they are not, fail2ban won't block anyone.
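As an aside, more recent fail2ban versions (0.10 and later, if I recall correctly) can also adjust the ignore list on a running jail without editing the config; treat the exact subcommands as an assumption for your version:

```shell
# Add/remove a trusted address on a running jail (jail name "sshd" is hypothetical)
fail2ban-client set sshd addignoreip 192.0.2.10
fail2ban-client set sshd delignoreip 192.0.2.10
```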
| Ignore a specific ip for fail2ban |
1,460,817,305,000 |
I have a CentOS release 5.4 linux box on Amazon EC2 that I'm trying to set up to be monitored via Nagios. The machine is in the same security group as the nagios server, but it seems to be unresponsive to pings or NRPE checks, although apparently port 22 is open.
The CentOS box can ping itself using its internal IP address, and it can ping the Nagios server, but the server cannot ping the CentOS box.
I know the CentOS box is using iptables, here are the contents of the /etc/sysconfig/iptables file (some ips changed for security):
# Generated by iptables-save v1.3.5 on May 16 11:28:45 2012
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [56:6601]
-A INPUT -s 149.15.0.0/255.255.0.0 -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -s 72.14.1.153 -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -s 184.119.28.174 -p tcp -m tcp --dport 5666 -j ACCEPT
COMMIT
# Completed on May 16 11:28:45 2012
The part that really gets me is that even after I do /etc/init.d/iptables stop:
Flushing firewall rules: [ OK ]
Setting chains to policy ACCEPT: filter [ OK ]
Unloading iptables modules: [ OK ]
I am still unable to ping the box or do NRPE checks on it.
What else could be preventing ping or other connections? I'm not sure what else to try.
Here is a list of processes found with sudo ps -A:
aio/0
atd
bash
cqueue/0
crond
dbus-daemon
dhclient
events/0
hald
hald-runner
init
kauditd
kblockd/0
khelper
khubd
kjournald
kmirrord
kmpathd/0
kpsmoused
kseriod
ksoftirqd/0
kswapd0
kthread
master
migration/0
mingetty
nscd
pdflush
pickup
qmgr
sshd
su
syslog-ng
udevd
watchdog/0
xenbus
xenwatch
xinetd
|
I don't think it's related to the ping problem, but if you want to turn SELinux off temporarily, you have this option:
setenforce 0
This puts SELinux from enforcing into permissive mode. To check its status, run
sestatus
To disable SELinux permanently, you can use system-config-securitylevel, or edit /etc/selinux/config with nano or vi and change the parameter from SELINUX=enforcing to SELINUX=disabled.
As for the ping itself: in my experience there is a rule in Amazon EC2 (the security group) that prevents ping traffic between your machines unless you explicitly allow it...
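For the EC2 side, ICMP is blocked unless the instance's security group explicitly allows it. A sketch using the modern AWS CLI (the group name is a placeholder; at the time of the question the equivalent was the ec2-authorize tool):

```shell
# Allow inbound ICMP (all types/codes) from anywhere into the security group
aws ec2 authorize-security-group-ingress \
    --group-name my-security-group \
    --protocol icmp --port -1 \
    --cidr 0.0.0.0/0
```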
| What prevents a machine from responding to pings? |
1,460,817,305,000 |
I'd like to receive a RTSP stream via VLC, but when I try to run
sudo -u non_root_user cvlc -vvv -I dummy rtsp://ip:port/x.sdp
I get:
Unable to determine our source address: This computer has an invalid IP address: 0x0
I think that the ports might be closed, because when I disabled firewall, I was able to receive the stream. I'd like to ask you how to set the iptables so I can receive RTSP. Thanks.
|
You've run into an ugly hack in Live555, the library VLC uses to provide the RTSP client feature. (VLC's RTSP server code is VLC-specific.) The hack attempts to figure out which IP your machine appears to use on the LAN. (Ugly as the hack is, I don't know a better way for Live555 to do this.)
You have to open UDP port 15947 in your firewall to fix the error that you've run into. That's the "test port" Live555 uses for this hack.
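In iptables terms, the rule for that test port would look like this (a minimal sketch for the INPUT chain):

```shell
# Allow Live555's source-address "test port" mentioned above
iptables -A INPUT -p udp --dport 15947 -j ACCEPT
```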
Having done that, you might also have to open additional ports to receive your stream, depending on how your firewall works. RTSP is only a stream control protocol, not a stream delivery protocol. Think of RTSP as "VCR buttons" for the actual stream delivery protocol: play, stop, pause, FF, rewind.... The RTSP client negotiates stream delivery ports with the server as part of the "play" action.
The upshot of this is that the client (VLC in this case) is going to ask the server to send the media to it on a particular port in the RTSP SETUP command:
SETUP rtsp://192.168.0.1:8554/42.ts/track1 RTSP/1.0
CSeq: 4
User-Agent: LibVLC/2.0.2 (LIVE555 Streaming Media v2011.12.23)
Transport: RTP/AVP;unicast;client_port=60860-60861
That is, VLC is telling the RTSP server it wants the media delivered on ports 60860 and 60861 via RTP. The client picks those ports randomly. If your firewall blocks them, it will block the stream delivery even though the RTSP negotiation succeeded.
In the best case, your firewall will either not block such high ports, or it will have some stateful inspection feature that lets it unblock them when it sees this RTSP negotiation.
If your firewall does block it, you can debug it with Wireshark. It understands the RTSP protocol. Right-click a packet in the RTSP stream and say "Follow TCP stream". In the window that pops up, find the RTSP SETUP command. Then start Wireshark again, this time looking for UDP traffic on those ports. (All of this while the RTSP client continues downloading the stream, or trying to.)
| Enable RTSP in iptables |
1,460,817,305,000 |
Netfilter connection tracking is designed to identify some packets as "RELATED" to a conntrack entry.
I'm looking to find the full details of TCP and UDP conntrack entries, with respect to ICMP and ICMPv6 error packets.
Specific to IPv6 firewalling, RFC 4890 clearly describes the ICMPv6 packets that shouldn't be dropped
http://www.ietf.org/rfc/rfc4890.txt
4.3.1. Traffic That Must Not Be Dropped
Error messages that are essential to the establishment and maintenance
of communications:
Destination Unreachable (Type 1) - All codes
Packet Too Big (Type 2)
Time Exceeded (Type 3) - Code 0 only
Parameter Problem (Type 4) - Codes 1 and 2 only
Appendix A.4 suggests some more specific checks that could be performed on Parameter Problem messages if a firewall has the
necessary packet inspection capabilities.
Connectivity checking messages:
Echo Request (Type 128)
Echo Response (Type 129)
For Teredo tunneling [RFC4380] to IPv6 nodes on the site to be possible, it is essential that the connectivity checking messages are
allowed through the firewall. It has been common practice in IPv4
networks to drop Echo Request messages in firewalls to minimize the
risk of scanning attacks on the protected network. As discussed in
Section 3.2, the risks from port scanning in an IPv6 network are much
less severe, and it is not necessary to filter IPv6 Echo Request
messages.
4.3.2. Traffic That Normally Should Not Be Dropped
Error messages other than those listed in Section 4.3.1:
Time Exceeded (Type 3) - Code 1
Parameter Problem (Type 4) - Code 0
In the case of a linux home router, is the following rule sufficient to protect the WAN interface, while letting through RFC 4890 ICMPv6 packets? (ip6tables-save format)
*filter
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
Addendum:
of course, one needs other rules for NDP and DHCP-PD:
-A INPUT -s fe80::/10 -d fe80::/10 -i wanif -p ipv6-icmp -j ACCEPT
-A INPUT -s fe80::/10 -d fe80::/10 -i wanif -p udp -m state --state NEW -m udp --sport 547 --dport 546 -j ACCEPT
In other terms, can I safely get rid of the following rules to comply with RFC 4890, keeping only the "RELATED" rule first?
-A INPUT -i wanif -p icmpv6 --icmpv6-type destination-unreachable -j ACCEPT
-A INPUT -i wanif -p icmpv6 --icmpv6-type packet-too-big -j ACCEPT
-A INPUT -i wanif -p icmpv6 --icmpv6-type ttl-exceeded -j ACCEPT
-A INPUT -i wanif -p icmpv6 --icmpv6-type parameter-problem -j ACCEPT
|
I don't know the answer, but you can find out yourself.
Use these rules (creates an empty chain "NOOP" for accounting purposes):
*filter
...
:NOOP - [0:0]
...
-A INPUT -i wanif -p icmpv6 --icmpv6-type destination-unreachable -j NOOP
-A INPUT -i wanif -p icmpv6 --icmpv6-type packet-too-big -j NOOP
-A INPUT -i wanif -p icmpv6 --icmpv6-type ttl-exceeded -j NOOP
-A INPUT -i wanif -p icmpv6 --icmpv6-type parameter-problem -j NOOP
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i wanif -p icmpv6 --icmpv6-type destination-unreachable -j ACCEPT
-A INPUT -i wanif -p icmpv6 --icmpv6-type packet-too-big -j ACCEPT
-A INPUT -i wanif -p icmpv6 --icmpv6-type ttl-exceeded -j ACCEPT
-A INPUT -i wanif -p icmpv6 --icmpv6-type parameter-problem -j ACCEPT
...
Then sometimes later, use ip6tables-save -c to see the counters for the above rules. If the counters are > 0 for the NOOP rules above the "RELATED" line but 0 for the ACCEPT rules below, you know the "RELATED" match has taken care of accepting them. If the counter for some NOOP rule is 0, then you can't tell yet for that particular icmpv6 type whether RELATED does it or not. If some ACCEPT line has its counter > 0, then you do need that explicit rule.
| netfilter TCP/UDP conntrack RELATED state with ICMP / ICMPv6 |
1,460,817,305,000 |
I am a student and I keep most of my files on my home computer. Unfortunately, i can't use ssh or scp from my laptop which I use at school because of the firewall. I was thinking about trying to use port 443 because that might be open.
My question is: I have multiple computers in my house and so I am using a router. Would it be bad if i were to port forward 443 to my computer?
I'm not sure if there are any security issues related with this or if it would screw anything up when trying to use https from my other computers.
|
It should work fine; it's neither more nor less secure than using any other port for ssh. And no, outbound TCP sockets are not the same as inbound TCP sockets, so it should not interfere with your outbound network traffic (such as using https from your other computers).
| Is it bad to port forward port 443 for ssh? |
1,460,817,305,000 |
I have a (non production) machine where external supporters have shell-access (non-root). I want to prevent them from going further on into our network from that machine using iptables.
The "normal" firewall-gui only blocks incoming traffic. How can I set up rules like "accept all incoming traffic (plus response), but allow only new outgoing traffic for specific targets (like snmp-traps to the monitoring server)"?
OS is CentOS 5
|
There are two ways to drop all outgoing traffic except what you explicitly define as ACCEPT. The first is to set the default policy for the OUTPUT chain to drop.
iptables -P OUTPUT DROP
The downside to this method is that when the chain is flushed (all rules removed), all outbound traffic will be dropped. The other way is to put a "blanket" DROP rule at the end of the chain.
iptables -A OUTPUT -j DROP
Without knowing exactly what you need, I can not offer advice on what to accept. I personally use the method of putting a default DROP rule at the end of the chain. You may need to investigate how your GUI is setting rules, otherwise it may conflict with traditional CLI ways of restoring rules on boot (such as /etc/sysconfig/iptables).
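As a minimal sketch of the "accept incoming plus responses, restrict new outgoing" idea: the ruleset below is in `iptables-restore` format, written to a local file for review. The monitoring-server address `192.0.2.10` and the SNMP-trap port 162 are assumptions; adjust them, then load the file as root with `iptables-restore`:

```shell
# Write a candidate OUTPUT ruleset for review (loading requires root:
#   iptables-restore < outbound.rules).
cat > outbound.rules <<'EOF'
*filter
:OUTPUT ACCEPT [0:0]
# replies to incoming connections keep working
-A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# new outgoing traffic only to the monitoring server (SNMP traps; assumed address)
-A OUTPUT -d 192.0.2.10 -p udp --dport 162 -j ACCEPT
# blanket drop at the end of the chain
-A OUTPUT -j DROP
COMMIT
EOF
cat outbound.rules
```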
| Blocking outgoing connects with iptables |
1,460,817,305,000 |
I'm trying to set up a firewall on my own desktop (currently I'm tinkering with a Fedora 29 virtual machine). I would like to have it on the "deny-everything-by-default" basis. Almost immediately I decided to disable and mask the firewalld.service, since firewalld had no way to drop the outgoing packets, except by using the native iptables syntax. So I decided to resort to nftables, since it's the modern replacement for the former.
The problem is that after a system reboot, the iptables chains have some rules which I didn't set (and I have no idea where they come from). On the other hand, # nft list ruleset returns nothing. So I assume that rules from iptables and nft will be in effect simultaneously, and when I set up some nft rules, iptables rules that can appear from "nowhere" will be able to meddle.
I tried to remove iptables, but dnf refused to do so and warned that systemd depends on it.
So could anyone answer a couple of my questions here, please?
Do I understand the concepts here correctly (that iptables rules and chains are separate from nft ones, and that they both are in effect at the same time)?
How can I reliably use nft without iptables rules interference?
Or should I simply use iptables and remove nft?
|
For the question per se, these are the last two questions from the original post:
How can I reliably use nft without iptables rules interference?
Or should I simply use iptables and remove nft?
this is what the nftables wiki says:
What happens when you mix Iptables and Nftables?
How do they interact?
nft        Empty   Accept   Accept        Block         Blank
iptables   Empty   Empty    Block         Accept        Accept
Results    Pass    Pass     Unreachable   Unreachable   Pass
So one should not worry that some traffic will be allowed because it was allowed in one tool, while forbidden in the other.
As for those iptables rules, as I asked, "after a system reboot iptables chains have some rules, which I didn't set (and I have no idea where they come from)", they turned out to come from the libvirtd.service, which I disabled, since I don't need it. But it wouldn't have hurt even if I had not.
| How to prevent iptables and nftables rules from running simultaneously? |
1,460,817,305,000 |
I have a standard installation of Ubuntu 10.04, and have installed the LAMP stack so I can do some web development locally. On my router I have opened port 80 so I can develop with external services like paypal and facebook, as they need to see the website for them to work.
How unsecure has my development machine become by opening port 80? Can I secure it further, yet leave port 80 open?
I am asking this question because in my apache error.log file I noticed an external IP address trying to access webdav, which I do not have set up. I have yet to check the access.log file.
|
Of course any services you have open will increase your vulnerable attack surface. What runs behind those services will determine how secure and insecure you become. If you write insecure PHP scripts and host them in your newly accessible Apache site, the world will be able to (and will!) exploit them. You should seriously consider what you are making available and how secure the scripts that run your site are.
If this is just for development, you might consider blocking access to port 80 except from certain IPs that you know need to connect back to you. How to do this is frequently answered over on ServerFault.
The fact that the port got scanned for a service you don't run is not surprising. If they had found webdav, the next step would be to scan it for known vulnerabilities that you haven't patched. The same will happen with any software you host. If you put up an old version of some CMS and don't patch it, it will get scanned and exploited.
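As a sketch of restricting access to known addresses: the rules below allow port 80 only from one source IP (the address `203.0.113.5` is an assumption standing in for the external service you test with). They are printed here for review rather than executed, since applying them requires root:

```shell
# Candidate rules (printed for review; run them as root to apply).
rules='iptables -I INPUT -p tcp --dport 80 -s 203.0.113.5 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j DROP'
printf '%s\n' "$rules"
```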
| Opening port 80 but remain secure |
1,460,817,305,000 |
I have one single ipset added to my iptables on a CentOS 6.x box and this rule is lost when the machine reboots.
I've found this answer showing how to make a Ubuntu system reload the iptables rules after a reboot but this directory is not present on CentOS.
How do I make this CentOS box load the firewall rules after a reboot?
NOTE: Yes, I'm saving the rules using iptables save and the file is being saved.
This is what is inside /etc/sysconfig/iptables:
# Generated by iptables-save v1.4.7 on Mon Apr 8 09:52:59 2013
*filter
:INPUT ACCEPT [2713:308071]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [1649:1766437]
-A INPUT -p tcp -m multiport --dports 25,587,465,110,143,993,995 -m state --state INVALID,NEW,RELATED,ESTABLISHED -m set --match-set blocking src -j DROP
COMMIT
# Completed on Mon Apr 8 09:52:59 2013
The saved file shows -A INPUT, but when I created the rule I used -I INPUT.
The rule used to create this was:
iptables -I INPUT -p tcp -m multiport --dports 25,587,465,110,143,993,995 -m state --state NEW,ESTABLISHED,RELATED,INVALID -m set --set blocking src -j DROP
|
You lost the rules because you have to save them before restarting the service or the server. When you add a rule, it lives only in memory; saving writes the rules to a file, from which they are restored at start-up.
So first You need to save added rules using:
$ /etc/init.d/iptables save
This will save all rules in /etc/sysconfig/iptables, then just enable the iptables service at start-up using:
$ chkconfig --level 53 iptables on
Method 2
To save rules:
$ /sbin/iptables-save > /etc/iptables.rules
To restore rules [ Add Below entry in /etc/rc.local ]:
$ /sbin/iptables-restore < /etc/iptables.rules
| iptables rules not reloading on CentOS 6.x |
1,460,817,305,000 |
Is it possible to configure UFW to allow UPnP between computers in the home network?
Everything works if I turn off the firewall. I can see in syslog the firewall is blocking me. I've tried all sorts of tips out there like open 1900, 1901, 5353, these all seemed like random attempts. I know the issue is UPNP requests a random port and UFW is simply blocking it.
|
You seem to be close to the answer. The easiest thing to do is to temporarily turn off the firewall, let your media boxes run for a couple of minutes, and then check the output from lsof:
lsof -i :1025-9999 +c 15
The -i lists "files" corresponding to an open port; use -i4 to restrict to IPv4 only. The number range restricts this to a list of port numbers - omit it if you want everything. The +c bit just gives you more meaningful command names associated with the ports.
netstat -lptu --numeric-ports
This lists all of the active ports along with their protocol and source/target address.
With this information, you can build a script to set ufw correctly. Here is my script by way of example:
#!/bin/sh
# Set up local firewall using ufw (default install on Ubuntu)
# @see /etc/services for port names
# obtain server's IP address
SERVERIP=192.168.1.181
# Local Network
LAN="192.168.0.0/255.255.0.0"
# disable firewall
ufw disable
# reset all firewall rules
ufw reset
# set default rules: deny all incoming traffic, allow all outgoing traffic
#ufw default allow incoming
ufw default deny incoming
ufw default allow outgoing
# open port for SSH
ufw allow OpenSSH
# open port for Webmin
ufw allow webmin
# open ports for Samba file sharing
ufw allow from $LAN to $SERVERIP app Samba
ufw allow to $LAN from $SERVERIP app Samba
#ufw allow from $LAN to $SERVERIP 137/udp # NetBIOS Name Service
#ufw allow from $LAN to $SERVERIP 138/udp # NetBIOS Datagram Service
#ufw allow from $LAN to $SERVERIP 139/tcp # NetBIOS Session Service
#ufw allow from $LAN to $SERVERIP 445/tcp # Microsoft Directory Service
# open ports for Transmission-Daemon
ufw allow 9091
ufw allow 20500:20599/tcp
ufw allow 20500:20599/udp
# Mediatomb
## upnp service discovery
ufw allow 1900/udp
## Mediatomb management web i/f
ufw allow 49152
# Plex Media Server
## Manage
ufw allow 32400
# open port for MySQL
ufw allow proto tcp from $LAN to any port 3306
# open ports for web services
ufw allow 80
ufw allow 443
ufw allow 8000:9999/tcp
ufw allow 8000:9999/udp
# Deny FTP
ufw deny 21/tcp
# Webmin/usermin allow
ufw allow webmin
ufw allow 20000
# open port for network time protocol (ntpd)
ufw allow ntp
# Allow Firefly (DAAP)
ufw allow 3689
# enable firewall
ufw enable
# list all firewall rules
ufw status verbose
You should be able to see from the Mediatomb section that uPNP is working on the standard port 1900 over UDP (not TCP) and is open in both directions, this is the main port for you. But you can also see that there are numerous other ports required for specific services.
| Uncomplicated Firewall (UFW) and UPNP |
1,460,817,305,000 |
I have two servers
Server1 -> Static IP1
Server2 -> Static IP2
Server2's firewall allows access only from Static IP1
I can connect to Server1 via ssh from anywhere.
How can I connect to Server2 from my PC (which is behind a dynamic IP) via ssh in one step, instead of connecting to Server1 via ssh and then doing another ssh to Server2 from within Server1's shell?
|
If you have OpenSSH 7.3p1 or later, you can tell it to use server1 as a jump host in a single command:
ssh -J server1 server2
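The same jump can be made permanent in ~/.ssh/config, so that a plain `ssh server2` goes through server1 automatically. The stanza below (written to a local file here for inspection; append it to your real ~/.ssh/config to use it) reuses the host names from the question as placeholders:

```shell
# Sketch of the equivalent ~/.ssh/config stanza; "server1"/"server2" stand in
# for the real hostnames.
cat > ssh_config_fragment <<'EOF'
Host server2
    ProxyJump server1
EOF
cat ssh_config_fragment
```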
See fcbsd’s answer for older versions.
| can I access ssh server by using another ssh server as intermediary [duplicate] |
1,460,817,305,000 |
I installed Webmin, and then set up the firewall like this:
INPUT
SSH port ALLOWED
Webmin port ALLOWED
HTTP port (80) ALLOWED
DROP EVERYTHING ELSE
FORWARDING
no rules
OUTPUT
no rules
If I remove DROP EVERYTHING ELSE from INPUT, everything works.
However, when that rule is added, apt-get doesn't work, and I can't ping or traceroute anything.
Even with DROP EVERYTHING ELSE enabled, Webmin, HTTP and SSH still work.
Which ports should I unblock to get apt-get working and to allow connections to other domains from within the server?
Thanks
|
Make sure you also accept connections originated from inside. With iptables:
iptables -A INPUT -m state --state ESTABLISHED -j ACCEPT
With Webmin, allow
Connection states EQUALS Existing Connection
| Which ports should I open for apt-get to work? |
1,460,817,305,000 |
Running iptables -L -n gives me the following info:
Chain IN_ZONE_work_allow (1 references)
target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 ctstate NEW
ACCEPT udp -- 0.0.0.0/0 224.0.0.251 udp dpt:5353 ctstate NEW
ACCEPT udp -- 0.0.0.0/0 0.0.0.0/0 udp dpt:631 ctstate NEW
What are ACCEPT udp 0.0.0.0/0 dest 224.0.0.251 ?
|
It means you are allowed to receive multicast DNS packets (dpt = destination port, 5353 = multicast DNS), udp is the protocol, 224.0.0.251 is the multicast destination address, and 0.0.0.0/0 means from anywhere. ctstate NEW means the rule matches only packets opening a new connection; packets belonging to already-established connections are accepted by a more general rule.
In case you are not aware: at a low level, on shared or broadcast media, all computers on a network receive the packets sent by any other computer; each host then sorts out for itself which ones are addressed to it.
| What does this firewall record mean? |
1,460,817,305,000 |
On Debian 10 buster I am having problems with docker containers unable to ping the docker host or even docker bridge interface, but able to reach the internet.
Allowing access as in related questions here, doesn't fix it in my case.
It seems iptables/nftables related, and I could probably figure out what to do if I could first figure out how to log the dropped packets.
I put log rules in both DOCKER-USER and INPUT, with the likes of
nft insert rule ip filter DOCKER-USER counter log, but they all show 0 packets logged.
/var/log/kern.log doesn't show any firewall related info, and neither does journalctl -k.
What is the new way to view firewall activity with this nftables system?
nft list ip table filter
table ip filter {
chain INPUT {
type filter hook input priority 0; policy drop;
ct state invalid counter packets 80 bytes 3200 drop
iifname "vif*" meta l4proto udp udp dport 68 counter packets 0 bytes 0 drop
ct state related,established counter packets 9479197 bytes 17035404271 accept
iifname "vif*" meta l4proto icmp counter packets 0 bytes 0 accept
iifname "lo" counter packets 9167 bytes 477120 accept
iifname "vif*" counter packets 0 bytes 0 reject with icmp type host-prohibited
counter packets 28575 bytes 1717278 drop
counter packets 0 bytes 0 log
counter packets 0 bytes 0 log
iifname "docker0" counter packets 0 bytes 0 accept
}
chain FORWARD {
type filter hook forward priority 0; policy drop;
counter packets 880249 bytes 851779418 jump DOCKER-ISOLATION-STAGE-1
oifname "br-cc7b89b40bee" ct state related,established counter packets 7586 bytes 14719677 accept
oifname "br-cc7b89b40bee" counter packets 0 bytes 0 jump DOCKER
iifname "br-cc7b89b40bee" oifname != "br-cc7b89b40bee" counter packets 5312 bytes 2458488 accept
iifname "br-cc7b89b40bee" oifname "br-cc7b89b40bee" counter packets 0 bytes 0 accept
oifname "br-d41d1510d330" ct state related,established counter packets 8330 bytes 7303256 accept
oifname "br-d41d1510d330" counter packets 0 bytes 0 jump DOCKER
iifname "br-d41d1510d330" oifname != "br-d41d1510d330" counter packets 7750 bytes 7569465 accept
iifname "br-d41d1510d330" oifname "br-d41d1510d330" counter packets 0 bytes 0 accept
oifname "br-79fccb9a0478" ct state related,established counter packets 11828 bytes 474832 accept
oifname "br-79fccb9a0478" counter packets 11796 bytes 707760 jump DOCKER
iifname "br-79fccb9a0478" oifname != "br-79fccb9a0478" counter packets 7 bytes 526 accept
iifname "br-79fccb9a0478" oifname "br-79fccb9a0478" counter packets 11796 bytes 707760 accept
counter packets 1756295 bytes 1727495359 jump DOCKER-USER
oifname "docker0" ct state related,established counter packets 1010328 bytes 1597833795 accept
oifname "docker0" counter packets 0 bytes 0 jump DOCKER
iifname "docker0" oifname != "docker0" counter packets 284235 bytes 16037499 accept
iifname "docker0" oifname "docker0" counter packets 0 bytes 0 accept
ct state invalid counter packets 0 bytes 0 drop
ct state related,established counter packets 0 bytes 0 accept
counter packets 0 bytes 0 jump QBS-FORWARD
iifname "vif*" oifname "vif*" counter packets 0 bytes 0 drop
iifname "vif*" counter packets 0 bytes 0 accept
counter packets 0 bytes 0 drop
}
chain OUTPUT {
type filter hook output priority 0; policy accept;
}
chain QBS-FORWARD {
}
chain DOCKER {
}
chain DOCKER-ISOLATION-STAGE-1 {
iifname "br-cc7b89b40bee" oifname != "br-cc7b89b40bee" counter packets 5312 bytes 2458488 jump DOCKER-ISOLATION-STAGE-2
iifname "br-d41d1510d330" oifname != "br-d41d1510d330" counter packets 7750 bytes 7569465 jump DOCKER-ISOLATION-STAGE-2
iifname "br-79fccb9a0478" oifname != "br-79fccb9a0478" counter packets 7 bytes 526 jump DOCKER-ISOLATION-STAGE-2
iifname "docker0" oifname != "docker0" counter packets 590138 bytes 34612496 jump DOCKER-ISOLATION-STAGE-2
counter packets 1808904 bytes 1760729363 return
}
chain DOCKER-ISOLATION-STAGE-2 {
oifname "br-cc7b89b40bee" counter packets 0 bytes 0 drop
oifname "br-d41d1510d330" counter packets 0 bytes 0 drop
oifname "br-79fccb9a0478" counter packets 0 bytes 0 drop
oifname "docker0" counter packets 0 bytes 0 drop
counter packets 644929 bytes 74784737 return
}
chain DOCKER-USER {
counter packets 0 bytes 0 log
iifname "docker0" counter packets 305903 bytes 18574997 accept
counter packets 1450392 bytes 1708920362 return
}
}
|
You can use nftrace to trace packet flows. It's very verbose, and the traces don't go to the kernel log; instead they are distributed over a multicast netlink socket (i.e. if nothing listens to it, the traces are simply discarded).
If you really want to trace everything, trace from prerouting and output at a low priority. Better use a separate table, because what you are displaying with nft list ip table filter is actually iptables-over-nftables using the compatibility xt match layer API and shouldn't be tampered with (but can safely be used alongside traces). You should also know there are probably other tables for iptables, like the nat table.
So, with a ruleset from the file traceall.nft loaded with nft -f traceall.nft:
table ip traceall
delete table ip traceall
table ip traceall {
chain prerouting {
type filter hook prerouting priority -350; policy accept;
meta nftrace set 1
}
chain output {
type filter hook output priority -350; policy accept;
meta nftrace set 1
}
}
You can now follow these (very verbose) IPv4 traces with:
nft monitor trace
This would even work the same if doing this inside a container (which is usually not the case for log targets).
You can activate these traces elsewhere, or put conditions before activating them in a rule in a later priority to avoid tracing all hooks/chains. Following this schematic will help understand the order of events: Packet flow in Netfilter and General Networking.
If choosing to use the equivalent -j TRACE target in iptables, consult also the man for xtables-monitor, because iptables-over-nftables changes its behaviour (compared to iptables-legacy).
While I answered OP's question above, here are some guesses about both the connectivity issue and the logging issue:
if Docker itself is running within a container, logs might not be available. They can be made available to the host, and to all containers allowed to query the kernel messages, with sysctl -w net.netfilter.nf_log_all_netns=1, simply because kernel messages don't have namespace instances.
the counter at the log rule in ip filter INPUT is zero, while the counter at the previous rule with a drop statement is not. That means the log rule is made too late: after drop. The log rule (or rather iptables's -j LOG) should be inserted before the final drop statement, not appended after where it will never be reached.
The only INPUT rule about Docker is iifname "docker0" counter packets 0 bytes 0 accept. If the containers are not on the default Docker network, there's no rule allowing them to reach the host.
Try adding a rule for testing this. Be sure the result is inserted before the drop rule. Use iptables, avoid adding a rule with nftables that could be incompatible with iptables-over-nftables:
iptables -I INPUT 8 -i "br-*" -j ACCEPT
| How to properly log and view nftables activity? |
1,460,817,305,000 |
The L7-filter project appears to be 15 years old, requires kernel patches with no support for kernels past version 2.6, and most of the pattern files it has appear to have been written in 2003.
Usually when there's a project that is that old, and that popular, there are new projects to replace it, but I can't find anything more recent for Linux that does layer 7 filtering.
Am I not looking in the right places? Was the idea of layer 7 filtering abandoned entirely for some reason? I would think that these days, with more powerful hardware, this would be even more practical than it used to be.
|
You must be talking of (the former) project Application Layer Packet Classifier for Linux, which was implemented as patches, for the 2.4 and the 2.6 kernels.
The major problem with this project is that the technology it proposed to control quickly outpaced the usefulness and efficacy of the implementation.
As far as I remember, the project members also had neither the time nor the money to invest in keeping pace with advances in the technology, and eventually sold the rights to the implementation, which killed for good an already problematic project.
The challenges this project/technology has faced over the years are, by no particular order:
adapting the patches to the 3.x/4.x kernel versions;
scarcity of processing power - in several countries, the speed of even domestic gigabit broadband nowadays demands ASICs to do efficient layer 7 traffic-shaping;
bittorrent started using heavy obfuscation;
HTTPS started being used heavily to encapsulate several protocols and/or to avoid detection;
peer-to-peer protocols stopped using fixed ports, and started trying to get their way by any open/allowed port;
the rise of ubiquitous VoIP and real-time video, which makes traffic very sensitive to even small delays;
the widespread use of VPN connections.
Heavy R&D was then invested in professional traffic-shaping products.
The state of the art ten years ago already involved dedicated ASICs and heavy use of heuristics for detecting encrypted/obfuscated traffic.
At present, besides more than a decade of experience with advanced heuristics, and with the advancement of global broadband, traffic-shaping (and firewall) vendors also share global data in real time, peer-to-peer, to enhance the efficacy of their solutions.
They combine advanced heuristics with real-time profiling and data shared from thousands of locations around the world.
It would be very difficult to put together an open-source product that works as efficiently as an Allot NetEnforcer.
With open-source solutions aimed at keeping infrastructure bandwidth healthy, it is no longer so usual to traffic-shape at the network level by the type/nature of traffic each IP address is generating.
Nowadays, for generic traffic control and protecting the bandwidth capacity of the infrastructure, the usual strategy (besides firewalling), without advanced traffic-shaping hardware, is to allocate a small share of the bandwidth per IP address.
| Is there a way to do layer 7 filtering in Linux? |
1,460,817,305,000 |
We are using iptables firewall. It is logging and dropping various packages depending on its defined rules.
Iptables log file entries look like:
2017-08-08T19:42:38.237311-07:00 compute-nodeXXXXX kernel: [1291564.163235] drop-message : IN=vlanXXXX OUT=cali95ada065ccc MAC=24:6e:96:37:b9:f0:44:4c:XX:XX:XX:XX:XX:XX SRC=10.50.188.98 DST=10.49.165.68 LEN=60 TOS=0x00 PREC=0x00 TTL=57 ID=14005 DF PROTO=TCP SPT=52862 DPT=50000 WINDOW=29200 RES=0x00 SYN URGP=0
Is there any way to get the count of the dropped packets ?
I want to calculate metrics like the number of dropped packets in the last minute, hour, and so on.
The main purpose is monitoring for configuration mistakes and security breaches. If the firewall rules contain a mistake, a bunch of packets abruptly start to get dropped. Similarly, if an attack is happening, we expect variation in the number of denied packets.
|
There are counters for each rule in iptables which can be shown with the -v option. Add -x to avoid the counters being abbreviated when they are very large (eg 1104K). For example,
$ sudo iptables -L -n -v -x
Chain INPUT (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
39 22221 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 udp spts:67:68 dpts:67:68
...
182 43862 LOG all -- * * 0.0.0.0/0 0.0.0.0/0 LOG flags 0 level 4 prefix "input_drop: "
182 43862 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
shows no dropped packets on my local network but 182 rejected with icmp and a log message such as the one you listed. The last two rules in the configuration with a policy of DROP were
-A INPUT -j LOG --log-prefix "input_drop: "
-A INPUT -j REJECT --reject-with icmp-host-prohibited
You can zero the counters for all chains with iptables -Z.
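To turn these counters into a per-interval metric, one approach is to sample the LOG rule's packet counter periodically and diff successive readings. The extraction step is sketched below against a sample line from the output above (so it runs without root); on a live system you would feed it `iptables -L INPUT -v -x -n` instead:

```shell
# In `iptables -L -v -x -n` output, the first field is the packet counter and
# the third field is the target.
sample='  182 43862 LOG all -- * * 0.0.0.0/0 0.0.0.0/0 LOG flags 0 level 4 prefix "input_drop: "'
printf '%s\n' "$sample" | awk '$3 == "LOG" {print $1}'   # → prints "182"
```

Sampling this every 60 seconds (with sleep in a loop) and subtracting the previous reading gives drops per minute.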
These counts are for the packets that iptables itself dropped. However,
there may be other filtering software that is also dropping packets due
to congestion, for example. You need to look at each one for whatever
statistics they provide. The (obsolete) netstat program can easily show the counts of packets that were dropped at the ethernet interface due to congestion before they are even delivered to iptables:
$ netstat -i
Iface MTU RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR
enp5s0 1500 1097107 0 38 0 2049166 0 0 0
and you can also get some statistics on packets dropped elsewhere by the kernel for various reasons:
$ netstat -s | grep -i drop
27 outgoing packets dropped
16 dropped because of missing route
2 ICMP packets dropped because socket was locked
| How to get metrics about dropped traffic via iptables? |
1,460,817,305,000 |
I am going to use iptables port forwarding to listen for requests from my LAN on port 8080 and answer from a container on port 80, like this:
iptables -t nat -A PREROUTING -p tcp -d 192.168.1.15 --dport 8080 -j DNAT --to 10.0.3.103:80
I am not sure if the rule is right (feel free to correct it), but the question is:
How to annotate this rule so that I can easily find and purge it?
If iptables cannot do this, what can?
|
There is a comment module for iptables which should do what you need. When adding a rule, one can add a comment like this:
iptables -A INPUT -p icmp -j ACCEPT -m comment --comment "Allow incoming ICMP"
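Finding the rule again by its comment is then just a grep over `iptables-save` output, and the rule can be purged by repeating the exact same specification with -D. The sketch below runs against a sample saved line (the comment text "web fwd" and the addresses are placeholders), so it works without root; on a live system, pipe the real `iptables-save` output instead:

```shell
# Sample line as `iptables-save` would print it.
saved='-A PREROUTING -d 192.168.1.15 -p tcp --dport 8080 -m comment --comment "web fwd" -j DNAT --to-destination 10.0.3.103:80'
# Find the tagged rule by its comment.
printf '%s\n' "$saved" | grep -F '"web fwd"'
# To purge it (as root), repeat the same match spec with -D, e.g.:
#   iptables -t nat -D PREROUTING -d 192.168.1.15 -p tcp --dport 8080 \
#       -m comment --comment "web fwd" -j DNAT --to-destination 10.0.3.103:80
```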
| How to tag IPTABLES rules? |
1,460,817,305,000 |
We have two apps running (on top of Linux) and both communicate through port 42605. I wanted to quickly verify that this is the only port used for communication between them. I tried the rules below, but they don't seem to work. So I just wanted to get this clarified, in case I am doing it wrong.
Following is the sequence of commands i ran
iptables -I INPUT -j REJECT
iptables -I INPUT -p tcp --dport 42605 -j ACCEPT
iptables -I INPUT -p icmp -j ACCEPT
iptables -I OUTPUT -p tcp --dport 42605 -j ACCEPT
So, this will get added in reverse order since I am inserting it.
I wanted to allow incoming and outgoing communications from and to 42605. Does the above rule looks good or am I doing it wrong?
Another question, would this be the right way to test, or maybe I should use "netstat" command to see which port has connection established with the other ip?
|
We can block everything on the INPUT chain and allow only specific ports:
# allow established sessions to receive traffic
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# allow your application port
iptables -I INPUT -p tcp --dport 42605 -j ACCEPT
# allow SSH
iptables -I INPUT -p tcp --dport 22 -j ACCEPT
# Allow Ping
iptables -A INPUT -p icmp --icmp-type 0 -m state --state ESTABLISHED,RELATED -j ACCEPT
# allow localhost
iptables -A INPUT -i lo -j ACCEPT
# block everything else
iptables -A INPUT -j DROP
Another question, would this be the right way to test, or maybe I
should use "netstat" command to see which port has connection
established with the other ip?
Yes, you can check netstat -antop | grep app_port and you can also use strace :
strace -f -e trace=network -s 10000 PROCESS ARGUMENTS
To monitor an existing process with a known pid:
strace -p $( pgrep application_name) -f -e trace=network -s 10000
| Iptables rule to allow only one port and block others |
1,460,817,305,000 |
If we use firewall-cmd to open a port before starting the firewalld service, it will fail saying "firewalld is not running".
If I start firewalld, I get disconnected from the remote server; I'm running SSH on a port other than 22.
How am I supposed to configure a remote server without losing connection to it?
|
firewall-offline-cmd is an offline command line client of the
firewalld daemon. It should be used only if the firewalld service is
not running. For example to migrate from system-config-firewall/lokkit
or in the install environment to configure firewall settings with
kickstart.
A few basic examples:
# firewall-offline-cmd --direct --add-rule ipv4 filter INPUT 0 -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
success
# firewall-offline-cmd --direct --add-rule ipv4 filter INPUT 0 -p udp -m state --state NEW -m udp --dport 69 -j ACCEPT
success
# firewall-offline-cmd --direct --add-rule ipv4 filter INPUT 0 -p tcp -m state --state NEW -m tcp --dport 8000 -j ACCEPT
success
# firewall-offline-cmd --direct --add-rule ipv4 filter INPUT 0 -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT
success
# firewall-offline-cmd --direct --add-rule ipv4 filter INPUT 0 -p tcp -m state --state NEW -m tcp --dport 8443 -j ACCEPT
success
Tell your system to reboot in 2 minutes if your firewall kicks you out for some reason:
# shutdown -r +2 "Enabling firewall. If access is lost, server will restart in 2 minutes."
When you're ready:
systemctl start firewalld
If all is well, cancel shutdown:
# shutdown -c
And finally, enable the service and make sure your config is permanent:
# systemctl enable firewalld
# firewall-cmd --runtime-to-permanent
Full details here: https://firewalld.org/documentation/man-pages/firewall-offline-cmd.html and here: https://manpages.debian.org/unstable/firewalld/firewall-offline-cmd.1.en.html
| Set up firewalld on CentOS before starting it |
1,460,817,305,000 |
Many people don't clear the conntrack table when they want to reload their firewall rules. When you have some ESTABLISHED connections, all the sessions won't be affected when you add a rule that blocks some of the connections in question (in the NEW state). The only way to make sure that this isn't going to happen is to kill all the sessions by clearing the conntrack table. In this case all packets will hit the INVALID rule and you have to make a new connection, which now goes through the new rules of iptables.
In OpenWRT, you can simply do the following:
# echo f > /proc/net/nf_conntrack
But unfortunately this solution doesn't work on debian.
# echo f > /proc/net/nf_conntrack
echo: write error: Input/output error
Here's why:
# ls -al /proc/net/nf_conntrack
-r--r----- 1 root root 0 2016-06-05 10:45:52 /proc/net/nf_conntrack
On debian you have to install the conntrack package, and type the following command:
# conntrack -F
conntrack v1.4.3 (conntrack-tools): connection tracking table has been emptied.
Why doesn't echo f work on Debian? Is there a way to make it work somehow, or am I forced to use the conntrack tool?
|
OpenWRT maintains several specific kernel patches. Among them is the patch providing the feature you found lacking in Debian; it is actually available only in OpenWRT.
Choosing a git history near the question's timeline, for kernel 4.4:
600-netfilter_conntrack_flush.patch:
static const struct file_operations ct_file_ops = {
.owner = THIS_MODULE,
.open = ct_open, .read = seq_read,
+ .write = ct_file_write,
.llseek = seq_lseek,
.release = seq_release_net,
};
@@ -393,7 +450,7 @@ static int nf_conntrack_standalone_init_
{
struct proc_dir_entry *pde;
- pde = proc_create("nf_conntrack", 0440, net->proc_net, &ct_file_ops);
+ pde = proc_create("nf_conntrack", 0660, net->proc_net, &ct_file_ops);
if (!pde)
goto out_nf_conntrack;
While I couldn't figure out where the specific f syntax is handled, the snippet above clearly shows that OpenWRT adds a patch to allow writing to /proc/net/nf_conntrack, which is read-only on normal kernels.
It's still available today for kernel 5.4 but its contents are even less obvious.
One can imagine it's to cope with embedded environments and limited storage: if the kernel patch makes it possible to avoid shipping the conntrack tools, space is saved for other features.
It's a hack. The intended modern interaction with the conntrack subsystem is through the conntrack tool and the netlink kernel API, which is continually evolving. The (read-only) /proc/net/nf_conntrack is only kept around for compatibility with simpler tools.
You could probably compile a Debian kernel source tree with this additional patch (possibly requiring other related patches) to get this feature, but I'm not sure that would really be useful on a normal system.
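As a side note, the conntrack tool can also delete entries selectively instead of flushing the whole table, which is often what one actually wants after a rules reload (the address and port below are just examples):

```shell
# flush the whole table (as in the question)
conntrack -F
# delete only entries from a given source address
conntrack -D -s 192.0.2.1
# delete only TCP entries towards destination port 22
conntrack -D -p tcp --dport 22
```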
| Why "echo f" in the case of clearing conntrack table doesn't work on debian? |
1,298,910,509,000 |
I have a friend behind a firewall, with a Windows computer. I have a Linux machine at home which is not behind a firewall.
I want to have an rdesktop connection to his machine, without using any intermediate service such as LogMeIn.
My plan is:
Have him SSH to my machine (SSH is allowed by the firewall) and set up the appropriate tunnel.
Activate rdesktop/vnc on my machine, on the currently running X server.
What I don't like about it is the hassle of running programs as his user on the currently running X server. I'd rather have him set up the tunnel somehow for my user, so that I can just rdesktop localhost:1234 as long as he's connected to me.
Any smarter way?
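For reference, the plan above boils down to something like this (port numbers, usernames and hostname are made up; 3389 assumes RDP is enabled on the Windows side):

```shell
# on the friend's Windows machine (e.g. with an OpenSSH client):
# expose his RDP port as port 1234 on the Linux box
ssh -N -R 1234:localhost:3389 friend@my-linux-host

# then, on the Linux machine:
rdesktop localhost:1234
```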
|
I would prefer to set up a VPN (OpenVPN for example) with the server on your machine and the client on your friend's machine. When he wants you to connect, he opens the VPN (no login on your machine involved) and you point your remote desktop client at his machine's IP (at least with OpenVPN, you can assign a "fixed" IP to his machine, so you can save it and don't need to look it up every time).
This way there is no login to your machine, and you only access his machine when he opens the VPN. On the other side, you can shut down the server when you don't want him to connect to your machine. Anyway, if you don't give him a user on your machine (or only a user with the access you want), he won't be able to do much there.
And this way you can easily do it with more friends if needed, as they only need to install the VPN client.
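As a minimal illustration, OpenVPN's static-key point-to-point mode is enough for a two-machine setup like this (the IPs, hostname and filename are arbitrary, and the --genkey form varies slightly between OpenVPN versions):

```shell
# once, on either side: generate a shared static key and copy it securely to the other machine
openvpn --genkey --secret static.key

# on your machine (the "server"):
openvpn --dev tun --ifconfig 10.8.0.1 10.8.0.2 --secret static.key

# on your friend's machine:
openvpn --remote your-public-hostname --dev tun --ifconfig 10.8.0.2 10.8.0.1 --secret static.key
```

You would then point the remote desktop client at 10.8.0.2.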
| tunneling VNC/rdesktop over ssh |
1,298,910,509,000 |
I am trying to open some ports in CentOS 7.
I am able to open a port with the following command:
firewall-cmd --direct --add-rule ipv4 filter IN_public_allow 0 -m tcp -p tcp --dport 7199 -j ACCEPT
By inspecting via iptables -L -n, I get confirmation that the setting was successful:
Chain IN_public_allow (1 references)
target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:7199
Unfortunately, I cannot make the changes permanent, even when using the --permanent option like this:
firewall-cmd --direct --permanent --add-rule ipv4 filter IN_public_allow 0 -m tcp -p tcp --dport 7199 -j ACCEPT
Any idea on how to fix this? Why is the --permanent option not working correctly?
|
--direct commands cannot be made permanent. Use the equivalent zone command instead:
sudo firewall-cmd --zone=public --add-port=7199/tcp --permanent
sudo firewall-cmd --reload
and to check the result:
sudo firewall-cmd --zone=public --list-all
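Note that the runtime direct rule added earlier in the question stays active until a reload; assuming it was added exactly as shown, it can also be removed explicitly:

```shell
# drop the runtime direct rule
sudo firewall-cmd --direct --remove-rule ipv4 filter IN_public_allow 0 -m tcp -p tcp --dport 7199 -j ACCEPT
# verify no direct rules remain
sudo firewall-cmd --direct --get-all-rules
```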
| how to make firewall changes permanent via firewall-cmd? |
1,298,910,509,000 |
I want to create a dynamic blacklist with nftables. Under version 0.8.3 on the embedded device, the ruleset I create looks like this (output of nft list ruleset):
table inet filter {
set blackhole {
type ipv4_addr
size 65536
flags timeout
}
chain input {
type filter hook input priority 0; policy drop;
ct state invalid drop
ct state established,related accept
iif "lo" accept
ip6 nexthdr 58 icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem, echo-request, echo-reply, mld-listener-query, mld-listener-report, mld-listener-done, nd-router-solicit, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert, ind-neighbor-solicit, ind-neighbor-advert, mld2-listener-report } accept
ip protocol icmp icmp type { echo-reply, destination-unreachable, echo-request, router-advertisement, router-solicitation, time-exceeded, parameter-problem } accept
ip saddr @blackhole counter packets 0 bytes 0 drop
tcp flags syn tcp dport ssh meter flood { ip saddr timeout 1m limit rate over 10/second burst 5 packets} set add ip saddr timeout 1m @blackhole drop
tcp dport ssh accept
}
chain forward {
type filter hook forward priority 0; policy drop;
}
chain output {
type filter hook output priority 0; policy accept;
}
}
For me this is only a temporary solution. I want to use the example from the official manpage for dynamic blacklisting. If I use the official example from the manpage, my nftables file looks like this:
table inet filter {
set blackhole{
type ipv4_addr
flags timeout
size 65536
}
chain input {
type filter hook input priority 0; policy drop;
# drop invalid connections
ct state invalid drop
# accept traffic originating from us
ct state established,related accept
# accept any localhost traffic
iif lo accept
# accept ICMP
ip6 nexthdr 58 icmpv6 type { destination-unreachable, packet-too-big, time-exceeded, parameter-problem, echo-request, echo-reply, mld-listener-query, mld-listener-report, mld-listener-done, nd-router-solicit, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert, ind-neighbor-solicit, ind-neighbor-advert, mld2-listener-report } accept
ip protocol icmp icmp type { destination-unreachable, router-solicitation, router-advertisement, time-exceeded, parameter-problem, echo-request, echo-reply } accept
# accept SSH (port 22)
ip saddr @blackhole counter drop
tcp flags syn tcp dport ssh meter flood { ip saddr timeout 10s limit rate over 10/second} add @blackhole { ip saddr timeout 1m } drop
tcp dport 22 accept
}
chain forward {
type filter hook forward priority 0; policy drop;
}
chain output {
type filter hook output priority 0; policy accept;
}
}
But when I load this nftables file on version 0.8.3 with nft -f myfile I get this error:
Error: syntax error, unexpected add, expecting newline or semicolon
tcp flags syn tcp dport ssh meter flood { ip saddr timeout 10s limit rate over 10/second} add @blackhole { ip saddr timeout 1m } drop
I don't know why this is the case; according to the wiki it should work from version 0.8.1 and kernel 4.3.
I have version 0.8.3 and kernel 4.19.94.
I have tested the ruleset from the official manpage under Debian Buster with nftables 0.9.0. There it works fine, but the IP is blocked only once.
With this example I want to create a firewall rule that blocks an IP address on the SSH port when a brute-force attack against my device starts. But I want to block the IP for e.g. 5 minutes; after that time it should be possible to connect to the device again from the attacker's IP. If they brute-force again, the IP should be blocked again for 5 minutes, and so on. I want to avoid installing extra software such as sshguard or fail2ban on my embedded device if this is possible with nftables alone.
I hope someone can help me. Thanks!
|
The hydra tool connects to the SSH server multiple times concurrently. In OP's case (comment: hydra -l <username> -P </path/to/passwordlist.txt> -I -t 6 ssh://<ip-address>) it will use 6 concurrent connecting threads.
Depending on server settings, one connection can typically try 5 or 6 passwords and takes about 10 seconds before being rejected by the SSH server, so I fail to see how a rate of 10 connection attempts per second could be exceeded (but that's the case). It could mean that what actually triggers is more than 5 connection attempts in less than half a second. I wouldn't trust the accuracy of 10/s too much, but it can be assumed that it happens here.
Version and syntax issues
The syntax not working with versions 0.8.1 or 0.8.3 is a newer syntax that appeared in this commit:
src: revisit syntax to update sets and maps from packet path
For sets, we allow this:
nft add rule x y ip protocol tcp update @y { ip saddr}
[...]
It was committed after version 0.8.3 so available only with nftables >= 0.8.4
The current wiki revision for Updating sets from the packet path, in the same page, still displays commands with the former syntax
% nft add rule filter input set add ip saddr @myset
[...]
and results displayed with the newer syntax:
[...]
add @myset { ip saddr }
[...]
Some wiki pages or the latest manpage might not work with older nftables versions.
Anyway, if running with kernel 4.19, nftables >= 0.9.0 should be preferred to get additional features. For example it's available in Debian 10 or in Debian 9 backports.
Blacklisting should be done before stateful accept rules
Once the IP is added to the blacklist, this doesn't prevent established connections from continuing, unhindered and unaccounted for, until they're disconnected by the SSH server itself. That's because of the usual short-circuit rule placed before:
# accept traffic originating from us
ct state established,related accept
This comment is misleading: it doesn't accept traffic originating from us but any traffic already ongoing. This is a short-circuit rule. Its role is to handle stateful connections by parsing all rules only for new connections: any rule after this one applies to new connections. Once connections are accepted, their individual packets stay accepted until end of connection.
For blacklist handling, the specific blacklist rules (or part of them) should be placed before this short-circuit rule so they can take effect immediately. In OP's case that is:
ip saddr @blackhole counter drop
It should be moved before the ct state established,related accept rule.
Now once the attacker is added to the blacklist, other ongoing connections won't get some remaining free attempts at guessing a password: they'll immediately hang.
If there's a blacklist, consider a whitelist
As a side note, the cheap iif lo accept rule could itself be moved before both, as an optimization and as a whitelist entry: otherwise all (even long-lived) local established connections would now also be subject to blacklisting in case of abuse (e.g. from 127.0.0.1). Consider adding various whitelisting rules before the @blackhole rule.
Optionally warn applications faster
To also prevent ongoing replies from the server from reaching the blacklisted IP (especially useful for UDP traffic, less so for TCP, including SSH), the equivalent rule using daddr can also be added in the inet filter output chain, with reject to inform the local processes faster that they should abort their attempts to emit:
ip daddr @blackhole counter reject
Difference between add and update applied on a set
Now with such settings in place, even if ongoing connections are immediately stopped, the attacker can keep trying and gets a new short window one minute later, which is not optimal.
The entries must be updated in the input @blackhole ... drop rule. update will refresh the timer if the entry already exists, while add would do nothing. This keeps blocking any further (unsuccessful) attempt to connect to the SSH server until the attacker gives up, with zero opened window. (The output rule I added above shouldn't be changed; it doesn't reflect the attacker's actions):
replace:
ip saddr @blackhole counter drop
with (still keeping older syntax):
ip saddr @blackhole counter set update ip saddr timeout 1m @blackhole drop
It should even be moved before the ct state invalid rule; otherwise, if the attacker sends invalid packets (e.g. a TCP packet not part of a known connection, like a late RST from an already forgotten connection), the set won't be updated when it could have been.
Limit the maximum number of established connections
Requires kernel >= 4.18 and nftables >= 0.9.0, so it can't be done with OP's current configuration.
The attacker might discover it can't open too many connections at once but can still keep adding new connections without limit, as long as it doesn't connect too fast.
A limit on concurrent connections (as available with iptables' connlimit) can also be added with another meter rule:
tcp flags syn tcp dport 22 meter toomanyestablished { ip saddr ct count over 3 } reject with tcp reset
will allow any given IP address to have only 3 established SSH connections.
Or while at it, instead, also trigger the @blackhole set (using newer syntax this time):
tcp flags syn tcp dport 22 meter toomanyestablished { ip saddr ct count over 3 } add @blackhole { ip saddr timeout 1m } drop
This should trigger even before the previous meter rule in OP's case. Use with care to avoid affecting legitimate users (but see openssh's ControlMaster option).
IPv4 and IPv6
As there's no generic IPv4+IPv6 set address type, all rules handling IPv4 (wherever the 2-letter word ip appears) should probably be duplicated into a mirror rule with ip6, working on an IPv6 set.
| Create dynamic blacklist with nftables |
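Putting the answer's pieces together, the resulting ruleset might look roughly like this (a sketch only, untested, using the newer syntax, so it needs nftables >= 0.9.0, and kernel >= 4.18 for the ct count rule; the 5-minute timeout matches the goal stated in the question):

```
table inet filter {
    set blackhole {
        type ipv4_addr
        flags timeout
        size 65536
    }
    chain input {
        type filter hook input priority 0; policy drop;
        # whitelist first: local traffic is never blacklisted
        iif "lo" accept
        # refresh the timer on every new attempt from a blacklisted source
        ip saddr @blackhole counter update @blackhole { ip saddr timeout 5m } drop
        ct state invalid drop
        ct state established,related accept
        # ... ICMP / ICMPv6 accept rules as in the question ...
        # blacklist sources holding too many concurrent SSH connections
        tcp flags syn tcp dport 22 meter toomanyestablished { ip saddr ct count over 3 } add @blackhole { ip saddr timeout 5m } drop
        # blacklist sources exceeding the SSH connection rate
        tcp flags syn tcp dport 22 meter flood { ip saddr timeout 10s limit rate over 10/second } add @blackhole { ip saddr timeout 5m } drop
        tcp dport 22 accept
    }
    chain forward {
        type filter hook forward priority 0; policy drop;
    }
    chain output {
        type filter hook output priority 0; policy accept;
        # inform local processes quickly that the peer is blacklisted
        ip daddr @blackhole counter reject
    }
}
```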
1,298,910,509,000 |
Deployment:
VM -- (eth0)RPI(wlan0) -- Router -- ISP
^ ^ ^ ^
DHCP Static DHCP GW
NOTE: RPI hostname: gateway
• The goal was to make the VMs accessible from outside the network. This was accomplished, following the tutorial https://www.youtube.com/watch?v=IAa4tI4JrgI, via port forwarding on the Router and the RPI, by installing dhcpcd and configuring iptables on the RPI.
• Here is my interfaces file, where I have commented out auto wlan0 in an attempt to fix the issue (before, it was uncommented, and it was still the same thing...)
# interfaces(5) file used by ifup(8) and ifdown(8)
# Please note that this file is written to be used with dhcpcd
# For static IP, consult /etc/dhcpcd.conf and 'man dhcpcd.conf'
# Include files from /etc/network/interfaces.d:
source-directory /etc/network/interfaces.d
#auto wlan0
iface wlan0 inet dhcp
wpa-ssid FunBox-84A8
wpa-psk 7A73FA25C43563523D7ED99A4D
#auto eth0
allow-hotplug eth0
iface eth0 inet static
address 192.168.2.1
netmask 255.255.255.0
network 192.168.2.0
broadcast 192.168.2.255
• Here is the firewall.conf used by the iptables:
# Generated by iptables-save v1.6.0 on Sun Feb 17 20:01:56 2019
*nat
:PREROUTING ACCEPT [86:11520]
:INPUT ACCEPT [64:8940]
:OUTPUT ACCEPT [71:5638]
:POSTROUTING ACCEPT [37:4255]
-A PREROUTING -d 192.168.1.21/32 -p tcp -m tcp --dport 170 -j DNAT --to-destination 192.168.2.83:22
-A PREROUTING -d 192.168.1.21/32 -p tcp -m tcp --dport 171 -j DNAT --to-destination 192.168.2.83:443
-A PREROUTING -d 192.168.1.21/32 -p tcp -m tcp --dport 3389 -j DNAT --to-destination 192.168.2.66:3389
-A POSTROUTING -o wlan0 -j MASQUERADE
COMMIT
# Completed on Sun Feb 17 20:01:56 2019
# Generated by iptables-save v1.6.0 on Sun Feb 17 20:01:56 2019
*filter
:INPUT ACCEPT [3188:209284]
:FORWARD ACCEPT [25:2740]
:OUTPUT ACCEPT [2306:270630]
-A FORWARD -i wlan0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i eth0 -o wlan0 -j ACCEPT
COMMIT
# Completed on Sun Feb 17 20:01:56 2019
# Generated by iptables-save v1.6.0 on Sun Feb 17 20:01:56 2019
*mangle
:PREROUTING ACCEPT [55445:38248798]
:INPUT ACCEPT [3188:209284]
:FORWARD ACCEPT [52257:38039514]
:OUTPUT ACCEPT [2306:270630]
:POSTROUTING ACCEPT [54565:38310208]
COMMIT
# Completed on Sun Feb 17 20:01:56 2019
# Generated by iptables-save v1.6.0 on Sun Feb 17 20:01:56 2019
*raw
:PREROUTING ACCEPT [55445:38248798]
:OUTPUT ACCEPT [2306:270630]
COMMIT
# Completed on Sun Feb 17 20:01:56 2019
• iptables -L:
pi@gateway:/etc$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
• Here is the dhcpcd.conf:
# A sample configuration for dhcpcd.
# See dhcpcd.conf(5) for details.
# Allow users of this group to interact with dhcpcd via the control socket.
#controlgroup wheel
# Inform the DHCP server of our hostname for DDNS.
hostname
# Use the hardware address of the interface for the Client ID.
clientid
# or
# Use the same DUID + IAID as set in DHCPv6 for DHCPv4 ClientID as per RFC4361.
# Some non-RFC compliant DHCP servers do not reply with this set.
# In this case, comment out duid and enable clientid above.
#duid
# Persist interface configuration when dhcpcd exits.
persistent
# Rapid commit support.
# Safe to enable by default because it requires the equivalent option set
# on the server to actually work.
option rapid_commit
# A list of options to request from the DHCP server.
option domain_name_servers, domain_name, domain_search, host_name
option classless_static_routes
# Most distributions have NTP support.
option ntp_servers
# Respect the network MTU. This is applied to DHCP routes.
option interface_mtu
# A ServerID is required by RFC2131.
require dhcp_server_identifier
# Generate Stable Private IPv6 Addresses instead of hardware based ones
slaac private
# Example static IP configuration:
#interface eth0
#static ip_address=192.168.0.10/24
#static ip6_address=fd51:42f8:caae:d92e::ff/64
#static routers=192.168.0.1
#static domain_name_servers=192.168.0.1 8.8.8.8 fd51:42f8:caae:d92e::1
# It is possible to fall back to a static IP if DHCP fails:
# define static profile
#profile static_eth0
#static ip_address=192.168.1.23/24
#static routers=192.168.1.1
#static domain_name_servers=192.168.1.1
# fallback to static profile on eth0
#interface eth0
#fallback static_eth0
denyinterfaces eth0
host Accountant {
hardware ethernet 10:60:4b:68:03:21;
fixed-address 192.168.2.83;
}
host Accountant1 {
hardware ethernet 00:0c:29:35:95:ed;
fixed-address 192.168.2.66;
}
host Accountant3 {
hardware ethernet 30:85:A9:1B:C4:8B;
fixed-address 192.168.2.70;
}
• The error message that I am not able to figure out:
root@gateway:/home/pi# systemctl restart dhcpcd
Warning: dhcpcd.service changed on disk. Run 'systemctl daemon-reload' to reload units.
Job for dhcpcd.service failed because the control process exited with error code.
See "systemctl status dhcpcd.service" and "journalctl -xe" for details.
root@gateway:/home/pi# systemctl status dhcpcd
● dhcpcd.service - dhcpcd on all interfaces
Loaded: loaded (/lib/systemd/system/dhcpcd.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/dhcpcd.service.d
└─wait.conf
Active: failed (Result: exit-code) since Sun 2019-02-17 20:36:42 GMT; 6s ago
Process: 775 ExecStart=/usr/lib/dhcpcd5/dhcpcd -q -w (code=exited, status=6)
Feb 17 20:36:42 gateway systemd[1]: Starting dhcpcd on all interfaces...
Feb 17 20:36:42 gateway dhcpcd[775]: Not running dhcpcd because /etc/network/interfaces
Feb 17 20:36:42 gateway dhcpcd[775]: defines some interfaces that will use a
Feb 17 20:36:42 gateway dhcpcd[775]: DHCP client or static address
Feb 17 20:36:42 gateway systemd[1]: dhcpcd.service: Control process exited, code=exited status=6
Feb 17 20:36:42 gateway systemd[1]: Failed to start dhcpcd on all interfaces.
Feb 17 20:36:42 gateway systemd[1]: dhcpcd.service: Unit entered failed state.
Feb 17 20:36:42 gateway systemd[1]: dhcpcd.service: Failed with result 'exit-code'.
Warning: dhcpcd.service changed on disk. Run 'systemctl daemon-reload' to reload units.
root@gateway:/home/pi#
root@gateway:/home/pi# systemctl daemon-reload
root@gateway:/home/pi# systemctl status dhcpcd
● dhcpcd.service - dhcpcd on all interfaces
Loaded: loaded (/lib/systemd/system/dhcpcd.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/dhcpcd.service.d
└─wait.conf
Active: failed (Result: exit-code) since Sun 2019-02-17 20:36:42 GMT; 1min 23s ago
Feb 17 20:36:42 gateway systemd[1]: Starting dhcpcd on all interfaces...
Feb 17 20:36:42 gateway dhcpcd[775]: Not running dhcpcd because /etc/network/interfaces
Feb 17 20:36:42 gateway dhcpcd[775]: defines some interfaces that will use a
Feb 17 20:36:42 gateway dhcpcd[775]: DHCP client or static address
Feb 17 20:36:42 gateway systemd[1]: dhcpcd.service: Control process exited, code=exited status=6
Feb 17 20:36:42 gateway systemd[1]: Failed to start dhcpcd on all interfaces.
Feb 17 20:36:42 gateway systemd[1]: dhcpcd.service: Unit entered failed state.
Feb 17 20:36:42 gateway systemd[1]: dhcpcd.service: Failed with result 'exit-code'.
root@gateway:/home/pi#
•gateway version:
pi@gateway:/etc$ cat os-release
PRETTY_NAME="Raspbian GNU/Linux 9 (stretch)"
NAME="Raspbian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=raspbian
ID_LIKE=debian
Questions:
1) What does the error message Not running dhcpcd because /etc/network/interfaces defines some interfaces that will use a DHCP client or static address mean? How do I fix it, given my config above?
2) Why do hosts not get assigned an IP address according to my dhcpcd.conf, except the host Accountant, which always gets the same IP (which I want) even if I comment out the binding...? How do I fix this, so that I can bind more than one host's MAC to an IP?
3) What does this notation mean:
#auto eth0
allow-hotplug eth0
iface eth0 inet static
address 192.168.2.1
netmask 255.255.255.0
network 192.168.2.0
broadcast 192.168.2.255
What are the notation rules for the interfaces file in Linux?
|
Question 1.) Sorry, it looks like you've misunderstood a few things.
dhcpcd is a DHCP client daemon, which is normally started by NetworkManager or ifupdown, not directly by systemd. It is what will be handling the IP address assignment for your wlan0.
You can use dhcpcd as started by systemd if you wish, however that will require disabling all the normal network interface configuration logic (i.e. /etc/network/interfaces must be empty of non-comment lines) of the distribution and replacing it with your own custom scripting wherever necessary. That is for special uses only; if you're not absolutely certain you should do that, you shouldn't.
dhcpcd will never serve IP addresses to any other hosts. This part you added to dhcpcd.conf looks like it would belong to the configuration file of ISC DHCP server daemon, dhcpd (yes it's just one-letter difference) instead:
host Accountant {
hardware ethernet 10:60:4b:68:03:21;
fixed-address 192.168.2.83;
}
host Accountant1 {
hardware ethernet 00:0c:29:35:95:ed;
fixed-address 192.168.2.66;
}
host Accountant3 {
hardware ethernet 30:85:A9:1B:C4:8B;
fixed-address 192.168.2.70;
}
But if you are following the YouTube tutorial you mentioned, you might not even have dhcpd installed, since dnsmasq is supposed to do that job.
As far as I can tell, the equivalent syntax for dnsmasq.conf would be:
dhcp-host=10:60:4b:68:03:21,192.168.2.83,Accountant
dhcp-host=00:0c:29:35:95:ed,192.168.2.66,Accountant1
dhcp-host=30:85:A9:1B:C4:8B,192.168.2.70,Accountant3
Disclaimer: I haven't actually used dnsmasq, so this is based on just quickly Googling its man page.
Question 2.) In the tutorial you mentioned, dnsmasq was supposed to act as a DHCP server on eth0. You did not say anything about it, so I don't know whether it was running or not. If not, the one client that was always getting the same IP might simply have been falling back to a previously received DHCP lease that hadn't expired yet. Yes, DHCP clients may store a DHCP lease persistently and keep using it if a network doesn't seem to have a working DHCP server available.
Question 3.): /etc/network/interfaces is a classic Debian/Ubuntu style network interface configuration file. Use man interfaces to see documentation for it, or look here.
In Debian, *Ubuntu, Raspbian etc., NetworkManager will have a plug-in that will read /etc/network/interfaces but won't write to it.
If NetworkManager configuration tools like nmcli, nmtui or GUI-based NetworkManager configuration tools of your desktop environment of choice are used, the configuration would be saved to files in /etc/NetworkManager/system-connections/ directory instead.
If NetworkManager is not installed, the /etc/network/interfaces file is used by the ifupdown package, which includes the commands ifup and ifdown. The package also includes a system start-up script that will run ifup -a on boot, enabling all network interfaces that have auto <interface name> in /etc/network/interfaces. There is also an udev rule which will run ifup <interface name> if a driver for a new network interface gets auto-loaded and /etc/network/interfaces has an allow-hotplug <interface name> line for it.
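As an illustration of those two directives, here is a minimal /etc/network/interfaces using both (interface names and addresses are just examples):

```
# brought up at boot by the start-up script's "ifup -a"
auto eth0
iface eth0 inet dhcp

# brought up by the udev rule when the device appears (hotplug)
allow-hotplug eth1
iface eth1 inet static
    address 192.168.2.1
    netmask 255.255.255.0
```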
| "Not running dhcpcd because /etc/network/interfaces defines some interfaces that will use a DHCP client or static address" |
1,298,910,509,000 |
I have two network interfaces: eth0, and p2p1. My default zone is set to public. I would like to permanently set p2p1 to be trusted.
In order the achieve this I run:
sudo firewall-cmd --permanent --change-zone=p2p1 --zone=trusted
after that I get this:
The interface is under control of NetworkManager, setting zone to 'trusted'.
success
(I have netplan controlling my network.) To check if all is good I do:
sudo firewall-cmd --get-active-zones
public
interfaces: eth0
trusted
interfaces: p2p1
But after a reboot it is all gone. How can I make this stick?
Update: I found this "To permanently assign the eth0 network interface to the internal zone (a file called internal.xml is created in the /etc/firewalld/zones directory... "
root@me:~# nmcli con show | grep p2p1
netplan-p2p1 44db1fb7-b83f-36aa-8dd1-faa6fb97f6c4 ethernet p2p1
p2p1 3ad65062-db85-4ba6-9104-76644e78a5c4 ethernet --
p2p1 c3297794-7641-4033-9f68-156f26ffe024 ethernet --
root@me:~# nmcli con mod "netplan-p2p1" connection.zone trusted
root@me:~# nmcli con up "netplan-p2p1"
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/3)
... the above calls changed p2p1 to zone=trusted. But same problem -- it is not sticking.
I am on an Ubuntu 18.10 system, so adding a config file as suggested here will not work. I am not sure, but I assume I need to add some script in the /etc/network/if-up.d folder.
Update 2: netplan config file 01-netcfg.yaml
network:
version: 2
renderer: NetworkManager
ethernets:
# WAN
eth0:
dhcp4: no
dhcp6: no
addresses: [76.80.54.221/29]
gateway4: 76.80.54.217
nameservers:
addresses: [209.18.47.61,209.18.47.62]
# LAN
p2p1:
dhcp4: no
dhcp6: no
addresses: [192.168.1.99/24]
gateway4: 192.168.1.100
|
I finally figured it out.
I added a script file zone-for-p2p1 inside the directory /etc/network/if-up.d.
zone-for-p2p1 script file content:
#!/bin/sh
#
# sets zone for p2p1 adapter to "trusted"
# to find out adapter name run "nmcli con show | grep p2p1"
#
nmcli con mod "netplan-p2p1" connection.zone trusted
Then I also made sure the file has execute permission:
sudo chmod +x /etc/network/if-up.d/zone-for-p2p1
Now, on reboot, the script assigns the proper zone to the adapter. This post helped me add the script.
| setting firewall-cmd --permanent is not sticking after reboot |
1,298,910,509,000 |
Can someone give me a hint on how to set up a basic deny rule whenever any TCP request is sent to a specific IP address? I am using the PF packet filter. Any help?
|
The most basic form would look like this, in your /etc/pf.conf config:
block from any to 192.0.2.2
# which is equivalent to:
block drop from any to 192.0.2.2
By default this block action will drop packets silently on all interfaces, from any source IP, in both directions. Because a client is unaware it is being blocked, it will time out and likely try again, and again...
block return is the 'friendly neighbor' way to let the client know the address is unreachable by responding in a protocol specific way, with a TCP RST (reset) or ICMP UNREACHABLE packet. A client can use this information to give up, or try again in a sane way.
block return from any to 192.0.2.2
The default block behavior can be changed using the set block-policy option.
A more involved example - but easier to manage and read when your rule set starts to grow:
mybadhosts = "{ 192.0.2.2, 203.0.113.0/24 }"
ext_if = "em0"
block return on $ext_if from any to $mybadhosts # example #1
block return on em0 from any to { 192.0.2.2, 203.0.113.0/24 } # ^expanded form
block drop out on egress from any to $mybadhosts # example #2
example #1 Shows simple use of variables, a list {}, a netmask /24, and specifies an interface em0. (Note variables are defined without a $ sign, and quotes are removed, when rules are expanded at runtime)
example #2 Drops outbound packets on the egress interface group (see ifconfig(8))
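Before loading changes, it's worth validating the file; with the standard pfctl tool that looks like:

```shell
# parse /etc/pf.conf without loading it (syntax check only)
pfctl -nf /etc/pf.conf
# load the rule set
pfctl -f /etc/pf.conf
# show the currently loaded rules, with variables and lists expanded
pfctl -sr
```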
See Also:
OpenBSD Manual Pages - pf.conf(5)
OpenBSD PF Packet Filter User's Guide
Firewalling with OpenBSD's PF packet filter by Peter Hansteen
| Block outgoing connections to certain IP using PF |
1,298,910,509,000 |
I have an embedded Linux system on a network device. Because this device is pretty important, I have to run many network tests (I have a separate device for that). These tests include flooding my device with ARP packets (normal packets, malformed packets, packets of different sizes, etc.).
I read about the different xx-tables on the internet: ebtables, arptables, iptables, nftables, etc. I'm already using iptables on my device.
1. What xx-tables is best to filter (limit, not drop) ARP packets?
2. I heard about the /proc/config.gz file, which is supposed to list what is included in the kernel. I checked CONFIG_IP_NF_ARPFILTER, which is not included. So, in order to use arptables, I should have a kernel compiled with the CONFIG_IP_NF_ARPFILTER option enabled, correct? And the same goes for e.g. ebtables?
3. I read that ebtables & arptables work at OSI layer 2 while iptables works at OSI layer 3. So I would assume that filtering anything at layer 2 is better (performance?) than at layer 3, correct?
4. I found somewhere on this website an answer suggesting ebtables to filter ARP packets. Does ebtables have any advantage over arptables?
5. EXTRA ONE: What is the best source on the internet to learn about limiting/filtering network traffic for different kinds of packets and protocols?
|
What xx-tables is the best to filter (limit, not drop) ARP packets?
iptables
iptables starts from IP layer: it's already too late to handle ARP.
arptables
While specialized in ARP, arptables lacks the matches and/or targets needed to limit rather than just drop ARP packets. It can't be used for your purpose.
ebtables
ebtables can be a candidate (it can both handle ARP and use limit to not drop everything).
pro:
− quite easy to use
con:
− it works on ethernet bridges. That means if you're not already using a bridge, you have to create one and enslave your (probably unique) interface to it just to make ebtables usable at all. This comes with a price, both in configuration and probably some networking overhead (e.g. the network interface is set promiscuous).
− as it doesn't have the equivalent of iptables' companion ipset, limiting traffic is crude. It can't do per-source on-the-fly metering (so source MACs or IPs must be added manually to the rules).
nft (nftables)
pro:
− this tool was made with the goal of replacing the other tools and avoiding code duplication, like duplicated match modules (one could imagine arptables also receiving a limit match, but that would just be the third implementation of such a module, after ip(6)tables' xt_limit and ebtables' ebt_limit). So it's intended to be generic enough to use the same features at any layer: it can limit/meter traffic at ARP level, and per source rather than globally.
con:
− some features might require a recent kernel and tools (eg: meter requires kernel >= 4.3 and nftables >= 0.8.3).
− since its syntax is more generic, rules can be more difficult to create correctly. Sometimes documentation can be misleading (eg: non-working examples).
tc (traffic control)?
It might be possible to use tc to limit ARP traffic. As tc works very early in the network stack, using it could reduce resource usage. But this tool is also known for its complexity. Even using it for ingress rather than egress traffic requires extra steps. I didn't even try to figure out how to do this.
CONFIG_IP_NF_ARPFILTER
As seen in the previous point, this is moot: arptables can't be used. You need NF_TABLES_ARP or BRIDGE_NF_EBTABLES instead (or, if tc is actually a candidate, NET_SCHED). That doesn't mean it's the only prerequisite: you'll have to verify what else is needed (at least what makes those options become available, and the various match kernel modules needed to limit ARP).
What layer is best?
I'd say using the most specific layer that does the job is the easiest to handle. At the same time, the earlier traffic is handled, the less overhead is incurred, but handling it is then usually cruder and thus more complex. I'm sure there are a lot of different possible opinions here. ARP can almost be considered to sit between layer 2 and 3: it's implemented at layer 2, but for example IPv6's equivalent, NDP, is implemented at layer 3 (using multicast ICMPv6). That's not the only factor to consider.
Does ebtables have any advantage over arptables?
See points 1 & 2.
What is the best source on the internet to learn about limiting/filtering network traffic for different kind of packets and protocols?
Sorry, there's nothing here that can't be found using a search engine with the right words. You should start with easy topics before continuing to more difficult ones. Of course SE is already a source of information.
Below are examples both for ebtables and nftables
with ebtables
So let's suppose you have an interface eth0 and want to use ebtables with it with IP 192.0.2.2/24. The IP that would be on eth0 becomes ignored once the interface becomes a bridge port. It has to be moved from eth0 to the bridge.
ip link set eth0 up
ip link add bridge0 type bridge
ip link set bridge0 up
ip link set eth0 master bridge0
ip address add 192.0.2.2/24 dev bridge0
Look at the ARP options for ebtables to do further filtering. As noted above, ebtables is too crude to limit per source unless you manually state each source's MAC or IP address in the rules.
To limit to accepting one ARP request per second (any source considered).
ebtables -A INPUT -p ARP --arp-opcode 1 --limit 1/second --limit-burst 2 -j ACCEPT
ebtables -A INPUT -p ARP --arp-opcode 1 -j DROP
There are other variants, like creating a veth pair, putting the IP on one end and setting the other end as a bridge port, leaving the bridge without an IP (and filtering with the FORWARD chain, stating which interface traffic comes from, rather than INPUT).
with nftables
To limit to accepting one ARP request per second and on-the-fly per MAC address:
nft add table arp filter
nft add chain arp filter input '{ type filter hook input priority 0; policy accept; }'
nft add rule arp filter input arp operation 1 meter per-mac '{ ether saddr limit rate 1/second burst 2 packets }' counter accept
nft add rule arp filter input arp operation 1 counter drop
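To confirm the nftables rules above are actually matching, the counters can be inspected; this is a sketch assuming the same nft version as above:

```shell
# Show the table, its rules and their packet/byte counters:
nft list table arp filter
# ARP requests can be generated from another host on the segment,
# e.g. with the arping utility if it is installed:
# arping -c 5 192.0.2.2
```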
| Best way to filter/limit ARP packets on embedded Linux |
1,298,910,509,000 |
Without SSL, FTP works fine over a stateful Firewall, like netfilter (iptables) + the nf_conntrack_ftp kernel module like this:
# modprobe nf_conntrack_ftp
# iptables -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
# iptables -A INPUT -p tcp --dport 21 -j ACCEPT
The problem is that, when SSL is used, the FTP connection tracking module cannot work because it is unable to spy on the session to discover the session-port chosen for data exchange. It is thus unable to open that port dynamically.
Is there a proper way to make a SSL-enabled FTP server work, without disabling the firewall?
For information, I use vsftpd with the ssl_enable=YES configuration option.
|
There are several modes with SSL and FTP:
Implicit SSL, that is SSL from start (usually port 990) and never plain text. In this case you get no clear text information at the firewall about the dynamic data ports and thus cannot restrict communication to only these ports.
Explicit SSL with "AUTH TLS" command before login to enable SSL but without CCC after login to disable SSL. Here you have the same problem as with implicit SSL, that is you cannot read which data ports are in use.
Explicit SSL as before but with CCC command after login. In this case the login is protected by SSL, but the rest of the control connection uses plain text. The data transfer can still be protected by SSL. You must enable this mode at the client, like with ftp:ssl-use-ccc with lftp. There is no way to enforce this mode at the ftp server.
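As an illustration of the third mode, the client-side CCC behaviour can be requested from lftp like this (a sketch; the server must still accept the CCC command):

```shell
# Force TLS for the login, then clear the control channel afterwards:
lftp -e 'set ftp:ssl-force true; set ftp:ssl-use-ccc true' -u user ftp.example.com
```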
If you cannot get the exact data ports because the relevant commands are encrypted you could at least make the firewall a bit less restrictive:
In active mode ftp the server will originate the data connections from port 20 so you can have an iptables rule allowing these connections, i.e. something like
iptables -A OUTPUT -p tcp --sport 20 -j ACCEPT and additionally accept established connections.
In passive mode ftp you could restrict the port range offered by vsftpd with pasv_max_port and pasv_min_port settings and add a matching rule like iptables -A INPUT -p tcp --dport min_port:max_port -j ACCEPT. This is not very restrictive but at least more restrictive than disabling the firewall.
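Putting the passive-mode pieces together, a sketch for vsftpd; the 50000-50100 range is purely an example:

```shell
# /etc/vsftpd.conf fragment restricting the passive data ports:
#   pasv_min_port=50000
#   pasv_max_port=50100
# Matching firewall rule, opening only that range:
iptables -A INPUT -p tcp --dport 50000:50100 -j ACCEPT
```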
| Proper way to handle FTP over SSL with restrictive firewall rules? |
1,298,910,509,000 |
In iptables-extensions(8) the set module is described and it is discussed that it is possible to react to the presence or absence of an IP or more generally a match against an IP set.
However, it does not seem that there is a way to append items to an IP set on the fly using an iptables rule.
The idea being that if I use the recent module, I could then temporarily blacklist certain IPs that keep trying and add them into an IP set (which is likely faster). This would mean fewer rules to traverse for such cases, and matching against an IP set is said to be faster as well.
|
Turns out it is possible, using the SET target described in iptables-extensions(8).
SET
This module adds and/or deletes entries from IP sets which can be defined by ipset(8).
--add-set setname flag[,flag...]
add the address(es)/port(s) of the packet to the set
--del-set setname flag[,flag...]
delete the address(es)/port(s) of the packet from the set
where flag(s) are src and/or dst specifications and there can be no more
than six of them.
--timeout value
when adding an entry, the timeout value to use instead of the default one
from the set definition
--exist
when adding an entry if it already exists, reset the timeout value to
the specified one or to the default from the set definition
Use of -j SET requires that ipset kernel support is provided, which, for standard
kernels, is the case since Linux 2.6.39.
I hadn't found it, because I hadn't searched further down after finding the set module description.
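Combined with the recent match from the question, a full sketch could look like this (set name, port and thresholds are illustrative):

```shell
# Create a set whose entries expire after 10 minutes:
ipset create blacklist hash:ip timeout 600
# Drop anything already blacklisted, early in the chain:
iptables -I INPUT -m set --match-set blacklist src -j DROP
# Track new SSH connection attempts per source:
iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW \
    -m recent --set --name ssh --rsource
# After 4 attempts within 60 seconds, add the source to the set:
iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW \
    -m recent --rcheck --seconds 60 --hitcount 4 --name ssh --rsource \
    -j SET --add-set blacklist src
```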
| Can iptables rules manipulate IP sets? |
1,298,910,509,000 |
I have an OpenWRT gateway (self-built 19.07, kernel 4.14.156) that sits on a public IP address in front of my private network. I am using nftables (not iptables).
I would like to expose a non-standard port on the public address, and forward it to a standard port on a machine behind the gateway. I think this used to be called port forwarding: it would look like your gateway machine was providing, say, http service, but it was really a machine behind the gateway on a private address.
Here is my nftables configuration. For these purposes, my "standard service" is on port 1234, and I want to allow the public to access it at gateway:4321.
#!/usr/sbin/nft -ef
#
# nftables configuration for my gateway
#
flush ruleset
table raw {
chain prerouting {
type filter hook prerouting priority -300;
tcp dport 4321 tcp dport set 1234 log prefix "raw " notrack;
}
}
table ip filter {
chain output {
type filter hook output priority 100; policy accept;
tcp dport { 1234, 4321 } log prefix "output ";
}
chain input {
type filter hook input priority 0; policy accept;
tcp dport { 1234, 4321 } log prefix "input " accept;
}
chain forward {
type filter hook forward priority 0; policy accept;
tcp dport { 1234, 4321 } log prefix "forward " accept;
}
}
table ip nat {
chain prerouting {
type nat hook prerouting priority 0; policy accept;
tcp dport { 1234, 4321 } log prefix "nat-pre " dnat 172.23.32.200;
}
chain postrouting {
type nat hook postrouting priority 100; policy accept;
tcp dport { 1234, 4321 } log prefix "nat-post ";
oifname "eth0" masquerade;
}
}
Using this setup, external machines can access the private machine at gateway:1234. Logging shows a nat-pre SYN packet from external to gateway IP, then forward from external to internal IP, then nat-post from external to internal, and existing-connection tracking takes care of the rest of the packets.
External machines connecting to gateway:4321 log as raw, where the 4321 gets changed to 1234. Then the SYN packet gets forwarded to the internal server, the reply SYN packet comes back, and ... nothing!
The problem, I think, is that I'm not doing the nftables configuration that would change the internal:1234 back to gateway:4321, which the remote machine is expecting. Even if masquerade changes internal:1234 to gateway:1234, the remote machine is not expecting that, and will probably dump it.
Any ideas for this configuration?
|
You are not translating the port number. When the external connection is to port 1234, this is not a problem. But when it is to 4321, the dnat passes through to port 4321 on the internal server, not port 1234. Try
tcp dport { 1234, 4321 } log prefix "nat-pre " dnat 172.23.32.200:1234;
You do not need to translate the reply packets coming back from your internal server. This is done automagically using the entry in the connection tracking table that is created on the first syn packet.
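Note that newer nftables releases require the to keyword for the translation address; the corrected rule in that syntax (log statement dropped for brevity) would be:

```shell
nft add rule ip nat prerouting tcp dport '{ 1234, 4321 }' dnat to 172.23.32.200:1234
```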
| Port forwarding & NAT with nftables |
1,298,910,509,000 |
On my desktop, I want to configure iptables pretty strictly. I see no reason why I need to allow anything except for internet traffic that I initiated. And maybe even that could be limited to only a few ports. What are the basic rules that can close off my desktop? I only need:
To browse the internet
Download email
Any recommended set of rules?
|
The following rules will allow all outgoing connections, but block any incoming connections. The INPUT and FORWARD chains are set to reject packets by default, the OUTPUT chain is set to accept packets by default, and the last rule allows incoming packets which are part of existing connections (which in this case can only be outgoing connections).
iptables --policy INPUT DROP
iptables --policy FORWARD DROP
iptables --policy OUTPUT ACCEPT
iptables --append INPUT --match state --state ESTABLISHED,RELATED --jump ACCEPT
iptables --append INPUT --jump REJECT
iptables --append FORWARD --jump REJECT
If you want to restrict outgoing traffic you want to change the OUTPUT policy to reject, and add rules to accept traffic on certain ports. For example:
iptables --policy OUTPUT DROP
iptables --append OUTPUT --protocol udp --match multiport --dports domain,bootps --jump ACCEPT
iptables --append OUTPUT --protocol tcp --match multiport --dports domain,http,https,ssh,pop3s,imaps,submission --jump ACCEPT
iptables --append OUTPUT --jump REJECT
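These rules only last until reboot. One way to persist them is iptables-save; the package name below is an assumption for Debian/Ubuntu-like systems:

```shell
# Save the current ruleset; the iptables-persistent package
# restores this file at boot on Debian/Ubuntu:
iptables-save > /etc/iptables/rules.v4
# Manual restore at any time:
iptables-restore < /etc/iptables/rules.v4
```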
| What is a solid iptables configuration for my desktop? |
1,298,910,509,000 |
I'm planning to add the following rules to iptables to stop ssh brute force attack which are common these days.
-A INPUT -i eth0.103 -p tcp -m tcp --dport 4522 -m state --state NEW -m recent --set --name SSH --rsource
-A INPUT -i eth0.103 -p tcp -m tcp --dport 4522 -m recent --rcheck --seconds 30 --hitcount 4 --rttl --name SSH --rsource -j REJECT --reject-with tcp-reset
-A INPUT -i eth0.103 -p tcp -m tcp --dport 4522 -m recent --rcheck --seconds 30 --hitcount 3 --rttl --name SSH --rsource -j LOG --log-prefix "SSH brute force "
-A INPUT -i eth0.103 -p tcp -m tcp --dport 4522 -m recent --update --seconds 30 --hitcount 3 --rttl --name SSH --rsource -j REJECT --reject-with tcp-reset
-A INPUT -i eth0.103 -p tcp -m tcp --dport 4522 -j ACCEPT
I understand that second, third, and forth rules are responsible from logging and blocking the attack coming from the same IP within 30 sec. interval.
Questions
When the documentation says "log it to the system log once, then immediately reject it and forget about it" what do they mean by forget about it?
Forget about it forever?
Is there a blacklist that iptables check?
If an IP is rejected but then tried another connection an hour later, will there be another 3 attempts from that IP?
If it is the case then this is not blocking but slowing down the attacks as long as iptables doesn't have some blacklist count?
|
Question #1: When the documentation says "log it to the system log once, then immediately reject it and forget about it" what do they mean by forget about it?
Means that the message will show up in the logs, but only once, so your log won't get polluted with a continuous stream of messages every time the rule is triggered.
Question #2: Forget about it forever?
No not forever. There is a time period associated with how frequently these messages will occur in the logs.
Question #3: Is there a blacklist that iptables check?
No there is no blacklist that iptables "checks". It's dumb in the sense that it will only filter what you tell it to, and will allow only what you tell it to allow through.
Question #4: If an IP is rejected but then tried another connection an hour later, will there be another 3 attempts from that IP?
It depends on what the overarching timeout is. If additional attempts occur and the original timeout hasn't elapsed since the initial attempts, then no, there should be no additional messaging in the logs, nor allowed connections. If that timeout has elapsed, however, then yes, you'll see additional messaging about these follow-on attempts.
I'd encourage you to take a look at the documentation for the Iptable's recent module for additional details on how it works.
iptables recent matching rule
IPTables/Netfilter Recent Module
IPTables documentation - HOWTO - 3.16 recent patch
Port Scanning
The length of time that an IP remains on the "badguy" list would be governed by a structuring of your rules like this:
iptables -A INPUT -i $OUTS -m recent --name badguys --update --seconds 3600 -j DROP
iptables ...
.
.
.
iptables -A INPUT -t filter -i $OUTS -j DROP -m recent --set --name badguys
The first rule would check to see if an incoming packet was already on the "badguy" list. If it were, then the clock would get "reset" by the --update switch and the badguy's IP would remain on this list for up to 1 hour (3600 seconds). Each subsequent attempt would reset this 1 hour window!
NOTE: With the rules structured like so, offenders would have to be completely silent for one hour in order to be able to communicate with us again
SSH
For SSH connections the tactic is slightly different. The timeout associated with IP's getting on our "badguy" list is still 1 hour, and still gets reset upon every reconnect within that 1 hour window.
For SSH we need to count each --syn connection attempt, i.e. the TCP SYN packet that starts the TCP 3-way handshake. By counting these we can "track" each connection attempt. If you try to connect to us more than, say, 2 times in a 30-second window, we drop you.
To get these IP's that continuously try to connect in the "badguy" list we can incorporate another rule that adds them if they attempt to connect to us say 6 times within a 5 minute window (300 seconds).
Here are the corresponding rules - their order is critical!:
iptables -N BADGUY
iptables -t filter -I BADGUY -m recent --set --name badguys
iptables -A INPUT -i $OUTS -p tcp --syn --dport ssh -m recent --name ssh --set
iptables -A INPUT -i $OUTS -p tcp --syn --dport ssh -m recent --name ssh --rcheck --seconds 300 --hitcount 6 -j BADGUY
iptables -A INPUT -i $OUTS -p tcp --syn --dport ssh -m recent --name ssh --rcheck --seconds 30 --hitcount 2 -j DROP
NOTE: You can see the "badguy" list under /proc/net/ipt_recent. There should be a file there with the name of the list, badguys.
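While testing, the tracked lists can be inspected directly; on newer kernels the recent module registers under xt_recent rather than ipt_recent (a sketch):

```shell
# One of these paths will exist, depending on the kernel version:
cat /proc/net/xt_recent/badguys 2>/dev/null || cat /proc/net/ipt_recent/badguys
# The list can also be flushed by hand:
# echo / > /proc/net/xt_recent/badguys
```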
Question #5: If it is the case then this is not blocking but slowing down the attacks as long as iptables doesn't have some blacklist count?
I think you'll find that in general there are 3 ways to deal with connection attempts from iptable's perspective.
Allow traffic known to be acceptable
Disallow traffic known to be unacceptable
Traffic that is potentially bad, slow down and/or reject either indefinitely or for a "timeout" period. If said behavior has been dealt with using a "timeout", re-allow it, after the timeout has elapsed.
#3 is where most of the complexity comes from in setting up iptables and/or firewalls in general. Much of the traffic on the internet is OK, up to a point; it's when that traffic exhibits "obsessive" behavior, in the sense that a single IP tries to connect to your server's port X number of times, that it becomes an issue.
References
Escalating Consequences with IPTables
| How to Block SSH Brute Force via Iptables and How does it work? |
1,298,910,509,000 |
I am trying to establish a whitelist of clients that have successfully logged into the system, using ipset. What options do I have to let an entry age so that I can later discard it based on its age?
Is there a better method than the idea outlined below?
I have not found anything provided by ipset directly, so I am trying to establish whether or not such a facility exists within the scope of ipset/iptables.
Right now the only idea I have come up with is to use a cronjob that swaps the list every X minutes or hours. So as an example I'd have a list whitelist which is active, plus a list for the next hour (say, for 21:00, whitelist_21) if I am somewhere between 20:00 and 20:59. Any client connecting now would be added to the active whitelist and to the whitelist for the next hour (or a given period). Then at each full hour (or given period) a cronjob, e.g. at 21:00 in the above case, swaps the existing whitelist for the whitelist_21 one and disposes of the (now renamed) whitelist. E.g.:
ipset swap whitelist whitelist_21
ipset destroy whitelist_21
|
Turns out the man page describes what I was looking for. It's aptly called timeout and can be specified when adding entries to an IP set. I missed it due to a search for wrong terms.
A default timeout value can be given when creating a set and later for each entry added - if it is desired to override the set default.
Examples from ipset(8):
ipset create test hash:ip timeout 300
ipset add test 192.168.0.1 timeout 60
ipset -exist add test 192.168.0.1 timeout 600
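The remaining lifetime of each entry can be observed with ipset list; the timeout value shown counts down, and the entry disappears once it reaches zero:

```shell
# Each entry is printed with its remaining "timeout" seconds:
ipset list test
```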
| How can I let ipset entries "age"? |
1,298,910,509,000 |
I made a very simple bash script (echo at start, runs commands, echos at end) to add approx 7300 rules to iptables blocking much of China and Russia, however it gets through adding approximately 400 rules before giving the following error for every subsequent attempt to add a rule to that chain:
iptables: Unknown error 18446744073709551615
I even tried manually adding rules afterwards and it won't let me add them (it gives the same error).
The command to add each rule looks like this:
/sbin/iptables -A sshguard -s x.x.x.0/x -j DROP
sshguard is a chain I created for use with the sshguard daemon, and I wanted to add the rules there so I wasn't muddying up the INPUT chain. The ip ranges I am supplying are not to blame here, as I have supplied valid ranges to test and they are met with the same error. Flushing the chain of rules and adding individual ones work, but again, not after ~400 entries.
I did some googling beforehand, but the others having this issue don't seem to be having it for the same reasons I am.
Is there some kind of rule limit per chain with iptables? Also, is this the proper way to go about blocking these ranges (errors aside)?
# iptables -V
iptables v1.3.5
# cat /etc/issue
CentOS release 5.8 (Final)
# uname -a
Linux domain.com 2.6.18-028stab101.1 #1 SMP Sun Jun 24 19:50:48 MSD 2012 x86_64 x86_64 x86_64 GNU/Linux
Edit: To clarify, the bash script is running each iptables command individually, not looping through a file or list of IPs.
Also, my purpose for blocking these ranges is preventative -- I am trying to limit the amount of bots that scrape, crawl, or attempt to create spam accounts on a few of my websites. I am already using sshguard to block brute force attempts on my server, but that does not help with the other bots, obviously.
|
OK, I figured it out.
I should have mentioned that I had a Virtuozzo container for my VPS. http://kb.parallels.com/en/746 mentions the following:
Also it might be required to increase numiptent barrier value to be
able to add more iptables rules:
~# vzctl set 101 --save --numiptent 400
FYI: The container has to be restarted for this to take effect.
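Inside an OpenVZ/Virtuozzo container, the current limit and how often it has been hit can be read from the beancounters file (sketch):

```shell
# The failcnt column increments each time the numiptent barrier is hit:
grep -E 'failcnt|numiptent' /proc/user_beancounters
```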
This explains why I hit the limit at around 400. If I had CentOS 6, I would install the ipset module (EPEL) for iptables instead of adding all these rules (because ipset is fast).
As it stands now, on CentOS 5.9, I'd have to compile iptables > 1.4.4 and my kernel to get ipset. Since this is a VPS and my host may eventually upgrade to CentOS 6, I am not going to pursue that.
| Can't add large number of rules to iptables |
1,298,910,509,000 |
I'm using Ubuntu 18.04 on embedded system and I need to choose a firewall app between the followings: ufw, nftables, iptables.
Can you recommend one of them and why its better than the others?
Thanks
|
There are essentially two separate firewall stacks in the kernel: the older iptables-based stack, and the newer nftables-based stack. However, because some programs are designed to work with one and some with the other, and mixing and matching tends to cause lots of hard-to-troubleshoot breakage, on most modern systems, the iptables binary actually interacts with the nftables-based stack, so either one can be used.
ufw is designed to make firewall configuration easy. If you want to just have an easy-to-configure firewall, it's a great choice. If you need more complicated configuration, then nftables is a good choice for that because you can create a single file that has all the rules and then atomically update the rules, and it's a lot easier to work with than the iptables binary. If you're unsure, ufw is probably the right choice.
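For the easy case, a complete ufw setup takes only a few commands; a sketch, with the allowed services to be adjusted to the device's needs:

```shell
# Deny inbound by default, allow outbound, open only SSH, then activate:
ufw default deny incoming
ufw default allow outgoing
ufw allow ssh
ufw enable
```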
| choosing firewall: ufw vs nftables vs iptables |
1,298,910,509,000 |
There's this nice saying from Ubuntu: no open ports on the default install. Other Linux OS's look similar, including Fedora. Configuring firewall policies can be a pain in the neck, so this is a great default to have.
Ubuntu specifically exempts the DHCP client (essential) and mDNS. (Without firewall zones to distinguish networks, mDNS is nice to leave enabled. Poettering put some work into ensuring avahi-daemon is safe and secure, I think specifically for this reason.)
So I can turn off Fedora's firewalld, to allow me to play with bridged / routed networking for Virtual Machines. Except - what about these dnsmasq ports? Will they be exposed to the outside network?
sudo ss -nultp # List TCP and UDP listening sockets
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
udp UNCONN 0 0 127.0.0.1:323 0.0.0.0:* users:(("chronyd",pid=1249,fd=5))
udp UNCONN 0 0 0.0.0.0:41662 0.0.0.0:* users:(("avahi-daemon",pid=1216,fd=14))
udp UNCONN 0 0 0.0.0.0:5353 0.0.0.0:* users:(("avahi-daemon",pid=1216,fd=12))
udp UNCONN 0 0 192.168.122.1:53 0.0.0.0:* users:(("dnsmasq",pid=2011,fd=5))
udp UNCONN 0 0 0.0.0.0%virbr0:67 0.0.0.0:* users:(("dnsmasq",pid=2011,fd=3))
udp UNCONN 0 0 0.0.0.0:68 0.0.0.0:* users:(("dhclient",pid=2354,fd=6))
udp UNCONN 0 0 0.0.0.0:68 0.0.0.0:* users:(("dhclient",pid=2184,fd=6))
udp UNCONN 0 0 [::1]:323 [::]:* users:(("chronyd",pid=1249,fd=6))
udp UNCONN 0 0 [fe80::f3a:8415:60b9:e56b]%wlp2s0:546 [::]:* users:(("dhclient",pid=2373,fd=5))
udp UNCONN 0 0 [fe80::7e73:7a0c:e16f:a0d4]%eno1:546 [::]:* users:(("dhclient",pid=2242,fd=5))
udp UNCONN 0 0 [::]:5353 [::]:* users:(("avahi-daemon",pid=1216,fd=13))
udp UNCONN 0 0 [::]:48210 [::]:* users:(("avahi-daemon",pid=1216,fd=15))
tcp LISTEN 0 32 192.168.122.1:53 0.0.0.0:* users:(("dnsmasq",pid=2011,fd=6))
tcp LISTEN 0 5 127.0.0.1:631 0.0.0.0:* users:(("cupsd",pid=4225,fd=6))
tcp LISTEN 0 5 [::1]:631 [::]:* users:(("cupsd",pid=4225,fd=5))
|
On 23/11/12 12:50, Gene Czarcinski wrote:
Libvirt is in the process of changing for using bind-interface to using
bind-dynamic to fix a security related issue where dnsmasq was
responding to port 53 queries which did not occur on an address on the
virtual network interface that instance of dnsmasq was supporting.
From looking at ps -ax|grep dnsmasq, it is using the configuration file /var/lib/libvirt/dnsmasq/default.conf.
## dnsmasq conf file created by libvirt
strict-order
pid-file=/var/run/libvirt/network/default.pid
except-interface=lo
bind-dynamic
interface=virbr0
#...
So they have indeed moved to bind-dynamic from bind-interfaces. See also src/network.c in dnsmasq:
In --bind-interfaces, the only access control is the addresses we're listening on. There's nothing to avoid a query to the address of an internal interface arriving via an external interface where we don't want to accept queries, except that in the usual case the addresses of internal interfaces are RFC1918...
The fix is to use --bind-dynamic, which actually checks the arrival interface too. Tough if your platform doesn't support this.
Note that checking the arrival interface is supported in the standard IPv6 API and always done.
The DHCP socket (port 67) ends up bound to a specific interface. So we don't need to worry about DHCP, only DNS (port 53).
(dnsmasq only ever uses one DHCP socket. It listens to all addresses, but when there's exactly one interface it's able to bind to that using SO_BINDTODEVICE. Don't ask me to explain why it only uses one DHCP socket; doing DHCP is generally weird and low-level).
Testing dnsmasq from a second machine:
$ ip route add 192.168.124.1 via $FEDORA_IP
$ sudo nmap -A -F 192.168.124.1
Starting Nmap 6.47 ( http://nmap.org ) at 2016-01-18 16:11 GMT
Nmap scan report for 192.168.124.1
Host is up (0.0023s latency).
Not shown: 98 closed ports
PORT STATE SERVICE VERSION
22/tcp open ssh OpenSSH 7.1 (protocol 2.0)
|_ssh-hostkey: ERROR: Script execution failed (use -d to debug)
53/tcp open tcpwrapped
Device type: general purpose
Running: Linux 3.X
OS CPE: cpe:/o:linux:linux_kernel:3
OS details: Linux 3.11 - 3.14
Network Distance: 1 hop
TRACEROUTE (using port 8888/tcp)
HOP RTT ADDRESS
1 0.80 ms 192.168.124.1
OS and Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 23.42 seconds
So I can see an open TCP port. However it responds as if it's "tcpwrapped". That implies if you connect over a different interface from virbr0, dnsmasq closes the connection without reading any data. So data you send to it doesn't matter; it can't e.g. exploit a classic buffer overflow.
| Is libvirt dnsmasq exposed to the network, if I run Fedora without a firewall? |
1,298,910,509,000 |
One of the most common recommendations that I read for users that recently installed Linux Mint is to enable their firewall, which is pretty simple to do. But why is the firewall off by default in the first place? Is there any reason for this?
|
Linux Mint is an Ubuntu-based distribution intended for desktop systems. One of its chief priorities is "ease of use", so a firewall just puts into play something that could break things for users. It's easier if the firewall only gets turned on if the operator is someone who knows what such a thing even is, versus a novice user saying "Why don't it no worky?"
| Why is the firewall off by default with Linux Mint? |
1,298,910,509,000 |
Suppose I am logged into a server via ssh. While in the session, I change the firewall config to block all traffic.
When I tried this previously with FreeBSD and pf, the current connection was broken. When I try it now, the current connection remains active but ping (and new connections) doesn't work. I am not sure if there is something else which I am missing.
What is the expected behavior - should this break my current session?
|
"Should changing firewall settings to block all interrupt ongoing ssh session"
The answer is, maybe. It depends on the precise rules, where the block all appears, and whether existing SSH connections are managed under keep state (or modulate state). set optimization is also relevant; a firewall set to aggressively prune state could drop a session at the same time one is fiddling around with firewall rule changes. There are other relevant settings that may influence whether state is maintained, e.g. set state-policy might be set to if-bound, and the SSH packets for some routing reason start showing up on another interface.
In pf the last matching rule takes effect, unless quick is added to a rule. This is opposite of other firewall rules systems, notably iptables on Linux. Thus the exact ordering of rules is important, as is whether quick is used.
If state is enabled, the existing connections should be preserved through rule changes (unless set optimization kills them by timeout).
An example: block all will not apply as the last matching rule wins; also, state is maintained for existing SSH connections:
block all
pass out on $ext_if proto tcp all modulate state
pass in on $ext_if proto tcp from any to any port ssh modulate state
This next ruleset is a secure firewall, in that everything is quickly blocked, though existing SSH connections should still be maintained until the session times out:
block quick all
pass out proto tcp all modulate state
pass in proto tcp from any to any port ssh modulate state
Another way to write the above would be to put block all as the final rule (unless there are other quick rules), as by default the last matching rule wins.
(There is also a complication of how new states are matched; you can be less restrictive with flags any so that state is created for any portion of a TCP connection, not just the default of only new connections via the default flags S/SA. And other such complications from e.g. asymmetric routing.)
It is also usually a very good idea to have some sort of rollback or recovery option when making firewall rule changes, so that you do not lock yourself out of the system:
# pfctl -f pf.conf; sleep 30; cp pf.conf.bak pf.conf; pfctl -f pf.conf
jfkd^C
The rules change (to set block return quick) did not kill my existing session, so I hit control+c after mashing a few keys to see if they would be echo'd by the terminal.
| Should changing firewall settings to block all interrupt ongoing ssh session |
1,298,910,509,000 |
On a RHEL 7 server, /etc/hosts.allow has a number of IP addresses with full access. The firewall (confirmed with firewall-cmd), there are no specific sources defined, and the default zone allows certain ports and services.
Which takes precedence? Or for a specific example, if an IP address listed in /etc/hosts.allow tries to connect to the server using a port/service not allowed by the firewall rules, could it connect?
|
The answer is no.
Neither between the TCP Wrapper system and the firewall settings takes precedence; rather, they work as layers.
/etc/hosts.allow and /etc/hosts.deny are the host access control files used by the TCP Wrapper system. Each file contains zero or more daemon:client lines. The first matching line is considered.
Access is granted when a daemon:client pair matches an entry in /etc/hosts.allow. Otherwise, access is denied when a daemon:client pair matches an entry in /etc/hosts.deny. Otherwise, access is granted.
Now, if a service has been given access via the TCP Wrapper, but not on the firewall (and the firewall has a "deny all" rule by default, as it should be), the service won't be able to connect to the machine.
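A minimal illustration of the layering; the subnet is an example, and both layers must permit the connection for it to succeed:

```shell
# /etc/hosts.allow (wrapper layer, first matching line wins):
#   sshd : 192.168.1.0/255.255.255.0
# /etc/hosts.deny:
#   ALL : ALL
# firewalld layer: the service's port must be open here too:
firewall-cmd --permanent --add-service=ssh && firewall-cmd --reload
```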
I haven't seen TCP Wrappers configured much nowadays -- you can avoid this system, which provides only basic filtering via libwrap, and use just firewalld to allow access to services. It's easier to configure and manage, and more powerful.
| Which takes precedence: /etc/hosts.allow or firewalld? |
1,298,910,509,000 |
Consider these two sets of rules:
Set A
-A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
-A INPUT -j REJECT
-A OUTPUT -j ACCEPT
Set B
-A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
-A INPUT -j REJECT
-A OUTPUT -m conntrack --ctstate NEW,ESTABLISHED,RELATED -j ACCEPT
Previously I was under the impression that the two were functionally the same, however someone recently remarked to me:
To have an ESTABLISHED,RELATED connection you have to have a rule that adds the connection into the db that this rule looks at. This is done with the NEW rule. Once the connection is accepted it is placed into the db so the ESTABLISHED,RELATED rules can match against it. Without the NEW rules nothing is placed into the db and thus ESTABLISHED,RELATED will never match anything.
So I admit I'm a bit confused as to how the internals of iptables operate, exactly how does iptables go about tagging packets based on the packet state?
|
[I unfortunately had to remove references from the below due to the disappearance of iptables.info so you may have to trust me on a couple of points.]
Without the NEW rules nothing is placed into the db and thus ESTABLISHED,RELATED will never match anything.
This is false.
There are five userland states (there are more in kernel space), and while an ESTABLISHED or RELATED connection does logically have to begin with a NEW packet, you do not need any explicit NEW rule to produce such a connection (you do need an explicit ACCEPT rule, which implicitly matches NEW packets, however). Follow the logic and consider the other four states first, e.g.:
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -m state --state INVALID -j REJECT --reject-with icmp-net-prohibited
-A INPUT -m state --state INVALID -j REJECT --reject-with icmp-net-prohibited
I've left out NOTRACK since that can only exist because of a previous iptables rule. Guess what's left after these rules have been applied? Only NEW packets. You can now sort those however you want. It will work, I promise, and note: with no NEW rules at all. None.
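Putting the pieces together: a minimal stateful INPUT policy that admits, say, inbound SSH needs only an ordinary ACCEPT for the port. The first packet of a connection is NEW and matches that ACCEPT (which carries no state test and therefore matches NEW packets too), the kernel records the connection, and every later packet matches the ESTABLISHED,RELATED rule (port 22 is just an example here):

```
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -m state --state INVALID -j DROP
-A INPUT -p tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT
```

Note that the word NEW appears nowhere, yet new SSH connections are accepted and tracked.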
Exactly how does iptables go about tagging packets based on the packet state?
A connection is established once a reply has been sent. The nature of the protocol evidently comes into play here a bit (the kernel applies those rules too); e.g. when a NEW SYN TCP packet is accepted a SYN/ACK is sent in reply and the connection is ESTABLISHED; following a final FIN/ACK it is defunct.
How a connection is considered RELATED evidently depends further on the protocol involved; basically it relates to connections to/from hosts that already have an ESTABLISHED connection. Notice that you must sometimes load special modules (e.g., for ftp) in order to get this to work.
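For active FTP, for example, the data connection is only classified as RELATED if the FTP conntrack helper is loaded; the usual approach is along these lines (nf_conntrack_ftp is the modern module name, older kernels used ip_conntrack_ftp, and recent kernels may additionally require the helper to be assigned explicitly):

```
modprobe nf_conntrack_ftp
iptables -A INPUT -p tcp --dport 21 -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
```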
| How does iptables recognize packet state? |