Endlessh and tc tarpits slow down attackers
Sticky Fingers
A number of methods can stop attackers from exhausting your server resources: You can filter inbound traffic locally with a variety of security appliances, or you can use commercial online traffic-scrubbing services that catch hostile traffic upstream to mitigate denial-of-service attacks. Equally, honeypots can be used to draw attackers in, giving you a flavor of the attacks that your production servers might be subjected to in the future.
In this article, I look at a couple of relatively unusual techniques for slowing attackers down. First, Endlessh, a natty piece of open source software, consumes an attacker's resources by holding their connections open in a "tarpit," leaving them with less capacity to attack your online services. Second, to achieve similar results, I investigate a more traditional rate-limiting approach, courtesy of advanced Linux networking and traffic control (tc), with the kernel's built-in Netfilter packet filter controlled by its iptables frontend.
As surely as night follows day, automated attacks will target the default Secure Shell port (TCP port 22), so I will use SSH as the guinea pig test case with the knowledge that I can move the real SSH service to an alternative port without noticeable disruption.
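For reference, moving the real SSH daemon to another port is usually just a case of changing the Port directive in sshd_config and restarting the service. A minimal sketch on a systemd-based system might look like the following (port 2022 is an arbitrary choice of mine, and remember to allow the new port through any firewall before you reconnect):
# switch sshd from port 22 to 2022 (adjust the port to taste)
$ sudo sed -i 's/^#\?Port 22$/Port 2022/' /etc/ssh/sshd_config
$ sudo systemctl restart sshd
# confirm the daemon now listens on the new port
$ sudo lsof -i :2022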
Sticky Connections
If you visit the GitHub page for Endlessh [1], you are greeted with a brief description of its purpose: "Endlessh is an SSH tarpit that very slowly sends an endless, random SSH banner. It keeps SSH clients locked up for hours or even days at a time."
The documentation goes on to explain that if you choose a non-standard port for your SSH server and leave Endlessh running on TCP port 22, it's possible to tie attackers in knots, reducing their ability to do actual harm. One relatively important caveat, though, is that if you commit too much capacity to what is known as tarpitting (i.e., bogging something down), it is possible to cause a denial of service unwittingly to your own services. Therefore, you should never blindly deploy security tools like these in production environments without massive amounts of testing first.
The particular tarpit I build here on my Ubuntu 20.04 system (Focal Fossa, with the exceptionally aesthetically pleasing Linux Mint 20 Ulyana sitting atop) will catch the connection at the stage when the SSH banner is displayed, before SSH keys are exchanged. By keeping things simple, you don't have to worry about the complexities involved in dealing with encryption.
In true DevOps fashion, I fire up Endlessh with a Docker container:
$ git clone https://github.com/skeeto/endlessh
If you look at the Dockerfile within the repository, you can see an image that uses Alpine Linux as its base (Listing 1).
Listing 1
Endlessh Dockerfile
FROM alpine:3.9 as builder
RUN apk add --no-cache build-base
ADD endlessh.c Makefile /
RUN make

FROM alpine:3.9
COPY --from=builder /endlessh /
EXPOSE 2222/tcp
ENTRYPOINT ["/endlessh"]
CMD ["-v"]
Assuming Docker is installed correctly (following the installation process for Ubuntu 20.04 in my case, or as directed otherwise [2]), you can build a container with the command:
$ cd endlessh/
$ docker build -t endlessh .
[...]
Successfully built 6fc5221548db
Successfully tagged endlessh:latest
Next, check that the container image exists with docker images (Listing 2). Now you can spawn a container. If you are au fait with Dockerfiles, you will have spotted that TCP port 2222 will be exposed, as shown in the output:
$ docker run -it endlessh
2020-11-09T15:38:03.585Z Port 2222
2020-11-09T15:38:03.586Z Delay 10000
2020-11-09T15:38:03.586Z MaxLineLength 32
2020-11-09T15:38:03.586Z MaxClients 4096
2020-11-09T15:38:03.586Z BindFamily IPv4 Mapped IPv6
Listing 2
docker images
$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED              SIZE
endlessh     latest   6fc5221548db   58 seconds ago       5.67MB
<none>       <none>   80dc7d447a48   About a minute ago   167MB
alpine       3.9      78a2ce922f86   5 months ago         5.55MB
The command you really want to use, however, will expose that container port on the underlying host, too:
$ docker run -d --name endlessh -p 2222:2222 endlessh
You can, of course, adjust the first 2222 entry and replace it with 22, the standard TCP port. Use the docker ps command to make sure that the container started as hoped.
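For example, once the real SSH daemon has been moved off port 22, a sketch of publishing the tarpit on the standard port might look like the lines below (the --restart policy and the docker ps filter are my own additions, not part of the Endlessh documentation):
# remove the earlier test container so the name and port mapping are free
$ docker rm -f endlessh
# publish host port 22 and keep the tarpit running across reboots
$ docker run -d --name endlessh --restart unless-stopped -p 22:2222 endlessh
$ docker ps --filter name=endlessh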
In another terminal, you can check whether the Docker container has opened the port as hoped (Listing 3). Next, having proven that you have a working Endlessh instance, you can put it through its paces.
Listing 3
Checking for Open Port
$ lsof -i :2222
COMMAND    PID  USER FD  TYPE DEVICE SIZE/OFF NODE NAME
docker-pr 13330 root 4u  IPv4  91904      0t0  TCP *:2222 (LISTEN)
A simple test is to use the unerring, ultra-reliable netcat to see what is coming back from port 2222 on the local machine:
$ nc -v localhost 2222
Connection to localhost 2222 port [tcp/*] succeeded!
vc"06m6rKE"S40rSE2l
&Noq1>p&DurlvJh84S
bHzlY
mTj-(!EP_Ta|B]CJu;s'1^:m7/PrYF
LA%jF#vxZnN3Ai
Each line of output takes 10 seconds to appear after the succeeded line and is clearly designed to confuse whoever is connecting to the port into thinking something is about to respond with useful, sane commands. Simple but clever.
If you need more information to deploy Endlessh yourself, use the docker run command with the -h option at the end to see the help output (Listing 4).
Listing 4
Endlessh Help Output
$ docker run endlessh -h
Usage: endlessh [-vh] [-46] [-d MS] [-f CONFIG] [-l LEN] [-m LIMIT] [-p PORT]
  -4        Bind to IPv4 only
  -6        Bind to IPv6 only
  -d INT    Message millisecond delay [10000]
  -f        Set and load config file [/etc/endlessh/config]
  -h        Print this help message and exit
  -l INT    Maximum banner line length (3-255) [32]
  -m INT    Maximum number of clients [4096]
  -p INT    Listening port [2222]
  -v        Print diagnostics to standard output (repeatable)
  -V        Print version information and exit
As the help output demonstrates, it is very easy to alter ports (from a container perspective; remember to edit the docker run port mapping as discussed above). Endlessh allows you to specify how much gobbledygook is displayed with the -l (length) option, and to prevent an unexpected, self-induced denial of service on your own servers, you can hard-code the maximum number of client connections permitted with the -m option, so that your network stack doesn't start to creak at the seams.
Finally, it is possible to bind to IPv4 only, IPv6 only, or both and, with the -d setting, alter the delay between lines of gobbledygook (which by default is currently set at 10,000ms, or 10s).
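Because the Dockerfile's ENTRYPOINT is the Endlessh binary itself, any arguments placed after the image name are passed straight through to it. A more conservative configuration might therefore look something like this sketch (the values are arbitrary examples of mine, not recommendations):
# 15-second delay, shorter banner lines, at most 512 clients, IPv4 only
$ docker run -d -p 2222:2222 endlessh -v -d 15000 -l 24 -m 512 -4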
Colonel Control
If you don't want to use a prebuilt solution like Endlessh to create a tarpit, you can achieve similar results with tools available to Linux by default, or more accurately, with the correct kernel modules installed. As mentioned, this approach is more rate limiting than tarpitting but ultimately much more powerful and well worth discovering. Much of the following was inspired by a blog post from 2017 [3], whose introduction quotes Wikipedia's definition of a tarpit as "a service on a computer system (usually a server) that purposely delays incoming connections" [4].
Having looked at that post, followed by a little more reading, I realized that I'd obviously missed the fact that iptables (the frontend to the kernel's Netfilter firewall [5]) offers a TARPIT target of its very own; on modern distributions, it is typically supplied by the separate xtables-addons package. A look at the online manual [6] (or the output of the man iptables command) offers some useful information to help you get started. The premise of using the TARPIT target in iptables, as you'd expect from such a sophisticated piece of software, looks well considered and refined.
The docs state that the TARPIT target "Captures and holds incoming TCP connections using no local per-connection resources." Note that this reassuring opening sentence suggests an improvement over Endlessh, which stands a greater chance of unwittingly causing a local denial of service. The manual goes on to say, "Attempts to close the connection are ignored, forcing the remote side to time out the connection in 12-24 minutes." That sounds pretty slick. A careful reading of the manual reveals more tips. To open a "sticky" port to torment attackers, you can use the command:
$ iptables -A INPUT -p tcp -m tcp --dport 22 -j TARPIT
Noted in the documentation is that you can prevent the connection tracking (conntrack) functionality in iptables from tracking such connections with the NOTRACK target. Should you not do this, the kernel will unnecessarily use up resources for those connections that are stuck in a tarpit, which is clearly unwelcome behavior.
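The man page leaves the exact rule as an exercise, but NOTRACK rules normally live in the raw table, so that the packets are exempted before connection tracking ever sees them. A minimal sketch of the pairing, assuming the tarpit sits on port 22, might be:
# exempt tarpit traffic from connection tracking
$ iptables -t raw -A PREROUTING -p tcp --dport 22 -j NOTRACK
# then hold the untracked connections in the tarpit
$ iptables -A INPUT -p tcp -m tcp --dport 22 -j TARPIT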
To get started with the advanced Linux networking approach, according to the aforementioned blog post, you need to make sure your system provides the tc traffic control utility and that your kernel has the relevant queuing disciplines enabled (stripped-down kernels might not). Thankfully, my Linux Mint version does. To determine whether your system has the tool, enter the tc command (Listing 5). In Listing 6, you can see that no rules currently are loaded in iptables.
Listing 5
tc
$ tc
Usage: tc [ OPTIONS ] OBJECT { COMMAND | help }
       tc [-force] -batch filename
where  OBJECT := { qdisc | class | filter | chain | action | monitor | exec }
       OPTIONS := { -V[ersion] | -s[tatistics] | -d[etails] | -r[aw] |
                    -o[neline] | -j[son] | -p[retty] | -c[olor]
                    -b[atch] [filename] | -n[etns] name | -N[umeric] |
                    -nm | -nam[es] | { -cf | -conf } path }
Listing 6
iptables -nvL
$ iptables -nvL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination
On my laptop, an SSH server isn't installed automatically, so I have to add it to the system:
$ apt install openssh-server
The following NEW packages will be installed
  ncurses-term openssh-server openssh-sftp-server ssh-import-id
After some testing, you can uninstall those exact packages to keep your system trim (an example follows Listing 7). The commands in Listing 7 start up the SSH daemon (sshd) and tell you that it is listening on port 22. IPv6 and IPv4 connections are open on the default port, so you can continue.
Listing 7
Starting sshd
$ systemctl start sshd
$ lsof -i :22
COMMAND  PID  USER FD  TYPE DEVICE SIZE/OFF NODE NAME
sshd     5122 root 3u  IPv4  62113      0t0  TCP *:ssh (LISTEN)
sshd     5122 root 4u  IPv6  62115      0t0  TCP *:ssh (LISTEN)
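As mentioned above, once your testing is finished, the packages that the install pulled in can be removed again. Assuming nothing else on the system relies on them, something along these lines should do the trick:
$ apt remove ncurses-term openssh-server openssh-sftp-server ssh-import-id
$ apt autoremove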
At this stage, if you are testing on a server, make sure that you have altered your main SSH port and restarted your SSH daemon, or potentially you will be locked out of your server. The steps you aim to achieve are:
- With iptables, mark all packets hitting the SSH port, TCP port 22, using the MARK target.
- Use tc to set up the hierarchy token bucket (HTB) qdisc to catch the traffic to be filtered.
- Create an HTB rule that will be used for normal traffic (allowing the use of loads of bandwidth), calling it 1:0.
- Create a second HTB rule that will only be allowed a tiny amount of traffic and call it 1:5.
- Use tc to create a filter and get it to match the MARK set in step 1; then, watch the marked traffic get allocated to the 1:5 class.
- Check the output of both HTB traffic classes to look for overlimits.
Now it's time to put those steps in action.
To begin, add iptables rules to the mangle table, which lets you manipulate connections to your heart's content. For step 1, MARK your connections to the SSH port with two rules (one matching the source port and one the destination port), because you'll test whether it works by using connections from another local machine. You may need to change the -A option to -I if you already have rules in place:
$ iptables -A OUTPUT -t mangle -p tcp --sport 22 -j MARK --set-mark 10
$ iptables -A OUTPUT -t mangle -p tcp --dport 22 -j MARK --set-mark 10
As you can see, one rule matches the source port and one the destination port, and both mark the packets with the label 10. Now check that traffic is hitting the newly created rules by inspecting the mangle table (Listing 8).
Listing 8
Checking the Mangle Table
$ iptables -t mangle -nvL
Chain OUTPUT (policy ACCEPT 15946 packets, 7814K bytes)
 pkts bytes target prot opt in out source    destination
  192 35671 MARK   tcp  --  *  *   0.0.0.0/0 0.0.0.0/0    tcp spt:22 MARK set 0xa
   31  4173 MARK   tcp  --  *  *   0.0.0.0/0 0.0.0.0/0    tcp dpt:22 MARK set 0xa
For step 2, run a new tc command to add the HTB qdisc (the scheduler) to your network interface. First, however, you need to know the name of your machine's network interfaces, which ip a will reveal. In my case, I can ignore the lo localhost interface, and I can see the wireless interface named wlp1s0, as seen in a line of the output:
wlp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP group default qlen 1000
From now on, simply substitute your own interface name wherever wlp1s0 appears.
Now, add the HTB qdisc scheduler to the network interface as your parent rule (which is confusingly referenced as 1:, an abbreviated form of 1:0):
$ tc qdisc add dev wlp1s0 root handle 1: htb
As per step 3, you need to create a rule for all your network interface traffic, with the exception of your tarpit or throttled traffic, by dutifully naming this classifier 1:0 (I prefer to think of such rules as a "traffic class" for simplicity):
$ tc class add dev wlp1s0 parent 1: classid 1:0 htb rate 8000Mbit
For step 4, instead of adding loads of traffic allowance, add just 80 bits per second (10 bytes per second) of available throughput and call the entry 1:5 for later reference:
$ tc class add dev wlp1s0 parent 1: classid 1:5 htb rate 80bit prio 1
The filter for step 5 picks up all marked packets courtesy of the iptables rules in step 1 and matches the 1:5 traffic class entry in HTB:
$ tc filter add dev wlp1s0 parent 1: prio 1 protocol ip handle 10 fw flowid 5
Note the flowid 5 to match the 1:5 traffic class and the handle 10 to match the mark set by the iptables rules.
For step 6, you can see your qdisc in action:
$ watch -n1 tc -s -g class show dev wlp1s0
Figure 1 shows the hierarchical explanation of the two child classifiers, which are sitting under the parent qdisc.
For more granular information on qdisc, traffic class, or filter level, you can use the commands:
$ tc qdisc show dev wlp1s0
$ tc class show dev wlp1s0
$ tc filter show dev wlp1s0
Next, you should look for your local network interface's IP address (make sure you replace my network interface name with your own):
$ ip a | grep wlp1s0 | grep inet
inet 192.168.0.16/24 brd 192.168.0.255 scope global dynamic noprefixroute wlp1s0
Now log in to another machine on your local network with SSH access (or open up SSH access for a remote machine) and run the command,
$ ssh 192.168.0.16
replacing my IP address with your own. After watching a very slow login prompt appear, you can generate some arbitrary noise in the terminal (use any command that you like), such as the following, which on my machine will push screeds of data up the screen:
$ find / -name chrisbinnie
If all goes well, you should see the output move very, very slowly indeed, and if you're serving an application over another port (I'm not on my laptop), other services should be absolutely fine, running as usual at the normal speed.
The proof of the pudding (that your second class is matching the traffic picked up by the filter that references the iptables rules) can be seen in Figure 1 for the 1:5 class. If you run a few commands on the other machine that is SSH'd into your throttled SSH server, you should see the overlimits counter increase steadily. In Figure 1, it shows 36 packets.
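If you'd rather not scan the full watch output, a quick shorthand of my own (not from the original post) is to filter for just the statistics lines where the overlimits counter lives:
# print each class line plus the statistics line that follows it
$ tc -s class show dev wlp1s0 | grep -B1 overlimits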
If you need to triple-check that it is working as hoped, you can remove the running tc configuration:
$ tc qdisc del dev wlp1s0 root
SSH should magically return to being nice and responsive again.
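Deleting the qdisc leaves the packet-marking rules from step 1 in place; they do no harm on their own, but if you want to tidy them away as well, the matching delete commands simply swap -A for -D:
$ iptables -D OUTPUT -t mangle -p tcp --sport 22 -j MARK --set-mark 10
$ iptables -D OUTPUT -t mangle -p tcp --dport 22 -j MARK --set-mark 10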
Although you are not strictly tarpitting connections, you are limiting them significantly. The exceptional tc will only allow that tiny upper limit of bandwidth to all SSH connections on TCP port 22, so rest assured that with multiple attackers vying for such a small amount of traffic, it's not going to be an enjoyable experience.
The End Is Nigh
If you're keen to learn more about the genuinely outstanding tc and its collection of various qdiscs (and you should be), you can find a lot more information online about altering your network traffic in almost any way you can imagine. The manual page for Ubuntu [7] is a good start.
For further reading on tarpits, refer to the Server Fault forum [8], which discusses the good and bad elements of tarpits in detail and offers some useful insights into how the ordering of your iptables chains, or rules, can be set up to be the most efficient. Also, pay attention to the comments about accidentally filling up logs and causing a different type of denial of service that you might not have been expecting.
As mentioned before, be certain you know what you are switching on when it comes to tarpit functionality, whichever route you take. On occasion, honeypots and tarpits can be an invaluable addition to your security setup, but without some forethought, it is quite possible to tie your shoelaces across your shoes and trip yourself up, irritate your users, and cause a whole heap of extra work for yourself.
Infos
- Endlessh: https://github.com/skeeto/endlessh
- Install Docker engine on Ubuntu: https://docs.docker.com/engine/install/ubuntu
- "Super Simple SSH Tarpit" by Gabriel Nyman, November 20, 2017: https://nyman.re/super-simple-ssh-tarpit
- Tarpit: https://en.wikipedia.org/wiki/Tarpit_(networking)
- Netfilter: https://www.netfilter.org
- iptables: https://linux.die.net/man/8/iptables
- Ubuntu tc man page: http://manpages.ubuntu.com/manpages/cosmic/man8/tc.8.html
- Server Fault: https://serverfault.com/questions/611063/does-tarpit-have-any-known-vulnerabilities-or-downsides