Enlarging the TCP initial window
Open the Floodgates
Most Internet services rely on the Transmission Control Protocol (TCP), an interprocess communication protocol that dates back to the early 1980s, an era when data streams were more like trickles by today's standards. Protocols such as SMTP for email, HTTP/1.1, and HTTP/2 all run on top of TCP; therefore, any optimization of the TCP stack has a positive effect on their performance. In this article, I take a look at the TCP initial window, which defines how much data the sender may put on the wire before the first acknowledgment arrives and thus determines how quickly a new connection gets up to speed.
The first Request for Comments on TCP (RFC 793) dates back to 1981 [1]. One of the important aspects of TCP has always been that it maximizes the available network bandwidth. However, it avoids overloading the individual components in the connection or their buffers by splitting the payload to be transferred (e.g., an HTML file) into small packets and transferring them one by one. The receiver gives the sender regular feedback on how many packets it has received. If one is missing, the transmitter resends it. TCP uses sequence numbers to ensure the correct sequence of packets.
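If you want to watch this exchange of data segments and acknowledgments on the wire, a packet sniffer is all you need. The following call is only a sketch; it assumes the eno1 interface used later in this article and unencrypted HTTP on port 80, so adjust both to your environment:

$ tcpdump -i eno1 -nn 'tcp port 80'

tcpdump prints the sequence (seq) and acknowledgment (ack) numbers of every segment, which makes the feedback loop described above directly visible.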
In the early days of the Internet, modems (acoustic couplers) commonly had a transmission rate of 300bps. Today, many private homes in large cities use fiber optics with transmission rates of 1Gbps. However, the basic concept of TCP has not changed in all these years. TCP does not know the maximum transfer rate of a network connection, so it does not send the complete file to be transferred all at once, just a small piece of it.
The TCP initial window (IW), on the other hand, has changed over the years and now comprises 10 segments. Each segment can carry 1,500 bytes, of which 40 bytes are lost to TCP/IP header overhead, so the first flight of data from the server to the client amounts to roughly 14KB. This size applies regardless of whether the connection runs over a modern fiber-optic line or an old analog modem.
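A quick back-of-the-envelope calculation shows where the figure of roughly 14KB comes from; the 1,460 bytes of payload per segment assume the usual 1,500-byte Ethernet MTU minus 40 bytes of TCP/IP headers:

$ echo $(( 10 * (1500 - 40) ))
14600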
Once the client has acknowledged receipt of the first 14KB from the server, TCP doubles the number of segments to 20, corresponding to 28KB. Once these have also arrived, TCP doubles the segments again. The doubling continues until the other side fails to receive a packet and cannot send an acknowledgement back to the server; TCP then transmits fewer segments on the next attempt. The same mechanism applies if the bandwidth fluctuates or interruptions occur.
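You can watch the window grow on a live system with the ss tool, which reports TCP internals such as the current congestion window (cwnd) and the round-trip time (rtt); the exact fields vary with the kernel version, so treat this as a sketch:

$ ss -ti | grep -E 'cwnd|rtt'

Run the command repeatedly while a download is in progress (e.g., under watch -n 1) and you can see the cwnd value climb from its initial 10 segments.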
Bad Trip
If you want to transfer a 100KB file over HTTP/1.1 on a new TCP connection with an IW of 10 segments, you need four round trips. The unfortunate thing in this example is that the fourth round trip alone could transport more than 100KB, but it takes the previous three round trips to reach that window size (Table 1; the short calculation after the table reproduces the numbers). The download speed for the 100KB file therefore depends only on the network latency, not on the available bandwidth. In total, the various network packets cover the distance between server and client eight times.
Table 1: Roundtrip Times

Roundtrip | Payload (KB) | Total (KB)
---|---|---
1 | 14 | 14
2 | 28 | 42
3 | 57 | 99
4 | 114 | 213
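If you want to check the table yourself, the following shell snippet reproduces its values. It assumes an initial flight of 14,600 bytes (10 segments of 1,460 bytes), doubles the window after every round trip, and rounds down to whole kilobytes of 1,024 bytes, just as the table does:

payload=14600; total=0
for rt in 1 2 3 4; do
  total=$(( total + payload ))              # data delivered after this round trip
  echo "$rt | $(( payload / 1024 )) | $(( total / 1024 ))"
  payload=$(( payload * 2 ))                # slow start doubles the window
done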
With this in mind, it also becomes clear why it is the lower latency of the new 5G mobile network, rather than its higher bandwidth, that makes surfing normal websites faster. For very large files (e.g., when streaming HD movies), however, latency is not so important, because it is the bandwidth that actually counts.
In 2013, RFC 6928 [2] increased the IW from the previous 2 to 4 segments to 10 segments. Since then, this value has been the default on all Linux flavors. An IW of 10 made sense worldwide in 2013, but if you need to deliver websites today, you can set a higher value.
All over the world, content delivery networks (CDNs) and cloud service providers are already doing this. A paper that appeared in IEEE Transactions on Network and Service Management [3] analyzes the situation and shows that Amazon uses an IW of 25, Microsoft 30, Akamai up to 32, and the Fastly CDN even up to 100 segments. These providers sometimes deliver different content with different IWs.
If you want to set a higher IW, you need root privileges and the ip command-line tool. First, you need to discover the default route:
$ ip route show
default via 123.123.123.241 dev eno1 onlink
123.123.123.240/29 dev eno1 proto kernel scope link src 123.123.123.242
With the information in the line that starts with default and the ip command, you can now set an IW of 32 segments with:
$ ip route change default via 123.123.123.241 dev eno1 onlink initcwnd 32
Calling ip route show again checks whether the new value has been established. Lo and behold:
$ ip route show
default via 123.123.123.241 dev eno1 onlink initcwnd 32
123.123.123.240/29 dev eno1 proto kernel scope link src 123.123.123.242
With an IW of 32, the 100KB file described earlier needs only two round trips to reach the client. At the TCP level, delivery is therefore twice as fast.
Unfortunately, no IW is optimal for all cases. For local servers, an IW of 20 should be fairly harmless. If you are sure that most of the remote nodes are in the same country as you, try an IW of 32. If you choose a value that is too high, you are just shooting yourself in the foot. In this case, delivery can take longer because errors occur that the protocol needs to correct, requiring further round trips. (See also the "HTTP/3 On Its Way" box.)
HTTP/3 On Its Way
When it comes to transferring web pages, TCP is increasingly becoming a bottleneck. For this reason, the future HTTP/3 protocol [4] relies on the User Datagram Protocol (UDP), a minimal and connectionless network protocol. Although faster than TCP in sending data, it neither guarantees delivery nor the correct order of the data sent. HTTP/3 therefore emulates these TCP functions within an encrypted tunnel in UDP itself. Of course, this method entails a certain overhead, but in the end, HTTP/3 is still faster than HTTP/2.
Although HTTP/3 is hardly used in many countries yet, it is already very stable. Google has been using it for a long time, originally under the Quick UDP Internet Connections (QUIC) [5] label. Google has its own Chrome browser on which it can try out new technologies quickly and relatively easily. If these technologies prove themselves, they are later standardized in an RFC, as happened with HTTP/3.
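If you are curious whether a site already speaks HTTP/3, a curl binary built with HTTP/3 support can serve as a quick probe. This is only a sketch; many distributions still ship curl without the required QUIC libraries, in which case the --http3 option is rejected:

$ curl --http3 -sI https://www.google.com | head -n 1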
After finding the right settings, you can add the ip route change default command to the /etc/network/interfaces file as a post-up line in the appropriate iface stanza. The system then executes it automatically every time it restarts.
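What such an entry might look like is sketched below, assuming a Debian-style ifupdown setup with the static address from the earlier examples; interface name, addresses, and gateway have to match your own configuration:

auto eno1
iface eno1 inet static
    address 123.123.123.242/29
    gateway 123.123.123.241
    # reapply the larger initial window every time the interface comes up
    post-up ip route change default via 123.123.123.241 dev eno1 onlink initcwnd 32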
After setting up the IW, you will also want to enable pacing [6], which makes TCP space its packets out slightly in time instead of sending them back to back, reducing congestion effects at network bottlenecks. With root privileges, pacing can be activated with the
sysctl -w net.core.default_qdisc=fq
command [7]. You will need to restart the system after this step.
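If you prefer the setting to survive reboots without retyping the command, you can also put it into a sysctl configuration file. This is only a sketch and assumes a distribution that reads /etc/sysctl.d/; the file name is arbitrary:

# make fq, the queueing discipline that implements pacing, the permanent default
$ echo 'net.core.default_qdisc = fq' > /etc/sysctl.d/90-fq-pacing.conf
$ sysctl --system

The sysctl --system call reloads all sysctl configuration files right away.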
Infos
- RFC 793: https://tools.ietf.org/html/rfc793
- RFC 6928: https://tools.ietf.org/html/rfc6928
- Rüth, J., I. Kunze, and O. Hohlfeld. TCP's Initial Window – Deployment in the Wild and Its Impact on Performance, 2019: https://www.comsys.rwth-aachen.de/fileadmin/papers/2019/2019-rueth-iwtnsm.pdf
- HTTP/3: https://en.wikipedia.org/wiki/HTTP/3
- QUIC: https://blog.cloudflare.com/the-road-to-quic/
- Pacing: https://homes.cs.washington.edu/~tom/pubs/pacing.pdf
- Setting TCP pacing: http://man7.org/linux/man-pages/man8/tc-fq.8.html