Reverse Proxies & Client Connections

Linux Kernel Tuning for High Performance Networking Series

John H Patton
Level Up Coding


Configuring Ephemeral Ports for High Performance

Ephemeral Ports

An ephemeral port is used when making a client connection to a server. When operating a reverse proxy in front of upstream workers, the proxy service connecting to a worker creates the client-side socket and the upstream worker creates the server-side socket. When a reverse proxy needs to make many simultaneous upstream connections, ephemeral port exhaustion can occur: the number of upstream connections exceeds the number of ports available in the ephemeral range on the reverse proxy host. In this scenario, the ephemeral port range can be widened to make more source ports available to the proxy, allowing for more connections.

Detecting ephemeral port exhaustion depends on the software making the client connections, but typically there will be upstream or proxy “no client ports available” errors in the application’s error log. Another symptom is a high number of connections in the TIME_WAIT state. Of course, comparing the number of client connections in use against the size of the ephemeral range is a surefire way to know you’ve hit the limit.

The number of ephemeral ports in use can be checked via bash as root; the script may need to be modified for your system:
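
A minimal sketch of such a script, assuming ss is available and that its fourth column holds the local address and port:

    #!/usr/bin/env bash
    # Compare ports in use against the configured ephemeral port range.
    read low high < /proc/sys/net/ipv4/ip_local_port_range
    total=$(( high - low + 1 ))

    # Count unique local ports currently bound within the ephemeral range.
    # Column 4 of `ss -tan` is Local Address:Port; the last colon-separated
    # field is the port (works for IPv4 and bracketed IPv6 addresses).
    in_use=$(ss -tan | awk -v low="$low" -v high="$high" '
      NR > 1 {
        n = split($4, a, ":"); p = a[n] + 0
        if (p >= low && p <= high && !(p in seen)) { seen[p]; c++ }
      }
      END { print c + 0 }')

    echo "ephemeral range: ${low}-${high} (${total} ports)"
    echo "ports in use:    ${in_use}"
    echo "available:       $(( total - in_use ))"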

If the number of available ports is low, consider increasing the port range.

Ephemeral Port Range
The ephemeral port range suggested by IANA for client connections is 49152–65535. Ports 1024–49151 are considered semi-reserved by IANA since application developers can register a port for their application, but these are only a concern if one of those registered services is running on the server and listening on its registered port.

By default, the network stack in many Linux kernels uses ports 32768–61000 for client-side sessions originating from the server. This only allows for a maximum of 28,232 TCP connections. To increase the number of ephemeral ports and allow for more proxy connections, this port range should be widened to handle the number of upstream connections configured in the reverse proxy.
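
The current range can be checked with sysctl; the values shown below are a common default and may differ on your system:

    # Check the current ephemeral port range
    sysctl net.ipv4.ip_local_port_range
    # net.ipv4.ip_local_port_range = 32768   61000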

System port numbers (0–1023) should be excluded from this range, so the lowest port selected should be no lower than 1024; it must also be higher than the highest port used by any configured listener on the system to avoid port collisions. For example, if there’s a service listening for connections on port 9990, the lowest ephemeral port should be 9991 or higher. For systems that have an environment port scheme, select something above the highest port in the scheme.

Although web servers tend to operate on ports 80 and 443, the main reason to move a listener to a non-standard port is so the service can run under a non-root service account. In this case, the web server should sit behind a load balancer that routes traffic to the non-standard port. Many services operate well with a listener in the 1024–9999 range, so excluding this range from ephemeral use leaves flexibility for configuring service listeners on the system while still nearly doubling the number of ephemeral ports available for upstream connections. Although not the only benefit, reserving a listener range of 8,976 ports (1024–9999) also creates options for standardizing a port scheme to help manage enterprise environments consisting of development, test, user acceptance, staging, and production.

Example
Nginx is listening on ports 6180/6143 for production traffic and on 9443 for nginx API traffic, and it acts as a reverse proxy to several backend services.
As a result, the lowest ephemeral port should be 9444 or above; however, to avoid relying on tribal knowledge, 10000 would be a good choice since it is above the most common listener ports and provides an easy standard to remember and share with the team: “don’t configure any service listener on port 10000 or above.”

00-ip-local-port-range.conf
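
A sketch of that drop-in file, assuming it is placed in /etc/sysctl.d/ and using the 10000 lower bound from the example above:

    # /etc/sysctl.d/00-ip-local-port-range.conf
    # Widen the ephemeral range: start above every configured listener (10000)
    # and extend to the top of the port space.
    net.ipv4.ip_local_port_range = 10000 65535

Apply it without a reboot with sysctl --system (or sysctl -p /etc/sysctl.d/00-ip-local-port-range.conf) and verify with sysctl net.ipv4.ip_local_port_range.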

Maximum Segment Lifetime

Many reverse proxy servers see an increased number of connections in “TIME_WAIT” under load. This is the result of opening and closing many client connections, and there are a couple of settings that can help.
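
A quick way to gauge the impact is to count the sockets currently in that state, for example:

    # Count TCP sockets in TIME_WAIT (the first line of ss output is a header)
    ss -tan state time-wait | tail -n +2 | wc -l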

Reusing connections in TIME_WAIT

The kernel can reuse sockets that are in “TIME_WAIT” for new outbound connections if net.ipv4.tcp_tw_reuse is enabled by setting it to 1, so if the system experiences a high number of “TIME_WAIT” connections this is a good setting to test on the impacted system.
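
A sketch of testing it at runtime and, if it helps, persisting it across reboots (the drop-in file name is an assumption):

    # Allow TIME_WAIT sockets to be reused for new outbound connections (runtime only)
    sysctl -w net.ipv4.tcp_tw_reuse=1

    # Persist the setting if testing shows an improvement
    echo 'net.ipv4.tcp_tw_reuse = 1' > /etc/sysctl.d/01-tcp-tw-reuse.conf
    sysctl --system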

In Linux, the maximum segment lifetime (MSL) is set with one of the following kernel settings, depending on your kernel version; an example of checking and adjusting it follows the list.

  • net.netfilter.nf_conntrack_tcp_timeout_time_wait
  • net.ipv4.netfilter.ip_conntrack_tcp_timeout_time_wait
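
To see which key the running kernel exposes and to adjust it, something along these lines can be used (the conntrack module must be loaded for the key to exist, and 60 is only an example value):

    # Find the time_wait timeout key exposed by the running kernel
    sysctl -a 2>/dev/null | grep tcp_timeout_time_wait

    # Lower the TIME_WAIT timeout in seconds (the default is typically 120)
    sysctl -w net.netfilter.nf_conntrack_tcp_timeout_time_wait=60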
