Network Performance 101


By Rufat Ibragimov

Looking to build high-performance NAS storage? The key to success is a high-speed network at the core of the system. Let's round up the key elements to consider when designing the optimal network infrastructure.

Let's start with hardware and general recommendations. For a NAS deployment, use 10GbE, 25GbE, 40GbE or 100GbE network interfaces. Clients connect at speeds matching their required throughput; more often than not, they use 1GbE or 10GbE links.
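To put those link speeds in perspective, here is a quick, illustrative calculation of the theoretical maximum payload rate for each common link speed, ignoring protocol overhead:

```python
# Illustrative only: theoretical ceiling per link, ignoring Ethernet/TCP overhead.
def link_throughput_gbytes(gbit_per_s: float) -> float:
    """Convert link speed in Gbit/s to GB/s (8 bits per byte)."""
    return gbit_per_s / 8

for speed in (1, 10, 25, 40, 100):
    print(f"{speed:>3} GbE -> {link_throughput_gbytes(speed):.3f} GB/s max")
```

So a single 10GbE client link tops out at 1.25 GB/s before overhead, which is why multi-client NAS workloads quickly justify 25GbE and faster uplinks on the storage side.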

Thanks to an attractive price-to-performance ratio, 10GbE network adapters and switches top the market ratings and enjoy solid demand in the industry. However, a growing number of clients are switching to 25GbE, 40GbE and even 100GbE. At this point, only a few companies with NAS offerings recommend optional use of 40GbE and 100GbE interfaces; RAIDIX is among them.

When using high-performance interfaces on the NAS side, it's crucial that the switch providing client access has enough internal throughput to process all connections. If it turns out to be less than the combined bandwidth of all connected ports, the switch will most likely become a bottleneck when serving multiple clients. This is why it's important to study the switch specification and pay particular attention to the Switching Capacity parameter.
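As a rough sanity check, the required switching capacity can be estimated from the port configuration. The sketch below assumes full-duplex ports (each port can send and receive at line rate simultaneously, so the fabric must handle twice the sum of port speeds); the port counts are purely illustrative:

```python
# Hypothetical sketch: estimate the fabric bandwidth a switch needs and
# compare it with the vendor's quoted Switching Capacity.
def required_switching_capacity_gbps(port_speeds_gbit):
    # Full duplex: each port transmits and receives at line rate at once,
    # so the fabric must sustain 2x the sum of all port speeds.
    return 2 * sum(port_speeds_gbit)

# Example: 24 x 10GbE client ports plus 2 x 40GbE uplinks to the NAS.
ports = [10] * 24 + [40] * 2
print(f"Required switching capacity: {required_switching_capacity_gbps(ports)} Gbit/s")
```

If the datasheet's Switching Capacity is below this figure, the switch cannot run all ports at line rate simultaneously.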

An essential condition for achieving high file-access speeds is using the latest versions of SMB, NFS and other protocols. SMB 3.0, for one, delivers multichannel and RDMA support, features that some independent tests show can double performance. For RDMA to function correctly, appropriate network adapters are required.
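On a Linux NAS serving SMB, multichannel can be enabled in Samba (version 4.4 or later) via the `server multi channel support` parameter. A minimal sketch, assuming the standard `/etc/samba/smb.conf` path and a systemd-managed `smbd` service:

```shell
# Enable SMB multichannel in Samba (4.4+). Paths and service name may
# differ by distribution.
cat >> /etc/samba/smb.conf <<'EOF'
[global]
    server multi channel support = yes
EOF

# Apply the change.
systemctl restart smbd
```

Windows clients with multiple NICs (or a multi-queue RSS-capable NIC) will then open several TCP connections per session automatically.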

Adapters equipped with the TCP Offload function will definitely do a lot of good: this feature shifts load off the CPU, increasing overall performance. Ideally, such adapters are used on both the NAS side and the client side.

Now that the hardware is sorted out, let's talk configuration. On all operating systems and switches, without exception, set the MTU to 9000 or higher (jumbo frames), depending on the hardware and the manufacturer's recommendations.
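On Linux this can be done with `ip link`; a minimal sketch, where the interface name `eth0` and the target address `192.168.1.100` are placeholders to substitute for your own:

```shell
# Set MTU 9000 on the NAS-facing interface; the switch and the clients
# must be configured to match, or large frames will be dropped.
ip link set dev eth0 mtu 9000

# Verify jumbo frames end-to-end: 8972 = 9000 - 20 (IP header) - 8 (ICMP
# header) bytes of payload, with -M do forbidding fragmentation.
ping -M do -s 8972 -c 3 192.168.1.100
```

If the ping fails with "message too long", some device along the path is still running a smaller MTU.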

Next, we move on to configuring the TCP/IP stack. Let’s start with the Linux settings:

Turn off TCP timestamps for better CPU utilization

sysctl -w net.ipv4.tcp_timestamps=0

Turn on TCP selective acks for improved throughput

sysctl -w net.ipv4.tcp_sack=1

Increase the maximum length of the network device input queue

sysctl -w net.core.netdev_max_backlog=250000

Increase default buffer size and maximum buffer size for TCP

sysctl -w net.core.rmem_max=4194304

sysctl -w net.core.wmem_max=4194304

sysctl -w net.core.rmem_default=4194304

sysctl -w net.core.wmem_default=4194304

sysctl -w net.core.optmem_max=4194304

Increase auto-tuning buffer limits for TCP connections

sysctl -w net.ipv4.tcp_rmem="4096 87380 4194304"

sysctl -w net.ipv4.tcp_wmem="4096 65536 4194304"
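The `sysctl -w` commands above take effect immediately but are lost on reboot. To persist them, place the same keys in a file under `/etc/sysctl.d/` (the filename below is illustrative) and reload:

```shell
# Persist the TCP/IP tuning across reboots.
cat > /etc/sysctl.d/90-nas-tuning.conf <<'EOF'
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_sack = 1
net.core.netdev_max_backlog = 250000
net.core.rmem_max = 4194304
net.core.wmem_max = 4194304
net.core.rmem_default = 4194304
net.core.wmem_default = 4194304
net.core.optmem_max = 4194304
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 65536 4194304
EOF

# Reload all sysctl configuration files.
sysctl --system
```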

Turn on the TCP offload features

Since different adapters support different settings, we first check which TCP offload options the adapter supports with "ethtool -k eth0" (long form: "ethtool --show-offload eth0"), and then configure them.
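A short sketch of that workflow; `eth0` is a placeholder, and the exact set of toggleable features depends on the driver, so always consult the `-k` output first:

```shell
# List the offloads this adapter and driver support.
ethtool -k eth0

# Enable common TCP offloads where supported:
#   tso - TCP segmentation offload, gso - generic segmentation offload,
#   gro - generic receive offload.
ethtool -K eth0 tso on gso on gro on
```

Features reported as "[fixed]" in the `-k` output cannot be changed for that hardware.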

As for Windows, it's easy as pie: all settings are adjusted automatically and dynamically. The only special recommendation is to enable TCP Offload in the network adapter's driver, and the OS will do the rest.