Sequential Data Storage. The Performance Race. Part 2


Performance measurement

Processing high-definition video requires uninterrupted multi-threaded access to large volumes of media data. Video editing performance and time-to-market depend heavily on the characteristics of the data storage.

It is hard to assess a storage system's characteristics outside its working environment, without the actual workstations and applications. However, synthetic tests help compare different connection types and show how performance responds to a changing workload.

Both cluster types, SAN and NAS, were assembled and tested in the Entry lab. The deployed solution was adapted to provide interchangeable block and file access.

Cluster node: FC cluster (16 Gbit/s) / NAS cluster (40 GbE)

  • CPU: 1 × Intel Xeon E5-1620 v3 (4 × 3.5 GHz)
  • RAM: 4 × 16 GB DDR4-2133 reg
  • Hard drive interface: SAS
  • SAS controller: LSI SAS HBA 9302-16e
  • Connection controller: ATTO 16 Gbit/s Dual Channel FC HBA (FC cluster) / Mellanox ConnectX®-3 Pro EN NIC, Dual 40/56 GbE (NAS cluster, 40 GbE)

Entry SAS JBOD 60: FC cluster (16 Gbit/s) / NAS cluster (40 GbE)

  • JBOD: HGST 4U60 Storage Enclosure, 60 × 8 TB
  • Raw capacity: 480 TB

The Entry lab did not set out to compare the raw performance of the two cluster types: the choice of topology is up to system architects. Besides serving different application areas, the two systems differ in several important ways:

  • The FC cluster exposes a block access interface; the NAS cluster exposes a file interface (SMB 2.0)
  • FC cluster clients operate on disks, while NAS cluster clients operate on network folders (shared resources)
  • Third-party access management software (e.g., MetaSAN by Tiger Technology) is needed to let multiple initiators access the FC cluster simultaneously. In the NAS cluster, these functions are performed by the NAS itself, whereas third-party access management tools take their own toll on performance.

Testing tools

The test sets included:

  • AJA System Test 2.1. A standard test tool in the video industry, it assesses the disk subsystem's ability to write and read streams of various definitions and codecs. Unfortunately, the tool supports only a single thread.
  • IOMeter 2008.06.18 RC2. A synthetic test for disk and network subsystems that emulates multi-threaded load across a wide range of parameters.
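In the same spirit as these tools, a minimal single-stream sequential-write probe can be scripted in a few lines. This is only a sketch, not a substitute for either tool; the temp-file target and the default sizes are arbitrary assumptions:

```python
import os
import tempfile
import time


def sequential_write_mbps(path, block_size=8 * 1024 * 1024, total_mb=64):
    """Write `total_mb` megabytes in `block_size` chunks; return MB/s."""
    block = os.urandom(block_size)
    blocks = total_mb * 1024 * 1024 // block_size
    start = time.perf_counter()
    with open(path, "wb", buffering=0) as f:  # unbuffered: skip Python-level caching
        for _ in range(blocks):
            f.write(block)
        os.fsync(f.fileno())  # flush to the device, not just the OS page cache
    return total_mb / (time.perf_counter() - start)


if __name__ == "__main__":
    fd, target = tempfile.mkstemp()
    os.close(fd)
    try:
        print(f"sequential write: {sequential_write_mbps(target):.0f} MB/s")
    finally:
        os.remove(target)
```

A real benchmark would also pin the queue depth, bypass the OS page cache on reads (e.g., O_DIRECT), and run long enough to defeat controller caches; both AJA and IOMeter handle these details for you.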

Windows Server 2012 R2 was installed on the initiators.

AJA System Test Results

FC cluster

NAS cluster

The charts reveal why video editing professionals like working with FC clusters: stable read and write speeds, with no performance spikes or slumps. No frame goes missing, no thread fails.

IOMeter Results

Unlike AJA, which works with a single initiator only, IOMeter can create several threads. The lab researchers selected two identical servers for testing, each generating a sequence of 512 KB, 1 MB and 8 MB blocks (sequential read/write, queue depth Q=1).

At first, the testers locked the data block size at 8 MB and evaluated the behavior of both storage systems under an increasing single-thread and two-thread workload. The two-thread option for NAS worked around the performance limit of a single SMB 2.0 connection.
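The two-stream workaround can be illustrated with a hedged sketch: each thread writes sequentially to its own target (in the lab those would be separate shares or LUNs; the temp files below are stand-ins), and the aggregate throughput is what no single SMB 2.0 connection could reach on its own:

```python
import os
import tempfile
import time
from concurrent.futures import ThreadPoolExecutor


def write_stream(path, block_size=8 * 1024 * 1024, blocks=8):
    """One sequential write stream; returns the number of bytes written."""
    block = os.urandom(block_size)
    with open(path, "wb", buffering=0) as f:
        for _ in range(blocks):
            f.write(block)
        os.fsync(f.fileno())
    return block_size * blocks


def aggregate_mbps(paths, **kwargs):
    """Run one write stream per path concurrently; return combined MB/s."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=len(paths)) as pool:
        total = sum(pool.map(lambda p: write_stream(p, **kwargs), paths))
    return total / (1024 * 1024) / (time.perf_counter() - start)


if __name__ == "__main__":
    targets = []
    for _ in range(2):  # two streams, mirroring the two-thread test option
        fd, name = tempfile.mkstemp()
        os.close(fd)
        targets.append(name)
    try:
        print(f"two-stream write: {aggregate_mbps(targets):.0f} MB/s")
    finally:
        for name in targets:
            os.remove(name)
```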

The increase in workload raised overall performance, more distinctly on NAS. For one channel of a 16 Gbit/s FC connection, the result was 1575 MB/s in half-duplex mode. For NAS, the interplay of protocols matters a great deal: on SMB 2.0 (supported by the current version of RAIDIX), the load from 5 virtual machines yielded a write speed of 1870 MB/s and a read speed of 1215 MB/s.
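The 1575 MB/s figure sits close to the payload ceiling of a single 16GFC link. A quick back-of-the-envelope check, assuming the standard line rates (14.025 Gbaud with 64b/66b encoding for 16GFC; 4 × 10.3125 Gbaud, also 64b/66b, for 40GBASE-R Ethernet) and ignoring frame and protocol overhead:

```python
def payload_mbps(gbaud, payload_bits=64, coded_bits=66):
    """Payload bandwidth in MB/s after line encoding, before protocol framing."""
    return gbaud * 1e9 * payload_bits / coded_bits / 8 / 1e6


fc16 = payload_mbps(14.025)   # 16GFC serial line rate
eth40 = payload_mbps(41.25)   # 40GbE: 4 lanes x 10.3125 Gbaud
print(f"16GFC payload ceiling: {fc16:.0f} MB/s")   # 1700 MB/s
print(f"40GbE payload ceiling: {eth40:.0f} MB/s")  # 5000 MB/s
print(f"1575 MB/s is {1575 / fc16:.0%} of the 16GFC ceiling")
```

So the measured 1575 MB/s is roughly 93% of what a single 16GFC channel can carry at all, which is consistent with the flat, saturated curves on the FC charts.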

The second group of tests measured performance on two initiators with block sizes of 512 KB, 1 MB and 8 MB.

Interestingly, the data block size of the test sequence impacted write speed more in the FC storage system, whereas the NAS system showed the greater impact on reads.

Apart from the tests on network clients, the lab performed internal performance measurements on the storage itself, with its 60 NL-SAS disks of 8 TB each. The read speed from disks into RAM approached 4 GB/s, and the write speed surpassed 3 GB/s, far above the typical 1–1.5 GB/s seen in the client-side tests.

Can the workstation user get more out of it? That depends on the connection channels, the options for aggregating them, and the exchange protocols. The same is true for the storage systems themselves: performance can be scaled by moving from 8 to 16 Gbit/s FC or from 10 to 40 GbE Ethernet, and by adding storage enclosures and access channels. The user has all the tools needed to boost performance.


When applying new data storage approaches to a specific media production infrastructure, end users have to factor in the need to process large volumes of active data, the multitude of client platforms (workstations), the exchange protocols inherited from the environment, and so on.

Evidently, scalability, performance and cost-efficiency considerations urge customers toward software-defined storage. Flexible software and hardware configuration ensures a vendor-independent storage experience and steers the customer away from unwanted services and overpriced components. With readily available and affordable tools, scaling the system is easy and seamless.

Now try making a constructive change to a classic industrial storage system! In most cases, you'd better not.