Sequential Data Storage. The Performance Race. Part 1
The original article was provided by the Entry Team.
Media & Entertainment is the epitome of a market that actively embraces software-defined storage. Classic data storage systems are no longer the optimal choice for high-performance processing of large volumes of sequential data.
Software-defined storage changes the industry landscape and approaches, catering to resource-intensive applications. It turns out that processing, storing and distributing sequential video is now possible without employing complex scale-out systems like NetApp FAS or EMC Isilon.
It goes without saying that the market leaders have long-standing experience in classic storage; however, the costs associated with data management in these industrial systems are often prohibitive. Direct expenses per unit of capacity tend to be unreasonably high, and system maintenance and upgrade costs add to the overhead.
It’s not all about the price
NetApp, a prominent provider of Ethernet data storage systems, is acclaimed for its WAFL (Write Anywhere File Layout) technology and RAID-DP. The WAFL file system (FS) delivers high performance for file operations as well as block access (SAN). The FS is deeply integrated with the RAID manager. RAID-DP writes data in full stripes (random operations are written sequentially) and uses a fast RAID with double parity (protection against simultaneous failure of two disks, as in RAID 6). Performance is balanced by the Flash Pool and Flash Cache technologies, which combine fast SSDs and high-capacity HDDs into hybrid configurations.
The drawback of WAFL is performance degradation as the array fills up and when data becomes highly fragmented. Although a background garbage collector works at the OS level, 10–30% of disk space has to be reserved to keep performance manageable under intensive writes. As long as read and write operations follow a similar pattern, the user won't notice any decrease in performance. Non-contiguous data allocation, however, can cause serious issues on sequential reads.
What sequential applications need
RAIDIX, an OS for data storage systems, builds on the classic block RAID approach (read-modify-write). The RAIDIX team tailored the advanced instruction sets of standard Intel Xeon processors to the implementation of Reed-Solomon codes. As a result, they developed unique fast algorithms that enable the system to survive simultaneous failure of up to 3 disks in RAID 7.3 and up to 32 disks in RAID N+M. All this without any hardware RAID controllers!
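To make the idea concrete, here is a minimal sketch of Reed-Solomon parity arithmetic over GF(2^8), the algebra behind multi-parity RAID levels such as RAID 7.3. The field polynomial, the Vandermonde-style coefficients and the function names are illustrative assumptions; this is not the RAIDIX implementation, which relies on vectorized Xeon instructions rather than plain Python loops.

```python
# Illustrative Reed-Solomon parity over GF(2^8); not the RAIDIX implementation.
GF_POLY = 0x11D  # x^8 + x^4 + x^3 + x^2 + 1, a common GF(256) reduction polynomial

def gf_mul(a: int, b: int) -> int:
    """Multiply two GF(256) elements (carry-less multiply, reduced mod GF_POLY)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= GF_POLY
        b >>= 1
    return r

def gf_pow(a: int, n: int) -> int:
    """Raise a GF(256) element to the n-th power."""
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def encode_parity(data: list[int], n_parity: int) -> list[int]:
    """Compute n_parity parity symbols: parity[j] = sum_i (2^j)^i * data[i].
    With n_parity = 3, any 3 lost symbols are recoverable (cf. RAID 7.3)."""
    parity = []
    for j in range(n_parity):
        g = gf_pow(2, j)          # row generator 2^j
        coef, acc = 1, 0
        for d in data:
            acc ^= gf_mul(coef, d)
            coef = gf_mul(coef, g)
        parity.append(acc)
    return parity
```

Row 0 degenerates to plain XOR (all its coefficients are 1), so a single lost data block is recovered by XOR-ing parity[0] with the surviving blocks; recovering two or three erasures means solving a small linear system over the same field.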
Fast checksum calculation, robust reliability and optimized disk space usage have earned RAIDIX due recognition in the media business. This vertical poses mission-critical requirements: sustained high performance and smooth operation with large volumes of data.
RAIDIX transforms standard server equipment into high-performance data storage systems. It supports SAN (FC, iSCSI, SAS, InfiniBand) and NAS (SMB, NFS, AFP) protocols. The areas of application include Enterprise, HPC, Media, Video Surveillance and other data-rich industries.
New versions bring new technologies that increase overall system performance and accelerate specific operations. To improve transactional performance, the system introduced SSD caching and random access optimization. From the QoS perspective, RAIDIX added application-level prioritization to mitigate the negative impact some applications have on data storage performance.
Users can manage RAIDIX through the GUI, the CLI or the Linux console.
Universal compatibility with commodity hardware
RAIDIX works perfectly with commercial off-the-shelf server hardware, which allows for low ownership costs and easy maintenance. The Intel Xeon E5-16xx platform is sufficient for smooth operation, providing ample headroom for RAM and peripheral devices. The system integrates into the storage network environment with an FC HBA (8–16 Gb/s) or a 10–40 GbE NIC.
Operational data volumes generated by video editing applications reach hundreds of terabytes, or tens of HDDs. High capacity comes with enterprise-grade SATA disks and their siblings, NL-SAS disks. As a rule, the disks are grouped into an external high-density JBOD enclosure with redundant I/O, power and cooling modules. A multichannel SAS 6–12 Gb/s connection between the JBOD and the head server (controller) guarantees minimal latency and high throughput.
One can achieve high performance and optimized costs by selecting specific hardware that addresses specific applications and workflows. As business tasks evolve, part of the components can be replaced, limiting essential maintenance to regular software updates.
RAIDIX ensures uninterrupted operation with its Failover Cluster (FC or 40 GbE), a high-performance, high-availability platform with no single point of failure. The dual-controller software-defined storage comprises two servers connected to a shared-access JBOD. Each controller can serve a dedicated RAID group. In the Active-Active cluster, the nodes are connected with a low-latency interface: FC, SAS 12 Gb/s or InfiniBand. The caches of both controllers are always synchronized, up to date and kept coherent. If one of the controllers goes out of service, system recovery takes mere seconds.
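A toy model can illustrate the takeover logic of such a dual-controller scheme. The heartbeat timeout and the class layout below are assumptions made for the sketch; real cluster software also handles cache synchronization, fencing and split-brain, all omitted here.

```python
# Toy sketch of active-active failover; timings and names are illustrative,
# not RAIDIX internals.
HEARTBEAT_TIMEOUT_S = 3.0  # assumed staleness threshold

class Controller:
    """One cluster node serving its own RAID group(s)."""

    def __init__(self, name: str, raid_group: str):
        self.name = name
        self.groups = {raid_group}
        self.last_peer_heartbeat = 0.0

    def receive_heartbeat(self, now: float) -> None:
        """Peer is alive: remember when we last heard from it."""
        self.last_peer_heartbeat = now

    def check_peer(self, now: float, peer_groups: set[str]) -> set[str]:
        """If the peer's heartbeat is stale, take over its RAID groups."""
        if now - self.last_peer_heartbeat > HEARTBEAT_TIMEOUT_S:
            self.groups |= peer_groups
        return self.groups
```

Because both nodes already hold a coherent copy of the cache, the surviving node can start serving the peer's RAID group as soon as the stale heartbeat is detected.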
The JBOD has two independent I/O modules with duplicated expanders. Thanks to the dual connection of NL-SAS disks, data remains available if either I/O module fails. This is an edge over SATA disks employed on the same technological platform. Besides, NL-SAS supports a greater command queue depth than SATA, enabling performance gains on the same hard drive architecture. That said, NL-SAS disks basically fall in the same price range as SATA drives of similar capacity.
Besides, the SAS protocol includes T10 CRC integrity control along the entire data path from disk to application. Without such end-to-end protection, data corruption can go unnoticed. In high-capacity data storage systems, silent disk errors (silent data corruption) are especially perilous and can lead to unexpected consequences. RAIDIX tracks and resolves hidden errors in the course of system operation.
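The guard checksum defined by T10 for protected 512-byte blocks is a plain CRC-16 with polynomial 0x8BB7. A generic bitwise implementation (a reference sketch, not RAIDIX or storage-stack code) shows how a corrupted sector is caught:

```python
# Generic CRC-16 as used for the T10 DIF guard tag
# (polynomial 0x8BB7, initial value 0, no bit reflection).
T10_DIF_POLY = 0x8BB7

def crc16_t10dif(data: bytes, crc: int = 0) -> int:
    """Compute the T10 DIF guard CRC over a block of data."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ T10_DIF_POLY) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc
```

A sector whose stored guard tag no longer matches the recomputed CRC is flagged before the bad data ever reaches the application.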
Other RAIDIX features for data loss prevention include advanced reconstruction, which optimizes read speed during data recovery, and dismissal of "short-living" disks. In the latter case, the system remembers the disks with the poorest response times and stops sending them requests for 1 second.
Data storage system in video production
Modern video production requires significant computing resources. The system must support heterogeneous (macOS/Windows/Linux) workstations that host video editing, color correction and design applications. Moreover, it is crucially important to ensure access to shared high-capacity data storage systems.
Sustained high-performance delivery of multi-stream data, free of failures and degradation, builds on an FC 8–16 Gb/s infrastructure. It is worth noting that FC SAN networks have long been the de facto standard for all major TV channels and post-production studios. By integrating the RAIDIX FC cluster into the existing environment, the user benefits from increased volumes of stored data and a significant performance leap, accomplished without much pain.
Dual-port FC HBAs (8 or 16 Gb/s) are installed on the cluster nodes. The array is presented to the SAN as a block access device (LUN) and automatically adjusts to the clients' initiators. The metadata controller defines access rights to shared resources for specific user groups.
Locations with no previous FC infrastructure are unlikely to deploy one in the future. The reason is the dramatic growth of Ethernet throughput. Using inexpensive devices that deliver 10–40 Gb/s has become common practice, and 100 Gb/s bandwidths are not far off, either. Affordable network switches, adapters and low-latency data transfer protocols are emerging on the market. All these factors open up broad horizons for network-based file-level data storage systems.
A RAIDIX-powered NAS cluster differs from an FC cluster only in its external interfaces (10–40 GbE NICs) and client communication protocols (SMB, NFS, AFP). The configuration of the server nodes and the back end is identical: a shared SAS JBOD connects to the cluster nodes via multichannel SAS HBAs, and the nodes synchronize over SAS as well.
To learn more about RAIDIX performance results, read Sequential Data Storage. The Performance Race. Part 2.