
The gigabytes equal performance illusion

GB Labs' Ben Pearce discusses why it's so important for content owners to choose their storage systems wisely

I’ve been working with network-attached storage (NAS) for nearly 20 years and still see too many people buy what is clearly the wrong storage for their needs. Their decision is often not based solely on perceived cost savings; more usually it stems from not fully understanding that the operative word is “shared”, meaning that the storage is there to serve multiple files to multiple users at once. Treating storage as a single-purpose appliance has often proved short-sighted.

What I’m saying is that measuring peak performance and IOPS (input/output operations per second) as standalone criteria for purchasing a storage system is a mistake, because those figures are often misleading. A single peak-performance figure provided by a manufacturer does not reflect the totality of what a storage system can deliver to the specific facility in which it is deployed. A high peak-performance figure may sound impressive, but it doesn’t account for the multiple concurrent file accesses that take place in a typical shared storage environment; such a figure is more applicable to direct-attached storage (DAS). In short, IOPS are almost meaningless unless you know exactly what parameters were configured in the tests used to arrive at that figure.
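To see why a headline IOPS number means so little on its own, consider how directly it depends on the block size used in the test. The sketch below uses entirely hypothetical figures, purely for illustration; no real product is being described.

```python
# A minimal sketch of why a single IOPS figure says little on its own: the same
# array can post wildly different numbers depending on the block size and access
# pattern used in the test. All figures here are hypothetical.

def throughput_mib_s(iops: int, block_size_kib: int) -> float:
    """Throughput implied by an IOPS figure at a given block size."""
    return iops * block_size_kib / 1024  # KiB/s -> MiB/s

# A spec sheet might quote 500,000 IOPS, measured with tiny 4 KiB random reads...
print(throughput_mib_s(500_000, 4))    # ~1953 MiB/s
# ...while a media workflow moving large blocks needs far fewer, much bigger I/Os.
print(throughput_mib_s(2_000, 1024))   # ~2000 MiB/s from only 2,000 IOPS
```

Unless you also know the block size, queue depth and read/write mix behind the published number, the number itself tells you very little.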

It’s too easy to arrive at misleading figures that promise false economies. The issue is that sales collateral emphasising peak performance and IOPS figures is so prevalent that it distorts the truth and leads, in many cases, to unhappy users when they subsequently find that the claims on which they based their purchase decision bear little resemblance to real-world performance.

Back to basics

Hard drives come in many shapes and sizes, from many different manufacturers, and each manufacturer chooses what to adopt and promote from numerous storage model types and technologies. 

What many end users don’t always grasp is that storage capacity alone is not a good measure of a system’s ability to perform the tasks they need it to do, or to eliminate the bottlenecks they are buying it to fix.

It’s common for people to want, indeed expect, high, 24-hour duty-cycle performance from a high-density RAID. But to achieve that, you need a very specific type of hard drive that comes at a higher cost than the consumer-grade drives many assume will be “good enough”. And, as with many cheaper, “good enough” options, it soon costs even more to put right after the fact.

Thinking outside the capacity

There are many aspects other than capacity that affect a storage system’s performance and reliability. The shared backplane that connects the RAID sets inside a NAS, the storage interface, the number of paths, and the quality of the host bus adapter (HBA) all play key roles. Yet the benefits of getting these areas right are often overlooked in favour of focussing solely on greater capacity or lower cost. Again, too many people treat price-per-terabyte as the sole purchasing parameter rather than taking a holistic view of what a system can do when it is designed, configured and deployed to exploit its full range of capabilities. To get optimal performance, all of those aspects must work together.
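As a rough illustration of the point, here is a back-of-the-envelope sketch, using hypothetical numbers, of how the path to the drives, rather than their combined capability, can set the ceiling:

```python
# Hypothetical figures only: how the interface and number of paths, rather than
# the drives themselves, can cap what a RAID actually delivers.

DRIVES = 16
PER_DRIVE_MB_S = 250        # assumed sustained throughput of one drive
PATHS = 1
PATH_GBIT_S = 12            # e.g. a single 12 Gb/s lane, ignoring protocol overhead

aggregate_drive_mb_s = DRIVES * PER_DRIVE_MB_S          # 4000 MB/s on paper
path_mb_s = PATHS * PATH_GBIT_S * 1000 / 8              # ~1500 MB/s down one lane

# Whichever is narrower wins, no matter how many terabytes sit behind it.
print(f"Drives could supply {aggregate_drive_mb_s} MB/s, "
      f"the path can carry {path_mb_s:.0f} MB/s, "
      f"so the system delivers about {min(aggregate_drive_mb_s, path_mb_s):.0f} MB/s.")
```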

Think of it this way: a race car fitted with a very powerful engine but, to save money, a flimsy chassis, a standard gearbox and high-street tyres is likely to spend more time in the garage than on the track, let alone be competitive in any race.

The best way around this somewhat short-sighted decision-making is to fully understand the potential ramifications of choosing the least-cost option. Ask detailed questions.

For example: Is the RAID implemented in hardware or in software? There are advantages and disadvantages to either approach, so it’s important to find out which will work best for what you want to do. In some cases, a hybrid of hardware and software RAID may be the most appropriate option, but too many people find this out only after they’ve installed a relatively cheap storage system that has little or no chance of delivering what they need. And all because they didn’t ask anything other than, “How much per terabyte?”
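If you want a quick way to start answering that question yourself, the following minimal sketch, which assumes a Linux-based system you can log into, checks whether the md software RAID driver is managing any arrays. An empty result doesn’t prove the RAID is in hardware, but any entry here means at least part of it is software-defined.

```python
# A minimal sketch, assuming a Linux host: the md software RAID driver lists any
# arrays it manages in /proc/mdstat.

from pathlib import Path

mdstat = Path("/proc/mdstat")
if mdstat.exists():
    arrays = [line.split()[0] for line in mdstat.read_text().splitlines()
              if line.startswith("md")]
    if arrays:
        print("Software RAID (md) arrays found:", ", ".join(arrays))
    else:
        print("md driver present, but no software RAID arrays are active.")
else:
    print("/proc/mdstat not present; any RAID here is not managed by the md driver.")
```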

The OS is everything

I’ve been discussing the questions to ask and the choices to make when purchasing a NAS, but I want to identify the main differentiator of any NAS: the operating system (OS) on which it runs.

No, I’m not talking about Windows or macOS. A NAS running on powerful hardware quickly reaches and exceeds the limits of those operating systems. Off-the-shelf operating systems are not suitable platforms for any professional shared storage system.

Nevertheless, the vast majority of NAS storage systems on the market today use generic, off-the-shelf (OTS) operating systems that purport to turn hardware servers into functional NAS. The problem with that approach is that those operating systems must cater for a wide range of hardware configurations, good and bad, which means they are specifically tuned for none of them, and even the hardware they can run on demands a great deal of compromise in many important areas.

Those faux NAS systems are “kind of” functional, but there are still major issues with them. For one, they’re unstable, and because they are designed around the lowest common denominator they cannot take full advantage of whatever hardware they run on, no matter how good that hardware is. NAS performance that looks good in the marketing department’s printed specifications tends to fall short of real-world performance once the system is deployed. And having spent the money on a new NAS, the buyer just can’t understand why there’s been little or no improvement.

And after that money’s been spent, the boss is going to want to see those improvements, too. That’s why it’s vital to seek out hardware that can reach its full potential by working seamlessly with specially developed OS software that is highly tuned to achieve peak performance and functionality. Every component of a system must be perfectly matched and finely tuned. Hardware, software, OS…everything.

Testing is key

It amazes me that most storage system suppliers do not test their systems in high bandwidth editing and content creation environments with multiple workstations. It’s true. Most don’t.

And that’s a problem, because it’s precisely those high-end editing and creation environments in which many of these systems will be expected to perform. Yet it is all too common for storage manufacturers simply to take the highest peak bandwidth or IOPS figure they can “in theory” achieve and publish it as their benchmark for network storage performance.

They then use that figure in the marketplace, claiming that you can just divide their figure by the number of workstations to calculate the performance that will be simultaneously delivered to each, which is patently absurd. Storage just doesn’t work like that. 

I know I risk repeating myself, but it’s a fact worth reinforcing: peak performance figures may look good on paper and sound compelling from a salesperson, but they usually only tell you how a system is theorised to perform in a single scenario that probably hasn’t even been tested. What they don’t tell you is how the system will actually perform under the load of multiple machines, often around the clock, which is exactly what the real world requires.

And it’s critical to understand that distinction. The very high bandwidths we’re talking about normally require at least a couple of workstations or servers to test and confirm performance figures, but most manufacturers use speed-testing software that reads only one file at a time. It also writes that same file, which is easily cached by the storage and therefore skews the results. This is why GB Labs always tests on real-world edit suites with real media streams; not just to generate the highest figure we can get away with for marketing purposes, but to ensure the honesty and integrity of our performance figures.
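To make the distinction concrete, here is a rough sketch, not a description of GB Labs’ own test suite, of the difference between those single-stream tests and a read pattern closer to several edit suites pulling different media at once. The file paths are placeholders, and on a real run the files need to be larger than any cache in the chain, or the caches flushed between runs, to avoid exactly the skew described above.

```python
# A rough multi-stream read sketch (hypothetical paths; not a vendor benchmark).
import time
from concurrent.futures import ThreadPoolExecutor

CHUNK = 8 * 1024 * 1024  # 8 MiB reads, roughly the size of video I/O

def read_file(path: str) -> int:
    """Read one file end to end and return the number of bytes read."""
    total = 0
    with open(path, "rb", buffering=0) as f:
        while True:
            data = f.read(CHUNK)
            if not data:
                return total
            total += len(data)

def aggregate_read_mb_s(paths: list[str]) -> float:
    """Read several distinct files concurrently and report combined throughput."""
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=len(paths)) as pool:
        total_bytes = sum(pool.map(read_file, paths))
    return total_bytes / (time.monotonic() - start) / 1e6

# Eight hypothetical clips on a mounted share, read as eight concurrent streams:
clips = [f"/mnt/nas/project/clip{i}.mov" for i in range(1, 9)]
print(f"Aggregate multi-stream read: {aggregate_read_mb_s(clips):.0f} MB/s")
```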

Delivering ‘real world performance’ to a network

It’s important to understand that powerful storage in a server room only equates to powerful network performance if that performance can actually be delivered across the network. Yes, eliminating bottlenecks by utilising the latest network protocols, connectivity, and distribution methods is important, but that’s not something most NAS systems enable you to do.

There are, however, a few exceptions. What a good NAS will do is control the delivery of data by automatically making intelligent decisions on who gets allocated what portion of the overall bandwidth. Sophisticated controls like this are rare, but they are increasingly necessary to ensure Quality of Service (QoS) to the many users on the network.

Moreover, finding a system with the ability to dynamically adapt to usage and deliver 100 percent of the available bandwidth narrows the field of potential NAS solutions even further.
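To illustrate the kind of decision-making involved, here is a toy sketch of a fair-share allocator: every active client is guaranteed an equal slice of the pipe, and whatever a lighter user doesn’t need is handed back to those who do, so the full bandwidth stays in play. It is purely conceptual and not a description of any particular vendor’s implementation.

```python
# A toy fair-share allocator: hypothetical figures, for illustration only.

def allocate(total_mb_s: float, demand_mb_s: dict[str, float]) -> dict[str, float]:
    """Give each client up to an equal share, then redistribute whatever lighter
    clients leave unused to those still asking for more."""
    grants = {client: 0.0 for client in demand_mb_s}
    remaining = total_mb_s
    unsatisfied = {c for c, d in demand_mb_s.items() if d > 0}
    while unsatisfied and remaining > 1e-9:
        share = remaining / len(unsatisfied)
        for c in list(unsatisfied):
            give = min(share, demand_mb_s[c] - grants[c])
            grants[c] += give
            remaining -= give
            if grants[c] >= demand_mb_s[c]:
                unsatisfied.discard(c)
    return grants

# Four edit suites with very different appetites sharing a 2000 MB/s pipe:
print(allocate(2000, {"suite1": 1200, "suite2": 900, "suite3": 150, "suite4": 50}))
# -> suite1 and suite2 each settle at 900, suite3 gets 150, suite4 gets 50
```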

Therefore, choose wisely

All of the above are just some of the reasons to take time to carefully analyse the storage system investment you are about to make. The acronym ‘NAS’ is a broad term that is rather too loosely used to cover many different grades of technology offerings in the market, many of which, in truth, have little or nothing to do with true NAS. As I’ve said, limiting your research to how much it will cost per TB is short-sighted and will end in disappointment, not to mention wasted time and money. 

So research your NAS options to determine all of what you need to deliver for your business, not just in terms of capacity to store additional assets, but how that storage can streamline your business whilst simultaneously providing the best and most efficient experience for multiple users, both now and in the future. 

Most of all, make doubly sure that each and every component is highly tuned to the others. It’s the only way to get what you paid for.