Testing Times for All-Flash Arrays
451 Research has estimated that in 2014 the market for All-Flash Arrays
across the leading vendors was a substantial $1.7 billion, a growth of
over 150% from the previous period. The capacities being shipped are also very
large: IBM, in top spot for raw capacity shipped, moved a scarcely believable
22,773 TB, and EMC in second place shipped 13,404 TB. The reasons for
the growth of Flash are not difficult to find. The thinking today is that
applications that are data-heavy or that require frequent input/output (I/O)
will do much better on Flash, and that describes pretty much every enterprise
and consumer-facing app today.
The scale of the opportunity is such that all the big names and several
of the start-ups have skin in this particular game. Small wonder, then,
that the development and testing of this product class is front and center in
the concerns of many VPs of Engineering. Our tech folks and the marketing team
put together a wonderful webinar a couple of weeks ago that zeroed in on some
of the challenges of testing AFAs, and on some of the myths; the content
deserves another look.
The first challenge is in understanding the various features and how
they differ in the context of an AFA. Consider the Flash Translation Layer
(FTL). Given the different ways Logical Block Addresses (LBAs) are presented
in HDDs and Flash, the FTL becomes essential. Similarly, there are other
Flash-specific features like garbage collection and discard. It is important
to understand what these mean for testing, especially the performance
benchmarking and testing of the AFA.
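To make the FTL concrete, here is a deliberately minimal sketch of the idea, not any vendor's implementation: Flash pages cannot be overwritten in place, so each write of an LBA lands on a fresh physical page, the FTL remaps the LBA, and the superseded page is reclaimed later by garbage collection. The class name and structure are invented for illustration.

```python
class SimpleFTL:
    """Toy Flash Translation Layer: maps LBAs to physical pages."""

    def __init__(self, num_pages):
        self.mapping = {}                      # LBA -> physical page
        self.free_pages = list(range(num_pages))
        self.stale_pages = set()               # pages holding superseded data

    def write(self, lba):
        if not self.free_pages:
            self.garbage_collect()
        if lba in self.mapping:
            # Out-of-place update: the old page becomes stale, not overwritten.
            self.stale_pages.add(self.mapping[lba])
        page = self.free_pages.pop(0)
        self.mapping[lba] = page
        return page

    def read(self, lba):
        return self.mapping.get(lba)

    def garbage_collect(self):
        # Reclaim stale pages. A real FTL erases whole blocks and must
        # migrate still-valid pages first, which is what makes GC costly.
        self.free_pages.extend(sorted(self.stale_pages))
        self.stale_pages.clear()


ftl = SimpleFTL(num_pages=4)
first = ftl.write(0)    # LBA 0 lands on one page
second = ftl.write(0)   # rewrite: remapped to a new page, old one is stale
```

Even this toy version shows why AFA performance testing differs from HDD testing: sustained write workloads eventually trigger garbage collection, so a benchmark that stops before GC kicks in measures the wrong thing.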
While on the topic of performance testing, another challenge becomes
apparent: settling on the appropriate tool to use for performance
benchmarking. There are several to consider, from the TPC benchmarks and
SPC-1/2/C/E to SPEC SFS and the like. The other question to answer here is
the choice of SSD: single-level cell or multi-level cell (SLC or MLC)?
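The standard suites above are the real measuring instruments; the toy sketch below only illustrates the shape of what they measure, a random-read IOPS figure. The file name, file size, and operation count are arbitrary assumptions, and because this reads through the OS page cache (real benchmarks use direct I/O to bypass it), the numbers it prints are not meaningful.

```python
import os
import random
import time

BLOCK = 4096        # 4 KiB, a common benchmark block size (assumed)
FILE_BLOCKS = 256   # small scratch file for illustration only


def measure_iops(path, num_ops=1000):
    """Issue random 4 KiB reads and report operations per second."""
    fd = os.open(path, os.O_RDONLY)
    try:
        start = time.perf_counter()
        for _ in range(num_ops):
            offset = random.randrange(FILE_BLOCKS) * BLOCK
            os.pread(fd, BLOCK, offset)
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    return num_ops / elapsed


# Prepare a scratch file, then measure. Cached reads, so toy numbers only.
with open("scratch.bin", "wb") as f:
    f.write(os.urandom(FILE_BLOCKS * BLOCK))
print(f"{measure_iops('scratch.bin'):.0f} IOPS (page-cached, illustrative)")
```

The gap between a sketch like this and SPC-1 or SPEC SFS (preconditioning the drive, controlling the cache, sustaining the workload past garbage collection, auditing the results) is exactly why choosing the right tool is called out as a challenge.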
In these days of Continuous Integration, when product development is
based on shorter sprints and faster delivery, automation has to play a key
role. This presents another key challenge to consider while testing AFAs:
building an appropriate automation framework for testing them.
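One common pattern for such a framework is sketched below: wrap the array's management interface behind a client class and drive feature checks through a standard test runner, so the same suite runs against every nightly build in the CI pipeline. The `FakeArrayClient` and its methods are invented stand-ins; a real framework would talk to a lab array over its actual management API.

```python
import unittest


class FakeArrayClient:
    """Stand-in for a real array's management API (hypothetical)."""

    def __init__(self):
        self.volumes = {}

    def create_volume(self, name, size_gb):
        self.volumes[name] = {"size_gb": size_gb, "mapped": False}

    def map_volume(self, name):
        self.volumes[name]["mapped"] = True

    def delete_volume(self, name):
        del self.volumes[name]


class VolumeLifecycleTest(unittest.TestCase):
    """Create -> map -> delete: the kind of check run on every build."""

    def setUp(self):
        # In CI this would connect to a real lab array instead.
        self.array = FakeArrayClient()

    def test_create_map_delete(self):
        self.array.create_volume("vol1", size_gb=10)
        self.array.map_volume("vol1")
        self.assertTrue(self.array.volumes["vol1"]["mapped"])
        self.array.delete_volume("vol1")
        self.assertNotIn("vol1", self.array.volumes)
```

Keeping the array-specific client separate from the test logic is what lets one suite cover multiple hardware configurations, which matters once sprints shorten and every build needs the full battery of checks.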
There is sharp competition out there, and one way vendors are trying
to stand apart from the crowd is by addressing specific customer niches. This
implies that the testing of the AFA product also has to keep the customer in
mind: customer-centric testing, in other words. The need is to test by
creating environments similar to those in the customer's enterprise, including
the applications they are likely to use and the test workloads they are likely
to encounter. The other necessity is to test the AFAs in production-like
hardware configurations and combinations.
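A small sketch of what a customer-centric workload profile might look like in code: a read/write ratio and a block-size mix drive a synthetic operation stream. The "OLTP-like" numbers here (70/30 reads, mostly 4 KiB) are made-up illustrative values, not measured data; a serious effort would replay traces captured from the customer's actual applications.

```python
import random


def generate_workload(profile, num_ops, seed=0):
    """Produce a list of (op, block_size) pairs matching a profile."""
    rng = random.Random(seed)  # seeded for reproducible test runs
    sizes, weights = zip(*profile["block_sizes"])
    ops = []
    for _ in range(num_ops):
        op = "read" if rng.random() < profile["read_ratio"] else "write"
        size = rng.choices(sizes, weights=weights)[0]
        ops.append((op, size))
    return ops


# Hypothetical profile: the mix is assumed, not taken from real traces.
oltp_like = {
    "read_ratio": 0.7,                          # 70/30 read/write split
    "block_sizes": [(4096, 0.8), (8192, 0.2)],  # mostly 4 KiB I/O
}

ops = generate_workload(oltp_like, num_ops=10000)
reads = sum(1 for op, _ in ops if op == "read")
```

Replaying such a stream against the array under production-like hardware configurations is what turns "customer-centric testing" from a slogan into a repeatable test plan.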
Given the vast array of products and solutions that reside in a typical
enterprise data center, it has become more or less mandatory to find common
ground in the way they work, interface, and communicate with each other. The
common language that makes this possible is the set of standards laid out by
bodies like SNIA. While most activities have several competing standards, it
is necessary, when putting out your AFA product, to test and certify it for
compliance with the most appropriate ones. This is what convinces enterprise
customers that when they deploy your AFA it will "play nice" with all their
other expensive toys.
Clearly this is not easy; there are a lot of moving parts, and some of
them are poorly understood in general. I would highly recommend heading over
to the recording of the webinar I mentioned earlier in the piece. The guys
have done a decent job of explaining some of the complexity involved and of
offering some helpful solutions. It will be worth it – I'm sure of it.
To know more, email: blog@calsoftinc.com
Anupam Bhide | Calsoft Inc.
