IBM’s Entry into Software-Defined Storage: Elastic Storage
By now, everyone has heard the hot new buzzword: software-defined data center (SDDC). SDDC is the new data center paradigm in which everything is software-defined: network, compute, and storage. Yes, there's underlying hardware making the whole thing possible, but what do software-defined resources really do for us? The answer is simple: they abstract hardware into pooled resources that users can consume in discrete slices for cloud applications and cloud workloads.
But the real story here is IBM's venture into software-defined storage, which it calls Elastic Storage. On May 12, 2014, IBM announced a portfolio of software-defined storage products that deliver improved economics while enabling organizations to access and process any type of data, on any type of storage device, anywhere in the world. Elastic Storage offers unprecedented performance and massive scale, and it can reduce storage costs by up to 90 percent by automatically moving data onto the most economical storage device.
For example, if a company has data that's accessed infrequently, that data will be moved to tape or to low-cost disk systems for archiving. Conversely, data that's accessed regularly or that requires high-speed access will be moved to flash storage. Data redistribution is based on policy-driven rules and data analytics, and this automated data movement is where the cost savings of up to 90 percent come from.
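Policy-driven rules of this kind are expressed in GPFS's SQL-like policy language. As a sketch only (the pool names, threshold, and 30-day cutoff below are illustrative assumptions, not details from IBM's announcement), a migration rule might look like:

```
/* Illustrative rule: when the fast 'system' pool reaches 90% full,
   migrate files untouched for more than 30 days to a hypothetical
   low-cost 'nearline' pool, draining 'system' down to 70%. */
RULE 'archive_cold_data' MIGRATE FROM POOL 'system'
  THRESHOLD(90,70)
  TO POOL 'nearline'
  WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 30
```

The storage administrator writes the policy once; the file system then moves data between tiers continuously, with no changes needed in the applications reading the files.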
IBM's announcement describes it this way:

“Born in IBM Research Labs, this new, patented breakthrough technology allows enterprises to exploit – not just manage – the exploding growth of data in a variety of forms generated by countless devices, sensors, business processes, and social networks. The new storage software is ideally suited for the most data-intensive applications, which require high-speed access to massive volumes of information – from seismic data processing, risk management and financial analysis, weather modeling, and scientific research, to determining the next best action in real-time retail situations.”
Elastic Storage features and benefits:
As for performance, IBM's Elastic Storage boasts the ability to scan over 10 billion files on a single cluster in less than 45 minutes. Performance at that level has extreme implications for analytics and big data applications. Elastic Storage is built for big data performance and is based on the same technologies used in IBM's Watson computer.
Part of Elastic Storage's performance advantage comes from IBM's parallel data access technology, the General Parallel File System (GPFS). GPFS eliminates the performance bottlenecks and so-called choke points of other data access algorithms and technologies.
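To make the idea of parallel data access concrete, here is a toy Python sketch. This is not GPFS itself; it only mimics the core idea of a parallel file system, in which a file is divided into fixed-size blocks that independent workers read concurrently rather than streaming everything through a single choke point:

```python
# Toy illustration of parallel block reads (NOT GPFS code).
import os
from concurrent.futures import ThreadPoolExecutor

CHUNK = 4  # tiny "block size" for the demo


def read_chunk(path, offset, size):
    # Each worker opens its own descriptor, seeks, and reads one block,
    # the way independent clients read different stripes of a file.
    with open(path, "rb") as f:
        f.seek(offset)
        return offset, f.read(size)


def parallel_read(path, workers=4):
    size = os.path.getsize(path)
    offsets = range(0, size, CHUNK)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(lambda o: read_chunk(path, o, CHUNK), offsets)
    # Reassemble the blocks in offset order.
    return b"".join(data for _, data in sorted(parts))


# Demo: write a small file, then read it back in parallel chunks.
with open("demo.bin", "wb") as f:
    f.write(b"0123456789abcdef")
result = parallel_read("demo.bin")
```

In a real parallel file system the blocks live on many disks across many servers, so the concurrent reads add up to genuine aggregate bandwidth rather than contending for one spindle as they do in this single-file demo.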
What it all means is that you now have the same capability large companies have long enjoyed: you can access, analyze, and report on huge data sets in a fraction of the time such analyses used to take. Elastic Storage puts the data where it needs to be to best serve you and your data requirements, at a tremendous cost savings.
IBM Elastic Storage supports the OpenStack Cinder and Swift interfaces. IBM is a platinum sponsor of the OpenStack Foundation and is now its second most prolific contributor. Elastic Storage also supports other open interfaces, such as POSIX and the Hadoop file-system API.
I’ve been compensated to contribute to this program, but the opinions expressed in this post are my own and don’t necessarily represent IBM’s positions, strategies or opinions.