Archive
Atlantis Computing’s Software Defined Storage, Hyperconverged Infrastructure, and Data Center Design
Atlantis Computing’s Hyperscale appliance sports an all-flash array for storage and adds compute and virtualization to your remote office/branch office (ROBO) sites without the need for on-site IT staff. Its Hyperscale solutions offer your company:
- Data reduction
- I/O acceleration
- Data management
- Data mobility
- Data protection
- Unified storage
Its “turnkey” appliances offer simplified setup with enterprise-class all-flash storage that anyone in your ROBO can set up in minutes, at a lower cost than competing hyperconverged solutions. Starting with a two-node, 4 TB appliance, your Atlantis Computing-based solution can grow with you. You can read my article on ZDNet about Atlantis Computing’s latest announcements and listen to the podcast.
To find out more about how Atlantis Computing’s Hyperscale solutions can help your business, check out an in-depth article complete with supporting statistics and data: From the Field: Software Defined Storage and Hyperconverged Infrastructure in 2016.
Atlantis Computing is also offering a free ebook that gives you a look into the building of a modern data center.
Learn how agile IT principles and emerging data center services, such as software-defined storage and a hyperconverged infrastructure, will play an important role in meeting increasing business demands.
Sign up to reserve your copy.
Other free resources from Atlantis Computing:
DeepStorage Report on Atlantis
Atlantis Computing also helps companies set up and manage virtual desktop infrastructure (VDI) implementations. If you want your VDI to work like you’ve dreamed it would without spending your company’s retirement fund to do it, check out Atlantis’ solutions for VDI.
Disclaimer: This is a non-sponsored post.
Sponsorship: If you would like to sponsor a post or have me review a product, contact me via Twitter @kenhess.
IBM’s Entry into Software-Defined Storage: Elastic Storage
By now, everyone has heard the hot new buzzword: software-defined data center (SDDC). SDDC is the new data center paradigm where everything is software-defined: network, compute, and storage. Yes, there’s underlying hardware making the whole thing possible, but what do software-defined resources really do for us? The answer is simple: they abstract hardware into pooled resources that users can consume in discrete slices for cloud applications and cloud workloads.
But the real story here is IBM’s venture into software-defined storage, which it calls Elastic Storage. On May 12, 2014, IBM announced a portfolio of software-defined storage products that deliver improved economics while enabling organizations to access and process any type of data, on any type of storage device, anywhere in the world. Elastic Storage offers unprecedented performance, infinite scale, and is capable of reducing storage costs up to 90 percent by automatically moving data onto the most economical storage device.
For example, if a company has data that’s accessed infrequently, that data will be moved to tape or to low-cost disk systems for archiving. Alternatively, data that’s accessed regularly or that requires high-speed access will be moved to flash storage. Data redistribution is based on policy-driven rules and data analytics, and this type of automated data movement can yield cost savings of up to 90 percent.
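To make the idea of policy-driven placement concrete, here is a minimal Python sketch of what such a rule might look like. It is purely illustrative: the tier names and age thresholds are my assumptions, not IBM’s actual policy engine or its rules.

```python
from datetime import datetime, timedelta

# Illustrative placement rule: hot data on flash, warm data on low-cost
# disk, cold data on tape. Tiers and thresholds are assumed for the example.
def choose_tier(last_access: datetime, now: datetime) -> str:
    age = now - last_access
    if age < timedelta(days=7):
        return "flash"   # accessed this week: fastest tier
    if age < timedelta(days=90):
        return "disk"    # accessed this quarter: low-cost disk
    return "tape"        # rarely touched: cheapest, archival tier

now = datetime.utcnow()
print(choose_tier(now - timedelta(days=2), now))    # -> flash
print(choose_tier(now - timedelta(days=200), now))  # -> tape
```

A real policy engine would also weigh analytics on access patterns, not just raw age, but the decision logic follows this same shape.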
“Born in IBM Research Labs, this new, patented breakthrough technology allows enterprises to exploit – not just manage – the exploding growth of data in a variety of forms generated by countless devices, sensors, business processes, and social networks. The new storage software is ideally suited for the most data-intensive applications, which require high-speed access to massive volumes of information – from seismic data processing, risk management and financial analysis, weather modeling, and scientific research, to determining the next best action in real-time retail situations.”
Elastic Storage features and benefits:
- Enhanced security – Encrypts data at rest to protect it from security breaches, unauthorized access, and the loss, theft, or improper disposal of disks, and supports compliance with HIPAA, Sarbanes-Oxley, EU, and other national data privacy laws.
- Extreme performance – Server-side Elastic Storage flash caches speed I/O performance up to 6X, benefiting application performance while preserving all the manageability benefits of shared storage.
- Save acquisition costs – Uses standard servers and storage instead of expensive, special-purpose hardware.
- Limitless elastic data scaling – Scale out with relatively inexpensive standard hardware, while maintaining world-class storage management.
- Increase resource and operational efficiency – Pools redundant isolated resources and optimizes utilization.
- Achieve greater IT agility – Quickly reacts, provisions and redeploys resources in response to new requirements.
- Intelligent resource utilization and automated management – Automated, policy-driven management of storage reduces storage costs up to 90% and drives operational efficiencies.
- Empower geographically distributed workflows – Places critical data close to everyone and everything that needs it, accelerating schedules and time to market.
As for performance, IBM’s Elastic Storage boasts the ability to scan over 10 billion files on a single cluster in less than 45 minutes, which works out to roughly 3.7 million files per second. That kind of performance has extreme implications for analytics and big data applications. IBM’s Elastic Storage solution is built for big data performance and is based on the same technologies used in the Watson computer.
“Elastic Storage offers unprecedented performance, infinite scale, and is capable of reducing storage costs up to 90 percent by automatically moving data onto the most economical storage device.”
Part of Elastic Storage’s performance advantage comes from IBM’s parallel data access technology, the General Parallel File System (GPFS). It eliminates the performance bottlenecks, the so-called “choke points,” of other data access algorithms and technologies.
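To illustrate the idea behind parallel data access, here is a conceptual Python sketch of a client reading striped blocks from several servers at once. The `SERVERS` list and `read_block` function are hypothetical stand-ins; GPFS’s real implementation stripes data at the file system level over the network and is far more sophisticated.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical cluster nodes; a real GPFS client issues network reads for
# striped file system blocks rather than calling a local function.
SERVERS = ["node1", "node2", "node3", "node4"]

def read_block(block_id: int) -> bytes:
    server = SERVERS[block_id % len(SERVERS)]  # round-robin striping
    return f"<block {block_id} from {server}>".encode()

def parallel_read(num_blocks: int) -> bytes:
    # Fan reads out across all nodes at once instead of funneling every
    # request through a single server (the classic "choke point").
    with ThreadPoolExecutor(max_workers=len(SERVERS)) as pool:
        return b"".join(pool.map(read_block, range(num_blocks)))

print(parallel_read(8).decode())
```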
What it all means is that you now have the same capability large companies have: to access, analyze, and report on huge data sets in a fraction of the time those analyses used to take. Elastic Storage puts the data where it needs to be to best serve you and your data requirements, at a tremendous cost savings.
IBM Elastic Storage supports the OpenStack Cinder and Swift interfaces. IBM is a platinum sponsor of the OpenStack Foundation and is now its second most prolific contributor. Elastic Storage also supports other open APIs, such as POSIX and Hadoop.
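As an example of what the Swift interface makes possible, the sketch below uses the standard python-swiftclient library to store and retrieve an object. The endpoint and credentials are placeholders; the point is that Swift-compatible storage can be driven through the same OpenStack API calls as any other Swift deployment.

```python
from swiftclient.client import Connection

# Hypothetical Keystone endpoint and credentials; substitute your own.
conn = Connection(
    authurl="https://keystone.example.com:5000/v3",
    user="demo",
    key="secret",
    auth_version="3",
    os_options={"project_name": "demo",
                "user_domain_name": "Default",
                "project_domain_name": "Default"},
)

conn.put_container("reports")                         # create a container
conn.put_object("reports", "q1.csv",                  # upload an object
                contents=b"region,revenue\nwest,42\n",
                content_type="text/csv")
headers, body = conn.get_object("reports", "q1.csv")  # read it back
print(body.decode())
```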
This post was written as part of the IBM for Midsize Business program, which provides midsize businesses with the tools, expertise and solutions they need to become engines of a smarter planet.
I’ve been compensated to contribute to this program, but the opinions expressed in this post are my own and don’t necessarily represent IBM’s positions, strategies or opinions.
Storage Trends and the Future of Storage
Do you see the infographic to the right? You only see part of it. How much information can you glean from what you see now? Not much, right? It’s the same with your storage: you’re only getting part of the story because you’re only seeing part of the picture. That’s disturbing enough here, but think about what you’re not seeing from your current storage tools.
What do you know about the storage in your company or organization? One thing you know for sure, without much investigation on your part, is that a lot of the storage you’ve paid for is wasted. Your money’s wasted. Your capacity’s wasted. And all the while, your technology staff is begging for more storage because they’re running out of space, or at least they think they are.
Much of this isn’t your staff’s fault. They, too, are only seeing part of the picture. It’s your storage management tools, your storage management strategy, and your storage technology that are causing most of your space waste problems.
But waste is also only part of the picture. How will you manage the rapidly growing volume of data with which you must contend? How efficiently can you retrieve it? Are you still relying on tape and traditional data recovery technologies?
And how about disaster recovery? How many tapes and restore points will you have to manage in case of a major outage? Have you estimated your mean time to restore (MTTR)?
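If you haven’t, a rough estimate is easy to produce. The sketch below walks through the arithmetic with illustrative numbers; every figure is an assumption you would replace with your own measurements.

```python
# Back-of-the-envelope MTTR for a tape-based restore. All figures here are
# illustrative assumptions, not vendor numbers.
data_to_restore_tb = 20      # data lost in the outage
restore_rate_mb_s = 160      # sustained tape restore throughput
tapes_needed = 14            # tapes spanning the restore set
tape_swap_minutes = 3        # operator time per tape change

transfer_hours = data_to_restore_tb * 1_000_000 / restore_rate_mb_s / 3600
swap_hours = tapes_needed * tape_swap_minutes / 60
print(f"Estimated MTTR: {transfer_hours + swap_hours:.1f} hours")  # ~35.4
```

A day and a half of downtime from a single outage is exactly the kind of number that makes disk- and replication-based recovery worth a look.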
There’s a way to manage your storage environment efficiently, with less waste, lower power consumption, and less sprawl. Check out the full Top Trends in Storage infographic from IBM to see the solution and get the whole story.
IBM offers a range of Storwize products from Entry to Enterprise.
Five of the many outstanding features of the Storwize family of products are:
- FlashCopy – Make up to 2,040 copies of your data.
- Remote Mirror Function – Copy data to a remote location for disaster recovery.
- Data Volume Management – Real-time compression takes place on data as it is written to disk (see the capacity sketch after this list).
- Visual enhancements – You can easily view your total storage capacity, how much you’re using, how much is free, and how much space compression is saving you.
- Lower costs – Easy to manage storage that is space, time, and cost efficient.
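To see why that compression matters for costs, consider the simple capacity math behind it. The savings ratio below is an assumed figure for illustration, not an IBM specification.

```python
# Capacity math behind real-time compression. The 60% savings ratio is an
# assumed figure for the example, not an IBM claim.
raw_capacity_tb = 50
compression_savings = 0.60   # fraction of written data eliminated

effective_capacity_tb = raw_capacity_tb / (1 - compression_savings)
print(f"{raw_capacity_tb} TB of physical capacity holds roughly "
      f"{effective_capacity_tb:.0f} TB of logical data")  # -> 125 TB
```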
The Storwize product line is part of IBM’s Smarter Storage for Smarter Computing solution.
This post was written as part of the IBM for Midsize Business program, which provides midsize businesses with the tools, expertise and solutions they need to become engines of a smarter planet.
I’ve been compensated to contribute to this program, but the opinions expressed in this post are my own and don’t necessarily represent IBM’s positions, strategies or opinions.