July 9, 2020 By Eric Herzog 4 min read

Data is the one area where every company has an equal opportunity to be great.

Knowing your own data can give your company a clear competitive advantage.[1] Yet, according to a survey conducted by Forrester with global IT, data, and line-of-business decision makers, more than half of the respondents admitted they simply don’t know what their AI data needs are.[2] If business leaders do not fully understand their data needs, they can’t be expected to understand their infrastructure needs.

Mike Leone, Senior Analyst at Enterprise Strategy Group covering Data Platforms, Analytics, and AI, states: “Artificial intelligence is poised to revolutionize business around the globe. While it’s easy to get lost in the eye-opening use cases where AI is being applied today, it all starts with a foundational infrastructure that can satisfy the extensive list of AI requirements…”

There is no AI without information architecture (IA). But how do organizations turn their AI aspirations into business outcomes? IBM calls the pathway to leveraging the power of AI the “AI journey”. The AI journey starts by accelerating your ability to collect and organize data, continues by gaining deeper insights through AI-driven data analysis, and then infuses your entire enterprise with these capabilities and insights.

Today’s IBM Storage announcements are about enabling enterprises of all types and sizes to build simple, high-performance, cost-efficient, AI-optimized solutions that can help them gain greater insight, value, and competitive advantage from their data.

Example of one ESS 5000 configuration

First up is the new IBM Elastic Storage System (ESS) 5000 (GA: August 7, 2020) powered by IBM Spectrum Scale, a market leader. The ESS 5000 is designed for data lakes with 55 GB/s performance in a single 8-disk enclosure node, scalable to yottabyte configurations. The ESS 5000 is ideal for data collection and long-term storage capacity. The IBM ESS 3000 — introduced in October 2019 — and also powered by IBM Spectrum Scale, is a 2u building block with 40 GB/s performance designed to meet the challenge of analyzing vast amounts of data. Both systems are optimized for different stages of your AI journey.
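To put the quoted throughput figures in perspective, here is a quick back-of-envelope sketch of how long it would take to move a given data-lake volume at those sustained rates. The figures are illustrative only; real-world throughput depends on workload, network, and configuration.

```python
# Back-of-envelope ingest time at the quoted sustained rates:
# 55 GB/s for an ESS 5000 node, 40 GB/s for an ESS 3000 building block.

def ingest_hours(volume_tb: float, throughput_gb_s: float) -> float:
    """Hours to move volume_tb terabytes at a sustained throughput (GB/s)."""
    seconds = (volume_tb * 1000) / throughput_gb_s  # 1 TB = 1000 GB (decimal)
    return seconds / 3600

# Moving 1 PB (1000 TB) at each rate:
print(f"ESS 5000 @ 55 GB/s: {ingest_hours(1000, 55):.1f} h")
print(f"ESS 3000 @ 40 GB/s: {ingest_hours(1000, 40):.1f} h")
```

At these rates, a petabyte moves in a workday rather than a week, which is the practical difference the data-collection stage of the AI journey turns on.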

“The new IBM ESS 5000 is designed to bring market-leading performance using IBM’s latest POWER9 processors. With increased disk and enclosure capacities, together with the strength of IBM Spectrum Scale, CSI is pleased to recommend this to our clients looking to scale-out file storage for efficient data lakes.” — Paul Cameron, Vice President, New Business Sales, CSI.

Highly flexible, massively scalable ESS 5000 systems aren’t the only IBM Storage solutions that provide an essential foundation for optimized AI infrastructures. IBM Cloud Object Storage (COS), another market leader, is designed to provide highly cost-efficient on-premises and hybrid cloud object storage from any network location. Today’s innovations enhance its traditional roles of backup and archive while extending IBM Cloud Object Storage toward faster AI data collection and integration with high-performance AI, big data, and HPC workflows.

The IBM COS (GA: August 7, 2020) storage engine has gone through a complete modernization. This upgrade is designed to increase system performance to 55 GB/s in a 12-node configuration, which can improve reads by 300% and writes by 150%, depending on object size. These updates make IBM COS an even better solution for collecting and accessing object data. Additionally, IBM COS will support a new class of high-capacity Shingled Magnetic Recording (SMR) disk drives, providing 1.9 PB in a 4U disk enclosure. With new AI acceleration (GA: 4Q20), IBM Spectrum Scale will be able to move data from object storage while minimizing duplicate copies of data.
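The percentage figures are easy to misread, so here is a small sketch of the arithmetic: a “300% improvement” is taken here to mean the new rate is four times the baseline (baseline plus three times baseline), and “+150%” means 2.5x. The 10 GB/s baseline below is a placeholder for illustration, not a published figure.

```python
# Interpreting the announced improvements: +300% reads = 4x baseline,
# +150% writes = 2.5x baseline. Baseline value is a placeholder.

def improved(baseline_gb_s: float, pct_improvement: float) -> float:
    """Throughput after a percentage improvement over a baseline."""
    return baseline_gb_s * (1 + pct_improvement / 100)

print(f"reads:  {improved(10, 300):.1f} GB/s")  # 4.0x the baseline
print(f"writes: {improved(10, 150):.1f} GB/s")  # 2.5x the baseline
print(f"per node: {55 / 12:.2f} GB/s in the 12-node configuration")
```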

These innovations can help enterprises leverage the high durability and lower costs of IBM COS to build optimized AI infrastructures. To gain a deeper understanding of the data assets themselves, organizations can turn to IBM Spectrum Discover. This software is designed to provide file and object data cataloging and indexing that connects not only to heterogeneous storage systems but also to IBM Watson solutions and IBM Cloud Pak for Data.

IBM Spectrum Discover (GA: 4Q20) will be incorporated into Red Hat OpenShift environments.[3] With this enhancement, IBM Spectrum Discover hybrid multicloud deployments are designed to be portable and more flexible across clouds and in any environment supported by OpenShift. Plus, additions to the IBM Spectrum Discover policy engine (GA: September 7, 2020) can help enable support for moving or copying data to optimize storage efficiency.[4]
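To make the policy-engine idea concrete, here is a generic illustration of metadata-driven data placement: match catalog records against a rule and decide a target tier. This is not IBM Spectrum Discover’s actual policy syntax; the paths, fields, and tier names below are hypothetical.

```python
# Generic illustration of metadata-driven tiering, in the spirit of a
# cataloging policy engine. NOT Spectrum Discover's real policy syntax;
# all names and fields here are made up for the example.

from datetime import date

catalog = [
    {"path": "/lake/raw/img_001.tif", "size_gb": 12.0, "last_access": date(2019, 3, 1)},
    {"path": "/lake/hot/model.ckpt",  "size_gb": 4.5,  "last_access": date(2020, 7, 1)},
]

def tier_for(record: dict, cold_cutoff: date = date(2020, 1, 1)) -> str:
    """Send long-untouched files to object storage; keep the rest on flash."""
    return "object-archive" if record["last_access"] < cold_cutoff else "flash"

plan = {r["path"]: tier_for(r) for r in catalog}
print(plan)
```

The real value of a catalog like this is that placement decisions run against metadata alone, without scanning the underlying storage systems.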

Ata Turk, VP, Infrastructure Service Delivery at State Street Bank, states: “We’ve had the opportunity to do an early evaluation of the updated software for IBM Cloud Object Storage and saw results that significantly improved access to our IBM COS system. We expect this to enhance our user experience and allow us to save storage resources in the future. We have a growing capacity of PBs of IBM Cloud Object Storage spread across 3 sites with a single geo-protected copy of data. We love the reliability and the fact that we don’t have to worry about integrity or the availability of the data once it goes into IBM COS. The system is easy to use, manage, and upgrade.”

The journey to AI begins with the right information architecture (IA). To do it right and beat your competition, you need the right tools, advice, and technologies. Just as importantly, you need a partner committed to ongoing innovation and industry leadership. The announcements today demonstrate that IBM Storage is the best choice for your AI journey.

Learn more about IBM Storage for Data and AI.

Disclaimer: Preview announcements provide insight into IBM plans and directions. General availability, prices, ordering information, and terms and conditions will be provided when the product is announced.  

IBM’s statements regarding its plans, directions, and intent are subject to change or withdrawal without notice at IBM’s sole discretion. Information regarding potential future products is intended to outline our general product direction and it should not be relied on in making a purchasing decision. The information mentioned regarding potential future products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. Information about potential future products may not be incorporated into any contract. The development, release, and timing of any future features or functionality described for our products remains at our sole discretion.
