The New HPE GreenLake for Block Storage – Powered by Alletra Storage MP.

Now that we’ve covered the basic Alletra Storage MP hardware architecture, what is the new Block storage offering from HPE?

It is the next evolution of HPE storage, combining novel approaches with certain tried-and-true elements and concepts from our existing systems.

For the people who love looking at boxes, here’s a photo of one of the new systems.

Bezel designers are the unsung heroes of storage. Without them everything would look the same.

Some Current Challenges Customers Face

Until now, if you just wanted more capacity, you’d simply add more storage. That’s easy with most storage solutions.

But what if you only wanted more performance and your media wasn’t performance-saturated?

In this scenario, the typical storage industry option would be to replace the current controllers with faster ones. But what if your performance needs exceed what a couple of controllers can do? Traditionally, you’d have to buy a whole new system, even if you don’t need more capacity (the classic “orphan capacity” problem as I like to call it).

Or, in the case of “high-end” monolithic storage, you could have more than 2 controllers – but they’d still have to be identical, with no flexibility for dissimilar controllers, odd numbers, etc. And in most cases you’d still need to add more capacity along with the extra controllers.

What about software-defined storage? That’s typically not disaggregated (it’s more similar to how most HCI works). Each server owns its storage, so there’s no good way to grow capacity or performance separately, and if you lose a node, you also lose storage – not the behavior we wanted.

With the new systems, HPE tackled this problem, plus several more.

What’s being carried over from before?

The advanced user-facing data services and APIs of the 3PAR/Primera/Alletra 9000 line are retained in the new Block MP systems. So, things like Active-Active access to synchronously replicated volumes from both sites will be available (Active Peer Persistence). Migrations from older systems will be easy and non-disruptive (we are initially aiming this system at existing smaller 3PAR customers). Plus conveniences like being able to granularly add capacity in 2-drive increments, and, of course, the ability to be in a Peer Persistence relationship with an older system.

Some other concepts (no porting, though – this was all new code) like significantly enhanced compression heuristics and redirect-on-write snapshots were carried over from Nimble, as were always doing full-stripe writes and extremely advanced checksums that protect against lost writes, misplaced writes and misdirected reads.
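
To make the checksum idea a bit more concrete, here is a minimal Python sketch (my own illustration, not HPE code – all names and structures below are hypothetical) of how self-describing block checksums can catch those failure modes: each block is stored with its intended address and a generation number alongside its checksum, and all three are verified on read.

```python
import hashlib
from dataclasses import dataclass


@dataclass
class StoredBlock:
    data: bytes
    lba: int          # logical block address recorded at write time
    generation: int   # incremented on every overwrite of this LBA
    checksum: str     # covers data + lba + generation


def _checksum(data: bytes, lba: int, generation: int) -> str:
    h = hashlib.sha256()
    h.update(data)
    h.update(lba.to_bytes(8, "little"))
    h.update(generation.to_bytes(8, "little"))
    return h.hexdigest()


def write_block(store: dict, lba: int, data: bytes) -> None:
    generation = store[lba].generation + 1 if lba in store else 0
    store[lba] = StoredBlock(data, lba, generation, _checksum(data, lba, generation))


def read_block(store: dict, lba: int, expected_generation: int) -> bytes:
    # expected_generation would come from the caller's metadata (e.g. the block map).
    blk = store[lba]
    # Corruption or torn write: data no longer matches its own checksum.
    if blk.checksum != _checksum(blk.data, blk.lba, blk.generation):
        raise IOError("checksum mismatch (corrupted block)")
    # Misplaced write or misdirected read: the block says it belongs elsewhere.
    if blk.lba != lba:
        raise IOError("address mismatch (misplaced write or misdirected read)")
    # Lost write: media still holds an older generation than the metadata expects.
    if blk.generation < expected_generation:
        raise IOError("stale generation (lost write detected)")
    return blk.data
```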

And of course, centralized management of all devices from DSCC, the 100% uptime guarantee, and the unconditional satisfaction guarantee are also all there.

What is new? A Lot More Flexibility and Resilience.

The more exciting part is what’s different and new.

A key design principle for all the MP personas was to provide the option of disaggregated growth and fault domain management for everything.

What if you were able to add performance as needed, without worrying about the capacity?

What if you wanted to just add a single extra node? Maybe to have just a bit more speed, or perhaps to set it aside so you can have the same performance headroom even if you lost a node? 😊

What if you wanted to be able to withstand the simultaneous loss of multiple nodes in the cluster?

What if you could grow by adding dissimilar nodes instead of being forced into the exact same type all the time?

But we didn’t stop there.

Most (probably all) other systems need to have some sort of write cache, and must maintain coherence of that write cache between at least two controllers, even in 4+ controller monolithic arrays.

That write cache needs to be protected in case of node or power failures, so different techniques are normally used in the storage industry, like batteries or supercapacitors.

You can also lose write speed, since that write cache has to be replicated between the nodes before being committed to stable media. Sure, the cache mirroring happens over some sort of fast interconnect, but at scale this adds up and creates other issues.

Surviving simultaneous node failures also becomes a challenge with such mirrored write cache schemes – what if you lose the wrong two nodes?

So, we developed technology to get rid of all these limitations.

For starters – we don’t waste time mirroring write cache between nodes.

In fact, we don’t even have a volatile write cache as such, and therefore have no need for batteries or supercapacitors to protect that which does not exist 😊

Instead, we use an intelligent write-through mechanism directly to media.

Writes use a write-through model where dirty write data and associated metadata are hardened to stable storage before an I/O completion is sent to the host. The amount of hardening depends on how critical the data is, with more critical data getting more copies.

The architecture allows for that storage to be a special part of the drives, or different media altogether. An extra copy is kept in the RAM of the controllers.

Each volume ends up getting its own log this way, which is a departure from other storage systems that tend to have a centralized cache to send writes to.

This way of protecting the writes also allows more predictable performance on a node failure, simpler and easier recovery, completely stateless nodes, the ability to recover from a multi-node failure, plus easy access to the same data by far more than 2 nodes…
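
As a rough illustration of that write path (purely a sketch under my own assumptions – the class and function names below are hypothetical, not the product’s actual code), the idea is to harden each write to a per-volume log on stable media, with more copies for more critical data and an extra copy in controller RAM, before acknowledging the host:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class LogEntry:
    offset: int
    data: bytes
    is_metadata: bool = False  # metadata is treated as more critical here


@dataclass
class VolumeLog:
    """Per-volume write log: one log per volume instead of a shared central write cache."""
    stable_copies: List[List[LogEntry]] = field(default_factory=lambda: [[], []])
    ram_copy: List[LogEntry] = field(default_factory=list)

    def harden(self, entry: LogEntry) -> None:
        # More critical data (e.g. metadata) gets more copies on stable media.
        copies = len(self.stable_copies) if entry.is_metadata else 1
        for i in range(copies):
            self.stable_copies[i].append(entry)  # stands in for a write to stable media
        self.ram_copy.append(entry)              # extra copy kept in controller RAM


def host_write(log: VolumeLog, offset: int, data: bytes) -> str:
    log.harden(LogEntry(offset, data))
    # Only after the entry is hardened to stable media is the I/O completed,
    # so there is no mirrored volatile write cache to protect with batteries.
    return "I/O complete"
```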

Eventually, the writes are coalesced, deduped and compressed, and always laid down as full stripes (no overwrites means no read-modify-write RAID operations) to parity-protected stable storage.
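
Here’s a small, hypothetical sketch of the full-stripe idea (again my own simplification, using plain XOR parity rather than whatever RAID scheme the product actually uses): writes are coalesced until a full stripe’s worth of data exists, and parity is computed entirely from the new data, so nothing needs to be read back from media.

```python
CHUNK = 4096                 # size of one data chunk in the stripe
DATA_CHUNKS_PER_STRIPE = 4   # data chunks per stripe (parity adds one more)


def xor_parity(chunks):
    parity = bytearray(CHUNK)
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return bytes(parity)


class StripeWriter:
    def __init__(self):
        self.pending = []  # coalesced (already deduped/compressed) chunks
        self.stripes = []  # stands in for stable storage: (data_chunks, parity) tuples

    def write(self, chunk: bytes) -> None:
        assert len(chunk) == CHUNK
        self.pending.append(chunk)
        if len(self.pending) == DATA_CHUNKS_PER_STRIPE:
            self._flush_full_stripe()

    def _flush_full_stripe(self) -> None:
        data, self.pending = self.pending, []
        # Parity is computed entirely from new data: no old data or old parity
        # is read back from media, i.e. no read-modify-write.
        self.stripes.append((data, xor_parity(data)))
```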

Snapshots use a relocate-on-write mechanism, which means there is no snapshot performance penalty even with heavy writes.
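
And a toy sketch of relocate-on-write (hypothetical names, single-threaded, in-memory only): a snapshot simply freezes the current logical-to-physical map, and new writes always go to fresh locations, so the write path does no extra reads or copies of old data on behalf of snapshots.

```python
from typing import Optional


class Volume:
    def __init__(self):
        self.blocks = {}     # physical location -> data (append-only "media")
        self.live_map = {}   # logical block -> physical location
        self.snapshots = {}  # snapshot name -> frozen copy of the map
        self._next_loc = 0

    def snapshot(self, name: str) -> None:
        # Taking a snapshot just freezes the current logical->physical map.
        self.snapshots[name] = dict(self.live_map)

    def write(self, lba: int, data: bytes) -> None:
        # Relocate-on-write: new data always lands in a new location and only
        # the live map is updated; the old block stays put for any snapshots,
        # so there is no extra read or copy of old data on the write path.
        loc = self._next_loc
        self._next_loc += 1
        self.blocks[loc] = data
        self.live_map[lba] = loc

    def read(self, lba: int, snapshot: Optional[str] = None) -> bytes:
        mapping = self.snapshots[snapshot] if snapshot else self.live_map
        return self.blocks[mapping[lba]]
```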

Summary

The initial offerings of the new Block systems will be switchless arrays with disks inside the main chassis, and a near future update will bring the larger, switched, fully disaggregated, shared-everything architecture for much larger and more flexible implementations.

So there you have it: The new HPE GreenLake for Block Storage, Powered by Alletra Storage MP.

A bit of old, a lot of new, and lots of customer benefits and flexibility.

We are just getting started.

D
