
Data center transformation: The road to 400Gbps and beyond

As the number of mission-critical workloads running on denser and faster data center infrastructure increases, so does the need for speed and efficiency from high-performance networking infrastructure.

Network traffic within and across data centers has consistently experienced exponential growth. This immense growth is primarily triggered by high-performance workloads such as artificial intelligence and machine learning, with data being generated at massive scale, volume, and velocity. The challenge to organizations is real: all of this data needs to be collected, transported, stored, and analyzed in real time.

When we take a look at the data that's generated at this unprecedented volume and velocity, we see an increased need to manage and analyze the data in order to draw actionable insights that will benefit the business or organization. That is only possible if we can correlate historical stored data with near-real-time data. Predictive analytics, in turn, enables an enhanced customer experience for latency-sensitive applications.

What is driving the need for increased bandwidth?

If we look at the need for increased bandwidth, we'll find that growing densities within virtualized servers have driven up both north-south and east-west data-centric traffic. The massive shift toward machine-to-machine traffic has resulted in a major increase in the network bandwidth required to accommodate demand. The arrival of faster storage in the form of solid-state devices such as flash and NVMe is having a similar effect.

We find the need for increased bandwidth all around us as our lives become increasingly intertwined with technology. A leading driver in this evolution is artificial intelligence (AI) workloads, which spin off volumes of data to solve complex computations and require fast, efficient delivery of vast data sets.

Deploying networks at speeds of up to 100Gbps – and in the near future at 400Gbps – helps reduce the training times required. The use of lightweight protocols such as RDMA (Remote Direct Memory Access) can further speed the exchange of data between computing nodes while streamlining the communication and delivery process. Think about it: it was only a few years ago that a majority of data centers started deploying 10GbE in volume. Now we are seeing a shift toward 25 and 100GbE, with the adoption of 400GbE answering the call for emerging bandwidth needs.
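As a back-of-the-envelope illustration (assuming a hypothetical 10TB training data set and ideal line-rate transfer with no protocol overhead), here is how raw link speed changes the time just to move the data:

```python
# Back-of-the-envelope: ideal (line-rate) transfer time for a training
# data set at common Ethernet speeds. Real throughput is lower due to
# protocol overhead, but the relative speedup is the point here.

DATASET_TB = 10  # hypothetical AI training data set size, in terabytes

for gbps in (10, 25, 100, 400):
    bits = DATASET_TB * 8 * 10**12      # terabytes -> bits
    seconds = bits / (gbps * 10**9)     # bits / (bits per second)
    print(f"{gbps:>4} GbE: {seconds / 60:7.1f} minutes")

# 10 GbE takes ~133 minutes; 400 GbE takes ~3.3 minutes for the same data.
```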

Transforming the Ethernet switch market

It is no wonder, then, that the Ethernet switch market is undergoing a transformation. Previously, Ethernet switching infrastructure growth was led by 10/40GbE, but volume demand has turned the tide in favor of 25 and 100GbE.

Analysts agree that 25 and 100Gb, as well as emerging 400Gb Ethernet speeds, will soon surpass all other Ethernet solutions as the most deployed Ethernet bandwidths. This trend is driven by mounting demands for host-side bandwidth as data center densities increase and pressure grows for switching capacities to keep pace. Beyond raw bandwidth, 100 and 400Gbps technology is helping to drive better cost efficiencies in capital and operating expenses compared to legacy connectivity infrastructure at 10/40Gbps. These increased bandwidths also enable greater reliability and lower power requirements for optimal data center efficiency and scalability.

Figure 1: Drivers for adoption and increased performance

Data center L2/L3 switching market snapshot

According to Dell'Oro, data center Ethernet switching market revenue grew to approximately $14B in 2021 and is expected to sustain a healthy growth rate of 9% CAGR, reaching approximately $20B by 2025. Overall total port shipments are expected to experience similarly significant growth.
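As a quick sanity check on those figures, compounding $14B at a 9% CAGR over the four years to 2025 does land right around $20B:

```python
# Sanity check of the Dell'Oro projection quoted above:
# ~$14B in 2021, growing at ~9% CAGR through 2025.

revenue_2021_b = 14.0    # $B in 2021 (approximate, from the article)
cagr = 0.09
years = 2025 - 2021

projected = revenue_2021_b * (1 + cagr) ** years
print(f"Projected 2025 revenue: ${projected:.1f}B")  # ~$19.8B, i.e. ~$20B
```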

Figure 2: Data center Ethernet switching revenue (Source: Dell'Oro)
Figure 3: Data center port shipments for Ethernet switches (Source: Dell'Oro)

Figure 4: 400GbE vs. 800GbE port shipments (Source: Dell'Oro)

Breakout cabling provides scalability options, enabling a single 200GbE port to be split into two 100GbE links. Likewise, any 100GbE port on a switch can break out into four 25GbE ports, letting one switch port accommodate up to four adapter cards in servers, storage, or other subsystems. Similarly, a 400Gbps port can be configured as 4x100GbE, 2x200GbE, or 1x400GbE. In the future, with 800GbE systems, we can expect deployments to include scale options at lower speeds with breakouts to 2x400GbE or 8x100GbE. Breakout applications support many use cases, including aggregation, shuffle, better fault tolerance, and larger radix.
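The arithmetic behind these splits is straightforward: every breakout preserves the parent port's total bandwidth. A minimal Python sketch (purely illustrative, not any vendor's CLI) enumerating the combinations mentioned above:

```python
# Illustrative sketch of the breakout options described in the text.
# Each entry maps a parent port speed (GbE) to its (count, child speed)
# splits; the assert checks that a split preserves total bandwidth.

BREAKOUTS = {
    200: [(2, 100)],
    100: [(4, 25)],
    400: [(4, 100), (2, 200), (1, 400)],
    800: [(2, 400), (8, 100)],   # expected options for future 800GbE
}

for parent, options in BREAKOUTS.items():
    for count, child in options:
        assert count * child == parent, "split must preserve bandwidth"
        print(f"{parent}GbE port -> {count} x {child}GbE")
```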

We have already experienced the market transition from 10/40GbE-based systems to 25/100GbE-based systems, and in the next couple of years we anticipate much wider adoption of 400Gbps systems by enterprises. 400Gbps is expected to be less expensive than 100Gbps on a per-bandwidth basis. Beyond offering higher bandwidth than 100Gbps systems, 400Gbps adoption brings higher radix, lower latency, and fewer hops.

Table 1: Ethernet speeds vs. SerDes lanes (Source: Dell'Oro)

What will the future bring? The data center Ethernet switch market is expected to transition through three to four major speed upgrade cycles during the next five years. The first upgrade cycle was driven by 25G SerDes technology. The second cycle was powered by 50G SerDes technology. The third cycle, projected to start in the very near future, will be propelled by 100G SerDes technology. A fourth upgrade cycle, resulting from 200G SerDes, is expected at a later phase.
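The pattern behind these cycles is that a port's speed is roughly the number of SerDes lanes multiplied by the per-lane rate (setting aside encoding overhead). A rough sketch of how each SerDes generation maps onto familiar port speeds:

```python
# Rough illustration of the upgrade cycles above: port speed is
# (number of lanes) x (SerDes rate per lane), so the same lane counts
# yield higher port speeds as SerDes generations advance.

LANE_COUNTS = (1, 2, 4, 8)

for serdes_gbps in (25, 50, 100, 200):   # the four upgrade cycles
    speeds = [lanes * serdes_gbps for lanes in LANE_COUNTS]
    print(f"{serdes_gbps:>3}G SerDes -> possible port speeds {speeds} GbE")

# e.g. 50G SerDes yields 400GbE with 8 lanes; 100G SerDes needs only 4.
```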

Figure 5: 400GbE Ethernet switching use cases

Transformation in data center architectures: Is edge eating the cloud?

Many factors drive the evolving architecture of the modern data center. According to Gartner, a vast majority of data will be processed outside traditional data centers by 2025. We'll start to see modern data center architectures designed as a combination of edge, centralized data centers, and the cloud. There are exciting, transformational developments in how data is managed in this new ecosystem.

On the flip side, this transformation causes some IT managers to worry that edge is eating the cloud.

On the contrary, a balance will have to be struck between edge, data center, and cloud, leveraging the strengths of each to maintain well-orchestrated distributed workloads. Each will have its own workload balance, depending on the type of application. In most cases, this orchestration will work in sync with distributed architectures, depending on an application's requirements. Based on workload and application needs, every organization will need to find its point of equilibrium for the best mix of edge, data center, and cloud.

With connectivity options spanning 10/25GbE, 100GbE, 200GbE, and 400GbE, we'll start to see the best option used based on application needs. For example, many edge locations may continue to use 25/100GbE, while central data centers start to leverage 100/200/400GbE, depending on the bandwidth and latency an application needs.

Edge computing: Processing data closer to point of creation

Edge computing allows data from devices to be analyzed at the edge before being sent to the data center. Using intelligent edge technology can help maximize a business's efficiency. Instead of sending data out to a central data center, analysis is performed at the location where the data is generated. Micro data centers at the edge integrate storage, compute, and networking to deliver the speed and agility needed to process data closer to where it is created. For applications requiring high-performance, low-latency infrastructure at the edge, there is no need to make tradeoffs between high bandwidth and ultra-low latency; both are possible on the local area network at the edge.
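As a minimal sketch of that pattern (the function and data here are hypothetical, for illustration only), an edge node might reduce a stream of raw sensor readings to a compact summary before anything crosses the wide-area link to the central data center:

```python
# Minimal sketch of the edge pattern described above: aggregate raw
# readings locally and forward only a compact summary upstream.
# All names and values here are illustrative, not a real API.

from statistics import mean

def summarize_at_edge(readings: list[float]) -> dict:
    """Reduce raw readings to a small summary before sending upstream."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": round(mean(readings), 2),
    }

raw = [21.4, 21.9, 22.1, 21.7, 35.0, 21.8]   # e.g. temperature samples
print(summarize_at_edge(raw))  # a few bytes upstream instead of the full stream
```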

Figure 6: Need for high bandwidth and low latency for new high-performance applications and workloads

 

Figure 7: Combination of edge and central data center/cloud brings multiple benefits for data center infrastructure

The combination of edge and central data center plus cloud brings multiple benefits to an organization or enterprise.

  • It reduces the load on the network to minimize network congestion
  • It improves the reliability of the network by distributing and load balancing between edge and central data center location(s)
  • It enhances the customer experience for latency-sensitive applications
  • It reduces the total cost of ownership (TCO) by optimizing the infrastructure hosted at the central location with lower-cost edge infrastructure

Clearly, the network will need to support more feature sets to accommodate the new requirements of digital transformation. Edge computing and IoT will power the need for security, automation, and AI/ML.

Designing for today while planning for the future

Most modern data centers are highly virtualized and run on solid-state storage, whether or not you are supporting AI/ML or data analytics workloads today. For both fast NVMe storage and computational inferencing workloads, predictable and reliable application performance depends on fast and accurate data delivery – and that starts at the network.

The rapid improvement of data center computing, together with low-latency storage solutions, has shifted data center performance bottlenecks to the network. Today's data centers should be designed to handle this anticipated bandwidth with a low-latency, lossless Ethernet fabric, leveraging new connectivity solutions of up to 400Gbps. Data center switching needs to deliver ultra-low latency and zero avoidable packet loss (for example, loss due to traffic microbursts), and deliver that performance fairly and consistently across any packet size, mix of port speeds, or combination of ports.
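One way to see why faster links help latency as well as throughput: a packet's serialization delay, the time just to put its bits on the wire, shrinks linearly with link speed. A small sketch across common frame sizes:

```python
# Serialization delay per packet at different link speeds. This is only
# the on-the-wire time, not switching or propagation delay, but it shows
# why faster fabrics also benefit latency-sensitive, small-packet traffic.

for link_gbps in (10, 100, 400):
    for pkt_bytes in (64, 1500, 9000):   # minimum, standard, jumbo frames
        ns = pkt_bytes * 8 / link_gbps   # bits / (Gbit/s) == nanoseconds
        print(f"{link_gbps:>3} GbE, {pkt_bytes:>5}B packet: {ns:8.1f} ns")

# A 1500B frame takes ~1200 ns at 10 GbE but only ~30 ns at 400 GbE.
```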

For more information, please check out these blogs.

 

Meet Around the Storage Block blogger Faisal Hanif, Product Manager, HPE Storage and Big Data. Faisal is part of the HPE Storage and Big Data business group, leading product management and marketing for next-generation products and solutions for storage connectivity and network automation and orchestration.

Follow Faisal on Twitter @ffhanif.

 


Storage Experts
Hewlett Packard Enterprise

twitter.com/HPE_Storage
linkedin.com/showcase/hpestorage/
hpe.com/storage

 

About the Author

StorageExperts

Our team of Hewlett Packard Enterprise storage experts helps you to dive deep into relevant infrastructure topics.