The Next Big Thing in Cloud Data Center Networking

Our team is on point with the “latest and greatest” technology. Heck, we make a lot of that technology! But it’s always interesting to attend shows and see what everyone else is working on…and how trends come to life. As our Frank Yang tells us, there are three reasons why 400G will be here before we know it.

I started blogging about 100 Gigabit Ethernet (GbE) in 2016, and gave an update on the 100GbE landscape in 2017. In the past two years, the market has seen tremendous growth in 100GbE in the data center.

“What’s next?” I asked myself.

From the recent Optical Fiber Communication Conference (OFC) 2018 and Open Compute Project (OCP) Summit 2018, I learned that 400GbE will most likely be the next big thing in cloud data center networking. Why?

First, the unquenchable thirst for bandwidth keeps driving higher speed networking technologies.

Second, the market would like to repeat, with 100GbE and 400GbE, the success of breaking out 40GbE into four 10GbE links.
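
To get a feel for the arithmetic behind that breakout model, here is a minimal sketch in Python. The link speeds are illustrative; which splits a given switch and optic actually support varies by product:

```python
# Illustrative breakout arithmetic: one high-speed port can be split into
# several lower-speed links when the port speed is an exact multiple of
# the per-link speed (e.g. 40GbE -> 4 x 10GbE).

def breakout_options(port_gbps, link_speeds=(10, 25, 50, 100)):
    """Return (link_speed, count) pairs that evenly divide the port speed."""
    return [(speed, port_gbps // speed) for speed in link_speeds
            if speed < port_gbps and port_gbps % speed == 0]

for port in (40, 100, 400):
    print(f"{port}GbE -> {breakout_options(port)}")
# 40GbE -> [(10, 4)]
# 100GbE -> [(10, 10), (25, 4), (50, 2)]
# 400GbE -> [(10, 40), (25, 16), (50, 8), (100, 4)]
#
# In practice, switches typically offer only the lane-aligned splits,
# e.g. 4 x 100GbE or 8 x 50GbE out of a 400GbE port.
```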

The third reason lies in the availability of switch silicon with per-lane speeds of 50 or 100 Gigabits per second (the SerDes speed, to use the technical term). Per-lane speeds of 50/100Gbps provide much better scalability for building 400GbE networks. Andy Bechtolsheim, the well-known entrepreneur, investor and self-made billionaire, predicted at the OCP Summit 2018 that 50 and 100Gbps will account for more than 50% of the SerDes mix starting in 2020.
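
To see why faster SerDes lanes help with scale, here is a rough port-count sketch. The lane count below is an assumption for a hypothetical switch ASIC, not any particular product's specification:

```python
# Rough port-count arithmetic for a switch ASIC with a fixed SerDes budget:
# faster lanes let the same silicon expose either faster ports or more ports.
# The lane count below is an assumption, not a real product's figure.

SERDES_LANES = 256  # hypothetical ASIC with 256 SerDes lanes

def port_count(port_gbps, lane_gbps):
    """How many ports of a given speed the SerDes budget supports."""
    lanes_per_port = port_gbps // lane_gbps
    return SERDES_LANES // lanes_per_port

for lane_gbps in (25, 50, 100):
    total_tbps = SERDES_LANES * lane_gbps / 1000
    print(f"{lane_gbps}G lanes: {port_count(400, lane_gbps)} x 400GbE or "
          f"{port_count(100, lane_gbps)} x 100GbE ({total_tbps} Tbps total)")
# 25G lanes: 16 x 400GbE or 64 x 100GbE (6.4 Tbps total)
# 50G lanes: 32 x 400GbE or 128 x 100GbE (12.8 Tbps total)
# 100G lanes: 64 x 400GbE or 256 x 100GbE (25.6 Tbps total)
```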

Data center core networking technology is trending from 10/40GbE to 100/400GbE. Data center customers should examine their underlying connectivity infrastructure to ensure it can support this transition while preserving as much of their existing investment as possible.

400GbE will be required to support several use cases in the cloud data center. From the distance perspective, the use cases include:

  • Short reach (less than 100 meters), e.g. within one server rack row or across multiple rows nearby;

  • Middle reach (100m to 500m/2km), e.g. within a compute hall or across multiple halls;

  • Campus reach (500m/2km to 10km); and

  • Metro reach (40km or longer).

Each of these use cases may require one or more variations of 400GbE optics. With the combination of reach, media, form factor and cost, the selection of 400GbE optics could easily get complicated. Another side effect of too many variations of 400GbE optics is market fragmentation, which was a lesson learned from 100GbE.
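
To make the selection problem concrete, here is a small sketch that shortlists candidate optic types by target reach. The optic names and distances below are my own illustrative examples of the kinds of variants on the table, not an authoritative catalog:

```python
# Hypothetical helper that shortlists 400GbE optic types by target reach.
# The entries are illustrative examples only; actual offerings, names and
# supported distances vary by standards body, MSA and vendor.

OPTICS_BY_REACH_M = [
    (100,    "400GBASE-SR8 (multimode fiber)"),      # short reach
    (500,    "400GBASE-DR4 (parallel single-mode)"), # middle reach
    (2_000,  "400GBASE-FR4 (duplex single-mode)"),   # middle reach
    (10_000, "400GBASE-LR8 (duplex single-mode)"),   # campus reach
    (40_000, "400GBASE-ER8 (duplex single-mode)"),   # metro reach
]

def candidate_optics(distance_m):
    """Return the illustrative optic types whose reach covers distance_m."""
    return [name for reach, name in OPTICS_BY_REACH_M if distance_m <= reach]

print(candidate_optics(80))     # every entry, from SR8 on up
print(candidate_optics(1_500))  # FR4, LR8 and ER8
```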

I will keep monitoring the 400GbE market and post updates when appropriate. What do you think the industry should do to make 400GbE optics easy to choose and simple to use? Let me know your thoughts in the comments below.