Gigabit Ethernet has been the dominant server networking choice for the past decade, with more than 200 million 1Gbps server ports shipped since the early 2000s. But the introduction of Intel’s new Romley server platform, the rapid adoption of virtualization, and the emergence of technologies such as big data are fueling migration to 10 Gigabit Ethernet.
This migration ultimately will spur an Ethernet switch refresh cycle and generate an estimated $50 billion in revenue for suppliers over the next five years. However, the 10Gbps transition has been slow and is following a different path from that of 1Gbps.
This article describes the current status of the 10Gbps server migration and explores the factors impacting it. We then discuss our view on the catalysts as well as the timing for 10Gbps to become the mainstream server network choice.
10Gbps server migration: Where are we?
10Gbps server network ports have been shipping in material quantities for more than five years. Nevertheless, as of mid-2012, we estimate that 10Gbps ports are integrated on less than 20% of servers, whereas 1Gbps ports had reached a server penetration rate of well over 50% at the equivalent point in their ramp.
Why has the migration to 10Gbps been so slow?
We believe it’s due to five factors:
10Gbps server migration: Catalysts and outlook
Given the wide range of choices and the uncertainty about user needs and wants, no single solution is likely to satisfy the requirements of all customers. Hence suppliers are hesitant to pursue the classic integrated approach of soldering a LAN chip onto the server motherboard, known as LAN on motherboard (LOM), which would otherwise facilitate early migration to 10Gbps. Instead, major server vendors such as HP, IBM and Dell are using modular integrated network cards, called daughter adapters or modular LOM, to help IT managers through the transition by offering a choice of speed, network connectivity type (10G Base-T or SFP+ DAC), protocol and brand.
While these new adapters give end users more flexibility, they come at a premium: 10Gbps daughter adapters cost two to three times more than their LOM counterparts. We estimate the price of a 1Gbps server LOM port at around $7, a 10Gbps LOM port at about $24, and a 10Gbps daughter adapter port at over $65 (note that these are prices to the server vendor, not the end user).
Given the daughter adapters’ price premium, the question is whether 10Gbps LOM server connectivity is required for a mass migration to 10Gbps. Certainly, a move toward LOM would accelerate the 10Gbps transition. However, we believe daughter adapters may persist in the market for two or more years, and that a mass migration to 10Gbps could still occur without a full conversion to LOM, since the price of the server network port comprises less than 10% of the total price of server connectivity, as illustrated in Figure 2.
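A quick back-of-the-envelope calculation makes this concrete. The sketch below uses only the per-port prices cited above and the under-10% share attributed to Figure 2; the implied total connectivity price is our own derivation from those figures, not a number from the article.

```python
# Per-port prices to the server vendor, as estimated above (USD).
lom_1g = 7.0         # 1Gbps LOM port, ~$7
lom_10g = 24.0       # 10Gbps LOM port, ~$24
daughter_10g = 65.0  # 10Gbps daughter adapter port, $65+

# Premium of the daughter adapter over a 10Gbps LOM port.
premium = daughter_10g / lom_10g  # ~2.7x, within the "two to three times" range

# If the server network port is under 10% of the total price of server
# connectivity (Figure 2), the total is at least ten times the port price.
min_total_connectivity = daughter_10g / 0.10  # at least ~$650 per connection

# So the extra cost of a daughter adapter over a LOM port is a small
# fraction of what the end-to-end 10Gbps connection costs.
adapter_premium_share = (daughter_10g - lom_10g) / min_total_connectivity

print(f"daughter adapter premium: {premium:.1f}x")
print(f"implied total connectivity price: >= ${min_total_connectivity:.0f}")
print(f"adapter premium as share of total: <= {adapter_premium_share:.1%}")
```

In other words, even a $41 per-port premium stays in the single digits as a percentage of total connectivity cost, which is why we believe the daughter adapter premium alone will not block a mass migration.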
We believe the vendor decision to switch from a daughter adapter to a true LOM will depend on several considerations:
Next, we believe the maturity of 10G Base-T technology is a key catalyst for a 10Gbps mass migration. SMBs depend heavily on 10G Base-T because they will continue to run mixed rack environments with an installed base of 1G Base-T servers; without 10G Base-T, smaller IT shops would need to operate separate 1Gbps and 10Gbps switches, which is not optimal. Larger enterprises tend to purchase servers in full racks, so interoperability with an installed base of servers is less critical. We expect a major improvement in 10G Base-T technology with the next-generation 28 nm PHYs, planned for the 2014-2015 time frame.
Furthermore, we believe the price per 10Gbps switch port must come down to propel the migration. As illustrated in Figure 2, switches and optics account for the vast majority of the price of 10Gbps server connectivity. We expect a strong price-per-port decline from today’s 10Gbps switch products to those shipping in the first quarter of 2013: we anticipate an Ethernet switch refresh cycle at the beginning of 2013 based on Broadcom’s Trident II silicon, which will enable higher switch port density and a price-per-port decline of more than 10%.
The remaining question is whether 40Gbps and 100Gbps are needed in the core to aggregate 10Gbps servers. We don’t think they are critical, because IT managers are changing their network architectures to match current traffic patterns: they are moving away from three-tier architectures, with two tiers of modular switches in the core, toward flatter designs that aggregate fixed top-of-rack switches.
Older architectures were designed to move traffic from servers to end users (referred to as north-south traffic). Current architectures flatten the number of tiers to accommodate machine-to-machine traffic (referred to as east-west traffic), which now comprises the bulk of the bandwidth used in data centers. With this change in traffic topology, less traffic flows through the core of most data centers, removing some of the legacy pressure to deploy higher-speed switching cores as a prerequisite for server networking speed upgrades.
The Dell’Oro Group predicts a mass migration of servers to 10Gbps ahead of a core switch migration to higher speeds. We expect 10Gbps server connectivity to out-ship 1Gbps sometime between 2014 and 2015 (Figure 3), coinciding with 10G Base-T maturity. We do not anticipate that higher-speed 40Gbps and 100Gbps core switches will eclipse 10Gbps for many years.