Facebook’s Frank Frankovsky at the Open Compute event in May 2012.
Facebook’s giant user base of 901 million is served by an equally impressive number of servers, data centers and assorted networking and storage gear. And unlike Google, which also has built its own hardware designed to promote its infrastructure advantage, Facebook has taken steps to open much of its thinking on hardware IP to the world, creating the Open Compute Project.
In a meeting Monday at Facebook headquarters, Facebook's SVP of Infrastructure Frank Frankovsky apparently opened up a bit more, discussing Facebook's plans for a new type of networking gear. According to Wired, Facebook is rethinking the top-of-rack switch as part of a broader rethinking of the type of gear users buy and how they operate that gear inside the data center. From the Wired piece:
“The interaction between servers and networking devices is going to become a very blurry line over time,” Frankovsky said. “What I’m envisioning is that the top-of-rack switch…will evolve to be more than just an Ethernet switch.” These switches, he said, will likely incorporate the boot devices for the servers as well as the equivalent of network interface cards — the cards that connect today’s servers to switches.
But it’s not just about networking. It’s about changing the contents of the data center and eliminating servers as we know them today. While I wasn’t at the meeting yesterday, Frankovsky and I had a long talk last week to discuss his plans as a precursor to our chat at our Structure Conference in two weeks.
I won’t out all of his thoughts here, but we did discuss his plans for the data center and how storage, servers and networking will end up in the data center of the future. Large equipment vendors such as Cisco saw the same blurry line between servers and networking devices and responded by building the Unified Computing System, which combines both elements in one proprietary box. Frankovsky is heading the other way.
“We’ve solved a lot of problems and drawn a lot of attention to building out the infrastructure, and the next wave of innovation must come from how to operate it efficiently over time,” Frankovsky told me. “The focus has to move now to operating efficiently at scale.”
He brought up the problem of the refresh cycle that occurs every three years or so as the chip guys release next-generation processors. Right now, people have to rip and replace a lot of their gear just to move to the next-generation CPU. That’s expensive, and it will one day be unnecessary for Facebook if Frankovsky has his way.
Networking cables along the ceiling at Facebook HQ.
He’s thinking of the server not as a box, but as a CPU with some DRAM on a sled that slots into the Open Rack design that Facebook is developing with others for the Open Compute Project. He also discussed changes to the top-of-rack switch that would eliminate the network interface cards from the motherboard that holds the CPU. This way, CPUs are replaceable without touching the networking, which will be integrated throughout the rack with a new type of switch on top.
“The next generation of top of rack switches look like IO appliances with an integrated NIC that are no longer part of the server,” Frankovsky said. “So the server of tomorrow looks more like a rack.”
That’s a huge statement from someone in a very influential position in the data center and IT world. When Facebook created Open Compute last year, it was essentially staging a coup against the hardware vendors that weren’t meeting its needs. Frankovsky said at the time that vendors were focused on “gratuitous innovation” as opposed to innovating where it counts. And in the current line of Open Compute designs, those hardware vendors have been hemmed into a select zone on the Open Rack design where they can differentiate.
Executives from Dell and HP got up onstage at an Open Compute event in May and pretended to be happy about their new ability to focus and engineer in that area, but it’s no secret that their margins will suffer under the Open Compute regime. That’s one reason Dell is pushing ARM servers, a class of systems as yet untouched by Open Compute, rather than soldering some CPUs and DRAM onto a sled and trying to sell it at a premium. That’s like trying to sell peanut butter and jelly at a premium: it can be done, but not to a mass audience.
And if Facebook’s re-imagining of the data center comes to pass, with storage (provided eventually by its Knox designs for Open Compute), servers and eventually the networking gear described above, it’s possible that EMC, Cisco, Juniper, Arista and others will feel HP and Dell’s current pain.