By Julie Bort.
In the shadow of its acquisition by Microsoft, LinkedIn has quietly begun talking about an internal project that has the potential to shake up the roughly $175 billion data-center hardware market.
LinkedIn’s plan is somewhat similar to what Facebook is doing with its Open Compute Project. OCP is creating brand-new “open source” data-center hardware, in which engineers from different companies work together and everyone freely shares the designs.
Likewise, LinkedIn is designing and building nearly all the pieces and parts of software and hardware that it needs for its data centers, poaching key people from Facebook and Juniper to do it.
“We are not building servers and switches and all these things because we want to be good at it. We are doing it because we believe it gives us an advantage to control our own destiny,” Zaid Ali Kahn, senior director of infrastructure architecture and operations at LinkedIn, told Business Insider.
This is a terrifying trend for vendors like Cisco and Juniper. In the past, only the biggest internet companies like Amazon, Google, and Facebook have gone this route: designing their own IT infrastructure from scratch.
LinkedIn isn’t as big as those guys. It has a handful of data centers in California, Texas, and Virginia — most of them using leased space at a hosting provider — and only recently started designing and building its own in Singapore and Oregon. The one in Portland, Oregon, is its crown jewel, and the other data centers will eventually be upgraded with the new technology.
A superfast network for $1
The story begins with a Facebook network hardware engineer named Yuval Bachar. He was part of a Facebook team in 2013 that had a big goal: reducing the price of building super-high-speed computer networks tenfold. Facebook had stolen him from Cisco, and he had done a stint at Juniper, too.
Yuval Bachar. YouTube/@Scale
He then went on to help Facebook build its industry-changing, low-cost, open source Wedge switch that put market leader Cisco on notice. Earlier this month, Facebook announced the second generation of that switch.
Around the time Bachar announced his goal, the LinkedIn networking team was struggling with its own network, which wasn’t handling the company’s user growth very well.
“The Production Engineering Operations (PEO) team found it very difficult to meet the demands of our applications when network routers and switches are beholden to commercial vendors, who are in control of features and fixing bugs,” Kahn wrote in a blog post.
In early 2015, the team began to build its own switch, called Pigeon. In the fall, it hired Kahn to help do it. It began testing the switch early this year.
LinkedIn’s Pigeon switch. LinkedIn
In the meantime, having been a part of OCP, Bachar came up with a similar plan for LinkedIn. OCP started by creating a rack that holds stacks of computers, storage drives, and network switches.
The Open 19 rack. Open19.org
As a company grows, it simply adds more switches, servers, and disk drives to the rack. But the racks themselves can be expensive, including all sorts of bells and whistles that LinkedIn didn’t need.
Facebook had the same problem, so it built a stripped-down 21-inch rack, then designed its own servers and storage to put in it.
But hardly anyone else uses a 21-inch rack. “Probably 99.5% [of companies] are using a 19-inch rack,” Kahn told us.
That means that for LinkedIn (or anyone else) to use Facebook’s rack, it would have to renegotiate supply deals with its vendors to get gear in different sizes.
It was deja vu. Bachar led an initiative called Open 19 to create an open standard for a low-cost 19-inch rack. This rack can be stuffed with 96 servers for $50,000 total, saving $25 million across a 500-rack data center, the organization says.
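The organization’s savings claim is easy to sanity-check with back-of-the-envelope arithmetic (a sketch using only the figures quoted above; these are Open 19’s numbers, not independently verified pricing):

```python
# Figures as stated by the Open 19 organization (assumptions, not verified pricing).
servers_per_rack = 96
cost_per_rack = 50_000       # dollars, for a fully stuffed rack
racks = 500                  # size of the hypothetical data center
total_savings = 25_000_000   # claimed savings across all 500 racks

savings_per_rack = total_savings / racks
print(f"Implied savings per rack: ${savings_per_rack:,.0f}")                       # $50,000
print(f"Implied cost per server slot: ${cost_per_rack / servers_per_rack:,.2f}")   # $520.83
```

In other words, the claim works out to the cost of one fully loaded rack saved for every rack deployed.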
Having seen the impact of OCP, vendors jumped on board, including some of the Chinese contract manufacturers that have made a killing supporting OCP. Hewlett-Packard Enterprise, which was late to OCP, is also a member.
Full-steam ahead, no turning back
Microsoft, which expects its $26.2 billion acquisition of LinkedIn to close by the end of this year, is a member of OCP and has standardized on 21-inch racks and other OCP technology.
Kahn wouldn’t comment on the impact of the acquisition, but Microsoft has promised to let LinkedIn operate independently. A person with knowledge of the situation told us projects Altair, Falco, and Open 19 are still full-steam ahead.
This person points to the fact that in September, three months after the merger was announced, the company hired Doug Hanks from Juniper Networks.
Doug Hanks. LinkedIn/Doug Hanks
Hanks was Juniper’s director of product management and strategy, and has written a number of books on Juniper’s tech. He’s now LinkedIn’s director of engineering.
“Doug Hanks reports to me,” Kahn said. “He recently joined and we’re delighted to have him.”
“His focus is to build the network engineering team and take it to the next level and help execute a number of initiatives, understanding the blend between software and networking,” he said.
Our source said that with Hanks on board, LinkedIn plans to be almost fully reliant on its own home-grown network gear in 18 to 24 months, and then “it’s no turning back at that point.”
Kahn insists that LinkedIn’s goal differs from Facebook’s. He’s not looking to pick a public fight with the network industry led by Cisco.
In fact, he’s still buying network gear from a number of commercial vendors — as long as they allow him to ditch their software so he can install his own, he said.
“A lot of vendors are open to that, to meet the needs of a web scale company,” he said.
LinkedIn also hasn’t fully committed to giving away all of its home-grown infrastructure software, the designs of its switch, or other hardware, as OCP has. But Kahn hasn’t ruled out openly sharing its technology either.
“LinkedIn’s culture is open source, so when the time is right we will be open to that,” Kahn said.
In fact, LinkedIn was a founding member of Hewlett-Packard Enterprise’s open source project, called OpenSwitch, to build a Linux-based switch. OpenSwitch is now run by the Linux Foundation (and word is that the initiative is floundering and LinkedIn is looking for alternatives).
Meanwhile, LinkedIn has also been sharing technical articles about its network software.
Big money at stake
Internet companies using commercial network gear often spend $40 million to $140 million a year on it with vendors like Cisco, Arista, and others, one person who ran a large internet network recently told Business Insider.
Seeing a company of LinkedIn’s size roll its own gear could encourage those companies to try it themselves.
One person not associated with LinkedIn who built a huge data center for one of the world’s largest tech companies said that after his company started building its own network equipment, it drove costs down by a factor of 10: from $40,000 per Cisco switch to $4,000 for a cheaper “commodity” switch capable of running home-grown software.
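Combining that anecdote with the annual spending range quoted above gives a rough sense of the money at stake (a sketch; the per-switch prices are one person’s recollection, not current list prices):

```python
# Per-switch prices as recalled by the source (assumptions, not list prices).
name_brand_switch = 40_000   # dollars, per Cisco switch
commodity_switch = 4_000     # dollars, per "commodity" switch

reduction_factor = name_brand_switch / commodity_switch
print(f"Cost reduction: {reduction_factor:.0f}x")   # Cost reduction: 10x

# Applied naively to the $40M-$140M annual network spend quoted earlier:
for annual_spend in (40_000_000, 140_000_000):
    print(f"${annual_spend:,}/year -> ${annual_spend // 10:,}/year")
```

Even at the low end of that range, a tenfold reduction would translate to tens of millions of dollars a year per company.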
“Arista, Cisco, Juniper, they are all s—ing themselves about this trend,” said someone familiar with LinkedIn’s project. “The big guys, Google, Amazon, Facebook, are all doing this for economies of scale. For them, it’s all about money. It’s cheaper to build their own. At LinkedIn, cost is not the No. 1 priority at all. They want to have complete control over the user experience, to own everything in the stack. Then they can standardize it.”