Podcast: LinkedIn’s Innovative Hyperscale Data Center a Model for Handling Dynamic Loads in an Energy-Efficient Manner.

LinkedIn has one of the world's most efficient data centers.

In recent years LinkedIn has moved toward the implementation of a more centralized data center strategy, consolidating some smaller data centers into four main facilities, with three data centers in the US and one in Asia. Those data centers must meet the needs of over 400 million LinkedIn users worldwide, and the company’s storage and processing needs are growing at a rate of 34% annually.

With that kind of scale and growth, improving cost efficiency can have a tremendous impact on both the company’s bottom line as well as the ability of the infrastructure to keep up with the user growth. For their newest data center in Hillsboro, Oregon, LinkedIn decided to think creatively and was willing to explore a new path toward a cost-effective, energy-efficient data center.

John Sheputis, president of Infomart Data Centers, LinkedIn’s partner in that project, believes LinkedIn can credibly claim that their Hillsboro facility is among the world’s most efficient data centers, with innovative cooling infrastructure that significantly reduces power usage and allows for greater power density.

In our conversation with John we discuss:

  • The innovative design, developed in collaboration with Infomart, that permits LinkedIn to handle its massive yet unpredictable load and earn commendations from the Uptime Institute and the US Green Building Council.
  • The “chilled door” cooling strategy that places cooling coils immediately adjacent to the IT, greatly reducing the volume of hot air that needs to be moved and cooled, allowing for more cost-efficient and effective cooling.
  • How the design allows for dynamic load that ranges from 4 kW to 24 kW per rack.
  • Economic advantages that come with efficient energy use and whether the data center industry is being properly credited for innovations in this area.
  • The responsibility of the data center industry to use energy resources wisely and how the media is more willing to take shots at perceived shortcomings than it is to highlight achievements.
  • After years of unmet forecasts of higher power density in data centers, are innovations in data center cooling technology paving the way for a more meaningful move toward higher densities?

John is one of the more thoughtful executives in the data center space and we touch on several other interesting topics as well. You can listen to our interview in the player and/or you can read the transcript of the interview beneath the player.

 

 

Transcript:

Kevin O’Neill, Data Center Spotlight: This is Kevin O’Neill of Data Center Spotlight. We have with us today John Sheputis, the president of Infomart Data Centers. John, thanks for joining us today.

John Sheputis, Infomart Data Centers: Pleasure to be here.

Data Center Spotlight: John and Infomart are involved in a very interesting data center project on behalf of LinkedIn. It’s going to be a pretty interesting topic. John, before we get to that, and some of the state-of-the-art aspects of that project, why don’t you tell us just a little bit about you and Infomart?

John Sheputis: Infomart is a wholesale data center provider that operates in some of the major markets in the country: Northern Virginia; Dallas, Texas; Silicon Valley; and, of course, Hillsboro, Oregon, which is a suburb of Portland, where the LinkedIn West Coast facility is. It was formed, originally, about 10 years ago as Fortune Data Centers, of which I was a founder. About two years ago we merged with the Dallas Infomart to create Infomart Data Centers.

Data Center Spotlight: John, let’s talk about LinkedIn. LinkedIn’s storage and processing needs were growing on an annual basis about 34%, which is just massive growth that can be difficult for your IT infrastructure to keep up with. Particularly if you’re a company that is focused on your own infrastructure and you’re not out in the public cloud.

They had a data center in Virginia, a data center in Texas, and the obvious next move was to have a data center on the West Coast. Why don’t you take it from there and tell us how you got involved in the project.

John Sheputis: LinkedIn, like a lot of companies that are in, what we call the hyper-scale mode, originally, whatever, 10 years ago, probably had a server count in the hundreds, or maybe low thousands, and now it’s in the hundreds of thousands. You’ve got two orders of magnitude of growth.

The methods to manage that amount of IT infrastructure have to support a user base which, again, has probably grown by several orders of magnitude over the last decade. Now, they’re north of 400 million users worldwide. It creates an opportunity to, well, probably to screw up if you don’t do it right. It creates a time to rethink all the investments you make.

Like a lot of companies in their space, they had a large number of what we call retail footprints – smaller footprints distributed in many data centers, and then began consolidating. You just mentioned Dallas and Northern Virginia – Ashburn. Those were the first two projects. They also have an international one which they’ve talked about in Singapore to serve Asia-Pac.

The last piece of their migration from smaller retail footprints to larger wholesale was to put one on the West Coast. They did a search, and that search was completed a little over a year ago with the selection of Oregon and of our company, Infomart.

Data Center Spotlight: Are you their data center provider at their other sites in Virginia, and Texas, and Singapore?

John Sheputis: No, they went with another provider, Digital Realty, which is one of the bigger data center providers in the world, who has established facilities in those markets.

Data Center Spotlight: Yeah, and it’s certainly not untypical at all for a large company like LinkedIn to be using different providers in different markets.

John Sheputis: No.

Data Center Spotlight: LinkedIn wanted to do this in an energy efficient manner, and they wanted to do it in some ways that handled, for lack of a better term, the burstability of their user base. When it comes to the energy efficiency, you can wrap that up in environmental responsibility all you like, but CIOs are under a lot of pressure from their CFO counterparts to rein in their technology costs. Fortunately, optimizing energy usage has a positive financial impact on an organization, while also offering the ability to wrap it up in an environmentally responsible package.

John Sheputis: Yeah, I think if, again, looking back over time and sort of what were the original motivations, the IT operations for mission critical… We live in an always-on environment. It would be news if you went to LinkedIn and saw “Servers Not Available” as a message. It’s not just unforgivable, it’s unheard of, and it would be newsworthy in itself. There’s a tremendous pressure to always be available. The traditional method of doing that is to simply over-provision and make things redundant. There’s a lot of thought that’s gone into how to do that in an energy efficient way.

The other competing interest here is the one you mentioned, which is the dynamic nature of the load. Any consumer-facing application probably doesn’t have a constant load. We live in an on-demand world, not a constant-demand world. So how do you be efficient over a dynamic pattern, an unpredictable pattern, of usage?

It’s a tough engineering challenge. I think, for a long time, people have been somewhat satisfied with meeting the first objective, which is availability, and not necessarily optimizing around the second, which is efficiency. It’s the same way we expect our car to get better gas mileage on the highway versus in city driving, which is much more stop and go.

Data Center Spotlight: To talk about the dynamic load issue – back in, I don’t know if it was 2008, 2009, around then – when Nick Carr wrote his book, “The Big Switch,” he said the average enterprise, in their data center, was running on most days at about 20% of capacity. He was making the case for cloud computing with that stat and with a lot of other elements…

John Sheputis: I’m going to interrupt you. He was right.

Data Center Spotlight: Yeah, he was right. I want to ask you a question because this interests me and I generally can’t get a good answer from a lot of people. Since then we’ve had a lot of virtualization; we’ve had a lot of data center consolidation. If you had to guess, what are most enterprises running at on a daily basis, as a percentage of their data center capacity?

John Sheputis: It’s hard for me to answer the enterprise because I don’t have any visibility into that, but my guess is it’s higher than what it used to be, and largely due to some of the technologies that you’ve talked about.

Historically speaking, every time you had a new application you had to get a new server; it had to be configured to run that application, and nothing else could run on it because it might interfere. You had all these isolated silos – and I don’t mean departmental or functional silos – I mean devices. A storage device only stored. The application server, which was a device, only ran one application. I think a lot of the low utilization is simply because they were probably only operating during business hours.

If you look at an office where people work, it’s not utilized 100% of the time. It’s utilized when people are there, which is a working day. If you’re a factory, maybe it’s longer, but it’s not 100%. I go back to this: the idea that we’re supposed to continuously run all our IT at 100% is probably a fallacy for most applications because they simply aren’t used 100% of the time.

To contrast that with what we see in the cloud, which is a more dense form of IT, the utilization rates are much higher. Going back to a decade ago, or so, when virtualization was making its inroads, to today where it’s largely widespread and now you’re seeing applications run on the cloud and run on platforms, you have much higher utilization rates.

We’ve had clients, very large ones – around that time we loaded up with our first big lease in San Jose, our Silicon Valley facility, and it was for Facebook.

The way my business runs is we are, effectively, a power… We don’t run the IT, we are essentially providing a feed stock for the IT, which is critical power and cooling, and physically securing a building, but everything inside the data hall – all the IT, all the applications, all the data – are the domain of the tenant. That’s their IT operation.

When we think of utilization, I’m not actually looking at CPUs. I’m looking at, “Are they drawing the amount of power that they have at their…what’s their capacity?” I would say your cloud guys, when they’re stable, when they’ve ramped up, will routinely hit 70% or 80%, some cases 90%, which is phenomenal.

If you think about all those servers, and all those power circuits inside of a room, and in a big room there’d be hundreds or thousands, to have an average utilization of 90% – that means the worst ones are at 80% and some of them are close to 100%. We’ve come a long way. Sorry for the long-winded answer. We’ve come a long way.

Data Center Spotlight: It’s an interesting topic and one where I don’t know if people appreciate the inroads that the data center industry has made – and I mean the IT infrastructure industry broadly, which certainly includes the cloud side of things. I don’t know that people appreciate the inroads that companies have made toward operating in a much more positive manner across the board.

John Sheputis: No. I’m sorry to interrupt you again. There was a series of articles about four years ago in the New York Times. Essentially, they were calling out, if not just downright blasting, the data center industry as being these centers of pollution because no one wants to talk about how much power they were using.

Make no bones about it, these are highly resource intensive buildings. They use tens of megawatts, which is the power load of a small city, in one building. The energy intensity of a normal building, like an office, even one that’s running a lot of computers for office applications and densely packed with people, might be in the tens of watts per square foot – data centers are in the hundreds. They are 10 plus times the power density of most other buildings we think about. They do use a lot of power.
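
To make those intensity figures concrete, here is a rough comparison; the watts-per-square-foot values come from the ranges John mentions, while the 100,000 square foot building size is an illustrative assumption, not a figure from the conversation:

```python
# Rough comparison of building power intensity, using the watts-per-square-foot
# ranges mentioned above. The building size is an assumption chosen only to
# make the totals concrete.

BUILDING_SQFT = 100_000

profiles = {
    "dense office": 30,   # tens of watts per square foot
    "data center": 300,   # hundreds of watts per square foot
}

for name, watts_per_sqft in profiles.items():
    total_mw = watts_per_sqft * BUILDING_SQFT / 1_000_000
    print(f"{name:12s}: {watts_per_sqft:4d} W/sq ft -> {total_mw:.1f} MW total")

# Data centers land at roughly 10x the intensity of a dense office building.
print(f"Intensity ratio: ~{profiles['data center'] / profiles['dense office']:.0f}x")
```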

The question is, “Are they doing it responsibly?” The article pointed out the fact that the facilities were power-intensive, but I don’t think the industry has done a good job of pointing out all the benefits that all this technology use has brought about.

Data Center Spotlight: When they ran that series of articles it was overblown, but not entirely unfair, because the Berkeley National Lab had done a study showing that data center energy use had grown 90% from 2000 to 2005, had grown another 24% from 2005 to 2010, and had reached the point where it was consuming about 2% of all energy in the United States.

I recently, here on Data Center Spotlight, interviewed the folks from the Berkeley Lab who redid that same research in, obviously, more advanced ways with more information. From 2010 to 2014 there was 4% growth in energy usage, and they’re anticipating another 4% growth from 2014 through 2020. When you think of the application growth, with LinkedIn’s IT infrastructure demand going up by about 34% annually, which is not unusual at all, the fact that energy usage grew only 4% between 2010 and 2014, and is anticipated to grow only another 4% between 2014 and 2020, is really borderline heroic by the data center industry, John, and no one talks about that.
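
For context, here is the arithmetic that converts those cumulative figures into compound annual growth rates, so they can be set against LinkedIn's roughly 34% annual demand growth; the percentages are as quoted in the conversation, and the conversion itself is just an illustration, not part of the Berkeley study:

```python
# Convert the cumulative growth figures cited above into compound annual
# growth rates (CAGR), for comparison with ~34% annual IT demand growth.

def cagr(total_growth, years):
    """Annual rate that compounds to the given total growth over the period."""
    return (1 + total_growth) ** (1 / years) - 1

periods = [
    ("2000-2005", 0.90, 5),
    ("2005-2010", 0.24, 5),
    ("2010-2014", 0.04, 4),
    ("2014-2020 (projected)", 0.04, 6),
]

for label, growth, years in periods:
    print(f"{label:22s} total {growth:>4.0%} -> {cagr(growth, years):.1%} per year")

print(f"{'LinkedIn IT demand':22s} -> ~34% per year")
```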

John Sheputis: I agree. There’s a natural incentive on behalf of both the providers, such as myself, and the users, because these resources are expensive. From my standpoint, as a provider, a producer of this capacity, I don’t want to waste it. If I have a fixed amount of electrical capacity, I want to sell it as efficiently as I can because that’s like wasting a floor of a building and not renting it.

If you’re a user and you have a certain amount of capacity, it’s expensive to rent. It’s expensive to own, it’s expensive to put all these… A rack full of servers today costs hundreds of thousands of dollars. The rack, itself, the four steel posts, the cabinet that the racks go in – they cost a couple thousand dollars. The IT that goes inside it may cost a quarter million. No one wants to waste that. They want to use it as efficiently as possible. The electric bills are enormous.

Data Center Spotlight: The profit motive and the desire to drive down expenses, that has led to virtualization, that has led to more efficient server utilization, that has led to improvements in cooling strategies and technology. All those things have combined to contribute to that increase in efficiency. It’s widespread in the industry, almost universal in the industry. You don’t see the New York Times doing a big front page story about the great work that’s been done in the data center industry.

John Sheputis: No they don’t, but you hear… What people forget, I think… Let’s say I built a car factory and put all those big steel machines inside. You don’t necessarily update those every couple of years. In the world of IT the rate of refresh is high. All those servers have a fairly finite life. It’s not as bad as, say, your smartphone, where you’re replacing it every year or two years. I don’t know many guys that have a five-year-old smartphone.

These things are always being refreshed. Every time a new one comes out it probably uses more gross power, but it probably has an order of magnitude more storage, or more computing. So if you think about the productivity that goes into that – imagine a new iPhone that came out with less memory, a shorter battery life, and a crappier screen; you can’t even think about that. That constant refresh keeps making things more energy efficient.

Data Center Spotlight: As an aside, I’m actually experiencing that because I have misplaced my phone, so I’m back to an older phone. When you say no one’s using a four or five year old smartphone, you’re talking to a guy who for the next couple days is still going to be doing that.

John, let’s get on to the LinkedIn project. I know that there’s a certain amount of information that LinkedIn has shared, but why don’t you tell us what you can about this project? What was it that made this project different and tell us how you accomplished that?

John Sheputis: Thank you for saying that. It’s unusual for us to talk about our tenants, but in this case we have a limited amount of permission to reference things we’ve said together. LinkedIn chose Oregon for a variety of reasons. I think they liked the climate; they liked the green and sustainability culture of Oregon. It was close to a lot of connectivity, so from a geographic standpoint Oregon had some advantages, including being farther from some of the natural hazards of California, such as seismic threat.

Second, why here and why us? They wanted to produce something a little different. In other markets, they’d more or less gone with premium, but fairly stock, product, meaning highly reliable, highly usable, well designed, but it wasn’t their design.

In this particular case, they wanted to try a couple of new things. From an electrical standpoint they wanted to move to higher voltage and more flexible use. The basic math of electrical distribution is that higher voltage means less line loss. As for the method of delivery, rather than having a lot of wires, we use busbars to create more pooling and more flexible usage. It’s also more energy efficient.
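
A quick sketch of the line-loss arithmetic behind that point: for the same delivered power over the same conductor, higher voltage means lower current, and resistive loss scales with the square of the current. The specific voltages, rack load, and conductor resistance below are illustrative assumptions, not figures from the LinkedIn design.

```python
# Illustrative I^2 * R line-loss comparison: same delivered power over the
# same conductor at two distribution voltages. The voltages, load, and
# resistance are assumed values for the sake of the example.

def line_loss_watts(power_w, voltage_v, resistance_ohms):
    """Resistive loss P = I^2 * R, with I = P / V (single-phase simplification)."""
    current = power_w / voltage_v
    return current ** 2 * resistance_ohms

POWER = 24_000      # assumed load: one fully loaded 24 kW rack
RESISTANCE = 0.05   # assumed ohms of conductor resistance on the run

for volts in (208, 415):
    loss = line_loss_watts(POWER, volts, RESISTANCE)
    print(f"{volts} V: {loss:,.0f} W lost ({loss / POWER:.1%} of delivered power)")

# Roughly doubling the voltage cuts the resistive loss to about a quarter.
```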

From a cooling standpoint, to accommodate this sort of dynamic nature of the load, we did something inside the data center which I think is kind of radical. Instead of having these big consolidated air handling and cooling units that are pulled away from the IT, where there’s a lot of work to… All these servers are creating hot air. It’s more than this, but from a physical standpoint they are space heaters, and there are a lot of them, and we don’t know how hot they’re going to run or when they’re going to run hot.

To move around that much cold air and get it to where you want it to be takes energy. What we did in this case is we actually took the cooling units and put them in what are called “chilled doors.” Essentially, the refrigerating unit, the coil that’s turning hot air back into cooler air is on a hinge on the back of the cabinet next to the IT.

What it means is that we’re not really moving the air around very much. It’s very inefficient to blow around large volumes of air. It’s fairly efficient to distribute water, so what we have is a series of plumbing and fixtures, and a lot of risk-reduction measures to reduce the chance of leaks and to detect leaks, but the water is essentially being put in coils right next to the IT. It’s essentially like having the rows of hot and cold containment we use in a lot of efficient data centers, except here the containment unit is the rack, itself.

So a rack that would normally run at idle with a few kW of power and heat could flex up to 10 times that level, but it could be next to one that’s still at idle. The room would remain the same temperature because the air that’s being exhausted out the back of the servers is going straight into a coil and being cooled.

This has two or three really interesting advantages: Because it’s so easy to distribute water versus distributing volumes of air, you can save money on a lot of infrastructure that would normally be associated with that. You also can use warmer water because you’re, essentially, cooling at the point. In a typical data center chilled water’s delivered at 50 degrees or maybe 55 degrees. In this case we’re delivering water at 67 degrees, which means the real energy needed to cool the water is significantly less.
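
For a sense of scale, here is a back-of-envelope look at the water flow a rear-door coil would need in order to absorb a fully flexed rack; it assumes the 67-degree supply water John describes, the 24 kW top of the facility's stated per-rack range, and an assumed 12-degree temperature rise across the coil.

```python
# Back-of-envelope: chilled-water flow needed for a rear-door heat exchanger
# absorbing a full rack load, from Q = m_dot * c_p * delta_T.
# The 12 F rise across the coil is an assumed figure for illustration.

RACK_LOAD_KW = 24.0   # top of the 4-24 kW per-rack range
SUPPLY_TEMP_F = 67.0  # supply water temperature cited above
DELTA_T_F = 12.0      # assumed temperature rise across the coil
CP_WATER = 4.186      # kJ/(kg*K), specific heat of water

delta_t_c = DELTA_T_F * 5.0 / 9.0                 # convert the F rise to a C/K rise
mass_flow_kg_s = RACK_LOAD_KW / (CP_WATER * delta_t_c)
flow_l_min = mass_flow_kg_s * 60                  # ~1 kg of water per liter
flow_gpm = flow_l_min / 3.785                     # liters per minute to US GPM

print(f"Mass flow:   {mass_flow_kg_s:.2f} kg/s")
print(f"Volume flow: {flow_l_min:.0f} L/min (~{flow_gpm:.0f} GPM) per 24 kW rack")
```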

That has two effects: one, I’m allowed to use external methods, like free cooling and ambient cooling, to chill my water between 200 and 300 days a year; and two, water at 67 degrees reduces the number one threat of water in a data center, which is condensation.

People always say they don’t want water in a data center. I’ve never seen a pipe burst in a data center because most of these pipes are commercial industrial grade. What I have seen, though, is condensation. By having water this warm, there is no risk of condensation.
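
The condensation point follows directly from dew point: a coil surface only sweats if it is at or below the dew point of the surrounding air. Here is a minimal check using the Magnus approximation; the room temperature and humidity are assumed typical data hall values, not measurements from this facility.

```python
import math

# Condensation check: a coil surface sweats only if it is at or below the
# dew point of the surrounding air. Dew point via the Magnus approximation.
# Room temperature and humidity are assumed typical values, not measurements.

def dew_point_c(temp_c, rel_humidity_pct):
    """Magnus approximation for dew point (degrees C)."""
    a, b = 17.27, 237.7
    gamma = (a * temp_c) / (b + temp_c) + math.log(rel_humidity_pct / 100.0)
    return (b * gamma) / (a - gamma)

def f_to_c(f):
    return (f - 32.0) * 5.0 / 9.0

def c_to_f(c):
    return c * 9.0 / 5.0 + 32.0

ROOM_TEMP_F = 75.0   # assumed data hall air temperature
ROOM_RH_PCT = 45.0   # assumed relative humidity
WATER_TEMP_F = 67.0  # supply water temperature cited above

dew_f = c_to_f(dew_point_c(f_to_c(ROOM_TEMP_F), ROOM_RH_PCT))
print(f"Dew point at {ROOM_TEMP_F:.0f}F / {ROOM_RH_PCT:.0f}% RH: {dew_f:.1f}F")

if WATER_TEMP_F > dew_f:
    print(f"{WATER_TEMP_F:.0f}F supply water stays above the dew point: no condensation expected")
else:
    print(f"{WATER_TEMP_F:.0f}F supply water is at or below the dew point: condensation risk")
```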

Data Center Spotlight: The two main points are that you’ve reduced the need for air flow, which is very expensive, by utilizing new technology and a new method, and you’ve reduced the need for chilling the water, which is also a very expensive element of the data center. It sounds like there’s a lot around it, but those are the two key breakthroughs there?

John Sheputis: Yes. The key thing in LinkedIn’s new design is that we are allowed to take advantage of much warmer water and we are allowed to distribute that water fairly efficiently. Water’s easy to move around. The net effect is that you have a PUE, which is a common metric of energy efficiency for data centers, of 1.06. Most interesting is that we get to stay close to that for a very wide range of use cases. We believe this will be among the most efficient merchant data centers in the world.
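
For readers unfamiliar with the metric: PUE is total facility energy divided by IT energy, so 1.06 means roughly six percent overhead (cooling, power conversion, lighting) on top of the IT load. The sketch below works through that arithmetic; the 1.7 comparison value and the annual IT energy figure are illustrative assumptions, with 1.7 being a commonly cited industry average of that era rather than a number from this conversation.

```python
# PUE = total facility energy / IT equipment energy, so overhead = PUE - 1
# expressed as a fraction of the IT load. The 1.7 comparison point and the
# annual IT energy figure are assumptions for illustration.

IT_ENERGY_MWH = 10_000  # assumed annual IT energy

for pue in (1.06, 1.7):
    total = IT_ENERGY_MWH * pue
    overhead = total - IT_ENERGY_MWH
    print(f"PUE {pue:4.2f}: total {total:8,.0f} MWh, overhead {overhead:6,.0f} MWh "
          f"({overhead / IT_ENERGY_MWH:.0%} of IT load)")

saved = IT_ENERGY_MWH * (1.7 - 1.06)
print(f"Overhead avoided at PUE 1.06 vs 1.7: {saved:,.0f} MWh per {IT_ENERGY_MWH:,} MWh of IT energy")
```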

I wanted to add one more thing – again, sorry for being long-winded – the electrical and mechanical efficiencies aside, and I think LinkedIn deserves a large amount of credit for this, and we just announced this as well: they’ve received the Uptime Institute’s stamp of approval for energy efficiency, and not just for their data center, for their IT.

Think about it. The building was built to the highest standards from the U.S. Green Building Council. There will be a combination of LEED Gold and LEED Platinum for the two different projects.

The facility is being powered through a direct-access purchase contract with the Bonneville Power Administration, which is the Nation’s largest hydro generation manager. So you’re using, effectively, carbon-free power. It was built efficiently; it’s being powered by efficient power.

The PUE and energy efficiency of the plant and the management of the data center are to the highest standards per the Uptime Institute. And in the IT itself, all the techniques regarding virtualization, converged infrastructure, tracking, and reporting, essentially all the methods and controls one would expect in a wonderfully efficient operation, are in use.

I would think the LinkedIn people can probably state, with great credibility, that this is the most sustainable IT operation in the world.

Data Center Spotlight: John, from your perspective as the wholesale data center provider, was this a risky project for you? How were you able to ensure profitability on this project, and given that there’s so much new thinking in this project, did that limit the amount of competition you had from your wholesale data center competitors?

John Sheputis: You asked two questions: one, were there risks, and two, how did that affect competition? Yes, there’s risks. Anytime you try something new there’s risks. I think there are right ways to go about mitigating those risks. We were trying some new things here. We had done a lot of the things we talked about before, but the new piece was clearly the…

In fact I think it’s the largest installation of these chilled doors, or rear door heat exchangers, in the world. Normally you see that used for spot-cooling, like in a room where there’s limited other options, so we put in one of these doors. Essentially put a refrigerator in the middle of an IT operation. In this case this was the default.

Second, how did we engineer our way around those risks? We did it through a collaborative design process. We got one of the units early on. We hired a couple of engineers to figure out how that was going to fit into the design. There was a tremendous amount of up-front thought and working together. This wasn’t like they showed up with a list and said we need bids on Friday. This was a thought-out change in design, and it was meant to accommodate flexibility, which it’s already doing.

To your last question on competition – yeah, I’d like to think my company distinguishes itself on a fairly regular basis by trying things first and using those for the advantage of both ourselves and the marketplace, and for the benefit of our clients. We’re the first to do LEED certification, first to do direct access, first to pass through the PUE, first to get the management and operations Uptime stamps, and first to, essentially, underwrite our clients’ participation in efficient IT. This is just another way of doing that.

Data Center Spotlight: John, the multi-tenant data center and the colocation business is a very capital intensive business. The investors are much more likely to be real estate oriented investors than they are technology investors. They’re very risk-averse. They’re not your typical technology investor shooting for the moon, looking for a unicorn among a number of losing investments. Instead, they want a guaranteed return, or as close to a guaranteed return as they possibly can have.

Since this project involves some elements that were out of the ordinary, did your financial backers sense risk in the project and ask a lot of questions about it, or since you were building for a specific tenant, were they fully onboard?

John Sheputis: First of all, you said a lot of things correctly. The investors in this business are not technology speculators. Essentially, we’re spending a lot of money, and we expect to make it back over time in rent. The returns are lower than you would expect in a speculative technology venture and, arguably, that’s because it should be a lot lower risk. That’s a universal relationship.

In this case, we would have never specced a data center with this technology on our own. This was all agreed to up front, that we would use this method of cooling. The electrical methods here are largely incremental change, so I didn’t see a lot of risk in that. The risk was mostly in the design, and the use of this new equipment for mechanical cooling.

To offset that risk, 1) we all agreed to do it up front before we pursued it, and 2) the actual doors, themselves, were purchased and are owned by LinkedIn. We are more or less a provider of chilled water and conditioned power to these doors.

We shared the risk from both a financial standpoint and from an engineering standpoint. Unlike a typical lease, there’s a high degree of operating component in a turnkey data center, and we share that. That’s not unusual for this sector. We spent a lot of time talking about whose responsibility it was to procure, to inspect, to accept, to test, to implement, to manage, and to control all these devices.

Data Center Spotlight: Let’s move back to technology. We’ve been hearing for a long time that higher density deployments are on the way in the data center, yet it hasn’t really happened, and one of the reasons it hasn’t happened, maybe the key reason, has been that it’s expensive and difficult to properly cool a higher density environment.

It sounds like what you’re doing may be building a bridge towards the long-awaited higher density deployments in the data center world.

John Sheputis: There’s been people arguing about how dense data centers get and what are the other methods to be more efficient and still be usable. There are limits to how much you can cool with air. I think what we’re proving here is that this is a very efficient method to cool, not only high density, but dynamic and high density, because running continuously at a high density is very different from having unknown levels of usage. This is a good method for that.

I think density is going to continue to edge upward. I don’t know where the plateau will be, but five or seven years ago I’d say six or eight kW per rack was considered dense, and now it’s at least twice that, and we’re seeing three or four times that. In this particular facility we can be anywhere from 4 to 24 kW per rack. It’s not just the high density, it’s the dynamic nature of the density.

Data Center Spotlight: I was going to say that, that’s not static – 4 kW to 24 kW – that’s…

John Sheputis: Anything that’s running at a static load, you can optimize. Trying to optimize something that’s got an unpredictable and wide range of uses, that’s a much harder engineering problem to solve. I think this is a good method.

I’ve seen other methods where the servers, themselves, are bathed in a tub of mineral oil, or something similar, and I think there are advantages to using those, but a lot of those methods interfere with your ability to actually touch the IT, which may not be frequent, but if it’s incredibly inconvenient, it’s going to change the way you think about it.

Data Center Spotlight: John, as interesting as this topic is, and as interesting a guest as you’ve been, I don’t like to ask people to spend much more than 30 minutes or so listening to these podcasts, so what’s the point you’d like to make about this that maybe I haven’t properly led you to?

John Sheputis: I would say this: challenging convention is the only way you’re going to make improvements. I think what we have proven here is what you can accomplish by asking some new questions – and we’ve been talking about the plant, we haven’t talked at all about what LinkedIn did inside the data center to improve their network and their usage of their IT, which is really a question for them – but you have to be willing to revisit these things.

I think what we’re seeing here is a large scale of innovation driven by people who are challenging convention which is not unusual, but being open about it, which is unusual. I think the openness is something I’d love to see more of in our industry.

Data Center Spotlight: Do you think we will?

John Sheputis: The returns are there. The energy usage is so high that I think people trying to all invent their own wheels is not going to lead us to the best outcome. These are methods to be better in things that are critical, but energy efficiency is not a competitive advantage. Energy efficiency is something we should all be helping each other with, not something you should be denying somebody else an opportunity to do better on.

Data Center Spotlight: I think critics of the data center industry don’t appreciate how much of that goes on within the industry, which is maybe something the industry needs to do a better job of getting the word out about.

John Sheputis: I agree with you 100%, and I think what you’re doing here with Data Center Spotlight is a great example of that, and I wanted to thank you for the opportunity.

Data Center Spotlight: Thanks for coming on with us, John, a very interesting topic, and a half hour well spent. I appreciate it.

John Sheputis: Thanks so much.

