A great many factors go into developing your organization's data center strategy. There are numerous elements that influence future space, power, cooling, and compute needs. Those elements are different from what they used to be, and they continue to change. Not properly understanding your future needs can lead to significant problems, both operationally and financially.
If you are considering a data center consolidation, data center transformation, or otherwise evaluating your data center strategy moving forward, this interview with Jeff Gilmer of Excipio Consulting is a great listen or read, as the transcript is below. A few highlights:
- Jeff isolates a number of trends that are having a great impact on how you should approach your data center future, including the continuing move toward virtualization. What many organizations are not properly evaluating is the change in the physical size and compute power of servers, the increase in density, and how all of that is impacting cooling.
- How all of these elements contribute to properly sizing your data center, along with the financial impact that can have.
- The common problem of a disconnect between facilities teams and IT staff, and the issues it can cause, not only operationally but also in properly planning future data center needs.
- The five categories of IT you need to understand to properly ascertain your future needs when developing your data center strategy.
- Two examples of client engagements, including one where estimates of future use were off by a multiple. More accurately evaluating future needs of space and power led to considerable (massive, actually) savings.
This is a very valuable interview for anyone engaged in a data center consolidation, a data center transformation, or otherwise attempting to accurately project future data center usage.
You can listen to the podcast in the player, read the transcript below, or both.
Transcript of Jeff Gilmer podcast:
Kevin O’Neill, Data Center Spotlight: This is Kevin O’Neill with Data Center Spotlight, and I appreciate you joining our podcast today. The topic for discussion is, how does IT impact your overall data center strategy. We’re fortunate to have Jeff Gilmer from Excipio Consulting joining us, and Jeff, we appreciate you joining us today. I’d describe you as a data center and cloud business analyst, but I was wondering if you could quickly give us some more specific overview of your background, and what is it that you and your colleagues at Excipio do?
Jeff Gilmer, Excipio Consulting: Sure, Kevin, that would be great. Excipio Consulting is a business advisory group that focuses on a methodology we developed in the late 1990s, and that methodology is a framework that allows us to assess a multitude of different types of IT projects and create IT strategies. Data center lifecycle management is one of the six core areas we focus on, and within data center lifecycle management, we get into facilities assessments, and we also deal with the compute infrastructure. We look at external solutions, such as colocation, wholesale, outsourcing, or cloud. We also get a little bit into the applications side, and other areas. So when it comes to talking about compute infrastructure related to IT and the data center itself, we're going to talk a little bit about the methodology and the process we use today.
Data Center Spotlight: Okay, good, and what I've seen you do, Jeff, is you go into organizations and help them assess their future needs, whether what they're doing now suits those needs, and if not, what would be better for them. Is that an accurate, if generalized, description of what you do?
Jeff: Yeah, that's correct. Our methodology really has three main components within that framework. The first thing we do is go in and establish the current state with the client: help them understand the current status of the data center, what risks they may be facing, what issues they may have between IT and the business side, and then also identify the financial aspects, what it costs to run that data center facility today, including all the operations and compute infrastructure. Then, because we have done hundreds of data center assessments in recent years, we have a very nice database of comparative information, and we can do a comparative analysis. We can point out for that client: where are they doing things effectively, where can they optimize, where are they cost effective, and where are they not? Where are the identified risks? Which risks should be mitigated, or not mitigated, per that comparative analysis? And then we roll that into a future state strategy, which defines the solutions or recommendations for the identified risks, issues, cost structures, operational improvements, and other factors, and then recommendations on what types of future data center solutions they should implement.
Data Center Spotlight: Good, this will be interesting for anyone who’s trying to navigate their future data center and cloud needs. It seems like in data center discussions, Jeff, we’re always talking about trends and direction, and trying to stay ahead of the trends and direction, and have the future solution meet where we think the trends and direction are going to end up. Now, when it comes to trends in compute infrastructure, what are you seeing with your clients that are having the most impact?
Jeff: Well, there are several things we could get into, but some of the more prevalent ones would be virtualization, and everyone's well aware of virtualization from the server side. If you go back and look five, six, seven years ago, most organizations probably had very few virtualized servers, and everything was physical, and now that you've gotten into the virtualization phase, it's become standard.
Now that your software manufacturers and application people are supporting it, you're seeing a tremendous amount of virtualization, so that's one impact. The second impact we're seeing is the physical difference in the compute power and physical size of servers today. If you look at servers seven, eight, nine, ten years ago, they were relatively large; people would relate to them as being the size of an old desktop. Today, they're the size of a laptop, or even smaller, and a great example I give people to think about, when they're thinking about an overall data center strategy, is this: my phone today has more computing power than a server had five years ago, and almost everyone's phone is that way. Who's to say that five years from now, our servers won't be the size of our phones? So when you start to look at that, it has a huge impact. The biggest impact, of course, is reduced square footage. If you look at the trends, with virtualization and even storage densities today, what used to be 1 terabyte in a rack is up to 10, even 100 terabytes in a rack. You're seeing a great reduction in square footage. Now, that's having an implication on a couple of other factors; your other capacity factors in the data center would be power and cooling.
Power consumption: we're seeing a higher density of power per rack, because you have smaller devices, but they're also more powerful. In terms of kW per rack, 5 kW per rack is pretty much your minimum standard, and 10 kW per rack is a relatively normal standard you should probably design for in a data center by today's standards. The side effect of that, though, is that while you have more power per rack, power consumption has actually been reduced when you optimize the data center. Because you're using virtualized servers or enterprise-class servers, you're consolidating physical servers into larger physical servers, and you have greater storage densities. Overall, power consumption is being reduced, because you have fewer racks in the data center.
The flip side of that is, cooling requirements are actually going up. Because it is more dense, you have more hot spots and more density that you really need to cool within a set environment. So that is a factor that people are taking into account when you look at trends. So, overall, reduction in square footage, increase at total power in the rack, and overall reduction in power consumption, but yet an increase in the cooling requirements. Those are the trends we’re seeing in the data center environment today.
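A quick back-of-envelope sketch shows how these two trends coexist; the rack counts and the before-density figure below are hypothetical numbers for illustration, not figures from the interview:

```python
# Hypothetical illustration of the trend Jeff describes: consolidation
# raises power density per rack while lowering total power consumption.

racks_before = 100            # many sparsely loaded legacy racks (assumed)
kw_per_rack_before = 3.0      # low legacy density (assumed)
total_kw_before = racks_before * kw_per_rack_before

racks_after = 20              # fewer, denser racks after virtualization (assumed)
kw_per_rack_after = 10.0      # today's common design point per the interview
total_kw_after = racks_after * kw_per_rack_after

print(total_kw_before)        # 300.0
print(total_kw_after)         # 200.0 -- lower total power, but each rack
                              # runs hotter, so cooling demands go up
```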
Data Center Spotlight: Okay, that’s an interesting one. Well, Jeff, can we ask you to expand on the impacts that these trends are having on the decisions being made by your clients, and specifically, the strategies that they are taking to best respond to the changing data center environment? How are they attacking their overall data center strategy in the face of these trends?
Jeff: Okay, yeah, so as you may remember, Kevin, in the past a lot of times the data centers were operated by the facilities group, and the facilities group made the decisions related to your capacity requirements within the data center, space, power, and cooling, and the facilities team really determined how large that data center should be, what direction it should go, redundancy, and other factors. Today, the technology we just talked about, related to the servers, the storage, the network, and some of the other factors, is a huge driver for the data center facility. So when you look at an overall strategy, and you look at the trends we just mentioned, they're going to really drive how that facility needs to be designed and how that facility needs to be operated. So you can't have the facilities group and the technology group each operating in their own vacuum, unaware of each other and not working together.
Any decisions around the data center really require both of those entities. They require the technology side, including the servers, the storage, the network, and the people that understand the IT aspects of it, and they also require the facilities side, from the main capacity of space, power, cooling, along with other factors, such as security or geographical location, or other key parameters that might be involved in that facility.
Data Center Spotlight: Do you find that technology and facilities groups are interacting more effectively than maybe they were in the past, three, four, five years ago?
Jeff: I would say that's really client-driven, and client-dependent. We see some of them that work very well together, and very effectively, and in most cases those are the most optimized data centers, and the most effective from an operations standpoint. But we do still see some of the old guard in each of those areas, people who have been there for a long time, since it was a mainframe data center, and you still see some separation, and unfortunately you don't see the cost savings and the better operational results in those facilities where they're not cooperating together.
Data Center Spotlight: Okay, if someone’s looking to develop a data center strategy moving forward, what are the first steps that need to be incorporated into that data center strategy? With all the changes that are happening now, where do you begin? Maybe you can share just a little bit of your process, Jeff, that people could use to model in their own strategic development.
Jeff: Sure. I'm going to start at a higher-level view, I'll go through this, and then you'll probably have some more questions, Kevin, and we can dig into more details. Initially, when you look at the IT side, there are really five categories that we group things into, and you really need to understand your strategy in each of these areas. First, the operating system: where are you going with your operating systems? Are you a mainframe environment today? Do you still have mainframe operations functioning? Midrange: what types of midrange do you have, is it Linux, is it Unix? Is it HP-UX, is it AIX? What type of environment are you operating within, and where are you going with that?
And then you take the x86 environment, or the Wintel environment, and your structure there. In the past, there were multitudes of operating systems; to optimize, you really want to be at somewhere between three and five operating systems at the maximum, and that includes different versions of operating systems within the environment. So, if you're in a Windows environment and you still have Server 2003 out there alongside the current version, we look at each one of those versions as a separate operating system.
So your first step is to develop your overall strategy: where are you going to go with those operating systems, and what are going to be your platforms, your standard images, and the other structures for your overall strategy? From there, you can then start to look at your applications, and really make decisions on those applications. Should I be running this application in a midrange environment, in an HP-UX environment, or should I be running it in a Wintel environment? Many of those applications today can function in either of those environments, including their backend database requirements and the utility servers that support them.
So, looking at those applications is key, and secondary to that is application versions. A lot of organizations we see think it's more cost effective to stay on an older version, that the cost of an upgrade is too expensive, when, in reality, if they looked at it financially, they'd probably find that the upgrade, and the consistency of their applications across the board, would flow through and help them optimize and reduce cost, reduce risk, reduce support structures, resource demand, and a whole bunch of other criteria. So, optimizing the operating system strategy, looking at your applications, and looking at your version control leads to your hardware technology, mainly your servers, right? Are you going to run that on a mainframe? What type of servers are you going to have? Are you going to have commodity, are you going to have blade, are you going to have enterprise-class servers? Maybe you should have a mix of those.
What can be virtualized? Are we going to do application stacking? How are we really going to approach that from a consolidation standpoint? There's a wide range of approaches there.
But you need to understand that server strategy. Once you have the server strategy, then you can define your storage strategy. What’s going to be that type of strategy that you’re going to use? Is it going to be a SAN environment, a NAS environment? A big one is, what are your data retention policies? If you haven’t worked with the business to drive your data retention policies, it’s going to be very difficult to drive your overall storage strategy that ties in to your server strategy, your applications, and your operating system.
And then finally, you look at your network: your network infrastructure, your LAN equipment, and your WAN equipment. What do you really need for your switches, your routers, your firewalls, for your connectivity to get everything from within that data center to the outside world? Now, all five of those may look independent, but if you put together a strategy across all five of those areas, you're now going to be able to go to your data center facilities and say, we're going to go from X number of servers to Y number of servers. This is going to drive our power requirements from A to B. This is going to impact our overall cooling requirement. This is going to impact our overall space requirement. Here's what we need for redundancy. Here's what we need for geography, for our secondary site, for those critical applications we've defined in our application strategy to recover. We maybe don't need everything, but we need this percentage of those in a secondary site. Here's what we need for network bandwidth between those sites.
For all of those types of discussions, you can now come in with an educated plan, an educated data center strategy, and talk to those facilities groups, internal or external, whoever you may be working with, to be sure that you drive the proper requirements for your business.
Data Center Spotlight: That's interesting, Jeff. Now, as you start to dig into all this, and really understand your IT strategy, what are some of the key areas that have a significant impact on your overall data center decisions? What I'm asking is, what are the factors that people need to understand when they determine their data center strategy?
Jeff: Well, there's a multitude of those, Kevin, and we could probably spend an hour alone just talking about that one topic, but let me take two or three of them here that maybe we can cover in the next few minutes. The first one would be consolidation. A lot of people come to talk to us and say, I want to consolidate, I want to consolidate everything and move it to the cloud, or I want to consolidate my five data centers down to one or two, or I want to consolidate my servers. What does consolidation really mean? There's a whole bunch of different methodologies or ways to go about it. Some of the more obvious ones center around virtualizing: virtualizing your physical servers into basically a digital server, so that you no longer have that physical device there.
Some of the focus is on enterprise-class servers: I've got servers today that are maybe four-core or eight-core processing devices, and I can put them in a 32- or 64-core enterprise-class server, consolidating 16 to 24 physical servers into one and partitioning it off. Some of it you would do through a rolling refresh. Or you might determine you're going to do a mass refresh, move everything to a certain blade technology, and reduce your racks. Or are you going to do this under a lifecycle management process, so you can manage your cash flow? So all these different types of consolidation factors come into play.
Now, the key thing we talk about when we work with our clients is this: we go through a list of all of these factors, and then we determine which ones are best for their overall strategy. There are some key criteria there. Those criteria include: what's the outright cash investment the business is going to have to make? If they're just going to refresh on a rolling migration based on lifecycle management, that's a fairly moderate to low cash outlay for them.
If they’re going to walk in and say, I’m going to refresh all thousand physical servers right now and move them all to blades, that’s going to be a pretty high upfront cash investment.
Another factor would be the technical risk; you can look at the technical risk of doing that. Again, a mass refresh is going to be fairly high on the technical risk, whereas if I'm only moving three to five servers a week, or I'm virtualizing X number of servers a month, I can manage that process and keep the technical risk low. Which leads into another risk: your outage risk, your chance of an outage. Again, managing your outage risk is critical. You don't want to be consolidating a data center and have that data center fail on you.
Probably a couple of other things to look at would be your overall support: what's the risk, how is this going to impact our support in actually doing the project, and then, long term, what's the impact on our resources? Are we going to get more cost effective, so that we don't need to hire additional resources in the future? Or is it going to shift workflow? Maybe we're migrating off a mainframe and we're re-platforming onto midrange, or onto Wintel. Do I need to shift my resources from mainframe expertise to another expertise? Maybe I'm going to go from internal to external with my data center, and now I need somebody who manages the data center vendor, rather than managing the equipment in our data center. So, there are all sorts of impacts from that perspective.
Again, Kevin, go back and look at those five categories. Upfront cash investment, technical risk, outage risk, support impact risk, and your overall resource impact for running it in production mode. That will help answer a lot of the questions of what’s the right way to look at consolidation, and make sure you do that with your strategy before you go talk to your facilities, from a sizing and capacity perspective.
Data Center Spotlight: When you talk about the IT people going to the facilities people, the impact of consolidation is clearly significant in the overall data center strategy, Jeff, and it really relates to your earlier comments about getting both the facilities side and the IT side on the same page and involved in the same discussion. Do you have a real-world example you can share of this joint approach, where the facilities side and the IT side worked in conjunction to help get a consolidation executed?
Jeff: Sure, well, actually, why don't I give you two? How about I give you one where they did not work together, and one where they did work together. Would that be fair?
Data Center Spotlight: Sounds good to me.
Jeff: Okay, so let's take Client #1. I won't name these clients, or share any information other than what's needed here, but both of the clients I'm going to use were fairly large: one had about 45 data centers, one had about 50. They were going to consolidate those down, so they wanted to look at their overall data center consolidation. They engaged Excipio to come in, look at those facilities, and help with a consolidation plan. In both cases, they had already received funding to build a new data center for the consolidation process. The approach, however, was different. The first client utilized their facilities group to go out and identify the physical square footage, the power demand, and the cooling demand of all of their data centers today.
When the facilities group went out, they pulled that power demand off of the UPS systems, and off of other systems in there. As you're probably aware, when you're looking at that, you have power that flows through the UPS, but also power to run the UPS itself. It's supporting the servers and storage and network, but it's also supporting the CRAC units for the cooling, or the chillers, or other areas on the facilities side that run that data center.
Well, when you go from 40-some data centers with, let's say, 2 UPS systems per facility, you're not going to have 80 UPS systems in the new facility. So you're going to have a significant power decrease when you go to that consolidated site. Instead of having 80 UPS systems, maybe you have 20. So all of a sudden, the power to run those other 60 is gone; that's a cost savings. The other thing they did is total up the square footage of each of those facilities, and that square footage came to a significant amount.
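The UPS example reduces to simple arithmetic; the per-facility and consolidated counts below are the illustrative figures Jeff uses:

```python
# UPS consolidation arithmetic from Jeff's example (his illustrative figures).
facilities = 40               # "40-some" data centers being consolidated
ups_per_facility = 2          # example count per site
ups_before = facilities * ups_per_facility   # 80 UPS systems across all sites

ups_after = 20                # UPS systems in the consolidated facility
ups_eliminated = ups_before - ups_after

print(ups_before)      # 80
print(ups_eliminated)  # 60 -- the overhead power to run these is eliminated
```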
They went to their board for approval, and they received funding to build a data center based on adding together the square footage and the power of those facilities. The issue is, they didn't do the IT strategy first, and their IT people were planning a reduction. What they actually found out, in the end, is that they needed 25% of the physical space and physical power that they had taken to the board for financial approval, and unfortunately, in this case, they had already started to construct that data center. There were things we could help them with mid-stream, but they had basically overbuilt to an extreme they would never be able to fill.
Now let's take the second client. The second client, again, had similar data centers. They already had a greenfield site picked out where they were going to build the data center. They already had a contract with their architecture firm, and they already had a contract with the engineering firm, and in this case, they had Excipio come in and visit all 49 of those data centers, but our approach was different. We started by gathering all of the IT infrastructure within all of those data center facilities. We totaled up all the physical servers, we totaled up all the virtual instances, we totaled up all the overall instances, physical and virtual combined, and we totaled the storage. I'll just take one of their data centers and give you an example.
So this data center, when we first visited it, had 445 physical servers, and it had 98 virtual servers. In total, it was running 543 server instances. Well, when we sat down with the group, we started projecting with them: where were they going to go with virtualization? Where were they with their growth? Overall, the server instances were going to grow from 543 to about 730, so we had an increase of almost 200 server instances there. But what was interesting is, we started to work with them on strategy, and we looked at different options related to enterprise-class servers, replatforming servers, and virtualizing servers. In the end, we reduced those physical servers from 445 down to 80 physical servers. So we went from 98 virtual instances to 600-plus virtual or enterprise-class partitioned server instances within the new structure of server equipment.
In reality, this was 1 data center out of 49, so you can imagine. If you built that data center for 445 physical servers, and you only needed 80, you basically were at 20% of the physical space, 20% of the power, and 20% of the cooling. We then combined that analysis across all of those facilities, and provided that information to the architectural firm and to the engineering firm to right-size the new data center, which they did. In this case, that facility was built correctly, and has worked very effectively and very cost effectively for the client overall.
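The second client's sample data center works out like this, using the figures as Jeff gives them:

```python
# Consolidation math for the sample data center in Client #2.
physical_before = 445
virtual_before = 98
instances_before = physical_before + virtual_before  # 543 server instances

instances_projected = 730     # projected growth in total server instances
physical_after = 80           # physical servers after re-architecting

# Space, power, and cooling scale roughly with the physical footprint,
# which is where the "about 20%" figure comes from.
footprint_ratio = physical_after / physical_before

print(instances_before)           # 543
print(round(footprint_ratio, 2))  # 0.18
```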
Data Center Spotlight: To not handle that correctly, and to overbuild, that's a pretty big weight to carry, having a stranded capacity problem to that extent.
Jeff: Another great example that we see a lot of is people who say, my data center's 15 years old, I don't want to upgrade it, I just want to move to the cloud, or I want to move to colocation. But they go in with a non-optimized quantity of physical servers and storage, they provide that to the external provider, and that provider gives them a quote, or a proposal, on what it's going to cost to use their services, and it's four, five, six times the cost of their current data center. The client is just sent way back on their heels: why is this costing me so much?
Well, the reality is, if they had taken the time to do the optimization of the IT strategy we just talked about, they could probably migrate into that new facility for equal or lower cost than what they had previously. But it's all dependent upon going through that strategy, working with IT to build a proper overall IT strategy on where you're going to go with those physical devices, and really understanding the power and cooling demands before you get that proposal from the external service provider.
Data Center Spotlight: You’re talking, Jeff, about moving physical servers to virtual servers, as obviously a lot of people are, and you’re talking about changing power needs earlier, which brings us to architecture, and the importance of understanding your architecture strategy before you develop your data center strategy.
Jeff: Kevin, take the term architecture and ask different people about it, and you're going to get different definitions. Just having an architecture design can mean a lot of different things to a lot of different people. For some people, it's just the overall server architecture: make, model, configuration, what's in it, what's our direction? To other people, architecture includes the applications, the database, all the interdependencies with the other applications and infrastructure from the server standpoint, all the connectivity to the storage, and what your criteria are for storage and backup and recovery. So, architecture as a term is one of the first things you really want to clarify.
Let me take it a step further, and I'll go through another example here about a client that we went into. Now, this is a fairly large client, with a significant number of servers, somewhere around 3,500 to 4,000. So, a pretty good-sized server farm inside their data center. From an architecture standpoint, they really only had commodity servers within that environment. They were not looking at any alternatives related to blade servers or enterprise-class servers.
So when we came in, we started talking to them about a strategy. We looked at both blade servers and enterprise-class servers, along with the commodity servers, and what we found is that of the 4,000 server instances, there was a large quantity that we could actually put into a high-density blade type of chassis, as an example. Now, when blade servers first came out, one of the biggest issues was that the manufacturers included the power supply within each blade server. So as you started to stack these blade servers, which are about the size of a laptop when you think about their dimensions, and you put 10 or 12 of them in a single rack, you'd have a very high power demand, with a power supply in each one, and more importantly, a very high-density cooling issue; cooling those racks once you had put those blade servers in was a problem.
So the manufacturers quickly realized that if they were going to have this actually be a viable technology, that people could purchase and implement, that they had to do something to deal with that power supply.
Data Center Spotlight: Right.
Jeff: So what they did is they created a chassis, and that chassis now has the power supply, and you install the blades within the chassis. So now we have a 3 kW power supply in the chassis that can run, in some cases, 12 to 16 blades per chassis. Now I can put three of these chassis in a rack, at 3 kW each, and I'll have 9 kW in the rack, which is under the 10 kW per rack that, remember, we talked about as pretty much the standard today, and I don't have an issue with cooling, and I don't have an issue with power.
Now, we worked with this client, and what we found is that we could actually put 25 virtual servers per blade. So if you take 16 blades per chassis, at 25 virtual servers per blade, we were able to move 400 server instances into each chassis, and in this case we put three chassis in the rack. So all of a sudden, with 48 physical blades across 3 chassis, at 400 server instances per chassis, we were putting 1,200 server instances in one rack.
Now, let's go back to your additional question, Kevin. How does architecture impact our overall data center strategy, and our overall data center facility? Prior to this, with their traditional commodity servers, they were putting 13 physical servers per rack. That equaled 74 racks, and at 5 kW per rack, they were at 370 kW of power. When we were finished, we went from 74 racks to 1 rack. We were able to cut that power from 370 kW, for the 74 racks at 5 kW, down to 9 kW. So you had a significant reduction in power, you had a significant reduction in square footage, and overall, those had significant impacts on the design of the data center, and where they were going with their overall data center design for their future facility. It didn't matter if it was an internal facility they were going to build or an external facility; this would have a huge impact when you start to negotiate contracts with an external provider, or when you start working with your engineering firm on the proper size of an internal data center.
Now, I’m going to clarify one thing here. Again, this client had 4,000 servers, and 1,200 of the 4,000 fit this type of architecture. Not everything within their environment could be consolidated into this type of high-density example. Some of it still needed to go onto commodity servers, some still had to go onto enterprise-class servers. Some still had to stay as physical server instances, depending on business drivers or other factors. But still, taking a third of those servers and reducing that environment by about 2,200 physical square feet, 74 racks at 30 square feet a rack? That is a significant reduction in space, and again, going from 370 kW to run those racks down to 9 kW is a significant difference in power.
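The arithmetic Jeff walks through above can be sketched in a few lines. All of the figures come from the transcript; the variable names and the script itself are an editorial illustration, not anything from Excipio:

```python
# Back-of-the-envelope consolidation math from the example above.
# Figures are those quoted in the interview; names are illustrative.

BLADES_PER_CHASSIS = 16   # high end of the 12-16 blades cited
VMS_PER_BLADE = 25
CHASSIS_PER_RACK = 3
CHASSIS_POWER_KW = 3

# After consolidation: one high-density rack
servers_per_chassis = BLADES_PER_CHASSIS * VMS_PER_BLADE    # 400
servers_per_rack = servers_per_chassis * CHASSIS_PER_RACK   # 1,200
rack_power_kw = CHASSIS_PER_RACK * CHASSIS_POWER_KW         # 9 kW

# Before consolidation: traditional commodity servers
RACKS_BEFORE = 74
KW_PER_RACK_BEFORE = 5
SQFT_PER_RACK = 30

power_before_kw = RACKS_BEFORE * KW_PER_RACK_BEFORE   # 370 kW
space_before_sqft = RACKS_BEFORE * SQFT_PER_RACK      # 2,220 sq ft

print(f"{servers_per_rack} server instances in one {rack_power_kw} kW rack,")
print(f"versus {power_before_kw} kW and {space_before_sqft} sq ft before")
```

The point of running the numbers this way is that rack density, not server count, is what drives the facility's space and power design.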
Data Center Spotlight: Sounds like architecture does matter, Jeff.
Jeff: It definitely has a huge impact. To summarize the last two things we talked about here: your consolidation strategy is the first thing you really need to look at and really understand. The second thing, under that consolidation strategy, is what’s the impact from a virtualization standpoint? And the third thing is, what’s the impact from an architecture standpoint? When you combine the architecture and the virtualization together, you can really optimize that data center environment through your consolidation strategy.
Data Center Spotlight: You mentioned, Jeff, that your firm Excipio has performed hundreds of data center assessments over the past few years. Are there any particular ones that come to mind which are a good example of what we discussed today? I know you’ve given us some examples.
Jeff: Let me go back to the one I mentioned earlier that was sizing their data center, and I’ll talk about the before and the after and tell you the impact that it had. Again, this organization had a significant number of data centers. They had already contracted with their architecture firm and their engineering firm, and had initially engaged them to construct that data center based on the power, cooling, and space requirements of their current facilities, with little to no involvement from the IT side of the organization.
When they were introduced to Excipio, it was to come in and actually design their move groups, and help them consolidate. Which of these 50-some data centers, or 45, I don’t remember the exact number, but which one of these is highest risk? Which one do we move first? Which one do we move second? Which one do we move third? That’s really what they engaged Excipio to help them with: to do an assessment of each of those facilities and their compute devices.
Well, what happened is, when we went in and started totaling up the compute devices to design the move groups to move them to the new facilities, within about a two-week period we came back to them and said, time out here, you’re looking to design a 50,000 square foot data center, but we’re coming up with a much smaller maximum square footage required for the next 15 years. And 15 years is what the industry uses as a base average for the life of a data center, so we’re capitalizing the cost and looking at the design as a 15-year design.
Against their 50,000 square feet, we calculated 12,000 square feet, and actually thought we could optimize and get even down under 10,000 or 9,000 square feet. So we started going through the process, and the project quickly shifted from, okay, let’s continue to evaluate the facilities and rank which ones were highest risk, to let’s also incorporate an IT assessment and understand the strategy of each of our different business departments, and what our IT strategies are overall, so that we really can understand where we are going from a compute infrastructure standpoint. What’s our direction on servers? What’s our direction on storage? What’s our direction on our networking gear, our WAN, LAN, and our firewalls and security devices?
In general, in the industry today, for most clients, you’re seeing zero to negative growth in physical servers. You’re probably still seeing some growth in server instances, but the majority are virtualized. As for the biggest demand we’re seeing for increased growth in physical space and power, what do you think that would be, Kevin? Any idea?
Data Center Spotlight: Hm, actually, no, I don’t know. I have about three things in mind, but I don’t know.
Jeff: Okay, well, none of those would be wrong, but the biggest thing is actually storage. We’re seeing an increase in storage because we find that people have what we would call inappropriate data retention policies. A lot of organizations have a “keep everything we have, because you never know when you’re going to need it” type of mentality, and unfortunately that’s driving significant cost within the storage environment. But we can table that conversation for another day, and dig into that as well.
So, my point is, when you go back and look at your overall strategy, if you’re looking for some benchmark comparisons: if your server growth is around 3% positive, down to neutral or negative, you’re probably right. If your storage growth is your largest growth component, you’re probably pretty accurate. Your network components should be pretty stable unless you’re increasing the number of employees, or increasing the number of ports or connectivity or other factors. But anyway, the result of this was, when we talked about going from 50,000 square feet to 12,000 square feet, let me put it to you in dollar terms, and what it meant for this company.
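The benchmarks Jeff lists can be turned into a simple self-check. The thresholds and flag wording below are an editorial illustration of his rules of thumb, not a tool from Excipio:

```python
# Sanity-check annual growth rates (in percent) against the rough
# benchmarks discussed above. Thresholds here are illustrative.

def check_growth(server_pct, storage_pct, network_pct, headcount_growing=False):
    """Return a list of flags where growth deviates from the benchmarks."""
    flags = []
    if server_pct > 3:
        # Most shops see ~3% positive down to negative physical server growth
        flags.append("server growth above ~3%: revisit virtualization strategy")
    if storage_pct <= max(server_pct, network_pct):
        # Storage is usually the largest growth component today
        flags.append("storage is not the largest growth area: verify retention data")
    if network_pct > 0 and not headcount_growing:
        # Network gear should be stable unless employees/ports are growing
        flags.append("network growth without headcount growth: verify port counts")
    return flags
```

For example, `check_growth(1, 20, 0)` returns no flags, matching the profile Jeff describes as typical.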
The initial data center, 50,000 square feet with four data halls, a tier 3 facility, was estimated at $251 million to build, including some office space, parking, and other areas. When we finished restructuring that design down to 12,000 square feet, it greatly reduced the land requirements, the parking requirements, the office space requirements, and the requirements for the yard where you put the generators: instead of 12 generators, you put in 3. All of those factors really compounded, and we ended up with an actual requirement somewhere in the $35 to $50 million range, depending on how conservative they wanted to be about their growth. So we were talking about building and designing a tier 3 facility that met all of their redundancy requirements at roughly 15 to 20% of the initial cost estimate.
Now that’s a significant difference. Going from roughly $250 million to $50 million is a huge difference on a 15-year capital investment, money this company was then able to utilize in other areas of the organization.
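The savings Jeff describes can be checked with the figures quoted in the interview. The script below is an editorial sketch using those numbers:

```python
# Cost comparison from the engagement described above.
# Dollar figures (in $M) are the ones quoted in the transcript.

initial_estimate_m = 251              # 50,000 sq ft, four-hall tier 3 design
revised_low_m, revised_high_m = 35, 50  # 12,000 sq ft redesign, range on growth

savings_low_m = initial_estimate_m - revised_high_m   # most conservative case
savings_high_m = initial_estimate_m - revised_low_m   # most aggressive case

share_low = revised_low_m / initial_estimate_m        # fraction of original cost
share_high = revised_high_m / initial_estimate_m

print(f"Savings: ${savings_low_m}M to ${savings_high_m}M over the 15-year life")
print(f"Revised build is {share_low:.0%} to {share_high:.0%} of the original estimate")
```

Run as-is, this shows roughly $200 million or more in avoided capital cost, with the revised facility at about 14 to 20% of the original estimate.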
Data Center Spotlight: Sounds like an effective engagement, Jeff.
Jeff: Very positive, matter of fact, this client did multiple engagements with us in other areas as well, following this one.
Data Center Spotlight: Okay, well, terrific. This is some good stuff, Jeff. I don’t want to firehose people too much, so I think this might be a good stopping point. I look forward to talking to you soon about more of these issues, but in the interim, what would be a good contact point for people to get in touch with you and Excipio?
Jeff: Sure, you can always go to our website, that’s the easiest way. There are case studies there, examples of our solution suites, and seminars, both video and audio. Our website is www.excipio.net, and there’s lots of information available there: contact information for our people around the country, our locations, and background information on the company.
Data Center Spotlight: Well, terrific. This was a very interesting discussion Jeff, and I appreciate your time and your content, this is some really good stuff.
Jeff: Thank you Kevin, I would be happy to help any time that you would like.
Data Center Spotlight: All right, we’ll do this again soon. Thank you, Jeff.