
Black Box, White Box, No Box: How to balance the equation of CAPEX and OPEX for Today’s Data Center

Chloe Shi

-Guest blog by Per Brashers, Yttibrium

Everyone in the data center space is looking to reduce CAPEX, but before you throw off the shackles of the OEMs to embrace a commodity buying strategy, there are some considerations that will help you maximize its potential.

Two Extremes: DIY or Buy from the OEM Vendors

Let’s first look at the hardware itself. A good friend of mine at iSquared has a great way of referring to the commodity approach: “unstacking margins.”  At the bottom of the proverbial pyramid is the DIY model.  This requires companies to negotiate prices directly with component vendors, handle integration, setup, and maintenance in-house, set their own leasing terms, and make strategic investments in startups.  In return, they enjoy the lowest possible cost of technology acquisition.  On the flip side, it means carrying a significant spares inventory, accepting increased risk of infant-mortality failures, staffing for troubleshooting, repairs, and maintenance, and dedicating people to managing the component vendors. All this means a lower CAPEX, but more resource requirements down the line.

At the other end of the spectrum are those who buy from the OEM vendors and take the relationship and products lock, stock, and barrel.  These accounts pay premiums for service offerings ranging from 4-hour onsite support down to a “FedEx label for replacement,” which translate to a 20-40% markup over the original equipment cost, all while still tolerating a failure rate of around 5%.

What do most Enterprises do?

For most enterprises to stay competitive while achieving optimal data center cost and operational efficiency, the answer must lie between these extremes.

In the middle exist two main incumbents and an emerging category born out of the evolving needs of today’s market.  The first category features the MongoVARs: essentially component distributors who offer integration capabilities. These VARs target CAPEX by offering integrated platforms built on the off-the-shelf components they distribute, but the support and technology value-add end once products are transacted and out the door. These companies serve more as outsourced supply-chain management firms and require customers to have dedicated in-house resources to design, develop, and support the infrastructure derived from these platforms. This means that what is initially saved on CAPEX is transformed into operational and engineering expense, because in-house resources are needed to make the VAR products application-ready in most enterprises and data centers.

In the second category is the ODM as Integrator. These vendors attempt to repeat the business model they provide to the hyperscale Internet Moguls and sell the promise of low cost through direct ODM purchasing. In effect, these are platforms with the opposite of bells and whistles. They often cut costs in strange ways, even to the point of not deburring the sheet metal (sharp edges in the data center tend to lead to a high number of sick-day occurrences).  In the end, the customer is forced to take on the burden of QA and engineering to integrate the platforms into an infrastructure they can use. For hyperscale companies with plenty of in-house engineering talent and the willingness to spend resources working with raw platforms, this is a model that works because of the economy of scale.

Interestingly enough, as data center requirements have evolved and companies pay attention to OPEX, resource efficiency, and time to revenue, a new category has emerged: the Solutions Provider.  The main differentiator of Solutions Providers from Integrators and VARs is that they are technically savvy and cater to the needs of the infrastructure as a whole.  They can handle not only the supply chain, but can also build truly integrated end-to-end solutions that are power-on ready.  Looking beyond the hardware platforms alone to include the infrastructure design, the network topology, the applications, current and future scale, and overall resource efficiency results in pre-tuned, tested configurations that meet the CAPEX, OPEX, and time-to-value objectives.  After all, one of the things that saves the Internet Moguls so much is that these rack-scale solutions arrive pre-loaded and ready to start generating value on day 1.

So how should you set your corporate strategy when it comes to maximizing the efficiency and return of your data center?  Step 1 consists of a simple, yet difficult, shift in mindset: all assets have to be looked at over their useful lifespan.  There are parts of the enterprise that get this concept well. For example, a building is depreciated over 15-20 years, and cost of capital comes into play when deciding whether to build said structure.  The building also has a resale value at end of life, typically 50% of the day-1 cost, meaning it is time to sell or lease that building to another party when it hits a certain depreciated value.  Common sense, right?  So why are so few IT shops held to this standard?

Let me put it in infrastructure terms: a watt costs ~$1/yr to the utility company and another $1/yr to the taxman.  But if you look at the data center depreciation costs published by the hyperscale guys, you are knocked another $1/yr for the infrastructure.  If you build a data center with some level of redundancy, that infrastructure cost rapidly doubles.  In net, the loaded cost of a watt in the enterprise is $4-$6/yr.
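To make that arithmetic explicit, here is a minimal sketch of the loaded-watt estimate in Python. The per-watt line items are the figures quoted above; the redundancy multiplier is an assumption, so substitute your own facility’s numbers.

```python
# A minimal sketch of the loaded cost of a watt described above.
# The line items are the figures from this post; your utility rates,
# tax burden, and redundancy level will differ.

utility_cost_per_w_yr = 1.00         # power purchased from the utility
tax_cost_per_w_yr = 1.00             # taxes attributed to that power
infrastructure_cost_per_w_yr = 1.00  # data center depreciation (hyperscale-published figure)
redundancy_multiplier = 2            # assumption: redundant power/cooling roughly doubles infrastructure cost

loaded_cost_per_w_yr = (
    utility_cost_per_w_yr
    + tax_cost_per_w_yr
    + infrastructure_cost_per_w_yr * redundancy_multiplier
)
print(f"Loaded cost per watt: ${loaded_cost_per_w_yr:.2f}/yr")  # lands in the $4-$6/yr range
```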

So let’s pretend you have some 5-year-old servers in your data center.  They have 2 CPUs with 4-6 cores each and draw ~450W.  Compare that against a new 2-CPU, 12-core, 350W server.  Normalizing the numbers, 3 servers x 8 cores x 450W versus 1 server x 24 cores x 350W; said differently, 1350W versus 350W, which at $5/W becomes $6,750 versus $1,750 over the course of a year.

Since a new server costs ~$5,000, the swap pays for itself within the first year and nets ~$10,000 over 3 years for that one swap-out alone ($5,000/yr in savings x 3 years, minus the $5,000 server).  That is pretty good ROI.
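The same swap-out math as a short sketch, using only the example figures from this post (three old 450W servers, one new 350W server, a ~$5,000 purchase price, and the $5/W-yr loaded cost):

```python
# The consolidation math above; swap in your own server counts,
# wattages, and prices.

loaded_cost_per_w_yr = 5.0             # $/W/yr, from the loaded-watt estimate above

old_servers, old_watts_each = 3, 450   # 3 x 2-socket, 4-6 core servers at ~450 W
new_servers, new_watts_each = 1, 350   # 1 x 2-socket, 12-core server at ~350 W
new_server_price = 5000                # ~$5,000 for the replacement server

old_energy_cost = old_servers * old_watts_each * loaded_cost_per_w_yr  # $6,750/yr
new_energy_cost = new_servers * new_watts_each * loaded_cost_per_w_yr  # $1,750/yr
annual_savings = old_energy_cost - new_energy_cost                     # $5,000/yr

three_year_net = annual_savings * 3 - new_server_price                 # ~$10,000
print(f"Annual savings: ${annual_savings:,.0f}, 3-year net: ${three_year_net:,.0f}")
```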

So what to do now?  There are some tactical steps to be taken immediately.

1) Start a zombie hunt.  Yes, there are zombie servers in every data center, and the criteria to rate them on are W/op (or W/IOPS in the case of storage systems) and % utilization over a 3-month period.  This will produce a list of hosts that are ready for consolidation (a rough sketch of this ranking follows the list below).

2) Engage a Solution Provider, such as AMAX, to cut the black-box bond and architect a true value-add solution for you, one that includes both hardware and software validation and can be ready to run the day it arrives.  They should be able to suggest methods to reduce CAPEX and offer ways to relieve OPEX, including minimizing the engineering resources required to set up or troubleshoot the hardware and make it application-ready.  However, they are typically not good at offering lease/return options to keep the zombies from piling up again.  So…

3) Create an asset management process to rid yourself of sub-optimal components and track how the new density has helped avoid building another data center, or a data hall expansion, etc.
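As a rough illustration of the zombie hunt in step 1, here is a minimal Python sketch that ranks hosts by W/op and flags the mostly idle ones. The host records and the 10% utilization cutoff are illustrative assumptions; in practice you would feed it three months of telemetry from your own monitoring or DCIM tooling.

```python
# Illustrative zombie hunt: rank hosts by watts per operation and flag
# those with low utilization over the measurement window.

from dataclasses import dataclass

@dataclass
class Host:
    name: str
    avg_watts: float        # average power draw over the window
    avg_ops_per_sec: float  # ops/s (or IOPS for storage systems)
    utilization_pct: float  # average utilization over ~3 months

    @property
    def watts_per_op(self) -> float:
        return self.avg_watts / max(self.avg_ops_per_sec, 1e-9)

# Hypothetical telemetry; replace with data from your monitoring system.
hosts = [
    Host("web-01", 450, 1200, 8.0),
    Host("db-02", 400, 9000, 55.0),
    Host("batch-07", 430, 150, 3.0),
]

# Flag anything that is both inefficient and mostly idle (cutoff is an assumption).
zombies = sorted(
    (h for h in hosts if h.utilization_pct < 10.0),
    key=lambda h: h.watts_per_op,
    reverse=True,
)
for h in zombies:
    print(f"{h.name}: {h.watts_per_op:.2f} W/op at {h.utilization_pct:.0f}% utilization")
```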

If these steps are difficult or not enough, your Solution Provider may suggest a specialist consulting company, such as Yttibrium, to come in and lay out a strategy that is attainable and sustainable.  It is also helpful to remember that greening a data center relieves pressure on all bottom-line costs: the tax man gives credits for improved data center efficiency, and the power utility heavily rewards those willing to trim peak power on demand.

As a parting note, one radical and massively successful approach some large web companies employ is to drop all bonuses, consolidate those savings into one pool, and pay it out to any individuals or teams that come up with a way to increase efficiency, thus reducing bottom-line spending.  It is amazing how fast corporate culture changes when you change how money is doled out.

About the Author:

Per Brashers
Founder, Yttibrium LLC.

Per Brashers is an inventor, strategist, and the founder of Yttibrium LLC, a consultancy providing end-to-end strategy consultation on efficient data storage and compute systems design, with the goal of helping companies reduce energy consumption while reducing costs. He has experience in industry and academia, ranging from enterprise storage to high performance computing, and has been a senior strategist for storage heavyweights such as EMC, DDN, and Facebook. He holds 19 patents and patent-pending inventions, with over 25 years in the business. Per has designed systems to support scalable BigData solutions, and brings the business savvy to help organizations derive value from data. Per is a long-time supporter and charter member of OpenCompute, having architected OpenVault and the ColdStorage solutions. He has a passion for efficiency and efficient solutions. Outside work, Per’s interests include amateur radio, home brewing, and growing organic vegetables. Per can be reached through LinkedIn or by email: per@yttibrium.com