Let's Get Cirrus About Cloud Computing

Rich Bruklis


Could cloud take out one-third of your processing costs?

I make you two promises.

First, this blog will have more dollar signs and numbers with commas in them than any other blog about cloud computing. This space is all about the business end of cloud. I leave the technological discussions to others who can do them more justice.

Second, even as we discuss the numbers, we won't get bogged down in them. My goal is to help you frame the justification for earnings-accretive cloud projects, and maybe even find the flaws in the business cases propping up ill-conceived proposals. So we'll show the numbers, we'll discuss the numbers, but we'll keep it at the strategic level. This blog does nobody any good if it's too dense.

So how do I come up with cloud being able to take out more than a third of your processing costs?

As an industry average, let's say that 40% of your infrastructure directly supports your test, development, and other pre-production environments. (We can quibble about the precision of this number; one survey of self-reporting companies might report higher, another lower. But 30%-50% is the range I've seen in print and, after taking a few minutes to go through my old customer files, I can validate that based on my own experience.)

Next, let's say that these servers are 10% utilized. Here, I think we're being generous. This is a key business driver for cloud: that you only need to pay for the horsepower you're using, so you're by definition 100% utilized. Even in a virtualized domain, you're always going to have that "white space" or "headroom" requirement. You'll also have load balancing issues and a hypervisor layer adding extra complexity (read: labor costs) to your software stack. In the cloud, these latency and complexity issues are spread around to the point of being negligible to the individual firm.

Back to the math: The difference between 10% and 100% utilization is 90%. And 90% of 40% is 36%, or more than one-third.

To put it in dollar terms, if you're spending $10 million/year on server depreciation, server maintenance, operating system support, middleware support, sys admin labor, floorspace and the 3Ps (power, pipe and ping), then $4 million of that supports your pre-production. Of that, 90% is essentially wasted due to inefficiency. With the efficiency promised by cloud, you'd only have to spend 10% of that $4 million, or $400,000. That means you'd save $3.6 million -- or 36% of your $10 million budget.
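The arithmetic above can be sketched as a few lines of Python. The inputs (40% pre-production share, 10% utilization, $10 million budget) are the assumed figures from the discussion, not hard data:

```python
# Back-of-envelope model of the savings argument above.
annual_processing_cost = 10_000_000  # total yearly spend ($), assumed
preprod_share = 0.40                 # fraction supporting test/dev, assumed
utilization = 0.10                   # effective use of those servers, assumed

preprod_cost = annual_processing_cost * preprod_share  # $4,000,000
cloud_cost = preprod_cost * utilization                # pay only for what you use
savings = preprod_cost - cloud_cost                    # $3,600,000
savings_pct = savings / annual_processing_cost         # 0.36, i.e. 36%

print(f"Pre-production spend:   ${preprod_cost:,.0f}")
print(f"Cloud-equivalent spend: ${cloud_cost:,.0f}")
print(f"Savings: ${savings:,.0f} ({savings_pct:.0%} of budget)")
```

Plug in your own share and utilization figures and the same three lines of math give you a first-cut savings estimate for your shop.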

And that's enough math for now.

Of course, we're assuming a perfect-world solution. No cloud will be perfectly efficient. And let's remember that this is a business, and your provider is going to want to negotiate: "Hey, we can save you $3.6 million -- we'll charge you $2 million for the service and you'll still come out ahead."

And some costs aren't going away. The data center isn't going to shrink just because you got rid of a few servers, so the rent or depreciation on the building -- not to mention the taxes, insurance, contractors and critical systems maintenance that are part and parcel -- aren't going anywhere. (You should save some on utilities.) Some machines, due to regulation or intramural politics or whatever other reasons, will need to be kept in-house. And the developers and DBAs on the application side are just as much a fixture of the data center as the front door. At some point, IT will become such a commodity that your whole ERP system could fit on a machine the size of the ThinkPad I'm writing on now. When that day comes, your entire apps team will still be showing up Monday morning and expecting to be paid on Friday.

We haven't even considered the cost to implement. True, there's no capital expenditure, but the providers might come up with some "initiation fee" that could drop on you like a six-digit quantity of bricks. There's the depreciation writeoff. And there are the costs associated with resource actions should you, after an organizational assessment, determine that enough of the sys admin and machine operator workload has evaporated into the cloud. The cost part of the cost-benefit analysis needs to be well understood before greenlighting any project -- cloud plays being no exceptions.
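To see how these charges eat into the headline number, here is the same model extended one step. The $2 million service charge comes from the negotiation example above; the $250,000 initiation fee is a hypothetical "six-digit" figure for illustration only:

```python
# Net savings once provider charges enter the picture.
gross_savings = 3_600_000     # from the utilization math above
provider_charge = 2_000_000   # annual service fee (negotiation example)
initiation_fee = 250_000      # hypothetical one-time "six-digit" fee

first_year_net = gross_savings - provider_charge - initiation_fee
steady_state_net = gross_savings - provider_charge

print(f"First-year net savings:   ${first_year_net:,}")
print(f"Steady-state net savings: ${steady_state_net:,}")
```

Even with both charges, the project stays earnings-accretive in this sketch -- but the margin is a lot thinner than "36% of your budget."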

Still, the benefits are there to be had. To be more granular, IBM's cloud CTO Kristof Kloeckner estimates that cloud can save you:

- 73% on utilities for end-user computing,
- 40% on support for end-user computing,
- 50%-75% on hardware capex and software licensing, and
- 30%-50% on labor (largely by reducing re-work stemming from config and modeling errors).
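Only the percentage ranges above come from Kloeckner's estimates; to turn them into dollars you have to supply your own line-item budget. The figures below are invented purely for illustration:

```python
# Applying the quoted savings ranges to a hypothetical budget.
# Each entry: (annual spend in $, (low, high) savings fraction).
budget = {
    "end-user utilities":          (500_000,   (0.73, 0.73)),
    "end-user support":            (1_200_000, (0.40, 0.40)),
    "hardware capex + licensing":  (4_000_000, (0.50, 0.75)),
    "labor":                       (3_000_000, (0.30, 0.50)),
}

low = sum(spend * lo for spend, (lo, hi) in budget.values())
high = sum(spend * hi for spend, (lo, hi) in budget.values())

print(f"Estimated savings range: ${low:,.0f} - ${high:,.0f}")
```

The point of the exercise isn't the totals -- it's that the ranges are wide enough that you must ground the estimate in your own numbers before taking it to the CFO.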

Have a better day,



More Stories By Rich Bruklis

A 20-year veteran of the storage industry, Rich has been a business leader in product marketing. He has seen the industry change from backup on 5.25" floppies to 10,000-cartridge tape libraries with every tape "standard" in between. Rich has supported 5.25" 30MB hard drives and launched disk arrays with hundreds of drives. Most recently, Rich has focused on business continuity and disaster recovery.

While the hardware industry continues to experience BBFC (Bigger, Better, Faster, Cheaper), there is a cloud on the horizon that is about to disrupt that trend. Cloud Computing will fundamentally change the IT world much like the network changed client-server computing.