The Myth of Chargeback


Cash register by the National Cash Register Co., Dayton, Ohio, United States, 1915.

Imagine a world where every aspect of a project gets charged correctly. Where the massive amount of compute time for a given project gets attributed to the proper department and billed correctly. Where resources can be allocated and tied to the projects that need them. It’s an exciting prospect, isn’t it? I’m sure that at least one person out there said “chargeback” when I started mentioning all these lofty ideas. I would have agreed with you before, but I don’t think that chargeback actually exists in today’s IT environment.

Taking Charge

The idea of chargeback is very alluring. It’s been on slide decks for the last few years as a huge benefit to the analytics capabilities in modern converged stacks. By collecting information about the usage of an application or project, you can charge the department using that resource. It’s a bold plan to change IT departments from cost centers to revenue generators.

IT is the red-headed stepchild of the organization. IT is necessary for business continuity and function. Nothing today can run without computers, networking, or phones. However, we aren’t a visible part of the business. Much like the plumbers and landscapers around the organization, IT’s job is to make things happen and not be seen. The only time users acknowledge IT is when something goes wrong.

That’s where chargeback comes into play. By charging each department for their usage, IT can seek to ferret out extraneous costs and reduce usage. Perhaps the goal is to end up a footnote in the weekly management meeting where Brian is given recognition for closing a $500,000 deal and IT gets a shout-out for figuring out marketing was using 45% more Exchange server space than the rest of the organization. Sounds exciting, doesn’t it?

In theory, chargeback is a wonderful way to keep departments honest. In practice, no one uses it. I’ve talked to several IT professionals about chargeback. About half of them chuckled when I mentioned it. Their collective experience can best be summarized as “They keep talking about doing that around here but no one’s actually figured it out yet.”

The rest have varying levels of implementation. The most advanced ones that I’ve spoken to use chargeback only for physical assets in a project. If Sales needs a new server and five new laptops for Project Hunter, then those assets are charged back correctly to the department. This keeps Sales from asking for more assets than they need and hoping that the costs can be buried in IT somewhere.

No one that I’ve spoken to is using chargeback for the applications and software in an organization. We can slice the pie as finely as we want when allocating assets you can touch, but when it comes to making Operations pay their fair share of the bill for the new CRM application, we’re stuck. We can pull analytics all day long, but we can’t seem to match them to the right usage.

Worse yet, politics plays a big role in chargeback. If a department head disagrees with the way their group is being characterized for IT usage, they can go to their superiors and talk about how critical their operation is to the business and how they need to be able to work without the restrictions of being billed for their usage. A memo goes out the next day and suddenly the department vanishes from the records with an admonishment to “let them do their jobs”.

Cloud Charges

The next thing that always comes up is public cloud. Chargeback proponents are waiting for widespread adoption of public cloud. That’s because the billing model for cloud is completely democratic. Everyone pays the price, no matter what. If an AWS instance is running, someone needs to pay for it. If those systems can be isolated to a specific application or department, then the chargeback takes care of itself. Everyone is happy in the end. IT avoids blame for not producing, and the other departments get their resources.
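If every cloud resource carries a department tag, the per-department bill is just an aggregation over the usage records. Here’s a minimal sketch of that idea; the record shape, tags, and rates below are invented for illustration, and real billing exports are far messier:

```python
from collections import defaultdict

# Hypothetical usage records: (instance_id, department_tag, hours, rate_in_cents_per_hour).
usage = [
    ("i-001", "marketing", 720, 10),
    ("i-002", "marketing", 200, 40),
    ("i-003", "sales",     720, 10),
    ("i-004", "untagged",  100, 20),  # untagged spend still has to land somewhere
]

def chargeback_totals(records):
    """Sum cloud spend (in cents) per department tag."""
    totals = defaultdict(int)
    for _, dept, hours, rate in records:
        totals[dept] += hours * rate
    return dict(totals)

print(chargeback_totals(usage))
# → {'marketing': 15200, 'sales': 7200, 'untagged': 2000}
```

The hard part isn’t the arithmetic; it’s getting every instance tagged in the first place, which is exactly where the politics described above creep back in.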

Of course, the real problem comes when the bills start piling up. Cloud isn’t cheap. It exposes the dirty little secret that sunk-cost hardware has a purpose. When you bill by the CPU-hour, you’ll find that a lot of systems sit idle. Management will come unglued trying to figure out why cloud costs so much. The commercials and sales pitches said we would save money!

Then the politics start all over again. IT gets blamed because cloud was implemented wrong, and no amount of protesting will fix that. Then come the rapid cost-cutting measures. Systems not in use get shut off. Databases lose data capture during down periods. People can’t access systems during off hours. Work falls off, and the cloud project gets scrapped for the old, cheaper way.

Cloud is the model that chargeback should follow, but those numbers need to be correctly attributed. Just pushing a set of usage statistics down without context will lead to finger-pointing and scrambling for explanations. Instead, we need to provide context from the outset. Maybe Marketing used an abnormally high amount of IT resources last week. But did it have anything to do with the end of the quarter? Can we track that usage back to higher profits from sales? That context is critical to figuring out how usage statistics affect things overall.
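That kind of context can even be attached mechanically: flag any usage spike against a baseline and annotate it with known business-calendar events before the report ever goes out. A rough sketch, with made-up weekly numbers and event labels:

```python
# Hypothetical weekly usage (arbitrary units) and a business-calendar lookup.
weekly_usage = {"2017-W11": 100, "2017-W12": 104, "2017-W13": 145}
events = {"2017-W13": "end of quarter"}

def flag_anomalies(usage, events, baseline=100, threshold=1.25):
    """Return weeks above threshold*baseline, each tagged with any known business driver."""
    report = []
    for week, amount in sorted(usage.items()):
        if amount > baseline * threshold:
            report.append((week, amount, events.get(week, "no known driver")))
    return report

print(flag_anomalies(weekly_usage, events))
# → [('2017-W13', 145, 'end of quarter')]
```

A spike that arrives already labeled “end of quarter” invites a very different conversation than a bare number does.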

Tom’s Take

Chargeback is the stick that we use to threaten organizations to shape up and fly right. We make plans to implement a process to track all the evil things hidden in a department, and by the time the project is ready to kick off, we find that costs are down and productivity is up. That becomes the new baseline, and we go on about our day thinking about how chargeback would have let us catch it before it became a problem.

In reality, chargeback is a solution that will take time and money to get right. We need data context and allocation. We need actionable information and the ability to coordinate across departments. We need to know where the charges are coming from and why, not just complain about bills. And there can be no exceptions. That’s the only way to put chargeback in charge.


3 thoughts on “The Myth of Chargeback”

  1. I worked for a company that used company-wide chargebacks to recoup the cost of OS software licensing for lab VMs and computers. They had gotten burned previously by improper use and by employees sharing the MAK info (with other employees, contractors, etc.). When the MAK ran out, the vendor was completely willing to bump up the number of available licenses (of course). When the inevitable audit came around and the renewal discussions started, the actual usage was well over the previously agreed amount, and the company had a huge bill to settle.

    This drove the eventual chargeback model that they moved to where each license was tracked by user and department. The departmental usage each quarter was calculated (a mix of tools and administrative auditing) which then fed into a request to finance to recoup an appropriate percentage of the cost from each VP level organization.

    It was a headache, it wasn’t entirely fair, and it was always a “surprise” to the charged organization’s finance and budget people (though they were informed of the process and got the charges every quarter – they also had access to the real-time reporting and the data used to compile the chargebacks).

    I don’t know if that company still uses this process, as I moved on and was the sole maintainer of the quarterly compilation, audit, and chargeback process to finance. I trained my replacement, but it was a lot of work and know-how for something that was done once a quarter.

    I don’t have any object lesson or take-away from this other than as an example of how chargebacks were used to recoup costs for a software asset across a company. It did not significantly impact or reduce the usage but it did ensure that there were no more unexpected and uncovered overages during the period it was in use.

  2. At a previous employer, a chargeback model was tried by building a private cloud – IT purchased a load of blade centres/VMware etc… upfront and created “tiers” of service – different levels of resources/monitoring etc… A business area could then purchase a number of servers at a certain tier, the idea being that over time the initial outlay would be recouped.

    The unexpected impact of this was that the business areas then saw IT as just another supplier, competing with the likes of AWS, who were cheaper than IT! This led to a big increase in business areas utilizing public cloud services, while still expecting the IT department to support them when things went south.

    Once this idea that people could purchase their IT services from someone other than the IT department set in, there was a real problem with Shadow IT. Business areas began to purchase their own end-user devices – laptops, phones, etc. – with non-standard builds, and wanted to connect them to the corporate network. Given this was a semi-financial organization, BYoD was seen as a non-starter.

    A further problem which resulted from the increased use of public cloud was the impact on the company’s internet connectivity; this was only really used for browsing, email, VPN, etc. – there was no hosting environment, so bandwidth wasn’t excessive. Over time, the extra internet traffic generated by accessing public cloud services resulted in maxed-out circuits at peak times. (The network had not been commoditized as the other IT services had.)

  3. Pingback: Read Like An Analyst: Slacklash, IoT, & Artificial Intelligence - Cascade Insights
