The Myth of Chargeback

 

Cash register by the National Cash Register Co., Dayton, Ohio, United States, 1915.

Imagine a world where every aspect of a project gets charged correctly. Where the massive amount of compute time for a given project gets attributed to the proper department and billed correctly. Where resources can be allocated and associated with the projects that need them. It’s an exciting prospect, isn’t it? I’m sure that at least one person out there said “chargeback” when I started mentioning all these lofty ideas. I would have agreed with you before, but I don’t think that chargeback actually exists in today’s IT environment.

Taking Charge

The idea of chargeback is very alluring. It’s been touted on slide decks for the last few years as a huge benefit of the analytics capabilities in modern converged stacks. By collecting information about the usage of an application or project, you can charge the department using that resource. It’s a bold plan to change IT departments from cost centers to revenue generators.

IT is the red-headed stepchild of the organization. IT is necessary for business continuity and function. Nothing today can run without computers, networking, or phones. However, we aren’t a visible part of the business. Much like the plumbers and landscapers around the organization, IT’s job is to make things happen and not be seen. The only time users acknowledge IT is when something goes wrong.

That’s where chargeback comes into play. By charging each department for their usage, IT can seek to ferret out extraneous costs and reduce usage. Perhaps the goal is to end up as a footnote in the weekly management meeting, where Brian is given recognition for closing a $500,000 deal and IT gets a shout-out for figuring out marketing was using 45% more Exchange server space than the rest of the organization. Sounds exciting, doesn’t it?

In theory, chargeback is a wonderful way to keep departments honest. In practice, no one uses it. I’ve talked to several IT professionals about chargeback. About half of them chuckled when I mentioned it. Their collective experience can best be summarized as “They keep talking about doing that around here but no one’s actually figured it out yet.”

The rest have varying levels of implementation. The most advanced ones that I’ve spoken to use chargeback only for physical assets in a project. If Sales needs a new server and five new laptops for Project Hunter, then those assets are charged back correctly to the department. This keeps Sales from asking for more assets than they need and hoping that the costs can be buried in IT somewhere.

No one that I’ve spoken to is using chargeback for the applications and software in an organization. We can slice the pie as finely as we want when allocating assets that you can touch, but when it comes to figuring out how to make Operations pay their fair share of the bill for the new CRM application, we’re stuck. We can pull analytics all day long, but we can’t seem to get them matched to the right usage.

Worse yet, politics plays a big role in chargeback. If a department head disagrees with the way their group is being characterized for IT usage, they can go to their superiors and talk about how critical their operation is to the business and how they need to be able to work without the restrictions of being billed for their usage. A memo goes out the next day and suddenly the department vanishes from the records with an admonishment to “let them do their jobs”.

Cloud Charges

The next thing that always comes up is public cloud. Chargeback proponents are waiting for widespread adoption of public cloud. That’s because the billing method for cloud is completely democratic. Everyone pays the price no matter what. If an AWS instance is running, someone needs to pay for it. If those systems can be isolated to a specific application or department, then the chargeback takes care of itself. Everyone is happy in the end. IT gets to avoid blame for not producing and the other departments get their resources.
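In practice, that isolation usually comes down to tagging discipline: every instance carries a department or project tag, and the monthly usage export gets rolled up by that tag. Here’s a minimal sketch in Python, assuming a hypothetical billing CSV with “department_tag” and “unblended_cost” columns (substitute whatever fields your provider’s report actually uses):

```python
# Minimal sketch: roll up a cloud usage/billing export by a department tag.
# The column names below are hypothetical, not any provider's real schema.
import csv
from collections import defaultdict

def cost_by_department(report_path):
    """Sum line-item costs per department tag from a billing CSV."""
    totals = defaultdict(float)
    with open(report_path, newline="") as f:
        for row in csv.DictReader(f):
            dept = row.get("department_tag") or "untagged"
            totals[dept] += float(row.get("unblended_cost") or 0)
    return dict(totals)

if __name__ == "__main__":
    for dept, cost in sorted(cost_by_department("usage-report.csv").items()):
        print(f"{dept:20s} ${cost:,.2f}")
```

The number worth watching is the “untagged” bucket. That’s the spend no one wants to own, and it’s the first place the finger pointing starts.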

Of course, the real problem comes when the bills start piling up. Cloud isn’t cheap. It exposes the dirty little secret that sunk-cost hardware has a purpose. When you bill by the CPU hour, you’ll find that a lot of systems sit idle. Management will come unglued trying to figure out why cloud costs so much. The commercials and sales pitches said we would save money!

Then the politics start all over again. IT gets blamed because cloud was implemented wrong. No protesting will fix that. Then come the rapid cost-cutting measures. Systems not in use get shut off. Databases lose data capture during down periods. People can’t access systems in off hours. Work falls off and the cloud project gets scrapped for the old, cheaper way.

Cloud is the model that chargeback should follow, but those numbers need to be correctly attributed. Just pushing a set of usage statistics down without context will lead to finger pointing and scrambling for explanations. Instead, we need to provide context from the outset. Maybe Marketing used an abnormally high amount of IT resources last week. But did it have anything to do with the end of the quarter? Can we track that usage back to higher profits from sales? That context is critical to figuring out how usage statistics affect things overall.


Tom’s Take

Chargeback is the stick that we use to threaten organizations to shape up and fly right. We make plans to implement a process to track all the evil things that are hidden in a department, and by the time the project is ready to kick off we find that costs are down and productivity is up. That becomes the new baseline, and we go on about our day thinking about how chargeback would have let us catch it before it became a problem.

In reality, chargeback is a solution that will take time to implement and cost money to get right. We need data context and allocation. We need actionable information and the ability to coordinate across departments. We need to know where the charges are coming from and why, not just complain about bills. And there can be no exceptions. That’s the only way to put chargeback in charge.

 

We Are Number Two!


In my old IT life I once took a meeting with a networking company. They were trying to sell me on their hardware and get me to partner with them as a reseller. They were telling me how they were the number two switching vendor in the world by port count. I thought that was a pretty bold claim, especially when I didn’t remember seeing their switches at any of my deployments. When I challenged this assertion, I was told, “Well, we’re really big in Europe.” Before I could stop my mouth from working, I sarcastically replied, “So is David Hasselhoff.” Needless to say, we didn’t take this vendor on as a partner.

I tell this story often when I go to conferences and it gets laughs. As I think more and more about it, the thought dawns on me that I have never really met the third-best networking vendor in the market. We all know who number one is right now. Cisco has a huge market share, and even though it has eroded somewhat in the past few years, they still have a comfortable lead on their competitors. The step down into the next tier of vendors is where the conversation starts getting murky.

Who’s Number Two???

If you’ve watched a professional sporting event in the last few years, you’ve probably seen an unknown player in close-up followed by an oddly specific statistic. Like Bucky has the most home runs by a left-handed batter born in July against pitchers that thought The Matrix should have won an Oscar. When these strange stats are quoted, the subject is almost always the best or second best. While most of these stats are quoted by color announcers trying to fill airtime, it speaks to a larger issue. Especially in networking.

No one wants the third-best anything. John Chambers is famous for saying during every keynote, “If Cisco can’t be number one or number two in a market we won’t be in it.” That’s easy to say when you’re the best. But how about the market down from there? Everyone is trying to position themselves as the next best option in some way or another. Number two by port count. Number two by ports sold (which is somehow a different metric). Number two by units shipped or amount sold or some other way of phrasing things slightly differently in order to be seen as the viable alternative.

I don’t see this problem a lot in other areas. Servers have a clear first, second, and third order. Storage has a lot of tiering. Networking, on the other hand, doesn’t have a third best option. Yet there are way more than three companies doing networking. Brocade, HPE, Extreme, Dell, Cumulus Networks, and many more if you want to count wireless companies with switching gear for the purpose of getting into the Magic Quadrant. No one wants to be seen as the next best, next best thing.

How can we fix this? Well, for one thing we need impartial ranking. No more magical polygons and reports that take seventeen pages to say “It depends”. There are too many product categories that you can slice your solution into to be the best. Let’s get back to the easy things. Switches are campus or data center. Routers are campus or service provider. Hardware falls here or there. No unicorns. No single-product categories. If you’re the only product of your kind you are in the wrong place.


Tom’s Take

Once again, I think it’s time for a networking Consumer Reports type of publication. Or maybe something like Storage Review. We need someone to come in and tell vendors that they are the third or fourth best option out there by a few simple metrics. Yes, it’s very probable that said vendors would just ignore the ranking and continue building their own marketing bubble around the idea of being the second-best switching platform for orange switches sold to school districts not named after presidents. Or perhaps finding out they are behind the others will spur people inside the company into actually getting better. It’s a faint hope, but hey. The third time’s a charm.

The Butcher’s Bill


Watching a real butcher work is akin to watching a surgeon. They are experts with their tools, which are cleavers and knives instead of scalpels and stitches. They know how to carve the best cut of meat from a formless lump. And they do it with the expert eye of a professional trained in their trade.

Butcher is a term that is often loaded with all manner of negative connotations. It makes readers think of indiscriminate slaughter and haphazard destruction. But the real truth is that a butcher requires time and training to cut as they do. There is nothing that a butcher does that isn’t calculated and careful.

Quick Cuts

Why all the discussion about butchers? Because you’re going to see a lot more comparisons in the future when people talk about the pending Dell/EMC acquisition. The real indiscriminate cutting has already started. EMC hid an undisclosed number of layoffs in a Dec. 31 press release. VMware is going to take a 5% hit in jobs, including the entire Workstation and Fusion teams.

It’s no secret that the deal is in trouble right now. Investors are cringing at some of the provisions. The Virtustream spin out was rescinded after backlash. The tracking stock created to creatively dodge some tax issues is now so low that it needs a ladder to tickle a snake’s belly. Every news day brings another challenge to the deal that is more likely to sink it than to save it.

In order to stem this rising tide of disillusionment, Dell and EMC are pulling out all the stops. Expect to see more ham-handed decisions in the future, like cashiering entire teams and divisions in order to get under some magical number that investors like and will be willing to support in order to make this mega merger happen. Given Michael Dell’s comments about investors during his run to make Dell a private company, I’m sure he has a very sour taste in his mouth thanks to all this.

Butchers work to make the best possible product from the raw materials given. There are no second chances. No do-overs. You have to get it right the first time. That very reason is why all this scrambling looks more like the throes of a desperate gambit instead of a sound merger strategy.

Prime Cuts

All companies that merge have duplicate jobs. It’s a fact of business. Much of the job overlap comes on the administrative side of the house. Legal, accounting, and management teams all have significant overlap no matter where you go. And while those teams are important for keeping the lights on and getting the bills paid, the positions represent redundancy that almost never gets trimmed away. Staff positions keep the machine moving. That means they stay.

Assuming that no one inside of either organization wants to cut staff positions, how can we approach something resembling more sane carving that accomplishes the same goals without leading to the hemorrhaging that will come from large-scale indiscriminate layoffs?

  1. Kill off needless products. While I’m sure this is an ongoing process, there are some pretty easy targets for this one. Haven’t sold that SKU in two years? Gone. Wind down support and give a discounted upgrade to something you do support. Kill off SKUs that exist solely to win awards.
  2. Reduce products by collapsing product lines. You don’t need two entry-level products for iSCSI storage. Or five different enterprise-class arrays. Kill off the things that overlap or directly compete against each other. Who survives? The one that sells better. The one that has better tech. The one that costs less to support. If you’re going to pinch pennies in other places, you had better start doing it here too.
  3. Management reductions need to happen too. For all the talk of reducing engineering teams and creating synergy, it’s surprising how often managers escape the layoffs. They’re almost like professors with tenure. Well, it’s time for them to prove their worth too. If their department is gone, so are they. If they are an ineffective manager, pay them a severance and let them earn their role somewhere else all over again. And that goes double for the 500 CTOs that seem to have sprung up inside large organizations lately.

You’d think these things were obvious and easy to figure out. Yet these are the kinds of decisions that get overlooked during every merger.


Tom’s Take

Layoffs hurt lots of people. It’s never fun when your teammates and friends get sacked. But you can be smart about who goes and how best to make the new company survive and even thrive. Chopping away at the company with a machete is like a horror movie. People are going to scream and cry and you’ll be lucky to live through the end. Instead of taking that approach, be smart. Make the best cuts you can from what you’ve got. Find ways to package the parts no one might want with other parts that people find attractive. Do what you can to use as much as you can. Think like a professional butcher. Don’t act like an amateur one.

The Tortoise and the Austin Hare


Dell announced today the release of their newest network operating system, OS10 (note the lack of an X). This is an OS that is slated to build on the success that Dell has had selling third-party solutions from vendors like Cumulus Networks and Big Switch. OS10’s base core will be built on an unmodified Debian distro, with a “premium” feature set that includes layer 2 and layer 3 functionality. The aim of having a fully open-source base OS in the networking space is lofty indeed, but the bigger question to me is “what happens to Cumulus?”

Storm Clouds

As of right this moment, before the release of Dell OS10, the only way to buy Linux on a Dell switch is to purchase it with Cumulus. In the coming months, Dell will begin to phase in OS10 as an option in parallel with Cumulus. This is especially attractive to large environments that are running custom-built networking today. If your enterprise is running Quagga or sFlow or some other software that has been tweaked to meet your unique needs, you don’t really need a ton of features wrapped in an OS with a CLI you will barely use.

So why introduce an OS that directly competes with your partners? Why upset that apple cart? It comes down to licenses. Every time someone buys a Dell data center switch, they have to pick an OS to run on it. You can’t just order naked hardware and install your own custom kernel and apps. You have to order some kind of software. When you look at the drop-down menu today, you can choose from FTOS, Cumulus, or Big Switch. For the kinds of environments that are going to erase and reload anyway, the choice is pointless. It boils down to the cheapest option. But what about those customers that choose Cumulus because it’s Linux?

Customers want Linux because they can customize it to their heart’s content. They need access to the switching hardware and other system components. So long as the interface is somewhat familiar, they don’t really care what engine is driving it. But every time a customer orders a switch today with Cumulus as the OS option, Dell has to pay Cumulus for that software license. It costs Dell money to ship a Linux that isn’t made by Dell on its own switch.
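That familiarity is worth spelling out. On a Linux-based switch OS, the front-panel ports show up as ordinary interfaces that stock tooling can drive. Here’s a rough sketch of what that looks like, assuming Cumulus-style swpN port naming and the standard iproute2 utilities; a real deployment would use whatever configuration method the vendor actually supports:

```python
# Rough sketch: drive a front-panel switch port with plain iproute2 commands.
# Assumes Cumulus-style "swpN" interface names; purely illustrative.
import subprocess

def run(cmd):
    """Echo and run a command so the sketch is easy to follow."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def bring_up_port(ifname, cidr):
    """Bring a port up and give it an address, just like any Linux NIC."""
    run(["ip", "link", "set", "dev", ifname, "up"])
    run(["ip", "addr", "replace", cidr, "dev", ifname])

if __name__ == "__main__":
    # Hypothetical example: address the first front-panel port on a /31 link.
    bring_up_port("swp1", "10.0.0.0/31")
```

Nothing in that sketch is switch-specific, which is exactly why shops with their own tooling see little value in paying extra for a polished CLI on top.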

OS10 erases that fee. By ordering a base image that can only boot and address hardware, Dell puts a clean box in the hands of developers that are going to be hacking the system anyway. When the new feature sets are released later in the year that increase the functionality of OS10, you will likely see more customers beginning to experiment with running Linux development environments. You’ll also see Dell beginning to embrace a model that loads features on a switch as software modules instead of purpose-built appliances.

Silver Lining

Dell’s future is in Linux. Rebuilding their OS from the ground up on Linux only makes sense given industry trends. Junos, EOS, and the OSes from upstarts like Pluribus and Big Switch are all based on Linux or BSD. Reinventing the wheel makes very little sense there. But utilizing the Switch Abstraction Interface (SAI) developed for OpenCompute gives them an edge: they can focus on northbound feature development while leaving the gory details of addressing hardware to the abstraction layer below.

Dell isn’t going to cannibalize their Cumulus partnership immediately. There are still a large number of shops running Cumulus that are going to want support from their vendor of choice in the coming months. Also, there are a large number of Dell customers that aren’t ready to radically disaggregate hardware from software. Those customers will require some monitoring, as they are likely to buy the cheapest option as opposed to the best fit and wind up with a switch that will boot and do little else to solve network problems.

In the long term, Cumulus will continue to be a fit for Dell as long as OS10 isn’t ported to the Campus LAN. Once that occurs, you will likely see a distancing of these two partners as Dell embraces their own Linux OS options and Cumulus moves on to focus on using whitebox hardware instead of bundling themselves with existing vendors. Once the support contracts expire on the Cumulus systems supported by Dell, I would expect to see a professional services offering to help those users of Cumulus-on-Dell migrate to a “truly open and unmodified kernel”.


Tom’s Take

Dell is making strides in opening up their networking with Linux and open source components. Juniper has been doing it forever, and HP recently jumped into the fray with OpenSwitch. Making yourself open doesn’t solve your problems or conjure customers out of thin air. But it does give you a story to tell about your goals and your direction. Dell needs to keep their Cumulus partnerships going forward until they can achieve feature parity with the OS that currently runs on their data center switches. After that happens and the migration plans are in place, expect to see a bit of jousting between the two partners about which approach is best. Time will tell who wins that argument.

 

 

My Thoughts On The Death Of IP Telephony

A Candlestick Phone (image courtesy of Wikipedia)

Greg Ferro (@EtherealMind) posted a thought-provoking article about collaboration in his Human Infrastructure magazine (which you should be reading). He talks about the death of IP Telephony and the rise of asynchronous communications methods like Slack. He’s got a very interesting point of view. I just happen to disagree with a few of his assertions.

IP Telephony Is Only Mostly Dead

Greg’s stance that IP Telephony is dead is a bit pointed, to say the least. He is correct that the market isn’t growing. It is also true that a great number of new workers entering the workforce prefer to use their smartphones for communications, especially the asynchronous kind. However, desk phones are still going to be a huge part of corporate communications going forward.

IT shops have a stilted and bizarre world view. If you have a workforce that has to be mobile, whether it be for making service calls or going to customer sites for visits, you have a disproportionately large number of mobile users for sure. But what about organizations that don’t have large mobile populations? What about financial firms or law offices or hospitals? What about retail organizations? These businesses have specific needs for communications, especially with external customers and users.

Imagine if your pharmacy replaced its phone with a chat system. How about your doctor’s office throwing out their PBX and going to an email-only system? How would you feel about that? A couple of you might cheer because they finally “get it”. But a large number of people, especially more traditional non-technical folks, would switch providers or move their business elsewhere. That’s because some organizations rely on voice communications. For every millennial dumping their office phone to use a mobile device, there is still someone they need to call on the other end.

We’re not even talking about the important infrastructure that still needs a lot of specialized communications gear. Fax machines are still a huge part of healthcare and legal work. Interactive Voice Response (IVR) systems are still crucial to handle call volumes for things like support lines. These functions can’t be replaced on mobile devices easily. You can fake IVRs and call queuing with the right setup, but faxing things to a mobile device isn’t possible.

Yes, services do exist to capture fax information as a TIFF or JPG and email it to the destination. But for healthcare and legal, this breaks confidentiality clauses and other important legal structures. The area around secure faxing via email is still a bit murky, and most of the searches you can do for the topic revolve around companies trying to tell you that it’s acceptable and okay to use (as long as you use their product).

IP Telephony isn’t far removed from buggy whip manufacturers. The oft-cited example of a cottage industry has relevance here. At some point after the introduction of the automobile, growth in the buggy whip market slowed and eventually halted. But buggy whips didn’t go away permanently. The market contracted, and it still exists to this day. It’s not as big as the 13,000-strong market it once was, but it does exist today to meet a need that people still have. Likewise, IP Telephony will still have solutions to meet needs for specific customers. Perhaps we’ll contract down to two or three providers at some point in the future, but it will never really go away.

I’ll Have My People IM Your People

Contacting people today is an exercise in itself. There are so many ways to reach someone that you have to spend time figuring out how to reach them. Direct messages, text messages, phone calls, voicemail, email, and smoke signals are all valid forms of communication. It is true that a lot of these communications are moving toward asynchronous methods. But as mentioned above, a lot of customer-facing businesses are still voice-only.

Sales is one of these areas that is driven by sound. The best way to sell something to someone is to do it face-to-face. But a phone call is a close second. Sales works because you can use your voice to influence buyers. You can share their elation and allay their fears. You can be soothing or exciting or any range of emotion in between. That’s something you don’t get through the cold text of instant messaging or email.

It’s also much harder to ignore phone calls. Sure, you can request read receipts with emails, but these are rarely implemented and even more rarely used correctly in my experience. Phone calls alert people about intent. Even ignoring or delaying them means being sent to a voicemail box or to another phone in the department where your call can be dealt with. The synchronous nature of the communication means there has to be a connection with someone. You can’t just launch bytes of text into the ether and hope they get where they’re supposed to go.

It is true that these voice communications happen via mobile numbers more often than not in this day and age. But corporations still prefer those calls to go through some kind of enterprise voice system. Those systems can track communications and be audited. Records can be subpoenaed for legal reasons without needing to involve carriers.

It’s much easier for call centers to track productivity via desk phone logs than by watching who is on a mobile phone. If you’ve ever worked in a corporate call center, you know there are metrics for everything you do on the phone. Average call time, average wait time, amount of non-call time, and so on. Each of these metrics can be tracked via a desk phone with a headset; not so with a mobile phone and an app.
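For illustration, here’s a hypothetical sketch of the kind of per-agent rollup those desk phone logs make trivial. The field names (“agent”, “talk_secs”, “wait_secs”) are invented for the example rather than taken from any particular PBX’s export format:

```python
# Hypothetical sketch: per-agent call metrics from a desk phone CDR export.
# Field names are invented for illustration, not a real PBX schema.
import csv
from collections import defaultdict

def agent_metrics(cdr_path):
    """Call count, average call time, and average wait time per agent."""
    calls = defaultdict(list)
    with open(cdr_path, newline="") as f:
        for row in csv.DictReader(f):
            calls[row["agent"]].append(
                (float(row["talk_secs"]), float(row["wait_secs"])))
    report = {}
    for agent, records in calls.items():
        report[agent] = {
            "calls": len(records),
            "avg_call_time": sum(t for t, _ in records) / len(records),
            "avg_wait_time": sum(w for _, w in records) / len(records),
        }
    return report

if __name__ == "__main__":
    for agent, stats in agent_metrics("cdr-export.csv").items():
        print(agent, stats)
```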


Tom’s Take

I live on my mobile phone. I send emails and social media updates. I talk in Slack and Skype and other instant messaging platforms. But I still get on the phone at least three times a week to talk to someone. Most of those calls take place on a conference bridge. That’s because people want to hear someone’s voice. It’s still comforting and important to listen to someone.

Doing away with IP Telephony sounds like an interesting strategy for small businesses and startups. It’s a cost-reduction method that has benefits in the short term. But as companies grow and change, they will soon find that having a centralized voice system to control and manipulate calls is a necessity. Given the changes in voice technology in the last few years, I fully expect that “centralized” voice will eventually be a pay-per-seat cloud leased model, with specific executives and support personnel using traditional phones while non-critical employees have no voice communications device or choose to use their personal mobile device.

IP Telephony isn’t dead. It’s not even dying. But it’s well past the age where it needs to consider retirement and a long and fulfilling life concentrating on specific people instead of trying to make everyone happy.

 

 

It’s Time For IPv6, Isn’t It?

 

I made a joke tweet the other day:

It did get lots of great interaction, but I feel like part of the joke was lost. Every one of the things on that list has been X in “This is the Year of X” for the last couple of years. Which is sad because I would really like IPv6 to be a big part of the year.

Ars Technica came out with a very good IPv6-focused article on January 3rd talking about the rise in adoption to 10% and how there is starting to be a shift in the way that people think about IPv6.

Old and Busted

One of the takeaways from the article that I found most interesting was a quote from Brian Carpenter of the University of Auckland about address structure. Most of the time when people complain about IPv6, they say that it’s stupid that IPv6 isn’t backwards compatible with IPv4. Carpenter has a slightly different take on it:

The fact that people don’t understand: the design flaw is in IPv4, which isn’t forwards compatible. IPv4 makes no allowance for anything that isn’t a 32 bit address. That means that, whatever the design of IPng, an IPv4-only node cannot communicate directly with an IPng-only node.

That’s a very interesting take on the problem that hadn’t occurred to me before. We’ve spent a lot of time trying to make IPv6 work with IPv4 in a way that doesn’t destroy things when the problem has nothing to do with IPv6 at all!

The real issue is that our aging IPv4 protocol just can’t be fixed to work with anything that isn’t IPv4. When you frame the argument in those terms you can start to realize why IPv4’s time is coming to an end. I’ve been told by people that moving to 128-bit addressing is overkill and that we need to just put another octet on the end of IPv4 and make them compatible so we can use things as they are for a bit longer.

Folks, the 5th octet plan would work exactly like IPv6 as far as IPv4 is concerned. The issue boils down to this: IPv4 is hard-coded to reject any address that isn’t exactly 32 bits in length. It doesn’t matter if your proposal is 33 bits or 256 bits, the result is the same: IPv4 won’t be able to talk to it directly. The only way to make IPv4 talk to any other protocol version would be to extend it. And the massive amount of effort that it would take to do that is why we have things like dual stack and translation gateways for IPv6. Every plan to make IPv4 work a little longer ends in the same way: scrap it for something new.
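Dual stack is how hosts live with that incompatibility today: the two protocols never interoperate on the wire, so a dual-stacked application simply tries every address family a name resolves to. A minimal sketch using Python’s standard socket library:

```python
# Minimal sketch of a dual-stack client: try every address family that a
# hostname resolves to (IPv6 and IPv4) until one of them connects.
import socket

def connect_dual_stack(host, port):
    """Return a connected socket, whichever address family works first."""
    last_err = None
    for family, socktype, proto, _name, addr in socket.getaddrinfo(
            host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
        try:
            s = socket.socket(family, socktype, proto)
            s.connect(addr)
            return s  # caller is responsible for closing it
        except OSError as err:
            last_err = err
    raise last_err or OSError(f"could not connect to {host}:{port}")

if __name__ == "__main__":
    conn = connect_dual_stack("www.example.com", 80)
    print("connected over", "IPv6" if conn.family == socket.AF_INET6 else "IPv4")
    conn.close()
```

On a v6-enabled network the IPv6 addresses are typically tried first; on a legacy network the same code quietly falls back to IPv4. That’s the bridge we’re stuck building and maintaining until IPv4 finally goes away.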

New Hotness

Fresh from our take on how IPv4 is a busted protocol for the purposes of future-proofing, let’s take a look at what’s driving IPv6 right now. I got an email from my friend Eric Hileman, who runs a rather large network, asking me when he should consider his plans to transition to IPv6. My response was “wait for mobile users to force you there”.

Mobility is going to be the driving force behind IPv6 adoption. Don’t believe me? Grab the closest computing device to your body right now. I’d bet more than half of you reached for a phone or a tablet if you didn’t already have a smartwatch on your wrist. We are a society that is embracing mobile devices at an increasingly rapid rate.

Mobility is the new consumer compute. That means connectivity. Connectivity everywhere. My children don’t like not being able to consume media away from wireless access points. They would love to have a cellular device to allow them access to TV shows, movies, or even games. That generation is going to grow up to be the primary tech consumer in the next ten years.

In those intervening years, our tech infrastructure is going to balloon like never before. Smart devices will be everywhere. We are going to have terabytes of data from multiple sources flooding collectors to produce analysis that must be digested by some form of intelligence, whether organic or artificial. How do you think all that data is going to be transmitted? On a forty-year-old protocol with no concept of the future?

IPv6 has to become the network protocol to support future infrastructure. Mobility is going to drive adoption, but the tools and software we build around mobility are going to push new infrastructure adoption as well. IPv6 is going to be a huge part of that. Devices that don’t support IPv6 are going to be just like the IPv4 protocol they do support – forever stuck in the past with no concept of how the world is different now.


Tom’s Take

It’s no secret I’m an IPv6 champion. Even my distaste for NAT has more to do with its misuse with regard to IPv6 than any dislike for it as a protocol. IPv6 is something that should have been recognized ten years ago as the future of network addressing. When you look at how fast other things around us transform, like mobile computing or music and video consumption, you can see that technology doesn’t wait for stalwarts to figure things out. If you don’t want to be left using IPv4 along with your VCR, it’s time to start planning for how you’re going to make the move to IPv6.

 

 

Doing 2016 Write

 


It’s the first day of 2016 and it’s time for me to look at what I wanted to do and what I plan to accomplish in the coming 366 days. We’ve got a busy year ahead with a leap day, the Olympics, and a US presidential election. And somewhere in the middle of all that there’s a lot of exciting things related to tech.

2015 In Rewind

Looking back at my 2015 goals, I think I did a fairly good job:

  • Writing in Markdown – Read about it all here
  • Blog themes – I really did look at quite a few themes and tried to find something that worked the way I wanted it to work without major modifications. What I finally settled on was a minor font change to make things more readable. For me, form has never been more important than function, so I spend less time worrying about how my blog looks and much more time focusing on how it reads.
  • Cisco Live Management – Didn’t quite get this one done. I wanted to put up the poll for the big picture at the end and I managed to miss it this year! The crew got a chance to say hello to keynote speaker Mike Rowe, so I think it was a good tradeoff. This year for Cisco Live 2016, I hope we have some more interesting things in store as well as some surprises.

A hit, a miss, and a foul tip. Not terribly bad. 2015 was a busy year. I think I wrote more words than ever. I spoke a few times at industry events. I enjoyed participating in the community and being a part of all the wonderful things going on to move it forward.

Looking Ahead to 2016

2016 is going to be another busy year as well. Lots of conferences in Las Vegas this year (Aruba Atmosphere, Interop, Cisco Live, and VMworld) as well as other industry events and a full slate of Tech Field Day events. I don’t think there’s a month in the entire year where something isn’t going on.

I’m sure this is an issue for quite a few people in the community as well. There’s a lot of time that gets taken up by doing things. That leaves very little time for writing about those things. I’ve experienced it and I know a lot of my friends have felt the same way. I can’t tell you the number of times that I’ve heard “I need to write something about that” or “I’m way behind on my blogging!”

Two Wrongs Don’t Make A Write

My biggest goal for 2016 is writing. I’ve been doing as much as I can, but I want to help others do it as well. I want to find a way to encourage people to start writing and add their thoughts to the community. I also want to find a way to keep the other great voices of the community going and writing regularly.

There’s been a huge shift recently away from blogging as a primary method of information transfer. Quite a few have moved toward non-writing methods to convey that information. Podcasts, both audio and video, are starting to become a preferred method of disseminating information.

I don’t have a problem with podcasts. Some of my friends have great resources that you should check out. But podcasts are very linear. Aside from using software that speeds up the talking, it’s very hard to condense podcasts into quick-hit formats. Blog posts can be as short or as long as they need to be to get the information across.

What I want to accomplish is a way to encourage writers to write more. To help new writers get started and established writers keep contributing. By keeping the blogging format alive and growing, we can continue to contribute great thoughts to the community and transfer knowledge to a new group of up-and-coming IT professionals.

I’ve got some ideas along these lines that I’ll be rolling out in the coming months. Be sure to stay tuned. If you’re willing to help out in any way, please drop me a line and let me know. I’m always looking for great people in the community to help make others great as well.


Tom’s Take

A new year doesn’t always mean a totally fresh start. I’ve been working on 2016 things for a few weeks now and I’m continuing great projects that I’ve been doing for a while now as well. But a new year does mean that it is time to find ways to do things better. My mission for the year is to make people better writers. To encourage more people to put thoughts down on paper. I want a world full of thinkers that aren’t afraid to share. That’s something that could make the new year a great one indeed.