Fixing My Twitter

It’s no surprise that Twitter’s developers are messing around with the platform. Again. This time, it’s the implementation of changes announced back in May. Twitter is finally cutting off access to their API that third party clients have been using for the past few years. They’re forcing these clients to use their new API structure for things like notifications and removing support for streaming. This new API structure also has a hefty price tag. For 250 users it’s almost $3,000/month.

You can imagine the feedback that Twitter has gotten. Users of popular programs like Tweetbot and Twitterrific saw their client functionality degraded thanks to the implementation of these changes. Twitter power users have been voicing their opinions with the hashtag #BreakingMyTwitter. I’m among the people that are frustrated that Twitter is chasing the dollar instead of the users.

Breaking The Bank

Twitter is beholden to a harsh mistress. Wall Street doesn’t care about user interface or API accessibility. They care about money. They care about results and profit. And if you aren’t turning a profit, you’re a loser that people will abandon. So Twitter has to make money somehow. And how is Twitter supposed to make money in today’s climate?

Users.

Users are the coin of Twitter’s realm. The more users they have active the more eyeballs they can get on their ads and sponsored tweets and hashtags. Twitter wants to court celebrities with huge followings that want to put sponsored tweets in their faces. Sadly for Twitter, those celebrities are moving to platforms like Instagram as Twitter becomes overrun with bots and loses the ability to have discourse about topics.

Twitter needs real users looking at ads and sponsored things to get customers to pay for them. They need to get people to the Twitter website where these things can be displayed. And that means choking off third party clients. But it’s not just a war on Tweetbot and Twitterrific. They’ve already killed off their Mac client. They have left Tweetdeck in a state that’s barely usable, positioning it for power users. Yet power users prefer other clients.

How can Twitter live in a world where no one wants to use their tools, but users can’t use the tools they do like because access to the vital APIs that run them is choked off behind a paywall that no one wants to pay for? How can we poor users continue to use a service that sucks when used through the preferred web portal?

You probably heard my rant on the Gestalt IT Rundown this week. If not, here you go:

I was a little animated because I’m tired of getting screwed by developers that don’t use Twitter the way that I use it. I came up with a huge list of things I didn’t like. But I wanted to take a moment to talk about some things that I think Twitter should do to get their power users back on their side.

  1. Restore API Access to Third Party Clients – This is a no-brainer for Twitter. If you don’t want to maintain the old code, then give API access to these third party developers at the rates they used to have it. Don’t force the developers working hard to make your service usable to foot the bills that you think you should be getting. If you want people to continue to develop good features that you’ll “borrow” later, you need to give them access to your client.
  2. Enforce Ads on Third Party Clients – I hate this idea, but if it’s what it takes to restore functionality, so be it. Give API access to Tweetbot and Twitterrific, but in order to qualify for a reduced rate they have to start displaying ads and promoted tweets from Twitter. It would clog our timelines, but it would also finance a usable client. Sometimes we have to put up with the noise to keep the signal.
  3. Let Users Customize Their Experience – If you’re going to drive me to the website, let me choose how I view my tweets. I don’t want to see what my followers liked on someone else’s timeline. I don’t want to see interesting tweets from people I don’t follow. I want to get a simple timeline with conversations that don’t expand until I click on them. I want to be able to scroll the way I want, not the way you want me to use your platform. Customizability is why power users use tools like third party clients. If you want to win those users back, you need to investigate letting power users use the web platform in the same way.
  4. Buy A Third Party Client and Don’t Kill Them Off – This one’s kind of hard for Twitter. Tweetie. The original Tweetdeck. There’s a graveyard of clients that Twitter has bought and caused to fail through inattention and an inability to capitalize on their usability. I’m sure Loren Brichter is happy to know that his most popular app is now sitting on a scrap heap somewhere. Twitter needs to pick up a third party developer, let them develop their client in peace without internal interference from Twitter, and then not punish them for producing.
  5. Listen To Your Users, Not Your Investors – Let’s be honest. If you don’t have users on Twitter, you don’t have investors. Rather than chasing the dollars every quarter and trying to keep Wall Street happy, you should instead listen to the people that use your platform and implement the changes they’re asking for. Some are simple, like group DMs in third party clients. Or polls that are visible. Some are harder, like robust reporting mechanisms or the ability to remove accounts that are causing issues. But if Twitter keeps ignoring their user base in favor of their flighty investors they’re going to be sitting on a pile of nothing very soon.

Tom’s Take

I use Twitter all the time. It’s my job. It’s my hobby. It’s a place where I can talk to smart people and learn things. But it’s not easy to do that when the company that builds the platform tries as hard as possible to make it difficult for me to use it the way that I want. Instead of trying to shut down things I actively use and am happy with, perhaps Twitter can do some soul searching and find a way to appeal to the people that use the platform all the time. That’s the only way to fix this mess before you’re in the same boat as Orkut and MySpace.

The Cargo Cult of Google Tools

You should definitely watch this amazing video from Ben Sigelman of LightStep that was recorded at Cloud Field Day 4. The good stuff comes right up front.

In less than five minutes, he takes apart crazy notions that we have in the world today. I like the observation that you can’t build a system that spans more than three or four orders of magnitude of scale. Yes, you really shouldn’t be using Hadoop for simple things. And Machine Learning is not a magic wand that fixes every problem.

However, my favorite thing was the quick mention of how emulating Google for the sake of using their tools for every solution is folly. Ben should know, because he is an ex-Googler. I think I can sum up this entire discussion in less than a minute of his talk here:

Google’s solutions were built for scale that basically doesn’t exist outside of maybe a handful of companies with a trillion dollar valuation. It’s foolish to assume that their solutions are better. They’re just more scalable. But they are actually very feature-poor. There’s a tradeoff there. We should not be imitating what Google did without thinking about why they did it. Sometimes the “whys” will apply to us, sometimes they won’t.

Gee, where have I heard something like this before? Oh yeah. How about this post. Or maybe this one on OCP. If I had a microphone I would have handed it to Ben so he could drop it.

Building a Laser Mousetrap

We’ve reached the point in networking and other IT disciplines where we have built cargo cults around Facebook and Google. We practically worship every tool they release into the wild and try to emulate that style in our own networks. And it’s not just the tools we use, either. We also keep trying to emulate the service provider style of Facebook and Google where they treated their primary users and consumers of services like your ISP treats you. That architectural style is being lauded by so many analysts and forward-thinking firms that you’re probably sick of hearing about it.

Guess what? You are not Google. Or Facebook. Or LinkedIn. You are not solving massive problems at the scale that they are solving them. Your 50-person office does not need Cassandra or Hadoop or TensorFlow. Why?

  • Google Has Massive Scale – Ben mentioned it in the video above. The published scale of Google is massive, and even that is on the low side. The real numbers could be an order of magnitude higher than what we realize. When you have to start quoting throughput in “Library of Congress” units to make sense to normal people, you’re in a class by yourself.
  • Google Builds Solutions For Their Problems – It’s all well and good that Google has built a ton of tools to solve their issues. It’s even nice of them to have shared those tools with the community through open source. But realistically speaking, when are you really going to need Cassandra for anything but the most complicated and complex database issues? It’s like a guy that goes out to buy a pneumatic impact wrench to fix the training wheels on his daughter’s bike. Sure, it will get the job done. But it’s going to be way overpowered and cause more problems than it solves.
  • Google’s Tools Don’t Solve Your Problems – This is the crux of Ben’s argument above. Google’s tools aren’t designed to solve a small flow issue in an SME network. They’re designed to keep the lights on in an organization that maps the world and provides video content to billions of people. Google tools are purpose built. And they aren’t flexible outside that purpose. They are built to be scalable, not flexible.

Down To Earth

Since Google’s scale numbers are hard to comprehend, let’s look at a better example from days gone by. I’m talking about the Cisco Aironet-to-LWAPP Upgrade Tool:

I used this a lot back in the day to upgrade autonomous APs to LWAPP controller-based APs. It was a very simple tool. It did exactly what it said in the title. And it didn’t do much more than that. You fed it an image and pointed it at an AP and it did the rest. There was some magic on the backend of removing and installing certificates and other necessary things to pave the way for the upgrade, but it was essentially a batch TFTP server.

It was simple. It didn’t check that you had the right image for the AP. It didn’t throw out good error codes when you blew something up. It only ran on a maximum of 5 APs at a time. And you had to close the tool every three or four uses because it had a memory leak! But it was still a better choice than trying to upgrade those APs by hand through the CLI.
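
Just to make the shape of that kind of tool concrete, here’s a hypothetical sketch of its core loop in Python. The addresses, image name, and upgrade stub are all made up for illustration, and the real tool was a Windows app rather than a script:

```python
# Hypothetical sketch of a single-purpose batch upgrade loop in the
# spirit of the LWAPP upgrade tool: one image, a list of APs, 5 at a time.
from concurrent.futures import ThreadPoolExecutor

AP_LIST = ["10.1.1.10", "10.1.1.11", "10.1.1.12"]  # placeholder addresses
IMAGE = "c1200-recovery-image.tar"                 # placeholder image name

def upgrade(ap: str) -> str:
    # The real tool handled certificates and pushed the image over TFTP
    # at this step; this stub just stands in for that work.
    print(f"pushing {IMAGE} to {ap}")
    return f"{ap}: ok"

# The tool's hard limit of 5 concurrent APs maps to max_workers=5.
with ThreadPoolExecutor(max_workers=5) as pool:
    for result in pool.map(upgrade, AP_LIST):
        print(result)
```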

This tool is over ten years old at this point and is still available for download on Cisco’s site. Why? Because you may still need it. It doesn’t scale to 1,000 APs. It doesn’t do anything other than upgrade 5 Aironet APs at a time to LWAPP (or CAPWAP) images. That’s it. That’s the purpose of the tool. And it’s still useful.

Tools like this aren’t built to be the ultimate solution to every problem. They don’t try to pack in every possible feature to be a “single pane of glass” problem solver. Instead, they focus on one problem and solve it better than anything else. Now, imagine that tool running at a scale your mind can’t comprehend. And now you’ll know why Google builds their tools the way they do.


Tom’s Take

I have a constant discussion on Twitter about the phrase “begs the question”. Begging the question is a logical fallacy. Almost every time the speaker really means “raises the question”. Likewise, every time you think you need to use a Google tool to solve a problem, you’re almost always wrong. You’re not operating at the scale necessary to need that solution. Instead, the majority of people looking to implement Google solutions in their networks are like people that put chrome on everything on a car. They’re looking to show off instead of get things done. It’s time to retire the Google Cargo Cult and instead ask ourselves what problems we’re really trying to solve, as Ben Sigelman mentions above. I think we’ll end up much happier in the long run and find our work lives much less complicated.

Are We Seeing SD-WAN Washing?

You may have seen a tweet from me last week referencing a news story that Fortinet was now in the SD-WAN market:

It came as a shock to me because Fortinet wasn’t even on my radar as an SD-WAN vendor. I knew they were doing brisk business in the firewall and security space, but SD-WAN? What does it really mean?

SD Boxes

Fortinet’s claim to be a player in the SD-WAN space brings the number of vendors doing SD-WAN to well over 50. That’s a lot of players. But how did they come out of left field to land a deal rumored to be over a million dollars in a space that they weren’t even really playing in six months ago?

Fortinet makes edge firewalls. They make decent edge firewalls. When I used to work for a VAR we used them quite a bit. We even used their smaller units as remote appliances to allow us to connect to remote networks and do managed maintenance services. At no time during that whole engagement did I ever consider them to be anything other than a firewall.

Fast forward to 2018. Fortinet is still selling firewalls. Their website still focuses on security as the primary driver for their lines of business. They do talk about SD-WAN and have a section for it with links to whitepapers going all the way back to May. They even have a contributed article for SDxCentral back in February. However, that early article reads more like a security company saying that its secure endpoints could be considered SD-WAN.

This reminds me of stories of Oracle counting database licenses as cloud licenses so they could claim to be the fourth largest cloud provider. Or if a company suddenly decided that every box they sold counted as an IPS because it had a function that could be enabled for a fee. The numbers look great when you start counting them creatively but they’re almost always a bit of a fib.

Part Time Job

Imagine if Cisco suddenly decided to start counting ASA firewalls as container engines because of a software update that allowed you to run Kubernetes on the box. People would lose their minds. Because no one buys an ASA to run containers. So for a company like Cisco to count them as part of a container deployment would be absurd.

The same can be said for any company that has a line of business that is focused on one specific area and then suddenly decides that the same line of business can be double-counted for a new emerging market. It may very well be the case that Fortinet has a huge deployment of SD-WAN devices that customers are very happy with. But if those edge devices were originally sold as firewalls or UTM devices that just so happened to be able to run SD-WAN software, it shouldn’t really count, should it? If a customer thought they were buying a firewall, they wouldn’t really believe it was actually an SD-WAN router.

The problem with this math is that everything gets inflated. Maybe those SD-WAN edge devices are dedicated. But if they run Fortinet’s security suite, are they also being counted in the UTM numbers? Is Cisco going to start counting every ISR sold in the last five years as a Viptela deployment after the news this week that Viptela software can run on all of them? Where exactly are we going to draw the line? Is it fair to say that every x86 chip sold in the last 10 years should count for a VMware license because you could conceivably run a hypervisor on them? It sounds ridiculous when you put it like that, but only because of the timelines involved. Some crazier ideas have been put forward in the past.

The only way that this whole thing really works is if the devices are dedicated to their function and are only counted for the purpose they were installed and configured for. You shouldn’t get to add a UTM firewall to both the security side and the SD-WAN side. Cisco routers should only count as traditional layer 3 or SD-WAN, not both. If you try to push the envelope to put up big numbers designed to wow potential customers and get a seat at the big table, you need to be ready to defend those numbers when people ask tough questions about the math behind them.


Tom’s Take

If you had told me last year that Fortinet would sell a million dollars worth of SD-WAN in one deal, I’d ask you who they bought to get that expertise. Today, it appears they are content with saying their UTM boxes with a central controller count as SD-WAN. I’d love to put them up against Viptela or VeloCloud or even CloudGenix and see what kind of advanced feature sets they produce. If it’s merely a WAN aggregation box with some central control and a security suite I don’t think it’s fair to call it true SD-WAN. Just a rinse and repeat of some washed up marketing ideas.

Cisco and the Two-Factor Two-Step

In case you missed the news, Cisco announced yesterday that they are buying Duo Security. This is a great move on Cisco’s part. They need to beef up their security portfolio to compete against not only Palo Alto Networks but also against all the up-and-coming startups that are trying to solve problems that are largely being ignored by large enterprise security vendors. But how does an authentication vendor help Cisco?

Who Are You?

The world relies on passwords to run. Banks, email, and even your mobile device all rely on some kind of passcode. We memorize them, write them down, or sometimes just use a password manager (like 1Password) to keep them safe. But passwords can be guessed. Trivial passwords are especially vulnerable. And when you factor in things like rainbow tables, it gets even scarier.
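
To see why rainbow tables are scary, here’s a minimal sketch of my own (with a throwaway password): an unsalted, fast hash is identical for every user that picks the same password, so a precomputed table can reverse it, while a random per-user salt and a slow derivation function make that precomputation worthless.

```python
import hashlib, os

password = b"hunter2"  # a famously bad password

# Unsalted, fast hash: the same for every user with this password, so a
# precomputed rainbow table can reverse it instantly.
weak = hashlib.sha256(password).hexdigest()

# Salted, slow derivation (PBKDF2): unique per user and expensive enough
# that precomputing a table for every possible salt is infeasible.
salt = os.urandom(16)
strong = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)

print("unsalted:", weak)
print("salted:  ", salt.hex(), strong.hex())
```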

The most secure systems require you to have some additional form of authentication. You may have heard this called Two-Factor Authentication (2FA). 2FA makes sure that guessing your password alone isn’t enough to get in. The most commonly accepted forms of multi-factor authentication are:

  • Something You Know – Password, PIN, etc.
  • Something You Have – Credit Card, Auth token, etc.
  • Something You Are – Biometrics

You need at least two of these in order to successfully log into a system. Not having an additional form means you’re locked out. And that also means that the individual components of the scheme are useless in isolation. Knowing someone’s password without having their security token means little. Stealing a token without having their fingerprint is worthless.

But people are starting to get more and more sophisticated with their attacks. One of the most popular forms of 2FA is SMS authentication. It combines Something You Know, in this case the password for your account, with Something You Have, which is a phone capable of receiving an SMS text message. When you log in, the authentication system sends an SMS to the authorized number and you have to type in the short-lived code to get into the system.

Ask Reddit how that worked out for them recently. A hacker (or group) was able to intercept the 2FA SMS codes for certain accounts and use both factors to log in and gather account data. Intercepting SMS codes is actually not as trivial as one might think. But it’s much, much harder to crack the algorithm of something like a security token. You’d need access to the source code and months of time to pull everything off. That’s exactly what happened to RSA in 2011.

In order for 2FA to work effectively, it needs to be something like an app on your mobile device that can be updated and changed when necessary to validate new algorithms and expire old credentials. It needs to be modern. It needs to be something that people don’t think twice about. That’s what Duo Security is all about. And, from their customer base and the fact that Cisco paid about $2.3 billion for them, they must do it well.
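
For a sense of how those app-based codes work under the hood, here’s a minimal TOTP sketch along the lines of RFC 6238. This is the generic open standard, not Duo’s actual implementation: both sides share a secret once, and after that the current time is all that’s needed to derive a short-lived code.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // period   # which 30-second window we're in
    msg = struct.pack(">Q", counter)       # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F             # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Example secret; the server and the app each compute the same code,
# and it expires when the 30-second window rolls over.
print(totp("JBSWY3DPEHPK3PXP"))
```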

Won’t Get Fooled Again

How does Duo help Cisco? Well, first and foremost I hope that Duo puts an end to telnet access to routers forever. Telnet is the lazy way we enable remote access to devices. SSH is ten times better and a thousand times more secure. But setting it up to authenticate properly with certificates is a huge pain. People want it to work when they need it to work. And tying it to a specific machine or location isn’t the easiest or most convenient thing.

Duo can give Cisco the ability to introduce real 2FA login security to their devices. IOS could be modified to require Duo Security app login authentication. That means that only users authorized to log into that device would get the login codes. No more guessed remote passwords!

Think about integrating Duo with Cisco ISE. That could be a huge boon for systems that need additional security. You could have groups of systems that need 2FA and others that don’t. You could easily manage those lists and move systems in and out as needed. Or, you could start a policy that all systems need 2FA and phase in the requirements over time to make people understand how important it is and give them time to download the app and get it set up. The ISE possibilities are endless.

One caveat is that Duo works with a large number of third party programs right now, including integrations with Juniper Networks. As you can imagine, that list might change once Cisco takes control of the company. Some organizations that use Duo will probably see a price increase but will continue to offer the service to their users. Others, possibly Juniper as an example, may be frozen out as Cisco tries to keep the best parts of the company for their own use. If Cisco is smart, they’ll keep Duo available for any third party that wants to use the platform or integrate. It’s the best solution out there for solving this problem and everyone deserves to have good security.


Tom’s Take

Cisco buying a security company is no shock. They need the horsepower to compete in a world where firewalls are impediments at best and hackers have long since figured out how to get around static defenses. They need to get involved in software too. Security isn’t fought in silicon any more. It’s all in code and beefing up the software side of the equation. Duo gives them a component to compete in the broader authentication market. And the acquisition strategy is straight out of the Chambers playbook.

A plea to Cisco: Don’t lock everyone out of the best parts of Duo because you want to bundle them with recurring Cisco software revenue. Let people integrate. Take a page from the Samsung playbook. Just because you compete with Apple doesn’t mean you can’t make chips for them. Keep your competitors close and make them use your software and you’ll make more money than freezing everyone out and claiming your software is the best and least used of the bunch.

It’s About Time and Project Management

I stumbled across a Reddit thread today from /u/Magician_Hiker that posed a question I’ve always found fascinating. When we work on projects, it always seems like there is a disconnect between the project management team and the engineering team doing the work. The statement posted at the top of this thread is as follows:

Project Managers only plan for when things go right.

Engineers always plan for when things go wrong.

How did we get here? And can anything be done about it?

Projecting Management

I’ve had a turn or two at project management. I got my Project+ many years back, and even more years before that I had to learn all about project management in college. The science behind project management is storied and deep. The idea of having someone assigned to keep things running on task and making sure all the little details get taken care of is a huge boon as the size of projects grow.

As an engineer, can you imagine trying to juggle three different installations across 5 different sites that all need to be coordinated together? Can you think about the effort needed to make sure that everything works together and is done on time? The thought alone probably gives you hives.

Project managers are capable of juggling lots of things as part of their professional duties. That means keeping all the dishes cooking at the same time and making sure that everything is done on time to eat dinner. It also means that people need to know about timelines and how those timelines intersect and can impact the execution of multiple phases of a project. Sure, it’s easy to figure out that we can’t start installing the equipment until it arrives on the dock. But how about coordinating the installers to be on-site on the right day knowing that the company is drop shipping the equipment to three different receiving docks? That’s a bit harder.

Project managers need to know timelines for things because they have to juggle everything together. If you’ve ever had the misfortune to need to use a Gantt chart you’ll know what I’m talking about. These little jewels have all the timeline needs of a project visualized for everyone to figure out how to make things happen. Stable time is key to a project. Estimates need to make sense. You can’t just spitball something and hope it works. If part of your project timeline is off in either direction, you’re going to get messed up further down the line.

Predictability

Project timelines need to be consistent. Most people try to err on the side of caution when trying to make them work. They fudge the numbers and pad things out a bit so that everything will work out in the end. Even if that means that there may be a few hours when someone is sitting around with nothing to do.

I worked with a project manager that jokingly told me that the way he figured out the timing for an installation project was to take the estimates from his engineers, double the number, and move to the next time unit. So hours became days, and days became weeks. We chuckled about this at the time, but it also wasn’t surprising when their projects always seemed to take a LOT longer than most people budgeted for.

The problem with inflated numbers is that no customer is going to want to pay for wasted time. If you think it’s hard to get a customer to buy off on an installation that might take 30 hours, try getting them to pay when they are telling you your engineers were sitting around for 10 of those hours. Customers only want to pay for the hours worked, not the hours spent on the phone trying to locate shipments or trying to figure out what this weird error message is.

Likewise, trying to go the other direction and get things done more quickly than the estimate is a recipe for disaster too. There’s even a specific term for it: crashing (sounds great, eh?). Crashing a project means adding resources to a project or removing items from the critical execution path to make a deadline or complete something earlier. If you want a textbook example of why messing with a project timeline is a bad idea, go read or watch The Martian. The first resupply mission is a prime example of this practice in action and why it can go horribly wrong.
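
For the curious, the “critical execution path” is just the longest chain of dependent tasks, and it sets the floor on project duration. That’s why crashing targets tasks on that path and nothing else. Here’s a toy calculation with made-up task names and durations:

```python
from functools import lru_cache

# name: (duration_hours, dependencies), all values hypothetical
tasks = {
    "order_gear":   (8,  []),
    "receive_dock": (16, ["order_gear"]),
    "rack_install": (10, ["receive_dock"]),
    "configure":    (6,  ["rack_install"]),
    "train_staff":  (4,  ["order_gear"]),
}

@lru_cache(maxsize=None)
def earliest_finish(name: str) -> int:
    duration, deps = tasks[name]
    return duration + max((earliest_finish(d) for d in deps), default=0)

# The longest chain is the critical path; shortening anything off of it
# (like train_staff here) does nothing for the overall finish date.
hours, final_task = max((earliest_finish(t), t) for t in tasks)
print(f"minimum duration: {hours} hours, ending with {final_task!r}")
```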

These are all great reasons why cloud is so appealing to people. Justin Warren (@JPWarren) did a great presentation a couple of years ago about what happens when projects run late and why cloud fixes that:

Watch that whole video and you’ll understand things from a project manager’s point of view. Cloud is predictable and stable and it always works the same way. The variance on things is tight. You don’t have to worry about projects slipping up or taking too much time. Cloud removes uncertainty and doubt about execution. That’s something that project managers love.


Tom’s Take

I used to get asked to quote my projected installation times to the sales managers for projects. Most of the time, I’d give them an estimate that I felt comfortable with and that would be the end of it. One day, I asked them about why a 10-hour project was quoted as 14 on an order. The sales manager told me that they’d developed “Tom Time”, which was 1.4 times the amount of whatever I quoted. So, 10 hours became 14 and 20 hours became 28, and so on. When I asked why, I was told that engineers often run into problems and don’t think to account for it. So project managers need to build in the time somehow. Perhaps that’s one of the reasons why software defined solutions and cloud are more attractive. Because there isn’t any Tom Time involved.

Friday Musings on Network Analytics

I’ve been at Networking Field Day this week, and as always the conversations have been great and focused around a variety of networking topics. One that keeps jumping out at me is network analytics. There’s a few things that have come up that were especially interesting to me:

  • Don’t tell yourself that network monitoring isn’t worth your time. Odds are good you’re already monitoring stuff in your network and you don’t even realize it. Many networking vendors enable basic analytics for troubleshooting purposes. You need to figure out how to build that into a bigger part of your workflows.
  • Remember that analytics can integrate with platforms you’re already using. If you’re using ServiceNow you can integrate everything into it. No better way to learn how analytics can help you than to set up some kind of ticket generation for down networks. And, if that automation causes you to get overloaded with link flaps you’ll have even more motivation to figure out why your provider can’t keep things running.
  • Don’t discount open source tools. The world has come a long way since MRTG and Cacti. In fact, a lot of the flagship analytics platforms are built with open source tools as a starting point. If you can figure out how to use the “free” versions, you can figure out how to implement the bigger stuff too. The paid versions may look nicer or have deeper integrations, but you can bet that they all work mostly the same under the hood. There’s a quick sketch of how simple the starting point can be right after this list.
  • Finally, remember that you can’t possibly deal with all this data yourself. You can collect it but parsing it is like trying to drink from a firehose of pond water. You need to treat the data and then analyze that result. Find tools (probably open source) that help you understand what you’re seeing. If it saves you 10 minutes of looking, it’s worth it.
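
As an example of how low the barrier to entry is, here’s a minimal poll of an interface counter using the open source pysnmp library. The device address and community string are placeholders, so point it at your own gear:

```python
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

# Poll inbound octets on interface 1 of a (placeholder) device.
error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData("public", mpModel=1),      # SNMPv2c community string
    UdpTransportTarget(("192.0.2.1", 161)),  # placeholder device address
    ContextData(),
    ObjectType(ObjectIdentity("IF-MIB", "ifInOctets", 1)),
))

if error_indication:
    print(error_indication)
else:
    for var_bind in var_binds:
        # Graph this value over time and you've rebuilt the heart of MRTG.
        print(" = ".join(x.prettyPrint() for x in var_bind))
```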

Tom’s Take

Be sure to stay tuned to our Gestalt IT On-Premise IT Roundtable podcast in the coming weeks for more great discussion on the analytics topic. We’ve got an episode that should be out soon that will take the discussion of the “expense” of networking analytics in a new direction.

Independence, Impartiality, and Perspective

In case you haven’t noticed recently, there are a lot of people that have been going to work for vendors and manufacturers of computer equipment. Microsoft has scored more than a few of them, along with Cohesity, Rubrik, and many others. This is something that I see frequently from my position at Tech Field Day. We typically hear the rumblings of a person looking to move on to a different position early on because we talk to a great number of companies. We also hear about it because it represents a big shift for those who are potential delegates for our events. Because going to a vendor means loss of their independence. But what does that really mean?

Undeclaring Independence

When people go to work for a manufacturer of a computing product, they necessarily lose their independence. But that’s not the only case where that happens. You can also not be truly independent if you work for a reseller. If your company specializes in Cisco and EMC, are you truly independent when discussing Juniper and NetApp? If you make your money by selling one group of products you’re going to be unconsciously biased toward them. If you’ve been burned or had the rug pulled out from under you by a vendor, you may be biased against them.

Likewise, if you work for yourself in an independent consulting business, are you honestly and truly independent? That would mean you’re making no money from any vendor as your customer. That means no drafting of whitepapers, no testing, and no interaction with them that involves compensation. This falls into the same line of thinking as the reseller employee. Strictly speaking, you aren’t totally independent.

Independence would really mean not even being involved in the market. A public defender is truly independent of IT. A plumber is independent of IT. They don’t have biases. They don’t care either way what type of computer or network solution they use. But you don’t consider a public defender or a plumber to be an expert on IT. Because in order to know anything about IT you have to understand the companies. You have to talk to them and get what they’re doing. And that means you’re going to start forming opinions about them. But, truthfully, real independence is not what you’re looking for here.

Partial Neutrality

Instead of independence, which is a tricky subject, what you should look for is impartiality. Being impartial means treating everything fairly. You accept that you have bias and you try to overcome it. For example, judges are impartial. They have their own opinions and thoughts about subjects. They rule based on the law. They aren’t independent of the justice system. Instead, they do their best to use logic and rules to decide matters.

Likewise, people in IT should strive to be impartial. Instead of forming an opinion about something without thought, we should try to look at all aspects of an idea before we form our opinions. I used to be a huge fan of IBM Thinkpad laptops. I carried one for many years at my old job. Today, I’m a huge fan of MacBooks. I use one every day. Does that mean that I don’t like Windows laptops any more? It means that I don’t use them enough to have a solid opinion. I may compare some of the features I have on my current laptop against the ones that I see, but I also recognize that I am making that comparison and that I need to take it into account when I arrive at my decision.

Extending this further, when making decisions about how we analyze a solution from a company, we have to take into account how impartial we’re going to be in every direction. When you work for a company that makes a thing, whether it be a switch or a laptop or a car, you’re going to be partial to the thing you make. That’s just how people are. We like things we are involved in making. Does that mean we are incapable of admiring other things? No, but it does mean that we have some things to overcome to truly be impartial.

The hardest part of trying to be impartial is realizing when you aren’t. Unconscious bias creeps in all the time. Thoughts about the one time you used a bad product or ate bad food in a restaurant make you immediately start thinking about buying something else. Even when it’s unfounded we still do it. And we have to recognize when we’re doing it in order to counter it.

Being impartial on purpose isn’t easy. Being able to look at all sides of an issue, both good and bad, takes a lot of extra effort. We don’t always like seeing the good in something we dislike or finding the bad parts of an opinion we agree with. But in order to be good judges of things we need to find the balance.


Tom’s Take

I try my best to be impartial. I know it’s hard to do. I have the fortune to be independent. Sure, I’m still a CCIE so I have a lot of Cisco knowledge. I also have a lot of relationships with people in the industry. A lot of those same people work at companies that are my customers. In the end, I have to realize that I need to work hard to be impartial every day. That’s the only way I can say that I’m independent and capable of evaluating things on their own merits. It’s just a matter of having the right perspective.