Making Alexa Tech Demos Useful

Technology always marches on. People want to see the latest gadgets doing amazing things, whether it be flying electric cars or telepathic eyeglasses. Our society is obsessed with the Jetsons and the look of the future. That’s why we’re developing so many devices to help us get there. But it’s time for IT to reconsider how it’s using one of those devices for a purpose far from the original idea.

Speaking For The People

By all accounts, the Amazon Echo is a masterful device. It’s a smart speaker that connects to an Amazon service offering a wide variety of software programs, called skills, to enhance what you can do with it. I have several of these devices that were either given out as conference attendance gifts or obtained from other giveaways.

I find the Echo speaker a fascinating thing. It’s a good speaker. It can play music through my phone or other Bluetooth-connected devices. But, I don’t really use it for that purpose. Instead, I use the skills to do all kinds of other things. I play Jeopardy! frequently. I listen to news briefings and NPR on a regular basis. I get weather forecasts. My son uses the Echo to check simple fraction math when he’s doing homework. My daughter uses it to time her math facts practice.

It would appear that the power behind an Echo speaker lies not in the hardware, but in the software stack built on it. It’s so powerful that most people don’t even refer to the speaker as an “Echo”, but instead as “Alexa”, the default name used to activate the listening service. People ask Alexa all kinds of things. And Alexa provides answers or ways to get the answers. It’s so popular that modern IT organizations have started to get in on the action.

Alexa, Tell Me A Story

Enterprise IT vendors are starting to show off their programming skills by creating Alexa skills to integrate with their software. Ostensibly, this would be to showcase how the platform has a rich API that allows for a large amount of information to be queried all at once. Users could ask Alexa to give them a readout of what’s going on without having to log into the system at any given time. I’ve personally seen demos that ask Alexa to find out who is using all the network bandwidth, what is the status of the wireless network, and even details on protocols.

However, there is a huge downside to using Alexa for this purpose. Without specifically crafted questions, you get a readout that is like trying to drink from a monotone firehose. Alexa is just like any computer system in that it will dutifully read you whatever input is given to it. That’s fine if you want the kind of detail that you get in your average computer monitoring system. But, if you’re using a smart speaker to cut down on the amount of information you are processing, you probably don’t want the entire text of the system read out to you.

I always fall back on the idea of people trying to make small talk. When you ask someone how their day is going, you typically aren’t looking for a recitation of their entire schedule from start to finish with all the details they can pack in. You’re looking for a simple answer – good, okay, or not good. That’s the basic level of information that anyone wants about anything. More specific queries can drill down into other areas, but the initial conversation needs to be easy to parse in one or two sentences.

Another issue with using Alexa for technical demos is how the system parses IP addresses and DNS names. Alexa will dutifully read an IP address to you one digit at a time, including periods between octets. That can be annoying for addresses in the old Class C range with lots of 3-digit numbers. Also, you’d have to write them down to get any kind of coherence about which system was being discussed, which does kind of eliminate the usefulness of getting information from a speaker. With DNS names, Alexa will try to read the name of the system as if it were a real word. That can produce results that range from hilarious to downright unintelligible. It makes trying to understand these briefings much harder.
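There is at least a partial fix for the IP address piece. Here’s a minimal sketch, assuming a Python skill backend and standard Alexa SSML (the pause length is just a guess at what sounds natural): hand the speech engine each octet as a whole number instead of a raw dotted quad.

```python
def speakable_ip(ip: str) -> str:
    """Render '192.168.1.10' so each octet is read as a whole number
    ('one ninety-two') instead of digit by digit."""
    octets = ip.split(".")
    # Join with the word 'dot' plus a short pause so the octets stay grouped
    spoken = ' dot <break time="150ms"/> '.join(octets)
    return f"<speak>{spoken}</speak>"
```

It won’t make a subnet briefing thrilling, but at least the numbers arrive in chunks a human can hold onto.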

So, how can the broader firehose problem be fixed? The answer is actually quite easy. Instead of making your Alexa skill read off every possible piece of information in response to a simple query, have it give a basic readout. Possible answers like:

  • Things look good now
  • There are a couple of trouble spots to look at. Would you like to know more?
  • There are quite a few problems. I suggest logging in to learn more.

Each of these answers gives the user a chance to understand things. A “good” response means everything is good and you don’t need to know more. An “okay” or middle response says there are only a couple of issues that can be summarized here. A “bad” response tells the user that there is too much information to be easily digested in an audio briefing and that they should log into the system to see more. That gives the user the option of getting more compact information in a format that makes sense to them rather than listening to the speaker drone on for five minutes about all the errors in the system.
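Sketched in Python, the skill backend boils the alert data down before it ever reaches the speech engine. This is a minimal illustration of the three-tier idea, not any vendor’s actual skill; fetch_alerts() is a hypothetical stand-in for whatever your monitoring API returns:

```python
def summarize_status(alerts):
    """Collapse a full monitoring readout into one of the three tiers."""
    critical = [a for a in alerts if a.get("severity") == "critical"]
    if not alerts:
        return "Things look good now."
    if len(alerts) <= 3 and not critical:
        return ("There are a couple of trouble spots to look at. "
                "Would you like to know more?")
    return "There are quite a few problems. I suggest logging in to learn more."


def lambda_handler(event, context):
    """Skill entry point, returning the standard Alexa response shape."""
    alerts = fetch_alerts()  # hypothetical call into your monitoring system
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText",
                             "text": summarize_status(alerts)},
            "shouldEndSession": False,  # stay open for "tell me more"
        },
    }
```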


Tom’s Take

Technology is a wonderful thing. Technology used for the proper purpose is even better. The Amazon Echo is a great tool that helps advance our understanding of what people listen to and how they use machine learning and AI to ask questions and get answers. But, ultimately the Echo is a consumer device built around consumer questions. It’s up to enterprise tech vendors to write skills that give us the chance to interact with the speaker, not just get an information dump first thing in the morning. Enterprise tech vendors need to understand that they are what makes Alexa’s briefings useful. Select the information users will receive and package it in such a way as to make it digestible.


The Winds of Change From January

Some quick thoughts on networking from my last couple of weeks at Networking Field Day 17 and Tech Field Day Extra at Cisco Live Europe:

  • Cisco is in the middle of turning a big ship away from hardware. All their innovation is coming on the software side of the house. Big announcements around network assurance. It’s not enough anymore just to do the things. Now you need to prove they were done and show your work. Context and Intent only work if you can quantitatively show that they were applied.
  • Containers are still a thing. Cisco has a new container platform. I also had the chance to chat with a startup called AppOrbit that’s doing some interesting things around containers, including storage and networking. They should be primed for some announcements soon, so stay tuned for that!
  • Automation is cool again. Well, maybe it never stopped being cool. But thanks to Extreme Networks and Juniper, people are really hopping on the train to talk more about removing the limitations of the CLI and doing it with tools like Slack (see the sketch after this list). Check out Lindsay Hill and Matt Oswalt showing this off to people in some finely crafted demos.
  • 2018 is the year that the CLI dies. Sure, we’ll go with that. Between Slack and GitHub and even Cisco’s push to drive ACI through literally everything, we’re going to see more and more people configuring networks with a mouse instead of a keyboard. Which is a bit crazy when you think about it, but it’s not as far-fetched as you might think compared to the way people are configuring AWS right now. I dare you to find the CLI for AWS’s switches in your control panel.
  • Lastly, change is inevitable. People reading through the above items may say to themselves that their job is going to go away. They may worry that they’re going to be an old fuddy-duddy before they know it. If you never want to change, that’s fine. As Truman Boyes said this week: https://twitter.com/trumanboyes/status/961785937993846789 But if you want to really succeed and move along, you can’t be afraid to change. You need to pick up new skills and learn new things. Oceans and rivers don’t erode mountains because they are there. They wear them down because the mountains are incapable of moving and changing. Change is thrust upon them.
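For a taste of what those Slack demos look like under the hood, here’s a bare-bones sketch of the ChatOps pattern: a Slack slash command hits a tiny web service, which runs a read-only command on a switch and posts the output back. The route and device details are placeholders of my own; the Slack payload fields and Netmiko calls are the standard ones.

```python
from flask import Flask, request, jsonify
from netmiko import ConnectHandler

app = Flask(__name__)

SWITCH = {  # hypothetical lab device; point this at your own inventory
    "device_type": "cisco_ios",
    "host": "192.0.2.10",
    "username": "admin",
    "password": "secret",
}

@app.route("/slack/interface", methods=["POST"])
def interface_status():
    """Answer a slash command like '/interface Gi0/1' with live status."""
    target = request.form.get("text", "").strip()
    with ConnectHandler(**SWITCH) as conn:
        output = conn.send_command(f"show interface {target}")
    # Slack renders triple-backtick blocks as monospace
    return jsonify({"response_type": "ephemeral", "text": f"```{output}```"})
```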

Tom’s Take

Go out and make a change this week. Do something different. Use a different treadmill for your workout. Visit a store you’ve never seen before. Place yourself in a different situation and see how you respond to it. Then come back to your desk and look at your work. Look at containers and automation with new eyes. I bet it will look a lot less scary and a lot more fun to you. Don’t be afraid of change. Embrace it and grow.


Is ACI Coming For The CLI?

I’m soon to depart from Cisco Live Barcelona. It’s been a long week of fun presentations. While I’m going to avoid using the words intent and context in this post, there is one thing I saw repeatedly that grabbed my attention. ACI is eating Cisco’s world. And it’s coming for something else very soon.

Devourer Of Interfaces

Application-Centric Infrastructure has been out for a while and it’s meeting with relative success in the data center. It’s going up against VMware NSX and winning in a fair number of deals. For every person I talk to that can’t stand it, I hear from someone gushing about it. ACI is making headway as the tip of the spear when it comes to Cisco’s software-based networking architecture.

Don’t believe me? Check out some of the sessions from Cisco Live this year. Especially the Software-Defined Access and DNA Assurance ones. You’re going to hear context and intent a lot, as those are the key words for this new strategy. You know what else you’re going to hear a lot?

Contract. Endpoint Group (EPG). Policy.

If you’re familiar with ACI, you know what those words mean. You see the parallels between the data center and the push in the campus to embrace SD-Access. If you know how to create a contract for an EPG in ACI, then doing it in DNA Center is just as easy.

If you’ve never learned ACI before, you can dive right in with new DNA Center training and get started. And when you’ve finally figured out what you’re doing, you can not only use those skills to program your campus LAN, you can extend them into the data center network as well thanks to consistent terminology.
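To make that concrete, here’s a rough sketch of what “creating a contract for an EPG” looks like against the APIC REST API, assuming the standard ACI object names (fvTenant, fvAp, fvAEPg, vzBrCP); the controller address and credentials are placeholders:

```python
import requests

APIC = "https://apic.example.com"  # placeholder controller address
session = requests.Session()

# Authenticate; APIC returns a session cookie on aaaLogin
session.post(f"{APIC}/api/aaaLogin.json", json={
    "aaaUser": {"attributes": {"name": "admin", "pwd": "secret"}}
}, verify=False)

# One POST builds a tenant, an app profile, an EPG, and a contract,
# then has the EPG consume that contract
payload = {
    "fvTenant": {
        "attributes": {"name": "Demo"},
        "children": [
            {"vzBrCP": {"attributes": {"name": "web-to-db"}}},
            {"fvAp": {
                "attributes": {"name": "App1"},
                "children": [{
                    "fvAEPg": {
                        "attributes": {"name": "web"},
                        "children": [{"fvRsCons": {
                            "attributes": {"tnVzBrCPName": "web-to-db"}}}],
                    }
                }],
            }},
        ],
    }
}
session.post(f"{APIC}/api/mo/uni.json", json=payload, verify=False)
```

The point isn’t the payload itself. It’s that the same nouns show up whether the GUI in front of it says ACI or DNA Center.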

It’s almost like Cisco is trying to introduce a standard set of terms that can be used to describe consistent behaviors across groups of devices for the purpose of cross training engineers. Now, where have we seen that before?

Bye Bye, CLI

Oh yeah. And, while you’re at it, don’t forget that Arista “lost” a copyright case to Cisco over the CLI and didn’t get fined. Even without the legal ramifications, the Cisco-based CLI has been living on borrowed time for quite a while.

APIs and Python make programming networks easy. Provided you know Python, that is. That’s great for DevOps folks looking to pick up another couple of libraries and get those VLANs tamed. But it doesn’t help people that are looking to expand their skillset without learning an entirely new language. People scared by semicolons and strict syntax structure.
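To be clear about the hurdle: even a “simple” API-driven change still means writing something like this Netmiko sketch (device details are placeholders), which is exactly what scares the semicolon-averse.

```python
from netmiko import ConnectHandler

switch = {  # hypothetical device; swap in your own details
    "device_type": "cisco_ios",
    "host": "192.0.2.20",
    "username": "admin",
    "password": "secret",
}

# Push a small config change, the way a DevOps-minded engineer would
with ConnectHandler(**switch) as conn:
    conn.send_config_set(["vlan 100", "name user-access"])
```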

That’s the real reason Cisco is pushing the ACI terminology down into DNA Center and beyond. This is their strategy for finally getting rid of the CLI across their devices. Now, instead of dealing with question marks and telnet/SSH sessions, you’re going to orchestrate policies and EPGs from your central database. Everything falls into place after that.

Maybe DNA Center does some fancy Python stuff on the back end to handle older devices. Maybe there are even some crazy command interpreters literally force-feeding syntax to an ancient router. But the end goal is to get people into the tools used to orchestrate. And when that day comes, Cisco will have a central location from which to build. No more archaic terminal windows. No more console cables. Just the clean purity of the user interface built by Insieme and heavily influenced by Cisco UCS Director.


Tom’s Take

Nothing goes away because it’s too old. I still have a VCR in my house. I don’t even use it any longer. It sits in a closet for the day that my wife decides she wants to watch our wedding video. And then I spend an hour hooking it up. But, one of these days I’m going to take that tape and transfer it to our Plex server. The intent is still the same – my wife gets to watch videos. But I didn’t tell her not to use the VCR. Instead, I will give her a better way to accomplish her task. And on that day, I can retire that old VCR to the same pile as the CLI. Because I think the ACI-based terminology that Cisco is focusing on is the beginning of the end of the CLI as we know it.

Legacy IT Is Not A Monument

During Networking Field Day 17, there was a lot of talk about legacy IT constructs, especially as they relate to the cloud. Cloud workloads are much better when they are new things with new applications and new processes. Existing legacy workloads are harder to move to the cloud, especially if they require some specific Java version or special hardware to work properly.

We talk a lot about how painful legacy IT is. So why do we turn it into a monument that stands the test of time?

Keeping Things Around

Most monuments that we have from ancient times are things that we never really intended to keep. Aside from the things that were supposed to be saved from the beginning, most iconic things were never built to last. Even things like the Parthenon or the Eiffel Tower. These buildings were always envisioned to be torn down sooner or later.

Today, we can’t imagine a world without those monuments. We can’t conceive of a time without them. And, depending on the amount of time that elapsed between the creation of a thing and the decision to preserve it, there could be irreparable damage. Yet that just adds to the charm.

Now, apply those factors to legacy IT. We have software that is outdated. We have applications that need old Java versions to run correctly. Or outdated DLLs. Or some other kind of thing that could cause complication or damage to our systems. Yet, we can’t bear to part with our old familiar IT. Maybe it was a UI change. Maybe it never really ran correctly on a new operating system. Or maybe the new version cost so much to upgrade that it was just cheaper to keep hacking the old thing to work slightly better each time.

Rather than examining how we could replicate the workload or find a better, quicker way to do things, we find ourselves building legacy IT into a preserved monument. We freeze the software or hardware and we never update it. We build crazier and more complicated solutions to keep something running that is well past the retirement date.

View Only

The other complication of legacy IT is that we use the programs but we never really do more than look at them. We don’t focus on the data or the process. Instead, we just plug things into a system and run what we need to run. I remember working for IBM and having to enter my weekly timesheet twice. Why? Because the new timesheet system was a Java app that ran on Windows 2000. But the system that paychecks were generated from for hourly employees (like me) was run from an AS/400 terminal window. So, after I spent half an hour entering my time for the week, I had to spend another half hour entering those exact same time entries into a console.

Would it have been easier to replicate the functions of the terminal program in Java and make time entry a single thing? Sure. Would it be easier for everything to integrate and reduce time for employees? Sure. But the people that had been using the console program for the last decade had it down. They could enter their time in a few minutes. In fact, even though the Java program was more precise for time increments, most employees hated it. They’d rather use an imprecise and outdated program because it was faster and more familiar. Even though they had to manually edit their timesheet after the fact because the newer reason codes weren’t loaded in the old system.

Read-only legacy IT makes everyone’s life miserable. All kinds of crazy patches and hacks are necessary to make it run correctly with new functions. And no matter what, the replacement solution is something that people will hate. Simply because it’s not the old system.

Hands Off

This, for me, is the hardest part of legacy IT monuments. Once it works, NEVER TOUCH IT AGAIN. You can’t migrate, upgrade, or move a machine. You can’t get newer hardware that’s under support. The number of times that I’ve had to buy parts from eBay to fix broken legacy systems is much, much too high.

Now, we have to worry about what happens when the system never comes back. Or when we push it past the breaking point for some legacy app. Look at the shift from 32-bit applications to 64-bit. We’re in the transition process and yet, still, there are people that have some old application or hardware device that can’t run on newer software. Once we force a cutoff, we have to find a way to build band-aids that people can use to make a decade-old thing work.

This hands-off mentality is also part of the reason why cloud migration projects fail. Even if you can get 90% of the software in your environment to work the way you want, the odds are that the remaining 10% is composed of legacy applications that haven’t been retired because they are mission critical and very finicky. They won’t migrate. They might even be running in VMs that have to emulate old OSes because they can’t ever be upgraded. And those kinds of old familiar pets are the ones that take too much of your time.


Tom’s Take

Monuments are old buildings. We keep them around because they remind us of how things used to be. They might be falling apart. They may take more time to keep up, but people love to see them and enjoy them for the nostalgia. Legacy IT is not that. It’s a headache. It’s a pain to have read-only apps that we can never change because we don’t want to or can’t get them running on something new. Rather than building them into a static monument, we need to retire them and find a way to build something new. Because no matter how beautiful they may be, no legacy IT project will ever stand the test of time like the Parthenon.

Chipping Away At Technical Debt

We’re surrounded by technical debt every day. We have a mountain of it sitting in distribution closets and a yard full of it out behind the data center. We make compromises for budget reasons, for technology reasons, and for political reasons. We tell ourselves every time that this is the last time we’re giving in and the next time it’s going to be different. Yet we find ourselves staring at the landscape of technical debt time and time again. But how can we start chipping away at it?

Time Is On Your Side

You may think you don’t have any time to work on the technical debt problem. This is especially true if you don’t have the time due to fixing problems caused by your technical debt. The hours get longer and the effort goes up exponentially to get simple things done. But it doesn’t have to be that way.

Every minute you spend trying to figure out where a link goes or how a server is connected to the rest of the pod is a minute that should have been spent documenting it somewhere. In a text document, in a picture, or even on the back of a napkin stuck to the faceplate of said server!

I once watched an engineer get paid an obscene amount of money to diagram a network. Because it had never been done. He didn’t modify anything. He didn’t change a setting. All he did was figure out what went where and what the interface addresses were. He put it in Visio and showed it to the network administrators. They were so thrilled they had it printed in poster size and framed! He didn’t do anything above and beyond showing them what they already had.

It’s easy to say that you’re going to document something. It’s hard to remember to do it. It’s even harder when you forget to do it and have to spend an hour remembering why you did it in the first place. So it’s better to take a minute to write in an interface description. Or perhaps a comment about why you did a thing the way you did it. Every one of those minutes chips away a little more technical debt.
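One hedged example of turning those minutes into tooling: a short Netmiko-based audit that lists every interface with no description at all, so you know exactly where the documentation debt lives. The parsing assumes IOS-style show interfaces description output and is deliberately naive.

```python
from netmiko import ConnectHandler

def undocumented_interfaces(device: dict) -> list:
    """Return interfaces whose description column is empty."""
    with ConnectHandler(**device) as conn:
        output = conn.send_command("show interfaces description")
    missing = []
    for line in output.splitlines()[1:]:   # skip the header row
        # Columns: Interface  Status  Protocol  Description
        parts = line.split(None, 3)
        if len(parts) == 3:                # no fourth column = no description
            missing.append(parts[0])
    return missing

# Naive parse: good enough for a sketch, but statuses like
# "admin down" span two columns and would need real parsing.
```

Run that against a pod and you have an instant punch list of debt to chip away at.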

Ask yourself this question next time: If I spent less time solving documentation problems, what could I do to reduce debt overall?

Mad As Hell And Not Going To Design It Anymore

The other sources of technical debt are money and politics. And those usually come into play in the design phase. People jockeying for position or trying to save money for something else. I don’t mind honest budget saving measures. What I mind is trying to cut budgets for things like standing desks and quarterly bonuses. The perception of IT is that it is still a budget sink with no real purpose to support the business. And then the email server goes down. That’s when IT’s real value comes out.

When you design a network, you need to take into account many factors. What you shouldn’t take into account is someone dictating how things are going to be just because they like the way that something looks or sounds. When I worked at a VAR there were always discussions like this. Who should we use for the switching hardware? Which company has a great rebate this quarter? I have a buddy that works at Vendor X, so we should throw him a bone this time.

As the networking professional, you need to stand up for your decisions. If you put the hard work and effort into making something happen you shouldn’t sit down and let it get destroyed by someone else’s bad idea. That’s how technical debt starts. And if you let it start this way you’ll never be able to get rid of it. Both because it will be outside of your control and because you’ll always identify someone else as the source of the problem.

So, how do you get mad and fix it before it starts? Know your project cold. Know every bolt and screw. Justify every decision. Point out the mistakes when they happen in front of people that don’t like to be embarrassed. In short, be a jerk. The kind of self-important, overly cerebral jerk that is smug because he knows he’s right. And when people know you’re right they’ll either take your word for things or only challenge you when they know they’re right.

That’s not to say that you should be smug and dismissive all the time. You’re going to be wrong. We all are. Well, everyone except my wife. But for the rest of us, you need to know that when you’re wrong you need to accept it and discuss the situation. No one is 100% infallible, but someone that can’t recognize when they are wrong can create a whole mountain range of technical debt before they’re stopped.


Tom’s Take

Technical Debt is just a convenient term for something we all face. We don’t like it, but it’s part of the job. Technical Debt compounds faster than any finance construct. And it’s stickier than student loans. But, we can chip away at it slowly with some care. Take a little time to stop the little problems before they happen. Take a minute to save an hour. And when you get into that next big meeting about a project and you see a huge tidal wave of technical debt headed your way, stand up and let it be known. Don’t be afraid to speak out. Just make sure you’re right and you won’t drown.

Does Juniper Need To Be Purchased?

You probably saw the news this week that Nokia was looking to purchase Juniper Networks. You also saw pretty quickly that the news was denied, emphatically. It was a curious few hours when the network world was buzzing about the potential to see Juniper snapped up into a somewhat larger organization. There was also talk of product overlap and other kinds of less exciting but very necessary discussions during mergers like this. Which leads me to a great thought exercise: Does Juniper Need To Be Purchased?

Sins of The Father

More than any other networking company I know of, Juniper has paid the price for trying to break out of their mold. When you think Juniper, most networking professionals will tell you about their core routing capabilities. They’ll tell you how Juniper has a great line of carrier and enterprise switches. And, if by some chance, you find yourself talking to a security person, you’ll probably hear a lot about the SRX Firewall line. Forward thinking people may even tell you about their automation ideas and their charge into the world of software defined things.

Would you hear about their groundbreaking work with Puppet from 2013? How about their wireless portfolio from 2012? Would anyone even say anything about Junosphere and their modeling environments from years past? Odds are good you wouldn’t. The Puppet work is probably bundled in somewhere, but the person driving it in that video is on to greener pastures at this point. The wireless story is no longer a story, but a footnote. And the list could go on longer than that.

When Cisco makes a misstep, we see it buried, written off, and eventually become the butt of really inside jokes between groups of engineers that worked with the product during the short life it had on this planet. Sometimes it’s a hardware mistake. Other times it’s software architecture missteps. But in almost every case, those problems are anecdotes you tell as you watch the 800lb gorilla of networking squash their competitors.

With Juniper, it feels different. Every failed opportunity is just short of disaster. Every misstep feels like it lands on a land mine. Every advance not expanded upon is the “one that got away”. Yet we see it time and time again. If a company like Cisco pushed the envelope the way we see Juniper pushing it we would laud them with praise and tell the world that they are on the verge of greatness all over again.

Crimes Of The Family

Why then does Juniper look like a juicy acquisition target? Why are they slowly being supplanted by Arista as the favored challenger of the Cisco Empire? How is it that we find Juniper in the crosshairs of everyone, fighting to stay alive?

As it turns out, wars are expensive. And when you’re gearing up to fight Cisco you need all the capital you can get. That forces you to make alliances that may not be the best for you in the long run. And in the case of Juniper, it brought in some of the people that thought they could get in on the ground floor of a company that was ready to take on the 800lb gorilla and win.

Sadly, those “friends” tend to be the kind that desert you when you need them the most. When Juniper was fighting tooth and nail to build their offerings up to compete against Cisco, the investors were looking for easy gains and ways to make money. And when those investors realized that toppling empires takes more than two quarters, they got antsy. Some bailed. Those needed to go. But the ones that stayed caused more harm than good.

I’ve written before about Juniper’s issues with Elliott Capital Management, but it bears repeating here. Elliott is an activist investor in the same vein as Carl Icahn. They take a substantial position in a company and then immediately start demanding changes to raise the stock price. If they don’t get their way, they release paper after paper decrying the situation to the market until the stock price is depressed enough to get the company to listen to Elliott. Once Elliott’s demands are met, Elliott exits its position, takes a small profit, and moves on to do it all over again, leaving behind a shell of a company wondering what happened.

Elliott has done this to Juniper over and over. Pulse VPN. Trapeze. They’ve demanded executive changes and forced Juniper to abandon good projects with long-term payoffs because they won’t bounce the stock price higher this quarter. And worse yet, if you look back over the last five years you can find story after story in the finance industry about Juniper being up for sale or being a potential acquisition target. Five. Years. When’s the last time you heard about Cisco being a potential target for a buyout? Hell, even Arista doesn’t get shopped as much as Juniper.


Tom’s Take

I think these symptoms all share the same root issue. Juniper is a great technology company that does some exciting and innovative things. But, much like a beautiful potted plant in my house, they are reaching the maximum size they can grow to without making a move. Plants can only grow as big as their container. If you leave them in a small one, they’ll only ever be small. You can transfer them to something larger, but you risk harm or death. But you’ll never grow if you don’t change. Juniper has the minds and the capability to grow. And maybe with the eyes of the Wall Street buzzards looking elsewhere for a while, they can build a practice that gives them the capability to challenge in the areas they are good at, not just being the answer for everything Cisco is doing.

Complexity Isn’t Always Bad

I was reading a great post this week from Gian Paolo Boarina (@GP_Ifconfig) about complexity in networking. He raises some great points about the overall complexity of systems and how we can never really reduce it, just move or hide it. And it made me think about complexity in general. Why are we against complex systems?

Confusion and Delay

Complexity is difficult. The more complicated we make something the more likely we are to have issues with it. Reducing complexity makes everything easier, or at least appears to do so. My favorite non-tech example of this is the carburetor of an internal combustion engine.

Carburetors are wonderful devices that are necessary for the operation of the engine. And they are very complicated indeed. A minor mistake in configuring the spray pattern of the jets or their alignment can cause your engine to fail to work at all. However, when you spend the time to learn how to work with one properly, you can make the engine perform even above the normal specifications.

Carburetors have been largely replaced in modern engines by computerized fuel injectors. These systems accomplish the same goal of injecting the fuel-air mixture into the engine. However, they are completely controlled by a computer system instead of being mechanically configured. It’s a great leap forward for people that aren’t mechanics or gear heads. The system either works or it doesn’t. There are no configuration parameters. Of course, if it doesn’t work there’s also very little that you as a non-mechanic can do to rectify the situation. As Gian Paolo points out, the complexity in the system has just been moved from the carburetor to the computer system running it.

But why is that a bad thing? If the standard user is never supposed to fiddle with the system, why is moving the complexity a bad thing? It could be argued that removing complications from the operation and diagnostics of the system is good, but only if you intend untrained people to work on the system. A non-mechanic might never be able to fix a fuel injection system, but a trained person should be able to fix it quickly. Here, the complexity isn’t a barrier to the people who have been trained properly to anticipate it.

Complexity is only a problem for people who don’t understand it. Whether it’s a routing protocol or a file system, complex things are going to exist no matter what we do. Understanding them doesn’t have to be the job of everyone that uses the system.

A Tangled Web

I remember briefly working with Novell’s original identity management system back when it was still called DirXML. It was horribly complicated. It required a number of XML drivers importing information into eDirectory, which itself had quirks. And that identity repository fed multiple systems via XML rules to populate those data structures. It was a complicated nightmare to end all nightmares.

Except when it worked. When the system did the job properly, it looked like magic. Information entered for a new employee in the HR system automatically created an Active Directory user account in a different system, provisioned an email account in a third different system, and even created a time card entry in a fourth totally different system. The complexity under the hood churned its way through to provide usability to the people that relied on the system. Could they have manually entered all of that information? Sure. But having it automatically happen was a huge time saver for them. And when you apply it to a school where those actions needed to be repeated dozens of times for new students you can see how it would save a significant amount of time.

Here, complexity is the reason the system exists. You didn’t have the capability to feed those individual systems at the time because of the lack of API support or various other reasons. You had to find a way to force feed the information to a system that wasn’t expecting to get it any other way. Complexity here was required. And it worked. Until it didn’t.

Troubleshooting the XML issues in the system and keeping it running with new updates and broken links consumed a huge amount of time for the people I knew that were good at using it. So much time, in fact, that a couple of them made a business out of remotely supporting DirXML for customers that utilized it and either didn’t know how to use it or didn’t have the specific knowledge necessary to make it work the way they wanted. Here, the complexity wasn’t only a necessity of the system, it was also a driver to create a new support level for it.

Ultimately, DirXML went away as it was consumed by Novell Identity Manager. And now, the idea of these systems not having an API is silly. We focus our efforts more on the programming of the API and not on the extra complexity of the layers on top of it. But even those API interactions can be complex. So, we’ve essentially traded one complexity for another. We have simplified some aspects of the complexity while introducing others. We’ve also standardized things without necessarily making them any easier to do.
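To be fair about what we gained, here’s a hedged illustration: a single SCIM-style provisioning call (the endpoint and token are placeholders; the schema URN is the standard one) now does what an entire chain of XML drivers used to do. The complexity didn’t vanish, though. It moved into the schema and the service behind it.

```python
import requests

# Placeholder SCIM 2.0 endpoint and token for an identity service
SCIM_URL = "https://idm.example.com/scim/v2/Users"
TOKEN = "placeholder-token"

new_hire = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "jdoe",
    "name": {"givenName": "Jane", "familyName": "Doe"},
    "emails": [{"value": "jdoe@example.com", "primary": True}],
}

resp = requests.post(SCIM_URL, json=new_hire,
                     headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()  # downstream systems pick the record up from here
```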


Tom’s Take

Complexity is bad when we don’t understand it. Trying to explain Lagrange points and orbital dynamics is a huge pain when you aren’t talking to rocket scientists. However, most people that understand the complexities of the college football playoff system are more than happy to explain it to you in depth simply because they “get it”. Complexity isn’t always the enemy. If the people working on the system understand it enough to get the reasons why it needs to be complex to fulfill a job requirement, then it’s not a bad thing. Instead of trying to move or reduce complexity, we should try to ensure that we don’t add any additional complexity to the system. That’s how you keep the complexity snowball from rolling you over.