How Should We Handle Failure?

I had an interesting conversation this week with Greg Ferro about the network and how we’re constantly proving whether a problem is or is not the fault of the network. I postulated that the network gets blamed when old software has a hiccup. Greg’s response was:

Which led me to think about why we have such a hard time proving the innocence of the network. And I think it’s because we have a problem with applications.

Snappy Apps

Writing applications is hard. I base this on the fact that I am a smart person and I can’t do it. Therefore it must be hard, like quantum mechanics and figuring out how to load the dishwasher. The few people I know that do write applications are very good at turning gibberish into usable radio buttons. But they have a world of issues they have to deal with.

Error handling in applications is a mess at best. When I took C Programming in college, my professor was an actual coder during the day. She told us during the error handling portion of the class that most modern programs are a bunch of statements wrapped in if clauses that jump to some error condition if they fail. All the power of your phone apps is likely a long series of "if this, then that, otherwise this failure condition" statements.
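
To make that concrete, here's a minimal sketch of the pattern in Python. Everything in it is hypothetical -- the function names are stand-ins for whatever your app actually calls -- but the shape is the one she described: do a thing, check if it worked, bail out with some error if it didn't.

```python
def connect_to_db():
    return object()   # hypothetical stand-in for a real connection

def fetch_user(conn, user_id):
    return {"id": user_id}   # hypothetical stand-in for a real query

def render_page(user):
    raise RuntimeError("template engine exploded")   # simulate a deep failure

def load_profile(user_id):
    # The if-this-then-that shape:
    conn = connect_to_db()
    if conn is None:
        return "Error: could not reach the database"   # specific, actionable
    user = fetch_user(conn, user_id)
    if user is None:
        return "Error: no such user"                   # also specific
    try:
        return render_page(user)
    except Exception:
        # The catch-all branch -- this is where vague errors are born
        return "Error: something went wrong"

print(load_profile(42))   # -> Error: something went wrong
```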

Some of these errors are pretty specific. If an application calls a database, it will respond with a database error. If it needs an SSL certificate, there's probably a certificate-specific error message. But what about those errors that aren't quite as cut-and-dried? What if the error could actually be caused by a lot of different things?

My first real troubleshooting on a computer came courtesy of LucasArts X-Wing. I loved the game and played the training missions constantly until I was able to beat them perfectly just a few days after installing it. However, when I started playing the campaign, the program would crash to DOS when I started the second mission with an out-of-memory error message. In the days before the Internet, it took a lot of research to figure out what was going on. I had more than enough RAM. I met all the specs on the side of the box. What I didn't know and had to learn is that X-Wing required the use of Expanded Memory (EMS) to run properly. Once I decoded enough of the message to find that out, I was able to change things and make the game run properly. But I had to know that the memory X-Wing was complaining about was EMS, not the XMS RAM that I needed for Windows 3.1.

Generic error messages are the bane of the IT professional’s existence. If you don’t believe me, ask yourself how many times you’ve spent troubleshooting a network connectivity issue for a server only to find that DNS was misconfigured or down. In fact, Dr. House has a diagnosis for you:
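
Since DNS hides behind so many "network" problems, it's worth proving innocence explicitly. Here's a hedged sketch in Python that splits the DNS lookup out from the TCP connection, so "the network is down" and "DNS is broken" stop being the same complaint. The hostname is made up.

```python
import socket

def check_server(host, port=443):
    # Step 1: resolve the name. This is DNS, not "the network".
    try:
        socket.getaddrinfo(host, port)
    except socket.gaierror as e:
        return f"DNS resolution failed for {host}: {e}"
    # Step 2: actually connect. Only failures here implicate the network path.
    try:
        with socket.create_connection((host, port), timeout=3):
            return f"{host}:{port} is reachable"
    except OSError as e:
        return f"TCP connection to {host}:{port} failed: {e}"

print(check_server("definitely-not-real.example.invalid"))
```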

Unintuitive Networking

If the error messages are so vague when it comes to network resources and applications, how are we supposed to figure out what went wrong and how to fix it? I mean, humans have been doing that for years. We take the little information we get from various sources. Then we compile, cross-reference, and correlate it all until we have enough to make a wild guess about what might be wrong. And when that doesn't work, we keep trying even more outlandish things until something sticks. That's what humans do. We fill in the gaps and come up with crazy ideas that occasionally work.

But computers can't do that. Even the best machine learning algorithms can't extrapolate beyond a given data set. They need precise inputs to find a solution. Think of it like a Google search for a resolution. If you don't have a specific enough error message to find the problem, you aren't going to have enough data to provide a specific fix. You will need to do some work on your own to find the real answer.

Intent-based networking does little to fix this right now. All intent-based networking products are designed to create a steady state from a starting point. No matter how cool it looks during the demo or how powerful it claims to be, it will always fall over when something fails. And how gracefully it falls over is up to the people who programmed it. If the system is a black box with no error reporting capabilities, it's going to fail spectacularly with no hope of repair beyond an expensive support contract.

It could be argued that intent-based networking is about making networking easier. It's about setting things up right the first time and making them work in the future. Yet, no system in a steady state works forever. With the possible exception of the pitch drop experiment, everything will fail eventually. And if the control system in place doesn't have the capability to handle the errors being seen and propose a solution, then your fancy provisioning system is worth about as much as a car with no seat belts.

Fixing The Issues

So, how do we fix this mess? Well, the first thing that we need to do is scold the application developers. They're easy targets and usually the cause behind all of these issues. Instead of giving us a vague error message about network connectivity, we need more messages like the classic lp0 is on fire. We need to know what was going on when the function call failed. We need a trace. We need context around the process to be able to figure out why something happened.
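
Here's a hedged sketch of what "context around the process" might look like in practice. All the names in it (the service, the operation, the order ID) are invented for illustration:

```python
import json
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("app")

def reserve_inventory(host):
    # Hypothetical backend call that fails
    raise ConnectionRefusedError(f"{host}:8443 refused connection")

def place_order(order_id):
    try:
        reserve_inventory("inventory.internal")
    except OSError as err:
        # Not just "network error": say what we were doing, who we were
        # talking to, and exactly what came back
        log.error(json.dumps({
            "event": "order_failed",
            "order_id": order_id,
            "operation": "reserve_inventory",
            "peer": "inventory.internal:8443",
            "detail": str(err),
        }))
        raise  # re-raise so the full trace is preserved too

try:
    place_order("A-1234")
except OSError:
    pass  # the structured log line above already told us what broke
```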

Now, once we get all these messages, we need a way to correlate them. Is that an NMS or some other kind of platform? Is that an incident response system? I don’t know the answer there, but you and your staff need to know what’s going on. They need to be able to see the problems as they occur and deal with the errors. If it’s DNS (see above), you need to know that fast so you can resolve the problem. If it’s a BGP error or a path problem upstream, you need to know that as soon as possible so you can assess the impact.
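
Whatever that platform turns out to be, the core job is correlation. A toy sketch of the idea -- the event stream below is entirely made up:

```python
from collections import Counter

# Hypothetical, already-parsed error events from several systems
events = [
    {"ts": 100, "source": "web-01", "error": "timeout", "dependency": "dns"},
    {"ts": 101, "source": "web-02", "error": "timeout", "dependency": "dns"},
    {"ts": 102, "source": "app-01", "error": "refused", "dependency": "db"},
    {"ts": 103, "source": "web-03", "error": "timeout", "dependency": "dns"},
]

# If one dependency shows up across many sources in a short window,
# that's the thing to chase first
window = [e for e in events if 95 <= e["ts"] <= 105]
suspects = Counter(e["dependency"] for e in window)
print(suspects.most_common(1))   # [('dns', 3)] -- of course it's DNS
```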

And lastly, we’ve got to do a better job of documenting these things. We need to take charge of keeping track of what works and what doesn’t. And keeping track of error messages and their meanings as we work on getting app developers to give us better ones. Because no one wants to be in the same boat as DenverCoder9.


Tom’s Take

I made a career out of taking vague error messages and making things work again. And I hated every second of it. Because as soon as I became the Montgomery Scott of the Network, people started coming to me with ever more bizarre messages. Becoming the Rosetta Stone for error messages typed into Google is not how you want to spend your professional career. Yet, you need to understand that even the best things fail and that you need to be prepared for it. Don't spend money building a very pretty facade for your house if it's built on quicksand. Demand that your orchestration systems have the capabilities to help you diagnose errors when everything goes wrong.

 

Don’t Build Big Data With Bad Data

I was at Pure Accelerate 2017 this week and I saw some very interesting things around big data and the impact that high speed flash storage is going to have. Storage vendors serving that market are starting to include analytics capabilities on the box in an effort to provide extra value. But what happens when these advances cause issues in the training of algorithms?

Garbage In, Garbage Out

One story that came out of a conversation was about training a system to recognize people. In the process of training the system, the users imported a large number of faces in order to help the system start the process of differentiating individuals. The data set they started with? A collection of male headshots from the Screen Actors Guild. By the time the users caught the mistake, the algorithm had already proven that it had issues telling the difference between test subjects of particular ethnicities. After scrapping the data set and rebuilding it from more diverse sources, the system started performing much better.
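
A mistake like that is cheap to catch before training ever starts. Here's a minimal sketch of a dataset audit -- the attribute and the 25% floor are assumptions invented for the example, not a real fairness methodology:

```python
from collections import Counter

# Hypothetical metadata for a face-training set
samples = [
    {"gender": "male"}, {"gender": "male"}, {"gender": "male"},
    {"gender": "male"}, {"gender": "female"},
]

def audit(samples, attribute, floor=0.25):
    """Flag any attribute value making up less than `floor` of the set."""
    counts = Counter(s[attribute] for s in samples)
    total = sum(counts.values())
    return {value: n / total for value, n in counts.items() if n / total < floor}

print(audit(samples, "gender"))   # {'female': 0.2} -- underrepresented, fix it
```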

This started me thinking about the quality of the data that we are importing into machine learning and artificial intelligence systems. The old computer adage of "garbage in, garbage out" has never been more apt than it is today. Before, bad inputs simply made the extracted data suspect. Now, inputting bad data into a system designed to make decisions can have even more far-reaching consequences.

Look at all the systems that we're programming today to be more AI-like. We've got self-driving cars that need massive data inputs to help navigate roads at speed. We have network monitoring systems that take analytics data and use it to predict things like component failures. We even have these systems running in the background of popular apps that provide us news and other crucial information.

What if the inputs into the system cause it to become corrupted or somehow compromised? You've probably heard the story about how importing Urban Dictionary into Watson caused it to start cursing constantly. These kinds of stories highlight how important the quality of the data being used as the basis of AI/ML systems can be.

Think of a future when self-driving cars are being programmed with failsafes to avoid living things in the roadway. Suppose that the car has been programmed to avoid humans and other large animals like horses and cows. But, during the import of the small animal data set, the table for dogs isn’t imported for some reason. Now, what would happen if the car encountered a dog in the road? Would it make the right decision to avoid the animal? Would the outline of the dog trigger a subroutine that helped it make the right decision? Or would the car not be able to tell what a dog was and do something horrible?

Do You See What I See?

After some chatting with my friend Ryan Adzima, he taught me a bit about how facial recognition systems work. I had always assumed that these systems could differentiate on things like colors. So it could tell a blond woman from a brunette, for instance. But Ryan told me that it’s actually very difficult for a system to tell fine colors apart.

Instead, systems try to create contrast in the colors of the picture so that certain features stand out. Those features have a grid overlaid on them, and then those grids are compared and contrasted. That's the fastest way for a system to discern between individuals. It makes sense considering how CPU-bound these systems are today and how rarely high-definition cameras are available to capture information for them.
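
Here's a rough, hedged sketch of that pipeline in miniature: stretch the contrast, average the image down to a coarse grid, and compare grids. Real systems are far more sophisticated; the 8x8 "faces" below are tiny synthetic stand-ins.

```python
def stretch_contrast(image):
    """Spread pixel values across 0-255 so features stand out."""
    lo = min(min(row) for row in image)
    hi = max(max(row) for row in image)
    scale = 255 / (hi - lo) if hi > lo else 1
    return [[(p - lo) * scale for p in row] for row in image]

def to_grid(image, cells=4):
    """Average the image into a coarse cells x cells feature grid."""
    h, w = len(image), len(image[0])
    grid = []
    for gy in range(cells):
        row = []
        for gx in range(cells):
            block = [image[y][x]
                     for y in range(gy * h // cells, (gy + 1) * h // cells)
                     for x in range(gx * w // cells, (gx + 1) * w // cells)]
            row.append(sum(block) / len(block))
        grid.append(row)
    return grid

def grid_distance(a, b):
    """Mean difference between grids: small means 'probably the same face'."""
    diffs = [abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb)]
    return sum(diffs) / len(diffs)

# Two "faces": one lit on the left, one lit on the top
face  = [[200 if x < 4 else 40 for x in range(8)] for y in range(8)]
same  = [[190 if x < 4 else 50 for x in range(8)] for y in range(8)]
other = [[200 if y < 4 else 40 for x in range(8)] for y in range(8)]

fingerprint = lambda img: to_grid(stretch_contrast(img))
print(grid_distance(fingerprint(face), fingerprint(same)))    # 0.0 -- match
print(grid_distance(fingerprint(face), fingerprint(other)))   # 127.5 -- not
```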

But we also must realize that we have to improve data collection for our AI/ML systems in order to ensure that the systems are receiving good data to make decisions. We need to build validation models into our systems and checks to make sure the data looks and sounds sane at the point of input. These are the kinds of things that take time and careful consideration when planning to ensure they don't become a hindrance to the system. If the very safeguards we put in place to keep data correct end up causing problems, we're going to create a system that falls apart before it can do what it was designed to do.
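
As a sketch of what "sane at the point of input" can mean, here's a minimal per-record validator. The field names and allowed ranges are assumptions invented for the example:

```python
def validate(record):
    """Reject obviously bad records before they ever reach training."""
    problems = []
    if not (0 <= record.get("age", -1) <= 120):
        problems.append("age missing or out of range")
    if record.get("label") not in {"person", "vehicle", "animal"}:
        problems.append("unknown label")
    return problems

for record in ({"age": 34, "label": "person"},
               {"age": 480, "label": "pers0n"}):
    issues = validate(record)
    print("accept" if not issues else f"reject: {issues}")
```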


Tom’s Take

I thought the story about the AI training was a bit humorous, but it does point to a huge issue with computer systems going forward. We need to be absolutely sure of the veracity of our data as we begin using it to train systems to think for themselves. Sure, teaching a Jeopardy-winning system to curse is one thing. But if we teach a system to be racist or murderous because of what information we give it to make decisions, we will have programmed a new life form to exhibit the worst of us instead of the best.

AI, Machine Learning, and The Hitchhiker’s Guide

I had a great conversation with Ed Horley (@EHorley) and Patrick Hubbard (@FerventGeek) last night around new technologies. We were waxing intellectual about all things related to advances in analytics and intelligence. There’s been more than a few questions here at VMworld 2016 about the roles that machine learning and artificial intelligence will play in the future of IT. But during the conversation with Ed and Patrick, I finally hit on the perfect analogy for machine learning and artificial intelligence (AI). It’s pretty easy to follow along, so don’t panic.

The Answer

Machine learning is an amazing technology. It can extrapolate patterns in large data sets and provide insight from seemingly random things. It can also teach machines to think about problems and find solutions. Rather than go back to the tired Target big data example, I much prefer this example of a computer learning to play Super Mario World:

You can see how the algorithms learn how to play the game and find newer, better paths through the level. One of the things that's always struck me about the computer's decision skills is how early it learned that spin jumps provide more benefit than regular jumps for a given input. You can see the point in the video when the system figures this out, after which every jump becomes a spin jump for maximum effect.
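
That video uses a neuroevolution approach, which I won't try to reproduce here. But the spin-jump discovery itself is easy to sketch as a simple reward-driven loop -- the payoffs below are invented to make the point:

```python
import random
random.seed(0)

actions = {"jump": 0.0, "spin": 0.0}   # estimated value of each action
counts  = {"jump": 0, "spin": 0}

def reward(action):
    # Invented payoffs: spin jumps clear slightly more obstacles
    return 1.0 if action == "spin" else 0.8

for step in range(200):
    # Epsilon-greedy: mostly exploit the best-known action, sometimes explore
    if random.random() < 0.1:
        action = random.choice(list(actions))
    else:
        action = max(actions, key=actions.get)
    counts[action] += 1
    actions[action] += (reward(action) - actions[action]) / counts[action]

print(counts)   # once spin's higher value is discovered, it dominates
```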

Machine learning appears to be insightful and amazing. But the weakness of machine learning is best exemplified by Deep Thought from The Hitchhiker's Guide to the Galaxy. Deep Thought was created to find the answer to the ultimate question of life, the universe, and everything. It was programmed with an enormous dataset – literally every piece of knowledge in the known universe. After seven and a half million years, it finally produces The Answer (42, if you're curious). Which leads to the plot of the book and other hijinks.

Machine learning is capable of great leaps of logic, but it operates on a fundamental truth: all inputs are a bounded data set. Whether you are performing a simple test on a small data set or combing all the information in the universe for answers, you are still operating on finite information. Machine learning can do things very fast and find pieces of data that are not immediately obvious. But it can't operate outside the bounds of the data set without additional input. Even the largest analytics clusters won't produce additional output without more data being ingested. Machine learning is capable of doing amazing things. But it won't ever create new information outside of what it is operating on.
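
A trivial sketch makes the point: a nearest-neighbor "model" can only ever answer with something it has already seen.

```python
# Training data: everything this model will ever know
training = {1.0: "cat", 2.0: "dog", 3.0: "horse"}

def predict(x):
    nearest = min(training, key=lambda k: abs(k - x))
    return training[nearest]

print(predict(2.1))    # 'dog' -- a sensible interpolation
print(predict(500.0))  # 'horse' -- it cannot answer outside its data set
```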

The Question

Artificial Intelligence (AI), on the other hand, is more like the question in The Hitchhiker’s Guide. Deep Thought admonishes the users of the system that rather than looking for the answer to Life, The Universe, and Everything, they should have been looking for The Question instead. That involves creating a completely different computer to find the Question that matches the Answer that Deep Thought has already provided.

AI can provide insight above and beyond a given data set input. It can provide context where none exists. It can make leaps of logic similar to the ones humans make. AI doesn't simply stop when it faces an incomplete data set. Even though we are seeing AI in its infancy today, the most advanced systems are capable of "filling in the blanks" to cover missing information. As the algorithms learn to extrapolate, they'll get better at making decisions from incomplete data.

Computers are so good at making quick decisions because they don't operate outside the bounds of the possible. If the entire universe for a decision is a data set, they won't try to look around it. That ability to look beyond and try to create new data where none exists is the hallmark of intelligence. Using tools to create is a uniquely biological function. Computers can create subsets of data with tools, but they can't do a complete transformation.

AI is pushing those boundaries. Given enough time and the proper input, AI can make the leaps outside of bounds to come up with new ideas. Today it’s all about context. Tomorrow may find AI providing true creativity. AI will eventually pass a Turing Test because it can look outside the script and provide the pseudorandom type of conversation that people are capable of.


Tom’s Take

Computers are smart. They think faster than we do. They can do math better than we can. They can produce results in a fraction of the time at a scale that boggles the mind. But machine learning is still working from a known set. No matter how much work we pour into that aspect of things, we are still going to hit a limit of the ability of the system to break out of its bounds.

True AI will behave like we do. It will look for answers when there are none. It will imagine when confronted with the impossible. It will learn from mistakes and create solutions to them. That's the power of intelligence versus learning. Even the most powerful computer in the universe couldn't break out of its programming. It needed something more to question the answers it created. Just like us, it needed to sit back and let its mind wander.