Changing The Baby With The Bathwater In IT

If you’re sitting in a presentation about the “new IT”, there’s bound to be a guest speaker talking about their digital transformation or service provider shift in their organization. You can see this coming. It’s a polished speaker, usually a CIO or VP. They talk about how, with the help of the vendor on stage with them, they were able to rapidly transform their infrastructure into something modern while at the same time changing processes to deliver faster IT response, more productive workers, and increased revenue, transforming IT from a cost center into a profit center. The key components are simple:

  1. Buy new infrastructure from $vendor
  2. Transform all processes to be more agile, productive, and better.

Why do those things always happen in concert?

Spring Cleaning

Infrastructure grows old. That’s a fact of life. Outside of some very specialized hardware, no one is using the same desktop they had ten years ago. No enterprise is still running Windows 2000 Server on an IBM Netfinity server. No one is still using 10Mbps Ethernet over Thinnet to connect their offices. Hardware marches on. So when we buy new things, we as technology professionals need to find a way to integrate them into our existing technology stack.

Processes, on the other hand, are very slow to change. I can remember dealing with process issues when I was an intern for IBM many, many years ago. The process we had for deploying a new workstation had many, many reboots involved. The deployment team worked out a new strategy to streamline deployments and make things run faster. We brought our plan to the head of deployments. From there, we had to:

  • Run tests to prove that it was faster
  • Verify that the process wasn’t compromised in any way
  • Type up new procedures in formal language to match the existing docs
  • Then submit them for ISO approval

And when all those conditions were met, we could finally start using our process. All in all, with aggressive testing, it still took two months.

Processes are things that are thought to be carved in stone, never to be modified or changed in any way for the rest of time. Unless the stones break or something major causes a process change. Usually, that major change is a whole truckload of new equipment showing up on the back dock attached to a consultant telling IT there is a better way (TM) to do things.

Ceteris Paribus

Ceteris paribus is a Latin phrase meaning “all other things being equal”. We use it when an equation has multiple variables and we need to hold all but one constant to be able to measure changes appropriately.

The funny thing about all these transformations is that it’s hard to track what actually made improvements when you’re changing so many things at once. If the new hardware is three or four times faster than your old equipment, would it show that much improvement if you just used your old software and processes on it? How much faster could your workloads execute with new CPUs and memory management techniques? How about collapsing your virtual infrastructure onto fewer and fewer physical servers because of advances there? Running old processes on new hardware can give you a very good idea of how good the hardware is. Does it meet the criteria for selection that you wanted when it was purchased? Or, better still, does it seem like you’re not getting the performance you paid for?

Likewise, how are you able to know for sure that the organization and process changes you implemented actually did anything? If you’re implementing them on new hardware how can you capture the impact? There’s no rule that says that new processes can only be implemented on new shiny hardware. Take a look at what Walmart is doing with OpenStack. They most certainly aren’t rushing out to buy tons and tons of new servers just for OpenStack integration. Instead, they are taking streamlined processes and implementing them on existing infrastructure to see the benefits. Then it’s easy to measure and say how much hardware you need to expand instead of overbuying for the process changes you make.


Tom’s Take

So, why do these two changes always seem to track with each other? The optimist in me wants to believe that it’s people deciding to make positive changes all at once to pull their organization into the future. Since any installation is disruptive, it’s better to take the huge disruption and retrain for the massive benefits down the road. It’s a rosy picture indeed.

The pessimist in me wonders if all these massive changes aren’t somehow tied to the fact that they always come with massive new hardware purchases from vendors. I would hope there isn’t someone behind the scenes with the ear of the CIO pushing massive changes in organization and processes for the sake of numbers. I would also sincerely hope that the idea isn’t to make huge organizational disruptions for the sake of “reducing overhead” or “helping tell the world your story” or, worse yet, “making our product look good because you did such a great job with all these changes”.

The optimist in me is hoping for the best. But the pessimist in me wonders if reality is a bit less rosy.

Short Take – The Present Future of the Net

A few random thoughts from ONS and Networking Field Day 15 this week:

  • Intel is really, really, really pushing their fifth-generation (5G) wireless network. Note this is not Gen 5 Fibre Channel or 5GHz 802.11 networking. This is the successor to LTE, capable of pushing a ridiculous amount of data to a very small handset. This is one of those “sure thing” technologies that is going to have a huge impact on our networks. Carriers and service providers are already trying to cope with the client rates we have now. What happens when they are two or three times faster?
  • PNDA has some huge potential for networking and data analytics. Their presentation had some of the most technical discussion during the event. They’re also the basis for a lot of other projects that are in the pipeline. Make sure you check them out. The project organizers suggest that you get started with the documentation and perhaps even help contribute some writing to get more people on board.
  • VMware hosted a dinner for us that had some pretty luminary speakers like Bruce Davie and James Watters. They talked about the journey from traditional networking to a new paradigm filled with microservices and intelligence in the application layer. While I think this is the gold standard that everyone is looking toward for the future, I also think there is still quite a bit of technical debt to unpack before we can get there.
  • Another fun thought kicking around: when we look at these new agile, paradigm-shifting deployments, why are they always on new hardware? Would you see similar improvement just from running existing processes on new hardware? What would the new processes look like on existing gear? I think this one is worth investigating.

Do Network Professionals Need To Be Programmers?

With the advent of software defined networking (SDN) and the move to incorporate automation, orchestration, and extensive programmability into modern network design, it could easily be argued that programming is a must-have skill. Many networking professionals are asking themselves if it’s time to pick up Python, Ruby or some other language to create programs in the network. But is it a necessity?

Interfaces In Your Faces

The move toward using APIs is one of the more striking aspects of SDN that has been picked up quickly. Instead of forcing information to be input via CLI, or collected from the network by scraping that same CLI, APIs have unlocked more power than we ever imagined. RESTful APIs have given nascent programmers the ability to query devices and push configurations without the need to learn cumbersome syntax. The ability to grab this information and feed it to a network management system and analytics platform has extended the capabilities of the systems that support these architectures.
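As a sketch of why REST feels friendlier than screen-scraping, here’s what building a request against a hypothetical RESTCONF-style endpoint might look like. The base URL, resource path, and JSON shape are all assumptions for illustration, not any particular vendor’s API:

```python
import json

# Hypothetical RESTCONF-style endpoint for illustration only; the URL path
# and payload shape are assumptions, not a specific vendor's API.
BASE_URL = "https://device.example.com/restconf/data"

def interface_url(name):
    """Build the resource URL for a single interface."""
    return f"{BASE_URL}/interfaces/interface={name}"

def config_payload(name, description, enabled=True):
    """Build a JSON body to push a description change to an interface."""
    return json.dumps({
        "interface": {
            "name": name,
            "description": description,
            "enabled": enabled,
        }
    })

url = interface_url("GigabitEthernet0/1")
body = config_payload("GigabitEthernet0/1", "uplink to core")
print(url)
print(json.loads(body)["interface"]["description"])
```

In practice you’d hand the URL and body to an HTTP client with an auth token, but the point stands: the payload is plain JSON that any tool can generate and parse, rather than vendor CLI syntax that has to be memorized and scraped.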

The syntaxes that power these new APIs aren’t the copyrighted CLIs that networking professionals spend their waking hours memorizing in excruciating detail. JUNOS and Cisco’s “standard” CLI are as much relics of the past as CatOS. At least, that’s the refrain that comes from both sides of the discussion. The traditional networking professionals hold tight to the access methods they have experience with and can tune like a fine instrument. More progressive networkers argue that standardizing around programming languages is the way to go. Why learn a proprietary access method when Python can do it for you?

Who is right here? Is there a middle ground? Is the issue really about programming? Is the prattle from programming proponents posturing about potential pitfalls in the perfect positioning of professional progress? Or are anti-programmers arguing against attacks, aghast at an area absent archetypical architecture?

Who You Gonna Call?

One clue in this discussion comes from the world of the smartphone. The very first devices that could be called “smartphones” were really very dumb. They were computing devices with strict user interfaces designed to mimic phone functions. Only when the device potential was recognized did phone manufacturers start to realize that things other than address books and phone dialers could be created. Even the initial plans for application development weren’t straightforward. It took time for smartphone developers to understand how to create smartphone apps.

Today, it’s difficult to imagine using a phone without social media, augmented reality, and other important applications. But do you need to be a programmer to use a phone with all these functions? There is a huge market for smartphone apps and a ton of courses that can teach someone how to write apps in very little time. People can create simple apps in their spare time or dedicate themselves to making something truly spectacular. However, users of these phones don’t need to have any specific programming knowledge. Operators can just use their devices and install applications as needed without the requirement to learn Swift, Java, or Objective-C.

That doesn’t mean that programming isn’t important to the mobile device community. It does mean that programming isn’t a requirement for all mobile device users. Programming is something that can be used to extend the device and provide additional functionality. But no one in an AT&T or Verizon store is going to give an average user a programming test before they sell them the phone.

This, to me, is the argument for network programmability in a nutshell. Network operators aren’t going to learn programming. They don’t need to. Programmers can create software that gathers information and provides interfaces to make configuration changes. But the rank-and-file administrator isn’t going to need to pull out a Java manual to do their job. Instead, they can leverage the experience and intelligence of people that do know how to program in order to extend their network functionality.


Tom’s Take

It seems like this should be a fairly open-and-shut case, but there is a bit of debate yet left to have on the subject. I’m going to be moderating a discussion between Truman Boyes of Bloomberg and Vijay Gill of Salesforce around this topic on April 25th. Will they agree that networking professionals don’t need to be programmers? Will we find a middle ground? Or is there some aspect to this discussion that will surprise us all? I’ll make sure to keep you updated!

Building Reliability

Systems are inherently reliable. Until they aren’t. On a long enough timeline, even the most reliable system will eventually fail. How you manage that failure says a lot about the way you build your system or application. So why is it, then, that we’re so focused on never failing?

Ten Feet Tall And Bulletproof

No system is infallible. Networks go down. Cloud services get knocked offline. Even Facebook, which represents “the Internet” for a large number of people, has days when it’s unreachable. When we examine these outages, we often find issues at the core of the system that cause services to be unreachable. In the most recent case of Amazon’s cloud system, it was a typo in a script that executed faster than it could be stopped.

It could also be a failure of the system to anticipate increased loads when minor failures happen. If systems aren’t built to take on additional load when the worst happens, you’re going to see bigger outages. That is a particular thorn in the side of large cloud providers like Amazon and Google. It’s also something that network architects need to be aware of when building redundant pathways to handle problems.

Take, for example, a recent demo during Aruba Atmosphere 2017. During the Day 2 keynote, CTO Partha Narasimhan wowed the crowd in the room when he disclosed that they had been doing a controller upgrade during the morning talk. Users had been tweeting, surfing, and using the Internet without much notice from anyone aside from the most technical wireless minds in the room. Even they could only see some strange AP roaming behavior as an indicator of the controllers upgrading the APs.

Aruba showed that they built a resilient network that could survive a simulated major outage caused by a rolling upgrade. They’ve done everything they can to ensure uptime no matter what happens. But the bigger question for architects and engineers is “why are we solving the problem for others?”

Why Dodge Bullets When You Don’t Have To?

As amazing as it is to build a system that can survive production upgrades with no impact on users, what are we really building when we create these networks? Are we encouraging our users to respect our technology advantage in the network or other systems? Are we telling our application developers that they can count on us to keep the lights on when anything goes wrong? Or are we instead sending the message that we will keep scrambling to prevent issues in applications from being noticeable?

Building a resilient network is easy. Making something reliable isn’t rocket science. But creating a network that is going to stay up for a long, long time without any outages is very expensive and process intensive. Engineering something to never be down requires layers of exception handling and backup systems that are as reliable as their primary counterparts.

A favorite story from the storage world involves recovery. When you initially ask a customer what their recovery point objective (RPO) is in a system, the answer is almost always “zero” or “as low as you can make it”. When the numbers are put together to include redundant or dual-active systems with replication and data assurance, the price tag of the solution is usually enough to start a new round of discussion along the lines of “how reliable can you make it for this budget?”

In the networking and systems world, we don’t have the luxury of sticker shock when it comes to creating reliability. Storage systems can take longer to recover because lost data is gone forever; taking the time to ensure it is properly recovered is important. But data in transit can be retransmitted. That’s at the heart of TCP. So it’s expected that networks have near-instantaneous RPOs at no extra cost. If you don’t believe that, ask yourself what happens when you tell someone the network went down because there’s only one router or switch connecting devices together.

Instead of making systems ultra-reliable and absolving users and developers from thought, networks and systems should be as reliable as they can be made without huge additional costs. That reliability should be stated emphatically without wiggle room. These constraints should inform developers writing code so that exception handling can be built in to prevent issues when the inevitable outage occurs. Knowing your limitations is the first step to creating an atmosphere to overcome them.
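To make the “exception handling built in” idea concrete, here’s a minimal sketch of how an application might wrap a call to a service with retries instead of assuming the network underneath never fails. The function names and the simulated flaky service are hypothetical, purely for illustration:

```python
import time

def with_retries(operation, attempts=3, delay=0.01):
    """Call operation(), retrying on connection failures instead of
    assuming the network is perfect. Re-raises after the last attempt."""
    for attempt in range(1, attempts + 1):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts:
                raise
            time.sleep(delay * attempt)  # simple linear backoff between tries

# Simulate a service that fails twice before the network recovers.
calls = {"count": 0}
def flaky_fetch():
    calls["count"] += 1
    if calls["count"] < 3:
        raise ConnectionError("transient outage")
    return "payload"

print(with_retries(flaky_fetch))  # succeeds on the third attempt
```

The design point is that the application states its own tolerance (three attempts, modest backoff) and handles the inevitable outage itself, rather than expecting the network team to guarantee it never happens.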

A lesson comes from the programmers of old. When you have a limited amount of RAM, storage, or compute cycles, you can write very tight code. DOS programs didn’t need access to a cloud worth of compute. Mainframes could execute programs written on punch cards. The limitations were simple and could be overcome with proper problem solving. As compute and memory resources have exploded, so too have code bases. Rather than giving developers the limitless capabilities of the cloud without restraint, perhaps creating some limits is the proper way to ensure that reliability stays in the app instead of being bolted on to the network.


Tom’s Take

We had a lot of fun recording this roundtable. We talked about Aruba’s controller upgrade and building reliable wireless networks. But I think we also need to make sure we’re aware that continually creating protocols and other constructs in the underlay won’t solve application programming problems. Things like vMotion set networking and application development back a decade. Giving developers a magic solution to avoid building proper exception handling doesn’t make better developers. Instead, it puts the burden of uptime back on the networking team. And we would rather build the best network we can instead of building something that can solve every problem that could ever possibly be created.

Can I Question You An Ask?


Words mean things! — Justin Warren (@JPWarren)

 

As a reader of my blog, you know that words are my tradecraft. Picking the right word to describe a topic or a technical idea is very important. Using incorrect grammar can cause misunderstandings and lead to issues later on. You’re probably all familiar with my dissection of the Premise vs. Premises issue in IT, but today’s post is all about interrogatives.

A Question, You Say?

One would think that the basic question is something that doesn’t need to be explained. It is one of the four basic types of sentences that we learn in grade school. It’s the easiest one of the bunch to pick out because it ends in a question mark. Other languages, like Japanese, have similar signals for making a statement into an interrogative declaration.

Asking a question is important because it allows us to understand our world. We learn when we ask questions. We grow as people and as professionals. Kids learn to question everything around them at an early age to figure out how the world works. Questions are a cornerstone of society.

However, how do you come up with a question? In what manner do you call for an answer to an interrogative statement? How do you make a request? Or seek information? How do we know how to relay a question to someone at all?

We. Ask.

Note that ask is a verb. It can be transitive or intransitive. It’s something that we do so transparently that it never even crosses our minds. We ask for directions. We ask for help. We ask for a lunch suggestion. But every time we do, we are using the word to perform an action. Until we aren’t.

Ask Not

A trend in IT that dates all the way back to at least 2004 is the use of ask as a noun. Note that this would take the form of the following:

What’s the ask here?

That’s a mighty big ask of the engineering department.

I’m still looking for the ask here.

Even though this practice has roots that stretch back even further, the primary use of ask as a noun is in the IT space. The same group that thinks on-premise refers to a location believes that asks are really questions or requests. Are they using it in the same way that they shorten premises by one syllable? Do they need to save time by using a one-syllable word in place of a two-syllable one?

Raymond Chen’s article linked above does have a bit of insight from even a decade ago. The idea behind using ask as a noun really comes from trying to wrap a demand in a more palatable coating. Think back to the number of times that someone has an ask and substitute the word request or demand and see if it is really appropriate there. Odds are good that it fits seamlessly.

If we go back to the idea that words still mean things and that precision is the key to saving time instead of shortening words, then why are we using ask instead of the other words? Is it, as Raymond says, because the speaker is trying to be passive-aggressive? Are they trying to avoid using a better, more inflammatory word? Or do they truly believe that using ask is a better way to convey things? Maybe they just hope it makes them sound cool and futuristic?


Tom’s Take

Hearing ask as a noun makes my ears crawl. Do we question asks? Or do we ask questions? Do we make requests? Or do we request makes? Despite the fact that the use of ask as a noun comes back time after time throughout history, it quickly goes away again as awkward and non-specific in conversation. I think it’s high time for this generation of ask as a noun to disappear and be relegated to the less important questions of history.

Networking Grows To Invisibility


Networking is done. The way you have done things before is finished. The writing has been on the wall for quite a while now. But it’s going to be a good thing.

The Old Standard

Networking purchase models look much different today than they have in the past. Enterprises no longer buy a switch or a router. Instead, they buy solution packages. The minimum purchase unit is a networking pod or rack. Perhaps your proof-of-concept minimum is a leaf-spine fabric of no fewer than three switches. Firewalls are purchased in pairs. Nothing in networking is simple any longer.

With the advent of software, even the deployment of these devices is different. Automation and orchestration systems provide provisioning as the devices are brought online. Network Monitoring Systems ensure the devices are operating correctly via API call instead of relying on SNMP. Analytics and telemetry systems can pull statistics on the fly and create datasets that give you insight into all manner of network traffic. The intelligence built into the platform supporting the hardware is more apparent than ever before.
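As a sketch of what “statistics via API call instead of SNMP” can look like, imagine a monitoring system receiving interface counters as JSON telemetry and computing link utilization from two samples. The payload shape here is invented for illustration; real telemetry schemas vary by platform:

```python
import json

# Two hypothetical telemetry samples, ten seconds apart. The JSON shape
# is an assumption for illustration, not a real platform's schema.
sample_t0 = json.loads('{"interface": "eth0", "out_octets": 1000000, "timestamp": 100}')
sample_t1 = json.loads('{"interface": "eth0", "out_octets": 126000000, "timestamp": 110}')

def utilization_pct(old, new, link_bps):
    """Percent utilization between two counter samples on a link of link_bps."""
    octets = new["out_octets"] - old["out_octets"]
    seconds = new["timestamp"] - old["timestamp"]
    bits_per_sec = octets * 8 / seconds
    return 100 * bits_per_sec / link_bps

# A 100 Mbps link moving 125 MB in 10 seconds is fully utilized.
print(utilization_pct(sample_t0, sample_t1, 100_000_000))  # prints 100.0
```

The math is the same one SNMP pollers have always done with ifOctets counters; the difference is that the data arrives as structured JSON that the analytics platform can consume directly, on whatever cadence the device streams it.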

Networking is no longer about raw connectivity speed. Instead, networking is about stability: providing a transport network that stays healthy instead of one that grows by leaps and bounds every few years. Organizations looking to model their IT departments after service providers and cloud providers care more about having a reliable system than about the most cutting-edge technology.

This is nothing new in IT. Both storage and virtualization have moved in this direction for a while. Hardware wizardry has been replaced by software intelligence. Custom hardware is now merchant-based and easy to replace and build. The expertise in deployment and operations has more to do with integration and architecture than in simple day-to-day setup.

The New Normal

Where does that leave networkers? Are we a dying breed, soon to join the Unix admins of the world and telco experts on a beach in retirement? The reality is that things aren’t as dire for us as one might believe.

It is true that we have shifted our thinking away from operations and more toward system building. Rather than worry if the switch ports have been provisioned, we instead look at creating resilient constructs that can survive outages and traffic spikes. Networks are becoming the utility service we’ve always hoped they would be.

This is not the end. It’s the beginning. As networks join storage and compute as utilities in the data center, the responsibilities for our sphere of wizardry are significantly reduced. Rather than spending our time solving crazy user or developer problems, we can instead focus on the key points of stability and availability.

This is going to be a huge shift for the consumers of IT as well. As cloud models have already shown us, people really want to get their IT on their schedules. They want to “buy” storage and networking when it’s needed without interruption. Creating a utility resource is the best way to accomplish that. No longer will the blame for delays be laid at the feet of IT.

But at the same time, the safety net of IT will be gone as well. Unlike Chief Engineer Scott, IT can’t save the day when a developer needs to solve a problem outside of their development environment. Things like First Hop Reachability Protocols (FHRP), multipathing, and even vMotion contribute to bad developer behavior. Without these being available in a utility IT setup, application writers are going to have to solve their own problems with their own tools. The network team will end up being leaner and smarter, and everything is going to run much more smoothly.


Tom’s Take

I live for the day when networking is no different than the electrical grid. I would rather have a “dumb” network that provides connectivity rather than hoping against hope that my “smart” network has all the tricks it needs to solve everyone’s problem. When the simplicity of the network is the feature and we don’t solve problems outside the application stack, stability and reliability will rule the day.

Blogging By The Refrigerator’s Light

Blogging isn’t off to a good start in 2017. Ev Williams announced that Medium is cutting back and trying to find new ways to engage readers. The platform of blogging is scaling back as clickbait headlines and other new forms of media capture the collective attention for the next six seconds. How does all that relate to the humble tech blogger?

Mindshare, Not Eyeshare

One of the reasons why things have gotten so crazy is the drive for page views. Clickbait headlines serve the singular purpose of getting someone to click on an article to register a page view. Ever clicked on some Top Ten article only to find that it’s actually a series of 10 pages in a slideshow format? Page views. I’ve even gone so far as to see an article of top 7 somethings broken down into 33(!) pages, each with 19 ads and about 14 words.

Writers competing for eyeballs are always going to lose in the end, because the attention span of the average human doesn’t dally long enough to make a difference. Think of yourself in a crowded room. Your eyes dart back and forth and all around trying to find something in the crowd. You may not even know what you’re looking for. But you’ll know it when you see it. Your attention wanders as you scan through the crowd.

Blogging, on the other hand, is like finding a good conversation in the crowd. It engages the mind. It causes deeper thinking and engagement that leads to lasting results. The best blog posts don’t have thousands of views in the first week followed by little to nothing for the rest of eternity. They have active commenters. They have response pieces. They have page views and search results that get traffic years after publication.

The 3am Ah Ha Moments

Good blogs shouldn’t just be about “going viral”. Good blogs should have something called Fridge Brilliance. Simply put, the best blogs hit you out of the blue a day after you read them, standing in front of the fridge door. BANG. Now you get it! You run off to see how it applies to what you’re doing, or even to give your perspective on things.

The mark of a truly successful blog is creating something that lasts and is memorable in the minds of readers. Even if all you’re really known for is “that one post” or a series of great articles, you’ve made an impression. And, as I’ve said before, you can never tell which post is going to hit it big. So the key is to keep writing what you write and making sure you’re engaging your audience at a deeper level than their corneas.

That’s not to say that you can’t have fun with blog posts now and then or post silly things here and there. But if you really want to be known as an authoritative source of content, you have to stay consistent. One of the things that Dave Henry (@DaveMHenry) saw in his 2016 wrap-up was that his most viewed posts were all about product announcements. Those tend to get lots of headlines, but for an independent blog it’s just as much about the perspective the writer lends as it is about the news itself. That’s how you can continue to engage people beyond the eyeball and into the brain.


Tom’s Take

I’ve noticed that people still like to write. They want to share thoughts. But they pick the wrong platforms. They want eyeballs instead of minds. They don’t want deep thoughts. They just want an audience. That’s the wrong way to look at it. You want engagement. You want disagreement and argument and 4,000 word response posts about why you’re completely wrong. Because that’s how you know you’ve hooked the reader. You’re a splinter in their mind that won’t go away. That’s the real draw. Keep your page views. I’d rather have memories and fridge brilliance instead.

Bringing 2017 To Everyone


It’s time once again for my traditional New Year’s Day navel gazing. As per tradition with my blog, I’m not going to make prognostications about networking or IT in general. Either I’m going to wind up totally wrong or be totally right and no one will care. I rather enjoy the ride as we go along, so trying to guess what happens is kind of pointless.

Instead, I’m going to look at what I want to accomplish in the coming year. It gives me a chance to analyze what I’m doing and what I want to be working on. And it’s a whole lot easier than predicting that SDN is going to take everyone’s job or that OpenFlow is dead again.

Write Like the Wind

My biggest goal for 2016 was to write more. And that I did. I worked in writing any time I could. I wrote about ONUG, SD-WAN, and other fun topics. I even wrote a small book! Finding time to work all the extra typing into my Bruce Wayne job at Tech Field Day was a bit challenging here and there. And more than once I was publishing a blog post at the deadline. But all that writing did help me talk about new subjects in the industry and develop great ideas at the same time.

I also encouraged more people to write. I wanted to get people putting their thoughts down in a form that didn’t require listening or watching video. Writing is still very important and I think it’s a skill that more people should develop. My list of blogs to read every day grew in 2016 and I was very happy to see it. I hope that it continues well into 2017 as well.

King Of The Hill

2017 is going to be an exciting year for me and Tech Field Day. I ran Networking Field Day 12 as the host of the event for the first time. In the coming year, Stephen and I are going to focus even more deeply on our topic areas. For me, that means immersing myself in networking and wireless technologies more than ever before. I’m going to be learning as much as I can about all the new things going on. It’s a part of the role of being the host and organizer for both Networking Field Day and Mobility Field Day coming up this year.

I’m also going to be visiting lots of other conferences. Cisco Live, Interop, and even Open Networking Summit are on my list this year. We’re going to be working closely with those shows to put on even more great Tech Field Day content. I love hearing the excitement from my friends in the industry when they learn that Tech Field Day is going to be present at a show like Cisco Live. It means that we’re reaching a great audience and giving them something that they are looking for.

We’re also going to be looking at new ideas and new things to do with our growing media presence with Gestalt IT. There should be some interesting things there on the horizon as we embrace the new way that media is used to communicate with readers and fans alike. Stay tuned there for all the excitement we’ll be bringing your way in 2017!


Tom’s Take

Analyzing a year’s worth of work helps one see progress and build toward even more goals in the coming year. I’m going to keep moving forward with the projects that excite me and challenge me to be a better representative for the networking community. Along the way I hope to learn more about what makes our technology exciting and useful. And share that knowledge with everyone I know in the best way I can. Thanks for being here with me. I hope 2017 is a great year for you as well!

Visibility In Networking – Quick Thoughts from Networking Field Day

I’m at Networking Field Day 13 this week. You can imagine how much fun I’m having with my friends! I wanted to share some quick thoughts on visibility based on what we’re hearing this week and raise some interesting questions.

I Can See Clearly Now

Visibility is a huge issue for companies. Seeing what’s actually happening on the network is hard. Companies like Ixia talk about the need to avoid dropping any packets to make sure we have complete knowledge of the network. But that requires a huge amount of hardware and design. You’re always going to need traditional monitoring even when everything is using telemetry and other data models. Make sure you size things right.

Forward Networks told us that there is an increasing call for finding a way to monitor both the underlay network and the overlay network. Most overlay companies give you a way to tie into their system via API or other telemetry. However, there is no visibility into the underlay because of the event horizon. Likewise, companies like Forward Networks are focusing on the underlay with mapping technologies and modeling software but they can’t pass back through the event horizon to see into the overlay. Whoever ends up finding a way to marry both of these together is going to make a lot of money.

Apstra is taking the track of not caring what the underlay looks like. They’re going to give you the tools to manage it all without hard setup. You can rip and replace switches as needed with multivendor support. That’s a huge win if you run a heterogeneous network or you’re looking to start replacing traditional hardware with white or bright box options. Likewise, their ability to pull configs can help you visualize your device setup more effectively no matter what’s under there.


Tom’s Take

I’ve got some more Networking Field Day thoughts coming soon, but I wanted to get some thoughts out there for you to think about this weekend. Stay tuned for some new ideas coming out of the event!

How To Ask A Question At A Conference

The last time you went to a conference, did you ask any questions? Were you curious about a technology and wanted to know more? Was there something that you didn’t quite get and needed an explanation? Congratulations. You’re in a quiet group of people that ask questions for knowledge. More and more, we are seeing questions becoming a vehicle for more than just knowledge acquisition. If you want to learn how to ask a proper question at a conference, read on.

1. Have A Question

I know it goes without saying, but if you’re going to raise your hand at a conference to ask a question, you should actually have a question in mind. Some people grab a microphone without thinking through what they’re going to say. This leads to stammering and broken thoughts that usually culminate in a random question mark here or there. This makes it difficult for the speaker to figure out what you’re trying to ask.

If you’re going to raise your hand, jot some notes down first. Bullet points help as does making a note or two. This is especially true if the speaker is answering questions before yours. If they answer part of your question before you get to ask it, you may have to reframe your thoughts. It never hurts to have an idea of what you’re going to say before you say it.

2. Look For Knowledge, Not To Make A Statement

The other side of the coin from the above recommendation of actually having a question is to make sure that what you’re asking is actually a question and not a statement. A great example of this is a video from Scott Bradner during a recent ONUG meeting:

I’m sure Scott has seen his fair share of statements masquerading as questions during his time. And I can’t disagree with him. Far too often, people asking questions aren’t really asking to get information. Instead, they are trying to make a point about why they think they are right or why they disagree with the speaker. The question stops being a question and becomes more of a soliloquy or soapbox. The most egregious will usually end this rant with an actual question along the lines of, “So, what do you think of my opinion?”

Please, at all costs, avoid this behavior. This is the single most annoying thing a speaker has to deal with. It’s enough to be questioned on your material, but it’s something else entirely to have to shift your thinking to someone else’s viewpoint while on stage. If you have a point you’d like to bring up with the speaker that is contrary to their thought process, you should do it after the presentation without people watching. Have a discussion and express opinions there. Don’t grandstand in front of the crowd just to satisfy your ego.

3. Make Sure Your Question Wasn’t Already Answered

This one’s a bit tougher. If you’re sitting in a session and you have a question, it’s important to make sure it wasn’t already asked and answered beforehand. This can be tougher if you have to duck out to take a call or you miss a section of the presentation. In these cases, you can ask for clarification or additional information, but it would be better to ask after the session. Audiences tend to get a bit irritated when someone asks a question that was already answered or covered earlier.

This one is probably the most forgivable of the question faux pas. People at conferences know that ducking out to deal with things is more common now. But if you are going to ask a question because you missed something, please mention that when you ask. That helps everyone get the frame of reference for why you’re asking it. That will keep the audience on your side and less likely to boo you.


Tom’s Take

I ask lots of questions. I also answer them. And nothing irritates me more than having to deal with someone making a point during Q&A to try and make themselves look smarter than me. I get it. I have a hatred of keynotes and other speeches with no ability to get feedback. But at the same time, as Scott Bradner says above, the focus of the presentation is about the people presenting. It’s about the people doing the work and sharing the ideas. If you want to use Q&A time to pontificate about your position, then you need to volunteer to be a speaker.