Extreme-ly Interesting Times In Networking

If you’re a fan of Extreme Networks, the last few months have been pretty exciting for you. Just yesterday, it was announced that Extreme is buying the data center networking business of Brocade for $55 million once the Broadcom acquisition happens. Combined with the $100 million acquisition of Avaya’s campus networking portfolio on March 7th and the purchase of Zebra Wireless (née Motorola) last September, Extreme is pushing itself into the market as a major player. How is that going to impact the landscape?

Building A Better Business

Extreme has been a player in the wireless space for a while. Their acquisition of Enterasys helped vault them into the mix with other big wireless players. Now, the rounding out of the portfolio helps them compete across the board. They aren’t just limited to playing with stadium wifi and campus technologies now. The campus networking story that was brought in through Avaya was a must to help them compete with Aruba, A Hewlett Packard Enterprise Company. Aruba owns the assets of HPE’s campus networking business and has been leveraging them effectively.

The data center play was an interesting one to say the least. I’ve mused recently that Brocade’s data center business might end up lying fallow after Arris grabbed Ruckus. Brocade had some credibility in very large networks through VCS and the MLX router series, but outside of the education market and specialized SDN deployments it was rare to encounter them. Arista has really dug into Cisco’s market share here, and the rest of the players seem content to wait out that battle. Juniper is back in the carrier business, and the rest seem to be focusing now on OCP and the pieces that flow logically from that, such as Six Pack, Backpack, and Whatever Facebook Thinks The Next Fast Switch Should Be Called That Ends In “Pack”.

Seeing Extreme come from nowhere to snap up the data center line from Brocade signals a new entrant into the data center crowd. Imagine, if you will, a mosh pit. Lots of people fighting for their own space to do their thing. Two people in the middle have decided to have an all-out fight over their space. Meanwhile, everyone else is standing around watching them. Finally, a new person enters the void of battle to do their thing on the side away from the fistfight that has captured everyone’s attention. This is where Extreme finds itself now.

Not Too Extreme

The key for Extreme now is to tell the “Full Stack” story to customers. Whereas before they had to hand off the high end to another “frenemy” and hope that it didn’t come back to bite them, now Extreme can sell all the way up and down the stack. They have some interesting ideas about SDN that will bear some watching as they begin to build them into their stack. The integration of VCS into their portfolio will take some time, as the way that Brocade does their fabric implementation is a bit different from the way the rest of the world does it.

This is also a warning call for the rest of the industry. It’s time to get off the sidelines and choose your position. Arista and Cisco won’t be fighting forever. Cisco is also reportedly looking to create a new OS to bring some functionality to older devices. That means they can keep innovating while fighting against their competitors. The winner of the Cisco and Arista battle is inconsequential to the rest of the industry right now. Either Arista will be wiped off the map and a stronger Cisco will pick a new enemy, or Arista will hurt Cisco and pull even with them in the data center market, leaving more market share for others to gobble up.

Extreme stands a very good chance of picking up customers with their approach. Customers that wouldn’t have considered them in the past will be lining up to see how Avaya campus gear will integrate with Enterasys wireless and Brocade data center gear. It’s not all that different from the hodge-podge approach that many companies have taken for years to lower costs and avoid having a single-vendor solution. Now, those lower-cost options are available in a single line of purple boxes.


Tom’s Take

Who knew we were going to get a new entrant into the Networking Wars for the tidy sum of $155 million? Feels like it should have cost more than that, but given the number of people holding fire sales to get rid of things they have to divest before pending acquisition or pending dissolution, it really doesn’t come as much of a surprise. Someone had to buy these pieces and put them together. I think Extreme is going to turn some heads and make for some interesting conversations in the next few months. Don’t count them out just yet.

Do Network Professionals Need To Be Programmers?

With the advent of software defined networking (SDN) and the move to incorporate automation, orchestration, and extensive programmability into modern network design, it could easily be argued that programming is a must-have skill. Many networking professionals are asking themselves if it’s time to pick up Python, Ruby, or some other language to create programs for the network. But is it a necessity?

Interfaces In Your Faces

The move toward using APIs is one of the more striking aspects of SDN that has been picked up quickly. Instead of forcing information to be input via CLI, or collecting information from the network by scraping that same CLI, APIs have unlocked more power than we ever imagined. RESTful APIs have given nascent programmers the ability to query devices and push configurations without the need to learn cumbersome syntax. The ability to grab this information and feed it to a network management system and analytics platform has extended the capabilities of the systems that support these architectures.
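
As a rough illustration, a RESTful query from Python might look something like the sketch below. The device address, endpoint path, token, and JSON field names are hypothetical placeholders — every platform (RESTCONF, NX-API, Arista’s eAPI, and so on) defines its own URLs and payloads — but the pattern of pulling structured data and handing it to your tooling is the same everywhere.

```python
# A minimal sketch, assuming a hypothetical REST endpoint that returns
# interface state as JSON. Host, path, token, and field names are all
# placeholders, not any specific vendor's API.
import requests

DEVICE = "https://198.51.100.10"    # hypothetical switch management address
TOKEN = "example-api-token"         # hypothetical auth token

resp = requests.get(
    f"{DEVICE}/api/v1/interfaces",  # hypothetical endpoint
    headers={"Authorization": f"Bearer {TOKEN}", "Accept": "application/json"},
    verify=False,                   # lab convenience only; verify certs in production
    timeout=5,
)
resp.raise_for_status()

# Structured data, ready for an NMS or analytics platform -- no CLI scraping.
for intf in resp.json().get("interfaces", []):
    print(intf["name"], intf["status"])
```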

The syntaxes that power these new APIs aren’t the copyrighted CLIs that networking professionals spend their waking hours memorizing in excruciating detail. JUNOS and Cisco’s “standard” CLI are as much relics of the past as CatOS. At least, that’s the refrain that comes from one side of the discussion. The traditional networking professionals hold tight to the access methods they have experience with and can tune like a fine instrument. More progressive networkers argue that standardizing around programming languages is the way to go. Why learn a proprietary access method when Python can do it for you?

Who is right here? Is there a middle ground? Is the issue really about programming? Is the prattle from programming proponents posturing about potential pitfalls in the perfect positioning of professional progress? Or are anti-programmers arguing against attacks, aghast at an area absent archetypical architecture?

Who You Gonna Call?

One clue in this discussion comes from the world of the smartphone. The very first devices that could be called “smartphones” were really very dumb. They were computing devices with strict user interfaces designed to mimic phone functions. Only when the potential of the device was recognized did phone manufacturers start to realize that things other than address books and phone dialers could be created. Even the initial plans for application development weren’t straightforward. It took time for smartphone developers to understand how to create smartphone apps.

Today, it’s difficult to imagine using a phone without social media, augmented reality, and other important applications. But do you need to be a programmer to use a phone with all these functions? There is a huge market for smartphone apps and a ton of courses that can teach someone how to write apps in very little time. People can create simple apps in their spare time or dedicate themselves to making something truly spectacular. However, users of these phones don’t need any specific programming knowledge. Operators can just use their devices and install applications as needed without the requirement to learn Swift, Java, or Objective-C.

That doesn’t mean that programming isn’t important to the mobile device community. It does mean that programming isn’t a requirement for all mobile device users. Programming is something that can be used to extend the device and provide additional functionality. But no one in an AT&T or Verizon store is going to give an average user a programming test before they sell them the phone.

This, to me, is the argument for network programmability in a nutshell. Network operators aren’t going to learn programming. They don’t need to. Programmers can create software that gathers information and provides interfaces to make configuration changes. But the rank-and-file administrator isn’t going to need to pull out a Java manual to do their job. Instead, they can leverage the experience and intelligence of people that do know how to program in order to extend their network functionality.
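
To make that division of labor concrete, here’s a hedged sketch of the kind of tool a programmer might hand to the operations team: a one-line command that wraps an API call, so the operator never touches a line of Python. The host, endpoint, and payload below are invented for illustration.

```python
# A sketch of a tool a programmer writes once so operators never have to:
# run "python vlan_tool.py 110 --name USERS" and the API call happens
# behind the scenes. Host, endpoint, and payload are hypothetical.
import argparse
import requests

def main():
    parser = argparse.ArgumentParser(description="Create a VLAN via a device API")
    parser.add_argument("vlan_id", type=int, help="VLAN ID to create")
    parser.add_argument("--name", default="", help="optional VLAN name")
    args = parser.parse_args()

    resp = requests.post(
        "https://198.51.100.10/api/v1/vlans",  # hypothetical endpoint
        json={"id": args.vlan_id, "name": args.name},
        headers={"Authorization": "Bearer example-api-token"},
        verify=False,  # lab convenience only
        timeout=5,
    )
    resp.raise_for_status()
    print(f"VLAN {args.vlan_id} created")

if __name__ == "__main__":
    main()
```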


Tom’s Take

It seems like this should be a fairly open-and-shut case, but there is a bit of debate yet left to have on the subject. I’m going to be moderating a discussion between Truman Boyes of Bloomberg and Vijay Gill of Salesforce around this topic on April 25th. Will they agree that networking professionals don’t need to be programmers? Will we find a middle ground? Or is there some aspect to this discussion that will surprise us all? I’ll make sure to keep you updated!

There Won’t Be A CCIE: SDN. Here’s Why

There’s a lot of work that’s been done recently to bring the CCIE up to modern network standards. Yusuf and his team are working hard to incorporate new concepts into the written exam. Candidates are broadening their horizons and picking up new ideas as they learn about industry stalwarts like OSPF and spanning tree. But the biggest challenge out there is incorporating the ideas behind software defined networking (SDN) into the exam. I don’t believe that this will ever happen. Here’s why.

Take This Broken Network

If you look at what the CCIE is really testing, the exam is about troubleshooting and integration with existing networks. The CCIE introduces and tests on concepts like link aggregation, routing protocol redistribution, and network service implementation. These are things that professionals are expected to do when they walk in the door, either as a consultant or as someone advising on the incorporation of a new network.

The CCIE doesn’t deal with the design of a network from the ground up. It doesn’t task someone with coming up with the implementation of a greenfield network from scratch. The CCIE exam, especially the lab component, only tests a candidate on their ability to work on something that already exists. That’s been one of the biggest criticisms of the CCIE for a very long time. Since CCIEs operate at the highest knowledge level, they are often drafted to design networks rather than implement them.

That’s the reason why the CCDE was created. CCDEs create networks from nothing. Their coursework focuses on taking requirements and making a network out of them. That’s why their practical exam focuses less on command lines and more on product knowledge and implementation details. The CCDE is where people who build networks prove they know their trade.

The Road You Must Design For

When you look at the concepts behind SDN, they aren’t really built for troubleshooting or for implementation without forethought. Yes, automation does help implementation. Orchestration helps new devices configure themselves on the fly. API access allows us to pull all kinds of useful information out of the network for the purposes of troubleshooting and management. But none of these things is in the domain of the CCIE.

Can SDN solve the thorny issues behind redistributing EIGRP into OSPF? How about creating Multiple Spanning Tree instances for odd numbered VLANs? Will SDN finally help me figure out how to implement Frame Relay Traffic Shaping without screwing up the QoS policies? The answer to almost every one of these questions is no.

SDN’s major advantages can only be realized with forethought and guidelines. Orchestration and automation make sense when implemented in pods or with new greenfield deployments. Once they have been tested and proven, these concepts can be spread across the entire network and used to ease design woes.

Does it make more sense to start using Ansible and Jinja at the beginning? Or halfway through a deployment? Would you prefer to create Python scripts to poll against APIs after you’ve implemented a different network monitoring system (NMS)? Or would it make more sense to do it right from the start?
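
For what the Jinja approach looks like in practice, here’s a minimal sketch: one template plus per-device variables replaces hand-typed configuration. The template contents, hostnames, and VLAN names are invented for illustration, not any particular platform’s syntax.

```python
# A minimal Jinja2 sketch: render a device config from a template and a
# per-device variable set. The same template serves every device; only
# the variables change. All names here are illustrative.
from jinja2 import Template

TEMPLATE = Template("""\
hostname {{ hostname }}
{% for vlan in vlans -%}
vlan {{ vlan.id }}
 name {{ vlan.name }}
{% endfor %}""")

device = {
    "hostname": "leaf01",
    "vlans": [{"id": 110, "name": "USERS"}, {"id": 120, "name": "SERVERS"}],
}

print(TEMPLATE.render(**device))
```

Rendering the same template against a dozen variable files at the start of a deployment is trivial; retrofitting templates onto a half-built network full of hand-typed exceptions is not, which is the point of the questions above.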

CCIEs may see SDN in practice as they start using things like APIC-EM to roll out policies in the network, but CCDEs are the real SDN gatekeepers. They alone can make the decisions to incorporate these ideas from the very beginning, leveraging capabilities to ease deployment and make troubleshooting easier. Even though CCIEs won’t see SDN on their exam, they will reap the benefits of it being baked into everything they do.


Tom’s Take

Rather than asking when the CCIE is going to get SDN-ified, a better question would be “Should the CCIE worry?” The answer, as explained above, is no. SDN isn’t something that a CCIE needs to study for. CCDEs, on the other hand, will be hugely impacted by SDN, and it will make a big difference to them in the long run. Rather than forcing CCIEs into a niche role that they aren’t necessarily suited for, we should incorporate SDN concepts into the CCDE, let each certification do what it does best, and make the network a better place for CCIEs. Everyone will be better off in the long run.

Building Reliability

Systems are inherently reliable. Until they aren’t. On a long enough timeline, even the most reliable system will eventually fail. How you manage that failure says a lot about the way you build your system or application. So why is it that we’re so focused on never failing?

Ten Feet Tall And Bulletproof

No system is infallible. Networks go down. Cloud services get knocked offline. Even Facebook, which represents “the Internet” for a large number of people, has days when it’s unreachable. When we examine these outages, we often find issues at the core of the system that cause services to be unreachable. In the most recent case of Amazon’s cloud system, it was a typo in a script that executed faster than it could be stopped.

It could also be a failure of the system to anticipate increased loads when minor failures happen. If systems aren’t built to take on additional load when the worst happens, you’re going to see bigger outages. That is a particular thorn in the side of large cloud providers like Amazon and Google. It’s also something that network architects need to be aware of when building redundant pathways to handle problems.

Take, for example, a recent demo during Aruba Atmosphere 2017. During the Day 2 keynote, CTO Partha Narasimhan wowed the crowd in the room when he disclosed that they had been doing a controller upgrade during the morning talk. Users had been tweeting, surfing, and using the Internet without much notice from anyone aside from the most technical wireless minds in the room. Even they could only see some strange AP roaming behavior as an indicator of the controllers upgrading the APs.

Aruba showed that they built a resilient network that could survive a simulated major outage caused by a rolling upgrade. They’ve done everything they can to ensure uptime no matter what happens. But the bigger question for architects and engineers is “why are we solving the problem for others?”

Why Dodge Bullets When You Don’t Have To?

As amazing as it is to build a system that can survive production upgrades with no impact on users, what are we really building when we create these networks? Are we encouraging our users to respect our technology advantage in the network or other systems? Are we telling our application developers that they can count on us to keep the lights on when anything goes wrong? Or are we instead sending the message that we will keep scrambling to prevent issues in applications from being noticeable?

Building a resilient network is easy. Making something reliable isn’t rocket science. But creating a network that is going to stay up for a long, long time without any outages is very expensive and process-intensive. Engineering something to never be down requires layers of exception handling and backup systems that are as reliable as their primary counterparts.

A favorite story from the storage world involves recovery. When you initially ask a customer what their recovery point objective (RPO) is in a system, the answer is almost always “zero” or “as low as you can make it”. When the numbers are put together to include redundant or dual-active systems with replication and data assurance, the price tag of the solution is usually enough to start a new round of discussion along the lines of “how reliable can you make it for this budget?”

In the networking and systems world, we don’t have the luxury of sticker shock when it comes to creating reliability. Storage systems can have longer RPOs because lost data is gone forever. Taking the time to ensure it is properly recovered is important. But data in transmission can be retransmitted. That’s at the heart of TCP. So it’s expected that networks have near-instantaneous RPOs at no extra cost. If you don’t believe that, ask yourself what happens when you tell someone the network went down because there’s only one router or switch connecting devices together.

Instead of making systems ultra-reliable and absolving users and developers from thought, networks and systems should be as reliable as they can be made without huge additional costs. That reliability should be stated emphatically without wiggle room. These constraints should inform developers writing code so that exception handling can be built in to prevent issues when the inevitable outage occurs. Knowing your limitations is the first step to creating an atmosphere to overcome them.

A lesson comes from the programmers of old. When you have a limited amount of RAM, storage, or compute cycles, you can write very tight code. DOS programs didn’t need access to a cloud worth of compute. Mainframes could execute programs written on punch cards. The limitations were simple and could be overcome with proper problem solving. As compute and memory resources have exploded, so too have code bases. Rather than giving developers the limitless capabilities of the cloud without restraint, perhaps creating some limits is the proper way to ensure that reliability stays in the app instead of being bolted on to the network.


Tom’s Take

We had a lot of fun recording this roundtable. We talked about Aruba’s controller upgrade and building reliable wireless networks. But I think we also need to make sure we’re aware that continually creating protocols and other constructs in the underlay won’t solve application programming problems. Things like vMotion set networking and application development back a decade. Giving developers a magic solution to avoid building proper exception handling doesn’t make better developers. Instead, it puts the burden of uptime back on the networking team. And we would rather build the best network we can instead of building something that can solve every problem that could ever possibly be created.

Can I Question You An Ask?


Words mean things! — Justin Warren (@JPWarren)


As a reader of my blog, you know that words are my tradecraft. Picking the right word to describe a topic or a technical idea is very important. Using incorrect grammar can cause misunderstandings and lead to issues later on. You’re probably all familiar with my dissection of the Premise vs. Premises issue in IT, but today’s post is all about interrogatives.

A Question, You Say?

One would think that the basic question is something that doesn’t need to be explained. It is one of the four basic types of sentences that we learn in grade school. It’s the easiest one of the bunch to pick out because it ends in a question mark. Other languages, like Japanese, have similar signals for turning a statement into a question.

Asking a question is important because it allows us to understand our world. We learn when we ask questions. We grow as people and as professionals. Kids learn to question everything around them at an early age to figure out how the world works. Questions are a cornerstone of society.

However, how do you come up with a question? In what manner do you call for an answer to an interrogative statement? How do you make a request? Or seek information? How do we know how to relay a question to someone at all?

We. Ask.

Note that ask is a verb. It can be transitive or intransitive. It’s something that we do so transparently that it never even crosses our minds. We ask for directions. We ask for help. We ask for a lunch suggestion. But every time we do, we are using the word to perform an action. Until we aren’t.

Ask Not

A trend in IT that dates all the way back to at least 2004 is the use of ask as a noun. Note that this would take the form of the following:

What’s the ask here?

That’s a mighty big ask of the engineering department.

I’m still looking for the ask here.

Even though this practice has roots that stretch back even further, the primary use of ask as a noun is in the IT space. The same group that thinks on-premise refers to a location believes that asks are really questions or requests. Are they using it in the same way that they shorten premises by one syllable? Do they need to save time by using a one-syllable word in place of a two-syllable one?

Raymond Chen’s article linked above does have a bit of insight from even a decade ago. The idea behind using ask as a noun really comes from trying to wrap a demand in a more palatable coating. Think back to the number of times that someone has an ask, substitute the word request or demand, and see if it is really appropriate there. Odds are good that it fits seamlessly.

If we go back to the idea that words still mean things and that precision is the key to saving time instead of shortening words, then why are we using ask instead of the other words? Is it, as Raymond says, because the speaker is trying to be passive-aggressive? Are they trying to avoid using a better, more inflammatory word? Or do they truly believe that using ask is a better way to convey things? Maybe they just hope it makes them sound cool and futuristic?


Tom’s Take

Hearing ask as a noun makes my ears crawl. Do we question asks? Or do we ask questions? Do we make requests? Or do we request makes? Despite the fact that the use of ask as a noun comes back time after time throughout history, it quickly goes away as being awkward and non-specific in conversation. I think it’s due time for this generation of ask as a noun to disappear and be relegated to the less important questions of history.

Networking Grows To Invisibility


Networking is done. The way you have done things before is finished. The writing has been on the wall for quite a while now. But it’s going to be a good thing.

The Old Standard

Networking purchase models look much different today than they have in the past. Enterprises no longer buy a switch or a router. Instead, they buy solution packages. The minimum purchase unit is a networking pod or rack. Perhaps your proof-of-concept minimum is a leaf-spine deployment of no fewer than three switches. Firewalls are purchased in pairs. Nothing in networking is simple any longer.

With the advent of software, even the deployment of these devices is different. Automation and orchestration systems handle provisioning as the devices are brought online. Network monitoring systems ensure the devices are operating correctly via API calls instead of relying on SNMP. Analytics and telemetry systems can pull statistics on the fly and create datasets that give you insight into all manner of network traffic. The intelligence built into the platform supporting the hardware is more apparent than ever before.
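
As a rough sketch of what that API-driven monitoring can look like, the loop below polls a health endpoint across a list of devices instead of walking SNMP OIDs. The hostnames, endpoint path, and response fields are hypothetical placeholders, not a specific vendor’s API.

```python
# A hedged sketch of API-based health polling in place of SNMP: walk a
# device list, pull a JSON status endpoint, and flag anything unhealthy.
# Hostnames, endpoint path, and fields are invented for illustration.
import requests

DEVICES = ["leaf01.example.net", "leaf02.example.net", "spine01.example.net"]

for host in DEVICES:
    try:
        resp = requests.get(
            f"https://{host}/api/v1/health",  # hypothetical endpoint
            headers={"Authorization": "Bearer example-api-token"},
            verify=False,  # lab convenience only
            timeout=3,
        )
        resp.raise_for_status()
        status = resp.json().get("status", "unknown")
    except requests.RequestException as err:
        status = f"unreachable ({err})"
    print(f"{host}: {status}")
```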

Networking is no longer about raw connectivity speed. Instead, networking is about stability: providing a transport network that stays healthy instead of growing by leaps and bounds every few years. Organizations looking to model their IT departments after service providers and cloud providers care more about having a reliable system than the most cutting-edge technology.

This is nothing new in IT. Both storage and virtualization have moved in this direction for a while. Hardware wizardry has been replaced by software intelligence. Custom hardware is now merchant-based and easy to replace and build. The expertise in deployment and operations has more to do with integration and architecture than in simple day-to-day setup.

The New Normal

Where does that leave networkers? Are we a dying breed, soon to join the Unix admins of the world and telco experts on a beach in retirement? The reality is that things aren’t as dire for us as one might believe.

It is true that we have shifted our thinking away from operations and more toward system building. Rather than worry if the switch ports have been provisioned, we instead look at creating resilient constructs that can survive outages and traffic spikes. Networks are becoming the utility service we’ve always hoped they would be.

This is not the end. It’s the beginning. As networks join storage and compute as utilities in the data center, the responsibilities for our sphere of wizardry are significantly reduced. Rather than spending our time solving crazy user or developer problems, we can instead focus on the key points of stability and availability.

This is going to be a huge shift for the consumers of IT as well. As cloud models have already shown us, people really want to get their IT on their schedules. They want to “buy” storage and networking when it’s needed without interruption. Creating a utility resource is the best way to accomplish that. No longer will the blame for delays be laid at the feet of IT.

But at the same time, the safety net of IT will be gone as well. Unlike Chief Engineer Scott, IT can’t save the day when a developer needs to solve a problem outside of their development environment. Things like First Hop Reachability Protocols (FHRP), multipathing, and even vMotion contribute to bad developer behavior. Without these being available in a utility IT setup, application writers are going to have to solve their own problems with their own tools. While the network team will end up being leaner and smarter, it’s going to make everything run much more smoothly.


Tom’s Take

I live for the day when networking is no different than the electrical grid. I would rather have a “dumb” network that provides connectivity rather than hoping against hope that my “smart” network has all the tricks it needs to solve everyone’s problem. When the simplicity of the network is the feature and we don’t solve problems outside the application stack, stability and reliability will rule the day.

The Rising Tide of CCIE Written Costs


In CCIE news this week, Cisco has raised the price of their exams across the board. The CCNA has moved up to $325, and the CCIE Written moves from $400 to $450. It goes without saying that there is quite a bit of outcry in the community. Why is the price of the CCIE Written exam surging so high?

No Such Thing As A Free Test

The most obvious answer is that the amount of work going into the development of the exam has increased. The number of people working behind the scenes to create a better exam has caused the amount of outlay to go up, hence the need to recover those costs. This is the simplest explanation for all the cost increases.

As Cisco pours more and more technology into the tests, the number of hands and fingers touching them has gone down. At the same time, the quality of the eyeballs that do look at the exam has gone up. It’s a lot like going to a specialist doctor. The quality of the care you receive for your condition is high, but the costs associated with that doctor are higher than those of a general practitioner. Cisco’s headcount is now focused on keeping exam quality high. That kind of expertise is always more expensive per capita, even if the number of those people is fewer.

The odd thing here is that even if the costs of the people doing the work are going up, the amount that the test price is increasing doesn’t seem to correlate. It’s been less than two years since the formal introduction of the current version of the CCIE Written exam at the then-unheard-of price point of $400. We’re two and a half years removed from the CCIE 4.0 Written exam and its lofty $350 price point. Has the technology changed so much in less than three years?

The Great Barrier Test

Going back to the introduction of the 5.0 version of the CCIE Written, there was also a retake policy change introduced. Cisco wanted to create a “backoff timer” to reduce the number of times a person could take the exam before needing to wait. The change still allowed you to take the second attempt after 30 days, but the third attempt then had to wait an additional 90 days after that. So, instead of being able to get three exam attempts in 60 days, those same three attempts would have taken 120 days.

This change was rolled back about six months ago due to outcry from the community. CCIEs trying to recertify were stymied by the exam and forced to wait longer and longer to pass it, with their certification hanging in the balance. With the increased timeouts and the limit of four retakes per year, some long-time CCIEs were in danger of exhausting their attempts and watching their certification slide away without any recourse to fix it.

Now, the increased price behind the CCIE Written could indeed be attributed to the increased overhead. But it could also be an attempt to keep people from rushing in to take the test every 30 days. Making a policy change to keep people out of the exam is one way to do it. But making the exam financially painful to continually fail is another. If you’re willing to drop $1,350 in three months to try to pass, then you either have money to burn or you’re desperate to pass.

In addition, a higher exam fee would push test takers to be absolutely certain of their knowledge level before attempting the exam. An initial barrier to entry that makes people think twice before scheduling an exam on a whim should improve the first-time pass rate significantly. This will also help drive funding to certification materials and classes, as candidates will want to know that they will pass before stepping into a certification exam center.


Tom’s Take

I’d really like to think that Cisco is just trying to cover their overhead with the recent price increases. Everything goes up in price. Some things go up faster than others. But the conspiracy theorist in me wonders if Cisco isn’t trying to use the increased price of the exam to help raise the pass rates and discourage folks from rushing the test repeatedly to see the exam question pool. $450 is a tough pill to swallow even if you pass. I think we’re going to see a lot more people taking advantage of the free Cisco Live exam as well as the half price cert exams there. And I sincerely hope the rumored options for recertification take flight soon. Because I don’t know how ready I am to go all out to study when there’s that much money on the line.