Expiring The Internet

An article came out this week that really made me sigh.  The title was “Six Aging Protocols That Could Cripple The Internet”.  I dove right in, expecting to see how things like Finger were old and needed to be disabled and removed.  Imagine my surprise when I saw things like BGP4 and SMTP on the list.  I really tried not to smack my own forehead as I flipped through the slideshow of how the foundation of the Internet is old and is at risk of meltdown.

If It Ain’t Broke

Engineers love the old adage “If it ain’t broke, don’t fix it!”.  We spend our careers planning and implementing.  We also spend a lot of time not touching things afterwards in order to prevent them from collapsing in a big heap.  Once something is put in place, it tends to stay that way until something necessitates a change.

BGP is a perfect example.  The basics of BGP remain largely the same from when it was first implemented years ago.  BGP4 has been in use since 1994 even though RFC 4271 didn’t officially formalize it until 2006.  It remains a critical part of how the Internet operates.  According to the article, BGP is fundamentally flawed because it’s insecure and trust-based.  BGP hijacking has been occurring with more frequency, even as resources to combat it are being hotly debated.  Is BGP to blame for the issue?  Or is it something more deeply rooted?
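
To see why that trust is a problem, consider route-origin validation, the kind of check that RPKI brings to the table.  Here is a minimal sketch of that check in Python; the ROA table below is invented sample data, not real routing information:

```python
import ipaddress

# Hypothetical Route Origin Authorizations: prefix -> (authorized origin AS, max length)
ROA_TABLE = {
    ipaddress.ip_network("203.0.113.0/24"): (64500, 24),
}

def validate_origin(prefix: str, origin_as: int) -> str:
    """Return 'valid', 'invalid', or 'unknown' for an announced prefix."""
    announced = ipaddress.ip_network(prefix)
    covered_by_roa = False
    for roa_prefix, (asn, max_len) in ROA_TABLE.items():
        if announced.subnet_of(roa_prefix):
            covered_by_roa = True
            if origin_as == asn and announced.prefixlen <= max_len:
                return "valid"
    # Covered by a ROA but wrong origin or too specific -> invalid
    return "invalid" if covered_by_roa else "unknown"

print(validate_origin("203.0.113.0/24", 64500))  # valid
print(validate_origin("203.0.113.0/24", 64666))  # invalid: a likely hijack
```

Plain BGP skips this step entirely and believes whatever origin it hears, which is exactly the gap that hijackers exploit.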

Don’t Fix It

The issues with BGP and other protocols mentioned in the article, including IPv6, aren’t due to the way the protocol was constructed.  It is due in large part to the humans that implement those protocols.  BGP is still in use in the current insecure form because it works.  And because no one has proposed a simple replacement that accomplishes the goal of fixing all the problems.

Look at IPv6.  It solves the address exhaustion issue.  It solves hierarchical addressing issues.  It restores end-to-end connectivity on the Internet.  And yet adoption numbers still languish in the single digits.  Why?  Is it because IPv6 isn’t technically superior? Or because people don’t want to spend the time to implement it?  It’s expensive.  It’s difficult to learn.  Reconfiguring infrastructures to support new protocols takes time and effort, both of which are better spent answering user problems or taking on additional tasks as directed by management that doesn’t care about BGP insecurity until the Internet goes down.

It Hurts When I Do This

Instead of complaining about how protocols are insecure, the solution to the problem should be twofold: First, we need to start building security into protocols and expiring their older, insecure versions.  POODLE exploited SSLv3, an older version that served as a fallback to TLS.  While some old browsers still used SSLv3, the simple, easy solution was to disable SSL and force people to upgrade to TLS-capable clients.  In much the same way, protocols like NTP and BGP can be modified to use stronger security.  Instead of merely suggesting that people use those secure versions, architects and engineers need to implement them and discourage use of the old, insecure versions by disabling them.  It’s not going to be easy at first.  But as the movement gains momentum, the solution will work.
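
As a concrete (if simplified) example of expiring a protocol version, here’s a sketch using Python’s standard ssl module to build a server context that refuses SSLv2 and SSLv3 outright; the certificate file names are hypothetical:

```python
import ssl

# Build a server-side TLS context that refuses the old, insecure versions.
# Legacy SSLv3-only clients simply fail to connect and must upgrade.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.options |= ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3
context.load_cert_chain(certfile="server.crt", keyfile="server.key")  # hypothetical file names
```

The important part is that the old version isn’t deprioritized or discouraged.  It simply stops working, and the clients have to move.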

The next step in the process is to build easy-to-configure replacements.  Bolting security onto a protocol after the fact does stop the bleeding.  But to cure the underlying disease, the security needs to be baked into the protocol from the beginning.  Doing this with an entirely new protocol that has no backwards compatibility, though, will be the death of that new protocol.  Just look at how horrible the transition to IPv6 has been.  Lack of an easy transition coupled with no monetary incentive and lack of an imminent problem caused the migration to drag out until the eleventh hour.  And even then there is significant pushback against an issue that can no longer be ignored.

Building the next generation of secure Internet protocols is going to take time and transparent effort.  People need to see what’s going into something to understand why it’s important.  The next generation of engineers needs to understand why things are being built the way they are.  We’re lucky in that many of the people responsible for building the modern Internet are still around.  When asked about limitations in protocols the answer remains remarkably the same – “We never thought it would be around this long.”

The longevity of quick fixes seems to be the real issue.  When the next generation of Internet protocols is built there needs to be a built-in expiration date.  A point-of-no-return beyond which the protocol will cease to function.  And there should be no method for extending the shelf life of a protocol to forestall its demise.  In order to ensure that security can’t be compromised we have to resign ourselves to the fact that old things need to be put out to pasture.  And the best way to ensure that new things are put in place to supplant them is to make sure the old things go away on time.


Tom’s Take

The Internet isn’t fundamentally broken.  It’s a collection of things that work well in their roles that maybe have been continued a little longer than necessary.  The probability of an exploit being created for something rises with every passing day it is still in use.  We can solve the issues of the current Internet with some security engineering.  But to make sure the problem never comes back again, we have to make a hard choice to expire protocols on a regular basis.  It will mean work.  It will create strife.  And in the end we’ll all be better for it.

Cisco Just Killed The CLI

Gallons of virtual ink have been committed to virtual paper in the last few days with regards to Cisco’s lawsuit against Arista Networks.  Some of it is speculating on the posturing by both companies.  Other writers talk about the old market vs. the new market.  Still others look at SDN as a driver.

I didn’t just want to talk about the lawsuit.  Given that Arista has marketed EOS as a “better IOS than IOS” for a while now, I figured Cisco finally decided to bite back.  They are fiercely protective of IOS and they have to be because of the way the trademark laws in the US work.  If you don’t go after people that infringe, you lose your standing to do so and invite others to do it as well.  Is Cisco’s timing suspect? One does have to wonder.  Is this about knocking out a competitor? It’s tough to say.  But one thing is sure to me.  Cisco has effectively killed the command line interface (CLI).

“Industry Standards”

EOS is certainly IOS-like.  While it does introduce some unique features (see the NFD3 video here), the command syntax is very much IOS.  That is purposeful.  There are two broad categories of CLIs in the market:

  • IOS-like – EOS, HP Procurve, Brocade, FTOS, etc
  • Not IOS-like – Junos, FortiOS, D-Link OS, etc

What’s funny is that the IOS-like interfaces have always been marketed as such.  Sure, there’s the famous “industry standard” CLI comment, followed by a wink and a nudge.  Everyone knows what OS is being discussed.  It is a plus point for both sides.

The non-Cisco vendors can sell to networking teams by saying that their CLI won’t change.  Everything will be just as easy to configure with just a few minor syntax changes.  Almost like speaking a different dialect of a language.  Cisco gains because more and more engineers become familiar with the IOS syntax.  Down the line, those engineers may choose to buy Cisco based on familiarity with the product.

If you don’t believe that being IOS-like is a strong selling point, take a look at PIX and Airespace.  The old PIX OS was transformed into something that looked a lot more like traditional IOS.  In ASA 8.2 they even changed the NAT code to look like IOS.  With Airespace it took a little longer to transform the alien CLI into something IOS-like.  They even lost functionality in doing so, simply to give networking teams an interface that is more friendly to them.  Cisco wants all their devices to run a CLI that is IOS-like.  Junos fans are probably snickering right now.

In calling out Arista for infringing on the “generic command line interface” in patent #7,047,526, Cisco has effectively said that they will start going after companies that copy the IOS interface too well.  This leaves companies in a bit of a conundrum.  How can you continue to produce an OS with an “industry standard” CLI and hope that you don’t become popular enough to get noticed by Cisco?  Granted, it seems that all network switching vendors are #2 in the market somehow.  But at what point does being a big enough #2 get the legal hammer brought to bear?  Do you have to be snarky in marketing messages? Attack the 800-pound gorilla enough that you anger them?  Or do you just have to have a wildly successful quarter?

Laid To REST

Instead, what will happen is a tough choice.  Either continue to produce the same CLI year after year and hope that you don’t get noticed, or overhaul the whole system.  Those that choose not to play Russian Roulette with the legal system have a further choice to make.  Should we create a new, non-infringing CLI from the ground up? Or scrap the whole idea of a CLI moving forward?  Both of those second choices are going to involve a lot of pain and effort.  One of them has a future.

Rewriting the CLI is a dead-end road.  By the time you’ve finished your Herculean task you’ll find the market has moved on to bigger and better things.  The SDN revolution is about making complex networks easier to program and manage.  Is that going to be accomplished via yet another syntax?  Or will it happen because of REST APIs and programming interfaces?  Given an equal amount of time and effort on both sides, the smart networking company will focus their efforts on scrapping the CLI and building programmability into their devices.  Sure, the 1.0 release is going to sting a little.  It’s going to require a controller and some rough interface conventions.  But building the seeds of a programmable system now means it will be growing while other CLIs are withering on the vine.
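
To illustrate the difference, here’s a sketch of the REST approach: creating a VLAN by sending structured JSON to a controller rather than typing CLI syntax.  The endpoint and payload schema are invented for illustration; real APIs like Arista’s eAPI or Cisco’s NX-API each define their own:

```python
import json
import urllib.request

# Structured data instead of CLI syntax: the controller validates the fields,
# not a human squinting at a terminal.
payload = json.dumps({"vlan_id": 100, "name": "engineering"}).encode()

req = urllib.request.Request(
    url="https://controller.example.com/api/v1/vlans",  # hypothetical endpoint
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```

The same script configures one switch or five hundred, which is the whole point of programmability over syntax.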

It won’t be easy.  It won’t be fun.  And it’s a risk to alienate your existing customer base.  But if your options are to get sued or spend all your effort on a project that will eventually go the way of the dodo your options don’t look all that appealing anyway.  If you’re going to have to go through the upheaval of rewriting something from the ground up, why not choose to do it with an eye to the future?


Tom’s Take

Cisco and Arista won’t be finished for a while.  There will probably be a settlement or a licensing agreement or some kind of capitulation on both sides in a few years’ time.  But by that point, the fallout from the legal action will have finally finished off the CLI for good.  There’s no sense in gambling that you won’t be the next target of a process server.  The solution will involve innovative thinking, blood, sweat, and tears on the part of your entire development team.  But in the end you’ll have a modern system that works with the new wave of the network.  If nothing else, you can stop relying on the “industry standard” ploy when selling your interface and start telling your customers that you are setting the new standard.

Vendor Whitebox Switches – Better Together?

Whitebox switching has moved past the realm of original device manufacturers and has been taken up by traditional networking vendors. Andre Kindness (@AndreKindness) of Forrester recently posted that he fields several calls from his customers every day asking about a particular vendor’s approach to whitebox switching. But what do these vendor offerings look like? And can we predict how a given vendor will address the whitebox market?

Chocolate In My Peanut Butter

Dell was one of the first traditional networking vendors to announce a whitebox switch offering that decoupled the operating system from the switching hardware. Dell offered packages from Cumulus Linux and Big Switch Networks alongside their PowerConnect lineup. This makes sense when you consider that the operating system on the switch has never been the strong suit of Dell. The PowerConnect OS is not very popular with network engineers, being very dissimilar from more popular CLIs such as Cisco IOS and its look-alikes.  Their attempts to capitalize on the popularity of Force Ten OS (FTOS) and adapt it for use on PowerConnect switches have been difficult at best, due to the divide between the hardware architectures of the two platforms.

What Dell is very good at is offering hardware at a greatly reduced cost. By utilizing this strength, they can enter the whitebox market successfully by partnering with OS vendors to provide customer options. This also gives them time to adapt FTOS to more switches and attempt to drive acquisition costs down once the port of FTOS to PowerConnect is complete.

Peanut Butter In My Chocolate

What happens when a vendor sees software as their strength? You get an announcement like the one last week from Juniper Networks. Juniper has put a significant amount of time and effort into Junos. The FreeBSD base of the system gives it the adaptability that Cumulus enjoys. Since Juniper sees Junos as a huge advantage, their path to whitebox switching was to offer hardware that reduces the acquisition cost. Porting Junos to run on the OCP-based OCX1100 allows Juniper to use silicon that is more in line with merchant offering price points. The value to the customer comes from existing experience with Junos allowing for reduced learning time on the new platform.

So how will the rest of the market adopt whitebox switching offerings? HP will likely go the same route as Dell, as their software picture is murky with products split evenly between HP Procurve OS and 3Com/H3C Comware. HP has existing silicon manufacturing facilities that allow for economy of scale to reduce acquisition costs to the customer. Conversely, Brocade will likely leverage existing Vyatta development and investment in projects like OpenDaylight to standardize their whitebox offerings on software while offering OCP-style hardware platforms.

The 800-pound Whitebox Gorilla

And what of Cisco? Cisco has invested significant time and effort into both hardware and software. IOS is being renovated with API access and being ported into containers to broaden the platforms on which it can operate. The Cisco investment in custom silicon development is significant as well, with only the Nexus 3000 and 9000 series using merchant offerings from Broadcom. Their eventual whitebox offering could take any form.

Cisco feels very strongly about keeping IOS and its variants exclusive to Cisco hardware. Given that they sued Arista Networks late last week for patent infringement in EOS, it should be apparent how strongly they feel about IOS. That will be the impetus that pushes them to offering some limited custom silicon that is capable of running third-party operating systems. This allows Cisco to partner closely with one of those developers to ensure peak performance and tight integrations with whatever hardware Cisco includes.  They would likely offer this platform with a bundle of SmartNET support services, recouping the costs of producing the switch with some very high margin services.

The possibility of porting IOS to an OCP-like reference platform is remote at best. A whitebox IOS offering would still carry a high price tag to reflect Cisco R&D and would be priced well above what customers are willing to pay in total acquisition cost.  It would also open the door for someone to “port” that version of IOS to run on platforms that it shouldn’t be running on.  At the very least, it will expose Cisco in the market as having too high a price tag on their intellectual property in IOS and give competitors like Juniper and Big Switch ammunition to fight back.


Tom’s Take

When evaluating vendor whitebox offerings, be sure your assessment of the strengths matches theirs. Wide adoption of a given strategy will solidify that approach in the future. Be sure to give feedback to your local account teams and tell them the critical features you need to be supported. That will ensure the vendor has you in mind when the time comes to produce a whitebox offering.  And remember that you always have the option of going your own way.  Nothing says that you have to buy a solution with bundled services from traditional networking vendors.  If you’re willing to fly without a safety net for a while, you can find some great deals on ODM switches and OSes to run on them.

HP Networking – Hitting The Right Notes

HP has quietly been making waves recently with their networking strategies.  They recently showed off their technology around software defined networking (SDN) applications at Interop New York.  Here’s a video:

It would seem that HP has been doing a lot of hard work on the back end with SDN.  So why haven’t we heard about it?

Trumpet and Bugle

HP Networking hasn’t been in the news as much as Cisco and VMware as of late.  When you consider that both of those companies are pushing agendas related to redefining the paradigm of networking around policy and virtualization their trumpeting of those agendas makes total sense.  But even members of the League of Non-Aligned Vendors like Brocade are talking a lot about their SDN strategy with the Vyatta Controller and OpenStack integrations.  Vendors have layers and layers of plans for the “new” networking.  But HP has actually been doing it!  Why haven’t we known until now?

HP has been content to play the role of the bugler to the trumpeters of the bigger organizations.  Rather than talking over and over again about what they are planning on doing, HP waits until they’ve actually done it to talk about it.  It’s a sound strategy.  I love making everything work first and then discussing what you’ve done rather than spending week after week, month after month, talking about a plan that may or may not come to fruition.

The issue with HP is that they need to bugle a little more often to stay afloat in the space.  Only making announcements won’t cut it.  The breakneck pace of innovation and adoption is disrupting the ability of laggard developers to stay afloat.  New technologies are being supplanted by upstarts.  Docker is old news.  Now we’re talking about SocketPlane and Rocket.  You’d be forgiven if you haven’t been keeping up as a blogger or engineer.  But if you’ve missed the boat as a vendor, you’re going to have a hard time treading water.

The Tijuana Brass

How can HP solve their problem?  Technically, they need to keep doing what they’ve been doing all along.  They are making good decisions and innovating around ideas like the HP SDN App Store.  What they need to do is tell more people about it.  Get the word out.  Start some discussions around what you’re doing.  Don’t be afraid to engage.  The more you talk to people about your solutions, the more your name will come up in conversation. You need to be loud and on-key.  Herb Alpert and the Tijuana Brass weren’t popular right away.  It took years of recording and playing before the mainstream “discovered” them and popularized their music.

HP Networking has spent considerable time building SDN infrastructure.  The fact that there are OpenFlow images for a wide variety of their existing switch infrastructure is proof they are concerned about making everything fit together.  Now it’s time to tell the story.  With the impending divestiture of HP’s enterprise businesses from the consumer line, it will be far too easy to get lost in the shuffle of reorganization.  The way to prevent that is to step out and make yourself known.  Write blogs, record podcasts, and interact with the community.  Don’t be afraid to toot your own horn a little.


Disclaimer

HP invited me to attend HP Discover Barcelona as their guest.  They provided travel and lodging expenses during my time in Europe.  They did not require any blog posts or consideration for this invitation, nor were any offered on my part.  The opinions and analysis expressed herein represent my thoughts alone.

Riding the SD-WAN Wave

Software Defined Networking has changed the way that organizations think about their network infrastructure.  Companies are looking at increasing automation of mundane tasks, orchestration of policy, and even using white box switches with the help of new unbound operating systems.  A new class of technologies that is coming to market hopes to reduce complexity and cost for the Achilles Heel of many enterprises: the Wide Area Network (WAN).

Do You WANt To Build A Snowman?

The WAN has always been a sore spot for enterprise networks.  It’s necessary to connect your organization to the world.  If you have remote sites or branch locations, it is critical for daily operations.  If you have an e-commerce footprint, your WAN connection needs to be able to handle the generated traffic.  But good WAN connectivity costs money.  Lots of money.

WAN protocols are constantly being refined to come up with the fastest possible transmission and the highest possible uptime.  Frame Relay, Asynchronous Transfer Mode (ATM) and Multi-Protocol Label Switching (MPLS) are a succession of technologies that have shaped enterprise WAN connectivity for over a decade.  They have their strengths and weaknesses.  But it is difficult to build an enterprise WAN without one.

Some customers can’t get MPLS connectivity.  Or even Frame Relay for that matter.  Their locations are too remote or the cost of having the connection installed is far above the return on investment.  These customers are often forced to resort to consumer-class connections, like cable modems, Digital Subscriber Line (DSL), or even 4G/LTE modem uplinks.  While cheaper and easy to install, these solutions are often not as robust as their business-grade counterparts.  And when it comes to support on a down circuit…

Redefining the WAN

How does Software Defined WAN (SD-WAN) help?  SD-WAN technologies from companies like Silver Peak, CloudGenix, and Viptela function like overlay networks for the WAN.  They take the various inputs that you have, such as MPLS, cable, and 4G/LTE networks.  These inputs are then arranged in such a way as to allow you to intelligently program how traffic will behave on the links.  If you want only critical business traffic on the MPLS circuit during business hours you can do that.  If you want to ensure the 4G/LTE uplink is only used in the event of an emergency outage, you can do that too.  You can even program various costs and metrics into the system to help you make decisions about when a given link would be a better economic decision given the time of day or amount of transferred data.
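
As a toy illustration of the kind of policy an SD-WAN controller evaluates, here’s a short Python sketch.  The link names, traffic classes, and business-hours rule are all invented for the example:

```python
from datetime import datetime

def pick_link(traffic_class: str, link_up: dict, now: datetime) -> str:
    """Choose an uplink for a flow based on class, link state, and time of day."""
    business_hours = 8 <= now.hour < 18
    if traffic_class == "critical" and business_hours and link_up["mpls"]:
        return "mpls"                  # keep critical traffic on the SLA-backed circuit
    for link in ("cable", "mpls"):     # everything else prefers the cheaper pipe first
        if link_up[link]:
            return link
    return "lte"                       # 4G/LTE only as the emergency fallback

status = {"mpls": True, "cable": False, "lte": True}
print(pick_link("critical", status, datetime(2014, 12, 15, 10, 30)))  # -> mpls
```

The controller’s job is to evaluate rules like these continuously and push the result to every edge device, instead of an engineer rewriting route maps at each site.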

You’re probably saying to yourself, “But I can do all of that today.” And you would be right. But all of this has to happen manually, or at least require a lot of programming.  If you’ve ever tried to configure OER/PFR on a Cisco router you know what I’m talking about.  And that’s just one vendor’s equipment.  What if there are multiple devices in play?  How do you configure the edge routers for fifty sites?  What happens when a circuit goes down at 3 a.m.?  Having a simple interface for making decisions or even the ability to script actions based on inputs makes the system much more flexible and responsive.

It all comes down to a simple number for all parties involved.  For engineering, the amount of time spent configuring and maintaining complex WAN connectivity will be reduced.  Engineers love not needing to spend time on things.  For the decision makers (and bean counters), it all comes down to money.  SD-WAN technologies reduce costs by better utilizing existing infrastructure.  Eventually, their analysis can allow you to reduce or remove unnecessary connectivity.  That means more money in the pockets of the people that want the money.


Tom’s Take

I’ve referred to WAN applications as the “hello world” for SDN.  That’s because I saw so many people demoing them when SDN was first being talked about.  Cisco did this at Cisco Live 2012 in San Diego.  SD-WAN didn’t really become a concrete thing in my mind until it was the topic of discussion at the Spring ONUG meeting.  Those are the people with the money.  And they are looking at the cost savings and optimization from SD-WAN technologies.  You’d better believe that the first wave of SD-WAN that you’ve seen in the last couple of months is just the precursor to a wider look at connectivity in general.  Better get ready to surf.

Wires Are The Exception

Last week I went to go talk to a group of vocational students about networking.  While I was there, I needed to send a couple of emails.  I prefer to write emails from my laptop, so I pulled it out of my bag between talks and did the first thing that came to mind: I asked for the wireless SSID and password.  Afterwards, I started thinking about how far we’ve come with connectivity.

I can still remember working with a wireless card back in 2001 trying to get the drivers to play nice with Windows 2000.  Now, wireless cards are the rule and wired ports are the exception.  My primary laptop needs a dongle to have a wired port.  My new Mac Mini is happily churning along halfway across the room connected to my network as a server over wireless.  It would appear that the user edge quietly became wireless and no tears were shed for the wire.

It’s also funny that a lot of the big security features like 802.1x and port security became less and less of an issue once open ports started disappearing in common areas.  802.1x for wired connections is barely even talked about now.  It’s more of an authentication mechanism for wireless now.  I’ve even heard some vendors of these solutions touting the advantages of using it with wireless and then throwing in the afterthought comment, “We also made it easy to configure for wired connections too.”

We still need wires, of course.  Access points have to connect to the infrastructure.  Power still can’t be delivered via microwave.  But the shift toward wireless has made ubiquitous cabling unnecessary.  I used to propose a minimum of four cable drops per room to provide connectivity in a school.  I would often argue for six in case a teacher wanted to later add an IP phone and a couple of student workstations.  Now, almost everything is wireless.  The single wire powers a desk phone and an antiquated desktop.  Progressive schools are replacing the phones with soft clients and the desktops with teacher laptops.

The wire is not in any danger of becoming extinct.  But it is going to be relegated to the special purpose category.  Wires will only live behind the scenes in data centers and IDF closets.  They will be the thing that we throw in our bag for emergencies, like an extra console cable or a VGA adapter.

Wireless is the future.  People don’t walk into a coffee shop and ask, “Hey, where’s the Ethernet cable?” Users don’t crowd around wall plates with hubs to split the one network drop into four or eight so they can plug their tablets in.  Companies like Aruba Networks recognized this already when they started posing questions about all-wireless designs.  We even made a video about it:

While I don’t know that the all-wireless design is going to work, I can say with certainty that the only wires that will be running across your desktop soon will be power cables and the occasional USB cord.  Ethernet will be relegated to the same class as electrical wires connected to breaker boxes and water pipes.  Important and unseen.

The Trap of Net Neutrality

The President recently released a video and statement urging the Federal Communications Commission (FCC) to support net neutrality and ensure that there will be no “pay for play” access to websites or punishment for sites that compete against a provider’s interests.  I wholeheartedly support the idea of net neutrality.  However, I do like to stand on my Devil’s Advocate soapbox every once in a while.  Today, I want to show you why a truly neutral Internet may not be in our best interests.

Lawful Neutral

If the FCC mandates that the Internet must remain neutral, it will mean that all traffic must be treated equally.  That’s good, right?  It means that a provider can’t slow my Netflix stream or make their own webmail service load faster than Google or Yahoo.  It also means that the provider can’t legally prioritize packets either.

Think about that for a moment.  We, as network and voice engineers, have spent many an hour configuring our networks to be as unfair as possible.  Low-latency queues for voice traffic.  Weighted fair queues for video and critical applications.  Scavenger traffic classes and VLANs for file sharers and other undesirable bulk noise.  These plans take weeks to draw up and even longer to implement properly.  It helps us make sense out of the chaos in the network.

By mandating a truly neutral net, we are saying that those carefully marked packets can’t escape from the local network with their markings intact.  We can’t prioritize voice packets once they escape the edge routers.  And if we move applications to the public cloud, we can’t ensure priority access.  Legally, the providers will be forced to remark all CoS and DSCP values at the edge and wash their hands of the whole thing.
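
For the curious, here’s a small Python sketch of what one of those markings actually is: a DSCP value set on a socket, in this case EF (46) for voice.  Under a strict neutrality mandate, the provider edge would simply rewrite that byte to zero.  The destination address below is a documentation placeholder:

```python
import socket

EF = 46  # Expedited Forwarding, the standard DSCP marking for voice

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# DSCP occupies the top 6 bits of the IP TOS byte, hence the shift.
# IP_TOS is available on platforms that expose it (Linux, macOS).
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF << 2)
sock.sendto(b"rtp-ish payload", ("192.0.2.10", 16384))  # hypothetical voice endpoint
```

Everything QoS does downstream hinges on that one byte surviving the trip.  Strip it at the provider edge and the careful engineering behind it evaporates.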

And what about provider MPLS circuits?  If the legally mandated neutral provider is administering your MPLS circuits (as they do in small and medium enterprises), can they copy the DSCP values to the MPLS EXP field before forwarding the packet?  Where does the law stand on prioritizing private traffic transiting a semi-public link?

Chaotic Neutral

The idea of net neutrality is that no provider should have the right to decide how your traffic should be handled.  But providers will extend that idea to say they can’t deal with any kind of marking.  They won’t legally be able to offer you differentiated service even if you were willing to pay for it.  That’s the double-edged sword of neutrality.

You can be sure that the providers will already have found a “solution” to the problem.  Today, quality of service (QoS) only becomes an issue when the link becomes congested.  Packets don’t queue up if there’s bandwidth available to use.  So the provider solution is simple.  If you need differentiated service, you need to buy a bigger pipe.  Overprovision your WAN circuits!  We can’t guarantee delivery unless you have more bandwidth than you need!  Who cares what the packets are marked?  Which, of course, leads to a little gem from everyone’s favorite super villain:

[Image: Syndrome meme about marking everything EF]

Of course, the increased profits from these services will line the pockets of the providers instead of going to build out the infrastructure necessary to support these overbuilt networks.  The only way to force providers to pony up the money to build out networks is to make it so expensive to fail that the alternative is better.  That requires complex negotiation and penalty-laden, iron-clad service level agreements (SLAs).

The solution to the issue of no prioritized traffic is to provide a list of traffic that should be prioritized.  Critical traffic like VoIP should be allowed to be expedited, as the traffic characteristics and protections we afford it make sense.  Additionally, traffic destined for a public cloud site that functions as internal traffic of a company should be able to be prioritized across the provider network.  Tunneling or other forms of traffic protection may be necessary to ensure this doesn’t interfere with other users.  Exempt traffic should definitely be the exception, not the rule.  And it should never fall on the providers to determine which traffic should be exempted from neutrality rules.


Tom’s Take

Net neutrality is key to the future of society.  The Internet can’t function properly if someone else with a vested interest in profits decides how we consume content.  It’s like the filter bubble of Google.  A blind blanket policy doesn’t do us any good, either.  Everyone involved in networking knows there are types of traffic that can be prioritized without having a detrimental effect.  We need to make smart decisions about net neutrality and know when to make exceptions.  But that power needs to be in the hands of the users and customers.  They will make decisions in their best interest.  The providers should have the capability to implement the needs of their customers.  Only then will the Internet be truly neutral.

How Do You Spell That?

I spent a bit of my career on the phone doing support for a national computer vendor. In addition to the difficulties of walking people through opening the case and diagnosing motherboard issues, I found myself needing to overcome language barriers. While I only have a hint of an accent (or so I’ve been told), spelling out acronyms was a challenge. That’s where the phonetic alphabet comes into play.

By now, almost everyone uses the NATO phonetic alphabet. It’s the most recognized in the world. The US joint Army/Navy version varies a bit but does have a lot of similarities. However, when I first started out using the NATO version, quite a few callers didn’t know what Lima was or giggled when I said Tango.

I decided that some people have much more familiarity with first names. This was borne out when I kept using Mary for “M” instead of Mike. People immediately knew it. Same for Victor, Peter, and so on. So I cobbled together my own Name Phonetic Alphabet.

A – Adam
B – Barbara
C – Charlie
D – David
E – Edward
F – Frank
G – George
H – Harold
I – Irwin
J – John
K – Kevin
L – Larry
M – Mary
N – Nancy
O – Oliver
P – Peter
Q – Quincy (or queen)
R – Roger
S – Sam
T – Tom (my favorite)
U – Umbrella
V – Victor
W – William
X – X-Ray
Y – Yellow
Z – Zebra

Finding a name for Y and Z was pretty difficult, but everyone knows Yellow and Zebra. I was tempted to use Zander, but the more popular version of that name from Buffy the Vampire Slayer was spelled Xander. No sense confusing folks. As for X, if you don’t know X from the sound, we need to have a chat.

Was it a duplication of effort? Certainly. But it works universally with everyone I’ve ever talked to, including children. It makes “Roger Adam Irwin David” easy to get across to people without trying to remember Romeo and India.
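
For fun, here’s a minimal Python sketch that spells out a string using the name alphabet above:

```python
# Map each letter to its name from the list above.
NAMES = {
    "A": "Adam", "B": "Barbara", "C": "Charlie", "D": "David", "E": "Edward",
    "F": "Frank", "G": "George", "H": "Harold", "I": "Irwin", "J": "John",
    "K": "Kevin", "L": "Larry", "M": "Mary", "N": "Nancy", "O": "Oliver",
    "P": "Peter", "Q": "Quincy", "R": "Roger", "S": "Sam", "T": "Tom",
    "U": "Umbrella", "V": "Victor", "W": "William", "X": "X-Ray",
    "Y": "Yellow", "Z": "Zebra",
}

def spell(word: str) -> str:
    """Spell a word phonetically, passing through anything that isn't a letter."""
    return " ".join(NAMES.get(ch.upper(), ch) for ch in word)

print(spell("RAID"))  # Roger Adam Irwin David
```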

The key to communication with others is to find something that works for you.  If you can easily convey your information to someone else, the shortcuts you take don’t matter.  If first names work best, use them.  If drawing pictures works better, use those.  In the end, getting the point across is the goal.

Twitter, Please Stop Giving Me Things I Don’t Want

Last week, Twitter confirmed that they will start injecting tweets from users you don’t follow into your timeline.  The collective cry from their user base ranged from outrage to a solid “meh”.  It seems that Twitter has stumbled onto the magic formula that Facebook has perfected: create a feature the users don’t care about and force it onto them.  Why?

Twitter Doesn’t Care About Power Users

Twitter has an interesting mix of users.  They reported earlier this year that 44% of their user base has never tweeted.  That’s a lot of accounts that were created for the purpose of reserving a name or following people in read-only mode.  That must concern Twitter.  Because people that don’t tweet can’t be measured for things like advertising.  They won’t push the message of a sponsored tweet.  They won’t add their voice to the din.  But what about those users that tweet regularly?

Power users are those that tweet frequently without a large follower base.  Essentially, everyone that isn’t a celebrity with a million followers or a non-tweeting account.  You know, the real users on Twitter.  The people that make typos in their tweets and actually check to see who follows them.  The ones that don’t have a “social media team” tweeting for them.  Nothing wrong with a team tweeting for a brand, but when they’re tweeting for a person it’s a little disconcerting.

Power users keep getting screwed by Twitter.  The API changes really hurt those that use clients other than the official ones.  Given that Twitter has killed most of its “official” clients in favor of pushing people to use the web, it makes you wonder what their strategy might be.  They are entirely beholden to their investors right now.  That means user signups and ad revenue.  And it means focusing on making the message widespread.  Why worry about placating the relatively small user base that uses your product when you can create a method for reaching millions with a unicast sponsored hashtag? Or by injecting tweets from people you don’t follow into your timeline?

The tweet injection thing is like a popup ad.  It serves the purpose of Twitter deciding to show you some tweets from other “users”.  Anyone want to bet those users will quickly start becoming corporate accounts? Perhaps they pay Twitter to ensure their tweets show up in the timelines of a specific demographic.  It makes total sense when your users are nothing but a stream of revenue.

Making Twitter Usable Again

I mentioned some things the other day that I think Twitter needs to do to make their service usable for power users again.  I wanted to expand on them a bit here:

The Unfollow Bug – Twitter has a problem with keeping followers.  For some reason, your account will randomly unfollow a user with no notification.  You usually don’t figure it out until you want to send them a DM or notice that they’ve unfollowed you and mention it.  It’s an irritating bug that’s been going on for years with no hope of resolution.  Twitter needs to sort this one out quickly.  As a side note, if you run a service that monitors people that have unfollowed you, consider adding a digest of users that I have unfollowed this week.  If the list doesn’t match those that I purposely unfollowed, at least you know when you’ve been hit by this bug.  (That digest is nothing more than a set difference between two snapshots of your following list; see the sketch below.)
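
A quick Python sketch of that digest, with made-up handles standing in for real API data:

```python
# Two snapshots of the accounts you follow, taken a day apart.
yesterday = {"alice", "bob", "carol", "dave"}
today = {"alice", "carol"}

dropped = yesterday - today      # everyone who vanished from your following list
intentional = {"dave"}           # unfollows you actually performed yourself

bug_suspects = dropped - intentional
print("Possible unfollow-bug victims:", sorted(bug_suspects))  # ['bob']
```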

Links in Direct Messages – Twitter disabled the ability to send a link in a direct message a few months ago.  Their argument was that it cut down on spam.  The real reason was Twitter’s attempt to turn DMs into an instant messaging platform.  Twitter experimented with a setting that enabled DMs from users you don’t follow.  They pulled it before it went live due to user feedback.  One of the arguments was that spam accounts could bombard you with URLs that led to phishing attacks and other unsavory things.  Twitter responded by disabling links in DMs even though they removed the feature it was intended to protect.  It’s time for Twitter to give us this feature back.

Token Limits – This “feature” has to go.  Restricting 3rd party clients because they exist destroys the capabilities of your power users. I use a client because it gives me easy access to features I use all the time, like conversation views and muting.  I also don’t like sitting on the garish Twitter website and constantly refreshing to see new tweets.  I’d rather use some other client. Twitter has a love/hate relationship with non-official clients.  Mostly because those clients strip out ads and sponsored tweets.  They don’t let Twitter earn money from them.  Which is why Twitter is stamping them out for “replicating official client features” left and right.  Curiously enough, I’ve never heard about HootSuite being hit with user token limits.  But considering that a lot of Twitter’s favorite celebrities use it (or at least their social media teams do), I’m not shocked they’re on the exempt list.


Tom’s Take

I still find Twitter a very useful tool.  It’s not something that can just be set into automatic and left alone.  It takes curation and attention to make it work for you.  But it also needs help from Twitter’s side.  Instead of focusing on ways to make me see things I don’t care about from people I don’t want to follow, how about making your service work the way I want it to work?  I’m more likely to use (and suggest) a service that works.  I barely check Facebook anymore because I’m constantly “fixing” their Top Posts algorithm.  Don’t turn your service into something I spend most of my time fixing.

The Great Tech Reaving

It seems as though the entire tech world is splitting up.  HP announced they are splitting into HP, Inc., which keeps the Personal Systems Group, and HP Enterprise, which takes the rest of the enterprise business.  Symantec is forming Veritas into a separate company as it focuses on security and leaves the backup and storage pieces to the new group.  IBM completed the sale of their x86 server business to Lenovo.  There are calls for EMC and Cisco to split as well.  It’s like the entire tech world is breaking up right before the prom.

Acquisition Fever

The Great Tech Reaving is a logical conclusion to the acquisition rush that has been going on throughout the industry for the past few years.  Companies have been amassing smaller companies like trading cards.  Some of the acquisitions have been strategic.  Buying a company that focuses on a line of work similar to the one you are working on makes a lot of sense.  For instance, EMC buying XtremIO to help bolster flash storage.

Other acquisitions look a bit strange.  Cisco buying Flip Video.  Yahoo buying Tumblr. There’s always talk around these left-field mergers.  Is the CEO looking for synergy? Is there a hidden play that we’re unaware of? Sometimes that kind of thinking pays off.  Other times you end up with Zimbra.  More often than not, the company ends up writing down the assets of the acquired company and taking very little from the purchase.  Maybe not as big as the Autonomy write-down, but even getting rid of Flip can make waves.

It makes a person wonder what the point of an acquisition is if it’s just going to wind up being an accounting charge later.  Is it a tax shelter?  A way to use up outstanding cash?  Maybe even a way to buy a particularly good developer and fold them into your organization to keep them out of a competitor’s hands?  The reasons are myriad but it appears that the fever is dying down.  And that might end up hurting innovation in the long term.

This Is Not An Exit Strategy

Think about the startup out there making a hot new technology.  They had their heart set on getting bought by a bigger company in the market.  Now, they just watched that company split off half of their business into a new company.  Cash is hard to find for a new acquisition.  Now the startup has to find a different way to monetize things.  Should they redouble their efforts to market the product? Get new investors? Go public?

I’ve said before that pinning your hopes on getting purchased isn’t the best way to run a business.  It’s like betting all your hopes on getting the winning numbers in the lottery.  It might happen, but the odds are against it.  Perhaps the end result of a market full of split companies will be a reevaluation of the idea of an exit strategy.  Rather than building a business for the sole purpose of being bought, entrepreneurs will start building businesses to make products and sell them.  It’s a radical idea, but not so radical as to be unbelievable.  Just ask Hewlett and Packard.  Or Jobs and Wozniak. Or anyone else that had a business plan instead of an exit strategy.


Tom’s Take

Companies can be too big.  IBM has sold off most of what made it IBM.  Symantec and HP are in the process.  The next domino to fall will be EMC.  Then Cisco.  After that, the landscape will look much different.  But in a good way.  It’s like a stock split.  The same amount of knowledge is out there.  It’s just held differently.  That’s good for the industry because it forces the status quo to change.  New alliances, new partnerships, and new synergies can be found by upsetting the apple cart now and then.