About networkingnerd

Tom Hollingsworth, CCIE #29213, is a network engineer who works with Cisco, HP, Microsoft, VMware, and various other technologies.  Tom has been in the IT industry since 2002, and has been a nerd since he first drew breath.

Do We Need To Redefine Open?


There’s a new term floating around that seems to be confusing people left and right.  It’s being used to describe a methodology as well as a marketing position, and many of the people using it don’t really know what it means.  This isn’t the first time that’s happened, either.  Let’s look at the word “open” and why it has become so confusing.

Talking Beer

For those at home who are familiar with Linux, “open” wasn’t the first term to come to mind.  “Free” is another word that has been used in the past with a multitude of loaded meanings.  The original idea around “free” in relation to the Open Source movement is that the software is freely available.  There are no restrictions on use, and the source is always available.  The source code for the Linux kernel can be searched and viewed at any time.

Free describes the fact that the Linux kernel is available for no cost.  That’s great for people who want to try it out.  It’s not so great for companies that want to build a business around it, yet Red Hat has managed to do just that.  How can they sell something that doesn’t cost anything?  It’s because they keep the notion of free sharing of code alive while charging for support and for special packages that interface with popular non-free software.

The dichotomy between unencumbered software and no-cost software is so confusing that the movement coined a phrase to describe it:

Free as in freedom, not free as in beer.

When you talk about freedom, you are unrestricted.  You can use the software as the basis for anything.  You can rewrite it to your heart’s content.  Those are your rights with free software.  When you talk about free beer, you set the expectation that whatever you create will be available at no charge.  Many popular Linux distributions are available at no cost.  That’s like getting beer for nothing.

Open, But Not Open Open

The word “open” is starting to take on aspects of the “free” argument.  Originally, the meaning of open came from the Open Source community.  Open Source means that you can see everything about the project.  You can modify anything.  You can submit code and improve something.  Look at the OpenDaylight project as an example.  You can sign up, download the source for a given module, and start creating working code.  That’s what Brent Salisbury (@NetworkStatic) and Matt Oswalt (@Mierdin) are doing to great effect.  They are creating the network of the future and allowing the community to do the same.

But “open” is being redefined by vendors.  Open for some means “you can work with our software via an API, but you can’t see how everything works.”  This is much like the binary-only NVIDIA driver.  The proprietary code is pre-compiled and available to download for free, but you can’t modify the source at all.  While it works with open source software, it’s not open.

A conversation I had during Wireless Field Day 7 drove home the idea of this new “open” in relation to software defined networking.  Vendors tout open systems to their customers.  They standardize on northbound interfaces that talk to orchestration platforms and have API support so other systems can call them.  But the southbound interface is proprietary.  That means that only their controller can talk to the network hardware attached to it.  Many of these systems have “open” in the name somewhere, as if to project the idea that they work with any mix of components.
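
If you want to see what that one-sided “open” looks like in practice, here’s a quick sketch.  The controller URL, token, and flow payload are all invented for illustration and don’t belong to any real vendor’s API:

```python
import requests

# Hypothetical northbound REST call -- the endpoint, token, and payload
# are invented for illustration, not taken from any real controller.
CONTROLLER = "https://controller.example.com/api/v1"
HEADERS = {"Authorization": "Bearer <token>"}

# The "open" part: anyone can push policy through the northbound API.
resp = requests.post(
    f"{CONTROLLER}/flows",
    headers=HEADERS,
    json={"match": {"dst_ip": "10.1.1.0/24"}, "action": "drop"},
    timeout=10,
)
resp.raise_for_status()

# The closed part: the southbound protocol that turns this policy into
# hardware state is proprietary, so only this vendor's switches can be
# programmed by the controller.
```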

This new “open” definition of having proprietary components with an API interface feels very disingenuous.  It also makes for some very awkward conversations:

$VendorA: Our system is open!

ME: Since this is an open system, I can connect my $VendorB switch and get full functionality from your controller, right?

$VendorA: What exactly do you mean by “full”?


Tom’s Take

Using “open” to market these systems is wrong.  Telling customers that you are “open” because your other equipment can program things through a narrow API is wrong.  But we don’t have a word to describe this new idea of “open”.  It’s not exactly closed.  Perhaps we can call it something else.  Maybe “ajar”.  That might make the marketing people a bit upset.  “Try our new AjarNetworking controller.  As open as we wanted to make it without closing everything.”

“Open” will probably be dominated by marketing in the next couple of years.  Vendors will try to tell you how well they interoperate with everyone.  And I will always remember how open protocols like SIP are, and how everyone uses that openness against them.  If we can’t keep the definition of “open” clean, we need to find a new term.

Maybe MU-MIMO Matters


As 802.11ac becomes more widely deployed, I find myself looking to the next wave and the promise it brings.  802.11ac Wave 1 really isn’t that groundbreaking to me.  It’s an incremental improvement on 802.11n.  Wave 1 really only serves to wake up the manufacturers to the fact that 5 GHz radios are needed on devices now.  The real interesting stuff comes in Wave 2.  Wider channels, more spatial streams, and a host of other improvements are on the way.  But the most important one for me is MU-MIMO.

Me Mi Mo Mum

Multi-user Multiple-Input Multiple-Output (MU-MIMO) is a huge upgrade over the MIMO specification in 802.11n.  MIMO allowed access points to multiplex several spatial streams on the same channel into one higher-rate data stream.  It accomplished this via Spatial Division Multiplexing (SDM).  This means that more antennas on an access point are a very good thing.  It increases the throughput above and beyond what could be accomplished with just a single antenna.  But it does have a drawback.

Single-user MIMO can only talk to one client at a time.  All the work necessary to multiplex those data streams requires the full attention of the access point for the period of time that the client is transmitting.  That means that crowded wireless networks can see reduced throughput because of shorter transmit windows.  And what wireless network isn’t crowded today?

MU-MIMO solves this problem by using additional antenna capacity to transmit multiple data streams at once.  If the AP has spare antennas, it can send a separate stream to another client in the same transmit opportunity.  This means an effective increase in throughput for certain devices even though the signal strength isn’t as high (thanks to FCC power limits).  Here’s a great video from Wireless Field Day 7 that explains the whole process:

What I found most interesting in this video is two-fold.  First, MU-MIMO is of great benefit to client devices that don’t have the maximum number of spatial streams.  Laptops are going to have three-stream and four-stream cards, so their MU-MIMO benefit is minimal.  However, the majority of wireless devices on the market are mobile.  Tablets and phones don’t have multiple spatial streams, usually just one (or in some cases two).  They do this to improve battery life.  MU-MIMO will help them out considerably.
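
To see how much that helps, it’s worth sketching the airtime math.  This is my own back-of-the-envelope model, not anything from the video: the 200 Mbps per-stream rate is an assumed round number, and sounding overhead and per-client signal quality are ignored:

```python
PER_STREAM_MBPS = 200  # assumed PHY rate for one spatial stream (hypothetical)

def su_per_client(clients: int) -> float:
    """SU-MIMO: the AP talks to one single-stream client at a time,
    so N clients split the airtime N ways."""
    return PER_STREAM_MBPS / clients

def mu_per_client(clients: int, ap_streams: int) -> float:
    """MU-MIMO: the AP serves up to ap_streams single-stream clients
    in the same transmit opportunity."""
    groups = -(-clients // ap_streams)  # ceiling division: transmit rounds needed
    return PER_STREAM_MBPS / groups

print(su_per_client(4))     # 50.0  -- four phones each get a quarter of the airtime
print(mu_per_client(4, 4))  # 200.0 -- a 4-stream AP serves all four at once
```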

The second takeaway is that devices without a high number of receive chains will make the AP do more work.  That’s because the AP has to process the transmit stream and prevent the extra streams from being transmitted toward the wrong client.  That’s going to consume processing power.  It means you’ll need an AP with a lot of horsepower, or a control system that can crunch those numbers for you.

When you consider that a large number of APs in a given system are sitting idle for a portion of the time, it would be nice to be able to use that spare capacity for MU-MIMO processing.  In addition, having those extra antennas available to help with MU-MIMO sounding would be nice too.  There’s already been some work done on the research side of things.  Maybe we’ll soon see the ability to take the idle processing power of a wireless network and use it to boost client throughput as needed.



Tom’s Take

Wireless never ceases to amaze me.  When I started writing this article, I thought I knew how MU-MIMO worked.  Thankfully, Cisco set me straight at Wireless Field Day 7.  MU-MIMO is going to help clients that can’t run high-performance networking cards.  The kinds of clients that are being sold as fast as possible today.  That means that the wireless system is already being developed to support a new kind of wireless device.

A device that doesn’t have access to limitless power from a wall socket or a battery that lasts forever.  There’s been talk of tablets with increased spatial streams for a while, but the cruel mistress of battery life will always win in the end.  That’s why MU-MIMO matters the most. Because if the wireless device can’t get more powerful, maybe it’s time for the wireless network to do the heavy lifting.

IPv6 and the VCR

IPv6 isn’t a fad.  It’s not a passing trend that will be gone tomorrow.  When Vint Cerf is on a nationally televised non-technical program talking about IPv6, that’s about as real as it’s going to get.  Add in the final depletion of IPv4 address space from the RIRs and you will see that IPv6 is a necessity.  Yet there are still people in tech who deny the increasing need for IPv6 awareness.  The same people who say it’s not ready or that it costs too much.  It reminds me of a different argument.

IPvcr4

My house is full of technology.  Especially when it comes to movie watching.  I have DVRs for watching television, a Roku for other services, and apps on my tablet so the kids can watch media on demand.  I have a DVD player in almost every room of the house.  I also have a VCR.  It serves one purpose – to watch two movies that are only available on a video tape.  Those two movies are my wedding and the birth of my oldest son.

At first, the VCR stayed connected to our television all the time.  We owned some movies on VHS that we didn’t have DVD or digital copies of.  As time wore on, those VHS movies were replaced by digital means.  Soon, the VCR only served to enable viewing of the aforementioned personal media.  We couldn’t get that on a DVD from just anywhere.  But the VCR stayed connected for those occasions when the movies needed to be watched.  Eventually, it was too much of a hassle to reconnect the VCR, even for these family films.  In the end, we figured out how to hook up the VCR and record that content into a digital format that we could watch from non-analog sources.

How does this compare to IPv6?  Most people assume that the transition from IPv4 to IPv6 will be sudden and swift.  They will wake up one morning and find that all their servers and desktops are running global IPv6 addresses and IPv4 is a distant memory.  In fact, nothing could be further from the truth.  IPv4 and IPv6 can coexist on the same system.  IPv6 can be implemented alongside IPv4 without disrupting connectivity.  As in the above example, you can watch DVDs and VHS tapes on the same TV without disruption.
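
The dual-stack idea is easy to see in code.  Here’s a minimal Python sketch, a simplified take on the fallback behavior rather than a full Happy Eyeballs implementation, that prefers IPv6 but still reaches IPv4-only hosts:

```python
import socket

def connect_dual_stack(host: str, port: int) -> socket.socket:
    """Try IPv6 first, then fall back to IPv4 -- both stacks coexist."""
    infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    # On a dual-stacked host, getaddrinfo returns both AF_INET6 and
    # AF_INET results; sort IPv6 ahead of IPv4.
    infos.sort(key=lambda info: 0 if info[0] == socket.AF_INET6 else 1)
    last_err = None
    for family, socktype, proto, _name, addr in infos:
        try:
            sock = socket.socket(family, socktype, proto)
            sock.connect(addr)
            return sock
        except OSError as err:
            last_err = err
    raise last_err or OSError("no usable address family")

# A v6-capable destination gets IPv6; an IPv4-only legacy host still works.
conn = connect_dual_stack("www.example.com", 80)
print(conn.family)  # AF_INET6 where available, AF_INET otherwise
conn.close()
```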

As IPv4 address availability is restricted, many engineers will find themselves scrambling to replace existing systems and deploy new ones without access to IPv4 addresses.  That’s when the real cutover begins.  As these new systems are brought online, IPv6 will be the only address space available.  These systems will be connected with IPv6 first, and provisions will be made for them to connect to other systems via IPv4.  Eventually, IPv4 will only exist for legacy systems that can’t be upgraded or migrated.  Just like the VCR above, it will only be needed for a handful of operations.

We never have to reach a point where IPv4 will be completely eliminated.  IPv4-only hosts will still be able to connect to one another so long as the global IPv4 routing table is available.  It may be reduced in size as IPv6 gains greater adoption, but it will never truly go away.  Instead, it will be like IBM’s SNA protocol.  Relevant to a few isolated hosts at best.  The world will move on and IPv6 will be the first choice for connectivity.


Tom’s Take

I must admit that this idea grew out of a conversation with Ed Horley (@EHorley).  The evangelism that he’s doing with both the CAv6TF and the RMv6TF is unparalleled.  They are doing their best to get the word out about IPv6 adoption.  I think it’s important for people in tech to know that IPv6 isn’t displacing IPv4.  It’s extending network functionality.  It’s granting a new lease on life to systems desperately in need of address space.  And it allows IPv4-only systems to survive a little while longer.  You don’t have to watch the same old VHS tapes every day.  But you don’t have to leave the IPvcr4 hooked up all the time either.

The Pain of Licensing

Frequent readers of my blog and Twitter stream may have noticed that I have a special loathing in my heart for licensing.  I’ve been subjected to some of the craziest runarounds because of licensing departments.  I’ve had to yell over the phone to get something taken care of.  I’ve had to produce paperwork so old it was yellowed at the edges.  Why does this have to be so hard?

Licensing is a feature tracking mechanism.  Manufacturers want to know what features you are using.  It comes back to tracking research and development.  A lot of time and effort goes into making the parts and pieces of a product.  Many different departments put work into something before it goes out the door.  Vendors need a way to track how popular a given feature might be to customers.  This allows them to know where to allocate budgets for the development of said features.

Some things are considered essential.  These core pieces are usually allocated to a team that gets the right funding no matter what.  Or the features are so mature that there really isn’t much that can be done to drive additional revenue from them.  When’s the last time someone made a more streamlined version of OSPF?  But there are pieces that can be attached to OSPF that carry more weight.

Rights and Privileges

Here’s an example from Cisco.  In IOS 15, OSPF is considered part of the core IOS functionality.  You get it no matter what on a router.  You have to pay for an extra license on a switch, but that’s not part of this argument.  OSPF is a mature protocol, even in version 3, which enables IPv6 support.  If you have OSPF for IPv4, you have it for IPv6 as well.  One of the best practices for securing OSPF against intrusion is to authenticate your area 0 links.  This is something that should be considered core functionality.  And with IPv4, it is.  The MD5 authentication mechanism is built into the core OS.  But with IPv6, the IPSec functionality needed to authenticate the links has to be purchased as a separate license upgrade.  That’s because IPSec is part of the security license bundle.

Why the runaround for what is considered a best-practice, core function?  It’s because IPv6 uses a different mechanism.  One that has more reach than simple MD5 authentication.  In order to recoup the investment that the IPSec security team has put in, Cisco won’t just give away that functionality.  Instead, it needs to be tracked by a license.  The R&D work from that team needs to be recovered somehow.  And so you pay extra for something Cisco says you should be doing anyway.  That’s the licensing that upsets me so.
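
If it helps to picture how that gating works, here’s a toy sketch.  The mapping is my own simplification, borrowing the real ipbase and securityk9 bundle names from IOS 15:

```python
# Toy feature gate, loosely mirroring the example above: MD5 auth ships
# in the base image, while OSPFv3 IPSec authentication hides behind the
# extra-cost security bundle.
installed_licenses = {"ipbase"}

FEATURE_LICENSE = {
    "ospf_md5_auth": "ipbase",          # core functionality, always there
    "ospfv3_ipsec_auth": "securityk9",  # best practice, but pay up first
}

def enabled(feature: str) -> bool:
    return FEATURE_LICENSE[feature] in installed_licenses

print(enabled("ospf_md5_auth"))      # True
print(enabled("ospfv3_ipsec_auth"))  # False until you buy the bundle
```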

License Unit Report

How do we fix it?  The money problem is always going to be there.  Vendors have to find a way to recapture revenue for R&D while at the same time not making customers pay for things they don’t need, like advanced security or application licenses.  That’s the necessary evil of having affordable software.  But there is a fix for the feature tracking part.

We have the analytics capability with modern software to send anonymized usage statistics to manufacturers and vendors about which feature sets are being used.  Companies can track how popular IPSec is versus MD5, or make other such feature comparisons.  The software doesn’t have to say who you are, just what you are using.  That would allow budgets to be allocated exactly where they should be, instead of guessing based on who bought the whole advanced communications license just for Quality of Service (QoS) reporting.
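
Here’s a rough sketch of how lightweight that reporting could be.  The collector URL and feature names are invented, and the random report ID is the only identifier that ever leaves the box:

```python
import json
import urllib.request
import uuid
from collections import Counter

feature_hits = Counter()

def record(feature: str) -> None:
    """Bump a counter each time a feature is exercised."""
    feature_hits[feature] += 1

record("ospf_md5_auth")
record("ospfv3_ipsec_auth")
record("qos_reporting")

report = {
    # Random per-report ID: identifies the report, not the customer.
    "report_id": str(uuid.uuid4()),
    "features": dict(feature_hits),
}

req = urllib.request.Request(
    "https://telemetry.example.com/v1/usage",  # hypothetical collector
    data=json.dumps(report).encode(),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # uncomment to actually send the report
```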



Tom’s Take

Licensing is like NAT.  It’s a necessary evil of the world we live in.  People won’t pay for functionality they don’t use.  At the same time, they won’t use functions they have to pay extra for if they think it should have been included.  It’s a circular problem that has no clear answer.  And that’s the necessary evil of it all.

But just because it’s necessary doesn’t mean we can’t make it less evil.  We can split the reporting pieces out thanks to modern technology.  We can make sure the costs to develop these features get driven down in the future because there are accurate statistics about usage.  Every little bit helps make licensing less of a hassle than it currently is.  It may not go away totally, but it can be marginalized to the point where it isn’t painful.

I Can’t Drive 25G


The race to make things just a little bit faster in the networking world has heated up in recent weeks thanks to the formation of the 25Gig Ethernet Consortium.  Arista Networks, along with Mellanox, Google, Microsoft, and Broadcom, has decided that 40Gig Ethernet is too expensive for most data center applications.  Instead, they’re offering up an alternative in the 25Gig range.

This podcast with Greg Ferro (@EtherealMind) and Andrew Conry-Murray (@Interop_Andrew) does a great job of breaking down the technical details and the reasoning behind 25Gig Ethernet.  In short, the current 10Gig connection is made of four multiplexed 2.5Gig connections.  To get to 25Gig, all you need to do is overclock those connections a little.  That’s not unprecedented, as 40Gig Ethernet accomplishes this by overclocking them to 10Gig, albeit with different optics.  Aside from a technical merit badge, one has to ask “Why?”

High Hopes

As always, money is the factor here.  The 25Gig Consortium is betting that you don’t like paying a lot of money for your 40Gig optics.  They want to offer an alternative that is faster than 10Gig but cheaper than the next standard step up.  By giving you a cheaper option for things like uplinks, they free up money for you to spend on other things.  Probably on more switches, but that’s beside the point right now.

The other thing to keep in mind, as mentioned on the Coffee Break podcast, is that the cable runs for these 25Gig connectors will likely be much shorter.  Short term, that won’t mean much.  There aren’t as many long-haul connections inside of a data center as one might think.  A short hop to the top-of-rack (ToR) switch, then another hop to the end-of-row (EoR) or core switch.  That’s really about it.  One of the arguments against 40/100Gig is that it was designed for carriers for long-haul purposes.  25Gig can give you about 60% of the speed of that link at a much lower cost.  You aren’t paying for functionality you likely won’t use.
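
The argument really comes down to cost-per-gigabit arithmetic.  The dollar figures below are placeholders I picked to make the math visible, not real street prices:

```python
# (gigabits per second, assumed optic price in dollars -- illustrative only)
optics = {"10Gig": (10, 300), "25Gig": (25, 500), "40Gig": (40, 1600)}

for name, (gbps, price) in optics.items():
    print(f"{name}: ${price / gbps:.0f} per Gbps")

# Even with made-up prices, the pitch is visible: 25Gig wins on cost per
# gigabit while still delivering 25/40 = 62.5% of a 40Gig link's speed.
print(f"25Gig is {25 / 40:.1%} of 40Gig")
```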

Heavy Metal

Is this a good move?  That depends.  There aren’t any 25Gig cards for servers right now, so the obvious use for these connectors will be uplinks.  Uplinks that can only be used by switches that share 25Gig (and later 50Gig) connections.  As of today, that means you’re using Arista, Dell, or Brocade.  And that’s when the optics and switches actually start shipping.  I assume that existing switch lines will be able to add support for these links via firmware upgrades, but that’s anyone’s guess right now.

If Mellanox and Broadcom do eventually start shipping cards to upgrade existing server hardware to 25Gig then you’ll have to ask yourself if you want to pursue the upgrade costs to drive that little extra bit of speed out of the servers.  Are you pushing the 10Gig links in your servers today?  Are they the limiting factor in your data center?  And will upgrading your servers to support twice the bandwidth per network connection help alleviate your bottlenecks? Or will they just move to the uplinks on the switches?  It’s a quandary that you have to investigate.  And that takes time and effort.



Tom’s Take

The very first thing I ever tweeted (4 years ago):

We’ve come a long way from ratified standards to deployment of 40Gig and 100Gig.  Uplinks in crowded data centers are going to 40Gig.  I’ve seen a 100Gig optic in the wild running a research network.  It’s interesting to see that there is now a push to get to a marginally faster connection method with 25Gig.  It reminds me of all the competing 100Mbit standards back in the day.  Every standard was close but not quite the same.  I feel that 25Gig will get some adoption in the market.  So now we’ll have to choose from 10Gig, 40Gig, or something in between to connect servers and uplinks.  It will either get sent to the standards body for ratification or die on the vine with no adoption at all.  Time will tell.


Security Dessert Models


I had the good fortune last week to read a great post from Maish Saidel-Keesing (@MaishSK) that discussed security models in relation to candy.  It reminded me that I’ve been wanting to discuss security models in relation to desserts.  And since Maish got me hungry for a Snickers bar, I decided to lay out my ideas.

When we look at traditional security models of the past, everything looks similar to crème brûlée.  The perimeter is very crunchy, but it protects a soft interior.  This is the predominant model in a world where the “bad guys” all live outside of your network.  It works when you know where your threats are located.  This model is still in use today at companies that explicitly trust their user base.

The crème brûlée model doesn’t work when you have large numbers of guest users or BYOD-enabled users.  If one of them brings in something that escapes into the network, there’s nothing to stop it from wreaking havoc everywhere.  In the past, this has caused massive virus outbreaks and penetrations from things like malicious USB sticks dropped in the parking lot and activated on “trusted” computers internally.

A Slice Of Pie

A more modern security model looks more like an apple pie.  Rather than trusting everything inside the network, the smart security team will realize that users are as much of a threat as the “bad guys” outside.  The crunchy crust on top will also be extended around the whole soft area inside.  Users that connect tablets, phones, and personal systems will face a very aggressive security posture designed to prevent access to anything that could cause problems in the network (and data center).  This model is great when you know that the user base is not to be trusted.  I wrote about it over a year ago on the Aruba Airheads community site.

The apple pie model does have some drawbacks.  While it’s a good idea to isolate your users outside the “crust”, you still have nothing protecting your internal systems if a rogue device or “trusted” user manages to get inside the perimeter.  The pie model will protect you from careless intrusions but not from determined attackers.  To fix that problem, you’re going to have to protect things inside the network with a crunchy shell as well.

Melts In Your Firewall, Not In Your Hand

Maish was right on when he talked about M&Ms being a good metaphor for security.  They also do a great job of visualizing the untrusted user “pie” model.  But the ultimate security model will end up looking more like an M&M cookie.  It will have a crunchy edge all around.  It will be “soft” in the middle.  And it will also have little crunchy edges around the important chocolate parts of your network (or data center).  This is how you protect the really important things like customer data.  You make sure that even getting past the perimeter won’t grant access.  This is the heart of “defense in depth”.

The M&M cookie model isn’t easy by any means.  It requires you to identify assets that need to be protected.  You have to build the protection in at the beginning.  No ACLs that permit unrestricted access.  The communications between untrusted devices and trusted systems need to be kept to the bare minimum necessary.  Too many M&Ms in a cookie makes for a bad dessert.  So too must you identify the critical systems that need to be protected and group them together to minimize configuration effort and attack surface.
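
If it helps to picture the cookie as a policy table, here’s a toy sketch.  The zones, assets, and rules are all invented; the point is the default-deny posture with explicit, minimal allows around the chocolate:

```python
# Toy "M&M cookie" segmentation model -- zone names, assets, and rules
# are invented for illustration, not a real firewall configuration.
zones = {
    "untrusted": ["guest-wifi", "byod"],
    "inside": ["user-lan", "app-servers"],
    "chocolate": ["customer-db", "payroll"],  # crunchy-shelled centers
}

# Default deny everywhere; list only the bare-minimum flows.
allowed = {
    ("inside", "chocolate"): {"tcp/1433"},  # app servers to the database only
    ("untrusted", "inside"): set(),         # guests reach nothing internal
}

def permitted(src: str, dst: str, service: str) -> bool:
    """Anything not explicitly allowed between two zones is dropped."""
    return service in allowed.get((src, dst), set())

print(permitted("inside", "chocolate", "tcp/1433"))     # True
print(permitted("untrusted", "chocolate", "tcp/1433"))  # False: getting past
                                                        # the crust isn't enough
```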



Tom’s Take

Security is a world of protecting the important things while making sure they can still be used by people.  If you err on the side of too much caution, you have a useless system.  If you are too permissive, you have a security risk.  Balance is the key.  Just like the recipe for cookies, pie, or even crème brûlée, the proportion of ingredients must be just right to make a tasty dessert.  In security you have to have the same mix of permissions and protections.  Otherwise, the whole thing falls apart like a deflated soufflé.


CCNA Data Center on vBrownBag

Sometimes when I’m writing blog posts, I forget how important it is to start off on the right foot.  For a lot of networking people just starting out, discussions about advanced SDN topics and new theories can seem overwhelming when you’re trying to figure out things like subnetting or even what a switch really is.  While I don’t write about entry-level topics often, I had the good fortune recently to talk about them on the vBrownBag podcast.

For those that may not be familiar, vBrownBag is a great series that goes into depth on a number of technology topics.  Historically, vBrownBag has been focused on virtualization topics.  Now, with virtual networking becoming more integrated into virtualization, the vBrownBag organizers asked me if I’d be willing to jump on and talk about the CCNA Data Center.  Of course I took the opportunity to lend my voice to what will hopefully be the start of some promising data center networking careers.

These are the two videos I recorded.  The vBrownBag is usually a one-hour show.  I somehow managed to go an hour and a half on both.  I realized there is just so much knowledge that goes into these certifications that I couldn’t cover it all even if I had six hours.

Also, in the midst of my preparation, I found a few resources that I wanted to share with the community for them to get the most out of the experience.

Chris Wahl’s CCNA DC course from PluralSight – This is worth the time and investment for sure.  It covers DCICN in good depth, and his work with NX-OS is very handy if you’ve never seen it before.

Todd Lammle’s NX-OS Simulator – If you can’t get rack time on a real Nexus, this is pretty close to the real thing.  You should check it out even if only to get familiar with the NX-OS CLI.

NX-OS and Nexus Switching, 2nd Edition – This is more for post-grad work.  Ron Fuller (@CCIE5851) helped write the definitive guide to NX-OS.  If you are going to work on Nexus gear, you need to keep a copy of this handy.  Be sure to use the code “NETNERD” to get it for 30% off!



Tom’s Take

Never forget where you started.  The advanced topics we discuss take a lot for granted in the basic knowledge department.  Always be sure to give a little back to the community in that regard.  The network engineer you help shepherd today may end up being the one that saves your job in the future.  Take the time to show people the ropes.  Otherwise you’ll end up hanging yourself.

Fixing E-Rate – SIP

I was talking to my friend Joshua Williams (@JSW_EdTech) about our favorite discussion topic: E-Rate.  I’ve written about E-Rate’s slow death and how it needs to be modernized.  One of the things that Joshua mentioned to me is a recent speech from Commissioner Ajit Pai in front of the FCC.  The short, short version of this speech is that the esteemed commissioner doesn’t want to increase the pool of money paid from the Universal Service Fund (USF) into E-Rate.  Instead, he wants to do away with “wasteful” services like wireline telephones and web hosting.  Naturally, when I read this my reaction was a bit pointed.

Commissioner Pai has his heart in the right place.  His staff gave him some very good notes about his interviews with school officials.  But he’s missed the boat completely about the “waste” in the program and how to address it.  His idea of reforming the program won’t come close to fixing the problems inherent in the system.

Voices Carry

Let’s look at the phone portion for a moment.  Commissioner Pai says that E-Rate spends $600 million per year on funding wireline telephone services.  That is a pretty big number.  He says that the money we sink into phone services should go to broadband connections instead.  Because the problem in schools apparently isn’t decaying phone systems or a lack of wireless or even old architecture.  It’s slow Internet.  Never mind that broadband circuits are part of the always-funded Priority One pool of money.  Or that getting the equipment required to turn up the circuit is part of Priority Two.  No, the way to fix the problem is to stop paying for phones.

Commissioner Pai obviously emails and texts the principals and receptionists at his children’s schools.  He must have instant messaging communications with them regularly.  Who in their right mind would call a school?  Oh, right.  Think of all the reasons that you might want to call a school.  My child forgot their sweater.  I’m picking them up early for a doctor’s appointment.  The list is virtually endless.  Telling the school that you’re no longer paying for phone service is likely to get you yelled at.  Or run out of town on a rail.

What about newer phone technologies?  Services that might work better with those fast broadband connections that Commissioner Pai says are sorely needed?  What about SIP trunking?  It seems like a no-brainer to me.  Take some of the voice service money and earmark it for new broadband connections.  However, that money can only be used for a faster broadband connection if the telephone service is converted to a SIP trunk.  That would redirect the funding to exactly where it’s needed.

Sure, it’s likely going to require an upgrade of phone gear to support SIP and VoIP in general.  Yes, some rural phone companies are going to be forced to upgrade their circuits to support SIP.  But given that the major telecom companies have already petitioned the FCC to do away with wireline copper services in favor of VoIP, it seems that the phone companies would be on board with this.  It fixes many of the problems while still preserving the need for voice communications to the schools.

This is a win for the E-Rate integrators that are being targeted by Commissioner Pai’s statement that it’s too difficult to fill out E-Rate paperwork.  Those same integrators will be needed to take legacy phone systems and drag them kicking and screaming into the modern era.  This kind of expertise is what E-Rate should be paying for.  It’s the kind of specialized knowledge that school IT departments shouldn’t need to have on staff.


Tom’s Take

I spent a large part of my career implementing voice systems for education.  Many times I wondered why we would hook up a state-of-the-art CallManager to a cluster of analog voice lines.  The answer was almost always about money.  SIP was expensive.  SIP required a faster circuit.  Analog was cheap.  It was available.  It was easy.

Now schools have to deal with the real possibility of losing funding for E-Rate voice service because one of the commissioners thinks that no one uses voice any more.  I say we should take the money he wants to save and reinvest it into modernizing phone systems for all E-Rate eligible schools.  Doing so would go a long way toward removing the increasing maintenance costs for legacy phone systems as well as retiring circuits that require constant attention.  That would increase the pool of available money in future funding years.  The answer isn’t to kill programs.  It’s to figure out why they cost so much and find ways to make them more efficient.  And if you don’t think that’s what’s needed, Commissioner Pai, give me a call.  I still have a working phone.

Is It Time To Eliminate Long Distance?

“What’s your phone number?”

It seems like an innocuous question.  But what are you expecting?  Phone numbers in the US can vary greatly in length depending upon where you live.  I grew up in a small town.  My first telephone line was a party line.  Because there were four families on the same line, phone numbers didn’t mean much beyond getting you to the general location.  When we moved into town, we finally got our own telephone line.  But the number was only four digits, like a PBX extension.  Since all phones in town had the same prefix, all calls were switched via the last four digits.  The day finally came when we all had to dial the prefix along with the four-digit number.  Now we were up to seven.

If you ask someone their phone number, you’re likely to get any one of several number combinations.  Seven digits, ten digits, or even eleven digits for those that do international business.  Computer systems can be coded to automatically fill in the area code for small stores that need contact information.  Other nationwide chains ask for the area code every time.  And those international business people always start their number with “+1”, which may not even be an option on the system.  How do we standardize?
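
One way a system could standardize is to normalize every number to E.164 on input.  Here’s a quick sketch; the 405 default is just my home area code from the next section, standing in for whatever locale a real system would use:

```python
import re

def to_e164(raw: str, default_area_code: str = "405") -> str:
    """Normalize the 7-, 10-, and 11-digit formats people actually
    give into one canonical +1 E.164 number."""
    digits = re.sub(r"\D", "", raw)
    if len(digits) == 7:                     # local number: prepend area code
        digits = default_area_code + digits
    if len(digits) == 10:                    # national: prepend country code
        digits = "1" + digits
    if len(digits) == 11 and digits.startswith("1"):
        return "+" + digits
    raise ValueError(f"can't normalize {raw!r}")

print(to_e164("555-0123"))         # +14055550123
print(to_e164("(212) 555-0123"))   # +12125550123
print(to_e164("+1 212 555 0123"))  # +12125550123
```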

Cracking The Code

Part of our standardization problem comes from the area codes we’ve been using for sixty years.  Originally conceived as a way to regionalize telephone exchanges, area codes have become something of a quandary.  In larger cities, we use 10-digit dialing because of overlay area codes.  Rather than using one code for all the users in a given area, the dial plan has grown so large that more codes were needed to serve the population.  In order to ensure these codes are used correctly, you must dial all ten digits of the phone number.

In smaller locations still served by one area code, the need for 10-digit dialing is less clear. In my home area code of 405, I don’t need to dial ten digits to reach the Oklahoma City metro area.  If I want to dial outside of my area code, I need to use the long distance prefix.  However, there are some areas in the 405 area code that are not long distance but require dialing 405.  These are technically Inter-LATA Intrastate long distance calls.  And the confusion over the area codes comes down to the long distance question.

Going the Distance

The long distance system in America is the cause of all the area code confusion.  Users universally assume that they need to dial a 1 before any number to cross area codes.  That is true in places where a given area code covers all users.  But users also need to dial the long distance code to access users on different phone systems and in different towns.  It’s difficult to remember the rules.  And when you dial a 1 and it’s not needed, you get the reorder tone from your telco provider.

Now add mobile phones into the equation.  My friend from college still has the same mobile number he had ten years ago in this area code.  He lives in Seattle now.  If I want to call and talk to him, it’s a local call on my home phone.  If his next-door neighbor wants to call him from a landline, it will be a long distance call.  Many people still have their first mobile number even though they have moved to area codes across the country.

Mobile phone providers don’t care about long distance calls.  A call to a phone next to you is no different than a call to a phone in Alaska.  This reinforces the importance of 10-digit dialing.  I give my mobile number as ten digits all the time, unless I give the 11-digit globalized E.164 number.  It’s quick and easy, and people in large areas are used to it.

It’s time to do the same for landline phones.  I think the utility of landlines would increase immensely if long distance were no longer an issue.  If you force all users to dial ten digits, they won’t mind so long as the calls can be routed anywhere in any area code.  When you consider that most phone providers give users free long distance plans, or even service for just a few cents, holding on to the idea of long distance calls makes little to no sense.


Tom’s Take

As a former voice engineer, I always had fits with long distance.  People wanted to track long distance calls to assign charges, even when they had hundreds of minutes of free long distance.  The need to enter a long distance access code rendered my Cisco Cius unusable.  I longed for the day that long distance was abolished.

Now, local phone companies see users evaporating before their very eyes.  No one uses their home phone any more.  I know I never answer mine, since most of the calls are from people I don’t want to talk to.  I think the last actual call I made was to my mother, which just happened to be long distance.

If telcos want users to use landlines, they should abolish the idea of long distance and make the system work like a mobile phone.  Calling my neighbor with a 212 area code would just require a 10-digit call.  No long distance.  No crazy rules.  Just a simple phone call.  People would start giving 10-digit numbers.  Billing would be simplified.  The world would be a better place.

Twitter Tips For Finding Followers


I have lots of followers on Twitter.  I also follow a fair number of people as well.  But the ratio of followers to followed isn’t 1:1.  I know there are a lot of great people out there and I try to keep up with as many of them as I can without being overwhelmed.  It’s a very delicate balance.

There are a few things I do when I get a new follower to decide if I want to follow them back.  I also do the same thing for new accounts that I find.  It’s my way of evaluating how they will fit into my feed.  Here are the three criteria I use to judge adding people to my feed.

Be Interesting

This one seems like a no-brainer, right?  Have interesting content that people want to read and interact with.  But there’s one specific piece here that I want to call attention to.  I love reading people with original thoughts.  Clever tweets, interesting observations, and pertinent discussion are all very important.  But one thing that I usually shy away from is the account that is more retweets than actual content.

I don’t mind retweets.  I do it a lot, both in quote form and in the “new” format of pasting the original tweet into my timeline.  But I use the retweet sparingly.  I do it to call attention to original thought.  Or to give credit where it’s due.  But I’ve been followed by accounts that are 75% (or more) retweets from vendors and other thought leaders.  If the majority of your content comes from retweeting others, I’m more likely to follow the people you’re retweeting and not you.  Make sure that the voice on your Twitter account is your own.

Be On Topic

My Twitter account is about computer networking.  I delve into other technologies, like wireless and storage, now and then.  I also make silly observations about trending events.  But I’m on topic most of the time.  That’s the debt I owe to the people who have chosen to follow me for my content.  I don’t pollute my timeline with unnecessary conversation.

When I evaluate followers, I look at their content.  Are they talking about SANs?  Or are they talking about sports?  Is their timeline a great discussion about SDN?  Or check-ins on Foursquare at the local coffee shop?  I like it when people are consistent.  And it doesn’t have to be about technology.  I follow meteorologists, musicians, and actors, because they are consistent about what they discuss.  If your timeline is polluted with junk and all over the place, it’s difficult to follow.

Note that I do talk about things other than tech.  I just choose to segregate that talk to other platforms.  So if you’re really interested in my take on college football, follow me on Facebook.

Be Interactive

There are lots of people talking on Twitter.  There are conversations going on every second that are of interest to lots of people.  No one has time to listen to all of them.  You have to find a reason to be involved.  That’s where the interactivity aspect comes into play.

My fifth tweet was interacting with someone (Ethan Banks to be precise):

If you don’t talk to other people and just blindly tweet into the void, you may very well add to the overall body of knowledge while missing the point at the same time.  It’s called “social” media.  That means talking to other people.  I’m more likely to follow an account that talks to me regularly.  That tells me I’m wrong or points me at a good article.  People feel more comfortable with people they’ve interacted with before.

Don’t be shy.  Mention someone.  Start a conversation.  I’ll bet you’ll pick up a new follower in no time.


Tom’s Take

These are my guidelines.  They aren’t hard-and-fast rules, and I don’t apply them to everyone.  But they do help me figure out whether deeper analysis is needed before following someone.  It’s important to make sure that the people you follow help you in some way.  They should inform you.  They should challenge you.  They should make you a better person.  That’s what social media really means to me.

Take a look at your followers and find a few to follow today.  Find that person who stays on topic and has great comments.  Give them a chance.  You might find a new friend.