Fixing My Twitter

It’s no surprise that Twitter’s developers are messing around with the platform. Again. This time, it’s the implementation of changes announced back in May. Twitter is finally cutting off the API access that third party clients have relied on for the past few years. They’re forcing those clients onto a new API structure for things like notifications and removing support for streaming. The new API structure also has a hefty price tag: for 250 users, it’s almost $3,000/month.
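To put that price tag in perspective, here’s some quick back-of-the-envelope math using only the figures above. The 50,000-user client size is a purely hypothetical stand-in; the post doesn’t say how many users any given client actually has.

```python
# Back-of-the-envelope math on the announced API pricing:
# "almost $3,000/month" for a 250-user tier.
TIER_PRICE_PER_MONTH = 3_000  # USD
TIER_USERS = 250

cost_per_user = TIER_PRICE_PER_MONTH / TIER_USERS
print(f"${cost_per_user:.2f} per user per month")  # $12.00 per user per month

# Hypothetical: a popular third party client with 50,000 active users.
client_users = 50_000
print(f"${client_users * cost_per_user:,.0f} per month")  # $600,000 per month
```

At $12 per user per month, any client with a real user base is priced out before it starts.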

You can imagine the feedback that Twitter has gotten. The developers of popular programs like Tweetbot and Twitterrific were forced to degrade their clients’ functionality thanks to the implementation of these changes. Twitter power users have been voicing their opinions with the hashtag #BreakingMyTwitter. I’m among the people frustrated that Twitter is chasing the dollar instead of the users.

Breaking The Bank

Twitter is beholden to a harsh mistress. Wall Street doesn’t care about user interface or API accessibility. They care about money. They care about results and profit. And if you aren’t turning a profit, you’re a loser that people will abandon. So Twitter has to make money somehow. And how is Twitter supposed to make money in today’s climate?

Users.

Users are the coin of Twitter’s realm. The more active users they have, the more eyeballs they can get on their ads and sponsored tweets and hashtags. Twitter wants to court celebrities with huge followings that want to put sponsored tweets in their followers’ faces. Sadly for Twitter, those celebrities are moving to platforms like Instagram as Twitter becomes overrun with bots and loses the ability to host real discourse about topics.

Twitter needs real users looking at ads and sponsored things to get customers to pay for them. They need to get people to the Twitter website, where those things can be displayed. And that means choking off third party clients. But it’s not just a war on Tweetbot and Twitterrific. Twitter has already killed off their Mac client. They have left Tweetdeck in a state that’s barely usable while positioning it for power users. Yet power users prefer other clients.

How can Twitter live in a world where no one wants to use their official tools, but no one can use the tools they do like because access to the vital APIs that run them is choked off behind a paywall that no one wants to pay for? How can we poor users continue to use a service that sucks when used through the preferred web portal?

You probably heard my rant on the Gestalt IT Rundown this week. If not, here you go:

I was a little animated because I’m tired of getting screwed by developers that don’t use Twitter the way that I use it. I came up with a huge list of things I didn’t like. But I wanted to take a moment to talk about some things that I think Twitter should do to get their power users back on their side.

  1. Restore API Access to Third Party Clients – This is a no-brainer for Twitter. If you don’t want to maintain the old code, then give API access to these third party developers at the rates they used to have it. Don’t force the developers working hard to make your service usable to foot the bills that you think you should be collecting. If you want people to continue to develop good features that you’ll “borrow” later, you need to give them access to your platform.
  2. Enforce Ads on Third Party Clients – I hate this idea, but if it’s what it takes to restore functionality, so be it. Give API access to Tweetbot and Twitterrific, but in order to qualify for a reduced rate they have to start displaying ads and promoted tweets from Twitter. It’s going to clog our timelines, but it would also finance a usable client. Sometimes we have to put up with the noise to keep the signal.
  3. Let Users Customize Their Experience – If you’re going to drive me to the website, let me choose how I view my tweets. I don’t want to see what my followers liked on someone else’s timeline. I don’t want to see interesting tweets from people I don’t follow. I want to get a simple timeline with conversations that don’t expand until I click on them. I want to be able to scroll the way I want, not the way you want me to use your platform. Customizability is why power users use tools like third party clients. If you want to win those users back, you need to investigate letting power users use the web platform in the same way.
  4. Buy A Third Party Client and Don’t Kill It Off – This one’s kind of hard for Twitter. Tweetie. The original Tweetdeck. There’s a graveyard of clients that Twitter has bought and caused to fail through inattention and an inability to capitalize on their usability. I’m sure Loren Brichter is happy to know that his most popular app is now sitting on a scrap heap somewhere. Twitter needs to pick up a third party developer, let them develop their client in peace without internal interference at Twitter, and then not fire them for producing.
  5. Listen To Your Users, Not Your Investors – Let’s be honest. If you don’t have users on Twitter, you don’t have investors. Rather than chasing the dollars every quarter and trying to keep Wall Street happy, you should instead listen to the people that use your platform and implement the changes they’re asking for. Some are simple, like group DMs in third party clients. Or polls that are visible. Some are harder, like robust reporting mechanisms or the ability to remove accounts that are causing issues. But if Twitter keeps ignoring their user base in favor of their flighty investors, they’re going to be sitting on a pile of nothing very soon.

Tom’s Take

I use Twitter all the time. It’s my job. It’s my hobby. It’s a place where I can talk to smart people and learn things. But it’s not easy to do that when the company that builds the platform tries as hard as possible to make it difficult for me to use it the way that I want. Instead of trying to shut down things I actively use and am happy with, perhaps Twitter can do some soul searching and find a way to appeal to the people that use the platform all the time. That’s the only way to fix this mess before you’re in the same boat as Orkut and MySpace.

Doing 2016 Write

 


It’s the first day of 2016, and it’s time for me to look back at what I wanted to do last year and ahead at what I plan to accomplish in the coming 366 days. We’ve got a busy year ahead with a leap day, the Olympics, and a US presidential election. And somewhere in the middle of all that there’s a lot of exciting things related to tech.

2015 In Rewind

Looking back at my 2015 goals, I think I did a fairly good job:

  • Writing in Markdown – Read about it all here
  • Blog themes – I really did look at quite a few themes and tried to find something that worked the way I wanted it to work without major modifications. What I finally settled on was a minor font change to make things more readable. For me, form has never been more important than function, so I spend less time worrying about how my blog looks and much more time focusing on how it reads.
  • Cisco Live Management – Didn’t quite get this one done. I wanted to put up the poll for the big picture at the end and I managed to miss it this year! The crew got a chance to say hello to keynote speaker Mike Rowe, so I think it was a good tradeoff. This year for Cisco Live 2016, I hope we have some more interesting things in store as well as some surprises.

A hit, a miss, and a foul tip. Not terribly bad. 2015 was a busy year. I think I wrote more words than ever. I spoke a few times at industry events. I enjoyed participating in the community and being a part of all the wonderful things going on to move it forward.

Looking Ahead to 2016

2016 is going to be another busy year as well. Lots of conferences in Las Vegas this year (Aruba Atmosphere, Interop, Cisco Live, and VMworld) as well as other industry events and a full slate of Tech Field Day events. I don’t think there’s a month in the entire year where something isn’t going on.

I’m sure this is an issue for quite a few people in the community as well. There’s a lot of time that gets taken up by doing things. That leaves very little time for writing about those things. I’ve experienced it and I know a lot of my friends have felt the same way. I can’t tell you the number of times that I’ve heard “I need to write something about that.” or “I’m way behind on my blogging!”

Two Wrongs Don’t Make A Write

My biggest goal for 2016 is writing. I’ve been doing as much as I can, but I want to help others do it as well. I want to find a way to encourage people to start writing and add their thoughts to the community. I also want to find a way to keep the other great voices of the community going and writing regularly.

There’s been a huge shift recently away from blogging as a primary method of information transfer. Quite a few have moved toward non-writing methods to convey that information. Podcasts, both audio and video, are starting to become a preferred method of disseminating information.

I don’t have a problem with podcasts. Some of my friends have great resources that you should check out. But podcasts are very linear. Aside from using software that speeds up the talking, it’s very hard to condense podcasts into quick hit formats. Blog posts can be as short or long as they need to be to get the information across.

What I want to accomplish is a way to foster writers to write more. To help new writers get started and established writers keep contributing. By keeping the blogging format alive and growing, we can continue to contribute great thoughts to the community and transfer knowledge to a new group of up-and-coming IT professionals.

I’ve got some ideas along these lines that I’ll be rolling out in the coming months. Be sure to stay tuned. If you’re willing to help out in any way, please drop me a line and let me know. I’m always looking for great people in the community to help make others great as well.


Tom’s Take

A new year doesn’t always mean a totally fresh start. I’ve been working on 2016 things for a few weeks now, and I’m continuing great projects that I’ve had going for a while as well. But a new year does mean that it’s time to find ways to do things better. My mission for the year is to make people better writers. To encourage more people to put thoughts down on paper. I want a world full of thinkers that aren’t afraid to share. That’s something that could make the new year a great one indeed.

Fixing E-Rate – SIP


I was talking to my friend Joshua Williams (@JSW_EdTech) about our favorite discussion topic: E-Rate.  I’ve written about E-Rate’s slow death and how it needs to be modernized.  One of the things that Joshua mentioned to me is a recent speech from Commissioner Ajit Pai in front of the FCC.  The short, short version of this speech is that the esteemed commissioner doesn’t want to increase the pool of money paid from the Universal Service Fund (USF) into E-Rate.  Instead, he wants to do away with “wasteful” services like wireline telephones and web hosting.  Naturally, when I read this my reaction was a bit pointed.

Commissioner Pai has his heart in the right place.  His staff gave him some very good notes about his interviews with school officials.  But he’s missed the boat completely about the “waste” in the program and how to address it.  His idea of reforming the program won’t come close to fixing the problems inherent in the system.

Voices Carry

Let’s look at the phone portion for a moment.  Commissioner Pai says that E-Rate spends $600 million per year funding wireline telephone services.  That is a pretty big number.  He says that the money we sink into phone services should go to broadband connections instead.  Because apparently the problems in schools aren’t decaying phone systems or a lack of wireless or even old architecture.  The problem is a lack of faster Internet.  Never mind that broadband circuits are part of the always-funded Priority One pool of money.  Or that getting the equipment required to turn up the circuit is part of Priority Two.  No, the way to fix the problem is to stop paying for phones.

Commissioner Pai obviously emails and texts the principals and receptionists at his children’s schools.  He must have instant messaging communications with them regularly.  Who in their right mind would call a school?  Oh, right.  Think of all the reasons that you might want to call a school.  My child forgot their sweater.  I’m picking them up early for a doctor’s appointment.  The list is virtually endless.  Telling the school that you’re no longer paying for phone service is likely to get you yelled at.  Or run out of town on a rail.

What about newer phone technologies?  Services that might work better with those fast broadband connections that Commissioner Pai is suggesting are sorely needed?  What about SIP trunking?  It seems like a no-brainer to me.  Take some of the voice service money and earmark it for new broadband connections.  However, it can only be used for a faster broadband connection if the telephone service is converted to a SIP trunk.  That’s a brilliant idea that would redirect the funding where it’s needed.

Sure, it’s likely going to require an upgrade of phone gear to support SIP and VoIP in general.  Yes, some rural phone companies are going to be forced to upgrade their circuits to support SIP.  But given that the major telecom companies have already petitioned the FCC to do away with wireline copper services in favor of VoIP, it seems that the phone companies would be on board with this.  It fixes many of the problems while still preserving the need for voice communications to the schools.

This is a win for the E-Rate integrators that are being targeted by Commissioner Pai’s statement that it’s too difficult to fill out E-Rate paperwork.  Those same integrators will be needed to take legacy phone systems and drag them kicking and screaming into the modern era.  This kind of expertise is what E-Rate should be paying for.  It’s the kind of specialized knowledge that school IT departments shouldn’t need to have on staff.


Tom’s Take

I spent a large part of my career implementing voice systems for education.  Many times I wondered why we would hook up a state-of-the-art CallManager to a cluster of analog voice lines.  The answer was almost always about money.  SIP was expensive.  SIP required a faster circuit.  Analog was cheap.  It was available.  It was easy.

Now schools have to deal with the real possibility of losing funding for E-Rate voice service because one of the commissioners thinks that no one uses voice any more.  I say we should take the money he wants to save and reinvest it into modernizing phone systems for all E-Rate eligible schools.  Doing so would go a long way toward removing the increasing maintenance costs for legacy phone systems as well as retiring circuits that require constant attention.  That would increase the pool of available money in future funding years.  The answer isn’t to kill programs.  It’s to figure out why they cost so much and find ways to make them more efficient.  And if you don’t think that’s what’s needed, Commissioner Pai, give me a call.  I still have a working phone.

A Plan To Fix E-Rate

The federal E-Rate program is in the news again. This time, it is due to a mention in the president’s State of the Union speech. He asked a deceptively simple question: “Why can’t schools have the same kind of wifi we have in coffee shops?” After the speech, press releases went flying from the Federal Communications Commission (FCC) talking about a restructuring plan that will eliminate older parts of the Federal Universal Service Fund (USF) like pagers and dial-up connections.

This isn’t the first time that E-Rate has been skewered by the president. Back in June of 2013, he asked some tough questions about increasing the availability of broadband connections to schools. Back then, I thought a lot about what his aim was and how easily it would be accomplished. With the recent news, I feel that I have to say something that the government doesn’t want to hear but needs to be said.

Mr. President, E-Rate is fundamentally broken and needs to be completely overhauled before your goals are met.

E-Rate hasn’t really changed since its inception in 1997. All that’s happened is more rules being piled on top to combat fraud and to attempt to keep up with changing technologies. Yet no one has really looked at the landscape of education technology today to see how best to use E-Rate funding. Computer labs have given way to laptop carts or even tablet carts. T1 WAN connections are now metro fiber handoffs at speeds of 100Mbit or better. Servers do much more than run DNS or serve web pages.

E-Rate has to be completely overhauled. The program no longer meets the needs of its constituents. Today, it serves as a vehicle for service providers and resellers to make money while forcing as much technology into schools as their meager budgets can afford. When schools with a 90% discount percentage are still having a hard time meeting their funding commitments, you have to take a long hard look at the prices being charged and the value being offered to the schools.

With that in mind, I’ve decided to take a stab at fixing E-Rate. It’s not a perfect solution, but I think it’s a great start. We need to build on the important pieces and discard the things that no longer make sense. To that end, I’m suggesting the Priority 1 / Priority 2 split be abolished. Cisco even agrees with me (PDF Link). In its place, we need to take a hard look at what our schools need to educate the youth of America.

Tier 1: WAN Connections

Schools need faster WAN connections. Online testing has replaced Scantrons. Streaming video is augmenting the classroom. The majority of traffic is outbound to the Internet, not internal. T1/T3 doesn’t cut it any more. Schools are going to need 100Mbit or better to meet student needs. Yet providers are reluctant to build out fiber networks that are unprofitable. Schools don’t want to pay for expensive circuits that are going to be clogged with junk.

Tier 1 in my proposal will be funding for fast WAN circuits and the routers that connect them. In the current system, that router is Priority 2, so even if you get the 10Gbit circuit you asked for, you may not be able to light it if P2 funding doesn’t come through. Under my plan, these circuits would be mandated to be fiber. That way, you can increase the amount of bandwidth to a site without the need to run a new line. That’s important, since most schools find themselves consuming additional bandwidth faster than they realize. Having a circuit with headroom to grow is key to the future.

Service providers would also be capped at the amount that they could charge on a monthly basis for the circuit. It does a school no good to order a 1Gbps fiber circuit if they can’t afford to pay for it every month. By capping the amount that SPs can charge, service providers will be forced to compete or find other means to fund build-outs.

Tier 2: Wireless Infrastructure

Wireless is key to LAN connectivity in schools today. The days of wiring honeycombing the walls are through. Yet Priority 2 still has a cabling component. It’s time to bring our schools into the 21st century. To that end, Tier 2 of my plan will be focused entirely on improving school wireless connectivity. No more cable runs unless they have a wireless AP on the end. Switches must be PoE/PoE+ capable to support the wireless infrastructure.

In addition, wireless site surveys must be performed before any installation plan is approved. VARs tend to skimp on surveys today because there is no straightforward way to recover the cost. Under this plan, they must do them. The cost of the site survey will be a line item for the site that is capped based on discount percentage. A proper survey leads to an overall reduction in the amount of equipment ordered and installed, so the cost is easy to justify. The cap keeps VARs from price gouging with unnecessary additional work that isn’t critical to the infrastructure.

Tier 3: Server Infrastructure

Servers are still an important part of education IT. Even though the applications and services they provide are increasingly being outsourced to hosted facilities, there will still be a need to maintain existing equipment. However, current E-Rate rules only allow servers to serve Internet functions like DNS, DHCP, or web hosting. This is ridiculous. DNS is an integral part of Active Directory, so almost every server that is a member of a domain is running it. DHCP is a minuscule part of a server’s function. Given the costs of engineering multiple DHCP servers in a network, having this as a valid E-Rate function is pointless. And when’s the last time a school ran its own web server? Hosting services provide scale and ease-of-use that can’t be matched by a small box running in the back of the data center.

Tier 3 of my plan has servers available for schools. However, the hardware can run only one approved role: hypervisor. If you take a server under my E-Rate plan, it has to run ESX/Hyper-V/KVM on the bare metal. This means ordering fewer, bigger servers to run virtualized workloads. The cost allocation nightmare is over. These servers will be performing hypervisor duties all the time. The end user will be responsible for licensing the OS running on the guest. That gets rid of the gray areas we have today.

If you take a virtual server under Tier 3, you must provide a migration plan for your existing non-virtualized workloads. That means that once you accept Tier 3 funding for a server, you have one calendar year to migrate your workloads to that system. After that year, you can no longer claim existing servers as eligible. Moving to the future shouldn’t be painful, but buying a big server and not taking advantage of it is silly. If you show the foresight to use virtualization you’re going to use it all the way.

Of course, for those schools that don’t want to take a server because their workloads already live in public clouds like Amazon Web Services (AWS) or Rackspace, there will be funding for those services as well. We have to embrace the available options to ensure our students are learning at the fullest capacity.

Tom’s Take

E-Rate is a fifteen-year-old program in need of a remodel. The current system is underfunded, prone to gaming, and will eventually collapse in on itself. When USF is forced to rely on rollover funds from previous years to meet funding goals even at the 90% discount level, something has to change. Priority 1 is bleeding E-Rate dry. The above plan focuses on the technology needed for schools to continue efficiently educating students in the coming years. It meets the immediate needs of education without starving the fund, which matters because an increase in funding is unlikely; other parts of USF have a sketchy reputation at best, as a quick Google search about the USF-funded cell phone program will attest. As Damon Killian said in The Running Man, “Hard times call for hard choices.” We have to be willing to blow up E-Rate as we know it in order to refocus it on the ultimate goal: educating our students.

Disclaimer

Because I know someone from the FCC or SLD is going to read this, here’s a disclaimer: I don’t work for an E-Rate provider. While I have in the past, this post does not reflect the opinions of anyone at that organization or any other organization involved in the consulting or execution of E-Rate. Don’t assume that because I think the program is broken, the people still working with the program should be punished. They are doing good work while conforming to the craziest red tape ever. Don’t punish them because I spoke out. Instead, fix the system so no one has to speak out again.

Don’t Just Curate, Cultivate


Content curation is all the rage.  The rank and file folks online tend to get overwhelmed by the amount of information spewing from the firehose.  For the most part, they don’t want to know every little detail about everything.  They want salient points about a topic or how an article fits into the bigger picture.  This is the calling card of a content curator.  They organize the chaos and attempt to attach meaning and context to things.  It does work well for some applications.

Hall of Books

One of the biggest issues that I have with curation is that it lends itself to collection only.  I picture curated content like a giant library or study full of old books.  All that information has been amassed and cataloged somehow.  The curator has probably read each of those books once or perhaps twice before.  They can recall the important points when prompted.  But why does all that information need to be stored in a building the size of a warehouse?  Why do we feel the need to collect all that data and then leave it at rest, whether it be in a library or in a list of blogs or sources?

Content curation feels lazy.  I can create a list of bloggers that I want you to follow.  I want you to know that I read these blogs and think the writers make excellent points.  But how often should you go back and look at those lists again?  One of the greatest tragedies of blogging is the number of dead, dying, or abandoned blogs out there.  Part of my job is to evaluate potential delegates for Tech Field Day based on a number of factors.  One of my yardsticks is blogging.

Seeing a blog that has very infrequent posts makes me a bit sad.  That person obviously had something to say at some point.  As time wore on, the amount of things to say drifted away.  Maybe real life got in the way.  Perhaps a new shiny object caught their attention.  The worst is a blog that has only had two posts in the last year that both start with, “I know I haven’t blogged here in a while, but that’s going to change…”

Reaping What You Sow

I think the key to keeping that from happening is to avoid static collection of content.  We need to cultivate that content just like a farmer would cultivate a field.  Care and feeding of writers and bloggers is very important.  Writers can be encouraged by leaving comments or by sharing articles that they have written.  Engaging them in discussion to feed new ideas is also a great way to keep the fire of inspiration burning.

One of the other important ways to keep content creators from getting stale is to look at your blogrolls and lists of followed blogs and move things around from time to time.  I know for a fact that many people don’t scroll very far down the list to find blogs to read.  The further up the list you are, the more likely people are to take the time to read what you have to say.  The key for those wanting to share great writers is to put them up higher on the list.  Too often a blog will be buried toward the bottom of a list and not get the kind of attention the writer needs to keep going.  More likely is a blog at the top of a list that hasn’t posted in weeks or months.

Everyone should do their part to cultivate content creators.  Don’t just settle for putting them on a list and calling it a day.  Revisit those lists frequently to be sure that the people on them are still producing.  For some it will be easy.  There are people like Ivan Pepelnjak and Greg Ferro that are born writers.  Others might need some encouragement.  If you see a good writer that has fallen off in the posting department lately, all it might take is a new comment on a recent post or a mention on Twitter/Facebook/Google+ asking how the writing is coming along.  Just putting the thought in their mind is often enough to get the creative juices flowing again.


Tom’s Take

I’m going to do my part as well.  I’m going to try to keep up with my blogroll a bit more often.  I’m going to make sure people are writing and showing everyone just how great they are.  Perhaps it’s a bit selfish on my part.  The more writers and creators there are the more choices I have to pick from when it’s time to pick new Field Day delegates.  Deep down inside, I just want more writers.  I want to spend as much time as possible every morning reading great articles and adding to the greater body of knowledge.  If that means  I need to spend more time looking after those same writers, then I guess it’s time for me to be a writer farmer.

I Can Fix Gartner


I’ve made light of my issues with Gartner before. From mild twitching when the name is mentioned to outright physical acts of dismissal. Aneel Lakhani did a great job on an episode of the Packet Pushers dispelling a lot of the bad blood that most people have for Gartner. I listened and my attitude toward them softened somewhat. It wasn’t until recently that I finally realized that my problem isn’t necessarily with Gartner. It’s with those that use Gartner as a blunt instrument against me. Simply put, Gartner has a perception problem.

Because They Said So

Gartner produces a lot of data about companies in a number of technology-related spaces. Switches, firewalls, and wireless devices are all subject to ranking and data mining by Gartner analysts. Gartner takes all that data and uses it to give form to a formless part of the industry. They take inquiries from interested companies and produce a simple ranking for them to use as a yardstick for measuring how one cloud hosting provider ranks against another. That’s a good and noble cause. It’s what happens afterwards that shows what data in the wrong hands can do.

Gartner makes their reports available to interested parties for a price. The price covers the cost of the analysts and the research they produce. It’s no different than the work that you or I do. Because this report revenue is such a large percentage of Gartner’s income, the only folks that can afford the reports are large enterprise customers or vendors. Enterprise customers are unlikely to share that information with anyone outside their organization. Vendors, on the other hand, are more than willing to share it with interested parties, provided that those parties offer up their contact information as a lead generation exercise and the Gartner report is favorable to the company. Vendors that aren’t seen as a leader in their particular slice of the industry aren’t particularly keen on doing any kind of advertising for their competitors. Leaders, on the other hand, are more than willing to let Gartner do their dirty work for them. Often, that conversation goes like this:

Vendor: You should buy our product. We’re the best.
Customer: Why are you the best? Are you the fastest or the lowest cost? Why should I buy your product?
Vendor: We’re the best because Gartner says so.

The only way that users outside the large enterprises see these reports is when a vendor publishes them as the aforementioned lead generation activity. This skews things considerably for a lot of potential buyers. This disparity becomes even more insulting when the club in question is a polygon.

Troubling Trigonometry

Gartner reports typically include a lot of data points. Those data points tell a story about performance, cost, and value. People don’t like reading data points. They like graphs and charts. In order to simplify the data into something visual, Gartner created their Magic Quadrant (MQ). The MQ distills the entire report into four squares of ranking. The MQ is the real issue here. It’s the worst kind of graph. It doesn’t have any labels on either axis. There’s no way to rank the data points without referring to the accompanying report. However, so many readers rarely read the report that the MQ becomes the *only* basis for comparison.

How much better is Company A at service provider routing than Company B? An inch? Half an inch? $2 billion in revenue? $2,000 gross margin? This is the key data that allows the MQ to be built. Would you know where to find it in the report if you had to? Most readers don’t. They take the MQ as the gospel truth and the only source of data. And the vendors love to point out that they are further to the top and right of the quadrant than their competitors. Sometimes, the ranking seems arbitrary. What makes a company be in the middle of the leaders quadrant versus toward the middle of the graph? Are all companies in the leaders quadrant ranked and placed against each other only? Or against all companies outside the quadrant? Details matter.

Assisting the Analysis

Gartner can fix their perception problems. It’s not going to be easy, though. They have the same issue as Consumers Union, publisher of Consumer Reports. Because Consumers Union publishes a magazine that has no advertising, they use donations and subscription revenues to offset operating costs. You don’t see television or print ads with Consumer Reports reviews pasted all over them. That’s because Consumers Union specifically forbids their use for commercial purposes.

Gartner needs to take a similar approach if they want to fix how they’re seen by others. Sell all the reports you want to end users that want to know the best firewall to buy. You can even sell those reports to the firewall vendors themselves. But the vendors should be forbidden from using those reports to sell their products. The integrity gained from that stance may not offset the loss of vendor revenue right away. But it will gain you customers in the long run that respect your stand against the misuse of Gartner reports as third-party advertising copy.

Put a small disclaimer at the bottom of every report: “Gartner provides analysis for interested parties only. Any use of this information as a sales tool or advertising instrument is unintended and prohibited.” That makes the purpose of the report clear while discouraging its use simply to sell another hundred widgets.

Another idea that might work to dispel advertising usage of the MQ is releasing last year’s report for little to no cost after 12 months.  That way, the small-to-medium enterprises gain access to the information without sacrificing their independence from a particular vendor.  I don’t think there will be any loss of revenue from these reports, as those that typically buy them will do so within 6-8 months of the release.  That will give the vendors very little room to leverage information that should be in the public domain anyway.  If you feel bad for giving that info away, charge a nominal printing fee of $5 or something like that.  Either way, you’ll blunt the advertising advantage quickly and still accomplish your goal of being seen as the leader in information gathering.


Tom’s Take

I don’t have to whinny like a horse every time someone says Gartner. It’s become a bit of legend by now. What I do take umbrage with is vendors using data points intended for customers to rank purchases and declaring that the non-labeled graph of those data points is the sole arbiter of winners and losers in the industry. What if your company doesn’t fit neatly into a Magic Quadrant category? It’s hard to call a company like Palo Alto a laggard in traditional firewalls when they have something that is entirely non-traditional. Reader discretion is key. Use the data in the report as your guide, not the pretty pictures with dots all over them. Take that data and fold it into your own analysis. Don’t take anyone’s word for granted. Make your own decisions. Then, give feedback. Tell people what you found and how accurate those Gartner reports were in making your decision. Don’t give your email address to a vendor that wants to harvest it simply to gain access to the latest report that (surprisingly) shows them to be the best. When the advertising angle dries up, vendors will stop using Gartner to sell their wares. When that day comes, Gartner will have a real opportunity to transcend their current image and become something more. And that’s a fix worth implementing.

CPE Credits for CCIE Recertification


Every year at Cisco Live, the CCIE attendees who are also NetVets get a special reception with John Chambers where they can ask one question of him (time permitting).  I’ve had hit-or-miss success with this in the past, so I wanted to think hard about a question that affected CCIEs the world over and could advance the program.  When I finally did ask my question, not only was it met with little acclaim but some folks actually argued against my proposal.  At that moment, I figured it was time to write a blog post about it.

I think the CCIE needs to adopt a Continuing Professional Education (CPE) route for recertification.

I can hear many of you out there now jeering me and saying that it’s a dumb idea.  Hear me out first before you totally dismiss it.

Many respected organizations that issue credentials have a program that records CPEs in lieu of retaking certification exams.  ISACA, (ISC)^2, and even the American Bar Association use continuing education programs as a way of recertifying their members.  If so many programs use them, what is the advantage?

CPEs ensure that certification holders are staying current with trends in technology.  It forces certified individuals to keep up with new advances and be on top of the game.  It rewards those that spend time researching and learning.  It provides a method of ensuring that a large percentage of the members are able to understand where technology is headed in the future.

There seems to be some hesitation on the part of CCIEs in this regard.  Many in the NetVet reception told me outright I was crazy for thinking such a thing.  They say that the only real measure of recertification is taking the written test.  CCIEs have a blueprint that they need to know, and that is how we know what a CCIE is.  CCIEs need to know spanning tree and OSPF and QoS.

Let’s take that as a given.  CCIEs need to know certain things.  Does that mean I’m not a real CCIE because I don’t know ATM, ISDN, or X.25?  Those are all things that have appeared on previous written exams and labs.  Why do we not learn them now?  What happened to those technologies to move them out of the limelight and relegate them to the same pile where we find token ring and ARCnet?  Technology advances every day.  Things that we used to run years ago are now as foreign to us as steam power and pyramid construction.

If the only true test of a CCIE is to recertify on things they already know, why not make them take the lab exam every two years to recertify?  Why draw the line at simple multiple choice guessing?  Make them show the world that they know what they’re doing.  We could drop the price of the lab for recertification.  We could offer recert labs in other locations via the remote CCIE lab technology to ensure that people don’t need to travel across the globe to retake a test.  Let’s put some teeth in the CCIE by making it a “real” practical exam.

Of course, the lab recert example is silly and a bit much.  Why do we say that multiple choice exams should count?  Probably because they are easy to administer and grade.  We are so focused on ensuring that CCIEs retrain on the same subjects over and over again that we are blind to the opportunity to make CCIEs the point of the spear when it comes to driving new technology adoption.

CCIE lab revamps don’t come along every six months.  They take years of examination and testing to ensure that the whole process integrates properly.  In the fourth version of the CCIE lab blueprint, MPLS appeared for the first time as a lab topic.  It took years of adoption in the wider enterprise community to show that MPLS was important to all networkers and not just service provider engineers.  The irony is that MPLS appears in the blueprint right alongside Frame Relay, a technology which MPLS is rapidly displacing.  We are still testing on a twenty-year-old technology because it represents so much of a networker’s life as it is ripped out and replaced with better protocols.

Where’s the CCIE SDN? Why are emerging technologies so underrepresented in the CCIE?  One could argue that new tech needs time to become adopted and tested before it can be a valid topic.  But who does that testing and adoption?  CCIEs?  CCNPs? Unwitting CCNAs who have this thrust upon them because the CIO saw a killer SDN presentation and decided that he needed it right now!  The truth is somewhere in the middle, I think.

Rather than making CCIEs stop what they are working on every 18 months to read up on how 802.1D spanning tree functions or how to configure an NBMA OSPF-over-frame-relay link, why not reward them for investigating and proofing new technology like TRILL or OpenFlow?  Let the research time count for something.  The fastest way to stagnate a certification program is to force it in upon itself and only test on the same things year after year.  I said as much in a previous CCIE post, which in many ways was the genesis of my question (and this post).  If the only advantage of studying new technology is gaining a leg up when the CxO comes down to ask how network function virtualization is going to benefit the company, then that’s not much of an advantage.

CPEs can be anything.  Reading an article.  Listening to a webcast.  Preparing a presentation.  Volunteering at a community college.  Even attending Cisco Live, which I have been informed was once a requirement of CCIE recertification.  CPEs don’t have to be hard.  They have to show that CCIEs are keeping up with what’s happening with modern networking.  That stands in contrast to reading the CCIE Certification Guide for the fourth or fifth time and perusing 3-digit RFCs for technology that was developed during the Reagan administration.

I’m not suggesting that the CPE program totally replace the test.  In fact, I think the two could be complementary.  Let CPEs recertify just the CCIE itself.  The written test could still recertify all the existing CCNA/CCNP level certifications.  Let the written stand as an option for those that can’t amass the needed number of CPE credits in the recertification period.  (ISC)^2 does this, as do many others.  I see no reason why it can’t work for the CCIE.

There will also be cries of fraud and abuse of the system.  In any honor system there will be fraud and abuse.  People will do whatever they can to take advantage of any perceived weakness to gain an edge.  Similar to (ISC)^2’s approach, an audit system could be implemented to flag questionable submissions, as well as random ones, to ensure that the certified folks are on the up and up.  As of July 1, 2013, there are almost 90,000 CISSPs in the world.  Somehow (ISC)^2 manages to audit all of those CPE submissions.  I’m sure that Cisco can find a way to do it as well.
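For what it’s worth, the selection mechanics of such an audit are simple.  Here’s a minimal sketch, assuming a hypothetical `flagged` field set by some intake process and an arbitrary 5% random sample rate; none of this reflects any published (ISC)^2 or Cisco procedure.

```python
import random

def select_for_audit(submissions, sample_rate=0.05, seed=None):
    """Audit every flagged CPE submission plus a random sample of the rest."""
    rng = random.Random(seed)
    flagged = [s for s in submissions if s.get("flagged")]
    clean = [s for s in submissions if not s.get("flagged")]
    picks = rng.sample(clean, int(len(clean) * sample_rate))
    return flagged + picks

# Example: 90,000 members, one submission each, ~0.1% flagged at intake.
subs = [{"id": i, "flagged": i % 1000 == 0} for i in range(90_000)]
print(len(select_for_audit(subs, seed=42)))  # 4585 audits, not 90,000 retests
```

A few thousand targeted audits keep the honor system honest without re-testing the entire certified population.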


Tom’s Take

People aren’t going to like my suggestion.  I’ve already heard as much.  I think that rewarding those that show initiative and learn all they can is a valuable option.  I want a legion of smart, capable individuals vetting new technology and keeping the networking world one step into the future.  If that means reworking the existing certification program a bit, so be it.  I’d rather the CCIE be on the cutting edge of things than a laggard that is disrespected for having its head stuck in the sand.

If you disagree with me or have a better suggestion, I implore you to leave a comment to that effect.  I want to really understand what the community thinks about this.

Accelerating E-Rate


Right after I left my job working for a VAR that focused on K-12 education and the federal E-Rate program, a funny thing happened.  The president gave a speech where he talked about the need for schools to get higher speed links to the Internet in order to take advantage of new technology shifts like cloud computing.  He called for the FCC and the Universal Service Administrative Company (USAC) to overhaul the E-Rate program to fix deficiencies that have cropped up in the last few years.  In the last couple of weeks, a fact sheet was released by the FCC to outline some of the proposed changes.  It was like a breath of fresh air.

Getting Up To Speed

The largest shift in E-Rate funding in the last two years has been in applying for faster Internet circuits.  Schools are realizing that it’s cheaper to host servers offsite, either with software vendors or in clouds like AWS, than it is to apply for funding that may never come and buy equipment that will be outdated before it ships.  The limiting factor has been the Internet connection of these schools.  Many of them are running serial T-1 circuits even today.  They are cheap and easy to install.  Enterprising ISPs have even started bundling several T-1 links into multilink PPP connections to aggregate bandwidth, but as the quick math below shows, that approach only stretches so far.
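Here’s a rough sketch of how far that bonding actually stretches, assuming the full 1.544 Mbps T-1 line rate and ignoring multilink PPP overhead:

```python
# Aggregate bandwidth from bonding T-1 circuits with multilink PPP,
# assuming the full 1.544 Mbps line rate and ignoring bonding overhead.
T1_MBPS = 1.544

for count in (1, 4, 8):
    print(f"{count} x T-1 = {count * T1_MBPS:.1f} Mbps")
# 1 x T-1 = 1.5 Mbps
# 4 x T-1 = 6.2 Mbps
# 8 x T-1 = 12.4 Mbps -- still roughly 8x short of a 100 Mbps fiber handoff
```

Even a heavily bonded connection tops out at a small fraction of a basic fiber circuit.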

Fiber is the future of connectivity for schools.  By running buried fiber to a school district, the ISP can gradually increase the circuit bandwidth as a school’s needs increase.  For many schools around the country, that could include online testing mandates, flipped classrooms, and even remote learning via technologies like Telepresence.  Fiber runs from ISPs aren’t cheap.  They are so expensive right now that the majority of funding for the current year’s E-Rate is going to faster ISP connections under Priority 1.  That leaves precious little money left over to fund Priority 2 equipment.  A former customer of mine spent the Priority 1 money to get a 10Gbit Internet circuit and then couldn’t afford a router to hook up to it because of the lack of money left over for Priority 2.

The proposed E-Rate changes will hopefully fix some of those issues.  The changes call for  simplification of the rules regarding deployments that will hopefully drive new fiber construction.  I’m hoping this means that they will do away with the “dark fiber” rule that has been in place for so many years.  Previously, you could only run fiber between sites if it was lit on both ends and in use.  This discouraged the use of spare fiber, or dark fiber, because it couldn’t be claimed under E-Rate if it wasn’t passing traffic.  This has led to a large amount of ISP-owned circuits being used for managed WAN connections.  A very few schools that were on the cutting edge years ago managed to get dedicated point-to-point fiber runs.  In addition, the order calls for prioritizing funding for fiber deployments that will drive higher speeds and long-term efficiency.  This should enable schools to do away with running multimode fiber simply because it is cheap and instead give preferential treatment to single mode fiber that is capable of running gigabit and 10gig over long distances.  It should also be helpful to VARs that are poised to replace aging multimode fiber plants.

Classroom Mobility

WAN circuits aren’t the only technology that will benefit from these E-Rate changes.  The order calls for a focus on ensuring that schools and libraries gain access to high speed wireless networks for users.  This has a lot to do with the explosion of personal tablet and laptop devices as opposed to desktop labs.  When I first started working with schools more than a decade ago, it was considered cutting edge to have a teacher computer and a student desktop in the classroom.  Today, tablet carts and one-to-one programs ensure that almost every student has access to some sort of device for research and learning.  That means that schools are going to need real enterprise wireless networks.  Sadly, many of them that either don’t qualify for E-Rate or can’t get enough funding settle for SMB/SOHO wireless devices that have been purchased from office supply stores simply because they are inexpensive.  That forces IT admins to spend entirely too much time troubleshooting these connections and distracts them from other, more important issues.  I think this focus on wireless will go a long way toward alleviating connectivity issues for schools of all sizes.

Finally, the FCC has ordered that the document submission process be modernized to include electronic filing options and that older technologies be phased out of the program. This should lead to fewer mistakes in the filing process as well as more rapid decisions for appropriate technology responses.  No longer do schools need to concern themselves with whether or not they need directory assistance on their Priority 1 phone lines.  Instead, they can focus on their problem areas and get what they need quickly.  There is also talk of fixing the audit and appeals process as well as speeding the deployment of funds.  As anyone that has worked with E-Rate will attest, the bureaucracy surrounding the program is difficult for anyone but the most seasoned professionals.  Even the E-Rate wizards have problems from year to year figuring out when an application will be approved or whether or not an audit will take place.  Making these processes easier and more transparent will be good for everyone involved in the program.


Tom’s Take

I posted previously that the cloud would kill the E-Rate program as we know it.  It appears I was right from a certain point of view.  Mobility and the cloud have both caused the E-Rate program to be evaluated and overhauled to address the changes in technology that are now filtering into schools from the corporate sector.  Someone was finally paying attention and figured out that we need to address faster Internet circuits and wireless connectivity instead of DNS servers and more cabling for nonexistent desktops.  Taking these steps shows that there is still life left in the E-Rate program and its ability to help schools.  I still say that USAC needs to boost the funding considerably to help more schools all over the country.  I’m hoping that once the changes in the FCC order go through that more money will be poured into the program and our children can reap the benefits for years to come.

Disclaimer

I used to work for a VAR that did a great deal of E-Rate business.  I don’t work for them any longer.  This post is my work and does not reflect the opinion of any education VAR that I have talked to or have been previously affiliated with.  I say this because the Schools and Libraries Division (SLD) of USAC, which is the enforcement and auditing arm, can be a bit vindictive at times when it comes to criticism.  I don’t want anyone at my previous employer to suffer because I decided to speak my mind.

Devaluing Experts – A Response

I was recently reading a blog post from Chris Jones (@IPv6Freely) about the certification process from the perspective of Juniper and Cisco. He talks about his view of the value of a certification that allows you to recertify from a dissimilar track, such as the CCIE, as opposed to a certification program that requires you to use the same recertification test to maintain your credentials, such as the JNCIE. I figured that any comment I had would run much longer than the allowed length, so I decided to write it down here.

I do understand where Chris is coming from when he talks about the potential loss of knowledge in allowing CCIEs to recert from a dissimilar certification track. At the time of this writing, there are six distinct tracks, not to mention the retired tracks, such as Voice, Storage, and many others. Chris’s contention is that allowing a Routing and Switching CCIE to continue to recertify from the Data Center or Wireless track causes them to lose their edge when it comes to R&S knowledge. The counterpoint to that argument is that the method of using the same (or updated) test in the certified track as the singular recertification option is superior because it ensures the engineer is always up on current knowledge in their field.

My counter argument to that post is twofold. The first point that I would debate is that the world of IT doesn’t exist in a vacuum. When I started in IT, I was a desktop repair technician. As I gradually migrated my skill set to server-based skills and then to networking, I found that my previous knowledge was important to carry forward but that not all of it was necessary. There are core concepts that are critical to any IT person, such as the operation of a CPU or the function of RAM. But beyond the requirement to answer a test question, is it really crucial that I remember the hex address of COM4 in DOS 5.0? My skill set grew and changed as a VAR engineer to include topics such as storage, voice, security, and even a return to servers by way of virtualization. I was spending my time working with new technology while still utilizing my old skills. Does that mean that I needed to stop what I was working on every 1.5 years to study the old CCIE R&S curriculum and make sure I remembered which OSPF LSA types are present in a totally stubby area? Or is it more important to understand how SDN is impacting the future of networking while not having any significant concrete configuration examples from which to generate test questions?

I would argue that giving an engineer the option to maintain existing knowledge badges by allowing new technology to refresh those badges is a great idea for vendors that want to keep fresh technology flowing into their organization. The risk of forcing your engineers into a track without an incentive to stay current comes when you have a really smart engineer that is not capable of thinking beyond their certification area. Think about the old telecommunications engineers that spent years upon years in their wiring closets working with SS7 or 66-blocks. They didn’t have an incentive or need to learn how voice over IP (VoIP) worked. Now that their job function has been replaced by something they don’t understand, many of them are scrambling to retrain or face being left behind in the market. As Steven Tyler once sang, “If you do what you’ve always done, you’ll always get what you’ve always got.”

Continuous Learning

The second part of my counterpoint is that maintaining the level of knowledge required for certification shouldn’t rely on 50-100 multiple choice questions. Any expert-level program should allow for the use of continuing education to recertify the credential on a yearly basis. This is how the legal bar system works. It’s also how (ISC)2’s CISSP program works. By demonstrating that you are continually acquiring new knowledge and contributing to the greater knowledge base, you are automatically put into a position that allows you to continue to hold your certification. It’s a smart concept that generates new information and ensures that the holders of those certifications stay current. Think for a moment about changing the topics of an exam. If the exam is changed every two years, there is a potential for a gap in knowledge to occur. If someone recertified on the last day of the CCIE version 3 exam, it would have been almost two years before they had to take an exam that required any knowledge of MPLS, which is becoming an increasingly common enterprise core protocol. Is it fair that the person that took the written exam the next day was required to know about MPLS? What happens if that CCIEv3 gets a job working with MPLS a few months later? According to the current version 4 curriculum, the CCIE should know about MPLS. Within the confines of the certification program, the user has failed to demonstrate familiarity with the topic.
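To make that timing gap concrete, here’s a small sketch. The dates are hypothetical stand-ins, but the arithmetic holds for any fixed exam cycle paired with a fixed recertification window:

```python
from datetime import date, timedelta

# Hypothetical dates standing in for the v3 -> v4 blueprint change.
v4_launch = date(2010, 1, 1)              # day MPLS first shows up on the exam
recert_window = timedelta(days=2 * 365)   # recertification roughly every two years

# An engineer who recertified on the last day of the v3 exam...
last_v3_recert = v4_launch - timedelta(days=1)
next_required_exam = last_v3_recert + recert_window

gap = next_required_exam - v4_launch
print(f"Days certified without ever being tested on MPLS: {gap.days}")  # 729
```

That’s nearly two full years in which two equally "current" CCIEs can have been tested against completely different blueprints.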

Instead, if we ensure that the current certification holders are studying new topics such as MPLS or SDN or any manner of networking-related discussions we can be reasonably sure they are conversant with what the current state of the industry looks like. There is no knowledge gap because new topics can be introduced quickly as they become relevant. There is no fear that someone following the letter of the certification law and recertifying on the same material will run into something they haven’t seen before because of a timing issue. Continuous improvement is a much better method in my mind.


Tom’s Take

Recertification is going to be a sticky topic no matter how it’s sliced. Some will favor allowing engineers to spread their wings and become conversant in many enterprise and service provider topics. Others will insist that the only way to truly be an expert in a field is to study those topics exclusively. Still others will say that a melding of the two approaches is needed, either through continuous improvement or true lab recertification. I think the end result is the same no matter the case. What’s needed is an agile group of engineers that is capable of not only being expert in their field but is also encouraged to do things outside their comfort zone without fear of losing that which they have worked so hard to accomplish. That’s valuable no matter how you frame it.

Note that this post was not intended to be an attack against any person or any company listed herein. It is intended as a counterpoint discussion of the topics.

Blog Posts and CISSP CPE Credit

Among my more varied certifications, I’m a Certified Information Systems Security Professional (CISSP).  I got it a few years ago since it was one of the few non-vendor-specific certifications available at the time.  I studied my tail off and managed to pass the multiple choice, Scantron-based exam.  One of the things about the CISSP that appealed to me was the idea that I didn’t need to keep taking that monster exam every three years to stay current.  Instead, I could submit evidence that I had kept up with the current state of affairs in the security world in the form of Continuing Professional Education (CPE) credits.

CPEs are nothing new to some professions.  My lawyer friends have told me in the past that they need to attend a certain number of conferences and talks each year to earn enough CPEs to keep their license to practice law.  For a CISSP, there are many things that can be done to earn CPEs.  You can listen to webcasts and podcasts, attend major security conferences like RSA Conference or the ISC2 Security Congress, or even give a security presentation to a group of people.  CPEs can be earned from a variety of research tasks like reading books or magazines.  You can even earn a mountain of CPEs from publishing a security book or article.

That last point is the one I take a bit of umbrage with.  You can earn 5 CPEs for having a security article published in a print magazine or by an established publishing house.  You can write all you want, but you still have to wait on an old-fashioned editor to decide that your material was worthy of publication before it can be counted.  Notice that “blog post” is nowhere on the list of activities that can earn credit.  I find that rather interesting considering that the majority of security-related content that I read today comes in the form of a blog post.

Blog posts are topical.  With the speed that things move in the security world, the ability to react quickly to news as it happens means you’ll be able to generate much more discussion.  For instance, I wrote a piece for Aruba titled Is It Time For a Hacking Geneva Convention?  It was based on the idea that the new frontier of hacking as a warfare measure is going to need the same kinds of protections that conventional non-combat targets are offered today.  I wrote it in response to a NY Times article about the Chinese calling for Global Hacking Rules.  A week later, NATO released a set of rules for cyberwarfare that echoed my ideas that dams and nuclear plants should be off limits due to potential civilian casualties.  Those ideas developed in the span of less than two weeks. How long would it have taken to get that published in a conventional print magazine?

I spend time researching and gathering information for my blog posts.  Even those that are primarily opinion still have facts that must be verified.  I spend just as much time writing my posts as I do writing my presentations.  I have a much wider audience for my blog posts than I do for my in-person talks.  Yet those in-person talks count for CPEs while my blog posts count for nothing.  Blogs are the kind of rapid response journalism that gets people talking and debating much faster than an article in a security magazine that may be published once a quarter.

I suppose there is something to be said for the relative ease with which someone can start a blog and write posts that may be inaccurate or untrue.  As a counter to that, blog posts persist and can be referenced and verified.  If submitted as a CPE, they would need to stay up for a period of time.  They can be vetted by a committee or by volunteers.  I’d even volunteer to read over blog post CPE submissions.  There’s a lot of smart people out there writing really thought provoking stuff.  If those people happen to be CISSPs, why can’t they get credit for it?

To that end, it’s time for (ISC)^2 to start allowing blog posts to count for CPE credit.  There are things that would need to change on the backend to ensure that the content that is claimed is of high quality.  The desire to allow only traditionally published material for CPEs is more than likely due to the idea that an editor is reading over it and ensuring that it’s top notch.  There’s nothing to prevent the same thing from occurring for blog authors as well.  After all, I can claim CPE credits for reading a lot of posts.  Why can’t I get credit for writing them?

The company that oversees the CISSP, (ISC)^2, has taken their time in updating their tests to the modern age.  I’ve not only taken the pencil-and-paper version, I’ve proctored it as well.  It took until 2012 before the CISSP was finally released as a computer-based exam that could be taken in a testing center as opposed to being herded into a room with Scantrons and #2 pencils.  I don’t know whether or not they’re going to be progressive enough to embrace new media at this time.  They seem to be getting around to modernizing things on their own schedule, even with recent additions of more activist board members like Dave Lewis (@gattaca).

Perhaps the board doesn’t feel comfortable allowing people to post whatever they want without oversight or editing.  Maybe reactionary journalism from new media doesn’t meet the strict guidelines needed for people to learn something.  It’s tough to say whether blogs are more popular than the print magazines that they forced into email distribution models and quarterly publication as opposed to monthly.  What I am willing to guarantee is that the quality of security-related blog posts will continue to be high and can only get higher as those that want to start claiming those posts for CPE credit really dig in and begin to write riveting and useful articles.  The fact that they don’t have to be wasted on dead trees and overpriced ink just makes the victory that much sweeter.