Fixing My Twitter

It’s no surprise that Twitter’s developers are messing around with the platform. Again. This time, it’s the implementation of changes announced back in May. Twitter is finally cutting off access to their API that third party clients have been using for the past few years. They’re forcing these clients to use their new API structure for things like notifications and removing support for streaming. This new API structure also has a hefty price tag. For 250 users it’s almost $3,000/month.
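The sticker shock is easier to see per user. Here's a quick back-of-the-envelope calculation, assuming the widely reported $2,899/month figure for the 250-user tier (that exact price is an assumption based on press coverage, not something stated here):

```python
# Back-of-the-envelope math on Twitter's announced premium API pricing.
# The $2,899/month price for a 250-user tier is an assumed figure from
# press reports; adjust it if the real number differs.
monthly_price = 2899   # dollars per month for the tier
users = 250            # users covered by that tier

per_user = monthly_price / users
print(f"${per_user:.2f} per user per month")
```

That works out to more than $11 per user every single month just for API access, before a developer makes a dime on the client itself.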

You can imagine the feedback that Twitter has gotten. Users of popular programs like Tweetbot and Twitterrific saw client functionality degraded thanks to the implementation of these changes. Twitter power users have been voicing their opinions with the hashtag #BreakingMyTwitter. I’m among the people that are frustrated that Twitter is chasing the dollar instead of the users.

Breaking The Bank

Twitter is beholden to a harsh mistress. Wall Street doesn’t care about user interface or API accessibility. They care about money. They care about results and profit. And if you aren’t turning a profit you’re a loser that people will abandon. So Twitter has to make money somehow. And how is Twitter supposed to make money in today’s climate?

Users.

Users are the coin of Twitter’s realm. The more users they have active the more eyeballs they can get on their ads and sponsored tweets and hashtags. Twitter wants to court celebrities with huge followings that want to put sponsored tweets in their faces. Sadly for Twitter, those celebrities are moving to platforms like Instagram as Twitter becomes overrun with bots and loses the ability to have discourse about topics.

Twitter needs real users looking at ads and sponsored things to get customers to pay for them. They need to get people to the Twitter website where these things can be displayed. And that means choking off third party clients. But it’s not just a war on Tweetbot and Twitterrific. They’ve already killed off their Mac client. They have left Tweetdeck in a state that’s barely usable, positioning it for power users. Yet, power users prefer other clients.

How can Twitter live in a world where no one wants to use their tools but can’t use the tools they like because access to the vital APIs that run them is choked off behind a paywall that no one wants to pay for? How can we poor users continue to use a service that sucks when used through the preferred web portal?

You probably heard my rant on the Gestalt IT Rundown this week. If not, here you go:

I was a little animated because I’m tired of getting screwed by developers that don’t use Twitter the way that I use it. I came up with a huge list of things I didn’t like. But I wanted to take a moment to talk about some things that I think Twitter should do to get their power users back on their side.

  1. Restore API Access to Third Party Clients – This is a no-brainer for Twitter. If you don’t want to maintain the old code, then give API access to these third party developers at the rates they used to have it. Don’t force the developers working hard to make your service usable to foot the bills that you think you should be getting. If you want people to continue to develop good features that you’ll “borrow” later, you need to give them access to your platform.
  2. Enforce Ads on Third Party Clients – I hate this idea, but if it’s what it takes to restore functionality, so be it. Give API access to Tweetbot and Twitterrific, but in order to qualify for a reduced rate they have to start displaying ads and promoted tweets from Twitter. It’s going to clog our timelines but it would also finance a usable client. Sometimes we have to put up with the noise to keep the signal.
  3. Let Users Customize Their Experience – If you’re going to drive me to the website, let me choose how I view my tweets. I don’t want to see what my followers liked on someone else’s timeline. I don’t want to see interesting tweets from people I don’t follow. I want to get a simple timeline with conversations that don’t expand until I click on them. I want to be able to scroll the way I want, not the way you want me to use your platform. Customizability is why power users use tools like third party clients. If you want to win those users back, you need to investigate letting power users use the web platform in the same way.
  4. Buy A Third Party Client and Don’t Kill Them Off – This one’s kind of hard for Twitter. Tweetie. The original Tweetdeck. There’s a graveyard of clients that Twitter has bought and caused to fail through inattention and inability to capitalize on their usability. I’m sure Loren Brichter is happy to know that his most popular app is now sitting on a scrap heap somewhere. Twitter needs to pick up a third party developer, let them develop their client in peace without interference internally at Twitter, and then not get fired for producing.
  5. Listen To Your Users, Not Your Investors – Let’s be honest. If you don’t have users on Twitter, you don’t have investors. Rather than chasing the dollars every quarter and trying to keep Wall Street happy, you should instead listen to the people that use your platform and implement the changes they’re asking for. Some are simple, like group DMs in third party clients. Or polls that are visible. Some are harder, like robust reporting mechanisms or the ability to remove accounts that are causing issues. But if Twitter keeps ignoring their user base in favor of their flighty investors they’re going to be sitting on a pile of nothing very soon.

Tom’s Take

I use Twitter all the time. It’s my job. It’s my hobby. It’s a place where I can talk to smart people and learn things. But it’s not easy to do that when the company that builds the platform tries as hard as possible to make it difficult for me to use it the way that I want. Instead of trying to shut down things I actively use and am happy with, perhaps Twitter can do some soul searching and find a way to appeal to the people that use the platform all the time. That’s the only way to fix this mess before you’re in the same boat as Orkut and MySpace.


Doing 2016 Write

 


It’s the first day of 2016 and it’s time for me to look at what I wanted to do and what I plan to accomplish in the coming 366 days. We’ve got a busy year ahead with a leap day, the Olympics, and a US presidential election. And somewhere in the middle of all that there’s a lot of exciting things related to tech.

2015 In Rewind

Looking back at my 2015 goals, I think I did a fairly good job:

  • Writing in Markdown – Read about it all here
  • Blog themes – I really did look at quite a few themes and tried to find something that worked the way I wanted it to work without major modifications. What I finally settled on was a minor font change to make things more readable. For me, form has never been more important than function, so I spend less time worrying about how my blog looks and much more time focusing on how it reads.
  • Cisco Live Management – Didn’t quite get this one done. I wanted to put up the poll for the big picture at the end and I managed to miss it this year! The crew got a chance to say hello to keynote speaker Mike Rowe, so I think it was a good tradeoff. This year for Cisco Live 2016, I hope we have some more interesting things in store as well as some surprises.

A hit, a miss, and a foul tip. Not terribly bad. 2015 was a busy year. I think I wrote more words than ever. I spoke a few times at industry events. I enjoyed participating in the community and being a part of all the wonderful things moving it forward.

Looking Ahead to 2016

2016 is going to be another busy year as well. Lots of conferences in Las Vegas this year (Aruba Atmosphere, Interop, Cisco Live, and VMworld) as well as other industry events and a full slate of Tech Field Day events. I don’t think there’s a month in the entire year where something isn’t going on.

I’m sure this is an issue for quite a few people in the community as well. There’s a lot of time that gets taken up by doing things. That leaves very little time for writing about those things. I’ve experienced it and I know a lot of my friends have felt the same way. I can’t tell you the number of times that I’ve heard “I need to write something about that” or “I’m way behind on my blogging!”

Two Wrongs Don’t Make A Write

My biggest goal for 2016 is writing. I’ve been doing as much as I can, but I want to help others do it as well. I want to find a way to encourage people to start writing and add their thoughts to the community. I also want to find a way to keep the other great voices of the community going and writing regularly.

There’s been a huge shift recently away from blogging as a primary method of information transfer. Quite a few have moved toward non-writing methods to convey that information. Podcasts, both audio and video, are starting to become a preferred method of disseminating information.

I don’t have a problem with podcasts. Some of my friends have great resources that you should check out. But podcasts are very linear. Aside from using software that speeds up the talking, it’s very hard to condense podcasts into quick hit formats. Blog posts can be as short or long as they need to be to get the information across.

What I want to accomplish is a way to foster writers to write more. To help new writers get started and established writers to keep contributing. By keeping the blogging format alive and growing, we can continue to contribute great thoughts to the community and transfer knowledge to a new group of up-and-coming IT professionals.

I’ve got some ideas along these lines that I’ll be rolling out in the coming months. Be sure to stay tuned. If you’re willing to help out in any way please drop me a line and let me know. I’m always looking for great people in the community to help make others great as well.


Tom’s Take

A new year doesn’t always mean a totally fresh start. I’ve been working on 2016 things for a few weeks now and I’m continuing great projects that I’ve been doing for a while now as well. But a new year does mean that it is time to find ways to do things better. My mission for the year is to make people better writers. To encourage more people to put thoughts down on paper. I want a world full of thinkers that aren’t afraid to share. That’s something that could make the new year a great one indeed.

Fixing E-Rate – SIP


I was talking to my friend Joshua Williams (@JSW_EdTech) about our favorite discussion topic: E-Rate.  I’ve written about E-Rate’s slow death and how it needs to be modernized.  One of the things that Joshua mentioned to me is a recent speech from Commissioner Ajit Pai in front of the FCC.  The short, short version of this speech is that the esteemed commissioner doesn’t want to increase the pool of money paid from the Universal Service Fund (USF) into E-Rate.  Instead, he wants to do away with “wasteful” services like wireline telephones and web hosting.  Naturally, when I read this my reaction was a bit pointed.

Commissioner Pai has his heart in the right place.  His staff gave him some very good notes about his interviews with school officials.  But he’s missed the boat completely about the “waste” in the program and how to address it.  His idea of reforming the program won’t come close to fixing the problems inherent in the system.

Voices Carry

Let’s look at the phone portion for a moment.  Commissioner Pai says that E-Rate spends $600 million per year on funding wireline telephone services.  That is a pretty big number.  He says that the money we sink into phone services should go to broadband connections instead.  Because the problems in schools aren’t decaying phone systems or lack of wireless or even old architecture.  It’s faster Internet.  Never mind that broadband circuits are part of the always-funded Priority One pool of money.  Or that getting the equipment required to turn up the circuit is part of Priority Two.  No, the way to fix the problem is to stop paying for phones.

Commissioner Pai obviously emails and texts the principals and receptionists at his children’s schools.  He must have instant messaging communications with them regularly.  Who in their right mind would call a school?  Oh, right.  Think of all the reasons that you might want to call a school.  My child forgot their sweater.  I’m picking them up early for a doctor’s appointment.  The list is virtually endless.  There are so many reasons to call a school.  Telling the school that you’re no longer paying for phone service is likely to get you yelled at.  Or run out of town on a rail.

What about newer phone technologies?  Services that might work better with those fast broadband connections that Commissioner Pai is suggesting are sorely needed?  What about SIP trunking?  It seems like a no-brainer to me.  Take some of the voice service money and earmark it for new broadband connections.  However, it can only be used for a faster broadband connection if the telephone service is converted to a SIP trunk.  That’s a brilliant idea that would redirect the funding where it’s needed.

Sure, it’s likely going to require an upgrade of phone gear to support SIP and VoIP in general.  Yes, some rural phone companies are going to be forced to upgrade their circuits to support SIP.  But given that the major telecom companies have already petitioned the FCC to do away with wireline copper services in favor of VoIP, it seems that the phone companies would be on board with this.  It fixes many of the problems while still preserving the need for voice communications to the schools.

This is a win for the E-Rate integrators that are being targeted by Commissioner Pai’s statement that it’s too difficult to fill out E-Rate paperwork.  Those same integrators will be needed to take legacy phone systems and drag them kicking and screaming into the modern era.  This kind of expertise is what E-Rate should be paying for.  It’s the kind of specialized knowledge that school IT departments shouldn’t need to have on staff.


Tom’s Take

I spent a large part of my career implementing voice systems for education.  Many times I wondered why we would hook up a state-of-the-art CallManager to a cluster of analog voice lines.  The answer was almost always about money.  SIP was expensive.  SIP required a faster circuit.  Analog was cheap.  It was available.  It was easy.

Now schools have to deal with the real possibility of losing funding for E-Rate voice service because one of the commissioners thinks that no one uses voice any more.  I say we should take the money he wants to save and reinvest it into modernizing phone systems for all E-Rate eligible schools.  Doing so would go a long way toward removing the increasing maintenance costs for legacy phone systems as well as retiring circuits that require constant attention.  That would increase the pool of available money in future funding years.  The answer isn’t to kill programs.  It’s to figure out why they cost so much and find ways to make them more efficient.  And if you don’t think that’s what’s needed Commissioner Pai, give me a call.  I still have a working phone.

A Plan To Fix E-Rate

The federal E-Rate program is in the news again. This time, it is due to a mention in the president’s State of the Union speech. He asked a deceptively simple question: “Why can’t schools have the same kind of wifi we have in coffee shops?” After the speech, press releases went flying from the Federal Communications Commission (FCC) talking about a restructuring plan that will eliminate older parts of the Federal Universal Service Fund (USF) like pagers and dial-up connections.

This isn’t the first time that E-Rate has been skewered by the president. Back in June of 2013, he asked some tough questions about increasing the availability of broadband connections to schools. Back then, I thought a lot about what his aim was and how easily it would be accomplished. With the recent news, I feel that I have to say something that the government doesn’t want to hear but needs to be said.

Mr. President, E-Rate is fundamentally broken and needs to be completely overhauled before your goals are met.

E-Rate hasn’t really changed since its inception in 1997. All that’s happened is more rules being piled on the top to combat fraud and attempt to keep up with changing technologies. Yet no one has really looked at the landscape of education technology today to see how best to use E-Rate funding. Computer labs have given way to laptop carts or even tablet carts. T1 WAN connections are now metro fiber handoffs at speeds of 100Mbit or better. Servers do much more than run DNS or web servers.

E-Rate has to be completely overhauled. The program no longer meets the needs of its constituents. Today, it serves as a vehicle for service providers and resellers to make money while forcing as much technology into schools as their meager budgets can afford. When schools with a 90% discount percentage are still having a hard time meeting their funding commitments, you have to take a long hard look at the prices being charged and the value being offered to the schools.
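To see why even deeply discounted schools struggle, it helps to sketch the discount mechanics. The mechanics themselves (USF covers the discount percentage of an eligible service; the school pays the remainder out of pocket) are how the program works, but the dollar figures below are hypothetical illustrations:

```python
# E-Rate discount mechanics: USF funds the discount percentage of an
# eligible service and the school pays the remainder itself.
# The annual bill below is a hypothetical figure for illustration.
def school_share(total_cost: float, discount_pct: float) -> float:
    """Portion of an eligible service's cost the school pays out of pocket."""
    return total_cost * (100 - discount_pct) / 100

annual_bill = 120_000  # hypothetical yearly cost of funded services
print(school_share(annual_bill, 90))  # a 90%-discount school still owes 12000.0
```

Even at the maximum 90% discount, a school is left finding five figures in a budget that often doesn't have it.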

With that in mind, I’ve decided to take a stab at fixing E-Rate. It’s not a perfect solution, but I think it’s a great start. We need to build on the important pieces and discard the things that no longer make sense. To that end, I’m suggesting the Priority 1 / Priority 2 split be abolished. Cisco even agrees with me (PDF Link). In its place, we need to take a hard look at what our schools need to educate the youth of America.

Tier 1: WAN Connections

Schools need faster WAN connections. Online testing has replaced Scantrons. Streaming video is augmenting the classroom. The majority of traffic is outbound to the Internet, not internally. T1/T3 doesn’t cut it any more. Schools are going to need 100Mbit or better to meet student needs. Yet providers are reluctant to build out fiber networks that are unprofitable. Schools don’t want to pay for expensive circuits that are going to be clogged with junk.

Tier 1 in my proposal will be funding for fast WAN circuits and the routers that connect them. In the current system, that router is Priority 2, so even if you get the 10Gbit circuit you asked for, you may not be able to light it if P2 doesn’t come through. Under my plan, these circuits would be mandated to be fiber. That way, you can increase the amount of bandwidth to a site without the need to run a new line. That’s important, since most schools find themselves quickly consuming additional bandwidth before they realize it. Having a circuit with additional headroom is key to the future.

Service providers would also be capped at the amount that they could charge on a monthly basis for the circuit. It does a school no good to order a 1Gbps fiber circuit if they can’t afford to pay for it every month. By capping the amount that SPs can charge, they will be forced to compete or find other means to fund build outs.

Tier 2: Wireless Infrastructure

Wireless is key to the LAN connectivity in schools today. The days of wiring honeycombing the walls are through. Yet, Priority 2 still has a cabling component. It’s time to bring our schools into the 21st century. To that end, Tier 2 of my plan will be focused entirely on improving school wireless connectivity. No more cable runs unless they have a wireless AP on the end. Switches must be PoE/PoE+ capable to support the wireless infrastructure.

In addition, wireless site surveys must be performed before any installation plan is approved. VARs tend to skimp on the surveys now due to inability to recover costs in a straightforward manner. Now, they must do them. The costs of the site survey will be a line item for the site that is capped based on discount percentage. This will lead to an overall reduction in the amount of equipment ordered and installed, so the costs are easy to justify. The capped amount keeps VARs from price gouging with unnecessary additional work that isn’t critical to the infrastructure.

Tier 3: Server Infrastructure

Servers are still an important part of education IT. Even though the applications and services they provide are being increasingly outsourced to hosted facilities, there will still be a need to maintain existing equipment. However, current E-Rate rules only allow servers to serve Internet functions like DNS, DHCP, or Web Servers. This is ridiculous. DNS is an integral part of Active Directory, so almost every server that is a member of a domain is running it. DHCP is a minuscule part of a server’s function. Given the costs of engineering multiple DHCP servers in a network, having this as a valid E-Rate function is pointless. And when’s the last time a school had their own web server? Hosting services provide scale and ease-of-use that can’t be matched by a small box running in the back of the data center.

Tier 3 of my plan has servers available for schools. However, the hardware can run only one approved role: hypervisors. If you take a server under my E-Rate plan, it has to run ESX/Hyper-V/KVM on the bare metal. This means that ordering fewer big servers will be key to running virtual workloads. The cost allocation nightmare is over. These servers will be performing hypervisor duties all the time. The end user will be responsible for licensing the OS running on the guest. That gets rid of the gray areas we have today.

If you take a virtual server under Tier 3, you must provide a migration plan for your existing non-virtualized workloads. That means that once you accept Tier 3 funding for a server, you have one calendar year to migrate your workloads to that system. After that year, you can no longer claim existing servers as eligible. Moving to the future shouldn’t be painful, but buying a big server and not taking advantage of it is silly. If you show the foresight to use virtualization you’re going to use it all the way.

Of course, for those schools that don’t want to take a server because their workloads already exist in public clouds like Amazon Web Services (AWS) or Rackspace, there will be funding for AWS as well. We have to embrace the available options to ensure our students are learning at the fullest capacity.

Tom’s Take

E-Rate is a fifteen-year-old program in need of a remodel. The current system is underfunded, prone to gaming, and will eventually collapse in on itself. When USF is forced to rely on rollover funds from previous years to meet funding goals even at 90%, something has to change. Priority 1 is bleeding E-Rate dry. The above plan focuses on the technology needed for schools to continue efficiently educating students in the coming years. It meets the immediate needs of education without starving the fund, since an increase is unlikely to come. Other parts of USF have a sketchy reputation at best, as a quick Google search about the USF-funded cell phone program will attest. As Damon Killian said in The Running Man, “Hard times call for hard choices.” We have to be willing to blow up E-Rate as we know it in order to refocus it to make it serve the ultimate goal: educating our students.

Disclaimer

Because I know someone from the FCC or SLD is going to read this, here’s the following disclaimer: I don’t work for an E-Rate provider. While I have in the past, this post does not reflect the opinions of anyone at that organization or any other organization involved in the consulting or execution of E-Rate. Don’t assume that because I think the program is broken that means the people that are still working with the program should be punished. They are doing good work while still conforming to the craziest red tape ever. Don’t punish them because I spoke out. Instead, fix the system so no one has to speak out again.

Don’t Just Curate, Cultivate


Content curation is all the rage.  The rank and file folks online tend to get overwhelmed by the amount of information spewing from the firehose.  For the most part, they don’t want to know every little detail about everything.  They want salient points about a topic or how an article fits into the bigger picture.  This is the calling card of a content curator.  They organize the chaos and attempt to attach meaning and context to things.  It does work well for some applications.

Hall of Books

One of the biggest issues that I have with curation is that it lends itself to collection only.  I picture curated content like a giant library or study full of old books.  All that information has been amassed and cataloged somehow.  The curator has probably read each of those books once or perhaps twice before.  They can recall the important points when prompted.  But why does all that information need to be stored in a building the size of a warehouse?  Why do we feel the need to collect all that data and then leave it at rest, whether it be in a library or in a list of blogs or sources?

Content curation feels lazy.  I can create a list of bloggers that I want you to follow.  I want you to know that I read these blogs and think the writers make excellent points.  But how often should you go back and look at those lists again?  One of the greatest tragedies of blogging is the number of dead, dying, or abandoned blogs out there.  Part of my job is to evaluate potential delegates for Tech Field Day based on a number of factors.  One of my yardsticks is blogging.

Seeing a blog that has very infrequent posts makes me a bit sad.  That person obviously had something to say at some point.  As time wore on, the amount of things to say drifted away.  Maybe real life got in the way.  Perhaps a new shiny object caught their attention.  The worst is a blog that has only had two posts in the last year that both start with, “I know I haven’t blogged here in a while, but that’s going to change…”

Reaping What You Sow

I think the key to keeping that from happening is to avoid static collection of content.  We need to cultivate that content just like a farmer would cultivate a field.  Care and feeding of writers and bloggers is very important.  Writers can be encouraged by leaving comments or by sharing articles that they have written.  Engaging them in discussion to feed new ideas is also a great way to keep the fire of inspiration burning.

One of the other important ways to keep content creators from getting stale is to look at your blogrolls and lists of followed blogs and move things around from time to time.  I know for a fact that many people don’t scroll very far down the list to find blogs to read.  The further up the list you are, the more likely people are to take the time to read what you have to say.  The key for those wanting to share great writers is to put them up higher on the list.  Too often a blog will be buried toward the bottom of a list and not get the kind of attention the writer needs to keep going.  More likely is a blog at the top of a list that hasn’t posted in weeks or months.

Everyone should do their part to cultivate content creators.  Don’t just settle for putting them on a list and calling it a day.  Revisit those lists frequently to be sure that the people on them are still producing.  For some it will be easy.  There are people like Ivan Pepelnjak and Greg Ferro that are born writers.  Others might need some encouragement.  If you see a good writer that has fallen off in the posting department lately, all it might take is a new comment on a recent post or a mention on Twitter/Facebook/Google+ asking how the writing is coming along.  Just putting the thought in their mind is often enough to get the creative juices flowing again.


Tom’s Take

I’m going to do my part as well.  I’m going to try to keep up with my blogroll a bit more often.  I’m going to make sure people are writing and showing everyone just how great they are.  Perhaps it’s a bit selfish on my part.  The more writers and creators there are the more choices I have to pick from when it’s time to pick new Field Day delegates.  Deep down inside, I just want more writers.  I want to spend as much time as possible every morning reading great articles and adding to the greater body of knowledge.  If that means I need to spend more time looking after those same writers, then I guess it’s time for me to be a writer farmer.

I Can Fix Gartner


I’ve made light of my issues with Gartner before. From mild twitching when the name is mentioned to outright physical acts of dismissal. Aneel Lakhani did a great job on an episode of the Packet Pushers dispelling a lot of the bad blood that most people have for Gartner. I listened and my attitude toward them softened somewhat. It wasn’t until recently that I finally realized that my problem isn’t necessarily with Gartner. It’s with those that use Gartner as a blunt instrument against me. Simply put, Gartner has a perception problem.

Because They Said So

Gartner produces a lot of data about companies in a number of technology-related spaces. Switches, firewalls, and wireless devices are all subject to ranking and data mining by Gartner analysts. Gartner takes all that data and uses it to give form to a formless part of the industry. They take inquiries from interested companies and produce a simple ranking for them to use as a yardstick for measuring how one cloud hosting provider ranks against another. That’s a good and noble cause. It’s what happens afterwards that shows what data in the wrong hands can do.

Gartner makes their reports available to interested parties for a price. The price covers the cost of the analysts and the research they produce. It’s no different than the work that you or I do. Because this revenue from the reports is such a large percentage of Gartner’s income, the only folks that can afford it are large enterprise customers or vendors. Enterprise customers are unlikely to share that information with anyone outside their organization. Vendors, on the other hand, are more than willing to share that information with interested parties. Provided that those parties offer up their information as a lead generation exercise and the Gartner report is favorable to the company. Vendors that aren’t seen as a leader in their particular slice of the industry aren’t particularly keen on doing any kind of advertising for their competitors. Leaders, on the other hand, are more than willing to let Gartner do their dirty work for them. Often, that conversation goes like this:

Vendor: You should buy our product. We’re the best.
Customer: Why are you the best? Are you the fastest or the lowest cost? Why should I buy your product?
Vendor: We’re the best because Gartner says so.

The only way that users outside the large enterprises see these reports is when a vendor publishes them as the aforementioned lead generation activity. This skews things considerably for a lot of potential buyers. This disparity becomes even more insulting when the club in question is a polygon.

Troubling Trigonometry

Gartner reports typically include a lot of data points. Those data points tell a story about performance, cost, and value. People don’t like reading data points. They like graphs and charts. In order to simplify the data into something visual, Gartner created their Magic Quadrant (MQ). The MQ distills the entire report into four squares of ranking. The MQ is the real issue here. It’s the worst kind of graph. It doesn’t have any labels on either axis. There’s no way to rank the data points without referring to the accompanying report. However, so many readers rarely read the report that the MQ becomes the *only* basis for comparison.

How much better is Company A at service provider routing than Company B? An inch? Half an inch? $2 billion in revenue? $2,000 gross margin? This is the key data that allows the MQ to be built. Would you know where to find it in the report if you had to? Most readers don’t. They take the MQ as the gospel truth and the only source of data. And the vendors love to point out that they are further to the top and right of the quadrant than their competitors. Sometimes, the ranking seems arbitrary. What makes a company be in the middle of the leaders quadrant versus toward the middle of the graph? Are all companies in the leaders quadrant ranked and placed against each other only? Or against all companies outside the quadrant? Details matter.

Assisting the Analysis

Gartner can fix their perception problems. It’s not going to be easy though. They have the same issue as the Consumers Union, producer of Consumer Reports. The Consumers Union publishes a magazine with no advertising, using donations and subscription revenues to offset operating costs. You don’t see television or print ads with Consumer Reports reviews pasted all over them. That’s because the Consumers Union specifically forbids their use for commercial purposes.

Gartner needs to take a similar approach if they want to fix the way they’re seen by others. Sell all the reports you want to end users that want to know the best firewall to buy. You can even sell those reports to the firewall vendors themselves. But the vendors should be forbidden from using those reports to sell their products. The integrity you gain from that stance may not offset the loss of vendor revenue right away. But it will gain you customers in the long run that will respect your stand against the misuse of Gartner reports as third-party advertising copy.

Put a small disclaimer at the bottom of every report: “Gartner provides analysis for interested parties only. Any use of this information as a sales tool or advertising instrument is unintended and prohibited.” That makes the purpose of the report clear while discouraging its use simply to sell another hundred widgets.

Another idea that might work to dispel advertising usage of the MQ is releasing last year’s report for little to no cost after 12 months.  That way, the small-to-medium enterprises gain access to the information without sacrificing their independence from a particular vendor.  I don’t think there will be any loss of revenue from these reports, as those that typically buy them will do so within 6-8 months of the release.  That will give the vendors very little room to leverage information that should be in the public domain anyway.  If you feel bad for giving that info away, charge a nominal printing fee of $5 or something like that.  Either way, you’ll blunt the advertising advantage quickly and still accomplish your goal of being seen as the leader in information gathering.


Tom’s Take

I don’t have to whinny like a horse every time someone says Gartner. It’s become a bit of a legend by now. What I do take umbrage with is vendors using data points intended to help customers rank purchases and declaring that the unlabeled graph of those data points is the sole arbiter of winners and losers in the industry. What if your company doesn’t fit neatly into a Magic Quadrant category? It’s hard to call a company like Palo Alto a laggard in traditional firewalls when they have something that is entirely non-traditional. Reader discretion is key. Use the data in the report as your guide, not the pretty pictures with dots all over them. Take that data and fold it into your own analysis. Don’t take anyone’s word for it. Make your own decisions. Then, give feedback. Tell people what you found and how accurate those Gartner reports were in making your decision. Don’t give your email address to a vendor that wants to harvest it simply to gain access to the latest report that (surprisingly) shows them to be the best. When the advertising angle dries up, vendors will stop using Gartner to sell their wares. When that day comes, Gartner will have a real opportunity to transcend their current image and become something more. And that’s a fix worth implementing.

CPE Credits for CCIE Recertification

Every year at Cisco Live, the CCIE attendees who are also NetVets get a special reception with John Chambers where they can ask one question of him (time permitting). I’ve had hit-or-miss success with this in the past, so I wanted to think hard about a question that affected CCIEs the world over and could advance the program. When I finally did ask my question, not only was it met with little acclaim, but some folks actually argued against my proposal. At that moment, I figured it was time to write a blog post about it.

I think the CCIE needs to adopt a Continuing Professional Education (CPE) route for recertification.

I can hear many of you out there now jeering me and saying that it’s a dumb idea.  Hear me out first before you totally dismiss the idea.

Many respected organizations that issue credentials have a program that records CPEs in lieu of retaking certification exams. ISACA, (ISC)^2, and even the American Bar Association use continuing education programs as a way of recertifying their members. If so many programs use them, what is the advantage?

CPEs ensure that certification holders are staying current with trends in technology.  It forces certified individuals to keep up with new advances and be on top of the game.  It rewards those that spend time researching and learning.  It provides a method of ensuring that a large percentage of the members are able to understand where technology is headed in the future.

There seems to be some hesitation on the part of CCIEs in this regard. Many in the NetVet reception told me outright I was crazy for thinking such a thing. They say that the only real measure of recertification is taking the written test. CCIEs have a blueprint that they need to know, and that is how we know what a CCIE is. CCIEs need to know spanning tree and OSPF and QoS.

Let’s take that as a given. CCIEs need to know certain things. Does that mean I’m not a real CCIE because I don’t know ATM, ISDN, or X.25? These are things that have appeared on previous written exams and labs. Why do we not learn them now? What happened to those technologies to move them out of the limelight and relegate them to the same pile where we find Token Ring and ARCnet? Technology advances every day. Things that we used to run years ago are now as foreign to us as steam power and pyramid construction.

If the only true test of a CCIE is to recertify on things they already know, why not make them take the lab exam every two years to recertify?  Why draw the line at simple multiple choice guessing?  Make them show the world that they know what they’re doing.  We could drop the price of the lab for recertification.  We could offer recert labs in other locations via the remote CCIE lab technology to ensure that people don’t need to travel across the globe to retake a test.  Let’s put some teeth in the CCIE by making it a “real” practical exam.

Of course, the lab recert example is silly and a bit much.  Why do we say that multiple choice exams should count?  Probably because they are easy to administer and grade.  We are so focused on ensuring that CCIEs retrain on the same subjects over and over again that we are blind to the opportunity to make CCIEs the point of the spear when it comes to driving new technology adoption.

CCIE lab revamps don’t come along every six months. They take years of examination and testing to ensure that the whole process integrates properly. In the fourth version of the CCIE lab blueprint, MPLS appeared for the first time as a lab topic. It took years of adoption in the wider enterprise community to show that MPLS was important to all networkers and not just service provider engineers. The irony is that MPLS appears in the blueprint right alongside Frame Relay, a technology which MPLS is rapidly displacing. We are still testing on a twenty-year-old technology because it represents so much of a networker’s life, even as it is being ripped out and replaced with better protocols.

Where’s the CCIE SDN? Why are emerging technologies so underrepresented in the CCIE?  One could argue that new tech needs time to become adopted and tested before it can be a valid topic.  But who does that testing and adoption?  CCIEs?  CCNPs? Unwitting CCNAs who have this thrust upon them because the CIO saw a killer SDN presentation and decided that he needed it right now!  The truth is somewhere in the middle, I think.

Rather than making CCIEs stop what they are working on every 18 months to read up and remember how 802.1d spanning tree functions or how to configure an NBMA OSPF-over-frame-relay link, why not reward them for investigating and proving out new technology like TRILL or OpenFlow? Let the research time count for something. The fastest way to stagnate a certification program is to force it in upon itself and only test on the same things year after year. I said as much in a previous CCIE post, which in many ways was the genesis of my question (and this post). If the only advantage CCIEs gain from studying new technology is a leg up when the CxO comes down to ask how network function virtualization is going to benefit the company, then that’s not much of an advantage.

CPEs can be anything.  Reading an article.  Listening to a webcast.  Preparing a presentation.  Volunteering at a community college.  Even attending Cisco Live, which I have been informed was once a requirement of CCIE recertification.  CPEs don’t have to be hard.  They have to show that CCIEs are keeping up with what’s happening with modern networking.  That stands in contrast to reading the CCIE Certification Guide for the fourth or fifth time and perusing 3-digit RFCs for technology that was developed during the Reagan administration.

I’m not suggesting that the CPE program totally replace the test.  In fact, I think those tests could be complementary.  Let CPEs recertify just the CCIE exam.  The written test could still recertify all the existing CCNA/CCNP level certifications.  Let the written stand as an option for those that can’t amass the needed number of CPE credits in the recertification period.  (ISC)^2 does this as do many others.  I see no reason why it can’t work for the CCIE.

There’s also the call of fraud and abuse of the system. In any honor system there will be fraud and abuse. People will do whatever they can to take advantage of any perceived weakness to gain an edge. Similar to (ISC)^2, an audit system could be implemented to flag questionable submissions, as well as random ones, to ensure that the certified folks are on the up and up. As of July 1, 2013, there are almost 90,000 CISSPs in the world. Somehow (ISC)^2 manages to audit all of those CPE submissions. I’m sure that Cisco can find a way to do it as well.
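As a rough sketch of what such an audit policy might look like, here’s one way to flag a random slice of submissions plus anything implausible. The field names, hour threshold, and sample rate are all invented for illustration; this isn’t how Cisco or (ISC)^2 actually runs audits.

```python
import random

def select_for_audit(submissions, sample_rate=0.05, max_hours=40, seed=None):
    """Flag submissions for audit: anything over a suspicious hour count,
    plus a random sample of everything else. Thresholds are illustrative."""
    rng = random.Random(seed)  # seeded for reproducibility in this example
    flagged = []
    for sub in submissions:
        suspicious = sub["hours"] > max_hours
        randomly_chosen = rng.random() < sample_rate
        if suspicious or randomly_chosen:
            flagged.append(sub["member_id"])
    return flagged

subs = [
    {"member_id": "CCIE-1001", "hours": 12},
    {"member_id": "CCIE-1002", "hours": 95},  # implausibly high claim
    {"member_id": "CCIE-1003", "hours": 20},
]
print(select_for_audit(subs, seed=42))  # prints ['CCIE-1002']
```

The point is that even a small random sample rate, combined with automatic flags on outliers, keeps the honor system honest without requiring every submission to be reviewed by hand.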


Tom’s Take

People aren’t going to like my suggestion.  I’ve already heard as much.  I think that rewarding those that show initiative and learn all they can is a valuable option.  I want a legion of smart, capable individuals vetting new technology and keeping the networking world one step into the future.  If that means reworking the existing certification program a bit, so be it.  I’d rather the CCIE be on the cutting edge of things rather than be a laggard that is disrespected for having its head stuck in the sand.

If you disagree with me or have a better suggestion, I implore you to leave a comment to that effect. I want to really understand what the community thinks about this.