Betting On The Right Horse

The announcement of the merger of Alcatel-Lucent and Nokia was a pretty big discussion last week. One of the quotes that kept being brought up in several articles was from John Chambers of Cisco. Chambers has said the IT industry is in for a big round of “brutal consolidation” spurred by “missed market transitions”, a favorite term of his. While I agree that consolidation is coming in the industry, I don’t think market transitions are the driver. Instead, it helps to think of it more like a day at the races.

Tricky Ponies

Startups in the networking industry have to find a hook to get traction with investors and customers. Since you can’t boil the ocean, you have to stand out. You need to find an application that gives you the capability to sell into a market. That is much easier to do with SDN than with hardware-based innovation. The time-to-market for software is much shorter than the time needed to ramp up production of actual devices.

Being a one-trick pony isn’t a bad thing when it comes to SDN startups. If you pour all your talent into one project, you get the best you can build. If that happens to be what your company is known for, you can hit a home run with your potential customers. You could be the overlay company. Or the policy company. Or the Docker networking layer company.

That rapid development time and relative ease of creation make startups a tantalizing target for acquisition as well. Bigger companies looking to develop expertise often buy that expertise. Acquiring either the product or the team that built it gives the acquiring company a new horse in its stable.

If you can assemble an entire stable of ponies, you can build a networking company that addresses a lot of the needs of your customers. In fact, that’s how Cisco has managed to thrive to the point where they can gamble on those “market transitions”. The entity we call Cisco is really Crescendo, Insieme, Nuova, Andiamo, and hundreds of other single-focus networking companies. There’s nothing wrong with that strategy if you have patience and good leadership.

Buy Your Own Stable

If you don’t have patience but have deep pockets, you will probably end up going down a different road. Rather than buying a startup here and there to add to a core strategy, you’ll be buying the whole strategy. That’s what Dell did when they bought Force10. If the rumors are true, that’s what EMC is looking to do soon.

Buying a company to provide your strategy has benefits. You can immediately compete. You don’t have to figure out synergies. Just sell those products and keep moving forward. You may not be the most agile company on the market but you will get the job done.

The issue with buying the strategy is most often “brain drain”. We see brain drain with a small startup going to a mid-sized company. Startup founders aren’t usually geared to stay in a corporate structure for long. They vest their interest and cash out. Losing a founder or key engineer on a product line is tough, but can be overcome with a good team.

What happens when the whole team walks out the door? If the larger acquiring company mistreats the acquired assets or influences creativity in a negative way, you can quickly find your best and brightest teams heading for greener pastures. You have to make sure those people are taken care of and have their needs met. Otherwise your new product strategy will crumble before you know it.


Tom’s Take

The Nokia/Alcatel deal isn’t the last time we’ll hear about mergers of networking companies. But I don’t think it’s because of missed market transitions or shifting strategies. It comes down to companies with one or two products wanting protection from external factors. There is strength in numbers. And those numbers will also allow development of new synergies, just like horses in a stable learning from the other horses. If you’re a rich company with an interest in racing, you aren’t going to assemble a stable piece by piece. You’ll buy your way into an established stable. In the end, all the horses end up in a stable owned by someone. Just make sure your horse is the right one to bet on.

Going Out With Style

Watching the HP public cloud discussion has been an interesting lesson in technology and how it is supported and marketed. HP isn’t the first company to publish a bold statement ending support for a specific technology or product line only to go back and rescind it a few days later. Some think that a problem like that shows that a company has some inner turmoil with regards to product strategy. More often than not, the real issue doesn’t lie with the company. It’s the customers’ fault.

No Lemonade From Lemons

It’s no secret that products have a lifespan. No matter how popular something might be with customers there is always a date when it must come to an end. This could be for a number of reasons. Technology marches on each and every day. Software may not run on newer hardware. Drivers may not be able to be written for new devices. CPUs grow more powerful and require new functions to unlock their potential.

Customers hate the idea of obsolescence. If you tell them the thing they just bought will be out of date in six years they will sneer at you. No matter how fresh the technology might be, the idea of it going away in the future unnerves customers. Sometimes it’s because the customers have been burned on technology purchases in the past. For every VHS and Blu-ray player sold, someone was just as happy to buy a Betamax or HD-DVD unit that is now collecting dust.

That hatred of obsolescence sometimes keeps things running well past their expiration date. The most obvious example in recent history is Microsoft being forced to support Windows XP. Prior to Windows XP, Microsoft supported consumer releases of Windows for about five years. Windows 95 was released in 1995 and support ended in 2001. Windows 98 reached end-of-life a few years later, in 2006. Windows 2000 enjoyed ten years of support thanks to a shared codebase with popular server operating systems. Windows XP should have reached end-of-life shortly after the release of Windows Vista. Instead, the low adoption rate of Vista pushed system OEMs to keep installing Windows XP on their offerings. Even Windows 7 failed to move the needle enough for some consumers to get off of XP. It finally took Microsoft dropping the hammer and setting a final end of extended support date in 2014 to get customers to migrate away from Windows XP. Even then, some customers were asking for an extension to the thirteen-year support date.

Microsoft kept supporting an OS three generations old because customers didn’t want to feel like XP had finally given up the ghost. Even though drivers couldn’t be written and security holes couldn’t be patched, consumers still wanted to believe that they could run XP forever. Even if you bought one of the last available copies of Windows XP when you purchased your system, you still got as much support for your OS as Microsoft gave Windows 95/98. Never mind that the programmers had moved on to other projects or had squeezed every last ounce of capability from the software. Consumers just didn’t want to feel like they’d been stuck with a lemon more than a decade after it had been created.

The Lesson of the Lifecycle

How does this apply to situations today? Companies have to make customers understand why things are being replaced. A simple announcement (or worse, a hint of an unofficial announcement from a third-party source) isn’t enough any more. Customers may not like hearing that their favorite firewall or cloud platform is going away, but if you tell them the reasons behind the decision they will be more accepting.

Telling your customers that you are moving away from a public cloud platform to embrace hybrid clouds or to partner with another company doing a better job or offering more options is the way to go. Burying the announcement in a conversation with a journalist and then backtracking later isn’t the right method. Customers want to know why. Vendors should have faith that customers are smart enough to understand strategy. Sure, there’s always the chance that customers will push back like they did with Windows XP. But there’s just as much chance they’ll embrace the new direction.


Tom’s Take

I’m one of those consumers that hates obsolescence. Considering that I’ve got a Cius and a Flip it should be apparent that I don’t bet on the right horse every time. But I also understand the reasons why those devices are no longer supported. I choose to use Windows 7 on my desktop for my own reasons. I know why it has been put out to pasture. I’m not going to demand Microsoft devote time and energy to a tired platform when Windows 10 needs to be finished.

In the enterprise technology arena, I want companies to be honest and direct when the time comes to retire products. Don’t hem and haw about shifting landscapes and concise technology roadmaps. Tell the world that things didn’t work out like you wanted and give us the way you’re going to fix it next time.

Are We The Problem With Wearables?

Something, Something, Apple Watch.

Oh, yeah. There needs to be substance in a wearable blog post. Not just product names.

Wearables are the next big product category that is driving innovation. The advances being made in screen clarity, battery life, and component miniaturization are being felt across the rest of the device market. I doubt Apple would have been able to make the new MacBook logic board as small as it is without a few things learned from trying to cram transistors into a watch case. But are we, the people buying these devices, sending the wrong messages about wearable technology?

The Little Computer That Could

If you look at the biggest driving factor behind technology today, it comes down to size. Technology companies are making things smaller and lighter with every iteration. If the words thinnest and lightest don’t appear in your presentation at least twice then you aren’t on the cutting edge. But is this drive because tech companies want to make things tiny? Or is it more that consumers are driving them that way?

Yes, people the world over are now complaining that technology should have other attributes besides size and weight. A large contingent says that battery life is now more important than anything else. But would you be okay with lugging around an extra pound of weight that equates to four more hours of typing time? Would you give up your 13-inch laptop in favor of a 17-inch model if the battery life were doubled?

People send mixed signals about the size and shape of technology all the time. We want it fast, small, light, powerful, and with the ability to run forever. Tech companies give us as much as they can, but tradeoffs must be made. Light and powerful usually means horrible battery life. Great battery life and low weight often means terrible performance. No consumer has ever said, “This product is exactly what I wanted with regards to battery, power, weight, and price.”

Where Wearables Dare

As Jony Ive said this week, “The keyboard dictated the size of the new MacBook.” He’s absolutely right. Laptops and desktops have a minimum size that is dictated by the screen and keyboard. Has anyone tried typing on a keyboard cover for an iPad? How about an iPad Mini cover? It’s a miserable experience, even if you don’t have sausage fingers like me. When the size of the device dictates the keyboard, you are forced to make compromises that impact user experience.

With wearables, the bar shifts away from input to usability. No wearable watch has a keyboard, virtual or otherwise. Instead, voice control is the input method. Spoken words drive communication beyond navigation. For some applications, like phone calls and text messages, this is preferred. But I can’t imagine typing a whole blog post or coding on a watch. Nor should I. The wearable category is not designed for hard-core computing use.

That’s where we’re getting it wrong. Google Glass was never designed to replace a laptop. Apple Watch isn’t going to replace an iPhone, let alone an iMac. Wearable devices augment our technology workflows instead of displacing them. Those fancy monocles you see in sci-fi movies aren’t the entire computer. They are just an interface to a larger processor on the back end. Trying to shrink a laptop to the size of a silver dollar is impossible. If it were, we’d have that by now.

Wearables are designed to give you information at a glance. Google Glass allows you to see notifications easily and access information. Smart watches are designed to give notifications and quick, digestible snippets of need-to-know information. Yes, you do have a phone for that kind of thing. But my friend Colin McNamara said it best:

I can glance at my watch and get a notification without getting sucked into my phone


Tom’s Take

That’s what makes the wearable market so important. It’s not having the processing power of a Cray supercomputer on your arm or attached to your head. It’s having that power available when you need it, yet having the control to get information you need without other distractions. Wearables free you up to do other things. Like building or creating or simply just paying attention to something. Wearables make technology unobtrusive, whether it’s a quick text message or tracking the number of steps you’ve taken today. Sci-Fi is filled with pictures of amazing technology all designed to do one thing – let us be human beings. We drive the direction of product development. Instead of crying for lighter, faster, and longer all the time, we should instead focus on building the right interface for what we need and tell the manufacturers to build around that.

 

Are Your Tweets Really Your Own?

We’ve all seen it recently. Twitter bios and blog profile pages with some combination of the following:

My tweets are my own.

Retweets are not endorsements.

My views do not represent my employer.

It has come to the point where the people in the industry are more visible and valuable than the brands they work for. Personal branding has jumped to the forefront of marketing strategies. But with that rise in personal branding comes a huge risk for companies. What happens when one of our visible stars says something we disagree with? What happens when we have to pull back?

Where Is My Mind?

Social media works best when it’s genuine. People sharing thoughts and ideas with each other without filters or constraints. Where it breaks down is when an external force starts interfering with that information exchange. Think about corporate social media policies that restrict what you can say. Or even policies that say your Twitter handle has to include the company you work for (yes, that exists). Why should my profile have to include miles of disclaimers telling people that I’m not a robot?

Is it because we have become so jaded as to believe that people can’t divorce their professional life from their personal life? Or is it because the interference from people telling you the “right way” to do social media has forced people to become robotic in their approach to avoid being disciplined?

Personal accounts that do nothing but reinforce the party line are usually unimportant to the majority of social media users. The real draw with speaking to someone from a company is the interaction behind the message. If a person really believes in the message then it shows through in their discussions without the need to hit all the right keywords or link to the “right” pages on a site.

Voices Carry

If you want more genuine, organic interaction with your people in the social world, you need to take off the leash. Don’t force them to put disclaimers in their profiles. Don’t make them take up valuable real estate telling the world what most of them already know. People speak for themselves. Their ideas and thoughts belong to them. Yes, you can tell the difference between when someone is parroting the party line and giving a real, honest introspective look at a discussion. People are not robots. Social media policies shouldn’t treat them as such.


Tom’s Take

I find myself in the situation that I’ve described above. I have to be careful with the things I say sometimes. I’m always ready to hit the Delete button on a tweet before it goes out. But what I don’t do is disclaim all over the place that “my tweets are my own”. Because everyone that I work with knows my mind. They know when I’m speaking for me and when I’m not. There is trust that I will speak my mind and stand by it. That’s the key to being honest in social media. Trust that your audience will understand you. Have faith in them. Which is something that a profile disclaimer can’t do.

 

Hypermyopia In The World Of Networking

The more debate I hear over protocols and product positioning in the networking market today, the more I realize that networking has a very big problem with myopia when it comes to building products. Sometimes that’s good. But when you can’t even see the trees for the bark, let alone the forest, then it’s time to reassess what’s going on.

Way Too Close

Sit down in a bar in Silicon Valley and you’ll hear all kinds of debates about which protocols you should be using in your startup’s project. OpenFlow has its favorite backers. Others say things like Stateless Transport Tunneling (STT) are the way to go. Still others have backed a new favorite draft protocol that’s being fast-tracked at the IETF meetings. The debates go on and on. It ends up looking a lot like this famous video.

But what does this have to do with the product? In the end, do the users really care which transport protocol you used? Is the forwarding table population mechanism of critical importance to them? Or are they more concerned with how the system works? How easy it is to install? How effective it is at letting them do their jobs?

The hypermyopia problem makes the architecture and engineering people on these projects focus on the wrong set of issues. They think that an elegant and perfect solution to a simple technical problem will be the magical panacea that will sell their product by the truckload. They focus on such minute sets of challenges that they often block out the real problems that their product is going to face.

Think back to IBM in the early days of the Internet. Does anyone remember Blue Lightning? How about the even older MCA Bus? I bet if I said OS/2 I’d get someone’s attention. These were all products that IBM put out that were superior to their counterparts in many ways. Faster speeds, better software architecture, and even revolutionary ideas about peripheral connection. And yet all of them failed miserably in one way or another. Was it because they weren’t technically complete? Or was it because IBM had a notorious problem with marketing and execution when it came to non-mainframe computing?

Take A Step Back

Every writer in technology uses Apple as a comparison company at some point. In this case, you should take a look at their simplicity. What protocol does FaceTime use? Is it SIP? Or H.264? Does it even matter? FaceTime works. Users like that it works. They don’t want to worry about traversing firewalls or having supernodes available. They don’t want to fiddle with settings and tweak timers to make a video call work.

Enterprise customers are very similar. Think about WAN technologies for a moment. Entire careers have been built around finding easy ways to connect multiple sites together. We debate Frame Relay versus ATM. Should we use MPLS? What routing protocol should we use? The debates go on and on. Yet the customer wants connectivity, plain and simple.

At the recent Networking Field Day 9, two companies that specialize in software-defined WAN (SD-WAN) had a chance to present. VeloCloud and CloudGenix both showcased their methods for creating WANs with very little need for user configuration. The delegates were impressed that the companies’ technologies “just worked”. No tuning timers. No titanic arguments about MPLS. Just simple wizards and easy configuration.

That’s what enterprise technology should be. It shouldn’t involve a need to get so close to the technology that you lose the big picture. It shouldn’t be a series of debates about which small technology choice to make. It should just work. Users that spend less time implementing technology spend more time using it. They spend more time happy with it. And they’re more likely to buy from you again.


Tom’s Take

If I hear one more person arguing the merits of their technology favorite again, I may throw up. Every time someone comes to me and tells me that I should bet on their horse in the race because it is better or faster or more elegant, I want to grab them by the shoulders and shake some sense into them. People don’t buy complicated things. People hate elegant but overly difficult systems. They just want things to work at the end of the day. They want to put as little thought into a system as they can to maximize the return they get from it. If product managers spent the next iteration of design focusing on ease-of-use instead of picking the perfect tunneling protocol, I think they would see a huge return on their investment in the long run. And that’s something you can see no matter how close you are.

 

Rules Shouldn’t Have Exceptions

On my way to Virtualization Field Day 4, I ran into a bit of a snafu at the airport that made me think about policy and application. When I put my carry-on luggage through the X-ray, the officer took it to the back and gave it a thorough screening. During that process, I was informed that my double-edged safety razor would not be able to make the trip (or the blade at least). I was vexed, as this razor had flown with me for at least a whole year with nary a peep from security. When I related as much to the officer, the response was “I’m sorry no one caught it before.”

Everyone Is The Same, Except For Me

This incident made me start thinking about policies in networking and security and how often they are arbitrarily enforced. We see it every day. The IT staff comes up with a new plan to reduce mailbox sizes or reduce congestion by enforcing quality of service (QoS). Everyone is all for the plan during the discussion stages. When the time comes to implement the idea, the exceptions start happening. Upper management won’t have mailbox limitations. The accounting department is exempt from the QoS policy. The list goes on and on until it’s larger than the policy itself.

Why does this happen? How can a perfect policy go from planning to implementation before it falls apart? Do people sit around making up rules they know they’ll never follow? That does happen in some cases, but more often the people the policy will impact the most have no representation in the planning process.

Take mailboxes for example. The IT department, being diligent technology users, strives for inbox zero every day. They process and deal with messages. They archive old mail. They keep their mailbox a barren wasteland of in-process things and shuffle everything else off to the static archive. Now, imagine an executive. These people are usually overwhelmed by email. They process what they can but the wave will always overtake them. In addition, they have no archive. Their read mail sits around in folders for easy searching and quick access when a years-old issue resurfaces.

In modern IT, any policies limiting mailbox sizes would be decided by the IT staff based on their own mailbox sizes. Sure, a 1 GB limit sounds great. Then, when the policy is implemented the executive staff pushes back with their 5 GB (or larger) mailboxes and says that the policy does not apply to them. IT relents and finds a way to make the executives exempt.

In a perfect world, the executive team would have been consulted or had representation on the planning team prior to the decision. The idea of having huge mailboxes would have been figured out in the planning stage and dealt with early instead of making exceptions after the fact. Maybe the IT staff needed to communicate more. Perhaps the executive team needed to be more involved. Those are problems that happen every day. So how do we fix them?

Exceptions Are NOT The Rule

The way to increase buy-in for changes and increase communication between stakeholders is easy but not without pain. When policies are implemented, no deviations are allowed. It sounds harsh. People are going to get mad at you. But you can’t budge an inch. If a policy exception is not documented in the policy it will get lost somewhere. People will continue to be uninvolved in the process as long as they think they can negotiate a reprieve after the fact.

IT needs to communicate up front exactly what’s going into the change before the implementation. People need to know how they will be impacted. Ideally, that will mean that people have talked about the change up front so there are no surprises. But we all know that doesn’t happen. So making a “no exceptions” policy or rule change will get them involved. Because not being able to get out of a rule means you want to be there when the rules get decided, so you can make your position clear and ensure the needs of you and your department are met.
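As a thought experiment, a “no exceptions” policy is easy to spot in code: there is exactly one rule and no special-case branch. Here’s a minimal Python sketch along those lines; the 1 GB quota and the users are invented purely for illustration:

```python
QUOTA_MB = 1024  # the limit everyone agreed to in the planning stage

def over_quota(mailboxes: dict) -> list:
    """Flag every mailbox over the limit; note there is no exemption list."""
    return sorted(user for user, size_mb in mailboxes.items()
                  if size_mb > QUOTA_MB)

mailboxes = {"it_admin": 200, "ceo": 5120, "accounting": 900}
print(over_quota(mailboxes))  # ['ceo']
```

The moment an exemption set creeps into that function, the policy stops being a policy and becomes a negotiation.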


Tom’s Take

As I said yesterday on Twitter, people don’t mind rules and policies. They don’t even mind harsh or restrictive rules. What they have a problem with is when those rules are applied in an arbitrary fashion. If the corporate email policy says that mailboxes are supposed to be no more than 1 GB in size then people in the organization will have a problem if someone has a 20 GB mailbox. The rules must apply to everyone equally to be universally adopted. Likewise, rules must encompass as many outlying cases as possible in order to prevent one-off exceptions for almost everyone. Planning and communication are more important than ever when planning those rules.

Time For A Data Diet?

I’m running out of drive space. Not just on my laptop SSD or my desktop HDD. But everywhere. The amount of data that I’m storing now is climbing at an alarming rate. What’s worse is that I often forget I have some of it until I go spelunking back through my drive to figure out what’s taking up all that room. And it’s a problem that the industry is facing too.

The Data Junkyard

Data is accumulating. You can’t deny that. Two factors have led to this. The first is that we now log more data from things than ever before. In this recent post from Chris Evans (@ChrisMEvans), he mentions that Virgin Atlantic 787s are generating 500GB of data per flight. I’m sure that includes telemetry, aircraft performance, and other debugging information that someone at some point deemed crucial. The second is carelessness: in another recent article, Jacques Mattheij (@JMattheij) mentions app developers who left the debug logging turned on, generating enormous data files while the system was in operation.

Years ago we didn’t have the space to store that much data. We had to be very specific about what needed to be captured and stored for long periods of time. I can remember having a 100MB hard drive in my first computer. I can also remember uninstalling and deleting several things in order to put a new program on. Now there is so much storage space that we don’t worry about running out unless a new application makes outrageous demands.

You Want To Use The Data?

The worst part about all this data accumulation is that once it’s been stored, no one ever looks at it again. This isn’t something that’s specific to electronic data, though. I can remember seeing legal offices with storage closets dedicated to boxes full of files. Hospitals have services that deal with medical record storage. In the old days, casinos hired vans to shuffle video tapes back and forth between vaults and security offices. All that infrastructure just on the off-chance that you might need the data one day.

With Big Data being a huge funding target and buzzword source today, you can imagine that every other startup in the market is offering to give you some insight into all that data that you’re storing. I’ve talked before about the drive for analysis of data. It’s the end result of companies trying to make sense of the chaos. But what about the stored data?

Odds are good that it’s going to just sit there in perpetuity. Once the analysis is done on all this data, it will either collect dust in a virtual file box until it is needed again (perhaps) in the far future or it will survive until the next SAN outage and never be reconstructed from backup. The funny thing about this collected data cruft is that no one misses it until the subpoena comes.

Getting Back To Fighting Weight

The solution to the problem isn’t doing more analysis on data. Instead, we need to start being careful about what data we’re storing in the first place. When you look at personal systems like Getting Things Done, they focus on stemming the flow of data quickly to give people more time to look at the important things. In much the same way, instead of capturing every bit coming from a data source and deciding later what to do with it, the decision needs to be made right away. Data Scientists need to start thinking like they’re on a storage budget, not like they’ve been handed the keys to the SAN kingdom.

I would be willing to bet that a few discrete decisions in the data collection process about what to keep and what to throw away would significantly cut down on the amount of data we need to store and process. Less time spent querying and searching through that mess would optimize data retrieval systems and make our infrastructure run much faster. Think of it like spring cleaning for the data garage.
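To make that idea concrete, here’s a minimal Python sketch of a decide-at-collection filter. The field names, log levels, and keep-list are all invented for illustration; the point is that the keep-or-drop decision happens at ingest, before anything hits the storage array:

```python
from typing import Optional

# Hypothetical keep-list decided up front, not reconstructed after the fact.
KEEP_FIELDS = {"timestamp", "engine_temp", "fuel_flow"}

def slim_record(raw: dict) -> Optional[dict]:
    """Decide at collection time what survives; return None to drop a record."""
    # Debug chatter never reaches long-term storage.
    if raw.get("level") == "DEBUG":
        return None
    # Keep only the fields someone agreed were worth storing.
    return {k: v for k, v in raw.items() if k in KEEP_FIELDS}

incoming = [
    {"timestamp": 1, "level": "DEBUG", "msg": "heartbeat"},
    {"timestamp": 2, "level": "INFO", "engine_temp": 412, "cabin_note": "ok"},
]

stored = [r for r in (slim_record(x) for x in incoming) if r is not None]
print(stored)  # [{'timestamp': 2, 'engine_temp': 412}]
```

The same pattern applies to network telemetry: decide the keep-list when the collector is designed, not when the drives fill up.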


Tom’s Take

I remember a presentation at Networking Field Day a few years ago when Statseeker told us that they could scan data points from years in the past down to the minute. The room collectively gasped. How could you look that far back? How big are the drives in your appliance? The answer was easy: they don’t store every little piece of data coming from the system. They instead look at very specific things that tell them about the network and then record those with an eye to retrieval in the future. They optimize at storage time to lessen the impact of lookups in the future.

Rather than collecting everything in the world in the hopes that it might be useful, we need to get away from the data hoarding mentality and trim down to something more agile. It’s the only way our data growth problem is going to get better in the near future.


If you’d like to hear some more thoughts on the growing data problem, be sure to check out the Tech Talk sponsored by Fusion-io.