The Sunset of Windows XP


The end is nigh for Microsoft’s most successful operating system of all time. Windows XP is finally set to reach the end of support next Tuesday. After twelve and a half years, and having its death sentence commuted at least twice, it’s time for the sunset of the “experienced” OS. Article after article has been posted as of late discussing the impact of the end of support for XP. The sky is falling for the faithful. But is it really?

Yes, as of April 8, 2014, Windows XP will no longer be supported from a patching perspective. You won't be able to call in and get help on a cryptic error message. But will your computer spontaneously combust? Is your system going to refuse to boot entirely and force you at gunpoint to go out and buy a new Windows 8.1 system?

No. That's silly. XP is going to continue to run just as it has for the last decade. XP will be as secure on April 9th as it was on April 7th, and it will still function. Rather than writing about how evil Microsoft is for abandoning an operating system after one of the longest support cycles in their history, let's instead look at why XP is still so popular and how we can fix that.

XP is still a popular OS with manufacturing systems and things like automated teller machines (ATMs). That might be because of the ease with which XP could be installed onto commodity hardware. It could also be due to the difficulty in writing drivers for Linux for a large portion of XP's life. For better or worse, IT professionals have inherited a huge number of embedded systems running an OS that got its last major service pack almost six years ago.

For a moment, I'm going to take the ATMs out of the equation. I'll come back to them in a minute. For the other embedded systems that don't dole out cash, why is support so necessary? If it's a manufacturing system that's been running for the last three or four years, what is another year of support from Microsoft going to get you? Odds are good that any support that manufacturing system needs is going to be entirely unrelated to the operating system. If we treat these systems just like an embedded OS that can't be changed or modified, we find that we can still develop patches for the applications running on top of the OS. And since XP is one of the most well-documented systems out there, finding folks to write those patches shouldn't be difficult.

In fact, I'm surprised there hasn't been more talk of third-party vendors writing patches for XP. I saw more than a few start popping up once Windows 2000 entered the end of its life. It's all a matter of money. Banks have already started negotiating with Microsoft to get an extension of support for their ATM networks. It's funny how a few million dollars will do that. SMBs are likely to be left out in the cold for specialized systems due to the prohibitive cost of an extended maintenance contract, either from Microsoft or another third party. After all, the money to pay those developers needs to come from somewhere.


Tom’s Take

Microsoft is not the bad guy here. They supported XP as long as they could. Technology changes a lot in 12 years. The users aren't to blame either. The myth of a fast upgrade cycle doesn't exist for most small businesses and enterprises. Every month that your PC can keep running the accounting software is another month of profits. So whose fault is the end of the world?

Instead of looking at it as the end, we need to start learning how to cope with unsupported software. Rather than tilting at the windmills in Redmond and begging for just another month or two of token support, we should be investigating ways to transition XP systems that can't be upgraded within 6-8 months to an embedded systems plan. We've reached the point where we can't count on anyone else to fix our XP problems but ourselves. Once we have the known, immutable fact of no more XP support, we can start planning for the inevitable – life after retirement.

The Alignment of Net Neutrality

Net neutrality has been getting a lot of press as of late, especially as AT&T and Netflix have been sparring back and forth in the press.  The FCC has already said they are going to take a look at net neutrality to make sure everyone is on a level playing field.  ISPs have already made their position clear.  Where is all of this posturing going to leave the users?

Chaotic Neutral

Broadband service usage has skyrocketed in the past few years. Ideas that would never have been possible even 5 years ago are now commonplace. Netflix and Hulu have made it possible to watch television without cable. Internet voice over IP (VoIP) allows a house to have a phone without a phone line. Amazon has replaced weekly trips to the local department store for all but the most crucial staple items. All of this is made possible by high-speed network connectivity.

But broadband doesn’t just happen.  ISPs must build out their networks to support the growing hunger for faster Internet connectivity.  Web surfing and email aren’t the only game in town.  Now, we have streaming video, online multiplayer, and persistently connected devices all over the home.  The Internet of Things is going to consume a huge amount of bandwidth in an average home as more smart devices are brought online.  ISPs are trying to meet the needs of their subscribers.  But are they going far enough?

ISPs want to build networks their customers will use, and more importantly pay to use. They want to ensure that complaints are kept to a minimum while providing the services that customers demand. Those ISP networks cost a hefty sum. Given the choice between paying to upgrade a network and trying to squeeze another month or two out of existing equipment, you can guarantee the ISPs are going to take the cheaper route. Coincidentally, that's one of the reasons why the largest backers of 802.1aq Shortest Path Bridging came from the service provider side. Unlike TRILL, SPB doesn't require new hardware to forward frames. ISPs can use existing equipment to deliver SPB with no out-of-pocket expenditure on hardware. That little bit of trivia should give you an idea why ISPs are trying to do away with net neutrality.

True Neutral

ISPs want to keep using their existing equipment as long as possible. Every dollar they make from this cycle's capital expenditure means a dollar of profit in their pocket before they have to replace a switch. If there were a way to charge even more money for existing services, you can bet they would do it. Which is why this infographic hits home for most:

[Infographic: tiered, per-service internet pricing]

Charging for service tiers would suit ISPs just fine.  After all, as the argument goes, you are using more than the average user.  Shouldn’t you shoulder the financial burden of increased network utilization?  That’s fine for corner cases like developers or large consumers of downstream bandwidth.  But with Netflix usage increasing across the board, why should the ISP charge you more on top of a Netflix subscription?  Shouldn’t their network anticipate the growing popularity of streaming video?

The other piece of the tiered offering above that should give pause is the set of common carrier rules for service providers. Common carriers are absolved of liability for the things they transport because they agree to transport everything offered to them. What do you think would happen if those carriers suddenly decided they wanted to discriminate about what they send? If that discrimination revokes their common carrier status, what's to stop them from acting like a private carrier and refusing to transport certain applications or content? Maybe forcing a video service to negotiate a separate peering agreement with every ISP they want to use? Who would do that?

Neutral Good

Net Neutrality has to exist to ensure that we are free to use the services we want to consume. Sure, this means that things like Quality of Service (QoS) can't be applied to packets, so that all traffic is treated equally. The inverse is to have guaranteed delivery for an additional fee. And every service you add on top would incur more fees. New multiplayer game launching next week? The ISP will charge you an extra $5 per month to ensure you have a low ping time to beat the other guy. If you don't buy the package, your multiplayer traffic gets dumped in with Netflix and the rest of the bulk traffic.

This is part of the reason why Google Fiber is such a threat to existing ISPs. When the only options for local loop delivery are the cable company and the phone company, it's difficult to avoid tiered offerings in the absence of neutrality. With viable third-party fiber buildouts like Google's starting to spring up, competition becomes a bargaining chip to increase speeds to users and upgrade backbones to support heavy usage. If you don't believe that, look at what AT&T did immediately after Google announced Google Fiber in Austin, TX.


Tom’s Take

ISPs shouldn't be able to play favorites with their customers. End users are paying for a connection. End users are also paying services to use their offerings. Why should I have to pay for a service twice just because the ISP wants to charge me more in a tiered setup? That smells of a protection racket in many ways. I can imagine the ISP techs sitting there in a slick suit saying, "That's a nice connection you got there. It would be a shame if something were to happen to it." Instead, it's up to the users to demand that ISPs offer free and unrestricted access to all content. In some cases, that will mean backing alternatives and "voting with your dollar" to make the message heard loud and clear. I won't sign up for services that have data usage caps or metered speed limits past a certain ceiling. I would drop any ISP that wants me to pay extra just because I decide to start using a video streaming service or a smart thermostat. It's time for ISPs to understand that hardware should be an investment in future customer happiness and not a tool that's used to squeeze another dime out of their user base.

Throw More Storage At It

All of this has happened before. All of this will happen again.

The more time I spend listening to storage engineers talk about the pressing issues they face in designing systems in this day and age, the more I'm convinced that we fight the same problems over and over again with different technologies. Whether it be networking, storage, or even wireless, the same architecture problems crop up in new ways and require different engineers to solve them all over again.

Quality is Problem One

A chance conversation with Howard Marks (@DeepStorageNet) at Storage Field Day 4 led me to start thinking about these problems.  He was talking during a presentation about the difficulty that storage vendors have faced in implementing quality of service (QoS) in storage arrays.  As Howard described some of the issues with isolating neighboring workloads and ensuring they can’t cause performance issues for a specific application, I started thinking about the implementation of low latency queuing (LLQ) for voice in networking.

LLQ was created to solve a specific problem. High-volume, low-bandwidth flows like voice can starve other traffic in traditional priority queuing systems, so LLQ gives them strict priority while policing how much of the link they can claim. In much the same way, applications that demand high amounts of input/output operations per second (IOPS) while storing very little data can cause huge headaches for storage admins. Storage has tried to solve this problem with hardware in the past by creating things like write-back caching or even super-fast flash storage caching tiers.
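
To make the comparison concrete, here is a toy model of the LLQ idea in Python: latency-sensitive packets get serviced first, but a policer caps how much of the link they can claim so they can't starve the default queue. The link rate, packet sizes, and cap are invented for illustration and don't come from any real platform's defaults.

```python
# Toy model of low latency queuing: a strict priority queue that is policed so it
# cannot starve the default queue. All sizes and rates are invented for the example.
from collections import deque

LINK_BYTES_PER_TICK = 3000       # what the "link" can send per scheduling tick
PRIORITY_CAP_BYTES = 1000        # policer on the priority queue per tick

def schedule(priority_q, default_q, ticks):
    """Serve the priority queue first, up to its policed cap, then give the
    leftover link budget to the default queue."""
    sent = {"priority": 0, "default": 0}
    for _ in range(ticks):
        budget = LINK_BYTES_PER_TICK
        cap = min(budget, PRIORITY_CAP_BYTES)
        while priority_q and priority_q[0] <= cap:
            pkt = priority_q.popleft()
            cap -= pkt
            budget -= pkt
            sent["priority"] += pkt
        while default_q and default_q[0] <= budget:
            pkt = default_q.popleft()
            budget -= pkt
            sent["default"] += pkt
    return sent

voice = deque([200] * 100)   # lots of small, latency-sensitive packets
bulk = deque([1500] * 20)    # big data packets
print(schedule(voice, bulk, ticks=10))
# {'priority': 10000, 'default': 15000} -- both classes get serviced every tick
```

Take the policer out (set the cap equal to the full link budget) and the small-packet queue eats the entire link on every tick until it drains, which is exactly the starvation problem the policed priority queue exists to prevent.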

Make It Go Away

In fact, a lot of the problems in storage mirror those from the networking world many years ago. Performance issues used to have a very simple solution – throw more hardware at the problem until it goes away. In networking, you throw bandwidth at the issue. In storage, more IOPS are the answer. When hardware isn't pushed to the absolute limit, the answer will always be to keep driving it higher and higher. But what happens when performance can't fix the problem any more?

Think of a sports car like the Bugatti Veyron. It is the fastest production car available today, with a top speed well over 250 miles per hour. In fact, Bugatti refuses to talk about the absolute top speed, instead countering, "Would you ask how fast a jet can fly?" One of the limiting factors in attaining a true measure of the speed isn't found in the car's engine. Instead, the subsystems around the engine begin to fail at such stressful speeds. At 258 miles per hour, the tires on the car will completely disintegrate in 15 minutes. The fuel tank will be emptied in 12 minutes. Bugatti wisely installed a governor on the engine limiting it to a maximum of 253 miles per hour in an attempt to prevent people from pushing the car to its limits. A software control designed to prevent performance issues by creating artificial limits. Sound familiar?

Storage has hit the wall when it comes to performance. PCIe flash storage devices are capable of hundreds of thousands of IOPS. A single PCIe card can have the throughput of an entire data center. Hungry applications can quickly saturate the weak link in a system. In days gone by, that was the IOPS capacity of a storage device. Today, it's the link that connects the flash device to the rest of the system. Not until the advent of PCIe was the connection fast enough for flash storage to keep pace with workloads starved for performance.
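
Some rough back-of-the-envelope numbers show why the interface became the weak link. The 200,000 IOPS and 4 KB operation size below are assumptions for illustration, but they're in the neighborhood the paragraph above describes.

```python
# Back-of-the-envelope math on why the drive interface became the bottleneck.
# The IOPS figure and I/O size are illustrative assumptions.
IOPS = 200_000                          # "hundreds of thousands" of operations per second
IO_SIZE_BYTES = 4 * 1024                # assume 4 KB per operation
SATA3_BYTES_PER_SEC = 600 * 10**6       # ~600 MB/s usable on a SATA III link
PCIE2_X8_BYTES_PER_SEC = 4 * 10**9      # ~4 GB/s for a PCIe 2.0 x8 slot

demand = IOPS * IO_SIZE_BYTES           # ~819 MB/s of throughput demanded
print(demand / SATA3_BYTES_PER_SEC)     # ~1.37 -- a SATA link is already saturated
print(demand / PCIE2_X8_BYTES_PER_SEC)  # ~0.20 -- PCIe still has plenty of headroom
```

Same flash, same workload, completely different bottleneck depending on the link it sits behind.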

Storage isn’t the only technology with hungry workloads stressing weak connection points.  Networking is quickly reaching this point with the shift to cloud computing.  Now, instead of the east-west traffic between data center racks being a point of contention, the connection point between the user and the public cloud will now be stressed to the breaking point.  WAN connection speeds have the same issues that flash storage devices do with non-PCIe interfaces.  They are quickly saturated by the amount of outbound traffic generated by hungry applications.  In this case, those applications are located in some far away data center instead of being next door on a storage array.  The net result is the same – resource contention.


Tom’s Take

Performance is king.  The faster, stronger, better widgets win in the marketplace.  When architecting systems it is often much easier to specify a bigger, faster device to alleviate performance problems.  Sadly, that only works to a point.  Eventually, the performance problems will shift to components that can’t be upgraded.  IOPS give way to transfer speeds on SATA connectors.  Data center traffic will be slowed by WAN uplinks.  Even the CPU in a virtual server will eventually become an issue after throwing more RAM at a problem.  Rather than just throwing performance at problems until they disappear, we need to take a long hard look at how we build things and make good decisions up front to prevent problems before they happen.

Like the Veyron engineers, we have to make smart choices about limiting the performance of a piece of hardware to ensure that other bottlenecks do not happen.  It’s not fun to explain why a given workload doesn’t get priority treatment.  It’s not glamorous to tell someone their archival data doesn’t need to live in a flash storage tier.  But it’s the decision that must be made to ensure the system is operating at peak performance.

Replacing Nielsen With Big Data

I'm a fan of television. I like watching interesting programs. It's been a few years since one has caught my attention long enough to keep me watching for multiple seasons. Part of that is due to my fear that an awesome program is going to fail to reach a "target market" and will end up canceled just when it's getting good. It's happened to several programs that I liked in the past.

Sampling the Goods

Part of the issue with tracking the popularity of television programs comes from the archaic method in which programs are measured. Almost everyone has heard of the Nielsen ratings system. This sampling method was created in the 1930s as a way to measure radio advertising reach. In the 50s, it was adapted for use in television.

Nielsen selects target audiences that represent the greater whole. They ask users to keep written diaries of their television watching habits. They also have the ability to install a device called a set meter which allows viewers to punch in a code to identify themselves via age groups and lock in a view for a program. The set meter can tell the instant a channel is changed or the TV is powered off.

In theory, the sampling methodology is sound. In practice, it's a bit shaky. Viewing diaries are unreliable because people tend to overreport their viewing habits. If they feel guilty that they haven't been writing anything down, they tend to look up something that was on TV and write it down. Diaries also can't determine if a viewer watched the entire program or changed the channel in the middle. Set meters aren't much better. The reliance on PIN codes to identify users can lead to misreported results. Adults in a hurry will sometimes punch in an easier code assigned to their children, leading to skewed age results.

Both the diary and the set meter fail to take into account the shift in viewing habits in modern households. Most of the TV viewing in my house takes place through time-shifted DVR recordings. My kids tend to monopolize the TV during the day, but even they are moving to using services like Netflix to watch all episodes of their favorite cartoons in one sitting. Neither of these viewing habits is easily tracked by Nielsen.

How can we find a happy medium? Sample sizes have been reduced significantly due to cord-cutting households moving to Internet distribution models. People tend to exaggerate or manipulate self-reported viewing results. Even "modern" Nielsen technology can't keep up. What's the answer?

Big Data

I know what you’re saying: “We’ve already got that with Nielsen, right?” Not quite. TV viewing habits have shifted in the past few years. So has TV technology. Thanks to the shift from analog broadcast signals to digital and the explosion of set top boxes for cable decryption and movie service usage, we now have a huge portal into the living room of every TV watcher in the world.

Think about it for a moment. The idea of a sample size works provided it's a good representative sample. But tracking this data is problematic. If we have access to a way to crunch the actual data instead of extrapolating from incomplete sets, shouldn't we use that instead? I'd rather believe the real numbers instead of trying to guess from unreliable sources.

This also fixes the issue of time-shifted viewing. Those same set top boxes are often responsible for recording the programs. They could provide information such as the number of shows recorded versus viewed and whether or not viewers skip through commercials. For those that view on mobile devices, that data could be compiled as well through integration with the set top box. User logins are already required for mobile apps today. It's just a small step to integrating the whole package.

It would require a bit of technical upgrading on the client side. We would have to enable the set top boxes to report data back to a service. We could anonymize the data to a point to be sure that people aren’t being unnecessarily exposed. It will also have to be configured as an opt-out setting to ensure that the majority is represented. Opt-in won’t work because those checkboxes never get checked.
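
To show how little the box would actually have to do, here is a rough sketch of what a single report could look like. The endpoint, field names, and hashing scheme are all hypothetical; the point is that the identifying information gets replaced with an anonymous token before anything leaves the box, and nothing is sent at all if the household has opted out.

```python
# Hypothetical sketch of a set top box reporting an anonymized viewing event.
# The URL, fields, and salt are invented for illustration; no real service is implied.
import hashlib
import json
import time
from urllib import request

COLLECTOR_URL = "https://ratings.example.com/v1/views"  # hypothetical collector
OPT_OUT = False                                         # reporting on by default, viewer can disable

def household_token(account_id):
    """One-way hash of the account ID so the analysis company never sees who you are."""
    return hashlib.sha256(("per-deployment-salt" + account_id).encode()).hexdigest()

def report_view(account_id, program_id, watched_seconds, recorded, skipped_ads):
    if OPT_OUT:
        return  # nothing ever leaves the box
    event = {
        "household": household_token(account_id),
        "program": program_id,
        "watched_seconds": watched_seconds,
        "time_shifted": recorded,
        "commercials_skipped": skipped_ads,
        "reported_at": int(time.time()),
    }
    req = request.Request(COLLECTOR_URL,
                          data=json.dumps(event).encode(),
                          headers={"Content-Type": "application/json"})
    request.urlopen(req)

# Example: a DVR recording watched later with the commercials skipped.
# report_view("acct-1234", "jericho-s01e01", 2700, recorded=True, skipped_ads=True)
```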

Advertisers are going to demand the most specific information about people that they can. The ratings service exists to work for the advertisers. If this plan is going to work, a new company will have to be created to collect and analyze this data. This way the analysis company can ensure that the data is specific enough to be of use to the advertisers while at the same time ensuring the protection of the viewers.


Tom’s Take

Every year, promising new TV shows are yanked off the airwaves because advertisers don't see any revenue. Shows that have a great premise can't get up to steam because of ratings. We need to fix this system. In the old days, the deluge of data would have drowned Nielsen. Today, we have the technology to collect, analyze, and store that data for eternity. We can finally get the real statistics on how many people watched Jericho or After MASH. Armed with real numbers, we can make intelligent decisions about what to keep on TV and what to jettison. And that's a big data project I'd be willing to watch.

The Value of the Internet of Things

The recent sale of IBM's x86 server business to Lenovo has people in the industry talking. Some of the conversation has centered around the selling price. Lenovo picked up IBM's servers for $2.3 billion, which is more than 60% less than the initial asking price of $6 billion two years ago. That price drew immediate comparisons to the Google acquisition of Nest, which was $3.2 billion. Many people asked how a gadget maker with only two shipping products could be worth more than the entirety of IBM's server business.

Are You Being Served?

It says a lot for the decline of hardware manufacturing, especially at the low end. IT departments have been moving away from smaller, task-focused servers for many years now. Instead of buying a new 1U, dual-socket machine to host an application, developers have used server virtualization as a way to spin up new services quickly with very little additional cost. That means that older low-end servers aren't being replaced when they reach the end of their life. Those workloads are being virtualized and moved away while the equipment is permanently retired.

It also means that the target for server manufacturers is no longer the low end. IT departments that have seen the benefits of virtualization now want larger servers with more memory and CPU power to insert into virtual clusters. Why license several small servers when I can save money by buying a really big server? With advances in SAN technology and parts that can be replaced without powering down the system, the need to have multiple systems for failover is practically negated.

And those virtual workloads are easily migrated away from onsite hardware as well. The shift to cloud computing is the coup de grâce for the low-end server market. It is just as easy to spin up an Amazon Web Services (AWS) instance to test software as it is to provision new hardware or a virtual cluster. Companies looking to do hybrid cloud testing or public cloud deployments don't want to spend money on hardware for the data center. They would rather pour that money into AWS instances.

Those Internet Things

I think the disparity in the purchase price also speaks volumes for the value yet to be recognized in the Internet of Things (IoT). Nest was worth so much to Google because it gave them an avenue not previously available. Google wants to have as many devices in your home as it can afford to acquire. Each of those devices can provide data to tune Google's algorithms and provide quality data to advertisers that pay Google handsomely for those analytics.

IoT devices don't need home servers. They don't ask for DNS entries. They don't have web interfaces. The only setup needed out of the box is a connection to the wireless network in your home. Once that happens, IoT devices usually connect back to a server in the cloud. The customer accesses the device via an application downloaded from an app store. No need for any additional hardware in the customer's home.

IoT devices need infrastructure to work effectively. However, they don't need that infrastructure to exist on premises. The shift to cloud computing means that these devices are happy to exist anywhere without dependence on hardware. Users are more than willing to download apps to control them instead of debating how to configure the web UI. Without the need for low end hardware to run these devices, the market for that hardware is effectively dead.
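
As a sketch of just how thin that footprint is, the entire infrastructure an IoT device asks of the home can be boiled down to a periodic check-in like the one below. The endpoint and payload are hypothetical and not any vendor's actual API; the mobile app reads and writes the same cloud-side state rather than ever talking to the device directly.

```python
# Hypothetical sketch of an IoT device's cloud check-in. The URL and fields are
# invented for illustration; real devices use their own (often proprietary) APIs.
import json
import time
from urllib import request

CLOUD_URL = "https://iot.example.com/v1/devices/thermostat-42/state"  # hypothetical

def push_state(current_temp, target_temp):
    """Report state to the cloud; the phone app talks to the cloud, not the device."""
    payload = json.dumps({
        "current_temp": current_temp,
        "target_temp": target_temp,
        "reported_at": int(time.time()),
    }).encode()
    req = request.Request(CLOUD_URL, data=payload, method="PUT",
                          headers={"Content-Type": "application/json"})
    request.urlopen(req)

# No local server, no DNS entry, no web UI -- just the home wireless network.
# read_sensor() and read_target() stand in for hardware-specific calls.
# while True:
#     push_state(read_sensor(), read_target())
#     time.sleep(60)
```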

Tom's Take

I think IBM got exactly what they wanted when they offloaded their server business. They can now concentrate on services and software. The kinds of things that are going to be important in the Internet of Things. Rather than lamenting the fire sale price of a dying product line, we should instead be looking at the value still locked inside IoT devices and how much higher it can go.

Let’s Hear It For Uptime

I recently had to have a technician come troubleshoot a phone issue at my home.  I still have a landline with my cable provider.  Mostly because it would be too expensive to change to a package without a phone.  The landline does come in handy on occasion, so I needed to have it fixed.  When I was speaking with the technician that came to fix things, I inquired about something the customer service people on the phone had said about upgrading my equipment.  The field tech told me, “You don’t want that.  Your old system is much better.”  When he explained how the low voltage system would be replaced by a full voice over IP (VoIP) router, I agreed with him.  My thoughts were mostly around the uptime of my phone in the event of a power outage.

Uptime is something that we have grown accustomed to in today’s world.  If you don’t believe me, go unplug your wireless router for the next five minutes.  If your family isn’t ready to burn you at the stake then you are luckier than most.  For the rest of us we measure our happiness in the availability of services.  Cloud email, streaming video, and Internet access all have to be available at the touch of a button.  Whether it be for work or for personal use, uptime is very important in a connected world.

It still surprises me that people don’t focus on uptime as an important metric of their solutions.  Selling redundant equipment or ensuring redundant paths should be one of the first considerations you have when planning a system.  As Greg Ferro once told me, “When I tell you to buy one switch, I always mean two.” Backup equipment is as important as anything you can install.

You have to test your uptime as well.  You don’t have to go to all the trouble of building your own chaos monkey, but you need to pull the plug on the primary every so often to be sure everything works.  You also need to make sure that your backup systems are covered all the way down.  Switches may function just fine with two control engines, but everything stops without power.  Generators and battery backups are important.  In the above case, I would need to put my entire network on a battery backup system in order to ensure I have the same phone uptime that I enjoy now with a relatively low-tech system.
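
Even a check as simple as the one below, run on a schedule, goes a long way. The hostnames and ports are placeholders; the idea is simply to prove that the backup path answers before the day you actually need it.

```python
# Minimal "is my redundancy real?" check with placeholder hostnames.
import socket

PAIRS = [
    # (primary, backup) endpoints -- hypothetical names and ports
    (("core-sw-1.example.net", 22), ("core-sw-2.example.net", 22)),
    (("voip-gw-a.example.net", 5060), ("voip-gw-b.example.net", 5060)),
]

def reachable(host, port, timeout=3.0):
    """A TCP connect is a crude but honest liveness test."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for primary, backup in PAIRS:
    if not reachable(*backup):
        print(f"WARNING: backup {backup[0]} is down -- the redundancy exists on paper only")
    if not reachable(*primary):
        print(f"{primary[0]} is down -- traffic should be riding {backup[0]} right now")
```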

You also have to account for other situations as well.  Several gaming sites were taken offline recently due to the efforts of a group launching distributed denial of service (DDoS) attacks against soft targets like login servers.  You have to make sure that the important aspects of your infrastructure are protected against external issues like this.  Customers don’t know the difference between a security related attack and an outage.  They all look the same in the eyes of a person paying for your service.

We should all strive to provide the most uptime possible for everything that we do.  Potential customers may scoff at the idea of paying for extra parts they don’t currently use.  That usually falls away once you explain what happens in the event of an outage.  We should also strive to point out issues with contingency plans when we see them.  Redundant circuits from a provider aren’t really redundant if they share the same last mile. You’ll never know how this affects you until you test your settings.  When it comes to uptime, take nothing for granted.  Test everything until you know that it won’t quit when failure happens.


Don't just plan for downtime. Forget how many nines you support. I was once told that a software vendor had "seven nines" of uptime. I responded by telling them, "That's three seconds of downtime allowed per year. Wouldn't it just be easier to say you never go down?" Rather than having the mindset that something will eventually fail, you should instead have the idea that everything will stay up and running. It's a subtle shift in thinking, but changing your perception does wonders for designing solutions that are always available.
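
For what it's worth, the arithmetic behind that exchange is easy to check. "Seven nines" really does work out to about three seconds a year.

```python
# Allowed downtime per year for N nines of availability.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def allowed_downtime_seconds(nines):
    return SECONDS_PER_YEAR * 10 ** (-nines)

for n in (3, 5, 7):
    print(f"{n} nines: {allowed_downtime_seconds(n):.1f} seconds of downtime per year")
# 3 nines: 31557.6 seconds (~8.8 hours)
# 5 nines: 315.6 seconds (~5.3 minutes)
# 7 nines: 3.2 seconds
```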

2014 – Introductions Are In Order

It's January 1 again. Time to look back at what I said I was going to do for 2013. Remember how there was going to be lots of IPv6 in the coming year? Three whole posts. Not exactly ushering in the future, is it? What did I work on instead?

It's been a bit of a change for me. I've gone from bits and bytes to spreadsheets and event planning. It's a good thing. I'm more in touch with people now than I ever was behind a console screen. I can see the up-and-comers in the industry. I help bring attention to people that deserve it. People like Brent Salisbury (@NetworkStatic), Jason Edelman (@JEdelman8), and Jake Snyder (@JSnyder81).

I still get involved with technology. It's just at a higher architectural level now. That means I can stay grounded while at the same time interacting with the people that really know what's going on. In many ways, it's the cross-discipline aspect that I've been preaching to my old coworkers for years taken to a different extreme.

That means 2014 is going to look much different than I thought it would a year ago.  Almost like I need to introduce myself to the new year all over again.

I really want to spend the next year concentrating on the people.  I want to help bring bloggers and influencers along and give them a way to express themselves.  Perhaps that means social media.  Or a new blog.  Or maybe getting them on board with programs like the Solarwinds Ambassadors.  I want the smart people out there to show the world how smart they are.  I don’t want anyone to go unheard for lack of a platform.

I also really liked this article from John Mark Troyer about creating the new year you want to see. John has some great points here. I've always tried to stay away from making bold predictions for the coming year because they never pan out. If you want to be right, you either couch the prediction with a healthy amount of uncertainty or you guess something that's almost guaranteed to happen. I much prefer writing about what I need to accomplish or what I think needs to happen. You really are more likely to get something accomplished if you have a concrete goal of self-advancement.

Every new year starts out with limitless potential.  Every one of us has the ability and the desire to do something amazing.  I’ve never been one for making resolutions, as that seems to be setting yourself up for failure in many cases.  Instead, I try to do what I can every day to be awesome.  You should too.  Make 2014 an even better year than the last ten or twenty.  Learn how SDN works.  Learn a programming language.  Write a book or a blog or a funny tweet. Express yourself so that everyone knows who you are.  Make 2014 the year you introduce yourself to the world.  If you’ve already done that, make sure the world won’t forget you any time soon.

Is The Blog Dead?

I couldn't help but notice an article that kept getting tweeted about and linked all over the place last week. It was a piece by Jason Kottke titled "R.I.P. The Blog, 1997-2013". It's actually a bit of commentary on a longer piece he wrote for the Nieman Journalism Lab called "The Blog Is Dead, Long Live The Blog". Kottke talks about how people today are more likely to turn to the various social media channels to spread their message rather than the tried-and-true arena of the blog.

Kottke admits in both pieces that blogging isn’t going away.  He even admits that blogging is going to be his go-to written form for a long time to come.  But the fact that the article spread around like wildfire got me to thinking about why blogging is so important to me.  I didn’t start out as a blogger.  My foray into the greater online world first came through Facebook.  Later, as I decided to make it more professional I turned to Twitter to interact with people.  Blogging wasn’t even the first thing on my mind.  As I started writing though, I realized how important it is to the greater community.  The reason?  Blogging is thought without restriction.

Automatic Filtration

Social media is wonderful for interaction.  It allows you to talk to friends and followers around the world.  I’m still amazed when I have conversations in real time with Aussies and Belgians.  However, social media facilitates these conversations through an immense filtering system.  Sometimes, we aren’t aware of the filters and restrictions placed on our communications.

Twitter forces users to think in 140-ish characters. Ideas must be small enough to digest and easily recirculate. I've even caught myself cutting down on thoughts in order to hit the smaller target of being able to put "RT: @networkingnerd" at the beginning for tweet attribution. Part of the reason I started a blog was because I had thoughts that were more than 140 characters long. The words just flow for some ideas. There's no way I could really express myself if I had to make ten or more tweets to express what I was thinking on a subject. Not to mention that most people on Twitter are conditioned to unfollow prolific tweeters when they start firing off tweet after tweet in rapid succession.

Facebook is better for longer discussion, but they are worse in the filtering department. The changes to their news feed algorithm this year weren't the first time that Facebook has tweaked the way that users view their firehose of updates. They believe in curating a given user's feed to display what they think is relevant. At best this smacks of arrogance. Why does Facebook think they know what's more important to me than I do? Why must my Facebook app always default to Most Important rather than my preferred Most Recent? Facebook has been searching for a way to monetize their product since even before their rocky IPO. By offering advertisers a prime spot in a user's news feed, they can guarantee that the ad will be viewed thanks to the heavy-handed way that they curate the feed. As much reach as Facebook has, I can't trust them to put my posts and articles where they belong for people that want to read what I have to say.

Other social platforms suffer from artificial restriction. Pinterest is great for those that post pictures with captions or comments. It's not the best place for me to write long pieces, especially when they aren't about a craft or a wish list for gifts. Tumblr is better suited for blogging, but the comment system is geared toward sharing and not constructive discussion. Add in the fact that Tumblr is blocked in many enterprise networks due to questionable content and you can see how much corporate policy can limit the reach of a single person. I had to fight this battle in my old job more than once in order to read some very smart people that blogged on Tumblr.

Blogging for me is about unrestricted freedom to pour out my thoughts.  I don’t want to worry about who will see it or how it will be read.  I want people to digest my thoughts and words and have a reaction.  Whether they choose to share it via numerous social media channels or leave a comment makes no difference to me.  I like seeing people share what I’ve committed to virtual paper.  A blog gives me an avenue to write and write without worry.  Sometimes that means it’s just a few paragraphs about something humorous.  Other times it’s an activist rant about something I find abhorrent.  The key is that those thoughts can co-exist without fear of being pigeonholed or categorized by an algorithm or other artificial filter.


Tom’s Take

Sometimes, people make sensationalist posts to call attention to things.  I’ve done it before and will likely do it again in the future.  The key is to read what’s offered and make your own conclusion.  For some, that will be via retweeting or liking.  For others, it will be adding a +1 or a heart.  For me, it’s about collecting my thoughts and pouring them out via a well-worn keyboard on WordPress.  It’s about sharing everything rattling around in my head and offering up analysis and opinion for all to see.  That part isn’t going away any time soon, despite what others might say about blogging in general.  So long as we continue to express ourselves without restriction, the blog will never really die no matter how we choose to share it.

Brave New (Dell) World


Companies that don't reinvent themselves from time to time find themselves consigned to the scrap heap of forgotten technology. Ask anyone that worked at Wang. Or Packard Bell. Or Gateway. But not everyone can be like IBM. It takes time and careful planning to pull off a radical change. And last but not least, it takes a lot of money and people willing to ride out the storm. That's why Dell has garnered so much attention as of late with their move to go private.

I was invited to attend Dell World 2013 in Austin, TX by the good folks at Dell.  Not only did I get a chance to see the big keynote address and walk around their solutions area, but I participated in a Think Tank roundtable discussion with some of the best and brightest in the industry and got to take a tour of some of the Dell facilities just up the road in Round Rock, TX.  It was rather interesting to see some of the changes and realignments since Michael Dell took his company private with the help of Silver Lake Capital.

ESG Influencer Day

The day before Dell World officially kicked off was a day devoted to the influencers.  Sarah Vela (@SarahVatDell) and Michelle Richard (@Meesh_Says) hosted us as we toured Dell’s Executive Briefing Center.  We got to discuss some of Dell’s innovations, like the containerized data center concept.

[Image: Dell's containerized data center]

Dell can drop a small data center on your property with just a couple of months of notice.  Most of that is prepping the servers in the container.  There’s a high-speed video of the assembly of this particular unit that runs in the EBC.  It’s interesting to think that a vendor can provide a significant amount of processing power in a compact package that can be delivered almost anywhere on the planet with little notice.  This is probably as close as you’re going to get to the elasticity of Amazon in an on-premise package.  Not bad.

The Think Tank was another interesting outing. After a couple of months of being a silent part of Tech Field Day, I finally had an opportunity to express some opinions about innovation. I've written about it before, and also recently. The most recent post was inspired in large part by things that were discussed in the Think Tank. I believe that IT is capable of a staggering amount of innovation if they could just be given the chance to think about it. That's why DevOps and software defined methodologies have such great promise. If I can use automation to take over a large part of my day-to-day work, I can use that extra time to create improvement. Unloading the drudgery from the workday can create a lot of innovation. Just look at Google's 20 Percent Time idea. Now what if that was 25%? Or 50%?

Dell does a great job with their influencer engagements.  This was my second involvement with them and it’s been very good.  I felt like a valued part of the conversation and got to take a sneak peek at some of the major announcements the day before they came out.  I think Dell is going to have a much easier road in front of it by continuing to involve the community in events such as this.

What’s The Big Dell?

Okay, so you all know I'm not a huge fan of keynotes. Usually, that means that I'm tweeting away in Full Snark Mode. And that's if I'm not opposed to things being said on stage. In the case of Dell World, Michael Dell confirmed several ideas I had about the privatization of his company. I've always held the idea that Michael Dell was upset that the shareholders were trying to tell him how to run his company. He has a vision for what he wants to do, and if you agree with that then you are welcome to come along for the ride.

The problem with going public is much the same as borrowing $20 from your friend. It's all well and good at first. After a while, your buddy may be making comments about your spending habits as a way to encourage you to pay him back. The longer that relationship goes, the more pointed the comments. Now, imagine if that buddy was also your boss and had a direct impact on the number of hours you worked or the percentage of the commission you earned. What if comments from him had a direct impact on the amount of money you earned? That is the shareholder problem in a nutshell. It's nice to be flush with cash from an IPO. It's something else entirely when those same shareholders start making demands of you or start impacting your value because they disagree with your management style. Ask Michael Dell how he feels about Carl Icahn. I'm sure that one shareholder could provide a mountain of material. And he wasn't the only one that threatened to derail the buyout. He was just the most vocal.

With the shareholders out of the way, Dell can proceed according to the visions of their CEO. The only master he has to answer to now is Silver Lake Capital. So long as Dell can provide a good return on investment to them, I don't see any opposition to his ideas. Another curious announcement was the Dell Strategic Innovation Venture Fund. Dell has started a $300 million fund to explore new technologies and fund companies doing that work. A more cynical person might think that Michael Dell is using his new-found freedom to offer an incentive to other startups to avoid the same kinds of issues he had – answering to single-minded masters only focused on dividends and stock price. By offering to invest in a hot new startup, Michael Dell will hopefully spur innovation in areas like storage. Just remember that venture capital funds need returns on their investments as well, so all that money will come with some strings attached. I'm sure that Silver Lake has more to do with this than they're letting on. Time will tell if Dell's new venture fund will pay off as handsomely as they hope.


Tom’s Take

Dell World was great. It was smaller than VMWorld or Cisco Live. But it fit the culture of the company putting on the show. There weren't any earth-shattering announcements to come out of the event, but that fits the profile of a company finding its way in the world for the second time. Dell is going to need to consolidate and coordinate business units to maximize effort and output. That's not a surprise. The exuberance that Michael Dell showed on stage during the event is starting to flow down into the rest of Dell as well. Unlike a regular startup in a loft office in San Francisco, Dell has a track record and enough stability to stick around for a while. I just hope that they don't lose their identity in this brave new world. Dell has always been an extension of Michael Dell. Now it's time to see how far that can go.

Disclaimer

I was an invited guest of Dell at Dell World 2013.  They paid for my travel and lodging at the event. I also received a fleece pullover, water bottle, travel coffee mug, and the best Smores I’ve ever had (really).  At no time did they ask for any consideration in the writing of this review, nor were they promised any.  The opinions and analysis presented herein reflect my own thoughts.  Any errors or omissions are not intentional.

Are Exit Strategies Hurting Innovation?


During the Think Tank that I participated in at Dell World, the topic of conversation turned to startups.  Specifically, how do startups drive innovation?  As I listened to the folks around the table like Justin Warren (@JPWarren) and Bob Plankers (@Plankers) talk about the advantages that startups enjoy when it comes to agility, I started to wonder if some startups today are hurting the process more than they are helping it.

Exit Strategy

The entire point of creating a business is to make money.  You do that by creating a product that you can sell to someone.  It doesn’t have to be a runaway success.  So long as you structure the business correctly you can make money for a good long while.  The key is that you must structure the business to pay off in the long run.

Startups seem to have this idea that the most important part of the equation is to build something quickly and get it onto the market.  The business comes second.  That only works if you are playing a very short game.  The bad decisions you make in the foundation of your business will come back to bite you down the road.

Startups that don’t have a business plan only have one other option – an exit strategy.  In far too many cases, the business plan for a startup is to build something so awesome that another larger company is going to want to buy them.  As I’ve said before talking about innovation, buying your way into a new product line does work to a point.  For the large vendor, it is dependent on the available cash on hand.  For the startup, the idea is that you need to have enough capital on hand to survive long enough to be bought.

Looking For A Buyer For The End Of The World

There's nothing more awkward than a company that's obviously seeking a buyout from a large vendor but hasn't received it yet. I look at the example of Violin Memory. Violin makes flash storage cards for servers to accelerate caching for workloads. They were a strategic partner of HP for a long time. Eventually, HP decided to build those cards themselves rather than rely on Violin as a supplier. This put Violin in a very difficult position. In hindsight, I'm sure that Violin wanted to be bought by HP and become a division inside the server organization. Instead, they were forced to look elsewhere for funds. They chose to file an initial public offering (IPO) that hit its initial target. After that, the parts of the business that weren't so great started dragging the stock price down, angering the investors to the point where executives are starting to leave and lawsuits look likely.

Where did Violin go wrong? If they had built a solid business in the first place, they might have been able to keep right on selling even though HP had decided not to buy them. They might have been able to stay afloat long enough to find another buyer, or to file an IPO when they had a more stable situation with earnings and expenses. They might have been able to make money instead of losing it hand over fist.

The Standoff

The idea that startups are just looking for a payday from a larger company is hurting innovation.  Startups just want to get the idea formed enough to get some interest from buyers.  Who cares if we can make it work in reality?  We just have to get someone to bite on the prototype.  That means that development is key.  Things like payroll and operating expenses come second.

In return, companies are becoming skittish about buying a startup. Why take a chance on a company that may not be around next week? I'd rather spend my time on internal innovation. Or, better yet, buy that failed startup for pennies on the dollar when they liquidate due to an inability to manage a business. Larger companies are going to shy away from startups that want millions (or billions) for ideas. That lengthens the time that it takes to innovate, either because companies must invest time internally or spend countless hours negotiating purchases of intellectual property.


Tom’s Take

Obviously, not all startups have these issues.  Look at companies like Nimble Storage.  They are a business first.  They make money.  They don’t have out-of-control expenses.  They filed for an IPO because it was the right time, not because they needed more money to keep the lights on.  That’s the key to proper innovation.  Build a company that just happens to make something.  Don’t build a product that just happens to have a business around it.  That means you can continue to improve and innovate on your product as time goes on.  It means you don’t have to look for an exit strategy as soon as the funding starts running dry.  Then your strategy looks more like a plan.