The Pain of Licensing

Frequent readers of my blog and Twitter stream may have noticed that I have a special loathing in my heart for licensing.  I’ve been subjected to some of the craziest runarounds because of licensing departments.  I’ve had to yell over the phone to get something taken care of.  I’ve had to produce paperwork so old it was yellowed at the edges.  Why does this have to be so hard?

Licensing is a feature tracking mechanism.  Manufacturers want to know what features you are using.  It comes back to tracking research and development.  A lot of time and effort goes into making the parts and pieces of a product.  Many different departments put work into something before it goes out the door.  Vendors need a way to track how popular a given feature might be to customers.  This allows them to know where to allocate budgets for the development of said features.

Some things are considered essential.  These core pieces are usually allocated to a team that gets the right funding no matter what.  Or the features are so mature that there really isn’t much that can be done to drive additional revenue from them.  When’s the last time someone made a more streamlined version of OSPF?  But there are pieces that can be attached to OSPF that carry more weight.

Rights and Privileges

Here’s an example from Cisco.  In IOS 15, OSPF is considered a part of the core IOS functionality.  You get it no matter what on a router.  You have to pay an extra license on a switch, but that’s not part of this argument.  OSPF is a mature protocol, even in version 3, which enables IPv6 support.  If you have OSPF for IPv4, you have it for IPv6 as well.  One of the best practices for securing OSPF against intrusion is to authenticate your area 0 links.  This is something that should be considered core functionality.  And with IPv4, it is.  The MD5 authentication mechanism is built into the core OS.  But with IPv6, the IPSec functionality needed to authenticate the links has to be purchased as a separate license upgrade.  That’s because IPSec is part of the security license bundle.

Why the runaround for what is considered a best practice, core function?  It’s because IPv6 uses a different mechanism.  One that has more reach than simple MD5 authentication.  In order to recapture the investment that the IPSec security team is putting in, Cisco won’t just give away that functionality.  Instead, it needs to be tracked by a license.  The R&D work from that team needs to be recovered somehow.  And so you pay extra for something Cisco says you should be doing anyway.  That’s the licensing that upsets me so.

License Unit Report

How do we fix it?  The money problem is always going to be there.  Vendors have to find a way to recapture revenue for R&D while at the same time not making customers pay for things they don’t need, like advanced security or application licenses.  That’s the necessary evil of having affordable software.  But there is a fix for the feature tracking part.

We have the analytics capability with modern software to send anonymized usage statistics to manufacturers and vendors about which feature sets are being used.  Companies can track how popular IPSec is versus MD5, or make other such feature comparisons.  The software doesn’t have to say who you are, just what you are using.  That would allow budgets to be allocated exactly where they should be, instead of guessing based on who bought the whole advanced communications license just for Quality of Service (QoS) reporting.
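
As a thought experiment, here’s a minimal sketch of what that kind of anonymized reporting could look like.  The collector URL, feature names, and report format are all hypothetical; the point is that the payload carries feature counters and a random install identifier, never anything that says who the customer is.

```python
# Hypothetical sketch of anonymized feature-usage reporting.
# The collector URL, feature names, and report format are invented for illustration.
import json
import urllib.request
import uuid

# A random identifier generated at install time. It lets the vendor de-duplicate
# reports from the same box, but it says nothing about who the customer is.
INSTALL_ID = str(uuid.uuid4())

def build_report(feature_counters):
    """Bundle feature usage counters with no customer-identifying data."""
    return {
        "install_id": INSTALL_ID,
        "features": feature_counters,  # e.g. {"ospf_md5_auth": 12, "ospfv3_ipsec_auth": 3}
    }

def send_report(report, url="https://telemetry.example.com/usage"):
    """POST the anonymized report to the (hypothetical) vendor collector."""
    data = json.dumps(report).encode("utf-8")
    req = urllib.request.Request(url, data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    usage = {"ospf_md5_auth": 12, "ospfv3_ipsec_auth": 3, "qos_policies": 7}
    print(send_report(build_report(usage)))
```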


Tom’s Take

Licensing is like NAT.  It’s a necessary evil of the world we live in.  People won’t pay for functionality they don’t use.  At the same time, they won’t use functions they have to pay extra for if they think it should have been included.  It’s a circular problem that has no clear answer.  And that’s the necessary evil of it all.

But just because it’s necessary doesn’t mean we can’t make it less evil.  We can split the reporting pieces out thanks to modern technology.  We can make sure the costs to develop these features get driven down in the future because there are accurate statistics about usage.  Every little bit helps make licensing less of a hassle than it currently is.  It may not go away totally, but it can be marginalized to the point where it isn’t painful.

Twitter Tips For Finding Followers


I have lots of followers on Twitter.  I also follow a fair number of people as well.  But the ratio of followers to followed isn’t 1:1.  I know there are a lot of great people out there and I try to keep up with as many of them as I can without being overwhelmed.  It’s a very delicate balance.

There are a few things I do when I get a new follower to decide if I want to follow them back.  I also do the same thing for new accounts that I find.  It’s my way of evaluating how they will fit into my feed.  Here are the three criteria I use to judge adding people to my feed.

Be Interesting

This one seems like a no-brainer, right?  Have interesting content that people want to read and interact with.  But there’s one specific piece here that I want to call attention to.  I love reading people with original thoughts.  Clever tweets, interesting observations, and pertinent discussion are all very important.  But one thing that I usually shy away from is the account that is more retweets than actual content.

I don’t mind retweets.  I do it a lot, both in quote form and in the “new” format of pasting the original tweet into my timeline.  But I use the retweet sparingly.  I do it to call attention to original thought.  Or to give credit where it’s due.  But I’ve been followed by accounts that are 75% (or more) retweets from vendors and other thought leaders.  If the majority of your content comes from retweeting others, I’m more likely to follow the people you’re retweeting and not you.  Make sure that the voice on your Twitter account is your own.

Be On Topic

My Twitter account is about computer networking.  I delve into other technologies, like wireless and storage now and then.  I also make silly observations about trending events.  But I’m on topic most of the time.  That’s the debt that I owe to the people that have chosen to follow me for my content.  I don’t pollute my timeline with unnecessary conversation.

When I evaluate followers, I look at their content.  Are they talking about SANs? Or are they talking about sports?  Is their timeline a great discussion about SDN? Or check-ins on Foursquare at the local coffee shop?  I like it when people are consistent.  And it doesn’t have to be about technology.  I follow meteorologists, musicians, and actors.  Because they are consistent about what they discuss.  If your timeline is polluted with junk and all over the place, it makes it difficult to follow.

Note that I do talk about things other than tech.  I just choose to segregate that talk to other platforms.  So if you’re really interested in my take on college football, follow me on Facebook.

Be Interactive

There are lots of people talking on Twitter.  There are conversations going on every second that are of interest to lots of people.  No one has time to listen to all of them.  You have to find a reason to be involved.  That’s where the interactivity aspect comes into play.

My fifth tweet was interacting with someone (Ethan Banks to be precise):

If you don’t talk to other people and just blindly tweet into the void, you may very well add to the overall body of knowledge while missing the point at the same time.  It’s called “social” media.  That means talking to other people.  I’m more likely to follow an account that talks to me regularly.  One that tells me when I’m wrong or points me at a good article.  People feel more comfortable with people they’ve interacted with before.

Don’t be shy.  Mention someone.  Start a conversation.  I’ll bet you’ll pick up a new follower in no time.


Tom’s Take

These are my guidelines.  They aren’t hard-and-fast rules.  I don’t apply them to everyone. But it does help me figure out if deeper analysis is needed before following someone.  It’s important to make sure that the people you follow help you in some way.  They should inform you.  They should challenge you.  They should make you a better person.  That’s what social media really means to me.

Take a look at your followers and find a few to follow today.  Find that person that stays on topic and has great comments.  Give them a chance.  You might find a new friend.

Overlay Transport and Delivery


The difference between overlay networks and underlay networks still causes issues with engineers everywhere.  I keep trying to find a visualization that boils everything down to the basics that everyone can understand.  Thanks to the magic of online ordering, I think I’ve finally found it.

Candygram for Mongo

Everyone on the planet has ordered something from Amazon (I hope).  It’s a very easy experience.  You click a few buttons, type in a credit card number, and a few days later a box of awesome shows up on your doorstep.  No fuss, no muss.  Things you want show up with no effort on your part.

Amazon is the world’s most well-known overlay network.  When you place an order, a point-to-point connection is created between you and Amazon.  Your item is tagged for delivery to your location.  It’s addressed properly and finds its way to you almost by magic.  You don’t have to worry about your location.  You can have things shipped to a home, a business, or a hotel lobby halfway across the country.  The magic of an overlay is that the packets are going to get delivered to the right spot no matter what.  You don’t need to worry about the addressing.
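
To see why you don’t have to worry about the addressing, it helps to picture how an overlay wraps traffic.  The sketch below is a toy model rather than any real encapsulation such as VXLAN: the inner addresses are whatever the endpoints use, and the overlay simply slaps an outer “shipping label” on the packet that the underlay knows how to deliver.

```python
# Toy model of overlay encapsulation -- illustrative only, not any real protocol.
from dataclasses import dataclass

@dataclass
class InnerPacket:
    src: str        # addresses meaningful inside the overlay (the "order")
    dst: str
    payload: str

@dataclass
class OuterPacket:
    underlay_src: str   # addresses the underlay actually routes on (the "shipping label")
    underlay_dst: str
    inner: InnerPacket

def encapsulate(inner: InnerPacket, tunnel_src: str, tunnel_dst: str) -> OuterPacket:
    """Wrap the inner packet in an outer header; the inner addresses are untouched."""
    return OuterPacket(underlay_src=tunnel_src, underlay_dst=tunnel_dst, inner=inner)

def decapsulate(outer: OuterPacket) -> InnerPacket:
    """At the far tunnel endpoint, strip the outer header and hand back the original."""
    return outer.inner

# The endpoints only care about the inner addressing; the underlay only sees the outer.
pkt = InnerPacket(src="10.1.1.10", dst="10.2.2.20", payload="hello")
shipped = encapsulate(pkt, tunnel_src="192.0.2.1", tunnel_dst="198.51.100.1")
assert decapsulate(shipped) == pkt
```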

That’s not to say there isn’t some issue with the delivery.  With Amazon, you can pay for expedited delivery.  Amazon Prime members can get two-day shipping for a flat fee.  In overlays, your packets can take random paths depending on how the point-to-point connection is built.  You can pay to have a direct path, provided the underlay cooperates with your wishes.  But unless a full mesh exists, your packet delivery is going to be at the mercy of whatever path the underlay considers optimal.

Mongo Only Pawn In Game Of Life

Amazon only works because of the network of transports that live below it.  When you place an order, your package could be delivered any number of ways.  UPS, FedEx, DHL, and even the US Postal Service can be the final carrier for your package.  It’s all a matter of who can get your package there the fastest and the cheapest.  In many ways, the transport network is the underlay of physical shipping.

Routes are optimized for best forwarding.  So are UPS trucks.  Network conditions matter a lot to both packets and packages.  FedEx trucks stuck in traffic jams at rush hour don’t do much good.  Packets that traverse slow data center interconnects during heavy traffic volumes risk slow packet delivery.  And if the road conditions or cables are substandard?  The whole thing can fall apart in an instant.

Underlays are the foundation that higher order services are built on.  Amazon doesn’t care about roads.  But if their shipping times get drastically increased due to deteriorating roadways, you can bet they’re going to get to the bottom of it.  Likewise, overlay networks don’t directly interact with the underlay, but if packet delivery is impacted, people are going to take a long hard look at what’s going on down below.

Tom’s Take

I love Amazon.  It beats shopping in big box stores and overpaying for things I use frequently.  But I realize that the infrastructure in place to support the behemoth that is Amazon is impressive.  Amazon only works because the transport system in place is optimized to the fullest.  UPS has a computer system that eliminates left turns from driver routes.  This saves fuel even if it means the routes are a bit longer.

Network overlays work the same way.  They have to rely on an optimized underlay or the whole system crashes in on itself.  Instead of worrying about the complexity of introducing an overlay on top of things, we need to work on optimizing the underlay to perform as quickly as possible.  When the underlay is optimized, the whole thing works better.

Who Wants A Free Puppy?

Years ago, my wife was out on a shopping trip. She called me excitedly to tell me about a blonde shih-tzu puppy she found and just had to have. As she talked, I thought about all the things that this puppy would need to thrive. Regular walks, food, and love are paramount on the list. I told her to use her best judgement rather than flat out saying “no”. Which is also how I came to be a dog owner. Today, I’ve learned there is a lot more to puppies (and dogs) than walks and feeding. There is puppy-proofing your house. And cleaning up after accidents. And teaching the kids that puppies should be treated gently.

An article from Martin Glassborow last week made me start thinking about our puppy again. Scott McNealy is famous for having told the community back in 2005 that “Open Source is free like a puppy.” While this was a dig at the community in regards to the investment that open source takes, I think Scott was right on the mark. I also think Martin’s recent article illustrates some of the issues that management and stakeholders don’t see with community projects.

Open software today takes care and feeding. Only instead of a single OS on a server in the back of the data center, it’s all about new networking paradigms (OpenFlow) or cloud platform plays (OpenStack). This means there are many more moving parts. Engineers and programmers get it. But go to the stakeholders and try to explain what that means. The decision makers love the price of open software. They are ambivalent about the benefits to the community. However, the cost of open projects is usually much higher than the price. People have to invest to see benefits.

TNSTAAFL

At the recent SolidFire Summit, two cloud providers were talking about their software. One was hooked into the OpenStack community. He talked about having an entire team dedicated to pulling nightly builds and validating them. They hacked their own improvements and pushed them back upstream for the good of the community. He seemed to love what he was talking about. The provider next to him was just a little bit larger. When asked what his platform was, he answered “CloudStack”. When I asked why, he didn’t hesitate. “They have support options. I can have them fix all my issues.”

Open projects appeal to the hobbyist in all of us. It’s exciting to build something from the ground up. It’s a labor of love in many cases. Labors of love don’t work well for some enterprises, though. And that’s the part that most decision makers need to know. Support for this awesome new thing may not always be immediate or complete. To bring this back to the puppy metaphor, you have to have patience as your puppy grows up and learns not to chew on slippers.

The reward for all this attention? A loving pet in the case of the puppy. In the case of open software, you have a workable framework all your own that is customized to your needs and very much a part of your DNA. Supported by your staff and hopefully loved as much as or more than any other solution. Just like dog owners who look forward to walking the dog or playing catch at the dog park, your IT organization should look forward to the new and exciting challenges that can be solved with the investment of time.


Tom’s Take

Nothing is free. You either pay for it with money or with time. Free puppies require the latter, just as free software projects do. If the stakeholders in the company look at it as an investment of time and energy then you have the right frame of mind from the outset. If everything isn’t clear up front, you will find yourself needing to defend all the time you’ve spent on your no-cost project. Hopefully your stakeholders are dog people so they understand that the payoff isn’t in the price, but the experience.

Opening the Future

I’ve been a huge proponent of open source and open development for years.  I may not be as vocal about it as some of my peers, but I’ve always had my eye on things coming out of the open community.  As networking and virtualization slowly open their processes to these open development styles, I can’t help but think about how important this will be for the future.

Eyes Looking Forward

If Heartbleed taught me anything in the past couple of weeks, it’s that the future of software has to be open.  Finding and fixing a vulnerability in a software program that doesn’t offer source access or have an open community built around it is a much harder proposition.  Look at how quickly the OpenSSL developers were able to patch their bug once it was found.  Now, compare that process to the recently announced zero-day bug in Internet Explorer.  While the OpenSSL issue was much bigger in terms of exposure, the IE bug is bigger in terms of user base.

Open development isn’t just about having multiple sets of eyes looking at code.  It’s about modularizing functionality to prevent issues from impacting multiple systems.  Look at what OpenStack is doing with their plugin system.  Networking is a different plugin from block storage.  Virtual machine management isn’t the same as object storage.  The plugin idea was created to allow very smart teams to work on these pieces independently of each other.  The side effect is that a bug in one of these plugins is automatically firewalled away from the rest of the system.
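
A toy example makes the isolation point clearer.  This isn’t OpenStack’s actual driver interface, just a minimal sketch of a plugin registry where each functional area loads its own plugin behind a common interface, so a crash in one driver doesn’t take down its neighbors.

```python
# Toy plugin registry -- not OpenStack's real driver API, just the general shape.
# Each functional area loads its own plugin behind a small common interface,
# so a defect in one plugin stays contained to that area.

class Plugin:
    name = "base"

    def do_work(self):
        raise NotImplementedError

class NetworkingPlugin(Plugin):
    name = "networking"

    def do_work(self):
        return "ports wired up"

class BlockStoragePlugin(Plugin):
    name = "block-storage"

    def do_work(self):
        raise RuntimeError("bug in the storage driver")  # simulated defect

REGISTRY = {cls.name: cls for cls in (NetworkingPlugin, BlockStoragePlugin)}

def run_all():
    results = {}
    for name, cls in REGISTRY.items():
        try:
            results[name] = cls().do_work()
        except Exception as exc:  # the failure is firewalled to this one plugin
            results[name] = f"FAILED: {exc}"
    return results

print(run_all())  # networking still works even though block storage blew up
```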

Open development means that the best eyes in the world are looking at what you are doing and making sure you’re doing it right.  They may not catch every bug right away but they are looking.  I would argue that even the most stringent code checking procedures at a closed development organization would still have the same error rate as an open project.  Of course, those same procedures and closed processes would mean we would never know if there was an issue until after it was fixed.

Code of the People

Open development doesn’t necessarily mean socialism, though.  Look at all the successful commercial projects that were built using OpenSSL.  They charged for the IP built on a project that provides secure communication.  There’s no reason other commercial companies can’t do the same.  Plenty of service providers are charging for services offered on top of OpenStack.  Even Windows uses BSD code in parts of its networking stack.

Open development doesn’t mean you can’t make money.  It just means you can’t hold your methods hostage.  If someone can find a better way to do something with your project, they will.  Developers worry that someone will “steal” their code and rewrite a clone of their project.  While that might be true of a mobile app or simple game, it’s far more likely that an open developer will contribute code back to your project rather than just copying it.  You do take risk by opening yourself up to the world, but the benefits of that risk far outweigh any issues you might run into by closing your code base.


Tom’s Take

It may seem odd for me to be talking about development models.  But as networking moves toward a future that requires more knowledge about programming, it will become increasingly important for a new generation of engineers to be comfortable with programming.  It’s too late for guys like me to jump on the coding bandwagon.  But at the same time, we need to ensure that the future generation doesn’t try to create new networking wonders only to lock the code away somewhere and never let it be improved.  There are enough apps in the app stores of the world that will never be updated past a certain revision because the developer ran out of time or patience with their coding.  Instead, why not teach developers that the code they write should allow for contribution and teamwork to continue?  An open future in networking means not repeating the mistakes of the past.  That alone will make the outcome wonderful.

The Sunset of Windows XP


The end is nigh for Microsoft’s most successful operating system of all time. Windows XP is finally set to reach the end of support next Tuesday. After twelve and a half years, and having its death sentence commuted at least twice, it’s time for the sunset of the “experienced” OS. Article after article has been posted as of late discussing the impact of the end of support for XP. The sky is falling for the faithful. But is it really?

Yes, as of April 8, 2014, Windows XP will no longer be supported from a patching perspective. You won’t be able to call in and get help on a cryptic error message. But will your computer spontaneously combust? Is your system going to refuse to boot entirely and force you at gunpoint to go out and buy a new Windows 8.1 system?

No. That’s silly. XP is going to continue to run just as it has for the last decade. It will be exactly as secure on April 9th as it was on April 7th, and it will still function. Rather than writing about how evil Microsoft is for abandoning an operating system after one of the longest support cycles in their history, let’s instead look at why XP is still so popular and how we can fix that.

XP is still a popular OS with manufacturing systems and things like automated teller machines (ATMs). That might be because of the ease with which XP could be installed onto commodity hardware. It could also be due to the difficulty in writing drivers for Linux for a large portion of XP’s life. For better or worse, IT professionals have inherited a huge amount of embedded systems running an OS that got the last major service pack almost six years ago.

For a moment, I’m going to take the ATMs out of the equation. I’ll come back to them in a minute. For the other embedded systems that don’t dole out cash, why is support so necessary? If it’s a manufacturing system that’s been running for the last three or four years what is another year of support from Microsoft going to get you? Odds are good that any support that manufacturing system needs is going to be entirely unrelated to the operating system. If we treat these systems just like an embedded OS that can’t be changed or modified, we find that we can still develop patches for the applications running on top of the OS. And since XP is one of the most well documented systems out there, finding folks to write those patches shouldn’t be difficult.

In fact, I’m surprised there hasn’t been more talk of third party vendors writing patches for XP. I saw more than a few start popping up once Windows 2000 approached the end of its life. It’s all a matter of money. Banks have already started negotiating with Microsoft to get an extension of support for their ATM networks. It’s funny how a few million dollars will do that. SMBs are likely to be left out in the cold for specialized systems due to the prohibitive cost of an extended maintenance contract, either from Microsoft or another third party. After all, the money to pay those developers needs to come from somewhere.


Tom’s Take

Microsoft is not the bad guy here. They supported XP as long as they could. Technology changes a lot in 12 years. The users aren’t to blame either. The fast upgrade cycle is a myth for most small businesses and enterprises. Every month that your PC can keep running the accounting software is another month of profits. So whose fault is the end of the world?

Instead of looking at it as the end, we need to start learning how to cope with unsupported software. Rather than tilting at the windmills in Redmond and begging for just another month or two of token support, we should be investigating ways to transition XP systems that can’t be upgraded within 6-8 months to an embedded systems plan. We’ve reached the point where we can’t count on anyone else to fix our XP problems but ourselves. Once we have the known, immutable fact of no more XP support, we can start planning for the inevitable – life after retirement.

The Alignment of Net Neutrality

Net neutrality has been getting a lot of press as of late, especially as AT&T and Netflix have been sparring back and forth in the press.  The FCC has already said they are going to take a look at net neutrality to make sure everyone is on a level playing field.  ISPs have already made their position clear.  Where is all of this posturing going to leave the users?

Chaotic Neutral

Broadband service usage has skyrocketed in the past few years.  Ideas that would never have been possible even 5 years ago are now commonplace.  Netflix and Hulu have made it possible to watch television without cable.  Internet voice over IP (VoIP) allows a house to have a phone without a phone line.  Amazon has replaced weekly trips to the local department store for all but the most crucial staple items.  All of this made possible by high speed network connectivity.

But broadband doesn’t just happen.  ISPs must build out their networks to support the growing hunger for faster Internet connectivity.  Web surfing and email aren’t the only game in town.  Now, we have streaming video, online multiplayer, and persistently connected devices all over the home.  The Internet of Things is going to consume a huge amount of bandwidth in an average home as more smart devices are brought online.  ISPs are trying to meet the needs of their subscribers.  But are they going far enough?

ISPs want to build networks their customers will use, and more importantly pay to use.  They want to ensure that complaints are kept to a minimum while providing the services that customers demand.  Those ISP networks cost a hefty sum.  Given the choice between paying to upgrade a network and trying to squeeze another month or two out of existing equipment, you can guarantee the ISPs are going to take the cheaper route.  Coincidentally, that’s one of the reasons why the largest backers of 802.1aq Shortest Path Bridging were ISP-oriented.  Unlike TRILL, SPB doesn’t require new equipment to forward frames.  ISPs can use existing equipment to deliver SPB with no out-of-pocket expenditure on hardware.  That little bit of trivia should give you an idea why ISPs are trying to do away with net neutrality.

True Neutral

ISPs want to keep using their existing equipment as long as possible.  Every dollar they make from this cycle’s capital expenditure means a dollar of profit in their pocket before they have to replace a switch.  If there was a way to charge even more money for existing services, you can better believe they would do it.  Which is why this infographic hits home for most:

[Infographic: net neutrality]

Charging for service tiers would suit ISPs just fine.  After all, as the argument goes, you are using more than the average user.  Shouldn’t you shoulder the financial burden of increased network utilization?  That’s fine for corner cases like developers or large consumers of downstream bandwidth.  But with Netflix usage increasing across the board, why should the ISP charge you more on top of a Netflix subscription?  Shouldn’t their network anticipate the growing popularity of streaming video?

The other piece of the tiered offering above that should give pause is the common carrier rules for service providers.  Common carriers get to be absolved of liability for the things they transport because they have to agree to transport everything offered to them.  What do you think would happen if those carriers suddenly decide they want to discriminate about what they send?  If that discrimination revokes their common carrier status, what’s to stop them from acting like a private carrier and start refusing to transport certain applications or content?  Maybe forcing a video service to negotiate a separate peering agreement for every ISP they want to use?  Who would do that?

Neutral Good

Net Neutrality has to exist to ensure that we are free to use the services we want to consume.  Sure, this means that things like Quality of Service (QoS) can’t be applied to packets, so that they are all treated equally.  The alternative is to have guaranteed delivery for an additional fee.  And every service you add on top would incur more fees.  New multiplayer game launching next week? The ISP will charge you an extra $5 per month to ensure you have a low ping time to beat the other guy.  If you don’t buy the package, your multiplayer traffic gets dumped in with Netflix and the rest of the bulk traffic.

This is part of the reason why Google Fiber is such a threat to existing ISPs.  When the only options for local loop delivery are the cable company and the phone company, it’s difficult to find options that aren’t tiered in the absence of neutrality.  With viable third-party fiber buildouts like Google starting to spring up, they become a bargaining chip to increase speeds to users and upgrade backbones to support heavy usage.  If you don’t believe that, look at what AT&T did immediately after Google announced Google Fiber in Austin, TX.


Tom’s Take

ISPs shouldn’t be able to play favorites with their customers.  End users are paying for a connection.  End users are also paying services to use their offerings.  Why should we have to pay for a service twice if the ISP wants to charge us more in a tiered setup?  That smells of a protection racket in many ways.  I can imagine the ISP techs sitting there in a slick suit saying, “That’s a nice connection you got there.  It would be a shame if something were to happen to it.”  Instead, it’s up to the users to demand ISPs offer free and unrestricted access to all content.  In some cases, that will mean backing alternatives and “voting with your dollar” to make the message heard loud and clear.  I won’t sign up for services that have data usage caps or metered speed limits past a certain ceiling.  I would drop any ISP that wants me to pay extra just because I decide to start using a video streaming service or a smart thermostat.  It’s time for ISPs to understand that hardware should be an investment in future customer happiness and not a tool that’s used to squeeze another dime out of their user base.

Throw More Storage At It

All of this has happened before.  All of this will happen again.

The more time I spend listening to storage engineers talk about the pressing issues they face in designing systems in this day and age, the more I’m convinced that we fight the same problems over and over again with different technologies.  Whether it be networking, storage, or even wireless, the same architecture problems crop up in new ways and require different engineers to solve them all over again.

Quality is Problem One

A chance conversation with Howard Marks (@DeepStorageNet) at Storage Field Day 4 led me to start thinking about these problems.  He was talking during a presentation about the difficulty that storage vendors have faced in implementing quality of service (QoS) in storage arrays.  As Howard described some of the issues with isolating neighboring workloads and ensuring they can’t cause performance issues for a specific application, I started thinking about the implementation of low latency queuing (LLQ) for voice in networking.

LLQ was created to solve a specific problem.  High-volume, low-bandwidth flows can starve everything else in traditional priority queuing systems.  In much the same way, applications that demand high amounts of input/output operations per second (IOPS) while storing very little data can cause huge headaches for storage admins.  Storage has tried to solve this problem with hardware in the past by creating things like write-back caching or even super-fast flash storage caching tiers.
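
The fix looks the same in both worlds: give the demanding workload its own policed queue so it can’t crowd out everyone else.  Below is a minimal token-bucket sketch of that idea.  The workload names and numbers are invented, and it’s a conceptual illustration rather than how any particular router or array actually implements QoS.

```python
# Minimal token-bucket sketch: cap a demanding workload's IOPS so its neighbors
# aren't starved. Purely illustrative -- the names and numbers are invented.
import time

class TokenBucket:
    def __init__(self, rate_iops: float, burst: float):
        self.rate = rate_iops         # tokens (I/Os) refilled per second
        self.capacity = burst         # maximum burst size
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, n: int = 1) -> bool:
        """Return True if n I/Os may be dispatched now, False if they should wait."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False

# One bucket per workload: the chatty database gets a hard ceiling so the
# backup job's big sequential I/O still gets serviced.
limits = {
    "chatty-db": TokenBucket(rate_iops=5000, burst=500),
    "backup": TokenBucket(rate_iops=20000, burst=2000),
}

def submit_io(workload: str) -> str:
    return "dispatch" if limits[workload].allow() else "queue for later"

print(submit_io("chatty-db"))
```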

Make It Go Away

In fact, a lot of the problems in storage mirror those from the networking world many years ago.  Performance issues used to have a very simple solution – throw more hardware at the problem until it goes away.  In networking, you throw bandwidth at the issue.  In storage, more IOPS are the answer.  When hardware isn’t pushed to the absolute limit, the answer will always be to keep driving it higher and higher.  But what happens when performance can’t fix the problem any more?

Think of a sports car like the Bugatti Veyron.  It is the fastest production car available today, with a top speed well over 250 miles per hour.  In fact, Bugatti refuses to talk about the absolute top speed, instead countering “Would you ask how fast a jet can fly?”  One of the limiting factors in attaining a true measure of the speed isn’t found in the car’s engine.  Instead, the subsystems around the engine begin to fail at such stressful speeds.  At 258 miles per hour, the tires on the car will completely disintegrate in 15 minutes.  The fuel tank will be emptied in 12 minutes.  Bugatti wisely installed a governor on the engine limiting it to a maximum of 253 miles per hour in an attempt to prevent people from pushing the car to its limits.  A software control designed to prevent performance issues by creating artificial limits.  Sound familiar?

Storage has hit the wall when it comes to performance.  PCIe flash storage devices are capable of hundreds of thousands of IOPS.  A single PCIe card has the throughput of an entire data center.  Hungry applications can quickly saturate the weak link in a system.  In days gone by, that was the IOPS capacity of a storage device.  Today, it’s the link that connects the flash device to the rest of the system.  Not until the advent of PCIe was the flash storage device fast enough to keep pace with workloads starved for performance.
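
To put a rough number on that weak link (the figures are illustrative, not a benchmark of any real card), a flash device pushing a few hundred thousand small I/Os per second is already generating more traffic than a 10 Gbps link can carry:

```python
# Back-of-the-envelope math with illustrative numbers, not a real benchmark.
iops = 500_000          # hypothetical flash card doing 500K small I/Os per second
block_size = 4 * 1024   # 4 KB per I/O

bytes_per_sec = iops * block_size
gbps = bytes_per_sec * 8 / 1e9

print(f"{bytes_per_sec / 1e9:.1f} GB/s = {gbps:.1f} Gbps")
# ~2.0 GB/s, roughly 16 Gbps -- enough to swamp a 10 Gbps uplink on its own.
```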

Storage isn’t the only technology with hungry workloads stressing weak connection points.  Networking is quickly reaching this point with the shift to cloud computing.  Now, instead of the east-west traffic between data center racks being a point of contention, the connection point between the user and the public cloud will now be stressed to the breaking point.  WAN connection speeds have the same issues that flash storage devices do with non-PCIe interfaces.  They are quickly saturated by the amount of outbound traffic generated by hungry applications.  In this case, those applications are located in some far away data center instead of being next door on a storage array.  The net result is the same – resource contention.


Tom’s Take

Performance is king.  The faster, stronger, better widgets win in the marketplace.  When architecting systems it is often much easier to specify a bigger, faster device to alleviate performance problems.  Sadly, that only works to a point.  Eventually, the performance problems will shift to components that can’t be upgraded.  IOPS give way to transfer speeds on SATA connectors.  Data center traffic will be slowed by WAN uplinks.  Even the CPU in a virtual server will eventually become an issue after throwing more RAM at a problem.  Rather than just throwing performance at problems until they disappear, we need to take a long hard look at how we build things and make good decisions up front to prevent problems before they happen.

Like the Veyron engineers, we have to make smart choices about limiting the performance of a piece of hardware to ensure that other bottlenecks do not happen.  It’s not fun to explain why a given workload doesn’t get priority treatment.  It’s not glamorous to tell someone their archival data doesn’t need to live in a flash storage tier.  But it’s the decision that must be made to ensure the system is operating at peak performance.

Replacing Nielsen With Big Data

I’m a fan of television. I like watching interesting programs. It’s been a few years since one has caught my attention long enough to keep me watching for multiple seasons. Part of the reason is my fear that an awesome program is going to fail to reach a “target market” and will end up canceled just when it’s getting good. It’s happened to several programs that I liked in the past.

Sampling the Goods

Part of the issue with tracking the popularity of television programs comes from the archaic method in which programs are measured. Almost everyone has heard of the Nielsen ratings system. This sampling method was created in the 1930s as a way to measure radio advertising reach. In the 50s, it was adapted for use in television.

Nielsen selects target audiences that represent the greater whole. They ask users to keep written diaries of their television watching habits. They also have the ability to install a device called a set meter which allows viewers to punch in a code to identify themselves via age groups and lock in a view for a program. The set meter can tell the instant a channel is changed or the TV is powered off.

In theory, the sampling methodology is sound. In practice, it’s a bit shaky. Viewing diaries are unreliable because people tend to overreport their viewing habits. If they feel guilty that they haven’t been writing anything down, they tend to look up something that was on TV and write it down. Diaries also can’t determine if a viewer watched the entire program or changed the channel in the middle. Set meters aren’t much better. The reliance on PIN codes to identify users can lead to misreported results. Adults in a hurry will sometimes punch in an easier code assigned to their children, leading to skewed age results.

Both the diary and the set meter fail to take into account the shift in viewing habits in modern households. Most of the TV viewing in my house takes place through time-shifted DVR recordings. My kids tend to monopolize the TV during the day, but even they are moving to using services like Netflix to watch all episodes of their favorite cartoons in one sitting. Neither of these viewing habits are easily tracked by Nielsen.

How can we find a happy medium? Sample sizes have been reduced significantly due to cord-cutting households moving to Internet distribution models. People tend to exaggerate or manipulate self-reported viewing results. Even “modern” Nielsen technology can’t keep up. What’s the answer?

Big Data

I know what you’re saying: “We’ve already got that with Nielsen, right?” Not quite. TV viewing habits have shifted in the past few years. So has TV technology. Thanks to the shift from analog broadcast signals to digital and the explosion of set top boxes for cable decryption and movie service usage, we now have a huge portal into the living room of every TV watcher in the world.

Think about it for a moment. The idea of a sample size works provided it’s a good representative sample. But tracking this data is problematic. If we have access to a way to crunch the actual data instead of extrapolating from incomplete sets, shouldn’t we use that instead? I’d rather believe the real numbers instead of guessing from unreliable sources.

This also fixes the issue of time-shifted viewing. Those same set top boxes are often responsible for recording the programs. They could provide information such as the number of shows recorded versus viewed and whether or not viewers skip through commercials. For those who view on mobile devices, that data could be compiled as well through integration with the set top box. User logins are already required for mobile apps today. It’s just a small step to integrating the whole package.

It would require a bit of technical upgrading on the client side. We would have to enable the set top boxes to report data back to a service. We could anonymize the data enough to be sure that people aren’t being unnecessarily exposed. It would also have to be configured as an opt-out setting to ensure that the majority is represented. Opt-in won’t work because those checkboxes never get checked.
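
As a sketch of what that client-side piece might look like (the field names and hashing scheme are my own illustration, not anything a real set top box runs today): the box only builds a report if the household hasn’t opted out, and the subscriber ID is salted and hashed so the analysis company can de-duplicate viewers without ever knowing who they are.

```python
# Illustrative sketch of opt-out, anonymized viewing reports from a set top box.
# The field names and hashing scheme are hypothetical.
import hashlib
import secrets

# A per-box random salt generated once at setup. It never leaves the device with
# the raw subscriber ID, so a report can't be tied back to a specific household.
BOX_SALT = secrets.token_hex(16)

def anonymize(subscriber_id: str) -> str:
    """One-way hash of the subscriber ID: stable enough to de-duplicate, not reversible."""
    return hashlib.sha256((BOX_SALT + subscriber_id).encode()).hexdigest()

def build_viewing_report(subscriber_id: str, events: list, opted_out: bool):
    """Return a report dict, or None if the household opted out of reporting."""
    if opted_out:
        return None
    return {
        "viewer": anonymize(subscriber_id),
        "events": events,  # e.g. [{"show": "Jericho", "recorded": True, "watched_pct": 80}]
    }

report = build_viewing_report(
    "ACCT-12345",
    [{"show": "Jericho", "recorded": True, "watched_pct": 80}],
    opted_out=False,
)
print(report)
```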

Advertisers are going to demand the most specific information about people that they can. The ratings service exists to work for the advertisers. If this plan is going to work, a new company will have to be created to collect and analyze this data. This way the analysis company can ensure that the data is specific enough to be of use to the advertisers while at the same time ensuring the protection of the viewers.


Tom’s Take

Every year, promising new TV shows are yanked off the airwaves because advertisers don’t see any revenue. Shows that have a great premise can’t get up to steam because of ratings. We need to fix this system. In the old days, the deluge of data would have drowned Nielsen. Today, we have the technology to collect, analyze, and store that data for eternity. We can finally get the real statistics on how many people watched Jericho or AfterMASH. Armed with real numbers, we can make intelligent decisions about what to keep on TV and what to jettison. And that’s a big data project I’d be willing to watch.

The Value of the Internet of Things

The recent sale of IBM’s x86 server business to Lenovo has people in the industry talking.  Some of the conversation has centered around the selling price.  Lenovo picked up IBM’s servers for $2.3 billion, which is more than 60% less than the initial asking price of $6 billion two years ago.  That price drew immediate comparisons to the Google acquisition of Nest, which was $3.2 billion.  Many people asked how a gadget maker with only two shipping products could be worth more than the entirety of IBM’s server business.

Are You Being Served?

It says a lot for the decline of hardware manufacturing, especially at the low end.  IT departments have been moving away from smaller, task-focused servers for many years now.  Instead of buying a new 1U, dual-socket machine to host an application, developers have used server virtualization as a way to spin up new services quickly with very little additional cost.  That means that older low-end servers aren’t being replaced when they reach the end of their life.  Those workloads are being virtualized and moved away while the equipment is permanently retired.

It also means that the target for server manufacturers is no longer the low end.  IT departments that have seen the benefits of virtualization now want larger servers with more memory and CPU power to insert into virtual clusters.  Why license several small servers when I can save money by buying a really big server?  With advances in SAN technology and parts that can be replaced without powering down the system, the need to have multiple systems for failover is practically negated.

And those virtual workloads are easily migrated away from onsite hardware as well.  The shift to cloud computing is the coup de grâce for the low-end server market.  It is just as easy to spin up an Amazon Web Services (AWS) instance to test software as it is to provision new hardware or a virtual cluster.  Companies looking to do hybrid cloud testing or public cloud deployments don’t want to spend money on hardware for the data center.  They would rather pour that money into AWS instances.

Those Internet Things

I think the disparity in the purchase price also speaks volumes for the value yet to be recognized in the Internet of Things (IoT).  Nest was worth so much to Google because it gave them an avenue not previously available.  Google wants to have as many devices in your home as it can afford to acquire.  Each of those devices can provide data to tune Google’s algorithms and provide quality data to advertisers that pay Google handsomely for those analytics.

IoT devices don’t need home servers.  They don’t ask for DNS entries.  They don’t have web interfaces.  The only setup needed out of the box is a connection to the wireless network in your home.  Once that happens, IoT devices usually connect back to a server in the cloud.  The customer accesses the device via an application downloaded from an app store.  No need for any additional hardware in the customer’s home.

IoT devices need infrastructure to work effectively.  However, they don’t need that infrastructure to exist on premises.  The shift to cloud computing means that these devices are happy to exist anywhere without dependence on hardware.  Users are more than willing to download apps to control them instead of debating how to configure the web UI.  Without the need for low-end hardware to run these devices, the market for that hardware is effectively dead.

Tom’s Take

I think IBM got exactly what they wanted when they offloaded their server business.  They can now concentrate on services and software.  The kinds of things that are going to be important in the Internet of Things.  Rather than lamenting the fire sale price of a dying product line, we should instead be looking to the value still locked inside IoT devices and how much higher it can go.