Google+ And The Quest For Omniscience


When you mention Google+ to people, you tend to get a very pointed reaction. Outside of a select few influencers, I have yet to hear anyone say good things about it. This opinion isn’t helped by the recent moves by Google to make Google+ the backend authentication mechanism for their services. What’s Google’s aim here?

Google+ draws immediate comparisons to Facebook. Most people will tell you that Google+ is a poor implementation of the world’s most popular social media site. I would tend to agree for a few reasons. I find it hard to filter things in Google+. The lack of a real API means that I can’t interact with it via my preferred clients. I don’t want to log into a separate web interface simply to ingest endless streams of animated GIFs with the occasional nugget of information that was likely posted somewhere else in the first place.

It’s the Apps

One thing the Google of old was very good at doing was creating applications that people needed. GMail and Google Apps are things I use daily. Youtube gets visits all the time. I still use Google Maps to look things up when I’m away from my phone. Each of these apps represents a separate development train and a unique way of looking at things. They were more integrated than some of the attempts I’ve seen to tie applications together at other vendors. They were missing one thing as far as Google was concerned: you.

Google+ isn’t a social network. It’s a database. It’s an identity store that Google uses to nail down exactly who you are. Every +1 tells them something about you. However, that’s not good enough. Google can only prosper if they can refine their algorithms.  Each discrete piece of information they gather needs to be augmented by more information.  In order to do that, they need to increase their database.  That means they need to drive adoption of their social network.  But they can’t force people to use Google+, right?

That’s where the plan to integrate Google+ as the backend authentication system makes nefarious sense. They’ve already gotten you hooked on their apps. You comment on Youtube or use Maps to figure out where the nearest Starbucks is. Google wants to know that. They want to figure out how to structure AdWords to show you more ads for local coffee shops or categorize your comments on music videos to sell you Google Play music subscriptions. Above all else, they can package that information as a product for advertisers.

Build It Before They Come

It’s devilishly simple. It’s also going to be more effective than Facebook’s approach. Ask yourself this: when’s the last time you used Facebook Mail? Facebook started out with the lofty goal of gathering all the information that it could about people. Then they realized the same thing that Google did: You have to collect information on what people are using to get the whole picture. Facebook couldn’t introduce a new system, so they had to start making apps.

Except people generally look at those apps and push them to the side. Mail is a perfect example. Even when Facebook tried to force people to use it as their primary communication method, their users rebelled against the idea. Now, Facebook is being railroaded into using their data store as a backend authentication mechanism for third-party sites. I know you’ve seen the “Log In With Facebook” buttons already. I’ve even written about it recently. You probably figured out this is going to be less successful for a singular reason: control.

Unlike Google+ with its integration into all Google apps, third parties that utilize Facebook logins can choose to restrict the information that is shared with Facebook. Given the climate of privacy in the world today, it stands to reason that people are going to start being very selective about the information that is shared with these kinds of data sinks. Thanks to the Facebook login API, a significant portion of the collected information never has to be shared back to Facebook. On the other hand, Google+ is just making a simple backend authorization. Given that they’ve turned on Google+ identities for Youtube commenting without a second thought, it does make you wonder what other data they’re collecting without really thinking about it.
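
That selectivity shows up right at login time. Here is a minimal sketch of the OAuth-style authorization URL a third-party site builds when it asks for a Facebook-style login; the endpoint, client ID, and redirect URI are all hypothetical, but the point stands: the site spells out exactly which permission scopes it wants, and nothing outside that list changes hands.

```python
from urllib.parse import urlencode

def build_login_url(client_id, redirect_uri, scopes):
    """Build an OAuth-style authorization URL that requests only the
    listed permission scopes -- everything else stays unshared."""
    params = {
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",
        "scope": ",".join(scopes),  # ask for the minimum needed
    }
    # Hypothetical authorization endpoint, for illustration only
    return "https://www.example.com/dialog/oauth?" + urlencode(params)

# A site that only needs an email address requests just that one scope
url = build_login_url("my-app-id", "https://myapp.example/cb", ["email"])
print(url)
```

A privacy-conscious site simply keeps that scope list short; the data sink never gets a blanket grant.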


Tom’s Take

I don’t use Google+. I post things there via API hacks. I do it because Google as a search engine is too valuable to ignore. However, I don’t actively choose to use Google+ or any of the apps that are now integrated into it. I won’t comment on Youtube. I doubt I’ll use the Google Maps functions that are integrated into Google+. I don’t like having a half-baked social media network forced on me. I like it even less when it’s a ham-handed attempt to gather even more data on me to sell to someone willing to pay to market to me. Rather than trying to be the anti-Facebook, Google should stand up for the rights of their product…uh, I mean customers.

The Value of the Internet of Things

The recent sale of IBM’s x86 server business to Lenovo has people in the industry talking.  Some of the conversation has centered around the selling price.  Lenovo picked up IBM’s servers for $2.3 billion, which is more than 60% less than the initial asking price of $6 billion two years ago.  That price drew immediate comparisons to the Google acquisition of Nest, which was $3.2 billion.  Many people asked how a gadget maker with only two shipping products could be worth more than the entirety of IBM’s server business.

Are You Being Served?

It says a lot for the decline of hardware manufacturing, especially at the low end.  IT departments have been moving away from smaller, task focused servers for many years now.  Instead of buying a new 1U, dual socket machine to host an application, developers have used server virtualization as a way to spin up new services quickly with very little additional cost.  That means that older low end servers aren’t being replaced when they reach the end of their life.  Those workloads are being virtualized and moved away while the equipment is permanently retired.

It also means that the target for server manufacturers is no longer the low end.  IT departments that have seen the benefits of virtualization now want larger servers with more memory and CPU power to insert into virtual clusters.  Why license several small servers when I can save money by buying a really big server?  With advances in SAN technology and parts that can be replaced without powering down the system, the need to have multiple systems for failover is practically negated.

And those virtual workloads are easily migrated away from onsite hardware as well.  The shift to cloud computing is the coup de grâce for the low end server market.  It is just as easy to spin up an Amazon Web Services (AWS) instance to test software as it is to provision new hardware or a virtual cluster.  Companies looking to do hybrid cloud testing or public cloud deployments don’t want to spend money on hardware for the data center.  They would rather pour that money into AWS instances.
Those Internet Things

I think the disparity in the purchase price also speaks volumes for the value yet to be recognized in the Internet of Things (IoT).  Nest was worth so much to Google because it gave them an avenue not previously available.  Google wants to have as many devices in your home as it can afford to acquire.  Each of those devices can provide data to tune Google’s algorithms and provide quality data to advertisers that pay Google handsomely for those analytics.

IoT devices don’t need home servers.  They don’t ask for DNS entries.  They don’t have web interfaces.  The only setup needed out of the box is a connection to the wireless network in your home.  Once that happens, IoT devices usually connect back to a server in the cloud.  The customer accesses the device via an application downloaded from an app store.  No need for any additional hardware in the customer’s home.

IoT devices need infrastructure to work effectively.  However, they don’t need that infrastructure to exist on premises.  The shift to cloud computing means that these devices are happy to exist anywhere without dependence on hardware.  Users are more than willing to download apps to control them instead of debating how to configure the web UI.  Without the need for low end hardware to run these devices, the market for that hardware is effectively dead.

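The phone-home pattern above can be sketched in a few lines.  This is an illustration only, not any vendor’s actual protocol: the device ID, payload fields, and the broker topic in the comment are all invented.  The device builds a small status message and pushes it to the cloud; the decision logic lives on the device, and the only local requirement is the Wi-Fi connection.

```python
import json
import time

def telemetry_message(device_id, temperature_c, target_c):
    """Build the small status payload a cloud-connected thermostat
    might push to its vendor's servers -- no home server involved."""
    return json.dumps({
        "device": device_id,
        "ts": int(time.time()),
        "temperature_c": temperature_c,
        "target_c": target_c,
        "heating": temperature_c < target_c,  # device-side decision
    })

msg = telemetry_message("thermostat-01", 18.5, 21.0)
# On real hardware this would be published to a cloud endpoint, e.g.:
#   client.publish("devices/thermostat-01/status", msg)  # hypothetical topic
print(msg)
```

The companion smartphone app reads the same cloud record, which is why the customer never needs DNS entries, a web UI, or any extra gear in the house.
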
Tom’s Take

I think IBM got exactly what they wanted when they offloaded their server business.  They can now concentrate on services and software, the kinds of things that are going to be important in the Internet of Things.  Rather than lamenting the fire sale price of a dying product line, we should instead be looking to the value still locked inside IoT devices and how much higher it can go.

Why I Won’t Be Getting Google Glass


You may recall a few months back when I wrote an article talking about Google Glass and how I thought that the first generation of this wearable computing device was aimed way too low in terms of target applications. When Google started a grassroots campaign to hand out Glass units to social media influencers, I retweeted my blog post with the hashtag #IfIHadGlass with the idea that someone at Google might see it and realize they needed to set their sights higher. Funny enough, someone at Google did see the tweet and told me that I was in the running to be offered a development unit of Glass. All for driving a bit of traffic to my blog.

About a month ago, I got the magic DM from Google Glass saying that I could go online and request my unit along with a snazzy carrying case and a sunglass lens if I wanted. I only had to pony up $1500US for the privilege. Oh, and I could only pick it up at a secured Google facility. I don’t even know where the closest one of those to Oklahoma might be. After weighing the whole thing carefully, I made my decision.

I won’t be participating in generation one of Google Glass.

I had plenty of reasons. I’m not averse to participating in development trials. I use beta software all the time. I signed up for the last wave of Google CR-48 Chromebooks. In fact, I still use that woefully underpowered CR-48 to this day. But Google Glass represents something entirely different from those beta opportunities.

From Entry to Profit

Google isn’t creating a barrier to entry through their usual methods of restricting supply or making the program invite only. Instead, they are trying to restrict Glass users to those with a spare $1500 to drop on a late alpha/early beta piece of hardware. I also think they are trying to recoup the development costs of the project via the early adopters. Google has gone from being an awesome development shop to a company acutely aware of the bottom line. Google has laid down some very stringent rules to determine what can be shown on Glass. Advertising is verboten. Anyone want to bet that Google finds a way to work AdWords in somewhere? If you are relying on your tried-and-true user base of developers to recover your costs before you even release the product to the masses, you’ve missed big time.

Eye of the Beholder

One of the other things that turned me off about the first generation of Glass is the technology not quite being where I thought it would be. After examining what Glass is capable of doing from a projection standpoint, many of my initial conceptions about the unit were way off. I suppose that has a lot to do with what I thought Google was really working on. Instead of finding a way to track eye movement inside of a specific area and deliver results based on where the user’s eye is focused, Google instead chose to simply project a virtual screen slightly off center from the user’s field of vision. That’s a great win for version one. But it doesn’t really accomplish what I thought Google Glass should do. The idea of a wearable eyeglass computer isn’t that useful to me if the field of vision is limited to a camera glued to the side of a pair of eyeglass frames. Without the ability to track the eye movements of a user, it’s simply not possible to filter the huge amount of information being taken in by the user. If Google could implement a function to see what the user is focusing on, I’m sure that some companies would pay *huge* development dollars to track that information or run some kind of augmented reality advertisement offering an alternative to that brand. Just go and watch Minority Report if you want to know what I’m thinking about.

Mind the Leash

According to my friend Blake Krone (@BlakeKrone), who just posted his first Google Glass update, the unit is great for taking pictures and video without the need to dig out a camera or take your eyes off the subject for more than the half second it takes to activate the Glass camera with a voice command.  Once you’ve gotten those shiny new pictures ready to upload to Google+, how are you going to do it?  There’s the rub in the first generation Glass units.  You have to tether Glass to some kind of mobile hotspot in order to be able to upload photos outside of a WiFi hotspot.  I guess trying to cram a cellular radio into the little plastic frame was more than the engineers could muster in the initial prototype.  Many will stop me here and interject that WiFi hotspot access is fairly common now.  All you have to do is grab a cup of coffee from Starbucks or a milkshake from McDonalds and let your photos upload to GoogleSpace.  How does that work from a mountain top?  What if I had a video that I wanted to post right away from the middle of the ocean?  How exactly do you livestream video while skydiving over the Moscone Center during Google I/O?  Here’s a hint:  You plant engineers on the roof with parabolic dishes to broadcast WiFi straight up in the air.  Not as magical when you strip all the layers away.  For me, the need to upgrade my data plan to include tethering just so I could upload those pics and videos outside my house was another non-starter.  Maybe the second generation of Glass will have figured out how to make a cellular radio small enough to fit inside a pair of glasses.

Tom’s Take

Google Glass has made some people deliriously happy. They have a computer strapped to their face and they are hacking away to create applications that are going to change the way we interact with software and systems in general. Those people are a lot smarter than me. I’m not a developer. I’m not a visionary. I just call things like I see them. To me, Google Glass was shoved out the door a generation too early to be of real use. It was created to show that Google is still on the cutting edge of hardware development even though no one else was developing wearable computing. On the other hand, Google did paint a huge target on their face. When the genie came out of the bottle, other companies like Apple and Pebble started developing their own take on wearable computing. Sure, it’s not as striking as a pair of sci-fi goggles. But evolutionary steps here lead to the slimming down of technology to the point where those iPhones and Samsung Galaxy S 4 Whatevers can fit comfortably into the frame of any designer eyeglasses. And that’s when the real money is going to be made. Not by gouging developers or requiring your users to be chained to a smartphone.

If you want to check out what Glass looks like from the perspective of someone that isn’t going to wear them in the shower, check out Blake’s Google Glass blog at http://FromEyeLevel.com

Software Defined Cars


I think everything in the IT world has been tagged as “software defined” by this point. There’s software defined networking, software defined storage, the software defined data center, and so on. Given that the definitions of the things I just enumerated are very hard to nail down, it’s no surprise that many in the greater IT community just roll their eyes when they start hearing someone talk about SD.

I try to find ways to discuss advanced topics like this with people that may not understand what a hypervisor is or what a forwarding engine is really supposed to be doing. The analogies that I come up with usually relate to everyday objects that are familiar to my readers. If I can frame the Internet as a highway and help people “get it,” then I’ve succeeded.

During one particularly interesting discussion, I started trying to relate SDN to the automobile. The car is a fairly stable platform that has been iterated upon many times in the 150 years that it has been around. We’ve seen steam-powered single seat models give way to 8+ passenger units capable of hauling tons of freight. It is a platform that is very much defined by the hardware. Engines and seating are the first things that spring to mind, but also wheels and cargo areas. The difference between a sports car and an SUV is very apparent due to hardware, much in the same way that a workgroup access switch only resembles a core data center switch in the most basic terms.

This got me to thinking: what would it take to software define a car? How could I radically change the thinking behind an automobile with software? At first, I thought about software programs running in the engine that assist the driver with things like fuel consumption or perhaps an on-demand traction and ride handling system. Those are great additional features for sure, but they don’t really add anything to the base performance of a car beyond a few extra tweaks. Even the most advanced “programming” tools offered for performance specialists, which allow for the careful optimization of transmission shifting patterns and fuel injector mixture recalibration, don’t really fall into the software defined category. While those programs offer a way to configure the car in a manner different from the original intent, they are difficult to operate and require a great deal of special knowledge to configure in the first place.

That’s when it hit me like a bolt out of the blue. We already have a software defined car. Google has been testing it for years. Only they call it a Driverless Car. Think about it in terms of our favorite topic of SDN. Google has taken the hardware that we are used to (the car) and replaced the control plane with a software construct (the robot steering mechanism). The software is capable of directing the forwarding of the hardware with no user intervention.

That’s a pretty amazing feat when you think about it. Of course, programming a car to drive itself isn’t easy. There’s a ton of extra data that is generated as a car learns to drive itself that must be taken into account. In much the same way, the network is going to generate mountains of additional data that needs to be captured by some kind of sensor or management platform. That extra data represents the network feedback that allows you to do things like steer around obstacles, whether they be a deer in the road or a saturated uplink to a cloud provider.

In addition, the idea of a driverless software defined car is exciting because of the disruption that it represents. Once we don’t need a cockpit with a steering mechanism or access to propulsion mechanisms directly at our fingertips (or feet), we can go about breaking apart the historical construction of a car and making it a more friendly concept. Why do I need to look forward when my car does all the work? Why can’t I twist the seats 90 degrees and facilitate conversation among passengers while the automation is occurring? Why can’t I put in an uplink to allow me to get work done or a phone to make calls now that my attention doesn’t need to be focused on the road? When the car is doing all the driving, there are a multitude of ideas that need to be reconsidered for how we design the automobile.

When I started bouncing this idea off of some people, Stephen Foskett (@SFoskett) mentioned to me that some people would take issue with my idea of a software defined car because it’s a self-contained, closed ecosystem. What about a software defined network that collects data and provides for greater visibility to the management layer? Doesn’t it need to be a larger system in order to really take advantage of software definition? That’s the beauty of the software defined piece. Once we have a vehicle generating large amounts of actionable data, we can collect that data and do something with it. Google has traffic data in their Maps application. What if that data was being fed in real time by the cars themselves? What if the car could automatically recognize traffic congestion and reroute on the fly instead of merely suggesting that the driver take an alternate path? What if we could load balance our highway system efficiently because the car is getting real time data about conditions? Now Google has the capability to use their software defined endpoints to reconfigure as needed.
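
At its core, that rerouting idea is just shortest-path routing over live travel-time data, the same math a network uses to steer around a saturated uplink. Here is a toy sketch with an invented three-road map: when congestion reports triple one road’s travel time, the computed route flips on its own.

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra over travel times; graph maps node -> {neighbor: minutes}."""
    dist, prev = {start: 0}, {}
    queue = [(0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, minutes in graph[node].items():
            nd = d + minutes
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(queue, (nd, nbr))
    # Walk back from the goal to recover the chosen route
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# Invented road grid: the highway is normally the faster route...
roads = {"home": {"highway": 5, "sidestreet": 9},
         "highway": {"office": 10},
         "sidestreet": {"office": 12},
         "office": {}}
print(shortest_route(roads, "home", "office"))

# ...until live congestion reports triple the highway's travel time
roads["highway"]["office"] = 30
print(shortest_route(roads, "home", "office"))
```

Replace the invented minutes with telemetry streamed from the cars themselves and you have the load-balanced highway: the map updates, every car recomputes, and nobody has to tap “take alternate route.”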

What if that same car could automatically sense that you were driving to the airport and check you into your flight based on arrival time without the need to intervene? How about inputting a destination, such as a restaurant or a sporting event, and having the car instantly reserve a parking spot near the venue based on reports from cars already in the lot or from sensors that report the number of empty spots in a parking garage nearby? The possibilities are really limitless even in this first or second stage. The key is that we capture the generated data from the software pieces that we install on top of existing hardware. We never knew we could do this because the interface into the data never existed prior to creating a software layer that we could interact with.  When you look at what Google has already done with their recent acquisition of Waze, the social GPS and map application, it does look like Google is starting down this path.  Why rely on drivers to update the Waze database when the cars can do it for you?


Tom’s Take

I have spent a very large portion of my IT career driving to and from customer sites. The idea of a driverless car is appealing, but it doesn’t really help me to just sit over in the passenger seat and watch a computer program do my old job. I still like driving long distances to a certain extent. I don’t want to lose that. It’s when I can start using the software layer to enable things that I never thought possible that I start realizing the potential. Rather than just looking at the driverless software defined car as a replacement for drivers, the key is to look at the potential that it unlocks to be more efficient and make me more productive on the road. That’s the key takeaway for us all. Those lessons can also be applied to the world of software defined networking/storage/data center as well. We just have to remember to look past the hype and marketing and realize what the future holds in store.

A Chrome-Plated Workout

I’ve had my CR-48 for about two weeks now, and I’ve put it through its paces.  I used it to take notes at Tech Field Day 5.  I set up an IRC channel for people to ask questions during the event.  I’ve written numerous blog posts on the little laptop.  I’ve used it to chat with people halfway around the world.  All in all, I’m impressed with the unit.  That’s not to say that everything about it has me thrilled.

The Good

I like the fact that the CR-48 is instantly on when I lift the lid.  The SSD and the lightweight OS team up to make it quite easy to just grab and fire up to start using for notes or web surfing.  It’s not quite as fast as an iPad, but much faster than hauling out my Lenovo w701 behemoth.  I like having the CR-48 handy for things I would rather do with a keyboard.

More than a few people have remarked to me that it looks “just like a MacBook”.  And I’ve come to see it much like a MacBook Air.  Obviously it’s not as sleek as Apple’s little wonder, but I like the form factor and the screen resolution much better than some of the other netbooks I’ve used.  It doesn’t feel cramped and toy-like.  In fact, it feels more Mac-like than any other laptop I’ve used.  I’m sure that is intentional on the part of Google.

Having the 3G Verizon radio is pretty handy in situations where there is no Wi-Fi available.  More than once I found myself unable to connect to a certificate-based wireless system (a known issue) or stuck in a place with terrible reception.  With the CR-48, I just switch over to the 3G radio and keep plugging away.  The 100MB allowed with the trial is a little anemic for heavy-duty use, but the bigger plans seem fairly priced should I find the need to upgrade to one.  When I tried activating the radio over the phone, the Verizon rep made sure to point out that they had plans available in all sizes for me to purchase, but somehow skipped over the part about me having 100MB for free each month.  Luckily I read the instructions.

The Bad

The CR-48 isn’t without its annoyances.  The touchpad is probably the most persistent issue I had.  The tap-to-click functionality found on most trackpads was bordering on annoying for me.  I’m a touch typist with hands the size of a gorilla’s.  I tend to rest my thumbs at the bottom of the keyboard as I type, and on this laptop that means brushing the trackpad more often than not.  With the default settings, I often found myself sending e-mail or canceling tweets without realizing what happened, or my cursor shooting over to a random section of my blog post and my words spilling into other thoughts.  I finally gave up and disabled the tap-to-click setup, ironically making it more like a MacBook.

I also made the mistake of letting the battery run down all the way.  It was already low from use and I let it go to sleep without plugging it in.  Sure enough, it drained down and wouldn’t power back up.  Once I plugged it in I was able to use it, but it wouldn’t charge no matter how long I left it plugged in.  It took some searching on the Internet to find an acceptable solution (of which there appear to be many) before settling on a combination of things.  I pulled the battery for about 2 minutes, then reattached it and CAREFULLY plugged the adapter back in.  As soon as I saw the orange charging light come on, I finished pushing the charger all the way in and it worked for me after that.  There are rumors that the port and/or the charger are a little substandard, so this is something that is going to bear a little more inspection.  Speaking of the charger, the fact that it uses a three-pronged plug is a little annoying when I’m trying to find a place to plug in.  I’ve taken to carrying a little 2-prong grounding adapter in my bag just so I can plug in anywhere.  Not an expensive solution, but something I wish I didn’t have to do.

One final annoyance was a minor issue that turned into a humorous solution.  When I unboxed the unit and fired it up the first time, it seemed that playing two audio streams on top of each other would cause the speaker to short out and sound like I was choking a robot.  There was evidently a fix for it, but there seemed to be an issue with the netbook pulling the new update as it was only a point release and very minor.  Every time I checked the system updater, it told me the system was up to date.  The fix I found on the Internet suggested to click the Update button repeatedly until the system finally recognized the new update.  Literally, I clicked 50 times in order to get the update.  It did fix my audio issues, but you would think the update system would recognize a new release was out without me needing to be spastic with the update button.

Tom’s Take

Overall, I’m thrilled with the CR-48 after a couple of weeks of exposure.  I keep it in my bag at all times, ready to go when necessary.  When I head back to Wireless Field Day in March, I’m planning on leaving the behemoth behind and only taking my CR-48 and my iPad for connectivity.  I figure cutting down on the extra 12 pounds of weight will be good for my posture, and not having to haul an extra laptop out at the TSA Security and Prostate Screening Checkpoint is always welcome to not only myself but the other passengers as well.  I’m also debating whether or not to flip over into developer mode to see if that has any additional tricks I can try out.  I don’t know if it’ll increase my productivity any more, but having a few extra knobs and switches to play with is never a bad thing.

Let Me Google (Chrome) That For You

Back in December, I applied for a Google Chrome OS notebook.  I figured that if I got one I could use it for testing and checking out Google’s ideas about a web-enabled OS.  I then promptly forgot about it.  Guess what showed up on Monday?

I am now the proud possessor of a Google CR-48 Chrome OS laptop.  It’s a pretty utilitarian thing, which suits me just fine.  The finish is matte all over.  No fancy aluminium or gloss plastic.  Likewise, the connection ports are equally spartan.  An SD card slot, headphone jack, single USB port, and a power connector adorn the right side.  The VGA out occupies the left side.  No Firewire, no Ethernet, no optical drive.  This thing is designed to use wireless to connect to the network and pretty much run on its own without many (if any) peripherals.

The hardware is very netbook-ish.  An Atom processor with 2GB of RAM, along with a 16 GB SSD.  The latter allows the machine to wake from sleep almost instantly, much like a certain Air-y demo from the Fruit Company Not-A-Netbook press conference last year.  There’s even an integrated Verizon 3G modem for connectivity outside of Wi-Fi areas.  All in all, the looks combined with the hardware specs would most likely not even get a second glance from a buyer.  It’s what’s under the hood that is so very different.

For those of you out there that are fans of the Google Chrome web browser like I am, you’ll find the interface to be identical on the CR-48.  Here’s the catch, though.  That web browser is the ENTIRE OS.  No start menu.  No dock.  The whole OS concept revolves around the browser, and by extension, the web itself.  The user account for the system is a Google account.  It pulls your information from GMail, Google Docs, and even your Chrome favorites if you’ve set them to sync.  All of my Google information was pulled down the first time I logged into the laptop.  The 16GB drive is enough to handle a few downloads, but most of your file manipulation will occur in the “cloud”.  Google Docs for an office suite, for instance.  Any other apps you might need can be downloaded from the Google Apps Web Store.  Twitter clients, note taking apps, remote access apps, and so on.  They can all be “installed” into the browser OS for access to the things you use the most.  The more I used the system, though, the more I found myself thinking in terms of web-based content and less in terms of document storage and programs like I do on my work laptop.  For a child or a spouse that spends 85-95% of their time doing online-related content creation and consumption (like Facebook or webmail), this would be the perfect laptop.

That’s not to say that the CR-48 is a perfect laptop.  There are some issues, even taking into account this early beta type OS/platform.  Bluetooth doesn’t work.  Neither does connecting to a Wi-Fi network that uses certificate authentication.  The trackpad is a little tough to get used to.  I’ve heard some people compare it to the type found on the latest MacBook.  It takes some getting used to, and the gesture support isn’t quite intuitive for me just yet.  The other bug I ran into was with the Verizon 3G modem.  There’s a quick link for activating the Verizon account that Google has graciously provided.  However, I hit a geographical snag.  It appears that the Verizon towers in my area code (405) used to belong to Alltel Communications before they were purchased by Verizon last year.  So, when the activation signal was sent, it wouldn’t register correctly.  I was only able to activate it correctly when I went out of state.  To Google’s credit, the Chrome Netbook Ninja (support person) I talked to diagnosed the problem inside of 5 minutes after the Verizon pre-paid tech fought with it for 30 minutes.  Kudos to Google for having competent support people.  And even more kudos for allowing them to call themselves ninjas.

I plan on using the Chrome netbook to take notes during Tech Field Day this week to give it a good run and see how well it performs.  I may even let my kids start using it to see how well it holds up to the gentle caresses of a 5 year old and a 2 year old.  Stay tuned for further reports.