The Compost-PC Era

Generic Mobile Devices

I realized the other day that the vibration motor in my iPhone 5s had gone out.  Thankfully, my device was still covered under warranty.  I set up an appointment to have it fixed at the nearest Apple store.  I figured I’d go in and they’d just pop in a new motor.  It is a simple repair according to iFixit.  I backed my phone up one last time as a precaution.  When I arrived at the store, it took no time to determine what was wrong.

What shocked me was that the Genius tech told me, “We’re just going to replace your whole phone.  We’ll send the old one off to get repaired.”  I was taken aback.  This was a $20 part that should have taken all of five minutes to pop in.  Instead, I got my phone completely replaced after just three months.  As the new phone synced from my last iCloud backup, I started thinking about what this means for the future of devices.

Bring Your Own Disposable

Most mobile devices are a wonder of space engineering.  Cramming an extra-long battery in with a vibrant color screen and enough storage to satisfy users is a challenge in any device.  Making it small enough and light enough to hold in the palm of your hand is even more difficult.  Compromises must be made.  Devices today are held together as much by glue and adhesive as they are by nuts, bolts, and screws.  Gaining access to a device to repair a broken part is becoming all but impossible with each new generation.

I can still remember opening the case on my first PC to add a sound card and an Overdrive processor.  It was a bit scary but led to a career in repairing computers.  I’m downright terrified to pop open an iPhone.  The ribbon cables are so fragile that it doesn’t take much to render the phone unusable.  Even Apple knows this.  They are much more likely to have the repairs done in a separate facility rather than at the store.  Other than screen replacements, the majority of broken parts result in a new phone being given to the customer.  After all, it’s very easy to replace devices when the data is safe somewhere.

The Cloud Will Save It All

Use of cloud storage and backup is the key to the disposable device trend.  If you tell me that I’m going to lose my laptop and all the data on it, I’m going to get a little concerned.  If you tell me that I’m going to lose my phone, I don’t mind as much thanks to the cloud backup I have configured.  In the above case, my data was synced back to my phone as I shopped for a new screen protector.  Just like a corporate system, data loss is the biggest concern on a device.  Cloud storage is a lot like a roaming profile.  I can sync that data back to a fresh device and keep going after a short interruption.  Gone are the wasted hours of reinstalling the operating system and software.

Why repair devices when they can easily be replaced at little cost?  Why should you pay someone to spend their time diagnosing a bad CPU or bad RAM when you can just unwrap a new mobile device, sync your profile and data, and move on with your project?  The implications for PC repair techs are legion, as are the implications for manufacturers that create products that are easy to open and contain field-replaceable parts.

Why go to all the extra effort of making a device that can be easily repaired if it’s much cheaper to just glue it together and recycle what parts you can after it breaks?  Customers have already shown their propensity to upgrade devices with every yearly refresh cycle.  They’d rather buy everything new instead of upgrading the old to match.  That means making the device field-repairable (or upgradable) is an extra cost you don’t need.  Making devices that aren’t easily fixed in the field means you can spend less of your budget training people how to repair them.  In fact, it’s just easier to have the customer send the device back to the manufacturing plant.


Tom’s Take

The cloud has enabled us to keep our data consistent between devices.  While it has helped blur the lines between desktop and mobile device, it has also helped blur the lines tying people to a specific device.  If I can have my phone or tablet replaced with almost no impact, I’m going to elect to have that done rather than finding replacement parts to keep the old one running just a bit longer.  It also means that, after the useful parts are pulled out of those mildly broken devices, they will end up in the same landfill that analysts are saying will be filled with rejected desktop PCs.

FaceTime Audio: The Beginning or The End?


The world of mobile devices is a curious one. Handset manufacturers are always raising the bar for features in both hardware and software in order to convince customers to use their device. Yet, no matter how much innovation goes into the handset, the vendors are still very reliant upon the whims of the carriers. Apple knows this perhaps better than anyone.

In Your FaceTime

FaceTime was the first protocol to feel the wrath of the carriers. Apple developed it as a way to facilitate video communication between parties. The idea was that face-to-face video communications could be simplified to create a seamless experience. And it did, for the most part. Except that AT&T decided that using FaceTime over 3G would put too much strain on their network. At first, they forced Apple to limit FaceTime to work only over Wi-Fi connections. That severely inhibited the utility of the protocol. If the only place that you can video call someone is at home or in a coffee shop (or on crappy hotel wireless), that makes the video call much less useful.

Apple finally allowed FaceTime to operate over cellular networks in iOS 6, yet AT&T (and other carriers) restricted the use of the protocol to those customers on the most current data plans. This prevented those on older, unlimited data plans from utilizing the service. The carriers eventually gave in to customer pressure and started rolling out the capability to all subscribers. By then, it was too late. Apple had decided to take a different tack – replace the need for a carrier.

Message For You

The first shot in this replacement battle came with iMessage. Apple created a messaging protocol like the iChat system for Mac, only it ran on iPhones and iPads (and later Macs). It was enabled by default, which was genius. The first time you sent a Short Message Service (SMS) text to a friend, the system detected you were messaging another iPhone user on a compatible version of software. The system then flipped the messaging over to use iMessage instead of SMS and the chat bubbles turned blue instead of green. Now, you could send pictures of any size as well as texts of any length with no restrictions. 160-character limits were no longer a concern. Neither was paying your carrier for an SMS plan. So long as the people you spoke with were all iDevice users, the service was completely free.

iMessage was Apple’s first attempt to sideline the carriers. It removed a huge portion of their profitability. According to an article published at the launch of iMessage, carriers were making $0.20 per message outside of an SMS plan for data that would cost about $0.0125 on a data plan. Worse yet, that message traversed a control channel that was always present for the user. There was no additional cost to the carrier beyond flipping a switch to enable message delivery to the phone. It was a pure-profit enterprise. Apple seized on the opportunity to erode that profitability.
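
To put that margin in perspective, here’s the back-of-the-envelope math on those two figures (a quick sketch in Python, using only the per-message numbers cited above):

```python
# Back-of-the-envelope markup on an out-of-plan SMS versus sending the same
# payload as ordinary plan data, using the per-message figures cited above.
SMS_PRICE = 0.20           # dollars per message outside an SMS plan
DATA_EQUIVALENT = 0.0125   # dollars for the same data sent on a data plan

markup = SMS_PRICE / DATA_EQUIVALENT
print(f"Carriers charged roughly {markup:.0f}x the data cost per text")  # ~16x
```

A sixteen-fold margin on a service that rides a signaling channel the carrier has to maintain anyway goes a long way toward explaining why Apple went after it.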

Today, you can barely find a cellular plan that *doesn’t* include unlimited text messaging. The carriers can no longer reap the rewards of a high profit, low cost service like SMS because of Apple and iMessage. Carriers are instead including it as a quality of life feature that they make nothing from. Cupertino has eliminated one of the sources of carrier entanglement. And they’re poised to do it again in iOS 7.

You Can Hear Me Now

FaceTime Audio was one of the features of iOS 7 that got swept under the rug in favor of talking about flat design or parallax wallpaper. FaceTime Audio uses the same audio codec from FaceTime, AAC-ELD, to initiate a phone call between two iDevice users. Only it doesn’t use the 3G/LTE radio to make the call. It’s all done via the data connection.

I tested FaceTime Audio for the first time after my wife upgraded her phone to iOS 7. The results were beyond astonishing. The audio quality of the call was as crisp and clear as any I’d ever heard. In fact, I would compare it to the use of Cisco’s Wideband G.722 codec on an enterprise voice system. My wife, a non-technical person, even noticed the difference, remarking, “It’s like you’re right next to me in the same room!” I specifically tried it over 3G/LTE to make sure it wasn’t blocked like FaceTime video. Amazingly, it wasn’t.

The Mean Opinion Score (MOS) rating that telephony networks use to rate call clarity runs from 1 to 5. A 1 means you can’t hear them at all. A 5 means there is no difference between talking on the phone and talking in the same room. Most of the “best” calls get a MOS rating in the 4.1-4.3 range. I would rate FaceTime Audio at a 4.5 or higher. Not only could I hear my wife clearly on the calls we made, but I also heard background noise clearly when she turned her head to speak to someone. The clarity was so amazing that I even tweeted about it.

FaceTime Audio calling could be poised to do the same thing to voice minutes that iMessage did to SMS. I’ve already changed the favorite for my wife’s number to dial her via FaceTime Audio instead of her mobile phone number. The clarity makes that much of a difference. It also helps that I’m not using any of my plan minutes to call her. Yes, I realize that many carriers make mobile-to-mobile calls free already. However, I was also able to call my wife via FaceTime Audio from my iPad as a test that worked perfectly. Now, I not only don’t use voice minutes but have the flexibility to call from a device that previously had no capability to do so.

Who Needs A Phone?

Think about the iPod Touch. It is a device that is very similar to the iPhone. In fact, with the exception of the cellular radio, one might say they’re identical. With iMessage, I can get texts on an iPod Touch using my Apple ID. So long as I’m around a wireless connection (or have a 3G MiFi device) I’m connected to the world. With FaceTime Audio, the same Apple ID now allows me to take phone calls. The only thing the carriers now have to provide is a data connection. You still can’t text or call non-Apple devices with iMessage and FaceTime. However, you can reduce the amount of money you are paying for their services due to a reduction in the number of minutes and/or texts you are sending. That should have the mobile carriers running scared.


Tom’s Take

I once said I would never own a cellular phone because sometimes I didn’t want to be found. Today, I get nervous if mine isn’t with me at all times. I also didn’t get SMS messaging at first. Now I spend more time doing that than anything else. Mobile technology has changed our lives. We’ve spent far too much time chained to the carriers, however. They have dictated what we can do with our phones. They have enforced how much data we use and how much we can talk. With protocols like FaceTime Audio, the handset manufacturers are going to start deciding how best to use their own devices. No carrier will be able to institute limits on minutes or texts. I think that if FaceTime Audio takes off in the same way as iMessage, you’ll see mobile carriers offering unlimited talk plans alongside the unlimited text plans within the next two years. If 50% of your userbase is making calls on their data plans, the need for all those “rollover” minutes becomes spurious. People will start reducing their plans down to the minimum necessary to get good data coverage. And if a carrier decides to start gouging for data service? Just take your device to another carrier. Or drop your contract in favor of a MiFi or similar data-only connection. FaceTime Audio is the beginning of easy Voice over IP (VoIP) calling. It’s the end of the road for carrier dominance.

Why An iPhone Fingerprint Scanner Makes Sense


It’s hype season again for the Cupertino Fruit and Phone Company.  We are mere days away from a press conference that should reveal the specs of a new iPhone, likely to be named the iPhone 5S.  As is customary before these events, the public is treated to all manner of Wild Mass Guessing as to what will be contained in the device.  Will it have dual flashes?  Will it have a slow-motion camera?  NFC? 802.11ac?  The list goes on and on.  One of the most spectacular rumors comes in a package the size of your thumb.

Apple quietly bought a company called AuthenTec last year.  AuthenTec made fingerprint scanners for a variety of companies, including those that included the technology in some Android devices.  After the $365 million acquisition, AuthenTec disappeared into a black hole.  No one (including Apple) said much of anything about them.  Then a few weeks ago, a patent application was revealed that came from Apple and included fingerprint technology from AuthenTec.  This sent the rumor mill into overdrive.  Now all signs point to a convex sapphire home button that contains a fingerprint scanner that will allow iPhones to use biometrics for security.  A developer even managed to ferret out a link to a BiometricKitUI bundle in one of the iOS 7 beta releases (which was quickly removed in the next beta).

Giving Security The Finger

I think adding a fingerprint scanner to the hardware of an iDevice is an awesome idea.  Passcode locks are good for a certain amount of basic device security, but the usefulness of a passcode is inversely proportional to its security level.  People don’t make complex passcodes because they take far too long to type in.  If you make a complex alphanumeric code, typing the code in quickly one-handed isn’t easy.  That leaves most people choosing to use a 4-digit code or forgoing it altogether.  That doesn’t bode well for people whose phones are lost or stolen.
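
To put some rough numbers behind that trade-off, here’s the keyspace math.  This is just a quick sketch; the eight-character alphanumeric code is an arbitrary example I picked, not anything Apple specifies.

```python
# Rough keyspace comparison behind the convenience/security trade-off above.
# The 8-character length is an arbitrary example of a "complex" passcode.
four_digit_pin = 10 ** 4              # digits 0-9, four positions
complex_code = (26 + 26 + 10) ** 8    # upper + lower + digits, eight positions

print(f"4-digit passcode:         {four_digit_pin:,} combinations")
print(f"8-char alphanumeric code: {complex_code:,} combinations")
```

The complex code is astronomically stronger, but nobody wants to type it one-handed while walking to a meeting, which is exactly the gap a fingerprint sensor could fill.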

Apple has already publicly revealed that it will include enhanced security in iOS 7 in the form of an activation lock that prevents a thief from erasing the phone and reactivating it for themselves.  This makes sense in that Apple wants to discourage thieves.  But that step only makes sense if you consider that Apple wants to beef up the device security as well.  Biometric fingerprint scanners are a quick method of inputting a unique unlock code quickly.  Enabling this technology on a new phone should show a sharp increase in the number of users that have enabled an unlock code (or finger, in this case).

Not all people think fingerprint scanners are a good idea.  A link from Angelbeat says that Apple should forget about the finger and instead use a combination of picture and voice to unlock the phone.  The writer says that this would provide more security because it requires your face as well as your voice.  The writer also says that it’s more convenient than taking a glove off to use a finger in cold weather.  I happen to disagree on a couple of points.

A Face For Radio

Facial recognition unlock for phones isn’t new.  It’s been in Android since the release of Ice Cream Sandwich.  It’s also very easy to defeat.  This article from last year talks about how flaky the system is unless you provide it several pictures to reference from many different angles.  This video shows how snapping a picture on a different phone can easily fool the facial recognition.  And that’s only the first video of several that I found on a cursory search for “Android Facial Recognition”.  I could see this working against the user if the phone is stolen by someone that knows their target.  Especially if there is a large repository of face pictures online somewhere.  Perhaps in a “book” of “faces”.

Another issue I have is Siri.  As far as I know, Siri can’t be trained to recognize a user’s voice.  In fact, I don’t believe Siri can distinguish one user from another at all.  To prove my point, go pick up a friend’s phone and ask Siri to find something.  Odds are good Siri will comply even though you aren’t the phone’s owner.  In order to defeat the old, unreliable voice command systems that have been around forever, Apple made Siri able to recognize a wide variety of voices and accents.  In order to cover that wide use case, Apple had to sacrifice resolution of a specific voice.  Apple would have to build in a completely new set of Siri APIs that query a user to speak a specific set of phrases in order to build a custom unlock code.  Based on my experience with those kinds of old systems, if you didn’t utter the phrase exactly the way it was originally recorded, it would fail spectacularly.  What happens if you have a cold?  Or there is background noise?  Not exactly easier than putting your thumb on a sensor.

Don’t think that means that fingerprints are infallible.  The Mythbusters managed to defeat an unbeatable fingerprint scanner in one episode.  Of course, they had access to things like ballistics gel, which isn’t something you can pick up at the corner store.  Biometrics are only as good as the sensors that power them.  They also serve as a deterrent, not a complete barrier.  Lifting someone’s fingerprints isn’t easy and neither is scanning them into a computer to produce a sharp enough image to fool the average scanner.  The idea is that a stolen phone with a biometric lock will simply be discarded and a different, more vulnerable phone would be exploited instead.


Tom’s Take

I hope that Apple includes a fingerprint scanner in the new iPhone.  I hope it has enough accuracy and resolution to make biometric access easy and simple.  That kind of implementation across so many devices will drive the access control industry to take a new look at biometrics and begin integrating them into more products.  Hopefully that will spur things like home door locks, vehicle locks, and other personal devices to begin using these same kinds of sensors to increase security.  Fingerprints aren’t perfect by any stretch, but they are the best option of the current generation of technology.  One day we may reach the stage of retinal scanners or brainwave pattern matches for security locks.  For now, a fingerprint scanner on my phone will get a “thumbs up” from me.

iOS 7 and Labels


Apple is prepping the release of iOS 7 to their users sometime in the next couple of months. The developers are already testing it out to find bugs and polish their apps in anticipation of the user base adopting Jonathan Ive’s vision for a mobile operating system. In many ways, it’s still the same core software we’ve been using for many years now with a few radical changes to the look and feel. The icons and lack of skeuomorphism are getting the most press. But I found something that I think has the ability to be even bigger than that.

The user interface (UI) elements in the previous iOS builds all look very similar. This is no doubt due to the influence of Scott Forstall, the now-departed manager of iOS. Those glossy buttons and switches looked gorgeous back in 2007 when the iPhone was first released. But all UI evolves over time. Some evolve faster than others. Apple hit a roadblock because of those very same buttons. They were all baked into the core UI. Changing them was like trying to correct a misspelled word in a stone carving.  It takes months of planning to make even the smallest of changes.  And those changes have to be looked at on a massive scale to avoid causing issues in the rest of the OS.

iOS 7 is different to me.  Look at this pic of an incoming call and compare it with the same screen in iOS 6:

[Screenshot: the incoming call screen in iOS 7]

[Screenshot: the same incoming call screen in iOS 6]

The iOS 6 picture has buttons.  The iOS 7 picture is different.  Instead of having chiseled buttons, it looks like the Answer and Decline buttons have been stuck to the screen as labels.  That’s not the only place in the UI that has a label-like appearance.  Sending a new iMessage or text to someone in the Messages app looks like applying a stamp to a piece of paper.  Taking all that into consideration, I think I finally understand what Ive is trying to do with this UI shift in iOS 7.

Labels are easy to reapply.  You just peel them off and stick them back on.  Unlike the chiseled-in-stone button UI, a label can quickly and easily be reconfigured or replaced if it starts to look dated.  Apple made mention of this in Ive’s iOS 7 video where he talked about creating “hierarchical layers (to) establish order”.  Ive commented that this approach gives depth to the OS.  I think he’s holding back on us.

Jonathan Ive created UI layers in the OS so he can change them out more quickly.  Think about it.  If you only have to change a label in an app or change the way they are presented on screen, it allows you to make more rapid changes to the way the OS looks.  If the layers are consistent and draw from the same pool of resources, it allows you to skin the OS however you want with minimal effort.  Ive wasn’t just trying to scrub away the accumulation of Scott Forstall’s ideas about the UI.  He wanted to change them and make the UI so flexible that the look can be updated in the blink of an eye.  That gives him the ability to change elements at will without the need to overhaul the system.  That kind of rapid configurability gives Apple the chance to keep things looking fresh and accommodate changing tastes.


Tom’s Take

I can almost hear people now saying that making future iOS releases able to be skinned is just another rip off of Android’s feature set.  In some ways, you are very right.  However, consider that Android was always designed with modularity in mind from the beginning.  Google wanted to give manufacturers and carriers the ability to install their own UI.  Think about how newsworthy the announcement of a TouchWiz-free Galaxy S4 was.  Apple has always considered the UI inviolate in all their products.  You don’t have much freedom to change things in iOS or in OS X.  Jonathan Ive is trying to set things up so that changes can be made more frequently in iOS.  Modders will likely find ways to insert their own UI elements and take these ideas in an ever more radical direction.  And all because Apple wanted to be able to peel off their UI pieces as easily as a label.

The Microsoft Office Tablet

I’ve really tried to stay out of the Tablet Wars.  I have a first generation iPad that I barely use any more.  My kids have co-opted it from me for watching on-demand TV shows and playing Angry Birds.  Since I spend most of my time typing blog posts or doing research, I use my laptop more than anything else.  When the Surface RT and Surface Pro escaped from the wilds of Redmond, I waited and watched.  I wanted to see what people were going to say about these new Microsoft tablets.  It’s been about 4 months since the release of the Surface Pro and similar machines from vendors like Dell and Asus.  I’ve been slowly asking questions and collecting information about these devices.  And I think I’ve finally come to a realization.

The primary reason people want to buy a Surface tablet is to run Microsoft Office.

Here’s the setup.  I asked everyone that expressed an interest in the Pro version of the Surface (or the Latitude 10 from Dell) the same question: What is the most compelling feature of the Surface Pro for you?  The responses that I got back were overwhelming in their similarity.

1.  I want to use Microsoft Office on my tablet.

2.  I want to run full Windows apps on my tablet.

I never heard anything about portability, power, user interface, or application support (beyond full Windows apps).  I specifically excluded the RT model of the Surface from my questions because of the ARM processor and the reliance on software from the Windows App Store.  The RT functions more like Apple/Android tablets in that regard.

This made me curious.  The primary goal of Surface users is to be able to run Office?  These people have basically told me that the only reason they want to buy a tablet is to use an office suite.  One that isn’t currently available anywhere else for mobile devices.  One that has been rumored to be released on other platforms down the road.  While it may be a logical fallacy, it appears that Microsoft risks invalidating a whole hardware platform because of a single application suite.  If they end up releasing Office for iOS/Android, people would flee from the Surface to the other platforms, according to the responses above.  Ergo, the only purpose of the Surface appears to be to run one application.  Which is why I’ve started calling it the Microsoft Office Tablet.  Then I started wondering about the second most popular answer in my poll.

Making Your Flow Work

As much as I’ve tried not to use the word “workflow” before, I find that it fits in this particular conversation.  Your workflow is more than just the applications you utilize.  It’s how you use them.  My workflow looks totally different from everyone else’s even though I use similar applications.  I use email and word processing for my own purposes.  I write a lot, so a keyboard of some kind is important to my workflow.  I don’t do a lot of graphics design, so a pen input tablet isn’t really a big deal to me.  The list goes on and on, but you see that my needs are my own and not those of someone else.  Workflows may be similar, but not identical.  That’s where the dichotomy comes into play for me.

When people start looking at using a different device for their workflow, they have to make adjustments of some kind.  Especially if that device is radically different from one they’ve been using before.  Your phone is different from a tablet, and a tablet is different from a laptop.  Even a laptop is different from a desktop, but these two are more similar than most.  When the time comes to adjust your workflow to a new device, there are generally two categories of people:

1.  People who adjust their workflow to the new device.

2.  People who expect the device to conform to their existing workflow.

For users of the Apple and Android tablets, option 1 is pretty much the only option you’ve got.  That’s because the workflow you’ve created likely can’t be easily replicated between devices.  Desktop apps don’t run on these tablets.  When you pick up an iPad or a Galaxy Tab, you have to spend time finding apps to replicate what you’ve been doing previously.  Note taking apps, web browsing apps, and even more specialized apps like banking or ebook readers are very commonly installed.  Your workflow becomes constrained to the device you’re using.  Things like on-screen keyboards or lack of USB ports become bullet points in workflow compatibility.  On occasion, you find that a new workflow is possible with the device.  The prime example I can think of is using the camera on a phone in conjunction with a banking app to deposit checks without needing to take them into the bank.  That workflow would have been impossible just a couple of years ago.  With the increases in camera phone resolution, high-speed data transfer, and secure transmission of sensitive data made possible by device advancements, we can now picture this new workflow and easily adopt it.

The other category is where the majority of Surface Pro users come in.  These are the people that think their workflow must work on any device they use.  Rather than modify what they’re doing, they want the perfect device to do their stuff.  These are the people that use a tablet for about a week and then move on to something different because “it just didn’t feel right.”  When they finally do find that magical device that does everything they want, they tend to abandon all other devices and use it exclusively.  That is, until they have a new workflow or a substantial modification to their existing workflow.  Then they go on the hunt for a new device that’s perfect for this workflow.

So long as your workflow is the immutable object in the equation, you are never going to be happy with any device you pick.  My workflows change depending on my device.  I browse Twitter and read email from my phone but rarely read books.  I read books and do light web surfing from a tablet but almost never create content.  I spend a lot of time creating content on my laptop but hate reading on it.  I’ve adjusted my workflows to suit the devices I’m using.

If the single workflow you need to replicate on your tablet revolves around content creation, I think it’s time to examine exactly what you’re using a tablet for.  Is it portability beyond what a laptop can offer?  Do you prefer to hunt and peck around a touch screen instead of a keyboard?  Are you looking for better battery life or some other function of the difference in hardware?  Or are you just wanting to look cool with a tablet in the “post PC world?”  That’s the primary reason I don’t use a tablet that much any more.  My workflows conform to my phone and my laptop.  I don’t find much use for a tablet.  Some people love them.  Some people swear by them.  Just make sure you aren’t dropping $800-$1000 on a one-application device.

At the end of the day, work needs to get done.  People are going to use whatever device they want to use to get their stuff done.  Some want to do stuff and move on.  Others want to look awesome doing stuff or want to do their stuff everywhere no matter what.  Use what works best for you.  Just don’t be surprised if complaining that this device doesn’t run your favorite data entry program gets you a sideways glance from IT.

Disclaimer:  I own a first generation iPad.  I’ve tested a Dell Latitude 10.  I currently use an iPhone 4S.  I also use a MacBook Air.  I’ve used a Lenovo Thinkpad in the past as my primary workstation.  I’m not a hater of Microsoft or a lover of Apple.  I’ve found a setup that lets me get my job done.

Incremental Awesomeness – Boiling Frogs

Frog on a Saucepan - courtesy of Wikipedia


Unless you’ve been living under a big rock for the last couple of weeks, you’ve no doubt heard about the plunge that Apple stock took shortly after releasing their numbers for the previous quarter.  Apple sold $54 billion worth of laptops, desktops, and mobile devices.  They made $13 billion in profit.  They sold 47 million iPhones and almost 23 million iPads.  For all of these record-setting numbers, the investors rewarded Apple by driving the stock down below $500 a share, shaving a full 10% off Apple’s value in after-hours trading after the release of these numbers.  A lot of people were asking why a fickle group of investors would punish a company making as much quarterly profit as the gross domestic product of a small country.  What has it come to that a company can be successful beyond anyone’s wildest dreams and still be labeled a failure?

The world has become accustomed to incremental awesomeness.

Apple is as much to blame as anyone else in this matter, but almost every company is guilty of this in some form or another.  We’ve reached the point in our lives where we are subjected to a stream of minor improvements on things rather than huge, revolutionary changes.  This steady diet of non-life changing features has soured us on the whole idea of being amazed by things.  If you had told me even 5 years ago that I would possess a device in my pocket that had a camera, GPS, always-on Internet connection, appointment book, tape recorder, and video camera, I would have either been astounded or thought you crazy.  Today, these devices are passé.  We even call phones without these features “dumb phones” as if to demonize them and those that elect to use them.  We can no longer discern between the truly amazing and the depressingly commonplace.

When I was younger, I heard someone ask about boiling a frog alive.  I was curious as to what lesson may lie in such a practice.  If you place a frog into a pot of boiling water, it will hop right back out as a form of self-preservation.  However, if you place a frog in a pot of tepid water and slowly raise the temperature a few degrees every minute, you will eventually boil the frog alive without any resistance.  Why is that?  Well, by slowly raising the temperature of the water, the frog becomes accustomed to the change.  A few degrees one way or the other doesn’t matter to the frog.  However, those few degrees eventually add up to the boiling point.

We find ourselves in the same predicament.  Look at some of the things that users are quibbling over on the latest round of phones and other devices.  The Nexus 4 phone is a failure because it doesn’t have LTE.  The iPad Mini is useless because it doesn’t have a Retina screen.  The iPhone 5 is far from perfect because it’s missing NFC or it’s not a 5-inch phone.  The Nexus 7 needs more storage and shouldn’t be Wi-Fi only.  Look at any device out there and you will find that they are missing features that would keep them from being “perfect”.  Those features might as well be things like the inability to read your mind or project information directly onto the cornea.  I’ve complained before that Google is setting up Google Glass to be a mundane gadget because they aren’t thinking outside their little box.  This kind of incremental improvement is what we’ve become accustomed to.  Think about the driverless car that Google is supposedly working on.  It’s an exciting idea, right? Now, think about that invention in 5 years’ time when it becomes ubiquitous.  When version 6 or 7 of the driverless car is out, we’re going to be complaining about how it doesn’t anticipate traffic conditions or isn’t able to fly.  We will have become totally unimpressed with how awesome the idea of a driverless car is because we’re concentrating on the things that it doesn’t have.

We want to be impressed and surprised by things.  Even when we are confronted with groundbreaking technology, we reject it at first out of spite.  Remember how the iPad was going to be a disaster because people don’t want to use a big iPhone?  Now look at how many are being used.  People want to walk away from a product announcement with a sense of awe and wonder, not a list of features and the same case as last year.  We’ve stopped looking at each new object with a sense of wonder and amazement and instead we focus on the difference from last year’s model.  Every new software or hardware release raises the temperature a few more degrees.  Before long, we’re going to be boiling in our own contempt and discontent.  And the next generation is going to have it even worse.  Even now, I find my kids are spoiled by the ability to watch TV shows on a tablet in any room in the house on their schedule instead of waiting for an episode to air.  They no longer even need to remember to record their favorite show on the DVR.  They just launch the app on their tablet and watch the show whenever they want.  Something that seems amazing and life-changing to me is commonplace to them.  All of this has happened before.  All of this will happen again.

Instead of judging on incremental advancements, we should start looking at things on the grand scale.  Yes, I know that some companies are going to constantly underwhelm the buying public by delivering products that are slightly more advanced than the previous iteration for an increased cost.  However, when you step back and take a look at everything on a long enough time line, you’ll find that we are truly living in an age when technology is amazing and getting better every day.  Sure, I’m waiting for user interfaces like the ones from Minority Report or the Avengers.  I want a driverless car and a thought interface for my computer/phone/widget.  But after seeing what happens to companies that are successful beyond their wildest imaginations, I’ll be doing a much better job of looking at things with the proper perspective.  After all, that’s the best way to keep from getting boiled.

Mountain Lion PL-2303 Driver Crash Fix

Now that I’ve switched to using my Mac full time for everything, I’ve been pretty happy with the results.  I even managed to find time to upgrade to Mountain Lion in the summer.  Everything went smoothly with that except for one little hitch with a piece of hardware that I use almost daily.

If you are a CLI jockey like me, you have a USB-to-Serial adapter in your kit.  Even though the newer Cisco devices are starting to use USB-to-mini USB cables for console connections, I find these to be fiddly and problematic at times.  Add in the amount of old, non-USB Cisco gear that I work on regularly and you can see my need for a good old-fashioned RJ-45-to-serial rollover cable.  My first laptop was the last that IBM made with a built-in serial port.  Since then, I’ve found myself dependent on a USB adapter.  The one that I have is some no-name brand, but like most of those cables it has the Prolific PL-2303 chipset.  This little bugger seems to be the basis for almost all serial-to-USB connectivity except for Keyspan adapters.  While the PL-2303 is effective and cheap, it’s given me no end of problems over the past couple of years.  When I upgraded my Lenovo to Windows 7 64-bit, the drivers available at the time caused random BSOD crashes when consoled into a switch.  I could never nail down the exact cause, but a driver point release fixed it for the time being.  When I got my Macbook Air, it came preinstalled with Lion.  There were lots of warnings that I needed to make sure to upgrade the PL-2303 drivers to the latest available on the Prolific support site in order to avoid problems with the Lion kernel.  I dutifully followed the directions and had no troubles with my USB adapter.  Until I upgraded to Mountain Lion.

After I upgraded to 10.8, I started seeing some random behaviors I couldn’t quite explain.  Normally, after I’m done consoling into a switch or a router, I just close my laptop and throw it back in my bag.  I usually remember after I’ve closed it and put it to sleep that I need to pull out the USB adapter.  After Mountain Lion, I was finding that I would open my laptop back up and see that it had rebooted at some point.  All my apps were still open and had the data preserved, but I found it odd that things would spontaneously reboot for no reason.  I found the culprit one day when I yanked the USB adapter out while my terminal program (ZTerm) was still open.  Almost instantly, I got a kernel panic followed by a reboot.  I had finally narrowed down my problem.  I tried closing ZTerm before unplugging the cable and everything behaved as it should.  It appeared that the issue stemmed from having the terminal program actively accessing the port and then unplugging it.  I searched around and found that there were a few people reporting the same issue.  I even complained about it a bit on Twitter.

Santino Rizzo (@santinorizzo) heard my pleas for sanity and told me about a couple of projects that created open source versions of the PL-2303 driver.  Thankfully, someone else had noticed that Prolific was taking their sweet time updating things and took matters into their own hands.  The best set of directions to go along with the KEXT that I can find are here:

http://www.xbsd.nl/2011/07/pl2303-serial-usb-on-osx-lion.html

For those not familiar with OS X, a KEXT is basically a driver or DLL file.  Copying it to /System/Library/Extensions places it in the folder where OS X looks for device drivers.  Make sure you get rid of the old Prolific driver if you have it installed before you install the open source PL-2303 driver.  Once you’ve run the commands listed on the site above, you should be able to plug in your adapter and then unplug it without any nasty crashes.  One other note – the port used to connect in ZTerm changed when I used the new drivers.  Instead of it being /dev/USBSerial or something of that nature, it’s now PL2303-<random digits>.  It also changed the <random digits> when I moved it from one USB port to another.  Thankfully for me, ZTerm remembers the USB ports and will try them all when I launch it until it finds the right adapter.  There is some discussion in the comments of the post above about creating a symlink for a more consistent pointer.
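
If you want to play with that symlink idea yourself, here’s a rough sketch of what it could look like.  Fair warning: the /dev/cu.PL2303* glob and the /dev/cu.console-cable link name are assumptions for illustration, so check ls /dev/cu.* on your own system and adjust, and it needs to run with sudo to touch /dev.

```python
#!/usr/bin/env python3
# Rough sketch of the symlink idea discussed above: point a stable path at
# whatever /dev node the PL-2303 driver created this time around.
# The "cu.PL2303*" pattern and the /dev/cu.console-cable name are assumptions;
# check `ls /dev/cu.*` on your machine and adjust. Run with sudo.
import glob
import os
import sys

STABLE_LINK = "/dev/cu.console-cable"

matches = sorted(glob.glob("/dev/cu.PL2303*"))
if not matches:
    sys.exit("No PL-2303 device node found - is the adapter plugged in?")

target = matches[0]
if os.path.islink(STABLE_LINK) or os.path.exists(STABLE_LINK):
    os.remove(STABLE_LINK)
os.symlink(target, STABLE_LINK)
print(f"{STABLE_LINK} -> {target}")
```

Point your terminal program at the stable name and it won’t matter which USB port the adapter landed on.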


Tom’s Take

Writing drivers is hard.  I’ve seen stats that say up to 90% of all Windows crashes are caused by buggy drivers.  Even when drivers appear to work just fine, things can be a little funny.  Thankfully, in the world of *NIX, people that get fed up with the way things work can just pull out their handy IDE and write their own driver.  Not exactly the easiest thing in the world to do but the results speak for themselves.  When the time comes that vendors either can’t or won’t support their hardware in a timely fashion, I take comfort in knowing that the open source community is ready to pick up the torch and make things work for us.

Why Won’t AirPlay Work On My Macbook?

One of the major reasons why I decided to upgrade to OS X 10.8 Mountain Lion was for AirPlay mirroring.  AirPlay has been a nice function to have for people with an AirPlay receiver (basically an AppleTV) and an AirPlay source, like an iDevice.  I know of many people that like to watch a movie from iTunes on their iPad to start, then switch over to the big TV in the living room via AirPlay to the AppleTV.  That’s all well and good for those that want to stream movies or music.  However, my streaming needs are a little more advanced.  I’d rather be able to mirror my desktop to the AirPlay receiver instead, for things like presentations or demonstrations.  That functionality had only been available through software applications like AirParrot up until the release of Mountain Lion, which now has support for AirPlay mirroring on Macs.  Once the GM release of Mountain Lion came out, people started noticing that AirPlay was only supported on relatively new Apple hardware, even in cases where the CPU was almost identical to a later hardware release.  It seems a bit mind-boggling that Apple has a very limited specification list for AirPlay Mirroring.  The official site doesn’t even list it, as a matter of fact.  Essentially, any Mac made in 2011 or newer should be capable of supporting AirPlay.  So why did the 2010 Macs get left out?  They’re almost as good as their one-year-newer cousins.

The real answer comes down to the chipset.  Apple started shipping Macs with Intel’s Sandy Bridge chipset in 2011.  This enabled all kinds of interesting things, like Thunderbolt for instance.  There was one little feature down at the bottom of the list of Sandy Bridge spec sheets that didn’t mean much at the time – Intel QuickSync.  QuickSync is an application-specific integrated circuit (ASIC) that has been placed in the Sandy Bridge line of processors to allow high-speed video encoding and decoding.  This allows the Sandy Bridge i-series processors to offload video encoding to the ASIC to reduce the amount of CPU power consumed by performing video tasks.  Rather than tying up the CPU or the GPU of a machine, Sandy Bridge can use this ASIC to do very high speed encoding.  Why would this be a boon?  Well, for most people the idea was that QuickSync could reduce the amount of time that it took to do video production work on mid-range machines.  The problem was that QuickSync turned out lower quality video in favor of optimizing for speed.  Where would you find an application that prioritized speed over quality?  If you guessed video streaming, you’d be spot on.  QuickSync supports high-speed encoding of H.264 video streams, which is the preferred format for Apple.  Mountain Lion can now access the QuickSync ASIC to mirror your desktop over to an AppleTV with almost no video lag.  The quality may not be the same as a Pixar rendering farm, but for 1080p video on a TV it’s close enough.

Any Mac made before the introduction of Sandy Bridge isn’t capable of running AirPlay mirroring, at least according to Apple.  Since they are missing the QuickSync ASIC, they aren’t capable of video encoding at the rate that Apple wants in order to preserve the AirPlay experience.  While on the surface it looks like the same i-series processors are present in 2010 and 2011 machines, the older Macs are using the Clarksdale chipset, which does have a high-speed video decoder, but not an encoder.  Since the Mac is doing all the heavy lifting for the AppleTV in an AirPlay mirroring setup, having the onboard encoding ASIC is critical.  This isn’t the first time that Apple has locked out use of AirPlay.  If you want to AirPlay mirror from your favorite iDevice, you have to ensure that you’re running an iPhone 4S or an iPad 2 or iPad 3.  What’s different about them?  They’re all running the A5 dual-core chip.  Supposedly, the A5 helps with video-intensive tasks.  That says to me that Apple is big on using hardware to help accelerate video mirroring.  That’s not to say that you can’t do AirPlay mirroring with a pre-2011 Mac.  You’re just going to have to rely on a third-party program to do it, like the aforementioned AirParrot.  Take note, though, that AirParrot is going to use your CPU to do all the encoding work for AirPlay.  While that isn’t going to be a big issue for simple presentations or showing your desktop, you should take care if you’re going to do any kind of processor-intensive activity, like firing up a bunch of virtual machines or compiling code.
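
Since Thunderbolt arrived on Macs alongside Sandy Bridge, the presence of a Thunderbolt bus makes a decent proxy test for native mirroring support.  Here’s a rough sketch of that check in Python.  It’s only a heuristic built on system_profiler, not Apple’s official compatibility list, and the “Thunderbolt Bus” string match is an assumption about the tool’s output format.

```python
# Heuristic check: Thunderbolt shipped with the Sandy Bridge Macs, so a
# reported Thunderbolt bus suggests the QuickSync ASIC (and therefore native
# AirPlay mirroring) is present. This is a rough proxy, not Apple's official
# compatibility test, and the "Thunderbolt Bus" match is an assumption about
# system_profiler's output format.
import subprocess

def looks_airplay_capable() -> bool:
    out = subprocess.run(
        ["system_profiler", "SPThunderboltDataType"],
        capture_output=True, text=True,
    ).stdout
    return "thunderbolt bus" in out.lower()

if __name__ == "__main__":
    if looks_airplay_capable():
        print("Thunderbolt bus found - native AirPlay mirroring is likely supported")
    else:
        print("No Thunderbolt bus - you probably need something like AirParrot")
```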

Tom’s Take

Yes, it’s very irritating that Apple drew the line for AirPlay mirroring support at Sandy Bridge.  As it is with all technology refreshes, being on the opposite side of that line sucks big time.  You’ve got a machine that’s more than capable, yet some design guy said that you can’t hack it any more.  Sadly, these are the kinds of decisions that aren’t made lightly by vendors.  Rather than risk offering incomplete support or producing the kind of dodgy results that make for bad YouTube comparison videos, Apple took a hard line and leaned heavily on QuickSync for AirPlay mirroring support.  In another year it won’t matter much as people will have either upgraded their machines to support it if it’s a crucial need for them, or they’ll let it lie fallow and unused like FaceTime.  If you find yourself asking whether or not your machine can support AirPlay mirroring, just look for a Thunderbolt port.  If you’ve got one, you’re good to go.  Otherwise, you should look into a software solution.  There are lots of good ones out there that will help you out.  Based on Apple’s track record with the iDevices, I wouldn’t hold out hope that they’re going to enable AirPlay mirroring on pre-2011 Macs any time soon.  So, if AirPlay mirroring is something important to you, you’re either going to need to spring for a new Mac or get to work installing some software.

OS X 10.8 Mountain Lion – Review

Today appears to be the day that the world at large gets their hands on OS X 10.8, otherwise known as Mountain Lion. The latest major update in the OS X cat family, Mountain Lion isn’t so much a revolutionary upgrade (like moving from Snow Leopard to Lion) as an evolutionary one (like moving from Leopard to Snow Leopard). I’ve had a chance to use Mountain Lion since early July when the golden master (GM) build was released to the developer community. What follows are my impressions about the OS from a relatively new Mac user.

When you start your Mountain Lion machine for the first time, you won’t notice a lot that’s different from Lion. That’s one of the nicer things about OS X. I don’t have to worry that Apple is going to come out with some strange AOL-esque GUI update just around the corner. Instead, the same principles that I learned in Lion continue here as well. In lieu of a total window manager overhaul, a heavy coat of polish has been applied everywhere. Most of the features that are listed on the Mountain Lion website are included and likely not to be used by me that much. Instead, there are a few little quality of life (QoL) things that I’ve noticed. Firstly, Lion originally came with the dock indicator for open programs disabled. Instead of a little light telling you that Safari and Mail were open, you saw nothing. This spoke more to the capability introduced that reopened the windows that were open when you closed the program. Apple would rather you think less about a program being open or closed and more about what programs you want to use to accomplish things. In Mountain Lion, the little light that indicates an open program has shrunk to a small lighted notch on the very bottom of the dock below an open program. It’s now rather difficult to determine which programs are open with a quick glance. Being one of those people that is meticulous about which programs I have open at any one time, this is a bit of a step in the wrong direction. I don’t mind that Apple has changed the default indicator. Just give me an option to put the old one back.

My Mountain Lion Dock with the new open program indicators

Safari

Safari also got an overhaul. One of the things I like the most about Chrome is the Omnibox. The ability to type my searches directly into the address bar saves me a step, and since my job sometimes feels like being the Chief Google Search Engineer, saving an extra step can be a big help. Another feature is the iCloud button. iCloud can now sync open tabs across your iPhone/iPad/iPod/Mountain Lion systems. This could be handy for someone that opens a website on their mobile device but would like to look at it on a full-sized screen when they get to the office. Not a groundbreaking feature, but a very nice one to have. The Reading List feature is still there as well from the last update, but being a huge fan of Instapaper, I haven’t really tested it yet.

Dictation

Another new feature is dictation. Mountain Lion has included a Siri-like dictation feature in the operating system that allows you to say what you want rather than typing it out. Make no mistake though. This isn’t Siri. This is more like the dictation feature from the new iPad. Right now, it won’t do much more than regurgitate what you say. I’m not sure how much I’ll use this feature going forward, as I prefer to write with the keyboard as opposed to thinking out loud. Using the dictation feature regularly does make it much more accurate, as the system learns your accent and idiosyncrasies to become much more adept over time. If you’d like to get a feel for how well the dictation feature works, (the paragraph)

You’ve been reading was done completely by the dictation feature. I’ve left any spelling and grammar mistakes intact to give you a realistic picture. Seriously though, the word paragraph seems to make the dictation feature make a new paragraph.

Gatekeeper

I did have my first run-in with Gatekeeper about a week after I upgraded, but not for the reasons that I thought I would.  Apple’s new program security mechanism is designed to prevent drive-by downloads and program installations like the ones that have embarrassed Apple as of late.  Gatekeeper can be set to allow only signed applications from the App Store to be installed or run on the system.  This gives Apple the ability not only to protect the non-IT-savvy populace at large from malicious programs, but also to program a remote kill switch in the event that something nasty slips past the reviewers and starts wreaking havoc.  Yes, there have been more nefarious and sinister prognostications that Apple will begin to limit apps to being installed only through the App Store or that Apple might flip the kill switch on software they deem “unworthy”, but I’m not going to talk about that here.  Instead, I wanted to point out the issue that I had with Gatekeeper.  I use a network monitoring system called N-Able at work that gives me the ability to remote into systems on my customers’ networks.  N-Able uses a Java client to establish this remote connection, whether it be telnet, SSH, or RDP.  However, after my upgrade to Mountain Lion, my first attempt to log into a remote machine was met with a Java failure.  I couldn’t bypass the security warning and launch the app from a web browser to bring up my RDP client.  I checked all the Java security settings that got mucked with after the Flashback fiasco, but they all looked clean.  After a Google Glance, I found the culprit was Gatekeeper.  The default permission model allows Mac App Store apps to run as well as those from registered developers.  However, the server that I have running N-Able uses a self-signed certificate.  That evidently violates the Gatekeeper rules for program execution.  I changed Gatekeeper’s permission model to allow all apps to run, regardless of where the app was downloaded from.  This was probably something that would have needed to be done anyway at some point, but the lack of specific error messages pointing me toward Gatekeeper worried me.  I can foresee a lot of support calls in the future from unsuspecting users not understanding that their real problem isn’t with the program they are trying to open, but with the underlying security subsystem of their Mac instead.
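
For anyone who would rather poke at Gatekeeper from the command line before (or instead of) changing the setting in System Preferences, Apple ships an assessment tool called spctl.  Here’s a minimal sketch of calling it from Python; the application path is a placeholder, not the actual N-Able launcher, and since the verdict can land on either output stream I check both.

```python
# Minimal sketch of querying Gatekeeper with Apple's spctl tool.
# The application path below is a placeholder for illustration, not the
# actual N-Able Java launcher.
import subprocess

def gatekeeper_status() -> str:
    # Reports whether Gatekeeper assessments are enabled or disabled.
    result = subprocess.run(["spctl", "--status"],
                            capture_output=True, text=True)
    return result.stdout.strip()

def assess_app(app_path: str) -> str:
    # Asks Gatekeeper whether it would allow this app, and on what grounds.
    result = subprocess.run(["spctl", "--assess", "--verbose", app_path],
                            capture_output=True, text=True)
    # spctl tends to write its verdict to stderr, so check both streams.
    return (result.stderr or result.stdout).strip()

if __name__ == "__main__":
    print(gatekeeper_status())
    print(assess_app("/Applications/Example.app"))  # placeholder path
```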

Twitter Integration

Mountain Lion has also followed the same path as its mobile counterpart and allowed Twitter integration into the OS itself. This, to me, is a mixed bag. I’m a huge fan of Twitter clients on the desktop. Since Tapbots released the Tweetbot Alpha the same day that I upgraded to Mountain Lion, I’ve been using it as my primary communication method with Twitter. The OS still pops up an update when I have a new Twitter notification or DM, so I see that window before I check my client. The sharing ability in the OS to tweet links and pictures is a nice time saver, but it merely saves me a step of copying and pasting. I doubt I’m any more likely to share things with the new shortcuts than I was before. The forthcoming Facebook integration may be more to my liking. Not because I use Facebook more than I use Twitter. Instead, by having access to Facebook without having to open their website in a browser, I might be more motivated to update every once in a while.

AirPlay

I had a limited opportunity to play with AirPlay in Mountain Lion.  AirPlay, for those not familiar, is the ability to wirelessly stream video or audio from a device to a receiver.  As of right now, the only out-of-the-box receiver is the Apple TV.  The iPad 2 and 3 as well as the iPhone 4S have the capability to stream audio and video to this device.  Older Macs and mobile devices can only stream audio files, a la iTunes.  In Mountain Lion, however, any newer Mac running an i-series processor can mirror its screen to an Apple TV (or other AirPlay receiver, provided you have the right software installed).  I tested it, and everything worked flawlessly.  Mountain Lion uses Bonjour to detect that a suitable AirPlay receiver is on the network, and the AirPlay icon appears in the notification area to let you know you can mirror your desktop over there.  The software takes care of sizing your desktop to an HD-friendly resolution and away you go.  There was a bit of video lag on the receiver, but not on the Mountain Lion system itself, so you could probably play games if you wanted, provided you weren’t relying on the AirPlay receiver as your primary screen.  For regular things, like presentations, everything went smoothly.  The only part of this system that I didn’t care much for is the mirroring setup.  While I understand the idea behind AirPlay is to allow things like movies to be streamed over to an Apple TV, I would have liked the ability to attach an Apple TV as a second monitor input.  That would let me do all kinds of interesting things.  First and foremost, I could use the multi-screen features in PowerPoint and Keynote as they were intended to be used.  Or I could use AirPlay with a second HDMI-capable monitor to finally have a dual monitor setup for my MacBook Air.  But, as a first-generation desktop product, AirPlay on Mountain Lion does some good things.  While I had to borrow the Apple TV that I used to test this feature, I’m likely to go pick one up just to throw in my bag for things like presentations.
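
As a side note, you can watch that Bonjour discovery happen yourself.  AirPlay receivers advertise an _airplay._tcp service, and OS X ships a dns-sd browser that will list them.  Here’s a quick sketch that shells out to it for a few seconds; the three-second window is an arbitrary choice, and it only prints the raw browse results.

```python
# Quick peek at the Bonjour side of AirPlay discovery: receivers advertise
# themselves as _airplay._tcp, and dns-sd (built into OS X) can browse for
# them. dns-sd runs until killed, so give it a few seconds and then stop it.
import subprocess
import time

def browse_airplay_receivers(seconds: float = 3.0) -> str:
    proc = subprocess.Popen(
        ["dns-sd", "-B", "_airplay._tcp", "local."],
        stdout=subprocess.PIPE, text=True,
    )
    time.sleep(seconds)   # let a few service announcements come in
    proc.terminate()
    out, _ = proc.communicate()
    return out

if __name__ == "__main__":
    print(browse_airplay_receivers())
```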


Tom’s Take

Is Mountain Lion worth the $20 upgrade price? I would say “yes” with some reservations. Having a newer kernel and device drivers is never a bad thing. Software will soon require Mountain Lion to function, as in the case of the OS X version of Tweetbot when it’s finally released. The feature set is tempting for those that spend time sharing on Twitter or want to use iCloud to sync things back and forth. Notification Center is a plus for those that don’t want popup windows cluttering everything. If you are a heavy user of presentation software and own an AppleTV, the Airplay mirroring may be the tipping point for you. Overall, compared to those that paid much more for more minor upgrades, or paid for upgrades that broke their system beyond belief (I’m looking at you, Windows ME), upgrading to Mountain Lion is painless and offers some distinct advantages. For the price of a nice steak, you can keep the same performance you’ve had with your system running Lion and get some new features to boot. Maybe this old cougar can keep running a little while longer.

My Thoughts on the MacBook Pro with Retina Display

At their annual Worldwide Developers Conference (WWDC), Apple unveiled a new line of laptops based on the latest Intel Ivy Bridge processors. The MacBook Air and MacBook Pro lines received some upgrade love, but the most excitement came from the announcement of the new MacBook Pro with Retina Display. Don’t let the unwieldy moniker fool you; this is the new king of the hill when it comes to beastly laptops. Based on the 15.4″ MacBook Pro, Apple has gone to great lengths to slim it down as much as possible. It’s just a wee bit thicker than the widest part of a MacBook Air (MBA) and weighs less than the MacBook Pro (MBP) it’s based on. It is missing the usual Ethernet and FireWire ports in favor of two Thunderbolt ports on the left side and USB 3.0 ports on either side. There’s also an HDMI-out port and an SDXC card reader on the right side. Gone as well is the optical drive, mirroring its removal in the MBA. Instead, you gain a very high resolution display that is “Retina class”, meaning it packs 2880×1800 pixels into a 15.4″ screen, enough pixels per inch at the average viewing distance to garner the resolutionary Retina designation. You also gain a laptop devoid of any spinning hard disks, as the only storage options in the MacBook Pro with Retina Display (RMBP) are of the solid state disk (SSD) variety. The base model includes a 256 GB disk, but the top-end model can be upgraded to an unheard-of 768 GB swath of storage. The RAM options are also impressive, starting at 8 GB and topping out at 16 GB. All in all, from the reviews that have been floating around so far, this thing cooks. So why are so many people hesitant to run out to the Apple Store and shower the Geniuses with cash or other valuable items (such as kidneys)?
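
If you’re wondering where the Retina math comes from, it’s simple back-of-the-envelope arithmetic: divide the diagonal pixel count by the diagonal screen size. The snippet below is just my own scratch calculation (assuming the nominal 15.4″ diagonal), not an official Apple formula.

# Quick pixel-density check for the 2880x1800 panel on a 15.4-inch diagonal.
import math

horizontal_px, vertical_px = 2880, 1800
diagonal_inches = 15.4

diagonal_px = math.hypot(horizontal_px, vertical_px)  # about 3396 pixels corner to corner
ppi = diagonal_px / diagonal_inches                   # about 220.5 pixels per inch

print(f"{ppi:.1f} ppi")  # prints 220.5, which Apple rounds to the quoted 220 ppi

Compare that to the roughly 110 ppi of the standard 15.4″ MacBook Pro at 1440×900 and the Retina name starts to make sense.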

The first thing that springs to mind is the iFixit article that has been circulating since day one that describes the RMBP as “the most unhackable, untenable, and unfixable laptop ever”. They cite that the RAM and SSD are soldered to the main system board, just like in its little brother, the MBA. They also note that the resolutionary Retina Display is glued to the surrounding case, making removal by anyone but a trained professional impossible. Given the smaller size and construction, it’s entirely possible that there will be very few (if any) aftermarket parts available for repairs or upgrades. Which raises the question, at least in my case:

Who Cares?

Yep, I said it. I really don’t give a crap if the RMBP is repairable. I’ve currently got a 13″ MBA that I use mostly for travel and typing blog posts. I know that in the event that anything breaks, I’m going to have to rely on AppleCare to fix it for me. I have a screwdriver that can crack the case on it, but I shudder to think what might happen if I really do get in there. I’m treating the MBA just like an iPad – a disposable device that is covered under AppleCare. In contrast, my old laptop was a Lenovo W701. This behemoth was purchased with upgradability in mind. I installed a ton of RAM at the time, and ripped out the included hard disk to install a whopping 80 GB SSD and run the 500 GB HDD beside it. Beyond that, do you know how many upgrades I have performed in the last two years? Zero. I haven’t added anything. Laptops aren’t like desktops. There’s no graphics card upgrades or PCI cards to slide in. There’s no breakout boxes or 75-in-1 card readers to install. What you get with a laptop is what you get, unless you want to use USB or Thunderbolt attachments. In all honesty, I don’t care that the RMBP is static as far as upgradability goes. If and when I get one, I’m going to order it with the full amount of RAM, as the 4 GB on my MBA has been plenty so far and I’ve had to work my tail off to push the 12 GB in my Lenovo, even with Windows’ hungry appetite for memory.

The SSD might give some buyers a momentary pause, but this is a way for Apple to push two agendas at the same time. The first is that they want you to use iCloud as much as possible for storage. By giving you online storage in place of acres of local disk, Apple is hoping that some will take them up on the offer of moving documents and pictures to the cloud. A local disk is a one-time price or upgrade purchase. iCloud is a recurring revenue stream for Apple. Every month you have your data stored on their servers is a month they can make money to eventually buy more disks to fill up with more iCloud customers. This makes the Apple investors happy.

The other reason to jettison the large spinning rust disks in favor of svelte SSD sexiness is the Thunderbolt ports on the left side. Apple upgraded the RMBP to two of them for a reason. So far, the most successful Thunderbolt peripheral has been the 27″ Thunderbolt Display. Why? Well, more screen real estate is always good. But it also doubles as a docking station. I can hang extra things off the back of the monitor. I can even daisy chain other Thunderbolt peripherals off the back. With two Thunderbolt ports, I no longer have to worry about chaining the devices. I can use a Thunderbolt display along with a Thunderbolt drive array. I can even utilize the newer, faster USB 3.0 drive arrays. So having less local storage isn’t exactly a demerit in my case.

Tom’s Take

When the new MacBook Pro with Retina Display was announced, I kept saying that I was looking for a buyer for my kidney so I could rush out and buy one. I was only mostly joking. The new RMBP covers all the issues that I’ve had with my otherwise excellent MBA so far. I don’t care that it’s a bit bigger. I care about the extra RAM and SSD space. I like the high resolution and the fact that I can adjust it to be Retina-like or really crank it up to something like 1680×1050 or 1920×1200. I really couldn’t care less about the supposed lack of upgradability. When you think about it, most laptops are designed to be disposable devices. If it’s not the battery life going kaput, it’s the screen or the logic board that eventually burns out. We demand a lot from our portable devices, and the stress that manufacturers are under to make them faster and smaller forces compromises. Apple has decided that giving users easy access to upgrade RAM or SSD space is one of those compromises. Instead, they offer alternatives in add-on devices. The truth is, most of the people who are walking into the Apple Store are never going to crack the case open on their laptop. Heck, I’m an IBM-certified laptop repair technician and even I get squeamish doing that. I’d rather rely on the build quality that I can be sure I’ll get out of the Cupertino Fruit and Computer Company and let AppleCare take care of the rest.