Insecurity Guards


Pick a random security headline today and you’ll see lots of exclamation points and dire warnings about the insecurity of something we thought was inviolable, such as Apple Pay or TLS. It’s enough to make you jump out of your skin and crawl into a dark hole somewhere, never to use electricity again. Until you read the article, that is. After a couple of paragraphs, you realize that a click-bait headline about a new technology actually underscores an age-old problem: people are the weakest link.

Engineered To Be Social

We can engineer security for protocols and systems until the cows come home. We can use ciphers so complicated that even Deep Thought couldn’t figure them out. We can create a system so secure that it could never be hacked. But in the end that system needs to be used by people. And people are where everything breaks down.

Take the most recent Apple Pay “exploit” that’s been making all the headlines. The problem has nothing to do with Apple Pay itself, or the way the device interacts with the point-of-sale terminal. It has everything to do with enterprising crooks calling banks and impersonating users, convincing a live, breathing person on the other end of the phone to override security safeguards and break the system down. An hourly employee of the bank can put all the defense-in-depth research to naught with a few keystrokes.

It is the way it is because people are dumb, panicky, and dangerous. When confronted with situations outside their norm, they tend to freeze up and do the wrong thing. Take the break-in scene from Sneakers (which is an excellent movie you should go watch right now).

When I originally started writing this post, that scene stuck out in my mind as a brilliant way to illustrate how less-than-savory people get around high technology with simple solutions, like kicking in a door protected by a keypad. But then I watched the scene again and found an even better example of my point. Look how Robert Redford and River Phoenix work together to distract and eventually overwhelm the security guard. The guard knows that no one should be able to get through the gate without the right keycard. With a bit of distraction, some added stress, and an apparently helpless but irritated user, Redford is able to social engineer his way into the building with little effort. The movie is full of these kinds of scenes.

The point is not that Robert Redford can talk his way into a building. The point is that people override security decisions every day. Writing down passwords. Ignoring security warnings. Clicking on believable but fake links. It’s done because it’s quicker or easier, or to get a screaming customer off the other end of the phone. Policies are ignored and shortcuts are taken to make things easy. So how do we fix it?

Teach It, Don’t Tech It

The absolute last thing you should do when trying to fix these issues is to create another layer of technology to paper over the problem. That leads to two issues. The first is that people will begin to see the new solution as yet another problem and create shortcuts to work around it. The second, more sinister issue is that you’ve essentially told those people that they can’t understand why this is a problem, and you’ve decided to marginalize them instead of teaching them. They may not realize it, but you’ve silently placed them lower on the intelligence ladder than a few bytes of code.

People need to know why things are the way they are. If the policy says not to write down a password, tell people why that is. If the rules say you don’t override a PIN lockout or add a card to a person’s account without certain information, then you need to tell people why you don’t do that. A policy or security feature without an explanation is merely an annoyance. One that will be circumvented. Making your users aware of the reason for a policy makes it hard to ignore. You’re more likely to get traction by treating your users like people, not automatons.


Tom’s Take

Kevin Mitnick (@KevinMitnick) wrote an entire book about social engineering and how easy it is to accomplish. As security systems become more complex and harder to fool, the majority of miscreants aren’t going to spend hours upon hours trying to hack a handshake protocol or create hash collisions. Instead, they will attack the weakest link in the chain. That will almost undoubtedly be the users of the system. We have to make our users smart enough to know when people are trying to take advantage of them and close that loop. Or at least make that loop as difficult to breach as the rest of the system. That’s the only way to be sure that the security measures we put in place can be used to their fullest potential. Just make sure that everyone knows that Eddie Vedder doesn’t work in accounting.


Rules Shouldn’t Have Exceptions


On my way to Virtualization Field Day 4, I ran into a bit of a snafu at the airport that made me think about policy and how it’s applied. When I put my carry-on luggage through the X-ray, the officer took it to the back and gave it a thorough screening. During that process, I was informed that my double-edged safety razor would not be able to make the trip (or the blade, at least). I was vexed, as this razor had flown with me for at least a year with nary a peep from security. When I related as much to the officer, the response was “I’m sorry no one caught it before.”

Everyone Is The Same, Except For Me

This incident made me start thinking about policies in networking and security and how often they are arbitrarily enforced. We see it every day. The IT staff comes up with a new plan to reduce mailbox sizes or reduce congestion by enforcing quality of service (QoS). Everyone is all for the plan during the discussion stages. When the time comes to implement the idea, the exceptions start happening. Upper management won’t have mailbox limitations. The accounting department is exempt from the QoS policy. The list goes on and on until it’s larger than the policy itself.

Why does this happen? How can a perfect policy go from planning to implementation before it falls apart? Do people sit around making up rules they know they’ll never follow? That does happen in some cases, but more often the people the policy will impact the most have no representation in the planning process.

Take mailboxes, for example. The IT department, full of diligent technology users, strives for inbox zero every day. They process and deal with messages. They archive old mail. They keep their mailboxes a barren wasteland of in-process items and shuffle everything else off to the static archive. Now, imagine an executive. These people are usually overwhelmed by email. They process what they can, but the wave will always overtake them. In addition, they have no archive. Their read mail sits around in folders for easy searching and quick access when a years-old issue resurfaces.

In modern IT, any policy limiting mailbox sizes would be decided by the IT staff based on their own mailbox sizes. Sure, a 1 GB limit sounds great. Then, when the policy is implemented, the executive staff pushes back with their 5 GB (or larger) mailboxes and says that the policy does not apply to them. IT relents and finds a way to make the executives exempt.

In a perfect world, the executive team would have been consulted, or had representation on the planning team, prior to the decision. The need for huge mailboxes would have been surfaced in the planning stage and dealt with early instead of through exceptions after the fact. Maybe the IT staff needed to communicate more. Perhaps the executive team needed to be more involved. Those are problems that happen every day. So how do we fix them?

Exceptions Are NOT The Rule

The way to increase buy-in for changes and improve communication between stakeholders is easy but not without pain. When policies are implemented, no deviations are allowed. It sounds harsh. People are going to get mad at you. But you can’t budge an inch. If a policy exception is not documented in the policy itself, it will get lost somewhere. People will continue to stay uninvolved in the process as long as they think they can negotiate a reprieve after the fact.
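
To make the idea concrete, here’s a toy Python sketch of uniform enforcement: one limit, applied to every mailbox, with no exemption list to quietly grow. The addresses and sizes are invented for illustration.

```python
# A toy sketch of "no exceptions" enforcement: one limit for every mailbox,
# and no exemption list to quietly grow. All data here is invented.
MAILBOX_LIMIT_GB = 1.0

mailbox_sizes_gb = {
    "helpdesk@example.com": 0.4,
    "cfo@example.com": 5.2,        # flagged just like everyone else
    "sales@example.com": 0.9,
}

violations = {
    user: size
    for user, size in mailbox_sizes_gb.items()
    if size > MAILBOX_LIMIT_GB
}

for user, size in sorted(violations.items()):
    print(f"{user}: {size} GB exceeds the {MAILBOX_LIMIT_GB} GB policy")
```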

IT needs to communicate up front exactly what’s going into the change before the implementation. People need to know how they will be impacted. Ideally, that means people have talked about the change up front so there are no surprises. But we all know that doesn’t happen. So making a “no exceptions” policy or rule change will get them involved. When you can’t negotiate your way out of a rule after the fact, you want to be there when the rules are decided so you can make your position clear and ensure the needs of your department are met.


Tom’s Take

As I said yesterday on Twitter, people don’t mind rules and policies. They don’t even mind harsh or restrictive rules. What they have a problem with is when those rules are applied in an arbitrary fashion. If the corporate email policy says that mailboxes are supposed to be no more than 1 GB in size, then people in the organization will have a problem if someone has a 20 GB mailbox. The rules must apply to everyone equally to be universally adopted. Likewise, rules must encompass as many outlying cases as possible in order to prevent one-off exceptions for almost everyone. Planning and communication are more important than ever when crafting those rules.

Expiring The Internet

An article came out this week that really made me sigh.  The title was “Six Aging Protocols That Could Cripple The Internet”.  I dove right in, expecting to see how things like Finger were old and needed to be disabled and removed.  Imagine my surprise when I saw things like BGP4 and SMTP on the list.  I really tried not to smack my own forehead as I flipped through the slideshow about how the foundation of the Internet is old and at risk of meltdown.

If It Ain’t Broke

Engineers love the old adage “If it ain’t broke, don’t fix it!”  We spend our careers planning and implementing.  We also spend a lot of time not touching things afterwards in order to keep them from collapsing in a big heap.  Once something is put in place, it tends to stay that way until something necessitates a change.

BGP is a perfect example.  The basics of BGP have remained largely the same since it was first implemented years ago.  BGP4 has been in use since 1994, even though RFC 4271 didn’t officially formalize it until 2006.  It remains a critical part of how the Internet operates.  According to the article, BGP is fundamentally flawed because it’s insecure and trust-based.  BGP hijacking has been occurring with more frequency, even as the resources to combat it are hotly debated.  Is BGP to blame for the issue?  Or is it something more deeply rooted?
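
To see what “trust-based” means in practice, here’s a minimal Python sketch of RPKI-style route origin validation in the spirit of RFC 6811. The ROA table, prefix, and AS numbers are invented for illustration; real validators fetch ROAs from the RPKI repositories.

```python
import ipaddress

# Hypothetical ROA table: prefix -> (authorized origin AS, max prefix length).
ROAS = {
    ipaddress.ip_network("203.0.113.0/24"): (64500, 24),
}

def validate_origin(prefix: str, origin_as: int) -> str:
    """Classify an announcement the way RFC 6811 origin validation does."""
    net = ipaddress.ip_network(prefix)
    covering = [
        (asn, max_len)
        for roa_net, (asn, max_len) in ROAS.items()
        if net.subnet_of(roa_net)
    ]
    if not covering:
        return "not-found"  # classic BGP behavior: accept it on trust
    if any(asn == origin_as and net.prefixlen <= max_len
           for asn, max_len in covering):
        return "valid"
    return "invalid"        # likely a hijack or a misconfiguration

print(validate_origin("203.0.113.0/24", 64666))  # hijacker -> invalid
print(validate_origin("203.0.113.0/24", 64500))  # real owner -> valid
```

The point of the sketch is the “not-found” branch: without a covering ROA, a vanilla BGP speaker simply believes whatever it hears.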

Don’t Fix It

The issues with BGP and other protocols mentioned in the article, including IPv6, aren’t due to the way the protocol was constructed.  It is due in large part to the humans that implement those protocols.  BGP is still in use in the current insecure form because it works.  And because no one has proposed a simple replacement that accomplishes the goal of fixing all the problems.

Look at IPv6.  It solves the address exhaustion issue.  It solves hierarchical addressing issues.  It restores end-to-end connectivity on the Internet.  And yet adoption numbers still languish in the single digits.  Why?  Is it because IPv6 isn’t technically superior?  Or because people don’t want to spend the time to implement it?  It’s expensive.  It’s difficult to learn.  Reconfiguring infrastructures to support new protocols takes time and effort.  Time and effort that are better spent answering user problems or taking on additional tasks from a management team that doesn’t care about BGP insecurity until the Internet goes down.

It Hurts When I Do This

Instead of complaining about how protocols are insecure, the solution should be twofold.  First, we need to start building security into protocols and expiring their older, insecure versions.  POODLE exploited SSLv3, an older version that served as a fallback to TLS.  While some old browsers still used SSLv3, the simple solution was to disable SSL and force people to upgrade to TLS-capable clients.  In much the same way, protocols like NTP and BGP can be modified to use more security.  Instead of merely suggesting that people use the secure versions, architects and engineers need to implement them and discourage use of the old, insecure versions by disabling them.  It’s not going to be easy at first.  But as the movement gains momentum, the solution will work.
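
As a small, concrete example of the “disable the old versions” approach, here’s a sketch using Python’s standard ssl module; the certificate and key paths are placeholders.

```python
import ssl

# PROTOCOL_TLS_SERVER already refuses SSLv2/SSLv3; raising minimum_version
# turns away legacy TLS 1.0/1.1 clients as well, forcing upgrades rather
# than silently falling back to the weak versions.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Placeholder paths for illustration only.
ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")
```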

The next step in the process is to build easy-to-configure replacements.  Bolting security onto a protocol after the fact does stop the bleeding.  But to cure the underlying disease, the security needs to be baked into the protocol from the beginning.  Doing this with an entirely new protocol that has no backwards compatibility, though, will be the death of that new protocol.  Just look at how painful the transition to IPv6 has been.  The lack of an easy transition, coupled with no monetary incentive and no imminent problem, caused the migration to drag out until the eleventh hour.  And even then there is significant pushback against an issue that can no longer be ignored.

Building the next generation of secure Internet protocols is going to take time and transparent effort.  People need to see what’s going into something to understand why it’s important.  The next generation of engineers needs to understand why things are being built the way they are.  We’re lucky in that many of the people responsible for building the modern Internet are still around.  When asked about limitations in their protocols, the answer remains remarkably the same: “We never thought it would be around this long.”

The longevity of quick fixes seems to be the real issue.  When the next generation of Internet protocols is built, there needs to be a built-in expiration date.  A point of no return beyond which the protocol will cease to function.  And there should be no method for extending the shelf life of a protocol to forestall its demise.  In order to ensure that security can’t be compromised, we have to resign ourselves to the fact that old things need to be put out to pasture.  And the best way to ensure that new things are put in place to supplant them is to make sure the old things go away on time.
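
Here’s a purely hypothetical sketch of what a built-in expiration date could look like during version negotiation. The protocol names and sunset dates are invented; the point is that expired versions simply cannot be negotiated, with no override.

```python
from datetime import date, datetime, timezone

# Hypothetical protocol versions, each carrying its own sunset date.
PROTOCOL_SUNSET = {
    "examplep/1": date(2020, 1, 1),  # past its sunset: never negotiated
    "examplep/2": date(2030, 1, 1),
}

def negotiate(client_versions: list[str]) -> str:
    """Pick the newest mutually supported version that hasn't expired."""
    today = datetime.now(timezone.utc).date()
    for version in sorted(client_versions, reverse=True):
        sunset = PROTOCOL_SUNSET.get(version)
        if sunset is not None and today < sunset:
            return version
    raise ConnectionError("no unexpired protocol version in common")

print(negotiate(["examplep/1", "examplep/2"]))  # -> examplep/2
```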


Tom’s Take

The Internet isn’t fundamentally broken.  It’s a collection of things that work well in their roles but have perhaps been kept around a little longer than necessary.  The probability of an exploit being created for something rises with every passing day it is still in use.  We can solve the issues of the current Internet with some security engineering.  But to make sure the problem never comes back, we have to make the hard choice to expire protocols on a regular basis.  It will mean work.  It will create strife.  And in the end we’ll all be better for it.

Security Dessert Models


I had the good fortune last week to read a great post from Maish Saidel-Keesing (@MaishSK) that discussed security models in relation to candy.  It reminded me that I’ve been wanting to discuss security models in relation to desserts.  And since Maish got me hungry for a Snickers bar, I decided to lay out my ideas.

When we look at traditional security models of the past, everything looks similar to crème brûlée.  The perimeter is very crunchy, but it protects a soft interior.  This is the predominant model of a world where the “bad guys” all live outside of your network.  It works when you know where your threats are located.  This model is still in use today wherever companies explicitly trust their user base.

The crème brûlée model doesn’t work when you have large numbers of guest users or BYOD-enabled users.  If one of them brings in something that escapes into the network, there’s nothing to stop it from wreaking havoc everywhere.  In the past, this has caused massive virus outbreaks and penetrations from things like malicious USB sticks dropped in the parking lot and activated on “trusted” computers inside.

A Slice Of Pie

A more modern security model looks more like an apple pie.  Rather than trusting everything inside the network, the smart security team realizes that users are as much of a threat as the “bad guys” outside.  The crunchy crust on top is extended around the whole soft area inside.  Users that connect tablets, phones, and personal systems face a very aggressive security posture designed to prevent access to anything that could cause problems in the network (and data center).  This model is great when you know that the user base is not to be trusted.  I wrote about it over a year ago on the Aruba Airheads community site.

The apple pie model does have some drawbacks.  While it’s a good idea to isolate your users outside the “crust”, you still have nothing protecting your internal systems if a rogue device or “trusted” user manages to get inside the perimeter.  The pie model will protect you from careless intrusions but not from determined attackers.  To fix that problem, you’re going to have to protect things inside the network with a crunchy shell as well.

Melts In Your Firewall, Not In Your Hand

Maish was right on when he talked about M&Ms being a good metaphor for security.  They also do a great job of visualizing the untrusted user “pie” model.  But the ultimate security model will end up looking more like an M&M cookie.  It will have a crunchy edge all around.  It will be “soft” in the middle.  And it will also have little crunchy edges around the important chocolate parts of your network (or data center).  This is how you protect the really important things like customer data.  You make sure that even getting past the perimeter won’t grant access.  This is the heart of “defense in depth”.

The M&M cookie model isn’t easy by any means.  It requires you to identify the assets that need to be protected.  You have to build the protection in at the beginning.  No ACLs that permit unrestricted access.  The communications between untrusted devices and trusted systems need to be kept to the bare minimum necessary.  Too many M&Ms in a cookie makes for a bad dessert.  So too must you identify the critical systems that need to be protected and group them together to minimize configuration effort and attack surface.
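
A minimal sketch of those crunchy inner edges, assuming a default-deny stance: only explicitly listed flows ever reach the protected systems. The zone names, ports, and flows are invented for illustration.

```python
# Default-deny zone policy: anything not explicitly listed is dropped.
# Zones, ports, and flows are made up for illustration.
ALLOWED_FLOWS = {
    ("guest-wifi", "dmz-web", 443),
    ("corp-users", "dmz-web", 443),
    ("dmz-web", "customer-db", 5432),  # only the web tier reaches the data
}

def permit(src_zone: str, dst_zone: str, dst_port: int) -> bool:
    """Return True only for flows on the bare-minimum allow list."""
    return (src_zone, dst_zone, dst_port) in ALLOWED_FLOWS

print(permit("guest-wifi", "customer-db", 5432))  # -> False
print(permit("dmz-web", "customer-db", 5432))     # -> True
```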


Tom’s Take

Security is a world of protecting the important things while making sure they can still be used by people.  If you err on the side of too much caution, you have a useless system.  If you are too permissive, you have a security risk.  Balance is the key.  Just like the recipe for cookies, pie, or even crème brûlée, the proportion of ingredients must be just right to make a tasty dessert.  In security you have to have the same mix of permissions and protections.  Otherwise, the whole thing falls apart like a deflated soufflé.


Don’t Track My MAC!


The latest trend in mobile technology seems to be identification.  It has nothing to do with credentials.  Instead, it has everything to do with creating a database of who you are and where you are.  Location-based identification is the new holy grail for marketing people.  And the privacy implications are frightening.

Who Are You?

The trend now is to capture your device’s MAC address and store it in a database.  This allows location tracking systems, like Aruba Meridian or Cisco CMX, to know that they’ve seen you in the past.  They can see where you’ve been in the store with a resolution of a couple of feet (much better than GPS).  They know which shelf you are standing in front of.  Coupled with new technologies like Apple iBeacon, the retailer can push information to your mobile device, like a coupon or a price comparison with competitors.

That’s a fine use of mobile technology.  Provided I wanted it in the first place.  The model should be opt-in.  If I download your store’s app and connect to your wifi, then I’ve clicked the little “agree” box that allows you to send me that information.  If I opt in, feel free to track me and email me coupons.  Or even to pop them up on store displays when my device gets close to a shelf that contains a featured item.  I knew what I was getting into when I opted in.  But what happens when you didn’t?

Wifi, Can You Hear Me?

The problem comes when the tracking system is listening to devices when it shouldn’t be.  When my mobile device walks into a store, it starts probing for available wifi access points, asking about the SSIDs they offer and whether my device has associated with them before.  That’s the way wifi works.  You can’t stop it unless you shut off your wireless.
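
Here’s a sketch of what a listener sees, using the scapy library (it assumes a wireless NIC already in monitor mode; the interface name is a placeholder). Every probe request leaks the sender’s MAC address whether or not the device ever associates.

```python
# Requires scapy and a wireless interface in monitor mode.
from scapy.all import sniff
from scapy.layers.dot11 import Dot11ProbeReq

def log_probe(pkt):
    # addr2 is the transmitter address: the probing device's MAC.
    if pkt.haslayer(Dot11ProbeReq):
        print("probe request from", pkt.addr2)

# "wlan0mon" is a placeholder interface name.
sniff(iface="wlan0mon", prn=log_probe, store=False)
```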

If the location system is listening to devices probing for wifi, it can track those MAC addresses even when the devices never connect.  So now my opt-in is worthless.  If the location system knows my MAC address even when I don’t connect, it can push information to iBeacon displays without my consent.  I would see a coupon for a camping tent based on the fact that I stood next to the camp stoves for five minutes last week.  It doesn’t matter that I was on a phone call and don’t have the slightest interest in camping.  Now the system has started building a profile of me based on erroneous information it gathered when it shouldn’t have been listening.

Think about Minority Report.  When Tom Cruise is walking through the subway, retinal scanners read his eyes and start showing him very directed advertising.  While we’re still years away from that technology, being able to fingerprint a mobile device when it enters the store is the next best thing.  If I look down to text my wife about which milk to buy, I could get a full-screen coupon telling me about a sale on bread.

My (MAC) Generation

This is such a huge issue that Apple has taken a step to “fix” the problem in the beta release for iOS 8.  As reported by The Verge, iOS 8 randomizes the MAC address used when probing for wifi SSIDs.  This means that the MAC used to probe for wifi requests won’t be the same as the one used to connect to the actual AP.  That’s huge for location tracking.  It means that the only way people will know who I am for sure is for me to connect to the wifi network.  Only then will my true MAC address be revealed.  It also means that I have to opt-in to the location tracking.  That’s a great relief for privacy advocates and tin foil hat aficionados everywhere.
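
The general mechanism is straightforward, even though Apple hasn’t documented exactly how iOS derives its randomized addresses. A sketch of generating a random, locally administered MAC:

```python
import random

def random_mac() -> str:
    """Generate a randomized, locally administered, unicast MAC address.

    Setting the locally-administered bit (0x02) and clearing the
    multicast bit (0x01) in the first octet marks the address as
    self-assigned rather than carrying a vendor's burned-in OUI.
    """
    first = (random.randint(0x00, 0xFF) & 0xFC) | 0x02
    rest = [random.randint(0x00, 0xFF) for _ in range(5)]
    return ":".join(f"{octet:02x}" for octet in [first] + rest)

print(random_mac())  # e.g. "da:3b:41:9c:00:7f"
```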

It does make iBeacon configuration a bit more time consuming.  But you’ll find that customers will be happier overall knowing their information isn’t being stored without consent.  Because there’s never been a situation where customer data was leaked, right? Not more than once, right?  Oh, who am I kidding.  If you are a retailer, you don’t want that kind of liability on your hands.

Won’t Get Fooled Again

If you’re one of the retailers deploying location-based solutions for applications like iBeacon, now is the time to take a look at what you’re doing.  If you’re collecting MAC address information from probing mobile devices, you should turn it off now.  Yes, privacy is a concern.  But so is your database.  Assuming iOS randomizes the entire MAC address string, including the OUI, and not just the 24-bit NIC at the end, your database is going to fill up quickly with bogus entries.  Sure, there may be a duplicate here and there among the random iOS strings, but they will be few and far between.

More likely, your database will overflow from the sheer number of MACs being reported by iOS 8 devices.  And since iOS 7 adoption was at 87% of compatible devices just 8 months after release, you can guarantee there will be a large number of iOS devices coming into your environment running with obfuscated MAC addresses.


Tom’s Take

I don’t like the idea of being tracked when I haven’t opted in to a program.  Sure, I realize that my usage statistics are being used for research.  I know that clicking those boxes in the EULA gives my data to parties unknown for any purpose they choose.  And I’m okay with it.  Provided that box is checked.

When I find out my data is being collected without my consent, it gives me the creeps.  When I learned about the new trends in data collection for the grand purposes of marketing and sales, I wanted to scream from the rooftops that the vendors need to put a halt to this right away.  Thankfully, Apple must have heard my silent screams.  We can only hope that other manufacturers follow suit and give us a method to prevent this from happening.  A tweet from Jan Dawson summed it up nicely.

Security’s Secret Shame


Heartbleed captured quite a bit of news these past few days.  A hole in the most secure of web services tends to make people a bit anxious.  Racing to release patches and assess the damage consumed people for days.  While I was a bit concerned about the possibilities of exposed private keys on any number of web servers, the overriding thought in my mind was instead about the speed at which we went from “totally secure” to “Swiss cheese security” almost overnight.

Jumping The Gun

As it turns out, the release of the information about Heartbleed was a bit sudden.  The rumor is that the people who discovered the bug were racing to release the information as soon as the OpenSSL patch was ready because they were afraid the details had already leaked out to the wider exploit community.  I was a bit surprised when I learned this little bit of information.

Exploit disclosure has gone through many phases in recent years.  In days past, the procedure was to report the exploit to the vendor responsible.  The vendor in question would create a patch for the issue and prepare their support organization to answer questions about it.  The vendor would then release the patch with a report on the impact.  Users would read the report and patch their systems.

Today, researchers who aren’t willing to wait for vendors to patch systems perform an end run around the process.  Rather than waiting for the vendors to release patches on their own cycle, they release the information about the exploit as soon as they can.  The nice ones give the vendors a chance to fix things.  The less savory folks want the publicity of discovering a new hole and project the news far and wide at every opportunity.

Shame Is The Game

Part of the reason to release exploit information quickly is to shame the vendor into releasing patch information just as quickly.  Researchers taking this route are fed up with the slow quality assurance (QA) cycle inside vendor shops.  Instead, they short-circuit the system by putting the issue out in the open and driving the vendor to release a fix immediately.

While this approach does have its place among vendors that move at a glacial patching pace, one must wonder how much good is really being accomplished.  Patching systems isn’t a quick fix.  If you’ve ever been forced to turn out work under a crucial deadline while people were shouting at you, you know the kind of work that gets put out.  Vendor patches must be tested against released code, and there can be no issues that would cause existing functionality to break.  Imagine the backlash if a fix to the SSL module caused the web service to fail on startup.  The fix would be worse than the problem.

Rather than rushing to release news of an exploit, researchers need to understand the greater issues at play.  Releasing an exploit for the sake of saving lives is understandable.  Releasing it for the fame of being first isn’t as critical.  Instead of trying to shame vendors into releasing a patch rapidly to plug some hole, why not work with them to identify the issue and push the patch through?  Shaming vendors will only pressure them into releasing questionable code.  It will also alienate the vendors from researchers doing things the right way.


Tom’s Take

Shaming is the rage now.  We shame vendors, users, and even pets.  Problems have taken a back seat to assigning blame.  We try to force people to change or fix things by making a mockery of what they’ve messed up.  It’s time to stop.  Rather than pointing and laughing at those making the mistake, you should pick up a keyboard and help them fix it. Shaming doesn’t do anything other than upset people.  Let’s put it to bed and make things better by working together instead of screaming “Ha ha!” when things go wrong.

Google+ And The Quest For Omniscience


When you mention Google+ to people, you tend to get a very pointed reaction. Outside of a select few influencers, I have yet to hear anyone say good things about it. This opinion isn’t helped by the recent moves by Google to make Google+ the backend authentication mechanism for their services. What’s Google’s aim here?

Google+ draws immediate comparisons to Facebook. Most people will tell you that Google+ is a poor imitation of the world’s most popular social media site. I would tend to agree, for a few reasons. I find it hard to filter things in Google+. The lack of a real API means that I can’t interact with it via my preferred clients. I don’t want to log into a separate web interface simply to ingest endless streams of animated GIFs with the occasional nugget of information that was likely posted somewhere else in the first place.

It’s the Apps

One thing the Google of old was very good at was creating applications that people needed. Gmail and Google Apps are things I use daily. YouTube gets visits all the time. I still use Google Maps to look things up when I’m away from my phone. Each of these apps represents a separate development train and a unique way of looking at things. They were more integrated than some of the attempts I’ve seen to tie applications together at other vendors. But they were missing one thing as far as Google was concerned: you.

Google+ isn’t a social network. It’s a database. It’s an identity store that Google uses to nail down exactly who you are. Every +1 tells them something about you. However, that’s not good enough. Google can only prosper if they can refine their algorithms. Each discrete piece of information they gather needs to be augmented by more information. In order to do that, they need to grow their database. That means they need to drive adoption of their social network. But they can’t force people to use Google+, right?

That’s where the plan to integrate Google+ as the backend authentication system makes nefarious sense. They’ve already gotten you hooked on their apps. You comment on YouTube or use Maps to figure out where the nearest Starbucks is. Google wants to know that. They want to figure out how to structure AdWords to show you more ads for local coffee shops, or categorize your comments on music videos to sell you Google Play music subscriptions. Above all else, they can use that information as a product for advertisers.

Build It Before They Come

It’s devilishly simple. It’s also going to be more effective than Facebook’s approach. Ask yourself this: when’s the last time you used Facebook Mail? Facebook started out with the lofty goal of gathering all the information that it could about people. Then they realized the same thing that Google did: You have to collect information on what people are using to get the whole picture. Facebook couldn’t introduce a new system, so they had to start making apps.

Except people generally look at those apps and push them to the side. Mail is a perfect example. Even when Facebook tried to force people to use it as their primary communication method, their users rebelled against the idea. Now, Facebook is being railroaded into using their data store as a backend authentication mechanism for third-party sites. I know you’ve seen the “Log In With Facebook” buttons already. I’ve even written about it recently. You probably figured out this is going to be less successful for a singular reason: control.

Unlike Google+ and its integration with all Google apps, third parties that utilize Facebook logins can choose to restrict the information that is shared with Facebook. Given the climate of privacy in the world today, it stands to reason that people are going to start being very selective about the information that is shared with these kinds of data sinks. Thanks to the Facebook login API, a significant portion of the collected information never has to be shared back to Facebook. On the other hand, Google+ is just a simple backend authorization. Given that they turned on Google+ identities for YouTube commenting without a second thought, it does make you wonder what other data they’re collecting without really thinking about it.
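
For a sense of how that restriction works on the third-party side, here’s a hedged sketch of building an OAuth 2.0 authorization URL that asks for only a minimal scope. The endpoint, client ID, and scope names are generic placeholders, not Facebook’s actual values.

```python
from urllib.parse import urlencode

# Request only the minimal scope the site actually needs; everything the
# user doesn't grant never flows back to the identity provider.
params = {
    "client_id": "YOUR_APP_ID",                      # placeholder
    "redirect_uri": "https://example.com/callback",  # placeholder
    "response_type": "code",
    "scope": "basic_profile",                        # nothing more than needed
    "state": "opaque-csrf-token",
}
login_url = "https://provider.example/oauth/authorize?" + urlencode(params)
print(login_url)
```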


Tom’s Take

I don’t use Google+. I post things there via API hacks. I do it because Google as a search engine is too valuable to ignore. However, I don’t actively choose to use Google+ or any of the apps that are now integrated into it. I won’t comment on Youtube. I doubt I’ll use the Google Maps functions that are integrated into Google+. I don’t like having a half-baked social media network forced on me. I like it even less when it’s a ham-handed attempt to gather even more data on me to sell to someone willing to pay to market to me. Rather than trying to be the anti-Facebook, Google should stand up for the rights of their product…uh, I mean customers.