Memcached DDoS – There’s Still Time to Save Your Mind

In case you haven’t heard, there’s a new vector for Distributed Denial of Service (DDoS) attacks out there right now and it’s pretty massive. The first mention I saw this week was from Cloudflare, where they detailed a huge influx of traffic from UDP port 11211. That’s the port used by memcached, a high-performance memory caching system.

Surprisingly, or not, there were thousands of companies that had left UDP/11211 open to the entire Internet. And, by design, memcached responds to anyone that queries that port. Worse, carefully crafted packets can elicit massive responses. In Cloudflare’s testing they were able to send a 15-byte packet and get a 134KB response. Because the protocol runs over UDP and happily answers forged source addresses, attackers were able to make life miserable for Cloudflare and, now, GitHub, which got blasted with the largest DDoS attack on record.
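Cloudflare’s numbers imply an enormous amplification factor. A quick back-of-the-envelope check in Python, using their reported 15-byte request and 134KB response:

```python
def amplification_factor(request_bytes, response_bytes):
    """Ratio of response size to request size for a reflection attack."""
    return response_bytes / request_bytes

# Cloudflare's reported figures: a 15-byte query drew a 134KB response.
factor = amplification_factor(15, 134 * 1024)
# Roughly 9,100x amplification -- a modest stream of spoofed requests
# reflects into an avalanche of traffic at the victim.
```

At that ratio, even a small botnet sending forged queries can reflect terabit-scale floods, which is exactly what GitHub saw.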

How can you fix this problem in your network? There are many steps you can take, whether you are a system admin or a network admin:

  • Go to Shodan and see if you’re affected. Just plug in your company’s IP address ranges and have it search for UDP 11211. If you pop up, you need to find out why memcached is exposed to the internet.
  • If memcached isn’t supposed to be publicly available, you need to block it at the edge. Don’t let anyone connect to UDP port 11211 on any device inside your network from outside of it. That sounds like a no-brainer, but you’d be surprised how many firewall rules aren’t carefully crafted in that way.
  • If you have to have memcached exposed, make sure you talk to that team and find out what their bandwidth requirements are for the application. If it’s something small-ish, create a policer or QoS policy that rate limits the memcached traffic so there’s no way it can exceed that amount. And if that amount is more than 100Mbit of traffic, you need to have an entirely different discussion with your developers.
  • From Cloudflare’s blog, you can disable UDP in memcached at startup by adding the -U 0 flag. Just make sure you check with the team that uses it before you disable anything, lest you break something.
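Beyond Shodan, you can probe your own ranges directly. This sketch builds a memcached UDP “stats” frame (an 8-byte header of request ID, sequence number, datagram count, and a reserved field, followed by the ASCII command) and reports whether a host answers. Only point it at addresses you’re authorized to scan:

```python
import socket
import struct

def build_memcached_udp_probe(request_id=0):
    """Build a memcached UDP 'stats' probe: 8-byte frame header
    (four big-endian shorts) followed by the plain-text command."""
    header = struct.pack("!HHHH", request_id, 0, 1, 0)
    return header + b"stats\r\n"

def is_memcached_exposed(host, timeout=2.0):
    """Return True if host answers a memcached stats probe on UDP/11211."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(build_memcached_udp_probe(), (host, 11211))
        data, _ = sock.recvfrom(4096)
        return len(data) > 0  # any reply at all is bad news
    except socket.timeout:
        return False  # no answer -- the port is filtered or closed
    finally:
        sock.close()
```

If this returns True for anything on your edge, go back to the firewall step above before an attacker finds it for you.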

Tom’s Take

Exposing unnecessary services to the Internet is asking for trouble. Given an infinite amount of time, a thousand monkeys on typewriters will create a Shakespearean play that details how to exploit that service for a massive DDoS attack. Protocols are built to be helpful, and that helpfulness doesn’t make our jobs any easier. They respond to what they hear and deliver what they’re asked. We have to prevent bad actors from getting away with things at the network and system level, because application developers rarely ask “may I” before turning on every feature to make users happy.

Make sure you check your memcached settings today and immunize yourself from this problem. If GitHub got blasted with 1.3Tbps of traffic this week, there’s no telling who’s going to get hit next.


Should We Build A Better BGP?

One story that seems to have flown under the radar this week, with the Net Neutrality discussion being so dominant, was the little hiccup with BGP on Wednesday. According to reports, AS39523 was able to redirect traffic from some major sites like Facebook, Google, and Microsoft through their network. Since the ISP in question is located inside Russia, there’s been quite a lot of conversation about the purpose of this misconfiguration. Is it simply an accident? Or is it a nefarious plot? Regardless of the intent, the fact that we live in 2017 and can still cause massive portions of Internet traffic to be rerouted has many people worried.

Routing by Suggestion

BGP is the foundation of the modern Internet. It’s how routes are exchanged between every autonomous system (AS) and how traffic destined for your favorite cloud service or cat picture hosting provider gets to where it’s supposed to be going. BGP is the glue that makes the Internet work.

But BGP, for all of the greatness that it provides, is still very fallible. It’s prone to misconfiguration. Look no further than the Level 3 outage last month. Or the outage that Google caused in Japan in August. And those are just the top searches from Google. There have been a myriad of problems over the course of the past couple of decades. Some are benign. Some are more malicious. And in almost every case they were preventable.

BGP runs on the idea that people configuring it know what they’re doing. Much like RIP, the suggestion of a better route is enough to make BGP change the way that traffic flows between systems. You don’t have to be an evil mad genius to see this in action. Anyone that’s ever made a typo in their BGP border router configuration will tell you that if you make your system look like an attractive candidate for being a transit network, BGP is more than happy to pump a tidal wave of traffic through your network without regard for the consequences.

But why does it do that? Why does BGP act so stupid sometimes in comparison to OSPF and EIGRP? Well, take a look at the BGP path selection mechanism. CCIEs can probably recite this by heart. Things like Local Preference, Weight, and AS_PATH govern how BGP will install routes and change transit paths. Notice that these are all set by the user. There are no automatic conditions outside of the route’s origin. Unlike OSPF and EIGRP, there is no consideration for bandwidth or link delay. Why?
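The decision process is easy to sketch. This toy comparison, simplified to the three attributes named above (the real algorithm has a dozen or so steps), shows that nothing in it ever measures link quality:

```python
from dataclasses import dataclass, field

@dataclass
class Path:
    weight: int = 0            # Cisco-local, set by the operator
    local_pref: int = 100      # shared within the AS, set by the operator
    as_path: list = field(default_factory=list)

def better_path(a, b):
    """Simplified best-path decision: highest weight, then highest
    local preference, then shortest AS_PATH. Every tiebreaker is
    operator-set; bandwidth and delay never enter the picture."""
    if a.weight != b.weight:
        return a if a.weight > b.weight else b
    if a.local_pref != b.local_pref:
        return a if a.local_pref > b.local_pref else b
    return a if len(a.as_path) <= len(b.as_path) else b

# A hijacker advertising a shorter AS_PATH wins, regardless of how
# terrible the actual link behind it is.
legit = Path(as_path=[64500, 64501, 64502])
hijack = Path(as_path=[64666])
assert better_path(legit, hijack) is hijack
```

The private AS numbers here are placeholders; the point is that a shorter AS_PATH beats a legitimate route every time unless an operator has set weight or local preference to say otherwise.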

Well, the old Internet wasn’t incredibly reliable from the WAN side. You couldn’t guarantee that the path to the next AS was the “best” path. It may be an old serial link. It could have a lot of delay in the transit path. It could also be the only method of getting your traffic to the Internet. Rather than letting the routing protocol make arbitrary decisions about link quality the designers of BGP left it up to the person making the configuration. You can configure BGP to do whatever you want. And it will do what you tell it to do. And if you’ve ever taken the CCIE lab you know that you can make BGP do some very interesting things when you’re faced with a challenge.

BGP assumes a minimum level of competency to use correctly. The protocol doesn’t have any built-in checks to avoid doing stupid things beyond the basics of not installing incorrect routes in the routing table. If you suddenly start announcing someone else’s prefixes with better metrics, the global BGP network is going to think you’re the better version of that AS and swing traffic your way. That may not be what you want. Given that most BGP outages or misconfigurations of this type only last a couple of hours until the mistake is discovered, it’s safe to say that fat fingers cause big BGP problems.

Buttoning Down BGP

How do we fix this? Well, aside from making sure that anyone touching BGP knows exactly what they’re doing? Not much. Some Regional Internet Registries (RIRs) require you to preconfigure new prefixes with them before they can be brought online. As mentioned in this Reddit thread, RIPE is pretty good about that. But some ISPs, especially ones in the US that work with ARIN, are less strict. And in some cases, they don’t even bring the pre-loaded prefixes online at the correct time. That can cause headaches when you’re trying to figure out why your networks aren’t being announced even though your config is right.

Another person pointed out the Mutually Agreed Norms for Routing Security (MANRS). These look like some very good common sense things that we need to be doing to ensure that routing protocols are secure from hijacks and other issues. But, MANRS is still a manual setup that relies on the people implementing it to know what they’re doing.

Lastly, another option would be the Resource Public Key Infrastructure (RPKI) service offered by ARIN. This service allows people that own IP address space to specify which autonomous systems can originate their prefixes. In theory, this is an awesome idea that gives a lot of weight to trusting that only specific ASes are allowed to announce prefixes. In practice, it requires PKI infrastructure on your edge routers. And anyone that’s ever configured PKI on even simple devices knows how big of a pain that can be. Mixing PKI and BGP may be enough to drive people back to sniffing glue.
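Conceptually, though, the origin validation itself is straightforward. Here’s a minimal sketch of RFC 6811-style checking against a hypothetical ROA table; the hard part in practice is the PKI machinery that proves the ROAs are authentic, not this lookup:

```python
import ipaddress

# Hypothetical ROA table: prefix -> (authorized origin AS, max length)
roas = {
    ipaddress.ip_network("203.0.113.0/24"): (64500, 24),
}

def validate_origin(prefix, origin_as):
    """Return 'valid', 'invalid', or 'not-found' for an announcement,
    mirroring the RFC 6811 origin validation states."""
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_net, (roa_as, max_len) in roas.items():
        if net.subnet_of(roa_net):
            covered = True
            if origin_as == roa_as and net.prefixlen <= max_len:
                return "valid"
    return "invalid" if covered else "not-found"
```

An announcement of 203.0.113.0/24 from AS64500 validates; the same prefix from a hijacking AS comes back invalid, and a router honoring RPKI would refuse to prefer it.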


Tom’s Take

BGP works. It’s relatively simple and gets the job done. But it is far too trusting. It assumes that the people running the Internet are nerdy pioneers embarking on a journey of discovery and knowledge sharing. It doesn’t believe for one minute that bad people could be trying to do things to hijack traffic. Or, better still, that some operator fresh from getting his CCNP isn’t going to reroute Facebook traffic through a Cisco 2524 router in Iowa. BGP needs to get better. Or we need to make some changes to ensure that even if BGP still believes that the Internet is a utopia, someone is behind it to ensure those rose-colored glasses don’t cause it to walk into a bus.

Devaluing Data Exposures

I had a great time this week recording the first episode of a new series with my co-worker Rich Stroffolino. The Gestalt IT Rundown is hopefully the start of some fun news stories with a hint of snark and humor thrown in.

One of the things I discussed in this episode was my belief that no data is truly secure any more. Thanks to recent attacks like WannaCry and Bad Rabbit and the rise of other state-sponsored hacking and malware attacks, I’m totally behind the idea that soon everyone will know everything about me and there’s nothing that anyone can do about it.

Just Pick Up The Phone

Personal data is important. Some pieces of personal data are sacrificed for the greater good. Anyone who is in IT or works in an area where they deal with spam emails and robocalls has probably paused for a moment before putting contact information down on a form. I have an old Hotmail address I use to catch spam if I’m relatively certain that something looks shady. I give out my home phone number freely because I never answer it. These pieces of personal data have been sacrificed in order to provide me a modicum of privacy.

But what about other things that we guard jealously? How about our mobile phone numbers? When I worked for a VAR, that was the single most secretive piece of information I owned. No one, aside from my coworkers, had my mobile number. In part, it’s because I wanted to make sure that it got used properly. But also because I knew that as soon as one person at the customer site had it, soon everyone would. I would be spending my time answering phone calls instead of working on tickets.

That’s the world we live in today. So many pieces of information about us are being stored. Our Social Security Number, which has effectively been misappropriated as an identification number. US driver’s licenses, which are also used as identification. Passport numbers, credit ratings, mother’s maiden name (which is very handy for opening accounts in your name). The list could be a blog post in and of itself. But why is all of this data being stored?

Data Is The New Oil

The first time I heard someone in a keynote use the phrase “big data is the new oil”, I almost puked. Not because it’s a platitude that underscores the value of data. I lost it because I know what people do with vital resources like oil, gold, and diamonds. They hoard them, stockpiling the resources until they can be refined and every ounce of value can be extracted. Then the shell is discarded until it becomes a hazard.

Don’t believe me? I live in a state that is legally required to run radio and television advertisements telling children not to play around old oilfield equipment that hasn’t been operational in decades. It’s cheaper for them to buy commercials than it is to clean up their mess. And that precious resource? It’s old news. Companies that extract resources just move on to the next easy source instead of cleaning up their leftovers.

Why does that matter to you? Think about all the pieces of data that are stored somewhere that could possibly leak out about you. Phone numbers, date of birth, names of children or spouses. And those are the easy ones. Imagine how many places your SSN is currently stored. Now imagine half of those companies go out of business in the next three years. What happens to your data then? You’d better believe it’s not going to get destroyed or encrypted in such a way as to prevent exposure. It’s going to lie fallow on some forgotten server until someone finds it and plunders it. Your only real hope is that it was being stored on a cloud provider that destroys the storage buckets after the bill isn’t paid for six months.

Devaluing Data

How do we fix all this? Can it even be fixed? It might, but it’s not going to be fun, cheap, or easy. It all starts by making discrete data less valuable. An SSN is worthless without a name attached to it, for instance. If all I have are nine random digits with no context, I can’t tell what they’re supposed to be. The value only comes when those nine digits can be matched to a name.

We’ve got to stop using the SSN as a unique identifier for a person. It was never designed for that purpose. In fact, storing SSNs at all is a really bad idea. Users should be assigned a new, random ID number when creating an account or filling out a form. The SSN shouldn’t be stored unless absolutely necessary. And when it is, it should be treated like a nuclear launch code: it should take special authority to query it, and the database that holds it shouldn’t be directly attached to anything else.

Critical data should be stored in a vault that can only be accessed in certain ways and never exposed. A prime example is the Secure Enclave in an iPhone. This enclave, when used for Touch ID or Face ID, stores your fingerprints and your face map. Pretty important stuff, yes? However, even as biometric ID systems become more prevalent, there isn’t any way to extract that data from the enclave. It’s stored in such a way that it can only be queried in a specific manner, with a yes/no result returned from the query. If you stole my iPhone tomorrow, there’s no way for you to reconstruct my fingerprints from it. That’s the template we need to use going forward to protect our data.
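The query-only interface is the key idea. This toy class mimics the shape of that API; real biometric matching is fuzzy and can’t use a plain hash, so this only illustrates the yes/no contract, not the matching itself:

```python
import hashlib
import secrets

class Enclave:
    """Toy model of an enclave-style store: data goes in, and only
    yes/no answers come out. There is deliberately no method to read
    the stored template back."""

    def __init__(self):
        self._salt = secrets.token_bytes(16)
        self._template = None

    def enroll(self, biometric):
        # Store only a salted digest, never the raw biometric.
        self._template = hashlib.sha256(self._salt + biometric).digest()

    def matches(self, candidate):
        # The only query the outside world gets: a boolean.
        probe = hashlib.sha256(self._salt + candidate).digest()
        return secrets.compare_digest(probe, self._template)
```

Stealing the object gets you a salted hash and a yes/no oracle, not a fingerprint, which is the property worth copying into every system that holds critical data.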


Tom’s Take

I’m getting tired of being told that my data is being spread to the four winds thanks to it lying around waiting to be used for both legitimate and nefarious purposes. We can’t build fences high enough around critical data to keep it from being broken into. We can’t keep people out, so we need to start making the data less valuable. Instead of keeping it all together where it can be reconstructed into something of immense value, we need to make it hard to get all the pieces together at any one time. That means it’s going to be tough for us to build systems that put it all together too. But wouldn’t you rather spend your time solving a fun problem like that rather than making phone calls telling people your SSN got exposed on the open market?

Setting Sail on Secret Seas with Trireme


Container networking is a tough challenge to solve. The evolving needs of creating virtual networks to allow inter-container communications is difficult. But ensuring security at the same time is enough to make you pull your hair out. Lots of companies are taking a crack at it as has been demonstrated recently by microsegmentation offerings from Cisco, VMware NSX, and many others. But a new development on this front set sail today. And the captain is an old friend.

Sailing the Security Sea

Dimitri Stiladis did some great things in his time at Nuage Networks. He created a great overlay network solution that not only worked well for software defined systems but also extended into the container world as more and more people started investigating containers as the new way to provide application services. He saw many people rushing into this area with their existing solutions as well as building new solutions. However, those solutions were all based on existing technology and methods that didn’t work well in the container world. If you ever heard someone say, “Oh, containers are just lightweight VMs…” you know what kind of thinking I’m talking about.

Late last year, Dimitri got together with some of his friends to build a new security solution for containers. He founded Aporeto, which is from the Greek for “confidential”. And that really informs the whole idea of what they are trying to build. Container communications should be something easy to secure. All the right pieces are in place. But what’s missing is the way to do it easily and scale it up quickly. This is where existing solutions are missing the point by using existing ideas and constructs.

Enter Trireme. This project, an open source version of the technology Aporeto is working on, was released yesterday to help container admins understand why securing communications between containers is critical and yet simple to do. I got a special briefing from Dimitri yesterday, and once he helped me understand it I immediately saw the power of what they’ve done.

In The Same Boat

Trireme works by doing something very simple. All containers have a certificate that is generated at creation, which allows them to be verified for consistency among other things. Trireme uses a TCP Authorization Proxy to grab the digital identity of the container and insert it into the TCP SYN setup messages. Now the receiving container knows who the sender is, because the confirmed identity of the sender is encoded in the setup message. If the sender is authorized to talk to the receiver, the communication can be set up. Otherwise the connection is dropped.
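A rough sketch of the idea, with an HMAC over the container’s attributes standing in for the certificate-based signing Trireme actually performs:

```python
import hashlib
import hmac
import json

# A stand-in shared secret; the real system derives identity from
# per-container certificates rather than one shared key.
SHARED_KEY = b"demo-key"

def signed_identity(labels):
    """Serialize a container's attributes and append an HMAC tag,
    loosely modeling the identity payload carried in the SYN."""
    payload = json.dumps(labels, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return payload + tag

def verify_identity(blob):
    """Return the sender's attributes if the tag checks out, or None
    so the receiver can drop the connection."""
    payload, tag = blob[:-32], blob[-32:]
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    if hmac.compare_digest(tag, expected):
        return json.loads(payload)
    return None
```

Flip a single bit of a forged identity and verification fails, which is exactly why a cloned container with copied attributes still can’t impersonate the original.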

This is one of those “so simple I can’t believe I missed it” moments. If there is already a secure identity set up for the container, it should be used. And adding that information to the TCP setup ensures that we don’t just take for granted that containers with similar attributes are allowed to talk to each other simply because they are on the same network. This truly is microsegmentation with the addition of identity protection. Even if you spin up a new container with identical attributes, it won’t have the same digital identity as the previous container, which means it will need to be authorized all over again.

Right now the security model is simple: if the attributes of the containers match, they are allowed to talk. You can set up some different labels and try it yourself. But with the power of Kubernetes as the management platform, you can extend this metaphor quite a bit. Imagine being able to create a policy that allows containers with the “dev” label to communicate if and only if they also have the “shared” label. Or making sure that “dev” containers can never talk to “prod” containers for any reason, even if they are on the same network. It’s an extension of a lot of things already being looked at in the container world, but it has the benefit of built-in identity confirmation as well as scalability.
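A policy check along those lines is a few lines of set logic. This hypothetical example encodes exactly the two rules just described:

```python
def allowed(sender_labels, receiver_labels):
    """Toy label policy: 'dev' may never reach 'prod'; two 'dev'
    containers may talk only if both also carry 'shared'; otherwise
    any shared label is enough."""
    if "dev" in sender_labels and "prod" in receiver_labels:
        return False
    if "dev" in sender_labels and "dev" in receiver_labels:
        return "shared" in sender_labels and "shared" in receiver_labels
    return bool(sender_labels & receiver_labels)

assert allowed({"dev", "shared"}, {"dev", "shared"})
assert not allowed({"dev"}, {"dev"})       # missing 'shared'
assert not allowed({"dev"}, {"prod"})      # never allowed
```

The real enforcement point is the identity check on the SYN, but the policy layer on top can stay this readable even as the label vocabulary grows.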

How does Trireme scale? Well, it’s not running a central controller or database of any kind. Instead, the heavy lifting is done by a local process on the container. That’s how Trireme can scale. No dependency on a central process or device failing and leaving everyone stranded. No need to communicate with anything other than the local container host. Kubernetes has the infrastructure to push down the policy changes to processes in the container which are then checked by the Trireme process. That means that Trireme never has to leave the local container to make decisions. Everything that is needed is right on deck.


Tom’s Take

It took me a bit to understand what Dimitri and his group are trying to do with Trireme and later with their Aporeto solution. Creating digital signatures and signing communications between containers is going to be a huge leap forward for security. If all communications are secured by default then security becomes the kind of afterthought that we need.

The other thing that Aporeto illustrates to me is the need for containers to be isolated processes, not heavy VMs. By creating a process boundary per container, Trireme and other solutions can help keep things as close to completely secure as possible. Lowering the attack surface of a construct down to the process level is making it a tiny target in a big ocean.

The People Versus Security


It all comes back to people. People are the users of the system. They are the source of great imagination and great innovation. They are also the reason why security professionals pull their hair out day in and day out. Because computer systems don’t have the capability to bypass, invalidate, and otherwise screw up security quite like a living, breathing human being.

Climb Every Mountain

Security is designed to make us feel safe. Door locks keep out casual prowlers. Alarm systems alert us when our home or business is violated. That warm fuzzy feeling we get when we know the locks are engaged and we are truly secure is one of bliss.

But when security gets in our way, it’s annoying. Think of all the things in your life that would be easier if people just stopped trying to make you secure. Airport security is the first that comes to mind. Or the annoying habit of needing to show your ID when you make a credit card purchase. How about systems that scan your email for data loss prevention (DLP) purposes and kick back emails with sensitive data that you absolutely need to share?

Security only benefits us when it’s unobtrusive yet visibly reassuring. People want security that works yet doesn’t get in their way. And when it does, they will go out of their way to do anything they can to bypass it. Some of the most elaborate procedures I’ve ever seen to get around security lockouts happened because people pushed back against the system.

Cases in point? The US Air Force was forced to put a code on nuclear missiles to protect them from being accidentally launched at the height of the cold war. What did they make that code? 00000000. No, really. How about the more recent spate of issues with the US transition to Chip-and-Signature credit card authentication as opposed to the old swipe method? Just today I was confronted with a card reader that had a piece of paper shoved in the chip reader slot saying “Please Swipe”. Reportedly it’s because transactions are taking 10 seconds or more to process. Much more secure for sure, but far too slow for busy people on the go, I guess.

Computers don’t get imaginative when it comes to overcoming security. They follow the rules. When something happens that violates a rule or triggers a policy to deny an action that policy rule is executed. No exceptions. When an incoming connection is denied at a firewall, that connection is dropped. When the rule says to allow it then it is allowed. Computers are very binary like that (yes, pun intended).
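That first-match, no-exceptions behavior is easy to model. A toy rule evaluator (hypothetical rules, default deny) behaves exactly as described:

```python
# Rules are evaluated top-down; first match wins, default deny.
RULES = [
    ("allow", {"port": 443}),
    ("deny",  {"port": 11211}),
]

def evaluate(packet):
    """First-match rule evaluation: no judgment, no exceptions.
    If nothing matches, the implicit default denies the packet."""
    for action, match in RULES:
        if all(packet.get(k) == v for k, v in match.items()):
            return action
    return "deny"

assert evaluate({"port": 443}) == "allow"
assert evaluate({"port": 11211}) == "deny"
assert evaluate({"port": 8080}) == "deny"  # no rule -> dropped
```

A human would argue with the second rule; the computer just executes it. That rigidity is the whole contrast with the people in this story.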

Bring The Mountain To Them

We’ve spent a huge amount of time and effort making security unobtrusive. Think of Apple’s Touch ID. It created a novel and secure way to encourage users to put passcode locks on phones. People can now just unlock their phone with a thumbprint instead of a long passcode. Yet even Touch ID was slow at first and took some acclimation. Then it was sped up to the point where it caused issues for the way people checked their phones for notifications. Apple has even gone to greater lengths in iOS 10, introducing features to get around the fast Touch ID authentication times caused by the new sensors.

Technology will always be one or more steps ahead of where people want it to be. It will always work faster than people think and cause headaches when it behaves in a contrary way. The key to solving security issues related to people is not to try and outsmart them with a computer. People are far too inventive to lose that battle. Even the most computer illiterate person can find a way to bypass a lockout or write a domain administrator password on a sticky note.


Tom’s Take

We need to teach people to think about security from a perspective of need. Why do we have complex passwords? Why do we need to rotate them? Why do the doors of a mantrap open separately? People can understand security when it’s explained in a way that makes them understand the purpose. They still may not like it, but at least they won’t be trying to circumvent it any longer. We hope.

Thoughts On Encryption


The debate on encryption has heated up significantly in the last couple of months. Most of the recent discussion has revolved around a particular device in a specific case, but encryption is older than that. Modern encryption systems represent the culmination of centuries of work on making sure things aren’t seen.

Encryption As A Weapon

Did you know that twenty years ago the U.S. Government classified encryption as a munition? Data encryption was treated as a military asset and placed on the U.S. Munitions List as an auxiliary asset. The control of encryption as a military asset meant that exporting strong encryption to foreign countries was against the law. For a number of years the only thing that could be exported without fear of legal impact was regular old Data Encryption Standard (DES) methods. Even 3DES, which is theoretically much stronger but practically not much better than its older counterpart, was restricted for export to foreign countries.
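The gap between those two ciphers is easy to quantify. DES has a 56-bit key, while meet-in-the-middle attacks reduce 3DES to roughly 112 bits of effective strength; at an assumed (and generous) trillion keys tried per second:

```python
# DES uses a 56-bit key; 3DES uses three keys, but meet-in-the-middle
# attacks cut its effective strength to roughly 112 bits.
des_keyspace = 2 ** 56
tdes_effective_keyspace = 2 ** 112

rate = 10 ** 12  # assumed: one trillion key trials per second

des_days = des_keyspace / rate / (3600 * 24)
# DES falls in under a day at this rate; 3DES's keyspace is about
# 2**56 times larger, putting it far beyond any brute-force budget.
```

That 2^56-fold difference is why export regulators treated DES as safe to ship abroad while stronger ciphers stayed on the restricted list.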

While the rules around encryption export have been relaxed since the early 2000s, there are still some restrictions in place. Those rules are for countries that are on U.S. Government watch lists for terror states or governments deemed “rogue” states. This is why you must attest to not living in or doing business with one of those named countries when you download any software that contains encryption protocols. The problem today is that almost every piece of software includes some form of encryption thanks to ubiquitous functions like OpenSSL.

Even the father of Pretty Good Privacy (PGP) was forced to circumvent U.S. law to get PGP into the hands of the world. Phil Zimmermann took the novel approach of printing the entirety of PGP’s source code in book form and selling it far and wide. Since books are protected speech, no one could stop them from being sold. The only barrier to recreating PGP from the text was how fast one could type. Today, PGP is one of the most popular forms of encrypting written communications, such as emails.

Encryption As A Target

Today’s issues with encryption are rooted in the idea that it shouldn’t be available to people that would use it for “bad” reasons. However, instead of being able to draw a line around a place on a map and say “The people inside this line can’t get access to strong encryption”, we now live in a society where strong encryption is available on virtually any device to protect the growing amount of data we store there. Twenty years ago no one would have guessed that we could use a watch to pay for coffee, board an airplane, or communicate with loved ones.

All of that capability comes with a high information cost. Our devices need to know more and more about us in order to seem so smart. The amount of data contained in innocuous things causes no end of trouble should that data become public. Take the amount of data contained on the average boarding pass. That information is enough to know more about you than is publicly available in most places. All from one little piece of paper.

Keeping that information hidden from prying eyes is the mission of encryption. The spotlight right now is on the government and its predilection for looking at communications. Even the NSA once stated that strong encryption abroad would weaken their ability to crack signals intelligence (SIGINT) communications. Instead, the NSA embarked on a policy of sniffing the data before it was ever encrypted by installing backdoors at ISPs and other points to grab the data in flight. Add in the recent vulnerabilities found in the key exchange process and you can see why robust encryption is critical to protecting data.

Weakening encryption so that it can be easily overcome by brute force is asking for a huge Pandora’s box to be opened. Perhaps in the early nineties it was unthinkable for someone to command enough compute resources to overcome large-number theory. Today it’s not unheard of to control resources vast enough to break simple problems in a matter of hours or days instead of weeks or years. Every time a new vulnerability comes out that uses vast computing power to break that theory, it weakens us all.


Tom’s Take

Encryption isn’t about one device. It’s not about one person. It’s not even about a group of people that live in a place with lines drawn on a map that believe a certain way. It’s about the need for people to protect their information from exposure and harm. It’s about the need to ensure that that information can’t be stolen or modified or rerouted. It’s not about setting precedents or looking good for people in the press.

Encryption comes down to one simple question: If you dropped your phone on the floor of DefCon or BlackHat, would you feel comfortable knowing that it would take longer to break into than the average person would care to try versus picking on an easier target or a more reliable method? If the answer to that question is “yes”, then perhaps you know which side of the encryption debate you’re on without even asking the question.

Will Cisco Shine On?


Cisco announced their new Digital Ceiling initiative today at Cisco Live Berlin, complete with a marketing video and a breakdown of the protocols involved. Funny enough, there’s also a presentation from just three weeks ago at Networking Field Day 11 on a very similar subject.

Cisco is moving into Internet of Things (IoT) big time. They have at least learned that the consumer side of IoT isn’t a fun space to play in. With the growth of cloud connectivity and other things on that side of the market, Cisco knows that is an uphill battle not worth fighting. Seems they’ve learned from Linksys and Flip Video. Instead, they are tracking the industrial side of the house. That means trying to break into some networks that are very well put together today, even if they aren’t exactly Internet-enabled.

Digital Ceiling isn’t just about the PoE lighting that was announced today. It’s a framework that allows all kinds of other dumb devices to be configured and attached to networks that have intelligence built in. The Constrained Application Protocol (CoAP) is designed to provide data about a great number of devices, not just lights. Yet lights are the launch “thing” for this line. And it could be lights out for Cisco.

A Light In The Dark

Cisco wants in on the possibility that PoE lighting will be a huge market. No other networking vendor that I know of is moving into this space, and only the established building automation companies have the manufacturing chops to try and pull off an entire connected infrastructure for lighting. But lighting isn’t something to take lightly (pun intended).

There’s a lot that goes into proper lighting planning. Locations of fixtures and power levels for devices aren’t accidents. It requires a lot of planning and preparation. Plan and prep means there are teams of architects and others that have formulas and other knowledge on where to put them. Those people don’t work on the networking team. Any changes to the lighting plan are going to require input from these folks to make sure the illumination patterns don’t change. It’s not exactly like changing a lightbulb.

The other thing that is going to cause problems is the electricians’ union. These folks are trained and certified to install anything that has power running to it. They aren’t just going to step aside and let untrained networking people start pulling down light fixtures and putting up something new. Finding out that there are new 60-watt LED lights in a building that they didn’t install is going to cause concern, and it will take a lot of investigation to find out whether it’s even legal in certain areas for non-union, non-certified employees to install things that only electricians handle today.

The next item of concern is the fact that you now have two parallel networks running in the building. Because everyone that I’ve talked to about PoE Lighting and Digital Ceiling has had the same response: Not On My Network. The switching infrastructure may be the same, but the location of the closets is different. The requirements of the switches are different. And the air gap between the networks is crucial to avoid any attackers compromising your lighting infrastructure and using it as an on-ramp into causing problems for your production data network.

The last issue in my mind is the least technically challenging, but the most concerning from the standpoint of longevity of the product line – Where’s the value in PoE lighting? Every piece of collateral I’ve seen and every person I’ve heard talk about it comes back to the same points. According to the experts, it’s effectively the same cost to install intelligent PoE lighting as it is to stick with traditional offerings. But that “effective” word makes me think of things like Tesla’s “Effective Lease Payment”.

By saying “effective”, what Cisco is telling you is that the up-front cost of a Digital Ceiling deployment is likely to be expensive. That large initial number comes down through things like electricity cost savings and increased efficiencies, or any one of a number of clever things that we tell each other to pretend that it doesn’t cost lots of money to buy new things. It’s important to note that you should evaluate the cost of a Digital Ceiling deployment completely on its own before you start factoring in any cost savings that come months or years from now.


Tom’s Take

I’m not sure where IoT is going. There’s a lot of learning that needs to happen before I feel totally comfortable talking about the pros and cons of having billions of devices connected and talking to each other. But in this time of baby steps toward solutions, I can honestly say that I’m not entirely sold on Digital Ceiling. It’s clever. It’s catchy. But it ultimately feels like Cisco is headed down a path that will lead to ruin. If they can get CoAP working on many other devices and start building frameworks and security around all these devices then there is a chance that they can create a lasting line of products that will help them capitalize on the potential of IoT. What worries me is that this foray into a new realm will be fraught with bad decisions and compromises and eventually we’ll fondly remember Digital Ceiling as yet another Cisco product that had potential and not much else.