Thursday, July 13, 2017

What is the value of security for things?


I'm in a workshop tomorrow to discuss (among other things) setting standards for minimum security requirements for Internet of Things devices. There are a lot of technical details to discuss, but I started to think about this from a broader perspective first.

Why do we need security to begin with? The traditional perspective on this relates to guaranteeing that your systems are available for your use and your data is kept confidential. However, as we have witnessed in recent times, the Internet is an interconnected system, and its vulnerable parts may be used in attacks to harm other parts of the Internet. As a result, we cannot think of security merely in terms of individual systems. We also need to look at the impacts on the commons, i.e., the Internet as a whole.


Economics of Networking

Metcalfe's law states that the value of a network is proportional to the square of the number of connected users of the system. Reed's law suggests that the utility of a network scales exponentially with the number of users, on the grounds that there is an exponential number of possible subgroups of users. Beckström's law looks at the added value generated by transactions performed over the network. A variant of this law subtracts the costs of securing the system and of the attacks that happened despite that security.
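Roughly in symbols, with n the number of users (the subscripted names and the B, C, and L terms are my shorthand, not the laws' original notation):

    $$ V_{\text{Metcalfe}} \propto n^{2}, \qquad V_{\text{Reed}} \propto 2^{n}, \qquad V_{\text{Beckström}} = \sum_{i,j} B_{ij} - C_{\text{security}} - L_{\text{attacks}} $$

where B_ij is the value of transaction j to user i, C_security is what was spent on protecting the system, and L_attacks is the loss from attacks that happened anyway.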

These laws are all interesting, and provide different viewpoints to the value of a network. I'll try to take them together and apply them to the Internet of Things.


Does Metcalfe's Law Apply to IoT?

To begin with, do these laws apply to the Internet of Things? This may not be immediately obvious in the context of closed devices deployed for the purpose of a single application. But the ability to deploy these devices is itself an example of network effects. The existence of general-purpose networks (mobile networks, wireless LANs, the Internet) has made it economically possible to deploy most of these applications. Applications are rarely worth enough to warrant building special networks for them.

Metcalfe's law was written in the context of humans interacting with each other, expecting an ability to contact other humans when the need arises. The IoT-world equivalent is not necessarily things contacting each other, but rather the ability to create applications that are not silos with their own dedicated devices. The ability to open up data and functions for more general use is where Metcalfe's law really comes into play.

Years ago I networked a large number of sensors in my home, and quickly realised that what was set up for one purpose often found new applications. Humidity sensors designed to monitor building health could be used to calculate when laundry is dry. That's a minor application, of course, but consider others. If vehicles on the road have access to real-time traffic information, and can interact with other vehicles on the road, this enables significant savings for society, in terms of less congestion, or the ability of self-driving vehicles to pack themselves into "trains" to reduce energy consumption.

But all this requires the ability to use as much open data as possible, and interoperable systems so that different systems and different manufacturers' products can work together. We're not quite there yet, though we are making progress. (See, for instance, the WISHI workshop at the IETF.)


Back to the Economics of IoT Security

Clearly, the variant of Beckström's law is on the right track in considering the costs of security and of any remaining attacks. But here's our dilemma: just as there's no reason for every human to talk to every other human, there's even less need for all IoT-related applications and devices to connect to each other. There's great value in open data and interoperable systems, but if I add a rain sensor to my garden in Finland, it is unlikely that a warehouse tracking system in Buenos Aires needs to interact with my sensor.

For the attackers, however, this is not true to the same extent. If my sensor can be subverted and used as part of a botnet attack, the attackers would certainly find it useful for attacking the warehouse.

In other words, I fear that the "value for attack" grows faster than the utility. The utility grows more slowly, at a rate similar to (or even lower than) the growth of the value of human connections.

So let me propose a couple of new laws... let's call the first one Eflactem's law:
Law I (Eflactem's Law): The cost of attacks from a group of nodes grows in proportion to the number of vulnerable nodes in that group times the number of nodes in the entire Internet.
In other words, the value of a compromised network to the attacker grows when there are more compromised nodes, or when there are more (or more valuable) nodes to attack in the Internet. Therefore, for any new set of nodes added to the Internet, the probability of those nodes being used in attacks needs to be low enough that their value exceeds the cost of attacks, given that the cost of attacks grows quickly.
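In symbols (again my notation, not part of the law's statement), with v the number of vulnerable nodes in the group and N the number of nodes in the entire Internet:

    $$ C_{\text{attack}} \propto v \cdot N $$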

The second new law is about the potential value of an application network:
Law II: The potential value of a network of application nodes grows in proportion to the square of the number of nodes able to participate in the application.
That is, the value of an application grows quickly, as a square, but is limited by the number of nodes that have a possibility to participate in the application. This is as true of smart object deployments as of other applications. A closed system whose data cannot be accessed by outsiders is less valuable than a system that can be broadly used by other applications.
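In the same notation, with p the number of nodes able to participate:

    $$ V_{\text{application}} \propto p^{2} $$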

Now, putting these together we get:
Law III: The value of a network of application nodes grows in proportion to the square of the number of nodes able to participate in the application, minus the cost of attacks, which grows in proportion to the number of vulnerable nodes in that network times the number of nodes in the entire Internet.
Note that the benefits and costs here are attributed to society as a whole, not to the individual players. Each individual may of course assess them from their own perspective, and, e.g., decide to deploy an insecure device even if it causes harm elsewhere.
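To make the trade-off concrete, here is a small toy model of Law III, i.e., V = k1·p² − k2·v·N. The constants k1 and k2, and all the numbers, are made up purely for illustration; nothing here is calibrated against real data:

    # Toy model of Law III: V(p, v, N) = k1 * p**2 - k2 * v * N
    # p: nodes able to participate in the application
    # v: vulnerable nodes in that network
    # N: nodes in the entire Internet
    # K1, K2: illustrative constants with no empirical meaning

    K1 = 1.0   # value per squared participating node (assumed)
    K2 = 50.0  # attack cost per vulnerable-node/Internet-node pair (assumed)

    def net_value(p, v, n, k1=K1, k2=K2):
        """Net societal value of an application network under Law III."""
        return k1 * p**2 - k2 * v * n

    internet = 10_000_000   # total Internet nodes in this toy world
    participants = 100_000  # nodes able to participate in the application
    for fraction in (0.0, 0.001, 0.01):
        vulnerable = int(participants * fraction)
        print(f"vulnerable fraction {fraction:.1%}: "
              f"net value {net_value(participants, vulnerable, internet):+.2e}")

Even in this crude sketch, a vulnerable fraction of a tenth of a percent is enough to wipe out the entire Metcalfe-style benefit; that is exactly the asymmetry the laws above try to capture.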

Fortunately, the attack costs are limited by the number of entities that are vulnerable. The purpose of this law is to show how important it is that we do our best to eliminate all of those vulnerabilities, and act quickly when new vulnerabilities come to light.

The theoretical effects are also bounded by practical limits: while a device is in theory capable of attacking any addressable node in the Internet, in practice it can only attack a few at a time, and large numbers of compromised entities are required to mount a large-scale Denial-of-Service attack.


What About Those Minimal Security Requirements?

In view of the above, there seem to be two categories of requirements:

  • Requiring that new things connected to the Internet help maintain the Internet commons, and do not become additional vehicles for attacks against the rest of the Internet.

    For instance, they should not be susceptible to being used as reflectors in Denial-of-Service attacks. They should also be updatable, and their integrity against newly discovered attacks should be actively tracked throughout their lifetime.
  • Requiring that the new things are safe for the purpose of the application they were made for.

    For instance, applications that monitor a person's health should not leak information to unauthorised parties. And applications that involve moving physical objects (e.g., self-driving cars) should be safe from outside manipulation, and should fail safely when something goes wrong.

Not So Fast, Mister!

I've used the term "requirement" above, but it is worthwhile to stop for a moment, and think about ways that systems get deployed in the Internet. There's no central authority, for very good reasons, so no one can mandate any specific technology to be used.

The Internet is about voluntary co-operation. Self-interest drives parties to connect and interoperate. That being said, best practices are worked on, documented, and in most cases followed, although often not by everyone. For instance, in the last couple of years there has been a common trend to employ transport layer security for web communications. By now, a significant part (or even most) of Internet communications is secured in this fashion. The reasons behind this include both self-interest (e.g., the ability of content providers to control the end-to-end channel to their users) and user benefits (e.g., less opportunity for traffic capture in open wireless LANs). Could this deployment model be replicated for Internet of Things security?

What I've written above is an attempt to conclude why security (in the widest sense) is important. There are elements here that are similar to the transport layer security deployment: there's self-interest (e.g., staying in control of one's devices, or protection from liability claims resulting from security incidents), and there's the good of the Internet.

Conclusions

I think this goes to show why securing Internet of Things devices is very important. And why we need to focus on damage to the rest of the Internet as well as to the application itself.

But what do you think?

Jari Arkko

Acknowledgements: I want to thank Jon Crowcroft, whose recent writings inspired some of the thinking reflected in this post. The image is from Wikimedia, by McSush.

Wednesday, July 5, 2017

Drone Control


I have a very nice drone, but I don't have enough free time to fly it more than once a week or so. What I have noticed, though, is that every time I want to fly it, it demands a firmware update or a no-fly-zone update.

Granted, it is important to build functions into consumer drones that prevent their accidental or malicious use in the wrong place. But I feel there's very little true consumer ownership of the things we buy. Ubiquitous connectivity and cloud systems enable wonderful things, but we individuals and consumers find ourselves a bit... constrained.

Every time I start the drone on my weekly outing in the woods or mountains, I'm greeted with a complaint that it doesn't want to fly unless I make an update, or that it will fly only with extremely limited range and altitude. And since each firmware update requires about a quarter of a gigabyte of download, and perhaps as much as half an hour of processing, I burn both time and one battery pack on the update.

Ok, so maybe this is fine. Safety is important. But, where to draw the line? What if your car won't start because it needs to download a new no-drive zone map? And you had an emergency and really needed the car? What if software updates and no-fly zones were set on commercial grounds, e.g., you don't get to fly in an interesting place because somebody else wants to retain the right to do so?

Maybe these questions sound silly, but the concept of ownership is clearly changing with smart objects and new electronics. What does it mean to "own" an object? Will people want to pay for objects if they don't get to control them? Consider a piece of equipment that the manufacturer decides you cannot resell to others. We already have that for, say, movies in cloud-based services. Remember when you were able to trade your old DVDs for other ones? No longer. And if it is just movies, maybe that is ok. But what about computers? Cars? Houses?

Fortunately, there's open source. For drones, for instance.

Photo (c) 2017 by Jari Arkko