Monday, August 14, 2017

Science Fiction and Networking


With WorldCon 75 having finished yesterday, it seems appropriate to write a post about Science Fiction. I've been thinking about the role of Science Fiction in coming up with ideas, scenarios, and effects of networking technology. What are your favourite SF predictions about it?

There's obviously a ton of Science Fiction that has touched on this topic, sometimes with chillingly predictive visions. George Orwell's 1984, for instance, looked at how a totalitarian society might use pervasive surveillance. In the novel, two-way "telescreens" and microphones made surveillance possible. While today's world is fortunately not the dystopian totalitarianism depicted in the novel, the predictions about surveillance capabilities were far ahead of their time, and proved quite accurate. If anything, Orwell may have been too optimistic, given how much of our lives and even the operation of our possessions relies on information technology, and the eagerness of some parties to tap into those information flows. More work for us engineers to keep securing our communications better, I guess! See RFC 1984 for why cryptography is important for the Internet. What an apt specification number!

Then there was Fahrenheit 451 by Ray Bradbury. While that book isn't about technology but rather about books, pressures to prevent access to information are prevalent in some parts of the Internet today. Interestingly, the HTTP error code signifying access blocked by authorities is 451, as specified in RFC 7725.

There are also plenty of more specific examples, like Arthur C. Clarke's prediction of communications satellites in Wireless World, the translation devices in Douglas Adams' The Hitchhiker's Guide to the Galaxy, Star Trek's communicators, John Brunner's The Shockwave Rider, which coined the term "computer worm", Neal Stephenson's predictions about the use of cryptocurrencies in his novel Cryptonomicon, and so on. Fundamentals of communication have also played a role in many books; e.g., the speed of light shaped the outcomes in Liu Cixin's Remembrance of Earth's Past trilogy by limiting the usefulness of communication with faraway places.

And then there's cyberpunk. When William Gibson's Neuromancer came out, I remember sinking deep into his odd world, which has become more real with every passing year. True visions of the future of the Internet, virtual reality, hackers, organisations fighting in the network... I should also mention Vernor Vinge, Philip K. Dick, Bruce Sterling, Pat Cadigan, and many others. And of course, as The Matrix showed later, the inhabitants of virtual worlds don't always recognise that they are in a virtual world. Not that we can definitely say we aren't in a computer simulation either.

While not strictly speaking about communications, the technological singularity in the form of rapidly improving artificial intelligence has been the topic or background of a lot of SF works. Perhaps the best example of this is Vernor Vinge's The Coming Technological Singularity. The opening statement of his paper is "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." That was written in 1993, so there are six years left on his prediction. I think we should use those years wisely.

At the WorldCon I had a chance to meet Charles Stross. I've been reading his book Singularity Sky this week. It is an interesting and action-packed story where, among other things, the IETF has taken over the UN in the distant future :-) Reportedly, when Charles was asked about this, he responded that he wouldn't be surprised if it happened, as running the planet is, after all, a thankless infrastructure maintenance task.

But, everything above is well known. What else is there? Which book or writer do you think is the most interesting today? I have not had a chance to read enough in recent years. Can you give me some pointers?

I also spent some time searching for a good anthology or listing of network-related science fiction. A bit surprisingly, I didn't find much. This must be a failure of my searching; I cannot imagine that such lists wouldn't exist. Anybody care to give pointers?

Jari Arkko

Photo: A hole in the clouds on the day of the WorldCon opening, conveniently in the form of a flying saucer. Credits: Helsingin Kaupungin viestintä. The original photo appeared in their tweet.

Acknowledgements: Christer Holmberg, Elinor Aminoff, Andrew McGregor, Robert Sparks, Charles Stross, Ted Lemon, Veikko Oittinen, Martin Thomson, Désirée Miloshevic, Miljenko Opsenica, Olli Arkko, and Lee Howard all provided insights relating to the topics in this article.

Tuesday, August 8, 2017

Silicon Pilgrimage



In California Janne and I made a pilgrimage to the holy sites in Silicon Valley: offices of AMD, Intel, Google, and Apple, and two museums.

The Apple building that we saw was the new space donut one. Quite a remarkable building! It was difficult to find at first because we kept getting their old address from maps, but once I saw this picture on my display, I knew I had found the right site :-)




But the really interesting visits were to the Computer History Museum and Intel Museum. Both had interesting displays. Plenty of hardware, but also things like bean bag chairs:


So, old hardware *and* old fashion:


And old memories for me at least:


Intel inside?


Finally, there was also some amount of networking history in the Computer History Museum, e.g., Vint Cerf on video talking about the Internet, and some references to the IETF:


Photos (c) 2017 by Jari Arkko

Thursday, July 13, 2017

What is the value of security for things?


I'm in a workshop tomorrow to discuss (among other things) setting standards for minimum security requirements for Internet of Things devices. There are a lot of technical details to discuss, but I started to think about this from a broader perspective first.

Why do we need security to begin with? The traditional perspective on this relates to guaranteeing that your systems are available for your use and your data is kept confidential. However, as we have witnessed in recent times, the Internet is an interconnected system and its vulnerable parts may be used in attacks to harm other parts of the Internet. As a result, we cannot think of security merely in terms of individual systems. We also need to look at the impacts on the commons, i.e., the Internet as a whole.


Economics of Networking

Metcalfe's law states that the value of a network is proportional to the square of the number of connected users of the system. Reed's law suggests that the utility of a network scales even exponentially with the number of users, on the grounds that there is an exponential number of possible subgroups of users. Beckström's law looks at the added value that transactions performed over the network generate. A variant of this law subtracts costs related to securing the system and attacks that happened despite the security.

These laws are all interesting, and provide different viewpoints on the value of a network. I'll try to take them together and apply them to the Internet of Things.
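As a rough sketch (my own illustration, not taken from any of the laws' authors), here is how the three growth models compare for small networks. All coefficients are arbitrary assumptions; only the growth shapes matter:

```python
# Illustrative comparison of the three "laws of network value".
# Coefficients are made up; only the growth rates are the point.

def metcalfe(n):
    """Value grows with the square of the number of connected nodes."""
    return n * n

def reed(n):
    """Value grows with the number of possible subgroups, roughly 2^n."""
    return 2 ** n

def beckstrom(n, benefit_per_node=10.0, security_cost_per_node=2.0,
              attack_losses=0.0):
    """Net transaction value, minus security costs and attack losses."""
    return n * (benefit_per_node - security_cost_per_node) - attack_losses

for n in (2, 10, 20):
    print(n, metcalfe(n), reed(n), beckstrom(n))
```

Even at twenty nodes, Reed's subgroup count dwarfs the other two, which is why it is usually read as an upper bound on utility rather than a realistic estimate.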


Does Metcalfe's Law Apply to IOT?

To begin with, do they apply to the Internet of Things? This may not be immediately obvious in the context of closed devices deployed for the purpose of a single application. But the ability to deploy these devices is still one example of network effects. The existence of general-purpose networks (mobile networks, wireless LANs, the Internet) has made it economically possible to deploy most of these applications. Applications are rarely worth enough to warrant building special networks for them.

Metcalfe's law was written in the context of humans interacting with each other, expecting an ability to contact other humans when the need arises. The IOT world equivalent of that is not necessarily things contacting each other, but rather the ability to create applications that are not silos with their own dedicated devices. The ability to open up data and functions for more general use is where Metcalfe's law really comes into play.

Years ago I realised this as I networked a large number of sensors in my home: what was set up for one purpose often found new applications. Humidity sensors designed to monitor building health could be used to calculate when laundry is dry. That's a minor application, of course, but consider others. If vehicles on the road have access to real-time traffic information, and can interact with other vehicles on the road, this enables significant savings for society, in terms of less congestion, or the ability of self-driving vehicles to pack themselves into "trains" to reduce energy consumption.

But all this requires the ability to use as much open data as possible, and interoperable systems so that different systems and different manufacturers' products can work together. We're not quite there yet, though we are making progress. (See, for instance, the WISHI workshop at the IETF.)


Back to the Economics of IOT Security

Clearly, the variant of Beckström's Law is on the right track in considering the costs of security and any remaining attacks. But here's our dilemma: just like there's no reason for every human to talk to every other human, there's even less need for all IOT-related applications and devices to connect to each other. There's great value in open data and interoperable systems, but if I add a rain sensor to my garden in Finland, it is unlikely that a warehouse tracking system in Buenos Aires needs to interact with my sensor.

However, for the attackers this is not true to the same extent. If my sensor can be subverted and used as a part of a botnet attack, then for sure the attackers would find it usable for attacking the warehouse.

In other words, I fear that the "value for attack" grows faster than the utility, which grows more slowly, at a rate similar to (or even below) the growth of the value of human connections.

So let me propose a couple of new laws... let's call the first one Eflactem's law:
Law I (Eflactem's Law): The cost of attacks from a group of nodes grows in proportion to the number of vulnerable nodes in that group times the number of nodes in the entire Internet.
In other words, the value of a compromised network to the attacker grows when either there are more compromised nodes or there are more (or more valuable) nodes to attack in the Internet. Therefore, for any new set of nodes to be added to the Internet, the probability of those being used in attacks needs to be low enough to ensure that the value exceeds the cost of attacks, given that the cost of attacks grows quickly.

The second new law is about the potential value of an application network:
Law II: The potential value of a network of application nodes grows in proportion to the square of the number of nodes having an ability to participate in the application.
That is, the value of an application grows quickly, as a square, but is limited by the number of nodes that have a possibility to participate in the application. This is equally true of smart object deployments and other applications. A closed system whose data cannot be accessed by outsiders is less valuable than a system that is broadly used by other applications.

Now, putting these together we get:
Law III: The value of a network of application nodes grows in proportion to the square of the number of nodes having an ability to participate in the application, minus the cost of attacks, which is proportional to the number of vulnerable nodes in that network times the number of nodes in the entire Internet.
Note that the benefits and costs are attributed to entire society here, not the individual players. Each individual may of course assess these from their own perspective, and, e.g., decide to deploy an insecure device even if it causes harm elsewhere.

Fortunately, the attack costs are limited by the number of entities that are vulnerable. The purpose of this law is to show how important it is that we do our best to eliminate all of those vulnerabilities, and act quickly when new vulnerabilities come to light.

The theoretical effects are also limited by practical effects: e.g., while a device is in theory capable of disturbing any addressable network in the Internet, it can only do so to one at a time, and large numbers of compromised entities are required to mount a large-scale Denial-of-Service attack.
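To make Law III concrete, here is a small numeric sketch. All coefficients and the Internet size are hypothetical, chosen only to illustrate the shape of the trade-off:

```python
# Hypothetical illustration of Law III: application value grows with the
# square of participating nodes, while attack cost grows with the number
# of vulnerable nodes times the size of the entire Internet.

INTERNET_SIZE = 10_000_000   # assumed node count for the rest of the Internet

def law_iii_value(participants, vulnerable_fraction,
                  value_coeff=1.0, attack_coeff=1e-3):
    vulnerable = participants * vulnerable_fraction
    benefit = value_coeff * participants ** 2
    attack_cost = attack_coeff * vulnerable * INTERNET_SIZE
    return benefit - attack_cost

# 1000 well-secured nodes vs. the same network with 1% vulnerable nodes:
print(law_iii_value(1000, 0.0))    # 1000000.0
print(law_iii_value(1000, 0.01))   # 900000.0
```

Under these made-up coefficients, a mere 1% vulnerability rate erases a tenth of the network's value, and at 20% the net value goes negative. That is the point of the law: attack costs scale with the whole Internet, not just with the application itself.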


What About Those Minimal Security Requirements?

In view of the above, there seem to be two categories of requirements:

  • Requiring that new things connected to the Internet continue to maintain the Internet commons, and do not become additional vehicles of attack against the rest of the Internet.

    For instance, they should not be susceptible to being used as reflectors in Denial-of-Service attacks. And they should be updatable, and their integrity against newly discovered attacks should be actively tracked throughout their lifetime.
  • Requiring that the new things are safe for the purpose of the application they were made for.

    For instance, applications that monitor a person's health should not leak information outside authorised parties. And applications that involve moving physical objects (e.g., self-driving cars) should be safe from outside manipulation, and fail safely when something goes wrong.

Not So Fast, Mister!

I've used the term "requirement" above, but it is worthwhile to stop for a moment, and think about ways that systems get deployed in the Internet. There's no central authority, for very good reasons, so no one can mandate any specific technology to be used.

The Internet is about voluntary co-operation. Self-interest drives parties to connect and interoperate. That being said, best practices are worked on and documented, and in most cases followed, although often not by everyone. For instance, in the last couple of years there has been a common trend to employ transport layer security for web communications. By now, a significant part (or even most) of Internet communications is secured in this fashion. The reasons behind this include both self-interest (e.g., the ability for content providers to control the end-to-end channel to their users) and user benefits (e.g., less ability for traffic capture in open Wireless LANs). Could this deployment model be replicated for Internet of Things security?

What I've written here is an attempt to come to a conclusion about why security (in the widest sense) is important. There are elements here that are similar to the transport layer security deployment: there's self-interest (e.g., being in control of devices, or protection from any liability claims resulting from security incidents) and the good of the Internet.

Conclusions

I think this goes to show why securing Internet of Things devices is very important. And that focusing on damage to the rest of the Internet, as well as to the application itself, is also important.

But what do you think?

Jari Arkko

Acknowledgements: I wanted to thank Jon Crowcroft as his recent writings inspired some of the thinking reflected in this post. The image is from Wikimedia and by McSush.

Wednesday, July 5, 2017

Drone Control


I have a very nice drone, but I don't have enough free time to fly it more than once a week or so. But what I have noticed is that every time I want to fly it, it demands a firmware update or no-fly zone update.

Granted, it is important to build functions into consumer drones to prevent their accidental or malicious use in the wrong place. But I feel like there's very little true consumer ownership of the things we buy. Ubiquitous connectivity and cloud systems enable wonderful things, but we individuals and consumers find ourselves a bit ... constrained.

Every time I do my weekly start of the drone in the woods or mountains, I'm greeted with a complaint that the drone doesn't want to fly unless I make an update, or it is extremely limited in range and altitude. And as each firmware update needs about a quarter of a gig of downloads, and perhaps as much as half an hour of processing, I'll burn both time and one battery pack on the update.

Ok, so maybe this is fine. Safety is important. But, where to draw the line? What if your car won't start because it needs to download a new no-drive zone map? And you had an emergency and really needed the car? What if software updates and no-fly zones were set on commercial grounds, e.g., you don't get to fly in an interesting place because somebody else wants to retain the right to do so?

Maybe these questions sound silly, but the concept of ownership is clearly changing as part of smart objects and new electronics. What does it mean to "own" an object? Will people want to pay for objects, if they don't get to control them? Consider a piece of equipment that the manufacturer decides you cannot resell to others. We already have that for, say, movies in cloud-based services. Remember when you were able to trade your old DVDs for other ones? No longer. And if it is just movies, maybe it is ok. But what about computers? Cars? Houses?

Fortunately, there's open source. For drones, for instance.

Photo (c) 2017 by Jari Arkko

Monday, June 5, 2017

Human rights and IOT


This week I'm at the EuroDIG conference, discussing policy issues related to the Internet. I will be on a panel focusing on human rights and the Internet of Things (IOT).

And that's an interesting topic! At the IETF, we've had plenty of debates about the general topic of whether human rights should be a consideration when designing Internet technology. If you are interested, read the Human Rights Research Group's draft on the topic.

But back to the more specific question of IOT. The panel is hosted by the Dynamic Coalition on IOT. This is a group of people who have looked at the role of ethics in IOT systems. I've been an occasional contributor in that group as well, and their document also makes good reading: it covers things like meaningful transparency and user control.

But, to be honest, I'm not a human rights or ethics expert. I know a few things about the tech, though. Amidst various IOT discussions I find that it is useful to set a few things straight, so that we at least have a good basis for understanding the technology. And then we can have a more accurate discussion of the ethics or human rights.

Done right, the Internet of Things can bring great benefit and support our societies and human rights: the environment, energy efficiency, quantity and quality of food production, safety and many other things stand to benefit. But it takes effort to ensure that we can enjoy these benefits, and to avoid side effects. And it takes education for all of us to understand how IT is shaping our lives, and how it can be managed and used.

I wanted to highlight four issues:

1. It is not about the gadgets, dammit!
Many IOT discussions focus on the efficiency, security issues or other characteristics of the devices. While that's important, it is far from the full picture. We'd be far better off considering cloud servers as an even more important component in most systems; that's where most of the interesting functionality usually resides. And that's what you also want to be under the user's control.
Similarly, we are often focused on the gadgets and servers, but from my perspective the true value of IOT systems is in the data produced or consumed by them. Having user control of that data is very important, as is how the data is used and by whom. It needs to be put to good use by, or with the consent of, those whose data is being used. It should not be used to violate privacy or in a discriminatory manner.
Also, the architecture of IOT systems as a whole matters a lot. The IRTF Thing-to-Thing Research Group, for instance, is looking at various designs where the devices talk to each other, rather than (for instance) connecting through a centralised cloud entity. A classic example of where this is the right way to design the system is light control; you don't want your ability to turn on the lights to be dependent on your Internet connection :-)
 2. Collateral damage
When we talk about security of the IOT systems, we need to understand that security is not merely about protecting the devices or even the data. 
The attacks that caused some common Internet services to fail last year were launched from compromised IOT devices, but the target of the attack was not the devices themselves. It was other parties, in this case the Internet naming infrastructure. (For a discussion of this incident, see the video from the IETF-97 technical plenary.)
The friendly neighbour principle: you cannot design Internet-connected systems without considering the effect of your systems on others in the Internet.
 3. Interoperability
Interoperability is a key issue in creating a large market of useful applications and enabling user control. With more and more Internet-based smart devices, I believe we are on a good path with regards to interoperable devices being able to use the same networks and run over Internet protocols. However, this is not enough. We also need applications that are interoperable. Otherwise it will not be possible to plug light switches from one manufacturer into light bulbs from another.
We also need interoperability for the sake of driving competition, and to ensure that the market supports these systems on a long-term basis regardless of individual manufacturers' decisions. Application-level interoperability was discussed in the 2016 IAB workshop on semantic interoperability.
 4. Rights of the user
The ability of the user to be in the driver's seat with regards to information concerning him or her is important. I also wanted to highlight one additional issue: the right to tinker.
This isn't just an issue for hobbyists, it also important for our ability to update products that may be used for decades after they have been manufactured and long after support for them has ceased. I also believe the ability to build new things and modify various consumer systems is important for a healthy, innovative ecosystem.
And as for the opening picture above, that was the message waiting for me this morning in my Inbox: my IOT devices, such as the weight scale, telling me how I'm doing. I think the machines still have something to learn about instilling confidence and a positive attitude! 😀 Then again, maybe the weight scale would be more efficient if it slapped me on the face for my failure to have a healthier diet. Would the positive attitude or the slapping IOT be more ethical?

Jari Arkko

Screenshot (c) 2017 by Jari Arkko. I'd like to acknowledge Ari Keränen, Anna Larmo and Francisco Alcoba for interesting discussions in this problem space.

Have an idea, buy components at midnight


Have an idea, fetch parts at midnight to implement ❤️ verkkokauppa.com

Photos (c) 2017 by Jari Arkko

Thursday, May 25, 2017

Access point recommendations?

With an upgrade of my Internet connection, it seems that my trusty WRT-54GL wireless network is now a bottleneck. I would love to get recommendations for 802.11ac etc. access points. I'll be operating them strictly in bridged access point mode, and I'll need several, so cost is a factor. But the WRT-54GLs have been spectacularly reliable, and did not get confused and need reboots like many other products. That is something that I absolutely need.

Saturday, May 20, 2017

More disk!


I am in Canada, and have bought once again more hard drives! C-Ordinateurs Canada, the local Fry's equivalent, supplied the goods!

Tuesday, May 16, 2017

If you think about IOT security, think broad enough!

Internet of Things security issues are serious, and are often the focus of discussions. The discussion is much needed. How can we make our IOT devices safe?

How can we prevent attacks similar to those that last year caused many popular Internet services to be unavailable, with badly secured IOT devices being used as a part of the attack?

This is a very important topic.

However, I would like to argue that people often think about this in too narrow a manner. First off, we have a tendency to focus on visible, concrete things. However, there's more to IOT than the gadgets, and I think the other parts deserve equal scrutiny.

The IOT is not in the gadget, it is in the cloud.

We have to secure the gadgets, but we also have to secure the rest of the system. And more broadly, it benefits the consumers and users to have secure, interoperable, and open solutions for both the gadgets and other parts of the IOT ecosystem. We need data that is in well-specified format, we need data that is under user control, we need systems that you and I can compose from components. But we do not need closed ecosystems.

Thursday, May 11, 2017

Internet and Societies



Today we have an interesting panel discussion organised by ISOC and Chatham House on the effect of the Internet on societies. Is the Internet helping bring societies together, or creating more divisions? With the increased criticism against globalisation, fake news, and the emergence of closed social circles for like-minded (and often misguided) fragments of society, it is easy to be worried about this.

But, it is also easy to focus on the most visible issues. When looking at the Internet and societies, one needs to consider the full scope of human interaction, and consider human, technical and commercial aspects together.

What issues are affecting our ability to connect together? I want to start with five points:

Human interaction is broad, and we need to look at the whole picture. It is easy to focus on the most publicly visible forms of media, and see how the news media for instance has, to put it kindly, become more diverse.

But the whole picture is broader and more nuanced, and the concepts of togetherness and division may not be so applicable. For instance, the Internet has made it much easier for various smaller groups to connect where they perhaps had no ability to do so before. Communities working on Wikipedia, people with special interests being able to connect, minorities connecting to their culture, and so on.


Human interaction is about both tech and human abilities. It would be a mistake to think solely about technical solutions for problems involving, say, news or social media. Our technical capabilities advance at an incredible speed, but humans are also very good at learning new skills in new environments.

But, clearly critical media reading and communication skills are needed even more in today's world. These topics need to become even more central in our schools and continuing education programs.

Division vs. unification goes beyond people discussions. The Internet continues to be embedded in the fabric of our societies. We need to consider not just the people's discussions, but how well the Internet supports all the other interactions from personal gadgets to managing cities' traffic to running businesses.

Technical and commercial considerations. TCP/IP and the web provide a platform where we have almost universal interconnectivity and lack of technical barriers.

Still, as the IAB's IOT semantic interoperability workshop pointed out last year, interoperability at the level of applications can still be a problem. Can you buy Apple lightbulbs for a house that has Microsoft light switches?

And more broadly, are commonly used Internet services such as social networks erecting borders that restrict efficient connection, for instance due to their deployment patterns as is shown in the image further down?

And, is our increasingly centralised "winner takes it all" Internet economy driving a model where it becomes difficult to switch social network/search/video/mail/application store providers?


Finding broader consensus is hard, but rewarding. As those of us who work in standards or open source realise, finding agreements in broad, diverse communities is hard and time-consuming. Yet, we find the motivation to do so because if we succeed, the benefits are much greater than with everyone running their own things. We've obviously done this not just with technical developments like the Internet, but also to a large extent with our societies, building their infrastructure and rules. And I believe we will continue to be able to do that.

And where does all this leave us? Clearly, there is a lot of work ahead of us. But that work is not merely about the public sphere of news media or social media discussions, it is also about our ability to offer communications tools for all groups, regardless of their size. Our continuing education of the human parts of the system. Our drive to improve standards so that the technology allows connections. Our drive to ensure that the business system provides the possibilities for evolution and connection.

I would also like to point to my other article for a discussion of why IOT security is a much broader topic (in line with the thoughts in this article) than people usually focus on.

What do you think? Leave a comment below! You can also follow our panel discussion online.

Jari Arkko

Acknowledgments: I would like to thank all my friends and colleagues at Ericsson, the IAB, ISOC, and Chatham House for interesting discussions in this problem space.

Picture credits: 1/ Jari Arkko 2/ Evi Nemeth for the original picture, edits by Jari Arkko 3/ World Map of Social Networks from Vincos.It. How divided is this world, even at this level? And I was surprised to find out that there are places in the world where the most popular social media application is LinkedIn :-)

Saturday, April 29, 2017

More 10G cards



Received my 2nd 10G Ethernet card, and successfully inserted it into the router. A 3rd card is on order...

I've started testing the cards, and can get 9.3 Gbit/s! That does feel fast. This number is from iperf. Using SSH to copy files I get a smaller number, however, around 1.2 to 1.8 Gbit/s depending on which crypto is being used. The smaller number is with chacha20, the faster one with aes128-ctr. Still investigating what the bottlenecks here are, trying to understand what iperf measures, for instance. Preliminary results seem to indicate that a CPU core is operating at a high load when it is doing encryption for SSH, but that the disks are not the source of the delay.


More research needed... but this is already a 12-18 fold increase from my earlier servers, which were only able to do about 100 Mbit/s while using SSH. In that case the speed was very clearly limited by the CPU being unable to do crypto any faster.
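The 12-18 fold figure is simple arithmetic on the measured numbers, and can be checked:

```python
# Sanity check of the speedup quoted above: SSH-limited throughput of the
# old servers vs. the new setup (measured values from the text, in Mbit/s).
old_mbit = 100                  # old servers: ~100 Mbit/s over SSH
new_mbit = (1200, 1800)         # new setup: 1.2-1.8 Gbit/s depending on cipher

speedups = tuple(new / old_mbit for new in new_mbit)
print(speedups)   # (12.0, 18.0)
```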

With regards to getting these cards to work, my only complaint is that it is difficult to manage Linux devices when the number or type of interfaces changes. The interface names change... and for some reason I don't get accurate information about link status from ethtool, and some of my interfaces seem to not work well with an /etc/network/interfaces-based definition, but rather need explicit commands to be brought up. Odd. Maybe I've misconfigured something, or maybe there's some issue with these specific cards.
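One way to tame the name changes is to pin a stable name to each card's MAC address with a systemd .link file, and give stubborn interfaces an explicit kick from ifupdown. This is only a sketch: the MAC address, the name lan0, and the 192.0.2.1/24 address are placeholders, and your setup may differ:

```
# /etc/systemd/network/10-lan.link -- pin a stable name to the NIC's MAC
# (aa:bb:cc:dd:ee:ff and "lan0" are placeholders)
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=lan0

# /etc/network/interfaces fragment -- 192.0.2.1/24 is a placeholder address
auto lan0
iface lan0 inet static
    address 192.0.2.1/24
    # some cards seem to need an explicit up; harmless otherwise
    post-up ip link set dev lan0 up
```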


Photos (c) 2017 by Jari Arkko

Tuesday, April 18, 2017

Native IPv6!


I have once again full IPv6 connectivity on ALL of my uplinks, and this time natively! Thanks to my great ADSL ISP, Nebula, who offers IPv6 as standard service for everyone, and also my great mobile network provider, DNA, who do the same. IPv6 life is good in Finland!

Nebula gives you a /56 for your own networks, and a /96 for the router-to-router interface.

Their default sales and support guys understand and know IPv6 well. All you have to do is to ask for the /56 address and it will be given to you.

DNA IPv6 comes on, completely automatically, for every user with a capable device. Which is most devices by now. Both DNA and Nebula have been providing this service for many years.

My previous setup was through a tunnel service, interrupted due to addressing changes, and now gone forever; good riddance :-) Native is the way to go! I'm now fully dual stacked natively for all my networks, be it ADSL or LTE.

A couple of observations:

  • I was happy to find that whereas my previous setup required hacks to do firewalling on IPv6, all the functionality is now there, even dynamic connection tracking. Great!
  • Once again, simplifying my network to offer only the bare essential services has made things like firewall configuration much easier.
  • The router advertisement daemon, radvd, has a packaging bug on Ubuntu 16.04: the installation scripts do not create the pid directory /var/run/radvd, which causes the startup scripts to fail. Silently... but you can run "sudo mkdir /var/run/radvd" and everything works after that.
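For reference, a minimal radvd.conf along these lines looks roughly like the following; the interface name and prefix are placeholders, not my actual configuration:

```
# Minimal radvd.conf sketch: advertise one /64 on the LAN so hosts can
# autoconfigure their addresses. lan0 and the prefix are assumptions.
interface lan0
{
    AdvSendAdvert on;
    prefix 2001:db8:ab00:1::/64
    {
        AdvOnLink on;
        AdvAutonomous on;
    };
};
```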

Copyright (c) 2017 by Jari Arkko

Saturday, April 15, 2017

New Router


I have finally replaced my trusty old main router with a new one. The old one had been going strong, but it was running on a Pentium II platform from 1997 that I paid 10€ for sometime in the early 2000s... that investment has paid off phenomenally!

Not only that, but the old machine was badly maintained and had stopped accepting updates without a complete reinstall, which I never found time to do. I was running a kernel from 2005 for twelve years! That is not just bad, it is a security nightmare!

I guess I don't have a lot of attack surface even if you get past the router/firewall, but the router has at least been the target of DoS attacks. Here is the old router, with its proud Pentium II CPU:




But the hardware for the new router arrived earlier this year, and now I had time to set it up properly and disconnect the old router. While the new router isn't the newest gear either, it has a modern architecture, hopefully better settings and maintenance, and a much simplified configuration. The new machine runs an ASUS CS-B motherboard and an Intel Celeron G1850 CPU in a stylish but simple BitFenix Phenom micro-ATX case. There's a medium-sized SSD but no other disks. The machine will run no services other than packet forwarding, firewalling, and DHCP for the internal network. And my OS is still Ubuntu, but this time version 16, not 4.

At the same time, I've reorganised my entire network around the following principles:
  • Right things in the cloud: Keep as much of the functionality in the cloud as possible, but do not lose control of your own systems or materials. I rent my own space in the cloud and keep file storage on my own servers at home.
  • Just make it fast: Build a fast, general-purpose and simple network that supports any new service that might come up in the future.
  • Keep it simple: No unnecessary services, no extra complications, no complex architectures.
More specifically, what I have done is this:
  • Move all external-facing web services to the cloud. With one exception, all my websites -- such as planetskier.net -- are now hosted by Linode, with TLS certificates from Let's Encrypt. I have yet to move arkko.com, because it is the only domain that handles e-mail, and I haven't found a reasonable, free alternative for hosting e-mail outside our lab server at work.
  • Simplify the internal network organisation. I've decommissioned much of the old hardware and the special-purpose networks. I won't be needing NAT64 any longer, and I'll work with a simpler network that doesn't require the HOMENET automatic routing setup. I still maintain two special networks, one internal and one for visitors, but I've divided them across the two redundant uplinks that I have, ADSL and LTE Advanced. This also allows easy (but manual) switching from one uplink and router to another when something breaks.
  • Turn off dozens of services for which I had no use, or which were only partially functional.
  • Upgrade the internal network to 10G. This is still in progress, as only one of my file servers has the necessary network card. Other cards have been ordered, but I'm still searching for a reasonably priced 10G switch with at least three, and preferably eight, 10GBase-T ports. Pointers welcome.
  • Employ IPv6 as a means to access individual services from elsewhere in the Internet.
  • Employ a smaller number of larger file servers. In my case it is still beneficial to have multiple physically separate devices for safety, but they need to be appropriately dimensioned, i.e., n × 10 TB rather than a measly 2-4 TB each as previously.

    The primary new file server runs on a similar new computer as the router, but with an MSI A88XM-E45 motherboard and an AMD Athlon X4 760K Black Edition CPU. This particular CPU is, by the way, a world record holder among Athlon X4 760Ks: it was once overclocked to 7.1 GHz with liquid nitrogen, but is now enjoying retirement at a more relaxed 3.7 GHz.


  • Employ redundant disk clusters. I've turned on ZFS on my new file server, currently running 2x10 TB disks in a mirror, providing 10 TB of usable storage. The really excellent thing about this is that I can add more storage on the go while keeping the same logical disk structure for users, even if I outgrow the 10 TB. Of course, redundancy within a single case is not sufficient protection, so in addition to manual backups I'm also considering hosting backup servers at alternate locations, with automatic network sync.
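The grow-as-you-go property comes from ZFS pools being extendable with additional vdevs. A sketch of the commands involved, with the pool name and device names as placeholder assumptions (in practice /dev/disk/by-id paths are safer than sdX names):

```
# Create a pool from a 2x10TB mirror pair; ~10TB usable.
zpool create tank mirror /dev/sda /dev/sdb
zfs create tank/files            # filesystem mounted at /tank/files

# Later, extend the same pool with a second mirror pair. Capacity grows
# to ~20TB while /tank/files and everything under it stays in place.
zpool add tank mirror /dev/sdc /dev/sdd
```

Note that vdevs cannot be removed from a pool this simply, so the layout is worth planning before running zpool add.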

Not everything is quite up and running yet; in particular, I spent five hours last night just getting the router to work. It turned out that the mere presence of a DHCP client package affected network interfaces that had been defined as static.

Setting up IPv6 on my ADSL connection to Nebula is the next step; the LTE side of the network already has it. There are also a couple of old laptops still running something that I need to figure out what it exactly is :-) One of those laptops also drives the display in the sauna, and its broken screen needs replacement.

Here's the communications closet. The new router, file server, and old computations server are sitting side by side at the far end (this whole space is under a staircase), next to the new small rack that I had built earlier.


Photos (c) 2017 by Jari Arkko

10 Gb Ethernet Cards

Visit the US, buy hardware from Fry's. I wanted to play with 10G Ethernet for my servers. It's very expensive in Europe, and sometimes significantly cheaper here. This one was $200. Sadly, they had only one. Where could I buy cards cheaply online in Europe? Funny to see a heatsink on a network card, though.

They also had a $200 10G switch, but with only two 10G ports it's not so useful for my purpose, even with the extra 1G ports :-) Any suggestions for low-cost switches?

Also, this is 200-400x faster than my Internet connection, so obviously not so useful for external connections. But maybe I can increase the speed of my backup processes from one file server to another :-)


Photo credits (c) 2017 by Jari Arkko

19" Wine rack

A 3U wine bottle storage for your rack system! But does it come with RJ-45?


Photo credits by Canford (the supplier for this product)

AMD Ryzen

Today Janne is building the first computer ever bought for him (his previous ones came from Olli's scrapyard): an AMD Ryzen 7 1700 CPU, an Asus Prime X370-PRO motherboard, and a Define R5 case.


Photo credits (c) 2017 by Jari Arkko

Gaming museum in Vapriikki

"Se on moro!" -- hello, as they say in Tampere. I'm spending the winter vacation weekend in Tampere with Janne. First stop: the gaming museum in Vapriikki, with ping pong, a C64 in a real-life setting, and Stair Dismount, the fall-down-the-stairs-and-make-an-insurance-claim game.




Photo credits (c) 2017 by Jari Arkko

Spare Parts

We are starting to build the new file server today. I'm still missing the intended CPU and a better cooler, but it turns out Olli the overclocker has a box of spare parts...


Photo credits (c) 2017 by Jari Arkko

10TB

I am in California for a quick two-day visit this February, on IETF business. But I'm not so busy that I couldn't stop on the way from the airport to pick up some storage from Fry's :-)


Photo credits (c) 2017 by Jari Arkko