Wednesday, June 27, 2012

IP for Smart Objects

At Ericsson's Lab web site, Jan Höller and I have published an article. The article deals with the general architecture for the "Internet of Things", and how we must break the current application-specific silos and move to a model that employs general-purpose network access technology, IP and IPv6, and web technology. The link to the article is here.

Friday, April 20, 2012

Home networks. By magic.


The routers in my automatically configured IPv6 home network

I wanted to talk about personal home networks on IPv6. I have built a prototype of some cool new technology and my first network is now up and running, having configured itself completely automatically. My implementation is a hack, and only works in very limited cases. Nevertheless, I think it is the first implementation of this technology, currently under development at the IETF's new HOMENET working group. And I believe this technology might have an impact on how people set up their home networks in the future. Hence this article.

(Note that this post is not about finding an ISP that supports IPv6, nor about home gateway products that are IPv6 capable. Those are of course important issues, but there has been plenty of discussion about them in various forums already. See the end of the article for further details on those.)

What I want to talk about is your home network itself. Can you have any type of network? Any number of nodes? What kind of network architecture should you choose? Is there a lot of configuration effort? The simple case -- one router with a network behind it -- is easy; RFC 6204 has the details. And we all know how to handle this and even slightly more complex cases in IPv4; it is far from perfect, but the technology is well understood.

I Have a Dream

The dream is that you can have any number of routers and hosts and connect them in any way you see fit. And the network comes alive automatically: all parts of the network get address space, routers know where to send packets, and names get resolved to addresses. And all this should happen without any human touch. (Especially by my mother. Bless her, she is a wonderful person. But she is not that good at configuring IPv6 routing entries.)

"One Subnet Should Be Enough for Everybody"

The simple case with one subnet is easy to deal with. There is already plenty of equipment on the market that does that. There is an argument that says going beyond this model leads to problems, and that it would be prudent to stick with the simple model. I just don't believe the simple model is sufficient. Here are a few example cases where it clearly is not:
  • Network separation through policy. Many of today's wireless routers provide separate "Guest" and "Private" networks by default. It is expected that upcoming applications -- such as smart energy networking for power companies -- may increase requirements for having multiple networks in homes.
  • Heterogeneous network technology. Some types of link layer technologies cannot be bridged together, leaving routing or NATting as the only options. Other technologies may be bridgeable, but the speed differences are too large for bridging to be wise. For instance, bridging together Gigabit Ethernet and some low-speed sensor networking technology may lead to the latter being overwhelmed by the multicast and broadcast traffic of the former.
  • Organic growth of networks. People who have, for instance, insufficient wireless coverage in their houses often end up buying a new device and adding it to their networks. More often than not, these new devices have been routers/NATs in the IPv4 world.
My dream came with a nightmare as well; we might get this wrong. The biggest danger is that IPv6 will be seen as hard to use, leading people to add more NAT devices to their networks instead. Another danger is causing the same kinds of problems that chained NATs already cause on the IPv4 side in an organic growth scenario -- for instance, difficulties in communicating between devices within the same home. And I really do not want to see the NAT66 solution used for connecting two different parts of an IPv6 network.

The HOMENET WG is trying to address these requirements by creating a solution that supports multiple routers, multiple subnets, automatic prefix configuration, automatic routing, and across-the-home name resolution. The working group is currently designing the architecture, and various solutions exist as individual proposals. The group is focused on recommending the use of existing tools, as things like DHCP-based prefix delegation and prefix announcements via Router Advertisements are the right thing to do. And if routing is needed, the existing routing protocols can certainly handle the task. However, the group will have to create a few extensions in cases where fully automatic configuration is not otherwise possible.


HOMENET WG meeting in Paris, France

The group is looking at a number of different solutions. There are routing protocol designs based on OSPFv3, RIPng, RPL, and even simple routing extensions for Neighbor Discovery options. For prefix assignment, the group has two alternate designs, one based on hierarchical DHCP prefix delegation and another one based on distributed operation over a routing protocol.

The picture below shows the architecture for an OSPFv3-based design, providing both routing and prefix assignment capabilities.


OSPF-based home routing architecture


Example Network

The figure below shows my home network, a good example of a network that would benefit from the HOMENET technology. The network has over 200 Gigabit Ethernet ports, 4 kilometers of Cat6 cable, and enables things like my laundry talking to me via Facebook. In other words, it does all the usual geek network things.

One home network

But the important point from the perspective of the actual network architecture is that I had to create a dozen different subnets within my network. I obviously needed my primary internal subnet, but also had to create other subnets for visitor networks, NAT64 networks (these consume two prefixes), and some of my home automation and utility networks were in their own address space for various reasons.


Subnets in the above home network


I'm a geek, and at the beginning I thought I could easily handle all this manually. Then I realized that I had to run a routing protocol to keep the route entries correct. And a tool to monitor what devices I have in my network, as I had lost count. And then I forgot what prefixes were assigned where. It was too hard to keep all those numbers in my brain.

And this was only the beginning of my problems. A couple of months ago I woke up one morning to realize that my ISP had renumbered my entire network. I had to start over. The moral of the story is that even us geeks need automation, let alone our mothers and other people who have no expertise in networking technology.

Implementing HOMENET

I created a prototype to test the ideas of HOMENET. The primary goal was to understand what the real needs were, find out the missing pieces, as well as to see if our specifications for the OSPFv3 extensions were on the right track. I now have a small implementation. The code is unreadable, the design is a hack, and it still misses large parts of the necessary functionality. But it has already given me an opportunity to see if the ideas work in practice.

First off, the technology seems to be working as intended. I'm writing this article in a network configured by the prototype, so it works. And it feels like the natural way of doing things. If we get the technology fully specified, I'd expect other networks to want to use it as well. Corporate networks, for instance, would probably find it useful.

The implementation is capable of automatically configuring OSPFv3 itself, generating router identifiers for all involved routers, assigning address space in an optimal manner to the entire network, discovering all available DNS servers, and configuring the Router Advertisement daemon to advertise the assigned prefixes and DNS server addresses. And since this week, the prototype also automatically configures Ericsson NAT64 devices by making an assignment for the address space that those devices need.
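To give a flavor of that last step, here is a minimal sketch of how a prototype like this might emit a configuration fragment for the Linux radvd daemon once a prefix and a DNS server have been assigned to an interface. The interface name and addresses are made-up examples, and HORD's actual output may well differ; this only illustrates the kind of glue involved.

```python
# Hypothetical sketch: emit a radvd.conf fragment for one interface once
# the auto-configuration code has assigned it a prefix and discovered a
# DNS server. Interface name and addresses below are made-up examples.

RADVD_TEMPLATE = """interface {ifname}
{{
    AdvSendAdvert on;
    prefix {prefix}
    {{
        AdvOnLink on;
        AdvAutonomous on;
    }};
    RDNSS {dns}
    {{
    }};
}};
"""

def radvd_fragment(ifname: str, prefix: str, dns: str) -> str:
    """Return a radvd configuration block that advertises the given
    prefix and recursive DNS server on the given interface."""
    return RADVD_TEMPLATE.format(ifname=ifname, prefix=prefix, dns=dns)

if __name__ == "__main__":
    # Example values; a real system would take these from the prefix
    # assignment algorithm and from DNS discovery.
    print(radvd_fragment("eth1", "2001:db8:1:2::/64", "2001:db8:1::53"))
```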

Logs showing how router IDs, prefixes, RAs, DNS,
and NAT64 have been configured automatically

I've learned a number of detailed lessons about the protocols necessary to do this; those lessons will be fed back to the IETF working group. But the main lesson for me was how connected the routing-protocol-based auto-configuration software is to other parts of the system. Obviously, an OSPF router talks to other OSPF routers. But in this case it has several other interactions as well, illustrated in the figure below.

Dependencies

The first group of interactions relates to where the system gets the usable address space it needs. The preferred source for address space is your ISP, and DHCP prefix delegation is the best way to retrieve it. However, not all ISPs support this protocol, and some users may have to manually configure their address space in one of the routers. From there on, the HOMENET technology can distribute the address space to all other routers in the network. Finally, when the home routers come up for the first time and no connection to the ISP has been established yet, they are in a difficult position. There is no address space to be used, yet communication between the different nodes in the network is difficult without addresses that work across the entire home network. At this point it might be useful to generate some Unique Local Address (ULA) space -- a feature of IPv6 -- that can be used until actual ISP connectivity comes online.
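For illustration, RFC 4193 suggests generating the ULA's 40-bit Global ID pseudo-randomly, for example by hashing the current time together with a hardware identifier and placing the result under the fd00::/8 prefix. The sketch below is a simplified rendering of that idea, not the code used in my prototype:

```python
# Simplified sketch of RFC 4193-style ULA generation: build a /48 prefix
# under fd00::/8 whose 40-bit Global ID is a hash of the current time and
# a hardware identifier. Illustrative only, not the prototype's code.
import hashlib
import ipaddress
import time
import uuid

def generate_ula_prefix() -> ipaddress.IPv6Network:
    # RFC 4193 suggests hashing a timestamp and an EUI-64; uuid.getnode()
    # (a 48-bit MAC or random value) stands in for the latter here.
    seed = time.time_ns().to_bytes(8, "big") + uuid.getnode().to_bytes(6, "big")
    global_id = hashlib.sha1(seed).digest()[-5:]       # 40 bits
    prefix_bytes = bytes([0xFD]) + global_id + bytes(10)
    return ipaddress.IPv6Network((prefix_bytes, 48))

if __name__ == "__main__":
    print(generate_ula_prefix())                       # e.g. fdxx:xxxx:xxxx::/48
```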

Another set of interactions relates to where the address space is used: the address space needs to be configured on interfaces, advertised in Router Advertisements, given to a NAT64 implementation for its use, and so on. In my network I also need some addresses for servers that represent legacy sensor networks on IPv6.

Finally, even when addresses have been assigned and packets are routed correctly, the end user will not be happy unless hosts can resolve names to addresses. For this to happen, the hosts need to be told where the DNS servers are. This can be done by adding DNS server addresses as options in Router Advertisement messages. But the auto-configuration system still has to figure out where the DNS servers are! There are a couple of different approaches to this. One straightforward approach would be to run a server on each router, resolving names through the global DNS root system. I have chosen a different approach, where I attempt to discover existing DNS servers from my ISP or from my own network. I use the DNS Discovery Daemon (DDD) for this task.

There is an interesting complication, however, in that IPv6-only and dual-stack networks differ in some respects. First of all, in a dual-stack network the configuration of IPv6 DNS servers is often not absolutely necessary, as the IPv4 DNS servers can serve IPv6 content. But things get more complicated with network tools like NAT64, because they involve translation of DNS results through a technique called DNS64. Given this, a name server that acts as a DNS64 is something you want to use behind a NAT64 device, but not elsewhere in the network. DDD uses an advanced technique for probing the name servers to determine whether a server is performing normal or DNS64 operations.
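To make the DNS64 distinction concrete, one common probing trick is to ask the candidate server for the AAAA (IPv6) record of a name that is known to have only an IPv4 address: if an AAAA answer comes back, the server must have synthesized it and is therefore doing DNS64. The sketch below illustrates the idea, assuming the dnspython library and using ipv4only.arpa as the probe name; these specifics are my illustration here, not necessarily what DDD itself does:

```python
# Sketch of a DNS64 probe: ask a candidate resolver for the AAAA record
# of a name known to have only an IPv4 (A) record. If an AAAA answer
# comes back, the resolver synthesized it, i.e. it is doing DNS64.
# Uses the dnspython library; "ipv4only.arpa" is one convenient
# IPv4-only probe name.
import dns.resolver

def is_dns64(server: str, probe_name: str = "ipv4only.arpa") -> bool:
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [server]
    try:
        answer = resolver.resolve(probe_name, "AAAA")
        return len(answer) > 0            # synthesized AAAA => DNS64
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return False                      # plain DNS: no AAAA for this name

if __name__ == "__main__":
    print(is_dns64("2001:db8::53"))       # made-up resolver address
```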

Some of these interactions have interesting aspects that require further work. The assignment of DNS servers is one such area. Timing the start of the ULA generation process is another one. We will be addressing these issues in the coming months.

Implementation Experiences

The OSPFv3 auto-configuration parts were very simple to implement. The prefix assignment was not as trivial, but still relatively easy. These new OSPFv3 extensions are on the order of two thousand lines of code, so they should be relatively easy to add to any OSPFv3 implementation, assuming the implementation has been made reasonably extensible.

However, OSPF itself is a very complex and difficult protocol to implement. I chose to implement it from scratch. How hard could it be? It turned out to be very hard. I do not yet have a full implementation; among other things, the actual route computation is still missing even though the flooding process is complete. In any case, any sane person would start from an existing implementation and just add the new parts. Oh well. I have learned a lot about routing protocols from this exercise, which was part of my goals.


Conclusions

HOMENET is an interesting piece of technology that promises to make the configuration of home and other IPv6 networks much simpler. The working group at the IETF is in an active design phase; the participation of all interested parties would be welcome there. There are also several implementation efforts under way, not just mine. I would say, however, that we are still in the exploration phase, as we keep discovering what this technology could do. The specifics of the protocols will for sure keep changing. If you have feedback on what kinds of things should be auto-configured in networks or how we should go about it, let me or the working group know!


Details

This blog entry is based on my presentation at RIPE-64, Ljubljana, Slovenia, on April 19th, 2012. See the slides for more information, or go to the web page for the HOMENET working group at the IETF. I'd like to thank my colleagues and various IETF people for interesting discussions in this problem space: Mark Townsley, Ray Bellis, Michael Richardsson, Ari Keränen, Lorenzo Colitti, Fred Baker, Lee Howard, Wassim Haddad, Joel Halpern, Jan Melen, and others -- thank you. I'd also like to thank Ericsson for letting me work on this (among the thousand other things :-)

HORD, my routing daemon, is not open source, at least not at this time. But DDD, the DNS discovery part, is. Look here for the details.

IPv6-capable ISPs and Routers

Note: Regarding IPv6-capable ISPs: there are plenty; check out Comcast in the US, Nebula in Finland, or Mobitel in Slovenia, for instance.

There are also plenty of IPv6-capable routers. My personal preference is small Linux-based PCs, but you can also check out commercial models from Netgear, Cisco, and others.


Start of the HOMENET implementation effort at Christmas 2011


Photo credits (c) 2010-2012 by Jari Arkko

Thursday, March 22, 2012

Smart Igloos

Smart igloo


Surely the Eskimos need home networking, too. If the rest of us are setting up our homes for Internet connectivity, smart power, entertainment, and surveillance, shouldn't they have it too? Of course they do. This is why we set out to build igloos with state-of-the-art networking facilities:

Cool housing

  • Your igloo becomes your friend on Facebook, so that when you are out fishing you can keep checking on your smartphone how warm the family is inside the igloo.
  • The network becomes one with snow; snow is important. We need to know whether the walls of the igloo are melting. This is why the igloos are constructed from a mixture of snow, tiny sensors, and Snowcat 5 cabling.
  • Understanding what the snow is doing is also important for mountaineers and skiers. The same technology can be used to determine how much new snow is accumulating and what kind of temperatures are developing in the snow pack. This helps in predicting avalanche risk.
  • The networking architecture is designed for low-power, intermittent connectivity from remote locations.
  • Everything can be constructed from lowest cost parts with widely available, mature technology.


Igloo from the inside. Note the sensor hanging from the roof.
It is used for measuring inside temperature.

Igloos! Eskimos! Are You Guys Serious?

Obviously, the igloos are only an example application for the kind of technology that we'd like to develop. It is an example that we personally care about, as many of the people behind this spend a lot of their free time on the mountains and in snowy conditions.

But we are actually very serious about the technology. We need example applications so that we can gather experience and improve our designs. We are not the kind of researchers that produce only streams of Powerpoint presentations. We like to test our ideas in practice, because it tends to give a more honest view into how well they actually work. And once the igloo test is over, we move the sensors back home to measure things like snow cover on our roofs.

More on the technology later, but first we want to talk about snow and mountains.

Building a snow tower to improve reception to the cellular network


What Wouldn't We Do for Science?

The video below shows a time lapse of the village coming together. We built it in the Swiss Alps as a part of the ExtremeCom 2012 conference. This conference series is dedicated to developing and testing new communication technologies in difficult environments. In previous years it has met in the jungles of the Amazon and far up north in Sweden, for instance.




 Sledge, with the sensor
stick on the side
These igloos were not built in any suburban backyard either. Reaching the conference site (Berggasthaus Waldspitz, at an altitude of 1903 meters) from Zurich took three different trains, a gondola, sledging down one mountain, and hiking up another one. From the conference site there was still an hour's hike to the igloo site, with our demo gear on sledges.

Five igloos were built on the site, but two never made it as far as laying the keystone. Luckily the one with our sensor equipment -- tens of meters of cable with 31 temperature sensors attached -- survived. The sensor wire was installed inside the igloo walls, with some additional sensors inside and outside. The last meters of the cable were left free, with a sensor at the end ready to be tucked inside a sleeping bag for the night -- what wouldn't a research scientist do for the sake of science?

One (1) well-deserved after-hike beer
The sensors measured temperatures at different parts of the igloo over the night and the following day, and the changes in temperature can be seen in the graphs below. It turned out to be surprisingly warm in an igloo with a proper sleeping bag: the inside temperature of the igloo remained close to or above zero throughout the night, and temperatures of up to 30 degrees Celsius were measured inside the sleeping bag; so warm that one had to open the sleeping bag to cool down.

Below you will find some screenshots from our Facebook and web-based user interface:


Facebook screenshot. Even the discussion bot got excited about the high temperatures in the igloo

Sleeping bag temperatures
Summary of the sensor readings inside and outside the igloo.
Wall readings are averages over all the sensors inside the wall.


Key Technologies

Some of the technologies involved in our demo are well established, others are still in the research stage. We believe that networking in general benefits from all of these technologies, particularly as the world moves towards connecting everything -- not just computers but all other kinds of devices and objects.


Sensor and DTN router

  1. Cellular networks for data transmission. Setting up the smart igloos would not have been possible without some kind of communication network. In the real world, these networks cannot be dedicated to specific applications, because building dedicated networks for new applications would be prohibitively expensive. The extensive coverage of cellular networks provides convenient and reasonably priced networking even in fairly remote locations.
  2. Delay- and disruption-tolerant networking (DTN). However, not all places will have always-on network connectivity, and to save power in battery-operated devices, not all devices can be connected at all times even if there were connectivity. We need a communications model that allows intermittent connectivity. DTN, as the name implies, is very robust against these kinds of conditions and can utilize any available connectivity at any time.
  3. Social web of things. Our team has been experimenting with different types of user interfaces for the "Internet of Objects". In our experience, a natural way to think about these user interfaces is to have relationships ("friendship") with the objects that we care about, and to interact with these objects in similar ways as we already interact with our human friends in social networks.
  4. Power-efficient network architectures. While we can expect a constant improvement in electronics over time, the biggest energy savings come from rethinking application models and network architectures. Devices should be able to behave in a manner that is natural for them. For instance, a sensor that has very little power should not be required to stay up at all times just in case someone happens to ask something from it. It would be far better to let the sensor report changes at an interval that is natural to it, and have another entity store the results.
  5. Mature, widely available, and low-cost technology. We like to use technology that is mature enough to be widely available from multiple sources. The cheapest, most commonly available wire, for instance. Or employing the most economical cellular modem from a range of technologies (GSM, 3G, LTE) and different vendors. Or using open source software to interact with sensors that can be acquired commercially and in quantity.
  6. IP. With the exception of some legacy sensors, all our designs are based on IP and IPv6. This is the only logical choice for systems that can connect everywhere. For us this made it possible to place the intelligence and server components back in our homes and offices, and only bring the minimal amount of components to the mountain.

Social web-of-things interface to the igloo

A 1-wire sensor

    One single, simple wire connects the sensors

    Developing Sensors for Snow

    We developed two kinds of sensor equipment: snow sticks and a sensor wire. The snow sticks were used to measure the snow pack and the wire could be used to measure any structure made out of snow, in our case the igloo.

    A snow stick, measuring snow depth
    The sticks use two types of measurements. First, they measure incoming light at different points in the stick, making it possible to track what parts of the stick are under snow. Second, they measure temperature at different points, making it possible to track temperatures in different parts of the snow pack. Understanding the temperature history and temperature gradients within the snow is important for predicting avalanches, for instance.
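To make the light-based measurement concrete, one simple way of turning the readings into a snow depth estimate is to find the highest sensor on the stick that no longer sees daylight. The sketch below illustrates the idea; the threshold and example readings are made up:

```python
# Minimal sketch: estimate snow depth from a snow stick's light sensors.
# Sensors are given bottom-up as (height_cm, light_reading) pairs; a
# reading below the threshold is taken to mean the sensor is buried.
# Threshold and example data are made up.
DARK_THRESHOLD = 5.0

def snow_depth(readings: list[tuple[float, float]]) -> float:
    """Return the height of the highest buried sensor, as an estimate of
    the snow depth in the same unit as the sensor heights."""
    buried = [height for height, light in readings if light < DARK_THRESHOLD]
    return max(buried, default=0.0)

if __name__ == "__main__":
    # Sensors every 10 cm; the lowest three are under snow in this example.
    stick = [(10, 0.2), (20, 0.4), (30, 1.1), (40, 42.0), (50, 55.0)]
    print(snow_depth(stick), "cm of snow")
```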

The sensor wire was simply a long cable with sensors attached to it: 20 meters of cable with sensors and another 20 meters to reach the DTN router. A group of three sensors was placed every couple of meters along the wire so that each group could measure temperatures in the middle of the wall as well as towards the inner and outer edges of the wall. The cable was laid out in a spiral fashion inside the igloo wall, rising from the ground up to the apex of the igloo so that readings from different sides and heights could be obtained.


    Wall sensor layout

    Real Challenge Is Cost

    Obviously we could develop these sensors with arbitrary sophistication. However, we also wanted to show that they can be constructed from the cheapest possible components and with minimal expertise.

The cabling used to connect the tubes to the DTN router was standard Cat 5 cable, and any two-wire cable could be used for the sensor wire. Both types of cables are available from any decent hardware store.

    For sensors, we used 1-wire sensors. The sensor wire used temperature sensors that cost just a couple of Euros each and are just a few millimeters across. A large number of 1-wire sensors can share the same cable and still be individually identified with their unique 64-bit identifiers.

    The snow-proof sensors were rigorously tested.
    In a coffee cup filled with snow.
Manufacturing the cable was easy: just cut the cable and connect the two ends and the pins of the sensor together.

The only complication was making the wire waterproof. The sensors and the wire at the cable ends were put inside a heat-shrink sleeve with just the sensor head sticking out. The terminal blocks were treated with a silicone-like sealing compound, leaving just the sensor head exposed to the snow.

    The snow sticks needed a slightly different type of sensor to be able to measure incoming light. We used ready-made 1-wire sensor devices from Hobby Boards, measuring both light and temperature. Still, they needed to be attached to the stick somehow. We bought standard 1.5 inch transparent plastic tube from the hardware store for a couple of Euros, stripped the sensor devices to their bare circuit boards, added some lubricant, and pushed the sensors to the right place in the tube along with the cables connecting them.

    If electronics do not fit, add some lubricant

Silicone sealing was added to both ends to create a completely sealed structure. The stick was attached to a metal rod with the help of duct tape. This helped keep the stick straight and enabled the stick to be planted in the ground. The most expensive part of the snow sticks was the ready-made sensors (25€), but they could also have been manufactured from components that cost just a couple of Euros.

The 1-wire sensors can be easily monitored from an attached computer via a USB/1-wire converter and the One-Wire File System (OWFS) open source software package. The DTN router sleeps for 30 minutes, wakes up to poll the data from all the sensors, sends it forward to the Internet using the DTN bundle protocol, and then goes back to sleep to conserve batteries. From the DTN back-end server, our application server retrieves the data and shows it on a set of web pages, sends urgent alarms via SMS and instant messages, and posts information on Facebook. The server also monitors Facebook in order to respond to any questions or discussions; a simple discussion bot was used to accomplish this.
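To give an idea of what the polling pass looks like, OWFS exposes each 1-wire device as a directory named after its 64-bit identifier, with the temperature readable as an ordinary file. Here is a minimal sketch of one pass, assuming OWFS is mounted at /mnt/1wire and DS18B20-style temperature sensors; the actual router software naturally does more:

```python
# Minimal sketch of one polling pass over OWFS-mounted 1-wire sensors.
# Assumes OWFS is mounted at /mnt/1wire and the temperature sensors are
# DS18B20-style devices, whose directories are named "28.<64-bit id>".
import glob
import os

OWFS_ROOT = "/mnt/1wire"   # assumed mount point

def poll_temperatures() -> dict[str, float]:
    """Return a mapping from sensor identifier to temperature in Celsius."""
    readings = {}
    for sensor_dir in glob.glob(os.path.join(OWFS_ROOT, "28.*")):
        sensor_id = os.path.basename(sensor_dir)
        with open(os.path.join(sensor_dir, "temperature")) as f:
            readings[sensor_id] = float(f.read().strip())
    return readings

if __name__ == "__main__":
    for sensor_id, temp in sorted(poll_temperatures().items()):
        print(f"{sensor_id}: {temp:.1f} C")
```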

    Switzerland mountain simulation, testing sensors on the 7th floor balcony


    Discussion bot in action

    Detailed sensor value readings from the discussion bot.
    The bot has access to the sensor readings database.

    The bot has limited vocabulary, particularly in Finnish

    Further Reading

Our igloo networking demo was jointly developed with Stephen Farrell, Kerry Hartnett, and Elwyn Davies, all researchers from Trinity College Dublin. The original description of the demo is in our paper, and the conference slides can be found here. Our team built only one igloo and instrumented some of its surroundings; you can read more about the technologies prototyped in the rest of the conference here. You can also find more information about the social web of things here.

By the way, if this article got you excited about building igloos, be sure to check out the instructions for building one! It is fun. Finally, if you always thought that snow is just frozen water, you'd be amazed how detailed and complex the physics of snow is. We recommend The Avalanche Handbook for an in-depth look at the weather, physics, and safety issues around snow.

    Ari Keränen
    Jari Arkko
    Ericsson Research, Finland

    Researcher with a view

    Fondue from a snow table


    Igloos with a view

    Assembling the igloo with wires in place

    Photos (c) 2012 by Ari Keränen and Jari Arkko, video (c) 2012 by Bernhard Distl

Thursday, March 15, 2012

    Keep Your ABNF Clean

    For the last six years, I've been a part of the IESG, the IETF's steering group. I've loved every day in this job, particularly because it gives you such a grand tour of the different changes and challenges the Internet is going through. And it has been a wonderful opportunity to work with many very smart people around the world.

But new challenges are good for anyone now and then, and last year I decided that I would step down from this role and do something else instead. You will still see me around in the IETF; among other things, I'll keep working on technical topics and will be joining the Internet Architecture Board (IAB). You've probably seen me work on various gadgets and Internet of Things technology, and I will continue that as well. The next thing we'll be organizing in that area is a workshop on Smart Object Security, to be held in Paris on March 24th.

Today is my last IESG telechat. A telechat is a conference call -- or occasionally a Second Life meeting -- where we go through the specifications coming out of the IETF and try to ensure that they are correct. We have a telechat every two weeks, and there are usually ten to twenty documents to read. The fifteen steering group members try to check everything from ensuring that the right process was followed to small technical details. Most comments that we make are that -- just comments. Occasionally, if we detect a problem that would prevent correct implementation or interoperability, we file what is called a "Discuss". This is a blocking comment that needs to be resolved somehow before the document can become an RFC.

By the way, while these end-of-the-RFC-process reviews are quite visible in the IETF, they are still a small part of our job. They take roughly a day, or at most two, of my time in each two-week cycle. And in many ways, the issues that we discover are details. Some of the other things that we do are potentially more significant, like starting up new working groups to address a problem in the Internet.

In any case, I wanted to put some extra effort into how I formulate my comments and Discusses for my last telechat. One of the things that I always want to be careful about is that any formal language in a document (ASN.1, XML, BNF) is correctly formulated and unambiguous. This ensures that people can build implementations by feeding the language to a compiler and getting correct output. One of the formats IETF documents often use is ABNF, or Augmented Backus-Naur Form, specified in RFC 5234. Unfortunately, not all specifications we receive use the formal language correctly. It may have been carefully checked at some point, but sometimes a late change causes an error to creep in.

Normally, comments and Discusses are just e-mails, but today I chose to use video to complain about ABNF problems. In fact, I've had to use this Discuss three times already for today's telechat:



Some of our documents had ABNF syntax errors, failed to tell the reader which other RFCs contained the rest of the productions, mixed RFC 2616 and RFC 5234 ABNF syntax, and so on. The errors were in all cases minor, as the IETF document authors are usually established experts in their field, and careful. However, even a small error can be annoying, particularly with a formal definition of syntax, because you cannot be certain what the actual intent was.

    Please check your ABNF, XML, and ASN.1 carefully before you submit a document for publication!

Monday, January 16, 2012

    Rest in Peace, Toaster

    Rest in peace, my toaster. You have served well, but the endless toasting eventually burned your circuits.


    Luckily your soul was in the cloud anyway and even your senses were outside your body. So it will be easy to replace your physical incarnation. In fact, I've already thrown your old body into the trashcan, and acquired a new, shinier toaster. And that toaster has four slots for bread, not two. But do not worry, we will always remember... err... now I have forgotten what I was about to say.


Anyway, the new toaster is shiny and powerful. And it will soon get a soul 2.0, as I plan to add compassion and politeness to the AI chat engine. As you will know if you ever talked to the toaster on Facebook, soul 1.0 was somewhat challenged in these areas. I've also realized that the toaster has been silent in recent times because I've apparently blown the power supply of one of the 1-wire hubs. I need to replace the faulty power supply and update the software to complain more loudly if some of the 1-wire devices are not responding.


    In the meantime, I think this is a good occasion to remember the best times of the original Facebook toaster.


    On April 9, 2011 the current version of the toaster code came alive on Facebook:




    The basic operation of the toaster is to let its friends know that it has been used:



But the toaster also integrates a chat bot that chats with its friends. In its 4/2011 issue, Mikro-PC became the first magazine in the world to interview a toaster. Later in the summer, the Wall Street Journal mentioned the toaster in their front-page article about new communications markets that mobile operators are after.



    The toaster has quite an attitude:



    But it certainly knows what things are important in life:




    If you did not know, the toaster's owner is an avid skier (read my other blog for stories about that). The toaster can also speak fluent Finnish:




Photo credits (c) 2011 by Jari Arkko, screenshots from Facebook. The magazine article by Janne Tervola appeared in its full form in MikroPC magazine.