September 2019 - Identity Management | Authentication | Domains

Identity in Digital Space - doteditorial

Klaus Landefeld from the eco Association looks at online identification, authentication & sovereignty, and bridging the digital and analog worlds.


The Domain Name System – How to find and name things online

The Domain Name System (DNS) was created to overcome a facet of human fallibility – our inability to remember long strings of numbers. This is a problem if we want to find anything online, because everything – from your home router through to the website you want to access – is, at its most fundamental, represented by an address consisting of a long series of numbers. The DNS translates the text of a – humanly memorable – domain name into the digits of an IP address. This saves you from having to search through an exorbitantly long list of numbers to find the IP address of google.com, to take just one example. Google.com is definitely easier.
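To make this concrete, here is a minimal sketch – using only the Python standard library – of what every browser and app does behind the scenes: asking the DNS resolver to turn a memorable name into the numeric addresses the network actually uses. The domain is simply the example from the text; the addresses returned will vary.

```python
# Minimal sketch: translating a human-memorable domain name into the
# numeric IP addresses the network uses, via the system's DNS resolver.
import socket

def resolve(name: str) -> list[str]:
    # getaddrinfo consults the configured DNS resolver and returns both
    # IPv4 and IPv6 addresses, if available.
    results = socket.getaddrinfo(name, None)
    return sorted({info[4][0] for info in results})

print(resolve("google.com"))  # example output: a handful of IPv4/IPv6 addresses
```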

Domain names have evolved significantly since the inception of the DNS. It is not without good reason that a huge number of new top-level domains have become available in the last few years – from geographical (.nyc) to brand (.bmw), to generic (.shop), and on to the internationalized domain names (IDNs) which enable a range of different alphabets to be used. All of these offer companies the possibility of identifying their online resources to users according to their location, their products, and their name – a compact, human-friendly way of identifying things online (for insight into how cities can make use of geographical TLDs as part of their digitalization strategy, see Katrin Ohlmer’s article “.berlin, .tokyo or .vegas: How GeoTLDs Help Form Distinctive Digital Identities”, and on the topic of brand identity and “computational trust”, Tobias Herkula from Cyren looks at “Protecting Brand Identity in Email”).

But identifying people – the users – online is also becoming more and more important. Of course, we have always had to identify people: if you want to access a computer system or a network, you need to identify yourself, and this has traditionally been taken care of by the “triple A” of network engineering – Authentication, Authorization, and Accounting. Customers’ digital identities are also increasingly difficult to manage sustainably with older existing solutions (Volker Zinser from Ubisecure argues the case for implementing a customer identity and access management (CIAM) system that gives individual customers a choice in how they verify their identity online, e.g. by using one of the single sign-on services offered by a number of big platforms). Nowadays, the number of services we each use has mushroomed. This means that we each have more and more services to identify ourselves to – and identifying ourselves to so many different systems has become a major problem.

The password dilemma

And here we are, back at human fallibility again: One of the biggest problems today is that most people still use passwords. 

dotmagazine Special: In this issue, we publish several video interviews shot at the CSA Summit in April 2019. In “Making Inboxes Smart with schema.org”, Magnus Eén and Dr. Conny Junghans explain how schema.org can make emails machine readable for developing the intelligent inbox. This editorial already provides a glimpse into a second interview, held with Cyren's Tobias Herkula. John Levine highlights the potential of internationalized email addresses to communicate with billions in a further video interview. In his paper “Sorting the Wheat from the Chaff - The Attention Value of Emails in an Ever-Increasing Stream of Messages,” Willem Vogt looks at the impact the eIDAS Regulation and GDPR have on the reliability of emails. 

We are basically at the end of the life cycle of passwords. They have become a liability. If you want to be even half-way secure, your passwords become very hard or even impossible to remember – given that you are well advised to use a different, complex password for every single service. So, some users resort to writing them down, and that's clearly anything but secure. An alternative would be to save them in a keychain or a similar application – which is then hardware dependent, and if your device is lost (or you don't have a backup), then all the credentials for these systems are gone as well.

To solve the password dilemma, the Internet community is in the process of building systems that allow users to authenticate themselves to many individual services through one single identification, a single sign-on ID. Basically, users store their identity, with all identifying information, with a home provider, and create a username that can be used with every service of which they are a customer. The user logs on to a system or service using this username, and the system inquires with the provider to check whether the ID is valid and whether the user is authorized to access the system.
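To illustrate the idea – and only the idea, as this is not the actual ID4me or OpenID Connect protocol, and the provider URL and endpoint are hypothetical – a relying service might check a presented single sign-on ID against the user's home provider roughly like this:

```python
# Sketch of the single sign-on principle: the service stores no password,
# it only asks the user's home provider whether the presented token is
# valid and whether this user may access this particular service.
# The provider URL and the /validate endpoint are hypothetical.
# Requires the third-party package requests.
import requests

HOME_PROVIDER = "https://id.example-provider.net"   # assumed home provider

def is_authorized(username: str, token: str, service: str) -> bool:
    resp = requests.post(
        f"{HOME_PROVIDER}/validate",
        json={"user": username, "token": token, "service": service},
        timeout=5,
    )
    return resp.status_code == 200 and resp.json().get("authorized", False)
```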

Fortunately, there are now some initiatives, supported by a large number of companies, to build systems which are distributed – without central data storage – and in which you can select for yourself who your home provider is and where your identity is stored. (One example of this is the European initiative ID4me – as described in the article “ID4me – the Identity Layer the Internet Founders Forgot to Build” by Neal McPherson from 1&1 IONOS.)

This has a two-fold advantage. In the first place, as a user you don't need to have passwords with all of the systems. This simplifies the process of identification and makes it more secure. In the second place, none of the services you use need to store your data – which also has the added benefit of cutting down on some of the data protection risks for the service provider.

Two-way authentication – The secure handshake

Coming back to the triple A mentioned above: if you operate a service or a system, you need to have the identification and authentication data of someone who is trying to access your system at hand, in order to derive what that person is allowed to do in the authorization process. But that identification needs to be both secure and reliable, and the authentication ideally needs to work in both directions. To be very clear: A secure process requires not only that a user can be identified by the system, but also that the user can verify that the system they are talking to is really the one they want to reach.

Take so-called “Man in the Middle” attacks as an example. These attacks work by diverting your communication with an online service (whichever this may be – your bank, or your social media account, for example) over a proxy server. When you type in your credentials, the owner of the proxy server can intercept this information and use it for their own nefarious purposes. With two-way authentication in place – for example, by using cryptographic authentication certificates, or implementing the DANE protocol based on DNSSEC (see Patrick Koetter’s article “Adding Trust & Security to Internet Interactions with DNSSEC”) – your browser will recognize that there is something wrong, and will not allow the connection to be made. Banking apps typically already do two-way authentication of some form, and for payments it is obligatory in the EU as of 14 September 2019, but other digital services, and company networks, need to implement this as well to protect themselves and their customers and users.
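As a rough illustration of what a DANE check involves – a sketch only, assuming the third-party dnspython package, handling just the simple TLSA case of a SHA-256 hash over the full certificate (selector 0, matching type 1), and leaving out the DNSSEC validation that a real client must also perform – a client can compare the certificate a server presents with the TLSA record published for it in the DNS:

```python
# Sketch of the DANE idea: does the certificate the server actually presents
# match the TLSA record its operator has published in the DNS?
import hashlib
import ssl
import dns.resolver  # third-party package dnspython

def dane_matches(host: str, port: int = 443) -> bool:
    # Hash of the certificate presented during the TLS handshake ...
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    presented = hashlib.sha256(der).digest()

    # ... compared against the association data published under _port._tcp.host.
    answers = dns.resolver.resolve(f"_{port}._tcp.{host}", "TLSA")
    return any(
        rr.selector == 0 and rr.mtype == 1 and rr.cert == presented
        for rr in answers
    )
```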

The problem is that, as users, we so easily give our credentials to systems. The simple fact that a log-in prompt presents itself leads us to log in. It’s almost Pavlovian. This presents a significant challenge at the moment from a cyber security perspective. The tendency is exploited in phishing attacks: an attacker simply sends the user to the wrong website (and today, the fake websites look quite authentic), with a legitimate-looking log-in page and often a domain name resembling the original site. The user inputs their credentials, and this identifying information can then be used by the criminals to access the site in question. (One important tool in combatting the abuse of company identities in phishing attacks is to implement DMARC, as Sven Krohlas from BFK explains in his article “DMARC - Protecting Your Infrastructure and Users from Phishing Attacks”.)
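A DMARC policy is simply a TXT record published under the _dmarc label of a domain, telling receiving mail servers how to treat messages that fail authentication. A minimal sketch of looking one up – assuming the third-party dnspython package and using an example domain – might look like this:

```python
# Minimal sketch: retrieving the DMARC policy a domain publishes in the DNS.
import dns.resolver  # third-party package dnspython

def dmarc_policy(domain: str):
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None  # no DMARC record published
    for rr in answers:
        txt = b"".join(rr.strings).decode()
        if txt.startswith("v=DMARC1"):
            return txt  # e.g. "v=DMARC1; p=reject; rua=mailto:reports@example.com"
    return None

print(dmarc_policy("example.com"))
```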

This is exactly where two-way authentication is essential. Users need to be certain that the site they are talking to is actually the site they want to access. The technology has been available for quite some time, but validating the authenticity of the other side was typically disregarded. It has only been made a requirement in very recent versions of popular web browsers, and is still not widely used in other applications. In order to really step up security, we need to enable this two-way ID.

Another approach to ensuring the integrity of data and documents and to verifying the identities of users more securely is to implement distributed-ledger technology (DLT). The user-centric model conceptualized in DLT-based self-sovereign identity (SSI) enables the identity holder (human or non-human) to retain full control of the identity and its associated data (Sebastian Weidenbach from estatus AG looks at whether SSI can deliver what it promises in terms of security and privacy for identity and access management (IAM) processes). Blockchain services such as Blockchain Ensured Certificates have been developed that represent a digital identity and make data sharing faster, more secure, and tamper-resistant. Companies need to carefully consider whether using blockchain technology actually offers them a concrete advantage (Sebastian Beyer from CERTIVATION GmbH describes a number of use cases in his article “How Could Digital Identities and Blockchain Make our Lives Easier?”).

Access & Authorization – Decision-making in an IoT environment

So far, we’ve been looking more at virtual identities, but the challenge of security and authentication also applies in the analog world. We see this very clearly when we look at the human-digital interface we find in many IoT scenarios. Take the smart home or the connected car as examples. Here, we need to ensure that the person attempting to unlock the door or start the car really is who they purport to be – and we also need to know what they are allowed to do in the specific environment. We also need to know who has the decision-making power to authorize access and roles in an IoT environment.

A lot of that has to do with the third A, which is Accounting, and the preparedness to pay for certain services or certain types of data. One of the discussions surrounding electronic identities is how far individual sovereignty goes.

Interesting questions arise in the area of the connected car, for example. Would you allow someone who connects to your car remotely to have certain functions or not? Will the manufacturer or the police have the authorization to connect to your car to stop it or switch it off? And who is operating the system? Will the general authority to allow certain IDs to do things with your car reside with you, or will it be inbuilt in such a way that it resides with the manufacturer? This is a highly complex issue and still under discussion.

Smart locks and physical access

Smart locks providing physical access bring the concept of digital identities right to the front door. As part of a smart home or intelligent building control, these enable access for residents or personnel respectively. On top of that, technicians, janitors, and other (permanently or temporarily) authorized individuals will also be able to enter a given space – with identification, authorization, and role assignment generally taken care of in the cloud (for more on the smart lock, see the article “Smart Lock Market Growth Boosted By the Rising Popularity of Smartphones”, by Sanna Räsänen from Herman IT).

One example of a specific use case for access technology is the data center. Data center access needs to be carefully controlled and monitored, the people accessing the data center need to be unambiguously identified, and – in the case of a colocation or shared facility – authorized for access to specific racks or servers only. The legal frameworks that govern such access are becoming increasingly strict (as Jan Sanders points out in his article “Future-Proof Physical Security of Data Centers for Progressively Growing (Legal) Requirements”).

In many IoT contexts, the additional problem arises that there is an unlimited number of hitherto unknown, independent devices trying to talk to each other (as I point out in my interview, “IPv6 – Making the Internet End-to-End Addressable Again”, even this presents a considerable challenge – especially given the snail’s pace at which IPv6 adoption is advancing), and they also need to authenticate to each other. It needs to be very clear which device can do what – deliver data, pull data, push data, run processing jobs, and so on: here we’re talking about authorization for devices. Typically, this would need to be fully automated, and to really make this work, we need elaborate, highly flexible triple A systems. Currently, most implementations tend only to work according to the principle that a given IoT device uploads sensor or event data to a cloud service, and the cloud service then takes all the decisions on what happens with the ensuing data.
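As a very small sketch of the “which device can do what” part – with an entirely hypothetical policy table and device IDs, standing in for what would really be an automated, certificate-backed triple A system – per-device authorization might be expressed like this:

```python
# Sketch of per-device authorization: which authenticated device identity
# is allowed to push data, pull data, or run processing jobs.
# The policy table and device IDs are hypothetical.
from enum import Flag, auto

class Permission(Flag):
    PUSH_DATA = auto()   # device may upload sensor/event data
    PULL_DATA = auto()   # device may read data from other devices
    PROCESS = auto()     # device may run processing jobs

POLICY = {
    "sensor-0042": Permission.PUSH_DATA,
    "gateway-01": Permission.PUSH_DATA | Permission.PULL_DATA | Permission.PROCESS,
}

def authorize(device_id: str, wanted: Permission) -> bool:
    # Unknown devices get no rights at all.
    granted = POLICY.get(device_id, Permission(0))
    return (granted & wanted) == wanted

print(authorize("sensor-0042", Permission.PUSH_DATA))  # True
print(authorize("sensor-0042", Permission.PULL_DATA))  # False: not permitted
```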

Car2Car Communication – Developing one-to-many authentication

With connected cars, there are discussions about how to communicate information directly, in very close-proximity communication. If there is an accident, or the car in front of you is braking, this should be communicated directly, so that your brakes can be engaged immediately as well. But due to time constraints, this cannot be processed through a centralized cloud service; it needs to run in a peer-to-peer environment, from device to device. Again, you need authenticity and authorization – and standardization to ensure interoperability between car models and different vendors – in order for the benefits of car-to-car communication to actually come to fruition. If an emergency communication were to run through a cloud service, it would simply take too long – we are talking about milliseconds to engage the brakes. For this, you need authenticated group communication functioning in a peer-to-peer environment. The protocols for all of this are in the making right now – stay tuned.
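The building block behind such authenticated peer-to-peer messages is the digital signature: each vehicle signs what it broadcasts, and receivers verify it locally, with no cloud round trip. The sketch below (assuming the third-party cryptography package) only illustrates that idea; real car-to-car stacks rely on standardized protocols and certificate infrastructures rather than ad-hoc keys like these.

```python
# Sketch: signing and verifying a safety message between two vehicles.
# Keys are generated ad hoc here; a real system distributes and certifies
# public keys via a standardized credential infrastructure.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

sender_key = Ed25519PrivateKey.generate()   # held by the sending car
public_key = sender_key.public_key()        # known to nearby receivers

message = b"EMERGENCY_BRAKE lat=50.1109 lon=8.6821"  # illustrative payload
signature = sender_key.sign(message)

# The receiving car verifies locally, well within the millisecond budget,
# before acting on the message.
try:
    public_key.verify(signature, message)
    print("authentic message - engage brakes")
except InvalidSignature:
    print("discard: message not authentic")
```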

 

Klaus Landefeld is Vice-Chair of the Board and Director of Infrastructure & Networks at eco – Association of the Internet Industry.

Since 2013, he has served as Chief Executive Officer of nGENn GmbH, a consultancy for broadband Internet access providers in the field of FTTx, xDSL and BWA. He also serves as network safety and security officer as well as data protection officer for several German ISPs.

Before establishing nGENn, Mr. Landefeld held a number of other management positions, including CEO at Mega Access and CTO at Tiscali and World Online. He was also the CEO and founder of Nacamar, one of the first privately-held Internet providers in Germany.

Mr. Landefeld is a member of a number of high-profile committees, including the Supervisory Board of DE-CIX Group AG, and the ATRT committee of the Bundesnetzagentur (BNetzA - German Federal Network Agency).