The Intelligence in the Network - Transcript
Artificial Intelligence is moving from science fiction to science fact with remarkable speed. But what is the nature of this intelligence, and what are the risks? dotmagazine met up with Internet pioneer and eco Board Member Klaus Landefeld to find out how far AI has come.
Transcript
DOTMAGAZINE: The applications that have been coming onto the market recently offer some very promising solutions from the fields of machine learning and natural language processing. Take the voice-activated home assistants that are appearing in households everywhere. But how intelligent are they really?
KLAUS LANDEFELD: Nowadays, everything is smart – if you look at the advertisements. But we have to draw a very clear line: the things in your home that take action for you, the devices that are part of your car – those are not really intelligent. They are just interfaces to the automated world, to devices which are online nowadays. The intelligence typically sits somewhere back in the network. These are systems which are trying to help you, to learn about you, to understand you. This has a lot to do with deep learning: the systems collect a lot of data about you and use it to try to help you. This is where we are in AI today. Is that already intelligence? That's a very good question. There's always a lot of discussion about what intelligence is – we're even discussing that in biology every day now. So, I wouldn't say that we're already there, but the systems are at a stage where they can really help you, mostly even understand you and what you want from them, and get the relevant data or action for you.
DOT: Amid the hype around AI, there are voices of dissent, warning of the dangers of developing truly intelligent machines. In what areas is there a need for caution?
LANDEFELD: Right, yeah – Elon Musk was out there a couple of weeks ago calling for a ban on AI, at least in automated weapons systems and things like that. And that might be one of the dangers today: that we build it into weapons, that we build it into systems which can actually act against a human. That is a big problem. I mean, it hasn't happened so far – to take the scenario that typically comes up when you ask whether it is a threat – that AI systems or autonomous systems with artificial intelligence take over the world and do things.
But I really see a big problem when the human element, the ethical element, is missing from a decision taken by, say, the hypothetical police robot. And that is on the horizon. There are already systems under development where border zones or demilitarized zones are supposed to be patrolled by machines. If those machines have the capability to actually shoot people, kill people, I think that goes way too far. The genie is out of the bottle: these systems are being developed. We can try to address this and not allow it, but even if we did that at the U.N. level, it's probably not going to prevent these systems from being developed. And that's the risk we face right now. How can we control it? How do we keep these systems out of every corner?
DOT: But even taking weaponized machines out of the equation, what role does ethics play in developing AI?
LANDEFELD: That is already a big problem today. In the U.S. there are systems operating in the pre-crime area right now that profile people based on data and invite them to the police station – even though there hasn't been any crime – just to say "oh, you're in a risk group, don't do anything stupid", something like that. That is really a problem.
Judges also use systems that help them determine a sentence, and these are based on algorithms – on the data of the person they're actually judging. How likely are they to reoffend? How likely is it they actually did it? But these algorithms – how neutral are they? The algorithm itself is not very well known; it's proprietary to the software company building it. And how do factors enter the algorithm which you wouldn't be able to use as a judge – skin color, ethnic background, religion, things like that? They're probably part of the algorithm, and with some statistical justification, because if it's an expert system drawing on experience, those factors obviously correlate with the outcomes. But is that the right thing? A judge wouldn't be allowed to use them as an argument when passing a sentence. So how can the algorithm helping the judge use factors that should not be allowed? From an ethical viewpoint, it's very difficult. Interestingly enough, such systems still help judges arrive at better sentences – there are already statistics on this – because the algorithm is less biased than the actual person. Even though you're not allowed to use these factors, as a person you're still drawing on your experiences, and you'll use them even if you're trying not to; the algorithm uses them less than you do. But that is a problem. Where are the ethical lines we draw, where we can say: this is not acceptable? Should the algorithm at least be open? Should it be clear how it works?
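To make the dilemma concrete, here is a minimal, entirely hypothetical Python sketch of such a risk-scoring system, built on synthetic data. It is not based on any real sentencing tool – those are proprietary, as Landefeld notes – and every feature name, weight, and correlation is invented. It illustrates why simply deleting a protected attribute from the data does not settle the ethical question: correlated proxy features can carry the same signal.

```python
# Hypothetical sketch of a toy "recidivism risk score". Nothing here
# reflects any real proprietary sentencing tool; all features, weights,
# and correlations are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic defendant data. 'group' stands in for a protected attribute
# (ethnicity, religion, ...) that a judge could never cite openly.
prior_offences = rng.poisson(1.5, n)
age = rng.integers(18, 70, n)
group = rng.integers(0, 2, n)                       # protected attribute
neighbourhood = group * 0.7 + rng.normal(0, 1, n)   # correlated proxy

# Invented "ground truth": the *recorded* outcome depends not only on
# priors and age but also on the protected attribute -- standing in for
# historically biased data (e.g., uneven policing).
logit = 0.5 * prior_offences - 0.03 * (age - 18) + 0.8 * group - 1.0
reoffended = rng.random(n) < 1 / (1 + np.exp(-logit))

# Model A: trained with the protected attribute included.
X_with = np.column_stack([prior_offences, age, group, neighbourhood])
# Model B: protected attribute removed -- but the proxy remains.
X_without = np.column_stack([prior_offences, age, neighbourhood])

model_a = LogisticRegression(max_iter=1000).fit(X_with, reoffended)
model_b = LogisticRegression(max_iter=1000).fit(X_without, reoffended)

# With 'group' dropped, weight tends to shift onto the correlated
# 'neighbourhood' proxy -- "just delete the column" is not enough.
print("weights with protected attribute:   ", model_a.coef_.round(2))
print("weights without protected attribute:", model_b.coef_.round(2))
```

Even in this toy setting, the forbidden factor re-enters through its proxy; in a closed, proprietary system, that effect is exactly what no judge or defendant can audit.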
And we have that in a lot of systems which are currently being built. We have the same problem in automated driving, for example. There's the classic question we ask there: if an accident is unavoidable, what does the car do? Does it steer into the people on the sidewalk, or into the cross traffic? How is that resolved? These are ethical questions the algorithm can't really decide on its own. We need to give the algorithm guidelines on what is acceptable, and we have to find those guidelines as a society. A human driver would typically act out of self-preservation as the first reaction; the algorithm can do differently, but what is the ethical thing to do? We need to find answers to that, otherwise the systems will arrive before we have them – and they are already on the horizon; we're talking three years, maybe four. So we need to find answers as a society on how to do this. And we're not there yet.
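The point about guidelines can also be sketched in code. The example below is hypothetical throughout – the rules, names, and scenario are invented – but it shows the shape of the idea: the unavoidable-crash choice is encoded as an explicit, auditable policy handed to the vehicle, rather than behavior a learned model improvises. Which ordering of rules is right is exactly the societal question Landefeld raises.

```python
# Hypothetical sketch: an unavoidable-crash decision expressed as an
# explicit, auditable rule ordering rather than opaque model behavior.
# All rules, names, and options are invented for illustration.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    pedestrians_at_risk: int
    occupants_at_risk: int
    breaks_traffic_law: bool

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Pick a maneuver under an explicit rule ordering.

    The ordering itself -- pedestrians before occupants, lawfulness as
    a tie-breaker -- is exactly the kind of decision society, not the
    algorithm, has to make.
    """
    return min(
        options,
        key=lambda m: (m.pedestrians_at_risk,
                       m.occupants_at_risk,
                       m.breaks_traffic_law),
    )

options = [
    Maneuver("swerve onto sidewalk", pedestrians_at_risk=2,
             occupants_at_risk=0, breaks_traffic_law=True),
    Maneuver("brake into cross traffic", pedestrians_at_risk=0,
             occupants_at_risk=1, breaks_traffic_law=False),
]
print(choose_maneuver(options).name)  # -> "brake into cross traffic"
```

The design point is that the policy here is data, not model weights: it can be published, debated, and changed by regulation – which is one possible answer to the "should it at least be open?" question above.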
Klaus Landefeld is Director of Infrastructure & Networks with eco – Association of the Internet Industry. Since 2004, he has served as the Chief Executive Officer of Mega Access, a company focused on wireless broadband Internet access. He also works with nGENn GmbH as an independent management consultant. Before establishing Mega Access, Mr. Landefeld held a number of other management positions, including Chief Technology Officer at Tiscali and World Online. He was also the founder of Nacamar, one of the first privately-held Internet providers in Germany. Mr. Landefeld is a member of a number of high-profile committees, including the Supervisory Board of DE-CIX Management GmbH, and the ATRT committee of the Bundesnetzagentur (Federal Network Agency).