November 2021 - Artificial Intelligence | Data Sovereignty

We Need Large AI Models with European Values and Standards

Jörg Bienert from the German AI Association on helping Europe gain digital sovereignty in developing and maintaining AI models.


Artificial intelligence is already noticeable for many people in their everyday digital lives – for example, when they use the autofill function of a website, interact with a chatbot, or choose a video on platforms such as YouTube and Netflix based on intelligent recommendations. However, there is still a lack of large-scale AI systems in Europe, says Jörg Bienert, president and founder of the German AI Association. In an interview with the eco Association, he talks about intelligent language systems such as GPT-3, large-scale AI models, and the LEAM initiative, which calls for joint European projects.

dotmagazine: Mr. Bienert, what potential do machine learning systems have for the Internet industry in Europe?

Jörg Bienert: We see applications based on artificial intelligence in all industries and at all levels of the value chain – usually to optimize processes, but increasingly also as the basis for completely new products and services. A good example of direct benefit to people is the use of data in developing medicines and new treatments. The first large-scale models that can analyze the human proteome and draw health insights from the data are already available. In addition, machine learning offers great potential at the technical level, for example in optimizing traffic flows or detecting cybersecurity attacks. Many companies already use chatbots for various online services, and there are also applications for online marketing, such as sentiment analysis. Of course, alongside these positive aspects there are challenges, for instance in dealing with bias picked up during training.
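To give a concrete sense of how low the entry barrier for such applications has become: a working sentiment analysis prototype takes only a few lines with today's open-source tooling. The sketch below uses the Hugging Face transformers library and its default pretrained English sentiment model – purely an illustration, not a component of any system discussed here.

```python
# Minimal sentiment analysis sketch using a pretrained model.
# pipeline() downloads a default English sentiment model on first use.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

reviews = [
    "The delivery was fast and the support team was great.",
    "The product broke after two days.",
]
for review in reviews:
    result = classifier(review)[0]  # e.g. {'label': 'POSITIVE', 'score': 0.99}
    print(f"{result['label']:<9} ({result['score']:.2f})  {review}")
```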

dot: In what areas can large-scale intelligent language systems help?

Bienert: The release of the GPT-3 language system has created a big wave: more than 300 applications, AI products, business models, and startups have sprung up around the technology. These range from intelligent chatbots to language-understanding, analysis, and translation systems. The first applications that independently create program code from natural-language descriptions already exist. However, large AI models are not limited to language – they can also incorporate multimodal content, taking images and video into account.
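As an illustration of the building block behind such applications, the following minimal sketch generates text from a prompt with the openly available GPT-2 model via the Hugging Face transformers library – GPT-2 serving here as a small, locally runnable stand-in for larger hosted models such as GPT-3.

```python
# Minimal text generation sketch with a pretrained language model.
# GPT-2 is used as a small open stand-in for larger models like GPT-3.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Large European AI models could enable"
outputs = generator(prompt, max_new_tokens=30, num_return_sequences=1)
print(outputs[0]["generated_text"])
```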

dot: Together with other companies, you launched the LEAM initiative at the beginning of the year. What’s behind it?

Bienert: In Europe, we need the ability to compute large AI models ourselves. Otherwise, we will gradually become dependent on systems from North America or China and lose our digital sovereignty. We want to avoid a situation in which we only build the front ends and workflows for AI here and then access the models through interfaces in the US. LEAM – Large European AI Models – calls for an infrastructure with a high-performance data center at its core: a supercomputer optimized for AI workloads. We want to develop the technology independently and in cooperation with other European companies, so that our systems are subject to European values, norms, and quality standards.

dot: What values and quality standards should large-scale AI models meet?

Bienert: There is a lot of potential for research in the area of algorithms; especially for non-discriminatory and unbiased language models, there is still considerable need for improvement. Other important dimensions in which we want to set a high standard are increasing quality and reducing energy consumption. Around this core, the models will then also be made available to all LEAM partners for customization. This is expected to result in a complete ecosystem of industry, startups, research institutes, and universities.

dot: You published your first position paper this summer. What specifically are you calling for in the LEAM initiative?

Bienert: With the LEAM initiative, we are primarily calling for the necessary computing capacity to be made available. Training large models takes enormous resources: GPT-3 was trained on the fifth-fastest supercomputer in the world. If such computing capacity were continuously available, a great many use cases and new business models would follow. We also need appropriate data storage facilities in which we can collect, store, edit, and pre-process data. In addition, we need algorithms to train, continuously extend, and optimize such models. There is a great deal of potential in this area for research and for companies, and this potential must be leveraged through the LEAM initiative.
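To put those "enormous resources" in perspective, a common rule of thumb from scaling-law research estimates training compute as roughly 6 × parameters × training tokens. Plugging in GPT-3's published figures gives on the order of 3 × 10²³ floating-point operations; the sketch below works this out, with the sustained cluster throughput an assumed value chosen purely for illustration.

```python
# Back-of-the-envelope training-compute estimate using the common
# ~6 * N * D FLOPs rule of thumb (N = parameters, D = training tokens).
# Parameter and token counts are GPT-3's published figures; the cluster
# throughput is a hypothetical value for illustration only.
N = 175e9          # GPT-3 parameters
D = 300e9          # training tokens
flops = 6 * N * D  # ~3.15e23 floating-point operations

sustained = 10e15  # assumed sustained throughput: 10 PFLOP/s
days = flops / sustained / 86_400
print(f"Estimated training compute: {flops:.2e} FLOPs")
print(f"At a sustained 10 PFLOP/s: about {days:.0f} days")
```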

Thank you very much for the interview, Mr. Bienert!

 

Jörg Bienert is partner and CPO of Alexander Thamm GmbH, Germany’s leading company for Data and AI. At the same time, he is president of the German AI Association (KI Bundesverband e.V.) and a member of the Junge Digitale Wirtschaft (Young Digital Economy) advisory board at the German Federal Ministry for Economic Affairs and Energy (BMWi). Furthermore, he is a respected keynote speaker and is regularly featured in the press as a Data & AI expert. After studying technical computer science and holding several positions in the IT industry, he founded ParStream, a big-data startup based in Silicon Valley, which was acquired by Cisco in 2015.


Please note: The opinions expressed in Industry Insights published by dotmagazine are the author’s own and do not reflect the view of the publisher, eco – Association of the Internet Industry.