The LEAM Initiative – Supporting SMEs in Training Large AI Language Models
What are the potential benefits and challenges for SMEs that want to implement large AI models in practice? Dr Johannes Otterbach of Merantix Labs explains.
Since the introduction of the machine learning language model GPT-3, the market for automatic text generation and text recognition has been growing with new business models. GPT-3 was developed in the USA and can compose texts and conduct dialogues on its own – it is considered a technological breakthrough. What are the potential benefits and challenges for German and European companies that want to implement large AI models in practice?
We talk about this in an interview with Dr Johannes Otterbach, Vice President Machine Learning Research at Merantix Labs. The company builds AI solutions for German and European SMEs and is involved in the European LEAM initiative (Large European AI Models), which eco – Association of the Internet Industry also supports.
This interview was originally published in German on eco.de and translated into English for dotmagazine.
What potential do large-scale AI language models have for the Internet industry in Germany and Europe?
Otterbach: Currently, large-scale language models are not yet very prevalent in applications in Germany because we lack the infrastructure for them here. But I believe that this development will come and must come. The remarkable thing about these AI models is their ability to solve many different tasks across different use cases and industries. This is a paradigm shift in machine learning because, previously, individual models were developed for individual tasks. What we have found out in research now needs to be put into practice in industry. In the next few years, some sectors of the economy will face a shortage of skilled workers. To relieve the burden on people, we will thus have to automate some processes and support them with AI. With the LEAM initiative, we want to build trust in the technology and support companies in training AI language models and putting them into practice.
Merantix is also involved in the LEAM (Large European AI Models) initiative, which aims to develop large-scale AI models according to European standards. Why is it so important to have European models?
Otterbach: Merantix works in the LEAM initiative's “Data & Algorithms” task force. There we examine existing and future models and data sets that we would like to train with application partners. These must be copyright- and GDPR-compliant and in line with European values and laws. In addition, we focus on multilingualism and use self-supervised learning to train the models to take many different languages into account in a holistic way. GPT-3, for example, does not yet meet these standards, because the model was trained mainly on widely spoken languages such as English. Its applicability is limited for less widely spoken languages, of which there are many in Europe.
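To make the idea of self-supervised multilingual training more concrete, here is a minimal illustrative sketch in Python. It is not LEAM's training code, which has not been published; it uses the openly available multilingual model XLM-RoBERTa via Hugging Face's transformers library, and the example sentences are placeholder assumptions.

```python
# Illustrative sketch only, not LEAM's actual pipeline: one masked language
# modelling (self-supervised) training step on multilingual text.
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling)

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")

# Self-supervision needs no labels: the text itself is the training signal.
# Example sentences are hypothetical placeholders.
texts = [
    "Große Sprachmodelle lernen aus unmarkierten Texten.",        # German
    "Les grands modèles de langue apprennent sans étiquettes.",   # French
    "Suured keelemudelid õpivad märgendamata tekstist.",          # Estonian
]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

# The collator randomly masks ~15% of tokens; the model learns to restore them.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer,
                                           mlm_probability=0.15)
masked = collator([{"input_ids": ids} for ids in batch["input_ids"]])

outputs = model(input_ids=masked["input_ids"], labels=masked["labels"])
outputs.loss.backward()  # one self-supervised step (optimizer omitted)
```

Because the objective comes from the text itself, the same setup scales to any mix of languages for which raw text is available, which is what makes it attractive for covering Europe's many smaller languages.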
What other aspects do you take into account in the research and creation of the models and data sets for the application?
Otterbach: Many data sets contain bias. This can occur both systematically and by chance, for example during sampling or through unconscious thought patterns. If we train models on biased data, the models will reflect that bias. We are, therefore, currently investigating which data sets we can already use for large AI models and how we can mitigate the bias in these data sets. We also take the fairness aspect into account when creating the models.
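One common mitigation that such a dataset audit can feed into is reweighting under-represented groups during training. The short sketch below is purely illustrative, with hypothetical group labels and counts; it is not Merantix's or LEAM's actual tooling.

```python
# Hypothetical sketch: measure how skewed a dataset is across groups and
# compute inverse-frequency training weights so rare groups count more.
from collections import Counter

# Placeholder data: the language of each document in an imagined corpus.
samples = (["en"] * 900) + (["de"] * 80) + (["et"] * 20)

counts = Counter(samples)
total = len(samples)

# Inverse-frequency weights: a group's weight grows as its share shrinks.
weights = {group: total / (len(counts) * n) for group, n in counts.items()}

for group, n in counts.items():
    print(f"{group}: {n / total:.0%} of data, training weight {weights[group]:.2f}")
# en: 90% of data, training weight 0.37
# de: 8% of data, training weight 4.17
# et: 2% of data, training weight 16.67
```

Reweighting only addresses representation skew that can be measured; biases baked into the content of the texts themselves require other techniques, which is why the auditing step comes first.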
In what areas can large-scale intelligent language systems support us?
Otterbach: Nowadays, we spend about 28 percent of our working time answering emails – more than a quarter of the working day, which is really quite a lot. Much of this time could be automated with text recognition and language models. Email is just one example, but it affects a lot of people and companies. I think there is a big market and a lot of potential that we are not using at the moment. For unstructured text, large language models have an enormous advantage over previous statistical models and rule-based systems.
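As a hypothetical illustration of the email use case, a generative model can be asked to draft a reply that a person then reviews and sends. The model choice and prompt below are assumptions made for the sake of the example, not a description of an actual product.

```python
# Illustrative sketch: drafting an email reply with an off-the-shelf
# generative model. GPT-2 is used here only because it is freely available;
# a production system would use a far more capable model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

email = "Dear team, could you send me the Q3 usage report by Friday?"
prompt = f"Email: {email}\nPolite reply:"

draft = generator(prompt, max_new_tokens=60, do_sample=True)[0]["generated_text"]
print(draft)  # a human still reviews and edits the suggested reply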
On the one hand, large-scale AI models consume a lot of computing power and energy. On the other hand, they can save CO2 emissions and contribute to sustainability. How do companies establish resource-saving algorithms and AI models and what do they have to pay attention to?
Otterbach: Large AI models can solve many different tasks at the same time, as opposed to many small models that are each trained for only one task. This amortises the costs, computing power, and energy across tasks. There are several indications that the carbon footprint of a large model that solves many tasks is smaller than the combined carbon footprint of many small models. Moreover, the algorithms themselves become more efficient over time as the technology evolves. If companies offer business models with great added value and apply them broadly to many different use cases and industry sectors, then the algorithms can contribute to efficiency.
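The amortisation argument can be illustrated with a back-of-envelope comparison. All figures below are hypothetical placeholders chosen only to show the arithmetic; they are not measured energy costs.

```python
# Hypothetical back-of-envelope comparison of the amortisation argument.
N_TASKS = 20
SMALL_MODEL_TRAINING_KWH = 500    # assumed energy per task-specific model
LARGE_MODEL_TRAINING_KWH = 6_000  # assumed energy for one multi-task model

many_small = N_TASKS * SMALL_MODEL_TRAINING_KWH
one_large = LARGE_MODEL_TRAINING_KWH

print(f"{N_TASKS} small models: {many_small} kWh")  # 10000 kWh
print(f"1 large model: {one_large} kWh")            # 6000 kWh
print(f"per task: {one_large / N_TASKS} kWh vs {SMALL_MODEL_TRAINING_KWH} kWh")
```

Under these assumed numbers, the large model costs more to train in absolute terms than any single small model, but its cost per task is lower once it serves enough tasks, which is the core of the amortisation point.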
Dr Johannes Otterbach did his doctorate in physics at the Technical University of Kaiserslautern. He then specialised in Big Data and Machine Learning at companies such as Palantir and OpenAI, and has been promoting the transfer of knowledge from public research projects into practice at Merantix since 2021.
Hanna Sissmann is a Junior PR Manager in the Communications Team at eco – Association of the Internet Industry. She focuses on social media, video, and audio content, and makes complex tech topics comprehensible. She is also responsible for the German podcast “Das Ohr am Netz”, bringing together a line-up of interesting guests and highlighting Internet industry topics and stories.
Please note: The opinions expressed in Industry Insights published by dotmagazine are the author’s own and do not reflect the view of the publisher, eco – Association of the Internet Industry.