October 2021 - Artificial Intelligence | Phishing | Security

AI and Cybersecurity – What to Expect and How to Protect your Organization

Innovations in AI are making social engineering attacks on companies more effective, explains Dr. Niklas Hellemann from SoSafe Cyber Security Awareness.


Social engineering attacks on companies have become not only more frequent in recent years, but also more targeted. Innovations in artificial intelligence are making the situation even more dire, and employees must now be prepared for this new generation of attacks.

Phishing mails remain one of the most popular tactics used by cyber criminals. This is not surprising, as people themselves are the most efficient gateway into companies’ IT systems. Yet these attacks have become increasingly sophisticated in recent years. For example, they can target specific employees (as is the case with spear phishing and CEO fraud) or build on actual email conversations (as with dynamite phishing). Such attacks are gaining in popularity, not least because they are very lucrative: hackers can quickly demand millions in ransom for encrypted data. We must therefore expect attackers to become even more innovative in the future, for example by using new artificial intelligence (AI) technology to craft their attacks.

Not just science fiction: machine learning facilitates Social Engineering 2.0

There is no question that AI technologies such as text-to-speech and natural language processing can be very beneficial. Personal assistant systems relieve us of bothersome tasks, while image recognition software improves industrial quality control processes. However, some of the underlying technology is also available to the public, meaning that criminals have access to it. There is already a variety of publicly usable AI models – for example, those that imitate real people’s voices (known as voice mimicry or voice cloning) or faces (deepfakes) with astonishing accuracy.

What is more, in recent years the time needed to learn how to create convincing deepfakes has been drastically reduced. The consequences are severe, as many users are unaware of just how convincing AI-generated fakes can be. In this year’s SoSafe Human Risk Review, we already predicted that the use of AI would increase the likelihood of cyber threats, especially with regard to social engineering.

In a recent test, researchers from Singapore’s Government Technology Agency found that, with the AI tool GPT-3, they could craft spear phishing mails that were more effective than those written by humans. This is a frightening outlook for individuals and organizations alike, since other publicly available tools might soon be exploited by criminals for sophisticated and, at the same time, automated attacks. Producing AI-based attacks en masse might soon be criminals’ go-to tactic.

Double-barrel attacks: when supervisors call

But the use of AI models does not stop at phishing. New technology gives cyber criminals a whole range of tools for executing social engineering attacks. By using the aforementioned AI models, for example, a CEO’s voice can be imitated to lend legitimacy to an upcoming phishing mail. These so-called double-barrel attacks dramatically increase the chance of success compared to conventional phishing. And the Internet, along with social and video platforms, provides ample source material for imitating the voices of executives at large companies.

The first real case occurred just a few months after the possibility of such an attack was predicted at the BSI Security Conference in 2019: an employee of a British energy supplier was called by the supposed CEO of the German parent corporation, who told him to transfer money to a Hungarian bank account. The criminals were able to steal 220,000 euros with this attack.

But this is likely just the beginning. Text-to-speech models make it possible to perpetrate personalized attacks against a large number of employees at once. The combination of AI-based vishing and phishing thus significantly increases the risk posed to companies. Those responsible for security face new challenges, as they have to prepare employees for this new generation of social engineering attacks.

How can companies and organizations protect themselves against deepfakes?

A number of companies are currently developing software that recognizes deepfakes by their artifacts and labels them as such. However, the protection such software can provide is limited, because cyber criminals are constantly honing their attacks to circumvent the latest security mechanisms. It is thus essential that, in addition to implementing technical measures, companies sensitize their employees to these new means of attack. Only when all employees are aware of the social engineering tactics that criminals use can these attacks be recognized and reported at the outset.
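The detection tools referred to above are typically trained classifiers whose inner workings are not described here. As a rough illustration of what “recognizing deepfakes by their artifacts” can mean, the following Python sketch computes one very crude signal that research on generated imagery has looked at: the share of an image’s spectral energy in high frequencies, which can be unusually low in over-smoothed synthetic faces. The file name, cutoff value, and the statistic itself are illustrative assumptions for this article, not how any particular product works, and a single hand-tuned number like this is far too simplistic for real-world use.

```python
import numpy as np
from PIL import Image


def high_freq_energy_share(path, cutoff=0.25):
    """Fraction of an image's spectral energy above a radial frequency cutoff.

    Over-smoothed regions or odd high-frequency patterns are among the
    artifacts that deepfake-detection research examines; real detectors
    rely on trained models rather than one hand-tuned statistic.
    """
    # Load the image as a grayscale array and move it to the frequency domain
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2

    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    # Distance of each frequency bin from the spectrum centre, scaled to roughly [0, 1]
    radius = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)

    # Share of total energy sitting in the high-frequency band
    return float(spectrum[radius > cutoff].sum() / spectrum.sum())


if __name__ == "__main__":
    # "suspect_frame.png" is a placeholder name for a still taken from a video
    score = high_freq_energy_share("suspect_frame.png")
    print(f"High-frequency energy share: {score:.4f}")
```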

Regular awareness training that also addresses new types of attacks is recommended in order to raise long-term awareness of the various tactics used in cyber attacks. The combination of training sessions (such as e-learning) and attack simulations is particularly effective. Phishing simulations can also use various channels, including emails and telephone calls, so that employees are confronted with the very gateways cyber criminals exploit on a daily basis. By dealing with realistic attacks, they continuously learn how to recognize them, become more alert to new types of attacks, and can use their newly acquired knowledge in real cases to protect themselves and their organization.

 

Dr. Niklas Hellemann is a certified psychologist with years of corporate consulting experience, as well as a Managing Director of SoSafe Cyber Security Awareness. As an expert in social engineering and security awareness, he works with innovative methods of employee sensitization and helps organizations from diverse industries protect their human layer.


Please note: The opinions expressed in Industry Insights published by dotmagazine are the author’s own and do not reflect the view of the publisher, eco – Association of the Internet Industry.