Artificial Intelligence Technologies Boost Capabilities of Cyber Threat Actors

The development of techniques to use AI technologies and tools to enhance their capabilities is increasingly on the agenda of cyber threat actors


In response to cyber defenders’ increasing use of artificial intelligence (AI) technologies, malicious actors have begun discussing how these technologies could be applied for criminal purposes. Research from Control Risks, the specialist global risk consultancy, shows that the development of techniques to use these technologies and tools to enhance their capabilities is now increasingly on the agenda of cyber threat actors.

Nicolas Reys, associate director and head of Control Risks’ cyber threat intelligence team, explains: “More and more organisations are beginning to employ machine learning and artificial intelligence as part of their defences against cyber threats. Cyber threat actors are recognising the need to advance their skills to keep up with this development. One application could be to use deep learning algorithms to improve the effectiveness of their attacks. This shows that AI and its subsets will play a larger role in facilitating cyber attacks in the near future.”

There are currently no known attacks using AI, but these technologies could assist threat actors in a number of ways. These include:

Spearphishing campaigns: When selecting targets for a criminal campaign, threat actors could use algorithms to generate spearphishing lures in victims’ native languages, expanding the reach of mass campaigns. Similarly, larger amounts of data could be gathered and analysed automatically to improve social engineering techniques, and with them the effectiveness of spearphishing campaigns.

‘Hivenets’: In the post-infection phase, clusters of compromised devices with the ability to self-learn, dubbed ‘hivenets’, could be used to automatically identify and target additional vulnerable systems.

Extensive, customised attacks: Based on its assessment of the target environment, AI technology could tailor the malware or attack to be unique to each system it encounters. This would enable threat actors to conduct vast numbers of attacks, each customised to its victim, so that only bespoke mitigation or response would be effective for each infection, rendering traditional signature- or behaviour-based defence systems obsolete.

Advanced obfuscation techniques: Threat actors could evade detection by developing and implementing obfuscation techniques informed by data from past campaigns and by analysis of security tools. Attackers may even be able to launch targeted misdirection or ‘noise generation’ attacks to disrupt the intelligence gathering and mitigation efforts of automated defence systems.
