AI will play a stronger role on both sides of cyber attacks

February 15, 2022
#AI22 - No. 5 of 10

#AI22 is a series of articles highlighting what we believe to be the 10 developments that will shape AI this year.
This series is co-written by
Dr. Johannes Otterbach, Dr. Rasmus Rothe and Henry Schröder.

---

Cyber incidents have increasingly become a severe threat to businesses, governments and all digitized organizations around the world. Allianz Insurance named cyberattacks the number one risk to businesses in 2022 in its annual Risk Barometer. In 2015, the global cost of cyber attacks was estimated at $3tr; by 2025, it is expected to reach an estimated $10.5tr annually.

In December 2021, hackers worldwide launched more than 1.2m attacks within four days by exploiting the Log4j flaw, a vulnerability in a widely used open-source logging library that opened the floodgates for hackers to enter a huge number of networks and whose consequences will only be felt in the years to come. Such mistakes in software code are becoming a key use case for AI in cyber security. Approaches such as shift-left testing apply NLP-based analysis at every step of the software development process, detecting security or efficiency issues in code early and preventing problems from being discovered at later, more costly stages. Companies implementing code security through AI will experience a surge in demand.
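
To make this concrete, below is a minimal Python sketch of what NLP-assisted shift-left scanning could look like: a text classifier is run over the lines added in each commit, and snippets it deems risky are flagged before they move further down the pipeline. The model checkpoint and the "VULNERABLE" label are hypothetical placeholders for a fine-tuned code-vulnerability classifier, not a reference to any specific tool.

```python
# Sketch: flag suspicious lines added in the latest commit using an
# NLP classifier. The model name below is a placeholder (assumption),
# standing in for any fine-tuned vulnerability classifier.
import subprocess
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="your-org/code-vulnerability-classifier",  # hypothetical checkpoint
)

def added_lines(commit: str = "HEAD") -> list[str]:
    """Return the lines added by the given commit."""
    diff = subprocess.run(
        ["git", "show", "--unified=0", "--pretty=format:", commit],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line[1:] for line in diff.splitlines()
            if line.startswith("+") and not line.startswith("+++")]

def scan_commit(commit: str = "HEAD", threshold: float = 0.8) -> None:
    """Classify each added line and print those flagged as risky."""
    for snippet in added_lines(commit):
        result = classifier(snippet, truncation=True)[0]
        # "VULNERABLE" is an assumed label of the placeholder model.
        if result["label"] == "VULNERABLE" and result["score"] >= threshold:
            print(f"[{result['score']:.2f}] possible issue: {snippet.strip()}")

if __name__ == "__main__":
    scan_commit()
```

In practice such a check would run as a pre-commit hook or CI step, so findings surface at the same stage as ordinary test failures.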

However, businesses are not only attacked through software flaws or backdoors in code. Cisco estimates that 90% of all data breaches occur via phishing attacks, in which hackers gain access to company or personal data by persuading email recipients to perform certain actions. In this space, NLP-based detection models will struggle, since generative NLP makes it possible to scale highly intricate and targeted spear-phishing attacks that used to be very costly to carry out. Additionally, the use of EvilModels will further increase. The sophistication of these attacks, in which malware is embedded in a neural network and delivered covertly without impacting the model's performance, poses a major risk and a great challenge to cybersecurity, and thereby also a chance for AI to prove and test its level of sophistication in this field.
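
To illustrate why such weight-level payloads are hard to notice, here is a toy numpy sketch, a simplified variant of the EvilModel idea rather than the published technique itself: overwriting the least significant byte of each float32 weight with arbitrary bytes changes the weights only negligibly, which is why the model's accuracy is essentially untouched.

```python
# Toy illustration (not the actual EvilModel method): arbitrary bytes hidden
# in the least significant byte of float32 weights barely change their values.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0, 0.02, size=10_000).astype(np.float32)   # stand-in layer
payload = rng.integers(0, 256, size=weights.size, dtype=np.uint8)  # arbitrary bytes

raw = weights.view(np.uint8).reshape(-1, 4).copy()
raw[:, 0] = payload  # on little-endian systems, byte 0 is the lowest mantissa byte
tampered = raw.reshape(-1).view(np.float32)

rel_change = np.abs(tampered - weights) / np.maximum(np.abs(weights), 1e-12)
print(f"max relative change per weight: {rel_change.max():.2e}")
# The maximum relative change stays well below 1e-4, far smaller than typical
# training noise, which is what makes this kind of tampering hard to spot.
```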

As the importance of cyber security increases, so does the need for talent in this space. Since that demand will not be met in the short term, the industry will see strong automation of repetitive security tasks, such as fraud detection, breach risk prediction and security-effectiveness monitoring, through AI models; until now, these tasks have largely still been handled manually. Not only will such systems ease the talent shortage, they will also allow organizations to detect attacks earlier, as they monitor networks and data infrastructure continuously rather than only at intervals dictated by the availability of monitoring personnel.
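
As a rough sketch of such continuous monitoring, the example below fits an IsolationForest on baseline network activity and then scores incoming batches automatically; the feature set, thresholds and synthetic data are hypothetical stand-ins for the much richer telemetry a real deployment would use.

```python
# Sketch: unsupervised anomaly detection over streaming connection features,
# so unusual activity is flagged continuously without a human in the loop.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical features per connection:
# [bytes sent, bytes received, duration (s), failed logins]
baseline = np.column_stack([
    rng.lognormal(8, 1, 5_000),   # bytes sent
    rng.lognormal(9, 1, 5_000),   # bytes received
    rng.exponential(30, 5_000),   # duration
    rng.poisson(0.05, 5_000),     # failed logins
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

def monitor(batch: np.ndarray) -> np.ndarray:
    """Return indices of connections the model considers anomalous."""
    return np.where(detector.predict(batch) == -1)[0]

# Simulated incoming batch with one exfiltration-like outlier appended.
incoming = np.vstack([baseline[:99], [[5e7, 1e3, 2.0, 12]]])
print("flagged rows:", monitor(incoming))
```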

Overall, we expect an AI arms race between hackers and cyber security providers to play out, while demand for AI applications in simple monitoring tasks will rise immensely due to talent shortages and the increased risk of cyberattacks.
