Technology visionaries have been forecasting a fully automated future for decades. Yet as early as 1956, computer scientist John McCarthy, one of the founders of the discipline of artificial intelligence (AI), pointed out where the difficulty lies: in “creating a machine that behaves in such a way that one would call it intelligent if a human being behaved in such a way”.

Artificial Intelligence is on everyone’s lips today, a real buzzword. Realistically, however, recreating human intelligence remains a difficult undertaking, and the trend is still in its infancy: this is the view held by 70 percent of IT security specialists, according to a survey by Radar Cyber Security. At the same time, a great deal of research and testing is under way. According to a 2018 analysis by Research and Markets, investment in this area is expected to reach an estimated USD 191 billion by 2025. Accordingly, the experts surveyed in the Radar Cyber Security study have high expectations for the coming years: 67 percent of them saw good or even very good progress in the use of AI/machine learning in IT security by 2020, and 89 percent believe that it will be usable for a wide range of applications by 2025.

The urgent need for intelligent machines in cybersecurity is easy to understand: on the one hand, there is a significant shortage of experts; on the other, the number of cyber attacks is rising sharply as more and more devices are connected to the internet. Artificial Intelligence is intended to automate detection and response and make it more efficient than other software-based support.

We have not yet reached this goal

“Supervised learning” is the term for the kind of AI currently offered on the cybersecurity market. A learning algorithm tries to find a hypothesis that makes predictions that are as accurate as possible. The hypothesis is a mapping that assigns an assumed output value to each input value, for example whether a piece of code contains malware or not. To do this, the algorithm needs many data sets from which it learns the desired “laws” and can then apply them to new data. The core prerequisite for this type of intelligence is the quality of the sample data. On the one hand, the malware-free data must actually be “clean”, otherwise the AI cannot recognise abnormal data points. On the other hand, care must be taken to ensure that cyber attackers cannot gain access to the “training data”, since they could swap the labels of malware and malware-free code and thereby outwit the system.
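
To make this concrete, here is a minimal sketch of supervised learning in Python with scikit-learn. The feature vectors, labels, classifier choice and the synthetic data are illustrative assumptions, not the method of any particular product; the last lines also simulate the label-swapping risk described above:

```python
# Minimal supervised-learning sketch: a classifier learns a mapping
# from feature vectors (imagine features extracted from code samples)
# to labels (0 = clean, 1 = malware). All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(seed=42)

# Hypothetical feature vectors, e.g. byte entropy, import counts, string stats.
n, d = 2000, 10
X_clean = rng.normal(loc=0.0, scale=1.0, size=(n // 2, d))
X_malware = rng.normal(loc=1.5, scale=1.0, size=(n // 2, d))
X = np.vstack([X_clean, X_malware])
y = np.array([0] * (n // 2) + [1] * (n // 2))

# Hold out data the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# The "hypothesis" the algorithm searches for: a mapping from input to label.
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
print("Accuracy on unseen samples:", accuracy_score(y_test, clf.predict(X_test)))

# Simulated poisoning: an attacker who can flip training labels
# degrades the learned mapping without touching the test data.
y_poisoned = y_train.copy()
flip = rng.choice(len(y_poisoned), size=len(y_poisoned) // 4, replace=False)
y_poisoned[flip] = 1 - y_poisoned[flip]
clf_p = RandomForestClassifier(n_estimators=100, random_state=42)
clf_p.fit(X_train, y_poisoned)
print("Accuracy after label flipping:", accuracy_score(y_test, clf_p.predict(X_test)))
```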

In IT development, machine learning is the next step in generating “artificial” knowledge from experience: an artificial system learns from examples and, once the learning phase is complete, can generalise from them. The examples are not simply memorised; rather, the system “recognises” patterns and laws in the learning data. As a result, the system can also assess unknown data (learning transfer), or it can fail on unknown data because of overfitting.
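
This overfitting risk can be illustrated with a small sketch, again using synthetic data and illustrative parameters: a decision tree allowed to grow without limit memorises its training examples almost perfectly but typically scores worse on unseen data than a deliberately constrained tree:

```python
# Illustrative sketch of generalisation vs. overfitting on synthetic data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(seed=0)

# Noisy synthetic data: the label depends on the first feature plus noise,
# so a model that fits the training set perfectly is fitting noise.
X = rng.normal(size=(600, 5))
y = (X[:, 0] + rng.normal(scale=1.0, size=600) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

for depth in (None, 3):  # None = grow until the training data is memorised
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0)
    tree.fit(X_train, y_train)
    print(
        f"max_depth={depth}: "
        f"train acc={tree.score(X_train, y_train):.2f}, "
        f"test acc={tree.score(X_test, y_test):.2f}"
    )
```

Typically the unconstrained tree reaches near-perfect training accuracy while the shallow tree generalises at least as well to the held-out data, which is the gap between memorising examples and recognising their underlying laws.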

Initial successes have already been achieved, for example at one of the largest competence centres for research into automated IT risk identification: the team at RadarServices puts the results directly into practice with its customers. The experts nevertheless remain realistic: “We prefer to stay on the safe side and test our algorithms over a longer period of time before using them as the only analysis tool – without expert involvement,” says Christian Polster, who is responsible for research and technology development at Radar Cyber Security.

Computer scientist Donald Knuth summed up the status quo this way: “Artificial Intelligence does everything that requires thought, but fails to do what humans and animals do automatically without thinking.”
