Get ready for AI-supercharged hacking

Artificial intelligence can supercharge the effect of hacking attacks. As use of AI widens, people and organisations will have to become much more careful in guarding against its malicious use.

One aspect of the problem is that malicious actors who succeed in hacking a system, such as a database or a phone, can apply AI to the stolen information to create phishing messages that are far more persuasive and effective.

Another challenge is that an AI program loaded onto a phone or other computer must have access to far more information than a normal app. So a hacker may target the AI tool itself, seeing it as a wide door to more information that can in turn be used to mount further and stronger attacks.

Cybercrime is causing significant disruption to the Australian economy. According to the Australian Institute of Criminology, cybercrime cost Australia $3.5 billion in 2019. Around $1.9 billion was lost directly by victims; the rest was the cost of recovering from attacks and of measures to protect systems.

To guard against AI-supercharged hacking, we’ll need to work harder at protecting ourselves and the organisations we’re affiliated with. We’ll need even more vigilance when receiving emails and text messages, more diligence in reporting suspicious ones and greater reluctance to share information in response to them.

Spear-phishing is the sending of emails and text messages that are highly targeted at the individuals they’re addressed to. For example, suppose you visited a bakery yesterday, bought a tiramisu cake and later received a text message asking you to follow a link to rate the cake and your shopping experience. Mistakenly assuming that such a message could have come only from the innocent local bakery, you may click through and provide personal information, when in fact you’re dealing with a hacker who has found out just a little about you: your phone number, the name of the shop and what you bought.

But that example is mild compared with the spear-phishing that might be done with generative AI, the type that can create text, music, voice or images. It’s quite conceivable that a hacker using generative AI could send a detailed email purporting to come from your friend, written in the friend’s style and discussing things that you’d expect to hear only from that friend.

Next, there’s the problem that the AI tools that are, or soon will be, on our phones and other computers must have permission to access a great deal of information held in other apps. Although the AI tools are mostly pre-trained, they need access to our data to provide personalised answers and recommendations for each of us. For example, to compose a message as convincing as that one from your friend, an AI tool would need to learn from records in your messages, email and contacts apps, and maybe the photos app, too.

This means that if attackers can get into the system, perhaps by means hackers already use, and then gain access to the AI tool, they may be able to collect whatever information the AI is collecting, without having to break directly into the stores of information the AI can reach.

Imagine that you and two friends are planning a birthday party for your brother and discussing gift ideas by email. A hacker who can read the contents of your email app, because your AI tool has access to it, can then send an extremely persuasive spear-phishing email. It might purport to come from one of the friends, offering links to gifts of the type you were discussing. With today’s usual level of guardedness, you are not likely to be at all suspicious. But the links are in fact malicious, possibly designed to give access to your organisation’s computer network.

The AI tool that Apple announced in June, for example, requires access to your contacts and other personal information on your phone or other computer.

So far, the only answer to all this is increased vigilance, by individuals and their employers. Governments can help by publicising the problem. They should.