Will future artificial intelligence be a cybersecurity risk?
That’s the provocative question posed by a new report from Cambridge University’s Centre for the Study of Existential Risk (CSER).
“Artificial intelligence and machine learning capabilities are growing at an unprecedented rate. These technologies have many widely beneficial applications, ranging from machine translation to medical image analysis... Less attention has historically been paid to the ways in which artificial intelligence can be used maliciously,” the CSER report states.
“As AI capabilities become more powerful and widespread, we expect the growing use of AI systems to lead to changes in the landscape of threats… The costs of attacks may be lowered by the scalable use of AI systems to complete tasks that would ordinarily require human labor, intelligence and expertise. A natural effect would be to expand the set of actors who can carry out particular attacks, the rate at which they can carry out these attacks, and the set of potential targets.”
Research organisations like CSER predict that AI's current development trajectory will lead to some form of artificial superintelligence within the century.
Powerful AI systems could be transformative for human society and bring benefits we can’t yet even imagine, but the history of technological progress should serve as a warning: AI will likely be used for destructive and criminal ends as well as benevolent ones.
If machines get smart enough to out-think people, what would that mean for cybersecurity? Could the social engineering techniques we currently see in email scams be applied to more sophisticated technology like video chat?
Imagine you get a Skype call from your accountant saying they need your authorisation for a payment. It’s a video call, so you’re talking to your accountant in real time and looking them in the eye. They get your account details and authorisation code from your bank and make the payment. It’s only the following day that you realise you weren’t actually talking to your accountant at all; it was an AI avatar mimicking their appearance and voice down to the smallest facial expressions and vocal nuances.
The most disturbing thing about AI as a scam tool is that it is almost infinitely scalable. Where human con-artists have to talk to their victims one at a time, an AI bot can be talking to thousands or even millions of potential victims simultaneously; it’s just a matter of processing power and bandwidth.
AI crime vs AI security
MailGuard is developing industry-leading AI technology to meet the cybersecurity challenges of the future.
GlobalGuard, our new blockchain-powered cybersecurity platform, will harness cutting-edge AI technology to detect and resolve cyber-threats as they arise.
"The biggest challenge in cybersecurity is keeping up with the new attacks that criminals are always coming up with. They’re always looking for new tools and strategies to defraud companies, so we have to be forward-looking in our approach to defence as well.
"MailGuard has built its success on being able to respond to new threats ahead of the market and it’s always been our mission to devise better strategies to combat cyber-attacks in real time.
"A while ago, an idea started to take shape in MailGuard meetings; a concept of a massive neural network, enhanced by cutting-edge AI, that could use all the most up-to-the-minute intel on cyber-attacks from across the globe. That was the origin of our new cybersecurity project: GlobalGuard."
- MailGuard CEO, Craig McDonald
> Learn more about GlobalGuard here.
Stay up-to-date with new posts on the MailGuard Blog by subscribing to free updates. Click on the button below: