Emmanuel Marshall | 09 May 2018 | 10 min read

AI vs AI: the new cybersecurity challenge


"The imperative to do AI research is now not optional. We have to build better and better AI security systems because you can bet your life the criminal elements of the hacking world are busy developing AI weapons to attack us..." 
- Bill Rue.

Artificial Intelligence is one of those subjects that is constantly in the headlines at the moment: controversial, endlessly debated and widely misunderstood.

AI promises to be a game-changer for cybersecurity and it’s a central component of MailGuard’s new project, GlobalGuard, so I sat down with Bill Rue, MailGuard’s Chief Technology Officer, and asked him to give me his perspective on why AI is such an important technology.

Before joining MailGuard as CTO, Bill Rue built extensive experience in the IT world, including a role as Technology Strategist at Microsoft and work on military technology systems, so his insights into AI development and cybersecurity are grounded in real-world experience with user-facing technology.


Interview with Bill Rue:



EM: Bill, how do you think AI is going to change the cybersecurity landscape?

Bill Rue: That’s kind of the unanswerable question, because AI is still a technology in its infancy, and we really can’t predict in any realistic way what it might look like even 3 to 5 years from now. Futurists trying to predict what disruptive technology will look like rarely get it right. Having said that, there’s real concern amongst some scientists and technology thinkers that future AI could be potentially weaponised; turned against us. There’s a lot of speculation about very powerful AI machines becoming self-aware and humans losing control of them, but even if we disregard that more ‘science-fiction’ sort of speculation, there are still other ways that AI could be a security issue.


EM: So there’s an inherent problem with AI that it could be used as a weapon as well as a tool?

Bill Rue: At the software level, even simple software can be weaponised. We know that because we’ve already seen it happen in cybersecurity incidents like NotPetya and WannaCry.
Software doesn’t have to be intentionally malicious to be dangerous. Most cyber-attacks at the moment start with infiltration - like malware being delivered via an email - because hijacking systems is more efficient than building new weapons. It’s a lot easier for criminals or terrorists to take control of systems that already exist than to build their own.

AI is just technology and primitive AI is already within the reach of regular people now. There are open source AI platforms being built by big companies that malicious actors can download and exploit.

Even governments can see the value of that sort of ‘hacker’ approach to weaponising technology. We recently saw the example, via WikiLeaks, of the CIA hoarding exploits it thought might be useful as weapons; once those exploits got into the wild, they were used by criminals.


EM: Can cybercriminals exploit AI to make hacking and infiltration easier?

Bill Rue: What we should be concerned about is malicious actors getting hold of models of our security systems and training their AI to defeat them. That’s the main reason we are now committed to an ongoing AI arms race. Cybercriminals want to use AI to attack, just like we in the security world want to use it to defend. AI builds a model of a problem and gets better and better at achieving its goals. So, before they send their AI-based attack out to the target, they will create a sandbox environment and train their AI to understand the weaknesses of our defences.
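
To make that idea concrete, here is a minimal, purely hypothetical sketch of what “training an attack against a copy of the defence” can look like: a toy hill-climbing loop that mutates a phishing message until a stand-in, keyword-based filter scores it lower. The filter, keywords and obfuscations are invented for illustration and bear no relation to MailGuard’s actual systems.

```python
import random

# Toy stand-in for a defender's filter: it scores a message by the
# fraction of words that look 'suspicious'. Real filters are far more
# sophisticated; this exists only to illustrate the idea.
SUSPICIOUS = {"invoice", "urgent", "password", "verify", "account"}

def toy_filter_score(text):
    words = text.lower().split()
    hits = sum(1 for w in words if w.strip(".,!:") in SUSPICIOUS)
    return hits / max(len(words), 1)

# Simple substitutions an attacker's optimiser might try (invented examples).
OBFUSCATIONS = {"password": "pass-word", "verify": "ver1fy", "account": "acc0unt"}

def mutate(text):
    word = random.choice(list(OBFUSCATIONS))
    return text.replace(word, OBFUSCATIONS[word])

message = "urgent: verify your password to keep your account active"
score = toy_filter_score(message)

# Hill-climb in a private 'sandbox': keep any mutation that lowers the
# filter's score, i.e. train the attack against a copy of the defence.
for _ in range(50):
    candidate = mutate(message)
    if toy_filter_score(candidate) < score:
        message, score = candidate, toy_filter_score(candidate)

print(message, round(score, 2))
```

The point of the toy is the loop, not the mutations: given a copy of a defence to query, an attacker’s optimiser can keep whatever changes slip past it, which is exactly why defenders have to keep their own models evolving.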


EM: Do you think AI is going to be mostly an asset or a threat in a cybersecurity sense?

Bill Rue: AI is currently pretty basic. It’s not very smart - yet. As it gets smarter - and if you look at what’s being developed by teams like the Korean researchers who won the DARPA Robotics Challenge, or Boston Dynamics, you know we’re moving fast toward independent AI - the big problem could be AI systems that can self-organise.

Part of my background is military, so I tend to think in those terms. AI is built by humans and humans are really good at making weapons.  

Think about drones that can self-organise. That already exists. Now add guns to the equation and you start to see the potential problem. Actually, drones don’t even need to have guns to be a serious threat. Imagine you devise a swarm of self-organising drones and you send them to fly over an airport or an air force base; you get a swarm of drones in a jet engine and suddenly you’re dealing with crashing aircraft and potentially the loss of air power capability.

Extrapolate from that to non-physical software and we need to be thinking about swarms of virtual bots zooming around the internet, identifying weaknesses in security systems, breaking into them and stealing whatever they find, all without human guidance or intervention. That’s a very real possibility.

We’re seeing good evidence of cyber-attack activity even at the nation-state level, where governments are using hacking tools and malware to attack other countries. It’s a very grey area, but once malware is being used as a political tool or to damage state infrastructure then you really are talking about weaponisation. Governments are already using conventional malware as a weapon, so using AI would seem to be the next logical step, once it’s available.

The imperative to do AI research is now not optional. We have to build better and better AI security systems because you can bet your life the criminal elements of the hacking world are busy developing AI weapons to attack us.


EM: So we’re in an arms race with cybercriminals to build better AI weapons; could a virus hunter AI be perverted into a virus trainer? Could black hat hackers potentially get our defensive AI software and use it to train better attack software?

Bill Rue: In a way, we’re already seeing something like that happening. We know that some malicious software now won’t run if it senses that it’s inside a “dump centre”; a sandbox environment where we study and learn about viruses. Cybercriminals are building that sort of functionality into their “bugs” so that it’s harder for us to take them down.

Virus intelligence is basic at the moment - it’s still not “true AI.”

We have a crucial window of opportunity at the moment to build AI systems that are at the very cutting edge so we don’t get pipped for line honours by the bad guys. Our whole focus with MailGuard and especially with our new project GlobalGuard is to constantly push the limits of what’s possible with AI security software.

Criminals do, and will, obtain benign software and use it against us, so that’s the reason we need to keep doing research and keep evolving our systems to stay ahead of the criminals’ tools.


EM: What will MailGuard’s future AI security tools feel like to a customer?

Bill Rue: The AI tools aren’t necessarily going to be that visible to the customer. The AI we’re working with is just observing and monitoring in a way that people don’t have the time and patience to do. It works in the background doing a lot of very fast repetitive tasks that our customers don’t notice - and that’s the whole point - the system is unobtrusive and actually enhances the user experience rather than intruding on it.

Our AI finds suspicious actions and then comes up with possible solutions to resolve threats based on predefined activity; that’s its learning process. The big difference with future AI is that at the moment we - the humans - have to act on the issues the AI detects, but in the future our true AI systems will find the problem, find the solutions, implement them, and then predict the next problem that could arise, all on their own.
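
As a concrete, entirely hypothetical illustration of that kind of quiet background monitoring, the sketch below flags a sender whose hourly mail volume suddenly jumps far above its historical baseline. The sender addresses, counts and threshold are invented; real systems use much richer signals and models than a single standard-deviation test.

```python
from statistics import mean, stdev

# Hypothetical per-sender hourly email volumes from the past week
# (baseline) and the volume observed in the current hour.
baseline = {
    "billing@example.com": [3, 4, 2, 5, 3, 4, 3],
    "newsletter@example.com": [40, 42, 38, 41, 39, 40, 43],
}
current = {"billing@example.com": 55, "newsletter@example.com": 41}

def is_anomalous(history, observed, threshold=3.0):
    """Flag values more than `threshold` standard deviations above the mean."""
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and (observed - mu) / sigma > threshold

for sender, volume in current.items():
    if is_anomalous(baseline[sender], volume):
        print(f"Suspicious spike from {sender}: {volume} messages this hour")
```

Running it flags only the billing address, whose volume is wildly out of character, which is the kind of repetitive, always-on comparison Bill describes the AI doing in the background.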

For the customer it will mean a faster, better, more secure service that will streamline communication and minimise interference - they won’t be aware of the AI component because it won’t require them to do anything.


EM: You just mentioned GlobalGuard; that’s the new system you’re working on right now that combines AI and Blockchain technology. I know you can’t be really specific because the product is still pre-launch, but can you talk a bit in general terms about the way GlobalGuard uses AI and Blockchain together?

Bill Rue: OK, sure. The thing that is quite misunderstood is that Blockchain itself doesn’t solve security threats, but it does give us a transparent communication platform.

The strength of GlobalGuard is that it’s going to be a collaborative community platform for research, bug-bounty hunting and tool development; a token based community that incentivises collaboration.

Hackers have been collaborating forever, so we’re making a community where people can collaborate freely on security development.

The AI part of GlobalGuard allows that human-driven community of problem solvers to scale up their response. The AI sees the contributions made by the people and then builds multiple variants and extrapolations on those ideas. If the community comes up with a new malicious variant and a fix for it, the AI will give us the optimal solution to that problem, plus the capability to defend against whatever the next variant of that strain might be.
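
As a loose illustration of “extrapolating to the next variant”, the hypothetical sketch below takes one phishing domain reported by a community member and generates common character-substitution lookalikes so they could be blocked pre-emptively. The domain and substitution table are invented examples, not GlobalGuard functionality.

```python
# Hypothetical sketch: expand one community-reported phishing domain into
# common lookalike variants so the next iteration of the same campaign
# can be blocked before it appears. Domain and substitutions are invented.
SUBSTITUTIONS = {"o": ["0"], "i": ["1", "l"], "e": ["3"], "a": ["4"]}

def lookalike_variants(domain):
    variants = set()
    for i, ch in enumerate(domain):
        for sub in SUBSTITUTIONS.get(ch, []):
            variants.add(domain[:i] + sub + domain[i + 1:])
    return variants

reported = "examplebank.com"
blocklist = {reported} | lookalike_variants(reported)
print(sorted(blocklist))
```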

The various contributors to the community - researchers, white-hat hackers, business owners - each offer their own skill sets and data sources and the AI facilitates the development of solutions by coordinating all those various inputs.

While all that is happening, the Blockchain platform records all the contribution units and maps the development process. Working that way reduces duplication of effort and gives the development process better direction.
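
To show what “recording contribution units” could mean in the simplest possible terms, here is a toy, append-only, hash-chained log of contributions: each entry includes the hash of the previous one, so the recorded history can’t be quietly rewritten. It is a generic illustration of the idea behind a blockchain-style ledger, not GlobalGuard’s actual design.

```python
import hashlib, json, time

# Toy append-only ledger: each entry embeds the hash of the previous
# entry, so tampering with history breaks the chain. Contributor names
# and contributions are invented examples.
def add_entry(chain, contributor, contribution):
    previous_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {
        "contributor": contributor,
        "contribution": contribution,
        "timestamp": time.time(),
        "previous_hash": previous_hash,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)
    return chain

ledger = []
add_entry(ledger, "researcher_a", "reported new phishing kit variant")
add_entry(ledger, "whitehat_b", "submitted detection rule for the variant")
print(json.dumps(ledger, indent=2))
```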


EM: If you were a betting man, Bill, what would be your predictions for the way AI is going to change the cybersecurity landscape over the next 3 to 5 years?

Bill Rue: Essentially, AI will help us separate signal from noise and focus our effort where it delivers the best protection. When we talk about cybersecurity we’re talking about very, very big data sets, and handling all that data is a slow and difficult task for humans, but it’s what AI does best.

AI could also make cybercriminals more dangerous though, so the mission is to make good choices about how to use it.

We have to start using AI as our security spearhead because the cybercriminals are definitely going to use it to try and achieve their ends.

 

> Read more about AI and cybersecurity.

> Read more about Blockchain technology.

 

Stay up-to-date


To keep up with the latest cybersecurity news, follow MailGuard on social media; we're on Facebook, Twitter and LinkedIn.

Stay up-to-date with the MailGuard Blog by subscribing to our weekly newsletter.