Are you scared or excited about the future of AI? Perhaps a combination of both? As a technology insider, I’m really excited about the prospects for AI in the coming years, and the transformative implications it holds for each and every industry.
High-growth companies are already seeing the benefits of utilising AI technology in their businesses, as Microsoft recently highlighted at MSEurope.
The field of cybersecurity is no different, with AI and machine learning (a branch of AI) integral to most solutions. Currently, most AIs enjoy a symbiotic relationship of sorts with humans. AI helps us to navigate complex systems and to process masses of data in order to fortify system security. Simultaneously, we help machines learn by feeding them learning models so that they can train and make better decisions.
It’s a great relationship - but one which business owners like me need to watch closely. For instance, at MailGuard, we employ proprietary Hybrid AI threat detection engines to protect businesses across the globe from a more diverse and higher volume of email threats than ever before. We’ve been in the business for more than 17 years, so we have a bit of a head start.
But I often find myself thinking - will we ever reach a stage when the analysts are obsolete and where we let the machines take over? How dependent can I be on AI and machine learning to maximise its potential without any detrimental effects? What, really, does the future hold for AI, and what does it mean for my business?
To that end, I wanted to explore how AI and machine learning currently work, where we are in the field, and how the gears are turning in cybersecurity.
AI is excellent at decision making based on historical data - but it’s only as good as the data it’s trained on
For business owners like me who are wondering how best to employ AI, and about the implications of its evolution for long-term growth plans, it’s important to first understand how the technology works - and where it is in its current stage of development. Especially because many leaders don’t seem to really know what’s going on in the field. A Webroot report released just last week found that while 71% of businesses said they plan to spend more of their budget on AI and machine learning in their cybersecurity tools this year, 58% said they still aren't sure what the technology really does. Yikes.
Cybersecurity’s hunger for AI is understandable. We are currently facing a tsunami of cyberattacks, keeping CEOs awake at night. At the same time, there’s a massive shortage of skilled cybersecurity workers capable of handling the growing number and complexity of cyberattacks.
Here’s where AI and machine learning come in. In their present stage of development, they help to automate threat detection and response by analysing large amounts of data to identify threats and mitigate vulnerabilities, based on previously fed learning models. Basically, this eliminates the need for you to hire multiple analysts and have them comb through all that data.
However, the fact remains that AI solutions are only as good as the data they’re trained on. Because machine learning algorithms are built on existing datasets (we can’t very well collect data from the future), they do very well at making decisions based on the past. This means AI can usually make a very good decision for right now, given a comprehensive data set.
An oft-touted example is determining the current market value of a house. This will take into account a data set of recently sold houses in the area, along with features such as property size, number of bedrooms, distance from public transport, schools, etc. It can then calculate an approximate market value (or range) for a particular house.
This particular algorithm can be very effective in its predictions, so long as there is a large enough data set (and large enough feature set).
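To make the house-price example concrete, here’s a minimal sketch of such a model. The figures and feature set are entirely made up for illustration; a real valuation model would use thousands of sales and far richer features:

```python
import numpy as np

# Illustrative (made-up) data: each row is a recently sold house in the area:
# [property size (sqm), bedrooms, km to public transport], with its sale price.
features = np.array([
    [120, 3, 1.5],
    [95,  2, 0.8],
    [200, 4, 3.0],
    [150, 3, 2.2],
    [80,  2, 0.5],
], dtype=float)
prices = np.array([650_000, 540_000, 910_000, 720_000, 500_000], dtype=float)

# Fit a linear model by least squares, with an intercept term appended.
X = np.hstack([features, np.ones((len(features), 1))])
coeffs, *_ = np.linalg.lstsq(X, prices, rcond=None)

def estimate_price(size, bedrooms, km_to_transport):
    """Approximate market value, based purely on past sales."""
    return float(np.array([size, bedrooms, km_to_transport, 1.0]) @ coeffs)

print(round(estimate_price(130, 3, 1.0)))  # estimate for a new listing
```

The key point is that every number the model produces is interpolated from past sales - exactly why, as discussed below, it has nothing to say about a future regulatory change or a different city.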
However, if we’re talking about housing prices and trying to ascertain the market price of a house in Melbourne in two years’ time, and we forget to tell the algorithm there are new laws coming into effect about the way home loans are managed (or something similar), then the predictions may not be very accurate. The machine, in technical terms, hits a blind spot. The same limitation would also exist if we tried to use that same AI to make decisions about property in San Francisco or Rome.
The quality, quantity and range of the data set the machine is learning from are critical, and so too are the conditions within which the model has been developed. Differences in the political or regulatory environment, or socio-economic differences, are just a handful of the fundamental variations in the environment that the AI cannot account for.
The state of AI within the field of cybersecurity
Digging a little deeper, here’s an overview of how AI is already being employed in cybersecurity tactics around the world.
Some real-life examples include:
- Identifying an email that contains known patterns of text from phishing attempts and quarantining or deleting it
- Identifying website traffic that “looks” like a DDoS attack and defending using IP-blocking strategies
- Identifying unusual login attempts and deploying 2FA to verify user identity
You may notice that these examples I’ve given are based on known cybersecurity incidents and infrastructure weak points.
This goes back to my earlier point about AI being able to predict something right now based on past experience and data. It does not take into account new types of incidents, or zero-day attacks.
Take Bitcoin mining trojans for example. These have existed since as far back as 2011. Could humans have predicted this might be possible? Yes. Machines? No, especially not at that point in time.
Machine learning is at the stage in cybersecurity where it’s capable of identifying that something might be a variant of an existing threat, but as for whole new threats, it’s not quite there yet.
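To illustrate why variants are catchable but novel threats are not, here’s a very simplified sketch where fuzzy string similarity stands in for a trained model. The signatures are invented for illustration:

```python
from difflib import SequenceMatcher

# Hypothetical signatures from previously seen phishing subject lines.
KNOWN_THREATS = [
    "urgent: your invoice #4821 is overdue",
    "password reset required for your account",
]

def variant_score(subject: str) -> float:
    """Highest similarity between this subject and any known threat."""
    subject = subject.lower()
    return max(SequenceMatcher(None, subject, t).ratio()
               for t in KNOWN_THREATS)

# A near-copy of a known threat scores high...
print(variant_score("URGENT: your invoice #9917 is overdue"))
# ...but a genuinely novel lure scores low, even though a human
# analyst would immediately recognise it as suspicious.
print(variant_score("free bitcoin mining toolkit inside"))
```

Whatever the similarity measure, the mechanism is the same: scoring is relative to what’s already in the dataset, so a brand-new attack has nothing to score highly against.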
Any business owner looking to completely defer to AI for their cybersecurity needs should therefore think twice. The human element is still required to point out what the machine has missed - or more accurately - what the machine has not yet seen, which is why we still need analysts in the mix. A machine can’t identify those features and attributes that denote a new scam, if they’re not part of the dataset it’s been fed.
This limitation is particularly vital in cybersecurity, where:
- Cybercriminals are acutely aware of the blind spots of current vendors and design their attacks accordingly, and
- The nature of zero-day attacks means that cybercrime networks are continually adapting threats to bypass defences. By definition, there are only a small number of such attacks at the bleeding edge - and certainly not enough data for an AI to effectively learn and defend users without human intervention.
Our CTO, Bill Rue, expresses AI’s current and future strengths quite succinctly:
“Most AI is currently pretty basic. It’s not super smart - yet. As it gets smarter - and if you look at some stuff being developed by teams like the Korean researchers that won the DARPA challenge robot or Boston Dynamics you know that we’re moving fast toward independent AI.
“The big difference with future AI will be that at the moment, we - the humans - have to act on the issues AI detects but in the future, our true AI systems will find the problem, find the solutions, implement them, and then predict the next problem that could arise, all on their own. Essentially, AI will help define signal from noise and help us focus effort to get the best protection.”
A fear of the unknown? Or a cautious approach?
Along with a lack of understanding about how AI and machine learning-based cybersecurity tools work, the Webroot report I mentioned above found that only 49% of IT professionals said they felt extremely comfortable using these tools.
The report makes sense - decision-makers would naturally be uncomfortable increasing their dependency on AI if they don't know exactly how it works to begin with. This calls for greater emphasis on cybersecurity collaboration - sharing up-to-date intel among businesses, communities and nations - a message I keep reiterating. By sharing information on the evolution of AI within cybersecurity, including what has gone wrong in the past, there will be greater awareness of the issues, making business leaders more confident about increasing their use of the technology.
But a fear of the unknown isn’t all that makes business leaders hesitate when asked whether they will completely defer to AI for cybersecurity.
The report found the primary reason businesses are turning to these tools is that cybercriminals are doing the same. Some 86% of the IT professionals surveyed said they believe cybercriminals are using AI and machine learning to attack organisations.
Again, understandable. After all, if businesses are interested in (and building) AI projects, it is only natural for cybercriminals to get in on the action too. From enhancing phishing attacks to enabling nation-wide hacking, AI is as much a threat as it is an asset in cybercrime.
It makes me shudder to think: if machines get smart enough to out-think people, what will that mean for cybersecurity? An AI arms race is arguably already on. Are we heading for a battle of the bots - good vs evil AI?
But for now, the answer to the question I posed earlier in this blog is no. No, we aren’t ready to completely defer to AI for cybersecurity.
We still need humans. I still need my expert team of analysts. We need these people to evaluate whether our machine learning AI solutions are getting it right - as well as to try and forecast potential trouble areas ourselves.
Adopting a cautious approach towards AI shouldn’t mean that we are scared of it, or that we dismiss it altogether. Rather, we should invest more time in developing the technology, and in building our knowledge of it, so we can maximise its potential. Improving both AI itself and the way we use it will give us a better idea of how - and how well - it can help our businesses combat the cyber threats we face, allowing us to build more effective cybersecurity strategies.
When it comes to AI and cybersecurity in its present state, businesses need to do their part and develop, rather than defer.
Get the facts
Companies are spending more on cybersecurity now than ever before, but those funds aren't always targeting the most significant dangers. There seems to be a disconnect amongst many CEOs about the sources of cyber threats.
Studies consistently show that more than 90% of cyber-attacks are perpetrated via email, yet email security is rarely the biggest item in cybersecurity budgets. If we’re going to win the battle against cybercrime we have to get real about the nature of the threat.
I’m on a mission to help business people understand cybercrime and protect their businesses from costly attacks. If you would like to learn more about the complex cybersecurity challenges facing business today, please download my e-book Surviving the Rise of Cybercrime. It’s a plain English, non-technical guide, explaining the most common threats and providing essential advice on managing risk.
You can download my e-book for free, here.
“Cybercrime is a serious and growing business risk. Building an effective cybersecurity culture within an organisation requires directors and executives to lead by example. Surviving the Rise of Cybercrime is a must-read for directors and executives across business and in government and provides strong foundations for leaders determined to address cyber risk.” - Rob Sloan, Cybersecurity Research Director, Wall Street Journal.
... ... ...
Hi, I’m Craig McDonald; MailGuard CEO and cybersecurity author.
Follow me on social media to keep up with the latest developments in cybersecurity and Blockchain; I'm active on LinkedIn and Twitter.
I’d really value your input and comments so please join the conversation.