MailGuard | Jun 26, 2023

Embrace the “Fog of the Future”!

We’re always cynical about what we see, hear, and read. Even more than most.

Who really wrote that article, and what were their motivations? Working in the cybersecurity space only exacerbates that cynicism. Ironically, in an industry that’s all about trust, you learn to trust no one. Identities are misappropriated all the time, sometimes with permission and sometimes without. Most of the time they’re just digital credentials used to access a platform, with a team member choosing whose voice to represent and in which channel (sorry to pull back the curtain for any remaining believers in the Great Oz).

And so, in the flood of conversation about AI, it’s frightening to think not only about the upside but also about the risks. How many posts have you seen recently asking people to share their favourite AI apps? With all the hype, the possibilities and permutations are multiplying by the day. And yes, we’re not alone in fearing the worst. Alongside the excitement, doomsayers are literally equating advancements in AI to a pending nuclear apocalypse, and we’re the first to agree that’s all a bit OTT.

That said, in less than five minutes this morning our CMO created a Telegram bot that’s a good approximation of his voice, and he’s no tech whiz. He was just curious to see how close it would get. Sure, it has a very slight American accent and a tinny robotic tone, but you can only really spot those differences when you know what you’re listening for. You can chat to NotRealMe here, but don’t expect him to remember the conversation: https://t.me/NotRealMe_Bot. The point is that we’re at the beginning of a phase in history where we will be flooded with information, yet it will be near impossible to tell whether it’s real, who’s behind it, and what their motivations are. Some would argue that we’re already there. And, as advertisers, where we used to pay celebrities and thought leaders to tell our stories because of the trust that people placed in them, many of those voices will now be fake: good approximations of real people created with cutting-edge technology.

Even more concerning are the faked voices of those closest to us. A call or voice message from Brad Pitt in the morning is unexpected, so that’s probably easy to spot. But what if the call is from your partner, best friend, or a co-worker? We already see the impact of CEO fraud and BEC, so how much more damaging will future versions of those attacks be, with advanced video and audio forgeries and the ability to reach out through channels beyond email? And if you think that sounds far-fetched, it’s already happening, as in this story about scammers imitating the victims’ children (an evolution of the ‘Hi Mum’ WhatsApp scam).

To create the robot version of himself, our CMO needed only 60 seconds of audio. We could easily capture that from a recording at a conference, in a meeting, or on a quick market research call. That’s how easily an identity could be stolen, and the technology is only just getting started. By adding a handful of keywords, we can instantly change the personality and tone of the bot.
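To make that concrete, here is a minimal sketch of roughly how little is involved, using the open-source Coqui TTS library (XTTS v2). The model name and arguments follow the library’s published examples and may differ between versions; the point is how low the barrier has become, not to provide a recipe.

```python
# A minimal voice-cloning sketch using the open-source Coqui TTS library.
# Model name and arguments are based on the library's published examples
# and may vary by version; file names here are illustrative placeholders.
from TTS.api import TTS

# Load a multilingual voice-cloning model (weights download on first run).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Clone a voice from a short reference clip (as little as a minute of clean
# speech) and synthesise arbitrary text in that voice.
tts.tts_to_file(
    text="Hi, it's me. Can you call me back on this number when you can?",
    speaker_wav="reference_clip.wav",  # the captured audio sample
    language="en",
    file_path="cloned_voice.wav",
)
```

Wire output like that into a chat platform’s bot API and you have something resembling NotRealMe, which is essentially all our CMO’s five-minute experiment amounted to.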

Now combine that with techniques that already exist, like SIM swapping. Imagine a call from an old friend: it pops up on your phone with their caller ID, placed by a criminal who has already taken control of their SIM, and a voice that sounds a lot like them greets you on the other end of the line. It can respond in real time to carry out a full conversation with you, and it might reach you at any time, even over a series of calls, leaving a voicemail and then responding when you call back. It’s just data, of course, but how are you to know it’s not the real thing? That technology is already here. Think about it next time your bank asks you to say out loud ‘My voice confirms my identity’ to authorise the transfer of a large sum of money.

Step forward to 2024 and think about Apple’s Vision Pro headsets, and a world where everything you interact with is filtered through layers of software interpreting the outside world and projecting your image back to others: a synthetic version of your current Microsoft Teams or Google Meet calls. How will we know that it’s really you? As you would expect, Apple has already anticipated that question and built in retina scanning, but can we expect every vendor to be so careful?

Think about DoD plans to build cyber armies to thwart bad actors and malicious networks. How can we know what’s real and what’s fake, and which side anyone is on? Of course, the tech imitations aren’t limited to voice. They can be text only, which is what we’ve been seeing for years now with socially engineered cyber-attacks via email. Where we once trusted our eyes and ears as an extra layer of validation, newer technology means we will need to be even more alert for deepfakes and audio bots capable of tricking us with greater sophistication, nuance and eloquence than is imaginable today.

Today most of us think we can spot an email scam because scams are often riddled with grammatical errors. But not all of them are, and that’s the danger: we become complacent because we spot 99 out of 100, when the 1 in 100 we miss is likely to be the most dangerous. The same may well be true for future advanced scams. Expect scores of robocalls from synthetic identities wanting to sell you solar panels or to conduct a quick survey about your favourite politician, while the odd call that seems legitimate, from a distant relative or old school friend, will be so much harder to spot.

And as always, what will it mean for our most vulnerable? For kids, getting an inbound call from Mum, or a pop-up message in chat from a friend, right in the middle of their favourite MMO. For new migrants, still trying to decipher language and accents. For the elderly, whose faculties may already be failing and who have less confidence with advancing technologies anyhow. Add quantum computing into the mix, and the scale and accuracy of this technology could be beyond belief.

So maybe we’re starting to sound like Chicken Little, but it’s all too real. And if that future isn’t here yet, it’s right around the corner. Biometrics will be vital, as Apple has shown with the Vision Pro. But how long will it be before every device, every channel and every piece of software supports biometric verification, and when that day arrives, will there be new ways to evade those biometrics? Because until every endpoint in every conversation is verified, we’re all vulnerable.

MFA is good for now, but it’s not perfect. There are ways around it.

Likewise, password managers are good but not without their shortcomings.

We have loads of protections in our kit bag, but as industry professionals and trusted experts, we need to remain vigilant and aware that there are limitations in what we recommend today. No solution is an absolute, ironclad protection against cyber threats. And the speed of technological advancement means we have to constantly challenge and revisit the tools and solutions we rely on today, in light of what the world will look like tomorrow.

No post about AI is complete without an obligatory reference to generative AI and ChatGPT, and yet we have gotten this far without even a mention. ChatGPT almost seems like old news, but a few months ago you probably didn’t even know what it was. The world is changing so rapidly, and the information we consume is transforming so quickly and emerging from so many fragmented corners of the internet and the tech landscape, that the sheer volume and complexity risk overrunning us.

In an article for The Conversation, Laks V.S. Lakshmanan, Professor of Computer Science at the University of British Columbia, says: “Recent advances in generative AI, particularly those powered by large language models such as ChatGPT, make it easier than ever to create articles at great speed and significant volume, raising the challenge of detecting misinformation and countering its spread at scale and in real time.” Professor Lakshmanan talks about harnessing the same tech to detect fraud and misinformation. So there is hope, of course, and other ingenious ways to turn the same tech back against the bad actors, like this AI revelation that keeps phone scammers on the line far longer than they had planned.
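As a toy illustration of that defensive idea (our own sketch, not taken from Professor Lakshmanan’s article), here is the shape of a scam-text classifier built with scikit-learn. A real system would need large labelled corpora, adversarial testing and constant retraining, but the principle is the same: use the machinery of statistics and language models to score content, not just to generate it.

```python
# A toy scam-text classifier: TF-IDF features feeding logistic regression.
# The tiny training set below is purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Illustrative examples: 1 = scam-like, 0 = legitimate.
messages = [
    "Your account is suspended, verify your password here immediately",
    "Hi Mum, I lost my phone, can you transfer money to this new number",
    "Urgent invoice attached, pay within 24 hours to avoid penalty",
    "Lunch at midday tomorrow still works for me",
    "Here are the slides from this morning's meeting",
    "The quarterly report is ready for your review",
]
labels = [1, 1, 1, 0, 0, 0]

# Fit the pipeline: vectorise word and bigram features, then classify.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

# Score a new message: probability that it is scam-like.
new_message = "We detected unusual activity, confirm your password now"
print(model.predict_proba([new_message])[0][1])
```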

And with all of that progress, there is also the risk of fatigue. As the saying goes, ‘the more things change, the more they stay the same’. That sentiment comes from the fact that the longer we’re around, the more we see change as a constant, and we can fall into the trap of believing that nothing really changed after all. By extension, we may feel there’s no need to do things differently from what we’ve always done. But that is a foolish and perilous wire to walk, especially when it’s your customers’ businesses and their employees’ livelihoods that you’re entrusted with.

As the title of Satya Nadella’s book urges, ‘Hit Refresh’: do so regularly to challenge your thinking about what’s best for your customers’ protection. Donald Sull made a similar point as far back as 2005 in his HBR article ‘Strategy as Active Waiting’, which argues: ‘Successful executives who cut their teeth in stable industries or in developed countries often stumble after entering more volatile markets. They falter, in part, because they mistakenly believe they can gaze deep into the future and draft a long-term strategy that will confer on them a sustainable competitive advantage. But visibility in volatile markets is sharply limited because so many different variables are in play. Uncertainty would be manageable if only one thing changed while the rest remained fixed, but of course business is rarely so simple. In volatile markets, many variables are individually uncertain, and they interact with one another to create unexpected outcomes.’

Sull calls that unpredictability the “fog of the future”, a concept that is even more relevant now, as the rate of change continues to accelerate and the fog becomes the norm. Surely none of us is stubborn and foolish enough to still believe the world is as predictable as it once was, so embrace the ‘Fog of the Future’ and encourage your customers to do the same. Accept that the world is changing fast, and that new risks are emerging every day that require new solutions.

Keeping Businesses Safe and Secure

Prevention is always better than a cure, and one of the best defences is to encourage businesses to proactively boost their cyber resilience so that threats never land in inboxes in the first place. The fact that a staggering 94% of malware attacks are delivered by email makes email an extremely important vector for businesses to fortify.

No one vendor can stop all email threats, so it’s crucial to remind customers that if they are using Microsoft 365, they should also have a third-party email security specialist in place to mitigate their risk, for example a third-party cloud email solution like MailGuard.

MailGuard provides a range of solutions to keep businesses safe, from email filtering to email continuity and archiving. Speak to your customers today to ensure they’re prepared, and get in touch with our team to discuss strengthening your customers’ Microsoft 365 security.

Talk to us

MailGuard's partner blog is a forum to share information; we want it to be a dialogue. Reach out to us and tell us what your customers need so we can serve you better. You can connect with us on social media or call us and speak to one of our consultants.  

Australian partners, please call us on 1300 30 65 10

US partners, call 1 888 848 2822

UK partners, call 0800 404 8993

We’re on Facebook, Twitter and LinkedIn. 

Keep Informed with Weekly Updates