MailGuard | Jul 25, 2023 | 15 min read

RISE OF THE AI: REMINDER IT’S A SECURITY ‘COPILOT’, NOT AN ‘AUTOPILOT’

After sifting through a trove of announcements from Microsoft recently, and following Inspire last week, it’s no surprise that all roads seemed to lead to AI (or GPT, or Bing Chat, or all of the above). Let’s face it, the hype around AI and GPT is certainly not new, but the real danger is that, in the rush to embrace AI, business leaders and decision makers are so entranced by the possibilities that they ignore the warning signs. Drunk on the Kool-Aid, with little pause for thought to consider what could go wrong.

And so, it was refreshing to watch the Security Copilot demo video from Microsoft and see the warning, about two minutes in, that ‘AI-generated content can make mistakes’. As the video’s presenter, Holly Stewart, Director of Security Research at Microsoft, says, ‘Security Copilot doesn’t always get everything right. Here it has generated a reference to Windows 9, which doesn’t exist. AI-generated content can make mistakes.’


Excerpt pictured: Microsoft Security Copilot Demo Video

It’s hard to believe that ChatGPT launched less than a year ago, on 30 November 2022, and most of us have only been exploring its possibilities for a matter of months. After a few early missteps, its creators were quick to point out that it was only a beta, trained on data sets up to 2021. In the background, though, there have been numerous tweaks and adjustments, and already a new iteration, with OpenAI promising GPT-4 is its ‘most advanced system, producing safer and more useful responses’, and claiming ‘GPT-4 can solve difficult problems with greater accuracy, thanks to its broader general knowledge and problem solving abilities.’

Fast Company says ‘We’re in the eighth month of the generative AI arms race, a flurry of activity that was kicked off by the release of OpenAI’s ChatGPT in late 2022.’

It reports that ‘after eight months in the limelight, OpenAI doesn’t look nearly as unassailable as it did just three months ago. A growing number of developers who rely on OpenAI’s models have in recent weeks observed a decrease in the speed and accuracy of the models’ output (OpenAI has denied that performance is degrading). This is almost certainly related to a dearth of available computing power for running the company’s models. OpenAI, whose models run on Microsoft Azure servers, no longer enjoys the access to computing power that gave it its initial lead in the LLM race.

A well-placed source tells me that Microsoft executives (including Satya Nadella) now meet weekly with OpenAI to manage the server resources allocated to running the OpenAI LLMs. OpenAI is likely asking Microsoft for more GPU power, while Microsoft is no doubt asking OpenAI to find ways to economize. Microsoft is so concerned about the compute shortage that it has begun signing deals with smaller cloud startups to access more servers suited to AI.’

And others are entering the fray: ‘Meta, Google, and Apple have just as much money, as well as their own chip designs, for AI work. And that’s not the only problem now rearing up against OpenAI. Meta just released a new open-source (free and available) LLM called Llama 2 that may rival OpenAI’s GPT-4 model. Apple is also now reportedly developing its own ChatGPT rival in hopes of catching up to OpenAI.’

The excitement is largely fuelled by advancements in natural language processing, which make it possible for computers or applications to understand human language and to engage with users conversationally. Prior to OpenAI’s ChatGPT, the best-known and most popular mass-market predecessors were virtual assistants like Google Assistant, Siri, and Alexa.

Amazon says ‘The GPT models, and in particular, the transformer architecture that they use, represent a significant AI research breakthrough. The rise of GPT models is an inflection point in the widespread adoption of ML because the technology can be used now to automate and improve a wide set of tasks ranging from language translation and document summarization to writing blog posts, building websites, designing visuals, making animations, writing code, researching complex topics, and even composing poems. The value of these models lies in their speed and the scale at which they can operate.’

Commercial minds are racing to show how this great leap forward can be leveraged to expand addressable markets, to drive innovation and productivity gains, and ultimately to boost their bottom lines.

One of the earliest battle lines is playing out in what has been characterised as the writers’ and actors’ strike against AI, with the Writers Guild of America and the SAG-AFTRA actors’ union taking on the Alliance of Motion Picture and Television Producers (AMPTP). One of the main concerns in the dispute is the studios’ ability to use an actor’s AI-generated likeness.

The dispute coincides with the new season of Black Mirror on Netflix, whose first episode, ‘Joan is Awful’, portrays a woman whose likeness has been hijacked by a streaming service without her consent, in a dramatization featuring Salma Hayek. According to reports, the episode was too close to home for many in the industry and has helped to fuel what is now an emotionally charged stand-off, with one Hollywood exec going so far as to say that “the endgame is to allow things to drag on until union members start losing their apartments and losing their houses”.

SAG-AFTRA president Fran Drescher, known to most as ‘The Nanny’, said, “If we don’t stand tall right now, we are all going to be in trouble, we are all going to be in jeopardy of being replaced by machines.”

To say that the negotiations are rocky would be an understatement. Duncan Crabtree-Ireland, SAG-AFTRA’s chief negotiator, said, “This ‘groundbreaking’ AI proposal that they gave us yesterday, they proposed that our background performers should be able to be scanned, get one day’s pay, and their companies should own that scan, their image, their likeness and should be able to use it for the rest of eternity on any project they want, with no consent and no compensation. So, if you think that’s a groundbreaking proposal, I suggest you think again.”

However, AMPTP spokesperson Scott Rowe sent out a statement in response denying the claims made during the SAG-AFTRA’s press conference. It said, “The claim made today by SAG-AFTRA leadership that the digital replicas of background actors may be used in perpetuity with no consent or compensation is false. In fact, the current AMPTP proposal only permits a company to use the digital replica of a background actor in the motion picture for which the background actor is employed. Any other use requires the background actor’s consent and bargaining for the use, subject to a minimum payment.”

The dispute demonstrates the Pandora’s box that AI and GPT represent, and underlines Amazon’s claim that recent advancements in GPT are indeed an inflection point for many industries in how the technology is used, and an upheaval for the ecosystems that surround them.

On a more positive note, Judson Althoff, EVP & Chief Commercial Officer at Microsoft, said following Microsoft Inspire last week: ‘The speed and scale of generative AI technology adoption is staggering and continues opening doors for organizations to imagine new ways to solve challenges. For those fortified with the Microsoft Cloud, advanced AI technology is already unlocking innovation and delivering greater business value, such as elevating customer and employee experiences; transforming patient care; helping scale and optimize operations; and better serving communities. As leaders across organizations seek to keep pace with today’s advancements, they turn to Microsoft and our partner community for comprehensive industry expertise, scale, and copilot capabilities.

We are also working closer than ever before with our partners to help them realize their own AI transformation to fuel business growth and profitability, while scaling go-to-market strategies.’

And Althoff’s blog outlines a plethora of progressive, Microsoft-aligned businesses that are already doing just that.

The speed of change will be immense, and the White House has moved to get ahead of it too, with President Biden convening several summits and working groups with corporate leaders, especially those in tech, including Alphabet, Meta, Microsoft, Amazon and Apple, remarking, ‘Over the past year, my administration has taken action to guide responsible innovation.’

At one of the first such meetings, President Biden told the group, ‘We’ll see more technology change in the next 10 years, or even in the next few years, than we’ve seen in the last 50 years. That has been an astounding revelation to me, quite frankly. Artificial intelligence is going to transform the lives of people around the world. The group here will be critical in shepherding that innovation with responsibility and safety by design to earn the trust of Americans.’

The administration introduced a first-of-its-kind AI Bill of Rights, saying that ‘Among the great challenges posed to democracy today is the use of technology, data, and automated systems in ways that threaten the rights of the American public. Too often, these tools are used to limit our opportunities and prevent our access to critical resources or services. These problems are well documented. In America and around the world, systems supposed to help with patient care have proven unsafe, ineffective, or biased. Algorithms used in hiring and credit decisions have been found to reflect and reproduce existing unwanted inequities or embed new harmful bias and discrimination. Unchecked social media data collection has been used to threaten people’s opportunities, undermine their privacy, or pervasively track their activity—often without their knowledge or consent.’

Speaking at a meeting with tech leaders, President Biden explained that ‘In February, I signed an executive order to direct agencies to protect the public from algorithms that discriminate. In May, we unveiled a new strategy to establish seven new AI research institutes to help drive breakthroughs in responsible AI innovation. And today, I’m pleased to announce that these seven companies have agreed to voluntary commitments for responsible innovation. These commitments, which the companies will implement immediately, underscore three fundamental principles: safety, security, and trust.

First, the companies have an obligation to make sure their technology is safe before releasing it to the public. That means testing the capabilities of their systems, assessing their potential risk, and making the results of these assessments public.

Second, companies must prioritize the security of their systems by safeguarding their models against cyber threats and managing the risks to our national security and sharing the best practices and industry standards that are necessary.

Third, the companies have a duty to earn the people’s trust and empower users to make informed decisions — labeling content that has been altered or AI-generated, rooting out bias and discrimination, strengthening privacy protections, and shielding children from harm.

And finally, companies have agreed to find ways for AI to help meet society’s greatest challenges — from cancer to climate change — and invest in education and new jobs to help students and workers prosper from the opportunities — and there are enormous opportunities — of AI.

These commitments are real, and they’re concrete. They’re going to help industry fulfill its fundamental obligation to Americans to develop safe, secure, and trustworthy technologies that benefit society and uphold our values and our shared values.’

The fact sheet, under the heading ‘Ensuring Products are Safe Before Introducing Them to the Public’, outlines vital commitments around security, stating that ‘The companies commit to internal and external security testing of their AI systems before their release. This testing, which will be carried out in part by independent experts, guards against some of the most significant sources of AI risks, such as biosecurity and cybersecurity, as well as its broader societal effects.

The companies commit to sharing information across the industry and with governments, civil society, and academia on managing AI risks. This includes best practices for safety, information on attempts to circumvent safeguards, and technical collaboration.’

It also affirms that the companies will be ‘Building Systems that Put Security First’.

‘The companies commit to investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights. These model weights are the most essential part of an AI system, and the companies agree that it is vital that the model weights be released only when intended and when security risks are considered.

The companies commit to facilitating third-party discovery and reporting of vulnerabilities in their AI systems. Some issues may persist even after an AI system is released and a robust reporting mechanism enables them to be found and fixed quickly.’

And importantly, under the banner of ‘Earning the Public’s Trust’, the companies committed to developing watermarking technology so that users know when content is AI-generated (a simple provenance-tagging sketch follows the excerpt below).

‘The companies commit to developing robust technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system. This action enables creativity with AI to flourish but reduces the dangers of fraud and deception.

The companies commit to publicly reporting their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use. This report will cover both security risks and societal risks, such as the effects on fairness and bias.

The companies commit to prioritizing research on the societal risks that AI systems can pose, including on avoiding harmful bias and discrimination, and protecting privacy. The track record of AI shows the insidiousness and prevalence of these dangers, and the companies commit to rolling out AI that mitigates them.  

The companies commit to develop and deploy advanced AI systems to help address society’s greatest challenges. From cancer prevention to mitigating climate change to so much in between, AI—if properly managed—can contribute enormously to the prosperity, equality, and security of all.’
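
To make that watermarking commitment concrete, here is a minimal sketch of one possible provenance-tagging approach: the generator attaches a cryptographic tag to its output, and any later alteration of the content invalidates the tag. This Python example is purely illustrative; the provider key and tag format (PROVIDER_KEY, [ai-generated:…]) are assumptions for the sketch, and production schemes, such as statistical watermarks woven into the generated text itself or signed provenance metadata, are far more sophisticated.

    import hmac
    import hashlib

    # Hypothetical provider-held secret for this sketch; a real scheme would
    # use an asymmetric signing key so anyone can verify without the secret.
    PROVIDER_KEY = b"example-provider-signing-key"

    def tag_content(text: str) -> str:
        """Append a provenance tag marking the text as AI-generated."""
        digest = hmac.new(PROVIDER_KEY, text.encode("utf-8"), hashlib.sha256).hexdigest()
        return f"{text}\n[ai-generated:{digest}]"

    def verify_tag(tagged: str) -> bool:
        """Return True only if the tag matches the content exactly."""
        body, _, tag_line = tagged.rpartition("\n")
        if not (tag_line.startswith("[ai-generated:") and tag_line.endswith("]")):
            return False
        claimed = tag_line[len("[ai-generated:"):-1]
        expected = hmac.new(PROVIDER_KEY, body.encode("utf-8"), hashlib.sha256).hexdigest()
        return hmac.compare_digest(claimed, expected)

    # Verification fails as soon as the content is altered.
    tagged = tag_content("This summary was produced by a language model.")
    print(verify_tag(tagged))                            # True
    print(verify_tag(tagged.replace("model", "human")))  # False

The point is simply that the commitment is about verifiable provenance: AI-generated content should carry a machine-checkable signal of its origin that tampering destroys.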

And beyond the USA, it states that, ‘…the Administration will work with allies and partners to establish a strong international framework to govern the development and use of AI. It has already consulted on the voluntary commitments with Australia, Brazil, Canada, Chile, France, Germany, India, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, New Zealand, Nigeria, the Philippines, Singapore, South Korea, the UAE, and the UK.’

NIST has also developed an AI Risk Management Framework. It states ‘Artificial intelligence (AI) technologies have significant potential to transform society and people’s lives – from commerce and health to transportation and cybersecurity to the environment and our planet. AI technologies can drive inclusive economic growth and support scientific advancements that improve the conditions of our world.

AI technologies, however, also pose risks that can negatively impact individuals, groups, organizations, communities, society, the environment, and the planet. Like risks for other types of technology, AI risks can emerge in a variety of ways and can be characterized as long- or short-term, high- or low-probability, systemic or localized, and high- or low-impact.

The AI RMF refers to an AI system as an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy (Adapted from: OECD Recommendation on AI:2019; ISO/IEC 22989:2022).

While there are myriad standards and best practices to help organizations mitigate the risks of traditional software or information-based systems, the risks posed by AI systems are in many ways unique (See Appendix B). AI systems, for example, may be trained on data that can change over time, sometimes significantly and unexpectedly, affecting system functionality and trustworthiness in ways that are hard to understand. AI systems and the contexts in which they are deployed are frequently complex, making it difficult to detect and respond to failures when they occur. AI systems are inherently socio-technical in nature, meaning they are influenced by societal dynamics and human behavior. AI risks – and benefits – can emerge from the interplay of technical aspects combined with societal factors related to how a system is used, its interactions with other AI systems, who operates it, and the social context in which it is deployed.

These risks make AI a uniquely challenging technology to deploy and utilize both for organizations and within society. Without proper controls, AI systems can amplify, perpetuate, or exacerbate inequitable or undesirable outcomes for individuals and communities. With proper controls, AI systems can mitigate and manage inequitable outcomes.’

None of us can claim to be Nostradamus when it comes to what the future holds for AI, but we must approach the opportunities before us with eyes wide open to the risks as well. These early steps provide reassuring and enlightening guard rails for discussions with your customers about their own AI journeys and what lies ahead.

Keeping Businesses Safe and Secure

Prevention is always better than a cure, and one of the best defences is to encourage businesses to proactively boost their cyber resilience to avoid threats landing in inboxes in the first place. The fact that a staggering 94% of malware attacks are delivered by email makes email an extremely important vector for businesses to fortify.

No one vendor can stop all email threats, so it’s crucial to remind customers that if they are using Microsoft 365, they should also have a third-party email security specialist in place to mitigate their risk, such as a third-party cloud email solution like MailGuard.

MailGuard provides a range of solutions to keep businesses safe, from email filtering to email continuity and archiving solutions. Speak to your customers today to ensure they’re prepared, and get in touch with our team to discuss strengthening your customer’s Microsoft 365 security.   

Talk to us

MailGuard's partner blog is a forum to share information; we want it to be a dialogue. Reach out to us and tell us what your customers need so we can serve you better. You can connect with us on social media or call us and speak to one of our consultants.  

 

Australian partners, please call us on 1300 30 65 10  

US partners, please call 1888 848 2822

UK partners, please call 0 800 404 8993

We’re on Facebook, Twitter and LinkedIn

Keep Informed with Weekly Updates