The hype around AI is impossible to avoid, and it’s understandable that customers and partners alike are eager to get started rather than be left behind. Artificial intelligence (AI) is pervasive, and integrations into productivity tools have paved the way for a new era of innovation. Generative AI tools, powered by advanced machine learning algorithms, are revolutionizing the way we work, collaborate, and create.
Among these groundbreaking tools, ChatGPT and Microsoft 365 Copilot (a collaboration with OpenAI) stand out as shining examples of the transformative potential of AI in enhancing productivity. And of course, with an AI arms race in full swing, there are many more examples and applications.
Generative AI tools leverage deep learning algorithms to generate content, automate repetitive tasks, and provide intelligent assistance to users. The tools are trained on vast amounts of data, enabling them to understand context, recognize patterns, and generate human-like outputs. From writing code and designing graphics to composing emails and drafting documents, generative AI tools offer a wide range of applications across various industries and domains.
However, the age of AI and the race to embrace its powers also ushers in a new range of risks that businesses must consider. Despite being a leader in the AI space and promising a security-first mindset, Microsoft recently stumbled with Recall, an example of how quickly things can go wrong without the necessary forethought.
According to The Verge, ‘Recall uses local AI models built into Windows 11 to screenshot mostly everything you see or do on your computer and then give you the ability to search and retrieve items you’ve seen. An explorable timeline lets you scroll through these snapshots to look back on what you did on a particular day on your PC.’ In its haste to deploy, Microsoft missed critical security flaws, provoking a backlash from the tech community. The forthcoming AI-powered search service was found to capture snapshots of a PC every 5 seconds and store that data unencrypted on the device, creating significant security risks for users.
With that recent furore in mind and leaving aside some of the more extreme predictions that accompany AI, like fears for the end of humanity, let’s consider six very real and pragmatic AI risks that businesses need to consider. We hope they’re a jumping-off point for conversations with your customers:
1. The Validity of the Data

AI requires vast quantities of data, and many applications have difficulty sourcing and validating sufficient volumes. Ensuring the data is both accurate and relevant for users is critical. Testing and verifying data is one of the biggest challenges for AI and machine learning (ML) models; there are scores of examples of new applications delivering false and inaccurate results simply because the models were trained on erroneous data. This risk exists when a business develops its own AI application, but it also exists when a business depends on AI tools developed by a third party. If the data is invalid or sub-par, the outputs will be compromised, and many AI applications struggle with transparency.
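To make that risk concrete, below is a minimal, illustrative sketch in Python of the kind of basic data-quality checks a team might run before training a model or trusting a vendor’s. The input file, column names, and thresholds are hypothetical assumptions, not a prescription:

```python
# Illustrative pre-training data checks for a tabular dataset.
# Column names ("age", "label") and thresholds are hypothetical.
import pandas as pd

def validate_training_data(df: pd.DataFrame) -> list[str]:
    """Return a list of data-quality issues found before training."""
    issues = []

    # Completeness: flag columns with a high share of missing values.
    for col in df.columns:
        missing = df[col].isna().mean()
        if missing > 0.05:
            issues.append(f"{col}: {missing:.0%} missing values")

    # Uniqueness: duplicated rows can silently skew a model.
    dupes = df.duplicated().sum()
    if dupes:
        issues.append(f"{dupes} duplicated rows")

    # Plausibility: domain-specific range checks (example: age).
    if "age" in df.columns and not df["age"].between(0, 120).all():
        issues.append("age values outside the plausible 0-120 range")

    # Label balance: a heavily skewed label hints at sampling bias.
    if "label" in df.columns:
        share = df["label"].value_counts(normalize=True).max()
        if share > 0.95:
            issues.append(f"dominant label covers {share:.0%} of rows")

    return issues

if __name__ == "__main__":
    df = pd.read_csv("training_data.csv")  # hypothetical input file
    for issue in validate_training_data(df):
        print("WARNING:", issue)
```

Checks like these won’t prove a dataset is fit for purpose, but they catch the most obvious problems early and cheaply.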
2. Expertise

In the race to embrace new technologies, the human element is often overlooked or undervalued. Many AI applications are employed to perform tedious tasks where the results produced, and any errors, are relatively innocuous. However, that’s not always the case.
As AI applications push new frontiers, and as organizations grow comfortable delegating more challenging and sophisticated tasks to them, there is a risk that businesses let their guard down precisely where the stakes are highest.
Business-critical and highly sensitive tasks are good examples, including tasks that impact data security or human health. For such tasks, it is imperative to ask if the business has the appropriate expertise and processes in place to:
- Identify, procure, and quality control the right tools for those tasks.
- Design the processes around the use of those tools.
- Monitor inputs and outputs to ensure that the AIs and surrounding people and processes are fit for purpose and generating consistent and accurate results.
Consider a scenario where the business seeks to embrace AI tools in pursuit of cost-cutting initiatives. While AI tools may be a relatively cheap and efficient means to an end, if the staff that remain in the business after cuts don’t have sufficient expertise to question the accuracy of the tools, or indeed to use them correctly, then the outcomes may be disastrous, albeit unforeseen and unintended.
Ironically, that same scenario points to a closely related risk: as skills and expertise bleed from a business, job losses can follow, and over time the workforce’s capabilities diminish further. Some estimates put the toll as high as 45 million Americans, a quarter of the workforce, losing their jobs to AI automation, equating to a billion people worldwide over the next decade. Businesses need to tread cautiously, as the adoption of AI can be a slippery slope.
3. Protecting Proprietary IP and Legal Liability

AI is a new and emerging field, and the legal landscape surrounding it is uncertain. Laws and regulations exist, but many are yet to be tested in a court of law, with relatively few precedents to refer to. Furthermore, many unintended consequences are still unknown.
Some argue that AI disintermediates the owners of IP. More specifically, if an AI tool is used to produce results or to create a good or service, where does the IP reside? In the event of an accident, error, or legal breach, who is liable? Is the provider of the data upon which the tool was trained implicated, for better or worse? Did they even consent to their data being used? Is it the vendor that created the tool: those that developed the algorithms, sourced and validated the data, and trained the models? Or is it the person or organization employing the tool? Many may argue it’s the latter; however, in matters of law, things are rarely so simple. And if a model consumes your IP, what recourse do you have, and how can you prove it?
Consider what disclosures and assurances are provided regarding the tools and the data upon which they are trained. If there are inherent biases or errors in the data and/or the results, who is responsible? And, what if the legal owner of the data consumed by the tool has not granted consent for their data to be used for that specific application?
How transparent are the tools and the data that they are consuming?
The range of applications for AI is so diverse that it is near impossible to anticipate how the legal landscape will evolve. Take the simple example of an AI application trained on public surveillance footage. Have the subjects in the footage consented? What if the model is trained on mass data consumption across multiple jurisdictions? And consider deepfakes, where the same models and tools can be employed for legal or illegal purposes, ethical or unethical ones, with laws still to be passed as new applications and insidious uses emerge.
Or a more subtle scenario – what if a business offers advice or services based on an AI tool that is flawed? AI can be a black box, and those risks may still be unknown.
There are calls for a coordinated global regulatory approach to AI between governments and industry. However, given the slow pace of regulatory change, the manoeuvring of commercial interests, political posturing, and the pervasiveness of technology across borders, these efforts will be slow.
4. Costs

In every business, cost is a key consideration, and new technologies like AI applications are no exception. From procurement and licensing to maintenance and staffing, the costs of AI services can accumulate fast, and questions about return on investment (ROI) will arise.
Indeed, studies suggest that by 2030 100% of IT spending will be linked in some way to AI, and organizations that embrace AI now for aggressive growth are forecast to be best placed to succeed.
Generative AI is becoming table stakes in most tech markets and those that thrive will align investments with customer outcomes, and re-invest productivity gains into product improvements and performance, for future strategic advantage.
The largest organizations on the planet are investing billions of dollars in AI, so those that embrace the technological opportunities will be in good company. However, there are no guaranteed returns, so buyer beware.
5. Data Privacy and Cybersecurity

As noted earlier, AI applications rely on the collection of large volumes of data, some of which may come from external sources and may include personal data needed to train models or personalize experiences. The more data the better, but that can lead to privacy concerns. Experts encourage strict adherence to privacy and ethical standards in the adoption of AI, going beyond anonymization to advanced encryption, including homomorphic encryption.
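Homomorphic encryption is a specialist undertaking, but even a simple safeguard such as pseudonymising direct identifiers before records reach an AI system can reduce exposure. Below is a minimal illustrative sketch; the field names and key handling are hypothetical assumptions, and a real deployment would need proper key management and a broader de-identification strategy:

```python
# Illustrative pseudonymisation of direct identifiers before records
# are used to train or prompt an AI system. Field names and the
# secret key below are hypothetical placeholders.
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-managed-secret"  # hypothetical key
PII_FIELDS = {"name", "email", "phone"}        # hypothetical schema

def pseudonymise(record: dict) -> dict:
    """Replace direct identifiers with keyed, irreversible tokens."""
    out = {}
    for field, value in record.items():
        if field in PII_FIELDS:
            token = hmac.new(SECRET_KEY, str(value).encode(),
                             hashlib.sha256).hexdigest()[:16]
            out[field] = token  # same input -> same token, so joins still work
        else:
            out[field] = value
    return out

print(pseudonymise({"name": "Jane Citizen",
                    "email": "jane@example.com",
                    "purchase": "laptop"}))
```

A keyed hash is used rather than a plain one so that tokens cannot be reversed by anyone without the key, while records about the same person can still be linked.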
Where the application stores information about an individual, that information may include sensitive details that need to be protected from the prying eyes of hackers and cybercriminals to avoid spear phishing attacks and data breaches.
The same is true for those using AI tools and applications to perform tasks: they should avoid exposing any sensitive, proprietary, or personal information.
Businesses should also be alert to more nuanced risks, like data poisoning and manipulation, where adversaries feed misinformation into systems for malicious reasons.
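Defences against poisoning range from supply-chain controls on training data to statistical screening. As a purely illustrative example of the latter, the sketch below flags training records that sit implausibly far from the bulk of the data; the threshold is an assumption, and sophisticated poisoning attacks can evade simple checks like this:

```python
# Illustrative screen for crude data poisoning: flag numeric training
# records far from the bulk of the data using a median-absolute-
# deviation (MAD) test. Threshold and data are hypothetical.
import numpy as np

def flag_outliers(values: np.ndarray, threshold: float = 6.0) -> np.ndarray:
    """Return a boolean mask of suspiciously extreme values."""
    median = np.median(values)
    mad = np.median(np.abs(values - median)) or 1e-9  # avoid divide-by-zero
    robust_z = 0.6745 * (values - median) / mad       # approx std-normal scale
    return np.abs(robust_z) > threshold

# Example: a handful of injected extreme values among normal readings.
readings = np.concatenate([np.random.normal(50, 5, 1000), [500, -400]])
suspicious = flag_outliers(readings)
print(f"{suspicious.sum()} of {readings.size} records flagged for review")
```

Flagged records would then go to a human for review rather than being silently dropped, since legitimate edge cases look a lot like attacks.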
6. Algorithmic Bias and Discrimination Risks

Finally, businesses should ensure that their use or development of AI applications, and the impact on goods and services, is as transparent and explainable as possible, to avoid algorithmic bias.
One known flaw of AI applications is their reliance on large existing data sets, and that data may have inherent flaws and biases that could lead to discrimination, intended or not.
Businesses must aim to implement measures that allow full disclosure of their processes and outcomes, and make all reasonable attempts to detect and mitigate any such discrimination or bias.
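One common starting point, among several competing fairness definitions, is to compare a model’s positive-outcome rates across groups (demographic parity). The illustrative sketch below uses hypothetical group and outcome labels; it is a first check, not a complete bias audit:

```python
# Illustrative fairness check: compare positive-outcome rates across
# groups (demographic parity). Group and outcome data are hypothetical.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame,
                           group_col: str,
                           outcome_col: str) -> float:
    """Largest difference in positive-outcome rates between groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    print(rates.to_string())  # per-group approval rates
    return float(rates.max() - rates.min())

decisions = pd.DataFrame({
    "group":    ["A"] * 50 + ["B"] * 50,
    "approved": [1] * 35 + [0] * 15 + [1] * 20 + [0] * 30,
})
gap = demographic_parity_gap(decisions, "group", "approved")
print(f"Demographic parity gap: {gap:.0%}")  # flag if above a set tolerance
```

A large gap doesn’t prove discrimination on its own, but it is exactly the kind of measurable, disclosable signal that supports the transparency described above.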
Conclusion
AI and generative AI tools represent a new era and enormous opportunities for businesses, but they do have pitfalls. Take time to discuss with your customers their various use cases and projects, the role that AI might play, and the risks outlined above.
Keeping Businesses Safe and Secure
Prevention is always better than a cure, and one of the best defences is to encourage businesses to proactively boost their cyber resilience so that threats never land in inboxes in the first place. With a staggering 94% of malware attacks delivered by email, email is an extremely important vector for businesses to fortify.
No one vendor can stop all email threats, so it’s crucial to remind customers that if they are using Microsoft 365 or Google Workspace, they should also have a third-party email security specialist in place to mitigate their risk, for example, a specialist third-party cloud email solution like MailGuard.
MailGuard provides a range of solutions to keep businesses safe, from email filtering to email continuity and archiving solutions. Speak to your customers today to ensure they’re prepared, and get in touch with our team to discuss fortifying your customers’ cyber resilience.
Talk to us
MailGuard's partner blog is a forum to share information; we want it to be a dialogue. Reach out to us and tell us what your customers need so we can serve you better. You can connect with us on social media or call us and speak to one of our consultants.
Australian partners, please call us on 1300 30 65 10
US partners call 1888 848 2822
UK partners call 0 800 404 8993
We’re on Facebook, Twitter and LinkedIn.