Implementing AI is not without risk, but how exactly are you meant to mitigate those risks?
I am a huge advocate for the use of AI both within products and in business operations, but its adoption increases the risk of fraud or malicious activity impacting your business.
Businesses should work with AI when they understand and can protect themselves against potential problems that may arise.
The addition of AI into business heightens the board's responsibility to educate itself on risk and to implement governance. The board must evaluate how AI aligns with the company's long-term vision and overall business strategy, and it is more crucial than ever that it ensures ethical implementation.
Read on to understand the risks and how to mitigate them while implementing AI into your business.
Potential inbound risks
AI increases the likelihood of inbound risks, such as:
- Security threats – Increased volume and sophistication of attacks from generative AI-enabled malware.
- Third-party risk – Challenges around how and where third parties are using generative AI, which create potentially unknown exposures.
- Malicious use – Compelling deepfakes of company representatives or branding that result in reputational and trust damage.
- IP infringement – Intellectual property (IP) may be scraped into training engines, as these can be accessed by anyone with the technology.
As part of the board, you need to be aware of these risks and follow the steps below to mitigate them.
Step 1: A focused sprint
Businesses should start with a focused sprint to determine:
- Potential risks – What risks are the business exposed to? Take into account the business ecosystem and third-party partnerships as well.
- Readiness – How mature are the business's prevention, detection and response capabilities for these potential risks?
Understanding where you stand first is important for determining your next steps.
Step 2: Create a roadmap
Based on the focused sprint, the business will be able to develop a roadmap to improve its readiness for the identified risks.
- Categorise risks – At the start, the business should create a risk matrix to categorise risks and ensure objective assessment.
- Human action – Humans should remain in the loop, particularly when it comes to monitoring and adding new information into the model.
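To make the risk-categorisation step concrete, here is a minimal sketch of a likelihood-by-impact risk matrix. The risk names come from the inbound risks listed above; the 1–5 scores, the thresholds and the review tiers are all assumptions for illustration, not recommendations.

```python
# Illustrative risk matrix: score each risk by likelihood x impact
# and assign a review tier. All scores and thresholds are assumptions.

RISKS = {
    "Security threats": (4, 5),  # (likelihood 1-5, impact 1-5)
    "Third-party risk": (3, 4),
    "Malicious use":    (2, 5),
    "IP infringement":  (3, 3),
}

def tier(likelihood: int, impact: int) -> str:
    """Map a likelihood x impact score to a hypothetical review tier."""
    score = likelihood * impact
    if score >= 15:
        return "High - board-level review"
    if score >= 8:
        return "Medium - steering group review"
    return "Low - monitor"

# Print risks in descending order of score for a simple prioritised view.
for name, (l, i) in sorted(RISKS.items(), key=lambda kv: -(kv[1][0] * kv[1][1])):
    print(f"{name}: score {l * i}, {tier(l, i)}")
```

A spreadsheet serves the same purpose; the point is that scoring criteria are agreed up front, so assessments stay objective as new risks are added.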
Step 3: Changes to governance
When implementing the roadmap, the board will need to decide how best to govern the adoption of AI.
Here are some examples we recommend:
- Steering group – Businesses should set up a cross-functional steering group that meets monthly to make decisions about associated risks and review strategies.
- AI policies – The executive team and board should agree on the guiding principles for adopting AI into the business and review existing policies, such as marketing, employment and moderation.
- AI culture – Training on responsible AI should be provided throughout the business, building a clear understanding of the ethics and risks of this technology.
“With major potential uplift in productivity at stake, working to scale gen AI sustainably and responsibly is essential in capturing its full benefits.”
McKinsey