Artificial intelligence (AI) is already part of our daily lives. We use it as a tool to speed up the pace of day-to-day work. However, as beneficial as it is, there are some obvious gaps that need to be closed for everyone’s safety and security.
So, it comes as no surprise that the European Union’s Artificial Intelligence (AI) Act was passed in August 2024, and that South Africa’s Department of Communications and Digital Technologies (DCDT) published our own draft version shortly after. The goal is to balance safety and compliance with the benefits, and here’s why.
AI’s Rapid Rise, Benefits, and Booming Growth
AI might have started its journey back in the 1950s, but the real boom hit in the 2010s, setting the stage for its current widespread presence across industries. Today, we’re living in an ‘AI-powered world’, with countries like China, Nigeria, India, the USA, UK, and Australia all moving towards national policies and laws to regulate AI. It’s a pivotal moment for this game-changing technology, signalling that AI is now deeply embedded in our everyday lives.
According to a report by Grand View Research, the global AI market was valued at a whopping USD 196.63 billion in 2023. With tech giants pouring resources into research and innovation, along with advances in computational power and data availability, AI algorithms and models are advancing at a breakneck pace. By 2030, the market is set to skyrocket to over USD 1.81 trillion. AI tools are being rapidly adopted across the globe, especially in industries like Financial Services, automotive, manufacturing, retail, and healthcare.
When it comes to generative AI’s impact on productivity, McKinsey’s research found that it could add the equivalent of $2.6 trillion to $4.4 trillion annually, particularly for customer operations, marketing and sales, software development, and R&D. These findings show that the impact of AI on productivity could be between 15% and 40% – that is significant.
The 10 Risks of AI
However, as with all other emerging technologies, AI comes with its fair share of risks, many of which we’re still discovering. But as it grows and spreads, some risks are already clear, and require innovative solutions to mitigate them.
Here are some of the key risks already identified.
- Bias and Discrimination: AI models are trained on data, and if that data contains biases (racial, gender, socio-economic, etc.), the AI can perpetuate and even amplify those biases. This could lead to unfair outcomes in areas like hiring, lending, or law enforcement.
- Lack of Transparency: AI algorithms, especially deep learning models, often function as “black boxes.” This makes it difficult for users to understand how they arrive at decisions. This lack of transparency can cause trust issues and make it hard to identify errors or biases.
- Security Vulnerabilities: AI systems can be hacked or manipulated. For example, adversarial attacks can deceive AI by slightly altering the input data. This can cause the AI to make incorrect decisions. This is particularly dangerous in areas like cybersecurity, healthcare, and autonomous vehicles.
- Data Privacy Concerns: AI requires vast amounts of data to function effectively. This raises concerns about how that data is collected, stored, and used. Misuse of personal data or breaches can lead to significant privacy violations.
- Job Displacement: Automation powered by AI is replacing many traditional jobs. This is particularly evident in industries like manufacturing, retail, and even white-collar sectors like financial services and customer support. This raises concerns about unemployment and economic inequality.
- Ethical Dilemmas: AI triggers tough ethical questions, especially in areas like autonomous weapons, surveillance, and healthcare. The potential for AI to be used in ways that conflict with human rights and ethical norms is a growing concern.
- Dependence and De-skilling: Over-reliance on AI can lead to human skills atrophying, as people may depend too much on AI for tasks like decision-making, driving, or creative processes.
- Regulatory and Legal Issues: As AI evolves, legal frameworks haven’t fully caught up. Issues around accountability, liability, and compliance with existing laws can create a lot of uncertainty for businesses and individuals using AI.
- Misalignment of Goals: If AI systems are not carefully aligned with human intentions or objectives, they may pursue goals in ways that are harmful or unintended. This is especially true for advanced AI systems capable of autonomous decision-making.
- Failure of AI Models: Like all software, AI models can fail, especially when applied to situations they were not designed or trained for. This can lead to erroneous or even dangerous outcomes in critical areas like healthcare, aviation, or finance.
These risks highlight the need for robust risk management strategies, including regulation, transparency, accountability, and safety protocols to ensure that the benefits of AI can be maximized while minimizing potential harm.
The Urgent Need for Stringent AI Regulations
Guy Krige, Executive Risk Consultant at ESCROWSURE, explains, “We’re seeing a surge in AI-driven solutions, particularly in industries like Financial Services. It’s crucial that these are fully integrated into business continuity and risk mitigation strategies.”
Guy continues, “At its core, AI models are software, usually provided by third-party vendors. These are embedded into company systems to boost services and operations. But, like any third-party software, AI opens businesses up to specific risks. That’s why both globally and locally, there’s increasing attention on the benefits of software escrow for AI to protect operations against catastrophe, and ensure business continuity.”
Which Software Escrow Company in South Africa Offers AI Software Escrow?
First, let’s unpack software escrow.
Software escrow, or source code escrow, is a trusted global practice used to manage third-party risks. It involves securely storing a software product’s intellectual property, be it source code, technical documentation, or any other important information, which can be released to the user under specific, pre-agreed conditions.
The top software escrow company in South Africa is ESCROWSURE.
We have delivered these services in South Africa for over 20 years, and have experience and expertise that work in your favour. We are also the only provider in the southern hemisphere with ISO 27001:2022 certification, the international standard for information security and third-party software risk management.
How does software escrow work for AI?
Most AI-driven solutions are supplied by third-party vendors. This means that users, such as banks or insurance companies, have no direct access to the vendor’s AI assets and no way to protect them. This creates a significant vulnerability for AI users.

Krige explains, “Software escrow for AI applications safeguards against the discontinuation of third-party AI services and ensures access to the source code and related hosting information.”
What happens when an AI vendor stops operating?
If an AI provider fails or unexpectedly discontinues its services, the software escrow release conditions ensure access to the source code and essential hosting details. This reduces the user’s dependency on a single AI software provider’s stability and gives them greater confidence in their long-term AI operations. It also allows the user to maintain and operate the AI software independently, or to transition smoothly to an alternative provider.
Software Escrow Builds Trust in AI-Dependent Businesses
By offering access to AI source code and models, software escrow helps build resilience for AI services, particularly in Financial Services.
“Software escrow plays a critical role in helping both vendors and users meet compliance requirements,” says Guy Krige. “For example, guidelines from South Africa’s Financial Sector Conduct Authority already highlight the importance of continuity planning and risk management in technology outsourcing. As South Africa’s AI regulatory framework takes shape, we’re likely to see specific rules around safeguarding customer data and ensuring the continuity of operations that rely on AI-powered models.”
Prepare for the Unforeseen with Software Escrow for AI Applications
As AI reshapes businesses, the incorporation of software escrow into risk management strategies is becoming more vital than ever. Krige adds, “AI technologies evolve at a rapid pace, and companies must be able to adapt and maintain their systems. For Financial Services firms, the priority is securing the long-term viability of their AI service. Software escrow ensures that AI-driven systems remain operational, no matter what happens with the third-party provider. This not only protects the significant investment businesses make in AI technology but also strengthens the resilience of the entire AI ecosystem.”
Don’t wait until disaster strikes.
Talk to the experts at ESCROWSURE today.