Nearly eight months after it was introduced, South Africa’s AI policy framework is set to open for public input this April under the Department of Communications and Digital Technologies (DCDT).
While the framework promotes ethical, inclusive AI development, innovation, talent development, and data protection, experts raise concerns about its slow progress.
“At the moment, there is a lot of uncertainty,” said Nerushka Bowan, a technology and privacy lawyer and the founder of LITT Institute. “The biggest task would be to provide a clear and actionable roadmap to guide the country’s AI policy direction and regulatory intent.”
South Africa, recognised as a frontrunner in Africa’s technology landscape, had been expected to lead the way in AI policy development. Instead, it is lagging, with no clear direction on how its AI policy will be implemented.
Without clear regulatory guidance, the country risks falling behind peers like Rwanda and missing opportunities for economic transformation and global competitiveness, said Daniel Novitzkas, the group director at Specno, a South African digital solutions company.
Rwanda approved its national AI policy in April 2023. The policy focuses on using AI to drive economic development and improve public services, and prioritises ethical AI development, investment incentives, and infrastructure expansion.
Without a comparable roadmap, South Africa risks missing out on the projected $1.5 trillion contribution of AI to Africa’s GDP by 2030.
Wendy Rosenberg, director and head of the digital media and electronic communications practice at Werksmans Attorneys, said that while South Africa’s AI framework covers critical areas like data protection, privacy, governance, and transparency, its delay is problematic.
“These issues are critical in ensuring AI development and deployment align with South Africa’s legal and ethical landscape,” said Rosenberg. “However, it is crucial to finalise the policy framework as it sets the foundation for detailed policies that will be established for various sectors.”
The need for clear AI regulation
Currently, industries like financial services and healthcare operate under sector-specific regulations, which already impose obligations on AI-related applications.
However, there is no overarching AI-specific regulation or guiding policy to provide clarity on government intent and future legal obligations, said Bowan.
Without a nationally supported AI framework policy, it becomes increasingly difficult to encourage entrepreneurs and attract foreign investment in AI infrastructure. Investors and developers often look for regulatory clarity before committing resources, and the absence of a definitive policy creates hesitation, stalling AI-driven economic transformation.
“We do not have many years to debate the way forward,” Bowan said. “Investors want to know what the landscape looks like. Early movers often gain a first-mover advantage.”
A lack of clarity does not just deter investors, it risks driving local AI talent abroad, where countries with well-defined AI policies, such as the U.S., U.K., and Canada, actively attract skilled professionals with funding incentives, research grants, and AI-friendly regulations.
“The people capable of building AI solutions for our economy need the right tools, information, and regulations to thrive here. Otherwise, we risk losing them to the US, Europe, and China,” said Novitzkas.
Addressing ethical and bias concerns
South Africa already has a strong foundation for AI regulation, thanks to the Protection of Personal Information Act (POPIA), which aligns with the European Union’s General Data Protection Regulation (GDPR).
However, Rosenberg pointed out that AI policies must go beyond data protection to address transparency, bias mitigation, and ethical AI use.
“AI systems use personal information for various purposes – training AI models, personalisation, and analytics,” Rosenberg said. “This makes transparency and user control vital.”
Addressing ethical concerns in AI, such as bias, transparency, and accountability, is vital, particularly in South Africa given its history of inequality.
Rosenberg noted that global best practices such as human-in-the-loop systems, bias evaluation processes, and diverse data sampling should be implemented to mitigate these risks.
A risk-based approach
For South Africa to develop a policy that fully capitalises on AI, it must address key challenges, particularly data sovereignty and internet connectivity.
As of 2023, about 28% of South Africans lacked internet access, limiting the country’s ability to fully harness AI’s potential.
“Even if AI has the potential to address pressing issues like education, the average South African still lacks access to the internet or a smartphone,” Novitzkas said.
Looking globally, the European Union’s AI Act follows a risk-based approach, imposing stricter regulations on high-risk applications while allowing more flexibility for low-risk AI.
For AI policy implementation to be successful, a phased approach will be necessary, alongside ongoing legislative updates and monitoring mechanisms.
“The challenge always is the law keeping up with technology. What we need is principle-based, future-ready legislation,” Rosenberg said.
Regulation is necessary, but it must not stifle innovation. If South Africa can strike the right balance between oversight and technological growth, AI could become a major driver of investment, job creation, and digital transformation, rather than another missed opportunity.