Kenya’s new AI Bill pushes for strict rules on deepfakes and AI job takeovers
The Artificial Intelligence Bill 2026 mandates that AI must be designed to enhance human capability rather than replace it, requiring "human-in-the-loop" oversight for all critical automated decisions.
The proposed Kenya Artificial Intelligence Bill 2026 promises stiff penalties for violations of image rights while drawing a firm line against AI systems that edge too close to replacing human labour.
Currently before the Senate, the draft law seeks to play watchdog over the fast-evolving AI ecosystem, promoting ethical use while guarding privacy and livelihoods.
Central to this ambition is the creation of an Office of the Artificial Intelligence Commissioner, a regulatory institution tasked with keeping both AI innovation and its excesses in check.
Kenya’s AI sector has seen explosive growth, unlocking new streams of revenue while stirring unease around bias, data privacy, and the future of human jobs.
The Bill attempts to bring order to this digital frontier by classifying AI systems according to risk, then tightening or loosening the regulatory leash accordingly. In other words, the greater the potential harm, the stricter the scrutiny.
On employment, the proposed law leans firmly toward augmentation over replacement. It requires developers and deployers of AI systems to design technologies that enhance human capability, not sideline it.
“A person who designs or deploys an artificial intelligence system shall design or deploy the system in a manner that enhances rather than replaces human capabilities; incorporate features that support human involvement in the system; and provide for human oversight in critical decisions made by the system,” the Bill states.
To reinforce this, the Cabinet Secretary for Information and Technology is empowered to define how AI systems should support human involvement, identify which decisions demand human supervision, and establish mechanisms for intervention or override when necessary.
The Bill goes a step further by introducing mandatory workforce impact assessments for AI systems likely to affect employment. Developers will be required to evaluate potential job displacement and build in safeguards that allow qualified individuals to step in where decisions risk harming human rights, safety, or societal well-being.
On image rights protection, the Bill bares its teeth most sharply. It proposes fines of up to Sh5 million or jail terms of up to two years for those found guilty of distributing harmful AI-generated content using another person’s likeness without consent, commonly known as deepfakes.
The same penalties would apply to those deploying prohibited AI systems or failing to carry out required risk and workforce assessments.
In structuring its approach, the Bill borrows from the European Union's AI Act, adopting a tiered framework that categorises AI systems as unacceptable, high, limited, or minimal risk.
Technologies deemed to pose ‘unacceptable risk’ would be banned outright, even though the Bill stops short of listing specific examples.
In Europe, such systems include those that threaten fundamental rights, particularly applications involving biometric data processing.
High-risk systems, spanning sectors such as critical infrastructure, healthcare, education, law enforcement, border control, and elections, would face the toughest compliance requirements.
Low-risk services, such as spam filters, would face the lightest regulation. The EU, from which this approach is borrowed, expects most AI services to fall into this category.
The EU law also requires that AI tools remain under human oversight rather than being left entirely to automated decision-making to prevent harmful outcomes.