
Is your child safe online? Kenya's alarming AI protection gap

Local experts are calling for tech firms to build child safeguards before products launch as AI threats grow.


What you need to know:

  • AI-generated child sexual abuse material is rising globally; Kenya lacks specific laws, prompting calls for stronger AI regulations and child protection measures.
  • The UK is moving to criminalise AI-generated child sexual abuse material while Kenya lags behind, with legal experts calling for policy updates, industry collaboration, and stronger online child safety regulations.



While Kenya has not reported a case of artificial intelligence (AI)-generated child sexual abuse material (CSAM), the situation in the United Kingdom is alarming.

The Internet Watch Foundation assessed images posted to dark web CSAM forums in September 2023 and March–April 2024 and found 3,512 AI-generated CSAM images, more than 99 per cent of which depicted girls.

To combat this trend, the UK government has tabled a Crime and Policing Bill in the House of Commons that would criminalise AI models optimised to create hyper-realistic CSAM, material that often incorporates the likeness of real children and thereby harms them.

The proposed law would grant customs officers at UK border points the authority to direct individuals suspected of possessing digitally stored CSAM to unlock their devices for technology-enabled inspection. The UK already has a system in place to facilitate such inspections.

In 2014, the Home Office launched the Child Abuse Image Database (CAID), a repository of all known CSAM detected during UK police investigations. According to the Ministry of Justice, CAID now holds millions of unique files. The scanning of a digital device, such as a mobile phone, takes about 15 seconds and can reveal whether it contains “known material”.

In Kenya, meanwhile, the government acknowledges the growing threat of AI in facilitating online sexual abuse, but little has been done to address this emerging challenge.

Rose Mwangi, the deputy director at the Directorate of Children Services under the State Department of Social Protection, says online sexual exploitation and abuse pose significant risks to children globally.

“The rise of artificial intelligence is altering the landscape of child harm. AI capabilities are exposing children around the world to a new type of online child sexual exploitation and abuse: AI-generated CSAM. Kenya is not an exception, though no case has been reported to date, according to law enforcement,” she states.

She acknowledges that while the Computer Misuse and Cybercrimes Act criminalises the manufacture of CSAM, an AI-specific legal framework is needed to address AI-generated content effectively.

“Some of the steps the country has taken to combat AI-generated CSAM include the legislation of the Computer Misuse and Cybercrime Act (2018), which criminalises the manufacturing of CSAM. This can be used to prosecute offenders who create AI-generated CSAM. Eventually, an AI legislative framework must be enacted to combat this crime.”

However, legal experts argue that existing laws require amendments to provide better protection for children in the digital space. Athena Morgan, a lawyer and regional project manager at the International Centre for Missing and Exploited Children, points to inadequacies in Kenya’s legal framework on online child protection.

“Kenya has the Data Protection Act of 2019, which requires parental consent before a child's data is published, and the Computer Misuse and Cybercrimes Act, which addresses child pornography and online grooming,” she explains.

“However, the term ‘child pornography’ implies consent, which is inaccurate. The correct terminology should be ‘child sexual abuse material’ or ‘child sexual exploitation material’ to reflect the severity of the crime.”

Athena further highlights the gaps in AI-related child protection laws, noting that Kenya has no specific provisions addressing AI-generated CSAM or AI-driven child exploitation. She recommends incorporating child-specific data protection measures into AI regulations, mandating the reporting of AI-related offences against children, and prioritising research on AI’s role in facilitating and preventing child exploitation.

She also emphasises the importance of involving internet service providers (ISPs) and technology companies in child protection. “Safety by design means that when developing a product, companies must identify potential risks to children and integrate protective measures before launching it. The Communications Authority of Kenya is working on industry guidelines for child online protection, but more needs to be done to ensure companies prioritise children’s safety.”

Jean Paul Murunga, the programme officer at Equality Now’s End Sexual Violence Programme, also calls for clearer laws specifying different online crimes involving children. “Our laws do not itemise various offences that can be committed through online platforms, including AI-generated abuse materials. The Children’s Act (2022) is too general in its definition of child abuse and does not provide specific guidelines on how online platforms can be misused for exploitation.”

He stresses the need for continuous monitoring of online platforms and AI-generated content to prevent child grooming and exploitation. He suggests that ISPs and tech companies take a proactive role in identifying and reporting AI-generated materials that pose risks to children.

“The DCI’s (Directorate of Criminal Investigations) Child Protection Unit focuses mainly on child trafficking, but online exploitation, especially involving AI, requires additional resources and capacity for effective monitoring. Developed countries have systems in place to track and address these emerging threats, and Kenya must invest in similar monitoring mechanisms,” he adds.

mobiria@ke.nationmedia.com