
How AI-generated deep fakes are terrorising Kenyan women in the public eye

Photo credit: Pool

What you need to know:

  • The proliferation of artificial intelligence has enabled malicious users to quickly generate and disseminate harmful content.
  • This includes sharing intimate images, videos, or audio recordings that can be devastating to victims.

The anxiety of waking up to find oneself trending on social media is becoming a pressing concern for women, particularly politicians, musicians, and journalists.

The proliferation of artificial intelligence (AI) has enabled malicious users to quickly generate and disseminate harmful content targeting these well-known figures, often without their consent. 

This includes sharing intimate images, videos, or audio recordings that can be devastating to victims.

Over the past five years, Kenya has seen an alarming increase in both online and offline incidents involving leaked images and videos, with victims reporting that many of these materials are fake or altered.

Earlier this year, a US singer fell victim to image-based sexual abuse when a wave of AI-generated explicit images circulated on social media platform X. The situation escalated to the point where the platform had to block searches related to the incident.

In cases of image-based sexual abuse, perpetrators—be they strangers, current or former intimate partners, friends, or acquaintances—share or threaten to share sexually explicit materials without the victim's knowledge or consent. 

This often includes deepfakes, which are digitally manipulated images or videos created using AI technology. The primary intention behind this action is to inflict emotional distress or harm on the victims.

In response to such incidents, X introduced a new policy allowing users to share consensually produced adult content, which may include AI-generated, photographic, or animated materials such as cartoons or anime.

While some tech companies have begun implementing tools to combat non-consensual content on their platforms, the challenge remains significant.

In September, Microsoft announced its partnership with StopNCII, an initiative designed to protect victims of image-based sexual abuse. 

This programme allows victims to create a "hash", or digital fingerprint, of their images, including AI-generated synthetic images, without the images ever leaving their device. These hashes can then be used by industry partners to identify and take action against the unauthorised sharing of such content.
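To make the idea of on-device fingerprinting concrete, here is a minimal sketch in Python. It is illustrative only: it uses a cryptographic SHA-256 digest, whereas systems like StopNCII use perceptual hashes (such as Meta's PDQ) so that resized or re-encoded copies of an image can still be matched. The function name and workflow below are assumptions for illustration, not StopNCII's actual API.

```python
import hashlib
from pathlib import Path

def fingerprint_image(path: str) -> str:
    """Return a hex digest of an image file's bytes.

    Illustrative simplification: real matching systems use perceptual
    hashes (e.g. PDQ) so altered copies still match, while a
    cryptographic hash like SHA-256 only matches byte-identical files.
    Crucially, only this short fingerprint would ever be shared with
    industry partners; the image itself never leaves the device.
    """
    data = Path(path).read_bytes()
    return hashlib.sha256(data).hexdigest()
```

Because the fingerprint is a short one-way digest, partners can check whether an uploaded file matches a reported image without the victim ever transmitting the image itself.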

Furthermore, last week, the United Nations adopted a comprehensive global framework for digital cooperation and AI governance, mandating that governments, tech companies, and social media providers work jointly to ensure the safety of online spaces for all users.