AI: The technology that's giving misogyny a dangerous upgrade
AI has handed woman-haters an easy-to-use tool to fabricate, harass and destroy reputations.
What you need to know:
- AI deepfakes are weaponising women's own images against them, creating fake pornography and destroying reputations.
- Ninety to 95 per cent of all deepfakes are non-consensual pornographic images, with 90 per cent depicting women.
A recent BBC radio interview with a Kenyan artist illustrates the curse that artificial intelligence (AI) has become. The artist reveals how her detractors took her images and manipulated them using AI to depict her engaged in reprehensible activities, which she only learnt about when friends started asking what she was up to.
Despite her denials, they insisted that the images were authentic. Another set of images was posted a month later to reinforce the impression that she was indeed engaged in these obnoxious activities. The aim was obvious: to dent her popularity and, with it, her income from online content.
What this artist went through is referred to as deepfaking—the use of artificial intelligence to create highly realistic but false depictions of real people doing things they never actually did. The AI is trained on large amounts of real images and then generates synthetic content that, for all practical purposes, appears genuine.
Deepfakes can be used for harmless creativity, such as mimicry. However, they are also being used to spread all forms of falsehood, some of which constitute gender-based violence, leaving their targets psychologically, socially, morally and even financially broken.
The article AI-powered online abuse: How AI is amplifying violence against women and what can stop it, posted by UN Women on November 18, 2025, notes that AI is now "creating new forms of abuse and amplifying existing ones at alarming rates". It states that "one global survey found that 38 per cent of women have personal experiences of online violence, and 85 per cent … have witnessed digital violence against others". What makes this even more ominous is that "the scale, speed, anonymity and ease of communication in digital spaces" make "perpetrators feel that they can get away with it, and victims often do not know if and how they can get help" while "legal systems play catch up with the rapid changes in technology".
Threatening messages
The article reveals that "according to research, 90 per cent to 95 per cent of all online deepfakes are non-consensual pornographic images, with around 90 per cent depicting women". Moreover, "processing tools can identify vulnerable or controversial content in women's posts" as well as "craft personalised, threatening messages using a victim's own words and data". In short, "deepfakes are increasingly misused as a form of digital abuse – for example, to create non-consensual sexual images, spread disinformation, or damage a person's reputation".
But why do deepfakes largely target women?
According to Laura Bates, quoted in the article, the answer is misogyny. AI provides men who are hostile to women with an easy-to-use technology to fabricate and share compromising images of them in order to harass them. Such images are "replicated multiple times, shared and stored on privately-owned devices, making them difficult to locate and remove".
Ruby Sciberras, in an online article published on December 1, 2025, explains that "fake images or videos of real people that are generated by AI" can be created, or a person's actual face can be pasted "onto existing pornography".
Her interviews with women and gender-diverse people revealed "feelings of concern, frustration and fear surrounding the use of AI for sexual gratification or 'intimacy'," as AI can help create a "perfect" female body that displaces physical romance. In practice, this fear deters people from ordinary intimate exchanges such as sexting and sharing photos and videos.
The UN Women article lists organisations one can contact to stop the dissemination of such images and to get other forms of help. They include Stop Non-consensual Image-abuse; Chayn Global Directory; The Online Harassment Field Manual – Help Organisations Directory; Cybersmile Foundation; and Take It Down.
The experts interviewed for the article posit that "technology companies have a critical role to play in preventing and stopping AI-generated digital violence" by restricting access to pornographic deepfake tools; rejecting content created by these tools; developing user-friendly reporting features for responding to abuse; responding swiftly and effectively to reports; proactively identifying falsified content; and increasing the number of women researchers and builders of technology in designing AI tech.
Digital literacy
They also recommend digital literacy for young men so they become source-sceptical – not accepting everything they come across but verifying its credibility.
Sciberras recommends ethical use of technology—applying a critical lens on what AI technologies "mean for human connection, the environment or the safety of vulnerable groups"—as well as digital literacy and accountability for creators of emerging AI and social media platforms.
That women are the majority of victims of AI-generated violence attests that society remains deeply misogynistic. The creators of AI have simply handed society a new tool with which to express that misogyny. Who, or what, then is to blame?
The writer is a lecturer in Gender and Development Studies at South Eastern Kenya University ([email protected]).