When AI becomes a weapon: The digital war against women
Artificial intelligence, particularly deepfake technology, is being weaponised to create and spread fabricated, often sexually explicit content: a new and insidious form of gender-based violence that disproportionately targets women.
On a seemingly ordinary January morning in 2025, Adanech Abiebie woke up to discover her reputation under siege. The mayor of Addis Ababa, who has served in the role since 2021, had become the latest victim of a sinister new form of harassment that is spreading like wildfire across the digital landscape.
A TikTok account had posted an AI-generated video falsely showing her kissing a prominent political figure, complete with a caption claiming she had secured her position through sexual relationships. The manipulated video gained significant traction online, but what proved most disturbing was that the majority of those who watched it believed it was genuine.
An analysis of the first 20 comments revealed a chilling reality: 90 per cent supported the fabricated claim, with most using laughing emojis to mock the mayor. It was a stark illustration of how quickly artificial intelligence can be weaponised to destroy a woman's reputation with devastating efficiency.
Abiebie's experience represents just the tip of an increasingly dangerous iceberg. Last month, a female MP in Kenya raised similar concerns when a video circulated online claiming she was promoting a government investment opportunity.
"That is not my voice and scammers have used my image and generated an AI voice to mislead and scam people. I have no connection to this scheme. Please stay vigilant," the MP's statement read in part, warning Kenyans to disregard the video.
Taylor Swift
The phenomenon reached international headlines in January 2024, when AI-generated images of Taylor Swift, one of the world's most famous stars, spread across social media. The sexually explicit deepfakes using her face went viral and were viewed millions of times on platforms including X and Telegram. The images, which showed the singer in sexually suggestive and explicit positions, looked authentic until it emerged they were AI-generated.
At the heart of this crisis lies deepfake technology – synthetic media in which one person's face or voice is swapped with another's. AI and facial-mapping techniques merge, combine, and superimpose images and videos to generate authentic-looking media that can be virtually indistinguishable from reality.
These digitally altered creations are typically used maliciously to spread false information, and they represent one of the most troubling ways that AI is being used to enhance digital gender-based violence and harass women on social media.
A 2021 study of over 10,000 US survey respondents found that 41 per cent had personally experienced online harassment. Among respondents under 35, 33 per cent of women and 11 per cent of men reported experiencing sexual harassment online.
A 2023 analysis of over 95,000 deepfake videos found that up to 98 per cent were deepfake pornography, and that 99 per cent of the individuals targeted were women. Other vulnerable groups, including minors, are also disproportionately victims of online sexual harassment.
The Committee on the Elimination of Discrimination against Women has described digitised violence as the "newest category of gender-based violence."
According to UN Women, millions of women and girls are affected by digital abuse and technology-facilitated violence annually. Studies suggest between 16 and 58 per cent of women have experienced this violence.
Regional data confirms technology-facilitated violence against women happens everywhere:
- Arab states: 60 per cent of women internet users have experienced online violence
- Eastern Europe and Central Asia: Research across 12 countries found more than 50 per cent of women over 18 have experienced technology-facilitated abuse
- Sub-Saharan Africa: Studies in five countries found 28 per cent of women experienced online violence
- Europe and USA: A survey in Denmark, Italy, New Zealand, Poland, Spain, Sweden, the UK and USA found 23 per cent of women aged 18-55 reported online abuse or harassment
Sexual harassment and stalking are the most commonly reported forms of technology-facilitated violence. Other forms include cyberbullying, hate speech, sexual exploitation, defamation, non-consensual intimate image-sharing, sextortion, revenge porn, and doxing, the publishing of someone's personal information online.
Perpetrators use smartphones, computers, chatrooms, social networking sites, online gaming sites, GPS trackers, and video streaming platforms. Gender and technology experts warn this can lead to real-life consequences including stalking, threats, and physical violence.
Generative AI has made creating and distributing non-consensual, explicit content easier, victimising more women and potentially desensitising others to digital violence. What starts as online abuse can extend beyond screens, making it impossible for many women to feel safe at home, work, or in public spaces.
Despite these challenges, experts see potential for AI to be part of the solution. Zinnya del Villar, a leading expert in responsible AI, recently shared insights with UN Women on both the challenges and solutions to this rising menace.
"While technology-facilitated violence against women and girls online and offline is a growing concern, there are many promising advancements in AI offering innovative solutions to address digital abuse and protect survivors," del Villar explained.
She pointed to several innovative applications already making a difference. Mobile apps like bSafe provide safety alerts to protect women, while Canada-based Botler.ai helps victims understand if sexual harassment incidents they experienced violate the US criminal code or Canadian law.
Chatbots such as 'Sophia' by Spring ACT and 'rAInbow' by AI for Good provide anonymous support and connect survivors with legal services and other resources.
"AI-powered algorithms can also be used to make the digital space safe for everyone by detecting and removing harmful, discriminatory content and by stopping the spread of non-consensual intimate images," del Villar noted.
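One widely used building block for stopping the spread of known non-consensual images is hash matching: platforms keep fingerprints of reported images and block re-uploads without ever storing the images themselves. The sketch below is a minimal, illustrative version of that idea; the `UploadFilter` class and its methods are hypothetical names, and it uses exact SHA-256 digests only to stay dependency-free, whereas production systems use perceptual hashes that survive resizing and re-encoding.

```python
import hashlib


def image_fingerprint(image_bytes: bytes) -> str:
    """Return a fingerprint of an image's raw bytes.

    Note: an exact cryptographic digest only matches byte-identical
    copies; real deployments use perceptual hashing so that cropped
    or re-encoded versions of a reported image still match.
    """
    return hashlib.sha256(image_bytes).hexdigest()


class UploadFilter:
    """Blocks re-uploads of images that victims or moderators have
    already reported, retaining only fingerprints, never the images."""

    def __init__(self) -> None:
        self._blocked: set[str] = set()

    def report(self, image_bytes: bytes) -> None:
        # Store only the fingerprint of the reported image.
        self._blocked.add(image_fingerprint(image_bytes))

    def allow_upload(self, image_bytes: bytes) -> bool:
        # Reject any upload whose fingerprint is on the block list.
        return image_fingerprint(image_bytes) not in self._blocked
```

The design choice matters for survivors: because only hashes are exchanged, a victim can have an image blocked across platforms without that image ever leaving their device, which is the principle behind industry hash-sharing initiatives.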
UN Secretary-General António Guterres announced at the CSW68 event that the organisation has launched a system-wide gender equality acceleration plan.
A 2024 report by Guterres identified AI's rapid rise as a critical factor in shaping public attitudes towards women and in fuelling violence against them. The report noted, however, that AI also presents opportunities to combat gender-based violence.
AI-powered chatbots connect survivors to support services and help them navigate legal options. AI can identify patterns of bias and discrimination in employment, healthcare, and legal systems, and some tools use it to detect and remove non-consensual imagery from digital spaces.
Del Villar notes AI can reduce or perpetuate societal biases and inequalities. She emphasises using diverse, representative datasets to train AI systems, improving algorithm transparency, and ensuring diverse, inclusive AI development teams to avoid blind spots. She calls for strong ethical frameworks and gender-responsive policies in AI development.
As the cases of Adanech Abiebie, the Kenyan MP, and Taylor Swift demonstrate, no woman is immune from this digital violence. The fight against AI-facilitated gender-based violence requires urgent action from technology companies, governments, and society. Only through collective effort can we ensure AI serves humanity rather than becoming a weapon against women.