AI in tackling cyberbullying
Artificial intelligence (AI) has brought challenges and opportunities in equal measure to the fight against cyberbullying. Recent research from the Cyberbullying Research Center shows that 37 per cent of young people aged 12 to 17 have experienced cyberbullying. At the same time, a sound understanding of AI tools has become a formidable countermeasure.
AI-powered content moderation systems have proved highly effective at identifying and filtering out harmful content. For instance, Instagram's AI moderation tools reported a 73 per cent reduction in hate speech and bullying content within the first six months of their deployment in 2023. Today, the platform's AI algorithms detect harassment, including sarcasm and veiled threats, with an accuracy rate of 92 per cent.
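To make the idea concrete, the sketch below shows what a single moderation step might look like, assuming a publicly available toxicity classifier accessed through the Hugging Face transformers library. The model name and thresholds are illustrative; this is not Instagram's actual system.

```python
from transformers import pipeline

# Load a pre-trained toxicity classifier; "unitary/toxic-bert" is one
# publicly available example, not the model any specific platform uses.
classifier = pipeline("text-classification", model="unitary/toxic-bert")

BLOCK_THRESHOLD = 0.90  # illustrative confidence cut-off

def moderate(message: str) -> str:
    """Return an action for a single message: 'block', 'review', or 'allow'."""
    result = classifier(message)[0]  # e.g. {'label': 'toxic', 'score': 0.97}
    if result["label"] == "toxic" and result["score"] >= BLOCK_THRESHOLD:
        return "block"
    if result["label"] == "toxic":
        return "review"  # lower-confidence hits go to human moderators
    return "allow"

print(moderate("You're worthless and everyone knows it."))  # likely 'block'
print(moderate("Great game last night!"))                   # likely 'allow'
```

Real systems layer many such models, plus user-report signals and account history, on top of this basic classify-and-threshold pattern.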
Knowledge of AI's capabilities has enabled schools and organisations to take proactive action. The AI Guardian programme, installed in 500 schools in the United States, uses natural language processing to identify potential bullying situations before they escalate. In 2023, the programme reported a 45 per cent reduction in cyberbullying incidents in schools where it was implemented.
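As a rough illustration of how such early-warning logic could work, the sketch below tracks each sender's recent toxicity scores in a sliding window and escalates when a sustained pattern emerges rather than reacting to a single message. All names and thresholds are hypothetical; this is not the AI Guardian codebase.

```python
from collections import defaultdict, deque

WINDOW = 10            # number of recent messages to consider per sender
ALERT_THRESHOLD = 0.5  # mean toxicity that triggers escalation (illustrative)

recent_scores = defaultdict(lambda: deque(maxlen=WINDOW))

def record_message(sender: str, toxicity: float) -> bool:
    """Store a message's toxicity score (0.0-1.0, e.g. from a classifier
    like the one sketched above) and report whether this sender's recent
    pattern warrants review by school staff."""
    scores = recent_scores[sender]
    scores.append(toxicity)
    # Alert on a sustained pattern, not a one-off spike: require several
    # messages and a high average before escalating.
    return len(scores) >= 5 and sum(scores) / len(scores) >= ALERT_THRESHOLD

# Example: a student whose messages trend increasingly hostile.
for score in (0.1, 0.4, 0.6, 0.7, 0.8, 0.9):
    if record_message("student_42", score):
        print("Pattern flagged for counsellor review")
```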
But the same AI tools that are valuable for countering cyberbullying can also be leveraged for nefarious ends. For example, AI can be used to generate deepfake content to harass people; fortunately, this has been matched by the development of deepfake detection tools, which are currently around 85 per cent effective at identifying manipulated content.
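Deepfake detectors commonly work by scoring individual video frames with a classifier trained to separate manipulated faces from real ones, then aggregating those scores into a verdict for the whole video. The aggregation step below is a simplified sketch of that idea; the upstream per-frame detector and both thresholds are assumptions for illustration.

```python
from typing import Iterable

FAKE_THRESHOLD = 0.7   # per-frame probability treated as suspicious
MIN_FAKE_RATIO = 0.3   # fraction of suspicious frames needed to flag a video

def video_is_likely_deepfake(frame_scores: Iterable[float]) -> bool:
    """Aggregate per-frame fake probabilities (0.0-1.0), produced by some
    upstream detector model, into a single video-level verdict."""
    scores = list(frame_scores)
    if not scores:
        return False
    suspicious = sum(1 for s in scores if s >= FAKE_THRESHOLD)
    # Manipulation artefacts rarely show in every frame, so flag the video
    # once a meaningful fraction of frames looks synthetic.
    return suspicious / len(scores) >= MIN_FAKE_RATIO

print(video_is_likely_deepfake([0.1, 0.8, 0.9, 0.2, 0.85]))  # True (3/5 frames)
```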
Experts say that the way forward lies in combining AI tools with human oversight. As AI tools become more capable of detecting and flagging potential incidents of cyberbullying, human moderators bring much-needed context and nuance to complex social interactions. This hybrid approach, supported by widespread AI literacy, offers the best path towards making online spaces safer for all users.
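One way to picture this hybrid pipeline: the AI assigns a confidence score to each flagged item, high-confidence cases are actioned automatically, and ambiguous ones are routed to a human review queue. The routing logic below is a minimal sketch of that division of labour, with all thresholds chosen purely for illustration.

```python
AUTO_ACTION = 0.95   # above this, act without waiting for a human
HUMAN_REVIEW = 0.50  # between the two thresholds, queue for a moderator

review_queue: list[dict] = []

def triage(item_id: str, ai_score: float) -> str:
    """Route a flagged item based on the AI's confidence that it is abusive."""
    if ai_score >= AUTO_ACTION:
        return "removed automatically"
    if ai_score >= HUMAN_REVIEW:
        # Ambiguous cases, sarcasm, in-jokes, context-dependent remarks,
        # go to human moderators who can judge the social context.
        review_queue.append({"id": item_id, "score": ai_score})
        return "queued for human review"
    return "no action"

print(triage("post-001", 0.98))  # removed automatically
print(triage("post-002", 0.72))  # queued for human review
print(triage("post-003", 0.10))  # no action
```

The thresholds themselves become policy decisions: lowering AUTO_ACTION removes harmful content faster but risks over-blocking, which is precisely why the human review tier matters.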