Over the past decade, digital technologies have rapidly become woven into everyday life, with online spaces and digital tools offering new opportunities for connection and access to information.
However, technology-facilitated violence against women and girls has also grown significantly.
More than 90% of deepfake videos online are pornographic in nature, with women almost exclusively the target, according to a 2023 Security Hero report.
Across Europe, cyberstalking, surveillance, and the use of spyware were the most common forms of cyberviolence reported by women and girls, according to the latest Women Against Violence Europe (WAVE) report.
WAVE is a network of more than 180 European women’s NGOs working to prevent violence against women and children and to protect them from it.
“Violence carried out online is often harder to recognise, prove, and sanction, leaving many women and girls exposed to harm without adequate protection,” the report noted.
Online harassment, hate speech, and threats were also widespread, reported in 30 countries.
For instance, in Greece, in 2023, women made up 55.3% of victims in online-threat cases and 69.6% in cyberstalking cases.
More than half of the countries (57%) also reported a rise in image-based abuse and non-consensual intimate image sharing.
In Denmark, the number of young people experiencing image-based abuse has tripled since 2021.
“Algorithms can quickly spread misogynistic content to large numbers of people, creating closed spaces where violence against women and girls is normalised and harmful ideas spread, especially among young men,” the WAVE report claims.
Growing concerns over sexually explicit images
The rapid development of AI in recent years appears to have exacerbated the problem, creating further challenges around sexually explicit imagery.
Since the beginning of 2026, Grok, the Elon Musk-owned AI chatbot, has responded to user prompts to “undress” images of women, creating AI-generated deepfakes with no safeguards.
AI Forensics, a European non-profit that investigates influential algorithms, analysed more than 20,000 images generated by Grok and 50,000 user requests. It found that 53% of the images Grok generated contained individuals in minimal attire, 81% of whom were presenting as women.
In addition, 2% of the images depicted persons who appeared to be 18 years old or younger, and 6% of the images depicted public figures, approximately one-third of whom were political figures.
In response, the platform has implemented new safeguards to prevent Grok from editing photos of real people to depict them in revealing clothing.
“We have implemented technological measures to prevent the Grok account on X globally from allowing the editing of images of real people in revealing clothing such as bikinis,” Musk’s safety team wrote on X.
This restriction applies to all users, including paid subscribers.
Musk has also claimed that the platform takes action to remove high-priority violative content, including child sexual abuse material and non-consensual nudity, and reports accounts seeking child sexual exploitation material to law enforcement authorities.
Politicians, journalists, women’s rights defenders, and feminist activists are frequent targets of online harassment, deepfake pornography, and coordinated hate speech designed to silence or discredit them.
