The rapid spread of AI-generated nude deepfakes is part of a broader and long-standing pattern of gender-based violence, a McMaster expert warns.
The European Union and U.K. regulators are investigating the role of Grok, the AI chatbot built into Elon Musk’s X platform, in producing nonconsensual sexualized deepfakes, following new analysis showing that Grok generated an estimated three million sexualized images of women and children in just 11 days.
These developments highlight a deeper issue, says McMaster researcher Alexis-Carlota Cochrane: AI-generated nude deepfakes should be understood as part of a long-standing continuum of gender-based violence.
“There’s a lot to be said about how these technologies are framed as neutral, even though we have really clear evidence that women and gender-diverse folks are most often targeted by digital harm,” she says.
A PhD candidate in the Department of Communication Studies and Media Arts, Cochrane examines digital harms.
In Canada, 61 per cent of women and gender-diverse people have experienced some form of gendered digital harm, with even higher rates among Black, Indigenous and 2SLGBTQIA+ people, and people with disabilities.
With tools like Grok making the creation of explicit deepfakes as easy as leaving a comment on a photo, Cochrane says the scale and speed of harm are accelerating faster than policy can address.
She answers some questions about the issue:
How are women and gender-diverse people disproportionately harmed by AI-generated nude deepfakes?
Women and gender-diverse people have long been targeted by a broad spectrum of technology-facilitated harms: cyberstalking, online harassment, hate speech, doxxing and increasingly, the non-consensual distribution of intimate images. Deepfakes are just the latest strategy in that continuum.
These harms aren’t experienced equally. They’re deeply tied to misogyny and entitlement, especially when it comes to targeting people who are visible, outspoken, or hold public influence.
Early deepfakes mostly targeted public-facing women, such as celebrities, politicians and journalists, partly because their images were readily available online, but also because misogyny tends to punish women and gender-diverse people who are visible in public life.
But the shift in the last few years is toward everyday people, because you no longer need Photoshop skills or high-resolution images. With tools like Grok, it’s now as simple as locating a photo of someone on the platform and commenting, “take her shirt off” beneath it.
The lack of AI guardrails for tools like Grok means this content is incredibly easy to create, incredibly realistic, and incredibly harmful.
What psychological and social harms do the people targeted experience?
These harms have severe psychological impacts. Survivors often report anxiety, stress, depression, suicidal ideation and an erosion of self-worth. They frequently feel shame or internalize blame for the harm they are experiencing, even though they’ve done nothing wrong.
Posting a photo online isn’t a wrongdoing. Even posting a nude photo isn’t wrong if you have made the choice to do so. People should have agency over their bodies and their content.
But when someone uses your likeness without your consent, even if the image is synthetic, it is a direct violation of bodily autonomy. It takes away your ability to control how you appear and how your identity is represented publicly.
We see survivors becoming hypervigilant. They reduce their online presence or leave platforms entirely. They censor themselves. This is especially true for queer and marginalized folks, who may choose not to disclose aspects of their identities on X, for example, even if they feel safe doing so on a private Instagram account.
And these impacts move offline, too. People retreat from community, from friendships, from public life, because they’re afraid colleagues, employers, friends or family might see a manipulated image of them.
These images are so realistic that it becomes your word against the image. You know it’s not you, but other people might not. That’s devastating. And because it only takes a little bit of information to do frightening things, the sense of vulnerability is profound. These harms can happen far more easily than people realize.
How do you see these types of deepfakes fitting into real-world experiences of gender-based violence?
There’s a misconception that deepfakes invented a new kind of harm for women, but they’re simply another example of technology being used to enable gender-based violence.
They come from existing infrastructures of misogyny, entitlement and toxic masculinity, including online spaces like the manosphere and Reddit, where graphic and degrading representations of women have been normalized for years.
Non-consensual deepfakes are simply another mechanism in the larger landscape of technology-facilitated gender-based violence.
Whether an image is synthetic or “real,” the ethics are the same: Someone has taken control of your image and your body without your consent.
Unfortunately, I think it may get worse before it gets better. But I also think this visibility, as terrible as it is, might wake people up to the fact that digital harms are real, persistent forms of violence, and they require serious, active policy responses.
What are your thoughts on the responses we’ve seen to these Grok-created images and are there lessons Canada could learn?
The U.K. has been one of the most assertive countries when it comes to responding to AI-generated intimate image abuse. We saw Ofcom, the U.K.’s communications regulator, launch a formal investigation into Grok and X for facilitating non-consensual sexualized deepfakes involving women and minors, which is a clear violation of the Online Safety Act.
The U.K. has shown a readiness to intervene quickly, publicly, and decisively, and that sends an important message.
In Canada, the response has been minimal. Canada has repeatedly failed to pass and implement online-harms legislation. Bill C-63, the Online Harms Act, died on the order paper, and earlier attempts in 2021 also stalled. We’re stuck in this cycle of introducing frameworks that never make it into actual policy.
What we need now is an enforceable, survivor-centred, AI-aware online safety framework, one that includes things like rapid takedown rights, clear platform accountability, and a regulatory body that can act decisively.
But we also need nuance. Moderation systems often disproportionately censor or penalize marginalized communities, so we have to protect survivors without reproducing existing inequities.
And we need to draw on a wide range of expertise: from survivors, gender-based violence organizations, legal and policy experts, activists, researchers, journalists, mental health providers.
Ultimately, Canada’s approach needs to be collaborative, inclusive and urgent. The U.K. provides an example of decisive action, but we have to ensure we build something that truly protects people, especially those most vulnerable, without further harming the communities that already face disproportionate risks online.