Abstract
In January 2024, non-consensual deepfakes came to public attention with the spread of AI-generated sexually abusive images of Taylor Swift. Although this brought newfound energy to the debate on what some call non-consensual synthetic intimate images (i.e. images that use technology such as AI or Photoshop to make sexual images of a person without their consent), female celebrities like Swift have had deepfakes like these made of them for years. In 2017, a Reddit user named “deepfakes” posted several videos in which he had used open-source machine learning tools to swap the faces of female celebrities onto the bodies of female porn actors, displaying what appeared to be live video footage of the celebrity engaging in sex acts she never engaged in. Since that time, deepfake technology has advanced dramatically. What once were choppy sexualized videos are now nearly flawless videos that can be difficult to distinguish from real footage. According to recent research on deepfakes by Sensity AI, this technology has been used primarily against women to create sexual videos. These women’s sexual autonomy has been co-opted for the purpose of gratifying men’s sexual pleasure, and their images have also been used in campaigns to delegitimize and humiliate female journalists and politicians. In Canada, civil and criminal legislation has addressed the non-consensual distribution of intimate images, but in only a few provinces – British Columbia, New Brunswick, Prince Edward Island and Saskatchewan – does this offence include altered images that could include deepfakes. This paper explores the evolution of synthetic technology such as AI image generators and Canada’s legal responses to the non-consensual sharing of intimate images.