On March 6, Beverly Hills Unified School District (BHUSD) board members voted in a closed session to expel five Beverly Vista Middle School students for spreading explicit, AI-generated deepfake images of their classmates. The images superimposed real students’ faces onto simulated nude bodies generated by artificial intelligence.
A statement from BHUSD Superintendent Michael Bregy noted that the five expelled students were the main focus of the investigation and were “most egregiously involved” in creating and sharing the deepfakes. According to the district, the victims were 16 eighth-grade students. Under an accepted agreement, the students and their families did not contest the punishment, and no further hearing was held. The names of the students will remain private.
Initially spread through messaging apps, the images infuriated families and faculty, prompting Bregy to announce he was prepared to impose “the most severe disciplinary actions allowed by state law.” The students involved were identified and held accountable in under 24 hours, but the district did not expel them until its investigation was complete.
The Beverly Hills Police Department and the Los Angeles County District Attorney’s office are still looking into the incident, but no arrests have been made. California’s laws against possessing child pornography and sharing non-consensual nude photos do not explicitly cover AI-generated images, a gap legal experts expect will pose a problem for prosecutors.
Scores of AI-powered programs are available online to “undress” someone in a picture, simulating what a person would look like if the photo had been taken in the nude, or to “face swap” a targeted person’s face onto another’s nude body. Versions of these programs have existed for years, but earlier iterations were expensive, harder to use, and often less realistic. Today’s tools produce more convincing fakes with far less effort, even from just a smartphone.
No specific policy change was announced in response to the incident, but the district has already restricted students’ use of cell phones on campus.
“This incident has spurred crucial discussions on the ethical use of technology, including AI, underscoring the importance of vigilant and informed engagement within digital environments,” Bregy wrote in a statement. “In response, our district is steadfast in its commitment to enhancing education around digital citizenship, privacy, and safety for our students, staff, and parents which was immediately reemphasized at all schools following the incident.”
Westside Voice reached out to Beverly Hills Unified School District several times for a more direct comment, but BHUSD officials declined to respond.
Westside Voice did get to speak with Ryan Ofman, Head of Science Communication at Deepmedia AI, for his perspective on the limitations of AI and the question of authenticity.
With AI popping up everywhere, best practices for telling the difference between what is authentic and what has been doctored are top of mind.
“I think ultimately we will need more robust automated detectors to catch every deepfake, but in the meantime, AI still has some peculiarities,” Ofman said. He noted many AI models still struggle with accurately depicting human hands. “While these models are adept at creating detailed images, a closer look at smaller objects might reveal inconsistencies. My advice is to maintain a healthy skepticism and be on the lookout for these classic signs.”
He continued by saying that deepfakes are so alarming because they challenge something central to all of us: our identity.
“To see ourselves in pictures we never took, and in contexts we would never be in, is a violation of our privacy. Especially when these contexts are generated with the sole intention of manipulating or deceiving. This problem becomes even more severe with the propagation of AI-generated pornography and face swapping,” Ofman said.
He also said these cases become especially tricky when minors are involved, as current regulation allows anyone to access and easily use the technology to create hugely inappropriate content. This has left too many minors across the country victims of deepfake attacks, very often at the hands of other minors.
When asked whether younger generations, Gen Z and Alpha, were more or less susceptible to inauthentic imagery, Ofman explained that it could be a bit of both. Having grown up on the internet, younger people may be more aware of these kinds of manipulations, but they are also more likely than any other generation to use the platforms where such images spread, sites like X, Instagram, and TikTok. So, while they may be more cautious, they will likely encounter this material more often than their older counterparts.
Regarding AI best practices, Ofman said, “[There is an] urgent need for common sense regulation limiting the ability to spread the specifically trained models to generate this kind of face-swapped pornography, and repercussions for the creation and distribution of this intense violation of privacy. As it currently stands, anyone with access to the internet can… generate these horrible manipulated nude images. We need to protect the victims, and especially our minors, from this terrifying threat, and it begins with regulation.”
He concluded by saying, “We owe it to ourselves, and more importantly, to our children, to ensure their safety in the digital age.”
Photo of Beverly Vista Middle School from the school website.