Westfield Public Schools held its regular board meeting in late March at the local high school, a red brick facility in Westfield, N.J., where a scoreboard outside proudly welcomes visitors to the “Home of the Blue Devils,” the school's sports teams.
But it was not business as usual for Dorota Mani.
In October, some 10th-grade girls at Westfield High School, including Ms. Mani's 14-year-old daughter, Francesca, discovered that a male classmate had used artificial intelligence software to fabricate sexually explicit images of them, and they alerted administrators that the faked pictures were being circulated. Five months later, the Manis and other families say, the district has done little to publicly address the doctored images or to update school policies to hinder exploitative uses of AI.
“The Westfield High School administration and the school district seem to be giving a master class in making this incident blow over,” Ms. Mani, the founder of a local preschool, told the board during the meeting.
The school district said in a statement that upon learning of the incident it “immediately launched an investigation,” promptly notified the police, and offered counseling to the affected students as well as group counseling to the 10th-grade class.
“All school districts are grappling with the challenges and implications of artificial intelligence and other technologies that are available to students anytime and anywhere,” Westfield Public Schools Superintendent Raymond Gonzalez said in a statement.
Last year, blindsided by the sudden popularity of AI-powered chatbots like ChatGPT, schools across the country rushed to block the text-generating bots to keep students from cheating. Now an even more alarming AI phenomenon, explicit image generation, is shaking schools.
In several states, boys have used widely available “nudification” apps to alter real photos of fully clothed female classmates, taken at events like school proms, into vivid, convincing AI-generated images of the girls with exposed breasts and genitals. In some cases, the boys shared the faked images in the school lunchroom, on the school bus or through group chats on platforms like Snapchat and Instagram, according to school and police reports.
Digitally altered images like these, known as “deepfakes” or “deepnudes,” can have devastating consequences. Child sexual exploitation experts say that using nonconsensual, AI-generated images to harass, humiliate and bully young women can harm their mental health, reputations and physical safety, and can also jeopardize their college and job prospects. Last month, the Federal Bureau of Investigation warned that it is illegal to distribute computer-generated child sexual abuse material, including realistic AI-generated images of identifiable minors engaging in sexually explicit conduct.
But student use of exploitative AI apps in schools is so new that some districts appear less prepared than others to address it, leaving students with an uneven safety net.
“This phenomenon is arriving very suddenly and may be catching a lot of school districts unprepared and unsure of what to do,” said Riana Pfefferkorn, a researcher at the Stanford Internet Observatory who writes about legal issues related to computer-generated child sexual abuse images.
At Issaquah High School near Seattle last fall, detectives investigating parents' complaints about explicit AI-generated images of their 14- and 15-year-old daughters asked an assistant principal why the school had not reported the incident to the police, according to a report from the Issaquah Police Department. The school official asked “what was she supposed to report,” the police document said, and a detective told her that schools are required by law to report sexual abuse, including possible child sexual abuse material. The school subsequently reported the incident to Child Protective Services, the report said. (The New York Times obtained the police report through a public records request.)
The Issaquah School District said in a statement that it consulted with students, families and law enforcement as part of its investigation into deepfakes. The school district also “showed empathy” and provided support to the affected students, the statement said.
The statement added that the district had “reported the artificial intelligence-generated fake images to Child Protective Services out of an abundance of caution,” noting that “according to our legal team, there is no obligation to report the fake images to law enforcement.”
At Beverly Vista Middle School in Beverly Hills, Calif., administrators contacted the police in February after learning that five male students had created and shared explicit AI-generated images of female classmates. Two weeks later, the school board approved the expulsion of five students, according to district documents. (The district said California education regulations prevented it from confirming whether the expelled students were the ones who had fabricated the images.)
Dr. Michael Bregy, superintendent of the Beverly Hills Unified School District, said he and other school leaders wanted to set a national precedent that schools must not allow students to create and circulate sexually explicit images of their peers.
“This is bullying in the extreme when it comes to schools,” Dr. Bregy said, noting that the explicit images were “disturbing and invasive” to the girls and their families. “That will never be tolerated here.”
Located in small, wealthy communities, the Beverly Hills and Westfield schools were among the first to publicly acknowledge deepfake incidents. The details of the episodes, described in district communications with parents, school board meetings, legislative hearings and court filings, illustrate how widely schools' responses have varied.
The Westfield incident began last summer when a high school boy asked a 15-year-old female classmate, who had a private account, to accept his friend request on Instagram, according to a lawsuit filed against the boy and his parents by a young woman and her family. (The Manis said they were not involved in the lawsuit.)
After she accepted the request, the male student copied photos of her and several other female classmates from their social media accounts, according to court documents. He then used an AI app to fabricate sexually explicit, “fully identifiable” images of the girls and shared them with schoolmates via a Snapchat group, the court documents say.
Westfield High School began an investigation in late October. While administrators quietly took some boys aside to question them, Francesca Mani said, she and other 10th-grade girls who had been targeted by the deepfakes were summoned to the school office by having their names called over the intercom.
That week, Mary Asfendis, the principal of Westfield High School, sent an email to parents warning of a spread of misinformation and calling the deepfakes “a very serious incident.” The school also said that, despite students' concerns about the possibility of the images being shared, it “believed that any created images have been deleted and are not in circulation.”
Dorota Mani said Westfield administrators told her that the district had suspended the male student accused of fabricating the images for one or two days.
Soon afterward, she and her daughter began speaking publicly about the episode, urging school districts, state legislators and Congress to pass laws and policies specifically prohibiting explicit deepfakes.
“We have to start updating our school policies,” Francesca Mani, now 15, said in a recent interview. “Because if the school had an AI policy, students like me would have been protected.”
Parents, including Dorota Mani, also filed harassment complaints with Westfield High School last fall over the explicit images. But at the March meeting, Ms. Mani told the school board that the high school had yet to provide parents with an official report on the incident.
Westfield Public Schools said it could not comment on disciplinary actions, citing student confidentiality. In his statement, Dr. Gonzalez, the superintendent, said the district was strengthening its efforts “by educating our students and establishing clear guidelines to ensure that these new technologies are used responsibly.”
Beverly Hills schools are taking a firmer public stance.
In February, after administrators learned that eighth-grade boys at Beverly Vista Middle School had created explicit images of 12- and 13-year-old female classmates, they quickly sent a message, with the subject line “A Horrific Abuse of Artificial Intelligence,” to all parents, staff members, and middle and high school students in the district. The message urged community members to share any information with the school to help “immediately stop” the students' “offensive and inappropriate” use of AI.
The district also warned that it was prepared to impose severe penalties. “Students found creating, distributing, or possessing AI-generated images of this type will be subject to disciplinary action, including recommendation for expulsion,” the message said.
Dr. Bregy, the superintendent, said schools and lawmakers needed to act quickly because students' abuse of AI was making them feel unsafe at school.
“We hear a lot about physical safety in schools,” he said. “But what you are not hearing is the violation of the personal and psychological safety of these students.”