Meta announced on January 9, 2024, that it would protect its teenage users by blocking content on Instagram and Facebook that it deems harmful, including content related to suicide and eating disorders. The move comes as federal and state governments increase pressure on social media companies to put safety measures in place for teens.
At the same time, teens turn to their peers on social media for support they can't get anywhere else. Efforts to protect teens can inadvertently make it harder for them to get help.
Congress has held numerous hearings in recent years about social media and the risks to young people. The CEOs of Meta, X (formerly known as Twitter), TikTok, Snap, and Discord are scheduled to testify before the Senate Judiciary Committee on January 31, 2024, about their efforts to protect minors from sexual exploitation.
A statement released ahead of the hearing by Sens. Dick Durbin (D-Ill.) and Lindsey Graham (R-S.C.), the committee's chair and ranking member, respectively, said the tech companies were finally being forced to acknowledge their failures to protect children.
I'm a researcher who studies online safety. My colleagues and I have been studying how teens interact on social media and how effective the platforms' efforts to protect users are. Research shows that while teens face risks on social media, they also find support from their peers, especially through direct messaging. We have identified a set of steps that social media platforms could take to protect their users while also preserving their privacy and autonomy online.
What children are facing
It is well established that risks to teens on social media are prevalent. These risks range from harassment and bullying to poor mental health and sexual exploitation. Research shows that companies such as Meta know their platforms exacerbate mental health problems, which has helped make youth mental health one of the US Surgeon General's priorities.
Much of the research on adolescent online safety is based on self-reported data such as surveys. More study is needed of young people's real-world private interactions and how they perceive online risks. To address this need, my colleagues and I collected a large dataset of young people's Instagram activity, including more than 7 million direct messages. We asked young people to annotate their own conversations and identify the messages that made them feel uncomfortable or unsafe.
Using this dataset, we found that direct interactions can be crucial for young people seeking support on issues ranging from everyday life to mental health concerns. Our findings suggest that young people used these channels to discuss their public interactions in more depth. Based on mutual trust in these settings, teens felt safe asking for help.
Research suggests that the privacy of online discourse plays an important role in young people's online safety, and at the same time that a significant amount of the harmful interaction on these platforms takes place in the form of private messages. Unsafe messages reported by users in our dataset included harassment, sexual messages, sexual solicitation, nudity, pornography, hate speech, and the sale or promotion of illegal activities.
However, the growing push for platforms to protect user privacy has made it harder for them to use automated technology to detect and prevent online risks for teens. For example, Meta is implementing end-to-end encryption for all messages on its platforms, which ensures that message content is secure and accessible only to the participants in a conversation.
Additionally, the steps Meta has taken to block suicide and eating disorder content keep that content out of public posts and search even when it is posted by a teen's friends. As a result, teens who share that content can be left isolated, without support from friends and peers. Moreover, Meta's content strategy does not address the unsafe interactions teens have in their private conversations online.
Striking a balance
The challenge, then, is to protect young users without compromising their privacy. To that end, we conducted a study to find out how little data is needed to detect unsafe messages. We wanted to understand how various features, or metadata, of risky conversations, such as the length of the conversation, the average response time, and the relationships of the participants in the conversation, can contribute to machine learning programs that detect these risks. For example, previous research has shown that risky conversations tend to be short and one-sided, as when strangers make unwanted advances.
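As a rough illustration of what such metadata might look like, the sketch below computes conversation-level features of the kind described above. It is a minimal, hypothetical example; the field and feature names are my own and do not reflect the study's actual pipeline.

```python
# Hypothetical sketch only: conversation-level metadata features of the kind
# described above (length, average response time, one-sidedness, whether the
# participants are connected). Names are illustrative, not the study's pipeline.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Message:
    sender: str
    timestamp: float  # seconds since epoch

def metadata_features(messages: List[Message], accounts_connected: bool) -> Dict[str, float]:
    """Summarize a conversation without touching its text, images, or video."""
    length = len(messages)
    senders = [m.sender for m in messages]
    # Average response time between consecutive messages, in seconds.
    gaps = [b.timestamp - a.timestamp for a, b in zip(messages, messages[1:])]
    avg_response = sum(gaps) / len(gaps) if gaps else 0.0
    # One-sidedness: share of messages sent by the most active participant.
    dominance = max(senders.count(s) for s in set(senders)) / length if length else 0.0
    return {
        "length": float(length),
        "avg_response_seconds": avg_response,
        "dominance": dominance,
        "accounts_connected": 1.0 if accounts_connected else 0.0,
    }
```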
We found that our machine learning program was able to identify unsafe conversations 87% of the time using conversation metadata alone. However, analyzing the text, images, and videos of the conversations is the most effective approach for identifying the type and severity of the risk.
These results highlight the importance of metadata for distinguishing unsafe conversations, and they could serve as guidelines for platforms designing artificial intelligence risk identification. Platforms could use high-level features such as metadata to block harmful content without scanning that content and thereby violating users' privacy. For example, a persistent harasser whom a young person wants to avoid would produce metadata (repeated, short, one-sided communications between unconnected users) that an AI system could use to block the harasser.
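To make the idea concrete, here is a minimal sketch, assuming metadata feature vectors like those above and synthetic stand-in data, of training a classifier that flags conversations without ever reading their content. It is not the model used in the study, only an illustration of metadata-based detection.

```python
# Hypothetical sketch: a metadata-only risk classifier on synthetic stand-in data.
# Each row is [length, avg_response_seconds, dominance, accounts_connected].
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# "Safe" conversations: longer, roughly balanced, between connected accounts.
safe = np.column_stack([
    rng.integers(20, 200, 500),
    rng.uniform(30, 600, 500),
    rng.uniform(0.4, 0.7, 500),
    np.ones(500),
])
# "Unsafe" conversations mimic the persistent-harasser pattern described above:
# short, one-sided, between unconnected accounts.
unsafe = np.column_stack([
    rng.integers(1, 10, 100),
    rng.uniform(5, 120, 100),
    rng.uniform(0.8, 1.0, 100),
    np.zeros(100),
])

X = np.vstack([safe, unsafe])
y = np.concatenate([np.zeros(500), np.ones(100)])  # 1 = flagged unsafe

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

In this toy setup, the harasser pattern surfaces as low length, high dominance, and unconnected accounts, so the classifier can flag it without inspecting any message text, images, or video.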
Ideally, young people and their caregivers would be given the option by design to turn on encryption, risk detection, or both, so that they can decide for themselves the trade-offs between privacy and safety.
Afsaneh Razi, Assistant Professor of Information Science, Drexel University
This article is republished from The Conversation under a Creative Commons license. Read the original article.