Social media challenges can be deadly. Here’s what to do

Late last year, 10-year-old Nylah Anderson from Chester saw something in her personalized TikTok feed that caught her eye: a challenge that dared users to choke themselves until they nearly passed out. Nylah tried it, and she died.

Nylah’s mother is now suing TikTok in federal court in Philadelphia, accusing the social media platform of using a dangerous algorithm that directed her daughter into the deadly challenge.


But TikTok is far from the only provider of dangerous “challenges” and algorithms that feed users harmful content.

Many challenges are trivial, involving cup stacking, homemade slime, or dancing. The concept itself isn’t bad: many social media challenges kept people connected during pandemic quarantines, for example by recreating a favorite work of art or fashioning an outfit from a pillowcase. But there have also been dangerous stunts, like the “Tide Pod Challenge” that took off a few years ago.

Challenges aren’t the only dangerous aspect of social media. Our own research at the Annenberg Public Policy Center has identified ways in which social media algorithms can spread harmful content, such as depictions of self-harm, to vulnerable young people. And we found that young adults exposed to self-harm content on Instagram were more likely to exhibit those thoughts and behaviors themselves within a month of seeing it.

So what do we do about it? We cannot cut off children’s access to social media and we cannot expect parents to be able to monitor everything their children see online. Instead, social media platforms should take more responsibility for monitoring harmful content.

Twitter, Facebook, and other platforms say they already do this monitoring, but it is not enough, and there are too few legal consequences when they fail. Even video recordings of mass shootings posted on social media can still be found in searches long after they first appear.

The biggest obstacle to digital platforms taking responsibility for what they share is the Communications Decency Act of 1996. To allow the internet to thrive, Section 230 of the act declared that internet platforms are not to be treated as the publishers of content posted on them and therefore are not liable for it. But the internet has changed a lot since 1996.

Over the past 25 years, as advertising has become the way websites make money, they have become more and more like publishers, just as magazines and older media always have been. They are even better at it: with highly refined user and tracking data, and the ability to use algorithms to identify target audiences, platforms can sell advertising with far greater precision than was ever possible with traditional media.

Directing users to content they might find appealing does not have to result in harm. But when there is no accountability for what platforms offer, harmful but engaging content can proliferate.

For example, we found that the vast majority of young people exposed to Instagram posts depicting self-harm said they came across the content by accident and had not been searching for it.

We don’t know how Nylah found the challenge that led to her death. But if TikTok sent the challenge to her because she matched a profile of users who had taken part in it or searched for similar content, then the algorithm’s creators and keepers might bear some of the blame.

One way to hold platforms accountable is to amend Section 230. A potential change, suggested by US Sens. Amy Klobuchar (D., Minn.) and Ben Ray Luján (D., N.M.), is the Health Misinformation Act, which would define a platform’s promotion of user content that may be harmful to health as a form of publication, removing the free pass that Section 230 granted to internet platforms. (The Department of Health and Human Services would define what counts as harmful to health.) The bill was proposed to protect the public from misleading content during the pandemic, but it could easily be applied to other forms of content harmful to health, including the kind of dangerous challenge that led to Nylah’s suffocation.

While this may look like government infringement on free speech, such a rule could be narrowly drawn to reduce the spread of content that directly harms public health. I think this kind of regulation is worth considering if it saves the lives of 10-year-old girls.

Dan Romer is research director of the Annenberg Public Policy Center at the University of Pennsylvania, where his research focuses on media and social influences on adolescent health and development.
