Instagram keeps flubbing on teen safety. Will its new ‘PG-13’ guidelines make a difference? | Tayo Bero


For months, Instagram has been struggling to convince parents, advocates and officials that it's a safe place for kids, even though there's a mountain of evidence showing quite the opposite. Now, the company is introducing yet another set of guardrails that will supposedly keep teens on the platform safe. But going by its track record, parents shouldn't be smiling yet.

Starting this week, all users aged under 18 will automatically be placed into the 13+ setting, which restricts their feed to content that meets the standards of the US PG-13 movie rating.

But Instagram's failed past promises make this new content clampdown feel like more hollow posturing, designed to make the company look like it's actually doing something about the problem.

Advocacy groups have been sounding the alarm for years about minors being exposed to content and people that they shouldn’t be, while the company rakes in over $100bn a year in revenue. Meta itself estimated that about 100,000 children using Facebook and Instagram were experiencing online sexual harassment each day. This is unsurprising, given that as of July 2020 (according to an internal Meta chat made public through a New Mexico lawsuit against the company), the actions being taken to prevent child grooming on the platform lay “somewhere between zero and negligible”. The New Mexico lawsuit alleges that Meta’s social networks – including Instagram – have become hubs for child predators. (Meta denies the core allegations and claims the lawsuit “cherry-picked” from documents.)

Last year, the company finally set up mandatory Instagram teen accounts. But new research led by a Meta whistleblower found that 64% of the safety tools that came with those teen accounts weren't effective.

According to the study, 47% of young teen Instagram users encountered unsafe content and unwanted messages in a month, and 37% of 13- to 15-year-old users experienced at least one piece of unsafe content or unwanted message on a weekly basis, “including roughly 1 in 7 who were seeing either self-harm content, unwanted sexual content, discriminatory content, or alcohol and drug content weekly”.

“These failings point to a corporate culture at Meta that puts engagement and profit before safety,” Andy Burrows, chief executive of the UK’s Molly Rose Foundation, which seeks stronger online safety laws, and was part of the team that conducted the research, told the BBC. A Meta spokesperson said the study “repeatedly misrepresents our efforts to empower parents and protect teens, misstating how our safety tools work and how millions of parents and teens are using them”.

Meanwhile, the measures the company put in place last year came after another significant moment for Meta's public image: in January 2024, the heads of the world's biggest social media companies were hauled before the US Senate to answer for their companies' safety policies. Mark Zuckerberg, Meta's CEO, apologised to a group of parents who said their children had been harmed by social media.

But Instagram has had years to get this right, and it still seems to choose letting children be harmed first and apologising afterwards. On Monday, Reuters reported that the company's own research found teens who said Instagram frequently made them feel bad about their bodies saw three times more "eating disorder-adjacent content" than others did. What's worse, tech companies and social media startups have insinuated themselves so deeply into our lives that it's virtually impossible to participate in society – especially as a kid – without them.

So what's the solution? First, it's important to think seriously about online spaces as an extension of the real world, not just a digital approximation of it. Social media platforms do not merely replicate real-life violence in digital form; they are also used as a vehicle for other forms of in-real-life harm, and children are the most susceptible to this danger.

Pushing lawmakers to compel these companies to incorporate safety measures as part of the design, and not as an afterthought, is one thing; but it’s also crucial that parents teach kids about online safety in the same way we would teach them about any other kind of safety when they are out in the world.

The technologies these carnivorous companies create, leverage and very often abuse simply aren’t going anywhere. If we can’t trust them to protect their most vulnerable users, then it’s on us to do it ourselves.

  • Tayo Bero is a Guardian US columnist
