Can social media platforms be tamed?
Well, Instagram is trying to rein in the trolls in its corner of the internet.
The Facebook-owned photo sharing app introduced a comment-control tool this week in a bid to grapple with harassment on its platform and curb online vitriol.
Individuals with public Instagram accounts now have a handful of filtering options for comments. They can allow comments from everyone; from people they follow and their followers; from just the people they follow; or from just their followers. Comments can also be blocked from specific users.
While it may seem like just another service update, it is an important step in the broader context of social media's online harassment problem, which remains a huge issue for internet users.
Pew Research Center recently reported that online abuse is as rampant as ever, with four in 10 U.S. adults saying they've been harassed online. Of that group, 18% said the abuse included physical threats, stalking or sexual harassment.
The ability to filter commenters on Instagram won’t eradicate the internet’s trolling problem. But some experts say it’s a step in the right direction.
“As lawyers for individuals targeted by trolls, perverts, as*****s, and psychos, we’ve known how rife the Instagram comment section is for our client base,” Carrie Goldberg, an attorney who specializes in sexual harassment crimes, told CNN Tech.
The new tool could stave off harassment before it starts, according to Goldberg. “The company does not have to spend its resources moderating abuse when users can curtail it before it happens in the first place.”
Zoe Quinn, a game developer who wrote about her personal experience being harassed online in a new book “Crash Override,” says she’ll take advantage of the feature.
“I’m relieved to know I can finally do something about a few bad actors on my own account,” she told CNN Tech, adding that giving people “granular and specific control over their privacy settings is great practice in general.”
Instagram’s tactic for helping users filter their feeds isn’t exactly groundbreaking.
The strategy is one that parent company Facebook also uses.
Facebook users can filter who sees their profiles and posts, as well as who is able to comment. Facebook, as well as other platforms like Twitter, rely on users to report misbehavior. A team of moderators then investigates claims that have been submitted.
The comment-control tool isn’t a panacea. Experts say there are drawbacks because it places the onus too squarely on the user. Others say it could create a false sense of security.
Brianna Wu, another game developer who has been a frequent target of online harassment, including death threats, explained to CNN Tech that if an Instagram account is dedicated to "doxxing" women — a term for publishing private or identifying information about someone for malicious purposes — blocking that content doesn't prevent it from existing and spreading.
“The danger is going to still exist. You have to have user oversight,” she said. “It’s also psychologically exhausting to curate death threats and rape threats yourself. You can block them, but new accounts spring up like weeds … It’s my experience when you draw a boundary with someone, they often double down.”
Soraya Chemaly, a writer and director of the Women’s Media Center Speech Project, agreed.
“I know that they try hard to make sure users have tools at their disposal that enable them to develop more privacy which I think is a net good,” said Chemaly, while adding that the tools themselves don’t really protect anyone. “It just makes the experience a little more pleasant.”
Instagram is also expanding a separate tool that filters out offensive comments. It introduced the ability to block select offensive comments in English in June, and this week the company said the filter is now available in other languages, too: Arabic, French, German and Portuguese.
Chemaly said the issue of online harassment on social platforms is complicated.
“There’s a question of, what’s at the root of the hostility, that no one really addresses … It’s a larger issue of social and emotional learning.”
So, while Instagram may be sending a message to trolls that its platform is getting a little less friendly to their vitriol, "there's really no stemming the firehose of awful human beings online," added Chemaly. "It's like playing whack-a-mole."
Danielle Citron, a cyber-harassment expert and law professor at the University of Maryland, said the comment-control tool is similar to those used by some blogging platforms, where people can delete or block individuals who are abusive or off-topic.
If privacy and safety folks collaborate with engineers when building a product from the ground up, features like this might be available from the get-go “rather than trying to tack on privacy and security later,” Citron said.