Twitter Is Testing Screening for Offensive Content

Twitter is taking its screening of direct messages from unknown senders a step further. The social media giant announced today that it's testing a filter for messages that include offensive content. Messages containing potentially offensive words will be tucked away in a folder marked "additional messages." Users can then choose to view them separately.

The company has made several efforts this year to use technology to flag abusive tweets without the need for human intervention. While such algorithm-assisted policing can work, it also flags plenty of false positives and misses a lot of filth. Twitter has additionally tested a "hide replies" function and made it simpler to report abusive tweets.

Nonetheless, a lot of the abuse and vitriol on Twitter comes in the form of direct messages from complete strangers — many of whom users may not even follow. Women and people of color are especially subject to this kind of online harassment. If your account is set up to accept direct messages from anybody, Twitter files messages from users you don't follow in a folder called "message requests." It also has a "quality filter" that can weed out what it defines as "lower-quality" messages from your message requests folder entirely. You won't be able to see those suspicious messages unless you turn off the quality filter.

Twitter's latest filter for sensitive content is a bit of a happy medium. The first few lines of suspect messages will be hidden and replaced with the line, "This message is hidden as it may contain offensive content." You can then choose to either view or delete them. This way, you won't miss the odd NSFW message from an old sorority sister or an awkwardly worded note from a random business contact.