In its latest effort to curb rampant harassment on its platform, Twitter is looking into giving users a second chance before they tweet. In a new feature the company is testing, users who compose a reply with “harmful” language will see a prompt suggesting that they self-edit before posting.
When things get heated, you may say things you don't mean. To let you rethink a reply, we’re running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful.
— Twitter Support (@TwitterSupport) May 5, 2020
Twitter confirmed that the test feature will be limited to replies for now, and explained that its systems will detect harmful language based on the kind of language used in other tweets that have been reported.
The company is also testing a new layout for viewing conversations on iOS and on the web.
Your conversations are the 💙 of Twitter, so we’re testing ways to make them easier to read and follow.
Some of you on iOS and web will see a new layout for replies with lines and indentations that make it clearer who is talking to whom and to fit more of the convo in one view. pic.twitter.com/sB2y09fG9t
— Twitter Support (@TwitterSupport) May 5, 2020