Twitter has prompts that ask users to pause and reconsider when they are about to send a potentially harmful or offensive reply, before they tap the submit button.
Based on feedback from those tests, the company has made improvements to the systems that decide when and how to send these reminders.
The prompt systems are now better at detecting offensive or vulgar language, including profanity, are more aware of vocabulary that underrepresented communities have reclaimed and use in harmless ways, and now also take into account your relationship with the person you are replying to.
In other words, if you reply to someone you interact with regularly, Twitter assumes there is a higher chance you understand each other's preferred tone of communication and will not show you a prompt.
Twitter first tested this system in May 2020, paused it shortly thereafter, and brought it back in February of this year.
The prompts are one of many updates Twitter has made to try to shape user behavior, reduce bullying and harassment, and encourage healthier conversations.
The improved prompts for offensive replies are rolling out to Twitter for iOS users today and to Android users over the next few days.
The company says the prompts are already making a difference in how people interact across the platform now that its systems can distinguish between potentially offensive language, mockery, and friendly banter.
The company says internal tests show that 34 percent of people who received such a prompt revised their initial reply or decided not to send it at all.
After being prompted once, people composed, on average, 11 percent fewer offensive replies.
People who were shown a prompt were also themselves less likely to receive offensive and harmful replies.
The company says it has made improvements over the past year to reduce cases where people see prompts unnecessarily.