Twitter announced that it is studying whether its machine learning-based algorithms cause unintended harm, a move that comes as social media companies face increasing scrutiny for their role in spreading conspiracy theories and enabling harassment.
In the coming months, the company's Responsible Machine Learning initiative will release reports analyzing possible racial and gender bias in its image-cropping algorithm.
It will also publish reports assessing the fairness of home timeline recommendations across racial subgroups and analyzing content recommendations for different political ideologies in seven countries.
After Twitter's photo-cropping algorithm was criticized last year for favoring white faces over darker faces, the company said its tests showed no racial or gender bias.
Even so, in the wake of those tests, it announced it would give users more control over how images are cropped, acknowledging that automatic cropping carries a potential for harm.
Twitter said the reports' findings could lead to changes across the platform, inform new guidance on how specific products are designed, and raise awareness about ethical machine learning.
The company added that machine learning affects hundreds of millions of tweets every day, and that the system can sometimes behave differently than intended; these subtle shifts affect users, and Twitter wants to study them and use what it learns to build a better product.
The initiative comes as social media companies face accusations that their algorithms fuel polarization, disinformation, and online extremism.
Twitter in particular has been criticized for not doing enough to combat harassment.
CEO Jack Dorsey has previously said he wants a future where users can choose which algorithm to use through an app store-like interface, rather than relying on a single algorithm put in place by Twitter.
Twitter's implicit acknowledgment that its algorithms may harm users stands in contrast to Facebook's defense of its own.
Facebook said last month that it has no commercial interest in amplifying extremist content, and that the platform is not solely responsible for rising political polarization, since individual choices also influence what users see in the News Feed.