At the UN Generation Equality Forum in Paris, Twitter, TikTok, Google and Facebook pledged to fight online abuse and strengthen the safety of women on their platforms.
The promise came after consultations with the World Wide Web Foundation (WWWF) last year, which examined online violence and gender-based abuse.
WWWF said studies have shown that women want more control over who can respond to or comment on their posts on social media, as well as more options about what they see online, where and when.
According to WWWF, the companies have committed to “finding better ways for women to take care of their safety online,” offering more granular settings, such as who can view, share or comment on posts; simpler and more accessible language; easier navigation and access to safety tools; and “reducing the burden on women by proactively reducing the amount of abuse they see”.
The wording of that last part is a little frustrating: it addresses the consequences and visibility of the abuse, but not the person or people who committed it. And just because women aren’t seeing the abuse on social media doesn’t mean the abuse has gone away. Platforms certainly bear some responsibility for making their online spaces safer, but until they become more proactive and less reactive, and go after abusers, the burden will continue to rest with women and marginalized groups to report abuse and convince a social networking platform that it is worth addressing.
In addition to the curation-focused checklist above, as part of the commitment, the companies will implement improvements to their reporting systems, offering users the ability to track and manage their reports and establishing additional ways for women to get help and support when they report abuse. They will also allow for “greater capacity to address context and/or language”, which could bring more subtle forms of verbal abuse or threats within the scope of enforcement.
These are laudable goals, but the WWWF announcement did not specify how each platform plans to achieve them.
Keeping everyone on Twitter safe and free from abuse is the company’s top priority, Vijaya Gadde, Twitter’s head of legal, public policy, and trust and safety, said in an emailed statement.
“Although we have made recent strides in giving people more control to manage their safety, we know there is still a lot of work to be done,” wrote Gadde, noting that women and underrepresented communities are disproportionately affected by abuse.
Gadde said abusive behavior “has no place in our service. This harms those who are targeted and is detrimental to the health of the conversation and the role that Twitter plays in the expression and exchange of ideas where people – regardless of their views or perspectives – can be heard.”
Facebook’s global head of safety, Antigone Davis, said in an email that Facebook was eager to work with other technology companies to make the Internet safer for women. “To keep women safe from online and offline abuse, exploitation and harassment, we regularly update our policies, tools and technology in consultation with experts around the world – including more than 200 women’s safety organizations,” Davis said in the statement.
Tara Wadhwa, policy director at TikTok US, wrote a blog post describing the company’s plans. “Over the next few months, we will begin to develop and test a number of potential product changes to our platform that address these priorities and help make TikTok an increasingly safer place for women,” wrote Wadhwa.
For now, these “commitments” are not binding on the companies, beyond the prospect of public shaming if they fail to follow through. And unfortunately, that tends to be the most reliable way to get social networking platforms to respond to users’ concerns.