Abuse

Twitter breaks its silence on McGowan suspension

Twitter has abruptly broken its own policy of not commenting on individual accounts to explain why it temporarily suspended the account of actress Rose McGowan late yesterday, after she had been tweeting about the allegations of sexual abuse and harassment surfacing against Hollywood producer Harvey Weinstein.

Earlier today we asked Twitter why it had suspended McGowan’s account and it declined to provide an explanation, saying it does not comment on individual accounts “for privacy and security reasons”.

Yet now the Twitter Safety account has publicly tweeted to say McGowan’s account was “temporarily locked because one of her Tweets included a private phone number, which violates our Terms of Service” — apparently selectively breaking its own rule about not commenting on individual accounts. (We’ve asked McGowan for comment and will update this post with any response.)

At this point it would appear that Twitter’s sense of irony runs very deep indeed. And/or its store of hypocrisy. Because, as others have previously pointed out, the company has long used a policy of not commenting on individual accounts to shield itself from accountability — e.g. from criticism that it’s providing a platform to Nazis and white supremacists.

Yet now, in this instance, when it’s facing a high profile storm of criticism for selectively silencing McGowan (a verified Twitter user with more than 750k followers) while simultaneously failing to silence the abuse flowing over its own platform, it’s suddenly willing to break its own rule as it tries to extricate itself from blame and from criticism that it’s complicit in enabling the abuse of women.

Safe to say, this really is what leading from behind looks like.

But let’s not forget that we’ve already seen Twitter ban one notorious troll yet defend the right of another to make violent threats. Meanwhile, armies of misogynistic and racist trolls continue to roam its platform with near impunity, precisely because Twitter has handed the burden of blocking and reporting racism, hate speech, misogyny and so on off to individual Twitter users.

Here’s another little irony you’ll find retweeted into the Twitter Safety feed right now:

In its series of three longer-than-140-character tweets regarding McGowan’s suspension, Twitter does say it will be “clearer about these policies and decisions in the future”.

But it also goes on to claim to be “proud to empower and support the voices on our platform, especially those that speak truth to power”.

So — in tone of voice at least — Twitter appears, once again, to be hunkering down and digging into its default position of defending all speech, i.e. including abusive, hateful speech.

Which is exactly the kind of inflexible perspective that has led to its ongoing failure to drive abusers off its platform — thereby contributing, inexorably, to the bullying and harassment of women (and others).

Twitter CEO Jack Dorsey has also tweeted to flag the Twitter Safety apologia, reiterating that: “We need to be a lot more transparent in our actions in order to build trust.”

And while more transparency certainly sounds like a good idea, if it’s just going to be more selective transparency — as Twitter has deployed in this instance — that’s hardly going to engender a new dawn of trust in its actions.

Moreover, if the company’s leadership continues to let its platform be weaponized by co-ordinated groups of abusers to conduct targeted harassment against anyone they choose, it will find large swathes of its user base continuing to view it with mistrust — if they don’t just up sticks and ditch Twitter entirely.

Saying you’re ‘good people’ yet doing nothing to fix a major abuse problem is the story of Twitter’s current leadership (and likely contributes to the growth problem bogging down its business).

And, well, we all know how that tale ends.

Featured Image: Drew Angerer/Getty Images

News Source = techcrunch.com


Twitter replaces its gun emoji with a water gun

Twitter has now followed Apple’s lead in changing its pistol emoji to a harmless, bright green water gun. And in doing so, the company that has struggled to handle the abuse, hate speech and harassment taking place across its platform has removed one of the means for online abusers to troll their victims.

The change is one of several rolling out now in Twitter’s emoji update, Twemoji 2.6, which impacts Twitter users on the web, mobile web, and on Tweetdeck.

[Image: Apple’s water gun]

[Image: Twitter’s water gun]

The decision to replace a weapon emoji with a child’s toy was seen as a political statement when Apple rolled out its own water gun emoji in iOS 10 in 2016. The company had also argued against the addition of a rifle emoji, ultimately leading to the Unicode Consortium’s decision to remove the gun from its list of new emoji candidates that same year.

With these moves, Apple was effectively telling people that a gun didn’t have a place in the pictorial language people commonly use when messaging on mobile devices.

These sorts of changes matter because of emoji’s ability to influence culture and its function as a globally understood form of communication. That’s why so much attention is given to emoji updates that go beyond the cosmetic – like those that offer better representations of human skin tones, show different types of family groupings or relationships, or give various professions – like a police officer or a scientist – both male and female versions, for example.
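As a concrete illustration (not from the article): the skin-tone and profession variations mentioned above are not separate pictures but Unicode sequences. A skin tone is a base emoji followed by a Fitzpatrick modifier (U+1F3FB through U+1F3FF), and gendered or profession emoji are typically built with the ZERO WIDTH JOINER (U+200D). A minimal Python sketch:

```python
# Skin tone: base emoji + Fitzpatrick modifier renders as one glyph
WAVING_HAND = "\U0001F44B"        # 👋
MEDIUM_SKIN_TONE = "\U0001F3FD"   # Fitzpatrick type-4 modifier
waving_medium = WAVING_HAND + MEDIUM_SKIN_TONE  # 👋🏽

# Profession: person + ZERO WIDTH JOINER + object forms a ZWJ sequence
WOMAN = "\U0001F469"
ZWJ = "\u200D"
MICROSCOPE = "\U0001F52C"
woman_scientist = WOMAN + ZWJ + MICROSCOPE  # 👩‍🔬 on supporting platforms

print(len(waving_medium))    # 2 code points, displayed as one glyph
print(len(woman_scientist))  # 3 code points, displayed as one glyph
```

Platforms that don’t support a given sequence simply fall back to showing the individual characters side by side.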

In the case of the water pistol, Apple set a certain standard that others in the industry have since followed.

Samsung also later replaced its gun with a water gun, as did WhatsApp. Google, meanwhile, didn’t follow Apple’s lead, saying that it believed in cross-platform communication. Many others, including Microsoft, left their realistic gun emoji alone, too.

“The main problem with the different appearances of the pistol emoji has been the potential for confusion when one platform displays this as an innocuous toy, and another shows the same emoji as a weapon. This was particularly an issue in 2016 when Apple changed the pistol emoji out of step with every single other vendor at the time,” notes Jeremy Burge, Emojipedia’s founder and Vice Chair on the Unicode Emoji Subcommittee. “Now we’re seeing multiple vendors all changing to a water pistol image all in a similar timeframe with Samsung and Twitter both changing their design this year,” he says.
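The confusion Burge describes exists because the pistol emoji is a single Unicode code point, U+1F52B; vendors only change the glyph they draw for it, never the underlying character. The same tweet can therefore show a toy on one platform and a realistic weapon on another. A quick check in Python:

```python
import unicodedata

# One code point, many vendor glyphs: the character itself is unchanged
pistol = "\U0001F52B"
print(hex(ord(pistol)))          # 0x1f52b
print(unicodedata.name(pistol))  # PISTOL
```

Whether that code point appears as a revolver or a bright green water gun is entirely a rendering decision by the platform’s emoji font.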

On Twitter, however, the updated gun emoji very much comes across as a message about where the company stands (or aims to stand) on abuse and violence. A gun – as opposed to a water gun – can be far more frightening when accompanied with a threat of violence in a tweet.

The change also arrives at a time when Twitter is trying – some would say unsuccessfully – to better manage the bad behavior that takes place on its platform. Most recently, it decided to publicize its rules around abuse to see if people would then choose to follow them. It has also updated its guidelines and policies for how it handles online abusers, with mixed results.

In addition, the change feels even more like a political message than the Apple emoji update did given its timing – in the wake of Parkland, the youth-led #NeverAgain movement, the YouTube shooting, and the increased focus on the NRA’s contributions to politicians.

Twitter has confirmed the change in an email to TechCrunch, saying the decision was made for “consistency” with the others who have changed.

However, Emojipedia shows that not all companies have updated to the water gun. Google, Microsoft, Facebook, Messenger, LG, HTC, EmojiOne, emojidex, and Mozilla still offer a realistic pistol, not the green toy.

But Apple and Samsung perhaps carry more weight when it comes to where things are headed.

“I know some users object to what they see as censorship on their emoji keyboard, but I can certainly see why companies today might want to ensure that they aren’t showing a weapon where iPhone and Samsung Galaxy users now have a toy gun,” Burge says. “It’s pretty much the opposite to the issue with Apple being out of step with other vendors in 2016.”

The gun was the most notable change in Twemoji 2.6, but Emojipedia notes that other emoji have been updated as well, including the kitchen knife (which now looks more like a vegetable slicer than a weapon for stabbing), the crystal ball, the alembic (a glass distillation vessel), and the magnifying glass, along with more minor tweaks to the coat, eyes, and emoji faces with horns.

Image credits: Emojipedia; Apple Water Gun: Apple


Twitter will publicize rules around abuse to test if behavior changes

As part of Twitter’s efforts to rid its platform of abuse and hate, the company is teaming up with researchers Susan Benesch, a faculty associate at the Berkman Klein Center for Internet & Society at Harvard University, and J. Nathan Matias, a postdoctoral research associate at Princeton University, to study online abuse. Today, Twitter is starting to test the idea that showing people its rules will improve their behavior.

[Image via TechCrunch: this appeared at the top of one user’s Notifications tab.]

“In an experiment starting today, Twitter is publicizing its rules, to test whether this improves civility,” Benesch and Matias wrote on Medium. “We proposed this idea to Twitter and designed an experiment to evaluate it.”

The idea is that showing people the rules will improve their behavior on the platform. The researchers point to evidence that when institutions clearly publish rules, people are more likely to follow them.

The researchers assure that the privacy of Twitter users will be protected; Twitter will only provide them with anonymized, aggregated information.

“Since we will not receive identifying information on any individual person or Twitter account, we cannot and will not mention anyone or their Tweets in our publications,” the researchers wrote.

Last month, Twitter began soliciting proposals from the public to help the social network capture, measure and evaluate healthy interactions on the platform. This was part of Twitter’s commitment “to help increase the collective health, openness, and civility of public conversation,” Twitter CEO Jack Dorsey said in a tweet.

It’s not clear how widespread the test will be, but it seems that the company won’t be releasing specifics.

“We’re collaborating with a group of academic researchers and scholars led by Susan Benesch, J. Nathan Matias, and Derek Ruths on an initiative to remind people of the Twitter Rules, to evaluate whether increased awareness of our policies results in improved behavior and more respect on Twitter,” a Twitter spokesperson said in a statement.


Facebook tries to prove it cares with “Fighting Abuse @ Scale” conference

Desperate to show it takes thwarting misinformation, fraud, and spam seriously, Facebook just revealed that it’s hosting a private, invite-only “Fighting Abuse @Scale” conference in San Francisco on April 25th. Speakers from Facebook, Airbnb, Google, Microsoft, and LinkedIn will discuss stopping fake news, preventing counterfeit account creation, using honeypots to disrupt adversarial infrastructure, and employing machine learning to boost platform safety.

[Update: Though never publicly announced, Facebook has already privately filled the event to capacity. The company’s PR says this isn’t a “last-minute” event as we originally described it, and initial invites went out February 6th. But the sudden move to invite journalists less than a month ahead seems timed to humanize Facebook’s abuse-fighting efforts amid its ongoing data abuse and election interference scandals.]

Fighting Abuse @Scale will be held at the Bespoke Event Center within the Westfield Mall in SF. We can expect more technical details about the new proactive artificial intelligence tools Facebook announced today during a conference call about its plans to protect election integrity. The first session is titled “Combating misinformation at Facebook” and will feature an engineering director and data scientists from the company.

Facebook previously held “Fighting Spam @Scale” conferences in May 2015 and in November 2016, just after the presidential election. But since then, public frustration has built up to a breaking point for the social network. Russian election interference, hoaxes reaching voters, violence on Facebook Live, the ever-present issue of cyberbullying, and now the Cambridge Analytica data privacy scandal have created a convergence of backlash. With its share price plummeting, former executives speaking out against its impact on society, and CEO Mark Zuckerberg on a media apology tour, Facebook needs to show this isn’t just a PR problem. It needs users, potential government regulators, and its own existing and potential employees to see it’s willing to step up and take responsibility for fixing its platform.

