
Timesdelhi.com

June 25, 2019
Category archive

Hate crime

2018 really was more of a dumpster fire for online hate and harassment, ADL study finds


Around 37 percent of Americans were subjected to severe hate and harassment online in 2018, according to a new study by the Anti-Defamation League, up from about 18 percent in 2017. Over half of all Americans experienced some form of harassment, the ADL found.

Facebook users bore the brunt of online harassment on social networking sites, according to the ADL study, with around 56 percent of survey respondents indicating that at least some of their harassment occurred on the platform, which is unsurprising given Facebook’s status as the dominant social media platform in the U.S.

Around 19 percent of people said they experienced severe harassment on Twitter (only 19 percent? That seems low), while 17 percent reported harassment on YouTube, 16 percent on Instagram and 13 percent on WhatsApp.

Chart courtesy of the Anti-Defamation League

In all, the blue ribbon standards for odiousness went to Twitch, Reddit, Facebook and Discord when the ADL confined its survey to daily active users. Nearly half of all daily Twitch users have experienced harassment, the report indicated, while around 38 percent of daily Reddit users, 37 percent of daily Facebook users and 36 percent of daily Discord users reported being harassed.

“It’s deeply disturbing to see how prevalent online hate is, and how it affects so many Americans,” said ADL chief executive Jonathan A. Greenblatt. “Cyberhate is not limited to what’s solely behind a screen; it can have grave effects on the quality of everyday lives – both online and offline. People are experiencing hate and harassment online every day and some are even changing their habits to avoid contact with their harassers.”

Survey respondents also appear to believe that online hate makes people more susceptible to committing hate crimes, according to the ADL.

The ADL also found that most Americans want policymakers to strengthen laws and improve resources for police around cyberbullying and cyberhate. Roughly 80 percent said they wanted to see more action from lawmakers.

Even more Americans, or around 84 percent, think that the technology platforms themselves need to do more work to curb the harassment, hate, and hazing they see on social applications and websites.

As for the populations most at risk of harassment and hate online, members of the LGBTQ+ community were targeted most frequently, according to the study. Some 63 percent of people identifying as LGBTQ+ said they were targeted for online harassment because of their identity.

“More must be done in our society to lessen the prevalence of cyberhate,” said Greenblatt. “There are key actions every sector can take to help ensure more Americans are not subjected to this kind of behavior. The only way we can combat online hate is by working together, and that’s what ADL is dedicated to doing every day.”

The report also revealed that cyberbullying had real consequences for user behavior. Of the survey respondents, 38 percent stopped, reduced or changed their online activities, and 15 percent took steps to reduce risks to their physical safety.

Interviews for the survey were conducted between Dec. 17 and Dec. 27, 2018 by the public opinion and data analysis company YouGov, on behalf of the ADL’s Center for Technology and Society. The non-profit noted that it oversampled respondents who identified as Jewish, Muslim, African American, Asian American or LGBTQ+ to “understand the experiences of individuals who may be especially targeted because of their group identity.”

The survey had a margin of error of plus or minus three percentage points, according to a statement from the ADL.
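For context, a plus-or-minus three point margin of error is what the standard formula for a simple random sample yields at a 95 percent confidence level with roughly 1,000 respondents. The sketch below is illustrative only: the ADL statement quoted here doesn’t spell out the calculation or the survey’s exact sample size, so the implied n is an assumption, not a reported figure.

% Worst-case proportion (p = 0.5) and 95% confidence (z = 1.96), assuming simple random sampling
% Solving MOE = 0.03 for n gives the approximate sample size implied by a +/-3 point margin
\[
\mathrm{MOE} = z\sqrt{\frac{p(1-p)}{n}}
\quad\Rightarrow\quad
n = \frac{z^{2}\,p(1-p)}{\mathrm{MOE}^{2}} = \frac{1.96^{2}\times 0.25}{0.03^{2}} \approx 1{,}067
\]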

Online platforms still not clear enough about hate speech takedowns: EC


In its latest monitoring report on a voluntary Code of Conduct on illegal hate speech, which platforms including Facebook, Twitter and YouTube signed up to in Europe back in 2016, the European Commission has said progress is being made on speeding up takedowns, but tech firms are still lagging when it comes to providing feedback and transparency around their decisions.

Tech companies are now assessing 89% of flagged content within 24 hours, with 72% of content deemed to be illegal hate speech being removed, according to the Commission — compared to just 40% and 28% respectively when the Code was first launched more than two years ago.

However, it said today that platforms still aren’t giving users enough feedback on their reports, and it has urged more transparency from the companies, pressing for progress “in the coming months” and warning that it could still legislate for a pan-EU regulation if it believes that’s necessary.

Giving her assessment of how the (still) voluntary code on hate speech takedowns is operating at a press briefing today, commissioner Vera Jourova said: “The only real gap that remains is transparency and the feedback to users who sent notifications [of hate speech].

“On average about a third of the notifications do not receive a feedback detailing the decision taken. Only Facebook has a very high standard, sending feedback systematically to all users. So we would like to see progress on this in the coming months. Likewise the companies should be more transparent towards the general public about what is happening in their platforms. We would like to see them make more data available about the notices and removals.”

“The fight against illegal hate speech online is not over. And we have no signs that such content has decreased on social media platforms,” she added. “Let me be very clear: The good results of this monitoring exercise don’t mean the companies are off the hook. We will continue to monitor this very closely and we can always consider additional measures if efforts slow down.”

Jourova flagged additional steps taken by the Commission to support the overarching goal of clearing what she dubbed a “sewage of words” off of online platforms, such as facilitating data-sharing between tech companies and police forces to help investigations and prosecutions of hate speech purveyors move forward.

She also noted it continues to provide Member States’ justice ministers with briefings on how the voluntary code is operating, warning again: “We always discuss that we will continue but if it slows down or it stops delivering the results we will consider some kind of regulation.”

Germany passed its own social media hate speech takedown law, the so-called ‘NetzDG’, in 2017, with the legislation coming fully into force at the start of 2018. The law provides for fines as high as €50M for companies that fail to remove illegal hate speech within 24 hours and has led social media platforms like Facebook to plough greater resources into locally sited moderation teams.

In the UK, meanwhile, the government announced a plan last year to legislate around safety and social media, although it has yet to publish a White Paper setting out the detail of its policy plan.

Last week a UK parliamentary committee that has been investigating the impacts of social media and screen use among children recommended the government legislate to place a legal ‘duty of care’ on platforms to protect minors.

The committee also called for platforms to be more transparent, urging them to provide bona fide researchers with access to high quality anonymized data to allow for robust interrogation of social media’s effects on children and other vulnerable users.

Debate about the risks and impacts of social media platforms for children has intensified in the UK in recent weeks, following reports of the suicide of a 14-year-old schoolgirl whose father blamed Instagram for exposing her to posts encouraging self-harm, saying he had no doubt content she’d been exposed to on the platform had helped kill her.

During today’s press conference, Jourova was asked whether the Commission intends to extend the Code of Conduct on illegal hate speech to other types of content that are attracting concern, such as bullying and suicide. But she said the executive body is not intending to expand into such areas.

She said the Commission’s focus remains on addressing content that’s judged illegal under existing European legislation on racism and xenophobia — saying it’s a matter for individual Member States to choose to legislate in additional areas if they feel a need.

“We are following what the Member States are doing because we see… to some extent a fragmented picture of different problems in different countries,” she noted. “We are focusing on what is our obligation to promote the compliance with the European law. Which is the framework decision against racism and xenophobia.

“But we have the group of experts from the Member States, in the so-called Internet forum, where we speak about other crimes or sources of hatred online. And we see the determination on the side of the Member States to take proactive measures against these matters. So we expect that if there is such a worrying trend in some Member State that will address it by means of their national legislation.”

“I will always tell you I don’t like the fragmentation of the legal framework, especially when it comes to digital because we are faced with, more or less, the same problems in all the Member States,” she added. “But it’s true that when you [take a closer look] you see there are specific issues in the Member States, also maybe related with their history or culture, which at some moment the national authorities find necessary to react on by regulation. And the Commission is not hindering this process.

“This is the sovereign decision of the Member States.”

Four more tech platforms joined the voluntary code of conduct on illegal hate speech last year: Google+, Instagram, Snapchat and Dailymotion. French gaming platform Webedia (jeuxvideo.com) also announced its participation today.

Drilling down into the performance of specific platforms, the Commission’s monitoring exercise found that Facebook assessed hate speech reports in less than 24 hours in 92.6% of cases, with a further 5.1% assessed in less than 48 hours. The corresponding figures for YouTube were 83.8% and 7.9%, and for Twitter 88.3% and 7.3%, respectively.

Instagram, meanwhile, assessed 77.4% of notifications in less than 24 hours, while Google+, which will in any case close to consumers this April, managed to assess just 60%.

In terms of removals, the Commission found YouTube removed 85.4% of reported content, Facebook 82.4% and Twitter 43.5% (the latter a slight decrease in performance vs last year), while Google+ removed 80.0% of the content and Instagram 70.6%.

It argues that although social media platforms are removing illegal content “more and more rapidly” as a result of the code, this has not led to an “over-removal” of content, pointing to variable removal rates as an indication that “the review made by the companies continues to respect freedom of expression”.

“Removal rates varied depending on the severity of hateful content,” the Commission writes. “On average, 85.5% of content calling for murder or violence against specific groups was removed, while content using defamatory words or pictures to name certain groups was removed in 58.5% of the cases.”

“This suggests that the reviewers assess the content scrupulously and with full regard to protected speech,” it adds.

It is also crediting the code with helping foster partnerships between civil society organisations, national authorities and tech platforms — on key issues such as awareness raising and education activities.

Germany’s social media hate speech law is now in effect


A new law has come into force in Germany aimed at regulating social media platforms to ensure they remove hate speech within set periods of receiving complaints — within 24 hours in straightforward cases or within seven days where evaluation of content is more difficult.

The law’s full German name, Netzwerkdurchsetzungsgesetz, translates roughly to ‘Network Enforcement Act’, and it’s commonly referred to by the abbreviation NetzDG.

Fines of up to €50 million can be applied under the law if social media platforms fail to comply, though as Spiegel Online reports there is a transition period for companies to gear up for compliance, which will end on January 1, 2018. However, the ministry in charge has already started inspections this month.

Social platform giants such as Facebook, YouTube and Twitter were cast as the initial targets for the law, but Spiegel Online suggests the government is looking to apply it more widely, including to content on networks such as Reddit, Tumblr, Flickr, Vimeo, VK and Gab.

The usage bar for complying with the takedown timeframes is being set at a service having more than two million registered users in Germany.

Spiegel Online also reports that the German government intends to assign 50 people to the task of implementing and policing the law.

It also says all social media platforms, regardless of size, must provide a contact person in Germany for user complaints or requests for information from investigators. Such queries will need to be answered within 48 hours, or platforms risk penalties, it adds.

One obvious question here is how any fines could be applied across international borders if a social media firm has no bricks-and-mortar presence in Germany.

The law does also require social media firms operating in Germany to appoint a contact person in the country. But again, companies based outside Germany may be rather hard to police, unless the government intends to start trying to block access to non-compliant services, which would only invite further controversy.

The German cabinet backed the proposal for the law back in April. At the time, justice minister Heiko Maas said: “Freedom of expression ends where criminal law begins.”

The country has specific hate speech laws which criminalize certain types of speech, such as incitement to racial and religious violence, and the NetzDG law cites sections of the existing German Criminal Code — applying itself specifically to social media platforms.

Germany is not alone in Europe in seeking to clamp down on illegal content being spread and amplified via social media, either.

The UK has also been active recently in leading a push by several G7 nations against online extremism — with the apparent aim of reducing takedown times for this type of content to an average of just two hours.

Germany has also been pushing for a European Union wide response to tackling the spread of hate speech across online platforms.

And last week the European Commission put out new guidance for social media platforms urging them to be more proactive about removing “illegal content”, including by developing tools to automate the identification of problem content and prevent it from being re-uploaded.

It warned social giants that it might seek to draft a legislative proposal if they do not improve takedown performance within six months.

However, the executive body appears to be seeking to bundle various types of “illegal” content into the same problem bucket, and it quickly drew criticism that it risks encouraging algorithmic censorship by seeking to create one set of rules to apply to, for example, both copyrighted content and terrorist propaganda. That does underline the risks around broad efforts to regulate the types of content that can and can’t be viewed online.

Critics of Germany’s NetzDG law argue it will encourage tech platforms to censor controversial content to avoid the risk of big fines. And while speedy social media takedowns of offensive hate speech might enjoy mainstream backing in Germany, it remains to be seen how the law will operate in practice.

Meanwhile, if overly expansive rules end up being fashioned to try to regulate all sorts of “illegal” content online, that could also result in a wider chilling effect on online expression and reduced support for broad regulatory efforts.
