
Timesdelhi.com

May 22, 2019

Facebook says its new A.I. technology can detect ‘revenge porn’


Facebook on Friday announced a new artificial intelligence-powered tool that it says will help the social network detect revenge porn – the nonconsensually shared intimate images that, when posted online, can have devastating consequences for those who appear in the photos. The technology will leverage both A.I. and machine learning techniques to proactively detect near-nude images or videos that are shared without permission across Facebook and Instagram.

The announcement follows on Facebook’s earlier pilot of a photo-matching technology, which had people directly submit their intimate photos and videos to Facebook. The program, which was run in partnership with victim advocate organizations, would then create a digital fingerprint of that image so Facebook could stop it from ever being shared online across its platforms. This is similar to how companies today prevent child abuse images from being posted to their sites.
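Facebook hasn’t published the details of its fingerprinting system, but photo-matching services of this kind typically rely on perceptual hashes (Microsoft’s PhotoDNA is the best-known example), which change little when an image is resized or re-encoded. The toy “difference hash” below is purely illustrative – it is not Facebook’s actual method – but it sketches the idea:

```python
# Minimal sketch of perceptual "difference hash" fingerprinting.
# Facebook has not disclosed its implementation; production systems use
# more robust hashes such as PhotoDNA. This toy operates on a grayscale
# pixel grid (a list of rows of 0-255 ints) for illustration only.

def dhash(pixels):
    """One bit per horizontally adjacent pixel pair: 1 if left < right."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left < right else 0)
    return int("".join(map(str, bits)), 2)

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Two near-identical 4x4 "images": one pixel slightly brightened,
# as might happen after re-encoding or recompression.
img1 = [[10, 20, 30, 40], [40, 30, 20, 10], [5, 5, 200, 5], [1, 2, 3, 4]]
img2 = [[10, 20, 30, 40], [40, 30, 20, 10], [5, 5, 201, 5], [1, 2, 3, 4]]

print(hamming(dhash(img1), dhash(img2)))  # 0: fingerprints still match
```

Matching then reduces to comparing fingerprints: a small Hamming distance suggests the same underlying image, so a re-upload can be blocked without the platform retaining the photo itself.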

The new A.I. technology for revenge porn, however, doesn’t require the victim’s involvement. This is important, Facebook explains, because victims are sometimes too afraid of retribution to report the content themselves. Other times, they’re simply unaware that the photos or videos are being shared.

While the company was short on details about how the new system itself works, it did note that it goes beyond simply “detecting nudity.”

After the system flags an image or video, a specially trained member of Facebook’s Community Operations team will review the image, then remove it if it violates Facebook’s Community Standards. In most cases, the company will also disable the account as a result. An appeals process is available if the person believes Facebook has made a mistake.

In addition to the technology and existing pilot program, Facebook says it also reviewed how its other procedures around revenge porn reporting could be improved. It found, for instance, that victims wanted faster responses following their reports and they didn’t want a robotic reply. Other victims didn’t know how to use the reporting tools or even that they existed.

Facebook noted that addressing revenge porn is critical as it can lead to mental health consequences like anxiety, depression, suicidal thoughts and sometimes even PTSD. There can also be professional consequences, like lost jobs and damaged relationships with colleagues. Plus, those in more traditional communities around the world may be shunned or exiled, persecuted or even physically harmed.

Facebook admits that it wasn’t finding a way to “acknowledge the trauma that the victims endure,” when responding to their reports. It says it’s now re-evaluating the reporting tools and process to make sure they’re more “straightforward, clear and empathetic.”

It’s also launching “Not Without My Consent,” a victim-support hub in the Facebook Safety Center that was developed in partnership with experts. The hub will offer victims access to organizations and resources that can support them, and it will detail the steps to take to report the content to Facebook.

In the months ahead, Facebook says it will also build victim support toolkits with more locally and culturally relevant info by working with partners including the Revenge Porn Helpline (UK), Cyber Civil Rights Initiative (US), Digital Rights Foundation (Pakistan), SaferNet (Brazil) and Professor Lee Ji-yeon (South Korea).

Revenge porn is one of the many issues that results from offering the world a platform for public sharing. Facebook today is beginning to own up to the failures of social media across many fronts – which also include things like data privacy violations, the spread of misinformation, and online harassment and abuse.

CEO Mark Zuckerberg recently announced a pivot to privacy, where Facebook’s products will be joined together as an encrypted, interoperable messaging network – but the move has shaken Facebook internally, causing it to lose top execs along the way.

While the changes are in line with what the public wants, many have already lost trust in Facebook. For the first time in 10 years, Edison Research noted a decline in Facebook usage in the U.S., from 67 to 62 percent of Americans 12 and older. Still, Facebook remains a massive platform, with over 2 billion users. Even if users themselves opt out of Facebook, that doesn’t prevent them from ever becoming a victim of revenge porn or other online abuse by those who continue to use the social network.

News Source = techcrunch.com

2018 really was more of a dumpster fire for online hate and harassment, ADL study finds


Around 37 percent of Americans were subjected to severe hate and harassment online in 2018, according to a new study by the Anti-Defamation League – up from about 18 percent in 2017. And over half of all Americans experienced some form of harassment, the study found.

Facebook users bore the brunt of online harassment on social networking sites, according to the ADL study, with around 56 percent of survey respondents indicating that at least some of their harassment occurred on the platform – unsurprising given Facebook’s status as the dominant social media platform in the U.S.

Around 19 percent of people said they experienced severe harassment on Twitter (only 19 percent? That seems low), while 17 percent reported harassment on YouTube, 16 percent on Instagram, and 13 percent on WhatsApp.

Chart courtesy of the Anti-Defamation League

In all, the blue ribbon standards for odiousness went to Twitch, Reddit, Facebook and Discord when the ADL confined its survey to daily active users. Nearly half of all daily users on Twitch have experienced harassment, the report indicated. Around 38% of Reddit users, 37% of daily Facebook users, and 36% of daily Discord users reported being harassed.

“It’s deeply disturbing to see how prevalent online hate is, and how it affects so many Americans,” said ADL chief executive Jonathan A. Greenblatt. “Cyberhate is not limited to what’s solely behind a screen; it can have grave effects on the quality of everyday lives – both online and offline. People are experiencing hate and harassment online every day and some are even changing their habits to avoid contact with their harassers.”

And the survey respondents seem to think that online hate makes people more susceptible to committing hate crimes, according to the ADL.

The ADL also found that most Americans want policymakers to strengthen laws and improve resources for police around cyberbullying and cyberhate. Roughly 80 percent said they wanted to see more action from lawmakers.

Even more Americans, or around 84 percent, think that the technology platforms themselves need to do more work to curb the harassment, hate, and hazing they see on social applications and websites.

As for the populations that were most at risk to harassment and hate online, members of the LGBTQ community were targeted most frequently, according to the study. Some 63 percent of people identifying as LGBTQ+ said they were targeted for online harassment because of their identity.

“More must be done in our society to lessen the prevalence of cyberhate,” said Greenblatt. “There are key actions every sector can take to help ensure more Americans are not subjected to this kind of behavior. The only way we can combat online hate is by working together, and that’s what ADL is dedicated to doing every day.”

The report also revealed that cyberbullying had real consequences on user behavior. Of the survey respondents, 38 percent stopped, reduced or changed their online activities, and 15 percent took steps to reduce risks to their physical safety.

Interviews for the survey were conducted between Dec. 17 and Dec. 27, 2018 by the public opinion and data analysis company YouGov, on behalf of the ADL’s Center for Technology and Society. The non-profit noted that it oversampled respondents who identified as Jewish, Muslim, African American, Asian American or LGBTQ+ to “understand the experiences of individuals who may be especially targeted because of their group identity.”

The survey had a margin of error of plus or minus three percentage points, according to a statement from the ADL.
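That ±3-point figure is consistent with the textbook margin of error for a simple random sample at 95% confidence. The sample size below is an assumption for illustration – the article doesn’t state how many people the ADL surveyed:

```python
# The +/-3-point margin matches the standard simple-random-sample formula
# MOE = z * sqrt(p * (1 - p) / n) at 95% confidence (z = 1.96), taking
# the worst case p = 0.5. The n below is an illustrative assumption, not
# a figure from the ADL report.
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Roughly a thousand respondents puts the MOE near 3 percentage points:
print(round(100 * margin_of_error(1067), 1))  # 3.0
```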


Amnesty International used machine-learning to quantify the scale of abuse against women on Twitter


A new study by Amnesty International and Element AI puts numbers to a problem many women already know about: that Twitter is a cesspool of harassment and abuse. Conducted with the help of 6,500 volunteers, the study, billed by Amnesty International as “the largest ever” into online abuse against women, used machine-learning software from Element AI to analyze tweets sent to a sample of 778 women politicians and journalists during 2017. It found that 7.1%, or 1.1 million, of those tweets were either “problematic” or “abusive,” which Amnesty International said amounts to one abusive tweet sent every 30 seconds.

On an interactive website breaking down the study’s methodology and results, Amnesty International said many women either censor what they post, limit their interactions on Twitter, or just quit the platform altogether. “At a watershed moment when women around the world are using their collective power to amplify their voices through social media platforms, Twitter’s failure to consistently and transparently enforce its own community standards to tackle violence and abuse means that women are being pushed backwards towards a culture of silence,” stated the human rights advocacy organization.

Amnesty International, which has been researching abuse against women on Twitter for the past two years, signed up 6,500 volunteers for what it refers to as the “Troll Patrol” after releasing another study in March 2018 that described Twitter as a “toxic” place for women. The Troll Patrol’s volunteers, who come from 150 countries and range in age from 18 to 70 years old, received training about what constitutes a problematic or abusive tweet. They were then shown anonymized tweets mentioning one of the 778 women and asked whether or not the tweets were problematic or abusive. Each tweet was shown to several volunteers. In addition, Amnesty International said “three experts on violence and abuse against women” also categorized a sample of 1,000 tweets to “ensure we were able to assess the quality of the tweets labelled by our digital volunteers.”
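The article doesn’t say how the volunteers’ overlapping judgments were combined into a single label per tweet; a common approach in labeling studies is a simple majority vote, sketched here with hypothetical labels:

```python
# Hypothetical majority-vote aggregation of volunteer judgments.
# Amnesty International does not describe its exact aggregation rule in
# this article; the labels and tie-breaking policy here are assumptions.
from collections import Counter

def majority_label(judgments):
    """Return the most common label; flag exact ties as 'unsure'."""
    counts = Counter(judgments).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return "unsure"  # ties would go to an expert reviewer
    return counts[0][0]

print(majority_label(["abusive", "abusive", "problematic"]))  # abusive
print(majority_label(["ok", "abusive"]))                      # unsure
```

The expert-labelled sample of 1,000 tweets then serves as a quality check: comparing expert labels against the aggregated volunteer labels estimates how reliable the crowd’s judgments are.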

The study defined “problematic” as tweets “that contain hurtful or hostile content, especially if repeated to an individual on multiple occasions, but do not necessarily meet the threshold of abuse,” while “abusive” meant tweets “that violate Twitter’s own rules and include content that promotes violence against or threats toward people based on their race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.”

In total, the volunteers analyzed 288,000 tweets sent between January and December 2017 to the 778 women studied, who included politicians and journalists across the political spectrum from the United Kingdom and United States. Politicians included members of the U.K. Parliament and the U.S. Congress, while journalists represented a diverse group of publications including The Daily Mail, The New York Times, The Guardian, The Sun, gal-dem, Pink News, and Breitbart.

Then a subset of the labelled tweets was processed using Element AI’s machine-learning software to extrapolate the analysis to the total of 14.5 million tweets that mentioned the 778 women during 2017. (Since tweets weren’t collected for the study until March 2018, Amnesty International notes that the scale of abuse was likely even higher, because some abusive tweets may have been deleted or sent by accounts that were suspended or disabled.) Element AI’s extrapolation produced the finding that 7.1% of tweets sent to the women were problematic or abusive, amounting to 1.1 million tweets in 2017.
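A quick back-of-the-envelope check shows the study’s headline figures hang together, allowing for rounding:

```python
# Sanity check that the study's headline numbers agree with each other
# (figures are rounded as reported in the article).
total_mentions = 14_500_000   # tweets mentioning the 778 women in 2017
abusive_share = 0.071         # 7.1% judged problematic or abusive

abusive_tweets = total_mentions * abusive_share
print(round(abusive_tweets / 1e6, 1))      # ~1.0 million, reported as 1.1M

seconds_in_2017 = 365 * 24 * 60 * 60
print(round(seconds_in_2017 / 1_100_000))  # one abusive tweet every ~29 s
```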

Black, Asian, Latinx, and mixed race women were 34% more likely to be mentioned in problematic or abusive tweets than white women. Black women in particular were especially vulnerable: they were 84% more likely than white women to be mentioned in problematic or abusive tweets. One in 10 tweets mentioning black women in the study sample was problematic or abusive, compared to one in 15 for white women.

“We found that, although abuse is targeted at women across the political spectrum, women of color were much more likely to be impacted, and black women are disproportionately targeted. Twitter’s failure to crack down on this problem means it is contributing to the silencing of already marginalized voices,” said Milena Marin, Amnesty International’s senior advisor for tactical research, in the statement.

Breaking down the results by profession, the study found that 7% of tweets that mentioned the 454 journalists in the study were either problematic or abusive. The 324 politicians surveyed were targeted at a similar rate, with 7.12% of tweets that mentioned them problematic or abusive.

Of course, findings from a sample of 778 journalists and politicians in the U.K. and U.S. are difficult to extrapolate to other professions, countries, or the general population. The study’s findings are important, however, because many politicians and journalists need to use social media in order to do their jobs effectively. Women, and especially women of color, are underrepresented in both professions, and many stay on Twitter simply to make a statement about visibility, even though it means dealing with constant harassment and abuse. Furthermore, Twitter’s API changes mean many third-party anti-bullying tools no longer work, as technology journalist Sarah Jeong noted on her own Twitter profile, and the platform has yet to come up with tools that replicate their functionality.

Amnesty International’s other research about abusive behavior towards women on Twitter includes a 2017 online poll of women in eight countries, and an analysis of abuse faced by female members of Parliament before the U.K.’s 2017 snap election. The organization said the Troll Patrol isn’t about “policing Twitter or forcing it to remove content.” Instead, the organization wants the platform to be more transparent, especially about the machine-learning algorithms it uses to detect abuse.

Because the largest social media platforms now rely on machine learning to scale their anti-abuse monitoring, Element AI also used the study’s data to develop a machine-learning model that automatically detects abusive tweets. For the next three weeks, the model will be available to test on Amnesty International’s website in order to “demonstrate the potential and current limitations of AI technology.” These limitations mean social media platforms need to fine-tune their algorithms very carefully in order to detect abusive content without also flagging legitimate speech.
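The fine-tuning the article describes is essentially the precision/recall trade-off: where a classifier sets its decision threshold determines how much legitimate speech gets flagged versus how much abuse gets missed. The scores and labels below are made up to illustrate the effect; this is not Element AI’s model:

```python
# Illustration of the precision/recall trade-off in abuse detection:
# raising the decision threshold flags less legitimate speech (higher
# precision) but misses more abuse (lower recall). All scores and labels
# here are hypothetical.

def precision_recall(scores, labels, threshold):
    """Precision and recall when flagging items scored >= threshold."""
    flagged = [label for s, label in zip(scores, labels) if s >= threshold]
    tp = sum(flagged)                 # flagged items that are truly abusive
    fn = sum(labels) - tp             # abusive items the threshold missed
    precision = tp / len(flagged) if flagged else 1.0
    recall = tp / (tp + fn) if (tp + fn) else 1.0
    return precision, recall

# Hypothetical model scores (probability of abuse) and true labels.
scores = [0.95, 0.90, 0.80, 0.70, 0.60, 0.40, 0.30, 0.20]
labels = [1,    1,    1,    0,    1,    0,    0,    0]

for t in (0.5, 0.75):
    p, r = precision_recall(scores, labels, t)
    print(f"threshold={t}: precision={p:.2f} recall={r:.2f}")
```

On this toy data, moving the threshold from 0.5 to 0.75 raises precision from 0.80 to 1.00 while recall falls from 1.00 to 0.75 – exactly the value judgment the organization describes in the next paragraph.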

“These trade-offs are value-based judgements with serious implications for freedom of expression and other human rights online,” the organization said, adding that “as it stands, automation may have a useful role to play in assessing trends or flagging content for human review, but it should, at best, be used to assist trained moderators, and certainly should not replace them.”

TechCrunch has contacted Twitter for comment.


Pew: A majority of U.S. teens are bullied online


A majority of U.S. teens have been subject to online abuse, according to a new study from Pew Research Center, out this morning. Specifically, that means they’ve experienced at least one of a half-dozen types of cyberbullying, including name-calling, being subject to false rumors, receiving explicit images they didn’t ask for, having explicit images of themselves shared without their consent, physical threats, or being constantly asked about their location and activities in a stalker-ish fashion by someone who is not a parent.

Of these, name-calling and being subject to false rumors were the top two categories of abuse teens were subject to, with 42% and 32% of teens reporting it had happened to them.


Pew says that texting and digital messaging have paved the way for these types of interactions, and parents and teens alike are both aware of the dangers and concerned.

Parents, in particular, are worried about teens sending and receiving explicit images, with 57% saying that’s a concern, and a quarter who worry about this “a lot.” And parents of girls worry more. (64% do.)

Meanwhile, a large majority – 90% – of teens now believe that online harassment is a problem, and 63% consider it a “major” problem.

Pew also found that girls and boys are both harassed online in fairly equal measure, with 60% of girls and 59% of boys reporting having experienced some sort of online abuse. That’s a figure that may surprise some. However, it’s important to clarify that this finding is about whether or not the teen had ever experienced online abuse – not how often or how much.

Not surprisingly, Pew found that girls are more likely than boys to have experienced two or more types of abuse, and 15% of girls have been the target of at least 4 types of abuse, compared with 6% of boys.

Girls are also more likely to be the recipient of explicit images they didn’t ask for, as 29% of teens girls reported this happened to them, versus 20% of boys.

And as teen girls get older, they receive even more of these types of images, with 35% of girls ages 15 to 17 saying they have received them, compared with only 1 out of 5 boys.

Several factors seem to play no role in how often teens experience abuse, including race, ethnicity, or parents’ educational attainment, Pew noted. But household income does seem to matter: 24% of teens whose household income was less than $30K per year said they received online threats, compared with only 12% of those whose household income was greater than $75K per year. (Pew’s report doesn’t attempt to explain this finding.)

Beyond that factor, receiving or avoiding abuse is directly tied to how much screen time teens put in.

That is, the more teens go online, the more abuse they’ll receive.

Some 45% of teens say they’re online almost constantly, and they are more likely to be harassed as a result: 67% of them say they’ve been cyberbullied, compared with 53% of those who use the internet several times a day or less. And half of the constantly online teens have been called offensive names, compared with about a third (36%) of those who use the internet less often.

Major tech companies, including Apple, Google, and Facebook, have begun to address the issues around device addiction and screen time with software updates and parental controls.

Apple, in iOS 12, rolled out Screen Time controls that allow Apple device users to measure, monitor and restrict how often they’re on their phones, when, what type of content is blocked, and which apps they can use. For adults, the software can nudge them in the right direction, but parents also have the option of locking down their children’s phones using Screen Time controls. (Of course, savvy kids have already found loopholes to avoid this, according to recent reports.)

Google also introduced time management controls in the new version of Android, and offers parental controls around screen time through its Family Link software.

And both Google and Facebook have begun to introduce screen time reminders and settings for addictive apps like YouTube, Facebook and Instagram.

Teens seem to respect parents’ involvement in their digital lives, the report also found.

A majority – 59% – of U.S. teens say their parents are doing a good job with regard to addressing online harassment. However, 79% say elected officials are failing to protect them through legislation, 66% say social media sites are doing a poor job at stamping down abuse, and 58% say teachers are doing a poor job at handling abuse, as well.

Many of the top social media sites were largely built by young people when they were first founded, and those people were often men. The sites were created in an almost naive fashion with regard to online abuse. Protections – like muting, filters, blocking, and reporting – were generally introduced in a reactive fashion, not as proactive controls.

Instagram, for example – one of teens’ most-used apps – only introduced comment filters, blocklists, and comment blocking in 2016, and just four months ago added account muting. The app was launched in October 2010.

Pew’s findings indicate that parents would do well by their kids by using screen time management and control systems – not simply to stop their teenagers from being bullied and abused as often, but also to help the teens practice how to interact with the web in a less addictive fashion as they grow into adults.

After all, device addiction resulting in increased exposure to online abuse is not a plague that only affects teens.

Pew’s full study involves surveys of 743 teens and 1,058 parents living in the U.S. conducted March 7 to April 10, 2018. It counted “teens” as those ages 13 to 17, and “parents of teens” are those who are the parent or guardian of someone in that age range. The full report is here.


Tall Poppy aims to make online harassment protection an employee benefit


For the nearly 20 percent of Americans who experience severe online harassment, there’s a new company launching in the latest batch of Y Combinator called Tall Poppy that’s giving them the tools to fight back.

Co-founded by Leigh Honeywell and Logan Dean, Tall Poppy grew out of the work that Honeywell, a security specialist, had been doing to hunt down trolls in online communities since at least 2008.

That was the year that Honeywell first went after a particularly noxious specimen who spent his time sending death threats to women in various Linux communities. Honeywell cooperated with law enforcement to try to track down the troll and eventually pushed the commenter into hiding after he was visited by investigators.

That early success led Honeywell to assume a not-so-secret identity as a security expert by day for companies like Microsoft, Salesforce, and Slack, and a defender against online harassment when she wasn’t at work.

“It was an accidental thing that I got into this work,” says Honeywell. “It’s sort of an occupational hazard of being an internet feminist.”

Honeywell started working one-on-one with victims of online harassment that would be referred to her directly.

“As people were coming forward with #metoo… I was working with a number of high profile folks to essentially batten down the hatches,” says Honeywell. “It’s been satisfying work helping people get back a sense of safety when they feel like they have lost it.”

As those referrals began to climb (eventually numbering in the low hundreds of cases), Honeywell began to think about ways to systematize her approach so it could reach the widest number of people possible.

“The reason we’re doing it that way is to help scale up,” says Honeywell. “As with everything in computer security it’s an arms race… As you learn to combat abuse the abusive people adopt technologies and learn new tactics and ways to get around it.”

Primarily, Tall Poppy will provide an educational toolkit to help people lock down their own presence and do incident response properly, says Honeywell. The company will work with customers to gain an understanding of how to protect themselves, but also to be aware of the laws in each state that they can use to protect themselves and punish their attackers.

The scope of the problem

Based on research conducted by Pew Research Center, there are millions of people in the U.S. alone who could benefit from the type of service that Tall Poppy aims to provide.

According to a 2017 study, “nearly one-in-five Americans (18%) have been subjected to particularly severe forms of harassment online, such as physical threats, harassment over a sustained period, sexual harassment or stalking.”

The women and minorities that bear the brunt of these assaults (and, let’s be clear, it is primarily women and minorities who bear the brunt of these assaults), face very real consequences from these virtual assaults.

Take the case of the New York principal who lost her job when an ex-boyfriend sent stolen photographs of her to the New York Post and her boss. In a powerful piece for Jezebel she wrote about the consequences of her harassment.

As a result, city investigators escorted me out of my school pending an investigation. The subsequent investigation quickly showed that I was set up by my abuser. Still, Mayor Bill de Blasio’s administration demoted me from principal to teacher, slashed my pay in half, and sent me to a rubber room, the DOE’s notorious reassignment centers where hundreds of unwanted employees languish until they are fired or forgotten.

In 2016, I took a yearlong medical leave from the DOE to treat extreme post-traumatic stress and anxiety. Since the leave was almost entirely unpaid, I took loans against my pension to get by. I ran out of money in early 2017 and reported back to the department, where I was quickly sent to an administrative trial. There the city tried to terminate me. I was charged with eight counts of misconduct despite the conclusion by all parties that my ex-partner uploaded the photos to the computer and that there was no evidence to back up his salacious story. I was accused of bringing “widespread negative publicity, ridicule and notoriety” to the school system, as well as “failing to safeguard a Department of Education computer” from my abusive ex.

Her story isn’t unique. Victims of online harassment regularly face serious consequences from online harassment.

According to a 2013 Science Daily study, cyberstalking victims routinely need to take time off from work, or change or quit their job or school. The stalking also costs victims $1,200 on average just to attempt to address the harassment, the study said.

“It’s this widespread problem, and the platforms have in many ways dropped the ball on this,” Honeywell says.

Tall Poppy’s co-founders

Creating Tall Poppy

As Honeywell heard more and more stories of online intimidation and assault, she started laying the groundwork for the service that would eventually become Tall Poppy. Through a mutual friend she reached out to Dean, a talented coder who had been working at Ticketfly before its Eventbrite acquisition and was looking for a new opportunity.

That was in early 2015. But, afraid that striking out on her own would affect her citizenship status (Honeywell is Canadian), she and Dean waited before making the move to finally start the company.

What ultimately convinced them was the election of Donald Trump.

“After the election I had a heart-to-heart with myself… And I decided that I could move back to Canada, but I wanted to stay and fight,” Honeywell says.

Initially, Honeywell took on a year-long fellowship with the American Civil Liberties Union to pick up on work around privacy and security that had been handled by Chris Soghoian, who had left to take a position with Senator Ron Wyden’s office.

But the idea for Tall Poppy remained, and once Honeywell received her green card, she was “chomping at the bit to start this company.”

A few months in the company already has businesses that have signed up for the services and tools it provides to help companies protect their employees.

Some platforms have taken small steps against online harassment. Facebook, for instance, launched an initiative to get people to upload their nude pictures so that the social network can monitor when similar images are distributed online and contact a user to see if the distribution is consensual.

Meanwhile, Twitter has made a series of changes to its algorithm to combat online abuse.

“People were shocked and horrified that people were trying this,” Honeywell says. “[But] what is the way [harassers] can do the most damage? Sharing them to Facebook is one of the ways where they can do the most damage. It was a worthwhile experiment.”

To underscore how pervasive a problem online harassment is: of the four companies where Tall Poppy is doing business or could do business, there was already an issue for the company to address within its first month and a half.

“It is an important problem to work on,” says Honeywell. “My recurring realization is that the cavalry is not coming.”

