
Timesdelhi.com

February 24, 2019
Category archive: behavior

2018 really was more of a dumpster fire for online hate and harassment, ADL study finds

in Abuse/Anti-Defamation League/behavior/bullying/cyberbullying/cybercrime/Delhi/digital media/Facebook/harassment/Hate crime/India/online harassment/Politics/Reddit/social applications/social media/TC/Twitch/United States/WhatsApp/YouGov by

Around 37 percent of Americans were subjected to severe hate and harassment online in 2018, up from about 18 percent in 2017, according to a new study by the Anti-Defamation League. More than half of all Americans experienced some form of harassment, the ADL found.

Facebook users bore the brunt of online harassment on social networking sites, according to the ADL study, with around 56 percent of survey respondents indicating that at least some of their harassment occurred on the platform — unsurprising, given Facebook’s status as the dominant social media platform in the U.S.

Around 19 percent of people said they experienced severe harassment on Twitter (only 19 percent? That seems low), while 17 percent reported harassment on YouTube, 16 percent on Instagram and 13 percent on WhatsApp.

Chart courtesy of the Anti-Defamation League

In all, the blue-ribbon standards for odiousness went to Twitch, Reddit, Facebook and Discord when the ADL confined its survey to daily active users. Nearly half of all daily users on Twitch have experienced harassment, the report indicated, along with around 38 percent of daily Reddit users, 37 percent of daily Facebook users and 36 percent of daily Discord users.

“It’s deeply disturbing to see how prevalent online hate is, and how it affects so many Americans,” said ADL chief executive Jonathan A. Greenblatt. “Cyberhate is not limited to what’s solely behind a screen; it can have grave effects on the quality of everyday lives – both online and offline. People are experiencing hate and harassment online every day and some are even changing their habits to avoid contact with their harassers.”

And the survey respondents seem to think that online hate makes people more susceptible to committing hate crimes, according to the ADL.

The ADL also found that most Americans want policymakers to strengthen laws and improve resources for police around cyberbullying and cyberhate. Roughly 80 percent said they wanted to see more action from lawmakers.

Even more Americans, or around 84 percent, think that the technology platforms themselves need to do more work to curb the harassment, hate, and hazing they see on social applications and websites.

As for the populations most at risk of hate and harassment online, members of the LGBTQ+ community were targeted most frequently, according to the study. Some 63 percent of people identifying as LGBTQ+ said they were targeted for online harassment because of their identity.

“More must be done in our society to lessen the prevalence of cyberhate,” said Greenblatt. “There are key actions every sector can take to help ensure more Americans are not subjected to this kind of behavior. The only way we can combat online hate is by working together, and that’s what ADL is dedicated to doing every day.”

The report also revealed that cyberbullying had real consequences for user behavior. Of the survey respondents, 38 percent stopped, reduced or changed their online activities, and 15 percent took steps to reduce risks to their physical safety.

Interviews for the survey were conducted between Dec. 17 and Dec. 27, 2018 by the public opinion and data analysis company YouGov on behalf of the ADL’s Center for Technology and Society. The non-profit noted that it oversampled respondents who identified as Jewish, Muslim, African American, Asian American or LGBTQ+ to “understand the experiences of individuals who may be especially targeted because of their group identity.”

The survey had a margin of error of plus or minus three percentage points, according to a statement from the ADL.

News Source = techcrunch.com

Consumer-focused healthcare can save lives by focusing on changing behavior

in Amazon/articles/behavior/behaviorism/CBT/Column/Delhi/diabetes/Disease/e-commerce/India/machine learning/Politics/psychology by

Everything we do in the $3 trillion healthcare market today affects only 10% of the outcomes that determine premature death.

You read that right. All of that, for just 10% of outcomes.

That 10% exists for a reason. Genetic predisposition is hard to change. So, unfortunately, are social circumstances and environmental factors. But that 40% slice driven by behavioral patterns — why can’t we tackle that? This is what real prevention would look like: nothing comes close to mattering as much to whether you will die prematurely as your behavior does.

We can do better than simply focusing on that small 10% slice of the pie; in fact, we’re looking in the wrong place. Doctors, entrepreneurs and founders need to be thinking of (and treating with) lifestyle as medicine, because behavioral change is the best and most powerful way to impact that whopping 40% slice.

Too often we think of this as the “just eat right and exercise” problem. As we know very well, that platitude will not solve our healthcare problem, because the true difficulty lies in modifying behavior. We like to eat what we want, and to exercise or not as we choose. In short, humans like our patterns. They’re hard to change.

Tech, on the other hand, modifies behavior very well. Just look at the phone you’re probably reading this on, which has foundationally changed the way we communicate — along with huge other swaths of human behavior, in both positive and negative ways — from the ability to call a ride service in practically any city at any time, to tracking your health, to screen addiction. We know technology modifies behavior; we live it every day. So the question is: how can we harness this superpower of tech against that 40% slice, where it can have four times the impact of the $3 trillion healthcare budget?

How does it work?

Let’s think about why technology actually works for modifying behavior. For one, it’s always there, thanks to the leap in mobile tech, whether that be phones or fitness trackers.

Technology’s capacity for constant A/B testing also essentially enables RCTs, or randomized clinical trials, every moment that technology is present and being used. These RCTs are invaluable laboratories for learning what constitutes effective therapeutic behavior modification and for improving efficacy — and they aren’t toxic. Most medical products are released and then rarely get updated (think about how old the stethoscope is!), because rolling out new versions of products has been difficult and expensive. That no longer has to be true. The same kind of A/B testing that Amazon does to optimize ecommerce — everything from the look of the website to the flow of the experience to the nature of the shipping that you get — can now be applied to behavior modification for health. Comparing the immediate efficacy of two algorithms for lifestyle behavior modification on two different populations can happen not over the years or months an RCT would require, but over weeks and even days, improving our responses and lifestyles that much faster.
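As a rough sketch of the kind of comparison described above (the variant names, success rates and data here are invented for illustration, not any real product’s numbers), a minimal two-variant test boils down to comparing success rates and checking whether the gap is larger than chance would explain:

```python
import math
import random

random.seed(42)

def simulate_variant(p_success: float, n: int) -> int:
    """Simulated count of users who hit a daily activity goal under one nudge variant."""
    return sum(random.random() < p_success for _ in range(n))

# Hypothetical scenario: variant B's reminder copy works slightly better.
n = 5000
a_hits = simulate_variant(0.30, n)
b_hits = simulate_variant(0.34, n)

p_a, p_b = a_hits / n, b_hits / n

# Two-proportion z-test: pool the rates, estimate the standard error of
# the difference, and express the observed gap in standard-error units.
pooled = (a_hits + b_hits) / (2 * n)
se = math.sqrt(pooled * (1 - pooled) * (2 / n))
z = (p_b - p_a) / se

print(f"A: {p_a:.3f}  B: {p_b:.3f}  z = {z:.2f}")
# A |z| above roughly 1.96 means the gap is unlikely to be chance at the 5% level.
```

With enough daily users, a product team can run this comparison continuously and ship the winning variant in days rather than waiting out a months-long trial, which is the speedup the column is pointing at.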

Second, applying machine learning to vast amounts of new data is identifying all kinds of nuances of human behavior that we, as humans, aren’t nearly as good at noticing: correlating patterns in data like where you shop, when you eat lunch, what activities you do, what shows you watch, what your exercise routine has been, how much you sleep, even perhaps whether you remember to charge your phone. Identifying the clues in our behavior that eventually add up to significant lifestyle risk is the first step toward changing and improving that behavior. Like it or not, we now live our lifestyles through our phones — ML allows us to learn from them.

And last, technology allows us to scale existing therapies by orders of magnitude. Programs that have proven extremely effective at behavior modification through personal interaction — such as the Diabetes Prevention Program for Type 2 diabetes — have been by definition hard to scale; computation can extend their reach into the billions. Or take depression, a complex disease where the molecules involved are poorly understood: drug therapies have been challenging, but therapy, specifically CBT, has a very strong track record, and computational CBT — i.e., CBT scaled with technology — the strongest.

Even conditions as mysterious and difficult as cognitive decline can be treated much more effectively with technology. This is another fascinating example where the biology is so complex at the molecular level that breakthroughs have been few and far between. On the other hand, cognitive decline is painfully clear at the behavioral level, and it is also very clear that behavioral treatment in the form of cognitive stimulation helps significantly. In one study, for example, the auditory memory and attention of patients who received cognitive stimulation training one hour per day, five days per week, for eight weeks improved significantly more than that of patients who did not.

These are big challenges to meet. Behavior is the result of thousands of small decisions at every moment of every day: do I sit or do I stand? Do I drink this beer? Even: do I take regular deep breaths? One of the biggest challenges is how we ‘read’ this behavior and turn it into reliable data. There’s also the issue of sample size: to narrow down to a meaningful experiment, you currently need very clear definitions of behavior, which often means small samples of people who always do X in Y conditions. The science of behavior and decision-making is itself complex, debatable and still evolving. And there are the company-building practicalities: to build a company in this space, you need to find people who understand clinical science, data science, experimentation, behavioral science *and* product and UI.

But that’s exactly the opportunity. These things are coming; we understand more about behavior every day, as devices enter our daily lives and health data becomes more and more fine-grained. New roles that blend behavioral science and product design are clearly emerging. None of these approaches is exclusive, and they can be combined into powerful ways of modifying behavior for health. Those who can connect all these dots have the ability to build companies that take a giant bite out of that 40% — and have a tremendous impact on mortality for huge swaths of the population.

There’s an old joke that plumbers have saved more lives than doctors, because improving sewers and sanitation (and eradicating the disease that went along with them) did so much for human longevity. By cleaning up the modern-day ‘sewers’ of our lifestyles — not through magical drugs, complex procedures or platitudes about prevention, but through a real infrastructure of technology that is being built right now — we can have an analogous impact.

News Source = techcrunch.com

Tall Poppy aims to make online harassment protection an employee benefit

in Abuse/American Civil Liberties Union/behavior/bill de blasio/bullying/Canada/cyberbullying/cybercrime/Delhi/Department of Education/Donald Trump/eventbrite/Facebook/harassment/Honeywell/India/law enforcement/linux/Mayor/Microsoft/New York/online abuse/online communities/online harassment/Politics/Ron Wyden/Salesforce/Security/Sexual harassment/slack/social network/Startups/TC/teacher/ticketfly/United States/Y Combinator by

For the nearly 20 percent of Americans who experience severe online harassment, there’s a new company launching in the latest batch of Y Combinator called Tall Poppy that’s giving them the tools to fight back.

Co-founded by Leigh Honeywell and Logan Dean, Tall Poppy grew out of the work that Honeywell, a security specialist, had been doing to hunt down trolls in online communities since at least 2008.

That was the year that Honeywell first went after a particularly noxious specimen who spent his time sending death threats to women in various Linux communities. Honeywell cooperated with law enforcement to try to track down the troll, and eventually pushed the commenter into hiding after he was visited by investigators.

That early success led Honeywell to assume a not-so-secret identity as a security expert by day for companies like Microsoft, Salesforce, and Slack, and a defender against online harassment when she wasn’t at work.

“It was an accidental thing that I got into this work,” says Honeywell. “It’s sort of an occupational hazard of being an internet feminist.”

Honeywell started working one-on-one with victims of online harassment that would be referred to her directly.

“As people were coming forward with #metoo… I was working with a number of high profile folks to essentially batten down the hatches,” says Honeywell. “It’s been satisfying work helping people get back a sense of safety when they feel like they have lost it.”

As those referrals began to climb (eventually numbering in the low hundreds of cases), Honeywell began to think about ways to systematize her approach so it could reach the widest number of people possible.

“The reason we’re doing it that way is to help scale up,” says Honeywell. “As with everything in computer security it’s an arms race… As you learn to combat abuse the abusive people adopt technologies and learn new tactics and ways to get around it.”

Primarily, Tall Poppy will provide an educational toolkit to help people lock down their own online presence and do incident response properly, says Honeywell. The company will work with customers not only to understand how to protect themselves, but also to be aware of the laws in each state that they can use to protect themselves and punish their attackers.

The scope of the problem

Based on research conducted by the Pew Research Center, there are millions of people in the U.S. alone who could benefit from the type of service Tall Poppy aims to provide.

According to a 2017 study, “nearly one-in-five Americans (18%) have been subjected to particularly severe forms of harassment online, such as physical threats, harassment over a sustained period, sexual harassment or stalking.”

The women and minorities who bear the brunt of these attacks (and, let’s be clear, it is primarily women and minorities who bear the brunt) face very real consequences from these virtual assaults.

Take the case of the New York principal who lost her job when an ex-boyfriend sent stolen photographs of her to the New York Post and her boss. In a powerful piece for Jezebel she wrote about the consequences of her harassment.

As a result, city investigators escorted me out of my school pending an investigation. The subsequent investigation quickly showed that I was set up by my abuser. Still, Mayor Bill de Blasio’s administration demoted me from principal to teacher, slashed my pay in half, and sent me to a rubber room, the DOE’s notorious reassignment centers where hundreds of unwanted employees languish until they are fired or forgotten.

In 2016, I took a yearlong medical leave from the DOE to treat extreme post-traumatic stress and anxiety. Since the leave was almost entirely unpaid, I took loans against my pension to get by. I ran out of money in early 2017 and reported back to the department, where I was quickly sent to an administrative trial. There the city tried to terminate me. I was charged with eight counts of misconduct despite the conclusion by all parties that my ex-partner uploaded the photos to the computer and that there was no evidence to back up his salacious story. I was accused of bringing “widespread negative publicity, ridicule and notoriety” to the school system, as well as “failing to safeguard a Department of Education computer” from my abusive ex.

Her story isn’t unique: victims of online harassment regularly face serious real-world consequences.

According to a 2013 study reported by Science Daily, cyberstalking victims routinely need to take time off from work, or change or quit their job or school. And the stalking costs victims $1,200 on average just to attempt to address the harassment, the study said.

“It’s this widespread problem and the platforms have in many ways dropped the ball on this,” Honeywell says.

Tall Poppy’s co-founders

Creating Tall Poppy

As Honeywell heard more and more stories of online intimidation and assault, she started laying the groundwork for the service that would eventually become Tall Poppy. Through a mutual friend she reached out to Dean, a talented coder who had been working at Ticketfly before its Eventbrite acquisition and was looking for a new opportunity.

That was in early 2015. But, afraid that striking out on her own would affect her immigration status (Honeywell is Canadian), she and Dean waited before making the move to finally start the company.

What ultimately convinced them was the election of Donald Trump.

“After the election I had a heart-to-heart with myself… And I decided that I could move back to Canada, but I wanted to stay and fight,” Honeywell says.

Initially, Honeywell took on a year-long fellowship with the American Civil Liberties Union to pick up work around privacy and security that had been handled by Chris Soghoian, who had left to take a position with Senator Ron Wyden’s office.

But the idea for Tall Poppy remained, and once Honeywell received her green card, she was “chomping at the bit to start this company.”

A few months in, the company already has businesses signed up for the services and tools it provides to help companies protect their employees.

Some platforms have taken small steps against online harassment. Facebook, for instance, launched an initiative to get people to upload their nude pictures so that the social network can monitor when similar images are distributed online and contact a user to see if the distribution is consensual.

Meanwhile, Twitter has made a series of changes to its algorithm to combat online abuse.

“People were shocked and horrified that people were trying this,” Honeywell says. “[But] what is the way [harassers] can do the most damage? Sharing them to Facebook is one of the ways where they can do the most damage. It was a worthwhile experiment.”

To underscore how pervasive a problem online harassment is: of the four companies where Tall Poppy is doing or could be doing business in its first month and a half, there is already an issue the company is addressing.

“It is an important problem to work on,” says Honeywell. “My recurring realization is that the cavalry is not coming.”

News Source = techcrunch.com

Search and social media was filled with clickbait and propaganda in the wake of Vegas shooting

in Alphabet/behavior/communication/Delhi/Facebook/Google/India/Las Vegas/mass shooting/new media/Politics/social media/Structure/TC/Twitter/Web 2.0 by

In the wake of what is now the worst mass shooting in U.S. history, the thousands of people turning to social media earlier this morning for information on the unfolding investigation would have found many of the top posts on most of the major websites to be hot garbage.

Letting an algorithm cull links from the sewer of internet commentary, and then distributing that to millions of people, is a losing game. It’s another sign of how Facebook and the rest continue to abdicate responsibility.

Google and the social media sites say they’re working on improving their hit rate for better quality news sources, but today’s spread in the aftermath of the Las Vegas shooting shows just how much more work they have to do.

Over the course of the morning, Facebook’s “Safety Check” page included updates on the shooting from a far-right blogger who had accused the shooter of being a “left-wing loon.” The top post on the page then moved to feature a clickbait video from a news aggregation service, MyTVToday, before finally settling on reports from local and national news outlets.

Facebook wasn’t alone in recirculating conjecture and outright lies on its marquee pages. One of the rumors circulating on the site 4chan, which misidentified the shooter as a man named Geary Danley, appeared in Google’s Top Stories widget (as Buzzfeed and Bloomberg reported).

Google released the following statement to Bloomberg and The New York Times (we’ve also reached out for comment):

“Unfortunately, early this morning we were briefly surfacing an inaccurate 4chan website in our Search results for a small number of queries. Within hours, the 4chan story was algorithmically replaced by relevant results. This should not have appeared for any queries, and we’ll continue to make algorithmic improvements to prevent this from happening in the future.”

The algorithms that social media sites like Twitter and Facebook use to determine what stories to display have been making headlines themselves, as more attention is paid to the information they’re distributing.

The hoaxes listed by Buzzfeed that showed up on Twitter alone should be enough to convince @Jack that something needs to be done about the trolls willfully setting dumpster fires in the middle of the service’s vaunted news stream.

It’s legitimate to ask how sources like 4chan could even be considered viable outlets for sourcing breaking news. And, indeed, some reporters are already asking.

Google, Twitter and Facebook are under the microscope already for their response to the Russian hacking scandal currently being investigated by Congress. And this latest misstep in their treatment of a story that has shaken the country, with so many dead and injured, and so much still unknown, just underscores how problematic their reliance on software can be — and the real world consequences that it can have.

Featured Image: David Becker/Getty Images

News Source = techcrunch.com
