February 23, 2018


Facebook’s ad targeting tools could be a valuable supplement to census data


Official census forms are an invaluable source of demographic data across the country, but for trends that play out over weeks or months rather than years, they fall short. A new study, however, shows that the same kind of data Facebook uses to target ads could help fill in that blind spot.

Sociologist Emilio Zagheni at the University of Washington looked into the possibility, in this case specifically regarding migrants in the U.S. and their movements between states. He has previously explored this topic using Google+ and other internet-based metrics.

Say you wanted to know whether East African migrant populations were tending to settle in cities, suburbs, or rural areas. The census is conducted only once a decade, far too infrequently to capture short-term trends that follow, for example, an economic recovery or an important bill.

But by using, or rather strategically misusing, Facebook’s Ads Manager tool, one can find reasonably accurate and up-to-date info on, for example, Somalian migrants in the Chicago metro area versus outside the city. Facebook has already extracted all this data — why not use it?

This data is, of course, not the whole picture. You’re not finding all Somalian folks in the Chicago area, only Facebook users who choose to accurately report their country of origin and current location. Compared with the data in the Census Bureau’s American Community Survey, it’s not very reliable. But it’s still valuable, Zagheni argues.

“Is it better to have a large sample that is biased, or a small sample that is nonbiased?” he asks in a UW news release. “The American Community Survey is a small sample that is more representative of the underlying population; Facebook is a very large sample but not representative. The idea is that in certain contexts, the sample in the American Community Survey is too small to say something significant. In other circumstances, Facebook samples are too biased.”

“With this project we aim at getting the best of both worlds,” he continues. “By calibrating the Facebook data with the American Community Survey, we can correct for the bias and get better estimates.”

Facebook trends mirror the census data, but tend to underestimate numbers.

With reliable but scarce ground truth data and noisy but voluminous supplementary data, you can put together a more precise picture than before — as long as you’re careful to control for those biases. Data from other social networks could also be brought in to even things out.
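As a concrete illustration of that calibration idea, here is a minimal sketch — not the paper’s actual method, and with all numbers invented for demonstration — of correcting a systematic undercount in Facebook audience estimates using regions where ground-truth ACS figures are available:

```python
# Hypothetical example: calibrate large-but-biased Facebook audience
# estimates against small-but-representative ACS counts. All values
# below are made up for illustration.

# Regions where both sources report migrant counts.
acs_counts = {"metro_a": 12000, "metro_b": 8500, "metro_c": 4000}
fb_counts = {"metro_a": 7800, "metro_b": 5600, "metro_c": 2500}

def calibration_factor(acs, fb):
    """Average ratio of ground truth to Facebook estimate across regions
    where both are available; models Facebook's systematic undercount."""
    ratios = [acs[r] / fb[r] for r in acs if r in fb]
    return sum(ratios) / len(ratios)

factor = calibration_factor(acs_counts, fb_counts)

# Apply the factor to a region where only Facebook data exists.
fb_only_estimate = 3100
calibrated = fb_only_estimate * factor
```

A real calibration would model the bias per demographic group and region rather than with a single global ratio, but the principle is the same: use the representative sample to learn the bias, then apply the correction to the voluminous one.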

Zagheni and his team hope to refine the ideas demonstrated in the paper so that they can be applied in places like developing countries where self-reported data like Facebook’s is easy to come by but reliable government data isn’t. A “good enough” sketch of the population and recent trends could help with things like prioritizing infrastructure investment or directing aid.

It’s unfortunate that the whole thing required the researchers to abuse the advertising system to get at the data; surely Facebook can provide better access for research purposes. I asked the company whether that was a likely possibility, and Zagheni, for his part, seemed to like the idea.

“I certainly hope that there will be opportunities to work directly with Facebook on this line of research in the future,” he wrote in an email to TechCrunch.

The paper describing the team’s work is published in the latest issue of the journal Population and Development Review.

Featured Image: FotografiaBasica/iStock/Getty Images

News Source = techcrunch.com

In-office medical advertising startup Outcome Health reportedly misled advertisers


Some employees at Outcome Health, which sells advertising to pharmaceutical companies on screens in doctors’ offices, allegedly misled advertisers by charging them for placements on more video screens than the company had installed, according to a report by The Wall Street Journal.

The startup installs the screens in doctors’ offices free of charge and then sells the ad placements, largely to pharmaceutical companies. According to the report, the company appears to have overstated to advertisers the number of screens their ads ran on, allowing it to generate additional revenue. If so, that would be a major internal-governance failure, especially as advertisers like pharmaceutical companies look for new channels to promote their products.

The Chicago-based startup said in May of this year that it had raised $500 million at a $5 billion pre-money valuation from Goldman Sachs and Alphabet’s investment arm CapitalG. The company, started in 2006, had not taken outside capital prior to this financing round. The screens in doctors’ offices also run content beyond advertising. Benchmark Capital partner Bill Gurley heaped praise on the company and its CEO, Rishi Shah, around the time of the round.

Outcome Health certainly isn’t the only startup in the past few years reported to have major internal problems. Zenefits skirted regulatory boundaries, eventually leading to the ouster of founder and former CEO Parker Conrad and to investors cutting the company’s valuation in half. There was also the massive Theranos fiasco, exposed by another Wall Street Journal report, which led to heaps of lawsuits hitting the company. The Journal story on Outcome Health says it “found nothing to demonstrate top executives’ involvement in the alleged misleading of advertisers.”

We’ve reached out to Outcome Health through multiple channels for additional comment and will update the story when we hear back.

Featured Image: Medioimages/Photodisc/Getty Images


ARAD helps developers get ads in their augmented reality apps


ARKit and other augmented reality tools are going to make the medium more and more popular among developers and users. But, like any new platform, augmented reality doesn’t yet have a sophisticated way to monetize apps beyond charging for a download.

A team of developers from Google and Snapchat at the TechCrunch Disrupt SF 2017 hackathon is hoping to build on that momentum by connecting developers and advertisers to create ad experiences within an augmented reality environment. Sriram Bargav Karnati, Spandana Govindgari, Sai Teja Pratap and Jaydev Ajit Kumar spent the last 24 hours at the hackathon building ARAD, a way to insert advertisements into augmented reality games.

“When we were building the app we were thinking, hey, how do we place this ad in a non-intrusive way,” Govindgari said. “When the user clicks this ad, they should experience a whole new ad format. We were aware of it but didn’t really dig deep into it, but as we learned more we learned it’s hard to make these 3D objects and detect objects in augmented reality.”

The goal is to help developers make money while still getting their apps into the hands of as many people as possible. ARAD has advertisers upload media assets to its platform (which, again, was built in around 24 hours), and the tool then inserts an ad, usually just slightly outside the user’s field of vision. It’ll detect something like a water bottle, and if an advertiser has targeted an ad against water bottles, an ad for LaCroix might pop up as a small interactive box. When a user taps the box, they’ll see the LaCroix ad, and that counts as an impression for the advertiser.
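The detect-match-tap flow described above can be sketched roughly like this (a hypothetical illustration; the names, data shapes and example ads are invented, not ARAD’s actual code):

```python
# Hypothetical sketch of ARAD-style ad matching: map object labels
# detected in the AR scene to ads that advertisers have targeted
# against those labels, and count an impression only when the user
# taps the resulting interactive box.

targeted_ads = {
    "water_bottle": {"advertiser": "LaCroix", "asset": "lacroix_box.mp4"},
    "coffee_cup": {"advertiser": "AcmeCoffee", "asset": "acme_promo.mp4"},
}

impressions = {}  # advertiser -> tap count

def match_ad(detected_label):
    """Return the ad targeted at this object label, if any."""
    return targeted_ads.get(detected_label)

def on_tap(detected_label):
    """User tapped the ad box: record an impression, return the asset."""
    ad = match_ad(detected_label)
    if ad is None:
        return None
    impressions[ad["advertiser"]] = impressions.get(ad["advertiser"], 0) + 1
    return ad["asset"]
```

Counting the impression only on tap, rather than on display, is what makes the audience “engaged” from the advertiser’s point of view.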

Because augmented reality as a platform is so new, we’re going to see a lot of experimentation with how advertising will work in it. Karnati and his team envision a tool where advertisers could place 3D assets directly on a detected surface like a table, instead of a window that opens into an advertising asset, though that might take a little longer than a day to build. The idea is that if someone playing a quick game of tic-tac-toe takes a break and looks to the side, they’ll potentially see an ad that, again, doesn’t intrude on the core experience.

That’ll also be a good tool for cross-promotion among developers, a practice you see pretty often among game developers, Pratap said. And because augmented reality is such an immersive experience, there is probably a better chance someone will take notice of a high-quality ad that might lead them to another app (or game). For developers, that means they might also be able to charge advertisers more, because it’s a more engaged audience.

“We want it to enhance the experience for the user,” Karnati said. “Most ads which are in-app are really annoying and provide a bad user experience. This is bringing real-world, contextual things into the app, and it’s not really annoying the user. We want to answer the question from developers of ‘how do I get money?’ They can actually use something like this to pay off their bills, to support their lifestyle.”


Facebook’s generation of ‘Jew Hater’ and other advertising categories prompts system inspection


Facebook automatically generates categories advertisers can target, such as “jogger” and “activist,” based on what it observes in users’ profiles. Usually that’s not a problem, but ProPublica found that Facebook had generated anti-Semitic categories such as “Jew Hater” and “Hitler did nothing wrong,” which could be targeted for advertising purposes.

The categories were small — a few thousand people total — but the fact that they existed for official targeting (and in turn, revenue for Facebook) raises questions about the effectiveness — or even existence — of hate speech controls on the platform. Although surely countless posts are flagged and removed successfully, the failures are often conspicuous.

ProPublica, acting on a tip, found that a handful of categories autocompleted themselves when their researchers entered “jews h” into the advertising category search box. To verify these were real, they bundled a few together and bought an ad targeting them, which indeed went live.

Upon being alerted, Facebook removed the categories and issued a familiar-sounding, strongly worded statement about how tough the company is on hate speech:

We don’t allow hate speech on Facebook. Our community standards strictly prohibit attacking people based on their protected characteristics, including religion, and we prohibit advertisers from discriminating against people based on religion and other attributes. However, there are times where content is surfaced on our platform that violates our standards. In this case, we’ve removed the associated targeting fields in question. We know we have more work to do, so we’re also building new guardrails in our product and review processes to prevent other issues like this from happening in the future.

The problem occurred because people were listing “jew hater” and the like in their “field of study” category, which is of course a good one for guessing what a person might be interested in: meteorology, social sciences, etc. Although the numbers were extremely small, that shouldn’t be a barrier to an advertiser looking to reach a very limited group, like owners of a rare dog breed.

But as difficult as it might be for an algorithm to determine the difference between “History of Judaism” and “History of ‘why jews ruin the world,’” it really does seem incumbent on Facebook to make sure the algorithm does make that determination. At the very least, when categories are potentially sensitive, dealing with personal data like religion, politics, and sexuality, one would think they would be verified by humans before being offered up to would-be advertisers.

Facebook told TechCrunch that it is now working to prevent such offensive entries in demographic traits from appearing as addressable categories. Of course, hindsight is 20/20, but really: only now is it doing this?

It’s good that measures are being taken, but it’s hard to believe that there wasn’t already some kind of flag list watching for categories or groups that clearly violate the no-hate-speech provision. I asked Facebook for more details on this, and will update the post if I hear back.
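Such a flag list needn’t be elaborate. A minimal sketch of the kind of screen one might expect (helper names invented; the flagged terms come from the categories ProPublica surfaced) could hold auto-generated categories for human review before they become addressable:

```python
# Hypothetical sketch: screen auto-generated ad-targeting categories
# against a list of flagged terms before offering them to advertisers.
# Matches are held for human review rather than published.

FLAGGED_TERMS = {"hater", "hitler"}

def needs_human_review(category):
    """True if an auto-generated category contains a flagged term and
    should be held for review instead of being offered to advertisers."""
    words = category.lower().split()
    return any(term in words for term in FLAGGED_TERMS)
```

A production system would need far more than exact word matches (misspellings, phrases, multiple languages), which is presumably where the harder algorithmic problem the article describes comes in.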


Facebook sold more than $100,000 in political ads to a Russian company during the 2016 election


Following its April post-mortem on its platform’s role in the 2016 U.S. presidential election, Facebook is out with some juicy new details. Most noteworthy, given the public’s intense interest in all things Russian, is that potentially pro-Kremlin entities apparently purchased as much as $150,000 in political ads on the platform between 2015 and 2017.

As Facebook Chief Security Officer Alex Stamos explained in a blog post:

“There have been a lot of questions since the 2016 US election about Russian interference in the electoral process. In April we published a white paper that outlined our understanding of organized attempts to misuse our platform. One question that has emerged is whether there’s a connection between the Russian efforts and ads purchased on Facebook. These are serious claims and we’ve been reviewing a range of activity on our platform to help understand what happened.

“In reviewing the ads buys, we have found approximately $100,000 in ad spending from June of 2015 to May of 2017 — associated with roughly 3,000 ads — that was connected to about 470 inauthentic accounts and Pages in violation of our policies. Our analysis suggests these accounts and Pages were affiliated with one another and likely operated out of Russia.”

In addition to that $100,000, another $50,000 in political ad spending showed weaker signs of Russian origin, including “ads bought from accounts with US IP addresses but with the language set to Russian.”

According to Stamos, the “vast majority” of the ads in question did not explicitly mention candidate names or the presidential race itself. Instead, they focused on a spectrum of wedge issues that were particularly hot leading into the election, including gun rights, immigration, LGBT rights and race. Roughly a quarter of these ads were targeted to particular geographic regions, especially the ads that ran in 2015. Facebook’s latest findings mesh with the insights around political misinformation campaigns that it published in April of this year. Perhaps most interesting is the revelation that bots aren’t actually responsible for most of this activity; the bulk of it appears to be non-automated, coordinated campaigns by human actors.

Given the deep knowledge of state-level American politics necessary to successfully geo-target ads like these, the whole thing raises further questions about the possibility that entities linked to the Russian government might have coordinated with individuals in the U.S., though it doesn’t begin to answer those questions.

On Wednesday, Facebook briefed Congress on the findings as part of the congressional investigation into Russian interference in the 2016 U.S. election. In a follow-up story by The Washington Post, Facebook acknowledged that “there is evidence that some of the accounts are linked to a troll farm in St. Petersburg, referred to as the Internet Research Agency, though we have no way to independently confirm.” The Internet Research Agency is known for pro-Kremlin online propaganda campaigns, and U.S. intelligence agencies believe it is funded by a close associate of Russian President Vladimir Putin with connections to the Russian intelligence community.

For its part, Facebook has been acting on the results of its internal audit examining the ways its platform may have been exploited in the 2016 U.S. election. Based on these reviews, the company booted 30,000 suspect accounts engaged in what it calls “false amplification” off its platform around the time of the French election earlier this year. The company has also begun blocking ads from pages and accounts that repeatedly share fake news and misinformation. Still, if these kinds of influence campaigns are truly linked to Russian intelligence efforts, Facebook is going to have a hell of a time trying to stay a few steps ahead.

Featured Image: Sean Gallup/Getty

