

After Sandberg chat, House intel plans to release Russian-bought Facebook ads to the public

After its Senate counterpart made clear that it had no intention of doing so, the House Intelligence Committee announced that it plans to publish the Russian-bought Facebook ads that the company provided as part of the committee’s investigation into Russian interference in the 2016 U.S. election.

The news comes on the heels of a meeting between Facebook COO Sheryl Sandberg and the House’s intel leads, Chairman Mike Conaway and ranking Democrat Adam Schiff.

“We will be releasing them from our committee,” Schiff said. “We’ve asked for Facebook’s help to help scrub any personally identifiable information, but it’s our hope that when they conclude, then we can release them publicly.”

Last week, Senate Intelligence Chairman Richard Burr stated definitively that his committee was not at liberty to publish the 3,000 ads in question and that it does not make a practice of releasing documents provided to it during the course of an investigation. Instead, the Senate committee called on Facebook, Twitter and Google to release to the public any ads linked to Russian efforts to interfere in the U.S. election.

Those tech companies have yet to do so, which some in the legal community attribute to the Electronic Communications Privacy Act (ECPA), a 1986 wiretapping law that still serves as a legal framework in many online privacy matters.

The ads will likely be released after November 1, the date that Facebook, Twitter and Google are expected to appear in open hearings as part of the House and Senate’s respective Russia investigations.

News Source = techcrunch.com



What we can learn from the 3,500 Russian Facebook ads meant to stir up U.S. politics

On Thursday, Democrats on the House Intelligence Committee released a massive new trove of Russian government-funded Facebook political ads targeted at American voters. While we’d seen a cross section of the ads before through prior releases from the committee, the breadth of ideological manipulation is on full display across the more than 3,500 newly released ads, and that count doesn’t even include still-unreleased unpaid content that shared the same divisive aims.

After viewing the ads, which stretch from 2015 to late 2017, some clear trends emerged.

Russia focused on black Americans

A great many of these ads targeted black Americans. From the fairly large sample of ads that we reviewed, black Americans were clearly of particular interest, likely in an effort to inflame latent racial tensions.

Many of these ads appeared as memorials for black Americans killed by police officers. Others were simply intended to stir up black pride, like one featuring an Angela Davis quote. One ad posted by “Black Matters” was targeted at Ferguson, Missouri residents in June 2015 and featured only the lyrics to Tupac’s “California Love.” Around this time, many ads targeted black Facebook users in Baltimore and the St. Louis area.

Some Instagram ads used Facebook profile information to target black voters interested in black power, Malcolm X and the New Black Panther Party. In the days leading up to November 8, 2016, other ads specifically targeted black Americans with anti-Clinton messaging.

Not all posts were divisive (though most were)

While most ads played into obvious ideological agendas, those posts were occasionally punctuated by more neutral content. The less controversial or call-to-action style posts were likely designed to buffer the politically divisive content, helping to build out and grow an account over time.

For accounts that grew over the course of multiple years, some “neutral” posts were likely useful for making them appear legitimate and building trust among followers. Some posts targeting LGBT users and other identity-based groups simply shared positive messages specific to those communities.

Ads targeted media consumers and geographic areas

Some ads we came across targeted BuzzFeed readers, though those were inexplicably more meme-oriented and not political in nature. Others focused on Facebook users who liked the Huffington Post’s Black Voices section or Sean Hannity.

Many ads targeting black voters targeted major U.S. cities with large black populations (Baltimore and New Orleans, for example). Other geo-centric ads tapped into Texas pride and called on Texans to secede.

Conservatives were targeted on many issues

We already knew this from the ad previews, but the new collection of ads makes it clear that conservative Americans across a number of interest groups were regularly targeted. This targeting concentrated on stirring up patriotic and sometimes nationalist sentiment with anti-Clinton, gun rights, anti-immigrant and religious stances. Some custom-made accounts spoke directly to veterans and conservative Christians. Libertarians were also separately targeted.

Events rallied competing causes

Among the Russian-bought ads, event-based posts became fairly frequent in 2016. The day after the election, one event called for an anti-Trump rally in Union Square even as another ad called for Trump supporters to rally outside Trump Tower. In another instance, the ads promoted both a pro-Beyoncé and an anti-Beyoncé event in New York City.

Candidate ads were mostly pro-Trump, anti-Clinton

Consistent with the intelligence community’s assessment of Russia’s intentions during the 2016 U.S. election, posts slamming Hillary Clinton seemed to prevail among the candidate-focused ads. Pro-Trump ads were fairly common, though other ads stirred up anti-Trump sentiment too. Few ads opposed Bernie Sanders, and some rallied support for Sanders even after Clinton had won the nomination. One ad in August 2016 from the account Williams&Kalvin denounced both presidential candidates, potentially in an effort to discourage turnout among black voters. In this case and others, posts called on voters to ignore the election outright.

While efforts like the Honest Ads Act are mounting to combat foreign-paid social media influence in U.S. politics, the scope and variety of the House Intel release make it clear that Americans would be well served to pause before engaging with provocative, partisan ideological content on social platforms, at least when it comes from unknown sources.

News Source = techcrunch.com



Google offers new findings on Russian disinformation across its products

Just a day before tech’s big Russia-focused Congressional hearings begin, Google is out with a new report on the Russian government’s efforts to interfere in the U.S. presidential election across its platforms.

“While we have found only limited activity on our services, we will continue to work to prevent all of it, because there is no amount of interference that is acceptable,” Google wrote in its latest blog post on the issue, titled “Security and disinformation in the U.S. 2016 election.”

Google’s report appears to be limited to accounts with observable ties to the Internet Research Agency, a Russian state-affiliated organization that produces political disinformation and sock puppet accounts. That narrow focus may be an effort to appease Congress with some hard numbers, so it’s worth keeping in mind that we don’t yet know the extent of these disinformation campaigns beyond those pre-defined parameters.

Google reports that in an examination of its ad products, it discovered only two accounts with ties to the Internet Research Agency. The two accounts had invested $4,700 into Google’s ad network (search and display ads) during the timeframe of the 2016 U.S. presidential election. Google doesn’t specify how it defined that timeframe in this particular batch of numbers.

Unlike the razor-sharp ad targeting available on a platform like Facebook, these ads weren’t even targeted by location or political affiliation. Google does offer political ad segments aimed at “left-leaning” and “right-leaning” audiences, though in this instance the Internet Research Agency did not appear to use the feature.

Google’s report breaks out its YouTube findings into their own category. There, it found 18 channels it believed to be linked to the Russian government that featured public political videos in English. While that isn’t many channels, they did upload a cumulative 1,108 videos that drew 309,000 U.S. views from June 2015 to November 2016. The vast majority of the videos had fewer than 5,000 views.

The report also included Google’s other products, though those examinations didn’t turn up much. There’s no evidence (yet, anyway) that state-sponsored accounts used “improper methods” to boost search rankings, though anyone who’s seen fake news featured high up in their search results might rightfully have questions about how the company decides what flies in search and what doesn’t.

To wrap up its report, Google even did an analysis of Google+ that seems to suggest that Russian state actors might be posting vacation pics on the mostly abandoned social network:

“We found no political posts in English from state-linked actors on Google+ (there were some posts in Russian and a very small number of non-political posts).”

All three companies set to appear before the Senate Judiciary Committee and the Senate and House Intel Committees this week have put out early reports previewing their expected testimony. Google’s relatively small-scale findings put into perspective Facebook’s new assertion that similar content reached 126 million users on its own platform, while the situation on Twitter also appears to be at least somewhat worse than previously reported.

We’ll be following tech’s testimony to Congress this week as the companies expand on their own unwitting role in foreign disinformation campaigns during the 2016 election.

News Source = techcrunch.com

