
Timesdelhi.com

June 16, 2019
Category archive: fake news

Facebook still a great place to amplify pre-election junk news, EU study finds


A study carried out by academics at Oxford University to investigate how junk news is being shared on social media in Europe ahead of the region’s elections this month has found that individual stories shared on Facebook’s platform can still hugely outperform the most important and professionally produced news stories, drawing as much as 4x the volume of Facebook shares, likes, and comments.

The study, conducted for the Oxford Internet Institute’s (OII) Computational Propaganda Project, is intended to respond to widespread concern about the spread of online political disinformation ahead of the EU elections, which take place later this month, by examining pre-election chatter on Facebook and Twitter in English, French, German, Italian, Polish, Spanish, and Swedish.

Junk news in this context refers to content produced by known sources of political misinformation — aka outlets that are systematically producing and spreading “ideologically extreme, misleading, and factually incorrect information” — with the researchers comparing interactions with junk stories from such outlets to news stories produced by the most popular professional news sources to get a snapshot of public engagement with sources of misinformation ahead of the EU vote.

As we reported last year, the Institute also launched a junk news aggregator ahead of the US midterms to help Internet users get a handle on manipulative politically-charged content that might be hitting their feeds.

In the EU the European Commission has responded to rising concern about the impact of online disinformation on democratic processes by stepping up pressure on platforms and the adtech industry — issuing monthly progress reports since January after the introduction of a voluntary code of practice last year intended to encourage action to squeeze the spread of manipulative fakes. So far, though, these ‘progress’ reports have mostly boiled down to calls for less foot-dragging and more action.

One tangible result last month was Twitter introducing a report option for misleading tweets related to voting ahead of the EU vote, though again you have to wonder what took it so long, given that online election interference is hardly a new revelation. (The OII study is also just the latest piece of research to bolster the age-old maxim that falsehoods fly and the truth comes limping after.)

The study also examined how junk news spread on Twitter during the pre-EU election period, with the researchers finding that less than 4% of sources circulating on Twitter’s platform were junk news (or “known Russian sources”) — with Twitter users sharing far more links to mainstream news outlets overall (34%) over the study period.

The Polish-language sphere was an exception, though — with junk news making up a fifth (21%) of EU election-related Twitter traffic in that outlying case.

Returning to Facebook: the researchers do note that many more users interact with mainstream content overall via its platform, since mainstream publishers have a larger following and so “wider access to drive activity around their content”, meaning their stories “tend to be seen, liked, and shared by far more users overall”. But they also point out that junk news still packs a greater per-story punch — likely owing to the use of tactics such as clickbait, emotive language, and outrage-mongering in headlines, which have repeatedly been shown to generate more clicks and engagement on social media.

It’s also of course much quicker and easier to make some shit up vs the slower pace of doing rigorous professional journalism — so junk news purveyors can also get out ahead of news events as an eyeball-grabbing strategy to further the spread of their cynical BS. (And indeed the researchers go on to say that most of the junk news sources being shared during the pre-election period “either sensationalized or spun political and social events covered by mainstream media sources to serve a political and ideological agenda”.)

“While junk news sites were less prolific publishers than professional news producers, their stories tend to be much more engaging,” they write in a data memo covering the study. “Indeed, in five out of the seven languages (English, French, German, Spanish, and Swedish), individual stories from popular junk news outlets received on average between 1.2 to 4 times as many likes, comments, and shares than stories from professional media sources.

“In the German sphere, for instance, interactions with mainstream stories averaged only 315 (the lowest across this sub-sample) while nearing 1,973 for equivalent junk news stories.”

To conduct the research the academics gathered more than 584,000 tweets related to the European parliamentary elections from more than 187,000 unique users between April 5 and April 20 using election-related hashtags — from which they extracted more than 137,000 tweets containing a URL link, which pointed to a total of 5,774 unique media sources.

Sources that were shared 5x or more across the collection period were manually classified by a team of nine multi-lingual coders based on what they describe as “a rigorous grounded typology developed and refined through the project’s previous studies of eight elections in several countries around the world”.

Each media source was coded individually by two separate coders, a technique they say enabled them to successfully label nearly 91% of all links shared during the study period.
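To make that pipeline concrete, here is a minimal sketch (not the OII’s actual code) of the link-extraction step the study describes: reducing each URL shared in the collected tweets to its media source and keeping only sources shared five or more times for manual classification. The file name and field layout are assumptions based on the standard Twitter API tweet format.

import json
from collections import Counter
from urllib.parse import urlparse

def media_source(url: str) -> str:
    # Reduce a shared link to its host, e.g. 'https://example.com/story?x=1' -> 'example.com'
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

def count_sources(path: str) -> Counter:
    # Count how often each media source was shared across the tweet sample.
    counts = Counter()
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            tweet = json.loads(line)
            for u in tweet.get("entities", {}).get("urls", []):
                link = u.get("expanded_url") or u.get("url")
                if link:
                    counts[media_source(link)] += 1
    return counts

if __name__ == "__main__":
    counts = count_sources("eu_election_tweets.jsonl")  # hypothetical local dump of collected tweets
    # Mirror the study's threshold: only sources shared 5+ times go to the human coders.
    to_code = {src: n for src, n in counts.items() if n >= 5}
    print(f"{len(to_code)} sources shared 5+ times, out of {len(counts)} unique sources")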

The five most popular junk news sources were extracted from each language sphere looked at — with the researchers then measuring the volume of Facebook interactions with these outlets between April 5 and May 5, using the NewsWhip Analytics dashboard.

They also conducted a thematic analysis of the 20 most engaging junk news stories on Facebook during the data collection period to gain a better understanding of the different political narratives favoured by junk news outlets ahead of an election.

On the latter front they say the most engaging junk narratives over the study period “tend to revolve around populist themes such as anti-immigration and Islamophobic sentiment, with few expressing Euroscepticism or directly mentioning European leaders or parties”.

Which suggests that EU-level political disinformation is a more issue-focused animal (and/or less developed) — vs the kind of personal attacks that have been normalized in US politics (and were richly and infamously exploited by Kremlin-backed anti-Clinton political disinformation during the 2016 US presidential election, for example).

This is likely also down to lower public awareness of the individuals involved in EU institutions and politics, and the multinational nature of the pan-EU project — which inevitably bakes in far greater diversity. (We can posit that, just as it aids robustness in biological life, diversity appears to bolster democratic resilience against political nonsense.)

The researchers also say they identified two noticeable patterns in the thematic content of junk stories that sought to cynically spin political or social news events for political gain over the pre-election study period.

“Out of the twenty stories we analysed, 9 featured explicit mentions of ‘Muslims’ and the Islamic faith in general, while seven mentioned ‘migrants’, ‘immigration’, or ‘refugees’… In seven instances, mentions of Muslims and immigrants were coupled with reporting on terrorism or violent crime, including sexual assault and honour killings,” they write.

“Several stories also mentioned the Notre Dame fire, some propagating the idea that the arson had been deliberately plotted by Islamist terrorists, for example, or suggesting that the French government’s reconstruction plans for the cathedral would include a minaret. In contrast, only 4 stories featured Euroscepticism or direct mention of European Union leaders and parties.

“The ones that did either turned a specific political figure into one of derision – such as Arnoud van Doorn, former member of PVV, the Dutch nationalist and far-right party of Geert Wilders, who converted to Islam in 2012 – or revolved around domestic politics. One such story relayed allegations that Emmanuel Macron had been using public taxes to finance ISIS jihadists in Syrian camps, while another highlighted an offer by Vladimir Putin to provide financial assistance to rebuild Notre Dame.”

Taken together, the researchers conclude that “individuals discussing politics on social media ahead of the European parliamentary elections shared links to high-quality news content, including high volumes of content produced by independent citizen, civic groups and civil society organizations, compared to other elections we monitored in France, Sweden, and Germany”.

Which suggests that attempts to manipulate the pan-EU election are either less prolific or, well, less successful than those which have targeted some recent national elections in EU Member States. And logic would suggest that interfering in elections across a 28-Member State bloc requires far greater co-ordination and resources than meddling in a single national election — on account of the multiple countries, cultures, languages and issues involved.

We’ve reached out to Facebook for comment on the study’s findings.

The company has put a heavy focus on publicizing its self-styled ‘election security’ efforts ahead of the EU election. Though it has mostly focused on setting up systems to control political ads — whereas junk news purveyors are simply uploading regular Facebook ‘content’ at the same time as wrapping it in bogus claims of ‘journalism’ — none of which Facebook objects to. All of which allows would-be election manipulators to pass off junk views as online news, leveraging the reach of Facebook’s platform and its attention-hogging algorithms to amplify hateful nonsense. And any increase in engagement is a win for Facebook’s ad business, so er…

When it comes to elections, Facebook moves slow, may still break things


This week, Facebook invited a small group of journalists — which didn’t include TechCrunch — to look at the “war room” it has set up in Dublin, Ireland, to help monitor its products for election-related content that violates its policies. (“Time and space constraints” limited the numbers, a spokesperson told us when we asked why we weren’t invited.)

Facebook announced it would be setting up this Dublin hub — which will bring together data scientists, researchers, legal and community team members, and others in the organization to tackle issues like fake news, hate speech and voter suppression — back in January. The company has said it has nearly 40 teams working on elections across its family of apps, without breaking out the number of staff it has dedicated to countering political disinformation. 

We have been told that there would be “no news items” during the closed tour — which, despite that, is “under embargo” until Sunday — beyond what Facebook and its executives discussed last Friday in a press conference about its European election preparations.

The tour looks to be a direct copy-paste of the one Facebook held to show off its US election “war room” last year, which it did invite us on. (In that case it was forced to claim it had not disbanded the room soon after heavily PR’ing its existence — saying the monitoring hub would be used again for future elections.)

We understand — via a non-Facebook source — that several broadcast journalists were among the invites to its Dublin “war room”. So expect to see a few gauzy inside views at the end of the weekend, as Facebook’s PR machine spins up a gear ahead of the vote to elect the next European Parliament later this month.

It’s clearly hoping shots of serious-looking Facebook employees crowded around banks of monitors will play well on camera and help influence public opinion that it’s delivering an even social media playing field for the EU parliament election. The European Commission is also keeping a close watch on how platforms handle political disinformation before a key vote.

But with the pan-EU elections set to start May 23, and a general election already held in Spain last month, we believe the lack of new developments to secure EU elections is very much to the company’s discredit.

The EU parliament elections are now a mere three weeks away, and there are a lot of unresolved questions and issues Facebook has yet to address. Yet we’re told the attending journalists were once again not allowed to put any questions to the fresh-faced Facebook employees staffing the “war room”.

Ahead of the looming batch of Sunday evening ‘war room tour’ news reports, which Facebook will be hoping contain its “five pillars of countering disinformation” talking points, we’ve compiled a rundown of some key concerns and complications flowing from the company’s still highly centralized oversight of political campaigning on its platform — even as it seeks to gloss over how much dubious stuff keeps falling through the cracks.

Worthwhile counterpoints to another highly managed Facebook “election security” PR tour.

No overview of political ads in most EU markets

Since political disinformation created an existential nightmare for Facebook’s ad business with the revelations of Kremlin-backed propaganda targeting the 2016 US presidential election, the company has vowed to deliver transparency — via the launch of a searchable political ad archive for ads running across its products.

The Facebook Ad Library now shines a narrow beam of light into the murky world of political advertising. Before this, each Facebook user could only see the propaganda targeted specifically at them. Now, such ads stick around in its searchable repository for seven years. This is a major step up from total obscurity. (Obscurity that Facebook isn’t wholly keen to lift the lid on, we should add; its political data releases to researchers so far haven’t gone back before 2017.)

However, in its current form, in the vast majority of markets, the Ad Library makes the user do all the leg work — running searches manually to try to understand and quantify how Facebook’s platform is being used to spread political messages intended to influence voters.

Facebook does also offer an Ad Library Report — a downloadable weekly summary of ads viewed and highest spending advertisers. But it only offers this in four countries globally right now: the US, India, Israel and the UK.

It has said it intends to ship an update to the reports in mid-May. But it’s not clear whether that will make them available in every EU country. (Mid-May would also be pretty late for elections that start May 23.)

So while the UK report makes clear that the new ‘Brexit Party’ is now a leading spender ahead of the EU election, what about the other 27 members of the bloc? Don’t they deserve an overview too?

A spokesperson we talked to about this week’s closed briefing said Facebook had no updates on expanding Ad Library Reports to more countries, in Europe or otherwise.

So, as it stands, the vast majority of EU citizens are missing out on meaningful reports that could help them understand which political advertisers are trying to reach them and how much they’re spending.

Which brings us to…

Facebook’s Ad Archive API is far too limited

In another positive step, Facebook has launched an API for the ad archive that developers and researchers can use to query the data. However, as we reported earlier this week, many respected researchers have voiced disappointment with what it’s offering so far — saying the rate-limited API is not nearly open or accessible enough to get a complete picture of all ads running on its platform.

Following this criticism, Facebook’s director of product, Rob Leathern, tweeted a response, saying the API would improve. “With a new undertaking, we’re committed to feedback & want to improve in a privacy-safe way,” he wrote.

The question is when will researchers have a fit-for-purpose tool to understand how political propaganda is flowing over Facebook’s platform? Apparently not in time for the EU elections, either: We asked about this on Thursday and were pointed to Leathern’s tweets as the only update.
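For a sense of what the rate-limiting complaint means in practice, here is a minimal sketch of a researcher’s query against the ads_archive Graph API endpoint that backs the Ad Library. The endpoint version, parameters and field names reflect Facebook’s public documentation at the time of writing and should be verified against it; the access token is a placeholder. The structural point is that every query is scoped to keywords and countries and paginated, so there is no single call that returns the complete picture researchers are asking for.

import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder; the API requires an identity-confirmed account
ENDPOINT = "https://graph.facebook.com/v3.3/ads_archive"

params = {
    "access_token": ACCESS_TOKEN,
    "ad_type": "POLITICAL_AND_ISSUE_ADS",
    "ad_reached_countries": "['GB']",        # queries are scoped per country
    "search_terms": "'european elections'",  # and per keyword
    "fields": "page_name,ad_creative_body,ad_delivery_start_time,spend,impressions",
    "limit": 100,
}

ads = []
url = ENDPOINT
while url:
    resp = requests.get(url, params=params)
    resp.raise_for_status()
    payload = resp.json()
    ads.extend(payload.get("data", []))
    # Follow pagination links until exhausted (or until the rate limit is hit).
    url = payload.get("paging", {}).get("next")
    params = {}  # the "next" URL already embeds the query parameters

print(f"Fetched {len(ads)} ads for this keyword/country combination")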

This issue is compounded by Facebook also restricting the ability of political transparency campaigners — such as the UK group WhoTargetsMe and US investigative journalism site ProPublica — to monitor ads via browser plug-ins, as the Guardian reported in January.

The net effect is that Facebook is making it hard for civil society groups and public interest researchers to study the flow of political messaging on its platform and quantify its democratic impacts, offering only a highly managed level of access to ad data that falls far short of the “political ads transparency” Facebook’s PR has been loudly trumpeting since 2017.

Ad loopholes remain ripe for exploiting

Facebook’s Ad Library includes data on political ads that were active on its platform but subsequently got pulled (made “inactive” in its parlance) because they broke its disclosure rules.

There are multiple examples of inactive ads for the Spanish far right party Vox visible in Facebook’s Ad Library that were pulled for running without the required disclaimer label, for example.

“After the ad started running, we determined that the ad was related to politics and issues of national importance and required the label. The ad was taken down,” runs the standard explainer Facebook offers if you click on the little ‘i’ next to an observation that “this ad ran without a disclaimer”.

What is not at all clear is how quickly Facebook acted to remove rule-breaking political ads.

It is possible to click on each individual ad to get some additional details. Here Facebook provides a per ad breakdown of impressions; genders, ages, and regional locations of the people who saw the ad; and how much was spent on it.

But all those clicks don’t scale. So it’s not possible to get an overview of how effectively Facebook is handling political ad rule breakers. Unless, well, you literally go in clicking and counting on each and every ad…

There is then also the wider question of whether a political advertiser that is found to be systematically breaking Facebook rules should be allowed to keep running ads on its platform.

Because if Facebook does allow that to happen there’s a pretty obvious (and massive) workaround for its disclosure rules: Bad faith political advertisers could simply keep submitting fresh ads after the last batch got taken down.

We were, for instance, able to find inactive Vox ads taken down for lacking a disclaimer that had still been able to rack up thousands — and even tens of thousands — of impressions in the time they were still active.

Facebook needs to be much clearer about how it handles systematic rule breakers.

Definition of political issue ads is still opaque

Facebook currently requires that all political advertisers in the EU go through its authorization process in the country where ads are being delivered if they relate to the European Parliamentary elections, as a step to try and prevent foreign interference.

This means it asks political advertisers to submit documents and runs technical checks to confirm their identity and location. Though it noted, on last week’s call, that it cannot guarantee this ID system cannot be circumvented. (As it was last year when UK journalists were able to successfully place ads paid for by ‘Cambridge Analytica’.)

One other big potential workaround lies in the question of what counts as a political ad, and what counts as an issue ad.

Facebook says these types of ads on Facebook and Instagram in the EU “must now be clearly labeled, including a paid-for-by disclosure from the advertiser at the top of the ad” — so users can see who is paying for the ads and, if there’s a business or organization behind it, their contact details, plus some disclosure about who, if anyone, saw the ads.

But the big question is how Facebook is defining political and issue ads across Europe.

While political ads might seem fairly easy to categorize, assuming they’re attached to registered political parties and candidates, issues are a whole lot more subjective.

Currently Facebook defines issue ads as those relating to “any national legislative issue of public importance in any place where the ad is being run.” It says it worked with EU barometer, YouGov and other third parties to develop an initial list of key issues — examples for Europe include immigration, civil and social rights, political values, security and foreign policy, the economy and environmental politics — that it will “refine… over time.”

Again specifics on when and how that will be refined are not clear. Yet ads that Facebook does not deem political/issue ads will slip right under its radar. They won’t be included in the Ad Library; they won’t be searchable; but they will be able to influence Facebook users under the perfect cover of its commercial ad platform — as before.

So if any maliciously minded propaganda slips through Facebook’s net, because the company decides it’s a non-political issue, it will once again leave no auditable trace.

In recent years the company has also had a habit of announcing major takedowns of what it badges “fake accounts” ahead of major votes. But again voters have to take it on trust that Facebook is getting those judgement calls right.

Facebook continues to bar pan-EU campaigns

On the flip side of weeding out non-transparent political propaganda and/or political disinformation, Facebook is currently blocking the free flow of legal pan-EU political campaigning on its platform.

This issue first came to light several weeks ago, when it emerged that European officials had written to Nick Clegg (Facebook’s vice president of global affairs) to point out that its current rules — i.e. that require those campaigning via Facebook ads to have a registered office in the country where the ad is running — run counter to the pan-European nature of this particular election.

It means EU institutions are in the strange position of not being able to run Facebook ads for their own pan-EU election everywhere across the region. “This runs counter to the nature of EU institutions. By definition, our constituency is multinational and our target audience are in all EU countries and beyond,” the EU’s most senior civil servants pointed out in a letter to the company last month.

This issue impacts not just EU institutions and organizations advocating for particular policies and candidates across EU borders, but even NGOs wanting to run vanilla “get out the vote” campaigns Europe-wide — leading a number of them to accuse Facebook of breaching their electoral rights and freedoms.

Facebook claimed last week that the ball is effectively in the regulators’ court on this issue — saying it’s open to making the changes but has to get their agreement to do so. A spokesperson confirmed to us that there is no update to that situation, either.

Of course the company may be trying to err on the side of caution, to prevent bad actors being able to interfere with the vote across Europe. But at what cost to democratic freedoms?

What about fake news spreading on WhatsApp?

Facebook’s ‘election security’ initiatives have focused on political and/or politically charged ads running across its products. But there’s no shortage of political disinformation flowing unchecked across its platforms as user uploaded ‘content’.

On the Facebook-owned messaging app WhatsApp, which is hugely popular in some European markets, the presence of end-to-end encryption further complicates this issue by providing a cloak for the spread of political propaganda that’s not being regulated by Facebook.

In a recent study of political messages spread via WhatsApp ahead of last month’s general election in Spain, the campaign group Avaaz dubbed it “social media’s dark web” — claiming the app had been “flooded with lies and hate”.

“Posts range from fake news about Prime Minister Pedro Sánchez signing a secret deal for Catalan independence to conspiracy theories about migrants receiving big cash payouts, propaganda against gay people and an endless flood of hateful, sexist, racist memes and outright lies,” it wrote.

Avaaz compiled this snapshot of politically charged messages and memes being shared on Spanish WhatsApp by co-opting 5,833 local members to forward election-related content that they deemed false, misleading or hateful.

It says it received a total of 2,461 submissions — which is of course just a tiny, tiny fraction of the stuff being shared in WhatsApp groups and chats. Which makes this app the elephant in Facebook’s election ‘war room’.

What exactly is a war room anyway?

Facebook has said its Dublin Elections Operation Center — to give it its official title — is “focused on the EU elections”, while also suggesting it will plug into a network of global teams “to better coordinate in real time across regions and with our headquarters in California [and] accelerate our rapid response times to fight bad actors and bad content”.

But we’re concerned Facebook is sending out mixed — and potentially misleading — messages about how its election-focused resources are being allocated.

Our (non-Facebook) source told us the 40-odd staffers in the Dublin hub during the press tour were simultaneously looking at the Indian elections. If that’s the case, it does not sound entirely “focused” on either the EU or India’s elections. 

Facebook’s eponymous platform has 2.375 billion monthly active users globally, with some 384 million MAUs in Europe. That’s more users than in the US (243M MAUs), though Europe is Facebook’s second-biggest market in terms of revenue, after the US: last quarter, it pulled in $3.65BN in sales for Facebook (versus $7.3BN for the US) out of $15BN overall.

Beyond any moral or legal pressure Facebook might face to run a more responsible platform when it comes to supporting democratic processes, these numbers underscore the business imperative it has to get this sorted out in Europe, and to do it better.

Having a “war room” may sound like a start, but unfortunately Facebook is presenting it as an end in itself. And its foot-dragging on all of the bigger issues that need tackling, in effect, means the war will continue to drag on.

Samantha Bee: Canadian, comedian, and defender of the free press


The only job named in and protected by the U.S. constitution is journalism. But when it’s under attack from fake news, misinformation, and the supposed defender-of-the-constitution-in-chief, who looks out for the press?

Reporters have an unlikely ally in the late night comedy circuit.

Late night television has a steady stream of male comedians ready to cursorily pick apart the news of the day, often mocking the dispatches of the press — typically the government — before they turn to a light hearted interview with a celebrity to round off the night.

But not Samantha Bee. The Canadian-born comedian and former ‘Daily Show’ correspondent is the only female comedian with a late-night show, Full Frontal, and she doesn’t waste a second of it in holding the powerful to account. Her show, which films and airs on TBS every Wednesday, offers a weekly record of the abuses of the government by bringing both the big stories and the little-read reports to her massive viewing audience.

It’s no surprise that President Trump, an ardent critic of the press, declined for the third consecutive year to attend Saturday’s White House Correspondents’ Dinner, an annual gala for the White House press corps that “celebrates” the First Amendment’s protections of free speech — often by taking comical potshots at the commander-in-chief himself. The only saving grace for the president’s would-be roasting is that the dinner’s organizers, the White House Correspondents’ Association, dropped the traditional comedy set altogether after Michelle Wolf’s pointed, if controversial, set last year — which Bee herself defended.

Enter Bee with her own rival event, the aptly named Not The White House Correspondent’s Dinner, a party in its third year for “the free press… while we still have one,” said Bee.

“We’re throwing the party they should be having,” she said.

Samantha Bee speaks onstage during “Full Frontal With Samantha Bee” Not The White House Correspondents Dinner on April 26, 2019 in Washington, DC. (Photo by Tasos Katopodis/Getty Images for TBS)

A free meal and an hour of comedy aside, support for the press is as important as ever. With more frequent attacks on the press, the murder of Jamal Khashoggi, and the regular insults of “fake news,” press freedom is in a vice.

“Journalists are critical to creating an informed citizenry, to make sure we’re holding public officials to account, and to get basic information about the world around us,” said Courtney Radsch, advocacy director at the Committee to Protect Journalists, a non-profit dedicated to promoting press freedom and advocating for the rights of reporters across the world.

“By labeling journalists as ‘enemies of the people’,” said Radsch, a term repeatedly used by Trump, including days prior to a newsroom shooting at Baltimore’s Capital Gazette newspaper, “it creates conditions that make it less safe for reporters to work.”

Last year, the CPJ’s Press Freedom Tracker database logged over a hundred incidents — from murders to physical attacks, border searches and legal orders — involving the press.

“This constant denigration of the media as ‘fake news’ has a really detrimental impact,” she said.

Bee isn’t alone in her efforts to support the free press. Fellow comedians like John Oliver and Hasan Minhaj use their platforms to educate and inform about “fundamental issues that concern more than just journalists,” said Radsch.

Bee’s weekly half-hour show is a journalistic effort in its own right. But as a comedy show, it’s largely shielded from the near-constant attacks that the press face from the Trump administration and its allies.

With all proceeds from the dinner going to the Committee to Protect Journalists, Bee has shown herself to be not only an ally for reporters but also a staunch defender of the free press.

“No-one needs the press more than me and my show,” said Bee at the dinner. “We spend all day reading and watching and thinking about the news.”

“Journalism is essential,” she said. And then she broke into song.

Samantha Bee’s Not The White House Correspondent’s Dinner airs Saturday at 10pm ET on TBS. TechCrunch was invited as a guest.

Twitter to offer report option for misleading election tweets


Twitter is adding a dedicated report option that enables users to tell it about misleading tweets related to voting — starting with elections taking place in India and the European Union.

From tomorrow users in India can report tweets they believe are trying to mislead voters — such as disinformation related to the date or location of polling stations; or fake claims about identity requirements for being able to vote — by tapping on the arrow menu of the suspicious tweet and selecting the ‘report tweet’ option and then choosing: ‘It’s misleading about voting’.

Twitter says the tool will go live for the Indian Lok Sabha elections from tomorrow, and will launch in all European Union member states on April 29 — ahead of elections for the EU parliament next month.

The ‘misleading about voting’ option will persist in the list of available choices for reporting tweets for seven days after each election ends, Twitter said in a blog post announcing the feature.

It also said it intends the vote-focused feature to be rolled out to “other elections globally throughout the rest of the year”, without providing further detail on which elections and markets it will prioritize for getting the tool.

“Our teams have been trained and we recently enhanced our appeals process in the event that we make the wrong call,” Twitter added.

In recent months the European Commission has been ramping up pressure on tech platforms to scrub disinformation ahead of elections to the EU parliament — issuing monthly reports on progress, or, well, the lack of it.

This follows a Commission initiative last year which saw major tech and ad platforms — including Facebook, Google and Twitter — sign up to a voluntary Code of Practice on disinformation, committing themselves to take some non-prescribed actions to disrupt the ad revenues of disinformation agents and make political ads more transparent on their platforms.

Another strand of the Code looks to have directly contributed to the development of Twitter’s new ‘misleading about voting’ report option — with signatories committing to:

  • Empower consumers to report disinformation and access different news sources, while improving the visibility and findability of authoritative content;

In the latest progress report on the Code, which was published by the Commission yesterday but covers steps taken by the platforms in March 2019, it noted some progress — but said it’s still not enough.

“Further technical improvements as well as sharing of methodology and data sets for fake accounts are necessary to allow third-party experts, fact-checkers and researchers to carry out independent evaluation,” EC commissioners warned in a joint statement.

In the case of Twitter the company was commended for having made political ad libraries publicly accessible but criticized (along with Google) for not doing more to improve transparency around issue-based advertising.

“It is regrettable that Google and Twitter have not yet reported further progress regarding transparency of issue-based advertising, meaning issues that are sources of important debate during elections,” the Commission said. 

It also reported that Twitter had provided figures on actions undertaken against spam and fake accounts but had failed to explain how these actions relate to activity in the EU.

“Twitter did not report on any actions to improve the scrutiny of ad placements or provide any metrics with respect to its commitments in this area,” it also noted.

The EC says it will assess the Code’s initial 12-month period by the end of 2019 — and take a view on whether it needs to step in and propose regulation to control online disinformation. (Something which some individual EU Member States are already doing, albeit with a focus on hate speech and/or online safety.)

Facebook has quietly removed three bogus far right networks in Spain ahead of Sunday’s elections


Facebook has quietly removed three far right networks that were engaged in coordinated inauthentic behavior intended to spread politically divisive content in Spain ahead of a general election in the country which takes place on Sunday.

The networks had a total reach of almost 1.7M followers and had generated close to 7.4M interactions in the past three months alone, according to analysis by the independent group that identified the bogus activity on Facebook’s platform.

The fake far right activity was apparently not picked up by Facebook.

Instead activist not-for-profit Avaaz unearthed the inauthentic content, and presented its findings to the social networking giant earlier this month, on April 12. In a press release issued today the campaigning organization said Facebook has now removed the fakes — apparently vindicating its findings.

“Facebook did a great job in acting fast, but these networks are likely just the tip of the disinformation iceberg — and if Facebook doesn’t scale up, such operations could sink democracy across the continent,” said Christoph Schott, campaign director at Avaaz, in a statement.

“This is how hate goes viral. A bunch of extremists use fake and duplicate accounts to create entire networks to fake public support for their divisive agenda. It’s how voters were misled in the U.S., and it happened again in Spain,” he added.

We reached out to Facebook for comment but at the time of writing the company had not responded to the request or to several questions we also put to it.

Avaaz said the networks it found comprised around thirty pages and groups spreading far right propaganda — including anti-immigrant, anti-LGBT, anti-feminist and anti-Islam content.

Examples of the inauthentic content can be viewed in Avaaz’s executive summary of the report. They include fake data about foreigners committing the majority of rapes in Spain; fake news about Catalonia’s pro independence leader; and various posts targeting leftwing political party Podemos — including an image superimposing the head of its leader onto the body of Hitler performing a nazi salute.

One of the networks — which Avaaz calls Unidad ​Nacional Española (after the most popular page in the network) — was apparently created and co-ordinated by an individual called ​Javier Ramón Capdevila Grau, who had multiple personal Facebook accounts (also) in contravention of Facebook’s community standards. 

This network, which had a reach of more than 1.2M followers, comprised at least 10 pages that Avaaz identified as working in a coordinated fashion to spread “politically divisive content”.

Its report details how word-for-word identical posts were published across multiple Facebook pages and groups in the network just minutes apart, with nothing to indicate they weren’t original postings on each page. 

Here’s an example post it found copy-pasted across the Unidad ​Nacional Española network:

Translated, the posted text reads: ‘In Spain, if a criminal enters your house without your permission the only thing you can do is hide, since if you touch a hair on his head or prevent him from being able to rob you, you’ll spend more time in prison than him.’

Avaaz found another smaller network targeting leftwing views, called Todos Contra Podemos, which included seven pages and groups with around 114,000 followers — also apparently run by a single individual (in this case using the name Antonio Leal Felix Aguilar) who also operated multiple Facebook profiles.

A third network, Lucha por España​, comprised 12 pages and groups with around 378,000 followers.

Avaaz said it was unable to identify the individual/s behind that network. 

Facebook has not publicized the removal of these particular political disinformation networks, despite its now steady habit of issuing PR when it finds and removes ‘coordinated inauthentic behavior’ (though of course there’s no way to be sure it’s disclosing everything it finds on its platform). But test searches for the main pages identified by Avaaz returned either no results or what appear to be unrelated Facebook pages using the same names.

Since the 2016 U.S. presidential election was (infamously) targeted by divisive Kremlin propaganda seeded and amplified via social media, Facebook has launched what it markets as “election security” initiatives in a handful of countries around the world — such as searchable ad archives and political ad authentication and/or disclosure requirements.

However these efforts continue to face criticism for being patchy, piecemeal and, even in countries where they have been applied to its platform, weak and trivially easy to work around.

Its political ads transparency measures do not always apply to issue-based ads (and/or content), for instance, which punches a democracy-denting hole in the self-styled ‘guardrails’ by allowing divisive propaganda to continue to flow.

In Spain Facebook has not even launched a system of political ad transparency, let alone launched systems addressing issue-based political ads — despite the country’s looming general election on April 28; its third in four years. (Since 2015 elections in Spain have yielded heavily fragmented parliaments — making another imminent election not at all unlikely.)

In February, when we asked Facebook whether it would commit to launching ad transparency tools in Spain before the April 28 election, it offered no such commitment — saying instead that it sets up internal cross-functional teams for elections in every market to assess the biggest risks, and make contact with the relevant electoral commission and other key stakeholders.

Again, it’s not possible for outsiders to assess the efficacy of such internal efforts. But Avaaz’s findings suggest Facebook’s risk assessment of Spain’s general election has had a pretty hefty blindspot when it comes to proactively picking up malicious attempts to inflate far right propaganda.

Yet, at the same time, a regional election in Andalusia late last year returned a shock result and warning signs — with the tiny (and previously unelected) far right party, Vox, gaining around 10 per cent of the vote to take 12 seats.

Avaaz’s findings vis-a-vis the three bogus far right networks suggest that, as well as seeking to slur leftwing/liberal political views and parties, some of the inauthentic pages were actively trying to amplify Vox — with one bogus page, Orgullo Nacional España, sharing a pro-Vox Facebook page 155 times in a three month period.

Avaaz used the Facebook-owned social media monitoring tool CrowdTangle to get a read on how much impact the fake networks might have had.

It found that while the three inauthentic far right Facebook networks produced just 3.7% of the posts in its Spanish elections dataset, they garnered an impressive 12.6% of total engagement over the three month period it pulled data on (between January 5 and April 8) — despite consisting of just 27 Facebook pages and groups out of a total of 910 in the full dataset. 

Or, to put it another way, a handful of bad actors managed to generate enough divisive politically charged noise that more than one in ten of those engaging in Spanish election chatter on Facebook, per its dataset, at very least took note.
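As a back-of-the-envelope check, using only the percentages reported above (the underlying totals aren’t published, so the shares are taken at face value), that disproportion works out to roughly 3.4 times as much engagement per post as the dataset average:

# Figures as reported by Avaaz above; totals are not published, so shares are taken at face value.
pages_total, pages_junk = 910, 27
post_share, engagement_share = 0.037, 0.126   # 3.7% of posts, 12.6% of interactions

per_post_multiplier = engagement_share / post_share
print(f"Junk pages: {pages_junk / pages_total:.1%} of pages in the dataset")   # ~3.0%
print(f"Engagement per post vs dataset average: {per_post_multiplier:.1f}x")   # ~3.4x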

It’s a finding which neatly illustrates that divisive content being more clickable is not at all a crazy idea — whatever the founder of Facebook once said.
