
Thousands of vulnerable TP-Link routers at risk of remote hijack


Thousands of TP-Link routers are vulnerable to a bug that can be used to remotely take control of the device, but it took over a year for the company to publish the patches on its website.

The vulnerability allows any low-skilled attacker to remotely gain full access to an affected router. The exploit relies on the router’s default password, which many users never change.

In the worst case scenario, an attacker could target vulnerable devices on a massive scale, using a similar mechanism to how botnets like Mirai worked — by scouring the web and hijacking routers using default passwords like “admin” and “pass”.

Andrew Mabbitt, founder of U.K. cybersecurity firm Fidus Information Security, first discovered and disclosed the remote code execution bug to TP-Link in October 2017. TP-Link released a patch a few weeks later for the vulnerable WR940N router, but Mabbitt warned TP-Link again in January 2018 that another router, TP-Link’s WR740N, was also vulnerable to the same bug because the company reused vulnerable code between devices.

TP-Link said the vulnerability was quickly patched in both routers. But when we checked, the firmware for the WR740N wasn’t available on the website.

When asked, a TP-Link spokesperson said the update was “currently available when requested from tech support,” but wouldn’t explain why. Only after TechCrunch reached out did TP-Link update the firmware page to include the latest security update.

Top countries with vulnerable WR740N routers. (Image: Shodan)

Routers have long been notorious for security problems. Sitting at the heart of any network, a flawed router can have disastrous effects on every connected device. By gaining complete control over the router, Mabbitt said, an attacker could wreak havoc on a network. Modifying the settings on the router affects everyone connected to it — such as altering the DNS settings to trick users into visiting a fake page that steals their login credentials.

TP-Link declined to disclose how many potentially vulnerable routers it had sold, but said that the WR740N had been discontinued a year earlier in 2017. When we checked two search engines for exposed devices and databases, Shodan and Binary Edge, each suggested there are anywhere between 129,000 and 149,000 devices on the internet — though the number of vulnerable devices is likely far lower.

Mabbitt said he believed TP-Link still had a duty of care to alert customers of the update if thousands of devices are still vulnerable, rather than hoping they will contact the company’s tech support.

Both the U.K. and the U.S. state of California are set to soon require internet-connected devices to be sold with unique default passwords, to prevent botnets from hijacking them at scale and using their collective bandwidth to knock websites offline.

The Mirai botnet downed Dyn, a domain name service giant, which knocked dozens of major sites offline for hours — including Twitter, Spotify and SoundCloud.


London’s Tube network to switch on wi-fi tracking by default in July


Transport for London will roll out default wi-fi device tracking on the London Underground this summer, following a trial back in 2016.

In a press release announcing the move, TfL writes that “secure, privacy-protected data collection will begin on July 8” — while touting additional services, such as improved alerts about delays and congestion, which it frames as “customer benefits”, as expected to launch “later in the year”.

As well as offering additional alerts-based services to passengers via its own website/apps, TfL says it could incorporate crowding data into its free open-data API — to allow app developers, academics and businesses to expand the utility of the data by baking it into their own products and services.

It’s not all just added utility though; TfL says it will also use the information to enhance its in-station marketing analytics — and, it hopes, top up its revenues — by tracking footfall around ad units and billboards.

Commuters using the UK capital’s publicly funded transport network who do not want their movements being tracked will have to switch off their wi-fi, or else put their phone in airplane mode when using the network.

To deliver data of the required detail, TfL says detailed digital mapping of all London Underground stations was undertaken to identify where wi-fi routers are located so it can understand how commuters move across the network and through stations.

It says it will erect signs at stations informing passengers that using the wi-fi will result in connection data being collected “to better understand journey patterns and improve our services” — and explaining that to opt out they have to switch off their device’s wi-fi.

Attempts in recent years by smartphone OSes to use MAC address randomization to try to defeat persistent device tracking have been shown to be vulnerable to reverse engineering via flaws in wi-fi set-up protocols. So, er, switch off to be sure.

We covered TfL’s wi-fi tracking beta back in 2017, when we reported that despite claiming the harvested wi-fi data was “de-personalised”, and claiming individuals using the Tube network could not be identified, TfL nonetheless declined to release the “anonymized” data-set after a Freedom of Information request — saying there remains a risk of individuals being re-identified.

As has been shown many times before, reversing ‘anonymization’ of personal data can be frighteningly easy.

It’s not immediately clear from the press release or TfL’s website exactly how it will be encrypting the location data gathered from devices that authenticate to use the free wi-fi at the circa 260 wi-fi enabled London Underground stations.

Its explainer about the data collection does not go into any real detail about the encryption and security being used. (We’ve asked for more technical details.)

“If the device has been signed up for free Wi-Fi on the London Underground network, the device will disclose its genuine MAC address. This is known as an authenticated device,” TfL writes generally of how the tracking will work.

“We process authenticated device MAC address connections (along with the date and time the device authenticated with the Wi-Fi network and the location of each router the device connected to). This helps us to better understand how customers move through and between stations — we look at how long it took for a device to travel between stations, the routes the device took and waiting times at busy periods.”

“We do not collect any other data generated by your device. This includes web browsing data and data from website cookies,” it adds, saying also that “individual customer data will never be shared and customers will not be personally identified from the data collected by TfL”.

In a section entitled “keeping information secure” TfL further writes: “Each MAC address is automatically depersonalised (pseudonymised) and encrypted to prevent the identification of the original MAC address and associated device. The data is stored in a restricted area of a secure location and it will not be linked to any other data at a device level.  At no time does TfL store a device’s original MAC address.”

Privacy and security concerns were raised about the location tracking around the time of the 2016 trial — such as why TfL had used a monthly salt key to encrypt the data rather than daily salts, which would have decreased the risk of data being re-identifiable should it leak out.
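To illustrate why the salt rotation period matters, here is a minimal sketch of salt-based pseudonymisation of a wi-fi connection record — the function names, record layout and salt handling are entirely our own assumptions for illustration, not TfL’s actual implementation. Because the raw MAC is discarded and only a keyed one-way hash is stored, records within one salt period remain linkable (which is what allows journey times to be reconstructed), but rotating the salt daily rather than monthly makes a leaked data set far harder to join up across days.

```python
# Illustrative sketch only — not TfL's implementation.
import hashlib
import hmac
import secrets
from datetime import datetime, timezone

# A secret salt/key for the current collection period. Rotating it daily
# rather than monthly means the same MAC maps to a different pseudonym in
# each period, limiting linkability if the stored data ever leaks.
CURRENT_SALT = secrets.token_bytes(32)

def pseudonymise(mac: str, salt: bytes) -> str:
    """One-way keyed hash of a MAC address; the raw MAC is never stored."""
    return hmac.new(salt, mac.lower().encode(), hashlib.sha256).hexdigest()

def connection_record(mac: str, station: str, router_id: str) -> dict:
    """Hypothetical record: pseudonym + timestamp + location, as TfL describes."""
    return {
        "device": pseudonymise(mac, CURRENT_SALT),
        "seen_at": datetime.now(timezone.utc).isoformat(),
        "station": station,
        "router": router_id,
    }

# Two sightings of the same phone share a pseudonym within a salt period,
# which is what lets journey times between stations be computed.
print(connection_record("AA:BB:CC:DD:EE:FF", "Oxford Circus", "OXC-12"))
```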

Such concerns persist — and security experts are now calling for full technical details to be released, given TfL is going full steam ahead with a rollout.

 

A report in Wired suggests TfL has switched from hashing to a system of tokenisation — “fully replacing the MAC address with an identifier that cannot be tied back to any personal information” — which TfL billed as a “more sophisticated mechanism” than it had used before. We’ll update as and when we get more from TfL.
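TfL hasn’t published details of how that tokenisation works. A common pattern — sketched below purely as an assumption on our part, not a description of TfL’s system — is to swap each MAC for a random identifier that has no mathematical relationship to the original, so it cannot be recovered by brute-forcing the relatively small MAC address space the way an unsalted hash could be; the ephemeral MAC-to-token mapping is then discarded.

```python
# Illustrative tokenisation sketch only — not TfL's implementation.
import secrets

class Tokeniser:
    """Maps each MAC address to a random token; the mapping is kept only
    in memory and can be discarded, leaving unlinkable tokens behind."""

    def __init__(self):
        self._tokens = {}

    def token_for(self, mac: str) -> str:
        mac = mac.lower()
        if mac not in self._tokens:
            # Random token, not derived from the MAC, so there is nothing
            # to reverse even with unlimited computing power.
            self._tokens[mac] = secrets.token_hex(16)
        return self._tokens[mac]

    def flush(self) -> None:
        """Drop the MAC-to-token mapping, e.g. at the end of a collection
        window, so stored tokens can no longer be tied to any device."""
        self._tokens.clear()

t = Tokeniser()
print(t.token_for("AA:BB:CC:DD:EE:FF"))
```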

Another question over the deployment at the time of the trial was what legal basis it would use for pervasively collecting people’s location data — since the system requires an active opt-out by commuters, a consent-based legal basis would not be appropriate.

In a section on the legal basis for processing the Wi-Fi connection data, TfL writes now that its ‘legal ground’ is two-fold:

  • Our statutory and public functions
  • to undertake activities to promote and encourage safe, integrated, efficient and economic transport facilities and services, and to deliver the Mayor’s Transport Strategy

So, presumably, you can file ‘increasing revenue around adverts in stations by being able to track nearby footfall’ under ‘helping to deliver (read: fund) the mayor’s transport strategy’.

(Or as TfL puts it: “[T]he data will also allow TfL to better understand customer flows throughout stations, highlighting the effectiveness and accountability of its advertising estate based on actual customer volumes. Being able to reliably demonstrate this should improve commercial revenue, which can then be reinvested back into the transport network.”)

On data retention it specifies that it will hold “depersonalised Wi-Fi connection data” for two years — after which it will aggregate the data and retain those non-individual insights (presumably indefinitely, or per its standard data retention policies).

“The exact parameters of the aggregation are still to be confirmed, but will result in the individual Wi-Fi connection data being removed. Instead, we will retain counts of activities grouped into specific time periods and locations,” it writes on that.

It further notes that aggregated data “developed by combining depersonalised data from many devices” may also be shared with other TfL departments and external bodies. So that processed data could certainly travel.

Of the “individual depersonalised device Wi-Fi connection data”, TfL claims it is accessible only to “a controlled group of TfL employees” — without specifying how large this group of staff is; and what sort of controls and processes will be in place to prevent the risk of A) data being hacked and/or leaking out or B) data being re-identified by a staff member.

A TfL employee with intimate knowledge of a partner’s daily travel routine might, for example, have access to enough information via the system to be able to reverse the depersonalization.

Without more technical details we just don’t know. Though TfL says it worked with the UK’s data protection watchdog in designing the data collection with privacy front of mind.

“We take the privacy of our customers very seriously. A range of policies, processes and technical measures are in place to control and safeguard access to, and use of, Wi-Fi connection data. Anyone with access to this data must complete TfL’s privacy and data protection training every year,” it also notes elsewhere.

Despite holding individual level location data for two years, TfL is also claiming that it will not respond to requests from individuals to delete or rectify any personal location data it holds, i.e. if people seek to exercise their information rights under EU law.

“We use a one-way pseudonymisation process to depersonalise the data immediately after it is collected. This means we will not be able to single out a specific person’s device, or identify you and the data generated by your device,” it claims.

“This means that we are unable to respond to any requests to access the Wi-Fi data generated by your device, or for data to be deleted, rectified or restricted from further processing.”

Again, the distinctions it is making there are raising some eyebrows.

What’s amply clear is that the volume of data generated by a full rollout of wi-fi tracking across the lion’s share of the London Underground will be staggering.

More than 509 million “depersonalised” pieces of data were collected from 5.6 million mobile devices during the four-week 2016 trial alone — comprising some 42 million journeys. And that was a brief trial covering a much smaller subset of the network.

As big data giants go, TfL is clearly gunning to be right up there.

Amazon leads $575M investment in Deliveroo


Amazon is taking a slice of Europe’s food delivery market after the U.S. e-commerce giant led a $575 million investment in Deliveroo.

First reported by Sky yesterday, the Series G round was confirmed in an early UK morning announcement from Deliveroo, which said that existing backers including T. Rowe Price, Fidelity Management and Research Company, and Greenoaks also took part. The deal takes Deliveroo to just over $1.5 billion raised to date. The company was valued at over $2 billion following its previous raise in late 2017; no updated valuation was provided today.

London-based Deliveroo operates in 14 countries, including the U.K., France, Germany and Spain, and — outside of Europe — Singapore, Taiwan, Australia and the UAE. Across those markets, it says it works with 80,000 restaurants, with a fleet of 60,000 delivery people and 2,500 permanent employees.

It isn’t immediately clear how Amazon plans to use its new strategic relationship with Deliveroo — it could, for example, integrate it with Prime membership — but this isn’t the firm’s first dalliance with food delivery. The U.S. company closed its Amazon Restaurants UK takeout business last year after it struggled to compete with Deliveroo and Uber Eats. The service remains operational in the U.S., however.

“Amazon has been an inspiration to me personally and to the company, and we look forward to working with such a customer-obsessed organization,” said Deliveroo CEO and founder Will Shu in a statement.

Shu said the new money will go towards initiatives that include growing Deliveroo’s London-based engineering team, expanding its reach and focusing on new products, including cloud kitchens that can cook up delivery meals faster and more cost-efficiently.

Will Shu, Deliveroo CEO and co-founder, on stage at TechCrunch Disrupt London.

UK tax office ordered to delete millions of unlawful biometric voiceprints


The U.K.’s data protection watchdog has issued the government department responsible for collecting taxes with a final enforcement notice, after an investigation found HMRC had collected biometric data from millions of citizens without obtaining proper consent.

HMRC has 28 days from the May 9 notice to delete any Voice ID records where it did not obtain explicit consent to record and create a unique biometric voiceprint linked to the individual’s identity. 

The Voice ID system was introduced in January 2017, with HMRC instructing callers to a helpline to record a phrase to use their voiceprint as a password. The system soon attracted criticism for failing to make it clear that people did not have to agree to their biometric data being recorded by the tax office.

In total, some 7 million U.K. citizens have had voiceprints recorded via the system. HMRC will now have to delete the majority of these records (~5 million voiceprints) — only retaining biometric data where it has fully informed consent to do so.

The Information Commissioner’s Office (ICO) investigation into Voice ID was triggered by a complaint by privacy advocacy group Big Brother Watch — which said more than 160,000 people opted out of the system after its campaign highlighted questions over how the data was being collected.

Announcing the conclusion of its probe last week, the ICO said it had found the tax office unlawfully processed people’s biometric data.

“Innovative digital services help make our lives easier but it must not be at the expense of people’s fundamental right to privacy. Organisations must be transparent and fair and, when necessary, obtain consent from people about how their information will be used. When that doesn’t happen, the ICO will take action to protect the public,” said deputy commissioner, Steve Wood, in a statement.

Blogging about its final enforcement notice, the regulator said today that it intends to carry out an audit to assess HMRC’s wider compliance with data protection rules.

“With the adoption of new systems comes the responsibility to make sure that data protection obligations are fulfilled and customers’ privacy rights addressed alongside any organisational benefit. The public must be able to trust that their privacy is at the forefront of the decisions made about their personal data,” writes Wood, offering guidance for using biometric data “in a fair, transparent and accountable way.”

Under Europe’s General Data Protection Regulation (GDPR), biometric data that’s used for identifying a person is classed as so-called “special category” data — meaning if a data controller is relying on consent as their legal basis for collecting this information the data subject must provide explicit consent.

In the case of HMRC, the ICO found it had failed to give customers sufficient information about how their biometric data would be processed, and failed to give them the chance to give or withhold consent.

It also collected voiceprints prior to publishing a Voice ID-specific privacy notice on its website. The ICO found it had not carried out an adequate data protection impact assessment prior to launching the system.

In October 2018 HMRC tweaked the automated options it offered to callers to provide clearer information about the system and their options.

That amended Voice ID system remains in operation. And in a letter to the ICO last week HMRC’s chief executive, Jon Thompson, defended it — claiming it is “popular with our customers, is a more secure way of protecting customer data, and enables us to get callers through to an adviser faster.”

As a result of the regulator’s investigation, HMRC retrospectively contacted around a fifth of the 7 million Brits whose data it had gathered to ask for consent. Of those it said more than 995,000 provided consent for the use of their biometric data and more than 260,000 withheld it.

When it comes to elections, Facebook moves slow, may still break things


This week, Facebook invited a small group of journalists — which didn’t include TechCrunch — to look at the “war room” it has set up in Dublin, Ireland, to help monitor its products for election-related content that violates its policies. (“Time and space constraints” limited the numbers, a spokesperson told us when we asked why we weren’t invited.)

Facebook announced it would be setting up this Dublin hub — which will bring together data scientists, researchers, legal and community team members, and others in the organization to tackle issues like fake news, hate speech and voter suppression — back in January. The company has said it has nearly 40 teams working on elections across its family of apps, without breaking out the number of staff it has dedicated to countering political disinformation. 

We have been told that there would be “no news items” during the closed tour — which, despite that, is “under embargo” until Sunday — beyond what Facebook and its executives discussed last Friday in a press conference about its European election preparations.

The tour looks to be a direct copy-paste of the one Facebook held to show off its US election “war room” last year, which it did invite us on. (In that case it was forced to claim it had not disbanded the room soon after heavily PR’ing its existence — saying the monitoring hub would be used again for future elections.)

We understand — via a non-Facebook source — that several broadcast journalists were among those invited to its Dublin “war room”. So expect to see a few gauzy inside views at the end of the weekend, as Facebook’s PR machine spins up a gear ahead of the vote to elect the next European Parliament later this month.

It’s clearly hoping shots of serious-looking Facebook employees crowded around banks of monitors will play well on camera and help influence public opinion that it’s delivering an even social media playing field for the EU parliament election. The European Commission is also keeping a close watch on how platforms handle political disinformation before a key vote.

But with the pan-EU elections set to start May 23, and a general election already held in Spain last month, we believe the lack of new developments to secure EU elections is very much to the company’s discredit.

The EU parliament elections are now a mere three weeks away, and there are a lot of unresolved questions and issues Facebook has yet to address. Yet we’re told the attending journalists were once again not allowed to put any questions to the fresh-faced Facebook employees staffing the “war room”.

Ahead of the looming batch of Sunday evening ‘war room tour’ news reports — which Facebook will be hoping contain its “five pillars of countering disinformation” talking points — we’ve compiled a rundown of some key concerns and complications flowing from the company’s still highly centralized oversight of political campaigning on its platform, even as it seeks to gloss over how much dubious stuff keeps falling through the cracks.

Worthwhile counterpoints to another highly managed Facebook “election security” PR tour.

No overview of political ads in most EU markets

Since political disinformation created an existential nightmare for Facebook’s ad business with the revelations of Kremlin-backed propaganda targeting the 2016 US presidential election, the company has vowed to deliver transparency — via the launch of a searchable political ad archive for ads running across its products.

The Facebook Ad Library now shines a narrow beam of light into the murky world of political advertising. Before this, each Facebook user could only see the propaganda targeted specifically at them. Now, such ads stick around in its searchable repository for seven years. This is a major step up on total obscurity. (Obscurity that Facebook isn’t wholly keen to lift the lid on, we should add; its political data releases to researchers so far haven’t gone back before 2017.)

However, in its current form, in the vast majority of markets, the Ad Library makes the user do all the leg work — running searches manually to try to understand and quantify how Facebook’s platform is being used to spread political messages intended to influence voters.

Facebook does also offer an Ad Library Report — a downloadable weekly summary of ads viewed and highest spending advertisers. But it only offers this in four countries globally right now: the US, India, Israel and the UK.

It has said it intends to ship an update to the reports in mid-May. But it’s not clear whether that will make them available in every EU country. (Mid-May would also be pretty late for elections that start May 23.)

So while the UK report makes clear that the new ‘Brexit Party’ is now a leading spender ahead of the EU election, what about the other 27 members of the bloc? Don’t they deserve an overview too?

A spokesperson we talked to about this week’s closed briefing said Facebook had no updates on expanding Ad Library Reports to more countries, in Europe or otherwise.

So, as it stands, the vast majority of EU citizens are missing out on meaningful reports that could help them understand which political advertisers are trying to reach them and how much they’re spending.

Which brings us to…

Facebook’s Ad Archive API is far too limited

In another positive step, Facebook has launched an API for the ad archive that developers and researchers can use to query the data. However, as we reported earlier this week, many respected researchers have voiced disappointment with what it’s offering so far — saying the rate-limited API is not nearly open or accessible enough to get a complete picture of all ads running on its platform.

Following this criticism, Facebook’s director of product, Rob Leathern, tweeted a response, saying the API would improve. “With a new undertaking, we’re committed to feedback & want to improve in a privacy-safe way,” he wrote.

The question is when will researchers have a fit-for-purpose tool to understand how political propaganda is flowing over Facebook’s platform? Apparently not in time for the EU elections, either: We asked about this on Thursday and were pointed to Leathern’s tweets as the only update.
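For context, here is a minimal sketch of what querying the archive programmatically involves — the ads_archive Graph API endpoint, parameter names and field names reflect Facebook’s public documentation as we understand it and may change, and an access token from an account that has completed Facebook’s identity confirmation is required:

```python
# Minimal Ad Library API query sketch (endpoint and field names per
# Facebook's public docs at the time of writing; subject to change).
import requests

AD_ARCHIVE_URL = "https://graph.facebook.com/v3.3/ads_archive"

def fetch_political_ads(access_token, country="GB", terms="election"):
    params = {
        "access_token": access_token,
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_reached_countries": "['%s']" % country,
        "ad_active_status": "ALL",   # include ads later pulled ("inactive")
        "search_terms": terms,
        "fields": "page_name,funding_entity,spend,impressions,"
                  "ad_delivery_start_time,ad_delivery_stop_time",
        "limit": 100,
    }
    ads, url = [], AD_ARCHIVE_URL
    while url:
        resp = requests.get(url, params=params).json()
        ads.extend(resp.get("data", []))
        # Follow the pre-built next-page URL; it already carries the query.
        url = resp.get("paging", {}).get("next")
        params = {}
    return ads
```

Even a simple script like this runs into the keyword-based search and rate limits researchers have criticized — there is no way to simply download every ad running in a country for bulk analysis.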

This issue is compounded by Facebook also restricting the ability of political transparency campaigners — such as the UK group WhoTargetsMe and US investigative journalism site ProPublica — to monitor ads via browser plug-ins, as the Guardian reported in January.

The net effect is that Facebook is making life hard for civil society groups and public interest researchers to study the flow of political messaging on its platform to try to quantify democratic impacts, and offering only a highly managed level of access to ad data that falls far short of the “political ads transparency” Facebook’s PR has been loudly trumpeting since 2017.

Ad loopholes remain ripe for exploiting

Facebook’s Ad Library includes data on political ads that were active on its platform but subsequently got pulled (made “inactive” in its parlance) because they broke its disclosure rules.

There are multiple examples of inactive ads for the Spanish far right party Vox visible in Facebook’s Ad Library that were pulled for running without the required disclaimer label, for example.

“After the ad started running, we determined that the ad was related to politics and issues of national importance and required the label. The ad was taken down,” runs the standard explainer Facebook offers if you click on the little ‘i’ next to an observation that “this ad ran without a disclaimer”.

What is not at all clear is how quickly Facebook acted to remove rule-breaking political ads.

It is possible to click on each individual ad to get some additional details. Here Facebook provides a per ad breakdown of impressions; genders, ages, and regional locations of the people who saw the ad; and how much was spent on it.

But all those clicks don’t scale. So it’s not possible to get an overview of how effectively Facebook is handling political ad rule breakers. Unless, well, you literally go in clicking and counting on each and every ad…

There is then also the wider question of whether a political advertiser that is found to be systematically breaking Facebook rules should be allowed to keep running ads on its platform.

Because if Facebook does allow that to happen there’s a pretty obvious (and massive) workaround for its disclosure rules: Bad faith political advertisers could simply keep submitting fresh ads after the last batch got taken down.

We were, for instance, able to find inactive Vox ads taken down for lacking a disclaimer that had still been able to rack up thousands — and even tens of thousands — of impressions in the time they were still active.

Facebook needs to be much clearer about how it handles systematic rule breakers.

Definition of political issue ads is still opaque

Facebook currently requires that all political advertisers in the EU go through its authorization process in the country where their ads are being delivered, if the ads relate to the European Parliamentary elections, as a step to try to prevent foreign interference.

This means it asks political advertisers to submit documents and runs technical checks to confirm their identity and location. Though it noted, on last week’s call, that it cannot guarantee this ID system cannot be circumvented. (As it was last year when UK journalists were able to successfully place ads paid for by ‘Cambridge Analytica’.)

One other big potential workaround is the question of what is a political ad? And what is an issue ad?

Facebook says these types of ads on Facebook and Instagram in the EU “must now be clearly labeled, including a paid-for-by disclosure from the advertiser at the top of the ad” — so users can see who is paying for the ads and, if there’s a business or organization behind it, their contact details, plus some disclosure about who, if anyone, saw the ads.

But the big question is how is Facebook defining political and issue ads across Europe?

While political ads might seem fairly easy to categorize — assuming they’re attached to registered political parties and candidates — issue ads are a whole lot more subjective.

Currently Facebook defines issue ads as those relating to “any national legislative issue of public importance in any place where the ad is being run.” It says it worked with EU barometer, YouGov and other third parties to develop an initial list of key issues — examples for Europe include immigration, civil and social rights, political values, security and foreign policy, the economy and environmental politics — that it will “refine… over time.”

Again specifics on when and how that will be refined are not clear. Yet ads that Facebook does not deem political/issue ads will slip right under its radar. They won’t be included in the Ad Library; they won’t be searchable; but they will be able to influence Facebook users under the perfect cover of its commercial ad platform — as before.

So if any maliciously minded propaganda slips through Facebook’s net, because the company decides it’s a non-political issue, it will once again leave no auditable trace.

In recent years the company has also had a habit of announcing major takedowns of what it badges “fake accounts” ahead of major votes. But again voters have to take it on trust that Facebook is getting those judgement calls right.

Facebook continues to bar pan-EU campaigns

On the flip side of weeding out non-transparent political propaganda and/or political disinformation, Facebook is currently blocking the free flow of legal pan-EU political campaigning on its platform.

This issue first came to light several weeks ago, when it emerged that European officials had written to Nick Clegg (Facebook’s vice president of global affairs) to point out that its current rules — i.e. that require those campaigning via Facebook ads to have a registered office in the country where the ad is running — run counter to the pan-European nature of this particular election.

It means EU institutions are in the strange position of not being able to run Facebook ads for their own pan-EU election everywhere across the region. “This runs counter to the nature of EU institutions. By definition, our constituency is multinational and our target audience are in all EU countries and beyond,” the EU’s most senior civil servants pointed out in a letter to the company last month.

This issue impacts not just EU institutions and organizations advocating for particular policies and candidates across EU borders, but even NGOs wanting to run vanilla “get out the vote” campaigns Europe-wide — leading a number of them to accuse Facebook of breaching their electoral rights and freedoms.

Facebook claimed last week that the ball is effectively in the regulators’ court on this issue — saying it’s open to making the changes but has to get their agreement to do so. A spokesperson confirmed to us that there is no update to that situation, either.

Of course the company may be trying to err on the side of caution, to prevent bad actors being able to interfere with the vote across Europe. But at what cost to democratic freedoms?

What about fake news spreading on WhatsApp?

Facebook’s ‘election security’ initiatives have focused on political and/or politically charged ads running across its products. But there’s no shortage of political disinformation flowing unchecked across its platforms as user uploaded ‘content’.

On the Facebook-owned messaging app WhatsApp, which is hugely popular in some European markets, the presence of end-to-end encryption further complicates this issue by providing a cloak for the spread of political propaganda that’s not being regulated by Facebook.

In a recent study of political messages spread via WhatsApp ahead of last month’s general election in Spain, the campaign group Avaaz dubbed it “social media’s dark web” — claiming the app had been “flooded with lies and hate”.

“Posts range from fake news about Prime Minister Pedro Sánchez signing a secret deal for Catalan independence to conspiracy theories about migrants receiving big cash payouts, propaganda against gay people and an endless flood of hateful, sexist, racist memes and outright lies,” it wrote.

Avaaz compiled this snapshot of politically charged messages and memes being shared on Spanish WhatsApp by co-opting 5,833 local members to forward election-related content that they deemed false, misleading or hateful.

It says it received a total of 2,461 submissions — which is of course just a tiny, tiny fraction of the stuff being shared in WhatsApp groups and chats. Which makes this app the elephant in Facebook’s election ‘war room’.

What exactly is a war room anyway?

Facebook has said its Dublin Elections Operation Center — to give it its official title — is “focused on the EU elections”, while also suggesting it will plug into a network of global teams “to better coordinate in real time across regions and with our headquarters in California [and] accelerate our rapid response times to fight bad actors and bad content”.

But we’re concerned Facebook is sending out mixed — and potentially misleading — messages about how its election-focused resources are being allocated.

Our (non-Facebook) source told us the 40-odd staffers in the Dublin hub during the press tour were simultaneously looking at the Indian elections. If that’s the case, it does not sound entirely “focused” on either the EU or India’s elections. 

Facebook’s eponymous platform has 2.375 billion monthly active users globally, with some 384 million MAUs in Europe — more users than in the US (243M MAUs). Europe is Facebook’s second-biggest market in terms of revenue, after the US: last quarter it pulled in $3.65BN in sales for Facebook (versus $7.3BN for the US) out of $15BN overall.

Apart from any moral or legal pressure on Facebook to run a more responsible platform when it comes to supporting democratic processes, these numbers underscore the business imperative it has to get this sorted out in Europe.

Having a “war room” may sound like a start, but unfortunately Facebook is presenting it as an end in itself. And its foot-dragging on all of the bigger issues that need tackling, in effect, means the war will continue to drag on.
