Timesdelhi.com

July 18, 2018
Category archive

General Data Protection Regulation

Pressure mounts on EU-US Privacy Shield after Facebook-Cambridge Analytica data scandal


Yet more pressure on the precariously placed EU-US Privacy Shield: The European Union parliament’s civil liberties committee has called for the data transfer arrangement to be suspended by September 1 unless the US comes into full compliance.

Though the committee has no power to suspend the arrangement itself, the vote amps up the political pressure on the EU’s executive body, the European Commission.

In a vote late yesterday the Libe committee agreed the mechanism as it is currently being applied does not provide adequate protection for EU citizens’ personal information — emphasizing the need for better monitoring in light of the recent Facebook-Cambridge Analytica scandal, after the company admitted in April that data on as many as 87 million users had been improperly passed to third parties in 2014 (including 2.7M EU citizens).

Facebook is one of the now 3,000+ organizations that have signed up to Privacy Shield to make it easier for them to shift EU users’ data to the US for processing.

The Cambridge Analytica scandal in fact pre-dates Privacy Shield — which was officially adopted in mid 2016, replacing the long-standing Safe Harbor arrangement (struck down by Europe’s top court in 2015, after a legal challenge successfully argued that US government mass surveillance practices were undermining EU citizens’ fundamental rights).

The EU also now has an updated data protection framework — the GDPR  — which came into full force on May 25, and further tightens privacy protections around EU data.

The Libe committee says it wants US authorities to act upon privacy scandals such as the Facebook-Cambridge Analytica debacle without delay — and, if needed, remove companies that have misused personal data from the Privacy Shield list. MEPs also want EU authorities to investigate such cases and suspend or ban data transfers under the Privacy Shield where appropriate.

Despite a string of privacy scandals — some very recent, and a fresh FTC probe — Facebook remains on the Privacy Shield list; along with SCL Elections, an affiliate of Cambridge Analytica, which has claimed to be closing its businesses down in light of press around the scandal, yet which is apparently still certified to take people’s data out of the EU and provide it with ‘adequate protection’, per the Privacy Shield list…

MEPs on the committee also expressed concern about the recent adoption in the US of the Clarifying Lawful Overseas Use of Data Act (Cloud Act), which grants the US and foreign police access to personal data across borders — with the committee pointing out that the US law could conflict with EU data protection laws.

In a statement, civil liberties committee chair and rapporteur Claude Moraes said: “While progress has been made to improve on the Safe Harbor agreement, the Privacy Shield in its current form does not provide the adequate level of protection required by EU data protection law and the EU Charter. It is therefore up to the US authorities to effectively follow the terms of the agreement and for the Commission to take measures to ensure that it will fully comply with the GDPR.”

The Privacy Shield was negotiated by the European Commission with US counterparts as a replacement for Safe Harbor, and is intended to offer ‘essentially equivalent’ data protections for EU citizens when their data is taken to the US — a country which does not of course have essentially equivalent privacy laws. So the aim is to try to bridge the gap between two distinct legal regimes.

However the viability of that endeavor has been in doubt since the start, with critics arguing that the core legal discrepancies have not gone away — and dubbing Privacy Shield as ‘lipstick on a pig’.

Also expressing concerns throughout the process of drafting the framework, and since: the EU’s influential WP29 group (now morphed into the European Data Protection Board), made up of representatives of Member States’ data protection agencies.

Its concerns have spanned both commercial elements of the framework and law enforcement/national security considerations. We’ve reached out to the EDPB for comment and will update this report with any response.

Following the adoption of Privacy Shield, the Commission has also expressed some public concerns, though the EU’s executive body has generally followed a ‘wait and see’ approach, coupled with attempts to use the mechanism to apply political pressure on US counterparts — using the moment of the Privacy Shield’s first annual review to push for reform of US surveillance law, for example.

Reform that did not come to pass, however. Quite the opposite. Hence the arrangement being in the pressing bind it is now, with the date of the second annual review fast approaching — and zero progress for the Commission to point to in trying to cushion Privacy Shield from criticism.

There’s still no permanent appointment for a Privacy Shield ombudsperson, as the framework requires. Another concern has been the lack of membership of the US Privacy and Civil Liberties Oversight Board — which remains moribund, with just a single member.

Threats to suspend the Privacy Shield arrangement if it’s judged to not be functioning as intended can only be credible if they are actually carried out.

Though the Commission will also want to avoid at all costs pulling the plug on a mechanism that more than 3,000 organizations are now using, and so which many businesses are relying on. So it’s most likely that it will again be left to Europe’s supreme court to strike any invalidating blow.

A Commission spokesman told us it is aware of the discussions in the European Parliament on a draft resolution on the EU-U.S. Privacy Shield, but he emphasized its approach of engaging with US counterparts to improve the arrangement.

“The Commission’s position is clear and laid out in the first annual review report. The first review showed that the Privacy Shield works well, but there is some room for improving its implementation,” he told TechCrunch.

“The Commission is working with the US administration and expects them to address the EU concerns. Commissioner Jourová was in the U.S. last time in March to engage with the U.S. government on the follow-up and discussed what the U.S. side should do until the next annual review in autumn.

“Commissioner Jourová also sent letters to US State Secretary Pompeo, Commerce Secretary Ross and Attorney General Sessions urging them to do the necessary improvements, including on the Ombudsman, as soon as possible.

“We will continue to work to keep the Privacy Shield running and ensure Europeans’ data are well protected. Over 3000 companies are using it currently.”

While the Commission spokesman didn’t mention it, Privacy Shield is now facing several legal challenges.

Including, specifically, a series of legal questions pertaining to its adequacy which have been referred to the CJEU by Ireland’s High Court as a result of a separate privacy challenge to a different EU data transfer mechanism that’s also used by organizations to authorize data flows.

And judging by how quickly the CJEU has handled similar questions, the arrangement could have as little as one more year’s operating grace before a decision is handed down that invalidates it.

Should the Commission choose to act itself, the second annual review of the mechanism is due to take place in September — and indeed the Libe committee is pushing for a suspension by September 1 if there’s no progress on reforms within the US.

The EU parliament as a whole is also due to vote on the committee’s text on Privacy Shield next month, which — if they back the Libe position — would place further pressure on the EC to act. Though only a legal decision invalidating the arrangement can compel action.

News Source = techcrunch.com

To truly protect citizens, lawmakers need to restructure their regulatory oversight of big tech


If members of the European Parliament thought they could bring Mark Zuckerberg to heel with his recent appearance, they underestimated the enormous gulf between 21st century companies and their last-century regulators.

Zuckerberg himself reiterated that regulation is necessary, provided it is the “right regulation.”

But anyone who thinks that our existing regulatory tools can rein in our digital behemoths is engaging in magical thinking. Getting to “right regulation” will require us to think very differently.

The challenge goes far beyond Facebook and other social media: the use and abuse of data is going to be the defining feature of just about every company on the planet as we enter the age of machine learning and autonomous systems.

So far, Europe has taken a much more aggressive regulatory approach than anything the US was contemplating before or since Zuckerberg’s testimony.

The European Union’s General Data Protection Regulation (GDPR) is now in force, which extends data privacy rights to all European citizens regardless of whether their data is processed by companies within the EU or beyond.

But I’m not holding my breath that the GDPR will get us very far on the massive regulatory challenge we face. It is just more of the same when it comes to regulation in the modern economy: a lot of ambiguous costly-to-interpret words and procedures on paper that are outmatched by rapidly evolving digital global technologies.

Crucially, the GDPR still relies heavily on the outmoded technology of user choice and consent, the main result of which has seen almost everyone in Europe (and beyond) inundated with emails asking them to reconfirm permission to keep their data. But this is an illusion of choice, just as it is when we are ostensibly given the option to decide whether to agree to terms set by large corporations in standardized take-it-or-leave-it click-to-agree documents.  

There’s also the problem of actually tracking whether companies are complying. It is likely that the regulation of online activity requires yet more technology, such as blockchain and AI-powered monitoring systems, to track data usage and implement smart contract terms.

As the EU has already discovered with the right to be forgotten, however, governments lack the technological resources needed to enforce these rights. Search engines are required to serve as their own judge and jury in the first instance; Google at last count was handling some 500 such requests a day.

The fundamental challenge we face, here and throughout the modern economy, is not: “what should the rules for Facebook be?” but rather, “how can we innovate new ways to regulate effectively in the global digital age?”

The answer is that we need to find ways to harness the same ingenuity and drive that built Facebook to build the regulatory systems of the digital age. One way to do this is with what I call “super-regulation” which involves developing a market for licensed private regulators that serve two masters: achieving regulatory targets set by governments but also facing the market incentive to compete for business by innovating more cost-effective ways to do that.  

Imagine, for example, if instead of drafting a detailed 261-page law like the EU did, a government instead settled on the principles of data protection, based on core values, such as privacy and user control.

Private entities, profit and non-profit, could apply to a government oversight agency for a license to provide data regulatory services to companies like Facebook, showing that their regulatory approach is effective in achieving these legislative principles.  

These private regulators might use technology, big-data analysis, and machine learning to do that. They might also figure out how to communicate simple options to people, in the same way that the developers of our smartphones figured that out. They might develop effective schemes to audit and test whether their systems are working — on pain of losing their license to regulate.

There could be many such regulators among which both consumers and Facebook could choose: some could even specialize in offering packages of data management attributes that would appeal to certain demographics – from the people who want to be invisible online, to those who want their every move documented on social media.

The key here is competition: for-profit and non-profit private regulators compete to attract money and brains to the problem of how to regulate complex systems like data creation and processing.

Zuckerberg thinks there’s some kind of “right” regulation possible for the digital world. I believe him; I just don’t think governments alone can invent it. Ideally, some next generation college kid would be staying up late trying to invent it in his or her dorm room.

The challenge we face is not how to get governments to write better laws; it’s how to get them to create the right conditions for the continued innovation necessary for new and effective regulatory systems.

News Source = techcrunch.com

Uber to stop storing precise location pick-ups/drop-offs in driver logs


Uber is planning to tweak the historical pick-up and drop-off logs that drivers can see in order to slightly obscure the exact location, rather than planting an exact pin in it (as now). The idea is to provide a modicum more privacy for users while still providing drivers with what look set to remain highly detailed trip logs.

The company told Gizmodo it will initially pilot the change with drivers, but intends the privacy-focused feature to become the default setting “in the coming months”.

Earlier this month Uber also announced a complete redesign of the drivers’ app — making changes it said had been informed by “months” of driver conversations and feedback. It says the pilot of location obfuscation will begin once all drivers have the new app.

The ride-hailing giant appears to be trying to find a compromise between rider safety concerns — there have been reports of Uber drivers stalking riders, for example — and drivers wanting to have precise logs so they can challenge fare disputes.

“Location data is our most sensitive information, and we are doing everything we can do to protect privacy around it,” a spokesperson told us. “The new design provides enough information for drivers to identify past trips for customer support issues or earning disputes without granting them ongoing access to rider addresses.”

In the current version of the pilot — according to screenshots obtained by Gizmodo — the location of the pin has been expanded into a circle, so it’s indicating a shaded area a few meters around a pick-up or drop-off location.
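The pin-to-circle change described above amounts to reporting an approximate point rather than the exact coordinates. As a rough illustration only — Uber has not published how its obfuscation actually works — one common technique is to displace the reported point randomly within a small radius, so repeated views show an area rather than an address. All names and the radius below are assumptions for the sketch:

```python
# Hypothetical sketch of radius-based location obfuscation; NOT Uber's
# actual algorithm. Displaces a lat/lng point uniformly within radius_m.
import math
import random

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def obfuscate(lat, lng, radius_m=30.0, rng=None):
    """Return a point uniformly sampled within radius_m of (lat, lng)."""
    rng = rng or random.Random()
    # sqrt keeps the sample uniform over the disc's area,
    # rather than clustered near the centre
    r = radius_m * math.sqrt(rng.random())
    theta = rng.uniform(0, 2 * math.pi)
    # convert the metre offsets to degree offsets at this latitude
    dlat = (r * math.cos(theta)) / EARTH_RADIUS_M
    dlng = (r * math.sin(theta)) / (EARTH_RADIUS_M * math.cos(math.radians(lat)))
    return lat + math.degrees(dlat), lng + math.degrees(dlng)
```

A real deployment would also have to consider that averaging many obfuscated samples of the same address can re-reveal it, which is one reason snapping to a nearby road segment is sometimes preferred over random jitter.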

According to Uber the design may still change, as it said it intends to gather driver feedback. We’ve asked if it’s also intending to gather rider feedback on the design.

Asked whether it’s making the change as part of an FTC settlement last year — which followed an investigation into data mishandling, privacy and security complaints dating back to 2014 and 2015 — an Uber spokesman told us: “Not specifically, but user expectations are shifting and we are working to build privacy into the DNA of our products.”

Earlier this month the company agreed to a revised settlement with the FTC, including agreeing that it may be subject to civil penalties if it fails to notify the FTC of future privacy breaches — likely in light of the 2016 data breach affecting 57 million riders and drivers which the company concealed until 2017.

An incoming update to European privacy rules (called GDPR) — which beefs up fines for violations and applies extraterritorially (including, for example, if an EU citizen is using the Uber app on a trip to the U.S.) — also tightens the screw on data protection, giving individuals expanded rights to control their personal information held by a company.

A precise location log would likely be considered personal data that Uber would have to provide to any users requesting their information under GDPR, for example.

Although it’s less clear whether the relatively small amount of obfuscation it’s toying with here would be enough to ensure the location logs are no longer judged as riders’ personal data under the regulation.

Last year the company also ended a controversial feature in which its app had tracked the location of users even after their trip had ended.

News Source = techcrunch.com

Facebook face recognition error looks awkward ahead of GDPR


A Facebook face recognition notification slip-up hints at how risky the company’s approach to compliance with a tough new European data protection standard could turn out to be.

On Friday a Metro journalist in the UK reported receiving a notification about the company’s face recognition technology — which told him “the setting is on”.

The wording was curious as the technology has been switched off in Europe since 2012, after regulatory pressure, and — as part of changes related to its GDPR compliance strategy — Facebook has also said it will be asking European users to choose individually whether or not they want to switch it on. (And on Friday it began rolling out its new consent flow in the region, ahead of the regulation applying next month.)

The company has since confirmed to us that the message was sent to the user in error — saying the wording came from an earlier notification which it sent to users who already had its facial recognition tech enabled, starting in December. And that it had intended to send the person a similar notification with the opposite wording, i.e. that “the setting is off”.

“We’re asking everyone in the EU whether they want to enable face recognition, and only people who affirmatively give their consent will have these features enabled. We did not intend for anyone in the EU to see this type of message, and we can confirm that this error did not result in face recognition being enabled without the person’s consent,” a Facebook spokesperson told us.

Here are the two notifications in question — showing the setting on vs the setting off wordings:

This is interesting because Facebook has repeatedly refused to confirm it will be universally applying GDPR compliance measures across its entire global user-base.

Instead it has restricted its public commitments to saying the same “settings and controls” will be made available for users — which as we’ve previously pointed out avoids committing the company to a universal application of GDPR principles, such as privacy by design.

Given that Facebook’s facial recognition feature has been switched off in Europe since 2012, the “setting is on” message would presumably have only been sent to users in the US or Canada — where Facebook has been able to forge ahead with pushing people to accept the controversial, privacy-hostile technology, embedding it into features such as auto-tagging for photo uploads.

But it hardly bodes well for Facebook’s compliance with the EU’s strict new data protection standard if its systems are getting confused about whether or not a user is an EU person.

Facebook claims no data was processed without consent as a result of the wrong notification being sent — but under GDPR it could face investigations by data protection authorities seeking to verify whether or not an individual’s rights were violated. (Reminder: GDPR fines can scale as high as 4% of a company’s global annual turnover so privacy enforcement is at last getting teeth.)
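The fine ceiling mentioned above comes from GDPR Article 83(5): the upper tier is capped at the greater of EUR 20 million or 4% of total worldwide annual turnover for the preceding financial year. A minimal sketch of that arithmetic (the turnover figures are illustrative, not any company's actual accounts):

```python
# GDPR Article 83(5) upper-tier fine cap: whichever is higher of
# EUR 20 million or 4% of total worldwide annual turnover.

def max_gdpr_fine(annual_turnover_eur: float) -> float:
    """Return the theoretical maximum upper-tier GDPR fine in EUR."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# For a hypothetical company with EUR 40bn global annual turnover,
# the 4% branch dominates (EUR 1.6bn); for a small firm with EUR 100M
# turnover, the EUR 20M floor applies instead.
print(max_gdpr_fine(40e9))
print(max_gdpr_fine(100e6))
```

Actual fines are set case by case by supervisory authorities, with the cap as the maximum, so this only illustrates the worst-case exposure.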

Facebook’s appetite for continuing to push privacy hostile features on its user-base is clear. This strategic direction also comes from the very top of the company.

Earlier this month CEO and founder Mark Zuckerberg urged US lawmakers not to impede US companies from using people’s data for sensitive use-cases like facial recognition — attempting to gloss that tough sell by claiming pro-privacy rules would risk the US falling behind China.

Meanwhile, last week it also emerged that Zuckerberg’s company will switch the location where most international users’ data is processed from its international HQ, Facebook Ireland, to Facebook USA. From next month only EU users will have their data controller located in the EU — other international users, who would have at least technically fallen under GDPR’s reach otherwise, on account of their data being processed in the region, are being shifted out of the EU jurisdiction — via a unilateral T&Cs change.

This move seems intended to try to shrink some of Facebook’s legal liabilities by reducing the number of international users that would, at least technically, fall under the reach of the EU regulation — which both applies to anyone in the EU whose data is being processed and also extends EU fundamental rights extraterritorially, carrying the aforementioned major penalties for violations.

However Facebook’s decision to reduce how many of its users have their data processed in the EU also looks set to raise the stakes — if, as it appears, the company intends to exploit the lack of a comprehensive privacy framework in the US to apply different standards for North American users (and from next month also for non-EU international users, whose data will be processed there).

The problem is, if Facebook does not perform perfect segregation and management of these two separate pools of users it risks accidentally processing the personal data of Europeans in violation of the strict new EU standard, which applies from May 25.

Yet here it is, on the cusp of the new rules, sending the wrong notification and incorrectly telling an EU user that facial recognition is on.

Given how much risk it’s creating for itself by trying to run double standards for data protection you almost have to wonder whether Facebook is trying to engineer in some compliance wiggle room for itself — i.e. by positioning itself to be able to claim that such and such’s data was processed in error.

Another interesting question is whether the unilateral switching of ~1.5BN non-EU international users to Facebook USA as data controller could be interpreted as a data transfer to a third country — which would trigger other data protection requirements under EU law, and further layer on the legal complexity…

What is clear is that legal challenges to Facebook’s self-serving interpretation of EU law are coming.

News Source = techcrunch.com

Data experts on Facebook’s GDPR changes: Expect lawsuits


Make no mistake: Fresh battle lines are being drawn in the clash between data-mining tech giants and Internet users over people’s right to control their personal information and protect their privacy.

An update to European Union data protection rules next month — called the General Data Protection Regulation — is the catalyst for this next chapter in the global story of tech vs privacy.

A fairytale ending would remove that ugly ‘vs’ and replace it with an enlightened ‘+’. But there’s no doubt it will be a battle to get there — requiring legal challenges and fresh case law to be set down — as an old guard of dominant tech platforms marshal their extensive resources to try to hold onto the power and wealth gained through years of riding roughshod over data protection law.

Payback is coming though. Balance is being reset. And the implications of not regulating what tech giants can do with people’s data has arguably never been clearer.

The exciting opportunity for startups is to skate to where the puck is going — by thinking beyond exploitative legacy business models that amount to embarrassing blackboxes whose CEOs dare not publicly admit what the systems really do — and come up with new ways of operating and monetizing services that don’t rely on selling the lie that people don’t care about privacy.


More than just small print

Right now the EU’s General Data Protection Regulation can take credit for a whole lot of spilt ink as tech industry small print is reworded en masse. Did you just receive a T&C update notification about a company’s digital service? Chances are it’s related to the incoming standard.

The regulation is generally intended to strengthen Internet users’ control over their personal information, as we’ve explained before. But its focus on transparency — making sure people know how and why data will flow if they choose to click ‘I agree’ — combined with supersized fines for major data violations represents something of an existential threat to ad tech processes that rely on pervasive background harvesting of users’ personal data to be siphoned as biofuel for their vast, proprietary microtargeting engines.

This is why Facebook is not going gentle into a data processing goodnight.

Indeed, it’s seizing on GDPR as a PR opportunity — shamelessly stamping its brand on the regulatory changes it lobbied so hard against, including by taking out full page print ads in newspapers…

This is of course another high gloss plank in the company’s PR strategy to try to convince users to trust it — and thus to keep giving it their data. Because — and only because — GDPR gives consumers more opportunity to lock down access to their information and close the shutters against countless prying eyes.

But the pressing question for Facebook — and one that will also test the mettle of the new data protection standard — is whether or not the company is doing enough to comply with the new rules.

One important point re: Facebook and GDPR is that the standard applies globally, i.e. for all Facebook users whose data is processed by its international entity, Facebook Ireland (and thus within the EU); but not necessarily universally — with Facebook users in North America not legally falling under the scope of the regulation.

Users in North America will only benefit if Facebook chooses to apply the same standard everywhere. (And on that point the company has stayed exceedingly fuzzy.)

It has claimed it won’t give US and Canadian users second tier status vs the rest of the world where their privacy is concerned — saying they’re getting the same “settings and controls” — but unless or until US lawmakers spill some ink of their own there’s nothing but an embarrassing PR message to regulate what Facebook chooses to do with Americans’ data. It’s the data protection principles, stupid.

Zuckerberg was asked by US lawmakers last week what kind of regulation he would and wouldn’t like to see laid upon Internet companies — and he made a point of arguing for privacy carve outs to avoid falling behind, of all things, competitors in China.

Which is an incredibly chilling response when you consider how few rights — including human rights — Chinese citizens have. And how data-mining digital technologies are being systematically used to expand Chinese state surveillance and control.

The ugly underlying truth of Facebook’s business is that it also relies on surveillance to function. People’s lives are its product.

That’s why Zuckerberg couldn’t tell US lawmakers to hurry up and draft their own GDPR. He’s the CEO saddled with trying to sell an anti-privacy, anti-transparency position — just as policymakers are waking up to what that really means.


Plus ça change?

Facebook has announced a series of updates to its policies and platform in recent months, which it’s said are coming to all users (albeit in ‘phases’). The problem is that most of what it’s proposing to achieve GDPR compliance is simply not adequate.

Coincidentally many of these changes have been announced amid a major data mishandling scandal for Facebook, in which it’s been revealed that data on up to 87M users was passed to a political consultancy without their knowledge or consent.

It’s this scandal that led Zuckerberg to be perched on a booster cushion in full public view for two days last week, dodging awkward questions from US lawmakers about how his advertising business functions.

He could not tell Congress there wouldn’t be other such data misuse skeletons in its closet. Indeed the company has said it expects it will uncover additional leaks as it conducts a historical audit of apps on its platform that had access to “a large amount of data”. (How large is large, one wonders… )

But whether Facebook’s business having enabled — in just one example — the clandestine psychological profiling of millions of Americans for political campaign purposes ends up being the final, final straw that catalyzes US lawmakers to agree their own version of GDPR is still tbc.

Any new law will certainly take time to formulate and pass. In the meanwhile GDPR is it.

The most substantive GDPR-related change announced by Facebook to date is the shuttering of a feature called Partner Categories — in which it allowed the linking of its own information holdings on people with data held by external brokers, including (for example) information about people’s offline activities.

Evidently finding a way to close down the legal liabilities and/or engineer consent from users to that degree of murky privacy intrusion — involving pools of aggregated personal data gathered by goodness knows who, how, where or when — was a bridge too far for the company’s army of legal and policy staffers.

Other notable changes it has so far made public include consolidating settings onto a single screen vs the confusing nightmare Facebook has historically required users to navigate just to control what’s going on with their data (remember the company got a 2011 FTC sanction for “deceptive” privacy practices); rewording its T&Cs to make it more clear what information it’s collecting for what specific purpose; and — most recently — revealing a new consent review process whereby it will be asking all users (starting with EU users) whether they consent to specific uses of their data (such as processing for facial recognition purposes).

As my TC colleague Josh Constine wrote earlier in a critical post dissecting the flaws of Facebook’s approach to consent review, the company is — at very least — not complying with the spirit of the GDPR.

Indeed, Facebook appears pathologically incapable of abandoning its long-standing modus operandi of socially engineering consent from users (doubtless fed via its own self-reinforced A/B testing ad expertise). “It feels obviously designed to get users to breeze through it by offering no resistance to continue, but friction if you want to make changes,” was his summary of the process.

But, as we’ve pointed out before, concealment is not consent.

To get into a few specifics, pre-ticked boxes — which is essentially what Facebook is deploying here, with a big blue “accept and continue” button designed to grab your attention as it’s juxtaposed against an anemic “manage data settings” option (which if you even manage to see it and read it sounds like a lot of tedious hard work) — aren’t going to constitute valid consent under GDPR.

Nor is this what ‘privacy by default’ looks like — another staple principle of the regulation. On the contrary, Facebook is pushing people to do the opposite: Give it more of their personal information — and fuzzing why it’s asking by bundling a range of usage intentions.

The company is risking a lot here.

In simple terms, seeking consent from users in a way that’s not fair because it’s manipulative means consent is not being freely given. Under GDPR, it won’t be consent at all. So Facebook appears to be seeing how close to the wind it can fly to test how regulators will respond.

Safe to say, EU lawmakers and NGOs are watching.

 

“Yes, they will be taken to court”

“Consent should not be regarded as freely given if the data subject has no genuine or free choice or is unable to refuse or withdraw consent without detriment,” runs one key portion of GDPR.

Now compare that with: “People can choose to not be on Facebook if they want” — which was Facebook’s deputy chief privacy officer, Rob Sherman’s, paper-thin defense to reporters for the lack of an overall opt out for users to its targeted advertising.

Data protection experts who TechCrunch spoke to suggest Facebook is failing to comply with, not just the spirit, but the letter of the law here. Some were exceedingly blunt on this point.

“I am less impressed,” said law professor Mireille Hildebrandt discussing how Facebook is railroading users into consenting to its targeted advertising. “It seems they have announced that they will still require consent for targeted advertising and refuse the service if one does not agree. This violates [GDPR] art. 7.4 jo recital 43. So, yes, they will be taken to court.”

“Zuckerberg appears to view the combination of signing up to T&Cs and setting privacy options as ‘consent’,” adds cyber security professor Eerke Boiten. “I doubt this is explicit or granular enough for the personal data processing that FB do. The default settings for the privacy settings certainly do not currently provide for ‘privacy by default’ (GDPR Art 25).

“I also doubt whether FB Custom Audiences work correctly with consent. FB finds out and retains a small bit of personal info through this process (that an email address they know is known to an advertiser), and they aim to shift the data protection legal justification on that to the advertisers. Do they really then not use this info for future profiling?”

That looming tweak to the legal justification of Facebook’s Custom Audiences feature — a product which lets advertisers upload contact lists in a hashed form to find any matches among its own user-base (so those people can be targeted with ads on Facebook’s platform) — also looks problematic.
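For readers unfamiliar with the mechanics, the matching works roughly as follows: the advertiser normalizes and hashes each email address before upload, and the platform compares those hashes against hashes of its own users’ addresses. The sketch below illustrates the idea with hypothetical data; the normalize-then-SHA-256 scheme reflects Facebook’s publicly documented upload format, though the helper names are my own.

```python
import hashlib

def normalize_and_hash(email: str) -> str:
    """Trim, lowercase, then SHA-256 hash an email address — the
    normalization-plus-hashing scheme described in Facebook's
    Custom Audiences upload documentation."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# Hypothetical advertiser contact list and platform user base
advertiser_upload = {normalize_and_hash(e) for e in
                     ["Alice@example.com", "bob@example.com"]}
platform_users = {normalize_and_hash(e) for e in
                  ["alice@example.com", "carol@example.com"]}

# The intersection is the "small bit of personal info" Boiten flags:
# the platform learns that an address it already knows is also known
# to this particular advertiser.
matches = advertiser_upload & platform_users
```

Note that hashing here is a transport-level protection, not anonymization: the platform still ends up knowing exactly which of its users appear on which advertiser’s list, which is the retained fact Boiten queries above.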

Here the company seems to be intending to claim a change in the legal basis, pushed out via new terms in which it instructs advertisers to agree that they are the data controller (and it is merely a data processor) — thereby seeking to foist a greater share of the responsibility for obtaining consent to process user data onto its customers.

However such legal determinations are simply not a matter of contract terms. They are based on the fact of who is making decisions about how data is processed. And in this case — as other experts have pointed out — Facebook would be classed as a joint controller with any advertisers that upload personal data. The company can’t use a T&Cs change to opt out of that.

Wishful thinking is not a reliable approach to legal compliance.

 

Fear and manipulation of highly sensitive data

Over many years of privacy-hostile operation, Facebook has shown it has a major appetite for even very sensitive data. And GDPR does not appear to have blunted that.

Let’s not forget, facial recognition was a platform feature that got turned off in the EU, thanks to regulatory intervention. Yet here Facebook is now trying to use GDPR as a route to process this sensitive biometric data for international users after all — by pushing individual users to consent to it by dangling a few ‘feature perks’ at the moment of consent.

Veteran data protection and privacy consultant, Pat Walshe, is unimpressed.

“The sensitive data tool appears to be another data grab,” he tells us, reviewing Facebook’s latest clutch of ‘GDPR changes’. “Note the subtlety. It merges ‘control of sharing’ such data with FB’s use of the data “to personalise features and products”. From the info available that isn’t sufficient to amount to consent for such sensitive data and nor is it clear folks can understand the broader implications of agreeing.

“Does it mean ads will appear in Instagram? WhatsApp etc? The default is also set to ‘accept’ rather than ‘review and consider’. This is really sensitive data we’re talking about.”

“The face recognition suggestions are woeful,” he continues. “The second image — is using an example… to manipulate and stoke fear — “we can’t protect you”.

“Also, the choices and defaults are not compatible with [GDPR] Article 25 on data protection by design and default nor Recital 32… If I say no to facial recognition it’s unclear if other users can continue to tag me.”

Of course it goes without saying that Facebook users will keep uploading group photos, not just selfies. What’s less clear is whether Facebook will be processing the faces of other people in those shots who have not given (and/or never even had the opportunity to give) consent to its facial recognition feature.

People who might not even be users of its product.

But if it does that it will be breaking the law. Yet Facebook does indeed profile non-users — despite Zuckerberg’s claims to Congress not to know about its shadow profiles. So the risk is clear.

It can’t give non-users “settings and controls” not to have their data processed. So it’s already compromised their privacy — because it never gained consent in the first place.

New Mexico Representative Ben Lujan made this point to Zuckerberg’s face last week and ended the exchange with a call to action: “So you’re directing people that don’t even have a Facebook page to sign up for a Facebook page to access their data… We’ve got to change that.”

[Image: Facebook co-founder, chairman and CEO Mark Zuckerberg prepares to testify before the House Energy and Commerce Committee on Capitol Hill, April 11, 2018 — his second day of testimony before Congress after it was reported that 87 million Facebook users had their personal information harvested by Cambridge Analytica. Photo: Chip Somodevilla/Getty Images]

But nothing in the measures Facebook has revealed so far, as its ‘compliance response’ to GDPR, suggest it intends to pro-actively change that.

Walshe also critically flags how — again, at the point of consent — Facebook’s review process deploys examples of the social aspects of its platform (such as how it can use people’s information to “suggest groups or other features or products”) as a tactic for manipulating people to agree to share religious affiliation data, for example.

“The social aspect is not separate to but bound up in advertising,” he notes, adding that the language also suggests Facebook uses the data.

Again, this whiffs a whole lot more than it smells like GDPR compliance.

“I don’t believe FB has done enough,” adds Walshe, giving a view on Facebook’s GDPR preparedness ahead of the May 25 deadline for the framework’s application — as Zuckerberg’s Congress briefing notes suggested the company itself believes it has. (Or maybe it just didn’t want to admit to Congress that U.S. Facebook users will get lower privacy standards vs users elsewhere.)

“In fact I know they have not done enough. Their business model is skewed against privacy — privacy gets in the way of advertising and so profit. That’s why Facebook has variously suggested people may have to pay if they want an ad free model & so ‘pay for privacy’.”

“On transparency, there is a long way to go,” adds Boiten. “Friend suggestions, profiling for advertising, use of data gathered from like buttons and web pixels (also completely missing from “all your Facebook data”), and the newsfeed algorithm itself are completely opaque.”

“What matters most is whether FB’s processing decisions will be GDPR compliant, not what exact controls are given to FB members,” he concludes.

US lawmakers also pressed Zuckerberg on how much of the information his company harvests on people who have a Facebook account is revealed to them when they ask for it — via its ‘Download your data’ tool.

His answers on this appeared to intentionally misconstrue what was being asked — presumably in a bid to mask the ugly reality of the true scope and depth of the surveillance apparatus he commands. (Sometimes with a few special ‘CEO privacy privileges’ thrown in — like being able to selectively retract just his own historical Facebook messages from conversations, ahead of bringing the feature to anyone else.)

‘Download your Data’ is clearly partial and self-serving — and thus it also looks very far from being GDPR compliant.

 

Not even half the story

Facebook is not even complying with the spirit of current EU data protection law on data downloads. Subject Access Requests give individuals the right to request not just the information they have voluntarily uploaded to a service, but also the personal data a company holds about them — including a description of that personal data, the reasons it is being processed, and whether it will be given to any other organizations or people.

Facebook not only does not include people’s browsing history in the info it provides when you ask to download your data — which, incidentally, its own cookies policy confirms it tracks (via things like social plug-ins and tracking pixels embedded on millions of popular websites) — it also does not include a complete list of advertisers on its platform that have your information.
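To see why browsing history ends up in Facebook’s hands at all, it helps to sketch how a tracking pixel works: the publisher’s page embeds a tiny image served from the platform’s domain, and the browser’s request for that image carries both the page URL and any platform cookies — linking the off-site visit to a user (or shadow) profile. The endpoint and parameter names below mirror the publicly visible Facebook Pixel request format, but treat the specifics as an illustrative assumption rather than a definitive spec.

```python
from urllib.parse import urlencode

def pixel_url(pixel_id: str, page_url: str, event: str = "PageView") -> str:
    """Build the 1x1-image URL a publisher page embeds. When the browser
    loads it, the query string reports the page URL ('dl') and event type
    ('ev'), and the request automatically carries any facebook.com cookies,
    tying the visit to a profile."""
    params = urlencode({"id": pixel_id, "ev": event, "dl": page_url})
    return f"https://www.facebook.com/tr?{params}"

# Hypothetical pixel ID and publisher page, for illustration only
url = pixel_url("1234567890", "https://news.example.com/article")
```

The key point for the data-download debate: every such request is a record of a page view held server-side by the platform, which is precisely the category of data absent from the ‘Download your data’ archive.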

Instead, after a wait, it serves up an eight-week snapshot. But even this two month view can still stretch to hundreds of advertisers per individual.

If Facebook gave users a comprehensive list of the advertisers with access to their information, the number of third-party companies would clearly stretch into the thousands. (In some cases thousands might even be a conservative estimate.)

There’s plenty of other information harvested from users that Facebook also intentionally fails to divulge via ‘Download your data’. And — to be clear — this isn’t a new problem either. The company has a very long history of blocking this type of request.

In the EU it currently invokes an exception in Irish law to circumvent fuller compliance — which, even setting GDPR aside, raises some interesting competition law questions, as Paul-Olivier Dehaye told the UK parliament last month.

“‘All your Facebook data’ isn’t a complete solution,” agrees Boiten. “It misses the info Facebook uses for auto-completing searches; it misses much of the information they use for suggesting friends; and I find it hard to believe that it contains the full profiling information.”

“‘Ads Topics’ looks rather random and undigested, and doesn’t include the clear categories available to advertisers,” he further notes.

Facebook wouldn’t comment publicly about this when we asked. But it maintains its approach towards data downloads is GDPR compliant — and says it has reviewed what it offers with regulators to get feedback.

Earlier this week it also put out a wordy blog post attempting to defuse this line of attack by pointing the finger of blame at the rest of the tech industry — saying, essentially, that a whole bunch of other tech giants are at it too.

Which is not much of a moral defense even if the company believes its lawyers can sway judges with it. (Ultimately I wouldn’t fancy its chances; the EU’s top court has a robust record of defending fundamental rights.)

 

Think of the children…

What its blog post didn’t say — yet again — was anything about how all the non-users it nonetheless tracks around the web are able to have any kind of control over its surveillance of them.

And remember, some Facebook non-users will be children.

So yes, Facebook is inevitably tracking kids’ data without parental consent. Under GDPR that’s a major no-no.

TC’s Constine had a scathing assessment of even the on-platform system that Facebook has devised in response to GDPR’s requirements on parental consent for processing the data of users who are between the ages of 13 and 15.

“Users merely select one of their Facebook friends or enter an email address, and that person is asked to give consent for their ‘child’ to share sensitive info,” he observed. “But Facebook blindly trusts that they’ve actually selected their parent or guardian… [Facebook’s] Sherman says Facebook is “not seeking to collect additional information” to verify parental consent, so it seems Facebook is happy to let teens easily bypass the checkup.”

So again, the company is being shown doing the minimum possible — in what might be construed as a cynical attempt to check another compliance box and carry on its data-sucking business as usual.

Given that intransigence it really will be up to the courts to bring the enforcement stick. Change, as ever, is a process — and hard won.

Hildebrandt is at least hopeful that a genuine reworking of Internet business models is on the way, though — albeit not overnight. And not without a fight.

“In the coming years the landscape of all this silly microtargeting will change, business models will be reinvented and this may benefit both the advertisers, consumers and citizens,” she tells us. “It will hopefully stave off the current market failure and the uprooting of democratic processes… Though nobody can predict the future, it will require hard work.”

News Source = techcrunch.com
