
General Data Protection Regulation

Keeping artificial intelligence accountable to humans


As a teenager in Nigeria, I tried to build an artificial intelligence system. I was inspired by the same dream that motivated the pioneers in the field: That we could create an intelligence of pure logic and objectivity that would free humanity from human error and human foibles.

I was working with weak computer systems and intermittent electricity, and needless to say my AI project failed. Eighteen years later—as an engineer researching artificial intelligence, privacy and machine-learning algorithms—I’m seeing that, so far, the promise that AI can free us from subjectivity or bias has also proven disappointing. We are creating intelligence in our own image. And that’s not a compliment.

Researchers have known for a while that purportedly neutral algorithms can mirror or even accentuate racial, gender and other biases lurking in the data they are fed. Internet searches on names more often identified as belonging to black people were found to prompt search engines to generate ads for bail bondsmen. Job-search algorithms were more likely to suggest higher-paying jobs to male searchers than to female ones. Algorithms used in criminal justice also displayed bias.

Five years after those findings, expunging algorithmic bias is turning out to be a tough problem. It takes careful work to comb through millions of sub-decisions to figure out why an algorithm reached the conclusion it did. And even when that is possible, it is not always clear which sub-decisions are the culprits.

Yet applications of these powerful technologies are advancing faster than the flaws can be addressed.

Recent research underscores this machine bias, showing that commercial facial-recognition systems excel at identifying light-skinned males, with an error rate of less than 1 percent. But if you’re a dark-skinned female, the chance you’ll be misidentified rises to almost 35 percent.

AI systems are often only as intelligent—and as fair—as the data used to train them. They use the patterns in the data they have been fed and apply them consistently to make future decisions. Consider an AI tasked with sorting the best nurses for a hospital to hire. If the AI has been fed historical data—profiles of excellent nurses who have mostly been female—it will tend to judge female candidates to be better fits. Algorithms need to be carefully designed to account for historical biases.
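To make that mechanism concrete, here is a minimal sketch of how a model trained on skewed historical labels reproduces the skew. The data is synthetic and the features hypothetical, and scikit-learn is assumed purely for illustration; this is not how any real hiring system is built:

```python
# Toy illustration: historical bias leaking from training labels into a model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical candidate features: [is_female, years_experience]
is_female = rng.integers(0, 2, size=n)
experience = rng.uniform(0, 20, size=n)

# Historical label: "excellent nurse". Skill really tracks experience, but in
# the historical records nearly all excellent nurses were women, so the label
# is entangled with gender.
excellent = ((experience > 10) & (is_female == 1)).astype(int)

X = np.column_stack([is_female, experience])
model = LogisticRegression().fit(X, excellent)

# Two candidates identical in every respect except gender:
print(model.predict_proba([[0, 15.0]])[0, 1])  # male, 15 years' experience
print(model.predict_proba([[1, 15.0]])[0, 1])  # female, 15 years' experience
# The female candidate scores far higher purely because of the biased labels.
```

Auditing for exactly this failure, holding the protected attribute fixed and checking whether the score moves, is one of the careful design steps the paragraph above calls for.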

Occasionally, AI systems get food poisoning. The most famous case was Watson, the AI that in 2011 became the first machine to defeat human champions on the television game show “Jeopardy.” Watson’s masters at IBM needed to teach it language, including American slang, so they fed it the contents of the online Urban Dictionary. But after ingesting that colorful linguistic meal, Watson developed a swearing habit. It began to punctuate its responses with four-letter words.

We have to be careful what we feed our algorithms. Belatedly, companies now understand that they can’t train facial-recognition technology by mainly using photos of white men. But better training data alone won’t solve the underlying problem of making algorithms achieve fairness.

Algorithms can already tell you what you might want to read, who you might want to date and where you might find work. When they are able to advise on who gets hired, who receives a loan, or the length of a prison sentence, AI will have to be made more transparent—and more accountable and respectful of society’s values and norms.

Accountability begins with human oversight when AI is making sensitive decisions. In an unusual move, Microsoft president Brad Smith recently called for the U.S. government to consider requiring human oversight of facial-recognition technologies.

The next step is to disclose when humans are subject to decisions made by AI. Top-down government regulation may not be a feasible or desirable fix for algorithmic bias. But processes can be created that would allow people to appeal machine-made decisions—by appealing to humans. The EU’s new General Data Protection Regulation establishes the right for individuals to know and challenge automated decisions.

Today people who have been misidentified—whether in an airport or an employment database—have no recourse. They might have been knowingly photographed for a driver’s license, or covertly filmed by a surveillance camera (which has a higher error rate). They cannot know where their image is stored, whether it has been sold or who can access it. They have no way of knowing whether they have been harmed by erroneous data or unfair decisions.

Minorities are already disadvantaged by such immature technologies, and the burden they bear for the improved security of society at large is both inequitable and uncompensated. Engineers alone will not be able to address this. An AI system is like a very smart child just beginning to understand the complexities of discrimination.

Realizing the dream I had as a teenager, of an AI that can free humans from bias instead of reinforcing it, will require a range of experts and regulators to think more deeply not only about what AI can do, but about what it should do—and then teach it how.

News Source = techcrunch.com

Pressure mounts on EU-US Privacy Shield after Facebook-Cambridge Analytica data scandal


Yet more pressure on the precariously placed EU-US Privacy Shield: The European Union parliament’s civil liberties committee has called for the data transfer arrangement to be suspended by September 1 unless the US comes into full compliance.

The committee has no power to suspend the arrangement itself, but the vote amps up the political pressure on the EU’s executive body, the European Commission.

In a vote late yesterday the Libe committee agreed that the mechanism, as it is currently being applied, does not provide adequate protection for EU citizens’ personal information. It emphasized the need for better monitoring in light of the recent Facebook-Cambridge Analytica scandal, after the company admitted in April that data on as many as 87 million users (including 2.7 million EU citizens) had been improperly passed to third parties in 2014.

Facebook is one of the now 3,000+ organizations that have signed up to Privacy Shield to make it easier for them to shift EU users’ data to the US for processing.

The Cambridge Analytica scandal actually pre-dates Privacy Shield, which was officially adopted in mid-2016 to replace the long-standing Safe Harbor arrangement (struck down by Europe’s top court in 2015, after a legal challenge successfully argued that US government mass surveillance practices were undermining EU citizens’ fundamental rights).

The EU also now has an updated data protection framework — the GDPR — which came into full force on May 25, and further tightens privacy protections around EU data.

The Libe committee says it wants US authorities to act on privacy scandals such as the Facebook-Cambridge Analytica debacle without delay — and, if needed, to remove companies that have misused personal data from the Privacy Shield list. MEPs also want EU authorities to investigate such cases and to suspend or ban data transfers under the Privacy Shield where appropriate.

Despite a string of privacy scandals — some very recent, and a fresh FTC probe — Facebook remains on the Privacy Shield list; so does SCL Elections, an affiliate of Cambridge Analytica, which has claimed to be closing its businesses down in light of press coverage of the scandal, yet which is apparently still certified to take people’s data out of the EU and provide it with ‘adequate protection’, per the Privacy Shield list…

MEPs on the committee also expressed concern about the recent adoption in the US of the Clarifying Lawful Overseas Use of Data Act (Cloud Act), which grants the US and foreign police access to personal data across borders — with the committee pointing out that the US law could conflict with EU data protection laws.

In a statement, civil liberties committee chair and rapporteur Claude Moraes said: “While progress has been made to improve on the Safe Harbor agreement, the Privacy Shield in its current form does not provide the adequate level of protection required by EU data protection law and the EU Charter. It is therefore up to the US authorities to effectively follow the terms of the agreement and for the Commission to take measures to ensure that it will fully comply with the GDPR.”

The Privacy Shield was negotiated by the European Commission with US counterparts as a replacement for Safe Harbor, and is intended to offer ‘essentially equivalent’ data protections for EU citizens when their data is taken to the US — a country which does not of course have essentially equivalent privacy laws. So the aim is to try to bridge the gap between two distinct legal regimes.

However the viability of that endeavor has been in doubt since the start, with critics arguing that the core legal discrepancies have not gone away — and dubbing Privacy Shield ‘lipstick on a pig’.

Also expressing concerns throughout the process of drafting the framework, and since, has been the EU’s influential WP29 group (now morphed into the European Data Protection Board), which is made up of representatives of Member States’ data protection agencies.

Its concerns have spanned both commercial elements of the framework and law enforcement/national security considerations. We’ve reached out to the EDPB for comment and will update this report with any response.

Following the adoption of Privacy Shield, the Commission has also expressed some public concerns, though the EU’s executive body has generally followed a ‘wait and see’ approach, coupled with attempts to use the mechanism to apply political pressure on US counterparts — using the moment of the Privacy Shield’s first annual review to push for reform of US surveillance law, for example.

Reform that did not come to pass, however. Quite the opposite. Hence the pressing bind the arrangement is in now, with the date of the second annual review fast approaching — and zero progress for the Commission to point to in trying to cushion Privacy Shield from criticism.

There’s still no permanent appointment of a Privacy Shield ombudsperson, as the framework requires. Another concern has been the lack of membership of the US Privacy and Civil Liberties Oversight Board — which remains moribund, with just a single member.

Threats to suspend the Privacy Shield arrangement if it’s judged to not be functioning as intended can only be credible if they are actually carried out.

Though the Commission will also want to avoid at all costs pulling the plug on a mechanism that more than 3,000 organizations are now using, and that many businesses are relying on. So it’s most likely that it will again be left to Europe’s top court to strike any invalidating blow.

A Commission spokesman told us it is aware of the discussions in the European Parliament on a draft resolution on the EU-US Privacy Shield, but he emphasized its approach of engaging with US counterparts to improve the arrangement.

“The Commission’s position is clear and laid out in the first annual review report. The first review showed that the Privacy Shield works well, but there is some room for improving its implementation,” he told TechCrunch.

“The Commission is working with the US administration and expects them to address the EU concerns. Commissioner Jourová was in the U.S. last time in March to engage with the U.S. government on the follow-up and discussed what the U.S. side should do until the next annual review in autumn.

“Commissioner Jourová also sent letters to US State Secretary Pompeo, Commerce Secretary Ross and Attorney General Sessions urging them to do the necessary improvements, including on the Ombudsman, as soon as possible.

“We will continue to work to keep the Privacy Shield running and ensure Europeans’ data are well protected. Over 3,000 companies are using it currently.”

While the Commission spokesman didn’t mention it, Privacy Shield is now facing several legal challenges.

These include, specifically, a series of legal questions pertaining to its adequacy, which have been referred to the CJEU by Ireland’s High Court as a result of a separate privacy challenge to a different EU data transfer mechanism that’s also used by organizations to authorize data flows.

And judging by how quickly the CJEU has handled similar questions, the arrangement could have as little as one more year’s operating grace before a decision is handed down that invalidates it.

If the Commission were to act itself, the natural moment would be the second annual review of the mechanism, due to take place in September; indeed the Libe committee is pushing for a suspension by September 1 if there’s no progress on reforms within the US.

The EU parliament as a whole is also due to vote on the committee’s text on Privacy Shield next month, which — if MEPs back the Libe position — would place further pressure on the EC to act. Though only a legal decision invalidating the arrangement can compel action.

News Source = techcrunch.com

To truly protect citizens, lawmakers need to restructure their regulatory oversight of big tech


If members of the European Parliament thought they could bring Mark Zuckerberg to heel with his recent appearance, they underestimated the enormous gulf between 21st century companies and their last-century regulators.

Zuckerberg himself reiterated that regulation is necessary, provided it is the “right regulation.”

But anyone who thinks that our existing regulatory tools can rein in our digital behemoths is engaging in magical thinking. Getting to “right regulation” will require us to think very differently.

The challenge goes far beyond Facebook and other social media: the use and abuse of data is going to be the defining feature of just about every company on the planet as we enter the age of machine learning and autonomous systems.

So far, Europe has taken a much more aggressive regulatory approach than anything the US was contemplating before Zuckerberg’s testimony, or has contemplated since.

The EU’s General Data Protection Regulation (GDPR) is now in force; it extends data privacy rights to all European citizens regardless of whether their data is processed by companies within the EU or beyond.

But I’m not holding my breath that the GDPR will get us very far on the massive regulatory challenge we face. It is just more of the same when it comes to regulation in the modern economy: a lot of ambiguous costly-to-interpret words and procedures on paper that are outmatched by rapidly evolving digital global technologies.

Crucially, the GDPR still relies heavily on the outmoded technology of user choice and consent, the main result of which is that almost everyone in Europe (and beyond) has been inundated with emails asking them to reconfirm permission to keep their data. But this is an illusion of choice, just as it is when we are ostensibly given the option to decide whether to agree to terms set by large corporations in standardized take-it-or-leave-it click-to-agree documents.

There’s also the problem of actually tracking whether companies are complying. It is likely that the regulation of online activity requires yet more technology, such as blockchain and AI-powered monitoring systems, to track data usage and implement smart contract terms.

As the EU has already discovered with the right to be forgotten, however, governments lack the technological resources needed to enforce these rights. Search engines are required to serve as their own judge and jury in the first instance; Google at last count was handling some 500 removal requests a day.

The fundamental challenge we face, here and throughout the modern economy, is not: “what should the rules for Facebook be?” but rather, “how can we innovate new ways to regulate effectively in the global digital age?”

The answer is that we need to find ways to harness the same ingenuity and drive that built Facebook to build the regulatory systems of the digital age. One way to do this is with what I call “super-regulation,” which involves developing a market for licensed private regulators that serve two masters: achieving regulatory targets set by governments, but also facing the market incentive to compete for business by innovating more cost-effective ways to do that.

Imagine, for example, if instead of drafting a detailed 261-page law like the EU did, a government instead settled on the principles of data protection, based on core values, such as privacy and user control.

Private entities, for-profit and non-profit, could apply to a government oversight agency for a license to provide data regulatory services to companies like Facebook, showing that their regulatory approach is effective in achieving these legislative principles.

These private regulators might use technology, big-data analysis and machine learning to do that. They might also figure out how to communicate simple options to people, in the same way that the developers of our smartphones figured that out. They might develop effective schemes to audit and test whether their systems are working—on pain of losing their license to regulate.

There could be many such regulators among which both consumers and Facebook could choose: some could even specialize in offering packages of data management attributes that would appeal to certain demographics – from the people who want to be invisible online, to those who want their every move documented on social media.

The key here is competition: for-profit and non-profit private regulators compete to attract money and brains to the problem of how to regulate complex systems like data creation and processing.

Zuckerberg thinks there’s some kind of “right” regulation possible for the digital world. I believe him; I just don’t think governments alone can invent it. Ideally, some next generation college kid would be staying up late trying to invent it in his or her dorm room.

The challenge we face is not how to get governments to write better laws; it’s how to get them to create the right conditions for the continued innovation necessary for new and effective regulatory systems.

News Source = techcrunch.com

Uber to stop storing precise location pick-ups/drop-offs in driver logs


Uber is planning to tweak the historical pick-up and drop-off logs that drivers can see in order to slightly obscure the exact location, rather than planting an exact pin in it (as now). The idea is to provide a modicum more privacy for users while still providing drivers with what look set to remain highly detailed trip logs.

The company told Gizmodo it will initially pilot the change with drivers, but intends the privacy-focused feature to become the default setting “in the coming months”.

Earlier this month Uber also announced a complete redesign of the drivers’ app — making changes it said had been informed by “months” of driver conversations and feedback. It says the pilot of location obfuscation will begin once all drivers have the new app.

The ride-hailing giant appears to be trying to find a compromise between rider safety concerns — there have been reports of Uber drivers stalking riders, for example — and drivers wanting to have precise logs so they can challenge fare disputes.

“Location data is our most sensitive information, and we are doing everything we can do to protect privacy around it,” a spokesperson told us. “The new design provides enough information for drivers to identify past trips for customer support issues or earning disputes without granting them ongoing access to rider addresses.”

In the current version of the pilot — according to screenshots obtained by Gizmodo — the location of the pin has been expanded into a circle, so it’s indicating a shaded area a few meters around a pick-up or drop-off location.
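For a sense of what such obfuscation could look like under the hood, here is a hedged sketch: Uber has not published its method, and the 30-meter radius below is a made-up parameter. The idea is simply to log a point sampled uniformly within a small circle around the true coordinates:

```python
# Hypothetical pin obfuscation: log a random point within a small circle
# around the true pick-up/drop-off, so the record shows an area, not a pin.
import math
import random

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def obfuscate(lat: float, lng: float, radius_m: float = 30.0):
    """Return a point sampled uniformly within radius_m of (lat, lng)."""
    # sqrt() makes the sample uniform by area instead of clustering at the center.
    distance = radius_m * math.sqrt(random.random())
    bearing = random.uniform(0.0, 2.0 * math.pi)
    # Small-displacement approximation: convert the offset in meters to degrees.
    dlat = math.degrees(distance * math.cos(bearing) / EARTH_RADIUS_M)
    dlng = math.degrees(distance * math.sin(bearing) /
                        (EARTH_RADIUS_M * math.cos(math.radians(lat))))
    return lat + dlat, lng + dlng

print(obfuscate(37.7749, -122.4194))  # true pin lies somewhere in a ~30 m circle
```

How much privacy this buys depends entirely on the radius: a few meters still narrows the pin to a single building, while a few hundred could cover a city block.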

According to Uber the design may still change, as it said it intends to gather driver feedback. We’ve asked if it’s also intending to gather rider feedback on the design.

Asked whether it’s making the change as part of an FTC settlement last year — which followed an investigation into data mishandling, privacy and security complaints dating back to 2014 and 2015 — an Uber spokesman told us: “Not specifically, but user expectations are shifting and we are working to build privacy into the DNA of our products.”

Earlier this month the company agreed to a revised settlement with the FTC, including agreeing that it may be subject to civil penalties if it fails to notify the FTC of future privacy breaches — likely in light of the 2016 data breach affecting 57 million riders and drivers which the company concealed until 2017.

An incoming update to European privacy rules (called GDPR) — which beefs up fines for violations and applies extraterritorially (including, for example, if an EU citizen is using the Uber app on a trip to the U.S.) — also tightens the screw on data protection, giving individuals expanded rights to control their personal information held by a company.

A precise location log would likely be considered personal data that Uber would have to provide to any users requesting their information under GDPR, for example.

It’s less clear, though, whether the relatively small amount of obfuscation it’s toying with here would be enough to ensure the location logs are no longer judged to be riders’ personal data under the regulation.

Last year the company also ended a controversial feature in which its app had tracked the location of users even after their trip had ended.

News Source = techcrunch.com

Facebook face recognition error looks awkward ahead of GDPR


A Facebook face recognition notification slip-up hints at how risky the company’s approach to compliance with a tough new European data protection standard could turn out to be.

On Friday a Metro journalist in the UK reported receiving a notification about the company’s face recognition technology — which told him “the setting is on”.

The wording was curious as the technology has been switched off in Europe since 2012, after regulatory pressure, and — as part of changes related to its GDPR compliance strategy — Facebook has also said it will be asking European users to choose individually whether or not they want to switch it on. (And on Friday it began rolling out its new consent flow in the region, ahead of the regulation applying next month.)

The company has since confirmed to us that the message was sent to the user in error — saying the wording came from an earlier notification, sent starting in December to users who already had its facial recognition tech enabled. It had intended to send the person a similar notification containing the opposite message, i.e. that “the setting is off”.

“We’re asking everyone in the EU whether they want to enable face recognition, and only people who affirmatively give their consent will have these features enabled. We did not intend for anyone in the EU to see this type of message, and we can confirm that this error did not result in face recognition being enabled without the person’s consent,” a Facebook spokesperson told us.

[Image: the two notifications in question, showing the “setting is on” vs “setting is off” wordings.]

This is interesting because Facebook has repeatedly refused to confirm it will be universally applying GDPR compliance measures across its entire global user-base.

Instead it has restricted its public commitments to saying the same “settings and controls” will be made available for users — which as we’ve previously pointed out avoids committing the company to a universal application of GDPR principles, such as privacy by design.

Given that Facebook’s facial recognition feature has been switched off in Europe since 2012, the “setting is on” message would presumably have only been sent to users in the US or Canada — where Facebook has been able to forge ahead with pushing people to accept the controversial, privacy-hostile technology, embedding it into features such as auto-tagging for photo uploads.

But it hardly bodes well for Facebook’s compliance with the EU’s strict new data protection standard if its systems are getting confused about whether or not a user is an EU person.

Facebook claims no data was processed without consent as a result of the wrong notification being sent — but under GDPR it could face investigations by data protection authorities seeking to verify whether or not an individual’s rights were violated. (Reminder: GDPR fines can scale as high as 4% of a company’s global annual turnover, so privacy enforcement is at last getting teeth.)

Facebook’s appetite for continuing to push privacy hostile features on its user-base is clear. This strategic direction also comes from the very top of the company.

Earlier this month CEO and founder Mark Zuckerberg urged US lawmakers not to impede US companies from using people’s data for sensitive use-cases like facial recognition — attempting to gloss that tough sell by claiming pro-privacy rules would risk the US falling behind China.

Meanwhile, last week it also emerged that Zuckerberg’s company will switch the location where most international users’ data is processed from its international HQ, Facebook Ireland, to Facebook USA. From next month only EU users will have their data controller located in the EU; other international users, who would otherwise have at least technically fallen under GDPR’s reach on account of their data being processed in the region, are being shifted out of EU jurisdiction via a unilateral T&Cs change.

This move seems intended to try to shrink some of Facebook’s legal liabilities by reducing the number of international users that would, at least technically, fall under the reach of the EU regulation — which both applies to anyone in the EU whose data is being processed and also extends EU fundamental rights extraterritorially, carrying the aforementioned major penalties for violations.

However, Facebook’s decision to reduce how many of its users have their data processed in the EU also looks set to raise the stakes — if, as it appears, the company intends to exploit the lack of a comprehensive privacy framework in the US to apply different standards to North American users (and, from next month, to non-EU international users, whose data will be processed there).

The problem is, if Facebook does not perform perfect segregation and management of these two separate pools of users it risks accidentally processing the personal data of Europeans in violation of the strict new EU standard, which applies from May 25.

Yet here it is, on the cusp of the new rules, sending the wrong notification and incorrectly telling an EU user that facial recognition is on.

Given how much risk it’s creating for itself by trying to run double standards for data protection, you almost have to wonder whether Facebook is trying to engineer in some compliance wiggle room for itself — i.e. by positioning itself to be able to claim that such and such’s data was processed in error.

Another interesting question is whether the unilateral switching of ~1.5BN non-EU international users to Facebook USA as data controller could be interpreted as a data transfer to a third country — which would trigger other data protection requirements under EU law, and further layer on the legal complexity…

What is clear is that legal challenges to Facebook’s self-serving interpretation of EU law are coming.

News Source = techcrunch.com
