
This early GDPR adtech strike puts the spotlight on consent


What does consent as a valid legal basis for processing personal data look like under Europe’s updated privacy rules? It may sound like an abstract concern, but for online services that rely on things being done with user data in order to monetize free-to-access content this is a key question now that the region’s General Data Protection Regulation is firmly in place.

The GDPR is actually clear about consent. But if you haven’t bothered to read the text of the regulation, and instead just go and look at some of the self-styled consent management platforms (CMPs) floating around the web since May 25, you’d probably have trouble guessing it.

Confusing and/or incomplete consent flows aren’t yet extinct, sadly. But it’s fair to say those that don’t offer full opt-in choice are on borrowed time.

Because if your service or app relies on obtaining consent to process EU users’ personal data — as many free at the point-of-use, ad-supported apps do — then the GDPR states consent must be freely given, specific, informed and unambiguous.

That means you can’t bundle multiple uses for personal data under a single opt-in.

Nor can you obfuscate consent behind opaque wording that doesn’t actually specify the thing you’re going to do with the data.

You also have to offer users the choice not to consent. So you cannot pre-tick all the consent boxes that you really wish your users would freely choose — because you have to actually let them do that.
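By way of illustration only (the field names below are hypothetical, not any particular CMP's API), a minimal sketch of what those rules imply for how a consent record has to be modelled: one record per specific purpose, defaulting to "not granted" so nothing is pre-ticked, and a refusal stored as a perfectly valid outcome.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Hypothetical per-purpose consent record; names are illustrative, not any real CMP's API.
@dataclass
class PurposeConsent:
    purpose: str                            # one specific, named use of the data
    granted: bool = False                   # must default to False: no pre-ticked boxes
    timestamp: Optional[datetime] = None    # when the user actually made a choice

def record_choice(consent: PurposeConsent, user_opted_in: bool) -> PurposeConsent:
    """Store exactly what the user chose; 'no' is as valid an outcome as 'yes'."""
    consent.granted = user_opted_in
    consent.timestamp = datetime.utcnow()
    return consent

def may_process(consents: list, purpose: str) -> bool:
    """A purpose may only be processed if it has its own affirmative opt-in."""
    return any(c.purpose == purpose and c.granted for c in consents)

# Bundling several uses under one tick box fails: each purpose needs its own record.
consents = [PurposeConsent("personalised ads"), PurposeConsent("store-visit analytics")]
record_choice(consents[0], True)                        # user agrees to ads...
print(may_process(consents, "store-visit analytics"))   # ...but analytics stays False
```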

It’s not rocket science but the pushback from certain quarters of the adtech industry has been as awfully predictable as it’s horribly frustrating.

This has not gone unnoticed by consumers either. Europe’s Internet users have been filing consent-based complaints thick and fast this year. And a lot of what is being claimed as ‘GDPR compliant’ right now likely is not.

So, some six months in, we’re essentially in a holding pattern waiting for the regulatory hammers to come down.

But if you look closely there are some early enforcement actions that show some consent fog is starting to shift.

Yes, we’re still waiting on the outcomes of major consent-related complaints against tech giants. (And stockpile popcorn to watch that space for sure.)

But late last month French data protection watchdog, the CNIL, announced the closure of a formal warning it issued this summer against drive-to-store adtech firm, Fidzup — saying it was satisfied it was now GDPR compliant.

Such a regulatory stamp of approval is obviously rare this early in the new legal regime.

So while Fidzup is no adtech giant its experience still makes an interesting case study — showing how the consent line was being crossed; how, working with CNIL, it was able to fix that; and what being on the right side of the law means for a (relatively) small-scale adtech business that relies on consent to enable a location-based mobile marketing business.

From zero to GDPR hero?

Fidzup’s service works like this: It installs kit inside (or on) partner retailers’ physical stores to detect the presence of users’ smartphones. At the same time it provides an SDK to mobile developers to track app users’ locations, collecting and sharing the advertising ID and wi-fi ID of users’ smartphones (which, along with location, are considered personal data under the GDPR).

Those two elements — detectors in physical stores; and a personal data-gathering SDK in mobile apps — come together to power Fidzup’s retail-focused, location-based ad service which pushes ads to mobile users when they’re near a partner store. The system also enables it to track ad-to-store conversions for its retail partners.
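For illustration, here is a sketch of the kind of event such an SDK could emit. The field names and values are invented, not Fidzup’s actual format; the point is that every field is the sort of identifier the GDPR treats as personal data.

```python
# Illustrative only: a sketch of the kind of event such an SDK could emit.
# Field names and values are hypothetical; each field is personal data under the GDPR.
location_event = {
    "advertising_id": "38400000-8cf0-11bd-b23e-10b96e40000d",  # resettable mobile ad ID
    "wifi_mac": "3c:5a:b4:12:34:56",                            # device/network identifier
    "lat": 48.8708, "lng": 2.3317,                              # geolocation near a partner store
    "timestamp": "2018-11-20T14:03:00Z",
    "app_id": "com.example.partnerapp",                         # the host app carrying the SDK
}
# Pushing ads and measuring ad-to-store conversion both depend on joining events like this
# against the in-store detectors' sightings of the same identifiers.
```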

The problem Fidzup had, back in July, was that after an audit of its business the CNIL deemed it did not have proper consent to process users’ geolocation data to target them with ads.

Fidzup says it had thought its business was GDPR compliant because it took the view that app publishers were the data processors gathering consent on its behalf; the CNIL warning was a wake up call that this interpretation was incorrect — and that it was responsible for the data processing and so also for collecting consents.

The regulator found that when a smartphone user installed an app containing Fidzup’s SDK they were not informed that their location and mobile device ID data would be used for ad targeting, nor the partners Fidzup was sharing their data with.

The CNIL also said users should have been clearly informed before any data was collected, so they could choose whether to consent, rather than the information being given via general app conditions (or on in-store posters) after the processing had already happened.

It also found users had no way to download the apps without also getting Fidzup’s SDK, and that using such an app automatically resulted in data being transmitted to partners.

Fidzup had also only been asking users to consent to the processing of their geolocation data for the specific app they had downloaded, not for the targeted advertising with retail partners that is the substance of the firm’s business.

So there was a string of issues. And when Fidzup was hit with the warning the stakes were high, even with no monetary penalty attached. Because unless it could fix the core consent problem, the 2014-founded startup might have faced going out of business. Or having to change its line of business entirely.

Instead it decided to try and fix the consent problem by building a GDPR-compliant CMP — spending around five months liaising with the regulator, and finally getting a green light late last month.

A core piece of the challenge, as co-founder and CEO Olivier Magnan-Saurin tells it, was how to handle multiple partners in this CMP because its business entails passing data along the chain of partners — each new use and partner requiring opt-in consent.

“The first challenge was to design a window and a banner for multiple data buyers,” he tells TechCrunch. “So that’s what we did. The challenge was to have something okay for the CNIL and GDPR in terms of wording, UX etc. And, at the same time, some things that the publisher will allow to and will accept to implement in his source code to display to his users because he doesn’t want to scare them or to lose too much.

“Because they get money from the data that we buy from them. So they wanted to get the maximum money that they can, because it’s very difficult for them to live without the data revenue. So the challenge was to reconcile the need from the CNIL and the GDPR and from the publishers to get something acceptable for everyone.”

As a quick related aside, it’s worth noting that Fidzup does not work with the thousands of partners that an ad exchange or demand-side platform most likely would.

Magnan-Saurin tells us its CMP lists 460 partners. So while that’s still a lengthy list to have to put in front of consumers — it’s not, for example, the 32,000 partners of another French adtech firm, Vectaury, which has also recently been on the receiving end of an invalid consent ruling from the CNIL.

In turn, that suggests the ‘Fidzup fix’, if we can call it that, only scales so far; adtech firms that are routinely passing millions of people’s data around thousands of partners look to have much more existential problems under GDPR — as we’ve reported previously re: the Vectaury decision.

No consent without choice

Returning to Fidzup, its fix essentially boils down to actually offering people a choice over each and every data processing purpose, unless it’s strictly necessary for delivering the core app service the consumer was intending to use.

Which also means giving app users the ability to opt out of ads entirely, and not be penalized by losing access to the app’s features.

In short, you can’t bundle consent. So Fidzup’s CMP unbundles all the data purposes and partners to offer users the option to consent or not.
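A minimal sketch of the unbundling idea, using hypothetical purpose and partner names rather than Fidzup’s actual CMP code: data only flows to a partner, for a purpose, when the user has opted in to both that purpose and that partner, and an empty result never blocks use of the app.

```python
# Hypothetical consent state as an unbundled CMP might store it: one flag per purpose
# and one flag per listed data buyer, all defaulting to opt-out.
user_consent = {
    "purposes": {"personalised_ads": True, "store_visit_measurement": False},
    "partners": {"partner_a": True, "partner_b": False},
}

def recipients(event_purpose: str, all_partners: list, consent: dict) -> list:
    """Return the partners allowed to receive data for this purpose."""
    if not consent["purposes"].get(event_purpose, False):
        return []  # purpose refused: nothing is sent, and the app keeps working regardless
    return [p for p in all_partners if consent["partners"].get(p, False)]

print(recipients("personalised_ads", ["partner_a", "partner_b"], user_consent))  # ['partner_a']
print(recipients("store_visit_measurement", ["partner_a"], user_consent))        # []
```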

“You can unselect or select each purpose,” says Magnan-Saurin of the now compliant CMP. “And if you want only to send data for, I don’t know, personalized ads but you don’t want to send the data to analyze if you go to a store or not, you can. You can unselect or select each consent. You can also see all the buyers who buy the data. So you can say okay I’m okay to send the data to every buyer but I can also select only a few or none of them.”

“What the CNIL ask is very complicated to read, I think, for the final user,” he continues. “Yes it’s very precise and you can choose everything etc. But it’s very complete and you have to spend some time to read everything. So we were [hoping] for something much shorter… but now okay we have something between the initial asking for the CNIL — which was like a big book — and our consent collection before the warning which was too short with not the right information. But still it’s quite long to read.”

Fidzup’s CNIL approved GDPR-compliant consent management platform

“Of course, as a user, I can refuse everything. Say no, I don’t want my data to be collected, I don’t want to send my data. And I have to be able, as a user, to use the app in the same way as if I accept or refuse the data collection,” he adds.

He says the CNIL was very clear on the latter point, telling it that it could not make collection of geolocation data for ad targeting a condition of using the app.

“You have to provide the same service to the user if he accepts or not to share his data,” he emphasizes. “So now the app and the geolocation features [of the app] works also if you refuse to send the data to advertisers.”

This is especially interesting in light of the ‘forced consent’ complaints filed against tech giants Facebook and Google earlier this year.

These complaints argue the companies should (but currently do not) offer an opt-out of targeted advertising, because behavioural ads are not strictly necessary for their core services (i.e. social networking, messaging, a smartphone platform etc).

Indeed, data gathering for such non-core service purposes should require an affirmative opt-in under GDPR. (An additional GDPR complaint against Android has also since attacked how consent is gathered, arguing it’s manipulative and deceptive.)

Asked whether, based on his experience working with the CNIL to achieve GDPR compliance, it seems fair that a small adtech firm like Fidzup has had to offer an opt-out when a tech giant like Facebook seemingly doesn’t, Magnan-Saurin tells TechCrunch: “I’m not a lawyer but based on what the CNIL asked us to be in compliance with the GDPR law I’m not sure that what I see on Facebook as a user is 100% GDPR compliant.”

“It’s better than one year ago but [I’m still not sure],” he adds. “Again it’s only my feeling as a user, based on the experience I have with the French CNIL and the GDPR law.”

Facebook of course maintains its approach is 100% GDPR compliant.

Even as data privacy experts aren’t so sure.

One thing is clear: If the tech giant was forced to offer an opt out for data processing for ads it would clearly take a big chunk out of its business — as a sub-set of users would undoubtedly say no to Zuckerberg’s “ads”. (And if European Facebook users got an ads opt out you can bet Americans would very soon and very loudly demand the same, so…)

Bridging the privacy gap

In Fidzup’s case, complying with GDPR has had a major impact on its business because offering a genuine choice means it’s not always able to obtain consent. Magnan-Saurin says there is essentially now a limit on the number of device users advertisers can reach because not everyone opts in for ads.

Although, since it’s been using the new CMP, he says a majority are still opting in (or, at least, this is the case so far) — showing one consent chart report with a ~70:30 opt-in rate, for example.

He expresses the change like this: “No one in the world can say okay I have 100% of the smartphones in my data base because the consent collection is more complete. No one in the world, even Facebook or Google, could say okay, 100% of the smartphones are okay to collect from them geolocation data. That’s a huge change.”

“Before that there was a race to the higher reach. The biggest number of smartphones in your database,” he continues. “Today that’s not the point.”

Now he says the point for adtech businesses with EU users is figuring out how to extrapolate from the percentage of user data they can (legally) collect to the 100% they can’t.

And that’s what Fidzup has been working on this year, developing machine learning algorithms to try to bridge the data gap so it can still offer its retail partners accurate predictions for tracking ad to store conversions.

“We have algorithms based on the few thousand stores that we equip, based on the few hundred mobile advertising campaigns that we have run, and we can understand for a store in London in… sports, fashion, for example, how many visits we can expect from the campaign based on what we can measure with the right consent,” he says. “That’s the first and main change in our market; the quantity of data that we can get in our database.”

“Now the challenge is to be as accurate as we can be without having 100% of real data — with the consent, and the real picture,” he adds. “The accuracy is less… but not that much. We have a very, very high standard of quality on that… So now we can assure the retailers that with our machine learning system they have nearly the same quality as they had before.

“Of course it’s not exactly the same… but it’s very close.”
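Fidzup hasn’t published its models, but the underlying idea can be sketched as a simple panel extrapolation: measure conversions among consented users, then scale up by the opt-in rate, with a correction factor standing in for whatever the machine learning layer learns from historical campaigns and equipped stores. The numbers and the correction factor below are purely illustrative.

```python
# Not Fidzup's actual algorithm: a baseline sketch of extrapolating from the consented
# panel to an estimate of total ad-to-store visits.
def estimate_total_store_visits(observed_visits_from_consented: int,
                                opt_in_rate: float,
                                panel_bias_factor: float = 1.0) -> float:
    """
    observed_visits_from_consented: visits measured with valid consent
    opt_in_rate: share of reachable users who opted in (e.g. ~0.7 per the article)
    panel_bias_factor: hypothetical correction learned from past campaigns, since
                       opted-in users may not behave exactly like the average user
    """
    return (observed_visits_from_consented / opt_in_rate) * panel_bias_factor

# e.g. 350 measured visits at a 70% opt-in rate suggests roughly 500 visits overall
print(estimate_total_store_visits(350, 0.7))
```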

Having a CMP that’s had regulatory ‘sign-off’, as it were, is something Fidzup is now hoping to turn into an additional line of business.

“The second change is more like an opportunity,” he suggests. “All the work that we have done with CNIL and our publishers we have transferred it to a new product, a CMP, and we offer today to all the publishers who ask to use our consent management platform. So for us it’s a new product — we didn’t have it before. And today we are the only — to my knowledge — the only company and the only CMP validated by the CNIL and GDPR compliant so that’s useful for all the publishers in the world.”

It’s not currently charging publishers to use the CMP but will be seeing whether it can turn it into a paid product early next year.

How then, after months of compliance work, does Fidzup feel about GDPR? Does Magnan-Saurin believe the regulation is making life harder for startups versus tech giants, as certain lobby groups have suggested in claiming the law risks entrenching the dominance of better-resourced giants? Or does he see opportunities?

In Magnan-Saurin’s view, six months into the GDPR, European startups are at an R&D disadvantage versus tech giants, because U.S. companies like Facebook and Google are not (yet) subject to a similarly comprehensive privacy regulation at home, so it’s easier for them to bag up user data for whatever purpose they like.

Though it’s also true that U.S. lawmakers are now paying earnest attention to the privacy policy area at a federal level. (And Google’s CEO faced a number of tough questions from Congress on that front just this week.)

“The fact is Facebook-Google they own like 90% of the revenue in mobile advertising in the world. And they are American. So basically they can do all their research and development on, for example, American users without any GDPR regulation,” he says. “And then apply a pattern of GDPR compliance and apply the new product, the new algorithm, everywhere in the world.

“As a European startup I can’t do that. Because I’m a European. So once I begin the research and development I have to be GDPR compliant so it’s going to be longer for Fidzup to develop the same thing as an American… But now we can see that GDPR might be beginning a ‘world thing’ — and maybe Facebook and Google will apply the GDPR compliance everywhere in the world. Could be. But it’s their own choice. Which means, for the example of the R&D, they could do their own research without applying the law because for now U.S. doesn’t care about the GDPR law, so you’re not outlawed if you do R&D without applying GDPR in the U.S. That’s the main difference.”

He suggests some European startups might relocate R&D efforts outside the region to try to work around the legal complexity around privacy.

“If the law is meant to bring the big players to better compliance with privacy I think — yes, maybe it goes in this way. But the first to suffer is the European companies, and it becomes an asset for the U.S. and maybe the Chinese… companies because they can be quicker in their innovation cycles,” he suggests. “That’s a fact. So what could happen is maybe investors will not invest that much money in Europe than in U.S. or in China on the marketing, advertising data subject topics. Maybe even the French companies will put all the R&D in the U.S. and destroy some jobs in Europe because it’s too complicated to do research on that topics. Could be impacts. We don’t know yet.”

But the fact that GDPR enforcement has, perhaps inevitably, started small, with a handful of warnings against relative data minnows so far rather than any swift action against the industry-dominating adtech giants, is being felt as yet another inequality at the startup coalface.

“What’s sure is that the CNIL started to send warnings not to Google or Facebook but to startups. That’s what I can see,” he says. “Because maybe it’s easier to see I’m working on GDPR and everything but the fact is the law is not as complicated for Facebook and Google as it is for the small and European companies.”


AWS launches a base station for satellites as a service


Today at AWS re:Invent in Las Vegas, AWS announced a new service for satellite providers with the launch of AWS Ground Station, the first fully managed ground station as a service.

With this new service, AWS will provide ground antennas through its existing network of worldwide availability zones, as well as data processing services to simplify the entire data retrieval and processing workflow for satellite companies, or for others who consume the satellite data.

Satellite operators need to get data down from the satellite, process it and then make it available for developers to use in applications. In that regard, it’s not that much different from any IoT device. It just so happens that these are flying around in space.

AWS CEO Andy Jassy pointed out that they hadn’t really considered a service like this until they had customers asking for it. “Customers said that we have so much data in space with so many applications that want to use that data. Why don’t you make it easier,” Jassy said. He said they thought about that and figured they could bring their vast worldwide network to bear on the problem.

Prior to this service, companies had to build base stations themselves to get data down from satellites as they passed overhead, wherever those base stations happened to be. That required providers to buy land and build the hardware, then deal with the data themselves. By offering this as a managed service, AWS greatly simplifies every aspect of the workflow.
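For a rough sense of what “ground station as a service” looks like from the customer side, here is a sketch assuming the boto3 ‘groundstation’ client as it exists today; the ARNs, ground station name and timings are placeholders, and a mission profile and satellite would already need to be onboarded with AWS.

```python
# A sketch of how a satellite operator might schedule a downlink with the service.
# Assumes the boto3 'groundstation' client; ARNs, names and times are placeholders.
from datetime import datetime, timedelta
import boto3

gs = boto3.client("groundstation", region_name="us-east-2")

# Discover antennas AWS operates near its regions (no land purchase or hardware build needed).
stations = gs.list_ground_stations()

# Reserve a contact: a time window in which a specific antenna tracks your satellite,
# downlinks the pass, and hands the data to your processing pipeline in AWS.
start = datetime.utcnow() + timedelta(hours=6)
gs.reserve_contact(
    missionProfileArn="arn:aws:groundstation:...:mission-profile/EXAMPLE",  # placeholder
    satelliteArn="arn:aws:groundstation:...:satellite/EXAMPLE",             # placeholder
    groundStation="Ohio 1",                                                 # placeholder name
    startTime=start,
    endTime=start + timedelta(minutes=10),
)
```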

The value proposition of any cloud service has always been about reducing the resource allocation required by a company to achieve a goal. With AWS Ground Station, AWS handles every aspect of the satellite data retrieval and processing operation for the company, greatly reducing the cost and complexity associated with it.

AWS claims customers can save up to 80 percent by using the on-demand model rather than owning their own ground stations. It is starting with two ground stations as it launches the service, but plans to expand to 12 by the middle of next year.


Children are being “datafied” before we’ve understood the risks, report warns


A report by England’s children’s commissioner has raised concerns about how kids’ data is being collected and shared across the board, in both the private and public sectors.

In the report, entitled Who knows what about me?, Anne Longfield urges society to “stop and think” about what big data means for children’s lives.

Big data practices could result in a data-disadvantaged generation whose life chances are shaped by their childhood data footprint, her report warns.

The long-term impact of profiling minors once those children become adults is simply not known, she writes.

“Children are being “datafied” – not just via social media, but in many aspects of their lives,” says Longfield.

“For children growing up today, and the generations that follow them, the impact of profiling will be even greater – simply because there is more data available about them.”

By the time a child is 13 their parents will have posted an average of 1,300 photos and videos of them on social media, according to the report. After which this data mountain “explodes” as children themselves start engaging on the platforms — posting to social media 26 times per day, on average, and amassing a total of nearly 70,000 posts by age 18.

“We need to stop and think about what this means for children’s lives now and how it may impact on their future lives as adults,” warns Longfield. “We simply do not know what the consequences of all this information about our children will be. In the light of this uncertainty, should we be happy to continue forever collecting and sharing children’s data?

“Children and parents need to be much more aware of what they share and consider the consequences. Companies that make apps, toys and other products used by children need to stop filling them with trackers, and put their terms and conditions in language that children understand. And crucially, the Government needs to monitor the situation and refine data protection legislation if needed, so that children are genuinely protected – especially as technology develops,” she adds.

The report looks at what types of data is being collected on kids; where and by whom; and how it might be used in the short and long term — both for the benefit of children but also considering potential risks.

On the benefits side, the report cites a variety of still fairly experimental ideas that might make positive use of children’s data — such as for targeted inspections of services for kids to focus on areas where data suggests there are problems; NLP technology to speed up analysis of large data-sets (such as the NSPCC’s national case review repository) to find common themes and understand “how to prevent harm and promote positive outcomes”; predictive analytics using data from children and adults to more cost-effectively flag “potential child safeguarding risks to social workers”; and digitizing children’s Personal Child Health Record to make the current paper-based record more widely accessible to professionals working with children.

But while Longfield describes the increasing availability of data as offering “enormous advantages”, she is also very clear on major risks unfolding — be it to safety and well-being; child development and social dynamics; identity theft and fraud; and the longer term impact on children’s opportunity and life chances.

“In effect [children] are the ‘canary in the coal mine’ for wider society, encountering the risks before many adults become aware of them or are able to develop strategies to mitigate them,” she warns. “It is crucial that we are mindful of the risks and mitigate them.”

Transparency is lacking

One clear takeaway from the report is there is still a lack of transparency about how children’s data is being collected and processed — which in itself acts as a barrier to better understanding the risks.

“If we better understood what happens to children’s data after it is given – who collects it, who it is shared with and how it is aggregated – then we would have a better understanding of what the likely implications might be in the future, but this transparency is lacking,” Longfield writes — noting that this is true despite ‘transparency’ being the first key principle set out in the EU’s tough new privacy framework, GDPR.

The updated data protection framework did beef up protections for children’s personal data in Europe — introducing a new provision setting a 16-year-old age limit on kids’ ability to consent to their data being processed when it came into force on May 25, for example. (Although EU Member States can choose to write a lower age limit into their laws, down to a floor of 13.)

And mainstream social media apps, such as Facebook and Snapchat, responded by tweaking their T&Cs and/or products in the region. (Although some of the parental consent systems that were introduced to claim compliance with GDPR appear trivially easy for kids to bypass, as we’ve pointed out before.)

But, as Longfield points out, Article 5 of the GDPR states that data must be “processed lawfully, fairly and in a transparent manner in relation to individuals”.

Yet when it comes to children’s data the children’s commissioner says transparency is simply not there.

She also sees limitations with GDPR, from a children’s data protection perspective — pointing out that, for example, it does not prohibit the profiling of children entirely (stating only that it “should not be the norm”).

While another provision, Article 22 — which states that children have the right not to be subject to decisions based solely on automated processing (including profiling) if they have legal or similarly significant effects on them — also appears to be circumventable.

“They do not apply to decision-making where humans play some role, however minimal that role is,” she warns, which suggests another workaround for companies to exploit children’s data.

“Determining whether an automated decision-making process will have “similarly significant effects” is difficult to gauge given that we do not yet understand the full implications of these processes – and perhaps even more difficult to judge in the case of children,” Longfield also argues.

“There is still much uncertainty around how Article 22 will work in respect of children,” she adds. “The key area of concern will be in respect of any limitations in relation to advertising products and services and associated data protection practices.”

Recommendations

The report makes a series of recommendations for policymakers, with Longfield calling for schools to “teach children about how their data is collected and used, and what they can do to take control of their data footprints”.

She also presses the government to consider introducing an obligation on platforms that use “automated decision-making to be more transparent about the algorithms they use and the data fed into these algorithms” — where data collected from under 18s is used.

Which would essentially place additional requirements on all mainstream social media platforms to be far less opaque about the AI machinery they use to shape and distribute content on their platforms at vast scale. Given that few, if any, could claim to have no under-18s using their platforms.

She also argues that companies targeting products at children have far more explaining to do, writing: 

Companies producing apps, toys and other products aimed at children should be more transparent about any trackers capturing information about children. In particular where a toy collects any video or audio generated by a child this should be made explicit in a prominent part of the packaging or its accompanying information. It should be clearly stated if any video or audio content is stored on the toy or elsewhere and whether or not it is transmitted over the internet. If it is transmitted, parents should also be told whether or not it will be encrypted during transmission or when stored, who might analyse or process it and for what purposes. Parents should ask if information is not given or unclear.

Another recommendation for companies is that terms and conditions should be written in a language children can understand.

(Albeit, as it stands, tech industry T&Cs can be hard enough for adults to scratch the surface of — let alone have enough hours in the day to actually read.)


A recent U.S. study of kids apps, covered by BuzzFeed News, highlighted that mobile games aimed at kids can be highly manipulative, describing instances of apps making their cartoon characters cry if a child does not click on an in-app purchase, for example.

A key and contrasting problem with data processing is that it’s so murky: applied in the background, so any harms are far less immediately visible, because only the data processor truly knows what’s being done with people’s — and indeed children’s — information.

Yet concerns about exploitation of personal data are stepping up across the board. And essentially touch all sectors and segments of society now, even as risks where kids are concerned may look the most stark.

This summer the UK’s privacy watchdog called for an ethical pause on the use by political campaigns of online ad targeting tools, for example, citing a range of concerns that data practices have got ahead of what the public knows and would accept.

It also called for the government to come up with a Code of Practice for digital campaigning to ensure that long-standing democratic norms are not being undermined.

So the children’s commissioner’s appeal for a collective ‘stop and think’ where the use of data is concerned is just one of a growing number of raised voices policymakers are hearing.

One thing is clear: Calls to quantify what big data means for society — to ensure powerful data-mining technologies are being applied in ways that are ethical and fair for everyone — aren’t going anywhere.


UK report urges action to combat AI bias


The need for diverse development teams and truly representational data-sets to avoid biases being baked into AI algorithms is one of the core recommendations in a lengthy Lords committee report looking into the economic, ethical and social implications of artificial intelligence, and published today by the upper House of the UK parliament.

“The main ways to address these kinds of biases are to ensure that developers are drawn from diverse gender, ethnic and socio-economic backgrounds, and are aware of, and adhere to, ethical codes of conduct,” the committee writes, chiming with plenty of extant commentary around algorithmic accountability.

“It is essential that ethics take centre stage in AI’s development and use,” adds committee chairman, Lord Clement-Jones, in a statement. “The UK has a unique opportunity to shape AI positively for the public’s benefit and to lead the international community in AI’s ethical development, rather than passively accept its consequences.”

The report also calls for the government to take urgent steps to help foster “the creation of authoritative tools and systems for auditing and testing training datasets to ensure they are representative of diverse populations, and to ensure that when used to train AI systems they are unlikely to lead to prejudicial decisions” — recommending a publicly funded challenge to incentivize the development of technologies that can audit and interrogate AIs.
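The report doesn’t specify what such auditing tools would look like. As a minimal sketch of one piece of the job, the snippet below compares how groups are represented in a training set against a reference population and flags large gaps; the group names, counts and tolerance threshold are illustrative, not drawn from the Lords report.

```python
# Illustrative sketch of a dataset representativeness check: compare group shares in a
# training set against a reference population and flag gaps above a tolerance.
def representation_gaps(train_counts: dict, population_shares: dict, tolerance: float = 0.05):
    total = sum(train_counts.values())
    gaps = {}
    for group, pop_share in population_shares.items():
        train_share = train_counts.get(group, 0) / total
        if abs(train_share - pop_share) > tolerance:
            gaps[group] = {"train_share": round(train_share, 3), "population_share": pop_share}
    return gaps

train = {"group_a": 8200, "group_b": 1300, "group_c": 500}        # invented counts
population = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}  # invented benchmark
print(representation_gaps(train, population))
# flags group_a as over-represented and group_b / group_c as under-represented
```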

“The Centre for Data Ethics and Innovation, in consultation with the Alan Turing Institute, the Institute of Electrical and Electronics Engineers, the British Standards Institute and other expert bodies, should produce guidance on the requirement for AI systems to be intelligible,” the committee adds. “The AI development sector should seek to adopt such guidance and to agree upon standards relevant to the sectors within which they work, under the auspices of the AI Council” — the latter being a proposed industry body it wants established to help ensure “transparency in AI”.

The committee is also recommending a cross-sector AI Code to try to steer developments in a positive, societally beneficial direction — though not for this to be codified in law (the suggestion is it could “provide the basis for statutory regulation, if and when this is determined to be necessary”).

Among the five principles they’re suggesting as a starting point for the voluntary code are that AI should be developed for “the common good and benefit of humanity”, and that it should operate on “principles of intelligibility and fairness”.

Though, elsewhere in the report, the committee points out it can be a challenge for humans to understand decisions made by some AI technologies — going on to suggest it may be necessary to refrain from using certain AI techniques for certain types of use-cases, at least until algorithmic accountability can be guaranteed.

“We believe it is not acceptable to deploy any artificial intelligence system which could have a substantial impact on an individual’s life, unless it can generate a full and satisfactory explanation for the decisions it will take,” it writes in a section discussing ‘intelligible AI’. “In cases such as deep neural networks, where it is not yet possible to generate thorough explanations for the decisions that are made, this may mean delaying their deployment for particular uses until alternative solutions are found.”
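The committee doesn’t prescribe how such explanations should be generated. One common, partial technique is permutation importance: measure how much a model’s performance drops when each input feature is scrambled. The sketch below is generic and illustrative, not the committee’s method, and works with any model exposing a `predict` method.

```python
# Generic permutation-importance sketch: bigger performance drop = feature mattered more.
import numpy as np

def permutation_importance(model, X, y, metric) -> dict:
    """model needs only a .predict(X); metric(y_true, y_pred) where higher is better."""
    baseline = metric(y, model.predict(X))
    importances = {}
    rng = np.random.default_rng(0)
    for j in range(X.shape[1]):
        X_perm = X.copy()
        X_perm[:, j] = rng.permutation(X_perm[:, j])   # destroy feature j's relationship with y
        importances[j] = baseline - metric(y, model.predict(X_perm))
    return importances
```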

A third principle the committee says it would like to see included in the proposed voluntary code is: “AI should not be used to diminish the data rights or privacy of individuals, families or communities”.

Though this is a curiously narrow definition — why not push for AI not to diminish rights, period?

“It’s almost as if ‘follow the law’ is too hard to say,” observes Sam Smith, a coordinator at patient data privacy advocacy group, medConfidential, discussing the report.

“Unlike other AI ‘ethics’ standards which seek to create something so weak no one opposes it, the existing standards and conventions of the rule of law are well known and well understood, and provide real and meaningful scrutiny of decisions, assuming an entity believes in the rule of law,” he adds.

Looking at the tech industry as a whole, it’s certainly hard to conclude that self-defined ‘ethics’ appear to offer much of a meaningful check on commercial players’ data processing and AI activities.

Topical case in point: Facebook has continued to claim there was nothing improper about the fact millions of people’s information was shared with professor Aleksandr Kogan. People “knowingly provided their information” is the company’s defensive claim.

Yet the vast majority of people whose personal data was harvested from Facebook by Kogan clearly had no idea what was possible under its platform terms — which, until 2015, allowed one user to ‘consent’ to the sharing of data on all their Facebook friends. (Hence ~270,000 downloaders of Kogan’s app being able to pass data on up to 87M Facebook users.)

So Facebook’s self-defined ‘ethical code’ has been shown to be worthless — aligning completely with its commercial imperatives, rather than supporting users to protect their privacy. (Just as its T&Cs are intended to cover its own “rear end”, rather than clearly inform people about their rights, as one US congressman memorably put it last week.)

“A week after Facebook were criticized by the US Congress, the only reference to the Rule of Law in this report is about exempting companies from liability for breaking it,” Smith adds in a MedConfidential response statement to the Lords report. “Public bodies are required to follow the rule of law, and any tools sold to them must meet those legal obligations. This standard for the public sector will drive the creation of tools which can be reused by all.”


Health data “should not be shared lightly”

The committee, which took evidence from Google-owned DeepMind as one of a multitude of expert witnesses during more than half a year’s worth of enquiry, touches critically on the AI company’s existing partnerships with UK National Health Service Trusts.

The first of which, dating from 2015 — and involving the sharing of ~1.6 million patients’ medical records with the Google-owned company — ran into trouble with the UK’s data protection regulator. The UK’s information commissioner concluded last summer that the Royal Free NHS Trust’s agreement with DeepMind had not complied with UK data protection law.

Patients’ medical records were used by DeepMind to develop a clinical task management app wrapped around an existing NHS algorithm for detecting a condition known as acute kidney injury. The app, called Streams, has been rolled out for use in the Royal Free’s hospitals — complete with PR fanfare. But it’s still not clear what legal basis exists to share patients’ data.

“Maintaining public trust over the safe and secure use of their data is paramount to the successful widespread deployment of AI and there is no better exemplar of this than personal health data,” the committee warns. “There must be no repeat of the controversy which arose between the Royal Free London NHS Foundation Trust and DeepMind. If there is, the benefits of deploying AI in the NHS will not be adopted or its benefits realised, and innovation could be stifled.”

The report also criticizes the “current piecemeal” approach being taken by NHS Trusts to sharing data with AI developers — saying this risks “the inadvertent under-appreciation of the data” and “NHS Trusts exposing themselves to inadequate data sharing arrangements”.

“The data held by the NHS could be considered a unique source of value for the nation. It should not be shared lightly, but when it is, it should be done in a manner which allows for that value to be recouped,” the committee writes.

A similar point — about not allowing a huge store of potential value which is contained within publicly-funded NHS datasets to be cheaply asset-stripped by external forces — was made by Oxford University’s Sir John Bell in a UK government-commissioned industrial strategy review of the life sciences sector last summer.

Despite similar concerns, the committee also calls for a framework for sharing NHS data to be published by the end of the year, and is pushing for NHS Trusts to digitize their current practices and records — with a target deadline of 2022 — in “consistent formats” so that people’s medical records can be made more accessible to AI developers.

But worryingly, given the general thrust towards making sensitive health data more accessible to third parties, the committee does not seem to have a very fine-grained grasp of data protection in a health context — where, for example, datasets can be extremely difficult to render truly anonymous given the level of detail typically involved.
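To see why “appropriately anonymised” is such a high bar for health data, consider k-anonymity, one simple measure of re-identification risk: even with names stripped, a combination of quasi-identifiers can single a patient out. The records below are invented for illustration.

```python
# Illustrative k-anonymity check: k=1 means at least one person is uniquely identifiable
# from the chosen quasi-identifiers, even though no names appear in the data.
from collections import Counter

records = [
    {"age_band": "30-34", "postcode_prefix": "NW1", "admission_week": "2018-W32"},
    {"age_band": "30-34", "postcode_prefix": "NW1", "admission_week": "2018-W32"},
    {"age_band": "70-74", "postcode_prefix": "TA1", "admission_week": "2018-W09"},  # unique
]

def k_anonymity(rows, quasi_identifiers):
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in rows)
    return min(groups.values())

print(k_anonymity(records, ["age_band", "postcode_prefix", "admission_week"]))  # prints 1
```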

Although they are at least calling for the relevant data protection and patient data bodies to be involved in provisioning the framework for sharing NHS data, alongside Trusts that have already worked with DeepMind (and in one case received an ICO wrist-slap).

They write:

We recommend that a framework for the sharing of NHS data should be prepared and published by the end of 2018 by NHS England (specifically NHS Digital) and the National Data Guardian for Health and Care, with the support of the ICO [information commissioner’s office] and the clinicians and NHS Trusts which already have experience of such arrangements (such as the Royal Free London and Moorfields Eye Hospital NHS Foundation Trusts), as well as the Caldicott Guardians [the NHS’ patient data advocates]. This framework should set out clearly the considerations needed when sharing patient data in an appropriately anonymised form, the precautions needed when doing so, and an awareness of the value of that data and how it is used. It must also take account of the need to ensure SME access to NHS data, and ensure that patients are made aware of the use of their data and given the option to opt out.

As the Facebook-Cambridge Analytica scandal has clearly illustrated, opt-outs alone cannot safeguard people’s data or their legal rights — which is why incoming EU data protection rules (GDPR) beef up consent requirements to require a clear affirmative. (And it goes without saying that opt-outs are especially concerning in a medical context where the data involved is so sensitive — yet, at least in the case of a DeepMind partnership with Taunton and Somerset NHS Trust, patients do not even appear to have been given the ability to say no to their data being processed.)

Opt-outs (i.e. rather than opt-in systems) for data-sharing and self-defined/voluntary codes of ‘ethics’ demonstrably do very little to protect people’s legal rights where digital data is concerned — even if it’s true, for example, that Facebook holds itself in check vs what it could theoretically do with data, as company execs have suggested (one wonders what kind of stuff they’re voluntarily refraining from, given what they have been caught trying to manipulate).

The wider risk of relying on consumer savvy to regulate commercial data sharing is that an educated, technologically aware few might be able to lock down — or reduce — access to their information; but the mainstream majority will have no clue they need to or even how it’s possible. And data protection for a select elite doesn’t sound very equitable.

Meanwhile, at least where this committee’s attitude to AI is concerned, developers and commercial entities are being treated with favorable encouragement — via the notion of a voluntary (and really pretty basic) code of AI ethics — rather than being robustly reminded they need to follow the law.

Given the scope and scale of current AI-fueled scandals, that risks the committee looking naive.

The government has, though, made AI a strategic priority, and policies to foster and accelerate data-sharing to drive tech developments are a key part of its digital and industrial strategies. So the report needs to be read within that wider context.

The committee does add its voice to questions about whether/how legal liability will mesh with automated decision making — writing that “clarity is required” on whether “new mechanisms for legal liability and redress” are needed or not.

“We recommend that the Law Commission consider the adequacy of existing legislation to address the legal liability issues of AI and, where appropriate, recommend to Government appropriate remedies to ensure that the law is clear in this area,” it says on this. “At the very least, this work should establish clear principles for accountability and intelligibility. This work should be completed as soon as possible.”

But this isn’t exactly cutting edge commentary. Last month the government announced a three-year regulatory review focused on self-driving cars and the law, for instance. And the liability point is already generally well-aired — and in the autonomous cars case, at least, now having its tires extensively kicked in the UK.

What’s less specifically discussed in government circles is how AIs are demonstrably piling pressure on existing laws. And what — if anything — should be done to address those kind of AI-fueled breaking points. (Exceptions: Terrorist content spreading via online platforms has been decried for some years, with government ministers more than happy to make platforms and technologies their scapegoat and even toughen laws; more recently hate speech on online platforms has also become a major political target for governments in Europe.)

The committee briefly touches on some of these societal pressure points in a section on AI’s impact on “social and political cohesion”, noting concerns raised to it about issues such as filter bubbles and the risk of AIs being used to manipulate elections. “[T]here is a rapidly growing need for public understanding of, and engagement with, AI to develop alongside the technology itself. The manipulation of data in particular will be a key area for public understanding and discussion in the coming months and years,” it writes here. 

However it has little in the way of gunpowder — merely recommending that research is commissioned into “the possible impact of AI on conventional and social media outlets”, and to investigate “measures which might counteract the use of AI to mislead or distort public opinion as a matter of urgency”.

Elsewhere in the report, it also raises an interesting concern about data monopolies — noting that investments by “large overseas technology companies in the UK economy” are “increasing consolidation of power and influence by a select few”, which it argues risks damaging the UK’s home-grown AI start-up sector.

But again there’s not much of substance in its response. The committee doesn’t seem to have formed its own ideas on how, or even whether, the government needs to address data concentrating power in the hands of big tech — beyond calling for “strong” competition frameworks. This lack of conviction is attributed to hearing mixed messages on the topic from its witnesses. (Though it may well also be related to the economic portion of the enquiry’s focus.)

“The monopolisation of data demonstrates the need for strong ethical, data protection and competition frameworks in the UK, and for continued vigilance from the regulators,” it concludes. “We urge the Government, and the Competition and Markets Authority, to review proactively the use and potential monopolisation of data by the big technology companies operating in the UK.”

The report also raises concerns about access to funding for UK AI startups to ensure they can continue scaling domestic businesses — recommending that a chunk of the £2.5BN investment fund at the British Business Bank, which the government announced in the Autumn Budget 2017, is “reserved as an AI growth fund for SMEs with a substantive AI component, and be specifically targeted at enabling such companies to scale up”.

No one who supports the startup cause would argue with trying to make more money available. But if data access has been sealed up by tech giants all the scale up funding in the world won’t help domestic AI startups break through that algorithmic ceiling.

Also touched on: The looming impact of Brexit, with the committee calling on the government to “commit to underwriting, and where necessary replacing, funding for European research and innovation programmes, after we have left the European Union”. Which boils down to another whistle in a now very long score of calls for replacement funding after the UK leaves the EU.

Funding for regulators is another concern, with a warning that the ICO must be “adequately and sustainably resourced” — as a result of the additional burden the committee expects AI to put on existing regulators.

This issue is also on the radar of the UK’s digital minister, Matt Hancock, who has said he’s considering what additional resources the ICO might need — such as the power to compel testimony from individuals. (Though the ICO itself has previously raised concerns that the minister and his data protection bill are risking undermining her authority.) For now it remains to be seen how well armed the agency will be to meet the myriad challenges generated and scaled by AI’s data processors.

“Blanket AI-specific regulation, at this stage, would be inappropriate,” the report adds. “We believe that existing sector-specific regulators are best placed to consider the impact on their sectors of any subsequent regulation which may be needed. We welcome that the Data Protection Bill and GDPR appear to address many of the concerns of our witnesses regarding the handling of personal data, which is key to the development of AI. The Government Office for AI, with the Centre for Data Ethics and Innovation, needs to identify the gaps, if any, where existing regulation may not be adequate. The Government Office for AI must also ensure that the existing regulators’ expertise is utilised in informing any potential regulation that may be required in the future.”

The committee’s last two starter principles for their voluntary AI code serve to underline how generously low the ethical bar is really being set here — boiling down to: AI shouldn’t be allowed to kill off free schools for our kids, nor be allowed to kill us — which may itself be another consequence of humans not always being able to clearly determine how AI does what it does or exactly what it might be doing to us.
