Timesdelhi.com

December 12, 2018

Quantum computing, not AI, will define our future

The word “quantum” gained currency in the late 20th century as a descriptor for something so significant that it defied common adjectives. For example, a “quantum leap” is a dramatic advancement (and also an early-’90s television series starring Scott Bakula).

At best, that is an imprecise (though entertaining) definition. When “quantum” is applied to “computing,” however, we are indeed entering an era of dramatic advancement.

Quantum computing is technology based on the principles of quantum theory, which explains the nature of energy and matter on the atomic and subatomic level. It relies on the existence of mind-bending quantum-mechanical phenomena, such as superposition and entanglement.

Erwin Schrödinger’s famous 1930s thought experiment, involving a cat that was both dead and alive at the same time, was intended to highlight the apparent absurdity of superposition, the principle that quantum systems can exist in multiple states simultaneously until observed or measured. Today quantum computers contain dozens of qubits (quantum bits), which take advantage of that very principle. Each qubit exists in a superposition of zero and one (i.e., has non-zero probabilities of being a zero or a one) until measured. The ability to deal with massive amounts of data and achieve previously unattainable levels of computing efficiency is the tantalizing potential that qubits give quantum computing.
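To make that probability language concrete, here is a minimal illustrative sketch in Python with NumPy. It is a toy example of my own, not code from IBM, D-Wave or any quantum SDK: it writes down a qubit’s two amplitudes and “measures” it repeatedly, with outcome probabilities given by the squared magnitudes of those amplitudes.

```python
import numpy as np

# A single qubit state |psi> = alpha|0> + beta|1>, with |alpha|^2 + |beta|^2 = 1.
# Here we pick an equal superposition, so each outcome is equally likely.
alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)

# Born rule: the probability of each outcome is the squared magnitude of its amplitude.
p0, p1 = abs(alpha) ** 2, abs(beta) ** 2

# "Measuring" forces the qubit to a definite 0 or 1; repeat to see the statistics.
rng = np.random.default_rng(0)
samples = rng.choice([0, 1], size=10_000, p=[p0, p1])
print(f"P(0)={p0:.2f}  P(1)={p1:.2f}  fraction of 1s observed={samples.mean():.3f}")
```

The point of the sketch is only that, until measured, the state is described by both amplitudes at once; each measurement then yields a definite 0 or 1 with the stated probabilities.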

While Schrödinger was thinking about zombie cats, Albert Einstein was observing what he described as “spooky action at a distance,” particles that seemed to be communicating faster than the speed of light. What he was seeing were entangled electrons in action. Entanglement refers to the observation that the state of particles from the same quantum system cannot be described independently of each other. Even when they are separated by great distances, they are still part of the same system. If you measure one particle, the rest seem to know instantly. The current record distance for measuring entangled particles is 1,200 kilometers or about 745.6 miles. Entanglement means that the whole quantum system is greater than the sum of its parts.
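As an illustration only (again a toy NumPy simulation, not a description of any real experiment or vendor library), the two-qubit Bell state below shows the correlation described above: each qubit on its own looks random, yet the two outcomes always agree, so measuring one immediately tells you the other.

```python
import numpy as np

# Bell state (|00> + |11>) / sqrt(2), written over the joint basis 00, 01, 10, 11.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
probs = np.abs(bell) ** 2  # probabilities of the four joint measurement outcomes

rng = np.random.default_rng(1)
outcomes = rng.choice(["00", "01", "10", "11"], size=10_000, p=probs)

# Each qubit on its own looks like a fair coin, yet the pair always agrees:
# "01" and "10" never occur, so learning one qubit instantly fixes the other.
agreement = np.mean([o[0] == o[1] for o in outcomes])
print("outcomes seen:", sorted(set(outcomes)), "agreement rate:", agreement)
```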

If these phenomena make you vaguely uncomfortable so far, perhaps I can assuage that feeling simply by quoting Schrödinger, who purportedly said after his development of quantum theory, “I don’t like it, and I’m sorry I ever had anything to do with it.”

Various parties are taking different approaches to quantum computing, so a single explanation of how it works would be subjective. But one principle may help readers get their arms around the difference between classical computing and quantum computing. Classical computers are binary: every bit can exist only in one of two states, either 0 or 1. Schrödinger’s cat illustrated that quantum systems can exist in more than one state at the same time. If you envision a sphere, a binary state would be one in which the “north pole,” say, is 0 and the south pole is 1. A qubit, by contrast, can occupy any point on that sphere, and relating those states between qubits enables correlations that make quantum computing well-suited for a variety of specific tasks that classical computing cannot accomplish. Creating qubits, and maintaining their existence long enough to accomplish quantum computing tasks, is an ongoing challenge.
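The sphere picture can be written down directly. The snippet below is a hypothetical sketch using the standard textbook parameterization of the Bloch sphere (my choice of illustration, not the article’s): a classical bit sits only at one of the two poles, while a qubit’s state is any point given by two angles.

```python
import numpy as np

def qubit_state(theta, phi):
    """State vector for the point (theta, phi) on the Bloch sphere."""
    return np.array([np.cos(theta / 2),
                     np.exp(1j * phi) * np.sin(theta / 2)])

# A classical bit only ever sits at one of the two poles of the sphere...
ket0 = qubit_state(0.0, 0.0)       # "north pole" = 0
ket1 = qubit_state(np.pi, 0.0)     # "south pole" = 1

# ...whereas a qubit can sit anywhere in between, e.g. tilted 60 degrees from the pole.
tilted = qubit_state(np.pi / 3, np.pi / 4)
print("P(measure 0) =", round(abs(tilted[0]) ** 2, 3))  # cos^2(pi/6) = 0.75
print("P(measure 1) =", round(abs(tilted[1]) ** 2, 3))  # sin^2(pi/6) = 0.25
```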

IBM researcher Jerry Chow in the quantum computing lab at IBM’s T.J. Watson Research Center.

Humanizing Quantum Computing

These are just the beginnings of the strange world of quantum mechanics. Personally, I’m enthralled by quantum computing. It fascinates me on many levels, from its technical arcana to its potential applications that could benefit humanity. But a qubit’s worth of witty obfuscation on how quantum computing works will have to suffice for now. Let’s move on to how it will help us create a better world.

Quantum computing’s purpose is to aid and extend the abilities of classical computing. Quantum computers will perform certain tasks much more efficiently than classical computers, providing us with a new tool for specific applications. Quantum computers will not replace their classical counterparts. In fact, quantum computers require classical computers to support their specialized abilities, such as systems optimization.

Quantum computers will be useful in advancing solutions to challenges in diverse fields such as energy, finance, healthcare and aerospace. Their capabilities will help us cure diseases, improve global financial markets, detangle traffic, combat climate change, and more. For instance, quantum computing has the potential to speed up pharmaceutical discovery and development, and to improve the accuracy of the atmospheric models used to track and explain climate change and its adverse effects.

I call this “humanizing” quantum computing, because such a powerful new technology should be used to benefit humanity, or we’re missing the boat.

Intel’s 17-qubit superconducting test chip for quantum computing has unique features for improved connectivity and better electrical and thermo-mechanical performance. (Credit: Intel Corporation)

An Uptick in Investments, Patents, Startups, and more

That’s my inner evangelist speaking. In factual terms, the latest verifiable, global figures for investment and patent applications reflect an uptick in both areas, a trend that’s likely to continue. Going into 2015, non-classified national investments in quantum computing reflected an aggregate global spend of about $1.75 billion, according to The Economist. The European Union led with $643 million. The U.S. was the top individual nation with $421 million invested, followed by China ($257 million), Germany ($140 million), Britain ($123 million) and Canada ($117 million). Twenty countries have invested at least $10 million in quantum computing research.

At the same time, according to a patent search enabled by Thomson Innovation, the U.S. led in quantum computing-related patent applications with 295, followed by Canada (79), Japan (78), Great Britain (36), and China (29). The number of patent families related to quantum computing was projected to increase 430 percent by the end of 2017.

The upshot is that nations, giant tech firms, universities, and start-ups are exploring quantum computing and its range of potential applications. Some parties (e.g., nation states) are pursuing quantum computing for security and competitive reasons. It’s been said that quantum computers will break current encryption schemes, kill blockchain, and serve other dark purposes.

I reject that proprietary, cutthroat approach. It’s clear to me that quantum computing can serve the greater good through an open-source, collaborative research and development approach that I believe will prevail once wider access to this technology is available. I’m confident crowd-sourcing quantum computing applications for the greater good will win.

If you want to get involved, check out the free tools that the household-name computing giants such as IBM and Google have made available, as well as the open-source offerings out there from giants and start-ups alike. Actual time on a quantum computer is available today, and access opportunities will only expand.

In keeping with my view that proprietary solutions will succumb to open-source, collaborative R&D and universal quantum computing value propositions, allow me to point out that several dozen start-ups in North America alone have jumped into the QC ecosystem along with governments and academia. Names such as Rigetti Computing, D-Wave Systems, 1Qbit Information Technologies, Inc., Quantum Circuits, Inc., QC Ware and Zapata Computing, Inc. may become well-known, or they may be subsumed by bigger players or undone by their burn rate – anything is possible in this nascent field.

Developing Quantum Computing Standards

Another way to get involved is to join the effort to develop quantum computing-related standards. Technical standards ultimately speed the development of a technology, introduce economies of scale, and grow markets. Quantum computer hardware and software development will benefit from a common nomenclature, for instance, and agreed-upon metrics to measure results.

Currently, the IEEE Standards Association Quantum Computing Working Group is developing two standards. One is for quantum computing definitions and nomenclature so we can all speak the same language. The other addresses performance metrics and performance benchmarking to enable measurement of quantum computers’ performance against classical computers and, ultimately, each other.

The need for additional standards will become clear over time.

News Source = techcrunch.com

Children are being “datafied” before we’ve understood the risks, report warns

A report by England’s children’s commissioner has raised concerns about how kids’ data is being collected and shared across the board, in both the private and public sectors.

In the report, entitled Who knows what about me?, Anne Longfield urges society to “stop and think” about what big data means for children’s lives.

Big data practices could result in a data-disadvantaged generation whose life chances are shaped by their childhood data footprint, her report warns.

The long-term impacts of profiling minors when these children become adults are simply not known, she writes.

“Children are being ‘datafied’ – not just via social media, but in many aspects of their lives,” says Longfield.

“For children growing up today, and the generations that follow them, the impact of profiling will be even greater – simply because there is more data available about them.”

By the time a child is 13 their parents will have posted an average of 1,300 photos and videos of them on social media, according to the report. After which this data mountain “explodes” as children themselves start engaging on the platforms — posting to social media 26 times per day, on average, and amassing a total of nearly 70,000 posts by age 18.

“We need to stop and think about what this means for children’s lives now and how it may impact on their future lives as adults,” warns Longfield. “We simply do not know what the consequences of all this information about our children will be. In the light of this uncertainty, should we be happy to continue forever collecting and sharing children’s data?

“Children and parents need to be much more aware of what they share and consider the consequences. Companies that make apps, toys and other products used by children need to stop filling them with trackers, and put their terms and conditions in language that children understand. And crucially, the Government needs to monitor the situation and refine data protection legislation if needed, so that children are genuinely protected – especially as technology develops,” she adds.

The report looks at what types of data are being collected on kids; where and by whom; and how it might be used in the short and long term — both for the benefit of children but also considering potential risks.

On the benefits side, the report cites a variety of still fairly experimental ideas that might make positive use of children’s data — such as for targeted inspections of services for kids to focus on areas where data suggests there are problems; NLP technology to speed up analysis of large data-sets (such as the NSPCC’s national case review repository) to find common themes and understand “how to prevent harm and promote positive outcomes”; predictive analytics using data from children and adults to more cost-effectively flag “potential child safeguarding risks to social workers”; and digitizing children’s Personal Child Health Record to make the current paper-based record more widely accessible to professionals working with children.

But while Longfield describes the increasing availability of data as offering “enormous advantages”, she is also very clear on major risks unfolding — be it to safety and well-being; child development and social dynamics; identity theft and fraud; and the longer term impact on children’s opportunity and life chances.

“In effect [children] are the ‘canary in the coal mine’ for wider society, encountering the risks before many adults become aware of them or are able to develop strategies to mitigate them,” she warns. “It is crucial that we are mindful of the risks and mitigate them.”

Transparency is lacking

One clear takeaway from the report is there is still a lack of transparency about how children’s data is being collected and processed — which in itself acts as a barrier to better understanding the risks.

“If we better understood what happens to children’s data after it is given – who collects it, who it is shared with and how it is aggregated – then we would have a better understanding of what the likely implications might be in the future, but this transparency is lacking,” Longfield writes — noting that this is true despite ‘transparency’ being the first key principle set out in the EU’s tough new privacy framework, GDPR.

The updated data protection framework did beef up protections for children’s personal data in Europe — introducing a new provision setting a 16-year-old age limit on kids’ ability to consent to their data being processed when it came into force on May 25, for example. (Although EU Member States can choose to write a lower age limit into their laws, with a hard cap set at 13.)

And mainstream social media apps, such as Facebook and Snapchat, responded by tweaking their T&Cs and/or products in the region. (Although some of the parental consent systems that were introduced to claim compliance with GDPR appear trivially easy for kids to bypass, as we’ve pointed out before.)

But, as Longfield points out, Article 5 of the GDPR states that data must be “processed lawfully, fairly and in a transparent manner in relation to individuals”.

Yet when it comes to children’s data the children’s commissioner says transparency is simply not there.

She also sees limitations with GDPR, from a children’s data protection perspective — pointing out that, for example, it does not prohibit the profiling of children entirely (stating only that it “should not be the norm”).

While another provision, Article 22 — which states that children have the right not to be subject to decisions based solely on automated processing (including profiling) if they have legal or similarly significant effects on them — also appears to be circumventable.

“They do not apply to decision-making where humans play some role, however minimal that role is,” she warns, which suggests another workaround for companies to exploit children’s data.

“Determining whether an automated decision-making process will have “similarly significant effects” is difficult to gauge given that we do not yet understand the full implications of these processes – and perhaps even more difficult to judge in the case of children,” Longfield also argues.

“There is still much uncertainty around how Article 22 will work in respect of children,” she adds. “The key area of concern will be in respect of any limitations in relation to advertising products and services and associated data protection practices.”

Recommendations

The report makes a series of recommendations for policymakers, with Longfield calling for schools to “teach children about how their data is collected and used, and what they can do to take control of their data footprints”.

She also presses the government to consider introducing an obligation on platforms that use “automated decision-making to be more transparent about the algorithms they use and the data fed into these algorithms” — where data collected from under 18s is used.

Which would essentially place additional requirements on all mainstream social media platforms to be far less opaque about the AI machinery they use to shape and distribute content on their platforms at vast scale. Given that few — if any — could claim to have no under-18s using their platforms.

She also argues that companies targeting products at children have far more explaining to do, writing: 

Companies producing apps, toys and other products aimed at children should be more transparent about any trackers capturing information about children. In particular where a toy collects any video or audio generated by a child this should be made explicit in a prominent part of the packaging or its accompanying information. It should be clearly stated if any video or audio content is stored on the toy or elsewhere and whether or not it is transmitted over the internet. If it is transmitted, parents should also be told whether or not it will be encrypted during transmission or when stored, who might analyse or process it and for what purposes. Parents should ask if information is not given or unclear.

Another recommendation for companies is that terms and conditions should be written in a language children can understand.

(Albeit, as it stands, tech industry T&Cs can be hard enough for adults to scratch the surface of — let alone have enough hours in the day to actually read.)

A recent U.S. study of kids apps, covered by BuzzFeed News, highlighted that mobile games aimed at kids can be highly manipulative, describing instances of apps making their cartoon characters cry if a child does not click on an in-app purchase, for example.

A key and contrasting problem with data processing is that it’s so murky: applied in the background, its harms are far less immediately visible, because only the data processor truly knows what’s being done with people’s — and indeed children’s — information.

Yet concerns about exploitation of personal data are stepping up across the board. And essentially touch all sectors and segments of society now, even as risks where kids are concerned may look the most stark.

This summer the UK’s privacy watchdog called for an ethical pause on the use by political campaigns of online ad targeting tools, for example, citing a range of concerns that data practices have got ahead of what the public knows and would accept.

It also called for the government to come up with a Code of Practice for digital campaigning to ensure that long-standing democratic norms are not being undermined.

So the children’s commissioner’s appeal for a collective ‘stop and think’ where the use of data is concerned is just one of a growing number of raised voices policymakers are hearing.

One thing is clear: Calls to quantify what big data means for society — to ensure powerful data-mining technologies are being applied in ways that are ethical and fair for everyone — aren’t going anywhere.

News Source = techcrunch.com

Campaign tool supplied to UK’s governing party by Trump-Pence app dev quietly taken out of service

An app that the UK’s governing party launched last year — for Conservative Party activists to gamify, ‘socialize’ and co-ordinate their campaigning activity — has been quietly pulled from app stores.

Its vanishing was flagged to us earlier today, by Twitter user Sarah Parks, who noticed that, when loaded, the Campaigner app now displays a message informing users the supplier is “no longer supporting clients based in Europe”.

“So we’re taking this opportunity to refresh our campaigning app,” it adds. “We will be back with a new and improved app early next year – well in time for the local elections.”

(Bad luck, then, should there end up being another very snap, Brexit-induced UK General Election in the meanwhile, as some have suggested may yet come to pass. But I digress… )

The supplier of the Conservative Campaigner app is — or was — a US-based app developer called uCampaign, which had also built branded apps for Trump-Pence 2016; the Republican National Committee; and the UK’s Vote Leave Brexit campaign, to name a few of the political campaigns it has counted as customers.

Here’s a few more: The (pro-gun) National Rifle Association and the (anti-abortion) SBA List.

We know the name of the Conservative Campaigner app’s supplier because this summer we raised privacy concerns about the app — on account of its use of uCampaign’s boilerplate privacy policy, which is what you found if you clicked to read the app’s privacy policy earlier this year.

The wording of uCampaign’s privacy policy suggested the Conservative Campaigner app could be harvesting users’ mobile phone contacts — if they chose to sync their contacts book with it.

The privacy policy for the app was subsequently changed to point to the Conservative Party’s own privacy policy — with the change of privacy policy taking place just before a tough new EU-wide data protection framework, GDPR, came into force on May 25 this year.

Prior to May 23, the privacy policy of the Conservatives’ digital campaigning app suggests it was harvesting contacts data from users — and potentially sharing non-users’ personal information with entities of uCampaign’s choosing (given, for example, the company’s privacy policy gave itself the right to “share your Personal Information with other organizations, groups, causes, campaigns, political organizations, and our clients that we believe have similar viewpoints, principles or objectives as us”).

This sort of consentless scraping of large amounts of networked personal data — by sucking up information on users’ friend groups and other personal connections — has of course had a massive spotlight thrown on it this year, as a result of the Facebook Cambridge Analytica data misuse scandal in which the personal data of tens of millions of Facebook users was extracted from the social network via a quiz app that used a (now defunct) Facebook friends API to grab data on non-users who would not have even had the chance to agree to the app’s terms.

Safe to say, this modus operandi wasn’t cool then — and it’s certainly not cool now.

Politicians all over the globe have been shaken awake by the Cambridge Analytica scandal, and are now raising all sorts of concerns about how data and digital tools are being used (and or misused and abused).

The EU parliament recently called for an independent audit of Facebook, for example.

In the UK, a committee that’s been probing the impact of social media-accelerated disinformation on democratic processes published a report this summer calling for a levy on social media to defend democracy. Its lengthy preliminary report also suggested urgent amendments to domestic electoral law to reflect the use of digital technologies for political campaigning.

Though the UK’s Conservative minority government — and the party behind the now on-pause Conservative Campaigner app — apparently disagrees on the need for speed, declining in its response last week to accept most of the committee’s laundry list of recommended changes.

The DCMS committee’s inquiry into political campaigns’ use (and misuse) of personal data continues — now at a transnational level.

An ethical pause?

Shortly after we published our privacy concerns about the Conservative Campaigner app, the UK’s data protection watchdog issued its own lengthy report detailing extensive concerns about how UK political parties were misusing personal data — and calling for an ethical pause on the use of microtargeting for election campaigning purposes.

Which does rather beg the question whether the Conservative Campaigner app going AWOL now, until a reboot under a new supplier (presumably) next year, might not represent just such an ‘ethical pause’.

The app is, after all, only just over a year old.

We asked the Conservative Party a number of questions about the Campaigner app via email — after a press office spokeswoman declined to discuss the matter on the telephone.

Five hours later it emailed the following brief statement, attributed to a Conservative spokesperson:

We work with a number of different suppliers and all Conservative party campaigning is compliant with the relevant data protection legislation including GDPR.

The spokesperson did not engage with the substance of the vast majority of our concerns — such as those relating to the app’s handling of people’s data and the legal bases for any transfers of UK voter data to the US.

Instead the spokesperson reiterated the in-app notification which claims “the supplier” is no longer supporting clients based in Europe.

They also said the party is currently reviewing its campaigning tools, without providing any further detail.

We’ve included our full list of questions at the bottom of this post.

We’ve also reached out to the ICO to ask if it had any concerns related to how the Conservative Campaigner app was handling people’s data.

Similarly, the former deputy director & head of digital strategy for the Conservative party, Anthony Hind, declined to engage with the same data protection concerns when we raised them with him directly, back in July.

According to his LinkedIn profile he’s since moved on from the Conservatives to head up social media for the Confederation of British Industry.

For this report we also reached out to uCampaign’s founder and CEO, Thomas Peters, to ask for confirmation on the company’s situation vis-a-vis European clients.

At the time of writing Peters had not responded to our emails. We’ll update this story with any uCampaign response.

The company’s website still includes the UK Conservative Party listed as a client — though the language used on the webpage does not make it explicit whether or not the party is a current client…

Another graphic on the same page plots the UK flag on a world map depicting what uCampaign dubs its “global platform”, where it’s marked along with several other European flags — including Ireland, France, Germany and Malta, suggesting uCampaign has — or had — multiple European clients.

Here’s the full list of questions we put to the Conservatives about their campaigner app. To our eye it has answered just one of them:

Can you confirm — on the record — the reasons for the app being pulled?

Does the Conservative Party intend to continue working with uCampaign for the new campaign app that will relaunch next year? Or does the party have a new supplier?

If the latter, where is the new supplier based? In the UK or in the US?

Did the Conservative Party have any concerns at all related to using uCampaign as a supplier? (Given, for example, concerns flagged about its data privacy practices by one of the DCMS committee’s recent reports — following an inquiry investigating digital campaigning.)

If the Conservative Party was aware of data privacy concerns pertaining to uCampaign’s practices can you confirm when the party became aware of such concerns?

Was the party aware that the privacy policy it used for the app prior to May 23, 2018 was uCampaign’s own privacy policy?

This privacy policy stated that the app could harvest data from users’ mobile phone contacts and share that data with unknown third parties of the developer’s choosing — including other political campaigns. Is the Conservative Party comfortable with having its supporters’ data shared with other political campaigns?

What due diligence did the Conservative Party carry out before it selected uCampaign as its app supplier?

After signing up the supplier, did the Conservative Party carry out a privacy impact assessment related to how the app operates?

Please confirm all the data points that the app was collecting from users, and what each of those data points was being used for

Where was app user data being processed? In the US, where uCampaign is based, or in the UK where potential voters live?

If the US, what was the legal basis for any transfer of data from UK users to the US?

Is the Conservative Party confident its use of the campaigner app did not breach UK data protection law?

Earlier this year the former Cabinet Minister Dominic Grieve suggested that the bosses of tech giants involved in the Cambridge Analytica data misuse scandal should be jailed for their part in abusing online data for political and financial gain. Does the Conservative Party support Grieve’s position on online data abuse?

Has anyone been sacked or sanctioned for their part in procuring uCampaign as the app supplier — and/or overseeing the operation of the Conservative Campaigner app itself?

Will the Conservative Party commit to notifying all individuals whose data was shared with uCampaign without their explicit consent?

Can the Conservative Party confirm how many individuals had their personal data shared with uCampaign?

Has the Information Commissioner’s Office raised any concerns with the Conservative Party about the Campaigner app?

Has the Conservative Party itself reported any concerns about the app/uCampaign to the ICO?

News Source = techcrunch.com

Big tech must not reframe digital ethics in its image

Facebook founder Mark Zuckerberg’s visage loomed large over the European parliament this week, both literally and figuratively, as global privacy regulators gathered in Brussels to interrogate the human impacts of technologies that derive their power and persuasiveness from our data.

The eponymous social network has been at the center of a privacy storm this year. And every fresh Facebook content concern — be it about discrimination or hate speech or cultural insensitivity — adds to a damaging flood.

The overarching discussion topic at the privacy and data protection confab, both in the public sessions and behind closed doors, was ethics: How to ensure engineers, technologists and companies operate with a sense of civic duty and build products that serve the good of humanity.

So, in other words, how to ensure people’s information is used ethically — not just in compliance with the law. Fundamental rights are increasingly seen by European regulators as a floor not the ceiling. Ethics are needed to fill the gaps where new uses of data keep pushing in.

As the EU’s data protection supervisor, Giovanni Buttarelli, told delegates at the start of the public portion of the International Conference of Data Protection and Privacy Commissioners: “Not everything that is legally compliant and technically feasible is morally sustainable.”

As if on cue Zuckerberg kicked off a pre-recorded video message to the conference with another apology. Albeit this was only for not being there to give an address in person. Which is not the kind of regret many in the room are now looking for, as fresh data breaches and privacy incursions keep being stacked on top of Facebook’s Cambridge Analytica data misuse scandal like an unpalatable layer cake that never stops being baked.

Evidence of a radical shift of mindset is what champions of civic tech are looking for — from Facebook in particular and adtech in general.

But there was no sign of that in Zuckerberg’s potted spiel. Rather he displayed the kind of masterfully slick PR manoeuvring that’s associated with politicians on the campaign trail. It’s the natural patter for certain big tech CEOs too, these days, in a sign of our sociotechnical political times.

(See also: Facebook hiring ex-UK deputy PM, Nick Clegg, to further expand its contacts database of European lawmakers.)

And so the Facebook founder seized on the conference’s discussion topic of big data ethics and tried to zoom right back out again. Backing away from talk of tangible harms and damaging platform defaults — aka the actual conversational substance of the conference (from talk of how dating apps are impacting how much sex people have and with whom they’re doing it; to shiny new biometric identity systems that have rebooted discriminatory caste systems) — to push the idea of a need to “strike a balance between speech, security, privacy and safety”.

This was Facebook trying to reframe the idea of digital ethics — to make it so very big-picture-y that it could embrace his people-tracking ad-funded business model as a fuzzily wide public good, with a sort of ‘oh go on then’ shrug.

“Every day people around the world use our services to speak up for things they believe in. More than 80 million small businesses use our services, supporting millions of jobs and creating a lot of opportunity,” said Zuckerberg, arguing for a ‘both sides’ view of digital ethics. “We believe we have an ethical responsibility to support these positive uses too.”

Indeed, he went further, saying Facebook believes it has an “ethical obligation to protect good uses of technology”.

And from that self-serving perspective almost anything becomes possible — as if Facebook is arguing that breaking data protection law might really be the ‘ethical’ thing to do. (Or, as the existentialists might put it: ‘If god is dead, then everything is permitted’.)

It’s an argument that radically elides some very bad things, though. And glosses over problems that are systemic to Facebook’s ad platform.

A little later, Google’s CEO Sundar Pichai also dropped into the conference in video form, bringing much the same message.

“The conversation about ethics is important. And we are happy to be a part of it,” he began, before an instant hard pivot into referencing Google’s founding mission of “organizing the world’s information — for everyone” (emphasis his), before segueing — via “knowledge is empowering” — to asserting that “a society with more information is better off than one with less”.

Is having access to more information of unknown and dubious or even malicious provenance better than having access to some verified information? Google seems to think so.

SAN FRANCISCO, CA – OCTOBER 04: Pichai Sundararajan, known as Sundar Pichai, CEO of Google Inc. speaks during an event to introduce Google Pixel phone and other Google products on October 4, 2016 in San Francisco, California. The Google Pixel is intended to challenge the Apple iPhone in the premium smartphone category. (Photo by Ramin Talaie/Getty Images)

The pre-recorded Pichai didn’t have to concern himself with all the mental ellipses bubbling up in the thoughts of the privacy and rights experts in the room.

“Today that mission still applies to everything we do at Google,” his digital image droned on, without mentioning what Google is thinking of doing in China. “It’s clear that technology can be a positive force in our lives. It has the potential to give us back time and extend opportunity to people all over the world.

“But it’s equally clear that we need to be responsible in how we use technology. We want to make sound choices and build products that benefit society. That’s why earlier this year we worked with our employees to develop a set of AI principles that clearly state what types of technology applications we will pursue.”

Of course it sounds fine. Yet Pichai made no mention of the staff who’ve actually left Google because of ethical misgivings. Nor the employees still there and still protesting its ‘ethical’ choices.

It’s not just that the Internet’s adtech duopoly seems to be singing from the same ‘ads for greater good trumping the bad’ hymn sheet; the Internet’s adtech duopoly is doing exactly that.

The ‘we’re not perfect and have lots more to learn’ line that also came from both CEOs seems mostly intended to manage regulatory expectation vis-a-vis data protection — and indeed on the wider ethics front.

They’re not promising to do no harm. Nor to always protect people’s data. They’re literally saying they can’t promise that. Ouch.

Meanwhile, another common FaceGoog message — an intent to introduce ‘more granular user controls’ — just means they’re piling even more responsibility onto individuals to proactively check (and keep checking) that their information is not being horribly abused.

This is a burden neither company can speak to in any other fashion, because the real solution would be for their platforms not to hoard people’s data in the first place.

The other ginormous elephant in the room is big tech’s massive size; which is itself skewing the market and far more besides.

Neither Zuckerberg nor Pichai directly addressed the notion of overly powerful platforms themselves causing structural societal harms, such as by eroding the civically minded institutions that are essential to defend free societies and indeed uphold the rule of law.

Of course it’s an awkward conversation topic for tech giants if vital institutions and societal norms are being undermined because of your cut-throat profiteering on the unregulated cyber seas.

A great tech fix to avoid answering awkward questions is to send a video message in your CEO’s stead. And/or a few minions. Facebook VP and chief privacy officer, Erin Egan, and Google’s SVP of global affairs Kent Walker, were duly dispatched and gave speeches in person.

They also had a handful of audience questions put to them by an on stage moderator. So it fell to Walker, not Pichai, to speak to Google’s contradictory involvement in China in light of its foundational claim to be a champion of the free flow of information.

“We absolutely believe in the maximum amount of information available to people around the world,” Walker said on that topic, after being allowed to intone on Google’s goodness for almost half an hour. “We have said that we are exploring the possibility of ways of engaging in China to see if there are ways to follow that mission while complying with laws in China.

“That’s an exploratory project — and we are not in a position at this point to have an answer to the question yet. But we continue to work.”

Egan, meanwhile, batted away her trio of audience concerns — about Facebook’s lack of privacy by design/default; and how the company could ever address ethical concerns without dramatically changing its business model — by saying it has a new privacy and data use team sitting horizontally across the business, as well as a data protection officer (an oversight role mandated by the EU’s GDPR; into which Facebook plugged its former global deputy chief privacy officer, Stephen Deadman, earlier this year).

She also said the company continues to invest in AI for content moderation purposes. So, essentially, more trust us. And trust our tech.

She also replied in the affirmative when asked whether Facebook will “unequivocally” support a strong federal privacy law in the US — with protections “equivalent” to those in Europe’s data protection framework.

But of course Zuckerberg has said much the same thing before — while simultaneously advocating for weaker privacy standards domestically. So who now really wants to take Facebook at its word on that? Or indeed on anything of human substance.

Not the EU parliament, for one. MEPs sitting in the parliament’s other building, in Strasbourg, this week adopted a resolution calling for Facebook to agree to an external audit by regional oversight bodies.

But of course Facebook prefers to run its own audit. And in a response statement the company claims it’s “working relentlessly to ensure the transparency, safety and security” of people who use its service (so bad luck if you’re one of those non-users it also tracks then). Which is a very long-winded way of saying ‘no, we’re not going to voluntarily let the inspectors in’.

Facebook’s problem now is that trust, once burnt, takes years and mountains’ worth of effort to restore.

This is the flip side of ‘move fast and break things’. (Indeed, one of the conference panels was entitled ‘move fast and fix things’.) It’s also the hard-to-shift legacy of an unapologetically blind ~decade-long dash for growth regardless of societal cost.

Given that, it looks unlikely that Zuckerberg’s attempt to paint a portrait of digital ethics in his company’s image will do much to restore trust in Facebook.

Not so long as the platform retains the power to cause damage at scale.

It was left to everyone else at the conference to discuss the hollowing out of democratic institutions, societal norms, human interactions and so on — as a consequence of data (and market capital) being concentrated in the hands of the ridiculously powerful few.

“Today we face the gravest threat to our democracy, to our individual liberty in Europe since the war and the United States perhaps since the civil war,” said Barry Lynn, a former journalist and senior fellow at the Google-backed New America Foundation think tank in Washington, D.C., where he had directed the Open Markets Program — until it was shut down after he wrote critically about, er, Google.

“This threat is the consolidation of power — mainly by Google, Facebook and Amazon — over how we speak to one another, over how we do business with one another.”

Meanwhile the original architect of the World Wide Web, Tim Berners-Lee, who has been warning about the crushing impact of platform power for years, is now working on trying to decentralize the net’s data hoarders via new technologies intended to give users greater agency over their data.

On the democratic damage front, Lynn pointed to how news media is being hobbled by an adtech duopoly now sucking hundreds of billions of ad dollars out of the market annually — by renting out what he dubbed their “manipulation machines”.

Not only do they sell access to these ad targeting tools to mainstream advertisers — to sell the usual products, like soap and diapers — they’re also, he pointed out, taking dollars from “autocrats and would be autocrats and other social disruptors to spread propaganda and fake news to a variety of ends, none of them good”.

The platforms’ unhealthy market power is the result of a theft of people’s attention, argued Lynn. “We cannot have democracy if we don’t have a free and robustly funded press,” he warned.

His solution to the society-deforming might of platform power? Not a newfangled decentralization tech but something much older: Market restructuring via competition law.

“The basic problem is how we structure or how we have failed to structure markets in the last generation. How we have licensed or failed to license monopoly corporations to behave.

“In this case what we see here is this great mass of data. The problem is the combination of this great mass of data with monopoly power in the form of control over essential pathways to the market combined with a license to discriminate in the pricing and terms of service. That is the problem.”

“The result is to centralize,” he continued. “To pick and choose winners and losers. In other words the power to reward those who heed the will of the master, and to punish those who defy or question the master — in the hands of Google, Facebook and Amazon… That is destroying the rule of law in our society and is replacing rule of law with rule by power.”

For an example of an entity that’s currently being punished by Facebook’s grip on the social digital sphere you need look no further than Snapchat.

Also on the stage in person: Apple’s CEO Tim Cook, who didn’t mince his words either — attacking what he dubbed a “data industrial complex” which he said is “weaponizing” people’s personal data against them for private profit.

The adtech modus operandi sums to “surveillance”, Cook asserted.

Cook called this a “crisis”, painting a picture of technologies being applied in an ethics-free vacuum to “magnify our worst human tendencies… deepen divisions, incite violence and even undermine our shared sense of what is true and what is false” — by “taking advantage of user trust”.

“This crisis is real… And those of us who believe in technology’s potential for good must not shrink from this moment,” he warned, telling the assembled regulators that Apple is aligned with their civic mission.

Of course Cook’s position also aligns with Apple’s hardware-dominated business model — in which the company makes most of its money by selling premium priced, robustly encrypted devices, rather than monopolizing people’s attention to sell their eyeballs to advertisers.

The growing public and political alarm over how big data platforms stoke addiction and exploit people’s trust and information — and the idea that an overarching framework of not just laws but digital ethics might be needed to control this stuff — dovetails neatly with the alternative track that Apple has been pounding for years.

So for Cupertino it’s easy to argue that the ‘collect it all’ approach of data-hungry platforms is both lazy thinking and irresponsible engineering, as Cook did this week.

“For artificial intelligence to be truly smart it must respect human values — including privacy,” he said. “If we get this wrong, the dangers are profound. We can achieve both great artificial intelligence and great privacy standards. It is not only a possibility — it is a responsibility.”

Yet Apple is not only a hardware business. In recent years the company has been expanding and growing its services business. It even involves itself in (a degree of) digital advertising. And it does business in China.

It is, after all, still a for-profit business — not a human rights regulator. So we shouldn’t be looking to Apple to spec out a digital ethical framework for us, either.

No profit making entity should be used as the model for where the ethical line should lie.

Apple sets a far higher standard than other tech giants, certainly, even as its grip on the market is far more partial because it doesn’t give its stuff away for free. But it’s hardly perfect where privacy is concerned.

One inconvenient example for Apple is that it takes money from Google to make the company’s search engine the default for iOS users — even as it offers iOS users a choice of alternatives (if they go looking to switch) which includes pro-privacy search engine DuckDuckGo.

DDG is a veritable minnow vs Google, and Apple builds products for the consumer mainstream, so it is supporting privacy by putting a niche search engine alongside a behemoth like Google — as one of just four choices it offers.

But defaults are hugely powerful. So Google search being the iOS default means most of Apple’s mobile users will have their queries fed straight into Google’s surveillance database, even as Apple works hard to keep its own servers clear of user data by not collecting their stuff in the first place.

There is a contradiction there. So there is a risk for Apple in amping up its rhetoric against a “data industrial complex” — and making its naturally pro-privacy preference sound like a conviction principle — because it invites people to dial up critical lenses and point out where its defence of personal data against manipulation and exploitation does not live up to its own rhetoric.

One thing is clear: In the current data-based ecosystem all players are conflicted and compromised.

Though only a handful of tech giants have built unchallengeably massive tracking empires via the systematic exploitation of other people’s data.

And as the apparatus of their power gets exposed, these attention-hogging adtech giants are making a dumb show of papering over the myriad ways their platforms pound on people and societies — offering paper-thin promises to ‘do better next time’ — when ‘better’ is not even close to being enough.

Call for collective action

Increasingly powerful data-mining technologies must be sensitive to human rights and human impacts, that much is crystal clear. Nor is it enough to be reactive to problems after or even at the moment they arise. No engineer or system designer should feel it’s their job to manipulate and trick their fellow humans.

Dark pattern designs should be repurposed into a guidebook of what not to do and how not to transact online. (If you want a mission statement for thinking about this it really is simple: Just don’t be a dick.)

Sociotechnical Internet technologies must always be designed with people and societies in mind — a key point that was hammered home in a keynote by Berners-Lee, the inventor of the World Wide Web, and the tech guy now trying to defang the Internet’s occupying corporate forces via decentralization.

“As we’re designing the system, we’re designing society,” he told the conference. “Ethical rules that we choose to put in that design [impact society]… Nothing is self evident. Everything has to be put out there as something that we think will be a good idea as a component of our society.”

The penny looks to be dropping for privacy watchdogs in Europe: the idea that assessing fairness — not just legal compliance — must be a key component of their thinking going forward, and so the direction of regulatory travel.

Watchdogs like the UK’s ICO — which just fined Facebook the maximum possible penalty for the Cambridge Analytica scandal — said so this week. “You have to do your homework as a company to think about fairness,” said Elizabeth Denham, when asked ‘who decides what’s fair’ in a data ethics context. “At the end of the day if you are working, providing services in Europe then the regulator’s going to have something to say about fairness — which we have in some cases.”

“Right now, we’re working with some Oxford academics on transparency and algorithmic decision making. We’re also working on our own tool as a regulator on how we are going to audit algorithms,” she added. “I think in Europe we’re leading the way — and I realize that’s not the legal requirement in the rest of the world but I believe that more and more companies are going to look to the high standard that is now in place with the GDPR.

“The answer to the question is ‘is this fair?’ It may be legal — but is this fair?”

So the short version is data controllers need to prepare themselves to consult widely — and examine their consciences closely.

Rising automation and AI makes ethical design choices even more imperative, as technologies become increasingly complex and intertwined, thanks to the massive amounts of data being captured, processed and used to model all sorts of human facets and functions.

The closed session of the conference produced a declaration on ethics and data in artificial intelligence — setting out a list of guiding principles to act as “core values to preserve human rights” in the developing AI era — which included concepts like fairness and responsible design.

Few would argue that a powerful AI-based technology such as facial recognition isn’t inherently in tension with a fundamental human right like privacy.

Nor that such powerful technologies aren’t at huge risk of being misused and abused to discriminate and/or suppress rights at vast and terrifying scale. (See, for example, China’s push to install a social credit system.)

Biometric ID systems might start out with claims of the very best intentions — only to shift function and impact later. The dangers to human rights of function creep on this front are very real indeed. And are already being felt in places like India — where the country’s Aadhaar biometric ID system has been accused of rebooting ancient prejudices by promoting a digital caste system, as the conference also heard.

The consensus from the event is it’s not only possible but vital to engineer ethics into system design from the start whenever you’re doing things with other people’s data. And that routes to market must be found that don’t require dispensing with a moral compass to get there.

The notion of data-processing platforms becoming information fiduciaries — i.e. having a legal duty of care towards their users, as a doctor or lawyer does — was floated several times during public discussions. Though such a step would likely require more legislation, not just adequately rigorous self examination.

In the meantime, civic society must get to grips, and grapple proactively, with technologies like AI so that people and societies can come to collective agreement about a digital ethics framework. This is vital work to defend the things that matter to communities so that the anthropogenic platforms Berners-Lee referenced are shaped by collective human values, not the other way around.

It’s also essential that public debate about digital ethics does not get hijacked by corporate self interest.

Tech giants are not only inherently conflicted on the topic but — right across the board — they lack the internal diversity to offer a broad enough perspective.

People and civic society must teach them.

A vital closing contribution came from the French data watchdog’s Isabelle Falque-Pierrotin, who summed up discussions that had taken place behind closed doors as the community of global data protection commissioners met to plot next steps.

She explained that members had adopted a roadmap for the future of the conference to evolve beyond a mere talking shop and take on a more visible, open governance structure — to allow it to be a vehicle for collective, international decision-making on ethical standards, and so alight on and adopt common positions and principles that can push tech in a human direction.

The initial declaration document on ethics and AI is intended to be just the start, she said — warning that “if we can’t act we will not be able to collectively control our future”, and couching ethics as “no longer an option, it is an obligation”.

She also said it’s essential that regulators get with the program and enforce current privacy laws — to “pave the way towards a digital ethics” — echoing calls from many speakers at the event for regulators to get on with the job of enforcement.

This is vital work to defend values and rights against the overreach of the digital here and now.

“Without ethics, without an adequate enforcement of our values and rules our societal models are at risk,” Falque-Pierrotin also warned. “We must act… because if we fail, there won’t be any winners. Not the people, nor the companies. And certainly not human rights and democracy.”

If the conference had one short sharp message it was this: Society must wake up to technology — and fast.

“We’ve got a lot of work to do, and a lot of discussion — across the boundaries of individuals, companies and governments,” agreed Berners-Lee. “But very important work.

“We have to get commitments from companies to make their platforms constructive and we have to get commitments from governments to look at whenever they see that a new technology allows people to be taken advantage of, allows a new form of crime to get onto it by producing new forms of the law. And to make sure that the policies that they do are thought about in respect to every new technology as they come out.”

This work is also an opportunity for civic society to define and reaffirm what’s important. So it’s not only about mitigating risks.

But, equally, not doing the job is unthinkable — because there’s no putting the AI genie back in the bottle.

News Source = techcrunch.com

Apple’s Tim Cook makes blistering attack on the “data industrial complex”

Apple’s CEO Tim Cook has joined the chorus of voices warning that data itself is being weaponized against people and societies — arguing that the trade in digital data has exploded into a “data industrial complex”.

Cook did not namecheck the adtech elephants in the room: Google, Facebook and other background data brokers that profit from privacy-hostile business models. But his target was clear.

“Our own information — from the everyday to the deeply personal — is being weaponized against us with military efficiency,” warned Cook. “These scraps of data, each one harmless enough on its own, are carefully assembled, synthesized, traded and sold.

“Taken to the extreme this process creates an enduring digital profile and lets companies know you better than you may know yourself. Your profile is a bunch of algorithms that serve up increasingly extreme content, pounding our harmless preferences into harm.”

“We shouldn’t sugarcoat the consequences. This is surveillance,” he added.

Cook was giving the keynote speech at the 40th International Conference of Data Protection and Privacy Commissioners (ICDPPC), which is being held in Brussels this year, right inside the European Parliament’s Hemicycle.

“Artificial intelligence is one area I think a lot about,” he told an audience of international data protection experts and policy wonks, which included the inventor of the World Wide Web itself, Sir Tim Berners-Lee, another keynote speaker at the event.

“At its core this technology promises to learn from people individually to benefit us all. But advancing AI by collecting huge personal profiles is laziness, not efficiency,” Cook continued.

“For artificial intelligence to be truly smart it must respect human values — including privacy. If we get this wrong, the dangers are profound. We can achieve both great artificial intelligence and great privacy standards. It is not only a possibility — it is a responsibility.”

That sense of responsibility is why Apple puts human values at the heart of its engineering, Cook said.

In the speech, which we previewed yesterday, he also laid out a positive vision for technology’s “potential for good” — when combined with “good policy and political will”.

“We should celebrate the transformative work of the European institutions tasked with the successful implementation of the GDPR. We also celebrate the new steps taken, not only here in Europe but around the world — in Singapore, Japan, Brazil, New Zealand. In many more nations regulators are asking tough questions — and crafting effective reform.

“It is time for the rest of the world, including my home country, to follow your lead.”

Cook said Apple is “in full support of a comprehensive, federal privacy law in the United States” — making the company’s clearest statement yet of support for robust domestic privacy laws, and earning himself a burst of applause from assembled delegates in the process.

Cook argued for a US privacy law to prioritize four things:

  1. data minimization — “the right to have personal data minimized”, saying companies should “challenge themselves” to de-identify customer data or not collect it in the first place
  2. transparency — “the right to knowledge”, saying users should “always know what data is being collected and what it is being collected for”, which he said is the only way to “empower users to decide what collection is legitimate and what isn’t”. “Anything less is a sham,” he added
  3. the right to access — saying companies should recognize that “data belongs to users”, and it should be made easy for users to get a copy of, correct and delete their personal data
  4. the right to security — saying “security is foundational to trust and all other privacy rights”

“We see vividly, painfully how technology can harm, rather than help,” he continued, arguing that platforms can “magnify our worst human tendencies… deepen divisions, incite violence and even undermine our shared sense of what is true or false”.

“This crisis is real. Those of us who believe in technology’s potential for good must not shrink from this moment”, he added, saying the company hopes “to work with you as partners”, and that: “Our missions are closely aligned.”

He also took a sideswipe at tech industry efforts to defang privacy laws — saying that some companies will “endorse reform in public and then resist and undermine it behind closed doors”.

“They may say to you our companies can never achieve technology’s true potential if there were strengthened privacy regulations. But this notion isn’t just wrong it is destructive — technology’s potential is and always must be rooted in the faith people have in it. In the optimism and the creativity that stirs the hearts of individuals. In its promise and capacity to make the world a better place.”

“It’s time to face facts,” Cook added. “We will never achieve technology’s true potential without the full faith and confidence of the people who use it.”

Opening the conference before the Apple CEO took to the stage, Europe’s data protection supervisor Giovanni Buttarelli argued that digitization is driving a new generational shift in the respect for privacy — saying there is an urgent need for regulators and indeed societies to agree on and establish “a sustainable ethics for a digitised society”.

“The so-called ‘privacy paradox’ is not that people have conflicting desires to hide and to expose. The paradox is that we have not yet learned how to navigate the new possibilities and vulnerabilities opened up by rapid digitization,” Buttarelli argued.

“To cultivate a sustainable digital ethics, we need to look, objectively, at how those technologies have affected people in good ways and bad; we need a critical understanding of the ethics informing decisions by companies, governments and regulators whenever they develop and deploy new technologies.”

The EU’s data protection supervisor told an audience largely made up of data protection regulators and policy wonks that laws that merely set a minimum standard are not enough, including the EU’s freshly minted GDPR.

“We need to ask whether our moral compass has been suspended in the drive for scale and innovation,” he said. “At this tipping point for our digital society, it is time to develop a clear and sustainable moral code.”

“We do not have a[n ethical] consensus in Europe, and we certainly do not have one at a global level. But we urgently need one,” he added.

“Not everything that is legally compliant and technically feasible is morally sustainable,” Buttarelli continued, pointing out that “privacy has too easily been reduced to a marketing slogan.

“But ethics cannot be reduced to a slogan.”

“For us as data protection authorities, I believe that ethics is among our most pressing strategic challenges,” he added.

“We have to be able to understand technology, and to articulate a coherent ethical framework. Otherwise how can we perform our mission to safeguard human rights in the digital age?”

News Source = techcrunch.com
