
General Data Protection Regulation

Cognigo raises $8.5M for its AI-driven data protection platform


Cognigo, a startup that aims to use AI and machine learning to help enterprises protect their data and stay in compliance with regulations like GDPR, today announced that it has raised an $8.5 million Series A round. The round was led by Israel-based crowdfunding platform OurCrowd, with participation from privacy company Prosegur and State of Mind Ventures.

The company promises that it can help businesses protect their critical data assets and prevent personally identifiable information from leaking outside of the company’s network. And it says it can do so without the kind of hands-on management that’s often required in setting these kinds of systems up and managing them over time. Indeed, Cognigo says that it can help businesses achieve GDPR compliance in days instead of months.

To do this, the company tells me, it’s using pre-trained language models for data classification. That model has been trained to detect common categories like payslips, patents, NDAs and contracts. Organizations can also provide their own data samples to further train the model and customize it for their own needs. “The only human intervention required is during the systems configuration process which would take no longer than a single day’s work,” a company spokesperson told me. “Apart from that, the system is completely human-free.”
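Cognigo hasn’t published implementation details, but the approach it describes — a model pre-trained on common document categories that customers can then top up with their own labeled samples — maps onto standard supervised text classification. A minimal sketch of that pattern, using scikit-learn as a stand-in with invented snippets and labels:

```python
# Minimal sketch of document-category classification — a stand-in for
# Cognigo's unpublished model. Samples and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = [
    ("Employee net pay for the period ending 30 April", "payslip"),
    ("The receiving party shall hold in confidence all disclosed materials", "nda"),
    ("This agreement is entered into by and between the undersigned parties", "contract"),
    ("What is claimed is: 1. A method comprising the steps of", "patent"),
]
texts, labels = zip(*docs)

# "Pre-training" here is just fitting on the common categories; an
# organization could append its own labeled samples and re-fit to customize,
# as the article describes.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["Net pay for the period ending 31 March"]))  # -> ['payslip']
```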

The company tells me that it plans to use the new funding to expand its R&D, marketing and sales teams, all with the goal of expanding its market presence and enhancing awareness of its product. “Our vision is to ensure our customers can use their data to make smart business decisions while making sure that the data is continuously protected and in compliance,” the company says.

News Source = techcrunch.com

Children are being “datafied” before we’ve understood the risks, report warns


A report by England’s children’s commissioner has raised concerns about how kids’ data is being collected and shared across the board, in both the private and public sectors.

In the report, entitled Who knows what about me?, Anne Longfield urges society to “stop and think” about what big data means for children’s lives.

Big data practices could result in a data-disadvantaged generation whose life chances are shaped by their childhood data footprint, her report warns.

The long-term impacts of profiling minors once these children become adults are simply not known, she writes.

“Children are being ‘datafied’ – not just via social media, but in many aspects of their lives,” says Longfield.

“For children growing up today, and the generations that follow them, the impact of profiling will be even greater – simply because there is more data available about them.”

By the time a child is 13, their parents will have posted an average of 1,300 photos and videos of them on social media, according to the report. After that, the data mountain “explodes”: children themselves start engaging on the platforms — posting to social media 26 times per day, on average, and amassing a total of nearly 70,000 posts by age 18.

“We need to stop and think about what this means for children’s lives now and how it may impact on their future lives as adults,” warns Longfield. “We simply do not know what the consequences of all this information about our children will be. In the light of this uncertainty, should we be happy to continue forever collecting and sharing children’s data?

“Children and parents need to be much more aware of what they share and consider the consequences. Companies that make apps, toys and other products used by children need to stop filling them with trackers, and put their terms and conditions in language that children understand. And crucially, the Government needs to monitor the situation and refine data protection legislation if needed, so that children are genuinely protected – especially as technology develops,” she adds.

The report looks at what types of data are being collected on kids; where and by whom; and how they might be used in the short and long term — both for the benefit of children but also considering potential risks.

On the benefits side, the report cites a variety of still fairly experimental ideas that might make positive use of children’s data — such as for targeted inspections of services for kids to focus on areas where data suggests there are problems; NLP technology to speed up analysis of large data-sets (such as the NSPCC’s national case review repository) to find common themes and understand “how to prevent harm and promote positive outcomes”; predictive analytics using data from children and adults to more cost-effectively flag “potential child safeguarding risks to social workers”; and digitizing children’s Personal Child Health Record to make the current paper-based record more widely accessible to professionals working with children.
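The report doesn’t specify tooling for the NLP idea, but a minimal theme-extraction sketch — TF-IDF plus non-negative matrix factorization over placeholder case-review snippets — gives a flavor of what “finding common themes” in a large text corpus involves:

```python
# Illustrative theme extraction over case-review text — placeholder snippets,
# not NSPCC data; the report doesn't name any specific tooling.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

reviews = [
    "agencies failed to share information about the family",
    "information sharing between agencies was inconsistent and delayed",
    "school raised concerns but referrals were not followed up",
    "warning signs recorded at school were never escalated or followed up",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(reviews)

nmf = NMF(n_components=2, random_state=0).fit(X)  # two candidate "themes"
terms = tfidf.get_feature_names_out()
for i, topic in enumerate(nmf.components_):
    top = [terms[j] for j in topic.argsort()[-4:][::-1]]
    print(f"theme {i}: {', '.join(top)}")
```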

But while Longfield describes the increasing availability of data as offering “enormous advantages”, she is also very clear on major risks unfolding — be it to safety and well-being; child development and social dynamics; identity theft and fraud; and the longer term impact on children’s opportunity and life chances.

“In effect [children] are the ‘canary in the coal mine’ for wider society, encountering the risks before many adults become aware of them or are able to develop strategies to mitigate them,” she warns. “It is crucial that we are mindful of the risks and mitigate them.”

Transparency is lacking

One clear takeaway from the report is that there is still a lack of transparency about how children’s data is being collected and processed — which in itself acts as a barrier to better understanding the risks.

“If we better understood what happens to children’s data after it is given – who collects it, who it is shared with and how it is aggregated – then we would have a better understanding of what the likely implications might be in the future, but this transparency is lacking,” Longfield writes — noting that this is true despite ‘transparency’ being the first key principle set out in the EU’s tough new privacy framework, GDPR.

The updated data protection framework did beef up protections for children’s personal data in Europe — introducing a new provision setting a 16-year-old age limit on kids’ ability to consent to their data being processed when it came into force on May 25, for example. (Although EU Member States can choose to write a lower age limit into their laws, with a hard floor of 13.)

And mainstream social media apps, such as Facebook and Snapchat, responded by tweaking their T&Cs and/or products in the region. (Although some of the parental consent systems that were introduced to claim compliance with GDPR appear trivially easy for kids to bypass, as we’ve pointed out before.)

But, as Longfield points out, Article 5 of the GDPR states that data must be “processed lawfully, fairly and in a transparent manner in relation to individuals”.

Yet when it comes to children’s data the children’s commissioner says transparency is simply not there.

She also sees limitations with GDPR, from a children’s data protection perspective — pointing out that, for example, it does not prohibit the profiling of children entirely (stating only that it “should not be the norm”).

While another provision, Article 22 — which states that children have the right not to be subject to decisions based solely on automated processing (including profiling) if they have legal or similarly significant effects on them — also appears to be circumventable.

“They do not apply to decision-making where humans play some role, however minimal that role is,” she warns, which suggests another workaround for companies to exploit children’s data.

“Determining whether an automated decision-making process will have “similarly significant effects” is difficult to gauge given that we do not yet understand the full implications of these processes – and perhaps even more difficult to judge in the case of children,” Longfield also argues.

“There is still much uncertainty around how Article 22 will work in respect of children,” she adds. “The key area of concern will be in respect of any limitations in relation to advertising products and services and associated data protection practices.”

Recommendations

The report makes a series of recommendations for policymakers, with Longfield calling for schools to “teach children about how their data is collected and used, and what they can do to take control of their data footprints”.

She also presses the government to consider introducing an obligation on platforms that use “automated decision-making to be more transparent about the algorithms they use and the data fed into these algorithms” — where data collected from under 18s is used.

Which would essentially place additional requirements on all mainstream social media platforms to be far less opaque about the AI machinery they use to shape and distribute content at vast scale — given that few, if any, could claim to have no under-18s using their platforms.

She also argues that companies targeting products at children have far more explaining to do, writing: 

Companies producing apps, toys and other products aimed at children should be more transparent about any trackers capturing information about children. In particular where a toy collects any video or audio generated by a child this should be made explicit in a prominent part of the packaging or its accompanying information. It should be clearly stated if any video or audio content is stored on the toy or elsewhere and whether or not it is transmitted over the internet. If it is transmitted, parents should also be told whether or not it will be encrypted during transmission or when stored, who might analyse or process it and for what purposes. Parents should ask if information is not given or unclear.

Another recommendation for companies is that terms and conditions should be written in a language children can understand.

(Albeit, as it stands, tech industry T&Cs can be hard enough for adults to scratch the surface of — let alone have enough hours in the day to actually read.)


A recent U.S. study of kids’ apps, covered by BuzzFeed News, highlighted that mobile games aimed at kids can be highly manipulative — describing instances of apps making their cartoon characters cry if a child does not click on an in-app purchase, for example.

A key and contrasting problem with data processing is that it’s so murky: applied in the background, any harms are far less immediately visible, because only the data processor truly knows what’s being done with people’s — and indeed children’s — information.

Yet concerns about exploitation of personal data are stepping up across the board. They now touch essentially all sectors and segments of society, even if the risks where kids are concerned may look the most stark.

This summer the UK’s privacy watchdog called for an ethical pause on the use by political campaigns of online ad targeting tools, for example, citing a range of concerns that data practices have got ahead of what the public knows and would accept.

It also called for the government to come up with a Code of Practice for digital campaigning to ensure that long-standing democratic norms are not being undermined.

So the children’s commissioner’s appeal for a collective ‘stop and think’ where the use of data is concerned is just one of a growing number of raised voices policymakers are hearing.

One thing is clear: Calls to quantify what big data means for society — to ensure powerful data-mining technologies are being applied in ways that are ethical and fair for everyone — aren’t going anywhere.

News Source = techcrunch.com

GDPR has cut ad trackers in Europe but helped Google, study suggests


An analysis of the impact of Europe’s new data protection framework, GDPR, on the adtech industry suggests the regulation has reduced the numbers of ad trackers that websites are hooking into EU visitors.

But it also implies that Google may have slightly increased its marketshare in the region — indicating the adtech giant could be winning at the compliance game at the expense of smaller advertising entities which the study also shows losing reach.

The research was carried out by the joint data privacy team of the anti-tracking browser Cliqz and the tracker blocker tool Ghostery (which merged via acquisition two years ago), using data from a service they jointly run, called WhoTracks.me — which they say is intended to provide greater transparency on the tracker market. (And therefore to encourage people to make use of their tracker blocker tools.)

A tale of two differently regulated regions

For the GDPR analysis, the team compared the prevalence of trackers one month before and one month after the introduction of the regulation, looking at the top 2,000 domains visited by EU or US residents.

On the tracker numbers front, they found that the average number of trackers per page dropped by almost 4% for EU web users from April to July.

Whereas the opposite was true in the US, where the average number of trackers per page rose by more than 8% over the same period.

In Europe, they found that the reduction in trackers was nearly universal across website types, with adult sites showing almost no change and only banking sites actually increasing their use of trackers.

In the US, the reverse was again true — with banking sites the only category to reduce tracker numbers over the analyzed period.

“The effects of the GDPR on the tracker landscape in Europe can be observed across all website categories. The reduction seems more prevalent among categories of sites with a lot of trackers,” they write, discussing the findings in a blog post. “Most trackers per page are still located on news websites: On average, they embed 12.4 trackers. Compared to April, however, this represents a decline of 7.5%.

“On ecommerce sites, the average number of trackers decreased by 6.9% to 9.5 per page. For recreation websites, the decrease is 6.7%, which corresponds to 10.7 trackers per page. A similar trend is observed for almost all other website categories. The only exception are banking sites, on which 7.4% more trackers were active in July than in April. However, the average number of trackers per page is only 2.6.”
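Those percentages can be sanity-checked with a few lines. In the sketch below the April baselines are back-derived from the quoted July averages and percentage changes — they are not raw WhoTracks.me data:

```python
# Average trackers per page, April vs July, per site category. April values
# are reconstructed from the quoted July figures and percentage changes.
april = {"news": 13.4, "ecommerce": 10.2, "recreation": 11.47, "banking": 2.42}
july  = {"news": 12.4, "ecommerce": 9.5,  "recreation": 10.7,  "banking": 2.6}

for category in april:
    change = (july[category] - april[category]) / april[category] * 100
    print(f"{category}: {july[category]} trackers/page ({change:+.1f}%)")
# news: -7.5%   ecommerce: -6.9%   recreation: -6.7%   banking: +7.4%
```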

Shifting marketshare

In the blog post they also argue that their snapshot comparison of tracker prevalence in April 2018 against July 2018 reveals “a clear picture” of GDPR’s impact on adtech marketshare — with smaller advertising trackers “especially” having “significantly” lost reach (which they are using as a proxy for marketshare).

In their analysis they found smaller tracker players lost between 18% and 31% reach/marketshare when comparing April (pre-GDPR) and July (post-GDPR).

They also found that Facebook suffered a decline of just under 7%.

Whereas adtech market leader Google was able to slightly increase its reach — by almost 1%.

Summing up their findings, Cliqz and Ghostery write: “For users this means that while the number of trackers asking for access to their data is decreasing, a tiny few (including Google) are getting even more of their data.”

The latter finding lends some weight to the argument that regulation can reinforce dominant players at the expense of smaller entities by further concentrating power — because big companies have greater resources to tackle compliance.

Although the data here is just a one-month snapshot. And the additional bump in marketshare being suggested for Google is not a huge one — whereas a nearly 7% drop in marketshare for Facebook is a more substantial impact.

Cliqz shared their findings with TechCrunch ahead of publication and we put several questions to them about the analysis, including whether or not the subsequent months (August, September) indicated this snapshot is a trend, i.e. whether or not Google sustained the additional marketshare.

However the company had not responded to our questions ahead of publication.

In the blog post Cliqz and Ghostery speculate that the larger adtech players might be winning (relatively speaking) the compliance game at the expense of smaller players because website owners are preferring to ‘play it safe’ and drop smaller entities vs big known platforms.

In the case of Google, they also flag up reports that suggest it has used its dominance of the adtech market to “encourage publishers to reduce the number of ad tech vendors and thus the number of trackers on their sites” — via a consent gathering tool that restricts the number of supply chain partners a publisher can share consent with to 12 vendors. 

And we’ve certainly heard complaints of draconian Google GDPR compliance terms before.

They also point to manipulative UX design (aka dark patterns) used to “nudge users towards particular choices and actions that may be against their own interests” — suggesting these deliberately confusing consent flows have been successfully tricking users into clicking and accepting “any kind of data collection” just to get rid of the cryptic choices they’re being asked to understand.

Given Google’s dominance of digital ad spending in Europe it stands to gain the most from websites’ use of manipulative consent flows.

However GDPR requires consent to be informed and freely given, not baffling and manipulative. So regulators should (hopefully) be getting a handle on any such transgressions and transgressors soon.

The continued existence of nightmarishly confused and convoluted consent flows is another complaint we’ve also heard before — much and often. (And one we have ourselves, frankly.)

Overall, according to the European Data Protection Board, a total of more than 42,000 complaints have been lodged so far with regulators, just four months into GDPR.

And just last week Europe’s data protection supervisor, Giovanni Buttarelli, told us to expect the first GDPR enforcement actions before the end of the year. So lots of EU consumers will already be warming up the popcorn.

But Cliqz and Ghostery argue that disingenuous attempts to manipulate consent might need additional regulatory tweaks to be beaten back — calling in their blog post for regulations to enforce machine-readable standards to help iron out flaky consent flows.

“The next opportunity for that would be the ePrivacy regulation,” they suggest, referencing the second big privacy rules update Europe is (still) working on. “It would be desirable, for example, if ePrivacy required that the privacy policies of websites, information on the type and scope of data collection by third parties, details of the Data Protection Officer and reports on data incidents must be machine-readable.

“This would increase transparency and create a market for privacy and compliance where industry players keep each other in check.”
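No such schema exists today, so the following is purely illustrative — a guess at the shape of a machine-readable disclosure of the kind the post argues ePrivacy could mandate, with invented field names:

```python
# Hypothetical machine-readable privacy disclosure — no such schema exists
# today; every field name here is invented for illustration.
import json

disclosure = {
    "policy_url": "https://example.com/privacy",
    "data_protection_officer": {"name": "Jane Doe", "email": "dpo@example.com"},
    "third_party_collection": [
        {"vendor": "ExampleAds", "data": ["ip_address", "page_url"],
         "purpose": "advertising", "legal_basis": "consent"},
    ],
    "data_incidents": [],  # breach reports would be listed here
}

# A crawler or browser extension could then audit sites automatically —
# the "market for privacy and compliance" the post envisages.
print(json.dumps(disclosure, indent=2))
```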

It would also, of course, provide another opportunity for pro-privacy tools to make themselves even more useful to consumers.

News Source = techcrunch.com

Europe is drawing fresh battle lines around the ethics of big data


It’s been just over four months since Europe’s tough new privacy framework came into force. You might believe that little of substance has changed for big tech’s data-hungry smooth operators since then — beyond firing out a wave of privacy policy update spam, and putting up a fresh cluster of consent pop-ups that are just as aggressively keen for your data.

But don’t be fooled. This is the calm before the storm, according to the European Union’s data protection supervisor, Giovanni Buttarelli, who says the law is being systematically flouted on a number of fronts right now — and that enforcement is coming.

“I’m expecting, before the end of the year, concrete results,” he tells TechCrunch, sounding angry on every consumer’s behalf.

Though he chalks up some early wins for the General Data Protection Regulation (GDPR) too, suggesting its 72 hour breach notification requirement is already bearing fruit.

He also points to geopolitical pull, with privacy regulation rising up the political agenda outside Europe — describing, for example, California’s recently passed privacy law, which is not at all popular with tech giants, as having “a lot of similarities to GDPR”; as well as noting “a new appetite for a federal law” in the U.S.

Yet he’s also already looking beyond GDPR — to the wider question of how European regulation needs to keep evolving to respond to platform power and its impacts on people.

Next May, on the anniversary of GDPR coming into force, Buttarelli says he will publish a manifesto for a next-generation framework that envisages active collaboration between Europe’s privacy overseers and antitrust regulators. Which will probably send a shiver down the tech giant spine.

Notably, the Commission’s antitrust chief, Margrethe Vestager — who has shown an appetite to take on big tech, and has so far fined Google twice ($2.7BN for Google Shopping and a staggering $5BN for Android), and who is continuing to probe its business on a number of fronts while simultaneously eyeing other platforms’ use of data — is scheduled to give a keynote at an annual privacy commissioners’ conference that Buttarelli is co-hosting in Brussels later this month.

Her presence hints at the potential of joint-working across historically separate regulatory silos that have nonetheless been showing increasingly overlapping concerns of late.

See, for example, Germany’s Federal Cartel Office accusing Facebook of using its size to strong-arm users into handing over data. And the French Competition Authority probing the online ad market — aka Facebook and Google — and identifying a raft of problematic behaviors. Last year the Italian Competition Authority also opened a sector inquiry into big data.

Traditional competition law theories of harm would need to be reworked to accommodate data-based anticompetitive conduct — essentially the idea that data holdings can bestow an unfair competitive advantage if they cannot be matched. Which clearly isn’t the easiest stinging jellyfish to nail to the wall. But Europe’s antitrust regulators are paying increasing mind to big data; looking actively at whether and even how data advantages are exclusionary or exploitative.

In recent years, Vestager has been very public with her concerns about dominant tech platforms and the big data they accrue as a consequence, saying, for example in 2016, that: “If a company’s use of data is so bad for competition that it outweighs the benefits, we may have to step in to restore a level playing field.”

Buttarelli’s belief is that EU privacy regulators will be co-opted into that wider antitrust fight by “supporting and feeding” competition investigations in the future. A future that can be glimpsed right now, with the EC’s antitrust lens swinging around to zoom in on what Amazon is doing with merchant data.

“Europe would like to speak with one voice, not only within data protection but by approaching this issue of digital dividend, monopolies in a better way — not per sectors,” Buttarelli tells TechCrunch. 

“Monopolies are quite recent. And therefore once again, as it was the case with social networks, we have been surprised,” he adds, when asked whether the law can hope to keep pace. “And therefore the legal framework has been implemented in a way to do our best but it’s not in my view robust enough to consider all the relevant implications… So there is space for different solutions. But first joint enforcement and better co-operation is key.”

From a regulatory point of view, competition law is hampered by the length of time investigations take. A characteristic of the careful work required to probe and prove out competitive harms that’s nonetheless especially problematic set against the blistering pace of technological innovation and disruption. The law here is very much the polar opposite of ‘move fast and break things’.

But on the privacy front at least, there will be no 12-year wait for the first GDPR enforcements, as Buttarelli notes was the case when Europe’s competition rules were originally set down in 1957’s Treaty of Rome.

He says the newly formed European Data Protection Board (EDPB), which is in charge of applying GDPR consistently across the bloc, is fixed on delivering results “much more quickly”. And so the first enforcements are penciled in for around half a year after GDPR ‘Day 1’.

“I think that people are right to feel more impassioned about enforcement,” he says. “We see awareness and major problems with how the data is treated — which are systemic. There is also a question with regard to the business model, not only compliance culture.

“I’m expecting concrete first results, in terms of implementation, before the end of this year.”

“No blackmailing”

Tens of thousands of consumers have already filed complaints under Europe’s new privacy regime. The GDPR updates the EU’s longstanding data protection rules, bringing proper enforcement for the first time in the form of much larger fines for violations — to prevent privacy being the bit of the law companies felt they could safely ignore.

The EDPB tells us that more than 42,230 complaints have been lodged across the bloc since the regulation began applying, on May 25. The board is made up of the heads of the EU Member States’ national data protection agencies, with Buttarelli serving as its current secretariat.

“I did not appreciate the tsunami of legalistic notices landing on the account of millions of users, written in an obscure language, and many of them were entirely useless, and in a borderline even with spamming, to ask for unnecessary agreements with a new privacy policy,” he tells us. “Which, in a few cases, appear to be in full breach of the GDPR — not only in terms of spirit.”

He also professes himself “not surprised” about Facebook’s latest security debacle — describing the massive new data breach the company revealed on Friday as “business as usual” for the tech giant. And indeed for “all the tech giants” — none of whom he believes are making adequate investments in security.

“In terms of security there are much less investments than expected,” he also says of Facebook specifically. “Lot of investments about profiling people, about creating clusters, but much less in preserving the [security] of communications. GDPR is a driver for a change — even with regard to security.”

Asked what systematic violations of the framework he’s seen so far, from his pan-EU oversight position, Buttarelli highlights instances where service operators are relying on consent as their legal basis to collect user data — saying this must allow for a free choice.

Or “no blackmailing”, as he puts it.

Facebook, for example, does not offer any of its users, even its users in Europe, the option to opt out of targeted advertising. Yet it leans on user consent, gathered via dark pattern consent flows of its own design, to sanction its harvesting of personal data — claiming people can just stop using its service if they don’t agree to its ads.

It also claims to be GDPR compliant.

It’s pretty easy to see the disconnect between those two positions.

WASHINGTON, DC – APRIL 11: Facebook co-founder, Chairman and CEO Mark Zuckerberg prepares to testify before the House Energy and Commerce Committee in the Rayburn House Office Building on Capitol Hill April 11, 2018 in Washington, DC. This is the second day of testimony before Congress by Zuckerberg, 33, after it was reported that 87 million Facebook users had their personal information harvested by Cambridge Analytica, a British political consulting firm linked to the Trump campaign. (Photo by Chip Somodevilla/Getty Images)

“In cases in which it is indispensable to build on consent it should be much more than in the past based on exhaustive information; much more details, written in a comprehensive and simple language, accessible to an average user, and it should be really freely given — so no blackmailing,” says Buttarelli, not mentioning any specific tech firms by name as he reels off this list. “It should be really freely revoked, and without expecting that the contract is terminated because of this.

“This is not respectful of at least the spirit of the GDPR and, in a few cases, even of the legal framework.”

His remarks — which chime with what we’ve heard before from privacy experts — suggest the first wave of complaints filed by veteran European data protection campaigner and lawyer, Max Schrems, via his consumer focused data protection non-profit noyb, will bear fruit. And could force tech giants to offer a genuine opt-out of profiling.

The first noyb complaints target so-called ‘forced consent’, arguing that Facebook; Facebook-owned Instagram; Facebook-owned WhatsApp; and Google’s Android are operating non-compliant consent flows in order to keep processing Europeans’ personal data because they do not offer the aforementioned ‘free choice’ opt-out of data collection.

Schrems also contends that this behavior is additionally problematic because dominant tech giants are gaining an unfair advantage over small businesses — which simply cannot throw their weight around in the same way to get what they want. So that’s another spark being thrown in on the competition front.

Discussing GDPR enforcement generally, Buttarelli confirms he expects to see financial penalties not just investigatory outcomes before the year is out — so once DPAs have worked through the first phase of implementation (and got on top of their rising case loads).

Of course it will be up to local data protection agencies to issue any fines. But the EDPB and Buttarelli are the glue between Europe’s (currently) 28 national data protection agencies — playing a highly influential co-ordinating and steering role to ensure the regulation gets consistently applied.

He doesn’t say exactly where he thinks the first penalties will fall but notes a smorgasbord of issues that are being commonly complained about, saying: “Now we have an obvious trend and even a peak, in terms of complaints; different violations focusing particularly, but not only, on social media; big data breaches; rights like right of access to information held; right to erasure.”

He illustrates his conviction of incoming fines by pointing to the recent example of the ICO’s interim report into Cambridge Analytica’s misuse of Facebook data, in July — when the UK agency said it intended to fine Facebook the maximum possible (just £500k, because the breach took place before GDPR).

A similarly concluded data misuse investigation under GDPR would almost certainly result in much larger fines because the regulation allows for penalties of up to 4% of a company’s annual global turnover. (So in Facebook’s case the maximum suddenly balloons into the billions.)
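The arithmetic behind that “balloons into the billions” is simple. A back-of-the-envelope sketch, assuming Facebook’s reported 2017 global revenue of roughly $40.65BN as the turnover base:

```python
# GDPR allows fines of up to 4% of annual global turnover for the most
# serious infringements. Assumes Facebook's reported FY2017 revenue.
facebook_2017_turnover = 40.65e9  # USD
max_gdpr_fine = 0.04 * facebook_2017_turnover
print(f"${max_gdpr_fine / 1e9:.2f}BN")  # -> $1.63BN, vs the pre-GDPR £500k cap
```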

Article 83 of the GDPR sets out general conditions for calculating fines — saying penalties should be “effective, proportionate and dissuasive”; and they must take into account factors such as whether an infringement was intentional or negligent; the categories of personal data affected; and how co-operative the data controller is as the data supervisor investigates.

For the security breach Facebook disclosed last week, the EU’s regulatory oversight process will involve an assessment of how negligent the company was; what response steps it took when it discovered the breach, including how it communicated with data protection authorities and users; and how comprehensively it co-operates with the DPC’s investigation. (In a not-so-great sign for Facebook, the Irish DPC has already criticized its breach notification for lacking detail.)

As well as evaluating a data controller’s security measures against GDPR standards, EU regulators can “prescribe additional safeguards”, as Buttarelli puts it. Which means enforcement is much more than just a financial penalty; organizations can be required to change their processes and priorities too.

And that’s why Schrems’ forced consent complaints are so interesting.

Because a company as revenue-rich as Facebook can view a fine, even a large one, as just another business cost to suck up as it keeps on truckin’. But GDPR’s follow-on enforcement prescriptions could force privacy law breakers to actively reshape their business practices to continue doing business in Europe.

And if the privacy problem with Facebook is that it’s forcing people-tracking ads on everyone, the solution is surely a version of Facebook that does not require users to accept privacy intrusive advertising to use it. Other business models are available, such as subscription.

But ads don’t have to be hostile to privacy. For example it’s possible to display advertising without persistently profiling users — as, for example, pro-privacy search engine DuckDuckGo does. Other startups are exploring privacy-by-design on-device ad-targeting architectures for delivering targeted ads without needing to track users. Alternatives to Facebook’s targeted ads certainly exist — and innovating in lock-step with privacy is clearly possible. Just ask Apple.

So — at least in theory — GDPR could force the social network behemoth to revise its entire business model.

Which would make even a $1.63BN fine the company could face as a result of Friday’s security breach pale into insignificance.

Accelerating ethics

There’s a wrinkle here though. Buttarelli does not sound convinced that GDPR alone will be remedy enough to fix all privacy hostile business models that EU regulators are seeing. Hence his comment about a “question with regard to the business model”.

And also why he’s looking ahead and talking about the need to evolve the regulatory landscape — to enable joint working between traditionally discrete areas of law. 

“We need structural remedies to make the digital market fairer for people,” he says. “And therefore this is [why] we’ve been successful in persuading our colleagues of the Board to adopt a position on the intersection of consumer protection, competition rules and data protection. None of the independent regulators’ three areas, not speaking about audio-visual deltas, can succeed in their sort of old fashioned approach.

“We need more interaction, we need more synergies, we need to look to the future of these sectoral legislations.”

“People are targeted with content to make them behave in a certain way. To predict but also to react. This is not the kind of democracy we deserve.” — Giovanni Buttarelli, European Data Protection Supervisor

The challenge posed by the web’s currently dominant privacy-hostile business models is also why, in a parallel track, Europe’s data protection supervisor is actively pushing to accelerate innovation and debate around data ethics — to support efforts to steer markets and business models in, well, a more humanitarian direction.

When we talk he highlights that Sir Tim Berners-Lee will be keynoting at the same European privacy conference where Vestager will appear — which has an overarching discussion frame of “Debating Ethics: Dignity and Respect in Data Driven Life” as its theme.

Accelerating innovation to support the development of more ethical business models is also clearly the Commission’s underlying hope and aim.

Berners-Lee, the creator of the World Wide Web, has been increasingly strident in his criticism of how commercial interests have come to dominate the Internet by exploiting people’s personal data, including warning earlier this year that platform power is crushing the web as a force for good.

He has also just left his academic day job to focus on commercializing the pro-privacy, decentralized web platform he’s been building at MIT for years — via a new startup, called Inrupt.

Doubtless he’ll be telling the conference all about that.

“We are focusing on the solutions for the future,” says Buttarelli on ethics. “There is a lot of discussion about people becoming owners of their data, and ‘personal data’, and we call that personal because there’s something to be respected, not traded. And on the contrary we see a lot of inequality in the tech world, and we believe that the legal framework can be of an help. But will not give all the relevant answers to identify what is legally and technically feasible but morally untenable.”

Also just announced as another keynote speaker at the same conference later this month: Apple’s CEO Tim Cook.

In a statement on Cook’s addition to the line-up, Buttarelli writes: “We are delighted that Tim has agreed to speak at the International Conference of Data Protection and Privacy Commissioners. Tim has been a strong voice in the debate around privacy, as the leader of a company which has taken a clear privacy position, we look forward to hearing his perspective. He joins an already superb line up of keynote speakers and panellists who want to be part of a discussion about technology serving humankind.”

So Europe’s big fight to rein in the damaging impacts of big data just got another big gun behind it.

Apple CEO Tim Cook looks on during a visit of the shopfitting company Dula that delivers tables for Apple stores worldwide in Vreden, western Germany, on February 7, 2017. (Photo: BERND THISSEN/AFP/Getty Images)

 

“Question is [how do] we go beyond the simple requirements of confidentiality, security, of data,” Buttarelli continues. “Europe after such a successful step [with GDPR] is now going beyond the lawful and fair accumulation of personal data — we are identifying a new way of assessing market power when the services delivered to individuals are not mediated by a binary. And although competition law is still a powerful instrument for regulation — it was invented to stop companies getting so big — but I think together with our efforts on ethics we would like now Europe to talk about the future of the current dominant business models.

“I’m… concerned about how these companies, in compliance with GDPR in a few cases, may collect as much data as they can. In a few cases openly, in other secretly. They can constantly monitor what people are doing online. They categorize excessively people. They profile them in a way which cannot be contested. So we have in our democracies a lot of national laws in an anti-discrimination mode but now people are to be discriminated depending on how they behave online. So people are targeted with content to make them behave in a certain way. To predict but also to react. This is not the kind of democracy we deserve. This is not our idea.”

News Source = techcrunch.com

Facebook is weaponizing security to erode privacy


At a Senate hearing this week in which US lawmakers quizzed tech giants on how they should go about drawing up comprehensive Federal consumer privacy protection legislation, Apple’s VP of software technology described privacy as a “core value” for the company.

“We want your device to know everything about you but we don’t think we should,” Bud Tribble told them in his opening remarks.

Facebook was not at the commerce committee hearing which, as well as Apple, included reps from Amazon, AT&T, Charter Communications, Google and Twitter.

But the company could hardly have made such a claim had it been in the room, given that its business is based on trying to know everything about you in order to dart you with ads.

You could say Facebook has ‘hostility to privacy’ as a core value.

Earlier this year one US senator wondered of Mark Zuckerberg how Facebook could run its service given it doesn’t charge users for access. “Senator we run ads,” was the almost startled response, as if the Facebook founder couldn’t believe his luck at the not-even-surface-level political probing his platform was getting.

But there have been tougher moments of scrutiny for Zuckerberg and his company in 2018, as public awareness about how people’s data is being ceaselessly sucked out of platforms and passed around in the background, as fuel for a certain slice of the digital economy, has grown and grown — fuelled by a steady parade of data breaches and privacy scandals which provide a glimpse behind the curtain.

On the data scandal front Facebook has reigned supreme, whether it’s as an ‘oops, we just didn’t think of that’ spreader of socially divisive ads paid for by Kremlin agents (sometimes with roubles!); or as a carefree host for third-party apps to party at its users’ expense by silently hoovering up info on their friends, in the multi-millions.

Facebook’s response to the Cambridge Analytica debacle was to loudly claim it was ‘locking the platform down‘. And try to paint everyone else as the rogue data sucker — to avoid the obvious and awkward fact that its own business functions in much the same way.

All this scandalabra has kept Facebook execs very busy this year, with policy staffers and execs being grilled by lawmakers on an increasing number of fronts and issues — from election interference and data misuse, to ad transparency, hate speech and abuse, and also directly, and at times closely, on consumer privacy and control.

Facebook shielded its founder from one sought-for grilling on data misuse, as UK MPs investigated online disinformation vs democracy, as well as examining wider issues around consumer control and privacy. (They’ve since recommended a social media levy to safeguard society from platform power.)

The DCMS committee wanted Zuckerberg to testify to unpick how Facebook’s platform contributes to the spread of disinformation online. The company sent various reps to face questions (including its CTO) — but never the founder (not even via video link). And committee chair Damian Collins was withering and public in his criticism of Facebook sidestepping close questioning — saying the company had displayed a “pattern” of uncooperative behaviour, and “an unwillingness to engage, and a desire to hold onto information and not disclose it.”

As a result, Zuckerberg’s tally of public appearances before lawmakers this year stands at just two domestic hearings, in the US Senate and Congress, and one at a meeting of the EU parliament’s conference of presidents (which switched from a behind closed doors format to being streamed online after a revolt by parliamentarians) — and where he was heckled by MEPs for avoiding their questions.

But three sessions in a handful of months is still a lot more political grillings than Zuckerberg has ever faced before.

He’s going to need to get used to awkward questions now that lawmakers have woken up to the power and risk of his platform.

Security, weaponized 

What has become increasingly clear from the growing sound and fury over privacy and Facebook (and Facebook and privacy), is that a key plank of the company’s strategy to fight against the rise of consumer privacy as a mainstream concern is misdirection and cynical exploitation of valid security concerns.

Simply put, Facebook is weaponizing security to shield its erosion of privacy.

Privacy legislation is perhaps the only thing that could pose an existential threat to a business that’s entirely powered by watching and recording what people do at vast scale. And relying on that scale (and its own dark pattern design) to manipulate consent flows to acquire the private data it needs to profit.

Only robust privacy laws could bring Facebook’s self-serving house of cards tumbling down. User growth on its main service isn’t what it was but the company has shown itself very adept at picking up (and picking off) potential competitors — applying its surveillance practices to crushing competition too.

In Europe lawmakers have already tightened privacy oversight on digital businesses and massively beefed up penalties for data misuse. Under the region’s new GDPR framework compliance violations can attract fines as high as 4% of a company’s global annual turnover.

Which would mean billions of dollars in Facebook’s case — vs the pinprick penalties it has been dealing with for data abuse up to now.

Though fines aren’t the real point; if Facebook is forced to change its processes — i.e. how it harvests and mines people’s data — that could knock a major, major hole right through its profit-center.

Hence the existential nature of the threat.

The GDPR came into force in May and multiple investigations are already underway. This summer the EU’s data protection supervisor, Giovanni Buttarelli, told the Washington Post to expect the first results by the end of the year.

Which means 2018 could result in some very well known tech giants being hit with major fines. And — more interestingly — being forced to change how they approach privacy.

One target for GDPR complainants is so-called ‘forced consent’ — where consumers are told by platforms leveraging powerful network effects that they must accept giving up their privacy as the ‘take it or leave it’ price of accessing the service. Which doesn’t exactly smell like the ‘free choice’ EU law actually requires.

It’s not just Europe, either. Regulators across the globe are paying greater attention than ever to the use and abuse of people’s data. And also, therefore, to Facebook’s business — which profits, so very handsomely, by exploiting privacy to build profiles on literally billions of people in order to dart them with ads.

US lawmakers are now directly asking tech firms whether they should implement GDPR style legislation at home.

Unsurprisingly, tech giants are not at all keen — arguing, as they did at this week’s hearing, for the need to “balance” individual privacy rights against “freedom to innovate”.

So a lobbying joint-front to try to water down any US privacy clampdown is in full effect. (Though also asked this week whether they would leave Europe or California as a result of tougher-than-they’d-like privacy laws none of the tech giants said they would.)

The state of California passed its own robust privacy law, the California Consumer Privacy Act, this summer, which is due to come into force in 2020. And the tech industry is not a fan. So its engagement with federal lawmakers now is a clear attempt to secure a weaker federal framework to ride over any more stringent state laws.

Europe and its GDPR obviously can’t be rolled over like that, though. Even as tech giants like Facebook have certainly been seeing how much they can get away with — to force an expensive and time-consuming legal fight.

While ‘innovation’ is one oft-trotted angle tech firms use to argue against consumer privacy protections, Facebook included, the company has another tactic too: Deploying the ‘S’ word — security — both to fend off increasingly tricky questions from lawmakers, as they finally get up to speed and start to grapple with what it’s actually doing; and — more broadly — to keep its people-mining, ad-targeting business steamrollering on by greasing the pipe that keeps the personal data flowing in.

In recent years multiple major data misuse scandals have undoubtedly raised consumer awareness about privacy, and put greater emphasis on the value of robustly securing personal data. Scandals that even seem to have begun to impact how some Facebook users use Facebook. So the risks for its business are clear.

Part of its strategic response, then, looks like an attempt to collapse the distinction between security and privacy — by using security concerns to shield privacy hostile practices from critical scrutiny, specifically by chain-linking its data-harvesting activities to some vaguely invoked “security purposes”, whether that’s security for all Facebook users against malicious non-users trying to hack them; or, wider still, for every engaged citizen who wants democracy to be protected from fake accounts spreading malicious propaganda.

So the game Facebook is playing here is to use security as a very broad brush to try to defang legislation that could radically shrink its access to people’s data.

Here, for example, is Zuckerberg responding to a question from an MEP in the EU parliament asking for answers on so-called ‘shadow profiles’ (aka the personal data the company collects on non-users) — emphasis mine:

It’s very important that we don’t have people who aren’t Facebook users that are coming to our service and trying to scrape the public data that’s available. And one of the ways that we do that is people use our service and even if they’re not signed in we need to understand how they’re using the service to prevent bad activity.

At this point in the meeting Zuckerberg also suggestively referenced MEPs’ concerns about election interference — to better play on a security fear that’s inexorably close to their hearts. (With the spectre of re-election looming next spring.) So he’s making good use of his psychology major.

“On the security side we think it’s important to keep it to protect people in our community,” he also said when pressed by MEPs to answer how a person who isn’t a Facebook user could delete its shadow profile of them.

He was also questioned about shadow profiles by the House Energy and Commerce Committee in April. And used the same security justification for harvesting data on people who aren’t Facebook users.

“Congressman, in general we collect data on people who have not signed up for Facebook for security purposes to prevent the kind of scraping you were just referring to [reverse searches based on public info like phone numbers],” he said. “In order to prevent people from scraping public information… we need to know when someone is repeatedly trying to access our services.”
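Facebook hasn’t described how that detection works, but the generic technique behind knowing “when someone is repeatedly trying to access our services” is sliding-window rate counting. A minimal sketch, with arbitrary window and threshold:

```python
# Minimal sliding-window scraper detection: flag a client that performs
# too many lookups within a time window. Threshold and window are arbitrary;
# Facebook's actual systems are not public.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_LOOKUPS = 30

recent = defaultdict(deque)  # client id -> timestamps of recent lookups

def is_scraping(client_id, now=None):
    now = time.time() if now is None else now
    q = recent[client_id]
    q.append(now)
    while q and q[0] < now - WINDOW_SECONDS:
        q.popleft()  # drop lookups older than the window
    return len(q) > MAX_LOOKUPS

# A client hitting a people-search endpoint once a second trips the flag:
print(any(is_scraping("203.0.113.7", now=t) for t in range(60)))  # -> True
```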

He claimed not to know “off the top of my head” how many data points Facebook holds on non-users (nor even on users, which the congressman had also asked for, for comparative purposes).

These sorts of exchanges are very telling because for years Facebook has relied upon people not knowing or really understanding how its platform works to keep what are clearly ethically questionable practices from closer scrutiny.

But, as political attention has dialled up around privacy, and it’s become harder for the company to simply deny or fog what it’s actually doing, Facebook appears to be evolving its defence strategy — by defiantly arguing it simply must profile everyone, including non-users, for user security.

No matter that this is the same company which, despite maintaining all those shadow profiles on its servers, famously failed to spot Kremlin election interference going on at massive scale in its own back yard — and thus failed to protect its users from malicious propaganda.


Nor was Facebook capable of preventing its platform from being repurposed as a conduit for accelerating ethnic hate in a country such as Myanmar — with some truly tragic consequences. Yet it must, presumably, hold shadow profiles on non-users there too — and was seemingly unable (or unwilling) to use that intelligence to help protect actual lives…

So when Zuckerberg invokes overarching “security purposes” as a justification for violating people’s privacy en masse, it pays to ask critical questions about what kind of security it’s actually purporting to be able to deliver. Beyond, y’know, continued security for its own business model as it comes under increasing attack.

What Facebook indisputably does do with ‘shadow contact information’, acquired about people via other means than the person themselves handing it over, is to use it to target people with ads. So it uses intelligence harvested without consent to make money.

Facebook confirmed as much this week, when Gizmodo asked it to respond to a study by some US academics that showed how a piece of personal data that had never been knowingly provided to Facebook by its owner could still be used to target an ad at that person.

Responding to the study, Facebook admitted it was “likely” the academic had been shown the ad “because someone else uploaded his contact information via contact importer”.

“People own their address books. We understand that in some cases this may mean that another person may not be able to control the contact information someone else uploads about them,” it told Gizmodo.

So essentially Facebook has finally admitted that consentless scraped contact information is a core part of its ad targeting apparatus.

Safe to say, that’s not going to play at all well in Europe.

Basically Facebook is saying you own and control your personal data until it can acquire it from someone else — and then, er, nope!

Yet given the reach of its network, the chances of your data not sitting on its servers somewhere seems very, very slim. So Facebook is essentially invading the privacy of pretty much everyone in the world who has ever used a mobile phone. (Something like two-thirds of the global population then.)

In other contexts this would be called spying — or, well, ‘mass surveillance’.

It’s also how Facebook makes money.

And yet when called in front of lawmakers and asked about the ethics of spying on the majority of the people on the planet, the company seeks to justify this supermassive privacy intrusion by suggesting that gathering data about every phone user without their consent is necessary for some fuzzily-defined “security purposes” — even as its own record on security really isn’t looking so shiny these days.


It’s as if Facebook is trying to lift a page out of national intelligence agency playbooks — when governments claim ‘mass surveillance’ of populations is necessary for security purposes like counterterrorism.

Except Facebook is a commercial company, not the NSA.

So it’s only fighting to keep being able to carpet-bomb the planet with ads.

Profiting from shadow profiles

Another example of Facebook weaponizing security to erode privacy was also confirmed via Gizmodo’s reportage. The same academics found the company uses phone numbers provided to it by users for the specific (security) purpose of enabling two-factor authentication, which is a technique intended to make it harder for a hacker to take over an account, to also target them with ads.

In a nutshell, Facebook is exploiting its users’ valid security fears about being hacked in order to make itself more money.

Any security expert worth their salt will have spent long years encouraging web users to turn on two factor authentication for as many of their accounts as possible in order to reduce the risk of being hacked. So Facebook exploiting that security vector to boost its profits is truly awful. Because it works against those valiant infosec efforts — so risks eroding users’ security as well as trampling all over their privacy.

It’s just a double whammy of awful, awful behavior.

And of course, there’s more.

A third example of how Facebook seeks to play on people’s security fears to enable deeper privacy intrusion comes by way of the recent rollout of its facial recognition technology in Europe.

In this region the company had previously been forced to pull the plug on facial recognition after being leaned on by privacy conscious regulators. But after having to redesign its consent flows to come up with its version of ‘GDPR compliance’ in time for May 25, Facebook used this opportunity to revisit a rollout of the technology on Europeans — by asking users there to consent to switching it on.

Now you might think that asking for consent sounds okay on the surface. But it pays to remember that Facebook is a master of dark pattern design.

Which means it’s expert at extracting outcomes from people by applying these manipulative dark arts. (Don’t forget, it has even directly experimented in manipulating users’ emotions.)

So can it be a free consent if ‘individual choice’ is set against a powerful technology platform that’s both in charge of the consent wording, button placement and button design, and which can also data-mine the behavior of its 2BN+ users to further inform and tweak (via A/B testing) the design of the aforementioned ‘consent flow’? (Or, to put it another way, is it still ‘yes’ if the tiny greyscale ‘no’ button fades away when your cursor approaches while the big ‘YES’ button pops and blinks suggestively?)

In the case of facial recognition, Facebook used a manipulative consent flow that included a couple of self-serving ‘examples’ — selling the ‘benefits’ of the technology to users before they landed on the screen where they could choose either ‘yes, switch it on’ or ‘no, leave it off’.

One of which explicitly played on people’s security fears — by suggesting that without the technology enabled users were at risk of being impersonated by strangers. Whereas, by agreeing to do what Facebook wanted you to do, Facebook said it would help “protect you from a stranger using your photo to impersonate you”…

That example shows the company is not above actively jerking on the chain of people’s security fears, as well as passively exploiting similar security worries when it jerkily repurposes 2FA digits for ad targeting.

There’s even more too; Facebook has been positioning itself to pull off what is arguably the greatest (in the ‘largest’ sense of the word) appropriation of security concerns yet to shield its behind-the-scenes trampling of user privacy — when, from next year, it will begin injecting ads into the WhatsApp messaging platform.

These will be targeted ads, because Facebook has already changed the WhatsApp T&Cs to link Facebook and WhatsApp accounts — via phone number matching and other technical means that enable it to connect distinct accounts across two otherwise entirely separate social services.
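Facebook hasn’t detailed those “technical means”, but the generic way to link accounts across services is to normalize phone numbers to a canonical form and join on them (or on their hashes). A rough sketch, with invented account data:

```python
# Illustrative cross-service account matching on phone numbers. Facebook's
# actual mechanism is not public; this shows the generic technique only.
import hashlib

def normalize(number, default_cc="+1"):
    # Reduce a formatted number to canonical digits (a crude E.164 stand-in).
    digits = "".join(ch for ch in number if ch.isdigit())
    return digits if number.strip().startswith("+") else default_cc.lstrip("+") + digits

def match_key(number):
    # Hashing lets two services compare numbers without exchanging them in clear.
    return hashlib.sha256(normalize(number).encode()).hexdigest()

facebook_accounts = {match_key("+1 415 555 0132"): "fb_user_42"}
whatsapp_accounts = {match_key("(415) 555-0132"): "wa_user_99"}

linked = {fb: whatsapp_accounts[k] for k, fb in facebook_accounts.items()
          if k in whatsapp_accounts}
print(linked)  # -> {'fb_user_42': 'wa_user_99'}
```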

Thing is, WhatsApp got fat on its founders promise of 100% ad-free messaging. The founders were also privacy and security champions, pushing to roll e2e encryption right across the platform — even after selling their app to the adtech giant in 2014.

WhatsApp’s robust e2e encryption means Facebook literally cannot read the messages users are sending each other. But that does not mean Facebook is respecting WhatsApp users’ privacy.

On the contrary: the company has given itself broader rights to user data by changing the WhatsApp T&Cs and by matching accounts.

So, really, it’s all just one big Facebook profile now — whichever of its products you do (or don’t) use.

This means that even without literally reading your WhatsApps, Facebook can still know plenty about a WhatsApp user, thanks to any other Facebook Group profiles they have ever had and any shadow profiles it maintains in parallel. WhatsApp users will soon become 1.5BN+ bullseyes for yet more creepily intrusive Facebook ads to seek their target.

No private spaces, then, in Facebook’s empire as the company capitalizes on people’s fears to shift the debate away from personal privacy and onto the self-serving notion of ‘secured by Facebook spaces’ — in order that it can keep sucking up people’s personal data.

This is a very dangerous strategy, though.

Because if Facebook can’t even deliver security for its users, thereby undermining those “security purposes” it keeps banging on about, it might find it difficult to sell the world on going naked just so Facebook Inc can keep turning a profit.

What’s the best security practice of all? That’s super simple: Not holding data in the first place.

News Source = techcrunch.com
