
Timesdelhi.com

June 16, 2019

Taxing your privacy


Data collection through mobile tracking is big business and the potential for companies helping governments monetize this data is huge. For consumers, protecting yourself against the who, what and where of data flow is just the beginning. The question now is: How do you ensure your data isn’t costing you money in the form of new taxes, fees and bills?  Particularly when the entity that stands to benefit from this data — the government — is also tasked with protecting it?

The advances in personal data collection are a source of growing concern for privacy advocates, but whereas most fears tend to focus on what type of data is being collected, who’s watching and to whom your data is being sold, the potential for this same data to be monetized via auditing and compliance fees is even more problematic.

The fact is, you no longer need massive infrastructure to track, and tax, businesses and consumers. State governments and municipalities have taken notice.

The result is a potential multi-billion dollar per-year business that, with mobile tracking technology, will only grow exponentially year over year.

Yet, while the revenue upside for companies helping smart cities (and states) with taxing and tolling is significant, it is also rife with contradictions and complications that could, ultimately, pose serious problems to those companies’ underlying business models and for the investors that bet heavily on them.

Photo courtesy of Getty Images/chombosan

The most common argument when privacy advocates bring up concerns around mobile data collection is that consumers almost always have the control to opt out. When governments utilize this data, however, that option is not always available. And the direct result is the monetization of a consumer’s privacy in the form of taxes and tolls. In an era where states like California and others are stepping up as self-proclaimed defenders of citizen privacy and consent, this puts everyone involved in an awkward position — to say the least.

The marriage of smart cities and next-gen location tracking apps is becoming more commonplace.  AI, always-on data flows, sensor networks and connected devices are all being employed by governments in the name of sustainable and equitable cities as well as new revenue.

New York, LA and Seattle are all implementing (or considering implementing) congestion pricing that would ultimately rely on harvesting personal data in some form or another. Oregon, which passed the first gas tax in 1919, launched its OReGO program two years ago, using data on miles driven to levy fees on drivers and address infrastructure issues with its roads and highways.

Image Courtesy of Shutterstock

As more state and local governments look to emulate these kinds of policies, the revenue opportunity for companies and investors harvesting this data is obvious. Populus, a data platform that helps cities manage mobility, captures data from fleets like Uber and Lyft to help cities set policy and collect fees.

Similarly, ClearRoad is a “road pricing transaction processor” that leverages data from vehicles to help governments determine road usage for new revenue streams. SafeGraph, on the other hand, is a company that collects millions of trackers daily from smartphones via apps, APIs and other delivery methods, often leaving the business of disclosure up to third parties. Data like this has begun to make its way into smart city applications that could impact industries as varied as the real estate market and the gig economy.

“There are lots of companies that are using location technology, 3D scanning, sensor tracking and more. So, there are lots of opportunities to improve the effectiveness of services and for governments to find new revenue streams,” says Paul Salama, COO of ClearRoad. “If you trust the computer to regulate, as opposed to the written code, then you can allow for a lot more dynamic types of regulation, and that extends beyond vehicles to noise pollution, particulate emissions, temporary signage, etc.”

While most of these platforms and technologies endeavor to do some public good by creating the baseline for good policy and sustainable cities they also raise concerns about individual privacy and the potential for discrimination.  And there is an inherent contradiction for states ostensibly tasked with curbing the excesses of data collection then turning around and utilizing that same data to line the state’s coffers, sometimes without consent or consumer choice.

Image courtesy Bryce Durbin

“People care about their privacy and there are aspects that need to be hashed out,” says Salama. “But we’re talking about a lot of unknowns on that data governance side. There’s definitely going to be some sort of reckoning at some point but it’s still so early on.”

As policy makers and people become more aware of mobile phone tracking and the largely unregulated data collection associated with it, the question facing companies in this space is how to extract all this societally beneficial data while balancing that against some pretty significant privacy concerns.

“There will be options,” says Salama.  “An example is Utah which, starting next year, will offer electric cars the option to pay a flat fee (for avoiding gas taxes) or pay-by-the-mile.  The pay-by-the-mile option is GPS enabled but it also has additional services, so you pay by your actual usage.”
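Utah’s two-track scheme amounts to a simple break-even comparison for the driver. A minimal sketch of that comparison, using hypothetical fee levels (the state’s actual rates are not given in this article):

```python
def road_usage_fee(miles_driven, flat_fee, per_mile_rate, per_mile_cap=None):
    """Compare a flat annual road-use fee against a GPS-metered
    pay-by-the-mile charge. All figures here are hypothetical."""
    metered = miles_driven * per_mile_rate
    # Some programs cap the metered total so opting in can never cost
    # more than the flat fee (an assumption, not a stated Utah rule).
    if per_mile_cap is not None:
        metered = min(metered, per_mile_cap)
    return {"flat": flat_fee, "metered": round(metered, 2)}

# A low-mileage driver comes out ahead on metered billing;
# a high-mileage driver does not.
print(road_usage_fee(4_000, flat_fee=120.0, per_mile_rate=0.015))
print(road_usage_fee(12_000, flat_fee=120.0, per_mile_rate=0.015))
```

The trade-off the article describes is exactly this: the per-mile option is cheaper for some drivers, but only in exchange for GPS-grade location data.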

Ultimately, for governments, regulation plus transparency seems the likeliest way forward.

Image courtesy Getty Images

In most instances, the path to the consumer or taxpayer runs either through their shared-economy vehicle (car, scooter, bike, etc.) or through their mobile device. While taxing fleets is indirect and provides some measure of political cover for the governments generating revenue off of them, there is no such cover for directly taxing citizens via data gathered through mobile apps.

The best-case scenario for governments looking to short-circuit these inherent contradictions is to actually offer choice, in the form of their own opt-in for some value exchange or preferred billing method, such as Utah’s opt-in as an alternative way to pay for road use vs. the gas tax. It may not satisfy all privacy concerns, particularly when it is the government sifting through your data, but it at least offers a measure of choice and a tangible value.

If data collection and sharing were still mainly the purview of B2B businesses and global enterprises, perhaps the rising outcry over the methods and usage of data collection would remain relatively muted. But as data usage seeps into more aspects of everyday life and is adopted by smart cities and governments across the nation questions around privacy will invariably get more heated, particularly when citizen consumers start feeling the pinch in their wallet.

As awareness rises and inherent contradictions are laid bare, regulation will surely follow and those businesses not prepared may face fundamental threats to their business models that ultimately threaten their bottom line.

GDPR has cut ad trackers in Europe but helped Google, study suggests


An analysis of the impact of Europe’s new data protection framework, GDPR, on the adtech industry suggests the regulation has reduced the numbers of ad trackers that websites are hooking into EU visitors.

But it also implies that Google may have slightly increased its marketshare in the region — indicating the adtech giant could be winning at the compliance game at the expense of smaller advertising entities which the study also shows losing reach.

The research was carried out by the joint data privacy team of the anti-tracking browser Cliqz and the tracker blocker tool Ghostery (which merged via acquisition two years ago), using data from a service they jointly run, called WhoTracks.me — which they say is intended to provide greater transparency on the tracker market. (And therefore to encourage people to make use of their tracker blocker tools.)

A tale of two differently regulated regions

For the GDPR analysis, the team compared the prevalence of trackers one month before and one month after the introduction of the regulation, looking at the top 2,000 domains visited by EU or US residents.

On the tracker numbers front, they found that the average number of trackers per page dropped by almost 4% for EU web users from April to July.

Whereas the opposite was true in the US, with the average number of trackers per page rising by more than 8 percent over the same period.

In Europe, they found that the reduction in trackers was nearly universal across website types, with adult sites showing almost no change and only banking sites actually increasing their use of trackers.

In the US, the reverse was again true — with banking sites the only category to reduce tracker numbers over the analyzed period.

“The effects of the GDPR on the tracker landscape in Europe can be observed across all website categories. The reduction seems more prevalent among categories of sites with a lot of trackers,” they write, discussing the findings in a blog post. “Most trackers per page are still located on news websites: On average, they embed 12.4 trackers. Compared to April, however, this represents a decline of 7.5%.

“On ecommerce sites, the average number of trackers decreased by 6.9% to 9.5 per page. For recreation websites, the decrease is 6.7%, which corresponds to 10.7 trackers per page. A similar trend is observed for almost all other website categories. The only exception are banking sites, on which 7.4% more trackers were active in July than in April. However, the average number of trackers per page is only 2.6.”
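The per-page averages and percentage declines quoted above can be cross-checked with simple arithmetic. A quick sketch recovering the implied April baselines from the July figures and the stated declines:

```python
def april_baseline(july_avg, pct_change):
    """Given a July per-page tracker average and the percent change
    since April, recover the implied April average."""
    return july_avg / (1 + pct_change / 100)

# Figures from the Cliqz/Ghostery post: news sites at 12.4 trackers
# (down 7.5%), ecommerce at 9.5 (down 6.9%), recreation at 10.7 (down 6.7%).
for category, july, change in [("news", 12.4, -7.5),
                               ("ecommerce", 9.5, -6.9),
                               ("recreation", 10.7, -6.7)]:
    print(f"{category}: April ≈ {april_baseline(july, change):.1f} trackers/page")
```

The recovered April figures (roughly 13.4, 10.2 and 11.5 trackers per page) are consistent with the post’s claim that the biggest drops came on the most tracker-heavy categories.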

Shifting marketshare

In the blog post they also argue that their snapshot comparison of tracker prevalence of April 2018 against July 2018 reveals “a clear picture” of GDPR’s impact on adtech marketshare — with “especially” smaller advertising trackers having “significantly” lost reach (which they are using as a proxy for marketshare).

In their analysis they found smaller tracker players lost between 18% and 31% reach/marketshare when comparing April (pre-GDPR) and July (post-GDPR).

They also found that Facebook suffered a decline of just under 7%.

Whereas adtech market leader Google was able to slightly increase its reach — by almost 1%.

Summing up their findings, Cliqz and Ghostery write: “For users this means that while the number of trackers asking for access to their data is decreasing, a tiny few (including Google) are getting even more of their data.”

The latter finding lends some weight to the argument that regulation can reinforce dominant players at the expense of smaller entities by further concentrating power — because big companies have greater resources to tackle compliance.

Although the data here is just a one-month snapshot. And the additional bump in marketshare being suggested for Google is not a huge one — whereas a nearly 7% drop in marketshare for Facebook is a more substantial impact.

Cliqz shared their findings with TechCrunch ahead of publication and we put several questions to them about the analysis, including whether or not the subsequent months (August, September) indicated this snapshot is a trend, i.e. whether or not Google sustained the additional marketshare.

However the company had not responded to our questions ahead of publication.

In the blog post Cliqz and Ghostery speculate that the larger adtech players might be winning (relatively speaking) the compliance game at the expense of smaller players because website owners are preferring to ‘play it safe’ and drop smaller entities vs big known platforms.

In the case of Google, they also flag up reports that suggest it has used its dominance of the adtech market to “encourage publishers to reduce the number of ad tech vendors and thus the number of trackers on their sites” — via a consent gathering tool that restricts the number of supply chain partners a publisher can share consent with to 12 vendors. 

And we’ve certainly heard complaints of draconian Google GDPR compliance terms before.

They also point to the use of manipulative UX design (aka dark patterns) to “nudge users towards particular choices and actions that may be against their own interests”, suggesting these deliberately confusing consent flows have successfully tricked users into clicking and accepting “any kind of data collection” just to get rid of cryptic choices they’re being asked to understand.

Given Google’s dominance of digital ad spending in Europe it stands to gain the most from websites’ use of manipulative consent flows.

However GDPR requires consent to be informed and freely given, not baffling and manipulative. So regulators should (hopefully) be getting a handle on any such transgressions and transgressors soon.

The continued existence of nightmarishly confused and convoluted consent flows is another complaint we’ve also heard before — much and often. (And one we have ourselves, frankly.)

Overall, according to the European Data Protection Board, a total of more than 42,000 complaints have been lodged so far with regulators, just four months into GDPR.

And just last week Europe’s data protection supervisor, Giovanni Buttarelli, told us to expect the first GDPR enforcement actions before the end of the year. So lots of EU consumers will already be warming up the popcorn.

But Cliqz and Ghostery argue that disingenuous attempts to manipulate consent might need additional regulatory tweaks to be beaten back — calling in their blog post for regulation to enforce machine-readable standards that would help iron out flaky consent flows.

“The next opportunity for that would be the ePrivacy regulation,” they suggest, referencing the second big privacy rules update Europe is (still) working on. “It would be desirable, for example, if ePrivacy required that the privacy policies of websites, information on the type and scope of data collection by third parties, details of the Data Protection Officer and reports on data incidents must be machine-readable.

“This would increase transparency and create a market for privacy and compliance where industry players keep each other in check.”
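The post does not specify what such a machine-readable disclosure would look like. One minimal illustrative sketch, with invented field names (no such standard exists yet), might be:

```python
import json

# Hypothetical machine-readable disclosure of the kind Cliqz and Ghostery
# call for: third-party collection, DPO contact details and data-incident
# reports published in a parseable form rather than buried in legal prose.
disclosure = {
    "privacy_policy_url": "https://example.com/privacy",
    "data_protection_officer": {"name": "J. Doe", "email": "dpo@example.com"},
    "third_party_collection": [
        {"vendor": "ExampleAds",
         "purpose": "advertising",
         "data": ["cookies", "ip_address"]},
    ],
    "data_incidents": [],  # breach reports would be appended here
}

# A tracker blocker or compliance auditor could consume this programmatically.
print(json.dumps(disclosure, indent=2))
```

That is the “market for privacy and compliance” idea in miniature: once disclosures are parseable, third-party tools can compare, rank and audit them automatically.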

It would also, of course, provide another opportunity for pro-privacy tools to make themselves even more useful to consumers.

Europe is drawing fresh battle lines around the ethics of big data


It’s been just over four months since Europe’s tough new privacy framework came into force. You might believe that little of substance has changed for big tech’s data-hungry smooth operators since then — beyond firing out a wave of privacy policy update spam, and putting up a fresh cluster of consent pop-ups that are just as aggressively keen for your data.

But don’t be fooled. This is the calm before the storm, according to the European Union’s data protection supervisor, Giovanni Buttarelli, who says the law is being systematically flouted on a number of fronts right now — and that enforcement is coming.

“I’m expecting, before the end of the year, concrete results,” he tells TechCrunch, sounding angry on every consumer’s behalf.

Though he chalks up some early wins for the General Data Protection Regulation (GDPR) too, suggesting its 72 hour breach notification requirement is already bearing fruit.

He also points to geopolitical pull, with privacy regulation rising up the political agenda outside Europe — describing, for example, California’s recently passed privacy law, which is not at all popular with tech giants, as having “a lot of similarities to GDPR”; as well as noting “a new appetite for a federal law” in the U.S.

Yet he’s also already looking beyond GDPR — to the wider question of how European regulation needs to keep evolving to respond to platform power and its impacts on people.

Next May, on the anniversary of GDPR coming into force, Buttarelli says he will publish a manifesto for a next-generation framework that envisages active collaboration between Europe’s privacy overseers and antitrust regulators. Which will probably send a shiver down the tech giant spine.

Notably, the Commission’s antitrust chief, Margrethe Vestager — who has shown an appetite to take on big tech, and has so far fined Google twice ($2.7BN for Google Shopping and a staggering $5BN for Android), and who is continuing to probe its business on a number of fronts while simultaneously eyeing other platforms’ use of data — is scheduled to give a keynote at an annual privacy commissioners’ conference that Buttarelli is co-hosting in Brussels later this month.

Her presence hints at the potential of joint-working across historically separate regulatory silos that have nonetheless been showing increasingly overlapping concerns of late.

See, for example, Germany’s Federal Cartel Office accusing Facebook of using its size to strong-arm users into handing over data. And the French Competition Authority probing the online ad market — aka Facebook and Google — and identifying a raft of problematic behaviors. Last year the Italian Competition Authority also opened a sector inquiry into big data.

Traditional competition law theories of harm would need to be reworked to accommodate data-based anticompetitive conduct — essentially the idea that data holdings can bestow an unfair competitive advantage if they cannot be matched. Which clearly isn’t the easiest stinging jellyfish to nail to the wall. But Europe’s antitrust regulators are paying increasing mind to big data; looking actively at whether and even how data advantages are exclusionary or exploitative.

In recent years, Vestager has been very public with her concerns about dominant tech platforms and the big data they accrue as a consequence, saying, for example in 2016, that: “If a company’s use of data is so bad for competition that it outweighs the benefits, we may have to step in to restore a level playing field.”

Buttarelli’s belief is that EU privacy regulators will be co-opted into that wider antitrust fight by “supporting and feeding” competition investigations in the future. A future that can be glimpsed right now, with the EC’s antitrust lens swinging around to zoom in on what Amazon is doing with merchant data.

“Europe would like to speak with one voice, not only within data protection but by approaching this issue of digital dividend, monopolies in a better way — not per sectors,” Buttarelli tells TechCrunch. 

“Monopolies are quite recent. And therefore once again, as it was the case with social networks, we have been surprised,” he adds, when asked whether the law can hope to keep pace. “And therefore the legal framework has been implemented in a way to do our best but it’s not in my view robust enough to consider all the relevant implications… So there is space for different solutions. But first joint enforcement and better co-operation is key.”

From a regulatory point of view, competition law is hampered by the length of time investigations take. A characteristic of the careful work required to probe and prove out competitive harms that’s nonetheless especially problematic set against the blistering pace of technological innovation and disruption. The law here is very much the polar opposite of ‘move fast and break things’.

But on the privacy front at least, there will be no 12-year wait for the first GDPR enforcements, as Buttarelli notes was the case when Europe’s competition rules were originally set down in 1957’s Treaty of Rome.

He says the newly formed European Data Protection Board (EDPB), which is in charge of applying GDPR consistently across the bloc, is fixed on delivering results “much more quickly”. And so the first enforcements are penciled in for around half a year after GDPR ‘Day 1’.

“I think that people are right to feel more impassioned about enforcement,” he says. “We see awareness and major problems with how the data is treated — which are systemic. There is also a question with regard to the business model, not only compliance culture.

“I’m expecting concrete first results, in terms of implementation, before the end of this year.”

“No blackmailing”

Tens of thousands of consumers have already filed complaints under Europe’s new privacy regime. The GDPR updates the EU’s longstanding data protection rules, bringing proper enforcement for the first time in the form of much larger fines for violations — to prevent privacy being the bit of the law companies felt they could safely ignore.

The EDPB tells us that more than 42,230 complaints have been lodged across the bloc since the regulation began applying, on May 25. The board is made up of the heads of EU Member States’ national data protection agencies, with Buttarelli serving as its current secretariat.

“I did not appreciate the tsunami of legalistic notices landing on the account of millions of users, written in an obscure language, and many of them were entirely useless, and in a borderline even with spamming, to ask for unnecessary agreements with a new privacy policy,” he tells us. “Which, in a few cases, appear to be in full breach of the GDPR — not only in terms of spirit.”

He also professes himself “not surprised” about Facebook’s latest security debacle — describing the massive new data breach the company revealed on Friday as “business as usual” for the tech giant. And indeed for “all the tech giants” — none of whom he believes are making adequate investments in security.

“In terms of security there are much less investments than expected,” he also says of Facebook specifically. “Lot of investments about profiling people, about creating clusters, but much less in preserving the [security] of communications. GDPR is a driver for a change — even with regard to security.”

Asked what systematic violations of the framework he’s seen so far, from his pan-EU oversight position, Buttarelli highlights instances where service operators are relying on consent as their legal basis to collect user data — saying this must allow for a free choice.

Or “no blackmailing”, as he puts it.

Facebook, for example, does not offer any of its users, even its users in Europe, the option to opt out of targeted advertising. Yet it leans on user consent, gathered via dark pattern consent flows of its own design, to sanction its harvesting of personal data — claiming people can just stop using its service if they don’t agree to its ads.

It also claims to be GDPR compliant.

It’s pretty easy to see the disconnect between those two positions.

WASHINGTON, DC – APRIL 11: Facebook co-founder, Chairman and CEO Mark Zuckerberg prepares to testify before the House Energy and Commerce Committee in the Rayburn House Office Building on Capitol Hill April 11, 2018 in Washington, DC. This is the second day of testimony before Congress by Zuckerberg, 33, after it was reported that 87 million Facebook users had their personal information harvested by Cambridge Analytica, a British political consulting firm linked to the Trump campaign. (Photo by Chip Somodevilla/Getty Images)

“In cases in which it is indispensable to build on consent it should be much more than in the past based on exhaustive information; much more details, written in a comprehensive and simple language, accessible to an average user, and it should be really freely given — so no blackmailing,” says Buttarelli, not mentioning any specific tech firms by name as he reels off this list. “It should be really freely revoked, and without expecting that the contract is terminated because of this.

“This is not respectful of at least the spirit of the GDPR and, in a few cases, even of the legal framework.”

His remarks — which chime with what we’ve heard before from privacy experts — suggest the first wave of complaints filed by veteran European data protection campaigner and lawyer, Max Schrems, via his consumer focused data protection non-profit noyb, will bear fruit. And could force tech giants to offer a genuine opt-out of profiling.

The first noyb complaints target so-called ‘forced consent‘, arguing that Facebook; Facebook-owned Instagram; Facebook-owned WhatsApp; and Google’s Android are operating non-compliant consent flows in order to keep processing Europeans’ personal data because they do not offer the aforementioned ‘free choice’ opt-out of data collection.

Schrems also contends that this behavior is additionally problematic because dominant tech giants are gaining an unfair advantage over small businesses — which simply cannot throw their weight around in the same way to get what they want. So that’s another spark being thrown in on the competition front.

Discussing GDPR enforcement generally, Buttarelli confirms he expects to see financial penalties not just investigatory outcomes before the year is out — so once DPAs have worked through the first phase of implementation (and got on top of their rising case loads).

Of course it will be up to local data protection agencies to issue any fines. But the EDPB and Buttarelli are the glue between Europe’s (currently) 28 national data protection agencies — playing a highly influential co-ordinating and steering role to ensure the regulation gets consistently applied.

He doesn’t say exactly where he thinks the first penalties will fall but notes a smorgasbord of issues that are being commonly complained about, saying: “Now we have an obvious trend and even a peak, in terms of complaints; different violations focusing particularly, but not only, on social media; big data breaches; rights like right of access to information held; right to erasure.”

He illustrates his conviction of incoming fines by pointing to the recent example of the ICO’s interim report into Cambridge Analytica’s misuse of Facebook data, in July — when the UK agency said it intended to fine Facebook the maximum possible (just £500k, because the breach took place before GDPR).

A similarly concluded data misuse investigation under GDPR would almost certainly result in much larger fines because the regulation allows for penalties of up to 4% of a company’s annual global turnover. (So in Facebook’s case the maximum suddenly balloons into the billions.)

The GDPR’s article 83 sets out general conditions for calculating fines — saying penalties should be “effective, proportionate and dissuasive”; and they must take into account factors such as whether an infringement was intentional or negligent; the categories of personal data affected; and how co-operative the data controller is as the data supervisor investigates.
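The 4% cap is easy to sanity-check against Facebook’s numbers. A quick sketch, using Facebook’s roughly $40.65BN 2017 global revenue (an approximate public figure, not taken from this article):

```python
def gdpr_max_fine(annual_global_turnover):
    """GDPR Article 83 caps the most serious fines at 4% of annual global
    turnover, or 20M (euros, treated here in the same currency unit for
    simplicity), whichever is higher."""
    return max(0.04 * annual_global_turnover, 20_000_000)

# Facebook's 2017 global revenue was roughly $40.65BN, which puts the
# theoretical cap at about $1.63BN -- the figure cited later in this article.
print(f"${gdpr_max_fine(40.65e9) / 1e9:.2f}BN")
```

The same arithmetic explains why the pre-GDPR Cambridge Analytica penalty was capped at £500k while a post-GDPR equivalent “balloons into the billions.”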

For the security breach Facebook disclosed last week, the EU’s regulatory oversight process will involve an assessment of how negligent the company was; what response steps it took when it discovered the breach, including how it communicated with data protection authorities and users; and how comprehensively it co-operates with the DPC’s investigation. (In a not-so-great sign for Facebook, the Irish DPC has already criticized its breach notification for lacking detail.)

As well as evaluating a data controller’s security measures against GDPR standards, EU regulators can “prescribe additional safeguards”, as Buttarelli puts it. Which means enforcement is much more than just a financial penalty; organizations can be required to change their processes and priorities too.

And that’s why Schrems’ forced consent complaints are so interesting.

Because a fine, even a large one, can be viewed by a company as revenue-heavy as Facebook as just another business cost to suck up as it keeps on truckin’. But GDPR’s follow on enforcement prescriptions could force privacy law breakers to actively reshape their business practices to continue doing business in Europe.

And if the privacy problem with Facebook is that it’s forcing people-tracking ads on everyone, the solution is surely a version of Facebook that does not require users to accept privacy intrusive advertising to use it. Other business models are available, such as subscription.

But ads don’t have to be hostile to privacy. For example it’s possible to display advertising without persistently profiling users — as, for example, pro-privacy search engine DuckDuckGo does. Other startups are exploring privacy-by-design on-device ad-targeting architectures for delivering targeted ads without needing to track users. Alternatives to Facebook’s targeted ads certainly exist — and innovating in lock-step with privacy is clearly possible. Just ask Apple.

So — at least in theory — GDPR could force the social network behemoth to revise its entire business model.

Which would make even a $1.63BN fine the company could face as a result of Friday’s security breach pale into insignificance.

Accelerating ethics

There’s a wrinkle here though. Buttarelli does not sound convinced that GDPR alone will be remedy enough to fix all privacy hostile business models that EU regulators are seeing. Hence his comment about a “question with regard to the business model”.

And also why he’s looking ahead and talking about the need to evolve the regulatory landscape — to enable joint working between traditionally discrete areas of law. 

“We need structural remedies to make the digital market fairer for people,” he says. “And therefore this is why we’ve been successful in persuading our colleagues of the Board to adopt a position on the intersection of consumer protection, competition rules and data protection. None of the independent regulators’ three areas, not speaking about audio-visual deltas, can succeed in their sort of old-fashioned approach.

“We need more interaction, we need more synergies, we need to look to the future of these sectoral legislations.”

“People are targeted with content to make them behave in a certain way. To predict but also to react. This is not the kind of democracy we deserve,” says Giovanni Buttarelli, European Data Protection Supervisor.

The challenge posed by the web’s currently dominant privacy-hostile business models is also why, in a parallel track, Europe’s data protection supervisor is actively pushing to accelerate innovation and debate around data ethics — to support efforts to steer markets and business models in, well, a more humanitarian direction.

When we talk he highlights that Sir Tim Berners-Lee will be keynoting at the same European privacy conference where Vestager will appear — which has an overarching discussion frame of “Debating Ethics: Dignity and Respect in Data Driven Life” as its theme.

Accelerating innovation to support the development of more ethical business models is also clearly the Commission’s underlying hope and aim.

Berners-Lee, the creator of the World Wide Web, has been increasingly strident in his criticism of how commercial interests have come to dominate the Internet by exploiting people’s personal data, including warning earlier this year that platform power is crushing the web as a force for good.

He has also just left his academic day job to focus on commercializing the pro-privacy, decentralized web platform he’s been building at MIT for years — via a new startup, called Inrupt.

Doubtless he’ll be telling the conference all about that.

“We are focusing on the solutions for the future,” says Buttarelli on ethics. “There is a lot of discussion about people becoming owners of their data, and ‘personal data’, and we call that personal because there’s something to be respected, not traded. And on the contrary we see a lot of inequality in the tech world, and we believe that the legal framework can be of [some] help. But [it] will not give all the relevant answers to identify what is legally and technically feasible but morally untenable.”

Also just announced as another keynote speaker at the same conference later this month: Apple’s CEO Tim Cook.

In a statement on Cook’s addition to the line-up, Buttarelli writes: “We are delighted that Tim has agreed to speak at the International Conference of Data Protection and Privacy Commissioners. Tim has been a strong voice in the debate around privacy, as the leader of a company which has taken a clear privacy position, we look forward to hearing his perspective. He joins an already superb line up of keynote speakers and panellists who want to be part of a discussion about technology serving humankind.”

So Europe’s big fight to rein in the damaging impacts of big data just got another big gun behind it.

Apple CEO Tim Cook looks on during a visit of the shopfitting company Dula that delivers tables for Apple stores worldwide in Vreden, western Germany, on February 7, 2017. (Photo: BERND THISSEN/AFP/Getty Images)

 

“Question is [how do] we go beyond the simple requirements of confidentiality, security, of data,” Buttarelli continues. “Europe after such a successful step [with GDPR] is now going beyond the lawful and fair accumulation of personal data — we are identifying a new way of assessing market power when the services delivered to individuals are not mediated by a binary. And although competition law is still a powerful instrument for regulation — it was invented to stop companies getting so big — but I think together with our efforts on ethics we would like now Europe to talk about the future of the current dominant business models.

“I’m… concerned about how these companies, in compliance with GDPR in a few cases, may collect as much data as they can. In a few cases openly, in other secretly. They can constantly monitor what people are doing online. They categorize excessively people. They profile them in a way which cannot be contested. So we have in our democracies a lot of national laws in an anti-discrimination mode but now people are to be discriminated depending on how they behave online. So people are targeted with content to make them behave in a certain way. To predict but also to react. This is not the kind of democracy we deserve. This is not our idea.”

Plex changes its new privacy policy after backlash, clarifies it’s not trying to see what’s in your library

in Delhi/India/Media/media player software/plex/Politics/privacy/privacy policy by

Popular media player software maker Plex has had a rough weekend. The company alerted users via email on Friday about an updated privacy policy due to go into effect in September. The new policy would remove users’ ability to opt out of data collection, said Plex. This move quickly resulted in much user backlash, with many subscribers concerned that their data would be shared or sold to third parties, or that Plex would now be able to identify the media files housed in their library.

That latter item is of particular concern to many users who have amassed media collections through illegal means, like torrenting or ripping copy-protected DVDs. They’re worried that allowing Plex to collect data about their media and its consumption would allow the company to deduce what sort of files they have.

Plex, in a post published to its website, admitted that its new policy didn’t go far enough to clarify that it didn’t have interest in knowing what’s in users’ libraries or collecting data to that effect. It said it would make changes to the new policy ASAP to assuage users’ fears in this area.

The company explained that it would be difficult to figure out the identity of a file based on certain media information, like duration, but also acknowledged it’s more than a “theoretical possibility” that this could be done. In other words, users’ concern on this front is not entirely unfounded.

“We have ZERO interest in knowing or being able to know what is [in] any of your libraries,” wrote Plex CEO Keith Valory, ahead of detailing how the new policy would further protect users against having their files identified.

To address this problem, Plex is updating the policy with three main changes, he said.

To start, Plex will now “generalize” playback statistics so it won’t be able to create any sort of fingerprint to identify a file in a user’s library. This will allow the company to still gain insights it needs to improve its service – like server performance when combined with specific hardware, codecs, bit rate and resolution; if a given feature is being used; if users are having trouble finding a certain button; and other items.
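To make the idea concrete, here is a minimal, hypothetical sketch of how “generalizing” playback statistics blunts fingerprinting. The bucket sizes and field names are illustrative assumptions, not Plex’s actual implementation:

```python
# Hypothetical sketch: exact playback stats can act as a near-unique
# signature for a specific media file, while coarsely bucketed
# ("generalized") stats collapse many different files into one value.
# All names and bucket sizes below are illustrative, not Plex's code.

def exact_stats(duration_s: int, bitrate_kbps: int, height: int) -> tuple:
    # Precise values together form a near-unique fingerprint of a file.
    return (duration_s, bitrate_kbps, height)

def generalized_stats(duration_s: int, bitrate_kbps: int, height: int) -> tuple:
    # Round each value into coarse buckets: many distinct files now share
    # the same tuple, so no single file is identifiable, but aggregate
    # insights (e.g. "most playback is 1080p") survive.
    return (
        round(duration_s / 300) * 300,      # 5-minute duration buckets
        round(bitrate_kbps / 1000) * 1000,  # 1 Mbps bitrate buckets
        1080 if height >= 1080 else 720 if height >= 720 else 480,
    )

# Two different files with similar (but not identical) characteristics:
a = generalized_stats(7154, 4632, 1080)
b = generalized_stats(7203, 5100, 1080)
print(a == b)  # True: both collapse into the same bucket
```

The design trade-off is the usual privacy/utility balance: the coarser the buckets, the more files become indistinguishable, at the cost of less precise service metrics.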

Another of the key concerns with the new policy is that Plex was removing the ability to opt out of data collection, which confused users who believed this change was about allowing Plex to amass data that could later be sold to third parties or used against them in other ways.

However, Plex said that its decision to remove opt out during setup was because it gave users a “false sense of privacy” because there were so many exceptions to the opt out clause as is. The company said a lot of data is transmitted already in order for its service to function.

For example: Plex servers connect to the cloud to receive updates; clients talk to the cloud to connect to remote servers; third-party services like Alexa and Sonos require that metadata be available to Plex’s cloud services; Plex has to know if you’re a subscriber to premium features; it has to communicate various playback requests or commands through its cloud infrastructure at times; its relay service has to hand off data between your server and a remote device; and it has to provide reporting to licensors about trailers, extras, photo tagging, lyrics, licensed codecs and more (that data is anonymized).

These were all carved out as “exceptions” in the original policy, but Plex came to believe that approach wasn’t as clear as simply removing the opt out altogether and rewriting the policy to be more transparent about what is and isn’t done with user data. Notably, the policy states that Plex is prohibited from selling user data.

Because of user feedback related to the opt out removal, Plex will introduce a new opt out mechanism, allowing users to opt out of playback statistics, alongside crash reporting and marketing communications. This new opt out will prevent Plex from gathering data like duration, bit rate and resolution – the specific stats that worried users who believed this could be turned into a way to fingerprint (identify) their media files.

In addition, a new privacy tab in server settings will provide a full list of all product events that Plex collects. That way users can see exactly what’s being collected, then opt out of the playback data they’re not comfortable with, the company says.

It’s not likely that Plex itself wanted to gather metrics like duration, bit rate and resolution in order to identify users’ pirated files for its own purposes.

But, as several angry customers on Reddit and Plex’s user forums pointed out, Plex’s intent here didn’t matter. If Plex amassed and stored that data, it could later be compelled to out those with illegal, pirated media collections, these users said. To be fair, that’s a valid concern: once the data exists, it can be used — and subpoenaed. Users feel more comfortable when the data isn’t collected in the first place.

The user backlash was harsh enough for Plex to take immediate action to correct its policy. Many users were threatening to unsubscribe or switch to competitors’ media player software instead.

It remains to be seen if Plex’s changes will bring those users back and stem the tide of cancellations. It’s also unclear to what extent the backlash represented the broader Plex user base’s sentiment, or if it was a vocal minority.

There were some ten pages of posts on Plex’s forums, representing several dozen users’ opinions. But Plex has grown its customer base to nearly 14 million paying users as of August, and more often than not, users ignore legalese updates like this.
