
Timesdelhi.com

June 17, 2019
Category archive: digital rights

UK tax office ordered to delete millions of unlawful biometric voiceprints


The U.K.’s data protection watchdog has issued the government department responsible for collecting taxes with a final enforcement notice, after an investigation found HMRC had collected biometric data from millions of citizens without obtaining proper consent.

HMRC has 28 days from the May 9 notice to delete any Voice ID records where it did not obtain explicit consent to record and create a unique biometric voiceprint linked to the individual’s identity. 

The Voice ID system was introduced in January 2017, with HMRC instructing callers to a helpline to record a phrase so that their voiceprint could be used as a password. The system soon attracted criticism for failing to make clear that people did not have to agree to their biometric data being recorded by the tax office.

In total, some 7 million U.K. citizens have had voiceprints recorded via the system. HMRC will now have to delete the majority of these records (~5 million voiceprints) — only retaining biometric data where it has fully informed consent to do so.

The Information Commissioner’s Office (ICO) investigation into Voice ID was triggered by a complaint by privacy advocacy group Big Brother Watch — which said more than 160,000 people opted out of the system after its campaign highlighted questions over how the data was being collected.

Announcing the conclusion of its probe last week, the ICO said it had found the tax office unlawfully processed people’s biometric data.

“Innovative digital services help make our lives easier but it must not be at the expense of people’s fundamental right to privacy. Organisations must be transparent and fair and, when necessary, obtain consent from people about how their information will be used. When that doesn’t happen, the ICO will take action to protect the public,” said deputy commissioner, Steve Wood, in a statement.

Blogging about its final enforcement notice, the regulator said today that it intends to carry out an audit to assess HMRC’s wider compliance with data protection rules.

“With the adoption of new systems comes the responsibility to make sure that data protection obligations are fulfilled and customers’ privacy rights addressed alongside any organisational benefit. The public must be able to trust that their privacy is at the forefront of the decisions made about their personal data,” writes Wood, offering guidance for using biometric data “in a fair, transparent and accountable way.”

Under Europe’s General Data Protection Regulation (GDPR), biometric data that’s used for identifying a person is classed as so-called “special category” data — meaning if a data controller is relying on consent as their legal basis for collecting this information the data subject must provide explicit consent.
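For readers wondering what “explicit consent” means in engineering terms, here is a minimal, purely illustrative sketch, not HMRC’s actual system and with all field and function names hypothetical, of how a voiceprint-enrollment flow could refuse to create biometric data unless an informed, affirmative opt-in has been recorded:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    # Hypothetical record of what the GDPR calls "explicit consent":
    # a specific, informed, affirmative action by the data subject.
    subject_id: str
    purpose: str               # e.g. "voice_id_authentication"
    informed: bool             # caller was told what is collected and why
    affirmative_opt_in: bool   # caller actively agreed; silence is not consent
    timestamp: datetime

def may_enroll_voiceprint(consent: Optional[ConsentRecord]) -> bool:
    """Return True only if explicit consent for this purpose was captured.

    Biometric data used to identify a person is "special category" data,
    so a controller relying on consent needs explicit consent; anything
    less means the voiceprint must not be created (or must be deleted).
    """
    return (
        consent is not None
        and consent.purpose == "voice_id_authentication"
        and consent.informed
        and consent.affirmative_opt_in
    )

# A missing record, or one produced by a default/opt-out flow, fails;
# an informed, affirmative opt-in passes.
assert not may_enroll_voiceprint(None)
assert may_enroll_voiceprint(ConsentRecord(
    "caller-123", "voice_id_authentication", True, True,
    datetime.now(timezone.utc)))
```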

In the case of HMRC, the ICO found it had failed to give customers sufficient information about how their biometric data would be processed, and failed to give them the chance to give or withhold consent.

It also collected voiceprints prior to publishing a Voice ID-specific privacy notice on its website. The ICO found it had not carried out an adequate data protection impact assessment prior to launching the system.

In October 2018 HMRC tweaked the automated options it offered to callers to provide clearer information about the system and their options.

That amended Voice ID system remains in operation. And in a letter to the ICO last week HMRC’s chief executive, Jon Thompson, defended it — claiming it is “popular with our customers, is a more secure way of protecting customer data, and enables us to get callers through to an adviser faster.”

As a result of the regulator’s investigation, HMRC retrospectively contacted around a fifth of the 7 million Brits whose data it had gathered to ask for consent. Of those it said more than 995,000 provided consent for the use of their biometric data and more than 260,000 withheld it.

Facebook hit with three privacy investigations in a single day


Third time lucky — unless you’re Facebook.

The social networking giant was hit by a trio of investigations over its privacy practices Thursday, following a particularly tumultuous month of security lapses and privacy violations — the latest in a string of embarrassing and damaging breaches at the company, many of them of its own doing.

First came a probe by the Irish data protection authority into the revelation that “hundreds of millions” of Facebook and Instagram user passwords were stored in plaintext on the company’s servers. The company will be investigated under Europe’s GDPR data protection law, which allows fines of up to four percent of global annual revenue for the infringing year — in Facebook’s case, several billion dollars.

Then, Canadian authorities confirmed that the beleaguered social networking giant broke the country’s strict privacy laws, reports TechCrunch’s Natasha Lomas. The Office of the Privacy Commissioner of Canada said it plans to take Facebook to federal court to force the company to correct its “serious contraventions” of Canadian privacy law. The findings came in the aftermath of the Cambridge Analytica scandal, which vacuumed up the profiles of more than 600,000 Canadian citizens.

Lastly, and slightly closer to home, Facebook was hit by its third investigation — this time by New York attorney general Letitia James. The state’s chief law enforcer is looking into the recent “unauthorized collection” of 1.5 million users’ email addresses, which Facebook used for profile verification but from which it also inadvertently scraped those users’ contact lists.

“It is time Facebook is held accountable for how it handles consumers’ personal information,” said James in a statement. “Facebook has repeatedly demonstrated a lack of respect for consumers’ information while at the same time profiting from mining that data.”

Facebook spokesperson Jay Nancarrow said the company is “in touch with the New York State attorney general’s office and are responding to their questions on this matter.”

Taxing your privacy


Data collection through mobile tracking is big business and the potential for companies helping governments monetize this data is huge. For consumers, protecting yourself against the who, what and where of data flow is just the beginning. The question now is: How do you ensure your data isn’t costing you money in the form of new taxes, fees and bills?  Particularly when the entity that stands to benefit from this data — the government — is also tasked with protecting it?

The advances in personal data collection are a source of growing concern for privacy advocates, but whereas most fears tend to focus on what type of data is being collected, who is watching and to whom your data is being sold, the potential for this same data to be monetized via auditing and compliance fees is even more problematic.

The fact is, you no longer need massive infrastructure to track, and tax, businesses and consumers. State governments and municipalities have taken notice.

The result is a potential multibillion-dollar-per-year business that, with mobile tracking technology, will only grow year over year.

Yet, while the revenue upside for companies helping smart cities (and states) with taxing and tolling is significant, it is also rife with contradictions and complications that could, ultimately, pose serious problems to those companies’ underlying business models and for the investors that bet heavily on them.


The most common argument when privacy advocates bring up concerns around mobile data collection is that consumers almost always have the control to opt out. When governments utilize this data, however, that option is not always available. And the direct result is the monetization of a consumer’s privacy in the form of taxes and tolls. In an era where states like California and others are stepping up as self-proclaimed defenders of citizen privacy and consent, this puts everyone involved in an awkward position — to say the least.

The marriage of smart cities and next-gen location tracking apps is becoming more commonplace.  AI, always-on data flows, sensor networks and connected devices are all being employed by governments in the name of sustainable and equitable cities as well as new revenue.

New York, LA and Seattle are all implementing (or considering implementing) congestion pricing that would ultimately rely on harvesting personal data in some form or another. Oregon, which passed the first gas tax in 1919, began its OReGO program two years ago, using data on miles driven to levy fees on drivers and address infrastructure issues on its roads and highways.


As more state and local governments look to emulate these kinds of policies, the revenue opportunity for companies and investors harvesting this data is obvious. Populus (a portfolio company), a data platform that helps cities manage mobility, captures data from fleets like Uber and Lyft to help cities set policy and collect fees.

Similarly, ClearRoad is a “road pricing transaction processor” that leverages data from vehicles to help governments determine road usage for new revenue streams. Safegraph, on the other hand, is a company that collects data daily from millions of smartphone trackers via apps, APIs and other delivery methods, often leaving the business of disclosure up to third parties. Data like this has begun to make its way into smart city applications that could impact industries as varied as real estate and the gig economy.

“There are lots of companies that are using location technology, 3D scanning, sensor tracking and more. So, there are lots of opportunities to improve the effectiveness of services and for governments to find new revenue streams,” says Paul Salama, COO of ClearRoad. “If you trust the computer to regulate, as opposed to the written code, then you can allow for a lot more dynamic types of regulation and that extends beyond vehicles to noise pollution, particulate emissions, temporary signage, etc.”

While most of these platforms and technologies endeavor to do some public good by creating the baseline for good policy and sustainable cities, they also raise concerns about individual privacy and the potential for discrimination. And there is an inherent contradiction when states ostensibly tasked with curbing the excesses of data collection turn around and utilize that same data to line their own coffers, sometimes without consent or consumer choice.


“People care about their privacy and there are aspects that need to be hashed out”, says Salama. “But we’re talking about a lot of unknowns on that data governance side.  There’s definitely going to be some sort of reckoning at some point but it’s still so early on.”

As policy makers and people become more aware of mobile phone tracking and the largely unregulated data collection associated with it, the question facing companies in this space is how to extract all this societally beneficial data while balancing that against some pretty significant privacy concerns.

“There will be options,” says Salama.  “An example is Utah which, starting next year, will offer electric cars the option to pay a flat fee (for avoiding gas taxes) or pay-by-the-mile.  The pay-by-the-mile option is GPS enabled but it also has additional services, so you pay by your actual usage.”
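To make the trade-off Salama describes concrete, here is a small illustrative calculation, with made-up figures rather than Utah’s actual rates, showing how a flat annual fee compares with a GPS-metered per-mile charge:

```python
def road_usage_charge(miles_driven: float, per_mile_rate: float,
                      flat_annual_fee: float) -> dict:
    """Compare a flat annual road-use fee with a metered per-mile charge.

    All figures are hypothetical; the article does not give Utah's rates.
    """
    metered = miles_driven * per_mile_rate
    return {
        "flat_fee": flat_annual_fee,
        "pay_by_mile": round(metered, 2),
        "cheaper_option": "pay_by_mile" if metered < flat_annual_fee else "flat_fee",
        # Break-even mileage: drive less than this and metering saves money.
        "break_even_miles": flat_annual_fee / per_mile_rate,
    }

# Example with made-up numbers: a $120 flat fee vs. 1.5 cents per mile.
# 6,000 miles costs $90 metered, so metering wins below 8,000 miles/year.
print(road_usage_charge(miles_driven=6000, per_mile_rate=0.015,
                        flat_annual_fee=120.0))
```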

Ultimately, for governments, regulation plus transparency seems the likeliest way forward.


In most instances, the path to the consumer or taxpayer is either through their shared-economy vehicle (car, scooter, bike, etc.) or through their mobile device. While taxing fleets is indirect and provides some measure of political cover for the governments generating revenue off of them, there is no such cover for directly taxing citizens via data gathered through mobile apps.

The best-case scenario for governments looking to short-circuit these inherent contradictions is to actually offer choice: an opt-in tied to some value exchange or preferred billing method, such as Utah’s opt-in alternative for paying for road use instead of the gas tax. It may not satisfy all privacy concerns, particularly when it is the government sifting through your data, but it at least offers a measure of choice and a tangible value.

If data collection and sharing were still mainly the purview of B2B businesses and global enterprises, perhaps the rising outcry over the methods and usage of data collection would remain relatively muted. But as data usage seeps into more aspects of everyday life and is adopted by smart cities and governments across the nation, questions around privacy will invariably get more heated, particularly when citizen consumers start feeling the pinch in their wallets.

As awareness rises and these inherent contradictions are laid bare, regulation will surely follow, and businesses that are not prepared may face fundamental threats to their business models, and ultimately to their bottom lines.

Industries must adopt ethics along with technology


A recent New York Times investigation into how smartphone-resident apps collect location data exposes why it’s important for industry to admit that the ethics of individuals who code and commercialize technology is as important as the technology’s code itself.

For the benefit of technology users, companies building technologies must make efforts to raise awareness of their potential human risks – and be honest about how people’s data is used by their innovations. People developing innovations must demand commitment from the C-suite – and boardrooms – of global technology companies to ethical technology. Specifically, the business world needs to install workforce ethics champions throughout company ranks, develop corporate transparency frameworks and hire diverse teams to interact with, create and improve upon these technologies.


Responsible handling of data is no longer a question

Our data is a valuable asset and the commercial insight it brings to marketers is priceless. Data has become a commodity akin to oil or gold, but user privacy should be the priority – and endgame – for companies across industries benefiting from data. As companies grow and shift, there needs to be an emphasis placed on user consent, clearly establishing what data is being used and how, tracking collected data, placing privacy at the forefront and informing users where AI is making sensitive decisions.

On the flip side, people are beginning to realize that seemingly harmless data they enter into personal profiles, apps and platforms can be taken out of context, commercialized and potentially sold without user consent. The bottom line: consumers are now holding big data and big tech accountable for data privacy – and the public scrutiny of companies operating inside and outside of tech will only grow from here.

Whether or not regulators in the United States, United Kingdom, European Union and elsewhere act, the onus is on Big Tech and private industry to step up by addressing public scrutiny head-on. In practice, this involves Board and C-Suite level acknowledgement of the issues and working-level efforts to address them comprehensively. Companies should clearly communicate steps being taken to improve data security, privacy, ethics and general practices.


People working with data need to be more diverse and ethical

Efforts to harvest personal data submitted to technology platforms reinvigorate the need for ethics training for people in all positions at companies that handle sensitive data. The use of social media and third-party platforms raises the importance of building the backend technologies that distribute and analyze human data, like AI, to be ethical and transparent. We also need the teams actually creating these technologies to be more diverse, as diverse as the community that will eventually use them. Digital equality should be a human right that encompasses fairness in algorithms, access to digital tools and the opportunity for anyone to develop digital skills.

Many companies boast reactionary and retrospective improvements to boost ethics and transparency in products already on the market. The reality is that it’s much harder to retrofit ethics into technology after the fact. Companies need to have the courage to make the difficult decision, at both the working and corporate levels, not to launch biased or unfair systems in some cases.

In practice, organizations must establish guidelines that people creating technologies can work within throughout a product’s development cycle. It’s established and common practice for developers and researchers to test usability, potential flaws and security prior to a product hitting the market. That’s why technology developers should also be testing for fairness, potential biases and ethical implementation before a product hits the market or deploys into the enterprise.


The future of technology will be all about transparency

Recent events confirm that the business world’s approach to building and deploying data-consuming technologies, like AI, needs to focus squarely on ethics and accountability. In the process, organizations building technologies and supporting applications need to fundamentally incorporate both principles into their engineering. A single company that’s not careful, and breaks the trust of its users, can cause a domino effect in which consumers lose trust in the greater technology and any company leveraging it.

Enterprises need to develop internal principles and processes that hold people from the Board to the newest hire accountable. These frameworks should govern corporate practices and transparently showcase companies’ commitment to ethical AI and data practices. That’s why my company introduced The Ethics of Code, to address critical ethics issues before AI products launch and to answer our customers’ questions around accountability.

Moving into 2019 with purpose

Ultimately, the movement toward ethical data practices that was already in motion within some corners of the tech community has become a full-blown workforce, public and political movement. Ideally, the result will be change in the form of more ethical technology created, improved and managed transparently by highly accountable people – from company developers to CEOs to Boards of Directors. It is something the world has needed since well before ethical questions sparked media headlines, entered living rooms and showed up on government agendas.

Feds like cryptocurrencies and blockchain tech and so should antitrust agencies


While statements and position papers from most central banks were generally skeptical of cryptocurrencies, the times may be changing.

Earlier this year, the Federal Reserve Bank of St. Louis published a study that details the positive effects of cryptocurrencies for privacy protection.

Even with the precipitous decline in the value of Bitcoin, Ethereum and other currencies, the Federal Reserve author emphasized the new competitive offering these currencies create precisely because of the way they function, and, accordingly, why they are here to stay.

And antitrust authorities should welcome cryptocurrencies and blockchain technologies for the same reason.

Fact: crypto-currencies are good for (legitimate) privacy protection

In the July article from Federal Reserve research fellow Charles M. Kahn, cryptocurrencies were held up as an exemplar of a degree of privacy protection that not even the central banks can provide to customers.

Kahn further stressed that “privacy in payments is desired not just for illegal transactions, but also for protection from malfeasance or negligence by counterparties or by the payments system provider itself.”

The act of payment engages the liability of the person who makes it. As a consequence, parties insert numerous contractual clauses to limit their liability. This creates a real issue, because some “parties to the transaction are no longer able to support the lawyers’ fees necessary to uphold the arrangement.” Smart contracts may address this issue by automating conflict resolution, but for anyone who doesn’t have access to them, crypto-currencies solve the problem differently: they make it possible to complete a transaction without revealing your identity.

Above all, crypto-currencies are a reaction to fears of privacy invasion, whether by governments or big companies, according to Kahn. And indeed, following the Cambridge Analytica and fake news revelations, we are hearing more and more voices expressing concern. The General Data Protection Regulation is set to protect private citizens, but in practice, “more and more individuals will turn to payments technologies for privacy protection in specific transactions.” In this regard, cryptocurrencies provide an alternative solution that competes directly with what the market currently offers.

Consequence: blockchain is good for competition and consumers

Indeed, cryptocurrencies may be the least of blockchain’s many applications. The diffusion of data across a decentralized network that is independently verified by some or all of the network’s participating stakeholders is precisely the aspect of the technology that provides privacy protection and competes with applications outside the blockchain by offering a different kind of service.
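For readers who want the mechanics, here is a minimal sketch, assuming nothing about any particular blockchain, of the property the study leans on: because each record commits to the hash of the record before it, any participant with a copy of the ledger can independently verify that nothing has been altered.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's contents (excluding its own hash) deterministically.
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, data: str) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    chain.append(block)

def verify_chain(chain: list) -> bool:
    """Any node holding a copy of the chain can run this check on its own."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False  # block contents were altered after the fact
        expected_prev = chain[i - 1]["hash"] if i > 0 else "0" * 64
        if block["prev_hash"] != expected_prev:
            return False  # link to the previous block is broken
    return True

# Example: tampering with any block invalidates the copy for every verifier.
chain: list = []
for tx in ["alice->bob 5", "bob->carol 2"]:
    append_block(chain, tx)
assert verify_chain(chain)
chain[0]["data"] = "alice->bob 500"
assert not verify_chain(chain)
```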

The Fed of St. Louis’ study underlines that “because privacy needs are different in type and degree, we should expect a variety of platforms to emerge for specific purposes, and we should expect continued competition between traditional and start-up providers.”

And who doesn’t love variety? In an era when antitrust authorities are increasingly interested in consumers’ privacy, crypto-currencies (and blockchains more generally) offer far more effective protection than antitrust law and the GDPR combined.

These agencies should be happy about that, but they don’t say a word about it. That silence could lead to flawed judgements, because ignoring the speed of blockchain development — and its increasingly varied uses — leads them to misjudge the real nature of the competitive field.

And in fact, because they ignore the existence of blockchain applications, they tend to engage in more and more procedures in which privacy is treated as an antitrust concern (see what’s happening in Germany). But blockchain is actually providing an answer to this issue; accordingly, it cannot be said that the market is failing. And without a market failure, antitrust agencies’ intervention is not legitimate.

The roles of the Fed and antitrust agencies could change

This new privacy offering from blockchain technologies should also lead to changes in the role of agencies. As the Fed study stressed:

“the future of central banks and payments authorities is no longer in privacy provision but in privacy regulation, in holding the ring as different payments platforms offer solutions appropriate to different niches with different mixes of expenses and safety, and with attention to different parts of the public’s demand for privacy.”

Some constituencies may criticize the expanding role of central banks in enforcing and ensuring privacy online, but those banks would be even harder pressed if they handled the task themselves instead of trying to relinquish it to the network.

The same applies to antitrust authorities. It is not for them to judge what the business model of digital companies should be or what degree of privacy protection they should offer. Their role is to ensure that alternatives exist; here, that means ensuring blockchain can be deployed without misinformed regulation slowing it down.

Perhaps antitrust agencies should be more vocal about the benefits of cryptocurrencies and blockchain and advise governments not to prevent them.

If even the Fed is now pro-cryptocurrency, antitrust regulators should jump on the bandwagon without fear. After all, blockchain creates a new alternative by offering real privacy protections, which ultimately puts more power in the hands of consumers. If antitrust agencies can’t recognize that, we will soon ask ourselves: who are they really protecting?
