Timesdelhi.com

May 22, 2019
Category archive

privacy

London’s Tube network to switch on wi-fi tracking by default in July


Transport for London will roll out default wi-fi device tracking on the London Underground this summer, following a trial back in 2016.

In a press release announcing the move, TfL writes that “secure, privacy-protected data collection will begin on July 8” — while touting additional services, such as improved alerts about delays and congestion, which it frames as “customer benefits”, as expected to launch “later in the year”.

As well as offering additional alerts-based services to passengers via its own website/apps, TfL says it could incorporate crowding data into its free open-data API — to allow app developers, academics and businesses to expand the utility of the data by baking it into their own products and services.

It’s not all just added utility though; TfL says it will also use the information to enhance its in-station marketing analytics — and, it hopes, top up its revenues — by tracking footfall around ad units and billboards.

Commuters using the UK capital’s publicly funded transport network who do not want their movements being tracked will have to switch off their wi-fi, or else put their phone in airplane mode when using the network.

To deliver data at the required granularity, TfL says it undertook detailed digital mapping of all London Underground stations to identify where wi-fi routers are located, so it can understand how commuters move across the network and through stations.

It says it will erect signs at stations informing passengers that using the wi-fi will result in connection data being collected “to better understand journey patterns and improve our services” — and explaining that to opt out they have to switch off their device’s wi-fi.

Attempts in recent years by smartphone OSes to use MAC address randomization to try to defeat persistent device tracking have been shown to be vulnerable to reverse engineering via flaws in wi-fi set-up protocols. So, er, switch off to be sure.

We covered TfL’s wi-fi tracking beta back in 2017, when we reported that, despite claiming the harvested wi-fi data was “de-personalised” and that individuals using the Tube network could not be identified, TfL nonetheless declined to release the “anonymized” data-set after a Freedom of Information request, saying there remained a risk of individuals being re-identified.

As has been shown many times before, reversing ‘anonymization’ of personal data can be frighteningly easy.

It’s not immediately clear from the press release or TfL’s website exactly how it will be encrypting the location data gathered from devices that authenticate to use the free wi-fi at the circa 260 wi-fi enabled London Underground stations.

Its explainer about the data collection does not go into any real detail about the encryption and security being used. (We’ve asked for more technical details.)

“If the device has been signed up for free Wi-Fi on the London Underground network, the device will disclose its genuine MAC address. This is known as an authenticated device,” TfL writes generally of how the tracking will work.

“We process authenticated device MAC address connections (along with the date and time the device authenticated with the Wi-Fi network and the location of each router the device connected to). This helps us to better understand how customers move through and between stations — we look at how long it took for a device to travel between stations, the routes the device took and waiting times at busy periods.”

“We do not collect any other data generated by your device. This includes web browsing data and data from website cookies,” it adds, saying also that “individual customer data will never be shared and customers will not be personally identified from the data collected by TfL”.

In a section entitled “keeping information secure” TfL further writes: “Each MAC address is automatically depersonalised (pseudonymised) and encrypted to prevent the identification of the original MAC address and associated device. The data is stored in a restricted area of a secure location and it will not be linked to any other data at a device level.  At no time does TfL store a device’s original MAC address.”

Privacy and security concerns were raised about the location tracking around the time of the 2016 trial — such as why TfL had used a monthly salt key to encrypt the data rather than daily salts, which would have decreased the risk of data being re-identifiable should it leak out.
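The salt concern can be made concrete with a short sketch. It is purely illustrative (TfL has not published its actual pseudonymisation scheme), but it shows why the salt's rotation period bounds how long records from one device remain linkable:

```python
import hashlib
import os

def pseudonymise_mac(mac: str, salt: bytes) -> str:
    """One-way, salted pseudonymisation of a MAC address.

    Illustrative only: TfL has not disclosed its real scheme.
    """
    return hashlib.sha256(salt + mac.encode("utf-8")).hexdigest()

mac = "3c:22:fb:aa:bb:cc"  # hypothetical device

# While the same salt is in use, the same device always yields the
# same pseudonym, so all its journeys in that period link together.
monthly_salt = os.urandom(16)
day1 = pseudonymise_mac(mac, monthly_salt)
day2 = pseudonymise_mac(mac, monthly_salt)
assert day1 == day2  # journeys remain linkable across the month

# Rotating the salt produces an unrelated pseudonym, breaking the link.
next_period_salt = os.urandom(16)
assert pseudonymise_mac(mac, next_period_salt) != day1
```

With a monthly salt, a single leaked data set links a month of a device's journeys; rotating the salt daily caps that linkage at one day, which is why daily salts were urged at the time of the trial.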

Such concerns persist — and security experts are now calling for full technical details to be released, given TfL is going full steam ahead with a rollout.


A report in Wired suggests TfL has switched from hashing to a system of tokenisation — “fully replacing the MAC address with an identifier that cannot be tied back to any personal information” — which TfL billed as a “more sophisticated mechanism” than it had used before. We’ll update as and when we get more from TfL.

Another question over the deployment at the time of the trial was what legal basis it would use for pervasively collecting people’s location data — since the system requires an active opt-out by commuters, a consent-based legal basis would not be appropriate.

In a section on the legal basis for processing the Wi-Fi connection data, TfL writes now that its ‘legal ground’ is two-fold:

  • Our statutory and public functions
  • to undertake activities to promote and encourage safe, integrated, efficient and economic transport facilities and services, and to deliver the Mayor’s Transport Strategy

So, presumably, you can file ‘increasing revenue around adverts in stations by being able to track nearby footfall’ under ‘helping to deliver (read: fund) the mayor’s transport strategy’.

(Or as TfL puts it: “[T]he data will also allow TfL to better understand customer flows throughout stations, highlighting the effectiveness and accountability of its advertising estate based on actual customer volumes. Being able to reliably demonstrate this should improve commercial revenue, which can then be reinvested back into the transport network.”)

On data retention it specifies that it will hold “depersonalised Wi-Fi connection data” for two years — after which it will aggregate the data and retain those non-individual insights (presumably indefinitely, or per its standard data retention policies).

“The exact parameters of the aggregation are still to be confirmed, but will result in the individual Wi-Fi connection data being removed. Instead, we will retain counts of activities grouped into specific time periods and locations,” it writes on that.

It further notes that aggregated data “developed by combining depersonalised data from many devices” may also be shared with other TfL departments and external bodies. So that processed data could certainly travel.

Of the “individual depersonalised device Wi-Fi connection data”, TfL claims it is accessible only to “a controlled group of TfL employees” — without specifying how large this group of staff is; and what sort of controls and processes will be in place to prevent the risk of A) data being hacked and/or leaking out or B) data being re-identified by a staff member.

A TfL employee with intimate knowledge of a partner’s daily travel routine might, for example, have access to enough information via the system to be able to reverse the depersonalization.

Without more technical details we just don’t know. Though TfL says it worked with the UK’s data protection watchdog in designing the data collection with privacy front of mind.

“We take the privacy of our customers very seriously. A range of policies, processes and technical measures are in place to control and safeguard access to, and use of, Wi-Fi connection data. Anyone with access to this data must complete TfL’s privacy and data protection training every year,” it also notes elsewhere.

Despite holding individual level location data for two years, TfL is also claiming that it will not respond to requests from individuals to delete or rectify any personal location data it holds, i.e. if people seek to exercise their information rights under EU law.

“We use a one-way pseudonymisation process to depersonalise the data immediately after it is collected. This means we will not be able to single out a specific person’s device, or identify you and the data generated by your device,” it claims.

“This means that we are unable to respond to any requests to access the Wi-Fi data generated by your device, or for data to be deleted, rectified or restricted from further processing.”

Again, the distinctions it is making there are raising some eyebrows.

What’s amply clear is that a full rollout of wi-fi tracking across the lion’s share of the London Underground will generate a staggering volume of data.

More than 509 million “depersonalised” pieces of data were collected from 5.6 million mobile devices during the four-week 2016 trial alone — comprising some 42 million journeys. And that was a brief trial covering a much smaller subset of the network.

As big data giants go, TfL is clearly gunning to be right up there.

News Source = techcrunch.com

Amazon faces greater shareholder pressure to limit sale of facial recognition tech to the government


This week could mark a significant setback for Amazon’s facial recognition business if privacy and civil liberties advocates — and some shareholders — get their way.

Months earlier, shareholders tabled a resolution to limit the sale of Amazon’s facial recognition tech, which the company calls Rekognition, to law enforcement and government agencies. It followed accusations of bias and inaccuracies in the technology, which critics say can be used to racially discriminate against minorities. Rekognition, which runs image and video analysis of faces, has been sold to two states so far, and Amazon has pitched it to Immigration and Customs Enforcement. A second resolution would require an independent human and civil rights review of the technology.

Now the ACLU is backing the measures and calling on shareholders to pass the resolutions.

“Amazon has stayed the course,” said Shankar Narayan, director of the Technology and Liberty Project at the ACLU Washington, in a call Friday. “Amazon has heard repeatedly about the dangers to our democracy and vulnerable communities about this technology but they have refused to acknowledge those dangers let alone address them,” he said.

“Amazon has been so non-responsive to these concerns,” said Narayan, “even Amazon’s own shareholders have been forced to resort to putting these proposals addressing those concerns on the ballot.”

It’s the latest move in a concerted effort by dozens of shareholders and investment firms, tech experts and academics, and privacy and rights groups and organizations who have decried the use of the technology.

Critics say Amazon Rekognition has accuracy and bias issues. (Image: TechCrunch)

In a letter to be presented at Amazon’s annual shareholder meeting Wednesday, the ACLU will accuse Amazon of “failing to act responsibly” by refusing to stop the sale of the technology to the government.

“This technology fundamentally alters the balance of power between government and individuals, arming governments with unprecedented power to track, control, and harm people,” said the letter, shared with TechCrunch. “It would enable police to instantaneously and automatically determine the identities and locations of people going about their daily lives, allowing government agencies to routinely track their own residents. Associated software may even display dangerous and likely inaccurate information to police about a person’s emotions or state of mind.”

“As shown by a long history of other surveillance technologies, face surveillance is certain to be disproportionately aimed at immigrants, religious minorities, people of color, activists, and other vulnerable communities,” the letter added.

“Without shareholder action, Amazon may soon become known more for its role in facilitating pervasive government surveillance than for its consumer retail operations,” it read.

Facial recognition has become one of the most hot-button privacy topics in years. Amazon Rekognition, the company’s cloud-based facial recognition system, remains in its infancy, yet it is one of the most prominent and widely available systems. But critics say the technology is flawed. Exactly a year prior to this week’s shareholder meeting, the ACLU first raised “profound” concerns with Rekognition and its installation at airports, in public places and by police. Since then, the technology has been shown to struggle to detect people of color. In the ACLU’s tests, the system falsely matched 28 members of Congress against a mugshot database of people who had been arrested.

But there has been pushback — even from government. Several municipalities have rolled out surveillance-curtailing laws and ordinances in the past year. San Francisco last week became the first major U.S. city government to ban the use of facial recognition.

“Amazon leadership has failed to recognize these issues,” said the ACLU’s letter to be presented Wednesday. “This failure will lead to real-life harm.”

The ACLU said shareholders “have the power to protect Amazon from its own failed judgment.”

Amazon has pushed back against the claims by arguing that the technology is accurate — largely by criticizing how the ACLU conducted its tests using Rekognition.

Amazon did not comment when reached prior to publication.



Yes, Americans can opt-out of airport facial recognition. Here’s how


Whether you like it or not, facial recognition tech to check in for your flight will soon be coming to an airport near you.

Over a dozen U.S. airports are already rolling out the technology, with many more to go before the U.S. government hits its target of enrolling the largest 20 airports in the country before 2021.

Facial recognition is highly controversial and has many divided. On the one hand, it reduces paper tickets and is meant to make it easier for travelers to check in at the airport before their flight. But facial recognition also has technical problems. According to a Homeland Security watchdog, the facial recognition systems used at airports worked only 85 percent of the time in some cases. Homeland Security said the system is getting better over time and will be up to scratch by the supposed 2021 deadline — even if the watchdog has its doubts.

Many also remain fearful of the privacy and legal concerns. After all, it’s not Customs and Border Protection collecting your facial recognition data directly — it’s the airlines — and they pass it on to the government.

Delta debuted the tech last year, scanning faces before passengers fly. JetBlue followed suit, and many more airlines are expected to sign up. That data is used to verify boarding passes before travelers get to their gate. But it’s also passed on to Customs and Border Protection to check passengers against their watchlists — and to crack down on those who overstay their visas.

Clearly that’s rattling travelers. In a recent Twitter exchange with JetBlue, the airline said customers are “able to opt out of this procedure.”

That’s technically true, although you might not know it if you’re at one of the many U.S. airports. The Electronic Frontier Foundation found that it’s not easy to opt out, but it is possible.

A sign allowing U.S. citizens to opt-out of facial scans. (Image: Twitter/Juli Lyskawa)

If you’re a U.S. citizen, you can opt out by telling an officer or airline employee at the time of a facial recognition scan. You’ll need your U.S. passport with you — even if you’re flying domestically. Border officials or airline staff will manually check your passport or boarding pass like they would normally do before you’ve boarded a plane.

Be on the lookout for any signs that say you can opt out, but also be mindful that there may be none at all. You may have to opt out multiple times between arriving at the airport and reaching your airplane seat.

“It might sound trite, but right now, the key to opting out of face recognition is to be vigilant,” wrote EFF’s Jason Kelley.

Bad news if you’re not an American: you will not be allowed to opt out.

“Once the biometric exit program is a nationally-scaled, established program, foreign nationals will be required to biometrically confirm their exit from the United States at the final [boarding] point,” said CBP spokesperson Jennifer Gabris in an earlier email to TechCrunch. “This has been and is a Congressional mandate,” she said.

There are a few exceptions: Canadian citizens who don’t require a visa to enter the U.S. are exempt, as are diplomatic and government visa holders.

Facial recognition data collected by the airlines on U.S. citizens is stored by Customs and Border Protection for between 12 hours and two weeks, and 75 years for non-citizens. That data is stored in several government databases, which border officials can pull up when you’re arriving or leaving the U.S.

Why should you opt out? As an American, it’s your right to refuse. Homeland Security once said Americans who didn’t want their faces scanned at the airport should “refrain from traveling.” Now all it takes is a “no, thanks.”



Friend portability is the must-have Facebook regulation


Choice for consumers compels fair treatment by corporations. When people can easily move to a competitor, it creates a natural market dynamic coercing a business to act right. When we can’t, other regulations just leave us trapped with a pig in a fresh coat of lipstick.

That’s why as the FTC considers how many billions to fine Facebook or which executives to stick with personal liability or whether to go full-tilt and break up the company, I implore it to consider the root of how Facebook gets away with abusing user privacy: there’s no simple way to switch to an alternative.

If Facebook users are fed up with the surveillance, security breaches, false news, or hatred, there’s no western general purpose social network with scale for them to join. Twitter is for short-form public content, Snapchat is for ephemeral communication. Tumblr is neglected. Google+ is dead. Instagram is owned by Facebook. And the rest are either Chinese, single-purpose, or tiny.

No, I don’t expect the FTC to launch its own “Fedbook” social network. But what it can do is pave an escape route from Facebook so worthy alternatives become viable options. That’s why the FTC must require Facebook to offer truly interoperable data portability for the social graph.

In other words, the government should pass regulations forcing Facebook to let you export your friend list to other social networks in a privacy-safe way. This would allow you to connect with or follow those people elsewhere so you could leave Facebook without losing touch with your friends. The increased threat of people ditching Facebook for competitors would create a much stronger incentive to protect users and society.

The slate of potential regulations for Facebook currently being discussed by the FTC’s heads includes a $3 billion to $5 billion fine or greater, holding Facebook CEO Mark Zuckerberg personally liable for violations of an FTC consent decree, creating new privacy and compliance positions (including one held by an executive, a role that could be filled by Zuckerberg), and creating an independent oversight committee to review privacy and product decisions, according to The New York Times and The Washington Post. More extreme measures like restricting how Facebook collects and uses data for ad targeting, blocking future acquisitions, or breaking up the company are still possible but seemingly less likely.

Facebook co-founder Chris Hughes (right) recently wrote a scathing call to break up Facebook.

Breaking apart Facebook is a tantalizing punishment for the company’s wrongdoings. Still, I somewhat agree with Zuckerberg’s response to co-founder Chris Hughes’ call to split up the company, which he said “isn’t going to do anything to help” directly fix Facebook’s privacy or misinformation issues. Given Facebook likely wouldn’t try to make more acquisitions of big social networks under all this scrutiny, it’d benefit from voluntarily pledging not to attempt these buys for at least three to five years. Otherwise, regulators could impose that ban, which might be more politically attainable with fewer messy downstream effects.

Yet without this data portability regulation, Facebook can pay a fine and go back to business as usual. It can accept additional privacy oversight without fundamentally changing its product. It can become liable for upholding the bare minimum letter of the law while still breaking the spirit. And even if it was broken up, users still couldn’t switch from Facebook to Instagram, or from Instagram and WhatsApp to somewhere new.

Facebook Kills Competition With User Lock-In

When faced with competition in the past, Facebook has snapped into action improving itself. Fearing Google+ in 2011, Zuckerberg vowed “Carthage must be destroyed” and the company scrambled to launch Messenger, the Timeline profile, Graph Search, photo improvements and more. After realizing the importance of mobile in 2012, Facebook redesigned its app, reorganized its teams, and demanded employees carry Android phones for “dogfooding” testing. And when Snapchat was still rapidly growing into a rival, Facebook cloned its Stories and is now adopting the philosophy of ephemerality.

Mark Zuckerberg visualizes his social graph at a Facebook conference

Each time Facebook felt threatened, it was spurred to improve its product for consumers. But once it had defeated its competitors, muted their growth, or confined them to a niche purpose, Facebook’s privacy policies worsened. Anti-trust scholar Dina Srinivasan explains this in her summary of her paper “The Anti-Trust Case Against Facebook”:

“When dozens of companies competed in an attempt to win market share, and all competing products were priced at zero—privacy quickly emerged as a key differentiator. When Facebook entered the market it specifically promised users: “We do not and will not use cookies to collect private information from any user.” Competition didn’t only restrain Facebook’s ability to track users. It restrained every social network from trying to engage in this behavior . . .  the exit of competition greenlit a change in conduct by the sole surviving firm. By early 2014, dozens of rivals that initially competed with Facebook had effectively exited the market. In June of 2014, rival Google announced it would shut down its competitive social network, ceding the social network market to Facebook.

For Facebook, the network effects of more than a billion users on a closed-communications protocol further locked in the market in its favor. These circumstances—the exit of competition and the lock-in of consumers—finally allowed Facebook to get consumers to agree to something they had resisted from the beginning. Almost simultaneous with Google’s exit, Facebook announced (also in June of 2014) that it would begin to track users’ behavior on websites and apps across the Internet and use the data gleaned from such surveillance to target and influence consumers. Shortly thereafter, it started tracking non-users too. It uses the “like” buttons and other software licenses to do so.”

This is why the FTC must seek regulation that not only punishes Facebook for wrongdoings, but that lets consumers do the same. Users can punch holes in Facebook by leaving, both depriving it of ad revenue and reducing its network effect for others. Empowering them with the ability to take their friend list with them gives users a taller seat at the table. I’m calling for what University Of Chicago professors Luigi Zingales and Guy Rolnik termed a Social Data Portability Act.

Luckily, Facebook already has a framework for this data portability through a feature called Find Friends. You connect your Facebook account to another app, and you can find your Facebook friends who are already on that app.

But the problem is that in the past, Facebook has repeatedly blocked competitors from using Find Friends. That includes cutting off Twitter, Vine, Voxer, and MessageMe, while Phhhoto was blocked from letting you find your Instagram friends…six months before Instagram copied Phhhoto’s core back-and-forth GIF feature and named it Boomerang. Then there’s the issue that you need an active Facebook account to use Find Friends. That nullifies its utility as a way to bring your social graph with you when you leave Facebook.

Facebook’s “Find Friends” feature used to let Twitter users follow their Facebook friends, but Facebook later cut off access for competitors including Twitter and Vine seen here

The social network does offer a way to “Download Your Information”, which is helpful for exporting photos, status updates, messages, and other data about you. Yet the friend list can only be exported as a text list of names in HTML or JSON format. Names aren’t linked to their corresponding Facebook profiles or any unique identifier, so there’s no way to find your friend John Smith amongst everyone with that name on another app. And fewer than 5 percent of my 2,800 connections had used the little-known option to allow friends to export their email address. What about the big “Data Transfer Project” Facebook announced 10 months ago in partnership with Google, Twitter, and Microsoft to provide more portability? It has released nothing so far, raising questions of whether it was vaporware designed to ward off regulators.

Essentially, this all means that Facebook provides zero portability for your friendships. That’s what regulators need to change. There’s already precedent for this: the Telecommunications Act of 1996 saw the FCC require phone service carriers to allow customers to easily port their numbers to another carrier rather than having to be assigned a new number. If you think of a phone number as a method by which friends connect with you, it would be reasonable for regulators to declare that the modern equivalent — your social network friend connections — must be similarly portable.

How To Unchain Our Friendships

Facebook should be required to let you export a truly interoperable friend list that can be imported into other apps in a privacy-safe way.

To do that, Facebook should allow you to download a version of the list that features hashed versions of the phone numbers and email addresses friends used to sign up. You wouldn’t be able to read that contact info or freely import and spam people. But Facebook could be required to share documentation teaching developers of other apps to build a feature that safely cross-checks the hashed numbers and email addresses against those of people who had signed up for their app. That developer wouldn’t be able to read the contact info from Facebook either, or store any useful data about people who hadn’t signed up for their app. But if the phone number or email address of someone in your exported Facebook friend list matched one of their users, they could offer to let you connect with or follow them.
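The matching step described above can be sketched as follows. The function name and scheme here are assumptions for illustration, not Facebook's actual API; note also that unsalted hashes of low-entropy identifiers such as phone numbers can be brute-forced, so a production design would need salting or a private set intersection protocol:

```python
import hashlib

def hash_contact(contact: str) -> str:
    # Normalise then hash, so both sides derive the same token from the
    # same email address or phone number (illustrative scheme only).
    return hashlib.sha256(contact.strip().lower().encode("utf-8")).hexdigest()

# What an exported friend list might contain: hashes only, so the
# importing app can never read the raw contact info.
exported_friend_hashes = {
    hash_contact("john@example.com"),   # hypothetical friend
    hash_contact("+447700900123"),      # hypothetical friend
}

# The receiving app hashes its own users' sign-up contacts the same way...
own_users = {
    hash_contact("john@example.com"): "user_42",
    hash_contact("jane@example.com"): "user_97",
}

# ...and offers to connect you only where the hashes match; non-matching
# hashes stay opaque and reveal nothing about people not on this app.
matches = [uid for h, uid in own_users.items() if h in exported_friend_hashes]
# matches == ["user_42"]
```

The design choice doing the work here is one-way hashing on both sides: the exporting and importing apps can learn whether a contact is shared, but neither learns contact details it didn't already hold.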

This system would let you save your social graph, delete your Facebook account, and then find your friends on other apps without ever jeopardizing the privacy of their contact info. Users would no longer be locked into Facebook and could freely choose to move their friendships to whatever social network treats them best. And Facebook wouldn’t be able to block competitors from using it.

If the company wanted to go a step further, it could offer ways to make News Feed content preferences or Facebook Groups connections portable, such as by making it easier for Group members to opt in to joining a parallel email or text message mailing list. For researchers, Facebook could offer ways to export anonymized News Feed and activity data for study.

Portability would much more closely align the goals of users, Facebook, and the regulators. Facebook wouldn’t merely be responsible to the government for technically complying with new fines, oversight, or liability. It would finally have to compete to provide the best social app rather than relying on its network effect to handcuff users to its service.

This same model of data portability regulation could be expanded to any app with over 1 billion users, or even 100 million users to ensure YouTube, Twitter, Snapchat, or Reddit couldn’t lock down users either. By only applying the rule to apps with a sufficiently large user base, the regulation wouldn’t hinder new startup entrants to the market and accidentally create a moat around well-funded incumbents like Facebook that can afford the engineering chore. Data portability regulation combined with a fine, liability, oversight, and a ban on future acquisitions of social networks could set Facebook straight without breaking it up.

Users have a lot of complaints about Facebook that go beyond strictly privacy. But their recourse is always limited because for many functions there’s nowhere else to go, and it’s too hard to go there. By fixing the latter, the FTC could stimulate the rise of Facebook alternatives so that users rather than regulators can play king-maker.


Zuckerberg says breaking up Facebook “isn’t going to help”


With the look of someone betrayed, Facebook’s CEO has fired back at co-founder Chris Hughes and his brutal NYT op-ed calling for regulators to split up Facebook, Instagram, and WhatsApp. “When I read what he wrote, my main reaction was that what he’s proposing that we do isn’t going to do anything to help solve those issues. So I think that if what you care about is democracy and elections, then you want a company like us to be able to invest billions of dollars per year like we are in building up really advanced tools to fight election interference” Zuckerberg told France Info while in Paris to meet with French President Emmanuel Macron.

Zuckerberg’s argument boils down to the idea that Facebook’s specific problems with privacy, safety, misinformation, and speech won’t be directly addressed by breaking up the company, and instead would actually hinder its efforts to safeguard its social networks. The Facebook family of apps would theoretically have fewer economies of scale when investing in safety technology like artificial intelligence to spot bots spreading voter suppression content.

Facebook’s co-founders (from left): Dustin Moskovitz, Chris Hughes, and Mark Zuckerberg

Hughes claims that “Mark’s power is unprecedented and un-American” and that Facebook’s rampant acquisitions and copying have made it so dominant that it deters competition. The call echoes other early execs like Facebook’s first president Sean Parker and growth chief Chamath Palihapitiya who’ve raised alarms about how the social network they built impacts society.

But Zuckerberg argues that Facebook’s size benefits the public. “Our budget for safety this year is bigger than the whole revenue of our company was when we went public earlier this decade. A lot of that is because we’ve been able to build a successful business that can now support that. You know, we invest more in safety than anyone in social media” Zuckerberg told journalist Laurent Delahousse.

The Facebook CEO’s comments were largely missed by the media, in part because the TV interview was heavily dubbed into French with no transcript. But written out here for the first time, his quotes offer a window into how deeply Zuckerberg dismisses Hughes’ claims. “Well [Hughes] was talking about a very specific idea of breaking up the company to solve some of the social issues that we face” Zuckerberg says before trying to decouple solutions from anti-trust regulation. “The way that I look at this is, there are real issues. There are real issue around harmful content and finding the right balance between expression and safety, for preventing election interference, on privacy.”

Claiming that a breakup “isn’t going to do anything to help” is a more unequivocal refutation of Hughes’ claim than that of Facebook VP of communications and former UK deputy Prime Minister Nick Clegg. He wrote in his own NYT op-ed today that “what matters is not size but rather the rights and interests of consumers, and our accountability to the governments and legislators who oversee commerce and communications . . . Big in itself isn’t bad. Success should not be penalized.”

Mark Zuckerberg and Chris Hughes

Something certainly must be done to protect consumers. Perhaps that’s a break up of Facebook. At the least, banning it from acquiring more social networks of sufficient scale so it couldn’t snatch another Instagram from its crib would be an expedient and attainable remedy.

But the sharpest point of Hughes’ op-ed was how he identified that users are trapped on Facebook. “Competition alone wouldn’t necessarily spur privacy protection — regulation is required to ensure accountability — but Facebook’s lock on the market guarantees that users can’t protest by moving to alternative platforms” he writes. After Cambridge Analytica “people did not leave the company’s platforms en masse. After all, where would they go?”

That’s why given critics’ call for competition and Zuckerberg’s own support for interoperability, a core tenet of regulation must be making it easier for users to switch from Facebook to another social network. As I’ll explore in an upcoming piece, until users can easily bring their friend connections or ‘social graph’ somewhere else, there’s little to compel Facebook to treat them better.

