Timesdelhi.com

February 23, 2019
Category archive

Opinion

When surveillance meets incompetence

Last week brought an extraordinary demonstration of the dangers of operating a surveillance state — especially a shabby one, as China’s apparently is. An unsecured database exposed millions of records of Chinese Muslims being tracked via facial recognition — an ugly trifecta of prejudice, bureaucracy and incompetence.

The security lapse was discovered by Victor Gevers at the GDI Foundation, a security organization working in the public’s interest. Using the infamous but useful Shodan search engine, he found a MongoDB instance owned by the Chinese company SenseNets that stored an ever-increasing number of data points from a facial recognition system apparently at least partially operated by the Chinese government.
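(A note on method: surfacing an instance like this takes little more than a well-formed query. Below is a minimal sketch of the kind of Shodan search a researcher might run using the public shodan Python library; the placeholder API key and the query string are illustrative assumptions on my part, not details of Gevers’ actual investigation.)

```python
# Illustrative sketch only: how a researcher might look for publicly indexed
# MongoDB services with the shodan library (pip install shodan).
# The API key and query are placeholders, not Gevers' actual search.
import shodan

api = shodan.Shodan("YOUR_API_KEY")  # hypothetical key, not a real credential

try:
    # MongoDB listens on 27017 by default; Shodan indexes banners it can reach.
    results = api.search('product:"MongoDB" port:27017')
    print(f"Indexed MongoDB services: {results['total']}")
    for match in results["matches"][:5]:
        print(match["ip_str"], match.get("org", "unknown org"))
except shodan.APIError as exc:
    print(f"Shodan query failed: {exc}")
```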

Many of the targets of this system were Uyghur Muslims, an ethnic and religious minority in China that the country has persecuted in what it considers secrecy, isolating them in remote provinces in what amount to religious gulags.

This database was no limited sting operation: some 2.5 million people had their locations and other data listed in it. Gevers told me that data points included national ID card number with issuance and expiry dates; sex; nationality; home address; date of birth; photo; employer; and the locations where each person’s face had previously been detected.

This data, Gevers said, plainly “had been visited multiple times by visitors all over the globe. And also the database was ransacked somewhere in December by a known actor,” one known as Warn, who has previously ransomed poorly configured MongoDB instances. So it’s all out there now.

A bad idea, poorly executed, with sad parallels

Courtesy: Victor Gevers/GDI.foundation

First off, it is bad enough that the government is using facial recognition systems to target minorities and track their movements, especially considering the treatment many of these people have already received. The ethical failure on full display here is colossal, but unfortunately no more than we have come to expect from an increasingly authoritarian China.

Using technology as a tool to track and influence the populace is a proud bullet point on the country’s security agenda, but even allowing for the cultural differences that produce something like the social credit rating system, the wholesale surveillance of a minority group is beyond the pale. (And I say this in full knowledge of our own problematic methods in the U.S.)

But to do this thing so poorly is just embarrassing, and should serve as a warning to anyone who thinks a surveillance state can be well administered — in Congress, for example. We’ve seen security tech theater from China before, in the ineffectual and likely barely functioning AR displays for scanning nearby faces, but this is different — not a stunt but a major effort and correspondingly large failure.

The duty of monitoring these citizens was obviously at least partially outsourced to SenseNets (note this is different from SenseTime, but many of the same arguments will apply to any major people-tracking tech firm), which in a way mirrors the current controversy in the U.S. regarding Amazon’s Rekognition and its use — though on a far, far smaller scale — by police departments. It is not possible for federal or state actors to spin up and support the tech and infrastructure involved in such a system on short notice; like so many other things, the actual execution falls to contractors.

And as SenseNets shows, these contractors can easily get it wrong, sometimes disastrously so.

MongoDB, it should be said, is not inherently difficult to secure; it’s just a matter of choosing the right settings in deployment (settings that are now but were not always the defaults). But for some reason people tend to forget to check those boxes when using the popular system; over and over we’ve seen poorly configured instances being accessible to the public, exposing hundreds of thousands of accounts. This latest one must surely be the largest and most damaging, however.
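To make the point concrete, here is a minimal sketch, under stated assumptions, of the kind of check an operator could run against their own deployment to see whether it answers unauthenticated queries; the host is a placeholder and pymongo is assumed to be installed. The corresponding server-side fix is small: enable authorization and bind mongod to a non-public interface (localhost binding is among the now-default settings alluded to above).

```python
# Illustrative sketch only: checking whether a MongoDB instance answers
# unauthenticated queries. Host and port are placeholders; requires pymongo.
from pymongo import MongoClient
from pymongo.errors import OperationFailure, ServerSelectionTimeoutError

def is_wide_open(host: str, port: int = 27017) -> bool:
    """Return True if the server lists its databases without credentials."""
    client = MongoClient(host, port, serverSelectionTimeoutMS=3000)
    try:
        names = client.list_database_names()  # fails if authorization is enforced
        print(f"{host}:{port} exposes databases: {names}")
        return True
    except OperationFailure:
        return False  # auth is required: the box was checked
    except ServerSelectionTimeoutError:
        return False  # unreachable: likely bound to localhost or firewalled
    finally:
        client.close()

# Example: is_wide_open("db.example.internal")  # placeholder host
```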

Gevers pointed out that the server was also highly vulnerable to MySQL exploits among other things, and was of course globally visible on Shodan. “So this was a disaster waiting to happen,” he said.

In fact it was a disaster waiting to happen twice; the company re-exposed the database a few days after securing it, after I wrote this story but before I published it.

Living in a glass house

The truth is, though, that any such centralized database of sensitive information is a disaster waiting to happen, for pretty much everyone involved. A facial recognition database full of carefully organized demographic data and personal movements is a hell of a juicy target, and as the SenseNets instance shows, malicious actors foreign and domestic will waste no time taking advantage of the slightest slip-up (to say nothing of a monumental failure).

We know major actors in the private sector fail at this stuff all the time and, adding insult to injury, are not held responsible — case in point: Equifax. We know our weapons systems are hackable; our electoral systems are trivial to compromise and under active attack; the census is a security disaster; and unsurprisingly the agencies responsible for making all these rickety systems are themselves both unprepared and ignorant, by the government’s own admission… not to mention unconcerned with due process.

The companies and governments of today are simply not equipped to handle the enormousness, or recognize the enormity, of large-scale surveillance. Not only that, but the people that compose those companies and governments are far from reliable themselves, as we have seen from repeated abuse and half-legal uses of surveillance technologies for decades.

Naturally we must also consider the known limitations of these systems, such as their poor record with people of color, the lack of transparency with which they are generally implemented and the inherently indiscriminate nature of their collection methods. The systems themselves are not ready.

A failure at any point in the process of legalizing, creating, securing, using or administrating these systems can have serious political consequences (such as the exposure of a national agenda, which one can imagine could be held for ransom), commercial consequences (who would trust SenseNets after this? The government must be furious) and, most importantly, personal consequences — to the people whose data is being exposed.

And this is all due (here, in China, and elsewhere) to the desire of a government to demonstrate tech superiority, and of a company to enable that and enrich itself in the process.

In the case of this particular database, Gevers says that although the policy of the GDI is one of responsible disclosure, he immediately regretted his role. “Personally it made [me] angry after I found out that I unknowingly helped the company secure its oppression tool,” he told me. “This was not a happy experience.”

The best we can do, as Gevers did, is to loudly proclaim how bad the idea is and how poorly it has been done, is being done and will be done.

News Source = techcrunch.com

Instagram thinks you want IGTV previews in your home feed

If you can’t beat or join them…force feed ’em? That appears to be Instagram’s latest strategy for IGTV, which is now being shoved right into Instagram’s main feed, the company announced today. Instagram says that it will now add one-minute IGTV previews to the feed, making it “even easier” to discover and watch content from IGTV.

Uh.

IGTV, you may recall, was launched last year as a way for Instagram to woo creators. With IGTV, creators are able to share long-form videos within the Instagram platform instead of just short-form content to the Feed or Stories.

The videos, before today, could be viewed in Instagram itself by tapping the IGTV icon at the top right of the screen, or within the separate IGTV standalone app.

Instagram’s hope was that IGTV would give the company a means of better competing with larger video sites, like Google’s YouTube or Amazon’s Twitch.

Its users, however, haven’t found IGTV as compelling.

As of last fall, few creators were working on content exclusively for IGTV and rumor was the viewing audience for IGTV content remained quite small, compared with rivals like Snapchat or Facebook. Many creators just weren’t finding it worth investing additional resources into IGTV, so were repurposing content designed for other platforms, like YouTube or Snapchat.

That means the bigger creators weren’t developing premium content or exclusives for IGTV, but were instead experimenting by replaying the content their fans could find elsewhere. Many are still not even sure what the IGTV audience wants to watch.

IGTV’s standalone app doesn’t seem to have gained much of a following either.

The app today is ranked a lowly No. 228 on the U.S. App Store’s “Photo and Video” top chart. Despite being run by Instagram – an app that topped a billion monthly users last summer and is currently the No. 1 free app on iOS – relatively few people are downloading IGTV.

After seeing 1.5 million downloads in its first month last year – largely out of curiosity – the IGTV app today has only grown to 3.5 million total installs worldwide, according to Sensor Tower data. While those may be good numbers for a brand-new startup, for a spin-off from one of the world’s biggest apps, they’re relatively small.

Instagram’s new video initiative also represents another shot across the bow of Instagram purists.

As BuzzFeed reporter Katie Notopoulos opined last year, “I’m Sorry To Report Instagram Is Bad Now.” Her point of concern was the impact that Stories had on the Instagram Feed – people were sharing to Stories instead of the Feed, which made the Feed pretty boring. And yet, the Stories content wasn’t good either, having become a firehose of throwaway posts that didn’t deserve to be shared directly on users’ profiles.

On top of all this, it seems the Instagram Feed is now going to be cluttered with IGTV previews. That’s. Just. Great.

Instagram says you’ll see the 1-minute previews in the Feed, and can tap on them to turn on the audio. Tap the IGTV icon on the preview and you’ll be able to watch the full version in IGTV. When the video is finished, you’re returned to the Feed. Or, if you want to see more from IGTV, you can swipe up while the video plays to start browsing.

Feed previews are only one way Instagram has been developing the product to attract more views in recent months. It has also integrated IGTV into Explore, allowed the sharing of IGTV videos to Stories, added the ability to save IGTV videos, and launched IGTV Web Embeds.


News Source = techcrunch.com

Theranos documentary review: The Inventor’s horrifying optimism

A blood-splattered Theranos machine nearly pricks an employee struggling to fix it. This gruesome graphical rendering is what you’ll walk away from HBO’s “The Inventor” with. It finally gives a visual to the startup’s laboratory fraud detailed in words by John Carreyrou’s book “Bad Blood”.

The documentary that premiered tonight at Sundance Film Festival explores how the move fast and break things ethos of Silicon Valley is “really dangerous when people’s lives are in the balance” as former employee and whistleblower Tyler Shultz says in the film. Theranos promised a medical testing device that made a single drop of blood from your finger more precise than a painful old-school syringe in your vein. What patients ended up using was so inaccurate it put their health in jeopardy.

But perhaps even more frightening is the willingness of Theranos CEO Elizabeth Holmes to delude herself and everyone around her in service of a seemingly benevolent mission. The documentary captures how good ideas can make people do bad things.

“The Inventor: Out For Blood In Silicon Valley” juxtaposes truthful interviews with the employees who eventually rebelled against Holmes with footage and media appearances of her blatantly lying to the world. It manages to stick to the emotion of the story rather than getting lost in the scientific discrepancies of Theranos’ deception.

The film opens and closes with close-ups of Holmes, demonstrating how the facts change her same gleaming smile and big blue eyes from the face of innovative potential to that of a sociopathic criminal. “I don’t have many secrets” she tells the camera at the start.

Though the film mentions early on that her company, once valued at more than $9 billion, would wind up worth less than zero, it does a keen job of building empathy for her that it can tear down later. You see her tell sob stories of death in the family and repeat her line about building an end to having to say goodbye to loved ones too soon. You hear how she’s terrified of needles and how growing up, “my best friends were books.”

But then cracks start to emerge as powerful old men, from professors to former cabinet members, fawn over Holmes and become enthralled by her cult of personality as validation snowballs. Oscar-winning director Alex Gibney has a knack for creeping dread from his experience making “Enron: The Smartest Guys In The Room” and “Going Clear: Scientology and the Prison of Belief.” He portrays Holmes’ delusions of grandeur with shots of her portrait beside those of Archimedes, Beethoven, and her idol Steve Jobs.

The first red flag comes when Holmes names her initial device Edison after the historic inventor the film assures you was quite a fraud himself. Soon, sources from inside the company relay how the Edison and subsequent Theranos hardware never worked right but that demos were faked for customers and investors. Instead of sticking to a firm timeline, Gibney bounces around to hammer home the emotional arcs of employees from excited to dubious, and of Holmes from confidence to paranoia.

Carreyrou’s “Bad Blood” meticulously chronicled every tiny warning sign that worried Theranos’ staff in order to build a case. But the author’s Wall Street Journal day job bled through, sapping the book of emotion and preventing it from seizing the grandeur of the tale’s climactic moments.

Gibney fills in the blanks with cringe-inducing scenes of Theranos’ faulty hardware. A ‘nanotainer’ of blood rolls off a table and fractures, a biohazard awaiting whoever tries to pick it up. The depiction of working in Theranos’ unregulated laboratory scored the biggest gasps from the Sundance audience. Former employees describe how Theranos recruited drifters they suspected of hepatitis as guinea pigs. Their stale blood evaporates into the air surrounding machines dripping with inky red, covered in broken test tubes. Gibney nails the graphics, zooming in on a needle spraying droplets as a robotic arm sputters through malfunctions. I almost had to look away as the film renders a hand reaching into the machine and only just dodging an erratic syringe.

A still from The Inventor: Out For Blood in Silicon Valley by Alex Gibney, an official selection of the Documentary Premieres program at the 2019 Sundance Film Festival. Courtesy of Sundance Institute | photo by Drew Kelly.

At times, Gibney goes a bit too melodramatic. The toy music box twinkling foreshadows a dream becoming a nightmare, but it gets maddening after an hour straight. The pacing feels uneven, sometimes bogged down in Holmes’ personal relationships when later it seems to speed through the company’s collapse.

Though elsewhere, the director harnesses the nervous laughter coping mechanism of the former employees to inject humor into the grim tale. With accuracy so low, Shultz jokes that “if people are testing themselves for syphilis with Theranos, there’s going to be a lot more syphilis in the world.” Visual dramatizations of journalists’ audio recordings of Holmes and the eventual legal disputes bring this evidence to life.

Alex Gibney, director of The Inventor: Out For Blood in Silicon Valley, an official selection of the Documentary Premieres program at the 2019 Sundance Film Festival. Courtesy of Sundance Institute.

The most touching scene sees Fortune’s Roger Parloff on the brink of implosion as he grapples with giving Holmes her first magazine cover story — momentum she used to eventually get Theranos’ useless hardware in front of real patients who depended on its results.

The Inventor succeeds at instilling the lesson without getting too preachy. It’s fine to be hopeful, but don’t ignore your concerns no matter how much you want something to be real. It takes an incredibly complex sequence of events and makes it at once gripping and informative. If you haven’t read “Bad Blood” or found it drab, “The Inventor” conveys the gravity of the debacle with a little more flair.

Yet the documentary also gives Holmes a bit too much benefit of the doubt, suggesting that hey, at least she was trying to do good in the world. In the after-film panel, Gibney said “She had a noble vision . . . I think that was part of why she was able to convince so many people and convince herself that what she was doing was great, which allowed her to lie so effectively.” Carreyrou followed up that “she was not intending to perpetrate a long con.”

Yet that’s easier to say for both the director and the author when neither of their works truly investigated the downstream health impacts of Theranos’ false positives and false negatives. If they’d tracked down people who delayed critical treatment or had their lives upended by the fear of a disease they didn’t have, I doubt Holmes would be cut so much slack.

Some degree of ‘Fake it ’til you make it’ might be essential to build hard technology startups. You must make people believe in something that doesn’t exist if you’re to pull in the funding and talent necessary to make it a reality. But it’s not just medical, hardware, or “atoms not bits” startups that must be allegiant to the truth. As Facebook and WhatsApp’s role in spreading misinformation that led to mob killings in India and Myanmar proved, having a grand mission doesn’t make you incapable of doing harm. A line must be drawn between optimism and dishonesty before it leads to drawing chalk outlines on the ground.

News Source = techcrunch.com

Facebook launches petition feature, its next battlefield

Gather a mob and Facebook will now let you make political demands. Tomorrow Facebook will encounter a slew of fresh complexities with the launch of Community Actions, its News Feed petition feature. Community Actions could unite neighbors to request change from their local and national elected officials and government agencies. But it could also provide vocal interest groups a bully pulpit from which to pressure politicians and bureaucrats with their fringe agendas.

Community Actions embodies the central challenge facing Facebook. Every tool it designs for positive expression and connectivity can be subverted for polarization and misinformation. Facebook’s membership has swelled into such a ripe target for exploitation that it draws out the worst of humanity. You can imagine misuses like “Crack down on [minority group]” that are offensive or even dangerous but some see as legitimate. The question is whether Facebook puts in the forethought and aftercare to safeguard its new tools with proper policy and moderation. Otherwise each new feature is another liability.

Community Actions roll out to the entire US tomorrow after several weeks of testing in a couple of markets. Users can add a title, description, and image to their Community Action, and tag relevant government agencies and officials who’ll be notified. The goal is to make the Community Action go viral and get people to hit the “Support” button. Community Actions have their own discussion feed where people can leave comments, create fundraisers, and organize Facebook Events or Call Your Rep campaigns. Facebook displays the number of supporters behind a Community Action, but you’ll only be able to see the names of those you’re friends with or who are Pages or public figures.

Facebook is purposefully trying to focus Community Actions more narrowly on spurring government action, rather than just any random cause. That means it won’t immediately replace Change.org petitions that can range from the civilian to the absurd. But one-click Support straight from the News Feed could massively reduce the friction of signing up, and thereby attract organizations and individuals seeking to maximize the size of their mob.

You can check out some examples of Community Actions here, like the non-profit Colorado Rising calling for the governor to put a moratorium on oil and gas drilling, citizens asking a Florida mayor and state officials to build a performing arts center, and a Philadelphia neighborhood association requesting that the city put in crosswalks by the library. I fully expect one of the first big Community Actions will be the social network’s users asking Senators to shut down Facebook or depose Mark Zuckerberg.

The launch follows other civic-minded Facebook features like its Town Hall and Candidate Info for assessing politicians, Community Help for finding assistance after a disaster, and local news digest Today In. A Facebook spokesperson who gave us the first look at Community Actions provided this statement:

“Building informed and civically engaged communities is at the core of Facebook’s mission. Every day, people come together on Facebook to advocate for causes they care about, including by contacting their elected officials, launching a fundraiser, or starting a group. Through these and other tools, we have seen people marshal support for and get results on issues that matter to them. Community Action is another way for people to advocate for changes in their communities and partner with elected officials and government agencies on solutions.”

The question will be where Facebook’s moderators draw the line on what’s appropriate as a Community Action, and the ensuing calls of bias that line will trigger. Facebook is employing a combination of user flagging, proactive algorithmic detection, and human enforcers to manage the feature. But what the left might call harassment, the right might call free expression. If Facebook allows controversial Community Actions to persist, it could be viewed as complicit with their campaigns, but could be criticized for censorship if it takes one down. Like fake news and trending topics, the feature could become the social network’s latest can of worms.

Facebook is trying to prioritize local Actions where community members have a real stake. It lets users display “constituent” badges so their elected officials know they aren’t just distant rabble-rousers. It’s why Facebook will not allow President Donald Trump or Vice President Mike Pence to be tagged in Community Actions. But you’re free to tag all your state representatives demanding nude parks, apparently.

Another issue is how people can stand up against a Community Action. Only those who Support one may join in its discussion feed. That might lead trolls to falsely pledge their backing just to stir up trouble in the comments. Otherwise, Facebook tells me users will have to share a Community Action to their own feed with a message of disapproval, or launch their own in protest. My concern is that an agitated but niche group could drive a sense of false equivalency by using Facebook Groups or message threads to make it look like there’s as much or more support for a vulgar cause or against a just one. A politician could be backed into a corner and forced to acknowledge radicals or bad-faith actors lest they look negligent.

While Facebook’s spokesperson says initial tests didn’t surface many troubles, the company is trying to balance safety with efficiency and it will consider how to evolve the feature in response to emergent behaviors. The trouble is that open access draws out the trolls and grifters seeking to fragment society. Facebook will have to assume the thorny responsibility of shepherding the product towards righteousness and defining what that even means. If it succeeds, there’s an amazing opportunity here for citizens to band together to exert consensus upon government. A chorus of voices carries much further than a single cry.

News Source = techcrunch.com

What history could tell Mark Zuckerberg

Perhaps Mark Zuckerberg obsessed over the wrong bit of history. Or else didn’t study his preferred slice of classical antiquity carefully enough, faced, as he now is, with an existential crisis of ‘fake news’ simultaneously undermining trust in his own empire and in democracy itself.

A recent New Yorker profile — questioning whether the Facebook founder can fix the creation he pressed upon the world before the collective counter-pressure emanating from his billions-strong social network does for democracy what Brutus did to Caesar — touched in passing on Zuckerberg’s admiration for Augustus, the first emperor of Rome.

“Basically, through a really harsh approach, he established two hundred years of world peace,” was the Facebook founder’s concise explainer of his man-crush, freely accepting there had been some crushing “trade-offs” involved in delivering that august outcome.

Zuckerberg’s own trade-offs, engaged in his quest to maximize the growth of his system, appear to have achieved a very different kind of outcome.

Empire of hurt

If you gloss over the killing of an awful lot of people, the Romans achieved and devised many ingenious things. But the population that lived under Augustus couldn’t have imagined an information-distribution network with the power, speed and sheer amplifying reach of the internet. Let alone the data-distributing monster that is Facebook — an unprecedented information empire unto itself that’s done its level best to heave the entire internet inside its corporate walls.

Literacy in Ancient Rome was dependent on class, thereby limiting who could read the texts that were produced, and requiring word of mouth for further spread.

The ‘internet of the day’ would best resemble physical gatherings — markets, public baths, the circus — where gossip passed as people mingled. Though of course information could only travel as fast as a person (or an animal assistant) could move a message.

In terms of regular news distribution, Ancient Rome had the Acta Diurna, a government-produced daily gazette that put out the official line on noteworthy public events.

These official texts, initially carved on stone or metal tablets, were distributed by being exposed in a frequented public place. The Acta is sometimes described as a proto-newspaper, given the mix of news it came to contain.

Minutes of senate meetings were included in the Acta by Julius Caesar. But, in a very early act of censorship, Zuckerberg’s hero ended the practice — preferring to keep more fulsome records of political debate out of the literate public sphere.

“What news was published thereafter in the acta diurna contained only such parts of the senatorial debates as the imperial government saw fit to publish,” writes Frederick Cramer, in an article on censorship in Ancient Rome.

Augustus, the grand-nephew and adopted son of Caesar, evidently did not want the risk of political opponents using the outlet to influence opinion, his great-uncle having been assassinated in a murderous plot hatched by conspiring senators.

The Death of Caesar

Under Augustus, the Acta Diurna was instead the mouthpiece of the “monarchic faction.”

“He rightly believed this method to be less dangerous than to muzzle the senators directly,” is Cramer’s assessment of Augustus’s decision to terminate publication of the senatorial protocols, limiting at a stroke how physical voices raised against him in the Senate could travel and lodge in the wider public consciousness by depriving them of space on the official platform.

Augustus also banned anonymous writing in a bid to control incendiary attacks distributed via pamphlets and used legal means to command the burning of incriminatory writings (with some condemned authors issued with ‘literary death-sentences’ for their entire life’s work).

The first emperor of Rome understood all too well the power of “publicare et propagare.”

It’s something of a grand irony, then, that Zuckerberg failed to grasp the lesson for the longest time, letting the eviscerating fire of fake news rage on unchecked until the inferno was licking at the seat of his own power.

So instead of Facebook’s brand and business invoking the sought-for sense of community, it’s come to appear like a layer cake of fakes, iced with hate speech horrors.

On the fake front, there are fake accounts, fake news, inauthentic ads, faux verifications and questionable metrics. Plus a truck tonne of spin and cynical blame shifting manufactured by the company itself.

There’s some murkier propaganda, too; a PR firm Facebook engaged in recent years to help with its string of reputation-decimating scandals reportedly worked to undermine critical voices by seeding inflammatory smears on its behalf.

Publicare et propagare, indeed.

Perhaps Zuckerberg thought Ancient Rome’s bloody struggles were so far-flung in history that any leaderly learnings he might extract would necessarily be abstract, and could be cherry-picked and selectively filtered with the classical context so comfortably remote from the modern world. A world that, until 2017, Zuckerberg had intended to render, via pro-speech defaults and systematic hostility to privacy, “more open and connected.” Before it got too difficult for him to totally disregard the human and societal costs.

Revising the mission statement a year-and-a-half ago, Zuckerberg had the chance to admit he’d messed up by mistaking his own grandstanding world-changing ambition for a worthy cause.

Of course he sidestepped, writing instead that he would commit his empire (he calls it a “community”) to strive for a specific positive outcome.

He didn’t go full Augustus with the new goal (no ‘world peace’) — but recast Facebook’s mission to: “Give people the power to build community and bring the world closer together.”

There are, it’s painful to say, “communities” of neo-Nazis and white supremacists thriving on Facebook. But they certainly don’t believe in bringing the world closer together. So Facebook’s reworked mission statement is a tacit admission that its tools can help spread hate by saying it hopes for the opposite outcome. Even as Zuckerberg continues to house voices on his platform that seek to deny historical outrages like the Holocaust, which is the very definition of antisemitic hate speech.

“I used to think that if we just gave people a voice and helped them connect, that would make the world better by itself. In many ways it has. But our society is still divided,” he wrote in June 2017, eliding his role as emperor of the Facebook platform, in fomenting the societal division of which he typed. “Now I believe we have a responsibility to do even more. It’s not enough to simply connect the world, we must also work to bring the world closer together.”

This year his personal challenge was also set at “fixing Facebook.”

Also this year: Zuckerberg made a point of defending allowing Holocaust deniers on his platform, then scrambled to add the caveat that he finds such views “deeply offensive.” (That particular Facebook content policy has stood unflinching for almost a decade.)

It goes without saying that the Nazis of Hitler’s Germany understood the terrible power of propaganda, too.

More recently, faced with the consequences of a moral and ethical failure to grapple with hateful propaganda and junk news, Facebook has said it will set up an external policy committee to handle some content policy decisions next year.

But only at a higher and selective appeal tier, after layers of standard internal reviews. It’s also not clear how this committee can be truly independent from Facebook.

Quite possibly it’ll just be another friction-laced distraction tactic, akin to Facebook’s self-serving ‘Hard Questions’ series.

WASHINGTON, DC – APRIL 11: Facebook co-founder, Chairman and CEO Mark Zuckerberg prepares to testify before the House Energy and Commerce Committee on April 11, 2018 in Washington, DC. (Photo by Chip Somodevilla/Getty Images)

Revised mission statements, personal objectives and lashings of self-serving blog posts (playing up the latest self-forged “accountability” fudge), have done nothing to dim the now widely held view that Facebook specifically, and social media in general, profits off of accelerated outrage.

Cries to that effect have only grown louder this year, two years on from revelations that Kremlin election propaganda maliciously targeting the U.S. presidential election had reached hundreds of millions of Facebook users, fueled by a steady stream of fresh outrages found spreading and catching fire on these “social” platforms.

Like so many self-hyping technologies, social media seems terribly deceptively named.

“Antisocial media” is, all too often, rather closer to the mark. And Zuckerberg, the category’s still youthful warlord, looks less “harshly pacifying Augustus” than modern day Ozymandias, forever banging on about his unifying mission while being drowned out by the sound and fury coming from the platform he built to programmatically profit from conflict.

And still the young leader longs for the mighty works he might yet do.

Look on my works, ye mighty…

For all the positive connections flowing from widespread access to social media tools (which of course Zuckerberg prefers to fix on), evidence of the tech’s divisive effects is now impossible for everyone else to ignore: whether you look at the wildly successful megaphoning of Kremlin propaganda targeting elections and (genuine) communities by pot stirring across all sorts of identity divides; or algorithmic recommendation engines that systematically point young and impressionable minds toward extremist ideologies (and/or brain-meltingly ridiculous conspiracy theories) as an eyeball-engagement strategy for scaling ad revenue in the attention economy. Or, well, Brexit.

Whatever your view on whether or not Facebook content is actually influencing opinion, attention is undoubtedly being robbed. And the company has a long history of utilizing addictive design strategies to keep users hooked.

To the point where it’s publicly admitted it has an over-engagement problem and claims to be tweaking its algorithmic recipes to dial down the attention incursion. (Even as its engagement-based business model demands the dial be yanked back the other way.)

Facebook’s problems with fakery (“inauthentic content” in the corporate parlance) and hate speech — which, without the hammer blow of media-level regulation, is forever doomed to slip through Facebook’s one-size-fits-all “community standards” — are, it argues, merely a reflection of humanity’s flaws.

So it’s essentially asking to be viewed as a global mirror, and so be let off the moral hook. A literal vox populi — warts, fakes, hate and all.

It was never selling a fair-face, this self-serving, revisionist hot-take suggests; rather Facebook wants to be accepted as, at best, a sort of utilitarian plug that’s on a philanthropic, world-spanning infrastructure quest to stick a socket in everyone. Y’know, for their own good.

“It’s fashionable to treat the dysfunctions of social media as the result of the naivete of early technologists who failed to foresee these outcomes. The truth is that the ability to build Facebook-like services is relatively common,” wrote Cory Doctorow earlier this year in a damning assessment of the Facebook founder’s moral vacuum. “What was rare was the moral recklessness necessary to go through with it.”

Even now Zuckerberg is refusing the moral and ethical burden of editorial responsibility for the content his tools auto-publish and algorithmically amplify, every instant of every day, using proprietary information-shaping distribution hierarchies that accelerate machine-selected clickbait through the blood-brain barrier of 2.2 billion-plus users.

These algorithmically prioritized comms are positioned to influence opinion and drive intention at an unprecedented, global scale.

Asked by the New Yorker about the inflammatory misinformation peddled by InfoWars conspiracy theorist and hate speech “preacher,” Alex Jones, earlier this year, Zuckerberg’s gut instinct was to argue again to be let off the hook. “I don’t believe that it is the right thing to ban a person for saying something that is factually incorrect,” was his disingenuous response.

It was left to the journalist to point out InfoWars’ malicious disinformation is rather more than just factually incorrect.

Facebook has taken down some individual InfoWars videos this year, in its usual case by case style, where it deemed there was a direct incitement to violence. And in August it also pulled some InfoWars pages (“for glorifying violence, which violates our graphic violence policy, and using dehumanizing language to describe people who are transgender, Muslims and immigrants, which violates our hate speech policies”).

But it has certainly not de-platformed the professional purveyor of hateful conspiracy theories who sells supplements alongside his attention-grabbing lies.

One academic study, published two months ago, found much of the removed InfoWars content had managed to move “swiftly back” onto the Facebook platform. Much as radio hates silence, Facebook hates a content vacuum.

The problem is its own platform also sells stuff alongside attention-grabbing lies. So Jones is just the Facebook business model if it could pull on a blue suit and shout.

Alex Jones

“Senator, we run ads”

It’s clear that Facebook’s adherence to a rules-based, reactive formula for assessing speech sets few if any meaningful moral standards. The company has also preferred to try offloading tricky decisions to third-party fact checkers and soon a quasi-external committee — a strategy that looks intended to sustain the suggestive lie that, at base, Facebook is just a “neutral platform.”

Yet Zuckerberg’s business is the business of influence itself. He admits as much. “Senator, we run ads,” he told Congress this April when asked how the platform turns a profit.

If the ads don’t work that’s an awful lot of money being pointlessly poured into Facebook’s coffers.

At the same time, the risk of malicious manipulation of Facebook’s machinery of mass manipulation is something the company claims it simply hadn’t thought of until very, very recently. 

That’s the official explanation for why senior executives failed to pay any mind to the tsunami of politically charged propaganda blooming across its U.S. platform, yet originating in Saint Petersburg and environs.

An astute political operator like Augustus was entirely alive to the risks of political propaganda. Hence making sure to keep a lid on domestic political opponents, while allowing them to let off steam in the Senate where a wider audience wouldn’t hear them.

Zuckerberg, by contrast, created the most effective tool for spreading propaganda the world has ever known without — so he claims — bothering to consider how people might use it.

That’s either radical stupidity or willful recklessness.

Zuckerberg implies the former. “I always believed people are basically good,” he wrote in his grandiose explainer on rethinking Facebook’s mission statement last year.

Though you’d think someone with a fascination for classical antiquity, and a special admiration for an emperor whose harsh trade-offs apparently included arranging the execution of his own grandson, might have found plenty to test that theory to a natural breaking point.

Safe to say, such a naive political mind wouldn’t have lasted long in Ancient Rome.

But Zuckerberg is no politician. He’s a new-age ad salesman with a crush on one of history’s canniest political operators — who happened to know the power and value of propaganda. And who also knew that propaganda could be deadly.

If you imagine Facebook’s platform as a modern day Acta Diurna — albeit, one updated continuously, delivered direct to citizens’ pockets, and with no single distributed copy ever being exactly the same — the organ is clearly not working toward any kind of societal order, crushing or otherwise.

Under Zuckerberg’s programmatic instruction, Facebook’s daily notices are selected for their capacity to emotionally tug at the individual. By design the medium agitates because the platform exists to trade attention.

It’s really the opposite of “civilization building.” Outrage and tribalism are grist to the algorithmic mill. It’s much closer to the tabloid news mantra — of “if it bleeds it leads.”

But Facebook goes further, using “free speech” as a cloaking mechanism to cross the ethical line and conceal the ugly violence of a business that profits by ripping up the social compact.

The speech-before-truth philosophy underpinning Zuckerberg’s creation intrinsically works against the civic, community values he claims to champion. So at bottom, there’s yet another fake: no “global community” inside the walled garden, just a globally scaled marketing empire that’s had raging success in growing programmatic ad sales by tearing genuine communities apart.

Here confusion and anger reign.

The empire of Zuckerberg is a drear domain indeed.

One hundred cardboard cutouts of Facebook founder and CEO Mark Zuckerberg stand outside the US Capitol in Washington, DC, April 10, 2018.
Advocacy group Avaaz is calling attention to what the group says are hundreds of millions of fake accounts still spreading disinformation on Facebook. (Photo: SAUL LOEB/AFP/Getty Images)

Fake news of the 1640s

Might things have turned out differently for Facebook — and, well, for the world — if its founder had obsessed over a different period in history?

The English Civil War of the 1640s has much to recommend it as a study topic to those trying to understand and unpick the social impacts of the hyper modern phenomenon of social media, given the historical parallels of a society turned upside down during a moment of information revolution.

It might seem counterintuitive to look so far back in time to try to understand the societal impacts of cutting-edge communications technologies. But human nature can be surprisingly constant.

Internet platforms are also socio-technical tools, which means ignoring human behavior is a really dumb thing to do.

As the inventor of the World Wide Web, Tim Berners-Lee, said recently of modern day anthropogenic platforms: “As we’re designing the system, we’re designing society.”

The design challenge is all about understanding human behaviour — so you know how and where to place your ethical guardrails.

Rather than, per the Zuckerberg fashion, embarking on some kind of a quixotic, decade-plus quest to chase a grand unifying formula of IFTTT reaction statements to respond consistently to every possible human (and inhuman) act across the globe.

Mozilla’s Mitchell Baker made a related warning earlier this year, when she called for humanities and ethics to be baked into STEM learning, saying: “One thing that’s happened in 2018 is that we’ve looked at the platforms, and the thinking behind the platforms, and the lack of focus on impact or result. It crystallised for me that if we have Stem education without the humanities, or without ethics, or without understanding human behaviour, then we are intentionally building the next generation of technologists who have not even the framework or the education or vocabulary to think about the relationship of Stem to society or humans or life.”

What’s fascinating about the English Civil War to anyone interested in current day Internet speech versus censorship ethics trade-offs, is that in a similar fashion to how social media has radically lowered the distribution barrier for online speech, by giving anyone posting stuff online the chance of reaching a large audience, England’s long-standing regime of monarchical censorship collapsed in 1641, leading to a great efflorescence of speech and ideas as pamphlets suddenly and freely poured off printing presses.

This included an outpouring of radical political views from groups agitating for religious reforms, popular sovereignty, extended suffrage, common ownership and even proto women’s rights — laying out democratic concepts and liberal ideas centuries ahead of the nation itself becoming a liberal democracy.

But, at the same time, pamphlets were also used during the English Civil War period as a cynical political propaganda tool to whip up racial and sectarian hatred, most markedly in the parliament’s fight against the king.

Especially vicious hate speech was directed at the Irish. And historians suggest anti-Irish propaganda helped fuel the rampage that Cromwell’s soldiers went on in Ireland to crush the rebellion, having been fed a diet of violent claims in uncensored pamphlet print — such as that the Irish were killing and eating babies.

For a modern day parallel of information technology charging up ethnic hate you only have to look to Facebook’s impact in Myanmar where its platform was appropriated by military elements to incite genocide against the minority Rohingya population — leading to terrible human rights abuses in the modern era. There’s no shortage of other awful examples either.

“There are genuine atrocities in Ireland but suddenly the pamphleteers realise that this sells and suddenly you get a pornography of violence when everyone is rushing to put out these incredibly violent and unpleasant stories, and people are rushing to buy them,” says University of Southampton early modern history professor, Mark Stoyle, discussing the parliamentary pamphleteers’ evolving tactics in the English Civil War.

“It makes the Irish rebellion look even worse than it was. And it sort of raises even greater levels of bitterness and hostility towards the Irish. I would say those sorts of things had a very serious effect.”

Stoyle says pamphlets printed during the English Civil War period also revived superstitious beliefs in witchcraft, leading to an upsurge in prosecutions and killings on charges of witchcraft which had dipped in earlier years under tighter state controls on popular printed accounts of witch trials.

“Once the royal regime collapses, the king’s not there to stop people prosecuting witches, he’s not there to stop these pamphlets appearing. There’s a massive upsurge in pamphlets about witches and in no time at all there’s a massive upsurge in prosecutions of witches. That’s when Matthew Hopkins, the witchfinder general, kills several hundred men and women in East Anglia on charges of being witches. And again I think the civil war propaganda has helped to fuel that.”

If you think modern day internet platforms don’t have to worry about crazy superstitions like witchcraft and devil worship just Google “Frazzledrip” (a conspiracy theory that’s been racking up the views on YouTube this year which claims Hillary Clinton and longtime aide Huma Abedin sexually assaulted a girl and drank her blood). The Clinton-targeted viral “Pizzagate” conspiracy theory also combines bizarre claims of Satanic rituals with child abuse. None of which stopped it catching fire on social media.

Indeed, a whole host of ridiculous fictions are being algorithmically accelerated into wider view, here in the 21st (not the 17th) century.

And it’s internet platforms that rank speech above truth that are in the distribution saddle.

Stoyle, who has written a book on witchcraft and propaganda during the English Civil War, believes the worst massacre of the period was also fueled by political disinformation targeting the king’s female camp followers. Parliamentary pamphleteers wrote that the women were prostitutes. Or claimed they were Irish women who had killed English men and women in Ireland. There were also claims some were witches.

“One of these pamphlets describes the women in the king’s camp — just literally a week before the massacre — and it presents them all as prostitutes and it says something like ‘these women they revel in their hot blood and they deserve a hotter punishment’,” he tells us. “Just a week later they’re all cut down. And I don’t think that’s coincidence.”

In the massacre Stoyle says parliamentary soldiers set about the women, killing 100 and mutilating scores more. “This is just unheard of,” he adds.

The early modern period even had the equivalent of viral clickbait in pamphlet form when a ridiculous story about a dog owned by the king’s finest cavalry commander, prince Rupert, took off. The poodle was claimed to be a witch in disguise which had invested Rupert with magical military powers — hence, the pamphlets proclaimed, his huge successes on the battlefield.

“In a time when we’ve got no pictures at all of some of the most important men and women in the country we’ve got six different pictures of prince Rupert’s dog circulating. So this is absolutely fake news with a vengeance,” says Stoyle.

And while parliamentarian pamphlet writers are generally assumed to be behind this particular sequence of Civil War fakes, Stoyle believes one particularly blatant pamphlet in the series — which claimed the dog was not only a witch but that the prince was having sex with it — is a doubly bogus hoax fake.

“I’m pretty certain now it was actually written by a royalist to poke fun at the parliamentarians for being so gullible and believing this stuff,” he says. “But like so many hoaxes it was a hoax that went wrong — it was done so well that most people who read it actually believed it. And it was just a few highly educated royalists who got the joke and laughed at it. And so in a way it was like a hoax that backfired horribly.

“A classic case of fake news biting the person who put it out in the bum.”

Of course, this hoax was also the prince’s dog pamphlet that got the most attention and “viral engagement” of the time, as other pamphlet writers picked up on it and started referencing it.

So again the lesson about clickbait economics is a very old one, if you only know where to look.

Fake news most certainly wasn’t suddenly born in 2016. Modern hoaxers like Jones (who has also been at it for far longer than two years) are just appropriating cutting-edge tech tools to plough a very old furrow.

Equally, it really shouldn’t be any kind of news flash that free speech can have a horribly dark side.

The overarching lesson of history is that propaganda is baked indelibly into the human condition. Speech and lies come wrapped around the same tongue.

The stark consequences that can flow from maliciously minded lies being crafted to move a particular audience are also writ large across countless history books.

So when Facebook says — caught fencing Kremlin lies — “we just didn’t think of that” it’s a truly illiterate response to an age-old problem.

And as the philosophical saying goes: Those who cannot remember the past are condemned to repeat it.

That’s really the most important history lesson of all.

“As humans we have this terrible ability to be angels and devils — to use things for wonderful purposes and to use things for terrible purposes that were never really intended or thought of,” says Stoyle, when asked whether, at a Facebook-level scale, we’re now seeing some of the limits of the benefits of free speech. “I’m not saying that the people who wrote some of these pamphlets in the Civil War expected it would lead to terrible massacres and killings but it did and they sort of played their part in that.

“It’s just an amazingly interesting period because there’s all this stuff going on and some of it is very dark and some of it’s more positive. And I suppose we’re quite well aware of the dark side of social media now and how it has got a tendency to let almost the worst human instincts come out in it. But some of these things were, I think, forces for good.”

‘Balancing angels and devils’ would certainly be quite the job description to ink on Zuckerberg’s business card.

“History teaches you to take all the evidence, weigh it up and then say who’s saying this, where does it come from, why are they saying it, what’s the purpose,” adds Stoyle, giving some final thoughts on why studying the past can provide a way through modern day information chaos. “Those are the tools that you need to make your way through this minefield.”

News Source = techcrunch.com
