
Timesdelhi.com

June 16, 2019
Category archive

Facebook

Indonesia restricts WhatsApp, Facebook and Instagram usage following deadly riots


Indonesia is the latest nation to bring the hammer down on social media after the government restricted the use of WhatsApp, Facebook and Instagram following deadly riots yesterday.

Numerous Indonesia-based users are today reporting difficulties sending multimedia messages via WhatsApp, which is one of the country’s most popular chat apps, and posting content to Facebook, while the hashtag #instagramdown is trending among the country’s Twitter users due to problems accessing the Facebook-owned photo app.

Wiranto, a coordinating minister for political, legal and security affairs, confirmed in a press conference that the government is limiting access to social media and “deactivating certain features” to maintain calm, according to a report from Coconuts.

Rudiantara, the communications minister of Indonesia and a critic of Facebook, explained that users “will experience lag on WhatsApp if you upload videos and photos.”

Facebook — which operates both WhatsApp and Instagram — didn’t explicitly confirm the blockages, but it did say it has been in communication with the Indonesian government.

“We are aware of the ongoing security situation in Jakarta and have been responsive to the Government of Indonesia. We are committed to maintaining all of our services for people who rely on them to communicate with their loved ones and access vital information,” a spokesperson told TechCrunch.

A number of Indonesia-based WhatsApp users confirmed to TechCrunch that they are unable to send photos, videos and voice messages through the service. Those restrictions are lifted when using Wi-Fi or mobile data services through a VPN, the people confirmed.

The restrictions come as Indonesia grapples with political tension following the release of the results of its presidential election on Tuesday. Defeated candidate Prabowo Subianto said he will challenge the result in the constitutional court.

Riots broke out in the capital, Jakarta, last night, killing at least six people and leaving more than 200 people injured. Following this, it is alleged that misleading information and hoaxes about the nature of the riots and the people who participated in them began to spread on social media services, according to local media reports.

Protesters hurl rocks during a clash with police in Jakarta on May 22, 2019. Indonesian police said on May 22 they were probing reports that at least one demonstrator was killed in clashes that broke out in the capital Jakarta overnight after a rally opposed to President Joko Widodo’s re-election. (Photo by ADEK BERRY / AFP)

For Facebook, seeing its services forcefully cut off in a region is no longer a rare occurrence. The company, which is grappling with the spread of false information in many markets, faced a similar restriction in Sri Lanka in April, when the service was completely banned for days following terrorist attacks in the nation. India, which just this week concluded its general election, has expressed concerns over Facebook’s inability to contain the spread of false information on WhatsApp; India is the service’s largest market, with over 200 million monthly users.

Indonesia’s Rudiantara expressed a similar concern earlier this month.

“Facebook can tell you, ‘We are in compliance with the government’. I can tell you how much content we requested to be taken down and how much of it they took down. Facebook is the worst,” he told a House of Representatives Commission last week, according to the Jakarta Post.

Update 05/22 02:30 PDT: The original version of this post has been updated to reflect that usage of Facebook in Indonesia has also been impacted.

Facebook still a great place to amplify pre-election junk news, EU study finds


A study carried out by academics at Oxford University to investigate how junk news is being shared on social media in Europe ahead of regional elections this month has found individual stories shared on Facebook’s platform can still hugely outperform the most important and professionally produced news stories, drawing as much as 4x the volume of Facebook shares, likes, and comments.

The study, conducted for the Oxford Internet Institute’s (OII) Computational Propaganda Project, is intended to respond to widespread concern about the spread of online political disinformation ahead of the EU elections, which take place later this month, by examining pre-election chatter on Facebook and Twitter in English, French, German, Italian, Polish, Spanish, and Swedish.

Junk news in this context refers to content produced by known sources of political misinformation — aka outlets that are systematically producing and spreading “ideologically extreme, misleading, and factually incorrect information” — with the researchers comparing interactions with junk stories from such outlets to news stories produced by the most popular professional news sources to get a snapshot of public engagement with sources of misinformation ahead of the EU vote.

As we reported last year, the Institute also launched a junk news aggregator ahead of the US midterms to help Internet users get a handle on manipulative politically-charged content that might be hitting their feeds.

In the EU the European Commission has responded to rising concern about the impact of online disinformation on democratic processes by stepping up pressure on platforms and the adtech industry — issuing monthly progress reports since January after the introduction of a voluntary code of practice last year intended to encourage action to squeeze the spread of manipulative fakes. Albeit, so far these ‘progress’ reports have mostly boiled down to calls for less foot-dragging and more action.

One tangible result last month was Twitter introducing a report option for misleading tweets related to voting ahead of the EU vote, though again you have to wonder what took it so long, given that online election interference is hardly a new revelation. (The OII study is also just the latest piece of research to bolster the age-old maxim that falsehoods fly and the truth comes limping after.)

The study also examined how junk news spread on Twitter during the pre-EU election period, with the researchers finding that less than 4% of sources circulating on Twitter’s platform were junk news (or “known Russian sources”) — with Twitter users sharing far more links to mainstream news outlets overall (34%) over the study period.

Although the Polish language sphere was an exception — with junk news making up a fifth (21%) of EU election-related Twitter traffic in that outlying case.

Returning to Facebook: the researchers do note that many more users interact with mainstream content overall via its platform, since mainstream publishers have a higher following and so “wider access to drive activity around their content”, meaning their stories “tend to be seen, liked, and shared by far more users overall”. But they also point out that junk news still packs a greater per-story punch, likely owing to tactics such as clickbait, emotive language and outragemongering in headlines, which have repeatedly been shown to generate more clicks and engagement on social media.

It’s also of course much quicker and easier to make some shit up vs the slower pace of doing rigorous professional journalism — so junk news purveyors can get out ahead of news events also as an eyeball-grabbing strategy to further the spread of their cynical BS. (And indeed the researchers go on to say that most of the junk news sources being shared during the pre-election period “either sensationalized or spun political and social events covered by mainstream media sources to serve a political and ideological agenda”.)

“While junk news sites were less prolific publishers than professional news producers, their stories tend to be much more engaging,” they write in a data memo covering the study. “Indeed, in five out of the seven languages (English, French, German, Spanish, and Swedish), individual stories from popular junk news outlets received on average between 1.2 to 4 times as many likes, comments, and shares than stories from professional media sources.

“In the German sphere, for instance, interactions with mainstream stories averaged only 315 (the lowest across this sub-sample) while nearing 1,973 for equivalent junk news stories.”

To conduct the research the academics gathered more than 584,000 tweets related to the European parliamentary elections from more than 187,000 unique users between April 5 and April 20 using election-related hashtags — from which they extracted more than 137,000 tweets containing a URL link, which pointed to a total of 5,774 unique media sources.

Sources that were shared 5x or more across the collection period were manually classified by a team of nine multi-lingual coders based on what they describe as “a rigorous grounded typology developed and refined through the project’s previous studies of eight elections in several countries around the world”.

Each media source was coded individually by two separate coders, a technique they say successfully labeled nearly 91% of all links shared during the study period.

The five most popular junk news sources were extracted from each language sphere looked at — with the researchers then measuring the volume of Facebook interactions with these outlets between April 5 and May 5, using the NewsWhip Analytics dashboard.

They also conducted a thematic analysis of the 20 most engaging junk news stories on Facebook during the data collection period to gain a better understanding of the different political narratives favoured by junk news outlets ahead of an election.

On the latter front they say the most engaging junk narratives over the study period “tend to revolve around populist themes such as anti-immigration and Islamophobic sentiment, with few expressing Euroscepticism or directly mentioning European leaders or parties”.

Which suggests that EU-level political disinformation is a more issue-focused animal (and/or less developed) — vs the kind of personal attacks that have been normalized in US politics (and were richly and infamously exploited by Kremlin-backed anti-Clinton political disinformation during the 2016 US presidential election, for example).

This is likely also because of a lower level of political awareness attached to individuals involved in EU institutions and politics, and the multi-national state nature of the pan-EU project — which inevitably bakes in far greater diversity. (We can posit that just as it aids robustness in biological life, diversity appears to bolster democratic resilience vs political nonsense.)

The researchers also say they identified two noticeable patterns in the thematic content of junk stories that sought to cynically spin political or social news events for political gain over the pre-election study period.

“Out of the twenty stories we analysed, 9 featured explicit mentions of ‘Muslims’ and the Islamic faith in general, while seven mentioned ‘migrants’, ‘immigration’, or ‘refugees’… In seven instances, mentions of Muslims and immigrants were coupled with reporting on terrorism or violent crime, including sexual assault and honour killings,” they write.

“Several stories also mentioned the Notre Dame fire, some propagating the idea that the arson had been deliberately plotted by Islamist terrorists, for example, or suggesting that the French government’s reconstruction plans for the cathedral would include a minaret. In contrast, only 4 stories featured Euroscepticism or direct mention of European Union leaders and parties.

“The ones that did either turned a specific political figure into one of derision – such as Arnoud van Doorn, former member of PVV, the Dutch nationalist and far-right party of Geert Wilders, who converted to Islam in 2012 – or revolved around domestic politics. One such story relayed allegations that Emmanuel Macron had been using public taxes to finance ISIS jihadists in Syrian camps, while another highlighted an offer by Vladimir Putin to provide financial assistance to rebuild Notre Dame.”

Taken together, the researchers conclude that “individuals discussing politics on social media ahead of the European parliamentary elections shared links to high-quality news content, including high volumes of content produced by independent citizen, civic groups and civil society organizations, compared to other elections we monitored in France, Sweden, and Germany”.

Which suggests that attempts to manipulate the pan-EU election are either less prolific or, well, less successful than those which have targeted some recent national elections in EU Member States. And logic would suggest that co-ordinating election interference across a 28-Member State bloc requires greater co-ordination and resources than meddling in a single national election — on account of the multiple countries, cultures, languages and issues involved.

We’ve reached out to Facebook for comment on the study’s findings.

The company has put a heavy focus on publicizing its self-styled ‘election security’ efforts ahead of the EU election. Though it has mostly focused on setting up systems to control political ads — whereas junk news purveyors are simply uploading regular Facebook ‘content’ at the same time as wrapping it in bogus claims of ‘journalism’ — none of which Facebook objects to. All of which allows would-be election manipulators to pass off junk views as online news, leveraging the reach of Facebook’s platform and its attention-hogging algorithms to amplify hateful nonsense. And any increase in engagement is a win for Facebook’s ad business, so er…

Oculus Quest and Rift S now shipping


Facebook-owned Oculus is shipping its latest VR headgear from today. Preorders for the PC-free Oculus Quest and the higher-end Oculus Rift S opened up three weeks ago.

In a launch blog Oculus touts the new hardware’s “all-in-one, fully immersive 6DOF VR” — writing: “We’re bringing the magic of presence to more people than ever before — and we’re doing it with the freedom of fully untethered movement”.

For a less varnished view on what it’s like to stick a face-computer on your head you can check out our reviews by clicking on the links below…

Oculus Quest

TC: “The headset may not be the most powerful, but it is doubtlessly the new flagship VR product from Facebook”

Oculus Rift S

TC: “It still doesn’t feel like a proper upgrade to a flagship headset that’s already three years old, but it is a more fine-tuned system that feels more evolved and dependable”

The Oculus blog contains no details on pre-order sales for the headsets — beyond a few fine-sounding words.

Meanwhile Facebook has, for months, been running native ads for Oculus via its eponymous and omnipresent social network — although there’s no explicit mention of the Oculus brand unless you click through to “learn more”.

Instead it’s pushing the generic notion of “all-in-one VR”, shrinking the Oculus brand stamp on the headset to an indecipherable micro-scribble.

Here’s one of Facebook’s ads that targeted me in Europe back in March, for example:

For those wanting to partake of Facebook flavored face gaming (and/or immersive movie watching), the Oculus Quest and Rift S are available to buy via oculus.com and retail partners including Amazon, Best Buy, Newegg, Walmart, and GameStop in the US; Currys PC World, FNAC, MediaMarkt, and more in the EU and UK; and Amazon in Japan.

Just remember to keep your mouth shut.

Why is Facebook doing robotics research?


It’s a bit strange to hear that the world’s leading social network is pursuing research in robotics rather than, say, making search useful, but Facebook is a big organization with many competing priorities. And while these robots aren’t directly going to affect your Facebook experience, what the company learns from them could be impactful in surprising ways.

Though robotics is a new area of research for Facebook, its reliance on and bleeding-edge work in AI are well known. Mechanisms that could be called AI (the definition is quite hazy) govern all sorts of things, from camera effects to automated moderation of restricted content.

AI and robotics are naturally overlapping magisteria — it’s why we have an event covering both — and advances in one often advance the other, or open new areas of inquiry in it. So really it’s no surprise that Facebook, with its strong interest in using AI for a variety of tasks in the real and social media worlds, might want to dabble in robotics to mine for insights.

What then could be the possible wider applications of the robotics projects it announced today? Let’s take a look.

Learning to walk from scratch

“Daisy” the hexapod robot.

Walking is a surprisingly complex action, or series of actions, especially when you’ve got six legs, like the robot used in this experiment. You can program in how it should move its legs to go forward, turn around, and so on, but doesn’t that feel a bit like cheating? After all, we had to learn on our own, with no instruction manual or settings to import. So the team looked into having the robot teach itself to walk.

This isn’t a new type of research — lots of roboticists and AI researchers are into it. Evolutionary algorithms (different but related) go back a long way, and we’ve already seen interesting papers in this vein.

By giving their robot some basic priorities like being “rewarded” for moving forward, but no real clue how to work its legs, the team let it experiment and try out different things, slowly learning and refining the model by which it moves. The goal is to reduce the amount of time it takes for the robot to go from zero to reliable locomotion from weeks to hours.
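That trial-and-error loop can be sketched as a simple hill-climbing search. Everything here is a hypothetical toy (the `simulate` objective and the parameter encoding are invented for illustration, not taken from Facebook's system): the agent perturbs its gait parameters at random and keeps only the changes that increase its forward-motion reward.

```python
import random

def simulate(gait_params):
    """Stand-in for a physics simulator: returns a reward that is higher
    the closer the gait parameters are to an unknown "good gait".
    (Hypothetical toy objective, not Facebook's actual setup.)"""
    target = [0.5, -0.2, 0.8]  # the gait the search must discover
    return -sum((p - t) ** 2 for p, t in zip(gait_params, target))

def learn_to_walk(steps=2000, noise=0.1, seed=0):
    """Random hill climbing: try a perturbed gait, keep it only if the
    forward-motion reward improved."""
    rng = random.Random(seed)
    params = [0.0, 0.0, 0.0]          # start with no idea how to move
    best_reward = simulate(params)
    for _ in range(steps):
        candidate = [p + rng.gauss(0, noise) for p in params]
        reward = simulate(candidate)
        if reward > best_reward:      # keep only changes that help
            params, best_reward = candidate, reward
    return params, best_reward
```

In a real system the objective would come from a physics simulator or the robot's own sensors, and the random search would be replaced by a proper reinforcement-learning or evolution-strategies update; the shape of the loop is the same.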

What could this be used for? Facebook is a vast wilderness of data, complex and dubiously structured. Learning to navigate a network of data is of course very different from learning to navigate an office — but the idea of a system teaching itself the basics on a short timescale given some simple rules and goals is shared.

Learning how AI systems teach themselves, and how to remove roadblocks like mistaken priorities, cheating the rules, weird data-hoarding habits and other stuff is important for agents meant to be set loose in both real and virtual worlds. Perhaps the next time there is a humanitarian crisis that Facebook needs to monitor on its platform, the AI model that helps do so will be informed by the autodidactic efficiencies that turn up here.

Leveraging “curiosity”

Researcher Akshara Rai adjusts a robot arm in the robotics AI lab in Menlo Park. (Facebook)

This work is a little less visual, but more relatable. After all, everyone feels curiosity to a certain degree, and while we understand that sometimes it kills the cat, most times it’s a drive that leads us to learn more effectively. Facebook applied the concept of curiosity to a robot arm being asked to perform various ordinary tasks.

Now, it may seem odd that they could imbue a robot arm with “curiosity,” but what’s meant by that term in this context is simply that the AI in charge of the arm — whether it’s seeing or deciding how to grip, or how fast to move — is given motivation to reduce uncertainty about that action.

That could mean lots of things — perhaps twisting the camera a little while identifying an object gives it a little bit of a better view, improving its confidence in identifying it. Maybe it looks at the target area first to double check the distance and make sure there’s no obstacle. Whatever the case, giving the AI latitude to find actions that increase confidence could eventually let it complete tasks faster, even though at the beginning it may be slowed by the “curious” acts.
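One common way to formalize that kind of curiosity is to add an uncertainty bonus to each action's estimated value, so rarely tried actions look more attractive. Below is a minimal sketch in the style of a UCB bandit, an illustrative assumption rather than Facebook's actual method:

```python
import math

def curious_choice(values, counts, curiosity=1.0):
    """Pick the action whose estimated value plus an uncertainty bonus
    is highest. Actions tried fewer times carry more uncertainty, so a
    "curious" agent is drawn toward them."""
    total = sum(counts) + 1
    def score(i):
        if counts[i] == 0:
            return float("inf")       # never tried: maximally uncertain
        bonus = math.sqrt(math.log(total) / counts[i])
        return values[i] + curiosity * bonus
    return max(range(len(values)), key=score)
```

An untried action is picked first regardless of its estimated value; once every action has been sampled a few times, the bonus shrinks and the estimated value dominates the choice.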

What could this be used for? Facebook is big on computer vision, as we’ve seen both in its camera and image work and in devices like Portal, which (some would say creepily) follows you around the room with its “face.” Learning about the environment is critical for both these applications and for any others that require context about what they’re seeing or sensing in order to function.

Any camera operating in an app or device like those from Facebook is constantly analyzing the images it sees for usable information. When a face enters the frame, that’s the cue for a dozen new algorithms to spin up and start working. If someone holds up an object, does it have text? Does it need to be translated? Is there a QR code? What about the background, how far away is it? If the user is applying AR effects or filters, where does the face or hair stop and the trees behind begin?

If the camera, or gadget, or robot, left these tasks to be accomplished “just in time,” they would produce CPU usage spikes, visible latency in the image, and all kinds of stuff the user or system engineer doesn’t want. But if it’s doing it all the time, that’s just as bad. If instead the AI agent is exerting curiosity to check these things when it senses too much uncertainty about the scene, that’s a happy medium. This is just one way it could be used, but given Facebook’s priorities it seems like an important one.

Seeing by touching

Although vision is important, it’s not the only way that we, or robots, perceive the world. Many robots are equipped with sensors for motion, sound, and other modalities, but actual touch is relatively rare. Chalk it up to a lack of good tactile interfaces (though we’re getting there). Nevertheless, Facebook’s researchers wanted to look into the possibility of using tactile data as a surrogate for visual data.

If you think about it, that’s perfectly normal — people with visual impairments use touch to navigate their surroundings or acquire fine details about objects. It’s not exactly that they’re “seeing” via touch, but there’s a meaningful overlap between the concepts. So Facebook’s researchers deployed an AI model that decides what actions to take based on video, but instead of actual video data, fed it high-resolution touch data.

Turns out the algorithm doesn’t really care whether it’s looking at an image of the world as we’d see it or not — as long as the data is presented visually, for instance as a map of pressure on a tactile sensor, it can be analyzed for patterns just like a photographic image.
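The modality-agnostic point can be made with a toy feature extractor: a function written for pixel grids works unchanged on a pressure map, because both are just 2D arrays of numbers. The function and the sample grids below are illustrative assumptions, not the researchers' model:

```python
def horizontal_edges(grid):
    """Crude "vision" feature: absolute differences between neighbouring
    cells in each row. It behaves identically whether `grid` holds pixel
    intensities or tactile pressure readings, which is the point: once
    touch data is laid out as a 2D map, image-style analysis applies."""
    return [[abs(row[i + 1] - row[i]) for i in range(len(row) - 1)]
            for row in grid]

# The same feature extractor accepts both modalities unchanged:
camera_frame = [[0, 0, 9, 9], [0, 0, 9, 9]]   # pixel intensities
pressure_map = [[1, 1, 1, 5], [1, 1, 1, 5]]   # tactile sensor readings
```

Both inputs yield an "edge" where the values jump, whether that jump is a visual boundary or the edge of an object pressing on the sensor.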

What could this be used for? It’s doubtful Facebook is super interested in reaching out and touching its users. But this isn’t just about touch — it’s about applying learning across modalities.

Think about how, if you were presented with two distinct objects for the first time, it would be trivial to tell them apart with your eyes closed, by touch alone. Why can you do that? Because when you see something, you don’t just understand what it looks like, you develop an internal model representing it that encompasses multiple senses and perspectives.

Similarly, an AI agent may need to transfer its learning from one domain to another — auditory data telling a grip sensor how hard to hold an object, or visual data telling the microphone how to separate voices. The real world is a complicated place and data is noisier here — but voluminous. Being able to leverage that data regardless of its type is important to reliably being able to understand and interact with reality.

So you see that while this research is interesting in its own right, and can in fact be explained on that simpler premise, it is also important to recognize the context in which it is being conducted. As the blog post describing the research concludes:

We are focused on using robotics work that will not only lead to more capable robots but will also push the limits of AI over the years and decades to come. If we want to move closer to machines that can think, plan, and reason the way people do, then we need to build AI systems that can learn for themselves in a multitude of scenarios — beyond the digital world.

As Facebook continually works on expanding its influence from its walled garden of apps and services into the rich but unstructured world of your living room, kitchen, and office, its AI agents require more and more sophistication. Sure, you won’t see a “Facebook robot” any time soon… unless you count the one they already sell, or the one in your pocket right now.

On the Internet of Women with Moira Weigel


“Feminism,” the writer and editor Marie Shear famously said in an often-misattributed quote, “is the radical notion that women are people.” The genius of this line, of course, is that it appears to be entirely non-controversial, which reminds us all the more effectively of the past century of fierce debates surrounding women’s equality.

And what about in tech ethics? It would seem equally non-controversial that ethical tech is supposed to be good for “people,” but is the broader tech world and its culture good for the majority of humans who happen to be women? And to the extent it isn’t, what does that say about any of us, and about all of our technology?

I’ve known, since I began planning this TechCrunch series exploring the ethics of tech, that it would need to thoroughly cover issues of gender. Because as we enter an age of AI, with machines learning to be ever more like us, what could be more critical than addressing the issues of sex and sexism often at the heart of the hardest conflicts in human history thus far?

Meanwhile, several months before I began envisioning this series I stumbled across the fourth issue of a new magazine called Logic, a journal on technology, ethics, and culture. Logic publishes primarily on paper — yes, the actual, physical stuff, and a satisfyingly meaty stock of it, at that.

In it, I found a brief essay, “The Internet of Women,” that is a must-read, an instant classic in tech ethics. The piece is by Moira Weigel, one of Logic’s founders and currently a member of Harvard University’s “Society of Fellows” — one of the world’s most elite societies of young academics.

A fast-talking 30-something Brooklynite with a Ph.D. from Yale, Weigel’s work combines her interest in sex, gender, and feminism, with a critical and witty analysis of our technology culture.

In this first of a two-part interview, I speak with Moira in depth about some of the issues she covers in her essay and beyond: #MeToo; the internet as a “feminizing” influence on culture; digital media ethics around sexism; and women in political and tech leadership.

Greg E.: How would you summarize the piece in a sentence or so?

Moira W.: It’s an idiosyncratic piece with a couple of different layers. But if I had to summarize it in just a sentence or two I’d say that it’s taking a closer look at the role that platforms like Facebook and Twitter have played in the so-called “#MeToo moment.”

In late 2017 and early 2018, I became interested in the tensions that the moment was exposing between digital media and so-called “legacy media” — print newspapers and magazines like The New York Times and Harper’s and The Atlantic. Digital media were making it possible to see structural sexism in new ways, and for voices and stories to be heard that would have gotten buried, previously.

A lot of the conversation unfolding in legacy media seemed to concern who was allowed to say what where. For me, this subtext was important: The #MeToo moment was not just about the sexualized abuse of power but also about who had authority to talk about what in public — or the semi-public spaces of the Internet.

At the same time, it seemed to me that the ongoing collapse of print media as an industry, and really what people sometimes call the “feminization” of work in general, was an important part of the context.

When people talk about jobs getting “feminized” they can mean many things — jobs becoming lower paid, lower status, flexible or precarious, demanding more emotional management and the cultivation of an “image,” blurring the boundary between “work” and “life.”

The increasing instability or insecurity of media workplaces only makes women more vulnerable to the kinds of sexualized abuses of power the #MeToo hashtag was being used to talk about.
