
Timesdelhi.com

June 25, 2019
Category archive: presidential election

Indonesia restricts WhatsApp, Facebook and Instagram usage following deadly riots


Indonesia is the latest nation to bring the hammer down on social media, after the government restricted the use of WhatsApp and Instagram following deadly riots yesterday.

Numerous Indonesia-based users are today reporting difficulties sending multimedia messages via WhatsApp, which is one of the country’s most popular chat apps, and posting content to Facebook, while the hashtag #instagramdown is trending among the country’s Twitter users due to problems accessing the Facebook-owned photo app.

Wiranto, a coordinating minister for political, legal and security affairs, confirmed in a press conference that the government is limiting access to social media and “deactivating certain features” to maintain calm, according to a report from Coconuts.

Rudiantara, the communications minister of Indonesia and a critic of Facebook, explained that users “will experience lag on Whatsapp if you upload videos and photos.”

Facebook — which operates both WhatsApp and Instagram — didn’t explicitly confirm the blockages, but it did say it has been in communication with the Indonesian government.

“We are aware of the ongoing security situation in Jakarta and have been responsive to the Government of Indonesia. We are committed to maintaining all of our services for people who rely on them to communicate with their loved ones and access vital information,” a spokesperson told TechCrunch.

A number of Indonesia-based WhatsApp users confirmed to TechCrunch that they are unable to send photos, videos and voice messages through the service. Those restrictions are lifted when using Wi-Fi or mobile data services through a VPN, the people confirmed.

The restrictions come as Indonesia grapples with political tension following the release of the results of its presidential election on Tuesday. Defeated candidate Prabowo Subianto said he will challenge the result in the constitutional court.

Riots broke out in the capital, Jakarta, last night, killing at least six people and leaving more than 200 injured. Following this, it is alleged that misleading information and hoaxes about the nature of the riots and the people who participated in them began to spread on social media services, according to local media reports.

[Image: Protesters hurl rocks during a clash with police in Jakarta on May 22, 2019. Indonesian police said they were probing reports that at least one demonstrator was killed in clashes that broke out in the capital overnight, after a rally opposed to President Joko Widodo’s re-election. Photo by ADEK BERRY / AFP]

For Facebook, seeing its services forcefully cut off in a region is no longer a rare occurrence. The company, which is grappling with the spread of false information in many markets, faced a similar restriction in Sri Lanka in April, when the service was completely banned for days amid terrorist strikes in the nation. India, which just this week concluded its general election, has expressed concerns over Facebook’s inability to contain the spread of false information on WhatsApp, the country’s most popular chat app with more than 200 million monthly users there.

Indonesia’s Rudiantara expressed a similar concern earlier this month.

“Facebook can tell you, ‘We are in compliance with the government’. I can tell you how much content we requested to be taken down and how much of it they took down. Facebook is the worst,” he told a House of Representatives Commission last week, according to the Jakarta Post.

Update 05/22 02:30 PDT: The original version of this post has been updated to reflect that usage of Facebook in Indonesia has also been impacted.

Facebook still a great place to amplify pre-election junk news, EU study finds


A study carried out by academics at Oxford University, investigating how junk news is being shared on social media in Europe ahead of regional elections this month, has found that individual stories shared on Facebook’s platform can still hugely outperform the most important and professionally produced news stories, drawing as much as 4x the volume of Facebook shares, likes, and comments.

The study, conducted for the Oxford Internet Institute’s (OII) Computational Propaganda Project, is intended to respond to widespread concern about the spread of online political disinformation ahead of the EU elections, which take place later this month, by examining pre-election chatter on Facebook and Twitter in English, French, German, Italian, Polish, Spanish, and Swedish.

Junk news in this context refers to content produced by known sources of political misinformation — aka outlets that are systematically producing and spreading “ideologically extreme, misleading, and factually incorrect information” — with the researchers comparing interactions with junk stories from such outlets to news stories produced by the most popular professional news sources to get a snapshot of public engagement with sources of misinformation ahead of the EU vote.

As we reported last year, the Institute also launched a junk news aggregator ahead of the US midterms to help Internet users get a handle on manipulative politically-charged content that might be hitting their feeds.

In the EU the European Commission has responded to rising concern about the impact of online disinformation on democratic processes by stepping up pressure on platforms and the adtech industry — issuing monthly progress reports since January after the introduction of a voluntary code of practice last year intended to encourage action to squeeze the spread of manipulative fakes. Albeit, so far these ‘progress’ reports have mostly boiled down to calls for less foot-dragging and more action.

One tangible result last month was Twitter introducing a report option for misleading tweets related to voting ahead of the EU vote, though again you have to wonder what took it so long, given that online election interference is hardly a new revelation. (The OII study is also just the latest piece of research to bolster the age-old maxim that falsehoods fly and the truth comes limping after.)

The study also examined how junk news spread on Twitter during the pre-EU election period, with the researchers finding that less than 4% of sources circulating on Twitter’s platform were junk news (or “known Russian sources”) — with Twitter users sharing far more links to mainstream news outlets overall (34%) over the study period.

Although the Polish language sphere was an exception — with junk news making up a fifth (21%) of EU election-related Twitter traffic in that outlying case.

Returning to Facebook: the researchers do note that many more users interact with mainstream content overall via its platform, since mainstream publishers have a higher following and so “wider access to drive activity around their content”, meaning their stories “tend to be seen, liked, and shared by far more users overall”. But they also point out that junk news still packs a greater per-story punch — likely owing to tactics such as clickbait, emotive language, and outrage-mongering in headlines, which continue to be shown to generate more clicks and engagement on social media.

It’s also of course much quicker and easier to make some shit up than to do the slower work of rigorous professional journalism — so junk news purveyors can also get out ahead of news events as an eyeball-grabbing strategy to further the spread of their cynical BS. (And indeed the researchers go on to say that most of the junk news sources being shared during the pre-election period “either sensationalized or spun political and social events covered by mainstream media sources to serve a political and ideological agenda”.)

“While junk news sites were less prolific publishers than professional news producers, their stories tend to be much more engaging,” they write in a data memo covering the study. “Indeed, in five out of the seven languages (English, French, German, Spanish, and Swedish), individual stories from popular junk news outlets received on average between 1.2 to 4 times as many likes, comments, and shares than stories from professional media sources.

“In the German sphere, for instance, interactions with mainstream stories averaged only 315 (the lowest across this sub-sample) while nearing 1,973 for equivalent junk news stories.”

To conduct the research the academics gathered more than 584,000 tweets related to the European parliamentary elections from more than 187,000 unique users between April 5 and April 20 using election-related hashtags — from which they extracted more than 137,000 tweets containing a URL link, which pointed to a total of 5,774 unique media sources.

Sources that were shared 5x or more across the collection period were manually classified by a team of nine multi-lingual coders based on what they describe as “a rigorous grounded typology developed and refined through the project’s previous studies of eight elections in several countries around the world”.

Each media source was coded individually by two separate coders — a technique they say successfully labeled nearly 91% of all links shared during the study period.

The five most popular junk news sources were extracted from each language sphere looked at — with the researchers then measuring the volume of Facebook interactions with these outlets between April 5 and May 5, using the NewsWhip Analytics dashboard.
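To make the engagement comparison concrete, here is a minimal sketch of the per-story arithmetic the memo describes. The data shape and the toy numbers are illustrative assumptions (the figures below simply mirror the German-sphere averages quoted above), not the OII’s actual pipeline or dataset:

```python
# Minimal sketch of the per-story engagement comparison: average
# likes + comments + shares per story, junk vs. professional outlets.
# Data shape and numbers are illustrative assumptions.

def avg_engagement(stories):
    """stories: list of dicts with 'likes', 'comments' and 'shares' counts."""
    totals = [s["likes"] + s["comments"] + s["shares"] for s in stories]
    return sum(totals) / len(totals) if totals else 0.0

# Toy figures chosen to mirror the German-sphere averages quoted above.
junk_stories = [{"likes": 1500, "comments": 300, "shares": 173}]     # avg 1,973
mainstream_stories = [{"likes": 200, "comments": 50, "shares": 65}]  # avg 315

ratio = avg_engagement(junk_stories) / avg_engagement(mainstream_stories)
print(f"junk stories averaged {ratio:.1f}x the interactions per story")  # ~6.3x
```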

They also conducted a thematic analysis of the 20 most engaging junk news stories on Facebook during the data collection period to gain a better understanding of the different political narratives favoured by junk news outlets ahead of an election.

On the latter front they say the most engaging junk narratives over the study period “tend to revolve around populist themes such as anti-immigration and Islamophobic sentiment, with few expressing Euroscepticism or directly mentioning European leaders or parties”.

Which suggests that EU-level political disinformation is a more issue-focused animal (and/or less developed) — vs the kind of personal attacks that have been normalized in US politics (and were richly and infamously exploited by Kremlin-backed anti-Clinton political disinformation during the 2016 US presidential election, for example).

This is likely also down to the lower level of public awareness attached to the individuals involved in EU institutions and politics, and to the multi-national nature of the pan-EU project — which inevitably bakes in far greater diversity. (We can posit that, just as it aids robustness in biological life, diversity appears to bolster democratic resilience against political nonsense.)

The researchers also say they identified two noticeable patterns in the thematic content of junk stories that sought to cynically spin political or social news events for political gain over the pre-election study period.

“Out of the twenty stories we analysed, 9 featured explicit mentions of ‘Muslims’ and the Islamic faith in general, while seven mentioned ‘migrants’, ‘immigration’, or ‘refugees’… In seven instances, mentions of Muslims and immigrants were coupled with reporting on terrorism or violent crime, including sexual assault and honour killings,” they write.

“Several stories also mentioned the Notre Dame fire, some propagating the idea that the arson had been deliberately plotted by Islamist terrorists, for example, or suggesting that the French government’s reconstruction plans for the cathedral would include a minaret. In contrast, only 4 stories featured Euroscepticism or direct mention of European Union leaders and parties.

“The ones that did either turned a specific political figure into one of derision – such as Arnoud van Doorn, former member of PVV, the Dutch nationalist and far-right party of Geert Wilders, who converted to Islam in 2012 – or revolved around domestic politics. One such story relayed allegations that Emmanuel Macron had been using public taxes to finance ISIS jihadists in Syrian camps, while another highlighted an offer by Vladimir Putin to provide financial assistance to rebuild Notre Dame.”

Taken together, the researchers conclude that “individuals discussing politics on social media ahead of the European parliamentary elections shared links to high-quality news content, including high volumes of content produced by independent citizen, civic groups and civil society organizations, compared to other elections we monitored in France, Sweden, and Germany”.

Which suggests that attempts to manipulate the pan-EU election are either less prolific or, well, less successful than those which have targeted some recent national elections in EU Member States. And logic would suggest that co-ordinating election interference across a 28-Member State bloc requires greater co-ordination and resources than trying to meddle in a single national election — on account of the multiple countries, cultures, languages and issues involved.

We’ve reached out to Facebook for comment on the study’s findings.

The company has put a heavy focus on publicizing its self-styled ‘election security’ efforts ahead of the EU election. Though it has mostly focused on setting up systems to control political ads — whereas junk news purveyors are simply uploading regular Facebook ‘content’ while wrapping it in bogus claims of ‘journalism’ — none of which Facebook objects to. All of which allows would-be election manipulators to pass off junk views as online news, leveraging the reach of Facebook’s platform and its attention-hogging algorithms to amplify hateful nonsense. And any increase in engagement is a win for Facebook’s ad business, so, er…

With cybersecurity threats looming, the government shutdown is putting America at risk


Putting political divisions and affiliations aside, the government partially shutting down for the third time over the last year is extremely worrisome, particularly when considering its impact on the nation’s cybersecurity priorities. Unlike the government, our nation’s enemies don’t ‘shut down.’ When our nation’s cyber centers are not actively monitoring and protecting our most valuable assets and critical infrastructure, threats magnify and vulnerabilities become further exposed.

While Republicans and Democrats continue to butt heads over border security, the vital agencies tasked with safeguarding our nation from its adversaries are stuck in operational limbo. Without this protection in full force around the clock, serious external threats to government agencies and private businesses can thrive. This shutdown, now into its fourth week, has crippled key U.S. agencies, most notably the Department of Homeland Security, imperiling our nation’s cybersecurity defenses.

Consider the Cybersecurity and Infrastructure Security Agency, which has seen nearly 37 percent of its staff furloughed. This agency leads efforts to protect and defend critical infrastructure across industries as varied as energy, finance, food and agriculture, transportation, and defense.

As defined in the 2001 Patriot Act, critical infrastructure comprises systems and assets whose loss would be crippling: “the incapacity or destruction of such systems and assets would have a debilitating impact on security, national economic security, national public health or safety, or any combination of those matters.” In the interest of national security, we simply cannot tolerate prolonged vulnerability in these areas.

Employees who are considered “essential” are still on the job, but the loss of supporting staff could prove costly, in both the short and long term. More immediately, the shutdown places a greater burden on the employees deemed essential enough to stick around. These employees are tasked with both longer hours and expanded responsibilities, leading to a higher risk of critical oversights and mission failure as weary agents find themselves stretched increasingly beyond their capacities.

The long-term effects, however, are frankly far more alarming. There’s a serious possibility our brightest minds in cybersecurity will consider moving to the private sector following a shutdown of this magnitude. Even ignoring that the private sector pays better, furloughed staff are likely to reconsider just how valued they are in their current roles. After the 2013 shutdown, a significant segment of the intelligence community left their posts for the relative stability of corporate America. The current shutdown bears those risks as well. A loss of critical personnel could result in institutional failure far beyond the present shutdown, leading to cascading security deterioration.

This shutdown also has farther-reaching effects on the federal government’s ability to attract talent, whether recent college grads or those interested in transitioning from the private sector. The stability of government work was once viewed as a guarantee compared to the private sector, but repeated shutdowns could incentivize workers to take their talents elsewhere.

The IRS in particular is extremely vulnerable, putting America’s private sector and your average taxpayer directly in the crosshairs. The shutdown has come at the worst time of the year, as the holidays and the post-holiday season tend to see the highest rates of cybercrime. In 2018, the IRS reported a 60 percent increase in email scams. With much of the IRS’s staff furloughed as well, cybercriminals are likely to ramp up their activity even more.

Though the agency has stated it will recall a “significant portion” of its personnel to work without pay, it has also indicated there will be a lack of support for much beyond essential service. There’s no doubt cybercriminals will see this as a lucrative opportunity. With tax season on the horizon, the gap in oversight will feed directly into cyber criminals’ playing field, undoubtedly resulting in escalating financial losses due to tax identity theft and refund fraud.

Cyberwarfare is no longer some distant afterthought, practiced and discussed by a niche group of experts in a backroom. Cyberwarfare has taken center stage on the virtual battlefield. Geopolitical adversaries such as North Korea, Russia, Iran, and China rely on cyber as their most agile and dangerous weapon against the United States. These hostile nation-states salivate at the idea of a prolonged government shutdown.

From Russian interference in the 2016 presidential election to Chinese state cybercriminals breaching Marriott Hotels, the necessity to protect our national cybersecurity has never been more explicit.

If our government doesn’t resolve this dilemma quickly, America’s cybersecurity will undoubtedly suffer serious deterioration, inevitably endangering the lives and safety of citizens across the nation. This issue goes far beyond partisan politics, and it needs both parties to come to a consensus immediately. Time is not on our side.

Facebook finds and kills another 512 Kremlin-linked fake accounts


Two years on from the U.S. presidential election, Facebook continues to have a major problem with Russian disinformation being megaphoned via its social tools.

In a blog post today the company reveals another tranche of Kremlin-linked fake activity — saying it’s removed a total of 471 Facebook pages and accounts, as well as 41 Instagram accounts, which were being used to spread propaganda in regions where Putin’s regime has sharp geopolitical interests.

In its latest reveal of “coordinated inauthentic behavior” — aka the euphemism Facebook uses for disinformation campaigns that rely on its tools to generate a veneer of authenticity and plausibility in order to pump out masses of sharable political propaganda — the company says it identified two operations, both originating in Russia, and both using similar tactics without any apparent direct links between the two networks.

One operation was targeting Ukraine specifically, while the other was active in a number of countries in the Baltics, Central Asia, the Caucasus, and Central and Eastern Europe.

“We’re taking down these Pages and accounts based on their behavior, not the content they post,” writes Facebook’s Nathaniel Gleicher, head of cybersecurity policy. “In these cases, the people behind this activity coordinated with one another and used fake accounts to misrepresent themselves, and that was the basis for our action.”

Sputnik link

Discussing the Russian disinformation op targeting multiple countries, Gleicher says Facebook found what looked like innocuous or general interest pages to be linked to employees of Kremlin propaganda outlet Sputnik, with some of the pages encouraging protest movements and pushing other Putin lines.

“The Page administrators and account owners primarily represented themselves as independent news Pages or general interest Pages on topics like weather, travel, sports, economics, or politicians in Romania, Latvia, Estonia, Lithuania, Armenia, Azerbaijan, Georgia, Tajikistan, Uzbekistan, Kazakhstan, Moldova, Russia, and Kyrgyzstan,” he writes. “Despite their misrepresentations of their identities, we found that these Pages and accounts were linked to employees of Sputnik, a news agency based in Moscow, and that some of the Pages frequently posted about topics like anti-NATO sentiment, protest movements, and anti-corruption.”

Facebook has included some sample posts from the removed accounts in the blog post, which show a mixture of imagery being deployed — from a photo of a rock concert, to shots of historic buildings and a snowy scene, to obviously militaristic and political protest imagery.

In all Facebook says it removed 289 Pages and 75 Facebook accounts associated with this Russian disop; adding that around 790,000 accounts followed one or more of the removed Pages.

It also reveals that it received around $135,000 for ads run by the Russian operators (specifying this was paid for in euros, rubles, and U.S. dollars).

“The first ad ran in October 2013, and the most recent ad ran in January 2019,” it notes, adding: “We have not completed a review of the organic content coming from these accounts.”

These Kremlin-linked Pages also hosted around 190 events — with the first scheduled for August 2015, according to Facebook, and the most recent scheduled for January 2019. “Up to 1,200 people expressed interest in at least one of these events. We cannot confirm whether any of these events actually occurred,” it further notes.

Facebook adds that open source reporting and work by partners that investigate disinformation helped identify the network.

It also says it has shared information about the investigation with U.S. law enforcement, the U.S. Congress, other technology companies, and policymakers in impacted countries.

Ukraine tip-off

In the case of the Ukraine-targeted Russian disop, Facebook says it removed a total of 107 Facebook Pages, Groups, and accounts, and 41 Instagram accounts, specifying that it was acting on an initial tip-off from U.S. law enforcement.

In all it says around 180,000 Facebook accounts were following one or more of the removed pages, while the fake Instagram accounts were followed by more than 55,000 accounts.

Again Facebook received money from the disinformation purveyors, saying it took in around $25,000 in ad spending on Facebook and Instagram in this case — all paid for in rubles this time — with the first ad running in January 2018, and the most recent in December 2018. (Again it says it has not completed a review of content the accounts were generating.)

“The individuals behind these accounts primarily represented themselves as Ukrainian, and they operated a variety of fake accounts while sharing local Ukrainian news stories on a variety of topics, such as weather, protests, NATO, and health conditions at schools,” writes Gleicher. “We identified some technical overlap with Russia-based activity we saw prior to the US midterm elections, including behavior that shared characteristics with previous Internet Research Agency (IRA) activity.”

In the Ukraine case it says it found no Events being hosted by the pages.

“Our security efforts are ongoing to help us stay a step ahead and uncover this kind of abuse, particularly in light of important political moments and elections in Europe this year,” adds Gleicher. “We are committed to making improvements and building stronger partnerships around the world to more effectively detect and stop this activity.”

A month ago Facebook also revealed it had removed another batch of politically motivated fake accounts. In that case the network behind the pages had been working to spread misinformation in Bangladesh 10 days before the country’s general elections.

This week it also emerged the company is extending some of its nascent election security measures to more international markets ahead of major elections in the coming months — for example, requiring that political advertisers be located in the country where their ads run.

However, in other countries that also have big votes looming this year, Facebook has yet to announce any measures to combat politically charged fakes.

Google ‘incognito’ search results still vary from person to person, DDG study finds


A study of Google search results by anti-tracking rival DuckDuckGo has suggested that escaping the so-called ‘filter bubble’ of personalized online searches is a perniciously hard problem for the put-upon Internet consumer who just wants to carve out a little unbiased space online, free from the suggestive taint of algorithmic fingers.

DDG reckons it’s not possible even for logged out users of Google search, who are also browsing in Incognito mode, to prevent their online activity from being used by Google to program — and thus shape — the results they see.

DDG says it found significant variation in Google search results, with most of the participants in the study seeing results that were unique to them — and some seeing links others simply did not.

Results within news and video infoboxes also varied significantly, it found.

While, it says, there was very little difference between logged-out, incognito browsing and normal mode — going incognito did not reduce the variation.

“It’s simply not possible to use Google search and avoid its filter bubble,” it concludes.

Google has responded by counter-claiming that DuckDuckGo’s research is “flawed”.

Degrees of personalization

DuckDuckGo says it carried out the research to test recent claims by Google to have tweaked its algorithms to reduce personalization.

A CNBC report in September — drawing on access provided by Google, which let the reporter sit in on an internal meeting and speak to employees on its algorithm team — suggested that Mountain View is now using only very little personalization to generate search results.

“A query a user comes with usually has so much context that the opportunity for personalization is just very limited,” Google fellow Pandu Nayak, who leads the search ranking team, told CNBC this fall.

On the surface, that would represent a radical reprogramming of Google’s search modus operandi — given the company made “Personalized Search” the default for even logged out users all the way back in 2009.

Announcing the expansion of the feature at the time, Google explained it would ‘customize’ search results for these logged-out users via an ‘anonymous cookie’:

This addition enables us to customize search results for you based upon 180 days of search activity linked to an anonymous cookie in your browser. It’s completely separate from your Google Account and Web History (which are only available to signed-in users). You’ll know when we customize results because a “View customizations” link will appear on the top right of the search results page. Clicking the link will let you see how we’ve customized your results and also let you turn off this type of customization.

A couple of years after Google threw the Personalized Search switch, Eli Pariser published his now famous book describing the filter bubble problem. Since then online personalization’s bad press has only grown.

In recent years concern has especially spiked over the horizon-reducing impact of big tech’s subjective funnels on democratic processes, with algorithms carefully engineered to keep serving users more of the same stuff now being widely accused of entrenching partisan opinions, rather than helping broaden people’s horizons.

Especially so where political (and politically charged) topics are concerned. And, well, at the extreme end, algorithmic filter bubbles stand accused of breaking democracy itself — by creating highly effective distribution channels for individually targeted propaganda.

Although there have also been some counterclaims floating around academic circles in recent years that imply the echo chamber effect is itself overblown. (Albeit sometimes emanating from institutions that also take funding from tech giants like Google.)

As ever, where the operational opacity of commercial algorithms is concerned, the truth can be a very difficult animal to dig out.

Of course DDG has its own self-interested iron in the fire here — suggesting, as it is, that “Google is influencing what you click” — given it offers an anti-tracking alternative to the eponymous Google search.

But that does not merit an instant dismissal of a finding of major variation in even supposedly ‘incognito’ Google search results.

DDG has also made the data from the study downloadable — and the code it used to analyze the data open source — allowing others to look and draw their own conclusions.

It carried out a similar study in 2012, after the earlier US presidential election — and claimed then to have found that Google’s search had inserted tens of millions more links for Obama than for Romney in the run-up to the vote.

It says it wanted to revisit the state of Google search results now, in the wake of the 2016 presidential election that installed Trump in the White House — to see if it could find evidence to back up Google’s claims to have ‘de-personalized’ search.

For the latest study DDG asked 87 volunteers in the US to search for the politically charged topics of “gun control”, “immigration”, and “vaccinations” (in that order) at 9pm ET on Sunday, June 24, 2018 — initially searching in private browsing mode and logged out of Google, and then again without using Incognito mode.

You can read its full write-up of the study results here.

The results ended up being based on 76 users as those searching on mobile were excluded to control for significant variation in the number of displayed infoboxes.

Here’s the topline of what DDG found (a short sketch of how such counts can be computed follows the two lists):

Private browsing mode (and logged out):

  • “gun control”: 62 variations with 52/76 participants (68%) seeing unique results.
  • “immigration”: 57 variations with 43/76 participants (57%) seeing unique results.
  • “vaccinations”: 73 variations with 70/76 participants (92%) seeing unique results.

‘Normal’ mode:

  • “gun control”: 58 variations with 45/76 participants (59%) seeing unique results.
  • “immigration”: 59 variations with 48/76 participants (63%) seeing unique results.
  • “vaccinations”: 73 variations with 70/76 participants (92%) seeing unique results.
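To make those numbers concrete, here is a minimal sketch of how such ‘variation’ and ‘unique result’ counts could be computed, assuming each participant’s result page is serialized as an ordered tuple of result domains. It illustrates the arithmetic only; it is not DDG’s actual analysis code (which, as noted above, it has open-sourced):

```python
# Minimal sketch: count distinct result-page orderings ("variations") and how
# many participants saw a page nobody else saw ("unique results").
# Assumes each page is an ordered tuple of result domains.
from collections import Counter

def variation_stats(result_pages):
    """result_pages: one ordered tuple of domains per participant."""
    counts = Counter(result_pages)
    variations = len(counts)  # distinct orderings observed
    unique = sum(1 for page in result_pages if counts[page] == 1)
    return variations, unique

# Toy example for three participants:
pages = [
    ("cnn.com", "nytimes.com", "nra.org"),
    ("cnn.com", "nra.org", "nytimes.com"),  # same links, different order
    ("cnn.com", "nytimes.com", "nra.org"),
]
v, u = variation_stats(pages)
print(f"{v} variations, {u}/{len(pages)} participants saw unique results")
# -> 2 variations, 1/3 participants saw unique results
```

Note that, on this counting, a page differing only in the ordering of the same links still registers as a distinct variation — which matters given how top-heavy click distributions are (more on that below).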

DDG’s contention is that truly ‘unbiased’ search results should produce largely the same results.

Yet, by contrast, the search results its volunteers got served were — in the majority — unique. (Ranging from 57% at the low end to a full 92% at the upper end.)

“With no filter bubble, one would expect to see very little variation of search result pages — nearly everyone would see the same single set of results,” it writes. “Instead, most people saw results unique to them. We also found about the same variation in private browsing mode and logged out of Google vs. in normal mode.”

“We often hear of confusion that private browsing mode enables anonymity on the web, but this finding demonstrates that Google tailors search results regardless of browsing mode. People should not be lulled into a false sense of security that so-called ‘incognito’ mode makes them anonymous,” DDG adds.

Google initially declined to provide a statement responding to the study, telling us instead that several factors can contribute to variations in search results — flagging time and location differences among them.

It even suggested results could vary depending on the data center a user query was connected with — potentially introducing some crawler-based micro-lag.

Google also claimed it does not personalize the results of logged out users browsing in Incognito mode based on their signed-in search history.

However the company admitted it uses contextual signals to rank results even for logged-out users (as that 2009 blog post described) — such as when trying to clarify an ambiguous query.

In which case it said a recent search might be used for disambiguation purposes. (Although it also described this type of contextualization in search as extremely limited, saying it would not account for dramatically different results.)

But with so much variation evident in the DDG volunteer data, there seems little question that Google’s approach very often results in individualized — and sometimes highly individualized — search results.

Some Google users were even served results drawing on more or fewer unique domains than others.

Lots of questions naturally flow from this.

Such as: Does Google applying a little ‘ranking contextualization’ sound like an adequately ‘de-personalized’ approach — if the name of the game is popping the filter bubble?

Does it make the served results even marginally less clickable, biased and/or influential?

Or indeed any less ‘rank’ from a privacy perspective… ?

You tell me.

Even the same bunch of links served up in a slightly different configuration has the potential to be majorly significant since the top search link always gets a disproportionate chunk of clicks. (DDG says the no.1 link gets circa 40%.)

And if the topics being Google-searched are especially politically charged even small variations in search results could — at least in theory — contribute to some major democratic impacts.

There is much to chew on.

DDG says it controlled for time- and location-based variation in the served search results by having all participants in the study carry out the search from the US and do so at the very same time.

While it says it controlled for the inclusion of local links (i.e., to cancel out any localization-based variation) by bundling such results under a localdomain.com placeholder (and ‘Local Source’ for infoboxes).
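In code terms, that control can be as simple as substituting a shared placeholder before comparing pages. A hypothetical sketch (the local-domain set and helper are illustrative assumptions, not DDG’s code):

```python
# Hypothetical sketch of localization normalization, in the spirit of DDG's
# localdomain.com placeholder. LOCAL_DOMAINS is an assumed, illustrative set.
LOCAL_DOMAINS = {"patch.com", "sfgate.com"}

def normalize(page):
    """Replace local-outlet links with a shared placeholder so pages that
    differ only in localized results compare as equal."""
    return tuple("localdomain.com" if domain in LOCAL_DOMAINS else domain
                 for domain in page)

# normalize(("cnn.com", "sfgate.com")) == normalize(("cnn.com", "patch.com"))
```

Feeding normalized pages into a variation count like the variation_stats sketch above then cancels out localization-based differences, leaving personalization (and noise) as the remaining sources of variation.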

Yet even taking steps to control for space-time based variations it still found the majority of Google search results to be unique to the individual.

“These editorialized results are informed by the personal information Google has on you (like your search, browsing, and purchase history), and puts you in a bubble based on what Google’s algorithms think you’re most likely to click on,” it argues.

Google would counter argue that’s ‘contextualizing’, not editorializing.

And that any ‘slight variation’ in results is a natural property of the dynamic nature of its Internet-crawling search response business.

Albeit, as noted above, DDG found some volunteers did not get served certain links (when others did), which sounds rather more significant than ‘slight difference’.

In the statement Google later sent us it describes DDG’s attempts to control for time and location differences as ineffective — and the study as a whole as “flawed” — asserting:

This study’s methodology and conclusions are flawed since they are based on the assumption that any difference in search results are based on personalization. That is simply not true. In fact, there are a number of factors that can lead to slight differences, including time and location, which this study doesn’t appear to have controlled for effectively.

One thing is crystal clear: Google is — and always has been — making decisions that affect what people see.

This capacity is undoubtedly influential, given the majority marketshare captured by Google search. (And the major role Google still plays in shaping what Internet users are exposed to.)

That’s clear even without knowing every detail of how personalized and/or customized these individual Google search results were.

Google’s programming formula remains locked up in a proprietary algorithm box — so we can’t easily (and independently) unpick that.

And this unfortunate ‘techno-opacity’ habit offers convenient cover for all sorts of claim and counter-claim — which can’t really now be detached from the filter bubble problem.

Unless and until we can know exactly how the algorithms work to properly track and quantify impacts.

Also true: Algorithmic accountability is a topic of increasing public and political concern.

Lastly, ‘trust us’ isn’t the great brand mantra for Google it once was.

So the devil may yet get (manually) unchained from all these fuzzy details.
