Timesdelhi.com

November 19, 2018
Category archive

United Kingdom

Quantum computing, not AI, will define our future


The word “quantum” gained currency in the late 20th century as a descriptor signifying something so significant, it defied the use of common adjectives. For example, a “quantum leap” is a dramatic advancement (also an early-’90s television series starring Scott Bakula).

At best, that is an imprecise (though entertaining) definition. When “quantum” is applied to “computing,” however, we are indeed entering an era of dramatic advancement.

Quantum computing is technology based on the principles of quantum theory, which explains the nature of energy and matter on the atomic and subatomic level. It relies on the existence of mind-bending quantum-mechanical phenomena, such as superposition and entanglement.

Erwin Schrödinger’s famous 1930s thought experiment involving a cat that was both dead and alive at the same time was intended to highlight the apparent absurdity of superposition, the principle that quantum systems can exist in multiple states simultaneously until observed or measured. Today quantum computers contain dozens of qubits (quantum bits), which take advantage of that very principle. Each qubit exists in a superposition of zero and one (i.e., has non-zero probabilities to be a zero or a one) until measured. This capacity to work on massive amounts of data at previously unattainable levels of computing efficiency is the tantalizing potential of quantum computing.
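If you prefer code to thought experiments, superposition is easy to simulate classically at toy scale. Below is a minimal sketch in Python with NumPy (purely illustrative, and a classical simulation rather than how a real quantum device is programmed): it puts a single qubit into an equal superposition and samples measurements, each of which collapses the state to a definite zero or one.

```python
import numpy as np

# A qubit's state is a 2-component complex vector; |0> is [1, 0].
ket0 = np.array([1, 0], dtype=complex)

# The Hadamard gate rotates |0> into an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
state = H @ ket0  # amplitudes (1/sqrt(2), 1/sqrt(2))

# Born rule: measurement probabilities are squared amplitude magnitudes.
probs = np.abs(state) ** 2  # [0.5, 0.5]

# Each measurement collapses the superposition to a definite 0 or 1.
samples = np.random.default_rng().choice([0, 1], size=1000, p=probs)
print(f"P(0)={probs[0]:.2f}, P(1)={probs[1]:.2f}")
print(f"zeros: {(samples == 0).sum()}, ones: {(samples == 1).sum()}")
```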

While Schrödinger was thinking about zombie cats, Albert Einstein was observing what he described as “spooky action at a distance,” particles that seemed to be communicating faster than the speed of light. What he was seeing were entangled electrons in action. Entanglement refers to the observation that the state of particles from the same quantum system cannot be described independently of each other. Even when they are separated by great distances, they are still part of the same system. If you measure one particle, the rest seem to know instantly. The current record distance for measuring entangled particles is 1,200 kilometers or about 745.6 miles. Entanglement means that the whole quantum system is greater than the sum of its parts.
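Entanglement can be illustrated the same way. In this equally toy NumPy sketch (again an illustration, not production quantum software), repeatedly measuring a two-qubit Bell state always yields matching results for the two qubits, even though neither outcome is decided in advance:

```python
import numpy as np

# Two-qubit states have 4 amplitudes, ordered |00>, |01>, |10>, |11>.
# The Bell state (|00> + |11>)/sqrt(2) is maximally entangled.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
probs = np.abs(bell) ** 2  # [0.5, 0, 0, 0.5]

rng = np.random.default_rng()
for outcome in rng.choice(4, size=5, p=probs):
    a, b = divmod(outcome, 2)  # split the joint result into qubits A and B
    print(f"qubit A: {a}  qubit B: {b}")  # always equal: perfect correlation
```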

If these phenomena make you vaguely uncomfortable so far, perhaps I can assuage that feeling simply by quoting Schrödinger, who purportedly said after his development of quantum theory, “I don’t like it, and I’m sorry I ever had anything to do with it.”

Various parties are taking different approaches to quantum computing, so a single explanation of how it works would be subjective. But one principle may help readers get their arms around the difference between classical computing and quantum computing. Classical computers are binary: every bit can exist in only one of two states, either 0 or 1. Schrödinger’s cat illustrated that a quantum system, by contrast, can exhibit multiple states at the same time. If you envision a sphere, a binary state is as if only the “north pole” (say, 0) and the “south pole” (1) were available. A qubit’s state, however, can lie anywhere on the sphere, and relating those states between qubits enables correlations that make quantum computing well-suited to a variety of specific tasks that classical computing cannot accomplish. Creating qubits and maintaining their existence long enough to accomplish quantum computing tasks is an ongoing challenge.
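That sphere has a standard mathematical form, the Bloch sphere: any single-qubit state can be written as cos(θ/2)|0⟩ + e^(iφ) sin(θ/2)|1⟩, where θ and φ are latitude- and longitude-like angles. A short NumPy sketch (again illustrative only) maps points on the sphere to measurement probabilities, showing how the poles recover the classical 0 and 1 while the equator is an equal superposition:

```python
import numpy as np

def bloch_state(theta: float, phi: float) -> np.ndarray:
    """Qubit state at polar angle theta, azimuthal angle phi on the Bloch sphere."""
    return np.array([np.cos(theta / 2),
                     np.exp(1j * phi) * np.sin(theta / 2)])

# North pole (theta=0) is a pure |0>, the south pole (theta=pi) a pure |1>,
# and the equator an equal superposition of the two.
for theta in (0.0, np.pi / 2, np.pi):
    p0, p1 = np.abs(bloch_state(theta, 0.0)) ** 2
    print(f"theta={theta:.2f}: P(0)={p0:.2f}, P(1)={p1:.2f}")
```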

IBM researcher Jerry Chow in the quantum computing lab at IBM’s T.J. Watson Research Center.

Humanizing Quantum Computing

These are just the beginnings of the strange world of quantum mechanics. Personally, I’m enthralled by quantum computing. It fascinates me on many levels, from its technical arcana to its potential applications that could benefit humanity. But a qubit’s worth of witty obfuscation on how quantum computing works will have to suffice for now. Let’s move on to how it will help us create a better world.

Quantum computing’s purpose is to aid and extend the abilities of classical computing. Quantum computers will perform certain tasks much more efficiently than classical computers, providing us with a new tool for specific applications. Quantum computers will not replace their classical counterparts. In fact, quantum computers require classical computers to support their specialized abilities, such as systems optimization.

Quantum computers will be useful in advancing solutions to challenges in diverse fields such as energy, finance, healthcare and aerospace. Their capabilities will help us cure diseases, improve global financial markets, detangle traffic, combat climate change, and more. For instance, quantum computing has the potential to speed up pharmaceutical discovery and development, and to improve the accuracy of the atmospheric models used to track and explain climate change and its adverse effects.

I call this “humanizing” quantum computing, because such a powerful new technology should be used to benefit humanity, or we’re missing the boat.

Intel’s 17-qubit superconducting test chip for quantum computing has unique features for improved connectivity and better electrical and thermo-mechanical performance. (Credit: Intel Corporation)

An Uptick in Investments, Patents, Startups, and more

That’s my inner evangelist speaking. In factual terms, the latest verifiable, global figures for investment and patent applications reflect an uptick in both areas, a trend that’s likely to continue. Going into 2015, non-classified national investments in quantum computing reflected an aggregate global spend of about $1.75 billion USD, according to The Economist. The European Union led with $643 million. The U.S. was the top individual nation with $421 million invested, followed by China ($257 million), Germany ($140 million), Britain ($123 million) and Canada ($117 million). Twenty countries have invested at least $10 million in quantum computing research.

At the same time, according to a patent search enabled by Thomson Innovation, the U.S. led in quantum computing-related patent applications with 295, followed by Canada (79), Japan (78), Great Britain (36), and China (29). The number of patent families related to quantum computing was projected to increase 430 percent by the end of 2017.

The upshot is that nations, giant tech firms, universities, and start-ups are exploring quantum computing and its range of potential applications. Some parties (e.g., nation states) are pursuing quantum computing for security and competitive reasons. It’s been said that quantum computers will break current encryption schemes, kill blockchain, and serve other dark purposes.

I reject that proprietary, cutthroat approach. It’s clear to me that quantum computing can serve the greater good through an open-source, collaborative research and development approach that I believe will prevail once wider access to this technology is available. I’m confident crowd-sourcing quantum computing applications for the greater good will win.

If you want to get involved, check out the free tools that the household-name computing giants such as IBM and Google have made available, as well as the open-source offerings out there from giants and start-ups alike. Actual time on a quantum computer is available today, and access opportunities will only expand.

In keeping with my view that proprietary solutions will succumb to open-source, collaborative R&D and universal quantum computing value propositions, allow me to point out that several dozen start-ups in North America alone have jumped into the QC ecosystem along with governments and academia. Names such as Rigetti Computing, D-Wave Systems, 1Qbit Information Technologies, Inc., Quantum Circuits, Inc., QC Ware and Zapata Computing, Inc. may become well-known, or they may be subsumed by bigger players or succumb to their burn rates – anything is possible in this nascent field.

Developing Quantum Computing Standards

Another way to get involved is to join the effort to develop quantum computing-related standards. Technical standards ultimately speed the development of a technology, introduce economies of scale, and grow markets. Quantum computer hardware and software development will benefit from a common nomenclature, for instance, and agreed-upon metrics to measure results.

Currently, the IEEE Standards Association Quantum Computing Working Group is developing two standards. One is for quantum computing definitions and nomenclature so we can all speak the same language. The other addresses performance metrics and performance benchmarking to enable measurement of quantum computers’ performance against classical computers and, ultimately, each other.

The need for additional standards will become clear over time.

News Source = techcrunch.com

Children are being “datafied” before we’ve understood the risks, report warns


A report by England’s children’s commissioner has raised concerns about how kids’ data is being collected and shared across the board, in both the private and public sectors.

In the report, entitled Who knows what about me?, Anne Longfield urges society to “stop and think” about what big data means for children’s lives.

Big data practices could result in a data-disadvantaged generation whose life chances are shaped by their childhood data footprint, her report warns.

The long term impacts of profiling minors when these children become adults is simply not known, she writes.

“Children are being “datafied” – not just via social media, but in many aspects of their lives,” says Longfield.

“For children growing up today, and the generations that follow them, the impact of profiling will be even greater – simply because there is more data available about them.”

By the time a child is 13, their parents will have posted an average of 1,300 photos and videos of them on social media, according to the report. After that, the data mountain “explodes” as children themselves start engaging on the platforms — posting to social media 26 times per day, on average, and amassing a total of nearly 70,000 posts by age 18.

“We need to stop and think about what this means for children’s lives now and how it may impact on their future lives as adults,” warns Longfield. “We simply do not know what the consequences of all this information about our children will be. In the light of this uncertainty, should we be happy to continue forever collecting and sharing children’s data?

“Children and parents need to be much more aware of what they share and consider the consequences. Companies that make apps, toys and other products used by children need to stop filling them with trackers, and put their terms and conditions in language that children understand. And crucially, the Government needs to monitor the situation and refine data protection legislation if needed, so that children are genuinely protected – especially as technology develops,” she adds.

The report looks at what types of data are being collected on kids; where and by whom; and how that data might be used in the short and long term — both for the benefit of children and in terms of the potential risks.

On the benefits side, the report cites a variety of still fairly experimental ideas that might make positive use of children’s data — such as for targeted inspections of services for kids to focus on areas where data suggests there are problems; NLP technology to speed up analysis of large data-sets (such as the NSPCC’s national case review repository) to find common themes and understand “how to prevent harm and promote positive outcomes”; predictive analytics using data from children and adults to more cost-effectively flag “potential child safeguarding risks to social workers”; and digitizing children’s Personal Child Health Record to make the current paper-based record more widely accessible to professionals working with children.

But while Longfield describes the increasing availability of data as offering “enormous advantages”, she is also very clear on major risks unfolding — be it to safety and well-being; child development and social dynamics; identity theft and fraud; and the longer term impact on children’s opportunity and life chances.

“In effect [children] are the “canary in the coal mine” for wider society, encountering the risks before many adults become aware of them or are able to develop strategies to mitigate them,” she warns. “It is crucial that we are mindful of the risks and mitigate them.”

Transparency is lacking

One clear takeaway from the report is there is still a lack of transparency about how children’s data is being collected and processed — which in itself acts as a barrier to better understanding the risks.

“If we better understood what happens to children’s data after it is given – who collects it, who it is shared with and how it is aggregated – then we would have a better understanding of what the likely implications might be in the future, but this transparency is lacking,” Longfield writes — noting that this is true despite ‘transparency’ being the first key principle set out in the EU’s tough new privacy framework, GDPR.

The updated data protection framework did beef up protections for children’s personal data in Europe — introducing a new provision setting a 16-year-old age limit on kids’ ability to consent to their data being processed when it came into force on May 25, for example. (Although EU Member States can choose to write a lower age limit into their laws, they can set it no lower than 13.)

And mainstream social media apps, such as Facebook and Snapchat, responded by tweaking their T&Cs and/or products in the region. (Although some of the parental consent systems that were introduced to claim compliance with GDPR appear trivially easy for kids to bypass, as we’ve pointed out before.)

But, as Longfield points out, Article 5 of the GDPR states that data must be “processed lawfully, fairly and in a transparent manner in relation to individuals”.

Yet when it comes to children’s data the children’s commissioner says transparency is simply not there.

She also sees limitations with GDPR, from a children’s data protection perspective — pointing out that, for example, it does not prohibit the profiling of children entirely (stating only that it “should not be the norm”).

Another provision, Article 22 — which states that children have the right not to be subject to decisions based solely on automated processing (including profiling) if these have legal or similarly significant effects on them — also appears to be circumventable.

“They do not apply to decision-making where humans play some role, however minimal that role is,” she warns, which suggests another workaround for companies to exploit children’s data.

“Determining whether an automated decision-making process will have “similarly significant effects” is difficult to gauge given that we do not yet understand the full implications of these processes – and perhaps even more difficult to judge in the case of children,” Longfield also argues.

“There is still much uncertainty around how Article 22 will work in respect of children,” she adds. “The key area of concern will be in respect of any limitations in relation to advertising products and services and associated data protection practices.”

Recommendations

The report makes a series of recommendations for policymakers, with Longfield calling for schools to “teach children about how their data is collected and used, and what they can do to take control of their data footprints”.

She also presses the government to consider introducing an obligation on platforms that use “automated decision-making to be more transparent about the algorithms they use and the data fed into these algorithms” — where data collected from under 18s is used.

Which would essentially place additional requirements on all mainstream social media platforms to be far less opaque about the AI machinery they use to shape and distribute content on their platforms at vast scale. Given that few — if any — could credibly claim to have no under-18s using their platforms.

She also argues that companies targeting products at children have far more explaining to do, writing: 

Companies producing apps, toys and other products aimed at children should be more transparent about any trackers capturing information about children. In particular where a toy collects any video or audio generated by a child this should be made explicit in a prominent part of the packaging or its accompanying information. It should be clearly stated if any video or audio content is stored on the toy or elsewhere and whether or not it is transmitted over the internet. If it is transmitted, parents should also be told whether or not it will be encrypted during transmission or when stored, who might analyse or process it and for what purposes. Parents should ask if information is not given or unclear.

Another recommendation for companies is that terms and conditions should be written in a language children can understand.

(Albeit, as it stands, tech industry T&Cs can be hard enough for adults to scratch the surface of — let alone have enough hours in the day to actually read.)

Photo: SementsovaLesia/iStock

A recent U.S. study of kids apps, covered by BuzzFeed News, highlighted that mobile games aimed at kids can be highly manipulative, describing instances of apps making their cartoon characters cry if a child does not click on an in-app purchase, for example.

A key and contrasting problem with data processing is that it’s so murky: it is applied in the background, so any harms are far less immediately visible, because only the data processor truly knows what’s being done with people’s — and indeed children’s — information.

Yet concerns about exploitation of personal data are stepping up across the board. And essentially touch all sectors and segments of society now, even as risks where kids are concerned may look the most stark.

This summer the UK’s privacy watchdog called for an ethical pause on the use by political campaigns of online ad targeting tools, for example, citing a range of concerns that data practices have got ahead of what the public knows and would accept.

It also called for the government to come up with a Code of Practice for digital campaigning to ensure that long-standing democratic norms are not being undermined.

So the children’s commissioner’s appeal for a collective ‘stop and think’ where the use of data is concerned is just one of a growing number of raised voices policymakers are hearing.

One thing is clear: Calls to quantify what big data means for society — to ensure powerful data-mining technologies are being applied in ways that are ethical and fair for everyone — aren’t going anywhere.

News Source = techcrunch.com

Bots Distorted the 2016 Election. Will the Midterms Be a Sequel?


The fact that Russian-linked bots penetrated social media to influence the 2016 U.S. presidential election has been well documented and the details of the deception are still trickling out.

In fact, on Oct. 17 Twitter disclosed that foreign interference dating back to 2016 involved 4,611 accounts — most affiliated with the Internet Research Agency, a Russian troll farm. There were more than 10 million suspicious tweets and more than 2 million GIFs, videos and Periscope broadcasts.

In this season of another landmark election — a recent poll showed that about 62 percent of Americans believe the 2018 midterm elections are the most important midterms in their lifetime — it is natural to wonder whether the public and private sectors have learned any lessons from the 2016 fiasco, and what is being done to better protect against this malfeasance by nation-state actors.

There is good news and bad news here. Let’s start with the bad.

Two years after the 2016 election, social media still sometimes looks like a reality show called “Propagandists Gone Wild.” Hardly a major geopolitical event takes place in the world without automated bots generating or amplifying content that exaggerates the prevalence of a particular point of view.

In mid-October, Twitter suspended hundreds of accounts that simultaneously tweeted and retweeted pro-Saudi Arabia talking points about the disappearance of journalist Jamal Khashoggi.

On Oct. 22, the Wall Street Journal reported that Russian bots helped inflame the controversy over NFL players kneeling during the national anthem. Researchers from Clemson University told the newspaper that 491 accounts affiliated with the Internet Research Agency posted more than 12,000 tweets on the issue, with activity peaking soon after a Sept. 22, 2017 speech by President Trump in which he said team owners should fire players for taking a knee during the anthem.

The problem hasn’t persisted only in the United States. Two years after bots were blamed for helping sway the 2016 Brexit vote in Britain, Twitter bots supporting the anti-immigration Sweden Democrats increased significantly this spring and summer in the leadup to that country’s elections.

These and other examples of continuing misinformation-by-bot are troubling, but it’s not all doom and gloom.  I see positive developments too.

Photo courtesy of Shutterstock/Nemanja Cosovic

First, awareness must be the first step in solving any problem, and cognizance of bot meddling has soared in the last two years amid all the disturbing headlines.

About two-thirds of Americans have heard of social media bots, and the vast majority of those people are worried bots are being used maliciously, according to a Pew Research Center survey of 4,500 U.S. adults conducted this summer. (It’s concerning, however, that far fewer of the respondents said they’re confident they can actually recognize when accounts are fake.)

Second, lawmakers are starting to take action. On Sept. 28, California Gov. Jerry Brown signed legislation making it illegal, as of July 1, 2019, to use bots — whether to try to influence voter opinion or for any other purpose — without divulging the source’s artificial nature. It followed the anti-ticketing-bot laws passed nationally and in New York State, the first bot-fighting statutes in the United States.

While I support the increase in awareness and focused interest by legislators, I do feel the California law has some holes. The measure is difficult to enforce because it’s often very hard to identify who is behind a bot network, the law’s penalties aren’t clear, and an individual state is inherently limited in what it can do to attack a national and global issue. However, the law is a good start and shows that governments are starting to take the problem seriously.

Third, the social media platforms — which have faced congressional scrutiny over their failure to address bot activity in 2016 – have become more aggressive in pinpointing and eliminating bad bots.

It’s important to remember that while they have some responsibility, Twitter and Facebook are victims here too, taken for a ride by bad actors who have hijacked these commercial platforms for their own political and ideological agendas.

While it can be argued that Twitter and Facebook should have done more sooner to differentiate the human from the non-human fakes in their user rolls, it bears remembering that bots are a newly acknowledged cybersecurity challenge. The traditional paradigm of a security breach has been a hacker exploiting a software vulnerability. Bots don’t do that – they attack online business processes and thus are difficult to detect through customary vulnerability scanning methods.
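To make that distinction concrete, here is a deliberately naive sketch of the kind of behavioral signal bot-mitigation systems examine, as opposed to scanning code for vulnerabilities. It flags accounts whose posting cadence is implausibly fast and regular; the account names, data and thresholds are invented for illustration, and real systems weigh many more signals than this.

```python
from statistics import pstdev

# Seconds between consecutive posts, per account (hypothetical data).
accounts = {
    "human_1": [340, 45, 1200, 87, 660, 2400],   # bursty, irregular
    "suspect_1": [30, 31, 29, 30, 30, 31],       # fast and metronome-regular
}

def looks_automated(gaps, max_mean=60.0, max_jitter=5.0):
    """Flag accounts that post faster and more regularly than people do."""
    mean_gap = sum(gaps) / len(gaps)
    return mean_gap < max_mean and pstdev(gaps) < max_jitter

for name, gaps in accounts.items():
    print(name, "-> automated?", looks_automated(gaps))
```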

I thought there was admirable transparency in Twitter’s Oct. 17 blog accompanying its release of information about the extent of misinformation operations since 2016. “It is clear that information operations and coordinated inauthentic behavior will not cease,” the company said. “These types of tactics have been around for far longer than Twitter has existed — they will adapt and change as the geopolitical terrain evolves worldwide and as new technologies emerge.”

Which leads to the fourth reason I’m optimistic: technological advances.

In the earlier days of the internet, in the late ’90s and early ’00s, networks were extremely susceptible to worms, viruses and other attacks because protective technology was in its early stages of development. Intrusions still happen, obviously, but security technology has grown much more sophisticated and many attacks occur due to human error rather than failure of the defense systems themselves.

Bot detection and mitigation technology keeps improving, and I think we’ll get to a state where it becomes as automatic and effective as email spam filters are today. Security capabilities that too often are siloed within networks will integrate more and more into holistic platforms better able to detect and ward off bot threats.

So while we should still worry about bots in 2018, and the world continues to wrap its arms around the problem, we’re seeing significant action that should bode well for the future.

The health of democracy and companies’ ability to conduct business online may depend on it.

News Source = techcrunch.com

Campaign tool supplied to UK’s governing party by Trump-Pence app dev quietly taken out of service


An app that the UK’s governing party launched last year — for Conservative Party activists to gamify, ‘socialize’ and co-ordinate their campaigning activity — has been quietly pulled from app stores.

Its vanishing was flagged to us earlier today, by Twitter user Sarah Parks, who noticed that, when loaded, the Campaigner app now displays a message informing users the supplier is “no longer supporting clients based in Europe”.

“So we’re taking this opportunity to refresh our campaigning app,” it adds. “We will be back with a new and improved app early next year – well in time for the local elections.”

(Bad luck, then, should there end up being another very snap, Brexit-induced UK General Election in the meanwhile, as some have suggested may yet come to pass. But I digress… )

The supplier of the Conservative Campaigner app is — or was — a US-based app developer called uCampaign, which had also built branded apps for Trump-Pence 2016; the Republican National Committee; and the UK’s Vote Leave Brexit campaign, to name a few of the political campaigns it has counted as customers.

Here are a few more: the (pro-gun) National Rifle Association and the (anti-abortion) SBA List.

We know the name of the Conservative Campaigner app’s supplier because this summer we raised privacy concerns about the app — on account of its use of uCampaign’s boilerplate privacy policy, which is what you were shown if you clicked to read the app’s privacy policy earlier this year.

The wording of uCampaign’s privacy policy suggested the Conservative Campaigner app could be harvesting users’ mobile phone contacts — if they chose to sync their contacts book with it.

The privacy policy for the app was subsequently changed to point to the Conservative Party’s own privacy policy — with the change of privacy policy taking place just before a tough new EU-wide data protection framework, GDPR, came into force on May 25 this year.

Prior to May 23, the privacy policy of the Conservatives’ digital campaigning app suggests it was harvesting contacts data from users — and potentially sharing non-users’ personal information with entities of uCampaign’s choosing (given, for example, the company’s privacy policy gave itself the right to “share your Personal Information with other organizations, groups, causes, campaigns, political organizations, and our clients that we believe have similar viewpoints, principles or objectives as us”).

This sort of consentless scraping of large amounts of networked personal data — by sucking up information on users’ friend groups and other personal connections — has of course had a massive spotlight thrown on it this year, as a result of the Facebook Cambridge Analytica data misuse scandal in which the personal data of tens of millions of Facebook users was extracted from the social network via a quiz app that used a (now defunct) Facebook friends API to grab data on non-users who would not have even had the chance to agree to the app’s terms.

Safe to say, this modus operandi wasn’t cool then — and it’s certainly not cool now.

Politicians all over the globe have been shaken awake by the Cambridge Analytica scandal, and are now raising all sorts of concerns about how data and digital tools are being used (and or misused and abused).

The EU parliament recently called for an independent audit of Facebook, for example.

In the UK, a committee that’s been probing the impact of social media-accelerated disinformation on democratic processes published a report this summer calling for a levy on social media to defend democracy. Its lengthy preliminary report also suggested urgent amendments to domestic electoral law to reflect the use of digital technologies for political campaigning.

Though the UK’s Conservative minority government — and the party behind the now on-pause Conservative Campaigner app — apparently disagrees on the need for speed, declining in its response last week to accept most of the committee’s laundry list of recommended changes.

The DCMS committee’s inquiry into political campaigns’ use (and misuse) of personal data continues — now at a transnational level.

An ethical pause?

Shortly after we published our privacy concerns about the Conservative Campaigner app, the UK’s data protection watchdog issued its own lengthy report detailing extensive concerns about how UK political parties were misusing personal data — and calling for an ethical pause on the use of microtargeting for election campaigning purposes.

Which does rather beg the question whether the Conservative Campaigner app going AWOL now, until a reboot under a new supplier (presumably) next year, might not represent just such an ‘ethical pause’.

The app is, after all, only just over a year old.

We asked the Conservative Party a number of questions about the Campaigner app via email — after a press office spokeswoman declined to discuss the matter on the telephone.

Five hours later it emailed the following brief statement, attributed to a Conservative spokesperson:

We work with a number of different suppliers and all Conservative party campaigning is compliant with the relevant data protection legislation including GDPR.

The spokesperson did not engage with the substance of the vast majority of our concerns — such as those relating to the app’s handling of people’s data and the legal bases for any transfers of UK voter data to the US.

Instead the spokesperson reiterated the in-app notification which claims “the supplier” is no longer supporting clients based in Europe.

They also said the party is currently reviewing its campaigning tools, without providing any further detail.

We’ve included our full list of questions at the bottom of this post.

We’ve also reached out to the ICO to ask if it had any concerns related to how the Conservative Campaigner app was handling people’s data.

Similarly, the former deputy director & head of digital strategy for the Conservative party, Anthony Hind, declined to engage with the same data protection concerns when we raised them with him directly, back in July.

According to his LinkedIn profile he’s since moved on from the Conservatives to head up social media for the Confederation of British Industry.

For this report we also reached out to uCampaign’s founder and CEO, Thomas Peters, to ask for confirmation on the company’s situation vis-a-vis European clients.

At the time of writing Peters had not responded to our emails. We’ll update this story with any uCampaign response.

The company’s website still includes the UK Conservative Party listed as a client — though the language used on the webpage does not make it explicit whether or not the party is a current client…

Another graphic on the same page plots the UK flag on a world map depicting what uCampaign dubs its “global platform”, where it’s marked along with several other European flags — including Ireland, France, Germany and Malta, suggesting uCampaign has — or had — multiple European clients.

Here’s the full list of questions we put to the Conservatives about their campaigner app. To our eye it has answered just one of them:

Can you confirm — on the record — the reasons for the app being pulled?

Does the Conservative Party intend to continue working with uCampaign for the new campaign app that will relaunch next year? Or does the party have a new supplier?

If the latter, where is the new supplier based? In the UK or in the US?

Did the Conservative Party have any concerns at all related to using uCampaign as a supplier? (Given, for example, concerns flagged about its data privacy practices by one of the DCMS committee’s recent reports — following an inquiry investigating digital campaigning.)

If the Conservative Party was aware of data privacy concerns pertaining to uCampaign’s practices can you confirm when the party became aware of such concerns?

Was the party aware that the privacy policy it used for the app prior to May 23, 2018 was uCampaign’s own privacy policy?

This privacy policy stated that the app could harvest data from users’ mobile phone contacts and share that data with unknown third parties of the developer’s choosing — including other political campaigns. Is the Conservative Party comfortable with having its supporters’ data shared with other political campaigns?

What due diligence did the Conservative Party carry out before it selected uCampaign as its app supplier?

After signing up the supplier, did the Conservative Party carry out a privacy impact assessment related to how the app operates?

Please confirm all the data points that the app was collecting from users, and what each of those data points was being used for.

Where was app user data being processed? In the US, where uCampaign is based, or in the UK where potential voters live?

If the US, what was the legal basis for any transfer of data from UK users to the US?

Is the Conservative Party confident its use of the campaigner app did not breach UK data protection law?

Earlier this year the former Cabinet Minister Dominic Grieve suggested that the bosses of tech giants involved in the Cambridge Analytica data misuse scandal should be jailed for their part in abusing online data for political and financial gain. Does the Conservative Party support Grieve’s position on online data abuse?

Has anyone been sacked or sanctioned for their part in procuring uCampaign as the app supplier — and/or overseeing the operation of the Conservative Campaigner app itself?

Will the Conservative Party commit to notifying all individuals whose data was shared with uCampaign without their explicit consent?

Can the Conservative Party confirm how many individuals had their personal data shared with uCampaign?

Has the Information Commissioner’s Office raised any concerns with the Conservative Party about the Campaigner app?

Has the Conservative Party itself reported any concerns about the app/uCampaign to the ICO?

News Source = techcrunch.com

China’s Youon expands into Europe as other bike startups backpedal worldwide


A little-known Chinese bike company is riding into Europe as its peer Ofo has applied the brakes to its global expansion strategy in recent months.

Youon, which makes its living manufacturing public bikes for city governments across China, has formed a joint venture with UK-based bike-sharing startup Cycle.land, it says in a statement. The deal allows the Chinese firm to sit back at its headquarters in eastern China while its British partner deploys its bikes and takes care of on-the-ground operations.

Youon’s fleet of 1,000 public bikes will start appearing in London next March, making the UK the fourth country in its international expansion after Russia, India, and Malaysia.

Youon’s name may not ring a bell, but its subsidiary Hellobike is increasingly turning heads as its dockless bikes win over users in China’s smaller cities where its larger rivals Ofo and Mobike lack a presence. This is in part thanks to Hellobike’s partnership with its investor Ant Financial, Alibaba’s financial affiliate, which lets users skip Hellobike’s standalone app and access the service on Ant’s Alipay wallet, which has over 500 million MAUs.

While Hellobike’s mobile penetration recorded a 20 percent month-over-month increase (link in Chinese) in September, Mobike and Ofo barely saw any growth in the same period, according to data service provider Jiguang.

Away from home, Youon’s partnership approach is also noticeably different from that of Mobike and Ofo, which have chosen to run their own overseas operation. Teaming up with local players gives Youon insight into customers abroad, suggests market research firm Analysys.

“User behavior in Europe and North America is very different and it will be reckless for a [Chinese] firm to abruptly set up its own operations overseas,” Sun Naiyue, an analyst at Analysys, tells TechCrunch.

China’s Youon partnered with peer-to-peer bike-sharing startup Cycle.land to expand to the UK [Image via Youon]

Having a local ally also helps Youon avoid government protectionism and regulatory meddling in the foreign market, Sun adds. London has already greenlighted the company to place bikes in the city and the company will “follow local demand and rules to deploy bikes accordingly,” Cycle.land says of its partner.

Contrasting with the prospects of Youon’s latest push is the bleak outlook of its peer. The past few months have seen Ofo retreat from its overseas markets to prioritize profitability. To date, Ofo has shut down in Australia, Austria, the Czech Republic, Germany, India and Israel, and scaled back operations in a host of other countries.

News Source = techcrunch.com
