Uber back in court in UK to argue against workers’ rights for drivers

Uber is back in court in the UK today and tomorrow to try once again to overturn a two-year-old employment tribunal ruling that judged a group of Uber drivers to be workers — meaning they’re entitled to workers’ benefits such as holiday pay, paid rest breaks and the national minimum wage.

Uber lost its first appeal against the ruling last year but has said it will continue to appeal.

On Sunday the GMB Union calculated that Uber drivers in the UK are up to £18,000 out of pocket as a result of the company continuing to fight the rights judgement rather than paying the additional entitlements.

In a statement Sue Harris, GMB legal director, said: “These figures lay bare the human cost of Uber continuing to refuse to accept the ruling of the courts. While the company are wasting money losing appeal after appeal, their drivers are up to £18,000 out of pocket for the last two years alone.

“That’s thousands of drivers struggling to pay their rent, or feed their families. It’s time Uber admits defeat and pays up. The company needs to stop wasting money dragging its lost cause through the courts. Instead, Uber should do the decent thing and give drivers the rights to which those courts have already said they are legally entitled.”

Uber has previously suggested it would cost its UK business “tens of millions” of pounds if it reclassified the circa 50,000 ‘self-employed’ drivers operating on its platform as Limb (b) workers — an existing employment categorization that sits between ‘self-employed’ and ‘employee’.

The GMB Union notes that in Uber London’s latest accounts, released last week, it warns shareholders that it faces “numerous legal and regulatory risks”, both pertaining to existing regulations and the development of new regulations, and as a result of “claims and litigation” related to its classification of drivers as independent contractors.

This year the UK government has signalled a high-level intent to bolster rights for more types of workers.

In February it announced a package of labor market reforms intended to respond to changing working patterns — saying it would expand workers’ rights for millions of workers and touting tighter enforcement.

It continues to consult on the issue to shape the detail of its response, and the Uber litigation is likely to feed into government thinking given the timing of the case.

This month Uber drivers in the UK staged a one-day strike over pay and conditions, piling more pressure on the issue and calling for the company to immediately apply the tribunal judgement and implement employment conditions that respect worker rights for drivers.

Uber responded by pointing to changes it has made since the original tribunal ruling — including expanding a free insurance product it now offers to drivers and couriers across Europe.

It also claims to have changed how it takes feedback from drivers, and flagged a number of tweaks to its app which it claims help drivers access data insights to boost their earnings.

We’ve reached out to Uber for comment on the latest stage of its appeal. Update: A company spokesperson sent us the following statement:

Almost all taxi and private hire drivers have been self-employed for decades, long before our app existed. A recent Oxford University study found that drivers make more than the London Living Wage and want to keep the freedom to choose if, when and where they drive. If drivers were classed as workers they would inevitably lose some of the freedom and flexibility that comes with being their own boss.

We believe the Employment Appeal Tribunal last year fundamentally misunderstood how we operate. For example, they relied on the assertion that drivers are required to take 80% of trips sent to them when logged into the app, which has never been the case in the UK.

Over the last two years we’ve made many changes to give drivers even more control over how they use the app, alongside more security through sickness, maternity and paternity protections. We’ll keep listening to drivers and introduce further improvements.

The Independent Workers’ Union of Great Britain (IWGB), which is defending the tribunal judgement at this week’s hearings on behalf of former Uber drivers and co-claimants Yaseen Aslam and James Farrar, who brought the original case, has organized a demonstration to coincide with the hearing.

It says it expects hundreds of “precarious workers” — i.e. people who labor in the so-called ‘gig economy’ — to march through London in solidarity with the drivers and demand an end to all work that undermines workers’ rights.

The march is also being backed by the left-leaning UK political organization Momentum, the Communications Workers Union, War On Want, Bakers Food and Allied Workers Union and United Voices of the World, among others.

A parallel event is being held in Glasgow to coincide with the hearing.

Commenting in a statement, IWGB United Private Hire Drivers branch chair and Uber case co-claimant Farrar said: “It’s two years since we beat Uber at the Employment Tribunal, yet minicab drivers all over the UK are still waiting for justice, while Uber exhausts endless appeals. As the government ignores this mounting crisis, it’s been left to workers to fix this broken system and bring rogue bosses to account. If anything gives me hope, it is the rising tide of precarious workers that are organising and demanding a fair deal.”

IWGB general secretary Jason Moyer-Lee added: “Precarious workers are getting hammered in this country. The protest is the articulation of the legitimate grievance of those who are being denied the basic rights and dignities at work that we should all be able to take for granted.”

News Source = techcrunch.com

Drone development should focus on social good first, says UK report

A UK government-backed drone innovation project that’s exploring how unmanned aerial vehicles could benefit cities — including use-cases such as medical delivery, traffic incident response, fire response, and construction and regeneration — has reported early learnings from its first phase.

Five city regions are being used as drone test-beds as part of Nesta’s Flying High Challenge — namely London, the West Midlands, Southampton, Preston and Bradford.

Five socially beneficial use-cases for drone technology have been analyzed as part of the project so far, including their technical, social and economic implications.

The project has been ongoing since December.

Nesta, the innovation-focused charity behind the project and the report, wants the UK to become a global leader in shaping drone systems that place people’s needs first, and writes in the report that: “Cities must shape the future of drones: Drones must not shape the future of cities.”

In the report it outlines some of the challenges facing urban implementations of drone technology and also makes some policy recommendations.

It also says that socially beneficial use-cases have emerged as an early winner in cities’ engagement with the potential of the tech — over and above “commercial or speculative” applications such as drone delivery or carrying people in flying taxis.

The five use-cases explored thus far via the project are:

  • Medical delivery within London — a drone delivery network for carrying urgent medical products between NHS facilities, which would routinely carry products such as pathology samples, blood products and equipment over relatively short distances between hospitals in a network
  • Traffic incident response in the West Midlands — responding to traffic incidents in the West Midlands to support the emergency services prior to their arrival and while they are on-site, allowing them to allocate the right resources and respond more effectively
  • Fire response in Bradford — emergency response drones for West Yorkshire Fire and Rescue service. Drones would provide high-quality information to support emergency call handlers and fire ground commanders, arriving on the scene faster than is currently possible and helping staff plan an appropriate response for the seriousness of the incident
  • Construction and regeneration in Preston — drone services supporting construction work for urban projects. This would involve routine use of drones prior to and during construction, in order to survey sites and gather real-time information on the progress of works
  • Medical delivery across the Solent — linking Southampton across the Solent to the Isle of Wight using a delivery drone. Drones could carry light payloads of up to a few kilos over distances of around 20 miles, with medical deliveries of products being a key benefit

Flagging up technical and regulatory challenges to scaling the use of drones beyond a few interesting experiments, Nesta writes: “In complex environments, flight beyond the operator’s visual line of sight, autonomy and precision flight are key, as is the development of an unmanned traffic management (UTM) system to safely manage airspace. In isolation these are close to being solved — but making these work at large scale in a complex urban environment is not.”

“While there is demand for all of the use cases that were investigated, the economics of the different use cases vary: Some bring clear cost savings; others bring broader social benefits. Alongside technological development, regulation needs to evolve to allow these use cases to operate. And infrastructure like communications networks and UTM systems will need to be built,” it adds.

The report also emphasizes the importance of public confidence, writing that: “Cities are excited about the possibilities that drones can bring, particularly in terms of critical public services, but are also wary of tech-led buzz that can gloss over concerns of privacy, safety and nuisance. Cities want to seize the opportunity behind drones but do it in a way that responds to what their citizens demand.”

And the charity makes an urgent call for the public to be brought into discussions about the future of drones.

“So far the general public has played very little role,” it warns. “There is support for the use of drones for public benefit such as for the emergency services. In the first instance, the focus on drone development should be on publicly beneficial use cases.”

Given the combined (and intertwined) complexity of the regulatory, technical and infrastructure challenges standing in the way of developing viable drone service implementations, Nesta is also recommending the creation of testbeds in which drone services can be developed with the “facilities and regulatory approvals to support them”.

“Regulation will also need to change: Routine granting of permission must be possible, blanket prohibitions in some types of airspace must be relaxed, and an automated system of permissions — linked to an unmanned traffic management system — needs to be put in place for all but the most challenging uses. And we will need a learning system to share progress on regulation and governance of the technology, within the UK and beyond, for instance with Eurocontrol,” it adds.

“Finally, the UK will need to invest in infrastructure, whether this is done by the public or private sector, to develop the communications and UTM infrastructure required for widespread drone operation.”

In conclusion Nesta argues there is “clear evidence that drones are an opportunity for the UK” — pointing to the “hundreds” of companies already operating in the sector; and to UK universities with research strengths in the area; as well as suggesting public authorities could save money or provide “new and better services thanks to drones”.

At the same time it warns that UK policy responses to drones are lagging those of “leading countries” — suggesting the country could squander the chance to properly develop some early promise.

“The US, EU, China, Switzerland and Singapore in particular have taken bigger steps towards reforming regulations, creating testbeds and supporting businesses with innovative ideas. The prize, if we get this right, is that we shape this new technology for good — and that Britain gets its share of the economic spoils.”

You can read the full report here.

News Source = techcrunch.com

The UK and USA need to extend their “special relationship” to technology development

The UK and the USA have always had an enduring bond, with diplomatic, cultural and economic ties that have remained firm for centuries.

We live in an era of profound change, with technologies set to change things ever faster. If Britain and America work together to develop these technologies for the good of mankind, in a way that is open and free, yet also safe and good for our citizens, we can maintain the global lead our nations have enjoyed in the field of innovation.

Over recent months we have seen some very significant strides forward in this business relationship. All of the biggest US tech companies have made decisions to invest in the UK. Apple is developing a new HQ in the iconic Battersea Power Station, close to the new US embassy, while Google is building a billion-dollar new HQ in the increasingly fashionable King’s Cross. Facebook, Amazon, IBM and Microsoft are all extending their operations, and a multitude of smaller US firms are basing their international headquarters in London.

They are all coming here because, as we prepare to leave the EU, we are building a forward-looking Britain that is open to the wider world, and tech is at the heart of this.

Similarly, there have been major expansions and new investments from British firms into the US. Jaguar Land Rover, the UK’s largest automotive manufacturer, supports more than 9,000 jobs in the USA and recently opened its new multimillion-dollar North America corporate HQ in New Jersey. iProov, a leading British provider of biometric facial verification technology, last month became the first international company to be awarded a contract from the US Department of Homeland Security Science & Technology Directorate’s Silicon Valley Innovation Program.

We want to work with our global partners – to share expertise, and encourage investment – as we harness technology for the wider good. And that of course includes our old friend and closest ally, the USA.

We have a great deal to offer.

The UK was recently ranked the most AI-ready nation among all the OECD countries. In the past three years, new AI start-ups have been created in the UK on an almost weekly basis.

Recently, UK government and industry together committed over $1 billion to support our AI sector, much of which will go towards entrepreneurs. Funding has been set aside to create a nationwide network of tech incubators, which we’re calling “Tech Nation”, to support new AI businesses as they get off the ground.

We are also excited by — and I am a firm advocate for — the development of blockchain and similar technologies. The UK is leading the way in many areas where blockchain has the potential to be used, such as Fintech. There are now more people working in UK Fintech than in New York or in Singapore, Hong Kong and Australia combined.

And we are eminent in the development of immersive technologies, like Augmented and Virtual Reality, which look set to radically improve many areas of life in coming years, with applications as varied as flight simulation and surgical training techniques.

There is so much to be gained from close collaboration between our two countries on these new technologies and from sharing our expertise.

Together, we can reap the economic benefits of stealing an early lead in their development. We estimate that AI, for example, if widely adopted, could add $33 billion to the UK economy. But, perhaps most importantly, we can also work together to build strong regulatory and ethical frameworks for their wider application.

It is the role of governments across the world, the UK and US included, to set frameworks for these decentralised, cross-border systems so we can manage their use in a safe and effective way.

Our aim should be to harness the power and capability of technology, but always for the benefit of, and in service to, the populace.

We in the UK are avowedly pro-tech, always seeking to put its power in the hands of our citizens.

We have all learned valuable lessons from the recent scandals regarding data use, most recently around Facebook’s use of data.

We want to build a system that protects and cherishes the freedom of the Internet while protecting the rights of individuals, and their property, including intellectual property.

We want to see freedom in a framework; where our tech entrepreneurs have the space to innovate, knowing they do so with full public trust. Trust underpins a strong economy, and trust in data underpins a strong digital economy.

So in the UK we are developing a Digital Charter, to agree norms and rules for the online world and put them into practice. Our starting point is that what is unacceptable offline should not be tolerated in the online world. That includes how tech companies treat private citizens and use their data, as well as how people treat each other online.

Important changes like these cannot be agreed by one country alone. It is more important than ever that we work together and find common ground so we can make sure that tech continues to change the world for the better. Based on our mutual love of freedom and individual rights, Britain and America have throughout history risen to challenges together. I firmly believe that, working together, we can build that brighter future.

News Source = techcrunch.com

UK report urges action to combat AI bias

The need for diverse development teams and truly representational data-sets to avoid biases being baked into AI algorithms is one of the core recommendations in a lengthy Lords committee report looking into the economic, ethical and social implications of artificial intelligence, and published today by the upper House of the UK parliament.

“The main ways to address these kinds of biases are to ensure that developers are drawn from diverse gender, ethnic and socio-economic backgrounds, and are aware of, and adhere to, ethical codes of conduct,” the committee writes, chiming with plenty of extant commentary around algorithmic accountability.

“It is essential that ethics take centre stage in AI’s development and use,” adds committee chairman, Lord Clement-Jones, in a statement. “The UK has a unique opportunity to shape AI positively for the public’s benefit and to lead the international community in AI’s ethical development, rather than passively accept its consequences.”

The report also calls for the government to take urgent steps to help foster “the creation of authoritative tools and systems for auditing and testing training datasets to ensure they are representative of diverse populations, and to ensure that when used to train AI systems they are unlikely to lead to prejudicial decisions” — recommending a publicly funded challenge to incentivize the development of technologies that can audit and interrogate AIs.
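
To make that recommendation concrete, here is a minimal sketch of what one building block of such a dataset audit could look like: a chi-square goodness-of-fit check of a training set’s demographic mix against reference population proportions. It’s written in Python; the column name and benchmark shares are hypothetical placeholders, not anything specified by the committee:

    # Minimal sketch of one representativeness check for a training dataset.
    # The 'ethnicity' column and the benchmark shares below are hypothetical;
    # a real audit would cover whichever protected attributes apply.
    import pandas as pd
    from scipy.stats import chisquare

    def audit_representation(df, column, benchmark, alpha=0.05):
        """Flag a dataset whose distribution over `column` deviates
        significantly from the reference `benchmark` proportions."""
        observed = df[column].value_counts().reindex(benchmark.keys(), fill_value=0)
        expected = [share * len(df) for share in benchmark.values()]
        stat, p_value = chisquare(f_obs=observed.to_list(), f_exp=expected)
        return {"chi2": stat, "p_value": p_value, "representative": p_value >= alpha}

    # Toy data: group C is badly under-sampled relative to the benchmark.
    df = pd.DataFrame({"ethnicity": ["A"] * 900 + ["B"] * 80 + ["C"] * 20})
    print(audit_representation(df, "ethnicity", {"A": 0.80, "B": 0.13, "C": 0.07}))

An auditing tool of the kind the committee envisages would of course need to go much further, covering intersectional groups, label quality and downstream decision outcomes, not just headline proportions.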

“The Centre for Data Ethics and Innovation, in consultation with the Alan Turing Institute, the Institute of Electrical and Electronics Engineers, the British Standards Institute and other expert bodies, should produce guidance on the requirement for AI systems to be intelligible,” the committee adds. “The AI development sector should seek to adopt such guidance and to agree upon standards relevant to the sectors within which they work, under the auspices of the AI Council” — the latter being a proposed industry body it wants established to help ensure “transparency in AI”.

The committee is also recommending a cross-sector AI Code to try to steer developments in a positive, societally beneficial direction — though not for this to be codified in law (the suggestion is it could “provide the basis for statutory regulation, if and when this is determined to be necessary”).

Among the five principles they’re suggesting as a starting point for the voluntary code are that AI should be developed for “the common good and benefit of humanity”, and that it should operate on “principles of intelligibility and fairness”.

Though, elsewhere in the report, the committee points out it can be a challenge for humans to understand decisions made by some AI technologies — going on to suggest it may be necessary to refrain from using certain AI techniques for certain types of use-cases, at least until algorithmic accountability can be guaranteed.

“We believe it is not acceptable to deploy any artificial intelligence system which could have a substantial impact on an individual’s life, unless it can generate a full and satisfactory explanation for the decisions it will take,” it writes in a section discussing ‘intelligible AI’. “In cases such as deep neural networks, where it is not yet possible to generate thorough explanations for the decisions that are made, this may mean delaying their deployment for particular uses until alternative solutions are found.”

A third principle the committee says it would like to see included in the proposed voluntary code is: “AI should not be used to diminish the data rights or privacy of individuals, families or communities”.

Though this is a curiously narrow definition — why not push for AI not to diminish rights, period?

“It’s almost as if ‘follow the law’ is too hard to say,” observes Sam Smith, a coordinator at patient data privacy advocacy group, medConfidential, discussing the report.

“Unlike other AI ‘ethics’ standards which seek to create something so weak no one opposes it, the existing standards and conventions of the rule of law are well known and well understood, and provide real and meaningful scrutiny of decisions, assuming an entity believes in the rule of law,” he adds.

Looking at the tech industry as a whole, it’s certainly hard to conclude that self-defined ‘ethics’ appear to offer much of a meaningful check on commercial players’ data processing and AI activities.

Topical case in point: Facebook has continued to claim there was nothing improper about the fact millions of people’s information was shared with professor Aleksandr Kogan. People “knowingly provided their information” is the company’s defensive claim.

Yet the vast majority of people whose personal data was harvested from Facebook by Kogan clearly had no idea what was possible under its platform terms — which, until 2015, allowed one user to ‘consent’ to the sharing of data on all their Facebook friends. (Hence ~270,000 downloaders of Kogan’s app being able to pass data on up to 87M Facebook users.)
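
(For scale: 87 million affected users across roughly 270,000 installers works out to an average of around 320 friends per installer, comfortably within a typical Facebook friend count. That multiplier is what made the friends permission so potent.)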

So Facebook’s self-defined ‘ethical code’ has been shown to be worthless — aligning completely with its commercial imperatives, rather than supporting users to protect their privacy. (Just as its T&Cs are intended to cover its own “rear end”, rather than clearly inform people about their rights, as one US congressman memorably put it last week.)

“A week after Facebook were criticized by the US Congress, the only reference to the Rule of Law in this report is about exempting companies from liability for breaking it,” Smith adds in a MedConfidential response statement to the Lords report. “Public bodies are required to follow the rule of law, and any tools sold to them must meet those legal obligations. This standard for the public sector will drive the creation of tools which can be reused by all.”

 

Health data “should not be shared lightly”

The committee, which took evidence from Google-owned DeepMind as one of a multitude of expert witnesses during more than half a year’s worth of enquiry, touches critically on the AI company’s existing partnerships with UK National Health Service Trusts.

The first of these, dating from 2015 — and involving the sharing of ~1.6 million patients’ medical records with the Google-owned company — ran into trouble with the UK’s data protection regulator. The UK’s information commissioner concluded last summer that the Royal Free NHS Trust’s agreement with DeepMind had not complied with UK data protection law.

Patients’ medical records were used by DeepMind to develop a clinical task management app wrapped around an existing NHS algorithm for detecting a condition known as acute kidney injury. The app, called Streams, has been rolled out for use in the Royal Free’s hospitals — complete with PR fanfare. But it’s still not clear what legal basis exists to share patients’ data.

“Maintaining public trust over the safe and secure use of their data is paramount to the successful widespread deployment of AI and there is no better exemplar of this than personal health data,” the committee warns. “There must be no repeat of the controversy which arose between the Royal Free London NHS Foundation Trust and DeepMind. If there is, the benefits of deploying AI in the NHS will not be adopted or its benefits realised, and innovation could be stifled.”

The report also criticizes the “current piecemeal” approach being taken by NHS Trusts to sharing data with AI developers — saying this risks “the inadvertent under-appreciation of the data” and “NHS Trusts exposing themselves to inadequate data sharing arrangements”.

“The data held by the NHS could be considered a unique source of value for the nation. It should not be shared lightly, but when it is, it should be done in a manner which allows for that value to be recouped,” the committee writes.

A similar point — about not allowing a huge store of potential value which is contained within publicly-funded NHS datasets to be cheaply asset-stripped by external forces — was made by Oxford University’s Sir John Bell in a UK government-commissioned industrial strategy review of the life sciences sector last summer.

Despite similar concerns, the committee also calls for a framework for sharing NHS data to be published by the end of the year, and is pushing for NHS Trusts to digitize their current practices and records — with a target deadline of 2022 — in “consistent formats” so that people’s medical records can be made more accessible to AI developers.

But worryingly, given the general thrust towards making sensitive health data more accessible to third parties, the committee does not seem to have a very fine-grained grasp of data protection in a health context — where, for example, datasets can be extremely difficult to render truly anonymous given the level of detail typically involved.
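
To see why, consider k-anonymity, a standard yardstick for re-identification risk: the size of the smallest group of records sharing the same combination of ‘quasi-identifier’ fields. The sketch below (Python, with hypothetical field names and toy data) shows how even three innocuous-looking attributes can leave records unique, i.e. k = 1:

    # Minimal k-anonymity check; field names and records are hypothetical.
    import pandas as pd

    def k_anonymity(df, quasi_identifiers):
        """Size of the smallest equivalence class: the number of records
        sharing the rarest combination of quasi-identifier values."""
        return int(df.groupby(quasi_identifiers).size().min())

    records = pd.DataFrame({
        "postcode_prefix": ["NW1", "NW1", "NW1", "E2"],
        "birth_year":      [1962, 1962, 1985, 1985],
        "sex":             ["F", "F", "M", "M"],
    })
    # Prints 1: some rows are unique on these three fields alone, so
    # stripping names would not stop re-identification.
    print(k_anonymity(records, ["postcode_prefix", "birth_year", "sex"]))

Detailed medical records carry far more quasi-identifiers than this toy example, which is why ‘appropriately anonymised’ is a much harder bar to clear than it sounds.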

Although they are at least calling for the relevant data protection and patient data bodies to be involved in provisioning the framework for sharing NHS data, alongside Trusts that have already worked with DeepMind (and in one case received an ICO wrist-slap).

They write:

We recommend that a framework for the sharing of NHS data should be prepared and published by the end of 2018 by NHS England (specifically NHS Digital) and the National Data Guardian for Health and Care, with the support of the ICO [information commissioner’s office] and the clinicians and NHS Trusts which already have experience of such arrangements (such as the Royal Free London and Moorfields Eye Hospital NHS Foundation Trusts), as well as the Caldicott Guardians [the NHS’ patient data advocates]. This framework should set out clearly the considerations needed when sharing patient data in an appropriately anonymised form, the precautions needed when doing so, and an awareness of the value of that data and how it is used. It must also take account of the need to ensure SME access to NHS data, and ensure that patients are made aware of the use of their data and given the option to opt out.

As the Facebook-Cambridge Analytica scandal has clearly illustrated, opt-outs alone cannot safeguard people’s data or their legal rights — which is why incoming EU data protection rules (GDPR) beef up consent requirements to require a clear affirmative. (And it goes without saying that opt-outs are especially concerning in a medical context where the data involved is so sensitive — yet, at least in the case of a DeepMind partnership with Taunton and Somerset NHS Trust, patients do not even appear to have been given the ability to say no to their data being processed.)

Opt-outs (i.e. rather than opt-in systems) for data-sharing and self-defined/voluntary codes of ‘ethics’ demonstrably do very little to protect people’s legal rights where digital data is concerned — even if it’s true, for example, that Facebook holds itself in check versus what it could theoretically do with data, as company execs have suggested (one wonders what kind of stuff they’re voluntarily refraining from, given what they have been caught trying to manipulate).

The wider risk of relying on consumer savvy to regulate commercial data sharing is that an educated, technologically aware few might be able to lock down — or reduce — access to their information; but the mainstream majority will have no clue they need to or even how it’s possible. And data protection for a select elite doesn’t sound very equitable.

Meanwhile, at least where this committee’s attitude to AI is concerned, developers and commercial entities are being treated with favorable encouragement — via the notion of a voluntary (and really pretty basic) code of AI ethics — rather than being robustly reminded they need to follow the law.

Given the scope and scale of current AI-fueled scandals, that risks making the committee look naive.

The government has made AI a strategic priority, though, and policies to foster and accelerate data-sharing to drive tech developments are a key part of its digital and industrial strategies. So the report needs to be read within that wider context.

The committee does add its voice to questions about whether/how legal liability will mesh with automated decision making — writing that “clarity is required” on whether “new mechanisms for legal liability and redress” are needed or not.

“We recommend that the Law Commission consider the adequacy of existing legislation to address the legal liability issues of AI and, where appropriate, recommend to Government appropriate remedies to ensure that the law is clear in this area,” it says on this. “At the very least, this work should establish clear principles for accountability and intelligibility. This work should be completed as soon as possible.”

But this isn’t exactly cutting edge commentary. Last month the government announced a three-year regulatory review focused on self-driving cars and the law, for instance. And the liability point is already generally well-aired — and in the autonomous cars case, at least, now having its tires extensively kicked in the UK.

What’s less specifically discussed in government circles is how AIs are demonstrably piling pressure on existing laws. And what — if anything — should be done to address those kinds of AI-fueled breaking points. (Exceptions: Terrorist content spreading via online platforms has been decried for some years, with government ministers more than happy to make platforms and technologies their scapegoat and even toughen laws; more recently hate speech on online platforms has also become a major political target for governments in Europe.)

The committee briefly touches on some of these societal pressure points in a section on AI’s impact on “social and political cohesion”, noting concerns raised to it about issues such as filter bubbles and the risk of AIs being used to manipulate elections. “[T]here is a rapidly growing need for public understanding of, and engagement with, AI to develop alongside the technology itself. The manipulation of data in particular will be a key area for public understanding and discussion in the coming months and years,” it writes here. 

However it has little in the way of gunpowder — merely recommending that research is commissioned into “the possible impact of AI on conventional and social media outlets”, and to investigate “measures which might counteract the use of AI to mislead or distort public opinion as a matter of urgency”.

Elsewhere in the report, it also raises an interesting concern about data monopolies — noting that investments by “large overseas technology companies in the UK economy” are “increasing consolidation of power and influence by a select few”, which it argues risks damaging the UK’s home-grown AI start-up sector.

But again there’s not much of substance in its response. The committee doesn’t seem to have formed its own ideas on how, or even whether, the government needs to address data concentrating power in the hands of big tech — beyond calling for “strong” competition frameworks. This lack of conviction is attributed to hearing mixed messages on the topic from its witnesses. (Though it may well also be related to the economic portion of the enquiry’s focus.)

“The monopolisation of data demonstrates the need for strong ethical, data protection and competition frameworks in the UK, and for continued vigilance from the regulators,” it concludes. “We urge the Government, and the Competition and Markets Authority, to review proactively the use and potential monopolisation of data by the big technology companies operating in the UK.”

The report also raises concerns about access to funding for UK AI startups to ensure they can continue scaling domestic businesses — recommending that a chunk of the £2.5BN investment fund at the British Business Bank, which the government announced in the Autumn Budget 2017, is “reserved as an AI growth fund for SMEs with a substantive AI component, and be specifically targeted at enabling such companies to scale up”.

No one who supports the startup cause would argue with trying to make more money available. But if data access has been sealed up by tech giants, all the scale-up funding in the world won’t help domestic AI startups break through that algorithmic ceiling.

Also touched on: The looming impact of Brexit, with the committee calling on the government to “commit to underwriting, and where necessary replacing, funding for European research and innovation programmes, after we have left the European Union”. Which boils down to another whistle in a now very long score of calls for replacement funding after the UK leaves the EU.

Funding for regulators is another concern, with a warning that the ICO must be “adequately and sustainably resourced” — as a result of the additional burden the committee expects AI to put on existing regulators.

This issue is also on the radar of the UK’s digital minister, Matt Hancock, who has said he’s considering what additional resources the ICO might need — such as the power to compel testimony from individuals. (Though the ICO itself has previously raised concerns that the minister and his data protection bill are risking undermining her authority.) For now it remains to be seen how well armed the agency will be to meet the myriad challenges generated and scaled by AI’s data processors.

“Blanket AI-specific regulation, at this stage, would be inappropriate,” the report adds. “We believe that existing sector-specific regulators are best placed to consider the impact on their sectors of any subsequent regulation which may be needed. We welcome that the Data Protection Bill and GDPR appear to address many of the concerns of our witnesses regarding the handling of personal data, which is key to the development of AI. The Government Office for AI, with the Centre for Data Ethics and Innovation, needs to identify the gaps, if any, where existing regulation may not be adequate. The Government Office for AI must also ensure that the existing regulators’ expertise is utilised in informing any potential regulation that may be required in the future.”

The committee’s last two starter principles for their voluntary AI code serve to underline how generously low the ethical bar is really being set here — boiling down to: AI shouldn’t be allowed to kill off free schools for our kids, nor be allowed to kill us — which may itself be another consequence of humans not always being able to clearly determine how AI does what it does or exactly what it might be doing to us.

News Source = techcrunch.com
