Timesdelhi.com

July 18, 2018
Category archive

DeepMind

AI edges closer to understanding 3D space the way we do

If I show you a single picture of a room, you can tell me right away that there’s a table with a chair in front of it, they’re probably about the same size, about this far from each other, with the walls this far away — enough to draw a rough map of the room. Computer vision systems don’t have this intuitive understanding of space, but the latest research from DeepMind brings them closer than ever before.

The new paper from the Google-owned research outfit was published today in the journal Science (complete with news item). It details a system whereby a neural network, knowing practically nothing, can look at one or two static 2D images of a scene and reconstruct a reasonably accurate 3D representation of it. We’re not talking about going from snapshots to full 3D images (Facebook’s working on that) but rather replicating the intuitive and space-conscious way that all humans view and analyze the world.

When I say it knows practically nothing, I don’t mean it’s just some standard machine learning system. Most computer vision algorithms work via what’s called supervised learning, in which they ingest a great deal of data that’s been labeled by humans with the correct answers — for example, images with everything in them outlined and named.

This new system, on the other hand, has no such knowledge to draw on. It works entirely independently of any ideas of how to see the world as we do, like how objects’ colors change towards their edges, how they get bigger and smaller as their distance changes, and so on.

It works, roughly speaking, like this. One half of the system is its “representation” part, which can observe a given 3D scene from some angle, encoding it in a complex mathematical form called a vector. Then there’s the “generative” part, which, based only on the vectors created earlier, predicts what a different part of the scene would look like.

(A video showing a bit more of how this works is available here.)
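To make the division of labour concrete, below is a minimal, PyTorch-flavoured sketch of that two-part setup. It is an illustrative assumption on my part, not DeepMind’s published architecture: the module names, layer sizes and the seven-number camera encoding are all invented for the example, and the paper’s actual Generative Query Network uses a far more elaborate recurrent, probabilistic generator.

```python
import torch
import torch.nn as nn

class RepresentationNet(nn.Module):
    """Encodes one (image, camera viewpoint) observation into a scene vector."""
    def __init__(self, repr_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.fc = nn.Linear(64 + 7, repr_dim)  # 7 numbers stand in for camera position/orientation

    def forward(self, image, viewpoint):
        return self.fc(torch.cat([self.conv(image), viewpoint], dim=-1))

class GenerationNet(nn.Module):
    """Predicts the image seen from a query viewpoint, given only the scene vector."""
    def __init__(self, repr_dim=256):
        super().__init__()
        self.fc = nn.Linear(repr_dim + 7, 64 * 8 * 8)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(64, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, scene_repr, query_viewpoint):
        x = self.fc(torch.cat([scene_repr, query_viewpoint], dim=-1))
        return self.deconv(x.view(-1, 64, 8, 8))

rep_net, gen_net = RepresentationNet(), GenerationNet()
images = torch.rand(2, 3, 64, 64)   # two observed views of the same scene
viewpoints = torch.rand(2, 7)       # their (made-up) camera parameters

# Each observation is encoded separately; the per-view vectors are summed into
# a single scene representation, which is then queried from an unseen viewpoint.
scene = rep_net(images, viewpoints).sum(dim=0, keepdim=True)
novel_view = gen_net(scene, torch.rand(1, 7))   # a 64x64 guess at what that viewpoint shows
```

The aggregation step is the part worth noticing: because observations are folded in by simply summing their vectors, handing the system one or two extra views tightens the scene representation without changing anything else, which fits the article’s note below that additional observations produce an even better result.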

Think of it like someone handing you a couple of pictures of a room, then asking you to draw what you’d see if you were standing in a specific spot in it. Again, this is simple enough for us, but computers have no natural ability to do it; their sense of sight, if we can call it that, is extremely rudimentary and literal, and of course machines lack imagination.

Yet there are few better words to describe the ability to say what’s behind something when you can’t see it.

“It was not at all clear that a neural network could ever learn to create images in such a precise and controlled manner,” said lead author of the paper, Ali Eslami, in a release accompanying the paper. “However we found that sufficiently deep networks can learn about perspective, occlusion and lighting, without any human engineering. This was a super surprising finding.”

It also allows the system to accurately recreate a 3D object from a single viewpoint, such as the blocks shown here:

I’m not sure I could do that.

Obviously there’s nothing in any single observation to tell the system that some part of the blocks extends forever away from the camera. But it nevertheless creates a plausible version of the block structure that is accurate in every way. Adding one or two more observations requires the system to rectify multiple views, but results in an even better representation.

This kind of ability is critical for robots especially because they have to navigate the real world by sensing it and reacting to what they see. With limited information, such as some important clue that’s temporarily hidden from view, they can freeze up or make illogical choices. But with something like this in their robotic brains, they could make reasonable assumptions about, say, the layout of a room without having to ground-truth every inch.

“Although we need more data and faster hardware before we can deploy this new type of system in the real world,” Eslami said, “it takes us one step closer to understanding how we may build agents that learn by themselves.”

News Source = techcrunch.com

Facebook’s open-source Go bot can now beat professional players

Go is the go-to game for machine learning researchers. It’s what Google’s DeepMind team famously used to show off its algorithms, and Facebook, too, recently announced that it was building a Go bot of its own. As the team announced at the company’s F8 developer conference today, the ELF OpenGo bot has now achieved professional status after winning all 14 games it played against a group of top 30 human Go players recently.

“We salute our friends at DeepMind for doing awesome work,” Facebook CTO Mike Schroepfer said in today’s keynote. “But we wondered: Are there some unanswered questions? What else can you apply these tools to?” As Facebook notes in a blog post today, the DeepMind model itself also remains under wraps. In contrast, Facebook has open-sourced its bot.

“To make this work both reproducible and available to AI researchers around the world, we created an open source Go bot, called ELF OpenGo, that performs well enough to answer some of the key questions unanswered by AlphaGo,” the team writes today.

It’s not just Go that the team is interested in, though. Facebook’s AI Research group has also developed a StarCraft bot that can handle the often chaotic environment of that game. The company plans to open-source this bot, too. So while Facebook isn’t quite at the point where it can launch a bot that can learn any game (with the right amount of training), the team is clearly making quite a bit of progress here.

News Source = techcrunch.com

UK report urges action to combat AI bias

The need for diverse development teams and truly representational data-sets to avoid biases being baked into AI algorithms is one of the core recommendations in a lengthy Lords committee report looking into the economic, ethical and social implications of artificial intelligence, and published today by the upper House of the UK parliament.

“The main ways to address these kinds of biases are to ensure that developers are drawn from diverse gender, ethnic and socio-economic backgrounds, and are aware of, and adhere to, ethical codes of conduct,” the committee writes, chiming with plenty of extant commentary around algorithmic accountability.

“It is essential that ethics take centre stage in AI’s development and use,” adds committee chairman, Lord Clement-Jones, in a statement. “The UK has a unique opportunity to shape AI positively for the public’s benefit and to lead the international community in AI’s ethical development, rather than passively accept its consequences.”

The report also calls for the government to take urgent steps to help foster “the creation of authoritative tools and systems for auditing and testing training datasets to ensure they are representative of diverse populations, and to ensure that when used to train AI systems they are unlikely to lead to prejudicial decisions” — recommending a publicly funded challenge to incentivize the development of technologies that can audit and interrogate AIs.

“The Centre for Data Ethics and Innovation, in consultation with the Alan Turing Institute, the Institute of Electrical and Electronics Engineers, the British Standards Institute and other expert bodies, should produce guidance on the requirement for AI systems to be intelligible,” the committee adds. “The AI development sector should seek to adopt such guidance and to agree upon standards relevant to the sectors within which they work, under the auspices of the AI Council” — the latter being a proposed industry body it wants established to help ensure “transparency in AI”.

The committee is also recommending a cross-sector AI Code to try to steer developments in a positive, societally beneficial direction — though not for this to be codified in law (the suggestion is it could “provide the basis for statutory regulation, if and when this is determined to be necessary”).

Among the five principles they’re suggesting as a starting point for the voluntary code are that AI should be developed for “the common good and benefit of humanity”, and that it should operate on “principles of intelligibility and fairness”.

Though, elsewhere in the report, the committee points out it can be a challenge for humans to understand decisions made by some AI technologies — going on to suggest it may be necessary to refrain from using certain AI techniques for certain types of use-cases, at least until algorithmic accountability can be guaranteed.

“We believe it is not acceptable to deploy any artificial intelligence system which could have a substantial impact on an individual’s life, unless it can generate a full and satisfactory explanation for the decisions it will take,” it writes in a section discussing ‘intelligible AI’. “In cases such as deep neural networks, where it is not yet possible to generate thorough explanations for the decisions that are made, this may mean delaying their deployment for particular uses until alternative solutions are found.”

A third principle the committee says it would like to see included in the proposed voluntary code is: “AI should not be used to diminish the data rights or privacy of individuals, families or communities”.

Though this is a curiously narrow definition — why not push for AI not to diminish rights, period?

“It’s almost as if ‘follow the law’ is too hard to say,” observes Sam Smith, a coordinator at patient data privacy advocacy group, medConfidential, discussing the report.

“Unlike other AI ‘ethics’ standards which seek to create something so weak no one opposes it, the existing standards and conventions of the rule of law are well known and well understood, and provide real and meaningful scrutiny of decisions, assuming an entity believes in the rule of law,” he adds.

Looking at the tech industry as a whole, it’s certainly hard to conclude that self-defined ‘ethics’ appear to offer much of a meaningful check on commercial players’ data processing and AI activities.

Topical case in point: Facebook has continued to claim there was nothing improper about the fact millions of people’s information was shared with professor Aleksandr Kogan. People “knowingly provided their information” is the company’s defensive claim.

Yet the vast majority of people whose personal data was harvested from Facebook by Kogan clearly had no idea what was possible under its platform terms — which, until 2015, allowed one user to ‘consent’ to the sharing of data on all their Facebook friends. (Hence ~270,000 downloaders of Kogan’s app being able to pass data on up to 87M Facebook users.)
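For a rough sense of the amplification involved, the back-of-the-envelope arithmetic, using only the two figures quoted above, looks like this:

```python
installers = 270_000          # approximate downloaders of Kogan's app (figure from the article)
affected_users = 87_000_000   # Facebook's upper estimate of people whose data was shared

# On average, each installer's 'consent' exposed roughly this many other people:
print(round(affected_users / installers))   # ~322
```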

So Facebook’s self-defined ‘ethical code’ has been shown to be worthless — aligning completely with its commercial imperatives, rather than supporting users to protect their privacy. (Just as its T&Cs are intended to cover its own “rear end”, rather than clearly inform people about their rights, as one US congressman memorably put it last week.)

“A week after Facebook were criticized by the US Congress, the only reference to the Rule of Law in this report is about exempting companies from liability for breaking it,” Smith adds in a MedConfidential response statement to the Lords report. “Public bodies are required to follow the rule of law, and any tools sold to them must meet those legal obligations. This standard for the public sector will drive the creation of tools which can be reused by all.”

 

Health data “should not be shared lightly”

The committee, which took evidence from Google-owned DeepMind as one of a multitude of expert witnesses during more than half a year’s worth of enquiry, touches critically on the AI company’s existing partnerships with UK National Health Service Trusts.

The first of these partnerships, dating from 2015 — and involving the sharing of ~1.6 million patients’ medical records with the Google-owned company — ran into trouble with the UK’s data protection regulator. The UK’s information commissioner concluded last summer that the Royal Free NHS Trust’s agreement with DeepMind had not complied with UK data protection law.

Patients’ medical records were used by DeepMind to develop a clinical task management app wrapped around an existing NHS algorithm for detecting a condition known as acute kidney injury. The app, called Streams, has been rolled out for use in the Royal Free’s hospitals — complete with PR fanfare. But it’s still not clear what legal basis exists to share patients’ data.

“Maintaining public trust over the safe and secure use of their data is paramount to the successful widespread deployment of AI and there is no better exemplar of this than personal health data,” the committee warns. “There must be no repeat of the controversy which arose between the Royal Free London NHS Foundation Trust and DeepMind. If there is, the benefits of deploying AI in the NHS will not be adopted or its benefits realised, and innovation could be stifled.”

The report also criticizes the “current piecemeal” approach being taken by NHS Trusts to sharing data with AI developers — saying this risks “the inadvertent under-appreciation of the data” and “NHS Trusts exposing themselves to inadequate data sharing arrangements”.

“The data held by the NHS could be considered a unique source of value for the nation. It should not be shared lightly, but when it is, it should be done in a manner which allows for that value to be recouped,” the committee writes.

A similar point — about not allowing a huge store of potential value which is contained within publicly-funded NHS datasets to be cheaply asset-stripped by external forces — was made by Oxford University’s Sir John Bell in a UK government-commissioned industrial strategy review of the life sciences sector last summer.

Despite similar concerns, the committee also calls for a framework for sharing NHS data to be published by the end of the year, and is pushing for NHS Trusts to digitize their current practices and records — with a target deadline of 2022 — in “consistent formats” so that people’s medical records can be made more accessible to AI developers.

But worryingly, given the general thrust towards making sensitive health data more accessible to third parties, the committee does not seem to have a very fine-grained grasp of data protection in a health context — where, for example, datasets can be extremely difficult to render truly anonymous given the level of detail typically involved.

Although they are at least calling for the relevant data protection and patient data bodies to be involved in provisioning the framework for sharing NHS data, alongside Trusts that have already worked with DeepMind (and in one case received an ICO wrist-slap).

They write:

We recommend that a framework for the sharing of NHS data should be prepared and published by the end of 2018 by NHS England (specifically NHS Digital) and the National Data Guardian for Health and Care with the support of the ICO [information commissioner’s office] and the clinicians and NHS Trusts which already have experience of such arrangements (such as the Royal Free London and Moorfields Eye Hospital NHS Foundation Trusts), as well as the Caldicott Guardians [the NHS’ patient data advocates]. This framework should set out clearly the considerations needed when sharing patient data in an appropriately anonymised form, the precautions needed when doing so, and an awareness of the value of that data and how it is used. It must also take account of the need to ensure SME access to NHS data, and ensure that patients are made aware of the use of their data and given the option to opt out.

As the Facebook-Cambridge Analytica scandal has clearly illustrated, opt-outs alone cannot safeguard people’s data or their legal rights — which is why incoming EU data protection rules (GDPR) beef up consent requirements to require a clear affirmative. (And it goes without saying that opt-outs are especially concerning in a medical context where the data involved is so sensitive — yet, at least in the case of a DeepMind partnership with Taunton and Somerset NHS Trust, patients do not even appear to have been given the ability to say no to their data being processed.)
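To make that distinction concrete, here is a deliberately trivial sketch of the two defaults — the function names and preference values are invented for illustration; the point is only that an opt-out model treats silence as permission, while an affirmative opt-in model does not:

```python
def may_process_opt_out(preferences, patient_id):
    # Opt-out: data is shared unless the patient has actively said no.
    return preferences.get(patient_id) != "refused"

def may_process_opt_in(preferences, patient_id):
    # Opt-in (affirmative consent): data is shared only after an explicit yes.
    return preferences.get(patient_id) == "consented"

prefs = {}  # a patient who has never been asked at all
assert may_process_opt_out(prefs, "patient-42") is True    # silence counts as permission
assert may_process_opt_in(prefs, "patient-42") is False    # silence blocks processing
```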

Opt-outs (i.e. rather than opt-in systems) for data-sharing and self-defined/voluntary codes of ‘ethics’ demonstrably do very little to protect people’s legal rights where digital data is concerned — even if it’s true, for example, that Facebook holds itself in check vs what it could theoretically do with data, as company execs have suggested (one wonders what kind of stuff they’re voluntarily refraining from, given what they have been caught trying to manipulate).

The wider risk of relying on consumer savvy to regulate commercial data sharing is that an educated, technologically aware few might be able to lock down — or reduce — access to their information; but the mainstream majority will have no clue they need to or even how it’s possible. And data protection for a select elite doesn’t sound very equitable.

Meanwhile, at least where this committee’s attitude to AI is concerned, developers and commercial entities are being treated with favorable encouragement — via the notion of a voluntary (and really pretty basic) code of AI ethics — rather than being robustly reminded they need to follow the law.

Given the scope and scale of current AI-fueled scandals, that risks the committee looking naive.

The government has, though, made AI a strategic priority, and policies to foster and accelerate data-sharing to drive tech developments are a key part of its digital and industrial strategies. So the report needs to be read within that wider context.

The committee does add its voice to questions about whether/how legal liability will mesh with automated decision making — writing that “clarity is required” on whether “new mechanisms for legal liability and redress” are needed or not.

“We recommend that the Law Commission consider the adequacy of existing legislation to address the legal liability issues of AI and, where appropriate, recommend to Government appropriate remedies to ensure that the law is clear in this area,” it says on this. “At the very least, this work should establish clear principles for accountability and intelligibility. This work should be completed as soon as possible.”

But this isn’t exactly cutting edge commentary. Last month the government announced a three-year regulatory review focused on self-driving cars and the law, for instance. And the liability point is already generally well-aired — and in the autonomous cars case, at least, now having its tires extensively kicked in the UK.

What’s less specifically discussed in government circles is how AIs are demonstrably piling pressure on existing laws. And what — if anything — should be done to address those kind of AI-fueled breaking points. (Exceptions: Terrorist content spreading via online platforms has been decried for some years, with government ministers more than happy to make platforms and technologies their scapegoat and even toughen laws; more recently hate speech on online platforms has also become a major political target for governments in Europe.)

The committee briefly touches on some of these societal pressure points in a section on AI’s impact on “social and political cohesion”, noting concerns raised to it about issues such as filter bubbles and the risk of AIs being used to manipulate elections. “[T]here is a rapidly growing need for public understanding of, and engagement with, AI to develop alongside the technology itself. The manipulation of data in particular will be a key area for public understanding and discussion in the coming months and years,” it writes here. 

However it has little in the way of gunpowder — merely recommending that research is commissioned into “the possible impact of AI on conventional and social media outlets”, and to investigate “measures which might counteract the use of AI to mislead or distort public opinion as a matter of urgency”.

Elsewhere in the report, it also raises an interesting concern about data monopolies — noting that investments by “large overseas technology companies in the UK economy” are “increasing consolidation of power and influence by a select few”, which it argues risks damaging the UK’s home-grown AI start-up sector.

But again there’s not much of substance in its response. The committee doesn’t seem to have formed its own ideas on how or even whether the government needs to address the way data is concentrating power in the hands of big tech — beyond calling for “strong” competition frameworks. This lack of conviction is attributed to hearing mixed messages on the topic from its witnesses. (Though it may well also be related to the economic portion of the enquiry’s focus.)

“The monopolisation of data demonstrates the need for strong ethical, data protection and competition frameworks in the UK, and for continued vigilance from the regulators,” it concludes. “We urge the Government, and the Competition and Markets Authority, to review proactively the use and potential monopolisation of data by the big technology companies operating in the UK.”

The report also raises concerns about access to funding for UK AI startups to ensure they can continue scaling domestic businesses — recommending that a chunk of the £2.5BN investment fund at the British Business Bank, which the government announced in the Autumn Budget 2017, is “reserved as an AI growth fund for SMEs with a substantive AI component, and be specifically targeted at enabling such companies to scale up”.

No one who supports the startup cause would argue with trying to make more money available. But if data access has been sealed up by tech giants, all the scale-up funding in the world won’t help domestic AI startups break through that algorithmic ceiling.

Also touched on: The looming impact of Brexit, with the committee calling on the government to “commit to underwriting, and where necessary replacing, funding for European research and innovation programmes, after we have left the European Union”. Which boils down to another whistle in a now very long score of calls for replacement funding after the UK leaves the EU.

Funding for regulators is another concern, with a warning that the ICO must be “adequately and sustainably resourced” — as a result of the additional burden the committee expects AI to put on existing regulators.

This issue is also on the radar of the UK’s digital minister, Matt Hancock, who has said he’s considering what additional resources the ICO might need — such as the power to compel testimony from individuals. (Though the ICO itself has previously raised concerns that the minister and his data protection bill are risking undermining her authority.) For now it remains to be seen how well armed the agency will be to meet the myriad challenges generated and scaled by AI’s data processors.

“Blanket AI-specific regulation, at this stage, would be inappropriate,” the report adds. “We believe that existing sector-specific regulators are best placed to consider the impact on their sectors of any subsequent regulation which may be needed. We welcome that the Data Protection Bill and GDPR appear to address many of the concerns of our witnesses regarding the handling of personal data, which is key to the development of AI. The Government Office for AI, with the Centre for Data Ethics and Innovation, needs to identify the gaps, if any, where existing regulation may not be adequate. The Government Office for AI must also ensure that the existing regulators’ expertise is utilised in informing any potential regulation that may be required in the future.”

The committee’s last two starter principles for their voluntary AI code serve to underline how generously low the ethical bar is really being set here — boiling down to: AI shouldn’t be allowed to kill off free schools for our kids, nor be allowed to kill us — which may itself be another consequence of humans not always being able to clearly determine how AI does what it does or exactly what it might be doing to us.

News Source = techcrunch.com

DeepMind has yet to find out how smart its AlphaGo Zero AI could be

Once Alphabet’s artificial intelligence company DeepMind had mastered the ability to defeat the best human Go players in the world, it tried to beat its own best attempts using an approach based strictly on a virtual Go player that was totally self-taught.

That Go-playing virtual intelligence was called AlphaGo Zero, and it managed to rediscover over 3,000 years of human knowledge around the game in just 72 hours. It then beat the version of the original AlphaGo that beat champion Lee Sedol in just over three days, and bested the most powerful previous version of AlphaGo just 40 days after that.

DeepMind’s AlphaGo Zero was an immense achievement not just because of its speed, but because it was able to accomplish all this starting from scratch – researchers skipped the usual first step of feeding it human game data as a baseline from which to begin the system’s education. Instead, it generated its own data, literally trying out moves on the board at random and working out which were most effective.
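As a loose illustration of what starting from random play means in practice, here is a heavily simplified self-play loop in Python. It is a sketch of the general tabula-rasa recipe — play games against yourself, then nudge your move preferences towards whatever the eventual winner did — not AlphaGo Zero’s actual algorithm, which pairs a deep network with Monte Carlo tree search. The `initial_state`, `legal_moves`, `play_move` and `winner` callbacks are hypothetical stand-ins for a game implementation.

```python
import random
from collections import defaultdict

# Toy move-preference table; every score starts at zero, so the earliest games
# are played entirely at random -- the self-generated data the system learns from.
policy = defaultdict(float)

def choose_move(state, moves, exploration=0.1):
    """Pick randomly while nothing is known (or to explore), otherwise greedily."""
    if random.random() < exploration or all(policy[(state, m)] == 0 for m in moves):
        return random.choice(moves)
    return max(moves, key=lambda m: policy[(state, m)])

def self_play_game(initial_state, legal_moves, play_move, winner):
    """Play one game against itself; returns the (state, move) history and the winner (0 or 1)."""
    state, history = initial_state, []
    while winner(state) is None:
        move = choose_move(state, legal_moves(state))
        history.append((state, move))   # states are assumed hashable
        state = play_move(state, move)
    return history, winner(state)

def train(n_games, initial_state, legal_moves, play_move, winner, lr=0.01):
    """Reinforce the eventual winner's moves and discourage the loser's after each game."""
    for _ in range(n_games):
        history, result = self_play_game(initial_state, legal_moves, play_move, winner)
        for ply, (state, move) in enumerate(history):
            player = ply % 2   # players alternate moves; draws are ignored for simplicity
            policy[(state, move)] += lr if player == result else -lr
```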

Perhaps the most interesting thing about AlphaGo Zero, though, isn’t how fast or how effectively it was able to do what it did, but that it ultimately didn’t even reach its full potential. DeepMind CEO and co-founder Demis Hassabis explained on stage at Google’s Go North conference in Toronto that the company actually shut down the experiment before it could determine the upper limits of AlphaGo Zero’s maximum intelligence.

“We never actually found the limit of how good this version of AlphaGo could get,” he said. “We needed the computers for something else.”

Hassabis said that DeepMind may spin up AlphaGo Zero again in future to find out how much further it can go, though the main benefit of that exercise might be to help teach human AlphaGo players about additional, “alien” moves and stratagems that they can study to improve their own play.

DeepMind’s whole goal is to build artificial general intelligence, however, which can use its smarts to accomplish different tasks – so a smarter AlphaGo Zero might be able to better optimize energy management in Google’s data centers, for instance, or even in the electrical grid in general.

News Source = techcrunch.com

DeepMind now has an AI ethics research unit. We have a few questions for it…

DeepMind, the U.K. AI company which was acquired in 2014 for $500M+ by Google, has launched a new ethics unit which it says will conduct research across six “key themes” — including ‘privacy, transparency and fairness’ and ‘economic impact: inclusion and equality’.

The XXVI-Alphabet-owned company, whose corporate parent generated almost $90BN in revenue last year, says the research will consider “open questions” such as: “How will the increasing use and sophistication of AI technologies interact with corporate power?”

It will be helped in this important work by a number of “independent advisors” (DeepMind also calls them “fellows“) to, it says, “help provide oversight, critical feedback and guidance for our research strategy and work program”; and also by a group of partners, aka existing research institutions, which it says it will work with “over time in an effort to include the broadest possible viewpoints”.

Although it really shouldn’t need a roster of learned academics and institutions to point out the gigantic conflict of interest in a commercial AI giant researching the ethics of its own technology’s societal impacts.

(Meanwhile, the issue of AI-savvy academics not already being attached, in some consulting form or other, to one tech giant or another is another ethical dilemma for the AI field that we’ve highlighted before.)

The DeepMind ethics research unit is in addition to an internal ethics board apparently established by DeepMind at the point of the Google acquisition because of the founders’ own concerns about corporate power getting its hands on powerful AI.

However the names of the people who sit on that board have never been made public — and are not, apparently, being made public now. Even as DeepMind makes a big show of wanting to research AI ethics and transparency. So you do have to wonder quite how mirrored the insides of the filter bubbles that tech giants appear to surround themselves with really are.

One thing is becoming amply clear where AI and tech platform power is concerned: Algorithmic automation at scale is having all sorts of unpleasant societal consequences — which, if we’re being charitable, can be put down to the result of corporates optimizing AI for scale and business growth. Ergo: ‘we make money, not social responsibility’.

But it turns out that if AI engineers don’t think about ethics and potential negative effects and impact before they get to work moving fast and breaking stuff, those hyper scalable algorithms aren’t going to identify the problem on their own and route around the damage. Au contraire. They’re going to amplify, accelerate and exacerbate the damage.

Witness fake news. Witness rampant online abuse. Witness the total lack of oversight that lets anyone pay to conduct targeted manipulation of public opinion and screw the socially divisive consequences.

Given the dawning political and public realization of how AI can cause all sorts of societal problems because its makers just ‘didn’t think of that’ — and thus have allowed their platforms to be weaponized by entities intent on targeted harm, then the need for tech platform giants to control the narrative around AI is surely becoming all too clear for them. Or they face their favorite tool being regulated in ways they really don’t like.

The penny may be dropping from ‘we just didn’t think of that’ to ‘we really need to think of that — and control how the public and policymakers think of that’.

And so we arrive at DeepMind launching an ethics research unit that’ll be putting out ## pieces of AI-related research per year — hoping to influence public opinion and policymakers on areas of critical concern to its business interests, such as governance and accountability.

This from the same company that this summer was judged by the UK’s data watchdog to have broken UK privacy law when its health division was handed the fully identifiable medical records of some 1.6M people without their knowledge or consent. And now DeepMind wants to research governance and accountability ethics? Full marks for hindsight guys.

Now it’s possible DeepMind’s internal ethics research unit is going to publish thoughtful papers interrogating the full spectrum societal risks of concentrating AI in the hands of massive corporate power, say.

But given its vested commercial interests in shaping how AI (inevitably) gets regulated, a fully impartial research unit staffed by DeepMind staff does seem rather difficult to imagine.

“We believe AI can be of extraordinary benefit to the world, but only if held to the highest ethical standards,” writes DeepMind in a carefully worded blog post announcing the launch of the unit.

“Technology is not value neutral, and technologists must take responsibility for the ethical and social impact of their work,” it adds, before going on to say: “As scientists developing AI technologies, we have a responsibility to conduct and support open research and investigation into the wider implications of our work.”

The key phrase there is of course “open research and investigation”. And the key question is whether DeepMind itself can realistically deliver open research and investigation into itself.

There’s a reason no one trusts the survey touting the amazing health benefits of a particular foodstuff carried out by the makers of said foodstuff.

Related: Google was recently fingered by a US watchdog for spending millions funding academic research to influence opinion and policy making. (It rebutted the charge with a GIF.)

“To guarantee the rigour, transparency and social accountability of our work, we’ve developed a set of principles together with our Fellows, other academics and civil society. We welcome feedback on these and on the key ethical challenges we have identified. Please get in touch if you have any thoughts, ideas or contributions,” DeepMind adds in the blog.

The website for the ethics unit sets out five core principles it says will be underpinning its research. I’ve copy-pasted the principles below so you don’t have to go hunting through multiple link trees* to find them, given DeepMind does not include ‘Principles’ as a tab on the main page, so you really do have to go digging through its FAQ links to find them.

(If you do manage to find them, at the bottom of the page it also notes: “We welcome all feedback on our principles, and as a result we may add new commitments to this page over the coming months.”)

So here are those principles that DeepMind has lodged behind multiple links on its Ethics & Society website:

Social benefit
We believe AI should be developed in ways that serve the global social and environmental good, helping to build fairer and more equal societies. Our research will focus directly on ways in which AI can be used to improve people’s lives, placing their rights and well-being at its very heart.

Rigorous and evidence-based
Our technical research has long conformed to the highest academic standards, and we’re committed to maintaining these standards when studying the impact of AI on society. We will conduct intellectually rigorous, evidence-based research that explores the opportunities and challenges posed by these technologies. The academic tradition of peer review opens up research to critical feedback and is crucial for this kind of work.

Transparent and open
We will always be open about who we work with and what projects we fund. All of our research grants will be unrestricted and we will never attempt to influence or pre-determine the outcome of studies we commission. When we collaborate or co-publish with external researchers, we will disclose whether they have received funding from us. Any published academic papers produced by the Ethics & Society team will be made available through open access schemes.

Diverse and interdisciplinary
We will strive to involve the broadest possible range of voices in our work, bringing different disciplines together so as to include diverse viewpoints. We recognize that questions raised by AI extend well beyond the technical domain, and can only be answered if we make deliberate efforts to involve different sources of expertise and knowledge.

Collaborative and inclusive
We believe a technology that has the potential to impact all of society must be shaped by and accountable to all of society. We are therefore committed to supporting a range of public and academic dialogues about AI. By establishing ongoing collaboration between our researchers and the people affected by these new technologies, we seek to ensure that AI works for the benefit of all.

And here are some questions we’ve put to DeepMind in light of the launch of the ethics research unit. We’ll include responses when/if they reply:

  • Is DeepMind going to release the names of the people on its internal ethics board now? Or is it still withholding that information from the public?
  • If it will not be publishing the names, why not?
  • Does DeepMind see any contradiction in funding research into ethics of a technology it is also seeking to benefit from commercially?
  • How will impartiality be ensured given the research is being funded by DeepMind?
  • How many people are staffing the unit? Are any existing DeepMind staff joining the unit or is it being staffed with entirely new hires?
  • How were the fellows selected? Was there an open application process?
  • Will the ethics unit publish all the research it conducts? If not, how will it select which research is and is not published?
  • What’s the unit’s budget for funding research? Is this budget coming entirely from Alphabet? Are there any other financial backers?
  • How many pieces of research will the unit aim to publish per year? Is the intention to publish equally across the six key research themes?
  • Will all research published by the unit have been peer reviewed first?

*Someone should really count how many clicks it takes to extract all the information from DeepMind’s Ethics & Society website, which, per the DeepMind Health website design (and indeed the Google Privacy website), makes a point of snipping text up into smaller chunks and snippets and distributing this information inside boxes/subheadings that each have to be clicked open to get to the relevant information. Transparency? Looks rather a lot more like obfuscation of information to me, guys.

Featured Image: Oleksiy Maksymenko/Getty Images

News Source = techcrunch.com
