
National Health Service

Femtech hardware startup Elvie inks strategic partnership with UK’s NHS


Elvie, a femtech hardware startup whose first product is a sleek smart pelvic floor exerciser, has inked a strategic partnership with the UK’s National Health Service that will make the device available nationwide through the country’s free-at-the-point-of-use healthcare service, meaning at no direct cost to the patient.

It’s a major win for the startup, which was co-founded in 2013 by CEO Tania Boler and Jawbone founder Alexander Asseily with the aim of building smart technology that focuses on women’s issues — an overlooked and underserved category in the gadget space.

Boler’s background before starting Elvie (née Chiaro) includes working for the U.N. on global sex education curricula. But her interest in pelvic floor health, and the inspiration for starting the company, began after she had a baby herself and found there was more support in France than in the U.K. for women taking care of their bodies after giving birth.

With the NHS partnership, which is the startup’s first national reimbursement partnership (and therefore, as a spokeswoman puts it, has “the potential to be transformative” for the still young company), Elvie is emphasizing the opportunity for its connected tech to help reduce symptoms of urinary incontinence, including those suffered by new mums or in cases of stress-related urinary incontinence.

The Elvie kegel trainer is designed to make pelvic floor exercising fun and easy for women, with real-time feedback delivered via an app that also gamifies the activity, guiding users through exercises intended to strengthen their pelvic floor and thus help reduce urinary incontinence symptoms. The device can also alert users when they are contracting incorrectly.
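
Elvie has not published the signal processing behind that feedback, but the general shape of device-assisted training is simple to sketch: score each rep from the sensor reading and flag contractions that push down rather than lift. The snippet below is a minimal, hypothetical illustration of that idea (thresholds, units and function names are invented; this is not Elvie’s algorithm):

```python
# Hypothetical threshold-based biofeedback sketch; NOT Elvie's actual algorithm.
# Assumes a stream of pressure readings from the device, where a correct
# contraction raises pressure above a resting baseline and bearing down lowers it.

def classify_contraction(samples, baseline, lift_threshold=5.0, push_threshold=-3.0):
    """Return 'correct', 'incorrect' (pushing down) or 'none' for one exercise rep."""
    peak = max(samples) - baseline
    trough = min(samples) - baseline
    if trough < push_threshold:   # pressure dropped: user is bearing down
        return "incorrect"
    if peak > lift_threshold:     # pressure rose enough: a proper lift
        return "correct"
    return "none"

# Toy rep: pressure rises 8 units above a resting baseline of 20.
print(classify_contraction([20, 23, 28, 26, 21], baseline=20))  # -> "correct"
```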

Elvie cites research suggesting the NHS spends £233M annually on incontinence, and claims that around a third of women and up to 70% of expectant and new mums currently suffer from urinary incontinence. In 70% of stress urinary incontinence cases, it suggests, symptoms can be reduced or eliminated via pelvic floor muscle training.

And while no device is strictly necessary to perform the muscle contractions that strengthen the pelvic floor, the challenge the Elvie Trainer is intended to address is that it can be difficult for women to know whether they are performing the exercises correctly or effectively.

Elvie cites a 2004 study suggesting around a third of women can’t exercise their pelvic floor correctly with written or verbal instruction alone. It says biofeedback devices (generally, rather than the Elvie Trainer specifically) have been shown to increase success rates of pelvic floor training programmes by 10% — something other studies have suggested can lower surgery rates by 50% and reduce treatment costs by £424 per patient within the first year.

“Until now, biofeedback pelvic floor training devices have only been available through the NHS for at-home use on loan from the patient’s hospital, with patient allocation dependent upon demand. Elvie Trainer will be the first at-home biofeedback device available on the NHS for patients to keep, which will support long-term motivation,” it adds.

Commenting in a statement, Clare Pacey, a specialist women’s health physiotherapist at Kings College Hospital, said: “I am delighted that Elvie Trainer is now available via the NHS. Apart from the fact that it is a sleek, discreet and beautiful product, the app is simple to use and immediate visual feedback directly to your phone screen can be extremely rewarding and motivating. It helps to make pelvic floor rehabilitation fun, which is essential in order to be maintained.”

Elvie is not disclosing commercial details of the NHS partnership but a spokeswoman told us the main objective for this strategic partnership is to broaden access to Elvie Trainer, adding: “The wholesale pricing reflects that.”

Discussing the structure of the supply arrangement, she said Elvie is working with Eurosurgical as its delivery partner — a distributor she said has “decades of experience supplying products to the NHS”.

“The approach will vary by Trust, regarding whether a unit is ordered for a particular patient or whether a small stock will be held so a unit may be provided to a patient within the session in which the need is established. This process will be monitored and reviewed to determine the most efficient and economic distribution method for the NHS Supply Chain,” she added.

News Source = techcrunch.com

Kry bags $66M to launch its video-call-a-doctor service in more European markets


Swedish telehealth startup Kry has closed a $66 million Series B funding round led by Index Ventures, with participation from existing investors Accel, Creandum, and Project A.

It raised a $22.8M Series A round just over a year ago, bringing its total raised since being founded back in 2014 to around $92M.

The new funding will be put towards market expansion, with the UK and French markets its initial targets. The company also says it wants to deepen its penetration in its existing markets of Sweden, Norway and Spain, and to expand its medical offering so it can deliver more services via remote consultations.

A spokesperson for Kry also tells us it’s exploring different business models.

While the initial Kry offering requires patients to pay per video consultation, this may not be the best approach to scaling the business in a market like the UK, where healthcare is free at the point of use as a result of the taxpayer-funded National Health Service.

“Our goal is to offer our service to as many patients as possible. We are currently exploring different models to deliver our care and are in close discussions with different stakeholders, both public and private,” a spokesperson told us.

“Just as the business models will vary across Europe so will the price,” he added.

While consultations are conducted remotely, via the app’s video platform — with Kry’s pitch being tech-enabled convenience and increased accessibility to qualified healthcare professionals thanks to the app-based delivery of the service — the company specifies that doctors are always recruited locally in each market where it operates.

In terms of metrics, it says it’s had around 430,000 user registrations to date, and that some 400,000 “patient meetings” have been conducted so far (to be clear, that’s not unique users: it says some have been repeat consultations, and some of the 430k registrations are people who have not yet used the service).

Across its first three European markets it also says the service grew by 740% last year, and it claims it now accounts for more than 3% of all primary care doctor visits in Sweden — where it has more than 300 clinicians working in the service.

In March this year it also launched an online psychology service, and says it’s now the largest provider of CBT (cognitive behavioural therapy) treatments in Sweden.

Commenting on the funding in a statement, Martin Mignot, partner at Index Ventures, said: “Kry offers a unique opportunity to deliver a much improved healthcare to patients across Europe and reduce the overall costs associated with primary care. Kry has already become a household name in Sweden where regulators have seen first-hand how it benefits patients and allowed Kry to become an integral part of the public healthcare system. We are excited to be working with Johannes and his team to bring Kry to the rest of Europe.”

As well as the app being the conduit for a video consultation between doctor and patient, patients must also describe their symptoms in writing within the app, uploading relevant pictures and responding to symptom-specific questions.

During the video call with a Kry doctor, patients may also receive prescriptions for medication, advice, referral to a specialist, or lab or home tests with a follow-up appointment — with prescribed medication and home tests able to be delivered to the patient’s home within two hours, according to the startup.

“We have users from all age groups. Our oldest patient just turned 100 years old. One big user group is families with young children but we see that usage is becoming more even over different age groups,” adds the spokesman.

There are now a number of other startups seeking to scale businesses in the video-call-a-doctor telehealth space — such as Push Doctor, in the UK, and Doctor On Demand in the US, to name two.

News Source = techcrunch.com

UK report urges action to combat AI bias


The need for diverse development teams and truly representational data-sets to avoid biases being baked into AI algorithms is one of the core recommendations in a lengthy Lords committee report looking into the economic, ethical and social implications of artificial intelligence, and published today by the upper House of the UK parliament.

“The main ways to address these kinds of biases are to ensure that developers are drawn from diverse gender, ethnic and socio-economic backgrounds, and are aware of, and adhere to, ethical codes of conduct,” the committee writes, chiming with plenty of extant commentary around algorithmic accountability.

“It is essential that ethics take centre stage in AI’s development and use,” adds committee chairman, Lord Clement-Jones, in a statement. “The UK has a unique opportunity to shape AI positively for the public’s benefit and to lead the international community in AI’s ethical development, rather than passively accept its consequences.”

The report also calls for the government to take urgent steps to help foster “the creation of authoritative tools and systems for auditing and testing training datasets to ensure they are representative of diverse populations, and to ensure that when used to train AI systems they are unlikely to lead to prejudicial decisions” — recommending a publicly funded challenge to incentivize the development of technologies that can audit and interrogate AIs.

“The Centre for Data Ethics and Innovation, in consultation with the Alan Turing Institute, the Institute of Electrical and Electronics Engineers, the British Standards Institute and other expert bodies, should produce guidance on the requirement for AI systems to be intelligible,” the committee adds. “The AI development sector should seek to adopt such guidance and to agree upon standards relevant to the sectors within which they work, under the auspices of the AI Council” — the latter being a proposed industry body it wants established to help ensure “transparency in AI”.

The committee is also recommending a cross-sector AI Code to try to steer developments in a positive, societally beneficial direction — though not for this to be codified in law (the suggestion is it could “provide the basis for statutory regulation, if and when this is determined to be necessary”).

Among the five principles they’re suggesting as a starting point for the voluntary code are that AI should be developed for “the common good and benefit of humanity”, and that it should operate on “principles of intelligibility and fairness”.

Though, elsewhere in the report, the committee points out it can be a challenge for humans to understand decisions made by some AI technologies — going on to suggest it may be necessary to refrain from using certain AI techniques for certain types of use-cases, at least until algorithmic accountability can be guaranteed.

“We believe it is not acceptable to deploy any artificial intelligence system which could have a substantial impact on an individual’s life, unless it can generate a full and satisfactory explanation for the decisions it will take,” it writes in a section discussing ‘intelligible AI’. “In cases such as deep neural networks, where it is not yet possible to generate thorough explanations for the decisions that are made, this may mean delaying their deployment for particular uses until alternative solutions are found.”

A third principle the committee says it would like to see included in the proposed voluntary code is: “AI should not be used to diminish the data rights or privacy of individuals, families or communities”.

Though this is a curiously narrow definition — why not push for AI not to diminish rights, period?

“It’s almost as if ‘follow the law’ is too hard to say,” observes Sam Smith, a coordinator at patient data privacy advocacy group, medConfidential, discussing the report.

“Unlike other AI ‘ethics’ standards which seek to create something so weak no one opposes it, the existing standards and conventions of the rule of law are well known and well understood, and provide real and meaningful scrutiny of decisions, assuming an entity believes in the rule of law,” he adds.

Looking at the tech industry as a whole, it’s certainly hard to conclude that self-defined ‘ethics’ appear to offer much of a meaningful check on commercial players’ data processing and AI activities.

Topical case in point: Facebook has continued to claim there was nothing improper about the fact millions of people’s information was shared with professor Aleksandr Kogan. People “knowingly provided their information” is the company’s defensive claim.

Yet the vast majority of people whose personal data was harvested from Facebook by Kogan clearly had no idea what was possible under its platform terms — which, until 2015, allowed one user to ‘consent’ to the sharing of data on all their Facebook friends. (Hence ~270,000 downloaders of Kogan’s app being able to pass data on up to 87M Facebook users.)
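
The arithmetic of that amplification is worth spelling out: a relatively small pool of app installers, each ‘consenting’ on behalf of an entire friend list, quickly reaches tens of millions of people. A back-of-the-envelope check on the figures cited above (the totals come from the article; the averaging is ours):

```python
# Rough check of the amplification implied by the figures above.
installers = 270_000          # approximate downloaders of Kogan's app
affected_users = 87_000_000   # upper estimate of Facebook users whose data was shared

# On average, each installer's single 'consent' exposed data on roughly this many people:
print(round(affected_users / installers))  # ~322
```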

So Facebook’s self-defined ‘ethical code’ has been shown to be worthless — aligning completely with its commercial imperatives, rather than supporting users to protect their privacy. (Just as its T&Cs are intended to cover its own “rear end”, rather than clearly inform people about their rights, as one US congressman memorably put it last week.)

“A week after Facebook were criticized by the US Congress, the only reference to the Rule of Law in this report is about exempting companies from liability for breaking it,” Smith adds in a MedConfidential response statement to the Lords report. “Public bodies are required to follow the rule of law, and any tools sold to them must meet those legal obligations. This standard for the public sector will drive the creation of tools which can be reused by all.”

 

Health data “should not be shared lightly”

The committee, which took evidence from Google-owned DeepMind as one of a multitude of expert witnesses during more than half a year’s worth of enquiry, touches critically on the AI company’s existing partnerships with UK National Health Service Trusts.

The first of these, dating from 2015 — and involving the sharing of ~1.6 million patients’ medical records with the Google-owned company — ran into trouble with the UK’s data protection regulator. The UK’s information commissioner concluded last summer that the Royal Free NHS Trust’s agreement with DeepMind had not complied with UK data protection law.

Patients’ medical records were used by DeepMind to develop a clinical task management app wrapped around an existing NHS algorithm for detecting a condition known as acute kidney injury. The app, called Streams, has been rolled out for use in the Royal Free’s hospitals — complete with PR fanfare. But it’s still not clear what legal basis exists to share patients’ data.

“Maintaining public trust over the safe and secure use of their data is paramount to the successful widespread deployment of AI and there is no better exemplar of this than personal health data,” the committee warns. “There must be no repeat of the controversy which arose between the Royal Free London NHS Foundation Trust and DeepMind. If there is, the benefits of deploying AI in the NHS will not be adopted or its benefits realised, and innovation could be stifled.”

The report also criticizes the “current piecemeal” approach being taken by NHS Trusts to sharing data with AI developers — saying this risks “the inadvertent under-appreciation of the data” and “NHS Trusts exposing themselves to inadequate data sharing arrangements”.

“The data held by the NHS could be considered a unique source of value for the nation. It should not be shared lightly, but when it is, it should be done in a manner which allows for that value to be recouped,” the committee writes.

A similar point — about not allowing a huge store of potential value which is contained within publicly-funded NHS datasets to be cheaply asset-stripped by external forces — was made by Oxford University’s Sir John Bell in a UK government-commissioned industrial strategy review of the life sciences sector last summer.

Despite similar concerns, the committee also calls for a framework for sharing NHS data be published by the end of the year, and is pushing for NHS Trusts to digitize their current practices and records — with a target deadline of 2022 — in “consistent formats” so that people’s medical records can be made more accessible to AI developers.

But worryingly, given the general thrust towards making sensitive health data more accessible to third parties, the committee does not seem to have a very fine-grained grasp of data protection in a health context — where, for example, datasets can be extremely difficult to render truly anonymous given the level of detail typically involved.

Although they are at least calling for the relevant data protection and patient data bodies to be involved in provisioning the framework for sharing NHS data, alongside Trusts that have already worked with DeepMind (and in one case received an ICO wrist-slap).

They write:

We recommend that a framework for the sharing of NHS data should be prepared and published by the end of 2018 by NHS England (specifically NHS Digital) and the National Data Guardian for Health and Care, with the support of the ICO [information commissioner’s office] and the clinicians and NHS Trusts which already have experience of such arrangements (such as the Royal Free London and Moorfields Eye Hospital NHS Foundation Trusts), as well as the Caldicott Guardians [the NHS’ patient data advocates]. This framework should set out clearly the considerations needed when sharing patient data in an appropriately anonymised form, the precautions needed when doing so, and an awareness of the value of that data and how it is used. It must also take account of the need to ensure SME access to NHS data, and ensure that patients are made aware of the use of their data and given the option to opt out.

As the Facebook-Cambridge Analytica scandal has clearly illustrated, opt-outs alone cannot safeguard people’s data or their legal rights — which is why incoming EU data protection rules (GDPR) beef up consent requirements to require a clear affirmative. (And it goes without saying that opt-outs are especially concerning in a medical context where the data involved is so sensitive — yet, at least in the case of a DeepMind partnership with Taunton and Somerset NHS Trust, patients do not even appear to have been given the ability to say no to their data being processed.)

Opt-outs (i.e. rather than opt-in systems) for data-sharing and self-defined/voluntary codes of ‘ethics’ demonstrably do very little to protect people’s legal rights where digital data is concerned — even if it’s true, for example, that Facebook holds itself in check vs what it could theoretically do with data, as company execs have suggested (one wonders what kind of stuff they’re voluntarily refraining from, given what they have been caught trying to manipulate).

The wider risk of relying on consumer savvy to regulate commercial data sharing is that an educated, technologically aware few might be able to lock down — or reduce — access to their information; but the mainstream majority will have no clue they need to or even how it’s possible. And data protection for a select elite doesn’t sound very equitable.

Meanwhile, at least where this committee’s attitude to AI is concerned, developers and commercial entities are being treated with favorable encouragement — via the notion of a voluntary (and really pretty basic) code of AI ethics — rather than being robustly reminded they need to follow the law.

Given the scope and scale of current AI-fueled scandals, that risks the committee looking naive.

The government has, though, made AI a strategic priority, and policies to foster and accelerate data-sharing to drive tech developments are a key part of its digital and industrial strategies. So the report needs to be read within that wider context.

The committee does add its voice to questions about whether/how legal liability will mesh with automated decision making — writing that “clarity is required” on whether “new mechanisms for legal liability and redress” are needed or not.

“We recommend that the Law Commission consider the adequacy of existing legislation to address the legal liability issues of AI and, where appropriate, recommend to Government appropriate remedies to ensure that the law is clear in this area,” it says on this. “At the very least, this work should establish clear principles for accountability and intelligibility. This work should be completed as soon as possible.”

But this isn’t exactly cutting edge commentary. Last month the government announced a three-year regulatory review focused on self-driving cars and the law, for instance. And the liability point is already generally well-aired — and in the autonomous cars case, at least, now having its tires extensively kicked in the UK.

What’s less specifically discussed in government circles is how AIs are demonstrably piling pressure on existing laws. And what — if anything — should be done to address those kind of AI-fueled breaking points. (Exceptions: Terrorist content spreading via online platforms has been decried for some years, with government ministers more than happy to make platforms and technologies their scapegoat and even toughen laws; more recently hate speech on online platforms has also become a major political target for governments in Europe.)

The committee briefly touches on some of these societal pressure points in a section on AI’s impact on “social and political cohesion”, noting concerns raised to it about issues such as filter bubbles and the risk of AIs being used to manipulate elections. “[T]here is a rapidly growing need for public understanding of, and engagement with, AI to develop alongside the technology itself. The manipulation of data in particular will be a key area for public understanding and discussion in the coming months and years,” it writes here. 

However it has little in the way of gunpowder — merely recommending that research is commissioned into “the possible impact of AI on conventional and social media outlets”, and to investigate “measures which might counteract the use of AI to mislead or distort public opinion as a matter of urgency”.

Elsewhere in the report, it also raises an interesting concern about data monopolies — noting that investments by “large overseas technology companies in the UK economy” are “increasing consolidation of power and influence by a select few”, which it argues risks damaging the UK’s home-grown AI start-up sector.

But again there’s not much of substance in its response. The committee doesn’t seem to have formed its own ideas on how, or even whether, the government needs to address data concentrating power in the hands of big tech — beyond calling for “strong” competition frameworks. This lack of conviction is attributed to hearing mixed messages on the topic from its witnesses. (Though it may well also be related to the economic portion of the enquiry’s focus.)

“The monopolisation of data demonstrates the need for strong ethical, data protection and competition frameworks in the UK, and for continued vigilance from the regulators,” it concludes. “We urge the Government, and the Competition and Markets Authority, to review proactively the use and potential monopolisation of data by the big technology companies operating in the UK.”

The report also raises concerns about access to funding for UK AI startups to ensure they can continue scaling domestic businesses — recommending that a chunk of the £2.5BN investment fund at the British Business Bank, which the government announced in the Autumn Budget 2017, is “reserved as an AI growth fund for SMEs with a substantive AI component, and be specifically targeted at enabling such companies to scale up”.

No one who supports the startup cause would argue with trying to make more money available. But if data access has been sealed up by tech giants, all the scale-up funding in the world won’t help domestic AI startups break through that algorithmic ceiling.

Also touched on: The looming impact of Brexit, with the committee calling on the government to “commit to underwriting, and where necessary replacing, funding for European research and innovation programmes, after we have left the European Union”. Which boils down to another whistle in a now very long score of calls for replacement funding after the UK leaves the EU.

Funding for regulators is another concern, with a warning that the ICO must be “adequately and sustainably resourced” — as a result of the additional burden the committee expects AI to put on existing regulators.

This issue is also on the radar of the UK’s digital minister, Matt Hancock, who has said he’s considering what additional resources the ICO might need — such as the power to compel testimony from individuals. (Though the ICO itself has previously raised concerns that the minister and his data protection bill are risking undermining her authority.) For now it remains to be seen how well armed the agency will be to meet the myriad challenges generated and scaled by AI’s data processors.

“Blanket AI-specific regulation, at this stage, would be inappropriate,” the report adds. “We believe that existing sector-specific regulators are best placed to consider the impact on their sectors of any subsequent regulation which may be needed. We welcome that the Data Protection Bill and GDPR appear to address many of the concerns of our witnesses regarding the handling of personal data, which is key to the development of AI. The Government Office for AI, with the Centre for Data Ethics and Innovation, needs to identify the gaps, if any, where existing regulation may not be adequate. The Government Office for AI must also ensure that the existing regulators’ expertise is utilised in informing any potential regulation that may be required in the future.”

The committee’s last two starter principles for their voluntary AI code serve to underline how generously low the ethical bar is really being set here — boiling down to: AI shouldn’t be allowed to kill off free schools for our kids, nor be allowed to kill us — which may itself be another consequence of humans not always being able to clearly determine how AI does what it does or exactly what it might be doing to us.

News Source = techcrunch.com

Documents detail DeepMind’s plan to apply AI to NHS data in 2015


More details have emerged about a controversial 2015 patient data-sharing arrangement between Google DeepMind and a UK National Health Service Trust, and they paint a contrasting picture to the pair’s public narrative about their intended use of 1.6 million citizens’ medical records.

DeepMind and the Royal Free NHS Trust signed their initial information sharing agreement (ISA) in September 2015 — ostensibly to co-develop a clinical task management app, called Streams, for early detection of an acute kidney condition using an NHS algorithm.

Patients whose fully identifiable medical records were being shared with the Google-owned company were neither asked for their consent nor informed their data was being handed to the commercial entity.

Indeed, the arrangement was only announced to the public five months after it was inked — and months after patient data had already started to flow.

And it was only fleshed out in any real detail after a New Scientist journalist obtained and published the ISA between the pair, in April 2016 — revealing for the first time, via a Freedom of Information request, quite how much medical data was being shared for an app that targets a single condition.

This led to an investigation being opened by the UK’s data protection watchdog into the legality of the arrangement. And as public pressure mounted over the scope and intentions behind the medical records collaboration, the pair stuck to their line that patient data was not being used for training artificial intelligence.

They also claimed they did not need to seek patient consent for their medical records to be shared because the resulting app would be used for direct patient care — a claimed legal basis that has since been demolished by the ICO, which concluded a more than year-long investigation in July.

However a series of newly released documents shows that applying AI to the patient data was in fact a goal for DeepMind right from the earliest months of its partnership with the Royal Free — with its intention being to utilize the wide-ranging access to and control of publicly-funded medical data it was being granted by the Trust to simultaneously develop its own AI models.

In a FAQ note on its website when it publicly announced the collaboration, in February 2016, DeepMind wrote: “No, artificial intelligence is not part of the early-stage pilots we’re announcing today. It’s too early to determine where AI could be applied here, but it’s certainly something we are excited about for the future.”

Omitted from that description of its plans was the fact it had already received a favorable ethical opinion from an NHS Health Research Authority research ethics committee to run a two-year AI research study on the same underlying NHS patient data.

DeepMind’s intent was always to apply AI

The newly released documents, obtained via an FOI filed by health data privacy advocacy organization medConfidential, show DeepMind made an ethics application for an AI research project using Royal Free patient data in October 2015 — with the stated aim of “using machine learning to improve prediction of acute kidney injury and general patient deterioration”.

Earlier still, in May 2015, the company gained confirmation from an insurer to cover its potential liability for the research project — which it subsequently notes having in place in its project application.

And the NHS ethics board granted DeepMind’s AI research project application in November 2015 — with the two-year AI research project scheduled to start in December 2015 and run until December 2017.

A brief outline of the approved research project was previously published on the Health Research Authority’s website, per its standard protocol, but the FOI reveals more details about the scope of the study — which is summarized in DeepMind’s application as follows:

By combining classical statistical methodology and cutting-edge machine learning algorithms (e.g. ‘unsupervised and semi-supervised learning’), this research project will create improved techniques of data analysis and prediction of who may get AKI [acute kidney injury], more accurately identify cases when they occur, and better alert doctors to their presence.

DeepMind’s application claimed that the existing NHS algorithm, which it was deploying via the Streams app, “appears” to be missing and misclassifying some cases of AKI, and generating false positives — and goes on to suggest: “The problem is not with the tool which DeepMind have made, but with the algorithm itself. We think we can overcome these problems, and create a system which works better.”
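
For context, ‘missing cases’, ‘misclassifying’ and ‘false positives’ are the standard error modes used to judge an alerting rule like the NHS AKI algorithm: sensitivity captures how many real cases are caught, precision how many alerts are genuine. A minimal illustration of how such an evaluation is scored against labelled outcomes (the data is invented; neither the NHS algorithm nor DeepMind’s evaluation code is public):

```python
# Illustrative scoring of an AKI alerting rule against known outcomes (made-up data).
# 1 = AKI present / alert fired, 0 = no AKI / no alert.
alerts   = [1, 0, 1, 1, 0, 0, 1, 0]
outcomes = [1, 0, 0, 1, 1, 0, 1, 0]

tp = sum(a and o for a, o in zip(alerts, outcomes))        # correctly flagged cases
fp = sum(a and not o for a, o in zip(alerts, outcomes))    # false positives
fn = sum(o and not a for a, o in zip(alerts, outcomes))    # missed cases

sensitivity = tp / (tp + fn)   # share of real AKI cases the rule catches
precision   = tp / (tp + fp)   # share of alerts that are genuine
print(f"sensitivity={sensitivity:.2f}, precision={precision:.2f}")
```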

Although at the time it wrote this application, in October 2015, user tests of the Streams app had not yet begun — so it’s unclear how DeepMind could so confidently assert there was no “problem” with a tool it hadn’t yet tested. But presumably it was attempting to convey information about (what it claimed were) “major limitations” with the working of the NHS’ national AKI algorithm passed on to it by the Royal Free.

(For the record: In an FOI response that TechCrunch received back from the Royal Free in August 2016, the Trust told us that the first Streams user tests were carried out on 12-14 December 2015. It further confirmed: “The application has not been implemented outside of the controlled user tests.”)

Most interestingly, DeepMind’s AI research application shows it told the NHS ethics board that it could process NHS data for the study under “existing information sharing agreements” with the Royal Free.

“DeepMind acting as a data processor, under existing information sharing agreements with the responsible care organisations (in this case the Royal Free Hospitals NHS Trust), and providing existing services on identifiable patient data, will identify and anonymize the relevant records,” the Google division wrote in the research application.

The fact that DeepMind had taken active steps to gain approval for AI research on the Royal Free patient data as far back as fall 2015 flies in the face of all the subsequent assertions made by the pair to the press and public — when they claimed the Royal Free data was not being used to train AI models.

For instance, here’s what this publication was told in May last year, after the scope of the data being shared by the Trust with DeepMind had just emerged (emphasis mine):

DeepMind confirmed it is not, at this point, performing any machine learning/AI processing on the data it is receiving, although the company has clearly indicated it would like to do so in future. A note on its website pertaining to this ambition reads: “[A]rtificial intelligence is not part of the early-stage pilots we’re announcing today. It’s too early to determine where AI could be applied here, but it’s certainly something we are excited about for the future.”

The Royal Free spokesman said it is not possible, under the current data-sharing agreement between the trust and DeepMind, for the company to apply AI technology to these data-sets and data streams.

That type of processing of the data would require another agreement, he confirmed.

“The only thing this data is for is direct patient care,” he added. “It is not being used for research, or anything like that.”

As the FOI makes clear, and contrary to the Royal Free spokesman’s claim, DeepMind had in fact been granted ethical approval by the NHS Health Research Authority in November 2015 to conduct AI research on the Royal Free patient data-set — with DeepMind in control of selecting and anonymizing the PID (patient identifiable data) intended for this purpose.

Conducting research on medical data would clearly not constitute an act of direct patient care — which was the legal basis DeepMind and the Royal Free were at the time claiming for their reliance on implied consent of NHS patients to their data being shared. So, in seeking to paper over the erupting controversy about how many patients’ medical records had been shared without their knowledge or consent, it appears the pair felt the need to publicly de-emphasize their parallel AI research intentions for the data.

“If you have been given data, and then anonymise it to do research on, it’s disingenuous to claim you’re not using the data for research,” said Dr Eerke Boiten, a cyber security professor at De Montfort University whose research interests encompass data privacy and ethics, when asked for his view on the pair’s modus operandi here.

“And [DeepMind] as computer scientists, some of them with a Ross Anderson pedigree, they should know better than to believe in ‘anonymised medical data’,” he added — a reference to how trivially easy it has been shown to be for sensitive medical data to be re-identified once it’s handed over to third parties who can triangulate identities using all sorts of other data holdings.
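
The re-identification risk Boiten is pointing to is usually demonstrated with a linkage attack: records stripped of names can still be matched against another dataset on quasi-identifiers such as age, sex and postcode. A toy illustration (all data below is invented and has nothing to do with any NHS dataset):

```python
# Toy linkage attack: join 'anonymised' medical records to an outside dataset on
# quasi-identifiers. Every value here is invented for illustration only.
anonymised = [
    {"age": 44, "sex": "F", "postcode": "NW3", "diagnosis": "acute kidney injury"},
    {"age": 67, "sex": "M", "postcode": "E1",  "diagnosis": "diabetes"},
]
public = [
    {"name": "Jane Doe",   "age": 44, "sex": "F", "postcode": "NW3"},
    {"name": "John Smith", "age": 67, "sex": "M", "postcode": "E1"},
]

for record in anonymised:
    matches = [p for p in public
               if (p["age"], p["sex"], p["postcode"]) ==
                  (record["age"], record["sex"], record["postcode"])]
    if len(matches) == 1:  # a unique match re-identifies the 'anonymous' patient
        print(matches[0]["name"], "->", record["diagnosis"])
```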

Also commenting on what the documents reveal, Phil Booth, coordinator of medConfidential, told us: “What this shows is that Google ignored the rules. The people involved have repeatedly claimed ignorance, as if they couldn’t use a search engine. Now it appears they were very clear indeed about all the rules and contractual arrangements; they just deliberately chose not to follow them.”

Asked to respond to criticism that it has deliberately ignored NHS’ information governance rules, a DeepMind spokeswoman said the AI research being referred to “has not taken place”.

“To be clear, no research project has taken place and no AI has been applied to that dataset. We have always said that we would like to undertake research in future, but the work we are delivering for the Royal Free is solely what has been said all along — delivering Streams,” she added.

She also pointed to a blog post the company published this summer after the ICO ruled that the 2015 ISA with the Royal Free had broken UK data protection laws — in which DeepMind admits it “underestimated the complexity of NHS rules around patient data” and failed to adequately listen and “be accountable to and [be] shaped by patients, the public and the NHS as a whole”.

“We made a mistake in not publicising our work when it first began in 2015, so we’ve proactively announced and published the contracts for our subsequent NHS partnerships,” it wrote in July.

“We do not foresee any major ethical… issues”

In one of the sections of DeepMind’s November 2015 AI research study application form, which asks for “a summary of the main ethical, legal or management issues arising from the research project”, the company writes: “We do not foresee any major ethical, legal or management issues.”

Clearly, with hindsight, the data-sharing partnership would quickly run into major ethical and legal problems. So that’s a pretty major failure of foresight by the world’s most famous AI-building entity. (Albeit, it’s worth noting that the rest of a fuller response in this section has been entirely redacted — but presumably DeepMind is discussing what it considers lesser issues here.)

The application also reveals that the company intended not to register the AI research in a public database — bizarrely claiming that “no appropriate database exists for work such as this”.

In this section the application form includes the following guidance note for applicants: “Registration of research studies is encouraged wherever possible”, and goes on to suggest various possible options for registering a study — such as via a partner NHS organisation; in a register run by a medical research charity; or via publishing through an open access publisher.

DeepMind makes no additional comment on any of these suggestions.

When we asked the company why it had not intended to register the AI research the spokeswoman reiterated that “no research project has taken place”, and added: “A description of the initial HRA [Health Research Authority] application is publicly available on the HRA website.”

Evidently the company — whose parent entity Google’s corporate mission statement claims it wants to ‘organize the world’s information’ — was in no rush to more widely distribute its plans for applying AI to NHS data at this stage.

Details of the size of the study have also been redacted in the FOI response so it’s not possible to ascertain how many of the 1.6M medical records DeepMind intended to use for the AI research, although the document does confirm that children’s medical records would be included in the study.

The application confirms that Royal Free NHS patients who have previously opted out of their data being used for any medical research would be excluded from the AI study (as would be required by UK law).

As noted above, DeepMind’s application also specifies that the company would be both handling fully identifiable patient data from the Royal Free, for the purposes of developing the clinical task management app Streams, and also identifying and anonymizing a sub-set of this data to run its AI research.

This could well raise additional questions over whether the level of control DeepMind was being afforded by the Trust over patients’ data is appropriate for an entity that is described as occupying the secondary role of data processor — vs the Royal Free claiming it remains the data controller.

“A data processor does not determine the purpose of processing — a data controller does,” said Boiten, commenting on this point. “‘Doing AI research’ is too aspecific as a purpose, so I find it impossible to view DeepMind as only a data processor in this scenario,” he added.

One thing is clear: When the DeepMind-Royal Free collaboration was publicly revealed with much fanfare, the fact they had already applied for and been granted ethical approval to perform AI research on the same patient data-set was not — in their view — a consideration they deemed merited detailed public discussion. Which is a huge miscalculation when you’re trying to win the public’s trust for the sharing of their most sensitive personal data.

Asked why it had not informed the press or the public about the existence and status of the research project at the time, a DeepMind spokeswoman failed to directly respond to the question — instead she reiterated that: “No research is underway.”

DeepMind and the Royal Free both claim that, despite receiving a favorable ethical opinion on the AI research application in November 2015 from the NHS ethics committee, additional approvals would have been required before the AI research could have gone ahead.

“A favourable opinion from a research ethics committee does not constitute full approval. This work could not take place without further approvals,” the DeepMind spokeswoman told us.

“The AKI research application has initial ethical approval from the national research ethics service within the Health Research Authority (HRA), as noted on the HRA website. However, DeepMind does not have the next step of approval required to proceed with the study — namely full HRA approval (previously called local R&D approval).

“In addition, before any research could be done, DeepMind and the Royal Free would also need a research collaboration agreement,” she added.

The HRA’s letter to DeepMind confirming its favorable opinion on the study does indeed note:

Management permission or approval must be obtained from each host organisation prior to the start of the study at the site concerned.

Management permission (“R&D approval”) should be sought from all NHS organisations involved in the study in accordance with NHS research governance arrangements

However since the proposed study was to be conducted purely on a database of patient data, rather than at any NHS locations, and given that the Royal Free already had an information-sharing arrangement inked in place with DeepMind, it’s not clear exactly what additional external approvals they were awaiting.

The original (now defunct and ICO-sanctioned) ISA between the pair does include a paragraph granting DeepMind the ability to anonymize the Royal Free patient data-set “for research” purposes. And although this clause lists several bodies, one of which it says would also need to approve any projects under “formal research ethics”, the aforementioned HRA (“the National Research Ethics Service”) is included in this list.

So again, it’s not clear whose rubberstamp they would still have required.

The value of transparency

At the same time, it’s clear that transparency is a preferred principle of medical research ethics — hence the NHS encouraging those filling in research applications to publicly register their studies.

A UK government-commissioned life science strategy review, published this week, also emphasizes the importance of transparency in engendering and sustaining public trust in health research projects — arguing it’s an essential component for furthering the march of digital innovation.

The same review also recommends that the UK government and the NHS take ownership of training health AIs off of taxpayer funded health data-sets — exactly to avoid corporate entities coming in and asset-stripping potential future medical insights.

(“Most of the value is the data,” asserts review author Sir John Bell, an Oxford University professor of medicine. Data that, in DeepMind’s case, has so far been freely handed over by multiple NHS organizations — in June, for example, it emerged that another NHS Trust which has inked a five-year data-sharing deal with DeepMind, Taunton & Somerset, is not paying the company for the duration of the contract, unless (in the unlikely eventuality) the service support exceeds £15,000 a month. So essentially DeepMind is being ‘paid’ with access to NHS patients’ data.)

Even before the ICO’s damning verdict, the original ISA between DeepMind and the Royal Free had been extensively criticized for lacking robust legal and ethical safeguards on how patient data could be used. (Even as DeepMind’s co-founder Mustafa Suleyman tried to brush off criticism, saying negative headlines were the result of “a group with a particular view to peddle“.)

But after the original controversy flared the pair subsequently scrapped the agreement and replaced it, in November 2016, with a second data-sharing contract which included some additional information governance concessions — while also continuing to share largely the same quantity and types of identifiable Royal Free patient data as before.

Then this July, as noted earlier, the ICO ruled that the original ISA had indeed breached UK privacy law. “Patients would not have reasonably expected their information to have been used in this way, and the Trust could and should have been far more transparent with patients as to what was happening,” it stated in its decision.

The ICO also said it had asked the Trust to commit to making changes to address the shortcomings that the regulator had identified.

In a statement on its website the Trust said it accepted the findings and claimed to have “already made good progress to address the areas where they have concerns”, and to be “doing much more to keep our patients informed about how their data is used”.

“We would like to reassure patients that their information has been in our control at all times and has never been used for anything other than delivering patient care or ensuring their safety,” the Royal Free’s July statement added.

Responding to questions put to it for this report, the Royal Free Hospitals NHS Trust confirmed it was aware of and involved with the 2015 DeepMind AI research study application.

“To be clear, the application was for research on de-personalised data and not the personally identifiable data used in providing Streams,” said a spokeswoman.

“No research project has begun, and it could not begin without further approvals. It is worth noting that fully approved research projects involving de-personalised data generally do not require patient consent,” she added.

At the time of writing the spokeswoman had not responded to follow-up questions asking why, in 2016, it had made such explicit public denials about its patient data being used for AI research, and why it chose not to make public the existing application to conduct AI research at that time — or indeed, at an earlier time.

Another curious facet to this saga involves the group of “independent reviewers” that Suleyman announced the company had signed up in July 2016 to — as he put it — “examine our work and publish their findings”.

His intent was clearly to try to reset public perceptions of the DeepMind Health initiative after a bumpy start for transparency, consent, information governance and regulatory best practice — with the wider hope of boosting public trust in what an ad giant wanted with people’s medical data by allowing some external eyeballs to roll in and poke around.

What’s curious is that the reviewers make no reference to DeepMind’s AI research study intentions for the Royal Free data-set in their first report — also published this July.

We reached out to the chair of the group, former MP Julian Huppert, to ask whether DeepMind informed the group it was intending to undertake AI research on the same data-set.

Huppert confirmed to us that the group had been aware there was “consideration” of an AI research project using the Royal Free data at the time it was working on its report, but claimed he does not “recall exactly” when the project was first mentioned or by whom.

“Both the application and the decision not to go ahead happened before the panel was formed,” he said, by way of explanation for the memory lapse.

Asked why the panel did not think the project worth mentioning in its first annual report, he told TechCrunch: “We were more concerned with looking at work that DMH had done and were planning to do, than things that they had decided not to go ahead with.”

“I understand that no work was ever done on it. If this project were to be taken forward, there would be many more regulatory steps, which we would want to look at,” he added.

In their report the independent reviewers do flag up some issues of concern regarding DeepMind Health’s operations — including potential security vulnerabilities around the company’s handling of health data.

For example, a datacenter server build review report, conducted by an external auditor looking at part of DeepMind Health’s critical infrastructure on behalf of the external reviewers, identified what it judged a “medium risk vulnerability” — noting that: “A large number of files are present which can be overwritten by any user on the reviewed servers.”

“This could allow a malicious user to modify or replace existing files to insert malicious content, which would allow attacks to be conducted against the servers storing the files,” the auditor added.

Asked how DeepMind Health will work to regain NHS patients’ trust in light of such a string of transparency and regulatory failures to-date, the spokeswoman provided the following statement: “Over the past eighteen months we’ve done a lot to try to set a higher standard of transparency, appointing a panel of Independent Reviewers who scrutinise our work, embarking on a patient involvement program, proactively publishing NHS contracts, and building tools to enable better audits of how data is used to support care. In our recently signed partnership with Taunton and Somerset NHS Trust, for example, we committed to supporting public engagement activity before any patient data is transferred for processing. And at our recent consultation events in London and Manchester, patients provided feedback on DeepMind Health’s work.”

Asked whether it had informed the independent reviewers about the existence of the AI research application, the spokeswoman declined to respond directly. Instead she repeated the prior line that: “No research project is underway.”

News Source = techcrunch.com

Building health AIs should be UK ambition, says strategy review


A wide-ranging, UK government-commissioned industrial strategy review of the life sciences sector, conducted by Oxford University’s Sir John Bell, has underlined the value locked up in publicly-funded data held by the country’s National Health Service — and called for a new regulatory framework to be established in order to “capture for the UK the value in algorithms generated using NHS data”.

The NHS is a free-at-the-point-of-use national health service covering some 65 million users — which gives you an idea of the unique depth and granularity of the patient data it holds.

And how much potential value could therefore be created for the nation by utilizing patient data-sets to develop machine learning algorithms for medical diagnosis and tracking.

“AI is likely to be used widely in healthcare and it should be the ambition for the UK to develop and test integrated AI systems that provide real-time data better than human monitoring and prediction of a wide range of patient outcomes in conditions such as mental health, cancer and inflammatory disease,” writes Bell in the report.

His recommendation for the government and the NHS to be pro-active about creating and capturing AI-enabled value off of valuable, taxpayer-funded health data-sets comes hard on the heels of the conclusion of a lengthy investigation by the UK’s data protection watchdog, the ICO, into a controversial 2015 data-sharing arrangement between Google-DeepMind and a London-based NHS Trust, the Royal Free Hospitals Trust, to co-develop a clinical task management app.

In July the ICO concluded that the arrangement — DeepMind’s first with an NHS Trust — breached UK privacy law, saying the ~1.6M NHS patients whose full medical records are being shared with the Google-owned company (without their consent) could not have “reasonably expected” their information to be used in this way.

And while the initial application the pair have co-developed does not involve applying machine learning algorithms to NHS data, a wider memorandum of understanding between them sets out their intention to do just that within five years.

Meanwhile, DeepMind has also inked additional data-sharing arrangements with other NHS Trusts that do already entail AI-based research — such as a July 2016 research partnership with Moorfields Eye Hospital that’s aiming to investigate whether machine learning algorithms can automate the analysis of digital eye scans to diagnose two eye conditions.

In that instance DeepMind is getting free access to one million “anonymized” eye scans to try to develop diagnosis AI models.
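
The Moorfields work follows the standard supervised-learning pattern: scans labelled by clinicians are used to fit a model that predicts a diagnosis for unseen scans. DeepMind’s actual models are not public; the sketch below shows only the general pattern, using random arrays as stand-in “scans” and scikit-learn as a deliberately simple, assumed toolkit (DeepMind uses its own deep-learning stack):

```python
# Generic supervised-learning pattern for scan classification; illustration only,
# not DeepMind's model. Random arrays stand in for labelled eye scans.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
scans = rng.random((200, 32 * 32))       # 200 fake "scans", flattened to pixel vectors
labels = rng.integers(0, 2, size=200)    # 0 = healthy, 1 = condition present (made up)

X_train, X_test, y_train, y_test = train_test_split(scans, labels, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))  # ~0.5 on random data, as expected
```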

The company has committed to publishing the results of the research but any AI models it develops — trained off of the NHS data-set — are unlikely to be handed back freely to the public sector.

Rather, the company’s stated aim for its health-based AI ambitions is to create commercial IP, via multiple research partnerships with NHS organizations — positioning itself to sell trained AI models as a future software-based service to healthcare organizations at whatever price it deems appropriate.

This is exactly the sort of data-enabled algorithmic value that Bell is urging the UK government to be pro-active about capturing for the country — by establishing a regulatory framework that positions the NHS (and the UK’s citizens who fund it) to benefit from data-based AI insights generated off of its vast data holdings, instead of allowing large commercial entities to push in and asset strip these taxpayer funded assets.

“[E]xisting data access agreements in the UK for algorithm development have currently been completed at a local level with mainly large companies and may not share the rewards fairly, given the essential nature of NHS patient data to developing algorithms,” warns Bell.

“There is an opportunity for defining a clear framework to better realise the true value for the NHS of the data at a national level, as currently agreements made locally may not share the benefit with other regions,” he adds.

In an interview with the Guardian newspaper he is asked directly for his views on DeepMind’s collaboration with the Royal Free NHS Trust — and describes it as the “canary in the coalmine”.

“I heard that story and thought ‘Hang on a minute, who’s going to profit from that?’” he is quoted as saying. “What Google’s doing in [other sectors], we’ve got an equivalent unique position in the health space. Most of the value is the data. The worst thing we could do is give it away for free.”

“What you don’t want is somebody rocking up and using NHS data as a learning set for the generation of algorithms and then moving the algorithm to San Francisco and selling it so all the profits come back to another jurisdiction,” Bell also told the newspaper.

In his report, Bell also highlights the unpreparedness of “current or planned” regulations to provide a framework to “account for machine learning algorithms that update with new data” — pointing out, for example, that: “Currently algorithms making medical claims are regulated as medical devices.”

And again, in 2016 DeepMind suspended testing of the Streams app it had co-developed with the Royal Free NHS Trust after it emerged the pair had failed to register this software as a medical device with the MHRA prior to trialling it in the hospitals.

Bell suggests that a better approach for testing healthcare software and algorithms could involve sandboxed access and use of dummy data — rather than testing with live patient data, as DeepMind and the Royal Free were.

“One approach to this may be in the development of ‘sandbox’ access to deidentified or synthetic data from providers such as NHS Digital, where innovators could safely develop algorithms and trial new regulatory approaches for all product types,” he writes.
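
In practice, Bell’s “sandbox” amounts to letting developers build and test against synthetic or de-identified records that mimic the shape of real NHS data without exposing real patients. A rough sketch of generating such stand-in records (field names, value ranges and the SYN- identifier scheme are illustrative assumptions, not an NHS Digital specification):

```python
# Illustrative generator of synthetic patient records for sandboxed development.
# Field names and distributions are invented; this is not an NHS Digital schema.
import random

def synthetic_patient(patient_id):
    return {
        "patient_id": f"SYN-{patient_id:06d}",               # maps to no real person
        "age": random.randint(18, 95),
        "sex": random.choice(["F", "M"]),
        "creatinine_umol_l": round(random.gauss(80, 25), 1), # lab value used in AKI detection
    }

sandbox_dataset = [synthetic_patient(i) for i in range(1000)]
print(sandbox_dataset[0])
```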

In the report Bell also emphasizes the importance of transparency in winning public trust to further the progress of research which utilizes publicly funded health data-sets.

“Many more people support than oppose health data being used by commercial organisations undertaking health research, but it is also clear that strong patient and clinician engagement and involvement, alongside clear permissions and controls, are vital to the success of any health data initiative,” he writes.

“This should take place as part of a wider national conversation with the public enabling a true understanding of data usage in as much detail as they wish, including clear information on who can access data and for what purposes. This conversation should also provide full information on how health data is vital to improving health, care and services through research.”

He also calls for the UK’s health care system to “set out clear and consistent national approaches to data and interoperability standards and requirements for data access agreements” in order to help reduce response time across all data providers, writing: “Currently, arranging linkage and access to national-level datasets used for research can require multiple applications and access agreements with unclear timelines. This can cause delays to data access enabling both research and direct care.”

Other NHS-related recommendations in the report include a call to end handwritten prescriptions and make eprescribing mandatory for hospitals; the creation of a forum for researchers across academia, charities and industry to engage with all national health data programs; and the creation of between two and five digital innovation hubs to provide data across regions of three to five million people with the aim of accelerating research access to meaningful national datasets.


News Source = techcrunch.com
