
Siilo injects $5.1M to try to transplant WhatsApp use in hospitals


Consumer messaging apps like WhatsApp are not only insanely popular for chatting with friends but have pushed deep into the workplace too, thanks to the speed and convenience they offer. They have even crept into hospitals, as time-strapped doctors reach for a quick and easy way to collaborate over patient cases on the ward.

Yet WhatsApp is not specifically designed with the safe sharing of highly sensitive medical information in mind. This is where Dutch startup Siilo has been carving a niche for itself for the past 2.5 years — via a free-at-the-point-of-use encrypted messaging app intended for medical professionals to securely collaborate on patient care, such as via in-app discussion groups and the ability to securely store and share patient notes.

It’s a business goal that could be buoyed by tighter EU regulations around handling personal data — say, if hospital managers decide they need to address compliance risks around staff use of consumer messaging apps.

The app’s WhatsApp-style messaging interface will be instantly familiar to any smartphone user. But Siilo bakes in additional features for its target healthcare professional users, such as keeping photos, videos and files sent via the app siloed in an encrypted vault that’s entirely separate from any personal media also stored on the device.

Messages sent via Siilo are also automatically deleted after 30 days unless the user specifies a particular message should be retained. And the app does not make automated back-ups of users’ conversations.
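Siilo hasn’t detailed how that expiry mechanism is implemented, but — as a minimal sketch, with invented message fields — the retention rule it describes amounts to something like this:

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)

def purge_expired(messages, now=None):
    """Keep a message only if the user flagged it for retention
    or it is still inside the 30-day window."""
    now = now or datetime.now(timezone.utc)
    return [m for m in messages
            if m["retained"] or now - m["sent_at"] < RETENTION]
```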

Other doctor-friendly features include the ability to blur images (for patient privacy purposes); augment images with arrows for emphasis; and export threaded conversations to electronic health records.

There’s also mandatory security for accessing the app — either a PIN code, fingerprint or facial recognition biometric is required. And remote wipe functionality to nix any locally stored data is baked into Siilo in the event of a device being lost or stolen.

Like WhatsApp, Siilo also uses end-to-end encryption — though in its case it says this is based on the open-source NaCl library.
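Siilo hasn’t published its full protocol, so the following is only a sketch of the primitive NaCl provides — authenticated public-key encryption — using the PyNaCl binding, with invented party names:

```python
from nacl.public import PrivateKey, Box  # pip install pynacl

# Each party holds a Curve25519 keypair.
doctor_key = PrivateKey.generate()
nurse_key = PrivateKey.generate()

# A Box encrypts and authenticates in one step (XSalsa20-Poly1305).
sending_box = Box(doctor_key, nurse_key.public_key)
ciphertext = sending_box.encrypt(b"Bay 4: creatinine trending up, please review")

# Only the intended recipient's private key can open it.
receiving_box = Box(nurse_key, doctor_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"Bay 4: creatinine trending up, please review"
```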

It also specifies that user messaging data is stored encrypted on European ISO-27001 certified servers — and deleted “as soon as we can”.

It also says it’s “possible” for its encryption code to be open to review on request.

Another addition is a user vetting layer to manually verify the medical professional users of its app are who they say they are.

Siilo says every user gets vetted, though not prior to being able to use the messaging functions. Users that have passed verification unlock greater functionality — such as being able to search among other (verified) users to find peers or specialists to expand their professional network. Siilo says verification status is displayed on profiles.

“At Siilo, we coin this phenomenon ‘network medicine’, which is in contrast to the current old-fashioned, siloed medicine,” says CEO and co-founder Joost Bruggeman in a statement. “The goal is to improve patient care overall, and patients have a network of doctors providing input into their treatment.”

While Bruggeman brings the all-important medical background to the startup, another co-founder, Onno Bakker, has been in the mobile messaging game for a long time — having been one of the entrepreneurs behind the veteran web and mobile messaging platform, eBuddy.

A third co-founder, CFO Arvind Rao, tells us Siilo transplanted eBuddy’s messaging dev team — couching this ported in-house expertise as an advantage over some of the smaller rivals also chasing the healthcare messaging opportunity.

It is also of course having to compete technically with the very well-resourced and smoothly operating WhatsApp behemoth.

“Our main competitor is always WhatsApp,” Rao tells TechCrunch. “Obviously there are also other players trying to move in this space. TigerText is the largest in the US. In the UK we come across local players like Hospify and Forward.

“A major difference [is our] very experienced in-house dev team… The experience of this team has helped to build a messenger that really can compete in usability with WhatsApp, [which] is reflected in our rapid adoption and usage numbers.”

“Having worked in the trenches as a surgery resident, I’ve experienced the challenges that healthcare professionals face firsthand,” adds Bruggeman. “With Siilo, we’re connecting all healthcare professionals to make them more efficient, enable them to share patient information securely and continue learning and share their knowledge. The directory of vetted healthcare professionals helps ensure they’re successful team players within a wider healthcare network that takes care of the same patient.”

Siilo launched its app in May 2016 and has since grown to ~100,000 users, with more than 7.5 million messages currently being processed monthly and 6,000+ clinical chat groups active monthly.

“We haven’t come across any other secure messenger for healthcare in Europe with these figures in the App Store/Google Play rankings and therefore believe we are the largest in Europe,” adds Rao. “We have multiple large institutions across Western Europe where doctors are using Siilo.”

On the security front, as well as flagging the ISO 27001 certification it has for its servers, he notes that it obtained “the highest NHS IG Toolkit level 3” — aka the now-replaced system for organizations to self-assess their compliance with the UK National Health Service’s information governance processes — claiming “we haven’t seen [that] with any other messaging company”.

Siilo’s toolkit assessment was finalized at the end of February 2018, and is valid for a year — so will be up for re-assessment under the replacement system (which was introduced this April) in Q1 2019. (Rao confirms they will be doing this “new (re-)assessment” at the end of the year.)

As well as being in active use in European hospitals such as St. George’s Hospital, London, and Charité Berlin, Germany, Siilo says its app has had some organic adoption by medical pros further afield — including among smaller home healthcare teams in California, and “entire transplantation teams” from Astana, Kazakhstan.

It also cites British Medical Journal research which found that 98.9% of U.K. hospital clinicians now have smartphones, and that around a third are using consumer messaging apps in the clinical workplace. Persuading those healthcare workers to ditch WhatsApp at work is Siilo’s mission and challenge.

The team has just announced a €4.5 million (~$5.1M) seed round to help it get onto the radar of more doctors. The round is led by EQT Ventures, with participation from existing investors. It says it will be using the funding to scale up its user base across Europe, with a particular focus on the UK and Germany.

Commenting on the funding in a statement, EQT Ventures’ Ashley Lundström, a venture lead and investment advisor at the VC firm, said: “The team was impressed with Siilo’s vision of creating a secure global network of healthcare professionals and the organic traction it has already achieved thanks to the team’s focus on building a product that’s easy to use. The healthcare industry has long been stuck using jurassic technologies and Siilo’s real-time messaging app can significantly improve efficiency and patient care without putting patients’ data at risk.”

While the messaging app itself is free for healthcare professionals to use, Siilo also offers a subscription service to monetize the freemium product.

This service, called Siilo Connect, offers organisations and professional associations what it bills as “extensive management, administration, networking and software integration tools”, or just data regulation compliance services if they want the basic flavor of the paid tier.


Femtech hardware startup Elvie inks strategic partnership with UK’s NHS


Elvie, a femtech hardware startup whose first product is a sleek smart pelvic floor exerciser, has inked a strategic partnership with the UK’s National Health Service that will make the device available nationwide through the country’s free-at-the-point-of-use healthcare service — so at no direct cost to the patient.

It’s a major win for the startup that was co-founded in 2013 by CEO Tania Boler and Jawbone founder, Alexander Asseily, with the aim of building smart technology that focuses on women’s issues — an overlooked and underserved category in the gadget space.

Boler’s background before starting Elvie (née Chiaro) included working for the U.N. on global sex education curriculums. But her interest in pelvic floor health, and the inspiration for starting Elvie, began after she had a baby herself and found there was more support for women in France than in the U.K. when it came to taking care of their bodies after giving birth.

With the NHS partnership, which is the startup’s first national reimbursement partnership (and therefore, as a spokeswoman puts it, has “the potential to be transformative” for the still young company), Elvie is emphasizing the opportunity for its connected tech to help reduce symptoms of urinary incontinence, including those suffered by new mums or in cases of stress-related urinary incontinence.

The Elvie kegel trainer is designed to make pelvic floor exercising fun and easy for women, with real-time feedback delivered via an app that also gamifies the activity, guiding users through exercises intended to strengthen their pelvic floor and thus help reduce urinary incontinence symptoms. The device can also alert users when they are contracting incorrectly.
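Elvie hasn’t published its signal processing, but — as a purely hypothetical sketch, with invented units and thresholds — the real-time feedback described reduces to classifying each sensor reading:

```python
def classify_reading(pressure_delta):
    """Hypothetical rule: a lift registers as a positive pressure change,
    bearing down (a common mistake) as negative. Thresholds are invented."""
    if pressure_delta > 5.0:
        return "good contraction"
    if pressure_delta < -2.0:
        return "contracting incorrectly - lift rather than push"
    return "at rest"
```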

Elvie cites research suggesting the NHS spends £233M annually on incontinence, claiming also that around a third of women, and up to 70% of expectant and new mums, currently suffer from urinary incontinence. In 70% of stress urinary incontinence cases, it suggests, symptoms can be reduced or eliminated via pelvic floor muscle training.

And while there’s no absolute need for any device to perform the necessary muscle contractions to strengthen the pelvic floor, the challenge the Elvie Trainer is intended to address is that it can be difficult for women to know whether they are performing the exercises correctly or effectively.

Elvie cites a 2004 study that suggests around a third of women can’t exercise their pelvic floor correctly with written or verbal instruction alone. Whereas it says that biofeedback devices (generally, rather than the Elvie Trainer specifically) have been proven to increase success rates of pelvic floor training programmes by 10% — which it says other studies have suggested can lower surgery rates by 50% and reduce treatment costs by £424 per patient head within the first year.
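Purely as illustrative arithmetic on the per-patient figure quoted (the cohort size here is invented):

```python
cohort = 10_000              # hypothetical patients entering a training programme
saving_per_patient = 424     # GBP in the first year, per the cited studies
print(f"Illustrative first-year saving: £{cohort * saving_per_patient:,}")
# Illustrative first-year saving: £4,240,000
```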

“Until now, biofeedback pelvic floor training devices have only been available through the NHS for at-home use on loan from the patient’s hospital, with patient allocation dependent upon demand. Elvie Trainer will be the first at-home biofeedback device available on the NHS for patients to keep, which will support long-term motivation,” it adds.

Commenting in a statement, Clare Pacey, a specialist women’s health physiotherapist at King’s College Hospital, said: “I am delighted that Elvie Trainer is now available via the NHS. Apart from the fact that it is a sleek, discreet and beautiful product, the app is simple to use and immediate visual feedback directly to your phone screen can be extremely rewarding and motivating. It helps to make pelvic floor rehabilitation fun, which is essential in order to be maintained.”

Elvie is not disclosing commercial details of the NHS partnership but a spokeswoman told us the main objective for this strategic partnership is to broaden access to Elvie Trainer, adding: “The wholesale pricing reflects that.”

Discussing the structure of the supply arrangement, she said Elvie is working with Eurosurgical as its delivery partner — a distributor she said has “decades of experience supplying products to the NHS”.

“The approach will vary by Trust, regarding whether a unit is ordered for a particular patient or whether a small stock will be held so a unit may be provided to a patient within the session in which the need is established. This process will be monitored and reviewed to determine the most efficient and economic distribution method for the NHS Supply Chain,” she added.


Drone development should focus on social good first, says UK report


A UK government-backed drone innovation project that’s exploring how unmanned aerial vehicles could benefit cities — including for use-cases such as medical delivery, traffic incident response, fire response and construction and regeneration — has reported early learnings from the first phase of the project.

Five city regions are being used as drone test-beds as part of Nesta’s Flying High Challenge — namely London, the West Midlands, Southampton, Preston and Bradford.

Five socially beneficial use-cases for drone technology have been analyzed as part of the project so far, including their technical, social and economic implications.

The project has been ongoing since December.

Nesta, the innovation-focused charity behind the project and the report, wants the UK to become a global leader in shaping drone systems that place people’s needs first, and writes in the report that: “Cities must shape the future of drones: Drones must not shape the future of cities.”

In the report it outlines some of the challenges facing urban implementations of drone technology and also makes some policy recommendations.

It also says that socially beneficial use-cases have emerged as an early winner in cities’ assessment of the potential of the tech — over and above “commercial or speculative” applications such as drone delivery or carrying people in flying taxis.

The five use-cases explored thus far via the project are:

  • Medical delivery within London — a drone delivery network for carrying urgent medical products between NHS facilities, which would routinely carry products such as pathology samples, blood products and equipment over relatively short distances between hospitals in a network
  • Traffic incident response in the West Midlands — responding to traffic incidents in the West Midlands to support the emergency services prior to their arrival and while they are on-site, allowing them to allocate the right resources and respond more effectively
  • Fire response in Bradford — emergency response drones for West Yorkshire Fire and Rescue service. Drones would provide high-quality information to support emergency call handlers and fire ground commanders, arriving on the scene faster than is currently possible and helping staff plan an appropriate response for the seriousness of the incident
  • Construction and regeneration in Preston — drone services supporting construction work for urban projects. This would involve routine use of drones prior to and during construction, in order to survey sites and gather real-time information on the progress of works
  • Medical delivery across the Solent — linking Southampton across the Solent to the Isle of Wight using a delivery drone. Drones could carry light payloads of up to a few kilos over distances of around 20 miles, with medical deliveries of products being a key benefit

Flagging up technical and regulatory challenges to scaling the use of drones beyond a few interesting experiments, Nesta writes: “In complex environments, flight beyond the operator’s visual line of sight, autonomy and precision flight are key, as is the development of an unmanned traffic management (UTM) system to safely manage airspace. In isolation these are close to being solved — but making these work at large scale in a complex urban environment is not.”

“While there is demand for all of the use cases that were investigated, the economics of the different use cases vary: Some bring clear cost savings; others bring broader social benefits. Alongside technological development, regulation needs to evolve to allow these use cases to operate. And infrastructure like communications networks and UTM systems will need to be built,” it adds.

The report also emphasizes the importance of public confidence, writing that: “Cities are excited about the possibilities that drones can bring, particularly in terms of critical public services, but are also wary of tech-led buzz that can gloss over concerns of privacy, safety and nuisance. Cities want to seize the opportunity behind drones but do it in a way that responds to what their citizens demand.”

And the charity makes an urgent call for the public to be brought into discussions about the future of drones.

“So far the general public has played very little role,” it warns. “There is support for the use of drones for public benefit such as for the emergency services. In the first instance, the focus on drone development should be on publicly beneficial use cases.”

Given the combined (and intertwined) complexity of regulatory, technical and infrastructure challenges standing in the way of developing viable drone service implementations, Nesta is also recommending the creation of testbeds in which drone services can be developed with the “facilities and regulatory approvals to support them”.

“Regulation will also need to change: Routine granting of permission must be possible, blanket prohibitions in some types of airspace must be relaxed, and an automated system of permissions — linked to an unmanned traffic management system — needs to be put in place for all but the most challenging uses. And we will need a learning system to share progress on regulation and governance of the technology, within the UK and beyond, for instance with Eurocontrol,” it adds.

“Finally, the UK will need to invest in infrastructure, whether this is done by the public or private sector, to develop the communications and UTM infrastructure required for widespread drone operation.”

In conclusion Nesta argues there is “clear evidence that drones are an opportunity for the UK” — pointing to the “hundreds” of companies already operating in the sector; and to UK universities with research strengths in the area; as well as suggesting public authorities could save money or provide “new and better services thanks to drones”.

At the same time it warns that UK policy responses to drones are lagging those of “leading countries” — suggesting the country could squander the chance to properly develop some early promise.

“The US, EU, China, Switzerland and Singapore in particular have taken bigger steps towards reforming regulations, creating testbeds and supporting businesses with innovative ideas. The prize, if we get this right, is that we shape this new technology for good — and that Britain gets its share of the economic spoils.”

You can read the full report here.


Documents detail DeepMind’s plan to apply AI to NHS data in 2015


More details have emerged about a controversial 2015 patient data-sharing arrangement between Google DeepMind and a UK National Health Service Trust — details that paint a contrasting picture to the pair’s public narrative about their intended use of 1.6 million citizens’ medical records.

DeepMind and the Royal Free NHS Trust signed their initial information sharing agreement (ISA) in September 2015 — ostensibly to co-develop a clinical task management app, called Streams, for early detection of an acute kidney condition using an NHS algorithm.

Patients whose fully identifiable medical records were being shared with the Google-owned company were neither asked for their consent nor informed their data was being handed to the commercial entity.

Indeed, the arrangement was only announced to the public five months after it was inked — and months after patient data had already started to flow.

And it was only fleshed out in any real detail after a New Scientist journalist obtained and published the ISA between the pair, in April 2016 — revealing for the first time, via a Freedom of Information request, quite how much medical data was being shared for an app that targets a single condition.

This led to an investigation being opened by the UK’s data protection watchdog into the legality of the arrangement. And as public pressure mounted over the scope and intentions behind the medical records collaboration, the pair stuck to their line that patient data was not being used for training artificial intelligence.

They also claimed they did not need to seek patient consent for their medical records to be shared because the resulting app would be used for direct patient care — a claimed legal basis that has since been demolished by the ICO, which concluded a more than year-long investigation in July.

However a series of newly released documents shows that applying AI to the patient data was in fact a goal for DeepMind right from the earliest months of its partnership with the Royal Free — with its intention being to utilize the wide-ranging access to and control of publicly-funded medical data it was being granted by the Trust to simultaneously develop its own AI models.

In a FAQ note on its website when it publicly announced the collaboration, in February 2016, DeepMind wrote: “No, artificial intelligence is not part of the early-stage pilots we’re announcing today. It’s too early to determine where AI could be applied here, but it’s certainly something we are excited about for the future.”

Omitted from that description of its plans was the fact it had already received a favorable ethical opinion from an NHS Health Research Authority research ethics committee to run a two-year AI research study on the same underlying NHS patient data.

DeepMind’s intent was always to apply AI

The newly released documents, obtained via an FOI filed by health data privacy advocacy organization medConfidential, show DeepMind made an ethics application for an AI research project using Royal Free patient data in October 2015 — with the stated aim of “using machine learning to improve prediction of acute kidney injury and general patient deterioration”.

Earlier still, in May 2015, the company gained confirmation from an insurer that it would cover its potential liability for the research project — cover it subsequently notes having in place in its application.

And the NHS ethics board approved DeepMind’s AI research application in November 2015 — with the two-year study scheduled to start in December 2015 and run until December 2017.

A brief outline of the approved research project was previously published on the Health Research Authority’s website, per its standard protocol, but the FOI reveals more details about the scope of the study — which is summarized in DeepMind’s application as follows:

By combining classical statistical methodology and cutting-edge machine learning algorithms (e.g. ‘unsupervised and semi-supervised learning’), this research project will create improved techniques of data analysis and prediction of who may get AKI [acute kidney injury], more accurately identify cases when they occur, and better alert doctors to their presence.
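The application names only broad families of techniques, so what follows is a generic illustration of semi-supervised learning — not DeepMind’s actual method — using scikit-learn’s self-training wrapper on synthetic data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))              # synthetic stand-ins for lab features
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # synthetic "developed AKI" outcome

y_partial = y.copy()
y_partial[rng.random(500) < 0.8] = -1      # -1 marks unlabelled records

# Fit on the labelled 20%, then iteratively pseudo-label the rest.
model = SelfTrainingClassifier(LogisticRegression())
model.fit(X, y_partial)
print(model.predict(X[:5]))
```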

DeepMind’s application claimed that the existing NHS algorithm, which it was deploying via the Streams app, “appears” to be missing and misclassifying some cases of AKI, and generating false positives — and goes on to suggest: “The problem is not with the tool which DeepMind have made, but with the algorithm itself. We think we can overcome these problems, and create a system which works better.”

Although at the time it wrote this application, in October 2015, user tests of the Streams app had not yet begun — so it’s unclear how DeepMind could so confidently assert there was no “problem” with a tool it hadn’t yet tested. But presumably it was attempting to convey information about (what it claimed were) “major limitations” with the working of the NHS’ national AKI algorithm passed on to it by the Royal Free.

(For the record: In an FOI response that TechCrunch received back from the Royal Free in August 2016, the Trust told us that the first Streams user tests were carried out on 12-14 December 2015. It further confirmed: “The application has not been implemented outside of the controlled user tests.”)
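For context, the national AKI algorithm referred to is a rules-based check that compares a patient’s latest serum creatinine against a baseline value. A simplified sketch of that ratio-and-threshold logic (omitting much of the full NHS England specification, e.g. how baselines are selected):

```python
def aki_stage(current_umol_l, baseline_umol_l, rise_48h_umol_l=0.0):
    """Simplified staging by creatinine ratio to baseline; not the full spec."""
    ratio = current_umol_l / baseline_umol_l
    if ratio >= 3.0:
        return 3
    if ratio >= 2.0:
        return 2
    if ratio >= 1.5 or rise_48h_umol_l > 26.0:  # an acute rise also triggers stage 1
        return 1
    return 0

print(aki_stage(current_umol_l=180, baseline_umol_l=90))  # -> 2
```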

Most interestingly, DeepMind’s AI research application shows it told the NHS ethics board that it could process NHS data for the study under “existing information sharing agreements” with the Royal Free.

“DeepMind acting as a data processor, under existing information sharing agreements with the responsible care organisations (in this case the Royal Free Hospitals NHS Trust), and providing existing services on identifiable patient data, will identify and anonymize the relevant records,” the Google division wrote in the research application.
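The application doesn’t say how records would be “identified and anonymized”; a common approach — shown here as a minimal sketch with invented field names — is to drop direct identifiers and replace them with a salted pseudonym. (As critics note below, such measures can often be reversed by triangulating with other data.)

```python
import hashlib

SALT = b"per-project secret"  # would need careful key management in practice

def pseudonymize(record):
    """Strip direct identifiers; keep clinical fields under a stable token."""
    token = hashlib.sha256(SALT + record["nhs_number"].encode()).hexdigest()[:16]
    return {
        "patient_token": token,                       # stable but not directly identifying
        "creatinine_umol_l": record["creatinine_umol_l"],
        "test_date": record["test_date"],
        # name, address and date of birth are dropped entirely
    }
```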

The fact that DeepMind had taken active steps to gain approval for AI research on the Royal Free patient data as far back as fall 2015 flies in the face of all the subsequent assertions made by the pair to the press and public — when they claimed the Royal Free data was not being used to train AI models.

For instance, here’s what this publication was told in May last year, after the scope of the data being shared by the Trust with DeepMind had just emerged (emphasis mine):

DeepMind confirmed it is not, at this point, performing any machine learning/AI processing on the data it is receiving, although the company has clearly indicated it would like to do so in future. A note on its website pertaining to this ambition reads: “[A]rtificial intelligence is not part of the early-stage pilots we’re announcing today. It’s too early to determine where AI could be applied here, but it’s certainly something we are excited about for the future.”

The Royal Free spokesman said it is not possible, under the current data-sharing agreement between the trust and DeepMind, for the company to apply AI technology to these data-sets and data streams.

That type of processing of the data would require another agreement, he confirmed.

“The only thing this data is for is direct patient care,” he added. “It is not being used for research, or anything like that.”

As the FOI makes clear, and contrary to the Royal Free spokesman’s claim, DeepMind had in fact been granted ethical approval by the NHS Health Research Authority in November 2015 to conduct AI research on the Royal Free patient data-set — with DeepMind in control of selecting and anonymizing the PID (patient identifiable data) intended for this purpose.

Conducting research on medical data would clearly not constitute an act of direct patient care — which was the legal basis DeepMind and the Royal Free were at the time claiming for their reliance on implied consent of NHS patients to their data being shared. So, in seeking to paper over the erupting controversy about how many patients’ medical records had been shared without their knowledge or consent, it appears the pair felt the need to publicly de-emphasize their parallel AI research intentions for the data.

“If you have been given data, and then anonymise it to do research on, it’s disingenuous to claim you’re not using the data for research,” said Dr Eerke Boiten, a cyber security professor at De Montfort University whose research interests encompass data privacy and ethics, when asked for his view on the pair’s modus operandi here.

“And [DeepMind] as computer scientists, some of them with a Ross Anderson pedigree, they should know better than to believe in ‘anonymised medical data’,” he added — a reference to how trivially easy it has been shown to be for sensitive medical data to be re-identified once it’s handed over to third parties who can triangulate identities using all sorts of other data holdings.

Also commenting on what the documents reveal, Phil Booth, coordinator of medConfidential, told us: “What this shows is that Google ignored the rules. The people involved have repeatedly claimed ignorance, as if they couldn’t use a search engine. Now it appears they were very clear indeed about all the rules and contractual arrangements; they just deliberately chose not to follow them.”

Asked to respond to criticism that it has deliberately ignored NHS’ information governance rules, a DeepMind spokeswoman said the AI research being referred to “has not taken place”.

“To be clear, no research project has taken place and no AI has been applied to that dataset. We have always said that we would like to undertake research in future, but the work we are delivering for the Royal Free is solely what has been said all along — delivering Streams,” she added.

She also pointed to a blog post the company published this summer after the ICO ruled that the 2015 ISA with the Royal Free had broken UK data protection laws — in which DeepMind admits it “underestimated the complexity of NHS rules around patient data” and failed to adequately listen and “be accountable to and [be] shaped by patients, the public and the NHS as a whole”.

“We made a mistake in not publicising our work when it first began in 2015, so we’ve proactively announced and published the contracts for our subsequent NHS partnerships,” it wrote in July.

“We do not foresee any major ethical… issues”

In one of the sections of DeepMind’s November 2015 AI research study application form, which asks for “a summary of the main ethical, legal or management issues arising from the research project”, the company writes: “We do not foresee any major ethical, legal or management issues.”

Clearly, with hindsight, the data-sharing partnership would quickly run into major ethical and legal problems. So that’s a pretty major failure of foresight by the world’s most famous AI-building entity. (Albeit, it’s worth noting that the rest of a fuller response in this section has been entirely redacted — but presumably DeepMind is discussing what it considers lesser issues here.)

The application also reveals that the company intended not to register the AI research in a public database — bizarrely claiming that “no appropriate database exists for work such as this”.

In this section the application form includes the following guidance note for applicants: “Registration of research studies is encouraged wherever possible”, and goes on to suggest various possible options for registering a study — such as via a partner NHS organisation; in a register run by a medical research charity; or via publishing through an open access publisher.

DeepMind makes no additional comment on any of these suggestions.

When we asked the company why it had not intended to register the AI research the spokeswoman reiterated that “no research project has taken place”, and added: “A description of the initial HRA [Health Research Authority] application is publicly available on the HRA website.”

Evidently the company — whose parent entity Google’s corporate mission statement claims it wants to ‘organize the world’s information’ — was in no rush to more widely distribute its plans for applying AI to NHS data at this stage.

Details of the size of the study have also been redacted in the FOI response so it’s not possible to ascertain how many of the 1.6M medical records DeepMind intended to use for the AI research, although the document does confirm that children’s medical records would be included in the study.

The application confirms that Royal Free NHS patients who have previously opted out of their data being used for any medical research would be excluded from the AI study (as would be required by UK law).

As noted above, DeepMind’s application also specifies that the company would be both handling fully identifiable patient data from the Royal Free, for the purposes of developing the clinical task management app Streams, and also identifying and anonymizing a sub-set of this data to run its AI research.

This could well raise additional questions over whether the level of control DeepMind was being afforded by the Trust over patients’ data is appropriate for an entity that is described as occupying the secondary role of data processor — vs the Royal Free claiming it remains the data controller.

“A data processor does not determine the purpose of processing — a data controller does,” said Boiten, commenting on this point. “‘Doing AI research’ is too aspecific as a purpose, so I find it impossible to view DeepMind as only a data processor in this scenario,” he added.

One thing is clear: When the DeepMind-Royal Free collaboration was publicly revealed with much fanfare, the fact they had already applied for and been granted ethical approval to perform AI research on the same patient data-set was not — in their view — a consideration they deemed merited detailed public discussion. Which is a huge miscalculation when you’re trying to win the public’s trust for the sharing of their most sensitive personal data.

Asked why it had not informed the press or the public about the existence and status of the research project at the time, a DeepMind spokeswoman failed to directly respond to the question — instead she reiterated that: “No research is underway.”

DeepMind and the Royal Free both claim that, despite receiving a favorable ethical opinion on the AI research application in November 2015 from the NHS ethics committee, additional approvals would have been required before the AI research could have gone ahead.

“A favourable opinion from a research ethics committee does not constitute full approval. This work could not take place without further approvals,” the DeepMind spokeswoman told us.

“The AKI research application has initial ethical approval from the national research ethics service within the Health Research Authority (HRA), as noted on the HRA website. However, DeepMind does not have the next step of approval required to proceed with the study — namely full HRA approval (previously called local R&D approval).

“In addition, before any research could be done, DeepMind and the Royal Free would also need a research collaboration agreement,” she added.

The HRA’s letter to DeepMind confirming its favorable opinion on the study does indeed note:

Management permission or approval must be obtained from each host organisation prior to the start of the study at the site concerned.

Management permission (“R&D approval”) should be sought from all NHS organisations involved in the study in accordance with NHS research governance arrangements

However since the proposed study was to be conducted purely on a database of patient data, rather than at any NHS locations, and given that the Royal Free already had an information-sharing arrangement inked in place with DeepMind, it’s not clear exactly what additional external approvals they were awaiting.

The original (now defunct and ICO-sanctioned) ISA between the pair does include a clause granting DeepMind the ability to anonymize the Royal Free patient data-set “for research” purposes. And although this clause lists several bodies, one of which it says would also need to approve any projects under “formal research ethics”, the aforementioned HRA (“the National Research Ethics Service”) is included in this list.

So again, it’s not clear whose rubberstamp they would still have required.

The value of transparency

At the same time, it’s clear that transparency is a preferred principle of medical research ethics — hence the NHS encouraging those filling in research applications to publicly register their studies.

A UK government-commissioned life science strategy review, published this week, also emphasizes the importance of transparency in engendering and sustaining public trust in health research projects — arguing it’s an essential component for furthering the march of digital innovation.

The same review also recommends that the UK government and the NHS take ownership of training health AIs off of taxpayer funded health data-sets — exactly to avoid corporate entities coming in and asset-stripping potential future medical insights.

(“Most of the value is the data,” asserts review author, Sir John Bell, an Oxford University professor of medicine. Data that, in DeepMind’s case, has so far been freely handed over by multiple NHS organizations — in June, for example, it emerged that another NHS Trust that has inked a five-year data-sharing deal with DeepMind, Taunton & Somerset, is not paying the company for the duration of the contract, unless (in what looks an unlikely eventuality) service support exceeds £15,000 a month. So essentially DeepMind is being ‘paid’ with access to NHS patients’ data.)

Even before the ICO’s damning verdict, the original ISA between DeepMind and the Royal Free had been extensively criticized for lacking robust legal and ethical safeguards on how patient data could be used. (Even as DeepMind’s co-founder Mustafa Suleyman tried to brush off criticism, saying negative headlines were the result of “a group with a particular view to peddle”.)

But after the original controversy flared the pair subsequently scrapped the agreement and replaced it, in November 2016, with a second data-sharing contract which included some additional information governance concessions — while also continuing to share largely the same quantity and types of identifiable Royal Free patient data as before.

Then this July, as noted earlier, the ICO ruled that the original ISA had indeed breached UK privacy law. “Patients would not have reasonably expected their information to have been used in this way, and the Trust could and should have been far more transparent with patients as to what was happening,” it stated in its decision.

The ICO also said it had asked the Trust to commit to making changes to address the shortcomings that the regulator had identified.

In a statement on its website the Trust said it accepted the findings and claimed to have “already made good progress to address the areas where they have concerns”, and to be “doing much more to keep our patients informed about how their data is used”.

“We would like to reassure patients that their information has been in our control at all times and has never been used for anything other than delivering patient care or ensuring their safety,” the Royal Free’s July statement added.

Responding to questions put to it for this report, the Royal Free Hospitals NHS Trust confirmed it was aware of and involved with the 2015 DeepMind AI research study application.

“To be clear, the application was for research on de-personalised data and not the personally identifiable data used in providing Streams,” said a spokeswoman.

“No research project has begun, and it could not begin without further approvals. It is worth noting that fully approved research projects involving de-personalised data generally do not require patient consent,” she added.

At the time of writing the spokeswoman had not responded to follow-up questions asking why, in 2016, it had made such explicit public denials about its patient data being used for AI research, and why it chose not to make public the existing application to conduct AI research at that time — or indeed, at an earlier time.

Another curious facet to this saga involves the group of “independent reviewers” that Suleyman announced the company had signed up in July 2016 to — as he put it — “examine our work and publish their findings”.

His intent was clearly to try to reset public perceptions of the DeepMind Health initiative after a bumpy start for transparency, consent, information governance and regulatory best practice — with the wider hope of boosting public trust in what an ad giant wanted with people’s medical data by allowing some external eyeballs to roll in and poke around.

What’s curious is that the reviewers make no reference to DeepMind’s AI research study intentions for the Royal Free data-set in their first report — also published this July.

We reached out to the chair of the group, former MP Julian Huppert, to ask whether DeepMind informed the group it was intending to undertake AI research on the same data-set.

Huppert confirmed to us that the group had been aware there was “consideration” of an AI research project using the Royal Free data at the time it was working on its report, but claimed he does not “recall exactly” when the project was first mentioned or by whom.

“Both the application and the decision not to go ahead happened before the panel was formed,” he said, by way of explanation for the memory lapse.

Asked why the panel did not think the project worth mentioning in its first annual report, he told TechCrunch: “We were more concerned with looking at work that DMH had done and were planning to do, than things that they had decided not to go ahead with.”

“I understand that no work was ever done on it. If this project were to be taken forward, there would be many more regulatory steps, which we would want to look at,” he added.

In their report the independent reviewers do flag up some issues of concern regarding DeepMind Health’s operations — including potential security vulnerabilities around the company’s handling of health data.

For example, a datacenter server build review report, conducted by an external auditor looking at part of DeepMind Health’s critical infrastructure on behalf of the external reviewers, identified what it judged a “medium risk vulnerability” — noting that: “A large number of files are present which can be overwritten by any user on the reviewed servers.”

“This could allow a malicious user to modify or replace existing files to insert malicious content, which would allow attacks to be conducted against the servers storing the files,” the auditor added.
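The audit report doesn’t describe its tooling, but the class of misconfiguration flagged — files writable by any local user — can be found with a check along these lines (the root path here is hypothetical):

```python
import os
import stat

def world_writable_files(root):
    """Yield files that any local user could overwrite."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                mode = os.stat(path).st_mode
            except OSError:
                continue  # skip unreadable or vanished files
            if mode & stat.S_IWOTH:
                yield path

for path in world_writable_files("/srv/app"):
    print(path)
```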

Asked how DeepMind Health will work to regain NHS patients’ trust in light of such a string of transparency and regulatory failures to-date, the spokeswoman provided the following statement: “Over the past eighteen months we’ve done a lot to try to set a higher standard of transparency, appointing a panel of Independent Reviewers who scrutinise our work, embarking on a patient involvement program, proactively publishing NHS contracts, and building tools to enable better audits of how data is used to support care. In our recently signed partnership with Taunton and Somerset NHS Trust, for example, we committed to supporting public engagement activity before any patient data is transferred for processing. And at our recent consultation events in London and Manchester, patients provided feedback on DeepMind Health’s work.”

Asked whether it had informed the independent reviewers about the existence of the AI research application, the spokeswoman declined to respond directly. Instead she repeated the prior line that: “No research project is underway.”


Building health AIs should be UK ambition, says strategy review


A wide-ranging, UK government-commissioned industrial strategy review of the life sciences sector, conducted by Oxford University’s Sir John Bell, has underlined the value locked up in publicly-funded data held by the country’s National Health Service — and called for a new regulatory framework to be established in order to “capture for the UK the value in algorithms generated using NHS data”.

The NHS is a free-at-the-point-of-use national health service covering some 65 million users — which gives you an idea of the unique depth and granularity of the patient data it holds, and how much potential value could therefore be created for the nation by utilizing patient data-sets to develop machine learning algorithms for medical diagnosis and tracking.

“AI is likely to be used widely in healthcare and it should be the ambition for the UK to develop and test integrated AI systems that provide real-time data better than human monitoring and prediction of a wide range of patient outcomes in conditions such as mental health, cancer and inflammatory disease,” writes Bell in the report.

His recommendation for the government and the NHS to be pro-active about creating and capturing AI-enabled value off of valuable, taxpayer-funded health data-sets comes hard on the heels of the conclusion of a lengthy investigation by the UK’s data protection watchdog, the ICO, into a controversial 2015 data-sharing arrangement between Google-DeepMind and a London-based NHS Trust, the Royal Free Hospitals Trust, to co-develop a clinical task management app.

In July the ICO concluded that the arrangement — DeepMind’s first with an NHS Trust — breached UK privacy law, saying the ~1.6M NHS patients whose full medical records are being shared with the Google-owned company (without their consent) could not have “reasonably expected” their information to be used in this way.

And while the initial application the pair have co-developed does not involve applying machine learning algorithms to NHS data, a wider memorandum of understanding between them sets out their intention to do just that within five years.

Meanwhile, DeepMind has also inked additional data-sharing arrangements with other NHS Trusts that do already entail AI-based research — such as a July 2016 research partnership with Moorfields Eye Hospital that’s aiming to investigate whether machine learning algorithms can automate the analysis of digital eye scans to diagnose two eye conditions.

In that instance DeepMind is getting free access to one million “anonymized” eye scans to try to develop diagnosis AI models.

The company has committed to publishing the results of the research but any AI models it develops — trained off of the NHS data-set — are unlikely to be handed back freely to the public sector.

Rather, the company’s stated aim for its health-based AI ambitions is to create commercial IP, via multiple research partnerships with NHS organizations — positioning itself to sell trained AI models as a future software-based service to healthcare organizations at whatever price it deems appropriate.

This is exactly the sort of data-enabled algorithmic value that Bell is urging the UK government to be pro-active about capturing for the country — by establishing a regulatory framework that positions the NHS (and the UK’s citizens who fund it) to benefit from data-based AI insights generated off of its vast data holdings, instead of allowing large commercial entities to push in and asset strip these taxpayer funded assets.

“[E]xisting data access agreements in the UK for algorithm development have currently been completed at a local level with mainly large companies and may not share the rewards fairly, given the essential nature of NHS patient data to developing algorithms,” warns Bell.

“There is an opportunity for defining a clear framework to better realise the true value for the NHS of the data at a national level, as currently agreements made locally may not share the benefit with other regions,” he adds.

In an interview with the Guardian newspaper he is asked directly for his views on DeepMind’s collaboration with the Royal Free NHS Trust — and describes it as the “canary in the coalmine”.

“I heard that story and thought ‘Hang on a minute, who’s going to profit from that?’” he is quoted as saying. “What Google’s doing in [other sectors], we’ve got an equivalent unique position in the health space. Most of the value is the data. The worst thing we could do is give it away for free.”

“What you don’t want is somebody rocking up and using NHS data as a learning set for the generation of algorithms and then moving the algorithm to San Francisco and selling it so all the profits come back to another jurisdiction,” Bell also told the newspaper.

In his report, Bell also highlights the unpreparedness of “current or planned” regulations to provide a framework to “account for machine learning algorithms that update with new data” — pointing out, for example, that: “Currently algorithms making medical claims are regulated as medical devices.”

And again, in 2016 DeepMind suspended testing of the Streams app it had co-developed with the Royal Free NHS Trust after it emerged the pair had failed to register this software as a medical device with the MHRA prior to trialling it in the hospitals.

Bell suggests that a better approach for testing healthcare software and algorithms could involve sandboxed access and use of dummy data — rather than testing with live patient data, as DeepMind and the Royal Free were.

“One approach to this may be in the development of ‘sandbox’ access to deidentified or synthetic data from providers such as NHS Digital, where innovators could safely develop algorithms and trial new regulatory approaches for all product types,” he writes.
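The report doesn’t prescribe what such synthetic data would look like; as a minimal sketch, a sandbox provider could hand innovators entirely fabricated records along these lines (all fields invented):

```python
import csv
import random

random.seed(42)
CONDITIONS = ["AKI stage 1", "AKI stage 2", "none"]

# Write a small file of wholly synthetic "patient" rows for sandbox testing.
with open("synthetic_patients.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["patient_id", "age", "creatinine_umol_l", "condition"])
    for i in range(100):
        writer.writerow([
            f"SYN-{i:04d}",
            random.randint(18, 95),
            round(random.uniform(45.0, 400.0), 1),
            random.choice(CONDITIONS),
        ])
```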

In the report Bell also emphasizes the importance of transparency in winning public trust to further the progress of research which utilizes publicly funded health data-sets.

“Many more people support than oppose health data being used by commercial organisations undertaking health research, but it is also clear that strong patient and clinician engagement and involvement, alongside clear permissions and controls, are vital to the success of any health data initiative,” he writes.

“This should take place as part of a wider national conversation with the public enabling a true understanding of data usage in as much detail as they wish, including clear information on who can access data and for what purposes. This conversation should also provide full information on how health data is vital to improving health, care and services through research.”

He also calls for the UK’s health care system to “set out clear and consistent national approaches to data and interoperability standards and requirements for data access agreements” in order to help reduce response time across all data providers, writing: “Currently, arranging linkage and access to national-level datasets used for research can require multiple applications and access agreements with unclear timelines. This can cause delays to data access enabling both research and direct care.”

Other NHS-related recommendations in the report include a call to end handwritten prescriptions and make e-prescribing mandatory for hospitals; the creation of a forum for researchers across academia, charities and industry to engage with all national health data programs; and the creation of between two and five digital innovation hubs to provide data across regions of three to five million people, with the aim of accelerating research access to meaningful national datasets.


