March 23, 2019
Category archive


Japan’s “Society 5.0” initiative is a roadmap for today’s entrepreneurs


Japan, still suffering the consequences of its ‘Lost Decade’ of economic stagnation, is eyeing a transformation more radical than any the industrialized world has ever seen.

Boldly identified as “Society 5.0,” Japan describes its initiative as a purposeful effort to create a new social contract and economic model by fully incorporating the technological innovations of the fourth industrial revolution. It envisions embedding these innovations into every corner of its aging society. Underpinning this effort is a mandate for sustainability, bound tightly to the United Nations’ new global goals, the SDGs. Japan wants to create, in its own words, a ‘super-smart’ society, one that will serve as a roadmap for the rest of the world.

Japan hosts its first-ever G20 summit in 2019, and this grand initiative will be on the agenda at the official B20 (Business 20) summit, headed by the chairman of Hitachi.

Components of Society 5.0 and its implications for the US

Society 5.0 addresses a number of key pillars: infrastructure, financial technology, healthcare, logistics and, of course, AI. The markets being grown in Japan are impressive: Japan predicts $87 billion in robotics investment, and its IoT market is poised to hit $6 billion in 2019.

This means we are behind. We have not put enough focus on what AI can do, not only for industry but to move society forward and solve many of our most pervasive problems.

It isn’t just a problem of lack of investment by the United States government. Just this past September the Department of Defense announced a commitment of $2 billion over the next five years toward new programs advancing artificial intelligence. The issue lies in the lack of a complete partnership between the U.S. government and the private sector. So why is Japan in the lead?

A Full-Fledged Embrace of AI and Cutting-Edge Technology

Along with $1.44 billion from the government for AI funding, the Innovation Network Corp. of Japan is reorganizing to focus on AI and big data. It is projected to grow to $4 billion and operate until at least 2034. Much like in Britain and France, the government has made a point of teaming with the private sector to move all of society forward.

Fresh Ideas to Address Persistent Societal Problems

Along with the governmental and private partnership, Society 5.0 harnesses AI to address problems that continue to plague society. They are looking at how AI can help with the trappings of an aging population and pollution, and, most importantly, how to create such a sweeping initiative while keeping it agile enough to adjust to society’s constant, everyday change.

The goal of the work being done at Hitachi now on Society 5.0 is to create a human-centered society. Technologies and innovations need to be leveraged to aid humans and our advancement, not to replace us in any way.

How Do American Technologists Close the Gap and Partner with Japan?

First, in Silicon Valley and beyond, American technologists and entrepreneurs must create a partnership between themselves and the U.S. government. Only by working together can we reach our full potential.

Take the British government as a model. This past April it announced that it had put together “an AI deal worth more than £1 billion” that includes public and private funding.

France sees the opportunity and is betting on AI as well. This past spring President Emmanuel Macron announced an AI plan that includes $1.6 billion in funding, new research centers and data-sharing initiatives. The road has been clearly mapped for the U.S.; just follow the path.

Next, American technologists and entrepreneurs must focus on certain industries and their ability to improve society in its entirety. There are four major industries technologists and entrepreneurs can focus on, and disrupt, by modeling Japan’s Society 5.0 ideas and approach.

Healthcare

Japan’s society is more heavily weighted toward people over 60 than the rest of the world. In turn, more healthcare is needed to support people over longer lifespans.

American technologists and entrepreneurs can capitalize by investing in and developing cognitive AI technologies that greatly lessen the time needed for administrative tasks, allowing medical professionals to concentrate on actually providing healthcare.

A UK report suggests approximately 10% of NHS operational expenses could be saved through AI and automation. If this can be mirrored, and then improved upon, in the U.S., the rising cost of healthcare and declining public health can be tackled simultaneously.

Rural Access and Logistics

While the population of urban centers is growing, rural areas are being left with diminished access to everyday needs like transportation, stores, hospitals and community centers.

Technologists and entrepreneurs should continue to invest in and develop autonomous vehicles, drones and single-driver cargo truck convoys. Access to basic everyday needs will not be a given for those residing far from urban centers. Here lies another dual opportunity: serve those in need while simultaneously moving tech and society forward.

Infrastructure and Transportation

28 percent of major U.S. roads are rated “poor” or in need of a complete rebuild. AI and other technologies such as robots, drones, sensors and IoT will help solve these problems. How? If only 10 percent of cars in the U.S. became self-driving, those 26 million vehicles would generate 38.4 zettabytes of data annually. In one year that would create over eight times the volume of the world’s current data.
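To put that fleet-wide figure in perspective, the implied per-vehicle data rate can be checked with quick arithmetic. The vehicle count and zettabyte total below are the article’s own figures; only the unit conversion is standard:

```python
# Back-of-envelope check of the article's figures: 26 million
# self-driving cars producing 38.4 zettabytes of data per year.
ZB_TO_TB = 1_000_000_000  # 1 zettabyte = 10^9 terabytes

vehicles = 26_000_000
fleet_zb_per_year = 38.4

tb_per_vehicle_year = fleet_zb_per_year * ZB_TO_TB / vehicles
tb_per_vehicle_day = tb_per_vehicle_year / 365

print(f"~{tb_per_vehicle_year:,.0f} TB per vehicle per year")
print(f"~{tb_per_vehicle_day:.1f} TB per vehicle per day")
```

That works out to roughly 4 TB per vehicle per day, which is in line with commonly cited estimates of autonomous-vehicle sensor output.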

Not only must we increase investment in autonomous vehicles, but we must make a concerted effort to leverage the data they will produce. Technologists and entrepreneurs will have an unprecedented advantage in using this data to predict everything from infrastructure-improvement needs to the condition of every bridge and road those vehicles use. Companies like Hitachi are the ones to look to work with; they’re doing amazing things in infrastructure today. How can this be translated to the U.S.? That is a question for you to ask and ultimately solve.

Mass transit is far ahead in Japan as well. Japan’s maglev train set a world record speed of 375 mph. With the vast expanses of the United States landscape, and the ever-growing challenges of flying, the rail transport industry is ripe for the picking. Plans for the Midwest and the West Coast seem to come and go. What will be the plan that actually works?

Finance and Banking

Blockchain is a solution that will advance security, transparency and fraud prevention in society. Cognitive AI is producing results toward the goals of Society 5.0, whether it be a cashless society or a consumer-focused one. Voice-prompted AI assistants are already providing consumer support by depositing money, performing trades, mastering trading platforms, networking and onboarding customers. This omnichannel integration will see finance and banking evolve to grow around customers’ needs. With this evolution we will see far less need for cash and brick-and-mortar banks.

In the end, data alone is just code without meaning to its user. But when technologists and entrepreneurs implement AI to its full potential, a true difference will be seen. In Society 5.0, humanity and machines will solve the greatest issues society faces in the 21st century. We must embrace what Japan is creating with Society 5.0, or we will simply become a vestige of the technological past.



Siilo injects $5.1M to try to transplant WhatsApp use in hospitals


Consumer messaging apps like WhatsApp are not only insanely popular for chatting with friends but have pushed deep into the workplace too, thanks to the speed and convenience they offer. They have even crept into hospitals, as time-strapped doctors reach for a quick and easy way to collaborate over patient cases on the ward.

Yet WhatsApp is not specifically designed with the safe sharing of highly sensitive medical information in mind. This is where Dutch startup Siilo has been carving a niche for itself for the past 2.5 years — via a free-at-the-point-of-use encrypted messaging app that’s intended for medical professionals to securely collaborate on patient care, such as via in-app discussion groups and being able to securely store and share patient notes.

It’s a business goal that could be buoyed by tighter EU regulations around handling personal data — say, if hospital managers decide they need to address compliance risks around staff use of consumer messaging apps.

The app’s WhatsApp-style messaging interface will be instantly familiar to any smartphone user. But Siilo bakes in additional features for its target healthcare professional users, such as keeping photos, videos and files sent via the app siloed in an encrypted vault that’s entirely separate from any personal media also stored on the device.

Messages sent via Siilo are also automatically deleted after 30 days unless the user specifies a particular message should be retained. And the app does not make automated back-ups of users’ conversations.

Other doctor-friendly features include the ability to blur images (for patient privacy purposes); augment images with arrows for emphasis; and export threaded conversations to electronic health records.

There’s also mandatory security for accessing the app, with a requirement for either a PIN code, fingerprint or facial recognition biometric. And a remote-wipe functionality to nix any locally stored data is baked into Siilo in the event of a device being lost or stolen.

Like WhatsApp, Siilo also uses end-to-end encryption — though in its case it says this is based on the open source NaCl library.

It also specifies that user messaging data is stored encrypted on European ISO-27001 certified servers — and deleted “as soon as we can”.

It also says it’s “possible” for its encryption code to be open to review on request.

Another addition is a user vetting layer to manually verify the medical professional users of its app are who they say they are.

Siilo says every user gets vetted, though not prior to being able to use the messaging functions. But users who have passed verification unlock greater functionality — such as being able to search among other (verified) users to find peers or specialists and expand their professional network. Siilo says verification status is displayed on profiles.

“At Siilo, we coin this phenomenon ‘network medicine’, which is in contrast to the current old-fashioned, siloed medicine,” says CEO and co-founder Joost Bruggeman in a statement. “The goal is to improve patient care overall, and patients have a network of doctors providing input into their treatment.”

While Bruggeman brings the all-important medical background to the startup, another co-founder, Onno Bakker, has been in the mobile messaging game for a long time — having been one of the entrepreneurs behind the veteran web and mobile messaging platform, eBuddy.

A third co-founder, CFO Arvind Rao, tells us Siilo transplanted eBuddy’s messaging dev team — couching this ported in-house expertise as an advantage over some of the smaller rivals also chasing the healthcare messaging opportunity.

It is also of course having to compete technically with the very well-resourced and smoothly operating WhatsApp behemoth.

“Our main competitor is always WhatsApp,” Rao tells TechCrunch. “Obviously there are also other players trying to move in this space. TigerText is the largest in the US. In the UK we come across local players like Hospify and Forward.

“A major difference [is our] very experienced in-house dev team… The experience of this team has helped to build a messenger that really can compete in usability with WhatsApp that is reflected in our rapid adoption and usage numbers.”

“Having worked in the trenches as a surgery resident, I’ve experienced the challenges that healthcare professionals face firsthand,” adds Bruggeman. “With Siilo, we’re connecting all healthcare professionals to make them more efficient, enable them to share patient information securely and continue learning and share their knowledge. The directory of vetted healthcare professionals helps ensure they’re successful team players within a wider healthcare network that takes care of the same patient.”

Siilo launched its app in May 2016 and has since grown to ~100,000 users, with more than 7.5 million messages currently being processed monthly and 6,000+ clinical chat groups active monthly.

“We haven’t come across any other secure messenger for healthcare in Europe with these figures in the App Store/Google Play rankings and therefore believe we are the largest in Europe,” adds Rao. “We have multiple large institutions across Western-Europe where doctors are using Siilo.”

On the security front, as well as flagging the ISO 27001 certification it has for its servers, he notes that it obtained “the highest NHS IG Toolkit level 3” — aka the now-replaced system for organizations to self-assess their compliance with the UK National Health Service’s information governance processes — claiming “we haven’t seen [that] with any other messaging company”.

Siilo’s toolkit assessment was finalized at the end of February 2018, and is valid for a year — so will be up for re-assessment under the replacement system (which was introduced this April) in Q1 2019. (Rao confirms they will be doing this “new (re-)assessment” at the end of the year.)

As well as being in active use in European hospitals such as St. George’s Hospital, London, and Charité Berlin, Germany, Siilo says its app has had some organic adoption by medical pros further afield — including among smaller home healthcare teams in California, and “entire transplantation teams” from Astana, Kazakhstan.

It also cites British Medical Journal research that found that of the 98.9% of U.K. hospital clinicians who now have smartphones, around a third are using consumer messaging apps in the clinical workplace. Persuading those healthcare workers to ditch WhatsApp at work is Siilo’s mission and challenge.

The team has just announced a €4.5 million (~$5.1M) seed round to help it get onto the radar of more doctors. The round is led by EQT Ventures, with participation from existing investors. It says it will be using the funding to scale up its user base across Europe, with a particular focus on the UK and Germany.

Commenting on the funding in a statement, EQT Ventures’ Ashley Lundström, a venture lead and investment advisor at the VC firm, said: “The team was impressed with Siilo’s vision of creating a secure global network of healthcare professionals and the organic traction it has already achieved thanks to the team’s focus on building a product that’s easy to use. The healthcare industry has long been stuck using jurassic technologies and Siilo’s real-time messaging app can significantly improve efficiency and patient care without putting patients’ data at risk.”

While the messaging app itself is free for healthcare professionals to use, Siilo also offers a subscription service to monetize the freemium product.

This service, called Siilo Connect, offers organisations and professional associations what it bills as “extensive management, administration, networking and software integration tools” — or just data regulation compliance services if they want the basic flavor of the paid tier.


Femtech hardware startup Elvie inks strategic partnership with UK’s NHS


Elvie, a femtech hardware startup whose first product is a sleek smart pelvic floor exerciser, has inked a strategic partnership with the UK’s National Health Service that will make the device available nationwide through the country’s free-at-the-point-of-use healthcare service — so at no direct cost to the patient.

It’s a major win for the startup that was co-founded in 2013 by CEO Tania Boler and Jawbone founder, Alexander Asseily, with the aim of building smart technology that focuses on women’s issues — an overlooked and underserved category in the gadget space.

Boler’s background before starting Elvie (née Chiaro) included working for the U.N. on global sex education curricula. But her interest in pelvic floor health, and the inspiration for starting Elvie, began after she had a baby herself and found there was more support for women in France than the U.K. when it came to taking care of their bodies after giving birth.

With the NHS partnership, which is the startup’s first national reimbursement partnership (and therefore, as a spokeswoman puts it, has “the potential to be transformative” for the still young company), Elvie is emphasizing the opportunity for its connected tech to help reduce symptoms of urinary incontinence, including those suffered by new mums or in cases of stress-related urinary incontinence.

The Elvie kegel trainer is designed to make pelvic floor exercising fun and easy for women, with real-time feedback delivered via an app that also gamifies the activity, guiding users through exercises intended to strengthen their pelvic floor and thus help reduce urinary incontinence symptoms. The device can also alert users when they are contracting incorrectly.

Elvie cites research suggesting the NHS spends £233M annually on incontinence, claiming also that around a third of women, and up to 70% of expectant and new mums, currently suffer from urinary incontinence. In 70% of stress urinary incontinence cases, it suggests, symptoms can be reduced or eliminated via pelvic floor muscle training.

And while there’s no absolute need for a device to perform the muscle contractions that strengthen the pelvic floor, the challenge the Elvie Trainer is intended to help with is that it can be difficult for women to know whether they are performing the exercises correctly or effectively.

Elvie cites a 2004 study suggesting around a third of women can’t exercise their pelvic floor correctly with written or verbal instruction alone. It says that biofeedback devices (generally, rather than the Elvie Trainer specifically) have been proven to increase success rates of pelvic floor training programmes by 10% — which other studies have suggested can lower surgery rates by 50% and reduce treatment costs by £424 per patient within the first year.

“Until now, biofeedback pelvic floor training devices have only been available through the NHS for at-home use on loan from the patient’s hospital, with patient allocation dependent upon demand. Elvie Trainer will be the first at-home biofeedback device available on the NHS for patients to keep, which will support long-term motivation,” it adds.

Commenting in a statement, Clare Pacey, a specialist women’s health physiotherapist at Kings College Hospital, said: “I am delighted that Elvie Trainer is now available via the NHS. Apart from the fact that it is a sleek, discreet and beautiful product, the app is simple to use and immediate visual feedback directly to your phone screen can be extremely rewarding and motivating. It helps to make pelvic floor rehabilitation fun, which is essential in order to be maintained.”

Elvie is not disclosing commercial details of the NHS partnership but a spokeswoman told us the main objective for this strategic partnership is to broaden access to Elvie Trainer, adding: “The wholesale pricing reflects that.”

Discussing the structure of the supply arrangement, she said Elvie is working with Eurosurgical as its delivery partner — a distributor she said has “decades of experience supplying products to the NHS”.

“The approach will vary by Trust, regarding whether a unit is ordered for a particular patient or whether a small stock will be held so a unit may be provided to a patient within the session in which the need is established. This process will be monitored and reviewed to determine the most efficient and economic distribution method for the NHS Supply Chain,” she added.


Drone development should focus on social good first, says UK report


A UK government-backed drone innovation project that’s exploring how unmanned aerial vehicles could benefit cities — including for use-cases such as medical delivery, traffic incident response, fire response, and construction and regeneration — has reported early learnings from the first phase of the project.

Five city regions are being used as drone test-beds as part of Nesta’s Flying High Challenge — namely London, the West Midlands, Southampton, Preston and Bradford.

Five socially beneficial use-cases for drone technology have been analyzed as part of the project so far, including their technical, social and economic implications.

The project has been ongoing since December.

Nesta, the innovation-focused charity behind the project and the report, wants the UK to become a global leader in shaping drone systems that place people’s needs first, and writes in the report that: “Cities must shape the future of drones: Drones must not shape the future of cities.”

In the report it outlines some of the challenges facing urban implementations of drone technology and also makes some policy recommendations.

It also says that socially beneficial use-cases have emerged as an early winner in opening cities to the potential of the tech — over and above “commercial or speculative” applications such as drone delivery or carrying people in flying taxis.

The five use-cases explored thus far via the project are:

  • Medical delivery within London — a drone delivery network for carrying urgent medical products between NHS facilities, which would routinely carry products such as pathology samples, blood products and equipment over relatively short distances between hospitals in a network
  • Traffic incident response in the West Midlands — responding to traffic incidents in the West Midlands to support the emergency services prior to their arrival and while they are on-site, allowing them to allocate the right resources and respond more effectively
  • Fire response in Bradford — emergency response drones for West Yorkshire Fire and Rescue service. Drones would provide high-quality information to support emergency call handlers and fire ground commanders, arriving on the scene faster than is currently possible and helping staff plan an appropriate response for the seriousness of the incident
  • Construction and regeneration in Preston — drone services supporting construction work for urban projects. This would involve routine use of drones prior to and during construction, in order to survey sites and gather real-time information on the progress of works
  • Medical delivery across the Solent — linking Southampton across the Solent to the Isle of Wight using a delivery drone. Drones could carry light payloads of up to a few kilos over distances of around 20 miles, with medical deliveries of products being a key benefit

Flagging up technical and regulatory challenges to scaling the use of drones beyond a few interesting experiments, Nesta writes: “In complex environments, flight beyond the operator’s visual line of sight, autonomy and precision flight are key, as is the development of an unmanned traffic management (UTM) system to safely manage airspace. In isolation these are close to being solved — but making these work at large scale in a complex urban environment is not.”

“While there is demand for all of the use cases that were investigated, the economics of the different use cases vary: Some bring clear cost savings; others bring broader social benefits. Alongside technological development, regulation needs to evolve to allow these use cases to operate. And infrastructure like communications networks and UTM systems will need to be built,” it adds.

The report also emphasizes the importance of public confidence, writing that: “Cities are excited about the possibilities that drones can bring, particularly in terms of critical public services, but are also wary of tech-led buzz that can gloss over concerns of privacy, safety and nuisance. Cities want to seize the opportunity behind drones but do it in a way that responds to what their citizens demand.”

And the charity makes an urgent call for the public to be brought into discussions about the future of drones.

“So far the general public has played very little role,” it warns. “There is support for the use of drones for public benefit such as for the emergency services. In the first instance, the focus on drone development should be on publicly beneficial use cases.”

Given the combined (and intertwined) complexity of the regulatory, technical and infrastructure challenges standing in the way of viable drone service implementations, Nesta is also recommending the creation of testbeds in which drone services can be developed with the “facilities and regulatory approvals to support them”.

“Regulation will also need to change: Routine granting of permission must be possible, blanket prohibitions in some types of airspace must be relaxed, and an automated system of permissions — linked to an unmanned traffic management system — needs to be put in place for all but the most challenging uses. And we will need a learning system to share progress on regulation and governance of the technology, within the UK and beyond, for instance with Eurocontrol,” it adds.

“Finally, the UK will need to invest in infrastructure, whether this is done by the public or private sector, to develop the communications and UTM infrastructure required for widespread drone operation.”

In conclusion Nesta argues there is “clear evidence that drones are an opportunity for the UK” — pointing to the “hundreds” of companies already operating in the sector; and to UK universities with research strengths in the area; as well as suggesting public authorities could save money or provide “new and better services thanks to drones”.

At the same time it warns that UK policy responses to drones are lagging those of “leading countries” — suggesting the country could squander the chance to properly develop some early promise.

“The US, EU, China, Switzerland and Singapore in particular have taken bigger steps towards reforming regulations, creating testbeds and supporting businesses with innovative ideas. The prize, if we get this right, is that we shape this new technology for good — and that Britain gets its share of the economic spoils.”

You can read the full report here.


Documents detail DeepMind’s plan to apply AI to NHS data in 2015


More details have emerged about a controversial 2015 patient data-sharing arrangement between Google DeepMind and a UK National Health Service Trust, and they paint a contrasting picture to the pair’s public narrative about their intended use of 1.6 million citizens’ medical records.

DeepMind and the Royal Free NHS Trust signed their initial information sharing agreement (ISA) in September 2015 — ostensibly to co-develop a clinical task management app, called Streams, for early detection of an acute kidney condition using an NHS algorithm.

Patients whose fully identifiable medical records were being shared with the Google-owned company were neither asked for their consent nor informed their data was being handed to the commercial entity.

Indeed, the arrangement was only announced to the public five months after it was inked — and months after patient data had already started to flow.

And it was only fleshed out in any real detail after a New Scientist journalist obtained and published the ISA between the pair, in April 2016 — revealing for the first time, via a Freedom of Information request, quite how much medical data was being shared for an app that targets a single condition.

This led to an investigation being opened by the UK’s data protection watchdog into the legality of the arrangement. And as public pressure mounted over the scope and intentions behind the medical records collaboration, the pair stuck to their line that patient data was not being used for training artificial intelligence.

They also claimed they did not need to seek patient consent for their medical records to be shared because the resulting app would be used for direct patient care — a claimed legal basis that has since been demolished by the ICO, which concluded a more than year-long investigation in July.

However a series of newly released documents shows that applying AI to the patient data was in fact a goal for DeepMind right from the earliest months of its partnership with the Royal Free — with its intention being to utilize the wide-ranging access to and control of publicly-funded medical data it was being granted by the Trust to simultaneously develop its own AI models.

In a FAQ note on its website when it publicly announced the collaboration, in February 2016, DeepMind wrote: “No, artificial intelligence is not part of the early-stage pilots we’re announcing today. It’s too early to determine where AI could be applied here, but it’s certainly something we are excited about for the future.”

Omitted from that description of its plans was the fact it had already received a favorable ethical opinion from an NHS Health Research Authority research ethics committee to run a two-year AI research study on the same underlying NHS patient data.

DeepMind’s intent was always to apply AI

The newly released documents, obtained via an FOI filed by health data privacy advocacy organization medConfidential, show DeepMind made an ethics application for an AI research project using Royal Free patient data in October 2015 — with the stated aim of “using machine learning to improve prediction of acute kidney injury and general patient deterioration”.

Earlier still, in May 2015, the company gained confirmation from an insurer that it would cover its potential liability for the research project — coverage it subsequently noted having in place in its project application.

And the NHS ethics board granted DeepMind’s AI research project application in November 2015 — with the two-year AI research project scheduled to start in December 2015 and run until December 2017.

A brief outline of the approved research project was previously published on the Health Research Authority’s website, per its standard protocol, but the FOI reveals more details about the scope of the study — which is summarized in DeepMind’s application as follows:

By combining classical statistical methodology and cutting-edge machine learning algorithms (e.g. ‘unsupervised and semi-supervised learning’), this research project will create improved techniques of data analysis and prediction of who may get AKI [acute kidney injury], more accurately identify cases when they occur, and better alert doctors to their presence.

DeepMind’s application claimed that the existing NHS algorithm, which it was deploying via the Streams app, “appears” to be missing and misclassifying some cases of AKI, and generating false positives — and went on to suggest: “The problem is not with the tool which DeepMind have made, but with the algorithm itself. We think we can overcome these problems, and create a system which works better.”

At the time it wrote this application, in October 2015, user tests of the Streams app had not yet begun — so it’s unclear how DeepMind could so confidently assert there was no “problem” with a tool it hadn’t yet tested. Presumably it was attempting to convey information about (what it claimed were) “major limitations” in the workings of the NHS’ national AKI algorithm, as passed on to it by the Royal Free.

(For the record: In an FOI response that TechCrunch received back from the Royal Free in August 2016, the Trust told us that the first Streams user tests were carried out on 12-14 December 2015. It further confirmed: “The application has not been implemented outside of the controlled user tests.”)

Most interestingly, DeepMind’s AI research application shows it told the NHS ethics board that it could process NHS data for the study under “existing information sharing agreements” with the Royal Free.

“DeepMind acting as a data processor, under existing information sharing agreements with the responsible care organisations (in this case the Royal Free Hospitals NHS Trust), and providing existing services on identifiable patient data, will identify and anonymize the relevant records,” the Google division wrote in the research application.

The fact that DeepMind had taken active steps to gain approval for AI research on the Royal Free patient data as far back as fall 2015 flies in the face of all the subsequent assertions made by the pair to the press and public — when they claimed the Royal Free data was not being used to train AI models.

For instance, here’s what this publication was told in May last year, after the scope of the data being shared by the Trust with DeepMind had just emerged (emphasis mine):

DeepMind confirmed it is not, at this point, performing any machine learning/AI processing on the data it is receiving, although the company has clearly indicated it would like to do so in future. A note on its website pertaining to this ambition reads: “[A]rtificial intelligence is not part of the early-stage pilots we’re announcing today. It’s too early to determine where AI could be applied here, but it’s certainly something we are excited about for the future.”

The Royal Free spokesman said it is not possible, under the current data-sharing agreement between the trust and DeepMind, for the company to apply AI technology to these data-sets and data streams.

That type of processing of the data would require another agreement, he confirmed.

“The only thing this data is for is direct patient care,” he added. “It is not being used for research, or anything like that.”

As the FOI makes clear, and contrary to the Royal Free spokesman’s claim, DeepMind had in fact been granted ethical approval by the NHS Health Research Authority in November 2015 to conduct AI research on the Royal Free patient data-set — with DeepMind in control of selecting and anonymizing the PID (patient identifiable data) intended for this purpose.

Conducting research on medical data would clearly not constitute an act of direct patient care — which was the legal basis DeepMind and the Royal Free were at the time claiming for their reliance on implied consent of NHS patients to their data being shared. So, in seeking to paper over the erupting controversy about how many patients’ medical records had been shared without their knowledge or consent, it appears the pair felt the need to publicly de-emphasize their parallel AI research intentions for the data.

“If you have been given data, and then anonymise it to do research on, it’s disingenuous to claim you’re not using the data for research,” said Dr Eerke Boiten, a cybersecurity professor at De Montfort University whose research interests encompass data privacy and ethics, when asked for his view on the pair’s modus operandi here.

“And [DeepMind] as computer scientists, some of them with a Ross Anderson pedigree, they should know better than to believe in ‘anonymised medical data’,” he added — a reference to how trivially easy it has proven to be to re-identify sensitive medical data once it’s handed over to third parties, who can triangulate identities using all sorts of other data holdings.

Also commenting on what the documents reveal, Phil Booth, coordinator of medConfidential, told us: “What this shows is that Google ignored the rules. The people involved have repeatedly claimed ignorance, as if they couldn’t use a search engine. Now it appears they were very clear indeed about all the rules and contractual arrangements; they just deliberately chose not to follow them.”

Asked to respond to criticism that it has deliberately ignored NHS’ information governance rules, a DeepMind spokeswoman said the AI research being referred to “has not taken place”.

“To be clear, no research project has taken place and no AI has been applied to that dataset. We have always said that we would like to undertake research in future, but the work we are delivering for the Royal Free is solely what has been said all along — delivering Streams,” she added.

She also pointed to a blog post the company published this summer after the ICO ruled that the 2015 ISA with the Royal Free had broken UK data protection laws — in which DeepMind admits it “underestimated the complexity of NHS rules around patient data” and failed to adequately listen and “be accountable to and [be] shaped by patients, the public and the NHS as a whole”.

“We made a mistake in not publicising our work when it first began in 2015, so we’ve proactively announced and published the contracts for our subsequent NHS partnerships,” it wrote in July.

“We do not foresee any major ethical… issues”

In one of the sections of DeepMind’s November 2015 AI research study application form, which asks for “a summary of the main ethical, legal or management issues arising from the research project”, the company writes: “We do not foresee any major ethical, legal or management issues.”

Clearly, with hindsight, the data-sharing partnership would quickly run into major ethical and legal problems. So that’s a pretty major failure of foresight by the world’s most famous AI-building entity. (It’s worth noting, though, that the rest of a fuller response in this section has been entirely redacted — presumably DeepMind is discussing what it considers lesser issues there.)

The application also reveals that the company intended not to register the AI research in a public database — bizarrely claiming that “no appropriate database exists for work such as this”.

In this section the application form includes the following guidance note for applicants: “Registration of research studies is encouraged wherever possible”, and goes on to suggest various possible options for registering a study — such as via a partner NHS organisation; in a register run by a medical research charity; or via publishing through an open access publisher.

DeepMind makes no additional comment on any of these suggestions.

When we asked the company why it had not intended to register the AI research, the spokeswoman reiterated that “no research project has taken place”, and added: “A description of the initial HRA [Health Research Authority] application is publicly available on the HRA website.”

Evidently the company — whose parent entity Google’s corporate mission statement claims it wants to ‘organize the world’s information’ — was in no rush to more widely distribute its plans for applying AI to NHS data at this stage.

Details of the size of the study have also been redacted in the FOI response so it’s not possible to ascertain how many of the 1.6M medical records DeepMind intended to use for the AI research, although the document does confirm that children’s medical records would be included in the study.

The application confirms that Royal Free NHS patients who have previously opted out of their data being used for any medical research would be excluded from the AI study (as would be required by UK law).

As noted above, DeepMind’s application also specifies that the company would be both handling fully identifiable patient data from the Royal Free, for the purposes of developing the clinical task management app Streams, and also identifying and anonymizing a sub-set of this data to run its AI research.

This could well raise additional questions over whether the level of control DeepMind was being afforded by the Trust over patients’ data is appropriate for an entity that is described as occupying the secondary role of data processor — vs the Royal Free claiming it remains the data controller.

“A data processor does not determine the purpose of processing — a data controller does,” said Boiten, commenting on this point. “‘Doing AI research’ is too aspecific as a purpose, so I find it impossible to view DeepMind as only a data processor in this scenario,” he added.

One thing is clear: When the DeepMind-Royal Free collaboration was publicly revealed with much fanfare, the fact they had already applied for and been granted ethical approval to perform AI research on the same patient data-set was not, in their view, something that merited detailed public discussion. Which is a huge miscalculation when you’re trying to win the public’s trust for the sharing of their most sensitive personal data.

Asked why it had not informed the press or the public about the existence and status of the research project at the time, a DeepMind spokeswoman failed to directly respond to the question — instead she reiterated that: “No research is underway.”

DeepMind and the Royal Free both claim that, despite receiving a favorable ethical opinion on the AI research application in November 2015 from the NHS ethics committee, additional approvals would have been required before the AI research could have gone ahead.

“A favourable opinion from a research ethics committee does not constitute full approval. This work could not take place without further approvals,” the DeepMind spokeswoman told us.

“The AKI research application has initial ethical approval from the national research ethics service within the Health Research Authority (HRA), as noted on the HRA website. However, DeepMind does not have the next step of approval required to proceed with the study — namely full HRA approval (previously called local R&D approval).

“In addition, before any research could be done, DeepMind and the Royal Free would also need a research collaboration agreement,” she added.

The HRA’s letter to DeepMind confirming its favorable opinion on the study does indeed note:

Management permission or approval must be obtained from each host organisation prior to the start of the study at the site concerned.

Management permission (“R&D approval”) should be sought from all NHS organisations involved in the study in accordance with NHS research governance arrangements

However since the proposed study was to be conducted purely on a database of patient data, rather than at any NHS locations, and given that the Royal Free already had an information-sharing arrangement inked in place with DeepMind, it’s not clear exactly what additional external approvals they were awaiting.

The original (now defunct and ICO-sanctioned) ISA between the pair does include a paragraph granting DeepMind the ability to anonymize the Royal Free patient data-set “for research” purposes. And although this clause lists several bodies, one of which it says would also need to approve any projects under “formal research ethics”, the aforementioned HRA (“the National Research Ethics Service”) is included in this list.

So again, it’s not clear whose rubberstamp they would still have required.

The value of transparency

At the same time, it’s clear that transparency is a preferred principle of medical research ethics — hence the NHS encouraging those filling in research applications to publicly register their studies.

A UK government-commissioned life science strategy review, published this week, also emphasizes the importance of transparency in engendering and sustaining public trust in health research projects — arguing it’s an essential component for furthering the march of digital innovation.

The same review also recommends that the UK government and the NHS take ownership of training health AIs on taxpayer-funded health data-sets — precisely to avoid corporate entities coming in and asset-stripping potential future medical insights.

(“Most of the value is the data,” asserts review author Sir John Bell, an Oxford University professor of medicine. Data that, in DeepMind’s case, has so far been freely handed over by multiple NHS organizations — in June, for example, it emerged that Taunton & Somerset, another NHS Trust that has inked a five-year data-sharing deal with DeepMind, is not paying the company for the duration of the contract unless, in the unlikely eventuality, service support exceeds £15,000 a month. So essentially DeepMind is being ‘paid’ with access to NHS patients’ data.)

Even before the ICO’s damning verdict, the original ISA between DeepMind and the Royal Free had been extensively criticized for lacking robust legal and ethical safeguards on how patient data could be used. (Even as DeepMind’s co-founder Mustafa Suleyman tried to brush off criticism, saying negative headlines were the result of “a group with a particular view to peddle”.)

But after the original controversy flared the pair subsequently scrapped the agreement and replaced it, in November 2016, with a second data-sharing contract which included some additional information governance concessions — while also continuing to share largely the same quantity and types of identifiable Royal Free patient data as before.

Then this July, as noted earlier, the ICO ruled that the original ISA had indeed breached UK privacy law. “Patients would not have reasonably expected their information to have been used in this way, and the Trust could and should have been far more transparent with patients as to what was happening,” it stated in its decision.

The ICO also said it had asked the Trust to commit to making changes to address the shortcomings that the regulator had identified.

In a statement on its website the Trust said it accepted the findings and claimed to have “already made good progress to address the areas where they have concerns”, and to be “doing much more to keep our patients informed about how their data is used”.

“We would like to reassure patients that their information has been in our control at all times and has never been used for anything other than delivering patient care or ensuring their safety,” the Royal Free’s July statement added.

Responding to questions put to it for this report, the Royal Free Hospitals NHS Trust confirmed it was aware of and involved with the 2015 DeepMind AI research study application.

“To be clear, the application was for research on de-personalised data and not the personally identifiable data used in providing Streams,” said a spokeswoman.

“No research project has begun, and it could not begin without further approvals. It is worth noting that fully approved research projects involving de-personalised data generally do not require patient consent,” she added.

At the time of writing the spokeswoman had not responded to follow-up questions asking why, in 2016, it had made such explicit public denials about its patient data being used for AI research, and why it chose not to make public the existing application to conduct AI research at that time — or indeed, at an earlier time.

Another curious facet to this saga involves the group of “independent reviewers” that Suleyman announced the company had signed up in July 2016 — to, as he put it, “examine our work and publish their findings”.

His intent was clearly to try to reset public perceptions of the DeepMind Health initiative after a bumpy start for transparency, consent, information governance and regulatory best practice — with the wider hope of boosting public trust in what an ad giant wanted with people’s medical data by allowing some external eyeballs to roll in and poke around.

What’s curious is that the reviewers make no reference to DeepMind’s AI research study intentions for the Royal Free data-set in their first report — also published this July.

We reached out to the chair of the group, former MP Julian Huppert, to ask whether DeepMind informed the group it was intending to undertake AI research on the same data-set.

Huppert confirmed to us that the group had been aware there was “consideration” of an AI research project using the Royal Free data at the time it was working on its report, but claimed he does not “recall exactly” when the project was first mentioned or by whom.

“Both the application and the decision not to go ahead happened before the panel was formed,” he said, by way of explanation for the memory lapse.

Asked why the panel did not think the project worth mentioning in its first annual report, he told TechCrunch: “We were more concerned with looking at work that DMH had done and were planning to do, than things that they had decided not to go ahead with.”

“I understand that no work was ever done on it. If this project were to be taken forward, there would be many more regulatory steps, which we would want to look at,” he added.

In their report the independent reviewers do flag up some issues of concern regarding DeepMind Health’s operations — including potential security vulnerabilities around the company’s handling of health data.

For example, a datacenter server build review report, conducted by an external auditor looking at part of DeepMind Health’s critical infrastructure on behalf of the external reviewers, identified what it judged a “medium risk vulnerability” — noting that: “A large number of files are present which can be overwritten by any user on the reviewed servers.”

“This could allow a malicious user to modify or replace existing files to insert malicious content, which would allow attacks to be conducted against the servers storing the files,” the auditor added.

Asked how DeepMind Health will work to regain NHS patients’ trust in light of such a string of transparency and regulatory failures to-date, the spokeswoman provided the following statement: “Over the past eighteen months we’ve done a lot to try to set a higher standard of transparency, appointing a panel of Independent Reviewers who scrutinise our work, embarking on a patient involvement program, proactively publishing NHS contracts, and building tools to enable better audits of how data is used to support care. In our recently signed partnership with Taunton and Somerset NHS Trust, for example, we committed to supporting public engagement activity before any patient data is transferred for processing. And at our recent consultation events in London and Manchester, patients provided feedback on DeepMind Health’s work.”

Asked whether it had informed the independent reviewers about the existence of the AI research application, the spokeswoman declined to respond directly. Instead she repeated the prior line that: “No research project is underway.”
