Timesdelhi.com

February 24, 2019
Category archive

medical imaging

Two former Qualcomm engineers are using AI to fix China’s healthcare problem

Artificial intelligence is widely heralded as a technology that could disrupt the jobs market across the board, potentially eating into careers as varied as accounting, advertising and journalism. But there are some industries in dire need of assistance where AI could make a wholly positive impact, and healthcare is a core one.

Despite being the world’s second-largest economy, China is still coping with a serious shortage of medical resources. In 2015, the country had 1.8 physicians per 1,000 citizens, according to data compiled by the Organization for Economic Cooperation and Development. That figure put China behind the U.S. at 2.6 and well below the OECD average of 3.4.

The undersupply means a nation of overworked doctors who constantly struggle to finish screening patient scans. Misdiagnoses inevitably follow. Spotting the demand, forward-thinking engineers and healthcare professionals have moved to apply deep learning to the analysis of medical images. Research firm IDC estimates that the market for AI-aided medical diagnosis and treatment in China crossed 183 million yuan ($27 million) in 2017 and is expected to reach 5.88 billion yuan ($870 million) by 2022.

One up-and-comer in the sector is 12 Sigma, a San Diego-based startup founded by two former Qualcomm engineers with research teams in China. The company is competing against Yitu, Infervision and a handful of other well-funded Chinese startups that help doctors detect cancerous cells from medical scans. Between January and May last year alone, more than 10 Chinese companies with such a focus secured funding rounds of over 10 million yuan ($1.48 million), according to startup data provider Iyiou. 12 Sigma itself racked up a 200 million yuan Series B round at the end of 2017 and is mulling a new funding round as it looks to ramp up its sales team and develop new products, the company told TechCrunch.

“2015 was to artificial intelligence what 1995 was to the Internet. It was the dawn of a revolution,” recalled Zhong Xin, who quit his management role at Qualcomm and went on to launch 12 Sigma in 2015. At the time, AI was creeping into virtually all facets of life, from public security and autonomous driving to agriculture, education and finance. Zhong took a bet on healthcare.

“For most industries, the AI technology might be available, but there isn’t really a pressing problem to solve. You are creating new demand there. But with healthcare, there is a clear problem, that is, how to more efficiently spot diseases from a single image,” the chief executive added.

An engineer named Gao Dashan who had worked closely with Zhong at Qualcomm’s U.S. office on computer vision and deep learning soon joined as the startup’s technology head. The pair both attended China’s prestigious Tsinghua University, another experience that boosted their sense of camaraderie.

Aside from the potential financial rewards, the founders also felt an urge to start something on their own as they entered their 40s. “We were too young to join the Internet boom. If we don’t create something now for the AI era, it will be too late for us to be entrepreneurs,” admitted Zhong who, with age, also started to recognize the vulnerability of life. “We see friends and relatives with cancers get diagnosed too late and end up passing away. The more I see this happen, the more strongly I feel about getting involved in healthcare to give back to society.”

A three-tier playbook

12 Sigma and its peers may be powering ahead with their advanced imaging algorithms, but the real challenge is how to get China’s tangled mix of healthcare facilities to pay for novel technologies. Infervision, which TechCrunch wrote about earlier, stations programmers and sales teams at hospitals to mingle with doctors and learn their needs. 12 Sigma deploys the same on-the-ground strategy to crack the intricate network.

Zhong Xin, Co-founder and CEO of 12 Sigma / Photo source: 12 Sigma

“Social dynamics vary from region to region. We have to build trust with local doctors. That’s why we recruit salespeople locally. That’s the foundation. Then we begin by tackling the tertiary hospitals. If we manage to enter these hospitals,” said Zhong, referring to the top public hospitals in China’s three-tier medical system, “those partnerships will boost our brand and give us greater bargaining power to go after the smaller ones.”

For that reason, the tertiary hospitals are crowded with earnest startups like 12 Sigma as well as tech giants like Tencent, which has a dedicated medical imaging unit called Miying. None of these providers charges the top-tier hospitals for using its image processors because “they could easily switch over to another brand,” suggested Gao.

Instead, 12 Sigma has its eyes on the second-tier hospitals. As of last April, China had about 30,000 hospitals, out of which 2,427 were rated tertiary, according to a survey done by the National Health and Family Planning Commission. The second tier, serving a wider base in medium-sized cities, had a network of 8,529 hospitals. 12 Sigma believes these facilities are where it could achieve most of its sales by selling device kits and charging maintenance fees in the future.

The bottom tier had 10,135 primary hospitals, which tend to be concentrated in small towns and lack the financial capacity to pay one-off device fees. As such, 12 Sigma plans to monetize these regions with a pay-per-use model.

So far, the medical imaging startup has about 200 hospitals across China testing its devices — for free. It has sold only 10 machines, generating several million yuan in revenue, while very few of its rivals have achieved any sales at all, according to Gao. At this stage, the key is to glean enough data so the startup’s algorithms get good enough to convince hospital administrators the machines are worth the investment. The company is targeting 100 million yuan ($14.8 million) in sales for 2019 and aims to break even by 2020.

China’s relatively lax data protection policy means entrepreneurs have easier access to patient scans compared to their peers in the West. Working with American hospitals has proven “very difficult” due to the country’s privacy protection policies, said Gao. They also come with a different motive. While China seeks help from AI to solve its doctor shortage, American hospitals place a larger focus on AI’s economic returns.

“The healthcare system in the U.S. is much more market-driven. Though doctors could be more conservative about applying AI than those in China, as soon as we prove that our devices can boost profitability, reduce misdiagnoses and lower insurance expenditures, health companies are keen to give it a try,” said Gao.

News Source = techcrunch.com

Healthcare by 2028 will be doctor-directed, patient-owned and powered by visual technologies

Visual assessment is critical to healthcare – whether that is a doctor peering down your throat as you say “ahhh” or an MRI of your brain. Since the X-ray was invented in 1895, medical imaging has evolved into many modalities that empower clinicians to see into and assess the human body. Recent advances in visual sensors, computer vision and compute power are currently powering a new wave of innovation in legacy visual technologies (like the X-ray and MRI) and sparking entirely new realms of medical practice, such as genomics.

Over the next 10 years, healthcare workflows will become mostly digitized, with wide swaths of personal data captured and computer vision, along with artificial intelligence, automating the analysis of that data for precision care. Much of the digitized data across healthcare will be visual and the technologies that capture and analyze it are visual technologies.

These visual technologies traverse a patient’s journey from diagnosis, to treatment, to continuing care and prevention. They capture, analyze, process, filter and manage any visual data from images, videos, thermal imaging, X-rays, ultrasound, MRI, CT scans, 3D imaging and more. Computer vision and artificial intelligence are core to the journey.

Three powerful trends — the miniaturization of diagnostic imaging devices, next-generation imaging for the earliest stages of disease detection and virtual medicine — are shaping the ways in which visual technologies are poised to improve healthcare over the next decade.

Miniaturization of Hardware, Along with Computer Vision and AI, Will Allow Diagnostic Imaging to Be Mobile

Medical imaging is dominated by large incumbents that are slow to innovate. Most imaging devices (e.g. MRI machines) have not changed substantially since the 1980s and still have major limitations:

  • Complex workflows: large, expensive machines that require expert operators and have limited compatibility in hospitals.

  • Strict patient requirements: such as lying still or holding their breath (a problem for cases such as pediatrics or elderly patients).

  • Expensive solutions: limited to large hospitals and imaging facilities.

But thanks to innovations in visual sensors and AI algorithms, “modern medical imaging is in the midst of a paradigm shift, from large carefully-calibrated machines to flexible, self-correcting, multi-sensor devices” says Daniel K. Sodickson, MD, PhD, NYU School of Medicine, Department of Radiology.

A glove-shaped MRI detector proved capable of capturing images of moving fingers. ©NYU Langone Health

Visual data capture will be done with smaller, easier to use devices, allowing imaging to move out of the radiology department and into the operating room, the pharmacy and your living room.

Smaller sensors and computer vision-enabled image capture will lead to imaging devices redesigned at a fraction of their current size, with:

  • Simpler imaging process: with quicker workflows and lower costs.

  • Lower expertise requirements: less complexity will move imaging from the radiology department to anywhere the patient is.

  • Live imaging via ingestible cameras: innovations include powering ingestibles via stomach acid and using bacteria for chemical detection, making live imaging feasible in a wider range of cases.

“The use of synthetic neural network-based implementations of human perceptual learning enables an entire class of low-cost imaging hardware and can accelerate and improve existing technologies,” says Matthew Rosen, PhD, MGH/Martinos Center at Harvard Medical School.

Matthew Rosen and his colleagues at the Martinos Center for Biomedical Imaging in Boston want to liberate the MRI. ©Matthew Rosen

Next Generation Sequencing, Phenotyping and Molecular Imaging Will Diagnose Disease Before Symptoms Appear

Genomics, the sequencing of DNA, has grown at a 200% CAGR since 2015, propelled by Next Generation Sequencing (NGS), which uses optical signals to read DNA. One example is our LDV portfolio company Geniachip, which was acquired by Roche. These techniques are helping genomics become a mainstream tool for practitioners, and will hopefully make carrier screening part of routine patient care by 2028.

Liquid biopsies, in which blood, urine or saliva is tested for tumor DNA or RNA to identify the genetic makeup of a disease, are poised to take a prime role in early cancer screening. The company GRAIL, for instance, raised $1B for a cancer blood test that uses NGS and deep learning to detect circulating tumor DNA before a lesion is identified.

Phenomics, the analysis of observable traits (phenotypes) that result from interactions between genes and their environment, will also contribute to earlier disease detection. Phenotypes are expressed physiologically and most will require imaging to be detected and analyzed.

Next Generation Phenotyping (NGP) uses computer vision and deep learning to analyze physiological data, understand particular phenotype patterns and then correlate those patterns to genes. For example, FDNA’s Face2Gene technology can identify 300-400 disorders with 90%+ accuracy using images of a patient’s face. Additional data (images or videos of hands, feet, ears and eyes) can allow NGP to detect a wider range of disorders, earlier than ever before.
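
To make the mechanics concrete, here is a minimal sketch of how an image-based phenotype classifier might be assembled in Python with PyTorch. It is purely illustrative: the ResNet-18 backbone, the 300-class output and the input size are assumptions made for the example, not FDNA’s actual pipeline.

    # Hypothetical image-based phenotype classifier, sketched with PyTorch.
    # The backbone, class count and input size are illustrative assumptions,
    # not Face2Gene's real architecture.
    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_DISORDERS = 300  # hypothetical number of detectable disorders

    model = models.resnet18(weights=None)  # untrained backbone for the sketch
    model.fc = nn.Linear(model.fc.in_features, NUM_DISORDERS)
    model.eval()

    # A cropped, aligned 224x224 face photo would be used in practice;
    # random pixels stand in here so the sketch runs on its own.
    face = torch.randn(1, 3, 224, 224)

    with torch.no_grad():
        probs = torch.softmax(model(face), dim=1)

    top5 = torch.topk(probs, k=5)
    print("Top-5 candidate disorder indices:", top5.indices.tolist())

In a real system the hard work is in the curated training data and labels; the network itself is a fairly standard image classifier.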

Molecular imaging uses DNA nanotech probes to quantitatively visualize chemicals inside of cells, thus measuring the chemical signature of diseases. This approach may enable early detection of neurodegenerative diseases such as Alzheimer’s, Parkinson’s and dementia.

Telemedicine to Overtake Brick-and-Mortar Doctor Visits

By 2028 it will be more common to visit the doctor via video over your phone or computer than it will be to go to an office.

Telemedicine will make medical practitioners more accessible and easier to communicate with. It will create a fully digitized health record of visits for a patient’s profile, and it will reduce the costs of logistics and regional gaps in specific medical expertise. One example is the telemedicine services rendered to the 1.9 million people injured in the war in Syria.

The integration of telemedicine into ambulances has led to stroke patients being treated twice as fast. Doctors will increasingly call in their colleagues and specialists in real time.

Screening technologies will be integrated into telemedicine so it won’t just be about video calling a doctor. Pre-screening your vitals via remote cameras will deliver extensive efficiencies and hopefully health benefits.

“The biggest opportunity in visual technology in telemedicine is in solving specific use cases. Whether it be detecting your pulse, blood pressure or eye problems, visual technology will be key to collecting data,” says Jeff Nadler of Teladoc Health.

Remote patient monitoring (RPM) will be a major factor in the growth of telemedicine and the overall personalization of care. RPM devices, like we are seeing with the Apple Watch, will be a primary source of real-time patient data used to make medical decisions that take into account everyday health and lifestyle factors. This personal data will be collected and owned by patients themselves and provided to doctors.

Visual Tech Will Power the Transformation of Healthcare Over the Next Decade

Visual technologies have deep implications for the future of personalized healthcare and will hopefully improve the health of people worldwide. They also represent unique investment opportunities, and we at LDV Capital have reviewed over 100 research reports from BCC Research, CBInsights, Frost & Sullivan, McKinsey, Wired, IEEE Spectrum and many more to compile our 2018 LDV Capital Insights report. The report highlights the sectors with the power to improve healthcare, based on the transformative nature of the technology in each sector, its projected growth and the business opportunity.

There are tremendous investment opportunities in visual technologies across diagnosis, treatment and continuing care & prevention that will help make people healthier across the globe.

News Source = techcrunch.com

China’s Infervision is helping 280 hospitals worldwide detect cancers from images

Until recently, humans have relied on the trained eyes of doctors to diagnose diseases from medical images.

Beijing-based Infervision is among a handful of artificial intelligence startups around the world racing to improve medical imaging analysis through deep learning, the same technology that powers face recognition and autonomous driving.

The startup, which has to date raised $70 million from leading investors like Sequoia Capital China, began by picking out cancerous lung cells, a prevalent cause of death in China. At the Radiological Society of North America’s annual conference in Chicago this week, the three-year-old company announced it is extending its computer vision prowess to other chest-related conditions like cardiac calcification.

“By adding more scenarios under which our AI works, we are able to offer more help to doctors,” Chen Kuan, founder and chief executive officer of Infervision, told TechCrunch. While a doctor can spot dozens of diseases from one single image scan, AI needs to be taught how to identify multiple target objects in one go.

But Chen says machines already outstrip humans in other aspects. For one, they are much faster readers. It normally takes doctors 15 to 20 minutes to scrutinize one image, whereas Infervision’s AI can process the visuals and put together a report in under 30 seconds.

AI also addresses the long-standing issue of misdiagnosis. Chinese clinical newspaper Medical Weekly reported that doctors with less than five years’ experience got their answers right only 44 percent of the time when diagnosing black lung, a disease common among coal miners. A study from Zhejiang University that examined autopsies between 1950 and 2009 found that the total clinical misdiagnosis rate averaged 46 percent.

“Doctors work long hours and are constantly under tremendous stress, which can lead to errors,” suggested Chen.

The founder claimed that his company is able to improve the accuracy rate by 20 percent. AI can also fill in for doctors in remote hinterlands where healthcare provision falls short, which is often the case in China.

Winning the first client

A report on bone fractures produced by Infervision’s medical imaging tool

Like any deep learning company, Infervision needs to keep training its algorithms with data from varied sources. As of this week, the startup is working with 280 hospitals – twenty of which are outside China – and is steadily adding a dozen new partners weekly. It also claims that 70 percent of China’s top-tier hospitals use its lung-specific AI tool.

But the firm has had a rough start.

Chen, a native of Shenzhen in south China, founded Infervision after dropping out of his doctoral program at the University of Chicago where he studied under Nobel-winning economist James Heckman. For the first six months of his entrepreneurial journey, Chen knocked on the doors of 40 hospitals across China — to no avail.

“Medical AI was still a novelty then. Hospitals are by nature conservative because they have to protect patients, which makes them reluctant to partner with outsiders,” Chen recalled.

Eventually, Sichuan Provincial People’s Hospital gave Infervision a shot. Chen and his two founding members got hold of a small batch of image data, moved into a tiny apartment next to the hospital, and got the company underway.

“We observed how doctors work, explained to them how AI works, listened to their complaints, and iterated our product,” said Chen. Infervision’s product proved adept, and its name soon gathered steam among more healthcare professionals.

“Hospitals are risk-averse, but as soon as one of them likes us, it goes out to spread the word and other hospitals will soon find us. The medical industry is very tight-knit,” the founder said.

It also helps that AI has evolved from a fringe invention into a norm in healthcare over the past few years, and hospitals have started actively seeking help from tech startups.

Infervision has stumbled in its foreign markets as well. In the U.S., for example, Infervision is restricted to visiting doctors only by appointment, which slows down product iteration.

Chen also admitted that many Western hospitals did not trust that a Chinese startup could provide state-of-the-art technology. But they welcomed Infervision in as soon as they found out what it is able to achieve, which is in part thanks to its trove of training data — up to 26,000 images a day.

“Regardless of their technological capability, Chinese startups are blessed with access to mountains of data that no startups elsewhere in the world could match. That’s an immediate advantage,” said Chen.

There’s no lack of rivalry in China’s massive medical industry. Yitu, a pivotal player that also applies its AI to surveillance and fintech, unveiled a cancer detection tool at the Chicago radiological conference this week.

Infervision, which generates revenues by charging fees for its AI solution as a service, says that down the road, it will prioritize product development for conditions that incur higher social costs, such as cerebrovascular and cardiovascular diseases.

News Source = techcrunch.com

Storage provider Cloudian raises $94M

Cloudian, a company that specializes in helping businesses store petabytes of data, today announced that it has raised a $94 million Series E funding round. Investors in this round, which is one of the largest we have seen for a storage vendor, include Digital Alpha, Fidelity Eight Roads, Goldman Sachs, INCJ, JPIC (Japan Post Investment Corporation), NTT DOCOMO Ventures and WS Investments. This round includes a $25 million investment from Digital Alpha, which was first announced earlier this year.

With this, the seven-year-old company has now raised a total of $174 million.

As the company told me, it now has about 160 employees and 240 enterprise customers. Cloudian has found its sweet spot in managing the large video archives of entertainment companies, but its customers also include healthcare companies, automobile manufacturers and Formula One teams.

What’s important to stress here is that Cloudian’s focus is on on-premise storage, not cloud storage, though it does offer support for multi-cloud data management, as well. “Data tends to be most effectively used close to where it is created and close to where it’s being used,” Cloudian VP of worldwide sales Jon Ash told me. “That’s because of latency, because of network traffic. You can almost always get better performance, better control over your data if it is being stored close to where it’s being used.” He also noted that it’s often costly and complex to move that data elsewhere, especially when you’re talking about the large amounts of information that Cloudian’s customers need to manage.

Unsurprisingly, companies that have this much data now want to use it for machine learning, too, so Cloudian is starting to get into this space, as well. As Cloudian CEO and co-founder Michael Tso also told me, companies are now aware that the data they pull in, no matter whether that’s from IoT sensors, cameras or medical imaging devices, will only become more valuable over time as they try to train their models. If they decide to throw the data away, they run the risk of having nothing with which to train their models.

Cloudian plans to use the new funding to expand its global sales and marketing efforts and increase its engineering team. “We have to invest in engineering and our core technology, as well,” Tso noted. “We have to innovate in new areas like AI.”

As Ash also stressed, Cloudian’s business is really data management — not just storage. “Data is coming from everywhere and it’s going everywhere,” he said. “The old-school storage platforms that were siloed just don’t work anywhere.”

News Source = techcrunch.com

NYU and Facebook team up to supercharge MRI scans with AI

Magnetic resonance imaging is an invaluable tool in the medical field, but it’s also a slow and cumbersome process. It may take fifteen minutes to an hour to complete a scan, during which time the patient, perhaps a child or someone in serious pain, must sit perfectly still. NYU has been working on a way to accelerate this process, and is now collaborating with Facebook with the goal of cutting MRI durations by 90 percent by applying AI-based imaging tools.

It’s important at the outset to distinguish this effort from other common uses of AI in the medical imaging field. An X-ray, or indeed an MRI scan, once completed, could be inspected by an object recognition system watching for abnormalities, saving time for doctors and maybe even catching something they might have missed. This project isn’t about analyzing imagery that’s already been created, but rather expediting its creation in the first place.

The reason MRIs take so long is that the machine must create a series of 2D images, or slices, many of which must be stacked up to make a 3D image. Sometimes only a handful are needed, but for full fidelity and depth — for something like a scan for a brain tumor — lots of slices are required.

The FastMRI project, begun in 2015 by NYU researchers, investigates the possibility of creating imagery of a similar quality to a traditional scan, but by collecting only a fraction of the data normally needed.

Think of it like scanning an ordinary photo. You could scan the whole thing… but if you only scanned every other line (this is called “undersampling”) and then intelligently filled in the missing pixels, it would take half as long. And machine learning systems are getting quite good at tasks like that. Our own brains do it all the time: you have blind spots with stuff in them right now that you don’t notice because your vision system is filling in the gaps — intelligently.

The data collected at left could be “undersampled” as at right, with the missing data filled in later

If an AI system could be trained to fill in the gaps from MRI scans where only the most critical data is collected, the actual time during which a patient would have to sit in the imaging tube could be reduced considerably. It’s easier on the patient, and one machine could handle far more people than it does doing a full scan every time, making scans cheaper and more easily obtainable.
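
A toy version of that photo-scanning analogy can be written in a few lines of Python. This is only a sketch of the idea: real MRI undersampling happens in frequency space, and the gap filling is done by a trained network rather than the naive row averaging used here.

    # Toy illustration of undersampling and gap filling, per the photo analogy.
    # Real fastMRI work undersamples in k-space and reconstructs with a
    # learned model; simple row averaging here just makes the idea tangible.
    import numpy as np

    rng = np.random.default_rng(0)
    image = rng.random((8, 8))       # stand-in for a fully sampled scan

    sampled = image.copy()
    sampled[1::2, :] = np.nan        # skip every other row: half the scan time

    filled = sampled.copy()
    for r in range(1, 8, 2):         # fill each skipped row from its neighbors
        below = sampled[r + 1] if r + 1 < 8 else sampled[r - 1]
        filled[r] = (sampled[r - 1] + below) / 2

    err = np.abs(filled - image)[1::2].mean()
    print(f"mean absolute error of the naive fill: {err:.3f}")

Swap the averaging step for a model trained on thousands of scans and you have the gist of the reconstruction problem.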

The NYU School of Medicine researchers began work on this three years ago and published some early results showing that the approach was at least feasible. But like an MRI scan, this kind of work takes time.

“We and other institutions have taken some baby steps in using AI for this type of problem,” explained NYU’s Dan Sodickson, director of the Center for Advanced Imaging Innovation and Research there. “The sense is that already in the first attempts, with relatively simple methods, we can do better than other current acceleration techniques — get better image quality and maybe accelerate further by some percentage, but not by large multiples yet.”

So to give the project a boost, Sodickson and the radiologists at NYU are combining forces with the AI wonks at Facebook and its Artificial Intelligence Research group (FAIR).

NYU School of Medicine’s Department of Radiology chair Michael Recht, MD; Daniel Sodickson, MD, vice chair for research and director of the Center for Advanced Imaging Innovation; and Yvonne Lui, MD, director of artificial intelligence, examine an MRI

“We have some great physicists here and even some hot-stuff mathematicians, but Facebook and FAIR have some of the leading AI scientists in the world. So it’s complementary expertise,” Sodickson said.

And while Facebook isn’t planning on starting a medical imaging arm, FAIR has a pretty broad mandate.

“We’re looking for impactful but also scientifically interesting problems,” said FAIR’s Larry Zitnick. AI-based creation or re-creation of realistic imagery (often called “hallucination”) is a major area of research, but this would be a unique application of it — not to mention one that could help some people.

With a patient’s MRI data, he explained, the generated imagery “doesn’t need to be just plausible, but it needs to retain the same flaws.” So the computer vision agent that fills in the gaps needs to be able to recognize more than just overall patterns and structure, and to be able to retain and even intelligently extend abnormalities within the image. To not do so would be a massive modification of the original data.

Fortunately it turns out that MRI machines are pretty flexible when it comes to how they produce images. If you would normally take scans from 200 different positions, for instance, it’s not hard to tell the machine to do half that, but with a higher density in one area or another. Other imagers like CT and PET scanners aren’t so docile.
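
As a sketch of what such a schedule could look like, the snippet below builds a hypothetical sampling mask that always acquires a dense center band and randomly keeps a fraction of the remaining positions. The numbers and the mask itself are illustrative assumptions; actual acquisition patterns are chosen by the imaging physicists, not by this toy.

    # Hypothetical variable-density sampling mask over 200 scan positions:
    # always acquire a dense center band, randomly keep some of the rest,
    # for roughly half the measurements of a full scan.
    import numpy as np

    rng = np.random.default_rng(42)
    n_positions = 200
    mask = np.zeros(n_positions, dtype=bool)

    mask[80:120] = True                          # dense band: always acquired
    periphery = np.flatnonzero(~mask)
    keep = rng.choice(periphery, size=60, replace=False)
    mask[keep] = True                            # sparse random periphery

    print(f"acquiring {mask.sum()} of {n_positions} positions "
          f"({mask.mean():.0%} of a full scan)")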

Even after a couple years of work the research is still at an early stage. These things can’t be rushed, after all, and with medical data there are ethical considerations and a difficulty in procuring enough data. But the NYU researchers’ groundwork has paid off with initial results and a powerful data set.

Zitnick noted that because AI agents require lots of data to train up to effective levels, it’s a major change going from a set of, say, 500 MRI scans to a set of 10,000. With the former data set you might be able to do a proof of concept, but with the latter you can make something accurate enough to actually use.

The partnership announced today is between NYU and Facebook, but both hope that others will join up.

“We’re working on this out in the open. We’re going to be open-sourcing it all,” said Zitnick. One might expect no less of academic research, but of course a great deal of AI work in particular goes on behind closed doors these days.

So the first steps as a joint venture will be to define the problem, document the data set and release it, create baselines and metrics by which to measure their success, and so on. Meanwhile, the two organizations will be meeting and swapping data regularly and running results past actual clinicians.

“We don’t know how to solve this problem,” Zitnick said. “We don’t know if we’ll succeed or not. But that’s kind of the fun of it.”

News Source = techcrunch.com
