
Timesdelhi.com

May 23, 2019
Category archive: image-processing

MultiVu raises $7M seed round for its next-gen 3D sensor


MultiVu, a Tel Aviv-based startup that is developing a new 3D imaging solution that relies on only a single sensor and some deep learning smarts, today announced that it has raised a $7 million seed round. The round was led by crowdfunding platform OurCrowd, Cardumen Capital and Hong Kong’s Junson Capital.

Tel Aviv University’s TAU Technology Innovation Momentum Fund supported some of the earlier development of MultiVu’s core technology, which came out of Prof. David Mendlovic’s lab at the university. Mendlovic previously co-founded smartphone camera startup Corephotonics, which was recently acquired by Samsung.

The promise of MultiVu’s sensor is that it can offer 3D imaging with a single-lens camera instead of the usual two-sensor setup. This single sensor can extract depth and color data in a single shot.

This makes for a more compact setup and, by extension, a more affordable solution since it requires fewer components. All of this is powered by the company’s patented light field technology.
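MultiVu has not published its architecture. Purely to illustrate the general shape of the problem it describes (one color frame in, a per-pixel depth map out), here is a minimal single-image depth estimation sketch in PyTorch; the DepthNet name and every layer size are illustrative assumptions, not MultiVu’s design.

```python
# A minimal sketch of the general idea only: a tiny encoder-decoder that
# maps one RGB frame to a per-pixel depth map. MultiVu's actual patented
# light field pipeline is proprietary; names and layer sizes are assumed.
import torch
import torch.nn as nn

class DepthNet(nn.Module):  # hypothetical toy model, not MultiVu's design
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),  # depth channel
        )

    def forward(self, rgb):
        return self.decoder(self.encoder(rgb))

frame = torch.rand(1, 3, 128, 128)  # one RGB shot from the single sensor
depth = DepthNet()(frame)           # per-pixel depth estimate
print(depth.shape)                  # torch.Size([1, 1, 128, 128])
```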

Currently, the team is focusing on using the sensor for face authentication in phones and other small devices. That’s obviously a growing market, but there are also plenty of other applications for small 3D sensors, ranging from other security use cases to sensors for self-driving cars.

“The technology, which passed the proof-of-concept stage, will bring 3D Face Authentication and affordable 3D imaging to the mobile, automotive, industrial and medical markets,” MultiVu CEO Doron Nevo said. “We are excited to be given the opportunity to commercialize this technology.”

Right now, though, the team is mostly focused on bringing its sensor to market. The company will use the new funding for that, as well as for new marketing and business development activities.

“We are pleased to invest in the future of 3D sensor technologies and believe that MultiVu will penetrate markets, which until now could not take advantage of costly 3D imaging solutions,” said OurCrowd Senior Investment Partner Eli Nir. “We are proud to be investing in a third company founded by Prof. David Mendlovic (who just recently sold CorePhotonics to Samsung), managed by CEO Doron Nevo – a serial entrepreneur with proven successes and a superb team they have gathered around them.”

News Source = techcrunch.com

Healthcare by 2028 will be doctor-directed, patient-owned and powered by visual technologies


Visual assessment is critical to healthcare – whether that is a doctor peering down your throat as you say “ahhh” or an MRI of your brain. Since the X-ray was invented in 1895, medical imaging has evolved into many modalities that empower clinicians to see into and assess the human body. Recent advances in visual sensors, computer vision and compute power are powering a new wave of innovation in legacy visual technologies (like the X-ray and MRI) and sparking entirely new realms of medical practice, such as genomics.

Over the next 10 years, healthcare workflows will become mostly digitized, with wide swaths of personal data captured, and with computer vision and artificial intelligence automating the analysis of that data for precision care. Much of this digitized data will be visual, and the technologies that capture and analyze it are visual technologies.

These visual technologies traverse a patient’s journey from diagnosis, to treatment, to continuing care and prevention. They capture, analyze, process, filter and manage visual data of every kind: images, videos, thermal imaging, X-rays, ultrasound, MRI, CT scans, 3D and more. Computer vision and artificial intelligence are core to the journey.

Three powerful trends — the miniaturization of diagnostic imaging devices, next-generation imaging for the earliest stages of disease detection and virtual medicine — are shaping the ways in which visual technologies are poised to improve healthcare over the next decade.

Miniaturization of Hardware, Along With Computer Vision and AI, Will Allow Diagnostic Imaging to Be Mobile

Medical imaging is dominated by large incumbents that are slow to innovate. Most imaging devices (e.g. MRI machines) have not changed substantially since the 1980s and still have major limitations:

  • Complex workflows: large, expensive machines that require expert operators and have limited compatibility in hospitals.

  • Strict patient requirements: such as lying still or holding their breath (a problem for cases such as pediatrics or elderly patients).

  • Expensive solutions: limited to large hospitals and imaging facilities.

But thanks to innovations in visual sensors and AI algorithms, “modern medical imaging is in the midst of a paradigm shift, from large carefully-calibrated machines to flexible, self-correcting, multi-sensor devices” says Daniel K. Sodickson, MD, PhD, NYU School of Medicine, Department of Radiology.

A glove-shaped MRI detector proved capable of capturing images of moving fingers. ©NYU Langone Health

Visual data capture will be done with smaller, easier to use devices, allowing imaging to move out of the radiology department and into the operating room, the pharmacy and your living room.

Smaller sensors and computer vision-enabled image capture will lead to imaging devices redesigned at a fraction of their current size, with:

  • Simpler imaging process: with quicker workflows and lower costs.

  • Lower expertise requirements: less complexity will move imaging from the radiology department to anywhere the patient is.

  • Live imaging via ingestible cameras: innovations include powering ingestibles via stomach acid and using bacteria for chemical detection, making live imaging feasible in a wider range of cases.

“The use of synthetic neural network-based implementations of human perceptual learning enables an entire class of low-cost imaging hardware and can accelerate and improve existing technologies,” says Matthew Rosen, PhD, MGH/Martinos Center at Harvard Medical School.

Matthew Rosen and his colleagues at the Martinos Center for Biomedical Imaging in Boston want to liberate the MRI. ©Matthew Rosen

Next Generation Sequencing, Phenotyping and Molecular Imaging Will Diagnose Disease Before Symptoms Are Presented

Genomics, the sequencing of DNA, has grown at a 200% CAGR since 2015, propelled by Next Generation Sequencing (NGS), which uses optical signals to read DNA, as with our LDV portfolio company Geniachip, which was acquired by Roche. These techniques are helping genomics become a mainstream tool for practitioners and will hopefully make carrier screening part of routine patient care by 2028.
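As a quick sanity check on what a 200% CAGR implies (the base triples every year), a few lines of arithmetic:

```python
# 200% CAGR means tripling annually: size_n = size_0 * (1 + 2.0) ** n
base = 1.0
for year in range(2015, 2020):
    print(year, round(base, 1))  # 1.0, 3.0, 9.0, 27.0, 81.0
    base *= 1 + 2.0
```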

Identifying the genetic makeup of a disease via liquid biopsies, where blood, urine or saliva is tested for tumor DNA or RNA, is poised to take a prime role in early cancer screening. The company GRAIL, for instance, raised $1B for a cancer blood test that uses NGS and deep learning to detect circulating tumor DNA before a lesion is identified.
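GRAIL has not published its models. As a loose illustration of the kind of classifier involved (a network scoring sequencing reads for tumor origin), here is a toy 1D convolutional sketch; every name, shape and size here is an assumption.

```python
# Toy sketch only: score one-hot-encoded DNA reads (A, C, G, T channels)
# with a small 1D CNN. Not GRAIL's method; all shapes are assumptions.
import torch
import torch.nn as nn

READ_LEN = 100                       # bases per sequencing read (assumed)
model = nn.Sequential(
    nn.Conv1d(4, 16, kernel_size=7, padding=3), nn.ReLU(),
    nn.AdaptiveMaxPool1d(1), nn.Flatten(),
    nn.Linear(16, 1), nn.Sigmoid(),  # P(read is tumor-derived)
)
reads = torch.zeros(8, 4, READ_LEN)  # batch of 8 one-hot reads
reads[:, 0, :] = 1.0                 # dummy data: all 'A' bases
print(model(reads).shape)            # torch.Size([8, 1])
```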

Phenomics, the analysis of observable traits (phenotypes) that result from interactions between genes and their environment, will also contribute to earlier disease detection. Phenotypes are expressed physiologically and most will require imaging to be detected and analyzed.

Next Generation Phenotyping (NGP) uses computer vision and deep learning to analyze physiological data, understand particular phenotype patterns and then correlate those patterns to genes. For example, FDNA’s Face2Gene technology can identify 300-400 disorders with 90%+ accuracy using images of a patient’s face. Additional data (images or videos of hands, feet, ears, eyes) can allow NGP to detect a wide range of disorders, earlier than ever before.
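FDNA’s models are proprietary; purely to show the shape of the problem (a face image in, one probability per syndrome out), here is a minimal classifier sketch, with every size assumed:

```python
# Illustrative only: a tiny CNN mapping a face crop to scores over N
# disorder classes. Not FDNA's architecture; sizes are assumptions.
import torch
import torch.nn as nn

NUM_DISORDERS = 300                 # order of magnitude from the article
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, NUM_DISORDERS),
)
face = torch.rand(1, 3, 224, 224)   # one aligned face crop
probs = model(face).softmax(dim=1)  # one probability per disorder
print(probs.argmax(dim=1))          # index of the top-ranked syndrome
```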

Molecular imaging uses DNA nanotech probes to quantitatively visualize chemicals inside of cells, thus measuring the chemical signature of diseases. This approach may enable early detection of neurodegenerative diseases such as Alzheimer’s, Parkinson’s and dementia.

Telemedicine to Overtake Brick-and-Mortar Doctor Visits

By 2028 it will be more common to visit the doctor via video over your phone or computer than it will be to go to an office.

Telemedicine will make medical practitioners more accessible and easier to communicate with. It will create an all-digital health record of visits for a patient’s profile, and it will reduce logistics costs and narrow regional gaps in specific medical expertise. One example is the telemedicine services rendered to the 1.9M people injured in the war in Syria.

The integration of telemedicine into ambulances has led to stroke patients being treated twice as fast.  Doctors will increasingly call in their colleagues and specialists in real time.

Screening technologies will be integrated into telemedicine so it won’t just be about video calling a doctor. Pre-screening your vitals via remote cameras will deliver extensive efficiencies and hopefully health benefits.

“The biggest opportunity in visual technology in telemedicine is in solving specific use cases. Whether it be detecting your pulse, blood pressure or eye problems, visual technology will be key to collecting data,” says Jeff Nadler of Teladoc Health.
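As a sketch of one such use case, a pulse can in principle be recovered from video by tracking tiny periodic color changes in facial skin (remote photoplethysmography). A toy version follows, with synthetic data standing in for per-frame green-channel means of a face region; real systems add face tracking and heavy filtering.

```python
# Toy rPPG sketch: FFT the mean green-channel signal of a face region and
# pick the dominant frequency in the plausible heart-rate band.
import numpy as np

fps = 30.0
t = np.arange(300) / fps  # 10 seconds of frames
# Synthetic stand-in for the per-frame skin signal: a 72 bpm (1.2 Hz)
# pulse buried in sensor noise.
signal = 0.01 * np.sin(2 * np.pi * 1.2 * t) + np.random.normal(0, 0.005, t.size)

signal = signal - signal.mean()
freqs = np.fft.rfftfreq(signal.size, d=1 / fps)
power = np.abs(np.fft.rfft(signal)) ** 2
band = (freqs > 0.7) & (freqs < 4.0)        # 42-240 bpm is plausible
bpm = 60 * freqs[band][np.argmax(power[band])]
print(f"estimated pulse: {bpm:.0f} bpm")    # ~72 bpm
```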

Remote patient monitoring (RPM) will be a major factor in the growth of telemedicine and the overall personalization of care. RPM devices, like we are seeing with the Apple Watch, will be a primary source of real-time patient data used to make medical decisions that take into account everyday health and lifestyle factors. This personal data will be collected and owned by patients themselves and provided to doctors.

Visual Tech Will Power the Transformation of Healthcare Over the Next Decade

Visual technologies have deep implications for the future of personalized healthcare and will hopefully improve the health of people worldwide. They also represent unique investment opportunities, and we at LDV Capital have reviewed over 100 research papers from BCC Research, CBInsights, Frost & Sullivan, McKinsey, Wired, IEEE Spectrum and many more to compile our 2018 LDV Capital Insights report. The report highlights the sectors with the power to improve healthcare, based on the transformative nature of the technology in each sector, projected growth and business opportunity.

There are tremendous investment opportunities in visual technologies across diagnosis, treatment and continuing care & prevention that will help make people healthier across the globe.

News Source = techcrunch.com

Google ups the Pixel 3’s camera game with Top Shot, group selfies and more


With the Pixel 2, Google introduced one of the best smartphone cameras ever made. It’s fitting then that the Pixel 3 builds on an already pretty perfect camera, adding some bells and whistles sure to please mobile photographers rather than messing with a good thing. On paper, the Pixel 3’s camera doesn’t look much different than its recent forebear. But, because we’re talking about Google, software is where the device will really shine. We’ll go over everything that’s new.

Starting with specs, both the Pixel 3 and the Pixel 3 XL will sport a 12.2MP rear camera with an f/1.8 aperture and an 8MP dual front camera capable of both normal field of view and ultra-wide angle shots. The rear video camera captures 1080p video at 30, 60 or 120 fps, while the front-facing video camera is capable of capturing 1080p video at 30fps. Google did not add a second rear-facing camera, deeming it “unnecessary” given what the company can do with machine learning alone. Knowing how good the Pixel 2’s camera is, we can’t really argue here.

Top Shot

With the Pixel 3, Google introduced Top Shot, which compares a burst of images taken in rapid succession and automatically picks the best one using machine learning. The idea is that the camera can screen out any photos in which a subject might have their eyes closed or be making a weird face unintentionally, choosing “smiles instead of sneezes” and offering the user the best of the batch. Stuff like this is usually gimmicky, but given Google’s image processing prowess it’s honestly probably going to be pretty good. Or as TechCrunch’s Matt Burns puts it, “Top Shots is Live Photo but useful,” which seems like a fair assessment.
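Google’s ranking model is unpublished. As a crude stand-in for the idea of scoring a burst and keeping the winner, frames could be ranked by a simple sharpness proxy; the real system also scores faces (open eyes, smiles):

```python
# Crude burst selection sketch: rank frames by variance of a Laplacian
# (blurry frames score low) and keep the sharpest. Not Google's model.
import numpy as np

def sharpness(gray):
    # 4-neighbor Laplacian via shifts; higher variance = more detail.
    lap = (np.roll(gray, 1, 0) + np.roll(gray, -1, 0) +
           np.roll(gray, 1, 1) + np.roll(gray, -1, 1) - 4 * gray)
    return lap.var()

burst = [np.random.rand(480, 640) for _ in range(8)]  # stand-in frames
best = max(burst, key=sharpness)                      # the "top shot"
```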

Super Res Zoom

Google’s next Pixel 3 camera trick is called Super Res Zoom, which is what it sounds like. Super Res Zoom takes a burst of photos and leverages the fact that each image is very slightly different due to minute hand movements, combining those images to recreate detail “without grain” — or so Google claims. Because smartphone cameras lack optical zoom, this burst shooting and merging compensates for detail at a distance, turning slightly different photos into one higher-resolution photo. Digital zoom is notoriously bad, so we’re looking forward to putting this new method to the test. After all, if it worked for imaging the surface of Mars, it’s bound to work for concert photos.
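Google has not detailed its algorithm, but the core merge idea can be sketched: register each burst frame against a reference, then combine them. A toy integer-pixel version using phase correlation (the real pipeline exploits sub-pixel offsets from hand tremor):

```python
# Toy burst merge: align frames to the first via phase correlation, then
# average. Illustrates the idea only; not Google's sub-pixel algorithm.
import numpy as np

def align(ref, frame):
    # Phase correlation: the peak of the normalized cross-power spectrum
    # gives the integer shift that maps `frame` back onto `ref`.
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))
    dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    return np.roll(frame, (dy, dx), axis=(0, 1))

burst = [np.random.rand(240, 320) for _ in range(8)]  # stand-in frames
merged = np.mean([align(burst[0], f) for f in burst], axis=0)
```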

Night Sight

A machine learning camera hack designed to inspire people to retire flash once and for all (please), Night Sight can produce a photo taken in “extreme low light.” The idea is that machine learning can make educated guesses about the content in the frame, filling in detail and color correcting so the result isn’t just one big noisy mess. Whether it works remains to be seen, but given the Pixel 2’s already stunning low-light performance, we’d bet this is probably pretty cool.
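Night Sight’s pipeline is not public, but the statistics behind any burst-based low-light mode are easy to demonstrate: averaging N aligned noisy frames cuts noise by roughly sqrt(N).

```python
# Averaging a burst of 15 aligned frames reduces noise ~sqrt(15) = ~3.9x.
import numpy as np

scene = np.full((240, 320), 0.05)  # a very dark, static scene
burst = [scene + np.random.normal(0, 0.02, scene.shape) for _ in range(15)]
merged = np.mean(burst, axis=0)
print(np.std(burst[0] - scene))    # ~0.020 (single noisy frame)
print(np.std(merged - scene))      # ~0.005 (merged result)
```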

Group Selfie Cam

Google knows what the people really want. One of the biggest hardware changes to the Pixel 3 line is the introduction of dual front-facing cameras that enable super-wide front-facing shots capable of capturing group photos. The wide-angle front-facing shots feature a 97-degree field of view, compared to the already fairly wide normal 75-degree field of view. Yes, Google is trying to make “Groupies” a thing — yes, that’s a selfie where you all cram in and hand the phone to the friend with the longest arms. Honestly, it might succeed.
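For a sense of what the wider lens buys, the capture width at a given distance follows from width = 2 * d * tan(fov / 2); at arm’s length (roughly 0.6 m, an assumed figure) the 97-degree lens fits nearly half again as much scene:

```python
# Capture width at distance d for a given horizontal field of view.
import math

d = 0.6  # assumed arm's length in meters
for fov_deg in (75, 97):
    width = 2 * d * math.tan(math.radians(fov_deg) / 2)
    print(f"{fov_deg} degrees -> {width:.2f} m wide")
# 75 degrees -> 0.92 m wide
# 97 degrees -> 1.36 m wide (about 47% more scene)
```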

Google has a few more handy tricks up its sleeve. In Photobooth mode, the Pixel 3 can snap the selfie shutter when you smile, no hands needed. And with a new motion-tracking autofocus option, you can tap once to track the subject of a photo without needing to tap to refocus, a feature sure to be handy for the kind of people who fill up their storage with hundreds of out-of-focus pet shots.
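Google has not said how Photobooth’s trigger works on-device. As a bare-bones stand-in for snap-on-smile, OpenCV’s stock Haar cascades can fire a capture when a smile appears inside a detected face (a toy sketch, not Google’s method):

```python
# Toy stand-in for Photobooth's smile trigger, using OpenCV's stock Haar
# cascades; Google's actual on-device model is unpublished.
import cv2

face_det = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_det = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

cap = cv2.VideoCapture(0)                  # default (front) camera
shot = None
while shot is None:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_det.detectMultiScale(gray, 1.3, 5):
        # Look for a smile inside each detected face region.
        if len(smile_det.detectMultiScale(gray[y:y+h, x:x+w], 1.7, 20)) > 0:
            shot = frame                   # the "shutter" fires on a smile
cap.release()
if shot is not None:
    cv2.imwrite("selfie.jpg", shot)
```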

Google Lens is also back, of course, but honestly its utility is usually left forgotten in the camera settings. And Google’s AR stickers are now called Playground and respond to actions and facial expressions. Google is also launching a Childish Gambino AR experience on Playground (probably as good as this whole AR sticker thing gets, tbh) which will launch with the Pixel 3 and come to the Pixel 1 and Pixel 2 a bit later on.

With the Pixel 3, Google will also improve upon the Pixel 2’s already excellent Portrait Mode, offering the ability to change the depth of field and the subject. And of course the company will still offer free unlimited full-resolution photo storage in the wonderfully useful Google Photos, which remains superior in every aspect to photo processing and storage on the iPhone.

Happily, because much of what Google accomplishes in mobile photography is achieved on the software processing side, the last generation Pixel 2 isn’t getting left in the dust, either. Because they don’t rely on new hardware, most of the features that Google announced today for the Pixel 3 will likely be hitting the Pixel 2 as well, though we’ll sort that out and update this post to specify when that is not the case. So far, we know Group Selfies relies on the dual front camera, so that’s Pixel 3 only.

With its Pixel line, now three generations deep, Google has leaned heavily on software-powered tricks and machine learning to make a smartphone camera far better than it should be. Given Google’s image processing chops, that’s a great thing, and most of its experimental software workarounds work very well. We’re looking forward to taking its latest set of photography tricks for a spin, so keep an eye out for our upcoming Pixel 3 hands-on posts and reviews.

News Source = techcrunch.com
