Timesdelhi.com

May 23, 2019
Category archive: neural network

Reality Check: The marvel of computer vision technology in today’s camera-based AR systems

British science fiction writer Sir Arthur C. Clarke once said, “Any sufficiently advanced technology is indistinguishable from magic.”

Augmented reality has the potential to instill awe and wonder in us just as magic would. For the very first time in the history of computing, we now have the ability to blur the line between the physical world and the virtual world. AR promises to bring forth the dawn of a new creative economy, where digital media can be brought to life and given the ability to interact with the real world.

AR experiences can seem magical, but what exactly is happening behind the curtain? To answer this, we must look at the three basic foundations of a camera-based AR system such as our smartphones.

  1. How do computers know where they are in the world? (Localization + Mapping)
  2. How do computers understand what the world looks like? (Geometry)
  3. How do computers understand the world as we do? (Semantics)

Part 1: How do computers know where they are in the world? (Localization)

Mars Rover Curiosity taking a selfie on Mars. Source: https://www.nasa.gov/jpl/msl/pia19808/looking-up-at-mars-rover-curiosity-in-buckskin-selfie/

When NASA scientists landed the rover on Mars, they needed a way for the robot to navigate itself on a different planet without the use of a global positioning system (GPS). They came up with a technique called Visual Inertial Odometry (VIO) to track the rover’s movement over time without GPS. This is the same technique that our smartphones use to track their spatial position and orientation.

A VIO system is made up of two parts. The first is visual odometry: a camera tracks visual features from frame to frame to estimate how the device has moved. The second is inertial odometry: an inertial measurement unit (IMU) measures acceleration and rotation, which can be integrated into a motion estimate. Each half compensates for the other’s weakness: the IMU updates quickly but drifts over time, while the camera drifts far less but updates slowly and can lose tracking.
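To make that division of labor concrete, here is a deliberately simplified, one-dimensional sketch of the fusion idea, in the spirit of a complementary filter. It illustrates the general principle rather than any particular phone’s implementation; the sample rate, filter weight, and timing of visual fixes are all assumed values.

```python
# A simplified 1-D sketch of VIO-style sensor fusion (hypothetical numbers):
# dead-reckon from the IMU, then blend in occasional visual position fixes
# to cancel the accumulated drift.
import numpy as np

DT = 0.01     # IMU sample period in seconds (100 Hz, an assumed rate)
ALPHA = 0.98  # complementary-filter weight given to the inertial estimate

def fuse_vio(accels, visual_fixes):
    """accels: iterable of acceleration samples (m/s^2).
    visual_fixes: dict mapping sample index -> camera-derived position (m)."""
    pos, vel = 0.0, 0.0
    trajectory = []
    for i, accel in enumerate(accels):
        # Inertial half: integrate acceleration twice; this drifts over time.
        vel += accel * DT
        pos += vel * DT
        # Visual half: when the camera yields a pose, blend it in so the
        # drift is corrected without discarding the smooth inertial track.
        if i in visual_fixes:
            pos = ALPHA * pos + (1 - ALPHA) * visual_fixes[i]
        trajectory.append(pos)
    return np.array(trajectory)
```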

News Source = techcrunch.com

Healthcare by 2028 will be doctor-directed, patient-owned and powered by visual technologies

Visual assessment is critical to healthcare – whether that is a doctor peering down your throat as you say “ahhh” or an MRI of your brain. Since the X-ray was invented in 1895, medical imaging has evolved into many modalities that empower clinicians to see into and assess the human body. Recent advances in visual sensors, computer vision and compute power are currently powering a new wave of innovation in legacy visual technologies (like the X-ray and MRI) and sparking entirely new realms of medical practice, such as genomics.

Over the next 10 years, healthcare workflows will become mostly digitized, with wide swaths of personal data captured and computer vision, along with artificial intelligence, automating the analysis of that data for precision care. Much of the digitized data across healthcare will be visual and the technologies that capture and analyze it are visual technologies.

These visual technologies traverse a patient’s journey from diagnosis, to treatment, to continuing care and prevention. They capture, analyze, process, filter and manage visual data from images, videos, thermal imaging, X-rays, ultrasound, MRI, CT scans, 3D scans, and more. Computer vision and artificial intelligence are core to the journey.

Three powerful trends — the miniaturization of diagnostic imaging devices, next-generation imaging for detecting the earliest stages of disease, and virtual medicine — are shaping the ways in which visual technologies are poised to improve healthcare over the next decade.

Miniaturization of Hardware, Along with Computer Vision and AI, Will Allow Diagnostic Imaging to Be Mobile

Medical imaging is dominated by large incumbents that are slow to innovate. Most imaging devices (e.g. MRI machines) have not changed substantially since the 1980s and still have major limitations:

  • Complex workflows: large, expensive machines that require expert operators and have limited compatibility in hospitals.

  • Strict patient requirements: such as lying still or holding their breath (a problem for cases such as pediatrics or elderly patients).

  • Expensive solutions: limited to large hospitals and imaging facilities.

But thanks to innovations in visual sensors and AI algorithms, “modern medical imaging is in the midst of a paradigm shift, from large carefully-calibrated machines to flexible, self-correcting, multi-sensor devices,” says Daniel K. Sodickson, MD, PhD, NYU School of Medicine, Department of Radiology.

A glove-shaped MRI detector has proved capable of capturing images of moving fingers. ©NYU Langone Health

Visual data capture will be done with smaller, easier to use devices, allowing imaging to move out of the radiology department and into the operating room, the pharmacy and your living room.

Smaller sensors and computer vision-enabled image capture will lead to imaging devices being redesigned at a fraction of their current size, with:

  • Simpler imaging process: with quicker workflows and lower costs.

  • Lower expertise requirements: less complexity will move imaging from the radiology department to anywhere the patient is.

  • Live imaging via ingestible cameras: innovations include powering ingestibles via stomach acid and using bacteria for chemical detection, making live imaging feasible in a wider range of cases.

“The use of synthetic neural network-based implementations of human perceptual learning enables an entire class of low-cost imaging hardware and can accelerate and improve existing technologies,” says Matthew Rosen, PhD, MGH/Martinos Center at Harvard Medical School.

©Matthew Rosen and his colleagues at the Martinos Center for Biomedical Imaging in Boston want to liberate the MRI.

Next Generation Sequencing, Phenotyping and Molecular Imaging Will Diagnose Disease Before Symptoms Are Presented

Genomics, the sequencing of DNA, has grown at a 200% CAGR since 2015, propelled by Next Generation Sequencing (NGS), which uses optical signals to read DNA — like our LDV portfolio company Geniachip, which was acquired by Roche. These techniques are helping genomics become a mainstream tool for practitioners, and will hopefully make carrier screening part of routine patient care by 2028.

Identifying the genetic makeup of a disease via liquid biopsies, where blood, urine or saliva is tested for tumor DNA or RNA, is poised to take a prime role in early cancer screening. The company GRAIL, for instance, raised $1B for a cancer blood test that uses NGS and deep learning to detect circulating tumor DNA before a lesion is identified.

Phenomics, the analysis of observable traits (phenotypes) that result from interactions between genes and their environment, will also contribute to earlier disease detection. Phenotypes are expressed physiologically and most will require imaging to be detected and analyzed.

Next Generation Phenotyping (NGP) uses computer vision and deep learning to analyze physiological data, understand particular phenotype patterns, and then correlate those patterns to genes. For example, FDNA’s Face2Gene technology can identify 300-400 disorders with 90%+ accuracy using images of a patient’s face. Additional data (images or videos of hands, feet, ears, eyes) can allow NGP to detect a wide range of disorders, earlier than ever before.
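For a sense of how such a system is typically structured, here is a minimal, hypothetical sketch of the general pattern: a pretrained convolutional network extracts features from a face image, and a small classification head maps them to disorder probabilities. FDNA’s actual models are proprietary, so the backbone choice, input size, layer sizes and 300-class output below are illustrative assumptions only.

```python
# Hypothetical sketch of an NGP-style face-image classifier. All
# architectural choices here are assumptions, not FDNA's actual model.
import tensorflow as tf

NUM_DISORDERS = 300  # illustrative; the article cites "300-400 disorders"

# Reuse generic visual features from a network pretrained on ImageNet.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    # One probability per candidate disorder.
    tf.keras.layers.Dense(NUM_DISORDERS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```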

Molecular imaging uses DNA nanotech probes to quantitatively visualize chemicals inside of cells, thus measuring the chemical signature of diseases. This approach may enable early detection of neurodegenerative diseases such as Alzheimer’s, Parkinson’s and dementia.

Telemedicine to Overtake Brick-and-Mortar Doctor Visits

By 2028 it will be more common to visit the doctor via video over your phone or computer than it will be to go to an office.

Telemedicine will make medical practitioners more accessible and easier to communicate with. It will create a fully digitized health record of visits for a patient’s profile, and it will reduce the costs of logistics and regional gaps in specific medical expertise. One example is the telemedicine services rendered to the 1.9M people injured in the war in Syria.

The integration of telemedicine into ambulances has led to stroke patients being treated twice as fast.  Doctors will increasingly call in their colleagues and specialists in real time.

Screening technologies will be integrated into telemedicine so it won’t just be about video calling a doctor. Pre-screening your vitals via remote cameras will deliver extensive efficiencies and hopefully health benefits.

“The biggest opportunity in visual technology in telemedicine is in solving specific use cases. Whether it be detecting your pulse, blood pressure or eye problems, visual technology will be key to collecting data,” says Jeff Nadler of Teladoc Health.

Remote patient monitoring (RPM) will be a major factor in the growth of telemedicine and the overall personalization of care. RPM devices, like we are seeing with the Apple Watch, will be a primary source of real-time patient data used to make medical decisions that take into account everyday health and lifestyle factors. This personal data will be collected and owned by patients themselves and provided to doctors.

Visual Tech Will Power the Transformation of Healthcare Over the Next Decade

Visual technologies have deep implications for the future of personalized healthcare and will hopefully improve the health of people worldwide. They represent unique investment opportunities, and we at LDV Capital have reviewed over 100 research papers from BCC Research, CBInsights, Frost & Sullivan, McKinsey, Wired, IEEE Spectrum and many more to compile our 2018 LDV Capital Insights report. This report highlights the sectors with the power to improve healthcare, based on the transformative nature of the technology in each sector, its projected growth and the business opportunity.

There are tremendous investment opportunities in visual technologies across diagnosis, treatment and continuing care & prevention that will help make people healthier across the globe.

News Source = techcrunch.com

Wrest control from a snooping smart speaker with this teachable “parasite”

What do you get when you put one internet-connected device on top of another? A little more control than you otherwise would, in the case of Alias the “teachable ‘parasite’” — an IoT smart-speaker topper made by two designers, Bjørn Karmann and Tore Knudsen.

The Raspberry Pi-powered, fungus-inspired blob’s mission is to whisper sweet nonsense into Alexa’s (or Google Home’s) always-on ear so it can’t accidentally snoop on your home.

Project Alias from Bjørn Karmann on Vimeo.

Alias will only stop feeding noise into its host’s speakers when it hears its own wake command — which can be whatever you like.

The middleman IoT device has its own local neural network, allowing its owner to christen it with a name (or sound) of their choosing via a training interface in a companion app.

The open source TensorFlow library was used for building the name training component.
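The project’s own training code is on GitHub; as a rough illustration of what a small, locally trained wake-word classifier can look like in TensorFlow, here is a minimal sketch. The input shape (a log-mel spectrogram of a one-second clip) and the layer sizes are assumptions for illustration, not Alias’s actual architecture.

```python
# Minimal sketch of a binary wake-word classifier in TensorFlow/Keras.
# Input and layer shapes are illustrative assumptions, not Alias's design.
import tensorflow as tf

model = tf.keras.Sequential([
    # Input: log-mel spectrogram of a 1 s audio clip (time x mel bins x 1).
    tf.keras.layers.Input(shape=(98, 40, 1)),
    tf.keras.layers.Conv2D(16, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    # Binary output: the user's custom wake word vs. everything else.
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```

Trained on a handful of recordings of the chosen name plus background noise, a model of roughly this shape is small enough to run continuously on a Raspberry Pi.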

So instead of having to say “Alexa” or “Ok Google” to talk to a commercial smart speaker — and thus being stuck parroting a big tech brand name in your own home, not to mention being saddled with a device that’s always vulnerable to vocal pranks (and worse: accidental wiretapping) — you get to control what the wake word is, thereby taking back a modicum of control over a natively privacy-hostile technology.

This means you could rename Alexa “Bezosallseeingeye”, or refer to your Google Home as “Carelesswhispers”. Whatever floats your boat.

Once Alias hears its custom wake command it will stop feeding noise into the host speaker — enabling the underlying smart assistant to hear and respond to commands as normal.

“We looked at how cordyceps fungus and viruses can appropriate and control insects to fulfill their own agendas and were inspired to create our own parasite for smart home systems,” explain Karmann and Knudsen in a write up of the project. “Therefore we started Project Alias to demonstrate how maker-culture can be used to redefine our relationship with smart home technologies, by delegating more power from the designers to the end users of the products.”

Alias offers a glimpse of a richly creative custom future for IoT, as the means of producing custom but still powerful connected technology products becomes more affordable and accessible.

And so also perhaps a partial answer to IoT’s privacy problem, for those who don’t want to abstain entirely. (Albeit, on the security front, more custom and controllable IoT does increase the hackable surface area — so that’s another element to bear in mind; more custom controls for greater privacy do not necessarily mesh with robust device security.)

If you’re hankering after your own Alexa-disrupting blob-topper, the pair have uploaded a build guide to Instructables and put the source code on GitHub. So fill yer boots.

Project Alias is of course not a solution to the underlying tracking problem of smart assistants — which harvest insights gleaned from voice commands to further flesh out interest profiles of users, including for ad targeting purposes.

That would require either proper privacy regulation or, er, a new kind of software virus that infiltrates the host system and prevents it from accessing user data. And unlike this creative physical IoT add-on, that kind of tech would not be at all legal.

News Source = techcrunch.com

Banuba raises $7M to supercharge any app or device with the ability to really see you

Walking into the office of Viktor Prokopenya – which overlooks a central London park – you would perhaps be forgiven for missing the significance of this unassuming location, just south of Victoria Station in London. While giant firms battle globally to make Augmented Reality a ‘real industry’, this jovial businessman from Belarus is poised to launch a revolutionary new technology for just this space. This is the kind of technology some of the biggest companies in the world are snapping up right now, and yet, scuttling off to make me a coffee in the kitchen is someone who could be sitting on just such a company.

Regardless of whether its immediate future is obvious or not, AR has a future, if the amount of investment pouring into the space is anything to go by.

In 2016, AR and VR attracted $2.3 billion worth of investment (a 300% jump from 2015), and the market is expected to reach $108 billion by 2021 – 25% of which will be aimed at the AR sector. But, according to numerous forecasts, AR will overtake VR within 5-10 years.

Apple is clearly making headway in its AR developments, having recently acquired AR lens company Akonia Holographics, and in releasing iOS 12 this month it enables developers to fully utilize ARKit 2, no doubt prompting the release of a new wave of camera-centric apps. This year, Sequoia Capital China and SoftBank invested $50M in AR camera app Snow. Samsung recently introduced its version of the AR cloud and a partnership with Wacom that turns Samsung’s S-Pen into an augmented reality magic wand.

The IBM/Unity partnership allows developers to integrate Watson cloud services such as visual recognition, speech to text, and more into their Unity applications.

So there is no question that AR is becoming increasingly important, given the sheer amount of funding and M&A activity.

Joining the field is Prokopenya’s “Banuba” project. Although you can download a Snapchat-like app called ‘Banuba’ from the App Store right now, underlying it is a suite of tools of which Prokopenya is the founding investor, and he is working closely with the founding team of AI/AR experts behind it to realize a very big vision.

The key to Banuba’s pitch is the idea that its technology could equip not only apps but even hardware devices with “vision”. This is a perfect marriage of AI and AR. What if, for instance, Amazon’s Alexa could not only hear you, but also see you and interpret your facial expressions, or perhaps even your mood? That’s the tantalizing strategy at the heart of this growing company.

Better known for its consumer apps, which have been effectively testing its concepts in the consumer field for the last year, Banuba is about to move heavily into the world of developer tools with the release of its new Banuba 3.0 mobile SDK (available to download now in the App Store for iOS devices and the Google Play Store for Android). It has also now secured a further $7M in funding from Larnabel Ventures, the fund of Russian entrepreneur Said Gutseriev, and Prokopenya’s VP Capital.

This move will take its total funding to $12m. In the world of AR, this is like a Romulan warbird de-cloaking in a scene from Star Trek.

Banuba hopes that its SDK will enable brands and apps to utilise 3D Face AR inside their own apps, meaning users can benefit from cutting-edge face motion tracking, facial analysis, skin smoothing and tone adjustment. Banuba’s SDK also enables app developers to utilise background subtraction, which is similar to ‘green screen’ technology regularly used in movies and TV shows, enabling end-users to create a range of AR scenarios. Thus, like magic, you can remove that unsightly office surrounding and place yourself on a beach in the Bahamas…
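To illustrate the underlying idea, the short OpenCV sketch below performs classical background subtraction on a webcam feed. Banuba’s implementation is proprietary, and is very likely neural-network-based person segmentation rather than this classical method, which assumes a mostly static camera; this is just a minimal sketch of the concept.

```python
# Classical background subtraction on a webcam feed with OpenCV, as a
# simplified stand-in for the 'green screen without a green screen' idea.
# Works only with a mostly static camera; not Banuba's actual method.
import cv2

cap = cv2.VideoCapture(0)  # default webcam
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)          # nonzero pixels = foreground
    foreground = cv2.bitwise_and(frame, frame, mask=mask)
    cv2.imshow("foreground only", foreground)
    if cv2.waitKey(1) & 0xFF == ord("q"):   # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```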

Because Banuba’s technology equips devices with ‘vision’ – meaning they can ‘see’ human faces in 3D and extract meaningful subject analysis based on neural networks, including age and gender – it can do things that other apps just cannot do. It can even monitor your heart rate via spectral analysis of the time-varying color tones in your face.
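That heart-rate trick is known in the research literature as remote photoplethysmography (rPPG). Banuba hasn’t published its method, but the textbook version is simple enough to sketch: average the green channel over a face region frame by frame, then find the dominant frequency in the human heart-rate band. The band limits and the green-channel choice below are standard assumptions from the literature, not Banuba specifics.

```python
# Textbook-style rPPG sketch: estimate heart rate from the time-varying
# color tone of a face region. Not Banuba's implementation.
import numpy as np

def estimate_bpm(face_frames, fps=30.0):
    """face_frames: sequence of HxWx3 RGB arrays cropped to the face."""
    # Time-varying color tone: mean green-channel intensity per frame.
    signal = np.array([frame[:, :, 1].mean() for frame in face_frames])
    signal = signal - signal.mean()           # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    # Restrict to plausible heart rates: 0.7-4 Hz (42-240 bpm).
    band = (freqs >= 0.7) & (freqs <= 4.0)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0                   # beats per minute
```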

It has already been incorporated into an app called Facemetrix, which can track a child’s eyes to ascertain whether they are reading something on a phone or tablet or not. Thanks to this technology, it is possible not just to “track” a person’s gaze, but also to control a smartphone’s functions with a gaze. To that end, the SDK can detect micro-movements of the eye with subpixel accuracy in real time, and it also detects certain points of the eye. The idea behind this is to “gamify education”, rewarding a child with games and entertainment apps once the Facemetrix app has duly checked that they really did read the e-book they told their parents they’d read.

If that makes you think of a parallel with a certain Black Mirror episode where a young girl is prevented from seeing certain things via a brain implant, then you wouldn’t be a million miles away. At least this is a more benign version…

Banuba’s SDK also includes ‘Avatar AR’, empowering developers to get creative with digital communication by giving users the ability to interact with – and create personalized – avatars using any iOS or Android device. Why should users of Apple’s iPhone X be the only people to enjoy Animoji?

Prokopenya says: “We are in the midst of a critical transformation between our existing smartphones and future of AR devices, such as advanced glasses and lenses. Camera-centric apps have never been more important because of this.” He says that while developers using ARKit and ARCore are able to build experiences primarily for top-of-the-range smartphones, Banuba’s SDK can work on even low-range smartphones.

Banuba is also likely to take advantage of the news that Facebook recently announced it was testing AR ads in its newsfeed, following trials for businesses to show off products within Messenger.

Banuba’s technology won’t simply be for fun apps, however. Within two years, the company has filed 25 patent applications with the US patent office, and six of those were processed in record time compared with the average. Its R&D center, staffed by 50 people and based in Minsk, is focused on developing a portfolio of technologies.

Interestingly, Belarus has become famous for AI and facial recognition technologies.

For instance, cast your mind back to early 2016, when Facebook bought Masquerade, a Minsk-based developer of a video filter app, MSQRD, which at one point was one of the most popular apps in the App Store. And in 2017, another Belarusian company, AIMatter, was acquired by Google, only months after raising $2M. It too took an SDK approach, releasing a platform for real-time photo and video editing on mobile, dubbed Fabby. This was built upon a neural network-based AI platform. But Prokopenya has much bolder plans for Banuba.

In early 2017, he and Banuba launched a “technology-for-equity” program to enroll app developers and publishers across the world. This signed up Inventain, another startup from Belarus, to develop AR-based mobile games.

Prokopenya says the technologies associated with AR will be “leveraged by virtually every kind of app. Any app can recognize its user through the camera: male or female, age, ethnicity, level of stress, etc.” He says the app could then respond to the user in any number of ways. Literally, your apps could be watching you.

So, for instance, a fitness app could see how much weight you’d lost just by using the Banuba SDK to look at your face. Games apps could personalize the game based on what it knows about your face, such as reading your facial cues.

Back in his London office, overlooking a small park, Prokopenya waxes lyrical about the “incredible concentration of diversity, energy and opportunity” of London. “Living in London is fantastic,” he says. “The only thing I am upset about, however, is the uncertainty surrounding Brexit and what it might mean for business in the UK in the future.”

London may be great (and always will be), but sitting on his desk is a laptop with direct links back to Minsk, a place where the facial recognition technologies of the future are only now just emerging.

News Source = techcrunch.com

Building a great startup requires more than genius and a great invention

Many entrepreneurs assume that an invention carries intrinsic value, but that assumption is a fallacy.

Here, the examples of the 19th- and 20th-century inventors Thomas Edison and Nikola Tesla are instructive. Even as aspiring entrepreneurs and inventors lionize Edison for his myriad inventions and business acumen, they conveniently fail to recognize Tesla, despite his far greater contributions to how we generate, move, and harness power. Edison is the exception; the legendary penniless Tesla is the norm.

Universities are the epicenter of pure innovation research. But the reality is that academic research is supported by tax dollars. The zero-sum game of attracting government funding is mastered by selling two concepts: technical merit, and broader impact toward benefiting society as a whole. These concepts are usually at odds with building a company, which succeeds only by generating and maintaining competitive advantage through barriers to entry.

In rare cases, the transition from intellectual merit to barrier to entry is successful. In most cases, the technology, though cool, doesn’t give a fledgling company the competitive advantage it needs to exist among incumbents and inevitable copycats. Academics, having emphasized technical merit and broader impact to attract support for their research, often fail to solve for competitive advantage, thereby creating great technology in search of a business application.

Of course there are exceptions: time and time again, whether driven by hype or perceived existential threat, big incumbents will be quick to buy companies purely for their technology – Cruise/GM (autonomous cars), DeepMind/Google (AI), and Nervana/Intel (AI chips). But as we move from 0-1 to 1-N in a given field, success is determined by winning talent over winning technology. Technology becomes less interesting; the onus is on the startup to build a real business.

If a startup chooses to take venture capital, it not only needs to build a real business, but one that will be valued in the billions. The question becomes how a startup can create a durable, attractive business with a transient, short-lived technological advantage.

Most investors understand this stark reality. Unfortunately, while dabbling in technologies which appeared like magic to them during the cleantech boom, many investors were lured back into the innovation fallacy, believing that pure technological advancement would equal value creation. Many of them re-learned this lesson the hard way. As frontier technologies are attracting broader attention, I believe many are falling back into the innovation trap.

So what should aspiring frontier inventors solve for as they seek to invest capital in translating pure discovery into billion-dollar companies? How can the technology be cast into an unfair advantage that will yield the big margins and growth that underpin billion-dollar businesses?

Talent productivity: In this age of automation, human talent is scarce, and there is incredible value attributed to retaining and maximizing human creativity. Leading companies seek to gain an advantage by attracting the very best talent. If your technology can help make scarce talent more productive, or help your customers become more productive, then you are creating an unfair advantage internally, while establishing yourself as the de facto product for your customers.

Great companies such as Tesla and Google have built tools for their own scarce talent, and build products their customers, in their own ways, can’t do without. Microsoft mastered this with its Office products in the 90s, through innovation and acquisition; Autodesk with its creativity tools; and Amazon with its AWS suite. Supercharging talent yields one of the most valuable sources of competitive advantage: switchover cost. When teams are empowered with tools they love, they will loathe the notion of migrating to shiny new objects, and stick to what helps them achieve their maximum potential.

Marketing and Distribution Efficiency: Companies are worth the markets they serve. They are valued for their audience and reach. Even if their products in and of themselves don’t unlock the entire value of the market they serve, they will be valued for their potential to, at some point in the future, be able to sell to the customers that have been teed up with their brands. AOL leveraged cheap CD-ROMs and the postal system to get families online, and on email.

Dollar Shave Club leveraged social media and an otherwise abandoned demographic to lock down a sales channel that was ultimately valued at a billion dollars. The inventions in these examples were in how efficiently these companies built and accessed markets, which ultimately made them incredibly valuable.

Network effects: The power of network effects has ultimately led to their abuse in startup fundraising pitches. LinkedIn, Facebook, Twitter, and Instagram generate their network effects through the internet and mobile. Most marketplace companies need to undergo the arduous, expensive process of attracting vendors and customers. Uber identified macro trends (e.g., urban living) and leveraged technology (GPS in cheap smartphones) to yield massive growth in building up supply (drivers) and demand (riders).

Our portfolio company Zoox will benefit from every car learning from the edge cases any single vehicle encounters: akin to the entire driving population immediately learning from the special situations any individual driver encounters. Startups should think about how their inventions can enable network effects where none existed, so that they are able to achieve massive scale and barriers by the time competitors inevitably get access to the same technology.

Offering an end-to-end solution: There isn’t intrinsic value in a piece of technology; value lies in offering a complete solution that delivers on an unmet need deep-pocketed customers are begging for. Does your invention, when coupled with a few other products, yield a solution that’s worth far more than the sum of its parts? For example, are you selling a chip, along with design environments, sample neural network frameworks, and datasets, that will empower your customers to deliver magical products? Or, in contrast, does it make more sense to offer standard chips, license software, or tag data?

If the answer is to offer components of the solution, then prepare to enter a commodity, margin-eroding, race-to-the-bottom business. The former, “vertical” approach is characteristic of more nascent technologies, such as operating robo-taxis, quantum computing, and launching small payloads into space. As the technology matures and becomes more modular, vendors can sell standard components into standard supply chains, but face the pressure of commoditization.

A simple example is personal computers, where Intel and Microsoft attracted outsized margins while other vendors of disk drives, motherboards, printers, and memory faced crushing downward pricing pressure. As technology matures, the earlier vertical players must differentiate with their brands, their reach to customers, and differentiated products, while leveraging what’s likely to be an endless number of vendors providing technology into their supply chains.

A magical new technology does not, on its own, go far beyond the resumes of the founding team.

What gets me excited is how the team will leverage the innovation and attract more amazing people to establish a dominant position in a market that doesn’t yet exist. Is this team and technology the kernel of a virtuous cycle that will punch above its weight to attract more money and more talent, and be recognized for more than its product?

News Source = techcrunch.com
