Timesdelhi.com

June 16, 2019
Category archive: machine learning

Health at Scale lands $16M Series A to bring machine learning to healthcare


Health at Scale, a startup whose founders have both medical and engineering expertise, wants to bring machine learning to bear on healthcare treatment decisions to produce better outcomes with less aftercare. Today the company announced a $16 million Series A. Optum, which is part of UnitedHealth Group, was the sole investor.

Today, when people weigh treatment options, they may consider a particular surgeon or hospital, or simply what their insurance will cover, but they typically lack the data to make truly informed decisions. This is true across every part of the healthcare system, particularly in the U.S. The company believes that, using machine learning, it can produce better results.

“We are a machine learning shop, and we focus on what I would describe as precision delivery. So in other words, we look at this question of how do we match patients to the right treatments, by the right providers, at the right time,” Zeeshan Syed, Health at Scale’s CEO, told TechCrunch.

The founders see the current system as fundamentally flawed, and while they see their customers as insurance companies, hospital systems and self-insured employers, they say the tools they are putting into the system should help everyone in the loop get a better outcome.

The idea is to make treatment decisions more data-driven. While the founders aren’t sharing their data sources, they say they have information spanning patients with a given condition, the doctors who treat that condition and the facilities where the treatment happens. By looking at a patient’s individual treatment needs and medical history, they believe they can do a better job of matching that person to the best doctor and hospital for the job. They say this will result in the fewest post-operative treatment requirements, whether that means trips to the emergency room or time in a skilled nursing facility, all of which add significant cost.
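Health at Scale hasn’t disclosed how its models work, but the general pattern the article describes, scoring a patient against candidate providers and ranking by predicted complication risk, can be sketched in a few lines. Everything below (the features, the synthetic data, the model choice) is a hypothetical illustration, not the company’s actual method:

```python
# Hypothetical sketch of outcome-based provider matching. Train a classifier
# on historical (patient, provider) pairs labeled with whether a post-treatment
# complication occurred, then rank candidate providers for a new patient by
# predicted risk. All features and data here are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Toy history: [patient age (z-score), comorbidity score, provider case volume]
X = rng.normal(size=(1000, 3))
y = ((X @ np.array([0.5, 1.0, -0.8]) + rng.normal(size=1000)) > 0.5).astype(int)

risk_model = GradientBoostingClassifier().fit(X, y)

def rank_providers(patient, providers):
    """Return (predicted_risk, provider_index) pairs, lowest risk first."""
    rows = np.array([np.concatenate([patient, p]) for p in providers])
    risk = risk_model.predict_proba(rows)[:, 1]
    return sorted(zip(risk, range(len(providers))))

patient = np.array([0.3, 1.2])                         # hypothetical patient
providers = [np.array([v]) for v in (-1.0, 0.0, 1.5)]  # hypothetical providers
print(rank_providers(patient, providers))
```

A production system would obviously need real clinical features, careful validation and fairness auditing; the point is only that “match the patient to the provider with the lowest predicted complication risk” is, at its core, a supervised ranking problem.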

If you’re thinking this is strictly about cost savings for these large institutions, Mohammed Saeed, who is the company’s chief medical officer and has an MD from Harvard and a PhD in electrical engineering from MIT, insists that isn’t the case. “From our perspective, it’s a win-win situation since we provide the best recommendations that have the patient interest at heart, but from a payer or provider perspective, when you have lower complication rates you have better outcomes and you lower your total cost of care long term,” he said.

The company says the solution is being used by large hospital systems and insurers, although it couldn’t share any names. The founders also said they have studied outcomes after customers began using the software and that the machine learning models have produced better results, although they couldn’t provide data to back that up at this time.

The company was founded in 2015 and currently has 11 employees. It plans to use today’s funding to build out sales and marketing to bring the solution to a wider customer set.

Google’s Translatotron converts one spoken language to another, no text involved


Every day we creep a little closer to Douglas Adams’ famous and prescient Babel fish. A new research project from Google takes spoken sentences in one language and outputs spoken words in another — but unlike most translation techniques, it uses no intermediate text, working solely with the audio. This makes it quick, but more importantly lets it more easily reflect the cadence and tone of the speaker’s voice.

Translatotron, as the project is called, is the culmination of several years of related work, though it’s still very much an experiment. Google’s researchers, and others, have been looking into the possibility of direct speech-to-speech translation for years, but only recently have those efforts borne fruit worth harvesting.

Translating speech is usually done by breaking the problem down into smaller sequential ones: turning the source speech into text (speech-to-text, or STT), turning text in one language into text in another (machine translation) and then turning the resulting text back into speech (text-to-speech, or TTS). This works quite well, really, but it isn’t perfect; each step is prone to its own types of errors, and these can compound one another.
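The cascade is easy to picture as the composition of three functions, with errors propagating from each stage to the next. Here is a minimal sketch with the three stages as stub functions; the names and stub behavior are illustrative, not any real system’s API:

```python
# The traditional three-stage cascade: STT -> MT -> TTS. Each stage only sees
# its predecessor's output, so mistakes are carried forward and compound.

def speech_to_text(audio):
    """Stage 1 (STT): transcribe source-language audio. Errors begin here."""
    return "hola mundo"  # stubbed transcription

def translate_text(text, src, dst):
    """Stage 2 (MT): translate text; any STT errors pass through unchanged."""
    return {"hola mundo": "hello world"}.get(text, text)

def text_to_speech(text):
    """Stage 3 (TTS): synthesize target audio. Only text reaches this stage,
    so the source speaker's tone and cadence are lost by construction."""
    return text.encode("utf-8")  # stand-in for synthesized audio

def cascade_translate(audio):
    return text_to_speech(translate_text(speech_to_text(audio), "es", "en"))

print(cascade_translate(b"...source audio..."))  # b'hello world'
```

Translatotron’s pitch is to collapse these three stages into a single model, which is what makes preserving the speaker’s voice possible at all.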

Furthermore, it isn’t really how multilingual people translate in their own heads, judging by what they report about their own thought processes. Exactly how it works is impossible to say with certainty, but few would describe breaking speech down into text, visualizing that text changing into a new language and then reading the new text aloud. Human cognition is frequently a guide for how to advance machine learning algorithms.

[Figure: spectrograms of source and translated speech. The translation, let us admit, is not the best. But it sounds better!]

To that end, researchers began looking into converting spectrograms (detailed time-frequency breakdowns of audio) of speech in one language directly into spectrograms in another. This is a very different process from the three-step one, and it has its own weaknesses, but it also has advantages.
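For readers unfamiliar with the representation: a spectrogram slices audio into short frames and measures the energy in each frequency band per frame. Below is a quick worked example with SciPy on a synthetic tone (Translatotron itself works on mel spectrograms produced by its own front end, so this only illustrates the general idea):

```python
# Compute a spectrogram of a synthetic 440 Hz tone sampled at 16 kHz.
import numpy as np
from scipy import signal

fs = 16_000                          # 16 kHz sample rate, common for speech
t = np.arange(fs) / fs               # one second of audio
x = np.sin(2 * np.pi * 440 * t)      # a pure tone standing in for speech

freqs, times, Sxx = signal.spectrogram(x, fs=fs, nperseg=512)
print(Sxx.shape)                     # (frequency bins, time frames)
print(freqs[Sxx[:, 0].argmax()])     # peak energy lands near 440 Hz
```

Mapping one such array directly to another is what “speech-to-speech translation without text” means in practice.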

One is that, while complex, it is essentially a single-step process rather than a multi-step one, which means that, given enough processing power, Translatotron could work more quickly. But more importantly for many, the single-step process makes it easy to retain the character of the source voice, so the translation doesn’t come out robotically but with the tone and cadence of the original sentence.

Naturally this has a huge impact on expression, and someone who relies on translation or voice synthesis regularly will appreciate that not only does what they say come through, but how they say it. It’s hard to overstate how important this is for regular users of synthetic speech.

The accuracy of the translation, the researchers admit, is not as good as that of traditional systems, which have had more time to mature. But many of the resulting translations are (at least partially) quite good, and being able to include expression is too great an advantage to pass up. In the end, the team modestly describes its work as a starting point demonstrating the feasibility of the approach, though it’s easy to see that it is also a major step forward in an important domain.

The paper describing the new technique was published on arXiv, and you can browse samples of speech, from source to traditional translation to Translatotron, at this page. Just be aware that these are not all selected for the quality of their translation, but serve more as examples of how the system retains expression while getting the gist of the meaning.

Beyond costs, what else can we do to make housing affordable?


This week on Extra Crunch, I am exploring innovations in inclusive housing, looking at how 200+ companies are creating more access and affordability. Yesterday, I focused on startups trying to lower the costs of housing, from property acquisition to management and operations.

Today, I want to focus on innovations that improve housing inclusion more generally, such as efforts to pair housing with transit, small business creation, and mental rehabilitation. These include social impact-focused interventions, interventions that increase income and mobility, and ecosystem-builders in housing innovation.

Nonprofits and social enterprises lead many of these innovations. Yet because these areas are perceived as less lucrative, fewer technologists and other professionals have entered them. New business models and technologies have the opportunity to scale many of these alternative institutions — and create tremendous social value. Social impact is increasingly important to millennials, with brands like Patagonia having created loyal fan bases through purpose-driven leadership.

While each of these sections could be its own market map, this overall market map serves as an initial guide to each of these spaces.

Social impact innovations

These innovations address:

Algorithmia raises $25M Series B for its AI automation platform


Algorithmia, a Seattle-based startup that offers a cloud-agnostic AI automation platform for enterprises, today announced a $25 million Series B funding round led by Norwest Venture Partners. Madrona, Gradient Ventures, Work-Bench, Osage University Partners and Rakuten Ventures also participated in this round.

While the company started out five years ago as a marketplace for algorithms, it now mostly focuses on machine learning and helping enterprises take their models into production.

“It’s actually really hard to productionize machine learning models,” Algorithmia CEO Diego Oppenheimer told me. “It’s hard to help data scientists to not deal with data infrastructure but really being able to build out their machine learning and AI muscle.”

To help them, Algorithmia essentially built out a machine learning DevOps platform that allows data scientists to train their models with the framework of their choice, bring them to Algorithmia (a platform that has already been blessed by their IT departments) and take them into production.
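The article doesn’t detail this flow, so the sketch below shows only the generic train-then-serve pattern the paragraph describes: train locally with any framework, serialize the model, and have production clients call a hosted endpoint instead of importing the model. The endpoint URL and payload shape are hypothetical stand-ins, not Algorithmia’s actual API:

```python
# Generic "train with your framework, deploy behind a hosted endpoint" pattern.
import json
import pickle
import urllib.request

from sklearn.linear_model import LogisticRegression

# 1. Train locally with the framework of your choice (scikit-learn here).
model = LogisticRegression().fit([[0.0], [1.0], [2.0], [3.0]], [0, 0, 1, 1])

# 2. Serialize the model; a serving platform hosts this artifact for you.
artifact = pickle.dumps(model)

# 3. In production, clients call the hosted model over HTTP.
def call_hosted_model(endpoint, features):
    req = urllib.request.Request(
        endpoint,
        data=json.dumps({"features": features}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # network call; fails offline
        return json.load(resp)

# call_hosted_model("https://api.example.com/v1/models/demo/predict", [1.5])
```

The value a platform like Algorithmia adds on top of this bare pattern is the operational plumbing: versioning, scaling, access control and the IT sign-off mentioned above.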

“Every Fortune 500 CIO has an AI initiative but they are bogged down by the difficulty of managing and deploying ML models,” said Rama Sekhar, a partner at Norwest Venture Partners, who has now joined the company’s board. “Algorithmia is the clear leader in building the tools to manage the complete machine learning lifecycle and helping customers unlock value from their R&D investments.”

With the new funding, the company will double down on this focus by investing in product development to solve these issues, but also by building out its team, with a plan to double its headcount over the next year. A year from now, Oppenheimer told me, he hopes that Algorithmia will be a household name for data scientists and, maybe more importantly, their platform of choice for putting their models into production.

“How does Algorithmia succeed? Algorithmia succeeds when our customers are able to deploy AI and ML applications,” Oppenheimer said. “And although there is a ton of excitement around doing this, the fact is that it’s really difficult for companies to do so.”

The company previously raised a $10.5 million Series A round led by Google’s AI fund. Its customers now include the United Nations, a number of U.S. intelligence agencies and Fortune 500 companies. In total, over 90,000 engineers and data scientists are now on the platform.

Iguazio brings its data science platform to Azure and Azure Stack


Iguazio, an end-to-end platform that allows data scientists to take machine learning models from data ingestion to training, testing and production, today announced that it is bringing its solution to Microsoft’s Azure cloud and Azure Stack on-premises platform.

The 80-person company, which has received a total of $48 million in funding to date, aims to make it easier for data scientists to do the work they are actually paid to do. The company argues that a lot of the work that data scientists do today is about managing the infrastructure and handling integrations, not building the machine learning models.

“We see that machine learning pipelines are way more complex than people think,” Iguazio CEO Asaf Somekh told me. “People think this is good stuff, but it’s actually horrible. We’re trying to simplify that.”

To do this, Iguazio is betting on open source. It uses standard tools and APIs to pull in data from a wide variety of sources, which is then stored in its real-time in-memory database; that database can handle streaming data as well as time series data, tables and files. It also uses standard Jupyter notebooks instead of some proprietary format, but what’s maybe most interesting is that the company also built an open platform for building data science pipelines. To build the models, Iguazio also uses Kubeflow, a machine learning toolkit for the Kubernetes container platform.
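To make the pipeline idea concrete, here is a minimal sketch using the Kubeflow Pipelines SDK (kfp, v1 API), which the article says Iguazio builds on. The container images and step names are hypothetical placeholders; Iguazio’s own components are not shown:

```python
# A toy two-step Kubeflow pipeline: ingest data, then train a model.
from kfp import dsl

@dsl.pipeline(name="ingest-train", description="Toy ML pipeline sketch")
def ml_pipeline():
    ingest = dsl.ContainerOp(
        name="ingest",
        image="example.com/ingest:latest",   # hypothetical image
        command=["python", "ingest.py"],
    )
    train = dsl.ContainerOp(
        name="train",
        image="example.com/train:latest",    # hypothetical image
        command=["python", "train.py"],
    )
    train.after(ingest)                      # train only once ingestion is done

# Compiling yields a workflow spec a Kubernetes cluster can run:
# kfp.compiler.Compiler().compile(ml_pipeline, "pipeline.yaml")
```

Because the pipeline compiles down to standard Kubernetes resources, the same definition can run wherever Kubernetes does, which is what makes the multi-cloud story below plausible.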

Given that Azure and Azure Stack are essentially the same platform as far as the APIs are concerned, Iguazio can take its software and run it both in the cloud and on premises. Soon, it’ll also bring its service to Azure Data Box Edge, Microsoft’s hardware solution for storing and analyzing data at the edge, which can be equipped with FPGAs for deploying machine learning models.

“Partnering with Iguazio, we can offer additional options for AI applications in the cloud to also run on the edge. Iguazio provides an additional path to run AI on the edge beyond our current Microsoft Azure Machine Learning inferencing on the edge,” said Henry Jerez, Principal Group Product Manager at Microsoft’s Intelligent Edge Solutions Platform Group. “This new marketplace option provides an additional alternate path for our customers to bring intelligence close to the data sources for applications such as predictive maintenance and real-time recommendation engines.”

The Azure solution joins Iguazio’s existing options to deploy its services on top of AWS and Google Cloud Platform.
