Artificial Intelligence

ReviveMed turns drug discovery into a big data problem and raises $1.5M to solve it

What if a drug already exists that could treat a disease with no known therapies, and we just haven’t made the connection? Finding that connection by exhaustively analyzing complex biochemical interactions within the body (with the help of machine learning, naturally) is the goal of ReviveMed, a new biotech startup out of MIT that just raised $1.5 million in seed funding.

Around the turn of the century, genomics was the big thing. Then, as the power to investigate complex biological processes improved, proteomics became the next frontier. We may have moved on again, this time to the yet more complex field of metabolomics, which is where ReviveMed comes in.

Leila Pirhaji, ReviveMed’s founder and CEO, began work on the topic during her time as a postgrad at MIT. The problem she and her colleagues saw was the immense complexity of interactions between proteins, which are encoded in DNA and RNA, and metabolites, a class of biomolecules with even greater variety. Hidden in these innumerable interactions somewhere are clues to how and why biological processes are going wrong, and perhaps how to address that.

“The interaction of proteins and metabolites tells us exactly what’s happening in the disease,” Pirhaji told me. “But there are over 40,000 metabolites in the human body. DNA and RNA are easy to measure, but metabolites have tremendous diversity in mass. Each one requires its own experiment to detect.”

As you can imagine, the time and money that would be involved in such an extensive battery of testing have made metabolomics difficult to study. But what Pirhaji and her collaborators at MIT decided was that it was similar enough to other “big noisy data set” problems that the nascent approach of machine learning could be effective.

“Instead of doing experiments,” Pirhaji said, “why don’t we use AI and our database?” ReviveMed, which she founded along with data scientist Demarcus Briers and biotech veteran Richard Howell, is the embodiment of that proposal.

Pharmaceutical companies and research organizations already have a trove of metabolite masses, known interactions, suspected but unproven effects, and disease states and outcomes. Plenty of experimentation is done, but the results are frustratingly vague, owing to the inability to be sure which metabolites are present or what they’re doing. Most experimentation has produced only a partial understanding of a small proportion of known metabolites.

That data isn’t just a few drives’ worth of spreadsheets and charts, either. Not only does it comprise drug-protein, protein-protein, protein-metabolite and metabolite-disease interactions, but it also includes data that’s essentially never been analyzed: “We’re looking at metabolites that no one has looked at before.”

The information is sitting in an archive somewhere, gathering dust. “We actually have to go physically pick up the mass spectrometry files,” Pirhaji said. (“They’re huge,” she added.)

Once they got the data all in one place (Pirhaji described it as “a big hairball with millions of interactions,” in a presentation in March), they developed a model to evaluate and characterize everything in it, producing the kind of insights machine learning systems are known for.

The “hairball.”
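
For a sense of what that hairball looks like as a data structure, here is a minimal sketch of a heterogeneous interaction graph. The node and edge layers come from the article; the specific molecules, confidence values and the use of networkx are illustrative assumptions, not ReviveMed’s actual (proprietary) model.

```python
# Illustrative sketch only; not ReviveMed's model. Node and edge layers
# follow the article; molecules, confidences and names are hypothetical.
import networkx as nx

G = nx.Graph()

# Heterogeneous node types: drugs, proteins, metabolites, diseases.
G.add_node("drug:compound_x", kind="drug")           # hypothetical drug
G.add_node("protein:HTT", kind="protein")            # huntingtin
G.add_node("metabolite:C6H12O6", kind="metabolite")  # e.g. a hexose sugar
G.add_node("disease:huntingtons", kind="disease")

# The interaction layers named in the article, with made-up confidences.
G.add_edge("drug:compound_x", "protein:HTT",
           layer="drug-protein", confidence=0.7)
G.add_edge("protein:HTT", "metabolite:C6H12O6",
           layer="protein-metabolite", confidence=0.4)
G.add_edge("metabolite:C6H12O6", "disease:huntingtons",
           layer="metabolite-disease", confidence=0.6)

# A candidate drug-disease connection is a path that crosses the layers;
# ranking millions of such paths is where the machine learning comes in.
for path in nx.all_simple_paths(G, "drug:compound_x",
                                "disease:huntingtons", cutoff=3):
    print(path)
```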

The results were more than a little promising. In a trial run, they identified new disease mechanisms for Huntington’s, new therapeutic targets (i.e. biomolecules or processes that could be affected by drugs), and existing drugs that may affect those targets.

The secret sauce, or one ingredient anyway, is the ability to distinguish metabolites with similar masses (sugars or fats with different molecular configurations but the same number and type of atoms, for instance) and correlate those metabolites with both drug and protein effects and disease outcomes. The metabolome fills in the missing piece between disease and drug without any tests establishing it directly.
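
To see why that disambiguation matters, consider that isomers share the same exact mass, so a mass spectrometry peak can only be narrowed to a set of candidates. The toy sketch below uses textbook monoisotopic masses purely for illustration; it is not ReviveMed’s method, which brings network context to bear where mass alone is silent.

```python
# Why mass alone is ambiguous: glucose and fructose are both C6H12O6,
# so they have identical monoisotopic masses. Values are textbook figures.
METABOLITES = {
    "glucose":    180.0634,  # C6H12O6
    "fructose":   180.0634,  # C6H12O6, an isomer of glucose
    "leucine":    131.0946,  # C6H13NO2
    "isoleucine": 131.0946,  # C6H13NO2, an isomer of leucine
}

def candidates(measured_mass: float, tol_ppm: float = 10.0) -> list[str]:
    """Return every metabolite whose mass is within tol_ppm of the peak."""
    return [
        name for name, mass in METABOLITES.items()
        if abs(measured_mass - mass) / mass * 1e6 <= tol_ppm
    ]

# Mass spectrometry can't tell these apart; network context has to.
print(candidates(180.063))  # ['glucose', 'fructose']
```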

At that point the drug will, of course, require real-world testing. But although ReviveMed does do some verification on its own, this is when the company would hand back the results to its clients, pharmaceutical companies, which then take the drug and its new effect to market.

In effect, the business model offers low-cost, high-reward R&D as a service to pharma, which can hand over reams of data it has no particular use for, potentially resulting in practical applications for drugs that already have millions invested in their testing and manufacture. What wouldn’t Pfizer pay to learn that Robitussin also prevents Alzheimer’s? That knowledge is worth billions, and ReviveMed is offering a new, powerful way to check for such things with little in the way of new investment.

This is the kind of web of molecules and effects that the system sorts through.

ReviveMed, for its part, is being a bit more choosy than that — its focus is on untreatable diseases with a good chance that existing drugs affect them. The first target is fatty liver disease, which affects millions, causing great suffering and cost. And something like Huntington’s, in which genetic triggers and disease effects are known but not the intermediate mechanisms, is also a good candidate for which the company’s models can fill the gap.

The company isn’t reliant on Big Pharma for its data, though. The original training data was all public (though “very fragmented”) and it’s that on which the system is primarily based. “We have a patent on our process for getting this metabolome data and translating it into insights,” Pirhaji notes, although the work they did at MIT is available for anyone to access (it was published in Nature Methods, in case you were wondering).

But compared with genomics and proteomics, not much metabolomic data is public — so although ReviveMed can augment its database with data from clients, its researchers are also conducting hundreds of human tests on their own to improve the model.

The business model is a bit complicated as well. “It’s very case by case,” Pirhaji told me. A research hospital looking to collaborate, share data and publish any results openly or hold them as shared intellectual property, for instance, wouldn’t see much cash change hands. But a top-5 pharma company (ReviveMed already has dealings with two of them) that wants to keep all the results for itself, and has limitless coffers, would pay a higher price.

I’m oversimplifying, but you get the idea. In many cases, however, ReviveMed will aim to retain a stake in any intellectual property it contributes to. And of course the data provided by clients goes into the model and improves it, which is its own form of payment. So you can see that negotiations might get complicated. But the company already has several revenue-generating pilots in place, so even at this early stage those complications are far from insurmountable.

Lastly there’s the matter of the seed round: $1.5 million, led by Rivas Capital along with TechU, Team Builder Ventures, and WorldQuant. This should allow them to hire the engineers and data scientists they need and expand in other practical ways. Placing well in a recent Google machine learning competition got them $200K worth of cloud computing credit, so that should keep them crunching for a while.

ReviveMed’s approach is a fundamentally modern one that wouldn’t be possible just a few years ago, such is the scale of the data involved. It may prove to be a powerful example of data-driven biotech as lucrative as it is beneficial. Even the early proof-of-concept and pilot work may provide help to millions or save lives — it’s not every day a company is founded that can say that.

News Source = techcrunch.com


Accel Partners

With at least $1.3 billion invested globally in 2018, VC funding for blockchain blows past 2017 totals

Although bitcoin and blockchain technology may not take up quite as much mental bandwidth for the general public as it did just a few months ago, companies in the space continue to rake in capital from investors.

One of the latest to do so is Circle, which recently announced a $110 million Series E round led by bitcoin mining hardware manufacturer Bitmain. Other participating investors include Tusk Ventures, Pantera Capital, IDG Capital Partners, General Catalyst, Accel Partners, Digital Currency Group, Blockchain Capital and Breyer Capital.

This round vaults Circle into an exclusive club of crypto companies valued, in U.S. dollars, at $1 billion or more in their most recent venture capital round. According to Crunchbase data, Circle was valued at $2.9 billion pre-money, up from a $420 million pre-money valuation in its Series D round, which closed in May 2016. Until now, only Coinbase and Robinhood (a mobile-first stock-trading platform that recently made a big push into cryptocurrency trading) were in the crypto-unicorn club that Circle has joined.

But that’s not the only milestone for the world of venture-backed cryptocurrency and blockchain startups.

Back in February, Crunchbase News predicted that the amount of money raised in old-school venture capital rounds by blockchain and blockchain-adjacent startups in 2018 would surpass the amount raised in 2017. Well, it’s only May, and it looks like the prediction panned out.

In the chart below, you’ll find worldwide venture deal and dollar volume for blockchain and blockchain-adjacent companies. We purposely excluded ICOs, including those in which traditional VCs participated, and instead focused on venture deals: angel, seed, convertible notes, Series A, Series B and so on. The data displayed below is based on reported data in Crunchbase, which may be subject to reporting delays and is, in some cases, incomplete.
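
As a rough illustration of that filtering, here is how one might reproduce the tallies from a Crunchbase-style export. The file and column names are assumptions for the sketch, not Crunchbase’s actual schema.

```python
# Sketch of the tally described above; file and column names are assumed.
import pandas as pd

VENTURE_ROUNDS = {"angel", "seed", "convertible_note", "series_a",
                  "series_b", "series_c", "series_d", "series_e"}

rounds = pd.read_csv("blockchain_rounds.csv", parse_dates=["announced_on"])

# Keep traditional venture deals only; ICOs are excluded outright.
venture = rounds[rounds["round_type"].isin(VENTURE_ROUNDS)]

# Worldwide deal count and dollar volume per calendar year.
by_year = venture.groupby(venture["announced_on"].dt.year).agg(
    deal_count=("round_type", "size"),
    dollar_volume_usd=("raised_usd", "sum"),  # unreported amounts add 0
)
print(by_year)
```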

A little more than five months into 2018, reported dollar volume invested in VC rounds raised by blockchain companies has surpassed 2017’s total. Not only that: the nearly $1.3 billion in global dollar volume is greater than the reported funding totals for the 18 months between July 1, 2016 and the end of 2017.

And although Circle’s Series E round certainly helped to bump up funding totals year-to-date, there were many other large funding rounds throughout 2018. After all, we had to get to nearly $1.3 billion somehow.

All of this is to say that investor interest in the blockchain space shows no immediate signs of slowing down, even as the prices of bitcoin, ethereum and other cryptocurrencies hover at less than half of their all-time highs. Considering that regulators are still figuring out how to treat most crypto assets, and given the massive price volatility and dubious real-world utility of the technology, it may surprise some that investors at the riskiest end of the risk capital pool invest as much as they do in blockchain.

Notes on methodology

As in our February analysis, we first created a list of companies in Crunchbase’s bitcoin, ethereum, blockchain, cryptocurrency and virtual currency categories. We added to this list any companies that use those keywords, as well as “digital currency,” “utility token” and “security token,” that weren’t previously included in the above categories. After de-duplicating this list, we merged this set of companies with funding rounds data in Crunchbase.
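
A sketch of that list-building step might look like the following. The field names and CSV files are hypothetical stand-ins for Crunchbase’s actual export format.

```python
# Sketch of the category/keyword selection; schema is hypothetical.
import pandas as pd

CATEGORIES = {"bitcoin", "ethereum", "blockchain",
              "cryptocurrency", "virtual currency"}
EXTRA_KEYWORDS = {"digital currency", "utility token", "security token"}

companies = pd.read_csv("companies.csv")

cats = companies["category_list"].fillna("").str.lower()
desc = companies["description"].fillna("").str.lower()

in_category = cats.apply(lambda s: any(c in s for c in CATEGORIES))
by_keyword = desc.apply(
    lambda s: any(k in s for k in CATEGORIES | EXTRA_KEYWORDS))

# De-duplicate, then join against the funding rounds table.
selected = companies[in_category | by_keyword].drop_duplicates(
    subset="company_id")
rounds = pd.read_csv("rounds.csv")
merged = selected.merge(rounds, on="company_id", how="inner")
```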

Please note that for some entries in Crunchbase’s round data, the amount of capital raised isn’t known. And, as previously noted, Crunchbase’s data is subject to reporting delays, especially for seed-stage companies. Accordingly, actual funding totals are likely higher than reported here.

News Source = techcrunch.com


Artificial Intelligence

AI will save us from yanny/laurel, right? Wrong

If you haven’t taken part in the yanny/laurel controversy over the last couple days, allow me to sincerely congratulate you. But your time is up. The viral speech synth clip has met the AI hype train and the result is, like everything in this mortal world, disappointing.

Sonix, a company that produces AI-based speech recognition software, ran the ambiguous sound clip through the transcription tools of Google, Amazon and IBM’s Watson, and of course through its own.

Google and Sonix managed to get it on the first try — it’s “laurel,” by the way. Not yanny. Laurel.

But Amazon stumbled, repeatedly producing “year old” as its best guess for what the robotic voice was saying. IBM’s Watson, amazingly, got it only half the time, alternating between hearing “yeah role” and “laurel.” So in a way, it’s the most human of them all.

Top: Amazon; bottom: IBM.
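
Sonix hasn’t published its test harness, but sending a clip to one of these services is straightforward. Here is a minimal sketch using Google’s Cloud Speech-to-Text Python client; the file name, encoding and sample rate are assumptions about the clip.

```python
# Minimal sketch: transcribe a clip with Google Cloud Speech-to-Text.
# File name, encoding and sample rate are assumptions.
from google.cloud import speech

client = speech.SpeechClient()

with open("yanny_laurel.wav", "rb") as f:
    audio = speech.RecognitionAudio(content=f.read())

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
)

response = client.recognize(config=config, audio=audio)
for result in response.results:
    # Each result's top alternative is the service's best guess.
    print(result.alternatives[0].transcript)
```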

Sonix CEO Jamie Sutherland told me in an email that he can’t really comment on the mixed success of the other models, not having access to them.

“As you can imagine the human voice is complex and there are so many variations of volume, cadence, accent, and frequency,” he wrote. “The reality is that different companies may be optimizing for different use cases, so the results may vary. It is challenging for a speech recognition model to accommodate for everything.”

My guess as an ignorant onlooker is it may have something to do with the frequencies the models have been trained to prioritize. Sounds reasonable enough!
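
That guess is easy to play with at home: the clip tends to read as “laurel” when the low frequencies dominate and “yanny” when the highs do. A quick band-pass sketch with SciPy follows; the file names are hypothetical and the split points are rough.

```python
# Sketch: band-limit the clip to bias it toward one word or the other.
# File names are hypothetical; the low/high split points are rough.
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

rate, clip = wavfile.read("yanny_laurel.wav")
if clip.ndim > 1:  # keep a single channel if the file is stereo
    clip = clip[:, 0]

def bandpass(signal, low_hz, high_hz, rate):
    """4th-order Butterworth band-pass between low_hz and high_hz."""
    sos = butter(4, [low_hz, high_hz], btype="band", fs=rate, output="sos")
    return sosfilt(sos, signal)

# Low band tends to sound like "laurel"; high band like "yanny".
wavfile.write("laurel_biased.wav", rate,
              bandpass(clip, 50, 1000, rate).astype(clip.dtype))
wavfile.write("yanny_biased.wav", rate,
              bandpass(clip, 1000, 8000, rate).astype(clip.dtype))
```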

It’s really an absurd endeavor to ask a system modeled on our own hearing and cognition for an authoritative judgment in a matter where our hearing and cognition are demonstrably lacking. But it’s still fun.

News Source = techcrunch.com


AI

What we know about Google’s Duplex demo so far

The highlight of Google’s I/O keynote earlier this month was the reveal of Duplex, a system that can call a salon or a restaurant, chat with the human who picks up and book an appointment or reservation for you. The demo drew lots of laughs at the keynote, but after the dust settled, plenty of ethical questions popped up about how Duplex tries to pass itself off as human. Over the course of the last few days, those were joined by questions about whether the demo was staged or edited, after Axios asked Google a few simple questions about it that Google refused to answer.

We have reached out to Google with a number of very specific questions about this and have not heard back. As far as I can tell, the same is true for other outlets that have contacted the company.

If you haven’t seen the demo, take a look at this before you read on.

So did Google fudge this demo? Here is why people are asking and what we know so far:

During his keynote, Google CEO Sundar Pichai noted multiple times that we were listening to real calls and real conversations (“What you will hear is the Google Assistant actually calling a real salon.”). The company made the same claims in a blog post (“While sounding natural, these and other examples are conversations between a fully automatic computer system and real businesses.”).

Google has so far declined to disclose the names of the businesses it worked with and whether it had permission to record those calls. California is a two-party consent state, so our understanding is that permission to record these calls would have been necessary (unless those calls were made to businesses in a state with different laws). So on top of the ethics questions, there are also a few legal questions here.

We have some clues, though. In the blog post, Google Duplex lead Yaniv Leviathan and engineering manager Matan Kalman posted a picture of themselves eating a meal “booked through a call from Duplex.” Thanks to the wonder of crowdsourcing and a number of intrepid sleuths, we know that this restaurant was Hongs Gourmet in Saratoga, California. We called Hongs Gourmet last night, but the person who answered the phone referred us to her manager, who she told us had left for the day. (We’ll give it another try today.)

Sadly, the rest of Google’s audio samples don’t contain any other clues as to which restaurants were called.

What prompted much of the suspicion here is that nobody who answers the calls from the Assistant in Google’s samples gives their own name or the name of the business. My best guess is that Google cut those parts from the conversations, but it’s hard to tell. Some of the audio samples do, however, sound as if the beginning was edited out.

Google clearly didn’t expect this project to be controversial. The keynote demo was meant to dazzle, and in the moment it did, because if it really works, this technology represents the culmination of years of work on machine learning. But the company evidently didn’t think through the consequences.

My best guess is that Google didn’t fake these calls. But it surely presented only the best examples from its tests. That’s what you do in a big keynote demo, after all, even though in hindsight, showing the system fail or trying to place a live call would have been even better (remember Steve Jobs’ Starbucks call?).

For now, we’ll see if we can get more answers, but so far all of our calls and emails have gone unanswered. Google could easily do away with all of those questions around Duplex by simply answering them, but so far, that’s not happening.

News Source = techcrunch.com

