Timesdelhi.com

June 16, 2019
Category archive

Artificial Intelligence

Food delivery startup Dahmakan eats up $5M for expansion in Southeast Asia

It’s harvest season for Southeast Asia’s full-stack food delivery startups. Following on from Singapore’s Grain raising $10 million, Malaysia-based Dahmakan today announced a $5 million financing round of its own.

The money takes the startup to $10 million raised to date — its last round was $2.6 million last year — and it comes via new investors U.S.-based Partech Partners and China’s UpHonest Capital, and existing backers Y Combinator, Atami Capital and the former CEO of Nestlé, who was an angel investor. The round was closed earlier this year but is only now being announced, alongside this expansion play.

It’s been a busy couple of years for the company, which was founded in 2015 by former execs from Rocket Internet’s FoodPanda service. Dahmakan — which means “Have you eaten?” in Malay — graduated Y Combinator in 2017 and expanded to Thailand last year through an acquisition. So what’s on the menu for 2019?

It is going all in on the ‘cloud kitchen’ model of using unwanted retail space to cook up meals specifically for digital orders — which, in its case, is the entire business, since it handles all processes in-house rather than operating a marketplace.

Already, in its home town of Kuala Lumpur, Malaysia, Dahmakan has introduced ‘satellite’ hubs that will allow it to serve customers located in different parts of the city more efficiently. The service already fares better than rivals like FoodPanda, Grab Food and (in Thailand) GoJek’s GetFood service because customers order ahead of time from a fixed menu with scheduled delivery times, but there’s room to do better and more.

“The way that we are thinking about it is that we are 18 months ahead of the competition in terms of the cloud kitchen model. Most are only starting to build out clusters of mini kitchens (150sqft) or so without leveraging too much AI in terms of product development, procurement or automation in machinery,” Dahmakan COO and co-founder Jessica Li told TechCrunch.

“What we’ve figured out is how to scale food production for thousands of deliveries while maintaining quality and keeping costs at 30 percent below comparable restaurant prices,” she added, explaining that the company plans to add “new brands and new products” using the satellite hub approach.

A serving of Ayam Penyet, Indonesian smashed chicken

Dahmakan is looking to extend its reach in Southeast Asia, too.

Li said the immediate priority is domestic growth in Malaysia, with the service set to expand to Penang and Johor Bahru during the third quarter of this year. Beyond that, she revealed that Dahmakan plans to move into Singapore and Indonesia before the end of 2019.

Food delivery is quickly becoming the new ride-hailing war in Southeast Asia as Grab and Go-Jek, which have raised the most money in the region, pour capital into the space. Quite why they are doing so isn’t entirely clear. Food could be a channel for loyalty (if such a thing can exist in incentive-led verticals) and user engagement for ride-hailing or other parts of their so-called “super app” services but, either way, it is certainly distorting the market by flooding users with promotions.

That’s not necessarily a bad thing for startups like Dahmakan and Grain which have grown in a more sustainable and responsible manner. They benefit from more people using food delivery in general, while they may also become attractive acquisition targets in the future.

Like Grain, Dahmakan puts a focus on healthy eating, which stands in contrast to the typical junk food orders that others in the space serve through their marketplaces of restaurants. That certainly helps them stand out among certain audiences, and it’ll be interesting to see what new products and brands Dahmakan is hatching to capitalize on the flood of attention food delivery is seeing.

This is certainly only the start. A Google-Temasek report on Southeast Asia published last year forecasts that the region’s food delivery market will grow from an estimated $2 billion last year to $8 billion in 2025. That four-fold growth is faster than what is forecast for ride-hailing, although ride-hailing remains the larger market.

“That’s faster than any other region, even China,” Li said.

A report from Google and Temasek predicts huge growth for ride-hailing and food delivery services in Southeast Asia

DefinedCrowd offers mobile apps to empower its AI-annotating masses

DefinedCrowd, the Startup Battlefield alumnus that produces and refines data for AI-training purposes, has just debuted iOS and Android apps for its army of human annotators. It should help speed up a process that the company already touts as one of the fastest in the industry.

It’s no secret that AI relies almost totally on data that has been hand-annotated by humans, pointing out objects in photos, analyzing the meaning of sentences or expressions, and so on. Doing this work has become a sort of cottage industry, with many annotators doing it part time or between other jobs.

There’s a limit, however, to what you can do if the interface you must use to do it is only available on certain platforms. Just as others occasionally answer an email or look over a presentation while riding the bus or getting lunch, it’s nice to be able to do work on mobile — essential, really, at this point.

To that end DefinedCrowd has made its own app, which shares the Neevo branding of the company’s annotation community, that lets its annotators work whenever they want, tackling image or speech annotation tasks on the go. It’s available on iOS and Android starting today.

It’s a natural evolution of the market, CEO Daniela Braga told me. There’s a huge demand for this kind of annotation work, and it makes no sense to restrict the schedules or platforms of the people doing it. She suggested everyone in the annotation space would have apps soon, just as every productivity or messaging service does. And why not?

The company is growing quickly, going from a handful of employees to over a hundred, spread over its offices in Lisbon, Porto, Seattle, and Tokyo. The market, likewise, is exploding as more and more companies find that AI is not just applicable to what they do, but also within their reach.

Biofourmis raises $35M to develop smarter treatments for chronic diseases

Biofourmis, a Singapore-based startup pioneering a distinctly tech-based approach to the treatment of chronic conditions, has raised a $35 million Series B round for expansion.

The round was led by Sequoia India and MassMutual Ventures, the VC fund from Massachusetts Mutual Life Insurance Company. Other investors who put in include EDBI, the corporate investment arm of Singapore’s Economic Development Board, China-based healthcare platform Jianke and existing investors Openspace Ventures, Aviva Ventures and SGInnovate, a Singapore government initiative for deep tech startups. The round takes Biofourmis to $41.6 million raised to date, according to Crunchbase.

This isn’t your typical TechCrunch funding story.

Biofourmis CEO Kuldeep Singh Rajput moved to Singapore to start a PhD, but he dropped out to start the business with co-founder Wendou Niu in 2015 because he saw the potential to “predict disease before it happens,” he told TechCrunch in an interview.

AI-powered specialist post-discharge care

There are a number of layers to Biofourmis’ work, but essentially it uses a combination of data collected from patients and an AI-based system to customize treatments for post-discharge patients. The company is focused on a range of therapeutics, but its most advanced is cardiac — that is, patients who have been discharged after heart failure or other heart-related conditions.

With that segment of patients, the Biofourmis platform uses a combination of its tech and data from sensors — medical-grade sensors worn 24/7, rather than consumer wearables — to monitor patient health, detect problems ahead of time and prescribe an optimum treatment course. That information is disseminated through companion mobile apps for patients and caregivers.

Biofourmis uses a mobile app as a touch point to give patients tailored care and drug prescriptions after they are discharged from hospital

That’s to say that medicine works differently on different people, so by collecting and monitoring data and crunching numbers, Biofourmis can provide the best drug to help optimize a patient’s health through what it calls a ‘digital pill.’ That’s not Matrix-style futurology, it’s more like a digital prescription that evolves based on the needs of a patient in real-time. It plans to use a network of medical delivery platforms, including Amazon-owned PillPack, to get the drugs to patients within hours.

Yes, that’s future tense because Biofourmis is waiting on FDA approval to commercialize its service. That’s expected to come by the end of this year, Singh Rajput told TechCrunch. But he’s optimistic given clinical trials, which have covered some 5,000 patients across 20 different sites.

On the tech side, Singh Rajput said Biofourmis has seen impressive results with its predictions. He cited tests in the U.S. which enabled the company to “predict heart failure 14 days in advance” with around 90 percent sensitivity. That was achieved using standard medical wearables at the cost of hundreds of dollars, rather than thousands with advanced kit such as Heartlogic from Boston Scientific — although the latter has a longer window for predictions.

The type of disruption Biofourmis is pitching might appear to upset the applecart for pharma companies, but Singh Rajput maintains that the industry is moving towards a more qualitative approach to healthcare because it has been hard to evaluate the performance of drugs and price them accordingly.

“Today, insurance companies are blinded not having transparency on how to price drugs,” he said. “But there are already 50 drugs in the market paying based on outcomes so the market is moving in that direction.”

Outcome-based payments mean insurance firms reimburse based on the performance of the drugs — in other words, how well patients recover. The rates vary, but a lack of reduction in readmission rates can see insurers lower their payouts because drugs aren’t working as well as expected.

Singh Rajput believes Biofourmis can level the playing field and add more granular transparency in terms of drug performance. He believes pharma companies are keen to show their products perform better than others, so over the long term that’s the model Biofourmis wants to encourage.

Indeed, the confidence is such that Biofourmis intends to initially go to market via pharma companies, which will sell the package into clinics bundled with their drugs, before moving to work with insurance firms once traction is gained. While the Biofourmis service is likely to be bundled with initial medication, the company will take a commission of 5-10 percent on the recommended drugs sold through its digital pill.

Biofourmis CEO and co-founder Kuldeep Singh Rajput dropped out of his PhD course to start the company in 2015

Doubling down on the US

With its new money, Biofourmis is doubling down on that imminent commercialization by relocating its headquarters to Boston. It will retain its presence in Singapore, where it has 45 people who handle software and product development, but the new U.S. office is slated to grow from 14 staff right now to up to 120 by the end of the year.

“The U.S. has been a major market focus since day one,” Singh Rajput said. “Being closer to customers and attracting the clinical data science pool is critical.”

While he praised Singapore and said the company remains committed to the country — adding EDBI to its investors is certainly a sign — he admitted that Boston, where he once studied, is a key market for finding “data scientists with core clinical capabilities.”

That expansion is not only to bring the cardiac product to market, but also to prepare products covering other therapeutics. Right now, it has six trials in place that cover pain, orthopedics and oncology. There are also plans to expand in other markets outside the U.S., in particular Singapore and China, where Biofourmis plans to lean on Jianke.

Not lacking in confidence, Singh Rajput told TechCrunch that the company is on course to reach a $1 billion valuation when it next raises funding, which he estimates is around 18 months away. The company isn’t saying how much it is worth today.

Singh Rajput did confirm, however, that the round was heavily oversubscribed, and that the startup rebuffed investment offers from pharma companies in order to “avoid a conflict of interest and stay neutral.”

He is also eyeing a future IPO, which is tentatively set for 2023 — although by then, Singh Rajput said, Biofourmis would need at least two products in the market.

There’s a long way to go before then, but this round has certainly put Biofourmis and its digital pill approach on the map within the tech industry.

Freshworks acquires customer success service Natero

Customer engagement service Freshworks, which you may still remember under its old name of Freshdesk, today announced that it has acquired Natero, a customer success service with some AI/ML smarts that helps businesses prevent churn and manage their customers.

The acquisition, Freshworks CEO Girish Mathrubootham told me, will help the company complete its mission to provide its users with a 360-degree view of their customers. As Mathrubootham stressed, Freshdesk started out with a focus on customer support and then added additional functionality for marketers and other roles over time. Today, however, companies want this full 360-degree view of a customer and the ability to offer differentiated service to their top customers, for example. In many ways, the acquisition of Natero closes the loop here.

“The acquisition extends our ‘customer-for-life’ vision to all teams, including account and customer success managers who require up-to-date customer usage and health data to proactively engage those accounts at risk of churn or ready to buy more,” Mathrubootham said.

Natero founder and CEO Craig Soules echoed this and noted that the only way to do this is to have a rich customer model at the core of these efforts. “More and more people wanted to take data from Natero and take it to sales tools,” he said when I asked him how his company will fit into the Freshworks portfolio — and why he sold the company. “In Freshworks, we saw a company that was going in this direction and that has been doing customer success for a very long time. […] It felt like a very natural fit to leverage this customer model.”

Mathrubootham also noted that Freshworks was actually a Natero customer, so when Natero got to the point where it was looking for more capital to expand this focus on its customer model, the two companies started talking.

Natero will continue to exist as a stand-alone product, but it will also become part of Freshworks 360, Freshworks’ integrated customer engagement suite.

Ahead of today’s acquisition, Natero had raised a total of $3.3 million. That’s not a lot for a startup that launched back in 2012, but Soules noted how he was able to fund the company’s expansion through revenue. The two companies did not disclose the acquisition price.

Why is Facebook doing robotics research?

It’s a bit strange to hear that the world’s leading social network is pursuing research in robotics rather than, say, making search useful, but Facebook is a big organization with many competing priorities. And while these robots aren’t directly going to affect your Facebook experience, what the company learns from them could be impactful in surprising ways.

Though robotics is a new area of research for Facebook, its reliance on and bleeding-edge work in AI are well known. Mechanisms that could be called AI (the definition is quite hazy) govern all sorts of things, from camera effects to automated moderation of restricted content.

AI and robotics are naturally overlapping magisteria — it’s why we have an event covering both — and advances in one often advance the other, or open new areas of inquiry in it. So really it’s no surprise that Facebook, with its strong interest in using AI for a variety of tasks in the real and social media worlds, might want to dabble in robotics to mine for insights.

What then could be the possible wider applications of the robotics projects it announced today? Let’s take a look.

Learning to walk from scratch

“Daisy” the hexapod robot.

Walking is a surprisingly complex action, or series of actions, especially when you’ve got six legs, like the robot used in this experiment. You can program in how it should move its legs to go forward, turn around, and so on, but doesn’t that feel a bit like cheating? After all, we had to learn on our own, with no instruction manual or settings to import. So the team looked into having the robot teach itself to walk.

This isn’t a new type of research — lots of roboticists and AI researchers are into it. Evolutionary algorithms (different but related) go back a long way, and there have already been interesting papers in this vein.

By giving their robot some basic priorities like being “rewarded” for moving forward, but no real clue how to work its legs, the team let it experiment and try out different things, slowly learning and refining the model by which it moves. The goal is to reduce the amount of time it takes for the robot to go from zero to reliable locomotion from weeks to hours.
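The trial-and-error loop described above — start with no gait, try random variations, keep whatever improves a "reward" for moving forward — can be sketched in miniature. This is a toy hill-climbing sketch, not Facebook's actual training setup: the six "leg phase" parameters and the target alternating-tripod gait are invented stand-ins for a real reward signal measured on hardware.

```python
import random

# Toy stand-in for a hexapod: "forward progress" is higher the closer the six
# leg phase offsets are to an alternating-tripod gait (offsets 0 and 0.5).
TARGET_GAIT = [0.0, 0.5, 0.0, 0.5, 0.0, 0.5]

def forward_progress(phases):
    """Reward signal: negative squared error against the target gait."""
    return -sum((p - t) ** 2 for p, t in zip(phases, TARGET_GAIT))

def learn_gait(steps=2000, noise=0.05, seed=0):
    rng = random.Random(seed)
    # No instruction manual or settings to import: start with random leg phases.
    phases = [rng.random() for _ in range(6)]
    best = forward_progress(phases)
    for _ in range(steps):
        # Experiment: perturb the current gait slightly...
        trial = [p + rng.gauss(0, noise) for p in phases]
        reward = forward_progress(trial)
        if reward > best:
            # ...and keep only the changes that help it "move forward".
            phases, best = trial, reward
    return phases, best

phases, reward = learn_gait()
```

A real setup would replace `forward_progress` with measured locomotion and the hill-climbing loop with a proper reinforcement-learning algorithm, but the shape of the loop — act, score, keep what works — is the same.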

What could this be used for? Facebook is a vast wilderness of data, complex and dubiously structured. Learning to navigate a network of data is of course very different from learning to navigate an office — but the idea of a system teaching itself the basics on a short timescale given some simple rules and goals is shared.

Learning how AI systems teach themselves, and how to remove roadblocks like mistaken priorities, cheating the rules, weird data-hoarding habits and other stuff is important for agents meant to be set loose in both real and virtual worlds. Perhaps the next time there is a humanitarian crisis that Facebook needs to monitor on its platform, the AI model that helps do so will be informed by the autodidactic efficiencies that turn up here.

Leveraging “curiosity”

Researcher Akshara Rai adjusts a robot arm in the robotics AI lab in Menlo Park. (Facebook)

This work is a little less visual, but more relatable. After all, everyone feels curiosity to a certain degree, and while we understand that sometimes it kills the cat, most times it’s a drive that leads us to learn more effectively. Facebook applied the concept of curiosity to a robot arm being asked to perform various ordinary tasks.

Now, it may seem odd that they could imbue a robot arm with “curiosity,” but what’s meant by that term in this context is simply that the AI in charge of the arm — whether it’s seeing or deciding how to grip, or how fast to move — is given motivation to reduce uncertainty about that action.

That could mean lots of things — perhaps twisting the camera a little while identifying an object gives it a little bit of a better view, improving its confidence in identifying it. Maybe it looks at the target area first to double check the distance and make sure there’s no obstacle. Whatever the case, giving the AI latitude to find actions that increase confidence could eventually let it complete tasks faster, even though at the beginning it may be slowed by the “curious” acts.
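One common way to formalize that notion of "curiosity" is to treat the agent's own prediction error as the thing to chase: act where your model of the world is most uncertain, and uncertainty shrinks as the model improves. The sketch below is a minimal illustration under that assumption — the action names and their outcomes are made up, and this is not Facebook's implementation.

```python
import math

# Hidden world dynamics the agent must learn: the true outcome of each action.
TRUE_OUTCOME = {"twist_camera": 3.0, "peek_at_target": -1.0, "fast_grip": 0.5}

class CuriousAgent:
    """Always tries the action it is least certain about, then updates its model."""

    def __init__(self, actions):
        self.model = {a: 0.0 for a in actions}             # predicted outcomes
        self.uncertainty = {a: math.inf for a in actions}  # last prediction error

    def step(self):
        # "Curiosity": pick the action whose outcome the model predicts worst.
        action = max(self.uncertainty, key=self.uncertainty.get)
        observed = TRUE_OUTCOME[action]
        error = abs(observed - self.model[action])
        # Learn from the observation (simple exponential moving average)...
        self.model[action] += 0.5 * (observed - self.model[action])
        # ...which reduces the uncertainty that drew the agent here.
        self.uncertainty[action] = abs(observed - self.model[action])
        return action, error

agent = CuriousAgent(TRUE_OUTCOME)
errors = [agent.step()[1] for _ in range(12)]
```

After a dozen steps the agent's predictions converge on the true outcomes, and the "curious" acts that slowed it down at first are exactly what bought it that confidence.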

What could this be used for? Facebook is big on computer vision, as we’ve seen both in its camera and image work and in devices like Portal, which (some would say creepily) follows you around the room with its “face.” Learning about the environment is critical for both these applications and for any others that require context about what they’re seeing or sensing in order to function.

Any camera operating in an app or device like those from Facebook is constantly analyzing the images it sees for usable information. When a face enters the frame, that’s the cue for a dozen new algorithms to spin up and start working. If someone holds up an object, does it have text? Does it need to be translated? Is there a QR code? What about the background, how far away is it? If the user is applying AR effects or filters, where does the face or hair stop and the trees behind begin?

If the camera, or gadget, or robot, left these tasks to be accomplished “just in time,” it would produce CPU usage spikes, visible latency in the image, and all kinds of stuff the user or system engineer doesn’t want. But if it’s doing it all the time, that’s just as bad. If instead the AI agent is exerting curiosity to check these things when it senses too much uncertainty about the scene, that’s a happy medium. This is just one way it could be used, but given Facebook’s priorities it seems like an important one.

Seeing by touching

Although vision is important, it’s not the only way that we, or robots, perceive the world. Many robots are equipped with sensors for motion, sound, and other modalities, but actual touch is relatively rare. Chalk it up to a lack of good tactile interfaces (though we’re getting there). Nevertheless, Facebook’s researchers wanted to look into the possibility of using tactile data as a surrogate for visual data.

If you think about it, that’s perfectly normal — people with visual impairments use touch to navigate their surroundings or acquire fine details about objects. It’s not exactly that they’re “seeing” via touch, but there’s a meaningful overlap between the concepts. So Facebook’s researchers deployed an AI model that decides what actions to take based on video, but instead of actual video data, fed it high-resolution touch data.

Turns out the algorithm doesn’t really care whether it’s looking at an image of the world as we’d see it or not — as long as the data is presented visually, for instance as a map of pressure on a tactile sensor, it can be analyzed for patterns just like a photographic image.
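The point that a pressure map "can be analyzed for patterns just like a photographic image" is easy to demonstrate: exactly the same convolution code runs unchanged on both. Below is a toy sketch with a hand-rolled 2D convolution and a standard edge-detection kernel; the sample grids are invented, and a real system would use a learned convolutional network rather than one fixed kernel.

```python
def convolve2d(grid, kernel):
    """Minimal valid-mode 2D convolution over a list-of-lists grid."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(grid) - kh + 1):
        row = []
        for j in range(len(grid[0]) - kw + 1):
            row.append(sum(kernel[a][b] * grid[i + a][j + b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

# A classic edge-detection kernel, as used on grayscale images.
EDGE = [[-1, -1, -1],
        [-1,  8, -1],
        [-1, -1, -1]]

# Same shape, same code path: pixel intensities vs. tactile pressure readings.
camera = [[0, 0, 0, 0],
          [0, 9, 9, 0],
          [0, 9, 9, 0],
          [0, 0, 0, 0]]
touch  = [[0, 0, 0, 0],
          [0, 5, 5, 0],
          [0, 5, 5, 0],
          [0, 0, 0, 0]]

image_edges = convolve2d(camera, EDGE)  # edges of a bright square
touch_edges = convolve2d(touch, EDGE)   # edges of a pressed region
```

The convolution never asks where the numbers came from — a bright square and a pressed region produce the same kind of edge response, which is the crux of treating touch data as a surrogate for vision.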

What could this be used for? It’s doubtful Facebook is super interested in reaching out and touching its users. But this isn’t just about touch — it’s about applying learning across modalities.

Think about how, if you were presented with two distinct objects for the first time, it would be trivial to tell them apart with your eyes closed, by touch alone. Why can you do that? Because when you see something, you don’t just understand what it looks like, you develop an internal model representing it that encompasses multiple senses and perspectives.

Similarly, an AI agent may need to transfer its learning from one domain to another — auditory data telling a grip sensor how hard to hold an object, or visual data telling the microphone how to separate voices. The real world is a complicated place and data is noisier here — but voluminous. Being able to leverage that data regardless of its type is important to reliably being able to understand and interact with reality.

So you see that while this research is interesting in its own right, and can in fact be explained on that simpler premise, it is also important to recognize the context in which it is being conducted. As the blog post describing the research concludes:

We are focused on using robotics work that will not only lead to more capable robots but will also push the limits of AI over the years and decades to come. If we want to move closer to machines that can think, plan, and reason the way people do, then we need to build AI systems that can learn for themselves in a multitude of scenarios — beyond the digital world.

As Facebook continually works on expanding its influence from its walled garden of apps and services into the rich but unstructured world of your living room, kitchen, and office, its AI agents require more and more sophistication. Sure, you won’t see a “Facebook robot” any time soon… unless you count the one they already sell, or the one in your pocket right now.
