
Timesdelhi.com

February 24, 2019
Category archive

Artificial Intelligence

Helen Liang and Andy Wheeler will be speaking at TC Sessions: Robotics + AI April 18 at UC Berkeley


We’re just under two months out from this year’s TC Sessions: Robotics + AI event, and we’ve still got a lot left to announce. As noted, we’ll have Anca Dragan, Marc Raibert, Alexei Efros, Hany Farid, Melonee Wise, Peter Barrett, Rana el Kaliouby, Arnaud Thiercelin and Laura Major at the April event, and today we’ve got a pair of names to add to the ever-growing speaker list.

Today we’re excited to announce two additions to our VC panel, which will be discussing the wild world of robotics investments.

Founding and Managing Partner of FoundersX Ventures, Helen Liang will be joining us at the event to discuss the 20 early-stage robotics and AI startups she has invested in. Liang brings a decade of product development experience to her work at her early-stage capital fund and also serves as Founding President at Tech for Good.

Andy Wheeler is a founding partner at GV (formerly Google Ventures), focusing on bringing early-stage tech to market. He is a co-founder of Ember Corporation and a veteran of the MIT Media Lab. His list of early investments includes Carbon, Farmer’s Business Network, Abundant Robotics and Orbital Insight.

Early bird tickets are now on sale. Book your $249 ticket today and save $100 before prices go up. Students, did you know that you can save $45 with a heavily discounted student ticket? Book your student ticket here.

News Source = techcrunch.com

Google Cloud’s speech APIs get cheaper and learn new languages


Google today announced an update to its Cloud Speech-to-Text and Text-to-Speech APIs that introduces a few new features that should be especially interesting to enterprise users, as well as improved language support and a price cut.

Most of these updates focus on the Speech-to-Text product, but Cloud Text-to-Speech is getting a major update with 31 new WaveNet and 24 new standard voices. The service now also supports seven new languages: Danish, Portuguese (Portugal), Russian, Polish, Slovak, Ukrainian, and Norwegian Bokmål. These are all in beta right now and extend the list of supported languages to 21 total.

The service now also features the ability to optimize audio playback for specific devices. That sounds like a minor thing, but it means you can tune the output one way for a call center application with interactive voice responses and another way for an application used with a headset.
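As a rough sketch, the device-targeting feature is exposed through an audio profile field in the Text-to-Speech request. The profile names below follow Google's audio-profile naming for phone lines and headsets; the voice name and example texts are illustrative, not taken from the article.

```python
# Hedged sketch of a Cloud Text-to-Speech REST request body that targets
# a specific playback device via effectsProfileId. The profile strings
# "telephony-class-application" and "headset-class-device" follow Google's
# audio-profile naming; the voice name is illustrative.
def build_tts_request(text, device_profile):
    """Return a text:synthesize request body tuned for one device class."""
    return {
        "input": {"text": text},
        "voice": {"languageCode": "en-US", "name": "en-US-Wavenet-D"},
        "audioConfig": {
            "audioEncoding": "MP3",
            # post-processing profile applied to the synthesized audio
            "effectsProfileId": [device_profile],
        },
    }

# One request tuned for an IVR phone line, another for headset listeners.
ivr_request = build_tts_request("Press 1 for support.", "telephony-class-application")
headset_request = build_tts_request("Welcome back.", "headset-class-device")
```

The same input text can thus be synthesized twice with different post-processing, without changing the voice or the application code around it.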

As for Cloud Speech-to-Text, this update focuses on making the service more usable in situations where developers have to support users on multiple channels — think a phone conference. For this, the company introduced multi-channel recognition as a beta last year, and this feature is now generally available.
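A minimal sketch of what a multi-channel recognize request looks like, assuming the REST field names for channel count and per-channel recognition; the Cloud Storage path is a placeholder.

```python
# Hedged sketch: a Speech-to-Text recognize request body for multi-channel
# audio (e.g. a two-channel phone conference recording). Field names follow
# the REST API; the gs:// URI is a placeholder.
def build_multichannel_request(channels):
    """Return a recognize request asking for one transcript per channel."""
    return {
        "config": {
            "encoding": "LINEAR16",
            "sampleRateHertz": 16000,
            "languageCode": "en-US",
            # declare how many channels the audio has...
            "audioChannelCount": channels,
            # ...and ask for a separate transcript per channel,
            # instead of a single mixed-down transcript
            "enableSeparateRecognitionPerChannel": True,
        },
        "audio": {"uri": "gs://my-bucket/conference-call.wav"},
    }

stereo_request = build_multichannel_request(2)
```

Each result in the response then carries a channel tag, so a conference app can attribute utterances to the right line.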

Similarly, Google’s premium AI models for video and enhanced phone calls launched into beta last year with the promise of fewer transcription errors than Google’s standard model, which mostly focuses on short queries and voice commands. These models, too, are now generally available.
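Selecting a premium model is, as a sketch, a matter of two fields in the recognition config: a flag opting into enhanced models and the model name itself ("video" or "phone_call" per Google's model naming).

```python
# Hedged sketch: requesting one of the premium (enhanced) Speech-to-Text
# models via the recognition config. "video" and "phone_call" follow
# Google's documented enhanced-model names.
def enhanced_config(model="video"):
    """Return a recognition config that opts into an enhanced model."""
    return {
        "encoding": "LINEAR16",
        "sampleRateHertz": 16000,
        "languageCode": "en-US",
        "useEnhanced": True,  # opt into the premium model tier
        "model": model,       # "video" or "phone_call"
    }

video_cfg = enhanced_config("video")
call_cfg = enhanced_config("phone_call")
```

Leaving both fields out keeps a request on the standard model, so existing callers are unaffected by the premium tier going GA.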

In addition to the new features, Google also decided to cut the price of the Speech-to-Text service. For users who opt in to Google’s data logging program, the prices of the standard and premium video models for transcribing videos drop by 33 percent. By opting in, you allow Google to use your data to help train Google’s models. The company promises that only a limited number of employees will have access to the data and that it will solely use it to train and improve its products, but chances are not everybody is going to feel comfortable opting in to this, even if it means there’s a discount.

Thankfully, the regular premium video model is now also 25 percent cheaper without having to opt in to Google’s data logging program. Like before, the first 60 minutes are still free.
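As a back-of-the-envelope check of the two discounts described above, using an illustrative per-minute list price (the real rates are on Google's pricing page, not in the article):

```python
# Back-of-the-envelope sketch of the two price cuts. The $0.048/minute
# list price is illustrative, not Google's actual rate.
def discounted(list_price, pct_off):
    """Apply a percentage discount to a per-minute list price."""
    return round(list_price * (1 - pct_off / 100), 4)

video_list_price = 0.048  # hypothetical premium video price per minute

with_logging = discounted(video_list_price, 33)  # data-logging opt-in
without_logging = discounted(video_list_price, 25)  # regular cut
```

At that hypothetical rate, the opt-in price lands at $0.0322/minute versus $0.036/minute for the regular discounted tier, which is the gap users weigh against sharing their audio for training.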

 

News Source = techcrunch.com

Tech investors see bugs as a big business as Ÿnsect raises $125 million


A company using advanced technologies to grow and harvest mealworms (larval beetles) at scale is on track to become one of the venture capital industry’s oddest billion dollar investments.

Ÿnsect (pronounced “insect”) is a Paris-based producer of insect protein that has just closed on $125 million as the company looks to expand into North America selling bug-based nutrients to fish farms, animal farms, and the everyday harvesters of vegetables.

The company isn’t worth $1 billion… yet. But that’s clearly the goal as it bulks up for a global expansion effort.

According to the company’s chief executive Antoine Hubert, a former agronomist turned bug farm maven, the company grew out of efforts to promote sustainability in the food system across France.

“We thought we could make a bigger impact by developing not only education but production,” in the realm of novel proteins for agriculture, Hubert says. 

Since agriculture is a leading producer of the carbon dioxide and methane emissions that contribute to global warming, any steps taken to reduce those emissions by making supply chains and production more efficient are good for the environment.

“The food system has an impact on greenhouse gas. We decided to develop a proper technology to produce large volumes of proteins at competitive prices,” Hubert says.

The company borrows automation and sensing technologies from areas as diverse as automotive manufacturing and data center heating, ventilation and cooling, and applies them to the cultivation of mealworms. The company holds 25 patents on the technologies it has deployed and is on track to book more than $70 million in revenue this year.

Bugs are clearly big business.

Why mealworms, though? Because, Hubert says, they are the highest-quality insect for pound-for-pound protein production.

Image courtesy of Ÿnsect

The company said that it raised this $125 million (€110m) Series C round to scale up production. Ÿnsect intends to build the world’s biggest insect farm in Amiens Metropole, Northern France and will begin expanding its presence in the North American market. 

The deal, led by Astanor Ventures with participation from Bpifrance, Talis Capital, Idinvest Partners, Finasucre and Compagnie du Bois Sauvage, is the largest ag tech deal to date outside of North America and should plant a flag for the role of insect cultivation in the animal feedstock and fertilizer market, a combined global market worth $800 billion.

That’s good news for competitors like Protix, AgriProtein, EnviroFlight and Beta Hatch, which are all building insect kingdoms of their own with eyes on the same, massive, global market. In fact, before Ÿnsect’s big haul, Protix held the title of the venture-backed bug business with the most cash. The company raised $50 million in financing back in 2017 to expand its insect empire.

Ÿnsect’s bug protein has already found its way into pet and plant food, fish food for aquaculture and other applications, but as demand for sources of high quality proteins continues to grow alongside a rising global population, the company sees one of its largest opportunities in fish and shellfish farming.

“By offering an insect protein alternative to traditional animal and fish-based feed sources, Ÿnsect can help offset the growing competition for ocean fish stock required to feed two billion more people by 2050, while alleviating fish, water and soil depletion, as well as agriculture’s staggering 25% share of global greenhouse gas emissions,” says Hubert. “Our goal is simply to give insects back their natural place in the food chain.”

It was Ÿnsect’s ability to slot itself into the global food chain that attracted Talis Capital as an investor, according to the firm’s co-founder Matus Maar.

“With the global population expected to grow to nine billion by 2050, current aquaculture and animal feeding practices are unsustainable,” Maar said in a statement. “Ÿnsect taps into a huge, yet highly inefficient global market by offering a premium and – above all – sustainable insect-derived product through a fully automated, AI-enabled production process.”

News Source = techcrunch.com

This robotics museum in Korea will construct itself (in theory)


The planned Robot Science Museum in Seoul will have a humdinger of a first exhibition: its own robotic construction. It’s very much a publicity stunt, though a fun one — but who knows? Perhaps robots putting buildings together won’t be so uncommon in the next few years, in which case Korea will just be an early adopter.

The idea for robotic construction comes from Melike Altinisik Architects, the Turkish firm that won a competition to design the museum. Their proposal took the form of an egg-like shape covered in panels that can be lifted into place by robotic arms.

“From design, manufacturing to construction and services, robots will be in charge,” wrote the firm in the announcement that it had won the competition. Now, let’s be honest: this is obviously an exaggeration. The building has clearly been designed by the talented humans at MAA, albeit with a great deal of help from computers. But it has been designed with robots in mind, and they will be integral to its creation.

The parts will all be designed digitally, and robots will “mold, assemble, weld and polish” the plates for the outside, according to World Architecture, after which, of course, they will also be put in place by robots. The base and surrounds will be produced by an immense 3D printer laying down concrete.

So while much of the project will unfortunately have to be done by people, it will certainly serve as a demonstration of those processes that can be accomplished by robots and computers.

Construction is set to begin in 2020, with the building opening its (likely human-installed) doors in 2022 as a branch of the Seoul Metropolitan Museum. My instincts tell me, though, that this kind of unprecedented combination of processes is more likely than not to produce significant delays. Here’s hoping the robots cooperate.

News Source = techcrunch.com

The Samsung S10’s cameras get ultra-wide-angle lenses and more AI smarts


Samsung’s S10 lineup features a whopping four models: the S10e, the S10, the S10+ and the S10 5G. Unsurprisingly, one of the features that differentiates these models is the camera system. Gone are the days, after all, when one camera would suffice. Now, all the S10 models, except for the budget S10e, feature at least three rear cameras, and the high-end 5G model even goes for four — and all of them promise more AI smarts and better video stabilization.

All models get at least a standard 12MP rear wide-angle camera with a 77-degree field of view, a 16MP ultra-wide-angle camera for 123-degree shots, and a 10MP selfie camera. The standard S10 then adds a 12MP telephoto lens to the rear camera setup, and the S10+ gets an 8MP RGB depth camera. The high-end S10 5G adds an hQVGA 3D depth camera to both the front and rear setup.

The ultra-wide lens is a first for Samsung’s flagship S10 series, though the company is a bit late to the game here, given that others have already offered this kind of lens on their phones. Still, if you are planning on getting an S10, this new lens will come in handy for large group shots and landscape photos.

On the video front, Samsung promises better stabilization, UHD quality for both the rear and front cameras and HDR10+ support for the rear camera. That makes it the first phone to support HDR10+.

These days, though, it’s all about computational photography, and like its competitors, Samsung promises that its new cameras are also significantly smarter than their predecessors. Specifically, the company is pointing to its new scene optimizer for the S10 line, which uses the phone’s neural processing unit to recognize and process up to 30 different scenes and also offer shot suggestions to help you better frame the scene. Samsung says it analyzed over 100 million professional photos to create the machine learning models that power this feature.

On the software side, Samsung now also offers a version of Adobe’s Premiere Rush, the company’s video editor that’s specifically geared toward editing on the go for YouTube. Oh, and the phones will also get a special Instagram mode.

Since we haven’t actually used the phones yet, though, it’s hard to say how much a difference those AI smarts really make in day-to-day use.

News Source = techcrunch.com
