Artificial Intelligence

Baidu plans to mass produce Level 4 self-driving cars with BAIC by 2021

Baidu, China’s internet technology giant, hopes to be mass producing autonomous cars by 2021, thanks to a partnership with BAIC Group, the Chinese automaker that will handle the manufacturing side of that equation. BAIC Group is one of Baidu’s many partners in its Apollo autonomous driving program, and it will use the open platform to produce vehicles with Level 3 autonomous features by 2019 before moving on to fully self-driving Level 4 cars by 2021, the companies announced today.

Baidu will contribute cybersecurity, image recognition and self-driving technology, as well as its DuerOS virtual assistant capabilities, and BAIC will integrate those technologies into its own vehicles. The two anticipate that by 2019, over 1 million of BAIC’s production vehicles will feature Baidu networking tech, and the companies will also work on building out a cloud-based automotive ecosystem of products and services, including crowd-sourced traffic info and more.

Just last month, GM announced that it would mass produce its own self-driving vehicles with subsidiary Cruise Automation. GM’s high-volume autonomous car is based on the Bolt platform, but features more tightly integrated self-driving sensors and computing technology designed to be produced at scale.

News Source = techcrunch.com

Artificial Intelligence

Google injects Hire with AI to speed up common tasks

Since Google Hire launched last year, it has been trying to make it easier for hiring managers to manage the data and tasks associated with the hiring process, while perhaps needling LinkedIn along the way. Today the company announced some AI-infused enhancements that it says will help save the time and energy spent on manual processes.

“By incorporating Google AI, Hire now reduces repetitive, time-consuming tasks, like scheduling interviews into one-click interactions. This means hiring teams can spend less time with logistics and more time connecting with people,” Google’s Berit Hoffmann, Hire product manager, wrote in a blog post announcing the new features.

The first piece involves making it easier and faster to schedule interviews with candidates. This is a multi-step activity that involves lining up appropriate interviewers, choosing a date and time that works for everyone involved and booking a room in which to conduct the interview. Organizing these kinds of logistics tends to eat up a lot of time.

“To streamline this process, Hire now uses AI to automatically suggest interviewers and ideal time slots, reducing interview scheduling to a few clicks,” Hoffmann wrote.

Photo: Google
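Google hasn’t published how Hire’s suggestion model works, but the core constraint it solves, finding openings that clear every interviewer’s calendar, can be sketched in a few lines of Python. The function names and the busy-interval representation below are illustrative assumptions, not Hire’s API:

```python
from datetime import datetime, timedelta

def free_slots(busy, day_start, day_end, length):
    """Return the earliest start time of each opening of at least
    `length`, given a list of (start, end) busy intervals."""
    slots, cursor = [], day_start
    for b_start, b_end in sorted(busy):
        if cursor + length <= b_start:  # a gap long enough before this block
            slots.append(cursor)
        cursor = max(cursor, b_end)     # handles overlapping busy blocks
    if cursor + length <= day_end:
        slots.append(cursor)
    return slots

def common_slots(calendars, day_start, day_end, length):
    """Pool every interviewer's busy blocks, then find the shared openings."""
    merged = [interval for cal in calendars for interval in cal]
    return free_slots(merged, day_start, day_end, length)
```

A real scheduler would rank these candidates (by interviewer load, time of day and so on) before suggesting one; the interval sweep above is only the feasibility step.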

Another common hiring chore is finding keywords in a resume. Hire’s AI now finds these words for a recruiter automatically, analyzing the terms in a job description or search query and highlighting the relevant words in a resume, including synonyms and acronyms, to save the time spent searching for them manually.

Photo: Google
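The details of Hire’s term expansion aren’t public, but the highlighting behavior described above can be approximated with a hand-rolled synonym table and a regular expression. The `SYNONYMS` map and the `**…**` markers here are invented for illustration:

```python
import re

# Hypothetical synonym/acronym table; Hire's actual expansion model is not public.
SYNONYMS = {
    "ml": {"ml", "machine learning"},
    "machine learning": {"ml", "machine learning"},
}

def highlight_terms(resume, query_terms):
    """Wrap each query term, and any known synonym or acronym of it,
    in ** markers, matching case-insensitively on word boundaries."""
    expanded = set()
    for term in query_terms:
        expanded |= SYNONYMS.get(term.lower(), {term.lower()})
    # Longest alternatives first so "machine learning" wins over "ml".
    pattern = "|".join(sorted((re.escape(t) for t in expanded), key=len, reverse=True))
    return re.sub(rf"(?i)\b({pattern})\b", r"**\1**", resume)
```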

Finally, another standard part of the hiring process is making phone calls, lots of phone calls. To make this easier, the latest version of Google Hire has a new click-to-call function. Simply click the phone number and Hire dials it automatically, registering the call in a call log for easy recall or auditing.

While Microsoft has LinkedIn and Office 365, Google has G Suite and Google Hire. The strategy behind Hire is to allow hiring personnel to work in the G Suite tools they are immersed in every day and incorporate Hire functionality within those tools.

It’s not unlike CRM tools that integrate with Outlook or Gmail, because that’s where salespeople spend a good deal of their time anyway. The idea is to reduce the time spent switching between tools and make the process a more integrated experience.

While none of these features will necessarily wow you on its own, together they use Google AI to simplify common tasks and reduce some of the tedium associated with everyday hiring.

News Source = techcrunch.com

Artificial Intelligence

What’s under those clothes? This system tracks body shapes in real time

With augmented reality coming in hot and depth tracking cameras due to arrive on flagship phones, the time is right to improve how computers track the motions of people they see — even if that means virtually stripping them of their clothes. A new computer vision system that does just that may sound a little creepy, but it definitely has its uses.

The basic problem is that if you’re going to capture a human being in motion, say for a movie or for an augmented reality game, there’s a frustrating vagueness to them caused by clothes. Why do you think motion capture actors have to wear those skintight suits? Because their JNCO jeans make it hard for the system to tell exactly where their legs are. Leave them in the trailer.

Same for anyone wearing a dress, a backpack, a jacket — pretty much anything other than the bare minimum will interfere with the computer getting a good idea of how your body is positioned.

The multi-institutional project (PDF), due to be presented at CVPR in Salt Lake City, combines depth data with smart assumptions about how a body is shaped and what it can do. The result is a sort of X-ray vision, revealing the shape and position of a person’s body underneath their clothes, that works in real time even during quick movements like dancing.

The paper builds on two previous methods, DynamicFusion and BodyFusion. The first uses single-camera depth data to estimate a body’s pose, but doesn’t work well with quick movements or occlusion; the second uses a skeleton to estimate pose but similarly loses track during fast motion. The researchers combined the two approaches into “DoubleFusion,” essentially creating a plausible skeleton from the depth data and then sort of shrink-wrapping it with skin at an appropriate distance from the core.
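The real pipeline jointly optimizes a parametric body model against the incoming depth stream; as a deliberately toy illustration of the assign-then-update idea (2D points, a fixed skin offset, none of it from the authors’ code), one fusion step might look like:

```python
import math

def nearest_joint(point, joints):
    """Index of the skeleton joint closest to a depth point."""
    return min(range(len(joints)), key=lambda j: math.dist(point, joints[j]))

def fuse_frame(depth_points, joints, skin_radius=0.05):
    """One toy fusion step: assign depth points to their nearest joint,
    snap each joint to the centroid of its points, then re-derive a crude
    'skin' surface at a fixed offset from each joint. The actual method
    fits a full body model, not a per-joint centroid."""
    buckets = {j: [] for j in range(len(joints))}
    for p in depth_points:
        buckets[nearest_joint(p, joints)].append(p)
    new_joints = []
    for j, joint in enumerate(joints):
        pts = buckets[j]
        if pts:  # move the joint toward the observed depth cloud
            cx = sum(p[0] for p in pts) / len(pts)
            cy = sum(p[1] for p in pts) / len(pts)
            new_joints.append((cx, cy))
        else:    # occluded joint keeps its predicted pose
            new_joints.append(joint)
    skin = [(x + skin_radius, y) for x, y in new_joints]  # offset "surface"
    return new_joints, skin
```

The `skin_radius` stand-in is where clothing causes trouble: the system has to infer that offset from depth data that includes the garment, which is why baggy clothes inflate the estimated body, as noted below.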

As you can see above, depth data from the camera is combined with some basic reference imagery of the person to produce a skeleton and to track the joints and extremities of the body. On the right, you see the results of DynamicFusion alone (b), BodyFusion alone (c) and the combined method (d).

The results are much better than either method alone, seemingly producing excellent body models from a variety of poses and outfits:

Hoodies, headphones, baggy clothes, nothing gets in the way of the all-seeing eye of DoubleFusion.

One shortcoming, however, is that it tends to overestimate a person’s body size if they’re wearing a lot of clothes — there’s no easy way for it to tell whether someone is broad or they are just wearing a chunky sweater. And it doesn’t work well when the person interacts with a separate object, like a table or game controller — it would likely try to interpret those as weird extensions of limbs. Handling these exceptions is planned for future work.

The paper’s first author is Tao Yu of Tsinghua University in China, but researchers from Beihang University, Google, USC, and the Max Planck Institute were also involved.

“We believe the robustness and accuracy of our approach will enable many applications, especially in AR/VR, gaming, entertainment and even virtual try-on as we also reconstruct the underlying body shape,” write the authors in the paper’s conclusion. “For the first time, with DoubleFusion, users can easily digitize themselves.”

There’s no use denying that there are lots of interesting applications of this technology. But there’s also no use denying that this technology is basically X-ray Spex.

News Source = techcrunch.com

Artificial Intelligence

Prisma co-founders raise $1M to build a social app called Capture

Two of the co-founders of the art filter app Prisma have left to build a new social app.

Prisma, as you may recall, had a viral moment back in 2016 when selfie takers went crazy for the fine art spin the app’s AI put on photos — in just a few seconds of processing.

Downloads leapt, art selfies flooded Instagram, and similar arty effects soon found their way into all sorts of rival apps and platforms. Then, after dipping a toe into social waters with the launch of a feed of its own, the company shifted focus to b2b developer tools — and we understand it’s since become profitable.

But two of Prisma’s co-founders, Aleksey Moiseyenkov and Aram Hardy, got itchy feet when they had an idea for another app business. And they’ve both now left to set up a new startup, called Capture Technologies.

The plan is to launch the app — which will be called Capture — in Q4, with a beta planned for September or October, according to Hardy (who’s taking the CMO role).

They’ve also raised a $1M seed for Capture, led by US VC firm General Catalyst. Also investing are KPCB, Social Capital, Dream Machine VC (the seed fund of former TechCrunch co-editor Alexia Bonatsos), Paul Heydon, and Russian internet giant Mail.Ru Group.

Josh Elman from Greylock Partners is also involved as an advisor.

Hardy says they had the luxury of being able to choose their seed investors, after getting a warmer reception for Capture than they’d perhaps expected; they had thought it might be tough to raise funding for a new social app, given how crowded that space is and how thoroughly a handful of major platforms have monopolized it (hi Facebook, hey Snap!).

But they also believe they’ve identified overlooked territory — where they can offer something fresh to help people interact with others in real-time.

They’re not disclosing further details about the idea or how the Capture app will work at this stage, as they’re busy building and Hardy says certain elements could change and evolve before launch day.

What they will say is that the app will involve AI, and will put the emphasis for social interactions squarely on the smartphone camera.

Speed will also be a vital ingredient, as it was with Prisma, where it fueled the app’s virality. “We see a huge move to everything which is happening right now, which is really real-time,” Hardy tells TechCrunch. “Even when we started Prisma there were lots of similar products which were just processing one photo for five, ten, 15 minutes, and people were not using it because it takes time.

“People want everything right now. Right here. So this is a trend which is taking place right now. People just want everything right now, right here. So we’re trying to give it to them.”

“Our team’s mission is to bring an absolutely new and unique experience to how people interact with each other. We would like to come up with something unique and really fresh,” adds Moiseyenkov, Capture’s CEO (pictured above left, with Hardy).

“We see a huge potential in new social apps despite the fact that there are too many huge players.”

Having heard the full Capture pitch from Hardy I can say it certainly seems like an intriguing idea. Though how exactly they go about selectively introducing the concept will be key to building the momentum needed to power their big vision for the app. But really that’s true of any social product.

Their idea has also hooked a strong lineup of seed investors, doubtless helped by the pair’s prior success with Prisma. (If there’s one thing investors love more than a timely, interesting idea, it’s a team with pedigree — and these two certainly have that.)

“I’m happy to have such an amazing and experienced team,” adds Moiseyenkov, repaying the compliment to Capture’s investors.

“Your first investors are your team. You have to ask lots of questions like you do when you decide whether this or that person is a perfect fit for your team. Because investors and the team are those people with whom you’re going to build a great product. At the same time, investors ask lots of questions to you.”

Capture’s investors were evidently pleased enough with the answers their questions elicited to cut Capture its founding checks. And the startup’s team is already ten-strong — and hard at work to get a beta launched in fall.

The business is based in the US and Europe, with one office in Moscow, where Hardy says they’ve managed to poach some relevant tech talent from Russian social media giant vk.com, and another slated to open in a couple of weeks’ time on Snap’s home turf of LA.

“We’ll be their neighbors in Venice beach,” he confirms, though he stresses there will still be clear blue water between the two companies’ respective social apps, adding: “Snapchat is really a different product.”

News Source = techcrunch.com
