
Timesdelhi.com

April 22, 2019
Category archive: self-driving

Aurora cofounder and CEO Chris Urmson on the company’s new investor, Amazon, and much more

You might not think of self-driving technologies and politics having much in common, but at least in one way, they overlap meaningfully: yesterday’s enemy can be tomorrow’s ally.

Such was the message we gleaned Thursday night, at a small industry event in San Francisco, where we had the chance to sit down with Chris Urmson, the cofounder and CEO of Aurora, a company that (among many others) is endeavoring to make self-driving technologies a safer and more widely adopted alternative to human drivers.

It was a big day for Urmson. Earlier the same day, his two-year-old company announced a whopping $530 million in Series B funding, a round that was led by top firm Sequoia Capital and that included “significant investment” from T. Rowe Price and Amazon.

The financing for Aurora — which is building what it calls a “driver” technology that it expects to eventually integrate into cars built by Volkswagen, Hyundai, and China’s Byton, among others — is highly notable, even in a sea of giant fundings. Not only does it represent Sequoia’s biggest bet yet on any kind of self-driving technology, it’s also an “incredible endorsement” from T. Rowe Price, said Urmson Thursday night, suggesting it demonstrates that the money management giant “thinks long term and strategically [that] we’re the independent option to self-driving cars.”

Even more telling, perhaps, is the participation of Amazon, which is in constant competition to be the world’s most valuable company, and whose involvement could lead to a variety of scenarios down the road, from Aurora powering delivery fleets overseen by Amazon, to Amazon acquiring Aurora outright. Amazon has already begun marketing more aggressively to global car companies and Tier 1 suppliers that are focused on building connected products, saying its AWS platform can help them speed their pace of innovation and lower their cost structures. In November, it also debuted a global, autonomous racing league for 1/18th scale, radio-controlled, self-driving four-wheeled race cars that are designed to help developers learn about reinforcement learning, a type of machine learning. Imagine what it could learn from Aurora.

Indeed, at the event, Urmson said that as Aurora had “constructed our funding round, [we were] very much thinking strategically about how to be successful in our mission of building a driver. And one thing that a driver can do is move people, but it can also move goods. And it’s harder to think of a company where moving goods is more important than Amazon.” Added Urmson, “Having the opportunity to have them partner with us in this funding round, and [talk about] what we might build in the future is awesome.” (Aurora’s site also now features language about “transforming the way people and goods move.”)

The interest of Amazon, T. Rowe, Sequoia and Aurora’s other backers isn’t surprising. Urmson was formerly the technical lead of Google’s self-driving car program (now Waymo). One of his cofounders, Drew Bagnell, is a machine learning expert who still teaches at Carnegie Mellon and was formerly the head of Uber’s autonomy and perception team. Aurora’s third cofounder is Sterling Anderson, the former program manager of Tesla’s Autopilot team.

Aurora’s big round seemingly spooked Tesla investors, in fact, with shares in the electric car maker dropping as media outlets reported on the details. The development seems like just the type of possibility that had Tesla CEO Elon Musk unsettled when Aurora got off the ground a couple of years ago, and Tesla almost immediately filed a lawsuit against it, accusing Urmson and Anderson of trying to poach at least a dozen Tesla engineers and accusing Anderson of taking confidential information and destroying the evidence “in an effort to cover his tracks.”

That suit was dropped two and a half weeks later in a settlement that saw Aurora pay $100,000. Anderson said at the time the amount was meant to cover the cost of an independent auditor to scour Aurora’s systems for confidential Tesla information. Urmson reiterated on Thursday night that it was purely an “economic decision” meant to keep Aurora from getting further embroiled in an expensive spat.

But Urmson, who has previously called the lawsuit “classy,” didn’t take the bait on Thursday when asked about Musk, including whether he has talked in the last two years with Musk (no), and whether Aurora might need Tesla in the future (possibly). Instead of lording Aurora’s momentum over the company, Urmson said that Aurora and Tesla “got off on the wrong foot.” Laughing a bit, he went on to lavish some praise on the self-driving technology that lives inside Tesla cars, adding that “if there’s an opportunity to work with them in the future, that’d be great.”

Aurora, which is also competing for now against the likes of Uber, also sees Uber as a potential partner down the line, said Urmson. Asked about the company’s costly self-driving efforts, whose scale has been drastically downsized in the eleven months since one of its vehicles struck and killed a pedestrian in Arizona, Urmson noted simply that Aurora is “in the business of delivering the driver, and Uber needs a lot of drivers, so we think it would be wonderful to partner with them, to partner with Lyft, to partner [with companies with similar ambitions] globally. We see those companies as partners in the future.”

He added, when asked for more specifics, that there’s “nothing to talk about right now.”

Before Thursday’s event, Aurora had sent us some more detailed information about the four divisions that currently employ the 200 people that make up the company, a number that will obviously expand with its new round, as will the testing it’s doing, both on California roads and in Pittsburgh, where it also has a sizable presence. We didn’t have a chance to run through them during our conversation with Urmson, but we thought they were interesting and that you might think so, too.

Below, for example, is the “hub” of the Aurora Driver. This is the computer system that powers, coordinates and fuses signals from all of the vehicle’s sensors, executes the software and controls the vehicle. Aurora says it’s designing the Aurora Driver to seamlessly integrate with a wide variety of vehicle platforms from different makes, models and classes with the goal of delivering the benefits of its technology broadly.

Below is a visual representation of Aurora’s perception system, which the company says is able to understand complex urban environments where vehicles need to safely navigate amid many moving objects, including bikes, scooters, pedestrians, and cars.

Aurora didn’t imagine it would at the outset, but it is building its own mapping system to ensure what it (naturally) calls the highest level of precision and scalability, so vehicles powered by the company can understand where they are and update the maps as the world changes.

We asked Urmson if, when the tech is finally ready to go into cars, they will white-label the technology or else use Aurora’s brand as a selling point. He said the matter hasn’t been decided yet but seemed to suggest that Aurora is leaning in the latter direction. He also said the technology would be installed on the carmakers’ factory floors (with Aurora’s help).

One of the ways that Aurora says it’s able to efficiently develop a robust “driver” is to build its own simulation system. It uses its simulator to test its software with different scenarios that vehicles encounter on the road, which it says enables repeatable testing that’s impossible to achieve by just driving more miles.
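The repeatable-testing idea above — replaying a fixed library of on-road scenarios against each new software build and flagging anything that stops passing — can be sketched roughly as follows. The scenario format, function names, and pass criterion here are illustrative assumptions, not Aurora's actual simulator API:

```python
# Illustrative sketch of scenario-replay regression testing. All names
# and the "safe"/"unsafe" pass criterion are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Scenario:
    name: str
    events: list = field(default_factory=list)  # scripted actors: cut-ins, pedestrians, etc.

def run_scenario(driver_policy, scenario: Scenario) -> bool:
    """Replay one scripted scenario; pass only if the policy stays safe throughout."""
    return all(driver_policy(event) == "safe" for event in scenario.events)

def regression_suite(driver_policy, scenarios) -> dict:
    """Run every stored scenario against a new build; any False is a regression."""
    return {s.name: run_scenario(driver_policy, s) for s in scenarios}
```

The point of the structure is exactly what the article describes: unlike accumulating raw road miles, the same scenario can be rerun deterministically after every software change.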

Aurora’s motion planning team works closely with the perception team to create a system that both detects the important objects on and around the road, and tries to accurately predict how they will move in the future. The ability to capture, understand, and predict the motion of other objects is critical if the tech is going to navigate real-world scenarios in dense urban environments, and Urmson has said in the past that Aurora has crafted its related workflow in a way that’s superior to competitors that send the technology back and forth.

Specifically, he told The Atlantic last year: “The classic way you engineer a system like this is that you have a team working on perception. They go out and make it as good as they can and they get to a plateau and hand it off to the motion-planning people. And they write the thing that figures out where to stop or how to change a lane and it deals with all the noise that’s in the perception system because it’s not seeing the world perfectly. It has errors. Maybe it thinks it’s moving a little faster or slower than it is. Maybe every once in a while it generates a false positive. The motion-planning system has to respond to that.

“So the motion-planning people are lagging behind the perception people, but they get it all dialed in and it’s working well enough—as well as it can with that level of perception—and then the perception people say, ‘Oh, but we’ve got a new push [of code].’ Then the motion-planning people are behind the eight ball again, and their system is breaking when it shouldn’t.”
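The noise Urmson describes — jittery speed estimates and the occasional false positive — is typically absorbed downstream before it reaches the planner. The sketch below is a generic illustration of two common defenses (exponential smoothing of speed, and requiring several consecutive detections before a track is reported); the parameters and class name are assumptions, not anything Aurora has published:

```python
# Illustrative sketch of absorbing perception noise before motion planning:
# smooth jittery speed estimates and debounce one-frame false positives.
class NoisyTrackFilter:
    def __init__(self, alpha: float = 0.3, confirm_frames: int = 3):
        self.alpha = alpha                  # smoothing weight for new measurements
        self.confirm_frames = confirm_frames
        self.speed = None                   # smoothed speed estimate
        self.hits = 0                       # consecutive frames the object was seen

    def update(self, detected: bool, measured_speed: float = 0.0):
        """Feed one perception frame; return a smoothed speed once confirmed."""
        if not detected:
            self.hits = 0                   # a gap in detections resets confirmation
            return None
        self.hits += 1
        if self.speed is None:
            self.speed = measured_speed
        else:                               # exponential smoothing of the speed estimate
            self.speed += self.alpha * (measured_speed - self.speed)
        # Report the track only after several consecutive frames, so a
        # single false positive never reaches the planner.
        return self.speed if self.hits >= self.confirm_frames else None
```

In this framing, the planner only ever sees confirmed, smoothed tracks, which is one way to keep a new perception push from "breaking" planning behavior the way the quote describes.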

We also asked Urmson about Google, whose self-driving unit was renamed Waymo as it spun out from the Alphabet umbrella as its own company. He was highly diplomatic, saying only good things about the company and, when asked if they’d ever challenged him on anything since leaving, answering that they had not.

Still, he told us that one of the biggest advantages Aurora enjoys is that it was able to use the learnings of its three founders and start from scratch, whereas the big companies from which each has come cannot completely start over.

As he told TechCrunch in a separate interview last year when asked how Aurora tests its technology, when it comes to self-driving tech, size matters less than one might imagine. “There’s this really easy metric that everyone is using, which is number of miles driven, and it’s one of those things that was really convenient for me in my old place [Google] because we’re out there and we were doing a hell of a lot more than anybody else was at the time, and so it was an easy number to talk about. What’s lost in that, though, is it’s not really the volume of the miles that you drive.” It’s about the quality of the data, he’d continued, suggesting that, for now, at least, Aurora’s is hard to beat.

News Source = techcrunch.com

Nauto will notify drivers when they’re distracted in real time

Nauto, the transportation company that aims to make human drivers safer and train autonomous vehicles for all types of scenarios, has just launched Prevent. Nauto Prevent is designed to prevent distracted driving by notifying drivers when they’ve had their eyes off the road for too long.

Nauto Prevent’s notifications are dependent upon factors like how long you’ve had your eyes averted from the road and how fast you’re driving. If you’ve been distracted for more than five seconds and are driving at 60 mph, you’ll hear a voice notification. But if you continue to be distracted, you’ll hear an alarm.
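The escalation behavior described above — no alert at low risk, a voice prompt after a sustained distraction, then an alarm if it continues — can be sketched as a simple policy function. The thresholds below (beyond the 5-second/60 mph example from the article) and all names are illustrative assumptions, not Nauto's actual implementation:

```python
# Illustrative sketch of a distraction-alert escalation policy, loosely
# following the behavior described above. Thresholds are assumptions.
VOICE_THRESHOLD_S = 5.0   # seconds of eyes-off-road before a voice prompt
ALARM_THRESHOLD_S = 8.0   # continued distraction escalates to an alarm
MIN_SPEED_MPH = 25.0      # below this, treat risk as low (e.g. stopped at a light)

def alert_level(eyes_off_road_s: float, speed_mph: float) -> str:
    """Return 'none', 'voice', or 'alarm' for the current driver state."""
    if speed_mph < MIN_SPEED_MPH:
        return "none"     # looking at a phone at a red light is low risk
    if eyes_off_road_s >= ALARM_THRESHOLD_S:
        return "alarm"
    if eyes_off_road_s >= VOICE_THRESHOLD_S:
        return "voice"
    return "none"
```

Gating on speed as well as duration is what lets a system like this alert on actual risk rather than on every technically illegal glance, which is the distinction Heck draws below.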

“We designed the whole thing to be really focused on keeping the driver safe without being intrusive,” Nauto CEO Stefan Heck told TechCrunch. “We want to help human drivers, not just rat them out to their boss.”

This feature is on top of Nauto’s flagship product that helps companies better train their commercial drivers. Nauto’s core offering is a two-way facing camera that sits up near the rear-view mirror to monitor both driver behavior and road conditions. Using computer vision and artificial intelligence, Nauto then provides insights and coaching to drivers around distraction and fatigue. With Prevent, drivers will now receive notifications if they’re distracted, tailgating or if there are other potential risks on the road.

The idea is not to just alert drivers when they’re doing something illegal, but to notify them when there’s actual risk involved.

“Holding a cell phone or looking at a phone makes you susceptible to a ticket,” Heck said. “But if you’re stopped at a red light, the risk is low.”

Nauto’s list price is $499 for the device, with monthly pricing of $39.95, depending on the customer and market. Nauto is not currently disclosing the cost of Prevent, nor how many customers it has. What Heck would tell me is that Nauto is nationwide across various taxi, ride-sharing, rental, package delivery and enterprise service fleets.

Nauto is launching Prevent with the hope to reduce collisions, accidents and injuries on the road before the industry reaches autonomy. But once autonomy officially arrives, Nauto will be ready to provide companies with its driving data and real-world scenarios of what driverless cars can expect from a distracted driver who, for example, runs a red light. Another use case Nauto has stumbled upon pertains to autonomous safety drivers and helping companies ensure their drivers are paying attention.

To date, Nauto has raised $173.9 million from General Motors Ventures, Toyota AI Ventures and BMW iVentures. Its most recent round came last July when it raised a $159 million Series B round from SoftBank and Greylock.

Building the best possible driver inside Waymo’s Castle

Waymo has been very protective of its testing process in the past, but recently it started opening up – likely as a bid to help get the public more comfortable with self-driving vehicle technology as it moves towards broad deployment of its autonomous cars. As part of that, the former Google self-driving car project asked a group of journalists to pay a visit to its Castle testing facility in Northern California.

The Castle isn’t just a very cool name for a proving ground, it’s the actual name of the former Air Force base (used during the 1940s to train bomber crews for WWII) that Google took over back in 2013 to house some of its ‘X’ projects, including Project Loon and what would eventually become Waymo in 2016.

At Castle, we got a rare look at one aspect of Waymo’s testing process for its autonomous cars, complete with a briefing on the company’s approach from CEO John Krafcik, VP of Engineering Dmitri Dolgov, UX and Early Rider Program Product Manager Juliet Rothenberg and Head of UX Design Ryan Powell.

Krafcik opened by giving a rundown of the various terms that have been applied to self-driving technology, ranging from “driver assistance” to “semi-driverless cars,” noting that there’s been “a lot of confusion about what the terminology means.” Part of Waymo’s aim is to clear up the confusion – and, by implication, perhaps douse some of its competitors’ more grandiose claims with the cold water of reality.

It also helps Waymo clearly explain where it sits on the spectrum of driverless vehicle technologies, and how it concluded that it would focus only on technologies that would be classified as Level 4 and Level 5 by the SAE’s standards – fully driverless tech requiring no intervention by a human driver.

Waymo classifies anything from Levels 1 through 3 as technically “driver assist” features, according to Krafcik, and this is an “important divide” which Waymo has observed first hand, concluding early on that it’s not an area they’re interested in pursuing.
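The “important divide” Krafcik draws maps cleanly onto the SAE J3016 level definitions, and can be expressed as a simple lookup. The mapping below follows the article's framing (Levels 1–3 as driver assist, Levels 4–5 as full autonomy); the function names are illustrative:

```python
# SAE J3016 automation levels, with the divide described above: Waymo
# treats Levels 1-3 as "driver assist" and pursues only Levels 4-5.
SAE_LEVELS = {
    0: "no automation",
    1: "driver assistance",
    2: "partial automation",
    3: "conditional automation",
    4: "high automation (full autonomy within a limited domain)",
    5: "full automation (no conditions)",
}

def is_driver_assist(level: int) -> bool:
    """True for the Levels 1-3 band Waymo classifies as driver assist."""
    return 1 <= level <= 3

def requires_human_fallback(level: int) -> bool:
    """Below Level 4, a human must be ready to take over."""
    return level < 4
```

The key property of the divide is that `requires_human_fallback` flips exactly at Level 4 – which is why the handoff problem Krafcik describes next only afflicts the lower band.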

Krafcik revealed that one of the first products Waymo considered bringing to market back in 2012 and 2013 was a highway driving assist feature, which would handle everything between on-ramp and exit, but that also required drivers to be fully attentive to the road and their surroundings while it was in operation.

The results, per Krafcik, were downright frightening: footage taken from the vehicles of Google employees testing the highway assist features, which the company showed us during the briefing, included people texting, doing makeup, fumbling around their seats for charging cables and even, in one particularly grievous instance, sleeping while driving 55 mph on a freeway.

“We shut down this aspect of the project a couple of days after seeing that,” Krafcik said. “The better you make the driver assist technologies… the more likely the human behind the wheel is to fall asleep, and then when the vehicle says, ‘Hey, I need you to take over,’ they lack contextual awareness.”

This is why Waymo has been very vocal in the past and today about focusing on Level 4 (full autonomy within specific ‘domains’ or geographies and conditions) and Level 5 (full, unqualified autonomy).

How does Castle enter into its goal of achieving that by “building the world’s most experienced driver?” In short: Practice.

Waymo likes to quantify its progress in terms of miles driven, since driving experience is the primary means of improvement for autonomous technology, according to it and many others in this space. Krafcik said at the event that Waymo has managed 3.5 million autonomous miles across testing in 20 different cities this year, and it managed 2.5 billion (with a ‘b’) miles in 2016 in simulation, or via testing in virtual software environments reflecting real-world conditions.

At Castle, you get aspects of the unpredictability of real-world driving, combined with the control of simulation. Stephanie Villegas, who leads ‘Structured Testing’ at Castle, explained that this type of testing allows them to model and stage challenging situations Waymo has encountered in real life, and also to validate things they know the cars have been able to do well when they issue updates to make sure there aren’t any regressions.

Structured Testing sounds kind of complicated but it’s actually explained in the name – Waymo sets up (structures) tests using its self-driving vehicles (the latest generation Chrysler Pacifica-based test car in the examples we saw), as well as things they call “fauxes” (pronounced “foxes” by Villegas). These are other cars, pedestrians, cyclists and other variables (contractors working for Waymo) who replicate the real-world conditions that Waymo is trying to test for. The team runs these tests over and over, “as many times as we can where we’re still seeing improvement” per Villegas – and each time the conditions will vary slightly since it’s real-world testing with actual human beings.

Waymo and Villegas took us through three structured tests, including one in which a passing car cuts off the self-driving van without much warning; one where the self-driving car has to deal with a vehicle backing out of a driveway on a corner; and one where it encounters movers in a roadway and has to navigate around them, while also heeding oncoming traffic.

The self-driving car handled every situation, each of which featured three test runs, with aplomb, and Villegas said that although Waymo always performs its tests with safety drivers on board, each of these was done fully autonomously with no intervention from the drivers at all.

In general, Waymo says it’s essentially nailed down Level 4 self-driving, especially in the controlled confines of its autonomous proving ground at Castle. We even got a ride in the self-driving Pacifica – without anyone at the wheel at all – and it went smoothly (more on that here). But it doesn’t sound like Castle’s useful life is anywhere near at an end: Villegas said that she’s run countless Structured Tests in her time at Waymo since 2012, first at a semi-private and disused parking garage for the Shoreline Amphitheatre near Google’s Mountain View HQ, and later at Castle when the company outgrew that space.

Getting Waymo’s tech ready for Level 4 autonomy in public deployment will require more testing still, as will making the eventual leap from Level 4 to Level 5, the company’s true ultimate goal. It’s not just a matter of having somewhere Waymo can test without worrying about state regulations – it’s about a place where serendipity can be manufactured, to help ensure its cars are ready for anything, without having to wait for them to encounter those scenarios on real roads when the stakes are highest.

Cruise’s self-driving Chevrolet Bolts are coming to New York next year

GM’s Cruise Automation will expand its test pool, while keeping a focus on city driving, something it has said gives it an edge in the autonomous driving space. What better city to use for testing, then, than New York, one of the densest and most hectic traffic nightmares in North America?

Cruise will test its self-driving fleet in New York in a five-mile-square section of Manhattan, the company announced via the WSJ, in a move that will also make it the first automaker to test autonomous vehicles in the city. Each will have a safety driver on board, as they do in the current San Francisco test, but now they’ll be tackling inclement four-season weather, as well as other drivers and pedestrians who are less laid back than their west coast counterparts.

Alongside the pilot deployment, Cruise will also be operating a new research center in the city, likely because it doesn’t make much sense to round-trip the data back to its offices in San Francisco. No word yet on timeframes for consumer-facing deployment, but as Cruise’s testing in NYC proceeds, it seems likely the GM subsidiary will replicate its staff-facing prototype on-demand autonomous pick-up service in Manhattan, too.

Cruise recently explained that it believes its testing in city environments provides much more useful data in terms of helping teach its autonomous driving systems, vs. testing in suburban areas, like the Arizona pilot location for Waymo’s on-demand ride hailing trial. I’d expect more major cities to become testing targets for Cruise as capacity and local regulators allow, then, since it seems like GM will aim to deploy any future consumer self-driving services in those areas first.

Baidu plans to mass produce Level 4 self-driving cars with BAIC by 2021

Baidu, China’s internet technology giant, hopes to be in the business of mass producing autonomous cars by 2021, thanks to a partnership with BAIC Group, a Chinese automaker which will handle the manufacturing part of that equation. BAIC Group is one of Baidu’s many partners for its Apollo autonomous driving program, and it’ll use the open platform to produce vehicles with Level 3 autonomous features by 2019 before moving on to fully self-driving Level 4 cars by 2021, the companies announced today.

Baidu will contribute cybersecurity, image recognition and self-driving technologies, as well as its DuerOS virtual assistant capabilities, and BAIC will integrate those technologies into its own vehicles. The two anticipate that by 2019, over 1 million of BAIC’s production vehicles will feature Baidu networking tech, and the companies will be working on building out an automotive cloud-based ecosystem of products and services, too, including crowd-sourced traffic info and more.

Just last month, GM announced that it would be mass producing its own self-driving vehicles with subsidiary Cruise Automation. The GM large volume autonomous car is based on the Bolt platform, but features more integrated self-driving sensors and computing technology designed to be produced at scale.
