Artificial Intelligence

Luminar rolls out its development platform and scores Volvo partnership and investment

The wizards in lidar tech at Luminar are doubling down on the practical side of autonomous car deployment with a partnership with and investment from Volvo, as well as a new “perception development platform” that helps squeeze every last drop out of its laser-based imagery.

Volvo Cars has been one of the big investors in autonomous vehicles, and while it has produced some cars equipped for driverless operation, the company seems to understand that this is a very long game it’s playing. There’s more to it than just slapping some sensors on a production vehicle and sending it on its way.

Part of that long game is picking winners in the industry, as well, and Volvo seems to be confident that Luminar, whose lidar tech is in many ways leaps and bounds beyond the competition, will be among them. Volvo’s recently established Tech Fund has made an investment in Luminar — its first, and of an undisclosed size.

That doesn’t mean they get a seat on the board or anything — it’s purely a financial play, Luminar’s founder and CEO Austin Russell told me.

The two are also deepening their partnership on the lidar tech itself. Luminar today announced its “perception development platform,” for which Volvo is the first customer. Essentially, Luminar itself is taking over some of the duties of spotting and identifying common objects its lidar units see, rather than leaving that entirely to the car’s systems. Russell told me it was a matter of making sure the sensor’s data was being used effectively.

“A lot of times we see 2D algorithms applied to true 3D data, and it just doesn’t make the most of it,” he said of partners generally (not necessarily Volvo). That might have been fine a couple of years ago, he said, but advances in lidar tech have improved point clouds and 3D data by orders of magnitude — the output has become “almost camera-like.” So Luminar is building its own algorithms for detecting and labeling what its hardware sees.

“We’re providing data that you can rely on to understand a given situation — the data you need to make a decision,” he said, though in response to my questions he emphasized that Luminar’s platform was not making any decisions on its own.

As an example, imagine a car traveling down the road at 65 mph. Luminar’s lidar unit, constantly bathing the area ahead with lasers and analyzing the reflected signal, spots a stopped car blocking the shoulder about 700 feet ahead using its own smarts. Closer up, it detects that there’s a person there and a spare tire on the ground.

The lidar doesn’t have any idea what to do with that data — it just knows, say, that it’s 90 percent sure of what it sees. So it passes that information on to the car’s “brain,” perhaps before that brain has done its own analysis and spotted the car for itself. The brain can then decide whether to slow down, change lanes, or maybe even confer with other nearby autonomous vehicles.
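To make that division of labor concrete, here is a minimal sketch of the handoff in Python. Luminar has not published its interfaces, so every name, label, and threshold below is hypothetical; the point is only the architecture described above: the perception layer emits labeled, confidence-scored detections, and the vehicle’s planner makes every decision.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # hypothetical labels, e.g. "stopped_vehicle", "pedestrian"
    confidence: float  # 0.0-1.0; e.g. 0.9 for "90 percent sure"
    distance_m: float  # range to the object along the roadway

def plan(detections: list[Detection]) -> str:
    """The car's 'brain': it alone acts on the perception output."""
    for d in detections:
        if d.confidence >= 0.9 and d.distance_m < 250:
            if d.label == "pedestrian":
                return "change_lanes"  # give the person a wide berth
            if d.label == "stopped_vehicle":
                return "slow_down"
    return "maintain_speed"

# The lidar reports; the brain decides:
print(plan([Detection("stopped_vehicle", 0.9, 210.0)]))  # -> slow_down
```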

Russell said that Volvo, rather wisely, decided to constrain the application of this system strictly to highway driving. That makes it a much smaller problem space, but also a risky one. “Operating at higher speeds puts pressure on you to get a lot more range,” Russell said. “250 meters is still just like 7 and a half seconds ahead.” But every little bit counts.
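Russell’s figure is easy to verify: time headroom is just detection range divided by speed. A quick back-of-envelope check (my arithmetic, not Luminar’s):

```python
# Headroom in seconds = detection range (m) / vehicle speed (m/s).
range_m = 250.0

for label, speed_m_s in [("65 mph", 65 * 1609.344 / 3600),   # ~29.1 m/s
                         ("120 km/h", 120 / 3.6)]:           # ~33.3 m/s
    print(f"{label}: {range_m / speed_m_s:.1f} s of headroom")

# 65 mph: 8.6 s of headroom
# 120 km/h: 7.5 s of headroom  <- Russell's "7 and a half seconds"
```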

Volvo is one of four major OEMs that Luminar has partnered with, and the second to be announced publicly — there’s the Toyota Research Institute, but the other two are still a mystery. Chances are, however, they’ll be getting something like this as well, though it will be different for everyone.

“It’s a standardized platform,” Russell said. “The implementation is specific, but the software itself isn’t. We’re not just throwing it out there. And that’s also a reason why we’re working with 4 OEMs and not everyone under the sun. This will only be available to partners.”

Luminar’s tech puts it in the lead in many ways, but competitors aren’t standing still. Strong partnerships, however, may prove to be more important than technological superiority — though of course it can’t hurt to have both.

News Source = techcrunch.com

apple inc

Get your trusted midterm elections news from us, says Apple

Apple News has a new old mission: curating political news and analysis by paying a team of experienced human editors to quality-assess journalism, rather than letting unchecked algorithms run wild and amplify anything — no matter how awful, obnoxious or untrue.

‘Fakebook’ eat your heart out.

Apple says human curation is not a new direction for Apple News — describing it as a “guiding principle” across the product since it launched three years ago.

Although it certainly wasn’t shouting so loudly about it back then, when algorithmic feeds were still riding high. But the company says Apple News has always had a team of editors — one it says is focused on “discovering and spotlighting well-sourced fact-based stories to provide readers with relevant, reliable news and information from a wide range of publishers”.

Those “experienced” editors are also now being put to work assessing political reportage and commentary around the US midterms, with only publishers they deem “reliable” making the cut as political sources for Apple News.

The launch is focused, at least initially, on the 2018 US midterm elections, which get a dedicated section in the product — providing what Cupertino bills as “timely, trustworthy midterm election information” along with “the most important reporting and analysis from a diverse set of publishers”.

We’ve asked the company whether it plans to expand the Apple News election section approach to other markets.

“Today more than ever people want information from reliable sources, especially when it comes to making voting decisions,” said Lauren Kern, editor-in-chief of Apple News, in a statement. “An election is not just a contest; it should raise conversations and spark national discourse. By presenting quality news from trustworthy sources and curating a diverse range of opinions, Apple News aims to be a responsible steward of those conversations and help readers understand the candidates and the issues.”

Apple is clearly keen to avoid accusations of political bias — hence stressing the section will include a “diverse range of opinions”, with content being sourced from the likes of Fox News, Vox, the Washington Post, Politico and Axios, plus other unnamed publishers.

Though there will equally clearly be portions of the political spectrum who decry Apple News’ political output as biased against them — and thus akin to political censorship.

Safe to say, don’t expect Breitbart to be a fan. But as any journalist worth their salt will tell you, you can’t please all the people all of the time. And not trying to do so is essentially a founding tenet of the profession. It’s also why algorithms suck at being editors.

The launch of a dedicated section for an election event within Apple’s news product is clearly a response to major failures where tech platforms have intersected with political events — at least where business models rely on fencing content at vast scale and thus favor algorithmic curation (with all the resulting clickbaity, democracy-eroding pitfalls that flow from that).

Concern about algorithmic impacts on democratic processes continues to preoccupy politicians and regulators in the US and beyond. And while it’s fair to say that multiple tech platforms have a fake news and political polarization problem, Facebook has been carrying the biggest can here, given how extensively Kremlin agents owned its platform during the 2016 US presidential elections.

Since then the company has announced a raft of changes intended to combat this type of content — including systems to verify political advertisers; working with third-party fact-checkers; closing scores of suspect accounts around elections; and de-emphasizing news generally in its News Feed in favor of updates from friends, which are harder for malicious agents to game at scale.

But its core algorithmic approach to programming the hierarchies of content on its platform has not changed.

And while it’s ramping up the number of content moderation and safety staff on its books — saying it will have 20,000 people working on that by the end of this year — that’s still reactive content assessment, which is the polar opposite of editorial selection and curation.

So Apple evidently sees an opportunity for its News product to step in and fill the trust gap with reliable political information.

As well as general news and commentary from the selected trusted publishers, Apple says it will also include “special features with stories curated by Apple News editors from trusted publishers”, including opinion columns “about hot-button issues that are intended to offer readers a full range of ideas and debate about important subjects, from news sources they may not already follow” (so it’s also taking aim at algorithmically generated filter bubbles); and an election dashboard from the Washington Post — which contextualizes “key data like current polling, what pundits are saying and survey data on voter enthusiasm”.

Local news is another focus for the section, with a feature that aims to highlight “quality reporting about issues that matter to local constituents on the most important races”.

The 2018 Midterm Elections section is available to Apple News users in the US from now until November.

News Source = techcrunch.com

Artificial Intelligence

In Army of None, a field guide to the coming world of autonomous warfare

The Silicon Valley–military-industrial complex is increasingly in the crosshairs of artificial intelligence engineers. A few weeks ago, Google was reported to be backing out of a Pentagon contract around Project Maven, which would use image recognition to automatically evaluate photos. Earlier this year, AI researchers around the world joined petitions calling for a boycott of any research that could be used in autonomous warfare.

For Paul Scharre, though, such petitions barely touch the deep complexity, nuance, and ambiguity that will make evaluating autonomous weapons a major concern for defense planners this century. In Army of None, Scharre argues that the challenges around just the definitions of these machines will take enormous effort to work out between nations, let alone handling their effects. It’s a sobering, thoughtful, if at times protracted look at this critical topic.

Scharre should know. A former Army Ranger, he joined the Pentagon, working in the Office of the Secretary of Defense, where he developed some of the Defense Department’s first policies around autonomy. Leaving in 2013, he joined the DC-based think tank Center for a New American Security, where he directs a center on technology and national security. In short, he has spent about a decade on this emerging tech, and his expertise clearly shows throughout the book.

The first complication for these petitions is that autonomous weapons systems already exist, and are already deployed in the field. Technologies like the Aegis Combat System, the High-speed Anti-Radiation Missile (HARM), and the Harpy already include sophisticated autonomous features. As Scharre writes, “The human launching the Harpy decides to destroy any enemy radars within a general area in space and time, but the Harpy itself chooses the specific radar it destroys.” The weapon can loiter for 2.5 hours while it determines a target with its sensors — is it autonomous?

Scharre repeatedly uses the military’s OODA loop (for observe, orient, decide, and act) as a framework to determine the level of autonomy for a given machine. Humans can be “in the loop,” where they determine the actions of the machine, “on the loop” where they have control but the machine is mostly working independently, and “out of the loop” when machines are entirely independent of human decision-making.
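For readers who think in code, the three relationships reduce to a tiny taxonomy. This sketch is my own framing rather than anything from the book, and real systems rarely fall so cleanly into one bucket:

```python
from enum import Enum

class HumanRole(Enum):
    IN_THE_LOOP = "human selects each action or target"
    ON_THE_LOOP = "machine acts; human supervises and can intervene"
    OUT_OF_THE_LOOP = "machine acts independent of human decision-making"

def classify(human_selects_actions: bool, human_can_intervene: bool) -> HumanRole:
    # The hard part, as the book makes clear, is deciding what these two
    # booleans should be for a real weapon system.
    if human_selects_actions:
        return HumanRole.IN_THE_LOOP
    if human_can_intervene:
        return HumanRole.ON_THE_LOOP
    return HumanRole.OUT_OF_THE_LOOP

# A loitering weapon whose operator picks the area but not the target:
print(classify(human_selects_actions=False, human_can_intervene=False))
```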

The framework helps clear some of the confusion between different systems, but it is not sufficient. When machines fight machines, for instance, the speed of the battle can become so great that humans may well do more harm than good by intervening. Millions of cycles of the OODA loop could be processed by a drone before a human even registers what is happening on the battlefield. A human out of the loop, therefore, could well lead to safer outcomes. It’s exactly these kinds of paradoxes that make the subject so difficult to analyze.

In addition to paradoxes, constraints are a huge theme in the book as well. Speed is one — and the price of military equipment is another. Dumb missiles are cheap, and adding automation has consistently added to the price of hardware. As Scharre notes, “Modern missiles can cost upwards of a million dollars apiece. As a practical matter, militaries will want to know that there is, in fact, a valid enemy target in the area before using an expensive weapon.”

Another constraint is simply culture. The author writes, “There is intense cultural resistance within the U.S. military to handing over jobs to uninhabited systems.” Not unlike automation in the civilian workforce, people in power want to place flesh-and-blood humans in the most complex assignments. These constraints matter, because Scharre foresees a classic arms race around these weapons as dozens of countries pursue these machines.

Humans “in the loop” may be the default today, but for how long?

At a higher level, about a third of the book is devoted to the history of automation, (generalized) AI, and the potential for autonomy, topics which should be familiar to any regular reader of TechCrunch. Another third of the book or so is a meditation on the challenges of the technology from a dual use and strategic perspective, as well as the dubious path toward an international ban.

Yet, what I found most valuable in the book was the chapter on ethics, lodged fairly late in the book’s narrative. Scharre does a superb job covering the ground of the various schools of thought around the ethics of autonomous warfare, and how they intersect and compete. He extensively analyzes and quotes Ron Arkin, a roboticist who has spent significant time thinking about autonomy in warfare. Arkin tells Scharre that “We put way too much faith in human warfighters,” and argues that autonomous weapons, unlike humans, could theoretically be programmed never to commit a war crime. Other activists, like Jody Williams, believe that only a comprehensive ban can ensure that such weapons are never developed in the first place.

Scharre regrets that more of these conversations don’t take into account the strategic positions of the military. He notes that international discussions on bans are led by NGOs and not by nation states, whereas all examples of successful bans have been the other way around.

Another challenge is simply that antiwar activism and anti-autonomous-weapons activism are increasingly being conflated. Scharre writes, “One of the challenges in weighing the ethics of autonomous weapons is untangling which criticisms are about autonomous weapons and which are really about war.” Citing Sherman’s punishing march through the U.S. South during the Civil War, the author reminds the reader that “war is hell,” and that militaries don’t choose weapons in a vacuum, but relative to the other tools in their own and their competitors’ arsenals.

The book is a compendium of the various issues around autonomous weapons, although it suffers a bit from the classic problem of being too lengthy on some subjects (drone swarms) while offering limited information on others (arms control negotiations). The book is also marred at times by errors, such as “news rules of engagement,” that detract from an otherwise direct and active text. Tighter editing would have helped in both cases. Given the inchoate nature of the subject, the book works as an overview, although it fails to present an opinionated narrative on where autonomy and the military should go in the future, an unsatisfying gap given the author’s extensive and unique background on the subject.

All that said, Army of None is a one-stop guide book to the debates, the challenges, and yes, the opportunities that can come from autonomous warfare. Scharre ends on exactly the right note, reminding us that ultimately, all of these machines are owned by us, and what we choose to build is within our control. “The world we are creating is one that will have intelligent machines in it, but it is not for them. It is a world for us.” We should continue to engage, and petition, and debate, but always with a vision for the future we want to realize.

News Source = techcrunch.com

Artificial Intelligence

Species-identifying AI gets a boost from images snapped by citizen naturalists

Someday we’ll have an app that you can point at a weird bug or unfamiliar fern and have it spit out the genus and species. But right now computer vision systems just aren’t up to the task. To help things along, researchers have assembled hundreds of thousands of images taken by regular folks of critters in real-life situations — and by studying these, our AI helpers may be able to get a handle on biodiversity.

Many computer vision algorithms have been trained on one of several large sets of images, which may have everything from people to household objects to fruits and vegetables in them. That’s great for learning a little about a lot of things, but what if you want to go deep on a specific subject or type of image? You need a special set of lots of that kind of image.
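One common way to go deep in practice (an assumption on my part, not something the article prescribes) is transfer learning: take a network pretrained on one of those broad image sets and retrain only its final layer on the specialty images. A rough PyTorch/torchvision sketch:

```python
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5000  # placeholder: however many species the specialty set has

# Backbone pretrained on a broad, general-purpose image set.
model = models.resnet50(pretrained=True)
for param in model.parameters():
    param.requires_grad = False  # freeze the generic features

# Swap in a fresh, trainable classification head for the niche classes.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
# From here, train only model.fc on the specialized images as usual.
```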

For some specialties, we have that already: FaceNet, for instance, is the standard set for learning how to recognize or replicate faces. But while computers may have trouble recognizing faces, we rarely do — I, on the other hand, can never remember the name of the birds that land on my feeder in the spring.

Fortunately, I’m not the only one with this problem, and for years the community of the iNaturalist app has been collecting pictures of common and uncommon animals for identification. And it turns out that these images are the perfect way to teach a system how to recognize plants and animals in the wild.

Could you tell the difference?

You might think that a computer could learn all it needs to from biology textbooks, field guides, and National Geographic. But when you or I take a picture of a sea lion, it looks a lot different from a professional shot: the background is different, the angle isn’t perfect, the focus is probably off, and there may even be other animals in the shot. Even a good computer vision algorithm might not see much in common between the two.

The photos taken through the iNaturalist app, however, are all of the amateur type — yet they have also been validated and identified by professionals who, far better than any computer, can recognize a species even when it’s occluded, poorly lit, or blurry.

The researchers, from Caltech, Google, Cornell, and iNaturalist itself, put together a limited subset of the more than 1.6 million images in the app’s databases, presented this week at CVPR in Salt Lake City. They decided that in order for the set to be robust, it should cover lots of different angles and situations, so they limited it to species that at least 20 different people had spotted.

The resulting set of images still has over 859,000 pictures of over 5,000 species. These they had people annotate by drawing boxes around the critter in the picture, so the computer would know what to pay attention to. A set of images was set aside for training the system, another set for testing it.

Examples of bounding boxes being put on images.
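For illustration, the selection rule described above (keep only species independently spotted by at least 20 people, then hold out a test split) might look something like this. It is a sketch of the idea, not the researchers’ actual pipeline, and the record format is assumed:

```python
import random
from collections import defaultdict

def build_dataset(records, min_observers=20, test_frac=0.2, seed=0):
    """records: assumed (species, observer_id, image_path) tuples."""
    observers = defaultdict(set)
    images = defaultdict(list)
    for species, observer_id, image_path in records:
        observers[species].add(observer_id)
        images[species].append(image_path)

    # Keep only species spotted by enough different people.
    keep = {s for s, obs in observers.items() if len(obs) >= min_observers}

    rng = random.Random(seed)
    train, test = [], []
    for species in keep:
        paths = images[species][:]
        rng.shuffle(paths)
        cut = int(len(paths) * (1 - test_frac))
        train += [(p, species) for p in paths[:cut]]
        test += [(p, species) for p in paths[cut:]]
    return train, test
```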

Ironically, they can tell it’s a good set because existing image recognition engines perform so poorly on it, not even reaching 70 percent first-guess accuracy. The very qualities that make the images themselves so amateurish and difficult to parse make them extremely valuable as raw data; these pictures haven’t been sanitized or set up to make it any easier for the algorithms to sort through.

Even the systems created by the researchers with the iNat2017 set didn’t fare so well. But that’s okay — finding where there’s room to improve is part of defining the problem space.

The set is expanding, as others like it do, and the researchers note that the number of species with 20 independent observations has more than doubled since they started working on the dataset. That means iNat2018, already under development, will be much larger and will likely lead to more robust recognition systems.

The team says they’re working on adding more attributes to the set so that a system will be able to report not just species, but sex, life stage, habitat notes, and other metadata. And if it fails to nail down the species, it could in the future at least make a guess at the genus or whatever taxonomic rank it’s confident about — e.g. it may not be able to tell if it’s Anthopleura elegantissima or Anthopleura xanthogrammica, but it’s definitely an anemone.
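That fallback could be as simple as summing the classifier’s species probabilities up the taxonomy until some rank clears a confidence bar. A toy sketch, with made-up numbers and a two-species taxonomy standing in for the full tree:

```python
TAXONOMY = {  # species -> genus (illustrative fragment only)
    "Anthopleura elegantissima": "Anthopleura",
    "Anthopleura xanthogrammica": "Anthopleura",
}

def best_guess(species_probs, threshold=0.8):
    top = max(species_probs, key=species_probs.get)
    if species_probs[top] >= threshold:
        return ("species", top)
    # No single species is confident enough; pool probability by genus.
    genus_probs = {}
    for sp, p in species_probs.items():
        genus_probs[TAXONOMY[sp]] = genus_probs.get(TAXONOMY[sp], 0.0) + p
    top = max(genus_probs, key=genus_probs.get)
    if genus_probs[top] >= threshold:
        return ("genus", top)
    return ("unknown", None)

# Neither species clears the bar alone, but together the genus does:
print(best_guess({"Anthopleura elegantissima": 0.45,
                  "Anthopleura xanthogrammica": 0.40}))
# -> ('genus', 'Anthopleura')
```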

This is just one of many parallel efforts to improve the state of computer vision in natural environments; you can learn more about the ongoing collection and competition that leads to the iNat datasets here, and other more class-specific challenges are listed here.

News Source = techcrunch.com
