Timesdelhi.com

January 18, 2019
Category archive: robotics

Researchers ran a simulator to teach this robot dog to roll over


Advanced robots are expensive, and teaching them can be incredibly time-consuming. With the proper simulation, however, roboticists can train their machines to learn quickly. A team from the Robotic Systems Lab in Zurich, Switzerland has demonstrated as much in a new paper.

The research outlines how training a neural network in simulation taught the Boston Dynamics-esque ANYmal robot to perform some impressive feats, including rolling over as a method for recovering from a fall.

Using the simulation, researchers were able to train more than 2,000 computerized versions of the quadrupedal robot simultaneously, in real time. Doing so made it possible to test different methods and determine the best way to execute certain tasks.
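
For a rough feel of how this kind of massively parallel training works, here’s a toy sketch in Python. It substitutes a simple evolution-strategies-style update for the paper’s actual deep reinforcement learning, and the “simulator” is just a made-up scoring function; every name and number is illustrative rather than taken from the research.

```python
# Minimal sketch of massively parallel simulation training. Illustrative only:
# the actual ANYmal work used deep reinforcement learning in a physics
# simulator, not this toy evolution-strategies loop.
import numpy as np

N_ROBOTS = 2048   # simulated robots trained side by side
N_PARAMS = 16     # toy policy: just a parameter vector
N_STEPS = 200

rng = np.random.default_rng(0)
theta = np.zeros(N_PARAMS)  # shared policy parameters

def rollout_rewards(candidates):
    """Stand-in for the physics simulator: score each candidate policy.
    Here 'reward' is closeness to a hidden target vector; in the real
    system it would come from simulated locomotion."""
    target = np.linspace(-1.0, 1.0, N_PARAMS)
    return -np.square(candidates - target).sum(axis=1)

for step in range(N_STEPS):
    # Each simulated robot tries a slightly perturbed copy of the policy...
    noise = rng.normal(scale=0.1, size=(N_ROBOTS, N_PARAMS))
    rewards = rollout_rewards(theta + noise)
    # ...and the perturbations that scored best nudge the shared policy.
    weights = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    theta += 0.01 * (weights @ noise) / N_ROBOTS

print("final reward:", rollout_rewards(theta[None, :])[0])
```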

Once collected, those learnings can then be transferred to the physical robot. As Popular Science notes, this is similar to the way many companies test and refine self-driving systems.

“Using policies trained in simulation,” the team writes in the paper, “the quadrupedal machine achieves locomotion skills that go beyond what had been achieved with prior methods: ANYmal is capable of precisely and energy-efficiently following high-level body velocity commands, running faster than before, and recovering from falling even in complex configurations.”

News Source = techcrunch.com

Hany Farid and Peter Barrett will be speaking at TC Sessions: Robotics + AI April 18 at UC Berkeley


We’re very excited to announce our first guests for this year’s TC Sessions: Robotics. TechCrunch is returning to the UC Berkeley campus this April for another full-day session delving into all aspects of robotics. As we mark our third year, we’ve decided to add programming devoted to artificial intelligence, because you can’t really do robotics without AI.

We’ve got a ton of speakers, panels and demos to announce in the coming months, but we’re excited to start with a pair who encompass two distinct parts of the industry.

Hany Farid is Dartmouth’s Albert Bradley 1915 Third Century Professor of Computer Science, with a focus on human perception, image analysis and digital forensics. A recipient of the National Academy of Inventors, Alfred P. Sloan and John Simon Guggenheim fellowships, Farid is set to join the UC Berkeley faculty in July of this year.

Peter Barrett is the CTO of Playground Global, an investment firm that has backed a number of robotics startups, including Agility, Canvas, Common Sense, Skydio and Righthand Robotics. Prior to co-founding Playground, Barrett founded Rocket Science Games and served as the CTO of CloudCar and Microsoft TV.

TC Sessions: Robotics + AI is being held April 18 at UC Berkeley’s Zellerbach Hall.

Early Bird tickets are on sale now for $249, and students get a big discount, with tickets running at just $45.

News Source = techcrunch.com

Robots learn to grab and scramble with new levels of agility


Robots are amazing things, but outside of their specific domains they are incredibly limited. So flexibility — not physical, but mental — is a constant area of research. A trio of new robotic setups demonstrate ways they can evolve to accommodate novel situations: using both “hands,” getting up after a fall, and understanding visual instructions they’ve never seen before.

The robots, all developed independently, are gathered together today in a special issue of the journal Science Robotics dedicated to learning. Each shows an interesting new way in which robots can improve their interactions with the real world.

On the other hand…

First there is the question of using the right tool for a job. As humans with multi-purpose grippers on the ends of our arms, we’re pretty experienced with this. We understand from a lifetime of touching stuff that we need to use this grip to pick this up, we need to use tools for that, this will be light, that heavy, and so on.

Robots, of course, have no inherent knowledge of this, which can make things difficult; a robot may not understand that it can’t pick up something of a given size, shape or texture. A new system from Berkeley roboticists acts as a rudimentary decision-making process, classifying objects as graspable either by an ordinary pincer grip or by a suction-cup grip.

A robot, wielding both simultaneously, decides on the fly (using depth-based imagery) what items to grab and with which tool; the result is extremely high reliability even on piles of objects it’s never seen before.

It’s done with a neural network that consumed millions of data points on items, arrangements, and attempts to grab them. If you attempted to pick up a teddy bear with a suction cup and it didn’t work the first ten thousand times, would you keep on trying? This system learned to make that kind of determination, and as you can imagine such a thing is potentially very important for tasks like warehouse picking for which robots are being groomed.
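
Conceptually, the decision step can be sketched in a few lines: sample candidate grasps from the depth image, score them for each tool, and act on the best. The quality functions below are random stand-ins for the trained networks, so treat this as a shape-of-the-idea illustration, not the Berkeley code.

```python
# Hedged sketch of the two-tool decision step: score candidate grasps for the
# pincer and the suction cup, then act with whichever scores higher. The
# scoring functions are placeholders; Dex-Net 4.0 uses trained neural
# networks over depth images, which this toy does not reproduce.
import numpy as np

rng = np.random.default_rng(1)

def sample_candidates(depth_image, n=64):
    """Sample candidate grasp poses (x, y, angle) over the depth image."""
    h, w = depth_image.shape
    return np.column_stack([rng.uniform(0, w, n),
                            rng.uniform(0, h, n),
                            rng.uniform(0, np.pi, n)])

def pincer_quality(depth_image, candidates):
    """Stand-in for the learned pincer-grasp quality network."""
    return rng.uniform(size=len(candidates))

def suction_quality(depth_image, candidates):
    """Stand-in for the learned suction-grasp quality network."""
    return rng.uniform(size=len(candidates))

def choose_grasp(depth_image):
    cands = sample_candidates(depth_image)
    scores = {"pincer": pincer_quality(depth_image, cands),
              "suction": suction_quality(depth_image, cands)}
    # Pick the tool and pose with the single best predicted success score.
    tool = max(scores, key=lambda t: scores[t].max())
    best = cands[int(scores[tool].argmax())]
    return tool, best

tool, pose = choose_grasp(np.zeros((480, 640)))
print(f"grab with {tool} at x={pose[0]:.0f}, y={pose[1]:.0f}")
```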

Interestingly, because of the “black box” nature of complex neural networks, it’s difficult to tell what exactly Dex-Net 4.0 is basing its choices on, although there are some obvious preferences, explained Berkeley’s Ken Goldberg in an email.

“We can try to infer some intuition but the two networks are inscrutable in that we can’t extract understandable ‘policies,’” he wrote. “We empirically find that smooth planar surfaces away from edges generally score well on the suction model and pairs of antipodal points generally score well for the gripper.”

Now that reliability and versatility are high, the next step is speed; Goldberg said that the team is “working on an exciting new approach” to reduce computation time for the network, to be documented, no doubt, in a future paper.

ANYmal’s new tricks

Quadrupedal robots are already flexible in that they can handle all kinds of terrain confidently, even recovering from slips (and of course cruel kicks). But when they fall, they fall hard. And generally speaking they don’t get up.

The way these robots have their legs configured makes it difficult to do things in anything other than an upright position. But ANYmal, a robot developed by ETH Zurich (and which you may recall from its little trip to the sewer recently), has a more versatile setup that gives its legs extra degrees of freedom.

What could you do with that extra movement? All kinds of things. But it’s incredibly difficult to figure out the exact best way for the robot to move in order to maximize speed or stability. So why not use a simulation to test thousands of ANYmals trying different things at once, and use the results from that in the real world?

This simulation-based learning doesn’t always work, because it isn’t possible right now to accurately simulate all the physics involved. But it can produce extremely novel behaviors or streamline ones humans thought were already optimal.

At any rate, that’s what the researchers did here, and not only did they arrive at a faster trot for the bot, but they taught it an amazing new trick: getting up from a fall. Any fall.

It’s extraordinary that the robot has come up with essentially a single technique to get on its feet from nearly any likely fall position, as long as it has room and the use of all its legs. Remember, people didn’t design this — the simulation and evolutionary algorithms came up with it by trying thousands of different behaviors over and over and keeping the ones that worked.
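
The “keep what works” idea is easy to caricature in code. Below is a hypothetical mutate-and-select loop; the fitness function is a placeholder for a physics simulator judging whether a recovery attempt got the robot upright, and none of it reflects the team’s actual training pipeline.

```python
# Toy version of "try thousands of behaviors and keep the ones that worked":
# a simple mutate-and-select loop. Purely illustrative; the real system
# evaluated recovery behaviors in a physics simulator.
import numpy as np

rng = np.random.default_rng(2)
POP, GENS, DIM = 1000, 50, 24   # behaviors per generation, generations, knobs per behavior

def fitness(behaviors):
    """Placeholder 'did it stand up?' score from the simulator."""
    target = np.sin(np.arange(DIM))          # pretend ideal joint trajectory
    return -np.abs(behaviors - target).sum(axis=1)

pop = rng.normal(size=(POP, DIM))
for gen in range(GENS):
    scores = fitness(pop)
    elite = pop[np.argsort(scores)[-POP // 10:]]   # keep the best 10%
    # Refill the population with mutated copies of the survivors.
    parents = elite[rng.integers(len(elite), size=POP)]
    pop = parents + rng.normal(scale=0.05, size=(POP, DIM))

print("best recovery score:", fitness(pop).max())
```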

Ikea assembly is the killer app

Let’s say you were given three bowls, with red and green balls in the center one. Then you’re handed a sheet of paper showing red and green circles, with arrows pointing from them to the left and right.

As a human with a brain, you take this paper for instructions, and you understand that the green and red circles represent balls of those colors, and that red ones need to go to the left, while green ones go to the right.

This is one of those things where humans apply vast amounts of knowledge and intuitive understanding without even realizing it. How did you decide that the circles represent the balls? Because of the shape? Then why don’t the arrows refer to “real” arrows? How do you know how far to go to the right or left? How do you know the paper even refers to these items at all? These are all questions you would resolve in a fraction of a second, and any one of them might stump a robot.
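
To see how much a hand-coded toy papers over, here is a deliberately tiny sketch of the bowl-sorting task. It hard-codes the very mapping (diagram color to direction) that a robot has to work out for itself; all names and values are hypothetical.

```python
# A deliberately tiny illustration of the grounding problem: read the colors
# in the "instruction" diagram and apply them to differently shaded objects in
# the "real" scene. Everything here is hypothetical; Vicarious's visual
# cognitive computer learns this mapping rather than having it hand-coded.
DIAGRAM = {"red": "left", "green": "right"}  # what the arrows on the paper say

def classify(rgb):
    """Crudely map a real-world RGB reading onto a diagram color."""
    r, g, b = rgb
    return "red" if r > g else "green"

scene = [("ball_1", (200, 40, 30)),    # reddish ball under real lighting
         ("ball_2", (50, 180, 60)),    # greenish ball
         ("ball_3", (170, 60, 50))]

for name, rgb in scene:
    color = classify(rgb)
    print(f"move {name} to the {DIAGRAM[color]} bowl  (looks {color})")
```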

Researchers have taken some baby steps toward connecting abstract representations like the one above with the real world, a task that demands a sort of machine creativity or imagination.

Making the connection between a green dot on a white background in a diagram and a greenish roundish thing on a black background in the real world isn’t obvious, but the “visual cognitive computer” created by Miguel Lázaro-Gredilla and his colleagues at Vicarious AI seems to be doing pretty well at it.

It’s still very primitive, of course, but in theory it’s the same toolset that one uses to, for example, assemble a piece of Ikea furniture: look at an abstract representation, connect it to real-world objects, then manipulate those objects according to the instructions. We’re years away from that, but it wasn’t long ago that we were years away from a robot getting up from a fall or deciding a suction cup or pincer would work better to pick something up.

The papers and videos demonstrating all the concepts above should be available at the Science Robotics site.

News Source = techcrunch.com

Daily Crunch: Bing has a child porn problem


The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 9am Pacific, you can subscribe here.

1. Microsoft Bing not only shows child pornography, it suggests it

A TechCrunch-commissioned report has found damning evidence on Microsoft’s search engine. Our findings show a massive failure on Microsoft’s part to adequately police its Bing search engine and to prevent its suggested searches and images from assisting pedophiles.

2. Unity pulls nuclear option on cloud gaming startup Improbable, terminating game engine license

Unity, the widely popular game engine, has pulled the rug out from underneath U.K.-based cloud gaming startup Improbable and revoked its license — effectively shutting the company out from a top customer source. The conflict arose after Unity claimed Improbable broke its terms of service by distributing Unity software on the cloud.

3. Improbable and Epic Games establish $25M fund to help devs move to ‘more open engines’ after Unity debacle

Just when you thought things were going south for Improbable, the company inked a late-night deal with Unity competitor Epic Games to establish a fund geared toward open gaming engines. It raises the question of how Unity and Improbable’s relationship managed to sour so quickly and so publicly.

4. The next phase of WeChat 

WeChat boasts more than 1 billion daily active users, but user growth is starting to hit a plateau. That’s been expected for some time, but it is forcing the Chinese juggernaut to build new features that increase time spent in the app in order to maintain growth.

5. Bungie takes back its Destiny and departs from Activision 

The studio behind games like Halo and Destiny is splitting from its publisher Activision to go its own way. This is good news for gamers, as Bungie will no longer be under the strict publisher deadlines that plagued the launch of Destiny and its sequel.

6. Another server security lapse at NASA exposed staff and project data

The leaking server was — ironically — a bug-reporting server, running the popular Jira bug triaging and tracking software. In NASA’s case, the software wasn’t properly configured, allowing anyone to access the server without a password.

7. Is Samsung getting serious about robotics? 

This week Samsung made a surprise announcement during its CES press conference and unveiled three new consumer and retail robots and a wearable exoskeleton. It was a pretty massive reveal, but the company’s look-but-don’t-touch approach raised far more questions than it answered.

News Source = techcrunch.com

Taking a stroll with Samsung’s robotic exoskeleton


Samsung’s look-but-don’t-touch policy left many wondering precisely how committed the company is to its new robots. On the other hand, the company was more than happy to let me take the GEMS (Gait Enhancing and Motivation System) for a spin.

The line includes a trio of wearable exoskeletons: the A (ankle), H (hip) and K (knee). Each serves a different set of needs and muscles, but ultimately they provide the same functions: walking assistance and resistance to help wearers improve strength and balance.

Samsung’s far from the first to tackle the market, of course. There are a number of companies with exoskeleton solutions aimed at walking support/rehabilitation and/or field assistance for physically demanding jobs. Rewalk, Ekso and SuitX have all introduced compelling solutions, and a number of automotive companies have also invested in the space.

At this stage, it’s hard to say precisely what Samsung can offer that others can’t, though certainly the company’s got plenty of money, know-how and super smart employees. As with the robots, if it truly commits and invests, it could produce some really remarkable work in this space.

Having taken the hip system for a bit of a spin around Samsung’s booth, I can at least say that the assistive and resistance modes do work. A rep described the resistance as feeling something akin to walking under water, and I’m hard-pressed to come up with a better analogy. The assistive mode is a bit hard to pick up on at first, but it’s much more noticeable when walking up stairs after trying out the other mode.

Like the robots, it’s hard to know how these products will ultimately fit into the broader portfolio of a company best known for smartphones, TVs and chips. Hopefully we won’t have to wait until the next CES to find out.

News Source = techcrunch.com
