
Robots learn to grab and scramble with new levels of agility


Robots are amazing things, but outside of their specific domains they are incredibly limited. So flexibility — not physical, but mental — is a constant area of research. A trio of new robotic setups demonstrate ways they can evolve to accommodate novel situations: using both “hands,” getting up after a fall, and understanding visual instructions they’ve never seen before.

The robots, all developed independently, are gathered together today in a special issue of the journal Science Robotics dedicated to learning. Each shows an interesting new way in which robots can improve their interactions with the real world.

On the other hand…

First there is the question of using the right tool for a job. As humans with multi-purpose grippers on the ends of our arms, we’re pretty experienced with this. We understand from a lifetime of touching stuff that we need to use this grip to pick this up, we need to use tools for that, this will be light, that heavy, and so on.

Robots, of course, have no inherent knowledge of this, which can make things difficult; a robot may not understand that it can’t pick up something of a given size, shape, or texture. A new system from Berkeley roboticists acts as a rudimentary decision-making process, classifying objects as able to be grabbed either by an ordinary pincer grip or with a suction cup grip.

A robot, wielding both simultaneously, decides on the fly (using depth-based imagery) what items to grab and with which tool; the result is extremely high reliability even on piles of objects it’s never seen before.

It’s done with a neural network that consumed millions of data points on items, arrangements, and attempts to grab them. If you attempted to pick up a teddy bear with a suction cup and it didn’t work the first ten thousand times, would you keep on trying? This system learned to make that kind of determination, and as you can imagine such a thing is potentially very important for tasks like warehouse picking for which robots are being groomed.

Interestingly, because of the “black box” nature of complex neural networks, it’s difficult to tell what exactly the system, dubbed Dex-Net 4.0, is basing its choices on, although there are some obvious preferences, explained Berkeley’s Ken Goldberg in an email.

“We can try to infer some intuition but the two networks are inscrutable in that we can’t extract understandable ‘policies,’ ” he wrote. “We empirically find that smooth planar surfaces away from edges generally score well on the suction model and pairs of antipodal points generally score well for the gripper.”
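
To make that decision concrete, here is a minimal sketch of the kind of two-gripper policy described above: score candidate grasps with a separate quality model per tool, then execute whichever grasp scores highest. The quality models, candidate poses and names below are random stand-ins invented for illustration, not the actual Dex-Net 4.0 networks or API.

```python
# Minimal sketch of a two-gripper grasp policy: score every candidate grasp
# with the quality model for its tool, then pick the single best one.
# The "networks" and candidate poses below are hypothetical stand-ins.
from dataclasses import dataclass
from typing import Callable, Sequence

import numpy as np


@dataclass
class Grasp:
    tool: str            # "suction" or "pincer"
    pose: np.ndarray     # candidate grasp pose in the camera frame
    quality: float       # predicted probability the grasp succeeds


def plan_grasp(depth_image: np.ndarray,
               suction_candidates: Sequence[np.ndarray],
               pincer_candidates: Sequence[np.ndarray],
               suction_quality: Callable[[np.ndarray, np.ndarray], float],
               pincer_quality: Callable[[np.ndarray, np.ndarray], float]) -> Grasp:
    """Return the tool and pose with the highest predicted success probability."""
    scored = [Grasp("suction", p, suction_quality(depth_image, p))
              for p in suction_candidates]
    scored += [Grasp("pincer", p, pincer_quality(depth_image, p))
               for p in pincer_candidates]
    return max(scored, key=lambda g: g.quality)


if __name__ == "__main__":
    # Toy demo: random depth image, random candidate poses, random "networks".
    rng = np.random.default_rng(0)
    depth = rng.random((480, 640))
    suction_poses = [rng.random(3) for _ in range(100)]
    pincer_poses = [rng.random(6) for _ in range(100)]
    fake_suction_net = lambda img, pose: float(rng.random())
    fake_pincer_net = lambda img, pose: float(rng.random())
    best = plan_grasp(depth, suction_poses, pincer_poses,
                      fake_suction_net, fake_pincer_net)
    print(f"Chosen tool: {best.tool}, predicted quality: {best.quality:.2f}")
```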

Now that reliability and versatility are high, the next step is speed; Goldberg said that the team is “working on an exciting new approach” to reduce computation time for the network, to be documented, no doubt, in a future paper.

ANYmal’s new tricks

Quadrupedal robots are already flexible in that they can handle all kinds of terrain confidently, even recovering from slips (and of course cruel kicks). But when they fall, they fall hard. And generally speaking they don’t get up.

The way these robots have their legs configured makes it difficult to do things in anything other than an upright position. But ANYmal, a robot developed by ETH Zurich (and which you may recall from its little trip to the sewer recently), has a more versatile setup that gives its legs extra degrees of freedom.

What could you do with that extra movement? All kinds of things. But it’s incredibly difficult to figure out the exact best way for the robot to move in order to maximize speed or stability. So why not use a simulation to test thousands of ANYmals trying different things at once, and use the results from that in the real world?

This simulation-based learning doesn’t always work, because it isn’t possible right now to accurately simulate all the physics involved. But it can produce extremely novel behaviors or streamline ones humans thought were already optimal.

At any rate, that’s what the researchers did here, and not only did they arrive at a faster trot for the bot, but they also taught it an amazing new trick: getting up from a fall. Any fall.

It’s extraordinary that the robot has come up with essentially a single technique to get on its feet from nearly any likely fall position, as long as it has room and the use of all its legs. Remember, people didn’t design this — the simulation and evolutionary algorithms came up with it by trying thousands of different behaviors over and over and keeping the ones that worked.
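
For a sense of how that simulate-and-select loop works, here is a toy sketch of the evolutionary search the article describes: perturb a controller’s parameters many times, score each variant in a simulator, and keep what works. The stand-in “simulator” here just scores a parameter vector; the real ANYmal work evaluates full physics simulations of the robot, so treat this purely as an illustration of the idea.

```python
# Toy evolutionary search: try many perturbed parameter sets "in parallel",
# keep the best performers, and repeat. The simulator below is a placeholder.
import numpy as np


def simulate_recovery(params: np.ndarray) -> float:
    """Stand-in simulator: score how well these controller parameters
    get the robot back on its feet (higher is better)."""
    target = np.linspace(-1.0, 1.0, params.size)   # pretend optimum
    return -float(np.sum((params - target) ** 2))


def evolve(num_params: int = 16, population: int = 256,
           elite: int = 16, generations: int = 50,
           noise: float = 0.3, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    best = rng.normal(size=num_params)
    for _ in range(generations):
        # Evaluate many variations at once, as thousands of simulated robots would.
        candidates = best + noise * rng.normal(size=(population, num_params))
        scores = np.array([simulate_recovery(c) for c in candidates])
        # Keep the behaviors that worked: average the top performers.
        best = candidates[np.argsort(scores)[-elite:]].mean(axis=0)
    return best


if __name__ == "__main__":
    params = evolve()
    print(f"Final simulated recovery score: {simulate_recovery(params):.4f}")
```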

Ikea assembly is the killer app

Let’s say you were given three bowls, with red and green balls in the center one. Then you’re given a sheet of paper with a simple diagram of colored circles and arrows.

As a human with a brain, you take this paper as instructions, and you understand that the green and red circles represent balls of those colors, and that red ones need to go to the left, while green ones go to the right.

This is one of those things where humans apply vast amounts of knowledge and intuitive understanding without even realizing it. How did you decide that the circles represent the balls? Because of the shape? Then why don’t the arrows refer to “real” arrows? How do you know how far to go to the right or left? How do you know the paper even refers to these items at all? All questions you would resolve in a fraction of a second, and any of which might stump a robot.

Researchers have taken some baby steps towards being able to connect abstract representations like the above with the real world, a task that demands a sort of machine creativity or imagination.

Making the connection between a green dot on a white background in a diagram and a greenish roundish thing on a black background in the real world isn’t obvious, but the “visual cognitive computer” created by Miguel Lázaro-Gredilla and his colleagues at Vicarious AI seems to be doing pretty well at it.

It’s still very primitive, of course, but in theory it’s the same toolset that one uses to, for example, assemble a piece of Ikea furniture: look at an abstract representation, connect it to real-world objects, then manipulate those objects according to the instructions. We’re years away from that, but it wasn’t long ago that we were years away from a robot getting up from a fall or deciding a suction cup or pincer would work better to pick something up.
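
As a rough illustration of that three-step recipe (look at the diagram, connect it to real objects, then act), here is a toy sketch: match each drawn circle to the real object whose color is closest, then turn the arrow next to it into a move command. Every name and matching rule here is invented for the example; this is not Vicarious’s visual cognitive computer.

```python
# Toy grounding sketch: pair diagram symbols with scene objects by color,
# then convert the diagram's arrows into move targets.
from dataclasses import dataclass
from typing import List, Tuple

import numpy as np


@dataclass
class DiagramSymbol:
    color: Tuple[float, float, float]   # RGB of the drawn circle
    direction: str                      # "left" or "right", from the arrow


@dataclass
class SceneObject:
    name: str
    color: Tuple[float, float, float]   # observed RGB of the real object
    position: np.ndarray                # (x, y) on the table


def ground_and_plan(symbols: List[DiagramSymbol],
                    objects: List[SceneObject],
                    step: float = 0.2) -> List[Tuple[str, np.ndarray]]:
    """Match each symbol to the closest-colored object and emit a target position."""
    plan = []
    for sym in symbols:
        # Grounding: the nearest object in color space stands in for the drawn circle.
        obj = min(objects,
                  key=lambda o: np.linalg.norm(np.subtract(o.color, sym.color)))
        dx = -step if sym.direction == "left" else step
        plan.append((obj.name, obj.position + np.array([dx, 0.0])))
    return plan


if __name__ == "__main__":
    diagram = [DiagramSymbol((1, 0, 0), "left"), DiagramSymbol((0, 1, 0), "right")]
    scene = [SceneObject("red_ball", (0.9, 0.1, 0.1), np.array([0.5, 0.5])),
             SceneObject("green_ball", (0.1, 0.8, 0.2), np.array([0.5, 0.5]))]
    for name, target in ground_and_plan(diagram, scene):
        print(f"move {name} to {target}")
```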

The papers and videos demonstrating all the concepts above should be available at the Science Robotics site.

News Source = techcrunch.com

Is Samsung getting serious about robotics?


A funny thing happened at Samsung’s CES press conference. After the PC news, 8K TVs and Bixby-sporting washing machines, the company announced “one more thing,” handing over a few brief moments to announce a robotics division, three new consumer and retail robots and a wearable exoskeleton.

It was a pretty massive reveal in an extremely short space, and, quite frankly, raised far more questions than it answered. Within the broader context of a press conference, it’s often difficult to determine where the hype ends and the serious commitment to a new category begins.

This goes double for a company like Samsung, which has been taking extra care in recent months to demonstrate its commitment to the future, as the mobile industry undergoes its first major slowdown since the birth of the smartphone. It follows a similar play by LG, which has offered a glimpse into its own robotics plans for back-to-back years, including allowing a ’bot to copilot this year’s keynote.

We all walked away from the press conference unsure of what to make of it all, with little more to show for things than a brief onstage demo. Naturally, I jumped at the opportunity to spend some quality time with the new robots behind the scenes the following day. There were some caveats, however.

First, the company insisted we watch a kind of in-person orientation, wherein a trio of mic’d-up spokespeople walked us through the new robots. There’s Bot Care, a healthcare robot designed to assist with elder care, which features medication reminders, health briefings and the ability to check vitals with a finger scan. There are also yoga lessons and an emergency system that will dial 911 if a user falls.

There’s also Bot Air, an adorable little trash can-style robot that zooms around monitoring air quality and cleaning it accordingly. Bot Retail rounds out the bunch, with a touchscreen for ordering and trays in the rear for delivering food and other purchases.

The other major caveat was look, but don’t touch. You can get as close as you want, but you can’t interact with the robot beyond that.

The demos were impressive. The robots’ motions are extremely lifelike, with subtle touches that imbue each with a sense of personality rarely seen outside of movie robots like Wall-E. The response time was quick and they showed a wide range of genuinely useful tasks. If the robots are capable of performing as well in person as they do in these brief, choreographed demos, Samsung may have truly cracked the code of personal care and retail robotics.

That, of course, is a big if. Samsung wouldn’t answer the question of how much these demos are being orchestrated behind the scenes, but given how closely the company kept to the script, one suspects we’re largely looking at approximations of how such a human/robot interaction could ultimately play out somewhere down the road. And a Samsung spokesperson I spoke to admitted that everything is very early stages.

Really, it looks to be more akin to a proof of concept. Like, hey, we’re Samsung. We have a lot of money, incredibly smart people and know how to build components better than just about anyone. This is what it would look like if we went all-in on robotics. The company also wouldn’t answer questions regarding how seriously they’re ultimately taking robotics as a category.

You can’t expect to succeed in building incredibly complex AI/robotics/healthcare systems by simply dipping your toe in the water. I would love to see Samsung all-in on this. These sorts of things have the potential to profoundly impact the way we interact with technology, and Samsung is one of a few companies in a prime position to successfully explore this category. But doing so is going to require a true commitment of time, money and human resources.


News Source = techcrunch.com

Samsung is launching a bunch of new robots and a wearable exoskeleton


Okay, this is legitimately a fun surprise. In addition to all of the standard TV and appliance talk, Samsung used today’s CES press conference to announce a number of different robots — an entirely new field for the consumer electronics company. The company offered a sneak preview of the Samsung Bot Care on stage at the event.

The rolling home robot is a health care assistant designed for elderly users and other people in need of home assistants. The ‘bot can offer health briefings, give out medication and check a user’s vitals.

There’s also the Samsung Bot Air, an in-home air quality monitor, and the Samsung Bot Retail, which brings that technology into a brick-and-mortar setting. In addition to all of these, we got the briefest sneak preview of Samsung Gems, a mobility-assisting exoskeleton that appears to be targeted at athletes.

Samsung really blew through all of that as a kind of “one more thing” at the end of an event in which it spent the majority of its time talking about Bixby on washing machines and the like. Between that and the general lack of information around availability, I suspect we won’t be seeing any of these products in stores any time soon. Hardware is hard and robots are harder.

Still, a fun little glimpse at what might be around the corner from the company.

News Source = techcrunch.com

Robot delivery dogs deployed by self-driving cars are coming


Let’s hope you’re not afraid of dogs, because if Continental gets its way, autonomous robot dogs are going to be delivering your packages. At the Consumer Electronics Show today, Continental unveiled its Black Mirror-esque vision for how driverless vehicles can autonomously deploy bots to facilitate last-mile deliveries.

But it’s not just to look cool while also horrifying you — it’s designed to increase availability, efficiency and safety in the realm of package delivery. The first part is the driverless vehicle itself, called the Continental Urban Mobility Experience (CUbE). Its specific purpose is to carry delivery robot dogs and deploy them to handle the final yards of a delivery. So, imagine one of these CUbE pods dropping off a robot dog, and then seeing that robot dog run up to your door with your package.

“With the help of robot delivery, Continental’s vision for seamless mobility can extend right to your doorstep. Our vision of cascaded robot delivery leverages a driverless vehicle to carry delivery robots, creating an efficient transport team,” Continental Head of Systems and Technology, Chassis & Safety division Ralph Lauxmann said in a press release. “Both are electrified, both are autonomous and, in principle, both can be based on the same scalable technology portfolio. These synergies create an exciting potential for holistic delivery concepts using similar solutions for different platforms. Beyond this technology foundation, it’s reasonable to expect a whole value chain to develop in this area.”

It’s not clear if and when these will be deployed, but it’s undoubtedly an intriguing vision of the future. Segway is also at CES this week, showing off its new autonomous delivery bots. The idea is to use the bot to make autonomous deliveries for food, packages and other items.

News Source = techcrunch.com

Watch the ANYmal quadrupedal robot go for an adventure in the sewers of Zurich


There’s a lot of talk about the many potential uses of multi-legged robots like Cheetahbot and Spot — but in order for those to come to fruition, the robots actually have to go out and do stuff. And to train for a glorious future of sewer inspection (and helping rescue people, probably), this Swiss quadrupedal bot is going deep underground.

[Image: ETH Zurich / Daniel Winkler]

The robot is called ANYmal, and it’s a long-term collaboration between the Swiss Federal Institute of Technology, abbreviated there as ETH Zurich, and a spinoff from the university called ANYbotics. Its latest escapade was a trip to the sewers below that city, where it could eventually aid or replace the manual inspection process.

ANYmal isn’t brand new — like most robot platforms, it’s been under constant revision for years. But it’s only recently that cameras and sensors like lidar have gotten good enough and small enough that real-world testing in a dark, slimy place like sewer pipes could be considered.

Most cities have miles and miles of underground infrastructure that can only be checked by expert inspectors. This is dangerous and tedious work — perfect for automation. Imagine if, instead of yearly inspections by people, a robot were swinging by once a week. If anything looks off, it calls in the humans. It could also enter areas rendered inaccessible by disasters or simply too small for people to navigate safely.

But of course, before an army of robots can inhabit our sewers (where have I encountered this concept before? Oh yeah…) the robot needs to experience and learn about that environment. First outings will be only minimally autonomous, with more independence added as the robot and team gain confidence.

“Just because something works in the lab doesn’t always mean it will in the real world,” explained ANYbotics co-founder Peter Fankhauser in the ETHZ story.

Testing the robot’s sensors and skills in a real-world scenario provides new insights and tons of data for the engineers to work with. For instance, when the environment is completely dark, laser-based imaging may work, but what if there’s a lot of water, steam, or smoke? ANYmal should also be able to feel its surroundings, its creators decided.

[Image: ETH Zurich / Daniel Winkler]

So they tested both sensor-equipped feet (with mixed success) and the possibility of ANYmal raising its “paw” to touch a wall, to find a button or determine temperature or texture. This latter action had to be manually improvised by the pilots, but clearly it’s something it should be able to do on its own. Add it to the list!

You can watch “Inspector ANYmal’s” trip beneath Zurich in the video below.

News Source = techcrunch.com
