Tesla vaunts creation of ‘the best chip in the world’ for self-driving

At its “Autonomy Day” today, Tesla detailed the new custom chip that will be running the self-driving software in its vehicles. Elon Musk rather peremptorily called it “the best chip in the world…objectively.” That might be a stretch, but it certainly should get the job done.

Called for now the “full self-driving computer,” or FSD Computer, it is a high-performance, special-purpose chip built (by Samsung, in Texas) solely with autonomy and safety in mind. Whether and how it actually outperforms its competitors is not a simple question and we will have to wait for more data and closer analysis to say more.

Former Apple chip engineer Pete Bannon went over the FSDC’s specs, and while the numbers may be important to software engineers working with the platform, what’s more important at a higher level is meeting various requirements specific to self-driving tasks.

Perhaps the most obvious feature catering to AVs is redundancy. The FSDC consists of two duplicate systems right next to each other on one board. This is a significant choice, though hardly unprecedented, simply because splitting the system in two naturally divides its power as well, so if performance were the only metric (if this were a server, for instance) you’d never do it.

Here, however, redundancy means that should an error or damage creep in somehow or other, it will be isolated to one of the two systems, and reconciliation software will detect and flag it. Meanwhile the other chip, on its own power and storage systems, should be unaffected. And if something happens that breaks both at the same time, the system architecture is the least of your worries.
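To make the idea concrete, here’s a minimal sketch of that reconciliation step in Python. Everything here is hypothetical, since Tesla hasn’t published its fault-handling logic; the point is just that two independently powered units run the same computation and a checker accepts the result only when they agree.

```python
def run_inference_on_unit(unit_id, frame):
    # Stand-in for the real network; both units run identical software
    # on their own power and storage domains.
    return [sum(frame) * 0.5, max(frame)]

def reconcile(plan_a, plan_b, tolerance=1e-3):
    """Accept the plan only if the two independent units agree."""
    if any(abs(a - b) > tolerance for a, b in zip(plan_a, plan_b)):
        raise RuntimeError("Redundant units disagree; isolate the faulty one")
    return plan_a

frame = [0.1, 0.4, 0.2]   # one sensor frame, fed to both units
plan = reconcile(run_inference_on_unit("A", frame),
                 run_inference_on_unit("B", frame))
```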

Redundancy is a natural choice for AV systems, but it’s made more palatable by the extreme levels of acceleration and specialization that are possible nowadays for neural network-based computing. A regular general-purpose CPU like the one in your laptop will get schooled by a GPU when it comes to graphics-related calculations, and similarly a special compute unit for neural networks will beat even a GPU. As Bannon notes, the vast majority of the calculations involved are a single, specific math operation, and catering to it yields enormous performance benefits.
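That specific operation is the multiply-accumulate (MAC) at the heart of matrix math. A toy dense layer makes the point: essentially all of its work is one MAC per weight, which is exactly what a dedicated accelerator is built to churn through.

```python
def dense(x, w, b):
    """y[j] = b[j] + sum_i x[i] * w[i][j]: one MAC per (i, j) pair."""
    return [b[j] + sum(x[i] * w[i][j] for i in range(len(x)))
            for j in range(len(b))]

print(dense([1.0, 2.0], [[0.5, 0.25], [0.5, 0.25]], [0.0, 0.0]))  # [1.5, 0.75]
n_in, n_out = 1024, 1024
print(f"{n_in * n_out:,} MACs for one {n_in}x{n_out} layer")  # 1,048,576
```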

Pair that with high speed RAM and storage and you have very little in the way of bottlenecks as far as running the most complex parts of the self-driving systems. The resulting performance is impressive, enough to make a proud Musk chime in during the presentation:

“How could it be that Tesla, who has never designed a chip before, would design the best chip in the world? But that is objectively what has occurred. Not best by a small margin, best by a big margin.”

Let’s take this with a grain of salt, as surely engineers from Nvidia, Mobileye, and other self-driving concerns would take issue with the statement on some grounds or other. And even if it is the best chip in the world, there will be a better one in a few months — and regardless, hardware is only as good as the software that runs on it. (Fortunately Tesla has some amazing talent on that side as well.)

(One quick note on a piece of terminology you might not be familiar with: OPs. This is short for operations per second, and it’s measured in the billions and trillions these days. FLOPs is another common term, meaning floating-point operations per second; these pertain to the higher-precision math often used by supercomputers for scientific calculations. One isn’t better or worse than the other, but they shouldn’t be compared directly or treated as interchangeable.)
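As a back-of-envelope example of where those big OPs numbers come from: a MAC counts as two operations, a multiply plus an add, so throughput is MACs per cycle times two times clock speed. The figures below are illustrative, not Tesla’s published design.

```python
macs_per_cycle = 9216          # e.g., a hypothetical 96x96 MAC array
clock_hz = 2.0e9               # 2 GHz
tops = macs_per_cycle * 2 * clock_hz / 1e12
print(f"{tops:.1f} TOPS")      # ~36.9 TOPS for this hypothetical unit
```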

Update: Right on cue, Nvidia objected to Tesla’s comparison in a statement, calling it “inaccurate.” The Xavier chip to which Tesla favorably compared its hardware is a lighter-weight chip meant for autopilot-type features, not full self-driving. The 320-TOPS Drive AGX Pegasus would have been a better comparison, the company said — though admittedly the Pegasus pulls about four times as much power. So per watt, Tesla comes out ahead by the stats we’ve seen. (Chris here called it during the webcast.)
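For the curious, here’s the rough per-watt arithmetic, taking vendor figures at face value: Tesla’s presentation claimed around 144 TOPS for the FSD Computer, the whole board draws about 100 watts (more on that below), and Nvidia’s “four times as much power” estimate puts Pegasus near 400 watts. Treat this as a sketch, not a benchmark.

```python
fsdc_tops, fsdc_watts = 144, 100       # Tesla's own claimed figures
pegasus_tops, pegasus_watts = 320, 400 # rated TOPS, ~4x the power draw
print(f"FSDC:    {fsdc_tops / fsdc_watts:.2f} TOPS/W")       # 1.44
print(f"Pegasus: {pegasus_tops / pegasus_watts:.2f} TOPS/W") # 0.80
```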

High-performance computing tasks tend to drain the battery, the way transcoding or HD video editing on your laptop can kill it in 45 minutes. If your car did that you’d be mad, and rightly so. Fortunately, a side effect of acceleration tends to be efficiency.

The whole FSDC runs on about 100 watts (or 50 per compute unit), which is pretty low — it’s not cell phone chip low, but it’s well below what a desktop or high-performance laptop would pull, less even than many single GPUs. Some AV-oriented chips draw more, some draw less, but Tesla’s claim is that it’s getting more performance per watt than the competition. Again, these claims are difficult to vet immediately given the closed nature of AV hardware development, but it’s clear that Tesla is at least competitive and may very well beat its competitors on some important metrics.

Two more AV-specific features found on the chip, though not in duplicate (the compute pathways converge at some point), are CPU lockstep and a security layer. Lockstep means the timing of the two chips is very carefully enforced to be identical, ensuring that they are processing the exact same data at the same time. It would be disastrous if they got out of sync, either with each other or with other systems. Everything in AVs depends on very precise timing with minimal delay, so robust lockstep measures keep that straight.
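Here’s a small sketch of the constraint lockstep enforces, with illustrative values; on the actual chip this happens in hardware, not software. Both units must be working on the same frame, within a tight timing skew.

```python
MAX_SKEW_US = 50   # hypothetical skew budget in microseconds

def check_lockstep(seq_a, t_a_us, seq_b, t_b_us):
    if seq_a != seq_b:
        raise RuntimeError(f"Units out of sync: frame {seq_a} vs {seq_b}")
    if abs(t_a_us - t_b_us) > MAX_SKEW_US:
        raise RuntimeError(f"Timing skew of {abs(t_a_us - t_b_us)} us exceeds budget")

check_lockstep(seq_a=1041, t_a_us=12_000, seq_b=1041, t_b_us=12_030)  # passes
```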

The security section of the chip vets commands and data cryptographically to watch for, essentially, hacking attempts. Like all AV systems, this is a well-oiled machine, and interference must not be allowed for any reason: lives are on the line. So the security layer monitors input and output data for anything suspicious, from spoofed visual data (to trick the car into thinking there’s a pedestrian, for instance) to tweaked output data (say, to prevent it from taking proper precautions if it does detect one).
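Tesla hasn’t detailed its scheme, but message authentication is the standard tool for this kind of vetting. A minimal sketch using an HMAC tag over each command; the key and message format here are made up.

```python
import hashlib
import hmac

SECRET_KEY = b"provisioned-at-the-factory"   # hypothetical shared key

def sign(payload: bytes) -> bytes:
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()

def verify_command(payload: bytes, tag: bytes) -> bool:
    # compare_digest avoids timing side channels
    return hmac.compare_digest(sign(payload), tag)

msg = b"steer:+0.02rad"
tag = sign(msg)
assert verify_command(msg, tag)                    # authentic command passes
assert not verify_command(b"steer:+0.50rad", tag)  # tampered command rejected
```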

The most impressive part of all might be that this whole custom chip is backwards-compatible with existing Teslas, able to be dropped right in, and it won’t even cost that much. Exactly how much the system itself costs Tesla, and how much you’ll be charged as a customer — well, that will probably vary. But despite being the “best chip in the world,” this one is relatively affordable.

Part of that might come from going with a 14nm fabrication process rather than the sub-10nm processes others have chosen (and to which Tesla may eventually have to migrate). For power savings, the smaller the better, and as we’ve established, efficiency is the name of the game here.

We’ll know more once there’s a bit more objective — truly objective, apologies to Musk — testing on this chip and its competition. For now just know that Tesla isn’t slacking and the FSD Computer should be more than enough to keep your Model 3 on the road.

Boston Dynamics showcases new uses for SpotMini ahead of commercial production

Last year at our TC Sessions: Robotics event, Boston Dynamics announced its intention to commercialize SpotMini. It was a big step for the secretive company. After a quarter of a century building some of the world’s most sophisticated robots, it was finally taking a step into the commercial realm, making the quadrupedal robot available to anyone with the need and financial resources for the device.

CEO Marc Raibert made a return appearance at our event this week to discuss the progress Boston Dynamics has made in the intervening 12 months, both with regard to SpotMini and the company’s broader intentions to take a more market-based approach to a number of its creations.

The appearance came hot on the heels of a key acquisition for the company. In fact, Kinema was the first major acquisition in Boston Dynamics’ history — no doubt helped along by the very deep coffers of its parent company, SoftBank. The Bay Area-based startup’s imaging technology forms a key component of the revamped version of the company’s wheeled robot, Handle, which now sports a new vision system and has had its dual arms replaced with a single multi-suction-cup gripper.

A recent video from the company demonstrated the efficiency and speed with which the system can be deployed to move boxes from shelf to conveyor belt. As Raibert noted onstage, Handle is the closest Boston Dynamics has come to a “purpose-built robot” — i.e. a robot designed from the ground up to perform a specific task. It marks a new focus for a company that, after its earliest days of DARPA-funded projects, appears to primarily be driven by the desire to create the world’s most sophisticated robots.

“We estimate that there’s about a trillion cubic foot boxes moved around the world every year,” says Raibert. “And most of it’s not automated. There’s really a huge opportunity there. And of course this robot is great for us, because it includes the DNA of a balancing robot and moving dynamically and having counterweights that let it reach a long way. So it’s not different, in some respects, from the robots we’ve been building for years. On the other hand, some of it is very focused on grasping, being able to see boxes and do tasks like stack them neatly together.”

The company will maintain a foot on that side of things, as well. Robots like the humanoid Atlas will still form an important piece of its work, even when no commercial applications are immediately apparent.

But once again, it was SpotMini that was the real star of the show. This time, however, the company debuted the version of the robot that will go into production. At first glance, it looked remarkably similar to the version we had onstage last year.

“We’ve redesigned many of the components to make it more reliable, to make the skins work better and to protect it if it does fall,” says Raibert. “It has two sets [of cameras] on the front, and one on each side and one on the back. So we can see in all directions.”

I had the opportunity to pilot the robot — making me one of a relatively small group of people outside of the Boston Dynamics offices who’ve had the chance to do so. While SpotMini has all of the necessary technology for autonomous movement, user control is possible and even preferred in certain situations (some of which we’ll get to shortly).

The controller is an OEMed design that looks something like an Xbox controller with an elongated touchscreen in the middle. The robot can be controlled directly with the touchscreen, but I opted for a pair of joysticks. Moving Spot around is a lot like piloting a drone. One joystick moves the robot forward and back, the other turns it left and right.

Like a drone, it takes some getting used to, particularly with regard to the orientation of the robot. One direction is always forward for the robot, but not necessarily for the pilot. Tapping a button on the screen switches the joystick functionality to the arm (or “neck,” depending on your perspective). This can be moved around like a standard robotic arm/grasper. The grasper can also be held stationary, while the rest of the robot moves around it in a kind of shimmying fashion.
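For a sense of how such a mapping might work under the hood, here’s a sketch with made-up scaling constants; Boston Dynamics hasn’t published its controller internals. One stick axis drives forward speed, the other the turn rate.

```python
MAX_SPEED_MPS = 1.6   # illustrative top speed
MAX_YAW_RPS = 1.0     # illustrative max turn rate, rad/s

def clamp(v):
    return max(-1.0, min(1.0, v))

def sticks_to_command(left_y, right_x):
    """Map stick deflections in [-1, 1] to a body-frame velocity command."""
    return {"forward_mps": clamp(left_y) * MAX_SPEED_MPS,
            "yaw_rate_rps": clamp(right_x) * MAX_YAW_RPS}

print(sticks_to_command(0.5, -0.2))  # half speed ahead, gentle left turn
```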

Once you get the hang of it, it’s actually pretty simple. In fact, my mother, whose video game experience peaked at Tetris, was backstage at the event and happily took the controller from Boston Dynamics, piloting the robot with little issue.

Boston Dynamics is peeling back the curtain more than ever. During our conversation, Raibert debuted behind-the-scenes footage of component testing. It’s a sight to behold, with various pieces of the robot splayed out on a lab bench. It’s a side of Boston Dynamics we’ve not really seen before. Ditto for the images of large SpotMini testing corrals, where several units patrol around autonomously.

Boston Dynamics also has a few more ideas of what the future could look like for the robot. Raibert shared footage of the Massachusetts State Police utilizing Spot in different testing scenarios, where the robot’s ability to open doors could potentially get human officers out of harm’s way during a hostage or terrorist situation.

Another unit was programmed to autonomously patrol a construction site in Tokyo, outfitted with a Street View-style 360 camera, so it can monitor building progress. “This lets the construction company get an assessment of progress at their site,” he explains. “You might think that that’s a low end task. But these companies have thousands of sites. And they have to patrol them at least a couple of times a week to know where they are in progress. And they’re anticipating using Spot for that. So we have over a dozen construction companies lined up to do tests at various stages of testing and proof of concept in their scenarios.”

Raibert says SpotMini is still on track for a July release. The company plans to manufacture around 100 units in its initial run, though it’s still not ready to talk about pricing.

Industrial robotics giant Fanuc is using AI to make automation even more automated

Industrial automation is already streamlining the manufacturing process, but first those machines must be painstakingly trained by skilled engineers. Industrial robotics giant Fanuc wants to make robots easier to train, therefore making automation more accessible to a wider range of industries, including pharmaceuticals. The company announced a new artificial intelligence-based tool at TechCrunch’s Robotics + AI Sessions event today that teaches robots how to pick the right objects out of a bin with simple annotations and sensor technology, reducing the training process by hours.

Bin-picking is exactly what it sounds like: a robot arm is trained to pick items out of bins, handling tedious, time-consuming tasks like sorting bulk orders of parts. Images of example parts are captured with a camera so the robot can match them with its vision sensors. The conventional process of training a bin-picking robot then means teaching it many rules so it knows which parts to pick up.

“Making these rules in the past meant having to go through a lot of iterations and trial and error. It took time and was very cumbersome,” said Dr. Kiyonori Inaba, the head of Fanuc Corporation’s Robot Business Division, during a conversation ahead of the event.

These rules include details like how to locate the parts at the top of the pile or which ones are the most visible. After that, human operators need to tell the robot when it makes an error in order to refine its training. In industries that are relatively new to automation, finding enough engineers and skilled operators to train robots can be challenging.

This is where Fanuc’s new AI-based tool comes in. It simplifies the training process so the human operator just needs to look at a photo of parts jumbled in a bin on a screen and tap a few examples of what needs to be picked up, like showing a small child how to sort toys. This is significantly less training than what typical AI-based vision sensors need and can also be used to train several robots at once.
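A minimal sketch of the tap-to-teach idea, assuming a simple nearest-neighbor match over image patches around the operator’s taps; Fanuc hasn’t published its actual method, so the descriptor and threshold below are stand-ins.

```python
def patch_feature(image, x, y, r=1):
    """Mean intensity of a small patch, a stand-in for a real descriptor."""
    vals = [image[j][i] for j in range(y - r, y + r + 1)
                        for i in range(x - r, x + r + 1)]
    return sum(vals) / len(vals)

def is_pickable(image, x, y, taps, threshold=0.15):
    """A candidate location matches if it looks like any tapped example."""
    f = patch_feature(image, x, y)
    return any(abs(f - patch_feature(image, tx, ty)) < threshold
               for tx, ty in taps)

image = [[0.1] * 8 for _ in range(8)]
image[3][3] = image[3][4] = image[4][3] = image[4][4] = 0.9  # a bright part
taps = [(3, 3)]                        # the operator taps one example
print(is_pickable(image, 4, 4, taps))  # True: near the tapped example
print(is_pickable(image, 6, 6, taps))  # False: background
```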

“It is really difficult for the human operator to show the robot how to move in the same way the operator moves things,” said Inaba. “But by utilizing AI technology, the operator can teach the robot more intuitively than with conventional methods.” He adds that the technology is still in its early stages, and it remains to be seen if it can be used in assembly as well.

IAM Robotics puts a unique spin on warehouse automation

Before robots get to do the fun stuff, they’re going to be tasked with all of the things humans don’t want to do. It’s a driving tenet of automation — developing robotics and AI designed to replace dull, dirty and dangerous tasks. It’s no surprise, then, that warehouses and fulfillment centers have been major drivers in the field.

Earlier this week, we reported that Amazon would be acquiring Canvas, adding another piece to a portfolio that already includes the 100,000 or so robots it currently deploys across 25 or so fulfillment centers. Even Boston Dynamics has been getting into the game, acquiring a vision system in order to outfit its Handle robot for warehouse life.

As with so much of the robotics industry, Pittsburgh is a key player in automation, and IAM Robotics is one of the more compelling local entrants in the space. We paid the company a visit on a recent trip to town. Located in a small office outside of the city, the startup offers a unique take on increasingly important pick-and-place robotics, combining a robotic arm with a mobile system.

“What’s unique about IAM Robotics is we’re the only ones with a mobile robot that is also capable of manipulating objects and moving things around the warehouse by itself,” CEO Joel Reed told TechCrunch. “It doesn’t require a person in the loop to actually physically handle things. And what’s unique about that is we’re empowering the machine with AI and computer vision technologies to make those decisions by itself. So it’s fully autonomous; it’s driving around, using its own ability to see.”

The startup has mostly operated quietly, in spite of a $20 million venture round led by KCK late last year. After a quick demo in the office, it’s easy to see why early investors have found promise in the company. Still, the demo marks a pretty stark contrast to the Bossa Nova warehouse where we spent the previous day.

There are a couple of small rows of groceries in a corner of the office space, a few feet away from where the rest of IAM’s staff is at work. A pair of the company’s Swift robots go to work, traveling up and down the small, makeshift aisle. When the robot locates the desired product on a shelf, a long, multi-segmented arm drops down, positioning itself in front of a box. The suction cup tip attaches to the product, then the arm swivels back around to release it into a bin.
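That demo reads as a fixed pick-and-place sequence. Sketched as a simple state list, with descriptive step names that are not IAM Robotics’ actual API:

```python
PICK_SEQUENCE = [
    "drive_to_shelf",    # mobile base navigates the aisle on its own
    "locate_product",    # onboard vision finds the target box
    "extend_arm",        # the multi-segmented arm drops down in front of it
    "attach_suction",    # the suction-cup tip grips the product
    "swivel_to_bin",     # the arm swings back around
    "release",           # the product is dropped into the bin
]

def run_pick(execute):
    for step in PICK_SEQUENCE:
        if not execute(step):
            return f"failed at {step}"  # in practice: retry or flag a human
    return "pick complete"

print(run_pick(lambda step: True))      # "pick complete"
```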

Used correctly, the Swift could help companies staff difficult-to-fill positions, while adding a layer of efficiency in the warehouse. “Our customers or prospective customers are looking to automate to both reduce costs, but also to alleviate this manual labor shortage,” says Reed. “So we have a younger generation that’s just more interested in doing jobs like gig economy jobs, drive for Uber, Lyft, those kinds of things, because they can make more money than they could in working at a warehouse.”

Claire Delaunay will be speaking at TC Sessions: Robotics + AI next week at UC Berkeley

We’re a week out from our third-annual TC Sessions: Robotics event, and we still have some surprises left to announce. I know, we’re just as surprised as you are. We’ve already announced that Marc Raibert, Colin Angle, Melonee Wise and Anthony Levandowski will be joining us in Berkeley next week, and today we’re adding Claire Delaunay to the list of distinguished names.

Delaunay is VP of engineering at NVIDIA. Prior to NVIDIA, she worked as the director of Engineering at Uber, after the ridesharing service acquired her startup, Otto. She has also worked as the robotics program lead at Google.

She is currently the head of NVIDIA Isaac. The company’s robotics platform is designed to make it easier for companies of various experience levels and means to develop robots. Delaunay will discuss the platform and showcase some of NVIDIA’s in-house robotics reference devices, including Kaya and Carter.

Speaking of NVIDIA, TechCrunch is partnering with them on April 17 (the day before the conference) to host a Deep Learning for Robotics workshop at UC Berkeley. This in-person workshop will teach you how to implement and deploy an end-to-end project through hands-on training in eight hours, led by an instructor. Click here to learn more about the workshop.

Hear from Delaunay and other awesome speakers next week at TC Sessions: Robotics + AI. Purchase your $349 tickets now before prices go up $100 at the door. Student tickets are just $45 — book yours now.
