Making something fly involves a lot of trade-offs. Bigger stuff can hold more fuel or batteries, but too big and the lift required is too much. Small stuff takes less lift to fly but might not hold a battery with enough energy to do so. Insect-sized drones have had that problem in the past — but now this RoboFly is taking its first flaps into the air… all thanks to the power of lasers.
We’ve seen bug-sized flying bots before, like the RoboBee, but it has wires attached that supply its power. Onboard batteries would weigh it down too much, so past research has focused on demonstrating that flight is even possible at that scale.
But what if you could provide power externally without wires? That’s the idea behind the University of Washington’s RoboFly, a sort of spiritual successor to the RoboBee that gets its power from a laser trained on an attached photovoltaic cell.
“It was the most efficient way to quickly transmit a lot of power to RoboFly without adding much weight,” said co-author of the paper describing the bot, Shyam Gollakota. He’s obviously very concerned with power efficiency — last month he and his colleagues published a way of transmitting video with 99 percent less power than usual.
There’s more than enough power in the laser to drive the robot’s wings; an integrated circuit steps it down to the correct voltage, and a microcontroller sends that power to the wings depending on what they need to do. Here’s how it works:
“To make the wings flap forward swiftly, it sends a series of pulses in rapid succession and then slows the pulsing down as you get near the top of the wave. And then it does this in reverse to make the wings flap smoothly in the other direction,” explained lead author Johannes James.
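That pulse-shaping idea can be sketched in a few lines. This is purely illustrative, not the actual RoboFly firmware: the function, pulse counts and timings below are all made-up numbers chosen to show rapid pulses that slow toward the top of the stroke, then the mirrored schedule for the return stroke.

```python
# Illustrative sketch only -- not RoboFly's real control code.
# Gaps between pulses grow as the wing nears the top of its stroke,
# then the schedule is mirrored for the return stroke.

def pulse_intervals(n_pulses=8, min_gap_ms=1.0, max_gap_ms=4.0):
    """Gaps between pulses grow linearly from min_gap_ms to max_gap_ms."""
    step = (max_gap_ms - min_gap_ms) / (n_pulses - 1)
    return [min_gap_ms + i * step for i in range(n_pulses)]

forward = pulse_intervals()         # rapid at first, slowing near the stroke's top
backward = list(reversed(forward))  # same schedule in reverse for the other direction
```

Running the schedule forward and then backward approximates the smooth wave James describes, without ever driving the wings with an abrupt step.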
At present the bot just takes off, travels almost no distance and lands — but that’s enough to prove the concept of a wirelessly powered robot insect, which wasn’t a given. The next steps are to improve onboard telemetry so it can control itself, and to build a steered laser that can follow the little bug’s movements and continuously beam power in its direction.
“Whether for AR or robots, anytime you have software interacting with the world, it needs a 3D model of the globe. We think that map will look a lot more like the decentralized internet than a version of Apple Maps or Google Maps.” That’s the idea behind new startup Fantasmo, according to co-founder Jameson Detweiler. Coming out of stealth today, Fantasmo wants to let any developer contribute to and draw from a sub-centimeter accuracy map for robot navigation or anchoring AR experiences.
Fantasmo plans to launch a free Camera Positioning Standard (CPS) that developers can use to collect and organize 3D mapping data. The startup will charge for commercial access and premium features in its TerraOS, an open-source operating system that helps property owners keep their maps up to date and supply them to robots, AR apps and other software equipped with Fantasmo’s SDK.
Directly competing with Google’s own Visual Positioning System is an audacious move. Fantasmo is betting that private property owners won’t want big corporations snooping around to map their indoor spaces, and instead will want to retain control of this data so they can dictate how it’s used. With Fantasmo, they’ll be able to map spaces themselves and choose where robots can roam or if the next Pokémon GO can be played there.
“Only Apple, Google, and HERE Maps want this centralized. If this data sits on one of the big tech companies’ servers, they could basically spy on anyone at any time,” says Detweiler. The prospect gets scarier when you imagine everyone wearing camera-equipped AR glasses in the future. “The AR cloud on a central server is Big Brother. It’s the end of privacy.”
Detweiler and his co-founder Dr. Ryan Measel first had the spark for Fantasmo as best friends at Drexel University. “We need to build Pokémon in real life! That was the genesis of the company,” says Detweiler. In the meantime he founded and sold LaunchRock, a 500 Startups company for creating “Coming Soon” sign-up pages for internet services.
After Measel finished his PhD, the pair started Fantasmo Studios to build augmented reality games like Trash Collectors From Space, which they took through the Techstars accelerator in 2015. “Trash Collectors was the first time we actually created a spatial map and used that to sync multiple people’s precise position up,” says Detweiler. But while building the infrastructure tools to power the game, they realized there was a much bigger opportunity to build the underlying maps for everyone’s games. Now the Santa Monica-based Fantasmo has 11 employees.
“It’s the internet of the real world,” says Detweiler. Fantasmo now collects geo-referenced photos, scans them for identifying features like walls and objects, and imports them into its point cloud model. Apps and robots equipped with the Fantasmo SDK can then pull in the spatial map for a specific location that’s more accurate than federally run GPS. That lets them peg AR objects to precise spots in your environment while making sure robots don’t run into things.
Fantasmo identifies objects in geo-referenced photos to build a 3D model of the world
“I think this is the most important piece of infrastructure to be built during the next decade,” Detweiler declares. That potential attracted funding from TenOneTen, Freestyle Capital, LDV, NoName Ventures, Locke Mountain Ventures and some angel investors. But it’s also attracted competitors like Escher Reality, which was acquired by Pokémon GO parent company Niantic, and Ubiquity6, which has investment from top-tier VCs like Kleiner Perkins and First Round.
Google is the biggest threat, though, with its industry-leading Google Maps, its experience with indoor mapping through Tango, its new VPS initiative and near-limitless resources. Just yesterday, Google showed off an AR fox in Google Maps that you can follow for walking directions.
Fantasmo is hoping that Google’s size works against it. The startup sees a path to victory through interoperability and privacy. The big corporations want to control and preference their own platforms’ access to maps while owning the data about private property. Fantasmo wants to empower property owners to oversee that data and decide what happens to it. Measel concludes, “The world would be worse off if GPS was proprietary. The next evolution shouldn’t be any different.”
NASA’s latest mission to Mars, InSight, is set to launch early Saturday morning in pursuit of a number of historic firsts in space travel and planetology. The lander’s instruments will probe the surface of the planet and monitor its seismic activity with unprecedented precision, while a pair of diminutive cubesats riding shotgun will test the viability of tiny spacecraft for interplanetary travel.
Saturday at 4:05 AM Pacific is the first launch opportunity, but if weather forbids it, they’ll just try again soon after — the chances of clouds sticking around all the way until June 8, when the launch window closes, are slim to none.
InSight isn’t just a pretty name; it stands for Interior Exploration using Seismic Investigations, Geodesy and Heat Transport, at least after massaging the acronym a bit. Its array of instruments will teach us about the Martian interior, granting us insight (see what they did there?) into the past and present of Mars and the other rocky planets in the solar system, including Earth.
Bruce Banerdt, principal investigator for the mission at NASA’s Jet Propulsion Laboratory, has been pushing for this mission for more than two decades, after practically a lifetime working at the place.
“This is the only job I’ve ever had in my life other than working in the tire shop during the summertime,” he said in a recent NASA podcast. He’s worked on plenty of other missions, of course, but his dedication to this one has clearly paid off. It was actually originally scheduled to launch in 2016, but some trouble with an instrument meant they had to wait until the next launch window — now.
InSight is a lander in the style of Phoenix, about the size of a small car, and it will be shot towards Mars faster than a speeding bullet. The launch is a first in itself: NASA has never launched an interplanetary mission from the West Coast, but conditions aligned in this case, making California’s Vandenberg Air Force Base the best option. It doesn’t even require a gravity assist to get where it’s going.
“Instead of having to go to Florida and using the Earth’s rotation to help slingshot us into orbit… We can blast our way straight out,” Banerdt said in the same podcast. “Plus we get to launch in a way that is gonna be visible to maybe 10 million people in Southern California because this rocket’s gonna go right by LA, right by San Diego. And if people are willing to get up at four o’clock in the morning, they should see a pretty cool light show that day.”
The Atlas V will take it up to orbit and the Centaur will give it its push towards Mars, after which it will cruise for six months or so, arriving late in the Martian afternoon on November 26 (Earth calendar).
Its landing will be as exciting (and terrifying) as Phoenix’s and many others. When it hits the Martian atmosphere, InSight will be going more than 13,000 MPH. It’ll slow down first using the atmosphere itself, shedding 90 percent of its velocity through friction against a new, reinforced heat shield. A parachute takes off another 90 percent, but it’ll still be going over 100 MPH, which would make for an uncomfortable landing. So a couple thousand feet up it will transition to landing jets that will let it touch down at a stately 5.4 MPH at the desired location and orientation.
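Those nested 90 percent reductions are easy to check with quick arithmetic (using the article’s round numbers, not official NASA figures):

```python
# Back-of-the-envelope check of the entry, descent and landing numbers.
ENTRY_MPH = 13_000                          # speed entering the Martian atmosphere

after_heat_shield = ENTRY_MPH * 0.10        # heat shield sheds ~90% of velocity
after_parachute = after_heat_shield * 0.10  # parachute sheds another ~90%

print(after_heat_shield)  # 1300.0
print(after_parachute)    # 130.0 -- still "over 100 MPH," hence the landing jets
```

Two successive 90 percent cuts still leave the lander around 130 MPH, which is why the final descent is handed off to rockets rather than the parachute.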
After the dust has settled (literally) and the lander has confirmed everything is in working order, it will deploy its circular, fanlike solar arrays and get to work.
Robot arms and self-hammering robomoles
InSight’s mission is to get into the geology of Mars in more detail and depth than ever before. To that end it is packing gear for three major experiments.
SEIS is a collection of six seismic sensors (making the name a tidy bilingual, bidirectional pun) that will sit on the ground under what looks like a tiny Kingdome and monitor the slightest movement of the ground beneath. Whether tiny high-frequency vibrations or longer-period oscillations, all should be detected.
“Seismology is the method that we’ve used to gain almost everything we know, all the basic information about the interior of the Earth, and we also used it back during the Apollo era to understand and to measure sort of the properties of the inside of the moon,” Banerdt said. “And so, we want to apply the same techniques but use the waves that are generated by Mars quakes, by meteorite impacts to probe deep into the interior of Mars all the way down to its core.”
The heat flow and physical properties probe is an interesting one. It will monitor the temperature of the planet below the surface continually for the duration of the mission — but in order to do so, of course, it has to dig its way down. For that purpose it’s equipped with what the team calls a “self-hammering mechanical mole.” Pretty self-explanatory, right?
The “mole” is sort of like a hollow, inch-thick, 16-inch-long nail that will use a spring-loaded tungsten block inside itself to drive itself into the rock. It’s estimated that it will take somewhere between 5,000 and 20,000 strikes to get deep enough to escape the daily and seasonal temperature changes at the surface.
Lastly there’s the Rotation and Interior Structure Experiment, which actually doesn’t need a giant nail, a tiny Kingdome, or anything like that. The experiment involves tracking the position of Insight with extreme precision as Mars rotates, using its radio connection with Earth. It can be located to within about four inches, which when you think about it is pretty unbelievable to begin with. The way that position varies may indicate a wobble in the planet’s rotation and consequently shed light on its internal composition. Combined with data from similar experiments in the ’70s and ’90s, it should let planetologists determine how molten the core is.
“In some ways, InSight is like a scientific time machine that will bring back information about the earliest stages of Mars’ formation 4.5 billion years ago,” said Banerdt in an earlier news release. “It will help us learn how rocky bodies form, including Earth, its moon, and even planets in other solar systems.”
In another space first, InSight has a robotic arm that will not just do things like pick up rocks to look at, but will lift items from its own inventory and deploy them into its workspace. Its little fingers will grip handles on top of each deployable instrument, just like a human might. Well, maybe a little differently, but the principle is the same. At nearly 8 feet long, it has a bit more reach than the average astronaut.
Cubes riding shotgun
One of the MarCO cubesats.
InSight is definitely the main payload, but it’s not the only one. Launching on the same rocket are two cubesats, known collectively as Mars Cube One, or MarCO. These “briefcase-size” guys will separate from the rocket around the same time as InSight, but take slightly different trajectories. They don’t have the means to slow themselves down and enter orbit, so they’ll just zoom by Mars right as InSight is landing.
Cubesats launch all the time, though, right? Sure — into Earth orbit. This will be the first attempt to send cubesats to another planet. If successful, there’s no limit to what could be accomplished — assuming you don’t need to pack anything bigger than a breadbox.
The spacecraft aren’t carrying any super-important experiments; there are two in case one fails, and both are equipped only with UHF antennas to send and receive data, plus a couple of low-resolution visible-light cameras. The experiment here is really the cubesats themselves and this launch technique. If they make it to Mars, they might be able to help relay InSight’s signal home, and if they keep operating beyond that, it’s just icing on the cake.
Misty Robotics has been talking a good game for 10 months now. Back in June, the Colorado-based Sphero spin-off announced its plans to develop a mainstream home robot. At CES in January, it introduced the world to the Misty I, a skeletal robotic development platform.
Four months later, it’s back with Misty II. While it looks like more of a consumer device, Misty II is still pretty far from the mainstream consumer robot the startup has been promising all along. Like its more unassuming predecessor, the Misty II “personal robot” is designed to be a kind of development platform — albeit one that’s a lot easier to write for than traditional robots.
According to Misty, the skills the company is making available through GitHub include the ability to drive autonomously, respond to commands, locate its charger and recognize faces.
In spite of getting $11.5 million in funding early on, the company is still opting to crowdfund the new robot. The campaign, which runs through the end of the month, is designed to establish a community around the robot. Those who pitch in will get the robot at a discount — though it’s still not what you would call cheap, at $1,600. That’s apparently about half of what the company expects the robot to cost at retail — a price that will almost certainly keep it out of the mainstream in this early iteration.
It goes without saying that getting dressed is one of the most critical steps in our daily routine. But long practice has made it second nature, and people suffering from dementia may lose that familiarity, making dressing a difficult and frustrating process. This smart dresser from NYU is meant to help them through the process while reducing the load on overworked caregivers.
It may seem that replacing responsive human help with a robotic dresser is a bit insensitive. But not only are there rarely enough caregivers to help everyone in a timely manner at, say, a nursing care facility, the residents themselves might very well prefer the privacy and independence conferred by such a solution.
“Our goal is to provide assistance for people with dementia to help them age in place more gracefully, while ideally giving the caregiver a break as the person dresses – with the assurance that the system will alert them when the dressing process is completed or prompt them if intervention is needed,” said the project’s leader, Winslow Burleson, in an NYU news release.
DRESS, as the team calls the device, is essentially a 5-drawer dresser with a tablet on top that serves as both display and camera, monitoring and guiding the user through the dressing process.
There are lots of things that can go wrong when you’re putting on your clothes, and really only one way it can go right — shirts go on right side out and trousers forwards, socks on both feet, etc. That simplifies the problem for DRESS, which looks for tags attached to the clothes to verify they’re on correctly and in the right order, so someone doesn’t attempt to put on their shoes before their trousers. Lights on each drawer signal the next item of clothing to don.
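That ordering rule amounts to a simple prerequisite check, which can be sketched as follows. The tag names and “must come before” constraints here are hypothetical illustrations, not taken from the actual DRESS system.

```python
# Hypothetical sketch of a dressing-order check: each detected clothing tag
# must appear only after all of its prerequisites have been seen.
MUST_PRECEDE = {
    "shoes": {"socks", "trousers"},  # these must be on before shoes
    "trousers": {"underwear"},
}

def order_ok(detected_tags):
    """Return True if every item appears only after its prerequisites."""
    seen = set()
    for tag in detected_tags:
        if not MUST_PRECEDE.get(tag, set()) <= seen:
            return False  # a prerequisite is missing -> prompt the caregiver
        seen.add(tag)
    return True

order_ok(["underwear", "trousers", "socks", "shoes"])  # True
order_ok(["shoes", "trousers"])                        # False
```

A failed check is exactly the moment the real system would light the correct drawer or alert the caregiver, rather than letting the sequence continue.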
If there’s any problem — the person can’t figure something out, can’t find the right drawer, or gets distracted, for instance — the caregiver is alerted and will come help as normal. But if all goes right, the person will have dressed themselves all on their own, something that might not have been possible before.
DRESS is just a prototype right now, a proof of concept to demonstrate its utility. The team is looking into improving the vision system, standardizing clothing folding, and enlarging or otherwise changing the coded tags on each item.