Timesdelhi.com

January 18, 2019
Category archive

Delhi - page 3

SeeTree raises $11.5M to help farmers manage their orchards

in agriculture/AgTech/Artificial Intelligence/Canaan Partners/Delhi/drones/Fundings & Exits/India/investors/Politics/precision agriculture/Recent Funding/Startups/Tel Aviv/Uri Levine by

SeeTree, a Tel Aviv-based startup that uses drones and artificial intelligence to bring precision agriculture to tree farmers’ orchards, today announced that it has raised an $11.5 million Series A funding round led by Hanaco Ventures, with participation from previous investors Canaan Partners Israel, Uri Levine and his investors group, iAngel and Mindset. This brings the company’s total funding to $15 million.

The idea behind the company, which also has offices in California and Brazil, is that drone-based precision agriculture hasn’t really lived up to its promise and hasn’t worked all that well for permanent crops like fruit trees. “In the past two decades, since the concept was born, the application of it, as well as measuring techniques, has seen limited success — especially in the permanent-crop sector,” said SeeTree CEO Israel Talpaz. “They failed to reach the full potential of precision agriculture as it is meant to be.”

He argues that the future of precision agriculture has to take a more holistic view of the entire farm. He also believes that past efforts didn’t quite offer the quality of data necessary to give permanent crop farmers the actionable recommendations they need to manage their groves.

SeeTree is trying to tackle these issues by offering granular per-tree data, based on imagery gathered by drones and analyzed by the company’s machine learning algorithms. Using this data, farmers can decide to replace underperforming trees, for example, or map out a plan to harvest selectively based on the size of a tree’s fruit and its development stage. They can also correlate all of this data with their irrigation and fertilization infrastructure to determine the ROI of those efforts.
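SeeTree hasn’t published the details of its pipeline, but the workflow described above is easy to picture in code. The sketch below is purely illustrative: it assumes hypothetical per-tree metrics (a health score and an average fruit size) that, in the real product, would come from drone imagery and machine learning. None of the names or thresholds are SeeTree’s.

```python
# Illustrative sketch only -- not SeeTree's actual pipeline.
# Assumes each tree already has metrics derived from drone imagery.
from dataclasses import dataclass

@dataclass
class Tree:
    tree_id: str
    health_score: float   # hypothetical 0.0-1.0 vigor index from imagery
    fruit_size_mm: float  # hypothetical average fruit diameter

def flag_underperformers(trees, health_threshold=0.4):
    """Return trees whose health score falls below the threshold,
    i.e. candidates for replacement or closer inspection."""
    return [t for t in trees if t.health_score < health_threshold]

def selective_harvest_plan(trees, min_fruit_size_mm=60.0):
    """Group trees into 'harvest now' vs 'wait' based on fruit size."""
    harvest_now = [t.tree_id for t in trees if t.fruit_size_mm >= min_fruit_size_mm]
    wait = [t.tree_id for t in trees if t.fruit_size_mm < min_fruit_size_mm]
    return {"harvest_now": harvest_now, "wait": wait}

if __name__ == "__main__":
    grove = [
        Tree("A-001", 0.82, 68.0),
        Tree("A-002", 0.35, 51.0),  # likely replacement candidate
        Tree("A-003", 0.91, 72.5),
    ]
    print([t.tree_id for t in flag_underperformers(grove)])
    print(selective_harvest_plan(grove))
```

In practice the interesting work is in producing those per-tree metrics from imagery; the decision logic on top of them, as sketched here, is comparatively simple.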

“Traditionally, farmers made large-scale business decisions based on intuitions that would come from limited (and often unreliable) small-scale testing done by the naked eye,” said Talpaz. “With SeeTree, farmers can now make critical decisions based on accurate and consistent small and large-scale data, connecting their actions to actual results in the field.”

SeeTree was founded by Talpaz, who like so many Israeli entrepreneurs previously worked for the country’s intelligence services, as well as Barak Hachamov (who you may remember from his early personalized news startup my6sense) and Guy Morgenstern, who has extensive experience as an R&D executive with a background in image processing and communications systems.

News Source = techcrunch.com

OrCam’s MyMe uses facial recognition to remember everyone you meet

in CES/CES 2019/Delhi/Gadgets/India/OrCam/OrCam MyMe/Politics by

Meet the OrCam MyMe, a tiny device that you clip on your T-shirt to help you remember faces. The OrCam MyMe features a small smartphone-like camera and a proprietary facial-recognition algorithm so that you can associate names with faces. It can be a useful device at business conferences, or to learn more about how you spend a typical day.

This isn’t OrCam’s first device. The company has been selling the MyEye for a few years. It’s a wearable device for visually impaired people that you clip to your glasses. Thanks to its camera and speaker, you can point your finger at some text and get an audio version of that text read near your ear. It can also tell you if there’s somebody familiar in front of you.

OrCam is expanding beyond this market with a mass market product. It features the same technological foundation, but with a different use case. OrCam’s secret sauce is that it can handle face recognition and optical character recognition on a tiny device with a small battery — images are not processed in the cloud.

It’s also important to note that the OrCam MyMe doesn’t record video or audio. When the device detects a face, it creates a signature and tries to match it with existing signatures. While it’s not a spy camera, it still feels a bit awkward when you realize that there’s a camera pointed at you.

When there’s someone in front of you, the device sends a notification to your phone and smart watch. You can then enter the name of this person on your phone so that the next notification shows the name of the person you’re talking with.
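OrCam hasn’t disclosed how its matching works beyond the signature description above, but the general pattern (turn a face into a numeric signature, then compare it against stored signatures of people you’ve already named) can be sketched in a few lines. The class, threshold and vectors below are hypothetical stand-ins, not OrCam’s implementation.

```python
# Hypothetical illustration of signature matching -- not OrCam's implementation.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

class SignatureStore:
    """Stores named face signatures (vectors) and matches new ones against them."""
    def __init__(self, threshold=0.8):
        self.known = {}          # name -> signature vector
        self.threshold = threshold

    def add(self, name, signature):
        self.known[name] = np.asarray(signature, dtype=float)

    def match(self, signature):
        """Return the best-matching name, or None if nothing is similar enough."""
        signature = np.asarray(signature, dtype=float)
        best_name, best_score = None, 0.0
        for name, known_sig in self.known.items():
            score = cosine_similarity(signature, known_sig)
            if score > best_score:
                best_name, best_score = name, score
        return best_name if best_score >= self.threshold else None

if __name__ == "__main__":
    store = SignatureStore()
    store.add("Alice", [0.9, 0.1, 0.3])
    print(store.match([0.88, 0.12, 0.31]))  # close to Alice's signature -> "Alice"
    print(store.match([0.0, 1.0, 0.0]))     # unknown face -> None
```

This also matches the article’s point about privacy: only compact signatures need to be stored and compared, not raw video or audio.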

If somebody gives you a business card, you can also hold it in front of you. The device then automatically matches the face with the information on the business card.

After that, you can tag people in different categories. For instance, you can create a tag for family members, another one for colleagues and another one for friends.

The app shows you insightful graphs representing your work-life balance over the past few weeks and months. If you want to quantify everything in your life, this could be an effective way of knowing that you should spend more time with your family, for instance.

While the device isn’t available just yet, the company has already sold hundreds of early units on Kickstarter. Eventually, OrCam wants to create a community of enthusiasts and figure out new use cases.

I saw the device at CES last week and it’s much smaller than you’d think based on photos. You don’t notice it unless you’re looking for the device. It’s not as intrusive as Google Glass for instance. You can optionally use a magnet if the clip doesn’t work with what you’re wearing.

OrCam expects to ship the MyMe in January 2020 for $399. It’s an impressive little device, but the company also faces one challenge — I’m not sure everyone feels comfortable about always-on facial recognition just yet.

News Source = techcrunch.com

Dreaming of Mars, the startup Relativity Space gets its first launch site on Earth

in 3d printing/Air Force/Amazon/Blue Origin/california/Delhi/Elon Musk/Florida/hyperloop/India/jeff bezos/Los Angeles/Louisiana/Mississippi/Politics/Relativity Space/Seattle/Space/spaceflight/spaceport/SpaceX/Startups/TC/transport/U.S. Air Force/United Launch Alliance/United States/WeWork by

3D-printing the first rocket on Mars.

That’s the goal Tim Ellis and Jordan Noone set for themselves when they founded Los Angeles-based Relativity Space in 2015.

At the time, they were working out of a WeWork in Seattle, during the darkest winter in the city’s history, while Ellis was wrapping up a stint at Blue Origin. The two had met in college at USC, in the university’s rocket propulsion lab. Noone had gone on to take a job at SpaceX and Ellis at Blue Origin, but the two remained in touch and shared an idea for building rockets quickly and cheaply, with the vision of eventually building those rockets on Mars.

Now, more than $35 million later, the company has been awarded a multi-year contract to build and operate its own rocket launch facilities at Cape Canaveral Air Force Station in Florida.

That contract, awarded by the 45th Space Wing of the Air Force, is the first direct agreement the U.S. Air Force has completed with a venture-backed orbital launch company that wasn’t also being subsidized by billionaire owner-operators.

By comparison, Relativity’s neighbors at Cape Canaveral are Blue Origin (which Jeff Bezos has been financing by reportedly selling $1 billion in shares of Amazon stock since 2017); SpaceX (which has raised roughly $2.5 billion since its founding and initial capitalization by Elon Musk); and United Launch Alliance, the joint venture between the defense contracting giants Lockheed Martin Space Systems and Boeing Defense.

Like the other launch sites at Cape Canaveral, Launch Complex 16, where Relativity expects to be launching its first rockets by 2020, has a storied history in the U.S. space and missile defense program. It was used for Titan missile launches, the Apollo and Gemini programs and Pershing missile launches.

From the site, Relativity will be able to launch the first rocket it has designed, the Terran 1, which is the only fully 3D-printed rocket in the world.

That rocket can carry a maximum payload of 1,250 kilograms to a low Earth orbit 185 kilometers above the Earth. Its nominal payload is 900 kilograms to a Sun-synchronous orbit 500 kilometers up, and it can carry 700 kilograms to a higher, 1,200-kilometer Sun-synchronous orbit. Relativity prices its dedicated missions at $10 million, or $11,000 per kilogram, to reach Sun-synchronous orbit.
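Those last two figures are roughly consistent with each other: at the quoted $11,000 per kilogram, a full 900-kilogram nominal payload to Sun-synchronous orbit works out to about $9.9 million, close to the $10 million dedicated-mission price. A quick back-of-the-envelope check, using only the numbers quoted in the article:

```python
# Back-of-the-envelope check of the quoted Terran 1 pricing (figures from the article).
dedicated_mission_price = 10_000_000   # USD, dedicated mission
price_per_kg_sso = 11_000              # USD per kilogram to Sun-synchronous orbit
nominal_sso_payload_kg = 900           # nominal payload to 500 km SSO

implied_full_payload_cost = nominal_sso_payload_kg * price_per_kg_sso
print(f"900 kg at $11,000/kg: ${implied_full_payload_cost:,}")   # $9,900,000
print(f"Dedicated mission price: ${dedicated_mission_price:,}")  # $10,000,000
# The per-kilogram rate times the nominal SSO payload lands close to the
# dedicated-mission price, so the two quoted figures line up.
```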

If the company’s two founders are right, then all of this launch work Relativity is doing is just a prelude to what the company considers to be its real mission — the advancement of manufacturing rockets quickly and at scale as a test run for building out manufacturing capacity on Mars.

“Rockets are the business model now,” Ellis told me last year at the company’s offices at the time, a few hundred feet from SpaceX. “That’s why we created the printing tech. Rockets are the largest, lightest-weight, highest-cost item that you can make.”

It’s also a way for the company to prove out its technology. “It benefits the long-term mission,” Ellis continued. “Our vision is to create the intelligent automated factory on Mars… We want to help them to iterate and scale the society there.”

Ellis and Noone make some pretty remarkable claims about the proprietary 3D printer they’ve built and housed in their Inglewood offices. Called “Stargate,” the printer is the largest of its kind in the world and aims to go from raw materials to a flight-ready vehicle in just 60 days. The company claims that the speed with which it can manufacture new rockets should pare down launch timelines by somewhere between two and four years.

Another factor accelerating Relativity’s race to market is a long-term contract the company signed last year with NASA for access to testing facilities at the agency’s Stennis Space Center on the Mississippi-Louisiana border. It’s there, deep in the Mississippi delta swampland, that Relativity plans to develop and quality control as many as 36 complete rockets per year on its 25-acre space.

All of this activity helps the company in another segment of its business: licensing and selling the manufacturing technology it has developed.

“The 3D factory and automation is the other product, but really that’s a change in emphasis,” says Ellis. “It’s always been the case that we’re developing our own metal 3D printing technology. Not only can we make rockets. If the long-term mission is 3D printing on Mars, we should think of the factory as its own product tool.”

Not everyone agrees. At least one investor I talked to said that in many cases, the cost of 3D printing certain basic parts outweighs the benefits that printing provides.

Still, Relativity is undaunted.

But first, the company — and its competitors at Blue Origin, SpaceX, United Launch Alliance and the hundreds of other companies working on launching rockets into space again — need to get there. For Relativity, the Canaveral deal is one giant step for the company, and one great leap toward its ultimate goal.

“This is a giant step toward being a launch company,” says Ellis. “And it’s aligned with the long-term vision of one day printing on Mars.”

News Source = techcrunch.com

Researchers ran a simulator to teach this robot dog to roll over

in Delhi/India/Politics/Remove From TC River/robotics by

Advanced robots are expensive, and teaching them can be incredibly time-consuming. With the proper simulation, however, roboticists can train their machines to learn quickly. A team from the Robotic Systems Lab in Zurich, Switzerland has demonstrated as much in a new paper.

The research outlines how training a neural network in simulation taught the Boston Dynamics-esque ANYmal robot to perform some impressive feats, including rolling over as a method for recovering from a fall.

Using the simulation, researchers were able to train more than 2,000 computerized versions of the quadrupedal robot simultaneously in real time. Doing so made it possible to examine different methods and determine the best way to execute certain tasks.

Once collected, those learnings can then be transferred to the robot. As Popular Science notes, this is similar to the way many companies test and refine self-driving systems.
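The paper’s actual method trains neural-network control policies in a fast physics simulator and then transfers them to hardware; the details go well beyond a news post. As a rough illustration of the general shape of the idea (many simulated copies of a task rolled out in parallel, with the policy nudged toward whatever the better-performing copies did), here is a toy sketch. The task, parameters and update rule below are stand-ins, not the method from the paper.

```python
# Toy illustration of learning from many parallel simulations -- not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

N_SIMS = 2000                          # parallel simulated copies, loosely mirroring the article
TARGET = np.array([1.5, -0.7, 0.3])    # hypothetical "good" controller parameters

def reward(params):
    """Toy reward: higher when parameters are closer to the (unknown) target."""
    return -np.sum((params - TARGET) ** 2, axis=-1)

params = np.zeros(3)                   # the policy parameters being trained
learning_rate, noise_std = 0.1, 0.2

for step in range(200):
    # Perturb the current parameters 2,000 different ways and "simulate" each copy.
    noise = rng.normal(scale=noise_std, size=(N_SIMS, 3))
    rewards = reward(params + noise)
    # Nudge the parameters toward the perturbations that scored best
    # (a basic evolution-strategies update).
    advantages = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    params += learning_rate * (advantages @ noise) / (N_SIMS * noise_std)

print("learned:", np.round(params, 2), "target:", TARGET)
```

The real system learns a full locomotion controller in a physics simulator rather than three numbers on a toy objective, but the payoff is the same: thousands of cheap simulated trials stand in for slow, expensive trials on hardware.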

“Using policies trained in simulation,” the team writes in the paper, “the quadrupedal machine achieves locomotion skills that go beyond what had been achieved with prior methods: ANYmal is capable of precisely and energy-efficiently following high-level body velocity commands, running faster than before, and recovering from falling even in complex configurations.”

News Source = techcrunch.com

Spotify launches Car View on Android to make using its app less dangerous behind the wheel

in Android/android apps/Apps/car view/Delhi/distracted driving/Driving/India/Media/mobile/Music/Politics/Spotify/streaming by

Spotify is making it easier to use its streaming app in the car, when the phone is connected to the vehicle over Bluetooth. The company today confirmed the launch of a new feature called “Car View,” which is a simplified version of the service’s Now Playing screen that includes larger fonts, bigger buttons, and no distractions from album art. In Car View, you’re only shown the track title and artist, so you can read the screen with just a glance.

The site 9to5Google was the first to spot the feature’s appearance in Spotify’s settings. However, some users have had the option for weeks in what had appeared to be a slow rollout or possibly a test, pre-launch.

Spotify this morning formally announced the launch of Car View in a post to its Community Forums.

The company says the feature is currently available only on Android devices, and only when the device is connected over Bluetooth.

When the phone connects, Car View is automatically enabled when your music or podcast starts playing.

Above: Car View in action; credit: 9to5Google

While Spotify already offers several in-car experiences through integrations with other apps like Google Maps and Waze, as well as through Android Auto, using the music app itself behind the wheel has been very distracting and difficult.

I’ve personally found Spotify so dangerous to navigate while in the car that I just won’t use it unless I set it up to stream before I drive. Or, in some cases, I’ll hand the phone to a passenger to control instead.

Given the difficulty with Spotify in the car, Car View’s lack of support for those who use the app over an AUX cable is a little disappointing. There’s no good reason why users should not be allowed to manually enable Car View from the Settings, if they choose. After all, it’s just a change to the user interface of a single view – and it’s been built!

Of course, manually toggling Car View on might not feel as seamless as the Bluetooth experience, but a feature like this could prevent accidents caused by people fiddling with their phone in the car. Hopefully, Spotify will make Car View more broadly accessible in time.

According to Spotify, once Car View is enabled, you can access your Library, tap to Browse, or use Search. While listening, you can use the seek bar to skip to another part of the song.

If a passenger is controlling the music on your phone, they can temporarily disable Car View by way of the three-dot menu. And if, for some reason, you don’t want to use Car View, the feature can be disabled in the Settings. (But keep it on, OK?)

Spotify also noted that Car View supports landscape orientation and will arrive on iOS in the future, though it didn’t offer a time frame.

Car View officially launched on Android this week, and is now rolling out globally to all users.

 

News Source = techcrunch.com
