Magic Leap may be raising as much as $1 billion to fund its vision of a future filled with augmented reality glasses.
A Delaware filing dated Wednesday, provided to us by CB Insights, confirms that the secretive startup has authorized about $1 billion in new funding. The filing authorizes over 37 million shares of Series D preferred stock at $27 per share; there are no details on investors yet. Using this and previous filings, Equidate calculates that the proposed round values the company at about $7 billion.
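The round size follows directly from the filing's own numbers — shares authorized times price per share. A quick sanity check, using only the figures reported above (the share count is a lower bound, since the filing says "over 37 million"):

```python
# Figures from the Delaware filing as reported:
shares = 37_000_000        # "over 37 million shares" (lower bound)
price_per_share = 27       # Series D preferred, dollars per share

round_size = shares * price_per_share
# 37,000,000 × $27 = $999,000,000 — roughly the $1 billion reported
```

The $7 billion valuation can't be recomputed from this filing alone; Equidate derives it by combining the new share class with share counts from previous filings.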
A spokesperson for Magic Leap would not confirm that the round had been completed.
To date, the most remarkable public details to emerge about the Plantation, Florida augmented reality startup have been the substantial cash it has raised and the noteworthy names backing it. This round notwithstanding, the company has publicly announced nearly $1.4 billion in funding from high-profile investors that include Google, Alibaba and Andreessen Horowitz.
This substantial amount of funding has placed Magic Leap firmly in the public eye, but the only official hints of the startup’s consumer product strategy have emerged from dated patent filings and cryptic remarks by the company’s leadership referring to the launch of a device it seems to be tentatively calling “Magic Leap One.”
A report last month in Bloomberg suggested that the AR startup may be readying itself to begin shipping the device to a “small group of users” in the next six months at a price that could be as much as $2,000. In the past few weeks, the company has begun a new marketing push that includes a branding revamp with a new logo, a new website and a new promo video promising that “the whole story is coming soon.”
News Source = techcrunch.com
What’s under those clothes? This system tracks body shapes in real time
With augmented reality coming in hot and depth tracking cameras due to arrive on flagship phones, the time is right to improve how computers track the motions of people they see — even if that means virtually stripping them of their clothes. A new computer vision system that does just that may sound a little creepy, but it definitely has its uses.
The basic problem is that if you’re going to capture a human being in motion, say for a movie or for an augmented reality game, there’s a frustrating vagueness to them caused by clothes. Why do you think motion capture actors have to wear those skintight suits? Because their JNCO jeans make it hard for the system to tell exactly where their legs are. Leave them in the trailer.
Same for anyone wearing a dress, a backpack, a jacket — pretty much anything other than the bare minimum will interfere with the computer getting a good idea of how your body is positioned.
The multi-institutional project (PDF), due to be presented at CVPR in Salt Lake City, combines depth data with smart assumptions about how a body is shaped and what it can do. The result is a sort of X-ray vision, revealing the shape and position of a person’s body underneath their clothes, that works in real time even during quick movements like dancing.
The paper builds on two previous methods, DynamicFusion and BodyFusion. The first uses single-camera depth data to estimate a body’s pose, but doesn’t work well with quick movements or occlusion; the second uses a skeleton to estimate pose but similarly loses track during fast motion. The researchers combined the two approaches into “DoubleFusion,” essentially creating a plausible skeleton from the depth data and then sort of shrink-wrapping it with skin at an appropriate distance from the core.
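The two-layer idea can be sketched in a few lines of code. To be clear, this is a toy illustration, not the paper's actual optimization: it assumes a handful of joint positions, snaps them toward the observed depth cloud (standing in for the skeleton fit), then places "skin" points at a fixed offset from each observed surface point toward its nearest joint (standing in for the shrink-wrap step). All function names and parameters here are invented for illustration.

```python
import numpy as np

def fit_joints(depth_points, init_joints, iters=10):
    """Toy skeleton fit: pull each joint toward the centroid of the
    depth points currently nearest to it. A crude stand-in for the
    paper's joint-constrained pose optimization."""
    joints = init_joints.copy()
    for _ in range(iters):
        # Assign every depth point to its nearest joint.
        d = np.linalg.norm(depth_points[:, None, :] - joints[None, :, :], axis=2)
        owner = d.argmin(axis=1)
        for j in range(len(joints)):
            pts = depth_points[owner == j]
            if len(pts):
                # Move the joint halfway toward its assigned points.
                joints[j] = 0.5 * joints[j] + 0.5 * pts.mean(axis=0)
    return joints

def shrink_wrap(depth_points, joints, skin_offset=0.03):
    """Toy body-surface estimate: slide each observed point (clothing)
    a fixed offset inward, toward its nearest joint, to approximate
    the skin underneath."""
    d = np.linalg.norm(depth_points[:, None, :] - joints[None, :, :], axis=2)
    nearest = joints[d.argmin(axis=1)]
    dirs = nearest - depth_points
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return depth_points + skin_offset * dirs
```

The real system solves both layers jointly per frame, which is what lets each one rescue the other during fast motion; the fixed `skin_offset` here is also exactly the simplification that produces the baggy-clothes overestimate discussed below.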
As you can see above, depth data from the camera is combined with some basic reference imagery of the person to produce both a skeleton and track the joints and terminations of the body. On the right there, you see the results of just DynamicFusion (b), just BodyFusion (c) and the combined method (d).
The results are much better than either method alone, seemingly producing excellent body models from a variety of poses and outfits:
Hoodies, headphones, baggy clothes, nothing gets in the way of the all-seeing eye of DoubleFusion.
One shortcoming, however, is that it tends to overestimate a person’s body size if they’re wearing a lot of clothes — there’s no easy way for it to tell whether someone is broad or just wearing a chunky sweater. And it doesn’t work well when the person interacts with a separate object, like a table or game controller — it would likely try to interpret those as weird extensions of limbs. Handling these exceptions is planned for future work.
The paper’s first author is Tao Yu of Tsinghua University in China, but researchers from Beihang University, Google, USC, and the Max Planck Institute were also involved.
“We believe the robustness and accuracy of our approach will enable many applications, especially in AR/VR, gaming, entertainment and even virtual try-on as we also reconstruct the underlying body shape,” write the authors in the paper’s conclusion. “For the first time, with DoubleFusion, users can easily digitize themselves.”
There’s no use denying that there are lots of interesting applications of this technology. But there’s also no use denying that this technology is basically X-ray Spex.
This AR guppy feeds on the spectrum of human emotion
Indiecade always offers a nice respite from the wall of undulating human flesh and heat that is the rest of the E3 show floor. The loose confederation of independent developers often produces compelling and bizarre gaming experiences outside of the big studio system.
TendAR is the most compelling example of this out of this year’s batch. It is, simply put, a pet fish that feeds on human emotions through augmented reality. I can’t really explain why this is a thing, but it is. It’s a video game, so just accept it and move on.
The app is produced by Tender Claws, a small studio out of Los Angeles best known for Virtual Virtual Reality, an Oculus title that boasts among its “key features”:
- 50+ unique virtual virtual realities
- An artichoke screams at you
TendAR fits comfortably within that absurdist framework, though the title has more in common with virtual pets like the Tamagotchi and the belovedly bizarre Dreamcast cult hit Seaman. There’s also a bit of Douglas Adams wrapped up in there, in that your pet guppy feeds on human emotions read via facial recognition.
The app is designed for two players, both holding onto the same phone, feigning different emotions when prompted by a chatty talking fish. If you fail to give it what it wants, your fish will suffer. I tried the game and my guppy died almost immediately. Apparently my ability to approximate sadness is severely lacking. Tell it to my therapist, am I right?
The app is due out this year for Android.
Now Snapchat lets you unsend messages like Facebook promised
Mark Zuckerberg’s Facebook messages were retracted from the inboxes of some users, six sources told TechCrunch in April. Facebook quickly tried to normalize that breach of trust by claiming it would give everyone the ability to unsend messages in the coming months. We haven’t heard a word about it since, and Facebook told me it had nothing more to share here today.
Well Snap is stepping up. Snapchat will let you retract your risqué, embarrassing or incriminating messages thanks to a new feature called Clear Chats that’s rolling out globally over the next few weeks.
Hold down on a text, image, video, memory, sticker, or audio note in a one-on-one or group chat Snapchat message thread and you’ll see a Delete button. Tap it, and Snapchat will try to retract the message, though it admits it won’t always work if the recipient lacks an internet connection or updated version of the app. The recipient will also be notified…something Facebook didn’t do in the case of Zuckerberg’s messages.
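Snap hasn't published how Clear Chats works under the hood, but the behavior described above — best-effort deletion that can fail on an offline or outdated client, with the recipient notified either way — can be modeled in a few lines. Everything here (class names, fields, the notice text) is invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Recipient:
    """Toy model of the receiving client's state."""
    online: bool
    supports_retraction: bool          # i.e. running an updated app
    inbox: list = field(default_factory=list)
    notices: list = field(default_factory=list)

def retract(message_id, recipient):
    """Best-effort retraction: the recipient is always notified that a
    deletion was attempted, but the copy is only removed when the
    client is reachable and new enough to honor the request."""
    recipient.notices.append(f"A chat was deleted: {message_id}")
    if recipient.online and recipient.supports_retraction:
        if message_id in recipient.inbox:
            recipient.inbox.remove(message_id)
            return True
    return False  # offline or outdated app: the copy may remain
```

The always-notify step is the contrast with the Zuckerberg case: the retraction leaves a visible trace even when it succeeds.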
The Clear Chats feature could make people more comfortable sending sensitive information over Snapchat. The app already auto-deletes messages after they’re viewed unless a recipient chooses to screenshot or Save them, which their conversation partner can see. This could be especially useful for thwarting cases of revenge porn, where hackers or jilted ex-lovers expose someone’s nude images.
Unfortunately, the Clear Chats option could also be used to send then retract abusive messages, destroying the paper trail. Social media evidence is increasingly being used in divorce and custody battles, which an unsend feature might undermine…especially if Facebook goes through with rolling it out on its platform where messages are normally permanent. But right now, Snapchat’s priority is doing whatever it can to boost usage after hitting its slowest growth rate ever last quarter. If teens feel like Snapchat is a consequence-free place to message, whether or not that’s true, they might favor it over SMS and other social apps.
More Snapchat Spectacles And Ecommerce News
Snap made a few other announcements today. Spectacles v2, which are actually pretty great and which I continue to use, are now available for purchase through Amazon in the U.S., U.K. and Canada. The $150 photo- and video-recording sunglasses come to more European countries via Jeff Bezos soon, including France, Germany, Italy and Spain. Amazon will sell Spectacles in three color combos: Onyx Moonlight, Sapphire Twilight and Ruby Daybreak.
Until now, you could only buy v2 on Snap’s website. That’s because Snapchat’s eagerness to develop a bevy of sales channels made it very tough to forecast demand for its lackluster v1 Spectacles. They only sold 220,000. That led to hundreds of thousands of pairs gathering dust unsold in warehouses, and Snapchat taking an embarrassing $40 million write-off.
“We had an inventory challenge with v1” Snap’s VP of hardware Mike Randall told me in April. “We don’t think it was a product issue. It was an internal understanding our demand issue vs a planning issue. So we think by having a more simplistic channel strategy with v2 we can more thoughtfully manage demand with v2 vs v1.” Working with Amazon and its robust toolset should help Snap get Spectacles in front of more buyers without obscuring how many it should be manufacturing.
Still, the worst thing about Spectacles is Snapchat. The inability to dump footage directly to your phone’s camera roll and the incompatibility of its round media format with other social networks mean it’s tough to share your Spectacles content anywhere else and make it look good. Snap has experimented with a traditional landscape export format, but that hasn’t rolled out. Spectacles could strongly benefit from Snap partnering with fellow apps or open sourcing its format to let others show its circular, always-full-screen content in all its glory.
Finally, Snapchat is launching a new ecommerce ad unit that shows a carousel of purchasable items at the bottom of the screen that users can tap to buy without leaving the Snapchat app. This follows our prediction that Snap launching its own in-app merch store was really the foundation of a bigger ecommerce platform that’s now rolling out.
Merchants can use the Snap Pixel to measure how their ads lead to sales. The ability to shave down the ecommerce conversion funnel could get advertisers spending more on Snapchat when it could use the dollars. Last quarter it lost $385 million and missed its revenue target by $14 million.
Snapchat is also bringing its augmented reality advertisements to its self-serve ad buying tool. They’re sold on an effective CPM basis for $8 to $20 depending on targeting. Snapchat is also turning its new multiplayer game filters called Snappables into ads.
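For readers unfamiliar with the pricing model: CPM is cost per mille, the price an advertiser pays per 1,000 impressions, so the quoted $8–$20 range translates to campaign cost as follows (the impression count below is just an example, not a Snap figure):

```python
def campaign_cost(impressions, cpm_dollars):
    """CPM (cost per mille) pricing: dollars per 1,000 impressions."""
    return impressions / 1000 * cpm_dollars

# A hypothetical one-million-impression AR lens buy at Snapchat's
# quoted range of $8 to $20 effective CPM:
low = campaign_cost(1_000_000, 8)    # $8,000
high = campaign_cost(1_000_000, 20)  # $20,000
```

The spread reflects targeting: narrower audiences sit at the top of the range.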
Overall, it’s good to see Snapchat iterating across its software, hardware and business units. Plagued by executive departures, fierce competition from Facebook, a rough recent earnings report and share price troubles, it’s easy to imagine the team getting distracted. The long-term roadmap is fuzzy. With Stories becoming more popular elsewhere, Spectacles sales not being enough to right the ship, and Instagram preparing to launch a long-form video hub that competes with Snapchat Discover, Snap needs to figure out its identity. Perhaps that will hinge on some flashy new feature that captures the imagination of the youth. But otherwise, it must lock in for a long haul of efficient and methodical improvement. If it’s not growing, the best it can do is hold on to its core audience and squeeze as many dollars out of them as possible without looking desperate.