
Timesdelhi.com

February 24, 2019
Category archive

Augmented Reality

Movable Ink adds augmented reality to email marketing arsenal

in Augmented Reality/Delhi/email marketing/India/Marketing/movable ink/Politics by

Movable Ink has always helped marketers create highly customizable, visually interesting emails. Today, the company announced a new capability for marketers who want to introduce light-weight augmented reality (AR) to their campaigns.

Movable Ink co-founder and CTO Michael Nutt says the company was looking for a way to provide customers with AR experiences with less fuss than most current methods. “Marketers were looking for something interesting in AR, but they wanted to do it themselves without expensive consultants. We had this powerful visual channel already. We combined that with web technologies and put together an offering for our clients,” he explained.

This isn’t highly sophisticated AR, but it does provide a starting point for marketers who want to get involved with it. The idea involves creating branded selfies. Say you are using a vacation company to take a cruise. The company could send you an email a couple of days prior to the trip. Clicking the email takes you to a site where you can take a picture of yourself, superimposed over a relevant background. Users can share these images on social media, thereby acting as brand ambassadors for these companies.

Photo: Movable Ink

Movable Ink’s mission involves making marketing emails more interesting so that people actually open them. The AR component is really about increasing engagement: Movable Ink says that in early betas it has seen a 40 percent increase in open rates, with 50 percent of participants who open the email spending more than a minute engaging with the AR experience.

The flavor of AR the company is offering doesn’t require the end user to have any special equipment, nor does it require marketers to have coding skills. It’s all designed using tools that work inside any browser, with graphical overlays and face filters providing the customized selfie experience.
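Movable Ink hasn’t published the internals of these experiences, but the browser-only approach it describes (a camera feed composited with graphical overlays, no app install) can be sketched with standard web APIs. A minimal TypeScript sketch follows; the overlay asset path is invented for illustration, and real face filters would also need a face-tracking library.

```typescript
// Minimal sketch of a browser-based branded selfie, in the spirit of what
// Movable Ink describes: camera feed composited with a graphical overlay.
// The overlay path is hypothetical; face filters would need face tracking.

async function startBrandedSelfie(
  video: HTMLVideoElement,
  canvas: HTMLCanvasElement
): Promise<void> {
  // Ask for the front-facing camera (requires HTTPS and user consent).
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { facingMode: "user" },
  });
  video.srcObject = stream;
  await video.play();

  const ctx = canvas.getContext("2d")!;
  const overlay = new Image();
  overlay.src = "/assets/cruise-frame.png"; // hypothetical branded frame

  // Composite each video frame with the branded overlay.
  const draw = () => {
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    ctx.drawImage(overlay, 0, 0, canvas.width, canvas.height);
    requestAnimationFrame(draw);
  };
  overlay.onload = draw;
}

// Snapshot the composited frame as a data URL the user can share.
function captureSelfie(canvas: HTMLCanvasElement): string {
  return canvas.toDataURL("image/png");
}
```

Because everything runs in the page, the marketer’s side of the tooling can stay no-code: the email simply links out to a hosted page along these lines.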

Once marketers create these experiences, they can measure and report on them like any marketing email, looking at opens, engagement time, the number of times the camera has been activated and how many pictures have been taken.

The company began working on this capability about a year ago and launched it in beta in October. The product is available to Movable Ink customers starting today.

News Source = techcrunch.com

LEGO launches eight AR-focused sets

in Apps/Augmented Reality/Delhi/India/lego/Politics by

LEGO’s long been a leader among traditional toy companies when it comes to embracing tech trends, from mobile apps to robotics. The toy maker’s been talking up its plans to embrace augmented reality since a couple of WWDCs ago, and now it’s finally ready to go all-in with the launch of eight AR-focused sets.

All are part of Hidden Side, a new series of sets designed to straddle the line between the physical and the virtual. Each is a haunted building that feeds into a larger story about a couple of kids tasked with using a ghost-hunting app to uncover mysterious goings-on in their hometown.

The sets range from $20 to $130 and offer experiences that adapt as the story continues to roll out. The addition of a digital component gives the company a bit of leeway when it comes to building things out over time. Those who don’t buy a set can also use the app to play a standalone game from the point of view of the ghosts, though obviously the whole experience is richer if you own the physical LEGO.

The sets are completely new — built from the ground up to support AR, unlike those shown at WWDC. Also, interestingly, the company didn’t use ARKit or ARCore to build out the experiences, instead opting for the more robust model recognition of Vuforia’s SDK.

The sets will arrive in “late summer,” along with the app, which will hit both the App Store and Google Play.

News Source = techcrunch.com

Facebook mulled multi-billion-dollar acquisition of gaming giant Unity, book claims

in Augmented Reality/Delhi/Facebook/India/Mark Zuckerberg/Politics/TC/unity-technologies/Virtual Reality by

Less than a year after making a $3 billion investment in the future of virtual reality with the purchase of Oculus VR, Facebook CEO Mark Zuckerberg was considering another multi-billion-dollar bet to ensure that his company dominated the VR platform: buying Unity, the popular game engine used to build half of all gaming titles.

This claim is made in a new book coming out next week, “The History of the Future,” by Blake Harris, which digs deep into the founding story of Oculus and the drama surrounding the Facebook acquisition, subsequent lawsuits, and personal politics of founder Palmer Luckey.

In the early days while he was writing the book, Harris worked closely with the Facebook PR team and was granted regular interviews with key execs before, as he puts it, his “access came to an end.” Harris claims that through reporting out the book, he had gained access to more than 25,000 documents from sources, including a nearly 2,500-word email sent by Mark Zuckerberg to then-Oculus CEO Brendan Iribe, Sheryl Sandberg and a half-dozen other Facebook leaders detailing his interest in buying Unity. TechCrunch has not independently verified the contents of the email.

The email, dated June 22, 2015, lays out an argument for further prioritizing AR/VR and buying the game engine company. The proposed deal, codenamed “One” according to the book, would have brought one of the world’s most recognizable game developer tool startups into the fold of the internet giant bent on bringing consumers on board its upcoming VR platform as it looked to ward off competition from other tech giants.

Unity CEO John Riccitiello

The potential deal obviously did not end up going through, and since 2015, Unity has raised nearly $600 million on a valuation north of $3 billion. A report from Cheddar earlier this week noted the company was setting its sights on a 2020 IPO.

Nevertheless, the email offers a rare perspective on Zuckerberg’s thinking about virtual reality and Facebook’s competitive footing. Though only parts of it are referenced in the book, Harris has sent TechCrunch the full email, embedded below:

“2015 06 22 MARK’S VISION” (embedded via Scribd)

“We are vulnerable on mobile to Google and Apple because they make major mobile platforms,” the email reads. “From a timing perspective, we are better off the sooner the next platform becomes ubiquitous and the shorter the time we exist in a primarily mobile world dominated by Google and Apple. The shorter this time, the less our community is vulnerable to the actions of others. Therefore, our goal is not only to win in VR / AR, but also to accelerate its arrival. This is part of my rationale for acquiring companies and increasing investment in them sooner rather than waiting until later to derisk them further.”

Beyond staking a claim on the VR platform, Zuckerberg also frames an argument for owning Unity as a means of pushing competitors to support Facebook’s other platform services.

“If we own Unity, then Android, Windows and iOS will all need us to support them on [sic] larger portions of their ecosystems won’t work. While we wouldn’t reject them outright, we will have options for how deeply we support them,” Zuckerberg continues. “On the flip side, if someone else buys Unity or the leader in any core technology component of this new ecosystem, we risk being taken out of the market completely if that acquirer is hostile and decides not to support us.”

Though, again, a Unity deal never came to fruition, Zuckerberg comes across in the email as strongly in favor of pursuing it, even as he notes clear challenges that could derail the effort.

“Going back to the question of whether it is worth investing billions of dollars into Unity and other core technology over the next decade, the most difficult aspect to evaluate is that we cannot definitively say that if we do X, we will succeed. There are many major pieces of this ecosystem to assemble and many different ways we could be hobbled. All we know is that this improves our chances to build something great.

“Given the overall opportunity of strengthening our position in the next major wave of computing, I think it’s a clear call to do everything we can to increase our chances. A few billion dollars is expensive, but we can afford it.”

Facebook did not comment on the email to TechCrunch. A spokesperson, however, did send along a statement about the book: “The book doesn’t get everything right, but what we hope people remember is the future of VR will not be defined by one company, one team, or even one person. This industry was built by a community of pioneers who believed in VR against all odds and that’s the history we celebrate.”

News Source = techcrunch.com

Asteroid is building a human-machine interaction engine for AR developers

in Asteroid Technologies/Augmented Reality/biosensory data/Delhi/Hardware/India/Politics/Recent Funding/Startups/TC by

When we interact with computers today we move the mouse, we scroll the trackpad, we tap the screen, but there is so much that the machines don’t pick up on — what about where we’re looking, the subtle gestures we make and what we’re thinking?

Asteroid is looking to get developers comfortable with the idea that future interfaces are going to take in much more biosensory data. The team has built a node-based human-machine interface engine for macOS and iOS that allows developers to build interactions that can be imported into Swift applications.
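Asteroid hasn’t published its API, so the sketch below only illustrates the general shape of a node-based interaction engine as described: sensor nodes emit samples, processing nodes transform them and application code subscribes to the result. It’s written in TypeScript for brevity rather than Swift, and every name and threshold in it is hypothetical.

```typescript
// Hypothetical sketch of a node-based interaction graph: sensor nodes emit
// samples, processing nodes transform them, app code subscribes to events.
// None of these names come from Asteroid's actual engine.

type Listener<T> = (value: T) => void;

class InteractionNode<In, Out> {
  private listeners: Listener<Out>[] = [];
  constructor(private transform: (input: In) => Out | null) {}

  // Wire this node's output into the next node's input.
  pipe<Next>(next: InteractionNode<Out, Next>): InteractionNode<Out, Next> {
    this.listeners.push((v) => next.receive(v));
    return next;
  }

  subscribe(fn: Listener<Out>): void {
    this.listeners.push(fn);
  }

  receive(input: In): void {
    const out = this.transform(input);
    if (out !== null) this.listeners.forEach((fn) => fn(out));
  }
}

// Example graph: raw gaze samples -> dwell detector -> "select" event.
interface Gaze { x: number; y: number; t: number } // t in milliseconds

const gazeSource = new InteractionNode<Gaze, Gaze>((g) => g);

const dwellDetector = new InteractionNode<Gaze, { x: number; y: number }>(
  (() => {
    let anchor: Gaze | null = null;
    return (g: Gaze) => {
      // Fire when the gaze stays within 20 px for 500 ms (made-up thresholds).
      if (anchor && Math.hypot(g.x - anchor.x, g.y - anchor.y) < 20) {
        if (g.t - anchor.t > 500) {
          anchor = null;
          return { x: g.x, y: g.y };
        }
      } else {
        anchor = g; // gaze moved: restart the dwell timer
      }
      return null;
    };
  })()
);

gazeSource.pipe(dwellDetector).subscribe((p) => {
  console.log(`select at (${p.x}, ${p.y})`); // app-level "select" event
});
```

The appeal of the node formulation is that swapping eye-tracking for, say, a brain-computer interface electrode only changes the source node; the rest of the graph and the app-facing events stay the same.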

“What’s interesting about emerging human-machine interface tech is the hope that the user may be able to ‘upload’ as much as they can ‘download’ today,” Asteroid founder Saku Panditharatne wrote in a Medium post.

To bring attention to their development environment, they’ve launched a crowdfunding campaign that gives a decent snapshot of the depth of experiences that can be enabled by today’s commercially available biosensors. Asteroid definitely doesn’t want to be a hardware startup, but their campaign is largely serving as a way to expose developers to what tools could be in their interaction design arsenal.

There are dev kits and then there are dev kits, and this is a dev kit. Developers jumping on board for the total package get a pile of open hardware: gear and cases to build out hacked-together interface solutions. The $450 kit brings capabilities like eye-tracking, brain-computer interface electrodes and some gear to piece together a motion controller. Backers can also just buy the $200 eye-tracking kit alone. It’s all very utility-minded and clearly not designed to make Asteroid big hardware bucks.

“The long-term goal is to support as much AR hardware as we can, we just made our own kit because I don’t think there is that much good stuff out there outside of labs,” Panditharatne told TechCrunch.

The crazy hardware seems to be a bit of a labor of love for the time being. While a couple of AR/VR devices have eye-tracking baked in, it’s still a generation away from most consumer VR devices, and you’re certainly not going to find much hardware with brain-computer interface systems built in. The startup says its engine will do plenty with just a smartphone camera and a microphone, but the broader sell with the dev kit is that you’re not building for a specific piece of hardware; you’re experimenting on the bet that interfaces are going to grow more closely intertwined with how we process the world as humans.

Panditharatne founded the company after stints at Oculus and Andreessen Horowitz, where she spent a lot of time focusing on the future of AR and VR. She tells us that Asteroid has raised more than $2 million in funding, but that the company isn’t detailing the source of that cash quite yet.

The company is looking to raise $20,000 from its Indiegogo campaign, but the platform is the clear sell here, exposing people to its human-machine interaction engine. Asteroid is taking sign-ups for the product’s waiting list on its site.

News Source = techcrunch.com

Hands-on with an Alpha build of Google Maps’ Augmented Reality mode

in Augmented Reality/Delhi/Google/Google-Maps/India/Politics/TC by

I think most of us have had this experience, especially when you’re in a big city: you step off of public transit, take a peek at Google Maps to figure out which way you’re supposed to go… and then somehow proceed to walk two blocks in the wrong direction.

Maybe the little blue dot wasn’t actually in the right place yet. Maybe your phone’s compass was bugging out and facing the wrong way because you’re surrounded by 30-story buildings full of metal and other things that compasses hate.

Google Maps’ work-in-progress augmented reality mode wants to end that scenario, drawing arrows and signage onto your camera’s view of the real world to make extra, super sure you’re heading the right way. It compares that camera view with its massive collection of Street View imagery to try to figure out exactly where you’re standing and which way you’re facing, even when your GPS and/or compass might be a little off. It’s currently in alpha testing, and I spent some hands-on time with it this morning.

Video: a little glimpse of what it looks like in action.

Google first announced AR walking directions about nine months ago at its I/O conference, but has been pretty quiet about it since. Much of that time has been spent figuring out the subtleties of the user interface. When designers drew a specific route on the ground, early users tried to stand directly on top of the line while walking, even when that wasn’t necessary or safe. When they tried using particle effects floating in the air to represent paths and curves (pictured below in an early prototype), a Google UX designer tells us, one user asked why they were ‘following floating trash’.

The Maps team also learned that no one wants to hold their phone up very long. The whole experience has to be pretty quick, and is designed to be used in short bursts — in fact, if you hold up the camera for too long, the app will tell you to stop.

Firing up AR mode feels like starting up any other Google Maps trip. Pop in your destination, hit the walking directions button… but instead of “Start”, you tap the new “Start AR” button.

A view from your camera appears on screen, and the app asks you to point the camera at buildings across the street. As you do so, a bunch of dots will pop up as it recognizes building features and landmarks that might help it pinpoint your location. Pretty quickly — a few seconds, in our handful of tests — the dots fade away, and a set of arrows and markers appear to guide your way. A small cut-out view at the bottom shows your current location on the map, which does a pretty good job of making the transition from camera mode to map mode a bit less jarring.

When you drop the phone to a more natural position, closer to parallel with the ground, like you might hold it if you’re reading texts while you walk, Google Maps will shift back into the standard 2D map view. Hold the phone up like you’re taking a portrait photo of what’s in front of you, and AR mode comes back in.
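That pitch-based mode switching is easy to picture in code. As a rough illustration (not Google’s implementation), a web app could do something similar with the standard DeviceOrientationEvent, reading the beta angle (front-to-back tilt) with a little hysteresis so the view doesn’t flicker at the threshold; the handler names are hypothetical.

```typescript
// Rough illustration of pitch-based mode switching, not Google's code.
// DeviceOrientationEvent.beta is the front-to-back tilt in degrees:
// roughly 0 when the phone lies flat, roughly 90 when held upright.

type Mode = "map" | "ar";
let mode: Mode = "map";

window.addEventListener("deviceorientation", (e: DeviceOrientationEvent) => {
  const pitch = e.beta ?? 0;
  // Hysteresis: different thresholds for entering and leaving AR mode.
  if (mode === "map" && pitch > 60) {
    mode = "ar";
    enterArMode(); // hypothetical: show camera view with overlaid arrows
  } else if (mode === "ar" && pitch < 30) {
    mode = "map";
    enterMapMode(); // hypothetical: return to the 2D map
  }
});

function enterArMode(): void { /* start camera + AR overlay */ }
function enterMapMode(): void { /* stop camera, show map */ }
```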

In our short test (about 45 minutes in all), the feature worked as promised. It definitely works better in some scenarios than others; if you’re closer to the street and thus have a better view of the buildings across the way, it works out its location pretty quickly and with ridiculous accuracy. If you’re in the middle of a plaza, it might take a few seconds longer.

Google’s decision to build this as something you’re only meant to use for a few seconds is the right one. Between making yourself an easy target for would-be phone thieves and walking into light poles, no one wants to wander a city primarily through the camera lens of their phone. I can see myself using it on the first step or two of a trek to make sure I’m getting off on the right foot, at which point an occasional glance at the standard map will hopefully suffice. It’s about helping you feel more certain, not about holding your hand the entire way.

Google did a deeper dive on how the tech works here, but in short: the app takes the view from your camera and sends a compressed version up to the cloud, where it’s analyzed for unique visual features. Google has a good idea of where you are from your phone’s GPS signal, so it can compare the Street View data it has for the surrounding area, look for things it thinks should be nearby (certain building features, statues or permanent structures) and work backwards to your more precise location and direction. There’s also a bunch of machine learning voodoo going on to ignore things that might be prominent but not necessarily permanent (like trees, large parked vehicles and construction).
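Google hasn’t published an API for any of this, but the flow as described translates into a simple client-side sketch: grab a frame, compress it, upload it alongside the coarse GPS fix, and get back a refined pose. The endpoint and field names below are invented for illustration.

```typescript
// Hypothetical client-side flow for camera-based localization, following
// the description above. The endpoint and payload shapes are invented.

interface CoarseFix { lat: number; lng: number; accuracyMeters: number }
interface RefinedPose { lat: number; lng: number; headingDegrees: number }

async function localizeFromCamera(
  video: HTMLVideoElement,
  gps: CoarseFix
): Promise<RefinedPose> {
  // Compress the current camera frame to JPEG before uploading.
  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext("2d")!.drawImage(video, 0, 0);
  const frame: Blob = await new Promise((resolve) =>
    canvas.toBlob((b) => resolve(b!), "image/jpeg", 0.6)
  );

  // The server would match visual features against Street View imagery
  // near the GPS fix and work backwards to a precise position and heading.
  const body = new FormData();
  body.append("frame", frame);
  body.append("gps", JSON.stringify(gps));
  const res = await fetch("https://example.com/visual-localize", {
    method: "POST",
    body,
  });
  return res.json(); // { lat, lng, headingDegrees }
}
```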

The feature is currently rolling out to “Local Guides” for feedback. Local Guides are an opt-in group of users who contribute reviews, photos, and places while helping Google fact check location information in exchange for early access to features like this.

Alas, Google told us repeatedly that it has no idea when it’ll roll out beyond that group.

News Source = techcrunch.com
