AI

Microsoft Translator gets offline AI translations

Chances are you need a translator app on your phone mostly while you are traveling. But that’s also when you are least likely to have connectivity. While most translation apps still work when they are offline, they can’t use the sophisticated — and computationally intense — machine learning algorithms in the cloud that typically power them. Until now, that was also the case for the Microsoft Translator app on Amazon Fire, Android and iOS, but starting today, the app will run a slightly modified neural translation system when offline (though iOS users may have to wait a few days, since the update still has to be approved by Apple).

What’s interesting here is that Microsoft can do this on virtually any modern phone, with no need for a custom AI chip.

Microsoft’s Arul Menezes tells me that these new translation packs are “dramatically better” and provide far more human-like translations than the old ones, which relied on an older approach to machine translation that has since been far surpassed by machine learning-based systems. The updated language packs (which only take up about half the space of the old ones) are now available for Arabic, Chinese-Simplified, French, German, Italian, Japanese, Korean, Portuguese, Russian, Spanish and Thai, with others to follow.

Menezes tells me that Microsoft first trialed this on-device neural translation with Huawei, which started including its homegrown AI co-processor in its Mate 10 and Honor 10 phones last year. Now, however, thanks to what Menezes called “a lot of careful engineering,” the team is able to run these models on phones without dedicated AI hardware, too.

A mobile platform is still somewhat limited compared to a data center, though, so the team also shrank the models a bit. If you’re offline, chances are you’ll still see a few more translations that aren’t quite right than when you’re online. Microsoft promises that the difference in quality between online and offline translation is barely noticeable, though. “The gap between the neural offline translation and the previous translation quality with our older models is huge,” said Menezes — and he wasn’t shy to compare the quality of Microsoft’s translation services to Google’s.
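Microsoft hasn’t said exactly how the models were slimmed down, but weight quantization is one common way to shrink a neural network for on-device use: store low-precision integers instead of 32-bit floats and accept a small loss of accuracy. Here is a minimal sketch of the idea in Kotlin (illustrative only, not Microsoft’s actual pipeline):

```kotlin
import kotlin.math.abs
import kotlin.math.roundToInt

// Post-training weight quantization: store each 32-bit float weight as a
// signed 8-bit integer plus one shared scale factor. Illustrative only;
// Microsoft hasn't published how its offline packs are compressed.
fun quantize(weights: FloatArray): Pair<ByteArray, Float> {
    val maxAbs = weights.maxOf { abs(it) }
    val scale = if (maxAbs == 0f) 1f else maxAbs / 127f
    val quantized = ByteArray(weights.size) { i ->
        (weights[i] / scale).roundToInt().coerceIn(-127, 127).toByte()
    }
    return quantized to scale
}

// Reconstruct approximate float weights at inference time.
fun dequantize(q: ByteArray, scale: Float): FloatArray =
    FloatArray(q.size) { i -> q[i] * scale }

fun main() {
    val weights = floatArrayOf(0.12f, -0.87f, 0.45f, -0.02f)
    val (q, scale) = quantize(weights)
    // Each weight now takes 1 byte instead of 4, at a small cost in
    // precision, which is consistent with slightly rougher offline output.
    println(dequantize(q, scale).joinToString())
}
```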

With this update, Microsoft is also making these offline capabilities available to other app developers on Android who want to use them in their apps (for a price, of course). These apps can now call the Microsoft Translator app in the background, get the translation and then display it to their users. If you’re offline, it’ll use the offline translations and if you are online, it’ll send the queries to the Microsoft cloud.
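The article doesn’t spell out what that integration looks like, but on Android an app-to-app call like this is typically made with an explicit Intent. A hypothetical sketch in Kotlin; the action, package and extra names are invented for illustration:

```kotlin
import android.app.Activity
import android.content.Intent

private const val REQUEST_TRANSLATE = 42

// Hypothetical: hand a string to the Translator app and get the result back
// via onActivityResult(). None of these identifiers are documented here.
fun Activity.requestTranslation(text: String, toLang: String) {
    val intent = Intent("com.microsoft.translator.ACTION_TRANSLATE").apply {
        setPackage("com.microsoft.translator") // route to the Translator app
        putExtra("EXTRA_TEXT", text)
        putExtra("EXTRA_TO_LANG", toLang)
    }
    // The Translator app decides where to answer from: its on-device neural
    // model when the phone is offline, the Microsoft cloud when it's online.
    startActivityForResult(intent, REQUEST_TRANSLATE)
}
```

A nice consequence of this design is that the calling app never has to check connectivity itself; the Translator app owns that decision.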

News Source: techcrunch.com

AI

What we know about Google’s Duplex demo so far

The highlight of Google’s I/O keynote earlier this month was the reveal of Duplex, a system that can set up a salon appointment or a restaurant reservation for you by calling those places, chatting with a human and getting the job done. That demo drew lots of laughs at the keynote, but after the dust settled, plenty of ethical questions popped up because of how Duplex tries to pass itself off as human. Over the course of the last few days, those were joined by questions about whether the demo was staged or edited, after Axios asked Google a few simple questions about the demo that Google refused to answer.

We have reached out to Google with a number of very specific questions about this and have not heard back. As far as I can tell, the same is true for other outlets that have contacted the company.

If you haven’t seen the demo, it’s worth watching before you read on.

So did Google fudge this demo? Here is why people are asking and what we know so far:

During his keynote, Google CEO Sundar Pichai noted multiple times that we were listening to real calls and real conversations (“What you will hear is the Google Assistant actually calling a real salon.”). The company made the same claims in a blog post (“While sounding natural, these and other examples are conversations between a fully automatic computer system and real businesses.”).

Google has so far declined to disclose the names of the businesses it worked with and whether it had permission to record those calls. California is a two-party consent state, so our understanding is that permission to record these calls would have been necessary (unless those calls were made to businesses in a state with different laws). So on top of the ethics questions, there are also a few legal questions here.

We have some clues, though. In the blog post, Google Duplex lead Yaniv Leviathan and engineering manager Matan Kalman posted a picture of themselves eating a meal “booked through a call from Duplex.” Thanks to the wonder of crowdsourcing and a number of intrepid sleuths, we know that this restaurant was Hongs Gourmet in Saratoga, California. We called Hongs Gourmet last night, but the person who answered the phone referred us to her manager, who she told us had left for the day. (We’ll give it another try today.)

Sadly, the rest of Google’s audio samples don’t contain any other clues as to which restaurants were called.

What prompted much of the suspicion here is that nobody who answers the Assistant’s calls in Google’s samples gives their own name or the name of the business. My best guess is that Google cut those parts from the conversations, but it’s hard to tell. Some of the audio samples do, however, sound as if the beginning was edited out.

Google clearly didn’t expect this project to be controversial. The keynote demo was meant to dazzle — and it did so in the moment because, if it really works, this technology represents the culmination of years of work on machine learning. But the company didn’t think through the consequences.

My best guess is that Google didn’t fake these calls. But it surely presented only the best examples from its tests. That’s what you do in a big keynote demo, after all, even though in hindsight, showing the system fail or placing a live call would have been even better (remember Steve Jobs’ Starbucks call?).

For now, we’ll see if we can get more answers, but so far all of our calls and emails have gone unanswered. Google could easily do away with all of those questions around Duplex by simply answering them, but so far, that’s not happening.

News Source: techcrunch.com

AI

Google Clips gets better at capturing candids of hugs and kisses (which is not creepy, right?)

Google Clips’ AI-powered “smart camera” just got even smarter, Google announced today, revealing improved functionality around Clips’ ability to automatically capture specific moments – like hugs and kisses. Or jumps and dance moves. You know, in case you want to document all your special, private moments in a totally non-creepy way.

I kid, I kid!

Well, not entirely. Let me explain.

Look, Google Clips comes across to me as more of a proof-of-concept device that showcases the power of artificial intelligence as applied to photography than a breakthrough consumer device.

I’m the target market for this camera – a parent and a pet owner – but I have no desire for a smart camera designed to capture those tough-to-photograph moments, even though neither my kid nor my pet will sit still for pictures.

I’ve tried to articulate this feeling, and I find it’s hard to say why I don’t want this thing, exactly. It’s not because the photos are automatically uploaded to the cloud or made public – they are not. They are saved to the camera’s 16 GB of onboard storage and can be reviewed later with your phone, where you can then choose to keep them, share them, or delete them. And it’s not even entirely because of the price point – though, arguably, even with the recent $50 discount it’s quite the expensive toy at $199.

Maybe it’s just the camera’s premise.

That in order for us to fully enjoy a moment, we have to capture it. And because some moments are so difficult to capture, we spend too much time with phone-in-hand, instead of actually living our lives – like playing with our kids or throwing the ball for the dog, for example. And that the only solution to this problem is more technology. Not just putting the damn phone down.

What also irks me is the broader idea behind Clips that all our precious moments have to be photographed or saved as videos. They do not. Some are meant to be ephemeral. Some are meant to be memories. In aggregate, our hearts and minds tally up all these little life moments – a hug, a kiss, a smile – and then turn them into feelings. Bonds. Love. It’s okay to miss capturing every single one.

I’m telling you, it’s okay.

At the end of the day, there are only a few times I would have even considered using this product – when my baby was taking her first steps and I was worried it would happen while my phone was away, or during some big event, like a birthday party, where I wanted candids but had too much going on to take photos. But even in those moments, I’d rather prop my phone up and turn on a “Google Clips” camera mode than shell out hundreds for a dedicated device.

Just saying.

You may feel differently. That’s cool. To each their own.

Anyway, what I think is most interesting about Clips is the actual technology. That it can view things captured through a camera lens and determine the interesting bits – and that it’s already getting better at this, only months after its release. That we’re teaching AI to understand what’s actually interesting to us humans, with our subjective opinions. That sort of technology has all kinds of practical applications beyond a physical camera that takes spy shots of Fido.

The improved functionality is rolling out to Clips with the May update and will soon be followed by support for family pairing, which will let multiple family members connect the camera to their devices to view content.

Note that it’s currently on sale for $199. Yeah, already. Hmmm. 

News Source: techcrunch.com

AI

Google Maps goes beyond directions

Google today announced a new version of Google Maps that will launch later this summer. None of the core Google Maps features for getting directions are going away, of course, but on top of them, the team has built a new set of features that are all about exploration.

“About a year ago, when we started to talk to users, one of the things we asked them was: how can we really help you? What else do you want Google Maps to do? And one of the overwhelming answers that we got back was just really a lot of requests around helping users explore an area, help me decide where to go,” Sophia Lin, Google’s senior product manager on the Google Maps team, told me. “So we really started digging in to thinking about what we can really do here from Google that would really help people.”

Right now, Google Maps is obviously best known for helping people get where they want to go, but for a while now, Google has featured all kinds of additional content in the service. Many users never touch those features, though, it seems. While I couldn’t get Lin to tell me about the percentage of users who currently use the existing Google Maps exploration tools, this new initiative is also part of an attempt to get users to move beyond directions when they think about Maps.

And because this is Google, that new experience is all about personalization with the help of AI.

So in the new Maps, you’ll find the new “For you” tab that’s basically a newsfeed-like experience with recommendations for you. You’ll be able to “follow” certain neighborhoods and cities (or maybe a place you plan to visit soon), similar to a social networking experience. When Google Maps finds interesting updates in that area — maybe a restaurant that’s trending or a new coffee shop that opens — it’ll tell you about that in your feed.

“People had problems finding out what’s new,” Lin told me. “Sometimes you are really lucky and you’re walking down the street and stumble across something, but oftentimes that’s not the case and you find out about something six months after it opened, so what we started looking into was can we understand, from anonymized population dynamics, what places are trending, what are the places that people are going.”

There are also algorithmically generated “Foodie List” and “Trending this week” lists that show you what’s new and interesting and where the tastemakers in an area are hanging out. As Lin told me, the Foodie List is based on an anonymized cohort analysis that looks at where people who go out a lot gather. Because they are often the first to try new places, their movements tend to presage trends. Similarly, the “Trending” list looks at the overall population, so it can change with the season, with an ice cream parlor trending in the summer, for example.
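Lin didn’t go deeper than that, but the cohort idea is easy to sketch: weight each visit by how often the visitor goes out at all, so the movements of frequent diners count for more. A hypothetical toy version in Kotlin (not Google’s actual algorithm, and the data shapes are invented):

```kotlin
// Hypothetical sketch of a "Foodie List" style cohort score. Visits are
// assumed to be anonymized; this is not Google's actual algorithm.
data class Visit(val anonymizedUserId: String, val placeId: String)

fun foodieScores(visits: List<Visit>): Map<String, Double> {
    // How often each (anonymized) user goes out at all.
    val outingsPerUser = visits.groupingBy { it.anonymizedUserId }.eachCount()
    // A place's score sums the overall activity of its visitors, so spots
    // frequented by people who eat out a lot bubble up first -- and, per
    // Lin, those people tend to try new places earliest.
    return visits
        .groupBy { it.placeId }
        .mapValues { (_, vs) ->
            vs.sumOf { outingsPerUser.getValue(it.anonymizedUserId).toDouble() }
        }
}
```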

For other items in the “For you” feed, Google Maps will actually analyze articles about local news to see what’s new, too.

Lin stressed that the feed isn’t so much about the volume of information but about presenting the right information at the right time and for the right person.

In addition to the “For you” feed, there are also a number of new basic exploration features, which are all powered by AI, too. Maps will generate lists of Michelin-starred restaurants, for example, or popular brunch spots depending on your context and the time of day.

Another major new feature that’s coming to Maps soon is “your match.” If you regularly peruse the star ratings of various restaurants before you decide where to go, then you know that those ratings can only tell you so much. Now, with “your match,” Maps will present you with a personalized score that tells you how closely a restaurant matches your own preferences.

Google Maps learns those preferences from how you have rated this and other places, as well as from preferences you can set manually in the Google Maps settings once this update goes live. Interestingly, Google does not try to base these scores on how other people like you have rated a place.
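Google hasn’t described the model behind the score, but because it is built only from your own ratings and settings, a toy version is easy to imagine: average your affinity for a place’s attributes, letting manual settings override learned ones. A hypothetical Kotlin sketch (all names and data shapes invented):

```kotlin
// Hypothetical "your match" style score; not Google's actual model.
data class Place(val id: String, val cuisines: Set<String>)

fun matchScore(
    place: Place,
    learnedAffinity: Map<String, Double>, // inferred from your past ratings
    manualPrefs: Map<String, Double>      // set explicitly in Maps settings
): Int {
    // Manual preferences win where both exist, since the user stated them.
    val prefs = learnedAffinity + manualPrefs
    val raw = place.cuisines.mapNotNull { prefs[it] }.average() // NaN if no overlap
    val score = if (raw.isNaN()) 0.5 else raw.coerceIn(0.0, 1.0)
    return (score * 100).toInt() // e.g. an "87% match" badge
}
```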

The third major new feature of the new app is group planning. Based on the demo I saw, the team actually did a really nice job with this. The general idea here is to allow you to easily create a list of suggestions for a group outing (or just a dinner with your significant other) by long-pressing on a place listing. Google Maps will then pop up a chat head-like bubble that follows you around as you browse for other places. Once you have compiled your list, you can share it with your friends, who can then vote for their favorites.

Google will launch this new Google Maps experience later this summer. It will come to both iOS and Android, though the team hasn’t yet decided which will come first. For now, all of these new features will come only to the app, not the web.

News Source: techcrunch.com
