

What we know about Google’s Duplex demo so far

The highlight of Google’s I/O keynote earlier this month was the reveal of Duplex, a system that can call a salon or a restaurant on your behalf, chat with the human who picks up and book your appointment or reservation. That demo drew lots of laughs at the keynote, but after the dust settled, plenty of ethical questions popped up because of how Duplex tries to pass as human. Over the course of the last few days, those were joined by questions about whether the demo was staged or edited, after Axios asked Google a few simple questions about it that Google refused to answer.

We have reached out to Google with a number of very specific questions about this and have not heard back. As far as I can tell, the same is true for other outlets that have contacted the company.

If you haven’t seen the demo, take a look at it before you read on.

So did Google fudge this demo? Here is why people are asking and what we know so far:

During his keynote, Google CEO Sundar Pichai noted multiple times that we were listening to real calls and real conversations (“What you will hear is the Google Assistant actually calling a real salon.”). The company made the same claims in a blog post (“While sounding natural, these and other examples are conversations between a fully automatic computer system and real businesses.”).

Google has so far declined to disclose the names of the businesses it worked with and whether it had permission to record those calls. California is a two-party consent state, so our understanding is that permission to record these calls would have been necessary (unless those calls were made to businesses in a state with different laws). So on top of the ethics questions, there are also a few legal questions here.

We have some clues, though. In the blog post, Google Duplex lead Yaniv Leviathan and engineering manager Matan Kalman posted a picture of themselves eating a meal “booked through a call from Duplex.” Thanks to the wonder of crowdsourcing and a number of intrepid sleuths, we know that this restaurant was Hongs Gourmet in Saratoga, California. We called Hongs Gourmet last night, but the person who answered the phone referred us to her manager, who she told us had left for the day. (We’ll give it another try today.)

Sadly, the rest of Google’s audio samples don’t contain any other clues as to which restaurants were called.

What prompted much of the suspicion here is that nobody who answers the Assistant’s calls in Google’s samples gives their own name or the name of the business. My best guess is that Google cut those parts from the conversations, but it’s hard to tell. Some of the audio samples do, however, sound as if the beginning was edited out.

Google clearly didn’t expect this project to be controversial. The keynote demo was meant to dazzle — and it did so in the moment because, if it really works, this technology represents the culmination of years of work on machine learning. But the company evidently didn’t think through the consequences.

My best guess is that Google didn’t fake these calls. But it surely presented only the best examples from its tests. That’s what you do in a big keynote demo, after all, even though in hindsight, showing the system fail or trying to place a live call would have been even better (remember Steve Jobs’ Starbucks call?).

For now, we’ll see if we can get more answers, but so far all of our calls and emails have gone unanswered. Google could easily do away with all of those questions around Duplex by simply answering them, but so far, that’s not happening.

News Source = techcrunch.com


Prisma co-founders raise $1M to build a social app called Capture

Two of the co-founders of the art filter app Prisma have left to build a new social app.

Prisma, as you may recall, had a viral moment back in 2016 when selfie takers went crazy for the fine art spin the app’s AI put on photos — in just a few seconds of processing.

Downloads leapt, art selfies flooded Instagram, and similar arty effects soon found their way into all sorts of rival apps and platforms. Then, after dipping a toe into social waters with the launch of a feed of its own, the company shifted focus to B2B developer tools — and we understand it’s since become profitable.

But two of Prisma’s co-founders, Aleksey Moiseyenkov and Aram Hardy, got itchy feet when they had an idea for another app business. And they’ve both now left to set up a new startup, called Capture Technologies.

The plan is to launch the app — which will be called Capture — in Q4, with a beta planned for September or October, according to Hardy (who’s taking the CMO role).

They’ve also raised a $1M seed for Capture, led by US VC firm General Catalyst. Also investing are KPCB, Social Capital, Dream Machine VC (the seed fund of former TechCrunch co-editor Alexia Bonatsos), Paul Heydon, and Russian internet giant Mail.Ru Group.

Josh Elman from Greylock Partners is also involved as an advisor.

Hardy says they had the luxury of being able to choose their seed investors, after getting a warmer reception for Capture than they’d perhaps expected — they had thought it might be tough to raise funding for a new social app, given how crowded the space is and how thoroughly it’s been monopolized by a handful of major platforms (hi Facebook, hey Snap!).

But they also believe they’ve identified overlooked territory — where they can offer something fresh to help people interact with others in real-time.

They’re not disclosing further details about the idea or how the Capture app will work at this stage, as they’re busy building and Hardy says certain elements could change and evolve before launch day.

What they will say is that the app will involve AI, and will put the emphasis for social interactions squarely on the smartphone camera.

Speed will also be a vital ingredient, as it was with Prisma, where it fueled the app’s virality. “We see a huge move to everything which is happening right now, which is really real-time,” Hardy tells TechCrunch. “Even when we started Prisma there were lots of similar products which were just processing one photo for five, ten, 15 minutes, and people were not using it because it takes time.

“People want everything right now. Right here. So this is a trend which is taking place right now. People just want everything right now, right here. So we’re trying to give it to them.”

“Our team’s mission is to bring an absolutely new and unique experience to how people interact with each other. We would like to come up with something unique and really fresh,” adds Moiseyenkov, Capture’s CEO (pictured above left, with Hardy).

“We see a huge potential in new social apps despite the fact that there are too many huge players.”

Having heard the full Capture pitch from Hardy I can say it certainly seems like an intriguing idea. Though how exactly they go about selectively introducing the concept will be key to building the momentum needed to power their big vision for the app. But really that’s true of any social product.

Their idea has also hooked a strong line up of seed investors, doubtless helped by the pair’s prior success with Prisma. (If there’s one thing investors love more than a timely, interesting idea, it’s a team with pedigree — and these two certainly have that.)

“I’m happy to have such an amazing and experienced team,” adds Moiseyenkov, repaying the compliment to Capture’s investors.

“Your first investors are your team. You have to ask lots of questions like you do when you decide whether this or that person is a perfect fit for your team. Because investors and the team are those people with whom you’re going to build a great product. At the same time, investors ask lots of questions to you.”

Capture’s investors were evidently pleased enough with the answers their questions elicited to cut Capture its founding checks. And the startup’s team is already ten-strong — and hard at work to get a beta launched in fall.

The business is based in the US and Europe, with one office in Moscow, where Hardy says they’ve managed to poach some relevant tech talent from Russian social media giant vk.com, and another slated to open in a couple of weeks’ time on Snap’s home turf of LA.

“We’ll be their neighbors in Venice Beach,” he confirms, though he stresses there will still be clear blue water between the two companies’ respective social apps, adding: “Snapchat is really a different product.”

News Source = techcrunch.com


Apple introduces the A.I. phone

At Apple’s WWDC 2018 – an event some said would be boring this year with its software-only focus and lack of new MacBooks and iPads – the company announced what may be its most important operating system update to date, with the introduction of iOS 12. Through a series of Siri enhancements and features, Apple is turning its iPhone into a highly personalized device, powered by its Siri A.I.

This “new A.I. iPhone” – which, to be clear, is your same ol’ iPhone running a new mobile OS – will understand where you are, what you’re doing, and what you need to know right then and there.

The question now is whether users will embrace the usefulness of Siri’s forthcoming smarts, or find its sudden insights creepy and invasive.

Siri Suggestions

 

Once you’ve installed iOS 12, Siri Suggestions will be everywhere.

On the iPhone’s Search screen, in the same place where you today see Siri’s suggested apps to launch, you’ll begin to see other things Siri thinks you may need to know, too.

For example, Siri may suggest that you:

  • Call your grandma for her birthday.
  • Tell someone you’re running late to the meeting via a text.
  • Start your workout playlist because you’re at the gym.
  • Turn your phone on Do Not Disturb at the movies.

And so on.

These will be useful in some cases, and perhaps annoying in others. (It would be great if you could swipe on the suggestions to further train the system to not show certain ones again. After all, not all your contacts deserve a birthday phone call.)

Siri Suggestions will also appear on the Lock Screen when Siri thinks it can help you perform an action of some kind – placing your morning coffee order, for example (something you regularly do around a particular time of day), or launching your preferred workout app because you’ve arrived at the gym.

These suggestions even show up on Apple Watch’s Siri watch face screen.

Apple says the relevance of its suggestions will improve over time, based on how you engage.

If you don’t take action on these items by tapping them, they’ll move down the watch face’s list of suggestions, for instance.

A.I.-powered workflows

These improvements to Siri would have been enough for iOS 12, but Apple went even further.

The company also showed off a new app called Siri Shortcuts.

The app is based on technology Apple acquired from Workflow, a clever – if somewhat advanced – task automation app that allows iOS users to combine actions into routines that can be launched with just a tap. Now, thanks to the Siri Shortcuts app, those routines can be launched by voice.

On stage at the developer event, the app was demoed by Kim Beverett from the Siri Shortcuts team, who showed off a “heading home” shortcut she had built.

When she told Siri she was “heading home,” her iPhone simultaneously launched directions for her commute in Apple Maps, set her home thermostat to 70 degrees, turned on her fan, messaged an ETA to her roommate, and launched her favorite NPR station.

That’s arguably very cool – and it got a big cheer from the technically minded developer crowd – but it’s most certainly a power user feature. Launching an app to build custom workflows is not something everyday iPhone users will do right off the bat – or in some cases, ever.

Developers to push users to Siri

But even if users hide away this new app in their Apple “junk” folder, or toggle off all the Siri Suggestions in Settings, they won’t be able to entirely escape Siri’s presence in iOS 12 and going forward.

That’s because Apple also launched new developer tools that will allow app creators to build integrations with Siri directly into their own apps.

Developers will update their apps’ code so that every time a user takes a particular action – for example, placing their coffee order, streaming a favorite podcast, starting their evening jog with a running app, or anything else – the app will let Siri know. Over time, Siri will learn users’ routines – like that, on many weekday mornings, around 8 to 8:30 AM, the user places a particular coffee order through a coffee shop app’s order-ahead system.
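
Apple didn’t walk through code on stage, but based on the SiriKit additions shipping with iOS 12, the donation side of this could look something like the minimal Swift sketch below. The activity type string and the coffee-order scenario are hypothetical stand-ins.

```swift
import UIKit
import Intents

// A minimal sketch of an iOS 12 Siri Shortcuts "donation" via NSUserActivity.
// The activity type and coffee-order details are hypothetical examples.
func donateCoffeeOrderActivity(from viewController: UIViewController) {
    let activity = NSUserActivity(activityType: "com.example.coffee.order-usual")
    activity.title = "Order my usual coffee"
    activity.suggestedInvocationPhrase = "Coffee time" // what Siri suggests you record
    activity.isEligibleForSearch = true
    activity.isEligibleForPrediction = true // lets Siri surface this as a suggestion

    // Marking the activity current donates it; repeated donations around the
    // same time of day are what teach Siri the user's routine.
    viewController.userActivity = activity
    activity.becomeCurrent()
}
```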

These will inform those Siri Suggestions that appear all over your iPhone, but developers will also be able to just directly prod the user to add this routine to Siri right in their own apps.

 

In your favorite apps, you’ll start seeing an “Add to Siri” link or button in various places – typically right after you perform a particular action, such as looking for your keys in Tile’s app, viewing travel plans in Kayak, ordering groceries with Instacart, and so on.

Many people will probably tap this button out of curiosity – after all, most don’t watch and rewatch the WWDC keynote like the tech crowd does.

The “Add to Siri” screen will then pop up, offering a suggested voice prompt that can be used as your personalized phrase for talking to Siri about this task.

In the coffee ordering example, you might be prompted to try the phrase “coffee time.” In the Kayak example, it could be “travel plans.”

You record this phrase with the big, red record button at the bottom of the screen. When finished, you have a custom Siri shortcut.

You don’t have to use the suggested phrase the developer has written. The screen explains you can make up your own phrase instead.
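
That whole flow is exposed to developers through iOS 12’s IntentsUI framework: the app supplies a shortcut, and the system provides the record screen. A minimal sketch, reusing the hypothetical coffee-order activity from the earlier example:

```swift
import UIKit
import IntentsUI

// A minimal sketch of presenting the system "Add to Siri" screen, assuming
// iOS 12's IntentsUI framework. The coffee-order activity is the same
// hypothetical example used in the donation sketch above.
final class AddToSiriPresenter: NSObject, INUIAddVoiceShortcutViewControllerDelegate {
    func presentAddToSiri(from host: UIViewController) {
        let activity = NSUserActivity(activityType: "com.example.coffee.order-usual")
        activity.title = "Order my usual coffee"
        activity.suggestedInvocationPhrase = "Coffee time" // the suggested phrase

        // The system sheet with the big red record button at the bottom.
        let addVC = INUIAddVoiceShortcutViewController(shortcut: INShortcut(userActivity: activity))
        addVC.delegate = self
        host.present(addVC, animated: true)
    }

    // Dismiss the sheet whether the user recorded a phrase or cancelled.
    func addVoiceShortcutViewController(_ controller: INUIAddVoiceShortcutViewController,
                                        didFinishWith voiceShortcut: INVoiceShortcut?,
                                        error: Error?) {
        controller.dismiss(animated: true)
    }

    func addVoiceShortcutViewControllerDidCancel(_ controller: INUIAddVoiceShortcutViewController) {
        controller.dismiss(animated: true)
    }
}
```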

In addition to letting you “use” apps via Siri voice commands, Siri can also talk back after the initial request.

It can confirm your request has been acted upon – for example, Siri may respond, “OK. Ordering. Your coffee will be ready in 5 minutes,” after you say “Coffee time” or whatever your trigger phrase is.

Or it can tell you if something didn’t work – maybe the restaurant is out of a food item on the order you placed – and help you figure out what to do next (like continuing your order in the iOS app).

It can even introduce some personality, as it responds. In the demo, Tile’s app jokes back that it hopes your missing keys aren’t “under a couch cushion.”
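
Those confirmations and quips come from the app’s side of the conversation: with iOS 12’s custom intents, an Intents extension hands Siri a response code that selects what gets spoken. A sketch of that handler follows, with the caveat that OrderCoffeeIntent and its companion types are hypothetical stand-ins for the classes Xcode generates from a developer’s intent definition file, not a shipping Apple API:

```swift
import Intents

// Sketch of an Intents-extension handler choosing what Siri says back.
// OrderCoffeeIntent, OrderCoffeeIntentHandling, OrderCoffeeIntentResponse and
// OrderCoffeeIntentResponseCode are hypothetical stand-ins for the classes
// Xcode would generate from an app's .intentdefinition file.
final class OrderCoffeeIntentHandler: NSObject, OrderCoffeeIntentHandling {
    func handle(intent: OrderCoffeeIntent,
                completion: @escaping (OrderCoffeeIntentResponse) -> Void) {
        let accepted = submitOrder(for: intent) // placeholder for real ordering logic

        // The response code picks which templated phrase Siri speaks, e.g.
        // "OK. Ordering. Your coffee will be ready in 5 minutes." on success.
        let code: OrderCoffeeIntentResponseCode = accepted ? .success : .failure
        completion(OrderCoffeeIntentResponse(code: code, userActivity: nil))
    }

    private func submitOrder(for intent: OrderCoffeeIntent) -> Bool {
        // A real app would talk to its ordering backend here.
        return true
    }
}
```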

There are a number of things you could do beyond these limited examples – the App Store has over 2 million apps whose developers can hook into Siri.

And you don’t have to ask Siri only on your phone – you can talk to Siri on your Apple Watch and HomePod, too.

Yes, this will all rely on developer adoption, but it seems Apple has figured out how to give developers a nudge.

Siri Suggestions are the new Notifications

You see, as Siri’s smart suggestions spin up, traditional notifications will wind down.

In iOS 12, Siri will take note of your behavior around notifications, and then push you to turn off those you don’t engage with, or move them into a new silent mode Apple calls “Delivered Quietly.” This middle ground for notifications will allow apps to send their updates to the Notification Center, but not the Lock Screen. They also can’t buzz your phone or wrist.
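
Developers get a hook into this quiet mode, too: iOS 12 adds a “provisional” authorization option that starts an app’s notifications off delivered quietly, with no permission prompt, until the user promotes or silences them. A minimal sketch of the opt-in, assuming the iOS 12 UserNotifications API:

```swift
import UserNotifications

// A minimal sketch of iOS 12's provisional notification authorization:
// notifications go straight to Notification Center (no Lock Screen banner,
// sound or badge) until the user explicitly keeps or silences them.
func requestQuietNotificationAuthorization() {
    UNUserNotificationCenter.current().requestAuthorization(
        options: [.alert, .sound, .provisional]
    ) { granted, error in
        // With .provisional, no system prompt appears up front; delivery
        // simply begins in the quiet, "Delivered Quietly" style.
        print("Authorized: \(granted), error: \(String(describing: error))")
    }
}
```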

At the same time, iOS 12’s new set of digital well-being features will hide notifications from users at particular times – when you’ve enabled Do Not Disturb at Bedtime, for example. This mode will not allow notifications to display when you check your phone at night or first thing upon waking.

Combined, these changes will encourage more developers to adopt the Siri integrations, because they’ll be losing a touchpoint with their users as their ability to grab attention through notifications fades.

Machine Learning in Photos

In iOS 12, A.I. will further infiltrate other parts of the iPhone, too.

A new “For You” tab in the Photos app will prompt users to share photos taken with other people, thanks to facial recognition and machine learning. And those people, upon receiving your photos, will then be prompted to share their own back with you.

The tab will also pull out your best photos and feature them, and prompt you to try different lighting and photo effects. A smart search feature will make suggestions and allow you to pull up photos from specific places or events.

Smart or Creepy?

Overall, iOS 12’s A.I.-powered features will make Apple’s devices more personalized to you, but they could also rub some people the wrong way.

Maybe people won’t want their habits noticed by their iPhone, and will find Siri prompts annoying – or, at worst, creepy, because they don’t understand how Siri knows these things about them.

Apple is banking hard on the fact that it’s earned users’ trust through its stance on data privacy over the years.

And while not everyone knows that Siri does a lot of its processing on your device, not in the cloud, many do seem to understand that Apple doesn’t sell user data to advertisers to make money.

That could help sell this new “A.I. phone” concept to consumers, and pave the way for more advancements later on.

But on the flip side, if Siri Suggestions become overbearing or get things wrong too often, users could just switch them off entirely through iOS Settings. And with that would go Apple’s big chance to dominate the A.I.-powered device market, too.

News Source = techcrunch.com


Google Clips gets better at capturing candids of hugs and kisses (which is not creepy, right?)

Google Clips’ AI-powered “smart camera” just got even smarter, Google announced today, revealing improved functionality around Clips’ ability to automatically capture specific moments – like hugs and kisses. Or jumps and dance moves. You know, in case you want to document all your special, private moments in a totally non-creepy way.

I kid, I kid!

Well, not entirely. Let me explain.

Look, Google Clips comes across to me as more of a proof-of-concept device showcasing the power of artificial intelligence as applied to the world of photography than a breakthrough consumer device.

I’m the target market for this camera – a parent and a pet owner (and look how cute she is) – but I don’t at all have a desire for a smart camera designed to capture those tough-to-photograph moments, even though neither my kid nor my pet will sit still for pictures.

I’ve tried to articulate this feeling, and I find it’s hard to say why I don’t want this thing, exactly. It’s not because the photos are automatically uploaded to the cloud or made public – they are not. They are saved to the camera’s 16 GB of onboard storage and can be reviewed later with your phone, where you can then choose to keep them, share them, or delete them. And it’s not even entirely because of the price point – though, arguably, even with the recent $50 discount it’s quite the expensive toy at $199.

Maybe it’s just the camera’s premise.

That in order for us to fully enjoy a moment, we have to capture it. And because some moments are so difficult to capture, we spend too much time with phone-in-hand, instead of actually living our lives – like playing with our kids or throwing the ball for the dog, for example. And that the only solution to this problem is more technology. Not just putting the damn phone down.

What also irks me is the broader idea behind Clips that all our precious moments have to be photographed or saved as videos. They do not. Some are meant to be ephemeral. Some are meant to be memories. In aggregate, our hearts and minds tally up all these little life moments – a hug, a kiss, a smile – and then turn them into feelings. Bonds. Love. It’s okay to miss capturing every single one.

I’m telling you, it’s okay.

At the end of the day, there are only a few times I would have even considered using this product – when my baby was taking her first steps and I was worried it would happen while my phone was away, or during some big event, like a birthday party, where I wanted candids but had too much going on to take photos. But even in these moments, I’d rather prop my phone up and turn on a “Google Clips” camera mode than shell out hundreds for a dedicated device.

Just saying.

You may feel differently. That’s cool. To each their own.

Anyway, what I think is most interesting about Clips is the actual technology. That it can view things captured through a camera lens and determine the interesting bits – and that it’s already getting better at this, only months after its release. That we’re teaching A.I. to understand what’s actually interesting to us humans, with our subjective opinions. That sort of technology has all kinds of practical applications beyond a physical camera that takes spy shots of Fido.

The improved functionality is rolling out to Clips with the May update, and will soon be followed by support for family pairing, which will let multiple family members connect the camera to their devices to view content.


Note that it’s currently on sale for $199. Yeah, already. Hmmm. 

News Source = techcrunch.com
