TiVo adds Alexa voice control to its DVRs

TiVo’s DVRs are getting Alexa support. The company is announcing that its lineup of DVRs – including Series 4 (Premiere), Series 5 (Roamio), and Series 6 (Bolt) boxes like the TiVo BOLT VOX introduced last fall – will gain support for Amazon’s virtual assistant, Alexa. The assistant will be able to do things like change the channel, skip commercials, jump back or forward, launch apps like Netflix, and more, says TiVo.

The company is not the only third-party DVR maker to have added support for Alexa.

Thanks to developer tools like the Video Skill API, other cable and satellite TV companies, streaming services, and content providers can now add voice control to their devices and apps, as well. For example, Dish last fall became the first U.S. pay TV provider to integrate with Alexa for hands-free TV. Others working with Amazon include DirecTV and TechCrunch’s parent (by way of Oath), Verizon.

Amazon’s Video Skill API was updated in March to include support for DVR recording, allowing users to set and manage their DVR recordings by voice – something TiVo will presumably add at a later date, as it’s not live yet. At the time of that announcement, Amazon also said TiVo was one of the companies developing experiences using the Video Skill API.
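
For a sense of how this works under the hood: Alexa translates an utterance like “Alexa, watch CBS” into a directive and sends it to the provider’s AWS Lambda function, which relays it to the set-top box. Below is a minimal sketch of such a handler. The directive namespaces and names (Alexa.ChannelController’s ChangeChannel, Alexa.PlaybackController’s Play/Pause/FastForward/Rewind, Alexa.VideoRecorder’s SearchAndRecord) come from the Video Skill API itself, but the DvrClient class and the simplified response shape are hypothetical stand-ins, not TiVo’s actual code.

```python
# Minimal sketch of an AWS Lambda handler for an Alexa Video Skill.
# DvrClient is a hypothetical stand-in for a set-top-box control API.

class DvrClient:
    """Hypothetical wrapper around the DVR's own control interface."""
    def tune(self, channel):    print(f"Tuning to {channel}")
    def playback(self, action): print(f"Playback action: {action}")
    def record(self, entities): print(f"Scheduling recording: {entities}")

dvr = DvrClient()

def lambda_handler(event, context):
    directive = event["directive"]
    namespace = directive["header"]["namespace"]
    name = directive["header"]["name"]

    if namespace == "Alexa.ChannelController" and name == "ChangeChannel":
        # "Alexa, watch CBS" arrives as a ChangeChannel directive.
        dvr.tune(directive["payload"]["channelMetadata"]["name"])
    elif namespace == "Alexa.PlaybackController":
        # "Alexa, pause" / fast-forward / rewind map onto directive names.
        dvr.playback(name)
    elif namespace == "Alexa.VideoRecorder" and name == "SearchAndRecord":
        # The March update added voice-managed DVR recordings.
        dvr.record(directive["payload"]["entities"])

    # Real responses echo correlation tokens and context; simplified here.
    return {"event": {"header": directive["header"], "payload": {}}}
```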

In addition, TiVo itself had announced plans to add smart home integration, including voice control through Alexa and Google Assistant, back at CES in January. 

According to TiVo, Alexa will let its customers do many of the things they can do today with the TiVo remote. For example, you can ask Alexa to change the channel by saying things like “Alexa, watch CBS” or “Alexa, go to FOX.” You can also launch apps on TiVo by saying things like “Alexa, open Netflix.”

But where TiVo’s implementation differs from other DVR makers’ is in how it uses Alexa to control its devices’ unique features, like commercial skipping – which TiVo calls SkipMode.

This can be done by telling Alexa to “skip commercials,” says TiVo, and it joins other playback commands like jumping back 8 seconds (“Alexa, go back”), pausing and playing, fast-forwarding, and rewinding.

“With far-field voice control, life becomes more untethered for our customers,” said Andrew Heymann, TiVo’s Senior Director of Product Management, in an announcement about the new functionality. “They can continue to enjoy watching their favorite programming with TiVo’s cool features even when they’re preparing dinner and their hands are too dirty to use the remote, or when they’re exercising, and they don’t have access to their remote. Life suddenly gets a lot easier,” he adds.

A placeholder screen for the new Alexa functionality popped up this weekend on supported devices, reports Dave Zatz, who regularly covers TiVo. The screen says Alexa is “coming soon” and will roll out to TiVo retail devices with software version 20.7.4 or later. The rollout is expected to complete by June 1st, it also notes.

The addition of Alexa to TiVo’s boxes is notable, too, because TiVo itself had developed voice-control functionality of its own. Its newer BOLT VOX and Mini VOX were the company’s first DVRs to include a voice remote control, which offers similar functionality to Alexa.

However, TiVo sold the remote separately, which limited the reach of its voice control offering for consumers. With Alexa, the company is able to go after the growing market of those who already own an Amazon Echo device.

News Source = techcrunch.com

Microsoft shows off Alexa-Cortana integration, launches sign-up website for news

Microsoft still isn’t giving a timeline for when its virtual assistant, Cortana, will support integration with Amazon’s Alexa – something the companies announced last year. But at its Build developer conference today, the company did show off how that integration will work, in an on-stage demo with support from Amazon, and it launched a new website for developers interested in receiving Alexa-Cortana integration news and information going forward.

When Microsoft and Amazon first discussed integrating their virtual assistants, it was described as a two-way street – that is, Cortana could pass requests to Alexa, and vice versa. For example, Alexa customers would be able to access Cortana’s productivity features, like booking meetings, accessing work calendars, or reading work emails. Meanwhile, Cortana users could ask Alexa to control smart home devices, shop Amazon, or tap into Alexa’s roughly 40,000 skills.

But there were concerns that those commands would be awkward, and that integrations like this might be unnecessary.

At Build, Microsoft CEO Satya Nadella stressed the value of a more open system, saying, “We want to make it possible for our customers to get the most out of their personal digital assistants – not be bound to some walled garden.”

Perhaps, though, what Microsoft really wants is to benefit from Alexa’s momentum.

In a brief demo, Microsoft Cortana GM Megan Saunders along with Amazon Alexa SVP Tom Taylor showed how Alexa and Cortana would work together. It didn’t look quite as unwieldy as you may have imagined.

Saunders directed her Echo speaker to “open Cortana,” and the assistant responded in a different voice: “Cortana here, how can I help?”

The experience seemed more like launching and using a third-party skill, rather than a series of tricky verbal commands.

She was then able to ask Cortana for information on her calendar without having to say “Cortana” or “Alexa” again – just “how’s my day?”

And she told Cortana to “send an email to Tom Taylor saying ‘I’ll see you tonight’” – again, without having to command the assistant by name.

After the Alexa-to-Cortana demo, Taylor showed off the reverse situation – calling up Alexa from Cortana.

While using Cortana on his PC, he said to Microsoft’s assistant, “Hey Cortana, open Alexa.” Alexa responded in her own voice: “Hi there, this is Alexa. How can I help?”

Taylor used Alexa to order an Uber using the third-party Uber skill and told her to turn off the lights.

He also asked Alexa what she thought of Cortana, to which Amazon’s assistant replied, with her typical cheesy humor, “I like Cortana. We both have experience with rings, although hers is more of a Halo.” Oh, hardy-har-har. 

Of course, what people really wanted to hear was when the Cortana-Alexa integration would go live, and unfortunately there was no news on that front.

Saunders referred to the experience as still being in a “limited beta” for the time being, but did note the launch of a new website for developers.

Developers who are building skills for Cortana and Alexa can sign up on the new site to be notified when the integrations go live.

“For all of you developers out there building skills, Cortana and Alexa is going to enable access to more people across more devices,” said Saunders. “And we can’t wait to see what you build.”

News Source = techcrunch.com

Suki raises $20M to create a voice assistant for doctors

When trying to figure out what to do after an extensive career at Google, Motorola, and Flipkart, Punit Soni decided to spend a lot of time sitting in doctors’ offices.

It was there that Soni figured out one of the most annoying pain points for doctors in any office: writing notes and documentation. That’s why he decided to start Suki — previously Robin AI — to give doctors a way to take notes by simply talking aloud while working with patients, rather than having to type everything into a medical record system or write notes down by hand. That seemed like the lowest-hanging fruit: an opportunity to make life significantly easier for doctors who see dozens of patients, he said.

“We decided we had found a powerful constituency who were burning out because of just documentation,” Soni said. “They have underlying EMR systems that are much older in design. The solution aligns with the commoditization of voice and machine learning. If you put it all together, if we can build a system for doctors and allow doctors to use it in a relatively easy way, they’ll use it to document all the interactions they do with patients. If you have access to all data right from the horse’s mouth, you can use that to solve all the other problems on the health stack.”

The company said it has raised a $15 million funding round led by Venrock, with participation from First Round, Social+Capital, Nat Turner of Flatiron Health, Marc Benioff, and other individual Googlers and angels. Venrock also previously led a $5 million seed round, bringing the company’s total funding to around $20 million. It’s also changing its name from Robin AI to Suki, though the reason is actually a pretty simple one: “Suki” is a better wake word for a voice assistant than “Robin,” because odds are there’s someone named Robin in the office.

The challenge for a company like Suki is not actually the voice recognition part. Indeed, that’s why Soni said they are actually starting a company like this today: voice recognition is commoditized. Trying to start a company like Suki four years ago would have meant having to build that kind of technology from scratch, but thanks to incredible advances in machine learning over just the past few years, startups can quickly move on to the core business problems they hope to solve rather than focusing on early technical challenges.

Instead, Suki’s problem is one of understanding language. It has to ingest everything that a doctor is saying, parse it, and figure out what goes where in a patient’s documentation. That problem is even more complex because each doctor has a different way of documenting their work with a patient, meaning it has to take extra care in building a system that can scale to any number of doctors. As with any company, the more data it collects over time, the better those results get — and the more defensible the business becomes, because it can be the best product.

“Whether you bring up the iOS app or want to bring it in a website, doctors have it in the exam room,” Soni said. “You can say, ‘Suki, make sure you document this, prescribe this drug, and make sure this person comes back to me for a follow-up visit.’ It takes all that, it captures it into a clinically comprehensive note and then pushes it to the underlying electronic medical record. [Those EMRs] are the system of record, it is not our job to day-one replace these guys. Our job is to make sure doctors and the burnout they are having is relieved.”
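
Suki hasn’t published how its pipeline works, but the shape of the problem – transcribe dictation, classify each phrase, route it into the right section of a clinical note, then push the note to the EMR – can be illustrated with a deliberately crude sketch. Everything below (the cue words, the section names) is hypothetical; a production system would rely on trained language models, not keyword matching.

```python
# Purely illustrative: a toy router that sorts dictated phrases into
# sections of a clinical note. Cue words and section names are made up.
SECTION_CUES = {
    "prescribe": "plan",
    "follow-up": "plan",
    "reports": "subjective",
    "exam shows": "objective",
}

def route_phrase(phrase):
    """Assign a dictated phrase to a note section via keyword cues."""
    for cue, section in SECTION_CUES.items():
        if cue in phrase.lower():
            return section
    return "unclassified"  # fallback bucket for human review

note = {}
for phrase in ["Prescribe this drug",
               "Make sure this person comes back for a follow-up visit"]:
    note.setdefault(route_phrase(phrase), []).append(phrase)

print(note)  # {'plan': ['Prescribe this drug', 'Make sure this person ...']}
```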

Given that voice recognition is commoditized, there will likely be others looking to build a scribe for doctors as well. There are startups like Saykara looking to do something similar, and in these situations it often seems like the companies that are able to capture the most data first are able to become the market leaders. And there’s also a chance that a larger company — like Amazon, which has made its interest in healthcare already known — may step in with its comprehensive understanding of language and find its way into the doctors’ office. Over time, Soni hopes that as it gets more and more data, Suki can become more intelligent and more than just a simple transcription service.

“You can see this arc where you’re going from an Alexa, to a smarter form of a digital assistant, to a device that’s a little bit like a chief resident of a doctor,” Soni said. “You’ll be able to say things like, ‘Suki, pay attention,’ and all it needs to do is listen to your conversation with the patient. I’m not building a medical transcription company. I’m basically trying to build a digital assistant for doctors.”

News Source = techcrunch.com

Alexa will soon gain a memory, converse more naturally, and automatically launch skills

Alexa will soon be able to recall information you’ve directed her to remember, as well as have more natural conversations that don’t require every command to begin with “Alexa.” She’ll also be able to launch skills in response to questions you ask, without explicit instructions to do so. The features are the first of what Amazon says are many launches this year that will make its virtual assistant more personalized, smarter, and more engaging.

The news was announced this morning in a keynote presentation from the head of the Alexa Brain group, Ruhi Sarikaya, speaking at the World Wide Web Conference in Lyon, France.

He explained that the Alexa Brain initiative is focused on improving Alexa’s ability to track context and memory within and across dialog sessions, as well as making it easier for users to discover and interact with Alexa’s more than 40,000 third-party skills.

With the memory update, arriving soon to U.S. users, Alexa will be able to remember any information you ask her to, and retrieve it later.

For example, you might direct Alexa to remember an important day by saying something like, “Alexa, remember that Sean’s birthday is June 20th.” Alexa will then reply, “Okay, I’ll remember that Sean’s birthday is June 20th.” This effectively turns Alexa into a way to offload information you’d otherwise have to store in your own brain, and is reminiscent of earlier bots, like Wonder, which was designed to remember anything you told it for later retrieval over SMS or messaging platforms.
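
Amazon hasn’t said how the feature is implemented, but the observable behavior amounts to a per-user fact store keyed by subject. Here’s a toy sketch of that pattern; the parsing heuristics are entirely hypothetical:

```python
# Hypothetical sketch of the remember/recall behavior described above;
# this is not Amazon's implementation, just the observable pattern.
import re

memory = {}  # (user_id, subject) -> fact; a real system would persist this

def handle(user_id, text):
    text = text.strip().rstrip("?.")
    saved = re.match(r"remember that (.+)", text, re.IGNORECASE)
    if saved:
        fact = saved.group(1)                    # "Sean's birthday is June 20th"
        subject = fact.split(" is ")[0].lower()  # crude subject extraction
        memory[(user_id, subject)] = fact
        return f"Okay, I'll remember that {fact}."
    recall = re.match(r"(?:when|what) is (.+)", text, re.IGNORECASE)
    if recall:
        fact = memory.get((user_id, recall.group(1).lower()))
        return fact or "I don't know that yet."

print(handle("u1", "Remember that Sean's birthday is June 20th"))
print(handle("u1", "When is Sean's birthday?"))  # -> the stored fact
```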

Memory, of course, has also been one of Google Assistant’s more useful features – so it was time for Alexa to catch up on this front.

In addition, Alexa will soon be able to have more natural conversations with users, thanks to something called “context carryover.” This means that Alexa will be able to understand follow-up questions and respond appropriately, even though you haven’t addressed her as “Alexa.”

For instance, you could ask “Alexa, how is the weather in Seattle?” and then ask, “What about this weekend?” after Alexa responds.

You can even change the subject, saying “Alexa, how’s the weather in Portland?,” then “How long does it take to get there?”

The feature, says Sarikaya, takes advantage of deep learning models applied to the spoken language understanding pipeline in order to have conversations that carry customers’ intents and entities within and across domains – as it did between weather and traffic in the example above.
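
Behaviorally, context carryover boils down to persisting resolved entities in a dialog state so later turns – even in a different domain – can reuse them. The toy sketch below shows only that behavior; the real system uses learned models rather than the string matching assumed here:

```python
# Toy illustration of carrying an entity across turns and across domains.

context = {}  # dialog state persisted across turns, e.g. {"city": "Portland"}

def interpret(utterance):
    utterance = utterance.lower().rstrip("?")
    if "weather" in utterance:
        if " in " in utterance:                  # explicit city mention
            context["city"] = utterance.rsplit(" in ", 1)[1].title()
        return ("GetWeather", context.get("city"))
    if "how long" in utterance and "there" in utterance:
        # A follow-up in a *different* domain reuses the carried city slot.
        return ("GetTravelTime", context.get("city"))

print(interpret("How's the weather in Portland?"))       # ('GetWeather', 'Portland')
print(interpret("How long does it take to get there?"))  # ('GetTravelTime', 'Portland')
```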

Natural conversations are also coming “soon” to Alexa device owners in the U.S., U.K. and Germany.

A third advance arriving in the near future focuses on Alexa’s skills. These are the third-party voice apps that aim to help you do more with Alexa – like checking your credit card account information, playing news radio, ordering an Uber, playing a game, and more. There are so many out there, it’s becoming harder to surface them just by digging around in the Alexa Skills Store.

In the weeks ahead, U.S. users will be able to launch skills using natural phrases, instead of explicit commands like “Alexa, open [skill name]” or “…enable [skill name].”

Amazon has been working to make Alexa’s skills easier to use for years. In 2016, Echo was updated to allow users to enable new Alexa skills by voice, and last year, Alexa began suggesting skills in response to certain questions in limited scenarios. With the new feature, now in beta testing, Alexa will instead locate and launch skills for you.

Sarikaya gives an example of this from the current beta test, noting that he asked Alexa “how do I remove an oil stain from my shirt?”

Alexa responded by saying “Here is Tide Stain Remover,” which is the name of Procter & Gamble’s skill that walks you through stain removal for over 200 specific stain types – including oil.

Before, it was hard to imagine why anyone would seek out and enable a Tide skill on their own, but having it in Alexa’s repertoire now begins to make more sense.
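
The developer-side mechanism Amazon has exposed for this kind of name-free invocation is the CanFulfillIntentRequest interface, which lets Alexa probe a skill about whether it could handle a given utterance before routing to it. A hedged sketch of a handler is below; whether the Tide skill works this way isn’t stated, and the intent and slot names are hypothetical.

```python
# Sketch of a skill handler answering a CanFulfillIntentRequest, the
# Alexa Skills Kit interface behind name-free skill invocation.
# "RemoveStainIntent" and "StainType" are hypothetical names.

def lambda_handler(event, context):
    request = event["request"]
    if request["type"] == "CanFulfillIntentRequest":
        intent = request["intent"]
        # Claim the request only for questions this skill can answer.
        if intent["name"] == "RemoveStainIntent":
            return {
                "version": "1.0",
                "response": {
                    "canFulfillIntent": {
                        "canFulfill": "YES",
                        "slots": {
                            "StainType": {
                                "canUnderstand": "YES",
                                "canFulfill": "YES",
                            }
                        },
                    }
                },
            }
        return {"version": "1.0",
                "response": {"canFulfillIntent": {"canFulfill": "NO"}}}
    # ... normal IntentRequest handling would follow here ...
```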

This could also potentially present Amazon with an advertising model, similar to Google’s keyword bidding system. If someone asks for information that could be answered by a skill touting a particular product or brand, Amazon could eventually have advertisers compete to be the skill recommended first. (Perhaps the others could be called up with a follow-up request, “any other ideas?”)

Amazon isn’t giving an exact launch date for any of these three new features, only that they’re coming soon.

But despite the new launches, Sarikaya notes there’s still a lot of work left ahead.

“We have many challenges still to address, such as how to scale these new experiences across languages and different devices, how to scale skill arbitration across the tens of thousands of Alexa skills, and how to measure experience quality,” he says. “Additionally, there are component-level technology challenges that span automatic speech recognition, spoken language understanding, dialog management, natural language generation, text-to-speech synthesis, and personalization.”

“Skills arbitration, context carryover and the memory feature are early instances of a class of work Amazon scientists and engineers are doing to make engaging with Alexa more friction-free,” Sarikaya continues. “We’re on a multi-year journey to fundamentally change human-computer interaction, and as we like to say at Amazon, it’s still Day 1.”

News Source = techcrunch.com
