Artificial Intelligence

ROSS Intelligence lands $8.7M Series A to speed up legal research with AI

Armed with an understanding of machine learning, ROSS Intelligence is going after LexisNexis and Thomson Reuters for ownership of legal research. The startup, founded in 2015 by Andrew Arruda, Jimoh Ovbiagele and Pargles Dall’Oglio at the University of Toronto, is announcing an $8.7 million Series A today led by iNovia Capital with participation from Comcast Ventures Catalyst Fund, Y Combinator Continuity Fund, Real Ventures, Dentons’ NextLaw Labs and angels.

At its core, ROSS is a platform that helps legal teams sort through case law to find details relevant to new cases. With standard keyword search, this process takes days or even weeks, so ROSS augments keyword search with machine learning to simultaneously speed up the research process and improve the relevance of the results.
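
The general technique, keyword retrieval followed by a relevance re-rank, can be sketched in a few lines. This is a toy illustration of the idea, not ROSS's actual system: the case snippets and scoring are invented, and a real system would use trained embeddings rather than the bag-of-words cosine similarity shown here.

```python
# Toy sketch: keyword search augmented with a similarity re-rank,
# so results are ordered by overall relevance to the query rather
# than by raw term matching alone. Corpus and scoring are invented.
from collections import Counter
from math import sqrt

CASES = {
    "A": "debtor filed for chapter 11 bankruptcy protection",
    "B": "the patent holder alleged willful infringement",
    "C": "creditors objected to the bankruptcy reorganization plan",
}

def keyword_filter(query, corpus):
    """Plain keyword search: keep cases sharing any query term."""
    terms = set(query.lower().split())
    return {k: v for k, v in corpus.items()
            if terms & set(v.lower().split())}

def cosine(a, b):
    """Cosine similarity over simple bag-of-words vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = sqrt(sum(c * c for c in va.values()))
    nb = sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query, corpus):
    hits = keyword_filter(query, corpus)
    # Re-rank the keyword hits by similarity to the full query.
    return sorted(hits, key=lambda k: cosine(query, hits[k]), reverse=True)

print(search("bankruptcy reorganization plan", CASES))  # → ['C', 'A']
```

Case B never mentions the query terms, so the keyword stage drops it; the re-rank then puts C ahead of A because it shares more of the query's vocabulary.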

“Bluehill benchmarks Lexis’s tech and they are finding 30% more relevant info with ROSS in less time,” Andrew Arruda, co-founder and CEO of ROSS, explained to me in an interview.

ROSS is using a combination of off-the-shelf and proprietary deep learning algorithms in its AI stack. The startup uses IBM Watson for at least some of its natural language processing, but the team shied away from elaborating.

Building a complete machine learning stack is expensive, so it makes sense for startups to lean on off-the-shelf tech early on, so long as those decisions don’t compromise the scalability of the business. Much of the value wrapped up in ROSS is related to its corpus of training data. The startup is working with 20 law firms to simulate workflow examples and test results with human feedback.

“We really spent time looking at the value ROSS was delivering back to law firms,” noted Kai Bond, an investor in ROSS through Comcast Ventures. “What took a week now takes two to four hours.”


The company’s initial go-to-market plan was to sell software designed for specific domains of law to large firms like Latham & Watkins and Sidley Austin. Today ROSS offers products in both bankruptcy and intellectual property law. It is looking to expand into other practice areas, like labor and employment, while simultaneously moving down-market to serve smaller firms.

LexisNexis and Thomson Reuters are frequently on the butt end of claims made by machine learning-powered data analytics startups emerging in a potpourri of industries. A strategy favored by many of these businesses is pushing products to interns and college students for free so that they, in turn, push their advanced tools into the arms of future employers.

“The work ROSS is doing with law schools and law students is interesting,” Karam Nijjar, a partner at iNovia Capital and investor in ROSS, asserted. “As these students enter the workforce, you’re taking someone using an iPhone and handing them a Blackberry their first day on the job.”

Prior to today’s Series A, ROSS had secured a $4.3 million seed round also led by iNovia Capital. As ROSS moves to scale it will be navigating a heavy field of mergers and acquisitions and attempts by legacy players to ensure legal tech services remain consolidated.

News Source = techcrunch.com


Artificial Intelligence

Google injects Hire with AI to speed up common tasks

Since Google Hire launched last year, it has been trying to make it easier for hiring managers to manage the data and tasks associated with the hiring process, perhaps needling LinkedIn while they’re at it. Today the company announced some AI-infused enhancements that it says will help save time and energy spent on manual processes.

“By incorporating Google AI, Hire now reduces repetitive, time-consuming tasks, like scheduling interviews into one-click interactions. This means hiring teams can spend less time with logistics and more time connecting with people,” Hire product manager Berit Hoffmann wrote in a blog post announcing the new features.

The first piece involves making it easier and faster to schedule interviews with candidates. This is a multi-step activity: choosing appropriate interviewers, finding a time and date that works for all parties involved, and booking a room in which to conduct the interview. Organizing this kind of logistics tends to eat up a lot of time.

“To streamline this process, Hire now uses AI to automatically suggest interviewers and ideal time slots, reducing interview scheduling to a few clicks,” Hoffmann wrote.
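
A minimal sketch of what such slot suggestion could look like, under the simplest possible model: each participant has a list of busy hours, and a suggestible slot is one where every calendar is free. The names, hours and logic below are invented for illustration; Google has not published Hire's actual scheduling algorithm.

```python
# Hypothetical sketch of interview-slot suggestion: intersect the
# free hours of every participant's calendar. Names and busy hours
# are made up for the example.
def free_slots(busy, day_hours=range(9, 17)):
    """Return hours in the working day not blocked on this calendar."""
    return set(day_hours) - set(busy)

calendars = {
    "interviewer_a": [9, 10, 13],   # busy hours
    "interviewer_b": [9, 11, 14],
    "candidate":     [16],
}

# A slot is suggestible only if every participant is free.
common = set.intersection(*(free_slots(b) for b in calendars.values()))
print(sorted(common))  # → [12, 15]
```

A real scheduler would also rank the surviving slots (say, by interviewer load or time-zone fit) before suggesting one, but the intersection step is the core of reducing scheduling to a few clicks.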

Photo: Google

Another common hiring chore is finding keywords in a resume. Hire’s AI now finds these words for a recruiter automatically by analyzing terms in a job description or search query and highlighting relevant words, including synonyms and acronyms, in a resume, saving the time spent manually searching for them.
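
The highlighting idea can be sketched as query-term expansion plus pattern matching. The synonym table below is invented for illustration; Hire's real system presumably learns these associations rather than hard-coding them.

```python
# Rough sketch of resume highlighting: expand query terms with a
# small synonym/acronym table, then mark matches in the text.
# The SYNONYMS map is invented for this example.
import re

SYNONYMS = {
    "machine learning": ["ml"],
    "natural language processing": ["nlp"],
}

def expand(terms):
    """Add known synonyms/acronyms to the query terms."""
    out = set(terms)
    for t in terms:
        out.update(SYNONYMS.get(t, []))
    return out

def highlight(text, terms):
    """Wrap every expanded term found in the text with ** markers."""
    # Longest terms first, so multi-word phrases match before acronyms.
    for t in sorted(expand(terms), key=len, reverse=True):
        text = re.sub(rf"\b{re.escape(t)}\b",
                      lambda m: f"**{m.group(0)}**",
                      text, flags=re.IGNORECASE)
    return text

print(highlight("Built NLP pipelines and ML models.",
                ["natural language processing", "machine learning"]))
# → Built **NLP** pipelines and **ML** models.
```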

Photo: Google

Finally, another standard part of the hiring process is making phone calls, lots of phone calls. To make this easier, the latest version of Google Hire has a new click-to-call function. Simply click the phone number and it dials automatically and registers the call in a call log for easy recall or auditing.

While Microsoft has LinkedIn and Office 365, Google has G Suite and Google Hire. The strategy behind Hire is to allow hiring personnel to work in the G Suite tools they are immersed in every day and incorporate Hire functionality within those tools.

It’s not unlike CRM tools that integrate with Outlook or Gmail because that’s where salespeople spend a good deal of their time anyway. The idea is to reduce the time spent switching between tools and make the process a more integrated experience.

While none of these features will necessarily wow you individually, they use Google AI to simplify common tasks and reduce some of the tedium associated with everyday hiring work.


Artificial Intelligence

What’s under those clothes? This system tracks body shapes in real time

With augmented reality coming in hot and depth tracking cameras due to arrive on flagship phones, the time is right to improve how computers track the motions of people they see — even if that means virtually stripping them of their clothes. A new computer vision system that does just that may sound a little creepy, but it definitely has its uses.

The basic problem is that if you’re going to capture a human being in motion, say for a movie or for an augmented reality game, there’s a frustrating vagueness to them caused by clothes. Why do you think motion capture actors have to wear those skintight suits? Because their JNCO jeans make it hard for the system to tell exactly where their legs are. Leave them in the trailer.

Same for anyone wearing a dress, a backpack, a jacket — pretty much anything other than the bare minimum will interfere with the computer getting a good idea of how your body is positioned.

The multi-institutional project (PDF), due to be presented at CVPR in Salt Lake City, combines depth data with smart assumptions about how a body is shaped and what it can do. The result is a sort of X-ray vision, revealing the shape and position of a person’s body underneath their clothes, that works in real time even during quick movements like dancing.

The paper builds on two previous methods, DynamicFusion and BodyFusion. The first uses single-camera depth data to estimate a body’s pose, but doesn’t work well with quick movements or occlusion; the second uses a skeleton to estimate pose but similarly loses track during fast motion. The researchers combined the two approaches into “DoubleFusion,” essentially creating a plausible skeleton from the depth data and then sort of shrink-wrapping it with skin at an appropriate distance from the core.
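
As a toy numeric illustration of the intuition (vastly simpler than DoubleFusion itself), fusing a skeleton-prior joint position with a per-frame depth observation can be thought of as a confidence-weighted blend. The positions and weights here are invented; the actual method optimizes a full non-rigid deformation model.

```python
# Toy illustration, far simpler than DoubleFusion: blend a joint
# position predicted by the skeleton prior with a noisy position
# suggested by the depth camera, weighted by confidence.
def fuse(prior, observed, w_prior=0.3, w_obs=0.7):
    """Weighted blend of prior and observed 3D joint positions."""
    return tuple(w_prior * p + w_obs * o for p, o in zip(prior, observed))

skeleton_prior = (0.0, 1.0, 2.0)   # joint position from the skeleton model
depth_obs      = (0.1, 1.2, 2.0)   # joint position from the depth data

print(fuse(skeleton_prior, depth_obs))
```

During fast motion the depth observation is unreliable, so the prior keeps the estimate plausible; when the depth data is clean, it dominates. That trade-off is the rough intuition behind combining the two earlier methods.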

As you can see above, depth data from the camera is combined with some basic reference imagery of the person to produce a skeleton and track the joints and extremities of the body. On the right, you see the results of DynamicFusion alone (b), BodyFusion alone (c) and the combined method (d).

The results are much better than either method alone, seemingly producing excellent body models from a variety of poses and outfits:

Hoodies, headphones, baggy clothes, nothing gets in the way of the all-seeing eye of DoubleFusion.

One shortcoming, however, is that it tends to overestimate a person’s body size if they’re wearing a lot of clothes: there’s no easy way for it to tell whether someone is broad or just wearing a chunky sweater. And it doesn’t work well when the person interacts with a separate object, like a table or game controller; it would likely try to interpret those as weird extensions of limbs. Handling these exceptions is planned for future work.

The paper’s first author is Tao Yu of Tsinghua University in China, but researchers from Beihang University, Google, USC, and the Max Planck Institute were also involved.

“We believe the robustness and accuracy of our approach will enable many applications, especially in AR/VR, gaming, entertainment and even virtual try-on as we also reconstruct the underlying body shape,” write the authors in the paper’s conclusion. “For the first time, with DoubleFusion, users can easily digitize themselves.”

There’s no use denying that there are lots of interesting applications of this technology. But there’s also no use denying that this technology is basically X-ray Spex.


AI

Prisma co-founders raise $1M to build a social app called Capture

Two of the co-founders of the art filter app Prisma have left to build a new social app.

Prisma, as you may recall, had a viral moment back in 2016 when selfie takers went crazy for the fine art spin the app’s AI put on photos — in just a few seconds of processing.

Downloads leapt, art selfies flooded Instagram, and similar arty effects soon found their way into all sorts of rival apps and platforms. Then, after dipping a toe into social waters with the launch of a feed of its own, the company shifted focus to B2B developer tools, and we understand it’s since become profitable.

But two of Prisma’s co-founders, Aleksey Moiseyenkov and Aram Hardy, got itchy feet when they had an idea for another app business. And they’ve both now left to set up a new startup, called Capture Technologies.

The plan is to launch the app — which will be called Capture — in Q4, with a beta planned for September or October, according to Hardy (who’s taking the CMO role).

They’ve also raised a $1M seed for Capture, led by US VC firm General Catalyst. Also investing are KPCB, Social Capital, Dream Machine VC (the seed fund of former TechCrunch co-editor Alexia Bonatsos), Paul Heydon, and Russian internet giant Mail.Ru Group.

Josh Elman from Greylock Partners is also involved as an advisor.

Hardy says they had the luxury of being able to choose their seed investors after getting a warmer reception for Capture than they’d perhaps expected; they had thought it might be tough to raise funding for a new social app, given how that very crowded space has been monopolized by a handful of major platforms (hi Facebook, hey Snap!).

But they also believe they’ve identified overlooked territory — where they can offer something fresh to help people interact with others in real-time.

They’re not disclosing further details about the idea or how the Capture app will work at this stage, as they’re busy building and Hardy says certain elements could change and evolve before launch day.

What they will say is that the app will involve AI, and will put the emphasis for social interactions squarely on the smartphone camera.

Speed will also be a vital ingredient, as it was with Prisma — literally fueling the app’s virality. “We see a huge move to everything which is happening right now, which is really real-time,” Hardy tells TechCrunch. “Even when we started Prisma there were lots of similar products which were just processing one photo for five, ten, 15 minutes, and people were not using it because it takes time.

“People want everything right now. Right here. So this is a trend which is taking place right now. People just want everything right now, right here. So we’re trying to give it to them.”

“Our team’s mission is to bring an absolutely new and unique experience to how people interact with each other. We would like to come up with something unique and really fresh,” adds Moiseyenkov, Capture’s CEO (pictured above left, with Hardy).

“We see a huge potential in new social apps despite the fact that there are too many huge players.”

Having heard the full Capture pitch from Hardy I can say it certainly seems like an intriguing idea. Though how exactly they go about selectively introducing the concept will be key to building the momentum needed to power their big vision for the app. But really that’s true of any social product.

Their idea has also hooked a strong lineup of seed investors, doubtless helped by the pair’s prior success with Prisma. (If there’s one thing investors love more than a timely, interesting idea, it’s a team with pedigree, and these two certainly have that.)

“I’m happy to have such an amazing and experienced team,” adds Moiseyenkov, repaying the compliment to Capture’s investors.

“Your first investors are your team. You have to ask lots of questions like you do when you decide whether this or that person is a perfect fit for your team. Because investors and the team are those people with whom you’re going to build a great product. At the same time, investors ask lots of questions to you.”

Capture’s investors were evidently pleased enough with the answers their questions elicited to cut Capture its founding checks. And the startup’s team is already ten-strong — and hard at work to get a beta launched in fall.

The business is based in the US and Europe, with one office in Moscow, where Hardy says they’ve managed to poach some relevant tech talent from Russian social media giant vk.com, and another slated to open in a couple of weeks’ time on Snap’s home turf of LA.

“We’ll be their neighbors in Venice Beach,” he confirms, though he stresses there will still be clear blue water between the two companies’ respective social apps, adding: “Snapchat is really a different product.”

