LawGeex raises $12M for its AI-powered contract review technology

Can Artificial Intelligence replace lawyers? Perhaps sometime in the distant future, but in the meantime AI is already augmenting the work done by legal professionals as startups race to reach that ultimate goal.

One burgeoning player in the AI-powered legal tech space is Tel Aviv-based LawGeex, which has developed automated contract review technology to help companies sift through things like NDAs, supply agreements, purchase orders, and SaaS licenses to ensure there aren’t any unsanctioned legal gotchas buried deep in the legalese. Today, the company is announcing that it has closed $12 million in new investment.

Led by VC fund Aleph, with participation from previous backers, including Lool Ventures, the new round of funding will be used by LawGeex to further develop its product, and build a bigger presence in the U.S. where it recently opened a New York office. It brings the startup’s total funding to date to $21.5 million.

Designed to answer the question ‘Can I sign this?’, the LawGeex contract review system aims to significantly speed up the contract approval process and cut the costs that come with it. The idea is that once a new contract is sent to a business, it is uploaded to LawGeex, where a “first-pass review” is undertaken by the startup’s AI, which checks the contract against a company’s predefined legal policies.

“If everything looks good, we can automatically approve the contract for signing right then and there,” explains LawGeex VP Marketing Shmuli Goldberg. “If we spot any issues that need to be corrected, we escalate the contract to the legal team, and highlight the exact sentence they need to fix, and what they need to do to fix it”.
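
LawGeex hasn’t published its internals, but the flow Goldberg describes amounts to an approve-or-escalate loop. The sketch below is purely illustrative: the Policy and Finding structures are invented for this example, and a simple keyword check stands in for the company’s trained model.

```python
# A minimal sketch of a LawGeex-style first-pass review, assuming
# hypothetical Policy/Finding structures. A keyword check stands in
# for the trained model; the real system reads legal concepts.
from dataclasses import dataclass

@dataclass
class Policy:
    name: str
    forbidden_phrase: str  # stand-in for a learned concept detector
    suggested_fix: str

    def is_violated_by(self, sentence: str) -> bool:
        return self.forbidden_phrase in sentence.lower()

@dataclass
class Finding:
    sentence: str       # the exact sentence the legal team must fix
    policy: str         # which predefined policy it conflicts with
    suggested_fix: str  # what to do about it

def first_pass_review(sentences, policies):
    findings = [Finding(s, p.name, p.suggested_fix)
                for s in sentences
                for p in policies if p.is_violated_by(s)]
    # No findings: approve for signing. Otherwise escalate to legal
    # with the problem sentences highlighted.
    return ("approve", []) if not findings else ("escalate", findings)

policies = [Policy("No unlimited liability", "unlimited liability",
                   "Cap liability at 12 months of fees")]
print(first_pass_review(
    ["The vendor accepts unlimited liability for all claims."],
    policies))
```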

The desired outcome is that legal professionals no longer need to spend time reviewing problem-free contracts, and only spend a few minutes, instead of hours, on problematic ones. “We free up the time of whoever does that first review of the contract, be it a paralegal who takes a first look before sending issues on to a lawyer, or a contract review team who triage incoming contracts,” Goldberg says.

Put more simply, the LawGeex product operates a little like a spelling or grammar checker. But instead of looking for specific keywords or language, the AI has been trained to understand technical legal language, or so-called legalese. “It actively reads the contracts and ‘understands’ the legal concepts. This means we can find and flag provisions even if they’re written in a way we’ve never seen before,” says the LawGeex VP.

To make all of this possible, over the last four years the company’s “recursive neural network”-based AI has been trained by feeding it hundreds of thousands of legal contracts, and having experienced U.S. lawyers annotate those contracts along the way. “We’ve now reached the point we can say that in certain cases, for example reviewing standard NDAs, our AI is actually more accurate than a human, as a recent study led by several academics at leading universities showed,” claims Goldberg.
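
The company’s model and training data are proprietary, so the following is only a rough sketch of the supervised setup that description implies: annotated clauses in, provision labels out. A simple bag-of-words classifier stands in here for LawGeex’s recursive neural network, and the toy annotations are invented for the example.

```python
# Toy stand-in for training on lawyer-annotated contract clauses.
# (clause text, provision label) pairs play the role of annotations.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train = [
    ("Each party shall keep the other's information confidential.",
     "confidentiality"),
    ("Recipient shall not disclose Confidential Information.",
     "confidentiality"),
    ("This agreement is governed by the laws of New York.",
     "governing_law"),
    ("Any dispute shall be resolved under Delaware law.",
     "governing_law"),
]
texts, labels = zip(*train)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

# A trained model can flag a provision even when it is phrased in a
# way that never appeared verbatim in the training data.
print(clf.predict(["The parties agree to keep all terms secret."]))
```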

News Source = techcrunch.com

Nvidia’s researchers teach a robot to perform simple tasks by observing a human

Industrial robots are typically all about repeating a well-defined task over and over again. Usually, that means performing those tasks a safe distance away from the fragile humans that programmed them. More and more, however, researchers are now thinking about how robots can work in close proximity to humans and even learn from them. In part, that’s what Nvidia’s new robotics lab in Seattle focuses on, and the company’s research team today presented some of its most recent work around teaching robots by observing humans at the International Conference on Robotics and Automation (ICRA) in Brisbane, Australia.

Nvidia’s director of robotics research Dieter Fox.

As Dieter Fox, the senior director of robotics research at Nvidia (and a professor at the University of Washington), told me, the team wants to enable this next generation of robots that can safely work in close proximity to humans. But to do that, those robots need to be able to detect people, track their activities and learn how they can help them. That may be in a small-scale industrial setting or in somebody’s home.

While it’s possible to train an algorithm to successfully play a video game by rote repetition and teaching it to learn from its mistakes, Fox argues that the decision space for training robots that way is far too large to do this efficiently. Instead, a team of Nvidia researchers led by Stan Birchfield and Jonathan Tremblay developed a system that allows them to teach a robot to perform new tasks by simply observing a human.

The tasks in this example are pretty straightforward and involve nothing more than stacking a few colored cubes. But it’s also an important step toward being able to quickly teach robots new tasks.

The researchers first trained a sequence of neural networks to detect objects, infer the relationship between them and then generate a program to repeat the steps it witnessed the human perform. The researchers say this new system allowed them to train their robot to perform this stacking task with a single demonstration in the real world.

One nifty aspect of this system is that it generates a human-readable description of the steps it’s performing. That way, it’s easier for the researchers to figure out what happened when things go wrong.
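
The neural networks themselves are described in Nvidia’s paper; the stand-ins below only sketch the data flow as reported here, from detected objects to inferred relationships to a replayable, human-readable program. The Cube structure and the geometry heuristic are hypothetical simplifications.

```python
# Sketch of the observe-then-replay pipeline: perception output ->
# relationship inference -> generated program. All names here are
# illustrative stand-ins for the actual neural networks.
from dataclasses import dataclass

@dataclass
class Cube:
    color: str
    position: tuple  # (x, y, z) estimated by the perception stage

def is_on(top: Cube, bottom: Cube, tol: float = 0.02) -> bool:
    """Crude geometric stand-in for the relationship network."""
    dx = abs(top.position[0] - bottom.position[0])
    dy = abs(top.position[1] - bottom.position[1])
    return dx < tol and dy < tol and top.position[2] > bottom.position[2]

def infer_relations(cubes):
    return [(t.color, "on", b.color)
            for t in cubes for b in cubes
            if t is not b and is_on(t, b)]

def generate_program(relations):
    """Emit replayable steps that double as a readable description."""
    return [f"place({top}, on={bottom})" for top, _, bottom in relations]

# One observed demonstration: a red cube stacked on a blue one.
scene = [Cube("blue", (0.0, 0.0, 0.00)), Cube("red", (0.0, 0.0, 0.05))]
for step in generate_program(infer_relations(scene)):
    print(step)  # "place(red, on=blue)": easy to check when things go wrong
```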

Nvidia’s Stan Birchfield tells me that the team aimed to make training the robot easy for a non-expert — and few things are easier to do than to demonstrate a basic task like stacking blocks. In the example the team presented in Brisbane, a camera watches the scene and the human simply walks up, picks up the blocks and stacks them. Then the robot repeats the task. Sounds easy enough, but it’s a massively difficult task for a robot.

To train the core models, the team mostly used synthetic data from a simulated environment. As both Birchfield and Fox stressed, it’s these simulations that allow for quickly training robots. Training in the real world would take far longer, after all, and can also be far more dangerous. And for most of these tasks, there is no labeled training data available to begin with.
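
To make that concrete, here is a toy version of the synthetic-data idea: in simulation, ground-truth labels come for free with every generated scene, so a large labeled dataset can be produced cheaply and safely. The randomization axes below are illustrative, not Nvidia’s actual recipe.

```python
# Generate labeled training scenes from a simulated environment.
# Poses and lighting are randomized; labels are known exactly, so
# no human annotation is needed, unlike with real-world images.
import random

COLORS = ["red", "green", "blue", "yellow"]

def random_scene(n_cubes: int = 3) -> dict:
    cubes = [{
        "color": color,
        "position": (random.uniform(-0.3, 0.3),  # randomized pose
                     random.uniform(-0.3, 0.3),
                     0.0),
    } for color in random.sample(COLORS, n_cubes)]
    return {"cubes": cubes, "lighting": random.uniform(0.5, 1.5)}

dataset = [random_scene() for _ in range(10_000)]  # cheap and safe
```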

“We think using simulation is a powerful paradigm going forward to train robots [to] do things that weren’t possible before,” Birchfield noted. Fox echoed this, adding that this need for simulation is one of the reasons Nvidia thinks its hardware and software are ideally suited to this kind of research. There is a very strong visual aspect to this training process, after all, and Nvidia’s background in graphics hardware surely helps.

Fox admitted that there’s still a lot of research left to be done here (most of the simulations aren’t photorealistic yet, after all), but that the core foundations are now in place.

Going forward, the team plans to expand the range of tasks that the robots can learn and the vocabulary necessary to describe those tasks.

News Source = techcrunch.com


58-year-old NRI masturbates sitting beside woman on board flight, held at Delhi airport

The security control room at the IGI Airport was informed in the early hours today that there was an “unruly passenger” on board a Turkish Airlines flight approaching Delhi.


After tens of thousands of pre-orders, 3D audio headphones startup Ossic disappears

After taking tens of thousands of crowdfunding pre-orders for a high-end pair of “3D sound” headphones, audio startup Ossic announced this weekend that it is shutting down and that backers will not be receiving refunds.

The company raised $2.7 million on Kickstarter and $3.2 million on Indiegogo for its Ossic X headphones, which it pitched as a pair of high-end head-tracking headphones perfect for listening to 3D audio, especially in a VR environment. Ossic also raised a “substantial seed investment,” but in a letter on its website the company blamed the shutdown on the slow adoption of virtual reality, alongside crowdfunding stretch goals that bogged down its R&D team.

“This was obviously not our desired outcome. The team worked exceptionally hard and created a production-ready product that is a technological and performance breakthrough. To fail at the 5 yard-line is a tragedy. We are extremely sorry that we cannot deliver your product and want you to know that the team has done everything possible including investing our own savings and working without salary to exhaust all possibilities.”

We have reached out to the company for additional details.

Through January 2017, the San Diego company had received more than 22,000 pre-orders for its Ossic X headphones. This past January, Ossic announced that it had shipped the first units to the 80 backers in its $999 developer tier. In that same update, the company said it would enter “mass production” by late spring 2018.

In the end, after tens of thousands of pre-orders, Ossic only built 250 pairs of headphones and only shipped a few dozen to Kickstarter backers.

Crowdfunding campaign failures for hardware products are rarely shocking, but the collapse usually comes from a company failing to secure additional funding from outside investors. Here, Ossic appears to have been misguided from the start: even with nearly $6 million in crowdfunding, plus a seed investment it said nearly matched that figure, it was unable to begin large-scale manufacturing. The company said in its letter that it would likely take more than $2 million in additional funding to deliver the existing backlog of pre-orders.

Backers are understandably quite upset about not receiving their headphones. More than 1,200 Facebook users have joined a recently created group threatening a class action lawsuit against the team.

News Source = techcrunch.com
