Stripe debuts Radar anti-fraud AI tools for big businesses, says it has halted $4B in fraud to date

Cybersecurity continues to be a growing focus and problem in the digital world, and now Stripe is launching a new paid product that it hopes will help its customers better battle one of the bigger side-effects of data breaches: online payment fraud. Today, Stripe is announcing Radar for Fraud Teams, an expansion of its free AI-based Radar service that runs alongside Stripe’s core payments API to help identify and block fraudulent transactions.

And there are further efforts that Stripe is planning in the coming months. Michael Manapat, Stripe’s engineering manager for Radar and machine learning, said the company will soon launch a private beta of a “dynamic authentication” product that will bring in two-factor authentication and mark Stripe’s first foray into how biometric factors might be implemented in payments. Fingerprints and other physical attributes have become increasingly popular ways to identify mobile and other users.

The initial iteration of Radar launched in October 2016, and since then, Manapat tells me that it has prevented $4 billion in fraud for its “hundreds of thousands” of customers.

Considering the wider scope of how much e-commerce is affected by fraud (one study estimates $57.8 billion in e-commerce fraud across eight major verticals in a one-year period between 2016 and 2017), this is a decent dent, but there is a lot more work to be done. And because Stripe has seen four out of every five payment card numbers globally, thanks to the ubiquity of its payments API, it is in a strong position to tackle the problem.

The new paid product comes alongside an update to the core, free product, which Stripe is dubbing Radar 2.0. Stripe claims the update has more advanced machine learning built in and can therefore improve its fraud detection by some 25 percent over the previous version.

New features for the whole product (free and paid) include the ability to detect when a proxy or VPN is being used (which fraudsters might use to appear to be in one country when they are actually in another), and a model trained on billions of data points that is now updated automatically on a daily basis, itself an improvement on the slower, more manual process Manapat said Stripe has been using for the past couple of years.
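To make that retraining cadence concrete, below is a minimal, hypothetical sketch of a daily job that refits a fraud classifier on labeled historical transactions, with a proxy/VPN flag as one input signal. The feature names, the Transaction type and the scikit-learn model choice are illustrative assumptions, not Stripe’s actual pipeline.

```python
# Hypothetical sketch of a daily-retrained fraud classifier; not Stripe's actual pipeline.
from dataclasses import dataclass

from sklearn.ensemble import GradientBoostingClassifier


@dataclass
class Transaction:
    amount_usd: float
    is_proxy_or_vpn: bool         # assumed signal, e.g. from an IP-reputation lookup
    card_country_mismatch: bool   # card country differs from the IP's geolocation
    is_fraud: bool | None = None  # label, known only for historical transactions


def features(t: Transaction) -> list[float]:
    return [t.amount_usd, float(t.is_proxy_or_vpn), float(t.card_country_mismatch)]


def retrain_daily(history: list[Transaction]) -> GradientBoostingClassifier:
    """Refit the model on all labeled transactions seen so far (run once a day)."""
    labeled = [t for t in history if t.is_fraud is not None]
    model = GradientBoostingClassifier()
    model.fit([features(t) for t in labeled], [int(t.is_fraud) for t in labeled])
    return model


def fraud_score(model: GradientBoostingClassifier, t: Transaction) -> float:
    """Estimated probability, between 0 and 1, that a new transaction is fraudulent."""
    return float(model.predict_proba([features(t)])[0][1])
```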

Meanwhile, the paid product is an interesting development.

At the time of the original launch, Stripe co-founder John Collison hinted that the company would consider a paid product down the line. Stripe has said multiple times that it’s in no rush to go public (a statement that a spokesperson reiterated this week), but a paid tier is a notable sign of how Stripe is slowly building up more monetization and revenue generation.

Stripe is valued at around $9.2 billion as of its last big round in 2016. Most recently, it quietly raised another $44 million in March of this year, according to a Form D filing and data from Pitchbook, bringing the total to just under $500 million raised.

The Teams product, aimed at businesses that are big enough to have dedicated fraud detection staff, will be priced at an additional $0.02 per transaction, on top of Stripe’s basic transaction fees of a 2.9 percent commission plus 30 cents per successful card charge in the U.S. (fees vary in other markets).
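As a quick worked example of the U.S. pricing described above, here is the arithmetic for a hypothetical $100 charge (actual fees vary by market):

```python
# Worked example of the published U.S. fee structure; the $100 charge is hypothetical.
def total_fee(amount_usd: float, radar_teams: bool = True) -> float:
    base = amount_usd * 0.029 + 0.30      # 2.9 percent commission plus $0.30 per successful charge
    radar = 0.02 if radar_teams else 0.0  # additional per-transaction Radar for Fraud Teams fee
    return round(base + radar, 2)

print(total_fee(100.00))                     # 3.22, i.e. $2.90 + $0.30 + $0.02
print(total_fee(100.00, radar_teams=False))  # 3.2, the base Stripe fee alone
```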

The chief advantage of the paid product is that teams will be able to customize how Radar works with their own transactions.

This will include a more complete set of data for teams that review transactions, and a more granular set of tools to determine where and when sales are reviewed, for example based on usage patterns or the size of the transaction. There is already a set of flags that notes when a card is used in quick succession across disparate geographies, but Manapat said that newer signals, such as the speed at which payment details are entered and purchases are made, will now also factor into how transactions are flagged for review.

Similarly, teams will be able to set the value at which a transaction needs to be flagged. This is the online equivalent of the in-store threshold at which a purchase requires, or waives, a PIN or signature to seal the deal. (And it’s interesting to see that some e-commerce operations may be letting some dodgy sales through simply to keep the user experience smooth for the majority of legitimate transactions.)
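A minimal, hypothetical sketch of what such a configurable review rule could look like follows; the thresholds, signal names and helper function are illustrative assumptions rather than Stripe’s actual rule system:

```python
# Hypothetical review rule; thresholds and signal names are illustrative only.
from dataclasses import dataclass


@dataclass
class ReviewPolicy:
    amount_threshold_usd: float = 500.0  # flag anything above this value
    max_countries_per_hour: int = 2      # velocity check across disparate geographies
    min_entry_seconds: float = 3.0       # suspiciously fast card-detail entry


def needs_manual_review(amount_usd: float,
                        countries_last_hour: int,
                        card_entry_seconds: float,
                        policy: ReviewPolicy = ReviewPolicy()) -> bool:
    """Return True if the transaction should be routed to a human fraud analyst."""
    return (amount_usd > policy.amount_threshold_usd
            or countries_last_hour > policy.max_countries_per_hour
            or card_entry_seconds < policy.min_entry_seconds)
```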

Users of the paid product will also be able to use Radar to manage how they handle fraud overall. This will include keeping lists of attributes, names and numbers to scrutinize, checking transactions against those lists with analytics built by Stripe to help identify trending issues, and planning anti-fraud activities going forward.
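For illustration only, here is a hypothetical block-list check of the kind such lists enable; the list contents and matching logic are assumptions, not Stripe’s implementation:

```python
# Hypothetical block-list check; list contents are illustrative only.
BLOCKED_EMAIL_DOMAINS = {"example-fraud.test"}
BLOCKED_CARD_FINGERPRINTS = {"fp_blocked_demo"}


def violates_lists(email: str, card_fingerprint: str) -> bool:
    """Return True if the payment matches any entry the fraud team has listed."""
    domain = email.rsplit("@", 1)[-1].lower()
    return domain in BLOCKED_EMAIL_DOMAINS or card_fingerprint in BLOCKED_CARD_FINGERPRINTS
```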

News Source = techcrunch.com


Nvidia’s researchers teach a robot to perform simple tasks by observing a human

Industrial robots are typically all about repeating a well-defined task over and over again. Usually, that means performing those tasks a safe distance away from the fragile humans that programmed them. More and more, however, researchers are thinking about how robots can work in close proximity to humans and even learn from them. In part, that’s what Nvidia’s new robotics lab in Seattle focuses on, and the company’s research team today presented some of its most recent work on teaching robots by observing humans at the International Conference on Robotics and Automation (ICRA) in Brisbane, Australia.

Nvidia’s director of robotics research Dieter Fox.

As Dieter Fox, the senior director of robotics research at Nvidia (and a professor at the University of Washington), told me, the team wants to enable this next generation of robots that can safely work in close proximity to humans. But to do that, those robots need to be able to detect people, track their activities and learn how they can help them, whether in a small-scale industrial setting or in somebody’s home.

While it’s possible to train an algorithm to successfully play a video game through rote repetition and learning from its mistakes, Fox argues that the decision space for training robots that way is far too large to do this efficiently. Instead, a team of Nvidia researchers led by Stan Birchfield and Jonathan Tremblay developed a system that allows them to teach a robot to perform new tasks by simply observing a human.

The tasks in this example are pretty straightforward and involve nothing more than stacking a few colored cubes. But it’s also an important step in the overall journey toward being able to quickly teach a robot new tasks.

The researchers first trained a sequence of neural networks to detect objects, infer the relationship between them and then generate a program to repeat the steps it witnessed the human perform. The researchers say this new system allowed them to train their robot to perform this stacking task with a single demonstration in the real world.

One nifty aspect of this system is that it generates a human-readable description of the steps it’s performing. That way, it’s easier for the researchers to figure out what happened when things go wrong.
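As a highly simplified, hypothetical sketch of that kind of observe-infer-generate pipeline (the stage names, data structures and stacking logic below are illustrative assumptions, not Nvidia’s actual networks or code):

```python
# Hypothetical sketch of an observe -> infer -> generate-program pipeline;
# not Nvidia's actual system, just an illustration of the stages described above.
from dataclasses import dataclass


@dataclass
class DetectedCube:
    color: str
    position: tuple[float, float, float]  # x, y, z in the camera frame


def infer_stacking_order(cubes: list[DetectedCube]) -> list[str]:
    """Infer which cube sits on which by comparing heights after the demonstration."""
    return [c.color for c in sorted(cubes, key=lambda c: c.position[2])]


def generate_program(order: list[str]) -> list[str]:
    """Emit a human-readable plan that both researchers and the robot can follow."""
    return [f"place the {above} cube on the {below} cube"
            for below, above in zip(order, order[1:])]


# Example: after observing a demo that ends with red on green on blue.
cubes = [DetectedCube("blue", (0, 0, 0.00)),
         DetectedCube("green", (0, 0, 0.05)),
         DetectedCube("red", (0, 0, 0.10))]
for step in generate_program(infer_stacking_order(cubes)):
    print(step)
# place the green cube on the blue cube
# place the red cube on the green cube
```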

Nvidia’s Stan Birchfield tells me that the team aimed to make training the robot easy for a non-expert — and few things are easier to do than to demonstrate a basic task like stacking blocks. In the example the team presented in Brisbane, a camera watches the scene and the human simply walks up, picks up the blocks and stacks them. Then the robot repeats the task. Sounds easy enough, but it’s a massively difficult task for a robot.

To train the core models, the team mostly used synthetic data from a simulated environment. As both Birchfield and Fox stressed, it’s these simulations that allow for quickly training robots. Training in the real world would take far longer, after all, and can also be far more dangerous. And for most of these tasks, there is no labeled training data available to begin with.
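To illustrate why simulation helps here, a small, hypothetical synthetic-data generator: labeled scenes come for free because the simulator knows the ground truth, whereas real-world examples would have to be captured and labeled by hand. The scene parameters are assumptions for illustration, not Nvidia’s setup.

```python
# Hypothetical synthetic-scene generator; parameters are illustrative, not Nvidia's setup.
import random

COLORS = ["red", "green", "blue", "yellow"]


def random_scene(n_cubes: int = 3) -> dict:
    """Produce one simulated scene with ground-truth labels included for free."""
    cubes = []
    for color in random.sample(COLORS, n_cubes):
        cubes.append({
            "color": color,
            "position": (random.uniform(-0.3, 0.3),  # x on the table, meters
                         random.uniform(-0.3, 0.3),  # y on the table, meters
                         0.0),                       # resting on the table surface
        })
    return {"cubes": cubes, "lighting": random.uniform(0.5, 1.5)}


dataset = [random_scene() for _ in range(10_000)]  # cheap to scale, unlike real-world capture
```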

“We think using simulation is a powerful paradigm going forward to train robots [to] do things that weren’t possible before,” Birchfield noted. Fox echoed this and noted that this need for simulations is one of the reasons why Nvidia thinks its hardware and software are ideally suited for this kind of research. There is a very strong visual aspect to this training process, after all, and Nvidia’s background in graphics hardware surely helps.

Fox admitted that there’s still a lot of research left to be done here (most of the simulations aren’t photorealistic yet, after all), but that the core foundations are now in place.

Going forward, the team plans to expand the range of tasks that the robots can learn and the vocabulary necessary to describe those tasks.

News Source = techcrunch.com


58-year-old NRI masturbates sitting beside woman on board flight, held at Delhi airport

The security control room at the IGI Airport was informed in the early hours today that there was an “unruly passenger” on board a Turkish Airlines flight approaching Delhi.


After tens of thousands of pre-orders, 3D audio headphones startup Ossic disappears

After taking tens of thousands of crowdfunding pre-orders for a high-end pair of “3D sound” headphones, audio startup Ossic announced this weekend that it is shutting down and that backers will not be receiving refunds.

The company raised $2.7 million on Kickstarter and $3.2 million on Indiegogo for its Ossic X headphones, which it pitched as high-end, head-tracking headphones that would be perfect for listening to 3D audio, especially in a VR environment. While the company also raised a “substantial seed investment,” in a letter on the Ossic website it blamed the slow adoption of virtual reality, along with crowdfunding campaign stretch goals that bogged down its R&D team.

“This was obviously not our desired outcome. The team worked exceptionally hard and created a production-ready product that is a technological and performance breakthrough. To fail at the 5 yard-line is a tragedy. We are extremely sorry that we cannot deliver your product and want you to know that the team has done everything possible including investing our own savings and working without salary to exhaust all possibilities.”

We have reached out to the company for additional details.

Through January 2017, the San Diego company had received more than 22,000 pre-orders for its Ossic X headphones. This past January, Ossic announced that it had shipped the first units to the 80 backers in its $999 developer tier. In the same update, the company said it would enter “mass production” by late spring 2018.

In the end, after tens of thousands of pre-orders, Ossic only built 250 pairs of headphones and only shipped a few dozen to Kickstarter backers.

Crowdfunding campaign failures for hardware products are rarely shocking, and often the collapse comes from the company being unable to secure additional funding from outside investors. Here, Ossic appears to have been misguided from the start: even with nearly $6 million in crowdfunding, plus seed funding the company said nearly matched that figure, it was unable to begin large-scale manufacturing. Ossic said in its letter that it would likely take more than $2 million in additional funding to deliver the existing backlog of pre-orders.

Backers are understandably quite upset about not receiving their headphones. A group of more than 1,200 Facebook users has joined a recently created page threatening a class action lawsuit against the team.

News Source = techcrunch.com
