
Delhi

LocationSmart didn’t just sell mobile phone locations, it leaked them

What’s worse than companies selling the real-time locations of cell phones wholesale? Failing to take security precautions that prevent people from abusing the service. LocationSmart did both, as numerous sources indicated this week.

The company surfaced in connection with a hack of Securus, a company in the lucrative business of prison inmate communication; LocationSmart was the partner that allowed Securus to provide mobile device locations in real time to law enforcement and others. There are perfectly good reasons and methods for establishing customer location, but this isn’t one of them.

Police, the FBI and the like are supposed to go directly to carriers for this kind of information. But paperwork is such a hassle! If carriers let LocationSmart, a separate company, access that data, and LocationSmart sells it to someone else (Securus), and that someone else sells it to law enforcement, much less paperwork is required! That’s what Securus told Senator Ron Wyden (D-OR) it was doing: acting as a middleman between the government and carriers, with help from LocationSmart.

LocationSmart’s service appears to locate phones by which towers they have recently connected to, returning a location within seconds that can be accurate to within a few hundred feet. To prove the service worked, the company (until recently) provided a free trial in which a prospective customer could enter a phone number and, once that number replied yes to a consent text, the location would be returned.

It worked quite well, but it is now offline: in its excitement to demonstrate the ability to locate a given phone, the company apparently forgot to secure the API by which it did so, Brian Krebs reports.

Krebs heard from CMU security researcher Robert Xiao, who had found that LocationSmart “failed to perform basic checks to prevent anonymous and unauthorized queries.” And not through some hardcore hackery — just by poking around.

“I stumbled upon this almost by accident, and it wasn’t terribly hard to do. This is something anyone could discover with minimal effort,” he told Krebs. Xiao posted the technical details here.

They verified that the back door to the API worked by testing it with some known parties, and when they informed LocationSmart, the company’s CEO said it would investigate.
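To make the failure concrete, here is a minimal sketch of the kind of anonymous query an unsecured lookup service permits. The endpoint URL, parameter names and response fields below are invented for illustration; they are not LocationSmart’s actual API. The point is only that a location endpoint with no authentication and no verified consent step can be scripted by anyone who finds it.

```python
# Hypothetical sketch of an anonymous query against an unsecured
# phone-location API. The URL, parameters and response fields are
# placeholders invented for illustration, not LocationSmart's real API.
import requests

DEMO_ENDPOINT = "https://example.com/api/v1/locate"  # placeholder URL


def locate_phone(phone_number: str) -> dict:
    """Ask the (hypothetical) demo endpoint where a phone is right now.

    Note what is missing: no API key, no session tied to a completed
    consent text, no check that the caller is the subscriber.
    """
    resp = requests.post(DEMO_ENDPOINT, data={"number": phone_number})
    resp.raise_for_status()
    return resp.json()  # e.g. {"lat": ..., "lon": ..., "accuracy_m": ...}


if __name__ == "__main__":
    print(locate_phone("+15555550123"))  # fictional number
```

A properly secured version of the same service would only return a location for a request tied to a consent confirmation or a documented law enforcement demand.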

This is enough of an issue on its own. But it also calls into question what the wireless companies say about their own location-sharing policies. When Krebs contacted the four major U.S. carriers, they all said they require customer consent or a law enforcement request.

Yet using LocationSmart’s tool, phones could be located without user consent on those very carriers. These two things can’t both be true; one was just demonstrated, while the other is an assurance from an industry infamous for deception and bad privacy policy.

There are three options that I can think of:

  • LocationSmart has a way of finding location via towers that does not require authorization from the carriers in question. This seems unlikely for technical and business reasons; the company also listed the carriers and other companies on its front page as partners, though their logos have since been removed.
  • LocationSmart has a sort of skeleton key to carrier info; its requests might be assumed to be legit because it has law enforcement clients or the like. This is more likely, but it also contradicts the carriers’ stated requirement of consent or some kind of law enforcement justification.
  • Carriers don’t actually check on a case by case basis whether a request has consent; they may foist that duty off on the ones doing the requests, like LocationSmart (which does ask for consent in the official demo). But if carriers don’t ask for consent and third parties don’t either, and neither keeps the other accountable, the requirement for consent may as well not exist.

None of these is particularly heartening. But no one expected anything good to come out of a poorly secured API that let anyone request the approximate location of anyone’s phone. I’ve asked LocationSmart for comment on how the issue was possible (and also Krebs for a bit of extra data that might shed light on this).

It’s worth mentioning that LocationSmart is not the only business that does this, just the one implicated today in this security failure and in the shady practices of Securus.

News Source = techcrunch.com


Artificial Intelligence

Nvidia’s researchers teach a robot to perform simple tasks by observing a human

Industrial robots are typically all about repeating a well-defined task over and over again. Usually, that means performing those tasks a safe distance away from the fragile humans that programmed them. More and more, however, researchers are now thinking about how robots can work in close proximity to humans and even learn from them. In part, that’s what Nvidia’s new robotics lab in Seattle focuses on, and the company’s research team today presented some of its most recent work around teaching robots by observing humans at the International Conference on Robotics and Automation (ICRA) in Brisbane, Australia.

Nvidia’s director of robotics research Dieter Fox.

As Dieter Fox, the senior director of robotics research at Nvidia (and a professor at the University of Washington), told me, the team wants to enable the next generation of robots that can safely work in close proximity to humans. But to do that, those robots need to be able to detect people, track their activities and learn how they can help people. That may be in a small-scale industrial setting or in somebody’s home.

While it’s possible to train an algorithm to successfully play a video game by rote repetition and teaching it to learn from its mistakes, Fox argues that the decision space for training robots that way is far too large to do this efficiently. Instead, a team of Nvidia researchers led by Stan Birchfield and Jonathan Tremblay developed a system that allows them to teach a robot to perform new tasks by simply observing a human.

The tasks in this example are pretty straightforward and involve nothing more than stacking a few colored cubes. But it’s also an important step in the overall journey toward quickly teaching robots new tasks.

The researchers first trained a sequence of neural networks to detect objects, infer the relationship between them and then generate a program to repeat the steps it witnessed the human perform. The researchers say this new system allowed them to train their robot to perform this stacking task with a single demonstration in the real world.
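As a rough mental model only (not Nvidia’s actual code), that pipeline can be sketched as three stages: detect the blocks, infer which block ended up on which, and then emit a step-by-step program the robot can replay. All of the names and data structures below are illustrative assumptions standing in for the trained networks.

```python
# Illustrative sketch of a perceive -> infer-relations -> generate-program
# pipeline for learning a block-stacking task from one demonstration.
# Everything here is a stand-in for the neural networks described above.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Block:
    color: str
    position: Tuple[float, float, float]  # (x, y, z) in the workspace


def detect_objects(frame) -> List[Block]:
    """Stand-in for the object-detection network; returns the blocks it sees."""
    # A real system would run a detector on the camera frame.
    return [
        Block("red", (0.40, 0.20, 0.00)),
        Block("blue", (0.40, 0.20, 0.05)),   # blue sits on red
        Block("green", (0.70, 0.30, 0.00)),
    ]


def _is_on(upper: Block, lower: Block, tol: float = 0.02) -> bool:
    ux, uy, uz = upper.position
    lx, ly, lz = lower.position
    return abs(ux - lx) < tol and abs(uy - ly) < tol and uz > lz


def infer_relationships(blocks: List[Block]) -> List[Tuple[str, str]]:
    """Stand-in for the relationship network: which block sits on which."""
    return [
        (upper.color, lower.color)
        for upper in blocks
        for lower in blocks
        if upper is not lower and _is_on(upper, lower)
    ]


def generate_program(relations: List[Tuple[str, str]]) -> List[str]:
    """Turn inferred relations into human-readable steps the robot replays."""
    return [f"place the {top} block on the {bottom} block" for top, bottom in relations]


if __name__ == "__main__":
    final_frame = None  # placeholder for the camera frame after the human demo
    for step in generate_program(infer_relationships(detect_objects(final_frame))):
        print(step)  # readable plan, useful when debugging a failed run
```

The readable step list corresponds to the generated program, which, as noted below, is what makes it easier to see what went wrong when the robot fails.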

One nifty aspect of this system is that it generates a human-readable description of the steps it’s performing. That way, it’s easier for the researchers to figure out what happened when things go wrong.

Nvidia’s Stan Birchfield tells me that the team aimed to make training the robot easy for a non-expert — and few things are easier to do than to demonstrate a basic task like stacking blocks. In the example the team presented in Brisbane, a camera watches the scene and the human simply walks up, picks up the blocks and stacks them. Then the robot repeats the task. Sounds easy enough, but it’s a massively difficult task for a robot.

To train the core models, the team mostly used synthetic data from a simulated environment. As both Birchfield and Fox stressed, it’s these simulations that allow for quickly training robots. Training in the real world would take far longer, after all, and can also be far more dangerous. And for most of these tasks, there is no labeled training data available to begin with.
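To give a flavor of why simulation helps here, labeled examples for a stacking task can be generated by randomly arranging cubes in a virtual scene and reading the ground-truth relations straight out of the simulator, with no hand labeling. The snippet below is a hedged toy under that assumption, not the team’s actual tooling, which renders full simulated scenes for the networks.

```python
# Toy sketch of generating labeled synthetic data in simulation: random cube
# placements plus automatically derived "A is on B" labels. Purely illustrative.
import random

COLORS = ["red", "green", "blue", "yellow"]


def random_scene(n_cubes: int = 3):
    """Place cubes at random; sometimes stack one directly on another."""
    cubes = []
    for color in random.sample(COLORS, n_cubes):
        if cubes and random.random() < 0.5:
            base = random.choice(cubes)              # stack on an existing cube
            x, y, z = base["pos"]
            pos = (x, y, z + 0.05)
        else:
            pos = (random.uniform(0.0, 1.0), random.uniform(0.0, 1.0), 0.0)
        cubes.append({"color": color, "pos": pos})
    return cubes


def relation_labels(cubes):
    """Ground truth is free in simulation: derive the stacking labels directly."""
    labels = []
    for a in cubes:
        for b in cubes:
            same_column = a["pos"][:2] == b["pos"][:2]
            if a is not b and same_column and a["pos"][2] > b["pos"][2]:
                labels.append((a["color"], "on", b["color"]))
    return labels


# A dataset of 1,000 labeled scenes, generated in a fraction of a second.
dataset = [(scene, relation_labels(scene)) for scene in (random_scene() for _ in range(1000))]
```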

“We think using simulation is a powerful paradigm going forward to train robots to do things that weren’t possible before,” Birchfield noted. Fox echoed this and noted that this need for simulations is one of the reasons Nvidia thinks its hardware and software are ideally suited for this kind of research. There is a very strong visual aspect to this training process, after all, and Nvidia’s background in graphics hardware surely helps.

Fox admitted that there’s still a lot of research left to be done here (most of the simulations aren’t photorealistic yet, after all), but the core foundations are now in place.

Going forward, the team plans to expand the range of tasks that the robots can learn and the vocabulary necessary to describe those tasks.

News Source = techcrunch.com


Delhi

58-year-old NRI masturbates sitting beside woman on board flight, held at Delhi airport

The security control room at the IGI Airport was informed in the early hours today that there was an “unruly passenger” on board a Turkish Airlines flight approaching Delhi.


Delhi

After tens of thousands of pre-orders, 3D audio headphones startup Ossic disappears

After taking tens of thousands of crowd-funding pre-orders for a high-end pair of “3D sound” headphones, audio startup Ossic announced this weekend that it is shutting down the company and backers will not be receiving refunds.

The company raised $2.7 million on Kickstarter and $3.2 million on Indiegogo for its Ossic X headphones, which it pitched as a pair of high-end head-tracking headphones that would be perfect for listening to 3D audio, especially in a VR environment. While the company also raised a “substantial seed investment,” in a letter on the Ossic website it blamed the slow adoption of virtual reality, along with crowdfunding campaign stretch goals that bogged down its R&D team.

“This was obviously not our desired outcome. The team worked exceptionally hard and created a production-ready product that is a technological and performance breakthrough. To fail at the 5 yard-line is a tragedy. We are extremely sorry that we cannot deliver your product and want you to know that the team has done everything possible including investing our own savings and working without salary to exhaust all possibilities.”

We have reached out to the company for additional details.

Through January 2017, the San Diego company had received more than 22,000 pre-orders for its Ossic X headphones. This past January, Ossic announced that it had shipped the first units to the 80 backers in its $999 developer tier. In that same update, the company said it would enter “mass production” by late spring 2018.

In the end, after tens of thousands of pre-orders, Ossic only built 250 pairs of headphones and only shipped a few dozen to Kickstarter backers.

Crowdfunding campaign failures for hardware products are rarely shocking, but the collapse usually comes from the company being unable to secure additional funding from outside investors. Here, Ossic appears to have been misguided from the start: even with nearly $6 million in crowdfunding, plus seed funding it said nearly matched that figure, it was left unable to begin large-scale manufacturing. The company said in its letter that it would likely take more than $2 million in additional funding to deliver the existing backlog of pre-orders.

Backers are understandably quite upset about not receiving their headphones. More than 1,200 Facebook users have joined a recently created page threatening a class action lawsuit against the team.

News Source = techcrunch.com

