
Equifax products also leaked thousands of salary histories

Equifax is still leaking like a sieve. Security researcher Brian Krebs has outlined a vulnerability in Equifax’s The Work Number product, a system used by credit companies to confirm your salary.

The system uses a handful of personal details, including your SSN and birthdate, to pull up a salary history; those are precisely the details exposed in Equifax's breach of 143 million records earlier this year.

Krebs writes:

To find out how easy it is to view your detailed salary history, you’ll need your employer’s name or employer code. Helpfully, this page lets you look that up quite easily (although if you opt to list employers alphabetically by the first letter of the company name, there are so many entries for each letter that I found Equifax’s database simply crashes half the time instead of rendering the entire list).

From there it was simply a matter of visiting a page – since taken down – and entering your SSN and "PIN," a term that suggests a degree of security but was in fact just your eight-digit birthdate. Users were then shown a full set of salary data.
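To see why an eight-digit birthdate makes such a poor "PIN," it helps to count the search space. Here is a minimal sketch (ours, not Krebs's; the MMDDYYYY digit order is an assumption) that enumerates every date an attacker would need to try:

```python
from datetime import date, timedelta

def birthdate_pins(start_year: int, end_year: int):
    """Yield every valid date between two years as an eight-digit
    string (MMDDYYYY here; the exact digit order is an assumption)."""
    day = date(start_year, 1, 1)
    end = date(end_year, 12, 31)
    while day <= end:
        yield day.strftime("%m%d%Y")
        day += timedelta(days=1)

# A truly random eight-digit PIN has 10**8 = 100,000,000 possibilities.
# Birthdates for a 60-year cohort of working adults number far fewer:
candidates = list(birthdate_pins(1940, 1999))
print(f"{len(candidates):,} valid dates vs. 100,000,000 random PINs")
# ~21,900 dates; knowing a victim's birth year cuts it to roughly 365.
```

In other words, a birthdate "PIN" offers essentially no protection against anyone who knows, or can guess, roughly how old you are.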

The Equifax breach shows us a few things, but primarily it proves that the systems put in place to protect banks from fraudulent customers are ineffective and prone to catastrophic failure. While I doubt this will cause a popular uprising and wipe out services like Equifax, here's hoping that some industrious startup with a quantum encryption scheme and half a brain can figure out a better way to keep our financial data secure.



A simple solution to end the encryption debate

Criminals and terrorists, like millions of others, rely on smartphone encryption to protect the information on their mobile devices. But unlike most of us, the data on their phones could endanger lives and pose a great threat to national security.

The challenge for law enforcement, and for us as a society, is how to reconcile the advantages of gaining access to the plans of dangerous individuals with the cost of opening a door to the lives of everyone else. It is the modern manifestation of the age-old conflict between privacy versus security, playing out in our pockets and palms.

One-size-fits-all technological solutions, like a manufacturer-built universal backdoor tool for smartphones, likely create more dangers than they prevent. While no solution will be perfect, the best ways to square data access with security concerns require a more nuanced approach that relies on non-technological procedures.

The FBI has increasingly pressed the case that criminals and terrorists use smartphone security measures to avoid detection and investigation, arguing for a technological, cryptographic solution to stop these bad actors from “going dark.” In fact, there are recent reports that the Executive Branch is engaged in discussions to compel manufacturers to build technological tools so law enforcement can read otherwise-encrypted data on smartphones.

But the FBI is also tasked with protecting our nation against cyber threats. Encryption plays a critical role in protecting our digital systems against compromise by hackers and thieves, and of course a centralized data access tool would be a prime target for them. As events from the 2016 elections to the recent ransomware attack against government computers in Atlanta prove, the problem will likely only get worse. Anything that weakens our cyber defenses will only make it more challenging for authorities to balance these "dual mandates" of cybersecurity and law enforcement access.

There is also the problem of internal threats: when they have access to customer data, service providers themselves can misuse or sell it without permission. Once someone’s data is out of their control, they have very limited means to protect it against exploitation. The current, growing scandal around the data harvesting practices on social networking platforms illustrates this risk. Indeed, our company Symphony Communications, a strongly encrypted messaging platform, was formed in the wake of a data misuse scandal by a service provider in the financial services sector.


So how do we help law enforcement without making data privacy even thornier than it already is? One potential answer is a non-technological method, sensitive to the needs of all parties involved, that can in many cases resolve the tension between government access and data protection while preventing abuse by service providers.

Agreements between some of our clients and the New York State Department of Financial Services ("NYSDFS") proved popular enough that FBI Director Wray recently pointed to them as a model of "responsible encryption," one that solves the problem of "going dark" without compromising the robust encryption critical to our nation's business infrastructure.

The solution requires storage of encryption keys — the codes needed to decrypt data — with third-party custodians. Those custodians do not control clients' encryption keys; rather, they hand the access tool to the clients, who choose how to use it and to whom to give access. A core component of strong digital security is that a service provider should have neither access to a client's unencrypted data nor control over a client's encryption keys.

The distinction is crucial. This solution is not technological, like backdoor access built by manufacturers or service providers, but a human one built around customer control. Such arrangements provide robust protection from criminals hacking the service, and they also prevent customer data harvesting by service providers.

Where clients choose their own custodians, they may subject those custodians to their own rigorous security requirements. Clients can even split their encryption keys into multiple pieces distributed over different third parties, so that no single custodian can access a client's data without the cooperation of the others, as the sketch below illustrates.
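The arrangement described here doesn't mandate a particular splitting scheme, but the simplest illustration is an all-of-n XOR split, in which every custodian must contribute its share to reconstruct the key (threshold schemes such as Shamir's secret sharing generalize this to k-of-n). A minimal sketch, assuming one share per custodian:

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n: int) -> list[bytes]:
    """Split `key` into n shares; any n-1 shares reveal nothing."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, key))  # final share completes the XOR
    return shares

def recombine(shares: list[bytes]) -> bytes:
    """XOR all shares together to recover the original key."""
    return reduce(xor_bytes, shares)

key = secrets.token_bytes(32)              # e.g. an AES-256 data key
custodian_shares = split_key(key, 3)       # one share per custodian
assert recombine(custodian_shares) == key  # all three must cooperate
```

Because each share is indistinguishable from random bytes, a breach at any single custodian yields nothing without the cooperation of the rest.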

This solution protects against hacking and espionage while safeguarding against the misuse of customer content by the service provider. But it is not a model that supports service provider or manufacturer-built backdoors; our approach keeps control of the encryption keys in clients' hands, not ours or the government's.

A custodial mechanism that utilizes customer-selected third parties is not the answer to every part of the cybersecurity and privacy dilemma. Indeed, it is hard to imagine that this dilemma will submit to a single solution, especially a purely technological one. Our experience shows that reasonable, effective solutions can exist. Technological features are core to such solutions, but just as critical are non-technological considerations. Advancing purely technical answers – no matter how inventive – without working through the checks, balances and risks of implementation would be a mistake.


Hacker Kevin Mitnick shows how to bypass 2FA

A new exploit allows hackers to spoof two-factor authentication requests by sending a user to a fake login page and then stealing the username, password, and session cookie.

KnowBe4 Chief Hacking Officer Kevin Mitnick showed the hack in a public video. By convincing a victim to visit a typo-squatting domain like "LunkedIn.com" and capturing the login, password, and authentication code, the hacker can pass the credentials along to the actual site and capture the session cookie. Once this is done, the hacker can stay logged in indefinitely. The attack essentially uses the one-time 2FA code to complete a legitimate login and then grabs the resulting session cookie.
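To see why the session cookie is the real prize, consider what replaying it looks like. A minimal sketch (the URL and cookie name are hypothetical placeholders, not details from Mitnick's demo):

```python
import requests

# Hypothetical values harvested by the phishing proxy; the cookie
# name and URL below are placeholders, not details from the demo.
stolen = {"session_id": "captured-session-token"}

# The server treats a valid session cookie as proof of a completed
# login, so no username, password, or 2FA code is requested again.
resp = requests.get("https://www.example.com/account", cookies=stolen)
print(resp.status_code)  # 200 for as long as the session stays valid
```

This is also why origin-bound factors such as U2F/WebAuthn security keys resist this kind of proxying in a way that one-time codes do not: the browser will not complete the challenge for a look-alike domain.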

“A white hat hacker friend of Kevin’s developed a tool to bypass two-factor authentication using social engineering tactics – and it can be weaponized for any site,” said Stu Sjouwerman, KnowBe4 CEO. “Two-factor authentication is intended to be an extra layer of security, but in this instance, we clearly see that you can’t rely on it alone to protect your organization.”

Sjouwerman notes that anti-phishing education is deeply important, and that a hack like this is nearly impossible to pull off if the victim is savvy about security and the dangers of clicking links that arrive in their inbox. To demonstrate, Sjouwerman sent me an email seemingly from Matt Burns (matt@techcrunch.com) about a typo in a post. When I clicked on it, I was bounced through a SendGrid redirect and dumped onto TechCrunch – but the payload could have been far more nefarious.

“This highlights the need for new-school security awareness training and simulated phishing because people are truly your last line of defense,” said Sjouwerman. He estimates that hackers will begin trying this technique in the next few weeks and urges users and IT managers to harden their security protocols.


We love augmented reality, but let’s fix things that could become big problems

Augmented Reality (AR) is still in its infancy and has a very promising youth and adulthood ahead. It has already become one of the most exciting, dynamic, and pervasive technologies ever developed. Every day someone is creating a novel way to reshape the real world with a new digital innovation.

Over the past couple of decades, the Internet and smartphone revolutions have transformed our lives, and AR has the potential to be that big. We’re already seeing AR act as a catalyst for major change, driving advances in everything from industrial machines to consumer electronics. It’s also pushing new frontiers in education, entertainment, and health care.

But as with any new technology, there are inherent risks we should acknowledge, anticipate, and deal with as soon as possible. If we do so, these technologies are likely to continue to thrive. Some industry watchers are forecasting a combined AR/VR market value of $108 billion by 2021, as businesses of all sizes take advantage of AR to change the way their customers interact with the world around them in ways previously only possible in science fiction.

As wonderful as AR is and will continue to be, there are some serious privacy and security pitfalls, including dangers to physical safety, that as an industry we need to collectively avoid. There are also ongoing threats from cybercriminals and nation-states bent on political chaos and worse – to say nothing of easily distracted teenagers who fail to exercise judgment – all creating virtual landmines that could slow or even derail the success of AR. We love AR, and that's why we're calling out these issues now to raise awareness.

Without widespread familiarity with the potential pitfalls, as well as robust self-regulation, AR will not only suffer from systemic security issues; it may also be subject to stringent government oversight that slows innovation, or even threaten existing First Amendment rights. In a climate where technology has come under attack from many fronts for unintended consequences and vulnerabilities – including Russian interference with the 2016 election and ever-growing incidents of hacking and malware – we should work together to make sure this doesn't happen.

If anything causes government overreach in this area, it’ll likely be safety and privacy issues. An example of these concerns is shown in this dystopian video, in which a fictional engineer is able to manipulate both his own reality and that of others via retinal AR implants. Because AR by design blurs the divide between the digital and real worlds, threats to physical safety, job security, and digital identity can emerge in ways that were simply inconceivable in a world populated solely by traditional computers.

While far from exhaustive, the lists below present some of the pitfalls, as well as possible remedies for AR. Think of these as a starting point, beginning with pitfalls:

  • AR can cause big identity and property problems: Catching Pokémon on a sidewalk or receiving a Valentine on a coffee cup at Starbucks is really just scratching the surface of AR's capabilities. On a fundamental level, we could lose the power to control how people see us. Imagine a virtual, 21st-century equivalent of a sticky note with the words "kick me" stuck to some poor victim's back. What if that note were digital, and the person couldn't remove it? Even more seriously, AR could be used to create a digital doppelganger of someone doing something compromising or illegal. AR might also be used to add indelible graffiti to a house, business, sign, product, or art exhibit, raising serious property concerns.
  • AR can threaten our privacy: Remember Google Glass and "Glassholes?" If a woman was physically confronted in a San Francisco dive bar just for wearing Google Glass (reportedly, her ability to capture the happenings at the bar on video was not appreciated by other patrons), imagine what might happen with true AR and privacy. We may soon see the emergence of virtual dressing rooms, which would allow customers to try on clothing before purchasing online. A similar technology could be used to overlay virtual nudity onto someone without their permission. With AR wearables, for example, someone could surreptitiously take pictures of another person and publish them in real time, along with geotagged metadata. There are clear points at which the problem moves from the domain of creepiness to harassment and potentially to a safety concern.
  • AR can cause physical harm: Although hacking bank accounts and IoT devices can wreak havoc, these events don't often lead to physical harm. With AR, however, this changes drastically when it is superimposed on the real world. AR can increase distractions and make travel more hazardous. As it becomes more common, over-reliance on AR navigation will leave consumers vulnerable to buggy or hacked GPS overlays that can manipulate drivers or pilots, making our outside world less safe. For example, if a bus driver's AR headset or heads-up display starts showing illusory deer on the road, that's a clear physical danger to pedestrians, passengers, and other drivers.
  • AR could launch disturbing career arms races: As AR advances, it can improve everything from individual productivity to worker data access, significantly impacting job performance. Eventually, workers trained and experienced in AR technology might be preferred over those without such skills. That could lead to an even wider gap between so-called digital elites and those without such digital familiarity. More disturbingly, we might see something of an arms race in which a worker with eye implants, as depicted in the film mentioned above, performs with higher productivity, creating a competitive advantage over those who haven't had the surgery. The person in the next cubicle could then feel pressure to do the same just to remain competitive in the job market.

How can we address and resolve these challenges? Here are some initial suggestions and guidelines to help get the conversation started:

  • Industry standards: Establish a sort of AR governing body that would evaluate, debate and then publish standards for developers to follow. Along with this, develop a centralized digital service akin to air traffic control for AR that classifies public, private and commercial spaces as well as establishes public areas as either safe or dangerous for AR use.
  • A comprehensive feedback system: Communities should feel empowered to voice their concerns. When it comes to AR, a strong and responsive mechanism for reporting vendors that don't comply with AR safety, privacy, and security standards will go a long way toward driving consumer trust in next-gen AR products.
  • Responsible AR development and investment: Entrepreneurs and investors need to care about these issues when developing and backing AR products. They should follow a basic moral compass and not simply chase dollars and market share.
  • Guardrails for real-time AR screenshots: Rather than disallowing real-time AR screenshots entirely, control them through mechanisms such as geofencing; see the sketch after this list. For example, an establishment such as a nightclub would need to set and publish its own rules, which are then enforced by hardware or software.
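To make the geofencing idea in the last item concrete, here is a minimal sketch (the venue, coordinates, and radius are hypothetical) that permits an AR screenshot only outside published no-capture zones:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    p1, p2 = radians(lat1), radians(lat2)
    dlat, dlon = p2 - p1, radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(p1) * cos(p2) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

# Hypothetical published policy: a nightclub bans AR capture within 50 m.
NO_CAPTURE_ZONES = [
    {"name": "Example Nightclub", "lat": 37.7749, "lon": -122.4194, "radius_m": 50},
]

def capture_allowed(lat: float, lon: float) -> bool:
    """Allow an AR screenshot only outside every registered zone."""
    return all(
        haversine_m(lat, lon, z["lat"], z["lon"]) > z["radius_m"]
        for z in NO_CAPTURE_ZONES
    )

print(capture_allowed(37.7749, -122.4194))  # False: inside the club's zone
print(capture_allowed(37.8000, -122.4194))  # True: ~2.8 km away
```

The enforcement point matters as much as the check itself: a zone registry is only useful if the capture hardware or OS consults it before saving a frame.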

While ambitious companies focus on innovation, they must also be vigilant about the potential hazards of those breakthroughs. In the case of AR, working to proactively wrestle with the challenges around identity, privacy and security will help mitigate the biggest hurdles to the success of this exciting new technology.

Recognizing risks to consumer safety and privacy is only the first step to resolving long-term vulnerabilities that rapidly emerging new technologies like AR create. Since AR blurs the line between the real world and the digital one, it's imperative that we consider the repercussions of this technology alongside its compelling possibilities. As innovators, we have a duty to usher in new technologies responsibly and thoughtfully so that they improve society in ways that can't also be abused; we need to anticipate problems and police ourselves. If we don't safeguard our breakthroughs and the consumers who use them, someone else will.
