Timesdelhi.com

March 19, 2019
Category archive

data security

What business leaders can learn from Jeff Bezos’ leaked texts

The ‘below the belt selfie’ media circus surrounding Jeff Bezos has made encrypted communications top of mind among nervous executive handlers. Their assumption is that a product with serious cryptography like Wickr – where I work – or Signal could have helped Mr. Bezos and Amazon avoid this drama.

It’s a good assumption, but a troubling conclusion.

I worry that moments like these will drag serious cryptography down to the level of the National Enquirer. I’m concerned that this media cycle may lead people to view privacy and cryptography as a safety net for billionaires rather than a transformative solution for data minimization and privacy.

We live in the chapter of computing when data is mostly unprotected because of corporate indifference. The leaders of our new economy – like the vast majority of society – value convenience and short-term gratification over the security and privacy of consumer, employee and corporate data.  

We cannot let this media cycle pass without recognizing that when corporate executives take a laissez-faire approach to digital privacy, their employees and organizations will follow suit.

Two recent examples illustrate the privacy indifference of our leaders:

  • The most powerful executive in the world was either indifferent to, or unaware of, the risk that unencrypted online flirtations could be accessed by nation states and competitors.
  • The 2016 presidential campaigns were either indifferent to, or unaware of, the risk that unencrypted online communications detailing “off-the-record” correspondence with media and payments to adult actor(s) could be accessed by nation states and competitors.

If our leaders do not respect and understand online security and privacy, then their organizations will not make data protection a priority. It’s no surprise that we see a constant stream of large corporations and federal agencies breached by nation states and competitors. Who then can we look to for leadership?

GDPR is an early attempt by regulators to lead. The European Union enacted GDPR to ensure individuals own their data and to enforce penalties on companies that do not protect personal data. It applies to all data processors, but the EU is clearly focused on sending a message to the large US-based data processors – Amazon, Facebook, Google, Microsoft, etc. In January, France’s National Data Protection Commission sent a message by fining Google $57 million for breaching GDPR rules. It was an unprecedented fine that garnered international attention. However, we must remember that in 2018 Google’s revenues were greater than $300 million … per day! GDPR is, at best, an annoying speed-bump in the monetization strategy of large data processors.
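The scale mismatch is easy to make concrete. Using only the figures cited above, a quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope comparison of the CNIL fine against Google's revenue,
# using the figures cited above: a $57M fine vs. >$300M in revenue per day (2018).
fine_usd = 57_000_000
revenue_per_day_usd = 300_000_000

days_of_revenue = fine_usd / revenue_per_day_usd
hours_of_revenue = days_of_revenue * 24

print(f"Fine equals {days_of_revenue:.2f} days of revenue")  # ~0.19 days
print(f"...or about {hours_of_revenue:.1f} hours")           # ~4.6 hours
```

A record-setting fine, in other words, costs the company less than five hours of revenue.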

It is through this lens that Senator Ron Wyden’s (Oregon) idealistic call for billions of dollars in corporate fines and jail time for executives who enable privacy breaches can be seen as reasonable. When record financial penalties are inconsequential it is logical to pursue other avenues to protect our data.

Real change will come when our leaders understand that data privacy and security can increase profitability and reliability. For example, the Compliance, Governance and Oversight Council reports that an enterprise will spend as much as $50 million to protect 10 petabytes of data, and that $34.5 million of this is spent on protecting data that should be deleted. Serious efficiencies are waiting to be realized and serious cryptography can help.  
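The CGOC figures above imply a substantial recoverable spend. As a quick sketch of the arithmetic (decimal terabytes assumed):

```python
# Efficiency calculation from the CGOC figures cited above:
# $50M to protect 10 PB, of which $34.5M protects data that should be deleted.
total_cost_usd = 50_000_000
wasted_cost_usd = 34_500_000
petabytes = 10

wasted_share = wasted_cost_usd / total_cost_usd        # 0.69 -> 69% of spend
cost_per_tb = total_cost_usd / (petabytes * 1000)      # $5,000 per TB protected
savings_per_tb = wasted_share * cost_per_tb            # recoverable via deletion

print(f"{wasted_share:.0%} of protection spend covers deletable data")
print(f"${savings_per_tb:,.0f} per TB recoverable by deleting it instead")
```

On those numbers, more than two-thirds of the protection budget is spent guarding data that data minimization would simply eliminate.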

So, thank you Mr. Bezos for igniting corporate interest in secure communications. Let’s hope this news cycle convinces our corporate leaders and elected officials to embrace data privacy, protection and minimization because it is responsible, profitable and efficient. We need leaders and elected officials to set an example and respect their own data and privacy if we have any hope of their organizations protecting ours.

News Source = techcrunch.com

Massive mortgage and loan data leak gets worse as original documents also exposed

Remember that massive data leak of mortgage and loan data we reported on Wednesday?

In case you missed it, millions of documents were found leaking after an exposed Elasticsearch server was found without a password. The data contained highly sensitive financial information on tens of thousands of individuals who took out loans or mortgages over the past decade with U.S. financial institutions. The documents had been converted from their original paper form to a computer-readable format using OCR and stored in the database, and while they weren’t easy to read, anyone who knew where to find the server could discern names, addresses, birth dates, Social Security numbers and other private financial data.

Independent security researcher Bob Diachenko and TechCrunch traced the source of the leaking database to a Texas-based data and analytics company, Ascension. When reached, the company said that one of its vendors, OpticsML, a New York-based document management startup, had mishandled the data and was to blame for the data leak.

It turns out that data was exposed again — but this time, it was the original documents.

Diachenko found the second trove of data in a separate exposed Amazon S3 storage server, which was also not protected with a password. Anyone who went to an easy-to-guess web address in their web browser could have accessed the storage server and viewed, and downloaded, the files stored inside.

In a note to TechCrunch, Diachenko said he was “very surprised” to find the server in the first place, let alone open and accessible. Because Amazon storage servers are private by default and aren’t accessible from the web, someone must have made a conscious decision to set its permissions to public.
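That permission decision is detectable in code. As an illustrative sketch (the helper below is our own, not an AWS API, though the dict shape mirrors what boto3’s `get_bucket_acl` returns), a bucket is effectively public when its ACL grants access to Amazon’s predefined AllUsers group:

```python
# Sketch: detect the kind of misconfiguration described above.
# A bucket is readable by anyone when its ACL contains a grant to the
# predefined "AllUsers" group, identified by this well-known URI.
ALL_USERS_URI = "http://acs.amazonaws.com/groups/global/AllUsers"

def is_publicly_readable(acl: dict) -> bool:
    """Return True if any grant exposes the bucket to anonymous users."""
    for grant in acl.get("Grants", []):
        grantee = grant.get("Grantee", {})
        if (grantee.get("Type") == "Group"
                and grantee.get("URI") == ALL_USERS_URI
                and grant.get("Permission") in ("READ", "FULL_CONTROL")):
            return True
    return False

# A grant like this is what "setting permissions to public" produces:
public_acl = {"Grants": [{"Grantee": {"Type": "Group", "URI": ALL_USERS_URI},
                          "Permission": "READ"}]}
# The default: only the bucket owner has access.
private_acl = {"Grants": [{"Grantee": {"Type": "CanonicalUser", "ID": "owner"},
                           "Permission": "FULL_CONTROL"}]}

print(is_publicly_readable(public_acl))   # True
print(is_publicly_readable(private_acl))  # False
```

Checks of exactly this shape are why researchers like Diachenko can find open buckets: the misconfiguration is visible to anyone who asks.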

The bucket contained 21 files comprising 23,000 pages of PDF documents stitched together — or about 1.3 gigabytes in size. Diachenko said that portions of the data in the exposed Elasticsearch database on Wednesday matched data found in the Amazon S3 bucket, confirming that some or all of the data is the same as what was previously discovered. As in Wednesday’s report, the server contained documents from banks and financial institutions across the U.S., including loans and mortgage agreements. We also found documents from the U.S. Department of Housing and Urban Development, as well as W-2 tax forms, loan repayment schedules, and other sensitive financial information.

Two of the files — redacted — found on the exposed storage server. (Image: TechCrunch)

Many of the files also contained names, addresses, phone numbers, Social Security numbers and more.

When we tried to reach OpticsML on Wednesday, its website had been pulled offline and the listed phone number was disconnected. After scouring through old cached versions of the site, we found an email address.

TechCrunch emailed chief executive Sean Lanning, and the bucket was secured within the hour.

Lanning acknowledged our email but did not comment. Instead, OpticsML chief technology officer John Brozena confirmed the breach in a separate email, but declined to answer several questions about the exposed data — including how long the bucket was open and why it was set to public.

“We are working with the appropriate authorities and a forensic team to analyze the full extent of the situation regarding the exposed Elasticsearch server,” said Brozena. “As part of this investigation we learned that 21 documents used for testing were made identifiable by the previously discussed Elasticsearch leak. These documents were taken offline promptly.”

He added that OpticsML is “working to notify all affected parties” when asked about informing customers and state regulators, as per state data breach notification laws.

But Diachenko said there was no telling how many times the bucket might have been accessed before it was discovered.

“I would assume that after such publicity like these guys had, first thing you would do is to check if your cloud storage is down or, at least, password-protected,” he said.

News Source = techcrunch.com

Mozilla adds website breach notifications to Firefox

Mozilla is adding a new security feature to its Firefox Quantum web browser that will alert users when they visit a website that has recently reported a data breach.

When a Firefox user lands on a website with a breach in its recent past, they’ll see a pop-up notification informing them of the barebones details of the breach and suggesting they check to see if their information was compromised.

“We’re bringing this functionality to Firefox users in recognition of the growing interest in these types of privacy- and security-centric features,” Mozilla said today. “This new functionality will gradually roll out to Firefox users over the coming weeks.”

Here’s an example of what the site breach notifications look like and the kind of detail they will provide:

Mozilla’s website breach notification feature in Firefox

Mozilla is tying the site breach notification feature to an email account breach notification service it launched earlier this year, called Firefox Monitor, which it also said today is now available in an additional 26 languages.

Firefox users can click through to Monitor when they get a pop-up about a site breach to check whether their own email was involved.

As with Firefox Monitor, Mozilla is relying on a list of breached websites provided by its partner, Troy Hunt’s pioneering breach notification service, Have I Been Pwned.

There can of course be a fine line between feeling informed and feeling spammed with too much information when you’re just trying to get on with browsing the web. But Mozilla looks to be sensitive to that: it limits breach notifications to one per breached site, and it will only raise a flag if the breach itself occurred in the past 12 months.
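That notification policy (at most one alert per breached site, and only for breaches less than 12 months old) can be sketched roughly as follows; the function and its persistence of "already seen" sites are our own simplification, while the `Domain` and `BreachDate` record fields mirror Have I Been Pwned's breach model:

```python
# Rough sketch of the policy described above: alert on a visited site only if
# it has a breach within the last 12 months, and only once per site.
from datetime import date, timedelta

def should_notify(site, breaches, already_notified, today=None):
    """Decide whether to show a breach pop-up for this site visit."""
    today = today or date.today()
    if site in already_notified:              # at most one alert per site
        return False
    cutoff = today - timedelta(days=365)      # breach must be <12 months old
    for b in breaches:
        if b["Domain"] == site and date.fromisoformat(b["BreachDate"]) >= cutoff:
            already_notified.add(site)
            return True
    return False

seen = set()
breaches = [{"Domain": "example.com", "BreachDate": "2018-06-01"}]
print(should_notify("example.com", breaches, seen, today=date(2018, 11, 14)))  # True
print(should_notify("example.com", breaches, seen, today=date(2018, 11, 14)))  # False: already shown
```

The two limits together are what keep the feature on the "informed" side of the line: recency makes the alert actionable, and deduplication keeps it from becoming noise.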

Data breaches are an unfortunate staple of digital life, stepping up in recent years in frequency and size along with big data services. That in turn has cranked up awareness of the problem. And in Europe tighter laws were introduced this May to bring in a universal breach disclosure requirement and raise penalties for data protection failures.

The GDPR framework also generally encourages data controllers and processors to improve their security systems given the risk of much heftier fines.

It will likely take some time, though, for any increase in security investment triggered by the regulation to filter down and translate into fewer breaches, if indeed the law ends up having that hoped-for impact.

But one early win for GDPR is it has greased the pipe for companies to promptly disclose breaches. This means it’s helping to generate more up-to-date security information which consumers can in turn use to inform the digital choices they make. So the regulation looks to be generating positive incentives.

News Source = techcrunch.com

Cognigo raises $8.5M for its AI-driven data protection platform

Cognigo, a startup that aims to use AI and machine learning to help enterprises protect their data and stay in compliance with regulations like GDPR, today announced that it has raised an $8.5 million Series A round. The round was led by Israel-based crowdfunding platform OurCrowd, with participation from privacy company Prosegur and State of Mind Ventures.

The company promises that it can help businesses protect their critical data assets and prevent personally identifiable information from leaking outside of the company’s network. And it says it can do so without the kind of hands-on management that’s often required in setting these kinds of systems up and managing them over time. Indeed, Cognigo says that it can help businesses achieve GDPR compliance in days instead of months.

To do this, the company tells me, it’s using pre-trained language models for data classification. That model has been trained to detect common categories like payslips, patents, NDAs and contracts. Organizations can also provide their own data samples to further train the model and customize it for their own needs. “The only human intervention required is during the systems configuration process which would take no longer than a single day’s work,” a company spokesperson told me. “Apart from that, the system is completely human-free.”
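Cognigo’s actual models are proprietary and not public. Purely as a toy illustration of classifying documents into pre-trained categories like the ones named above, here is a trivial keyword scorer (all keyword lists and names below are invented for the example and bear no relation to Cognigo’s system):

```python
# Toy document classifier: score each category by how many of its keyword
# phrases appear in the text, and pick the highest-scoring category.
# Keyword lists are illustrative only.
CATEGORY_KEYWORDS = {
    "payslip": {"gross pay", "net pay", "deductions", "pay period"},
    "nda": {"confidential information", "disclosing party", "non-disclosure"},
    "contract": {"hereinafter", "terms and conditions", "governing law"},
}

def classify(text: str) -> str:
    """Return the best-matching category, or 'unknown' if nothing matches."""
    lowered = text.lower()
    scores = {cat: sum(kw in lowered for kw in kws)
              for cat, kws in CATEGORY_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(classify("Pay period: May. Gross pay $5,000, net pay $3,800 after deductions."))
# -> payslip
```

A production system would replace the keyword sets with a trained language model, which is what allows customers to extend it with their own document samples.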

The company tells me that it plans to use the new funding to expand its R&D, marketing and sales teams, all with the goal of expanding its market presence and enhancing awareness of its product. “Our vision is to ensure our customers can use their data to make smart business decisions while making sure that the data is continuously protected and in compliance,” the company says.

News Source = techcrunch.com

Children are being “datafied” before we’ve understood the risks, report warns

A report by England’s children’s commissioner has raised concerns about how kids’ data is being collected and shared across the board, in both the private and public sectors.

In the report, entitled Who knows what about me?, Anne Longfield urges society to “stop and think” about what big data means for children’s lives.

Big data practices could result in a data-disadvantaged generation whose life chances are shaped by their childhood data footprint, her report warns.

The long-term impact of profiling minors once these children become adults is simply not known, she writes.

“Children are being ‘datafied’ – not just via social media, but in many aspects of their lives,” says Longfield.

“For children growing up today, and the generations that follow them, the impact of profiling will be even greater – simply because there is more data available about them.”

By the time a child is 13, their parents will have posted an average of 1,300 photos and videos of them on social media, according to the report. After that, the data mountain “explodes” as children themselves start engaging on the platforms — posting to social media 26 times per day, on average, and amassing a total of nearly 70,000 posts by age 18.

“We need to stop and think about what this means for children’s lives now and how it may impact on their future lives as adults,” warns Longfield. “We simply do not know what the consequences of all this information about our children will be. In the light of this uncertainty, should we be happy to continue forever collecting and sharing children’s data?

“Children and parents need to be much more aware of what they share and consider the consequences. Companies that make apps, toys and other products used by children need to stop filling them with trackers, and put their terms and conditions in language that children understand. And crucially, the Government needs to monitor the situation and refine data protection legislation if needed, so that children are genuinely protected – especially as technology develops,” she adds.

The report looks at what types of data are being collected on kids; where and by whom; and how they might be used in the short and long term — both for the benefit of children but also considering potential risks.

On the benefits side, the report cites a variety of still fairly experimental ideas that might make positive use of children’s data — such as for targeted inspections of services for kids to focus on areas where data suggests there are problems; NLP technology to speed up analysis of large data-sets (such as the NSPCC’s national case review repository) to find common themes and understand “how to prevent harm and promote positive outcomes”; predictive analytics using data from children and adults to more cost-effectively flag “potential child safeguarding risks to social workers”; and digitizing children’s Personal Child Health Record to make the current paper-based record more widely accessible to professionals working with children.

But while Longfield describes the increasing availability of data as offering “enormous advantages”, she is also very clear on major risks unfolding — be it to safety and well-being; child development and social dynamics; identity theft and fraud; and the longer term impact on children’s opportunity and life chances.

“In effect [children] are the ‘canary in the coal mine’ for wider society, encountering the risks before many adults become aware of them or are able to develop strategies to mitigate them,” she warns. “It is crucial that we are mindful of the risks and mitigate them.”

Transparency is lacking

One clear takeaway from the report is there is still a lack of transparency about how children’s data is being collected and processed — which in itself acts as a barrier to better understanding the risks.

“If we better understood what happens to children’s data after it is given – who collects it, who it is shared with and how it is aggregated – then we would have a better understanding of what the likely implications might be in the future, but this transparency is lacking,” Longfield writes — noting that this is true despite ‘transparency’ being the first key principle set out in the EU’s tough new privacy framework, GDPR.

The updated data protection framework did beef up protections for children’s personal data in Europe — introducing a new provision setting a 16-year-old age limit on kids’ ability to consent to their data being processed when it came into force on May 25, for example. (Although EU Member States can choose to write a lower age limit into their laws, it can be set no lower than 13.)

And mainstream social media apps, such as Facebook and Snapchat, responded by tweaking their T&Cs and/or products in the region. (Although some of the parental consent systems that were introduced to claim compliance with GDPR appear trivially easy for kids to bypass, as we’ve pointed out before.)

But, as Longfield points out, Article 5 of the GDPR states that data must be “processed lawfully, fairly and in a transparent manner in relation to individuals”.

Yet when it comes to children’s data the children’s commissioner says transparency is simply not there.

She also sees limitations with GDPR, from a children’s data protection perspective — pointing out that, for example, it does not prohibit the profiling of children entirely (stating only that it “should not be the norm”).

While another provision, Article 22 — which states that children have the right not to be subject to decisions based solely on automated processing (including profiling) if they have legal or similarly significant effects on them — also appears to be circumventable.

“They do not apply to decision-making where humans play some role, however minimal that role is,” she warns, which suggests another workaround for companies to exploit children’s data.

“Determining whether an automated decision-making process will have “similarly significant effects” is difficult to gauge given that we do not yet understand the full implications of these processes – and perhaps even more difficult to judge in the case of children,” Longfield also argues.

“There is still much uncertainty around how Article 22 will work in respect of children,” she adds. “The key area of concern will be in respect of any limitations in relation to advertising products and services and associated data protection practices.”

Recommendations

The report makes a series of recommendations for policymakers, with Longfield calling for schools to “teach children about how their data is collected and used, and what they can do to take control of their data footprints”.

She also presses the government to consider introducing an obligation on platforms that use “automated decision-making to be more transparent about the algorithms they use and the data fed into these algorithms” — where data collected from under 18s is used.

Which would essentially place additional requirements on all mainstream social media platforms to be far less opaque about the AI machinery they use to shape and distribute content on their platforms at vast scale. After all, few, if any, could claim to have no under-18s using their platforms.

She also argues that companies targeting products at children have far more explaining to do, writing: 

Companies producing apps, toys and other products aimed at children should be more transparent about any trackers capturing information about children. In particular where a toy collects any video or audio generated by a child this should be made explicit in a prominent part of the packaging or its accompanying information. It should be clearly stated if any video or audio content is stored on the toy or elsewhere and whether or not it is transmitted over the internet. If it is transmitted, parents should also be told whether or not it will be encrypted during transmission or when stored, who might analyse or process it and for what purposes. Parents should ask if information is not given or unclear.

Another recommendation for companies is that terms and conditions should be written in a language children can understand.

(Albeit, as it stands, tech industry T&Cs can be hard enough for adults to scratch the surface of — let alone have enough hours in the day to actually read.)

A recent U.S. study of kids apps, covered by BuzzFeed News, highlighted that mobile games aimed at kids can be highly manipulative, describing instances of apps making their cartoon characters cry if a child does not click on an in-app purchase, for example.

A key and contrasting problem with data processing is that it’s so murky: it is applied in the background, so any harms are far less immediately visible, because only the data processor truly knows what’s being done with people’s — and indeed children’s — information.

Yet concerns about exploitation of personal data are stepping up across the board, and they now essentially touch all sectors and segments of society, even as the risks where kids are concerned may look the most stark.

This summer the UK’s privacy watchdog called for an ethical pause on the use by political campaigns of online ad targeting tools, for example, citing a range of concerns that data practices have got ahead of what the public knows and would accept.

It also called for the government to come up with a Code of Practice for digital campaigning to ensure that long-standing democratic norms are not being undermined.

So the children’s commissioner’s appeal for a collective ‘stop and think’ where the use of data is concerned is just one of a growing number of raised voices policymakers are hearing.

One thing is clear: Calls to quantify what big data means for society — to ensure powerful data-mining technologies are being applied in ways that are ethical and fair for everyone — aren’t going anywhere.

News Source = techcrunch.com
