Timesdelhi.com

December 12, 2018
Category archive

data security

Mozilla adds website breach notifications to Firefox

Mozilla is adding a new security feature to its Firefox Quantum web browser that will alert users when they visit a website that has recently reported a data breach.

When a Firefox user lands on a website with a breach in its recent past, they’ll see a pop-up notification informing them of the barebones details of the breach and suggesting they check to see if their information was compromised.

“We’re bringing this functionality to Firefox users in recognition of the growing interest in these types of privacy- and security-centric features,” Mozilla said today. “This new functionality will gradually roll out to Firefox users over the coming weeks.”

Here’s an example of what the site breach notifications look like and the kind of detail they will provide:

Mozilla’s website breach notification feature in Firefox

Mozilla is tying the site breach notification feature to an email account breach notification service it launched earlier this year, called Firefox Monitor, which it also said today is now available in an additional 26 languages.

Firefox users can click through to Monitor when they get a pop-up about a site breach to check whether their own email was involved.

As with Firefox Monitor, Mozilla is relying on a list of breached websites provided by its partner, Troy Hunt’s pioneering breach notification service, Have I Been Pwned.

There can of course be a fine line between feeling informed and feeling spammed with too much information when you’re just trying to get on with browsing the web. But Mozilla looks to be sensitive to that, because it’s limiting breach notifications to one per breached site. It will also only raise a flag if the breach itself occurred in the past 12 months.
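
For illustration, here is a minimal sketch of how that kind of recency check could be performed against Have I Been Pwned’s public, unauthenticated breaches endpoint. The endpoint, field names and filtering logic below are assumptions based on HIBP’s published API docs; this is not Mozilla’s actual Firefox code.

```python
# Minimal sketch: check whether a site has a breach reported in the past
# 12 months, using Have I Been Pwned's public "breaches" endpoint.
# Illustrative only -- not Mozilla's Firefox implementation.
from datetime import date, timedelta
import json
import urllib.request

HIBP_BREACHES_URL = "https://haveibeenpwned.com/api/v3/breaches"

def recent_breaches(domain: str, window_days: int = 365):
    """Return breaches for `domain` whose breach date falls inside the window."""
    req = urllib.request.Request(
        f"{HIBP_BREACHES_URL}?domain={domain}",
        headers={"User-Agent": "breach-notification-sketch"},  # HIBP requires a UA
    )
    with urllib.request.urlopen(req) as resp:
        breaches = json.load(resp)

    cutoff = date.today() - timedelta(days=window_days)
    return [b for b in breaches if date.fromisoformat(b["BreachDate"]) >= cutoff]

if __name__ == "__main__":
    # One notification per breached site is the article's stated limit;
    # de-duplication state would live in the browser profile.
    for breach in recent_breaches("example.com"):
        print(breach["Title"], breach["BreachDate"], breach["PwnCount"])
```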

Data breaches are an unfortunate staple of digital life, stepping up in recent years in frequency and size along with big data services. That in turn has cranked up awareness of the problem. And in Europe tighter laws were introduced this May to bring in a universal breach disclosure requirement and raise penalties for data protection failures.

The GDPR framework also generally encourages data controllers and processors to improve their security systems given the risk of much heftier fines.

It will likely take some time, though, for any increase in security investment triggered by the regulation to filter down and translate into fewer breaches — if indeed the law ends up having that hoped-for impact.

But one early win for GDPR is it has greased the pipe for companies to promptly disclose breaches. This means it’s helping to generate more up-to-date security information which consumers can in turn use to inform the digital choices they make. So the regulation looks to be generating positive incentives.

News Source = techcrunch.com

Cognigo raises $8.5M for its AI-driven data protection platform

Cognigo, a startup that aims to use AI and machine learning to help enterprises protect their data and stay in compliance with regulations like GDPR, today announced that it has raised an $8.5 million Series A round. The round was led by Israel-based crowdfunding platform OurCrowd, with participation from privacy company Prosegur and State of Mind Ventures.

The company promises that it can help businesses protect their critical data assets and prevent personally identifiable information from leaking outside of the company’s network. And it says it can do so without the kind of hands-on management that’s often required in setting these kinds of systems up and managing them over time. Indeed, Cognigo says that it can help businesses achieve GDPR compliance in days instead of months.

To do this, the company tells me, it’s using pre-trained language models for data classification. That model has been trained to detect common categories like payslips, patents, NDAs and contracts. Organizations can also provide their own data samples to further train the model and customize it for their own needs. “The only human intervention required is during the systems configuration process which would take no longer than a single day’s work,” a company spokesperson told me. “Apart from that, the system is completely human-free.”
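
As a rough illustration of the pattern described above (a pre-trained language model sorting documents into common sensitivity categories, which an organization can then customize with its own samples), the sketch below uses the open-source Hugging Face zero-shot classification pipeline as a stand-in. The model, category names and code are illustrative assumptions, not Cognigo’s actual system.

```python
# Generic sketch: use a pre-trained language model to sort documents into
# sensitivity categories. A stand-in illustration, NOT Cognigo's product.
from transformers import pipeline

CATEGORIES = ["payslip", "patent", "non-disclosure agreement", "contract", "other"]

# Downloads a pre-trained model on first use; no task-specific training needed.
classifier = pipeline("zero-shot-classification")

def classify_document(text: str) -> str:
    """Return the most likely category for a document's text."""
    result = classifier(text, candidate_labels=CATEGORIES)
    return result["labels"][0]  # labels come back sorted by descending score

print(classify_document("This agreement prohibits disclosure of confidential information..."))
# An organization could go further by fine-tuning the underlying model on its
# own labelled samples -- the customization step the company describes.
```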

The company tells me that it plans to use the new funding to expand its R&D, marketing and sales teams, all with the goal of expanding its market presence and enhancing awareness of its product. “Our vision is to ensure our customers can use their data to make smart business decisions while making sure that the data is continuously protected and in compliance,” the company says.

News Source = techcrunch.com

Children are being “datafied” before we’ve understood the risks, report warns

A report by England’s children’s commissioner has raised concerns about how kids’ data is being collected and shared across the board, in both the private and public sectors.

In the report, entitled Who knows what about me?, Anne Longfield urges society to “stop and think” about what big data means for children’s lives.

Big data practices could result in a data-disadvantaged generation whose life chances are shaped by their childhood data footprint, her report warns.

The long-term impacts of profiling minors, once those children become adults, are simply not known, she writes.

“Children are being “datafied” – not just via social media, but in many aspects of their lives,” says Longfield.

“For children growing up today, and the generations that follow them, the impact of profiling will be even greater – simply because there is more data available about them.”

By the time a child is 13 their parents will have posted an average of 1,300 photos and videos of them on social media, according to the report. After which this data mountain “explodes” as children themselves start engaging on the platforms — posting to social media 26 times per day, on average, and amassing a total of nearly 70,000 posts by age 18.

“We need to stop and think about what this means for children’s lives now and how it may impact on their future lives as adults,” warns Longfield. “We simply do not know what the consequences of all this information about our children will be. In the light of this uncertainty, should we be happy to continue forever collecting and sharing children’s data?

“Children and parents need to be much more aware of what they share and consider the consequences. Companies that make apps, toys and other products used by children need to stop filling them with trackers, and put their terms and conditions in language that children understand. And crucially, the Government needs to monitor the situation and refine data protection legislation if needed, so that children are genuinely protected – especially as technology develops,” she adds.

The report looks at what types of data are being collected on kids; where and by whom; and how that data might be used in the short and long term — both for the benefit of children but also considering potential risks.

On the benefits side, the report cites a variety of still fairly experimental ideas that might make positive use of children’s data — such as for targeted inspections of services for kids to focus on areas where data suggests there are problems; NLP technology to speed up analysis of large data-sets (such as the NSPCC’s national case review repository) to find common themes and understand “how to prevent harm and promote positive outcomes”; predictive analytics using data from children and adults to more cost-effectively flag “potential child safeguarding risks to social workers”; and digitizing children’s Personal Child Health Record to make the current paper-based record more widely accessible to professionals working with children.

But while Longfield describes the increasing availability of data as offering “enormous advantages”, she is also very clear on major risks unfolding — be it to safety and well-being; child development and social dynamics; identity theft and fraud; and the longer term impact on children’s opportunity and life chances.

“In effect [children] are the “canary in the coal mine” for wider society, encountering the risks before many adults become aware of them or are able to develop strategies to mitigate them,” she warns. “It is crucial that we are mindful of the risks and mitigate them.”

Transparency is lacking

One clear takeaway from the report is there is still a lack of transparency about how children’s data is being collected and processed — which in itself acts as a barrier to better understanding the risks.

“If we better understood what happens to children’s data after it is given – who collects it, who it is shared with and how it is aggregated – then we would have a better understanding of what the likely implications might be in the future, but this transparency is lacking,” Longfield writes — noting that this is true despite ‘transparency’ being the first key principle set out in the EU’s tough new privacy framework, GDPR.

The updated data protection framework did beef up protections for children’s personal data in Europe — introducing a new provision setting a 16-year-old age limit on kids’ ability to consent to their data being processed when it came into force on May 25, for example. (Although EU Member States can choose to write a lower age limit into their laws, with a hard cap set at 13.)

And mainstream social media apps, such as Facebook and Snapchat, responded by tweaking their T&Cs and/or products in the region. (Although some of the parental consent systems that were introduced to claim compliance with GDPR appear trivially easy for kids to bypass, as we’ve pointed out before.)

But, as Longfield points out, Article 5 of the GDPR states that data must be “processed lawfully, fairly and in a transparent manner in relation to individuals”.

Yet when it comes to children’s data the children’s commissioner says transparency is simply not there.

She also sees limitations with GDPR, from a children’s data protection perspective — pointing out that, for example, it does not prohibit the profiling of children entirely (stating only that it “should not be the norm”).

While another provision, Article 22 — which states that children have the right not to be subject to decisions based solely on automated processing (including profiling) if they have legal or similarly significant effects on them — also appears to be circumventable.

“They do not apply to decision-making where humans play some role, however minimal that role is,” she warns, which suggests another workaround for companies to exploit children’s data.

“Determining whether an automated decision-making process will have “similarly significant effects” is difficult to gauge given that we do not yet understand the full implications of these processes – and perhaps even more difficult to judge in the case of children,” Longfield also argues.

“There is still much uncertainty around how Article 22 will work in respect of children,” she adds. “The key area of concern will be in respect of any limitations in relation to advertising products and services and associated data protection practices.”

Recommendations

The report makes a series of recommendations for policymakers, with Longfield calling for schools to “teach children about how their data is collected and used, and what they can do to take control of their data footprints”.

She also presses the government to consider introducing an obligation on platforms that use “automated decision-making to be more transparent about the algorithms they use and the data fed into these algorithms” — where data collected from under 18s is used.

That would essentially place additional requirements on all mainstream social media platforms to be far less opaque about the AI machinery they use to shape and distribute content at vast scale, given that few, if any, could claim to have no under-18s using their platforms.

She also argues that companies targeting products at children have far more explaining to do, writing: 

Companies producing apps, toys and other products aimed at children should be more transparent about any trackers capturing information about children. In particular where a toy collects any video or audio generated by a child this should be made explicit in a prominent part of the packaging or its accompanying information. It should be clearly stated if any video or audio content is stored on the toy or elsewhere and whether or not it is transmitted over the internet. If it is transmitted, parents should also be told whether or not it will be encrypted during transmission or when stored, who might analyse or process it and for what purposes. Parents should ask if information is not given or unclear.

Another recommendation for companies is that terms and conditions should be written in a language children can understand.

(Albeit, as it stands, tech industry T&Cs can be hard enough for adults to scratch the surface of — let alone have enough hours in the day to actually read.)

A recent U.S. study of kids’ apps, covered by BuzzFeed News, highlighted that mobile games aimed at kids can be highly manipulative, describing instances of apps making their cartoon characters cry if a child does not click on an in-app purchase, for example.

A key and contrasting problem with data processing is that it’s so murky: applied in the background, any harms are far less immediately visible, because only the data processor truly knows what’s being done with people’s — and indeed children’s — information.

Yet concerns about the exploitation of personal data are stepping up across the board, and they now touch essentially all sectors and segments of society, even if the risks look most stark where kids are concerned.

This summer the UK’s privacy watchdog called for an ethical pause on the use by political campaigns of online ad targeting tools, for example, citing a range of concerns that data practices have got ahead of what the public knows and would accept.

It also called for the government to come up with a Code of Practice for digital campaigning to ensure that long-standing democratic norms are not being undermined.

So the children’s commissioner’s appeal for a collective ‘stop and think’ where the use of data is concerned is just one of a growing number of raised voices policymakers are hearing.

One thing is clear: Calls to quantify what big data means for society — to ensure powerful data-mining technologies are being applied in ways that are ethical and fair for everyone — aren’t going anywhere.

News Source = techcrunch.com

Apple’s Tim Cook makes blistering attack on the “data industrial complex”

Apple’s CEO Tim Cook has joined the chorus of voices warning that data itself is being weaponized against people and societies — arguing that the trade in digital data has exploded into a “data industrial complex”.

Cook did not namecheck the adtech elephants in the room: Google, Facebook and other background data brokers that profit from privacy-hostile business models. But his target was clear.

“Our own information — from the everyday to the deeply personal — is being weaponized against us with military efficiency,” warned Cook. “These scraps of data, each one harmless enough on its own, are carefully assembled, synthesized, traded and sold.

“Taken to the extreme this process creates an enduring digital profile and lets companies know you better than you may know yourself. Your profile is a bunch of algorithms that serve up increasingly extreme content, pounding our harmless preferences into harm.”

“We shouldn’t sugarcoat the consequences. This is surveillance,” he added.

Cook was giving the keynote speech at the 40th International Conference of Data Protection and Privacy Commissioners (ICDPPC), which is being held in Brussels this year, right inside the European Parliament’s Hemicycle.

“Artificial intelligence is one area I think a lot about,” he told an audience of international data protection experts and policy wonks, which included the inventor of the World Wide Web itself, Sir Tim Berners-Lee, another keynote speaker at the event.

“At its core this technology promises to learn from people individually to benefit us all. But advancing AI by collecting huge personal profiles is laziness, not efficiency,” Cook continued.

“For artificial intelligence to be truly smart it must respect human values — including privacy. If we get this wrong, the dangers are profound. We can achieve both great artificial intelligence and great privacy standards. It is not only a possibility — it is a responsibility.”

That sense of responsibility is why Apple puts human values at the heart of its engineering, Cook said.

In the speech, which we previewed yesterday, he also laid out a positive vision for technology’s “potential for good” — when combined with “good policy and political will”.

“We should celebrate the transformative work of the European institutions tasked with the successful implementation of the GDPR. We also celebrate the new steps taken, not only here in Europe but around the world — in Singapore, Japan, Brazil, New Zealand. In many more nations regulators are asking tough questions — and crafting effective reform.

“It is time for the rest of the world, including my home country, to follow your lead.”

Cook said Apple is “in full support of a comprehensive, federal privacy law in the United States” — making the company’s clearest statement yet of support for robust domestic privacy laws, and earning himself a burst of applause from assembled delegates in the process.

Cook argued for a US privacy law to prioritize four things:

  1. data minimization — “the right to have personal data minimized”, saying companies should “challenge themselves” to de-identify customer data or not collect it in the first place (a rough pseudonymization sketch follows this list)
  2. transparency — “the right to knowledge”, saying users should “always know what data is being collected and what it is being collected for”, calling that the only way to “empower users to decide what collection is legitimate and what isn’t”. “Anything less is a sham,” he added
  3. the right to access — saying companies should recognize that “data belongs to users”, and it should be made easy for users to get a copy of, correct and delete their personal data
  4. the right to security — saying “security is foundational to trust and all other privacy rights”
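
To make the first point a little more concrete, here is a generic pseudonymization sketch: direct identifiers are replaced with keyed hashes before records are kept for analytics. It is an illustrative example of one de-identification technique, not Apple’s practice or a compliance recipe.

```python
# Illustrative sketch of pseudonymization: replace direct identifiers with
# keyed hashes so records can still be joined for analytics without exposing
# the raw identity. Generic example only -- not Apple's practice.
import hmac
import hashlib

# Hypothetical key; in practice it would live in a key-management system.
SECRET_KEY = b"rotate-and-store-this-in-a-key-management-system"

def pseudonymize(identifier: str) -> str:
    """Derive a stable pseudonym from an identifier using HMAC-SHA256."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "jane@example.com", "purchase_total": 42.50}
minimized = {"user_pseudonym": pseudonymize(record["email"]),
             "purchase_total": record["purchase_total"]}
print(minimized)  # no raw email is retained in the analytics record
```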

“We see vividly, painfully how technology can harm, rather than help,” he continued, arguing that platforms can “magnify our worst human tendencies… deepen divisions, incite violence and even undermine our shared sense of what is true or false”.

“This crisis is real. Those of us who believe in technology’s potential for good must not shrink from this moment”, he added, saying the company hopes “to work with you as partners”, and that: “Our missions are closely aligned.”

He also made a sideswipe at tech industry efforts to defang privacy laws — saying that some companies will “endorse reform in public and then resist and undermine it behind closed doors”.

“They may say to you our companies can never achieve technology’s true potential if there were strengthened privacy regulations. But this notion isn’t just wrong it is destructive — technology’s potential is and always must be rooted in the faith people have in it. In the optimism and the creativity that stirs the hearts of individuals. In its promise and capacity to make the world a better place.”

“It’s time to face facts,” Cook added. “We will never achieve technology’s true potential without the full faith and confidence of the people who use it.”

Opening the conference before the Apple CEO took to the stage, Europe’s data protection supervisor Giovanni Buttarelli argued that digitization is driving a new generational shift in the respect for privacy — saying there is an urgent need for regulators and indeed societies to agree on and establish “a sustainable ethics for a digitised society”.

“The so-called ‘privacy paradox’ is not that people have conflicting desires to hide and to expose. The paradox is that we have not yet learned how to navigate the new possibilities and vulnerabilities opened up by rapid digitization,” Buttarelli argued.

“To cultivate a sustainable digital ethics, we need to look, objectively, at how those technologies have affected people in good ways and bad; We need a critical understanding of the ethics informing decisions by companies, governments and regulators whenever they develop and deploy new technologies.”

The EU’s data protection supervisor told an audience largely made up of data protection regulators and policy wonks that laws that merely set a minimum standard are not enough, including the EU’s freshly painted GDPR.

“We need to ask whether our moral compass has been suspended in the drive for scale and innovation,” he said. “At this tipping point for our digital society, it is time to develop a clear and sustainable moral code.”

“We do not have a[n ethical] consensus in Europe, and we certainly do not have one at a global level. But we urgently need one,” he added.

“Not everything that is legally compliant and technically feasible is morally sustainable,” Buttarelli continued, pointing out that “privacy has too easily been reduced to a marketing slogan.

“But ethics cannot be reduced to a slogan.”

“For us as data protection authorities, I believe that ethics is among our most pressing strategic challenges,” he added.

“We have to be able to understand technology, and to articulate a coherent ethical framework. Otherwise how can we perform our mission to safeguard human rights in the digital age?”

News Source = techcrunch.com

DoorDash customers say their accounts have been hacked

Food delivery startup DoorDash has received dozens of complaints from customers who say their accounts have been hacked.

Dozens of people have tweeted at @DoorDash with complaints that their accounts had been improperly accessed and fraudulent food deliveries charged to them. In many cases, the hackers changed the email address on the account so that the user could not regain access until they contacted customer service. Yet many said that they never got a response from DoorDash, or if they did, there was no resolution.

Several Reddit threads also point to similar complaints.

DoorDash is now a $4 billion company after raising $250 million last month, and serves more than 1,000 cities across the U.S. and Canada.

After receiving a tip, TechCrunch contacted some of the affected customers.

Four people we spoke to who had tweeted or commented that their accounts had been hacked said that they had used their DoorDash password on other sites. Three people said they weren’t sure if they used their DoorDash password elsewhere.

But six people we spoke to said that their password was unique to DoorDash, and three confirmed they used a complicated password generated by a password manager.

DoorDash said that there has been no data breach and that the likely culprit was credential stuffing, in which hackers take lists of stolen usernames and passwords and try them on other sites that may use the same credentials.

Yet, when asked, DoorDash could not explain how six accounts with unique passwords were breached.

“We do not have any information to suggest that DoorDash has suffered a data breach,” said spokesperson Becky Sosnov in an email to TechCrunch. “To the contrary, based on the information available to us, including internal investigations, we have determined that the fraudulent activity reported by consumers resulted from credential stuffing.”

The victims that we spoke to said they used either the app or the website, or in some cases both. Some were only alerted when their credit card companies contacted them about possible fraud.

“Simply makes no sense that so many people randomly had their accounts infiltrated for so much money at the same time,” said one victim.

If, as DoorDash claims, credential stuffing is the culprit, we asked if the company would improve its password policy, which currently only requires a minimum of eight characters. We found in our testing that a new user could enter “password” or “12345678” as their password — which have for years ranked in the top five worst passwords.
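
For illustration, a signup flow can reject passwords already known from breach corpora without ever storing or transmitting the full password, using the public Pwned Passwords k-anonymity range API (only the first five characters of the password’s SHA-1 hash are sent). The sketch below is a generic example, not DoorDash’s implementation.

```python
# Minimal sketch of a breached-password check a signup flow could run to
# reject passwords like "password" or "12345678", using the public
# Pwned Passwords k-anonymity range API. Illustrative only.
import hashlib
import urllib.request

def breach_count(password: str) -> int:
    """Return how many times a password appears in known breach corpora."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    # Only the 5-character hash prefix is sent; the password itself never leaves the client.
    with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        for line in resp.read().decode("utf-8").splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

if __name__ == "__main__":
    for pw in ("password", "12345678", "correct horse battery staple"):
        print(pw, "seen", breach_count(pw), "times in breaches")
```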

The company also would not say if it plans to roll out countermeasures to prevent credential stuffing, like two-factor authentication.

News Source = techcrunch.com
