Timesdelhi.com

February 22, 2019
Category archive

digital rights

Industries must adopt ethics along with technology


A recent New York Times investigation into how apps on smartphones collect location data exposes why it’s important for industry to admit that the ethics of the people who code and commercialize technology are as important as the technology’s code itself.

For the benefit of technology users, companies building technologies must make efforts to raise awareness of their potential human risks – and be honest about how people’s data is used by their innovations. People developing innovations must demand commitment from the C-suite – and boardrooms – of global technology companies to ethical technology. Specifically, the business world needs to embed ethics champions throughout company ranks, develop corporate transparency frameworks and hire diverse teams to interact with, create and improve upon these technologies.


Responsible handling of data is no longer a question

Our data is a valuable asset and the commercial insight it brings to marketers is priceless. Data has become a commodity akin to oil or gold, but user privacy should be the priority – and endgame – for companies across industries benefiting from data. As companies grow and shift, there needs to be an emphasis placed on user consent, clearly establishing what and how data is being used, tracking collected data, placing privacy at the forefront and informing users where AI is making sensitive decisions.

On the flip side, people are beginning to realize that seemingly harmless data they enter into personal profiles, apps and platforms can be taken out of context, commercialized and potentially sold without user consent. The bottom line: consumers are now holding big data and big tech accountable for data privacy – and the public scrutiny of companies operating inside and outside of tech will only grow from here.

Whether or not regulators in the United States, United Kingdom, European Union and elsewhere act, the onus is on Big Tech and private industry to step up by addressing public scrutiny head-on. In practice, this involves Board and C-Suite level acknowledgement of the issues and working-level efforts to address them comprehensively. Companies should clearly communicate steps being taken to improve data security, privacy, ethics and general practices.


People working with data need to be more diverse and ethical

Efforts to harvest personal data submitted to technology platforms reinvigorate the need for ethics training for people in all positions at companies that handle sensitive data. The use of social media and third-party platforms raises the stakes for making the backend technologies that distribute and analyze human data, like AI, ethical and transparent. We also need the teams actually creating these technologies to be more diverse, as diverse as the community that will eventually use them. Digital equality should be a human right that encompasses fairness in algorithms, access to digital tools and the opportunity for anyone to develop digital skills.

Many companies boast reactive, retrospective improvements to boost ethics and transparency in products already on the market. The reality is that it’s much harder to retrofit ethics into technology after the fact. Companies need to have the courage, at both the working and corporate levels, to make the difficult decision not to launch biased or unfair systems in some cases.

In practice, organizations must establish guidelines that people creating technologies can work within throughout a product’s development cycle. It’s established and common practice for developers and researchers to test usability, potential flaws and security prior to a product hitting the market. That’s why technology developers should also be testing for fairness, potential biases and ethical implementation before a product hits the market or deploys into the enterprise.


The future of technology will be all about transparency

Recent events confirm that the business world’s approach to building and deploying data-consuming technologies, like AI, needs to focus squarely on ethics and accountability. In the process, organizations building technologies and supporting applications need to fundamentally incorporate both principles into their engineering. A single company that’s not careful, and breaks the trust of its users, can cause a domino effect in which consumers lose trust in the greater technology and any company leveraging it.

Enterprises need to develop internal principles and processes that hold people from the Board to the newest hire accountable. These frameworks should govern corporate practices and transparently showcase companies’ commitment to ethical AI and data practices. That’s why my company introduced The Ethics of Code, to address critical ethics issues before AI products launch and to answer our customers’ questions around accountability.

Moving into 2019 with purpose

Ultimately, there’s now a full-blown workforce, public and political movement toward ethical data practices that was already in motion within some corners of the tech community. Ideally, the result will be change in the form of more ethical technology created, improved and managed transparently by highly accountable people – from company developers to CEOs to Boards of Directors. That’s something the world has needed since well before ethical questions sparked media headlines, entered living rooms and showed up on government agendas.

News Source = techcrunch.com

Feds like cryptocurrencies and blockchain tech and so should antitrust agencies


While statements and position papers from most central banks were generally skeptical of cryptocurrencies, the times may be changing.

Earlier this year, the Federal Reserve Bank of St. Louis published a study describing the positive effects of cryptocurrencies for privacy protection.

Even with the precipitous decline in value of Bitcoin, Ethereum and other currencies, the study’s author emphasized the new competitive offering these currencies create precisely because of the way they function – and, accordingly, why they are here to stay.

And antitrust authorities should welcome cryptocurrencies and blockchain technologies for the same reason.

Fact: cryptocurrencies are good for (legitimate) privacy protection

In the July article from Federal Reserve research fellow Charles M. Kahn, cryptocurrencies were held up as an exemplar of a degree of privacy protection that not even the central banks can provide to customers.

Kahn further stressed that “privacy in payments is desired not just for illegal transactions, but also for protection from malfeasance or negligence by counterparties or by the payments system provider itself.”

The act of payment engages the liability of the person who makes it. As a consequence, parties insert numerous contractual clauses to limit their liability. This creates a real issue, because some “parties to the transaction are no longer able to support the lawyers’ fees necessary to uphold the arrangement.” Smart contracts may address this issue by automating conflict resolution, but for anyone who doesn’t have access to them, cryptocurrencies solve the problem differently: they make it possible to transact without revealing your identity.

Above all, cryptocurrencies are a reaction to fears of privacy invasion, whether by governments or big companies, according to Kahn. And indeed, following the Cambridge Analytica and fake news revelations, we are hearing more and more opinions expressing concern. The General Data Protection Regulation is set to protect private citizens, but in practice, “more and more individuals will turn to payments technologies for privacy protection in specific transactions.” In this regard, cryptocurrencies provide an alternative solution that competes directly with what the market currently offers.

Consequence: blockchain is good for competition and consumers

Indeed, cryptocurrencies may be the least of the many blockchain applications. The diffusion of data among a decentralized network that is independently verified by some or all of the network’s participating stakeholders is precisely the aspect of the technology that provides privacy protection and competes with applications outside the blockchain by offering a different kind of service.

The Fed of St. Louis’ study underlines that “because privacy needs are different in type and degree, we should expect a variety of platforms to emerge for specific purposes, and we should expect continued competition between traditional and start-up providers.”

And who doesn’t love variety? In an era where antitrust authorities are increasingly interested in consumers’ privacy, cryptocurrencies (and, more generally, blockchains) offer much more effective protection than antitrust law and the GDPR combined.

These agencies should be happy about that, but they don’t say a word about it. That silence could lead to flawed judgements, because ignoring the speed of blockchain development — and its increasingly varied use — leads them to misjudge the real nature of the competitive field.

And in fact, because they ignore the existence of blockchain (applications), they tend to engage in more and more procedures where privacy is seen as an antitrust concern (see what’s happening in Germany). But blockchain is actually providing an answer to this issue; it can’t be said, accordingly, that the market is failing. And without a market failure, antitrust agencies’ intervention is not legitimate.

The roles of the Fed and antitrust agencies could change

This new privacy offering from blockchain technologies should also lead to changes in the role of agencies. As the Fed study stressed:

“the future of central banks and payments authorities is no longer in privacy provision but in privacy regulation, in holding the ring as different payments platforms offer solutions appropriate to different niches with different mixes of expenses and safety, and with attention to different parts of the public’s demand for privacy.”

Some constituencies may criticize the expanding role of central banks in enforcing and ensuring privacy online, but those banks would be even harder pressed if they had to handle the task themselves instead of relinquishing it to the network.

The same applies to antitrust authorities. It is not for them to judge what the business model of digital companies should be and what degree of privacy protection they should offer. Their role is to ensure that alternatives exist; here, that means ensuring blockchain can be deployed without misinformed regulation slowing it down.

Perhaps antitrust agencies should be more vocal about the benefits of cryptocurrencies and blockchain and advise governments not to prevent them.

After all, if even a Federal Reserve bank is now pro-cryptocurrency, antitrust regulators should jump on the bandwagon without fear. Blockchain creates a new alternative by offering real privacy protections, which ultimately puts more power in the hands of consumers. If antitrust agencies can’t recognize that, we will soon ask ourselves: who are they really protecting?


Apple’s Tim Cook makes blistering attack on the “data industrial complex”


Apple’s CEO Tim Cook has joined the chorus of voices warning that data itself is being weaponized against people and societies — arguing that the trade in digital data has exploded into a “data industrial complex”.

Cook did not namecheck the adtech elephants in the room: Google, Facebook and other background data brokers that profit from privacy-hostile business models. But his target was clear.

“Our own information — from the everyday to the deeply personal — is being weaponized against us with military efficiency,” warned Cook. “These scraps of data, each one harmless enough on its own, are carefully assembled, synthesized, traded and sold.

“Taken to the extreme this process creates an enduring digital profile and lets companies know you better than you may know yourself. Your profile is then run through algorithms that serve up increasingly extreme content, pounding our harmless preferences into harm.”

“We shouldn’t sugarcoat the consequences. This is surveillance,” he added.

Cook was giving the keynote speech at the 40th International Conference of Data Protection and Privacy Commissioners (ICDPPC), which is being held in Brussels this year, right inside the European Parliament’s Hemicycle.

“Artificial intelligence is one area I think a lot about,” he told an audience of international data protection experts and policy wonks, which included the inventor of the World Wide Web itself, Sir Tim Berners-Lee, another keynote speaker at the event.

“At its core this technology promises to learn from people individually to benefit us all. But advancing AI by collecting huge personal profiles is laziness, not efficiency,” Cook continued.

“For artificial intelligence to be truly smart it must respect human values — including privacy. If we get this wrong, the dangers are profound. We can achieve both great artificial intelligence and great privacy standards. It is not only a possibility — it is a responsibility.”

That sense of responsibility is why Apple puts human values at the heart of its engineering, Cook said.

In the speech, which we previewed yesterday, he also laid out a positive vision for technology’s “potential for good” — when combined with “good policy and political will”.

“We should celebrate the transformative work of the European institutions tasked with the successful implementation of the GDPR. We also celebrate the new steps taken, not only here in Europe but around the world — in Singapore, Japan, Brazil, New Zealand. In many more nations regulators are asking tough questions — and crafting effective reform.

“It is time for the rest of the world, including my home country, to follow your lead.”

Cook said Apple is “in full support of a comprehensive, federal privacy law in the United States” — making the company’s clearest statement yet of support for robust domestic privacy laws, and earning himself a burst of applause from assembled delegates in the process.

Cook argued for a US privacy law to prioritize four things:

  1. data minimization — “the right to have personal data minimized”, saying companies should “challenge themselves” to de-identify customer data or not collect it in the first place
  2. transparency — “the right to knowledge”, saying users should “always know what data is being collected and what it is being collected for”, calling this the only way to “empower users to decide what collection is legitimate and what isn’t”. “Anything less is a sham,” he added
  3. the right to access — saying companies should recognize that “data belongs to users”, and it should be made easy for users to get a copy of, correct and delete their personal data
  4. the right to security — saying “security is foundational to trust and all other privacy rights”

“We see vividly, painfully how technology can harm, rather than help,” he continued, arguing that platforms can “magnify our worst human tendencies… deepen divisions, incite violence and even undermine our shared sense of what is true or false”.

“This crisis is real. Those of us who believe in technology’s potential for good must not shrink from this moment”, he added, saying the company hopes “to work with you as partners”, and that: “Our missions are closely aligned.”

He also made a sideswipe at tech industry efforts to defang privacy laws — saying that some companies will “endorse reform in public and then resist and undermine it behind closed doors”.

“They may say to you our companies can never achieve technology’s true potential if there were strengthened privacy regulations. But this notion isn’t just wrong, it is destructive — technology’s potential is and always must be rooted in the faith people have in it. In the optimism and the creativity that stirs the hearts of individuals. In its promise and capacity to make the world a better place.”

“It’s time to face facts,” Cook added. “We will never achieve technology’s true potential without the full faith and confidence of the people who use it.”

Opening the conference before the Apple CEO took to the stage, Europe’s data protection supervisor Giovanni Buttarelli argued that digitization is driving a new generational shift in the respect for privacy — saying there is an urgent need for regulators and indeed societies to agree on and establish “a sustainable ethics for a digitised society”.

“The so-called ‘privacy paradox’ is not that people have conflicting desires to hide and to expose. The paradox is that we have not yet learned how to navigate the new possibilities and vulnerabilities opened up by rapid digitization,” Buttarelli argued.

“To cultivate a sustainable digital ethics, we need to look, objectively, at how those technologies have affected people in good ways and bad; we need a critical understanding of the ethics informing decisions by companies, governments and regulators whenever they develop and deploy new technologies.”

The EU’s data protection supervisor told an audience largely made up of data protection regulators and policy wonks that laws that merely set a minimum standard are not enough, including the EU’s freshly painted GDPR.

“We need to ask whether our moral compass has been suspended in the drive for scale and innovation,” he said. “At this tipping point for our digital society, it is time to develop a clear and sustainable moral code.”

“We do not have a[n ethical] consensus in Europe, and we certainly do not have one at a global level. But we urgently need one,” he added.

“Not everything that is legally compliant and technically feasible is morally sustainable,” Buttarelli continued, pointing out that “privacy has too easily been reduced to a marketing slogan.

“But ethics cannot be reduced to a slogan.”

“For us as data protection authorities, I believe that ethics is among our most pressing strategic challenges,” he added.

“We have to be able to understand technology, and to articulate a coherent ethical framework. Otherwise how can we perform our mission to safeguard human rights in the digital age?”


The Internet Bill of Rights is just one piece of our moral obligations


Congressman Ro Khanna’s proposed Internet Bill of Rights pushes individual rights on the Internet forward in a positive manner. It provides guidelines for critical elements where the United States’ and the world’s current legislation is lacking, and it packages them in a way that speaks to all parties. The devil, as always, is in the details—and Congressman Khanna’s Internet Bill of Rights still leaves quite a bit to subjective interpretation.

But what should not be neglected is that we as individuals have not just rights but also moral obligations to this public good—the Internet. The web positively impacts our lives in a meaningful fashion, and we have a collective responsibility to nurture and keep it that way.

Speaking to the specific rights listed in the Bill, we can likely all agree that citizens should have control over information collected about them, and that we should not be discriminated against based on that personal data. We probably all concur that Internet Service Providers should not be permitted to block, throttle, or engage in paid prioritization that would negatively impact our ability to access the world’s information. And I’m sure we all want access to numerous affordable internet providers with clear and transparent pricing.

These are all elements included in Congressman Khanna’s proposal; all things that I wholeheartedly support.

As we’ve seen of late with Facebook, Google, and other large corporations, there is an absolute need to bring proper legislation into the digital age. Technological advancements have progressed far faster than regulatory changes, and drastic improvements are needed to protect users.

What we must understand, however, is that corporations, governments, and individuals all rely on the same Internet to prosper. Each group should have its own set of rights as well as responsibilities. And it’s those responsibilities that need more focus.

Take, for example, littering. There may be regulations in place that prevent people from discarding their trash by the side of the road. But regardless of these laws, there’s also a moral obligation we have to protect our environment and the world in which we live. For the most part, people abide by these obligations because it’s the right thing to do and because of social pressure to keep the place they live beautiful—not because they have a fear of being fined for littering.

We should approach the protection of the Internet in the same way.

We should hold individuals, corporations, and governments to a higher standard and delineate their responsibilities to the Internet. All three groups should accept and fulfill those responsibilities, not because we create laws and fines, but because it is in their best interests.

For individuals, the Internet has granted powers beyond their wildest dreams, and it continues to connect us in amazing ways. For corporations, it has granted access to massively lucrative markets far and wide that would never have been accessible before. For governments, it has allowed them to provide better services to their citizens and has created never-before-seen levels of tax revenue from businesses operating both within and beyond their physical borders.

Everyone — and I mean everyone — has gained (and will continue to gain) from protecting an open Internet, and we as a society need to recognize that and start imposing strong pressure against those who do not live up to their responsibilities.

We as people of the world should feel tremendously grateful to all the parties that contributed to the Internet we have today. If a short-sighted government decides it wants to restrict the Internet within its physical borders, this should not be permitted. It will not only hurt us, but it will hurt that very government by decreasing international trade and thus tax revenue, as well as decreasing the trust that the citizens of that country place in their government. Governments often act against their long-term interests in pursuit of short-term thinking; as a result, 2 billion people live in places with heavy restrictions on access to online information.

When an Internet Service Provider seeks full control over what content it provides over its part of the Internet, this, again, should not be allowed. It will, in the end, hurt that very Internet Service Provider’s revenue; a weaker, less diverse Internet will inevitably create less demand for the very service they are providing along with a loss of trust and loyalty from their customers.

Without the Internet, our world would come grinding to a halt. Any limitations on the open Internet will simply slow our progress and prosperity as a human race. And, poignantly, the perpetrators of those limitations stand to lose just as much as any of us.

We have a moral responsibility, then, to ensure the Internet remains aligned with its original purpose. Sure, none of us could have predicted the vast impact the World Wide Web would have back in 1989—probably not even Sir Tim Berners-Lee himself—but in a nutshell, it exists to connect people, WHEREVER they may be, to a wealth of online information, to other people, and to empower individuals to make their lives better.

This is only possible with an open and free Internet.

Over the next five years, billions of devices—such as our garage door openers, refrigerators, thermostats, and mattresses—will be connected to the web via the Internet of Things. Further, five billion users living in developing markets will join the Internet for the first time, moving from feature phones to smartphones. These two major shifts will create incredible opportunities for good, but also for exploiting our data—making us increasingly vulnerable as Internet users.

Now is the time to adequately provide Americans and people around the world with basic online protections, and it is encouraging to see people like Congressman Khanna advancing the conversation. We can only hope this Internet Bill of Rights remains bipartisan and real change occurs.

Regardless of the outcome, we must not neglect our moral obligations—whether individual Internet users, large corporations, or governments. We all shoulder a responsibility to maintain an open Internet. After all, it is perhaps the most significant and impactful creation in modern society.


Rights management and content monetization company HAAWK raises $2.5 million


The music industry has struggled for decades with the best way to manage digital rights and to make sure artists get the money they’re due for the songs they write.

These days, there are tons of companies trying to solve the problem of managing, monitoring and paying musicians for the songs they make. The latest entrant into this competitive marketplace is HAAWK, which just scored $2.5 million in new financing for its technology.

Rincon Venture Partners, Wavemaker Partners, Frontier Venture Capital and Apex Ventures all participated in the company’s latest round of financing.

HAAWK actually serves more than just music rights holders. The company is also working with film and television rights holders to ensure they get compensated for their work.

The company also acquired certain assets and intellectual property from Dart Music, which filed for bankruptcy earlier this year.

HAAWK tracks assets and delivers royalties for music and movies distributed on YouTube, iTunes/Apple Music, Spotify, Amazon Prime Music, Pandora, Google Play, TIDAL, Deezer and SoundCloud.

“The company is led by an experienced management team, fresh off an exit that yielded significant returns to investors. HAAWK provides products and services that solve complex challenges faced by today’s media rights holders and the company’s strategy for unlocking new value for content owners is one that we both appreciate and support,” said Jim Andelman, the managing director of Rincon Venture Partners.

HAAWK was founded by Ryan Born, a serial entrepreneur whose previous digital rights business, AdRev/AudioMicro, was sold to Zealot Networks for $20 million. AdRev raised roughly $1.25 million before its sale.


