Alibaba goes big on Russia with joint venture focused on gaming, shopping and more


Alibaba is doubling down on Russia after the Chinese e-commerce giant launched a joint venture with one of the country’s leading internet companies.

Russia is said to have over 70 million internet users, around half of its population, with many more reachable in Russian-speaking neighboring countries. Those numbers are projected to rise as, like in many parts of the world, the growth of smartphones brings more people online. Now Alibaba is moving in to ensure it is well placed to take advantage.

Mail.ru, the Russian firm that offers a range of internet services including social media, email and food delivery to 100 million registered users, has teamed up with Alibaba to launch AliExpress Russia, a JV that they hope will function as a “one-stop destination” for communication, social media, shopping and games. Mail.ru backer MegaFon, a telecom firm, and the country’s sovereign wealth fund RDIF (Russian Direct Investment Fund) have also invested undisclosed amounts into the newly formed organization.

To recap: Alibaba — which launched its AliExpress service in Russia some years ago — will hold 48 percent of the business, with 24 percent for MegaFon, 15 percent for Mail.ru and the remaining 13 percent taken by RDIF. In addition, MegaFon has agreed to trade its 10 percent stake in Mail.ru to Alibaba in a transaction that (alone) is likely to be worth north of $500 million.

That figure doesn’t include other investments in the venture.

“The parties will inject capital, strategic assets, leadership, resources and expertise into a joint venture that leverages AliExpress’ existing businesses in Russia,” Alibaba explained on its Alizila blog.

Alibaba looks to have picked its horse in Russia’s internet race: Mail.ru [Image via KIRILL KUDRYAVTSEV/AFP/Getty Images]

The strategy, it seems, is to pair Mail.ru’s consumer services with AliExpress, Alibaba’s international e-commerce marketplace. That’ll allow Russian consumers to buy from AliExpress merchants in China, but also from overseas markets like Southeast Asia, India, Turkey (where Alibaba recently backed an e-commerce firm) and other parts of Europe where it has a presence. Likewise, Russian online sellers will gain access to consumers in those markets. Alibaba’s ‘branded mall’ — Tmall — is also part of the AliExpress Russia offering.

This deal suggests that Alibaba has picked its ‘horse’ in Russia’s internet race, much the same way that it has repeatedly backed Paytm — the company offering payments, e-commerce and digital banking — in India with funding and integrations.

Alibaba has already said that Russia has been a “vital market for the growth” of its Alipay mobile payment service. It didn’t provide any raw figures to back that up, but you can bet that it will be pushing Alipay hard as it runs AliExpress Russia, alongside Mail.ru’s own offering, which is called Money.Mail.Ru.

“Most Russian consumers are already our users, and this partnership will enable us to significantly increase the access to various segments of the e-commerce offering, including both cross-border and local merchants. The combination of our ecosystems allows us to leverage our distribution through our merchant base and goods as well as product integrations,” said Mail.Ru Group CEO Boris Dobrodeev in a statement.

This is the second strategic alliance that MegaFon has struck this year. It formed a joint venture with Gazprombank in May through a deal that saw it offload five percent of its stake in Mail.ru. MegaFon acquired 15.2 percent of Mail.ru for $740 million in February 2017.

The Russia deal comes a day after Alibaba co-founder and executive chairman Jack Ma — the public face of the company — announced plans to step down over the next year. Current CEO Daniel Zhang will replace him as chairman, meaning that the company will also need to appoint a new CEO.

News Source = techcrunch.com

Security tokens will be coming soon to an exchange near you


While cryptocurrencies have generated the lion’s share of investment and attention to date, I’m more excited about the potential for another blockchain-based digital asset: security tokens.

Security tokens are defined as “any blockchain-based representation of value that is subject to regulation under security laws.” In other words, they represent ownership in a real-world asset, whether that is equity, debt or even real estate. (They also encompass certain pre-launch utility tokens.)

With $256 trillion of real-world assets in the world, the opportunity for crypto-securities is truly massive, especially with regards to asset classes like real estate and fine art that have historically suffered from limited commerce and liquidity. As I’ve written previously, imagine if real estate was tokenized into security tokens that you could trade as safely and easily as you do stocks. That’s where we’re headed.

There’s a lot of forward momentum around tokenized securities, so much so that based on their current trajectory, I believe security tokens are going to become a common part of Wall Street parlance in the near future. Investors won’t just be able to buy and sell tokens on mainstream exchanges, however; “crypto-native” companies are also throwing their hats into this ring.

The starter pistol has been fired

The race is on to bring security tokens to the masses


Because Bitcoin and other cryptocurrencies are not classified as securities, it’s been much easier to facilitate trading on a large scale. Security tokens are more complex, requiring not just capabilities around trading, but also issuance and, critically, compliance. (See more of my thoughts on compliance here.) It’s a major undertaking, which is why we haven’t seen the Coinbase or Circle of security token trading emerge yet (or seen these companies expand their platforms to address this—more on that later).

Meanwhile, regular exchanges are blazing the trail and moving into token trading. The founder and chairman of the company that owns the NYSE announced a new venture, Bakkt, that would provide an on-ramp for institutional investors interested in purchasing cryptocurrencies. Last month, the SIX Swiss Exchange—Switzerland’s principal stock exchange—announced plans to build a regulated exchange for tokenized securities. The trading and issuing platform, SIX Digital Exchange, will adhere to the same regulatory standards as the non-digital exchanges and be overseen by Swiss financial regulators.

This announcement confirms a few things:

  1. Most assets (stocks, bonds, real estate, etc.) will be tokenized and supported on regulated trading platforms.

  2. Incumbents like SIX have a head start due to their size, regulatory licensing and built-in user base. They are likely to use this advantage to defend their position of power.

  3. Most investors will never know they are using distributed ledger technology, let alone trading tokenized assets. They will simply buy and sell assets as they always have.

I expect other major financial exchanges to follow SIX’s lead and onboard crypto trading before long. I can imagine them salivating over the trading fees now, Homer Simpson style.

Live shot of financial exchanges drooling over crypto trading fees


Crypto companies are revving their engines

The big crypto companies are preparing to enter the security token arena

Stock exchanges won’t have the space to themselves, however. Crypto companies like Polymath and tZERO have already debuted dedicated platforms for security tokens, and all signs indicate announcements from Circle and Coinbase unveiling their own tokenized asset exchanges are not far behind.

Coinbase is much closer to offering security token products after acquiring a FINRA-registered broker-dealer in June, effectively backward-somersaulting its way into a state of regulatory compliance. President and COO Asiff Hirji all but confirmed crypto-securities are in the company’s roadmap, saying that Coinbase “can envision a world where we may even work with regulators to tokenize existing types of securities.”

Circle is also laser-focused on security tokens. Circle CEO and co-founder Jeremy Allaire explained the company’s acquisition of crypto exchange Poloniex and launch of app Circle Invest in terms of the “tokenization of everything.” In addition, it is pursuing registration as a broker-dealer with the SEC to facilitate token trading—it could also attempt to take the same backdoor acquisition approach as Coinbase.

If there’s a reason Circle and Coinbase haven’t moved into security token services more rapidly, it’s that there simply aren’t that many security tokens yet. Much of this is due to the lack of compliance and issuance platforms, which keeps high-quality securities on the legacy systems issuers feel more comfortable with. As projects like Harbor ramp up, this comfort gap will narrow, driving the big crypto players deeper into security token services.

The old guard vs. the new wave

Expect a battle between traditional and crypto exchanges.


This showdown between traditional finance incumbents and crypto giants will be worth watching. One is incentivized to preserve the status quo, while the other is looking to create a new, more global financial system.

The SIX Swiss Exchanges of the world enjoy some distinct advantages over the likes of Coinbase — they have decades of traditional financial operating experience, deep relationships throughout the industry and a head start on regulatory compliance. Those advantages mean that such incumbents will probably be the first to make infrastructural and logistical upgrades to their systems using security tokens. The first time you interact with a security token, it is likely to be through the Nasdaq.

Having said that, the incumbents’ greatest disadvantage will be transporting an old-finance-world mentality to these innovations. Coinbase, Circle, Polymath, Robinhood and other newer players are better suited to harnessing the step-change elements of security tokens — particularly asset interoperability and imaginative security design.

University of Oregon Professor Stephen McKeon, an authority on security tokens, told me that “the potential for programmable securities to enable the expression of new investment types is the most exciting feature.” Harbor CEO Josh Stein explained why private securities in particular will be transformed: “by automating compliance, issuers can allow their investors to trade to the limit of their liquidity across multiple exchanges. Now imagine a world where buyers and sellers around the world can trade 24/7/365 with near instantaneous settlement and no counterparty risk – that is something only possible through blockchain.”
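
Stein’s point about “automating compliance” is easiest to picture as transfer restrictions expressed in code rather than enforced through paperwork. The sketch below is a minimal, hypothetical illustration, not any particular platform’s API: a token object refuses to settle a trade unless lock-up, jurisdiction and accreditation checks all pass. Every name, rule and threshold here is an assumption made for illustration; real implementations (for example, ERC-1400-style standards on Ethereum) encode the same idea in smart contracts.

# Hypothetical sketch of compliance-gated security token transfers.
# Rules, fields and names are illustrative assumptions, not a real platform's API.
from dataclasses import dataclass, field
from datetime import datetime, timedelta


@dataclass
class Investor:
    address: str
    accredited: bool
    jurisdiction: str  # e.g. "US", "CH", "SG"


@dataclass
class SecurityToken:
    symbol: str
    lockup_until: datetime                      # assumed lock-up expiry date
    allowed_jurisdictions: set = field(default_factory=set)
    balances: dict = field(default_factory=dict)

    def check_transfer(self, sender: Investor, receiver: Investor, amount: int) -> None:
        """Raise ValueError if the trade would violate an encoded rule."""
        if self.balances.get(sender.address, 0) < amount:
            raise ValueError("insufficient balance")
        if datetime.utcnow() < self.lockup_until:
            raise ValueError("token still in lock-up period")
        if receiver.jurisdiction not in self.allowed_jurisdictions:
            raise ValueError("receiver jurisdiction not permitted")
        if not receiver.accredited:
            raise ValueError("receiver is not an accredited investor")

    def transfer(self, sender: Investor, receiver: Investor, amount: int) -> None:
        self.check_transfer(sender, receiver, amount)   # compliance gate runs before settlement
        self.balances[sender.address] -= amount
        self.balances[receiver.address] = self.balances.get(receiver.address, 0) + amount


if __name__ == "__main__":
    token = SecurityToken(
        symbol="DEMO-EQ",
        lockup_until=datetime.utcnow() - timedelta(days=1),  # lock-up already expired
        allowed_jurisdictions={"US", "CH"},
    )
    alice = Investor("0xAAA", accredited=True, jurisdiction="US")
    bob = Investor("0xBBB", accredited=True, jurisdiction="CH")
    token.balances[alice.address] = 1_000

    token.transfer(alice, bob, 250)  # passes every encoded rule
    print(token.balances)            # {'0xAAA': 750, '0xBBB': 250}

Under rules like these, any venue, traditional or crypto-native, can list the token and let the token itself reject non-compliant trades, which is what would allow the continuous, multi-exchange trading Stein describes.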

Those hypergrowth startups are going to experiment with these new paradigms in ways that older firms won’t think of. You can see evidence of this forward thinking in Circle’s efforts to build a payment network that allows Venmo users to send value to Alipay users — exactly embracing interoperability, if not in an asset sense.

The race is on

As Polymath’s Trevor Koverko and Anthony “Pomp” Pompliano have been saying for the past year, the financial services world is moving towards security tokens. As the crypto economy matures, we’re inching closer to a new era of real-world assets being securitized on the blockchain in a regulatory compliant manner.

The challenge for both traditional and crypto exchanges will be to educate investors about this new way to buy and sell investments while powering these securities transactions via a smooth, seamless experience. Ultimately, security tokens lay the groundwork for granting investors their biggest wish — the ability to trade equity, debt, real estate and digital assets all on the same platform.

News Source = techcrunch.com

Golden Gate Ventures hits first close on new $100M fund for Southeast Asia


One of the fascinating things about watching an emerging startup ecosystem is that it isn’t just the companies that are scaling; the VC firms that feed them are growing, too. That’s perhaps best embodied by Golden Gate Ventures, a Singapore-based firm founded by three Silicon Valley entrepreneurs in 2011 that is about to close a huge new fund for Southeast Asia.

Golden Gate started out with a small seed investment fund before raising a second worth $60 million in 2015. Now it is in the closing stages of finalizing a new $100 million fund, which has completed a first close of over $65 million in commitments, a source with knowledge of the discussions told TechCrunch.

A filing lodged with the SEC in June first showed the firm’s intent to raise $100 million. The source told TechCrunch that a number of LPs from Golden Gate’s previous funds have already signed up, including Naver, while Mistletoe, the firm run by SoftBank Chairman Masayoshi Son’s brother Taizo, is among the new backers joining.

Golden Gate’s existing LP base also includes Singapore sovereign fund Temasek, Facebook co-founder Eduardo Saverin, and South Korea’s Hanwha.

A full close for the fund is expected before the end of the year.

The firm has made over 40 investments to date and its portfolio includes mobile classifieds service Carousell, automotive sales startup Carro, real estate site 99.co and payment gateway Omise. TechCrunch understands that the firm’s investment thesis will remain the same with this new fund. When it raised its second fund, founding partner Vinnie Lauria told us that Golden Gate had found its niche in early-stage investing and would remain lean and nimble like the companies it backs.

One significant change internally, however, sees Justin Hall promoted to partner at the fund. He joins Lauria, fellow founding partner Jeffrey Paine, and Michael Lints at partner level.

Hall first joined Golden Gate in 2012 as an intern while still a student, before signing on full-time in 2013. His rise through the ranks exemplifies the growth and development within Southeast Asia’s startup scene over that period — it isn’t just limited to startups themselves.

The Golden Gate Ventures team circa 2016 — it has since added new members

With the advent of unicorns such as ride-sharing firms Grab and Go-Jek, travel startup Traveloka, and e-commerce companies like Tokopedia, Southeast Asia has begun to show potential for homegrown tech companies in a market that includes over 650 million consumers and more than 300 million internet users. The emergence of these companies has spiked investor interest, which provides the capital that is the lifeblood for VCs and their funds.

Golden Gate isn’t the only one raising big. Openspace, formerly NSI Ventures, is raising $125 million for its second fund, Jungle Ventures is said to be planning a $150 million fund, and Singapore’s Golden Equator and Korea Investment Partners have a joint $88 million fund, while Temasek-linked Vertex closed a record $210 million fund last year.

Growth potential is leading the charge but at the same time funds are beginning to focus on realizing returns for LPs through exits, which is challenging since there have been few acquisitions of meaningful size or public listings out of Southeast Asia so far. But, for smaller funds, the results are already promising.

Data from Preqin, which tracks investment money worldwide, shows that Golden Gate’s first fund has already returned a multiple of over 4X, while its second is at 1.3X despite a final close in 2016.

Beyond any secondary sales — it is not uncommon for early-stage backers to sell a minority portion of equity as more investment capital pours in — Golden Gate’s exits have included the sale of Redmart to Lazada (although not a blockbuster), Priceline’s acquisition of Woomoo, Line’s acquisition of Temanjalan and the sale of Mapan (formerly Ruma) to Go-Jek.

News Source = techcrunch.com

UK report urges action to combat AI bias


The need for diverse development teams and truly representational data-sets to avoid biases being baked into AI algorithms is one of the core recommendations in a lengthy Lords committee report looking into the economic, ethical and social implications of artificial intelligence, and published today by the upper House of the UK parliament.

“The main ways to address these kinds of biases are to ensure that developers are drawn from diverse gender, ethnic and socio-economic backgrounds, and are aware of, and adhere to, ethical codes of conduct,” the committee writes, chiming with plenty of extant commentary around algorithmic accountability.

“It is essential that ethics take centre stage in AI’s development and use,” adds committee chairman, Lord Clement-Jones, in a statement. “The UK has a unique opportunity to shape AI positively for the public’s benefit and to lead the international community in AI’s ethical development, rather than passively accept its consequences.”

The report also calls for the government to take urgent steps to help foster “the creation of authoritative tools and systems for auditing and testing training datasets to ensure they are representative of diverse populations, and to ensure that when used to train AI systems they are unlikely to lead to prejudicial decisions” — recommending a publicly funded challenge to incentivize the development of technologies that can audit and interrogate AIs.

“The Centre for Data Ethics and Innovation, in consultation with the Alan Turing Institute, the Institute of Electrical and Electronics Engineers, the British Standards Institute and other expert bodies, should produce guidance on the requirement for AI systems to be intelligible,” the committee adds. “The AI development sector should seek to adopt such guidance and to agree upon standards relevant to the sectors within which they work, under the auspices of the AI Council” — the latter being a proposed industry body it wants established to help ensure “transparency in AI”.

The committee is also recommending a cross-sector AI Code to try to steer developments in a positive, societally beneficial direction — though not for this to be codified in law (the suggestion is it could “provide the basis for statutory regulation, if and when this is determined to be necessary”).

Among the five principles they’re suggesting as a starting point for the voluntary code are that AI should be developed for “the common good and benefit of humanity”, and that it should operate on “principles of intelligibility and fairness”.

Though, elsewhere in the report, the committee points out it can be a challenge for humans to understand decisions made by some AI technologies — going on to suggest it may be necessary to refrain from using certain AI techniques for certain types of use-cases, at least until algorithmic accountability can be guaranteed.

“We believe it is not acceptable to deploy any artificial intelligence system which could have a substantial impact on an individual’s life, unless it can generate a full and satisfactory explanation for the decisions it will take,” it writes in a section discussing ‘intelligible AI’. “In cases such as deep neural networks, where it is not yet possible to generate thorough explanations for the decisions that are made, this may mean delaying their deployment for particular uses until alternative solutions are found.”

A third principle the committee says it would like to see included in the proposed voluntary code is: “AI should not be used to diminish the data rights or privacy of individuals, families or communities”.

Though this is a curiously narrow definition — why not push for AI not to diminish rights, period?

“It’s almost as if ‘follow the law’ is too hard to say,” observes Sam Smith, a coordinator at patient data privacy advocacy group, medConfidential, discussing the report.

“Unlike other AI ‘ethics’ standards which seek to create something so weak no one opposes it, the existing standards and conventions of the rule of law are well known and well understood, and provide real and meaningful scrutiny of decisions, assuming an entity believes in the rule of law,” he adds.

Looking at the tech industry as a whole, it’s certainly hard to conclude that self-defined ‘ethics’ appear to offer much of a meaningful check on commercial players’ data processing and AI activities.

Topical case in point: Facebook has continued to claim there was nothing improper about the fact millions of people’s information was shared with professor Aleksandr Kogan. People “knowingly provided their information” is the company’s defensive claim.

Yet the vast majority of people whose personal data was harvested from Facebook by Kogan clearly had no idea what was possible under its platform terms — which, until 2015, allowed one user to ‘consent’ to the sharing of all their Facebook friends. (Hence ~270,000 downloaders of Kogan’s app being able to pass data on up to 87M Facebook users.)

So Facebook’s self-defined ‘ethical code’ has been shown to be worthless — aligning completely with its commercial imperatives, rather than supporting users to protect their privacy. (Just as its T&Cs are intended to cover its own “rear end”, rather than clearly inform people about their rights, as one US congressman memorably put it last week.)

“A week after Facebook were criticized by the US Congress, the only reference to the Rule of Law in this report is about exempting companies from liability for breaking it,” Smith adds in a MedConfidential response statement to the Lords report. “Public bodies are required to follow the rule of law, and any tools sold to them must meet those legal obligations. This standard for the public sector will drive the creation of tools which can be reused by all.”


Health data “should not be shared lightly”

The committee, which took evidence from Google-owned DeepMind as one of a multitude of expert witnesses during more than half a year’s worth of enquiry, touches critically on the AI company’s existing partnerships with UK National Health Service Trusts.

The first of which, dating from 2015 — and involving the sharing of ~1.6 million patients’ medical records with the Google-owned company — ran into trouble with the UK’s data protection regulator. The UK’s information commissioner concluded last summer that the Royal Free NHS Trust’s agreement with DeepMind had not complied with UK data protection law.

Patients’ medical records were used by DeepMind to develop a clinical task management app wrapped around an existing NHS algorithm for detecting a condition known as acute kidney injury. The app, called Streams, has been rolled out for use in the Royal Free’s hospitals — complete with PR fanfare. But it’s still not clear what legal basis exists to share patients’ data.

“Maintaining public trust over the safe and secure use of their data is paramount to the successful widespread deployment of AI and there is no better exemplar of this than personal health data,” the committee warns. “There must be no repeat of the controversy which arose between the Royal Free London NHS Foundation Trust and DeepMind. If there is, the benefits of deploying AI in the NHS will not be adopted or its benefits realised, and innovation could be stifled.”

The report also criticizes the “current piecemeal” approach being taken by NHS Trusts to sharing data with AI developers — saying this risks “the inadvertent under-appreciation of the data” and “NHS Trusts exposing themselves to inadequate data sharing arrangements”.

“The data held by the NHS could be considered a unique source of value for the nation. It should not be shared lightly, but when it is, it should be done in a manner which allows for that value to be recouped,” the committee writes.

A similar point — about not allowing a huge store of potential value which is contained within publicly-funded NHS datasets to be cheaply asset-stripped by external forces — was made by Oxford University’s Sir John Bell in a UK government-commissioned industrial strategy review of the life sciences sector last summer.

Despite similar concerns, the committee also calls for a framework for sharing NHS data to be published by the end of the year, and is pushing for NHS Trusts to digitize their current practices and records — with a target deadline of 2022 — in “consistent formats” so that people’s medical records can be made more accessible to AI developers.

But worryingly, given the general thrust towards making sensitive health data more accessible to third parties, the committee does not seem to have a very fine-grained grasp of data protection in a health context — where, for example, datasets can be extremely difficult to render truly anonymous given the level of detail typically involved.

Although they are at least calling for the relevant data protection and patient data bodies to be involved in provisioning the framework for sharing NHS data, alongside Trusts that have already worked with DeepMind (and in one case received an ICO wrist-slap).

They write:

We recommend that a framework for the sharing of NHS data should be prepared and published by the end of 2018 by NHS England (specifically NHS Digital) and the National Data Guardian for Health and Care, with the support of the ICO [information commissioner’s office] and the clinicians and NHS Trusts which already have experience of such arrangements (such as the Royal Free London and Moorfields Eye Hospital NHS Foundation Trusts), as well as the Caldicott Guardians [the NHS’ patient data advocates]. This framework should set out clearly the considerations needed when sharing patient data in an appropriately anonymised form, the precautions needed when doing so, and an awareness of the value of that data and how it is used. It must also take account of the need to ensure SME access to NHS data, and ensure that patients are made aware of the use of their data and given the option to opt out.

As the Facebook-Cambridge Analytica scandal has clearly illustrated, opt-outs alone cannot safeguard people’s data or their legal rights — which is why incoming EU data protection rules (GDPR) beef up consent requirements to require a clear affirmative action. (And it goes without saying that opt-outs are especially concerning in a medical context where the data involved is so sensitive — yet, at least in the case of a DeepMind partnership with Taunton and Somerset NHS Trust, patients do not even appear to have been given the ability to say no to their data being processed.)

Opt-outs (i.e. rather than opt-in systems) for data-sharing and self-defined/voluntary codes of ‘ethics’ demonstrably do very little to protect people’s legal rights where digital data is concerned — even if it’s true, for example, that Facebook holds itself in check vs. what it could theoretically do with data, as company execs have suggested (one wonders what kind of stuff they’re voluntarily refraining from, given what they have been caught trying to manipulate).

The wider risk of relying on consumer savvy to regulate commercial data sharing is that an educated, technologically aware few might be able to lock down — or reduce — access to their information; but the mainstream majority will have no clue they need to or even how it’s possible. And data protection for a select elite doesn’t sound very equitable.

Meanwhile, at least where this committee’s attitude to AI is concerned, developers and commercial entities are being treated with favorable encouragement — via the notion of a voluntary (and really pretty basic) code of AI ethics — rather than being robustly reminded they need to follow the law.

Given the scope and scale of current AI-fueled scandals, that risks the committee looking naive.

The government has made AI a strategic priority, though, and policies to foster and accelerate data-sharing to drive tech developments are a key part of its digital and industrial strategies, so the report needs to be read within that wider context.

The committee does add its voice to questions about whether/how legal liability will mesh with automated decision making — writing that “clarity is required” on whether “new mechanisms for legal liability and redress” are needed or not.

“We recommend that the Law Commission consider the adequacy of existing legislation to address the legal liability issues of AI and, where appropriate, recommend to Government appropriate remedies to ensure that the law is clear in this area,” it says on this. “At the very least, this work should establish clear principles for accountability and intelligibility. This work should be completed as soon as possible.”

But this isn’t exactly cutting edge commentary. Last month the government announced a three-year regulatory review focused on self-driving cars and the law, for instance. And the liability point is already generally well-aired — and in the autonomous cars case, at least, now having its tires extensively kicked in the UK.

What’s less specifically discussed in government circles is how AIs are demonstrably piling pressure on existing laws. And what — if anything — should be done to address those kind of AI-fueled breaking points. (Exceptions: Terrorist content spreading via online platforms has been decried for some years, with government ministers more than happy to make platforms and technologies their scapegoat and even toughen laws; more recently hate speech on online platforms has also become a major political target for governments in Europe.)

The committee briefly touches on some of these societal pressure points in a section on AI’s impact on “social and political cohesion”, noting concerns raised to it about issues such as filter bubbles and the risk of AIs being used to manipulate elections. “[T]here is a rapidly growing need for public understanding of, and engagement with, AI to develop alongside the technology itself. The manipulation of data in particular will be a key area for public understanding and discussion in the coming months and years,” it writes here. 

However it has little in the way of gunpowder — merely recommending that research is commissioned into “the possible impact of AI on conventional and social media outlets”, and to investigate “measures which might counteract the use of AI to mislead or distort public opinion as a matter of urgency”.

Elsewhere in the report, it also raises an interesting concern about data monopolies — noting that investments by “large overseas technology companies in the UK economy” are “increasing consolidation of power and influence by a select few”, which it argues risks damaging the UK’s home-grown AI start-up sector.

But again there’s not much of substance in its response. The committee doesn’t seem to have formed its own ideas on how, or even whether, the government needs to address the way data concentrates power in the hands of big tech — beyond calling for “strong” competition frameworks. This lack of conviction is attributed to hearing mixed messages on the topic from its witnesses. (Though it may well also be related to the economic portion of the enquiry’s focus.)

“The monopolisation of data demonstrates the need for strong ethical, data protection and competition frameworks in the UK, and for continued vigilance from the regulators,” it concludes. “We urge the Government, and the Competition and Markets Authority, to review proactively the use and potential monopolisation of data by the big technology companies operating in the UK.”

The report also raises concerns about access to funding for UK AI startups to ensure they can continue scaling domestic businesses — recommending that a chunk of the £2.5BN investment fund at the British Business Bank, which the government announced in the Autumn Budget 2017, is “reserved as an AI growth fund for SMEs with a substantive AI component, and be specifically targeted at enabling such companies to scale up”.

No one who supports the startup cause would argue with trying to make more money available. But if data access has been sealed up by tech giants, all the scale-up funding in the world won’t help domestic AI startups break through that algorithmic ceiling.

Also touched on: the looming impact of Brexit, with the committee calling on the government to “commit to underwriting, and where necessary replacing, funding for European research and innovation programmes, after we have left the European Union”. Which boils down to another whistle in a now very long score of calls for replacement funding after the UK leaves the EU.

Funding for regulators is another concern, with a warning that the ICO must be “adequately and sustainably resourced” — as a result of the additional burden the committee expects AI to put on existing regulators.

This issue is also on the radar of the UK’s digital minister, Matt Hancock, who has said he’s considering what additional resources the ICO might need — such as the power to compel testimony from individuals. (Though the ICO itself has previously raised concerns that the minister and his data protection bill are risking undermining her authority.) For now it remains to be seen how well armed the agency will be to meet the myriad challenges generated and scaled by AI’s data processors.

“Blanket AI-specific regulation, at this stage, would be inappropriate,” the report adds. “We believe that existing sector-specific regulators are best placed to consider the impact on their sectors of any subsequent regulation which may be needed. We welcome that the Data Protection Bill and GDPR appear to address many of the concerns of our witnesses regarding the handling of personal data, which is key to the development of AI. The Government Office for AI, with the Centre for Data Ethics and Innovation, needs to identify the gaps, if any, where existing regulation may not be adequate. The Government Office for AI must also ensure that the existing regulators’ expertise is utilised in informing any potential regulation that may be required in the future.”

The committee’s last two starter principles for their voluntary AI code serve to underline how generously low the ethical bar is really being set here — boiling down to: AI shouldn’t be allowed to kill off free schools for our kids, nor be allowed to kill us — which may itself be another consequence of humans not always being able to clearly determine how AI does what it does or exactly what it might be doing to us.

News Source = techcrunch.com
