The European Union’s competition commission is looking into how Amazon uses data from retailers selling via its ecommerce marketplace, Reuters reports.
Competition commissioner Margrethe Vestager revealed the action today during a press conference. “We are gathering information on the issue and we have sent quite a number of questionnaires to market participants in order to understand this issue in full,” she said.
It’s not a formal antitrust probe at this stage, with Vestager also telling reporters: “These are very early days… we haven’t formally opened a case. But we are trying to make sure that we get the full picture.”
The Commission appears to be trying to determine whether or not third-party merchants selling on Amazon’s platform are being placed at a disadvantage vs the products Amazon also sells, thereby competing directly with some of its marketplace participants.
“You have these platforms that have sort of dual purpose, they are both hosting a lot of merchants to enable maybe the smaller guy to have his business, to be found, to do his commerce, and at the same time they themselves are merchants — big merchants. So they’re both hosts and they also do the merchant business themselves. And the question here is about the data,” Vestager also said.
“Because if you as Amazon get the data from the smaller merchants that you host which can be of course completely legitimate because you can improve your service to these smaller merchants. Well, do you then also use these data to do your own calculations? As to what is the new big thing? What is it that people want? What kind of offers do they like to receive? What makes them buy things? And that has made us start a preliminary… antitrust investigation into Amazon’s business practices.”
Companies found to be in breach of EU antitrust rules can be fined up to 10 per cent of their global annual turnover.
We’ve reached out to Amazon for comment.
In recent years the ecommerce giant has greatly expanded the own-brand products it sells via its marketplace, such as via its Amazon Elements line, which includes vitamin supplements and baby wipes, and AmazonBasics — which covers a wide array of ‘everyday’ items including batteries and even towels.
The company does not always brand its own-brand products with the Amazon label, also operating a raft of additional own brands — including for kids clothes, women’s fashion, sportswear, home furnishings and most recently diapers, to name a few. So it is not always immediately transparent to shoppers on its marketplace when they are buying something produced by Amazon itself.
Meanwhile, tech giants’ grip on big data has been flagged as a potential antitrust concern by Vestager for several years now.
In a speech at the DLD conference back in 2016 she said: “If a company’s use of data is so bad for competition that it outweighs the benefits, we may have to step in to restore a level playing field,” adding then that she was continuing to “look carefully at this issue”.
It’s not clear how the Amazon probe will pan out but it signifies a stepping up of the Commission’s action in this area.
The EU also issued Google with a record-breaking $5BN fine this summer, for abusing the dominance of its Android mobile operating system.
That fine followed another record-breaking penalty in 2017, when Google was slapped with a $2.7BN antitrust fine related to its shopping comparison service, Google Shopping.
Today’s plenary vote in the European parliament was on amended proposals that had been rejected by MEPs in a vote in July, with parliamentarians arguing for a fuller debate and more balanced measures.
The vote is a major victory for MEP Axel Voss who has been driving the copyright reform.
MEPs largely backed Voss’ amended proposals today which had narrowed the scope of the rejected text, such as, in the case of Article 11, by allowing for links to contain individual words from the linked-to publishers’ content — an attempt to respond to critics’ contention that the measure would outlaw hyperlinks (which can often contain the headline of an article).
On Article 13 Voss’ amended proposal had reduced the scope to platforms that both host “significant” amounts of content and also “promote” it. It also includes an exception for small businesses.
As the votes were announced a visibly delighted Voss beamed, clapped and hugged his seat neighbours, as well as giving a broad thumbs up to all those watching.
But critics and free speech advocates described it as a catastrophe.
MEP Marietje Schaake expressed disappointment with the result, telling us: “The Parliament squandered the opportunity to get the copyright reform on the right track. This is a disastrous result for the protection of our fundamental rights, ordinary internet users and Europe’s future in the field of artificial intelligence. We have set a step backwards instead of creating a true copyright reform that is fit for the 21st century.”
“Members of the house, a heartfelt thanks for the job that we have done together. This is a good sign for the creative industries in Europe,” said Voss after the vote, as he asked for the report to be sent back to committee to begin institutional negotiations with Member States, via the European Council.
MEPs duly obliged.
There was just one interruption prior to that last vote, with a single MEP standing up to denounce the result as “an enormous strike against freedom of speech on the Internet”. Proceedings continued.
Other amendments that had been tabled by MEPs but were rejected by the parliament as a whole included ditching the reforms entirely and leaving the current law as is.
Welcoming the parliament’s vote in a statement, the European Commission’s VP for the Digital Single Market Andrus Ansip and commissioner for Digital Economy and Society, Mariya Gabriel, put out this joint statement:
We welcome today’s vote at the European Parliament. It is a strong and positive signal and an essential step to achieving our common objective of modernising the copyright rules in the European Union.
Discussions between the co-legislators can now start on a legislative proposal which is a key element of the Digital Single Market strategy and one of the priorities for the European Commission.
Our aim for this reform is to bring tangible benefits for EU citizens, researchers, educators, writers, artists, press and cultural heritage institutions and to open up the potential for more creativity and content by clarifying the rules and making them fit for the digital world. At the same time, we aim to safeguard free speech and ensure that online platforms – including 7,000 European online platforms – can develop new and innovative offers and business models.
The Commission stands ready to start working with the European Parliament and the Council of the EU, so that the directive can be approved as soon as possible, ideally by the end of 2018. We are fully committed to working with the co-legislators in order to achieve a balanced and positive outcome enabling a true modernisation of the copyright legislation that Europe needs.
Also very happy with the result: a swathe of creative industries.
The European Publishers Council welcomed the adoption of the publishers’ neighbouring right. In a statement its exec director, Angela Mills Wade, said: “Today, we give credit to MEPs who voted for press freedom, democracy, professional journalism and European values. We thank the Rapporteur, Axel Voss, MEP, for working tirelessly to achieve a balanced outcome.”
While the parliament has now agreed its position on the reform the process is not yet over. There will be trilogue negotiations with Member State representatives, via the European Council, and a final vote — likely early next year.
Now that Parliament and Council have adopted their positions, we will have one final chance to reject #UploadFilters and #LinkTax in the final vote on the directive after trilogue, probably in the spring. Talk to your governments meanwhile! #SaveYourInternet
Commenting on the parliament’s vote, the Computer & Communications Industry Association, which is also not a supporter of the reforms, urged the Council and Parliament to “come to a balanced outcome in final negotiations”.
“We regret that a majority of Members of the European Parliament today ignored the warnings of the online sector, academics, innovative publishers, research institutions and civil rights groups on the real threats this proposal causes,” said its senior policy manager, Maud Sacquet, in a statement.
BEUC, the European Consumer Organisation, also denounced the result of the plenary vote, warning that if the plans MEPs backed today become EU law the “benefits of the Internet for consumers will be at risk”.
“It is beyond comprehension that time and again EU policy makers refuse to bring copyright law into the 21st century. Consumers nowadays express themselves by sampling, creating and mixing music, videos and pictures, then sharing their creations online. MEPs have decided to thwart this freedom of expression which is dangerous for creativity and innovation,” said Monique Goyens, director general of BEUC, in a statement.
“The consequence of this vote is clear. Platforms will have no other option than to scan and filter any content that consumers want to upload. Experience shows that this will lead to many uploads being unjustifiably blocked. This is not the type of internet consumers need or expect. This protectionist reform will only benefit the copyright industry at the expense of consumers.”
European Union lawmakers are facing a major vote on digital copyright reform proposals on Wednesday — a process that has set the Internet’s hair fully on fire.
Here’s a run down of the issues and what’s at stake…
The most controversial component of the proposals concerns user-generated content platforms such as YouTube, and the idea they should be made liable for copyright infringements committed by their users — instead of the current regime of takedowns after the fact (which locks rights holders into having to constantly monitor and report violations — y’know, at the same time as Alphabet’s ad business continues to roll around in dollars and eyeballs).
Critics of the proposal argue that shifting the burden of rights liability onto platforms will flip them from champions to chillers of free speech, making them reconfigure their systems to accommodate the new level of business risk.
More specifically they suggest it will encourage platforms into algorithmically pre-filtering all user uploads — aka #censorshipmachines — and then blinkered AIs will end up blocking fair use content, cool satire, funny memes etc etc, and the free Internet as we know it will cease to exist.
Backers of the proposal see it differently, of course. These people tend to be creatives whose professional existence depends upon being paid for the sharable content they create, such as musicians, authors, filmmakers and so on.
Their counter argument is that, as it stands, their hard work is being ripped off because they are not being fairly recompensed for it.
Consumers may be the ones technically freeloading by uploading and consuming others’ works without paying to do so but creative industries point out it’s the tech giants that are gaining the most money from this exploitation of the current rights rules — because they’re the only ones making really fat profits off of other people’s acts of expression. (Alphabet, Google’s ad giant parent, made $31.16BN in revenue in Q1 this year alone, for example.)
YouTube has been a prime target for musicians’ ire; they contend that the royalties the company pays them for streaming their content are simply not fair recompense.
The second controversy attached to the copyright reform concerns the use of snippets of news content.
European lawmakers want to extend digital copyright to also cover the ledes of news stories which aggregators such as Google News typically ingest and display — because, again, the likes of Alphabet are profiting off of bits of others’ professional work without paying them to do so. And, on the flip side, media firms have seen their profits hammered by the Internet serving up free content.
The reforms would seek to compensate publishers for their investment in journalism by letting them charge for use of these text snippets — instead of only being ‘paid’ in traffic (i.e. by becoming yet more eyeball fodder in Alphabet’s aggregators).
Critics don’t see it that way of course. They see it as an imposition on digital sharing — branding the proposal a “link tax” and arguing it will have a wider chilling effect of interfering with the sharing of hyperlinks.
They argue this because links can also contain words from the content being linked to. And much debate has raged over how the law would (or could) define what is and isn’t a protected text snippet.
They also claim the auxiliary copyright idea hasn’t worked where it’s already been tried (in Germany and Spain). Google just closed its News aggregator in the latter market, for example. Though at the pan-EU level it would have to at least pause before taking a unilateral decision to shutter an entire product.
Germany’s influential media industry is a major force behind Article 11. But in Germany a local version of a snippet law that was passed in 2013 ended up being watered down — so news aggregators were not forced to pay for using snippets, as had originally been floated.
Without mandatory payment (which Spain’s version of the law does impose), the German law has essentially pitted publishers against each other. This is because Google said it would not pay, and also changed how it indexes content for Google News in Germany to make inclusion opt-in only.
That means any local publishers that don’t agree to zero-license their snippets to Google risk losing visibility to rivals that do. So major German publishers have continued to hand their snippets over to Google.
But they appear to believe a pan-EU law might manage to tip the balance of power. Hence Article 11.
Awful amounts of screaming
For critics of the reforms, who often sit on the nerdier side of the spectrum, their reaction can be summed up by a screamed refrain that IT’S THE END OF THE FREE WEB AS WE KNOW IT.
A coalition of original Internet architects, computer scientists, academics and others — including the likes of world wide web creator Sir Tim Berners-Lee, security veteran Bruce Schneier, Google’s chief internet evangelist Vint Cerf, Wikipedia founder Jimmy Wales and entrepreneur Mitch Kapor — also penned an open letter to the European Parliament’s president to oppose Article 13.
In it they wrote that while “well-intended” the push towards automatic pre-filtering of users’ uploads “takes an unprecedented step towards the transformation of the Internet from an open platform for sharing and innovation, into a tool for the automated surveillance and control of its users”.
There is more than a little irony there, though, given that (for example) Google’s ad business conducts automated surveillance of the users of its various platforms for ad targeting purposes — and through that process it’s hoping to control the buying behavior of the individuals it tracks.
At the same time as so much sound and fury has been directed at attacking the copyright reform plans, another very irate, very motivated group of people have been lustily bellowing that content creators need paying for all the free lunches that tech giants (and others) have been helping themselves to.
But the death of memes! The end of fair digital use! The demise of online satire! The smothering of Internet expression! Hideously crushed and disfigured under the jackboot of the EU’s evil Filternet!
And so on and on it has gone.
(For just one e.g., see the below video — which was actually made by an Australian satirical film and media company that usually spends its time spoofing its own government’s initiatives but evidently saw richly viral pickings here… )
A counterexample, to set against the less-than-nuanced yet highly sharable satire-as-hyperbole on show in that video, comes from the Society of Authors — which has written a 12-point breakdown defending the actual substance of the reform (at least as it sees it).
A topline point to make right off the bat is it’s hardly a fair fight to set words against a virally sharable satirical video fronted by a young lady sporting very pink lipstick. But, nonetheless, debunk the denouncers these authors valiantly attempt to.
To wit: They reject claims the reforms will kill hyperlinking or knife sharing in the back; or do for online encyclopedias like Wikimedia; or snuff out memes; or strangle free expression — pointing to explicit exceptions that have been written in to qualify what the law would (and would not) target and how it’s intended to operate in practice.
Wikipedia, for example, has been explicitly stated as being excluded from the proposals.
But they are still pushing water uphill — against the tsunami of DEATH OF THE MEMES memes pouring the other way.
Russian state propaganda mouthpiece RT has even joined in the fun, because of course Putin is no fan of the EU…
The Society of Authors makes the very pertinent point that tech giants have spent millions lobbying against the reforms. They also argue this campaign has been characterised by “a loop of misinformation and scaremongering”.
So, basically, Google et al stand accused of spreading (even more) fake news with a self-interested flavor. Who’d have thunk it?!
The EU’s (voluntary) Transparency Register records Google directly spending between $6M and $6.4M on regional lobbying activities in 2016 alone. (Although that covers not just copyright related lobbying but a full laundry list of “fields of interest” its team of 14 smooth-talking staffers apply their Little Fingers to.)
But the company also seeks to exert influence on EU political opinion via membership of additional lobbying organizations.
And the register lists a full TWENTY-FOUR organizations that Google is therefore also speaking through (by contrast, Facebook is merely a member of eleven bodies) — from the American Chamber of Commerce to the EU to dry-sounding thinktanks, such as the Center for European Policy Studies and the European Policy Center. It is also embedded in startup associations, like Allied for Startups. And various startup angles have been argued by critics of the copyright reforms — claiming Europe is going to saddle local entrepreneurs with extra bureaucracy.
Google’s dense web of presence across tech policy influencers and associations amplifies the company’s regional lobbying spend to as much as $36M, music industry bosses contend.
Though again that dollar value would be spread across multiple GOOG interests — so it’s hard to sum the specific copyright lobbying bill. (We asked Google — it didn’t answer). Multiple millions looks undeniable though.
Of course the music industry and publishers have been lobbying too.
But probably not at such a high dollar value. Though Europe’s creative industries have the local contacts and cultural connections to bend EU politicians’ ears. (As, well, they probably should.)
Seasoned European commissioners have professed themselves astonished at the level of lobbying — and that really is saying something.
Yes there are actually two sides to consider…
Returning to the Society of Authors, here’s the bottom third of their points — which focus on countering the copyright reform critics’ counterarguments:
The proposals aren’t censorship: that’s the very opposite of what most journalists, authors, photographers, film-makers and many other creators devote their lives to.
Not allowing creators to make a living from their work is the real threat to freedom of expression.
Not allowing creators to make a living from their work is the real threat to the free flow of information online.
Not allowing creators to make a living from their work is the real threat to everyone’s digital creativity.
Stopping the directive would be a victory for multinational internet giants at the expense of all those who make, enjoy and use creative works.
Certainly some food for thought there.
But as entrenched, opposing positions go, it’s hard to find two more perfect examples.
And with such violently opposed and motivated interest groups attached to the copyright reform issue there hasn’t really been much in the way of considered debate or nuanced consideration on show publicly.
But being exposed to endless DEATH OF THE INTERNET memes does tend to have that effect.
What’s that about Article 3 and AI?
There is also debate about Article 3 of the copyright reform plan — which concerns text and data-mining. (Or TDM, as the Commission sexily abbreviates it.)
The original TDM proposal, which was rejected by MEPs, would have limited data mining to research organisations for the purposes of scientific research (though Member States would have been able to choose to allow other groups if they wished).
This portion of the reforms has attracted less attention (but, again, it’s difficult to be heard above screams about dead memes). Though there have been concerns raised from certain quarters that it could impact startup innovation — by throwing up barriers to training and developing AIs by putting rights blocks around (otherwise public) data-sets that could (otherwise) be ingested and used to foster algorithms.
Or that “without an effective data mining policy, startups and innovators in Europe will run dry”, as a recent piece of sponsored content inserted into Politico put it.
That paid for content was written by — you guessed it! — Allied for Startups.
Aka the organization that counts Google as a member…
The most fervent critics of the copyright reform proposals — i.e. those who would prefer to see a pro-Internet-freedoms overhaul of digital copyright rules — support a ‘right to read is the right to mine’ style approach on this front.
So basically a free for all — to turn almost any data into algorithmic insights. (Presumably these folks would agree with this kind of thing.)
Middle-ground positions, which are among the potential amendments now being considered by MEPs, would support some free text and data mining — but, where legal restrictions exist, there would be licenses allowing for extractions and reproductions.
And now the amendments, all 252 of them…
The whole charged copyright saga has delivered one bit of political drama already — when the European Parliament voted in July to block proposals agreed only by the legal affairs committee, thereby reopening the text for amendments and fresh votes.
So MEPs now have the chance to refine the parliament’s position via supporting select amendments — with that vote taking place next week.
There are 252 in all! Which just goes to show how gloriously messy the democratic process is.
It also suggests the copyright reform could get entirely stuck — if parliamentarians can’t agree on a compromise position which can then be put to the European Council and go on to secure final pan-EU agreement.
So, for example, Pirate MEP Julia Reda argues that amendments to add limited exceptions for platform liability would still constitute “upload filters” (and therefore “censorship machines”).
Her preference would be deleting the article entirely and making no change to the current law. (Albeit that’s not likely to be a majority position, given how many MEPs backed the original Juri text of the copyright reform proposals: 278 voted in favor, losing out to 318 against.)
But she concedes that limiting the scope of liability to only music and video hosting platforms would be “a step in the right direction, saving a lot of other platforms (forums, public chats, source code repositories, etc.) from negative consequences”.
She also flags an interesting suggestion — via another tabled amendment — of “outsourcing” the inspection of published content to rightholders via an API.
“With a fair process in place [it] is an interesting idea, and certainly much better than general liability. However, it would still be challenging for startups to implement,” she adds.
Reda has also tabled a series of additional amendments to try to roll back what she characterizes as “some bad decisions narrowly made by the Legal Affairs Committee” — including adding a copyright exception for user generated content (which would essentially get platforms off the hook insofar as rights infringements by web users are concerned); adding an exception for freedom of panorama (aka the taking and sharing of photos in public places, which is currently not allowed in all EU Member States); and another removing a proposed extra copyright added by the Juri committee to cover sports events — which she contends would “filter fan culture away”.
So is the free Internet about to end??
MEP Catherine Stihler, a member of the Progressive Alliance of Socialists and Democrats, who also voted in July to reopen debate over the reforms, reckons nearly every parliamentary group is split — ergo the vote is hard to call.
“It is going to be an interesting vote,” she tells TechCrunch. “We will see if any possible compromise at the last minute can be reached but in the end parliament will decide which direction the future of not just copyright but how EU citizens will use the internet and their rights on-line.
“Make no mistake, this vote affects each one of us. I do hope that balance will be struck and EU citizens’ fundamental rights protected.”
So that sort of sounds like a ‘maybe the Internet as you know it will change’ then.
Other views are available, though, depending on the MEP you ask.
We reached out to Axel Voss, who led the copyright reform process for the Juri committee, and is a big proponent of Article 13, Article 11 (and the rest), to ask if he sees value in the debate having been reopened rather than fast-tracked into EU law — to have a chance for parliamentarians to achieve a more balanced compromise. At the time of writing Voss hadn’t responded.
Voting to reopen the debate in July, Stihler argued there are “real concerns” about the impact of Article 13 on freedom of expression, as well as flagging the degree of consumer concern parliamentarians had been seeing over the issue (doubtless helped by all those memes + petitions), adding: “We owe it to the experts, stakeholders and citizens to give this directive the full debate necessary to achieve broad support.”
MEP Marietje Schaake, a member of the Alliance of Liberals and Democrats for Europe, was willing to hazard a politician’s prediction that the proposals will be improved via the democratic process — albeit, what would constitute an improvement here of course depends on which side of the argument you stand.
But she’s rooting for exceptions for user generated content and additional refinements to the three debated articles to narrow their scope.
Her spokesman told us: “I think we’ll end up with new exceptions on user generated content and freedom of panorama, as well as better wording for article 3 on text and data mining. We’ll end up probably with better versions of articles 11 and 13, the extent of the improvement will depend on the final vote.”
The vote will be held during an afternoon plenary session on September 12.
As a teenager in Nigeria, I tried to build an artificial intelligence system. I was inspired by the same dream that motivated the pioneers in the field: That we could create an intelligence of pure logic and objectivity that would free humanity from human error and human foibles.
I was working with weak computer systems and intermittent electricity, and needless to say my AI project failed. Eighteen years later—as an engineer researching artificial intelligence, privacy and machine-learning algorithms—I’m seeing that so far, the premise that AI can free us from subjectivity or bias is also disappointing. We are creating intelligence in our own image. And that’s not a compliment.
Researchers have known for a while that purportedly neutral algorithms can mirror or even accentuate racial, gender and other biases lurking in the data they are fed. Internet searches on names that are more often identified as belonging to black people were found to prompt search engines to generate ads for bail bondsmen. Algorithms used for job-searching were more likely to suggest higher-paying jobs to male searchers than female. Algorithms used in criminal justice also displayed bias.
Five years later, expunging algorithmic bias is turning out to be a tough problem. It takes careful work to comb through millions of sub-decisions to figure out why the algorithm reached the conclusion it did. And even when that is possible, it is not always clear which sub-decisions are the culprits.
Yet applications of these powerful technologies are advancing faster than the flaws can be addressed.
Recent research underscores this machine bias, showing that commercial facial-recognition systems excel at identifying light-skinned males, with an error rate of less than 1 percent. But if you’re a dark-skinned female, the chance you’ll be misidentified rises to almost 35 percent.
AI systems are often only as intelligent—and as fair—as the data used to train them. They use the patterns in the data they have been fed and apply them consistently to make future decisions. Consider an AI tasked with sorting the best nurses for a hospital to hire. If the AI has been fed historical data—profiles of excellent nurses who have mostly been female—it will tend to judge female candidates to be better fits. Algorithms need to be carefully designed to account for historical biases.
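The nurse-hiring example above can be reduced to a toy sketch. In the illustrative Python below (all data and names are hypothetical, invented purely to mirror the article's scenario), a naive model learns "excellence" rates from skewed historical records by simple frequency counting, and so ends up scoring candidates on gender alone — reproducing the historical imbalance rather than measuring competence:

```python
from collections import Counter

# Hypothetical historical hiring records: (gender, was_rated_excellent).
# The skew — most "excellent" nurses in the past were female — mirrors
# the article's example of biased training data.
history = (
    [("F", True)] * 80 + [("F", False)] * 10 +
    [("M", True)] * 5 + [("M", False)] * 5
)

def train(records):
    """Learn P(excellent | gender) by counting frequencies in the history."""
    totals, excellent = Counter(), Counter()
    for gender, is_excellent in records:
        totals[gender] += 1
        if is_excellent:
            excellent[gender] += 1
    return {g: excellent[g] / totals[g] for g in totals}

model = train(history)

# The "model" now rates any female candidate ~0.89 and any male ~0.50,
# purely because of the historical skew in the training data.
print(model["F"])  # 0.888...
print(model["M"])  # 0.5
```

The point of the sketch is that nothing in the code is malicious: the bias enters entirely through the data the model consumes, which is why the article argues algorithms must be carefully designed to account for historical imbalances rather than simply fed more of them.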
Occasionally, AI systems get food poisoning. The most famous case was Watson, the AI that first defeated humans in 2011 on the television game show “Jeopardy.” Watson’s masters at IBM needed to teach it language, including American slang, so they fed it the contents of the online Urban Dictionary. But after ingesting that colorful linguistic meal, Watson developed a swearing habit. It began to punctuate its responses with four-letter words.
We have to be careful what we feed our algorithms. Belatedly, companies now understand that they can’t train facial-recognition technology by mainly using photos of white men. But better training data alone won’t solve the underlying problem of making algorithms achieve fairness.
Algorithms can already tell you what you might want to read, who you might want to date and where you might find work. When they are able to advise on who gets hired, who receives a loan, or the length of a prison sentence, AI will have to be made more transparent—and more accountable and respectful of society’s values and norms.
Accountability begins with human oversight when AI is making sensitive decisions. In an unusual move, Microsoft president Brad Smith recently called for the U.S. government to consider requiring human oversight of facial-recognition technologies.
The next step is to disclose when humans are subject to decisions made by AI. Top-down government regulation may not be a feasible or desirable fix for algorithmic bias. But processes can be created that would allow people to appeal machine-made decisions—by appealing to humans. The EU’s new General Data Protection Regulation establishes the right for individuals to know and challenge automated decisions.
Today people who have been misidentified — whether in an airport or an employment database — have no recourse. They might have been knowingly photographed for a driver’s license, or covertly filmed by a surveillance camera (which has a higher error rate). They cannot know where their image is stored, whether it has been sold or who can access it. They have no way of knowing whether they have been harmed by erroneous data or unfair decisions.
Minorities are already disadvantaged by such immature technologies, and the burden they bear for the improved security of society at large is both inequitable and uncompensated. Engineers alone will not be able to address this. An AI system is like a very smart child just beginning to understand the complexities of discrimination.
To realize the dream I had as a teenager, of an AI that can free humans from bias instead of reinforcing bias, will require a range of experts and regulators to think more deeply not only about what AI can do, but what it should do—and then teach it how.
We’ve been following the reforms to CFIUS — the Committee on Foreign Investment in the United States — since the proposal was first floated late last year. The committee is charged with protecting America’s economic interests by preventing takeovers of companies by foreign entities where the transaction could have deleterious national security consequences. The committee and its antecedents have slowly gained powers over the past few decades since the Korean War, but this week, it suddenly gained a whole lot more.
One of the top priorities of this legislation was to make it more difficult for Chinese venture capital firms to invest in American startups and pilfer intellectual property or acquire confidential user data.
Congress fulfilled that goal in two ways. First, the definition of a “covered transaction” has been massively expanded, with a focus on “critical technology” industries. In the past, there was an expectation that a foreign entity had to essentially buy out a company in order to trigger a CFIUS review. That jurisdiction has now been expanded to include such actions as adding a member to a company’s board of directors, even in cases where an investment is essentially passive.
That means that the typical VC round could now trigger a review in Washington — and in the fast timelines of startup fundraising, that might be enough friction to keep Chinese venture capital out of the American ecosystem. Given that Chinese venture capital (at least by some measures) has outpaced U.S. venture capital in the first half of this year, this provision will have huge ramifications for startups and their valuations.
The second element Congress added was requiring that CFIUS receive all partnership agreements that a company has signed with a foreign investor. Often in a transaction, there is a main agreement spelling out the overall structure of a deal, and then side agreements with individual investors with special terms not shared with the wider syndicate, such as the right to access internal company data or intellectual property. By requiring further disclosure, CFIUS will have a more holistic picture of a deal and any risks it might add for national security.
It’s important to note that Congress was keen on balancing the need for investment with the needs of national security. Through oversight provisions, including allowing CFIUS decisions to be contested in the DC Court of Appeals, Congress has designed the reform to be fairer, even as it takes a harder line on certain transactions.
It will take many months for the provisions to come into full force, so some of the effects of this bill won’t be felt until the end of next year. Nonetheless, Congress has sent a clear message of its intent.
So far, the tech industry appears to have been more insulated from the back-and-forth than expected, although the increasing scope and intensity of tariffs could change that calculus. Apple updated its quarterly filing this week to include a new risk around trade disputes, saying that “Tariffs could also make the Company’s products more expensive for customers, which could make the Company’s products less competitive and reduce consumer demand.” Legal boilerplate for sure, but it is the first time the company has included such a provision in its filing.
The tariffs drama is going to continue in the weeks and months ahead. But this week in particular was a watershed for U.S.-China technology relations, and a busy week for tech lobbyists and policy officials.
For startups, most of this news basically boils down to the following: the U.S. is one market, and China is another. Cross-investing and cross-distribution just aren’t going to be as easy as they were even a few months ago. Pick a market — one market — and focus your energies there. Clearly, it’s going to be tough times for anyone caught in the middle between the two.