The impact of AI must be front of mind for enforcers of merger control policy, the European Union’s antitrust chief and digital EVP, Margrethe Vestager, said yesterday, warning that “wide-reaching” digital markets can lead to unexpected economic effects. Speaking during a seminar on how to prevent tech giants like Microsoft, Google and Meta from monopolizing AI, she fired a verbal shot across Big Tech’s bow, warning the companies to expect more, and deeper, scrutiny of their operations.
“We have to look carefully at vertical integration and at ecosystems. We have to take account of the impact of AI in how we assess mergers. We even have to think about how AI might lead to new kinds of algorithmic collusion,” she said.
Her remarks suggest the bloc will be a lot more active in its assessments of tech M&A going forward, and, indeed, of cozy AI partnerships.
Last month the EU said it would look into whether Microsoft’s investment in generative AI giant OpenAI is reviewable under the bloc’s merger regulations.
Vestager’s address was also notable for clearly acknowledging that competition challenges are inherent to how cutting-edge AI is developed, with the Commission EVP flagging “barriers to entry everywhere”.
“Large Language Models [LLMs] depend on huge amounts of data, they depend on cloud space, and they depend on chips. There are barriers to entry everywhere. Add to this the fact that the tech giants have the resources to acquire the best and brightest talent,” she said. “We’re not going to see disruption driven by a handful of college dropouts who somehow manage to outperform Microsoft’s partner OpenAI or Google’s DeepMind. The disruption from AI will come from within the nest of existing tech ecosystems.”
The blistering rise of generative AI over the past year or so has shone a spotlight on how developments are dominated by a handful of firms that either have close ties to familiar Big Tech platforms or are tech giants themselves. Examples include ChatGPT maker OpenAI’s close partnership with hyperscaler Microsoft; Google and Amazon plowing investment into OpenAI rival Anthropic; and Facebook’s parent Meta mining its social media data mountain to develop its own series of foundation models (aka Llama).
How European AI startups can hope to compete without equivalent access to key AI infrastructure was a running thread in the seminar discussions.
Challenges and uncertainties
“We’ve seen Llama 2 being open sourced. Will Llama 3 also be open sourced?” wondered Tobias Haar, general counsel of the German foundation model startup Aleph Alpha, speaking during a panel discussion that followed Vestager’s address. “Will there be companies that rely on open source Large Language Models that suddenly, at the next iterative stage, are no longer available as open source?”
Haar emphasized that uncertainty over access to key AI inputs is why the startup decided to invest in building and training its own foundation models in its own data center, “in order to keep and maintain this independence”. At the same time, he flagged the inherent challenge for a European startup trying to compete with US hyperscalers and the dedicated compute resources they can roll out for training AIs with their chosen partners.
Aleph Alpha’s own data center runs 512 Nvidia A100 GPUs, the “largest commercial AI cluster” in Europe, per Haar. But he emphasized this pales in comparison to Big Tech’s training infrastructure, pointing to Microsoft’s announcement last year that it would be installing circa 10,000 GPUs in the U.K. as part of a £2.5 billion investment over three years (a sum that will actually fund more than 20,000 GPUs by 2026, a roughly fortyfold gap in raw GPU count).
“In order to put it into perspective — and perspective is also what is relevant in the competition law assessment of what is going on in the market field — we run 512 A100 GPUs by Nvidia,” he said. “This is a lot, because it makes us somewhat independent, but it’s still nothing compared to the sheer computing power there is for other organizations to train and to fine-tune their LLMs on. And I know that OpenAI has been training the LLMs — but I understand that Microsoft is fine-tuning them also to their needs. So this is already [not a level playing field].”
In her address, Vestager did not offer any concrete plan for how the bloc might move to level the playing field for homegrown generative AI startups, nor did she entirely commit to the need for the bloc to intervene. (Tackling digital market concentration, which built up partly on her watch, remains a tricky subject for the EU, which has increasingly been accused of regulating everything but changing nothing when it comes to Big Tech’s market power.)
Nonetheless, her address suggests the EU is preparing to get a lot tougher and more comprehensive in scrutinizing tech deals, as a consequence of recent developments in AI.
Only a handful of years ago, Vestager cleared Google’s controversial acquisition of fitness wearable maker Fitbit, accepting commitments from the tech giant that it wouldn’t use Fitbit’s data for ads for a period of ten years, while leaving it free to mine users’ data for other purposes, including AI. (To wit: Last year Google added a generative AI chatbot to the Fitbit app.)
But the days of Big Tech getting to cherry-pick acquisition targets, and grab juicy-looking AI training data, may be winding down in Europe.
Vestager also implied the bloc will seek to make full use of existing competition tools, including the Digital Markets Act (DMA), an ex ante competition reform that begins applying to six tech giants (including Microsoft, Google and Meta) early next month, as part of its playbook to shape how the AI market develops. The suggestion was that the EU’s competition policy must work hand in glove with its digital regulations to keep pace with risks and harms.
There have been doubts over how — or even whether — the DMA applies to generative AI, given no cloud services have so far been designated under the regulation as so-called “core platform services”. So there are worries the bloc has, once again, missed the boat when it comes to putting meaningful market controls on the next wave of disruptive tech.
In her address, Vestager rejected the idea that it’s already too late for the EU to prevent Big Tech sewing up AI markets, tentatively suggesting “we can make an impact”. But she also warned that the “window of opportunity” for enforcers and lawmakers to shape outcomes that are “truly beneficial to our economy, to our citizens and to our democracies”, as she put it, will only be open briefly.
Still, her speech raised a lot more questions over how enforcers and policymakers should respond to the layered challenges thrown up by AI (democratic integrity, intellectual property and the ethical application of such systems, to name a few) than it offered actual solutions. She also sounded hesitant when it came to how to weigh competition considerations against the broader sweep of societal harms AI use may entail. So her message, and her resolve, seemed a little conflicted.
“There are still big questions around how intellectual property rights are respected. About how ethical AI is deployed. About areas where AI should never be deployed. In each of these decisions, there is a competition policy dimension that needs to be considered. Conversely, how AI regulation is enforced will affect the openness and accessibility of the markets it impacts,” she said, implying there may be trade-offs between regulating AI risks and fostering a vibrant AI ecosystem.
“There are questions around input neutrality and the influence such systems could have on our democracies. A Large Language Model is only as good as the inputs it receives, and for this there must always be a discretionary element. Do we really want our opinion-making to be reliant on AI systems that are under the control not of the European people — but of tech oligarchs and their shareholders?” she also wondered, suggesting the bloc may need to think about drafting even more laws to regulate AI risks.
Clearly, coming up with more laws now is not a recipe for instant action on AI, yet her speech did call for “acting swiftly” (and “thinking ahead” and “cooperating”) to maximize the benefits of AI while minimizing the risks.
Overall, despite the promise of more intelligent merger scrutiny, the tone she struck veered toward managing expectations. And her call to action appealed to a broader collective of international enforcers, regulators and policymakers to join forces to fix the problem, rather than the EU sticking its own head above the parapet.
While Vestager offered no instant answers for derailing Big Tech’s well-funded dash to monopolize AI, other panelists offered a few.
Solutions
The fieriest ideas came from Barry Lynn of the Washington-based Open Markets Institute, a non-profit whose stated mission starts with stopping monopolies. “Let’s break off cloud,” he suggested. “Let’s turn cloud into a utility. It’s pretty easy to do. This is actually one of the easiest solutions we can embrace right now — and it would take away a huge amount of their leverage.”
He also called for a blanket non-discrimination regime (i.e. “common carrier”-type rules for platforms that prohibit price discrimination and information manipulation), and for a requisitioning of the aggregated “public data” tech giants have amassed by tracking web users. “Why does Google own the data? That’s our data,” he argued. “It’s public data… It doesn’t belong to Google — doesn’t belong to any of these folks. It’s our data. Let’s exert ownership over it.”
Microsoft’s director of competition, Carel Maske, who had — awkwardly enough — been seated right next to Lynn on the panel, all but broke into a sweat when the moderator offered him the chance to respond to that. “I think there’s a lot to discuss,” he hedged, before doing his best to brush aside Lynn’s case for immediate structural separation of hyperscalers.
“I’m not sure you are addressing, really, the needs of the investments that are needed in cloud and infrastructure,” he got out, dangling a skeletal argument against breakup (i.e. that structural separation of Big Tech from core AI infrastructure would undermine the investment needed to drive innovation forward). He then hurried to route the chat back to more comfortable topics, like “how to make competition tools work” or “what the appropriate regulatory framework is”, which Microsoft evidently feels won’t prevent Big Tech business as usual.
On the question of whether existing competition tools are able to bring tech giants’ scramble for AI to heel, another panelist, Andreas Mundt, president of the German competition authority, the Federal Cartel Office (FCO), had a negative perspective to recount, drawn from recent experience.
Existing merger processes have already failed, domestically, to tackle Microsoft’s cozy relationship with OpenAI, he pointed out. The FCO took an early look at whether the partnership should be subject to merger control, before deciding last November that the arrangement did not “currently” meet the bar.
During the panel, Mundt said he would have liked a very different outcome. He argued tech giants have, very evidently, changed tack from the earlier “killer acquisition” strategy they deployed to slay emergent competition to a softer partnership model that lets these close engagements fly under enforcers’ radar.
“All we see are very soft cooperations,” he noted. “This is why we looked at this Microsoft OpenAI issue — and what did we find? Well, we were not very happy about it but from a formal point of view, we could not say this was a merger.
“What we found — and this should not be underestimated — in 2019 when Microsoft invested more than €1 billion into OpenAI we saw the creation of a substantial competitive influence of Microsoft into OpenAI. And that was long before Sam Altman was fired and rehired again. So there is this influence, as we see it, and this is why merger control is so important.
“But we could not prohibit that as a merger, by the way, because by that time, OpenAI had no impact in Germany — they weren’t active on German markets — this is why it was not a merger from our perspective. But what remains, and it is very, very important: there is this substantial competitive influence — and we must look at that.”
Asked what he would have liked to be able to do about Microsoft-OpenAI, the FCO’s Mundt said he wanted to look at the core question: “Was it a merger? And was it a merger that maybe needs to go to phase two — that we should assess and maybe block?”
Striking a more positive note, the FCO president professed himself “very happy” that the European Commission took the subsequent decision, last month, to open its own proceeding to check whether Microsoft and OpenAI’s partnership falls under the bloc’s merger rules. He also highlighted the U.K. competition authority’s move in December, when it said it would look at whether the tie-up amounts to a “relevant merger situation”.
Those proceedings are ongoing.
“I can promise you, we will look at all these cooperations very carefully — and if we see, if it only gets close to a merger, we will try to get it in [to merger rules],” Mundt added, factoring fellow enforcers’ actions into his calculation of what success looks like here.
Vestager also named a whole army of competition and digital rule enforcers working together, even in parallel, to attack the knotty problems thrown up by Big Tech and AI as a critical piece of cracking this puzzle. (On this front, she encouraged responses to an open consultation on generative AI and virtual worlds that the competition unit is running until March 11.)
“For me, the very first lesson from our experience so far is that our impact will always be greatest when we work together, communicate clearly, and act early on,” she emphasized, adding: “I will continue to engage with my counterparts in the United States and elsewhere, to align our approach as much as possible.”