Germany and the EU can’t copycat Silicon Valley’s AI strategy
The author Steven Hill argues in his op-ed that Europe should not sacrifice its role as the “conscience of the Internet” on the altar of commercial and military priorities.
by Steven Hill
The pace of AI development in Silicon Valley has turned fast and furious. ChatGPT, Bard, Gemini, OpenAI, Bing AI: just a year ago, hardly anyone had heard of these applications and companies. Their sudden appearance reflects the rise of large language models, which are trained on massive amounts of data and can be applied to a diverse range of tasks. China has also emerged as a frontrunner in developing and applying AI and digital technologies, with over 100 generative AI models battling for dominance.
Now the French company Mistral AI and the German company Aleph Alpha, the latter with funding from SAP and Bosch, have raced to catch up, raising hundreds of millions of euros in seed funding. Ominously, fears of the dangers of this new technology have been pushed to the sidelines, taking a backseat to commercial interests and national security priorities. Precautionary principle be damned: it’s a new gold rush in the making, and the race is on to stake your claim.
Meanwhile, governments are struggling to keep up with regulatory guardrails. With its General Data Protection Regulation, Digital Services Act and Digital Markets Act, the EU developed a global reputation as the “conscience of the Internet” in regulating Big Tech. However, its recent effort to pass the Western world’s first AI Act ran into a surprising volte-face from several member states, including Germany, France and Italy.
The three largest EU member states used their influence to overturn months of negotiations and compromise, threatening to turn the pioneering legislation into mush. Fears had suddenly emerged in certain corridors of influence that the EU would undermine its own companies’ ability to innovate and catch up with Silicon Valley and China. It was likely no coincidence that a co-founder of Mistral AI is a former French state secretary with direct access to President Emmanuel Macron. In the US, powerful lobbying groups are industry-based, with Silicon Valley maintaining armies of lobbyists in Washington, D.C. But in Europe, the biggest lobbyists in Brussels are often not companies but national governments, and nationally favored companies benefit from the most powerful of lobbying arm-twisters.
But while Germany and France try to keep pace with Silicon Valley, the political and business special interests seem to have missed the forest for the trees. Their posture reflects a clear misunderstanding of what Silicon Valley is and what drives it, both today and historically. Consequently, the EU and Germany are in danger of adopting the wrong strategy for AI development.
“Silicon Valley envy” distorts the big picture
Germany and Europe have long had a bad case of “Silicon Valley envy.” While Europeans lament “Where are the European Googles, Apples, Microsofts, Facebooks and Amazons?” and now “Where are the European OpenAIs and Teslas?”, they forget that much of that new tech has its roots in massive military spending that has subsidized Silicon Valley for decades. Going back to the 1930s and World War II, the San Francisco Bay Area has long been a major site of US government research and technology. In the 1950s and 60s, nearby Stanford University became a research magnet that attracted top tech talent to companies like Fairchild Semiconductor and Bell Telephone Laboratories, focused on military priorities like responding to the Soviet Union’s Sputnik space satellite.
In 1969, the Stanford Research Institute (now known as SRI International) operated one of the four original nodes of the military-funded ARPANET, the first version of the Internet. Other familiar technologies, such as Apple’s voice-recognizing personal assistant Siri, the World Wide Web, Google Maps, internet search and automated vehicles, began as projects of DARPA, the Defense Advanced Research Projects Agency. More recently, the military provided the initial research funding for artificial intelligence, subterranean exploration and deep-space satellites, high-performance molecules and better GPS.
With that stable base of R&D investment from public tax dollars, venture capital funders in Silicon Valley have had the luxury of rolling the dice with their private money on new companies and technologies. Seven out of 10 Silicon Valley startups fail and 9 out of 10 never earn a profit. But the ones that make it through this investment casino – OpenAI, Google, Amazon, Meta/Facebook, Apple – have become hugely profitable and dominant.
China is now engaged in a similar state- and military-sponsored strategy. Is Germany or the EU prepared to launch such a high-risk, military-subsidized and expensive course of development?
EU funding levels are far behind the US and China
Germany and the EU have been trying for several years to increase funding and investment for AI development, but with limited success. Too often there have been grand announcements about new funding sources from the EU, or Germany, France and the UK, but the amount of money actually laid “on the barrel head” has been modest.
In 2022, private AI investment in the US amounted to $47.4 billion, easily surpassing China’s private investment of $13.4 billion. In the EU, private investment amounted to only about $6.6 billion, with the UK alone almost matching that at $4.4 billion. Over the last decade the disparity has been even greater: American private investment climbed to $241 billion and China’s to $95 billion, while the EU’s roughly $16 billion was less than the UK’s $18 billion. EU funding levels over the past decade haven’t been much higher than Israel’s, at $10.8 billion.
But maybe the recent investments in the EU mean that Europe has become aware of the funding gap and is starting to catch up? Not at all. Looking at the number of newly funded AI companies in 2022, the US is far in the lead with 542, China is well behind with 160, and the EU even further behind at around 140.
Even more telling is AI funding by focus area over the half decade from 2017 through 2022. In the crucial areas of private investment in semiconductors, medical and healthcare, energy, cloud computing, financial technology, cybersecurity and data protection, and artificial/virtual reality, US private investors out-invested EU/UK investors combined by anywhere from 58 times (semiconductors) and 34 times (artificial/virtual reality) to 16 times (cybersecurity and data protection) and 13 times (cloud investment). In most of these focus areas, EU/UK spending levels also remain far behind China’s.
The areas where the EU/UK combined have invested competitive funding levels include industrial automation (nearly twice the private investment as the US), marketing and digital ads, human resources technology and retail. With the exception of industrial automation, these areas are not the buzziest game changers, in terms of advancing AI development goals or strategy.
And that’s just private investment; it doesn’t take into account what the US and Chinese governments are spending. While exact amounts are often top secret, China’s government has reportedly invested $2 billion just to build a national AI technology park in Beijing, and billions more in military applications. In recent years the US military has spent about $7 billion annually on unclassified projects in AI and related fields (big data and cloud computing), a 32% increase since 2012. It spent billions more on classified R&D, though the exact figure is unknown.
So the US and China are spending real money. It seems unlikely that Germany or Europe will ever match the deep pockets of US or Chinese governments and companies. That means EU investment and R&D need to be more strategic, even as the EU needs to continue its trajectory as the conscience of digital technology development. What might an alternative “European way” of AI development look like?
The missing AI vision
Amidst all the headlines about military robot drones, or AI stealing our jobs, or the onset of a human-machine merged singularity, what is not discussed enough is what kind of AI research would best serve the public interest. There is a real danger that we will not effectively harness the true power of this technology because current research efforts, whether in Silicon Valley or China, lack a sufficiently humanistic outlook. The right kind of AI development ideally would benefit humankind, rather than focus exclusively on commercial, for-profit applications or on military uses that reinforce a bunkering down into national silos.
All of these shortcomings suggest a direction for European efforts that could make a unique contribution and transform the EU into a global leader. Certainly, the EU and its member states should continue to fund AI development, and support their businesses and academic institutions in developing it. Germany’s government and German research organizations, for example, can do a lot to help the important Mittelstand sector of small and medium enterprises to adopt the most applicable AI technologies. It doesn’t always matter so much where those technologies are developed, or which companies develop them. It’s just as important to be able to incorporate those technologies into businesses and society in practical ways, so that the human benefits are obvious and the economy remains competitive.
So, there are practical reasons for continuing to invest in AI research and development, and helping the new technologies to penetrate into German businesses and industry in practical ways. But Germany and the EU will lose their important perch in the global economy if they sacrifice their crucial role as the “conscience” of the AI movement. That’s why the recent watering down of the EU’s legislation to regulate AI is so disappointing.
The goal in that legislation, first introduced in April 2021, has long been to establish a new global benchmark for countries seeking to harness the potential benefits of AI technology while protecting against its possible risks, like killing jobs, spreading misinformation online or endangering national security. In the Digital Services Act and the Digital Markets Act, the EU established an important precedent under which giant companies and their potentially dangerous technologies that pose “systemic risks” face additional oversight, transparency and regulation. The drafters of the AI Act tried to incorporate this principle into the legislation through provisions covering what are called “foundation models” of AI development. After initially supporting this approach, Germany, France and Italy suddenly tried to throw the principle overboard.
A compromise eventually was reached, but despite recent headlines announcing that the EU had agreed on the terms of this landmark legislation, this is only a provisional agreement and details are still being worked out. Those include provisions over the rigor of enforcement (primarily via large fines and in rare cases a ban of the technology from the EU market), national security exemptions, and whether to ban specific applications such as emotion prediction technologies, predictive policing, biometric profiling by race, religion or political viewpoints, and remote biometric identification, which could potentially lead to mass surveillance.
The law needs to go through several more steps before final approval. EU and German leaders should not underestimate the political, economic and moral value of being the conscience of the Internet. A realistic appraisal of the EU’s place in this rapidly developing AI world makes it clear that innovation extends beyond the technologies themselves to the role of establishing the safest and most humanistic digital infrastructure guidelines for the 21st century.
Steven Hill is editor and main author of DemocracySOS and the author of the book “Europe’s Promise: Why the European Way Is the Best Hope In an Insecure Age.” You can reach him at @StevenHill1776.