The Political Economy of AI | #AI23

February 23, 2023
New necessities for government spending and upcoming regulations make political involvement one of the major trends for AI.

Technological development has always depended on political intervention. Silicon Valley, for instance, could not have become what it is today without the Cold War and the space industry's reliance on semiconductor technology. AI is no different from other technologies in its dependence on politics. The earliest advances in AI were only possible because of significant government investment. This reliance on the political tide is also why the claim by James Lighthill and others that AI was a dead end due to combinatorial explosion could more or less halt progress in the field and bring about the infamous AI Winter.

Developments in recent years, however, have made progress in AI largely independent of government spending. It has become clear that AI can amortize its costs, which has opened the floodgates for large-scale private investment. As the technology becomes increasingly marketable, it provides enough return on investment to escape dependency on public funding. By now, progress in AI is driven mainly by the markets, and the three most important capability labs, DeepMind, OpenAI and Anthropic, are companies independent of government funding. But investment is not the only way politics shapes the trajectory of a technology. A more indirect but arguably even larger impact comes from regulation, which defines the playing field on which entrepreneurs, researchers and established corporations implement a technology. Well-designed regulation avoids possible harm and guarantees a safe infrastructure for all stakeholders without hindering technological progress.

The last years of AI progress were relatively policy-free: not only was public funding less crucial, but there was also no regulation that specifically targeted AI. Times, however, have changed quickly. New necessities for government spending and upcoming regulations make political involvement one of the major trends for AI.

Government Spending

With the coming of age of foundation models such as GPT-3 and DALL-E, governmental engagement is once again a relevant topic if the EU wants to be part of the future and maintain its sovereignty. This is why the German AI Association is pushing forward the LEAM initiative. We have already explained the importance of foundation models and argued for European solutions in a different article series. With the current AI hype, however, the project has become much more urgent.

The German AI Association recently published a feasibility study demonstrating both the urgency of action and the technical feasibility of the project. According to the study, 73% of AI foundation models have come from the US and 15% from China. If Europe misses the current paradigm shift, we will be completely reliant on foreign models and hence on foreign standards for data protection and AI safety. Moreover, the European economy is still heavily reliant on the automobile industry, which is losing global relevance. We are therefore in desperate need of a new powerhouse for the European economy.

However, the supercomputing infrastructure that Europe needs to adopt the new paradigm in AI is still lacking, and individual companies do not have the means to develop it on their own. A joint effort of industry, academia and politics is therefore necessary to restore and preserve European competitiveness. How governments respond and whether they manage to direct sufficient resources towards this issue will be decisive.

Regulations

On April 21, 2021, the European Commission released the first proposal for the European AI Act, and the Parliament is now scheduled to vote on the most recent draft by the end of March this year. If the Parliament accepts the proposal, this will mark the start of the world's first horizontal regulation of AI. The law as proposed divides AI applications into three risk categories plus a category of banned applications; the higher the assessed risk of an application, the more compliance is required to develop it. The high-risk category will likely be the most important one and entails, among other requirements, a conformity assessment that has to be passed before the product becomes available to consumers. Examples of applications classified as "high-risk" are systems used for employment management, in education, and in the context of critical infrastructure.

A recent survey of the European AI scene tried to measure the impact of the AI Act and concluded that Europe's innovative power would be severely limited by the Act in its current state. Between a third and a half of AI systems would likely be classified as "high-risk", imposing additional costs that are especially painful for SMEs and startups. 16% of the surveyed startups are considering stopping AI development or relocating outside of Europe, and according to 73% of the surveyed VCs, the AI Act will significantly harm Europe's competitiveness. Moreover, VC investments are shifting away from European AI startups in the high-risk category, reinforcing the difficulties for the affected ventures.

The main issue with the AI Act in its current form seems to be that the high-risk category targets broad application areas instead of specific harmful use cases. As a result, many products have to go through the compliance process unnecessarily. The regulation thereby favours large corporations that can afford the compliance costs and the long waiting times.

Summary

The trajectory of AI, in Europe and in the world at large, will largely be decided at the governmental level. Public funding and regulation will determine not only what systems can be built but also how fast they are developed and who profits from them. This is why we see the role of politics as a major emerging trend. Decision makers in AI will have to factor in politics if they want to navigate technological progress.
