This briefing explores AI's place in an increasingly technological world, focusing on developments in the US, China and the EU. This piece was written by Victoria Iamoni and edited by Tanvi Sureka.
We are amid a new global revolution: the AI Boom. Over the past decade, as artificial intelligence (AI) technology has advanced, politicians and entrepreneurs have compared its transformative power to that of the Industrial Revolution. AI was even named Collins Dictionary’s 2023 Word of the Year. The Information Technology & Innovation Foundation defines AI as “the simulation of human intelligence in machines, enabling them to perform tasks such as visual perception, speech recognition, decision-making, and language translation.” What started as a sci-fi dream is now the centre of an increasingly competitive geopolitical and technological race. Governments worldwide have signalled their intent to harness AI’s incredible potential to raise productivity, improve living standards, and secure economic investment in a world of rising isolationism and protectionism.
It is no surprise that countries are interested. The Tony Blair Institute for Global Change has published numerous reports on AI’s potential in defence, climate, legislative strategy, healthcare and many other areas. American multinational Goldman Sachs has noted that “the promise of generative AI technology to transform companies, industries, and societies is leading tech giants and beyond to spend an estimated $1 trillion”. International players are now racing to establish their own AI governance frameworks in a bid to become the global leader. This article examines and contrasts the strategies of the three big juggernauts of AI: the US, China and the European Union (EU). At the heart of their potential to be the world leader lies a critical resource: data. As governments and corporations drive investment into AI technologies, the role of citizens — whose data fuels these systems — and their interactions with governments will shape the technological and international landscape of the future.
Now, what is the relationship between data and AI? To understand this, we must look at how generative AI models are built. America has the most famous example in OpenAI’s ChatGPT, launched in November 2022, with OpenAI now valued at up to $157 billion. China’s Baidu, a multinational with 2023 revenues of $18.96 billion, competes alongside the country’s “AI Tigers”, “a club of [Chinese] unicorns focusing on generative artificial intelligence that include Zhipu AI, Moonshot AI and MiniMax”. The EU has Aleph Alpha, a German startup that raised over $500 million in 2023. All of these companies build deep learning models, which need large datasets to learn patterns so that they can make predictions and perform tasks. The better and more comprehensive the data, the better the output and performance of these models. These companies therefore have a strong incentive to access as much data as possible, both for the longevity and performance of their programmes and for their financial success. However, a model trained on bad data performs badly, so there is a countervailing incentive to ensure the data is trustworthy. These three actors are competing in the data market, so how do they set themselves apart in their approach to data governance?
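To illustrate this dependence on data quality in miniature, the sketch below trains the same simple classifier twice: once on clean labels and once on labels with 30% of them deliberately flipped. This is a toy illustration only, not any of these companies' actual training pipelines; the library (scikit-learn), the synthetic dataset, and the noise rate are all assumptions chosen for the example.

```python
# Toy sketch: the same model trained on clean vs. corrupted data,
# to show why trustworthy data matters for model performance.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A synthetic "dataset" standing in for real training data.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Simulate "bad data" by flipping 30% of the training labels.
rng = np.random.default_rng(0)
noisy_y = y_train.copy()
flip = rng.random(len(noisy_y)) < 0.3
noisy_y[flip] = 1 - noisy_y[flip]

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
noisy_model = LogisticRegression(max_iter=1000).fit(X_train, noisy_y)

print("Accuracy when trained on clean data:", clean_model.score(X_test, y_test))
print("Accuracy when trained on noisy data:", noisy_model.score(X_test, y_test))
```

The model trained on corrupted labels scores noticeably worse on unseen data, mirroring the incentive described above: more data helps, but only if that data can be trusted.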
US: Leading by the market
The US is the current leader in AI. To put this in perspective, “investors allocated one in three VC dollars in North America to AI and machine learning, compared to less than one in five in Europe and one in 10 in Asia”. OpenAI, Google and Microsoft dominate the market, benefiting from relatively lax federal regulation that prioritises economic competitiveness. The US framework has been characterised by the International Centre of Expertise in Montréal on Artificial Intelligence (CEIMIA) as “[using] non-binding principles, voluntary guidance on risk management, and the application of existing sectoral legislation rather than the development of new AI-specific legislation at the federal level.” Both the Biden and Trump White Houses have made efforts to introduce ethical principles into AI governance. These have largely taken the form of voluntary commitments from companies such as Amazon, Google, Meta, Microsoft, and OpenAI to drive “safe, secure, and trustworthy AI development”. However, without comprehensive federal data protection legislation, the US has struggled to move beyond sector-specific data rules, a gap reinforced by a market-first approach that prioritises the growth of its AI industry.
Consequently, vulnerabilities are evident. High-profile data breaches, such as the Yahoo and Microsoft hacks, have exposed millions of personal accounts, undermining trust in American systems. This is even more worrying given that OpenAI is working with the US Treasury and Department of Defence. More importantly, without robust data protections, the potential of AI in the US could be at risk. Weak digital rights protections, eroding trust among customers and citizens, and the costs of fraud prevention and cybersecurity all weigh heavily. Additionally, algorithmic bias arises when systems are not trained on accurate and representative data. Bad or illegally obtained data produces poor products in the long run and undermines consumer confidence in the market. For now, public perception has mainly been swayed by inaccurate responses from AI systems, from ChatGPT fabricating legal cases to Google’s AI recommending that people eat “a rock a day”. But without data protections to guard against these issues, the US’s competitive edge might not be enough to ensure its leadership in the AI race.
China: Leading by the state
China has emerged as a formidable competitor over the past decade. Its approach has been state-led, with heavy investment driving the way forward. China now produces more AI research publications than any other country, surpassing the US. Its continued ascent has also strengthened foreign investment ties in this growing sector, including with Saudi Arabia through Aramco. But it is mainly China’s state-driven model, which channels state capital and financial support into its domestic market, that has made the difference. It has also given the Chinese Communist Party (CCP) renewed control over its technology sector. The CCP has faced challenges to its authority from Big Tech’s market dominance, prompting sweeping crackdowns on tech giants in 2020 to disrupt monopolies and weaken their influence. Given that the CCP’s campaign of “a series of stringent antitrust, data, and labour regulation, while imposing astronomical fines on companies” cut tech giants’ market capitalisation by 75%, one can see the delicate balance in using data regulation to drive or derail progress.
The CCP’s specific approach to AI operates through its “two-year rules and regulations”, which allow governmental control over the powerful digital economy to be continually updated. Its AI data regulation framework, in turn, has significant bearing on how effective these systems can be. The framework rests on three pillars: “(1) content moderation online to ensure traceability and authenticity, (2) data protection to prevent harm to users or the undermining of public order, and (3) algorithmic management to ensure security, ethicality, and transparency”. This centralised approach currently supports rapid innovation, especially as the CCP’s direct involvement means the sector poses no apparent threat to it.
What is interesting about China’s data protection approach is its focus on public order over individual digital rights. The CCP already enforces the “Great Firewall”, an internet censorship system with strict content moderation policies. Paired with the focus on traceability in data regulation, this raises questions about AI technology’s potential role as a tool for public-order governance. A more pressing issue for AI innovation, however, is whether governmental concerns over the power of China’s AI Tigers could trigger another round of crackdowns. The impact such a move could have is still uncertain, but for now there seems to be little hindrance to China’s progress as an AI leader.
EU: Leading by digital rights
The European Union (EU) sets itself apart from the US and China with its digital rights-led model. With a stringent focus on the right to privacy, presided over by the European Court of Human Rights, the EU has taken a more protective approach towards its citizens in how it handles investment in AI. First, the General Data Protection Regulation (GDPR) provides holistic protection of citizens’ data. It is paired with the 2024 EU Artificial Intelligence Act, which has been described as the “first comprehensive regulation on AI by a major regulator anywhere.” The 2024 Act takes a risk-based approach, categorising AI systems by their potential risk and imposing the strictest governance requirements on those deemed “high risk”.
When considering the competition, it makes sense that the EU has positioned itself as the leader in ‘ethical AI’, guided by principles of “excellence and trust”. It does not have the investment or capital to match the US or China, so it is instead seeking to make itself the best place for consumers of AI and to set a global standard of trust and transparency. In the US, 54% of Americans feel cautious and concerned about artificial intelligence; there is space for an approach that calms these fears. However, the EU’s ethical approach is already being tested by big tech corporations. Google is currently under investigation for breaching GDPR with its AI models, potentially risking fines of up to 4% of its global annual revenues. Rival tech giant Meta announced that “we will release a multimodal Llama model over the coming months, but not in the EU due to the unpredictable nature of the European regulatory environment". This is compounded by private investment in the EU lagging behind: investors see the region’s strict data protection and governance legislation as stifling AI innovation rather than safeguarding it. Whether the EU can innovate while also leading on AI regulation is still an open question.
The Future of AI
It remains to be seen whether the US, China or the EU will have the most success with its AI strategy. What is clear, however, is that whichever power leads the AI race will set the tone for technological innovation in the years to come. Whether the world is driven by a market-based, state-based, or digital rights-based approach to this transformative technology is up to scientists, entrepreneurs, investors, and policymakers.