On 19 June 2025, CODE kicked off its first workshop on Artificial Intelligence (AI), bringing together a diverse group of experts from across borders for a full day of dialogue. The conversation unfolded along three core thematic pillars: the AI infrastructure and its ecosystem; the role of Big Tech; and the geopolitics of AI.
The full programme and speaker line-up can be found here!
We asked one of the scholars participating in our workshop a few questions about the links between geopolitics and technology. The expert in question is Raluca Csernatoni.
Raluca is a guest professor on European security and defence, with a focus on emerging technologies, at the Brussels School of Governance (BSoG) and its Centre for Security, Diplomacy and Strategy (CSDS), at Vrije Universiteit Brussel (VUB). In parallel to her academic role, she is also a research fellow working on the nexus between European defence and emerging and disruptive technologies, such as Artificial Intelligence (AI), at Carnegie Europe in Brussels.
Antonio Calcara: Hello Raluca, many thanks for accepting our invitation to the CODE workshop and for agreeing to share some thoughts with the members of our Substack community. What does your research on technology and geopolitics focus on?
Raluca Csernatoni: My research profile is situated at the crossroads of international politics, security, and technology. I have consistently explored how emerging and disruptive technologies (EDTs) in the civilian and military domains shape and are shaped by European and international security, as well as geopolitical dynamics. Notably, I research dual-use technological innovation ecosystems in the European Union and NATO, by building theoretical bridges between International Relations, Security Studies, European Studies, and Science and Technology Studies. This interdisciplinary approach remains at the heart of my ongoing work.
I have published extensively on the geopolitics of emerging technologies, including AI, European security and defence policy, the European defence technological and industrial base, transatlantic technological cooperation, as well as European digital sovereignty and cybersecurity cooperation within the context of the EU, NATO, and the Indo-Pacific region. In my work, I have examined the military and civilian drone ecosystems, exploring how supply chains and social norms surrounding autonomy reverberate from Brussels to Washington, DC.
In the cyber realm, I coordinate Carnegie Europe’s EU Cyber Direct work on the impact of EDTs in cyberspace, especially with regard to security, resilience, and norm-building, while also analysing EU efforts to secure digital sovereignty through regulatory interventions, standards, industrial policy, and data governance. In recent years, my research has focused especially on the disruptive impact of (military) AI. For instance, my 2024 Carnegie Europe long report, Charting the Geopolitics and European Governance of Artificial Intelligence, examines the EU’s bid to set a global gold standard for trustworthy AI amid state-corporate rivalry and a fragmented regulatory regime complex, while my 2025 Carnegie Europe long report, The EU’s AI Power Play: Between Deregulation and Innovation, zooms in on the EU’s recent deregulatory shift, which risks eroding democratic oversight and the union’s global norm-setting credibility, and on recent initiatives to invest in a European sovereign AI agenda.
AC: Why do you believe this research area is important or timely?
RC: I do believe that the stakes of geopolitics-technology research have never been higher. Emerging and disruptive technologies now sit at the heart of great power competition and corporate geopolitics, shaping everything from battlefield outcomes in Ukraine to supply-chain realignments across advanced semiconductors and rare earths. Decisions made today regarding AI safety, drone export rules, or semiconductor subsidies will establish strategic advantages and normative frameworks that will last for decades.
Meanwhile, liberal democracies face a dual imperative: harness innovation to remain competitive while guarding against surveillance, disinformation, and coercive dependencies that authoritarian actors exploit. The EU’s AI Act, the U.S.-China tech arms race, and intensified cyber operations all illustrate how technical choices have immediate geopolitical, security, economic, and human-rights consequences. By analysing these fast-moving developments through interdisciplinary lenses, scholars can provide critical and evidence-based guidance to policymakers struggling to craft resilient, values-driven governance.
In short, studying the techno-geopolitical nexus is timely because the window for shaping a rules-based, inclusive digital order is rapidly closing, and informed insight is essential to keep it open. More importantly, examining this nexus demands academic interdisciplinarity: in my case, by blending critical security studies, international relations, and science and technology studies, I aim to illuminate how sociotechnical power dynamics shape governance and security practices.
AC: Everyone is talking about AI. Do you think this technology will have a major impact on geopolitics?
RC: Absolutely, but not because algorithms possess innate power or due to the current politics of hype surrounding the advent of Artificial General Intelligence (AGI) or superintelligence. I consider AI systems as sociotechnical assemblages, namely layers of data extraction, cloud infrastructure, semiconductor chokepoints, compute power, human talent, labour practices, and regulatory regimes. These material-symbolic and public-private networks can both reinforce and reorder global hierarchies.
States and corporate technological giants that command advanced fabs, vast datasets, and cloud hyperscalers gain leverage over supply chains, alliance politics, and norm-setting, as witnessed by the U.S. chip export controls or China’s smart-city diplomacy. Yet AI’s diffusion through open-source code and commodity hardware allows smaller actors to weaponise deepfakes, leverage AI-powered cyber capabilities, and deploy drone swarms, thus unsettling traditional power balances.
Most interestingly, I believe that the very proclamation of something as “AI-enabled” is itself performative: it attracts investment, legitimises interventions, and frames geopolitical and security debates around the speed, scale, and inevitability of AI, often marginalising ethical, regulatory, or human rights concerns. With respect to geopolitics, particularly the future of warfare, AI systems are viewed as radically disruptive, paradigmatically altering the very nature of war. This is, of course, debatable, but we already see how AI-driven sensor fusion, drone swarms, and rapid decision-support systems compress battle rhythms, favouring state and corporate actors who can experiment with and iterate algorithms in real time. In this regard, human-machine teaming, predictive logistics and command and control (C2), and algorithmic target selection promise decisive strategic advantages but also introduce ethical concerns and escalation risks that demand urgent global governance.
AC: What role can Europe play in the current technological competition, in your assessment?
RC: The EU governs AI in multifaceted ways while navigating geopolitical, economic, and regulatory concerns. A nuanced understanding is needed of the EU’s AI technopolitics and the ways this is reflected in European efforts to govern this field, foster AI innovation, and ensure trustworthiness. In this regard, the EU’s AI Act aims to regulate the use of AI systems based on risk levels, reflecting a commitment to the human-centric and responsible development of AI. Moreover, the EU’s pursuit of homegrown AI innovation highlights the critical importance of AI in bolstering European technological sovereignty and reducing strategic dependencies.
So, on the one hand, the EU views establishing a global standard through the AI Act as a pivotal objective, prompting discussions around the need for international governance standards. Yet, amid the global race for AI supremacy, Europe faces challenges in establishing a gold standard for AI regulation while maintaining a technological edge. While specific provisions of the AI Act may exert substantial influence on global markets, Europe’s efforts alone will not establish a comprehensive international standard for AI.
On the other hand, while the EU has allocated substantial investment through various programmes, competition from other major economies, particularly the U.S. and China, remains formidable. In response, the EU will need to match its rhetoric of technological sovereignty on AI with significant funding. As things stand, there is little evidence that the EU will be able to pursue sovereign AI on its own, including global leadership in the AI domain, given Europe’s lack of major high-tech companies, critical infrastructure, and substantial investment.
AC: What are the five books that inspired your research the most?
RC: It is incredibly challenging to choose only five books, as I am constantly on the lookout for inspiring works. More recently, I have been reading, for instance, the philosophical works of Byung-Chul Han. But if I were to list some books that inspired me, I would go with:
The Outdatedness of Human Beings by Günther Anders;
The Origins of Totalitarianism by Hannah Arendt;
The Society of the Spectacle by Guy Debord;
Power/Knowledge by Michel Foucault;
Simians, Cyborgs, and Women by Donna Haraway.
Anders exposes the existential dislocations created by industrial automation and the dread provoked by the prospect of nuclear annihilation or AI superintelligence, illuminating how disruptive technologies outpace political imagination.
Arendt clarifies the conditions under which technological bureaucracies can erode plurality and foster authoritarian security logics, while Debord grounds critiques of the politics of AI hype and drone imagery, where our very perceptions become a battleground for influence.
Foucault supplies the analytical toolkit for tracing how AI and data regimes intertwine surveillance, discipline, power, and geopolitical strategy. Finally, Haraway’s cyborg, a hybrid of organism and machine, inspires deeper reflection on human-machine relations, the construction of subjectivities, and posthuman critiques of military innovation and power.
AC: Can you recommend any (non-)academic articles that you consider particularly relevant for understanding the links between geopolitics and technology?
RC: Yes, I do have some suggestions:
Crawford, K. (2021). The atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
De Goede, M., & Westermeier, C. (2022). Infrastructural geopolitics. International Studies Quarterly, 66(3).
Dwyer, A. C. (2023). Cybersecurity’s grammars: A more-than-human geopolitics of computation. Area, 55(1), 10-17.
Jackman, A., & Brickell, K. (2022). ‘Everyday droning’: Towards a feminist geopolitics of the drone-home. Progress in Human Geography, 46(1), 156-178.
Klauser, F. (2022). Policing with the drone: Towards an aerial geopolitics of security. Security Dialogue, 53(2), 148-163.
Mbembe, A. (2006). Necropolitics. Raisons politiques, 21(1), 29-60.
Shaw, I. G. (2017). Robot Wars: U.S. Empire and geopolitics in the robotic age. Security Dialogue, 48(5), 451-470.
Taken together, these works map the multiple conceptual and empirical entry points through which technical artefacts and geopolitical power intersect.