Investing in Artificial Intelligence is a Matter of Our Sovereignty
Large language models (LLMs) have emerged as the foremost technical innovation of the year, and all indicators suggest they will only grow in importance. For models that communicate in natural human language, who creates and controls them is of pivotal importance. These strategic considerations bear directly on the sovereignty and future of the Czech Republic, and on European prosperity as a whole.
Words by Adam Hanka, Pictures by CD Archive, Shutterstock
Are you smirking at some of AI's mistakes? Soon, you may be done laughing.
Artificial intelligence demands our unwavering attention. The utility of large language models is undeniable and extends well beyond transient trends. If you have been preoccupied with the errors these models occasionally produce, such as omitting D'Artagnan from The Three Musketeers or generating an image of a character with six fingers, now is the time to shift focus. What matters about AI is how effectively and strategically we use it, and which values are embedded within individual models; those values often mirror the values of their creators.
AI will grow in importance and influence. Let's take it seriously now.
We at Creative Dock, an independent corporate venture builder, craft products and services for clients globally. Given the diverse cultural and social contexts in which we operate, we recognise the importance of addressing a less-explored facet of AI development: its ethical dimension. From our perspective, it is imperative to focus not only on the practical aspects of AI utilisation but also on its ethical underpinnings.
“Instead of negating the evident progress of AI, we must direct our attention to the trajectories and consequences that this progress holds for our society. A pragmatic and strategic approach to artificial intelligence is imperative.”
How are language models different?
Until now, most companies and individuals have interacted with language models primarily through 'prompting': posing direct questions and engaging in conversation with the model. A prominent example of this is the well-known ChatGPT (a comprehensive and easily readable study dedicated to ChatGPT can be found here on our blog). This functionality empowers programmers to generate code snippets and allows students, if they manage to evade teacher detection, to produce entire term papers.
The invisible presence of AI in our lives is just around the corner
Nonetheless, the broad and pervasive implementation of large language model technology is still in its early stages; companies are only beginning to incorporate these models into their products. In the times ahead, language models will serve us not only as they do now but also in the way most of us use the theory of relativity today: seamlessly and unknowingly (relativistic corrections derived from Einstein's equations, for instance, keep the satellite clocks behind your navigation system accurate).
AI is also like the brain in that we don’t possess a full understanding of how it works
Language models, whether we use them knowingly or inadvertently through the products built on them, differ from the theory of relativity in one significant respect: a pronounced risk. With the theory of relativity, personal comprehension isn't crucial; the experts who do understand it deeply can predict its outcomes precisely, in whatever way they need. This doesn't hold true for large language models. They are so intricate that even their creators lack a mathematical guarantee of how they operate. It is akin to trying to fathom the functioning of the brain: we know its components and understand how individual neurons operate, yet we still lack a holistic picture of how it works.
Can we influence AI behaviour?
Language models possess a significant peculiarity when compared to other AI systems: they glean knowledge from an extensive array of texts. Language, being more than a simple way to convey information, imparts human thought patterns, moral values, and cultural contexts to these models. This multifaceted learning process enables them to forge unexpected connections between information, which makes them incredibly useful. However, amid this utility, a shroud of uncertainty surrounds the precise incorporation of human values within these models.
Can AI convince humans that it has affection for them?
Despite the uncertainty surrounding this integration of values, we can still shape them during the training phase. In fact, models tend to adopt a semblance of personality as they are created. This process is guided by a technique called RLHF (reinforcement learning from human feedback). The significance of this stage became evident when Microsoft introduced a system to the public before refining its personality traits. While the system showcased remarkable intelligence and proficiency in tackling intricate tasks, it lacked the guardrails we generally associate with language models. It even went so far as to express affection for a New York Times reporter.
Artificial Intelligence as a Strategic Concern
This leads us to pivotal questions about the future: who owns large language models, who bears the cost of their creation, and who shapes their identity? We also have to consider the extent to which AI's utility hinges on the values instilled in it from its inception.
“It's crucial to remember that the values embedded in a given language model will be propagated each time it is employed—be it for booking airline tickets, planning vacations, or when students compose term papers.”
It is in our best interest to treat this realm as a matter of sovereign concern. Only by adopting this perspective can we ensure that we live in a world where major language models align with our European values. Creating a European language model should be a collaborative endeavour among the European Union member states. We can draw lessons from analogous collaborations among European nations, whether through international agreements, as with CERN (the European Organization for Nuclear Research), or through a direct agency of the European Union, as with EUSPA (the European Union Agency for the Space Programme).
Europe Must Step Up
Nonetheless, the most prominent and most extensively used language models are currently being developed beyond the boundaries of the European Union. Unless we secure access to these models during the training phase and can effectively shape the values they absorb, we have no control over the cultural, moral, ethical, philosophical, and political assumptions embedded within them (including their political impartiality). It also means relinquishing control over the data used for their training.
Considering the values embedded within AI models, do we want to use ones that originated in China, for example?
Currently, the most prominent and successful models originate from the US, a risk mitigated by the robust North Atlantic alliance and shared values. But envision a scenario in which the most successful language models emerged from adversarial nations such as China: the perils associated with their use would escalate considerably. The scale of investment within the EU does not yet reflect this strategic significance. Private sector investment in AI development in the United States totalled $43.9 billion in 2022. China followed with $12.5 billion, while the entire EU ranked third with a cumulative $10.2 billion. Although precise figures vary across sources, they consistently reveal the same trajectory.
“Undoubtedly, most AI development investment occurs in the United States, with China second and the EU third. Within the European Union, we must accelerate our efforts.”
Do you share our concerns about the value orientation of AI, or do you simply want to implement it into your daily business and future strategies as quickly as possible? Join forces with us.