GPU

The King of Denmark, Jensen Huang (co-founder and CEO of Nvidia), and Nadia Carlsten (CEO of the Danish Center for AI Innovation) pose for a photo holding an oversized cable in front of a bright screen full of sparkles. It is the inauguration of Denmark's "sovereign" AI supercomputer, Gefion, named after the Norse goddess of ploughing.

"Gefion is going to be a factory of intelligence. This is a new industry that never existed before. It sits on top of the IT industry. We’re inventing something fundamentally new" (https://blogs.nvidia.com/blog/denmark-sovereign-ai-supercomputer/)
Gefion is powered by 1,528 H100s, a GPU developed by Nvidia. This object, the Graphics Processing Unit, is a key element that arguably paves the way for Denmark's sovereignty and heavy ploughing in the AI world. Beyond all the sparkles, this photo shows the importance of the GPU not only as a technical matter, but also as a political and powerful element of today's landscape.
Since the boom of Large Language Models, Nvidia's graphics cards and GPUs have become somewhat familiar and mainstream. The GPU powerhouse, however, has a long history that predates its central position in generative AI, including the stable diffusion ecosystem: casual and professional gaming, cryptocurrency mining, and just the right kind of processing for the n-dimensional matrices that translate pixels and words into latent space and vice versa.
What is a GPU?
A graphics processing unit (GPU) is an electronic circuit specialised in processing images for computer graphics. Originally designed for early videogames, such as arcade machines, this hardware performs the calculations needed to generate graphics, first in 2D and later in 3D. While most computational systems have a Central Processing Unit (CPU), the generation of images, for example 3D polygons, requires a different set of mathematical operations: many small, independent calculations executed in parallel. GPUs implement instructions for video processing, lighting, 3D objects, textures, and so on. The range of GPUs is vast, from small, cheap processors integrated into phones and other small devices, to state-of-the-art graphics cards stacked in data centres to train and run massive language models.
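To make the difference concrete, here is a minimal sketch, assuming PyTorch and a CUDA-capable card, that times the same matrix multiplication on the CPU and on the GPU. The matrix size and the crude timing are illustrative, not a benchmark.

```python
# Same maths, two processors: a large matrix multiplication on CPU and GPU.
import time
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

# CPU: the general-purpose processor works through the product.
t0 = time.time()
c_cpu = a @ b
print(f"CPU: {time.time() - t0:.3f}s")

# GPU: thousands of cores compute the same product in parallel.
a_gpu, b_gpu = a.cuda(), b.cuda()
torch.cuda.synchronize()
t0 = time.time()
c_gpu = a_gpu @ b_gpu
torch.cuda.synchronize()  # wait for the asynchronous GPU kernel to finish
print(f"GPU: {time.time() - t0:.3f}s")
```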
What is the network that sustains this object?
Like many other circuits, GPUs require a very advanced production process that starts with the mining of both common and rare minerals (silicon, gold, hafnium, tantalum, palladium, copper, boron, cobalt, tungsten, etc.). Their life-cycle and supply chain place GPUs in a material network with the same issues as other chips and circuits: conflict minerals, labour rights, by-products of manufacturing and distribution, and waste. When training models or generating AI responses, the GPU is the object that consumes the most energy relative to other computational components. A commercially available home-user GPU like the GeForce RTX 5090 can draw 575 watts, almost double that of a CPU in the same category. Industry GPUs like the A100 use a comparable amount of energy, with the caveat that they are usually deployed by the thousands in massive data centres. Allegedly, the training of GPT-4 used 25,000 of the latter for around 100 days, i.e. roughly 10 megawatts of continuous draw for the GPUs alone, about 24 gigawatt-hours over the run, or a million 10-watt LED bulbs burning for the same 100 days. This places GPUs in a highly material network that is for the most part invisible, yet enacted with every prompt, LoRA training, and generation request.
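A back-of-the-envelope check of those figures, assuming the A100's rated 400-watt draw and taking the 25,000-GPU, 100-day estimate at face value (both are public estimates, not confirmed numbers):

```python
# Rough energy arithmetic for the alleged GPT-4 training run.
gpus = 25_000
watts_per_gpu = 400          # A100 SXM rated power draw
hours = 100 * 24             # ~100 days of continuous training

power_mw = gpus * watts_per_gpu / 1e6    # continuous draw, megawatts
energy_gwh = power_mw * hours / 1e3      # total energy, gigawatt-hours
led_bulbs = power_mw * 1e6 / 10          # equivalent 10 W LED bulbs

print(f"{power_mw:.0f} MW continuous, {energy_gwh:.0f} GWh total, "
      f"{led_bulbs:,.0f} LED bulbs")
# -> 10 MW continuous, 24 GWh total, 1,000,000 LED bulbs
```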

Perhaps the most important aspect of the GPU in our Objects of interest and necessity network and maps is that this object is a, if not the, translation piece: most of the calculations that turn "human-readable" objects, like a painting, a photograph or a piece of text, into an n-number of matrices, vectors, and coordinates, or latent space, and back again, are made possible by the GPU. In many ways, the GPU is a transition object, a door into and out of a, sometimes grey-boxed, space.
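As a minimal sketch of this translation, the snippet below encodes an image into Stable Diffusion's latent space and decodes it back, assuming PyTorch, the diffusers library, and the public stabilityai/sd-vae-ft-mse autoencoder; the file name painting.png is a placeholder.

```python
# The GPU as translation piece: pixels into latent space and back.
import numpy as np
import torch
from diffusers import AutoencoderKL
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").to(device)

# A "human-readable" object: an RGB image, scaled to the [-1, 1] range.
img = Image.open("painting.png").convert("RGB").resize((512, 512))
x = torch.from_numpy(np.array(img)).float().permute(2, 0, 1) / 127.5 - 1.0
x = x.unsqueeze(0).to(device)

with torch.no_grad():
    # Into latent space: 3x512x512 pixels become a 4x64x64 tensor.
    latents = vae.encode(x).latent_dist.sample()
    # And back out the door: latents decoded into pixels again.
    decoded = vae.decode(latents).sample

print(latents.shape)  # torch.Size([1, 4, 64, 64])
```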
It is also a mundane object, historically used by gamers, and thus owing much of its history and design to gaming culture. Indeed, the demand generated by computer games fuelled the design and development of current GPU capabilities. In our own study, we use an Nvidia RTX 3060 Ti, bought originally for gaming purposes. When talking about current generative AI, populated by major tech players and trillion-dollar corporations, we want to stress the overlapping histories of digital cultures, like gaming and gamers, that shape this object.
Being a mundane object also allows for paths towards autonomy. While our GPU was originally used for gaming, it opened a door for detaching from big tech industry players. With some tuning and self-managed software, a consumer GPU can process queries on open language models of mid and large sizes, and can run image generators like Stable Diffusion.
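A minimal sketch of what that self-managed setup can look like, assuming PyTorch, the diffusers library, and a public Stable Diffusion checkpoint (runwayml/stable-diffusion-v1-5 is one example; repositories move and change). Half precision keeps the model within the 8 GB of a card like ours.

```python
# Running an open image model locally, outside any hosted service.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,  # half precision to fit consumer VRAM
)
pipe = pipe.to("cuda")

image = pipe("a Nordic goddess ploughing a field, oil painting").images[0]
image.save("gefion.png")
```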
LoRA training
- data centres, individual users, distributed networks (e.g. the AI Horde); see the sketch below
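A minimal sketch of the idea behind LoRA, in PyTorch: the large pretrained weight matrix stays frozen, and training only updates a small low-rank correction, which is why it fits on consumer GPUs. The layer below is illustrative, not a full training setup; the dimensions and rank are arbitrary.

```python
# Low-Rank Adaptation (LoRA): learn a small correction B @ A to a frozen W.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the big pretrained matrix stays frozen
        # Two small trainable matrices whose product has rank <= `rank`.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # Frozen path plus the scaled low-rank learned correction.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 12,288 trainable parameters vs ~590k frozen ones
```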
How does it evolve through time?
Jacob Gaboury follows 30 years of computational research geared towards producing the GPU (MORE ON THIS), with an emphasis on parallel computing and on hyperspecialised work "detached" from CPU computation.
- gaming
- crypto
The creation of cryptocurrencies like Bitcoin turned GPUs into mining machines: their parallelism suits the repetitive hashing demanded by proof-of-work, driving up demand and producing recurrent scarcity.
- neural network computation
- LLMs
- large scale industrial AI
Do Graphics Processing Units Have Politics? https://cargocollective.com/gansky/Do-Graphics-Processing-Units-Have-Politics
How does it create, decrease, or otherwise affect value?
- buzz, the Horde, cryptocurrencies
- COVID-19 scarcity
- production value (e.g. CivitAI)
- national AI sovereignty
- autonomy
- military?
emphasis on autonomy / cultural dependency on autonomy (e.g. the GPU we use) / layer of material infrastructure connected to cultural layers
What is its place/role in techno cultural strategies?
How does it relate to autonomous infrastructure?