Hugging Face
What is Hugging Face?
A central, cohesive source of support and stability when exploring autonomous AI image creation is Hugging Face. Hugging Face is, simply put, a collaborative hub for AI development – not specifically targeted at AI image creation, but at generative AI more broadly (including speech synthesis, text-to-video, image-to-video, image-to-3D, and much more). It attracts amateur developers who use the platform to experiment with AI models, as well as professionals who draw on the expertise of the company or use the platform as a starting point for entrepreneurship. By making AI models, datasets, and also processing power widely available, it can be labelled an attempt to democratise AI and delink from the key commercial platforms; yet at the same time Hugging Face is deeply intertwined with numerous commercial interests. It is therefore suspended between more autonomous and peer-based communities of practice, and a need for more 'client-server' relations in model training.
What is the network that sustains Hugging Face?
Hugging Face is a platform, but what it offers more closely resembles an infrastructure for, in particular, training models. As such, Hugging Face operates in a space that is not typically seen by ordinary users. It is a space for developers (amateur or professional) to use and interact with the computational models of a latent space (see Maps), and to specify advanced settings for model training (see LoRA), but also to access a material infrastructure of GPUs.
Companies involved in training foundation models have their own infrastructures (specialised racks of hardware and expertise), but they may make their models available on Hugging Face. This includes Stability AI, but also the Chinese company DeepSeek, and others. Users often upload their own datasets to experiment with the many models on Hugging Face, and typically these datasets are freely available on the platform for other users. But users also experiment in other ways. They 'post-train' the models and create LoRAs, for instance. Others create 'pipelines' of models, meaning that the outcome of one model becomes the input for another model. At the time of writing there are nearly 500,000 datasets and 2,000,000 models freely available. The platform also contains a lively community where users share their creations and experiences. As the posts quickly reveal, the community has specialised developer knowledge of how to experiment with latent space and computational models.
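The 'pipeline' idea mentioned above – the output of one model becoming the input of another – can be sketched in a few lines. This is a minimal, hypothetical illustration with plain Python functions standing in for actual models (for instance, an image-to-text captioner feeding a text-to-image generator); the function names and return values are placeholders, not any real Hugging Face API.

```python
def caption_model(image: str) -> str:
    # Placeholder for an image-to-text model (stage one of the pipeline).
    return f"a caption describing {image}"

def text_to_image_model(prompt: str) -> str:
    # Placeholder for a text-to-image model (stage two of the pipeline).
    return f"an image generated from '{prompt}'"

def run_pipeline(image: str) -> str:
    # Chain the two stages: image -> caption -> newly generated image.
    return text_to_image_model(caption_model(image))

print(run_pipeline("photo.png"))
```

In practice each stage would be a full model hosted on the platform, but the structure – composing models so that one's output becomes the next one's input – is the same.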

How has Hugging Face evolved through time?
Hugging Face was founded in 2016 by the French entrepreneurs Clément Delangue, Julien Chaumond, and Thomas Wolf. Already in 2017 they received their first round of investment of $1.2 million. As stated in the press release, Hugging Face was a "new chatbot app for bored teenagers. The New York-based startup is creating a fun and emotional bot. Hugging Face will generate a digital friend so you can text back and forth and trade selfies." In 2021 they received a $40 million investment to develop its "open source library for natural language processing (NLP) technologies." There were (in 2021) "10,000 forks" (i.e., branches of development projects) and "around 5,000 companies [...] using Hugging Face in one way or another, including Microsoft with its search engine Bing."
This trajectory shows how the company has gradually moved from providing a service (a chatbot) to becoming a major (if not the) platform for AI development – now not only in language technologies, but also (as mentioned) in speech synthesis, text-to-video, image-to-video, image-to-3D, and much more. But it also shows an evolution of generative AI. Like today's systems, early ChatGPT (developed by OpenAI and released in 2022) used large language models (LLMs), but it offered very few parameters for experimentation: the prompt (textual input) and the temperature (the randomness and creativity of the model's output). Today, there are all kinds of components and parameters. This also explains the present-day richness of Hugging Face's interface: many of the commercial platforms do not offer this richness, and an intrinsic part of the delinking from them seems attached to a fascination with settings and advanced configurations (see also Interfaces).
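The temperature parameter mentioned above has a precise technical meaning: a model's output scores are divided by the temperature before being turned into sampling probabilities, so a low temperature makes the most likely output dominate, while a high temperature flattens the distribution. The following is a toy sketch using invented scores, not any actual model's values.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Sample an index from a list of scores after temperature scaling.

    Lower temperature sharpens the distribution (more deterministic output);
    higher temperature flattens it (more randomness, more 'creativity').
    """
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    scaled = [l / temperature for l in logits]
    # Softmax with max-subtraction for numerical stability.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to the probabilities.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# At a very low temperature, the highest-scoring option dominates.
print(sample_with_temperature([2.0, 0.5, 0.1], temperature=0.1))
```

This single knob was, alongside the prompt, essentially all that early chatbot interfaces exposed; platforms like Hugging Face expose far more of the machinery.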


How does Hugging Face affect the creation of value?
Hugging Face has an estimated market value of $4.5 billion (as of 2023). What does this exorbitant valuation of a platform little known to the public reflect?
On the one hand, the company has capitalised on the various communities of developers in, for instance, visual culture who experiment on the platform and share their datasets and LoRAs, but this is only a partial explanation.
Hugging Face is not only for amateur developers. On the platform one also finds an 'Enterprise Hub' where Hugging Face offers, for instance, advanced computing at a higher scale with a more dedicated hardware setup ('ZeroGPU', see also GPU), as well as 'Priority Support'. For this more commercial use of the platform, access is typically more restricted. In this sense, the platform has become innately linked to a plane of business innovation and has also teamed up with Meta to boost European startups in an "AI Accelerator Program".
Notably, Hugging Face also collaborates with other key corporations in the business landscape of AI – for instance, Amazon Web Services (AWS), allowing users to make models trained on Hugging Face available through Amazon SageMaker. Nasdaq Private Market also lists a whole range of investors in Hugging Face (Amazon, Google, Intel, IBM, NVIDIA, etc.).
The excessive (and growing) market value of Hugging Face reflects, in essence, the high degree of expertise that has accumulated within a company that has consistently sought to accommodate both a cultural community and a business and enterprise plane of AI. Managing an infrastructure of both hardware and software for AI models at this large scale is a highly sought-after expertise.

What is the role of Hugging Face in techno-cultural strategies?
Notwithstanding the Enterprise Hub, Hugging Face also remains a hub for amateur developers who experiment with generative AI beyond what the commercial platforms conventionally offer – and who share their insights in the platform's 'Community' section. An example is the user 'mgane', who has shared a dataset of "76 cartoon art-style video game character spritesheets." The images come from "open-source 2D video game asset sites from various artists." mgane has used them on Hugging Face to build LoRAs on Stable Diffusion, that is, "for some experimental tests on Stable Diffusion XL via LORA and Dreambooth training methods for some solid results post-training."

A user like mgane is arguably both embedded in a specific 2D gaming culture and in possession of the developer skills necessary to access and experiment with models in the command line interface. However, users can also access the many models on Hugging Face through more graphical user interfaces like Draw Things, which allows for accessing and combining models and LoRAs to generate images, and also for training one's own LoRAs (see Interfaces).
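The LoRA ('low-rank adaptation') technique that users like mgane employ can be sketched numerically: instead of retraining a model's full weight matrix W, one trains two much smaller matrices A and B, and the product of B and A is added to W as a lightweight update. The following is a toy, pure-Python illustration with invented 2x2 numbers; real models have matrices many orders of magnitude larger.

```python
def matmul(X, Y):
    # Multiply two matrices given as lists of rows.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def apply_lora(W, A, B, scale=1.0):
    # Effective weights: W + scale * (B @ A). Only A and B are trained,
    # which is far cheaper than updating every entry of W.
    delta = matmul(B, A)
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# 2x2 base weights; a rank-1 update built from B (2x1) and A (1x2).
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[0.5, 0.5]]
B = [[1.0], [2.0]]
print(apply_lora(W, A, B))  # W plus the low-rank update B @ A
```

This is why LoRAs are small enough to be shared on the platform as files separate from the base model: only the low-rank factors need to be distributed.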
How does Hugging Face relate to autonomous infrastructures?
Looking at Hugging Face, the separation of community labour from capital interests (i.e., 'autonomy') in generative AI does not seem to be an either-or. Rather, dependencies in generative AI seem to be in constant movement, gravitating from 'peer-to-peer' communities towards 'client-server' relations that are more easily capitalised. This may be due to the need for infrastructures that meet the high level of expertise and technical requirements involved in generative AI, but it is not without consequence.
When, as noted by the European Business Review, most tech companies in AI want to collaborate with Hugging Face, it is because the company offers an infrastructure for AI. Or rather, it offers a platform that performs as an infrastructure for AI – a "linchpin" that keeps everything in production in position. As also noted by Paul Edwards, a platform seems to be, in a more general view, the new mode of handling infrastructures in the age of data and computation. Working with AI models is a demanding task that requires expertise, hardware, and an organisation of labour, and what Hugging Face offers is speed, reliability, and not least agility in a world of AI that is in constant flux, where new models and techniques are introduced almost on a monthly basis.
With this 'linchpin' status, Hugging Face builds on existing infrastructures: it depends on already existing infrastructures, such as the flows of energy and water necessary to make the platform run, and it also builds on social and organisational infrastructures, such as those of both start-ups and cultural communities. At the same time, however, it also reconfigures these relations – creating cultural, social and commercial dependencies on Hugging Face as a new 'platformed' infrastructure for AI.
++++++++++++++++++++++++++
Hugging Face [CARD TEXT]
Hugging Face started out in 2016 as a chatbot for teenagers, but is now a (if not the) collaborative hub for AI development – not specifically targeted at AI image creation, but at generative AI more broadly (including speech synthesis, text-to-video, image-to-video, image-to-3D, and much more). It attracts amateur developers who use the platform to experiment with AI models, as well as professionals who typically use the platform as a starting point for entrepreneurship. By making AI models and datasets available, it can also be labelled an attempt to democratise AI and delink from the key commercial platforms, yet at the same time Hugging Face is deeply intertwined with these companies and various commercial interests.
Hugging Face has (as of 2023) an estimated market value of $4.5 billion. It has received large amounts of venture capital from Amazon, Google, Intel, IBM, NVIDIA and other key corporations in AI and, because of the company's expertise in handling AI models at large scale, also collaborates with both Meta and Amazon Web Services. Yet, at the same time, it also remains a platform for amateur developers and entrepreneurs who use it as an infrastructure for experimentation with advanced configurations that the conventional platforms do not offer.
Hugging Face is a key example of how generative AI – also when seeking autonomy – depends on a specialised technical infrastructure. In a constantly evolving field, reliability, security, scalability and adaptability become important parameters, and Hugging Face offers these in the form of a platform.
IMAGES: Business + interface/draw things + mgane + historical/contemporary interfaces.