CodeAndShare

From CTPwiki

Latest revision as of 10:25, 9 July 2025

September 2025 session

For this workshop we propose a tour of a social and technical system, where we stop and wonder about the different objects that, in one way or another, take part in the generation of images by artificial intelligence (AI). Most people’s experiences with generative AI image creation come from platforms like OpenAI’s DALL-E, Google’s Gemini, Midjourney, or other proprietary services. In contrast, there is a whole ecology of open source services and software that are distinct yet often based on the same underlying models or techniques of so-called ‘diffusion’. They are the meeting point for communities who seek some kind of independence and autonomy from the mainstream platforms.
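The core idea behind these ‘diffusion’ techniques can be illustrated with a toy sketch: a clean image is gradually mixed with Gaussian noise, and a network is trained to predict that noise so the process can be run in reverse. The following minimal numpy sketch shows only the forward noising step; the array sizes and schedule value are illustrative assumptions, not the actual configuration of Stable Diffusion, which operates on a learned latent space with a tuned noise schedule.

```python
import numpy as np

# Toy sketch of the forward "noising" step in denoising diffusion models.
# alpha_bar is the cumulative signal fraction at some timestep t; real
# models use a carefully tuned schedule of many such steps.

rng = np.random.default_rng(0)

def forward_noise(x0, alpha_bar, eps):
    """Mix a clean image x0 with Gaussian noise eps at noise level alpha_bar."""
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

x0 = rng.standard_normal(16)    # stand-in for a (flattened) clean image
eps = rng.standard_normal(16)   # the noise a trained network learns to predict
alpha_bar = 0.3                 # illustrative noise level

xt = forward_noise(x0, alpha_bar, eps)

# If a network predicted eps perfectly, the clean image could be recovered
# by inverting the mixing formula:
x0_hat = (xt - np.sqrt(1.0 - alpha_bar) * eps) / np.sqrt(alpha_bar)
print(np.allclose(x0_hat, x0))  # the inversion is exact in this toy case
```

Image generation then amounts to starting from pure noise and repeatedly applying the learned reverse step.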

We will share the outcome of an investigative process where we – by trying out different software, reading documentation and research, looking into communities of practice that experiment with AI image creation, and more – have sought to understand the things that make generative AI images possible; that is, the underlying dependencies and relations between communities, models, technical units, and more in AI image creation. Within this system there is not just a functional apparatus, but also an ‘imaginary’; that is, there are underlying expectations and norms that are met in specific objects, as well as shared visual cultures.

What we will present is not just a collection of the objects that make generative AI images, but also an exploration of an imaginary of AI image creation through the collection and exhibition of objects – and in particular, an imaginary of ‘autonomy’.

Practically, we will present maps, diagrams, interfaces and material components, and we will introduce a technique, LoRA (Low-Rank Adaptation), with which we will train a component that acts as a small model and helps fine-tune existing models for our personal needs. Bring your own laptop if you can!
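As a rough illustration of why a LoRA can act as such a small, shareable component: instead of updating a large frozen weight matrix W directly, one trains two small low-rank matrices whose scaled product is added to W. The numpy sketch below uses illustrative sizes, rank and scaling (not Stable Diffusion's actual layer shapes) to show both the no-op initialisation and the parameter savings.

```python
import numpy as np

# Minimal sketch of the LoRA idea: a frozen base weight W plus a trainable
# low-rank update (alpha / r) * B @ A. Only A and B are trained, so the
# adapter is tiny compared to the base model. All sizes are illustrative.

rng = np.random.default_rng(42)

d_out, d_in, r, alpha = 256, 256, 4, 8.0

W = rng.standard_normal((d_out, d_in))       # frozen base weights
A = rng.standard_normal((r, d_in)) * 0.01    # trainable, small random init
B = np.zeros((d_out, r))                     # trainable, zero init

def lora_forward(x):
    """Base layer output plus the scaled low-rank correction."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)

# With B initialised to zero, the adapter starts as an exact no-op,
# so fine-tuning departs smoothly from the base model's behaviour:
assert np.allclose(lora_forward(x), W @ x)

# The adapter stores far fewer parameters than the base matrix:
base_params = W.size             # 256 * 256 = 65536
lora_params = A.size + B.size    # 4*256 + 256*4 = 2048
print(lora_params / base_params) # 0.03125, i.e. ~3% of the base layer
```

This is why communities can train and circulate LoRA files of a few megabytes that customise multi-gigabyte base models.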

More info: https://ctp.cc.au.dk/w/index.php/CodeAndShare

February 2025 session

Recent years have been characterized by a strong movement of recentralization towards platforms (Srnicek et al., 2018) and a nearly total monopoly over the infrastructure of large-scale computation. This is also true for flagship image-generation products such as DALL-E or Adobe Firefly. However, the relentless work of communities of developers, enthusiasts and activists has helped create a nascent alternative made of disparate pieces of software, material resources, documentation, forums and counter-lobbies.

For a long period, inspecting state-of-the-art AI models required the permission of software giants and came with restrictive conditions. The situation is now changing, as an alliance of a large number of smaller actors has reached a critical threshold. The Stable Diffusion ecosystem is a paradigmatic example.

During this session, led by Nicolas Malevé, we will explore the various agents, software, interfaces and datasets that sustain Stable Diffusion as well as the communities involved in the project. Through several small experiments we will test how scripts, models and interfaces can help play with the pipeline of image generation and transform it.