Prompt

From CTPwiki


Revision as of 20:20, 11 August 2025

In a nutshell, a prompt is a string of words meant to guide an image generator in the creation of an image.

== What is the network that sustains this object? ==

Prompts can be shared or kept private, but a search on prompting in any search engine yields an impressive number of results. In the generative AI ecosystem, a prompt is an object of exchange as well as a source of inspiration or a means of reproduction. A whole economy of sharing has grown around prompts, encompassing lists of the best prompts, tutorials and demos. On CivitAI, users post images together with the prompts they used to generate them, encouraging others to try them out.

== How does it evolve through time? ==

Technically, the authors of the Weskill blog identify the year 2017 as a watershed moment that came with the “Attention Is All You Need” paper, which introduced the transformer architecture and the zero-shot prompting technique: "You supply only the instruction, relying on the model’s pre‑training to handle the task."<ref name=":0">https://blog.weskill.org/2025/04/history-and-evolution-of-prompt.html</ref> The whole complexity of generating text or images is abstracted away from the user and supported by a huge computing infrastructure operating behind the scenes. In that respect, prompt-based generators break from previous experiments with GANs, which remained confined to a technically skilled audience, and hold a promise of democratization in terms of accessibility and simplicity. Prompting continues to evolve in models such as Flux, which integrate advances in LLMs to add a more refined semantic understanding of the prompt.
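
To make the zero-shot idea concrete, the sketch below contrasts an instruction-only prompt with a few-shot variant; both prompts are invented for illustration and do not come from the blog post cited above.

<syntaxhighlight lang="python">
# Zero-shot: only the instruction is supplied, relying on the model's
# pre-training to handle the task.
zero_shot_prompt = "Translate into French: The lighthouse keeper fell asleep."

# Few-shot, for contrast: the same instruction preceded by worked examples.
few_shot_prompt = (
    "Translate into French.\n"
    "English: Good morning. -> French: Bonjour.\n"
    "English: Thank you. -> French: Merci.\n"
    "English: The lighthouse keeper fell asleep. -> French:"
)
</syntaxhighlight>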

(When does the negative prompt appear?)

== How does it create value? Or decrease / affect value? ==

Prompt adherence, a model's ability to respond to prompts coherently, is a major criterion for evaluating its quality. Yet prompt adherence can be understood in different ways. The interfaces designed for prompting oscillate between two opposite poles, reflecting diverging tendencies in what users value in AI systems: a desire for simplicity ("just type a few words and the magic happens") and a desire for control. In the first case, the philosophy is to ask as little as possible from the user. For example, early ChatGPT offered two parameters through the API: prompt and temperature. With only a few words, users get what they supposedly want. The system has to make up for everything that is missing: context, style, intent. This involves prompt augmentation on the server side as well as a lot of implicit assumptions. At the opposite end of the spectrum, in interfaces like ArtBot, the prompt is surrounded by a vertiginous list of options where users must make every choice explicitly. Here the user is treated as an expert: prompt expansion is visible to them, and the interface provides tools to improve the prompt, supply context, maintain a continuous chat, etc.
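
A sketch of the two poles, assuming an OpenAI-style completions endpoint for the minimal case and a Stable Horde-style payload (the kind of request ArtBot assembles) for the expert case; the endpoint paths, model names and field names are indicative assumptions rather than authoritative documentation.

<syntaxhighlight lang="python">
import requests

# Pole 1: the minimal interface. The user supplies a prompt and, at most,
# a temperature; everything else (context, formatting, safety) is handled server-side.
minimal_request = {
    "model": "text-davinci-003",  # legacy completion model, named here only for illustration
    "prompt": "A short caption for a cosy reading corner at dusk",
    "temperature": 0.7,
}
# requests.post("https://api.openai.com/v1/completions",
#               headers={"Authorization": "Bearer <API_KEY>"}, json=minimal_request)

# Pole 2: the expert interface. Every generation choice is explicit,
# in the spirit of ArtBot building a Stable Horde request.
expert_request = {
    "prompt": "a cosy reading corner at dusk, warm light, film grain",
    "params": {
        "sampler_name": "k_euler_a",  # which diffusion sampler to use
        "cfg_scale": 7.5,             # how strongly the image should adhere to the prompt
        "steps": 30,
        "width": 512,
        "height": 512,
        "seed": "123456",
    },
    "models": ["Deliberate"],         # explicit model choice, one option among many
    "nsfw": False,
}
# requests.post("https://stablehorde.net/api/v2/generate/async",
#               headers={"apikey": "<API_KEY>"}, json=expert_request)
</syntaxhighlight>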

== What is its place/role in techno cultural strategies? ==

The fantasy behind the system is that, by interpreting the prompt, it "reads" what is in the user's mind. Interpreting the prompt involves much more than a literal translation of a string of words into pixels; it is an interpretation of the meaning of these words. As prompts were historically limited in size, this work of interpretation was performed on the basis of a very minimal description, often with a syntax reduced to a comma-separated list or a string of tags. The effect was that the model was tasked with filling in the blanks. As the model tried to make do, it would inevitably reveal its own biases. If a prompt mentions an interior, the model generates an image of a house that reflects the dominant trends in its training data. Prompting is therefore half ideation and half search: the visualisation of an idea (what the user wants to see) and the visualisation of the model's worldview. After a few prompts, a user understands that each model has its own singularity: it is trained on particular data, and it synthesizes these data in its own way. Through prompting, the user gradually develops a feel for the model's singularity. They elaborate semantics to work around perceived limitations and devise targeted keywords to which a particular model responds.
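
As an illustration (both prompts are invented), the tag syntax mentioned above reduces a scene to a comma-separated list and leaves the model to fill in everything unstated, while a natural-language prompt leaves it less room to project its defaults.

<syntaxhighlight lang="python">
# Tag-style prompt: a minimal, comma-separated description. Everything unstated
# (furniture, era, country, lighting) is filled in from the model's training data.
tag_prompt = "interior, living room, sofa, plant, natural light"

# Natural-language prompt: the same scene spelled out, leaving the model
# less room to project its defaults.
sentence_prompt = (
    "A small living room in a 1970s apartment, a worn leather sofa, "
    "a monstera by the window, late afternoon light"
)
</syntaxhighlight>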

Gaming the model is part of the practice: the Midjourney prompt "The viking telling a secret in the mouth of another" was used to generate an image of two Vikings kissing on the mouth. This is a way to outsmart the system or to bypass censorship, but also a demonstration of how to navigate latent space, guiding and steering the denoising process through prompting towards a particular goal.

Prompting is also marked by sensitivity and brittleness, and it lends itself to humour, to political critique, or to political hegemony: Trumpian visual politics is written through prompts.

== How does it relate to autonomous infrastructure? ==

The prompt becomes related to autonomous infrastructure through, for instance, the practice of annotating LoRAs. LoRA annotation is a form of pre-prompting, embedding prompts in the model. It means thinking about images infrastructurally: not for one image but for a whole genre or character.
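
A minimal sketch, assuming the Hugging Face diffusers API and a hypothetical LoRA repository and trigger token, of how an annotation embedded at training time is reactivated through the prompt:

<syntaxhighlight lang="python">
import torch
from diffusers import StableDiffusionPipeline

# Load a base model, then attach a hypothetical LoRA whose training captions
# all contained the trigger token "sks_character". Those captions are the
# embedded annotation: they bind a whole character or genre to a single token.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("username/sks-character-lora")  # hypothetical repository name

# At generation time, the pre-prompt is reactivated simply by mentioning
# the trigger token in the prompt.
image = pipe("sks_character reading in a greenhouse, soft light").images[0]
image.save("sks_character.png")
</syntaxhighlight>

The trigger token only works because every training caption for that LoRA contained it; the prompt then acts as a key that unlocks the annotation embedded in the weights.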

There is also a relation between prompting and infrastructure: image generation in Stable Horde is stateless, meaning that every prompt is considered in isolation. The system can't infer anything from past prompts and has no concept of a session. It is much harder to create a sense of continuity than with a centralized service such as OpenAI's ChatGPT.
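
A sketch of what statelessness means in practice, assuming the Stable Horde asynchronous API (endpoint paths, field names and the anonymous key are indicative assumptions): continuity, such as iterating on a previous image, has to be reconstructed entirely on the client side, for example by re-sending the same seed with a revised prompt, because the server remembers nothing between jobs.

<syntaxhighlight lang="python">
import time
import requests

API = "https://stablehorde.net/api/v2"
HEADERS = {"apikey": "0000000000"}  # the Horde's anonymous key, to the best of our knowledge

def generate(prompt: str, seed: str) -> dict:
    """Each call is a self-contained job: the server keeps no session between calls."""
    job = requests.post(f"{API}/generate/async", headers=HEADERS,
                        json={"prompt": prompt, "params": {"seed": seed}}).json()
    job_id = job["id"]
    while True:  # poll until the distributed workers have finished the job
        status = requests.get(f"{API}/generate/status/{job_id}", headers=HEADERS).json()
        if status.get("done"):
            return status
        time.sleep(5)

# "Continuity" lives entirely on the client side: same seed, revised prompt.
first = generate("a lighthouse at dawn", seed="424242")
revision = generate("a lighthouse at dawn, thick fog", seed="424242")
</syntaxhighlight>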