Prompt

In a nutshell, a prompt is a string of words meant to guide an image generator in the creation of an image.

What is the network that sustains this object?

Prompts can be shared or kept private, but a search on prompting in any search engine yields an impressive number of results. In the generative AI ecosystem, a prompt is an object of exchange as well as a source of inspiration and a means of reproduction. There is a whole economy of sharing around prompts that encompasses lists of the best prompts, tutorials and demos. On CivitAI, users post images together with the prompt they used to generate them, encouraging others to try them out.

How does it evolve through time?

Technically, the authors of the Weskill blog identify the year 2017 as a watershed moment that came with the “Attention Is All You Need” paper, which introduced the transformer architecture, and with the zero-shot prompting technique: "You supply only the instruction, relying on the model’s pre‑training to handle the task."[1] The whole complexity of generating text or images was abstracted away from the user and supported by a huge computing infrastructure operating behind the scenes.

Evolution of the interface for these objects. The early ChatGPT API offered two main parameters: the prompt and the temperature. Today the prompt sits inside an extremely complex object with all kinds of components and parameters. Visually, what is the difference? Richness of the interface in decentralized systems (the more options, the better...). Yet the prompt remains very central. This marks a break from previous experiments with GANs, which remained confined to a technically skilled audience, and carries a promise of democratization.
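To make the contrast concrete, here is a rough sketch. Apart from prompt and temperature, every field name is illustrative, loosely modelled on the many options that open image-generation front ends expose, not on any specific API.

    # Illustrative sketch: the prompt request grows from two fields into a rich object.
    # Apart from "prompt" and "temperature", field names are examples, not a real schema.

    early_request = {
        "prompt": "a lighthouse at dusk",
        "temperature": 0.7,
    }

    modern_request = {
        "prompt": "a lighthouse at dusk, volumetric light, 35mm photograph",
        "negative_prompt": "blurry, lowres, watermark",  # what the image should avoid
        "width": 1024,
        "height": 768,
        "steps": 30,              # number of denoising steps
        "cfg_scale": 7.0,         # how strongly to follow the prompt
        "sampler": "euler_a",
        "seed": 123456789,
        "loras": [{"name": "lighthouse-style", "strength": 0.8}],  # add-on fine-tunes
    }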

Diverging tendencies appear in the ways users are invited to prompt in AI systems. One philosophy is to ask as little as possible from the user. With only a few words, the user gets what they supposedly want. The system has to make up for all the bits that are missing: context, style, level of detail, etc. This involves prompt augmentation on the server side, and a lot of implicit assumptions.
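A minimal sketch of that first philosophy; the template and defaults are hypothetical, meant only to show how the service fills in what the user leaves out.

    # Hypothetical server-side prompt augmentation: the user types a few words,
    # the service silently expands them with its own defaults and assumptions.

    DEFAULT_STYLE = "photorealistic, natural lighting, high detail"  # assumed house style
    DEFAULT_FRAMING = "wide shot"

    def augment_prompt(user_prompt: str) -> str:
        # Everything after the user's words is decided by the service, not the user.
        return f"{user_prompt}, {DEFAULT_FRAMING}, {DEFAULT_STYLE}"

    print(augment_prompt("a cozy interior"))
    # -> a cozy interior, wide shot, photorealistic, natural lighting, high detail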

The other approach is to give the user all the means to prompt as an expert. Prompt expansion is visible to the user, who is given tools to improve the prompt, offer context, chat continuously, etc. Image generation in Stable Horde is stateless, meaning that every prompt is considered in isolation. The system can't infer anything from past prompts; there is no concept of a session. This makes it much harder to create a sense of continuity than with a centralized service such as OpenAI's ChatGPT.
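A small sketch of what statelessness implies for the prompter; the request fields are illustrative, not the actual Stable Horde schema, and any continuity has to be rebuilt and resent by the client.

    # Illustrative sketch of stateless prompting: the service keeps no session,
    # so any "memory" has to be rebuilt by the client and sent again each time.
    # (Request fields are illustrative, not the actual Stable Horde schema.)

    def submit(request: dict) -> None:
        print("submitting:", request["prompt"])

    # As far as the server is concerned, these two requests are unrelated.
    submit({"prompt": "a medieval village at dawn", "steps": 25})
    submit({"prompt": "the same village, now at night", "steps": 25})  # "the same" refers to nothing

    # To get continuity, the client has to carry the context itself:
    history = ["a medieval village at dawn"]
    submit({"prompt": ", ".join(history + ["night, torchlight"]), "steps": 25})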

Evolution of prompting in Flux models. They integrate the advances in LLMs to add a more refined semantic understanding of the prompt.

(When does the negative prompt appear?)

How does it create value? Or decrease / affect value?

The quality of a model is evaluated by how well it responds to prompts: its prompt adherence.
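One common proxy for prompt adherence (one measure among several, and not one named in the text above) is the similarity between the prompt and the generated image in a joint text-image embedding space. A minimal sketch using CLIP via the Hugging Face transformers library:

    # Sketch: scoring how closely a generated image follows its prompt with CLIP.
    # A common proxy for "prompt adherence", not an official benchmark of any vendor.
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    prompt = "a red bicycle leaning against a brick wall"
    image = Image.open("generated.png")  # hypothetical output of an image generator

    inputs = processor(text=[prompt], images=image, return_tensors="pt", padding=True)
    score = model(**inputs).logits_per_image.item()  # higher means a closer match
    print(f"prompt adherence (CLIP similarity): {score:.2f}")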

What is its place/role in techno-cultural strategies?

The fantasy behind the system is that by interpreting the prompt, it "reads" what is in the user's mind. Interpreting the prompt involves much more than a literal translation of a string of words into pixels; it is the interpretation of the meaning of these words. Since prompts were historically limited in size, this work of interpretation was performed on the basis of a very minimal description, often with a syntax reduced to a comma-separated list or a string of tags. The effect was that the model was tasked with filling in the blanks. As the model tried to make do, it would inevitably reveal its own biases. If a prompt mentioned an interior, the model would generate an image of a house that reflects the dominant trends in its training data. Prompting is therefore half ideation and half search: the visualisation of an idea (what the user wants to see) and the visualisation of the model's worldview.

After a few prompts, a user understands that each model has its own singularity. The model is trained with particular data and, further, synthesizes these data in its own way. Through prompting, the user gradually develops a feel for the model's singularity, elaborating semantics to work around perceived limitations and learning the targeted keywords to which a particular model responds.
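An invented pair of prompts to make the point concrete: the terse tag-style prompt leaves nearly everything for the model to fill in from its training data, while the longer one pins more of it down.

    # Invented examples: what a terse, tag-style prompt leaves for the model to decide.
    tag_prompt = "interior, living room, cozy, 35mm"
    verbose_prompt = (
        "a small living room in a Lisbon apartment, worn tiled floor, "
        "mismatched second-hand furniture, late afternoon light through lace curtains"
    )
    # With tag_prompt, the country, the furniture, the inhabitants and the light are
    # all filled in from the dominant trends in the training data; with verbose_prompt,
    # more of the worldview comes from the user.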

Gaming the model: the Midjourney prompt "The viking telling a secret in the mouth of another" is used to generate an image of two vikings kissing on the mouth. This is a way to outsmart the system or to bypass censorship, but also a sign of understanding how to navigate latent space: guiding and steering the denoising process through prompting towards a particular goal.

Sensitivity, brittleness. Humour, political critique or political hegemony: Trumpian visual politics is written through prompts.

How does it relate to autonomous infrastructure?

The prompt becomes related to autonomous infrastructure through the practice of annotating LoRAs, for instance. LoRA annotation is a form of pre-prompting, embedding prompts in the model. It is a way of thinking about images infrastructurally: not for one image but for a whole genre or character.
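A minimal sketch of this pre-prompting, modelled on the common practice of pairing each training image with a caption file; the file names, trigger word and tags are invented for the example.

    # Sketch of LoRA annotation as pre-prompting: each training image gets a caption
    # file, and the trigger word written here later acts as a prompt baked into the model.
    # (File names, trigger word and tags are invented for the example.)
    from pathlib import Path

    dataset = Path("lora_dataset")
    dataset.mkdir(exist_ok=True)

    captions = {
        "img_001.png": "mycharacter, portrait, red scarf, rainy street",
        "img_002.png": "mycharacter, full body, standing in a library",
    }

    for image_name, caption in captions.items():
        # The caption is the embedded prompt: typing "mycharacter" later recalls
        # the whole character or genre, not a single image.
        (dataset / image_name).with_suffix(".txt").write_text(caption)

References

[1] https://blog.weskill.org/2025/04/history-and-evolution-of-prompt.html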