LoRA

https://civitai.com/models/1266100/the-incredible-hulk-2008, LoRA The Incredible Hulk (2008)

On his personal page on the CivitAI website, the user BigHeadTF promotes his recent creation, a small model called The Incredible Hulk (2008). Compared to earlier movies of the Hulk, the 2008 version shows a tormented Bruce Banner who transforms into a green creature with "detailed musculature, dark green skin, and an almost tragic sense of isolation". The model helps generate characters resembling this iconic version of Hulk in new images.

To demonstrate the capabilities of his model, BigHeadTF has selected a few pictures of his own. Hulk is depicted in turn cajoling a teddy bear or crossdressing as Shrek's Princess Fiona. The images play with the contrast between Hulk's overblown virility and childlike or feminine connotations, and they demonstrate the model's ability to expand the hero's universe into other registers and fictional worlds. The Incredible Hulk (2008) doesn't just faithfully reproduce existing images of Hulk; it also opens new avenues of creation and combination for the green hero.

This blend of pop and remix culture that thrives on the blurring of boundaries between genres infuses a large number of creations made with generative AI. What distinguishes BigHeadTF, however, is that he doesn't share only images but also the software component that makes his images distinctive. The model he distributes on his page is called a LoRA. The most famous models, such as Stable Diffusion or Flux, are rather general-purpose. These 'base' or 'foundation' models can be used to generate images in many styles and can handle a huge variety of prompts. But they may show limitations when a user wants a specific output, such as a particular genre of manga or a style that emulates black-and-white film noir, when an improvement is needed for some details (specific hand positions, etc.), or when legible text has to be produced. This is where LoRAs come in. A LoRA is a smaller model created with a technique that makes it possible to improve the performance of a base model on a given task.

What is a LoRA?

Initially developed for LLMs, the Low-Rank Adaptation (LoRA) technique is a fine-tuning method that freezes an existing model and inserts a small number of additional weights to adjust the model's behaviour to a particular need. Instead of a full retraining of the model, LoRAs only require the training of the weights that have been inserted in the model's attention layers (see https://medium.com/@efrat_37973/lora-under-the-hood-how-it-really-works-in-visual-generative-ai-e6c10611b461). LoRAs are therefore quite lightweight and able to leverage the capabilities of larger models. Users equipped with a consumer-grade GPU can train their own LoRAs reasonably fast (on a Mac M3, a LoRA can be produced in 30 minutes). LoRAs are quite popular within communities of amateurs and developers alike. At the time of writing, the AI platform Hugging Face lists 71,312 LoRAs.
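
To make the mechanism concrete, here is a minimal sketch of the idea in PyTorch. It is an illustration rather than the code of any particular library: the pretrained weight matrix is frozen, and only two small low-rank matrices inserted alongside it are trained.

    # Minimal illustration of the LoRA idea (not the code of a specific library):
    # the frozen layer computes W x, and a trainable low-rank correction
    # (alpha / r) * B A x is added to it, so only A and B are ever updated.
    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
            super().__init__()
            self.base = base
            self.base.weight.requires_grad_(False)   # freeze the pretrained weights
            if self.base.bias is not None:
                self.base.bias.requires_grad_(False)
            self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))  # starts at zero: no effect yet
            self.scale = alpha / r

        def forward(self, x):
            return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)

    # Wrapping one attention projection of a hypothetical model:
    layer = LoRALinear(nn.Linear(768, 768), r=8)
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print(trainable)  # 12,288 trainable weights, against roughly 590,000 in the full layer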

What is the network that sustains this object?

Making a LoRA is like baking a cake, a post by knxo on CivitAI, https://civitai.com/articles/138/making-a-lora-is-like-baking-a-cake

The process of training a LoRA is very similar to training a full model, but at a different scale. Even if it requires dramatically less compute, it still involves the same kind of highly complex technical decisions. In fact, training a LoRA mobilizes the whole operational network of decentralized image generation and offers a privileged view of its mode of production.
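
The difference in scale is visible in the training loop itself: the base model stays untouched and only the handful of inserted weights receive gradients. The fragment below is a compressed, hypothetical sketch in the same spirit as the layer above; model, dataloader and diffusion_loss are placeholders for a real pipeline, not actual API names.

    # Hypothetical, compressed training loop: only the LoRA weights are optimized.
    # `model`, `dataloader` and `diffusion_loss` stand in for a real pipeline.
    lora_params = [p for p in model.parameters() if p.requires_grad]  # the inserted weights only
    optimizer = torch.optim.AdamW(lora_params, lr=1e-4)

    for images, captions in dataloader:                 # the curated training set
        loss = diffusion_loss(model, images, captions)  # the usual generation objective
        loss.backward()                                 # gradients flow only to A and B
        optimizer.step()
        optimizer.zero_grad()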

Software dependencies

Various layers of software libraries tame this complexity. A highly skilled user can train a LoRA locally with a set of scripts like kohya_ss and pore over its vertiginous list of options. Platforms like Hugging Face distribute software libraries (peft) that abstract away the complexity of integrating the various components, such as LoRAs, into the AI generation pipeline. And for those who don't want to fiddle with code or lack access to a local GPU, the option of training a LoRA is offered by websites such as Runway ML, Eden AI, Hugging Face or CivitAI, under different pricing schemes.
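
With peft, for instance, injecting LoRA weights into a base model's attention layers takes only a few lines. The snippet below sketches the typical usage; the base model and the names of the target modules are illustrative and depend on the architecture being adapted.

    # Sketch of typical peft usage; the model and target module names are illustrative.
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    base = AutoModelForCausalLM.from_pretrained("gpt2")      # stand-in for any supported base model
    config = LoraConfig(
        r=8,                        # rank of the inserted matrices
        lora_alpha=16,              # scaling factor
        lora_dropout=0.05,
        target_modules=["c_attn"],  # which attention projections to adapt (model-dependent)
    )
    model = get_peft_model(base, config)
    model.print_trainable_parameters()   # typically a fraction of a percent of the full model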

LoRA as a contact zone between communities with different expertise

'Making a LoRA is like baking a cake', says a widely read tutorial, 'a lot of preparation, and then letting it bake. If you didn't properly make the preparations, it will probably be inedible.' To guide the wannabe LoRA creator in their journey, a wealth of tutorials and documentation in various forms is available from sources such as subreddits, Discord channels, YouTube videos, forums and the platforms that release the code or offer training and hosting services. They are diverse in tone and they offer varying forms of expertise. A significant portion of this documentation effort consists of code snippets, detailed explanations of parameters and options, bug reports, step-by-step installation instructions and hardware compatibility tests. It is produced by professionals, hackers, amateurs and newbies, with access to very different infrastructure: users who take unlimited access to compute for granted and others struggling with a local installation. This diversity reflects the position of LoRAs in the AI ecosystem, between expertise and informed amateurism, and between resource-hungry and consumer-grade technology. Whereas foundation model training still remains in the hands of a (happy) few, LoRA training opens up a prospect of democratizing the means of production for those who have time, persistence and a little capital to invest.

Curation as an operational practice

There is more to LoRAs than the technicalities of installing libraries and training. LoRAs are objects of curation. Many tutorials begin with a primer on dataset curation. Indeed, if a user decides to embark on the adventure of creating a LoRA, the reason is usually that a model fails to generate convincing images for a given subject or style. The remedy is to create a dataset that provides better samples of a given character, object or genre. Fans, artists and amateurs produce an abundant literature on the various questions raised by dataset curation: the identification of sources, the selection of images (criteria of quality, diversity, etc.), the annotation (tagging) and the scale (LoRAs can be trained on datasets containing as few as one image or collections of thousands of images). This curatorial practice is very different from the broad scraping behind large-scale models: images are picked by hand, connections are made to established collections, and the work takes on an archival dimension, with a marked prevalence of manga culture and porn.
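
In practice, much of this curation materializes as a folder of images paired with caption files: many training tools expect each image to sit next to a same-named .txt file listing its tags. The short check below is illustrative; the folder name and file extension are assumptions rather than a requirement of any specific trainer.

    # Illustrative dataset check: many LoRA trainers pair each image with a
    # same-named .txt caption (tag) file. Folder name and extension are assumptions.
    from pathlib import Path

    dataset = Path("datasets/mezzanine")              # hypothetical curated collection
    for image in sorted(dataset.glob("*.jpg")):
        caption = image.with_suffix(".txt")
        if not caption.exists():
            print(f"missing caption for {image.name}")
        elif not caption.read_text(encoding="utf-8").strip():
            print(f"empty caption for {image.name}")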

Re-modelling

Dependency on base models:

'This LoRa model was designed to generate images that look more "real" and less "artificially AI-generated by FLUX". It achieves this by making the natural and artificial lighting look more real and making the bokeh less strong. It also adds details to every part of the image, including skin, eyes, hair, and foliage. It also aims to reduce somewhat the common "FLUX chin and skin" and other such issues.' [1]

  • A dialogue with the base model
  • Vocabulary, semantics
  • Rewording the underlying model

How does it evolve through time?

  • From the Microsoft lab to platforms and informed amateurs: diversification of the offer
  • Expansion of the image generation pipeline

How does it create value? Or decrease / affect value?

Rewards for the best creative use of a LoRA, https://civitai.com/bounties/8690/5k-crazywhatever

What is its place/role in techno cultural strategies?

Changing the representation of a female artist in the model: the example of the Louise Bourgeois experiment.

An image of Louise Bourgeois in her studio generated with the model Real Vision
  • Curation, classification, defining styles
  • Demystify the idea that you can generate anything
  • Exploiting the bias, reversing the problem, work with it
  • Conceptual labour
  • Sharing
  • Visibility in communities
  • Needs are defined bottom-up
  • Competence in visual culture, think about image in an abstract way

How does it relate to autonomous infrastructure?

  • Regain control over the training, re-appropriation of the model via fine-tuning
  • (Partial) Decentralization of model production
  • The question of the means of production
  • Ambivalence

Images

Two images generated with the prompt "Student apartment in Budapest with mezzanine". The image on the right has been generated with a LoRA trained on images of mezzanines
The LoRA decomposition as explained by a coder
Screengrab of the LoRA page on the civit.ai platform
LoRA: How to Adapt LLMs Efficiently and Without Latency, by Elisa Terumi, https://exploringartificialintelligence.substack.com/p/lora-how-to-adapt-llms-efficiently