Feature Summary
Add support for reference-only preprocessor modes to control image generation using a reference image
Detailed Description
The sd-webui-controlnet project has implemented reference-only preprocessing modes (reference_only, reference_adain, reference_adain+attn) that control the output generation using a reference image.
Key features:
- Transfers style and visual characteristics from a reference image to the generated output
- Works without requiring additional ControlNet model files (uses the base SD model's attention mechanisms)
- Can be combined with traditional ControlNet models (canny, depth, pose) for more complex workflows
How it works:
These modes modify the self-attention layers during generation by injecting features from the reference image:
- reference_only: direct substitution of K/V matrices from the reference image
- reference_adain: applies Adaptive Instance Normalization (AdaIN) before attention injection
- reference_adain+attn: hybrid approach combining both methods (recommended for style transfer)
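As an illustrative sketch (not the actual diffusers or sd-webui-controlnet code), the two building blocks can be expressed in NumPy. The function names `adain` and `reference_attention` are hypothetical; real implementations hook these operations into the UNet's self-attention blocks during sampling:

```python
import numpy as np

def adain(x, ref, eps=1e-5):
    # Adaptive Instance Normalization: re-normalize x's per-row
    # feature statistics (mean/std) to match the reference's.
    mu_x, std_x = x.mean(axis=-1, keepdims=True), x.std(axis=-1, keepdims=True)
    mu_r, std_r = ref.mean(axis=-1, keepdims=True), ref.std(axis=-1, keepdims=True)
    return (x - mu_x) / (std_x + eps) * std_r + mu_r

def reference_attention(q, k, v, k_ref, v_ref):
    # "reference_only"-style injection: concatenate the reference
    # image's K/V with the current K/V so self-attention can also
    # attend to reference features (a sketch of the idea only).
    k_cat = np.concatenate([k, k_ref], axis=0)
    v_cat = np.concatenate([v, v_ref], axis=0)
    scores = q @ k_cat.T / np.sqrt(q.shape[-1])
    # numerically stable softmax over the combined keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v_cat
```

In reference_adain+attn, the AdaIN step would be applied to the hidden states before the injected attention, which is why that mode is typically preferred for style transfer.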
References:
- Main: https://github.com/huggingface/diffusers/blob/main/examples/community/stable_diffusion_controlnet_reference.py
- Implementation discussion: "[New Preprocessor] The 'reference_adain' and 'reference_adain+attn' are added" (Mikubill/sd-webui-controlnet#1280)
- Usage examples: https://www.reddit.com/r/StableDiffusion/comments/13lla5c/a_deep_dive_into_the_new_reference_controlnets/
This feature would enable powerful style transfer and reference-based generation in stable-diffusion.cpp without additional model loading.
Alternatives you considered
No response
Additional context
No response