QwenEdit InStyle LoRA — style transfer that stays on style

By Esha
2 Min Read

Works with style references. Simple setup. Clear outputs.

Contents

  • QwenEdit InStyle basics in plain words
  • How to prompt (what to type)
  • What it’s good at (and where it slips)
  • Training set (what it used)
  • Where everything lives (kept simple)

QwenEdit InStyle basics in plain words

  • Base: QwenEdit (image-to-image).
  • Adapter: LoRA that focuses on style (see the loading sketch after this list).
  • Goal: keep the look of the style reference without copying stray details from it.
  • Result: stronger style match, fewer unwanted artifacts, better prompt following.
  • License: Apache-2.0.
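If you run QwenEdit through diffusers, attaching the LoRA takes a few lines. This is a minimal sketch, not the official setup: it assumes the diffusers QwenImageEditPipeline and the Qwen/Qwen-Image-Edit checkpoint, and the LoRA file path is a placeholder, so point it at wherever you downloaded the InStyle weights.

```python
# Minimal sketch (assumptions noted in comments): QwenEdit base + a style LoRA on top.
import torch
from diffusers import QwenImageEditPipeline

# Base image-to-image editing model.
pipe = QwenImageEditPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit",
    torch_dtype=torch.bfloat16,
).to("cuda")

# Attach the style-focused LoRA adapter (placeholder path, not the official filename).
pipe.load_lora_weights("path/to/qwenedit_instyle_lora.safetensors")
```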

How to prompt (what to type)

  • Start with: “Make an image in this style of …”
  • Then say what you want.
  • Example: “Make an image in this style of a serene mountain landscape at sunset.”
  • Keep it clear and let the style ref do the heavy lifting; a short code sketch follows.
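Putting the prompt pattern together with a style reference, continuing from the pipeline sketch above. The file names and parameter values here are illustrative, not tuned recommendations.

```python
# Continues from the pipeline loaded above; paths and values are illustrative.
import torch
from diffusers.utils import load_image

style_ref = load_image("style_reference.png")  # the image whose look you want

# The prompt template: name the style transfer, then describe the content you want.
prompt = "Make an image in this style of a serene mountain landscape at sunset."

result = pipe(
    image=style_ref,                 # the style reference drives the look
    prompt=prompt,                   # the prompt says what to depict
    num_inference_steps=30,
    generator=torch.Generator("cuda").manual_seed(0),
).images[0]

result.save("styled_output.png")
```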

What it’s good at (and where it slips)

Strengths

  • Catches small style cues (color, brush feel, composition mood).
  • Avoids dragging over stray objects or textures from the ref.
  • Keeps the new image coherent and close to your prompt.

Can struggle

  • Very abstract or odd styles that don’t map well to scenes.
  • Ultra-specific technical asks that fight the style.
  • Anatomy can wobble in tough cases.

Training set (what it used)

  • Trained on a curated set of high-quality Midjourney style refs.
  • The set separates style from content so the LoRA learns the look, not the leftover objects.

Dataset: https://huggingface.co/datasets/peteromallet/high-quality-midjouney-srefs
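If you want to inspect the training data yourself, it loads with the standard Hugging Face datasets library. The splits and column names are whatever the repo defines, so check the printed schema rather than assuming one.

```python
# Quick look at the style-reference dataset; schema details come from the repo itself.
from datasets import load_dataset

ds = load_dataset("peteromallet/high-quality-midjouney-srefs")
print(ds)  # available splits, row counts, and column names
```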

Where everything lives (kept simple)
