Works with style references. Simple setup. Clear outputs.
Contents
- QwenEdit InStyle basics in plain words
- How to prompt (what to type)
- What it’s good at (and where it slips)
- Training set (what it used)
- Where everything lives (kept simple)
QwenEdit InStyle basics in plain words
- Base: QwenEdit (image-to-image).
- Adapter: LoRA that focuses on style.
- Goal: keep the look of a style ref without copying random details from the ref image.
- Result: stronger style match, fewer unwanted artifacts, better prompt following.
- License: Apache-2.0.
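For concreteness, here is a minimal loading sketch using diffusers. It assumes a recent diffusers release that ships QwenImageEditPipeline with LoRA loading; the base model id, dtype, and device shown here are assumptions, so adjust them to your setup.

```python
# Minimal sketch: load the QwenEdit base model and attach the InStyle LoRA.
# Assumes a recent diffusers build with QwenImageEditPipeline and LoRA support.
import torch
from diffusers import QwenImageEditPipeline

pipe = QwenImageEditPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit",        # base image-to-image model (adjust if needed)
    torch_dtype=torch.bfloat16,
)
pipe.load_lora_weights(
    "peteromallet/Qwen-Image-Edit-InStyle",   # LoRA repo from the model page
    weight_name="InStyle-0.5.safetensors",
)
pipe.to("cuda")
```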
How to prompt (what to type)
- Start with: “Make an image in this style of …”
- Then say what you want.
- Example: “Make an image in this style of a serene mountain landscape at sunset.”
- Keep it clear; let the style ref do the heavy lifting.
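Continuing the sketch above, a hedged usage example: the style reference goes in as the input image, and the prompt follows the template. Parameter names like num_inference_steps follow the common diffusers convention and may need tuning for your setup.

```python
# Hedged usage example: style ref in, prompt uses the
# "Make an image in this style of ..." template.
from PIL import Image

style_ref = Image.open("style_ref.png").convert("RGB")  # your style reference image
prompt = "Make an image in this style of a serene mountain landscape at sunset."

result = pipe(
    image=style_ref,
    prompt=prompt,
    num_inference_steps=40,   # typical range; tune as needed
).images[0]
result.save("styled_output.png")
```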
What it’s good at (and where it slips)
Strengths
- Picks up subtle style cues (color palette, brush feel, compositional mood).
- Avoids dragging over stray objects or textures from the ref.
- Keeps the new image coherent and close to your prompt.
Can struggle
- Very abstract or odd styles that don’t map well to scenes.
- Ultra-specific technical asks that fight the style.
- Anatomy can wobble in tough cases.
Training set (what it used)
- Trained on a curated set of high-quality Midjourney style refs.
- The set separates style from content so the LoRA learns the look, not the leftover objects.
- Dataset: https://huggingface.co/datasets/peteromallet/high-quality-midjouney-srefs
Where everything lives (kept simple)
- Model page (download the LoRA): https://huggingface.co/peteromallet/Qwen-Image-Edit-InStyle/blob/main/InStyle-0.5.safetensors
Thanks a lot, your support means a lot to me.