DreamLite: A Lightweight On-Device Unified Model for Image Generation and Editing

Hugging Face Spaces · Paper · Project Page

🌿 Overview

We introduce DreamLite, a compact and unified on-device diffusion model (0.39B) that seamlessly supports both text-to-image generation and text-guided image editing within a single network architecture.

Built upon a pruned mobile U-Net backbone, DreamLite unifies multimodal conditioning through In-Context Spatial Concatenation directly in the latent space. By leveraging progressive step distillation, DreamLite achieves ultra-fast 4-step inference, generating or editing a 1024×1024 image in ~3 seconds on an iPhone 17 Pro (powered by 4-bit Qwen-VL and an fp16 VAE+UNet), fully on-device with zero cloud dependency.
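The idea behind In-Context Spatial Concatenation can be pictured with a minimal tensor sketch: the encoded source image is placed next to the noise latent along a spatial axis, so one U-Net forward pass sees both. This is an illustrative assumption (the 4-channel 128×128 latent shape and the width-axis choice are not confirmed by the paper):

```python
import torch

# Hypothetical shapes: 4-channel latents at 1024 / 8 = 128 spatial resolution.
B, C, H, W = 1, 4, 128, 128

noise_latent = torch.randn(B, C, H, W)   # target latent to be denoised
source_latent = torch.randn(B, C, H, W)  # encoded latent of the image to edit

# In-context spatial concatenation: the condition latent sits beside the
# noise latent along the width axis, so a single U-Net processes both
# halves jointly without extra cross-attention modules.
unet_input = torch.cat([source_latent, noise_latent], dim=-1)
print(unet_input.shape)  # torch.Size([1, 4, 128, 256])
```

For pure text-to-image generation, the conditioning half would simply be omitted (or zeroed), which is what lets one network serve both tasks.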

DreamLite Teaser

DreamLite Architecture
Figure 1. The overall unified architecture of DreamLite.

📰 News

  • [2026.04] 🎉🎉🎉 We officially released the inference code.

🎬 On-Device Demo

Experience real-time generation and editing on an iPhone 17 Pro. No internet connection or cloud processing required.

Demo clips: Human Portrait & Style Transfer · Nature Landscape & Background Swap · Product & Object Replacement

Note: If demos fail to render natively on GitHub, please visit our Project Page to watch the full demonstrations.


βš™οΈ Getting Started

1. Environment Setup

# Clone the repository
git clone https://github.com/ByteVisionLab/DreamLite.git
cd DreamLite

# Create and activate a conda environment
conda create -n dreamlite python=3.10 -y
conda activate dreamlite

# Install dependencies
pip install -r requirements.txt

Ensure the model weights (DreamLite-base and DreamLite-mobile) are placed in the following directory structure:

DreamLite/
├── models/
│   ├── DreamLite-base/
│   └── DreamLite-mobile/

2. Inference via CLI

You can generate or edit images with the provided command-line scripts.

# ==========================================
# DreamLite-base: 28 Steps (High Fidelity)
# ==========================================
# Text-to-Image Generation
python infer.py --prompt "A close-up of a fire-spitting dragon, cinematic shot."

# Text-guided Image Editing
python infer.py --prompt "Transfer this image to oil-painting style." --image_path ./inputs/source.png

# ==========================================
# DreamLite-mobile: 4 Steps (Ultra Fast)
# ==========================================
# Text-to-Image Generation
python infer_mobile.py --prompt "A portrait of a young woman with flowers." 

# Text-guided Image Editing
python infer_mobile.py --prompt "Change the background to a dense forest." --image_path ./inputs/source.png
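The mobile variant's 4-step inference can be pictured with a toy sampling loop. Everything here is illustrative: the scheduler, timesteps, and update rule are stand-ins, and `toy_unet` is a dummy in place of the real distilled network:

```python
import torch

def sample_4_step(unet, latent, timesteps=(999, 749, 499, 249)):
    """Illustrative few-step sampler: one denoising update per distilled step.
    A step-distilled model predicts the denoising direction well enough that
    a coarse 4-point schedule suffices (toy Euler-style update, not the
    actual DreamLite scheduler)."""
    for t in timesteps:
        eps = unet(latent, t)
        latent = latent - eps / len(timesteps)
    return latent

# Dummy "U-Net" standing in for DreamLite-mobile.
toy_unet = lambda x, t: 0.1 * x
out = sample_4_step(toy_unet, torch.ones(1, 4, 128, 128))
```

The point of step distillation is precisely that this loop runs 4 times instead of the 28 steps the base model needs, which is where the ~3-second on-device latency comes from.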

3. Benchmark Evaluation

We provide comprehensive benchmark evaluation scripts (GenEval & ImgEdit) to facilitate performance comparisons between DreamLite and other state-of-the-art models. Please configure your local dataset paths within tools/benchmark/infer_geneval.py and tools/benchmark/infer_imgedit.py prior to execution.

# Run the benchmark evaluation
python tools/benchmark/infer_geneval.py --save_dir ./output/benchmark/geneval_output --geneval_json "YOUR_GENEVAL/evaluation_metadata.jsonl"
python tools/benchmark/infer_imgedit.py --save_dir ./output/benchmark/imgedit_output --json_path "YOUR_IMGEDIT_PATH/ImgEdit/Benchmark/Basic/basic_edit.json" --img_root "YOUR_IMGEDIT_IMAGES_PATH/ImgEdit/Benchmark/singleturn"

4. Interactive Gradio Demo

We provide a user-friendly web interface powered by Gradio. You can try our live demo on Hugging Face Spaces, or deploy it locally on your own machine (GPU/CPU).

Hugging Face Spaces

To run the interactive demo locally:

# Launch the local web server
python tools/app.py

🤗 Checkpoints

We offer two distinct variants of the DreamLite model to provide an optimal balance between visual fidelity and on-device inference latency.

Note

Model Access: Model weights are currently undergoing safety review. To request early access, please contact us at 📧 klfeng1206@outlook.com with an email titled "DreamLite Access Request".

In your email, please include:

  1. Your Name & Affiliation (e.g., University, Company, or personal portfolio).
  2. Intended Use Case (Please briefly describe how you plan to use the DreamLite model).

⚠️ Important Usage and Compliance Notice: By accessing and using these models, you agree to abide by our ethical guidelines. These models must NOT be used to generate, edit, or distribute any content that is sexually explicit, pornographic, violent, discriminatory, or otherwise illegal. We strictly prohibit the use of DreamLite for malicious purposes.

| Model Variant | Params | Resolution | Steps | Guidance |
|---|---|---|---|---|
| DreamLite (Base) | 0.39B | 1024×1024 | 28 | CFG & IMG_CFG |
| DreamLite (Mobile) | 0.39B | 1024×1024 | 4 | No CFG |
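The base model's "CFG & IMG_CFG" guidance suggests a dual classifier-free guidance scheme for editing, combining unconditional, image-only, and image+text predictions. The weighting below follows the common InstructPix2Pix-style formulation as an assumption; DreamLite's exact scheme is not specified here:

```python
import numpy as np

def dual_guidance(eps_uncond, eps_img, eps_full, cfg=7.5, img_cfg=1.5):
    """Combine three noise predictions: unconditional, image-conditioned,
    and image+text-conditioned. InstructPix2Pix-style weighting, used here
    only to illustrate what a CFG & IMG_CFG pair could mean."""
    return (eps_uncond
            + img_cfg * (eps_img - eps_uncond)   # pull toward the source image
            + cfg * (eps_full - eps_img))        # pull toward the text edit

# Toy 4-dim "noise predictions" for a single denoising step.
eps_u, eps_i, eps_f = np.zeros(4), np.ones(4), 2 * np.ones(4)
guided = dual_guidance(eps_u, eps_i, eps_f)  # array of 9.0 for these toys
```

The mobile variant's "No CFG" entry means guidance is baked into the distilled weights, halving (or better) the number of U-Net forward passes per step.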

📊 Main Results

Quantitative comparison with state-of-the-art methods on generation and editing benchmarks.

generation comparison
Text-to-Image generation comparison.

editing comparison
Text-guided image editing comparison.

| Method | Params | GenEval ↑ | DPG ↑ | ImgEdit ↑ | GEdit-EN-Q ↑ |
|---|---|---|---|---|---|
| FLUX.1-Dev / Kontext | 12B | 0.67 | 84.0 | 3.76 | 6.79 |
| BAGEL | 7B | 0.82 | 85.1 | 3.42 | 7.20 |
| OmniGen2 | 4B | 0.80 | 83.6 | 3.44 | 6.79 |
| LongCat-Image / Edit | 6B | 0.87 | 86.6 | 4.49 | 7.55 |
| DeepGen1.0 | 2B | 0.83 | 84.6 | 4.03 | 7.54 |
| SANA-1.6B | 1.6B | 0.67 | 84.8 | - | - |
| SANA-0.6B | 0.6B | 0.64 | 83.6 | - | - |
| SnapGen++ (small) | 0.4B | 0.66 | 85.2 | - | - |
| VIBE | 1.6B | - | - | 3.85 | 7.28 |
| EditMGT | 0.96B | - | - | 2.89 | 6.33 |
| DreamLite (Ours) | 0.39B | 0.72 | 85.8 | 4.11 | 6.88 |

πŸŽ›οΈ LoRA Fine-tuning

We provide comprehensive support for LoRA fine-tuning and inference, enabling lightweight customization of DreamLite on your own domain-specific datasets.

For detailed instructions, training scripts, and examples, please refer to our dedicated LoRA Fine-Tuning Guide.
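For orientation, the core of any LoRA setup is a low-rank adapter around a frozen layer. The sketch below is generic LoRA, not DreamLite's training code; the rank, alpha, and layer choice are assumptions:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Low-rank adapter wrapped around a frozen linear layer (generic LoRA)."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)        # only the adapter is trained
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)     # adapter is a no-op at init
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.up(self.down(x))

layer = LoRALinear(nn.Linear(64, 64))
x = torch.randn(2, 64)
y = layer(x)  # identical to the frozen base output before any training
```

Because only the `down`/`up` matrices are trained, a rank-8 adapter over a 64×64 layer stores 2 × 64 × 8 parameters instead of 4096, which is what makes domain customization of a 0.39B model lightweight.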

📑 Open-Source Plan

  • Release paper on arXiv
  • Release inference code
  • Release LoRA training
  • Release model weights on HuggingFace
  • Release online demo
  • On-device Deployment Reference

πŸ™ Acknowledgement

We thank the great work from SDXL, SnapGen, Qwen and TAESDXL. The work is under supervision from Prof. Wangmeng Zuo.

🪪 License

Code: Apache-2.0

Model weights: CC BY-NC 4.0 (see WEIGHTS_LICENSE)

📄 Citation

If our work assists your research, feel free to give us a star ⭐ or cite us using:

@article{feng2026dreamlite,
  title={DreamLite: A Lightweight On-Device Unified Model for Image Generation and Editing},
  author={Kailai Feng and Yuxiang Wei and Bo Chen and Yang Pan and Hu Ye and Songwei Liu and Chenqian Yan and Yuan Gao},
  journal={arXiv preprint arXiv:2603.28713},
  year={2026}
}
