Not All Steps are Created Equal: Selective Diffusion Distillation for Image Manipulation (ICCV 2023)

This is the official implementation of SDD (ICCV 2023).

Conventional diffusion-based editing pipelines face a trade-off: adding too much noise harms the fidelity of the image, while adding too little limits its editability. In this paper, we propose a novel framework, Selective Diffusion Distillation (SDD), that ensures both the fidelity and editability of images. Instead of editing images directly with a diffusion model, we train a feedforward image manipulation network under the guidance of the diffusion model. In addition, we propose an effective indicator that selects the semantic-related timestep, so that the correct semantic guidance is obtained from the diffusion model. This approach avoids the dilemma caused by the diffusion process.
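The timestep selection described above can be illustrated with a toy sketch. Everything here is hypothetical: `indicator_score` is a stand-in for the paper's actual indicator, not its implementation. The point is only the shape of the idea: score each candidate timestep and distill from the best-scoring one.

```python
# Toy illustration: score each diffusion timestep with a (hypothetical)
# indicator and pick the semantic-related one to use for distillation.

def indicator_score(t: int, num_steps: int = 1000) -> float:
    """Stand-in for the paper's indicator. A simple bump that peaks at
    a mid-range timestep, mimicking the observation that very small t
    (too little noise) and very large t (too much noise) both yield
    poor semantic guidance."""
    x = t / num_steps
    return x * (1.0 - x)  # peaks at t = num_steps // 2

def select_timestep(num_steps: int = 1000) -> int:
    """Return the timestep with the highest indicator score."""
    return max(range(num_steps), key=lambda t: indicator_score(t, num_steps))

t_star = select_timestep()
```

With this toy scoring function the selected timestep is the mid-range one; the real indicator in the paper is learned from the diffusion model's guidance, not fixed in advance.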

For more details, please refer to:

Not All Steps are Created Equal: Selective Diffusion Distillation for Image Manipulation [Paper]
Luozhou Wang*, Shuai Yang*, Shu Liu, Yingcong Chen

Installation

  1. Create a conda environment with Python 3.8.0: `conda create -n sdd python==3.8.0`
  2. Activate it: `conda activate sdd`
  3. Install the requirements: `pip install -r requirements.txt`

Getting Started

Preparation

  1. Prepare the data and pretrained checkpoints.

     Data: CelebA latent codes (train and test splits)

     Pretrained StyleGAN2: stylegan2-ffhq

     FaceNet for IDLoss: facenet

  2. Prepare your access token from Hugging Face. Please place your token at ./TOKEN.
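Assuming the repository reads the token verbatim from ./TOKEN as a plain-text file (which is what the step above suggests), the file can be written from Python like this; the token string below is a placeholder, not a real token:

```python
from pathlib import Path

# Write a Hugging Face access token to ./TOKEN, where the repository
# expects to find it. Replace the placeholder with your real token.
hf_token = "hf_xxxxxxxxxxxxxxxxxxxx"  # placeholder, not a real token
Path("TOKEN").write_text(hf_token)
```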

Infer with a pretrained SDD checkpoint (white hair)

  1. Download the pretrained SDD checkpoint for white hair. Please place it at ./pretrain/white_hair.pt.

  2. Run inference: `python inference.py --config ./configs/white_hair.yml --work_dir work_dirs/white_hair/`

Train your own SDD

  1. Prepare your YAML config file.
  2. Train SDD: `python train.py --config [YOUR YAML] --work_dir [YOUR WORK DIR]`
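The shape of the training loop can be sketched with a deliberately toy example. This is not the repository's code: the single-parameter "mapper" and the fixed `teacher_loss` are stand-ins for the feedforward manipulation network and the frozen diffusion model's guidance, respectively. What the sketch shows is only the structure of distillation: the teacher supplies gradients but is never updated, and only the mapper's parameters move.

```python
# Toy sketch of distillation-style training: a tiny feedforward "mapper"
# is optimized while the "teacher" (a fixed scoring function standing in
# for the frozen diffusion model) only supplies the training signal.
# This illustrates the loop's shape, not the actual SDD loss.

def teacher_loss(edited: float, target: float = 1.0) -> float:
    """Frozen guidance: penalize distance from a target semantic direction."""
    return (edited - target) ** 2

def train_mapper(steps: int = 100, lr: float = 0.1) -> float:
    w = 0.0  # the mapper's single learnable parameter
    z = 0.0  # a fixed input latent code
    for _ in range(steps):
        edited = z + w                # mapper: one feedforward edit of the latent
        grad = 2.0 * (edited - 1.0)   # d(teacher_loss)/dw; teacher stays frozen
        w -= lr * grad                # gradient step on the mapper only
    return w

w_final = train_mapper()
```

Because only the mapper is trained, inference afterwards is a single feedforward pass, with no iterative diffusion sampling; this is the efficiency argument made in the paper.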

Search with HQS

  1. Prepare your YAML config file.
  2. Search with HQS: `python search.py --config [YOUR YAML] --work_dir [YOUR WORK DIR]`

Citation

If you find this project useful in your research, please consider citing:

@misc{wang2023steps,
      title={Not All Steps are Created Equal: Selective Diffusion Distillation for Image Manipulation}, 
      author={Luozhou Wang and Shuai Yang and Shu Liu and Ying-cong Chen},
      year={2023},
      eprint={2307.08448},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
