---
license: creativeml-openrail-m
tags:
  - stable-diffusion
  - stable-diffusion-diffusers
  - text-to-image
datasets:
  - ChristophSchuhmann/improved_aesthetics_6.5plus
library_name: diffusers
pipeline_tag: text-to-image
extra_gated_prompt: >-
  This model is open access and available to all, with a CreativeML OpenRAIL-M
  license further specifying rights and usage.

  The CreativeML OpenRAIL License specifies: 


  1. You can't use the model to deliberately produce nor share illegal or
  harmful outputs or content 

  2. The authors claim no rights on the outputs you generate, you are free to
  use them and are accountable for their use which must not go against the
  provisions set in the license

  3. You may re-distribute the weights and use the model commercially and/or as
  a service. If you do, please be aware you have to include the same use
  restrictions as the ones in the license and share a copy of the CreativeML
  OpenRAIL-M to all your users (please read the license entirely and carefully)

  Please read the full license carefully here:
  https://huggingface.co/spaces/CompVis/stable-diffusion-license
      
extra_gated_heading: Please read the LICENSE to access this model

# BK-SDM-v2 Model Card

BK-SDM-{v2-Base, v2-Small, v2-Tiny} are obtained by compressing SD-v2.1-base.

- Block-removed Knowledge-distilled Stable Diffusion Models (BK-SDMs) are developed for efficient text-to-image (T2I) synthesis:
  - Certain residual and attention blocks are removed from the U-Net of SD (see the parameter-count sketch below).
  - Despite the use of very limited data, distillation retraining remains surprisingly effective.
- Resources for more information: [Paper](https://arxiv.org/abs/2305.15798), [GitHub](https://github.com/Nota-NetsPresso/BK-SDM).
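
As a quick sanity check of the block removal, one can compare U-Net parameter counts between the original SD-v2.1-base and this model. A minimal sketch, assuming the standard diffusers `unet` subfolder layout for both repositories:

```python
import torch
from diffusers import UNet2DConditionModel

# Load only the U-Nets; the block-removed model should be noticeably smaller.
unet_orig = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", subfolder="unet", torch_dtype=torch.float16
)
unet_ours = UNet2DConditionModel.from_pretrained(
    "nota-ai/bk-sdm-v2-small", subfolder="unet", torch_dtype=torch.float16
)

def count_params(model: torch.nn.Module) -> int:
    return sum(p.numel() for p in model.parameters())

print(f"SD-v2.1-base U-Net:    {count_params(unet_orig) / 1e9:.2f}B params")
print(f"BK-SDM-v2-Small U-Net: {count_params(unet_ours) / 1e9:.2f}B params")
```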

## Examples with the 🤗 [Diffusers](https://github.com/huggingface/diffusers) library

Inference code with the default PNDM scheduler and 50 denoising steps:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the compressed pipeline in half precision and move it to the GPU.
pipe = StableDiffusionPipeline.from_pretrained("nota-ai/bk-sdm-v2-small", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "a black vase holding a bouquet of roses"
image = pipe(prompt).images[0]

image.save("example.png")
```
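
The scheduler and step count can be overridden at call time. As a hedged sketch reusing the pipeline above, swapping in `DPMSolverMultistepScheduler` (any SD-compatible diffusers scheduler should work similarly) with fewer steps:

```python
from diffusers import DPMSolverMultistepScheduler

# Replace the default PNDM scheduler with a faster multistep solver.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
image = pipe(prompt, num_inference_steps=25).images[0]
image.save("example_dpm.png")
```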

## Compression Method

BK-SDM-v2 follows the U-Net architecture and distillation retraining of BK-SDM, but uses a reduced batch size (128 instead of 256) for faster training.

- Training data: 212,776 image-text pairs (i.e., 0.22M pairs) from LAION-Aesthetics V2 6.5+
- Hardware: a single NVIDIA A100 80GB GPU
- Gradient accumulations: 4
- Batch size: 128 (= 4 × 32; see the sketch after this list)
- Optimizer: AdamW
- Learning rate: constant 5e-5 for the 50K-iteration retraining
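
For illustration, a minimal sketch of the effective-batch arithmetic implied by these settings (4 gradient accumulations × 32 per-step samples = 128). The dummy model and loss are placeholders, not the actual distillation objective, which is defined in the GitHub training script:

```python
import torch

# Placeholder model/loss; the real retraining distills a block-removed U-Net.
model = torch.nn.Linear(8, 8)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)  # constant LR, no schedule

grad_accum, micro_batch = 4, 32             # effective batch = 4 x 32 = 128
for step in range(2):                       # 50_000 iterations in the actual run
    optimizer.zero_grad()
    for _ in range(grad_accum):
        x = torch.randn(micro_batch, 8)
        loss = model(x).pow(2).mean()       # dummy loss
        (loss / grad_accum).backward()      # average gradients over accumulations
    optimizer.step()
```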

## Experimental Results

The following tables show zero-shot results on 30K samples from the MS-COCO validation split. We generated 512×512 images with the PNDM scheduler and 25 denoising steps, then downsampled them to 256×256 to compute the generation scores (a sketch of this preprocessing follows the list below).

- Our models were drawn at the 50K-th training iteration.
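
A minimal sketch of the per-image preprocessing (512×512 generation with the default PNDM scheduler at 25 steps, then PIL downsampling to 256×256); the FID/IS/CLIP scoring code itself is omitted:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "nota-ai/bk-sdm-v2-small", torch_dtype=torch.float16
).to("cuda")

# 512x512 generation with the default PNDM scheduler, 25 denoising steps.
image = pipe("a black vase holding a bouquet of roses",
             num_inference_steps=25).images[0]
image_256 = image.resize((256, 256))   # downsampled copy used for scoring
```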

### Compression of SD-v2.1-base

| Model | FID↓ | IS↑ | CLIP Score↑ (ViT-g/14) | # Params, U-Net | # Params, Whole SDM |
|---|---|---|---|---|---|
| Stable Diffusion v2.1-base | 13.93 | 35.93 | 0.3075 | 0.87B | 1.26B |
| BK-SDM-v2-Base (Ours) | 15.85 | 31.70 | 0.2868 | 0.59B | 0.98B |
| BK-SDM-v2-Small (Ours) | 16.61 | 31.73 | 0.2901 | 0.49B | 0.88B |
| BK-SDM-v2-Tiny (Ours) | 15.68 | 31.64 | 0.2897 | 0.33B | 0.72B |

### Compression of SD-v1.4

| Model | FID↓ | IS↑ | CLIP Score↑ (ViT-g/14) | # Params, U-Net | # Params, Whole SDM |
|---|---|---|---|---|---|
| Stable Diffusion v1.4 | 13.05 | 36.76 | 0.2958 | 0.86B | 1.04B |
| BK-SDM-Base (Ours) | 15.76 | 33.79 | 0.2878 | 0.58B | 0.76B |
| BK-SDM-Base-2M (Ours) | 14.81 | 34.17 | 0.2883 | 0.58B | 0.76B |
| BK-SDM-Small (Ours) | 16.98 | 31.68 | 0.2677 | 0.49B | 0.66B |
| BK-SDM-Small-2M (Ours) | 17.05 | 33.10 | 0.2734 | 0.49B | 0.66B |
| BK-SDM-Tiny (Ours) | 17.12 | 30.09 | 0.2653 | 0.33B | 0.50B |
| BK-SDM-Tiny-2M (Ours) | 17.53 | 31.32 | 0.2690 | 0.33B | 0.50B |

## Visual Analysis: Image Areas Affected by Each Word

Knowledge distillation (KD) enables our models to mimic the original SDM, yielding similar per-word attribution maps. The model trained without KD behaves differently, producing dissimilar maps and inaccurate generations (e.g., two sheep and unusual bird shapes).

*(Figure: per-word cross-attention maps.)*
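
To reproduce this kind of per-word attribution map, one option is the third-party DAAM library (`pip install daam`). The sketch below follows its README at the time of writing and is an assumption, since DAAM's API and diffusers compatibility vary across versions:

```python
import torch
import matplotlib.pyplot as plt
from daam import trace, set_seed  # third-party; API may differ by version
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("nota-ai/bk-sdm-v2-small", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "a black vase holding a bouquet of roses"
gen = set_seed(0)  # reproducibility helper from daam

with torch.no_grad():
    with trace(pipe) as tc:
        out = pipe(prompt, num_inference_steps=25, generator=gen)
        heat_map = tc.compute_global_heat_map()
        word_map = heat_map.compute_word_heat_map("roses")
        word_map.plot_overlay(out.images[0])   # heat map for one word
plt.show()
```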

## Uses

Please follow the usage guidelines of Stable Diffusion v1.

## Acknowledgments

## Citation

```bibtex
@article{kim2023architectural,
  title={BK-SDM: A Lightweight, Fast, and Cheap Version of Stable Diffusion},
  author={Kim, Bo-Kyeong and Song, Hyoung-Kyu and Castells, Thibault and Choi, Shinkook},
  journal={arXiv preprint arXiv:2305.15798},
  year={2023},
  url={https://arxiv.org/abs/2305.15798}
}
@article{kim2023bksdm,
  title={BK-SDM: Architecturally Compressed Stable Diffusion for Efficient Text-to-Image Generation},
  author={Kim, Bo-Kyeong and Song, Hyoung-Kyu and Castells, Thibault and Choi, Shinkook},
  journal={ICML Workshop on Efficient Systems for Foundation Models (ES-FoMo)},
  year={2023},
  url={https://openreview.net/forum?id=bOVydU0XKC}
}
```

*This model card is based on the Stable Diffusion v1 model card.*