
Can it be used with SD1.5, and can it be combined with other acceleration methods such as ByteDance/Hyper-SD? #1

Open · libai-lab opened this issue Jun 4, 2024 · 6 comments

@libai-lab

Can it be used with SD1.5, and can it be combined with other acceleration methods such as ByteDance/Hyper-SD?

@A-suozhang (Member)

MixDQ supports SD1.5. By using the lcm_lora.yaml config, you can quantize SD1.5-like models (e.g., Dreamlike) with LCM-LoRA. Our quantization code is independent of the timestep-wise acceleration method: by substituting the sdxl-turbo model ID in the config, it is also compatible with Hyper-SD.
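For illustration, a minimal sketch of that model-ID substitution, assuming the config is plain YAML and the checkpoint is referenced by a single model-ID field (the key names below are assumptions, not MixDQ's actual schema):

```python
import yaml

# Load the shipped config. "model" / "model_id" are hypothetical keys --
# check lcm_lora.yaml for the real field names.
with open("configs/lcm_lora.yaml") as f:
    cfg = yaml.safe_load(f)

# Point the config at a Hyper-SD checkpoint instead of sdxl-turbo.
cfg["model"]["model_id"] = "ByteDance/Hyper-SD"

with open("configs/hyper_sd.yaml", "w") as f:
    yaml.safe_dump(cfg, f)
```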

@greasebig

Can it be used directly with SD1.5 or SDXL? What I mean is: using W8A8 to accelerate normal 20-step inference, without LCM-LoRA or SDXL-Turbo.

@A-suozhang (Member)

Yes, it can be used directly. Just follow the example sdxl.yaml in our configs. For the SD1.5 model, you can remove the LoRA-related configs in lcm_lora.yaml; it is then compatible with standard SD1.5.
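As a sketch of that edit, assuming the LoRA settings are top-level keys containing "lora" in their names (the actual layout of lcm_lora.yaml may differ):

```python
import yaml

with open("configs/lcm_lora.yaml") as f:
    cfg = yaml.safe_load(f)

# Drop the LoRA-related entries so the config targets vanilla SD1.5.
# The key matching below is an assumption about the config layout.
for key in [k for k in cfg if "lora" in k.lower()]:
    cfg.pop(key)
cfg["model"]["model_id"] = "runwayml/stable-diffusion-v1-5"  # hypothetical key

with open("configs/sd15.yaml", "w") as f:
    yaml.safe_dump(cfg, f)
```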

@greasebig

greasebig commented Jun 17, 2024

If I want to use normal 20-step SDXL inference with the pipeline from https://huggingface.co/nics-efc/MixDQ/tree/main, what should I do?
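(For reference, a hypothetical sketch of what 20-step inference through a diffusers custom pipeline usually looks like; the exact entry point and arguments for nics-efc/MixDQ may differ, so check its model card:)

```python
import torch
from diffusers import DiffusionPipeline

# Assumption: the HF repo exposes a diffusers-style custom pipeline.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    custom_pipeline="nics-efc/MixDQ",  # hypothetical loading path
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "a photo of an astronaut riding a horse",
    num_inference_steps=20,  # standard multi-step SDXL schedule
    guidance_scale=7.5,      # CFG is normally kept for non-distilled models
).images[0]
image.save("out.png")
```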

@greasebig

The pipeline from https://huggingface.co/nics-efc/MixDQ/tree/main seems to be compatible only with lcm_lora and sdxl-turbo.

@greasebig

It seems I need to generate calibration data and run the post-training quantization (PTQ) process myself, which may take a long time.
Do you have a quant-parameters ckpt.pth that can be used directly with SDXL?
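(For context, a minimal sketch of loading such a checkpoint, assuming it is an ordinary PyTorch checkpoint; MixDQ's actual loading path may differ:)

```python
import torch

# Assumption: the quant parameters (scales, zero-points, bit-width
# choices) are stored as a plain PyTorch checkpoint.
quant_params = torch.load("ckpt.pth", map_location="cpu")
print(list(quant_params)[:10])  # inspect what the checkpoint contains
```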
