rg-lcd.github.io - Reward Guided Latent Consistency Distillation

Description: We propose Reward Guided Latent Consistency Distillation (RG-LCD).

Keywords: text-to-image, consistency model, learning from human/AI feedback

Example domain paragraphs

Latent Consistency Distillation (LCD) has emerged as a promising paradigm for efficient text-to-image synthesis. By distilling a latent consistency model (LCM) from a pre-trained teacher latent diffusion model (LDM), LCD facilitates the generation of high-fidelity images within merely 2 to 4 inference steps. However, the LCM's efficient inference comes at the cost of sample quality. In this paper, we propose compensating for the quality loss by aligning the LCM's output with human preference during training. Specifically, we introduce Reward Guided LCD (RG-LCD), which integrates feedback from a reward model (RM) into the LCD process by augmenting the original LCD loss with the objective of maximizing the reward associated with the LCM's single-step generation.
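To make the 2-to-4-step inference concrete, here is a minimal sketch of few-step sampling with a latent consistency model using the Hugging Face diffusers library. The checkpoint name is a publicly available LCM used purely for illustration, not the model released by this project.

```python
# Sketch: 4-step text-to-image sampling with a latent consistency model (diffusers).
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "SimianLuo/LCM_Dreamshaper_v7",  # illustrative public LCM checkpoint
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="a photo of an astronaut riding a horse on the moon",
    num_inference_steps=4,   # LCMs generate in 2-4 steps instead of ~50
    guidance_scale=8.0,      # LCMs embed the guidance scale learned during distillation
).images[0]
image.save("lcm_sample.png")
```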

As directly optimizing towards differentiable RMs can suffer from over-optimization, we overcome this difficulty by proposing the use of a latent proxy RM (LRM). This novel component serves as an intermediary, connecting our LCM with the RM. Empirically, we demonstrate that incorporating the LRM into our RG-LCD successfully avoids high-frequency noise in the generated images, contributing to both improved FID on MS-COCO and a higher HPSv2.1 score on HPSv2's test set, surpassing those achieved by the baseline LCM.
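This summary does not spell out how the LRM is built or trained, so the following is only a rough sketch under stated assumptions: the LRM is taken to be a small convolutional scorer over latents, fitted to the scores an expert pixel-space RM assigns to the decoded images, so that the LCM can later be guided entirely in latent space. All class, function, and argument names here are hypothetical; text conditioning is omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentProxyRM(nn.Module):
    """Hypothetical latent proxy RM: a small scorer over SD-style latents (4 x 64 x 64)."""
    def __init__(self, in_channels: int = 4, width: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, width, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(width, width, 3, stride=2, padding=1), nn.SiLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(width, 1),
        )

    def forward(self, latents: torch.Tensor) -> torch.Tensor:
        return self.net(latents).squeeze(-1)  # one scalar reward per latent

def lrm_fit_step(lrm, expert_rm, vae, latents, prompts, opt):
    """Fit the proxy RM to the expert RM's scores on decoded images (assumed MSE objective)."""
    with torch.no_grad():
        images = vae.decode(latents / vae.config.scaling_factor).sample  # decode once, no grad
        target = expert_rm(images, prompts)   # expert RM scores in pixel space (placeholder interface)
    pred = lrm(latents)                       # proxy scores directly in latent space
    loss = F.mse_loss(pred, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

The point of such a proxy is that guiding the LCM never requires back-propagating through the VAE decoder and the expert RM, which is where the high-frequency, over-optimized artifacts mentioned above tend to originate.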

Our RG-LCD framework consists of three main components: a teacher LDM, a student LCM, and a Reward Model (RM). The teacher LDM is pre-trained on a large-scale dataset and serves as the source of the ground-truth latent codes. The student LCM is trained to mimic the teacher LDM's generation process by distilling the teacher's latent codes. During training, the LCM is optimized to maximize the reward predicted by the RM.
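Since the paragraph above states that the LCM both mimics the teacher and maximizes the RM's predicted reward, a minimal sketch of one combined training step is given below. It assumes a standard consistency-distillation target obtained via one teacher-guided solver step and a simple weighted reward term; `teacher_step`, `reward_model`, and `reward_weight` are placeholder names, not the authors' API.

```python
import torch
import torch.nn.functional as F

def rg_lcd_step(student_lcm, teacher_step, reward_model, z_t, t, s, prompt_emb,
                reward_weight: float = 0.1):
    """One hedged RG-LCD-style training step (illustrative, not the official implementation):
    1) Consistency distillation: the student's prediction at noise level t should match
       its frozen/EMA prediction at an earlier level s reached by one teacher ODE step.
    2) Reward guidance: the student's single-step clean-latent estimate should score
       highly under the (latent) reward model.
    """
    # Student's consistency output at noise level t
    z0_student = student_lcm(z_t, t, prompt_emb)

    with torch.no_grad():
        z_s = teacher_step(z_t, t, s, prompt_emb)    # teacher-guided solver step t -> s
        z0_target = student_lcm(z_s, s, prompt_emb)  # distillation target (ideally an EMA copy)

    lcd_loss = F.mse_loss(z0_student, z0_target)            # consistency distillation loss
    reward = reward_model(z0_student, prompt_emb).mean()    # reward on the one-step generation
    return lcd_loss - reward_weight * reward                # minimize LCD loss, maximize reward
```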
