Self-Rectifying Diffusion Sampling with Perturbed-Attention Guidance
arXiv - CS - Artificial Intelligence. Pub Date: 2024-03-26. DOI: arxiv-2403.17377
Donghoon Ahn, Hyoungwon Cho, Jaewon Min, Wooseok Jang, Jungwoo Kim, SeonHwa Kim, Hyun Hee Park, Kyong Hwan Jin, Seungryong Kim
Recent studies have demonstrated that diffusion models are capable of
generating high-quality samples, but their quality heavily depends on sampling
guidance techniques, such as classifier guidance (CG) and classifier-free
guidance (CFG). These techniques are often not applicable in unconditional
generation or in various downstream tasks such as image restoration. In this
paper, we propose a novel sampling guidance, called Perturbed-Attention
Guidance (PAG), which improves diffusion sample quality across both
unconditional and conditional settings, achieving this without requiring
additional training or the integration of external modules. PAG is designed to
progressively enhance the structure of samples throughout the denoising
process. Leveraging the self-attention mechanism's ability to capture
structural information, it generates intermediate samples with degraded
structure by substituting selected self-attention maps in the diffusion U-Net
with an identity matrix, and it guides the denoising process away from these
degraded samples. In both ADM and Stable Diffusion, PAG surprisingly improves
sample quality in conditional and even unconditional scenarios. Moreover, PAG
significantly improves the baseline performance in various downstream tasks
where existing guidances such as CG or CFG cannot be fully utilized, including
ControlNet with empty prompts and image restoration such as inpainting and
deblurring.
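The abstract describes two mechanical ingredients: the perturbation (replacing a self-attention map with an identity matrix, so each token attends only to itself) and a guidance update that extrapolates away from the structurally degraded prediction, analogous in form to CFG. The toy sketch below illustrates both on plain NumPy arrays; the function names, shapes, and the standalone setting are hypothetical illustrations, not the paper's U-Net implementation.

```python
import numpy as np

def self_attention(q, k, v, perturb=False):
    """Toy single-head self-attention over token rows of q, k, v.

    With perturb=True, the softmax attention map is replaced by an
    identity matrix (the PAG-style perturbation): each token attends
    only to itself, so the output is just v and any structural
    information carried by cross-token attention is discarded.
    """
    if perturb:
        attn = np.eye(q.shape[0])  # identity attention map
    else:
        scores = q @ k.T / np.sqrt(q.shape[-1])
        scores -= scores.max(axis=-1, keepdims=True)  # numerically stable softmax
        attn = np.exp(scores)
        attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ v

def pag_guidance(eps, eps_perturbed, scale):
    """Combine the normal and perturbed noise estimates.

    Steers the denoising step away from the degraded branch:
        eps_hat = eps + s * (eps - eps_perturbed)
    which mirrors the extrapolation form used by classifier-free
    guidance, but needs no class label or text prompt.
    """
    return eps + scale * (eps - eps_perturbed)
```

In a real sampler, `eps` and `eps_perturbed` would come from two forward passes of the same denoiser, one with selected self-attention maps swapped for identity; `scale` plays the role of the guidance weight, and `scale = 0` recovers unguided sampling.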
Updated: 2024-03-28