
Awesome Diffusion Large Language Models by QuenithAI

A curated collection of papers, models, and resources covering top-tier research on diffusion large language models.


Note

This repository is proudly maintained by the frontline research mentors at QuenithAI (应达学术). It aims to provide the most comprehensive and cutting-edge map of papers and technologies in the field of Diffusion Large Language Models.

Your contributions are also vital: feel free to open an issue or submit a pull request to become a collaborator on this repository. We look forward to your participation!

If you require expert 1-on-1 guidance on your submissions to top-tier conferences and journals, we invite you to contact us via WeChat or E-mail.


This repository is built and continuously maintained by the frontline research mentors at QuenithAI (应达学术), and aims to present the most comprehensive and cutting-edge collection of papers in the field of diffusion large language models.

Your contributions matter to us and to the community: we sincerely invite you to open an issue or submit a pull request to become a collaborator on this project. We look forward to having you on board!

If you need professional 1-on-1 guidance on your way to top-tier conferences and journals, feel free to contact us via WeChat or E-mail.

⚡ Latest Updates
  • (Sep 17th, 2025): Initial release of the repository.

📚 Table of Contents


✍️ Survey Papers

📜 Papers & Models

✨ 2026

✅ Published Papers

  • [CVPR 2026] LLaDA-V: Large Language Diffusion Models with Visual Instruction Tuning
    Paper Project Page GitHub Hugging Face

  • [CVPR 2026] LLaDA-MedV: Exploring Large Language Diffusion Models for Biomedical Image Understanding
    Paper Project Page GitHub Hugging Face

  • [CVPR 2026] Sparse-LaViDa: Sparse Multimodal Discrete Diffusion Language Models
    Paper Project Page GitHub Hugging Face

  • [CVPR 2026] dMLLM-TTS: Self-Verified and Efficient Test-Time Scaling for Diffusion Multi-Modal Large Language Models
    Paper Project Page GitHub Hugging Face

  • [ICLR 2026] Any-Order Flexible Length Masked Diffusion
    Paper Project Page GitHub

  • [ICLR 2026] Attention Is All You Need for KV Cache in Diffusion LLMs
    Paper Project Page

  • [ICLR 2026] Beyond Fixed: Training-Free Variable-Length Denoising for Diffusion Large Language Models
    Paper GitHub

  • [ICLR 2026] DPad: Efficient Diffusion Language Models with Suffix Dropout
    Paper GitHub

  • [ICLR 2026] DiffuCoder: Understanding and Improving Masked Diffusion Models for Code Generation
    Paper GitHub Hugging Face

  • [ICLR 2026] Diffusion LLMs Can Do Faster-Than-AR Inference via Discrete Diffusion Forcing
    Paper Project Page GitHub Hugging Face

  • [ICLR 2026] Diffusion Language Model Knows the Answer Before It Decodes
    Paper GitHub

  • [ICLR 2026] Diffusion Language Models For Code Infilling Beyond Fixed-size Canvas
    Paper Project Page GitHub Hugging Face

  • [ICLR 2026] Fast-dLLM v2: Efficient Block-Diffusion LLM
    Paper Project Page

  • [ICLR 2026] Fast-dLLM: Training-free Acceleration of Diffusion LLM by Enabling KV Cache and Parallel Decoding
    Paper Project Page GitHub

  • [ICLR 2026] FlashDLM: Accelerating Diffusion Language Model Inference via Efficient KV Caching and Guided Diffusion
    Paper

  • [ICLR 2026] Improving Reasoning for Diffusion Language Models via Group Diffusion Policy Optimization
    Paper

  • [ICLR 2026] Inpainting-Guided Policy Optimization for Diffusion Large Language Models
    Paper

  • [ICLR 2026] Learning to Parallel: Accelerating Diffusion Large Language Models via Adaptive Parallel Decoding
    Paper Project Page GitHub

  • [ICLR 2026] On the Reasoning Abilities of Masked Diffusion Language Models
    Paper GitHub

  • [ICLR 2026] Quant-dLLM: Post-Training Extreme Low-Bit Quantization for Diffusion Large Language Models
    Paper GitHub

  • [ICLR 2026] Revolutionizing Reinforcement Learning Framework for Diffusion Large Language Models
    Paper GitHub Hugging Face

  • [ICLR 2026] SPG: Sandwiched Policy Gradient for Masked Diffusion Language Models
    Paper

  • [ICLR 2026] Soft-Masked Diffusion Language Models
    Paper GitHub Hugging Face

  • [ICLR 2026] SparseD: Sparse Attention for Diffusion Language Models
    Paper

  • [ICLR 2026] Time Is a Feature: Exploiting Temporal Dynamics in Diffusion Language Models
    Paper Project Page

  • [ICLR 2026] UltraLLaDA: Scaling the Context Length to 128K for Diffusion Large Language Models
    Paper

  • [ICLR 2026] Unveiling the Potential of Diffusion Large Language Model in Controllable Generation
    Paper Project Page

  • [ICLR 2026] dParallel: Learnable Parallel Decoding for dLLMs
    Paper GitHub

  • [ICLR 2026] wd1: Weighted Policy Optimization for Reasoning in Diffusion Language Models
    Paper GitHub

  • [ICLR 2026] A2D: Any-Order, Any-Step Safety Alignment for Diffusion Language Models
    Paper

  • [ICLR 2026] Accelerating Diffusion Large Language Models with SlowFast Sampling: The Three Golden Principles
    Paper GitHub

  • [ICLR 2026] Beyond Masks: Efficient, Flexible Diffusion Language Models via Deletion-Insertion Processes
    Paper

  • [ICLR 2026] Beyond Scattered Acceptance: Fast and Coherent Inference for DLMs via Longest Stable Prefixes
    Paper

  • [ICLR 2026] Diffusion Language Models are Provably Optimal Parallel Samplers
    Paper

  • [ICLR 2026] Don't Settle Too Early: Self-Reflective Remasking for Diffusion Language Models
    Paper

  • [ICLR 2026] Dynamic-dLLM: Dynamic Cache-Budget and Adaptive Parallel Decoding for Training-Free Acceleration of Diffusion LLM
    Paper GitHub

  • [ICLR 2026] ES-dLLM: Efficient Inference for Diffusion Large Language Models by Early-Skipping
    Paper GitHub

  • [ICLR 2026] FS-DFM: Fast and Accurate Long Text Generation with Few-Step Diffusion Language Models
    Paper GitHub

  • [ICLR 2026] Hierarchy Decoding: A Training-free Parallel Decoding Strategy for Diffusion Large Language Models
    Paper

  • [ICLR 2026] Membership Inference Attacks Against Fine-tuned Diffusion Language Models
    Paper GitHub

  • [ICLR 2026] Parallel Multimodal Diffusion Language Models for Thinking-Aware Editing and Generation
    Paper Project Page GitHub Hugging Face

  • [ICLR 2026] Planner Aware Path Learning in Diffusion Language Models Training
    Paper GitHub

  • [ICLR 2026] Rainbow Padding: Mitigating Early Termination in Instruction-Tuned Diffusion LLMs
    Paper Project Page GitHub Hugging Face

  • [ICLR 2026] ReFusion: A Diffusion Large Language Model with Parallel Autoregressive Decoding
    Paper GitHub

  • [ICLR 2026] Revokable Decoding for Efficient and Effective DLLMs
    Paper

  • [ICLR 2026] Scaling Behavior of Discrete Diffusion Language Models
    Paper GitHub

  • [ICLR 2026] Semantic-Aware Diffusion LLM Inference With Adaptive Block Size
    Paper GitHub

  • [ICLR 2026] Stopping Computation for Converged Tokens in Masked Diffusion-LM Decoding
    Paper Project Page

  • [ICLR 2026] Test-Time Scaling in Diffusion LLMs via Hidden Semi-Autoregressive Experts
    Paper Project Page GitHub

  • [ICLR 2026] TRACEDET: Hallucination Detection from the Decoding Trace of Diffusion Large Language Models
    Paper GitHub

  • [ICLR 2026] Toward Safer Diffusion Language Models: Discovery and Mitigation of Priming Vulnerability
    Paper GitHub

  • [ICLR 2026] Watermarking Diffusion Language Models
    Paper GitHub

  • [ICLR 2026] What Exactly Does Guidance Do in Masked Discrete Diffusion Models
    Paper

💡 Pre-Print Papers

✨ 2025

✅ Published Papers

💡 Pre-Print Papers

⇧ Back to ToC


🎓 About Us

QuenithAI is a professional organization composed of top researchers, dedicated to providing high-quality 1-on-1 research mentoring for university students worldwide. Our mission is to help students bridge the gap from theoretical knowledge to cutting-edge research and publish their work in top-tier conferences and journals.

Maintaining this Awesome Diffusion Large Language Models list requires significant effort, just as completing a high-quality paper requires focused dedication and expert guidance. If you're looking for one-on-one support from top scholars on your own research project, to quickly identify innovative ideas and publish your work, we invite you to contact us.

➡️ Contact us via WeChat or E-mail to start your research journey.


QuenithAI (应达学术) is a professional organization composed of top researchers, dedicated to providing high-quality 1-on-1 research mentoring for university students worldwide. Our mission is to help students develop outstanding research skills and publish their work in top-tier conferences and journals.

Maintaining a GitHub survey repository takes enormous effort, just as completing a high-quality paper requires focused dedication and expert guidance. If you would like one-on-one support from top scholars on your own research project, we sincerely invite you to get in touch.

➡️ Contact us via WeChat or E-mail to start your research journey.

⇧ Back to ToC


🤝 Contributing

Contributions are welcome! Please see our Contribution Guidelines for details on how to add new papers, correct information, or improve the repository.
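As a reference, new paper entries in this list generally follow the existing format: venue tag, title, then a line of resource links. The entry below is a placeholder illustrating the layout, not a real paper:

```markdown
  • [ICLR 2026] Your-Method: A Short Descriptive Title
    Paper Project Page GitHub Hugging Face
```

Only include the resource links that actually exist for the paper, and keep entries grouped under the matching year and venue sections.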


💬 Join the Community

Join our community to stay up-to-date with the latest advancements, share your work, and collaborate with other researchers and developers in the field of video generation, diffusion large language models, and more!

If you are interested, please contact our administrator to join the group.
