Pull requests: Dao-AILab/flash-attention

Open pull requests
feat: add support for pytorch2.8 (#1801, opened Aug 8, 2025 by NanoCode012)
[skip_ci] ABI stable fa3 (#1791, opened Jul 31, 2025 by mikaylagawarecki, draft)
feat: blocksparse support (#1784, opened Jul 30, 2025 by guangyunh-nv, draft)
[CI] build upon manylinux, improve compatibility (#1780, opened Jul 29, 2025 by zipzou)
Fixes incorrect variable reference in comment (#1775, opened Jul 25, 2025 by LoserCheems)
Change the update method of the sub-module (#1774, opened Jul 25, 2025 by RealTapeL)
add var_len case for benchmark_mla_decode (#1770, opened Jul 22, 2025 by XiaobingSuper)
[AMD] Torch Compile Issues (#1756, opened Jul 15, 2025 by micmelesse)
Suppress warnings in windows compilation (#1748, opened Jul 10, 2025 by XXXXRT666)
Theoretically make compiling from pip quicker (#1703, opened Jun 8, 2025 by whrit)
fix: fa3 backward check qkv with qkv_scale and dqkv (#1686, opened May 29, 2025 by yuyu5333)
Fix/deterministic dk dv (#1678, opened May 26, 2025 by yuWeiCute)
Fix a bug in flash_attn_triton.py (#1668, opened May 15, 2025 by AminDarabi)
Fix typos in multiple files (#1655, opened May 8, 2025 by co63oc)