The official implementation of the paper "MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression" (Python, updated Sep 29, 2024)
Demo code for the CVPR 2023 paper "Sparsifiner: Learning Sparse Instance-Dependent Attention for Efficient Vision Transformers"
PyTorch implementation of "Activating More Pixels Sparsely: A Structural Similarity-Inspired Unrolling Framework for Lightweight Image Super-Resolution"
Text summarization modeling with three different attention types