
# ResNeSt

PyTorch implementation of ResNeSt: Split-Attention Networks [1].

This implementation exists only to aid my own understanding of the ResNeSt architecture, chiefly the radix-major implementation of the bottleneck block (a sketch of the split-attention operation follows below).
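
As a rough illustration, here is a minimal PyTorch sketch of the split-attention convolution that the bottleneck block is built around. The names `SplAtConv2d` and `rSoftMax` mirror the official reference implementation, but the hyperparameter defaults (`radix=2`, `cardinality=1`, `reduction_factor=4`) and the exact channel layout here are my assumptions for illustration, not this repo's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class rSoftMax(nn.Module):
    """Softmax over the radix dimension within each cardinal group
    (falls back to sigmoid when radix == 1, as in the paper)."""

    def __init__(self, radix, cardinality):
        super().__init__()
        self.radix = radix
        self.cardinality = cardinality

    def forward(self, x):
        batch = x.size(0)
        if self.radix > 1:
            # (B, channels * radix, 1, 1) -> (B, radix, cardinality, -1)
            x = x.view(batch, self.cardinality, self.radix, -1).transpose(1, 2)
            x = F.softmax(x, dim=1)  # normalize across the radix splits
            x = x.reshape(batch, -1)
        else:
            x = torch.sigmoid(x)
        return x


class SplAtConv2d(nn.Module):
    """Sketch of a split-attention convolution (assumed layout, dilation=1)."""

    def __init__(self, in_channels, channels, kernel_size=3, radix=2,
                 cardinality=1, reduction_factor=4):
        super().__init__()
        inter_channels = max(channels * radix // reduction_factor, 32)
        self.radix = radix
        self.channels = channels
        # One grouped conv produces all radix * cardinality splits at once.
        self.conv = nn.Conv2d(in_channels, channels * radix, kernel_size,
                              padding=kernel_size // 2,
                              groups=cardinality * radix, bias=False)
        self.bn0 = nn.BatchNorm2d(channels * radix)
        self.fc1 = nn.Conv2d(channels, inter_channels, 1, groups=cardinality)
        self.bn1 = nn.BatchNorm2d(inter_channels)
        self.fc2 = nn.Conv2d(inter_channels, channels * radix, 1,
                             groups=cardinality)
        self.rsoftmax = rSoftMax(radix, cardinality)

    def forward(self, x):
        x = F.relu(self.bn0(self.conv(x)))
        batch = x.size(0)
        if self.radix > 1:
            splits = torch.split(x, self.channels, dim=1)  # radix chunks
            gap = sum(splits)  # fuse splits before global pooling
        else:
            gap = x
        gap = F.adaptive_avg_pool2d(gap, 1)  # global context, (B, C, 1, 1)
        gap = F.relu(self.bn1(self.fc1(gap)))
        atten = self.rsoftmax(self.fc2(gap)).view(batch, -1, 1, 1)
        if self.radix > 1:
            attens = torch.split(atten, self.channels, dim=1)
            out = sum(a * s for a, s in zip(attens, splits))
        else:
            out = atten * x
        return out.contiguous()
```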

[The official implementation](https://github.com/zhanghang1989/ResNeSt)

## Requirements

- docker
- docker-compose

## Model

- Only supports `dilation=1` (see the sanity check below).
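
As a quick sanity check of the sketch above (shapes are illustrative, not this repo's API): with `dilation=1` and `padding = kernel_size // 2`, the block preserves spatial size.

```python
block = SplAtConv2d(in_channels=64, channels=64, radix=2, cardinality=1)
x = torch.randn(2, 64, 56, 56)  # (batch, channels, height, width)
out = block(x)
print(out.shape)  # torch.Size([2, 64, 56, 56])
```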

## ToDo

- Evaluate the model

## Reference

[1] ResNeSt: Split-Attention Networks, Hang Zhang, Chongruo Wu, Zhongyue Zhang, Yi Zhu, Zhi Zhang, Haibin Lin, Yue Sun, Tong He, Jonas Mueller, R. Manmatha, Mu Li, Alexander Smola, https://arxiv.org/abs/2004.08955

## Author

Sawada Tomoya