ViDT models trained for 50 and 150 epochs

@songhwanjun released this 05 Nov 09:05
· 9 commits to main since this release
12b1593

This release provides ViDT models pre-trained for 50 and 150 epochs across several model sizes (from nano to base).
All models were trained with auxiliary decoding loss and iterative box refinement enabled.