Transformer-Vis

A Transformer visualization system for visual analysis on Attention Mechanism.

Usage

Before running the code, install the requirements listed in requirements.txt:

pip install -r requirements.txt

Train your model

First, unzip the embedding file in the ./train/embeddings folder (or use your own embeddings).

unzip ./train/embeddings/google.zip

Then run the following command:

cd train
python san.py -emb embeddings/google.txt

The model will be stored in the ./train/model folder. Alternatively, you can download our pretrained model from Google Drive.

The code is modified from SSAN-self-attention-sentiment-analysis-classification. To switch between self-attention architectures, refer to that repository.

Set up visualization tool

Put the model from the previous step into ./web/static/model/.

Then run the following commands to start the Django server:

cd web
python manage.py runserver

Now you can use Transformer-Vis at http://127.0.0.1:8000/.

Some Charts from Transformer-Vis

We use D3 to build the visualizations.

All visualization codes can be found at https://observablehq.com/@wmx567?tab=notebooks

Max matrix

The max matrix is used to detect outliers. It helps users find the words or dimensions that contribute most in the model.
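As a rough illustration of the idea (not the repository's actual code), taking the per-row maximum of an attention-weight matrix surfaces the single strongest contributor for each word; the matrix values below are made up:

```python
import numpy as np

# Hypothetical attention-weight matrix:
# rows = query words, columns = key words.
attention = np.array([
    [0.70, 0.20, 0.10],
    [0.15, 0.15, 0.70],
    [0.05, 0.90, 0.05],
])

# Per-row maximum: the strongest contribution each word receives,
# and the index of the word that provides it.
max_values = attention.max(axis=1)
max_indices = attention.argmax(axis=1)

print(max_values)   # [0.7 0.7 0.9]
print(max_indices)  # [0 2 1]
```

Unusually large maxima (like 0.9 above) are the outlier points the chart is designed to highlight.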

Comparison matrix

The comparison matrix shows the softmax values, which indicate how much each word contributes to a given word vector.

Try it to find more interesting charts! 🥳🥳
