Thanks to Stanford University for providing all the course resources online.
The course website: http://cs231n.stanford.edu/
Refer to the wiki page of this repo for tooling and setup: https://github.com/sakshikakde/CS231n-Convolutional-Neural-Networks-for-Visual-Recognition-Assignments/wiki/Tools
- Compute distance using 2 loops
- Compute distance using 1 loop
- Compute distance using no loops (see the vectorized sketch after this list)
- Choose the best value of k
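A minimal NumPy sketch of the fully vectorized distance computation, using the expansion ||x - y||^2 = ||x||^2 - 2 x.y + ||y||^2. The function name follows the assignment's k-NN classifier, but treat this as a sketch rather than the submitted solution:

```python
import numpy as np

def compute_distances_no_loops(X, X_train):
    """Fully vectorized L2 distances via
    ||x - y||^2 = ||x||^2 - 2 x.y + ||y||^2."""
    test_sq = np.sum(X ** 2, axis=1, keepdims=True)   # (num_test, 1)
    train_sq = np.sum(X_train ** 2, axis=1)           # (num_train,)
    cross = X @ X_train.T                             # (num_test, num_train)
    # Clamp at 0 before the sqrt to guard against tiny negative round-off.
    return np.sqrt(np.maximum(test_sq - 2 * cross + train_sq, 0))
```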
- Compute SVM loss : naive way
- Compute SVM loss : vectorized way (sketch below)
- Implement SGD
- Tune regularization strength and learning rate
- Visualize the learned weights for each class
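A hedged sketch of the vectorized hinge loss and gradient; it assumes the assignment's conventions (W of shape (D, C), margin delta = 1, L2 regularization reg * ||W||^2):

```python
import numpy as np

def svm_loss_vectorized(W, X, y, reg):
    """Multiclass SVM (hinge) loss and gradient with no explicit loops.
    W: (D, C), X: (N, D), y: (N,) integer labels."""
    N = X.shape[0]
    scores = X @ W                                   # (N, C)
    correct = scores[np.arange(N), y][:, None]       # (N, 1)
    margins = np.maximum(0, scores - correct + 1.0)  # delta = 1
    margins[np.arange(N), y] = 0
    loss = margins.sum() / N + reg * np.sum(W * W)

    # Each positive margin contributes +x to its class column
    # and -x to the correct-class column.
    mask = (margins > 0).astype(float)
    mask[np.arange(N), y] = -mask.sum(axis=1)
    dW = X.T @ mask / N + 2 * reg * W
    return loss, dW
```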
- Compute softmax loss : naive way
- Compute softmax loss : vectorized way (sketch below)
- Compute gradient
- Tune regularization strength and learning rate
- Visualize the learned weights for each class
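The vectorized softmax loss follows the same pattern; a sketch assuming the same shapes, with a stability shift before exponentiating:

```python
import numpy as np

def softmax_loss_vectorized(W, X, y, reg):
    """Softmax cross-entropy loss and gradient, vectorized."""
    N = X.shape[0]
    scores = X @ W
    scores -= scores.max(axis=1, keepdims=True)      # numerical stability
    probs = np.exp(scores)
    probs /= probs.sum(axis=1, keepdims=True)        # (N, C)
    loss = -np.log(probs[np.arange(N), y]).mean() + reg * np.sum(W * W)

    # dL/dscores = probs - one_hot(y), averaged over the batch.
    dscores = probs.copy()
    dscores[np.arange(N), y] -= 1
    dW = X.T @ dscores / N + 2 * reg * W
    return loss, dW
```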
- Implement forward pass using the weights and biases (sketch below)
- Compute loss
- Implement backward pass
- Implement train function using SGD
- Implement predict function
- Tune hidden layer dimension, regularization strength and learning rate
- Visualize the learned weights for each class
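For reference, a minimal sketch of the two-layer network's forward pass (affine -> ReLU -> affine), with illustrative parameter names:

```python
import numpy as np

def two_layer_forward(X, W1, b1, W2, b2):
    """Forward pass of an affine -> ReLU -> affine network.
    Returns class scores and the hidden activations (useful for backprop)."""
    hidden = np.maximum(0, X @ W1 + b1)   # ReLU hidden layer
    scores = hidden @ W2 + b2
    return scores, hidden
```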
- Implement affine layer: forward and backward
- Implement ReLU activation: forward and backward
- Sandwich layer (Affine + ReLU): forward and backward
- Loss layers: Softmax and SVM
- Two-layer network to get at least 50% accuracy
- Fully-connected network with an arbitrary number of hidden layers.
- Implement fancy update rules: SGD+Momentum, RMSProp and Adam (Adam sketch below)
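As one example of the update rules, a sketch of a single Adam step with bias correction; the config-dict keys (learning_rate, beta1, beta2, epsilon, m, v, t) are assumed to mirror the assignment's optim API:

```python
import numpy as np

def adam_update(w, dw, config):
    """One Adam step: exponential moving averages of the gradient (m)
    and squared gradient (v), with bias-corrected estimates."""
    config['t'] += 1
    config['m'] = config['beta1'] * config['m'] + (1 - config['beta1']) * dw
    config['v'] = config['beta2'] * config['v'] + (1 - config['beta2']) * dw ** 2
    m_hat = config['m'] / (1 - config['beta1'] ** config['t'])
    v_hat = config['v'] / (1 - config['beta2'] ** config['t'])
    w -= config['learning_rate'] * m_hat / (np.sqrt(v_hat) + config['epsilon'])
    return w, config
```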
- Implement Batch Normalization: forward and backward (sketch below)
- Fully Connected Nets with batch normalization
- Relation between batch normalization and weight initialization
- Relation between batch normalization and batch size
- Implement layer normalization: forward and backward
- Relation between layer normalization and batch size
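A sketch of the batch-normalization forward pass in training mode. Layer normalization is the same computation with the normalization axis switched from the batch axis to the feature axis, which is why it is insensitive to batch size:

```python
import numpy as np

def batchnorm_forward_train(x, gamma, beta, eps=1e-5):
    """Batch norm, training mode: normalize each feature over the batch,
    then scale and shift with learnable gamma and beta."""
    mu = x.mean(axis=0)                     # per-feature mean over the batch
    var = x.var(axis=0)                     # per-feature variance
    x_hat = (x - mu) / np.sqrt(var + eps)
    out = gamma * x_hat + beta
    cache = (x_hat, gamma, var, eps)        # what the backward pass would need
    return out, cache
```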
- Implement Dropout: forward and backward (sketch below)
- Fully-connected nets with Dropout
- Comparison of output with and without dropout
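A sketch of inverted dropout, which scales activations at train time so the forward pass at test time is an identity. Here p is taken as the keep probability, an assumption that varies between write-ups:

```python
import numpy as np

def dropout_forward(x, p, mode, rng=np.random):
    """Inverted dropout forward pass. p is the keep probability (assumed)."""
    if mode == 'train':
        mask = (rng.rand(*x.shape) < p) / p   # scale kept units by 1/p
        return x * mask, mask
    return x, None                            # test mode: identity
```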
- Implement naive convolution: forward and backward (forward-pass sketch below)
- Implement naive max pooling: forward and backward
- Pre-implemented sandwich layers
- Implement a three-layer ConvNet: conv - relu - 2x2 max pool - affine - relu - affine - softmax
- Visualize filters (learned kernels)
- Implement spatial batch normalization: forward and backward
- Implement spatial group normalization: forward and backward
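A sketch of the naive convolution forward pass with explicit loops; shapes follow the assignment's (N, C, H, W) convention, and the output size uses the standard stride/pad formula:

```python
import numpy as np

def conv_forward_naive(x, w, b, stride, pad):
    """Naive convolution forward pass.
    x: (N, C, H, W), w: (F, C, HH, WW), b: (F,)."""
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    H_out = 1 + (H + 2 * pad - HH) // stride
    W_out = 1 + (W + 2 * pad - WW) // stride
    xp = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)))  # zero-pad H and W
    out = np.zeros((N, F, H_out, W_out))
    for n in range(N):                  # over images
        for f in range(F):              # over filters
            for i in range(H_out):      # over output rows
                for j in range(W_out):  # over output cols
                    patch = xp[n, :, i*stride:i*stride+HH, j*stride:j*stride+WW]
                    out[n, f, i, j] = np.sum(patch * w[f]) + b[f]
    return out
```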
- Pytorch basic tutorial by Justin Johnson: https://github.com/jcjohnson/pytorch-examples
- Barebones PyTorch: Abstraction level 1
- PyTorch Module API: Abstraction level 2 using nn.Module
- PyTorch Sequential API: Abstraction level 3 using nn.Sequential
- CIFAR-10 open-ended challenge:
  My model: (conv -> spatial batchnorm -> ReLU -> dropout) x 3 -> max pooling -> (affine -> batchnorm -> ReLU) x 2 -> affine -> scores, trained with Nesterov momentum (see the PyTorch sketch below)
  Training accuracy: 99%, validation accuracy: 73.2%, test accuracy: 73.5%
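A hedged PyTorch sketch of the architecture described above: only the layer ordering follows the description, while channel counts, hidden sizes, and the dropout rate are illustrative guesses.

```python
import torch
import torch.nn as nn

# CIFAR-10 input is (3, 32, 32). Channel counts, hidden sizes, and the
# dropout rate below are illustrative, not the trained model's values.
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(), nn.Dropout(0.1),
    nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(), nn.Dropout(0.1),
    nn.Conv2d(64, 128, 3, padding=1), nn.BatchNorm2d(128), nn.ReLU(), nn.Dropout(0.1),
    nn.MaxPool2d(2),                                    # 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(128 * 16 * 16, 256), nn.BatchNorm1d(256), nn.ReLU(),
    nn.Linear(256, 128), nn.BatchNorm1d(128), nn.ReLU(),
    nn.Linear(128, 10),                                 # class scores
)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2,
                            momentum=0.9, nesterov=True)
```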
- Download and load the Microsoft COCO dataset
- Vanilla RNN: step forward, step backward (step-forward sketch below)
- Vanilla RNN: forward, backward
- Word embedding: forward, backward
- Temporal Affine layer, Temporal Softmax loss
- Implement forward and backward pass for the model
- Check model
- Overfit RNN captioning model
- RNN test-time sampling
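A sketch of a single vanilla RNN timestep, h_t = tanh(x_t Wx + h_{t-1} Wh + b):

```python
import numpy as np

def rnn_step_forward(x, prev_h, Wx, Wh, b):
    """One vanilla RNN timestep.
    x: (N, D), prev_h: (N, H), Wx: (D, H), Wh: (H, H), b: (H,)."""
    next_h = np.tanh(x @ Wx + prev_h @ Wh + b)
    cache = (x, prev_h, Wx, Wh, next_h)   # saved for the backward pass
    return next_h, cache
```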
- Download and load the Microsoft COCO dataset
- LSTM: step forward, step backward (step-forward sketch below)
- LSTM: forward, backward
- Check model
- Overfit LSTM captioning model
- LSTM test-time sampling
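A sketch of a single LSTM timestep; the pre-activation vector is split into input, forget, output, and candidate-gate blocks, following the assignment's layout:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def lstm_step_forward(x, prev_h, prev_c, Wx, Wh, b):
    """One LSTM timestep. Wx: (D, 4H), Wh: (H, 4H), b: (4H,)."""
    H = prev_h.shape[1]
    a = x @ Wx + prev_h @ Wh + b          # (N, 4H) pre-activations
    i = sigmoid(a[:, :H])                 # input gate
    f = sigmoid(a[:, H:2*H])              # forget gate
    o = sigmoid(a[:, 2*H:3*H])            # output gate
    g = np.tanh(a[:, 3*H:])               # candidate cell update
    next_c = f * prev_c + i * g
    next_h = o * np.tanh(next_c)
    return next_h, next_c
```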
- Saliency Maps (sketch below)
- Fooling Images
- Class visualization: review
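A hedged PyTorch sketch of the saliency-map computation: take the gradient of the correct-class score with respect to the input and reduce over channels with a max of absolute values:

```python
import torch

def compute_saliency_maps(X, y, model):
    """Saliency maps: |d score_correct / d input|, max over channels.
    X: (N, 3, H, W) images, y: (N,) labels, model: a trained classifier."""
    model.eval()
    X = X.clone().requires_grad_(True)
    scores = model(X)                                  # (N, C) class scores
    correct = scores.gather(1, y.view(-1, 1)).sum()    # sum of correct-class scores
    correct.backward()                                 # gradients w.r.t. X
    saliency, _ = X.grad.abs().max(dim=1)              # (N, H, W)
    return saliency
```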