rickchen147258/DepthwiseConvolution
Depthwise Convolutional Layer

Introduction

This is a personal Caffe implementation of the MobileNet depthwise convolution layer (GPU only). For details, please read the original MobileNets paper.

How to build

  1. Merge the caffe folder in this repo into your own Caffe tree: $ cp -r $REPO/caffe/* $YOURCAFFE/
  2. Then build: $ cd $YOURCAFFE && make

Usage

All you need to do is change the type of each depthwise convolution layer to "DepthwiseConvolution". Please refer to example/Withdw_MN_train_128_1_train.prototxt, which is altered from a MobileNet training prototxt; a minimal layer sketch follows below.
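For illustration, here is a minimal sketch of what such a layer could look like in a prototxt (the layer and blob names are made up for this example, not taken from the example file); only the type field differs from a standard grouped Convolution layer:

    layer {
      name: "conv_dw"                # illustrative name
      type: "DepthwiseConvolution"   # was: "Convolution"
      bottom: "conv1"                # assumed 32-channel input blob
      top: "conv_dw"
      convolution_param {
        num_output: 32
        kernel_size: 3
        stride: 1
        pad: 1
        group: 32    # group == number of input channels -> depthwise
        bias_term: false
      }
    }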

GPU performance on the example net

    Pass                 Origin [1]   Mine
    forward, batch 1       41 ms      8 ms
    backward, batch 1      51 ms     11 ms
    forward, batch 16     532 ms     36 ms
    backward, batch 16    695 ms     96 ms

2018/02/08 Update

  1. Added support for depth_multiplier. For example, you can now set input channels = 32, group = 32, and output channels = 64, which corresponds to depth_multiplier = 2; see the sketch below.
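A hedged sketch of a depth_multiplier = 2 layer, using the channel counts from the example above (layer and blob names are illustrative):

    layer {
      name: "conv_dw_x2"             # illustrative name
      type: "DepthwiseConvolution"
      bottom: "pool1"                # assumed 32-channel input blob
      top: "conv_dw_x2"
      convolution_param {
        num_output: 64   # 64 outputs / 32 groups -> depth_multiplier = 2
        kernel_size: 3
        stride: 1
        pad: 1
        group: 32        # equal to the number of input channels
      }
    }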

Transferring a normal net to MobileNet

I wrote a script, transfer2Mobilenet.py, to convert a normal net into MobileNet form. You may want to try it:

    python ./transfer2Mobilenet.py sourceprototxt targetprototxt [--midbn nobn --weight_filler msra --activation ReLU]

Passing --origin_type makes the generated depthwise convolution layers keep the type "Convolution" instead of "DepthwiseConvolution".

The "transferTypeToDepthwiseConvolution.py" will be used for changing the depthwise convolution layer's type from "Convolution" to "DepthwiseConvolution".

Footnotes

  1. When cuDNN is turned on, the memory consumption of MobileNet increases to an unbelievable level. You may try it yourself.
