- Propose an intuitive and effective deep learning framework for blur kernel estimation in single image super resolution
- Propose a new non-blind SR network using the spatial feature transform layers for multiple blur kernels
- Test blind SR performance on both carefully selected blur kernels and real images: shows SOTA performance on the blind SR problem
- Assume that the degradation kernels are unavailable
- Formulated as follows: $I^{LR} = (k \otimes I^{HR}) \downarrow_s + n$
- $I^{HR}$ = HR image, $I^{LR}$ = LR image related by the degradation model, $\otimes$ = convolution operation
- $k$ = blur kernel, $\downarrow_s$ = downsampling operation with scale factor $s$, $n$ = additive noise
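The degradation model above can be sketched in plain numpy. This is a minimal illustration, not the repo's data pipeline: the Gaussian kernel width, scale factor, and function names below are illustrative assumptions, and the blur is computed as a correlation (equal to convolution for a symmetric Gaussian).

```python
import numpy as np

def gaussian_kernel(size=7, sigma=1.5):
    """Isotropic Gaussian blur kernel k, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def degrade(hr, kernel, scale=4, noise_std=0.0, rng=None):
    """I_LR = (k ⊗ I_HR) ↓s + n, for a single-channel image."""
    pad = kernel.shape[0] // 2
    padded = np.pad(hr, pad, mode="edge")
    blurred = np.zeros_like(hr, dtype=float)
    kh, kw = kernel.shape
    for i in range(hr.shape[0]):          # k ⊗ I_HR (sliding window)
        for j in range(hr.shape[1]):
            blurred[i, j] = np.sum(padded[i:i+kh, j:j+kw] * kernel)
    lr = blurred[::scale, ::scale]        # ↓s: direct downsampling
    if noise_std > 0:                     # + n: additive Gaussian noise
        rng = rng or np.random.default_rng(0)
        lr = lr + rng.normal(0.0, noise_std, lr.shape)
    return lr
```

Because the kernel sums to 1, a constant image degrades to the same constant, which makes the model easy to sanity-check.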
- Most SR methods assume that the downsampling blur kernel is known and pre-defined, but the blur kernels involved in real applications are typically complicated and unavailable
- Kernel mismatch brings regular artifacts (either over-sharpening or over-smoothing), which can be exploited to correct the inaccurate blur kernel
- If the input kernel is smoother than the correct one, the results will be over-smoothed; conversely, if it is sharper, the results will be over-sharpened with obvious ringing effects, as shown in the figure below
- The proposed Iterative Kernel Correction (IKC) framework consists of an SR model $F$, a predictor $P$ and a corrector $C$
SFTMD network
- An SR network architecture that uses spatial feature transform (SFT) layers to handle multiple blur kernels and alleviate the kernel mismatch problem
- Architecture of SFTMD
SFT layer
- Applies an affine transformation to the feature maps $F$, conditioned on the kernel maps $H$, by a scaling and shifting operation: $SFT(F, H) = \gamma \odot F + \beta$
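The scale-and-shift operation can be sketched in a few lines of numpy. In SFTMD, $\gamma$ and $\beta$ are produced by small convolutional networks on the stretched kernel maps $H$; in this sketch, hypothetical per-channel linear maps (`Wg`, `bg`, `Wb`, `bb`) stand in for those networks.

```python
import numpy as np

def sft(F, H, Wg, bg, Wb, bb):
    """SFT(F, H) = gamma ⊙ F + beta.

    F: (C, h, w) feature maps; H: (K, h, w) kernel maps.
    Wg, Wb: (C, K) weights producing the scale gamma and shift beta
    (stand-ins for the small conv nets used in SFTMD)."""
    gamma = np.tensordot(Wg, H, axes=1) + bg[:, None, None]  # (C, h, w)
    beta = np.tensordot(Wb, H, axes=1) + bb[:, None, None]   # (C, h, w)
    return gamma * F + beta  # elementwise scale-and-shift conditioning
```

Setting the weights so that $\gamma = 1$ and $\beta = 0$ recovers the identity, which is a quick way to verify the conditioning path.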
Predictor/Corrector
- Predictor: a predictor function $k = P(I^{LR})$ that estimates $k$ directly from the LR input
- Corrector: to correctly estimate $k$, a corrector function $C$ measures the difference between the estimated kernel and the ground-truth kernel
- Architecture of Predictor/Corrector
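The interplay of $F$, $P$ and $C$ can be sketched as a toy loop. All names here are hypothetical stubs, not the trained networks: the stub corrector reads the true kernel directly only to emulate the learned behaviour of inferring a correction from artifacts in the SR result.

```python
import numpy as np

def predictor(lr):
    """P: initial kernel estimate from the LR image (stub)."""
    return np.zeros(2)

def sr_model(lr, k):
    """F: SR conditioned on the kernel via SFT layers (identity stub)."""
    return lr

def corrector(sr, k, k_true):
    """C: correction Δk toward the correct kernel (stub steps halfway)."""
    return 0.5 * (k_true - k)

def ikc(lr, k_true, iters=8):
    """Iterative Kernel Correction: alternate SR and kernel correction."""
    k = predictor(lr)                     # k = P(I_LR)
    for _ in range(iters):
        sr = sr_model(lr, k)              # SR with the current kernel
        k = k + corrector(sr, k, k_true)  # k ← k + Δk
    return sr, k
```

The point of the loop is that even a rough initial estimate from $P$ converges once $C$ repeatedly nudges it using the SR output.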
- Run test codes for the SFTMD & IKC methods from the official repository
```shell
docker pull qbxlvnf11docker/ikc_env:latest
nvidia-docker run --name IKC_env --gpus all -it -p 8888:8888 -e GRANT_SUDO=yes --user root -v {reposit_root_folder}:/workspace/IKC -v {data_folder}:/workspace/data -w /workspace/IKC pytorch/pytorch bash
```
- Default scale (up_scale, mod_scale): 4
- Source data path list: set the 'sourcedir_list' variable in the script
- Generated data path list: set the 'savedir_list' variable in the script
- Run the following command in the root folder

```shell
python codes/scripts/generate_mod_LR_bic.py
```
- dataset_name in command: 'set5', 'set14', 'bsd100', 'urban100'
- dataroot_GT in config file: 'LRblur', 'Bic', ...
```shell
python codes/test_SFTMD.py -opt_F codes/options/test/test_SFTMD_{dataset_name}.yml
```
- dataset_name in command: 'set5', 'set14', 'bsd100', 'urban100'
- dataroot_GT in config file: 'LRblur', 'Bic', ...
```shell
python codes/test_IKC.py -opt_F codes/options/test/test_SFTMD_{dataset_name}.yml -opt_P codes/options/test/test_Predictor_{dataset_name}.yml -opt_C codes/options/test/test_Corrector_{dataset_name}.yml
```
@inproceedings{IKC,
  title={Blind Super-Resolution With Iterative Kernel Correction},
  author={Gu, Jinjin and Lu, Hannan and Zuo, Wangmeng and Dong, Chao},
  booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2019}
}
https://github.com/yuanjunchai/IKC
https://github.com/Lornatang/SFTMD-PyTorch