fast slope based extrapolation for convolutions
nhauber99 committed Aug 26, 2023
1 parent 3def704 commit 8a1faaf
Showing 9 changed files with 151 additions and 14 deletions.
28 changes: 14 additions & 14 deletions docs/Gemfile.lock
@@ -4,25 +4,25 @@ GEM
activesupport (3.2.22.5)
i18n (~> 0.6, >= 0.6.4)
multi_json (~> 1.0)
addressable (2.8.1)
addressable (2.8.5)
public_suffix (>= 2.0.2, < 6.0)
coffee-script (2.4.1)
coffee-script-source
execjs
coffee-script-source (1.11.1)
colorator (1.1.0)
commonmarker (0.23.6)
concurrent-ruby (1.1.10)
dnsruby (1.61.9)
simpleidn (~> 0.1)
commonmarker (0.23.10)
concurrent-ruby (1.2.2)
dnsruby (1.70.0)
simpleidn (~> 0.2.1)
em-websocket (0.5.3)
eventmachine (>= 0.12.9)
http_parser.rb (~> 0)
ethon (0.16.0)
ffi (>= 1.15.0)
eventmachine (1.2.7)
execjs (2.8.1)
faraday (2.7.2)
faraday (2.7.10)
faraday-net_http (>= 2.0, < 3.1)
ruby2_keywords (>= 0.0.4)
faraday-net_http (3.0.2)
@@ -199,7 +199,7 @@ GEM
kramdown-parser-gfm (1.1.0)
kramdown (~> 2.0)
liquid (4.0.3)
listen (3.7.1)
listen (3.8.0)
rb-fsevent (~> 0.10, >= 0.10.3)
rb-inotify (~> 0.9, >= 0.9.10)
mercenary (0.3.6)
@@ -208,19 +208,19 @@ GEM
jekyll-feed (~> 0.9)
jekyll-seo-tag (~> 2.1)
multi_json (1.15.0)
nokogiri (1.13.10-x64-mingw-ucrt)
nokogiri (1.15.4-x64-mingw-ucrt)
racc (~> 1.4)
octokit (4.25.1)
faraday (>= 1, < 3)
sawyer (~> 0.9)
pathutil (0.16.2)
forwardable-extended (~> 2.6)
public_suffix (4.0.7)
racc (1.6.2)
racc (1.7.1)
rb-fsevent (0.11.2)
rb-inotify (0.10.1)
ffi (~> 1.0)
rexml (3.2.5)
rexml (3.2.6)
rouge (3.26.0)
ruby2_keywords (0.0.5)
rubyzip (2.3.2)
@@ -239,16 +239,16 @@ GEM
unicode-display_width (~> 1.1, >= 1.1.1)
typhoeus (1.4.0)
ethon (>= 0.9.0)
tzinfo (2.0.5)
tzinfo (2.0.6)
concurrent-ruby (~> 1.0)
tzinfo-data (1.2022.7)
tzinfo-data (1.2023.3)
tzinfo (>= 1.0.0)
unf (0.1.4)
unf_ext
unf_ext (0.0.8.2-x64-mingw-ucrt)
unicode-display_width (1.8.0)
wdm (0.1.1)
webrick (1.7.0)
webrick (1.8.1)

PLATFORMS
x64-mingw-ucrt
@@ -264,4 +264,4 @@ DEPENDENCIES
webrick (~> 1.7)

BUNDLED WITH
2.4.3
2.3.26
61 changes: 61 additions & 0 deletions docs/_posts/2023-08-26-ConvolutionPadding.md
@@ -0,0 +1,61 @@
---
title: "Fast Slope Based Extrapolation for Convolutions"
date: 2023-08-26
---

<script type="text/x-mathjax-config">
MathJax.Hub.Config({
tex2jax: {
inlineMath: [['$','$'], ['\\(','\\)']],
processEscapes: true
}
});
</script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.0/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"></script>

## The Problem
When applying high-pass filters to images such as height maps, a common issue is edge artifacts. These are caused by having to extrapolate pixel values beyond the image boundary, and finding a good solution to this is not trivial.

Even Photoshop suffers from the same issue as shown below:

Input Image: Simple Linear Gradient
<img src="/Blog/assets/convolution/ps_input.jpg" style="display: block; margin-left: auto; margin-right: auto; width: 100%" />

High-Pass Filtered Image with Enhanced Contrast
<img src="/Blog/assets/convolution/ps_output.jpg" style="display: block; margin-left: auto; margin-right: auto; width: 100%" />


Ideally, the high-pass filter should produce a uniform output. However, the edge pixels show a gradient. This is caused by having to extrapolate pixels outside the image when convolving it, which is usually done with replicate or reflection padding.
The plot below shows the pixel intensities for the same example:

<img src="/Blog/assets/convolution/linear_no_extrapolation.jpg" style="display: block; margin-left: auto; margin-right: auto; width: 100%" />


## The Solution
I've developed a method to solve this issue. A naive implementation in one dimension would look like this (sketched in code right after the list):
1. Calculate the slope $$s$$ of the neighborhood of each pixel, weighted by the convolution kernel.
2. Apply the kernel to the image and extrapolate missing samples as $$ v_i + d_{ij} \cdot s $$, where $$v_i$$ is the value of the current pixel and $$d_{ij}$$ is the distance to the pixel being extrapolated.
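
Sketched in Matlab, these two steps could look roughly like this (a deliberately naive per-pixel version with illustrative parameters; the script linked at the end of this post does the same thing in a single pass):

```matlab
% Naive two-step version of the extrapolation: for every pixel, first fit a
% kernel-weighted slope to its in-bounds neighborhood, then convolve while
% extrapolating out-of-bounds samples as v_i + d * s.
n = 500;
data = sin((1:n) / 80) + 0.01 * randn(1, n);
sigma = 10;
half_size = 10;
xk = -half_size:half_size;
kernel = exp(-xk.^2 / (2 * sigma^2));
kernel = kernel / sum(kernel);

blurred = zeros(1, n);
for i = 1:n
    % Step 1: weighted linear regression over the valid neighborhood.
    xs = []; ys = []; ws = [];
    for x = -half_size:half_size
        idx = i + x;
        if idx >= 1 && idx <= n
            xs(end+1) = x;
            ys(end+1) = data(idx);
            ws(end+1) = kernel(x + half_size + 1);
        end
    end
    mx = sum(ws .* xs) / sum(ws);
    my = sum(ws .* ys) / sum(ws);
    s = sum(ws .* (xs - mx) .* (ys - my)) / sum(ws .* (xs - mx).^2);

    % Step 2: apply the kernel; missing samples are extrapolated from the
    % current pixel value and the local slope.
    acc = 0;
    for x = -half_size:half_size
        idx = i + x;
        k = kernel(x + half_size + 1);
        if idx >= 1 && idx <= n
            acc = acc + k * data(idx);
        else
            acc = acc + k * (data(i) + x * s);
        end
    end
    blurred(i) = acc;
end
```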

This means that pixels outside of the boundary are extrapolated using the slope of the local neighborhood of each pixel. The only remaining challenge is to find an algorithm that can do this in a single pass; otherwise, the memory accesses would need to be doubled or tripled, which would significantly hurt the performance of a GPU implementation. Luckily, this is possible.

First, finding the local slope requires calculating a weighted linear regression for the neighborhood of each pixel. I got the algorithm for the single-pass weighted linear regression from ChatGPT and sadly can't find where it supposedly got it from. However, I've compared it to the Matlab implementation of weighted linear regression and the results match perfectly.

This is the formula I used for calculating the slope; it requires the implementation to maintain five running sums:

$$ s = \frac{\sum_i w_ix_iy_i-\frac{(\sum_i w_ix_i)(\sum_i w_iy_i)}{\sum_i w_i}}{\sum_i w_ix_i^2-\frac{(\sum_i w_ix_i)^2}{\sum_i w_i}} $$
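
For reference, this matches the closed form of the weighted least-squares slope: it follows from minimizing the weighted squared error

$$ E(a, s) = \sum_i w_i (y_i - a - s x_i)^2 $$

over an intercept $$a$$ and slope $$s$$, setting $$\partial E/\partial a = \partial E/\partial s = 0$$ and eliminating $$a$$. The five running sums needed are $$\sum_i w_i$$, $$\sum_i w_i x_i$$, $$\sum_i w_i y_i$$, $$\sum_i w_i x_i^2$$ and $$\sum_i w_i x_i y_i$$.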

With the extrapolation applied to the same linear gradient, the blurred version is a perfect match of the original and the difference is zero:

<img src="/Blog/assets/convolution/linear_extrapolation.jpg" style="display: block; margin-left: auto; margin-right: auto; width: 100%" />

Here is a more complex example with random noise added to a sine wave:

<img src="/Blog/assets/convolution/sine_extrapolation.jpg" style="display: block; margin-left: auto; margin-right: auto; width: 100%" />

The same convolution without extrapolation would result in this:

<img src="/Blog/assets/convolution/sine_no_extrapolation.jpg" style="display: block; margin-left: auto; margin-right: auto; width: 100%" />

This improvement is particularly noticeable in high-pass filtered data.
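
Since a high-pass filter is essentially the input minus the low-pass result, a blur that stays correct at the edges directly translates into a clean high-pass. With the variables of the linked script (a one-line sketch, assuming the script has been run):

```matlab
% High-pass output: input minus the slope-extrapolated blur
% (uses `data` and `blurred` from slopeExtrapolation.m).
highpass = data - blurred;
plot(highpass);
```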

The Matlab implementation of this method in one dimension can be downloaded [here](/Blog/assets/convolution/slopeExtrapolation.m).
Binary file added docs/assets/convolution/linear_extrapolation.jpg
Binary file added docs/assets/convolution/linear_no_extrapolation.jpg
Binary file added docs/assets/convolution/ps_input.jpg
Binary file added docs/assets/convolution/ps_output.jpg
Binary file added docs/assets/convolution/sine_extrapolation.jpg
Binary file added docs/assets/convolution/sine_no_extrapolation.jpg
76 changes: 76 additions & 0 deletions docs/assets/convolution/slopeExtrapolation.m
@@ -0,0 +1,76 @@

% AUTHOR: Niklas Hauber
% example script showing slope extrapolation

useSlopeExtrapolation = true;
n = 500; % Size of the array
sigma = 10; % gaussian std dev
kernel_size = 21; % must be odd

% init data
%data = (1:n)/n-0.5;
data = sin((1:n)/80);
data = data + 0.01*randn(1,n);
%data(10:20) = nan;
%data(50:60) = nan;

% init gaussian kernel
half_size = floor(kernel_size / 2);
kernel = zeros(1, kernel_size);
for i = -half_size:half_size
kernel(i + half_size + 1) = exp(-i^2 / (2 * sigma^2));
end
kernel = kernel / sum(kernel);

blurred = zeros(1, n);

for i = 1:n
    self = data(i); % value of the center pixel

    valueSum = 0; % kernel-weighted sum of (valid or extrapolated) samples
    distSum = 0;  % kernel-weighted sum of the distances of missing samples

    % running sums for the single-pass weighted linear regression
    weight_sum = 0;
    wsum_xy = 0;
    wsum_x = 0;
    wsum_y = 0;
    wsum_x2 = 0;

    for x = -half_size:half_size
        idx = i + x;
        k = kernel(x + half_size + 1);
        valid_sample = idx >= 1 && idx <= n && ~isnan(data(idx));
        if valid_sample
            y = data(idx);
            valueSum = valueSum + y * k;
            weight_sum = weight_sum + k;

            % accumulate the running sums of the weighted regression
            wsum_xy = wsum_xy + k * x * y;
            wsum_x = wsum_x + k * x;
            wsum_y = wsum_y + k * y;
            wsum_x2 = wsum_x2 + k * x * x;
        elseif useSlopeExtrapolation
            % missing sample: add the center value now, defer the slope term
            valueSum = valueSum + self * k;
            distSum = distSum + k * x;
        end
    end

    if useSlopeExtrapolation
        % weighted least-squares slope of the valid neighborhood
        slope = (wsum_xy - wsum_x*wsum_y/weight_sum) / (wsum_x2 - wsum_x*wsum_x/weight_sum);
        % no division by weight_sum here: the extrapolated samples already carry
        % the remaining kernel weight, so the weights sum to one
        blurred(i) = valueSum + distSum*slope;
    else
        blurred(i) = valueSum / weight_sum;
    end
end

% Plot the original and blurred arrays
%figure
subplot(2,1,1);
hold off; % to replace last figure
plot(data, 'b', 'DisplayName', 'Original');
hold on;
plot(blurred, 'r', 'DisplayName', 'Blurred');
legend;
subplot(2,1,2);
plot(blurred-data, 'r', 'DisplayName', 'Difference');
legend;
