Sampled Gaussian Kernel
A Sampled Gaussian Kernel is a discrete convolution kernel that is obtained by sampling a continuous Gaussian function at integer grid points.
- Context: It can be used in a Scale Space Implementation of a Gaussian smoothing step.
- …
- Counter-Example(s):
- See: Kaiser Window, Error Function, Window Function, Hamming Window, Blackman Window.
References
2018
- (Wikipedia, 2018) ⇒ https://en.wikipedia.org/wiki/Scale_space_implementation#The_sampled_Gaussian_kernel Retrieved:2018-4-8.
- When implementing the one-dimensional smoothing step in practice, the presumably simplest approach is to convolve the discrete signal f_D with a sampled Gaussian kernel:
[math]\displaystyle{ L(x, t) = \sum_{n=-\infty}^{\infty} f(x-n) \, G(n, t) }[/math]
where
[math]\displaystyle{ G(n, t) = \frac {1}{\sqrt{2\pi t}} e^{-\frac{n^2}{2t}} }[/math]
(with t = σ²), which in turn is truncated at the ends to give a filter with finite impulse response
[math]\displaystyle{ L(x, t) = \sum_{n=-M}^{M} f(x-n) \, G(n, t) }[/math]
for M chosen sufficiently large (see error function) such that
[math]\displaystyle{ 2 \int_M^{\infty} G(u, t) \, du = 2 \int_{\frac{M}{\sqrt{t}}}^{\infty} G(v, 1) \, dv \lt \varepsilon. }[/math]
A common choice is to set M to a constant C times the standard deviation of the Gaussian kernel:
[math]\displaystyle{ M = C \sigma + 1 = C \sqrt{t} + 1 }[/math]
where C is often chosen somewhere between 3 and 6.
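The smoothing step and the truncation rule above can be sketched in a few lines of NumPy. This is a minimal sketch under the formulas quoted here; the function name sampled_gaussian_smooth and the default C=4 are illustrative choices, not from the source.
```python
import numpy as np

def sampled_gaussian_smooth(f, sigma, C=4.0):
    """Convolve a 1-D discrete signal f with a sampled Gaussian kernel
    G(n, t) = exp(-n^2 / (2t)) / sqrt(2*pi*t), with t = sigma**2,
    truncated at M = C*sigma + 1 as in the text above."""
    t = sigma ** 2
    M = int(np.ceil(C * sigma + 1))        # truncation radius M = C*sigma + 1
    n = np.arange(-M, M + 1)               # integer sample points -M..M
    G = np.exp(-n**2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)
    return np.convolve(f, G, mode="same")  # finite-impulse-response filter

# Example: smooth a noisy step signal at scale sigma = 2.
x = np.r_[np.zeros(50), np.ones(50)] + 0.1 * np.random.randn(100)
L = sampled_gaussian_smooth(x, sigma=2.0)
```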
Using the sampled Gaussian kernel can, however, lead to implementation problems, in particular when computing higher-order derivatives at finer scales by applying sampled derivatives of Gaussian kernels. When accuracy and robustness are primary design criteria, alternative implementation approaches should therefore be considered.
For small values of ε (10⁻⁶ to 10⁻⁸) the errors introduced by truncating the Gaussian are usually negligible. For larger values of ε, however, there are many better alternatives to a rectangular window function. For example, for a given number of points, a Hamming window, Blackman window, or Kaiser window will do less damage to the spectral and other properties of the Gaussian than a simple truncation will. Notwithstanding this, since the Gaussian kernel decreases rapidly at the tails, the main recommendation is still to use a sufficiently small value of ε such that the truncation effects are no longer important.
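To see how the choice of C relates to the ε bound above, note that the discarded tail mass 2∫_M^∞ G(u, t) du equals erfc(M / (σ√2)). A short SciPy check (a sketch; the helper name truncation_error is illustrative):
```python
import numpy as np
from scipy.special import erfc

def truncation_error(M, sigma):
    """Gaussian tail mass outside [-M, M]: 2 * integral_M^inf G(u, t) du."""
    return erfc(M / (np.sqrt(2.0) * sigma))

# With sigma = 2, C between 3 and 6 spans roughly eps ~ 1e-4 down to ~1e-10.
sigma = 2.0
for C in (3, 4, 5, 6):
    M = C * sigma + 1
    print(f"C={C}: M={M:.0f}, truncation error ~ {truncation_error(M, sigma):.2e}")
```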