Gaussian and sieve filters

[[File:Siv1-gaussiansieve.png]]


OK, it's another crap figure. I spent ages getting the trace above each new one to show the moving window used to compute the output of each stage.

Left panel: Gaussian filter. Stage (scale) 1: <math>\sigma=1</math>. Stage 2: <math>\sigma=2</math>, applied to the output of the previous stage. And so forth. Each stage simplifies the signal until, at the bottom, it fades away. It does preserve scale-space and that is GOOD.
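
To make the left panel concrete, here is a minimal Python sketch of that Gaussian cascade, each stage re-smoothing the previous stage's output; the test signal and the sigma schedule are illustrative assumptions, not values taken from the figure.

<syntaxhighlight lang="python">
# Minimal sketch of the Gaussian cascade: each stage re-smooths the output
# of the previous stage with the next sigma.  Signal and sigma schedule are
# illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(0)
signal = np.cumsum(rng.standard_normal(256))   # rough 1-D test signal

stages = [signal]
for sigma in (1, 2, 3, 4):                     # stage 1: sigma=1, stage 2: sigma=2, ...
    stages.append(gaussian_filter1d(stages[-1], sigma))

# Each successive stage is smoother; the detail gradually fades away.
for k, s in enumerate(stages[1:], start=1):
    print(f"stage {k}: residual std = {np.std(signal - s):.3f}")
</syntaxhighlight>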

Right panel: recursive median. Whatever the scale, each computation requires three numbers: the middle one, one ahead and one behind. If the one behind comes directly from the previous stage, the ordinary median is found.

If, on the other hand, the one behind is the value that has just been computed using the previous window, then the recursive median is computed. It is similar to an IIR filter as opposed to an FIR filter. Stage (scale) 1 has a window of 3; the filter removes extrema of length (scale) 1. Stage 2 also has a window of 3 but is computed on a basis of 5 values; this filter removes extrema of scale 2. And so forth. Each stage simplifies the signal until, at the bottom, it is entirely removed. It turns out that extrema at each scale are removed in a nice, regular way: it does preserve scale-space and that is GOOD.
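
A minimal sketch of one recursive-median stage follows, assuming a window of 2s+1 samples at scale s (the s values behind come from the output already computed, the rest from the input); the edge handling by replication padding is an assumption, not a transcription of the original implementation.

<syntaxhighlight lang="python">
# One recursive-median ('v') sieve stage and the cascade over scales.
# Window convention (2*s + 1 samples at scale s) and edge padding are
# assumptions; this is a sketch, not the original implementation.
import numpy as np

def recursive_median_stage(x, s):
    """Scale-s recursive median: the s values behind come from the output
    already computed (the IIR-like part); the current value and the s
    values ahead come from the input."""
    x = np.asarray(x, dtype=float)
    xp = np.pad(x, s, mode="edge")   # replicate edges so every window is full
    yp = xp.copy()
    for i in range(s, s + len(x)):
        window = np.concatenate((yp[i - s:i],        # already-filtered values behind
                                 xp[i:i + s + 1]))   # current input and s values ahead
        yp[i] = np.median(window)
    return yp[s:s + len(x)]

def recursive_median_sieve(x, max_scale):
    """Cascade: stage 1 on the signal, stage 2 on stage 1's output, and so on."""
    stages = [np.asarray(x, dtype=float)]
    for s in range(1, max_scale + 1):
        stages.append(recursive_median_stage(stages[-1], s))
    return stages
</syntaxhighlight>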

The output depends on the computation kernel; the recursive median is one. Others include connected-set openings ('o'), closings ('c') and composites of these: 'M' ('o' followed by 'c' at each stage) or 'N' ('c' followed by 'o'). These last two yield outputs that are almost indistinguishable from the recursive median. (I have just realised how confusing it is to refer to the recursive median as an 'm' sieve; I shall start using the abbreviation 'v' as much as possible instead.)
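
For 1-D signals a connected set of a given length is simply an interval, so flat grey-scale openings and closings with a line segment can stand in for the connected-set operators. A sketch of single 'M' and 'N' stages on that basis follows; the window-length convention (s + 1 samples at scale s) is an assumption.

<syntaxhighlight lang="python">
# Sketch of single 'M' ('o' then 'c') and 'N' ('c' then 'o') stages for 1-D
# signals, using flat openings/closings with a line segment in place of the
# connected-set operators (equivalent in 1-D).  The window-length convention
# (s + 1 samples at scale s) is an assumption.
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

def M_stage(x, s):
    """Opening then closing at scale s."""
    w = s + 1
    return grey_closing(grey_opening(x, size=w), size=w)

def N_stage(x, s):
    """Closing then opening at scale s."""
    w = s + 1
    return grey_opening(grey_closing(x, size=w), size=w)

def M_sieve(x, max_scale):
    """Cascade the M stages over increasing scales."""
    stages = [np.asarray(x, dtype=float)]
    for s in range(1, max_scale + 1):
        stages.append(M_stage(stages[-1], s))
    return stages
</syntaxhighlight>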

This is dull stuff compared to something that has been thought about much less, let alone exploited: a PhD project waiting to happen. Sieves (and the recursive median filter is what I call one of the sieves) are idempotent. In other words, having made one pass through the data at any particular scale, making another pass through the result changes nothing.

This is not like a linear (diffusion) filter, where repeated passes at the same scale simply smooth the signal away. It means that one could build entirely new filtering schemes exploiting the idempotency property; here are some ideas. They begin to look a little biological.
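
Before getting to those ideas, the idempotency claim itself is easy to check numerically. A minimal sketch, with a flat opening-then-closing stage standing in for one sieve stage and a Gaussian for comparison (the window and sigma values are arbitrary assumptions):

<syntaxhighlight lang="python">
# Idempotency check: a sieve-like stage (opening then closing) leaves its own
# output unchanged, while a Gaussian at the same nominal scale keeps smoothing
# on every extra pass.  Window and sigma values are arbitrary.
import numpy as np
from scipy.ndimage import grey_opening, grey_closing, gaussian_filter1d

rng = np.random.default_rng(1)
x = np.cumsum(rng.standard_normal(512))

def stage(v, w=5):                       # one opening-then-closing stage
    return grey_closing(grey_opening(v, size=w), size=w)

once, twice = stage(x), stage(stage(x))
print("sieve stage, second pass changes nothing:", np.allclose(once, twice))   # True

g1 = gaussian_filter1d(x, 2.0)
g2 = gaussian_filter1d(g1, 2.0)
print("Gaussian, second pass changes nothing:", np.allclose(g1, g2))           # False
</syntaxhighlight>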

== Two twists in the algorithm make it practical ==

The implementation is very literal and slow: each stage is run and re-run to idempotency. I called the filter a data-sieve. The twists in this story were firstly to switch to multiple-pass median filters (Bangham, 1993)<ref>Bangham, J. Andrew, "Properties of a Series of Nested Median Filters, Namely the Data Sieve," IEEE Trans. Signal Processing, vol. 41, no. 1, Jan. 1993.</ref>, then being excited enough to look for something faster/better: recursive median filters (Bangham, Chardaire et al., 1996)<ref>Bangham, J. Andrew, et al., "Multiscale Nonlinear Decomposition: The Sieve Decomposition Theorem," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 18, no. 5, May 1996.</ref> and sieves.

Each twist was patented; the first was (I think) in 1988. Patents are difficult to produce and enforce. I was supported in this by Cambridge Consultants Limited (CCL), who had taken over the company. Fantastic engineers, but at that time neither we nor the vision community had appreciated (or we had not noticed) the coming possibilities of SIFT. The scale-invariant feature transform (SIFT) is an algorithm in computer vision to detect and describe local features in images; it was published by David Lowe in 1999.
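
The "run and re-run to idempotency" step can be made concrete with an ordinary (non-recursive) median filter iterated at a fixed window until the output stops changing (a root signal); the window size and the pass limit below are assumptions.

<syntaxhighlight lang="python">
# The literal, slow stage: repeat an ordinary median filter at a fixed window
# until the output stops changing.  Window size and pass limit are assumptions.
import numpy as np
from scipy.signal import medfilt

def median_to_idempotency(x, window=3, max_passes=100):
    """Iterate a standard median filter until a further pass changes nothing."""
    y = np.asarray(x, dtype=float)
    for _ in range(max_passes):
        z = medfilt(y, kernel_size=window)
        if np.array_equal(z, y):
            break
        y = z
    return y
</syntaxhighlight>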

== References ==

<references />