The Kuwahara-Nagao filter, or "KN filter", is awesome. It is an edge-preserving filter that both blurs and sharpens at the same time, like magic. The core concept is to sample regions around a reference pixel and pick the average colour of the region with the least variance. Or, in other words, avoid regions with 'a lot going on'.
Sometimes an image is a bit 'grainy' and you want to smooth it; see figure 1 a. One way is to apply a blur, but doing so blurs everything, including details like edges that were never meant to be blurred; see figure 1 b.
Sometimes an image is overly blurry and sharpening is needed. But sharpening doesn't discriminate: artefacts get sharpened just as much as useful details; see figure 1 c.
The KN filter solves both problems: it smooths small artefacts while not just preserving but even enhancing edges (it may add a slight blockiness to the image, but that's a bonus and doesn't count); see figure 1 d.
![]() |
| Figure 1, a) grainy original, b) blurred, c) sharpened, d) KN filtered |
What's the problem?
I've come across more than a few implementations of the KN filter that produce artefacts, and I suspect that the standard method used to increase the scale of the filter is wrong.
Kang et al. published a paper called "Image and Video Abstraction by Anisotropic Kuwahara Filtering" that, while otherwise excellent, includes what I consider to be an incorrect implementation of the KN filter.
For example, their Figure 2 (b) shows a quad-shaped wavy/shell pattern where flat colours might be expected; see my figure 2 below.
![]() |
| Figure 2, Kang et al's KN filtered image |
The very good (for what it's meant to do) graphics program Paint.net allows user-created effects. There are two such user-created KN filters (that I found) that produce the same artefacts as are visible in Kang et al.
However, another program, mtPaint, has a KN implementation that produces images without the artefacts, per figure 3.
![]() |
| Figure 3, how the image should be filtered with the KN filter |
While testing my own initial implementation of the KN filter, I used the mtPaint output as the benchmark for what is 'correct'. Why do I consider the Kang and (user-made) Paint.net implementations bad and the mtPaint implementation good? Because the former look crap, and the latter doesn't. QED.
The original paper (from 1976) referenced by Kang et al. exists, as far as I can tell, only behind a paywall; if anyone knows of a handy public copy or link, please email me, I'd love to see it. Meanwhile, there are many public sources that describe the KN filter's concept and method (Wikipedia, various universities' lecture notes, etc.), but uncannily all of them only ever (so far as I've found) refer to a 3x3 box size, which suggests that the original source probably does not define scaling up from 3x3. And this is likely the cause of the artefacts: people have guessed at a scaling method, incorrectly.
The method for the KN filter is roughly as follows. (This isn't meant as a tutorial on the KN filter itself; it's here for reference, and to be improved on.)
- Take a 3x3 box, divide it into four 2x2 boxes or samples, overlapping about the centre pixel. Per Figure 4, below.
- For each sample get the variance and average colour.
- Select the sample with the lowest variance and make its average colour the new colour for the centre pixel. (A code sketch of this follows figure 4, below.)
![]() |
| Figure 4, the standard 3x3 KN filter box |
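To make that concrete, here's a minimal sketch of the 3x3 case in Python/NumPy, assuming a greyscale image held in a 2D float array. The function and variable names are my own, and a colour version would need to compute the variance per channel or on luminance; treat it as an illustration, not a reference implementation.

```python
import numpy as np

def kn_filter_3x3(img):
    """Basic 3x3 Kuwahara-Nagao filter on a 2D greyscale array."""
    h, w = img.shape
    out = img.copy()  # border pixels are simply left untouched
    # Top-left corners (relative to the centre pixel) of the four 2x2
    # samples, each of which overlaps the centre pixel.
    quads = [(-1, -1), (-1, 0), (0, -1), (0, 0)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            best_var, best_mean = None, None
            for dy, dx in quads:
                sample = img[y + dy : y + dy + 2, x + dx : x + dx + 2]
                var, mean = sample.var(), sample.mean()
                if best_var is None or var < best_var:
                    best_var, best_mean = var, mean
            out[y, x] = best_mean  # colour of the least-varied sample
    return out

# e.g. smoothed = kn_filter_3x3(np.random.rand(64, 64))
```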
When I first tried to implement my own KN filter and wanted to scale the filter size up, I did the 'obvious' thing and just made each of the four boxes bigger, per figure 5.
![]() |
| Figure 5, standard, and wrong, sampling for a 5x5 filter, with only four samples |
With this scaling method I successfully reproduced the crappy wavy artefacts! Yeay!
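For reference, this is my reading of how that scaling defines its four sample regions for an n x n filter; it's a sketch for contrast with the version further down, not anyone's actual code, and the names are my own.

```python
def four_quadrant_offsets(n):
    """Top-left corners (relative to the centre pixel) of the four
    enlarged quadrants used by the figure 5 style of scaling."""
    span = (n + 1) // 2  # sample edge length, e.g. n=5 -> span=3
    r = n // 2           # filter radius, e.g. n=5 -> r=2
    # Four span x span boxes that only touch the centre pixel at a corner.
    return span, [(-r, -r), (-r, 0), (0, -r), (0, 0)]
```

For n = 3 this gives the original four 2x2 samples, but for n = 5 the centre pixel is just one corner pixel of each 3x3 sample.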
Recognising the problem
It occurred to me that this scaling method marginalises the centre reference pixel more and more as the filter size increases, which is bad.
It then occurred to me to fill the filter with as many sample regions (all of the same size) as distinctly fit, as per figure 6.
![]() |
| Figure 6, correct KN sampling for a 5x5 filter area |
It occurred to me that the general definition of the KN filter may not be, and maybe never was, 'draw an n x n box and then sample four boxes overlapping about the centre pixel...', but rather 'fill the box with as many same-sized samples as can overlap the centre pixel', which for a 3x3 filter just happens to mean four 2x2 boxes. The fact that there are four samples for a 3x3 filter is simply what falls out of a 3x3 filter and is, I reckon, not a rule to be applied at all filter sizes.
- For an n x n box (where n must be odd), define span as the edge length of a sample: span = (n+1)/2.
- Fill the filter area with span x span samples, each with edge length span, so that every sample contains the centre pixel.
- Proceed as usual: get the variance and average colour of each sample, pick the sample with the lowest variance, and use its average colour for the reference pixel. (A code sketch follows below.)
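Here's a minimal sketch of that generalised method, again in Python/NumPy on a greyscale 2D array, with an odd filter size n (all names are my own):

```python
import numpy as np

def kn_filter(img, n=5):
    """Generalised KN filter: span x span samples, each span pixels on a side."""
    assert n % 2 == 1, "filter size must be odd"
    span = (n + 1) // 2  # e.g. n=3 -> span=2, n=5 -> span=3
    r = n // 2           # filter radius; pixels within r of the border are left untouched
    h, w = img.shape
    out = img.copy()
    # Top-left corners (relative to the centre pixel) of every span x span
    # sample that contains the centre pixel: span x span of them in total.
    offsets = [(dy, dx) for dy in range(1 - span, 1)
                        for dx in range(1 - span, 1)]
    for y in range(r, h - r):
        for x in range(r, w - r):
            best_var, best_mean = None, None
            for dy, dx in offsets:
                sample = img[y + dy : y + dy + span, x + dx : x + dx + span]
                var, mean = sample.var(), sample.mean()
                if best_var is None or var < best_var:
                    best_var, best_mean = var, mean
            out[y, x] = best_mean
    return out

# e.g. filtered = kn_filter(np.asarray(img, dtype=float), n=5)
```

For n = 3 this reduces to the four 2x2 samples of figure 4; for n = 5 it gives the nine 3x3 samples of figure 6.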









tracinpres_gi Kim Lopez https://wakelet.com/wake/ToLbtKmDKnMGcuaTEmRvj
ReplyDeleteovviemota
Vpuncfepropni Mike Anderson NetBalancer
ReplyDeleteAutodesk AutoCAD
Pinnacle Studio
fectpaddrahi
AticuAtinc-ro Sarah Metcalf click here
ReplyDeletethere
genmocorpo