Improved anti-aliasing #74

Open
tmeasday opened this issue Sep 26, 2019 · 7 comments

@tmeasday
Contributor

We've seen quite a few false positives that come from the anti-aliasing detection algorithm not quite getting it right when the shape being anti-aliased is quite thin or forms a ring.

The issue comes from the algorithm's reliance on both the darkest and lightest neighbour being "definitely not antialiased" -- which is detected by each of them having at least 3 neighbours of equal colour. In cases like the ones above, however, one of the two will not satisfy that. Here's an example:

Screenshot 2019-09-26 18 24 19

In the image above, the orange arrow points at the candidate anti-aliased pixel, and the purple arrow at its darkest neighbour. Notice that because the shape drawn (a dark grey circle in this case) is quite thin (1px wide), we don't end up finding 3 other pixels of equal colour in the neighbourhood of the dark grey pixel.

One idea to improve the algorithm would be to relax the constraint on either the darkest or the lightest neighbour (on the assumption that the other lies in a region of flat colour and should have plenty of equal neighbours). Perhaps the relaxed condition could simply be that it has some neighbours of relatively close colour? I'm not quite sure -- I want to put together a few test cases.
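
To make that concrete, here's a very rough, untested sketch of what the relaxed check could look like. hasCloseSiblings is a hypothetical helper (it doesn't exist in pixelmatch today) and the tolerance value is a placeholder; colorDelta is the existing brightness-delta helper.

    // Hypothetical helper (untested sketch): like hasManySiblings, but instead of
    // requiring neighbours of exactly equal colour, it accepts neighbours whose
    // brightness delta to the given pixel is below a small tolerance.
    function hasCloseSiblings(img, x1, y1, width, height, tolerance) {
        const x0 = Math.max(x1 - 1, 0);
        const y0 = Math.max(y1 - 1, 0);
        const x2 = Math.min(x1 + 1, width - 1);
        const y2 = Math.min(y1 + 1, height - 1);
        const pos = (y1 * width + x1) * 4;
        let close = 0;

        for (let x = x0; x <= x2; x++) {
            for (let y = y0; y <= y2; y++) {
                if (x === x1 && y === y1) continue;
                // reuse pixelmatch's existing brightness-only colour delta
                const delta = colorDelta(img, img, pos, (y * width + x) * 4, true);
                if (Math.abs(delta) <= tolerance) close++;
                if (close > 2) return true;
            }
        }
        return false;
    }

The final check in antialiased could then require the strict "3+ equal siblings" condition for only one of the two extremes, and this relaxed condition for the other, rather than the strict one for both.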

In any case I was interested in your thoughts. Have you thought about this problem before?

@mourner
Member

mourner commented Nov 13, 2019

@tmeasday Sorry for the late reply, I just noticed the issue! I haven't thought about this problem, but I'm open to any improvements — keep me updated on your experiments.

@tmeasday
Contributor Author

tmeasday commented Nov 13, 2019

Thanks @mourner -- as it turned out, the problem was mostly mitigated at our end by a totally different technique that made the anti-aliasing much more consistent (so the diff algorithm no longer hits this problem), and it isn't a pressing problem for us right now. It would still be good to keep improving the algorithm, though; I'll let you know if I look at it again.

@tmeasday
Contributor Author

I'd be happy to share my test cases privately; the only reason I haven't posted them here already is that they are customer images and I would need to crop them down (to something similar to the example above) before posting them to a public issue.

@Danielku15

I am currently facing similar issues, where anti-aliasing -- or more precisely font smoothing -- causes my visual test suite to fail on a regular basis. I fear that the current implementation of antialiased is not sufficient to properly detect anti-aliased pixel differences.

Simply increasing the overall failure threshold for my tests makes most of them useless, because significant differences are then no longer detected either. But if there were a good way to improve the anti-aliasing detection, I think my problems would mostly be gone.

I was wondering whether there is a way to make the current algorithm more tolerant, or whether there are alternative algorithms we could put in place for certain use cases.
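
For context, the only knobs I'm aware of today are the threshold and includeAA options. My setup looks roughly like this (the file names and the threshold value are just placeholders for my tests):

    const fs = require('fs');
    const { PNG } = require('pngjs');
    const pixelmatch = require('pixelmatch');

    const img1 = PNG.sync.read(fs.readFileSync('expected.png'));
    const img2 = PNG.sync.read(fs.readFileSync('actual.png'));
    const { width, height } = img1;
    const diff = new PNG({ width, height });

    const mismatched = pixelmatch(img1.data, img2.data, diff.data, width, height, {
        threshold: 0.15, // per-pixel colour tolerance (0..1), default 0.1
        includeAA: false // false (the default) detects and ignores anti-aliased pixels
    });

    fs.writeFileSync('diff.png', PNG.sync.write(diff));
    console.log(`${mismatched} mismatched pixels`);

Raising threshold is exactly the "makes my tests useless" trade-off described above, which is why better anti-aliasing detection would help much more.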

@fro0116

fro0116 commented Nov 3, 2020

I've been running into similar issues with my test suite using pixelmatch.

Please excuse my ignorance, but I'm really curious why anti-aliasing behavior isn't deterministic in the first place. Intuitively, it feels like anti-aliasing should be an entirely deterministic algorithm that depends only on the raw source pixels -- if those don't change, neither should the anti-aliased result. I've tried to do a bit of research into this on my own, but my Google-fu has been failing me so far.

Would really appreciate it if the pixelmatch maintainers or other folks in this thread who are subject matter experts would be willing to enlighten me or point me to some reading material. 🙏

@Danielku15

Danielku15 commented Nov 4, 2020

@fro0116 From a Chrome perspective I can maybe give some insight based on my experience: Chrome uses Skia as its graphics library for rendering. Skia tries to use hardware acceleration if available, to benefit from the GPU, and GPU rendering operations depend heavily on the graphics card and drivers. There are options to disable GPU rendering, which usually gives somewhat more deterministic results.

Firefox uses a different rendering engine and different graphics libraries underneath; I think they use a wrapper over OS-specific libraries. Update: Firefox also seems to use Skia, but there are still a lot of compilation flags and settings that can influence the rendering, whereas within Chromium-based browsers it should be fairly consistent.
(Source: https://en.wikipedia.org/wiki/Skia_Graphics_Engine)

There are many anti-aliasing algorithms on the market, and they all behave a bit differently.

The next level is text rendering, where the operating system is the key factor. Depending on which library is used to interpret the font data (e.g. FreeType vs. the OS built-in one), you get different font renderings. The typesetting (aligning the characters) again depends on the libraries and settings used.

On top of that come OS-specific settings which browsers try to respect (e.g. ClearType on Windows, high-contrast modes, ...).

All these aspects influence how anti-aliasing works. Chrome even behaves differently in headless mode compared to the regular UI variant.

This makes it very hard for libraries like pixelmatch to detect differences deterministically. If the anti-aliasing differs between two renders, changes are detected; that's where tolerance comes into play. I think there aren't many algorithms/papers on detecting anti-aliased pixels, and there are only a few parameters that can control the tolerance of the anti-aliasing detection. Maybe somebody will come up with an AI/ML kernel to detect differences in a more tolerant way.

A change in anti-aliasing for a circle might be totally fine, as it is still a circle in the right place. But it might not be fine for a "bow" connecting two elements to shift, as it might no longer end up at the right connector point. Not an easy topic 😨

@johns10

johns10 commented Jun 10, 2022

Hi! I just finished creating a pure Elixir port of this library, and I'm pretty sure I know why this is happening.

The reason for the non-deterministic behavior of the anti-aliasing detection is that when multiple neighbouring pixels tie for darkest/brightest, we only keep one of them -- and which one depends on the order in which the neighbourhood is traversed. What we need to do instead is add all of the tied darkest/lightest pixels to an accumulator, and then reduce over those pixels, checking whether any of them has adjacent, identical pixels.

The problem is here:

    for (let x = x0; x <= x2; x++) {
        for (let y = y0; y <= y2; y++) {
            if (x === x1 && y === y1) continue;

            // brightness delta between the center pixel and adjacent one
            const delta = colorDelta(img, img, pos, (y * width + x) * 4, true);

            // count the number of equal, darker and brighter adjacent pixels
            if (delta === 0) {
                zeroes++;
                // if found more than 2 equal siblings, it's definitely not anti-aliasing
                if (zeroes > 2) return false;

            // remember the darkest pixel
            } else if (delta < min) {
                min = delta;
                minX = x;
                minY = y;

            // remember the brightest pixel
            } else if (delta > max) {
                max = delta;
                maxX = x;
                maxY = y;
            }
        }
    }

It should actually be something like this (forgive the crude syntax, I didn't test it):

    for (let x = x0; x <= x2; x++) {
        for (let y = y0; y <= y2; y++) {
            if (x === x1 && y === y1) continue;

            // brightness delta between the center pixel and adjacent one
            const delta = colorDelta(img, img, pos, (y * width + x) * 4, true);

            // count the number of equal, darker and brighter adjacent pixels
            if (delta === 0) {
                zeroes++;
                // if found more than 2 equal siblings, it's definitely not anti-aliasing
                if (zeroes > 2) return false;

            // remember every pixel tied for darkest
            } else if (delta < min) {
                min = delta;
                minCoords = [{x: x, y: y}];

            // remember every pixel tied for brightest
            } else if (delta > max) {
                max = delta;
                maxCoords = [{x: x, y: y}];
            } else if (delta === max) {
                maxCoords.push({x: x, y: y});
            } else if (delta === min) {
                minCoords.push({x: x, y: y});
            }
        }
    }

Then, write a reducer for hasManySiblings that goes over those coordinate lists and returns true if any of the calls to hasManySiblings returns true.
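
Something like this (untested sketch, reusing the existing hasManySiblings):

    // Untested sketch: instead of checking a single darkest/brightest pixel,
    // check every tied candidate, and treat that extreme as "definitely not
    // anti-aliased" if any of them has many equal siblings.
    function anyHasManySiblings(coords, img, width, height) {
        return coords.some(({x, y}) => hasManySiblings(img, x, y, width, height));
    }

The final check in antialiased would then use anyHasManySiblings(minCoords, ...) and anyHasManySiblings(maxCoords, ...) in place of the single-pixel hasManySiblings calls.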

I identified this problem because when I mapped over the adjacent pixels, I iterated over the y axis first. You can reproduce the behavior by swapping lines 105 and 106 in index.js, i.e. swapping the two loop headers:
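
    // reproducing the order dependence: iterate y in the outer loop, x in the inner
    for (let y = y0; y <= y2; y++) {
        for (let x = x0; x <= x2; x++) {
            // ... same loop body as above ...
        }
    }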

In my library https://github.com/user-docs/pexelmatch I opted to use the same algorithm as pixelmatch so that I could use the same fixtures and maintain consistency. Let me know if y'all think this is a good evaluation and whether you want to apply this change, as I'd like to keep the tests for pexelmatch consistent with this library.

I used these additional fixtures to diagnose the problem:
8a
8b

Here is the diff produced by the current algorithm:
8diff

Here is the diff produced by reversing the x and y order:
temp_diff
