Improved anti-aliasing #74
@tmeasday Sorry for a late reply, just noticed the issue! I haven't thought about this problem, but open to any improvements — keep me updated on your experiments.
Thanks @mourner -- as it turned out, the problem was mostly mitigated at our end by a totally different technique that made the anti-aliasing much more consistent (so the diff algorithm no longer hits this problem), so this isn't pressing right now. I guess it would still be good to keep improving the algorithm, though; I'll let you know if I look at it again.
I'd be happy to share my test cases privately; the only reason I haven't posted them here already is that they are customer images and I would need to crop them down (to something similar to the example above) before posting them to a public issue.
I am currently facing similar issues where anti-aliasing, or more precisely font smoothing, is causing my visual test suite to fail on a regular basis. I fear the current anti-aliasing detection doesn't handle this case well. Just increasing the overall threshold makes most of my tests useless, because significant differences are then no longer detected either. But if there were a good way to improve the anti-aliasing detection, I think my problems would be mostly gone. I was wondering whether there is a way to make the current algorithm more tolerant, or whether there are alternative algorithms we could put in place for certain use cases.
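(For reference, a minimal sketch of how the existing tolerance knobs are passed to pixelmatch; the pngjs loading and the file names are just placeholder setup, not anything from this repo's tests.)

```js
const fs = require('fs');
const {PNG} = require('pngjs');
const pixelmatch = require('pixelmatch');

// Placeholder file names: any two same-sized PNGs from a visual test run.
const img1 = PNG.sync.read(fs.readFileSync('expected.png'));
const img2 = PNG.sync.read(fs.readFileSync('actual.png'));
const {width, height} = img1;
const diff = new PNG({width, height});

const mismatched = pixelmatch(img1.data, img2.data, diff.data, width, height, {
  threshold: 0.1,   // per-pixel color-difference tolerance (0..1); raising it also hides real changes
  includeAA: false, // false = pixels classified as anti-aliasing are not counted as mismatches
});

fs.writeFileSync('diff.png', PNG.sync.write(diff));
console.log(`${mismatched} mismatched pixels`);
```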
I've been running into similar issues with my test suite using pixelmatch. Please excuse my ignorance, but I'm really curious why anti-aliasing behavior isn't deterministic in the first place. Intuitively, it feels like anti-aliasing should be an entirely deterministic algorithm that depends only on the raw source pixels, and if those don't change, neither should the anti-aliased result. I've tried to do a bit of research into this on my own, but my Google-fu has been failing me so far. I would really appreciate it if the pixelmatch maintainers, or other folks in this thread who might be subject matter experts on this, would be willing to enlighten me or point me to some reading material. 🙏
@fro0116 From a Chrome perspective I can maybe give some insight based on my experience: Chrome uses Skia as its graphics library for rendering. Skia tries to utilize hardware acceleration, if available, to benefit from the GPU, and GPU rendering operations depend heavily on the graphics card and drivers. There are options to disable GPU rendering, which usually gives somewhat more deterministic results.
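(For anyone who wants to try that: a sketch of launching Chrome through Puppeteer with GPU rendering disabled, plus a few flags commonly used to reduce font-smoothing variance. Flag behaviour differs per Chrome version and platform, and the URL and file name are placeholders, so treat this as a starting point rather than a guarantee.)

```js
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({
    args: [
      '--disable-gpu',               // fall back to software rasterization
      '--force-color-profile=srgb',  // avoid per-machine color profile differences
      '--disable-lcd-text',          // no subpixel (LCD) text anti-aliasing
      '--font-render-hinting=none',  // reduce OS/FreeType hinting differences
      '--hide-scrollbars',
    ],
  });
  const page = await browser.newPage();
  await page.goto('https://example.com');       // placeholder URL
  await page.screenshot({path: 'actual.png'});  // placeholder file name
  await browser.close();
})();
```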
There are many anti-aliasing algorithms out there, and they all behave a bit differently. The next level is text rendering, where the operating system is the key factor. Depending on which library is used to interpret the font data (e.g. FreeType vs. the OS built-in), you get different font rendering. Type setting (aligning the characters) again depends on the libraries and settings used. On top of that come OS-specific settings which browsers try to respect (e.g. ClearType on Windows, high-contrast modes, ...). All of these aspects influence how anti-aliasing works, and Chrome even behaves differently in headless mode compared to the UI display variant.

This is what makes it so hard for libraries like pixelmatch to detect differences deterministically: if the anti-aliasing between two renders differs, changes are detected. That's where the tolerance comes into play. I think there aren't many algorithms/papers on how to detect anti-aliased pixels, and there are also just a few parameters which might control the tolerance for anti-aliasing detection. Maybe somebody will come up with some AI/ML kernel to detect differences in a more tolerant way.

A change in anti-aliasing for a circle might be totally fine, as it is still a circle in the right place. But it might not be fine to have a shift of a "bow" connecting two elements, as it might no longer end up at the right connector point. Not an easy topic 😨
Hi! I just finished creating a pure Elixir port of this library, and I'm pretty sure I know why this is happening. The reason for the non-deterministic behavior of the anti-aliasing detection is that when there are multiple neighbouring pixels with the same brightness/darkness, we only keep the last one we found. What we need to do instead is add all of the darkest/lightest pixels to an accumulator and then reduce over those pixels, determining whether any of them have adjacent, identical pixels. The problem is here:
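(Paraphrasing the relevant neighbour loop of antialiased() in index.js from memory, so take the exact code with a grain of salt; the point is that only a single darkest and a single brightest neighbour survive the loop, and ties are resolved purely by iteration order.)

```js
for (let x = x0; x <= x2; x++) {
  for (let y = y0; y <= y2; y++) {
    if (x === x1 && y === y1) continue;

    // brightness delta between the candidate pixel and this neighbour
    const delta = colorDelta(img, img, pos, (y * width + x) * 4, true);

    if (delta === 0) {
      // 3+ equal neighbours means definitely not anti-aliasing
      if (++zeroes > 2) return false;
    } else if (delta < min) {
      min = delta; minX = x; minY = y;   // a neighbour that *ties* the current min is dropped
    } else if (delta > max) {
      max = delta; maxX = x; maxY = y;   // likewise for the max
    }
  }
}
```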
It should actually be something like this (forgive crude syntax, didn't test):
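(Roughly along these lines; an untested sketch with illustrative names, keeping every neighbour that ties the darkest/brightest delta instead of just one.)

```js
const minCandidates = [];
const maxCandidates = [];

// ...inside the neighbour loop, after the delta === 0 case:
if (delta < min) {
  min = delta;
  minCandidates.length = 0;       // new darkest value: reset the accumulator
  minCandidates.push([x, y]);
} else if (delta === min) {
  minCandidates.push([x, y]);     // tie: keep this neighbour as well
} else if (delta > max) {
  max = delta;
  maxCandidates.length = 0;       // new brightest value: reset the accumulator
  maxCandidates.push([x, y]);
} else if (delta === max) {
  maxCandidates.push([x, y]);
}
```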
Then, write a reducer over the accumulated candidate pixels. I identified this problem because when I mapped over the adjacent pixels, I was mapping over the y axis first. You can reproduce the behavior by swapping lines 105 and 106 in index.js. In my library https://github.com/user-docs/pexelmatch I opted to use the same algorithm as pixelmatch, so that I could use the same fixtures and maintain consistency. Let me know if y'all think this is a good evaluation and you want to apply this change, as I'd like to keep the tests for pexelmatch consistent with this library. I used some additional fixtures to diagnose the problem: one diff produced by the current algorithm, and one produced by reversing the x and y order.
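(The reducer step could look something like this, reusing pixelmatch's existing hasManySiblings check; again just an illustrative sketch, not tested against the fixtures.)

```js
// A candidate pixel counts as anti-aliased if *any* of the tied darkest or
// brightest neighbours has 3+ equal siblings in both images.
const anyHasManySiblings = (candidates) =>
  candidates.some(([cx, cy]) =>
    hasManySiblings(img, cx, cy, width, height) &&
    hasManySiblings(img2, cx, cy, width, height));

return anyHasManySiblings(minCandidates) || anyHasManySiblings(maxCandidates);
```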
We've seen quite a few false positives that are due to the anti-aliasing algorithm not quite getting it right when the shape being anti-aliased is quite thin or forms a ring.
The issue comes from the algorithm's reliance on both the darkest and lightest neighbour being "definitely not antialiased" -- which is detected via having at least 3 neighbours of equal color. However, in cases like the ones above, one of the two will not be. Here's an example:
In the image above, the orange arrow points at the candidate anti-aliased pixel, and the purple arrow at the darkest neighbour. Notice that because the shape drawn (a dark grey circle in this case) is quite thin (1px wide), we don't end up finding 3 other pixels of equal color in the neighbourhood of the dark grey pixel.
I think an idea to improve the algorithm would be to relax the constraint on one of the darkest or lightest neighbours (with the assumption that the other lies in a region of flat color and should have plenty of equal neighbours). Perhaps the relaxed condition could simply be that it has some neighbours of relatively close color? I'm not quite sure; I want to put together a few test cases.
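(One hypothetical way that relaxed check could look, purely as a sketch: keep the strict hasManySiblings test for one of the two extremes, and accept the other if it merely has a couple of neighbours within a small color tolerance. colorDelta is pixelmatch's existing per-pixel difference; the function name and tolerance parameter are made up.)

```js
function hasCloseSiblings(img, x1, y1, width, height, tolerance) {
  const x0 = Math.max(x1 - 1, 0);
  const y0 = Math.max(y1 - 1, 0);
  const x2 = Math.min(x1 + 1, width - 1);
  const y2 = Math.min(y1 + 1, height - 1);
  const pos = (y1 * width + x1) * 4;
  // count an off-image edge as one "close" sibling, mirroring hasManySiblings
  let close = x1 === x0 || x1 === x2 || y1 === y0 || y1 === y2 ? 1 : 0;

  for (let x = x0; x <= x2; x++) {
    for (let y = y0; y <= y2; y++) {
      if (x === x1 && y === y1) continue;
      // accept neighbours that are close in color, not just exactly equal
      const delta = Math.abs(colorDelta(img, img, pos, (y * width + x) * 4, true));
      if (delta <= tolerance && ++close > 2) return true;
    }
  }
  return false;
}
```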
In any case I was interested in your thoughts. Have you thought about this problem before?