COMBINING LATIN SMALL LETTER O and COMBINING RING ABOVE can be ignored in compare #28
Comments
Dinglehopper already uses normalization (NFC). The delta in the example is caused by different characters which look the same: u + COMBINING LATIN SMALL LETTER O != u + COMBINING RING ABOVE. So COMBINING LATIN SMALL LETTER O and COMBINING RING ABOVE should be treated as similar when comparing.
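For illustration, a minimal sketch (Python, standard unicodedata module; character names taken from the comment above) of why NFC alone does not close this delta:

```python
import unicodedata

# GT uses u + U+0366 COMBINING LATIN SMALL LETTER O, the OCR produced
# U+016F LATIN SMALL LETTER U WITH RING ABOVE (canonically u + U+030A).
gt  = "zu\u0366\u017fein"   # zuͦſein
ocr = "z\u016f\u017fein"    # zůſein

# U+0366 and U+030A are different combining marks, so no Unicode
# normalization form makes the two strings equal.
print(unicodedata.normalize("NFC", gt) == unicodedata.normalize("NFC", ocr))  # False

# By contrast, the precomposed and decomposed spellings of ů are
# canonically equivalent and do become identical under NFC.
print(unicodedata.normalize("NFC", "u\u030a") == "\u016f")  # True
```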
I do think that COMBINING LATIN SMALL LETTER O and COMBINING RING ABOVE are entirely different characters and not equivalent. However, I think the normalization behaviour should be configurable by the user. So if you choose to consider these characters the same for your use case, you should be able to configure that. I've done some WIP on this that I still need to merge and build on.
Duplicate of #11?
@cneud, yes, the issue can be solved with substitutions that can be configured by users. @mikegerber, sure,
Exactly. I aim to support (in Unicode terms) canonical equivalence (i.e. using NFC consistently) and maybe the same idea for MUFI characters, and to make all further equivalence considerations user-configurable. The latter also means that some of the hardcoded substitutions will be moved to some kind of configuration (#11), and @stweil can make uͦ and ů equivalent.
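A rough sketch of what such user-configurable equivalences could look like (names and structure are hypothetical, not dinglehopper's actual configuration):

```python
import unicodedata

# Hypothetical user configuration: sequences the user wants to treat as
# equivalent, mapped to a canonical replacement.
USER_SUBSTITUTIONS = {
    "u\u0366": "\u016f",  # uͦ (u + COMBINING LATIN SMALL LETTER O) -> ů
}

def normalize_for_comparison(text, substitutions=USER_SUBSTITUTIONS):
    """Canonical (NFC) normalization plus user-defined substitutions."""
    text = unicodedata.normalize("NFC", text)
    for src, dst in substitutions.items():
        text = text.replace(src, dst)
    return text

# With this configuration, the GT and OCR strings from the example compare equal.
print(normalize_for_comparison("zu\u0366\u017fein")
      == normalize_for_comparison("z\u016f\u017fein"))  # True
```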
I would like to point out here that allowing arbitrary equivalences also makes comparing results much more difficult. Ideally, there should be sensible sets of transformations (like OCR-D GT levels) which many researchers/practitioners can agree on. And then, to facilitate commensurability, the evaluation should produce multiple metrics next to each other in the report (for example: always the chosen metric, plus the maximum-normalization (GT level 1) metric, plus the minimum-normalization (GT level 3 / plain Levenshtein) metric). Also, cf. the existing metrics in my module.
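To illustrate the "multiple metrics next to each other" idea, a sketch that reports CER under two hypothetical normalization profiles (the profile names and the GT-level analogy are illustrative only, not an actual OCR-D definition):

```python
import unicodedata

def levenshtein(a, b):
    """Plain edit distance (dynamic programming) over two sequences."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

# Hypothetical profiles, from minimal to maximal normalization.
PROFILES = {
    "minimal (NFC only)":   lambda t: unicodedata.normalize("NFC", t),
    "maximal (NFC + uͦ→ů)": lambda t: unicodedata.normalize("NFC", t).replace("u\u0366", "\u016f"),
}

gt, ocr = "zu\u0366\u017fein", "z\u016f\u017fein"
for name, norm in PROFILES.items():
    g, o = norm(gt), norm(ocr)
    print(f"{name}: CER = {levenshtein(g, o) / len(g):.3f}")
```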
Basically this comes down to a number of pre-defined common use cases or scenarios, with the added possibility for users to create their own scenarios. This is also the approach that was followed during the IMPACT project and which I believe would be a sound tradeoff between comparable results and flexibility with regard to differing applications. |
I mostly agree with what @bertsky and @cneud said. I just want to cast some doubt on the belief that CERs are somehow comparable when produced by different tools. Do they count whitespace the same way? Grapheme clusters? Punctuation? Side note: is there really a set of transformations defined for OCR-D's GT level 1?
I too strongly doubt they are! Looking e.g. at results and metrics from ICDAR papers, many authors resort to their own implementations for evaluation, which obviously creates considerable blur around any exact performance comparison.
We have to get there! As a community. Otherwise, where's the objectivity? Whitespace should be easy, given the strictly implicit PAGE-XML whitespace model. Grapheme clusters are something we agreed on earlier; only our implementations differ (so it should be interesting to compare them to find edge cases). Punctuation – I'm not sure what you mean: punctuation normalization in CER, or tokenization for WER? (The latter, I agree, is a hard one to find any single standard for...)
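One way to segment into grapheme clusters (a sketch using the third-party regex module's \X pattern; the actual implementations mentioned above may differ):

```python
import regex  # third-party module; \X matches an extended grapheme cluster

gt, ocr = "zu\u0366\u017fein", "z\u016f\u017fein"

# At codepoint level the strings have different lengths (7 vs. 6) ...
print(len(gt), len(ocr))                 # 7 6
# ... at grapheme-cluster level both have 6 symbols: uͦ and ů each count
# as one user-perceived character.
print(regex.findall(r"\X", gt))          # ['z', 'uͦ', 'ſ', 'e', 'i', 'n']
print(regex.findall(r"\X", ocr))         # ['z', 'ů', 'ſ', 'e', 'i', 'n']
```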
No, unfortunately not. That's one of the things I have been adamant about in phase 2, but I never got @tboenig or @kba to implement a runnable definition 😟
Unicode allows different representations of the same character. Dinglehopper currently does not detect that such different representations are identical characters, but treats them as a recognition error.
This can be fixed by normalizing the text before doing the comparison.
Example: We just had a case where the GT transcription used zuͦſein (u + COMBINING LATIN SMALL LETTER O) while the OCR detected zůſein (LATIN SMALL LETTER U WITH RING ABOVE). See https://digi.bib.uni-mannheim.de/fileadmin/digi/477429599/dh_055.html.
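A minimal sketch of the kind of normalization proposed here, assuming Python's standard unicodedata module; note that it only unifies canonically equivalent representations, such as the two spellings of ů:

```python
import unicodedata

composed   = "\u016f"    # LATIN SMALL LETTER U WITH RING ABOVE
decomposed = "u\u030a"   # u + COMBINING RING ABOVE

print(composed == decomposed)                                # False: different codepoints
print(composed.encode("utf-8"), decomposed.encode("utf-8"))  # b'\xc5\xaf' vs b'u\xcc\x8a'
print(unicodedata.normalize("NFC", decomposed) == composed)  # True after NFC
```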