quica is a tool to run inter-coder agreement pipelines in an easy and effective way. Multiple measures are run and the results are collected in a single table that can be easily exported to LaTeX.
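To illustrate the pipeline idea (this is a generic sketch, not quica's actual API), here is a minimal example that computes two agreement measures, collects them in a single pandas table, and exports it to LaTeX; the coder data and variable names are hypothetical.

```python
import pandas as pd
from sklearn.metrics import cohen_kappa_score

# Hypothetical annotations from two coders over the same six items.
coder_a = [0, 1, 0, 1, 0, 1]
coder_b = [0, 1, 0, 1, 0, 0]

# Percent agreement: fraction of items on which the coders match.
percent_agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)

# Cohen's kappa: agreement corrected for chance.
kappa = cohen_kappa_score(coder_a, coder_b)

# Collect all measures in one table and export it as LaTeX.
results = pd.DataFrame(
    {"score": [percent_agreement, kappa]},
    index=["percent agreement", "Cohen's kappa"],
)
print(results.to_latex())
```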
Qualitative Coding Assistant for Google Sheets
Kupper-Hafner inter-rater agreement calculation library
The official Crowd Deliberation data set.
A Python script to compute the kappa coefficient, a statistical measure of inter-rater agreement.
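For reference, Cohen's kappa is defined as kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement between the two raters and p_e is the agreement expected by chance from their marginal label distributions. A minimal dependency-free sketch:

```python
from collections import Counter

def cohen_kappa(rater1, rater2):
    """Cohen's kappa for two raters: (p_o - p_e) / (1 - p_e)."""
    n = len(rater1)
    # Observed agreement: proportion of items both raters labelled identically.
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement: product of each rater's marginal label probabilities.
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[label] * c2[label] for label in c1) / (n * n)
    return (p_o - p_e) / (1 - p_e)

print(cohen_kappa(["a", "b", "a", "a"], ["a", "b", "b", "a"]))  # 0.5
```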
Evaluation and agreement scripts for the DISCOSUMO project. Each evaluation script takes both manual annotations and automatic summarization output as input. The formatting of these files is highly project-specific. However, the evaluation functions for precision, recall, ROUGE, Jaccard, Cohen's kappa, and Fleiss' kappa may be applicable to other domains too.
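Of the measures listed, Fleiss' kappa generalizes chance-corrected agreement to more than two raters. One way to compute it, assuming the statsmodels package and made-up example data, is:

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical data: rows are items, columns are three raters' category labels.
ratings = np.array([
    [0, 0, 0],
    [0, 0, 1],
    [1, 1, 1],
    [1, 0, 1],
    [2, 2, 2],
])

# aggregate_raters converts raw labels into per-item category counts,
# the input format fleiss_kappa expects.
table, _ = aggregate_raters(ratings)
print(fleiss_kappa(table))
```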
Replication package for the archetypal analysis conducted in the paper "Evaluating the Agreement among Technical Debt Measurement Tools: Building an Empirical Benchmark of Technical Debt Liabilities", accepted at Springer's EMSE journal.
Python tool for calculating inter-rater reliability metrics and generating comprehensive reports for multi-rater datasets. Optionally, an LLM can generate an interpretation report.
A calculator for two different inter-rater agreement statistics, generalized to any number of categories.
[MICCAI ISIC 2024] Code for "Segmentation Style Discovery: Application to Skin Lesion Images"