Annotation & Disagreement
Snorkel MeTaL - weak supervision for multi-task learning. Conversation, git
Yes, the Snorkel project has included work before on hierarchical labeling scenarios. The main papers detailing our results include the DEEM workshop paper you referenced (https://dl.acm.org/doi/abs/10.1145/3209889.3209898) and the more complete paper presented at AAAI (https://arxiv.org/abs/1810.02840). Before the Snorkel and Snorkel MeTaL projects were merged in Snorkel v0.9, the Snorkel MeTaL project included an interface for explicitly specifying hierarchies between tasks which was utilized by the label model and could be used to automatically compile a multi-task end model as well (demo here: https://github.com/HazyResearch/metal/blob/master/tutorials/Multitask.ipynb). That interface is not currently available in Snorkel v0.9 (no fundamental blockers; just hasn't been ported over yet).
There are, however, still a number of ways to model such situations. One way is to treat each node in the hierarchy as a separate task and combine their probabilities post-hoc (e.g., P(credit-request) = P(billing) * P(credit-request | billing)). Another is to treat them as separate tasks and use a multi-task end model to implicitly learn how the predictions of some tasks should affect the predictions of others (e.g., the end model we use in the AAAI paper). A third option is to create a single task with all the leaf categories and modify the output space of the LFs you were considering for the higher nodes (the deeper your hierarchy is or the larger the number of classes, the less appealing this is w/r/t approaches 1 and 2).
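A minimal sketch of the first option (post-hoc combination of per-node probabilities), assuming hypothetical task names and two independently trained classifiers; this is not part of the Snorkel API:

```python
# Sketch of option 1: treat each node in the hierarchy as its own task and
# combine the per-task probabilities post-hoc. Task names are illustrative.

# P(parent) from the top-level classifier, e.g. P(billing) vs P(shipping).
p_parent = {"billing": 0.7, "shipping": 0.3}

# P(leaf | parent) from per-parent classifiers over their children.
p_child_given_parent = {
    "billing": {"credit-request": 0.8, "invoice-question": 0.2},
    "shipping": {"late-delivery": 0.9, "address-change": 0.1},
}

# Chain rule: P(leaf) = P(parent) * P(leaf | parent).
p_leaf = {
    child: p_parent[parent] * p_cond
    for parent, children in p_child_given_parent.items()
    for child, p_cond in children.items()
}

print(max(p_leaf, key=p_leaf.get), p_leaf)  # credit-request ~= 0.56
```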
Doccano - open-source Prodigy alternative, but with user management & statistics out of the box
Loopr.ai - an AI-powered semi-automated and automated annotation process for high-quality data. Object detection, analytics, NLP, active learning.
Annotating Twitter sentiment using humans, 3 classes, 55% accuracy using SVMs. They discuss inter-annotator agreement etc., and their dataset is partially publicly available.
They must pass an English exam
They get control questions to establish their reliability
They get a few sentences over and over again to establish self-agreement (intra-annotator consistency)
Two or more people get overlapping sentences to establish inter-annotator agreement
5 judges for each sentence (which arguably makes 4 of them redundant)
They don't know each other
Simple rules to follow
Random selection of sentences
Even classes
No experts
Measuring reliability: Cohen's kappa / Fleiss' kappa.
Ideas:
Active learning for a group of annotators (or a single one); we have to wait for all annotations of each big batch to finish before retraining the model.
Annotate a small set manually, then label the rest automatically using KNN
Find the nearest neighbors of our optimal set of keywords per "category"
For a group of keywords, find their KNN neighbors in word2vec space, or alternatively find k clusters in word2vec space that contain those keywords. For a new word (or mean sentence vector), assign the 'category' whose cluster/neighbors are at minimal distance (with either approach); this becomes the new annotation (see the KNN sketch after this list).
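A minimal sketch of the KNN auto-labeling idea above, assuming precomputed embeddings (e.g., word2vec / mean sentence vectors); the embeddings here are random placeholders and the 0.8 confidence threshold is an arbitrary choice:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Placeholder embeddings: in practice these would be word2vec / mean sentence vectors.
X_seed = rng.normal(size=(20, 50))          # small manually annotated seed set
y_seed = rng.integers(0, 3, size=20)        # its human labels (3 categories)
X_unlabeled = rng.normal(size=(200, 50))    # the rest of the corpus

# Fit KNN on the seed set and use it to auto-label the unlabeled examples.
knn = KNeighborsClassifier(n_neighbors=5, metric="cosine")
knn.fit(X_seed, y_seed)
auto_labels = knn.predict(X_unlabeled)
confidence = knn.predict_proba(X_unlabeled).max(axis=1)

# Keep only confident auto-labels; send the rest back to human annotators.
keep = confidence >= 0.8
print(f"auto-labeled {keep.sum()} / {len(X_unlabeled)} examples")
```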
Myth One: One Truth Most data collection efforts assume that there is one correct interpretation for every input example.
Myth Two: Disagreement Is Bad To increase the quality of annotation data, disagreement among the annotators should be avoided or reduced.
Myth Three: Detailed Guidelines Help When specific cases continuously cause disagreement, more instructions are added to limit interpretations.
Myth Four: One Is Enough Most annotated examples are evaluated by one person.
Myth Five: Experts Are Better Human annotators with domain knowledge provide better annotated data.
Myth Six: All Examples Are Created Equal The mathematics of using ground truth treats every example the same; either you match the correct result or not.
Myth Seven: Once Done, Forever Valid Once human annotated data is collected for a task, it is used over and over with no update. New annotated data is not aligned with previous data.
Conclusions:
Experts are the same as a crowd
The crowd costs a lot less $$$.
*** The best tutorial on agreement measures: Cohen's kappa, Dawid-Skene, Krippendorff's alpha, etc.
Cohen's kappa (two raters),
but you can use it for a group by calculating agreement for each pair of annotators and averaging (see the sketch after the scale below).
Kappa and its relation to accuracy (kappa is essentially the % agreement above chance; the linked work argues it should not be used, for other reasons researched there).
The Kappa statistic varies from 0 to 1, where:
0 = agreement equivalent to chance.
0.01 – 0.20 = slight agreement.
0.21 – 0.40 = fair agreement.
0.41 – 0.60 = moderate agreement.
0.61 – 0.80 = substantial agreement.
0.81 – 0.99 = near-perfect agreement.
1 = perfect agreement.
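A minimal sketch of Cohen's kappa for a pair of raters, plus the pairwise-average trick for a group, using scikit-learn's cohen_kappa_score on toy labels:

```python
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

# Toy labels from three annotators on the same 8 items.
ratings = {
    "ann_1": ["pos", "neg", "neg", "pos", "neu", "pos", "neg", "pos"],
    "ann_2": ["pos", "neg", "pos", "pos", "neu", "pos", "neg", "neg"],
    "ann_3": ["pos", "neg", "neg", "pos", "pos", "pos", "neg", "pos"],
}

# Cohen's kappa for a single pair of raters.
print(cohen_kappa_score(ratings["ann_1"], ratings["ann_2"]))

# For a group: average kappa over all pairs of annotators.
pairs = list(combinations(ratings, 2))
mean_kappa = sum(
    cohen_kappa_score(ratings[a], ratings[b]) for a, b in pairs
) / len(pairs)
print(f"mean pairwise kappa: {mean_kappa:.3f}")
```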
Fleiss' kappa, for three raters and above.
Kappa ranges from 0 to 1, where:
0 is no agreement (or agreement that you would expect to find by chance),
1 is perfect agreement.
Fleiss’s Kappa is an extension of Cohen’s kappa for three raters or more. In addition, the assumption with Cohen’s kappa is that your raters are deliberately chosen and fixed. With Fleiss’ kappa, the assumption is that your raters were chosen at random from a larger population.
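A minimal sketch of Fleiss' kappa on toy ratings, assuming statsmodels' inter_rater utilities (aggregate_raters + fleiss_kappa); the data and category codes are made up:

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Rows are items, columns are raters; values are category codes (0, 1, 2).
# Toy data for 6 items rated by 4 annotators.
ratings = np.array([
    [0, 0, 0, 1],
    [1, 1, 1, 1],
    [2, 2, 1, 2],
    [0, 1, 0, 0],
    [2, 2, 2, 2],
    [1, 0, 1, 1],
])

# aggregate_raters turns the item-by-rater matrix into item-by-category counts.
table, _ = aggregate_raters(ratings)
print(fleiss_kappa(table, method="fleiss"))
```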
Kendall’s Tau is used when you have ranked data, like two people ordering 10 candidates from most preferred to least preferred.
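A tiny sketch, assuming SciPy's kendalltau; the two lists are toy rankings of the same five candidates by two reviewers:

```python
from scipy.stats import kendalltau

# Each list ranks the same five candidates (1 = most preferred).
ranks_reviewer_a = [1, 2, 3, 4, 5]
ranks_reviewer_b = [2, 1, 3, 5, 4]

tau, p_value = kendalltau(ranks_reviewer_a, ranks_reviewer_b)
print(f"Kendall's tau: {tau:.2f} (p={p_value:.2f})")
```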
Krippendorff’s alpha is useful when you have multiple raters and multiple possible ratings.
Krippendorff's alpha
Can handle various sample sizes, categories, and numbers of raters.
Applies to any measurement level (i.e., nominal, ordinal, interval, ratio).
Values range up to 1, where 1 is perfect agreement and 0 is agreement no better than chance (negative values indicate systematic disagreement). Krippendorff suggests: “[I]t is customary to require α ≥ .800. Where tentative conclusions are still acceptable, α ≥ .667 is the lowest conceivable limit (2004, p. 241).”
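A minimal sketch of Krippendorff's alpha, assuming the third-party krippendorff package (pip install krippendorff); rows are raters, columns are items, and NaN marks ratings a rater skipped:

```python
import numpy as np
import krippendorff  # pip install krippendorff

# Rows are raters, columns are items; np.nan marks items a rater skipped.
reliability_data = np.array([
    [0,      0, 1, 2, np.nan, 1],
    [0,      0, 1, 2, 2,      1],
    [np.nan, 0, 1, 2, 2,      0],
])

# Nominal level of measurement; the package also supports ordinal/interval/ratio.
alpha = krippendorff.alpha(reliability_data=reliability_data,
                           level_of_measurement="nominal")
print(f"Krippendorff's alpha: {alpha:.3f}")
```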
MACE - the new kid on the block.
learns in an unsupervised fashion to
a) identify which annotators are trustworthy and
b) predict the correct underlying labels. We match performance of more complex state-of-the-art systems and perform well even under adversarial conditions
MACE does exactly that. It tries to find out which annotators are more trustworthy and upweights their answers.
Git -
When evaluating redundant annotations (like those from Amazon's Mechanical Turk), we usually want to:
aggregate annotations to recover the most likely answer
find out which annotators are trustworthy
evaluate item and task difficulty
MACE solves all of these problems by learning competence estimates for each annotator and computing the most likely answer based on those competences.
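The real MACE is an unsupervised generative model trained with EM (released as a Java tool); the sketch below is only a simplified illustration of the core idea, alternating between competence-weighted voting and re-estimating each annotator's competence, not the actual MACE model:

```python
from collections import defaultdict

# (item, annotator, label) triples, e.g. exported from Mechanical Turk.
annotations = [
    ("i1", "a1", "pos"), ("i1", "a2", "pos"), ("i1", "a3", "neg"),
    ("i2", "a1", "neg"), ("i2", "a2", "neg"), ("i2", "a3", "neg"),
    ("i3", "a1", "pos"), ("i3", "a2", "neg"), ("i3", "a3", "neg"),
]

annotators = {a for _, a, _ in annotations}
competence = {a: 1.0 for a in annotators}  # start by trusting everyone equally

for _ in range(10):
    # (a) competence-weighted vote per item to guess the underlying label
    votes = defaultdict(lambda: defaultdict(float))
    for item, ann, label in annotations:
        votes[item][label] += competence[ann]
    guesses = {item: max(v, key=v.get) for item, v in votes.items()}

    # (b) competence = fraction of items where the annotator matches the guess
    hits, counts = defaultdict(float), defaultdict(int)
    for item, ann, label in annotations:
        hits[ann] += float(label == guesses[item])
        counts[ann] += 1
    competence = {a: hits[a] / counts[a] for a in annotators}

print(guesses)      # recovered labels
print(competence)   # trust estimate per annotator
```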
Calculating agreement
Compare against researcher-ground-truth
Self-agreement
Inter-agreement
Imbalanced data sets, i.e., why is reliability so low when the percentage of agreement is high? (See the worked example below.)
Interpreting agreement: accuracy, precision, kappa.
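A worked toy example of the imbalance point above: with a rare positive class, two raters can agree on 96% of items while kappa stays around 0.32, because most of that agreement is already expected by chance:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Two annotators label 100 items; the "interesting" class is rare (5 items).
# They agree on all 95 majority items and on only 1 of the 5 rare items.
rater_1 = ["neg"] * 95 + ["pos"] * 5
rater_2 = ["neg"] * 95 + ["pos", "neg", "neg", "neg", "neg"]

agreement = np.mean([a == b for a, b in zip(rater_1, rater_2)])
print(f"percent agreement: {agreement:.2f}")                 # 0.96
print(f"kappa: {cohen_kappa_score(rater_1, rater_2):.2f}")   # ~0.32
```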