Annotation & Disagreement

Tools

  1. Snorkel - using weak supervision to create less noisy labelled datasets

  2. Snorkel MeTaL - weak supervision for multi-task learning. Conversation, git:

    1. Yes, the Snorkel project has included work before on hierarchical labeling scenarios. The main papers detailing our results include the DEEM workshop paper you referenced (https://dl.acm.org/doi/abs/10.1145/3209889.3209898) and the more complete paper presented at AAAI (https://arxiv.org/abs/1810.02840). Before the Snorkel and Snorkel MeTaL projects were merged in Snorkel v0.9, the Snorkel MeTaL project included an interface for explicitly specifying hierarchies between tasks which was utilized by the label model and could be used to automatically compile a multi-task end model as well (demo here: https://github.com/HazyResearch/metal/blob/master/tutorials/Multitask.ipynb). That interface is not currently available in Snorkel v0.9 (no fundamental blockers; just hasn't been ported over yet).

    2. There are, however, still a number of ways to model such situations. One way is to treat each node in the hierarchy as a separate task and combine their probabilities post-hoc (e.g., P(credit-request) = P(billing) * P(credit-request | billing)); a small sketch of this approach appears after this list. Another is to treat them as separate tasks and use a multi-task end model to implicitly learn how the predictions of some tasks should affect the predictions of others (e.g., the end model we use in the AAAI paper). A third option is to create a single task with all the leaf categories and modify the output space of the LFs you were considering for the higher nodes (the deeper your hierarchy is, or the larger the number of classes, the less appealing this is w.r.t. approaches 1 and 2).

  3. Doccano - an open-source alternative to Prodigy, but with user management & statistics out of the box.

  4. Loopr.ai - an AI-powered semi-automated and automated annotation process for high-quality data. Object detection, analytics, NLP, active learning.

  5. Vader annotation

    1. They must pass an English exam

    2. They get control questions to establish their reliability

    3. They get a few sentences over and over again to establish intra-annotator consistency (self-agreement)

    4. Two or more people get overlapping sentences to establish inter-annotator (dis)agreement

    5. 5 judges for each sentence (which makes point 4 redundant)

    6. They don't know each other

    7. Simple rules to follow

    8. Random selection of sentences

    9. Balanced (even) classes

    10. No experts

    11. Measuring reliability with kappa (Cohen's / Fleiss').
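
A minimal sketch of approach 1 from the Snorkel MeTaL notes above (Tools, item 2): combining per-node probabilities post-hoc, P(leaf) = P(parent) * P(leaf | parent). The hierarchy, class names, and probabilities here are made up for illustration.

```python
# Post-hoc combination of per-node probabilities in a label hierarchy
# (approach 1 above): P(leaf) = P(parent) * P(leaf | parent).
# The hierarchy and probabilities below are made-up examples.

parent_probs = {"billing": 0.7, "technical": 0.3}

# P(child | parent), one distribution per parent node.
child_probs = {
    "billing": {"credit-request": 0.6, "refund": 0.4},
    "technical": {"bug": 0.9, "outage": 0.1},
}

def leaf_probabilities(parent_probs, child_probs):
    """Multiply parent and conditional child probabilities to get leaf marginals."""
    leaves = {}
    for parent, p_parent in parent_probs.items():
        for child, p_child_given_parent in child_probs[parent].items():
            leaves[child] = p_parent * p_child_given_parent
    return leaves

print(leaf_probabilities(parent_probs, child_probs))
# {'credit-request': 0.42, 'refund': 0.28, 'bug': 0.27, 'outage': 0.03}
```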

Ideas:

  1. Active learning for a group of annotators (or a single one): we have to wait for all annotations of each big batch to finish in order to retrain the model.

  2. Annotate a small set manually, then label the rest automatically using kNN

  3. Find the nearest neighbours of our optimal set of keywords per "category"

  4. For a group of keywords, find their kNN neighbours in w2v space; alternatively, find k clusters in w2v space that contain those keywords. For a new word (or mean sentence vector) in the "category", find the minimal distance to a cluster (with either approach) and assign that cluster's category as the new annotation (see the sketch below).
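
A minimal sketch of ideas 2 and 4 above: manually label a small seed set, represent every item as a vector (the random vectors here are stand-ins for w2v word vectors or mean sentence vectors), and propagate labels to the rest with kNN via scikit-learn's KNeighborsClassifier. All data is a randomly generated placeholder.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical embeddings: in practice these would be w2v word vectors
# or mean sentence vectors, one row per item.
rng = np.random.default_rng(0)
seed_vectors = rng.normal(size=(20, 50))         # small manually annotated set
seed_labels = rng.integers(0, 3, size=20)        # their human labels (3 categories)
unlabeled_vectors = rng.normal(size=(100, 50))   # the rest of the corpus

# Fit kNN on the seed set; cosine distance is a common choice in w2v space.
knn = KNeighborsClassifier(n_neighbors=5, metric="cosine")
knn.fit(seed_vectors, seed_labels)

# Each unlabeled item gets the majority label of its nearest annotated neighbours.
auto_labels = knn.predict(unlabeled_vectors)
print(auto_labels[:10])
```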

Myths

  1. 7 myths of annotation

    1. Myth One: One Truth Most data collection efforts assume that there is one correct interpretation for every input example.

    2. Myth Two: Disagreement Is Bad To increase the quality of annotation data, disagreement among the annotators should be avoided or reduced.

    3. Myth Three: Detailed Guidelines Help When specific cases continuously cause disagreement, more instructions are added to limit interpretations.

    4. Myth Four: One Is Enough Most annotated examples are evaluated by one person.

    5. Myth Five: Experts Are Better Human annotators with domain knowledge provide better annotated data.

    6. Myth Six: All Examples Are Created Equal The mathematics of using ground truth treats every example the same; either you match the correct result or not.

    7. Myth Seven: Once Done, Forever Valid Once human annotated data is collected for a task, it is used over and over with no update. New annotated data is not aligned with previous data.

Crowd Sourcing

  • Conclusions:

    • A crowd produces annotations of the same quality as experts,

    • and costs a lot less $$$.

Disagreement

Inter-annotator agreement

*** The best tutorial on agreements: Cohen's kappa, Dawid-Skene, Krippendorff's alpha, etc.

  1. Cohen's kappa (two annotators), but you can use it for a group by calculating agreement for each pair of annotators (see the pairwise sketch below).

  2. Kappa and its relation to accuracy (redundant; kappa is % agreement above chance; should not be used due to other reasons researched here).

The kappa statistic varies from 0 to 1, where:

  • 0 = agreement equivalent to chance.

  • 0.01 – 0.20 = slight agreement.

  • 0.21 – 0.40 = fair agreement.

  • 0.41 – 0.60 = moderate agreement.

  • 0.61 – 0.80 = substantial agreement.

  • 0.81 – 0.99 = near perfect agreement.

  • 1 = perfect agreement.
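
A minimal sketch of the pairwise use of Cohen's kappa mentioned above, with made-up labels from three annotators and scikit-learn's cohen_kappa_score.

```python
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

# Made-up labels from three annotators on the same 8 items.
annotations = {
    "ann1": [1, 0, 1, 1, 0, 1, 0, 0],
    "ann2": [1, 0, 1, 0, 0, 1, 0, 1],
    "ann3": [1, 1, 1, 1, 0, 1, 0, 0],
}

# Cohen's kappa is defined for two raters, so compute it for every pair
# and average to get a rough group-level agreement figure.
scores = {}
for a, b in combinations(annotations, 2):
    scores[(a, b)] = cohen_kappa_score(annotations[a], annotations[b])

for pair, kappa in scores.items():
    print(pair, round(kappa, 3))
print("mean pairwise kappa:", round(sum(scores.values()) / len(scores), 3))
```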

  3. Fleiss' kappa - for 3 annotators and above.

Kappa ranges from 0 to 1, where:

  • 0 is no agreement (or agreement that you would expect to find by chance),

  • 1 is perfect agreement.

  • Fleiss’s Kappa is an extension of Cohen’s kappa for three raters or more. In addition, the assumption with Cohen’s kappa is that your raters are deliberately chosen and fixed. With Fleiss’ kappa, the assumption is that your raters were chosen at random from a larger population.

  • Kendall’s Tau is used when you have ranked data, like two people ordering 10 candidates from most preferred to least preferred.

  • Krippendorff’s alpha is useful when you have multiple raters and multiple possible ratings.
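
A minimal sketch computing Fleiss' kappa with statsmodels, plus Kendall's tau from SciPy for ranked data; all ratings and rankings below are made up.

```python
import numpy as np
from scipy.stats import kendalltau
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Made-up data: 6 items rated by 4 annotators with labels {0, 1, 2}.
ratings = np.array([
    [0, 0, 0, 1],
    [1, 1, 1, 1],
    [2, 2, 1, 2],
    [0, 1, 0, 0],
    [2, 2, 2, 2],
    [1, 0, 1, 1],
])

# aggregate_raters turns (items x raters) labels into (items x categories) counts.
counts, _categories = aggregate_raters(ratings)
print("Fleiss' kappa:", round(fleiss_kappa(counts), 3))

# Kendall's tau for two annotators ranking the same 5 candidates.
ranking_a = [1, 2, 3, 4, 5]
ranking_b = [2, 1, 3, 5, 4]
tau, p_value = kendalltau(ranking_a, ranking_b)
print("Kendall's tau:", round(tau, 3))
```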

  4. Krippendorff's alpha.
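
A minimal sketch using the krippendorff PyPI package (assuming it is installed via pip install krippendorff); it handles multiple raters, missing ratings (np.nan), and different levels of measurement. The data is made up.

```python
import numpy as np
import krippendorff  # pip install krippendorff

# Made-up reliability data: rows are raters, columns are items;
# np.nan means the rater did not label that item.
reliability_data = np.array([
    [0,      1, 1, 2, np.nan, 0],
    [0,      1, 0, 2, 1,      0],
    [np.nan, 1, 1, 2, 1,      0],
], dtype=float)

alpha = krippendorff.alpha(reliability_data=reliability_data,
                           level_of_measurement="nominal")
print("Krippendorff's alpha:", round(alpha, 3))
```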

  5. MACE - the new kid on the block. It learns in an unsupervised fashion to (a) identify which annotators are trustworthy and (b) predict the correct underlying labels; the authors report matching the performance of more complex state-of-the-art systems and performing well even under adversarial conditions.

    1. MACE does exactly that: it tries to find out which annotators are more trustworthy and upweighs their answers.

    2. Git - when evaluating redundant annotations (like those from Amazon's Mechanical Turk), we usually want to:

      1. aggregate annotations to recover the most likely answer

      2. find out which annotators are trustworthy

      3. evaluate item and task difficulty

    MACE solves all of these problems by learning competence estimates for each annotator and computing the most likely answer based on those competences.
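
This is not MACE itself (MACE fits an unsupervised generative model; use the official implementation for real work), but a toy sketch of the underlying idea: iteratively estimate each annotator's competence from agreement with the current consensus and recompute a competence-weighted consensus. All annotations are made up.

```python
from collections import defaultdict

# Toy redundant annotations: item -> {annotator: label}. Made-up data.
annotations = {
    "item1": {"a": "pos", "b": "pos", "c": "neg"},
    "item2": {"a": "neg", "b": "neg", "c": "neg"},
    "item3": {"a": "pos", "b": "neg", "c": "neg"},
    "item4": {"a": "pos", "b": "pos", "c": "pos"},
}

def consensus(annotations, weights):
    """Competence-weighted majority vote per item."""
    result = {}
    for item, votes in annotations.items():
        tally = defaultdict(float)
        for annotator, label in votes.items():
            tally[label] += weights[annotator]
        result[item] = max(tally, key=tally.get)
    return result

# Start with equal weights, then iterate:
# competence = fraction of items where the annotator matches the consensus.
weights = {a: 1.0 for votes in annotations.values() for a in votes}
for _ in range(5):
    answers = consensus(annotations, weights)
    for annotator in weights:
        labeled = [(item, votes[annotator]) for item, votes in annotations.items()
                   if annotator in votes]
        agree = sum(label == answers[item] for item, label in labeled)
        weights[annotator] = agree / len(labeled)

print("estimated competences:", weights)
print("aggregated answers:", consensus(annotations, weights))
```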

Calculating agreement

  1. Compare against researcher-ground-truth

  2. Self-agreement
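
A minimal sketch of both checks with scikit-learn's cohen_kappa_score: agreement against a researcher-produced gold set, and self-agreement between two annotation rounds by the same person. All labels are made up.

```python
from sklearn.metrics import cohen_kappa_score

# Made-up labels for the same 10 items.
researcher_gold = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
annotator_round1 = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
annotator_round2 = [1, 0, 1, 0, 0, 1, 0, 0, 1, 1]  # same person, later re-annotation

# 1. Agreement against researcher ground truth.
print("vs gold:", round(cohen_kappa_score(researcher_gold, annotator_round1), 3))

# 2. Self-agreement (intra-annotator consistency across rounds).
print("self-agreement:", round(cohen_kappa_score(annotator_round1, annotator_round2), 3))
```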

Troubleshooting agreement metrics

  1. Interpreting agreement: accuracy, precision, kappa.

Machine Vision annotation
