Annotation Process

From GM-RKB

An Annotation Process is an information processing process that creates annotated artifacts.



References

2024

  • (Tan et al., 2024) ⇒ Zhen Tan, Alimohammad Beigi, Song Wang, Ruocheng Guo, Amrita Bhattacharjee, Bohan Jiang, Mansooreh Karami, Jundong Li, Lu Cheng, and Huan Liu. (2024). "Large Language Models for Data Annotation: A Survey." arXiv preprint arXiv:2402.13446. [arXiv](https://arxiv.org/abs/2402.13446).
    • NOTES: This survey explores the use of large language models (LLMs) for data annotation, discussing techniques such as zero-shot and few-shot prompting, and highlighting the potential for LLMs to generate high-quality, context-sensitive annotations.
    • QUOTE: “Data annotation generally refers to the labeling or generating of raw data with relevant information, which could be used for improving the efficacy of machine learning models. The process, however, is labor-intensive and costly. The emergence of advanced Large Language Models (LLMs), exemplified by GPT-4, presents an unprecedented opportunity to automate the complicated process of data annotation. While existing surveys have extensively covered LLM architecture, training, and general applications, we uniquely focus on their specific utility for data annotation. This survey contributes to three core aspects: LLM-Based Annotation Generation, LLM-Generated Annotations Assessment, and LLM-Generated Annotations Utilization. Furthermore, this survey includes an in-depth taxonomy of data types that LLMs can annotate, a comprehensive review of learning strategies for models utilizing LLM-generated annotations, and a detailed discussion of the primary challenges and limitations associated with using LLMs for data annotation. Serving as a key guide, this survey aims to assist researchers and practitioners in exploring the potential of the latest LLMs for data annotation, thereby fostering future advancements in this critical field.”
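The zero-shot and few-shot prompting techniques mentioned above can be illustrated with a minimal sketch. The task wording, label set, and example texts below are illustrative assumptions, not drawn from the survey; the function simply assembles the prompt string that would be sent to an LLM.

```python
# Minimal sketch of few-shot prompt construction for LLM-based data
# annotation. All task names, labels, and example texts are assumptions
# for illustration; an empty examples list yields a zero-shot prompt.

def build_fewshot_prompt(task, labels, examples, item):
    """Assemble an annotation prompt for an LLM.

    examples: list of (text, label) pairs shown as demonstrations.
    """
    lines = [
        f"Task: {task}",
        f"Allowed labels: {', '.join(labels)}",
        "",
    ]
    for text, label in examples:
        lines.append(f"Text: {text}")
        lines.append(f"Label: {label}")
        lines.append("")
    # The unlabeled item goes last; the LLM completes the final "Label:".
    lines.append(f"Text: {item}")
    lines.append("Label:")
    return "\n".join(lines)

prompt = build_fewshot_prompt(
    task="Annotate the sentiment of the text.",
    labels=["positive", "negative", "neutral"],
    examples=[("I loved this movie.", "positive"),
              ("The service was terrible.", "negative")],
    item="The plot was fine but the pacing dragged.",
)
print(prompt)
```

The prompt ends with an open "Label:" line so that the model's completion can be parsed directly as the annotation; restricting it to an explicit label set makes the output easier to validate.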
