PPLRE Automated Evaluation Task
- The PPLRE Automated Evaluation Task is the subtask of the general PPLRE Evaluation Task in which the performance of PPLRE Relation Recognition Algorithms is automatically evaluated, along with the PPLRE Information Extraction System that these algorithms support.
- Context:
- It is evaluated using the PPLRE Automated Evaluation System.
- See: Relation Recognition Task.
Overview
The PPLRE Automated Evaluation Task is an objective analysis of a PPLRE Relation Recognition Algorithm's ability to recognize all of the OPL relations within the PPLRE Curated Corpus.
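The page does not spell out the scoring procedure, but an automated evaluation of this kind is typically run by comparing an algorithm's predicted relation instances against the curated gold-standard relations and reporting precision, recall, and F1. The Python sketch below is only an illustration under that assumption; the metric choice, the score_relations function, and the example relation tuples are hypothetical and not part of the PPLRE specification.

```python
# Minimal sketch of scoring a relation recognition algorithm against a
# curated gold standard. The data representation is an assumption: each
# relation instance is modeled as a (document id, relation tuple) pair so
# that the same relation found in different abstracts is counted separately.

def score_relations(predicted, gold):
    """Compare predicted relation instances against curated gold relations."""
    predicted, gold = set(predicted), set(gold)
    true_positives = len(predicted & gold)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1


if __name__ == "__main__":
    # Hypothetical relation instances, for illustration only.
    gold = [("PMID:1", ("E. coli", "OmpA", "outer membrane")),
            ("PMID:2", ("H. pylori", "VacA", "secreted"))]
    predicted = [("PMID:1", ("E. coli", "OmpA", "outer membrane")),
                 ("PMID:3", ("B. subtilis", "SpoIIE", "cytoplasm"))]
    print(score_relations(predicted, gold))  # (0.5, 0.5, 0.5)
```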
Evaluation Data
The current version is v1.3.
v1.3
This version divides the data into a labeled Train Set, a Test Set, and an unlabeled Train Set; a small consistency check over the counts below follows the list.
- Labeled Train Set:
- 614 abstracts
- 333 Positive Examples (from 213 abstracts)
- 647 Negative Examples (from 411 abstracts)
- 980 relations in total
- Test Set:
- 132 abstracts
- 65 Positive Examples (passages that contain a relation)
- 145 Negative Examples (passages without a relation)
- 210 OPL Relations in total
- Unlabeled Train Set:
- ~20,000 abstracts
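Because these set-membership counts are the main contract between the corpus and any evaluation run, it can be useful to sanity-check them programmatically. The sketch below is only illustrative: the LabeledSplit structure and its field names are assumptions rather than part of the PPLRE data release, and the only facts used are the counts listed above (333 + 647 = 980 labeled relations; 65 + 145 = 210 test relations).

```python
# Consistency check over the v1.3 corpus statistics listed on this page.
from dataclasses import dataclass


@dataclass
class LabeledSplit:
    abstracts: int
    positive_examples: int
    negative_examples: int
    total_relations: int

    def is_consistent(self) -> bool:
        # Positive and negative examples should sum to the stated total.
        return self.positive_examples + self.negative_examples == self.total_relations


labeled_train = LabeledSplit(abstracts=614, positive_examples=333,
                             negative_examples=647, total_relations=980)
test = LabeledSplit(abstracts=132, positive_examples=65,
                    negative_examples=145, total_relations=210)

assert labeled_train.is_consistent()  # 333 + 647 == 980
assert test.is_consistent()           # 65 + 145 == 210
```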
PPLRE Automated Evaluation Results
This section describes the results of the PPLRE Automated Evaluations that have been performed.
The following PPLRE Information Extraction Algorithms have been (or will shortly be) evaluated:
- PPLRE Evaluation - Cooccurrence.
- PPLRE Evaluation - Snowball.
- PPLRE Evaluation - ZParser.
- PPLRE Evaluation - ZParser Bootstrapped.
- PPLRE Evaluation - BRN.