Word Error Rate (WER) Measure
A Word Error Rate (WER) Measure is an error rate measure used as a performance metric for evaluating speech recognition and other natural language processing systems.
- Context:
- It can (often) be a Speech-to-Text Performance Measure (for StT tasks)
- It can be derived from the Levenshtein distance.
- It can be used as an NLP Benchmark Task performance metric.
- …
- Example(s):
- $WER = \dfrac{S+D+I}{S+D+C}$
where $S$ is the number of word substitutions, $D$ is the number of deletions, $I$ the number of insertions, and $C$ is the number of correct words (a minimal computation sketch follows this list).
- $\mathrm{WER}=\dfrac{1}{N_{ref}^{*}}\displaystyle \sum_{k=1}^{K} \min_{r} d_{L}\left(ref_{k,r}, hyp_{k}\right)$
where $d_{L}\left(ref_{k,r}, hyp_{k}\right)$ is the Levenshtein distance between the reference sentence $ref_{k,r}$ and the hypothesis sentence $hyp_k$.
- $\operatorname{WER}(p)=\dfrac{1}{N_{ref}^{*}} \displaystyle\sum_{k=1}^{K} n\left(p, err_{k}\right)$
where $n(p, err_k)$ is the number of errors in $err_k$ produced by words with POS class $p$.
- OOV Word Error Rate,
- e-WER (Ali & Renals, 2018),
- …
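The first formula in the list above is simple arithmetic over the four alignment counts. Below is a minimal Python sketch, assuming the counts $S$, $D$, $I$, and $C$ have already been obtained from a prior word-level alignment step:

```python
def wer_from_counts(S: int, D: int, I: int, C: int) -> float:
    """Word Error Rate from substitution, deletion, insertion, and correct-word counts.

    The denominator S + D + C equals N, the number of words in the reference.
    """
    N = S + D + C
    if N == 0:
        raise ValueError("the reference must contain at least one word")
    return (S + D + I) / N

# 2 substitutions, 1 deletion, 1 insertion, 7 correct words:
# WER = (2 + 1 + 1) / (2 + 1 + 7) = 0.4
print(wer_from_counts(S=2, D=1, I=1, C=7))
```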
- Counter-Example(s):
- See: Recall, Information Retrieval, Speech Recognition, Machine Translation, Levenshtein Distance, Phoneme.
References
2020
- (Szymanski et al., 2020) ⇒ Piotr Szymanski, Piotr Zelasko, Mikolaj Morzy, Adrian Szymczak, Marzena Zyla-Hoppe, Joanna Banaszczak, Lukasz Augustyniak, Jan Mizgajski, and Yishay Carmiel. (2020). “WER We Are and WER We Think We Are.” In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings (EMNLP 2020) Online Event.
- ABSTRACT: Natural language processing of conversational speech requires the availability of high-quality transcripts. In this paper, we express our skepticism towards the recent reports of very low Word Error Rates (WERs) achieved by modern Automatic Speech Recognition (ASR) systems on benchmark datasets. We outline several problems with popular benchmarks and compare three state-of-the-art commercial ASR systems on an internal dataset of real-life spontaneous human conversations and HUB'05 public benchmark. We show that WERs are significantly higher than the best reported results. We formulate a set of guidelines which may aid in the creation of real-life, multi-domain datasets with high quality annotations for training and testing of robust ASR systems.
2018
- (Ali & Renals, 2018) ⇒ Ahmed Ali, and Steve Renals. (2018). “Word Error Rate Estimation for Speech Recognition: E-WER.” In: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018) Volume 2: Short Papers.
- QUOTE: Word Error Rate (WER) is the standard approach to evaluate the performance of a large vocabulary continuous speech recognition (LVCSR) system. The word sequence hypothesised by the ASR system is aligned with a reference transcription, and the number of errors is computed as the sum of substitutions ($S$), insertions ($I$), and deletions ($D$). If there are $N$ total words in the reference transcription, then the word error rate WER is computed as follows:
$WER = \dfrac{I + D + S}{N} \times 100 \qquad (1)$
- To obtain a reliable estimate of the WER, at least two hours of test data are required for a typical LVCSR system.
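The computation described in the quote (aligning the hypothesis against the reference and dividing the resulting edit distance by the reference length) can be sketched with a standard word-level Levenshtein dynamic program. This is an illustrative implementation of Equation (1), expressed as a fraction rather than a percentage, and not the authors' scoring tool:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance between hypothesis and reference,
    divided by the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j]: minimum edits turning the first i reference words
    # into the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                                  # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                                  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,           # deletion
                          d[i][j - 1] + 1,           # insertion
                          d[i - 1][j - 1] + sub)     # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("this is wikipedia", "this wikipedia"))   # one deletion in three words ≈ 0.333
```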
2017
- (Wikipedia, 2017) ⇒ https://en.wikipedia.org/wiki/word_error_rate Retrieved:2017-4-7.
- Word error rate (WER) is a common metric of the performance of a speech recognition or machine translation system.
The general difficulty of measuring performance lies in the fact that the recognized word sequence can have a different length from the reference word sequence (supposedly the correct one). The WER is derived from the Levenshtein distance, working at the word level instead of the phoneme level. The WER is a valuable tool for comparing different systems as well as for evaluating improvements within one system. This kind of measurement, however, provides no details on the nature of translation errors and further work is therefore required to identify the main source(s) of error and to focus any research effort.
This problem is solved by first aligning the recognized word sequence with the reference (spoken) word sequence using dynamic string alignment. Examination of this issue is seen through a theory called the power law that states the correlation between perplexity and word error rate. Word error rate can then be computed as:
$\mathit{WER} = \dfrac{S+D+I}{N} = \dfrac{S+D+I}{S+D+C}$
where
- S is the number of substitutions,
- D is the number of deletions,
- I is the number of insertions,
- C is the number of correct words,
- N is the number of words in the reference (N=S+D+C)
- The intuition behind 'deletion' and 'insertion' is how to get from the reference to the hypothesis. So if we have the reference "This is wikipedia" and hypothesis "This — wikipedia", we call it a deletion.
When reporting the performance of a speech recognition system, sometimes word accuracy (WAcc) is used instead:
$\mathit{WAcc} = 1 - \mathit{WER} = \dfrac{N-S-D-I}{N} = \dfrac{H-I}{N}$ where
- H is N-(S+D), the number of correctly recognized words.
- If $I=0$, then WAcc is equivalent to Recall (information retrieval), the ratio of correctly recognized words $H$ to the total number of words in the reference $N$.
Note that since N is the number of words in the reference, the word error rate can be larger than 1.0, and thus, the word accuracy can be smaller than 0.0. This problem can be overcome by using the hit rate with respect to the total number of test-reference match pairs found by the matching process used in scoring, (H+S+D+I), rather than with respect to the number of reference words, (H+S+D). This gives the match-accuracy rate as MAcc = H/(H+S+D+I) and match error rate, MER = 1-MAcc = (S+D+I)/(H+S+D+I). WAcc and WER as defined above are, however, the de facto standard most often used in speech recognition.
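The relationships among WER, WAcc, and MER described above follow directly from the alignment counts. A small sketch, assuming the counts $S$, $D$, $I$, and $N$ have already been obtained from an alignment:

```python
def accuracy_metrics(S: int, D: int, I: int, N: int) -> dict:
    """WER, WAcc, and MER from alignment counts; H = N - S - D is the hit count."""
    H = N - S - D
    return {
        "WER": (S + D + I) / N,                      # can exceed 1.0 when I is large
        "WAcc": (H - I) / N,                         # equals 1 - WER; can go negative
        "MER": (S + D + I) / (H + S + D + I),        # bounded between 0 and 1
    }

# Many insertions: WER > 1 and WAcc < 0, while MER stays below 1.
print(accuracy_metrics(S=3, D=2, I=6, N=8))
```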
2007
- (Popovic & Ney, 2007) ⇒ Maja Popovic, and Hermann Ney. (2007). “Word Error Rates: Decomposition over POS Classes and Applications for Error Analysis.” In: Proceedings of the Second Workshop on Statistical Machine Translation (WMT@ACL 2007).
- QUOTE: Evaluation of machine translation output is a very important but difficult task. Human evaluation is expensive and time consuming. Therefore a variety of automatic evaluation measures have been studied over the last years. The most widely used are Word Error Rate (WER), Position independent word Error Rate (PER), the BLEU score (Papineni et al., 2002) and the NIST score (Doddington, 2002). (...)
Calculation of WER: The WER of the hypothesis $hyp$ with respect to the reference $ref$ is calculated as:
$\mathrm{WER}=\dfrac{1}{N_{ref}^{*}} \displaystyle\sum_{k=1}^{K} \min_{r} d_{L}\left(ref_{k,r}, hyp_{k}\right)$ where $d_{L}\left(ref_{k,r}, hyp_{k}\right)$ is the Levenshtein distance between the reference sentence $ref_{k,r}$ and the hypothesis sentence $hyp_k$. The calculation of WER is performed using a dynamic programming algorithm.
(...) The dynamic programming algorithm for WER enables a simple and straightforward identification of each erroneous word which actually contributes to WER. Let $err_k$ denote the set of erroneous words in sentence $k$ with respect to the best reference and $p$ be a POS class. Then $n(p, err_k)$ is the number of errors in $err_k$ produced by words with POS class $p$. It should be noted that for the substitution errors, the POS class of the involved reference word is taken into account. POS tags of the reference words are also used for the deletion errors, and for the insertion errors the POS class of the hypothesis word is taken. The WER for the word class $p$ can be calculated as:
$\operatorname{WER}(p)=\dfrac{1}{N_{ref}^{*}} \displaystyle\sum_{k=1}^{K} n\left(p, err_{k}\right)$ The sum over all classes is equal to the standard overall WER.
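The POS-class decomposition can be sketched once the per-sentence error sets and their POS tags are available from the WER alignment. The data layout below (a list of (word, POS) pairs per sentence) is an assumption made for illustration, not the representation used by Popovic & Ney:

```python
from collections import Counter

def wer_by_pos(errors_per_sentence, n_ref_words):
    """WER(p) = (number of errors caused by words of POS class p) / N_ref."""
    counts = Counter(pos for errors in errors_per_sentence for _word, pos in errors)
    return {pos: n / n_ref_words for pos, n in counts.items()}

# Hypothetical per-sentence error sets, each error paired with its POS tag.
errors = [[("cat", "NOUN"), ("runs", "VERB")], [("the", "DET")]]
per_pos = wer_by_pos(errors, n_ref_words=20)
print(per_pos)                # {'NOUN': 0.05, 'VERB': 0.05, 'DET': 0.05}
print(sum(per_pos.values()))  # 0.15, the overall WER
```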