Multi-Speaker Transcription Task
A Multi-Speaker Transcription Task is a transcription task for multiple linguistic agents.
- AKA: Speaker Diarisation.
- Example(s):
- transcribing a conference call.
- …
- Counter-Example(s):
- See: Speech Recognition, Speaker Recognition.
References
2016
- (Wikipedia, 2016) ⇒ http://wikipedia.org/wiki/Speaker_diarisation Retrieved:2016-4-8.
- Speaker diarisation (or diarization) is the process of partitioning an input audio stream into homogeneous segments according to the speaker identity. It can enhance the readability of an automatic speech transcription by structuring the audio stream into speaker turns and, when used together with speaker recognition systems, by providing the speaker’s true identity. It is used to answer the question "who spoke when?" Speaker diarisation is a combination of speaker segmentation and speaker clustering. The first aims at finding speaker change points in an audio stream. The second aims at grouping together speech segments on the basis of speaker characteristics. With the increasing number of broadcasts, meeting recordings and voice mail collected every year, speaker diarisation has received much attention by the speech community, as is manifested by the specific evaluations devoted to it under the auspices of the National Institute of Standards and Technology for telephone speech, broadcast news and meetings.
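The quoted passage describes diarisation as speaker segmentation followed by speaker clustering. The sketch below illustrates only the clustering step under the assumption that per-segment speaker embeddings are already available; the embedding extractor is hypothetical and is not part of the source, and scikit-learn's AgglomerativeClustering is used here simply as one plausible clustering choice.

```python
# Minimal sketch of the speaker-clustering step of diarisation,
# assuming segmentation has already produced one speaker embedding
# per homogeneous segment (the embeddings here are synthetic).
import numpy as np
from sklearn.cluster import AgglomerativeClustering


def cluster_segments(segment_embeddings: np.ndarray, num_speakers: int) -> np.ndarray:
    """Group speech segments by speaker characteristics.

    segment_embeddings: array of shape (n_segments, embedding_dim).
    Returns one cluster label per segment, i.e. a segment-level answer
    to "who spoke when?".
    """
    clustering = AgglomerativeClustering(
        n_clusters=num_speakers,
        metric="cosine",      # speaker embeddings are commonly compared by cosine distance
        linkage="average",
    )
    return clustering.fit_predict(segment_embeddings)


if __name__ == "__main__":
    # Toy example: 6 segments from 2 speakers, simulated as two clusters
    # of random vectors around opposite centroids.
    rng = np.random.default_rng(0)
    speaker_a = rng.normal(0.0, 0.1, size=(3, 16)) + 1.0
    speaker_b = rng.normal(0.0, 0.1, size=(3, 16)) - 1.0
    embeddings = np.vstack([speaker_a, speaker_b])
    print(cluster_segments(embeddings, num_speakers=2))  # e.g. [0 0 0 1 1 1]
```

In practice the number of speakers is often unknown, in which case a distance threshold or a clustering method that estimates the number of clusters would replace the fixed `num_speakers` used in this sketch.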