Finite State Transducer

From GM-RKB

A Finite State Transducer is a finite state machine with two tapes (an input tape and an output tape).



References

2011

  • http://en.wikipedia.org/wiki/Finite_state_transducer
    • A finite state transducer (FST) is a finite state machine with two tapes: an input tape and an output tape. This contrasts with an ordinary finite state automaton (or finite state acceptor), which has a single tape. ...
    • Formally, a finite transducer T is a 6-tuple (Q, Σ, Γ, I, F, δ) such that:
      • Q is a finite set, the set of states;
      • Σ is a finite set, called the input alphabet;
      • Γ is a finite set, called the output alphabet;
      • I is a subset of Q, the set of initial states;
      • F is a subset of Q, the set of final states; and
      • δ ⊆ Q × (Σ∪{ε}) × (Γ∪{ε}) × Q (where ε is the empty string) is the transition relation.
    • We can view (Q, δ) as a labeled directed graph, known as the transition graph of T: the set of vertices is Q, and (q, a, b, r) ∈ δ means that there is a labeled edge going from vertex q to vertex r. We also say that a is the input label and b the output label of that edge.
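The 6-tuple definition above can be sketched directly in code. The following is a toy illustration (state names, alphabets, and the `transduce` helper are invented for this example, not from any library): δ is a set of (q, input, output, r) tuples, the empty string stands in for ε, and transduction is a search over the transition graph. Note this sketch does not guard against ε-input cycles.

```python
EPS = ''  # the empty string plays the role of epsilon

# A toy transducer T = (Q, Sigma, Gamma, I, F, delta)
Q = {0, 1}
I = {0}            # initial states
F = {1}            # final states
delta = {
    (0, 'a', 'x', 0),   # read 'a', emit 'x', stay in state 0
    (0, 'b', 'y', 1),   # read 'b', emit 'y', move to final state 1
    (1, 'a', EPS, 1),   # read 'a', emit nothing (epsilon output)
}

def transduce(word):
    """Return the set of output strings that T relates to `word`."""
    # Each configuration is (state, remaining input, output so far).
    frontier = [(q, word, '') for q in I]
    outputs = set()
    while frontier:
        q, rest, out = frontier.pop()
        if not rest and q in F:
            outputs.add(out)
        for (p, a, b, r) in delta:
            if p != q:
                continue
            if a == EPS:                   # epsilon-input move
                frontier.append((r, rest, out + b))
            elif rest and rest[0] == a:    # consume one input symbol
                frontier.append((r, rest[1:], out + b))
    return outputs

print(transduce('aab'))   # {'xxy'}
print(transduce('aba'))   # {'xy'}
```

Because δ is a relation rather than a function, a single input may be related to several outputs, which is why `transduce` returns a set.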

2009

  • http://www.openfst.org/
    • OpenFst is a library for constructing, combining, optimizing, and searching weighted finite-state transducers (FSTs). Weighted finite-state transducers are automata where each transition has an input label, an output label, and a weight. The more familiar finite-state acceptor is represented as a transducer with each transition's input and output label equal. Finite-state acceptors are used to represent sets of strings (specifically, regular or rational sets); finite-state transducers are used to represent binary relations between pairs of strings (specifically, rational transductions). The weights can be used to represent the cost of taking a particular transition.
    • FSTs have key applications in speech recognition and synthesis, machine translation, optical character recognition, pattern matching, string processing, machine learning, information extraction and retrieval among others. Often a weighted transducer is used to represent a probabilistic model (e.g., an n-gram model, pronunciation model). FSTs can be optimized by determinization and minimization, models can be applied to hypothesis sets (also represented as automata) or cascaded by finite-state composition, and the best results can be selected by shortest-path algorithms.
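A weighted transducer and a shortest-path search, as described above, can be sketched in a few lines. This is a hand-rolled illustration over the tropical semiring (weights are costs, path weight is the sum, and the best path is the minimum); it is not the OpenFst API, and the arc table and `best_output` helper are invented for this example.

```python
import heapq

# Transitions: state -> list of (input, output, weight, next_state).
# '' stands for epsilon. A toy model mapping 'ab' to either
# 'AB' (total cost 0.5 + 0.5 = 1.0) or 'AX' (total cost 0.5 + 0.2 = 0.7).
arcs = {
    0: [('a', 'A', 0.5, 1)],
    1: [('b', 'B', 0.5, 2), ('b', 'X', 0.2, 2)],
}
start, final = 0, {2}

def best_output(word):
    """Dijkstra-style search for the minimum-cost transduction of `word`."""
    heap = [(0.0, start, 0, '')]   # (cost so far, state, input position, output)
    seen = {}
    while heap:
        cost, q, i, out = heapq.heappop(heap)
        if i == len(word) and q in final:
            return out, cost       # first goal popped is the cheapest path
        if (q, i) in seen and seen[(q, i)] <= cost:
            continue
        seen[(q, i)] = cost
        for (a, b, w, r) in arcs.get(q, []):
            if a == '':                               # epsilon-input arc
                heapq.heappush(heap, (cost + w, r, i, out + b))
            elif i < len(word) and word[i] == a:      # consume one symbol
                heapq.heappush(heap, (cost + w, r, i + 1, out + b))
    return None

print(best_output('ab'))   # ('AX', 0.7)
```

Setting every arc's output label equal to its input label turns this into a weighted acceptor, matching the remark above that acceptors are the special case of transducers with identical input and output labels.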
