Hierarchical Recurrent Encoder-Decoder (HRED) Neural Network Training Algorithm

A Hierarchical Recurrent Encoder-Decoder (HRED) Neural Network Training Algorithm is a recurrent NNet training algorithm that implements a hierarchical recurrent encoder-decoder neural network (a minimal architectural sketch is given below).

* <B>Example(s):</B>
** [[Hierarchical Recurrent Encoder-Decoder Algorithm for Query Suggestion]] ([[2015_AHierarchicalRecurrentEncoderDe|Sordoni et al., 2015]]),
** …
* <B>Counter-Example(s):</B>
** a [[Hierarchical Attention Network Training Algorithm]].
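Conceptually, the hierarchy stacks two recurrences: a query-level RNN encodes each query into a fixed-length vector, a session-level RNN consumes those vectors one per query, and a decoder RNN conditioned on the session state generates the next query. The following is a minimal sketch in PyTorch; the class name, GRU choice, and dimensions are illustrative assumptions rather than the exact parameterization of Sordoni et al. (2015).

<pre>
# Hypothetical minimal HRED in PyTorch: three GRUs arranged hierarchically.
import torch
import torch.nn as nn

class HRED(nn.Module):
    def __init__(self, vocab_size, emb_dim=256, query_dim=512, session_dim=1024):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Query-level encoder: reads one query, token by token.
        self.query_enc = nn.GRU(emb_dim, query_dim, batch_first=True)
        # Session-level encoder: reads one query summary per step.
        self.session_enc = nn.GRU(query_dim, session_dim, batch_first=True)
        # Decoder: generates the next query, initialized from the session state.
        self.dec_init = nn.Linear(session_dim, query_dim)
        self.decoder = nn.GRU(emb_dim, query_dim, batch_first=True)
        self.out = nn.Linear(query_dim, vocab_size)

    def forward(self, query_tokens, session_state, next_query_tokens):
        # Encode the current query into a fixed-length summary vector.
        _, q = self.query_enc(self.embed(query_tokens))    # (1, batch, query_dim)
        # Advance the session-level recurrent state by one query.
        _, session_state = self.session_enc(q.transpose(0, 1), session_state)
        # Initialize the decoder from the session state and score the next query.
        h0 = torch.tanh(self.dec_init(session_state[-1])).unsqueeze(0)
        dec_out, _ = self.decoder(self.embed(next_query_tokens), h0)
        return self.out(dec_out), session_state            # logits, new state
</pre>

Only the session-level GRU carries information across queries, which is what lets a suggestion be conditioned on the whole session rather than on the most recent query alone.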



References

2015

[Image: Figure 3 from Sordoni et al. (2015)]
Figure 3: The hierarchical recurrent encoder-decoder (HRED) for query suggestion. Each arrow is a non-linear transformation. The user types cleveland gallery → lake erie art. During training, the model encodes cleveland gallery, updates the session-level recurrent state, and maximizes the probability of seeing the following query lake erie art. The process is repeated for all queries in the session. During testing, a contextual suggestion is generated by encoding the previous queries, updating the session-level recurrent states accordingly, and sampling a new query from the last obtained session-level recurrent state. In the example, the generated contextual suggestion is cleveland indian art.
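The training and testing procedure described in the caption maps onto a simple per-session loop: encode each query, advance the session-level state, and maximize the likelihood of the tokens of the query that follows. Below is a sketch of such a loop, assuming the hypothetical HRED module sketched above, teacher forcing, and a cross-entropy objective; tokenization, batching, and padding are omitted for brevity.

<pre>
# Hypothetical per-session training step for the HRED sketch above.
# Assumes each session contains at least two queries.
import torch
import torch.nn.functional as F

def train_on_session(model, optimizer, session, device="cpu"):
    """session: list of queries, each a (1, query_len) LongTensor of token ids."""
    optimizer.zero_grad()
    session_state = None   # session-level recurrent state, updated per query
    loss = 0.0
    # For each consecutive query pair (q_t, q_{t+1}): encode q_t, update the
    # session state, and maximize the probability of seeing q_{t+1}.
    for current, nxt in zip(session, session[1:]):
        logits, session_state = model(current.to(device), session_state,
                                      nxt[:, :-1].to(device))   # teacher forcing
        loss = loss + F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                                      nxt[:, 1:].reshape(-1).to(device))
    loss.backward()
    optimizer.step()
    return loss.item()
</pre>

At test time, the same session_state would be advanced over the user's observed queries, and a contextual suggestion would be sampled token by token from the decoder's output distribution instead of being scored against a known next query.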