

Citation

M. Bulín, L. Šmídl and J. Švec: On Using Stateful LSTM Networks for Key-Phrase Detection. Text, Speech and Dialogue, chapter 24, Kamil Ekštein (Ed.), Springer Nature, 2019.

Abstract

In this paper, we focus on LSTM (Long Short-Term Memory) networks and their implementation in the popular Keras framework. The goal is to show how to take advantage of their ability to pass context along by holding state, and to clear up what the stateful property of an LSTM recurrent neural network implemented in Keras actually means. The main outcome of the work is a general algorithm for packing arbitrary context-dependent data, capable of (1) packing the data to fit stateful models; (2) making the training process efficient by supplying multiple frames together; and (3) enabling on-the-fly (frame-by-frame) prediction by the trained model. Two training methods are presented: a window-based approach is compared with a fully stateful approach. The analysis is performed on the Speech Commands dataset. Finally, we give guidance on how to use stateful LSTMs to create a key-phrase detection system.
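
The stateful mechanism the abstract refers to can be illustrated with a short sketch. This is not the authors' code: the layer size, feature dimension and the TensorFlow 2.x (Keras 2-style) API are assumptions for illustration only. Setting stateful=True makes the LSTM keep its hidden and cell state across successive calls, which is what allows frame-by-frame prediction.

# Minimal illustrative sketch (assumed tf.keras 2.x API, not the paper's code).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_features = 40   # assumed per-frame feature size (e.g. filter-bank features)
batch_size = 1    # stateful layers require a fixed batch size

model = keras.Sequential([
    # batch_input_shape fixes (batch, time, features); time = 1 lets the
    # network consume one frame per call while the layer holds its state.
    layers.LSTM(64, stateful=True,
                batch_input_shape=(batch_size, 1, n_features)),
    layers.Dense(1, activation="sigmoid"),  # key-phrase vs. background score
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# On-the-fly prediction: one frame per call, the state persists between calls.
for frame in np.random.rand(10, batch_size, 1, n_features).astype("float32"):
    score = model.predict(frame, verbose=0)

model.reset_states()  # clear the carried context at an utterance boundary

One way to read the window-based versus fully-stateful comparison in the abstract is that a network can either be trained on fixed windows or trained in stateful mode directly before being used for streaming prediction; the paper analyses these choices on the Speech Commands dataset.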

Detail of publication

Title: On Using Stateful LSTM Networks for Key-Phrase Detection
Author: M. Bulín; L. Šmídl; J. Švec
Language: English
Date of publication: 11 Sep 2019
Year: 2019
Type of publication: Papers in proceedings of reviewed conferences
Title of journal or book: Text, Speech and Dialogue
Chapter: 24
Editor: Kamil Ekštein
Publisher: Springer Nature
Date: 11 Sep 2019 - 13 Sep 2019

Keywords

LSTM, Stateful, Context modeling, Key-phrase detection, ASR

BibTeX

@INPROCEEDINGS{MBulin_2019_OnUsingStateful,
 author = {M. Bul\'{i}n and L. \v{S}m\'{i}dl and J. \v{S}vec},
 title = {On Using Stateful LSTM Networks for Key-Phrase Detection},
 year = {2019},
 publisher = {Springer Nature},
 booktitle = {Text, Speech and Dialogue},
 editor = {Kamil Ek\v{s}tein},
 chapter = {24},
 url = {http://www.kky.zcu.cz/en/publications/MBulin_2019_OnUsingStateful},
}