Abstract
In this paper, we propose a novel approach to continuous vigilance estimation that uses long short-term memory (LSTM) neural networks and combines electroencephalogram (EEG) and forehead electrooculogram (EOG) signals. We fuse these two modalities with a multimodal deep learning method to leverage their complementary information. Moreover, since changes in vigilance level form a time-dependent process, we exploit temporal dependency information, which significantly improves the performance of vigilance estimation. We introduce two LSTM network architectures, the F-LSTM and the S-LSTM, to encode the EEG and EOG time sequences into a high-level combined representation from which vigilance levels can be predicted. Experimental results demonstrate that both multimodal LSTM structures outperform single-modality models as well as models that ignore temporal dependency.
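To make the fusion idea concrete, the following is a minimal sketch (not the authors' released code) of feature-level multimodal fusion with an LSTM, in the spirit of the F-LSTM described above. The class name, feature dimensions (25 EEG and 36 EOG features per time step), and hidden size are all illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

class FusionLSTM(nn.Module):
    """Hypothetical F-LSTM-style model: concatenate EEG and EOG features
    at each time step, encode the sequence with an LSTM, and regress a
    continuous vigilance level per time step. Dimensions are assumed."""
    def __init__(self, eeg_dim=25, eog_dim=36, hidden_dim=128):
        super().__init__()
        self.lstm = nn.LSTM(eeg_dim + eog_dim, hidden_dim, batch_first=True)
        self.regressor = nn.Linear(hidden_dim, 1)

    def forward(self, eeg, eog):
        # eeg: (batch, seq_len, eeg_dim); eog: (batch, seq_len, eog_dim)
        x = torch.cat([eeg, eog], dim=-1)        # feature-level fusion
        out, _ = self.lstm(x)                    # temporal encoding
        return self.regressor(out).squeeze(-1)   # vigilance per time step

# Usage: predict vigilance for a batch of 4 sequences of length 30.
model = FusionLSTM()
eeg = torch.randn(4, 30, 25)
eog = torch.randn(4, 30, 36)
vigilance = model(eeg, eog)  # shape (4, 30)
```

The sketch shows only the fusion-then-encode pattern; the abstract does not specify the internals of F-LSTM or S-LSTM, so any details beyond concatenation plus LSTM encoding are assumptions.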