Multimodal Data Fusion for Brain-Computer Interfaces: A Literature Review
Abstract
Detailed observation of human activity involves recording and analyzing various physiological and
behavioral data. The electroencephalogram (EEG) measures the patient's brain activity and reflects
cognitive state, while other signals such as the electrocardiogram (ECG) or electromyogram (EMG)
provide complementary information about the patient's condition. Their combined analysis is therefore
particularly valuable. Merging different types of data, however, is not an easy task, and new
approaches and methods have emerged in recent years to tackle it, especially within deep neural
networks. We carried out a literature review on the fusion of EEG with other data in the context
of medical applications. Fusion can take place at multiple levels of the architecture. In input-level
fusion, the multimodal signals are processed jointly from the first layer onward; in decision-level
fusion, the input signals are analyzed independently in parallel networks and their outputs are
combined in the final stage of the architecture. Neither approach, however, guarantees that the
relations between the signals are properly learned. This is why mid-level fusion is proposed:
features are first extracted from each modality and then fused at an intermediate level of the
architecture, using feature-level or score-level fusion. The literature shows that such techniques
can greatly improve the performance of medical systems by enhancing learning and representation.
Brain-computer interfaces can therefore benefit significantly from the integration of additional
physiological data at the mid or final level.
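
To make the distinction between fusion levels concrete, the following is a minimal PyTorch sketch of mid-level (feature-level) fusion of an EEG and an ECG stream. All channel counts, layer sizes, and the two-class output are illustrative assumptions, not taken from any specific system covered by the review.

```python
# Minimal sketch of mid-level (feature-level) fusion for two modalities.
# Shapes and layer sizes are illustrative assumptions only.
import torch
import torch.nn as nn


class Branch(nn.Module):
    """Per-modality feature extractor (e.g. one branch for EEG, one for ECG)."""

    def __init__(self, in_channels: int, feat_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
            nn.Flatten(),             # (batch, 32)
            nn.Linear(32, feat_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class MidLevelFusion(nn.Module):
    """Extract features per modality, concatenate them at an intermediate
    level, then classify jointly from the fused representation."""

    def __init__(self, eeg_channels: int, ecg_channels: int, n_classes: int = 2):
        super().__init__()
        self.eeg_branch = Branch(eeg_channels)
        self.ecg_branch = Branch(ecg_channels)
        self.head = nn.Linear(64 * 2, n_classes)  # fused features -> decision

    def forward(self, eeg: torch.Tensor, ecg: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.eeg_branch(eeg), self.ecg_branch(ecg)], dim=1)
        return self.head(fused)


if __name__ == "__main__":
    eeg = torch.randn(8, 32, 256)  # batch, EEG channels, time samples (assumed)
    ecg = torch.randn(8, 3, 256)   # batch, ECG leads, time samples (assumed)
    model = MidLevelFusion(eeg_channels=32, ecg_channels=3)
    print(model(eeg, ecg).shape)   # torch.Size([8, 2])
```

A score-level variant of the same idea would instead have each branch produce its own class scores, which are then combined (for example by averaging) before the final decision; input-level fusion would stack both signals into a single tensor fed to one shared network from the first layer.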