heal.abstract |
The analysis of human emotions is a widely researched topic in Psychology and Neuroscience, fields that investigate the nature and elicitation mechanisms of our feelings. From a computational perspective, however, it remains rather underexplored. While Artificial Intelligence has made remarkable progress in modeling rational intelligence, there are as yet no highly reliable systems for analyzing affect, since considerable barriers stand in the way: emotion expression can be highly subjective, its interpretation varies with context, and it exhibits strong inter-subject variability. Moreover, most Signal Processing and Machine Learning studies concentrate on the behavioral processing of emotions, through modalities such as speech, text and facial expressions. To address the challenges of Affective Analysis, in this thesis we choose to process brain signals, and specifically the Electroencephalogram (EEG), as a means of deriving emotional information. Recorded physiological and neural signals can serve as more objective and reliable affective indicators, and they can also contribute to the development of assistive systems for applications such as the treatment of, and rehabilitation from, brain diseases. Importantly, we use music to induce emotions for the EEG recordings, since music is known to have a deep emotional impact on humans.
Our approach is divided into two main parts. In the first, we analyze the complex structure of the EEG and examine novel feature extraction schemes based on two multifractal algorithms, namely Multiscale Fractal Dimension and Multifractal Detrended Fluctuation Analysis. In this way we attempt to quantify the variability of the observed signals' complexity across multiple timescales. Our proposed EEG features surpass widely used baselines on Emotion Recognition, and they show competitive results in challenging subject-independent experiments and in the recognition of arousal, indicating that arousal is highly correlated with the EEG's fragmented structure. In the second part, we utilize a two-branch neural network as a bimodal EEG-music framework, which learns common latent representations of the EEG signals and their music stimuli in order to examine their correspondence. Through this model, we perform supervised emotion recognition experiments and retrieve ranked lists of music pieces for EEG input queries. By applying this system to data from independent subjects, we also extract interesting patterns regarding the latent similarity of brain and music signals, the temporal variation of the music-induced emotions and the activated brain regions in each case. As a whole, this study deals with core problems regarding the interpretation of complex EEG signals and illustrates multiple ways in which music stimulates brain activity. |
en |