I am an Inria research scientist at Paris Brain Institute in the NERV Lab.
My research currently focuses on developing tools to address “Brain-Computer Interface (BCI) inefficiency”: a non-negligible proportion of users cannot control the device even after several training sessions. I consider two main approaches: the search for neurophysiological markers of BCI training, and the integration of multimodal data to enrich the information provided to the classifier.
I previously served as secretary general of CORTICO, the French academic association promoting advances in BCI, and as co-chair of the Postdocs and Students Committee of the BCI Society.
You can download my CV as a PDF.
Don’t hesitate to contact me if you would like any additional information or if you are interested in a research collaboration!
PhD in Biomedical instrumentation, 2015
CEA-LETI (Grenoble, France)
MSc in Neuropsychology and Clinical Neurosciences, 2015
Grenoble Alpes University
MEng in Information and Communications Technology for Health, 2012
IMT Atlantique (Brest, France)
Functional connectivity and brain network reorganization underlying longitudinal processes, mainly BCI training
Development of methods to enhance the classification of subjects’ mental states, following two main approaches: the integration of multimodal information and the search for alternative features
Development of cryogenic-free sensors for magnetocardiography and magnetoencephalography
Brain–Computer Interface (BCI) systems allow users to perform actions by translating their brain activity into commands. Such systems require training a classification algorithm to discriminate between mental states using specific features extracted from the brain signals. This step is crucial and presents specific constraints in clinical contexts. HappyFeat is an open-source software tool that makes BCI experiments easier in such contexts by gathering the extraction and selection of adequate training features in a single GUI. Novel features based on functional connectivity can be used, allowing graph-oriented approaches. We describe HappyFeat’s mechanisms, show its performance in typical use cases, and demonstrate how to compare different types of features.
Large-scale interactions among multiple brain regions manifest as bursts of activations called neuronal avalanches, which reconfigure according to the task at hand and, hence, might constitute natural candidates to design brain-computer interfaces (BCI). To test this hypothesis, we used source-reconstructed magneto/electroencephalography, during resting state and a motor imagery task performed within a BCI protocol. To track the probability that an avalanche would spread across any two regions we built an avalanche transition matrix (ATM) and demonstrated that the edges whose transition probabilities significantly differed between conditions hinged selectively on premotor regions in all subjects. Furthermore, we showed that the topology of the ATMs allows task-decoding above the current gold standard. Hence, our results suggest that Neuronal Avalanches might capture interpretable differences between tasks that can be used to inform brain-computer interfaces.
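The avalanche transition matrix described above can be illustrated with a minimal sketch. This is a simplified, hypothetical implementation assuming the source-reconstructed signals have already been binarized (1 = region active); the function name and the exact estimator details are illustrative, not the authors' published code.

```python
import numpy as np

def avalanche_transition_matrix(z):
    """Illustrative ATM sketch (not the published implementation).

    z : (n_regions, n_times) binarized activity, 1 = region active.
    Entry (i, j) estimates the probability that region j is active
    at t+1 given that region i is active at t, over active time points.
    """
    n_regions, n_times = z.shape
    counts = np.zeros((n_regions, n_regions))  # co-activation counts
    occ = np.zeros(n_regions)                  # activations of region i
    for t in range(n_times - 1):
        active_now = z[:, t] > 0
        active_next = z[:, t + 1] > 0
        if not active_now.any():
            continue  # quiescent time point, outside an avalanche
        occ += active_now
        counts[active_now] += active_next
    # normalize rows by the number of times each source region was active
    with np.errstate(invalid="ignore", divide="ignore"):
        atm = np.where(occ[:, None] > 0, counts / occ[:, None], 0.0)
    return atm
```

The per-condition ATMs (e.g., rest vs. motor imagery) can then be compared edge by edge, as in the abstract, to find the transition probabilities that differ between tasks.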
Background and Objectives Epilepsy diagnosis remains a complex process, with misdiagnosis rates reaching 40%. Here, we aimed to build an automatable workflow to help clinicians in the diagnostic process, differentiating between controls and a population of patients with temporal lobe epilepsy (TLE). While primarily interested in correctly classifying the participants, we used data features that provide hints on the underlying pathophysiological processes. Specifically, we hypothesized that neuronal avalanches (NA) may represent a feature that encapsulates the rich brain dynamics better than the classically used functional connectivity measures (imaginary coherence; ImCoh). Methods We recorded 10 minutes of resting-state activity with high-density scalp electroencephalography (hdEEG; 128 channels). We analyzed large-scale activation bursts (NA) from source activation to capture altered dynamics. Then, we used machine-learning algorithms to classify epilepsy patients vs. controls, and we described the goodness of the classification as well as the effect of the duration of the data segments on the performance. Results Using a support vector machine (SVM), we reached a classification accuracy of 0.87 ± 0.10 (SD) and an area under the curve (AUC) of 0.94 ± 0.06. Avalanche-derived features generated a mean increase of 16% in the accuracy of diagnosis prediction compared to ImCoh. Investigating the main features informing the model, we observed that the dynamics of the entorhinal cortex, the superior and inferior temporal gyri, the cingulate cortex and the dorsolateral prefrontal cortex informed the model when using NA. Finally, we studied the time-dependent accuracy of the classification. While the classification performance grows with the duration of the data, there are specific lengths, at 30 s and 180 s, at which the performance becomes steady, with intermediate lengths showing greater variability.
Classification accuracy reached a plateau at 5 minutes of recording. Discussion We showed that NA represent a better EEG feature for automated epilepsy identification, being related to the neuronal dynamics of pathology-relevant brain areas. Furthermore, the presence of specific durations and the performance plateau might be interpreted as the manifestation of the specific intrinsic neuronal timescales altered in epilepsy. The study represents a potentially automatable and noninvasive workflow aiding clinicians in the diagnosis.
In this chapter, we present the main characteristics of electroencephalography (EEG) and magnetoencephalography (MEG). More specifically, this chapter is dedicated to presenting the data and the ways they can be acquired and analyzed. Then, we present the main features that can be extracted and their applications to brain disorders, with concrete examples to illustrate them. Additional materials associated with this chapter are available in the dedicated GitHub repository.
Objective: Relying on the idea that functional connectivity provides important insights into the underlying dynamics of neuronal interactions, we propose a novel framework that combines functional connectivity estimators and covariance-based pipelines to improve the classification of mental states, such as motor imagery. Methods: A Riemannian classifier is trained for each estimator and an ensemble classifier combines the decisions in each feature space. A thorough assessment of the functional connectivity estimators is provided and the best performing pipeline among those tested, called FUCONE, is evaluated on different conditions and datasets. Results: Using a meta-analysis to aggregate results across datasets, FUCONE performed significantly better than all state-of-the-art methods. Conclusion: The performance gain is mostly imputable to the improved diversity of the feature spaces, increasing the robustness of the ensemble classifier with respect to inter- and intra-subject variability. Significance: Our results offer new insights into the need to consider functional connectivity-based methods to improve BCI performance.
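The ensemble step described above, combining decisions from one classifier per feature space, can be sketched minimally. This is a hypothetical illustration of decision fusion by weighted probability averaging; the function name is invented and the actual FUCONE pipeline may combine decisions differently.

```python
import numpy as np

def ensemble_decision(probas, weights=None):
    """Fuse per-estimator class probabilities (illustrative sketch).

    probas : list of (n_trials, n_classes) arrays, one per feature
             space (e.g., one per functional-connectivity estimator).
    """
    probas = np.asarray(probas)  # (n_estimators, n_trials, n_classes)
    if weights is None:
        weights = np.ones(len(probas)) / len(probas)  # uniform vote
    # weighted average over estimators, then pick the most likely class
    fused = np.tensordot(weights, probas, axes=1)
    return fused.argmax(axis=1)
```

The diversity argument in the abstract maps directly onto this scheme: estimators that err on different trials partially cancel each other's mistakes when their probabilities are averaged.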
Riemannian BCI classifiers based on EEG covariance matrices have won many data competitions and achieved very high classification results on BCI datasets. To increase the accuracy of BCI systems, we propose an approach grounded in Riemannian geometry that extends this framework to functional connectivity measures. This paper describes the approach submitted to the Clinical BCI Challenge-WCCI2020, which ranked 1st on task 1 of the competition.
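The covariance-based Riemannian pipeline mentioned above can be sketched with a minimum-distance-to-mean (MDM) classifier. As a lightweight stand-in for the affine-invariant metric typically used in such pipelines, this hypothetical sketch uses the log-Euclidean distance; all function names are illustrative, not the competition code.

```python
import numpy as np
from scipy.linalg import expm, logm

def cov_feats(trials):
    # trials: (n_trials, n_channels, n_times) -> one SPD covariance per trial
    return np.array([np.cov(x) for x in trials])

def log_euclid_dist(a, b):
    # log-Euclidean distance between SPD matrices (surrogate Riemannian metric)
    return np.linalg.norm(logm(a) - logm(b), "fro")

def mdm_fit_predict(train, y, test):
    """Minimum Distance to Mean in the log-Euclidean sense (sketch).

    The class mean is the matrix exponential of the mean of matrix
    logarithms; each test matrix is assigned to the nearest class mean.
    """
    means = {}
    for c in np.unique(y):
        logs = [logm(m) for m in train[y == c]]
        means[c] = expm(np.mean(logs, axis=0))
    preds = [min(means, key=lambda c: log_euclid_dist(m, means[c]))
             for m in test]
    return np.array(preds)
```

The extension described in the paper keeps this machinery but replaces (or augments) the covariance matrices with SPD functional connectivity matrices.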
Brain-computer interfaces (BCIs) constitute a promising tool for communication and control. However, mastering non-invasive closed-loop systems remains a learned skill that is difficult to develop for a non-negligible proportion of users. The learning process involved induces neural changes associated with a brain network reorganization that remains poorly understood. To address this inter-subject variability, we adopted a multilayer approach to integrate brain network properties from electroencephalographic (EEG) and magnetoencephalographic (MEG) data resulting from a four-session BCI training program followed by a group of healthy subjects. Our method gives access to the contribution of each layer to the multilayer network, and these contributions tend to equalize with time. We show that, regardless of the chosen modality, a progressive increase in the integration of somatosensory areas in the α band was paralleled by a decrease in the integration of visual-processing and working-memory areas in the β band. Notably, only brain network properties of the multilayer network in the α2 band correlated with future BCI scores: positively in somatosensory and decision-making-related areas, and negatively in associative areas. Our findings cast new light on the neural processes underlying BCI training. Integrating multimodal brain network properties provides new information that correlates with behavioral performance and could be considered a potential marker of BCI learning.
Brain-computer interfaces (BCIs) have been largely developed to allow communication, control, and neurofeedback in human beings. Despite their great potential, BCIs perform inconsistently across individuals and the neural processes that enable humans to achieve good control remain poorly understood. To address this question, we performed simultaneous high-density electroencephalographic (EEG) and magnetoencephalographic (MEG) recordings in a motor imagery-based BCI training involving a group of healthy subjects. After reconstructing the signals at the cortical level, we showed that the reinforcement of motor-related activity during the BCI skill acquisition is paralleled by a progressive disconnection of associative areas which were not directly targeted during the experiments. Notably, these network connectivity changes reflected growing automaticity associated with BCI performance and predicted future learning rate. Altogether, our findings provide new insights into the large-scale cortical organizational mechanisms underlying BCI learning, which have implications for the improvement of this technology in a broad range of real-life applications.
We adopted a fusion approach that combines features from simultaneously recorded electroencephalogram (EEG) and magnetoencephalogram (MEG) signals to improve classification performance in motor imagery-based brain–computer interfaces (BCIs). We applied our approach to a group of 15 healthy subjects and found a significant classification performance enhancement compared to standard single-modality approaches in the alpha and beta bands. Taken together, our findings demonstrate the advantage of considering multimodal approaches as complementary tools for improving the impact of noninvasive BCIs.
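A common way to realize such a fusion is to standardize each modality's features and concatenate them before classification. The sketch below is a hypothetical, minimal illustration of this feature-level fusion; the paper's actual fusion scheme may differ (e.g., decision-level fusion), and the function name is invented.

```python
import numpy as np

def fuse_features(feat_eeg, feat_meg):
    """Feature-level EEG/MEG fusion sketch (illustrative only).

    Each modality is z-scored per feature so that neither dominates
    due to scale differences, then the two blocks are concatenated.
    feat_eeg : (n_trials, n_eeg_features)
    feat_meg : (n_trials, n_meg_features)
    """
    def zscore(f):
        return (f - f.mean(axis=0)) / (f.std(axis=0) + 1e-12)
    return np.hstack([zscore(feat_eeg), zscore(feat_meg)])
```

Any standard classifier can then be trained on the fused matrix, letting it exploit complementary information carried by the two modalities.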