Riemannian methods have established themselves as state-of-the-art approaches in Brain-Computer Interfaces (BCI) in terms of performance. However, their adoption by experimenters is often hindered by a lack of interpretability. In this work, we propose a set of tools designed to enhance practitioners’ understanding of the decisions made by Riemannian methods. Specifically, we develop techniques to quantify and visualize the influence of the different sensors on classification outcomes. Our approach includes a visualization tool for high-dimensional covariance matrices, a classifier-agnostic tool that focuses on the classification process, as well as methods that leverage the data’s topology to better characterize the role of each sensor. We demonstrate these tools on a specific dataset and provide Python code to facilitate their use by practitioners, thereby promoting the adoption of Riemannian methods in BCI.
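
For readers unfamiliar with the classifiers in question, the following is a minimal sketch of the kind of Riemannian pipeline such interpretability tools target: spatial covariance matrices estimated per trial, classified by minimum distance to the Riemannian mean. It assumes the pyRiemann library and uses random surrogate data in place of real EEG epochs; the paper’s own code and choices (estimator, metric, dataset) may differ.

```python
# Minimal sketch of a Riemannian BCI pipeline (assumes pyRiemann);
# random data stands in for real EEG trials.
import numpy as np
from pyriemann.estimation import Covariances
from pyriemann.classification import MDM

rng = np.random.default_rng(42)
n_trials, n_channels, n_times = 60, 8, 256
X = rng.standard_normal((n_trials, n_channels, n_times))  # surrogate EEG epochs
y = rng.integers(0, 2, n_trials)                          # binary class labels

covs = Covariances(estimator="oas").fit_transform(X)  # one SPD matrix per trial
clf = MDM(metric="riemann").fit(covs, y)              # minimum distance to mean
print(clf.predict(covs[:5]))                          # predicted class labels
```

Each trial is summarized by an SPD covariance matrix whose entries couple pairs of sensors, which is precisely why per-sensor influence is non-trivial to read off and motivates dedicated quantification and visualization tools.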