Brain-computer interfaces (BCIs) rely on cognitive tasks that appear easy at first sight but prove complex to perform. In this context, providing engaging feedback and fostering the subject's embodiment is one of the keys to overall system performance. However, noninvasive brain activity alone has often been demonstrated to be insufficient to precisely control all the degrees of freedom of complex external devices such as a robotic arm. Here, we developed a hybrid BCI that also integrates eye-tracking technology to improve the subject's overall sense of agency. While this solution has been explored before, the best strategy for combining gaze and brain activity to obtain effective results remains poorly studied. To address this gap, we explore two strategies that differ in the timing at which motor imagery is performed; one strategy could be less intuitive than the other, and this would result in differences in performance.
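To make the contrast between the two timing strategies concrete, the sketch below shows one plausible way a hybrid gaze-plus-motor-imagery pipeline could be organized. This is a minimal illustrative sketch under stated assumptions, not the system described here: the toy classifier, the dwell-based target selection, the thresholds, and the target layout are all placeholder assumptions.

```python
def classify_mi(eeg_window):
    """Toy motor-imagery classifier returning 'go' or 'rest'.
    A real system would extract band-power features and apply a trained
    classifier; here a simple threshold on the mean stands in for that."""
    return "go" if sum(eeg_window) / len(eeg_window) > 0.5 else "rest"

def fixated_target(gaze_samples, targets, radius=50):
    """Return the target whose area contains the mean gaze point, or None."""
    gx = sum(x for x, _ in gaze_samples) / len(gaze_samples)
    gy = sum(y for _, y in gaze_samples) / len(gaze_samples)
    for name, (tx, ty) in targets.items():
        if (gx - tx) ** 2 + (gy - ty) ** 2 <= radius ** 2:
            return name
    return None

def sequential_strategy(gaze_samples, eeg_window, targets):
    """Strategy A: gaze first selects the target; motor imagery is then
    performed in a separate, dedicated window to confirm the action."""
    target = fixated_target(gaze_samples, targets)
    if target is None:
        return None  # no selection, so no motor-imagery window is opened
    return target if classify_mi(eeg_window) == "go" else None

def simultaneous_strategy(gaze_samples, eeg_window, targets):
    """Strategy B: motor imagery is performed while fixating the target,
    so gaze and EEG are decoded over the same time window."""
    target = fixated_target(gaze_samples, targets)
    mi = classify_mi(eeg_window)
    return target if (target is not None and mi == "go") else None
```

In this toy setting the two strategies differ only in when the motor-imagery window is recorded relative to the fixation (afterwards versus concurrently); in practice that timing difference is exactly what could make one strategy feel less intuitive and affect performance:

```python
targets = {"cup": (100, 100), "ball": (300, 100)}
gaze = [(98, 102)] * 10       # fixating near the "cup" target
eeg_go = [0.8] * 64           # feature values above threshold -> 'go'
eeg_rest = [0.2] * 64         # below threshold -> 'rest'
print(sequential_strategy(gaze, eeg_go, targets))     # 'cup'
print(simultaneous_strategy(gaze, eeg_rest, targets)) # None
```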