Code-modulated visual evoked potential (c-VEP) based reactive brain-computer interfaces (BCIs) deliver high information-transfer rates with minimal calibration, yet performance often collapses when models are transferred between users. We therefore pursue a two-fold aim: first, to pinpoint neurophysiological predictors that explain this inter-participant variability; second, to identify a decoding pipeline that sustains accuracy across users in a burst-c-VEP paradigm (brief, aperiodic flashes at 3 Hz). Across 24 participants, we find five neurophysiological predictors that discriminate between high performers (>90% accuracy) and low performers (<70%): stronger inter-epoch correlation (R≈0.80), larger peak-to-peak amplitude of the flash-VEP, larger α bandpower, larger θ bandpower, and lower δ bandpower, enabling a 22-s “go/no-go” calibration. We then compare three preprocessing schemes (small, combined, participant-specific) paired with three decoders: a convolutional neural network, a Riemannian xDAWN–LDA baseline, and GREEN, a wavelet-based symmetric-positive-definite neural network. Subject-specific alignment combined with GREEN achieves 93% trial-level accuracy in both intra- and cross-participant settings, eliminating the 15–20% transfer loss incurred by the other tested decoders while keeping total calibration under one minute. In conclusion, rapid user screening with these neurophysiological predictors, followed by this lightweight, user-specific pipeline, yields burst-c-VEP control that is fast to deploy and robust across individuals.
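The inter-epoch correlation predictor can be sketched as below: a minimal illustration on synthetic data, assuming a leave-one-out Pearson correlation against the average of the remaining epochs. The function name, the synthetic waveform, and the 0.5 screening threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def inter_epoch_correlation(epochs):
    """Mean Pearson correlation between each epoch and the
    leave-one-out average of the remaining epochs."""
    n = len(epochs)
    total = np.sum(epochs, axis=0)
    rs = []
    for i, ep in enumerate(epochs):
        template = (total - ep) / (n - 1)  # mean of all other epochs
        rs.append(np.corrcoef(ep, template)[0, 1])
    return float(np.mean(rs))

# Synthetic demo: noisy repetitions of a common VEP-like waveform
rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 4 * np.pi, 256))
epochs = np.array([signal + 0.3 * rng.standard_normal(256)
                   for _ in range(50)])
r = inter_epoch_correlation(epochs)
# Hypothetical screening rule (threshold is an assumption):
decision = "go" if r > 0.5 else "no-go"
```

A reliability index of this kind is what makes a very short "go/no-go" calibration feasible: a few seconds of flash responses suffice to estimate whether a user's evoked responses are consistent enough for accurate decoding.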