We challenge the previous disentanglement assumption in the feature extraction process, by removing residual speaker information from supposedly speaker-independent attributes (linguistic and prosodic features). The removal is achieved by adding differentially private noise to these features, which allows us to provide formal, provable guarantees on privacy leakage.
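The contribution above does not fix a specific noise mechanism; a minimal sketch, assuming the standard Laplace mechanism with a given L1 sensitivity per feature dimension (function name, sensitivity value, and feature shapes are illustrative):

```python
import numpy as np

def laplace_mechanism(features, sensitivity, epsilon, rng=None):
    """Add Laplace noise with scale sensitivity/epsilon to each feature
    dimension. Under the assumed L1 sensitivity, the perturbed features
    satisfy epsilon-differential privacy."""
    rng = np.random.default_rng() if rng is None else rng
    scale = sensitivity / epsilon
    noise = rng.laplace(loc=0.0, scale=scale, size=features.shape)
    return features + noise

# Example: perturb hypothetical prosodic features (100 frames, 3 dims).
frames = np.random.default_rng(0).normal(size=(100, 3))
noised = laplace_mechanism(frames, sensitivity=1.0, epsilon=1.0)
```

A smaller epsilon yields larger noise, i.e. stronger privacy at the cost of utility.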
We present a detailed analysis of the submissions to the first VoicePrivacy Challenge (2020), with both objective and subjective evaluation.
We compare three different metrics (EER, Cllr, and linkability) for measuring privacy in speaker anonymization algorithms, and derive insights into their relative behaviour.
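Of the three metrics, EER is the simplest to illustrate: it is the operating point of an attacker's verification system where the false-acceptance and false-rejection rates are equal. A minimal sketch over synthetic attacker scores (the function name and score distributions are hypothetical):

```python
import numpy as np

def compute_eer(target_scores, nontarget_scores):
    """Equal error rate: scan candidate thresholds and return the point
    where the false-acceptance rate (FAR) on non-target trials meets the
    false-rejection rate (FRR) on target trials."""
    thresholds = np.sort(np.concatenate([target_scores, nontarget_scores]))
    far = np.array([(nontarget_scores >= t).mean() for t in thresholds])
    frr = np.array([(target_scores < t).mean() for t in thresholds])
    idx = np.argmin(np.abs(far - frr))
    return (far[idx] + frr[idx]) / 2.0

rng = np.random.default_rng(1)
tgt = rng.normal(2.0, 1.0, 1000)  # same-speaker trial scores
non = rng.normal(0.0, 1.0, 1000)  # different-speaker trial scores
eer = compute_eer(tgt, non)
```

For anonymization, an attacker EER approaching 50% indicates that speaker identity is well hidden, whereas Cllr and linkability additionally account for score calibration and per-trial discrimination.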
We investigate the effect of various design choices in the x-vector based speaker anonymization method on privacy and utility. Some choices prove more robust than others in the VoicePrivacy Challenge setup.
We propose a framework to mobilise the speech anonymization community through a series of challenges.
We aim for a paradigm shift in speaker privacy evaluation, from "security by obscurity" to semi-informed and informed attackers. We show that the privacy obtained by voice transformation techniques can be breached by an informed attacker.
We propose a privacy-preserving framework based on speaker-adversarial training of end-to-end ASR. We evaluate the system using closed-set and open-set speaker identification and observe a surprising disparity between the two.
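Speaker-adversarial training is commonly implemented with a gradient reversal layer between the shared encoder and the speaker classifier; assuming that choice, a minimal sketch of its forward/backward behaviour (class name and lambda value are illustrative, framework autograd omitted):

```python
import numpy as np

class GradientReversal:
    """Identity in the forward pass; multiplies the incoming gradient by
    -lambda in the backward pass, so minimizing the speaker classifier's
    loss pushes the shared encoder AWAY from speaker-discriminative
    features while the ASR loss is unaffected."""
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x

    def backward(self, grad_output):
        return -self.lam * grad_output

grl = GradientReversal(lam=0.5)
out = grl.forward(np.ones(4))    # unchanged activations
grad = grl.backward(np.ones(4))  # sign-flipped, scaled gradient
```

The scale lambda trades off ASR accuracy against how aggressively speaker information is suppressed.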