Microsoft Unveils Whisper Leak: AI Chat Topic Exposure in Encrypted Traffic
Microsoft has disclosed a novel side-channel attack it calls Whisper Leak, which exploits the streaming responses of large language models (LLMs) to infer conversation topics despite end-to-end encryption. Rather than breaking TLS, the attack analyzes metadata the protocol does not conceal, namely the sizes of encrypted packets and the timing intervals between them, and uses those patterns to classify whether a user's prompt touches on a sensitive subject such as financial crimes or political dissent.

An adversary positioned to observe the traffic, such as an internet service provider, an operator on the same Wi-Fi network, or a government surveillance program, can collect these patterns without ever decrypting the content. Microsoft tested Whisper Leak against models from Microsoft, OpenAI, Mistral, and other providers, achieving classification accuracies above 98%.

In response, major AI providers have introduced an obfuscation measure: appending random-length sequences of filler text to streamed responses, which masks token lengths and significantly reduces the attack's effectiveness.

The finding underscores that encryption alone is not enough; metadata leakage in AI-driven communications calls for its own privacy protections. The attack also becomes more effective as an adversary monitors longer sessions or multiple conversations from the same user, making it a prolonged privacy concern worldwide.
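
To see how little a passive observer needs, consider that each streamed response can be reduced to a sequence of packet sizes and inter-arrival times, and summary statistics of that sequence fed to an ordinary classifier. The Python sketch below illustrates the idea on synthetic traces only; the feature set, the scikit-learn gradient-boosting model, and the helper names are illustrative assumptions, not Microsoft's published pipeline.

```python
# Illustrative sketch of a Whisper Leak-style traffic classifier.
# Assumes encrypted traffic has already been reduced to per-response
# sequences of (packet size, inter-arrival time) pairs; features and
# model choice are assumptions, not the researchers' exact method.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split


def extract_features(packet_sizes, inter_arrival_times):
    """Summarize one streamed LLM response as a fixed-length vector.

    Only metadata visible to a network observer is used: packet sizes
    and the delays between packets. No plaintext is required.
    """
    sizes = np.asarray(packet_sizes, dtype=float)
    gaps = np.asarray(inter_arrival_times, dtype=float)
    return np.array([
        sizes.size,   # number of packets (proxy for response length)
        sizes.sum(),  # total bytes streamed
        sizes.mean(), # average packet size (proxy for token chunking)
        sizes.std(),  # variability of packet sizes
        gaps.mean(),  # average inter-token delay
        gaps.std(),   # variability of generation timing
    ])


# --- Toy demonstration with synthetic traces (not real captures) ------
rng = np.random.default_rng(0)

def synthetic_trace(sensitive):
    """Fabricate a (sizes, gaps) trace; real data would come from pcap."""
    n = rng.integers(40, 120)
    base = 180 if sensitive else 120  # pretend topics differ in chunk size
    sizes = rng.normal(base, 30, n).clip(60, None)
    gaps = rng.exponential(0.05 if sensitive else 0.03, n)
    return sizes, gaps

X, y = [], []
for label in (0, 1):
    for _ in range(200):
        sizes, gaps = synthetic_trace(sensitive=bool(label))
        X.append(extract_features(sizes, gaps))
        y.append(label)

X_train, X_test, y_train, y_test = train_test_split(
    np.array(X), np.array(y), test_size=0.25, random_state=0
)
clf = GradientBoostingClassifier().fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

The point of the sketch is that nothing in it touches plaintext: every input is metadata that survives TLS encryption, which is why stronger encryption alone does not close the gap.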

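The padding-style mitigation can likewise be approximated in a few lines: before a chunk of the streamed response is serialized, a random-length run of throwaway characters is attached so the encrypted record's size no longer tracks the token length. This is a minimal sketch under assumed field names and parameters; the obfuscation actually shipped by providers has not been published in this form.

```python
# Illustrative sketch of the padding mitigation: append random-length
# filler to each streamed chunk so ciphertext sizes are decoupled from
# token lengths. Field names and pad bounds are assumptions.

import json
import secrets
import string


def pad_chunk(chunk_text, min_pad=0, max_pad=64):
    """Wrap one streamed chunk with random-length gibberish.

    The client simply ignores the "pad" field; a network observer sees
    messages whose sizes no longer reflect the true token length.
    """
    pad_len = secrets.randbelow(max_pad - min_pad + 1) + min_pad
    filler = "".join(secrets.choice(string.ascii_letters) for _ in range(pad_len))
    return json.dumps({"delta": chunk_text, "pad": filler})


# Identical tokens now produce wire messages of varying length.
for _ in range(3):
    print(len(pad_chunk("Hello")))
```

Because padding masks sizes but not timing, it reduces rather than eliminates the observable signal, which is consistent with Microsoft's description of the mitigation as significantly weakening the attack rather than defeating it outright.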