A recent breakthrough in the security of mixed reality devices by Dr. Shuo Wang’s research group has received widespread attention, including a feature in the Top Picks section of Wired magazine. The research revealed a significant vulnerability in Apple’s Vision Pro headset that could allow attackers to decipher what users type, including sensitive information such as passwords and PINs, by analyzing eye movement data.
The attack, dubbed GAZEploit, exploits how the Vision Pro’s unique eye-tracking capabilities, which let users control a virtual keyboard, are mirrored by the user’s Persona, a virtual avatar, and could thereby inadvertently leak information. The research team demonstrated that by tracking the avatar’s eye movements, it is possible to reconstruct what the user is typing.

Hanqiu Wang, a PhD student in Dr. Shuo Wang’s lab and one of the lead researchers, explained to Wired, “Based on the direction of the eye movement, the hacker can determine which key the victim is now typing.” In tests, the researchers were able to identify letters typed on the virtual keyboard within five guesses with up to 92% accuracy for messages and 77% accuracy for passwords.
A key finding of the work is that this attack doesn’t require direct access to the device; it requires only a view of the Persona avatar’s eye movements. The avatar could be shared during a Zoom meeting or FaceTime call, or on other virtual meeting or live video-sharing platforms. By training deep learning models on recordings of avatars typing, the team showed the attack was not only possible but highly effective in laboratory settings.
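To make the core idea concrete, here is a minimal, purely illustrative sketch (not the authors’ code) of the geometric step the attack relies on: once an avatar’s gaze point on the virtual keyboard plane has been estimated, it can be snapped to the nearest key. The function name, the even key spacing, and the normalized coordinates are all simplifying assumptions for illustration; the actual system uses trained deep learning models on avatar video.

```python
# Hypothetical sketch: map a normalized gaze point on a virtual QWERTY
# keyboard back to the nearest key. Assumes (x, y) in [0, 1] x [0, 1],
# rows stacked evenly top-to-bottom and keys spaced evenly per row --
# a deliberate simplification of the paper's actual estimation pipeline.
QWERTY_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

def gaze_to_key(x: float, y: float) -> str:
    """Return the key whose region contains the gaze point (x, y)."""
    # Pick the row from the vertical coordinate, clamping at the edges.
    row_idx = min(int(y * len(QWERTY_ROWS)), len(QWERTY_ROWS) - 1)
    row = QWERTY_ROWS[row_idx]
    # Pick the key within that row from the horizontal coordinate.
    col_idx = min(int(x * len(row)), len(row) - 1)
    return row[col_idx]
```

For example, a gaze point in the top-left corner maps to “q”, while one near the bottom center lands in the “zxcvbnm” row. An attacker who can recover a sequence of such gaze fixations from a recorded Persona could, in principle, replay them through a mapping like this to guess keystrokes.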
The startling nature of the vulnerability captured the attention of Wired and other media, underscoring both the promise and the risks of advanced wearable technologies. The team disclosed the vulnerability to Apple, and the company moved quickly to issue patches.
The research’s impact extends beyond media coverage. The findings led to a published paper, authored by Hanqiu Wang, Zihao Zhan, Haoqi Shan, Siqi Dai, Max Panoff, and Shuo Wang. The paper, a collaboration among researchers from the University of Florida, CertiK, and Texas Tech University, was accepted at the ACM Conference on Computer and Communications Security (CCS) 2024.
Experts note that this research is a wake-up call about the privacy implications of rapidly evolving wearable technology. As devices like headsets, smart glasses, and smartwatches become integrated into daily life, understanding and mitigating these types of vulnerabilities becomes crucial. The team’s work not only highlights a concrete risk with gaze-based input but also signals the need for vigilant privacy safeguards as new forms of human-computer interaction proliferate.
More information about GAZEploit can be found on the research webpage.