Student Spotlight: Tre’ R. Jeter

Tre’ R. Jeter is a PhD student in Computer Science who has been working in the Adaptive Learning and Optimization Lab since Fall 2021. He is advised by Dr. My T. Thai, UF Research Foundation Professor of Computer & Information Science & Engineering and Associate Director of the Warren B. Nelms Institute for the Connected World at the University of Florida.

Tre’s broad research interests include computer architecture, cybersecurity, and high-performance computing. His current research focuses on identifying risks and improving user privacy in federated learning environments.



A Foundational Paradigm of Federated Learning Allowing Data Privacy Guarantees

On April 6, 2017, Google introduced federated learning (FL), a machine learning technique that enables devices to train a global model locally on their device-specific data. Each device sends its local update to a central entity, usually a server, which averages the updates into a new global model and redistributes it for continued training. This form of machine learning allows distributed model training while promising data privacy: each device's data is never shared with other devices or the server, only the local updates computed after training the model locally. The base architecture is novel, but its promise to preserve user privacy has been thoroughly challenged and has proven immature in protecting user data.
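The train-locally-then-average loop described above can be sketched in a few lines. This is a minimal illustration under assumptions not taken from the article (a linear model, squared loss, synthetic per-device data, equal-weight averaging); real FedAvg weights each update by local dataset size.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One device trains on its own data via gradient descent and
    returns its updated weights; the raw (X, y) never leaves the device."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # squared-loss gradient
        w -= lr * grad
    return w

# Synthetic data: each of 5 devices holds its own (X, y) sample.
w_true = np.array([2.0, -1.0])
devices = []
for _ in range(5):
    X = rng.normal(size=(20, 2))
    y = X @ w_true + 0.01 * rng.normal(size=20)
    devices.append((X, y))

# Server loop: broadcast the global model, collect local updates,
# average them into the next global model, and repeat.
global_w = np.zeros(2)
for _ in range(30):
    updates = [local_update(global_w, X, y) for X, y in devices]
    global_w = np.mean(updates, axis=0)  # equal-weight average (simplified)

print(global_w)  # converges toward w_true
```

The key point for the privacy discussion that follows: only the weight vectors cross the network, yet those vectors are still functions of the private data, which is exactly what the attacks below exploit.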

The current framework is riddled with attack vectors that can degrade model performance and leak data, potentially enabling reconstruction. Such leakage is of critical concern to users in FL environments because it directly violates the promise of user privacy. Currently, FL is susceptible to model inversion, membership inference (MI), and generative adversarial network (GAN) reconstruction attacks. My research includes 1) establishing a foundation of privacy risk in the FL framework and 2) designing a truly privacy-protecting FL principle that preserves user data and, by extension, further promotes user privacy. This research impacts the future of artificial intelligence (AI) systems by directly enhancing their decision-making and prediction quality in virtually any domain while preserving the authenticity and integrity of user data.
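To make the membership-inference threat concrete, here is a toy loss-threshold attack on a deliberately overfit model. Everything here is illustrative and not from the article: the "model" is a high-degree polynomial fit, and the attacker simply guesses "member" when a point's loss falls below a threshold, exploiting the fact that overfit models memorize their training points. Real MI attacks (e.g., shadow-model attacks) are considerably more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Overfit a degree-8 polynomial to 10 "member" (training) points.
members_x = rng.uniform(-1, 1, 10)
members_y = np.sin(3 * members_x) + 0.1 * rng.normal(size=10)
coeffs = np.polyfit(members_x, members_y, 8)  # heavily overfits

# Fresh "non-member" points from the same distribution.
nonmembers_x = rng.uniform(-1, 1, 10)
nonmembers_y = np.sin(3 * nonmembers_x) + 0.1 * rng.normal(size=10)

def loss(x, y):
    """Per-point squared error of the fitted model."""
    return (np.polyval(coeffs, x) - y) ** 2

member_losses = loss(members_x, members_y)
nonmember_losses = loss(nonmembers_x, nonmembers_y)

# Attacker's rule: loss below threshold => guess "was in the training set".
threshold = np.median(np.concatenate([member_losses, nonmember_losses]))
hits = (member_losses < threshold).sum() + (nonmember_losses >= threshold).sum()
accuracy = hits / 20
print(f"attack accuracy: {accuracy:.2f}")
```

An accuracy well above the 0.5 chance level means the model's loss alone reveals who was in the training data, which is precisely the kind of leakage a privacy-preserving FL design must suppress.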

My research will focus on three key tasks: 1) investigating new MI attacks that exploit the randomness of device selection at each training iteration, and providing an effective countermeasure by incorporating blockchain into the FL environment; 2) defending against data-reconstruction attacks with a novel data augmentation method; and 3) exploiting new attack vectors while designing new defenses, thereby establishing the foundation of a truly privacy-preserving FL system.
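One way a blockchain can harden client selection against manipulated randomness is to derive each round's client subset from a public, tamper-evident seed such as a block hash, so any participant can re-verify the selection. The sketch below is a hypothetical illustration of that idea only; the block hash, function names, and parameters are placeholders, not the protocol from this research.

```python
import hashlib

def select_clients(block_hash, round_num, clients, k):
    """Deterministically pick k clients from a public seed: rank every
    client by the hash of (seed, round, client id) and take the first k.
    Anyone with the same inputs can reproduce and audit the selection."""
    ranked = sorted(
        clients,
        key=lambda c: hashlib.sha256(
            f"{block_hash}:{round_num}:{c}".encode()
        ).hexdigest(),
    )
    return ranked[:k]

clients = [f"device-{i}" for i in range(10)]
# "00ab3f..." stands in for a real block hash published on-chain.
chosen = select_clients("00ab3f...", round_num=7, clients=clients, k=3)
print(chosen)
```

Because the seed is fixed by the chain rather than chosen by the server, a malicious aggregator cannot quietly bias which devices participate in a given round, closing one avenue for the selection-based MI attacks described in task 1.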


Recent Publications:

  1. Blockchain-based Secure Client Selection in Federated Learning
  2. Advances in Blockchain Security
  3. SpackNVD: A Vulnerability Audit Tool for Spack Packages