Student Demo & Poster Sessions

Organized with the IoT Students Club at the University of Florida, the session brought together a large and diverse group of undergraduate and graduate students presenting posters and demonstrations of research projects spanning various aspects of smart and connected systems. The student poster and demo session was a great avenue to connect with our talented students.

Congratulations to our winners! We had 25 student demos and 80+ student poster presentations at the conference.

Student demo winners:

1st Place

F1Tenth Autonomous Driving

Presented by: Christopher Oeltjen, Carson Sobolewski, Lorant Domokos, University of Florida
Abstract: We will be demoing our F1Tenth cars’ ability to drive around a track that has not been predetermined, map it out to create an optimal racing line, and then follow that racing line.

2nd Place

PLASMA: Platoon Security Against Multi-Channel Perception Adversaries

Presented by: Chengwei Duan, University of Florida
Abstract: A connected and autonomous vehicle (CAV) platoon constitutes a group of vehicles that coordinate their movements and operate as a single unit. The vehicle at the head sets a trajectory emulated by the other, following vehicles. A CAV platoon is vulnerable to security attacks by adversaries that tamper with the sensory or communication components of participating vehicles, resulting in the breakdown of coordination and consequently compromising the safety, stability, and efficiency of the platoon. In this paper, we develop a framework, PLASMA, to protect a multi-vehicle CAV platoon against attacks on the perception components of its constituent vehicles. A unique feature of PLASMA is its ability to detect and mitigate adversarial activities in real time, using a combination of kinematics and machine learning techniques. We perform extensive experiments to demonstrate the viability of PLASMA under diverse attack scenarios. Furthermore, a key outcome of our work is a disciplined analysis of various adversary models in CAV security and a methodology for systematic identification of mitigation techniques.

3rd Place

Energy-Efficient, High-Security Body Area Network Enabled by Human Body Communication

Presented by: Anyu Jiang, Asif Iftekhar Omi, University of Florida
Abstract: Body Area Networks (BANs) are systems of interconnected devices designed to operate in, on, or around the human body, supporting a range of applications. These devices, including wearable sensors, implantable medical devices, and external monitoring units, communicate to collect, process, and transmit personal health data. Traditional BANs often rely on wireless communication technologies such as Bluetooth or Wi-Fi. However, two major drawbacks of using high-frequency wireless channels are: (1) operating at high frequencies results in significant energy consumption, and (2) signal leakage, extending up to approximately 5 meters outside the human body, compromises communication security. Notice, however, that these devices share a common channel: the human body itself. Due to its high water content, the human body can act as a conductor, enabling wireline-like communication. According to previous studies, compared to traditional RF communication, this technique offers roughly 100 times better energy efficiency (reducing from the conventional 1 nJ/bit to sub-10 pJ/bit) and more than 30 times better security (reducing signal leakage from 5 meters to approximately 15 cm outside the body).
This demo illustrates the concept of Human Body Communication by showing signals transmitted through the human body, visualized on a monitoring device.

Honorable mention 1

Exploring the Underwater Frontier with CavePI

Presented by: Alankrit Gupta, Xianyao Li, University of Florida
Abstract: CavePI is an underwater Remotely Operated Vehicle (ROV) designed for autonomous and semi-autonomous exploration of submerged cave environments. Equipped with dual cameras, sonar, a depth sensor, and a flight controller with an inbuilt IMU, CavePI is engineered to overcome the unique challenges of confined and low-visibility underwater spaces. The ROV’s primary capabilities include rope-following for navigation assistance, depth-hold for stable vertical positioning, yaw-hold for maintaining orientation, and precise movement along straight lines with controlled turns. These features enable CavePI to explore caves autonomously or with minimal operator input, making it an invaluable tool for scientific research, environmental monitoring, and geological surveying.

Honorable mention 2

Wearable Sensor System for Scapular Motion Monitoring

Presented by: Junjun Huan, University of Florida
Abstract: A wearable sensing system developed to monitor scapular movement during recovery from shoulder replacement surgery.

Honorable mention 3

LATENT: Leveraging Automated Test Pattern Generation for Hardware Trojan Detection

Presented by: Sudipta Paria, University of Florida
Abstract: Due to the globalization of the semiconductor supply chain and the adoption of the zero trust model, hardware Trojan attacks pose significant security threats introduced by untrusted entities. Hardware Trojans involve malicious modifications to a design before fabrication, leading to unintended behaviors such as Denial of Service (DoS) attacks or leakage of sensitive information. Detecting these Trojans in fabricated chips is challenging due to the vast attack space. Conventional post-manufacturing Automatic Test Pattern Generation (ATPG) methods struggle with the activation (trigger) and observation (payload) of Trojans, making detection practically infeasible. While statistical testing techniques have been proposed for post-silicon Trojan detection, they suffer from limited trigger and payload coverage and scalability issues. In this demo, we show a scalable, payload-aware statistical test pattern generation framework, named LATENT, that enhances Trojan detection by leveraging ATPG solutions. We show the performance of LATENT in detecting randomly inserted virtual Trojans using optimized patterns in open-source designs and demonstrate promising results for trigger and Trojan coverage. We also present the flexibility of the LATENT framework with configurable parameters like rareness threshold, N-detect, number of Trojans, etc. Additionally, we showcase the visualization of the hypergraphs generated from test designs and show the locality of triggers and payloads of inserted Trojans.

Student poster winners:

1st Place

ML Model Extraction Attacks on IoT Devices: Strategies, Challenges and Defenses for Industry Applications

Presented by: Tushar Nayan, Florida International University
Abstract: The adoption of machine learning (ML) within the Internet of Things (IoT) ecosystem has transformed edge computing, enabling real-time intelligence in applications such as smart homes, healthcare, and autonomous systems. However, this shift introduces significant security risks. Adversaries can exploit ML models through attacks such as model extraction, leading to intellectual property theft, data privacy breaches, and operational failures in safety-critical IoT systems.

Despite advancements, current defenses often fail in deployment due to scalability limitations, performance overheads, and IoT-specific constraints. We conducted an extensive evaluation of over 50 research projects, including 30 open-source works, systematically categorizing different threat models into application, device, communication, and model-based attacks and defenses. Filtering 16.5K real-world ML models from 210K Android application packages, we present the first comprehensive study of state-of-the-art model extraction attacks and defenses and their practical limitations.

We found that over 48% of real-world IoT models are vulnerable to naive app-based attacks, despite the availability of encryption-based defenses. Side channels and inference patterns expose further vulnerabilities that can be exploited to replicate models. Trusted Execution Environments (TEEs) are a potential defense but face deployment barriers due to hardware dependencies and resource constraints. Moreover, the energy demands of defense mechanisms remain prohibitive for resource-limited IoT devices.

2nd Place

Zero-shot Safety Prediction for Autonomous Robots with Foundation World Models

Presented by: Zhenjiang Mao, University of Florida
Abstract: A world model creates a surrogate world to train a controller and predict safety violations by learning the internal dynamics model of a system. However, existing world models rely solely on statistical learning of how observations change in response to actions, lacking precise quantification of how accurate the surrogate dynamics are, which poses a significant challenge in safety-critical systems. To address this challenge, we propose foundation world models that embed observations into meaningful, causal latent representations. This enables the surrogate dynamics to directly predict causal future states by leveraging a training-free large language model. On two common benchmarks, this novel model outperforms standard world models in the safety prediction task and performs comparably to supervised learning despite not using any data. We evaluate its performance with a more specialized and system-relevant metric by comparing estimated states instead of aggregating observation-wide error.

3rd Place

Exploring the Underwater Frontier with CavePI

Presented by: Xianyao Li, Alankrit Gupta, University of Florida
Abstract: CavePI is an underwater Remotely Operated Vehicle (ROV) designed for autonomous and semi-autonomous exploration of submerged cave environments. Equipped with dual cameras, sonar, a depth sensor, and a flight controller with an inbuilt IMU, CavePI is engineered to overcome the unique challenges of confined and low-visibility underwater spaces. The ROV’s primary capabilities include rope-following for navigation assistance, depth-hold for stable vertical positioning, yaw-hold for maintaining orientation, and precise movement along straight lines with controlled turns. These features enable CavePI to explore caves autonomously or with minimal operator input, making it an invaluable tool for scientific research, environmental monitoring, and geological surveying.

Honorable mention 1

Enabling Decentralized Privacy-Preserving FL for Edge Computing

Presented by: Richard Hernandez, Florida International University
Abstract: Federated Learning (FL) has emerged as a promising approach for decentralized machine learning, but its reliance on a trusted aggregator presents significant privacy risks. We propose a novel FL framework leveraging Secure Multiparty Computation (MPC) to address these challenges and enable secure and privacy-preserving aggregation in untrusted environments. By incorporating the SPDZ protocol, our framework ensures that the aggregation process remains private, even in malicious environments. Unlike differential privacy-based methods, this approach preserves model accuracy while mitigating collusion risks among participants.

Our system includes mechanisms to resist poisoning attacks by employing cosine similarity filtering on private models and a game-theoretic analysis to validate collusion resistance. Initial experiments demonstrate that the proposed framework achieves comparable accuracy to centralized FL training while maintaining robust security guarantees. This solution facilitates secure collaboration among edge devices, effectively integrating IoT data into training without exposing sensitive information.
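As an illustration of the poisoning defense described above, here is a minimal plaintext sketch of cosine-similarity filtering of client updates before averaging; in the actual framework this comparison runs on private models under MPC (SPDZ), and the reference vector and threshold used here are assumptions for illustration only.

```python
# Illustrative sketch (not the authors' SPDZ-based implementation): filter out client
# updates whose direction disagrees with the coordinate-wise median update, then average.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def filtered_average(updates, threshold=0.0):
    """Average only the updates that point roughly the same way as the median update."""
    reference = np.median(np.stack(updates), axis=0)
    kept = [u for u in updates if cosine(u, reference) > threshold]
    return np.mean(np.stack(kept), axis=0) if kept else reference

# Example: two honest updates and one sign-flipped (poisoned) update.
honest = [np.array([0.9, 1.1, 1.0]), np.array([1.1, 0.9, 1.0])]
poisoned = -10.0 * honest[0]
print(filtered_average(honest + [poisoned]))  # stays close to [1.0, 1.0, 1.0]
```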

Honorable mention 2

Language-Enhanced Latent Representations for Out-of-Distribution Detection in Autonomous Driving

Presented by: Dong-You Jhong, University of Florida
Abstract: Out-of-distribution (OOD) detection is essential in autonomous driving to determine when learning-based components encounter unexpected inputs. Traditional detectors typically use encoder models with fixed settings, thus lacking effective human interaction capabilities. With the rise of large foundation models, multimodal inputs offer the possibility of taking human language as a latent representation, thus enabling language-defined OOD detection. In this paper, we use the cosine similarity of image and text representations encoded by the multimodal model CLIP as a new representation to improve the transparency and controllability of latent encodings used for visual anomaly detection. We compare our approach with existing pre-trained encoders that can only produce latent representations that are meaningless from the user’s standpoint. Our experiments on realistic driving data show that the language-based latent representation performs better than the traditional representation of the vision encoder and helps improve the detection performance when combined with standard representations.
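A minimal sketch of the language-defined representation this abstract describes: the OOD score is derived from cosine similarities between an image embedding and a handful of text prompts. The `embed_image`/`embed_text` helpers below are hypothetical stand-ins for a CLIP encoder (random unit vectors so the sketch runs), and the prompts, kNN scoring, and threshold are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

PROMPTS = ["a clear road in daylight", "a road at night",
           "heavy fog on the road", "debris blocking the road"]

def _unit(v):
    return v / np.linalg.norm(v)

def embed_image(image_id: str) -> np.ndarray:
    # Hypothetical stand-in for CLIP's image encoder (pseudo-random placeholder).
    rng = np.random.default_rng(abs(hash(("img", image_id))) % (2**32))
    return _unit(rng.normal(size=512))

def embed_text(prompt: str) -> np.ndarray:
    # Hypothetical stand-in for CLIP's text encoder.
    rng = np.random.default_rng(abs(hash(("txt", prompt))) % (2**32))
    return _unit(rng.normal(size=512))

def language_representation(image_id: str) -> np.ndarray:
    """Cosine similarity to each prompt: a low-dimensional, human-readable representation."""
    img = embed_image(image_id)
    return np.array([float(img @ embed_text(p)) for p in PROMPTS])

def is_ood(image_id: str, train_reps: np.ndarray, k: int = 5, threshold: float = 0.15) -> bool:
    """Flag an input whose mean distance to its k nearest training representations
    (in prompt-similarity space) exceeds a threshold."""
    rep = language_representation(image_id)
    dists = np.linalg.norm(train_reps - rep, axis=1)
    return float(np.sort(dists)[:k].mean()) > threshold

# Example: build representations for 100 "training" frames, then score a new frame.
train_reps = np.stack([language_representation(f"train_{i}") for i in range(100)])
print(is_ood("new_frame", train_reps))
```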

Honorable mention 3

Bridging Dimensions: Confident Reachability for High-Dimensional Controllers

Presented by: Yuang Geng, University of Florida
Abstract: Autonomous systems are increasingly implemented using end-to-end learning-based controllers. Such controllers make decisions that are executed on the real system, with images as one of the primary sensing modalities. Deep neural networks form a fundamental building block of such controllers. Unfortunately, the existing neural-network verification tools do not scale to inputs with thousands of dimensions, especially when the individual inputs (such as pixels) are devoid of clear physical meaning. This paper takes a step towards connecting exhaustive closed-loop verification with high-dimensional controllers. Our key insight is that the behavior of a high-dimensional vision-based controller can be approximated with several low-dimensional controllers. To balance the approximation accuracy and verifiability of our low-dimensional controllers, we leverage the latest verification-aware knowledge distillation. Then, we inflate low-dimensional reachability results with statistical approximation errors, yielding a high-confidence reachability guarantee for the high-dimensional controller.

List of student demos:

F1Tenth Autonomous Driving

Presented by: Christopher Oeltjen, Carson Sobolewski, Lorant Domokos
Abstract: We will be demoing our F1Tenth cars’ ability to drive around a track that has not been predetermined, map it out to create an optimal racing line, and then follow that racing line.

Exploring the Underwater Frontier with CavePI

Presented by: Alankrit Gupta, Xianyao Li, Md Jahidul Islam
Abstract: CavePI is an underwater Remotely Operated Vehicle (ROV) designed for autonomous and semi-autonomous exploration of submerged cave environments. Equipped with dual cameras, sonar, a depth sensor, and a flight controller with an inbuilt IMU, CavePI is engineered to overcome the unique challenges of confined and low-visibility underwater spaces. The ROV’s primary capabilities include rope-following for navigation assistance, depth-hold for stable vertical positioning, yaw-hold for maintaining orientation, and precise movement along straight lines with controlled turns. These features enable CavePI to explore caves autonomously or with minimal operator input, making it an invaluable tool for scientific research, environmental monitoring, and geological surveying.

Underwater Scene Synthesis by Physics-Informed Waterbody Fusion

Presented by: Md Abu Bakr Siddique
Abstract: We propose a physics-informed approach for synthesizing realistic waterbody properties in underwater scenes. Our method leverages the physical principles of light propagation underwater to achieve waterbody fusion that ensures geometrically consistent rendering and accurate data augmentation. By transferring waterbody characteristics from one scene to the object contents of another, our approach preserves the depth consistency and object geometry of the original scene. Unlike conventional data-driven style transfer techniques, our method guarantees structural integrity across the synthesized images. Extensive experiments on diverse underwater environments demonstrate that our approach preserves over 94% depth consistency and 90-95% structural similarity with the original scenes. Additionally, we show that it enables accurate 3D view synthesis while maintaining object geometry, even when adapting to the fusion of waterbody effects. This work opens new possibilities for underwater scene synthesis, with potential applications in robotics, computer vision, and underwater imaging.

Visual navigation and exploration powered by on-device intelligence on a custom designed UAV platform

Presented by: Yuxuan Zhang
Abstract: We’d like to present (1) a fully custom-designed and 3D-printed high-performance rover platform and (2) a visual navigation system that runs entirely on-device, enabling fully autonomous navigation and exploration of an unknown space.

Microblaze FPGA Drone

Presented by: Wade Fortney
Abstract: This demo presents a drone built around a Zynq-7000 series FPGA. The drone uses a MicroBlaze soft-core CPU as the flight controller, which runs a rotation-rate PID loop and collects sensor data. At start-up, the Zynq processing system (PS) runs a bootloader that loads the bitstream, with the MicroBlaze program binary baked in, from on-board SPI flash into the Programmable Logic (PL). The drone will serve as a research platform for drones operating in extreme conditions. We plan to have the PS run PetaLinux for AI applications. The PS, acting as the flight computer, will capture images and build a map to support a return-to-home action if the connection to the wireless controller is lost. The PL will contain the flight controller, image-processing hardware, and other data-processing acceleration. To command motion, the PS sends commands over block RAM to the MicroBlaze flight controller in the PL.
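For context on the rotation-rate PID loop mentioned above, here is a minimal sketch of the control law (written in Python for readability); the actual controller runs in the MicroBlaze firmware, and the gains, loop rate, and single-axis structure shown are assumptions for illustration only.

```python
# Minimal rotation-rate PID sketch: track a commanded angular rate using gyro feedback.
class RatePID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint_dps, measured_dps):
        """setpoint/measured are angular rates in degrees per second (e.g. from a gyro)."""
        error = setpoint_dps - measured_dps
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# One axis of the loop: command from the radio, rate from the IMU, output to the motor mixer.
pid_roll = RatePID(kp=0.8, ki=0.3, kd=0.02, dt=0.002)  # 500 Hz loop rate (assumed)
motor_correction = pid_roll.update(setpoint_dps=30.0, measured_dps=12.5)
print(motor_correction)
```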

Wireless sensor network implementation through remote access control

Presented by: Xi Wang
Abstract: Wireless sensor networks play an important role in daily life. Traditional wireless sensor network implementations rely on local control through a computer. However, this approach lacks flexibility, and other users cannot access the system. In this demo, we show a wireless sensor network implementation with remote access control. Users no longer need to stay near the devices; instead, they can control them remotely. This implementation enhances the scalability and flexibility of wireless sensor networks.

Lightweight and Edge-Optimized Autonomous Driving Model on DonkeyCar Platform

Presented by: Dong-You Jhong, Zhenjiang Mao, Nathan Noronha, Mrinall Umasudhan, Cesar Valentin
Abstract: We present an edge-compatible, deep learning-based steering control model integrated with the DonkeyCar platform. The model employs a custom CNN architecture designed for resource-constrained environments, focusing on efficient steering angle predictions. The training data consists of preprocessed grayscale images, enabling faster inference while maintaining accuracy. In this session, attendees will see the model’s seamless deployment on DonkeyCar, along with its ability to navigate dynamic tracks in real-time. This work underscores the potential for deploying lightweight neural networks in IoT-enabled autonomous vehicles.
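A sketch of the kind of compact, grayscale-input CNN for steering-angle regression that this abstract describes; the layer sizes, input resolution, and output normalization are illustrative assumptions, not the presenters' actual architecture.

```python
import torch
import torch.nn as nn

class TinySteeringNet(nn.Module):
    """Small CNN that maps one grayscale frame to a normalized steering angle."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(            # input: 1 x 120 x 160 grayscale frame (assumed)
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(64), nn.ReLU(),
            nn.Linear(64, 1), nn.Tanh(),          # steering angle normalized to [-1, 1]
        )

    def forward(self, x):
        return self.head(self.features(x))

model = TinySteeringNet().eval()
frame = torch.rand(1, 1, 120, 160)                # one preprocessed grayscale frame
steering = model(frame).item()
print(steering)
```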

Autonomous RC Car Drift Detection

Presented by: Lorant Domokos, Christopher Oeltjen, Carson Sobolewski
Abstract: Autonomous RC car drift detection using on-board odometry for slip detection and road-tire static friction estimation using physics-based methods.

Innovative Assessment of Great Toe Strength Using ToeScale: A Portable IoT Device

Presented by: Raghuveer Chandrashekhar, Hongwu Wang
Abstract: The strength and range of motion of the great toe are crucial for normal walking and balance. However, existing methods for assessing great toe strength (GTS) have limitations, including the need for costly equipment and/or specialized personnel. To overcome these challenges, we developed ToeScale, a novel and portable IoT device utilizing a 3-node network with a central node, a sensor node for measuring muscle strength, and an actuator node with RGB LED lights for visual feedback. The central node powers the end nodes and facilitates the communication between them as well as stores the recorded data. This demo will showcase the novel ToeScale with the visual feedback interface and its unique output, a time-series dataset of GTS over 10s.
Our preliminary validation suggests that ToeScale has excellent repeatability in both benchtop and human testing, with Intraclass Correlation Coefficients (ICCs) ranging from 0.90 to 1.00. Additionally, the results suggest it could potentially detect more subtle changes or asymmetries in muscle strength than manual muscle testing (MMT), the current clinical standard for GTS measurement.
The demo will highlight the device’s scalability and how it can be integrated into the existing clinical infrastructure to enhance outcomes. We will also address the challenges encountered during IoT implementation and discuss future research directions for ToeScale development.

Advancing Quantum Computing: Techniques for Efficiency and Scalability

Presented by: Hoang Ngo
Abstract: Quantum computing is an emergent field of cutting-edge computer science harnessing the unique qualities of quantum mechanics to solve problems beyond the ability of even the most powerful classical computers. Among its paradigms, quantum annealing stands out as a leading approach, employing quantum effects for optimization tasks. This poster highlights the advantages and fundamental challenges of quantum annealing, along with our recent contributions aimed at enhancing its workflow.

Taming Thunder: Demonstrating HWIL Fuzzing on Cloud-Connected Firmware

Presented by: Austin Kee
Abstract: It is well known that fuzzing firmware is a challenging endeavor, especially when attempting to automate the process. The Fuzzware system attempts to tackle the problem through rehosting and fuzzing, monitoring how fuzzer inputs affect code coverage and identifying interesting inputs through this trial-and-error process. We investigate whether restoring hardware to the fuzzing loop, to provide more realistic base inputs, increases Fuzzware’s fuzzing performance.
A fuzzing workflow is demonstrated in which real inputs from a cloud provider are captured and used as seed inputs for fuzzing. The fuzzing target is the readily available FreeRTOS AWS quick-connect firmware, running on an STM32L4+ Discovery development kit (B-L4S5I-IOT01A). Testing focuses on the network interface implementation, specifically MQTT protocol parsing and data handling. JTAG boundary scan is used to record real transactions between the FreeRTOS firmware, the SPI-connected Wi-Fi card, and the AWS MQTT broker service. Once a sufficient base dataset is collected, the FreeRTOS firmware is fuzzed via rehosting using the Fuzzware toolkit. Performance is quantified by the time taken to find crashes and interesting firmware states compared to a format-naive fuzzing campaign.

Baseline Clinical Characteristics and Machine Learning Predict tDCS Treatment Outcomes for Anxiety

Presented by: Junfu Cheng, Ruogu Fang
Abstract: Background and Objectives: Transcranial Direct Current Stimulation (tDCS) combined with cognitive training (CT) demonstrates potential for enhancing mental health in older adults, especially in mitigating anxiety disorders, a known risk factor for Alzheimer’s Disease and Related Dementias (ADRD). However, outcome variability persists. This study aims to identify baseline predictors influencing tDCS + CT efficacy in reducing anxiety.
Methods: Data from the Augmenting Cognitive Training in Older Adults trial (ACT, NCT02851511) were analyzed to predict anxiety improvement based on baseline characteristics. The study included 333 participants aged 65-89 from the CT + Active tDCS and CT + Sham tDCS groups, assessed at baseline and at 3 months. In the Active tDCS group, participants received 20 minutes of 2 mA tDCS targeting F3/F4 during a two-week CT program paired with an N-back task. Lasso Logistic Regression (LASSO) identified predictors of anxiety improvement using 75 key baseline variables covering demographic information, neurocognitive function, functional abilities, quality of life, and mental and medical health features.
Results: LASSO demonstrated strong efficacy in predicting post-treatment anxiety outcomes in older adults receiving tDCS. The model’s performance in predicting the change over 3 months is 77.47%. Notably, tDCS significantly reduced anxiety symptoms compared to sham stimulation.
Conclusions: This study highlights LASSO’s potential to predict anxiety treatment outcomes, identifying key predictors such as baseline STAI state score, tDCS active/sham, sex, assistive device usage, childhood difficulties, race, and medication usage to reduce pain symptoms. These findings support the development of personalized tDCS-based interventions for anxiety disorders in older adults.

Uncoded Storage Coded Transmission Elastic Computing for Matrix-Matrix Multiplications

Presented by: Xi Zhong
Abstract: Elastic computing, proposed by Yang in 2018, is designed to mitigate the impact of elasticity in a homogeneous cloud system, where machines have the same computation speed and join and leave the network arbitrarily over different computing steps. Limitations of this approach include that it cannot tolerate stragglers for matrix-matrix multiplications, and our real measurements over Amazon EC2 show that virtual instances often have different computing speeds even when they have the same configurations. In this demo, we introduce a new combinatorial optimization framework, named uncoded storage coded transmission elastic computing (USCTEC), for heterogeneous systems, where machines have different computation speeds. With the goal of minimizing the overall computation time, we propose optimal USCTEC schemes with straggler tolerance. Furthermore, we establish a trade-off between computation time and straggler-tolerance capacity. We evaluate the performance of our design on Amazon EC2 for matrix-matrix multiplications. Evaluation results show that the proposed heterogeneous design outperforms the homogeneous design by 30%, and outperforms the baseline method using uncoded computing by more than 50% when stragglers are tolerated.
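To illustrate the heterogeneous load-balancing intuition behind such a scheme (not the coded-transmission or straggler-tolerance machinery itself), here is a small sketch that splits the rows of A across machines in proportion to assumed per-machine speeds; matrix sizes and speed values are illustrative.

```python
# Split rows of A in proportion to each machine's measured speed so that all
# machines finish their block of A @ B at roughly the same time.
import numpy as np

def speed_proportional_split(n_rows, speeds):
    """Return the number of rows of A assigned to each machine."""
    shares = np.array(speeds, dtype=float) / sum(speeds)
    counts = np.floor(shares * n_rows).astype(int)
    counts[-1] += n_rows - counts.sum()           # hand any remainder to the last machine
    return counts

A = np.random.rand(600, 400)
B = np.random.rand(400, 300)
speeds = [3.0, 2.0, 1.0]                          # e.g. measured relative speeds (assumed)
counts = speed_proportional_split(A.shape[0], speeds)
blocks = np.split(A, np.cumsum(counts)[:-1], axis=0)
C = np.vstack([blk @ B for blk in blocks])        # each block product would run on its machine
assert np.allclose(C, A @ B)
```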

Exploration of Immersive Virtual Environment for Automotive Security

Presented by: Mustafa Mohammad Shaky
Abstract: The lack of a platform that allows for practical investigation of automotive security flaws is a major barrier to the community’s understanding of cybersecurity issues in automotive systems. This project addresses that issue by creating an interactive exploration platform that helps users understand possible attacks on autonomous automotive systems, with a focus on attacks against ranging sensors. An autonomous car uses these sensors to create an internal perception model of its surroundings. The car may be forced into risky or ineffective driving behavior by an adversary or hacker who feeds it inaccurate or deceptive sensor information. Built on virtual reality infrastructure, this platform, IVE (for “Immersive Virtual Environment”), allows users to simulate real-world operational scenarios while experimenting with different vehicle security breaches.

MM-PUF

Presented by: Peyman Dehghanzadeh
Abstract: Traditional PUFs face challenges such as design overhead, susceptibility to model-building attacks, and confinement to specific chip regions. To address these challenges, this demo introduces the Multi-functional Memory-based PUF (MM-PUF), a custom memory-based PUF structure tailored for Application-Specific Integrated Circuits (ASICs). This innovative PUF minimizes area overhead, enables flexible distribution within the design, and leverages process variation across the die as a source of entropy to produce device-specific, unique identifiers. Moreover, the resources allocated to this PUF structure can serve an alternative purpose within the ASIC when not used as an entropy source by functioning as storage elements to retain useful data, thereby optimizing area and resource utilization. To achieve this change in functionality, programming bits are applied to the cell. A comprehensive evaluation of PUF signatures, carried out through circuit-level simulations and experimental measurements on test chips fabricated in a 65 nm CMOS process, highlights the superior quality of the PUF in terms of uniqueness, randomness, and robustness, while also highlighting its additional role as programmable storage. Notably, this exceptional quality is achieved with minimal overhead, concurrently offering on-demand data storage capabilities for diverse applications, test points for improved testability, constant coefficients for digital signal processing (DSP), and weights for neural networks.

Detecting Fake Honey Through Portable NMR System

Presented by: Tianjun Wang, Rohan Reddy Kalavakonda, Kelsey Horace-Heron
Abstract: Fake honey is a very prevalent problem; counterfeiters produce it using cane sugar, jaggery, and other materials. A machine learning model is trained on NMR data from honey samples, and various machine learning models can identify honey samples from NMR data with high accuracy. The combination of machine learning and portable NMR shows promise as an accessible food-safety solution.

Wearable Sensor System for Scapular Motion Monitoring

Presented by: Junjun Huan
Abstract: A wearable sensing system developed to monitor scapular movement during recovery from shoulder replacement surgery.

Targeted Fault Attack on DNN Accelerator Using Clock Glitch

Presented by: Kazi Mejbaul Islam
Abstract: This demonstration presents a targeted fault attack on deep neural networks (DNNs) implemented on FPGAs, leveraging clock glitching to induce controlled misclassification. By manipulating clock glitch parameters (width, offset, and the number of glitch cycles), we demonstrate precise fault injection that disrupts critical computations within the DNN. The fault’s impact on model behavior is systematically analyzed using the MNIST dataset, showcasing real-time misclassifications of input images. The experiment highlights the vulnerability of FPGA-based DNNs to clock-based fault attacks and provides insights into the correlation between glitch parameters and DNN corruption levels. This work underscores the importance of developing robust fault-tolerant mechanisms for hardware-accelerated AI models.

Energy-Efficient, High-Security Body Area Network Enabled by Human Body Communication

Presented by: Anyu Jiang, Asif Iftekhar Omi
Abstract: Body Area Networks (BANs) are systems of interconnected devices designed to operate in, on, or around the human body, supporting a range of applications. These devices, including wearable sensors, implantable medical devices, and external monitoring units, communicate to collect, process, and transmit personal health data. Traditional BANs often rely on wireless communication technologies such as Bluetooth or Wi-Fi. However, two major drawbacks of using high-frequency wireless channels are: (1) operating at high frequencies results in significant energy consumption, and (2) signal leakage, extending up to approximately 5 meters outside the human body, compromises communication security.
Notice, however, that these devices share a common channel: the human body itself. Due to its high water content, the human body can act as a conductor, enabling wireline-like communication. According to previous studies, compared to traditional RF communication, this technique offers roughly 100 times better energy efficiency (reducing from the conventional 1 nJ/bit to sub-10 pJ/bit) and more than 30 times better security (reducing signal leakage from 5 meters to approximately 15 cm outside the body).
This demo illustrates the concept of Human Body Communication by showing signals transmitted through the human body, visualized on a monitoring device.

CRISP

Presented by: Atri Chatterjee
Abstract:

Real-time Inferencing for Handwritten Character Detection from Neural Signals on a Portable Hardware Device

Presented by: Ovishake Sen
Abstract: Brain-computer interfaces (BCIs) are increasingly recognized for their potential to address critical medical and societal challenges. Enhancing BCI technologies offers significant benefits for individuals with motor and communication disabilities, enabling improved interaction and functionality in daily life. Research on decoding handwriting from neural signals is particularly promising, providing a practical solution to assist these individuals with tasks like writing and communication. This demo focuses on real-time handwriting recognition from neural signals on portable hardware devices, using a publicly available dataset. Benchmark machine learning algorithms are employed to analyze handwritten data for 31 English characters, with the aim of predicting handwritten sentences in real-time. Lightweight machine learning models are evaluated on the Nvidia Jetson TX2 platform to ensure efficient and accurate processing of neural signal data. To prevent overfitting, data augmentation techniques such as random noise injection and time-shifting were applied, improving the model’s robustness. During real-time inference, the model recorded an average word error rate (WER) of <1% and a character error rate (CER) of <2%, highlighting its high accuracy and reliability. This demo highlights the potential of BCIs to bridge communication barriers for individuals with disabilities and showcases the viability of implementing lightweight, real-time handwriting recognition systems on portable hardware devices.
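A small sketch of the two augmentations mentioned above, random noise injection and time-shifting, applied to a window of multichannel neural features; the array shape, noise scale, and shift range are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(window: np.ndarray, noise_std: float = 0.05, max_shift: int = 10) -> np.ndarray:
    """window: (channels, timesteps) array of neural features."""
    noisy = window + rng.normal(0.0, noise_std, size=window.shape)   # random noise injection
    shift = int(rng.integers(-max_shift, max_shift + 1))
    return np.roll(noisy, shift, axis=1)                             # circular shift along time

window = rng.normal(size=(192, 200))   # e.g. 192 channels x 200 time bins (assumed shape)
augmented = augment(window)
print(augmented.shape)
```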

Platforms for Engineering Education

Presented by: Rohan Reddy Kalavakonda
Abstract: The custom-designed hardware platforms, AHA and HaHa, are innovative experimental development boards designed to facilitate teaching and learning across various aspects of electronic hardware design and hardware security. Each board supports over a dozen hands-on experiments, encompassing topics such as building intelligent IoT systems, AI-powered edge devices, and exploring hardware security attacks and countermeasures.

PLASMA: Platoon Security Against Multi-Channel Perception Adversaries

Presented by: Chengwei Duan
Abstract: A connected and autonomous vehicle (CAV) platoon constitutes a group of vehicles that coordinate their movements and operate as a single unit. The vehicle at the head sets a trajectory emulated by the other, following vehicles. A CAV platoon is vulnerable to security attacks by adversaries that tamper with the sensory or communication components of participating vehicles, resulting in the breakdown of coordination and consequently compromising the safety, stability, and efficiency of the platoon. In this paper, we develop a framework, PLASMA, to protect a multi-vehicle CAV platoon against attacks on the perception components of its constituent vehicles. A unique feature of PLASMA is its ability to detect and mitigate adversarial activities in real time, using a combination of kinematics and machine learning techniques. We perform extensive experiments to demonstrate the viability of PLASMA under diverse attack scenarios. Furthermore, a key outcome of our work is a disciplined analysis of various adversary models in CAV security and a methodology for systematic identification of mitigation techniques.

LATENT: Leveraging Automated Test Pattern Generation for Hardware Trojan Detection

Presented by: Sudipta Paria
Abstract: Due to the globalization of the semiconductor supply chain and the adoption of the zero trust model, hardware Trojan attacks pose significant security threats introduced by untrusted entities. Hardware Trojans involve malicious modifications to a design before fabrication, leading to unintended behaviors such as Denial of Service (DoS) attacks or leakage of sensitive information. Detecting these Trojans in fabricated chips is challenging due to the vast attack space. Conventional post-manufacturing Automatic Test Pattern Generation (ATPG) methods struggle with the activation (trigger) and observation (payload) of Trojans, making detection practically infeasible. While statistical testing techniques have been proposed for post-silicon Trojan detection, they suffer from limited trigger and payload coverage and scalability issues. In this demo, we show a scalable, payload-aware statistical test pattern generation framework, named LATENT, that enhances Trojan detection by leveraging ATPG solutions. We show the performance of LATENT in detecting randomly inserted virtual Trojans using optimized patterns in open-source designs and demonstrate promising results for trigger and Trojan coverage. We also present the flexibility of the LATENT framework with configurable parameters like rareness threshold, N-detect, number of Trojans, etc. Additionally, we showcase the visualization of the hypergraphs generated from test designs and show the locality of triggers and payloads of inserted Trojans.

A Neuro-Symbolic AI System for Autonomous ISR Applications

Presented by: William English
Abstract: The ability to autonomously execute challenging and complex ISR missions is a highly desired capability for the Department of Defense. An example of such a mission would be to localize entities within an environment given some symbolic spatial relationships with other entities or objects in the environment. This is a challenging problem for state-of-the-art AI models, which struggle to reason over spatio-temporal relationships. In this demo, we present a Neuro-Symbolic AI System for solving such missions with symbolic relational constraints. Our system is based on a modular approach consisting of control, maneuver, and perception components. The Control node receives input from the Perception node and interacts with the Maneuver node to complete the mission. It is responsible for managing known and unknown information about the environment and scenario. The Perception node receives input from the camera to discern environmental features and identify target entities, while the Maneuver node controls the position of the drone in response to commands issued by the Control node. To account for anomalies and fuzzy perception reports caused by inclement weather, we also implement a system of constraint relaxation, allowing for a measure of uncertainty when reasoning about possible target locations.