Agrusa Competition 2023


Jaric Zola (left) stands with first place winner Liyao Li (right). 


Xi Tan (right) won second place.


Grant Iraci (pictured, right) and Cheng-En Chuang (not pictured) won third place. 

In 2023, the Agrusa Competition awarded a total of $10,000 in prize money.

This year's competitors included 11 teams made up of 24 students who were advised by 10 faculty members.  


Award Winners

  • 1st place: Cyclops: A Nanomaterial-based, Battery-Free Intraocular Pressure (IOP) Monitoring System inside Contact Lens (Liyao Li)
  • 2nd place: Sherloc: Secure and Holistic Control-Flow Violation Detection on Embedded Systems (Xi Tan)
  • 3rd place: Validating IoT and Wearable Devices with Rate-Based Session Types (Grant Iraci and Cheng-En Chuang)
  • Honorable mention: UltraFeel: A Portable and High Definition Haptic Reality Platform Using a Low-cost Ultrasonic Array (Alexander Gherardi)
  • Honorable mention: PriML: Advancing Privacy in Machine Learning with FHE (Naresh Kumar Devulapally)

Competitors

This is the complete list of student projects that competed for the prize money.

DeepfakePedia - A Media Forensics Learning Platform for Teenagers

Given the widespread adoption and rapid expansion of AI-generated content (AIGC), distinguishing authentic content from synthesized content has become a growing challenge for viewers. To combat the potential malicious use of AIGC for misinformation and disinformation, we are developing a learning platform named DeepfakePedia. The platform aims to help teenage viewers gain basic knowledge of generative AI, become aware of its potential threats to information integrity, and learn how to identify and distinguish between real and fake media. The DeepfakePedia learning platform will contain:

  • A public web-based platform that enrolls teenagers in lessons about AIGC and media forensics. 
  • A general introduction to popular AIGC generation tools and methods.
  • Basic guidance that teaches teenagers to identify different types of AIGC. 
  • Fun quizzes and games at the end to captivate viewers and test their knowledge.

Sherloc: Secure and Holistic Control-Flow Violation Detection on Embedded Systems

The SHERLOC project has developed a proof-of-concept system for secure and comprehensive control-flow violation detection (CFVD) in embedded systems. SHERLOC monitors control-flow transfers within and between privileged and unprivileged components to secure microcontroller-based systems. The prototype leverages hardware features available on microcontrollers to record control-flow transfers, and it combines an interrupt- and scheduling-aware algorithm with backward-edge and forward-edge policies to detect violations across function calls, returns, indirect jumps, and exceptions/interrupts. To evaluate SHERLOC's effectiveness, a series of experiments was conducted with various programs and systems, such as BEEBS programs, Blinky bare-metal systems, and FreeRTOS. The results show that SHERLOC is both effective and efficient in detecting control-flow violations in embedded systems.
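To make the two checks concrete, here is a minimal Python sketch of backward-edge and forward-edge violation detection; it is illustrative only and is not SHERLOC's implementation, which analyzes control-flow transfers recorded by microcontroller hardware features.

```python
# Minimal sketch of control-flow violation detection: a shadow stack checks
# backward edges (returns) and a per-site whitelist checks forward edges
# (indirect calls/jumps). All addresses below are hypothetical.

shadow_stack = []                                  # return addresses pushed at each call
allowed_targets = {                                # hypothetical forward-edge policy:
    0x8000_10A4: {0x8000_2000, 0x8000_2400},       # call site -> legal targets
}

def on_call(return_address):
    shadow_stack.append(return_address)

def on_return(actual_target):
    expected = shadow_stack.pop() if shadow_stack else None
    if actual_target != expected:
        raise RuntimeError(f"Backward-edge violation: return to {hex(actual_target)}")

def on_indirect_branch(site, target):
    if target not in allowed_targets.get(site, set()):
        raise RuntimeError(f"Forward-edge violation at {hex(site)} -> {hex(target)}")

# Example trace: a legal call/return and a legal indirect branch.
on_call(0x8000_10A8)
on_return(0x8000_10A8)                     # OK: matches the shadow stack
on_indirect_branch(0x8000_10A4, 0x8000_2000)   # OK: target is whitelisted
# on_return(0x8000_DEAD)                   # would raise a backward-edge violation
```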

FingerCraft: Pioneering Fresh Fingerprint Generation

Our project demonstrates the effectiveness of generative AI in producing lifelike fingerprints. Although GANs and conditional diffusion models, especially diffusion models, are state-of-the-art for image generation, they have seen little use in biometric data generation, which motivated this project; we continue to investigate how these models can provide finer-grained control over synthetic fingerprint characteristics. Our proof-of-concept system uses StyleGAN-generated synthetic fingerprints for training. Within this system, we implemented the SimCLR framework to train a neural network that converts fingerprints into high-dimensional embeddings, which serve as the foundation for our fingerprint authentication process. Authentication relies on cosine distance between these embeddings for robust identity verification. Our approach not only addresses biometrics challenges, including the cost and privacy concerns associated with collecting and recognizing real fingerprint data, but also explores advances in unsupervised learning and generative AI.
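A minimal sketch of the cosine-distance verification step described above, with random vectors standing in for the outputs of the SimCLR-trained embedding network (the threshold and vector size are illustrative assumptions):

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe_embedding, enrolled_embedding, threshold=0.85):
    """Accept the claimed identity if the embeddings are close enough.
    The threshold is illustrative; in practice it is tuned on a validation set."""
    return cosine_similarity(probe_embedding, enrolled_embedding) >= threshold

# Placeholder embeddings standing in for the SimCLR-trained network's outputs.
enrolled = np.random.randn(128)
probe = enrolled + 0.05 * np.random.randn(128)   # same finger, slight noise
print(verify(probe, enrolled))                    # True for small perturbations
```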

Cyclops: A Nanomaterial-based, Battery-Free Intraocular Pressure (IOP) Monitoring System inside Contact Lens

We introduce Cyclops, a low-cost, battery-free contact lens-based eye pressure sensing system that supports long-range data reading of up to 1 m. This design eliminates the need for a bulky data reader in close proximity to the user. We achieve long-range communication by integrating backscatter communication with the pressure sensor, all embedded within a compact contact lens. Compared with existing works, our system significantly improves the communication distance. Our design features a three-layer, sandwich-like antenna structure that includes two metallic antenna layers with a central sensing layer in between. The sensing layer shows significant variations in its intrinsic properties in response to pressure changes. As the sensing layer is integrated into the antenna assembly, it influences the antenna's overall impedance. Through careful optimization of the structure of the two metallic layers, we align the antenna's impedance with that of the backscatter chip, enabling long-range communication.
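The sensing principle can be summarized with the standard reflection-coefficient relation: pressure changes the sensing layer, which shifts the antenna impedance and therefore how well it matches the backscatter chip. A small numerical sketch with made-up impedance values:

```python
# Illustrative sketch only: pressure shifts the antenna impedance Z_a, which
# changes the power-wave reflection coefficient Gamma = (Z_c - Z_a*) / (Z_c + Z_a)
# between antenna and backscatter chip. All values below are hypothetical.

def reflection_coefficient(z_antenna, z_chip):
    return (z_chip - z_antenna.conjugate()) / (z_chip + z_antenna)

z_chip = complex(20, -100)                 # hypothetical chip impedance (ohms)
for pressure_mmHg, z_antenna in [(10, complex(20, 100)),
                                 (20, complex(24, 92)),
                                 (30, complex(30, 80))]:
    gamma = reflection_coefficient(z_antenna, z_chip)
    print(f"IOP {pressure_mmHg} mmHg: |Gamma| = {abs(gamma):.3f}")
# A matched antenna (|Gamma| near 0) at the baseline pressure becomes
# progressively mismatched as pressure changes the sensing layer.
```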

UBuddy - a Social Interactive Humanoid Robot

In any organization, information scattered across multiple documents and websites can be daunting for users to navigate. Our goal is to create an interactive platform that can answer users' questions from such diverse sources of information. In this project, we have built UBuddy, an interactive social university robot on the Pepper platform. The system activates on a wake word and delivers precise answers to UB-specific queries by employing a language retriever model that integrates ChatGPT and Llama 2. It provides real-time information on university events and programs, sourced directly from official university channels, ensuring accuracy and relevance. The robot's display tablet not only shows directions on a map but also answers questions about university leadership, and the robot entertains users with dances and gestures. It can also detect emotions and identify individuals through facial and audio cues, which enhances personalized engagement. We have demonstrated the system, which combines web scraping and robotics to serve the university community, at multiple forums.
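A rough sketch of the retrieve-then-generate pattern this description implies; the corpus, keyword-overlap retrieval, and prompt wording below are simplified stand-ins, and a real deployment would use an embedding-based retriever and an LLM such as ChatGPT or Llama 2:

```python
# Toy corpus standing in for passages scraped from official university pages.
university_pages = {
    "events": "The engineering career fair is held in the student union on Oct 12.",
    "leadership": "The dean of the school of engineering is listed on the leadership page.",
}

def retrieve(question, corpus):
    """Return the passage sharing the most words with the question (toy retriever)."""
    q_words = set(question.lower().split())
    return max(corpus.values(), key=lambda text: len(q_words & set(text.lower().split())))

def build_prompt(question, passage):
    return (
        "Answer the question using only the context below.\n"
        f"Context: {passage}\n"
        f"Question: {question}\n"
        "Answer:"
    )

question = "When is the career fair?"
prompt = build_prompt(question, retrieve(question, university_pages))
print(prompt)   # this prompt would then be sent to the language model
```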

UltraFeel: A Portable and High Definition Haptic Reality Platform Using a Low-cost Ultrasonic Array

UltraFeel is a portable, high-definition, and low-cost solution for integrating haptic feedback into mixed reality, or, more concisely, haptic reality for the hands. The hardware platform consists of a 16-by-16 phased array of 40 kHz ultrasonic emitters, an I/O-intensive FPGA controller, an ESP32-based networking solution, and an AR/VR device (i.e., Quest 2). The 256 ultrasonic emitters work collaboratively to generate high-resolution convergence points. An amplitude modulation rate of 200 Hz allows the ultrasonic airflow to deliver high-definition haptic feedback to users. The prototype, an 11.3 in² board, includes 544 components in total: 256 ultrasonic emitters, 160 capacitors/resistors, 128 MOSFET drivers, and 32 shift registers. The project efforts entailed:

  1. Hardware design and implementation (Altium Designer, PCB fabrication, and assembly)
  2. FPGA firmware development (Verilog programming)
  3. ESP32-based communication software (C++ programming)
  4. AR/VR system (Unity and C#)
  5. Multi-point convergence algorithm (Math and physics modeling)
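A convergence point is formed by timing each emitter so that all 256 wavefronts arrive at the target in phase. A rough sketch of that calculation, with an assumed emitter pitch (the real computation runs in the FPGA firmware):

```python
import numpy as np

SPEED_OF_SOUND = 343.0        # m/s in air at room temperature
FREQ = 40_000.0               # 40 kHz carrier
PITCH = 0.010                 # assumed 10 mm emitter spacing (illustrative)

# Emitter positions for a 16x16 grid centered on the origin, in the z = 0 plane.
coords = (np.arange(16) - 7.5) * PITCH
xs, ys = np.meshgrid(coords, coords)
emitters = np.stack([xs.ravel(), ys.ravel(), np.zeros(256)], axis=1)

def phase_delays(focal_point):
    """Per-emitter phase offsets so all 256 wavefronts arrive at the focal
    point in phase, creating a convergence (high-pressure) point."""
    distances = np.linalg.norm(emitters - np.asarray(focal_point), axis=1)
    delays = (distances.max() - distances) / SPEED_OF_SOUND   # seconds
    return (delays * FREQ * 2 * np.pi) % (2 * np.pi)          # radians

print(phase_delays([0.0, 0.02, 0.15])[:4])   # focus 15 cm above, 2 cm off-center
```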

Secure Vascular Biometric Recognition

Our project addresses privacy concerns within biometrics by implementing an end-to-end vascular feature recognition system designed with a strong emphasis on security and confidentiality. Our solution combines the power of the ResNet50 architecture with biohashing to securely capture and manage individuals' biometric data. It relies on near-infrared light scanning to acquire images, which are then processed to generate a unique digital representation of the user's vein pattern, known as a biohash.

Furthermore, our system incorporates an advanced matching algorithm that compares the user's biohash with the securely stored biohash to establish identity verification. The core model achieves an accuracy of 95.58%, and with biohashing we attain an Equal Error Rate (EER) of 0, a 100% reduction in error that underscores how effectively biohashing protects biometric data and privacy.
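In general terms, biohashing projects a feature vector onto a user-token-seeded random basis and binarizes the result, so matching compares binary codes rather than raw features. A simplified sketch of that idea, with random vectors standing in for ResNet50 features of near-infrared vein images:

```python
import numpy as np

def biohash(features, user_token, code_bits=64):
    """Project the feature vector onto a token-seeded random orthonormal basis
    and threshold to bits; without the user's token the code reveals little
    about the underlying vein features (a simplified view of biohashing)."""
    rng = np.random.default_rng(user_token)
    basis, _ = np.linalg.qr(rng.standard_normal((features.size, code_bits)))
    return (features @ basis > 0).astype(np.uint8)

def hamming_distance(code_a, code_b):
    return int(np.count_nonzero(code_a != code_b))

# Placeholder features standing in for ResNet50 embeddings of vein images.
enrolled_features = np.random.randn(512)
probe_features = enrolled_features + 0.05 * np.random.randn(512)

enrolled_code = biohash(enrolled_features, user_token=1234)
probe_code = biohash(probe_features, user_token=1234)
print(hamming_distance(enrolled_code, probe_code))   # small for genuine users
```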

The paper related to this project has been accepted and will be published on IEEE Xplore; it was recently presented at the 2023 IEEE Western New York Image and Signal Processing Workshop:

Chris Humphry, Sunil Rufus, and Nalini Ratha. “Secure Vascular Biometric Recognition”. 2023 IEEE Western New York Image and Signal Processing Workshop (WNYISPW). IEEE, 2023.

PriML: Advancing Privacy in Machine Learning with FHE

Privacy issues in current AI systems raise serious ethical and legal concerns, compromising individual rights and societal well-being as identified in the US President’s recent executive order on AI. Protecting privacy in critical applications like biometrics and medical data is crucial for responsible AI development, trust, and security.

Fully Homomorphic Encryption (FHE) enables computations directly on encrypted data, making it one of the strongest tools for privacy and security. First, we developed an end-to-end privacy-preserving Convolutional Neural Network (CNN) for detecting sleep apnea from encrypted ECG signals using FHE, achieving an accuracy of 97.6%. This work establishes a new benchmark for secure and reliable sleep apnea detection.

Next, we enhanced the privacy of face embeddings for prominent applications like face recognition through FHE and multivariate polynomial transformations, establishing an innovative dual-layer security protocol. This work pioneers advanced biometric protection using FHE, enabling encrypted face recognition and setting new standards in biometrics, privacy, and security.
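The core FHE capability, computing on data that never leaves encrypted form, can be illustrated with a generic sketch using the TenSEAL library (CKKS scheme); this is not the project's own pipeline, and the embedding values are toy numbers:

```python
import tenseal as ts

# CKKS context for approximate arithmetic on encrypted real numbers.
context = ts.context(ts.SCHEME_TYPE.CKKS,
                     poly_modulus_degree=8192,
                     coeff_mod_bit_sizes=[60, 40, 40, 60])
context.global_scale = 2 ** 40
context.generate_galois_keys()          # needed for the rotations inside dot()

# A stored face-embedding template and a freshly captured probe (toy values).
template = [0.12, -0.40, 0.33, 0.05]
probe = [0.10, -0.38, 0.35, 0.04]

enc_template = ts.ckks_vector(context, template)
enc_probe = ts.ckks_vector(context, probe)

# The similarity score is computed without ever decrypting the embeddings.
enc_score = enc_probe.dot(enc_template)
print(enc_score.decrypt()[0])           # approximately the plaintext dot product
```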

Moderating New Waves of Online Hate with Chain-of-Thought Reasoning in Large Language Models

Our project has developed a novel framework called HATEGUARD, which addresses the challenge of detecting and mitigating new waves of online hate. HATEGUARD incorporates a chain-of-thought (CoT) reasoning approach, empowering large language models (LLMs) with reasoning capabilities to determine whether new content exhibits hateful characteristics. It also includes prompt-based zero-shot classification, which streamlines updating by requiring changes only to the detection policy. Our framework takes a first step toward practically moderating new waves of online hate by harnessing the potency of LLMs.
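A rough sketch of what prompt-based zero-shot classification with chain-of-thought reasoning can look like; the policy text and prompt wording below are illustrative and are not HATEGUARD's actual prompts:

```python
# Updating the detection policy string is all that is needed to cover a new
# wave of hate, which is the property highlighted above; wording is illustrative.

def build_moderation_prompt(policy, post):
    return (
        "You are a content moderator.\n"
        f"Policy: {policy}\n"
        f"Post: \"{post}\"\n"
        "Think step by step: identify who or what the post targets, check "
        "whether the target and language fall under the policy, then answer "
        "with HATEFUL or NOT_HATEFUL and a one-sentence reason."
    )

policy = "Content attacking people based on a protected attribute is hateful."
prompt = build_moderation_prompt(policy, "Example post text goes here.")
print(prompt)   # the prompt would then be sent to a large language model
```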

Based on this project, we have a paper titled "Moderating New Waves of Online Hate with Chain-of-Thought Reasoning in Large Language Models," accepted to the 45th IEEE Symposium on Security and Privacy (S&P), one of the "Big 4" computer security conferences, with an acceptance rate of 14.9%.

EI2-AICare: Emotionally Intelligent, Interactive Assistant Prototype to Support Informal Caregivers

This project has developed an AI-powered, emotionally intelligent interactive agent, EI2-AICare, for informal caregivers, who are an essential component of the healthcare system. Caregivers often face challenges in performing caregiving tasks and are at risk of developing burnout and mental health issues. EI2-AICare leverages a Large Language Model (LLM)-powered human-robot interaction mechanism to provide interventions for caregivers.

These interventions include:

  • Answering queries regarding caregiving duties
  • Recommending potential solutions or resources to facilitate caregiving tasks
  • Suggesting reference demonstration videos that illustrate how to execute specific tasks
  • Helping caregivers get social support by connecting with emergency support services or other caregivers worldwide
  • Recommending the latest findings or trending topics related to their queries or general interests
  • Checking on caregivers' well-being and recommending that they connect with their families or friends

The interactive agent supports caregivers in executing their caregiving tasks while also attending to their personal well-being.
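As a loose illustration of how a caregiver's query might be routed to one of the intervention categories above before the LLM composes a reply (category names and keywords are hypothetical, not EI2-AICare's implementation):

```python
# Illustrative routing of a caregiver's query to an intervention category;
# a deployed agent could instead let the LLM perform this classification.

INTERVENTIONS = {
    "task_guidance": ["how do i", "bathe", "medication", "transfer", "feed"],
    "demo_videos": ["show me", "video", "demonstrate"],
    "social_support": ["alone", "overwhelmed", "talk to someone", "emergency"],
    "well_being_checkin": ["tired", "stressed", "burned out"],
}

def route(query):
    q = query.lower()
    for category, keywords in INTERVENTIONS.items():
        if any(k in q for k in keywords):
            return category
    return "general_answer"

print(route("How do I transfer my mother from the bed to a wheelchair?"))
# -> task_guidance; the agent would then generate guidance for that category.
```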