I am a fourth-year Ph.D. student in the Computer Science Department at Carnegie Mellon
University. My research interests lie at the intersection of mobile computing,
computer vision, and human-computer interaction. In particular, I work on
wearable cognitive assistance running on cloudlets under the guidance of
Prof. Mahadev Satyanarayanan (Satya). I aim to apply recent advances in mobile
computing and computer vision to blur the boundary between the physical and
virtual worlds, build portable and intelligent cognitive systems, and enhance
users' ability to interact with the real world. I won the Siemens FutureMakers
Challenge in 2018 for work on development frameworks for creating object
detectors with deep neural networks. More recently, I proposed my thesis,
Scaling Wearable Cognitive Assistance. You can find my proposal here.
Research Experience
Bandwidth-efficient Live Video Analytics for Drones via Edge Computing (SEC'18)
Graduate Research Assistant, Carnegie Mellon University, Advisor: Prof. Satya
Designed an edge-based architecture that enables live video analytics on small autonomous drones for search-and-rescue tasks.
Proposed and evaluated four techniques to reduce wireless bandwidth consumption when offloading computation: early discard, just-in-time learning, reachback, and context-awareness.
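To illustrate the early-discard idea, the sketch below (not the deployed pipeline) runs a cheap filter on the drone and offloads only promising frames to the cloudlet; the `cheap_score` heuristic, the threshold, and the fake frame source are hypothetical placeholders.

```python
import numpy as np

THRESHOLD = 0.6  # hypothetical confidence cutoff for forwarding a frame

def cheap_score(frame: np.ndarray) -> float:
    """Stand-in for a lightweight on-drone filter (e.g., a tiny DNN).

    Normalized edge energy serves as a toy 'interestingness' score here;
    the real system would run a small classifier instead.
    """
    gray = frame.mean(axis=2)
    grad = np.abs(np.diff(gray, axis=0)).mean() + np.abs(np.diff(gray, axis=1)).mean()
    return float(min(grad / 64.0, 1.0))

def early_discard(frames):
    """Yield only frames whose cheap score clears the threshold.

    Everything else is dropped on the drone to save uplink bandwidth;
    surviving frames would be offloaded to the cloudlet for full analytics.
    """
    for frame in frames:
        if cheap_score(frame) >= THRESHOLD:
            yield frame

if __name__ == "__main__":
    # Fake video: 100 random frames; count how many would be transmitted.
    video = (np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8) for _ in range(100))
    sent = sum(1 for _ in early_discard(video))
    print(f"forwarded {sent}/100 frames to the cloudlet")
```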
A Scalable and Privacy-Aware IoT Service for Live Video Analytics (MMSys'17 and TOMM), Best Paper Award
Graduate Research Assistant, Carnegie Mellon University, Advisor: Prof. Satya
Measured network latency to cloudlets, small-scale data centers located at the edge of the Internet, over Wi-Fi and 4G LTE networks (a round-trip-time measurement sketch follows this entry).
Measured energy consumption on mobile devices when offloading heavy computation to cloudlets.
Analyzed the system design trade-off between response latency and energy consumption in edge computing.
Maintained and debugged an in-lab cellular base station.
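The sketch below shows the kind of application-level round-trip timing such measurements rely on; it is a simplified stand-in, and the echo endpoint, payload size, and trial count are hypothetical.

```python
import socket
import statistics
import time

CLOUDLET_ADDR = ("cloudlet.example.org", 9000)  # hypothetical echo endpoint
PAYLOAD = b"x" * 1024                           # hypothetical 1 KB probe
TRIALS = 50

def measure_rtts(addr, trials=TRIALS):
    """Time application-level round trips to a TCP echo server."""
    rtts = []
    with socket.create_connection(addr, timeout=5) as sock:
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        for _ in range(trials):
            start = time.perf_counter()
            sock.sendall(PAYLOAD)
            received = 0
            while received < len(PAYLOAD):  # wait for the full echo
                chunk = sock.recv(4096)
                if not chunk:
                    raise ConnectionError("server closed the connection")
                received += len(chunk)
            rtts.append((time.perf_counter() - start) * 1000.0)  # ms
    return rtts

if __name__ == "__main__":
    samples = sorted(measure_rtts(CLOUDLET_ADDR))
    print(f"median RTT: {statistics.median(samples):.1f} ms, "
          f"p95: {samples[int(0.95 * len(samples))]:.1f} ms")
```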
TPOD: Tools for Painless Object Detection
Graduate Research Assistant, Carnegie Mellon University, Advisor: Prof. Satya
Designed a web-based system that enables users to quickly create state-of-the-art object detectors without computer vision expertise.
Automated the creation of deep-neural-network object detectors using Faster R-CNN and transfer learning (a fine-tuning sketch follows this entry).
Implemented a proactive object-labeling web interface that employs tracking for auto-annotation, reducing manual labeling effort to roughly 10%.
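The transfer-learning step referenced above can be sketched as follows. This is not the TPOD code; it assumes torchvision (>= 0.13) and its COCO-pretrained Faster R-CNN as a stand-in, replaces the box-predictor head for a single user-defined class, and fine-tunes on the small labeled set.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 2  # hypothetical: one user-defined object class + background

# Start from a detector pre-trained on COCO and swap its classification head,
# so only a small user-labeled dataset is needed for the new class.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9,
                            weight_decay=5e-4)

def train_step(images, targets):
    """One fine-tuning step.

    `images` is a list of CHW float tensors; `targets` is a list of dicts
    with 'boxes' (N x 4) and 'labels' (N), as produced by a labeling tool.
    """
    model.train()
    loss_dict = model(images, targets)  # detection losses (RPN + ROI heads)
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)

if __name__ == "__main__":
    # Tiny fake batch just to show the expected input format.
    images = [torch.rand(3, 480, 640)]
    targets = [{"boxes": torch.tensor([[100.0, 120.0, 220.0, 260.0]]),
                "labels": torch.tensor([1])}]
    print("loss:", train_step(images, targets))
```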
UbiK: Ubiquitous Keyboard for Small Mobile Devices (MobiSys'14)
Undergraduate Research Assistant, University of Wisconsin-Madison, Advisor: Prof. Xinyu Zhang
Designed an end-to-end machine learning system to automatically understand privacy-related user feedback and assign fine-grained privacy tags to users' text feedback.
Implemented and trained a recurrent neural network (RNN) using TensorFlow to capture the semantic meaning of feedback text (sketched below).
Achieved ~85% accuracy in tag assignment.
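A toy version of such an RNN text classifier is shown below; it is not the production model, and it uses tf.keras for brevity. The vocabulary size, sequence length, number of tags, hyperparameters, and the random training data are all assumptions.

```python
import numpy as np
import tensorflow as tf

VOCAB_SIZE = 20000  # hypothetical vocabulary size
MAX_LEN = 200       # hypothetical max tokens per feedback item
NUM_TAGS = 12       # hypothetical number of fine-grained privacy tags

# Embedding + LSTM encoder + softmax over privacy tags.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 128),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(NUM_TAGS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

if __name__ == "__main__":
    # Fake tokenized feedback (integer word ids) and tag labels.
    x = np.random.randint(1, VOCAB_SIZE, size=(256, MAX_LEN))
    y = np.random.randint(0, NUM_TAGS, size=(256,))
    model.fit(x, y, batch_size=32, epochs=1, validation_split=0.1)
```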
Digital Hardware Interim Engineering Intern @ Qualcomm
San Diego, California
Developed a Python framework using the OpenOffice API to automatically generate SoC clock-synthesis constraint files from clock-signal catalogs in Excel (sketched below).
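A rough sketch of that kind of generator follows; it substitutes openpyxl for the OpenOffice/UNO API, and the catalog path, column layout, and exact constraint syntax are hypothetical.

```python
from openpyxl import load_workbook  # stand-in for the OpenOffice/UNO API

CATALOG = "clock_catalog.xlsx"  # hypothetical clock-signal catalog
OUTPUT = "clocks.sdc"           # generated clock-synthesis constraints

def generate_clock_constraints(catalog_path, output_path):
    """Emit one create_clock constraint per row of the clock catalog.

    Assumes the first worksheet has a header row followed by columns:
    clock name | frequency (MHz) | source port  (hypothetical layout).
    """
    sheet = load_workbook(catalog_path, read_only=True).active
    lines = []
    for row in sheet.iter_rows(min_row=2, values_only=True):  # skip header
        name, freq_mhz, port = row[:3]
        if not name:
            continue
        period_ns = 1000.0 / float(freq_mhz)  # MHz -> clock period in ns
        lines.append(f"create_clock -name {name} -period {period_ns:.3f} "
                     f"[get_ports {port}]")
    with open(output_path, "w") as f:
        f.write("\n".join(lines) + "\n")

if __name__ == "__main__":
    generate_clock_constraints(CATALOG, OUTPUT)
```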