My research interests are in HCI, Mixed Reality, and Computer Vision. I focus on creating mixed reality interfaces that utilize computer vision systems to assist users with task completion.
LabelAR: A Spatial Guidance Interface for Fast Computer Vision Image Collection
Computer vision is applied in an ever-expanding range of applications, many of which require custom training data to perform well. We present a novel interface for rapidly collecting and labeling training images to improve computer vision object detectors. LabelAR leverages the spatial tracking capabilities of an AR-enabled camera, allowing users to place persistent bounding volumes that stay centered on real-world objects. The interface then guides the user to move the camera to cover a wide variety of viewpoints. We eliminate the need for post-hoc manual labeling of images by automatically projecting 2D bounding boxes around objects in the images as they are captured from AR-marked viewpoints. In a user study with 12 participants, LabelAR significantly outperformed existing approaches in the trade-off between model performance and collection time.
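The automatic labeling step can be sketched as follows: project the eight corners of the AR bounding volume into the image using the camera pose and intrinsics, then take the axis-aligned box enclosing the projected corners. This is a minimal illustration of the idea, not the paper's implementation; the function name and conventions here are my own assumptions.

```python
import numpy as np

def project_box(corners_world, R, t, K, img_w, img_h):
    """Project the 8 corners of a 3D bounding volume into the image and
    return the enclosing 2D box (x_min, y_min, x_max, y_max).

    corners_world: (8, 3) corner positions in world space
    R, t: camera extrinsics (world-to-camera rotation and translation)
    K: 3x3 camera intrinsic matrix
    """
    # Transform corners into the camera frame.
    corners_cam = (R @ corners_world.T).T + t
    # Perspective projection through the intrinsics, then divide by depth.
    pix = (K @ corners_cam.T).T
    pix = pix[:, :2] / pix[:, 2:3]
    # The 2D label is the axis-aligned box enclosing all projected
    # corners, clipped to the image bounds.
    x_min, y_min = np.clip(pix.min(axis=0), 0, [img_w, img_h])
    x_max, y_max = np.clip(pix.max(axis=0), 0, [img_w, img_h])
    return x_min, y_min, x_max, y_max
```

Because the bounding volume is anchored in world space by the AR tracker, the same eight corners yield a correct 2D label from every new viewpoint with no manual annotation.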
HindSight: Enhancing Spatial Awareness by Sonifying Detected Objects in Real-Time 360-Degree Video
HindSight increases the environmental awareness of cyclists by warning them of vehicles approaching from outside their visual field. A panoramic camera mounted on a bicycle helmet streams real-time, 360-degree video to a laptop running YOLOv2, a neural object detector designed for real-time use. Detected vehicles are passed through a filter bank to select the most relevant ones, which are then sonified through bone-conduction headphones, giving cyclists added margin to react.
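The filtering step might look something like the sketch below: keep confident vehicle detections outside the rider's forward field of view and rank them by apparent size as a rough proxy for proximity. The class set, thresholds, and relevance score are illustrative assumptions, not the project's actual filter bank.

```python
from dataclasses import dataclass

# YOLOv2 (trained on COCO/VOC-style data) includes vehicle classes like these.
VEHICLE_CLASSES = {"car", "truck", "bus", "motorbike"}

@dataclass
class Detection:
    label: str        # detector class label
    confidence: float # detector confidence in [0, 1]
    angle_deg: float  # bearing in the 360-degree frame, 0 = straight ahead
    area: float       # bounding-box area as a fraction of the frame

def relevant_vehicles(detections, max_warnings=2):
    """Keep confident vehicle detections approaching from outside the
    rider's forward field of view, ranked by apparent size."""
    candidates = [
        d for d in detections
        if d.label in VEHICLE_CLASSES
        and d.confidence > 0.5
        and abs(d.angle_deg) > 60  # outside a ~120-degree forward FOV
    ]
    # Larger apparent size ~ closer vehicle ~ more urgent warning.
    candidates.sort(key=lambda d: d.area, reverse=True)
    return candidates[:max_warnings]
```

Each surviving detection would then be sonified, for example by panning a warning tone toward the vehicle's bearing through the bone-conduction headphones.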
Projects completed in classes or as side projects. Some were conducted to practice the research process and include 3-5 page project reports.
Completed for EE249A - Embedded Systems
A major problem with virtual reality is the lack of feedback users receive when interacting with virtual objects. One solution is to use physical props to provide haptic feedback, but props alone still lack the visual feedback cues that may be helpful or necessary for some tasks, which can weaken the user's sense of embodiment in the virtual space. VirtuWheel provides visual feedback during virtual object interactions by placing touch sensors on the surface of a steering wheel controller and using their readings to visualize the user's interactions with the steering wheel in VR.
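The sensor-to-visual mapping can be sketched roughly as below: threshold each touch sensor's reading and highlight the corresponding segment of the virtual wheel. The sensor normalization, threshold, and rendering callback are hypothetical stand-ins, not VirtuWheel's actual API.

```python
TOUCH_THRESHOLD = 0.3  # normalized sensor reading treated as a touch

def touched_segments(sensor_readings):
    """Map raw touch-sensor readings on the physical wheel to the indices
    of virtual wheel segments that should be highlighted in VR."""
    return [i for i, value in enumerate(sensor_readings) if value > TOUCH_THRESHOLD]

def render_feedback(sensor_readings, highlight_segment):
    # highlight_segment stands in for whatever the VR renderer exposes to
    # tint a segment of the virtual wheel mesh where a hand is touching.
    for i in touched_segments(sensor_readings):
        highlight_segment(i)
```

Running this mapping every frame keeps the virtual wheel's highlights in sync with where the user's hands actually rest on the physical prop.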
I am currently looking for internship opportunities in research and software engineering. Prior to returning to school, I enjoyed a career as a gameplay programmer in the video game industry. Designing interactive experiences is still an interest of mine.
VR Software Engineer, Jacobs Institute for Design Innovation - UC Berkeley, January 2019 - May 2019
Research Assistant, Berkeley Institute of Design - UC Berkeley, January 2018 - May 2018
Undergraduate Research Assistant, Berkeley Institute of Design - UC Berkeley, June 2017 - December 2017
Undergraduate Research Assistant, Video and Image Processing Lab - UC Berkeley, November 2015 - January 2017
Math and Physics Tutor - Sierra College, January 2014 - May 2015
Gameplay Programmer - Sony Online Entertainment, June 2008 - October 2010
Gameplay Programmer - Zombie Studios, May 2005 - June 2008
My undergraduate coursework included the following upper-division classes: Signals and Systems, Convex Optimization, Operating Systems, Internet Architecture, CS Theory and Algorithms, Computational Photography, Machine Learning and Computer Vision.
I've also taken graduate coursework in HCI, AR/VR, Embedded Systems, and Data Visualization.
When I'm not doing research or programming, I like to snowboard, go bouldering, ride my motorcycle, or play D&D. I am also an amateur speedcuber.