Andrew Mendez
My name is Andrew Mendez and I am a master's student in the Connective Media program at Cornell Tech. My research interests are in augmented reality, semantic segmentation, and Bayesian deep learning. You can view my online portfolio, my LinkedIn, and my GitHub.
Publications
2017
Mendez, Andrew; Liu, Zexi; Chien, Eric Jui-Chun; Belongie, Serge. Smart Kitchen: A Multi-Modal AR System. Technical report, December 2017. https://vision.cornell.edu/se3/wp-content/uploads/2017/12/AR-Smart-Kitchen-1.pdf

Abstract: Augmented Reality (AR) is a novel technology that will revolutionize how we work, learn, and play. AR utilizes computer vision, computer graphics, and spatial and tangible interaction to augment our perception and understanding of the environment. While academia and industry are improving computer vision and computer graphics methods to facilitate better AR use, current display technologies, such as mobile and projector-camera (pro-cam) systems, hinder its widespread adoption and usefulness due to considerable usability challenges. We propose a novel multi-modal AR system that combines both mobile and pro-cam display technologies. Combining mobile and pro-cam systems not only removes each display's limitations but also allows the two to complement each other's usability strengths, providing longer, higher-fidelity AR usage. We apply this contribution in the form of a smart kitchen application that provides users with instant cooking recommendations and intuitive instructions for preparing dishes. We evaluate our system with several participants and discuss its potential to drive widespread adoption of AR.