Concept Embeddings with SNaCK
This paper presents our work on “SNaCK,” a low-dimensional concept embedding algorithm that combines human expertise with automatic machine similarity kernels. The two parts are complementary: human insight can capture relationships that are not apparent from visual similarity alone, and the machine can relieve the human from having to exhaustively specify many constraints.
As input, our SNaCK algorithm takes two sources:
- Several “relative similarity comparisons.” Each triplet has the form \((a,b,c)\), meaning that in the low-dimensional embedding \(Y\), point \(Y_a\) should be closer to \(Y_b\) than it is to \(Y_c\). Experts can collect many of these constraints via crowdsourcing.
- Feature vector representations of each point. For instance, such features could come from HOG, SIFT, a deep convolutional network, word embeddings, and so on.
SNaCK then generates an embedding that respects both sources of information.
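To make the triplet semantics concrete, here is a minimal sketch (plain Python, with a toy hand-made embedding — the data and function names are illustrative, not part of the SNaCK API) that counts how many relative similarity comparisons a given embedding \(Y\) satisfies:

```python
import math

def dist(p, q):
    """Euclidean distance between two embedded points."""
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

def satisfied(Y, triplets):
    """Count triplets (a, b, c) for which Y[a] lies closer to Y[b] than to Y[c]."""
    return sum(1 for a, b, c in triplets if dist(Y[a], Y[b]) < dist(Y[a], Y[c]))

# A toy 2-D embedding of four points (hypothetical data).
Y = [(0.0, 0.0), (1.0, 0.0), (5.0, 5.0), (0.5, 0.5)]

# Triplet (0, 1, 2) asserts: point 0 should be closer to point 1 than to point 2.
triplets = [(0, 1, 2), (0, 3, 2), (2, 0, 1)]

print(satisfied(Y, triplets))  # 2 of the 3 triplets hold in this toy embedding
```

An embedding algorithm like SNaCK effectively tries to maximize this satisfaction count (via a soft, differentiable loss) while also keeping points with similar feature vectors close together.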
Published at the International Conference on Computer Vision (ICCV), 2015.
A Python implementation of SNaCK is freely available. View the SNaCK code and documentation on GitHub. If you are using Anaconda on Linux or Mac OS X, SNaCK is easy to install. Run: $ conda install snack
Otherwise, please follow the installation instructions in the README file on GitHub.
Download the Food-10k dataset here: Food-10k.tar.xz. This dataset includes 10,000 Yummly food IDs and 958,479 triplet constraints collected via crowdsourcing.
The CU-Birds 200 dataset can be found at the Caltech-UCSD Birds 200-2011 webpage. In our experiments, we used the “Birdlets” subset, consisting of the following 14 classes: