Multimodal Concept-dependent Active Learning for Image Retrieval



Contributed by Morgan Fritz on 03 Apr 2014

King-Shy Goh, Edward Y. Chang, and Wei-Cheng Lai. 2004. Multimodal concept-dependent active learning for image retrieval. In Proceedings of the 12th Annual ACM International Conference on Multimedia (MULTIMEDIA '04). ACM, New York, NY, USA, 564-571. DOI=10.1145/1027527.1027664 http://doi.acm.org/10.1145/1027527.1027664

It has been established that active learning is effective for learning complex, subjective query concepts for image retrieval. However, active learning has been applied in a concept-independent way (i.e., the kernel parameters and the sampling strategy are chosen identically) for learning query concepts of differing complexity. In this work, we first characterize a concept's complexity using three measures: hit-rate, isolation, and diversity. We then propose a multimodal learning approach that uses images' semantic labels to guide a concept-dependent, active-learning process. Based on the complexity of a concept, we make intelligent adjustments to the sampling strategy and the sampling pool from which images are to be selected and labeled, to improve concept learnability. Our empirical study on a 300K-image dataset shows that concept-dependent learning is highly effective for image-retrieval accuracy.
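The core idea in the abstract, picking a sampling strategy per concept rather than globally, can be illustrated with a minimal sketch. The thresholds, measure scales, and strategy names below are illustrative assumptions for exposition, not parameters taken from the paper:

```python
def choose_sampling_strategy(hit_rate, isolation, diversity):
    """Pick an active-learning sampling strategy from three
    concept-complexity measures, each assumed to lie in [0, 1].

    All cutoffs and strategy names here are hypothetical; the paper
    tunes its adjustments empirically rather than via fixed thresholds.
    """
    if hit_rate < 0.01:
        # Rare concept: few positives in the pool, so seed the pool
        # using images' semantic labels before active learning.
        return "seed-pool-from-semantic-labels"
    if diversity > 0.7:
        # Concept spans many visual appearances: sample across clusters
        # so each sub-appearance gets labeled examples.
        return "cluster-spanning-sampling"
    if isolation < 0.3:
        # Concept poorly separated from neighbors: focus labeling effort
        # near the decision boundary.
        return "margin-based-sampling"
    # Simple, well-isolated concept: default uncertainty sampling.
    return "standard-active-sampling"
```

For example, a rare concept such as a specific landmark (very low hit-rate) would trigger pool seeding from semantic labels, while a broad concept like "animals" (high diversity) would trigger cluster-spanning sampling.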


Read more at http://dl.acm.org/citation.cfm?id=1027664
