Abstract
Inspired by studies [4, 23, 40] that compared rankings obtained by search engines and human observers, in this paper we compare texture rankings derived from 51 sets of computational features against perceptual texture rankings obtained from a free-grouping experiment with 30 human observers, using a unified evaluation framework. Experimental results show that the MRSAR [37], VZNEIGHBORHOOD [62], LBPHF [2] and LBPBASIC [3] feature sets perform better than their counterparts. However, none of these feature sets is ideal: the best average G and M measures (measures of ranking accuracy from 0 to 1) [15, 5] obtained are 0.36 and 0.25, respectively. We suggest that this poor performance may be due to the small local neighbourhoods used to calculate higher-order features, which cannot capture the long-range interactions that humans have been shown to exploit [14, 16, 49, 56].
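To illustrate the kind of comparison the abstract describes, the following is a minimal sketch (not the paper's method): it ranks textures by Euclidean distance between feature vectors and scores agreement with a perceptual ranking using Spearman correlation as a stand-in for the G and M measures of [15, 5]. All names, feature values and the perceptual ranking below are hypothetical.

```python
import numpy as np
from scipy.stats import spearmanr

def rank_by_feature_distance(features, query_idx):
    """Rank all textures by Euclidean distance to the query's feature vector."""
    dists = np.linalg.norm(features - features[query_idx], axis=1)
    order = np.argsort(dists)
    return order[order != query_idx]  # exclude the query itself

def rank_agreement(computational_rank, perceptual_rank):
    """Spearman correlation between two rankings of the same textures
    (a simple stand-in for the G and M measures, not their definition)."""
    pos_c = np.argsort(computational_rank)  # position of each texture in ranking 1
    pos_p = np.argsort(perceptual_rank)     # position of each texture in ranking 2
    rho, _ = spearmanr(pos_c, pos_p)
    return rho

# Hypothetical example: 5 textures described by 4-dimensional feature vectors
features = np.random.rand(5, 4)
comp = rank_by_feature_distance(features, query_idx=0)
percept = np.array([2, 1, 3, 4])  # assumed perceptual ranking for query texture 0
print(rank_agreement(comp, percept))
```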
| Original language | English |
| --- | --- |
| Title of host publication | ICMR 2014 - Proceedings of the ACM International Conference on Multimedia Retrieval 2014 |
| Publisher | Association for Computing Machinery |
| Pages | 281-288 |
| Number of pages | 8 |
| DOIs | |
| Publication status | Published - 1 Jan 2014 |
| Event | 2014 4th ACM International Conference on Multimedia Retrieval - Glasgow, United Kingdom. Duration: 1 Apr 2014 → 4 Apr 2014 |
Conference
| Conference | 2014 4th ACM International Conference on Multimedia Retrieval |
| --- | --- |
| Abbreviated title | ICMR 2014 |
| Country/Territory | United Kingdom |
| City | Glasgow |
| Period | 1/04/14 → 4/04/14 |
Keywords
- Computational features
- Evaluation
- Perceptual texture ranking
- Texture ranking
- Texture retrieval
- Texture similarity
ASJC Scopus subject areas
- Computer Graphics and Computer-Aided Design
- Human-Computer Interaction
- Software