{"@context":"http://iiif.io/api/presentation/2/context.json","@id":"https://repo.library.stonybrook.edu/cantaloupe/iiif/2/manifest.json","@type":"sc:Manifest","label":"Modeling guidance and recognition in categorical search: bridging human and computer object detection","metadata":[{"label":"dc.description.sponsorship","value":"This work is sponsored by the Stony Brook University Graduate School in compliance with the requirements for completion of degree."},{"label":"dc.format","value":"Monograph"},{"label":"dc.format.medium","value":"Electronic Resource"},{"label":"dc.identifier.uri","value":"http://hdl.handle.net/1951/59822"},{"label":"dc.language.iso","value":"en_US"},{"label":"dc.publisher","value":"The Graduate School, Stony Brook University: Stony Brook, NY."},{"label":"dcterms.abstract","value":"Although various object detection methods have been widely studied, state-of-the-art performance of object detectors still lag far behind human performance. Humans can perform object detection tasks on various object categories hundreds of times a day in an effortless manner. The main effort in computer vision community is aiming at improving the performance of object detectors, while on the other side only little research has been done on understanding how humans perform in the object detection process. In this thesis, we analyze the relationship between human behaviors and object detection methods in computer vision on both guidance and recognition task. In our experiment, human observers searched for a categorically-defined teddy bear or butterfly target among non-targets rated as having HIGH, MEDIUM or LOW visual similarity to target classes. Actual targets show very strong search guidance, measured by the first fixated objects. Also guidance to non-targets objects are in proportion to their visual similarity to the target; high-similarity objects were first fixated the most and low-similarity objects the least. We design several computational experiments: First, we propose a computational model that uses C2 features and SVMs in the context of Target Acquisition Model (TAM), to model human behavior in an object detection task. Eye movement behavior of our computation model matched human behavior almost perfectly, showing strong guidance to targets and same pattern of first fixation on target-similar objects. We conclude that categorical search is guided, and that driving this guidance are visual similarity relationships that can be quantified in terms of distance from a SVM classification boundary. Second, we train and evaluate computational vision models for object category recognition and compare their output to the human behavior. Some algorithms do well at predicting which object humans will fixate first, but there are differences between which features perform best for classification and which predict human behavior most closely. This is a critical question for developing visual search algorithms that produce perceptually meaningful results. In additional, we demonstrate that the information available in the fixation behavior of subjects is often sufficient to decode the category of their search target--essentially reading a person's mind by analyzing what they look at using a technique that we refer to as behavioral decoding. Our results show we can predict an observer's search target based on their fixation pattern using two SVM-based classifiers, especially when one of the distractors were rated as being visually similar to the target category. 
These findings have implications for the visual similarity relationships underlying search guidance and distractor rejection, and demonstrate the feasibility in using these relationships to decode a person's task or goal."},{"label":"dcterms.available","value":"2013-05-22T17:35:24Z"},{"label":"dcterms.contributor","value":"Berg, Tamara."},{"label":"dcterms.creator","value":"Peng, Yifan"},{"label":"dcterms.dateAccepted","value":"2015-04-24T14:47:14Z"},{"label":"dcterms.dateSubmitted","value":"2015-04-24T14:47:14Z"},{"label":"dcterms.description","value":"Department of Computer Science"},{"label":"dcterms.extent","value":"66 pg."},{"label":"dcterms.format","value":"Application/PDF"},{"label":"dcterms.identifier","value":"http://hdl.handle.net/1951/59822"},{"label":"dcterms.issued","value":"2012-12-01"},{"label":"dcterms.language","value":"en_US"},{"label":"dcterms.provenance","value":"Made available in DSpace on 2013-05-22T17:35:24Z (GMT). No. of bitstreams: 1\nPeng_grad.sunysb_0771M_11187.pdf: 2401760 bytes, checksum: bfe00cadb5299d306259edb3af06949f (MD5)\n Previous issue date: 1"},{"label":"dcterms.publisher","value":"The Graduate School, Stony Brook University: Stony Brook, NY."},{"label":"dcterms.subject","value":"Computer science--Cognitive psychology"},{"label":"dcterms.title","value":"Modeling guidance and recognition in categorical search: bridging human and computer object detection"},{"label":"dcterms.type","value":"Thesis"},{"label":"dc.type","value":"Thesis"}],"description":"This manifest was generated dynamically","viewingDirection":"left-to-right","sequences":[{"@type":"sc:Sequence","canvases":[{"@id":"https://repo.library.stonybrook.edu/cantaloupe/iiif/2/canvas/page-1.json","@type":"sc:Canvas","label":"Page 1","height":1650,"width":1275,"images":[{"@type":"oa:Annotation","motivation":"sc:painting","resource":{"@id":"https://repo.library.stonybrook.edu/cantaloupe/iiif/2/11%2F80%2F04%2F118004044555735319174125471895287859774/full/full/0/default.jpg","@type":"dctypes:Image","format":"image/jpeg","height":1650,"width":1275,"service":{"@context":"http://iiif.io/api/image/2/context.json","@id":"https://repo.library.stonybrook.edu/cantaloupe/iiif/2/11%2F80%2F04%2F118004044555735319174125471895287859774","profile":"http://iiif.io/api/image/2/level2.json"}},"on":"https://repo.library.stonybrook.edu/cantaloupe/iiif/2/canvas/page-1.json"}]}]}]}