{"@context":"http://iiif.io/api/presentation/2/context.json","@id":"https://repo.library.stonybrook.edu/cantaloupe/iiif/2/manifest.json","@type":"sc:Manifest","label":"Volumetric Focus+Context Visualization Techniques","metadata":[{"label":"dc.description.sponsorship","value":"This work is sponsored by the Stony Brook University Graduate School in compliance with the requirements for completion of degree."},{"label":"dc.format","value":"Monograph"},{"label":"dc.format.medium","value":"Electronic Resource"},{"label":"dc.identifier.uri","value":"http://hdl.handle.net/11401/77324"},{"label":"dc.language.iso","value":"en_US"},{"label":"dc.publisher","value":"The Graduate School, Stony Brook University: Stony Brook, NY."},{"label":"dcterms.abstract","value":"This thesis introduces new techniques and applications for volumetric visualization. Focus+context visualization and interaction techniques are used to navigate and interact with objects in information spaces. They provide in-place magnification of a region of the display without losing the context representation. Focus+context visualization techniques are broadly used across application domains, such as geovirtual environments, navigation and visualization of large graphs or hierarchies, and volume rendering (e.g., for medical applications). However, accurately representing and highlighting the focus objects while preserving the important context information (e.g., shape features and area size) remains a major challenge. To overcome the limitations of traditional optical lenses and to effectively facilitate data exploration and analysis (e.g., organ segmentation and cancer detection in medical data), new focus+context methods have been proposed and used to design real-time volumetric visualization techniques for both 2D and 3D applications. 
In general, detailed views of one or multiple focus volumetric objects are combined seamlessly with abstracted or compressed views of the context within a single rendered image. To achieve the real-time display required for interactive visualization, dedicated parallel processors (GPUs) are used for computing and rendering. For this purpose, appropriate computer graphics and modeling techniques and visualization rendering pipelines must be designed and implemented. Meanwhile, effective and efficient highlighting enables users to quickly locate and easily decode relevant information; therefore, high-dimensional transfer functions are used as highlighting techniques for the visualization of various objects of interest. For the exploration and navigation of volumetric data, this thesis comprises three components. The first focuses on enhancement methods: two high-dimensional transfer function systems are proposed to accurately segment regions of interest (ROIs) in 3D medical data and to provide an enhanced visualization display that allows the user to easily perceive the focus data. The second introduces focus+context visualization techniques: two frameworks based on geometric theories generate focus+context visualization styles with angle preservation or area preservation. The conformal magnifier, a novel geometric-model-based lens design framework, serves as the focus+context visualization for various medical applications, providing a smooth transition between focus and context regions and optimized local shape preservation everywhere. 
Meanwhile, the area-preservation visualization is obtained using a novel area-preserving mapping method based on the Monge-Brenier theory of optimal mass transport, which is theoretically rigorous, computationally efficient and parallelizable, and general for various applications."},{"label":"dcterms.available","value":"2017-09-20T16:52:31Z"},{"label":"dcterms.contributor","value":"Tannenbaum, Allen."},{"label":"dcterms.creator","value":"Zhao, Xin"},{"label":"dcterms.dateAccepted","value":"2017-09-20T16:52:31Z"},{"label":"dcterms.dateSubmitted","value":"2017-09-20T16:52:31Z"},{"label":"dcterms.description","value":"Department of Computer Science."},{"label":"dcterms.extent","value":"96 pg."},{"label":"dcterms.format","value":"Application/PDF"},{"label":"dcterms.identifier","value":"http://hdl.handle.net/11401/77324"},{"label":"dcterms.issued","value":"2013-12-01"},{"label":"dcterms.language","value":"en_US"},{"label":"dcterms.provenance","value":"Made available in DSpace on 2017-09-20T16:52:31Z (GMT). No. of bitstreams: 1\nZhao_grad.sunysb_0771M_11643.pdf: 45841172 bytes, checksum: c46d8e67e7760e2ba91e53217c83178d (MD5)\n Previous issue date: 1"},{"label":"dcterms.publisher","value":"The Graduate School, Stony Brook University: Stony Brook, NY."},{"label":"dcterms.subject","value":"Focus+Context Visualization, Visualization applications"},{"label":"dcterms.title","value":"Volumetric Focus+Context Visualization Techniques"},{"label":"dcterms.type","value":"Thesis"},{"label":"dc.type","value":"Thesis"}],"description":"This manifest was generated dynamically","viewingDirection":"left-to-right","sequences":[{"@type":"sc:Sequence","canvases":[{"@id":"https://repo.library.stonybrook.edu/cantaloupe/iiif/2/canvas/page-1.json","@type":"sc:Canvas","label":"Page 1","height":1650,"width":1275,"images":[{"@type":"oa:Annotation","motivation":"sc:painting","resource":{"@id":"https://repo.library.stonybrook.edu/cantaloupe/iiif/2/78%2F17%2F34%2F78173402027832325497809453736567581609/full/full/0/default.jpg","@type":"dctypes:Image","format":"image/jpeg","height":1650,"width":1275,"service":{"@context":"http://iiif.io/api/image/2/context.json","@id":"https://repo.library.stonybrook.edu/cantaloupe/iiif/2/78%2F17%2F34%2F78173402027832325497809453736567581609","profile":"http://iiif.io/api/image/2/level2.json"}},"on":"https://repo.library.stonybrook.edu/cantaloupe/iiif/2/canvas/page-1.json"}]}]}]}