Data Visualization

Author Alexandru C. Telea

Publisher Taylor & Francis

Format ePub

Print ISBN 9781466585263

Edition 2

Publication year 2015

8.290 kr.

Description

Table of Contents

  • Front Matter
  • Dedication
  • Preface to Second Edition
  • Chapter 1 Introduction
  • 1.1 How Visualization Works
  • Figure 1.1. Types of questions targeted by the visualization process.
  • Visualization and insight.
  • Concrete questions:
  • Quantitative vs. qualitative questions.
  • Exact vs. fuzzy questions.
  • Discover the unknown:
  • Examples.
  • Figure 1.2. Visualization examples targeting different types of questions.
  • Subfields of data visualization.
  • Scientific visualization:
  • Information visualization:
  • Visual analytics:
  • Figure 1.3. Conceptual view of the visualization process.
  • Interactive exploration.
  • 1.2 Positioning in the Field
  • Interactive Data Visualization: Foundations, Techniques, and Applications.
  • The Visualization Toolkit.
  • The Visualization Handbook.
  • Information Visualization Literature.
  • 1.3 Book Structure
  • Chapter 2.
  • Chapter 3.
  • Chapter 4.
  • Chapters 5–7.
  • Chapter 8.
  • Chapter 9.
  • Chapter 10.
  • Chapter 11.
  • Chapter 12.
  • Appendix.
  • 1.4 Notation
  • 1.5 Online Material
  • Acknowledgments
  • Chapter 2 From Graphics to Visualization
  • 2.1 A Simple Example
  • Figure 2.1. Elevation plot for the function, drawn using 30 × 30 sample points.
  • Listing 2.1. Drawing a height plot.
  • Listing 2.2. Drawing a height plot using a sampled dataset.
  • Figure 2.2. Elevation plot for the function. A coarse grid of 10 × 10 samples is used.
  • Figure 2.3. Elevation plot for the function rendered using (a) a uniform grid of 100 × 100 samples and (b) an adaptively sampled grid.
  • 2.2 Graphics-Rendering Basics
  • Rendering equation.
  • Figure 2.4. The Phong local lighting model.
  • 2.3 Rendering the Height Plot
  • Flat shading.
  • Listing 2.3. Drawing a quadrilateral using OpenGL.
  • Smooth shading.
  • Figure 2.5. Elevation plot for the function (Gouraud shaded).
  • Listing 2.4. Drawing a Gouraud-shaded height plot.
  • Computing vertex normals.
  • 2.4 Texture Mapping
  • Figure 2.6. Texture mapping. (a) Texture image. (b) Texture-mapped object.
  • Listing 2.5. Defining a 2D texture.
  • Listing 2.6. Drawing a texture-mapped height plot.
  • 2.5 Transparency and Blending
  • Figure 2.7. The height plot in (b) is drawn on top of the current screen contents in (a) with additive blending to obtain the half-transparent plot result in (c). A black background is used.
  • Figure 2.8. The height plot drawn half-transparently on top of the domain grid using a white background.
  • 2.6 Viewing
  • Virtual camera.
  • Figure 2.9. Extrinsic parameters of the OpenGL camera.
  • Projection.
  • Figure 2.10. Intrinsic parameters of the OpenGL camera.
  • Figure 2.11. OpenGL viewport transform from view area on the view plane to a screen area.
  • Viewport.
  • 2.7 Putting It All Together
  • Initialization.
  • Listing 2.7. OpenGL application structure.
  • Viewing.
  • Drawing.
  • Improvements.
  • 2.8 Conclusion
  • Figure 2.12. Visualization process steps for the elevation plot.
  • Chapter 3 Data Representation
  • 3.1 Continuous Data
  • 3.1.1 What Is Continuous Data?
  • 3.1.2 Mathematical Continuity
  • Figure 3.1. Function continuity. (a) Discontinuous function. (b) First-order C0 continuous function. (c) High-order Ck continuous function.
  • 3.1.3 Dimensions: Geometry, Topology, and Attributes
  • 3.2 Sampled Data
  • Interpolation.
  • Grids and cells.
  • Figure 3.2. Gaussian function reconstructed with constant basis functions.
  • Figure 3.3. Basis functions, interpolation, and coordinate transformations for the quad cell.
  • Putting it all together.
  • Figure 3.4. Overview of sampling and reconstruction.
  • Table 3.1. Combinations of geometry and lighting interpolation types.
  • 3.3 Discrete Datasets
  • 3.4 Cell Types
  • 3.4.1 Vertex
  • Figure 3.5. Cell types in world (red) and reference (green) coordinate systems.
  • 3.4.2 Line
  • 3.4.3 Triangle
  • Figure 3.6. Coordinate transformation Ttri explained using barycentric coordinates.
  • 3.4.4 Quad
  • 3.4.5 Tetrahedron
  • 3.4.6 Hexahedron
  • 3.4.7 Other Cell Types
  • Figure 3.7. Converting quadratic cells to linear cells.
  • 3.5 Grid Types
  • 3.5.1 Uniform Grids
  • Listing 3.1. Computing vertex indices from a cell index.
  • Figure 3.8. Uniform grids. 2D rectangular domain (left) and 3D box domain (right).
  • 3.5.2 Rectilinear Grids
  • Figure 3.9. Rectilinear grids. 2D rectangular domain (left) and 3D box domain (right).
  • 3.5.3 Structured Grids
  • Figure 3.10. Structured grids. Circular domain (left), curved surface (middle), and 3D volume (right). Structured grid edges and corners are drawn in red and green, respectively.
  • 3.5.4 Unstructured Grids
  • Figure 3.11. A domain consisting of a square with a hole in the middle cannot be represented by a structured grid. The domain border, consisting of two separate components, is drawn in red. Unstructured grids can easily model such shapes, whether using (a) a combination of several cell types, such as quads and triangles, or (b) a single cell type, such as, in this case, triangles.
  • Figure 3.12. Unstructured grids. Circle (left), head slice (middle), and 3D bunny surface (right).
  • 3.6 Attributes
  • 3.6.1 Scalar Attributes
  • 3.6.2 Vector Attributes
  • 3.6.3 Color Attributes
  • RGB space.
  • Figure 3.13. Color-space representations. (a) RGB cube. (b) RGB hexagon. (c) HSV color wheel. (d) HSV color widget (Windows).
  • HSV space.
  • Figure 3.14. Surface in the RGB cube representing colors with the constant value V = 0.5.
  • Converting between RGB and HSV.
  • Listing 3.2. Mapping colors from RGB to the HSV space.
  • Listing 3.3. Mapping colors from HSV to the RGB space.
  • Color perception.
  • 3.6.4 Tensor Attributes
  • Curvature as a tensor.
  • Figure 3.15. Curvature of two-dimensional surfaces.
  • Tensors, vectors, and scalars.
  • Figure 3.16. Scalar value, gradient vector, and curvature tensor for a function f(x, y).
  • 3.6.5 Non-Numerical Attributes
  • Table 3.2. Scalars, gradient vectors, and tensors for a function f(x, y).
  • 3.6.6 Properties of Attribute Data
  • Completeness.
  • Multivariate data.
  • Node vs. cell attributes.
  • High-variate interpolation.
  • Normals:
  • Vectors:
  • Colors:
  • Tensors:
  • 3.7 Computing Derivatives of Sampled Data
  • 3.8 Implementation
  • 3.8.1 Grid Implementation
  • Listing 3.4. Grid implementation.
  • Uniform grids.
  • Rectilinear grids.
  • Listing 3.5. Uniform grid implementation.
  • Listing 3.6. Implementing findCell for uniform grids.
  • Listing 3.7. Rectilinear grid implementation.
  • Structured grids.
  • Listing 3.8. Structured grid implementation.
  • Figure 3.17. Spatial subdivision of a 2D point cloud using (a) kd-trees and (b) bd-trees.
  • Unstructured grids.
  • Listing 3.9. Unstructured grid implementation.
  • 3.8.2 Attribute Data Implementation
  • Listing 3.10. Implementing nearest neighbor interpolation for scalar data.
  • Scalar attributes.
  • Vector attributes.
  • Listing 3.11. Implementing linear interpolation for scalar data.
  • Listing 3.12. Implementing nearest neighbor interpolation for vector data.
  • Listing 3.13. Implementing linear interpolation for vector attributes.
  • Storing several attribute instances.
  • 3.9 Advanced Data Representation
  • 3.9.1 Data Resampling
  • Figure 3.18. Converting cell to vertex attributes. The vertex value equals the area-weighted average of the cell values using vertex i.
  • Cell to vertex resampling.
  • Vertex to cell resampling.
  • Subsampling and supersampling.
  • 3.9.2 Scattered Point Interpolation
  • Constructing a grid from scattered points.
  • Gridless interpolation.
  • Performance issues.
  • Shepard interpolation.
  • Figure 3.19. Shepard interpolation of a scalar signal from a 2D point cloud.
  • 3.10 Conclusion
  • Chapter 4 The Visualization Pipeline
  • 4.1 Conceptual Perspective
  • Figure 4.1. Functional view on the visualization pipeline.
  • Figure 4.2. The visualization process seen as a composition of functions.
  • 4.1.1 Importing Data
  • 4.1.2 Data Filtering and Enrichment
  • See what is relevant.
  • Handle large data.
  • Ease of use.
  • 4.1.3 Mapping Data
  • Mapping vs. rendering.
  • Figure 4.3. The direct and inverse mapping in the visualization process.
  • Desirable mapping properties.
  • Inverting the mapping.
  • Figure 4.4. Inverse mapping for a weather map visualization.
  • Distance preservation.
  • Organization levels.
  • Table 4.1. Organization levels of visual variables.
  • Further reading.
  • 4.1.4 Rendering Data
  • 4.2 Implementation Perspective
  • Dataflow design.
  • Listing 4.1. Visualization operation implementation.
  • Figure 4.5. Visualization pipeline represented as a network of objects.
  • Dataflow implementation.
  • Visual dataflow programming.
  • Figure 4.6. The height-plot application in the VISSION application builder [Telea and van Wijk 99].
  • Figure 4.7. The height-plot application in the MeVisLab application builder [MeVis Inc. 13].
  • Figure 4.8. A visualization application in the AVS application builder [AVS, Inc. 06].
  • Simplified visual programming.
  • Figure 4.9. The height-plot application in the ParaView application builder [Henderson 04].
  • 4.3 Algorithm Classification
  • 4.4 Conclusion
  • Chapter 5 Scalar Visualization
  • 5.1 Color Mapping
  • 5.2 Designing Effective Colormaps
  • Color legends.
  • Figure 5.1. Construction of rainbow colormap.
  • Rainbow colormap.
  • Listing 5.1. Rainbow colormap construction.
  • Other colormap designs.
  • Grayscale:
  • Figure 5.2. Scalar visualization with various colormaps.
  • Two-hue:
  • Heat map:
  • Diverging:
  • Zebra colormap:
  • Figure 5.3. Visualizing the scalar function with (a) a luminance colormap and (b) a zebra colormap. The luminance colormap shows absolute values, whereas the zebra colormap emphasizes rapid value variations.
  • Interpolation issues.
  • Figure 5.4. Vertex-based color mapping. The sphere geometry is sampled with (a) 64 × 64 points, (b) 32 × 32 points, and (c) 16 × 16 points.
  • Figure 5.5. Texture-based color mapping. The sphere geometry is sampled with (a) 64 × 64 points, (b) 32 × 32 points, and (c) 16 × 16 points.
  • Color banding.
  • Figure 5.6. Color banding caused by a small number of colors in a look-up table.
  • Additional issues.
  • 5.3 Contouring
  • Figure 5.7. Relationship between color banding and contouring.
  • Contour properties.
  • Figure 5.8. Isoline properties.
  • Figure 5.9. The gradient of a scalar field is perpendicular to the field’s contours.
  • Computing contours.
  • Figure 5.10. Constructing the isoline for the scalar value v = 0.48. The numbers in the figure indicate scalar values at the grid vertices.
  • Figure 5.11. Contour ambiguity for a quad cell (drawn in red). The isovalue is equal to 0.37. Numbers in the figures indicate the scalar values at the cell vertices.
  • 5.3.1 Marching Squares
  • Figure 5.12. Topological states of a quad cell (marching squares algorithm). Red indicates “inside” vertices. Bold indices mark ambiguous cases.
  • Listing 5.2. Marching squares pseudocode.
  • Figure 5.13. Topological states of a hex cell (marching cubes algorithm). “Inside” vertices are marked in red. Bold indices mark ambiguous cases.
  • 5.3.2 Marching Cubes
  • Figure 5.14. Ambiguous cases for marching cubes. Each case has two contouring variants.
  • Figure 5.15. Ringing artifacts on isosurface. (a) Overview. (b) Detail mesh.
  • Figure 5.16. Two nested isosurfaces of a tooth scan dataset.
  • Figure 5.17. Relationships between isosurfaces, isolines, and slicing.
  • Marching algorithm variations.
  • Dividing cubes algorithm.
  • 5.4 Height Plots
  • Figure 5.18. (a) Non-planar surface. (b) Height plot over this surface.
  • Figure 5.19. (a) Grayscale color mapping of scalar dataset. (b) Height plot of the same dataset, emphasizing fine-grained data variations.
  • 5.4.1 Enridged Plots
  • Figure 5.20. (a) Height plot. (b–d) Enridged height-plot variations for the same dataset.
  • Figure 5.21. Average rainfall and temperature over Europe for January (a) and July (b), visualized using enridged plots.
  • 5.5 Conclusion
  • Chapter 6 Vector Visualization
  • 6.1 Divergence and Vorticity
  • Figure 6.1. Divergence and curl in 2D. (a) Divergence construction. (b) Source point. (c) Sink point. (d) Rotor construction. (e) High-vorticity field.
  • Vorticity.
  • Figure 6.2. (a) Divergence of a 2D vector field. (b) Absolute value of vorticity of a 2D vector field.
  • Streamwise vorticity.
  • Helicity.
  • Figure 6.3. Vorticity of a 2D fluid flow field. Note the alternation between vortices with opposite spinning directions.
  • 6.2 Vector Glyphs
  • Line glyphs.
  • Figure 6.4. Hedgehog visualization of a 2D magnetohydrodynamic velocity field.
  • Cone and arrow glyphs.
  • Figure 6.5. Different glyph types. (a) Cones. (b) Arrows.
  • 6.2.1 Vector Glyph Discussion
  • Vector glyphs in 2D.
  • Figure 6.6. Visual interpolation of vector glyphs. (a) Small data variations are easily interpolated. (b) Large data variations create more problems.
  • Figure 6.7. (a) Vector glyphs on a dataset regularly subsampled on a rotated sample grid. (b) Subsampling artifacts are alleviated by random sampling. Both visualizations display 1200 glyphs.
  • Figure 6.8. Glyph-based visualization of a 3D vector field.
  • Vector glyphs in 3D.
  • Vector glyphs on 3D surfaces.
  • Figure 6.9. Glyph-based vector visualization on a 3D velocity isosurface.
  • 6.3 Vector Color Coding
  • Figure 6.10. Vector color coding. (a) Orientation and magnitude. (b) Orientation only.
  • Color coding on 2D surfaces.
  • Color coding on 3D surfaces.
  • Figure 6.11. Color coding the tangency of a vector field to a given surface. The angle between the vector and surface normal is encoded via a rainbow colormap.
  • 6.4 Displacement Plots
  • Figure 6.12. Displacement plots of planar surfaces in a 3D vector field.
  • Figure 6.13. Displacement plots constructed using a box and a spherical surface.
  • 6.5 Stream Objects
  • 6.5.1 Streamlines and Their Variations
  • Streamlines.
  • Pathlines.
  • Streaklines.
  • Figure 6.14. Streamlines in a 2D flow field. The small gray balls indicate the seed points.
  • Listing 6.1. Streamline tracing.
  • Parameter setting.
  • Accuracy:
  • Stop criterion:
  • Geometry:
  • Streamline seeding.
  • Figure 6.15. Dense 2D streamline seeding using the farthest-seedpoint method.
  • Figure 6.16. Dense 3D streamline seeding on 3D surfaces.
  • 6.5.2 Stream Tubes
  • Figure 6.17. Stream tubes with arrow heads. The construction can ensure that either (a) the seed points or (b) the arrow heads are arranged on a regular grid.
  • Figure 6.18. Stream tubes with radius and luminance modulated by normalized tube length.
  • 6.5.3 Streamlines and Tubes in 3D Datasets
  • Figure 6.19. Streamlines in a 3D flow dataset.
  • Figure 6.20. Stream tubes traced from a seed area placed at the flow inlet.
  • 6.5.4 Stream Ribbons
  • Figure 6.21. Stream ribbons in a 3D flow dataset. (a) Two thick ribbons. (b) 20 thin ribbons.
  • 6.5.5 Stream Surfaces
  • Figure 6.22. Stream tubes in a 3D vector field (a). Corresponding stream surface (b) and zoomed-in surface detail showing tears in the stream surface (c).
  • 6.5.6 Streak Surfaces
  • Figure 6.23. Streak surface in a 3D vector field describing fluid flow around an obstacle.
  • 6.6 Texture-Based Vector Visualization
  • Line integral convolution.
  • Figure 6.24. Line integral convolution principle. The color T(p) of pixel p is given by integrating the colors of a noise texture N along the streamline S(p, s) passing through p. The red color intensity along S shows the magnitude of the weight function k(s).
  • Figure 6.25. Line integral convolution visualization. (a) Input noise. (b) The resulting LIC texture.
  • 6.6.1 IBFV Method
  • Figure 6.26. Principle of image-based flow visualization (IBFV).
  • Figure 6.27. Time-dependent noise signal design. The red curves show the periodic function f shifted by random phase values. A vertical cross section of the grayscale stripes gives the noise texture N(x, t) at a given moment t.
  • 6.6.2 IBFV Implementation
  • Parameters.
  • Putting it all together.
  • Listing 6.2. IBFV implementation in OpenGL.
  • 6.6.3 IBFV Examples
  • Figure 6.28. (a) Texture-based visualization with color-coded velocity magnitude. (b) Texture-based visualization with luminance-coded velocity magnitude and three ink sources.
  • IBFV on curved surfaces.
  • Figure 6.29. Image-based flow visualization for (a) 3D surface and (b) volumetric datasets.
  • IBFV on 3D volumes.
  • 6.7 Simplified Representation of Vector Fields
  • 6.7.1 Vector Field Topology
  • Figure 6.30. Image-based visualization of vector field topology.
  • Topology analysis.
  • Figure 6.31. Classification of critical points for a 2D vector field.
  • Interpolation issues:
  • Excluding critical points:
  • Boundaries:
  • 6.7.2 Feature Detection Methods
  • 6.7.3 Field Decomposition Methods
  • Top-down decomposition.
  • Bottom-up decomposition.
  • Listing 6.3. Bottom-up greedy clustering of vector data.
  • Figure 6.32. Simplified vector field visualization via bottom-up clustering of (a) a 2D field and (b) a 3D field.
  • Multiscale decomposition.
  • Figure 6.33. AMG flow-field decomposition. (a) Basis functions. (b) Regions and streamline-based visualization.
  • Figure 6.34. Vector field decomposition using the AMG technique. Three decomposition levels are visualized.
  • Figure 6.35. AMG-based simplified visualization of wind vector field on the surface of the Earth.
  • Multiscale IBFV.
  • Figure 6.36. (a) IBFV visualization and (b, c, d) multiscale IBFV visualization on three different scales of the same field.
  • 6.8 Illustrative Vector Field Rendering
  • Depth-dependent halos
  • Figure 6.37. Depth-dependent halos for the visualization of 3D vector fields.
  • 6.9 Conclusion
  • Chapter 7 Tensor Visualization
  • 7.1 Principal Component Analysis
  • Figure 7.1. Principal directions of curvature for a surface.
  • Figure 7.2. Principal directions of the curvature tensor for various shapes. Cross sections tangent to the eigenvectors are colored to denote the eigenvalue type. Red denotes the major eigenvector direction; yellow denotes the minor eigenvector direction. Blue denotes cases when the eigenvector directions are arbitrary, as eigenvalues are equal.
  • 7.2 Visualizing Components
  • 7.3 Visualizing Scalar PCA Information
  • Figure 7.3. Visualization of the nine scalar components hij of a 3 × 3 diffusion tensor from a DT-MRI scan.
  • Diffusivity.
  • Figure 7.4. Visualization of the mean diffusivity over sagittal, axial, and coronal slices. The small image in the lower-right corner displays the brain surface together with the three slices for orientation purposes.
  • Anisotropy.
  • Figure 7.5. Different anisotropy measures for diffusion tensor data.
  • 7.4 Visualizing Vector PCA Information
  • Figure 7.6. Major eigenvector visualized with line glyphs colored by direction.
  • Figure 7.7. Major eigenvector direction color-coded on a slice plane.
  • 7.5 Tensor Glyphs
  • Figure 7.8. Different types of tensor glyphs. (a) Ellipsoids. (b) Cuboids. (c) Cylinders. (d) Superquadrics.
  • Figure 7.9. Zoomed-in view of a DT-MRI dataset visualized with (a) ellipsoid, (b) cuboid, (c) cylinder, and (d) superquadric glyphs.
  • 7.6 Fiber Tracking
  • Figure 7.10. Fiber tracking from a user-selected region in the corpus callosum constructed with the 3D Slicer medical visualization tool.
  • Focus and context.
  • Figure 7.11. (a) Fiber tracking detail of Figure 7.10. (b) Fiber clustering based on the mean closest-point distance.
  • Figure 7.12. Two views from a focus-and-context DTI visualization showing tensor ellipsoids, fiber tracks, and a slice plane and isosurface of the anisotropy measure.
  • Fiber clustering.
  • Tracking challenges.
  • 7.7 Illustrative Fiber Rendering
  • Figure 7.13. Illustrative rendering of a fiber dataset (a) using alpha blending (b), anisotropy-based blending (c), sprite textures (d), and depth-dependent halos (e).
  • Fiber generation.
  • Alpha blending.
  • Anisotropy simplification.
  • Illustrative rendering.
  • Fiber bundling.
  • Figure 7.14. Bundled visualizations of the fiber dataset in Figure 7.13(a). (a) Isotropic fiber bundles rendered in the context of the original fiber dataset, drawn in gray. (b) Fiber bundles rendered as tubes. (c) Anisotropic fiber bundling. (d) Fiber bundles rendered in the context of a volume-rendered CT scan, drawn in gray.
  • Fibers in context.
  • 7.8 Hyperstreamlines
  • Figure 7.15. Hyperstreamline construction. The major, medium, and minor eigenvectors at the hyperstreamline’s start and end points A and B are depicted in blue, red, and green, respectively. The streamline of the major eigenvector field e1 is drawn dashed.
  • Figure 7.16. DT-MRI brain dataset visualized with hyperstreamlines colored by direction.
  • 7.9 Conclusion
  • Chapter 8 Domain-Modeling Techniques
  • 8.1 Cutting
  • 8.1.1 Extracting a Brick
  • Figure 8.1. (a) Brick extraction. (b) Selection of cells with scalar value above 50.
  • 8.1.2 Slicing in Structured Datasets
  • Figure 8.2. Slicing with planes perpendicular to the x-axis (left), y-axis (middle), and z-axis (right).
  • 8.1.3 Implicit Function Cutting
  • 8.1.4 Generalized Cutting
  • 8.2 Selection
  • Selecting cells.
  • Thresholding, segmentation, and contouring.
  • 8.3 Grid Construction from Scattered Points
  • 8.3.1 Triangulation Methods
  • Delaunay triangulations.
  • Voronoi diagrams.
  • Figure 8.3. (a) Delaunay triangulation and (b) Voronoi diagram of a random point cloud. (c) Angle-constrained and (d) area-constrained Delaunay triangulations.
  • Variation of the basic techniques.
  • Implementation.
  • 8.3.2 Surface Reconstruction and Rendering
  • Using radial basis functions.
  • Using signed distance functions.
  • Figure 8.4. Scattered point cloud (left) and surface reconstruction with isosurface (right).
  • Figure 8.5. Mesh reconstruction from scattered points with local triangulations.
  • Local triangulations.
  • Multiple local triangulations.
  • Figure 8.6. Segmentation and reconstruction of intersecting surfaces from noisy point clouds.
  • Alpha shapes.
  • Figure 8.7. Alpha shapes of a 2D point cloud.
  • Figure 8.8. Alpha shapes of a 3D point cloud.
  • Ball pivoting.
  • Figure 8.9. Ball pivoting principle (sketched in 2D). Reconstruction is shown in red.
  • Figure 8.10. Ball pivoting reconstruction of a manifold (a) and non-manifold (b) point cloud.
  • Poisson reconstruction.
  • Figure 8.11. Poisson reconstruction of a point cloud.
  • Surface splatting.
  • Figure 8.12. Radial basis functions for surface reconstruction.
  • Figure 8.13. Point-based rendering and surface reconstruction from scattered points.
  • Sphere splatting.
  • Figure 8.14. Sphere splatting for surface reconstruction from oriented point clouds.
  • Figure 8.15. Sphere splatting for surface reconstruction from oriented point clouds.
  • 8.4 Grid-Processing Techniques
  • 8.4.1 Geometric Transformations
  • 8.4.2 Grid Simplification
  • Triangle mesh decimation.
  • Figure 8.16. Decimation of a surface grid. (a) Original grid with 36,000 points and (b) decimated grid with 3510 points. (c) Original isosurface with 373,000 points and (d) decimated version with 6536 points.
  • Vertex clustering.
  • Simplification envelopes.
  • Progressive meshes.
  • 8.4.3 Grid Refinement
  • Loop subdivision.
  • Figure 8.17. Loop subdivision cases.
  • Figure 8.18. Refining an isosurface. (a) Original grid. (b) Simplified grid. (c) Refined grid. The zoomed-in insets show the grid quality.
  • Figure 8.19. (a) Nonuniform surface mesh and (b) result of two Loop subdivision steps.
  • Advanced subdivision tools.
  • 8.4.4 Grid Smoothing
  • Figure 8.20. Laplacian smoothing principle for (a) 2D and (b) 3D geometries.
  • Figure 8.21. Laplacian smoothing of an isosurface. (a) Original surface. (b) Surface curvature. (c) Smoothed surface. (d) Comparison of original and smoothed surfaces.
  • 8.5 Conclusion
  • Chapter 9 Image Visualization
  • 9.1 Image Data Representation
  • 2D Images.
  • Higher-dimension images.
  • 9.2 Image Processing and Visualization
  • 9.3 Basic Imaging Algorithms
  • 9.3.1 Basic Image Processing
  • Transfer functions.
  • Figure 9.1. Image contrast enhancement. Images (top), linear histograms (middle), and logarithmic histograms (bottom). (a) Original image. (b) Contrast-enhanced image using nonlinear transfer function.
  • 9.3.2 Histogram Equalization
  • Figure 9.2. Histogram equalization showing images (top) and their logarithmic histograms (bottom). (a) Original image. (b) Image after histogram equalization.
  • Listing 9.1. Histogram computation and equalization.
  • 9.3.3 Gaussian Smoothing
  • Figure 9.3. Fourier approximation (drawn in red) of a square pulse signal (drawn in black). Approximation with n = 10 terms (top) and n = 24 terms (bottom). The values of the coefficients an and bn are shown in the right images.
  • Fourier transform.
  • Convolution for filtering.
  • Figure 9.4. (a) Noisy image. (b) Result after filtering with a Gaussian filter.
  • 9.3.4 Edge Detection
  • Figure 9.5. Edge detection using image derivatives. Image (top), first derivative (middle), and second derivative (bottom). Edges correspond to maxima of the first derivative or, alternatively, zero-crossings of the second derivative.
  • Gradient-based edge detection.
  • Figure 9.6. Edge detection. (a) Original image and (b–f) several edge detectors. Edge strength is mapped to image luminance.
  • Roberts operator.
  • Sobel operator.
  • Prewitt operator.
  • Laplacian-based edge detection.
  • 9.4 Shape Representation and Analysis
  • Figure 9.7. Imaging and shape analysis pipeline.
  • 9.4.1 Basic Segmentation
  • Figure 9.8. Histogram-based image thresholding. The red rectangles on the histograms indicate the selected value range.
  • 9.4.2 Advanced Segmentation
  • Snakes.
  • Figure 9.9. Examples (c–h) of advanced segmentation methods applied to a dermatoscopic skin image (a). Image (b) shows a manual segmentation.
  • Normalized cuts.
  • Mean shift.
  • Image foresting transform.
  • Level sets.
  • Threshold sets.
  • 9.4.3 Connected Components
  • Listing 9.2. Connected components detection.
  • Figure 9.10. Connected components.
  • 9.4.4 Morphological Operations
  • Figure 9.11. Morphological operators. The image in (a) is segmented in (b). (c) Dilation and erosion are used to close holes. (d) The largest connected component is selected.
  • Dilation and erosion.
  • 9.4.5 Distance Transforms
  • Figure 9.12. Distance and feature transforms of a 2D shape (a–c) and a 3D shape (d–h).
  • Distance transform properties.
  • Figure 9.13. Level sets of the distance transform of a 2D shape. (a) Shape, (b) level sets, and (c) elevation plot of the distance transform.
  • Brute-force implementation.
  • Distance transforms using OpenGL.
  • Listing 9.3. Brute-force distance transform.
  • Figure 9.14. Computing distance transforms by texture splatting. (a) Contour and overlaid distance splat and (b) resulting distance transform.
  • Listing 9.4. Computing distance transforms using hardware splatting.
  • Fast Marching Method.
  • Figure 9.15. Fast Marching Method algorithm.
  • Figure 9.16. Distance computation in the Fast Marching Method.
  • Listing 9.5. Fast Marching Method algorithm.
  • Other distance transform algorithms.
  • 9.4.6 Skeletonization
  • Centeredness.
  • Figure 9.17. Examples of skeletons of 2D shapes. The skeleton is the one-dimensional structure located inside the shape’s closed boundary.
  • Structural and topological encoding.
  • Geometrical encoding.
  • Multiscale shape encoding.
  • Applications.
  • 9.4.7 Skeleton Computation in 2D
  • Figure 9.18. Skeleton sampling issues. (a) Continuous skeleton and (b) its counterpart as computed on a discrete grid of even pixel width. The discrete skeleton misses the central branch (marked in gray in the right image).
  • Using distance field singularities.
  • Figure 9.19. Computing the skeleton using image-processing operations.
  • Using boundary collapse metric.
  • Listing 9.6. One-point feature transform.
  • Figure 9.20. Skeletonization algorithm. (a) One-point feature transform. (b) Importance given by collapsed arc length metric. (c) Simplified skeleton.
  • Figure 9.21. Isolines for the boundary ID field (orthogonal to the boundary) and the distance transform field (parallel to the boundary), equally spaced at 5 units. Note how the two isoline sets are orthogonal to each other, except along the skeleton.
  • Figure 9.22. Skeletonization examples. (a,b) Rice plant roots. (c,d) Neural cell.
  • Applications.
  • 9.4.8 Skeleton Computation in 3D
  • Surface skeletons.
  • Figure 9.23. Skeletons of three-dimensional shapes. The shape is rendered transparent.
  • Curve skeletons.
  • Figure 9.24. Curve skeletons of the 3D shapes shown in Figure 9.23.
  • Figure 9.25. Centerlines of a human colon isosurface (a) in the original position and (b) in an unfolded position.
  • Thinning methods.
  • Distance field methods.
  • Geodesic methods.
  • Figure 9.26. Defining curve skeletons using geodesics between feature points. Point p has two equal-length shortest geodesics ||γA|| = ||γB|| (the red and blue dotted curves) between its feature points f1 and f2, so it is on the curve skeleton C.
  • Mesh contraction methods.
  • Curve skeleton comparison.
  • Figure 9.27. Curve skeletons computed by eight methods. Top row: mesh contraction methods. Bottom row: voxel-based methods.
  • 9.5 Conclusion
  • Chapter 10 Volume Visualization
  • 10.1 Motivation
  • Figure 10.1. Visualizing a 3D scalar dataset. (a) Surface plot. (b) Slice plane. (c) Isosurface.
  • Figure 10.2. Visualization consisting of two isosurfaces.
  • Figure 10.3. Visualization of scalar volume using (a) volume-aligned slices and (b) view direction-aligned slices.
  • Figure 10.4. Conceptual principle of volume visualization.
  • 10.2 Volume Visualization Basics
  • 10.2.1 Classification
  • 10.2.2 Maximum Intensity Projection Function
  • Figure 10.5. Maximum intensity projection rendering.
  • Figure 10.6. Average intensity rendering.
  • 10.2.3 Average Intensity Function
  • 10.2.4 Distance to Value Function
  • 10.2.5 Isosurface Function
  • 10.2.6 Compositing Function
  • Figure 10.7. Different isosurface techniques. (a) Marching cubes. (b) Isosurface ray function, software ray casting. (c) Graphics hardware ray casting. (d–f) Compositing with box opacity function, different integration step sizes.
  • Figure 10.8. Volumetric illumination model: color c(t) emitted at position t along a view ray gets attenuated by the values τ(u) of the points u situated between t and the view plane to yield the contribution C(0, t) of c(t) to the view plane.
  • Transfer functions.
  • Figure 10.9. (a) Volume rendering of head dataset. (b) The transfer function used emphasizes skin, soft bone, and hard bone.
  • Integration issues.
  • Figure 10.10. (a) Volume rendering of flow field velocity magnitude and (b) corresponding transfer functions.
  • Examples.
  • 10.2.7 Volumetric Shading
  • Figure 10.11. Volumetric lighting. (a) No lighting. (b) Diffuse lighting. (c) Specular lighting.
  • Figure 10.12. Examples of volume rendering. (a) Electron density. (b) Engine block. (c) Bonsai tree. (d) Carp fish.
  • 10.3 Image Order Techniques
  • 10.3.1 Sampling and Interpolation Issues
  • Figure 10.13. Volume rendering of head dataset for different values of the integration step size (in voxels). Trilinear interpolation of scalar values is used. The color and opacity transfer functions used are shown at the bottom of the image.
  • Figure 10.14. Volume rendering of head dataset for two step size values. Nearest-neighbor interpolation is used.
  • Figure 10.15. Sample step jittering for removing color-banding artifacts. (a) Banding caused by δ = 1. (b) Banding alleviated by using jittering, δ = 1. (c) High-quality image, δ = 0.2.
  • 10.3.2 Classification and Interpolation Order
  • Figure 10.16. Comparison of (a) postclassification and (b) preclassification techniques. The insets show a zoomed-in detail region from the large image.
  • 10.4 Object Order Techniques
  • 2D texture methods.
  • 3D texture methods.
  • 10.5 Volume Rendering vs. Geometric Rendering
  • Aims.
  • Complexity.
  • Mixed methods.
  • 10.6 Conclusion
  • Chapter 11 Information Visualization
  • 11.1 What Is Infovis?
  • 11.2 Infovis vs. Scivis: A Technical Comparison
  • 11.2.1 Dataset
  • Figure 11.1. Examples of (a) scivis and (b) infovis datasets.
  • 11.2.2 Data Domain
  • 11.2.3 Data Attributes
  • Table 11.1. Attribute data types in infovis.
  • 11.2.4 Interpolation
  • Table 11.2. Comparison of dataset notions in scivis and infovis.
  • 11.3 Table Visualization
  • Printing the contents.
  • Figure 11.2. Textual visualization of a database table containing stock exchange data.
  • Mapping values.
  • Figure 11.3. Table visualization enhanced using multiple sorting, evolution icons, bar graphs, and same-value (date) row cues.
  • Sampling issues.
  • Figure 11.4. The table lens technique allows us to create overviews of large tables as well as show context information.
  • 11.4 Visualization of Relations
  • 11.4.1 Tree Visualization
  • Node-link visualization.
  • Rooted tree layout.
  • Figure 11.5. File hierarchy of the FFmpeg software distribution visualized using a rooted tree.
  • Figure 11.6. Radial-tree layout for the same file hierarchy as in Figure 11.5.
  • Radial tree layout.
  • Figure 11.7. Bubble-tree layout for the same file hierarchy as in Figure 11.5.
  • Bubble tree layout.
  • Figure 11.8. Cone-tree layout for the same file hierarchy as in Figure 11.5.
  • Cone tree layout.
  • Treemaps.
  • Figure 11.9. Treemap layout for the same file hierarchy as in Figure 11.5. Colors indicate file types; rectangle areas indicate file sizes.
  • Squarified treemaps.
  • Cushion treemaps.
  • Figure 11.10. Improved treemap visualization using squarified layout and shaded cushion rendering.
  • Figure 11.11. (a) The tree structure is visualized with (b) a cushion treemap. The actual cushion surface is indicated by the bold black line in (b). The same color is used to indicate the same node in the node-link tree drawing, the treemap, and the cushion profiles.
  • Figure 11.12. The Map of the Market [SmartMoney 13] rendered using a treemap.
  • 11.4.2 Graph Visualization
  • Hierarchical graph visualization.
  • Figure 11.13. The evolution of the UNIX operating system, displayed as a hierarchical graph.
  • Figure 11.14. The call graph of a program visualized using a hierarchical graph layout. Note the separation between the main program and library subsystem.
  • Orthogonal layouts.
  • Figure 11.15. Containment and dependency relations in a software system, visualized using a hierarchical graph layout with orthogonal edge routing.
  • Hierarchical edge bundling.
  • Figure 11.16. The call graphs of two programs visualized together with the programs’ hierarchical layering. The layout used suggests that the left system is more modular than the right system.
  • Image-based edge bundling.
  • Figure 11.17. Call-and-dependency graph of a software system (a). Simplified image-based edge bundle visualization of the same graph (b).
  • Force-directed layouts.
  • Listing 11.1. Force-directed graph layout algorithm.
  • Figure 11.18. Call graph of a C++ program visualized using a force-directed layout. The node colors indicate the function types. The graph contains 314 nodes (functions) and 718 edges (calls).
  • Figure 11.19. Call graphs of Firefox plug-ins: libgklayout (a,b) and libembed (c,d) visualized using edge bundling (a,c) and force-directed layouts (b,d).
  • Figure 11.20. Inheritance relations in the VTK class library visualized using the GEM force-directed layout. Specialization subtrees are indicated by blue outlines and labeled by the respective subtree root class.
  • Multiple views.
  • Figure 11.21. Hierarchical and call relations in a software system visualized with a combination of tree and force-directed layouts. The bottom view shows the entire system hierarchy, where two subsystems of interest have been selected (rendered in red). The top-left view shows the call and hierarchy relations in the selected subsystems using a force-directed layout. The top-right view shows a simplified view of the latter, where several call relations have been filtered out. The arrows between the images show the order of creating and examining the visualizations.
  • Graph splatting.
  • Figure 11.22. Software dependency graph visualized with (a) force-directed layout and (b) graph splatting. The splatting density is scaled by the number of dependent modules. Warm colors in (b) emphasize high-level system modules. The nodes, positioned identically to the layout shown in (a), are depicted by white dots.
  • General graph-edge bundling.
  • FDEB:
  • Figure 11.23. General graph bundling examples. Images (a) and (d) show the unbundled graphs.
  • GBEB:
  • WR:
  • SBEB:
  • KDEEB:
  • Comparing bundling algorithms:
  • Visualizing dynamic graphs.
  • Types of dynamic graphs:
  • Online vs. offline drawing:
  • Visualizing small numbers of keyframes:
  • Figure 11.24. Visualizing two keyframes (a) and six keyframes (b) in sequence hierarchies.
  • Using animation:
  • Figure 11.25. Eight frames from a graph animation for visualizing a dynamic hierarchical call graph.
  • 11.4.3 Diagram Visualization
  • Figure 11.26. Diagram visualizations atop a relational dataset.
  • 11.5 Multivariate Data Visualization
  • 11.5.1 Parallel Coordinate Plots
  • Figure 11.27. Schematic description of (a) table visualization vs. (b) parallel coordinate plots. A K-dimensional point pj is shown in blue in both plots.
  • Figure 11.28. Parallel coordinate plot showing six attributes (miles-per-gallon, cylinders, horsepower, weight, acceleration, and manufacturing year) for about 400 cars. A selected car is shown in the image as a red polyline with the individual attribute values displayed as labels.
  • Figure 11.29. Using brushing to select the low-acceleration cars. The selected cars are shown in red. An interesting outlier is highlighted further.
  • Figure 11.30. Enhancing parallel coordinates. The orientation of the axes whose labels are marked in red has been swapped as compared to Figure 11.29. Histograms show the attribute value distribution over 10 equally sized ranges for each axis.
  • 11.5.2 Dimensionality Reduction
  • 11.5.3 Multidimensional Scaling
  • Figure 11.31. Computation of projection coordinates in FastMap using only point-wise distances.
  • 11.5.4 Projection-Based Dimensionality Reduction
  • 11.5.5 Advanced Dimensionality Reduction Techniques
  • 1. Least Square Projection (LSP):
  • 2. Part-Linear Multidimensional Projection (PLMP):
  • 3. Local Affine Multidimensional Projection (LAMP):
  • Implementations:
  • Figure 11.32. Exploring the relationship of projections with high-dimensional attribute values. Different attributes are mapped to color in each image.
  • 11.5.6 Explaining Projections
  • Figure 11.33. Explanatory visualization of high-dimensional to low-dimensional axis mapping.
  • Attribute axes:
  • Axis legends:
  • 11.5.7 Assessing Projection Quality
  • Aggregate point-wise error:
  • Figure 11.34. Visualizing aggregate projection error (a), false neighbors (b), and missing neighbors shown with color mapping (c,d) and edge bundles (e,f). Markers in images (c–f) indicate selected points.
  • False neighbors:
  • Missing neighbors:
  • Group members:
  • Figure 11.35. Visualizing missing group neighbors for two groups in a projection.
  • Comparing projections:
  • Figure 11.36. Comparison of LAMP, LSP, PLMP, and [Pekalska et al. 99] projections for the Segmentation dataset. Color mapping is normalized so that the same color indicates the same error value for all projections.
  • 11.6 Text Visualization
  • 11.6.1 Content-Based Visualization
  • Figure 11.37. Visualization of an electronic (PDF) version of this book in the Adobe Acrobat system. Four design elements are emphasized. (a) The document’s detailed content. (b) A page-level overview. (c) The document structure. (d) Annotation metadata.
  • 11.6.2 Visualizing Program Code
  • Figure 11.38. Visualization of C source code using the SeeSoft tool. Color shows the code age. Red depicts recently modified code, while blue shows code unchanged for a long time. The smaller window in front shows detail for a region in focus in the form of actual source code text.
  • Figure 11.39. Visualization of C++ source code using shaded cushions. Color shows the occurrence of selected construct types. The cushion luminance profiles emphasize the syntactic nesting of structures.
  • 11.6.3 Visualizing Evolving Documents
  • Figure 11.40. Visualization of the evolution of the VTK software project. Files are shown as horizontal pixel strips colored by file type. File strips are stacked on the vertical axis in the order they appear in the directories. Yellow dots indicate the file modification events.
  • Analyzing the project structure.
  • Analyzing activity.
  • Figure 11.41. Visualization of author contributions in the VTK software project. The file versions are colored by the author who modified them. File strips are stacked on the vertical axis in decreasing order of activity, with the most modified files shown at the top.
  • Analyzing growth.
  • Visualizing quality metrics.
  • Figure 11.42. Visualization of growth and author contributions in a software repository.
  • Figure 11.43. Visualization of code quality metrics evolution over time.
  • 11.7 Conclusion
  • Chapter 12 Conclusion
  • Scientific visualization.
  • Information visualization.
  • Synergies and challenges.
  • The way forward.
  • Efficiency and effectiveness:
  • Measuring value:
  • Integration:
  • Explorers vs. practitioners:
  • Specialists vs. generalists:
  • Back Matter
  • Appendix Visualization Software
  • A.1 Taxonomies of Visualization Systems
  • A.2 Scientific Visualization Software
  • The Visualization Toolkit (VTK)
  • MeVisLab
  • AVS/Express
  • IRIS Explorer
  • SCIRun
  • ParaView
  • MayaVi
  • A.3 Imaging Software
  • The Insight Toolkit (ITK)
  • 3D Slicer
  • Teem
  • ImageJ
  • Binvox
  • OpenVDB
  • A.4 Grid Processing Software
  • MeshLab
  • PCL
  • CGAL
  • A.5 Information Visualization Software
  • The Infovis Toolkit (IVTK)
  • Prefuse
  • GraphViz
  • Tulip
  • Gephi
  • ManyEyes
  • Treemap
  • XmdvTool
  • Bibliography
  • Index
