ParaView User Manual (v4.1)
Page Count: 433
Version 4.0

Contents

Introduction
    About ParaView
Loading Data
    Data Ingestion
Understanding Data
    VTK Data Model
    Information Panel
    Statistics Inspector
    Memory Inspector
    Multi-block Inspector
Displaying Data
    Views, Representations and Color Mapping
Filtering Data
    Rationale
    Filter Parameters
    The Pipeline
    Filter Categories
    Best Practices
    Custom Filters aka Macro Filters
Quantitative Analysis
    Drilling Down
    Python Programmable Filter
    Calculator
    Python Calculator
    Spreadsheet View
    Selection
    Querying for Data
    Histogram
    Plotting and Probing Data
Saving Data
    Saving Data
    Exporting Scenes
3D Widgets
    Manipulating data in the 3D view
Annotation
    Annotation
Animation
    Animation View
Comparative Visualization
    Comparative Views
Remote and Parallel Large Data Visualization
    Parallel ParaView
    Starting the Server(s)
    Connecting to the Server
    Distributing/Obtaining Server Connection Configurations
Parallel Rendering and Large Displays
    About Parallel Rendering
    Parallel Rendering
    Tile Display Walls
    CAVE Displays
Scripted Control
    Interpreted ParaView
    Python Scripting
    Tools for Python Scripting
    Batch Processing
In-Situ/CoProcessing
    CoProcessing
    C++ CoProcessing example
    Python CoProcessing Example
Plugins
    What are Plugins?
    Included Plugins
    Loading Plugins
Appendix
    Command Line Arguments
    Application Settings
    List of Readers
    List of Sources
    List of Filters
    List of Writers
    How to build/compile/install
    Building ParaView with Mesa3D
    How to write parallel VTK readers
References
    Article Sources and Contributors
    Image Sources, Licenses and Contributors
Article Licenses
    License

Introduction

About ParaView

What is ParaView?
ParaView is an open-source, multi-platform application for the visualization and analysis of scientific datasets, primarily those that are defined natively in a two- or three-dimensional space, including those that extend into the temporal dimension.

The front-end graphical user interface (GUI) is open, flexible and intuitive, yet still gives you fine-grained and open-ended control of the data manipulation and display processing needed to explore and present complex data as you see fit.

ParaView has extensive scripting and batch-processing capabilities. The standard scripting interface uses the widely used Python programming language for scripted control. As with the GUI, Python scripted control is easy to learn, including the ability to record actions in the GUI and save them out as succinct, human-readable Python programs. It is also powerful, with the ability to write scripted filters that run on the server and have access to every bit of your data on a large parallel machine.

ParaView's data processing and rendering components are built upon a modular and scalable distributed-memory parallel architecture in which many processors operate synchronously on different portions of the data. ParaView's scalable architecture allows you to run it directly on anything from a small netbook-class machine up to the world's largest supercomputer. However, the size of the datasets ParaView can handle in practice varies widely depending on the size of the machine that ParaView's server components run on. People therefore frequently do both, taking advantage of ParaView's client/server architecture to connect to and control the supercomputer from the netbook.

ParaView is meant to be easily extended and customized into new applications, and to use or be used by other tools. Correspondingly, there are a number of different interfaces to ParaView's data processing and visualization engine, for example the web-based ParaViewWeb [1].
This book does not focus on those interfaces, nor does it describe in great detail the programmers' interface to the ParaView engine. Instead, it focuses on understanding the standard ParaView GUI-based application.

User Interface

The different sections of ParaView's Graphical User Interface (GUI) are shown below. Of particular importance in the following discussion are:

• the File and Filters menus, which allow you to open files and manipulate data
• the Pipeline Browser, which displays the visualization pipeline
• the Properties panel, where you can control any given module within the pipeline
• the View area, where data is displayed in one or more windows

Figure 1.1 ParaView GUI Overview

Modality

One very important thing to keep in mind when using ParaView is that the GUI is very modal. At any given time you will have one "active" module within the visualization pipeline, one "active" view, and one "active" selection. For example, when you click on the name of a reader or source within the Pipeline Browser, it becomes the active module and the properties of that filter are displayed in the Properties panel. Likewise, when you click within a different view, that view becomes the active view and the visibility "eye" icons in the Pipeline Browser change to show which filters are displayed within this view. These concepts are described in detail in later chapters (Multiple Views [2], Pipeline Basics [3], Selection [4]). For now, you should be aware that the information displayed in the GUI always pertains to these active entities.

Features

Modern graphical applications allow users to treat the GUI as a document where information can be queried and copied from one place to another, and that is the direction ParaView is heading. Typically, you can search any tree, table or list view widget in the UI by hitting Ctrl+F (Command+F on Mac) while the widget has focus.
This brings up a dynamic search widget, illustrated in the following screenshots. The search widget closes when the view widget loses focus, when the Esc key is pressed, or when the Close button on the widget is clicked.

Figure 1.2 Searching in lists

Figure 1.3 Searching in trees

To retrieve data from a spreadsheet or a more complex UI component, double-click on the area that you are interested in and select the portion of text that you want to copy. The set of screenshots below illustrates different selection use cases across the UI components.

Figure 1.4 Copying time values from the Information tab

Figure 1.5 Copying values from trees on the Information tab

Figure 1.6 Copying values from the Spreadsheet View

Figure 1.7 Copying values from the Information tab

Basics of Visualization

Put simply, the process of visualization is taking raw data and converting it to a form that is viewable and understandable to humans. This enables a better cognitive understanding of our data. Scientific visualization is specifically concerned with the type of data that has a well-defined representation in 2D or 3D space. Data that comes from simulation meshes and scanner data is well suited for this type of analysis.

There are three basic steps to visualizing your data: reading, filtering, and rendering. First, your data must be read into ParaView. Next, you may apply any number of filters that process the data to generate, extract, or derive features from the data. Finally, a viewable image is rendered from the data, and you can then change the viewing parameters or rendering modality for the best visual effect.

The Pipeline Concept

In ParaView, these steps are made manifest in a visualization pipeline. That is, you visualize data by building up a set of modules, each of which takes in some data, operates on it, and presents the result as a new dataset.
This begins with a reader module that reads data from files on disk. Reading data into ParaView is often as simple as selecting Open from the File menu, and then clicking the glowing Apply button on the Properties panel. ParaView comes with support for a large number of file formats [5], and its modular architecture makes it possible to add new file readers [6].

Once a file is read, ParaView automatically renders it in a view. In ParaView, a view is simply a window that shows data. There are different types of views, ranging from qualitative computer-graphics renderings of the data to quantitative spreadsheet presentations of the data values as text. ParaView picks a suitable view type for your data automatically, but you are free to change the view type, modify the rendering parameters of the data in the view, and even create new views simultaneously as you see fit to better understand what you have read in. Additionally, high-level meta-information about the data, including the names, types and ranges of arrays, temporal ranges, memory size and geometric extent, can be found in the Information tab. You can learn a great deal about a given dataset with a one-element visualization pipeline consisting of just a reader module.

In ParaView, you can create arbitrarily complex visualization pipelines, including ones with multiple readers and merging and branching pipelines. You build up a pipeline by choosing the next filter in the sequence from the Filters menu. Once you click Apply, this new filter reads in the data produced by the formerly active filter and performs some processing on that data. The new filter then becomes the active one. Filters, then, are created differently from readers but operate in the same manner. At all points you use the Pipeline Browser to choose the active filter and then the Properties panel to configure it. The Pipeline Browser is where the overall visualization pipeline is displayed and controlled from.
The Properties panel is where the specific parameters of one particular module within the pipeline are displayed and controlled from. The Properties panel has two sections: the Properties section presents the parameters for the processing done within that module, and the Display section presents the parameters of how the output of that module will be displayed in a view (namely, the active view). There is also an Information panel, which presents the meta-information about the data produced by the module, as described above.

Figure 1.8 demonstrates a three-element visualization pipeline, where the output of each module in the pipeline is displayed in its own view. A reader takes in a vector field, defined on a curvilinear grid, which comes from a simulation study of a wind turbine. Next, a slice filter produces slices of the field on five equally spaced planes along the X-axis. Finally, a warp filter warps those planes along the direction of the vector field, which primarily moves the planes downwind but also shows some complexity at the location of the wind turbine.

Figure 1.8 A three-element visualization pipeline

There are more than one hundred filters available to choose from, all of which manipulate the data in different ways. The full list of filters is available in the Appendix [7] and within the application under the Help menu. Note that many of the filters in the menu will be grayed out and not selectable at any given time. That is because any given filter may only operate on particular types of data. For example, the Extract Subset filter will only operate on structured datasets, so it is only enabled when the module you are building on top of produces image data, rectilinear grid data, or structured grid data. (These input restrictions are also listed in the Appendix [7] and the Help menu.) In this situation you can often find a similar filter that does accept your data, or apply a filter that transforms your data into the required format.
In ParaView 3.10, you can ask ParaView to try to do the conversion for you automatically by enabling "Auto convert properties" in the application settings [8]. The mechanics of applying filters are described fully in the Manipulating Data [9] chapter.

Making Mistakes

Frequently, new users of ParaView falter when they open their data or apply a filter and do not see the result immediately, because they have not pressed the Apply button. ParaView was designed to operate on large datasets, for which any given operation could take a long time to finish. In this situation you need the Apply button so that you have a chance to be confident of your change before it takes effect. The highlighted Apply button is a reminder that the parameters of one or more filters are out of sync with the data that you are viewing. Hitting the Apply button accepts your change (or changes), whereas hitting the Reset button reverts the options back to the last time you hit Apply. If you are working with small datasets, you may want to turn off this behavior with the Auto Accept setting under the Application Settings [8].

The Apply behavior circumvents a great number of mistakes, but not all of them. If you make some change to a filter or to the pipeline itself and later find that you are not satisfied with the result, hit the Undo button. You can undo all the way back to the start of your ParaView session, and redo all the way forward if you like. You can also undo and redo camera motion by using the camera undo and redo buttons located above each view window.

Persistent Sessions

If, on the other hand, you are satisfied with your visualization results, you may want to save your work so that you can return to it at some future time. You can do so by using ParaView's Save State (File | Save State) and Save Trace (Tools | Save Trace) features.
In either case, ParaView produces human-readable text files (XML files for state, Python scripts for trace) that can be modified and restored later. This is very useful for batch processing, which is discussed in the Python Scripting [10] chapter.

To save state means to save enough information about the ParaView session to restore it later and thus show exactly the same result. ParaView does so by saving the current visualization pipeline and the parameters of the filters within it. If you turn on trace recording when you first start using ParaView, saving a trace can serve the same purpose as saving state. However, a trace records all of your actions, including the ones that you later undo, as you do them. It is a more exact recording of not only what you did, but how you did it. Traces are saved as Python scripts, which ParaView can play back either in batch mode or within an interactive GUI session. You can therefore use traces to automate repetitive tasks by recording just those actions. A trace is also an ideal tool for learning ParaView's Python scripting API.

Client/Server Visualization

With small datasets it is usually sufficient to run ParaView as a single process on a small laptop or desktop-class machine. For large datasets, a single machine is not likely to have enough processing power and, much more importantly, enough memory to process the data. In this situation you run an MPI-parallel ParaView server process on a large machine to do the computationally and memory-expensive data processing and/or rendering tasks, and then connect to that server from within the familiar GUI application. When connected to a remote server, the only difference you will see is that the visualization pipeline displayed in the Pipeline Browser begins with the name of the server you are connected to, rather than the word 'builtin', which indicates that you are connected to a virtual server residing in the same process as the GUI.
When connected to a remote server, the File Open dialog presents the list of files that live on the remote machine's file system rather than the client's. Depending on the server's capabilities, the data size and your application settings (Edit | Settings | Render View | Server), the data will either be rendered remotely and the pixels sent to the client, or the geometry will be delivered and rendered locally. Large data visualization is described fully in the Client Server Visualization [11] chapter.

References

[1] http://www.paraview.org/Wiki/ParaViewWeb
[2] http://paraview.org/Wiki/ParaView/Displaying_Data#Multiple_Views
[3] http://paraview.org/Wiki/ParaView/UsersGuide/Filtering_Data#Pipeline_Basics
[4] http://paraview.org/Wiki/ParaView/Users_Guide/Selection
[5] http://paraview.org/Wiki/ParaViewUsersGuide/List_of_readers
[6] http://paraview.org/Wiki/Writing_ParaView_Readers
[7] http://paraview.org/Wiki/ParaViewUsersGuide/List_of_filters
[8] http://paraview.org/Wiki/ParaView/Users_Guide/Settings
[9] http://paraview.org/Wiki/ParaView/UsersGuide/Filtering_Data
[10] http://paraview.org/Wiki/ParaView/Python_Scripting
[11] http://paraview.org/Wiki/Users_Guide_Client-Server_Visualization

Loading Data

Data Ingestion

Introduction

Loading data is a fundamental operation in using ParaView for visualization. As you would expect, the Open option from the File menu and the Open button on the toolbar both allow you to load data into ParaView. ParaView understands many scientific data file formats. The most comprehensive list is given in the List of Readers [5] appendix. Because of ParaView's modular design it is easy to integrate new readers. If the formats you need are not listed, ask the mailing list first to see if anyone has a reader for the format, or, if you want to create your own readers for ParaView, see the Plugin HowTo [1] section and the Writing Readers [6] appendix of this book.
Opening File / Time Series

ParaView recognizes file series by using certain patterns in the names of files, including:

• fooN.vtk
• foo_N.vtk
• foo-N.vtk
• foo.N.vtk
• Nfoo.vtk
• N.foo.vtk
• foo.vtk.N
• foo.vtk-sN

In the above file name examples, N is an integer (with any number of leading zeros). To load a file series, first make sure that the file names match one of the patterns described above. Next, navigate to the directory where the file series is. The file browser should look like Figure 2.1:

Figure 2.1 Sample browser when opening files

You can expand the file series by clicking on the triangle, as shown in the above diagram. Simply select the group (in the picture named blow..vtk) and click OK. The reader will store all the file names and treat each file as a time step. You can now animate, use the Annotate Time filter, or do anything you can do with readers that natively support time. If you want to load a single step of a file series, just expand the triangle and select the file you are interested in.

Opening Multiple Files

ParaView supports loading multiple files as long as they exist in the same directory. Just hold the Ctrl key down while selecting each file (Figure 2.2), or hold Shift to select all files in a range.

Figure 2.2 Opening multiple files

State Files

Another option is to load a previously saved state file (File | Load State). This will return ParaView to its state at the time the file was saved by loading the data files and applying the filters.

Advanced Data Loading

If you commonly load the same data into ParaView each time, you can streamline the process by launching ParaView with the data command-line argument (--data=data_file).

Properties Panel

Note that opening a file is a two-step process, so you do not see any data after opening a data file. Instead, you see that the Properties panel is populated with several options about how you may want to read the data.
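To make the file-series name patterns listed earlier in this section concrete, the sketch below shows one way such names could be grouped with regular expressions. This is an illustrative approximation, not ParaView's actual series-detection code; the function name and the exact expressions are assumptions.

```python
import re

# Hypothetical sketch: regular expressions approximating the series
# patterns fooN.vtk / foo_N.vtk / foo-N.vtk / foo.N.vtk, Nfoo.vtk /
# N.foo.vtk, and foo.vtk.N / foo.vtk-sN. Not the ParaView implementation.
SERIES_PATTERNS = [
    re.compile(r"^(?P<prefix>.*?)[_\-.]?(?P<index>\d+)\.(?P<ext>\w+)$"),
    re.compile(r"^(?P<index>\d+)\.?(?P<prefix>.+)\.(?P<ext>\w+)$"),
    re.compile(r"^(?P<prefix>.+)\.(?P<ext>\w+)[.\-]s?(?P<index>\d+)$"),
]

def series_key(filename):
    """Return (group_name, index) if the name looks like a series member,
    else None. Files with the same group_name belong to one series."""
    for pattern in SERIES_PATTERNS:
        m = pattern.match(filename)
        if m:
            return (m.group("prefix") + "." + m.group("ext"),
                    int(m.group("index")))
    return None
```

Sorting a directory listing by the returned index would yield the time-step order in which such a series is read.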
Figure 2.3 Using the Properties panel

Once you have enabled all the options that you are interested in, click the Apply button to finish loading the data. For a more detailed explanation of the Properties panel, read the Properties section.

References

[1] http://www.paraview.org/Wiki/ParaView/Plugin_HowTo#Adding_a_Reader

Understanding Data

VTK Data Model

Introduction

To use ParaView effectively, you need to understand the ParaView data model. This chapter briefly introduces the VTK data model used by ParaView. For more details, refer to one of the VTK books.

The most fundamental data structure in VTK is a data object. Data objects can either be scientific datasets, such as rectilinear grids or finite-element meshes (see below), or more abstract data structures such as graphs or trees. These datasets are formed of smaller building blocks: mesh (topology and geometry) and attributes.

Mesh

Even though the actual data structure used to store the mesh in memory depends on the type of the dataset, some abstractions are common to all types. In general, a mesh consists of vertices (points) and cells (elements, zones). Cells are used to discretize a region and can have various types such as tetrahedra, hexahedra, etc. Each cell contains a set of vertices. The mapping from cells to vertices is called the connectivity. Note that even though it is possible to define data elements such as faces and edges, VTK does not represent these explicitly. Rather, they are implied by a cell's type and its connectivity. One exception to this rule is the arbitrary polyhedron, which explicitly stores its faces. Figure 3.1 is an example mesh that consists of two cells. The first cell is defined by vertices (0, 1, 3, 4) and the second cell is defined by vertices (1, 2, 4, 5). These cells are neighbors because they share the edge defined by the points (1, 4).
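The connectivity just described for the two-cell mesh of Figure 3.1 can be sketched in plain Python (illustrative only, not the VTK API; the cell table and helper function are assumptions for this example):

```python
# The two cells of Figure 3.1, each stored as a tuple of vertex indices
# (its connectivity). Edges are not stored; they are implied.
cells = {
    0: (0, 1, 3, 4),   # first cell
    1: (1, 2, 4, 5),   # second cell
}

def shared_vertices(cell_a, cell_b):
    """Vertices common to two cells; two shared vertices imply that the
    cells share the edge defined by those vertices."""
    return sorted(set(cells[cell_a]) & set(cells[cell_b]))
```

Applying `shared_vertices(0, 1)` recovers the shared edge (1, 4) described in the text.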
Figure 3.1 Example of a mesh

A mesh is fully defined by its topology and the spatial coordinates of its vertices. In VTK, the point coordinates may be implicit, or they may be explicitly defined by a data array of dimensions (number_of_points x 3).

Attributes (fields, arrays)

An attribute (or a data array or field) defines the discrete values of a field over the mesh. Examples of attributes include pressure, temperature, velocity and stress tensor. Note that VTK does not specifically define different types of attributes. All attributes are stored as data arrays, which can have an arbitrary number of components. ParaView makes some assumptions with regard to the number of components. For example, a 3-component array is assumed to be an array of vectors. Attributes can be associated with points or cells. It is also possible to have attributes that are not associated with either.

Figure 3.2 demonstrates the use of a point-centered attribute. Note that the attribute is only defined on the vertices. Interpolation is used to obtain the values everywhere else. The interpolation functions used depend on the cell type. See the VTK documentation for details.

Figure 3.2 Point-centered attribute in a data array or field

Figure 3.3 demonstrates the use of a cell-centered attribute. Note that cell-centered attributes are assumed to be constant over each cell. Due to this property, many filters in VTK cannot be directly applied to cell-centered attributes. It is normally required to apply a Cell Data to Point Data filter first. In ParaView, this filter is applied automatically when necessary.

Figure 3.3 Cell-centered attribute

Uniform Rectilinear Grid (Image Data)

Figure 3.4 Sample uniform rectilinear grid

A uniform rectilinear grid, or image data, defines its topology and point coordinates implicitly. To fully define the mesh for an image data, VTK uses the following:

• Extents - these define the minimum and maximum indices in each direction.
For example, an image data of extents (0, 9), (0, 19), (0, 29) has 10 points in the x-direction, 20 points in the y-direction and 30 points in the z-direction. The total number of points is 10*20*30.
• Origin - this is the position of the point with indices (0, 0, 0).
• Spacing - this is the distance between adjacent points. The spacing for each direction can be defined independently.

The coordinate of each point is defined as follows: coordinate = origin + index*spacing, where coordinate, origin, index and spacing are vectors of length 3.

Note that the generic VTK interface for all datasets uses a flat index. The (i, j, k) index can be converted to this flat index as follows: idx_flat = k*(npts_x*npts_y) + j*npts_x + i.

A uniform rectilinear grid consists of cells of the same type. This type is determined by the dimensionality of the dataset (based on the extents) and can be either vertex (0D), line (1D), pixel (2D) or voxel (3D). Due to its regular nature, image data requires less storage than other datasets. Furthermore, many algorithms in VTK have been optimized to take advantage of this property and are more efficient for image data.

Rectilinear Grid

Figure 3.5 Rectilinear grid

A rectilinear grid such as Figure 3.5 defines its topology implicitly and its point coordinates semi-implicitly. To fully define the mesh for a rectilinear grid, VTK uses the following:

• Extents - these define the minimum and maximum indices in each direction. For example, a rectilinear grid of extents (0, 9), (0, 19), (0, 29) has 10 points in the x-direction, 20 points in the y-direction and 30 points in the z-direction. The total number of points is 10*20*30.
• Three arrays defining coordinates in the x-, y- and z-directions. These arrays are of length npts_x, npts_y and npts_z. This is a significant savings in memory, as the total memory used by these arrays is npts_x+npts_y+npts_z rather than npts_x*npts_y*npts_z.
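The index arithmetic described above for image data (coordinate = origin + index*spacing, plus the flat-index conversion shared by all structured datasets) can be sketched as follows. This is illustrative Python, not the VTK API; the function names are assumptions.

```python
def point_coordinate(origin, spacing, index):
    """Image data: coordinate = origin + index * spacing, per component.
    origin, spacing and index are length-3 sequences."""
    return tuple(o + i * s for o, i, s in zip(origin, index, spacing))

def flat_index(i, j, k, npts_x, npts_y):
    """Convert a structured (i, j, k) index to VTK's flat point index:
    idx_flat = k*(npts_x*npts_y) + j*npts_x + i."""
    return k * (npts_x * npts_y) + j * npts_x + i
```

For the extents (0, 9), (0, 19), (0, 29) used in the examples above, the grid has 10\*20\*30 = 6000 points, and the last point (9, 19, 29) maps to flat index 5999.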
The coordinate of each point is defined as follows: coordinate = (coordinate_array_x(i), coordinate_array_y(j), coordinate_array_z(k)). Note that the generic VTK interface for all datasets uses a flat index. The (i, j, k) index can be converted to this flat index as follows: idx_flat = k*(npts_x*npts_y) + j*npts_x + i.

A rectilinear grid consists of cells of the same type. This type is determined by the dimensionality of the dataset (based on the extents) and can be either vertex (0D), line (1D), pixel (2D) or voxel (3D).

Curvilinear Grid (Structured Grid)

Figure 3.6 Curvilinear or structured grid

A curvilinear grid, such as Figure 3.6, defines its topology implicitly and its point coordinates explicitly. To fully define the mesh for a curvilinear grid, VTK uses the following:

• Extents - these define the minimum and maximum indices in each direction. For example, a curvilinear grid of extents (0, 9), (0, 19), (0, 29) has 10*20*30 points regularly defined over a curvilinear mesh.
• An array of point coordinates. This array stores the position of each vertex explicitly.

The coordinate of each point is defined as follows: coordinate = coordinate_array(idx_flat). The (i, j, k) index can be converted to this flat index as follows: idx_flat = k*(npts_x*npts_y) + j*npts_x + i.

A curvilinear grid consists of cells of the same type. This type is determined by the dimensionality of the dataset (based on the extents) and can be either vertex (0D), line (1D), quad (2D) or hexahedron (3D).

AMR Dataset

Figure 3.7 AMR dataset

VTK natively supports Berger-Oliger-type AMR (Adaptive Mesh Refinement) datasets, as shown in Figure 3.7. An AMR dataset is essentially a collection of uniform rectilinear grids grouped under increasing refinement ratios (decreasing spacing). VTK's AMR dataset does not force any constraint on whether and how these grids should overlap.
However, it provides support for masking (blanking) sub-regions of the rectilinear grids using an array of bytes. This allows VTK to process overlapping grids with minimal artifacts. VTK can automatically generate the masking arrays for Berger-Oliger-compliant meshes.

Unstructured Grid

Figure 3.8 Unstructured grid

An unstructured grid such as Figure 3.8 is the most general primitive dataset type. It stores topology and point coordinates explicitly. Even though VTK uses a memory-efficient data structure to store the topology, an unstructured grid uses significantly more memory to represent its mesh. Therefore, use an unstructured grid only when you cannot represent your dataset as one of the above datasets. VTK supports a large number of cell types, all of which can exist (heterogeneously) within one unstructured grid. The full list of all cell types supported by VTK can be found in the file vtkCellType.h in the VTK source code. Here is the list as of when this document was written:

VTK_EMPTY_CELL, VTK_VERTEX, VTK_POLY_VERTEX, VTK_LINE, VTK_POLY_LINE, VTK_TRIANGLE, VTK_TRIANGLE_STRIP, VTK_POLYGON, VTK_PIXEL, VTK_QUAD, VTK_TETRA, VTK_VOXEL, VTK_HEXAHEDRON, VTK_WEDGE, VTK_PYRAMID, VTK_PENTAGONAL_PRISM, VTK_HEXAGONAL_PRISM, VTK_QUADRATIC_EDGE, VTK_QUADRATIC_TRIANGLE, VTK_QUADRATIC_QUAD, VTK_QUADRATIC_TETRA, VTK_QUADRATIC_HEXAHEDRON, VTK_QUADRATIC_WEDGE, VTK_QUADRATIC_PYRAMID, VTK_BIQUADRATIC_QUAD, VTK_TRIQUADRATIC_HEXAHEDRON, VTK_QUADRATIC_LINEAR_QUAD, VTK_QUADRATIC_LINEAR_WEDGE, VTK_BIQUADRATIC_QUADRATIC_WEDGE, VTK_BIQUADRATIC_QUADRATIC_HEXAHEDRON, VTK_BIQUADRATIC_TRIANGLE, VTK_CUBIC_LINE, VTK_CONVEX_POINT_SET, VTK_POLYHEDRON, VTK_PARAMETRIC_CURVE, VTK_PARAMETRIC_SURFACE, VTK_PARAMETRIC_TRI_SURFACE, VTK_PARAMETRIC_QUAD_SURFACE, VTK_PARAMETRIC_TETRA_REGION, VTK_PARAMETRIC_HEX_REGION

Many of these cell types are straightforward. For details, see the VTK documentation.
Polygonal Grid (Polydata)

Figure 3.9 Polygonal grid

A polydata such as Figure 3.9 is a specialized version of an unstructured grid designed for efficient rendering. It consists of 0D cells (vertices and polyvertices), 1D cells (lines and polylines) and 2D cells (polygons and triangle strips). Certain filters that generate only these cell types will generate a polydata. Examples include the Contour and Slice filters. An unstructured grid, as long as it contains only 2D cells supported by polydata, can be converted to a polydata using the Extract Surface filter. A polydata can be converted to an unstructured grid using Clean to Grid.

Table

Table 3.1

A table like Table 3.1 is a tabular dataset that consists of rows and columns. All chart views have been designed to work with tables. Therefore, all filters that can be shown within the chart views generate tables. Also, tables can be directly loaded using various file formats such as the comma-separated values (CSV) format. Tables can be converted to other datasets as long as they are of the right format. Filters that convert tables include Table to Points and Table to Structured Grid.

Multiblock Dataset

Figure 3.10 Multiblock dataset

You can think of a multi-block dataset as a tree of datasets where the leaf nodes are "simple" datasets. All of the data types described above, except AMR, are "simple" datasets. Multi-block datasets are used to group together datasets that are related. The relation between these datasets is not necessarily defined by ParaView. A multi-block dataset can represent an assembly of parts or a collection of meshes of different types from a coupled simulation.

Multi-block datasets can be loaded or created within ParaView using the Group filter. Note that the leaf nodes of a multi-block dataset do not all have to have the same attributes. If you apply a filter that requires an attribute, it will be applied only to blocks that have that attribute.
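The tree structure described above can be sketched with nested Python tuples, where leaf nodes stand in for "simple" datasets. The block names and the helper function are illustrative assumptions, not the VTK multi-block API:

```python
# A multi-block dataset as a tree: each node is (name, list_of_children);
# a node with no children is a leaf ("simple" dataset). Names are made up.
tree = ("assembly", [
    ("wing", []),                               # leaf block
    ("engine", [("fan", []), ("core", [])]),    # nested blocks
])

def leaf_blocks(node):
    """Collect the names of the leaf ("simple") datasets in the tree."""
    name, children = node
    if not children:
        return [name]
    blocks = []
    for child in children:
        blocks.extend(leaf_blocks(child))
    return blocks
```

A filter that requires a particular attribute would, per the note above, act only on the leaves that carry it, leaving the rest of the tree unchanged.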
Multipiece Dataset

Figure 3.11 Multipiece dataset

Multi-piece datasets such as Figure 3.11 are similar to multi-block datasets in that they group together simple datasets, with one key difference: multi-piece datasets group together datasets that are part of a whole mesh - datasets of the same type and with the same attributes. This data structure is used to collect datasets produced by a parallel simulation without having to append the meshes together. Note that there is no way to create a multi-piece dataset within ParaView; it can only come from certain readers. Furthermore, multi-piece datasets act, for the most part, as simple datasets. For example, it is not possible to extract individual pieces or obtain information about them.

Information Panel

Introduction

Clicking on the Information button on the Object Inspector will take you to the Information panel. The purpose of this panel is to provide you with information about the output of the currently selected source, reader or filter. The information on this panel is presented in several sections. We start by describing the sections that are applicable to all dataset types, then we describe the data-specific sections.

File Properties

Figure 3.12 File properties

If the current pipeline object is a reader, the top section will display the name of the file and its full path, as in Figure 3.12.

Data Statistics

Figure 3.13 Data statistics

The Statistics section displays high-level information about the dataset, including the type, the number of points and cells, and the total memory used. Note that the memory is for the dataset only and does not include memory used by the representation (for example, the polygonal mesh that may represent the surface). All of this information is for the current time step.

Array Information

Figure 3.14 Array information

The data shown in Figure 3.14 shows the association (point, cell or global), name, type and range of each array in the dataset.
In the example, the top three attributes are point arrays, the middle three are cell arrays and the bottom three are global (field) arrays. Note that for vectors, the range of each component is shown separately. If the range information does not fit in the frame, a tooltip displays all of the values.

Bounds

Figure 3.15 Bounds information

The Bounds section displays the spatial bounds of the dataset: the coordinates of the smallest axis-aligned hexahedron that contains the dataset, as well as its dimensions, as in Figure 3.15.

Timesteps

Figure 3.16 Time section showing timesteps

The Time section (see Figure 3.16) shows the index and value of all time steps available in a file or producible by a source. Note that this section displays values only when a reader or source is selected, even though filters downstream of such sources also have time-varying outputs. Also note that usually only one time step is loaded at a time.

Extents

Figure 3.17 Extents

The Extents section, seen in Figure 3.17, is available only for structured datasets (uniform rectilinear grid, rectilinear grid and curvilinear grid). It displays the extent of each of the three indices that define a structured dataset. It also displays the dimensions (the number of points) in each direction. Note that these are logical extents, so the labels X Extent, Y Extent and Z Extent can be somewhat misleading for curvilinear grids.

Data Hierarchy (AMR)

Figure 3.18 Data hierarchy for AMR

For AMR datasets, the Data Hierarchy section, Figure 3.18, shows the refinement levels available in the dataset. You can drill down to each level by clicking on it; all of the other sections immediately update for the selected level. For information on the whole dataset, select the top parent called "AMR Dataset."
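The information shown on this panel can also be queried programmatically through the data-information API in paraview.simple. This is a sketch, to be run in ParaView's Python console or pvpython with any pipeline object active; the method names follow the vtkPVDataInformation wrapper, which you should verify against your ParaView version:

```python
from paraview.simple import *

source = GetActiveSource()    # any source, reader or filter
UpdatePipeline()              # make sure the output is up to date

info = source.GetDataInformation()

# Data Statistics: counts for the current time step.
print("points:", info.GetNumberOfPoints())
print("cells: ", info.GetNumberOfCells())

# Array Information: name and per-component range of every point array.
pd = info.GetPointDataInformation()
for i in range(pd.GetNumberOfArrays()):
    array = pd.GetArrayInformation(i)
    for c in range(array.GetNumberOfComponents()):
        print(array.GetName(), "component", c,
              "range:", array.GetComponentRange(c))

# Bounds: (xmin, xmax, ymin, ymax, zmin, zmax).
print("bounds:", info.GetBounds())

# Timesteps: index and value, for readers and time-aware sources only.
if hasattr(source, 'TimestepValues'):
    for index, value in enumerate(source.TimestepValues):
        print("timestep", index, "=", value)

# Extents: logical extents, meaningful for structured datasets.
print("extent:", info.GetExtent())
```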
Data Hierarchy (Multi-Block Dataset)

Figure 3.19 Data hierarchy for multi-block datasets

For multi-block datasets, the Data Hierarchy section shows the tree that forms the multi-block dataset. By default, only the first-level children are shown; you can drill down further by clicking on the small triangle to the left of each node. You can drill down to each block by clicking on it; all of the other sections immediately update for the selected block. For information on the whole dataset, select the top parent called "Multi-Block Dataset".

Statistics Inspector

Figure 3.20 The Statistics Inspector

The Statistics Inspector (View | Statistics Inspector) can be used to obtain high-level information about the data produced by all sources, readers and filters in the ParaView pipeline. Some of this information is also available through the Information panel. The information presented in the Statistics Inspector includes the name of the pipeline object that produced the data, the data type, the number of cells and points, the memory used by the dataset, the memory used by the visual representation of the dataset (usually polygonal data), the spatial bounds of the dataset, and the minimum and maximum time values for all available time steps. Note that the selection in the Statistics Inspector is linked with the Pipeline Browser: selecting an entry in the Statistics Inspector will update the Pipeline Browser and vice versa.

The Statistics Inspector shows the memory needed/used by every pipeline filter or source. However, the actual memory used may not align with this information because of the following caveats:

1. Shallow-Copied Data: Several filters that don't change the topology, such as Calculator and Shrink, pass the attribute arrays along without copying any of the heavy data (known as shallow copying). In that case the Statistics Inspector will overestimate the memory used.

2.
Memory for Data Structures: All data in VTK/ParaView is maintained in data structures, i.e. vtkDataObject subclasses, and any data structure requires memory. Generally, this memory is small compared to the heavy data, i.e. the memory needed to store the topology, geometry, attribute arrays, etc. However, for composite datasets, and especially for AMR datasets with a very large number of blocks (on the order of 100K blocks), the memory used for this meta-data starts growing and can no longer be ignored. Neither the Statistics Inspector nor the Information panel takes this memory into consideration, and hence in such cases they underestimate the memory needed.

ParaView 3.14 adds the Memory Inspector widget, which lets users directly inspect the memory used on all of the ParaView processes.

Memory Inspector

The ParaView Memory Inspector panel provides users a convenient way to monitor ParaView's memory usage during interactive visualization, and provides developers a point-and-click interface for attaching a debugger to local or remote client and server processes. As explained earlier, both the Information panel and the Statistics Inspector are prone to over- and underestimating the total memory used for the current pipeline. The Memory Inspector addresses those issues through direct queries to the operating system. A number of diagnostic statistics are gathered and reported: for example, the total memory used by all processes on a per-host basis, the total cumulative memory used by ParaView on a per-host basis, and the individual memory use by each ParaView rank. When memory consumption reaches a critical level, either cumulatively on the host or in an individual rank, the corresponding GUI element turns red, alerting the user that the job is in danger of being shut down.
This gives users a chance to save state and restart the job with more nodes, avoiding losing their work. On the flip side, knowing when you are not close to using the full capacity of available memory can help conserve computational resources by running smaller jobs; of course, the memory footprint is only one factor in determining the optimal run size.

Figure: The main UI elements of the Memory Inspector panel. A: Process Groups, B: Per-Host statistics, C: Per-Rank statistics, and D: Update controls.

User Interface and Layout

The Memory Inspector panel displays information about the current memory usage on the client and server hosts. The figure above shows the main UI elements, labeled A-D. A number of additional features are provided via specialized context menus accessible from the Client and Server groups', Hosts', and Ranks' UI elements. The main UI elements are:

A. Process Groups

Client: There is always a client group, which reports statistics about the ParaView client.

Server: When running in client-server mode, a server group reports statistics about the hosts where pvserver processes are running.

Data Server: When running in client-data-render server mode, a data server group reports statistics about the hosts where pvdataserver processes are running.

Render Server: When running in client-data-render server mode, a render server group reports statistics about the hosts where pvrenderserver processes are running.

B. Per-Host Statistics

Per-host statistics are reported for each host where a ParaView process is running. Hosts are organized by host name, which is shown in the first column. Two statistics are reported: 1) the total memory used by all processes on the host, and 2) ParaView's cumulative usage on the host. The absolute value is printed in a bar that shows the percentage of the total available memory used.
On systems where job-wide resource limits are enforced, ParaView is made aware of the limits via the PV_HOST_MEMORY_LIMIT environment variable, in which case ParaView's cumulative percent used is computed using the smaller of the host total and the resource limit.

C. Per-Rank Statistics

Per-rank statistics are reported for each rank on each host. Ranks are organized by MPI rank number and process id, which are shown in the first and second columns. Each rank's individual memory usage is reported as a percentage of the total available to it. On systems where either job-wide or per-process resource limits are enforced, ParaView is made aware of the limits via the PV_PROC_MEMORY_LIMIT environment variable or through standard usage of Unix resource limits. The rank's percent used is computed using the smallest of the host total, the job-wide limit, and the Unix resource limit.

D. Update Controls

By default, when the panel is visible, memory use statistics are updated automatically as pipeline objects are created, modified, or destroyed, and after the scene is rendered. Updates may be triggered manually by using the refresh button. Automatic updates may be disabled by un-checking the Auto-update check box. Queries to remote systems have proven to be very fast even for fairly large jobs, hence the auto-update feature is enabled by default.

Host Properties Dialog

The Host context menu provides a Host Properties dialog, which reports various system details such as the OS version, the CPU version, and the memory installed and available to the host context and process context. While the Memory Inspector panel reports memory use as a percentage of the memory available in the given context, the Host Properties dialog reports the total installed and available in each context. Comparing the installed and available memory can be used to determine whether you are affected by resource limits.
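On Unix systems, the standard per-process resource limits mentioned above can be inspected directly from Python with the standard-library resource module. This is a minimal sketch, independent of ParaView itself:

```python
import resource

def describe(limit):
    """Render a Unix resource limit value as human-readable text."""
    return "unlimited" if limit == resource.RLIM_INFINITY else "%d bytes" % limit

# RLIMIT_AS is the address-space (virtual memory) limit, one of the
# standard Unix resource limits that can constrain a ParaView process.
soft, hard = resource.getrlimit(resource.RLIMIT_AS)
print("soft limit:", describe(soft))
print("hard limit:", describe(hard))
```

Comparing these limits against the installed memory reported by the Host Properties dialog shows whether a process is constrained by an rlimit rather than by physical memory.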
Figure: Host Properties dialog

Advanced Debugging Features

Remote Commands

Figure: The remote command dialog.

The Memory Inspector panel provides a remote (or local) command feature, allowing one to execute a shell command on a given host. This feature is exposed via a specialized Rank item context menu. Because information such as a rank's process id is available, individual processes may be targeted. For example, this allows one to quickly attach a debugger to a server process running on a remote cluster. If the target rank is not on the same host as the client, then the command is considered remote; otherwise it is considered local. Therefore, remote commands are executed via ssh, while local commands are not.

A list of command templates is maintained. In addition to a number of pre-defined command templates, users may add templates or edit existing ones. The default templates allow one to:

• attach gdb to the selected process
• run top on the host of the selected process
• send a signal to the selected process

Prior to execution, the selected template is parsed and a list of special tokens is replaced with run-time determined or user-provided values. User-provided values can be set and modified in the dialog's parameter group. The command, with tokens replaced, is shown for verification in the dialog's preview pane. The following tokens are available and may be used in command templates as needed:

$TERM_EXEC$ The terminal program that will be used to execute commands. On Unix systems xterm is typically used, while on Windows systems cmd.exe is typical. If the program is not in the default path, then the full path must be specified.

$TERM_OPTS$ Command line arguments for the terminal program. On Unix these may be used to set the terminal's window title, size, colors, and so on.

$SSH_EXEC$ The program used to execute remote commands. On Unix this is typically ssh, while on Windows one option is plink.exe.
If the program is not in the default path, then the full path must be specified.

$FE_URL$ The ssh URL to use when the remote processes are on compute nodes that are not visible to the outside world. This token is used to construct command templates where two ssh hops are made to execute the command.

$PV_HOST$ The hostname where the selected process is running.

$PV_PID$ The process id of the selected process.

Note: On Windows, the debugging tools found in Microsoft's SDK need to be installed in addition to Visual Studio (e.g. windbg.exe). The ssh program plink.exe for Windows doesn't parse the ANSI escape codes that are used by Unix shell programs. In general, the Windows-specific templates need some polishing.

Stack Trace Signal Handler

The Process Group's context menu provides a back-trace signal handler option. When enabled, a signal handler is installed that will catch signals such as SEGV, TERM, INT, and ABORT and print a stack trace before the process exits. Once the signal handler is enabled, one may trigger a stack trace by explicitly sending a signal. The stack trace signal handler can be used to collect information about crashes, or to trigger a stack trace during deadlocks when it's not possible to ssh into the compute nodes. Sites that restrict users' ssh access to compute nodes often provide a way to signal running processes from the login node. Note that this feature is only available on systems that provide support for POSIX signals, and currently stack traces are implemented only for GNU-compatible compilers.

Compilation and Installation Considerations

If the system on which ParaView will run enforces special resource limits, such as job-wide memory use limits or non-standard per-process memory limits, then the system administrators need to provide this information to the running instances of ParaView via the following environment variables. For example, these could be set in the batch-system launch scripts.
PV_HOST_MEMORY_LIMIT for reporting host-wide resource limits.

PV_PROC_MEMORY_LIMIT for reporting per-process memory limits that are not enforced via standard Unix resource limits.

A few of the debugging features (such as printing a stack trace) require debug symbols. These features will work best when ParaView is built with CMAKE_BUILD_TYPE=Debug or, for release builds, CMAKE_BUILD_TYPE=RelWithDebInfo.

Multi-block Inspector

Introduction

The Multi-block Inspector panel allows users to change the rendering and display properties of individual blocks within a multi-block dataset.

Multi-block Inspector Panel

Block Visibility

The visibility of individual blocks can be changed by toggling the check box next to their names. By default, blocks inherit the visibility status of their parent block; thus, changing the visibility of a non-leaf block also changes the visibility of each of its child blocks.

Selection Linking

Selections made in the render view are linked with the items in the Multi-block Inspector and vice versa. Using block selection (keyboard shortcut: 'b') and clicking on a block will result in that block being highlighted in the tree view.

Block Selection Linking

Context Menu

A multi-block aware context menu in the 3D render view allows individual block properties to be changed by right-clicking on a particular block.

Multi-block Context Menu

Displaying Data

Views, Representations and Color Mapping

This chapter covers the different mechanisms in ParaView for visualizing data. Through these visualizations, users are able to gain unique insight into their data.

Understanding Views

Views

When the ParaView application starts up, you see a 3D viewport with an axes widget at the center. This is a view. In ParaView, views are frames in which data can be seen. There are different types of views.
The default view is a 3D view, which shows a rendering of the geometry extracted from the data, or of volumes or slices, in a 3D scene. You can change the default view in the Settings dialog (Edit | Settings; on Mac OS X, ParaView | Preferences).

Figure 4.1 ParaView view screen

There may be parameters available to the user that control how the data is displayed; e.g. in the 3D view, the data can be displayed as wireframe or surfaces, and the user can select the color of the surface or use a scalar for coloring. All these options are known as display properties and are accessible from the Display tab in the Object Inspector. Since there can be multiple datasets shown in a view, as well as multiple views, the Display tab shows the properties for the active pipeline object (changed by using the Pipeline Browser, for example) in the active view.

Multiple Views

ParaView supports showing multiple views side by side. To create multiple views, use the controls in the top right corner of the view to split the frame vertically or horizontally. You can also maximize a particular view to temporarily hide other views. Once a view frame is split, you will see a list of buttons showing the different types of views that you can create to place in that frame; simply click a button to create the view of your choice. You can swap view positions by dragging the title bar of one view frame and dropping it onto the title bar of another.

Figure 4.2 View options in ParaView

Starting with ParaView 3.14, users can create multiple tabs to hold a grid of views. When in tile-display mode, only the active tab is shown on the tile display. Thus, this can be used as an easy mechanism for switching the views shown on a tile display during presentations.
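View frames can also be split and populated from Python. This sketch uses the layout proxy exposed through paraview.simple; the SplitHorizontal/AssignView method names come from the vtkSMViewLayoutProxy API and should be treated as an assumption to check against your ParaView version:

```python
from paraview.simple import *

render_view = GetActiveViewOrCreate('RenderView')
layout = GetLayout()

# Split cell 0 of the layout horizontally; SplitHorizontal returns the
# index of the first of the two resulting cells (the existing view
# stays in that cell).
left_cell = layout.SplitHorizontal(0, 0.5)

# Create a line chart view and place it in the sibling cell,
# next to the 3D view.
chart = CreateView('XYChartView')
layout.AssignView(left_cell + 1, chart)
```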
Figure 4.3 Multiple tabs for laying out views in ParaView

Some filters, such as Plot Over Line, may automatically split the view frame and show their data in a particular type of view suitable for the data they generate.

Active View

Once you have multiple views, the active view is indicated by a colored border around the view frame. Several menus as well as toolbar buttons affect the active view alone. Additionally, they may become enabled/disabled based on whether the corresponding action is supported by the active view. The Display tab affects the active view. Similarly, the eye icon in the Pipeline Browser, next to each pipeline object, indicates the visibility state of that object in the active view. When a new filter, source or reader is created, it will be displayed by default in the active view if possible; otherwise, a new view will be created.

Types of Views

This section covers the different types of views available in ParaView. For each view, we will discuss the controls available to change the view parameters using View Settings, as well as the parameters associated with the Display tab for showing data in that view.

3D View

The 3D view is used to show the surface or volume rendering of data in a 3D world. This is the most commonly used view type. When running in client-server mode, the 3D view can render data either by bringing the geometry to the client and then rendering it there, or by rendering it on the server (possibly in parallel) and then delivering the composited images to the client. Refer to the Client-Server Visualization chapter for details.

This view can also be used to visualize 2D datasets by switching its interaction mode to the 2D mode. This can be achieved by clicking on the button labelled "3D" in the view's local toolbar. The label will automatically change to 2D, and 2D interaction as well as parallel projection will be used.
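The interaction mode and related per-view settings can also be changed from Python. A sketch using common RenderView proxy properties from paraview.simple (run inside ParaView's Python console or pvpython):

```python
from paraview.simple import *

view = GetActiveViewOrCreate('RenderView')

# Switch the view into 2D interaction mode (the "3D"/"2D" toolbar
# button does the same), and use parallel projection with it.
view.InteractionMode = '2D'
view.CameraParallelProjection = 1

# A couple of related per-view settings (see View Settings below):
view.Background = [0.1, 0.1, 0.2]       # solid background color
view.OrientationAxesVisibility = 1      # orientation widget on/off
view.CenterAxesVisibility = 0           # center axes on/off
Render()
```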
Interaction

Interacting with the 3D view will typically update the camera, making it possible to explore the visualization scene. The default buttons are shown in Table 4.1, and they can be changed using the Application Settings dialog.

Table 4.1

Modifier    Left Button    Middle Button    Right Button
(none)      Rotate         Pan              Zoom
Shift       Roll           Rotate           Pan
Control     Zoom           Rotate           Zoom

This view can dynamically switch to a 2D mode, which follows the interactions shown in Table 4.2; these, too, can be changed using the Application Settings dialog.

Table 4.2

Modifier    Left Button    Middle Button    Right Button
(none)      Pan            Pan              Zoom
Shift       Zoom           Zoom             Zoom
Control     Zoom           Zoom             Pan

This view supports selection. You can select cells or points either on the surface or within a frustum. Selecting cells or points makes it possible to extract them for further inspection or to label them. Details about data querying and selection can be found in the Quantitative Analysis chapter.

View Settings

The View Settings dialog, accessible through the Edit | View Settings menu or the tool button in the left corner of the view, can be used to change the view settings per view.

General

Figure 4.4 General tab in the View Settings menu

The General tab allows the user to choose the background color. You can use a solid color, a gradient, or a background image. By default, the camera uses perspective projection; to switch to parallel projection, check the Use Parallel Projection checkbox in this panel.

Lights

Figure 4.5 Lights tab in the View Settings menu

The 3D view requires lights to illuminate the geometry being rendered in the scene. You can control these lights using this pane.

Annotation

Figure 4.6 Annotation tab in the View Settings menu

The Annotation pane enables control of the visibility of the center axes and the orientation widget.
Users can also make the orientation widget interactive, so that they can manually place the widget at a location of their liking.

Display Properties

Users can control how the data from any source or filter is shown in this view using the Display tab. This section covers the various options available to a user for controlling the appearance of the rendering in the 3D view.

View

The View menu has three options for controlling how the data is viewed. These are described in Table 4.3.

Figure 4.6 View menu

Table 4.3

Visible: Checkbox used to toggle the visibility of the data in the view. If it is disabled, the data cannot be shown in this view.

Selectable: Checkbox used to toggle whether the data gets selected when using the selection mechanism for selecting and sub-setting data.

Zoom to Data: Click this button to zoom the camera so that the dataset fits completely within the viewport.

Color

Figure 4.8 Color options

The Color group allows users to pick the scalar to color with, or to set a fixed solid color for the rendering. The options in Figure 4.8 are described in detail in Table 4.4.

Table 4.4

Interpolate Scalars: If selected, the scalars are interpolated within polygons and the scalar mapping happens on a per-pixel basis. If not selected, color mapping happens at the points and the colors are interpolated, which is typically less accurate. This only has an effect when coloring with point arrays, and it is disabled when coloring using a solid color.

Map Scalars: If the data array can be directly interpreted as colors, you can uncheck this to bypass the lookup table. Otherwise, when selected, a lookup table is used to map scalars to colors. This is disabled when the array is not of a type that can be interpreted as colors (i.e. anything other than vtkUnsignedCharArray).

Apply Texture: This feature makes it possible to apply a texture over the surface.
This requires that the data has texture coordinates. You can use filters like Texture Map to Sphere, Texture Map to Cylinder or Texture Map to Plane to generate texture coordinates when they are not present in the data. To load a texture, select Load from the combo box, which pops up a dialog allowing you to choose an image; otherwise, select one of the already-loaded textures listed in the combo box.

Color By: This feature enables coloring of the surface/volume. Either choose the array to color with, or set the solid color to use. When volume rendering, solid coloring is not possible; you must choose a data array to volume render with.

Set solid color: Used to set the solid color. This is available only when Color By is set to use Solid Color. ParaView defines a notion of a color palette consisting of different color categories. To choose a color from one of these predefined categories, click the arrow next to this button; it opens a drop-down with options to choose from. If you use a color from the palette, it is possible to globally change the color by changing the color palette, e.g. for printing or for display on screen.

Edit Color Map...: You can edit the color map or lookup table by clicking the Edit Color Map button. It is only shown when an array is chosen in the Color By combo box.

Slice

Figure 4.9 Slice options

The slice controls are available only for image datasets (uniform rectilinear grids) when the representation type is Slice. The representation type is controlled using the Style group on the Display tab. These controls allow the user to pick the slice direction as well as the slice offset.

Annotation

Figure 4.10 Annotation options

The cube axes is an annotation box that can be used to show a scale around the dataset. Use the Show cube axes checkbox to toggle its visibility. You can further control the appearance of the cube axes by clicking Edit once the cube axes is visible.
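The coloring choices made through the Color group can be replicated in Python with the ColorBy helper from paraview.simple; a sketch, in which the array name 'Temperature' is a placeholder for an array present in your data:

```python
from paraview.simple import *

source = GetActiveSource()
view = GetActiveViewOrCreate('RenderView')
display = Show(source, view)

# Color the surface by a point array; 'Temperature' is a placeholder.
ColorBy(display, ('POINTS', 'Temperature'))

# Show a scalar bar and rescale the lookup table to the data range.
display.SetScalarBarVisibility(view, True)
display.RescaleTransferFunctionToDataRange(True)

# Or switch back to a fixed solid color instead.
ColorBy(display, None)
display.DiffuseColor = [1.0, 0.5, 0.0]
Render()
```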
Figure 4.11 Show cube axes example

Style

Figure 4.12 shows the Style dialog box. The options in this dialog box are described in detail in Table 4.5 below.

Figure 4.12 Style dialog box

Table 4.5

Representation: Use this to change how the data is represented, i.e. as a surface, volume, wireframe, points, or surface with edges.

Interpolation: Choose the method used to shade the geometry and interpolate point attributes.

Point Size: If your dataset contains points or vertices, this adjusts the diameter of the rendered points. It also affects the point size when Representation is Points.

Line Width: If your dataset contains lines or edges, this adjusts the width of the rendered lines. It also affects the rendered line width when Representation is Wireframe or Surface With Edges.

Opacity: Set the opacity of the dataset's geometry. ParaView uses hardware-assisted depth peeling, whenever possible, to remove artifacts due to incorrect sorting order of rendered primitives.

Volume Mapper: When Representation is Volume, this combo box allows the user to choose a specific volume rendering technique. The techniques available change based on the type of the dataset.

Set Edge Color: This is available when Representation is Surface With Edges. It allows the user to pick the color used for the edges rendered over the surface.

Backface Style

Figure 4.13 Backface Style dialog box

The Backface Style dialog box allows the user to define backface properties. In computer graphics, a backface is the face of a geometric primitive whose normal points away from the camera. Using these settings, users can choose to hide the backface or the front face, or specify different characteristics for the two faces.

Transformation

Figure 4.14 Transformation dialog box

These settings allow the user to transform the rendered geometry without actually transforming the data.
Note that since this transformation happens during rendering, any filters that you apply to this data source will still be working on the original, untransformed data. Use the Transform filter if you want to transform the data instead.

2D View

This view no longer exists, as it has been replaced by a more flexible 3D view that can switch between 3D and 2D modes dynamically. For more information, please see the 3D View section.

Spreadsheet View

The Spreadsheet View is used to inspect raw data in a spreadsheet. When running in client-server mode, to avoid delivering the entire dataset to the client (since the data can be very large), this view streams only the visible chunks of the data to the client; as the user scrolls around the spreadsheet, new data chunks are fetched. Unlike some other views, this view can only show one dataset at a time. For composite datasets, it shows only one block at a time; you can select the block to show using the Display tab.

Interaction

In terms of usability, this view behaves like the spreadsheets found in applications like Microsoft Excel or Apple Pages:

• You can scroll up and down to inspect new rows.
• You can sort any column by clicking on its header; repeated clicking on the column header toggles the sorting order. When running in parallel, ParaView uses sophisticated parallel sorting algorithms to avoid the memory and communication overheads of sorting large, distributed datasets.
• You can double-click on a column header to toggle a mode in which only that column is visible. This reduces clutter when you are interested in a single attribute array.
• You can click on rows to select the corresponding elements, i.e. cells or points. This is not available in "show selected only" mode. Also, when you create a selection in another view, e.g. the 3D view, the rows corresponding to the selected elements will be highlighted.
Header

Unlike other views, the Spreadsheet View has a header. This header provides quick access to some of the commonly used functionality in this view.

Figure 4.17 Spreadsheet View header

Since this view can only show one dataset at a time, you can quickly choose the dataset to show using the Showing combo box. You can choose the attribute type to display, i.e. point attributes or cell attributes, using the Attribute combo box. The Precision option controls the number of digits to show after the decimal point for floating point numbers. Lastly, the last button puts the view in a mode where it only shows the selected rows. This is useful when you create a selection using another view, such as the 3D view, and want to inspect the details of the selected cells or points.

View Settings

Currently, no user-settable settings are available for this view.

Display Properties

Figure 4.18 Display tab in the Object Inspector

The display properties for this view provide the same functionality as the header. Additionally, when dealing with composite datasets, the Display tab shows a widget allowing the user to choose the block to display in the view.

Line Chart View

A traditional 2D line plot is often the best option to show trends in small quantities of data. A line plot is also a good choice to examine relationships between different data values that vary over the same domain. Any reader, source, or filter that produces plottable data can be displayed in an XY plot view. ParaView stores its plottable data in a table (vtkTable). Using the display properties, users can choose which columns in the table are plotted on the X and Y axes. As with the other view types, what is displayed in the active XY plot view is controlled with the eye icons in the Pipeline Browser panel. When an XY plot view is active, only those filters that produce plottable output have eye icons.
The XY plot view is the preferred view type for the Plot over Line, Plot Point over Time, Plot Cell over Time, Plot Field Variable over Time, and Probe Location over Time filters. Creating any one of these filters will automatically create an XY plot view for displaying its output. Figure 4.19 shows a plot of the data values within a volume as they vary along three separate paths. The top curve comes from the line running across the center of the volume, where the largest values lie; the other two curves come from lines running near the edges of the volume.

Unlike the 3D and 2D render views, the charting views are client-side views, i.e. they deliver the data to be plotted to the client. Hence, by default, ParaView only allows results from certain standard filters, such as Plot over Line, in the line chart view. However, it is also possible to plot the cell or point data arrays of any dataset by applying the Plot Data filter.

Figure 4.19 Plot of data values within a volume

Interaction

The line chart view supports the following interaction modes:

• Right-click and drag to pan.
• Left-click and drag to select.
• Middle-click and drag to zoom to the region drawn.
• Hover over any line in the plot to see the details of the data at that location.

To reset the view, use the Reset Camera button in the Camera Toolbar.

View Settings

The View Settings for the Line Chart view enable the user to control the appearance of the chart, including titles, axis positions, etc. There are several pages available in this dialog: the General page controls the overall appearance of the chart, while the other pages control the appearance of each of the axes.

General Settings Page

Figure 4.20 General Settings panel

This page allows users to edit settings not related to any particular axis.

Chart Title: Specify the text and characteristics (such as color and font) of the title for the entire chart.
To show the current animation time in the title text, simply use the keyword ${TIME}.

Chart Legend

When data is plotted in the view, ParaView shows a legend. Users can change the location of the legend.

Tooltip

Specify the data formatting for the hover tooltips. The default, Standard, switches between scientific and fixed-point notation based on the data values.

Axis Settings Page

On this page you can change the properties of a particular axis. Four pages are provided, one for each of the axes. By clicking on the name of an axis, you can access the settings page for the corresponding axis.

Figure 4.21 Axis Settings panel

Left/Bottom/Right/Top Axis
• Show Axis Grid: controls whether a grid is drawn perpendicular to this axis.
• Colors: controls the axis and grid colors.

Axis Title

Users can choose a title text and its appearance for the selected axis.

Axis Labels

Axis labels refers to the labels drawn at tick marks along the axis. Users can control whether the labels are rendered, as well as their appearance, including color, font, and formatting. Users can control the locations at which the labels are rendered on the Layout page for the axis.
• Show Axis Labels When Space is Available: controls label visibility along this axis.
• Font and Color: controls the label font and color.
• Notation: allows the user to choose between Mixed, Scientific, and Fixed point notation for numbers.
• Precision: controls the precision after the decimal point in Scientific and Fixed notation.

Axis Layout Page

This page allows the user to change the axis range as well as the label locations for the axis.

Figure 4.22 Axis Layout panel

Axis Range

Controls how the data is plotted along this axis.
• Use Logarithmic Scale When Available: Check this to use a log scale unless the data contains numbers <= 0.
• Compute axis range automatically: Select this button to let the chart use the optimal range and spacing for this axis.
The chart will adjust the range automatically every time the data displayed in the view changes.
• Specify axis range explicitly: Select this button to specify the axis range explicitly. When selected, the user can enter the minimum and maximum values for the axis. The range will not change even when the data displayed in the view changes. However, if the user manually interacts with the view (i.e. pans or zooms), then the specified range is updated based on the user's interactions.

Axis Labels

Controls how the labels are rendered along this axis. Users can control the labeling independently of the axis range.
• Compute axis labels automatically: Select this button to let the chart pick label locations optimally based on the viewport size and axis range.
• Specify axis labels explicitly: Select this button to explicitly specify the data values at which labels should be drawn.

Display Properties

Display Properties for the Line Chart view allow the user to choose which arrays are plotted along which of the axes, as well as the appearance of each of the lines, such as its color, thickness, and style.

Figure 4.24 Display Properties within the Object Inspector

• Attribute Mode: picks which attribute arrays to plot, i.e. point arrays, cell arrays, etc.
• X Axis Data: controls the array to use as the X axis.
• Use Array Index From Y Axis Data: when checked, ParaView uses the index of each value in the data arrays plotted on Y as the X coordinate.
• Use Data Array: when checked, the user can pick an array to be interpreted as the X coordinate.
• Line Series: controls the properties of each of the arrays plotted along the Y axis.
• Variable: check the variable to be plotted.
• Legend Name: click to change the name used in the legend for this array.

Select any of the series in the list to change the following properties for that series. You can select multiple entries to change multiple series.
• Line Color: controls the color for the series.
• Line Thickness: controls the thickness for the series.
• Line Style: controls the style for the line.
• Marker Style: controls the style used for markers, which can be placed at every data point.

Bar Chart View

Traditional 2D graphs present some types of information much more readily than 3D renderings do; they are usually the best choice for displaying one- and two-dimensional data. The bar chart view is very useful for examining the relative quantities of different values within data, for example. The bar chart view is used most frequently to display the output of the histogram filter. This filter divides the range of a component of a specified array from the input data set into a specified number of bins, producing a simple sequence of the number of values in the range of each bin. A bar chart is the natural choice for displaying this type of data. In fact, the bar chart view is the preferred view type for the histogram filter. Filters that have a preferred view type will create a view of the preferred type whenever they are instantiated. When the new view is created for the histogram filter, the pre-existing 3D view is made smaller to make space for the new chart view. The chart view then becomes the active view, which is denoted with a red border around the view in the display area. Clicking on any view window makes it the active view. The contents of the Object Inspector and Pipeline Browser panels change, and menu items are enabled or disabled, whenever a different view becomes active, to reflect the active view's settings and available controls. In this way, you can independently control numerous views. Simply make a view active, and then use the rest of the GUI to change it. By default, the changes you make will only affect the active view.
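The histogram filter's behavior described above can be sketched in plain Python. This is an illustrative model, not ParaView's implementation: it produces equal-width bin boundaries plus a count per bin, mirroring the bin-boundaries-plus-cell-data layout the bar chart view consumes.

```python
def histogram(values, bins=10):
    """Mimic the histogram filter's output: equal-width bin boundaries
    spanning the data range, and a count of input values per bin."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0     # guard against a zero-width range
    boundaries = [lo + i * width for i in range(bins + 1)]
    counts = [0] * bins
    for v in values:
        # Clamp so the maximum value falls into the last bin, not past it.
        i = min(int((v - lo) / width), bins - 1)
        counts[i] += 1
    return boundaries, counts

bounds, counts = histogram([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], bins=5)
print(counts)  # → [2, 2, 2, 2, 2]
```

The `boundaries` list plays the role of the X locations (bin edges) and `counts` the per-bin cell data described in the next paragraph.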
As with the 3D View, the visibility of different datasets within a bar chart view is displayed and controlled by the eye icons in the Pipeline Browser. The bar chart view can only display datasets that contain chartable data, and when a bar chart view is active, the Pipeline Browser will only display the eye icon next to those datasets that can be charted. ParaView stores its chartable data in 1D rectilinear grids, where the X locations of the grid contain the bin boundaries, and the cell data contain the counts within each bin. Any source or filter that produces data in this format can be displayed in the bar chart view. Figure 4.25 shows a histogram of the values from a slice of a data set. The Edit View Options dialog for chart views allows you to create labels, titles, and legends for the chart and to control the range and scaling of each axis. The Interaction, Display Properties, and View Settings for this view are similar to those for the Line Chart.

Figure 4.25 Histogram of values from a slice of a dataset

Plot Matrix View

The new Plot Matrix View (PMV) allows visualization of multiple dimensions of your data in one compact form. It also allows you to spot patterns in the small scatter plots, change focus to the plots of interest, and perform basic selection. It is still at an early stage, but the basic features should already be usable, including iterative selection for all charts (add, remove, and toggle selections with Ctrl or Shift modifiers on mouse actions too). The PMV can be used to manage the array of plots and the mapping of vtkTable columns to the inputs of the charts. Any filters or sources with an output of vtkTable type should be able to use this view type to display their output.
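The grid of charts a plot-matrix view builds — every table column plotted against every other, plus a per-column histogram — can be sketched with a few lines of Python. This is an illustrative layout model only; `matrix_layout` is a hypothetical helper, not a ParaView API.

```python
from itertools import combinations

def matrix_layout(columns):
    """Sketch of a plot-matrix layout: each unordered pair of columns
    yields one scatter plot, and each single column yields one histogram
    (the diagonal of the matrix)."""
    scatter = [(x, y) for x, y in combinations(columns, 2)]
    histograms = [(c,) for c in columns]
    return scatter, histograms

scatter, hists = matrix_layout(["pressure", "temperature", "velocity"])
print(scatter)    # → [('pressure', 'temperature'), ('pressure', 'velocity'), ('temperature', 'velocity')]
print(len(hists)) # → 3
```

With n columns this gives n(n-1)/2 scatter plots and n histograms, which is why the view grows quickly with table width.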
The PMV includes a scatter plot, which consists of charts generated by plotting all vtkTable columns against each other; bar charts (histograms) of vtkTable columns; and an active plot, which shows the chart currently selected in the scatter plot. The view offers new selection interactions for the charts, which are described below in detail. As with the other view types, what is displayed in the active PMV is indicated by, and controllable with, the eye icons in the Pipeline Browser panel. Like XY chart views, the PMVs are also client-side views, i.e. they deliver the data to be plotted to the client.

Plot Matrix View: plots of data values in a vtkTable

Interaction

The scatter plot does not support direct user interactions on its charts, except clicking. When any chart within the scatter plot is clicked, the active plot (the big chart in the top right corner) is updated to show the selected chart, and the user can interact with that big chart as described below. The Active Plot in the PMV supports the following interaction modes by default:
• Left-click and drag to pan.
• Middle-button to zoom.
• Hover over any point in the plot to see the details for the data at that location.

There are also four selection modes that change the default user interactions. These modes can be invoked by clicking one of the buttons shown at the top left corner of the PMV window, where the "View Settings" and camera buttons are.

Selection Modes
• Start Selection makes left-click and drag select.
• Add selection will select and add to the current selection.
• Subtract selection will subtract from the current selection.
• Toggle selection will toggle the current selection.

View Settings

The View Settings for the PMV enable the user to control the appearance of the PMV, including the titles of the active plot, the plot/histogram colors, and the border margin and gutter size of the scatter plot.
There are several pages available in this dialog. The General page controls the overall appearance of the chart, while the other pages control the appearance of each of the plot types.

General Settings Page

Plot Matrix View General Settings

This page allows users to change the title, border margins, and layout spacings. To show the current animation time in the title text, simply use the keyword ${TIME}. Users can further change the font and alignment for the title.

Active Plot Settings Page

On this page you can change the axis properties, grid color, background color, and tooltip properties for the active plot.

Plot Matrix View Active Plot Settings

Scatter Plot Settings Page

This page allows the user to change the same settings as for the Active Plot, and also the colors for selected charts.

Plot Matrix View Scatter Plot Settings

• Selected Row/Column Color applies to the charts that share a row or column with the selected chart.
• Selected Active Color applies to the selected chart itself.

Histogram Plots Settings Page

This page likewise allows the user to change the same settings as for the active plot, applied to the histogram plots.

Plot Matrix View Histogram Plots Settings

Display Properties

Display Properties for the PMV allow the user to choose which arrays are plotted, along with some appearance properties for each type of plot, such as its color, marker size, and marker style.

Plot Matrix View Display Properties

Linked Selections

The point selections made in the Active Plot (top right chart) are displayed in the bottom left triangle (the scatter plots). The selection is also linked with other view types.

Plot Matrix View Linked Selection

Slice View

The Slice View allows the user to slice any data shown in it along the three axes (X, Y, Z).
The range of the scale for each axis automatically updates to fit the bounding box of the data that is shown. By default no slice is created, so at first the user sees just an empty Outline representation.
• To add a new slice along an axis, double-click between that axis and the 3D view at the position you want.
• To remove a slice, double-click on the triangle that represents that slice on a given axis.
• To toggle the visibility of a slice, right-click on the triangle that represents that slice on a given axis.

Slice View of the Cow (surface mesh) and the Wavelet (image data)

A video going over its usage can be seen at the following address: https://vimeo.com/50316342

Python usage

The Slice View can easily be managed through Python. To do so, get a reference to the view proxy; you can then change the slice locations of the representations shown in the view by changing the property for each axis. The following code snippet illustrates its usage through the Python shell inside ParaView.

> multiSliceView = GetRenderView()
> Wavelet()
> Show()
> multiSliceView.XSlicesValues = [-2.5, -2, -1, 0, 5]
> multiSliceView.YSlicesValues = range(-10,10,2)
> multiSliceView.ZSlicesValues = []
> Render()

Moreover, from Python you can even change slice origins and normals. Here is the list of properties that you can change, with their default values:
• XSlicesNormal = [1,0,0]
• XSlicesOrigin = [0,0,0]
• XSlicesValues = []
• YSlicesNormal = [0,1,0]
• YSlicesOrigin = [0,0,0]
• YSlicesValues = []
• ZSlicesNormal = [0,0,1]
• ZSlicesOrigin = [0,0,0]
• ZSlicesValues = []

The Python integration can be seen in a video here: https://vimeo.com/50316542

Quad View

The Quad View comes from a plugin that is provided along with ParaView.
This view allows the user to slice any data shown in it along three planes. A point widget is used to represent the planes' intersection across all the views, and it can be grabbed and moved regardless of the view being interacted with. Information such as the intersection position along each axis is shown with a text label in each of the slice views. The slice views behave as 2D views, providing pan and zoom interaction as well as parallel projection. In the bottom-right quarter there is a regular 3D view that can contain the objects being sliced; these objects can be shown using a regular representation or the "Slices" one, which shows an Outline with the corresponding three cuts inside it.

Quad View

A video going over its usage can be seen at the following address: http://vimeo.com/50320103

This view also provides a dedicated option panel that allows the user to customize the cutting plane normals as well as the view-up of the slice views. Moreover, the slice origin can be entered manually in that panel for greater precision.

Quad View Option Panel

Python Usage

The Quad View can easily be managed through Python. To do so, get a reference to the view proxy; you can then change the slice locations of the representations shown in the view by changing the view properties. The following code snippet illustrates its usage through the Python shell inside ParaView.

> quadView = GetRenderView()
> Wavelet()
> Show()
> quadView.SlicesCenter = [1,2,3]
> Render()

Moreover, from Python you can also change the slice normals.
Here is the list of properties that you can change, with their default values:
• SlicesCenter = [0,0,0]
• TopLeftViewUp = [0,1,0]
• TopRightViewUp = [-1,0,0]
• BottomLeftViewUp = [0,1,0]
• XSlicesNormal = [1,0,0]
• XSlicesValues = [0]
• YSlicesNormal = [0,1,0]
• YSlicesValues = [0]
• ZSlicesNormal = [0,0,1]
• ZSlicesValues = [0]

And the layout is as follows:
• TopLeft = X
• TopRight = Y
• BottomLeft = Z

Color Transfer Functions

The interface for changing the color mapping and the properties of the scalar bar is accessible from the Display tab of the Object Inspector. Pressing the Edit Color Map button displays the interface for manipulating the color map and scalar bar. The UI of the Color Scale Editor has been both simplified and improved in many ways. The first time the Color Editor is shown, it appears in its simple mode, which should be enough for most ParaView users. However, in order to get full control over color mapping in ParaView, you will need to select the Advanced checkbox. For the volume representation, the UI was fully revisited for better management; for other types of representations, the color editor is largely the same as before, except that some buttons are rearranged and two more UI components have been added. The status of the Advanced checkbox is kept in ParaView's internal settings, so the next time you return to the Color Editor it will come back the way you left it.

Simplified Color Editor

Advanced Surface Color Editor

Advanced Volume Color Editor

The two new UI controls are the "Render View Immediately" checkbox and the "Apply" button, which let users control whether the render views are updated immediately while editing the color transfer functions. This is very helpful when working with very large datasets.
The main change to the color editor is the separation of the opacity-editing function from the color-editing function for the volume representation. For the surface representation, only one color-editing widget is shown (see the screenshot "Advanced Surface Color Editor"), which is essentially the same as before. The scalar range of this color map editor is shown below the Automatically Rescale to Fit Data Range check box. The leftmost sphere corresponds to the minimum scalar value, and the rightmost one corresponds to the maximum. Any interior nodes correspond to values between these two extremes. New nodes may be added to the color editor by left-clicking in the editor; this determines the scalar value associated with the node, but that value may be changed by typing a new value in the Scalar Value text box below the color map editor or by clicking and dragging a node. The scalar value for a particular node may not be changed such that it is less than that of the node to its left or greater than that of the node to its right. When volume rendering (see the screenshot "Advanced Volume Color Editor"), two function-editing widgets are shown: the top color-editing widget behaves the same as for the surface representation and is used for editing scalar colors; the second is the new opacity-editing widget, for editing opacity only. The vertical height of a node indicates its opacity. Also, as in the color-editing widget, the leftmost sphere corresponds to the minimum scalar value, the rightmost one corresponds to the maximum, and any interior nodes correspond to values between these two extremes. Again, new nodes may be added to the opacity editor by left-clicking in the editor; this determines the scalar value associated with the node, but that value may be changed by typing a new value in the Scalar Value text box below the opacity editor or by clicking and dragging a node.
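The rule that a node's scalar value may not pass its neighbors can be sketched as a simple clamp. This is purely illustrative; `clamp_node_value` is a hypothetical helper, not part of ParaView's API.

```python
def clamp_node_value(values, index, new_value):
    """Enforce the editor's ordering rule: a transfer-function node being
    dragged or retyped is clamped to the interval defined by the nodes on
    either side of it, so the node sequence stays sorted."""
    lo = values[index - 1] if index > 0 else new_value
    hi = values[index + 1] if index < len(values) - 1 else new_value
    values[index] = max(min(new_value, hi), lo)
    return values

# Trying to drag the node at 2.0 past its right neighbor at 5.0:
nodes = [0.0, 2.0, 5.0, 10.0]
print(clamp_node_value(nodes, 1, 7.0))  # → [0.0, 5.0, 5.0, 10.0]
```

A value inside the allowed interval passes through unchanged; one outside it stops at the neighboring node.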
Some new features have been added for editing the opacity function (see the screenshot "Opacity Function Editor" below, which shows the same editor as "Advanced Volume Color Editor", resized vertically to give more space to the opacity editor).

Opacity Function Editor

When a node is double-clicked in the opacity editor, four green handle widgets are displayed, based on the midpoint position and curve sharpness between this node and the nodes before and after it. When the mouse moves over a green sphere handle, it becomes active (its center changes to magenta) and can be dragged to adjust the midpoint position (horizontal handle) or curve sharpness (vertical handle). To exit this mode, just click on another node. When a node in the color editor is clicked, it becomes highlighted (i.e., drawn larger than the other spheres in the editor). In the "Advanced Volume Color Editor" screenshot above, the third node from the left has been selected. Clicking again on the selected node displays a color selector from which you may select a new color for the node. The new color will also be applied to the opacity editor. Pressing the 'd' or Delete key while a node is selected removes that node from the color editor. Only the endpoint nodes may not be deleted. The same is true for removing nodes from the opacity editor. For surface rendering, opacity is determined for an entire data set, not based on the underlying scalar values. Below the color editor is a text box for changing the scalar value associated with a given node. Only the scalar value is associated with surface rendering. The scalar values at the endpoints may only be changed if the Automatically Rescale to Fit Data Range check box (discussed later in this section) is unmarked. When volume rendering, there is a set of three text boxes below the opacity editor in which you may specify the scalar value, opacity, and scale for the selected node.
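How per-sample opacity builds up along a ray during volume rendering can be illustrated with a common front-to-back compositing model. This is a sketch under the assumption that a sample's stored opacity is defined per unit distance (which is what a Scale-like value controls); it is not ParaView's actual rendering code.

```python
def composite_opacity(sample_alphas, step, unit_distance=1.0):
    """Front-to-back opacity accumulation along a ray: each sample's
    stored opacity is defined per unit distance, so it is first corrected
    for the actual step length, then composited."""
    accumulated = 0.0
    for alpha in sample_alphas:
        # Opacity correction for step length relative to the unit distance.
        corrected = 1.0 - (1.0 - alpha) ** (step / unit_distance)
        accumulated += (1.0 - accumulated) * corrected
    return accumulated

# Halving the step (twice as many samples) leaves the total opacity
# unchanged, because each sample's opacity is rescaled to the step length.
a = composite_opacity([0.2] * 10, step=1.0)
b = composite_opacity([0.2] * 20, step=0.5)
print(round(a, 6), round(b, 6))  # the two values agree
```

This step-length correction is why a unit-distance parameter matters: without it, finer sampling would make the same transfer function look more opaque.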
In volume rendering, the opacity is accumulated as you step through the volume being rendered. The Scale value determines the unit distance over which the opacity is accumulated. There are also controls to specify the color space and to save or load color map presets. The color spaces available are RGB (red, green, blue), HSV (hue, saturation, value), Wrapped HSV, and CIELAB (a more perceptually linear color space). The color space determines how the colors are interpolated between specified values; the colors at the color map (or transfer function) editor nodes will remain the same regardless of the color space chosen. If Wrapped HSV is used, the interpolation will use the shortest path in hue, even going through the value hue = 0. For non-wrapped HSV, the hue interpolation will not pass through 0. A hue of zero sets the color to red. In addition to choosing the color space and modifying the color map or transfer function nodes, you may also create and load preset color scales. When volume rendering, only the color map is stored in a preset; the scalar-to-opacity mapping is not. To store your current settings as a preset, click the Save button. In the dialog box that appears, you may enter a name for your new preset. By default, the scalar values from the data array being used are stored in the preset. If you wish these values to be normalized between 0 and 1, press the Normalize button.

Figure 4.27 Dialog for selecting color scale presets

Any presets you save, in addition to the default ones provided by ParaView, are available by pressing the Choose Preset button, causing the dialog shown below to be displayed. Selecting a preset and clicking OK causes the current color map to be set to the chosen preset. Any user-defined presets may be normalized (as discussed above) or removed from the list of presets entirely using the Normalize and Remove buttons, respectively.
The default presets are already normalized and may not be removed from the application. Any of the color scale presets may be exported to a file using the Export button in the above dialog. The resulting file(s) may then be copied to another computer for use with ParaView on a different machine. In order to load presets stored in such files, press the Import button in the above dialog and navigate to the desired color preset file. If the current dataset is colored by an array of vectors, the Component menu will be enabled. It determines whether the data is colored by a single vector component (X, Y, or Z) or by the vector's Magnitude (the default). If the data is colored by a single-component (scalar) array, the Component menu is disabled. If Use Logarithmic Scale is checked, then instead of the scalar values in the data array being used directly to determine the colors, the base-10 logarithm of the data array values is computed, and the resulting value is used for extracting a color from the color map. If the data array contains values for which a logarithm would produce invalid results (i.e., any values less than or equal to 0), the range for the color map is changed to [0, 10] so that the logarithm produces valid results. By default, any data attribute that has been used to color a dataset currently loaded in ParaView, and whose name and number of components match those of the array selected in the Color by menu, contributes to the range of the color map. To change this behavior, first uncheck the Automatically Rescale to Fit Data Range check box. This ensures that the range of the color map is not reset when the range of the data attribute changes. The minimum and maximum values of the color map can be overridden by pressing the Rescale Range button, entering different Minimum and Maximum values in the dialog that appears, and pressing Rescale.
This rescales all the nodes in the color map so that the scalar values lie at the same normalized positions. Alternatively, you may modify the scalar value of any node (including the endpoints, if Automatically Rescale to Fit Data Range is off) by clicking a node to highlight it and typing a new value in the Scalar Value entry box. By changing the minimum and maximum color map values, it is possible to manually specify what range of data values the color map will cover. Pressing the Rescale to Data Range button on the Color Scale tab of the Color Scale Editor sets the range to cover only the current data set. If Use Discrete Colors is checked, the Resolution slider at the bottom of the dialog specifies the number of colors to use in the color map. The scale ranges from 2 to 256 (the default). The fewer the colors, the larger the range each color covers. This is useful if the data attribute has a small number of distinct values or if larger ranges of the array values should be mapped to the same color.

Figure 4.28 Scalar Bar controls

Filtering Data

Rationale

Manipulating Data

In the course of either searching for information within data or preparing images for publication that explain data, it is often necessary to process the raw data in various ways. Examples include slicing into the data to make the interior visible, extracting regions that have particular qualities, and computing statistical measurements from the data. All of these operations involve taking in some original data and using it to compute derived quantities. This chapter explains how you control the data-processing portion of ParaView's visualization pipeline to do such analyses. A filter is the basic tool that you use to manipulate data. If data is a noun, then a filter is the verb that operates on the data. Filters operate by ingesting some data, processing it, and producing some other data.
In the abstract sense, a data reader is a filter as well, because it ingests data from the file system. ParaView creates filters when you open data files and when you instantiate new filters from the Filters menu. The set of filters you create becomes your visualization pipeline, and that pipeline is shown in ParaView's Pipeline Browser.

Filter Parameters

Each time a dataset is opened from a file, a source is selected, a filter is applied, or an existing reader, source, or filter (hereafter simply referred to as a filter) is selected in the Pipeline Browser, ParaView updates the Object Inspector for the corresponding output dataset. The Object Inspector has three tabs. In this chapter we are primarily concerned with the Properties tab. The Display tab gives you control over the visual characteristics of the data produced by the filter as displayed in the active view. The Information tab presents meta-information about the data produced by the filter.

Properties

From the Properties tab you modify the parameters of the filter, fine-tuning what it produces from its input (if any). For example, a filter that extracts an isocontour will have a control with which to set the isovalue (or isovalues) to extract. The specific controls and information provided on the tab are specific to the particular vtkAlgorithm you are working with, but all filters have at least the Apply, Reset, Delete, and ? (help) controls.

Figure 5.1 Sample properties tab for a cone source

The help button brings up the documentation for the filter in ParaView's help system, which lists the filter's input restrictions, the output type it generates, and descriptions of each parameter. The same information is repeated in Appendices 1 and 2 of this book. The Delete button removes this filter from the pipeline. The Delete button is only enabled when there are no filters further down the pipeline that depend on this filter's output.
You have to either use the Pipeline Browser and Object Inspector in conjunction to delete the dependent parts of the pipeline, or use Delete All from the Edit menu. When a reader, source, or filter is first selected, the associated data set is not immediately created. By default (unless you turn on Auto-Accept in ParaView's settings), the filter will not run until you hit the Apply button. When you do press Apply, ParaView sends the values shown on the Properties tab to the data processing engine, and then the pipeline is executed. This delayed-commit behavior is important when working with large data, for which any given action might take a long time to finish. Until you press Apply, and at any other time that the values shown on the GUI do not agree with what was last sent to the server, the Apply button will be highlighted (in blue or green depending on your operating system). In this state the Reset button is also enabled. Pressing it returns the GUI to the last committed state, which gives you an easy way to cancel mistakes before they happen.

Filter Parameters

The specific parameter control widgets vary from filter to filter, and sometimes vary depending on the exact input to the filter. Some filters have no parameters at all and others have many. Many readers present the list and type of arrays in the file, and allow you to pick some or all of them as you need. In all cases, the widgets shown on the Properties tab give you control over exactly what the filter does. If you are unsure of what they do, remember to hit the ? button to see the documentation for that filter. Note that ParaView attempts to provide reasonable default settings for the parameters and, to some extent, guards against invalid entries. A numeric entry box will not let you type in non-numerical values, for example. Sliders and spin boxes typically have minimum and maximum limits built in. In some cases, though, you may want to ignore these default limits.
Whenever there is a numeric entry beside the widget, you are able to manually type in any number you need. Some filter parameters are best manipulated directly in the 3D View window with the mouse. For example, the Slice filter extracts slices from the data that lie on a set of parallel planes oriented in space. This type of world-space interactive control is what 3D Widgets are for. The textual controls in the Properties tab and the displayed state of the 3D widgets are always linked, so that changing either updates the other. You can, of course, use the Reset button to revert changes you make from either place.

The Pipeline

Managing the Pipeline

Data manipulation in ParaView is fairly unique because of the underlying pipeline architecture that it inherits from VTK. Each filter takes in some data and produces something new from it. Filters do not directly modify the data that is given to them; instead, they pass unmodified data through via reference (so as to conserve memory) and augment it with newly generated or changed data. The fact that input data is never altered in place means that, unlike in many visualization tools, you can apply several filtering operations in different combinations to your data during a single ParaView session. You see the result of each filter as it is applied, which helps to guide your data exploration work, and you can easily display any or all intermediate pipeline outputs simultaneously. The Pipeline Browser depicts ParaView's current visualization pipeline and allows you to easily navigate to the various readers, sources, and filters it contains. Connecting an initial data set, loaded from a file or created from a ParaView source, to a filter creates a two-filter-long visualization pipeline. The initial dataset becomes the input to the filter, and, if needed, the output of that filter can be used as the input to a subsequent filter. For example, suppose you create a sphere in ParaView by selecting Sphere from the Sources menu.
In this example, the sphere is the initial data set. Next create a Shrink filter from the Alphabetical submenu of the Filters menu. Because the sphere source was the active filter when the shrink filter was created, the shrink filter operates on the sphere source's output. Optionally, use the Properties tab of the Object Inspector to set the initial parameters of the shrink filter and then hit Apply. Next create an Elevation filter to filter the output of the shrink filter and hit Apply again. You have just created a simple three-element linear pipeline in ParaView, and you will now see the following in the Pipeline Browser.

Figure 5.2 Linear pipeline

Within the Pipeline Browser, to the left of each entry is an "eye" icon indicating whether that dataset is currently visible. If there is no eye icon, it means that the data produced by that filter is not compatible with the active view window. Otherwise, a dark eye icon indicates that the data set is visible; when a dataset is viewable but currently invisible, its icon is drawn in light gray. Clicking on the eye icon toggles the visibility of the corresponding data set. In the above example, all three filters are potentially visible, but only the ElevationFilter is actually being displayed. The ElevationFilter is also highlighted in blue, indicating that it is the "active" filter. Since it is the active filter, the Object Inspector reflects its content and the next filter created will use it as the input.

You can always change the parameters of any filter in the pipeline after it has been applied. Left-click on the filter in the Pipeline Browser to make it active; the Properties, Display, and Information tabs always reflect the active filter. When you make changes in the Properties tab and apply them, all filters downstream of the changed one are automatically updated.
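The non-mutating, shallow-copy pipeline model described above can be sketched in plain Python. This is an illustrative toy, not the ParaView API: each "filter" returns a new dataset, sharing unchanged data by reference, so an upstream output remains valid as the input to other branches.

```python
# Toy illustration of ParaView's pipeline model (not the real API):
# filters never modify their input; unchanged data is shared by reference.

def shrink(dataset, factor=0.5):
    # Produce a new dataset; only the geometry is recomputed.
    out = dict(dataset)                      # shallow copy: shares entries
    out["points"] = [(x * factor, y * factor, z * factor)
                     for (x, y, z) in dataset["points"]]
    return out

def elevation(dataset):
    # Add a scalar array derived from the z coordinate; geometry is shared.
    out = dict(dataset)
    out["Elevation"] = [z for (_, _, z) in dataset["points"]]
    return out

sphere = {"points": [(1.0, 0.0, 0.0), (0.0, 0.0, 1.0)]}
shrunk = shrink(sphere)          # sphere -> shrink
colored = elevation(shrunk)      # sphere -> shrink -> elevation

# The original source is untouched, so it can feed other branches too.
assert sphere["points"][0] == (1.0, 0.0, 0.0)
```

Because `sphere` and `shrunk` still exist unmodified after `elevation` runs, any of the intermediate outputs can be displayed or extended with further filters, which is exactly what the Pipeline Browser lets you do interactively.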
Double-clicking the name of one of the filters makes the name editable, enabling you to change it to something more meaningful than the default chosen by ParaView.

By default, each filter you add to the pipeline becomes the active filter, which is useful when making linear pipelines. Branching pipelines are also very useful. The simplest way to make one is to click on some other, further-upstream filter in the pipeline before you create a new filter. For example, select ShrinkFilter1 in the Pipeline Browser, then apply Extract Edges from the Alphabetical submenu of the Filters menu. Now the output of the shrink filter is being used as the input to both the elevation and extract edges filters. You will see the following in the Pipeline Browser.

Figure 5.3 Branching Pipeline

Right-clicking a filter in the Pipeline Browser displays a context menu from which you can do several things. For reader modules you can use this to load a new data file. Any module can be saved (the filter and its parameters) as a Custom Filter (see the last section of this chapter), or deleted if it is at the end of the visualization pipeline. For filter modules, you can also use this menu to change the input to the filter, and thus rearrange the visualization pipeline.

Figure 5.4 Context menu in the Pipeline Browser

To rearrange the pipeline, select Change Input from the context menu. That brings up the Input Editor dialog box shown in Figure 5.5; the name of the dialog box reflects the filter that you are moving. The middle Select Source pane shows the pipeline as it currently stands; use this pane to select the filter that the chosen filter will be moved onto. This pane only allows you to choose compatible filters, i.e., ones that produce data that can be ingested by the filter you are moving, and it does not allow you to create loops in the pipeline.
Left-click to make your choice from the allowed set of filters, and the rightmost Pipeline Preview pane will show what the pipeline will look like once you commit your change. Click OK to do so or Cancel to abort the move.

Figure 5.5 Input Editor dialog shown while moving an elevation filter to operate directly on a reader.

Some filters require more than one input (see the discussion of merging pipelines below). For those, the leftmost input port pane of the dialog shows more than one port; use it together with the Select Source pane to specify each input in turn.

Conversely, some filters produce more than one output. Thus another way to make a branching pipeline is to open a reader that produces multiple distinct data sets, for example the SLAC reader, which produces both a polygonal output and a structured data field output. Whenever you have a branching pipeline, keep in mind that it is important to select the proper branch on which to extend the pipeline. For example, if you want to apply a filter like Extract Subset, which operates only on structured data, and the SLAC reader's polygonal output is currently selected, you must first click on the reader's structured data output to make it the active one.

Some filters that produce multiple datasets do so in a different way. Instead of producing several fundamentally distinct data sets, they produce a single composite dataset which contains many sub-datasets. See the Understanding Data chapter for an explanation of composite data sets. With composite data it is usually best to treat the entire group as one entity. Sometimes, though, you want to operate on a particular set of sub-datasets. To do that, apply the Extract Block filter, which allows you to pick the desired sub-dataset(s), and then apply the filter you are interested in to the Extract Block filter's output. An alternative is to hit 'B' to use Block Selection in a 3D View and then use the Extract Selection filter.
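In data-structure terms, a composite dataset is just a container of named sub-datasets, and an Extract-Block-style operation selects entries from it. The following is a toy sketch with made-up block names, not VTK's actual composite classes:

```python
# Toy stand-in for a multi-block dataset: named sub-datasets in a dict.
# Block names and point counts are invented for illustration.
multiblock = {
    "wing":     {"n_points": 1200},
    "fuselage": {"n_points": 5400},
    "exhaust":  {"n_points": 300},
}

# An Extract-Block-style operation keeps only the requested sub-datasets,
# producing a smaller composite dataset for downstream filters to consume.
wanted = ["wing", "exhaust"]
extracted = {name: multiblock[name] for name in wanted}
```

Downstream filters then see only the extracted blocks, which is why chaining Extract Block before an expensive filter can save both time and memory.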
Pipelines merge as well, whenever they contain filters that take in more than one input to produce their own output (or outputs). There are in fact two different varieties of merging filters.

The Append Geometry and Group Data Sets filters are examples of the first kind. These filters take in any number of fundamentally similar datasets: Append, for example, takes in one or more polygonal datasets and combines them into a single large polygonal dataset, while Group takes in a set of datasets of any type and produces a single composite dataset. To use this type of merging filter, select more than one filter within the Pipeline Browser by left-clicking to select the first input and then shift-left-clicking to select the rest. Now create the merging filter from the Filters menu as usual. The pipeline in this case will look like the one in Figure 5.6.

Figure 5.6 Merging pipelines

Other filters take in more than one fundamentally different data set. An example is the Resample with Dataset filter, which takes in one dataset (the Input) that represents a field in space to sample values from, and another dataset (the Source) to use as the set of locations in space at which to sample those values. Begin this type of merge by choosing either of the two inputs as the active filter and then creating the merging filter from the Filters menu. A modified version of the Change Input dialog shown in Figure 5.5 results (this one lacks a Pipeline Preview pane). Click on either of the ports listed in the Available Inputs pane and specify an input for it from the Select Input pane, then switch to the other port in the Available Inputs pane and choose the other input. When you are satisfied with your choices, click OK on the dialog and then Apply to create the merging filter.

Filter Categories

Available Filters

There are many filters available in ParaView (1) (and even more in VTK).
Because ParaView has a modular architecture, it is routine for people to add additional filters (2). Some filters have obscure purposes and are rarely used, but others are more general purpose and used very frequently. These most common filters are found easily on the Common (View|Toolbars) toolbar.

Figure 5.7 Common Filters Toolbar

These filters include:
• Calculator - Evaluates a user-defined expression on a per-point or per-cell basis (3).
• Contour - Extracts the points, curves, or surfaces where a scalar field is equal to a user-defined value. This surface is often also called an isosurface.
• Clip - Intersects the geometry with a half space. The effect is to remove all the geometry on one side of a user-defined plane.
• Slice - Intersects the geometry with a plane. The effect is similar to clipping except that all that remains is the geometry where the plane is located.
• Threshold - Extracts cells that lie within a specified range of a scalar field.
• Extract Subset - Extracts a subset of a grid by defining either a volume of interest or a sampling rate.
• Glyph - Places a glyph, a simple shape, on each point in a mesh. The glyphs may be oriented by a vector and scaled by a vector or scalar.
• Stream Tracer - Seeds a vector field with points and then traces those seed points through the (steady state) vector field.
• Warp - Displaces each point in a mesh by a given vector field.
• Group Datasets - Combines the output of several pipeline objects into a single multi-block dataset.
• Extract Level - Extracts one or more items from a multi-block dataset.

These eleven filters are a small sampling of what is available in ParaView. In the Alphabetical submenu of the Filters menu you will find all of the filters that are usable in your copy of ParaView. Currently there are more than one hundred of them, so to make them easier to find the Filters menu is organized into submenus. These submenus are organized as follows.
• Recent - The filters you've used recently.
• Common - The common filters. This is the same set of filters as on the common filters toolbar.
• Cosmology - Filters developed at LANL for cosmology research.
• Data Analysis - Filters designed to retrieve quantitative values from the data. These filters compute data on the mesh, extract elements from the mesh, or plot data.
• Statistics - Filters that provide descriptive statistics of data, primarily in tabular form.
• Temporal - Filters that analyze or modify data that changes over time. All filters can work on data that changes over time because they are re-executed at each time step; filters in this category have the additional capability to inspect and make use of, or even modify, the temporal dimension.
• Alphabetical - Many filters do not fit into the above categories, so all filters can be found here (see Figure 5.8).

Figure 5.8 A portion of the Alphabetical submenu of the Filters menu.

Searching through these lists of filters, particularly the full alphabetical list, can be cumbersome. To speed up the selection of filters, you should use the Quick Launch dialog. Choose the first item from the Filters menu, or alternatively press CTRL and SPACE BAR (Windows or Linux) or ALT and SPACE BAR (Macintosh) together to bring up the Quick Launch dialog. As you type in words or word fragments, the dialog lists the filters whose names contain them. Use the up and down arrow keys to select from among them and hit ENTER to create the filter.

Figure 5.9 Quick Launch

Why can't I apply the filter I want?

Note that many of the filters in the menu will be grayed out and not selectable at any given time. That is because any given filter may only operate on particular types of data.
For example, the Extract Subset filter will only operate on structured datasets, so it is only enabled when the module you are building on top of produces image data, rectilinear grid data, or structured grid data. Likewise, the Contour filter requires scalar data and cannot operate directly on datasets that have only vectors. The input restrictions for all filters are listed in the Appendix and help menus. When the filter you want is not available, you should look for a similar filter that will accept your data, or apply an intermediate filter that transforms your data into the required format. In ParaView 3.10 you can also ask ParaView to try to do the conversion for you automatically by clicking "Auto convert Properties" in the application settings.

What does that filter do?

A description of what each filter does, what input data types it accepts, and what output data types it produces can be found in the Appendix and help menus. For a more complete understanding, remember that most ParaView filters are simply VTK algorithms, each of which is documented online in the VTK (http://www.vtk.org/doc/release/5.6/html/classes.html) and ParaView (http://www.paraview.org/ParaView3/Doc/Nightly/html/classes.html) Doxygen pages.

When you are exploring a given dataset, you do not want to have to hunt through the detailed descriptions of all of the filters in order to find the one filter that is needed at any given moment. It is useful, then, to be aware of the general high-level taxonomy of the different operations that the filters can be logically grouped into. These are:

• Attribute Manipulation: Manipulates the field-aligned, point-aligned, and cell-aligned data values and in general derives new aligned quantities, including Curvature, Elevation, Generate IDs, Generate Surface Normals, Gradient, Mesh Quality, Principal Component Analysis, and Random Vectors.
• Geometric Manipulation: Operates on or manipulates the shape of the data in a spatial context, including Reflect, Transform, and Warp.
• Topological Operations: Manipulates the connected structure of the data set itself, usually creating or destroying cells, for instance to reduce the dataset's memory size while leaving it in the same place in space, including Cell Centers, Clean, Decimate, Extract Surface, Quadric Clustering, Shrink, Smooth, and Tetrahedralize.
• Sampling: Computes new datasets that represent some essential features of the datasets they take as input, including Clip, Extract Subset, Extract Selection, Glyph, Streamline, Probe, Plot, Histogram, and Slice.
• Data Type Conversion: Converts between the various VTK data structures (see the VTK Data Model section) and joins or splits entire data structures, including Append DataSets, Append Geometry, Extract Blocks, Extract AMR Blocks, and Group DataSets.
• White Box Filters: Performs arbitrary processing as specified at runtime by you, the user, including the Calculator and Python Programmable filters.

Best Practices

Avoiding Data Explosion

The pipeline model that ParaView presents is very convenient for exploratory visualization. The loose coupling between components provides a very flexible framework for building unique visualizations, and the pipeline structure allows you to tweak parameters quickly and easily. The downside of this coupling is that it can have a larger memory footprint. Each stage of the pipeline maintains its own copy of the data. Whenever possible, ParaView performs shallow copies of the data so that different stages of the pipeline point to the same block of data in memory. However, any filter that creates new data or changes the values or topology of the data must allocate new memory for the result. If ParaView is filtering a very large mesh, inappropriate use of filters can quickly deplete all available memory.
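To get a feel for how quickly memory can be consumed, compare rough storage estimates for the same hexahedral mesh stored as implicit-topology image data versus an explicit unstructured grid. The figures below are deliberate back-of-the-envelope numbers, not VTK's exact per-object overhead:

```python
# Back-of-the-envelope memory comparison (illustrative numbers only).
nx = ny = nz = 100                 # a 100 x 100 x 100 point lattice
n_points = nx * ny * nz
n_cells = (nx - 1) * (ny - 1) * (nz - 1)

# Image data: topology and geometry are implicit -- just origin,
# spacing and dimensions, a handful of bytes regardless of mesh size.
structured_bytes = 9 * 8           # ~3 doubles each for origin/spacing/extent

# Unstructured grid: every point coordinate and every cell's
# connectivity must be stored explicitly.
point_bytes = n_points * 3 * 8     # 3 float64 coordinates per point
conn_bytes = n_cells * 8 * 8       # 8 point ids (int64) per hexahedron
unstructured_bytes = point_bytes + conn_bytes

ratio = unstructured_bytes / structured_bytes
```

For this modest mesh the explicit representation already needs tens of megabytes where the implicit one needs almost nothing, which is why converting structured data to unstructured form is the single most expensive mistake discussed in this section.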
Therefore, when visualizing large datasets, it is important to understand the memory requirements of filters. Please keep in mind that the following advice is intended only for situations where you are dealing with very large amounts of data and the remaining available memory is low; when you are not in danger of running out of memory, it is not relevant.

When dealing with structured data, it is critically important to know which filters will convert the data to unstructured. Unstructured data has a much higher memory footprint per cell than structured data because the topology must be explicitly written out. Many filters in ParaView change the topology in some way, and these filters write out their data as an unstructured grid, because that is the only dataset type that can handle any topology that is generated. The following filters write out a new unstructured topology in their output that is roughly equivalent to the input. These filters should never be used with structured data and should be used with caution on unstructured data.

• Append Datasets
• Append Geometry
• Clean
• Clean to Grid
• Connectivity
• D3
• Delaunay 2D/3D
• Extract Edges
• Linear Extrusion
• Loop Subdivision
• Reflect
• Rotational Extrusion
• Shrink
• Smooth
• Subdivide
• Tessellate
• Tetrahedralize
• Triangle Strips
• Triangulate

Technically, the Ribbon and Tube filters should fall into this list. However, as they only work on 1D cells in poly data, the input data is usually small and of little concern.

The next, similar set of filters also outputs unstructured grids, but additionally tends to reduce some of the data. Be aware, though, that this data reduction is often smaller than the overhead of converting to unstructured data. Also note that the reduction is often not well balanced: it is possible (often likely) that a single process may not lose any cells.
Thus, these filters should be used with caution on unstructured data and with extreme caution on structured data.

• Clip
• Decimate
• Extract Cells by Region
• Extract Selection
• Quadric Clustering
• Threshold

Similar to the items in the preceding list, Extract Subset performs data reduction on a structured dataset, but also outputs a structured dataset. So the warning about creating new data still applies, but you do not have to worry about converting to an unstructured grid.

This next set of filters also outputs unstructured data, but it also performs a reduction on the dimension of the data (for example, 3D to 2D), which results in a much smaller output. Thus, these filters are usually safe to use with unstructured data and require only mild caution with structured data.

• Cell Centers
• Contour
• Extract CTH Fragments
• Extract CTH Parts
• Extract Surface
• Feature Edges
• Mask Points
• Outline (curvilinear)
• Slice
• Stream Tracer

The filters below do not change the connectivity of the data at all. Instead, they only add field arrays to the data; all the existing data is shallow copied. These filters are usually safe to use on all data.

• Block Scalars
• Calculator
• Cell Data to Point Data
• Curvature
• Elevation
• Generate Surface Normals
• Gradient
• Level Scalars
• Median
• Mesh Quality
• Octree Depth Limit
• Octree Depth Scalars
• Point Data to Cell Data
• Process Id Scalars
• Random Vectors
• Resample with Dataset
• Surface Flow
• Surface Vectors
• Texture Map to...
• Transform
• Warp (scalar)
• Warp (vector)

This final set of filters either adds no data to the output (all data of consequence is shallow copied) or adds data that is generally independent of the size of the input. These are almost always safe to add under any circumstances (although they may take a lot of time).
• Annotate Time
• Append Attributes
• Extract Block
• Extract Datasets
• Extract Level
• Glyph
• Group Datasets
• Histogram
• Integrate Variables
• Normal Glyphs
• Outline
• Outline Corners
• Plot Global Variables Over Time
• Plot Over Line
• Plot Selection Over Time
• Probe Location
• Temporal Shift Scale
• Temporal Snap-to-Time-Steps
• Temporal Statistics

There are a few special-case filters that do not fit well into any of the previous classes. Some filters, currently Temporal Interpolator and Particle Tracer, perform calculations based on how data changes over time. These filters may need to load data for two or more instances of time, which can double or more the amount of data needed in memory. The Temporal Cache filter will also hold data for multiple instances of time. Keep in mind that some of the temporal filters, such as Temporal Statistics and the filters that plot over time, may need to iteratively load all data from disk. Thus, they may take an impractically long amount of time even if they do not require any extra memory.

The Programmable Filter is also a special case that is impossible to classify. Since this filter does whatever it is programmed to do, it can fall into any one of these categories.

Culling Data

When dealing with large data, it is best to cull out data whenever possible, and to do so as early as possible. Most large data starts as 3D geometry and the desired geometry is often a surface. As surfaces usually have a much smaller memory footprint than the volumes they are derived from, it is best to convert to a surface early on. Once you do that, you can apply other filters in relative safety.

A very common visualization operation is to extract isosurfaces from a volume using the Contour filter. The Contour filter usually outputs geometry much smaller than its input. Thus, the Contour filter should be applied early if it is to be used at all.
Be careful when setting up the parameters of the Contour filter, because it is still possible for it to generate a lot of data, which can happen if you specify many isosurface values. High frequencies such as noise around an isosurface value can also cause a large, irregular surface to form.

Another way to peer inside a volume is to perform a Slice on it. The Slice filter will intersect a volume with a plane and allow you to see the data in the volume where the plane intersects it. If you know the relative location of an interesting feature in your large dataset, slicing is a good way to view it.

If you have little a priori knowledge of your data and would like to explore it without the memory and processing costs of the full dataset, you can use the Extract Subset filter to subsample the data. The subsampled data can be dramatically smaller than the original data and should still be well load balanced. Of course, be aware that you may miss small features if the subsampling steps over them, and that once you find a feature you should go back and visualize it with the full data set.

There are also several filters that can pull out a subset of a volume: Clip, Threshold, Extract Selection, and Extract Subset can all extract cells based on some criterion. Be aware, however, that the extracted cells are almost never well balanced; expect some processes to have no cells removed. All of these filters, with the exception of Extract Subset, will convert structured data types to unstructured grids. Therefore, they should not be used unless the extracted cells are at least an order of magnitude fewer than the source data.

When possible, replace the use of a filter that extracts 3D data with one that extracts 2D surfaces. For example, if you are interested in a plane through the data, use the Slice filter rather than the Clip filter.
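Extract Subset's sampling-rate mode mentioned above amounts to simple striding over a structured grid. In NumPy terms (a sketch of the idea, not ParaView's implementation):

```python
import numpy as np

# A small structured scalar field; keep every other sample along each axis.
volume = np.arange(8 * 8 * 8, dtype=float).reshape(8, 8, 8)

# Striding with step 2 corresponds to a sampling rate of 2 in i, j and k.
subset = volume[::2, ::2, ::2]

# The subsampled block is 1/8 the size, but any feature narrower than
# the stride can vanish entirely -- the trade-off noted above.
```

This also shows why the result stays well balanced: the stride removes samples uniformly everywhere, rather than concentrating the reduction on whichever processes happen to hold the interesting cells.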
If you are interested in knowing the location of a region of cells containing a particular range of values, consider using the Contour filter to generate surfaces at the ends of the range rather than extracting all of the cells with the Threshold filter. Be aware that substituting filters can have an effect on downstream filters. For example, running the Histogram filter after Threshold will have an entirely different effect than running it after the roughly equivalent Contour filter.

Custom Filters aka Macro Filters

It often happens that once you figure out how to do some specific data processing task, you want to repeat it often. You may, for example, want to reuse particular filters with specific settings (for example, complicated Calculator or Programmable Filter expressions), or even entire pipeline sections, on new datasets without having to manually enter the parameters each time. You can do this via the clever use of state files or, more conveniently, Python scripts and Python macros (1). Saving, editing, and reusing state files gives you the ability to recreate entire ParaView sessions, but not fine enough control for small, repeatedly reused tasks. Python tracing does give you fine-grained control, but this assumes that you have Python enabled in your copy of ParaView (which is usually but not always the case) and that you remembered to turn on trace recording before you did whatever it is that you want to play back. Both techniques largely require that you think like a programmer when you initially create and set up the scripts.

Another alternative is to use ParaView's Custom Filters, which let you create reusable meta-filters strictly within the GUI. A Custom Filter is a black-box filter that encapsulates one or more filters in a sub-pipeline and exposes only those parameters from that sub-pipeline that the Custom Filter's creator chose to make available.
For example, if you capture a ten-element pipeline in your Custom Filter where each filter happens to have eight parameters, you could choose to expose anywhere from zero to eighty parameters in your Custom Filter's Properties tab.

Figure 5.10 Custom Filter concept

Once you have set up a pipeline that performs the data processing you want to reuse, the process of creating a Custom Filter consists of three steps. First, select one or more filters from the Pipeline Browser with the mouse. Second, from the Tools menu select Create Custom Filter. In that dialog, choose the filter in your chosen sub-pipeline whose input is representative of where you want data to enter your Custom Filter. This is usually the topmost filter. If you are creating a multi-input filter, click the + button to add additional inputs and configure them in the same way. Clicking Next brings you to a similar dialog in which you choose the outputs of your Custom Filter. Third, click Next again to get to the last dialog, where you specify which parameters of the internal filters you want to expose to the eventual user of the Custom Filter. You can optionally give each parameter a descriptive label here as well. The three dialogs are shown below.

Step 1: configure one or more inputs to your new filter.
Step 2: configure one or more outputs of your new filter.
Step 3: identify and name the controls you want to expose of your new filter.

Figure 5.11 Creating a Custom Filter

Once you create a Custom Filter, it is added to the Alphabetical submenu of the Filters menu. It is automatically saved in ParaView's settings, so the next time you start ParaView on the same machine you can use it just like any of the other filters that came with your copy of ParaView. Custom Filters are treated no differently than other filters in ParaView and can be saved and restored in state files and Python scripts.
If you find that you no longer need some Custom Filter and want to get rid of it, use the Manage Custom Filters dialog box under the Tools menu to remove it. If, on the other hand, you find that a Custom Filter is very useful, you may instead want to give it to a colleague. On that same dialog are controls for exporting and importing Custom Filters. When you save a Custom Filter you are prompted for a location and filename to save the filter in. The file format is a simple XML text file that you can simply email or otherwise deliver.

Quantitative Analysis

Drilling Down

ParaView 2 was almost entirely a qualitative analysis tool. It was very good at drawing pictures of large scientific datasets so that you could view the data and tell whether it looked "right," but it was not easy to use for extracting hard quantitative information from the data and verifying that it actually was right. The recommended use was to visualize, interact with, and subset your data in ParaView and then export the result into a format that could be imported by a different tool.

A major goal of ParaView 3 has been to add quantitative analysis capabilities that turn it into a convenient and comprehensive tool in which you can visualize, interact with, and drill all the way down into the data. These capabilities range from semi-qualitative ones, such as the Ruler source and Cube Axes representation (see the Annotation chapter), to Selection, which allows you to define and extract arbitrary subsets of the data, to the Spreadsheet View, which presents the data in textual form. Taken together with features like the statistical analysis filters, calculator filters, 2D plot and chart views, and programmable filters (which give you the ability to run arbitrary code on the server with access to every data point), these give you the ability to inspect the data from the highest-level view all the way down to the hard numbers.
This chapter describes the various tools that ParaView gives you to support quantitative analysis.

Python Programmable Filter

Introduction

The Programmable Filter is a ParaView filter that processes one or more input datasets based on a Python script provided by the user. The parameters of the filter include the output data type, the script, and a toggle that controls whether the input arrays are copied to the output. This chapter introduces the use of the Programmable Filter and gives a summary of the API available to the user.

Note that the Programmable Filter depends on Python. All ParaView binaries distributed by Kitware are built with Python enabled. If you have built ParaView yourself, you have to make sure that PARAVIEW_ENABLE_PYTHON is turned on when configuring the ParaView build.

Figure 6.1

Since the entire VTK API, as well as any module that can be imported through Python, is available through this filter, we can only skim the surface of what can be accomplished with it here. If you are not familiar with Python, we recommend first taking a look at one of the introductory guides, such as the official Python Tutorial [1]. If you are going to do any programming beyond the very basics, we recommend reading up on the VTK API. The VTK website [2] has links to VTK books and online documentation. For reference, you may need to look at the VTK class documentation [3]. There is also more information about the Programmable Filter, and some good recipes, on the ParaView Wiki (Python_Programmable_Filter).

Basic Use

Requirements:
1. You are applying the Programmable Filter to a "simple" dataset and not a composite dataset such as multi-block or AMR.
2. You have NumPy installed.

The most basic reason to use the Programmable Filter is to add a new array by deriving it from arrays in the input, which can also be achieved with the Python Calculator.
One reason to use the Programmable Filter instead may be that the calculation is more involved and trying to do it in one expression may be difficult. Another reason may be that you need access to a program-flow construct such as if or for. The Programmable Filter can be used to do everything the Calculator does and more.

Note: Since what is described here builds on some of the concepts introduced in the Python Calculator section, please read that section first if you are not familiar with the Calculator.

If you leave the Output Dataset Type parameter in the default setting of "Same as Input," the Programmable Filter will copy the topology and geometry of the input to the output before calling your Python script. Therefore, if you Apply the filter without filling in the script, you should see a copy of the input, without any of its arrays, in the output. If you also check the Copy Arrays option, the output will have all of the input arrays. This behavior allows you to focus on creating new arrays without worrying about the mesh.

Create a Sphere source and then apply the Programmable Filter with the following script.

normals = inputs[0].PointData['Normals']
output.PointData.append(normals[:,0], "Normals_x")

This should create a sphere with an array called "Normals_x". There are a few things to note here:
• You cannot refer to arrays directly by name as in the Python Calculator. You need to access arrays using the .PointData and .CellData qualifiers.
• Unlike the Python Calculator, you have to explicitly add an array to the output using the append function. Note that this function takes the name of the array as the second argument.

You can use any of the functions available in the Calculator in the Programmable Filter. For example, the following code creates two new arrays and adds them to the output.
normals = inputs[0].PointData['Normals']
output.PointData.append(sin(normals[:,0]), "sin of Normals_x")
output.PointData.append(normals[:,1] + 1, "Normals_y + 1")

Intermediate Use

Mixing VTK and NumPy APIs

The previous examples demonstrate how the Programmable Filter can be used as an advanced Python Calculator. However, the full power of the Programmable Filter can only be harnessed by using the VTK API. The following is a simple example. Create a Sphere source and apply the Programmable Filter with the following script:

input = inputs[0]
newPoints = vtk.vtkPoints()
numPoints = input.GetNumberOfPoints()
for i in range(numPoints):
    x, y, z = input.GetPoint(i)
    newPoints.InsertPoint(i, x, y, 1 + z*0.3)
output.SetPoints(newPoints)

Start by creating a new instance of vtkPoints:

newPoints = vtk.vtkPoints()

vtkPoints is a data structure that VTK uses to store the coordinates of points. Next, loop over all points of the input and insert a new point in the output with coordinates (x, y, 1 + z*0.3):

for i in range(numPoints):
    x, y, z = input.GetPoint(i)
    newPoints.InsertPoint(i, x, y, 1 + z*0.3)

Finally, replace the output points with the new points you created:

output.SetPoints(newPoints)

Note: Python is an interpreted language and Python scripts do not execute as efficiently as compiled C++ code. Therefore, using a for loop that iterates over all points or cells may be a significant bottleneck when processing large datasets. The NumPy and VTK APIs can be mixed to achieve good performance. Even though this may seem a bit complicated at first, it can be used with great effect. For instance, the example above can be rewritten as follows.
from paraview.vtk.dataset_adapter import numpyTovtkDataArray

input = inputs[0]
newPoints = vtk.vtkPoints()
zs = 1 + input.Points[:,2]*0.3
coords = hstack([input.Points[:,0:2], zs])
newPoints.SetData(numpyTovtkDataArray(coords))
output.SetPoints(newPoints)

Even though this produces exactly the same result, it is much more efficient because the for loop was moved from Python to C. Under the hood, NumPy uses C and Fortran for tight loops.

If you read the Python Calculator documentation, this example is straightforward except for the use of numpyTovtkDataArray(). First, note that you are mixing two APIs here: the VTK API and NumPy. VTK and NumPy use different types of objects to represent arrays. The basic examples used previously carefully hide this from you. However, once you start manipulating VTK objects using NumPy, you have to start converting objects between the two APIs. Note that for the most part this conversion happens without "deep copying" arrays, that is, without copying the raw contents from one memory location to another. Rather, pointers are passed between VTK and NumPy whenever possible. The dataset_adapter module provides two methods to do the conversions described above:
• vtkDataArrayToVTKArray: This function creates a NumPy-compatible array from a vtkDataArray. Note that VTKArray is actually a subclass of numpy.matrix and can be used anywhere matrix can be used. This function always copies the pointer and not the contents. Important: You should not directly change the values of the resulting array if the argument is an array from the input.
• numpyTovtkDataArray: Converts a NumPy array (or a VTKArray) to a vtkDataArray. This function copies the pointer if the argument is a contiguous array. There are various ways of creating non-contiguous arrays with NumPy, including using hstack and striding. See the NumPy documentation for details.

Multiple Inputs

Like the Python Calculator, the Programmable Filter can accept multiple inputs.
First, select two or more pipeline objects in the pipeline browser and then apply the Programmable Filter. Each input can then be accessed using the inputs[] variable. Note that if the Output Dataset Type is set to Same as Input, the filter will copy the mesh from the first input to the output. If Copy Arrays is on, it will also copy arrays from the first input. As an example, the following script compares the Pressure attribute from two inputs using the difference operator:

output.PointData.append(inputs[1].PointData['Pressure'] -
                        inputs[0].PointData['Pressure'], "difference")

Dealing with Composite Datasets

Thus far, none of the examples used apply to multi-block or AMR datasets. When talking about the Python Calculator, you did not have to differentiate between simple and composite datasets. This is because the calculator loops over all of the leaf blocks of composite datasets and applies the expression to each one. Therefore, inputs in an expression are guaranteed to be simple datasets. On the other hand, the Programmable Filter does not perform this iteration and passes the input, composite or simple, as-is to the script. Even though this makes basic scripting harder for composite datasets, it provides enormous flexibility.

To work with composite datasets you need to know how to iterate over them to access the leaf nodes:

for block in inputs[0]:
    print block

Here you iterate over all of the non-NULL leaf nodes (i.e. simple datasets) of the input and print them to the Output Messages console. Note that this will work only if the input is multi-block or AMR.

When Output Dataset Type is set to "Same as Input," the Programmable Filter will copy the composite structure to the output; it will copy only the mesh unless Copy Arrays is on. Therefore, you can also iterate over the output. A simple trick is to turn on Copy Arrays and then use the arrays from the output when generating new ones. Below is an example.
You should use the can.ex2 file from the ParaView testing dataset collection.

def process_block(block):
    displ = block.PointData['DISPL']
    block.PointData.append(displ[:,0], "displ_x")

for block in output:
    process_block(block)

Alternatively, you can use the MultiCompositeDataIterator to iterate over the input and output blocks simultaneously. The following is equivalent to the previous example:

def process_block(input_block, output_block):
    displ = input_block.PointData['DISPL']
    output_block.PointData.append(displ[:,0], "displ_x")

from paraview.vtk.dataset_adapter import MultiCompositeDataIterator
iter = MultiCompositeDataIterator([inputs[0], output])
for input_block, output_block in iter:
    process_block(input_block, output_block)

Advanced

Changing Output Type

Thus far, all of the examples depended on the output type being the same as the input, so the Programmable Filter copied the input mesh to the output. If you set the output type to something other than Same as Input, the Programmable Filter will create an empty output of the type you specified but will not copy any information. Even though it may be more work, this provides a lot of flexibility. Since this is approaching the realm of VTK filter authoring, a very simple example is used. If you are already familiar with the VTK API, you will realize that this is a great way of prototyping VTK filters. If you are not, reading up on VTK is recommended.

Create a Wavelet source, apply a Programmable Filter, set the output type to vtkTable and use the following script:

rtdata = inputs[0].PointData['RTData']
output.RowData.append(min(rtdata), 'min')
output.RowData.append(max(rtdata), 'max')

Here, you added two columns to the output table. The first one holds a single value, the minimum of RTData, and the second one holds the maximum of RTData. When you apply this filter, the output should automatically be shown in a Spreadsheet view.
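Outside of ParaView, the reduction this vtkTable script performs can be mirrored in plain NumPy. The array values below are invented for illustration only; they are not the actual Wavelet RTData range:

```python
import numpy as np

# Stand-in for inputs[0].PointData['RTData'] (made-up values for illustration)
rtdata = np.array([37.4, 276.8, 157.1, 94.0, 230.5])

# The script above reduces the array to two single-value table columns;
# in plain NumPy the same reduction is simply:
rt_min = rtdata.min()
rt_max = rtdata.max()

print(rt_min, rt_max)
```

The 'min' and 'max' columns of the output table each hold one such reduced value.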
You could also use this sort of script to chart part of the input data. For example, the output of the following script can be displayed as a line chart:

rtdata = inputs[0].PointData['RTData']
output.RowData.append(rtdata, 'rtdata')

Changing the output type is also often necessary when using VTK filters within the script, which is demonstrated in the following section.

Dealing with Structured Data Output

A curvilinear grid, for instance bluntfin.vts from the ParaView testing data, makes a good example. If you would like to volume render a subset of this grid, you have two choices, since as of 3.10 ParaView does not support volume rendering of curvilinear grids:
• Resample to an image data
• Convert to an unstructured grid

This example demonstrates how to resample to image data using the Programmable Filter. This can also be accomplished using the Resample with Dataset filter, but it is a good example nevertheless. Start by loading bluntfin.vts, then apply the Programmable Filter. Make sure to set the output type to vtkImageData. Here is the script:

pinput = vtk.vtkImageData()
pinput.SetExtent(0, 10, 0, 10, 0, 10)
pinput.SetOrigin(0, 1, 0)
pinput.SetSpacing(0.5, 0.5, 0.5)
probe = vtk.vtkProbeFilter()
probe.SetInput(pinput)
input_copy = inputs[0].NewInstance()
input_copy.UnRegister(None)
input_copy.ShallowCopy(inputs[0].VTKObject)
probe.SetSource(input_copy)
probe.Update()
output.ShallowCopy(probe.GetOutput())

Note: See the next section for details about using a VTK filter within the Programmable Filter. If you already applied, you may notice that the output looks much bigger than it should because an important piece is missing.
You need to use the following as the RequestInformation script:

from paraview.util import SetOutputWholeExtent
SetOutputWholeExtent(self, [0, 10, 0, 10, 0, 10])

VTK expects all data sources and filters that produce structured data (rectilinear or curvilinear grids) to provide metadata about the logical extents of the output dataset before full execution. Thus, the RequestInformation script is called by the Programmable Filter before execution, and this is where you should provide this metadata. This is not required if the filter is simply copying the mesh, as the metadata would have been provided by another pipeline object upstream. However, if you are using the Programmable Filter to produce structured data with a different mesh than the input, you need to provide this information.

The RequestUpdateExtent script can be used to augment the request that propagates upstream before execution. This is used to ask for a specific data extent, for example. This is an advanced concept and is not discussed further here.

Using VTK Filters with the Programmable Filter

The previous example demonstrated how you can use a VTK filter (vtkProbeFilter in this case) from within the Programmable Filter. That example is explained in more detail here:

pinput = vtk.vtkImageData()
pinput.SetExtent(0, 10, 0, 10, 0, 10)
pinput.SetOrigin(0, 1, 0)
pinput.SetSpacing(0.5, 0.5, 0.5)
probe = vtk.vtkProbeFilter()
probe.SetInput(pinput)
input_copy = inputs[0].NewInstance()
input_copy.UnRegister(None)
input_copy.ShallowCopy(inputs[0].VTKObject)
probe.SetSource(input_copy)
probe.Update()
output.ShallowCopy(probe.GetOutput())

There are two important tricks to using a VTK filter from another VTK filter. First, do not directly set the input of the outer filter as the input of the inner filter. (It is difficult to explain why without getting into VTK pipeline mechanics.)
Instead, make a shallow copy as follows:

input_copy = inputs[0].NewInstance()
input_copy.UnRegister(None)
input_copy.ShallowCopy(inputs[0].VTKObject)

The UnRegister() call is essential to avoid memory leaks.

The second trick is to use ShallowCopy() to copy the output of the internal filter to the output of the outer filter as follows:

output.ShallowCopy(probe.GetOutput())

This should be enough to get you started. There are a large number of VTK filters, so it is not possible to describe them all here. Refer to the VTK documentation for more information.

References
[1] http://docs.python.org/tutorial/
[2] http://www.vtk.org
[3] http://www.vtk.org/doc/release/5.6/html/

Calculator

Basics

The Calculator filter can be used to compute derived quantities from existing attributes. The main parameter of the Calculator is an expression that describes how to calculate the derived quantity. You can enter this expression as free-form text or by using some of the shortcuts (buttons and menus) provided. There are some "hidden" expressions for which there are no buttons. Operands that are accessible only by typing in the function name include:
• min(expr1, expr2) Returns the lesser of the two scalar expressions
• max(expr1, expr2) Returns the greater of the two scalar expressions
• cross(expr1, expr2) Returns the vector cross product of the two vector expressions
• sign(expr) Returns -1, 0 or 1 depending on whether the scalar expression is less than, equal to or greater than zero, respectively
• if(condition,true_expression,false_expression) Evaluates the conditional expression and then evaluates and returns one of the two expressions
• > Numerical "greater than" conditional test
• < Numerical "less than" conditional test
• = Numerical "equal to" conditional test
• & Boolean "and" test conjunction
• | Boolean "or" test conjunction
(See Figure 6.2.)

Note: It is recommended that you use the Python Calculator instead of the Calculator if possible.
The Python Calculator is more flexible, has more functions and is more efficient. However, it requires that ParaView is compiled with Python support and that NumPy is installed.

Create a Wavelet source and then apply the Calculator using "1" as the expression. Note: You can enter an expression by clicking in the expression entry box and typing. This should create a point array called "Result" in the output. A few things to note:
• The Calculator copies the input mesh to the output. It is possible to have the Calculator change the point coordinates, which is discussed below.
• The expression is calculated for each element in the output point or cell data (depending on the Attribute Mode).

Next, change the expression to "5 * RTData" and the Result Array Name to "5 times rtdata" (without the quotes). If you change to surface representation and color by the new array, you will notice that the filter calculated "5 * RTData" at each point.

The main use case for the Calculator is to utilize one or more input arrays to calculate derived quantities. The Calculator can work on either point-centered attributes or cell-centered attributes (but not both). To help you enter the names of the input arrays, the Calculator provides two menus accessible through the "Scalars" and "Vectors" buttons. If you select an array name from either menu, it will be inserted into the expression entry box at the cursor location. You can also use the other buttons to enter any of the functions available to the Calculator.

Working with Vectors

To start with an example, create a Wavelet source, then apply the Random Vectors filter. Next, apply the Calculator. Now look at the Scalars and Vectors menus on the Object Inspector panel. You will notice that BrownianVectors shows up under Vectors, whereas BrownianVectors_X, _Y and _Z show up under Scalars. The Calculator allows access to individual components of vectors using this naming convention.
So if you use BrownianVectors_X as the expression, the Calculator will extract the first component of the BrownianVectors attribute. All of the Calculator's functions are applicable to vectors. Most of these functions treat vector attributes the same as scalars, applying the same function to all components of all elements. However, the following functions work only on vectors:
• v1 . v2 : Dot product of two vectors. Returns a scalar.
• norm : Creates a new array that contains normalized versions of the input vectors.
• mag : Returns the magnitude of the input vectors.

You may have noticed that four Calculator buttons on the Object Inspector are not actually functions. Clear is straightforward: it clears the expression entry box. iHat, jHat and kHat, on the other hand, are not as clear. These represent unit vectors in the X, Y and Z directions. They can be used to construct vectors from scalars. Take, for example, the case where you want to set the Z component of BrownianVectors from the previous example to 0. The expression to do that is "BrownianVectors_X*iHat+BrownianVectors_Y*jHat+0*kHat". This expression multiplies the X unit vector by the X component of the input vector, the Y unit vector by the Y component, and the Z unit vector by 0, and adds them together. You can use this sort of expression to create vectors from the individual components of a vector if the reader loaded them separately, for example. Note: You did not really need the 0*kHat term; it is included for demonstration.

Working with Point Coordinates

You may have noticed that one point-centered vector and its three components are always available in the Calculator. This vector is called "coords" and represents the point coordinates. You can use this array in your expression like any other array. For instance, in the previous example you could use "mag(coords)*RTData" to scale RTData by the distance of the point from the origin.
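For reference, an expression such as "mag(coords)*RTData" corresponds to a row-wise Euclidean norm multiplied by a scalar array. A minimal plain-NumPy sketch of the same computation, with invented coordinates and scalar values (not actual ParaView data):

```python
import numpy as np

# Hypothetical point coordinates (what the Calculator exposes as "coords");
# three points chosen so the norms come out exact.
coords = np.array([[3.0, 4.0, 0.0],
                   [0.0, 0.0, 2.0],
                   [1.0, 2.0, 2.0]])
rtdata = np.array([10.0, 5.0, 1.0])  # made-up scalar attribute

# "mag(coords)*RTData": distance of each point from the origin, times the scalar
mags = np.sqrt((coords ** 2).sum(axis=1))
scaled = mags * rtdata
print(scaled)
```

Each output element is one value per point, matching the Calculator's per-element evaluation model.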
It is also possible to change the coordinates of the mesh by checking the "Coordinate Results" box. Note that this does not work for rectilinear grids (uniform and non-uniform) since their point coordinates cannot be adjusted one-by-one. Since the previous examples used a uniform rectilinear grid, you cannot use them here. Instead, start with the Sphere source, then use this expression: "coords+2*iHat". Make sure to check the "Coordinate Results" box. The output of the Calculator should be a shifted version of the input sphere.

Dealing with Invalid Results

Certain functions are not applicable to certain arguments. For example, sqrt() works only on non-negative numbers since the Calculator does not support complex numbers. Unless the "Replace invalid results" option is turned on, an expression that tries to evaluate the square root of a negative number will return an error such as this:

ERROR: In /Users/berk/Work/ParaView/git/VTK/Common/vtkFunctionParser.cxx, line 697
vtkFunctionParser (0x128d97730): Trying to take a square root of a negative value

However, if you turn on the "Replace invalid results" option, the Calculator will silently replace the result of the invalid expression with the value specified in "Replacement value". Note that this happens only when the expression result is invalid, so some of the output points (or cells) may have the Replacement Value whereas others may have valid results.

Python Calculator

Introduction

The Python Calculator is a ParaView filter that processes one or more input arrays based on an expression provided by the user to produce a new output array (see Figure 6.3). The parameters of the filter include the expression, the association of the output array (Point or Cell Data), the name of the output array and a toggle that controls whether the input arrays are copied to the output. This section introduces the use of the Python Calculator and provides a list of functions available to the user.
Note that the Python Calculator depends on Python and NumPy. All ParaView binaries distributed by Kitware are built with these to enable the calculator. If you have built ParaView yourself, you have to make sure that NumPy is installed and that PARAVIEW_ENABLE_PYTHON is turned on when configuring the ParaView build.

Basic Tutorial

Start by creating a Sphere source and applying the Python Calculator to it. As the first expression, use the following and apply:

5

This should create an array named "result" in the output point data. Note that this array has a value of 5 for each point. When the expression results in a single value, the calculator will automatically make a constant array. Next, try the following:

Normals

Now the "result" array should be the same as the input array Normals. As described in detail later, various functions are available through the calculator. For example, the following is a valid expression:

sin(Normals) + 5

It is very important to note that the Python Calculator has to produce one value per point or cell, depending on the Array Association parameter. Most of the functions described here apply individually to all point or cell values and produce an array with the same dimensions as the input. However, some of them (such as min() and max()) produce single values.

Accessing Data

There are several ways of accessing input arrays within expressions. The simplest way is to access an array by name:

sin(Normals) + 5

This is equivalent to:

sin(inputs[0].PointData['Normals']) + 5

The example above requires some explanation. Here, inputs[0] refers to the first input (dataset) to the filter. The Python Calculator can accept multiple inputs. Each input can be accessed as inputs[0], inputs[1], ... You can access the point or cell data of an input using the .PointData or .CellData qualifiers. You can then access individual arrays within the point or cell data containers using the [] operator.
Make sure to use quotes or double quotes around the array name. Arrays that have certain characters (such as space, +, -, *, /) in their names can only be accessed using this method. Certain functions apply directly to the input mesh. These functions expect an input dataset as an argument. For example:

area(inputs[0])

For data types that explicitly define the point coordinates, you can access the coordinates array using the .Points qualifier. The following extracts the first component of the coordinates array:

inputs[0].Points[:,0]

Note that for certain data types, mainly image data (uniform rectilinear grids) and rectilinear grids, the point coordinates are defined implicitly and cannot be accessed as an array.

Comparing Multiple Datasets

The Python Calculator can be used to compare multiple datasets, as shown by the following example.
1. Go to the Menu Bar and select File > Disconnect to clear the Pipeline.
2. Select Source > Mandelbrot, and then click Apply, which will set up a default version of the Mandelbrot Set. The data for this set are stored in a 251x251 scalar array.
3. Select Source > Mandelbrot again, then go to the Properties panel and set the Maximum Number of Iterations to 50. Click Apply, which will set up a different version of the Mandelbrot Set, represented by an array of the same size.
4. Hold the Shift key down and select both of the Mandelbrot entries in the Pipeline Inspector, and then go to the Menu Bar and select Filter > Python Calculator. The two Mandelbrot entries will now be shown as linked, as inputs, to the Python Calculator.
5. In the Properties panel for the Python Calculator filter, enter the following into the Expression box:

inputs[1].PointData['Iterations'] - inputs[0].PointData['Iterations']

This expression specifies the difference between the second and first Mandelbrot arrays. The result is saved in a new array called 'results'.
The prefixes in the names of the array variables, inputs[0] and inputs[1], refer to the first and second Mandelbrot entries, respectively, in the Pipeline. PointData specifies that the inputs contain point values. The quoted label 'Iterations' is the local name for these arrays. Click Apply to initiate the calculation.

Click the Display tab in the Properties panel for the Python Calculator, and go to the first tab to the right of the 'Color by' label. Select the item results in that tab, which will cause the display window to the right to show the results of the expression entered in the Python Calculator. The scalar values representing the difference between the two Mandelbrot arrays are represented by colors that are set by the current color map (see Edit Color Map... for details).

There are a few things to note:
• The Python Calculator will always copy the mesh from the first input to its output.
• All operations are applied point by point. In most cases, this requires that the input meshes (topology and geometry) are the same. At the least, it requires that the inputs have the same number of points and cells.
• In parallel execution mode, the inputs have to be distributed exactly the same way across processes.

Basic Operations

The Python Calculator supports all of the basic arithmetic operations using the +, -, * and / operators. These are always applied element-by-element to point and cell data, including scalars, vectors and tensors. These operations also work with single values. For example, the following adds 5 to all components of all Normals:

Normals + 5

The following adds 1 to the first component, 2 to the second component and 3 to the third component:

Normals + [1,2,3]

This is especially useful when mixing functions that return single values. For example, the following normalizes the Normals array:

(Normals - min(Normals))/(max(Normals) - min(Normals))

A common use case in a calculator is to work on one component of an array.
This can be accomplished with the following:

Normals[:, 0]

The expression above extracts the first component of the Normals vector. Here, : is a placeholder for "all elements". One element can be extracted by replacing : with an index. For example, the following creates a constant array from the first component of the normal of the first point:

Normals[0, 0]

Whereas the following assigns the normal of the first point to all points:

Normals[0, :]

It is also possible to merge multiple scalars into an array using the hstack() function:

hstack([velocity_x, velocity_y, velocity_z])

Note the use of square brackets ([]). Under the cover, the Python Calculator uses NumPy. All arrays in the expression are compatible with NumPy arrays and can be used wherever NumPy arrays can be used. For more information on what you can do with these arrays, consult the NumPy book, which can be downloaded here [1].

Functions

The following is a list of functions available in the Python Calculator. Note that this list is partial, since most of the NumPy and SciPy functions can be used in the Python Calculator. Many of these functions can take single values or arrays as arguments.

abs (x) : Returns the absolute value(s) of x.
add (x, y) : Returns the sum of two values. x and y can be single values or arrays. Same as x+y.
area (dataset) : Returns the surface area of each cell in a mesh.
aspect (dataset) : Returns the aspect ratio of each cell in a mesh.
aspect_gamma (dataset) : Returns the aspect ratio gamma of each cell in a mesh.
condition (dataset) : Returns the condition number of each cell in a mesh.
cross (x, y) : Returns the cross product for two 3D vectors from two arrays of 3D vectors.
curl (array) : Returns the curl of an array of 3D vectors.
divergence (array) : Returns the divergence of an array of 3D vectors.
divide (x, y) : Element-by-element division. x and y can be single values or arrays. Same as x/y.
det (array) : Returns the determinant of an array of 2D square matrices.
determinant (array) : Returns the determinant of an array of 2D square matrices.
diagonal (dataset) : Returns the diagonal length of each cell in a dataset.
dot (a1, a2) : Returns the dot product of two scalars/vectors of two arrays of scalars/vectors.
eigenvalue (array) : Returns the eigenvalues of an array of 2D square matrices.
eigenvector (array) : Returns the eigenvectors of an array of 2D square matrices.
exp (x) : Returns power(e, x).
global_max (array) : Returns the maximum value of an array of scalars/vectors/tensors across all processes. Not yet supported for multi-block and AMR datasets.
global_mean (array) : Returns the mean value of an array of scalars/vectors/tensors across all processes. Not yet supported for multi-block and AMR datasets.
global_min (array) : Returns the minimum value of an array of scalars/vectors/tensors across all processes. Not yet supported for multi-block and AMR datasets.
gradient (array) : Returns the gradient of an array of scalars/vectors.
inv (array) : Returns the inverse of an array of 2D square matrices.
inverse (array) : Returns the inverse of an array of 2D square matrices.
jacobian (dataset) : Returns the Jacobian of a dataset.
laplacian (array) : Returns the Laplacian of an array of scalars.
ln (array) : Returns the natural logarithm of an array of scalars/vectors/tensors.
log (array) : Returns the natural logarithm of an array of scalars/vectors/tensors.
log10 (array) : Returns the base-10 logarithm of an array of scalars/vectors/tensors.
max (array) : Returns the maximum value of the array as a single value. Note that this function returns the maximum within a block for AMR and multi-block datasets, not across blocks/grids. Also, this returns the maximum within each process when running in parallel.
max_angle (dataset) : Returns the maximum angle of each cell in a dataset.
mag (a) : Returns the magnitude of an array of scalars/vectors.
mean (array) : Returns the mean value of an array of scalars/vectors/tensors.
min (array) : Returns the minimum value of the array as a single value. Note that this function returns the minimum within a block for AMR and multi-block datasets, not across blocks/grids. Also, this returns the minimum within each process when running in parallel.
min_angle (dataset) : Returns the minimum angle of each cell in a dataset.
mod (x, y) : Same as remainder(x, y).
multiply (x, y) : Returns the product of x and y. x and y can be single values or arrays. Note that this is an element-by-element operation when x and y are both arrays. Same as x * y.
negative (x) : Same as -x.
norm (a) : Returns the normalized values of an array of scalars/vectors.
power (x, a) : Exponentiation of x with a. Both x and a can be either a single value or an array. If x and a are both arrays, a one-by-one mapping is used between the two arrays.
reciprocal (x) : Returns 1/x.
remainder (x, y) : Returns x − y*floor(x/y). x and y can be single values or arrays.
rint (x) : Rounds x to the nearest integer(s).
shear (dataset) : Returns the shear of each cell in a dataset.
skew (dataset) : Returns the skew of each cell in a dataset.
square (x) : Returns x*x.
sqrt (x) : Returns the square root of x.
strain (array) : Returns the strain of an array of 3D vectors.
subtract (x, y) : Returns the difference between two values. x and y can be single values or arrays. Same as x - y.
surface_normal (dataset) : Returns the surface normal of each cell in a dataset.
trace (array) : Returns the trace of an array of 2D square matrices.
volume (dataset) : Returns the volume of each cell in a dataset.
vorticity (array) : Returns the vorticity/curl of an array of 3D vectors.
vertex_normal (dataset) : Returns the vertex normal of each point in a dataset.

Trigonometric Functions

Below is a list of supported trigonometric functions.
sin (x), cos (x), tan (x), arcsin (x), arccos (x), arctan (x), hypot (x1, x2), sinh (x), cosh (x), tanh (x), arcsinh (x), arccosh (x), arctanh (x)

References
[1] http://www.tramy.us/guidetoscipy.html

Spreadsheet View

In several cases it is very useful to look at the raw dataset, and this is where the Spreadsheet View is particularly useful. The Spreadsheet View allows users to explore the raw data in a spreadsheet-like display (Figure 6.4). Users can inspect cells and points and the data values associated with them using this view. This makes it very useful for drilling down into the data.

The Spreadsheet View, as the name suggests, is a view that users can create by splitting the main view frame in the application client area. Refer to the chapter on Views for details on views. The Spreadsheet View can only show one dataset at a time. However, users can use ParaView's multi-view capabilities to create multiple spreadsheet views for inspecting different datasets.

Inspecting Large Datasets

The Spreadsheet View is a client-only view, i.e. it delivers the necessary data to the client when running in client-server mode. Additionally, it is not available through Python scripting (except when running through the Python shell provided by the ParaView application) or batch scripting. Furthermore, when running on a tile display, the area covered by the spreadsheet on the client simply shows up as a blank region on the tiles. Unlike complex views like the 3D render view, which can generate the renderings on the server and simply deliver images to the client, the Spreadsheet View requires that the data is available on the client. This can be impractical when inspecting large datasets, since the client may not have enough memory, even if infinite bandwidth is assumed. To address such issues, the Spreadsheet View streams the data to the client, only fetching the data for the rows currently visible in the viewport.
Double Precision

Using the precision spin-box in the view header, users can control the precision for floating point numbers. The value determines the number of digits shown after the decimal point.

Selection with Spreadsheet View

The spreadsheet view, in many ways, behaves like typical spreadsheet applications. You can scroll, select rows using mouse clicks, arrow keys, and modifiers like the Shift and Ctrl keys, or sort columns by clicking on the header. Additionally, you can double-click on a column header to toggle the maximization of a column for better readability. When you select rows, the corresponding cells or points are selected, and ParaView highlights them in other views, such as the 3D view. Conversely, when you make a selection in the 3D view or the chart views, the rows corresponding to the selected cells/points are highlighted in the spreadsheet view. Of course, for the view to highlight the selection, the selected dataset must be the one being shown in the spreadsheet view. To make it easier to inspect just the selected elements, you can check the "Show only selected" button on the view header. When in "Show only selected" mode, you can no longer create selections in the spreadsheet view. You have to use the other views to make the selection, and the spreadsheet view will automatically update to show the details for the items that were selected.

The spreadsheet view can show data associated with cells, points, or even field data. To choose which attribute the view shows, use the attribute-type combo-box. The selection created by the view depends on the attribute type, i.e., if a user selects in the view when the attribute type is "Points", points in the dataset will be selected. The spreadsheet view also performs selection conversions when possible, i.e., if you select a cell in the 3D view, but the spreadsheet view is set up to show points, then the view will highlight the points that form the selected cell.
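The precision control described at the top of this section behaves like fixed-point formatting of each value; a minimal plain-Python illustration (not ParaView code):

```python
# With a precision setting of 3, a value is shown with three
# digits after the decimal point.
value = 3.14159265
shown = f"{value:.3f}"
```

Raising or lowering the spin-box value simply changes the number of digits displayed; the underlying data is unaffected.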
The spreadsheet view may add several data columns that are not present in the actual data. These columns either show derived information, such as the (i, j, k) coordinates for structured data, or provide additional information about the data, e.g. the block index for composite datasets, or about the data distribution, such as the process id when connected to a parallel server (pvserver or pvdataserver).

Working with Composite Datasets

The spreadsheet view works seamlessly with different kinds of dataset types, including composite datasets such as multi-block datasets or AMR datasets. When dealing with composite datasets, the view shows one block at a time. Users can choose the block to inspect using the Display section of the Properties panel.

Figure 6.5 Properties panel showing the widget to select the block to display

Selection

Selection is the mechanism for identifying a subset of a dataset by using user-specified criteria. This subset can be a set of points, cells, or a block of a composite dataset. This functionality allows users to focus on a smaller subset that is important. For example, the elements of a finite-element mesh that have pressure above a certain threshold can be identified very easily using a threshold selection. Furthermore, this selection can be converted to a set of global element IDs in order to plot the attribute values of those elements over time.

This section discusses the mechanisms for creating selections using views and the Selection Inspector. The following section details another powerful and flexible mechanism for creating selections using queries.

ParaView supports a single active selection. This selection is associated with a data source (here, data source refers to any reader, source, or filter) and is shown in every view that displays the data source's output. This article uses a use-case driven approach to demonstrate how this selection can be described and used.
The next section introduces the main GUI components that are used in this article, and subsequent sections address different use cases.

Selection Inspector

ParaView provides a selection inspector (referred to simply as the inspector in this article) to inspect and edit the details of the active selection. You can toggle the inspector's visibility from the View menu. The inspector can be used to create a new active selection, view/edit the properties of the active selection, as well as change the way the selection is displayed in the 3D window, e.g. change the color, show labels, etc.

Spreadsheet View

Figure 6.6 Spreadsheet View in ParaView showing the display tab for a source producing a multi-block dataset. The selected cells are highlighted in the 3D view as well as the spreadsheet view. The active selection can be inspected using the selection inspector.

The Spreadsheet View provides data exploration capabilities. One of the common complaints many users have is not being able to look at the raw data directly. The Spreadsheet View provides exactly that. It allows the user to look at the raw cell data, point data, or field data associated with a dataset. For more details on how to use the Spreadsheet View, please refer to the Spreadsheet View chapter.

Create a Selection

This section covers different ways of creating a selection.

Select cells/points on the Surface

One of the simplest use cases is to select cells or points on the surface of the dataset. It is possible to select surface cells by drawing a rubber band on the 3D view. With the 3D view showing the dataset active, click on Select Cells (or Points) On in the Selection Controls toolbar or under the Edit menu (you can also use the 'S' key as a shortcut for 'Select Cells On'). This puts ParaView into a selection mode. In this mode, click and drag over the surface of the dataset in the active view to select the cells (or points) on the surface.
If anything was selected, it will be highlighted in all the views showing the data, and the source producing the selected dataset will become active in the Pipeline Browser. ParaView supports selecting only one source at a time. Hence, even if you draw the rubber band such that it covers data from multiple sources, only one of them will be selected (the one that has the largest number of selected cells or points).

As mentioned earlier, when data from a source is selected, all the views displaying the data show the selection. This includes the spreadsheet view as well. If the spreadsheet view is showing cell or point attributes of the selected data, it will highlight the corresponding rows. When selecting points, the spreadsheet view will show the selection only if point attributes are being displayed. When selecting cells, it will highlight the cells in the cell attribute mode, and highlight the points forming the cells in the point attribute mode. For any decent-sized dataset, it can be a bit tricky to locate the selected rows. In that case, the Show only selected elements option on the display tab can be used to hide all the rows that were not selected.

When selecting cells (or points) on the surface, ParaView determines the cell (or point) IDs for each of the cells (or points) rendered within the selection box. The selection is simply the IDs of the cells (or points) thus determined.

Select cells/points using a Frustum

Figure 6.7: Selection using a frustum. Note that all cells that lie within the specified frustum are selected. The selection inspector shows the details of the selection.

This is similar to selecting on the surface except that, instead of selecting the cells (or points) on the surface of the dataset, it selects all cells (or points) that lie within the frustum formed by extruding the rectangular rubber band drawn on the view into 3D space.
To perform a frustum selection, use Select Cells (or Points) Through in the Selection Controls toolbar or under the Edit menu. As with surface selection, the selected cells/points are shown in all the views in which the data is shown, including the spreadsheet view. Unlike surface selection, the indices of the cells or points are not computed after a frustum selection. Instead, ParaView performs intersections to identify the cells (or points) that lie within the frustum. Note that this method can produce a very large selection, and thus may be time consuming and can increase memory usage significantly.

Select Blocks in a Composite Dataset

Figure 6.8: Selecting a block in a multi-block dataset. All the cells in the selected block are highlighted. The selection inspector shows the selected block.

Composite datasets are multi-block or AMR (adaptive mesh refinement) datasets. In the case of multi-block datasets, each block may represent a different component of a large assembly, e.g. tires, chassis, etc. for a car dataset. Just like selecting cells or points, it is possible to select entire blocks. To enter the block selection mode, use Select Block in the Selection Controls toolbar or under the Edit menu (you can also use the 'B' key as a shortcut for Select Block). Once in block selection mode, you can simply click on a block in the 3D view to select a single block, or click and drag to select multiple blocks. When a block is selected, its surface cells will be highlighted.

Select using the Spreadsheet View

Thus far the examples have looked at defining the selection in the 3D view. This section focuses on how to create selections using the spreadsheet view. As discussed earlier, the spreadsheet view simply shows the raw cell (point or field) data in a table. Each row represents a unique cell (or point) from the dataset. As in any other spreadsheet application, you can select a cell (or a point) by simply clicking on the corresponding row.
You can expand the selection using the Ctrl and Shift keys while clicking. If the spreadsheet view is currently showing point attributes, selecting rows creates a point-based selection. Similarly, if it is showing cell attributes, it creates a cell-based selection. Selections cannot be created when showing field attributes, which are not associated with any cell or point.

All views showing a selected dataset show the selection. The spreadsheet view showing the data from a source selected in the 3D view highlights the corresponding cells/points. Conversely, when creating a selection in the spreadsheet view, the corresponding cells (or points) get highlighted in all the 3D views showing the source.

Select using the Selection Inspector

Figure 6.9: Location-based selection showing the cross hairs used to specify the locations.

Sometimes you may want to tweak the selection, create a new selection with a known set of cell (or point) IDs, or create selections based on the value of an array or on location, etc. This is possible using the Selection Inspector.

Whenever a selection is created in any of the views, it becomes the active selection. The active selection is always shown in the Selection Inspector. For example, if you select cells on the surface of a dataset, then, as shown in Figure 6.8, the selection inspector will show the indices of the selected cells.

The selection inspector has three sections: the topmost, Current Object and Create Selection, is used to choose the source whose output you want to create a selection on. The Active Selection group shows the details of the active selection, if any. The Display Style group makes it possible to change the way the selection is shown in the active 3D view.

To create a new selection, choose the source whose output needs to be selected in the Current Object combo-box and then hit Create Selection. An empty selection will be created and its properties will be shown in the active selection group.
Alternatively, you can use any of the methods described earlier to create a selection; it will still be shown in the selection inspector. When you select cells (or points) on the surface or using the spreadsheet view, the selection type is set to IDs. Creating a frustum selection results in a selection with the selection type set to Frustum, while selecting a block in a composite dataset creates a Block selection. Field Type indicates whether cells or points are to be selected.

In the active selection group, Selection Type indicates the type of the active selection. You can change the type by choosing one of the available options.

As shown in Figure 6.8, for an IDs selection the inspector lists all the selected cell or point indices. You can edit the list of IDs to add or remove values. When connected to a parallel server, cell or point IDs are not unique. Hence, you additionally have to specify the process number for each cell or point ID. A process number of -1 implies that the cell (or point) at the given index is selected on all processes. For multi-block datasets, you also need to indicate the block to which the cell or point belongs. For AMR datasets, you need to specify the (AMR level, index) pair.

As shown in Figure 6.6, for a Frustum selection, currently only the Show Frustum option is available. When this option is turned on, ParaView shows the selection frustum in the 3D view. In the future, we will implement a 3D widget to modify the frustum.

As shown in Figure 6.8, for a Block selection, the full composite tree is shown in the inspector with the selected blocks checked.

Using the selection inspector, you can create a selection based on thresholds for scalars in the dataset. Choose the scalar array and then add value ranges for the selected cells (or points).

The Selection Inspector can be used to create a location-based selection. When the field type is CELL, cells at the indicated 3D locations will be selected.
When the field type is POINT, the point closest to the location, within a certain threshold, is selected. If Select cells that include the selected points is checked, then all the cells that contain the selected point are selected. It is possible to specify more than one location. To aid in choosing positions, you can turn the Show location widgets option on. As a result, ParaView will show cross hairs in the active 3D view, which can be moved interactively, as shown in Figure 6.6.

The Selection Inspector also provides a means to create global-ID-based selections. These are similar to index-based selections; however, since global IDs are unique across all processes and blocks, you do not need to specify the additional information needed by ID-based selections.

Convert Selections

The Selection Inspector can also be used to convert a selection of one type to another. With some valid active selection present, if you change the selection type, ParaView will try to convert the current selection to the new type, still preserving the cells (or points) that were selected, if possible. For example, if you create a frustum-based selection and then change the selection type to IDs, ParaView will determine the indices of all the cells (or points) that lie within the frustum and initialize the new index-based selection with those indices. Note that the number of cells or points that get selected in frustum selection mode can potentially be very large; hence this conversion can be slow and memory expensive. Similarly, if the dataset provides global IDs, then it is possible to convert between IDs selections and global-ID-based selections.

It is not possible to convert between all types of selections. Conversions between ID-based and global-ID-based selections, conversions from frustum to ID-based selections, and conversions from frustum to global-ID-based selections are supported by the Selection Inspector.
Label Selected Cells/Points

Once an active selection is created, you can label the selected cells or points, or their associated values, in a 3D view. This can be done using the Selection Inspector. At the bottom of the selection inspector panel there are two tabs, Cell Label and Point Label, which can be used to change the cell or point label visibility and other label attributes such as color, font, etc. These tabs are enabled only if the active view is a 3D view. Any changes made in the Display Style group (including the labels) only affect the active 3D view.

Extract Selection

Figure 6.11: Extract selection using a frustum selection

Selection makes it possible to highlight and observe regions of interest. Often, once a region of interest has been identified, users would like to apply additional operations on it, such as applying filters to only the selected section of the data. This can be achieved using the Extract Selection filter. To set the selection to extract, create a selection using any of the methods already described. Then apply the Extract Selection filter to the source producing the selected data. To copy the active selection to the filter, use the Copy Active Selection button. You can change the active selection at any time and update the filter to use it by pressing this button. Figure 6.11 shows the extract selection filter applied after a frustum selection operation. Now you can treat this as any other data source and apply filters to it, save state, save data, etc.

Plot Selection over Time

Figure 6.12: Plot selection over time.

For time-varying datasets, you may want to analyze how the data variables change over time for a particular cell or point. This can be done using the Plot Selection Over Time filter.
This filter is similar to the Extract Selection filter, except that it extracts the selection for all time steps provided by the data source (typically a reader) and accumulates the values of all the cell (or point) attributes over time. Since the selection can comprise multiple cells or points, the display tab provides the Select Block widget, which can be used to select the cell or point to plot, as shown in Figure [6]. Currently, only one cell (or point) can be plotted at a time in the same XY-plot view. You can create multiple plot views to show multiple plots simultaneously.

Querying for Data

Find Data Dialog

As previously described, selection is a mechanism in ParaView for sub-setting and focusing on particular elements in the dataset. Different views provide different mechanisms for selecting elements; for example, you can select visible cells or points using the 3D view. Another mechanism for creating selections is by specifying a selection criterion. For example, suppose you want to select all cells where the pressure value exceeds a certain threshold. In such cases, you can use the Find Data dialog. The Find Data dialog performs a dual role: not only does it enable specifying the selection criteria, it also shows details of the selected elements in a spreadsheet. This makes it easier to inspect the selected elements. To open the Find Data dialog, go to Edit | Find Data.

Figure 6.13 Query based on field "Global ID" / Query based on Python expression (generated by the query on the left)

When to use Find Data

This feature is useful when you run into situations where you want to know the cells or points at which a certain condition holds. For example:
• What are the cells at which PRESSURE >= 12?
• What are the points with TEMP values in the range (12, 133)?
• Locate the cell at ID 122, in Block 2.

This feature provides a convenient way of creating selections based on certain criteria that can then be extracted from the data if needed.
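Criteria like those above are evaluated as boolean masks over the element arrays (as described in the next section, ParaView parses queries with Python and numpy). A rough, standalone NumPy sketch — the array names and values here are made up for illustration:

```python
import numpy as np

# Made-up element array, standing in for a field exposed to a query.
PRESSURE = np.array([5.0, 12.0, 20.0, 3.0])
ids = np.arange(len(PRESSURE))       # stand-in for the special "id" field

# "PRESSURE >= 12" becomes a boolean mask over the array.
mask = PRESSURE >= 12
selected = ids[mask]                 # ids of the matching elements

# A range criterion needs parentheses around each comparison.
in_range = (PRESSURE > 4) & (PRESSURE < 15)
```

The same mask-style expressions are what you type verbatim into the Query field of the Find Data dialog.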
Using the Find Data dialog

The dialog is designed to be used in two distinct operations:
• Define the selection criteria or query.
• Process the selected cells/points, e.g. show labels in the active 3D view, extract the selection, etc.

You must define the selection query and then execute it, using the Run Query button, before being able to inspect or process the selected elements.

Defining the Query

First, decide what type of elements you are interested in selecting, that is, cells or points, and from which data source. This can be done using the corresponding combo boxes. Note that as you change these, any previous selections/queries will be cleared.

Figure 6.14 Find Data options

Next, you must define the query string. The syntax for specifying the query string is similar to the expressions used in the Python Calculator. In fact, ParaView uses Python and numpy under the covers to parse the queries. In addition, based on the data type and the nature of the current session, there may be additional rows that allow users to qualify the selection using optional parameters such as the process number (when running in parallel) or the block number (for composite datasets).

Once you have defined your query, hit the Run Query button to execute it. If any elements are selected, they will be shown in the spreadsheet in this dialog. Also, if any of the views are showing the selected dataset, they will highlight the selected elements as well, just like a regular view-based selection.

Sample Queries

• Select elements with a particular id:
id == 100
• Select elements given a list of ids:
contains(id, [100, 101, 102]) or in1d(id, [100, 101, 102])
• Select elements matching a criterion on the element arrays Temp and V:
Temp > 200
or
Temp == 200
or
contains(Temp, [200, 300, 400])
or
(Temp > 300) & (Temp < 400) # don't forget the parentheses
or
(Temp > 300) | (mag(V) > 0)
• Select cells with cell volume matching certain criteria:
volume(cell) > 0
or
volume(cell) == max(volume(cell))

Rules for defining queries

• For the element type chosen, every element array becomes available as a field with the same name as the array. Thus, if you are selecting points and the dataset has point arrays named "Temp" and "pressure", you can construct queries using these arrays.
• The special fields id (corresponding to the element id), cell (corresponding to the cell), and dataset (corresponding to the dataset) are available and can be used to compute quantities for constructing queries, e.g. to compare the volumes of cells, use volume(cell).
• Queries can be combined using the operators '&' and '|'.

Query generator

The combobox allows the user to create queries in a more intuitive, but more limited, way. This can be useful, especially when you want to learn how to write more complex queries. To do so, execute your selection using a field chosen directly from the combobox instead of the "Query" keyword. Any selection execution will internally generate a query string, which can be seen by switching back to the "Query" combobox value. Such a query can then be used as part of a more complex one if needed.

Displaying the Selection

Once a query is executed, the selected elements will be highlighted in all views where the selected data is visible. If the active view is a 3D view, you can choose whether to show labels for the selected elements, as well as the color to use for showing them, using the controls on the Find Data dialog itself.

Figure 6.15

Extracting the Selection

The results of a query are temporary. They get replaced when a new query is executed or when the user creates a selection using any of the selection mechanisms.
Sometimes, however, users may want to further analyze the selected elements, such as applying more filters to only those elements, or plotting the change in their attributes over time. In that case, you should extract the selection. That creates a new filter that is set up to run the query on its input and produce a dataset matching the selection criteria. Both Extract Selection and Plot Selection Over Time are filters available through the Filters menu. The Find Data dialog provides shortcut buttons to quickly create those filters and set them up with the chosen selection criteria.

Figure 6.16

Histogram

The Histogram filter produces output indicating the number of occurrences of each value from a chosen data array. It takes in a vtkDataObject, but the object must have either point or cell data to operate on. The filter cannot be used directly with a table, but the Table To Points filter can be used to convert the table to a polydata for input into the Histogram filter.

The bar chart is the default view for the output of the Histogram filter. Other views that can be used are the line chart view, parallel coordinates view, and spreadsheet view. The chart views are useful for observing trends in the data, while the spreadsheet view is useful for seeing exact numbers. An example of Histogram filter output in a bar chart view is shown in Figure 6.17.

Figure 6.17 Histogram filter output in bar chart view

Options for the Histogram filter are:
• Which array to process. This can be either a point data array or a cell data array. For arrays with more than one component, the user can specify which component to compute with respect to. The default is the first component.
• The number of bins to put the results in, as well as the range over which the bins should be divided.
• An option to average other field information of the same type for each of the bins.
• Which variables are to be displayed in the view.
This option is under the Display section on the Properties panel.

Plotting and Probing Data

There are multiple ways of probing a dataset for point or cell values. The simplest is the Probe Location filter. Additionally, there are a variety of filters to plot data with respect to time, location, or grid object ID.

Probe Filter

The probe filter can be used to query a dataset for point or cell data. Options for the probe filter are:
• The Show Point button is used to show where the center of the sphere is for generating the probe points.
• Center on bounds will place the sphere center at the center of the dataset's bounds.
• The sphere center can be set either in the Properties tab of the Object Inspector or in the 3D view with the mouse cursor. To choose a new point in the 3D view window, press P and then click on the desired point. Note that the Show Point option must be selected in order to activate the point widget.

The output of this filter is a single cell comprised of a set of points in a vtkPolyData.

Line Plots

The line plots have similar control functionality for displaying their results. These options are in the Display tab of the Object Inspector and include selecting which point and/or cell data to include in the plot, what should be used as the X axis values (e.g. point/cell index or a specified point or cell data array), and plot preferences such as line thickness and marker style. See the line chart view section for more information on setting the display properties for these filters. An example line plot is shown in Figure 6.18.

Figure 6.18 A Line Plot

Plot Data / Scatter Plot

The Plot Data filter and the Scatter Plot filter are very similar in functionality, the Scatter Plot filter being a deprecated version of the Plot Data filter. The main difference is that the Scatter Plot filter's output is a 1D rectilinear grid, while the Plot Data filter's output is of the same type as its input.
The Plot Data filter plots point or cell data over the entire dataset. By default, the X axis values are determined by the point or cell index.

Plot Over Line

The Plot Over Line filter is used to plot point data over a specified straight line. The Plot Over Line filter will not display cell data over the line, so the user can use the Cell Data to Point Data filter to convert cell data to point data in order to view the desired cell data with this filter. The line geometry can be specified either by setting the points in the Properties tab of the Object Inspector, or by pressing P and selecting the beginning and ending points of the line. Note that the Show Line option must be selected to activate the line widget. The resolution option specifies at how many evenly spaced points along the line the dataset is queried for point data.

Plot on Sorted Lines

The Plot on Sorted Lines filter is used when polydata with line cell types has been extracted from a dataset and the user wants to see how point data varies over the line. This filter orders the lines by their connectivity instead of by their point index. The output of the filter is a multiblock with a block for each contiguous line.

Plot on Intersection Curves

The Plot on Intersection Curves filter is used to plot point data where the dataset intersects the given polygonal slice object. The result is a multiblock of polydatas, where each block represents a contiguous set of cells. Note that the polydatas will only have 1D cell types. Thus, if the slice-type object's intersection with the dataset has 2D geometry, the filter will take the boundary of the slice-type object in order to reduce it to 1D geometry. If the slice type is a sphere that is wholly contained in a volumetric dataset, then the filter parameters are invalid and no lines will be output.
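The evenly spaced sampling performed by Plot Over Line, described above, can be pictured with a short NumPy sketch. The endpoints are made up, and this assumes a resolution of N means N line segments, i.e. N + 1 sample points:

```python
import numpy as np

# Made-up endpoints of the probe line.
p0 = np.array([0.0, 0.0, 0.0])
p1 = np.array([1.0, 2.0, 0.0])

resolution = 4                                 # assumed: 4 segments
t = np.linspace(0.0, 1.0, resolution + 1)      # 5 parameter values in [0, 1]
points = p0 + t[:, None] * (p1 - p0)           # 5 evenly spaced probe points
```

The dataset is then queried for point data at each of these locations, which become the X axis samples of the resulting line plot.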
Plot Selection Over Time

The Plot Selection Over Time filter can be used to visualize the variation of point or cell data from a selection with respect to time. The selection should be made on the dataset of the filter that precedes the Plot Selection Over Time filter in the pipeline, in order to ensure that the proper points or cells are selected. Note that information for only a single point or cell can be plotted at a time.

Plot Global Variables Over Time

The Plot Global Variables Over Time filter plots field data that is defined over time. Filters must be set up explicitly to provide the proper information, as this will not work for field data in general. As an example, see the Exodus reader in ParaView.

Saving Data

Once you have created a visualization of your data in ParaView, you can save the resulting work as raw data, or as an image, movie, or geometry.

Save raw data

Any object in the ParaView pipeline browser can be saved to a data file by selecting Save Data from the File menu. The available file types change based on the dataset type of the current dataset. The file formats in which ParaView can save data are listed in the List of Writers.

Figure 7.1 Saving Files in ParaView

Save screenshots

ParaView allows the user to save either the active view or all the views. The resulting image will be saved on the client machine, even when running with a remote server. The dialog allows you to control the following:
• Image size
• Aspect ratio of the image
• Image quality
• Color palette
• Stereo mode

Figure 7.2 Save Snapshot Resolution Dialog Box

Save Animation

Once you have created an animation of your data, you can save the animation to disk either as a series of images (one per animation frame) or as a movie file. The animation will contain all the visible views. To do this, select Save Animation from the File menu.
The Animation Settings Dialog then appears, which lets you set properties for the recorded animation.

Figure 7.3 Animation Settings Dialog Box

Once you press the Save Animation button, a save-file dialog box will allow you to choose where to save your image or movie file(s). Enter a file name, then select an image file type (.jpg, .tif, or .png) or a movie file type (.avi). Once you choose a file name, the animation will play from beginning to end, and the animation frames generated will be used to create the selected type of image or movie file(s). While the animation is being recorded, the render window in ParaView will not update with the correct frame.

If you are connected to a remote server, the Disconnect Before Saving Animation checkbox will be enabled. If you select this before saving the animation, the ParaView client will disconnect from the server immediately, and the server will continue generating and saving images until the animation completes. When it has finished recording the animation, the server will shut down.

Save geometries

In addition to saving images of each time step in your animation, you may also wish to save the geometry itself. Select Save Geometry from the File menu to do this. This will cause a file navigation dialog box to be displayed. Navigate to the location where you wish to save the geometry and enter a file name. You must save your data using ParaView's .pvd file format. Unlike an animation, Save Geometry will only save the visible geometry of the active view for each time step. The resulting .pvd file will contain a pointer to each of these files, which are saved in a folder with the same name as the .pvd file. You can later reload the .pvd file into ParaView as a time-varying dataset. If multiple datasets were displayed while the animation was running, they will be grouped together as a multi-block dataset for each time step.
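The .pvd file itself is a small XML index pointing at the per-time-step geometry files. A hypothetical example of its general shape — the file and folder names here are illustrative, not output from any particular session:

```xml
<?xml version="1.0"?>
<VTKFile type="Collection" version="0.1">
  <Collection>
    <!-- One entry per saved time step; the referenced files live in a
         folder named after the .pvd file. -->
    <DataSet timestep="0" part="0" file="mygeometry/mygeometry_0.vtp"/>
    <DataSet timestep="1" part="0" file="mygeometry/mygeometry_1.vtp"/>
  </Collection>
</VTKFile>
```

Because the index stores relative paths, the .pvd file and its folder should be moved together if the data is relocated.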
If you then want to operate on the parts individually, run the Extract Blocks filter to select the appropriate block(s).

Exporting Scenes

ParaView provides functionality to export any scene set up with polygonal data (i.e., without volume rendering). Currently X3D [1] (ASCII as well as binary), VRML (Virtual Reality Modeling Language) [2], and POV-Ray [3] are supported. To export a scene, set it up in a 3D view. Only one view can be exported at a time. With the view to be exported active, choose File | Export. A new HTML/WebGL exporter is also available in ParaView/master, or in ParaView 4 as a plugin. The file dialog will list the available types. The type is determined by the extension of the file written out:

• *.vrml -- VRML [4]
• *.x3d -- X3D ASCII [5]
• *.x3db -- X3D Binary [5]
• *.pov -- POV-Ray [6]
• *.html -- Uses WebGL to render a surface-based 3D scene in a web page. Static standalone WebGL example from ParaViewWeb [7]

Figure 7.4 Export option in the File menu can be used to export the scene set up in a 3D view

References

[1] http://www.web3d.org/x3d/
[2] http://www.web3d.org/x3d/vrml
[3] http://www.povray.org/
[4] http://www.vtk.org/doc/nightly/html/classvtkVRMLExporter.html
[5] http://www.vtk.org/doc/nightly/html/classvtkX3DExporter.html
[6] http://www.vtk.org/doc/nightly/html/classvtkPOVExporter.html
[7] http://paraviewweb.kitware.com/PWApp/WebGL?name=mummy-state&button=View

3D Widgets

Manipulating data in the 3D view

In addition to being controlled manually through entry boxes, sliders, etc., parameters of some of the filters and sources in ParaView can be changed interactively by manipulating 3D widgets in a 3D view. Often the 3D widgets are used to set the parameters approximately, and then the manual controls are used for fine-tuning these values.
In the manual controls for each 3D widget, there is a check box for toggling whether the 3D widget is drawn in the scene. The label for the check box depends on the type of 3D widget being used. The following 3D widgets are supported in ParaView.

Line Widget

The line widget is used to set the orientation and the position of a line. It is used in both the stream tracer and elevation filters. The position of the line can be changed by clicking on any point on the line, except the endpoints, and dragging. To position the widget accurately, the user may need to change the camera position as well. Holding Shift while interacting will restrict the motion of the line widget to one of the X, Y, or Z planes. (The plane chosen is the one most closely aligned with the direction of the initial mouse movement.) To move one of the endpoints, simply use one of the point widgets on each end of the line. These are marked by spheres that become red when clicked. You can also reposition the endpoints by pressing the "P" key; the endpoint nearest to the mouse cursor will be placed at the position on the dataset surface beneath the mouse position. Left-clicking while the cursor is over the line and dragging will reposition the entire line. Doing the same with the right mouse button causes the line to resize. Upward mouse motion increases the length of the line; downward motion decreases it.

Figure 8.1 Line Widget Results

Figure 8.2 Line Widget User Interface

The show line check box toggles the visibility of the line in the 3D view. The controls shown in Figure 8.2 can be used to precisely set the endpoint coordinates and resolution of the line. The X Axis, Y Axis, and Z Axis buttons cause the line to lie along the selected axis and pass through the center of the bounds of the dataset. Depending on the source or filter using this widget, the resolution spin box may not be displayed.
The value of the resolution spin box determines the number of segments composing the line.

Plane Widget

The plane widget is used for clipping and cutting. The plane can be moved parallel to its normal by left-clicking on any point on the plane, except the center of the normal line, and dragging. Right-clicking on the plane (again, except on the center of the normal line) and dragging scales the plane widget. Upward mouse motion increases the size of the plane; downward motion decreases it. The plane normal can be changed by manipulating one of the point widgets (displayed as cones that become red when clicked) at each end of the normal vector.

Figure 8.3 The Plane Widget

Shown in Figure 8.4, the standard user interface for this widget provides entry boxes for setting the center position (Origin) and the normal direction of the plane, as well as toggling the plane widget's visibility (using the show plane check box). Buttons are provided for positioning the plane at the center of the bounding box of the dataset (Center on Bounds) and for aligning the plane's normal with the normal of the camera (Camera Normal), the X axis, the Y axis, or the Z axis (X Normal, Y Normal, and Z Normal, respectively). If the bounds of the dataset being operated on change, you can use the Reset Bounds button to make the bounding box of the plane widget match the new bounds of the dataset and reposition the origin of the plane at the center of the new bounds. Using only the Center on Bounds button in this case would move the origin to the center of the new bounds, but the bounds of the widget would not be updated.

Figure 8.4 Plane widget user interface

Box Widget

The box widget is used for clipping and transforming datasets. Each face of the box can be positioned by moving the handle (sphere) on that face. Moving the handle at the center of the box causes the whole box to be moved.
This can also be achieved by holding Shift while interacting. The box can be rotated by clicking and dragging with the left mouse button on a face (not at the handle) of the box. Clicking and dragging inside the box with the right mouse button uniformly scales the box. Dragging upward increases the box's size; downward motion decreases it.

Figure 8.5 Box Widget

Figure 8.6 Box widget user interface

Traditional user interface controls, shown in Figure 8.6, are also provided if more precise control over the parameters of the box is needed. These controls provide the user with entry boxes and thumb wheels or sliders to specify translation, scaling, and orientation in three dimensions.

Sphere Widget

The sphere widget is used in clipping and cutting. The sphere can be moved by left-clicking on any point on the sphere and dragging. The radius of the sphere is manipulated by right-clicking on any point on the sphere and dragging. Upward mouse motion increases the sphere's radius; downward motion decreases it. As shown in Figure 8.8, the center and radius can also be set manually from the entry boxes on the user interface. There is also a button to position the sphere at the center of the bounding box of the current dataset.

Figure 8.7 Sphere Widget

Figure 8.8 Sphere widget user interface

Point Widget

The point widget is used to set the position of a point or the center of a point cloud. It is used by the Stream Tracer, Probe Location, and Probe Location over Time filters. The position of the point can be changed by left-clicking anywhere on it and dragging. Right-clicking and dragging anywhere on the widget changes the size of the point widget in the scene. To position the widget accurately, the user may need to change the camera position as well. Holding Shift while interacting will restrict the motion of the point to one of the X, Y, or Z planes.
The plane chosen is the one most closely aligned with the direction of the initial mouse movement. As shown in Figure 8.10, entry boxes allow the user to specify the coordinates of the point, and a button is provided to position the point at the center of the bounds of the current dataset. If the point widget is being used to position a point cloud instead of a single point, entry boxes are also provided to specify the radius of the point cloud and the number of points the cloud contains.

Figure 8.9 Point Widget

Figure 8.10 Point widget user interface

Spline Widget

The spline widget is used to define a path through 3D space. It is used by the Spline Source and in the Camera animation dialog. The widget consists of a set of control points, shown as spheres in the 3D scene, that can be clicked on with the mouse and dragged perpendicular to the viewing direction. The points are ordered, and the path through them defines a smoothly varying curve through 3D space. As shown in Figure 8.12, the text control box for the spline widget allows you to add or delete control points and specify their locations exactly. You can hide or show the widget and can choose to close the spline to create a loop, which adds a path segment from the last control point back to the first.

Figure 8.11 Spline Widget

Figure 8.12 Spline widget user interface

Annotation

In ParaView, there are several ways to annotate the data to better understand or explain it. Several of the annotations can be interactively placed in the 3D scene. The parameters of the annotations are controlled using traditional user interface elements. Some types of annotation are discussed in other parts of this book. See the section on Selection for information about labeling selected points or cells in a data set.
Scalar Bar

The most straightforward way to display a scalar bar (or color legend) in the active 3D view is to use the Color Legend Visibility button on the Active Variables Control toolbar. When the data set selected in the Pipeline Browser is being colored by a variable (i.e., something other than Solid Color is selected in the Color by menu on the Display tab), this button is active. Clicking it toggles the visibility (in the selected 3D view) of a scalar bar showing the mapping from data values to colors for that variable.

Figure 9.1 Scalar Bar displaying X-component of Normals

Clicking the Edit Color Map button to the right of the Color Legend button brings up the Color Scale Editor dialog. As described in the Displaying Data [1] chapter, you have precise control over the mapping between data values and visible colors on the Color Scale tab of that dialog. On the Color Legend tab, there are several parameters that allow you to control the appearance of the color legend itself. First is the Show Color Legend check box, which toggles the visibility of the scalar bar (color legend) in the 3D view. This has the same effect as using the color legend visibility button. Below the Show Color Legend check box are the controls for displaying the title and labels on the scalar bar. In the Title section, the Text entry box specifies the label that appears above the scalar bar. It defaults to the name of the array used for coloring the data set. If the current data set is being colored by a vector array, the value of the second entry box defaults to specifying how the color is determined from the vector (i.e., X, Y, Z, or Magnitude). The entry box labeled Labels contains formatting text specifying the form of the scalar bar labels (numbers). The format specification used is the same as that used by the printf function in C++.
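Python's % operator follows the same printf-style specification, so it is a convenient way to preview how a label format will render a value. The format strings below are illustrative examples, not ParaView's built-in defaults:

```python
# Preview printf-style label formats of the kind entered in the
# Labels box. These particular formats are examples only.
samples = [
    ("%6.2f", 3.14159),   # fixed point, field width 6, 2 decimals
    ("%6.3g", 0.012345),  # 3 significant digits, general format
    ("%e", 12345.0),      # scientific notation
]
for fmt, value in samples:
    print(fmt % value)
```

This makes it easy to experiment with a format before typing it into the Color Scale Editor dialog.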
Figure 9.2 The Color Legend tab of the Color Scale Editor dialog

Below each of the Title and Labels entry boxes are controls for determining how the title and labels will be drawn in the display area. The leftmost control is a menu for choosing the font; the available fonts are Arial (the default), Courier, and Times. Next are three formatting attribute buttons controlling whether the text is boldfaced, italicized, or shadowed. The next interface control is a spin box for controlling the text's opacity. It ranges from 0 (transparent) to 1 (opaque). At the bottom of this tab is a Number of Labels spin box. This determines how many scalar-value labels will be shown alongside the scalar bar. To the right of that is an Aspect Ratio spin box, which allows you to make the scalar bar relatively thinner or thicker. When the scalar bar is displayed in the 3D scene, it can be positioned and resized interactively, similar to interacting with 3D widgets. Clicking and dragging the scalar bar with the left mouse button repositions it in the display area. If the scalar bar begins to move beyond the left or right side of the display area, it is reoriented vertically. If it is moving off-screen at the top or bottom of the display area, it is reoriented horizontally. The scalar bar can be resized by left-clicking and dragging any of its sides or corners.

Orientation Axes

When interacting with data in ParaView, it can be difficult to determine how the data set is oriented in 3D. To remedy this problem, a marker showing labeled 3D axes (i.e., orientation axes) can be displayed. The orientation of the axes matches that of the data, and the axes reorient themselves as the camera is rotated about the 3D scene.

Figure 9.3 3D Orientation Axes

The user interface controls for the orientation axes are located in the Annotation section of the View Settings dialog (Edit | View Settings) for the 3D view.
The check box beside the Orientation Axes label toggles the visibility of the orientation axes. The Interactive check box controls whether the orientation axes can be repositioned and resized through mouse interaction, similar to interacting with 3D widgets. Left-clicking and dragging the orientation axes repositions them in the 3D view. When the orientation axes are in interactive mode, a bounding rectangle (outline) is displayed around the axes when the mouse moves over them. Left-clicking and dragging on the edges of this outline resizes the orientation axes. From the Set Outline Color button (enabled when Interactive is checked), you can change the color used for displaying the bounding rectangle. This is useful for making the outline more visible if its current color is similar to the color of the background or any object behind the outline (if the orientation axes are placed in front of a data set). The orientation axes are always drawn in front of any data set occupying the same portion of the 3D view. The Axis Label Color button allows you to change the color of the labels for the three axes. The reasons for changing the color of the axis labels are similar to those for changing the outline color.

Figure 9.4 User interface controls for orientation axes (in the 3D view's View Settings dialog)

Center of Rotation

In the same dialog box, you also have control over whether the center of rotation should be annotated. This annotation marks the location about which the camera rotates. ParaView uses the center of the initial data set loaded into the pipeline as the default value. You can control the placement and display of the center of rotation via the Center Axes Control toolbar (View | Toolbars | Center Axes Control). If you need to specify an exact coordinate, you may do so via the Adjust Camera dialog, exposed by the rightmost button of the left set of buttons above the 3D view.
Text Display

Users often want to display additional text in the 3D view (e.g., when generating a still image or an animation). In ParaView, there are a variety of ways to do this, all through the use of sources and filters. The Text source displays 2D text in the 3D scene, the 3D Text source does the same for 3D text, and the Annotate Time filter shows the current time using a text annotation in the 3D view.

Text Source

The Text source allows you to display arbitrary 2D text on top of a 3D view. It is available from the Sources menu. On its Properties tab, enter whatever text you wish to be displayed in the Text entry area, and press the Apply button. Once the text has been added to the 3D view, you can reposition it by left-clicking and dragging the text. (A bounding box will be displayed around the text.) By default, the text is displayed in the lower left corner of the 3D view. The Display section of the Properties panel for the Text source is different from that of most other sources and filters displayed in the 3D view. Specific controls are given for positioning the text and setting various font properties.

Figure 9.5 Properties panel for the Text source

The first check box, Show Text, toggles the visibility of the text in the 3D view. This corresponds to the eye icon in the Pipeline Browser for this source. The Interactive check box below it determines whether the text may be interactively repositioned by clicking and dragging in the 3D view. The Text Property section of the Properties panel allows you to specify various parameters relating to the font used for displaying the text. The Font Size spin box determines how large the letters in the text will be. The Font menu determines which font type will be used: Arial, Courier, or Times. The three buttons beside this menu indicate whether the text will be drawn boldfaced, italicized, and/or using shadows.
The Align menu specifies whether the text will be left-, center-, or right-aligned within its bounding box. The Opacity spin box determines how opaque the text appears. An opacity value of 0 corresponds to completely transparent text, while an opacity value of 1 means completely opaque text. The Color button beside the Opacity spin box allows you to choose a color for the text. The Text Position section of the Properties panel is used for specifying exactly where in the 3D view the text appears. If Use Window Location is unchecked, the Lower Left Corner section is active. The two spin boxes provided determine the X and Y coordinates of the lower left corner of the bounding box of the text, specified in normalized coordinates (starting from the lower left corner of the view). If Use Window Location is checked, the six buttons in that section of the interface become available. They allow you to choose various locations within the view for anchoring the text. Using the top row of buttons, you may place the text in the upper left corner, in the top center of the view, or in the upper right corner. The bottom row of buttons behaves similarly for the bottom portion of the 3D view.

3D Text Source

Whereas the Text source positions text in 2D, overlaying the 3D scene, the 3D Text source places text in the 3D scene. The text is affected by rotation, panning, and zooming as is any other object in the 3D scene. You can control the placement of the 3D text in two ways. First, the Display tab for all sources and filters shown in a 3D view has controls to position, translate, rotate, and scale the object via textual controls. However, it is often easier to use the Transform filter for the same purpose. The Transform filter allows you to drag, rotate, and scale the object via either the 3D widget in the scene or the text controls on its Properties tab.
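The Lower Left Corner coordinates mentioned above are normalized view coordinates, measured from the view's lower left corner. As a quick illustration of the convention, a small helper (not part of ParaView's API) converting them to pixels:

```python
def text_pixel_position(norm_x, norm_y, view_width, view_height):
    """Convert a Text source position given in normalized coordinates
    (0..1, measured from the view's lower left corner) to pixel
    coordinates. Illustrative helper, not ParaView code."""
    return (norm_x * view_width, norm_y * view_height)

# A text anchored at (0.5, 0.25) in an 800x600 view sits at pixel (400, 150).
print(text_pixel_position(0.5, 0.25, 800, 600))
```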
Annotate Time

The Annotate Time source and filter are useful for labeling a time-varying data set or animation with ParaView's current time in a 3D view. The distinction between the two is somewhat subtle. VTK's temporal support works by having the pipeline request data for a particular time; the Annotate Time source displays exactly this requested time. VTK's sources and filters, however, are allowed to produce data at different (usually nearby) times, for example when the data changes as a step function in time. The Annotate Time filter shows the time of the data produced by a given filter. In either case, the annotated time value is drawn as a 2D label, similar to the output of the Text source. The Properties tab for this filter provides a single text box labeled Format. In this text box, you may specify a format string (printf style) indicating how the label will appear in the scene. As with the output of the Text source, this label may be interactively repositioned in the scene. The Display tab for this source is the same as the one for the Text source; it is described earlier in this section.

Figure 9.6 Using the Annotate Time filter to display the current time in a time-varying data set

Other Sources as Annotations

Most of the other items in the Sources menu can also be used for annotation of the 3D scene. Placement of these is similar to placement of the 3D Text source. In particular, the 2D Glyph, Ruler, and Outline sources are frequently used for annotation. As the name implies, the Ruler allows for some quantitative measurement of the data. The Ruler produces a line widget in the 3D scene. It is most effective to use the 'P' key to place the two ends of the line in the 3D scene in turn. Once placed, the Properties tab for the Ruler lists the length of the line you've placed in space. In Figure 9.7, a ruler is being used to measure the distance between the cow's horns.
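The Format string described above follows the printf convention, so the label the Annotate Time filter would produce at each time step can be previewed in plain Python. The format string and time values here are examples only, not the filter's defaults:

```python
# Sketch: how an Annotate Time label changes across time steps.
# The format string and time values are examples only.
fmt = "Time: %g s"
time_steps = [0.0, 0.5, 1.0]
labels = [fmt % t for t in time_steps]
for label in labels:
    print(label)
```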
Cube Axes

Finally, in the Display section of the Properties tab of any object shown in a 3D view, you can toggle the display of the Cube Axes. Like the Ruler source, the cube axes let you inspect the world-space extent of objects in the scene. When this display is enabled, a gridded, labelled bounding box is drawn around the object that lets you quickly determine the world-space extent of the object in each dimension. The box is generally axis-aligned, but it has been extended to support arbitrary axes; in that case, the source or filter needs to provide the corresponding meta-information to the cube axes.

Figure 9.7 Cube Axes of a sheared cube, and Rulers used to measure a cow and its horns

Pressing the Edit button next to the enable check box brings up a dialog that lets you customize the presentation of the axes display. This dialog has one tab for each of the three axes. Use the Title entry to specify a label. Use the Show Axes check box to enable or disable the given axis. Show Ticks and Show Minor Ticks let you customize the division markers along the axis. Show Grid Lines extends the major tick marks so that they completely surround the object. The Original bounds as range option allows you to rescale your object in the Display section without changing the labeled range. The Custom Bounds entry allows you to specify your own extent for that cube axis, whereas Custom Range only changes the range of the values that are used as labels. At the bottom of the dialog box are controls that apply to all three axes. The Fly Mode drop-down controls on which edges the axes are drawn, depending on the camera orientation. Outer Edges draws opposing edges so as to space the annotation as widely as possible. Closest Triad and Furthest Triad draw the nearest and furthest corners, respectively. The two static options lock the choice of edges to draw so that they do not change as the camera moves.
Tick Location draws the tick marks inward toward the center of the object, outward away from the center, or in both directions simultaneously. Corner Offset puts some space between the actual bounds and the displayed axes, which is useful for minimizing visual clutter. The Grid line location option lets you either show the grid lines around the whole object or show them only on the front or the back of the object you are looking at. The visibility of the grid lines updates automatically as you rotate the object. (In Figure 9.7, this value is set to Furthest Faces.) Finally, you have control over the color of the axes through the Set Axes Color pop-up color selector.

Figure 9.8 Edit Cube Axes Dialog

Python Annotation Filter

This filter uses Python to evaluate an expression. It depends heavily on the numpy and paraview.vtk modules. To use the parallel functions, mpi4py is also necessary. The expression is evaluated, and the resulting string or value is output as 2D text. This filter tries to make it easy for the user to write expressions by defining certain variables. The filter tries to assign each array to a variable of the same name. If the name of the array is not a valid Python variable, it has to be accessed through a dictionary called arrays (i.e., arrays['array_name']). The Python expression is evaluated during execution. Field data arrays are directly available through their names. Here is the list of available variables that can be used as is:

• input: represents the input of the filter.
• inputMB: represents an array of blocks if input is a multi-block dataset.
• t_value or time_value: represents the real time of the data provided by the input.
• t_steps or time_steps: is an array of all the available time values.
• t_range or time_range: is an array containing the minimum and maximum time.
• t_index or time_index: represents the index of time_value within the time_steps array.
• input.FieldData, input.PointData, input.CellData: are shortcuts to access any data array of the input.

Here is a set of expressions that can be used with the can.ex2 dataset once the multi-block dataset has been merged:

• "Momentum: (%f, %f, %f)" % (XMOM[t_index,0], YMOM[t_index,0], ZMOM[t_index,0])
• QA_Records[20,:8]
• input.PointData['DISPL'][0:3,0] : displays component 0 of the first 3 DISPL point data values.
• input.PointData['DISPL'][0:3] : displays the full vectors of the first 3 DISPL point data values.
• input.PointData['DISPL'][0] : displays the full vector of the first DISPL point data value.
• input.PointData['DISPL'][[0,10,20,30]] : displays the full vectors of DISPL point data values 0, 10, 20, and 30.

The following example illustrates a time expression:

Figure 9.9 Python Annotation Filter showing time information

The following example illustrates a more complex text annotation composed from several data fields.

Figure 9.10 Python Annotation Filter showing field data

To understand what can be done, here is the list of fields that are available in the can.ex2 dataset. To highlight the easy access to field data, the following expressions are exactly equivalent, but as you can see, the second way of writing them is much shorter:

• input.FieldData['KE'][0,0] <=> KE[0,0]
• input.FieldData['QA_Records'][0,0:5] <=> QA_Records[0,0:5]

Figure 9.11 Available fields in can.ex2

Global Annotation Filter

This filter relies on the Python Annotation filter to extract and show a field value inside the view in a very simple and intuitive manner. Moreover, if the selected field has the same number of elements as there are time steps, as shown in the following figure, it is assumed to be time dependent, and its value is automatically updated to match the time index. If you feel limited by this filter, you can always go back to the Python Annotation filter to achieve more complex data annotation.
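The expression-evaluation scheme above can be sketched in plain Python with numpy: array names that are valid identifiers become variables in the namespace the expression is evaluated in. The array names and values below are invented for illustration; in ParaView they come from the dataset:

```python
import numpy as np

# Invented stand-ins for dataset arrays (not real can.ex2 values).
arrays = {"KE": np.array([[0.0], [2.5], [5.0]])}
t_index = 1
t_steps = np.array([0.0, 0.5, 1.0])

# Arrays whose names are valid Python identifiers become variables,
# mirroring the filter's shortcut (KE[...] vs arrays['KE'][...]).
namespace = {"arrays": arrays, "t_index": t_index, "t_steps": t_steps}
namespace.update(arrays)

expression = "'KE = %g at t = %g' % (KE[t_index, 0], t_steps[t_index])"
label = str(eval(expression, {}, namespace))
print(label)
```

This mirrors only the namespace-building idea; the actual filter also exposes input, inputMB, and the other time variables listed above.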
Figure 9.12 Annotate Global Data Filter with can.ex2

References

[1] http://paraview.org/Wiki/ParaView/Displaying_Data

Animation

Animation View

In ParaView, you can create animations by recording a series of keyframes. At each keyframe, you set values for the properties of the readers, sources, and filters that make up the visualization pipeline, as well as the position and orientation of the camera. Once you have chosen the parameters, you can play through the animation. When you play the animation, you can cache the geometric output of the visualization pipeline in memory. When you subsequently replay the animation, playback will be much faster, because very little computation must be done to generate the images. The results of the animation can also be saved to image files (one image per animation frame) or to a movie file. The geometry rendered at each frame can also be saved in ParaView's PVD file format, which can be loaded back into ParaView as a time-varying data set.

The Animation View is the user interface used to create animations by adding keyframes. It is modeled on popular animation and keyframe-editing applications, with the ability to create tracks for animating multiple parameters. The Animation View is accessible from the View menu.

Figure 10.1 Animation View

As seen in Figure 10.1, this view is presented as a table. Above the table are the controls that administer how time progresses in the animation. These were discussed previously. Within the table, the tracks of the animation appear as rows, and animation time increases from left to right. The first row in the table, simply labeled Time, shows the total span of time that the animation can cover. The currently displayed time is indicated both in the Time field at the top and with a thick, vertical, draggable line within the table.
Along the left side of the Animation View is an expandable list of the names of the animation tracks (i.e., a particular object and property to animate). You choose a data source and then a particular property of that data source in the bottom row. To create an animation track with keyframes for that property, click the "+" on the left-hand side; this will create a new track. In the figure, tracks already exist for SphereSource1's Phi Resolution property and for the camera's position. To delete a track, press the X button. You can temporarily disable a track by unchecking the check box on the right of the track. To enter values for the property, double-click within the white area to the right of the track name. This will bring up the Animation Keyframes dialog. Double-clicking in the camera entry brings up a dialog like the one in Figure 10.2.

Figure 10.2 Editing the camera track

In the Animation Keyframes dialog, you can press New to create new keyframes, or press Delete or Delete All to delete some or all of them. Clicking New will add a new row to the table. In any row, you can click within the Time column to choose a particular time for the keyframe and click in the right-hand column to enter values for the parameter. The exact user interface components that let you set values for the property at the keyframe time vary. When available, you can change the interpolation between two keyframes by double-clicking on the central interpolation column. Within the tracks of the Animation View, the place in time where each keyframe occurs is shown as a vertical line. The values chosen for the property at that time, and the interpolation function used between that value and the next, are shown as text when appropriate. In the previous figure, for example, the sphere resolution begins at 10 and then changes to 20, varying by linear interpolation between them.
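The linear interpolation between two keyframes can be sketched as follows; this is an illustration of the behavior, not ParaView's implementation:

```python
def keyframe_value(t, keyframes):
    """Return the linearly interpolated property value at animation
    time t, given keyframes as a sorted list of (time, value) pairs.
    Before the first / after the last keyframe, the value is clamped."""
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t <= t1:
            frac = (t - t0) / (t1 - t0)
            return v0 + frac * (v1 - v0)
    return keyframes[-1][1]

# Sphere resolution animated from 10 to 20 over the animation:
print(keyframe_value(0.5, [(0.0, 10.0), (1.0, 20.0)]))  # halfway -> 15.0
```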
The camera values are too lengthy to show as text, so they are not displayed in the track, but we can easily see that there are four keyframes spaced throughout the animation. The vertical lines in the tracks are themselves draggable, so you can easily adjust the time at which each keyframe occurs.

Animation View Header

The Animation View has a header bar that lets you control some properties of the animation itself, as you can see in Figure 10.3.

Figure 10.3 Animation View Header

Mode controls the animation playback mode. ParaView supports three modes for playing animations. In Sequence mode, the animation is played as a sequence of images (or frames) generated one after the other and rendered in immediate succession. The number of frames is controlled by the No. Frames spinbox at the end of the header. Note that the frames are rendered as fast as possible; thus, the viewing frame rate depends on the time needed to generate and render each frame. In Real Time mode, the Duration spinbox (replacing the No. Frames spinbox) indicates the time in seconds over which the entire animation should run. Each frame is rendered using the current wall-clock time in seconds relative to the start time. The animation runs for nearly the number of seconds specified by the Duration (secs) spinbox. In turn, the number of frames actually generated (or rendered) depends on the time to generate (or render) each frame. In Snap To TimeSteps mode, the number of frames in the animation is determined by the number of time values in the data set being animated. This is the animation mode used for ParaView's default animations: playing through the time values in a data set one after the other. Default animations are created by ParaView when a data set with time values is loaded; no action is required by the user to create the animation. Note that using this mode when no time-varying data is loaded will result in no animation at all.
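In Sequence mode, frame times are evenly spaced over the animation's time range. The spacing can be sketched as follows (illustrative only, not ParaView's actual code):

```python
def sequence_frame_times(start, end, n_frames):
    """Times at which frames are generated in Sequence mode: n_frames
    evenly spaced samples from start to end, inclusive. A sketch of
    the even spacing, not ParaView's implementation."""
    if n_frames < 2:
        return [start]
    step = (end - start) / (n_frames - 1)
    return [start + i * step for i in range(n_frames)]

# A 5-frame animation over the range [0, 1]:
print(sequence_frame_times(0.0, 1.0, 5))
```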
In Sequence mode, the final item in the header is the No. Frames spinbox. This spinbox lets you pick the total number of frames for the animation. Similarly, in Real Time mode, the final item lets you choose the duration of the animation. In Snap To TimeSteps mode, the total number of frames is dictated by the data set, and therefore the spinbox is disabled.

The Time entry-box shows the current animation time, which is the same as that shown by a vertical marker in this view. You can change the current animation time by either entering a value in this box, if available, or dragging the vertical marker. The Start Time and End Time entry-boxes display the start and end times for the animation. By default, when you load time-varying data sets, the start and end times are automatically adjusted to cover the entire time range present in the data. The lock check-buttons to the right of the Start Time and End Time widgets will prevent this from happening, so that you can ensure that your animation covers a particular time domain of your choosing.

Animating Time-Varying Data

When you load time-varying data, ParaView automatically creates a default animation that allows you to play through the temporal domain of the data without manually creating an animation to do so. With the Animation View, you can uncouple the data time from the animation time, so that you can create keyframes that manipulate the data time during the animation as well. If you double-click in the TimeKeeper – Time track, the Animation Keyframes dialog, an example of which is shown in the figure below, appears. In this dialog, you can make data time progress in three fundamentally different ways. If the Animation Time radio-button is selected, the data time will be tied to and scaled with the animation time, so that as the animation progresses, you will see the data evolve naturally. You can select Constant Time instead if you want to ignore the time-varying nature of the data.
In this case, you choose a particular time value at which the data will be displayed for the duration of the animation. Finally, you can select the Variable Time radio-button to have full control over the data time, controlling it as you do any other animatable property in the visualization pipeline. In the example shown in Figure 10.4 below, time is made to progress forward for the first 15 frames of the animation, backward for the next 30, and finally forward for the final 15.

Figure 10.4 Controlling Data Time with keyframes

Animation Settings

Figure 10.5 Animation Settings

Additional animation properties can be changed using the Animation page of the Edit|Settings dialog. Using the settings seen in Figure 10.5, you can control whether geometry is cached to improve playback performance during looping, as well as the maximum size of the geometry cache, to avoid memory overflows.

Playing an Animation

Once you have designed your animation, you can play through it with the VCR controls toolbar seen in Figure 10.6.

Figure 10.6 VCR controls toolbar

Animating the Camera

Figure 10.7 Add camera track

Just as you can change parameters on sources and filters in an animation, you can also change the camera parameters. As seen above in Figure 10.7, you can add animation tracks to animate the camera for each of the 3D render views in the setup separately. To add a camera animation track for a view, with the view selected, click on the "+" button after choosing Camera from the first drop-down. The second drop-down allows you to choose how to animate the camera. There are three possible options, each of which provides a different mechanism to specify the keyframes. It is not possible to change the mode after the animation track has been added, but you can simply delete the track and create a new one.
Interpolate Camera Locations

In this mode, you specify the camera position, focal point, view angle, and up direction at each keyframe, and the animation player interpolates between these specified locations. As with other parameters, to edit the keyframes, double-click on the track. It is also possible to capture the current location as a keyframe by using the Use Current button.

Figure 10.8 Setting animation parameters

Using this mode, it can be quite challenging to place keyframes correctly and frequently enough to produce a smooth animation.

Orbit

This mode makes it possible to quickly create a camera animation in which the camera revolves around one or more objects of interest. Before adding the Camera track, select the objects in the pipeline browser that you want to revolve around; then choose Orbit from the Camera combo-box in the Animation View and hit "+". This will pop up a dialog where you can edit the orbit parameters, such as the center of revolution, the normal for the plane of revolution, and the origin (i.e., a point on the plane where the revolution begins). By default, the Center is the center of the bounds of the selected objects, the Normal is the current up direction used by the camera, and the origin is the current camera position.

Figure 10.9 Creating a camera orbit

Follow Path

In this mode, you can specify the path taken by the camera position and the camera focal point. By default, the path is set up to orbit around the selected objects. You can then edit the keyframes to change the paths. Figure 10.10 shows the dialog for editing these paths for a keyframe. When Camera Position or Camera Focus is selected, a widget is shown in the 3D view that can be used to set the path. Use Ctrl+Left Click to insert new control points and Shift+Left Click to remove control points. You can also toggle whether the path should be closed or not.
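The geometry behind Orbit mode, revolving the camera about a center point in the plane defined by a normal, starting from the origin point, can be sketched with a small rotation helper. This is plain Python for illustration; `orbit_positions` is a hypothetical name, not a ParaView function:

```python
import math

def rotate(p, axis, angle):
    """Rodrigues rotation of vector p about a unit-length axis by angle radians."""
    ax, ay, az = axis
    px, py, pz = p
    c, s = math.cos(angle), math.sin(angle)
    dot = ax * px + ay * py + az * pz                       # axis . p
    cross = (ay * pz - az * py, az * px - ax * pz, ax * py - ay * px)  # axis x p
    return tuple(p_i * c + cr_i * s + a_i * dot * (1 - c)
                 for p_i, cr_i, a_i in zip(p, cross, axis))

def orbit_positions(center, normal, origin, n):
    """n camera positions revolving about `center` in the plane with unit
    `normal`, starting from `origin` (the current camera position, as in
    the orbit dialog's defaults)."""
    rel = tuple(o - c for o, c in zip(origin, center))
    return [tuple(c + r for c, r in
                  zip(center, rotate(rel, normal, 2 * math.pi * i / n)))
            for i in range(n)]

# Four positions orbiting the world origin about the +Z axis:
for p in orbit_positions((0, 0, 0), (0, 0, 1), (1, 0, 0), 4):
    print(tuple(round(x, 6) for x in p))
```

The sketch assumes a unit normal; ParaView's dialog additionally lets you adjust the center and origin, which simply changes `center` and `rel` here.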
Figure 10.10 Creating a camera path

Comparative Visualization

Comparative Views

Introduction

Figure 11.1 Comparative Views

ParaView provides a collection of views that can be used to display a set of visualizations on a regular grid. These views are referred to as Comparative Views. Comparative Views can be used to display results from a parameter study or to perform parameter studies within ParaView.

Quick Start

We will start with a short tutorial; later sections describe Comparative Views in more detail. First, start a Wavelet source. Next, apply a Contour filter; then close the default 3D view by clicking on the small "x" in the upper-right corner of the view, as shown in Figure 11.2.

Figure 11.2 Closing

This should bring up an empty view with several buttons for selecting the view type. Select 3D View (Comparative). Now you should see an empty 2x2 grid in the view. Next, turn on the visibility of the Contour filter. You should see the output of the Contour filter in all four views. To vary the contour value across the grid, bring up the Comparative View Inspector (View|Comparative View Inspector). This inspector allows you to configure the parameters of the Comparative View.

Figure 11.3 Comparative View panel

From the drop-down menu, select Contour1. The parameter should be set to Isosurfaces; if it is not, change it. Next, in the grid that has the numbers, click on the upper-left value and drag to the lower-right value. This should bring up the dialog shown in Figure 11.4. Enter 100 and 200 for the two values.

Figure 11.4 Comparative View parameter range

When you click OK, the Comparative View should update, and you should see something like Figure 11.5 below.
Figure 11.5 Comparative View parameter output

View

There are three types of Comparative Views:
• Comparative 3D View
• Comparative Line Chart View
• Comparative Bar Chart View

All of these views contain sub-views laid out in an m-by-n grid, where m and n are determined by the Comparative View Inspector settings. The type of the sub-view depends on the type of the comparative view: a 3D view for the comparative 3D view, a line chart for the comparative line chart view, etc. Each sub-view displays the output from the same pipeline objects, depending on what is set visible in the Pipeline Inspector. The only things that change from view to view are the parameters that are varied through the Comparative View Inspector. Furthermore, the view settings are synchronized between all sub-views. For example, changing the camera in one 3D sub-view will cause the camera on all other sub-views to change.

Note: Not all features of the single views are supported in their comparative siblings. For example, it is not possible to perform surface selection on a Comparative 3D View.

Comparative View Inspector

The Comparative View Inspector (View|Comparative View Inspector) is where you can configure all of the parameters of comparative views. Note that if you have more than one comparative view, the Inspector will change the settings of the active one. Below we describe the various parts of the Comparative View Inspector.

Layout

Figure 11.6 Layout widget

The layout widget seen above in Figure 11.6 allows you to configure the number of sub-views within a comparative view. The first value controls how many cells there are in the horizontal direction, whereas the second value controls how many cells there are in the vertical direction. Note that if you have already set up one or more parameters to vary in the view, the Inspector will try as much as possible to maintain your values when you adjust the size.
If you manually entered any individual values, they will not change when you add more rows or columns. On the other hand, all values that have been automatically computed based on a range will be updated as the number of cells changes.

Parameter Selection

Figure 11.7 Parameter selection

This widget allows you to add new parameters to vary and also to select a property if more than one is available. To add a property, first select the pipeline object from the first menu, then select its parameter from the second menu. Once you are done with the selection, click on the "+" button to add the parameter. This parameter will now show in the list of parameters. To delete a parameter from the list, click on the corresponding "x" on the left side of the parameter selection widget. Note that once you add a new parameter, ParaView will try to assign good default values to each cell. For example, if you add Contour:Isosurfaces, ParaView will assign values ranging from the minimum scalar value to the maximum scalar value.

Editing Parameter Values

Figure 11.8 Parameter editing in the spreadsheet widget

You can edit the parameter values using the spreadsheet widget. You can either:
• Change an individual value by double-clicking on a cell and editing it.
• Change a group of cells by clicking on the first one and dragging to the last one. ParaView will then ask you to enter minimum and maximum values. If you selected cells that span more than one direction, it will also ask you to choose which way the values vary.

Let's examine the range selection a bit further. Say that you selected a 2x2 area of cells and entered 0 for the minimum and 10 for the maximum. If you select Vary Horizontally First, the values will be:

0     3.33
6.66  10

If you select Vary Vertically First, the values will be:

0     6.66
3.33  10

If you select Vary Only Horizontally, the values will be:

0   10
0   10

If you select Vary Only Vertically, the values will be:

0   0
10  10

The last two options may sound useless.
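The four fill orders above can be reproduced with a small helper. This is plain Python illustrating only the value-assignment logic; `fill_grid` and its mode strings are hypothetical names, not ParaView API:

```python
def fill_grid(mode, lo, hi, rows, cols):
    """Assign parameter values to a rows x cols comparative-view selection.

    mode: "horizontal_first", "vertical_first", "only_horizontal", or
    "only_vertical", mirroring the options in the range dialog.
    """
    def lerp(i, n):  # i-th of n evenly spaced values from lo to hi
        return float(lo) if n == 1 else lo + (hi - lo) * i / (n - 1)

    if mode == "horizontal_first":   # values increase across, then down
        return [[lerp(r * cols + c, rows * cols) for c in range(cols)]
                for r in range(rows)]
    if mode == "vertical_first":     # values increase down, then across
        return [[lerp(c * rows + r, rows * cols) for c in range(cols)]
                for r in range(rows)]
    if mode == "only_horizontal":    # every row is the same ramp
        return [[lerp(c, cols) for c in range(cols)] for r in range(rows)]
    if mode == "only_vertical":      # every column is the same ramp
        return [[lerp(r, rows) for c in range(cols)] for r in range(rows)]

for row in fill_grid("horizontal_first", 0.0, 10.0, 2, 2):
    print(row)
```

Running this for a 2x2 selection with a 0 to 10 range reproduces the grids shown above (with 3.33 and 6.66 as rounded values of 10/3 and 20/3).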
Why have multiple cells with the same values? If you consider that more than one parameter can be varied in a comparative view, you will realize how useful they are. For example, you can vary parameter A horizontally while varying parameter B vertically to create a traditional spreadsheet of views.

Performance

Computational

ParaView will run the pipeline connected to all visible pipeline objects for each cell serially. Therefore, the time to create a Comparative Visualization of N cells should be on the order of N times the time to create the visualization of one cell.

Memory

ParaView will only store what is needed to display the results for each cell, except for the last one. The last cell will contain the representation as well as the full dataset, the same as in any single view. For example, when using the surface representation, the total memory used will be the total of the memory used by the geometry in each cell plus the memory used by the dataset of the last cell.

Remote and Parallel Large Data Visualization

Parallel ParaView

One of the main purposes of ParaView is to allow users to create visualizations of large data sets that reside on parallel systems without first collecting the data to a single machine. Transferring the data is often slow and wasteful of disk resources, and the visualization of large data sets can easily overwhelm the processing and, especially, the memory resources of even high-performance workstations. This chapter first describes the concepts behind the parallelism in ParaView. We then discuss in detail the process of starting up ParaView's parallel server components. Lastly, we explain how a parallel visualization session is initiated from within the user interface. Parallel rendering is an essential part of parallel ParaView, so essential that we've given it its own chapter in this version of the book.
The task of setting up a cluster for visualization is unfortunately outside the scope of this book. However, there are several online resources that will help you get started, including:
• http://paraview.org/Wiki/Setting_up_a_ParaView_Server
• http://www.iac.es/sieinvens/siepedia/pmwiki.php?n=HOWTOs.ParaviewInACluster
• http://paraview.org/Wiki/images/a/a1/Cluster09_PV_Tut_Setup.pdf

Parallel Structure

ParaView has three main logical components: the client, the data server, and the render server. The client is responsible for the GUI and is the interface between you, the user, and ParaView as a whole. The data server reads in files and processes the data through the pipeline. The render server takes the processed data and renders it to present the results to you.

Figure 12.1 Parallel Architecture

The three logical components can be combined in various different configurations. When ParaView is started, the client is connected to what is called the built-in server; in this case, all three components exist within the same process. Alternatively, you can run the server as an independent program and connect a remote client to it, or run the server as a standalone parallel batch program without a GUI. In this case, the server process contains both the data server and render server components. The server can also be started as two separate programs: one for the data server and one for the render server. The server programs are data-parallel programs that can be run as a set of independent processes running on different CPUs. The processes use MPI to coordinate their activities as each works on a different piece of the data.

Figure 12.2 Common Configurations of the logical components

Client

The client is responsible for the user interface of the application. ParaView's general-purpose client was written to make powerful visualization and analysis capabilities available from an easy-to-use interface.
The client component is a serial program that controls the server components through the Server Manager API.

Data Server

The data server is primarily constructed from VTK readers, sources, and filters. It is responsible for reading and/or generating data, processing it, and producing the geometric models that the render server and client will display. The data server exploits data parallelism by partitioning the data, adding ghost levels around the partitions as needed, and running synchronous parallel filters. Each data server process has an identical VTK pipeline, and each process is told which partition of the data it should load and process. By splitting the data, ParaView is able to use the entire aggregate system memory and thus make large data processing possible.

Render Server

The render server is responsible for rendering the geometry. Like the data server, the render server can be run in parallel, and it has identical visualization pipelines (only the rendering portion of the pipeline) in all of its processes. Having the ability to run the render server separately from the data server allows for an optimal division of labor between computing platforms. Most large computing clusters are primarily used for batch simulations and do not have hardware rendering resources. Since it is not desirable to move large data files to a separate visualization system, the data server can run on the same cluster that ran the original simulation. The render server can then be run on a separate visualization cluster that has hardware rendering resources.

It is possible to run the render server with fewer processes than the data server, but never more. Visualization clusters typically have fewer nodes than batch simulation clusters, and processed geometry is usually significantly smaller than the original simulation dump. ParaView repartitions the geometric models on the data server before they are sent to the render server.
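The partitioning-with-ghost-levels idea described under Data Server above can be illustrated on a simple 1D array of cells. Real VTK partitioning operates on meshes and is considerably more involved; this plain-Python sketch (with the hypothetical name `partition_with_ghosts`) only conveys the concept:

```python
def partition_with_ghosts(n_cells, n_procs, ghost_levels=1):
    """Split cell indices 0..n_cells-1 across n_procs processes.

    Each process gets a contiguous slab plus `ghost_levels` extra cells
    on each interior boundary, so filters that need neighbor information
    (e.g. gradients) can run without communicating mid-pipeline.
    """
    base, extra = divmod(n_cells, n_procs)
    pieces = []
    start = 0
    for rank in range(n_procs):
        size = base + (1 if rank < extra else 0)
        lo = max(0, start - ghost_levels)           # ghost cells on the left
        hi = min(n_cells, start + size + ghost_levels)  # and on the right
        pieces.append(list(range(lo, hi)))
        start += size
    return pieces

# 10 cells over 2 processes with 1 ghost level -- the slabs overlap by
# two cells at the shared boundary:
print(partition_with_ghosts(10, 2))
# [[0, 1, 2, 3, 4, 5], [4, 5, 6, 7, 8, 9]]
```

The overlap is what lets each process compute correct values near its boundary without asking its neighbor for data during filter execution.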
MPI Availability

Until recently, in order to use ParaView's parallel processing features, you needed to build ParaView from the source code as described in the Appendix. This was because there are many different versions of MPI, the library ParaView's servers use internally for parallel communication, and for users of high-performance computers it is extremely important to use the version that is delivered with your networking hardware. As of ParaView 3.10, however, we have begun to package MPI with our binary releases. If you have a multi-core workstation, you can now simply turn on the Use Multi-Core setting under ParaView's Settings to make use of all of your cores. This option makes the parallel data server mode the default configuration, which can be very effective when you are working on computationally intensive processing tasks. Otherwise, and when you need to run ParaView on an actual distributed-memory cluster, you need to start up the various components and establish connections between them, as described in the next section.

Starting the Server(s)

Client / Server Mode

Client / Server Mode refers to a parallel ParaView session in which the data server and render server components reside within the same set of processes, and the client is completely separate. The pvserver executable combines the two server components into one process. You can run pvserver as a serial process on a single machine. If ParaView was compiled with parallel support, you can also run it as an MPI parallel program on a group of machines. Instructions for starting a program with MPI are implementation- and system-dependent, so contact your system administrator for information about starting an application with MPI. With the MPICH implementation of MPI, the command to start the server in parallel usually follows the format shown here:

mpirun -np number_of_processes path_to/pvserver arguments_for_pvserver

By default, pvserver will start and then wait for the client to connect to it.
See the next section for a full description. Briefly, to make the connection, select Connect from the File menu, select (or make and then select) a configuration for the server, and click Connect. Note that you must start the server before the client attempts to connect to it.

If the computer running the server is behind a firewall, it is useful to have the server initiate the connection instead of the client. The --reverse-connection (or -rc) command-line option tells the server to do this. The server must know the name of the machine to which it should connect; this is specified with the --client-host (or -ch) argument. Note that when the connection is reversed, you must start the client and instruct it to wait for a connection before the server attempts to connect to it.

The client-to-server connection is made over TCP, using a default port of 11111. If your firewall puts restrictions on TCP ports, you may want to choose a different port number. In the client dialog, simply choose a port number in the Port entry of the Configure New Server dialog. Meanwhile, give pvserver the same port number by including --server-port (or -sp) in its command-line argument list. An example command line to start the server and have it initiate the connection to a particular client on a particular port number is given below:

pvserver -rc -ch=magrathea -sp=26623

Render/Server Mode

The render server allows you to designate a separate group of machines (i.e., apart from the data server and the client) to perform rendering. This parallel mode lets you use dedicated rendering machines for parallel rendering rather than relying on the data server machines, which may have limited or no rendering capabilities. In ParaView, the number of machines (N) composing the render server must be no more than the number (M) composing the data server.
At some large installations with particularly high-performing parallel rendering resources, it can be very efficient to run ParaView in Render/Server Mode. However, we have found in practice that it is almost always the case that the data transfer time between the two servers overwhelms the speed gained by rendering on the dedicated graphics cluster. For this reason, we typically recommend that you combine the data and render servers together as one component, and either render in software via Mesa on the data processing cluster or do all of the visualization processing directly on the GPU cluster.

If you still want to break up the data processing and rendering tasks, there are two sets of connections that must be made for ParaView to run in render-server mode. The first connection set is between the client and the first node of each of the data and render servers. The second connection set is between the nodes of the render server and the first N nodes of the data server. Once all of these connections are established, they are bi-directional. The diagram in Figure 12.3 depicts the connections established when ParaView is running in render server mode. Each double-ended arrow indicates a bi-directional TCP connection between pairs of machines. The dashed lines represent MPI connections between all machines within a server. In all the diagrams in this section, the render server nodes are denoted by RS 0, RS 1, …, RS N. The data server nodes are similarly denoted by DS 0, DS 1, …, DS N, …, DS M.

Figure 12.3 Connections required in render server mode

The establishment of connections between the client and the servers can be either forward (from client to servers) or reverse (from servers to client). Likewise, the connections between the render server and data server nodes can be established either from the data server to the render server, or from the render server to the data server.
The main reason for reversing the direction of any of the initial connections is that machines behind firewalls are able to initiate connections to machines outside the firewall, but not vice versa. If the data server is behind a firewall, the data server should initiate the connection with the client, and the data server nodes should connect to the render server nodes. If the render server is behind the firewall instead, both servers should initiate connections to the client, but the render server nodes should initiate the connections with the nodes of the data server. In the remaining diagrams in this section, each arrow indicates the direction in which the connection is initially established. Double-ended arrows indicate bi-directional connections that have already been established. In the example command lines, optional arguments are enclosed in []'s. The rest of this section is devoted to discussing the two connections required for running ParaView in render server mode.

Connection 1: Connecting the client and servers

The first connection that must be established is between the client and the first node of both the data and render servers. By default, the client initiates the connection to each server, as shown in Figure 12.4. In this case, both the data server and the render server must be running before the client attempts to connect to them.

Figure 12.4 Starting ParaView in render-server mode using standard connections

To establish the connections shown above, do the following. First, from the command line of the machine that will run the data server, enter pvdataserver to start it. Next, from the command line of the machine that will run the render server, enter pvrenderserver to start the render server. Now, from the machine that will run the client, start the client application and connect to the running servers, as described in section 1.2 and summarized below.
Start ParaView and select Connect from the File menu to open the Choose Server dialog. Select Add Server to open the Configure New Server dialog. Create a new server connection with a Server Type of Client / Data Server / Render Server. Enter the machine names or IP addresses of the server machines in the Host entries. Select Configure, and then in the Configure Server dialog, choose Manual. Save the server configuration and connect to it. At this point, ParaView will establish the two connections. This is similar to running ParaView in client/server mode, but with the addition of a render server.

The connection between the client and the servers can also be initiated by the servers. As explained above, this is useful when the servers are running on machines behind a firewall. In this case, the client must be waiting for both servers when they start. The diagram indicating the initial connections is shown in Figure 12.5.

Figure 12.5 Reversing the connections between the servers and the client

To establish the connections shown above, start by opening the Configure New Server dialog on the client. Choose Client / Data Server / Render Server (reverse connection) for the Server Type in the Configure New Server dialog. Next, add both --reverse-connection (-rc) and --client-host (-ch) to the command lines for the data server and render server. The value of the --client-host parameter is the name or IP address of the machine running the client. You can use the default port numbers for these connections, or you can specify ports in the client dialog by adding the --data-server-port (-dsp) and --render-server-port (-rsp) command-line arguments to the data server and render server command lines. The port numbers for each server must agree with the corresponding Port entries in the dialog, and they must be different from each other.
For the remainder of this chapter, -rc will be used instead of --reverse-connection when the connection between the client and the servers is to be reversed.

Connection 2: Connecting the render and data servers

After the connections are made between the client and the two servers, the servers will establish connections with each other. In parallel runs, this server-to-server connection is a set of connections between all N nodes of the render server and the first N nodes of the data server. By default, the data server initiates the connection to the render server, but this can be changed with a configuration file. The format of this file is described below. The server that initiates the connection must know the name of the machine running the other server and the port number it is using. In parallel runs, each node of the connecting server must know the name of the machine for the corresponding process in the other server to which it should connect. The port numbers are assigned randomly, but they can be assigned in the configuration file as described below. The default set of connections is illustrated in Figure 12.6. To establish these connections, you must give the data server the connection information discussed above, which you specify within a configuration file. Use the --machines (-m) command-line argument to tell the data server the name of the configuration file. In practice, the same file should be given to all three ParaView components. This ensures that the client, the render server, and the data server all agree on the network parameters.
Figure 12.6 Initializing the connection from the data server to the render server

An example network configuration file, called machines.pvx in this case, is given below. Sample command-line arguments that use the configuration file to initiate the network illustrated in Figure 12.6 are given below:

mpirun -np 2 pvdataserver -m=machines.pvx
mpirun -np 2 pvrenderserver -m=machines.pvx
paraview -m=machines.pvx

It should be noted that the machine configuration file discussed here is a distinct entity with a different syntax from the server configuration file discussed at the end of the next section. That file is read only by the client; the file discussed here is given to the client and both servers. In the machine file above, the render-node-port entry in the render server's XML element tells the render server the port number on which it should listen for a connection from the data server, and it tells the data server what port number it should attempt to contact. This entry is optional; if it does not appear in the file, the port number will be chosen automatically. Note that it is not possible to assign port numbers to individual machines within a server; all will be given the same port number or use the automatically chosen one. Note also that each render server machine is given a display environment variable in this file. This is not required to establish the connections, but it is helpful if you need to assign particular X11 display names to the various render server nodes.

The initial connection between the nodes of the two servers is made from the data server to the render server. You can reverse this so that the render server nodes connect to the corresponding nodes of the data server instead, as shown in Figure 12.7.
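As a rough sketch (not the original listing), a machines.pvx file consistent with the description above might look like the following for a two-node render server and two-node data server. The element and attribute names, the render-node-port placement, and the host names rs0/rs1/ds0/ds1 are assumptions for illustration; verify the exact schema against the machine-file documentation for your ParaView version:

```xml
<?xml version="1.0" ?>
<pvx>
  <Process Type="client" />
  <!-- render-node-port attribute placement is an assumption; see the text -->
  <Process Type="render-server" render-node-port="1357">
    <Machine Name="rs0" Environment="DISPLAY=:0" />
    <Machine Name="rs1" Environment="DISPLAY=:0" />
  </Process>
  <Process Type="data-server">
    <Machine Name="ds0" />
    <Machine Name="ds1" />
  </Process>
</pvx>
```

Each Machine entry in the render server element carries the X11 display environment variable mentioned above; the data server machines need no display.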
Figure 12.7 Reversing the connection between the servers and the client, and connecting the render server to the data server

Typically, when the server connection is reversed, the direction of the connection between the client and the servers is also reversed (e.g., if the render server is behind a firewall). In this case, the render server must have the machine names and a connection port number to connect to the data server. The same XML file is used for this arrangement as with the standard connection. The only difference in this case is that the render-node-port entry, if it is used, must appear in the data server's XML element instead of the render server's element. Example command-line arguments to initiate this type of network are given here:

paraview -m=machines.pvx
mpirun -np M pvdataserver -m=machines.pvx -rc -ch=client
mpirun -np N pvrenderserver -m=machines.pvx -rc -ch=client

Connecting to the Server

Connecting the Client

You establish connections between the independent programs that make up parallel ParaView from within the client's user interface. The user interface even allows you to spawn the external programs and then automatically connect to them. Once you specify the information that ParaView needs to connect, or spawn and connect, to the server components, ParaView saves it to make it easy to reuse the same server configuration at a later time. Some visualization centers provide predefined configurations, which makes it trivial to connect to a server that is tailored to that center by simply choosing one from a list in ParaView's GUI.

The Choose Server dialog shown in Figure 12.8 is the starting point for making and using server configurations. The Connect entry on the ParaView client's File menu brings it up. The dialog shows the servers that you have previously configured. To connect to a server, click its name to select it; then click the Connect button at the bottom of the dialog box.
To make changes to the settings for a server, select it and click Edit Server. To remove a server from the list, select it and click Delete Server.

Figure 12.8 The Choose Server dialog establishes connections to configured servers

To configure a new server connection, click the Add Server button to add it to the list. The dialog box shown below will appear. Enter a name in the first text entry box; this is the name that will appear in the Choose Server dialog (shown above).

Figure 12.9 Configuring a new server

Next, select the type of connection you wish to establish from the Server Type menu. The possibilities are as follows. The "reverse connection" entries mean that the server connects to the client instead of the client connecting to the server. This may be necessary when the server is behind a firewall. Servers are usually run with multiple processes and on a machine other than where the client is running.

• Client / Server: Attach the ParaView client to a server.
• Client / Server (reverse connection): Connect a server to the ParaView client.
• Client / Data Server / Render Server: Attach the ParaView client to separate data and render servers.
• Client / Data Server / Render Server (reverse connection): Attach both a data server and a render server to the ParaView client.

In either of the client / server modes, you must specify the name or IP address of the host machine (node 0) for the server. You may also enter a port number to use, or you may use the default (11111). If you are running in client / data server / render server mode, you must specify one host machine for the data server and another for the render server. You will also need two port numbers. The default for the data server is 11111; the default for the render server is 22221. When all of these values have been set, click the Configure button at the bottom of the dialog.
This will cause the Configure Server dialog box, shown in Figure 12.10, to appear. You must first specify the start-up type. The options are Command and Manual. Choose Manual to connect to a server that has been started, or will be started, externally (on the command line, for instance) outside of the ParaView user interface. After selecting Manual, click the Save button at the bottom of the dialog.

Figure 12.10 Configure the server manually. It must be started outside of ParaView

If you choose the Command option, in the text window labeled "Execute an external command to start the server:" you must give the command(s), and any arguments, for starting the server. This includes commands to execute a command on a remote machine (e.g., ssh) and to run the server in parallel (e.g., mpirun). You may also specify an amount of time to wait after executing the startup command(s) before making the connection between the client and the server(s). (See the spin box at the bottom of the dialog.) When you have finished, click the Save button at the bottom of the dialog.

Figure 12.11 Enter a command that will launch a new server

Clicking the Save button in the Configure Server dialog will return you to the Choose Server dialog (shown earlier in this section). The server you just configured will now be in the list of servers you may choose from. Thereafter, whenever you run ParaView, you can connect to any of the servers that you have configured. You can also give the ParaView client the --server=server_config_name command-line argument to make it automatically connect to any of the configured servers when it starts. You can save and/or load server configurations to and/or from a file using the Save Servers and Load Servers buttons, respectively, on the Choose Server dialog.
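As an illustration, the external command entered for the Command start-up option often combines ssh and mpirun. The following sketch only assembles and prints such a command line; the hostname, install path, and process count are hypothetical placeholders, not values from the manual:

```shell
# Build the launch command the way it might be typed into the
# "Execute an external command" box (all values are placeholders).
SERVER_HOST="cluster.example.com"     # hypothetical remote host
SERVER_NP=8                           # hypothetical MPI process count
PVSERVER=/opt/paraview/bin/pvserver   # hypothetical install path
CMD="ssh $SERVER_HOST mpirun -np $SERVER_NP $PVSERVER --server-port=11111"
echo "$CMD"
```

The --server-port value matches the default port 11111 discussed above; adjust it if you configured a different port.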
This is how some visualization centers provide system-wide server configurations that allow novice users to simply click their choice and connect to an already-configured ParaView server. The format of the XML file for saving the server configurations is discussed online at http://paraview.org/Wiki/Server_Configuration.

Distributing/Obtaining Server Connection Configurations

Motivation

Server configuration (PVSC) files are used to simplify connecting to remote servers. The configuration XML can hide all the complexities of dealing with firewalls, setting up ssh tunnels, and launching jobs using PBS or other job schedulers. However, with ParaView versions 3.12 and earlier, there was no easy way of sharing configuration files besides manually passing them around. With ParaView 3.14, it is now possible for site maintainers to distribute PVSC files by putting them on a web server. Users simply add the URL to the list of locations from which to fetch PVSC files. ParaView then presents the user with a list of available configurations that the user can choose to import locally.

User Interface

To fetch PVSC files from a remote server, go to the Server Connect dialog, accessible from the File | Connect menu.

Figure 1 Server Connect Dialog

Click on the Fetch Servers button (new in v3.14). ParaView will access the configured URLs to obtain the list of configurations, if any, and list them on the Fetch Server Configurations page.

Figure 2 Fetch Server Configuration Page

To change the list of URLs that are accessed to fetch these configurations, click on the Edit Sources button.

Figure 3 Edit Server Configuration Sources

Clicking Save will save the changes and cause ParaView to fetch the configurations from the updated URLs.
Once the updated configurations are listed on the Fetch Server Configurations page, simply select and Import the configurations you would like to use; they are then accessible from the standard server list shown in the Server Connect dialog.

URL Search Sequence

For each URL specified, ParaView tries the following paths, in order, until one returns a valid XML file:

1. {URL}
2. {URL}/v{MAJOR_VERSION}.{MINOR_VERSION}/{CLIENT_OS}/servers.pvsc
3. {URL}/v{MAJOR_VERSION}.{MINOR_VERSION}/{CLIENT_OS}/servers.xml
4. {URL}/v{MAJOR_VERSION}.{MINOR_VERSION}/servers.pvsc
5. {URL}/v{MAJOR_VERSION}.{MINOR_VERSION}/servers.xml
6. {URL}/servers.pvsc
7. {URL}/servers.xml

Where:

• {URL}: the URL specified.
• {MAJOR_VERSION}: major version number of the ParaView client, e.g., 3 for ParaView 3.14.
• {MINOR_VERSION}: minor version number of the ParaView client, e.g., 14 for ParaView 3.14.
• {CLIENT_OS}: either win32, macos, or nix, based on the client OS.

This search sequence makes it easy for PVSC maintainers to provide different PVSC files based on the client OS, if needed. It also enables partial specialization. For example, if a maintainer wants to provide a special PVSC file for ParaView 3.14 on Windows but a common one for all other versions and/or platforms, the maintainer simply sets up the web-server directory structure as follows:

1. {URL}/v3.14/win32/servers.pvsc
2. {URL}/servers.pvsc

and advertises simply the {URL} to users as the configuration source.

Parallel Rendering and Large Displays

About Parallel Rendering

One of ParaView's strengths is its ability to off-load the often demanding rendering task. By off-loading, we mean that ParaView allows you to connect to a remote machine, ideally one that is closer to the data and to high-end rendering hardware, do the rendering on that machine, and still interact with the data from a convenient location.
Abstracting away the location where rendering takes place opens up many possibilities. First, it makes it possible to parallelize the job of rendering, so that huge data sets can be rendered at interactive rates. Rendering is done in the parallel render server component, which may be part of, or separate from, the parallel data server component. In the next section, we describe how parallel rendering works and explain the controls you have over it. Second, huge datasets often require high-resolution displays to view the intricate details within while maintaining a high-level view for context. In the following section, we explain how ParaView can be used to drive tile display walls. Lastly, with the number of displays free to vary, it becomes possible to use ParaView to drive multi-display virtual reality systems. That is described in the final section of this chapter.

Parallel Rendering

X Forwarding - Not a Good Idea

Parallel rendering implies that many processes have some context to render their pixels into. Even though X11 forwarding might be available, you should not run the client remotely and forward its X calls. ParaView will be far more efficient if you let it directly handle the data transfer between local and remote machines. When doing hardware-accelerated rendering on GPUs, this implies having X11 or Windows display contexts (either off-screen or on-screen). Otherwise, it implies using off-screen Mesa (OSMesa) linked into ParaView to do the rendering entirely off-screen into software buffers.

Onscreen GPU Accelerated Rendering via X11 Connections

One of the most common problems people have with setting up the ParaView server is allowing the server processes to open windows on the graphics card on each process's node. When ParaView needs to do parallel rendering, each process will create a window that it will use to render.
This window is necessary because you need the X window before you can create an OpenGL context on the graphics hardware. There is a way around this: if you are using Mesa as your OpenGL implementation, you can use the supplemental OSMesa library to create an OpenGL context without an X window. However, Mesa is strictly a CPU rendering library, so use the OSMesa solution if and only if your server hardware does not have rendering hardware. If your cluster does not have graphics hardware, then compile ParaView with OSMesa support and use the --use-offscreen-rendering flag when launching the server. Assuming that your cluster does have graphics hardware, you will need to establish the following three things:

1. Have xdm run on each cluster node at start-up. Although xdm is almost always run at startup on workstation installations, it is not as commonplace on cluster nodes. Talk to your system administrators for help in setting this up.
2. Disable all security on the X server. That is, allow any process to open a window on the X server without having to log in. Again, talk to your system administrators for help.
3. Use the -display flag for pvserver to make sure that each process connects to the display localhost:0 (or just :0).

To satisfy the last condition, you would run something like:

mpirun -np 4 ./pvserver -display localhost:0

An easy way to test your setup is to use the glxgears program. Unlike pvserver, it will quickly tell you (or, rather, fail to start) if it cannot connect to the local X server.

mpirun -np 4 /usr/X11R6/bin/glxgears -display localhost:0

Offscreen Software Rendering via OSMesa

When running ParaView in a parallel mode, it may be helpful for the remote rendering processes to do their rendering in off-screen buffers.
For example, other windows may be displayed on the node(s) where you are rendering; if these windows cover part of the rendering window, depending on the platform and graphics capabilities, they might even be captured as part of the display results from that node. A similar situation can occur if more than one rendering process is assigned to a single machine and the processes share a display. Also, in some cases the remote rendering nodes are not directly connected to a display at all. If your cluster does not have graphics hardware, compile ParaView with OSMesa support and use the --use-offscreen-rendering flag when launching the server.

The first step to compiling in OSMesa support is to make sure that you are compiling with the Mesa 3D Graphics Library [1]. It is difficult to tell an installation of Mesa from any other OpenGL implementation (although the existence of an osmesa.h header and a libOSMesa library is a good clue). If you are not sure, you can always download your own copy from http://mesa3d.org. We recommend using either Mesa version 7.6.1 or 7.9.1. There are three different ways to use Mesa as ParaView's OpenGL library:

• You can use it purely as a substitute for GPU-enabled on-screen rendering. To do this, set the CMake variable OPENGL_INCLUDE_DIR to point to the Mesa include directory (the one containing the GL subdirectory) and set OPENGL_gl_LIBRARY and OPENGL_glu_LIBRARY to the libGL and libGLU library files, respectively.

Variable                 Value    Description
PARAVIEW_BUILD_QT_GUI    ON
VTK_USE_COCOA            ON       Mac only. X11 is not supported.
VTK_OPENGL_HAS_OSMESA    OFF      Disable off-screen rendering.
OPENGL_INCLUDE_DIR                Set this to the include directory in Mesa.
OPENGL_gl_LIBRARY        libGL    Set this to the libGL.a or libGL.so file in Mesa.
OPENGL_glu_LIBRARY       libGLU   Set this to the libGLU.a or libGLU.so file in Mesa.

• You can use it as a supplement to on-screen rendering. This mode requires that you have a display (X11 is running).
In addition to specifying the GL library (which may be a GPU implementation or the Mesa one above), you must tell ParaView where Mesa's OSMesa library is. Do that by turning the VTK_OPENGL_HAS_OSMESA variable ON. After you configure again, you will see a new CMake variable called OSMESA_LIBRARY. Set this to the libOSMesa library file.

Variable                 Value       Description
PARAVIEW_BUILD_QT_GUI    ON
VTK_USE_COCOA            ON          Mac only. X11 is not supported.
VTK_OPENGL_HAS_OSMESA    ON          Turn this ON to enable software rendering.
OSMESA_INCLUDE_DIR                   Set this to the include directory for Mesa.
OPENGL_INCLUDE_DIR                   Set this to the include directory for Mesa.
OPENGL_gl_LIBRARY        libGL       Set this to the libGL.a or libGL.so file.
OPENGL_glu_LIBRARY       libGLU      Set this to the libGLU.a or libGLU.so file.
OSMESA_LIBRARY           libOSMesa   Set this to the libOSMesa.a or libOSMesa.so file.

• You can use it for pure off-screen rendering, which is necessary when there is no display (or even no X11 libraries). To do this, make sure that the OPENGL_gl_LIBRARY variable is empty and that VTK_USE_X is OFF. Specify the location of OSMesa and OPENGL_glu_LIBRARY as above and turn on the VTK_USE_OFFSCREEN variable.

Variable                 Value       Description
PARAVIEW_BUILD_QT_GUI    OFF         When using off-screen rendering there is no GUI.
VTK_USE_COCOA            OFF         Mac only.
VTK_USE_X                OFF
VTK_USE_OFFSCREEN        ON
VTK_OPENGL_HAS_OSMESA    ON          Turn this ON to enable off-screen Mesa.
OSMESA_INCLUDE_DIR                   Set this to the include directory for Mesa.
OPENGL_INCLUDE_DIR                   Set this to the include directory for Mesa.
OPENGL_gl_LIBRARY                    Set this to empty.
OPENGL_glu_LIBRARY       libGLU      Set this to the libGLU.a or libGLU.so file.
OSMESA_LIBRARY           libOSMesa   Set this to the libOSMesa.a or libOSMesa.so file.

Note that unless VTK_USE_OFFSCREEN is ON, off-screen rendering will not take effect unless you launch the server with the --use-offscreen-rendering flag or, alternatively, set the PV_OFFSCREEN environment variable on the server to 1.
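Pulling the pure off-screen settings together, a configure invocation might look like the following sketch. It only assembles and prints the cmake command line; the Mesa install prefix and the ParaView source path are placeholders, while the variable names mirror the tables above:

```shell
# Assemble a cmake command for the pure off-screen (OSMesa) build;
# $MESA and the ParaView source path are hypothetical placeholders.
MESA=/opt/mesa
CONFIGURE="cmake /path/to/ParaView \
 -DPARAVIEW_BUILD_QT_GUI=OFF -DVTK_USE_X=OFF -DVTK_USE_OFFSCREEN=ON \
 -DVTK_OPENGL_HAS_OSMESA=ON \
 -DOSMESA_INCLUDE_DIR=$MESA/include -DOPENGL_INCLUDE_DIR=$MESA/include \
 -DOPENGL_gl_LIBRARY= -DOPENGL_glu_LIBRARY=$MESA/lib/libGLU.so \
 -DOSMESA_LIBRARY=$MESA/lib/libOSMesa.so"
echo "$CONFIGURE"
```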
Compositing

Given that you are connected to a server that is capable of rendering, you have a choice of whether to do the rendering remotely or locally. ParaView's server performs all data-processing tasks, including generation of a polygonal representation of the full data set and of decimated LOD models. Once the data is generated on the server, it is sometimes better to render remotely and ship pixels to the client for display; at other times it is better to ship geometry to the client and have the client render it locally. In many cases, the polygonal representation of the data set is much smaller than the original data set. In an extreme case, a simple outline may represent a very large structured mesh; in such instances, it may be better to transmit the polygonal representation from the server to the client and then let the client render it. The client can render the data repeatedly, when the viewpoint is changed for instance, without causing additional network traffic; network traffic occurs only when the data changes. If the client workstation has high-performance rendering hardware, it can sometimes render even large data sets interactively in this way.

The second option is to have each node of the server render its geometry and send the resulting images to the client for display. There is a per-frame penalty for compositing the images and sending the result across the network. However, ParaView's image compositing and delivery is very fast, and there are many options to ensure interactive rendering in this mode. Therefore, although small models may be collected and rendered on the client interactively, ParaView's distributed rendering can render models of all sizes interactively. ParaView automatically chooses a rendering strategy to achieve the best rendering performance.
You can control the rendering strategy explicitly (forcing rendering to occur entirely on the server or entirely on the client, for example) by choosing Settings from the Edit menu of ParaView. Double-click on Render View in the window on the left-hand side of the Settings dialog, and then click on Server. The rendering strategy parameters shown in Figure 13.1 will now be visible. Here we explain the most important of these controls in detail. For an explanation of all controls, see the Appendix.

Figure 13.1 Parallel rendering parameters

Remote Render Threshold: This slider determines how large the dataset must be for parallel rendering with image compositing and delivery to be used (as opposed to collecting the geometry to the client). The value of this slider is measured in megabytes. Only when the entire data set consumes more memory than this value will compositing of images occur. If the check-box beside the Remote Render Threshold slider is unmarked, compositing will never happen; the geometry will always be collected. This is only a reasonable option when you can be sure the dataset you are using is very small. In general, it is safer to move the slider to the right than to uncheck the box.

ParaView uses IceT to perform image compositing. IceT is a parallel rendering library that takes multiple images rendered from different portions of the geometry and combines them into a single image. IceT employs several image-compositing algorithms, all of which are designed to work well on distributed-memory machines. Two such image-compositing algorithms are depicted in Figure 13.2 and Figure 13.3. IceT automatically chooses a compositing algorithm based on the current workload and available computing resources.
Figure 13.2 Tree compositing on four processes

Figure 13.3 Binary swap on four processes

Interactive Subsample Rate: The time it takes to composite and deliver images is directly proportional to the size of the images, so the overhead of parallel rendering can be reduced by simply reducing the size of the images. ParaView has the ability to subsample images before they are composited and inflate them after they have been composited. The Interactive Subsample Rate slider specifies the amount by which images are subsampled. This is measured in pixels, and the subsampling is the same in both the horizontal and vertical directions. Thus, a subsample rate of two results in an image that is 1/4 the size of the original image. The image is scaled to full size before it is displayed in the user interface, so the higher the subsample rate, the more obviously pixelated the image will be during interaction, as demonstrated in Figure 13.4. When you are not interacting with the data, no subsampling is used. If you want subsampling to always be off, unmark the check-box beside the Interactive Subsample Rate slider.

Figure 13.4 The effect of subsampling on image quality (no subsampling; subsample rate: 2 pixels; subsample rate: 8 pixels)

Squirt Compression: When ParaView is run in client/server mode, it uses image compression to optimize the image transfer. The compression uses an encoding algorithm optimized for images called SQUIRT (developed at Sandia National Laboratories). SQUIRT uses simple run-length encoding for its compression: a run-length image encoder finds sequences of pixels that are all the same color and encodes them as a single run length (the count of repeated pixels) and the color value.
ParaView represents colors as 24-bit values, but SQUIRT will optionally apply a bit mask to the colors before comparing them. Although information is lost when this mask is applied, the run lengths get longer and the compression improves. The bit masks used by SQUIRT are carefully chosen to match the color sensitivity of the human visual system. A 19-bit mask employed by SQUIRT greatly improves compression with little or no noticeable image artifact. Reducing the number of bits further can improve compression even more, but it can lead to noticeable color-banding artifacts. The Squirt Compression slider determines the bit mask used during interactive rendering (i.e., rendering that occurs while you are changing the camera position or otherwise interacting with the data). During still rendering (when you are not interacting with the data), lossless compression is always used. The check-box to the left of the Squirt Compression slider toggles whether the SQUIRT compression algorithm is used at all.

References

[1] http://mesa3d.org

Tile Display Walls

Tiled Display

ParaView's parallel architecture makes it possible to visualize massive amounts of data interactively. When the data is of sufficient resolution that parallel processing is necessary for interactive display, it is often the case that high-resolution images are needed to inspect the data in adequate detail. If you have a 2D grid of display devices, you can run ParaView in tiled display mode to take advantage of it. To put ParaView in tiled display mode, give pvserver (or pvrenderserver) the X and Y dimensions of the 2D grid with the --tile-dimensions-x (or -tdx) and --tile-dimensions-y (or -tdy) arguments. The X and Y dimensions default to 0, which disables tiled display mode. If you set only one of them to a positive value on the command line, the other will be set to 1.
In tiled display mode, there must be at least as many server nodes (in client / server mode) or render server nodes (in client / data server / render server mode) as tiles. The example below creates a 3x2 tiled display:

pvserver -tdx=3 -tdy=2

Unless you have a high-end display wall, it is likely that each monitor's bezel creates a gap between the images shown on your tiled display. You can compensate for the bezel width by treating the bezels like the mullions of a window. To do so, specify the size of the gap (in pixels) with the --tile-mullion-x (-tmx) and --tile-mullion-y (-tmy) command-line arguments.

The IceT library, which ParaView uses for its image compositing, has custom compositing algorithms that work on tiled displays. Although compositing images for large tiled displays is a compute-intensive process, IceT reduces the overall amount of work by employing custom compositing strategies and removing empty tiles from the computation, as demonstrated in Figure 13.5. If the number of nodes is greater than the number of tiles, the image-compositing work is divided amongst all the processes in the render server. In general, rendering to a tiled display performs significantly better if there are many more nodes in the cluster than tiles in the display it drives. It also greatly helps if the geometry to be rendered is spatially distributed. Spatially distributed data is broken into contiguous pieces that are contained in small regions of space and are therefore rendered to smaller areas of the screen. IceT takes advantage of this property to reduce the amount of work required to composite the final images. ParaView includes the D3 filter, which redistributes data amongst the processors to ensure a spatially distributed geometry and thus improves tiled rendering performance.
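Combining the tile-dimension and mullion flags above, a launch line for a 3x2 wall with 20-pixel bezels might look like this sketch (the process count and mullion sizes are illustrative; the snippet only assembles and prints the command):

```shell
# A 3x2 tiled display needs at least 6 render processes; 20-pixel
# mullions compensate for the monitor bezels (values illustrative).
TILE_CMD="mpirun -np 6 pvserver -tdx=3 -tdy=2 -tmx=20 -tmy=20"
echo "$TILE_CMD"
```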
Figure 13.5 Compositing images for 8 processes on a 4-tile display

Unlike other parallel rendering modes, composited images are not delivered to the client. Instead, image compositing is reserved for generating images on the tiled display, and the desktop renders its own images from a lower-resolution version of the geometry to display in the UI. In tiled display mode, ParaView automatically decimates the geometry and sends it to the client to make this happen. However, when the data is very large, even a decimated version of the geometry can overwhelm the client. In this case, ParaView will replace the geometry on the client with a bounding box. You have several run-time controls over the tiled rendering algorithm that you can tune to maintain interactivity while visualizing very large data on very high-resolution tiled displays. These are located in the Tile Display Parameters section of the Render View / Server page of the application settings dialog and are described in the Application Settings section of the Appendix.

CAVE Displays

Introduction

• ParaView has basic support for visualization in CAVE-like virtual environments (VEs).
• Like typical computing systems, VEs consist of peripheral displays (output) and input devices. However, unlike regular systems, VEs keep track of the physical location of their I/O devices with respect to assumed base coordinates in the physical room (room coordinates).
• Typically, VEs consist of multiple stereoscopic displays that are configured based on the room coordinates. These base coordinates also typically serve as the reference coordinates for the tracked inputs commonly found in VEs.
• VR support in ParaView includes the ability to (1) configure displays (CAVE, tiled walls, etc.) and (2) configure inputs using VRPN/VRUI.
• ParaView can operate in a client / server fashion.
The VR/CAVE module leverages this by assigning different processes on the server to manage different displays. Displays are thus configured on the server side by passing a *.pvx configuration XML file during start-up.
• The client acts as a central point of control. It can initiate the scene (visualization pipeline) and relay tracked interactions to each server process managing the displays. The VR plugin on the client side enables this. Device and interaction-style configurations are brought in through the ParaView state file (this will also have a GUI component in the near future).

Architecture and Design Overview

The picture describes the data flow, control flow, synchronization, and configuration aspects of ParaView as designed for VEs. The four compartments in the picture depict the following ideas:

1. We leverage the render server mechanism to drive the various displays in a CAVE/VE.
2. Devices and control signals are relayed through the client using a plugin (VRPlugin).
3. Rendering and inputs are synchronized using the synchronous proxy mechanism inherent in ParaView.
4. Inputs are configured at the client end and displays are configured at the server end.

Process to Build, Configure and Run ParaView in VE (For ParaView 4.0 or lower)

ParaView VR in 4.0

Process to Build, Configure and Run ParaView in VE (For ParaView 3.10 or lower)

Enabling ParaView for VR is a five-stage process:

1. Build ParaView with the required parameters.
2. Prepare the display configuration file.
3. Prepare the input configuration file.
4. Start the server (with the display configuration, config.pvx).
5. Start the client (with the input configuration, config.pvsm).

Building

Follow the regular build instructions from http://paraview.org/Wiki/ParaView:Build_And_Install. In addition, the following steps need to be performed:
• Make sure Qt and MPI are installed, because they are required for building. If using VRPN, make sure it is built and installed as well.
• When configuring with CMake, enable BUILD_SHARED_LIBS, PARAVIEW_BUILD_QT_GUI, PARAVIEW_USE_MPI, and PARAVIEW_BUILD_PLUGIN_VRPlugin.
• On Apple and Linux platforms the plugin uses the VRUI device daemon client as its default; on Windows, VRPN is the default. If you want to enable the other, simply enable PARAVIEW_USE_VRPN or PARAVIEW_USE_VRUI.
• If VRPN is not found in the default paths, VRPN_INCLUDE_DIR and VRPN_LIBRARY may also need to be set.

Configuring Displays

The ParaView server is responsible for configuring displays. The display configuration is stored in a *.pvx file. ParaView has no concept of units, so you must make sure that the configuration values are in the same units of measurement as what the tracker produces. For example, if the tracker data is in meters, then everything is considered to be in meters; if feet, then feet become the unit for the configuration.

Structure of the PVX Config File

A PVX file may contain elements for several executables in the same file. The process elements applicable to each executable are:

Executable        Applicable Process Type
pvserver          server, dataserver, renderserver
pvrenderserver    server, renderserver
pvdataserver      server, dataserver

Each process element contains machine elements, each one identifying the configuration for a process. All attributes are optional: Name gives the hostname; Environment gives the environment for the process; LowerLeft, LowerRight, and UpperRight give the corner points of that machine's display surface.

Example Config.pvx

• A sample PVX for a six-sided cave with origin at (0,0,0) is provided in ParaView/Documentation/cave.pvx. It can be used to play with different display configurations.

Notes on PVX file usage

• The PVX file should be specified as the last command-line argument for any of the server processes.
• The PVX file is typically specified for all the executables whose environment is being changed by it.
In the data-server/render-server configuration, if you are setting up the environment for both process groups, the PVX file must be passed as a command-line option to both executables: pvdataserver and pvrenderserver.
• When running in parallel, the file is read on all nodes and hence must be present on all nodes.
• ParaView has no concept of units. Use the tracker's units as the default unit for corner-point values, eye separation, and anything else that requires real-world units.

Configure Inputs

• Configuring inputs is a two-step process:
1. Define connections to one or more VRPN/VRUI servers.
2. Map the data coming in from the servers to ParaView interactor styles.

Device Connections

• Connections are clients to VRPN servers and/or VRUI device daemons, and are defined using XML.
• A VRConnectionManager can define multiple connections. Each connection can be of type VRUIConnection or VRPNConnection.
• Each connection has a name associated with it. Data coming in from these connections can also be given specific names.
• For example, a VRUI connection may be named travel, and the tracking data on this connection named head. We can henceforth refer to this data using a dotted notation: travel.head.
• In this way, all data being read into the system can be assigned virtual names which are used further down the line.
• Following is a pictorial representation of the process.

Interaction Styles

• Interactor styles define certain fixed sets of interactions that are usually performed in VEs. There is a fixed number of hand-coded interaction styles:
• vtkVRStyleTracking maps tracking data to a ParaView proxy object.
• vtkVRStyleGrabNUpdateMatrix takes a button press and updates a transformation based on the incoming tracking data.
• vtkVRStyleGrabNTranslateSliceOrigin takes a button press and updates a position based on the position of the tracked input.
• vtkVRStyleGrabNRotateSliceNormal takes a button press and updates a vector based on the orientation of the tracked input.
• Interactor styles are specified using an XML format within the state file.
• The following is a pictorial representation of the entire process.

Piggy-Backing on the State File

• The VRPlugin uses the state-loading mechanism to read its configuration. This was a quick-and-dirty approach and will most probably be replaced with a different method (perhaps even a GUI).
• More about the state file can be found here [1].
• A state file is a representation of the visualization pipeline in ParaView.
• The first step before configuring the inputs is therefore to load and create a scene in ParaView, then export the corresponding state into a state file (config.pvsm). A state file has the extension *.pvsm.
• The state file is extended to introduce the VRConnectionManager and VRInteractorStyles tags.
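The XML listings for these tags were lost in conversion. As a hedged sketch only, based on the connection and style names described above, the added section of a config.pvsm might look roughly like the following; the exact element and attribute names (address, port, id, proxy, property, and so on) are assumptions, not verbatim from the original:

```xml
<ServerManagerState>
  <!-- ...the usual pipeline state saved by ParaView... -->
  <VRConnectionManager>
    <!-- A VRUI connection named "travel"; its tracker 0 is exposed as
         "head" and referred to elsewhere as travel.head.
         Address and port are placeholders. -->
    <VRUIConnection name="travel" address="localhost" port="8555">
      <Tracker id="0" name="head" />
      <Button id="0" name="trigger" />
    </VRUIConnection>
  </VRConnectionManager>
  <VRInteractorStyles>
    <!-- Map the head tracker onto a proxy transform; the style class
         comes from the list above, but proxy/property are illustrative. -->
    <Style class="vtkVRStyleTracking" proxy="RenderView1"
           property="EyeTransformMatrix">
      <Tracker name="travel.head" />
    </Style>
  </VRInteractorStyles>
</ServerManagerState>
```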