FLASH User’s Guide
Version 4.4
October 2016 (last updated November 4, 2016)
Flash Center for Computational Science
University of Chicago
License
0.1 Acknowledgments in Publication
All publications resulting from the use of the FLASH Code must acknowledge the Flash Center. Addition of the following text to the paper acknowledgments will be sufficient:
"The software used in this work was in part developed by the DOE NNSA-ASC OASCR Flash Center at the University of Chicago."
Users should visit the bibliography hosted at flash.uchicago.edu/site/publications/flash_pubs.shtml to find the relevant papers to cite in their work.
This is a summary of the rules governing the dissemination of the "Flash Code" by the Flash Center for
Computational Science to users outside the Center, and constitutes the License Agreement for users of the
Flash Code. Users are responsible for following all of the applicable rules described below.
0.2 Full License Agreement
Below is a summary of the rules governing the dissemination of the "FLASH Code" by the Flash Center for
Computational Science to users outside the Center, and constitutes the License Agreement for users of the
FLASH Code. Users are responsible for following all of the applicable rules described below.
• Public Release. Publicly released versions of the FLASH Code are available via the Center’s website.
We expect to include any external contributions to the Code in public releases that occur after the end
of a negotiated time.
• Decision Process. At present, release of the FLASH Code to users not located at the University of
Chicago or at Argonne National Laboratory is governed solely by the Center’s Director and Management Committee; decisions related to public release of the FLASH Code will be made in the same
manner.
• License and Distribution Rights. The University of Chicago owns the copyright to all Code developed
by the members of the Flash Center at the University of Chicago. External contributors may choose
to be included in the Copyright Assertion. The FLASH Code, or any part of the code, can only be
released and distributed by the Flash Center; individual users of the FLASH Code are not free to
re-distribute the FLASH Code, or any of its components, outside the Center. All users of the FLASH
Code must sign a hardcopy version of this License Agreement and send it to the Center. Distribution
of the FLASH Code can only occur once we receive a signed License Agreement.
• Modifications and Acknowledgments. Users may make modifications to the FLASH Code, and they are
encouraged to send such modifications to the Center. Users are not free to distribute the FLASH Code
to others, as noted in Section 3 above. As resources permit, we will incorporate such modifications in
subsequent releases of the FLASH Code, and we will acknowledge these external contributions. Note
that modifications that do not make it into an officially-released version of the FLASH Code will not
be supported by us.
If a user modifies a copy or copies of the FLASH Code or any portion of it, thus forming a work based
on the FLASH Code, to be included in a FLASH release it must meet the following conditions:
– a) The software must carry prominent notices stating that the user changed specified portions of the FLASH Code. This will also assist us in clearly identifying the portions of the FLASH Code that the user has contributed.
– b) The software must display the following acknowledgement: "This product includes software developed by and/or derived from the Flash Center for Computational Science (http://flash.uchicago.edu) to which the U.S. Government retains certain rights."
– c) The FLASH Code header section, which describes the origins of the FLASH Code and of its components, must remain intact, and should be included in all modified versions of the code.
Furthermore, all publications resulting from the use of the FLASH Code, or any modified version
or portion of the FLASH Code, must acknowledge the Flash Center for Computational Science;
addition of the following text to the paper acknowledgments will be sufficient:
"The software used in this work was developed in part by the DOE NNSA ASC- and DOE Office of Science ASCR-supported Flash Center for Computational Science at the University of Chicago."
The Code header provides information on software that has been utilized as part of the FLASH
development effort (such as the AMR). The Center website includes a list of key scientific journal
references for the FLASH Code. We request that such references be included in the reference
section of any papers based on the FLASH Code.
• Commercial Use. All users interested in commercial use of the FLASH Code must obtain prior written
approval from the Director of the Center. Use of the FLASH Code, or any modification thereof, for
commercial purposes is not permitted otherwise.
• Bug Fixes and New Releases. As part of the FLASH Code dissemination process, the Center has set
up and will maintain as part of its website mechanisms for announcing new FLASH Code releases,
collecting requests for FLASH Code use, and collecting and disseminating relevant documentation.
We support the user community through mailing lists and by providing a bug report facility.
• User Feedback. The Center requests that all users of the FLASH Code notify the Center about all
publications that incorporate results based on the use of the code, or modified versions of the code or
its components. All such information can be sent to info@flash.uchicago.edu.
• Disclaimer. The FLASH Code was prepared, in part, as an account of work sponsored by an agency
of the United States Government. THE FLASH CODE IS PROVIDED “AS IS” AND NEITHER
THE UNITED STATES, NOR THE UNIVERSITY OF CHICAGO, NOR ANY CONTRIBUTORS
TO THE FLASH CODE, NOR ANY OF THEIR EMPLOYEES OR CONTRACTORS, MAKES ANY
WARRANTY, EXPRESS OR IMPLIED (INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE), OR
ASSUMES ANY LEGAL LIABILITY OR RESPONSIBILITY FOR THE ACCURACY, COMPLETENESS, OR USEFULNESS OF ANY INFORMATION, APPARATUS, PRODUCT, OR PROCESS
DISCLOSED, OR REPRESENTS THAT ITS USE WOULD NOT INFRINGE PRIVATELY OWNED
RIGHTS.
IN NO EVENT WILL THE UNITED STATES, THE UNIVERSITY OF CHICAGO OR ANY CONTRIBUTORS TO THE FLASH CODE BE LIABLE FOR ANY DAMAGES, INCLUDING DIRECT,
INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES RESULTING FROM EXERCISE OF
THIS LICENSE AGREEMENT OR THE USE OF THE SOFTWARE.
Acknowledgments
The Flash Center for Computational Science at the University of Chicago is supported by the DOE
NNSA-ASC and NSF. Some of the test calculations described here were performed on machines at LLNL,
LANL, San Diego Supercomputing Center, and ANL. The current contributors to the code from the Center
include:
Sean Couch, Norbert Flocke, Dongwook Lee, Petros Tzeferacos, and Klaus Weide.
Notable external and past contributors include:
Katie Antypas, John Bachan, Robi Banerjee, Edward Brown, Peter Brune, Alvaro Caceres, Alan Calder,
Christopher Daley, Anshu Dubey, Jonathan Dursi, Milad Fatenejad, Christoph Federrath, Robert Fisher,
Bruce Fryxell, Nathan Hearn, Mats Holmström, J. Brad Gallagher, Murali Ganapathy Krishna, Nathan
Goldbaum, Shravan K. Gopal, William Gray, Timur Linde, Zarija Lukic, Andrea Mignone, Joshua Miller,
Prateeti Mohapatra, Kevin Olson, Salvatore Orlando, Tomek Plewa, Kim Robinson, Lynn Reid, Paul Rich,
Paul Ricker, Katherine Riley, Chalence Safranek-Shrader, Anthony Scopatz, Daniel Sheeler, Andrew Siegel,
Noel Taylor, Frank Timmes, Dean Townsley, Marcos Vanella, Natalia Vladimirova, Greg Weirs, Richard
Wunsch, Mike Zingale, and John ZuHone.
PARAMESH was developed under NASA Contracts/Grants NAG5-2652 with George Mason University;
NAS5-32350 with Raytheon/STX; NAG5-6029 and NAG5-10026 with Drexel University; NAG5-9016 with
the University of Chicago; and NCC5-494 with the GEST Institute. For information on PARAMESH please
contact its main developers, Peter MacNeice (macneice@alfven.gsfc.nasa.gov) and Kevin Olson
(olson@physics.drexel.edu) and see the website
http://www.physics.drexel.edu/~olson/paramesh-doc/Users_manual/amr.html.
Contents
License
0.1 Acknowledgments in Publication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
0.2 Full License Agreement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
i
i
i
1 Introduction
1.1 What’s New in FLASH4 . . .
1.2 External Contributions . . . .
1.3 Known Issues in This Release
1.4 About the User’s Guide . . .
1
1
5
6
7
I
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
Getting Started
9
2 Quick Start
11
2.1 System requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2 Unpacking and configuring FLASH for quick start . . . . . . . . . . . . . . . . . . . . . . . . 12
2.3 Running FLASH . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3 Setting Up New Problems
3.1 Creating a Config file . . . . . . . . . . . . .
3.2 Creating a Makefile . . . . . . . . . . . . . .
3.3 Creating a Simulation data.F90 . . . . . . .
3.4 Creating a Simulation init.F90 . . . . . . .
3.5 Creating a Simulation initBlock.F90 . . .
3.6 Creating a Simulation freeUserArrays.F90
3.7 The runtime parameter file (flash.par) . . .
3.8 Running your simulation . . . . . . . . . . . .
II
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
The FLASH Software System
19
20
20
21
21
23
27
28
31
33
4 Overview of FLASH architecture
4.1 FLASH Inheritance . . . . . . . . . . . . . . . . . . . . . . .
4.2 Unit Architecture . . . . . . . . . . . . . . . . . . . . . . . .
4.2.1 Stub Implementations . . . . . . . . . . . . . . . . .
4.2.2 Subunits . . . . . . . . . . . . . . . . . . . . . . . . .
4.2.3 Unit Data Modules, _init, and _finalize routines
4.2.4 Private Routines: kernels and helpers . . . . . . . .
4.3 Unit Test Framework . . . . . . . . . . . . . . . . . . . . . .
v
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
35
36
36
37
38
38
39
40
vi
CONTENTS
5 The
5.1
5.2
5.3
5.4
5.5
5.6
5.7
5.8
5.9
6 The
6.1
6.2
6.3
6.4
6.5
6.6
6.7
6.8
III
FLASH configuration script (setup)
Setup Arguments . . . . . . . . . . . . . . . . . . .
Comprehensive List of Setup Arguments . . . . . .
Using Shortcuts . . . . . . . . . . . . . . . . . . . .
Setup Variables and Preprocessing Config Files . .
Config Files . . . . . . . . . . . . . . . . . . . . . .
5.5.1 Configuration file syntax . . . . . . . . . . .
5.5.2 Configuration directives . . . . . . . . . . .
Creating a Site-specific Makefile . . . . . . . . . .
Files Created During the setup Process . . . . . .
5.7.1 Informational files . . . . . . . . . . . . . .
5.7.2 Code generated by the setup call . . . . . .
5.7.3 Makefiles generated by setup . . . . . . . .
Setup a hybrid MPI+OpenMP FLASH application
Setup a FLASH+Chombo application . . . . . . .
5.9.1 Overview . . . . . . . . . . . . . . . . . . .
5.9.2 Build procedure . . . . . . . . . . . . . . .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
41
42
43
56
56
58
59
60
68
68
69
69
70
71
73
73
73
Flash.h file
UNK, FACE(XYZ) Dimensions . . . . . . . . . .
Property Variables, Species and Mass Scalars
Fluxes . . . . . . . . . . . . . . . . . . . . . .
Scratch Vars . . . . . . . . . . . . . . . . . .
Fluid Variables Example . . . . . . . . . . . .
Particles . . . . . . . . . . . . . . . . . . . .
6.6.1 Particles Types . . . . . . . . . . . . .
6.6.2 Particles Properties . . . . . . . . . .
Non-Replicated Variable Arrays . . . . . . . .
6.7.1 Per-Array Macros . . . . . . . . . . .
6.7.2 Array Partitioning Macros . . . . . . .
6.7.3 Example . . . . . . . . . . . . . . . . .
Other Preprocessor Symbols . . . . . . . . . .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
79
79
80
81
81
82
83
83
83
84
84
84
85
86
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
Driver Unit
7 Driver Unit
7.1 Driver Routines . . . . . . . . . .
7.1.1 Driver initFlash . . . .
7.1.2 Driver evolveFlash . . .
7.1.3 Driver finalizeFlash .
7.1.4 Driver accessor functions
IV
.
.
.
.
.
.
.
.
.
.
.
.
.
87
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
Infrastructure Units
8 Grid Unit
8.1 Overview . . . . . . . . . . . . . . . . . . .
8.2 GridMain Data Structures . . . . . . . . . .
8.3 Computational Domain . . . . . . . . . . .
8.4 Boundary Conditions . . . . . . . . . . . . .
8.4.1 Boundary Condition Types . . . . .
8.4.2 Boundary Conditions at Obstacles .
8.4.3 Implementing Boundary Conditions
8.5 Uniform Grid . . . . . . . . . . . . . . . . .
89
89
90
90
92
92
95
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
97
99
101
102
103
103
104
105
107
CONTENTS
vii
8.5.1 FIXEDBLOCKSIZE Mode . . . . . . . . . . . . . . . . . . . .
8.5.2 NONFIXEDBLOCKSIZE mode . . . . . . . . . . . . . . . . .
8.6 Adaptive Mesh Refinement (AMR) Grid with Paramesh . . . . . . . .
8.6.1 Additional Data Structures . . . . . . . . . . . . . . . . . . . .
8.6.2 Grid Interpolation . . . . . . . . . . . . . . . . . . . . . . . . .
8.6.3 Refinement . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.7 Chombo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.7.1 Using Chombo in a UG configuration . . . . . . . . . . . . . .
8.7.2 Using Chombo in a AMR configuration . . . . . . . . . . . . .
8.8 GridMain Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.9 GridParticles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.9.1 GridParticlesMove . . . . . . . . . . . . . . . . . . . . . . . . .
8.9.2 GridParticlesMapToMesh . . . . . . . . . . . . . . . . . . . . .
8.10 GridSolvers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.10.1 Pfft . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.10.2 Poisson equation . . . . . . . . . . . . . . . . . . . . . . . . . .
8.10.3 Using the Poisson solvers . . . . . . . . . . . . . . . . . . . . .
8.10.4 HYPRE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.11 Grid Geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
8.11.1 Understanding Curvilinear . . . . . . . . . . . . . . . . . . . .
8.11.2 Choosing a Geometry . . . . . . . . . . . . . . . . . . . . . . .
8.11.3 Geometry Information in Code . . . . . . . . . . . . . . . . . .
8.11.4 Available Geometries . . . . . . . . . . . . . . . . . . . . . . . .
8.11.5 Conservative Prolongation/Restriction on Non-Cartesian Grids
8.12 Unit Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
9 IO Unit
9.1 IO Implementations . . . . . . . . . . . . . . . .
9.2 Output Files . . . . . . . . . . . . . . . . . . . .
9.2.1 Checkpoint files - Restarting a Simulation
9.2.2 Plotfiles . . . . . . . . . . . . . . . . . . .
9.2.3 Particle files . . . . . . . . . . . . . . . . .
9.2.4 Integrated Grid Quantities – flash.dat . .
9.2.5 General Runtime Parameters . . . . . . .
9.3 Restarts and Runtime Parameters . . . . . . . .
9.4 Output Scalars . . . . . . . . . . . . . . . . . . .
9.5 Output User-defined Arrays . . . . . . . . . . . .
9.6 Output Scratch Variables . . . . . . . . . . . . .
9.7 Face-Centered Data . . . . . . . . . . . . . . . .
9.8 Output Filenames . . . . . . . . . . . . . . . . .
9.9 Output Formats . . . . . . . . . . . . . . . . . .
9.9.1 HDF5 . . . . . . . . . . . . . . . . . . . .
9.9.2 Parallel-NetCDF . . . . . . . . . . . . . .
9.9.3 Direct IO . . . . . . . . . . . . . . . . . .
9.9.4 Output Side Effects . . . . . . . . . . . .
9.10 Working with Output Files . . . . . . . . . . . .
9.11 Unit Test . . . . . . . . . . . . . . . . . . . . . .
9.12 Chombo . . . . . . . . . . . . . . . . . . . . . . .
9.13 Derived data type I/O . . . . . . . . . . . . . . .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
107
108
108
110
111
112
115
115
115
117
118
119
121
125
125
128
141
146
149
150
151
151
151
154
155
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
157
159
161
161
163
164
165
166
167
167
167
168
168
168
169
169
175
175
175
176
176
177
177
viii
CONTENTS
10 Runtime Parameters Unit
10.1 Defining Runtime Parameters . . . . .
10.2 Identifying Valid Runtime Parameters
10.3 Routine Descriptions . . . . . . . . . .
10.4 Example Usage . . . . . . . . . . . . .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
179
179
179
180
181
11 Multispecies Unit
11.1 Defining Species . . . . . . . . . . . . . . . . . . . . . . . . .
11.2 Initializing Species Information in Simulation_initSpecies
11.3 Specifying Constituent Elements of a Species . . . . . . . . .
11.4 Alternative Method for Defining Species . . . . . . . . . . . .
11.5 Routine Descriptions . . . . . . . . . . . . . . . . . . . . . . .
11.6 Example Usage . . . . . . . . . . . . . . . . . . . . . . . . . .
11.7 Unit Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
183
183
184
186
186
187
189
189
12 Physical Constants Unit
12.1 Available Constants and Units
12.2 Applicable Runtime Parameters
12.3 Routine Descriptions . . . . . .
12.4 Unit Test . . . . . . . . . . . .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
191
192
192
192
193
V
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
Physics Units
195
13 3T Capabilities for Simulation of HEDP Experiments
14 Hydrodynamics Units
14.1 Gas hydrodynamics . . . . . . . . . . . . . . . .
14.1.1 Usage . . . . . . . . . . . . . . . . . . .
14.1.2 The piecewise-parabolic method (PPM)
14.1.3 The unsplit hydro solver . . . . . . . . .
14.1.4 Multitemperature extension for Hydro .
14.1.5 Chombo compatible Hydro . . . . . . .
14.2 Relativistic hydrodynamics (RHD) . . . . . . .
14.2.1 Overview . . . . . . . . . . . . . . . . .
14.2.2 Equations . . . . . . . . . . . . . . . . .
14.2.3 Relativistic Equation of State . . . . . .
14.2.4 Additional Runtime Parameter . . . . .
14.3 Magnetohydrodynamics (MHD) . . . . . . . . .
14.3.1 Description . . . . . . . . . . . . . . . .
14.3.2 Usage . . . . . . . . . . . . . . . . . . .
14.3.3 Algorithm: The Unsplit Staggered Mesh
14.3.4 Algorithm: The Eight-wave Solver . . .
14.3.5 Non-ideal MHD . . . . . . . . . . . . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
. . . .
Solver
. . . .
. . . .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
197
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
15 Incompressible Navier-Stokes Unit
16 Equation of State Unit
16.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . .
16.2 Gamma Law and Multigamma . . . . . . . . . . . . . . .
16.2.1 Ideal Gamma Law for Relativistic Hydrodynamics
16.3 Helmholtz . . . . . . . . . . . . . . . . . . . . . . . . . . .
16.4 Multitemperature extension for Eos . . . . . . . . . . . .
16.4.1 Gamma . . . . . . . . . . . . . . . . . . . . . . . .
16.4.2 Multigamma . . . . . . . . . . . . . . . . . . . . .
201
203
203
203
204
209
212
212
212
213
213
213
214
214
215
216
220
222
223
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
225
225
226
227
227
229
229
230
CONTENTS
16.4.3 Tabulated . . . . . . . . .
16.4.4 Multitype . . . . . . . . .
16.5 Usage . . . . . . . . . . . . . . .
16.5.1 Initialization . . . . . . .
16.5.2 Runtime Parameters . . .
16.5.3 Direct and Wrapped Calls
16.6 Unit Test . . . . . . . . . . . . .
ix
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
231
231
231
231
232
232
233
17 Local Source Terms
17.1 Burn Unit . . . . . . . . . . . . . . . . . . . . . . . . . . .
17.1.1 Algorithms . . . . . . . . . . . . . . . . . . . . . .
17.1.2 Reaction networks . . . . . . . . . . . . . . . . . .
17.1.3 Detecting shocks . . . . . . . . . . . . . . . . . . .
17.1.4 Energy generation rates and reaction rates . . . .
17.1.5 Temperature-based timestep limiting . . . . . . . .
17.2 Ionization Unit . . . . . . . . . . . . . . . . . . . . . . . .
17.2.1 Algorithms . . . . . . . . . . . . . . . . . . . . . .
17.2.2 Usage . . . . . . . . . . . . . . . . . . . . . . . . .
17.3 Stir Unit . . . . . . . . . . . . . . . . . . . . . . . . . . . .
17.3.1 Stir Unit: Generate Implementation . . . . . . . .
17.3.2 Stir Unit: FromFile Implementation . . . . . . . .
17.3.3 Using the StirFromFile Unit . . . . . . . . . . . . .
17.3.4 Stirring Unit Test . . . . . . . . . . . . . . . . . .
17.4 Energy Deposition Unit . . . . . . . . . . . . . . . . . . .
17.4.1 Ray Tracing in the Geometric Optics Limit . . . .
17.4.2 Laser Power Deposition . . . . . . . . . . . . . . .
17.4.3 Laser Energy Density . . . . . . . . . . . . . . . .
17.4.4 Algorithmic Implementations of the Ray Tracing .
17.4.5 Setting up the Laser Pulse . . . . . . . . . . . . . .
17.4.6 Setting up the Laser Beam . . . . . . . . . . . . .
17.4.7 Setting up the Rays . . . . . . . . . . . . . . . . .
17.4.8 3D Laser Ray Tracing in 2D Cylindrical Symmetry
17.4.9 Synchronous and Asynchronous Ray Tracing . . .
17.4.10 Usage . . . . . . . . . . . . . . . . . . . . . . . . .
17.4.11 Unit Tests . . . . . . . . . . . . . . . . . . . . . . .
17.5 Heatexchange . . . . . . . . . . . . . . . . . . . . . . . . .
17.5.1 Spitzer Heat Exchange . . . . . . . . . . . . . . . .
17.5.2 LeeMore Heat Exchange . . . . . . . . . . . . . . .
17.6 Flame . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
17.6.1 Reaction-Diffusion Forms . . . . . . . . . . . . . .
17.6.2 Unit Structure . . . . . . . . . . . . . . . . . . . .
17.7 Turbulence Measurement . . . . . . . . . . . . . . . . . .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
235
236
236
237
241
241
242
242
243
243
244
244
244
246
248
248
248
249
251
251
254
255
258
262
269
273
277
284
284
286
287
287
288
289
18 Diffusive Terms
18.1 Diffuse Unit . . . . . . . . . . . . . . . . . . . . . .
18.1.1 Diffuse Flux-Based implementations . . . .
18.1.2 General Implicit Diffusion Solver . . . . . .
18.1.3 Flux Limiters . . . . . . . . . . . . . . . . .
18.1.4 Stand-Alone Electron Thermal Conduction
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
291
291
292
292
295
297
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
x
CONTENTS
19 Gravity Unit
19.1 Introduction . . . . . . . . . . . . . . . . . . . .
19.2 Externally Applied Fields . . . . . . . . . . . .
19.2.1 Constant Gravitational Field . . . . . .
19.2.2 Plane-parallel Gravitational field . . . .
19.2.3 Gravitational Field of a Point Mass . .
19.2.4 User-Defined Gravitational Field . . . .
19.3 Self-gravity . . . . . . . . . . . . . . . . . . . .
19.3.1 Coupling Gravity with Hydrodynamics .
19.3.2 Tree Gravity . . . . . . . . . . . . . . .
19.4 Usage . . . . . . . . . . . . . . . . . . . . . . .
19.4.1 Tree Gravity Unit Usage . . . . . . . . .
19.5 Unit Tests . . . . . . . . . . . . . . . . . . . . .
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
299
299
300
300
300
300
300
300
301
302
303
303
305
20 Particles Unit
20.1 Time Integration . . . . . . . . . . . .
20.1.1 Active Particles (Massive) . . .
20.1.2 Charged Particles - Hybrid PIC
20.1.3 Passive Particles . . . . . . . .
20.2 Mesh/Particle Mapping . . . . . . . .
20.2.1 Quadratic Mesh Mapping . . .
20.2.2 Cloud in Cell Mapping . . . . .
20.3 Using the Particles Unit . . . . . . . .
20.3.1 Particles Runtime Parameters .
20.3.2 Particle Attributes . . . . . . .
20.3.3 Particle I/O . . . . . . . . . . .
20.3.4 Unit Tests . . . . . . . . . . . .
20.4 Sink Particles . . . . . . . . . . . . . .
20.4.1 Basics of Sink Particles . . . .
20.4.2 Using the Sink Particle Unit .
20.4.3 The Sink Particle Method . . .
20.4.4 Sink Particle Unit Test . . . .
21 Cosmology Unit
21.1 Algorithms and Equations
21.2 Using the Cosmology unit
21.3 Unit Test
22 Material Properties Units
22.1 Thermal Conductivity
22.2 Magnetic Resistivity
22.2.1 Constant resistivity
22.2.2 Spitzer HighZ resistivity
22.3 Viscosity
22.4 Opacity
22.4.1 Constant Implementation
22.4.2 Constcm2g Implementation
22.4.3 BremsstrahlungAndThomson Implementation
22.4.4 OPAL Implementation
22.4.5 Multispecies Implementation
22.4.6 The IONMIX EOS/Opacity Format
22.5 Mass Diffusivity
23 Physics Utilities
23.1 PlasmaState
24 Radiative Transfer Unit
24.1 Multigroup Diffusion
24.1.1 Using Multigroup Radiation Diffusion
24.1.2 Using Mesh Replication with MGD
24.1.3 Specifying Initial Conditions
24.1.4 Altering the Radiation Spectrum
VI Monitor Units
25 Logfile Unit
25.1 Meta Data
25.2 Runtime Parameters, Physical Constants, and Multispecies Data
25.3 Accessor Functions and Timestep Data
25.4 Performance Data
25.5 Example Usage
26 Timer and Profiler Units
26.1 Timers
26.1.1 MPINative
26.1.2 Tau
26.2 Profiler
26.3 Proton Imaging Unit
26.3.1 Proton Deflection by Lorentz Force
26.3.2 Setting up the Proton Beam
26.3.3 Creating the Protons
26.3.4 Setting up the Detector Screens
26.3.5 Time Resolved Proton Imaging
26.3.6 Usage
26.3.7 Unit Test
VII Diagnostic Units
VIII Numerical Tools Units
27 Interpolate Unit
27.1 Introduction
27.2 Piecewise Cubic Interpolation
27.3 Usage
28 Roots Unit
28.1 Introduction
28.2 Roots of Quadratic Polynomials
28.3 Roots of Cubic Polynomials
28.4 Roots of Quartic Polynomials
28.5 Usage
28.6 Unit Tests
28.6.1 Quadratic Polynomials Root Test
28.6.2 Cubic Polynomials Root Test
28.6.3 Quartic Polynomials Root Test
29 RungeKutta Unit
29.1 Introduction
29.2 Runge Kutta Integration
29.3 Usage
29.4 Unit Tests
29.4.1 Runge Kutta FLASH Test for a 2D Elliptical Path
IX Simulation Units
30 The Supplied Test Problems
30.1 Hydrodynamics Test Problems
30.1.1 Sod Shock-Tube
30.1.2 Variants of the Sod Problem in Curvilinear Geometries
30.1.3 Interacting Blast-Wave Blast2
30.1.4 Sedov Explosion
30.1.5 Isentropic Vortex
30.1.6 The double Mach reflection problem
30.1.7 Wind Tunnel With a Step
30.1.8 The Shu-Osher problem
30.1.9 Driven Turbulence StirTurb
30.1.10 Relativistic Sod Shock-Tube
30.1.11 Relativistic Two-dimensional Riemann
30.1.12 Flow Interactions with Stationary Rigid Body
30.2 Magnetohydrodynamics Test Problems
30.2.1 Brio-Wu MHD Shock Tube
30.2.2 Orszag-Tang MHD Vortex
30.2.3 Magnetized Accretion Torus
30.2.4 Magnetized Noh Z-pinch
30.2.5 MHD Rotor
30.2.6 MHD Current Sheet
30.2.7 Field Loop
30.2.8 3D MHD Blast
30.3 Gravity Test Problems
30.3.1 Jeans Instability
30.3.2 Homologous Dust Collapse
30.3.3 Huang-Greengard Poisson Test
30.3.4 MacLaurin
30.4 Particles Test Problems
30.4.1 Two-particle Orbit
30.4.2 Zel'dovich Pancake
30.4.3 Modified Huang-Greengard Poisson Test
30.5 Burn Test Problem
30.5.1 Cellular Nuclear Burning
30.6 RadTrans Test Problems
30.6.1 Infinite Medium, Energy Equilibration
30.6.2 Radiation Step Test
30.7 Other Test Problems
30.7.1 The non-equilibrium ionization test problem
30.7.2 The Delta-Function Heat Conduction Problem
30.7.3 The HydroStatic Test Problem
30.7.4 Hybrid-PIC Test Problems
30.7.5 Full-physics Laser Driven Simulation
30.8 3T Shock Simulations
30.8.1 Shafranov Shock
30.8.2 Non-Equilibrium Radiative Shock
30.8.3 Blast Wave with Thermal Conduction
30.9 Matter+Radiation Simulations
30.9.1 Radiation-Inhibited Bondi Accretion
30.9.2 Radiation Blast Wave
X Tools
31 VisIt
32 Serial FLASH Output Comparison Utility (sfocu)
32.1 Building sfocu
32.2 Using sfocu
33 Drift
33.1 Introduction
33.2 Enabling drift
33.3 Typical workflow
33.4 Caveats and Annoyances
34 FLASH IDL Routines (fidlr3.0)
34.1 Installing and Running fidlr3.0
34.1.1 Setting Up fidlr3.0 Environment Variables
34.1.2 Running IDL
34.2 xflash3: A Widget Interface to Plotting FLASH Datasets
34.2.1 File Menu
34.2.2 Defaults Menu
34.2.3 Colormap Menu
34.2.4 X/Y plot count Menu
34.2.5 Plotting options available from the GUI
34.2.6 Plotting buttons
34.3 Comparing two datasets
35 convertspec3d
35.1 Installation
35.2 Usage
XI Going Further with FLASH
36 Adding new solvers
37 Porting FLASH to other machines
37.1 Writing a Makefile.h
38 Multithreaded FLASH
38.1 Overview
38.2 Threading strategies
38.3 Running multithreaded FLASH
38.3.1 OpenMP variables
38.3.2 FLASH variables
38.3.3 FLASH constants
38.4 Verifying correctness
38.5 Performance results
38.5.1 Multipole solver
38.5.2 Helmholtz EOS
38.5.3 Sedov
38.5.4 LaserSlab
38.6 Conclusion
References
Runtime Parameters
API Index
Index
Chapter 1
Introduction
The FLASH code is a modular, parallel multiphysics simulation code capable of handling general compressible
flow problems found in many astrophysical environments. It is a set of independent code units put together
with a Python language setup tool to form various applications. The code is written in FORTRAN90 and
C. It uses the Message-Passing Interface (MPI) library for inter-processor communication and the HDF5 or
Parallel-NetCDF library for parallel I/O to achieve portability and scalability on a variety of different parallel
computers. FLASH4 has three interchangeable discretization grids: a Uniform Grid, and a block-structured
oct-tree based adaptive grid using the PARAMESH library, and a block-structured patch based adaptive grid
using Chombo. Both PARAMESH and Chombo place resolution elements only where they are needed most.1 The
code’s architecture is designed to be flexible and easily extensible. Users can configure initial and boundary
conditions, change algorithms, and add new physics units with minimal effort.
The Flash Center was founded at the University of Chicago in 1997 under contract to the United States
Department of Energy as part of its Accelerated Strategic Computing Initiative (ASCI) (now the Advanced
Simulation and Computing (ASC) Program). The scientific goal of the Center then was to address several
problems related to thermonuclear flashes on the surface of compact stars (neutron stars and white dwarfs),
in particular Type Ia supernovae, and novae. The software goals of the center were to develop new simulation
tools capable of handling the extreme resolution and physical requirements imposed by conditions in these
explosions and to make them available to the community through the public release of the FLASH code. Since
2009, several new scientific and computational code development projects have been added to the Center;
notable among them are Supernova Models, High-Energy Density Physics (HEDP), Fluid-Structure
Interaction, and Implicit Solvers for stiff parabolic and hyperbolic systems with AMR.
The FLASH code has become a key hydrodynamics application used to test and debug new machine
architectures because of its modular structure, portability, scalability and dependence on parallel I/O libraries. It has a growing user base and has rapidly become a shared code for the astrophysics community
and beyond, with hundreds of active users who customize the code for their own research.
1.1 What's New in FLASH4
This Guide describes the release version 4.4 of FLASH4. FLASH4 includes all the well tested capabilities
of FLASH3. There were a few modules in the official releases of FLASH2 which were added and tested by
local users, but did not have standardized setups that could be used to test them after the migration to
FLASH3. Those modules are not included in the official releases of FLASH3 or FLASH4; however, they are
being made available to download "as is" from the Flash Center's website. We have ensured that they have
been imported into FLASH4 to the extent that they conform to the architecture and compile. We cannot
guarantee that they work correctly; they are meant to be useful starting points for users who need their
functionality. We also welcome setups contributed by the users that can meaningfully test these units. If
such setups become available to us, the units will be released in the future.
1 The Chombo grid in FLASH has had limited testing, and supports only a limited set of physics units. At this time the use
of Chombo within FLASH for production is not recommended.
In terms of the code architecture, FLASH4 closely follows FLASH3. The major changes from FLASH3
are several new capabilities in both physics solvers and infrastructure. Major effort went into the design of
the FLASH3 architecture to ensure that the code can be easily modified and extended by internal as well
as external developers. Each code unit in FLASH4, like in FLASH3 has a well defined interface and follows
the rules for inheritance and encapsulation defined in FLASH3. One of the largest achievements of FLASH3
was the separation of the discretized ‘grid’ architecture from the actual physics. This untangling required
changes in the deepest levels of the code, but has demonstrated its worth by allowing us to import a new
AMR package Chombo into the code.
Because of the increasing importance of software verification and validation, the Flash code group has
developed a test-suite application for FLASH3. The application is called FlashTest and can be used to
setup, compile, execute, and test a series of FLASH code simulations on a regular basis. FlashTest is
available without a license and can be downloaded from the Code Support Web Page. There is also a more
general open-source version of FlashTest which can be used to test any software in which an application is
configured and then executed under a variety of different conditions. The results of the tests can then be
visualized in a browser with FlashTestView, a companion to FlashTest that is also open-source.
Many but not all parts of FLASH4 are backwards compatible with FLASH2, and they are all compatible
with FLASH3. The Flash code group has written extensive documentation detailing how to make the
transition from FLASH2 to FLASH3 as smooth as possible. The user should follow the "Name changes
from FLASH2 to FLASH3" link on the Code Support Web Page for help on transitioning to FLASH4 from
FLASH2. The transition from FLASH3 to FLASH4 does not require much effort from the users except in
any custom implementation they may have.
The new capabilities in FLASH4 that were not included in FLASH3 include
• 3T capabilities in the split and unsplit Hydro solvers. There is support for non-Cartesian geometry, and
the unsplit solver also supports stationary rigid bodies.
• Upwind biased constrained transport (CT) scheme in the unsplit staggered mesh MHD solver
• Full corner transport upwind (CTU) algorithm in the unsplit hydro/MHD solver
• Cylindrical geometry support in the unsplit staggered mesh MHD solver on UG and AMR. A couple
of MHD simulation setups using cylindrical geometry.
• Units for radiation diffusion, conduction, and heat exchange.
• The Equation-of-State unit includes a table-based multi-material, multi-temperature implementation.
• The Opacities unit with the ability to use hot and cold opacities.
• The laser drive with threading for performance
• Ability to replicate mesh for multigroup diffusion or other similar applications.
• Several important solvers have been threaded at both coarse-grain (one block per thread) and fine-grain
(threads within a block) levels.
• Several new HEDP simulation setups.
• A new multipole solver
• Ability to add particles during evolution
The enhancements and bug fixes to the existing capabilities since the FLASH4-beta release are:
• The HLLD Riemann solver has been improved to handle MHD degeneracy.
• PARAMESH’s handling for face-centered variables in order to ensure divergence-free magnetic fields
evolution on AMR now uses gr_pmrpDivergenceFree=.true. and gr_pmrpForceConsistency=.true.
by default.
• The HEDP capabilities of the code have been exercised and are therefore more robust.
• Laser 3D in 2D ray tracing has been added. The code traces rays in a real 3D cylindrical domain using
a computational 2D cylindrical domain and is based on a polygon approximation to the angular part.
• In non-fixedblocksize mode, restart with particles did not work when starting with a different processor
count. This bug has now been fixed.
• All I/O implementations now support reading/writing 0 blocks and 0 particles.
• There is support for particles and face variables in PnetCDF.
• Initialization of the computational domain has been optimized by eliminating unnecessary invocations
of PARAMESH's "digital orrery" algorithm at simulation startup. It is possible to run the orrery in a
reduced communicator in order to speed up FLASH initialization.
• The custom region code and corresponding Grid API routines have been removed.
• PARAMESH4DEV is now the default PARAMESH implementation.
The new capabilities in FLASH4.2 through FLASH4.2.2 since FLASH4.0.1 include:
• New Core-Collapse Super Nova (CCSN) physics, with complete nuclear EOS routines, local neutrino
heating/cooling and multispecies neutrino leakage.
• New unsplit Hydro and MHD implementations, highly optimized for performance. These implementations are now the default option. We have retained the old implementations as an unsplit_old
alternative for compatibility reasons.
• New support for 3T magnetohydrodynamics, designed for HEDP problems.
• A new magnetic resistivity implementation, SpitzerHighZ, for HEDP problems. We have also extended
the support for resistivity in cylindrical geometry in the unsplit solver.
• New threading capabilities for unsplit MHD, compatible with all threading strategies followed by the
code.
• New, improved multipole Poisson solver, implementing the algorithmic refinements described in
http://dx.doi.org/10.1088/0004-637X/778/2/181 and http://arxiv.org/abs/1307.3135.
• Reorganization of the EnergyDeposition unit. A new feature has been included that allows EnergyDeposition to be called once every n time steps.
The new capabilities in FLASH4.3 since FLASH4.2.2 include:
• The sink particles implementation now has support for particles to remain active when leaving the grid
domain (in case of outflow boundary conditions).
• New Proton Imaging unit: The new unit is a simulated diagnostic of the Proton Radiography used in
HEDP experiments.
• Flux-limited-diffusion for radiation (implemented in RadTransMain/MGD) is now available for astrophysical problem setups:
– MatRad3 (matter+rad [2T] stored in three components) implementations for several Eos types:
Gamma, Multigamma, and (experimentally) Helmholtz/SpeciesBased.
– Implemented additional terms in the FLD Rad-Hydro equations to handle streaming and transition-to-streaming regimes better, including radiation pressure. This is currently available as a variant
of the unsplit Hydro solver code, under HydroMain/unsplit rad. We call this RADFLAH (Radiation Flux-Limiter Aware Hydro). Setup with shortcut +uhd3tR instead of +uhd3t. This has
had limited testing, mostly in 1D spherical geometry.
– New test setups under Simulation/SimulationMain/radflaHD: BondiAccretion, RadBlastWave
– Various fixes in Eos implementations.
– New "outstream" diffusion solver boundary condition for streaming limit (currently 1D spherical
only).
– Added Levermore-Pomraning flux limiter.
– More flexible setup combinations are now easily possible; one can combine, e.g., species declared on
the setup command line with SPECIES in Config files and initialized with Simulation_initSpecies, by
setting up with ManualSpeciesDirectives=True.
– Created an ”Immediate” HeatExchange implementation.
– EXPERIMENTAL: ExpRelax variant of RadTrans diffusion solver, implements the algorithm
described in Gittings et al (2008) for the RAGE code, good for handling strong matter-radiation
coupling; for one group (grey) only.
– EXPERIMENTAL: Unified variant of RadTrans diffusion solver, for handling several coupled
scalar equations with HYPRE.
– EXPERIMENTAL: More accurate implementation of flux limiting (and evaluation of diffusion
coeffs): apply limiter to face values, not cell centered values.
• Gravity can now be used in 3T simulations.
• Laser Energy Deposition: New ray tracing options added based on cubic interpolation techniques. Two
variants: 1) Piecewise Parabolic Ray Tracing (PPRT) and 2) Runge Kutta (RK) ray tracing.
• Introduction of new numerical tool units: 1) Interpolate: currently contains the routines to set up
and perform cubic interpolations on rectangular 1D, 2D, 3D grids; 2) Roots: (will) contain all routines
that solve f(x) = 0 (currently contains quadratic, cubic, and quartic polynomial root solvers); 3) Runge
Kutta: sets up and performs Runge Kutta integration of arbitrary functions (passed as arguments).
• Unsplit Hydro/MHD: Local CFL factor using CFL_VAR. (Declare a "VARIABLE cfl" and initialize it
appropriately.)
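As an illustration of the declaration named above, a minimal sketch of what this might look like in a simulation's Config file (the comment lines and surrounding context are illustrative, not prescribed):

```
# In the simulation's Config file: declare a per-cell CFL variable,
# which the unsplit Hydro/MHD solver then uses as a local CFL factor
VARIABLE cfl
```

The variable then needs to be initialized to an appropriate value from the simulation's own initialization code, as the parenthetical above indicates.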
• Unsplit Hydro/MHD: Significant reorganization.
– Reorganized definition and use of scratch data, with memory savings.
– Use of hy_memAllocScratch and friends.
– hy_fullRiemannStateArrays (instead of FLASH_UHD_NEED_SCRATCHVARS).
– New runtime parameter hy_fullSpecMsFluxHandling, default TRUE, resulting in flux-corrected
handling for species and mass scalars, including USM.
– Use shockLowerCFL instead of the shockDetect runtime parameter.
– Revived EOSforRiemann option.
– More accurate handling of geometric effects close to the origin in 1D spherical geometry.
Important changes in FLASH4.4 since FLASH4.3 include:
• The default Hydro implementation has changed from split PPM to unsplit Hydro. A new shortcut
+splitHydro can be used to request a split Hydro implementation.
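As a sketch of how such a shortcut is passed to the setup script (the problem name and other flags here are a hypothetical example, not a prescribed invocation):

```
# Hypothetical example: configure the Sod problem in 2D, requesting
# the split Hydro implementation instead of the new unsplit default
./setup Sod -auto -2d +splitHydro
```

Shortcuts beginning with "+" combine with the other setup arguments in the same way in the other examples mentioned in this chapter (e.g. +uhd3tR instead of +uhd3t).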
• Updated values of many physical constants to 2014 CODATA values. This may cause differences from
previously obtained results. The previous values of constants provided by the PhysicalConstants unit
can be restored by replacing the file PhysicalConstants_init.F90 with an older version; the version
from FLASH4.3 is included as PhysicalConstants_init.F90.flash43. This should only be done to
reproduce previous simulation results to bit accuracy.
• An improved Newton-Raphson search in the 3T Multi-type Eos implementation (MTMMMT, including
Eos based on IONMIX tables) can prevent some cases of convergence failure by bounding the search.
This implementation follows original improvements made to the Helmholtz Eos implementation by
Dean Townsley.
• Added new Poisson solvers (Martin-Cartwright Geometric Multigrid and BiPCGStab, which uses multigrid as preconditioner). Combinations of homogeneous Dirichlet, Neumann, and periodic boundary
conditions are supported (although not yet “isolated” boundaries for self-gravity).
• Added the IncompNS physics unit, which provides a solver for incompressible flow problems on rectangular domains. Multistep and Runge-Kutta explicit projection schemes are used for time integration.
Implementations on staggered grid arrangement for both uniform grid (UG) and adaptive mesh refinement (AMR) are provided. The new Poisson solvers are employed for AMR cases, whereas the
homogeneous trigonometric solver + PFFT can be used in UG. Typical velocity boundary conditions
for this problem are implemented.
• The ProtonImaging diagnostics code has been improved. Time resolved proton imaging is now possible,
where protons are traced through the domain during several time steps. The original version (tracing
of protons during one time step with fixed domain) is still available.
• The code for Radiation-Fluxlimiter-Aware Hydro has been updated. Smoothing of the flux-limiter
function within the enhanced Hydro implementation has been implemented and has been shown effective in increasing stability in 1D simulations.
• New Opacity implementations: BremsstrahlungAndThomson and OPAL. These are for gray opacities.
• In addition to the FLASH4.4 release, the publicly available Python module opacplot2 has received
significant development (credit to JT Laune). It can assist in handling EoS/opacity tables, and includes
command line tools to convert various table formats to IONMIX and to compare between different
tables. More information can be found in the Flash Center’s GitHub repository at https://github.
com/flash-center/opacplot2.
The following features are provided on an EXPERIMENTAL basis. They may only work in limited circumstances and/or have not yet been tested to our satisfaction.
• New Laser - Async communication (experimental).
• Electron-Entropy Advection in Hydro for non-ideal Eos.
• New cubic and quartic equation solvers have been added and are ready to be used. They return only
real cubic and quartic roots. The routines can be found in the flashUtilities/general section.
• An alternative setup tool, “setup alt”, intended as a compatible replacement for setup with a cleaner internal structure.
1.2 External Contributions
Here we list some major contributions to FLASH4 from outside the Flash Center that are included in the
distribution. For completeness, we also list such contributions in FLASH3 which have long been included in
the release.
• Huang-Greengard based multigrid solver, contributed by Paul Ricker. This contribution was first
distributed in FLASH3. Reference: http://adsabs.harvard.edu/abs/2008ApJS..176..293R
• Direct solvers for Uniform Grid contributed by Marcos Vanella. The solvers have been in the release
since FLASH3. Reference: http://dx.doi.org/10.1002/cpe.2821
• Additional Poisson solvers (Martin-Cartwright Geometric Multigrid ported from FLASH2, and new
BiPCGStab), and Incompressible Navier-Stokes solver unit, from Marcos Vanella; added to the release
code in FLASH4.4.
• Hybrid-PIC code, contributed by Mats Holmström. The contribution has been in the distribution since
FLASH4-alpha. Reference: http://adsabs.harvard.edu/abs/2011arXiv1104.1440H
• Primordial Chemistry contributed by William Gray. This contribution was added in FLASH4.0. Reference: http://iopscience.iop.org/0004-637X/718/1/417/.
• Barnes Hut tree gravity solver contributed by Richard Wunsch. This contribution has been further
extended in FLASH 4.2.2 and in the current release and has been developed in collaboration with
Frantisek Dinnbier (responsible for periodic and mixed boundary conditions) and Stefanie Walch.
• Sink Particles contributed by Christoph Federrath et al. This contribution has received significant
updates over several releases. Please refer to http://iopscience.iop.org/0004-637X/713/1/269/
for details.
• Since FLASH4.2.2, there is a new ’FromFile’ implementation of the Stir unit, contributed by Christoph
Federrath. The new implementation sits alongside the older ’Generate’ implementation.
• New Flame and Turb units contributed by Dean Townsley, with code developed by Aaron Jackson and
Alan Calder. A corresponding paper (Jackson, Townsley, & Calder 2014) on modeling turbulent flames
has been published, see http://stacks.iop.org/0004-637X/784/i=2/a=174. More information can
be found in Section 17.6 and Section 17.7.
1.3 Known Issues in This Release
• The outflow boundary condition for face-centered variables, used in solenoidal magnetic field
evolution on AMR, fails to ensure the solenoidal constraint at the physical outflow boundaries. However, numerical solutions are still physically correct away from the outflow
boundaries. This issue may be resolved in future releases.
• The upwind-biased electric field implementation (i.e., E_upwind=.true.) for the unsplit staggered
mesh solver in some cases fails to satisfy divergence-free magnetic field evolution at restart. Users can
still use E_upwind=.false. in most applications.
• The new multipole solver is missing the ability to treat a non-zero minimal radius for spherical geometries, and the ability to specify a point mass contribution to the potential.
• The “Split” implementation of the diffusion solver is essentially meant for testing purposes. Moreover,
it has not been exercised with PARAMESH.
• Some configurations of hydrodynamic test problems with Chombo grid show worse than expected mass
and total energy conservation. Please see the Chombo section in the Hydro chapter of this Guide for
details.
• We have experienced the following abort when running IsentropicVortex problem with Chombo Grid:
”MayDay: TreeIntVectSet.cpp:1995: Assertion ‘bxNumPts != 0’ failed. !!!” We have been in contact
with the Chombo team to resolve this issue.
• The unsplit MHD solver doesn’t support the mode use_GravPotUpdate=.true. for 1D when self-gravity is utilized. The solver will still work if it is set to .false.; in this case the usual reconstruction
schemes will be used in computing the gravitational accelerations at the predictor step (i.e., at the
n+1/2 step) rather than calling the Poisson solver to compute them.
• Time limiting due to burning, even though it has an implementation, is turned off in most simulations
by keeping the value of the parameter enucDtFactor very high. The implementation is therefore not well
tested and should be used with care.
• Mesh replication is only supported for parallel HDF5 and the experimental derived data type ParallelNetCDF (+pnetTypeIO) I/O implementations. Flash will fail at runtime if multiple meshes are in use
without using one of these I/O implementations.
• The unsplit staggered mesh MHD solver shows slight differences (of relative magnitudes of order 10^-12)
in restart comparisons (e.g., sfocu comparison), when there are non-zero values of face-centered magnetic fields. With PARAMESH4DEV, the current default Grid implementation, we have observed this
problem only with the ‘force consistency’ flag (see Runtime Parameter gr_pmrpForceConsistency)
turned on.
• In some cases with the default refinement criteria implementation, the refinement pattern at a given
point in time of a PARAMESH AMR simulation may be slightly different depending on how often plotfiles
and checkpoints are written; with resulting small changes in simulation results. The effect is expected
to also be present in previous FLASH versions. This is a side effect of Grid_restrictAllLevels calls
that happen in IO to prepare all grid blocks for being dumped to file. We have determined that this can
only impact how quickly coarser blocks next to a refinement boundary are allowed to further derefine
when their better resolved neighbors also derefine, in cases where second-derivative criteria applied to
the block itself would allow such derefinement. Users who are concerned with this effect may want to
replace the call to amr_restrict in Grid_updateRefinement with a call to Grid_restrictAllLevels,
at the cost of a slight increase in runtime.
• The PG compiler fails to compile source files which contain OpenMP parallel regions that reference
threadprivate data. This happens in the threaded versions of the Multipole solver and the within block
threaded version of split hydro. A workaround is to remove “default(none)” from the OpenMP parallel
region.
• The Absoft compiler (gcc 4.4.4/Absoft Pro Fortran 11.1.x86_64 with mpich2 1.4.1p1) generates incorrectly behaving code for some files when used with any optimization. More specifically, we have
seen this behavior with gr_markRefineDerefine.F90, but other files may be vulnerable too. An older
version (Absoft Fortran 95 9.0 EP/gcc 4.5.1 with mpich-1.2.7p1) works.
• The -index-reorder setup flag does not work in all configurations. If you wish to use it please
contact the FLASH team.
• The -noclobber setup option will not force a rebuild of all necessary source files in a FLASH application
with derived data type I/O (+hdf5TypeIO or +pnetTypeIO). Do not use -noclobber with derived data
type I/O.
1.4 About the User’s Guide
This User’s Guide is designed to enable individuals unfamiliar with the FLASH code to quickly get acquainted
with its structure and to move beyond the simple test problems distributed with FLASH, customizing it to
suit their own needs. Code users and developers are encouraged to visit the FLASH code documentation
page for other references to supplement the User’s Guide.
Part I provides a rapid introduction to working with FLASH. Chapter 2 (Quick Start) discusses how to
get started quickly with FLASH, describing how to setup, build, and run the code with one of the included
test problems and then to examine the resulting output. Users unfamiliar with the capabilities of FLASH,
who wish to quickly ‘get their feet wet’ with the code, should begin with this section. Users who want to
get started immediately using FLASH to create new problems of their own will want to refer to Chapter 3
(Setting Up New Problems) and Chapter 5 (The FLASH Configuration Script).
Part II begins with an overview of both the FLASH code architecture and a brief overview of the
units included with FLASH. It then describes in detail each of the units included with the code, along
with their subunits, runtime parameters, and the equations and algorithms of the implemented solvers.
Important note: We assume that the reader has some familiarity both with the basic physics
involved and with numerical methods for solving partial differential equations. This familiarity
is absolutely essential in using FLASH (or any other simulation code) to arrive at meaningful solutions to
physical problems. The novice reader is directed to an introductory text, examples of which include
Fletcher, C. A. J. Computational Techniques for Fluid Dynamics (Springer-Verlag, 1991)
Laney, C. B. Computational Gasdynamics (Cambridge UP, 1998)
LeVeque, R. J., Mihalas, D., Dorfi, E. A., and Müller, E., eds. Computational Methods for Astrophysical
Fluid Flow (Springer, 1998)
Roache, P. Fundamentals of Computational Fluid Dynamics (Hermosa, 1998)
Toro, E. F. Riemann Solvers and Numerical Methods for Fluid Dynamics, 2nd Edition (Springer, 1997)
The advanced reader who wishes to know more specific information about a given unit’s algorithm is directed
to the literature referenced in the algorithm section of the chapter in question.
Part VII describes the different test problems distributed with FLASH. Part VIII describes in more detail
the analysis tools distributed with FLASH, including fidlr and sfocu.
Part I
Getting Started
Chapter 2
Quick Start
This chapter describes how to get up-and-running quickly with FLASH with an example simulation, the
Sedov explosion. We explain how to configure a problem, build it, run it, and examine the output using IDL.
2.1 System requirements
You should verify that you have the following:
• A copy of the FLASH source code distribution (as a Unix tar file). To request a copy of the distribution,
click on the “Code Request” link on the FLASH Center web site. You will be asked to fill out a short
form before receiving download instructions. Please remember the username and password you use to
download the code; you will need these to get bug fixes and updates to FLASH.
• An F90 (Fortran 90) compiler and a C compiler. Most of FLASH is written in F90. Information available
at the Fortran Company web site can help you select an F90 compiler for your system. FLASH has been
tested with many Fortran compilers. For details of compilers and libraries, see the RELEASE-NOTES
available in the FLASH home directory.
• An installed copy of the Message-Passing Interface (MPI) library. A freely available implementation
of MPI called MPICH is available from Argonne National Laboratory.
• To use the Hierarchical Data Format (HDF) for output files, you will need an installed copy of the
freely available HDF library. The serial version of HDF5 is the current default FLASH format. HDF5
is available from the HDF Group (http://www.hdfgroup.org/) of the National Center for Supercomputing Applications (NCSA) at http://www.ncsa.illinois.edu. The contents of HDF5 output files
produced by the FLASH units are described in detail in Section 9.1.
• To use the Parallel NetCDF format for output files, you will need an installed copy of the freely
available PnetCDF library. PnetCDF is available from Argonne National Lab at
http://www.mcs.anl.gov/parallel-netcdf/. For details of this format, see Section 9.1.
• To use Chombo as the Adaptive Mesh Refinement (AMR) option, you will need an installed copy of
the library, available freely from Lawrence Berkeley National Lab at https://seesar.lbl.gov/anag/
chombo. The use of Chombo is described in Section 8.7.
• To use the Diffuse unit with HYPRE solvers, you will need to have an installed copy of HYPRE,
available for free from Lawrence Livermore National Lab at
https://computation.llnl.gov/casc/hypre/software.html.
HYPRE is required for using several critical HEDP capabilities including multigroup radiation diffusion
and thermal conduction. Please make sure you have HYPRE installed if you want these capabilities.
• To use the output analysis tools described in this section, you will need a copy of the IDL language from
ITT Visual Information Solutions. IDL is a commercial product. It is not required for the analysis of
FLASH output, but the fidlr3.0 tools described in this section require it. (FLASH output formats
are described in Section 9.9. If IDL is not available, another visual analysis option is VisIt, described
in Chapter 31.) The newest IDL routines, those contained in fidlr3.0, were written and tested with
IDL 6.1 and above. You are encouraged to upgrade if you are using an earlier version. Also, the
HDF5 (version 1.6.2) analysis tools included with fidlr require IDL 6.1 and above. New versions of IDL
come out frequently, and sometimes break backwards compatibility, but every effort will be made to
support them.
• The GNU make utility, gmake. This utility is freely available and has been ported to a wide variety of
different systems. For more information, see the entry for make in the development software listing at
http://www.gnu.org/. On some systems make is an alias for gmake. GNU make is required because
FLASH uses macro concatenation when constructing Makefiles.
• A copy of the Python language, version 2.2 or later is required to run the setup script. Python can
be downloaded from http://www.python.org.
2.2 Unpacking and configuring FLASH for quick start
To begin, unpack the FLASH source code distribution.
tar -xvf FLASHX.Y.tar
where X.Y is the FLASH version number (for example, use FLASH4-alpha.tar for FLASH version 4-alpha,
or FLASH3.1.tar for FLASH version 3.1). This will create a directory called FLASHX.Y/. Type ‘cd FLASHX.Y’
to enter this directory. Next, configure the FLASH source tree for the Sedov explosion problem using the
setup script. Type
./setup Sedov -auto
This configures FLASH for the 2d Sedov problem using the default hydrodynamic solver, equation of state,
Grid unit, and I/O format defined for this problem, linking all necessary files into a new directory, called
‘object/’. For the purpose of this example, we will use the default I/O format, serial HDF5. In order
to compile a problem on a given machine FLASH allows the user to create a file called Makefile.h which
sets the paths to compilers and libraries specific to a given platform. This file is located in the directory
sites/mymachine.myinstitution.mydomain/. The setup script will attempt to see if your machine/platform
has a Makefile.h already created, and if so, this will be linked into the object/ directory. If one is not
created the setup script will use a prototype Makefile.h with guesses as to the locations of libraries on
your machine. The current distribution includes prototypes for AIX, IRIX64, Linux, Darwin, and TFLOPS
operating systems. In any case, it is advisable to create a Makefile.h specific to your machine. See
Section 5.6 for details.
Type the command cd object to enter the object directory which was created when you setup the Sedov
problem, and then execute make. This will compile the FLASH code.
cd object
make
If you have problems and need to recompile, ‘make clean’ will remove all object files from the object/
directory, leaving the source configuration intact; ‘make realclean’ will remove all files and links from
object/. After ‘make realclean’, a new invocation of setup is required before the code can be built.
The building can take a long time on some machines; doing a parallel build (make -j for example) can
significantly increase compilation speed, even on single processor systems.
Assuming compilation and linking were successful, you should now find an executable named flashX in
the object/ directory, where X is the major version number (e.g., 4 for X.Y = 4.0). You may wish to check
that this is the case.
If compilation and linking were not successful, here are a few common suggestions to diagnose the problem:
• Make sure the correct compilers are in your path, and that they produce a valid executable.
• The default Sedov problem uses HDF5 in serial. Make sure you have HDF5 installed. If you do not
have HDF5, you can still setup and compile FLASH, but you will not be able to generate either a
checkpoint or a plot file. You can setup FLASH without I/O by typing
./setup Sedov -auto +noio
• Make sure the paths to the MPI and HDF libraries are correctly set in the Makefile.h in the object/
directory.
• Make sure your version of MPI creates a valid executable that can run in parallel.
These are just a few suggestions; you might also check for further information in this guide or at the
FLASH web page.
FLASH by default expects to find a text file named flash.par in the directory from which it is run. This
file sets the values of various runtime parameters that determine the behavior of FLASH. If it is not present,
FLASH will abort; flash.par must be created in order for the program to run (note: all of the distributed
setups already come with a flash.par which is copied into the object/ directory at setup time). There is
a command-line option to use a different name for this file, described in the next section. Here we will create
a simple flash.par that sets a few parameters and allows the rest to take on default values. With your text
editor, edit the flash.par in the object directory so it looks like Figure 2.1.
# runtime parameters
lrefine_max = 5
basenm = "sedov_"
restart = .false.
checkpointFileIntervalTime = 0.01
nend = 10000
tmax = 0.05
gamma = 1.4
xl_boundary_type = "outflow"
xr_boundary_type = "outflow"
yl_boundary_type = "outflow"
yr_boundary_type = "outflow"
plot_var_1 = "dens"
plot_var_2 = "temp"
plot_var_3 = "pres"
Figure 2.1: FLASH parameter file contents for the quick start example.
This example instructs FLASH to use up to five levels of adaptive mesh refinement (AMR) (through
the lrefine_max parameter) and to name the output files appropriately (basenm). We will not be starting
from a checkpoint file (“restart = .false.” — this is the default, but here it is explicitly set for clarity).
Output files are to be written every 0.01 time units (checkpointFileIntervalTime) and will be created until
t = 0.05 or 10000 timesteps have been taken (tmax and nend respectively), whichever comes first. The ratio
of specific heats for the gas (gamma = γ) is taken to be 1.4, and all four boundaries of the two-dimensional
grid have outflow (zero-gradient or Neumann) boundary conditions (set via the [xy][lr]_boundary_type
parameters).
Note the format of the file – each line is of the form variable = value, a comment (denoted by a hash
mark, #), or a blank. String values are enclosed in double quotes ("). Boolean values are indicated in the
FORTRAN style, .true. or .false.. Be sure to insert a carriage return after the last line of text. A full
list of the parameters available for your current setup is contained in the file setup_params located in the
object/ directory, which also includes brief comments for each parameter. If you wish to skip the creation of a
flash.par, a complete example is provided in the source/Simulation/SimulationMain/Sedov/ directory.
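A quick way to look up a single runtime parameter is to search setup_params with grep. A minimal sketch, assuming entries of the form name [TYPE] [default] followed by an indented comment line; the mock file below stands in for the real object/setup_params, whose exact layout may differ:

```shell
# Mock setup_params (the real file is generated in object/ at setup time)
cat > setup_params <<'EOF'
lrefine_max [INTEGER] [1]
    Maximum AMR refinement level
tmax [REAL] [1.0]
    Maximum simulation time
EOF
# Show a parameter together with the comment line that follows it
grep -A1 'lrefine_max' setup_params
```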
2.3 Running FLASH
We are now ready to run FLASH. To run FLASH on N processors, type
mpirun -np N flashX
remembering to replace N and X with the appropriate values. Some systems may require you to start MPI
programs with a different command; use whichever command is appropriate for your system. The FLASH4
executable accepts an optional command-line argument for the runtime parameters file. If “-par_file filename” is present, FLASH reads the file specified on the command line for runtime parameters; otherwise it reads
flash.par.
You should see a number of lines of output indicating that FLASH is initializing the Sedov problem,
listing the initial parameters, and giving the timestep chosen at each step. After the run is finished, you
should find several files in the current directory:
• sedov.log echoes the runtime parameter settings and indicates the run time, the build time, and the
build machine. During the run, a line is written for each timestep, along with any warning messages.
If the run terminates normally, a performance summary is written to this file.
• sedov.dat contains a number of integral quantities as functions of time: total mass, total energy, total
momentum, etc. This file can be used directly by plotting programs such as gnuplot; note that the
first line begins with a hash (#) and is thus ignored by gnuplot.
• sedov_hdf5_chk_000* are the different checkpoint files. These are complete dumps of the entire simulation state at intervals of checkpointFileIntervalTime and are suitable for use in restarting the
simulation.
• sedov_hdf5_plt_cnt_000* are plot files. In this example, these files contain density, temperature, and
pressure in single precision. If needed, more variables can be dumped in the plotfiles by specifying them
in flash.par. They are usually written more frequently than checkpoint files, since they are the primary
output of FLASH for analyzing the results of the simulation. They are also used for making simulation
movies. Checkpoint files can also be used for analysis and sometimes it is necessary to use them since
they have comprehensive information about the state of the simulation at a given time. However, in
general, plotfiles are preferred since they have more frequent snapshots of the time evolution. Please
see Chapter 9 for more information about IO outputs.
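Since sedov.dat is a plain whitespace-separated table whose header line starts with a hash, standard tools can pull columns out of it directly. A minimal sketch; the two-row file below is a stand-in for real output, and the column layout here (time, mass, x-momentum) is illustrative, so check the header line of your own sedov.dat:

```shell
# Stand-in for a real sedov.dat: '#' header, then whitespace-separated columns
cat > sedov.dat <<'EOF'
#time  mass   x-momentum
0.00   1.0    0.0
0.05   1.0    0.0
EOF
# Skip comment lines and print the total-mass column (column 2 here)
awk '!/^#/ {print $2}' sedov.dat
```

The same `!/^#/` guard is why gnuplot can read the file unchanged: both tools treat the hash-prefixed header as a comment.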
We will use the xflash3 routine under IDL to examine the output. Before doing so, we need to set
the values of three environment variables: IDL_DIR, IDL_PATH and XFLASH3_DIR. You should usually have
IDL_DIR already set during the IDL installation. Under csh the two additional variables can be set using
the commands
setenv XFLASH3_DIR "<FLASH root>/tools/fidlr3.0"
setenv IDL_PATH "${XFLASH3_DIR}:$IDL_PATH"
where <FLASH root> is the location of the FLASHX.Y directory. If you get a message indicating that
IDL_PATH is not defined, enter
setenv IDL_PATH "${XFLASH3_DIR}":${IDL_DIR}:${IDL_DIR}/lib
where ${IDL_DIR} points to the directory where IDL is installed. Fidlr assumes that you have a version of
IDL with native HDF5 support.
FLASH Transition
Please note! The environment variable used from FLASH3 onwards is XFLASH3_DIR. The
main routine name for interactive plotting is xflash3.
Now run IDL (idl or idl start linux) and enter xflash3 at the IDL> prompt. You should see the
main widget as shown in Figure 2.2.
Select any of the checkpoint or plot files through the File/Open Prototype... dialog box. This will define
a prototype file for the dataset, which is used by fidlr to set its own data structures. With the prototype
defined, enter the suffixes ’0000’, ’0005’ and ’1’ in the three suffix boxes. This tells xflash3 which files to
plot. xflash3 can generate output for a number of consecutive files, but if you fill in only the beginning
suffix, only one file is read. Click the auto box next to the data range to automatically scale the plot to
the data. Select the desired plotting variable and colormap. Under ‘Options,’ select whether to plot the
logarithm of the desired quantity and select whether to plot the outlines of the computational blocks. For
very highly refined grids, the block outlines can obscure the data, but they are useful for verifying that
FLASH is putting resolution elements where they are needed.
When the control panel settings are to your satisfaction, click the ‘Plot’ button to generate the plot. For
Postscript or PNG output, a file is created in the current directory. The result should look something like
Figure 2.3, although this figure was generated from a run with eight levels of refinement rather than the five
used in the quick start example run. With fewer levels of refinement, the Cartesian grid causes the explosion
to appear somewhat diamond-shaped. Please see Chapter 34 for more information about visualizing FLASH
output with IDL routines.
FLASH is intended to be customized by the user to work with interesting initial and boundary conditions.
In the following sections, we will cover in more detail the algorithms and structure of FLASH and the sample
problems and tools distributed with it.
Figure 2.2: The main xflash3 widget.
Figure 2.3: Example of xflash output for the Sedov problem with eight levels of refinement.
Chapter 3
Setting Up New Problems
A new FLASH problem is created by making a directory for it under FLASH4/source/Simulation/SimulationMain.
This location is where the FLASH setup script looks to find the problem-specific files. The FLASH distribution includes a number of pre-written simulations; however, most FLASH users will want to simulate their
own problems, so it is important to understand the techniques for adding a customized problem simulation.
Every simulation directory contains routines to initialize the FLASH grid. The directory also includes a
Config file which contains information about infrastructure and physics units, and the runtime parameters
required by the simulation (see Chapter 5). The files that are usually included in the Simulation directory
for a problem are:
Config                      Lists the units and variables required for the problem, defines
                            runtime parameters and initializes them with default values.
Makefile                    The make include file for the Simulation.
Simulation_data.F90         Fortran module which stores data and parameters specific to the
                            Simulation.
Simulation_init.F90         Fortran routine which reads the runtime parameters, and performs
                            other necessary initializations.
Simulation_initBlock.F90    Fortran routine for setting initial conditions in a single block.
Simulation_initSpecies.F90  Optional Fortran routine for initializing species properties if
                            multiple species are being used.
flash.par                   A text file that specifies values for the runtime parameters. The
                            values in flash.par override the defaults from Config files.
In addition to these basic files, a particular simulation may include some files of its own. These files could
provide either new functionality not available in FLASH, or they may include customized versions of any
of the FLASH routines. For example, a problem might require a custom refinement criterion instead of the
one provided with FLASH. If a customized implementation of Grid_markRefineDerefine is placed in the
Simulation directory, it will replace FLASH’s own implementation when the problem is setup. In general,
users are encouraged to put any modifications of core FLASH files in the SimulationMain directory in
which they are working rather than by altering the default FLASH routines. This encapsulation of personal
changes will make it easier to integrate Flash Center patches, and to upgrade to more recent versions of the
code. The user might also wish to include data files in the SimulationMain necessary for initial conditions.
Please see the LINKIF and DATAFILES keywords in Section 5.5.1 for more information on linking in datafiles
or conditionally linking customized implementations of FLASH routines.
The next few paragraphs are devoted to the detailed examination of the basic files for an example setup.
The example we describe here is a hydrodynamical simulation of the Sod shock tube problem, which has
a one-dimensional flow discontinuity. We construct the initial conditions for this problem by establishing
a planar interface at some angle to the x and y axes, across which the density and pressure values are
discontinuous. The fluid is initially at rest on either side of the interface. To create a new simulation,
we first create a new directory Sod in Simulation/SimulationMain and then add the Config, Makefile,
flash.par, Simulation_initBlock.F90, Simulation_init.F90 and Simulation_data.F90 files. Since this
is a single-fluid simulation, there is no need for a Simulation_initSpecies file. The easiest way to construct
these files is to use files from another setup as templates.
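For instance, starting from the Sedov setup, the copy might look like the sketch below. The mock directory tree here stands in for a real FLASH checkout, where the template files already exist under SimulationMain/Sedov:

```shell
# Mock FLASH tree; in a real checkout these template files already exist
mkdir -p source/Simulation/SimulationMain/Sedov
touch source/Simulation/SimulationMain/Sedov/Config \
      source/Simulation/SimulationMain/Sedov/Makefile \
      source/Simulation/SimulationMain/Sedov/flash.par
# Create the new Sod directory and copy the templates over as a starting point
mkdir -p source/Simulation/SimulationMain/Sod
for f in Config Makefile flash.par; do
  cp "source/Simulation/SimulationMain/Sedov/$f" source/Simulation/SimulationMain/Sod/
done
ls source/Simulation/SimulationMain/Sod
```

Each copied file is then edited in place for the new problem, as the following sections describe.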
3.1 Creating a Config file
The Config file for this example serves two principal purposes: (1) to specify the required units and (2) to
register runtime parameters.
# configuration file for our example problem
REQUIRES Driver
REQUIRES physics/Eos/EosMain/Gamma
REQUIRES physics/Hydro
The lines above define the FLASH units required by the Sod problem. Note that we do not ask for particular
implementations of the Hydro unit, since for this problem many implementations will satisfy the requirements.
However, we do ask for the gamma-law equation of state (physics/Eos/EosMain/Gamma) specifically, since
that implementation is the only valid option for this problem. In FLASH4-alpha, the PARAMESH 4 Grid
implementation is passed to Driver by default. As such, there is no need to specify a Grid unit explicitly,
unless a simulation requires an alternative Grid implementation. Also important to note is that we have
not explicitly required IO, which is included by default. In constructing the list of requirements for a
problem, it is important to keep them as general as the problem allows. We recommend asking for specific
implementations of units as command line options or in the Units file when the problem is being setup, to
avoid the necessity of modifying the Config files. For example, if there was more than one implementation
of Hydro that could handle the shocks, any of them could be picked at setup time without having to modify
the Config file. However, to change the Eos to an implementation other than Gamma, the Config file would
have to be modified. For command-line options of the setup script and the description of the Units file see
Chapter 5.
After specifying the units, the Config file lists the runtime parameters specific to this problem. The names
of runtime parameters are case-insensitive. Note that no unit is constrained to use only the parameters defined
in its own Config file. It can legitimately access any runtime parameter registered by any unit included in
the simulation.
PARAMETER sim_rhoLeft   REAL 1.     [0 to ]
PARAMETER sim_rhoRight  REAL 0.125  [0 to ]
PARAMETER sim_pLeft     REAL 1.     [0 to ]
PARAMETER sim_pRight    REAL 0.1    [0 to ]
PARAMETER sim_uLeft     REAL 0.
PARAMETER sim_uRight    REAL 0.
PARAMETER sim_xangle    REAL 0.     [0 to 360]
PARAMETER sim_yangle    REAL 90.    [0 to 360]
PARAMETER sim_posn      REAL 0.5
Here we define sim_rhoLeft, sim_pLeft and sim_uLeft as the density, pressure and velocity to the left of
the discontinuity, and sim_rhoRight, sim_pRight and sim_uRight as the density, pressure and velocity to
the right of the discontinuity. The parameters sim_xangle and sim_yangle give the angles with respect
to the x and y axes, and sim_posn specifies the intersection between the shock plane and the x axis. The
quantities in square brackets define the permissible range of values for the parameters. The default value
of any parameter (like sim_xangle) can be overridden at runtime by including a line (e.g., sim_xangle =
45.0) defining a different value for it in the flash.par file.
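Such overrides can be collected in the problem's flash.par; a short illustrative fragment (values chosen for this example, not defaults):

```
# flash.par fragment: override Config defaults for the Sod problem
sim_rhoLeft = 1.0
sim_pRight  = 0.15
sim_xangle  = 45.0
```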
3.2 Creating a Makefile
The file Makefile included in the Simulation directory does not have the standard Makefile format for
make/gmake. Instead, the setup script generates a complete compilation Makefile from the machine/system
specific one (see Section 5.6) and the unit Makefiles (see Section 5.7.3).
In general, standard module and routine dependencies are figured out by the setup script or are inherited
from the directory structure. The Makefile for this example is very simple: it only adds the object file for
Simulation_data to the Simulation unit Makefile. Additional object files such as Simulation_init.o
are already added in the directory above SimulationMain.
3.3  Creating a Simulation_data.F90
The Fortran module Simulation_data is used to store data specific to the Simulation unit. In FLASH4
there is no central 'database'; instead, each unit stores its own data in its Unit_data Fortran module. Data
needed from other units is accessed through those units' interfaces. The basic structure of the
Simulation_data module is shown below:
module Simulation_data

  implicit none

  !! Runtime Parameters
  real, save :: sim_rhoLeft, sim_rhoRight, sim_pLeft, sim_pRight
  real, save :: sim_uLeft, sim_uRight, sim_xAngle, sim_yAngle, sim_posn
  real, save :: sim_gamma, sim_smallP, sim_smallX

  !! Other unit variables
  real, save :: sim_xCos, sim_yCos, sim_zCos

end module Simulation_data
Note that all the variables in this data module have the save attribute. Without this attribute, many
compilers do not guarantee storage for a variable outside the scope of the Simulation_data module.
Also notice that there are more variables in the data module than in the Config file. Some of them,
such as sim_smallX, are runtime parameters from other units, while others, such as sim_xCos, are
simulation-specific variables that are available to all routines in the Simulation unit. The FLASH4
naming convention is that variables beginning with sim_ "belong" to the Simulation unit.
3.4  Creating a Simulation_init.F90
The routine Simulation_init is called by the routine Driver_initFlash at the beginning of the simulation.
Driver_initFlash calls the Unit_init.F90 routine of every unit to initialize it. In this particular case, the
Simulation_init routine gets the necessary runtime parameters, stores them in the Simulation_data
Fortran module, and initializes the other variables in the module. More generally, all one-time initialization
required by the simulation is implemented in the Simulation_init routine.
FLASH Transition
The contents of the if (firstCall) clauses of FLASH2 are now in the Simulation_init
routine in FLASH4.
The basic structure of the routine Simulation init should consist of
1. A Fortran module use statement for Simulation_data.
2. Fortran module use statements for the Unit_interfaces needed to access the interface
of the RuntimeParameters unit and of any other units being used.
3. The variable typing implicit none statement.
4. The necessary #include header files.
5. Declarations of arguments and local variables.
6. Calls to the RuntimeParameters unit interface to obtain the values of runtime
parameters.
7. Calls to the PhysicalConstants unit interface to initialize any necessary physical
constants.
8. Calls to the Multispecies unit interface to initialize the species' properties, if
multiple species are in use.
9. Initialization of other unit-scope variables, packages, and functions.
10. Any other calculations that are needed only once at the beginning of the run.
In this example, after the implicit none statement we include two files, "constants.h" and "Flash.h".
The "constants.h" file holds global constants defined in the FLASH code such as MDIM, MASTER_PE, and
MAX_STRING_LENGTH. It also stores constants that make reading the code easier, such as IAXIS, JAXIS, and
KAXIS, which are defined as 1, 2, and 3, respectively. More information is available in comments in the
distributed constants.h. A complete list of defined constants is available on the Code Support Web Page.
The "Flash.h" file contains all of the definitions specific to a given problem. This file is generated by
the setup script and defines the indices for the variables in various data structures. For example, the index
for density in the cell-centered grid data structure is defined as DENS_VAR. The "Flash.h" file also defines
the number of species, the number of dimensions, the maximum number of blocks, and many more values specific
to a given run. Please see Chapter 6 for a complete description of the Flash.h file.
FLASH Transition
The constants defined in the "Flash.h" file allow the user direct access to a variable's index
in 'unk.' This direct access is unlike FLASH2, where the user would first have to get the
integer index of the variable by calling a database function and then use an integer variable
such as idens as the variable index. Previously:
idens = dBaseKeyNumber('dens')
ucons(1,i) = solnData(idens,i,j,k)
Now, the syntax is simpler:
ucons(1,i) = solnData(DENS_VAR,i,j,k)
This new syntax also allows specification errors to be discovered at compile time.
subroutine Simulation_init()

  use Simulation_data
  use RuntimeParameters_interface, ONLY : RuntimeParameters_get

  implicit none

#include "Flash.h"
#include "constants.h"

  ! get the runtime parameters relevant for this problem
  call RuntimeParameters_get('smallp', sim_smallP)
  call RuntimeParameters_get('smallx', sim_smallX)
  call RuntimeParameters_get('gamma', sim_gamma)
  call RuntimeParameters_get('sim_rhoLeft', sim_rhoLeft)
  call RuntimeParameters_get('sim_rhoRight', sim_rhoRight)
  call RuntimeParameters_get('sim_pLeft', sim_pLeft)
  call RuntimeParameters_get('sim_pRight', sim_pRight)
  call RuntimeParameters_get('sim_uLeft', sim_uLeft)
  call RuntimeParameters_get('sim_uRight', sim_uRight)
  call RuntimeParameters_get('sim_xangle', sim_xAngle)
  call RuntimeParameters_get('sim_yangle', sim_yAngle)
  call RuntimeParameters_get('sim_posn', sim_posn)

  ! Do other initializations

  ! convert the shock angle parameters
  sim_xAngle = sim_xAngle * 0.0174532925 ! Convert to radians.
  sim_yAngle = sim_yAngle * 0.0174532925

  sim_xCos = cos(sim_xAngle)

  if (NDIM == 1) then
     sim_xCos = 1.
     sim_yCos = 0.
     sim_zCos = 0.
  elseif (NDIM == 2) then
     sim_yCos = sqrt(1. - sim_xCos*sim_xCos)
     sim_zCos = 0.
  elseif (NDIM == 3) then
     sim_yCos = cos(sim_yAngle)
     sim_zCos = sqrt( max(0., 1. - sim_xCos*sim_xCos - sim_yCos*sim_yCos) )
  endif

end subroutine Simulation_init
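The conversion factor 0.0174532925 is simply pi/180. The direction-cosine logic above can be verified numerically; the following Python sketch (illustrative only, mirroring the Fortran variable names) reproduces the NDIM branches and confirms that the resulting cosines are normalized:

```python
import math

def direction_cosines(xangle_deg, yangle_deg, ndim):
    """Mirror of the NDIM branches in Simulation_init (illustrative only)."""
    xangle = math.radians(xangle_deg)  # same as multiplying by 0.0174532925
    yangle = math.radians(yangle_deg)
    xcos = math.cos(xangle)
    if ndim == 1:
        return 1.0, 0.0, 0.0
    elif ndim == 2:
        return xcos, math.sqrt(1.0 - xcos * xcos), 0.0
    else:  # ndim == 3
        ycos = math.cos(yangle)
        zcos = math.sqrt(max(0.0, 1.0 - xcos * xcos - ycos * ycos))
        return xcos, ycos, zcos

# Default Sod parameters: sim_xangle = 0, sim_yangle = 90
xc, yc, zc = direction_cosines(0.0, 90.0, 3)
print(xc, yc, zc)  # shock normal along x: (1.0, ~0.0, 0.0)
```

The max(0., ...) guard in the 3-D branch protects the square root against tiny negative round-off, exactly as in the Fortran above.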
3.5  Creating a Simulation_initBlock.F90
The routine Simulation_initBlock is called by the Grid unit to apply initial conditions to the physical
domain. If the AMR grid PARAMESH is being used, the formation of the physical domain starts at the lowest
level of refinement. Initial conditions are applied to each block at this level by calling Simulation_initBlock.
The Grid unit then checks the refinement criteria in the blocks it has created and refines the blocks if the
criteria are met. It then calls Simulation_initBlock to initialize the newly created blocks. This process
repeats until the grid reaches the required refinement level in the areas marked for refinement. The Uniform
Grid has only one level, with the same resolution everywhere; therefore, only one block per processor is created,
and Simulation_initBlock is called to initialize this single block. It is important to note that a problem's
Simulation_initBlock routine is the same regardless of whether PARAMESH or the Uniform Grid is being used.
The Grid unit handles these differences, not the Simulation unit.
The basic structure of the routine Simulation_initBlock should be as follows:
1. A use statement for Simulation_data.
2. One or more use statements to access the other unit interfaces being used, for example
use Grid_interface, ONLY: Grid_putPointData.
3. The variable typing implicit none statement.
4. The necessary #include header files.
5. Declarations of arguments and local variables.
6. Generation of the initial conditions, either read from a file or calculated directly
in the routine.
7. Calls to the various Grid_putData routines to store the values of the solution variables.
We continue to look at the Sod setup and describe its Simulation_initBlock in detail. The first part
of the routine contains all the declarations, as shown below. The first statement in the routine is the use
statement, which provides access to the runtime parameters and other unit-scope variables initialized in
the Simulation_init routine. The include files bring in the needed constants, and then the arguments are
declared. The declaration of the local variables is next, with allocatable arrays for each block.
subroutine Simulation_initBlock(blockID)

  ! get the needed unit scope data
  use Simulation_data, ONLY: sim_posn, sim_xCos, sim_yCos, sim_zCos, &
                             sim_rhoLeft, sim_pLeft, sim_uLeft,      &
                             sim_rhoRight, sim_pRight, sim_uRight,   &
                             sim_smallX, sim_gamma, sim_smallP
  use Grid_interface, ONLY : Grid_getBlkIndexLimits, Grid_getCellCoords, &
                             Grid_putPointData

  implicit none

  ! get all the constants
#include "constants.h"
#include "Flash.h"

  ! define arguments and indicate whether they are input or output
  integer, intent(in) :: blockID

  ! declare all local variables.
  integer :: i, j, k, n
  integer :: iMax, jMax, kMax
  real :: xx, yy, zz, xxL, xxR
  real :: lPosn0, lPosn

  ! arrays to hold coordinate information for the block
  real, allocatable, dimension(:) :: xCenter, xLeft, xRight, yCoord, zCoord

  ! array to get integer indices defining the beginning and the end
  ! of a block.
  integer, dimension(2,MDIM) :: blkLimits, blkLimitsGC

  ! the number of grid points along each dimension
  integer :: sizeX, sizeY, sizeZ

  integer, dimension(MDIM) :: axis
  integer :: dataSize
  logical :: gcell = .true.

  ! these variables store the calculated initial values of physical
  ! variables a grid point at a time.
  real :: rhoZone, velxZone, velyZone, velzZone, presZone, &
          enerZone, ekinZone
Note that FLASH promotes all floating point variables to double precision at compile time for maximum
portability. We therefore declare all floating point variables with real in the source code. In the next part
of the code we allocate the arrays that will hold the coordinates.
FLASH Transition
FLASH4 supports blocks that are not sized at compile time, in order to generalize the Uniform
Grid and to be able to support different AMR packages in the future. For this reason, the
arrays are not sized with the static NXB etc. as was the case in FLASH2; instead, they are
allocated on a block-by-block basis in Simulation_initBlock. The use of allocatable arrays
costs some performance; however, since this part of the code is executed only at the beginning
of the simulation, it has negligible impact on the overall execution time in production runs.
! get the integer endpoints of the block in all dimensions
! the array blkLimits returns the interior end points
! whereas array blkLimitsGC returns endpoints including guardcells
call Grid_getBlkIndexLimits(blockId,blkLimits,blkLimitsGC)
! get the size along each dimension for allocation and then allocate
sizeX = blkLimitsGC(HIGH,IAXIS)
sizeY = blkLimitsGC(HIGH,JAXIS)
sizeZ = blkLimitsGC(HIGH,KAXIS)
allocate(xLeft(sizeX))
allocate(xRight(sizeX))
allocate(xCenter(sizeX))
allocate(yCoord(sizeY))
allocate(zCoord(sizeZ))
The next part of the routine involves setting up the initial conditions. This section could be code for
interpolating a given set of initial conditions, constructing some analytic model, or reading in a table of initial
values. In the present example, we begin by getting the coordinates for the cells in the current block. This
is done by a set of calls to Grid_getCellCoords. Next, since we are constructing the initial conditions
from a model, we create loops that compute appropriate values for each grid point. Note that we use the
blkLimits array from Grid_getBlkIndexLimits when looping over the spatial indices, so as to initialize only
the interior cells in the block. To initialize the entire block, including the guardcells, the blkLimitsGC array
should be used.
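To make the two index arrays concrete: with hypothetical values of 8 interior cells per dimension and 4 guard cells on each side (the actual NXB and NGUARD depend on the setup), the interior of a 1-based array runs from index 5 to 12. A small Python sketch (illustrative only, not FLASH code):

```python
NXB = 8      # interior cells per block along x (hypothetical value)
NGUARD = 4   # guard cells on each side (hypothetical value)

# endpoints including guard cells, 1-based as in Fortran
blkLimitsGC = (1, NXB + 2 * NGUARD)      # (1, 16)
# interior endpoints only
blkLimits = (NGUARD + 1, NGUARD + NXB)   # (5, 12)

# looping from blkLimits[LOW] to blkLimits[HIGH] touches interior cells only
interior = list(range(blkLimits[0], blkLimits[1] + 1))
print(blkLimitsGC, blkLimits, len(interior))  # (1, 16) (5, 12) 8
```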
xCenter(:) = 0.0
yCoord(:) = 0.0
zCoord(:) = 0.0
call Grid_getCellCoords(IAXIS, blockID, LEFT_EDGE,  gcell, xLeft,   sizeX)
call Grid_getCellCoords(IAXIS, blockID, CENTER,     gcell, xCenter, sizeX)
call Grid_getCellCoords(IAXIS, blockID, RIGHT_EDGE, gcell, xRight,  sizeX)
call Grid_getCellCoords(JAXIS, blockID, CENTER,     gcell, yCoord,  sizeY)
call Grid_getCellCoords(KAXIS, blockID, CENTER,     gcell, zCoord,  sizeZ)
!------------------------------------------------------------------------------
! loop over all of the zones in the current block and set the variables.
!------------------------------------------------------------------------------
do k = blkLimits(LOW,KAXIS),blkLimits(HIGH,KAXIS)
   zz = zCoord(k) ! coordinates of the cell center in the z-direction
   lPosn0 = sim_posn - zz*sim_zCos/sim_xCos ! Where along the x-axis
                                            ! the shock intersects
                                            ! the xz-plane at the current z.

   do j = blkLimits(LOW,JAXIS),blkLimits(HIGH,JAXIS)
      yy = yCoord(j) ! center coordinates in the y-direction
      lPosn = lPosn0 - yy*sim_yCos/sim_xCos ! The position of the
                                            ! shock in the current yz-row.
      dataSize = 1 ! for the Grid put data function; we are
                   ! initializing a single cell at a time and
                   ! sending it to Grid

      do i = blkLimits(LOW,IAXIS),blkLimits(HIGH,IAXIS)
         xx  = xCenter(i) ! center coordinate along x
         xxL = xLeft(i)   ! left edge coordinate along x
         xxR = xRight(i)  ! right edge coordinate along x
For the present problem, we create a discontinuity along the shock plane. We do this by initializing
the grid points to the left of the shock plane with one set of values, and the grid points to the right of the
shock plane with another. Recall that the runtime parameters which provide these values are available to
us through the Simulation_data module. At this point we can initialize all independent physical variables
at each grid point. The following code shows the contents of the loops. Don't forget to store the calculated
values in the Grid data structure!
if (xxR <= lPosn) then
   rhoZone = sim_rhoLeft
   presZone = sim_pLeft
   velxZone = sim_uLeft * sim_xCos
   velyZone = sim_uLeft * sim_yCos
   velzZone = sim_uLeft * sim_zCos

! initialize cells which straddle the shock. Treat them as though 1/2 of
! the cell lay to the left and 1/2 lay to the right.
elseif ((xxL < lPosn) .and. (xxR > lPosn)) then
   rhoZone = 0.5 * (sim_rhoLeft + sim_rhoRight)
   presZone = 0.5 * (sim_pLeft + sim_pRight)
   velxZone = 0.5 * (sim_uLeft + sim_uRight) * sim_xCos
   velyZone = 0.5 * (sim_uLeft + sim_uRight) * sim_yCos
   velzZone = 0.5 * (sim_uLeft + sim_uRight) * sim_zCos

! initialize cells to the right of the initial shock.
else
   rhoZone = sim_rhoRight
   presZone = sim_pRight
   velxZone = sim_uRight * sim_xCos
   velyZone = sim_uRight * sim_yCos
   velzZone = sim_uRight * sim_zCos
endif
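The left/straddle/right logic of the loop body can be checked in isolation. The following Python sketch (illustrative only, with the Sod default left and right densities hard-wired) reproduces the three density branches:

```python
def zone_density(xxL, xxR, lPosn, rhoLeft=1.0, rhoRight=0.125):
    """Classify a cell against the shock-plane position lPosn, as in the loop above."""
    if xxR <= lPosn:            # cell lies entirely left of the interface
        return rhoLeft
    elif xxL < lPosn < xxR:     # straddling cell: average the two states
        return 0.5 * (rhoLeft + rhoRight)
    else:                       # cell lies entirely right of the interface
        return rhoRight

print(zone_density(0.30, 0.40, 0.5))  # 1.0
print(zone_density(0.45, 0.55, 0.5))  # 0.5625
print(zone_density(0.60, 0.70, 0.5))  # 0.125
```

The chained comparison xxL < lPosn < xxR is equivalent to the Fortran (xxL < lPosn) .and. (xxR > lPosn).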
! Get the position of the cell in the block
axis(IAXIS) = i
axis(JAXIS) = j
axis(KAXIS) = k

! Compute the gas energy and set the gamma-values
! needed for the equation of state.
ekinZone = 0.5 * (velxZone**2 + velyZone**2 + velzZone**2)

enerZone = presZone / (sim_gamma-1.)
enerZone = enerZone / rhoZone
enerZone = enerZone + ekinZone
enerZone = max(enerZone, sim_smallP)
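As a sanity check on these four assignments: for the left Sod state (rho = 1, p = 1, u = 0, gamma = 1.4) the specific total energy is p/((gamma-1)rho) = 2.5, and for the right state it is 2.0. A Python transcription (illustrative only; the smallP floor value used here is an arbitrary small number, not the FLASH default):

```python
gamma, smallP = 1.4, 1.0e-10   # smallP value is an assumption for this sketch

def total_specific_energy(rho, pres, velx, vely, velz):
    ekin = 0.5 * (velx**2 + vely**2 + velz**2)  # specific kinetic energy
    ener = pres / (gamma - 1.0)  # internal energy per unit volume
    ener = ener / rho            # -> internal energy per unit mass
    ener = ener + ekin           # add kinetic contribution
    return max(ener, smallP)     # apply the floor

print(total_specific_energy(1.0, 1.0, 0.0, 0.0, 0.0))    # 2.5  (left state)
print(total_specific_energy(0.125, 0.1, 0.0, 0.0, 0.0))  # 2.0  (right state)
```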
! store the variables in the current zone via the Grid_putPointData method
call Grid_putPointData(blockId, CENTER, DENS_VAR, EXTERIOR, axis, rhoZone)
call Grid_putPointData(blockId, CENTER, PRES_VAR, EXTERIOR, axis, presZone)
call Grid_putPointData(blockId, CENTER, VELX_VAR, EXTERIOR, axis, velxZone)
call Grid_putPointData(blockId, CENTER, VELY_VAR, EXTERIOR, axis, velyZone)
call Grid_putPointData(blockId, CENTER, VELZ_VAR, EXTERIOR, axis, velzZone)
call Grid_putPointData(blockId, CENTER, ENER_VAR, EXTERIOR, axis, enerZone)
call Grid_putPointData(blockId, CENTER, GAME_VAR, EXTERIOR, axis, sim_gamma)
call Grid_putPointData(blockId, CENTER, GAMC_VAR, EXTERIOR, axis, sim_gamma)
When Simulation_initBlock returns, the Grid data structures for the physical variables contain the values
of the initial model for the current block. As mentioned before, Simulation_initBlock is called for every
block that is created as the code refines the initial model.
3.6  Creating a Simulation_freeUserArrays.F90
From within Simulation_init, the user may create large allocatable arrays that are used for initialization
within Simulation_initBlock or some other routine. An example would be a set of arrays used to
interpolate fields onto the blocks as they are being created. If these arrays use a lot of allocated memory,
the subroutine Simulation_freeUserArrays gives the user a chance to free that memory with deallocate
statements. This subroutine is called after all initialization steps have been performed; the default
implementation is a stub which does nothing. A customized version for a particular setup may be made by
copying the stub from the Simulation directory and editing it as needed.
3.7  The runtime parameter file (flash.par)
The FLASH executable expects a flash.par file to be present in the run directory, unless another name
for the runtime input file is given as a command-line option. This file contains runtime parameters and
thus provides a mechanism for partially controlling the runtime environment. The names of runtime
parameters are case-insensitive. Copies of flash.par are kept in their respective Simulation directories for
easy distribution.
The flash.par file for the example setup is
# Density, pressure, and velocity on either side of interface
sim_rhoLeft = 1.
sim_rhoRight = 0.125
sim_pLeft   = 1.
sim_pRight  = 0.1
sim_uLeft   = 0.
sim_uRight  = 0.
# Angle and position of interface relative to x and y axes
sim_xangle = 0
sim_yangle = 90.
sim_posn = 0.5
# Gas ratio of specific heats
gamma = 1.4
geometry = cartesian
# Size of computational volume
xmin = 0.
xmax = 1.
ymin = 0.
ymax = 1.
# Boundary conditions
xl_boundary_type = "outflow"
xr_boundary_type = "outflow"
yl_boundary_type = "outflow"
yr_boundary_type = "outflow"
# Simulation (grid, time, I/O) parameters
cfl = 0.8
basenm = "sod_"
restart = .false.
# checkpoint file output parameters
checkpointFileIntervalTime = 0.2
checkpointFileIntervalStep = 0
checkpointFileNumber = 0
# plotfile output parameters
plotfileIntervalTime = 0.
plotfileIntervalStep = 0
plotfileNumber = 0
nend = 1000
tmax = .2
run_comment = "Sod problem, parallel to x-axis"
log_file = "sod.log"
eint_switch = 1.e-4
plot_var_1 = "dens"
plot_var_2 = "pres"
plot_var_3 = "temp"
# AMR refinement parameters
lrefine_max = 6
refine_var_1 = "dens"
# These parameters are used only for the uniform grid
#iGridSize = 8
#defined as nxb * iprocs
#jGridSize = 8
#kGridSize = 1
iProcs = 1 #number of procs in the i direction
jProcs = 1
kProcs = 1
# When using UG, iProcs, jProcs and kProcs must be specified.
# These are the processors along each of the dimensions
#FIXEDBLOCKSIZE mode ::
# When using fixed blocksize, iGridSize etc are redundant in
# runtime parameters. These quantities are calculated as
# iGridSize = NXB*iprocs
# jGridSize = NYB*jprocs
# kGridSize = NZB*kprocs
#NONFIXEDBLOCKSIZE mode ::
# iGridSize etc must be specified. They constitute the global
# number of grid points in the physical domain without taking
# the guard cell into account. The local blocksize is calculated
# as iGridSize/iprocs etc.
In this example, flags are set to start the simulation from scratch and to set the grid geometry, boundary
conditions, and refinement. Parameters are also set for the density, pressure, and velocity values on either
side of the shock, as well as the angles and the point of intersection of the shock plane with the x-axis.
Additional parameters specify details of the run, such as the number of timesteps between various output
files and the initial, minimum, and final values of the timestep. The comments and alternate values at the
end of the file are provided to help configure uniform-grid and variably-sized-array situations.
When creating the flash.par file, another very helpful source of information is the setup_params file,
which is written by the setup script each time a problem is set up. This file lists all possible runtime
parameters and their default values from the Config files, as well as a brief description of each parameter.
It is located in the object/ directory created at setup time.
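The flash.par format itself is simple: one name = value pair per line, with # starting a comment and parameter names matched case-insensitively. The following Python sketch (for illustration only; FLASH's own parser is written in Fortran and handles additional cases) captures the essentials:

```python
def parse_parfile(text):
    """Parse flash.par-style 'name = value' lines; '#' starts a comment."""
    params = {}
    for line in text.splitlines():
        line = line.split('#', 1)[0].strip()   # strip comments and whitespace
        if not line:
            continue                            # skip blank/comment-only lines
        name, _, value = line.partition('=')
        params[name.strip().lower()] = value.strip()  # names are case-insensitive
    return params

sample = """
# Density on either side of the interface
sim_rhoLeft  = 1.
sim_rhoRight = 0.125
Basenm = "sod_"
"""
p = parse_parfile(sample)
print(p['sim_rholeft'], p['basenm'])
```

Note how Basenm and basenm resolve to the same parameter, matching the case-insensitivity rule stated above.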
Figure 3.1 shows the initial distribution of density for the 2-d Sod problem as set up by the example
described in this chapter.
Figure 3.1: Image of the initial distribution of density in example setup.
3.8  Running your simulation
You can run your simulation either in the object directory or in a separate run directory.
Running in the object directory is especially convenient for development and testing. The command for
starting a FLASH simulation may be system dependent, but typically you would type something like
mpirun -np N flash4
or
mpiexec -n N flash4
to start a run on N processing entities. On many systems, you can also simply use
./flash4
to start a run on 1 processing entity, i.e., without process-level parallelisation.
If you want to invoke FLASH in a separate run directory, the best way is to copy the flash4 binary
into the run directory, and then proceed as above. However, before starting FLASH you also need to do the
following:
• Copy a FLASH parfile, normally flash.par, into the run directory.
• If FLASH was configured with Paramesh4.0 in LIBRARY mode (this is not the default): copy the file
amr_runtime_parameters into the run directory. This file should have been generated in the object
directory by running ./setup.
• Some code units use additional data or table files, which are also copied into the object directory by
./setup. (These files typically match one of the patterns *.dat or *_table*.) Copy or move those
files into the run directory, too, if your simulation needs them.
Part II

The FLASH Software System
Chapter 4
Overview of FLASH architecture
The files that make up the FLASH source are organized in the directory structure according to their functionality and grouped into components called units. Throughout this manual, we use the word ‘unit’ to
refer to a group of related files that control a single aspect of a simulation, and that provide the user with an
interface of publicly available functions. FLASH can be viewed as a collection of units, which are selectively
grouped to form one application.
A typical FLASH simulation requires only a subset of all of the units in the FLASH code. When the user
gives the name of the simulation to the setup tool, the tool locates and brings together the units required
by that simulation, using the FLASH Config files (described in Chapter 5) as a guide. Thus, it is important
to distinguish between the entire FLASH source code and a given FLASH application. The FLASH units can
be broadly classified into five functionally distinct categories: infrastructure, physics, monitor, driver,
and simulation.
The infrastructure category encompasses the units responsible for FLASH housekeeping tasks such as
the management of runtime parameters, the handling of input and output to and from the code, and the
administration of the grid, which describes the simulation’s physical domain.
Units in the physics category such as Hydro (hydrodynamics), Eos (equation of state), and Gravity
implement algorithms to solve the equations describing specific physical phenomena.
The monitoring units Logfile, Profiler, and Timers track the progress of an application, while the
Driver unit implements the time advancement methods and manages the interaction between the included
units.
The simulation unit is of particular significance because it defines how a FLASH application will be
built and executed. When the setup script is invoked, it begins by examining the simulation’s Config file,
which specifies the units required for the application, and the simulation-specific runtime parameters. Initial
conditions for the problem are provided in the routines Simulation_init and Simulation_initBlock.
As mentioned in Chapter 3, the Simulation unit allows the user to override any of FLASH's default
function implementations by writing a function with the same name in the application-specific directory.
Additionally, runtime parameters declared in the simulation’s Config file override definitions of same-named
parameters in other FLASH units. These helpful features enable users to customize their applications,
and are described in more detail below in Section 4.1 and online in Architecture Tips. The Simulation
unit also provides some useful interfaces for modifying the behaviour of the application while it is running.
For example, the interface Simulation_adjustEvolution is called at every time step. Most applications
use its null implementation, but a customized implementation can be placed in the Simulation directory
of the application. The API functions of the Simulation unit are unique in that, with the exception of
Simulation_initSpecies, none of them has a general default implementation: at the API level there are
only null implementations, and actual implementations exist only for specific applications. General
implementations of Simulation_initSpecies exist for different classes of applications, such as those
utilizing nuclear burning or ionization.
FLASH Transition
Why the name change from “modules” in FLASH2 to “units” in FLASH3? The term
“module” caused confusion among users and developers because it could refer both to a
FORTRAN90 module and to the FLASH-specific code entity. In order to avoid this problem,
FLASH3 started using the word “module” to refer exclusively to an F90 module, and the
word “unit” for the basic FLASH code component. Also, FLASH no longer uses F90 modules
to implement units. Fortran’s limitation of one file per module is too restrictive for some of
FLASH4’s units, which are too complex to be described by a single file. Instead, FLASH4
uses interface blocks, which enable the code to take advantage of some of the advanced
features of FORTRAN90, such as pointer arguments and optional arguments. Interface
blocks are used throughout the code, even when such advanced features are not called for.
For a given unit, the interface block will be supplied in the file "Unit_interface.F90".
Please note that files containing calls to API-level functions must include the line use Unit,
ONLY: function-name1, function-name2, etc. at the top of the file.
4.1  FLASH Inheritance
FORTRAN90 is not an object-oriented language like Java or C++, and as such does not implement those
languages’ characteristic properties of inheritance. But FLASH takes advantage of the Unix directory structure to implement an inheritance hierarchy of its own. Every child directory in a unit’s hierarchy inherits
all the source code of its parent, thus eliminating duplication of common code. During setup, source files in
child directories override same-named files in the parent or ancestor directories.
Similarly, when the setup tool parses the source tree, it treats each child or subdirectory as inheriting all
of the Config and Makefile files in its parent’s directory. While source files at a given level of the directory
hierarchy override files with the same name at higher levels, Makefiles and configuration files are cumulative.
Since functions can have multiple implementations, the selection for a specific application follows a few
simple rules, applied in the order described in Architecture Tips.
However, we must take care that this special use of the directory structure for inheritance does not
interfere with its traditional use for organization. We avoid any problems by means of a careful naming
convention that allows clear distinction between organizational and namespace directories.
To briefly summarize the convention, which is described in detail online in Architecture Tips, the top
level directory of a unit shares its name with that of the unit, and as such always begins with a capital letter.
Note, however, that the unit directory may not always exist at the top level of the source tree. A class
of units may also be grouped together and placed under an organizational directory for ease of navigation;
organizational directories are given in lower case letters. For example the grid management unit, called
Grid, is the only one in its class, and therefore its path is source/Grid, whereas the hydrodynamics unit,
Hydro, is one of several physics units, and its top level path is source/physics/Hydro. This method for
distinguishing between organizational directories and namespace directories is applied throughout the entire
source tree.
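The override rule can be pictured as a walk from a unit's top directory down to the selected implementation directory, with files in deeper directories shadowing same-named files above them. The following Python sketch uses a hypothetical Grid layout (the real setup script applies the additional rules described in Architecture Tips):

```python
def resolve_sources(levels):
    """levels: (dirname, [filenames]) pairs ordered root -> leaf.
    Files in deeper (later) directories override same-named files in ancestors."""
    chosen = {}
    for dirname, filenames in levels:
        for fname in filenames:
            chosen[fname] = dirname   # later levels shadow earlier ones
    return chosen

# Hypothetical layout: stub at the Grid top level, real file deeper in the tree
levels = [
    ("Grid",                   ["Grid_updateRefinement.F90", "Grid_getCellCoords.F90"]),
    ("Grid/GridMain",          ["Grid_getCellCoords.F90"]),
    ("Grid/GridMain/paramesh", ["Grid_updateRefinement.F90"]),
]
chosen = resolve_sources(levels)
print(chosen["Grid_updateRefinement.F90"])  # Grid/GridMain/paramesh
print(chosen["Grid_getCellCoords.F90"])     # Grid/GridMain
```

The stub at the top level is compiled only when no deeper directory on the selected path supplies a file of the same name.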
4.2  Unit Architecture
A FLASH unit defines its own Application Programming Interface (API), which is a collection of routines
the unit exposes to other units in the code. A unit API is usually a mix of accessor functions and routines
which modify the state of the simulation.
A good example to examine is the Grid unit API. Some of the accessor functions in this unit are
Grid_getCellCoords, Grid_getBlkData, and Grid_putBlkData, while Grid_fillGuardCells and
Grid_updateRefinement are examples of API routines which modify data in the Grid unit.
A unit can have more than one implementation of its API. The Grid Unit, for example, has both an
Adaptive Grid and a Uniform Grid implementation. Although the implementations are different, they
both conform to the Grid API and therefore appear the same to the outside units. This feature allows users
to easily swap various unit implementations in and out of a simulation without affecting the way other
units communicate. Code does not have to be rewritten if the user decides to use the uniform grid
instead of the adaptive grid.
4.2.1  Stub Implementations
Since routines can have multiple implementations, the setup script must select the appropriate implementation for an application. The selection follows a few simple rules described in Architecture Tips. The top
directory of every unit contains a stub or null implementation of each routine in the Unit’s API. The stub
functions essentially do nothing. They are coded with just the declarations to provide the same interface to
callers as a corresponding “real” implementation. They act as function prototypes for the unit. Unlike true
prototypes, however, the stub functions assign default values to the output-only arguments, while leaving
the other arguments unaltered. The following snippet shows a simplified example of a stub implementation
for the routine Grid_getListOfBlocks.
subroutine Grid_getListOfBlocks(blockType, listOfBlocks, count)
implicit none
integer, intent(in) :: blockType
integer,dimension(*),intent(out) :: listOfBlocks
integer, intent(out) :: count
count=0
listOfBlocks(1)=0
return
end subroutine Grid_getListOfBlocks
While a set of null implementation routines at the top level of a unit may seem like an unnecessary added
layer, this arrangement allows FLASH to include or exclude units without the need to modify any existing
code. If a unit is not included in a simulation, the application will be built with its stub functions. Similarly,
if a specific implementation of the unit finds some of the API functions irrelevant, it need not provide
any implementations for them. In those situations, the applications include stubs for the unimplemented
functions, and full implementations of all the other ones. Since the stub functions do return valid values
when called, unexpected crashes from un-initialized output arguments are avoided.
The Grid_updateRefinement routine is a good example of how stub functions can be useful. In a simulation using an adaptive grid, such as PARAMESH, the routine Driver_evolveFlash calls Grid_updateRefinement to adapt the grid's refinement. The Uniform Grid, however, needs no such routine because its grid is fixed. Nevertheless, no error occurs when Driver_evolveFlash calls Grid_updateRefinement during a Uniform Grid simulation, because the stub routine steps in and simply returns without doing anything. Thus the stub layer allows the same Driver_evolveFlash routine to work with both the Adaptive Grid and Uniform Grid implementations.
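For comparison with the stub shown above, a do-nothing version of Grid_updateRefinement might look like the following sketch. The argument list here is simplified for illustration; consult the online API documentation for the exact interface.

```fortran
!! Sketch of a stub for Grid_updateRefinement (argument list
!! simplified for illustration).  A Uniform Grid build links this
!! do-nothing version; PARAMESH builds provide a real one.
subroutine Grid_updateRefinement(nstep, time, gridChanged)
  implicit none
  integer, intent(in)            :: nstep
  real,    intent(in)            :: time
  logical, intent(out), optional :: gridChanged

  ! As with all stubs, output-only arguments still receive
  ! valid default values before the routine returns.
  if (present(gridChanged)) gridChanged = .false.

  return
end subroutine Grid_updateRefinement
```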
FLASH Transition
While the concept of “null” or “stub” functions existed in FLASH2, FLASH3 formalized it
by requiring all units to publish their API (the complete Public Interface) at the top level of
a unit’s directory. Similarly, the inheritance through Unix directory structure in FLASH4 is
essentially the same as that of FLASH2, but the introduction of a formal naming convention
has clarified it and made it easier to follow. The complete API can be found online at
http://flash.uchicago.edu/site/flashcode/user_support/.
38
CHAPTER 4. OVERVIEW OF FLASH ARCHITECTURE
4.2.2 Subunits
One or more subunits sit under the top level of a unit; together they implement the unit's complete API. The subunits are considered peers to one another. Each subunit must implement at least one API function, and no two subunits can implement the same API function. The division of a unit into subunits is based upon identifying self-contained subsets of its API. In some instances, a subunit may be completely excluded from a simulation, thereby saving computational resources. For example, the Grid unit API includes a few functions that are specific to Lagrangian tracer particles and are therefore unnecessary in simulations that do not utilize particles. By placing these routines in the GridParticles subunit, it is possible to exclude them from a simulation easily. The subunits have composite names: the first part is the unit name, and the second part represents the functionality that the subunit implements. The primary subunit, which every unit must have, is named UnitMain. For example, the main subunit of the Hydro unit is HydroMain and that of the Eos unit is EosMain.
In addition to the subunits, the top-level unit directory may contain a subdirectory called localAPI. This subdirectory allows a subunit to publish an interface to the other subunits within its own unit; all stub implementations of these subunit-level interfaces are placed in localAPI. External units should not call routines listed in the localAPI; for this reason these local interfaces are not shown in the general source API tree.
A subunit can have a hierarchy of its own. It may have more than one implementation directory with alternative implementations of some of its functions, while other functions may be common between them; FLASH exploits the inheritance rules described in Architecture Tips. For example, the Grid unit has three implementations of GridMain: the Uniform Grid (UG), PARAMESH 2, and PARAMESH 4. The procedures that apply boundary conditions are common to all three implementations, and are therefore placed directly in GridMain. In addition, GridMain contains two subdirectories. One is UG, which has all the remaining implementations of the API specific to the Uniform Grid. The other is paramesh, which in turn contains the PARAMESH 2 package directory and another organizational directory, paramesh4. Finally, paramesh4 has two subdirectories with alternative implementations of the PARAMESH 4 package. The directory paramesh also contains all the function implementations that are common to PARAMESH 2 and PARAMESH 4. Following the naming convention described in Architecture Tips, paramesh is all lowercase, since it has child directories that contain some API implementation. The namespace directories Paramesh2, Paramesh4.0 and Paramesh4dev contain functions unique to each implementation. An example of a unit hierarchy is shown in Figure 4.1. The kernels are described below in Section 4.2.4.
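In plain-text form, the GridMain hierarchy just described looks roughly like this (annotations added for orientation; see Figure 4.1 for the complete picture):

```
Grid/
  GridMain/            boundary-condition routines common to all three
    UG/                Uniform Grid implementation of the remaining API
    paramesh/          functions common to PARAMESH 2 and PARAMESH 4
      Paramesh2/       PARAMESH 2 package
      paramesh4/       organizational directory
        Paramesh4.0/   one PARAMESH 4 package implementation
        Paramesh4dev/  alternative PARAMESH 4 implementation
```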
4.2.3 Unit Data Modules, _init, and _finalize routines
Each unit must have an F90 data module to store its unit-scope local data and a Unit_init routine to initialize it. The Unit_init routines are each called once by the Driver unit's routine Driver_initFlash at the start of a simulation. They get unit-specific runtime parameters from the RuntimeParameters unit and store them in the unit's data module.
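As a hedged sketch of this pattern (Unit, Unit_data, and the parameter names below are placeholders, not actual FLASH identifiers), a typical Unit_init does little more than fetch runtime parameters and cache them:

```fortran
!! Hypothetical example of a Unit_init routine: fetch the unit's
!! runtime parameters once at startup and cache them in the
!! unit-scope data module for later use by the unit's routines.
subroutine Unit_init()
  use Unit_data, ONLY : un_someParameter, un_verbosity
  use RuntimeParameters_interface, ONLY : RuntimeParameters_get
  implicit none

  ! Store the values in the unit's private data module.
  call RuntimeParameters_get("un_someParameter", un_someParameter)
  call RuntimeParameters_get("un_verbosity",     un_verbosity)

end subroutine Unit_init
```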
Every unit implementation directory under UnitMain must either inherit a Unit_data module or have its own. There is no restriction on additional unit-scope data modules, and individual units determine how best to manage their data. Other subunits and the underlying computational kernels can have their own data modules, but developers are encouraged to keep these data modules local to their subunits and kernels for clarity and maintainability of the code. It is strongly recommended that only the data modules in the Main subunit be accessible everywhere in the unit. However, no data module of a unit may be known to any other unit. This restriction is imposed to keep the units encapsulated and their data private. If another part of the code needs access to any of the unit's data, it must do so through accessor functions.
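For illustration only (the unit, module, and variable names here are hypothetical), such an accessor is a thin public routine that hands out a copy of the private datum:

```fortran
!! Hypothetical accessor function: other units obtain a unit's
!! private data through the public API rather than by using the
!! unit's data module directly.
subroutine Unit_getSomeData(someData)
  use Unit_data, ONLY : un_someData
  implicit none
  real, intent(out) :: someData

  ! Return a copy; the caller never sees the data module itself.
  someData = un_someData
end subroutine Unit_getSomeData
```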
Additionally, when routines use data from the unit's data module, the convention is to indicate the particular data being used with the ONLY keyword, as in use Unit_data, ONLY : un_someData. See the snippet of code below for the correct convention for using data from a unit's FORTRAN data module.
subroutine Driver_evolveFlash()
  use Driver_data, ONLY: dr_myPE, dr_numProcs, dr_nbegin, &
                         dr_nend, dr_dt, dr_wallClockTimeLimit, &
                         dr_tmax, dr_simTime, dr_redshift, &
                         dr_nstep, dr_dtOld, dr_dtNew, dr_restart, dr_elapsedWCTime
  implicit none
  integer :: localNumBlocks

Figure 4.1: The unit hierarchy and inheritance.
Each unit must also have a Unit_finalize routine to clean up the unit at the termination of a FLASH
run. The finalization routines might deallocate space or write out completion messages.
4.2.4 Private Routines: kernels and helpers
All routines in a unit that do not implement the API are classified as private routines. They are divided into two broad categories: the kernel is the collection of routines that implement the unit's core functionality and solvers, while helper routines are supplemental to the unit's API and sometimes act as a conduit to its kernel. A helper function is allowed to know other units' APIs but is itself known only locally within the unit. The concept of helper functions allows minimization of the unit APIs, which assists in code maintenance. The helper functions follow the convention of starting their names with "un_", where "un" is in some way derived from the unit name. For example, the helper functions of the Grid unit start with gr_, and those of the Hydro unit start with hy_. The helper functions have access to the unit's data module, and they are also allowed to query other units, through their accessor functions, for the information needed by the kernel. If the kernel has very specific data structures, the helper functions can also populate them with the collected information. An example of a helper function is gr_expandDomain, which refines an AMR block. After refinement, the equation of state usually needs to be called, so the routine accesses the Eos routines via Eos_wrapped.
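As a purely illustrative sketch of the convention (this routine does not exist in FLASH), a Grid helper might look like the following:

```fortran
!! Illustrative (hypothetical) Grid helper.  The gr_ prefix marks
!! it as private to the Grid unit; it may read the unit's data
!! module, but it calls other units only through their public APIs.
subroutine gr_collectKernelData(blockID, buffer)
  use Grid_data, ONLY : gr_geometry
  implicit none
  integer, intent(in)  :: blockID
  real,    intent(out) :: buffer(:)

  ! Populate the kernel-specific data structure for this block,
  ! e.g. after querying other units via their accessor functions.
  buffer = 0.0
end subroutine gr_collectKernelData
```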
The concept of kernels, on the other hand, facilitates easy import of third-party solvers and software into FLASH. The kernels are not required to follow either the naming convention or the inheritance rules of the FLASH architecture. They can have their own hierarchy and data modules, and the top level of the kernel typically resides at the leaf level of the FLASH unit hierarchy. This arrangement allows FLASH to import a solver without having to modify its internal code, since the API and helper functions hide the higher-level details from it, and hide its details from other units. However, developers are encouraged to follow the helper function naming convention in the kernel where possible to ease code maintenance.
The Grid unit and the Hydro unit both provide very good examples of private routines that are clearly distinguishable as helper functions or kernel. The AMR version of the Grid unit imports the PARAMESH version 2 library as a vendor-supplied branch in our repository. It sits under the lowest namespace directory, Paramesh2, in the Grid hierarchy and maintains the library's original structure. All other private functions in the paramesh branch of Grid are helper functions, and their names start with gr_. In the Hydro unit, the entire hydrodynamic solver resides under the directory PPM, which was imported from the PROMETHEUS code (see Section 14.1.2). PPM is a directional solver and requires that data be passed to it in vector form. Routines like hy_sweep and hy_block are helper functions that collect data from the Grid unit and put it in the format required by PPM. These routines also make sure that data stay in thermodynamic equilibrium through calls to the Eos unit. Neither PARAMESH 2 nor PPM has any knowledge of units outside its own.
4.3 Unit Test Framework
In keeping with good software practice, FLASH4 incorporates a unit test framework that allows for rigorous testing and easy isolation of errors. The components of a unit test show up in two different places in the FLASH source tree. One is a dedicated path in the Simulation unit, Simulation/SimulationMain/unitTest/UnitTestName, where UnitTestName is the name of a specific unit test. The other place is a subdirectory called unitTest, somewhere in the hierarchy of the corresponding unit, which implements a function Unit_unitTest and any helper functions it may need. The primary reason for organizing unit tests in this somewhat confusing way is that unit tests are special cases of simulation setups that also need extensive access to internal data of the unit being tested. By splitting the unit test into two places, it is possible to meet both requirements without violating unit encapsulation. We illustrate the functioning of the unit test framework with the unit test of the Eos unit; for more details please see Section 16.6. The Eos unit test needs its own version of the routine Driver_evolveFlash, which makes a call to its Eos_unitTest routine. The initial conditions specification and the unit-test-specific Driver_evolveFlash are placed in Simulation/SimulationMain/unitTest/Eos, since the Simulation unit allows any substitute FLASH function to be placed in the specific simulation directory. The function Eos_unitTest resides in physics/Eos/unitTest, and therefore has access to all internal Eos data structures and helper functions.
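Schematically, the two halves fit together as follows; the routine body and argument list here are simplified, not the actual FLASH source:

```fortran
!! Schematic version of the substitute test driver placed in
!! Simulation/SimulationMain/unitTest/Eos.  All the Eos-internal
!! checking lives in Eos_unitTest (under physics/Eos/unitTest),
!! which has access to the unit's private data.
subroutine Driver_evolveFlash()
  implicit none
  logical :: perfect

  ! Delegate the actual verification to the unit's own routine.
  call Eos_unitTest(perfect)
  if (.not. perfect) then
     call Driver_abortFlash("Eos unit test failed")
  end if

end subroutine Driver_evolveFlash
```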
Chapter 5
The FLASH configuration script (setup)
The setup script, found in the FLASH root directory, provides the primary command-line interface to
configuring the FLASH source code. It is important to remember that the FLASH code is not a single
application, but a set of independent code units which can be put together in various combinations to create
a multitude of different simulations. It is through the setup script that the user controls how the various
units are assembled.
The primary job of the setup script is to
• traverse the FLASH source tree and link necessary files for a given application to the object/ directory,
• find the target Makefile.h for a given machine,
• generate the Makefile that will build the FLASH executable,
• generate the files needed to add runtime parameters to a given simulation, and
• generate the files needed to parse the runtime parameter file.
More description of how setup and the FLASH4 architecture interact may be found in Chapter 4. Here
we describe its usage.
The setup script determines site-dependent configuration information by looking for a directory sites/<hostname>, where <hostname> is the hostname of the machine on which FLASH is running.1 Failing this, it looks in sites/Prototypes/ for a directory with the same name as the output of the uname command. The site and operating system type can be overridden with the -site and -ostype command-line options to the setup command; only one of these options can be used at a time. The directory for each site and operating system type contains a makefile fragment Makefile.h that sets command names, compiler flags, library paths, and any replacement or additional source files needed to compile FLASH for that specific machine and machine type.
The setup script uses the contents of the problem directory and the site/OS type, together with a Units file, to generate the object/ directory, which contains links to the appropriate source files and makefile fragments. The Units file lists the names of all units that need to be included when building the FLASH application. This file is generated automatically when the user provides the commonly used -auto command-line option, although it may also be assembled by hand. When the -auto option is used, the setup script starts with the Config file of the specified problem, finds its REQUIRED units, and then works its way through their Config files. This process continues until all the dependencies are met and a self-consistent set of units has been found. At the end of this automatic generation, the Units file is created and placed in the object/ directory, where it can be edited if necessary. setup also creates the master makefile (object/Makefile) and several FORTRAN include files that are needed by the code in order to parse the runtime parameters. After running setup, the user can create the FLASH executable by running gmake in the object directory.
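Putting these steps together, a typical configure-and-build session (using the Sod problem shipped with FLASH as an example, and assuming the commands are issued from the FLASH root directory) might look like:

```shell
# Resolve unit dependencies automatically, generating
# object/Units, object/Makefile, and the include files.
./setup Sod -auto

# Build the FLASH executable from the generated Makefile.
cd object
gmake
```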
1 If a machine has multiple hostnames, setup tries them all.
FLASH Transition
In FLASH2, the Units file was located in the FLASH root directory. In FLASH4, this file
is found in the object/ directory.
Save some typing
• All the setup options can be shortened to unambiguous prefixes, e.g., instead of ./setup -auto one can just say ./setup -a, since there is only one setup option starting with "a".
• The same abbreviation holds for the problem name as well: ./setup -a IsentropicVortex can be abbreviated to ./setup -a Isen, assuming that IsentropicVortex is the only problem name which starts with Isen.
• Unit names are usually specified by their paths relative to the source directory. However, setup also allows unit names to be prefixed with an extra “source/”, allowing
you to use the TAB-completion features of your shell like this
./setup -a Isen -unit=source/IO/IOMain/hdf5
• If you use a set of options repeatedly, you can define a shortcut for them. FLASH4
comes with a number of predefined shortcuts that significantly simplify the setup
line, particularly when trying to match the Grid with a specific I/O implementation.
For more details on creating shortcuts see Section 5.3. For detailed examples of I/O shortcuts please see Section 9.1 in the I/O chapter.
Reduce compilation time
• To reuse compiled code when changing setup configurations, use the -noclobber setup
option. For details see Section 5.2.
5.1 Setup Arguments
The setup script accepts a large number of command line arguments which affect the simulation in various
ways. These arguments are divided into three categories:
1. Setup Options (example: -auto) begin with a dash and are built into the setup script itself. Many of
the most commonly used arguments are setup options.
2. Setup Variables (example: species=air,h2o) are defined by individual units. When writing a Config
file for any unit, you can define a setup variable. Section 5.4 explains how setup variables can be
created and used.
3. Setup Shortcuts (example: +ug) begin with a plus symbol and are essentially macros which automatically include a set of setup variables and/or setup options. New setup shortcuts can be easily defined,
see Section 5.3 for more information.
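The three kinds of arguments can be mixed freely on one command line. For example, combining the samples above (and assuming a problem setup named Sod):

```shell
# A setup option (-auto), a shortcut (+ug), and a setup
# variable (species=air,h2o) on the same setup line.
./setup Sod -auto +ug species=air,h2o
```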
Table 5.1 shows a list of some of the basic setup arguments that every FLASH user should know about.
A comprehensive list of all setup arguments can be found in Section 5.2 alongside more detailed descriptions
of these options.
Table 5.1: List of Commonly Used Setup Arguments

Argument         Description
-auto            this option should almost always be set
-unit=<unit>     include a specified unit
-objdir=<dir>    specify a different object directory location
-debug           compile for debugging
-opt             enable compiler optimization
-n[xyz]b=#       specify block size in each direction
-maxblocks=#     specify maximum number of blocks per process
-[123]d          specify number of dimensions
+cartesian       use Cartesian geometry
+cylindrical     use cylindrical geometry
+polar           use polar geometry
+spherical       use spherical geometry
+noio            disable IO
+ug              use the uniform grid in a fixed block size mode
+nofbs           use the uniform grid in a non-fixed block size mode
+pm2             use the PARAMESH2 grid
+pm40            use the PARAMESH4.0 grid
+pm4dev          use the PARAMESH4DEV grid
+uhd             use the Unsplit Hydro solver
+usm             use the Unsplit Staggered Mesh MHD solver
+splitHydro      use a split Hydro solver

5.2 Comprehensive List of Setup Arguments
-verbose=<verbosity>
Normally setup prints summary messages indicating its progress. Use -verbose to make the messages more or less verbose. The different levels, in order of increasing verbosity, are ERROR, IMPINFO, WARN, INFO, DEBUG. The default is WARN.
-auto
Normally, setup requires that the user supply a plain text file called Units (in the object directory2) that specifies the units to include. A sample Units file appears in Figure 5.1. Each line is either a comment (preceded by a hash mark (#)) or an include statement of the form INCLUDE unit. Specific implementations of a unit may be selected by specifying the complete path to the implementation in question; if no specific implementation is requested, setup picks the default listed in the unit's Config file.
2 Formerly (in FLASH2) it was located in the FLASH root directory.
The -auto option enables setup to generate a "rough draft" of a Units file for the user. The Config file for each problem setup specifies its requirements in terms of other units it requires. For example, a problem may require the perfect-gas equation of state (physics/Eos/EosMain/Gamma) and an unspecified hydro solver (physics/Hydro). With -auto, setup creates a Units file by converting these requirements into unit include statements. Most users configuring a problem for the first time will want to run setup with -auto to generate a Units file and then edit it directly to specify alternate implementations of certain units. After editing the Units file, the user must re-run setup without -auto in order to incorporate his/her changes into the code configuration. The user may also use the command-line option -with-unit=<unit> in conjunction with the -auto option in order to pick a specific implementation of a unit, and thus eliminate the need to hand-edit the Units file.
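For example, the following invocation picks the Gamma implementation of the Eos unit while letting -auto resolve everything else (problem name taken from the examples in this chapter):

```shell
# Pin a specific Eos implementation during automatic setup.
./setup Sod -auto -with-unit=physics/Eos/EosMain/Gamma
```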
-[123]d
By default, setup creates a makefile which produces a FLASH executable capable of solving
two-dimensional problems (equivalent to -2d). To generate a makefile with options appropriate
to three-dimensional problems, use -3d. To generate a one-dimensional code, use -1d. These
options are mutually exclusive and cause setup to add the appropriate compilation option to the
makefile it generates.
-maxblocks=#
This option is also used by setup in constructing the makefile compiler options. It determines the amount of memory allocated at runtime to the adaptive mesh refinement (AMR) block data structure. For example, to allocate enough memory on each processor for 500 blocks, use -maxblocks=500. If the default block buffer size is too large for your system, you may wish to try a smaller number here; the default value depends upon the dimensionality of the simulation and the grid type. Alternatively, you may wish to experiment with larger buffer sizes if your system has enough memory. A common cause of aborted simulations is the AMR grid trying to create more than maxblocks blocks during refinement; re-setup the simulation using a larger value of this option.
-nxb=# -nyb=# -nzb=#
These options are used by setup in constructing the makefile compiler options. The mesh on
which the problem is solved is composed of blocks, and each block contains some number of cells.
The -nxb, -nyb, and -nzb options determine how many cells each block contains (not counting
guard cells). The default value for each is 8. These options do not have any effect when running
in Uniform Grid non-fixed block size mode.
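For instance, the following sketch (the Sedov problem name is illustrative) configures a three-dimensional run with 16 cells per block in each direction and room for up to 1000 blocks per process:

```shell
# Dimensionality, block size, and block buffer on one setup line.
./setup Sedov -auto -3d -nxb=16 -nyb=16 -nzb=16 -maxblocks=1000
```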
[-debug|-opt|-test]
The default Makefile built by setup will use the optimized setting (-opt) for compilation and linking. Using -debug will force setup to use the flags relevant for debugging (e.g., including -g in the compilation line). The user may use the option -test to experiment with different combinations of compiler and linker options. Exactly which compiler and linker options are associated with each of these flags is specified in sites/<hostname>/Makefile*, where <hostname> is the hostname of the machine on which FLASH is running.
For example, to tell an Intel Fortran compiler to use real numbers of size 64 when the -test
option is specified, the user might add the following line to his/her Makefile.h:
FFLAGS_TEST = -real_size 64
-objdir=<dir>
Overrides the default object directory with <dir>. Using this option allows you to have different simulations configured simultaneously in the FLASH4 distribution directory.
-with-unit=<unit>, -unit=<unit>
Use the specified unit <unit> in setting up the problem.
#Units file for Sod generated by setup
INCLUDE Driver/DriverMain/Split
INCLUDE Grid/GridBoundaryConditions
INCLUDE Grid/GridMain/paramesh/interpolation/Paramesh4/prolong
INCLUDE Grid/GridMain/paramesh/interpolation/prolong
INCLUDE Grid/GridMain/paramesh/paramesh4/Paramesh4.0/PM4_package/headers
INCLUDE Grid/GridMain/paramesh/paramesh4/Paramesh4.0/PM4_package/mpi_source
INCLUDE Grid/GridMain/paramesh/paramesh4/Paramesh4.0/PM4_package/source
INCLUDE Grid/GridMain/paramesh/paramesh4/Paramesh4.0/PM4_package/utilities/multigrid
INCLUDE Grid/localAPI
INCLUDE IO/IOMain/hdf5/serial/PM
INCLUDE IO/localAPI
INCLUDE PhysicalConstants/PhysicalConstantsMain
INCLUDE RuntimeParameters/RuntimeParametersMain
INCLUDE Simulation/SimulationMain/Sod
INCLUDE flashUtilities/contiguousConversion
INCLUDE flashUtilities/general
INCLUDE flashUtilities/interpolation/oneDim
INCLUDE flashUtilities/nameValueLL
INCLUDE monitors/Logfile/LogfileMain
INCLUDE monitors/Timers/TimersMain/MPINative
INCLUDE physics/Eos/EosMain/Gamma
INCLUDE physics/Hydro/HydroMain/split/PPM/PPMKernel

Figure 5.1: Example of the Units file used by setup to determine which Units to include
-curvilinear
Enable code in PARAMESH 4 that implements geometrically correct data restriction for curvilinear coordinates. This setting is automatically enabled if a non-Cartesian geometry is chosen with the -geometry flag, so specifying -curvilinear explicitly only has an effect in the Cartesian case.
-defines=<def1>[,<def2>]...
<def> is of the form SYMBOL or SYMBOL=value. This causes the specified pre-processor symbols to be defined when the code is being compiled. This is mainly useful for debugging the code. For example, -defines=DEBUG_ALL turns on all debugging messages. Each unit may have its own DEBUG_UNIT flag which you can selectively turn on.
[-fbs|-nofbs]
Causes the code to be compiled in fixed-block or non-fixed-block size mode. Fixed-block mode is
the default. In non-fixed block size mode, all storage space is allocated at runtime. This mode is
available only with Uniform Grid.
-geometry=<geometry>
Choose one of the supported geometries cartesian, cylindrical, spherical, or polar. Some Grid implementations require the geometry to be known at compile-time, while others do not. This setup option can be used in either case; it is a good idea to specify the geometry here if it is known at setup-time. Choosing a non-Cartesian geometry here automatically sets the -gridinterpolation=monotonic option below.
-gridinterpolation=<scheme>
Select a scheme for Grid interpolation. Two schemes are currently supported:
• monotonic
This scheme attempts to ensure that monotonicity is preserved in interpolation, so that interpolation does not introduce small-scale non-monotonicity in the data. The monotonic scheme is required for curvilinear coordinates and is automatically enabled if a non-Cartesian geometry is chosen with the -geometry flag. For AMR Grid implementations, this flag will automatically add additional directories so that appropriate data interpolation methods are compiled in. The monotonic scheme is the default (by way of the +default shortcut), unlike in FLASH2.
• native
Enable the interpolation that is native to the AMR Grid implementation (PARAMESH 2 or
PARAMESH 4) by default. This option is only appropriate for Cartesian geometries.
Change in Interpolation
Note that the default interpolation behavior has changed as of the FLASH3 beta release:
the native interpolation used to be default.
When to use native Grid interpolation
The monotonic interpolation method requires more layers of coarse guard cells next to a
coarse guard cell in which interpolation is to be applied. It may therefore be necessary to
use the native method if a simulation is set up to include fewer than four layers of guard
cells.
-makefile=<extension>
setup normally uses the Makefile.h from the directory determined by the hostname of the machine and the -site and -os options. If you have multiple compilers on your machine, you can create a Makefile.h.<extension> for each compiler; e.g., you can have Makefile.h, Makefile.h.intel, and Makefile.h.lahey for three different compilers. setup will still use the Makefile.h file by default, but supplying -makefile=intel on the command line causes setup to use Makefile.h.intel instead.
-index-reorder
Instructs setup that indexing of unk and related arrays should be changed. This may be needed
in FLASH4 for compatibility with alternative grids. This is supported by both the Uniform Grid
as well as PARAMESH, and is currently required for the Chombo grid.
-makehide
Ordinarily, the commands being executed during compilation of the FLASH executable are sent to
standard out. It may be that you find this distracting, or that your terminal is not able to handle
these long lines of display. Using the option -makehide causes setup to generate a Makefile so
that gmake only displays the names of the files being compiled and not the exact compiler call
and flags. This information remains available in setup flags in the object/ directory.
-noclobber
setup normally removes all code in the object directory before linking in files for a simulation.
The ensuing gmake must therefore compile all source files anew each time setup is run. The
-noclobber option prevents setup from removing compiled code which has not changed from the
previous setup in the same directory. This can speed up the gmake process significantly.
-os=<ostype>
If setup is unable to find a correct sites/<hostname> directory, it picks the Makefile based on the operating system. This option instructs setup to use the default Makefile corresponding to the specified operating system.
-parfile=<filename>
This causes setup to copy the specified runtime-parameters file in the simulation directory to the object directory with the new name flash.par.
-particlemethods=TYPE=<particletype>[,INIT=<initmethod>][,MAP=<mapmethod>]