The Abdus Salam
International Centre for Theoretical Physics
Strada Costiera, 11 I - 34151 Trieste, Italy
Earth System Physics Section - ESP
Regional Climatic Model RegCM
User’s Guide
Version 4.5
Trieste, Italy May 23, 2016
Filippo Giorgi, Fabien Solmon, Graziano Giuliani
Contents
1 Release Notes
2 Obtaining the model
2.1 Simple Model User
2.2 Model Developer
3 Installation procedure
3.1 Software requirements
3.2 Configuring build
3.2.1 Model configuration at build stage
3.3 Build the model executables
4 Accessing global datasets
4.1 Global dataset directory Layout
4.2 Static Surface Dataset
4.3 CLM Dataset
4.4 CLM 4.5 Dataset
4.5 Sea Surface Temperature
4.6 Atmosphere and Land temperature Global Dataset
5 Running a test simulation using the model
5.1 Setting up the run environment
5.2 Create the DOMAIN file using terrain
5.3 Create the SST using the sst program
5.4 Create the ICBC files using the icbc program
5.5 First RegCM model simulation
6 Localizing the model and running your simulation
6.1 The commented namelist
6.1.1 dimparam namelist
6.1.2 coreparam namelist
6.1.3 geoparam namelist
6.1.4 terrainparam namelist
6.1.5 debugparam namelist
6.1.6 boundaryparam namelist
6.1.7 globdatparam namelist
6.1.8 fnestparam namelist
6.1.9 perturbparam namelist
6.1.10 restartparam namelist
6.1.11 timeparam namelist
6.1.12 outparam namelist
6.1.13 physicsparam namelist
6.1.14 nonhydroparam namelist
6.1.15 subexparam namelist
6.1.16 microparam namelist
6.1.17 grellparam, emanparam, tiedtkeparam and kfparam namelists
6.1.18 holtslagparam namelist
6.1.19 uwparam namelist
6.1.20 slabocparam namelist
6.1.21 tweakparam namelist
6.1.22 rrtmparam namelist
6.1.23 chemparam namelist
6.2 The CLM options
6.3 The CLM4.5 options
6.4 Sensitivity experiments hint
7 Postprocessing tools
7.1 Command line tools
7.1.1 netCDF library tools
7.1.2 NetCDF operators NCO
7.1.3 Climate data Operators CDO
7.2 GrADS program
7.2.1 GrADS limits
7.3 CISL's NCL : NCAR Command Language
7.4 R Statistical Computing Language
7.5 Non free tools
8 Getting help and reporting bugs
8.1 The Gforge site
9 Appendices
9.1 Identify Processor
9.2 Chose compiler
9.3 Environment setup
9.4 Pre requisite library installation
Chapter 1
Release Notes
The Regional Climate Model developed at ICTP has now reached its 4.5 release. The code base is actively developed by a community of developers internal and external to ICTP, and this work is merged on the Gforge site at gforge.ictp.it.
The main new technical features of the release 4.5 are summarized in the following points:
• The MM5 non-hydrostatic dynamical core has been ported to the ICTP RegCM without removing the existing hydrostatic core. The user can now select in the input namelist which of the two dynamical cores to use.
• The SAV file strategy has changed to allow a better integration of RegCM into the RegESM framework, and the user is required to update the corresponding settings in namelist files coming from previous versions.
• The source for the topography is now the GMTED dataset, which is an update of the USGS GTOPO dataset.
• The DUST 12 bin option has been added to the aerosol simulation types.
• The chemistry/aerosol option now works along with the CLM4.5 option.
• The model supports soil moisture initialization from a previous run for both the BATS and the CLM4.5 surface models.
Bug Fixing:
• Bugs present in the CN schemes for the CLM4.5 implemented in RegCM have been fixed.
• A problematic interaction between the UW PBL and the new microphysics has been fixed.
Next release V 5.X:
• New dynamical core from Giovanni Tumolo: a semi-implicit, semi-Lagrangian, p-adaptive Discontinuous Galerkin method three-dimensional model.
The model code is in Fortran 2003 ANSI standard. The development is done on Linux boxes, and the model is known to run on IBM AIX™ and MacOS™ platforms. No porting effort has been done towards non Unix-like Operating Systems. For this User Guide we will assume that the reference platform is a recent Linux distribution with a bash shell. The typographical conventions are the following:
Table 1.1: Conventions
$> normal shell prompt
#> root shell prompt
$SHELL_VARIABLE a shell variable
Any shell variable is supposed to be set by the User with the following example syntax:
$> export REGCM_ROOT="/home/user/RegCM-4.5.0"
We hope you will find this document useful. Any error found can be reported to the RegCNET to be corrected in future revisions. Enjoy.
Chapter 2
Obtaining the model
2.1 Simple Model User
A packed archive file with the model code can be downloaded from:
http://gforge.ictp.it/gf/project/regcm/frs
and it can be later on decompressed and unpacked using:
$> tar -zxvf RegCM-4.5.0.tar.gz
2.2 Model Developer
If you plan to become a model developer, the source code can be obtained via svn. The RegCM team strongly encourages contributing developers to enroll on the gforge site, to always be up to date and to check on-line all the news about the package.
https://gforge.ictp.it/gf/project/regcm
The correct procedure is first to register on the G-forge site, then ask the ICTP scientific team head Filippo
Giorgi to be enrolled as a model developer. After being officially granted the status, you will gain access to the
model subversion repository.
Check that Subversion software is installed on your machine typing the following command:
$> svn --version
If your system answers command not found, refer to your System Administrator or software installation
manual of your OS to install the subversion software. As an example, on Scientific Linux the command to install
it as root is:
#> yum install subversion
If Subversion is installed, after enrolling just type the following command:
$> svn checkout https://gforge.ictp.it/svn/regcm/branches/regcm-core
To setup the model code from the SVN for your system, run the bootstrap.sh script:
$> ./bootstrap.sh
The system must have the autoconf, automake and libtool programs installed.
Chapter 3
Installation procedure
Whatever method is chosen to download the code, we assume that you now have in your working directory a new directory named RegCM-4.5.x. For the rest of this guide, that directory will be referred to as $REGCM_ROOT.
All the operations to build the model binaries will be performed in this directory.
3.1 Software requirements
In order to configure and install the RegCM code, the following software packages are needed:
1. GNU Make program
2. Fortran 2003 compiler
3. One of:
(a) C compiler for the serial option (enable-mpi-serial at configure)
(b) MPI2 Message Passing Library compiled with the above Fortran compiler, for parallel runs on multi-core single machines or clusters of machines (default). Source code for the implementation the code was tested with can be obtained at:
http://www.open-mpi.org
4. netCDF (Rew and Davis (1990)) Format I/O library compiled with the above Fortran compiler with netCDF4
format support. Source code can be found from
ftp://ftp.unidata.ucar.edu/pub/netcdf
Strongly suggested optional requirements are:
1. NCO netCDF Operators for managing netCDF files. Most Linux distributions already package this, and you should refer to your System Administrator or OS Software Installation manual to obtain it. Source code is at:
http://nco.sourceforge.net/src
2. CDO Climate Data Operators for managing netCDF files. Most Linux distributions already package this, and you should refer to your System Administrator or OS Software Installation manual to obtain it. Source code is at:
https://code.zmaw.de/projects/cdo/files
3. A Scientific Plotting and Data Analysis Software such as:
IGES GrADS 2.0 Graphical Analysis and Display System. Convenient helpers are packed in RegCM
to use GrADS with RegCM netCDF output files. Binaries and source code can be obtained from
http://www.iges.org/grads/downloads.html
NCL, NCAR CISL Command Language. The NCL can read netCDF output files, and sample scripts
can be found in the Tools/Scripts/NCL directory. Binaries and source code can be obtained from
http://www.ncl.ucar.edu
4. A quick viewer for netCDF files like NcView:
http://meteora.ucsd.edu/~pierce/ncview_home_page.html
A script is shipped in the RegCM codebase in the Tools directory to help users compile and install the required packages; its usage is detailed in chapter 9.
3.2 Configuring build
The RegCM Version 4.5 is configured by a configure script, which will select the known working configuration for
the supported architectures.
Currently tested and supported configurations (OS/Compiler) are:
1. Linux with GNU gfortran compiler version 4.6
2. Linux with Intel™ ifort compiler version 12.0
3. Linux with Portland™ pgf95 compiler version 12.0
4. Mac OsX™ with gfortran compiler 4.6 from MacPorts
5. IBM AIX™ with xlf2003 compiler
The 4.5 version of the RegCM model relies on the standard GNU autotools to configure and build the model
code.
The first step is to change working directory to $REGCM_ROOT and run the configure script giving as arguments
the chosen compilers:
$> cd $REGCM_ROOT
$> ./configure # or, for intel : ./configure CC=icc FC=ifort
To know the list of arguments that can be given to the configure script, the script can be launched with the
--help command line argument.
$> ./configure --help
The useful arguments to successfully build the model are:
--with-netcdf Path to NetCDF installation (default: NETCDF
environment)
--with-hdf5 Path to HDF5 installation (default: HDF5
environment)
--with-szip Path to SZIP installation (default: SZIP
environment)
CC= C compiler command
CFLAGS= C compiler flags
LDFLAGS= linker flags, e.g. -L<lib dir> if you have libraries in a
nonstandard directory <lib dir>
LIBS= libraries to pass to the linker, e.g. -l<library>
CPPFLAGS= (Objective) C/C++ preprocessor flags, e.g. -I<include dir> if
you have headers in a nonstandard directory <include dir>
CPP= C preprocessor
FC= Fortran compiler command
FCFLAGS= Fortran compiler flags
MPIFC= MPI Fortran compiler command
3.2.1 Model configuration at build stage
1. Enable debug
--enable-debug Enable debugging flags and per processor log file
If enabled, the model will be compiled using debug flags for the compiler, which will allow the use of a
debugger such as gdb. More diagnostics will also be generated during model run. The default is to build
production binaries with all optimization flags turned on.
2. Serial code using stub MPI library
--enable-mpiserial Use the included MPI replacement library for single
processor
The model is coded to use an MPI2 library to run in parallel mode using multiple cores/processors or run on
a cluster. To enable instead a serial compilation option, a stub MPI library with empty callbacks needs to be
compiled and linked to the executable. The RegCM team strongly suggests building the MPI enabled model also on standalone systems, to take advantage of the multicore capabilities of any modern processor.
3. CLM option
--enable-clm Supply this option if you plan on using CLM option.
This option switches off the default Land model of RegCM (derived from BATS1e), and enables the use of
the Community Land Model V3.5 inside RegCM. The default is to use the RegCM BATS Land Model.
4. CLM 4.5 option
--enable-clm45 Supply this option if you plan on using CLM45 option.
This option switches off the default Land model of RegCM (derived from BATS1e), and enables the use of
the Community Land Model V4.5 inside RegCM. The default is to use the RegCM BATS Land Model.
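For example, a debug-enabled build using the CLM4.5 land model could be configured as in the following sketch (combine only the options you actually need; the netCDF path here is just a placeholder for your local installation):
$> cd $REGCM_ROOT
$> ./configure --enable-debug --enable-clm45 FC=gfortran \
>              --with-netcdf=/opt/netcdf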
3.3 Build the model executables
Now that everything is hopefully configured, you may use the make program to build executables.
$> make
This target builds all model parts. The compilation is started in the whole model tree (PreProc, Main and PostProc). Lots of messages will appear on screen, and at the end all executables are built in the source directories. To copy them to a bin directory in the model root, or into a bin directory in the path specified with the --prefix argument to the configure script, explicitly issue the command:
$> make install
Congratulations! You can now go to next step and run a test simulation.
Chapter 4
Accessing global datasets
The first step to run a test simulation is to obtain the static data to localize the model DOMAIN, and the Atmosphere and Ocean global model datasets to build the initial and boundary conditions (ICBC), in order to run a local area simulation.
ICTP maintains a publicly accessible web repository of datasets at:
http://clima-dods.ictp.it/regcm4
We will in the following substitute this URL with a shell variable:
$> export ICTP_DATASITE=http://clima-dods.ictp.it/regcm4
As of now, you are requested to download the required global data to your local disk storage before any run attempt. In the future the ICTP ESP team plans to make available an OpenDAP THREDDS Server giving remote access to the global datasets for creating the DOMAIN and ICBC files without the need to download the full global datasets, but just the required subset in space and time, using the ICTP web server capabilities to create that subset.
Our advice is to use a handy transfer program such as uget, but below we will show you how to use the command line download tools curl and wget to get the data.
4.1 Global dataset directory Layout
We suggest that you establish a convenient location for global datasets on your local storage. Keep in mind that the required space for a year of global data can be as large as 8 GBytes.
With this in mind, we will now assume that you, the user, have identified on your system, or have network access to, such a storage resource able to store say 100 GB of data, and have it reachable on your system under the $REGCM_GLOBEDAT location. In this directory, you are required to make the following subdirectories:
$> cd $REGCM_GLOBEDAT
$> mkdir SURFACE CLM CLM45 SST EIN15
This does not fill all possible global data sources paths, but will be enough for the scope of running the model
for testing its capabilities.
4.2 Static Surface Dataset
The model needs to be localized on a particular DOMAIN. The needed information is topography, land type classification and, optionally, lake depth (to run the Hostetler lake model) and soil texture classification (to run the chemistry option with DUST enabled).
This means downloading four files, which are global archives of the above data at 30 arc-second horizontal resolution on a global latitude-longitude grid.
$> cd $REGCM_GLOBEDAT
$> cd SURFACE
$> curl -o GTOPO_DEM_30s.nc ${ICTP_DATASITE}/SURFACE/GTOPO_DEM_30s.nc
$> curl -o GLCC_BATS_30s.nc ${ICTP_DATASITE}/SURFACE/GLCC_BATS_30s.nc
Optional Lake and Texture datasets:
$> cd $REGCM_GLOBEDAT
$> cd SURFACE
$> curl -o ETOPO_BTM_30s.nc ${ICTP_DATASITE}/SURFACE/ETOPO_BTM_30s.nc
$> curl -o GLZB_SOIL_30s.nc ${ICTP_DATASITE}/SURFACE/GLZB_SOIL_30s.nc
4.3 CLM Dataset
If you are planning to enable the CLM option in the model, you will need a series of files with global land surface
characteristics datasets.
$> cd $REGCM_GLOBEDAT
$> cd CLM
$> curl -o mksrf_fmax.nc ${ICTP_DATASITE}/CLM/mksrf_fmax.nc
$> curl -o mksrf_glacier.nc ${ICTP_DATASITE}/CLM/mksrf_glacier.nc
$> curl -o mksrf_lai.nc ${ICTP_DATASITE}/CLM/mksrf_lai.nc
$> curl -o mksrf_lanwat.nc ${ICTP_DATASITE}/CLM/mksrf_lanwat.nc
$> curl -o mksrf_navyoro_20min.nc ${ICTP_DATASITE}/CLM/mksrf_navyoro_20min.nc
$> curl -o mksrf_pft.nc ${ICTP_DATASITE}/CLM/mksrf_pft.nc
$> curl -o mksrf_soicol_clm2.nc ${ICTP_DATASITE}/CLM/mksrf_soicol_clm2.nc
$> curl -o mksrf_soitex.10level.nc ${ICTP_DATASITE}/CLM/mksrf_soitex.10level.nc
$> curl -o mksrf_urban.nc ${ICTP_DATASITE}/CLM/mksrf_urban.nc
$> curl -o pft-physiology.c070207 ${ICTP_DATASITE}/CLM/pft-physiology.c070207
$> curl -o pft-physiology.c070207.readme \
> ${ICTP_DATASITE}/CLM/pft-physiology.c070207.readme
$> curl -o rdirc.05.061026 ${ICTP_DATASITE}/CLM/rdirc.05.061026
These are the input files for the clm2rcm program (see section 6.2).
4.4 CLM 4.5 Dataset
If you are planning to enable the CLM 4.5 option in the model, you will need a series of files with global land
surface characteristics datasets.
$> cd $REGCM_GLOBEDAT
$> cd CLM45
$> mkdir megan pftdata snicardata surface
$> for dir in megan pftdata snicardata surface; do cd $dir; \
>    wget ${ICTP_DATASITE}/CLM45/$dir/ -O - | \
>    wget -A ".nc" -l1 --no-parent --base=${ICTP_DATASITE}/CLM45/$dir/ \
>    -nd -Fri - ; cd .. ; done
These are the input files for the mksurfdata program (see section 6.3).
4.5 Sea Surface Temperature
The model needs a global SST dataset to provide the ocean temperature. You have multiple choices for the SST data, but for our test run we will for now download just the CAC OISST weekly dataset for the period 1981 - present.
$> cd $REGCM_GLOBEDAT
$> cd SST
$> CDCSITE="ftp.cdc.noaa.gov/pub/Datasets/noaa.oisst.v2"
$> curl -o sst.wkmean.1981-1989.nc \
> ftp://$CDCSITE/sst.wkmean.1981-1989.nc
$> curl -o sst.wkmean.1990-present.nc \
> ftp://$CDCSITE/sst.wkmean.1990-present.nc
4.6 Atmosphere and Land temperature Global Dataset
The model needs to build the initial and boundary conditions for the regional scale, interpolating onto the RegCM grid the data from a Global Climate Model output. The GCM dataset can come from any of the supported models, but for our test run we will for now download just the EIN15 dataset for the year 1990 (Jan 01 00:00:00 UTC to Dec 31 18:00:00 UTC).
$> cd $REGCM_GLOBEDAT
$> cd EIN15
$> mkdir 1990
$> cd 1990
$> for type in "air hgt rhum uwnd vwnd"
> do
> for hh in "00 06 12 18"
> do
> curl -o ${type}.1990.${hh}.nc \
> ${ICTP_DATASITE}/EIN15/1990/${type}.1990.${hh}.nc
> done
> done
With these datasets we are now ready to go through the RegCM Little Tutorial in the next chapter of this User
Guide.
Chapter 5
Running a test simulation using the model
We will in this chapter go through a sample session in using the model with a sample configuration file prepared
for this task.
5.1 Setting up the run environment
The model executables prepared in chapter 3 are waiting for us to use them. So let’s give them a chance.
The model test run proposed here requires around 100 MB of disk space to store the DOMAIN and ICBC input files and the output files. We will assume here that you, the user, have already established a convenient directory on a disk partition with enough space, identified in the following discussion as $REGCM_RUN.
We will set up in this directory a standard environment where the model can be executed for the purpose of learning how to use it.
$> cd $REGCM_RUN
$> mkdir input output
$> ln -sf $REGCM_ROOT/bin .
$> cp $REGCM_ROOT/Testing/test_001.in .
$> cd $REGCM_RUN
Now we are ready to modify the input namelist file to reflect this directory layout. A namelist file in FORTRAN is a convenient way to give input to a program in a formatted file, read at runtime by the program to set up its execution behaviour. The next step is somewhat tricky, as you need to edit the namelist file respecting its well defined syntax. Open your preferred text editor and load the test_001.in file. For the scope of the present tutorial you will need to modify the following lines:
FROM:
dirter = ’/set/this/to/where/your/domain/file/is’,
TO:
dirter = ’input/’,
FROM:
inpter = ’/set/this/to/where/your/surface/dataset/is’,
TO:
inpter = ’$REGCM_GLOBEDAT’,
where $REGCM_GLOBEDAT is the directory where input data have been downloaded in chapter 4.
FROM:
dirglob = ’/set/this/to/where/your/icbc/for/model/is’,
TO:
dirglob = ’input/’,
FROM:
inpglob = ’/set/this/to/where/your/input/global/data/is’,
TO:
inpglob = ’$REGCM_GLOBEDAT’,
and last bits:
FROM:
dirout=’/set/this/to/where/your/output/files/will/be/written’
TO:
dirout=’output/’
These modifications just reflect the above directory layout proposed for this tutorial, and any of these paths can
point anywhere on your system disks. The path is limited to 256 characters. We are now ready to execute the first
program of the RegCM model.
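If you prefer to script these edits instead of using a text editor, a minimal sed sketch is shown below. It assumes the placeholder paths appear in test_001.in exactly as printed above; note that the double quotes let the shell expand $REGCM_GLOBEDAT to the real path, which is what the namelist file needs.
$> cd $REGCM_RUN
$> sed -i -e "s|/set/this/to/where/your/domain/file/is|input/|" \
>         -e "s|/set/this/to/where/your/icbc/for/model/is|input/|" \
>         -e "s|/set/this/to/where/your/surface/dataset/is|$REGCM_GLOBEDAT|" \
>         -e "s|/set/this/to/where/your/input/global/data/is|$REGCM_GLOBEDAT|" \
>         -e "s|/set/this/to/where/your/output/files/will/be/written|output/|" \
>         test_001.in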
5.2 Create the DOMAIN file using terrain
The first step is to create the DOMAIN file to localize the model on a world region. The program which does this
for you reading the global databases is terrain .
To launch the terrain program, enter the following commands:
$> cd $REGCM_RUN
$> ./bin/terrain test_001.in
If everything is correctly configured up to this point, the model should print something on stdout, and the last
lines will be:
Grid data written to output file
Successfully completed terrain fields generation
In the input directory the program will write the following two files:
$> ls input
test_001_DOMAIN000.nc test_001_LANDUSE
The DOMAIN file contains the localized topography and landuse databases, as well as projection information
and land sea mask. The second file is an ASCII encoded version of the landuse, used for modifying it on request.
We will cover its usage later on. To have a quick look at the DOMAIN file content, you may want to use the
GrADSNcPlot program:
$> ./bin/GrADSNcPlot input/test_001_DOMAIN000.nc
If not familiar with GrADS program, enter in sequence the following commands at the ga-> prompt:
ga-> q file
ga-> set gxout shaded
ga-> set mpdset hires
ga-> set cint 50
ga-> d topo
ga-> c
ga-> set cint 1
ga-> d landuse
ga-> quit
This will plot the topography and the landuse in an X11 window.
5.3 Create the SST using the sst program
We are now ready to create the Sea Surface Temperature for the model, reading a global dataset. The program
which does this for you is the sst program, which is executed with the following commands:
$> cd $REGCM_RUN
$> ./bin/sst test_001.in
If everything is correctly configured up to this point, the model should print something on stdout, and the last
line will be:
Successfully generated SST
The input directory now contains a new file:
$> ls input
test_001_DOMAIN000.nc test_001_LANDUSE test_001_SST.nc
The SST file contains the Sea Surface temperature to be used in generating the Initial and Boundary Conditions
for the model for the period specified in the namelist file. Again you may want to use the GrADSNcPlot program
to look at file content:
$> ./bin/GrADSNcPlot input/test_001_SST.nc
If not familiar with GrADS program, enter in sequence the following commands at the ga-> prompt:
ga-> q file
ga-> set gxout shaded
ga-> set mpdset hires
ga-> set cint 2
ga-> d sst
ga-> quit
This will plot the interpolated SST field in an X11 window.
5.4 Create the ICBC files using the icbc program
Next step is to create the ICBC (Initial Condition, Boundary Conditions) for the model itself. The program which
does this for you is the icbc program, executed with the following commands:
$> cd $REGCM_RUN
$> ./bin/icbc test_001.in
If everything is correctly configured up to this point, the model should print something on stdout, and the last
line will be:
Successfully completed ICBC
The input directory now contains two more files:
$> ls -1 input
test_001_DOMAIN000.nc
test_001_ICBC.1990060100.nc
test_001_ICBC.1990070100.nc
test_001_LANDUSE
test_001_SST.nc
The ICBC files contain the surface pressure, surface temperature, horizontal 3D wind components, 3D
temperature and mixing ratio for the RegCM domain for the period and time resolution specified in the input
file. Again you may want to use the GrADSNcPlot program to look at file content:
$> ./bin/GrADSNcPlot input/test_001_ICBC.1990060100.nc
If not familiar with GrADS program, enter in sequence the following commands at the ga-> prompt:
ga-> q file
ga-> set gxout shaded
ga-> set mpdset hires
ga-> set cint 2
ga-> d ts
ga-> c
ga-> set lon 10
ga-> set lat 43
ga-> set t 1 last
ga-> d ts
ga-> quit
This will plot the interpolated surface temperature field in an X11 window, first at the first time step and then as a time section at one of the domain points for a whole month.
We are now ready to run the model!
5.5 First RegCM model simulation
The model now has all the data needed to allow you to launch a test simulation, the final goal of our little tutorial.
The model command line will differ depending on whether you have prepared the Serial or the MPI version. For the MPI enabled version we will assume that your machine has at least a dual core processor (the baseline for current machines, even laptops). Change the -np argument in the example below to the number of processors you have on your platform (on my quad-core laptop I use -np 4).
MPI version [1]
$> cd $REGCM_RUN
$> mpirun -np 8 ./bin/regcmMPI test_001.in
Serial version [2]
$> cd $REGCM_RUN
$> ./bin/regcmSerial test_001.in
Now the model will start running, and a series of diagnostic messages will be printed on screen. As this is a simulation known to behave well, no stoppers will appear, so you may now want to have a coffee break and come back in about 10 minutes.
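If you prefer to keep the run in the background and capture the diagnostics to a file (a common shell pattern, not a model requirement), you can use something like:
$> nohup mpirun -np 8 ./bin/regcmMPI test_001.in > test_001.log 2>&1 &
$> tail -f test_001.log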
At the end of the run, the model will print the following message:
RegCM V4 simulation successfully reached end
The output directory now contains four files:
$> ls output
test_001_ATM.1990060100.nc test_001_SRF.1990060100.nc
test_001_RAD.1990060100.nc test_001_SAV.1990070100.nc
[1] Use regcmMPICLM if the CLM version has been configured.
[2] Deprecated. Support will be dropped in future releases.
The ATM file contains the atmosphere status from the model, the SRF file contains the surface diagnostic variables, and the RAD file contains radiation flux information. The SAV file stores the status of the model at the end of the simulation period to enable a restart, thus allowing a long simulation period to be split into shorter simulations.
To have a look for example at surface fields, you may want to use the following command:
$> ./bin/GrADSNcPlot output/test_001_SRF.1990060100.nc
Assuming the previous crash course in using GrADS was received, you should be able to plot the variables in
the file.
This is the end of this little tutorial, and in the next chapter we will examine how to configure the model for
your research needs.
Chapter 6
Localizing the model and running your
simulation
In this chapter we will examine the namelist configuration file in more detail, to give you, the User, a deeper knowledge of the model capabilities.
6.1 The commented namelist
In this section we will show you the commented namelist input file you will find under $REGCM_ROOT/Doc with the name README.namelist. All model programs seen so far, with the exception of the GrADS helper program, use this namelist file as input, and it is unique to a particular simulation. The model input namelist file is composed of a number of different namelists, each one devoted to configuring one part of the model capabilities. A namelist in the namelist file is identified by a starting & character followed by the namelist name, and ends on a single line containing the / character.
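As a minimal illustration of this syntax (the names below are placeholders for this example only, not actual RegCM parameters):
&examplenamelist
 some_int = 1,        ! an integer value
 some_real = 0.5,     ! a real value
 some_flag = .false., ! a logical value
/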
6.1.1 dimparam namelist
This namelist contains the base X,Y,Z domain dimension information, used by the model dynamic memory allocator to request from the Operating System the memory space needed to store the model internal variables.
&dimparam
iy = 34, ! This is number of points in the N/S direction
jx = 48, ! This is number of points in the E/W direction
kz = 23, ! Number of vertical levels
dsmin = 0.01, ! Minimum sigma spacing (only used if kz is not 14, 18, or 23)
dsmax = 0.05, ! Maximum sigma spacing (only used if kz is not 14, 18, or 23)
nsg = 1, ! For subgridding, number of points to decompose. If nsg=1,
! no subgridding is performed. CLM3.5 does NOT work with
! subgridding enabled.
njxcpus = -1, ! Number of CPUS to be used in the jx (lon) dimension.
! If <=0 , the executable will try to figure out a suitable
! decomposition.
niycpus = -1, ! Number of CPUS to be used in the iy (lat) dimension.
! If <=0 , the executable will try to figure out a suitable
! decomposition.
/
The things you need to know here:
1. In the current version 4.5 the model parallelizes execution by dividing the work between the processors, with a minimum work per processor of 9 points (a 3 × 3 box), so the maximum theoretical number of processors which can be used in a parallel run for the above configuration is roughly 150; because of the communication overhead, however, the optimal number would be around 12, i.e. a 10 × 10 patch per processor.
2. If a custom number of sigma levels is chosen (not 14, 18 or 23), the actual sigma values are calculated minimizing the a, b coefficients for the equation:

\[ \mathrm{dsig}(i) = \mathrm{dsmax}\; a^{i-1}\, b^{\frac{1}{2}(i-2)(i-1)} \qquad (6.1) \]

derived from the recursive relation:

\[ \mathrm{dsig}(i) = a(i)\,\mathrm{dsig}(i-1) \qquad (6.2) \]

where a(i) = b a(i-1). We at ICTP normally use 23 levels for the non-hydrostatic core and 18 for the hydrostatic core.
3. Specifying an nsg number greater than one turns on the subgrid BATS/CLM4.5 model. There is no plan to extend this feature to the CLM3.5 model. This affects only the surface variable calculations: all dynamical variables are still calculated on the coarser grid, and rain in the current implementation is also calculated on the coarser grid.
4. The njxcpus, niycpus parameters can be used to force a particular domain decomposition on any particular hardware architecture. Normally the algorithm in the model code should be able to find an optimally balanced decomposition.
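For example, to force a 4 × 4 decomposition on 16 processors (a sketch only; whether this layout is balanced depends on your domain shape and hardware), one could set, together with the other dimparam entries:
&dimparam
 njxcpus = 4, ! 4 processor columns along jx
 niycpus = 4, ! 4 processor rows along iy
/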
6.1.2 coreparam namelist
This namelist is new to the RegCM 4.5 version, and controls which dynamical core is used in the model.
&coreparam
idynamic = 1, ! Choice of dynamical core
! 1 = MM4 hydrostatic core
! 2 = MM5 NON hydrostatic core
/
The things you need to know here:
1. For details about the two dynamical cores, refer to the Reference Manual.
2. The hydrostatic core is computationally cheaper, but physically it should be limited to grid spacings greater than 15 km. For higher resolutions, the non-hydrostatic core is to be used.
3. The output variables of the two dynamical cores differ in the ATM files because of the different state variables
used. Vertical levels also differ.
4. The ICBC files for the non-hydrostatic core cannot be used for a hydrostatic run, and vice versa.
5. Nesting using the FNEST option is supported in these configurations:
(a) hydrostatic nested into hydrostatic
(b) non-hydrostatic nested into hydrostatic
(c) non-hydrostatic nested into non-hydrostatic
6. If the non-hydrostatic core is selected, the nonhydroparam namelist is also read in.
6.1.3 geoparam namelist
This namelist is used by the terrain program to geolocate the model grid on the Earth's surface. The RegCM model uses a limited number of projection engines. The values here are used by the other model programs to assert consistency with the geolocation information written by the terrain program in the DOMAIN file.
The first step in any application is the selection of model domain and resolution. There are no strict rules for
this selection, which in fact is mostly determined by the nature of the problem and the availability of computing
resources. The domain should be large enough to allow the model to develop its own circulations and to include
all relevant forcings and processes, and the resolution should be high enough to capture local processes of interest
(e.g. due to complex topography or land surface).
On the other hand, the model computational cost increases rapidly with resolution and domain size, so a compromise usually needs to be reached between all these factors.
This is usually achieved by experience, understanding of the problem or trial and error; however, one tip to remember is to avoid having the boundaries of the domain cross major topographical systems.
This is because the mismatch in the resolution of the coarse scale lateral driving fields and the model fields in
the presence of steep topography may generate spurious local effects (e.g. localized precipitation areas) which can
affect the model behavior, at least in adjacent areas.
&geoparam
iproj = ’LAMCON’, ! Domain cartographic projection. Supported values are:
! ’LAMCON’, Lambert conformal.
! ’POLSTR’, Polar stereographic. (Doesn’t work)
! ’NORMER’, Normal Mercator.
! ’ROTMER’, Rotated Mercator.
ds = 60.0, ! Grid point horizontal resolution in km
ptop = 5.0, ! Pressure of model top in cbar
clat = 45.39, ! Central latitude of model domain in degrees
! North hemisphere is positive
clon = 13.48, ! Central longitude of model domain in degrees
! West is negative.
plat = 45.39, ! Pole latitude (only for rotated Mercator Proj)
plon = 13.48, ! Pole longitude (only for rotated Mercator Proj)
truelatl = 30.0, ! Lambert true latitude (low latitude side)
truelath = 60, ! Lambert true latitude (high latitude side)
i_band = 0, ! Use this to enable a tropical band. In this case the ds,
! iproj, clat, clon parameters are not considered.
/
The things you need to know here:
1. The different projection engines produce better results depending on the position and extent of the domain.
In particular, regardless of the hemisphere:
• Middle latitudes (around 45 degrees) - Lambert Conformal
• Polar latitudes (more than 75 degrees) - Polar Stereographic
• Low latitudes (up to 30 degrees and crossing the equator) - Mercator
• Crossing more than 45 degrees of extent in latitude - Rotated Mercator
2. The model hydrostatic engine does not allow resolutions finer than 20 km. If you want a higher resolution, consider using the subgridding scheme. ICTP plans to introduce in the future a non-hydrostatic compressible core to the RegCM model.
3. Lowering the top pressure of the model can give you problems in regions with complex topography. Change the default only after thinking twice about it.
4. Always specify clat and clon, the central domain point, and do fine adjustment of the position moving
it around a little bit. A little shift in position and some tests can help you obtain a better representation of
coastlines and topography at the coarse resolutions.
5. If using the LAMCON projection, take care to place the two true latitudes at around one fourth and three fourths of the domain latitude span, to better correct the projection distortion of the domain.
6. The pole position for the rotated Mercator projection should be as near as possible to the domain center position.
7. For the i_band parameter, selecting this will enable the tropical band experiment, and the horizontal resolution will be calculated from the number of jx points. The projection is set to Normal Mercator, the center of the projection is set to clat = 0.0, clon = 180.0, and the grid point resolution (in km) is calculated as:

\[ ds = \frac{2\pi \times 6370.0}{jx} \qquad (6.3) \]
Just remember:
(a) A tropical band simulation is heavy, as the number of points needed to obtain a good horizontal resolution is usually huge. Check that any memory limit is disabled on your platform before attempting a run.
(b) The model scales well on a cluster with a large number of processors.
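As a rough consistency check of equation (6.3) (the jx values here are only illustrative): a band run with jx = 400 points gives ds ≈ 2π × 6370/400 ≈ 100 km, while reaching ds ≈ 25 km already requires jx ≈ 1600 points, which is why band experiments are computationally heavy.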
6.1.4 terrainparam namelist
This namelist is used by the terrain program to know how you want to generate the DOMAIN file. You can control
its work using a number of parameters to obtain what you consider the best representation of the physical reality.
Do not underestimate what you can do at this early stage: having a good representation of the surface can lead to valuable results later, when the model calculates climatic parameters.
&terrainparam
domname = ’AQWA’, ! Name of the domain/experiment.
! Controls naming of input files
smthbdy = .false., ! Smoothing Control flag
! true -> Perform extra smoothing in boundaries
lakedpth = .false., ! If using lakemod (see below), produce from
! terrain program the domain bathymetry
ltexture = .false., ! If using DUST tracers (see below), produce
! the domain soil texture dataset
lsmoist = .false., ! Use Satellite Soil Moisture Dataset for
! initialization of soil moisture.
fudge_lnd = .false., ! Fudging Control flag, for landuse of grid
fudge_lnd_s = .false., ! Fudging Control flag, for landuse of subgrid
fudge_tex = .false., ! Fudging Control flag, for texture of grid
fudge_tex_s = .false., ! Fudging Control flag, for texture of subgrid
fudge_lak = .false., ! Fudging Control flag, for lake of grid
fudge_lak_s = .false., ! Fudging Control flag, for lake of subgrid
h2opct = 50., ! Surface min H2O percent to be considered water
h2ohgt = .true., ! Allow water points to have hgt greater than 0
ismthlev = 1, ! How many times apply the 121 smoothing
dirter = ’input/’, ! Output directory for terrain files
inpter = ’globdata/’, ! Input directory for SURFACE dataset
moist_filename = ’moist.nc’, ! Read initial moisture and snow from this file
/
The things you need to know here:
1. The domname controls the output file naming convention: all generated files will add this prefix to the old V3 naming convention, giving you the capability to recognize different runs. Always try to use meaningful names.
2. You can control the final land-water mask using the h2opct parameter. This parameter can be used to have more land points than calculated by the simple interpolation engine. Try different values to find the best land shapes. A zero value means using just the interpolation engine; higher values will extend the land into ocean points at the land-water interface. The h2ohgt parameter also allows water points to have an elevation greater than zero, to avoid wall effects on the coasts.
3. A number of flags control the capability of the terrain program to modify on request the class type variables in the DOMAIN file: you can modify the landuse, the texture and the lake/land interface. Running the terrain program once, it will generate for you, aside from the DOMAIN file, a series of ASCII files you can modify with any text editor. Running the terrain program a second time with a fudge flag set will tell the program to overwrite the selected variable with the modified values from the ASCII file. This can be useful for sensitivity experiments with the BATS surface model or to design a scenario experiment.
4. Some of the land surface types in BATS have been little tested and used or are extremely simplified and thus
should be used cautiously. Specifically the types are: sea ice, bog/marsh, irrigated crop, glacier. If such
types are present in a domain, the user is advised to carefully check the model behavior at such points and
possibly substitute these types with others.
5. The inpter directory is expected to contain a SURFACE directory where the actual netCDF global datasets are stored. The overall path is limited to 256 characters.
6. If the netCDF library is compiled with OpenDAP support, a URL can be used as a path in the dirter and inpter variables. Note that the 256 character limit for paths holds in the whole program.
7. The lsmoist namelist entry triggers the interpolation on the domain area of a global dataset of satellite
measured surface soil moisture to estimate the initial model soil moisture. A simple algorithm extends the
surface soil moisture to all the soil layers.
8. The moist_filename entry, if present, triggers reading the soil moisture on all model vertical layers from the specified file and using it to initialize the soil moisture content in an initial run. Note that because of this, a DOMAIN file created for a CLM45 run cannot be used for a BATS run.
6.1.5 debugparam namelist
This namelist is used by all RegCM programs to enable/disable some debug printout. In the current release this
flag is honored only by the model itself. If you are not a developer, you may find these flags useless.
&debugparam
debug_level = 0, ! Currently value of 2 and 3 control previous DIAG flag
dbgfrq = 3, ! Interval for printout if debug_level >= 3
/
Just note that with the current implementation, the output file syncing is left to the netCDF library. If you want to examine the output step by step while the model is running, set the debug_level to a value of 3.
6.1.6 boundaryparam namelist
Being a limited area model, in order to be run RegCM4 requires the provision of meteorological initial and
time dependent lateral boundary conditions, typically for wind components, temperature, water vapor and surface
pressure. These are obtained by interpolation from the output of reanalyses of observations or of global climate model simulations, which thus drive the regional climate model.
The lateral boundary conditions (LBC) are provided through the so called relaxation/diffusion technique, which consists of:
1. selecting a lateral buffer zone of n grid points width (nspgx)
2. interpolating the driving large scale fields onto the model grid
3. applying the relaxation + diffusion term

\[ \frac{\partial\alpha}{\partial t} = F(n)\,F_1\,(\alpha_{LBC}-\alpha_{mod}) - F(n)\,F_2\,\nabla^2(\alpha_{LBC}-\alpha_{mod}) \qquad (6.4) \]

where α is a prognostic variable (wind components, temperature, water vapor, surface pressure). The first term on the rhs is a Newtonian relaxation term which brings the model solution (mod) towards the LBC field (LBC), and the second term diffuses the differences between the model solution and the LBC. F(n) is an exponential function given by:

\[ F(n) = \exp\left(-\frac{n-1}{\mathrm{anudge}(k)}\right) \qquad (6.5) \]

where n is the grid point distance from the boundary (varying from 1 to nspgx): n = 1 is the outermost grid point, n = 2 the adjacent one, etc. The anudge array determines the strength of the LBC forcing and depends on the model level k. In practice F(n) is equal to 1 at the outermost grid point row and decreases exponentially towards 0 at the internal edge of the buffer zone (nspgd) at a rate determined by anudge. Larger buffer zones and larger values of anudge will yield a greater forcing by the LBC.
Typically, for domain sizes of 100 grid points we use a buffer zone width of 10-12 grid points; for large domains this buffer zone can increase to values of 15 or even 20.
In the model anudge has three increasing values from the lower, to the mid and to the higher troposphere. For example, for nspgx = 10 we use anudge equal to 1, 2, 3 for the lower, mid and upper troposphere, respectively.
This allows a stronger forcing in the upper troposphere to ensure a greater consistency of the large scale circulations with the forcing LBC, while allowing more freedom to the model in the lower troposphere, where local high resolution forcings (e.g. complex topography) are more important.
For nspgx of 15-20, for example, the anudge values could be increased to 2, 3, 4. As a rule of thumb, the choice of the maximum anudge value should follow the condition:

\[ \frac{nspgx-1}{\mathrm{anudge}(k)} \ge 3 \qquad (6.6) \]
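As a quick numerical check of equation (6.5) (illustrative values only): with anudge(k) = 3, typical of the upper troposphere for nspgx = 10, one gets F(1) = 1, F(2) = exp(−1/3) ≈ 0.72, F(5) = exp(−4/3) ≈ 0.26 and F(10) = exp(−3) ≈ 0.05, so the LBC forcing has decayed to a few percent at the inner edge of the buffer zone.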
&boundaryparam
nspgx = 12, ! nspgx-1 represent the number of cross point slices on
! the boundary sponge or relaxation boundary conditions.
nspgd = 12, ! nspgd-1 represent the number of dot point slices on
! the boundary sponge or relaxation boundary conditions.
high_nudge = 3.0, ! Nudge value high range
medium_nudge = 2.0, ! Nudge value medium range
low_nudge = 1.0 ! Nudge value low range
/
6.1.7 globdatparam namelist
This namelist is used by the sst and icbc programs. You can tell them how to build the initial and boundary conditions.
&globdatparam
ibdyfrq = 6, ! boundary condition interval (hours)
ssttyp = ’OI_WK’, ! Type of Sea Surface Temperature used
! One in: GISST, OISST, OI2ST, OI_WK, OI2WK,
! FV_A2, FV_B2, EH5A2, EH5B1, EHA1B,
! EIN75, EIN15, ERSST, ERSKT, CCSST,
! CA_XX, HA_XX, EC_XX, IP_XX, GF_XX,
! CN_XX
dattyp = ’EIN15’, ! Type of global analysis datasets used
! One in: ECMWF, ERA40, EIN75, EIN15, EIN25,
! ERAHI, NNRP1, NNRP2, NRP2W, GFS11,
! FVGCM, FNEST, EH5A2, EH5B1, EHA1B,
! CCSMN, ECEXY, CA_XX, HA_XX, EC_XX,
! IP_XX, GF_XX, CN_XX, MP_XX
chemtyp = ’MZCLM’, ! Type of Global Chemistry boundary conditions
! One in : MZ6HR, 6 hours MOZART output
! : MZCLM, MOZART climatology 1999-2009
gdate1 = 1990060100, ! Start date for ICBC data generation
gdate2 = 1990070100, ! End data for ICBC data generation
calendar = ’gregorian’, ! Calendar to use (gregorian, noleap or 360_day)
dirglob = ’input/’, ! Path for ICBC produced input files
inpglob = ’globdata/’, ! Path for ICBC global input datasets.
ensemble_run = .false., ! If this is a member of a perturbed ensemble
! run. Activate random noise added to input
! Look http://users.ictp.it/~pubregcm/RegCM4/globedat.htm
! on how to download them.
/
Things you need to know here:
1. The gdate time window used to build the ICBC must always be greater than or equal to the time window you plan to run the model in. Different GCMs and reanalysis products have different lengths of the year. For example, the reanalysis products employ the real year length (365 days + real leap years, i.e. an average length of 365.2422 days), the CCSM has a year length of 365 days (no leap years), and the HadCM has a year length of 360 days (30 day months). The RegCM4 length of the year has to be the same as in the forcing fields, and this can be set in the variable dayspy. Please remember to always check the consistency of the length of the year.
2. Even if listed, not all the input engines are fully tested. Some of them need data which have been reformatted
by ICTP (they are not in the original format with which they are distributed by the institution producing
them). Some input data are not freely distributable by ICTP, and you need a special agreement with the
owner to use them. Hopefully the situation is changing, and data exchange is becoming more and more the
basis for good science in the climatic field.
3. The chemtyp parameter is read by the chem_icbc program. See section 6.1.23 below about chemistry boundary conditions.
4. For notes on paths, see point 5 in the terrainparam namelist description above (6.1.4).
6.1.8 fnestparam namelist
This namelist is read if FNEST is selected as dattyp in the globdatparam namelist (see 6.1.7 above), and permits the user to specify the output directory of the already completed coarse resolution run and the name of the coarse domain, i.e. the domname namelist parameter used in the terrainparam namelist of the coarse run (see 6.1.4 above).
The nested domain must be contained inside the coarse domain, and preferably the nested domain should not overlap the boundary region of the coarse run. Those checks are left to the user.
If nothing is specified, or the namelist is not present, the default is to search a directory named RegCM inside the inpglob directory for files without a domname, i.e. named like ATM.YYYYMMDDHH.nc.
!
! Nesting control
!
&fnestparam
coarse_outdir = ’globdata/RegCM’, ! Coarse domain output dir if FNEST
coarse_domname = ’EUROPE’, ! Coarse domain domname
/
6.1.9 perturbparam namelist
This namelist lets you control to which input fields, and with what fractional magnitude, a perturbation is added at the ICBC stage. It is read by the icbc program if the ensemble_run parameter in the globdatparam namelist is set to true.
!
! Perturbation control for ensembles
!
&perturbparam
lperturb_topo = .false., ! Add perturbation to surface elevation
perturb_frac_topo = 0.001D0, ! Fractional value of the perturbation on topo
lperturb_ts = .false., ! Add perturbation to surface temperature
perturb_frac_ts = 0.001D0, ! Fractional value of the perturbation on ts
lperturb_ps = .false., ! Add perturbation to surface pressure
perturb_frac_ps = 0.001D0, ! Fractional value of the perturbation on ps
lperturb_t = .false., ! Add perturbation to temperature
perturb_frac_t = 0.001D0, ! Fractional value of the perturbation on t
lperturb_q = .false., ! Add perturbation to humidity mixing ratio
perturb_frac_q = 0.001D0, ! Fractional value of the perturbation on q
lperturb_u = .false., ! Add perturbation to zonal velocity
perturb_frac_u = 0.001D0, ! Fractional value of the perturbation on u
lperturb_v = .false., ! Add perturbation to meridional velocity
perturb_frac_v = 0.001D0, ! Fractional value of the perturbation on v
/
Things you need to know here:
1. The perturb_frac values should not exceed one percent of the field value. The details of the applied noise algorithm can be found in O’Brien et al. (2011).
6.1.10 restartparam namelist
This namelist lets you control the time period the model is simulating in this particular run. You may want to split longer runs for which you have prepared the ICBCs into shorter runs, to schedule HPC resource usage in a more collaborative way with other researchers sharing it: the RegCM model allows restarts, so be friendly with other research projects which may not have this fortune (unless you are late for a publication).
&restartparam
ifrest = .false. , ! If a restart
mdate0 = 1990060100, ! Global start (is gdate1, most probably)
mdate1 = 1990060100, ! Start date of this run
mdate2 = 1990060200, ! End date for this run
/
Things you need to know here:
1. After the simulation starts, on restart NEVER change the mdate0 value. The correct scheme for a restart is:
• Set ifrest to .true.
• Set mdate1 to the value in mdate2
• Define the new value for mdate2
2. Consider that the current RegCM convention is to place midnight of the first day of a month as the last timestep of the previous month, except in the first model output file (ifrest = .false.). For this reason it is better to use month boundaries as start and end times. We usually consider a monthly data file the basic unit of output: each time you cross into a new month, a new output file will be created for you.
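For example, to continue the chapter 5 test run (which ended at 1990060200) for one more day, the restartparam namelist of the follow-up run would look like the following sketch (dates purely illustrative):
&restartparam
 ifrest = .true. , ! This is now a restart
 mdate0 = 1990060100, ! Global start: unchanged
 mdate1 = 1990060200, ! Start of this run: the previous mdate2
 mdate2 = 1990060300, ! New end date for this run
/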
6.1.11 timeparam namelist
This namelist contains model internal timesteps, used by the model as basic integration timestep and triggers for
calling internal parametric schemes.
&timeparam
dt = 150., ! time step in seconds
dtrad = 0., ! time interval solar radiation calculated (minutes)
dtsrf = 0., ! time interval at which land model is called (seconds)
dtcum = 0., ! time interval at which cumuls is called (seconds)
dtabem = 0., ! time interval absorption-emission calculated (hours)
dtchem = 900., ! time interval for chemistry reactions (seconds)
/
Things you need to know here:
1. If only dt is chosen, the other values are computed to align with dt and the output frequencies, so as to have 5 minutes for cumulus, 10 minutes for surface, 30 minutes for radiation and 18 hours for absorption-emission.
2. All the internal timesteps need to be multiples of the base timestep dt. Note that the units are different, so you need to convert the other timesteps to seconds before the check.
3. The surface, radiation and cumulus timesteps must be aligned on a 24 h time window.
4. The hydrostatic dynamical core of RegCM requires a fixed timestep, and you need to manually find the correct value which does not break the Courant-Friedrichs-Lewy condition (R. Courant and Lewy, 1928). A good rule of thumb is to have a dt (in seconds) not greater than three times the ds value in km specified in the geoparam namelist at 6.1.3. A greater value may lower the computing time, but in case of strong advection it may lead to inaccurate computations or even to the violation of the CFL condition and the divergence of the solution.
5. If you hit an unstable condition, the restart capability of the model may help you find the correct timestep just for a particular period, using a different timestep at different times.
6. The surface model timestep should be of the order of 10 minutes to keep the computational cost low: lower values may be needed only in the case of very strong surface gradients.
7. A negative value for dtcum (the default) actually sets dtcum = dt, calling the cumulus convection scheme every model timestep. For small dt, this value should be set to 5 minutes.
8. The absorption-emission computations are the most computationally expensive part of the radiation scheme, which overall accounts for around 30% of the total execution time. The 18 hour default is a compromise between accuracy and cost, and should avoid aliasing on a daily time window.
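As a rough consistency check of the rule of thumb in point 4: with the ds = 60 km of the test domain in chapter 5, the rule gives a maximum dt of about 3 × 60 = 180 s, which is consistent with the dt = 150 s used in the example namelist above.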
6.1.12 outparam namelist
This namelist controls the model output engine, allowing you to enable/disable any of the output file writeout, or
to modify the frequency the fields are written in the files.
&outparam
ifsave = .true. , ! Create SAV files for restart
savfrq = 48., ! Frequency in hours to create them
ifatm = .true. , ! Output ATM ?
atmfrq = 6., ! Frequency in hours to write to ATM
ifrad = .true. , ! Output RAD ?
radfrq = 6., ! Frequency in hours to write to RAD
ifsts = .true. , ! Output STS (frequency is daily) ?
ifsrf = .true. , ! Output SRF ?
srffrq = 3., ! Frequency in hours to write to SRF
ifsub = .true. , ! Output SUB ?
subfrq = 6., ! Frequency in hours to write to SUB
iflak = .true., ! Output LAK ?
lakfrq = 6., ! Frequency in hours to write to LAK
ifchem = .true., ! Output CHE ?
ifopt = .false., ! Output OPT ?
chemfrq = 6., ! Frequency in hours to write to CHE
enable_atm_vars = 67*.true., ! Mask to eventually disable variables ATM
enable_srf_vars = 35*.true., ! Mask to eventually disable variables SRF
enable_rad_vars = 25*.true., ! Mask to eventually disable variables RAD
enable_sub_vars = 18*.true., ! Mask to eventually disable variables SUB
enable_sts_vars = 18*.true., ! Mask to eventually disable variables STS
enable_lak_vars = 18*.true., ! Mask to eventually disable variables LAK
enable_opt_vars = 19*.true., ! Mask to eventually disable variables OPT
enable_che_vars = 26*.true., ! Mask to eventually disable variables CHE
dirout = ’./output’, ! Path where all output will be placed
lsync = .false., ! If sync of output files at every timestep is
! requested. Note, it has a performance impact.
! Enabled by default if debug_level > 2
idiag = 0, ! Enable tendency diagnostic output in the ATM
! file. NOTE: output file gets HUGE.
do_parallel_netcdf_in = .false., ! This enables parallel input:
! each processor reads its slice of the
! input file. Enable ONLY in case of
! HUGE input bandwidth.
do_parallel_netcdf_out = .false., ! This enables parallel output if the
! hdf5/netcdf libraries support it and
! the model is compiled with :
! --enable-nc4-parallel
/
Things you need to know here:
1. The surface fields are the mean values in the interval specified by the frequency values. The dynamical fields
are instead the point value at the output time. Refer to the Reference Manual Giorgi (2011) for a detailed
description of the model output fields.
2. If the chemistry or lake model are not enabled, the values specified in the control flags are not considered. If
nsg is not greater than one in dimparam at 6.1.1, the ifsub flag is not considered.
3. For the output directory, the path variable has a limit of 256 characters. This path must be a local path on
disk where the user running the model has write permissions granted.
4. The enable_..._vars logical arrays can be used to avoid saving any of the time dependent variables in the output files; the mask entries follow the order in which the variables are saved in the output file itself. Note that the geolocation and pressure variables cannot be disabled.
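As an illustration of the mask syntax (the index used here is hypothetical; check the variable order in your own output files before disabling anything), a single entry can be turned off inside the outparam namelist with an element assignment such as:
enable_srf_vars(7) = .false., ! switch off only the 7th SRF variable, keeping the others at the default .true.
placed after the full-array default shown above.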
6.1.13 physicsparam namelist
This namelist controls the model physics. You have a number of options here, and the best way to select the right set is to carefully read the Reference Manual Giorgi (2011). For the purposes of this User Guide we are not going into detail here, except to say that you will probably need to run some experiments, especially with different cumulus convection schemes, before finding the best model setting. Although the mixed convection scheme (Grell over land and Emanuel over ocean) seems to provide an overall better performance, our experience is that there is no scheme that works best everywhere; therefore we advise you to always do some sensitivity experiments to select the best scheme for your application.
&physicsparam
iboudy = 5, ! Lateral Boundary conditions scheme
! 0 => Fixed
! 1 => Relaxation, linear technique.
! 2 => Time-dependent
! 3 => Time and inflow/outflow dependent.
! 4 => Sponge (Perkey & Kreitzberg, MWR 1976)
! 5 => Relaxation, exponential technique.
isladvec = 0, ! Semilagrangian advection scheme for tracers and
! humidity
! 0 => Disabled
! 1 => Enable Semi Lagrangian Scheme
iqmsl = 1, ! Quasi-monotonic Semi Lagrangian
! 0 => Standard Semi-Lagrangian
! 1 => Bermejo and Staniforth 1992 QMSL scheme
ibltyp = 1, ! Boundary layer scheme
! 0 => Frictionless
! 1 => Holtslag PBL (Holtslag, 1990)
! 2 => UW PBL (Bretherton and McCaa, 2004)
icup_lnd = 4, ! Cumulus convection scheme Over Land
icup_ocn = 4, ! Cumulus convection scheme Over Ocean
! 1 => Kuo
! 2 => Grell
! 3 => Betts-Miller (1986) DOES NOT WORK !!!
! 4 => Emanuel (1991)
! 5 => Tiedtke (1996)
! 6 => Kain-Fritsch (1990), Kain (2004)
igcc = 2, ! Grell Scheme (icup == 2) Cumulus closure scheme
! 1 => Arakawa & Schubert (1974)
! 2 => Fritsch & Chappell (1980)
ipptls = 1, ! Moisture scheme
! 1 => Explicit moisture (SUBEX; Pal et al 2000)
! 2 => Explicit moisture Nogherotto/Tompkins
iocncpl = 0, ! Ocean SST from coupled Ocean Model through RegESM
! 1 => Coupling activated
iwavcpl = 0, ! Ocean roughness from coupled Wave Model through RegESM
! 1 => Coupling activated
iocnflx = 2, ! Ocean Flux scheme
! 1 => Use BATS1e Monin-Obukhov
! 2 => Zeng et al (1998)
! 3 => Coare bulk flux algorithm
iocnrough = 1, ! Zeng Ocean model roughness formula to use.
! 1 => (0.0065*ustar*ustar)/egrav
! 2 => (0.013*ustar*ustar)/egrav + 0.11*visa/ustar
! 3 => (0.017*ustar*ustar)/egrav
! 4 => Huang 2012 free convection and swell effects
! 5 => four regime formulation
iocnzoq = 1, ! Zeng Ocean model factors for t,q roughness
! 1 => 2.67*(re**d_rfour) - 2.57
! 2 => min(4.0e-4, 2.0e-4*re**(-3.3))
! 3 => COARE formulation as in bulk flux above
ipgf = 0, ! Pressure gradient force scheme
! 0 => Use full fields
! 1 => Hydrostatic deduction with pert. temperature
iemiss = 0, ! Use computed long wave emissivity
lakemod = 0, ! Use lake model
ichem = 0, ! Use active aerosol chemical model
scenario = ’A1B’, ! IPCC Scenario to use in A1B,A2,B1,B2
! RCP Scenarios in RCP2.6,RCP4.5,RCP6,RCP8.5
idcsst = 0, ! Use diurnal cycle sst scheme
iseaice = 0, ! Model seaice effects
idesseas = 0, ! Model desert seasonal albedo variability
iconvlwp = 1, ! Use convective algo for lwp in the large-scale
! This is reset to zero if using ipptls = 2
icldfrac = 0, ! Cloud fraction algorithm
! 0 : Original SUBEX
! 1 : Xu-Randall empirical
icldmstrat = 1, ! Simulate stratocumulus clouds
icumcloud = 1, ! Formulas to use for cumulus clouds (cf and lwc)
! Cloud fractions, only if mass fluxes are not
! available (Kuo and BM):
! 0,1 => cf = 1-(1-clfrcv)**(1/kdepth)
! 2 => cf = cloud profile
! Liquid water content:
! 0 => constant in cloud
! 1,2 => function of temperature
irrtm = 0, ! Use RRTM radiation scheme instead of CCSM
iclimao3 = 0, ! Use O3 climatic dataset from SPARC CMIP5
isolconst = 0, ! Use a constant 1367 W/m^2 instead of the prescribed
! TSI recommended CMIP5 solar forcing data.
islab_ocean = 0, ! Activate the SLAB ocean model
itweak = 0, ! Enable tweak scenario
/
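As an illustration of the mixed convection setup mentioned above, a minimal hedged fragment (the remaining
physicsparam entries are left as in the listing above) would be:
&physicsparam
 icup_lnd = 2, ! Grell scheme over land
 icup_ocn = 4, ! Emanuel scheme over ocean
 igcc = 2, ! Fritsch & Chappell closure for the Grell scheme
/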
6.1.14 nonhydroparam namelist
This namelist controls the non-hydrostatic core options for the upper radiative boundary condition and the base
state temperature vertical profile.
&nonhydroparam
logp_lrate = 50.0, ! Logp lapse rate d(T)/d(ln P) K/ln(Pa)
ifupr = 0, ! Upper radiative boundary condition (Klemp and Durran,
! Bougeault, 1983)
ckh = 1.0, ! Background diffusion multiplication factor
diffu_hgtf = 1, ! Add topographic effect to diffusion
nhbet = 0.4, ! Ikawa beta parameter (0.=centered, 1.=backward)
! determines the time-weighting, where zero gives a
! time-centered average and positive values give a bias
! towards the future time step that can be used for
! acoustic damping. In practice, values of
! nhbet = 0.2 - 0.4 are used (MM5 manual, Sec. 2.5.1)
nhxkd = 0.1, ! Time weighting for weighting old/new pp
/
1. The upper radiative boundary condition option has a high computational cost.
6.1.15 subexparam namelist
This namelist controls the SUBEX moisture scheme. Please carefully report in your work any tuning you perform
on these parameters. The values below are the ones currently used at ICTP.
&subexparam
ncld = 1, ! # of bottom model levels with no clouds (rad only)
qck1land = 0.0005, ! Autoconversion Rate for Land
qck1oce = 0.0005, ! Autoconversion Rate for Ocean
gulland = 0.65, ! Fract of Gultepe eqn (qcth) when prcp occurs (land)
guloce = 0.30, ! Fract of Gultepe eqn (qcth) for ocean
rhmax = 1.01, ! RH at which FCC = 1.0
rhmin = 0.01, ! RH min value
rh0land = 0.80, ! Relative humidity threshold for land
rh0oce = 0.90, ! Relative humidity threshold for ocean
tc0 = 238.0, ! Below this temp, rh0 begins to approach unity
cevaplnd = 1.0e-5, ! Raindrop evap rate coef land [[(kg m-2 s-1)-1/2]/s]
cevapoce = 1.0e-5, ! Raindrop evap rate coef ocean [[(kg m-2 s-1)-1/2]/s]
caccrlnd = 6.0, ! Raindrop accretion rate land [m3/kg/s]
caccroce = 6.0, ! Raindrop accretion rate ocean [m3/kg/s]
cllwcv = 0.3e-3, ! Cloud liquid water content for convective precip.
clfrcvmax = 0.75, ! Max cloud fractional cover for convective precip.
cftotmax = 0.75, ! Max total cover cloud fraction for radiation
conf = 1.00, ! Condensation efficiency
rcrit = 13.5, ! Mean critical radius
coef_ccn = 2.0, ! Geometric mean Diameter and standard deviation
abulk = 0.9, ! Bulk activation ratio
lsrfhack = .false. ! Surface radiation hack
/
We found that RegCM4 is especially sensitive to:
1. cevap : increasing cevap will generally decrease precipitation (a purely illustrative fragment is sketched below)
2. gulland, guloce : increasing gulland/guloce will generally lead to reduced precipitation
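As a hedged, purely illustrative example (not a recommendation), a sensitivity run aiming at less land precipitation
could double the raindrop evaporation rate coefficients with respect to the ICTP values above:
&subexparam
 cevaplnd = 2.0e-5, ! doubled raindrop evaporation rate over land
 cevapoce = 2.0e-5, ! doubled raindrop evaporation rate over ocean
/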
6.1.16 microparam namelist
This namelist controls the new microphysics scheme (selected with ipptls = 2 in 6.1.13).
&microparam
stats = .false., ! Produce debug variables in output files
budget_compute = .false., ! Verify enthalpy and moisture conservation
nssopt = 1, ! Supersaturation Computation
! 0 => No scheme
! 1 => Tompkins
! 2 => Lohmann and Karcher
! 3 => Gierens
iautoconv = 4, ! Choose the autoconversion parameterization
! => 1 Klein & Pincus (2000)
! => 2 Khairoutdinov and Kogan (2000)
! => 3 Kessler (1969)
! => 4 Sundqvist
rsemi = 1.0, ! Implicit/Explicit control
! NOT ACTIVATED YET - IT DOES NOT WORK!
! rsemi == 0 => scheme is fully explicit
! rsemi == 1 => scheme is fully implicit
! 0<rsemi<1 => scheme is semi-implicit
vfqr = 4.0, ! Rain fall speed (default is 4 m/s)
vfqi = 0.15, ! Ice fall speed (default is 0.15 m/s)
vfqs = 1.0, ! Snow fall speed (default is 1 m/s)
auto_rate_khair = 0.355, ! Autoconversion coefficient for iautoconv=2
auto_rate_kessl = 1.e-3, ! Autoconversion coefficient for iautoconv=3
auto_rate_klepi = 0.5e-3, ! Autoconversion coefficient for iautoconv=1
rkconv = 1.666e-4, ! Autoconversion coefficient for iautoconv=4
rcovpmin = 0.1, ! Minimum precipitation coverage
rpecons = 5.547e-5, ! Evaporation constant Kessler
/
6.1.17 grellparam, emanparam, tiedtkeparam and kfparam namelists
Here you can tune the convection scheme selected above in 6.1.13 with the icup_lnd or icup_ocn parameters,
if the selected number is 2, 4, 5 or 6.
&grellparam
gcr0 = 0.0020, ! Conversion rate from cloud to rain
edtmin = 0.20, ! Minimum Precipitation Efficiency land
edtmin_ocn = 0.20, ! Minimum Precipitation Efficiency ocean
edtmax = 0.80, ! Maximum Precipitation Efficiency land
edtmax_ocn = 0.80, ! Maximum Precipitation Efficiency ocean
edtmino = 0.20, ! Minimum Tendency Efficiency (o var) land
edtmino_ocn = 0.20, ! Minimum Tendency Efficiency (o var) ocean
edtmaxo = 0.80, ! Maximum Tendency Efficiency (o var) land
edtmaxo_ocn = 0.80, ! Maximum Tendency Efficiency (o var) ocean
edtminx = 0.20, ! Minimum Tendency Efficiency (x var) land
edtminx_ocn = 0.20, ! Minimum Tendency Efficiency (x var) ocean
edtmaxx = 0.80, ! Maximum Tendency Efficiency (x var) land
edtmaxx_ocn = 0.80, ! Maximum Tendency Efficiency (x var) ocean
shrmin = 0.30, ! Minimum Shear effect on precip eff. land
shrmin_ocn = 0.30, ! Minimum Shear effect on precip eff. ocean
shrmax = 0.90, ! Maximum Shear effect on precip eff. land
shrmax_ocn = 0.90, ! Maximum Shear effect on precip eff. ocean
pbcmax = 50.0, ! Max depth (mb) of stable layer b/twn LCL & LFC
mincld = 150.0, ! Min cloud depth (mb).
htmin = -250.0, ! Min convective heating
htmax = 500.0, ! Max convective heating
skbmax = 0.4, ! Max cloud base height in sigma
dtauc = 30.0D0 ! Fritsch & Chappell (1980) ABE Removal Timescale (min)
/
&emanparam
minsig = 0.95, ! Lowest sigma level from which convection can originate
elcrit_ocn = 0.0011, ! Autoconversion threshold water content (g/g) (ocean)
elcrit_lnd = 0.0011, ! Autoconversion threshold water content (g/g) (land)
tlcrit = -55.0, ! Below tlcrit auto-conversion threshold is zero
entp = 1.5, ! Coefficient of mixing in the entrainment formulation
sigd = 0.05, ! Fractional area covered by unsaturated dndraft
sigs = 0.12, ! Fraction of precipitation falling outside of cloud
omtrain = 50.0, ! Fall speed of rain (Pa/s)
omtsnow = 5.5, ! Fall speed of snow (Pa/s)
coeffr = 1.0, ! Coefficient governing the rate of rain evaporation
coeffs = 0.8, ! Coefficient governing the rate of snow evaporation
cu = 0.7, ! Coefficient governing convective momentum transport
betae = 10.0, ! Controls downdraft velocity scale
dtmax = 0.9, ! Max negative parcel temperature perturbation below LFC
alphae = 0.2, ! Controls the approach rate to quasi-equilibrium
damp = 0.1, ! Controls the approach rate to quasi-equilibrium
epmax_ocn = 0.999, ! Maximum precipitation efficiency (ocean)
epmax_lnd = 0.999, ! Maximum precipitation efficiency (land)
/
&tiedtkeparam
iconv = 4, ! Actual used scheme.
entrmax = 1.75e-3, ! Max entrainment iconv=[1,2,3]
entrdd = 3.0e-4, ! Entrainment rate for cumulus downdrafts
entrpen = 1.75e-3, ! Entrainment rate for penetrative convection
entrscv = 3.0e-4, ! Entrainment rate for shallow convection iconv=[1,2,3]
entrmid = 1.0e-4, ! Entrainment rate for midlevel convection iconv=[1,2,3]
cprcon = 1.0e-4, ! Coefficient for determining conversion iconv=[1,2,3]
detrpen = 0.75e-4, ! Detrainment rate for penetrative convection iconv=4
entshalp = 2.0, ! shallow entrainment factor for entrorg iconv=4
rcuc_lnd = 0.05, ! Convective cloud cover for rain evaporation iconv=4
rcuc_ocn = 0.05, ! Convective cloud cover for rain evaporation iconv=4
rcpec_lnd = 5.55e-5, ! Coefficient for rain evaporation below cloud iconv=4
rcpec_ocn = 5.55e-5, ! Coefficient for rain evaporation below cloud iconv=4
rhebc_lnd = 0.7, ! Critical rh below cloud for evaporation iconv=4
rhebc_ocn = 0.9, ! Critical rh below cloud for evaporation iconv=4
rprc_lnd = 1.4e-3, ! conversion coefficient from cloud water iconv=4
rprc_ocn = 1.4e-3, ! conversion coefficient from cloud water iconv=4
rcrit1 = 13.5, ! Mean critical radius for ccn
/
&kfparam
kf_entrate = 0.03, ! Entrainment rate
kf_min_pef = 0.2, ! Minimum precipitation efficiency
kf_max_pef = 0.9, ! Maximum precipitation efficiency
kf_dpp = 150.0, ! Start elevation for downdraft above cloud base (mb)
kf_tkemax = 5.0, ! Maximum turbulent kinetic energy in sub cloud layer
kf_min_dtcape = 1800.0, ! Consumption time of CAPE low limit
kf_max_dtcape = 3600.0, ! Consumption time of CAPE high limit
/
Things you need to know here:
1. In case of mixed cumulus schemes (land/ocean), the configuration namelists of both selected schemes are read
in. Note that in this case only the relevant (ocean or land) control values of each scheme are used.
2. The minimum and maximum values of the fraction of re-evaporated water in the downdraft for the Grell scheme
are essentially a measure of the precipitation efficiency: increasing their values generally decreases convective
precipitation.
3. Again, carefully read the Reference Manual before attempting any tuning, and report in any work any
modification of these parameters.
6.1.18 holtslagparam namelist
Here you can tune the Holtslag PBL scheme, selected above in 6.1.13 with ibltyp = 1.
&holtslagparam
ricr_ocn = 0.25, ! Critical Richardson Number over Ocean
ricr_lnd = 0.25, ! Critical Richardson Number over Land
zhnew_fac = 0.25, ! Multiplicative factor for zzhnew in holtpbl
ifaholtth10 = 1, ! First approximation for Obukhov length, th10 formula:
! 1 => 0.5 * (t+tg) * (1+0.61*q)
! 2 => (0.25*t + 0.75*tg) * (1+0.61*q)
! 3 => theta + hf/(k*us)*log(z/10)
! t = air temp., tg = ground temp., q = wv mix. ratio
! hf = total heat flux, z = elevation
! theta = virt. pot. t
ifaholt = 1, ! th10 final adjustment:
! 0 => no adjustment
! 1 => max(th10,tg)
! 2 => min(th10,tg)
/
6.1.19 uwparam namelist
Here you can tune the UW PBL scheme, selected above in 6.1.13 with ibltyp = 2.
&uwparam
iuwvadv = 0, ! 0=standard T/QV/QC advection, 1=GB01-style advection
! 1 is ideal for MSc simulation, but may have stability issues
atwo = 15.0, ! Efficiency of enhancement of entrainment by cloud evap.
! see Grenier and Bretherton (2001) Mon. Wea. Rev.
! and Bretherton and Park (2009) J. Clim.
rstbl = 1.5, ! Scaling parameter for stable boundary layer eddy length
! scale. Higher values mean stronger mixing in stable
! conditions
czero = 5.869, ! Czero constant in UW PBL (eqn 44a and pgs 856-857)
nuk = 5.0, ! Multiplicative factor for diffusion coefficients
/
6.1.20 slabocparam namelist
Here you define the parameters and the stage for the ocean q-flux adjusted mixed layer (SLAB) model, enabled
when islab_ocean is set to 1 in 6.1.13. The model is typically run in two stages: a first simulation with
do_restore_sst = .true. builds the q-flux adjustment climatology, which a subsequent simulation then uses with
do_qflux_adj = .true. (a fragment for the second stage is sketched after the listing below).
&slabocparam
do_qflux_adj = .false., ! Activate SLAB Ocean model surface fluxes adjust
! from an already created climatology
do_restore_sst = .true., ! Create during the run the SLAB Ocean model surface
! fluxes climatology to be used in a subsequent run
sst_restore_timescale = 5.0D0, ! Time interval in days in building the
! q-flux adjustment
mixed_layer_depth = 50.0D0, ! Depth in meters of the Ocean mixed layer.
/
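A hedged fragment for the second stage run, which uses the q-flux climatology created by the first stage, could
look like this (mixed layer depth left at its default):
&slabocparam
 do_qflux_adj = .true., ! use the q-flux adjustment climatology from the previous run
 do_restore_sst = .false.,
 mixed_layer_depth = 50.0D0, ! depth in meters of the ocean mixed layer
/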
6.1.21 tweakparam namelist
This namelist controls the tweaking of the model to obtain custom scenarios. It is enabled if itweak is set to 1 in 6.1.13.
&tweakparam
itweak_sst = 0, ! Enable adding sst_tweak to input TS
itweak_temperature = 0, ! Enable adding temperature_tweak to input T
itweak_solar_irradiance = 0, ! Add solar_tweak to solar constant
itweak_greenhouse_gases = 0, ! Multiply gas_tweak_factors to GG concentrations
sst_tweak = 0.0D0, ! In K
temperature_tweak = 0.0D0, ! In K
solar_tweak = 0.0D0, ! In W m-2 (1367.0 is default solar)
gas_tweak_factors = 1.0D0, 1.0D0 , 1.0D0 , 1.0D0 , 1.0D0,
! CO2 CH4 N2O CFC11 CFC12
/
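For instance, a hedged sketch of a uniform +2 K SST warming experiment (remember that itweak must also be set
to 1 in physicsparam, see 6.1.13) could be:
&tweakparam
 itweak_sst = 1, ! add sst_tweak to the input TS
 sst_tweak = 2.0D0, ! warm the prescribed SST by 2 K
/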
6.1.22 rrtmparam namelist
Here you can tune the RRTM radiative scheme, selected above in 6.1.13 with irrtm = 1.
&rrtmparam
inflgsw = 2, ! 0 = use the optical properties calculated in prep_dat_rrtm
! (same as standard radiation)
! 2 = use RRTM option to calculate cloud optical properties
! from water path and cloud drop radius
iceflgsw = 3, ! Flag for ice particle specification
! 0 => ice effective radius, r_ec, (Ebert and Curry, 1992),
! r_ec must be >= 10.0 microns
! 1 => ice effective radius, r_ec, (Ebert and Curry, 1992),
! r_ec range is limited to 13.0 to 130.0 microns
! 2 => ice effective radius, r_k, (Key, Streamer Ref. Manual,
! 1996), r_k range is limited to 5.0 to 131.0 microns
! 3 => generalized effective size, dge, (Fu, 1996),
! dge range is limited to 5.0 to 140.0 microns
! [dge = 1.0315 * r_ec]
liqflgsw = 1, ! Flag for liquid droplet specification
! 0 => Compute the optical depths due to water clouds in ATM
! (currently not supported)
! 1 => The water droplet effective radius (microns) is input
! and the optical depths due to water clouds are computed
! as in Hu and Stamnes, J., Clim., 6, 728-742, (1993).
inflglw = 2, ! Flag for cloud optical properties as above but for LW
iceflglw = 3, ! Flag for ice particle specification as above but for LW
liqflglw = 1, ! Flag for liquid droplet specification as above but for LW
icld = 1, ! Cloud Overlap hypothesis
irng = 1, ! mersenne twister random generator for McICA COH
/
6.1.23 chemparam namelist
This namelist controls the chemistry and aerosol options in the RegCM model.
&chemparam
chemsimtype = ’CBMZ ’, ! Which chemical tracers to be activated.
! One in :
! DUST : Activate 4 dust bins scheme
! SSLT : Activate 2 bins Sea salt scheme
! DUSS : Activate DUST +SSLT
! DU12 : Activate 12 dust bins scheme
! CARB : Activate 4 species black/anthropic
! carbon simulations
! SULF : Activate SO2 and SO4 tracers
! SUCA : Activate both SULF and CARB
! AERO : Activate all DUST, SSLT, CARB and SULF
! CBMZ : Activate gas phase and sulfate
! DCCB : Activate CBMZ +DUST +CARB
! POLLEN : Activate POLLEN transport scheme
ichsolver = 1, ! Activate the gas phase chemical solver CBMZ
ichsursrc = 1, ! Enable the emissions fluxes.
ichdrdepo = 1, ! 1 = enable tracer surface dry deposition. For dust,
! it is calculated by a size settling and dry
! deposition scheme. For other aerosols, a dry
! deposition velocity is simply prescribed.
ichebdy = 1, ! Enable reading of chemical boundary conditions, otherwise
! put 0 in boundary conditions.
ichcumtra = 1, ! 1 = enable tracer convective transport.
ichremlsc = 1, ! 1 = enable tracer rainout
ichremcvc = 1, ! 1 = enable tracer washout
ichdustemd = 1, ! Choice for parametrisation of dust emission size distribution
! 1 = use the standard scheme (Alfaro et al., Zakey et al.)
! 2 = use a revised soil granulometry ref +
! Kok et al emission size distribution :
! Menut et al.,2012; + Kok et al., 2011
ichdiag = 0, ! 1 = enable writing of additional tracer tendency
! diagnostics in the output
idirect = 1, ! Choice to enable or not aerosol feedbacks on radiation and
! dynamics (aerosol direct and semi direct effects)
! possible choice 1 or 2:
! 1 = no coupling to dynamic and thermodynamic. However
! the clear sky surface and top of atmosphere
! aerosol radiative forcings are diagnosed.
! 2 = allows aerosol feedbacks on radiative,
! thermodynamic and dynamic fields.
iindirect = 0, ! Enable sulfate first indirect effect in radiation scheme
! based on Qian et al., 2001
rdstemfac = 1.0, ! Dust emission adjustment factor (soil erodibility)
! linearly reduce or increase the dust flux
/
The chemsimtype parameter selects one of a number of fixed sets which define the nature and number of
chemical species and/or transported aerosols, together with which relevant scheme is to be used in the simulation
(a minimal example fragment is sketched after the list below). The implemented simulation types for the
aerosol/chemistry options are:
1. DUST : Activate 4 dust bins scheme, with on line emission, transport and removal.
2. SSLT : Activate 2 sea salt bins scheme, with on line emission, transport and removal.
3. DUSS : Activate Dust and seasalt scheme, 6 tracers.
4. DU12 : Activate 12 dust bins scheme
5. CARB : Activate 4 species organic and black carbon in both hydrophobic and hydrophilic aerosol scheme,
with on line emission, transport and removal.
6. SULF : Activate SO2 and SO4 tracers with simple sulfate oxidation from oxidant climatology, with on line
emission, transport and removal.
7. SUCA : Activate both SULF and CARB.
8. AERO : Activate all DUST, SSLT, CARB and SULF
9. CBMZ : Activate CBMZ gas phase only option : 37 tracers are considered here.
10. DCCB : Activate CBMZ +DUST +CARB + sulfate-nitrate-ammonium aerosol calculated with the
ISORROPIA gas-aerosol thermodynamical scheme: 50 tracers are considered here.
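As a hedged illustration, a dust-only aerosol run could use a fragment along these lines (all other chemparam
entries as in the listing above; remember that ichem must be set to 1 in physicsparam, see 6.1.13, for the chemistry
to be active):
&chemparam
 chemsimtype = 'DUST', ! 4 dust bins with online emission, transport, removal
 ichsursrc = 1, ! enable the online dust emission fluxes
 ichdrdepo = 1, ! size settling and dry deposition
 ichremlsc = 1, ! large scale rainout
 ichremcvc = 1, ! convective washout
 idirect = 1, ! diagnose the direct radiative forcing, no feedback
/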
The more tracers are used, the computationally heavier the simulations and the larger the outputs. The chemistry
outputs consist of one netCDF file per tracer, named explicitly and containing concentration fields plus different
diagnostics, and one netCDF file giving the optical properties of the total aerosol mixture, i.e. aerosol optical depth
and radiative forcing. For a big domain, this can require a huge amount of disk space to store the model results.
We will now detail the steps required to run a chemistry/aerosol simulation with the RegCM model.
Pre Processing
We need to perform a couple of operations in the pre-processing stage to prepare input datasets for an
aerosol/chemistry simulation.
1. In the case of a DUST, AERO or DCCB simulation, the model needs to prepare the soil type dataset to be used
for the dust emission calculation at the terrain program stage. The ltexture parameter in the terrainparam
namelist (see above in 6.1.4) should be set to .true.
2. After having prepared the static and boundary condition data for the atmosphere with the icbc program, we
also need to define chemical emissions and chemical boundary conditions.
The data needed for this second task come from different sources, both measurement data and GCM models
with an active chemistry parametrization.
1. Emission dataset. The pre-processor can manage CMIP5 RCP and IASA anthropogenic emissions for present
day and future emissions. For this, the global RCP emission datasets have first been processed by ICTP to
match the species considered in the chemical solver CBMZ, and to aggregate the different emission sectors
present in the RCP fields (e.g. CO emission from biomass burning + fossil fuel + ship + ...). The
resulting global files, as well as the grid information, are publicly available on the ICTP site:
http://clima-dods.ictp.it/regcm4/RCP_EMGLOB_PROCESSED
2. Chemical boundary conditions for important tracers are available through ICTP. For aerosols we use monthly
boundary conditions coming from global simulations (CAM + EC-EARTH), following different RCP
scenarios (HIST + future). For gas phase species, we now have 6 hourly chemical boundary conditions
issued from the MOZART CTM. Alternatively, we can use a climatology representative of monthly average
concentrations over the period 2000-2007 coming from the MOZART CTM. The oxidant fields, used in the
simple sulfate scheme, are also a climatology coming from the MOZART CTM. The data are available on the
ICTP site.
The steps to prepare the chemistry boundary conditions data are the following:
1. In case of a chemistry simulation type (CBMZ or DCCB), the global emission files must first be interpolated to
the RegCM model grid using the following procedure:
Create the RegCM model grid description file to be used by cdo to calculate the weights for a conservative
remapping interpolation:
$> cd $REGCM_RUN
$> ./bin/emcre_grid test_001.in
Interpolate the global data on the RegCM grid with the interpolation script:
$> cd $REGCM_RUN
$> ./bin/interp_emissions test_001.in
The installation of the cdo program is mandatory to perform this step. The interpolation is mass
conservative and is consistent for any ratio of model resolution to global emission resolution. Note that the
programs and the script use the same root path as the terrain and icbc programs for the input and data directories.
By default, we expect the global emissions to be at the same level as e.g. EIN15 in your data path layout. This
results in a file named *_CHEMISS.nc of monthly emissions for the whole RCP period. You do not need to
reprocess the file if you change the date of your simulation, as long as you stay within the RCP temporal window
(for now, Historical from 1990-2010). Which scenario to use is controlled by the scenario variable in the
physicsparam namelist as above in 6.1.13.
2. Depending on the value of the chemsimtype parameter, the relevant boundary conditions will be produced
on the RegCM domain by running:
$> cd $REGCM_RUN
$> ./bin/chem_icbc test_001.in
That will result in 6 hourly chemical boundary conditions in your input directory (*_CHBC*.nc and/or
*_AEBC*.nc).
Run time parameters
The other chemparam namelist parameters let you control the run time behaviour of the model chemistry and
aerosol schemes.
1. ichsolver : relevant for the gas-phase chemistry options, activates the chemical solver CBMZ. If different from
1, there are no chemical reactions and the tracers are only emitted, transported and removed.
2. ichsursrc : if set different from 1, the emission term is suppressed and only the boundary conditions
generate tracers in the domain.
3. ichdrdepo : if set different from 1, the dry deposition and sedimentation of tracers are disabled for chemistry
species.
4. ichremlsc : if set different from 1, rainout of chemical species is disabled.
5. ichremcvc : if set different from 1, washout of chemical species is disabled.
6. ichcumtra : if set different from 1, the convective transport of tracers is disabled.
7. ichdiag : if set to 1, the writing of additional diagnostics in the chemistry output is enabled. In particular,
all the 3D tendency terms (advection, turbulence, convection, boundary condition, chemistry, removal, etc.)
of the tracer equation are output at the frequency ichfreq. This is useful for budget and sensitivity
studies, as well as debugging. This can potentially generate HUGE files.
8. idirect : enables aerosol feedbacks on radiation.
If equal to 1, only the aerosol radiative forcing is calculated and output, but there is NO aerosol radiative
feedback on climate. This can be viewed as a control run option.
If equal to 2, the aerosol radiative forcing feeds back on the climate fields, via a perturbation of the
temperature tendency. This can be viewed as the perturbed run option.
9. ichdustemd : choice for the parametrisation of the dust emission size distribution:
if set to 1, the standard scheme is used (Alfaro et al., Zakey et al., 2006)
if set to 2, the revised soil granulometry + Kok et al., 2011 emission size distribution is used.
10. rdstemfac : scaling factor (erodibility) for tuning the DUST emission flux.
Sparse notes
1. Outputs are in netCDF, so process with your favorite software.
2. The flux and tendency variables, as well as the radiative forcings, are accumulated and averaged between two
output time steps (like precipitation). The concentration, burden and aerosol optical depth are instantaneous.
3. The output size can be huge, especially for the full chemistry and diagnostic options. In the future, we might
offer the choice of outputting selected variables only.
Not every possible dynamical configuration has been tested for the chemistry option, so bugs might appear:
please report them! CLM makes it possible to calculate on line biogenic volatile hydrocarbon emissions, as well as
chemical deposition, that can be used in RegCM. There are some flags to activate when compiling CLM; we will
update the documentation when this is fully tested.
The Tiedtke and Emanuel schemes, when activated, offer a more detailed treatment of convective transport
than the simple mixing hypothesis used with the other schemes. The UW planetary boundary layer option directly
integrates the emission and deposition flux terms as part of the calculation of the turbulent tracer tendency.
6.2 The CLM options
We will now discuss from the user point of view how to use model setups which need to be activated at configure
stage.
The CLM option, if activated, allows the user to run a simulation using the CLM surface model instead of the
default BATS1e model. We will not go deep here into the differences between the two models; read the Reference
Manual for this. The model executable is different in the case of CLM, and is named regcmMPICLM. Note
that in the CLM case only the MPI enabled compilation is supported (no serial), and no subgridding is possible
(nsg is always 1).
Enable
At configure stage (see 3.2.1), the option is to be enabled with the right command line argument to the configure
script
--enable-clm Supply this option if you plan on using CLM option.
This will enable a preprocessing flag, and build a different model executable. Note that no modifications are
needed for any other part of the model, but this triggers the building of another pre-processing program, clm2rcm.
Prepare and run
The CLM configuration requires a separate namelist in the namelist input file.
&clmparam
dirclm = ’input/’, ! CLM path to Input data produced by clm2rcm. If
! relative, it should be how to reach the Input dir
! from the Run dir.
clmfrq = 12., ! Frequency for CLM own output write
imask = 1, ! For CLM, Type of land surface parameterization
! 1 => using DOMAIN.INFO for landmask (same as BATS)
! 2 => using mksrf_navyoro file landfraction for
! landmask and perform a weighted average over
! ocean/land gridcells; for example:
! tgb = tgb_ocean*(1-landfraction)+tgb_land*landfraction
/
Things you need to know here:
1. The inpter path defined in the terrainparam namelist described in 6.1.4 is also used by the clm2rcm program.
See at 4.3 how to obtain needed datasets.
2. The file pft-physiology.c070207 should be manually copied in the dirclm directory before running the
model.
3. The clmfrq is relative to the output produced by the CLM model itself, and does not control the RegCM model
output. To know the CLM output file content, refer to CLM 3.5 documentation.
4. The imask = 2 option cannot be used with cumulus convection scheme 2 (Grell) for icup_lnd or icup_ocn,
which relies on the BATS1e landmask.
In the case of a CLM run, the user needs to run the clm2rcm program after the terrain program, and copy the
pft-physiology.c070207 file into the input directory:
$> cd $REGCM_RUN
$> ./bin/terrain regcm.in
$> ./bin/clm2rcm regcm.in
$> cp $REGCM_GLOBEDAT/CLM/pft-physiology.c070207 input/
The clm2rcm program interpolates global land characteristics datasets to the RegCM projected grid. The
contents of the pft-physiology.c070207 file are described in the pft-physiology.c070207.readme file. All
the other pre-processing steps are just the same as the ones detailed in chapter 5. To run the CLM option in the
RegCM model, just substitute the executable name:
$> mpirun -np 2 ./bin/regcmMPICLM regcm.in
Note that the CLM land model is much heavier than the BATS1e model, and computing time increases.
6.3 The CLM 4.5 options
We will now discuss from the user point of view how to use model setups which need to be activated at configure
stage.
The CLM 4.5 option, if activated, allows the user to run a simulation using the CLM version 4.5 surface model
instead of the default BATS1e model. We will not go deep here into the differences between the two models; read
the Reference Manual for this. The model executable is different in the case of CLM 4.5, and is named
regcmMPICLM45. Note that in the CLM 4.5 case only the MPI enabled compilation is supported (no serial).
Enable
At configure stage (see 3.2.1), the option is to be enabled with the right command line argument to the configure
script
--enable-clm45 Supply this option if you plan on using CLM45 option.
This will enable a preprocessing flag, and build a different model executable. Note that no modifications
are needed for any other part of the model, but this triggers the building of another pre-processing program,
mksurfdata.
Prepare and run
The CLM45 configuration requires separate entries in the namelist input file.
&clm_inparm
fpftcon = ’pft-physiology.c130503.nc’,
fsnowoptics = ’snicar_optics_5bnd_c090915.nc’,
fsnowaging = ’snicar_drdt_bst_fit_60_c070416.nc’,
hist_nhtfrq = 0,
/
&clm_soilhydrology_inparm
h2osfcflag = 1,
origflag = 0,
/
&clm_hydrology1_inparm
oldfflag = 0,
/
Things you need to know here:
1. The namelist 6.1.7 is also used by the mksurfdata program. See 4.4 for how to obtain the needed datasets.
2. The hist_nhtfrq is relative to the output produced by the CLM 4.5 model itself, and does not control the
RegCM model output.
3. To know the CLM 4.5 output file content, refer to CLM 4.5 documentation.
In the case of a CLM 4.5 run, the user needs to run the mksurfdata program after the terrain program.
$> cd $REGCM_RUN
$> ./bin/terrain regcm.in
$> ./bin/mksurfdata regcm.in
$> cp $REGCM_GLOBEDAT/CLM/pft-physiology.c070207 input/
All the other pre-processing steps are just the same as the ones detailed in chapter 5. To run the CLM 4.5 option in
the RegCM model, just substitute the executable name:
$> mpirun -np 2 ./bin/regcmMPICLM45 regcm.in
6.4 Sensitivity experiments hint
Although the LBC forcing does provide a constraint for the model, as any RCM, RegCM4 is characterized by a
certain level of internal variability due to its non-linear processes (e.g. convection).
For example, if small perturbations are introduced in the initial or lateral boundary conditions, the model will
generally produce different patterns of, e.g. precipitation, that appear as (sometimes seemingly organized) noise
when compared to the control simulation.
This noise depends on domain size and climatic regime; for example, it is especially pronounced in warm
climate regimes (e.g. the tropics or the summer season) and over large domains.
When doing for example sensitivity experiments to model modifications, e.g. to land use change, this internal
variability noise can be misinterpreted as a model response to the factor modified.
Users of RegCM4 should be aware of this when they do sensitivity experiments. The best way to filter out this
noise is to perform ensembles of simulations and look at the ensemble averages to extract the real model response
from the noise.
Chapter 7
Postprocessing tools
The new netCDF output format allows users to use a number of general purpose tools to postprocess model output
files. We will in this section do a quick review of some of the Open Source and Free Software ones.
7.1 Command line tools
Three major sets of tools may help you do even complex calculations just from the command line prompt.
7.1.1 netCDF library tools
The netCDF library itself offers three basic tools to play with netCDF archived data.
ncdump program, generates a text representation of a specified netCDF file on standard output. The text
representation is in a form called CDL (network Common Data form Language) that can be viewed, edited,
or serve as input to ncgen, thus ncdump and ncgen can be used as inverses to transform data representation
between binary and text representations. ncdump may also be used as a simple browser for netCDF datasets,
to display the dimension names and lengths; variable names, types, and shapes; attribute names and values;
and optionally, the values of data for all variables or selected variables in a netCDF dataset. Sample usage
patterns:
1. Look at the structure of the data in the netCDF dataset:
ncdump -c test_001_SRF.1990060100.nc
2. Produce a fully-annotated (one data value per line) listing of the data for the variables time and tas,
using FORTRAN conventions for indices, and show the floating-point data with only four significant
digits of precision and the time values with ISO format:
ncdump -v time,tas -p 4 -t -f \
fortran test_001_SRF.1990060100.nc
ncgen program, the reverse of the ncdump program: generates a netCDF file or a C or FORTRAN program
that creates a netCDF dataset from a CDL input. Sample usage patterns:
1. From a CDL file, generate a binary netCDF file:
ncgen -o test_001_SRF.1990060100_modif.nc \
test_001_SRF.1990060100.cdl
2. From a CDL file, generate a Fortran program to write the netCDF file:
ncgen -f test_001_SRF.1990060100.cdl > prog.f
nccopy utility copies an input netCDF file to an output netCDF file, in any of the four format variants, if
possible, and depending on the selected output format adds a compression filter and/or data chunking. Sample
usage patterns:
1. Convert a netCDF dataset to a netCDF 4 classic model compressed data file using shuffling to enhance
compression level:
nccopy -k 4 -d 9 -s test_001_SRF.1990060100.nc \
test_001_SRF.1990060100_compressed.nc
You can also find, in the Tools/Programs/RegCM_read directory under $REGCM_ROOT, a sample program to
read an output file using the netCDF library, which you can modify to fit your needs.
7.1.2 NetCDF operators NCO
This set of tools can be considered a swiss army knife to manage netCDF datasets. There are multiple operators,
and each operator takes netCDF files as input, then operates (e.g., derives new data, averages, hyperslabs,
manipulates metadata) and produces a netCDF output file. The single-command style of NCO allows users
to manipulate and analyze files interactively, or with simple scripts that avoid some of the overhead of higher level
programming environments. The major tools are:
ncap2 netCDF Arithmetic Processor
ncatted netCDF Attribute Editor
ncbo netCDF Binary Operator
ncea netCDF Ensemble Averager
ncecat netCDF Ensemble Concatenator
ncflint netCDF File Interpolator
ncks netCDF Kitchen Sink
ncpdq netCDF Permute Dimensions Quickly, Pack Data Quietly
ncra netCDF Record Averager
ncrcat netCDF Record Concatenator
ncrename netCDF Renamer
ncwa netCDF Weighted Averager
A comprehensive user guide can be found at:
http://nco.sourceforge.net/nco.html
Sample usage patterns:
1. Get value of tas variable at a particular point for all timesteps with a prescribed format one per line on stdout:
ncks -C -H -s "%6.2f\n" -v tas -d iy,16 -d jx,16 \
test_001_SRF.1990060100.nc
2. Extract one timestep of tas from a file and save into a new netCDF file:
ncks -c -v tas -d time,6 test_001_SRF.1990060100.nc \
test_001_SRF.1990060212.nc
3. Cat together a year worth of output data for the single tas variable into a single file:
ncrcat -c -v tas test_001_SRF.1990??0100.nc \
test_001_T2M.1990.nc
4. Get the DJF mean value of the temperature from a multiyear run:
ncra -c -v tas test_001_SRF.????120100.nc \
test_001_SRF.????010100.nc \
test_001_SRF.????020100.nc \
test_001_DJF_T2M.nc
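5. Compute the ensemble mean of tas over a few ensemble members (a hedged sketch: the member file names
here are purely hypothetical):
ncea -c -v tas test_001_SRF.1990060100.nc test_002_SRF.1990060100.nc \
test_003_SRF.1990060100.nc test_ENSMEAN_SRF.1990060100.nc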
We strongly encourage you to read the on-line user guide of the NCO tools. It will surely give a boost to your
data manipulation and analysis skills.
7.1.3 Climate data Operators CDO
The monolithic cdo program from the Max Planck Institut für Meteorologie implements a really comprehensive
collection of command line operators to manipulate and analyse climate and NWP model data in either netCDF
or GRIB format. There are more than 400 operators available, covering the following topics:
File information and file operations
Selection and Comparison
Modification of meta data
Arithmetic operations
Statistical analysis
Regression and Interpolation
Vector and spectral Transformations
Formatted I/O
Climate indices
We won't make a comprehensive analysis of this tool here, but you can find some ideas in the PostProc
directory under $REGCM_ROOT by reading the two sample average and regrid scripts, which use a combination
of NCO programs and cdo operators to reach their goal. A very simple usage pattern, for example to obtain
monthly means, is:
cdo monmean test_001_T2M.1990.nc test_001_T2M.1990_monmean.nc
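Operators can also be chained. For example, to compute the monthly means and remap them onto a regular 1
degree global lon-lat grid in one call (a hedged sketch: remapbil performs a bilinear interpolation and relies on the
CF coordinate information stored in the RegCM output; the output file name is purely illustrative):
cdo remapbil,r360x180 -monmean test_001_T2M.1990.nc test_001_T2M.1990_monmean_lonlat.nc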
7.2 GrADS program
This tool is the one mostly used at ICTP to analyze and plot model output results. It can be used either as an
interactive tool or as a batch data analysis tool. We have already written in chapter 5 about the helper program
GrADSNcPlot which can be used to interactively plot model output results. We will here detail why a helper
program is needed and how it works. For information regarding the grads program itself, a comprehensive
guide may be found at:
http://www.iges.org/grads/gadoc/users.html
7.2.1 GrADS limits
The grads program is powerful, yet has limits:
1. Only the equirectangular (Plate Carrée) projection is supported. Some other projections can be used through
a pdef entry in the CTL file using the internal direct projection engines, but not all RegCM supported
projections are supported using the direct engine.
2. NetCDF format allows multidimensional variables, while grads supports just four dimensional
(time,level,latitude,longitude) variables.
Luckily, these limits can be overcome by carefully telling grads the RegCM data structure using the CTL file and
one ancillary proj file:
1. The grads program allows usage of the pdef BILIN option in the CTL file, which allows the user to specify
a supplementary file name. In this file are stored three lat-lon floating-point grids which have for each point
on the equirectangular grid the indexes i,j on the projected grid, as well as wind rotation values.
2. The grads program allows identifying four dimensional slices of a multidimensional variable as new
variables, providing them a unique name. This is how we are able to see the chemical output variables in grads.
While the GrADSNcPlot program allows interactive plotting and, after quitting the grads program, removes the
CTL file and the proj file, the GrADSNcPrepare program only creates these two files, allowing the proj file to be
shared between multiple CTL files for the same RegCM domain (i.e. the proj file is created only once). To use
the grads program, you need to have both these ancillary files together with the netCDF data file.
To create the CTL file for the CLM history output file, you need to provide to the helper programs the paths to
both the CLM history file and the RegCM DOMAIN file, as in:
$ GrADSNcPrepare clmoutput.clm2.h0.2000-07-30-00000.nc test_DOMAIN000.nc
A collection of sample grads scripts commonly used at ICTP to plot simulation results can be found in the
Tools/Scripts/GrADS directory under $REGCM_ROOT.
7.3 CISL's NCL: NCAR Command Language
This awesome tool from NCAR is an interpreted language designed for scientific data analysis and visualization.
Noah Diffenbaugh and Mark Snyder have created a website dedicated to visualizing RegCM3 output using
the NCAR Command Language (NCL). These scripts were built using RegCM3 model output converted to
netCDF using an external converter. They have been adapted to serve as very basic example scripts to process
a native RegCM 4.2 output data file or do some data analysis using the NCL language and are available in the
Tools/Scripts/NCL/examples directory. Travis O’Brien from the User Community also contributed sample
scripts, which may be found under the Tools/Scripts/NCL directory.
7.4 R Statistical Computing Language
The R statistical computing language is able, with an add-on package, to load into an internal data structure a
meteorological field read from a netCDF RegCM output file. A sample script to load and plot the 2m temperature
at a selected timestep can be used as a reference to develop a really powerful statistical analysis of model results:
it is under Tools/Scripts/R.
7.5 Non free tools
Note that the netCDF format, using plugins or native capabilities, allows clean access to model output from a
number of non free tools like Matlab™ or IDL™.
For a more complete list of tools, you are invited to scroll down the very long list of tools at:
http://www.unidata.ucar.edu/software/netcdf/software.html
Chapter 8
Getting help and reporting bugs
8.1 The Gforge site
A new welcoming home for the RegCM Community has been built, with the help of the Italian National Research
Council CNR Democritos Group, on the e-science Lab E-Forge web site:
Figure 8.1: The Gforge site
On this site, with a simple registration, you have access to a friendly bug tracking system under the tracker link,
allowing users to post problems and bugs they discover.
It also allows posting files, giving you the opportunity to provide as much information as possible about
the environment the model is running in at your institution, helping us better understand and efficiently solve
your problems.
Help us grow the model to fit your requirements, giving the broader user community the benefit of a valuable
tool to do better research.
Chapter 9
Appendices
We will review here a sample installation session of the software needed to install the RegCM model.
The starting point is a Linux system on a multicore processor box, and the final goal is to have an optimized
system to run the model. I will use bash as my shell and assume that GNU development tools like make, sed and
awk are installed as part of the default Operating System environment, as is the case in most Linux distros. For
convenience I will also require a command line web downloader such as curl installed on the system, along with
its development libraries, to be used to enable the OpenDAP remote data access protocol capabilities of the netCDF
library. Standard file management tools such as tar, gzip and wget are also required. The symbol $> will stand for
a shell prompt. I will assume that the process is performed as a normal system user, who will own the whole
toolchain. From now on I will just be the regcm user.
9.1 Identify Processor
The first step is to identify the processor to know its capabilities:
$> cat /proc/cpuinfo
This command asks the operating system to print processor information. A sample answer on my laptop
is:
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 30
model name : Intel(R) Core(TM) i7 CPU Q 740 @ 1.73GHz
stepping : 5
cpu MHz : 933.000
cache size : 6144 KB
physical id : 0
siblings : 8
core id : 0
cpu cores : 4
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge
mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall
nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology
nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx smx est tm2 ssse3
cx16 xtpr pdcm sse4_1 sse4_2 popcnt lahf_lm ida dts tpr_shadow vnmi
flexpriority ept vpid
bogomips : 3467.81
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
repeated eight times with processor IDs from 0 to 7: I have a quad core Intel with Hyperthreading on (this
multiplies by 2 the reported processor list). The processor also reports support for Intel Streaming SIMD
Extensions V4.2, which can later be used to speed up code execution by vectorizing floating point operations on
any single CPU core.
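For example, to simply count the logical processors seen by the operating system (a quick sanity check, which on
the laptop above would report 8):
$> grep -c '^processor' /proc/cpuinfo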
9.2 Choose compiler
Depending on the processor, we can choose which compiler to use. On a Linux box, we have multiple choices:
GNU Gfortran
G95
Intel ifort compiler
Portland compiler
Absoft ProFortran
NAG Fortran Compiler
and surely others which I may not be aware of. All of these compilers have pros and cons, so for now I am just
selecting one from the pool to continue the exposition. I am not selecting the trivial solution of Gfortran, even
though most Linux distributions have it already packaged, along with all the other required software (the most
complete distribution I am aware of in this respect is Fedora: all needed software is packaged and it is just a matter
of yum install).
So let us assume I have licensed the Intel Composer XE Professional Suite 13.0.0 on my laptop. My system
administrator installed it in the default location under /opt/intel, and I update my shell environment by loading
the vendor provided script:
$> source /opt/intel/bin/compilervars.sh intel64
With some modifications (the path, the script, the arguments to the script), the same step is to be performed for all
non-GNU compilers in the above list, and it is documented in the installation manual of the compiler itself.
In the case of Intel, to check the correct behaviour of the compiler, try typing the following command:
$> ifort --version
ifort (IFORT) 13.0.0 20120731
Copyright (C) 1985-2012 Intel Corporation. All rights reserved.
I am skipping here any problem that may arise from license installation for any of the compilers, so I am
assuming that if the compiler is callable, it works. As this step is usually performed by a system administrator on
the machine, I am assuming a skilled professional will take care of that.
9.3 Environment setup
We will now use the prereq_install shell script provided in the Tools/Scripts directory. Given the above
environment, we can edit the file and uncomment the lines relative to the selected compiler.
# Working CC Compiler
#CC=gcc
CC=icc
#CC=pgcc
# Working C++ Compiler
#CXX=g++
CXX=icpc
#CXX=pgCC
# Working Fortran Compiler
#FC=gfortran
FC=ifort
#FC=pgf90
# Destination directory
DEST=/home/regcm/intelsoft
I am now ready to compile software.
9.4 Pre requisite library installation
The helper script will build the netCDF V4 and MPI libraries to be used to compile the RegCM model.
Then we can just execute the script:
$> ./prereq_install.sh
This script installs the netCDF/mpi librares in the
/home/regcm/intelsoft
directory. If something goes wrong, logs are saved in
/home/regcm/intelsoft/logs
Downloading ZLIB library...
Downloading HDF5 library...
Downloading netCDF Library...
Downloading MPICH2 Library...
Compiling MPI library.
Compiled MPI library.
Compiling zlib Library.
Compiled zlib library.
Compiling HDF5 library.
Compiled HDF5 library.
Compiling netCDF Library.
Compiled netCDF C library.
Compiled netCDF Fortran library.
Done!
To link RegCM with this librares use:
PATH=/home/regcm/intelsoft/bin:$PATH ./configure \
CC=icc FC=ifort \
CPPFLAGS=-I/home/regcm/intelsoft/include \
LDFLAGS=-L/home/regcm/intelsoft/lib \
LIBS="-lnetcdff -lnetcdf -lhdf5_hl -lhdf5 -lz"
The admins who must compile the prerequisite libraries are invited to look at the script, identifying the various
steps. The normal user can be content with the last printed message, which details how to compile the RegCM
model sources against the just built libraries. At run time an environment variable must be set to have the correct
paths:
$> export PATH=/home/regcm/intelsoft/bin:$PATH
The above needs to be repeated for any shell which is used to run RegCM programs, and for convenience it can be
appended to the user shell startup scripts, as in the example below.
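For example, for the bash shell assumed in this appendix, the export line can be made permanent by appending it
to ~/.bashrc (the single quotes prevent $PATH from being expanded at echo time):
$> echo 'export PATH=/home/regcm/intelsoft/bin:$PATH' >> ~/.bashrc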
Bibliography
Giorgi, F., RegCM version 4.1 reference manual, Tech. rep., ICTP, Trieste, 2011.
O'Brien, T. A., L. C. Sloan, and M. A. Snyder, Can ensembles of regional climate model simulations improve
results from sensitivity studies?, Climate Dynamics, 37, 1111–1118, doi:10.1007/s00382-010-0900-5, 2011.
Courant, R., K. Friedrichs, and H. Lewy, Über die partiellen Differenzengleichungen der mathematischen Physik,
Mathematische Annalen, 100(1), 32–74, 1928.
Rew, R. K., and G. P. Davis, NetCDF: an interface for scientific data access, IEEE Computer Graphics and
Applications, 10(4), 76–82, 1990.
GNU Free Documentation License
Version 1.3, 3 November 2008
Copyright (C) 2000, 2001, 2002, 2007, 2008 Free Software Foundation, Inc.
<http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
0. PREAMBLE
The purpose of this License is to make a manual, textbook, or other
functional and useful document "free" in the sense of freedom: to
assure everyone the effective freedom to copy and redistribute it,
with or without modifying it, either commercially or noncommercially.
Secondarily, this License preserves for the author and publisher a way
to get credit for their work, while not being considered responsible
for modifications made by others.
This License is a kind of "copyleft", which means that derivative
works of the document must themselves be free in the same sense. It
complements the GNU General Public License, which is a copyleft
license designed for free software.
We have designed this License in order to use it for manuals for free
software, because free software needs free documentation: a free
program should come with manuals providing the same freedoms that the
software does. But this License is not limited to software manuals;
it can be used for any textual work, regardless of subject matter or
whether it is published as a printed book. We recommend this License
principally for works whose purpose is instruction or reference.
1. APPLICABILITY AND DEFINITIONS
This License applies to any manual or other work, in any medium, that
contains a notice placed by the copyright holder saying it can be
distributed under the terms of this License. Such a notice grants a
world-wide, royalty-free license, unlimited in duration, to use that
work under the conditions stated herein. The "Document", below,
refers to any such manual or work. Any member of the public is a
licensee, and is addressed as "you". You accept the license if you
copy, modify or distribute the work in a way requiring permission
under copyright law.
A "Modified Version" of the Document means any work containing the
Document or a portion of it, either copied verbatim, or with
modifications and/or translated into another language.
A "Secondary Section" is a named appendix or a front-matter section of
the Document that deals exclusively with the relationship of the
publishers or authors of the Document to the Document’s overall
subject (or to related matters) and contains nothing that could fall
directly within that overall subject. (Thus, if the Document is in
part a textbook of mathematics, a Secondary Section may not explain
any mathematics.) The relationship could be a matter of historical
connection with the subject or with related matters, or of legal,
commercial, philosophical, ethical or political position regarding
them.
The "Invariant Sections" are certain Secondary Sections whose titles
are designated, as being those of Invariant Sections, in the notice
that says that the Document is released under this License. If a
section does not fit the above definition of Secondary then it is not
allowed to be designated as Invariant. The Document may contain zero
Invariant Sections. If the Document does not identify any Invariant
Sections then there are none.
The "Cover Texts" are certain short passages of text that are listed,
as Front-Cover Texts or Back-Cover Texts, in the notice that says that
the Document is released under this License. A Front-Cover Text may
be at most 5 words, and a Back-Cover Text may be at most 25 words.
A "Transparent" copy of the Document means a machine-readable copy,
represented in a format whose specification is available to the
general public, that is suitable for revising the document
straightforwardly with generic text editors or (for images composed of
pixels) generic paint programs or (for drawings) some widely available
drawing editor, and that is suitable for input to text formatters or
for automatic translation to a variety of formats suitable for input
to text formatters. A copy made in an otherwise Transparent file
format whose markup, or absence of markup, has been arranged to thwart
or discourage subsequent modification by readers is not Transparent.
An image format is not Transparent if used for any substantial amount
of text. A copy that is not "Transparent" is called "Opaque".
Examples of suitable formats for Transparent copies include plain
ASCII without markup, Texinfo input format, LaTeX input format, SGML
or XML using a publicly available DTD, and standard-conforming simple
HTML, PostScript or PDF designed for human modification. Examples of
transparent image formats include PNG, XCF and JPG. Opaque formats
include proprietary formats that can be read and edited only by
proprietary word processors, SGML or XML for which the DTD and/or
processing tools are not generally available, and the
machine-generated HTML, PostScript or PDF produced by some word
processors for output purposes only.
The "Title Page" means, for a printed book, the title page itself,
plus such following pages as are needed to hold, legibly, the material
this License requires to appear in the title page. For works in
formats which do not have any title page as such, "Title Page" means
the text near the most prominent appearance of the work’s title,
preceding the beginning of the body of the text.
The "publisher" means any person or entity that distributes copies of
the Document to the public.
A section "Entitled XYZ" means a named subunit of the Document whose
title either is precisely XYZ or contains XYZ in parentheses following
text that translates XYZ in another language. (Here XYZ stands for a
specific section name mentioned below, such as "Acknowledgements",
"Dedications", "Endorsements", or "History".) To "Preserve the Title"
of such a section when you modify the Document means that it remains a
section "Entitled XYZ" according to this definition.
The Document may include Warranty Disclaimers next to the notice which
states that this License applies to the Document. These Warranty
Disclaimers are considered to be included by reference in this
License, but only as regards disclaiming warranties: any other
implication that these Warranty Disclaimers may have is void and has
no effect on the meaning of this License.
2. VERBATIM COPYING
You may copy and distribute the Document in any medium, either
commercially or noncommercially, provided that this License, the
copyright notices, and the license notice saying this License applies
to the Document are reproduced in all copies, and that you add no
other conditions whatsoever to those of this License. You may not use
technical measures to obstruct or control the reading or further
copying of the copies you make or distribute. However, you may accept
compensation in exchange for copies. If you distribute a large enough
number of copies you must also follow the conditions in section 3.
You may also lend copies, under the same conditions stated above, and
you may publicly display copies.
3. COPYING IN QUANTITY
If you publish printed copies (or copies in media that commonly have
printed covers) of the Document, numbering more than 100, and the
Document’s license notice requires Cover Texts, you must enclose the
copies in covers that carry, clearly and legibly, all these Cover
Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on
the back cover. Both covers must also clearly and legibly identify
you as the publisher of these copies. The front cover must present
the full title with all words of the title equally prominent and
visible. You may add other material on the covers in addition.
Copying with changes limited to the covers, as long as they preserve
the title of the Document and satisfy these conditions, can be treated
as verbatim copying in other respects.
If the required texts for either cover are too voluminous to fit
legibly, you should put the first ones listed (as many as fit
reasonably) on the actual cover, and continue the rest onto adjacent
pages.
If you publish or distribute Opaque copies of the Document numbering
more than 100, you must either include a machine-readable Transparent
copy along with each Opaque copy, or state in or with each Opaque copy
a computer-network location from which the general network-using
public has access to download using public-standard network protocols
a complete Transparent copy of the Document, free of added material.
If you use the latter option, you must take reasonably prudent steps,
when you begin distribution of Opaque copies in quantity, to ensure
that this Transparent copy will remain thus accessible at the stated
location until at least one year after the last time you distribute an
Opaque copy (directly or through your agents or retailers) of that
edition to the public.
It is requested, but not required, that you contact the authors of the
Document well before redistributing any large number of copies, to
give them a chance to provide you with an updated version of the
Document.
4. MODIFICATIONS
You may copy and distribute a Modified Version of the Document under
the conditions of sections 2 and 3 above, provided that you release
the Modified Version under precisely this License, with the Modified
Version filling the role of the Document, thus licensing distribution
and modification of the Modified Version to whoever possesses a copy
of it. In addition, you must do these things in the Modified Version:
A. Use in the Title Page (and on the covers, if any) a title distinct
from that of the Document, and from those of previous versions
(which should, if there were any, be listed in the History section
of the Document). You may use the same title as a previous version
if the original publisher of that version gives permission.
B. List on the Title Page, as authors, one or more persons or entities
responsible for authorship of the modifications in the Modified
Version, together with at least five of the principal authors of the
Document (all of its principal authors, if it has fewer than five),
unless they release you from this requirement.
C. State on the Title page the name of the publisher of the
Modified Version, as the publisher.
D. Preserve all the copyright notices of the Document.
E. Add an appropriate copyright notice for your modifications
adjacent to the other copyright notices.
F. Include, immediately after the copyright notices, a license notice
giving the public permission to use the Modified Version under the
terms of this License, in the form shown in the Addendum below.
G. Preserve in that license notice the full lists of Invariant Sections
and required Cover Texts given in the Document’s license notice.
H. Include an unaltered copy of this License.
I. Preserve the section Entitled "History", Preserve its Title, and add
to it an item stating at least the title, year, new authors, and
publisher of the Modified Version as given on the Title Page. If
there is no section Entitled "History" in the Document, create one
stating the title, year, authors, and publisher of the Document as
given on its Title Page, then add an item describing the Modified
Version as stated in the previous sentence.
J. Preserve the network location, if any, given in the Document for
public access to a Transparent copy of the Document, and likewise
the network locations given in the Document for previous versions
it was based on. These may be placed in the "History" section.
You may omit a network location for a work that was published at
least four years before the Document itself, or if the original
publisher of the version it refers to gives permission.
K. For any section Entitled "Acknowledgements" or "Dedications",
Preserve the Title of the section, and preserve in the section all
the substance and tone of each of the contributor acknowledgements
and/or dedications given therein.
L. Preserve all the Invariant Sections of the Document,
unaltered in their text and in their titles. Section numbers
or the equivalent are not considered part of the section titles.
M. Delete any section Entitled "Endorsements". Such a section
may not be included in the Modified Version.
N. Do not retitle any existing section to be Entitled "Endorsements"
or to conflict in title with any Invariant Section.
O. Preserve any Warranty Disclaimers.
If the Modified Version includes new front-matter sections or
appendices that qualify as Secondary Sections and contain no material
copied from the Document, you may at your option designate some or all
of these sections as invariant. To do this, add their titles to the
list of Invariant Sections in the Modified Version’s license notice.
These titles must be distinct from any other section titles.
You may add a section Entitled "Endorsements", provided it contains
nothing but endorsements of your Modified Version by various
parties--for example, statements of peer review or that the text has
been approved by an organization as the authoritative definition of a
standard.
You may add a passage of up to five words as a Front-Cover Text, and a
passage of up to 25 words as a Back-Cover Text, to the end of the list
of Cover Texts in the Modified Version. Only one passage of
Front-Cover Text and one of Back-Cover Text may be added by (or
through arrangements made by) any one entity. If the Document already
includes a cover text for the same cover, previously added by you or
by arrangement made by the same entity you are acting on behalf of,
you may not add another; but you may replace the old one, on explicit
permission from the previous publisher that added the old one.
The author(s) and publisher(s) of the Document do not by this License
give permission to use their names for publicity for or to assert or
imply endorsement of any Modified Version.
5. COMBINING DOCUMENTS
You may combine the Document with other documents released under this
License, under the terms defined in section 4 above for modified
versions, provided that you include in the combination all of the
Invariant Sections of all of the original documents, unmodified, and
list them all as Invariant Sections of your combined work in its
license notice, and that you preserve all their Warranty Disclaimers.
The combined work need only contain one copy of this License, and
multiple identical Invariant Sections may be replaced with a single
copy. If there are multiple Invariant Sections with the same name but
different contents, make the title of each such section unique by
adding at the end of it, in parentheses, the name of the original
author or publisher of that section if known, or else a unique number.
Make the same adjustment to the section titles in the list of
Invariant Sections in the license notice of the combined work.
In the combination, you must combine any sections Entitled "History"
in the various original documents, forming one section Entitled
"History"; likewise combine any sections Entitled "Acknowledgements",
and any sections Entitled "Dedications". You must delete all sections
Entitled "Endorsements".
6. COLLECTIONS OF DOCUMENTS
You may make a collection consisting of the Document and other
documents released under this License, and replace the individual
copies of this License in the various documents with a single copy
that is included in the collection, provided that you follow the rules
of this License for verbatim copying of each of the documents in all
other respects.
You may extract a single document from such a collection, and
distribute it individually under this License, provided you insert a
copy of this License into the extracted document, and follow this
License in all other respects regarding verbatim copying of that
document.
7. AGGREGATION WITH INDEPENDENT WORKS
A compilation of the Document or its derivatives with other separate
and independent documents or works, in or on a volume of a storage or
distribution medium, is called an "aggregate" if the copyright
resulting from the compilation is not used to limit the legal rights
of the compilation’s users beyond what the individual works permit.
When the Document is included in an aggregate, this License does not
apply to the other works in the aggregate which are not themselves
derivative works of the Document.
If the Cover Text requirement of section 3 is applicable to these
copies of the Document, then if the Document is less than one half of
the entire aggregate, the Document’s Cover Texts may be placed on
covers that bracket the Document within the aggregate, or the
electronic equivalent of covers if the Document is in electronic form.
Otherwise they must appear on printed covers that bracket the whole
aggregate.
8. TRANSLATION
Translation is considered a kind of modification, so you may
distribute translations of the Document under the terms of section 4.
Replacing Invariant Sections with translations requires special
permission from their copyright holders, but you may include
translations of some or all Invariant Sections in addition to the
original versions of these Invariant Sections. You may include a
translation of this License, and all the license notices in the
Document, and any Warranty Disclaimers, provided that you also include
the original English version of this License and the original versions
of those notices and disclaimers. In case of a disagreement between
the translation and the original version of this License or a notice
or disclaimer, the original version will prevail.
If a section in the Document is Entitled "Acknowledgements",
"Dedications", or "History", the requirement (section 4) to Preserve
its Title (section 1) will typically require changing the actual
title.
9. TERMINATION
You may not copy, modify, sublicense, or distribute the Document
except as expressly provided under this License. Any attempt
otherwise to copy, modify, sublicense, or distribute it is void, and
will automatically terminate your rights under this License.
However, if you cease all violation of this License, then your license
from a particular copyright holder is reinstated (a) provisionally,
unless and until the copyright holder explicitly and finally
terminates your license, and (b) permanently, if the copyright holder
fails to notify you of the violation by some reasonable means prior to
60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, receipt of a copy of some or all of the same material does
not give you any rights to use it.
10. FUTURE REVISIONS OF THIS LICENSE
The Free Software Foundation may publish new, revised versions of the
GNU Free Documentation License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in
detail to address new problems or concerns. See
http://www.gnu.org/copyleft/.
Each version of the License is given a distinguishing version number.
If the Document specifies that a particular numbered version of this
License "or any later version" applies to it, you have the option of
following the terms and conditions either of that specified version or
of any later version that has been published (not as a draft) by the
Free Software Foundation. If the Document does not specify a version
number of this License, you may choose any version ever published (not
as a draft) by the Free Software Foundation. If the Document
specifies that a proxy can decide which future versions of this
License can be used, that proxy’s public statement of acceptance of a
version permanently authorizes you to choose that version for the
Document.
11. RELICENSING
"Massive Multiauthor Collaboration Site" (or "MMC Site") means any
World Wide Web server that publishes copyrightable works and also
provides prominent facilities for anybody to edit those works. A
public wiki that anybody can edit is an example of such a server. A
"Massive Multiauthor Collaboration" (or "MMC") contained in the site
means any set of copyrightable works thus published on the MMC site.
"CC-BY-SA" means the Creative Commons Attribution-Share Alike 3.0
license published by Creative Commons Corporation, a not-for-profit
corporation with a principal place of business in San Francisco,
California, as well as future copyleft versions of that license
published by that same organization.
"Incorporate" means to publish or republish a Document, in whole or in
part, as part of another Document.
An MMC is "eligible for relicensing" if it is licensed under this
License, and if all works that were first published under this License
somewhere other than this MMC, and subsequently incorporated in whole or
in part into the MMC, (1) had no cover texts or invariant sections, and
(2) were thus incorporated prior to November 1, 2008.
The operator of an MMC Site may republish an MMC contained in the site
under CC-BY-SA on the same site at any time before August 1, 2009,
provided the MMC is eligible for relicensing.
ADDENDUM: How to use this License for your documents
To use this License in a document you have written, include a copy of
the License in the document and put the following copyright and
license notices just after the title page:
Copyright (c) YEAR YOUR NAME.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.3
or any later version published by the Free Software Foundation;
with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
A copy of the license is included in the section entitled "GNU
Free Documentation License".
If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts,
replace the "with...Texts." line with this:
with the Invariant Sections being LIST THEIR TITLES, with the
Front-Cover Texts being LIST, and with the Back-Cover Texts being LIST.
If you have Invariant Sections without Cover Texts, or some other
combination of the three, merge those two alternatives to suit the
situation.
If your document contains nontrivial examples of program code, we
recommend releasing these examples in parallel under your choice of
free software license, such as the GNU General Public License,
to permit their use in free software.