
The Common Land Model (CoLM)
Technical & User Guide



Yongjiu Dai & Duoying Ji





School of Geography
Beijing Normal University
Beijing 100875
China




E-mail:
yongjiudai@

duoyingji@







July 7, 2008
Contents

1. Introduction
2. Creating and Running the Executable
2.1 Specification of script environment variables and header file
2.2 Compiling the surface data making, initial data making, time-loop calculation programs
2.3 Surface data making
2.4 Initial data making
2.5 Time-loop calculation
3. CoLM Surface Dataset
4. CoLM Atmospheric Forcing Dataset

4.1 GSWP2 forcing dataset
4.2 PRINCETON forcing dataset
4.3 Temporal interpolation of the forcing data
5. CoLM Model Structure and Parallel Implementation

5.1 CoLM Model Structure

5.2 CoLM MPI Parallel Design

5.3 CoLM MPI Parallel Implementation

5.4 CoLM Source Code and Subroutines Outline
6. CoLM Parameter and Variables
6.1 Model Parameters
6.2 Time invariant model variables
6.3 TUNABLE constants
6.4 Time-varying state variables

6.5 Forcing
6.6 Fluxes
7. Examples

7.1 Single Point Offline Experiment

7.2 Global Offline Experiment with GSWP2 Dataset


Table 1: Model directory structure
Table 2: define.h CPP tokens
Table 3: Namelist variables for initial data making
Table 4: Namelist variables for Time-loop calculation
Table 5: The list of raw data available
Table 6: Description of 24-category (USGS) vegetation categories
Table 7: Description of 17-category soil categories
Table 8: The relative amounts of sand, silt, and clay
Table 9: netCDF File Information of the Processed Atmospheric Forcing Data


Table 10: Source code and Subroutines Outline
Table 11: Dimension of model array
Table 12: Control variables to determine updating on time steps
Table 13: Model time invariant variables
Table 14: Model TUNABLE constants
Table 15: Run calendar
Table 16: Time-varying Variables for restart run
Table 17: Atmospheric Forcing
Table 18: Model output in xy Grid Form

Figure 1: Flow chart of the surface data making
Figure 2: Flow chart of the initial data making
Figure 3: Flow chart of the time-looping calculation
Figure 4: Diagram of the domain partition at surface data making
Figure 5: Diagram of the domain partition at time-looping calculation
Figure 6: Diagram of the patches and grids mapping relationship

1. Introduction

This user's guide provides the coding implementation and operating instructions for the Common Land Model (CoLM), the land surface parameterization used in offline mode or coupled with global and regional climate models.

The development of the Common Land Model (hereafter the CLM initial version) can be described as a community effort. Initial software specifications and development focused on evaluating the best features of existing land models. Model performance has been validated against extensive field data, including sites adopted by the Project for Intercomparison of Land-surface Parameterization Schemes (Cabauw, Valdai, Red-Arkansas river basin) and others (FIFE, BOREAS, HAPEX-MOBILHY, ABRACOS, Sonoran Desert, GSWP, LDAS). The model has been coupled with the NCAR Community Climate Model (CCM3). Documentation for the CLM initial version is provided by Dai et al. (2001), while the coupling with CCM3 is described in Zeng et al. (2002). The model was introduced to the modeling community in Dai et al. (2003).

The CLM initial version was adopted as the Community Land Model (CLM2.0) for use with the Community Atmosphere Model (CAM2.0) and version 2 of the Community Climate System Model (CCSM2.0). The current version of the Community Land Model, CLM3.0, was released in June 2004 as part of the CCSM3.0 release (/models/ccsm3.0/clm3/). The Community Land Model (CLM3.0) is radically different from the CLM initial version, particularly from a software engineering perspective, and includes great advancements in the areas of carbon cycling, vegetation dynamics, and river routing. The major differences between CLM2.0 and the CLM initial version are: 1) the biome-type land cover classification scheme was replaced with a plant functional type (PFT) representation, with the specification of PFTs and leaf area index from satellite data; 2) new parameterizations for vegetation albedo and vertical burying of vegetation by snow; 3) revised canopy scaling, leaf physiology, and soil water limitations on photosynthesis, to resolve deficiencies indicated by the coupling to a dynamic vegetation model; 4) vertical heterogeneity in soil texture was implemented to improve coupling with a dust emission model; 5) a river routing model was incorporated to improve the fresh water balance over oceans; 6) numerous modest changes were made to the parameterizations to conform to the strict energy and water balance requirements of CCSM; and 7) further substantial software development was required to meet coding standards. Beyond the software engineering changes, the differences between CLM3.0 and CLM2.0 are: 1) several improvements to biogeophysical parameterizations to correct deficiencies; 2) stability terms were added to the formulation for 2-m air temperature; 3) the equation relating the bulk density of newly fallen snow to atmospheric temperature was modified to remove a discontinuity; 4) a new formulation was implemented that provides for variable aerodynamic resistance with canopy density; 5) the vertical distribution of lake layers was modified to allow more accurate computation of ground heat flux; 6) a fix was implemented for negative round-off-level soil ice caused by sublimation; and 7) a fix was implemented to correct roughness lengths for non-vegetated areas. Documentation for the Community Land Model (CLM3.0) is provided by Oleson et al. (2004). Simulations of CLM2.0 coupled with the Community Climate Model are described in Bonan et al. (2002). Simulations of CLM3.0 with the Community Climate System Model (CCSM3.0) are summarized in the Special Issue of Journal of Climate by Dickinson et al. (2005) and Bonan and Levis (2005).

Concurrent with the development of the Community Land Model, the CLM initial version was undergoing further development at the Georgia Institute of Technology and Beijing Normal University in leaf temperature, photosynthesis, and stomatal calculations. The big-leaf treatment of the CLM initial version and CLM3.0, which treats a canopy as a single leaf, tends to overestimate fluxes of CO2 and water vapor. Models that differentiate between sunlit and shaded leaves largely overcome these problems. A one-layered, two-big-leaf submodel for photosynthesis, stomatal conductance, leaf temperature, and energy fluxes was therefore added to the CLM initial version; it is not in CLM3.0. It includes: 1) an improved two-stream approximation model of radiation transfer of the canopy, with attention to singularities in its solution and with separate integrations of radiation absorption by sunlit and shaded fractions of the canopy; 2) a photosynthesis-stomatal conductance model applied to sunlit and shaded leaves separately, and to the simultaneous transfers of CO2 and water vapor into and out of the leaf; leaf physiological properties (i.e., leaf nitrogen concentration, maximum potential electron transport rate, and hence photosynthetic capacity) vary throughout the plant canopy in response to the radiation-weighted time-mean profile of photosynthetically active radiation (PAR), the soil water limitation is applied to both maximum rates of leaf carbon uptake by Rubisco and electron transport, and the model scales up from leaf to canopy separately for all sunlit and shaded leaves; and 3) a well-built quasi-Newton-Raphson method for the simultaneous solution of the temperatures of the sunlit and shaded leaves. To avoid confusion with the Community Land Model (the CLM2.0 and CLM3.0 versions), we name this improved version of the Common Land Model CoLM.

This was the same model as the one now supported at NCAR. NCAR made extensive modifications, mostly for better compatibility with the NCAR CCM but some for better backward compatibility with previous work with the NCAR LSM. For the purpose of use in a variety of other GCMs and mesoscale models, this adds a layer of complexity that may be unnecessary. Thus we have continued testing further developments with the CLM initial version. Some changes suggested by the Land Model working groups of CCSM are also implemented, such as stability terms in the formulation for 2-m air temperature and a new formulation for variable aerodynamic resistance with canopy density. CoLM is radically different from the CLM initial version, CLM2.0, and CLM3.0; the differences can be summarized as follows:
1) Two-big-leaf model for leaf temperatures and photosynthesis-stomatal resistance;
2) Two-stream approximation for canopy albedo calculation, with a solution for the singularity point and separate radiation calculations for the sunlit and shaded canopy;
3) New numerical iteration scheme for the leaf temperature calculation;
4) New treatment of canopy interception that considers the fractions of convective and large-scale precipitation;
5) Soil thermal and hydrological processes that consider the depth to bedrock;
6) Surface runoff and sub-surface runoff;
7) Rooting fraction and the water stress on transpiration;
8) Use of a grass-tile 2-m-height air temperature in place of an area average, to match routine meteorological observations;
9) Perfect energy and water balance within every time step;
10) A slab ocean-sea ice model;
11) A completely new CoLM coding structure.

The development of CoLM aims to provide a version for public use and further development, and to share the improvements contributed by many groups.

The source code and datasets required to run the CoLM in offline mode can be obtained via the web from: /

The CoLM distribution consists of three tar files: CoLM_ , CoLM_src_ , and CoLM_ . The files CoLM_ and CoLM_src_ contain code and scripts; the file CoLM_ is the serial version of the CoLM and the file CoLM_src_ is the parallel version of the CoLM. The file CoLM_ contains the raw data used to make the model surface data. Table 1 lists the directory structure of the parallel version of the model.

Table 1: Model directory structure

Directory Name     Description
colm/rawdata/      Raw data used by CoLM to generate surface datasets at
                   model resolution. We are currently providing 5 surface
                   datasets with resolution 30 arc second:
                   DEM-USGS.30s
                   LWMASK-USGS.30s (not used)
                   SOILCAT.30s
                   SOILCATB.30s
                   VEG-USGS.30s
                   BEDROCKDEPTH (not available)
                   LAI (not available)
colm/data/         Atmospheric forcing variables suitable for running the
                   model in offline mode
colm/mksrfdata/    Routines for generating surface datasets
colm/mkinidata/    Routines for generating initial datasets
colm/main/         Routines for executing the time-loop calculation of soil
                   temperatures, water contents and surface fluxes
colm/run/          Script to build and execute the model
colm/graph/        GrADS & NCL files for displaying the history files
colm/interp/       Temporal interpolation routines used for the GSWP2 &
                   PRINCETON atmospheric forcing datasets
colm/tools/        Useful programs related to model running


The scientific description of CoLM is given in:

[1]. Dai, Y., R.E. Dickinson, and Y.-P. Wang, 2004: A two-big-leaf model for canopy temperature, photosynthesis and stomatal conductance. Journal of Climate, 17: 2281-2299.
[2]. Oleson, K.W., Y. Dai, G. Bonan, M. Bosilovich, R.E. Dickinson, P. Dirmeyer, F. Hoffman, P. Houser, S. Levis, G. Niu, P. Thornton, M. Vertenstein, Z.-L. Yang, and X. Zeng, 2004: Technical Description of the Community Land Model (CLM). NCAR/TN-461+STR.
[3]. Dai, Y., X. Zeng, R.E. Dickinson, I. Baker, G. Bonan, M. Bosilovich, S. Denning, P. Dirmeyer, P. Houser, G. Niu, K. Oleson, A. Schlosser, and Z.-L. Yang, 2003: The Common Land Model (CLM). Bull. Amer. Meteor. Soc., 84: 1013-1023.
[4]. Dai, Y., X. Zeng, and R.E. Dickinson, 2002: The Common Land Model: Documentation and User's Guide (/dickinson/).

We value the responses and experiences of our collaborators in using CoLM and encourage their feedback on problems in the current model formulation and coding, as well as insight and suggestions for future model refinement and enhancement. It would be particularly helpful if users would communicate such feedback informally and, where possible, share with us documented model applications, including manuscripts, papers, procedures, or individual model developments.

2. Creating and Running the Executable

The CoLM model can run as a stand-alone executable where atmospheric forcing data is periodically read in. It can also be run as part of an atmosphere model, where communication between the atmospheric and land models occurs via subroutine calls or a special coupler. In this User's Guide we focus on the parallel version of CoLM; most of the scripts and settings of the serial version are similar to those of the parallel version, and even simpler.

In order to build and run the CoLM in offline mode, two sample scripts, and jobclm_ , and the corresponding Makefile files are provided in the run and source code directories, respectively.
The scripts, and jobclm_ , create a model executable, determine the necessary input datasets, and construct the input model namelist. Users must edit these scripts appropriately in order to build and run the executable for their particular requirements and in their particular environment. These scripts are provided only as examples, to help the novice user get the CoLM up and running as quickly as possible. The script jobclm_ , used to do a single-point offline simulation experiment, can be run with minimal user modification, assuming the user resets several environment variables at the top of the script. In particular, the user must set ROOTDIR to point to the full disk pathname of the model root directory. The other script is used to do a global or regional offline simulation experiment and usually must be modified heavily to fulfill different requirements. In the following part we explain it in detail.
The script can be divided into five sections:
1) Specification of script environment variables, creating the header file define.h;
2) Compiling the surface data making, initial data making, and time-loop calculation programs;
3) Surface data making, including input namelist creation;
4) Initial data making, including input namelist creation;
5) Time-loop calculation, including input namelist creation.

2.1 Specification of script environment variables

The user will generally not need to modify this section of the script, except to:
1) set the model domain edges and the basic computer architecture,
2) set the model path directories,
3) create the subdirectory for output, and
4) create the header file $CLM_INCDIR/define.h.

BOX 1: EXAMPLE FOR SPECIFICATION OF SCRIPT ENVIRONMENT VARIABLES

# set the basic computer architecture for the model running
#setenv ARCH ibm
setenv ARCH intel

# set the model domain for north, east, south, west edges
setenv EDGE_N 90.
setenv EDGE_E 180.
setenv EDGE_S -90.
setenv EDGE_W -180.

# set the number of grids of the CoLM and the forcing dataset at longitude and latitude directions
setenv NLON_CLM 360
setenv NLAT_CLM 180
setenv NLON_MET 360
setenv NLAT_MET 180

# set the number of processes used for parallel computing, MPI related.
setenv TASKS 24

# The user has to modify ROOTDIR to his/her root directory, for example, /people.
setenv ROOTDIR /people/$LOGNAME

# 1) set clm include directory root
setenv CLM_INCDIR $ROOTDIR/colm/include

# 2) set clm raw land data directory root
setenv CLM_RAWDIR $ROOTDIR/colm/rawdata

# 3) set clm surface data directory root
setenv CLM_SRFDIR $ROOTDIR/colm/mksrfdata

# 4) set clm input data directory root
setenv CLM_DATADIR $ROOTDIR/colm/data

# 5) set clm initial directory root
setenv CLM_INIDIR $ROOTDIR/colm/mkinidata

# 6) set clm source directory root
setenv CLM_SRCDIR $ROOTDIR/colm/main

# 7) set executable directory
setenv CLM_EXEDIR $ROOTDIR/colm/run

# 8) create output directory
setenv CLM_OUTDIR $ROOTDIR/colm/output
mkdir -p $CLM_OUTDIR >/dev/null

#------------------------------------------------------
# build define.h in ./include directory
#------------------------------------------------------
cat >! .tmp << EOF
#undef coup_atmosmodel
#undef RDGRID
#undef SOILINI
#define offline
#undef BATS
#undef SIB2
#undef IGBP
#define USGS
#define EcoDynamics
#define LANDONLY
#undef LAND_SEA
#undef SINGLE_POINT
#undef MAPMASK
#define NCDATA
#define PRINCETON
#undef GSWP2
#undef DOWNSCALING
#define WR_MONTHLY
EOF

if ($TASKS > 1) then
cat >> .tmp << EOF
#define MPI
EOF
endif

cmp -s .tmp $CLM_INCDIR/define.h || mv -f .tmp $CLM_INCDIR/define.h

The ARCH variable sets the architecture for the model run; in the following sections of the script, the make command uses the ARCH variable to invoke a different Makefile to compile the model. The four variables EDGE_N, EDGE_E, EDGE_S, and EDGE_W locate the model domain edges, and are used especially in the model surface data making. The number of model grid points in the latitude and longitude directions is set by NLAT_CLM and NLON_CLM; these are also used for surface data making. The number of forcing dataset grid points in the latitude and longitude directions is set by NLAT_MET and NLON_MET; these help do some simple forcing data downscaling when the model grids do not exactly match the forcing dataset grids. The number of processors involved in the parallel computing is set by the TASKS environment variable; if TASKS is greater than one, the MPI cpp token will be specified in define.h automatically and the MPI parallel function will be built into the model. Users can modify this logic according to their own requirements.
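As a quick illustration of how the edge and grid-count variables fit together, the sketch below (Python, purely illustrative; the function name and the assumption of a regular latitude-longitude grid are ours, not part of the CoLM scripts) computes the grid spacing implied by a given domain and grid count:

```python
def grid_spacing(edge_n, edge_s, edge_w, edge_e, nlat, nlon):
    """Return (dlat, dlon) in degrees for a regular lat-lon grid
    spanning the given domain edges with nlat x nlon cells."""
    dlat = (edge_n - edge_s) / nlat
    dlon = (edge_e - edge_w) / nlon
    return dlat, dlon

# The global setup from BOX 1: EDGE_N=90., EDGE_S=-90., EDGE_W=-180.,
# EDGE_E=180., NLAT_CLM=180, NLON_CLM=360 gives a 1-degree grid.
dlat, dlon = grid_spacing(90., -90., -180., 180., 180, 360)
print(dlat, dlon)  # 1.0 1.0
```

Doubling NLAT_CLM and NLON_CLM while keeping the same edges halves the spacing, which is why the edges and grid counts must be chosen together.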


The file define.h contains model-dependent C-language cpp tokens. C-preprocessor directives of the form #include, #if defined, etc., are used in the model source code to enhance code portability and allow for the implementation of distinct blocks of functionality (such as incorporation of different modes) within a single file. The header file, define.h, is included with #include statements within the source code. When the make command is invoked, the C preprocessor includes or excludes blocks of code depending on which cpp tokens have been defined in define.h.

Table 2: define.h CPP tokens

define.h cpp token   Description
offline        If defined, offline mode is invoked
RDGRID         If defined, the latitude and longitude of model grids are
               provided by input data
USGS           If defined, the USGS 24-category land cover legend is used
IGBP           If defined, the IGBP 17-category land cover legend is used
SiB2           If defined, the SiB2 11-category land cover legend is used
BATS           If defined, the BATS 19-category land cover legend is used
EcoDynamics    If defined, the dynamic vegetation model is activated
LANDONLY       If defined, only land grids are activated
LAND_SEA       If defined, land and sea grids are activated
MAPMASK        If defined, users should supply a base map file to locate
               the specific region
NCDATA         If defined, a netCDF-format atmospheric forcing dataset is
               read; currently only the GSWP2 & PRINCETON datasets are
               supported
PRINCETON      If defined, the PRINCETON dataset is used. Depends on the
               NCDATA token.
GSWP2          If defined, the GSWP2 dataset is used. Depends on the
               NCDATA token.
DOWNSCALING    If defined, a simple downscaling method is used to re-grid
               the forcing data, usually in high-resolution simulation
               experiments
MPI            If defined, the MPI parallel function is built into the
               model; this token is set automatically by the script
               according to the TASKS environment variable
WR_HOURLY      If defined, the history file is written at every time step
WR_DAILY       If defined, the history file is written as daily averages
WR_MONTHLY     If defined, the history file is written as monthly averages

2.2 Compiling the surface data making, initial data making, time-loop calculation programs

BOX 2: EXAMPLE FOR COMPILING THE MODEL

echo 'Compiling mksrfdata...'
cd $CLM_SRFDIR

make -f Makefile.${ARCH} clean
make -f Makefile.${ARCH} >>& $CLM_EXEDIR/ || exit 5

cp -f $CLM_SRFDIR/srf.x $CLM_EXEDIR/srf.x

echo 'Compiling mkinidata...'
cd $CLM_INIDIR

make -f Makefile.${ARCH} clean
make -f Makefile.${ARCH} >>& $CLM_EXEDIR/ || exit 5

cp -f $CLM_INIDIR/initial.x $CLM_EXEDIR/initial.x

echo 'Compiling main...'
cd $CLM_SRCDIR

make -f Makefile.${ARCH} clean
make -f Makefile.${ARCH} >>& $CLM_EXEDIR/ || exit 5

cp -f $CLM_SRCDIR/clm.x $CLM_EXEDIR/clm.x


In each source code directory of the model there are two Makefiles. The make command uses the ARCH environment variable to select the right Makefile to compile the model, including the surface data making program, the initial data making program, and the time-loop main program. After successful compilation, three executable files named srf.x, initial.x and clm.x should appear in the $CLM_EXEDIR directory. If something goes wrong, users can refer to the log file in the $CLM_EXEDIR directory to figure out the problem.

2.3 Surface data making: input namelist creation and execution

In this part the namelist is created first; it directs the surface making program in how to produce the surface data. The model surface data, fsurdat, is created using the high-resolution raw surface datasets, i.e., fgridname, fmaskname, flandname, fsolaname, and fsolbname. If the RDGRID cpp token is defined, fgridname should point to the file that contains the model grid information, including the latitude & longitude of all grid centers; otherwise fgridname is left blank. fmaskname points to the land and ocean mask file, fsolaname points to the upper-layer soil category dataset (0-30 cm), and fsolbname points to the deeper-layer soil category dataset (30-100 cm). Currently all of these datasets come from USGS. flandname points to the land cover category classification dataset; currently CoLM supports four land category legends, USGS, IGBP, SiB2, and BATS, and each one can be selected by modifying the define.h header file. In the default CoLM_ dataset we only provide the USGS land cover category dataset; users can download other land cover category datasets from /glcc or contact us.

Users who want to simulate a limited region (domain) that is not a regular shape, e.g., a city or state, can use the file fmapmask to specify a base map file. This file should be a zero/one land mask in which the value one fills the region of interest; in the surface making process the program takes this into account and drops the uninteresting area. The fmapmask file should be at the same resolution as flandname, fsolaname, fsolbname, etc. A similar file is fmetmask, which is used to filter out points without atmospheric forcing data; it is also a zero/one land mask, but at the resolution of the model, and the points without forcing data are likewise dropped.
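The filtering described above can be sketched in a few lines (illustrative Python, not the model's Fortran; for simplicity all three masks are assumed here to live on one common grid, whereas in CoLM fmapmask is at raw-data resolution and fmetmask at model resolution):

```python
def apply_masks(land, basemap_mask, met_mask):
    """Keep only cells that are land (1), inside the base map (1),
    and covered by the forcing dataset (1); all inputs are 0/1 grids
    given as nested lists, and a True in the result means 'kept'."""
    return [[l == 1 and b == 1 and m == 1
             for l, b, m in zip(lrow, brow, mrow)]
            for lrow, brow, mrow in zip(land, basemap_mask, met_mask)]

# A 2x2 toy domain: only the top-left cell survives all three masks.
land    = [[1, 1], [1, 0]]
basemap = [[1, 0], [1, 1]]
met     = [[1, 1], [0, 1]]
print(apply_masks(land, basemap, met))  # [[True, False], [False, False]]
```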

A regular grid surface dataset can be generated for a single gridcell or for gridcells comprising a regional or global domain: lon_points = 1, lat_points = 1 for a single-gridcell simulation, or lon_points = nx, lat_points = ny for an nx x ny model grid simulation. The model resolution is defined by the model grid (lon_points, lat_points) and the domain edges, i.e.,
edgen: northern edge of model domain (degrees north)
edges: southern edge of model domain (degrees south)
edgew: western edge of model domain (degrees west)
edgee: eastern edge of model domain (degrees east)


The surface making program is parallelized using MPI, so developers who want to add new functionality should take this into account.

BOX 3: EXAMPLE FOR SURFACE DATA MAKING

cd $CLM_EXEDIR

# Create an input parameter namelist file for srf.x

cat >! $CLM_EXEDIR/ << EOF
&mksrfexp
fmetmask = '$CLM_DATADIR/gswp_mask'
fmapmask = '/c2/data/CN_basemap/chinamap'
fgridname = ' '
fdemname = '$CLM_RAWDIR/DEM-USGS.30s'
fmaskname = '$CLM_RAWDIR/LWMASK-USGS.30s'
flandname = '$CLM_RAWDIR/VEG-USGS.30s'
fsolaname = '$CLM_RAWDIR/SOILCAT.30s'
fsolbname = '$CLM_RAWDIR/SOILCATB.30s'
fsurdat = '$CLM_DATADIR/srfdata.1deg'
lon_points = $NLON_CLM
lat_points = $NLAT_CLM
edgen = $EDGE_N
edgee = $EDGE_E
edges = $EDGE_S
edgew = $EDGE_W
nlon_metdat = $NLON_MET
nlat_metdat = $NLAT_MET
/
EOF

echo 'Executing CLM Making Surface Data'

if ($TASKS > 1) then
mpirun -prefix $CLM_EXEDIR/ >& $CLM_EXEDIR/ || exit 5
else
$CLM_EXEDIR/srf.x < $CLM_EXEDIR/ >& $CLM_EXEDIR/ || exit 5
endif

echo 'CLM Making Surface Data Completed'


2.4 Initial data making: input namelist creation and execution

Upon successful completion of the surface data making on the model grid and patches, the surface data file has been generated in CLM_DATADIR. This section makes the model time-constant variables and time-varying variables on the model grids and patches.

Table 3: Namelist Variables for Initial data making

Name            Description                                       Type       Notes
site            case name                                         character
greenwich       true: greenwich time, false: local time           logical    required
start_yr        starting date for run in year                     integer    required
start_jday      starting date for run in julian day               integer    required
start_sec       starting seconds of the day for run in seconds    integer    required
fsurdat         full pathname of surface dataset                  character  required
                (for example, '$CLM_DATADIR/')
flaidat         full pathname of the leaf and stem area index     character
                dataset
fmetdat         full pathname of the meteorological data          character  required
                (for example, '$CLM_DATADIR/')
fhistTimeConst  full pathname of time-invariant dataset           character  required
                (for example, '$CLM_OUTDIR/VALDAI-rstTimeConst')
fhistTimeVar    full pathname of time-varying dataset             character  required
                (for example, '$CLM_OUTDIR/VALDAI-rstTimeVar')
foutdat         full pathname of output dataset                   character  required
                (for example, '$CLM_OUTDIR/VALDAI')
finfolist       full pathname of run information                  character  required
                (for example, '$CLM_EXEDIR/st')
lon_points      number of longitude points on model grid          integer    required
lat_points      number of latitude points on model grid           integer    required
deltim          time step of the run in second                    real       required
mstep           total model step for the run                      integer    required
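The deltim/mstep pair in the example below can be sanity-checked with simple arithmetic (a hedged sketch; it assumes deltim divides a day evenly and says nothing about how the model itself handles calendars):

```python
def steps_for_run(n_days, deltim):
    """Number of model time steps needed to cover n_days of simulation
    at a time step of deltim seconds (deltim must divide a day evenly)."""
    seconds_per_day = 86400
    assert seconds_per_day % deltim == 0
    return n_days * (seconds_per_day // deltim)

# At deltim = 1800 s there are 48 steps per day, so the example value
# mstep = 931104 corresponds to 931104 / 48 = 19398 simulated days.
print(steps_for_run(19398, 1800))  # 931104
```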
BOX 4: EXAMPLE FOR INITIAL DATA MAKING

# Create an input parameter namelist file for initial.x

cat >! $CLM_EXEDIR/ << EOF
&clminiexp
site = 'GLOBAL'
greenwich = .true.
start_yr = 1948
start_jday = 1
start_sec = 1800
fsurdat = '$CLM_DATADIR/srfdata.1deg'
flaidat = ' '
fsoildat = '$CLM_DATADIR/soilini'
fmetdat = '/disk2/jidy/princeton_30min'
fhistTimeConst = '$CLM_OUTDIR/GLOBAL-rstTimeConst'
fhistTimeVar = '$CLM_OUTDIR/GLOBAL-rstTimeVar'
foutdat = '$CLM_OUTDIR/GLOBAL'
finfolist = '$CLM_EXEDIR/st'
lon_points = $NLON_CLM
lat_points = $NLAT_CLM
nlon_metdat = $NLON_MET
nlat_metdat = $NLAT_MET
deltim = 1800
mstep = 931104
/
EOF

echo 'Executing CLM Initialization'

$CLM_EXEDIR/initial.x < $CLM_EXEDIR/ >& $CLM_EXEDIR/l || exit 5

echo 'CLM Initialization Completed'



2.5 Time-loop calculation: input namelist creation and execution

Upon successful completion of the surface data and initial data making, files for the time-constant variables, the time-varying variables, and the namelist have been generated as '$CLM_OUTDIR/VALDAI-rstTimeConst', '$CLM_OUTDIR/VALDAI-rstTimeVar', and '$CLM_EXEDIR/st'. These include surface data and initialization files as well as the namelist file for the model time-loop execution. The variables in the namelist file st are specified as in Table 4:

Table 4: Namelist Variables for Time-loop Calculation

Name            Description                                       Type
site            case name                                         character
flaidat         full pathname of the leaf and stem area index     character
                dataset
fmetdat         full pathname of the meteorological data          character
                (for example, '$CLM_DATADIR/')
fhistTimeConst  full pathname of time-invariant dataset           character
                (for example, '$CLM_OUTDIR/VALDAI-rstTimeConst')
fhistTimeVar    full pathname of time-varying dataset             character
                (for example, '$CLM_OUTDIR/VALDAI-rstTimeVar')
foutdat         full pathname of output dataset                   character
                (for example, '$CLM_OUTDIR/VALDAI')
lhistTimeConst  logical unit number of restart time-invariant     integer
                file
lhistTimeVar    logical unit number of restart time-varying file  integer
lulai           logical unit number of LAI data                   integer
lumet           logical unit number of meteorological forcing     integer
luout           logical unit number of output                     integer
lon_points      number of longitude points on model grid          integer
lat_points      number of latitude points on model grid           integer
numpatch        total number of patches of grids                  integer
deltim          time step of the run in second                    real
mstep           total model step for the run                      integer
spinup_dy       number of days to spin-up                         integer
spinup_yr       number of years to spin-up                        integer
fmetelev        full pathname of the grid elevation of the        character
                atmospheric forcing dataset
nlon_metdat     number of grids of atmospheric forcing data at    integer
                longitude direction
nlat_metdat     number of grids of atmospheric forcing data at    integer
                latitude direction
As the following example shows, the namelist file used to run the time-loop part of the CoLM model is created by the initial data making program, according to the patch number and other specified information. Before running the CoLM time-loop program, a flux-export namelist is created; this namelist directs the CoLM history output. Sometimes, especially in high-resolution cases, the model produces a large amount of output data and most variables in the history data are not needed; this namelist handles that situation by filtering out unneeded variables. Each variable headed with a '+' sign will be exported as normal; each variable headed with a '-' sign will be dropped. When using the graph scripts in the graph/ directory, however, users should modify them to match this namelist.
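The '+'/'-' selection rule can be expressed compactly (an illustrative Python rendering of the filtering logic; in the model itself this is done by the csh machinery shown in BOX 5, where the ordering of the FLUX array must not change):

```python
def select_fluxes(flux_spec):
    """Given FLUX-style entries ('+name' exports, '-name' drops),
    return the names that will appear in the history output,
    preserving their original order."""
    return [v[1:] for v in flux_spec if v.startswith('+')]

spec = ['+taux', '+tauy', '-fsena', '+lfevpa', '-xerr']
print(select_fluxes(spec))  # ['taux', 'tauy', 'lfevpa']
```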

A downscaling namelist is also created following the script; it is used to do some simple atmospheric forcing data downscaling.
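When the model grid and the forcing grid cover the same domain but at different resolutions, a simple re-gridding amounts to finding, for each model cell, the forcing cell containing its center. The sketch below is our assumption of what such an index mapping looks like (Python, illustrative only; the actual DOWNSCALING code may use a different method):

```python
def forcing_index(i_model, n_model, n_met):
    """Map model cell i (0-based) along one axis to the forcing cell
    whose extent contains the model cell's center, for two regular
    grids sharing the same domain edges."""
    center = (i_model + 0.5) / n_model        # fractional position in domain
    return min(int(center * n_met), n_met - 1)

# 360 model cells against 180 forcing cells: two model cells per forcing cell.
print([forcing_index(i, 360, 180) for i in (0, 1, 2, 359)])  # [0, 0, 1, 179]
```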


BOX 5: EXAMPLE FOR TIME-LOOP CALCULATION

# Create an input parameter namelist file for clm.x

mv -f $CLM_EXEDIR/st $CLM_EXEDIR/

# Create flux export namelist file for clm.x
# Don't change the sequence of the FLUX array elements!!

set FLUX = ( +taux +tauy +fsena +lfevpa +fevpa +fsenl +fevpl \
    +etr +fseng +fevpg +fgrnd +sabvsun +sabvsha +sabg \
    +olrg +rnet +xerr +zerr +rsur +rnof +assim \
    +respc +tss_01 +tss_02 +tss_03 +tss_04 +tss_05 +tss_06 \
    +tss_07 +tss_08 +tss_09 +tss_10 +wliq_01 +wliq_02 +wliq_03 \
    +wliq_04 +wliq_05 +wliq_06 +wliq_07 +wliq_08 +wliq_09 +wliq_10 \
    +wice_01 +wice_02 +wice_03 +wice_04 +wice_05 +wice_06 +wice_07 \
    +wice_08 +wice_09 +wice_10 +tg +tlsun +tlsha +ldew \
    +scv +snowdp +fsno +sigf +green +lai +sai \
    +alb_11 +alb_12 +alb_21 +alb_22 +emis +z0ma +trad \
    +ustar +tstar +qstar +zol +rib +fm +fh \
    +fq +tref +qref +u10m +v10m +f10m +us \
    +vs +tm +qm +prc +prl +pbot +frl \
    +solar )

@ i = 0

set flux_exp = ""

foreach str ($FLUX)
  @ i = $i + 1
  if ($i == 1) then
    set flux_exp = "$str"
  else
    set flux_exp = "$flux_exp $str"
  endif
end

cat >! $CLM_EXEDIR/ << EOF
&flux_nml
$flux_exp
/
EOF

cat >! $CLM_EXEDIR/ << EOF
&downs_nml
edgen = $EDGE_N
edgee = $EDGE_E
edges = $EDGE_S
edgew = $EDGE_W
/
EOF

echo 'Executing CLM Time-looping'

setenv FORT9 $CLM_EXEDIR/
setenv FORT7 $CLM_EXEDIR/

if ($TASKS > 1) then
mpirun -prefix $CLM_EXEDIR/ >& $CLM_EXEDIR/op || exit 5
else
$CLM_EXEDIR/clm.x < $CLM_EXEDIR/ >& $CLM_EXEDIR/op || exit 5
endif

echo 'CLM Running Completed'


3. CoLM Surface Dataset

The data available as input to the mksrfdat program include global terrain elevation, landuse/vegetation, land-water mask, and soil types; the raw datasets are only needed if a surface dataset is to be created at surface data making. All data are available at 30 arc second resolution (Table 5). The data arrangement and format in the reformatted data files are as follows:
- Latitude by latitude from north to south; within one latitude, the data points are arranged from west to east, starting from 0 degree longitude (or the dateline).
- We use a 2-character array to store the elevation, and 1-character arrays to store all other data (values < 100).
- All source data files are direct-access, which makes data reading efficient.
- All data are assumed to be valid at the center of the grid box.


Table 5: The list of raw data available

Data                         Resolution           Data source   Coverage   Size (bytes)
Terrain Height               30 sec. (0.925 km)   USGS          Global     1,866,240,000
Land-Water Mask #            30 sec. (0.925 km)   USGS          Global       933,120,000
24-Category Land Cover ##    30 sec. (0.925 km)   USGS          Global       933,120,000
17-Category Soil ###         30 sec. (0.925 km)   FAO+STATSGO   Global       933,120,000
#    The land-water mask data files are derived from the USGS vegetation
     data files. At each of the lat/lon grid points, there is one number
     indicating land (1), water (0), or missing data (-1) at that point.

##   The 24 categories are listed in Table 6. The 30-sec data are
     represented by one category-ID number at each of the lat/lon grid
     points.

Table 6: Description of 24-category (USGS) vegetation categories

Land Cover ID   Description
 1              Urban and Built-Up Land
 2              Dryland Cropland and Pasture
 3              Irrigated Cropland and Pasture
 4              Mixed Dryland/Irrigated Cropland and Pasture
 5              Cropland/Grassland Mosaic
 6              Cropland/Woodland Mosaic
 7              Grassland
 8              Shrubland
 9              Mixed Shrubland/Grassland
10              Savanna
11              Deciduous Broadleaf Forest
12              Deciduous Needleleaf Forest
13              Evergreen Broadleaf Forest
14              Evergreen Needleleaf Forest
15              Mixed Forest
16              Water Bodies (Including Ocean)
17              Herbaceous Wetland
18              Wooded Wetland
19              Barren or Sparsely Vegetated
20              Herbaceous Tundra
21              Wooded Tundra
22              Mixed Tundra
23              Bare Ground Tundra
24              Snow or Ice
###  FAO and STATSGO data are merged together. Both top soil layer
     (0 - 30 cm) and bottom soil layer (30 - 100 cm) data are provided.
     The 17 categories are listed in Table 7. Similar to the vegetation
     data, the 30-sec data are represented by one category-ID number at
     each of the lat/lon grid points.
Table 7: Description of 17-category soil categories

Soil Type ID    Soil Description
 1              Sand
 2              Loamy Sand
 3              Sandy Loam
 4              Silt Loam
 5              Silt
 6              Loam
 7              Sandy Clay Loam
 8              Silty Clay Loam
 9              Clay Loam
10              Sandy Clay
11              Silty Clay
12              Clay
13              Organic Materials
14              Water
15              Bedrock
16              Other
17              No data

Table 8: The relative amounts of sand, silt, and clay

Class No.   Soil Texture Class    % Sand   % Silt   % Clay
 1          Sand                      92        5        3
 2          Loamy Sand                82       12        6
 3          Sandy Loam                58       32       10
 4          Silt Loam                 17       70       13
 5          Silt                      10       85        5
 6          Loam                      43       39       18
 7          Sandy Clay Loam           58       15       27
 8          Silty Clay Loam           10       56       34
 9          Clay Loam                 32       34       34
10          Sandy Clay                52        6       42
11          Silty Clay                 6       47       47
12          Clay                      22       20       58
13          Organic materials          0        0        0
14          Water                      0        0        0
15          Bedrock                    0        0        0
16          Other                      0        0        0
17          No data                    0        0        0
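Where mineral soil texture is needed, Table 8 can be applied directly as a
lookup from the category ID to sand/silt/clay amounts. A minimal sketch
(Python, for illustration; the names here are not CoLM identifiers):

```python
# Sketch: sand/silt/clay percentages for each soil class of Table 8.
# Classes 13-17 (organic materials, water, bedrock, other, no data)
# carry no texture information and are tabulated as zeros.
SOIL_TEXTURE = {
    1:  ("Sand",             92,  5,  3),
    2:  ("Loamy Sand",       82, 12,  6),
    3:  ("Sandy Loam",       58, 32, 10),
    4:  ("Silt Loam",        17, 70, 13),
    5:  ("Silt",             10, 85,  5),
    6:  ("Loam",             43, 39, 18),
    7:  ("Sandy Clay Loam",  58, 15, 27),
    8:  ("Silty Clay Loam",  10, 56, 34),
    9:  ("Clay Loam",        32, 34, 34),
    10: ("Sandy Clay",       52,  6, 42),
    11: ("Silty Clay",        6, 47, 47),
    12: ("Clay",             22, 20, 58),
    13: ("Organic Materials", 0,  0,  0),
    14: ("Water",             0,  0,  0),
    15: ("Bedrock",           0,  0,  0),
    16: ("Other",             0,  0,  0),
    17: ("No data",           0,  0,  0),
}

def texture_fractions(soil_id):
    """Return (sand, silt, clay) as fractions in [0, 1]."""
    _, sand, silt, clay = SOIL_TEXTURE[soil_id]
    return sand / 100.0, silt / 100.0, clay / 100.0
```

For the mineral classes (1-12) the three fractions sum to one, so the
table can be used as-is to derive soil hydraulic and thermal parameters.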


4. CoLM Atmospheric Forcing Dataset

The CoLM needs atmospheric forcing data when running in offline mode.
Currently the CoLM supports atmospheric forcing datasets in ASCII and
netCDF format; the NCDATA cpp token is used to distinguish the format
being used. When NCDATA is set, the GSWP2 & PRINCETON atmospheric
datasets in netCDF format can be used; otherwise the ASCII forcing
dataset is used. The ASCII data format is relatively simple: each line
represents a time record, which contains short-wave solar radiation
[W/m2], long-wave radiation [W/m2], precipitation rate [mm/s], air
temperature [K], wind speed [m/s], surface air pressure [Pa], and
specific humidity [kg/kg]. The ASCII format is easy to use for
single-point validation experiments: users can arrange the observed
atmospheric variables according to the above requirements and then feed
them to the model. Any special requirements on the ASCII forcing dataset
can be checked by investigating the source code file GETMET.F90 in the
main/ directory.


The following two sections give some details about how to use the GSWP2 &
PRINCETON datasets in CoLM; both datasets are widely used in land surface
model validation and development.

4.1 GSWP2 Forcing Dataset


The Global Soil Wetness Project (GSWP) is an ongoing environmental
modeling research activity of the Global Land-Atmosphere System Study
(GLASS) and the International Satellite Land-Surface Climatology Project
(ISLSCP), both contributing projects of the Global Energy and Water Cycle
Experiment (GEWEX) in the World Climate Research Program (WCRP). GSWP was
charged with producing, as a community effort, global estimates of soil
moisture, temperature, snow water equivalent, and surface fluxes by
integrating one-way uncoupled land surface schemes (LSSs) using
externally specified surface forcings and standardized soil and
vegetation distributions. GSWP-2 produced the best model estimates of the
land-surface water and energy cycles over a ten-year period. This project
included an evaluation of the uncertainties linked to the LSSs, their
parameters, and the forcing variables. One of the main products of GSWP2
is a state-of-the-art land surface model forcing dataset, which provides
a common platform for many land surface models to evaluate their
performance.


The
GSWP2
dataset
contains
solar
radiation,
long-wave
radiation,
surface
air
temperature,
surface
air
specific
humidity,
surface
air
pressure,
total
precipitation
rate,
convective precipitation rate, wind speed. Each variable has a data file for each month,
and
the
date
length
range
from
1982
to
1995,
the
time
interval
is
3hours,
the
spatial
resolution
is
1degree. Only the land points have data, so to
save the storage space, the

25
GSWP2 dataset is compressed from 2D xy array into 1D vector array, the ocean grids are
ignored, and all data files are stored in netCDF format to make it more portable among
different computer platforms.
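The compressed land-vector storage can be expanded back onto the 2D grid
with a land-point index. A sketch under assumed names (Python, for
illustration; the actual index variable and fill value in the GSWP2
netCDF files may differ):

```python
# Sketch: expand a GSWP2-style compressed land vector back onto the
# 2D 1-degree grid.  land_index holds, for each land point, its
# flattened position in the nlon x nlat grid; names and fill value
# are illustrative, not the actual GSWP2 netCDF conventions.
FILL = -9999.0

def expand_to_grid(vector, land_index, nlon=360, nlat=180):
    """Scatter land-only values into a flattened grid; ocean cells get FILL."""
    grid = [FILL] * (nlon * nlat)
    for value, idx in zip(vector, land_index):
        grid[idx] = value
    return grid

# Tiny usage example: 3 land points scattered onto a 4 x 2 toy grid.
toy = expand_to_grid([1.0, 2.0, 3.0], [0, 3, 5], nlon=4, nlat=2)
```

The reverse operation (gathering the land points out of a full grid)
gives the compressed vector back, which is how the storage saving is
achieved.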

4.2 PRINCETON Forcing Dataset

The PRINCETON dataset is a global, 50-yr, 3-hourly, 1.0° dataset of
meteorological forcings that can be used to drive models of land surface
hydrology. The dataset is constructed by combining a suite of global
observation-based datasets with the National Centers for Environmental
Prediction - National Center for Atmospheric Research (NCEP-NCAR)
reanalysis. Because the known biases in the reanalysis precipitation and
near-surface meteorology have been shown to exert an erroneous effect on
modeled land surface water and energy budgets, the PRINCETON dataset
corrected these problems by using observation-based datasets of
precipitation, air temperature, and radiation. This dataset also made
corrections to the rain-day statistics of the reanalysis precipitation,
which have been found to exhibit a spurious wavelike pattern in
high-latitude wintertime. Wind-induced undercatch of solid precipitation
was removed using the results from the World Meteorological Organization
(WMO) Solid Precipitation Measurement Intercomparison. The statistical
downscaling developed with the Global Precipitation Climatology Project
(GPCP) daily product was used to disaggregate the precipitation in space
to 1.0° resolution, and the TRMM 3-hourly real-time dataset was used for
disaggregation in time from daily to 3-hourly. Downward radiation,
specific humidity, surface air pressure, and wind speed are downscaled in
space while accounting for changes in elevation.

The PRINCETON dataset contains downward solar radiation, downward
long-wave radiation, surface air temperature, surface air pressure,
surface air specific humidity, wind speed, and total precipitation rate.
They are all stored in netCDF format, but with the ocean grids included,
so the PRINCETON dataset occupies a huge disk space. Its long time series
and careful correction methods make it a good candidate for validating
and evaluating land surface models.

4.3 Temporal Interpolation of the Forcing Data


As stated above, the GSWP2 & PRINCETON datasets are both in netCDF format
and share the same spatial resolution, though the PRINCETON dataset has a
much longer time series. Their time interval is 3 hours, which is not
suitable for contemporary land surface models: the CoLM usually uses a
time step of 30 minutes, so we have to interpolate in time to make the
GSWP2 & PRINCETON datasets suitable for CoLM.
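The idea can be sketched with linear interpolation of a state variable
between two consecutive 3-hourly records. This is an illustration only,
not the exact scheme in CoLM's forcing reader; flux-type fields such as
radiation and precipitation are often treated differently (e.g. held
constant over the interval) to conserve the 3-hourly totals:

```python
# Sketch: linearly interpolate a state variable (e.g. air temperature)
# between two consecutive 3-hourly records to 30-minute sub-steps.
# CoLM's actual scheme may differ, especially for flux-type fields.
def interp_state(v0, v1, step, nsub=6):
    """v0, v1: values at two consecutive 3-hourly records;
    step in [0, nsub): which 30-minute sub-step within the interval."""
    w = step / float(nsub)
    return (1.0 - w) * v0 + w * v1

# Six half-hourly temperatures spanning one 3-hour interval:
halfhourly = [interp_state(280.0, 283.0, k) for k in range(6)]
```

With nsub = 6, each 3-hourly pair yields six half-hourly values, matching
the 30-minute model time step.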

