
Set up a CESM 1.0.x benchmark run on a generic Linux system

This is a cookbook for setting up a CESM 1.0 benchmark run on a generic Linux system. The page is still a work in progress, but it already gives an idea of what has to be done. See also the porting section in the CESM user's guide: http://www.cesm.ucar.edu/models/cesm1.0/cesm/cesm_doc/c2161.html

Compile NETCDF (Requirement)

  • Download NETCDF
    wget http://www.unidata.ucar.edu/downloads/netcdf/ftp/netcdf-4.1.1.tar.gz
    tar xfvz netcdf-4.1.1.tar.gz
    cd netcdf-4.1.1
    

  • Compile NETCDF with the same compiler you will later use to build the model. For example, for the Intel compiler:
    # use the same Fortran compiler that will later build CESM
    export FC=ifort
    export F77=ifort
    export F90=ifort
    # -DpgiFortran selects a Fortran name-mangling convention compatible with ifort
    export CPPFLAGS="-fPIC -DpgiFortran"
    ./configure --prefix=/usr/local/netcdf-4.1.1-intel --disable-netcdf-4 --disable-dap
    make
    make test
    

  • Install NETCDF
    make install
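
  • A quick sanity check of the installation (a minimal sketch; the file names assume the default static build and the install prefix from the configure step above):
    ls /usr/local/netcdf-4.1.1-intel/lib/libnetcdf.a
    ls /usr/local/netcdf-4.1.1-intel/include/netcdf.mod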
    

Download CESM source code
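
  • The CESM 1.0.x source code is distributed by NCAR via Subversion and requires registration. A sketch, assuming release cesm1_0_2; check the CESM website for the exact repository URL and tag:
    svn co https://svn-ccsm-release.cgd.ucar.edu/model_versions/cesm1_0_2 cesm1_0_2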

Adapt configuration files

  • Change to Machines directory
    cd cesm1_0_2
    cd scripts/ccsm_utils/Machines/
    

  • Meaning of the filenames (* corresponds to a machine name):
    Filename         Purpose
    env_machopts.*   Set up the environment: paths to the compiler, MPI library, and NETCDF library
    Macros.*         Set the compiler name, compiler options, and paths to the MPI and NETCDF libraries
    mkbatch.*        Settings for the queuing system

  • Modify, for example, the brutus_io, brutus_im, brutus_po, and brutus_pm files to fit the local environment (i = intel, p = pgi, o = openmpi, m = mvapich2). If you have Intel and OpenMPI, start with the *.brutus_io files (a sketch of typical contents follows below):
    emacs env_machopts.brutus_io
    emacs Macros.brutus_io
    emacs mkbatch.brutus_io
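
  • A hypothetical sketch of what env_machopts.brutus_io might contain on a system that uses environment modules (the module names are placeholders for the local setup):
    # make the compiler, MPI library, and NETCDF library available in the environment
    module load intel
    module load open_mpi
    module load netcdf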
    

  • Modify config_machines.xml and set the following variables, for example for brutus_io (illustrative values are sketched below):
    EXEROOT=                     # working directory, final location of binary, output files
    DIN_LOC_ROOT_CSMDATA=        # input data
    DIN_LOC_ROOT_CLMQIAN=        # input data
    MAX_TASKS_PER_NODE=          # how many cores per node?
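
  • For illustration, hypothetical values (all paths are placeholders for the local filesystem):
    EXEROOT=/scratch/$USER/cesm/$CASE            # working directory, placeholder path
    DIN_LOC_ROOT_CSMDATA=/scratch/cesm/inputdata
    DIN_LOC_ROOT_CLMQIAN=/scratch/cesm/inputdata/atm/datm7
    MAX_TASKS_PER_NODE=16                        # cores per node on this hypothetical cluster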
    

Compile and setup simulation

  • Change to scripts directory
    cd cesm1_0_2
    cd scripts
    

  • Define a case name (any name works), for example for 2° resolution (1.9x2.5_gx1v6), the fully coupled model (B), running with intel (i) and openmpi (o)
    CASE=1.9x2.5_gx1v6-B-benchmark-io
    

  • Define the machine type, resolution, compset
    MACH=brutus_io
    RES=1.9x2.5
    COMP=B
    

  • Create case
    ./create_newcase -res $RES -compset $COMP -mach $MACH -case $CASE
    

  • Change into case directory
    cd $CASE
    

  • Define the layout, for example run with 128 tasks on 128 cores
    NTASKS=128
    ./xmlchange -file env_mach_pes.xml -id NTASKS_ATM -val $NTASKS
    ./xmlchange -file env_mach_pes.xml -id NTASKS_LND -val $NTASKS
    ./xmlchange -file env_mach_pes.xml -id NTASKS_ICE -val $NTASKS
    ./xmlchange -file env_mach_pes.xml -id NTASKS_OCN -val $NTASKS
    ./xmlchange -file env_mach_pes.xml -id NTASKS_CPL -val $NTASKS
    ./xmlchange -file env_mach_pes.xml -id NTASKS_GLC -val $NTASKS
    ./xmlchange -file env_mach_pes.xml -id TOTALPES   -val $NTASKS
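
  • Equivalently, the same settings can be applied with a shell loop:
    NTASKS=128
    for id in NTASKS_ATM NTASKS_LND NTASKS_ICE NTASKS_OCN NTASKS_CPL NTASKS_GLC TOTALPES; do
        ./xmlchange -file env_mach_pes.xml -id $id -val $NTASKS
    done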
    

  • The simulation should run for one month, and no restart files should be produced:
    ./xmlchange -file env_run.xml -id STOP_OPTION -val nmonths
    ./xmlchange -file env_run.xml -id STOP_N      -val 1
    ./xmlchange -file env_run.xml -id REST_OPTION -val never
    

  • Configure case
    ./configure -case
    

  • Build/Compile the model
    ./$CASE.$MACH.build
    

Run the model

  • Run the model, for example with the LSF queuing system
    bsub < $CASE.$MACH.run
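
  • Without a queuing system, the script can also be started directly (assuming mkbatch.* was set up for interactive execution):
    ./$CASE.$MACH.run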
    

  • After the run has completed successfully, the timing results can be found in the timing folder
    cat timing/ccsm_timing.$CASE.*
    ...
    Model Throughput:         6.39   simulated_years/day 
    ...
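
  • To extract just the throughput line, for example:
    grep "Model Throughput" timing/ccsm_timing.$CASE.*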
    
