Web of Reto Knutti's Group

Set up a CESM 1.0.x benchmark run on a generic system

This is a cookbook for setting up a CESM 1.0 benchmark run on a generic (Linux) system. This page is still a work in progress, but it already gives an idea of what has to be done. See also the chapter on porting CESM in the CESM user's guide: http://www.cesm.ucar.edu/models/cesm1.0/cesm/cesm_doc/c2161.html

System requirements

  • Compilers known to work: intel 10.1, pgi 7.2, 8.0, 9.0, (pathscale 3.2)
  • MPI implementations known to work: openmpi 1.4, 1.5, mvapich2 1.4, 1.5

Compile NETCDF (Requirement)

  • Download NETCDF
    wget http://www.unidata.ucar.edu/downloads/netcdf/ftp/netcdf-4.1.1.tar.gz
    tar xfvz netcdf-4.1.1.tar.gz
    cd netcdf-4.1.1
    

  • Compile NETCDF with the same compiler that you will later use to compile the model. For example, for the Intel compiler:
    export FC=ifort
    export F77=ifort
    export F90=ifort
    export CPPFLAGS="-fPIC -DpgiFortran"
    ./configure --prefix=/usr/local/netcdf-4.1.1-intel --disable-netcdf-4 --disable-dap
    make
    make test
    

  • Install NETCDF
    make install
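After make install, it can help to verify that the pieces the CESM build will link against are actually in place. A minimal sanity-check sketch, assuming the --prefix from the configure example above and a static, Fortran-enabled netcdf-4.1.1 build (adjust path and file names to your setup):

```shell
# Sanity-check sketch: verify that the files the CESM build will link
# against exist. The prefix matches the --prefix in the configure example;
# file names assume a static, Fortran-enabled netcdf-4.1.1 build.
NETCDF=/usr/local/netcdf-4.1.1-intel
missing=0
for f in lib/libnetcdf.a include/netcdf.inc include/netcdf.mod; do
    [ -e "$NETCDF/$f" ] || { echo "missing: $NETCDF/$f"; missing=1; }
done
if [ "$missing" -eq 0 ]; then
    echo "NetCDF install looks complete"
else
    echo "NetCDF install incomplete - check configure/make output"
fi
```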
    

Download CESM source code

  • The CESM 1.0.x source code is distributed by NCAR via a Subversion repository; registration on the CESM web site is required for access.

Adapt configuration files

  • Change to Machines directory
    cd cesm1_0_2
    cd scripts/ccsm_utils/Machines/
    

  • Meaning of filenames
    Filename         Purpose
    Macros.*         Set compiler name and paths to the MPI and NETCDF libraries; set compiler options
    env_machopts.*   Set environment: can be used to set paths to compiler, MPI library, NETCDF library
    mkbatch.*        Settings for the queuing system
    where * corresponds to a machine name.

  • As a starting point, take the configuration files of a machine that is close to your environment. For example, have a look at brutus_io, brutus_im, brutus_po or brutus_pm, where i=intel, p=pgi, o=openmpi, m=mvapich2

  • Let's assume you use intel and openmpi; start with the *.brutus_io files:
    cp env_machopts.brutus_io env_machopts.your_machine
    cp Macros.brutus_io       Macros.your_machine
    cp mkbatch.brutus_io      mkbatch.your_machine
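In Macros.your_machine, the main edits are usually the paths to the NetCDF and MPI installations. A hypothetical excerpt (the variable names follow the brutus Macros files; verify them against the copy you just made, and substitute your own install prefixes):

```makefile
# Hypothetical excerpt of Macros.your_machine -- variable names and paths
# are assumptions; check them against the Macros.brutus_io file you copied.
NETCDF_PATH := /usr/local/netcdf-4.1.1-intel
INC_NETCDF  := $(NETCDF_PATH)/include
LIB_NETCDF  := $(NETCDF_PATH)/lib
MPICH_PATH  := /usr/local/openmpi-1.4
INC_MPI     := $(MPICH_PATH)/include
LIB_MPI     := $(MPICH_PATH)/lib
```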
    

  • Add a configuration tag for your machine (your_machine) to config_machines.xml - only the important lines are listed below
    <machine MACH="your_machine"
             DESC="Test System"
             EXEROOT="/scratch/$CCSMUSER/$CASE"
             OBJROOT="$EXEROOT"
             INCROOT="$EXEROOT/lib/include" 
             DIN_LOC_ROOT_CSMDATA="/scratch/cesm1/inputdata"
             DIN_LOC_ROOT_CLMQIAN="/scratch/cesm1/inputdata/atm/datm7/atm_forcing.datm7.Qian.T62.c080727"
             BATCHQUERY="qstat -f"
             BATCHSUBMIT="qsub" 
             GMAKE_J="4" 
             MAX_TASKS_PER_NODE="4"
             MPISERIAL_SUPPORT="FALSE" />
    
    please set the following variables:
    EXEROOT=                     # working directory, final location of binaries and output files
    DIN_LOC_ROOT_CSMDATA=        # input data, data will be downloaded on the fly
    DIN_LOC_ROOT_CLMQIAN=        # input data, data will be downloaded on the fly
    MAX_TASKS_PER_NODE=          # number of cores per node
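Given MAX_TASKS_PER_NODE, the number of nodes a run occupies follows by ceiling division. A small sketch (not part of CESM), using the sample value of 4 tasks per node from the snippet above and a hypothetical 128-task layout:

```shell
# Sketch: how many nodes a run occupies, given MAX_TASKS_PER_NODE from the
# config_machines.xml snippet above and a hypothetical 128-task layout.
# Integer ceiling division: nodes = ceil(NTASKS / MAX_TASKS_PER_NODE).
NTASKS=128
MAX_TASKS_PER_NODE=4
NODES=$(( (NTASKS + MAX_TASKS_PER_NODE - 1) / MAX_TASKS_PER_NODE ))
echo "$NTASKS tasks on $NODES nodes"
```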
    

  • Configure the mpirun execution: search in mkbatch.your_machine for the line that starts the executable ccsm.exe and replace it with the correct mpirun command for your system, for example something like
    mpirun -np ${maxtasks} ./ccsm.exe >&! ccsm.log.\$LID
    # or
    mpirun -x LD_LIBRARY_PATH -np ${maxtasks} ./ccsm.exe >&! ccsm.log.\$LID
    

Compile and setup simulation

  • Change to scripts directory
    cd cesm1_0_2
    cd scripts
    

  • Define a case name (it can be any name), for example for 2° resolution (1.9x2.5_gx1v6) with the fully coupled model (B)
    CASE=1.9x2.5_gx1v6-B-benchmark
    

  • Define the machine type, resolution, compset
    MACH=your_machine
    RES=1.9x2.5
    COMP=B
    

  • Create case
    ./create_newcase -res $RES -compset $COMP -mach $MACH -case $CASE
    

  • Change into case directory
    cd $CASE
    

  • Define the layout, for example to run with 128 tasks on 128 cores
    NTASKS=128
    ./xmlchange -file env_mach_pes.xml -id NTASKS_ATM -val $NTASKS
    ./xmlchange -file env_mach_pes.xml -id NTASKS_LND -val $NTASKS
    ./xmlchange -file env_mach_pes.xml -id NTASKS_ICE -val $NTASKS
    ./xmlchange -file env_mach_pes.xml -id NTASKS_OCN -val $NTASKS
    ./xmlchange -file env_mach_pes.xml -id NTASKS_CPL -val $NTASKS
    ./xmlchange -file env_mach_pes.xml -id NTASKS_GLC -val $NTASKS
    ./xmlchange -file env_mach_pes.xml -id TOTALPES   -val $NTASKS
    

  • In general, CESM is hardwired to generate monthly average data. In principle this can be turned off, but that requires a lot of code changes, so it is not considered here. Instead, the following two cases are suggested:

  • CASE 1: Run a short simulation that produces almost no output (I/O)
    The simulation should run for only 20 days, and no restart files should be produced at the end:
    ./xmlchange -file env_run.xml -id STOP_OPTION -val ndays
    ./xmlchange -file env_run.xml -id STOP_N      -val 20
    ./xmlchange -file env_run.xml -id REST_OPTION -val never
    

  • CASE 2: Run a longer simulation that produces monthly (better daily? - CHECK THIS) output data
    The simulation should run for 2 months, and restart files should be produced at the end:
    ./xmlchange -file env_run.xml -id STOP_OPTION -val nmonths
    ./xmlchange -file env_run.xml -id STOP_N      -val 2
    ./xmlchange -file env_run.xml -id REST_OPTION -val nmonths
    ./xmlchange -file env_run.xml -id REST_N      -val 2
    

  • Configure case
    ./configure -case
    

  • Build/Compile the model
    ./$CASE.$MACH.build
    

Run the model

  • Run the model, for example with LSF queuing system
    bsub < $CASE.$MACH.run
    

  • To start without a queuing system just execute:
    ./$CASE.$MACH.run
    

  • After the run has completed successfully, timing results can be found in the timing folder
    cat timing/ccsm_timing.$CASE.*
    ...
    Model Throughput:         6.39   simulated_years/day 
    ...
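The throughput figure can be converted into wallclock cost per simulated year. A small sketch that parses the sample line shown above (the 6.39 is just that example value):

```shell
# Sketch: turn the "Model Throughput" line into wallclock hours per
# simulated year. The sample line is the one from the timing output above.
LINE='Model Throughput:         6.39   simulated_years/day'
YPD=$(echo "$LINE" | awk '{ print $3 }')                 # simulated years/day
HPY=$(awk -v t="$YPD" 'BEGIN { printf "%.2f", 24 / t }') # hours/simulated year
echo "$YPD simulated_years/day = $HPY wallclock hours per simulated year"
```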
    

Change the layout

  • To change the layout you don't have to recreate the case (but you can if you wish).
  • Change into case directory and re-define layout
    cd $CASE
    NTASKS=64
    ./xmlchange -file env_mach_pes.xml -id NTASKS_ATM -val $NTASKS
    ./xmlchange -file env_mach_pes.xml -id NTASKS_LND -val $NTASKS
    ./xmlchange -file env_mach_pes.xml -id NTASKS_ICE -val $NTASKS
    ./xmlchange -file env_mach_pes.xml -id NTASKS_OCN -val $NTASKS
    ./xmlchange -file env_mach_pes.xml -id NTASKS_CPL -val $NTASKS
    ./xmlchange -file env_mach_pes.xml -id NTASKS_GLC -val $NTASKS
    ./xmlchange -file env_mach_pes.xml -id TOTALPES   -val $NTASKS
    
  • Clean the case and re-configure it
    ./configure -cleanmach
    ./configure -case
    

  • Build/Compile the model
    ./$CASE.$MACH.build
    

Change the resolution

  • Recommended resolutions are T31_gx3v7 (~3°), 1.9x2.5_gx1v6 (2°), 0.9x1.25_gx1v6 (1°)
  • ALERT! To change the resolution, create a new case!
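To benchmark all three recommended resolutions, one case per resolution has to be created. The sketch below only prints the create_newcase commands it would run; the machine name and the case-name pattern are placeholders:

```shell
# Sketch: one benchmark case per recommended resolution. The loop only
# prints the create_newcase commands instead of running them; MACH and
# the case-name pattern are hypothetical placeholders.
COMP=B
MACH=your_machine
for RES in T31_gx3v7 1.9x2.5_gx1v6 0.9x1.25_gx1v6; do
    CASE="$RES-$COMP-benchmark"
    echo "./create_newcase -res $RES -compset $COMP -mach $MACH -case $CASE"
done
```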

Produce a summary

  • Create a performance matrix for CASE 1 and CASE 2. Fill in the Model Throughput in simulated_years/day

  • CASE 1: Run a short simulation that produces almost no output (I/O)
    resolution \ layout (NTASKS) |  16 |  32 |  64 | 128 | 256 | 512 | 1024 (a)
    T31_gx3v7                    |     |     |     |     | --  | --  | --
    1.9x2.5_gx1v6                | --  |     |     |     |     |     | --
    0.9x1.25_gx1v6               | --  | --  |     |     |     |     |

  • CASE 2: Run a longer simulation that produces monthly (better daily? - CHECK THIS) output data
    resolution \ layout (NTASKS) |  16 |  32 |  64 | 128 | 256 | 512 | 1024 (a)
    T31_gx3v7                    |     |     |     |     | --  | --  | --
    1.9x2.5_gx1v6                | --  |     |     |     |     |     | --
    0.9x1.25_gx1v6               | --  | --  |     |     |     |     |
    (a) optional
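One possible way to collect the numbers for the tables is to grep the throughput out of each case's timing file. The case names below are hypothetical examples; cases whose timing files are missing show up as n/a:

```shell
# Sketch: gather each case's throughput into one summary for the tables.
# The case names are hypothetical examples of benchmark cases you may have
# run; adjust the list to the cases that actually exist.
SUMMARY=""
for CASE in T31_gx3v7-B-benchmark 1.9x2.5_gx1v6-B-benchmark 0.9x1.25_gx1v6-B-benchmark; do
    T=$(grep -h 'Model Throughput' "$CASE"/timing/ccsm_timing."$CASE".* 2>/dev/null | awk '{ print $3 }')
    SUMMARY="$SUMMARY$CASE: ${T:-n/a} simulated_years/day
"
done
printf '%s' "$SUMMARY"
```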

This site is powered by Foswiki. Copyright © by the contributing authors. All material on this collaboration platform is the property of the contributing authors.