A first Sensitivity-Analysis study

This tutorial assumes that the user has gone through Quick installation and has built Melissa.

Heat-PDE use-case

Here we demonstrate a core use-case of Melissa: a sensitivity analysis whose statistics are computed iteratively by a parallel server from data produced by parallel clients. Each individual client is simply a data generator: a parallelized solver of a heat diffusion equation.

Note

To give a better insight into Melissa's language compatibility, the same solver, initially developed in Fortran90 (see heat.f90 and heat_utils.f90), was turned into a C example (see heat.c) through subroutine binding. Hence, following this tutorial will yield two executables (i.e. one for each language). For the sake of brevity, the Instrumenting the solver section only specifies the C commands necessary for coupling with Melissa. Nevertheless, their Fortran equivalents can easily be inferred from the Fortran source files.

Use case presentation

In this example, a finite-difference parallel solver is used to solve the heat equation on a Cartesian grid. The solver input variables are:

  • the initial temperature across the domain,
  • the left wall temperature (the other wall temperatures are set to zero).

By solving the heat equation for multiple sets of inputs, the purpose of this example is to perform a sensitivity analysis of the solution with respect to these temperatures.
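For reference, the equation being solved is the standard heat (diffusion) equation; written here in two dimensions for illustration, with u the temperature and alpha a diffusion coefficient (the symbols are illustrative and not taken from the solver source):

\frac{\partial u}{\partial t} = \alpha \left( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} \right)

The initial temperature enters as the initial condition, and the wall temperatures enter as Dirichlet boundary conditions.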

Note

By default, the use-case is configured to only take two input parameters (nb_parameters=2). This reduces the group size (group_size=nb_parameters+2) when Sobol indices are computed. Depending on the resources at hand, the user can increase this number up to 5.
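For example, with the default nb_parameters=2, each Sobol group contains group_size = 2 + 2 = 4 simulations, while nb_parameters=5 leads to groups of 7 simulations.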

Note

The computational load of this use-case can easily be tuned by modifying the client_config dictionary of the configuration file, where the mesh refinement and the time discretization must be passed to the executable command.

Running the example

If Melissa was installed with the Quick Installation instructions, the user should remember the Melissa prefix path and update the environment variables of the shell. This will ensure all Melissa executables are found:

source /path/to/melissa/melissa-setup-env.sh

Note

If Melissa was installed with a package manager, there is no need to set up the environment. Loading the API package is enough to adjust the paths as expected.

Next, move to the example folder and build the example code:

cd /path/to/melissa/examples/heat-pde/executables
mkdir build && cd build
cmake ..
make
cd ../../heat-pde-sa    # go from .../heat-pde/executables/build to .../heat-pde/heat-pde-sa 

If the build is successful, three new executables should appear in the executables/build sub-directory:

  • heatf
  • heatc
  • heat_no_melissac

The configuration file config_<scheduler>.json is used to configure the Melissa execution (e.g. parameter sweep, computed statistics, launcher options). It must be edited at least to update the path to the executable:

"client_executable": "path/to/melissa/examples/heat-pde/executables/heatc"

Note

The example can be started with one of several batch schedulers supported by Melissa: OpenMPI, slurm, or OAR. It may be necessary to pass additional arguments directly to the batch scheduler for a successful example run. For example, starting with version 3, OpenMPI refuses to oversubscribe by default (in layman's terms, to start more processes than there are CPU cores on a system) and requires the "--oversubscribe" option.

In this tutorial, we use the OpenMPI scheduler and the default config_mpi.json file:

melissa-launcher --config_name /path/to/heat-pde-sa/config_mpi

Note

The heat-pde example is not a computationally challenging problem, but due to the number of simulation processes and depending on the resources available to the user, the system may end up being oversubscribed. If so, the following launcher option should be added to the config file:

"scheduler-arg": "--oversubscribe"
As a result, every mpirun command will be submitted with this option.

All results, log files, and a copy of the config file will be stored in a dedicated directory called STUDY_OUT. If not specified in the config file, the output directory will by default have the form melissa-YYYYMMDDTHHMMSS where YYYYMMDD and THHMMSS are the current date and local time, respectively, in ISO 8601 basic format. For each time step, for each field, and for each statistic, the Melissa server will generate one file containing the statistic value for every grid point.
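For instance, a study launched on 2024-01-15 at 09:30:00 local time would write its results to a directory named melissa-20240115T093000 (the date here is only illustrative).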

The statistics can be turned into a small movie with the aid of the script plot-results-sa.py. For example, the command below will create a movie from the mean of the temperature over all time steps:

python3 plot-results-sa.py /path/to/<result-dir> temperature mean

Instrumenting the Solver

To avoid intermediate file storage and the problems associated with it, the simulations must send their data directly to the Melissa server.

Warning

Here, a time step refers to a time step of the sensitivity analysis. These Melissa time steps, also referred to as samples, should not be confused with simulation time steps: the two are not identical if the simulation does not send data after every simulation step.

The Melissa client API provides the link between the simulations (i.e. clients) and the Melissa server. The API can be found in the header file $MELISSA_INSTALL_PREFIX/include/melissa_api.h, where MELISSA_INSTALL_PREFIX is the path to the root directory of the Melissa installation.

#include <melissa_api.h>
The header file lets you check the Melissa version at compile time and contains declarations for all relevant functions.

Before calling any Melissa API functions, you need to decide on a set of fields or quantities that you want to have analyzed by Melissa; this information must be passed to the config file in the study_options dictionary entry "field_names", and it must match the API calls in the simulation code. In the heat example, there is only one field, called temperature.
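For the heat example, the corresponding study_options entry therefore reads (in the same style as the other config fragments above):

"field_names": ["temperature"]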

Next, MPI must be initialized and an MPI communicator for each individual simulation must be created:

    MPI_Init(NULL, NULL);
    // rank of this process in the world communicator
    int me = -1;
    MPI_Comm_rank(MPI_COMM_WORLD, &me);
    // MPI_APPNUM identifies which application (i.e. which simulation) this rank belongs to
    int* appnum = NULL;
    int info = -1;
    MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_APPNUM, &appnum, &info);
    // give each simulation its own communicator
    MPI_Comm comm_app = MPI_COMM_NULL;
    MPI_Comm_split(MPI_COMM_WORLD, *appnum, me, &comm_app);

Warning

Because of how MPI handles the simulation groups (the world communicator is shared by all simulations of a group), the user is advised to avoid using MPI_COMM_WORLD outside of these lines.

The simulation can now begin communicating with the Melissa server. The first step is to inform the Melissa server of the field name and the number of degrees of freedom (the number of floating-point values):

    const char field_name[] = "temperature";
    melissa_init(field_name, num_dofs, comm_app);
At this point, the simulation can begin sending data to the server with melissa_send. The first argument is the field name, the second is a pointer to an array of values:
    double* u = calloc(num_dofs, sizeof(double));
    // ...
    melissa_send(field_name, u);
After sending the data for all fields, melissa_finalize() must be called to properly disconnect from the Melissa server and release all resources.
    melissa_finalize();
    MPI_Finalize();
This function must be called before MPI_Finalize().
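
Putting the pieces together, the sketch below shows how these calls typically fit into a client's main loop. It is a minimal illustration rather than the actual heat.c: num_dofs, num_time_steps, and solver_step are placeholders standing in for the real solver data and update routine.

#include <stdlib.h>
#include <mpi.h>
#include <melissa_api.h>

// Hypothetical solver kernel standing in for the real heat solver update.
static void solver_step(double* u, int num_dofs)
{
    for (int i = 0; i < num_dofs; ++i) {
        u[i] += 1.0;  // dummy update
    }
}

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);

    // one communicator per simulation (see above)
    int me = -1;
    MPI_Comm_rank(MPI_COMM_WORLD, &me);
    int* appnum = NULL;
    int info = -1;
    MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_APPNUM, &appnum, &info);
    MPI_Comm comm_app = MPI_COMM_NULL;
    MPI_Comm_split(MPI_COMM_WORLD, *appnum, me, &comm_app);

    // declare the field to the Melissa server
    const int num_dofs = 1000;  // placeholder problem size
    const char field_name[] = "temperature";
    melissa_init(field_name, num_dofs, comm_app);

    double* u = calloc(num_dofs, sizeof(double));
    const int num_time_steps = 100;  // placeholder
    for (int step = 0; step < num_time_steps; ++step) {
        solver_step(u, num_dofs);
        // each call produces one Melissa sample; it does not have to
        // happen after every solver iteration
        melissa_send(field_name, u);
    }

    free(u);
    melissa_finalize();  // disconnect from the server...
    MPI_Finalize();      // ...before finalizing MPI
    return 0;
}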