Instrumenting the Solver/Client¶
Melissa treats a client (a simulation process) as managing one or more fields, i.e., quantities such as energy, temperature, or pressure. Each field can have its own mesh, but every mesh must have a fixed size.
The following sections give a general idea of how to leverage Melissa's client API and then break it down further by language.
Initializing a field¶
To enable Melissa to analyze a specific field, the simulation must initialize it with:
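melissa_init("field_name", vector_size, comm);
Here vector_size is the number of local values (degrees of freedom) sent for this field and comm is the MPI communicator; the language-specific examples below show this call in context.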
Sending Data¶
At every time step, the simulation must send data to Melissa:
for (int t = 0; t < nb_time_steps; ++t) {
    const double* values = ...; // point to this time step's field data
    melissa_send("field_name", values);
}
Note
Melissa's time steps do not need to match the simulation's time steps exactly.
Finalizing Communication¶
After all data is sent, the simulation must properly close communication with:
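melissa_finalize();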
This step is mandatory and must be performed before calling MPI_Finalize().
Warning
The fields specified in the simulation must match those listed in the <config-file>.json configuration file.
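As a minimal sketch, the configuration could list the expected fields under a field_names entry; the keys below (study_options, field_names) are assumptions for illustration only, so check the reference configuration shipped with your Melissa version:

{
    "study_options": {
        "field_names": ["temperature"]
    }
}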
Language-specific details¶
In this section, we break down the usage of the Melissa client API for:
- C
- Python
- Fortran
C API¶
The main client API functions, prefixed with melissa_, are defined in the melissa_api.h header file. To use them, start by including the header:
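#include <melissa_api.h>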
After a successful CMake installation, the API header file is installed in the MELISSA_INSTALL_PREFIX/include directory.
Initializing MPI¶
Generally, if the MPI application is straightforward, using MPI_COMM_WORLD in the melissa_init call is sufficient:
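melissa_init(field_name, num_dofs, MPI_COMM_WORLD);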
Note
For Sobol experiments, we run a group of clients, where each simulation operates as a separate MPI application but all of them share the same communicator, MPI_COMM_WORLD, by default. Sobol groups use MPI_APPNUM to manage grouped executions. Since all applications share MPI_COMM_WORLD, the communicator must be adjusted per client within a group. Therefore, we split MPI_COMM_WORLD for each client within a group:
MPI_Init(NULL, NULL);
int me = -1;
MPI_Comm_rank(MPI_COMM_WORLD, &me); // rank, used as the key when splitting
int* appnum = NULL;
int flag = 0;
MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_APPNUM, &appnum, &flag);
MPI_Comm comm_app = MPI_COMM_NULL;
MPI_Comm_split(MPI_COMM_WORLD, *appnum, me, &comm_app);
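The resulting comm_app is then passed to melissa_init in place of MPI_COMM_WORLD, as in the full example below.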
Please refer to this section, which elaborates on why MPI_COMM_WORLD is shared across different clients within a group.
Overall, the C API will follow this structure:
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
#include <melissa_api.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int num_dofs = 100; // Example degrees of freedom
    MPI_Comm comm_app = MPI_COMM_WORLD;
    const char field_name[] = "temperature";
    melissa_init(field_name, num_dofs, comm_app);

    double* u = (double*)calloc(num_dofs, sizeof(double));

    // Temporal loop for sending data
    int nb_time_steps = 10;
    for (int t = 0; t < nb_time_steps; t++) {
        // Fill u with simulation data for timestep t
        melissa_send(field_name, u);
    }

    free(u);
    melissa_finalize();
    MPI_Finalize();
    return 0;
}
Python API¶
The Python API uses the ctypes module to call the functions of the C API described above. Users must therefore import the melissa_api module in their Python solvers; it is installed under MELISSA_INSTALL_PREFIX/lib/python3.x/site-packages. This path is appended to PYTHONPATH when the melissa_set_env.sh script is sourced in the current shell.
Note
melissa_send expects NumPy arrays.
Overall, the Python API will follow this structure:
import numpy as np
from mpi4py import MPI
from melissa_api import *

# Define MPI communicator
comm = MPI.COMM_WORLD

# Field names
field_1 = "field_1"
field_2 = "field_2"

# Vector size for each field
vector_size = 1000

# Initialize fields
melissa_init(field_1, vector_size, comm)
melissa_init(field_2, vector_size, comm)

# Temporal loop
nb_time_steps = 100
for t in range(nb_time_steps):
    # Fill data_field_1 at timestep t
    data_field_1 = np.random.randn(vector_size)
    melissa_send(field_1, data_field_1)

    # Fill data_field_2 at timestep t
    data_field_2 = np.random.randn(vector_size)
    melissa_send(field_2, data_field_2)

# Finalize connection
melissa_finalize()
Fortran API¶
Fortran-specific routines in melissa_api.f90 define interfaces that bind to their C counterparts for reusability.
Note
Melissa currently supports mpif.h but not the mpi module (see Fortran Support Through the mpif.h Include File for more details). Therefore, in Fortran, ensure the following:
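include "mpif.h"   ! pull in MPI via the include file, not the mpi module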
Since Fortran and C handle strings differently, field_name must be null-terminated. For example, to define a field named temperature:
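character(len=12) :: field_name
field_name = "temperature" // char(0)   ! 11 characters plus the terminating null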
Overall, the Fortran API will follow this structure:
program melissa_example
    implicit none
    include "mpif.h"
    include "melissa_api.f90"

    integer :: num_dofs, nb_time_steps, t, ierr
    integer :: comm_app
    character(len=12) :: field_name
    real(8), allocatable :: u(:)

    call MPI_Init(ierr)

    num_dofs = 100 ! Example degrees of freedom
    comm_app = MPI_COMM_WORLD
    field_name = "temperature" // char(0)
    call melissa_init(field_name, num_dofs, comm_app)

    allocate(u(num_dofs))

    ! Temporal loop for sending data
    nb_time_steps = 10
    do t = 0, nb_time_steps - 1
        ! Fill u with simulation data for timestep t
        call melissa_send(field_name, u)
    end do

    deallocate(u)
    call melissa_finalize()
    call MPI_Finalize(ierr)
end program melissa_example