IBAMR
An adaptive and distributed-memory parallel implementation of the immersed boundary (IB) method
IBTK::IBTK_MPI Struct Reference

Provides a C++ wrapper around MPI routines.

#include <ibtk/IBTK_MPI.h>

Static Public Member Functions

static void setCommunicator (MPI_Comm communicator)
 
static MPI_Comm getCommunicator ()
 
static MPI_Comm getSAMRAIWorld ()
 
static int getRank ()
 
static int getNodes ()
 
static void barrier ()
 
template<typename T >
static T minReduction (T x, int *rank_of_min=nullptr)
 
template<typename T >
static void minReduction (T *x, const int n=1, int *rank_of_min=nullptr)
 
template<typename T >
static T maxReduction (T x, int *rank_of_max=nullptr)
 
template<typename T >
static void maxReduction (T *x, const int n=1, int *rank_of_max=nullptr)
 
template<typename T >
static T sumReduction (T x)
 
template<typename T >
static void sumReduction (T *x, const int n=1)
 
static void allToOneSumReduction (int *x, const int n, const int root=0)
 
template<typename T >
static T bcast (const T x, const int root)
 
template<typename T >
static void bcast (T *x, int &length, const int root)
 
template<typename T >
static void send (const T *buf, const int length, const int receiving_proc_number, const bool send_length=true, int tag=0)
 This function sends an MPI message with an array to another processor.
 
static void sendBytes (const void *buf, const int number_bytes, const int receiving_proc_number)
 This function sends an MPI message with an array of bytes (MPI_BYTE) to receiving_proc_number.
 
static int recvBytes (void *buf, int number_bytes)
 This function receives an MPI message with an array of max size number_bytes (MPI_BYTE) from any processor.
 
template<typename T >
static void recv (T *buf, int &length, const int sending_proc_number, const bool get_length=true, int tag=0)
 This function receives an MPI message with an array from another processor.
 
template<typename T >
static void allGather (const T *x_in, int size_in, T *x_out, int size_out)
 
template<typename T >
static void allGather (T x_in, T *x_out)
 

Detailed Description

Provides C++ wrapper around MPI routines.

The IBTK_MPI struct provides simple interfaces to common MPI routines. All routines operate on a user-settable communicator (see setCommunicator()); the default is MPI_COMM_WORLD.

Note that this class is a utility class that groups function calls in one namespace (all calls are to static functions). Thus, you should never attempt to instantiate an object of type IBTK_MPI; simply invoke the functions using the IBTK_MPI::function(...) syntax.
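
For illustration, a minimal sketch of this calling convention, assuming MPI (and SAMRAI) have already been initialized by the application; report_local_work and its argument are hypothetical:

    #include <ibtk/IBTK_MPI.h>
    #include <iostream>

    // Hypothetical helper illustrating the IBTK_MPI::function(...) syntax.
    void report_local_work(double local_norm)
    {
        const int rank = IBTK::IBTK_MPI::getRank();   // this processor's rank
        const int nodes = IBTK::IBTK_MPI::getNodes(); // total number of processors

        // Combine per-processor contributions into a global sum on every rank.
        const double global_norm = IBTK::IBTK_MPI::sumReduction(local_norm);

        IBTK::IBTK_MPI::barrier(); // synchronize before output
        if (rank == 0)
        {
            std::cout << "global norm over " << nodes << " ranks: " << global_norm << std::endl;
        }
    }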

Member Function Documentation

◆ allGather()

template<typename T>
void IBTK::IBTK_MPI::allGather(const T* x_in, int size_in, T* x_out, int size_out)  [inline, static]

Each processor sends an array of integers or doubles to all other processors; each processor's array may differ in length. The x_out array must be pre-allocated to the correct length (this is a bit cumbersome, but it keeps allGather from allocating memory that is freed elsewhere). To pre-allocate correctly, before calling this method, compute

size_out = IBTK_MPI::sumReduction(size_in)

and then allocate the x_out array.
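
A sketch of this pre-allocation recipe; gather_all and the use of std::vector are illustrative, not part of the API:

    #include <ibtk/IBTK_MPI.h>
    #include <vector>

    // Gather variable-length per-processor arrays into one global array.
    std::vector<double> gather_all(const std::vector<double>& local)
    {
        const int size_in = static_cast<int>(local.size());

        // Pre-size the output as documented above.
        const int size_out = IBTK::IBTK_MPI::sumReduction(size_in);
        std::vector<double> global(size_out);

        IBTK::IBTK_MPI::allGather(local.data(), size_in, global.data(), size_out);
        return global;
    }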

◆ allToOneSumReduction()

void IBTK::IBTK_MPI::allToOneSumReduction(int* x, const int n, const int root = 0)  [static]

Perform an all-to-one sum reduction on an integer array. The final result is only available on the root processor.
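
For instance, summing per-processor integer counters onto the root rank only (a sketch; the counter values are illustrative):

    #include <ibtk/IBTK_MPI.h>

    void sum_counters_on_root()
    {
        int counts[3] = {1, 2, 3}; // local per-processor counters (illustrative)
        IBTK::IBTK_MPI::allToOneSumReduction(counts, 3, /*root*/ 0);
        // On rank 0, counts[] now holds the global sums; on other ranks the
        // result is not available.
    }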

◆ barrier()

void IBTK::IBTK_MPI::barrier()  [static]

Perform a global barrier across all processors.

◆ bcast()

template<typename T>
T IBTK::IBTK_MPI::bcast(const T x, const int root)  [inline, static]

Broadcast a value of type T from the specified root processor to all other processors; every processor (including the root) returns the broadcast value.
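
A sketch of this scalar overload, assuming a hypothetical helper compute_stable_dt() that only rank 0 can evaluate:

    #include <ibtk/IBTK_MPI.h>

    double compute_stable_dt(); // hypothetical root-only computation

    double broadcast_dt()
    {
        double dt = 0.0;
        if (IBTK::IBTK_MPI::getRank() == 0)
        {
            dt = compute_stable_dt();
        }
        // Every rank (including the root) receives the root's value.
        return IBTK::IBTK_MPI::bcast(dt, /*root*/ 0);
    }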

◆ getCommunicator()

MPI_Comm IBTK::IBTK_MPI::getCommunicator()  [static]

Get the current MPI communicator. The default communicator is MPI_COMM_WORLD.

◆ getNodes()

int IBTK::IBTK_MPI::getNodes()  [static]

Return the number of processors (nodes).

◆ getRank()

int IBTK::IBTK_MPI::getRank()  [static]

Return the processor rank (identifier) from 0 through the number of processors minus one.

◆ getSAMRAIWorld()

MPI_Comm IBTK::IBTK_MPI::getSAMRAIWorld()  [static]

Get the SAMRAI World communicator.

◆ maxReduction()

template<typename T>
T IBTK::IBTK_MPI::maxReduction(T x, int* rank_of_max = nullptr)  [inline, static]

Perform a max reduction on a data structure of type double, int, or float. Each processor contributes an array of values, and the element-wise max is returned in the same array. If rank_of_max is not null, the rank of the processor on which the max is located is stored in it.

◆ minReduction()

template<typename T>
T IBTK::IBTK_MPI::minReduction(T x, int* rank_of_min = nullptr)  [inline, static]

Perform a min reduction on a data structure of type double, int, or float. Each processor contributes an array of values, and the element-wise min is returned in the same array. If rank_of_min is not null, the rank of the processor on which the min is located is stored in it.
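
For example, locating the globally smallest grid spacing and the rank that owns it (a sketch using the scalar overload; find_smallest_dx is a hypothetical helper):

    #include <ibtk/IBTK_MPI.h>

    double find_smallest_dx(double local_dx)
    {
        int rank_of_min = -1;
        const double global_dx = IBTK::IBTK_MPI::minReduction(local_dx, &rank_of_min);
        // global_dx is the smallest value over all processors; rank_of_min
        // holds the rank that contributed it.
        return global_dx;
    }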

◆ recv()

template<typename T>
void IBTK::IBTK_MPI::recv(T* buf, int& length, const int sending_proc_number, const bool get_length = true, int tag = 0)  [inline, static]

This function receives an MPI message with an array from another processor.

If this processor knows in advance the length of the array, use "get_length = false"; otherwise, the sending processor will first send the length of the array and then send the data. This call must be paired with a matching call to IBTK_MPI::send.

Parameters
    buf                    Pointer to a valid array buffer with capacity of length elements.
    length                 Maximum number of elements that can be stored in buf.
    sending_proc_number    Processor number of the sender.
    get_length             Optional boolean argument specifying whether a preliminary message carrying the array length must be received first. Default value is true.
    tag                    Optional integer argument specifying a tag which must be matched by the tag of the incoming message. Default tag is 0.
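
A sketch of the pairing with IBTK_MPI::send, using the default get_length/send_length = true so the length travels in a preliminary message (the buffer sizes and values are illustrative):

    #include <ibtk/IBTK_MPI.h>
    #include <vector>

    void exchange_example()
    {
        if (IBTK::IBTK_MPI::getRank() == 0)
        {
            std::vector<double> data = {1.0, 2.0, 3.0};
            IBTK::IBTK_MPI::send(data.data(), static_cast<int>(data.size()),
                                 /*receiving_proc_number*/ 1);
        }
        else if (IBTK::IBTK_MPI::getRank() == 1)
        {
            std::vector<double> buf(100); // capacity must be large enough
            int length = static_cast<int>(buf.size());
            IBTK::IBTK_MPI::recv(buf.data(), length, /*sending_proc_number*/ 0);
            // length (passed by reference) now reflects the received array length.
        }
    }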

◆ recvBytes()

int IBTK::IBTK_MPI::recvBytes(void* buf, int number_bytes)  [static]

This function receives an MPI message with an array of max size number_bytes (MPI_BYTE) from any processor.

This call must be paired with a matching call to IBTK_MPI::sendBytes.

This function returns the processor number of the sender.

Parameters
    buf             Void pointer to a buffer of size number_bytes bytes.
    number_bytes    Integer number specifying the size of buf in bytes.

◆ send()

template<typename T>
void IBTK::IBTK_MPI::send(const T* buf, const int length, const int receiving_proc_number, const bool send_length = true, int tag = 0)  [inline, static]

This function sends an MPI message with an array to another processor.

If the receiving processor knows in advance the length of the array, use "send_length = false"; otherwise, this processor will first send the length of the array and then send the data. This call must be paired with a matching call to IBTK_MPI::recv.

Parameters
    buf                      Pointer to a valid array buffer with length elements.
    length                   Number of elements in buf that we want to send.
    receiving_proc_number    Processor number of the receiver.
    send_length              Optional boolean argument specifying whether a preliminary message carrying the array length must be sent first. Default value is true.
    tag                      Optional integer argument specifying an integer tag to be sent with this message. Default tag is 0.

◆ sendBytes()

void IBTK::IBTK_MPI::sendBytes(const void* buf, const int number_bytes, const int receiving_proc_number)  [static]

This function sends an MPI message with an array of bytes (MPI_BYTE) to receiving_proc_number.

This call must be paired with a matching call to IBTK_MPI::recvBytes.

Parameters
    buf                      Void pointer to an array of number_bytes bytes to send.
    number_bytes             Integer number of bytes to send.
    receiving_proc_number    Processor number of the receiver.
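
A sketch pairing sendBytes with recvBytes to ship a plain-old-data struct as raw bytes; the Header type and the rank numbers are illustrative:

    #include <ibtk/IBTK_MPI.h>

    struct Header // illustrative plain-old-data payload
    {
        int n_points;
        double dt;
    };

    void ship_header()
    {
        if (IBTK::IBTK_MPI::getRank() == 1)
        {
            Header out{128, 1.0e-3};
            IBTK::IBTK_MPI::sendBytes(&out, sizeof(Header), /*receiving_proc_number*/ 0);
        }
        else if (IBTK::IBTK_MPI::getRank() == 0)
        {
            Header in;
            const int sender = IBTK::IBTK_MPI::recvBytes(&in, sizeof(Header));
            // sender holds the rank of the sending processor (1 here).
            (void)sender;
        }
    }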

◆ setCommunicator()

void IBTK::IBTK_MPI::setCommunicator(MPI_Comm communicator)  [static]

Set the communicator that is used for the MPI communication routines. The default communicator is MPI_COMM_WORLD.
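
For example, restricting the routines to a sub-communicator (MPI_Comm_split is standard MPI; splitting by rank parity is purely illustrative):

    #include <ibtk/IBTK_MPI.h>
    #include <mpi.h>

    void use_sub_communicator()
    {
        MPI_Comm sub_comm;
        MPI_Comm_split(MPI_COMM_WORLD, IBTK::IBTK_MPI::getRank() % 2, 0, &sub_comm);
        IBTK::IBTK_MPI::setCommunicator(sub_comm);
        // Subsequent IBTK_MPI calls (reductions, barriers, ...) now act on sub_comm.
    }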

◆ sumReduction()

template<typename T>
T IBTK::IBTK_MPI::sumReduction(T x)  [inline, static]

Perform a sum reduction on a data structure of type double, int, or float. Each processor contributes an array of values, and the element-wise sum is returned in the same array; the scalar overload returns the summed value.
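
A sketch of the array overload, which sums element-wise in place (the values are illustrative):

    #include <ibtk/IBTK_MPI.h>

    void sum_moments()
    {
        double moments[3] = {0.1, 0.2, 0.3}; // local contributions (illustrative)
        IBTK::IBTK_MPI::sumReduction(moments, 3);
        // moments[] now holds the element-wise global sums on every rank.
    }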

