MPI_Exscan(3)                      Open MPI                      MPI_Exscan(3)

NAME
       MPI_Exscan, MPI_Iexscan - Computes an exclusive scan (partial
       reduction)

SYNTAX
C Syntax
       #include <mpi.h>
       int MPI_Exscan(const void *sendbuf, void *recvbuf, int count,
            MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)

       int MPI_Iexscan(const void *sendbuf, void *recvbuf, int count,
            MPI_Datatype datatype, MPI_Op op, MPI_Comm comm,
            MPI_Request *request)

Fortran Syntax
       USE MPI
       ! or the older form: INCLUDE 'mpif.h'
       MPI_EXSCAN(SENDBUF, RECVBUF, COUNT, DATATYPE, OP, COMM, IERROR)
            <type>    SENDBUF(*), RECVBUF(*)
            INTEGER   COUNT, DATATYPE, OP, COMM, IERROR

       MPI_IEXSCAN(SENDBUF, RECVBUF, COUNT, DATATYPE, OP, COMM, REQUEST, IERROR)
            <type>    SENDBUF(*), RECVBUF(*)
            INTEGER   COUNT, DATATYPE, OP, COMM, REQUEST, IERROR

Fortran 2008 Syntax
       USE mpi_f08
       MPI_Exscan(sendbuf, recvbuf, count, datatype, op, comm, ierror)
            TYPE(*), DIMENSION(..), INTENT(IN) :: sendbuf
            TYPE(*), DIMENSION(..) :: recvbuf
            INTEGER, INTENT(IN) :: count
            TYPE(MPI_Datatype), INTENT(IN) :: datatype
            TYPE(MPI_Op), INTENT(IN) :: op
            TYPE(MPI_Comm), INTENT(IN) :: comm
            INTEGER, OPTIONAL, INTENT(OUT) :: ierror

       MPI_Iexscan(sendbuf, recvbuf, count, datatype, op, comm, request, ierror)
            TYPE(*), DIMENSION(..), INTENT(IN), ASYNCHRONOUS :: sendbuf
            TYPE(*), DIMENSION(..), ASYNCHRONOUS :: recvbuf
            INTEGER, INTENT(IN) :: count
            TYPE(MPI_Datatype), INTENT(IN) :: datatype
            TYPE(MPI_Op), INTENT(IN) :: op
            TYPE(MPI_Comm), INTENT(IN) :: comm
            TYPE(MPI_Request), INTENT(OUT) :: request
            INTEGER, OPTIONAL, INTENT(OUT) :: ierror

C++ Syntax
       #include <mpi.h>
       void MPI::Intracomm::Exscan(const void* sendbuf, void* recvbuf,
            int count, const MPI::Datatype& datatype,
            const MPI::Op& op) const

       (Note: the MPI C++ bindings were deprecated in MPI-2.2 and removed
       in MPI-3.0; they are shown here for completeness.)

INPUT PARAMETERS
       sendbuf   Send buffer (choice).

       count     Number of elements in input buffer (integer).

       datatype  Data type of elements of input buffer (handle).

       op        Operation (handle).

       comm      Communicator (handle).

OUTPUT PARAMETERS
       recvbuf   Receive buffer (choice).

       request   Request (handle, non-blocking only).

       IERROR    Fortran only: Error status (integer).

DESCRIPTION
       MPI_Exscan is used to perform an exclusive prefix reduction on data
       distributed across the calling processes. The operation returns, in
       the recvbuf of the process with rank i, the reduction (calculated
       according to the function op) of the values in the sendbufs of
       processes with ranks 0, ..., i-1. Compare this with the
       functionality of MPI_Scan, which calculates over the range
       0, ..., i (inclusive). The type of operations supported, their
       semantics, and the constraints on send and receive buffers are as
       for MPI_Reduce.

       The value in recvbuf on process 0 is undefined, as recvbuf is not
       significant for process 0. The value of recvbuf on process 1 is
       always the value in sendbuf on process 0.
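
       For illustration, here is a minimal sketch (not part of the original
       page) that computes an exclusive prefix sum of one integer per rank;
       all variable names are hypothetical:

           #include <stdio.h>
           #include <mpi.h>

           int main(int argc, char *argv[])
           {
               int rank, val, prefix = 0;

               MPI_Init(&argc, &argv);
               MPI_Comm_rank(MPI_COMM_WORLD, &rank);

               val = rank + 1;  /* each rank contributes rank + 1 */

               /* After the call, prefix on rank i holds the sum of val
                  on ranks 0, ..., i-1; on rank 0 it is undefined. */
               MPI_Exscan(&val, &prefix, 1, MPI_INT, MPI_SUM,
                          MPI_COMM_WORLD);

               if (rank != 0)
                   printf("rank %d: prefix = %d\n", rank, prefix);

               MPI_Finalize();
               return 0;
           }

       The nonblocking variant MPI_Iexscan takes the same arguments plus an
       MPI_Request, which must be completed (e.g., with MPI_Wait) before
       recvbuf is read.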

USE OF IN-PLACE OPTION
       The "in place" option for intracommunicators is specified by passing
       MPI_IN_PLACE in the sendbuf argument. In this case, the input data
       is taken from the receive buffer and replaced by the output data.

       Note that MPI_IN_PLACE is a special kind of value; it has the same
       restrictions on its use as MPI_BOTTOM.

       Because the in-place option converts the receive buffer into a
       send-and-receive buffer, a Fortran binding that includes INTENT must
       mark these as INOUT, not OUT.
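
       As a sketch (assumed, not taken from the original page), the
       in-place form seeds the receive buffer with this rank's contribution
       and passes MPI_IN_PLACE as sendbuf:

           #include <stdio.h>
           #include <mpi.h>

           int main(int argc, char *argv[])
           {
               int rank, prefix;

               MPI_Init(&argc, &argv);
               MPI_Comm_rank(MPI_COMM_WORLD, &rank);

               prefix = rank + 1;  /* input is read from the receive buffer */

               /* The receive buffer doubles as the send buffer; on ranks
                  i > 0 it is overwritten with the reduction over ranks
                  0, ..., i-1. */
               MPI_Exscan(MPI_IN_PLACE, &prefix, 1, MPI_INT, MPI_SUM,
                          MPI_COMM_WORLD);

               if (rank != 0)
                   printf("rank %d: prefix = %d\n", rank, prefix);

               MPI_Finalize();
               return 0;
           }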

NOTES
       MPI does not specify which process computes which operation. In
       particular, both processes 0 and 1 may participate in the
       computation even though the results for both processes' recvbuf are
       degenerate. Therefore, all processes, including 0 and 1, must
       provide the same op.

       It can be argued, from a mathematical perspective, that the
       definition of MPI_Exscan is unsatisfactory because the output at
       process 0 is undefined. The "mathematically correct" output for
       process 0 would be the unit element of the reduction operation.
       However, such a definition of an exclusive scan would not work with
       user-defined op functions, as there is no way for MPI to "know" the
       unit value for these custom operations.
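
       For instance (a hypothetical sketch, not from the original page), a
       user-defined operation registered with MPI_Op_create is opaque to
       MPI, which sees only a function pointer and so cannot place the
       operation's unit element in recvbuf on process 0:

           #include <mpi.h>

           /* User-defined reduction: elementwise maximum. MPI cannot
              know that INT_MIN would be this operation's unit element. */
           static void int_max(void *in, void *inout, int *len,
                               MPI_Datatype *dtype)
           {
               int *a = (int *)in, *b = (int *)inout;
               for (int i = 0; i < *len; ++i)
                   if (a[i] > b[i]) b[i] = a[i];
           }

           int main(int argc, char *argv[])
           {
               int rank, val, res;
               MPI_Op op;

               MPI_Init(&argc, &argv);
               MPI_Comm_rank(MPI_COMM_WORLD, &rank);

               val = rank;  /* arbitrary per-rank input */
               MPI_Op_create(int_max, 1 /* commutative */, &op);

               MPI_Exscan(&val, &res, 1, MPI_INT, op, MPI_COMM_WORLD);
               /* res is valid on ranks i > 0, undefined on rank 0. */

               MPI_Op_free(&op);
               MPI_Finalize();
               return 0;
           }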

NOTES ON COLLECTIVE OPERATIONS
       The reduction functions of type MPI_Op do not return an error value.
       As a result, if the functions detect an error, all they can do is
       either call MPI_Abort or silently skip the problem. Thus, if the
       error handler is changed from MPI_ERRORS_ARE_FATAL to something else
       (e.g., MPI_ERRORS_RETURN), then no error may be indicated.

       The reason for this is the performance cost of ensuring that all
       collective routines return the same error value.

ERRORS
       Almost all MPI routines return an error value: C routines as the
       value of the function and Fortran routines in the last argument. C++
       functions do not return errors. If the default error handler is set
       to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception
       mechanism will be used to throw an MPI::Exception object.

       Before the error value is returned, the current MPI error handler is
       called. By default, this error handler aborts the MPI job, except
       for I/O function errors. The error handler may be changed with
       MPI_Comm_set_errhandler; the predefined error handler
       MPI_ERRORS_RETURN may be used to cause error values to be returned.
       Note that MPI does not guarantee that an MPI program can continue
       past an error.
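
       As an illustrative sketch (not from the original page), a caller can
       select MPI_ERRORS_RETURN and then check the return code:

           #include <stdio.h>
           #include <mpi.h>

           int main(int argc, char *argv[])
           {
               int val = 1, prefix, rc;

               MPI_Init(&argc, &argv);

               /* Return error codes instead of aborting the job. */
               MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

               rc = MPI_Exscan(&val, &prefix, 1, MPI_INT, MPI_SUM,
                               MPI_COMM_WORLD);
               if (rc != MPI_SUCCESS) {
                   char msg[MPI_MAX_ERROR_STRING];
                   int len;
                   MPI_Error_string(rc, msg, &len);
                   fprintf(stderr, "MPI_Exscan failed: %s\n", msg);
               }

               MPI_Finalize();
               return 0;
           }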

       See the MPI man page for a full list of MPI error codes.

SEE ALSO
       MPI_Op_create
       MPI_Reduce
       MPI_Scan

4.1.2                            Nov 24, 2021                    MPI_Exscan(3)
