MPI: blocking vs non-blocking

Blocking communication is done using MPI_Send() and MPI_Recv(). These functions do not return (i.e., they block) until the communication is finished. Simplifying somewhat, this means that once MPI_Send() returns, the buffer passed to it can safely be reused, either because MPI saved it somewhere, or because it has been received by the destination. Similarly, MPI_Recv() returns when the receive … Read more
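
A minimal sketch of the blocking pattern (the 100-element buffer, tag 0, and two-rank layout are illustrative choices, not taken from the answer):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, buf[100] = {0};
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            buf[0] = 42;
            /* returns once buf is safe to reuse (copied internally or delivered) */
            MPI_Send(buf, 100, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* returns only after the message has landed in buf */
            MPI_Recv(buf, 100, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 got %d\n", buf[0]);
        }
        MPI_Finalize();
        return 0;
    }

Run it with at least two processes, e.g. mpirun -np 2 ./a.out.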

MPI_Bcast a dynamic 2d array

There are three issues here – one involving allocation, one involving where it’s allocated, and one involving how MPI works, and none of the other answers quite touch on all of them. The first and most serious issue is where things are allocated. As correctly pointed out by @davidb, as it stands you’re allocating memory only … Read more
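
Since the usual fix is to give the 2D array one contiguous block of storage so a single broadcast can move it, here is a hedged sketch of that idea (the 4x5 shape, the use of double, and the root-only fill are invented for illustration):

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv) {
        int rank, rows = 4, cols = 5;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* one contiguous block of rows*cols values, plus row pointers into it */
        double *data = malloc(rows * cols * sizeof(double));
        double **array = malloc(rows * sizeof(double *));
        for (int i = 0; i < rows; i++)
            array[i] = &data[i * cols];

        if (rank == 0)
            array[2][3] = 1.5;   /* root fills in its data */

        /* because the storage is contiguous, one broadcast moves the whole array */
        MPI_Bcast(&(array[0][0]), rows * cols, MPI_DOUBLE, 0, MPI_COMM_WORLD);

        free(array);
        free(data);
        MPI_Finalize();
        return 0;
    }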

Sending 2D arrays in Fortran with MPI_Gather

The following is a literal Fortran translation of this answer. I had thought this was unnecessary, but the multiple differences in array indexing and memory layout might mean that it is worth doing a Fortran version. Let me start by saying that you generally don’t really want to do this – scatter and gather huge chunks … Read more

How do I debug an MPI program?

I have found gdb quite useful. I use it as mpirun -np <NP> xterm -e gdb ./program. This launches xterm windows in which I can do run <arg1> <arg2> … <argN>, which usually works fine. You can also package these commands together using: mpirun -n <NP> xterm -hold -e gdb -ex run --args ./program [arg1] … Read more
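
A related trick, not from the answer above but often paired with it: make each rank pause in a loop so you can attach gdb to a chosen PID afterwards (the hold variable and one-second sleep are arbitrary):

    #include <mpi.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        volatile int hold = 1;   /* set to 0 from the debugger to continue */
        printf("PID %d waiting for debugger attach\n", (int)getpid());
        fflush(stdout);
        while (hold)
            sleep(1);
        /* ... actual program ... */
        MPI_Finalize();
        return 0;
    }

Attach with gdb -p <PID>, then set var hold = 0 and continue.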

struct serialization in C and transfer over MPI

Jeremiah is right – MPI_Type_create_struct is the way to go here. It’s important to remember that MPI is a library, not built into the language, so it can’t “see” what a structure looks like and serialize it by itself. So to send complex data types, you have to explicitly define their layout. In a language … Read more
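
A minimal sketch of describing a struct’s layout to MPI (the particle_t fields here are invented for illustration, not taken from the question):

    #include <mpi.h>
    #include <stddef.h>

    typedef struct {
        int    id;
        double x[3];
    } particle_t;

    /* build an MPI datatype that mirrors particle_t's memory layout */
    MPI_Datatype make_particle_type(void) {
        MPI_Datatype ptype;
        int          blocklens[2] = {1, 3};
        MPI_Aint     displs[2]    = { offsetof(particle_t, id),
                                      offsetof(particle_t, x) };
        MPI_Datatype types[2]     = { MPI_INT, MPI_DOUBLE };

        MPI_Type_create_struct(2, blocklens, displs, types, &ptype);
        MPI_Type_commit(&ptype);
        return ptype;
    }

After that you can send one element with MPI_Send(&p, 1, ptype, dest, tag, comm); for arrays of structs you may also need MPI_Type_create_resized so the type’s extent accounts for any trailing padding.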

MPI_Rank returns the same process number for all processes

Make sure that both mpicc and mpirun come from the same MPI implementation. When mpirun fails to provide the necessary universe information to the launched processes (most commonly because the executable was built against a different MPI implementation, or even a different version of the same implementation), MPI_Init() falls … Read more
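
A quick way to check for this situation is a minimal rank/size program compiled and launched with the same implementation’s wrappers (program name and process count are up to you):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        /* a consistent toolchain prints distinct ranks 0..size-1;
           mismatched mpicc/mpirun makes every process report rank 0 of 1 */
        printf("rank %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }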

Ordering Output in MPI

You guessed right. The MPI standard does not specify how stdout from different nodes should be collected for printing at the originating process. It is often the case that when multiple processes print, their output gets merged in an unspecified order, and fflush doesn’t help. If you want the output ordered in a … Read more
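
One common way to get ordered output, offered here as a sketch rather than the answer’s exact code: have every rank send its text to rank 0, which prints in rank order (the 128-byte message buffer and tag 0 are arbitrary):

    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv) {
        int rank, size;
        char msg[128];
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        snprintf(msg, sizeof(msg), "hello from rank %d", rank);

        if (rank == 0) {
            printf("%s\n", msg);                    /* rank 0 prints its own line first */
            for (int src = 1; src < size; src++) {  /* then the others, in rank order */
                MPI_Recv(msg, sizeof(msg), MPI_CHAR, src, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                printf("%s\n", msg);
            }
        } else {
            MPI_Send(msg, strlen(msg) + 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }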

Sending and receiving 2D array over MPI

Just to amplify Joel’s points a bit: This goes much easier if you allocate your arrays so that they’re contiguous (something C’s “multidimensional arrays” don’t give you automatically):

    int **alloc_2d_int(int rows, int cols) {
        int *data = (int *)malloc(rows*cols*sizeof(int));
        int **array = (int **)malloc(rows*sizeof(int*));
        for (int i=0; i<rows; i++)
            array[i] = &(data[cols*i]);
        return array;
    }
    /* … */

… Read more
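
With that allocator the whole 2D array can travel in a single call, because the data sits in one block. A hedged usage sketch, assuming MPI has already been initialised, rank holds the process rank, and rows/cols match on both sides:

    /* both ranks allocate with the helper above */
    int **array = alloc_2d_int(rows, cols);

    if (rank == 0)
        MPI_Send(&(array[0][0]), rows*cols, MPI_INT, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(&(array[0][0]), rows*cols, MPI_INT, 0, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);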