ParallelLayer

MPI/Thread

Description of the MPI thread support levels (a minimal initialization sketch follows the list):

  • MPI_THREAD_SINGLE: Only one thread will execute.
  • MPI_THREAD_FUNNELED: If the process is multithreaded, only the thread that called MPI_Init_thread will make MPI calls.
  • MPI_THREAD_SERIALIZED: If the process is multithreaded, only one thread will make MPI library calls at one time.
  • MPI_THREAD_MULTIPLE: If the process is multithreaded, multiple threads may call MPI at once with no restrictions.
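
A minimal sketch of requesting a thread support level at startup, using only the standard MPI-2 call MPI_Init_thread (nothing project-specific is assumed). The implementation may grant a lower level than requested, so the returned level must be checked:

  #include <mpi.h>
  #include <cstdio>

  int main(int argc, char** argv)
  {
      int provided = MPI_THREAD_SINGLE;
      // Request the highest level; the library may grant less.
      MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
      if (provided < MPI_THREAD_MULTIPLE)
          std::printf("only thread support level %d is provided\n", provided);
      MPI_Finalize();
      return 0;
  }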

Delalf 11:08, 3 May 2011 (EDT) I believe we should try to evaluate MPI implementations for MPI_THREAD_MULTIPLE (the best-case scenario) and MPI_THREAD_FUNNELED (not great, though).

IPComMan Neighboring information

As a follow-up to the discussion about initializing and updating neighborhood information for IPComMan, I found the following information:

1- The following points are from the IPComMan paper (how the application updates neighborhood information using IPComMan); a sketch of providing a neighbor set follows the list:

  • Neighborhoods can be represented by a graph, where each graph node is a processor and each graph edge is a communication link.
  • During the first communication phase the application identifies neighbors for each processor and provides this information to the communication package.
  • Tracking and supporting a dynamically changing neighborhood is also required. The application should be aware of such situations and let the communication package know that new communication links might be created during the communication step.
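
A hedged sketch of how an application might derive the neighborhood graph edges for one part from the remote part ids seen on shared mesh entities, and hand that set to the communication package. The packageSetNeighbors call is a hypothetical stand-in, not IPComMan's actual interface:

  #include <cstdio>
  #include <set>
  #include <vector>

  // Stand-in for the package's neighbor registration (assumed name/signature).
  static std::set<int> g_neighbors;
  void packageSetNeighbors(const std::set<int>& n) { g_neighbors = n; }

  int main()
  {
      // Remote part ids observed on shared mesh entities (example data).
      std::vector<int> remoteParts = {1, 3, 3, 7, 1};

      // Deduplicate: each distinct remote part is one edge of the
      // neighborhood graph (this part <-> neighbor part).
      std::set<int> neighbors(remoteParts.begin(), remoteParts.end());
      packageSetNeighbors(neighbors);

      for (int p : g_neighbors)
          std::printf("neighbor part: %d\n", p);
      return 0;
  }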

Delalf 11:08, 3 May 2011 (EDT) I believe those last two points should not be taken care of by the application. Do you all agree with that? I see it as something (maybe not IPComMan) taking care of the node graph and distributing that graph over the processors (e.g., with ParMETIS).

2- When a partitioned mesh is loaded, FMDB initializes neighboring part information based on partition model information. The code for that is in meshio/pmMeshIO.cc (loadPartitionedMesh)

Delalf 11:09, 3 May 2011 (EDT) The partition model should be out of FMDB, but maybe not in the parallel layer, since it is information related to the mesh ... Or this should be made generic and accessible to all components --> moving to the parallel layer.

3- FMDB's mesh migration also changes neighborhood information (IPComMan maintains a set of integers holding the neighboring part ids), so the migration procedure updates IPComMan's neighbors based on the new partition model information (code is in migr/_migration.cc, function _migrateMesh). (By mubarm)
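
A hedged sketch of that dynamic update, assuming only what the point above states (the package keeps a set of integer part ids). CommPackage and resetNeighbors are illustrative names, not FMDB's or IPComMan's real API:

  #include <cstdio>
  #include <set>

  struct CommPackage {
      std::set<int> neighborPartIds;  // the "set of integers" mentioned above
      void resetNeighbors(const std::set<int>& fresh) { neighborPartIds = fresh; }
  };

  int main()
  {
      CommPackage comm;
      comm.resetNeighbors({1, 2});           // neighbors before migration

      // After migration, the new partition model yields a new neighbor set.
      std::set<int> postMigration = {2, 5};  // example data
      comm.resetNeighbors(postMigration);

      for (int p : comm.neighborPartIds)
          std::printf("neighbor after migration: %d\n", p);
      return 0;
  }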

Meetings

04/26/2011

More to come here

To do:

  • Dan
    • Evaluate the MPI-2 standard in the context of mixing MPI with threads.
    • Evaluate up-to-date MPI implementations
  • Misbah
    • Evaluate neighboring information (concept and implementation)
    • What can be done to make it generic
  • Fabien/Alex
    • Evaluate IPComMan API
    • Is the API sufficient for our use cases? (use cases to be defined)

05/03/2011

TBD
