
MPI Forum: What's Next?

Feb 28, 2013

Now that we're just starting into the MPI-3.0 era, what's next?

The MPI Forum is still having active meetings.  What is left to do?  Isn't MPI "done"?

Nope.  MPI is an ever-changing standard to meet the needs of HPC.  And since HPC keeps changing, so does MPI.

The next MPI Forum meeting, for example, is in about two weeks at the Microsoft facility in Chicago.  The Forum is generally working on two things:

  • MPI-3.1: errata to MPI-3.0
  • MPI-4.0: new "big things" / major topics and features

I should note that neither of those names ("MPI-3.1" and "MPI-4.0") has been finally settled yet, but I think it's pretty safe to assume that's what they will be called.

Here are a few of the major topics that we'll be discussing in Chicago:

  • More possibilities for integration with tools: debuggers, profilers, correctness-checkers, etc.
  • New MPI collectives (e.g., MPI_ALLTOALLWT)
  • Fault tolerance.  This major topic didn't make the cut for MPI-3.0, but there are still many on the Forum who are interested in getting some kind of fault tolerance definitions into the next major release of the MPI specification (see the first sketch after this list).
  • Various 3.1 errata
  • More one-sided extensions (e.g., non-blocking creation and allocation behavior)
  • Making MPI play nice with other parallel systems (e.g., OpenMP, proprietary threads-based packages, etc.); see the second sketch after this list
  • Proposals for a few new miscellaneous routines (e.g., MPI_RECVREDUCE, MPI_SENDV / MPI_RECVV, etc.)
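To make the fault tolerance item concrete: the leading proposal in this area (known as User-Level Failure Mitigation, or ULFM) lets an application detect a failed peer, revoke the broken communicator, and shrink it down to the survivors.  Here's a minimal sketch of that style of error handling in C.  Keep in mind that MPIX_Comm_revoke and MPIX_Comm_shrink come from the proposal's prototype implementations, not from the MPI standard itself, so treat them as assumptions.

/* Sketch of ULFM-style fault handling.  The MPIX_ calls below are
 * from the ULFM prototype (a proposal), not the MPI standard. */
#include <stdio.h>
#include <mpi.h>
#include <mpi-ext.h>   /* prototype extensions header (assumption) */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    MPI_Comm comm = MPI_COMM_WORLD;
    /* Return errors instead of aborting, so we can react to failures. */
    MPI_Comm_set_errhandler(comm, MPI_ERRORS_RETURN);

    int rc = MPI_Barrier(comm);
    if (MPI_SUCCESS != rc) {
        /* A peer failed: revoke the communicator so that all ranks
         * see the failure, then shrink it to the surviving processes. */
        MPIX_Comm_revoke(comm);
        MPI_Comm survivors;
        MPIX_Comm_shrink(comm, &survivors);
        comm = survivors;
    }

    int size;
    MPI_Comm_size(comm, &size);
    printf("Continuing with %d process(es)\n", size);

    MPI_Finalize();
    return 0;
}

The key design point is that nothing happens automatically: the application decides when (and whether) to repair the communicator, which keeps fault tolerance costs out of applications that don't want them.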
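And on the "playing nice with other parallel systems" item: MPI already has a toe in this water via MPI-2's MPI_Init_thread, which lets an application request a thread support level; the Forum discussions are about going further than this.  For reference, here's a minimal hybrid MPI + OpenMP sketch using only the existing, standardized API:

/* Minimal hybrid MPI + OpenMP sketch using MPI-2 thread levels. */
#include <stdio.h>
#include <mpi.h>
#include <omp.h>

int main(int argc, char **argv)
{
    int provided;
    /* Ask for MPI_THREAD_MULTIPLE: any thread may call MPI. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) {
        /* Fall back: restrict MPI calls to the main thread. */
        printf("Only thread level %d provided\n", provided);
    }

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    #pragma omp parallel
    {
        printf("Rank %d, thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}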

You can see that there's still quite a bit of activity going on.  Many discussions look ahead to exascale.  Many others are grounded in real-world usage of the first MPI-3.0 implementations.  And so on.

Let your local MPI Forum rep know about any suggestions, ideas, and concerns that you have.


Tags: HPC, MPI, MPI-3.0
