
Improving node inter-cluster point-to-point communication by using the UDP protocol and the NAT service

Gregoretti, Francesco; Oliva, Gennaro
2009

Abstract

In the context of multi-site parallel applications, where a single MPI program is executed on several clusters linked with grid technologies (e.g., the Globus Toolkit and MPICH-G2), a problem can arise in the presence of clusters with private networks: two nodes belonging to different clusters and lacking public IP addresses cannot communicate directly. We have previously shown how this problem can be solved in a portable manner by introducing user MPI processes, running on the front-end, whose only purpose is to route messages among clusters. In this work we improve inter-cluster point-to-point communication between private nodes by using the UDP protocol and a NAT service on the front-end. The first enhancement improves network performance by reducing transport protocol overhead, while the second avoids unnecessary in-memory copies of the message on the forwarding host. A performance comparison with the user MPI process solution is presented, and the impact of the two enhancements is evaluated separately.
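To make the forwarding idea concrete, the following is a minimal sketch, not the authors' implementation, of a user-level UDP relay running on a cluster front-end: it receives datagrams arriving on a public port and retransmits them to a compute node on the private network. The private address and port numbers are illustrative assumptions. This user-space copy-and-resend step is exactly the overhead that the NAT-based enhancement described in the abstract is meant to remove.

/* Minimal sketch of a user-level UDP relay on a cluster front-end.
 * Hypothetical example; PRIVATE_HOST and the port numbers are assumptions. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <unistd.h>

#define PUBLIC_PORT  5000            /* port exposed on the public interface */
#define PRIVATE_HOST "192.168.0.10"  /* private compute node (assumed)       */
#define PRIVATE_PORT 5001

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) { perror("socket"); return EXIT_FAILURE; }

    struct sockaddr_in pub = {0}, priv = {0};
    pub.sin_family      = AF_INET;
    pub.sin_addr.s_addr = htonl(INADDR_ANY);
    pub.sin_port        = htons(PUBLIC_PORT);
    if (bind(sock, (struct sockaddr *)&pub, sizeof pub) < 0) {
        perror("bind"); return EXIT_FAILURE;
    }

    priv.sin_family = AF_INET;
    priv.sin_port   = htons(PRIVATE_PORT);
    inet_pton(AF_INET, PRIVATE_HOST, &priv.sin_addr);

    char buf[65536];
    for (;;) {
        /* Receive a datagram from a remote cluster and forward it,
         * unchanged, to the private node behind the front-end. */
        ssize_t n = recvfrom(sock, buf, sizeof buf, 0, NULL, NULL);
        if (n < 0) { perror("recvfrom"); break; }
        if (sendto(sock, buf, (size_t)n, 0,
                   (struct sockaddr *)&priv, sizeof priv) < 0) {
            perror("sendto"); break;
        }
    }
    close(sock);
    return EXIT_SUCCESS;
}

With a NAT service on the front-end, the address translation and forwarding happen in the kernel's network stack instead, so incoming packets reach the private node without being copied into and out of a user-space relay process.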
Istituto di Calcolo e Reti ad Alte Prestazioni - ICAR
Grid Computing
MPI
Files in this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/20.500.14243/65399