fpmas 1.6
fpmas::synchro::hard::HardDataSync< T > Class Template Reference

#include <hard_data_sync.h>

Inherits fpmas::api::synchro::DataSync< T >.

Public Member Functions

 HardDataSync (fpmas::api::communication::MpiCommunicator &comm, ServerPackBase &server_pack, fpmas::api::graph::DistributedGraph< T > &graph)
 
void synchronize () override
 
void synchronize (std::unordered_set< fpmas::api::graph::DistributedNode< T > * >) override
 
- Public Member Functions inherited from fpmas::api::synchro::DataSync< T >
virtual void synchronize ()=0
 
virtual void synchronize (std::unordered_set< api::graph::DistributedNode< T > * > nodes)=0
 

Detailed Description

template<typename T>
class fpmas::synchro::hard::HardDataSync< T >

fpmas::api::synchro::DataSync implementation for HardSyncMode.

Since data is always fetched directly from distant processes by the HardSyncMutex, no actual data synchronization is required in this implementation. However, a termination algorithm must still be applied during the synchronization process, in order to handle incoming requests from other processes until global termination (i.e. global synchronization) is detected.

Constructor & Destructor Documentation

◆ HardDataSync()

template<typename T >
fpmas::synchro::hard::HardDataSync< T >::HardDataSync ( fpmas::api::communication::MpiCommunicator &  comm,
                                                        ServerPackBase &  server_pack,
                                                        fpmas::api::graph::DistributedGraph< T > &  graph 
                                                      )
inline

HardDataSync constructor.

Parameters
    comm         MPI communicator
    server_pack  associated server pack
    graph        associated graph

Member Function Documentation

◆ synchronize() [1/2]

template<typename T >
void fpmas::synchro::hard::HardDataSync< T >::synchronize ( )
inline override virtual

Synchronizes all the processes using ServerPack::terminate().

Notice that, since ServerPack is used, not only HardSyncMutex requests (i.e. data-related requests) but also SyncLinker requests will be handled until termination.

This is required to avoid the following deadlock situation :

Process 0                            Process 1
...                                  ...
init HardDataSync::terminate()       send link request to P0...
handles data requests...             waits for link response...
handles data requests...             waits for link response...
                     DEADLOCK

Implements fpmas::api::synchro::DataSync< T >.

◆ synchronize() [2/2]

template<typename T >
void fpmas::synchro::hard::HardDataSync< T >::synchronize ( std::unordered_set< fpmas::api::graph::DistributedNode< T > * >  )
inline override

Same as synchronize().

Indeed, partial synchronization is not really relevant in the case of HardSyncMode: data exchanges in this mode occur only when required, i.e. when the read() or acquire() methods are called, and not directly when the termination algorithm is performed.

So we can consider that all nodes are synchronized in any case when this method is called.


The documentation for this class was generated from the following file: hard_data_sync.h