fpmas 1.6

HardDataSync Class Reference
#include <hard_data_sync.h>
Public Member Functions

HardDataSync (fpmas::api::communication::MpiCommunicator &comm, ServerPackBase &server_pack, fpmas::api::graph::DistributedGraph< T > &graph)
void synchronize () override
void synchronize (std::unordered_set< fpmas::api::graph::DistributedNode< T > * >) override
Public Member Functions inherited from fpmas::api::synchro::DataSync< T >

virtual void synchronize ()=0
virtual void synchronize (std::unordered_set< api::graph::DistributedNode< T > * > nodes)=0
HardSyncMode implementation of fpmas::api::synchro::DataSync.
Since data is always fetched directly from distant processes by the HardSyncMutex, no real data synchronization is required in this implementation. However, a termination algorithm must be applied during the synchronization process to handle incoming requests from other processes until global termination (i.e. global synchronization) is reached.
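To make this more concrete, the sketch below contrasts the two steps: data is fetched on demand when a distant node is read, so the collective synchronization step has nothing left to transfer and only runs a termination algorithm. All names (MockHardSyncMutex, MockServer, handleIncomingRequests(), hard_data_sync_synchronize()) are invented for this illustration and are not part of the fpmas API; only the overall behaviour follows the description above, with MockServer::terminate() mirroring the ServerPack::terminate() call documented below.

```cpp
#include <iostream>

// Purely illustrative mocks: MockHardSyncMutex and MockServer are NOT part of
// the fpmas API. They only sketch the behaviour described above.
struct MockHardSyncMutex {
    int local_value;
    bool is_distant;

    // In HardSyncMode, reading a DISTANT node triggers an immediate request to
    // the owner process: data is fetched on demand, not at the global
    // synchronization point.
    int read() {
        if(is_distant)
            std::cout << "read request sent to the owner process" << std::endl;
        return local_value;
    }
};

struct MockServer {
    // Serves read/acquire (and link) requests received from other processes.
    void handleIncomingRequests() { /* answer pending requests, if any */ }

    // Termination algorithm: keep serving requests until every process agrees
    // that no request is left in flight (global termination).
    void terminate() {
        bool globally_terminated = false;
        while(!globally_terminated) {
            handleIncomingRequests();
            globally_terminated = true; // decided by the real algorithm
        }
    }
};

// Sketch of the synchronization step: there is no data left to exchange,
// only the termination algorithm is run.
void hard_data_sync_synchronize(MockServer& server) {
    server.terminate();
}

int main() {
    MockHardSyncMutex distant_node_mutex {42, true};
    distant_node_mutex.read();           // data fetched here, on demand

    MockServer server;
    hard_data_sync_synchronize(server);  // no data exchange, only termination
}
```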
HardDataSync (fpmas::api::communication::MpiCommunicator &comm, ServerPackBase &server_pack, fpmas::api::graph::DistributedGraph< T > &graph) [inline]

HardDataSync constructor.

Parameters:
- comm: MPI communicator
- server_pack: associated server pack
- graph: associated graph
void synchronize () [inline], [override], [virtual]

Synchronizes all the processes using ServerPack::terminate().

Notice that, since the ServerPack is used, not only HardSyncMutex requests (i.e. data related requests) but also SyncLinker requests are handled until termination, as illustrated by the sketch below.

This is required to avoid the following deadlock situation:
| Process 0 | Process 1 |
|---|---|
| ... | ... |
| init HardDataSync::terminate() | send link request to P0... |
| handles data requests... | waits for link response... |
| handles data requests... | waits for link response... |
| DEADLOCK | |
Implements fpmas::api::synchro::DataSync< T >.
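The role of the ServerPack in breaking this deadlock can be sketched as follows. This is an illustrative mock, not the actual fpmas implementation: RequestType, MockServerPack and handle() are made up for the example; only the idea that both HardSyncMutex (data) requests and SyncLinker (link) requests keep being served until termination comes from the documentation above.

```cpp
#include <queue>

// Illustrative only: DATA stands for HardSyncMutex (read/acquire) requests,
// LINK for SyncLinker (link/unlink) requests.
enum class RequestType { DATA, LINK };

struct MockServerPack {
    std::queue<RequestType> incoming;

    void handle(RequestType) {
        // Both request types are answered. A process blocked in terminate()
        // can therefore still reply to a link request coming from another
        // process, which is what prevents the deadlock shown in the table.
    }

    // Termination algorithm sketch: serve every incoming request, whatever
    // its type, until global termination is detected.
    void terminate() {
        bool globally_terminated = false;
        while(!globally_terminated) {
            while(!incoming.empty()) {
                handle(incoming.front());
                incoming.pop();
            }
            globally_terminated = true; // decided by the real algorithm
        }
    }
};

int main() {
    MockServerPack server_pack;
    // Requests received while this process waits for termination:
    server_pack.incoming.push(RequestType::DATA);
    server_pack.incoming.push(RequestType::LINK); // the P1 link request from the table
    server_pack.terminate(); // both requests are served before termination
}
```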
void synchronize (std::unordered_set< fpmas::api::graph::DistributedNode< T > * >) [inline], [override]

Same as synchronize().

Indeed, partial synchronization is not really relevant in the case of HardSyncMode: data exchanges in this mode occur only when required, i.e. when the read() or acquire() methods are called, and not when the termination algorithm is performed. So all nodes can be considered synchronized whenever this method is called.
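In other words, the overload simply behaves like the parameter-less version. A minimal standalone sketch of that equivalence is given below; Node and run_termination_algorithm() are placeholders invented for the example, not fpmas identifiers.

```cpp
#include <unordered_set>

struct Node {};                        // placeholder for DistributedNode<T>
void run_termination_algorithm() {}    // stands in for ServerPack::terminate()

// Full synchronization: only the termination algorithm is run.
void synchronize() {
    run_termination_algorithm();
}

// Partial synchronization: the node set is ignored, since data exchanges only
// happen on read()/acquire() calls, never during the termination algorithm.
void synchronize(const std::unordered_set<Node*>& /*nodes*/) {
    synchronize();
}

int main() {
    std::unordered_set<Node*> nodes;
    synchronize(nodes); // equivalent to synchronize()
}
```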