hmbdc
simplify-high-performance-messaging-programming
A Context is like a media object that facilitates communication for the Clients it holds. A Client can be added to (or started within) a single Context at most once; anything else is undefined behavior. The communication model is determined by the context_property; by default it is broadcast within the local process, indicated by broadcast<>.
#include <Context.hpp>
Public Member Functions
Context (uint32_t messageQueueSizePower2Num=MaxMessageSize?20:2, size_t maxPoolClientCount=MaxMessageSize?128:0, size_t maxMessageSizeRuntime=MaxMessageSize, size_t maxThreadSerialNumber=64)
ctor for constructing a local non-ipc Context
Context (char const *ipcTransportName, uint32_t messageQueueSizePower2Num=MaxMessageSize?20:0, size_t maxPoolClientCount=MaxMessageSize?128:0, size_t maxMessageSizeRuntime=MaxMessageSize, uint64_t purgerCpuAffinityMask=0xfffffffffffffffful, size_t maxThreadSerialNumber=64)
ctor for constructing a local ipc Context
~Context ()
dtor
template<typename Client >
void | addToPool (Client &client, uint64_t poolThreadAffinityIn=0xfffffffffffffffful)
add a Client to the Context's pool - the Client runs in pool mode
template<typename Client , typename ... Args>
void | addToPool (Client &client, uint64_t poolThreadAffinityIn, Args &&...args)
add a bunch of Clients to the Context's pool - the Clients run in pool mode
template<typename Client , typename Client2 , typename ... Args>
std::enable_if<!std::is_integral< Client2 >::value, void >::type | addToPool (Client &client, Client2 &client2, Args &&...args)
add a bunch of Clients to the Context's pool - the Clients run in pool mode
size_t | clientCountInPool () const
return the number of Clients added into the pool
size_t | parallelConsumerAlive () const
how many parallel consumers are started
template<typename ... Args>
void | start (Args &&... args)
start the Context by specifying what runs in it (a Pool and/or direct Clients) and their paired-up cpu affinities.
void | stop ()
stop the message dispatching - asynchronously
void | join ()
wait until all threads (Pool threads too, if applicable) of the Context exit
void | setSecondsBetweenPurge (uint32_t s)
an ipc_creator Context runs a StcuClientPurger to purge crashed (or slow, stuck ...) Clients from the ipc transport to keep the transport healthy (avoiding buffer full). It periodically looks for things to purge; this sets the period (default is 60 seconds).
void | runPoolThreadOnce (uint16_t threadSerialNumberInPool)
normally not used unless you want to run your own message loop
template<typename Client >
void | runClientThreadOnce (uint16_t threadSerialNumber, Client &c)
normally not used unless you want to run your own message loop
std::enable_if<!std::is_integral< M1 >::value, void >::type | send (M0 &&m0, M1 &&m1, Messages &&... msgs)
send a batch of messages to the Context or attached ipc Contexts
void | send (ForwardIt begin, size_t n)
send a range of messages to the Context or attached ipc Contexts
void | send (Message &&m)
send a message to the Context or attached ipc Contexts
std::enable_if<!std::is_integral< M1 >::value, bool >::type | trySend (M0 &&m0, M1 &&m1, Messages &&... msgs)
try to send a batch of messages to the Context or attached ipc Contexts
bool | trySend (ForwardIt begin, size_t n)
try to send a range of messages to the Context or attached ipc Contexts
bool | trySend (Message &&m)
try to send a message to the Context or attached ipc Contexts if it wouldn't block
void | sendInPlace (Args &&... args)
construct a message in place and send it to all Clients in the Context or attached ipc Contexts
bool | trySendInPlace (Args &&... args)
try to construct a message in place and send it to all Clients in the Context or attached ipc Contexts if it wouldn't block
Buffer & | buffer ()
accessor - mostly used internally
A Context is like a media object that facilitates communication for the Clients it holds. A Client can be added to (or started within) a single Context at most once; anything else is undefined behavior. The communication model is determined by the context_property; by default it is broadcast within the local process, indicated by broadcast<>.
A broadcast Context contains a thread Pool powered by a number of OS threads. A Client running in such a Context can run either in pool mode or in direct mode (in which the Client has its own dedicated OS thread). Direct mode provides faster responses; pool mode provides more flexibility. It is recommended that the total number of threads (pool threads + direct threads) not exceed the number of available cores.
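To make the lifecycle concrete, here is a minimal sketch. The hmbdc::app namespace, the hasTag message base, the Client CRTP base, and the handleMessageCb callback name are assumptions about the surrounding hmbdc API rather than anything this page specifies:

```cpp
#include <Context.hpp>
#include <cstdint>
#include <utility>

// assumed message shape: a tagged POD (the tag value 1001 is arbitrary)
struct Tick : hmbdc::app::hasTag<1001> {
    uint64_t seq;
};

// assumed Client shape: CRTP base listing the handled message types
struct TickHandler : hmbdc::app::Client<TickHandler, Tick> {
    void handleMessageCb(Tick const& t) {
        // react to t.seq here
    }
};

int main() {
    // compile-time sized, local, broadcast<> (the default) Context
    hmbdc::app::Context<sizeof(Tick)> ctx;

    TickHandler pooled, direct;
    ctx.addToPool(pooled);          // pool mode: shares the Pool threads

    // 2 Pool threads on cores 0-1; 'direct' gets its own thread on core 2
    ctx.start(2, 0x03ul, direct, 0x04ul);

    Tick t;
    t.seq = 1;
    ctx.send(std::move(t));         // broadcast to all Clients in ctx

    ctx.stop();                     // asynchronous
    ctx.join();                     // wait for all Context threads to exit
}
```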
MaxMessageSize | the max message size, if known at compile time (compile-time sized); if the value can only be determined at runtime (run-time sized), set this to 0. Things still work but lose some compile-time checking advantages; see maxMessageSizeRuntime below |
ContextProperties | see context_property namespace |
inline |
ctor for constructing a local non-ipc Context
won't compile if called for an ipc Context; see the sketch below the parameter list
messageQueueSizePower2Num | a value of 10 gives a message queue of size 1024 (in messages, not bytes) |
maxPoolClientCount | up to how many Clients the pool is supposed to support; only used when the Context supports a pool (broadcast property) |
maxMessageSizeRuntime | if MaxMessageSize is set to 0, this value is used |
maxThreadSerialNumber | the max number of threads (direct mode Clients plus pool threads) the Context can manage |
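A hedged sketch of the run-time sized case; the config-reading helper is hypothetical and the parameter values are only illustrative:

```cpp
#include <cstddef>

size_t readMaxMsgSizeFromConfig(); // hypothetical, supplied by the application

void makeRuntimeSizedContext() {
    // MaxMessageSize == 0 selects the run-time sized variant: compile-time
    // size checks are lost and maxMessageSizeRuntime takes over
    hmbdc::app::Context<0> ctx(
        10,                          // messageQueueSizePower2Num: 2^10 = 1024 slots
        8,                           // maxPoolClientCount
        readMaxMsgSizeFromConfig()); // used because MaxMessageSize is 0
}
```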
inline |
ctor for constructing a local ipc Context
won't compile if called for a local non-ipc Context; see the sketch below the parameter list
ipcTransportName | the id identifying an ipc transport shared by a group of attached Contexts and their Clients |
messageQueueSizePower2Num | a value of 10 gives a message queue of size 1024 (in messages, not bytes) |
maxPoolClientCount | up to how many Clients the pool is supposed to support; only used when the Context supports a pool (broadcast property) |
maxMessageSizeRuntime | if MaxMessageSize is set to 0, this value is used |
purgerCpuAffinityMask | which cores may run the low-profile (mostly sleeping) thread in charge of purging crashed Clients; only used for ipc_creator Contexts |
maxThreadSerialNumber | the max number of threads (direct mode Clients plus pool threads) the Context can manage |
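A sketch of the creator side; this page only names "ipc_creator", so the exact property spelling and template placement are assumptions:

```cpp
// assumed spelling, per the context_property namespace mentioned above
using hmbdc::app::context_property::ipc_creator;

// the creator owns the ipc transport; other processes attach by constructing
// their own Contexts with the same ipcTransportName
hmbdc::app::Context<128, ipc_creator> ipcCtx(
    "my-transport", // ipcTransportName identifying the shared transport
    10,             // messageQueueSizePower2Num: 2^10 message slots
    8);             // maxPoolClientCount
```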
inline |
dtor
if this Context owns the ipc transport, notifies all attached processes that read from it that the transport is dead
inline |
add a Client to the Context's pool - the Client runs in pool mode
if the pool is already started, the Client starts getting current Messages immediately
Client | client type |
client | to be added into the Pool |
poolThreadAffinityIn | the pool is powered by a number of threads, each identified by a bit in the mask starting from bit 0; it is possible to have a Client use just some of the threads in the Pool. Defaults to using all of them. |
inline |
add a bunch of Clients to the Context's pool - the Clients run in pool mode
if the pool is already started, the Clients start getting current Messages immediately; see the sketch below the parameter list
Client | client type |
client | to be added into the Pool |
poolThreadAffinityIn | the pool is powered by a number of threads, each identified by a bit in the mask starting from bit 0; it is possible to have a Client use just some of the threads in the Pool. Defaults to using all of them. |
args | more client and poolThreadAffinityIn pairs can follow |
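Reusing the assumed TickHandler Client and ctx from the first sketch, the mask and the variadic form look like this:

```cpp
TickHandler a, b, c, d;

ctx.addToPool(a);            // default mask 0xff...f: any Pool thread may run a
ctx.addToPool(b, 0x01ul);    // bit 0 set: only Pool thread 0 runs b

// variadic form: further (client, poolThreadAffinityIn) pairs in one call
ctx.addToPool(c, 0x03ul,     // c may run on Pool threads 0 and 1
              d, 0x02ul);    // d only on Pool thread 1
```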
inline |
add a bunch of Clients to the Context's pool - the Clients run in pool mode
if the pool is not started yet, the Clients do not get messages or other callbacks until the Pool starts. This function is thread-safe, which means you can call it anywhere in the code.
Client | client type |
Client2 | client2 type |
client | to be added into the Pool using the default poolThreadAffinity |
client2 | to be added into the Pool |
args | more Clients (and/or poolThreadAffinityIn pairs) can follow |
inline |
return the number of Clients added into the pool
the number can change, since Clients can be added from another thread
inline |
wait until all threads (Pool threads too, if applicable) of the Context exit
blocking call
inline |
how many parallel consumers are started
the dynamic value can change after the call returns; see the max_parallel_consumer Context property
inline |
normally not used unless you want to run your own message loop
call this function frequently to pump the hmbdc message loop in the pool; see the sketch below
threadSerialNumber | starting from 0, indicates which thread in the pool is powering the loop |
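A sketch of a hand-rolled pump; how this interacts with start() is not specified on this page, so the sketch simply assumes the application drives Pool thread 0 itself:

```cpp
#include <atomic>

std::atomic<bool> keepRunning{true}; // application-owned stop flag

// the 128-byte MaxMessageSize is arbitrary here
void pumpLoop(hmbdc::app::Context<128>& ctx) {
    while (keepRunning.load(std::memory_order_relaxed)) {
        ctx.runPoolThreadOnce(0); // one iteration of Pool thread 0's loop
        // ... interleave the application's own per-iteration work here ...
    }
}
```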
inline |
an ipc_creator Context runs a StcuClientPurger to purge crashed (or slow, stuck ...) Clients from the ipc transport to keep the transport healthy (avoiding buffer full). It periodically looks for things to purge; this sets the period (default is 60 seconds).
If some Clients are known to take long to process messages, increase it; if you need to remove slow Clients quickly, reduce it. Only effective for an ipc_creator Context; see the example below.
s | seconds |
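For example, to purge stuck Clients more aggressively than the 60-second default (ipcCtx being the ipc_creator Context from the earlier sketch):

```cpp
ipcCtx.setSecondsBetweenPurge(10); // look for crashed/stuck Clients every 10s
```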
inline |
start the Context by specifying what runs in it (a Pool and/or direct Clients) and their paired-up cpu affinities.
All direct-mode Clients and pooled Clients started by a single start statement are dispatched starting from the same event (subject to each Client's event filtering). Many compile-time and runtime checks are performed; for example, it won't compile if a pool is started in a Context that does not support one, and an exception is thrown if the Context capacity is reached or a second pool is started, etc.
Usage example:
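The following sketch matches the (pool-thread-count|client, cpuAffinity)* argument form documented below; clientA and clientB stand for any direct-mode Clients:

```cpp
// a 2-thread Pool on cores 0 and 1, plus two direct-mode Clients
// pinned to cores 2 and 3 respectively
ctx.start(2, 0x03ul,         // pool-thread-count, Pool cpuAffinity mask
          clientA, 0x04ul,   // direct Client, its cpuAffinity
          clientB, 0x08ul);  // direct Client, its cpuAffinity
```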
typename | ...Args types |
args | paired-up args in the form of (pool-thread-count|client, cpuAffinity)*; see the example above. If a cpuAffinity is 0, each thread's affinity rotates to one of the CPUs in the system. |
inline |
stop the message dispatching - asynchronously
asynchronous means it is not guaranteed that message dispatching stops immediately after this non-blocking call