VI. Decouple Time

Process asynchronously to avoid coordination and waiting

It’s been said that “silence is golden”, and it is as true in software systems as in the real world. Amdahl’s Law and the Universal Scalability Law show that the ceiling on scalability can be lifted by avoiding needless communication, coordination, and waiting.
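
As a reminder of what these laws say (quoted here in their commonly cited forms; the symbols are not defined elsewhere in this document), with $N$ workers, a parallelizable fraction $p$, a contention coefficient $\alpha$, and a coherency (crosstalk) coefficient $\beta$:

$$S(N) = \frac{1}{(1 - p) + \frac{p}{N}} \qquad \text{(Amdahl's Law)}$$

$$C(N) = \frac{N}{1 + \alpha (N - 1) + \beta N (N - 1)} \qquad \text{(Universal Scalability Law)}$$

Both curves flatten as the serial and coordination terms grow, and the second eventually bends downward; that is the ceiling this principle aims to lift.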

There are still times when we have to communicate and coordinate our actions. The problem with blocking on resources—for example I/O, but also when calling a different service—is that the caller, including the thread it is executing on, is held hostage waiting for the resource to become available. During this time the calling component (or a part thereof) is unavailable for other requests.
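
To make that cost concrete, here is a minimal sketch in Java of such a blocking call, using the standard java.net.http client; the service URL is a hypothetical placeholder:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class BlockingCall {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://example.com/inventory")) // hypothetical downstream service
                .build();

        // The calling thread is held hostage here until the remote service
        // responds (or the call times out); it can do no other work meanwhile.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.statusCode());
    }
}
```

While `send` is in flight the thread can serve no other request, so a pool of such threads is exhausted as soon as enough callers block at once.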

This can be mitigated or avoided by employing temporal decoupling. Temporal decoupling breaks the availability-in-time dependency between remote components. When multiple components exchange messages synchronously, the exchange presumes that all of these components are available and reachable for its entire duration. This is a fragile assumption in the context of distributed systems, where we cannot ensure the availability or reachability of every component at all times. By introducing temporal decoupling into our communication protocols, one component no longer needs to assume and require the availability of the other components. It makes the components more independent and autonomous and, as a consequence, the overall system more reliable. Popular techniques for implementing temporal decoupling include durable message queues, append-only journals, and publish-subscribe topics with a retention duration.
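
The sketch below illustrates the shape of this decoupling, with an in-memory BlockingQueue standing in for a durable message queue (a real deployment would use a persistent broker or journal with retention; the names `orders` and `order-N` are illustrative):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class TemporalDecoupling {
    // In-memory stand-in for a durable message queue or journal;
    // a production system would use a broker with persistence and retention.
    private static final BlockingQueue<String> orders = new LinkedBlockingQueue<>();

    public static void main(String[] args) throws InterruptedException {
        // Consumer: processes messages in its own time, independently of
        // when (or whether) the producer happens to be running right now.
        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    String order = orders.take(); // waits only for work, not for the producer
                    System.out.println("Processing " + order);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.setDaemon(true);
        consumer.start();

        // Producer: hands each message to the queue and moves on immediately;
        // it does not require the consumer to be reachable at this moment.
        for (int i = 1; i <= 3; i++) {
            orders.offer("order-" + i);
            System.out.println("Submitted order-" + i);
        }

        TimeUnit.SECONDS.sleep(1); // give the daemon consumer time to drain (demo only)
    }
}
```

The producer only depends on the queue accepting the message; whether the consumer is running, slow, or temporarily unreachable at that moment no longer affects the producer.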

With temporal decoupling, we give the caller the option to perform other work, asynchronously, rather than be blocked waiting on the resource to become available. This can be achieved by allowing the caller to put its request on a queue, register a callback to be notified later, return immediately, and continue execution (e.g., non-blocking I/O). A great way to orchestrate callbacks is to use a Finite State Machine (FSM); other techniques include Futures/Promises, Dataflow Variables, Async/Await, Coroutines, and composition of asynchronous functional combinators in streaming libraries.
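
As one concrete rendering of these ideas, the following sketch uses Java's CompletableFuture (a Future/Promise) together with the HTTP client's sendAsync: the caller submits the request, registers a callback, and returns immediately rather than blocking (the URL is again a hypothetical placeholder):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class NonBlockingCall {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://example.com/inventory")) // hypothetical downstream service
                .build();

        // sendAsync returns immediately with a CompletableFuture (a future/promise);
        // the calling thread is free to perform other work in the meantime.
        CompletableFuture<HttpResponse<String>> pending =
                client.sendAsync(request, HttpResponse.BodyHandlers.ofString());

        // Register a callback to be notified when the response eventually arrives.
        pending.thenAccept(response ->
                System.out.println("Got status " + response.statusCode()));

        System.out.println("Request submitted; caller continues with other work");

        TimeUnit.SECONDS.sleep(2); // keep the demo process alive long enough to see the callback
    }
}
```

Compare this with the blocking version earlier in this section: the exchange is the same, but the calling thread is released for other work for the full duration of the request.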

The aforementioned programming techniques serve the higher-level purpose of affording each component in a Reactive application more freedom in choosing to process incoming information in its own time, according to its own prioritization. As such, this decoupling also gives components more autonomy.