4. Coordinate Dataflow

Orchestrate a continuous steady flow of information

Reactive shines in the creation of data-driven applications through the composition of components into workflows. Thinking in terms of dataflow (how the data flows through the system, what behavior it triggers and where, and how components are causally related) allows focusing on the behavior instead of only on the structure. Orchestrate workflow and integration by letting components (or subsystems) subscribe to each other’s event streams, consuming on-demand the asynchronously published facts.
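As a minimal sketch of this style of composition (the `EventStream` class and the component names below are invented for illustration, not taken from any particular library), two components can be wired into a workflow purely by subscribing to each other's streams of facts:

```python
import asyncio

# Hypothetical minimal event stream; names (EventStream, subscribe,
# publish) are illustrative, not from any specific library.
class EventStream:
    def __init__(self):
        self._subscribers = []

    def subscribe(self, handler):
        """Register a coroutine to be invoked for each published fact."""
        self._subscribers.append(handler)

    async def publish(self, fact):
        """Asynchronously push a fact to every subscriber."""
        await asyncio.gather(*(handler(fact) for handler in self._subscribers))

# Two components composed into a workflow by dataflow alone: neither
# component calls the other directly; they are related only causally,
# through the facts they publish and consume.
orders = EventStream()
invoices = EventStream()

async def billing(order):
    # Billing reacts to order events and publishes invoice events in turn.
    await invoices.publish({"invoice_for": order["id"]})

audit_log = []

async def audit(invoice):
    audit_log.append(invoice)

orders.subscribe(billing)
invoices.subscribe(audit)

asyncio.run(orders.publish({"id": 42}))
print(audit_log)  # [{'invoice_for': 42}]
```

The point of the sketch is the causal chain: publishing one order fact triggers billing, whose published invoice fact triggers auditing, without any component holding a direct reference to the next step in the workflow.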

Consumers should control the rate of consumption — which may well be decoupled from the rate of production, depending on the use case. It is impossible to overwhelm a consumer that controls its own rate of consumption. This is one of the reasons some architectures employ message queues: they absorb the extra load and allow consumers to drain the work at their leisure. Some architectures use “poison-pill” messages as a way to cancel the production of messages altogether. This combination—a consumer that controls its rate of consumption and an out-of-band mechanism to halt production—supports flow control. Flow control is an obvious win at the systems-architecture level, but all too easy to ignore at lower levels.
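The queue-plus-sentinel combination can be sketched in a few lines. Here the bounded queue absorbs bursts and stalls the producer when full, while a poison-pill sentinel ends the flow; the names (`POISON`, `work`) and the queue size are arbitrary choices for this sketch:

```python
import queue
import threading

POISON = object()               # poison-pill message: signals end of the flow
work = queue.Queue(maxsize=8)   # bounded buffer between producer and consumer

def producer(n):
    for i in range(n):
        work.put(i)        # blocks when the queue is full: extra load is
                           # absorbed up to maxsize, then production stalls
    work.put(POISON)       # out-of-band signal: no more messages will come

consumed = []

def consumer():
    while True:
        msg = work.get()   # the consumer drains the work at its own pace
        if msg is POISON:
            break
        consumed.append(msg)

t_prod = threading.Thread(target=producer, args=(100,))
t_cons = threading.Thread(target=consumer)
t_prod.start(); t_cons.start()
t_prod.join(); t_cons.join()
print(len(consumed))  # 100
```

Because the producer blocks on a full queue and the consumer pulls at its leisure, neither side can overwhelm the other, however mismatched their natural rates.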

Flow control needs to be managed end-to-end with all its participants playing ball lest overburdened consumers fail or consume resources without bound. Libraries that support it natively and compose using standardized protocols can help immensely—such as the Reactive Streams protocol with its wide range of implementations (e.g. Reactor, RxJava, Vert.x, Akka Streams, Mutiny, and RSocket).

Reactive Streams employs a scheme called back-pressure, in which the producer is forced to slow down when the consumer cannot keep up. Another scheme is to place a message queue between producer and consumer and react to the utilization of this queue, for example by giving the consumer more resources or by slowing down or degrading the functionality of the producer.

Particular care needs to be taken with dataflows at the edges, where components compose and interact—with each other or with third-party systems. Establishing protocols for graceful degradation and flow control decreases the likelihood of failure, and when failure strikes—which it inevitably will—it is beneficial to have a control mechanism in place to manage it.
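One way to make graceful degradation at an edge concrete: when the inbound buffer in front of a component is saturated, degrade deliberately instead of failing. The buffer size, the function name, and the fallback response below are all arbitrary choices for this sketch:

```python
import queue

# Load-shedding sketch at a component edge: when the inbound buffer is
# saturated, serve a degraded response rather than failing outright.
inbound = queue.Queue(maxsize=4)   # capacity chosen arbitrarily for the sketch

def accept(request):
    """Admit a request, or degrade gracefully when the buffer is full."""
    try:
        inbound.put_nowait(request)
        return "accepted"
    except queue.Full:
        return "degraded"    # e.g. serve a cached or partial response instead

results = [accept(i) for i in range(6)]
print(results)
# ['accepted', 'accepted', 'accepted', 'accepted', 'degraded', 'degraded']
```

The control mechanism here is trivial, but it is in place before failure strikes: the component decides, per message, between full service and a defined degraded mode, rather than discovering its limits by falling over.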