This post is about how to schedule workers across a pipeline of queues in order to minimise total processing time, and an unexpected connection between this kind of scheduling and Thomas Jefferson.
You know how assembly lines in manufacturing and instruction pipelines in CPUs give a form of parallelism, without breaking determinism (the order of inputs to the pipeline is preserved in the outputs)?
We’ll call this implicit parallelism via pipelining. Implicit, because merely by splitting the task at hand into stages, we get parallelism for free. When this technique is applied to software, it allows us to write purely sequential programs (for each stage), while still utilising our parallel hardware.
The way it works is that each stage runs independently on its own CPU/core, and the stages are connected via queues. When the first CPU/core is done with the first stage of the first item, it passes the item on to the second stage and continues with the first stage of the second item. As the queues between the stages saturate, we get parallelism while retaining determinism (the outputs arrive in the same order as the inputs).
The stages are connected by conveyor belts (queues) and run in parallel with one another, e.g. once the first bottle has been filled, the second can be filled while, at the same time, the first bottle is being capped, and so on.
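To make the idea concrete, here is a minimal sketch in Python (not from the post; all names are mine): each stage is a purely sequential worker running on its own thread, the stages are connected by queues, and a `None` sentinel shuts the pipeline down. Because each queue preserves order, the outputs come back in the same order as the inputs.

```python
import queue
import threading

def stage(worker, inq, outq):
    # Each stage is a sequential loop: take an item from the input
    # queue, process it, pass the result downstream.
    while True:
        item = inq.get()
        if item is None:          # sentinel: propagate shutdown downstream
            outq.put(None)
            break
        outq.put(worker(item))

def pipeline(workers, items):
    # One queue between each pair of adjacent stages (plus entry and exit).
    queues = [queue.Queue() for _ in range(len(workers) + 1)]
    threads = [
        threading.Thread(target=stage, args=(w, queues[i], queues[i + 1]))
        for i, w in enumerate(workers)
    ]
    for t in threads:
        t.start()
    # Feed the inputs, then the shutdown sentinel.
    for item in items:
        queues[0].put(item)
    queues[0].put(None)
    # Collect outputs; they arrive in input order (determinism).
    out = []
    while (x := queues[-1].get()) is not None:
        out.append(x)
    for t in threads:
        t.join()
    return out

# Two "bottling" stages: fill, then cap.
print(pipeline([lambda b: b + ":filled", lambda b: b + ":capped"],
               ["bottle1", "bottle2", "bottle3"]))
```

While bottle 2 is being filled by the first thread, bottle 1 is simultaneously being capped by the second, which is exactly the assembly-line parallelism described above.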