# History of Asynchronous Algorithms

Synchronous systems make life a little easier: operations are coordinated by a central clock. In many cases, however, we must deal with asynchronous systems, whose states depend on how instructions are interleaved. In simple terms, asynchronous algorithms make no assumptions about time. Asynchronous systems tolerate communication delays and recover from failures, and they adapt better to changes in the computing infrastructure, such as changes in topology or in the number of machines.

When we consider the execution time of asynchronous algorithms, there is no bound on the time to execute a local process step, although each step completes in finite time. There are no synchronised clocks in asynchronous processes, so asynchronous systems do not depend on strict arrival times of signals for reliable operation. This is often necessary when requests come from external systems and agents.

The first significant theoretical work on asynchronous algorithms dates back to 1969, with the Chazan–Miranker model, which focused on solving linear systems. Their seminal paper introduced the 'chaotic relaxation' model, which Chazan and Miranker described in the following words.

> In this paper we consider relaxation methods for solving linear systems of equations. These methods are suited for execution on a parallel system of processors. They have the feature of allowing a minimal amount of communication of computational status between the computers, so that the relaxation process, while taking on a chaotic appearance, reduces programming and processor time of a bookkeeping nature. We give a precise characterization of chaotic relaxation, some examples of divergence, and conditions guaranteeing convergence.

It is interesting to note that the problem of chaotic relaxation was suggested to Chazan and Miranker by J. Rosenfeld.

> Rosenfeld found that the use of more processors decreased the computation time for solving the linear system. The precise form of the decrease depended on the relative number of processors, r, to n, the dimension. The chaotic form of the relaxation eliminated considerable programming and computer time in coordinating the processors and the algorithm. His experiments also exhibited a diminution in the amount of overrelaxation allowable for convergence in a chaotic mode.

Years later, John C. Strikwerda published a set of interesting observations about the Chazan–Miranker model in a paper titled 'A convergence theorem for chaotic asynchronous relaxation'. In this work, Strikwerda makes the following observations.

> In chaotic relaxation the order in which components of the solution are updated is arbitrary and the past values of components that are used in the updates are also selected arbitrarily. This is meant to be a model for parallel computation in which different processors work independently and have access to data values in local memory. The chaotic model of Chazan and Miranker is interesting because of the great generality allowed and because there is a simple necessary and sufficient condition for the system to converge for chaotic updates.
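The 'simple necessary and sufficient condition' mentioned here can be checked numerically: for an iteration of the form x ← Mx + c, the Chazan–Miranker result says that chaotic updates converge under every admissible update scheme exactly when the spectral radius of |M| (the entrywise absolute value of the iteration matrix) is below 1. A minimal sketch, using an illustrative matrix M invented for this example:

```python
import numpy as np

# Illustrative iteration matrix M (made up for this sketch) for x <- M x + c.
M = np.array([[ 0.0, 0.4, -0.3],
              [ 0.2, 0.0,  0.3],
              [-0.1, 0.2,  0.0]])

# Chazan-Miranker condition: chaotic relaxation converges for all
# admissible (bounded-delay) update schemes iff rho(|M|) < 1.
rho = max(abs(np.linalg.eigvals(np.abs(M))))
print(rho, rho < 1.0)
```

Here the row sums of |M| are at most 0.7, so the spectral radius is safely below 1 and any chaotic update scheme would converge.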

Later, a specific type of asynchronous algorithm was formulated, known as the asynchronous iterative algorithm, in which the components of the iterated vector are updated in parallel without any prior order of synchronization between the machines.
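To make the idea concrete, the following is a minimal single-process sketch (not taken from any of the papers above) that mimics chaotic relaxation: components of the solution vector are relaxed one at a time in an arbitrary random order, as uncoordinated processors would do, and a strictly diagonally dominant system still converges:

```python
import random

# Strictly diagonally dominant system A x = b; its exact solution is [1, 1, 1].
A = [[4.0, 1.0, 1.0],
     [1.0, 5.0, 2.0],
     [1.0, 2.0, 6.0]]
b = [6.0, 8.0, 9.0]
n = len(b)
x = [0.0] * n

random.seed(0)
for _ in range(300):                        # 300 single-component updates
    i = random.randrange(n)                 # arbitrary (chaotic) update order
    s = sum(A[i][j] * x[j] for j in range(n) if j != i)
    x[i] = (b[i] - s) / A[i][i]             # relax component i in place

residual = max(abs(sum(A[i][j] * x[j] for j in range(n)) - b[i])
               for i in range(n))
print([round(v, 4) for v in x], residual)   # x approaches [1, 1, 1]
```

Diagonal dominance guarantees convergence here regardless of the update order, which is exactly the kind of robustness that makes the chaotic model attractive for parallel machines.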

Thus the mathematical model of asynchronous algorithms originated from nonlinear formulations of systems and processes, and later converged to the simplified notions of vectors and linear algebra.

Today we discuss asynchronous consensus algorithms and asynchronous sharding systems because of the shared memory model. Yet the true trajectory of asynchronous systems and algorithms remains esoteric, as the bridge between chaos and autonomy is still elusive!

When we embark on the terrains of real-world systems, we have to factor in asynchronous processes and algorithms. In this context, chaos, turbulence, stochasticity, and randomness come to prominence; they become powerful measures of computation. It is quite ironic that even in partially synchronous systems we employ verifiable randomness and verifiable delay functions. Thus, non-linear spectres loom larger in the normative plains of linearity.