Systems, Generativity and Interactional Effects

August 15, 2021

I was reading Elie Adam's PhD thesis, and I think it is very important. It discusses "generative effects" and "cascading failures" in complex systems (e.g. power grids and pandemics) from an algebraic perspective. This is novel because it departs from simulation. Historically, the only way to reason about emergent properties was to set up a high-fidelity simulation and observe the results, which is computationally expensive, especially since these systems are so large and chaotic.

This work gives us a principled, mathematical intuition for emergent phenomena, which matters because complex systems are among the hardest and most pressing areas of physics.

So what is his approach?

First, consider the expected behavior of the system as dictated by its subsystems alone. For example, in a power grid failure (say, a transformer blows up), we expect the failed subsystem to stop delivering power to downstream systems and endpoints. In practice, however, this is not necessarily the only outcome.

Consider that if one node goes down, the load formerly carried by that node redistributes across the other nodes. This increase in current at the other stations may blow their transformers, which in turn further increases the current on the rest. This is a "cascading failure": a failure in one node may take down the whole system.
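To make the mechanism concrete, here is a toy model of that redistribution (my own sketch, not a model from the thesis): each node carries a load and has a capacity, and when a node fails, its load is shared equally among the survivors, possibly pushing some of them past their limits.

```python
# Toy cascading-failure model (my sketch, not the thesis's formal model).
# When a node fails, its load is redistributed equally among surviving
# nodes, which may overload some of them and trigger further failures.

def cascade(loads, capacities, initial_failure):
    """Return the set of failed nodes once the cascade settles."""
    failed = {initial_failure}
    while True:
        alive = [n for n in loads if n not in failed]
        if not alive:
            return failed
        # The total load of the failed nodes is shared equally by survivors.
        extra = sum(loads[n] for n in failed) / len(alive)
        newly_failed = {n for n in alive if loads[n] + extra > capacities[n]}
        if not newly_failed:
            return failed
        failed |= newly_failed

# Four stations with equal capacity; node "A" blowing up overloads the rest.
loads = {"A": 10, "B": 9, "C": 8, "D": 5}
capacities = {"A": 12, "B": 12, "C": 12, "D": 12}
print(cascade(loads, capacities, "A"))  # all four nodes end up failed
```

With ample capacity (say, 30 everywhere) the failure stays local to "A"; with tight capacity, the same local fault takes down the whole toy grid.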

Adam defines a "generative effect" as the difference between the expected outcome and the actual outcome. He goes on to define this rigorously, but the informal statement is sufficient for intuition. In our power-grid example, the generative effect is all the behavior in the system that is not consistent with our original prediction (that only the downstream nodes will shut down).
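As a rough illustration of that difference (my own toy sketch with made-up node names, not Adam's formal definition): if our subsystem-level prediction is that only the nodes downstream of the blown transformer go dark, the generative effect is everything in the actual failed set beyond that prediction.

```python
# Hypothetical node names, purely for illustration.
# Expected outcome: only nodes downstream of the failed transformer T1.
expected_failed = {"T1", "D1", "D2"}

# Actual outcome: the cascade also took out station T3 and its downstream D5.
actual_failed = {"T1", "D1", "D2", "T3", "D5"}

# The generative effect, intuitively: behavior the subsystem-level
# prediction did not account for.
generative_effect = actual_failed - expected_failed
print(generative_effect)  # {"T3", "D5"} (set element order may vary)
```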

Furthermore, he argues that generative effects are caused by a lack of information. In the power grid, whether a cascading failure occurs depends on the current-carrying capacity of every component, which is why it is impossible to see at the subsystem level.
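A sketch in the spirit of Adam's formalism (simplified by me, not his exact construction): take the "observation" of a system to be its partition into connected components. Observing two subsystems separately loses the information needed to predict the components of their combination, because combining systems can connect points that neither subsystem connects on its own.

```python
# Observation Phi: an edge set |-> its connected components.
# Phi applied to a union of subsystems can reveal connections that
# Phi of neither subsystem shows on its own: a generative effect.

def components(nodes, edges):
    """Connected components of an undirected graph, as frozensets."""
    parent = {n: n for n in nodes}
    def find(n):
        while parent[n] != n:
            n = parent[n]
        return n
    for u, v in edges:
        parent[find(u)] = find(v)
    comps = {}
    for n in nodes:
        comps.setdefault(find(n), set()).add(n)
    return {frozenset(c) for c in comps.values()}

nodes = {1, 2, 3}
a = {(1, 2)}  # subsystem a connects 1 and 2
b = {(2, 3)}  # subsystem b connects 2 and 3
print(components(nodes, a))      # {{1, 2}, {3}}
print(components(nodes, b))      # {{1}, {2, 3}}
print(components(nodes, a | b))  # {{1, 2, 3}}: 1-3 connected only in the union
```

Neither subsystem's observation contains the fact that 1 and 3 become connected; that fact is generated by the interaction, which is the phenomenon the thesis pins down algebraically.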

The rest of the thesis makes these ideas precise.