At the beginning of my research experience my major focus was software fault-tolerance. The leitmotiv of that period was possibly the synchronous system assumption: although the target architectures I was working on were supposedly “high performance”, those systems were in fact quite simple: dedicated networks, dedicated processors, and predefined, immutable assumptions. Later on I described this class of systems as ataraxies: systems that have complete “faith” in the validity of their designers’ assumptions. Clearly this approach introduces a great deal of fragility. Such systems could perhaps be described as sitting ducks with respect to change: they fail as soon as any of their system assumptions is invalidated.

In those days I completed my doctoral studies with a thesis in which I introduced the concept of a “recovery language”: a special-purpose programming language that runs alongside the conventional programming language and deals with error recovery and reconfiguration. This recovery language comes into play as soon as an error is detected by an underlying error-detection layer or when some erroneous condition is signaled by the application processes. Error recovery and reconfiguration are specified as a set of guarded actions that operate on coarse-grained entities of the application (tasks and groups of tasks) and are enacted depending on the current state of those entities. An important aspect is that such “recovery code” is interpreted at run time, which means it can be changed dynamically.
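To give a flavor of the idea, here is a minimal sketch, in Python rather than in the actual recovery notation, of recovery expressed as guarded actions that are interpreted at run time. All names (Task, GuardedAction, RecoveryInterpreter) are hypothetical and introduced only for this illustration.

```python
# Minimal, illustrative sketch of run-time interpreted recovery code:
# recovery and reconfiguration are expressed as guarded actions over
# coarse-grained entities (tasks). All names here are hypothetical.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Task:
    name: str
    state: str = "running"   # e.g. "running", "failed", "isolated"

@dataclass
class GuardedAction:
    guard: Callable[[List[Task]], bool]    # predicate over the current task states
    action: Callable[[List[Task]], None]   # recovery / reconfiguration step

class RecoveryInterpreter:
    """Interprets the guarded actions at run time, so they can be replaced dynamically."""
    def __init__(self) -> None:
        self.rules: List[GuardedAction] = []

    def load(self, rules: List[GuardedAction]) -> None:
        # The rules are data, not compiled-in code, so they can be swapped while running.
        self.rules = rules

    def on_error_detected(self, tasks: List[Task]) -> None:
        # Invoked by the error-detection layer or by an application-signaled condition.
        for rule in self.rules:
            if rule.guard(tasks):
                rule.action(tasks)

# Example rule: if a "sensor" task failed, isolate it and start a spare.
def sensor_failed(tasks: List[Task]) -> bool:
    return any(t.name.startswith("sensor") and t.state == "failed" for t in tasks)

def restart_spare(tasks: List[Task]) -> None:
    for t in tasks:
        if t.name.startswith("sensor") and t.state == "failed":
            t.state = "isolated"
    tasks.append(Task("sensor-spare"))

interpreter = RecoveryInterpreter()
interpreter.load([GuardedAction(sensor_failed, restart_spare)])
```

Because the interpreter treats the guarded actions as data, a new set of rules can be loaded while the application is running, which is the dynamic-change property mentioned above.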

From the point of view of General Systems Theory, this meant that systems built with my approach were more-than-ataraxies: when embedded into a context-aware feedback loop, those systems would achieve teleological (reactive) properties.

Later on I became acquainted with General Systems Theories such as the behavioral classification by Wiener and the system of Kenneth Boulding. I started considering “more-than-reactive” systems: systems able to proactively create models of possible futures and adapt their actions to those hypothesized conditions. I initiated, promoted, and supervised the doctoral studies of three students, with whom I explored “advanced” adaptive behaviors. With such behaviors, the adaptation code is assembled dynamically by composing the adaptation planners that best match the current contextual conditions. This work resulted in the filing of a patent.
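A rough sketch of such dynamic composition, again in Python and with purely hypothetical names (Planner, Context, compose_adaptation), might look as follows; the actual mechanism covered by the patent is of course more elaborate.

```python
# Illustrative sketch: assemble the adaptation code dynamically by composing
# the planners that best match the current context. All names are hypothetical.

from typing import Callable, Dict, List

Context = Dict[str, float]          # e.g. {"bandwidth": 0.2, "battery": 0.9}

class Planner:
    def __init__(self, name: str, match: Callable[[Context], float],
                 plan: Callable[[Context], str]) -> None:
        self.name = name
        self.match = match          # how well this planner fits the context (0..1)
        self.plan = plan            # produces one adaptation step

def compose_adaptation(planners: List[Planner], ctx: Context,
                       threshold: float = 0.5) -> List[str]:
    """Select the planners whose fit with the context exceeds the threshold,
    order them by decreasing fit, and chain their plans into one adaptation."""
    selected = sorted((p for p in planners if p.match(ctx) >= threshold),
                      key=lambda p: p.match(ctx), reverse=True)
    return [p.plan(ctx) for p in selected]

# Example planners
low_bandwidth = Planner("low-bandwidth",
                        match=lambda c: 1.0 - c.get("bandwidth", 1.0),
                        plan=lambda c: "switch to degraded video quality")
low_battery = Planner("low-battery",
                      match=lambda c: 1.0 - c.get("battery", 1.0),
                      plan=lambda c: "reduce sampling rate")

print(compose_adaptation([low_bandwidth, low_battery],
                         {"bandwidth": 0.2, "battery": 0.9}))
# -> ['switch to degraded video quality']
```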

In my little “cybernetic journey” I then came to the realization that a significant limitation of my approaches was the lack of what I call genotypical feedback. The systems I was devising were adaptive rather than evolving systems: the lessons learned while facing their environments and adapting to them had no influence on the identity of those systems. The systems were merely resilient (at-work to-stay-the-same [Sachs, 1995]). This persistence of the system identity was a guarantee of trustworthiness; and yet it also came as a limitation to the ability to evolve. Therefore I started to consider systems that are at-work to-get-better—thus able to evolve beyond what was initially prescribed by the “designer”. Inspired by the work and the terminology introduced by Nassim N. Taleb, I called such systems antifragile and came to the idea of studying the properties and the engineering of such systems. I launched a workshop on computational antifragility and antifragile engineering, called ANTIFRAGILE. Professor Taleb himself kindly participated in the second edition with a keynote speech (via teleconference). Dr. Kennie Jones from NASA Langley gave keynote speeches in which he shared with the attendees his lessons learned in antifragile engineering at NASA. Furthermore, I launched a LinkedIn group on computational antifragility, which has attracted the interest of more than 150 people.

In parallel to the above-mentioned explorations, several years ago I started to realize that a second major limitation of my approaches lay in the social dimension. Although I was able to manipulate system components, my approaches basically treated systems as individual entities, thus neglecting their inherently social nature. I understood that this was a major mistake: paraphrasing Margaret Thatcher,
there is no such thing as an individual system.
Every system is collective—every system is a social system. This new perspective allowed me to “see” problems from a new and wider angle. Concepts such as a system’s organization became central and provided me with a new research path on which to focus my attention. Preliminary explorations were carried out: I wrote a paper on quality indicators for collective systems resilience, in which I began considering the match between the “social persona” of the Whole and that of the Parts. I discussed centrifugal and centripetal social forces, which can weaken or strengthen the resilience of the Whole. The link with the philosophies of Aristotle and Leibniz became quite apparent and ignited an ancillary line of exploration. I began realizing that several of the problems and concepts I had encountered in science had an established “philosophical counterpart”: the Leibnizian concepts of compossibles and substantiata; genotypical and phenotypical conservation of modularity as a foundation for evolvability; and a resource-constrained world evolving its substances toward ever-increasing quality and complexity all map to supposedly “modern” concepts such as emergence, evolution, cellular automata, artificial life, and many others.

All the above led me to several new realizations. For instance, I came to realize that many of our “societal systems” (such as healthcare, civil defense, and crisis-management organizations) are built with “fragility assumptions” quite similar to those of the “sitting ducks” I mentioned earlier—which possibly explains why those systems are so inefficient and so incapable of dealing with our turbulent, overpopulated, and resource-scarce new world. In fact, the problem is much the same; I tried to express this in the following sentence from my paper “How Resilient Are Our Societies? Analyses, Models, and Preliminary Results”:
“Regardless of its nature, any system is affected by its design assumptions. Our societies are no exception. The emergence of sought properties such as economic and social welfare for all; sustainability with respect to natural ecosystems; and especially manageability and resilience, highly depends on the way social organizations are designed.”
Social organization is obviously the major gestalt in the above quote. Social organization—“a set of roles tied together with channels of communication” [Boulding, 1956]—is the invariant that captures the essence of collective systems as different in scale and behavior as a colony of bacteria and one of our cities.

In my next post I will focus my attention on two social organization “templates” that I defined a few years ago: the service-oriented community and the fractal social organization.

References

[Sachs, 1995] Joe Sachs, "Aristotle's Physics: A Guided Study". Rutgers University Press, 1995. ISBN 0-8135-2192-0.
[Boulding, 1956] Kenneth Boulding, "General Systems Theory—The Skeleton of Science". Management Science 2(3), April 1956, pp. 197-208.