ESOS

The engineering of self-organizing systems (ESOS) is a contradiction in itself: how can you organize something which organizes itself? If we want to build a self-organizing system from autonomous agents, how can we ensure that it fulfills its function? Agents, by definition, do what they want. How can you construct a self-organizing system? The answer is: in a balanced, iterative process that combines top-down analysis with bottom-up simulation, in which we define the 'rules of the game' step by step. The bottom-up process is needed to ensure diversity (innovative, random, surprising elements). The top-down process ensures unity (e.g. function, quality and goal-orientation). Together they form a cyclic round-trip process, which can be named synthetic microanalysis.

A Two-Way Approach to the MML

The micro-macro link (MML) problem probably needs a two-way or two-phase approach to find the necessary micro-macro connections: a bottom-up and a top-down process. The way up means synthesis, simulation or experiment, and determines how individual actions combine and aggregate into collective behavior. The way down means analysis, the creation of testable hypotheses, or the translation of requirements, and defines how collective forces influence and constrain individual actions. A minimal code sketch of the two directions follows below.
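
To make the two directions concrete, here is a minimal Python sketch; the agent model, the macroscopic observable and the bias rule are purely illustrative assumptions, not part of any established MML framework. The way up aggregates individual states into a collective observable; the way down feeds a global constraint back into local actions.

```python
import random

class Agent:
    def __init__(self):
        self.state = random.random()  # microscopic state in [0, 1]

    def act(self, bias=0.0):
        # local action: random drift, possibly nudged by a macroscopic bias
        self.state = min(1.0, max(0.0, self.state + random.uniform(-0.1, 0.1) + bias))

def macro_state(agents):
    # the way up: aggregate individual states into a collective observable
    return sum(a.state for a in agents) / len(agents)

def top_down_bias(macro, target=0.7):
    # the way down: a global force that influences and constrains local actions
    return 0.05 if macro < target else -0.05

agents = [Agent() for _ in range(100)]
for step in range(50):
    m = macro_state(agents)    # synthesis: micro -> macro
    bias = top_down_bias(m)    # analysis: macro -> micro constraint
    for a in agents:
        a.act(bias)
print(f"final macro state: {macro_state(agents):.2f}")
```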

We can only generate complex self-organizing systems with emergent properties in a goal-directed, straightforward way if we look at both the microscopic and the macroscopic level (for local and global patterns, properties and behaviors), examine causal dependencies across different scales and levels, and consider the congregation and composition of elements as well as their possible interactions and relations. A complex system can only be understood in terms of its parts and the interactions between them if we consider static and dynamic aspects. In other words, we need a combination of top-down and bottom-up approaches which considers all sides: static parts and dynamic interactions between them, together with the macroscopic states of the system and the microscopic states of the constituents. Sunny Y. Auyang has proposed a method named “synthetic microanalysis” which claims to combine synthesis and analysis, composition and decomposition, a bottom-up and a top-down view, and finally micro- and macrodescriptions. She describes the idea vividly in chapter 2 of her interesting book, but unfortunately she does not say exactly how her approach works for multi-agent systems (MAS). The book focuses on complex systems in general, and on physical systems (with particles instead of agents) in particular.

The general idea is a “bottom-up deduction guided by a top-down view”. You have to delineate groups of “microstates” according to causally related macroscopic criteria (by “partitioning the microstate space”, for instance through selection of all elements with a certain property or role related to some macroscopic structure). Making a round trip from the whole to its parts and back, you can use the desired global macroscopic phenomena to design suitable local properties and interactions. This two-way approach is a generalization of the “experimental method” proposed by Bruce Edmonds and Joanna Bryson. In the theoretical top-down phase you have to create testable hypotheses, which have to be verified in the experimental bottom-up phase. Instead of “Synthetic Microanalysis” you could also call it iterative goal-directed simulation (where the goals are determined by high-level objectives and overall requirements). The experimental bottom-up approach alone is successful only for small and simple systems like one-dimensional Cellular Automata, where you can enumerate all possible systems (a small sketch of this follows below). For large systems the number of possibilities and configurations grows so large (or even “explodes”) that the goal gets lost or the thicket of microscopic details becomes impenetrable. To quote Auyang again: “blind deduction from constituent laws can never bulldoze its way through the jungle of complexity generated by large-scale composition” (p. 6). The macroscopic view is useful and necessary to delineate possible configurations, to identify composite subsystems on medium and large scales, to set goals for microscopic simulations, and finally to prevent scientists from losing sight of desired macroscopic phenomena when they are immersed in analytic details.
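
How feasible this enumeration is can be seen directly: an elementary (one-dimensional, two-state, radius-1) cellular automaton is defined by one of only 2^8 = 256 rules. The following Python sketch enumerates and simulates all of them; the survival criterion is a deliberately simple stand-in for a real macroscopic design goal.

```python
def step(cells, rule):
    # each cell's next state is the rule bit indexed by its 3-cell neighborhood
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

def survives(rule, width=64, steps=100):
    cells = [0] * width
    cells[width // 2] = 1                 # single seed in the middle
    for _ in range(steps):
        cells = step(cells, rule)
    return any(cells)                     # toy macroscopic criterion

# enumerate the complete rule space -- impossible for any large system
alive = [r for r in range(256) if survives(r)]
print(f"{len(alive)} of 256 rules keep a nonzero pattern alive")
```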

Iterations and Refinements

Top-Down vs. Bottom-Up

One round trip from the whole to its parts and back is probably not enough to generate complex self-organizing systems with emergent phenomena. If the two-way method of “synthetic microanalysis” works at all, you will certainly need some iterations and a number of stepwise refinements until the method converges to a suitable solution. Before each iteration, it is important to identify and refine suitable subsystems, basic compounds and essential phenomena on the macroscopic level which are big and frequent enough to be typical or characteristic of the system, but small and regular enough to be explained well by a set of microscopic processes. Many macroscopic descriptions are only an approximation, idealization and simplification of real processes.

In the first top-down phase towards the bottom level, we must find the significant, relevant and salient properties, events and interactions, especially the crucial events responsible for butterfly effects, avalanches and cascades. We seek the concrete, precise and deterministic realization of abstract concepts. Many microscopic details are insignificant, irrelevant and inconsequential to macroscopic phenomena. In the second bottom-up phase towards the top level, you have to compare the results of the synthesis and simulation with the desired structure.

In a typical iteration of “synthetic microanalysis”, you start from the “top” and work your way down to the micro-level, constructing agent roles and interaction rules in just the way necessary to generate the behavior observed at the “top”. This procedure can be iterated by stepwise refinement of agents and their interactions, which should include necessary changes in the environment, until the desired function is achieved. In the next round, you start again from the global structure or macroscopic pattern, and try to refine the possible underlying microstates and micromechanisms. Could these states and mechanisms lead to the desired large-scale structure? What kind of coordination, conflict-resolution and local guidance is needed additionally? What kinds of roles and role transitions are possible?

Thus you would proceed roughly like this while trying to determine possible states, roles and role transitions (a minimal code sketch of the resulting iteration loop follows the two phases below):

Phase 1. Analysis and Delineation
Starting from requirements and global objectives, what macroscopic and microscopic patterns, configurations, situations and contexts are possible in principle? From the answers you can try to delineate what roles, behaviors, local states and local interactions are roughly possible or necessary:
a) What roles and local behaviors are possible? Try to determine and deduce local behavior from global behavior; identify possible roles and role transitions.
b) What states are possible? Determine and define local properties from global properties.
c) What kinds of local communication and coordination mechanisms are possible? Determine tolerable conflicts and inconsistencies.
Phase 2. Synthesis and Simulation
Is the desired global behavior achievable with the set of roles and role transitions? In the second phase, you work your way up to the top again through comprehensive simulations and experiments.
Since emergent properties are possible, simulation is the only major way up from the bottom to the top. As Giovanna Di Marzo Serugendo says, “the verification task turns out to be an arduous exercise, if not realized through simulation”.
Sometimes the term “emergence” itself is even defined through simulation, for instance in the following way: a macrostate is weakly emergent if it can be derived from microstates and microdynamics, but only by simulation.
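
The following is a deliberately trivial Python instance of such an iterative, goal-directed simulation loop; the agent model (independent activation with probability p), the macroscopic goal and the refinement step are all illustrative assumptions, not the method itself.

```python
import random

def simulate(p, n_agents=500):
    # Phase 2 (bottom-up): run the micro rule and measure the macro observable,
    # here the fraction of agents that become active
    active = sum(1 for _ in range(n_agents) if random.random() < p)
    return active / n_agents

goal, p = 0.7, 0.1                      # desired macro state, initial micro rule
for iteration in range(50):
    observed = simulate(p)
    error = goal - observed
    if abs(error) < 0.01:               # does the synthesis match the goal?
        break
    # Phase 1 (top-down, refined): adjust the micro rule towards the macro goal
    p = min(1.0, max(0.0, p + 0.5 * error))
print(f"after {iteration + 1} iterations: macro {observed:.2f}, micro rule p={p:.2f}")
```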

The way up is simpler than the way down and requires mainly simulations. Since these simulations can be quite time-consuming, it can be slower than the top-down process. In mathematical calculus, the situation is quite similar: many integrals can only be solved numerically, whereas differentiation is much easier and often requires only analytic techniques. There are more similarities: the fundamental theorem of calculus also connects the purely algebraic indefinite integral and the purely analytic (or geometric) definite integral. Likewise, a method of synthetic microanalysis should combine simulations (preferably bottom-up) and “analytic” (preferably top-down) considerations.
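
The analogy can be made concrete: for f(x) = exp(-x²), the derivative has a simple closed form, but the integral has no elementary antiderivative and must be computed numerically. A minimal Python illustration (the midpoint rule is just one simple choice of numerical method):

```python
import math

f = lambda x: math.exp(-x * x)

def df(x):
    # differentiation: easy, purely analytic (chain rule)
    return -2 * x * math.exp(-x * x)

def integrate(f, a, b, n=10_000):
    # integration: no elementary antiderivative, so approximate numerically
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

print(f"f'(1)         = {df(1):.6f}")
print(f"integral 0..1 = {integrate(f, 0, 1):.6f}")   # ~0.746824, related to erf(1)
```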

Genetic Algorithms

Synthetic Microanalysis

The methods of synthetic microanalysis and evolutionary algorithms are quite similar, see the figure for a comparison. Both require the use of simulation, experimentation and selection. In the case of evolutionary algorithms, without a “human in the loop”, the fitness evaluation is done automatically by fitness functions; in the case of synthetic microanalysis, with a “human in the loop”, it is done by the human engineer.
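
As a concrete illustration of the automated variant, here is a minimal genetic-algorithm sketch in Python. The bitstring genome, the target pattern and all parameters are arbitrary illustrative choices; the point is that the fitness function replaces the judging role the human engineer plays in synthetic microanalysis.

```python
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]

def fitness(genome):
    # automated evaluation -- no human in the loop
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:10]                      # selection of the fittest
    population = parents + [mutate(crossover(random.choice(parents),
                                             random.choice(parents)))
                            for _ in range(40)]
print(f"generation {generation}: best fitness {fitness(population[0])}")
```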

These two approaches – Synthetic Microanalysis and Genetic Algorithms – are probably the only two systematic solutions for creating self-organizing multi-agent systems with emergent properties. There are two other obvious ways to build a self-organizing system that meets the requirements and objectives: the imitation of natural systems, for instance in the form of biologically or sociologically inspired systems, or manual trial-and-error. The first method can only be applied to transfer existing solutions; the second is not systematic. As Edmonds says, “we are to do better than trial and error…we will need to develop explicit hypotheses about our systems and these can only become something we rely on via replicated experiment.”

  • The advantage of Synthetic Microanalysis is that we are able to understand the solution. The drawback is that it still requires a human in the loop: constant manual intervention, observation, consideration, delineation and design are essential.
  • The advantage of Genetic Algorithms is that they do not require a human in the loop. The drawback is that we are often not able to understand the result: it is often hard to understand why this result is optimal (and not one of the other solutions) and how exactly it works.

Science vs. Engineering

The Scientific Method

This synthetic microanalysis (SMA) method is nothing else but the scientific method applied to engineering: the combination of engineering and science. The application of the scientific method by the engineer is the solution to the fundamental ESOA and ESOS problem. It is the step-by-step investigation of hypotheses with experiments and simulations.

The scientific method is an iterative process that is the basis for any scientific inquiry, and it can also be used to examine artificial systems and simulated worlds (for instance synthetic societies of multi-agent systems). It can be summarized as observe, formulate, predict, test; in more detail, it follows five basic steps (a small code sketch of this loop, applied to a simulated system, follows the list):

(1) identify a problem you would like to solve,
(2) formulate a hypothesis,
(3) test the hypothesis,
(4) collect and analyze the data,
(5) draw conclusions and restart at (1).
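
Here is a small, hedged Python sketch of this loop applied to an artificial system. The simulated system and the hypothesis threshold are illustrative assumptions; the point is Edmonds' insistence on replicated experiments rather than single runs.

```python
import random
import statistics

def run_system(seed):
    # (3) test: one experiment / simulation run of the artificial system
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(100)) / 100   # a macro observable

def test_hypothesis(predicted_mean=0.5, tolerance=0.05, replications=30):
    # (4) collect and analyze data across replicated runs, not a single one
    results = [run_system(seed) for seed in range(replications)]
    mean = statistics.mean(results)
    return abs(mean - predicted_mean) < tolerance, mean

# (2) hypothesis: the macroscopic mean of the system is approximately 0.5
supported, observed = test_hypothesis()
# (5) conclude -- and in a real inquiry, restart at (1) with a refined question
print(f"hypothesis {'supported' if supported else 'rejected'}: mean={observed:.3f}")
```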

Remarkably, some computer scientists do not want to hear this: the scientific method applied to engineering. They are of course scientists, and as scientists they of course use the scientific method. How dare we question this? Yet there is a clear difference between science and engineering, between the scientist and the engineer. The scientist tries to explain complexity by simple rules; the engineer tries to hide complexity behind simple user interfaces. The scientist tries to explore nature by building machines; the engineer tries to build machines by exploring possible constructions.

The scientist seeks to understand what is,
the engineer seeks to create what never was.
The engineer explores in order to build, 
the scientist builds in order to explore.

What happens if both meet each other, if we combine the characteristics of pure engineering with pure science? Surprisingly, the best and the worst. The worst are "buzzword engineers" who produce only hot air, and "engineering scientists" who only seek to create problems that never existed before. They are scientists and engineers who have got it wrong: engineers should not conceal the truth and produce complexity, they should hide complexity and produce the truth. (Unfortunately, many computer scientists fall into this category. There are so many hot-air merchants at the universities. In German you would call them "Schwindler" (swindlers) or "Schaumschläger" (literally "foam beaters", i.e. hot-air merchants). All they do is produce hot air and complex, useless frameworks while inventing new buzzwords and acronyms. In marketing this is acceptable, but in computer science? They are a bit like intelligent ELIZA bots, which will never achieve real intelligence but only produce a perfect illusion of intelligence. Likewise, Schaumschläger will never contribute any real progress to science, only a perfect illusion of progress. They are good at selling themselves, at getting jobs and grants, and at pretending to be important.)

But there are also the opposite cases. The best are "theory engineers" or "scientific engineers": scientists who construct new theories, or engineers who discover new laws in engineering and new ways to build new types of systems. Albert Einstein comes to mind.

These are the extremes. Between the extremes, if we leave the best and the worst cases behind, we find the engineer who seeks to understand the useful system he is building, and the scientist who seeks to create new kinds of interesting theories that never existed before. This is exactly what we need for a cyclic round-trip process which can be named synthetic microanalysis (the scientific method for the engineer, which means rapid prototyping and agile development).

Books and References

  • Ottino, J. M., Engineering complex systems, Nature 427 (2004) 399
  • Johnson, S., Emergence, Scribner, 2002
  • Holland, J. H., Emergence: From Chaos to Order, Oxford University Press, 1998
  • Axelrod, R., Chapter 6 “Building New Political Actors” of The Complexity of Cooperation, Princeton University Press, 1997
  • Maes, P., Modeling Adaptive Autonomous Agents, in Artificial Life, Christopher G. Langton (Ed.), The MIT Press, 1995
  • Edmonds, B. and Bryson, J., The Insufficiency of Formal Design Methods – the necessity of an experimental approach for the understanding and control of complex MAS, in Proceedings of the 3rd International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS'04), New York, ACM Press, 2004
  • Edmonds, B., Using the Experimental Method to Produce Reliable Self-Organised Systems, in Brueckner, S. et al. (Eds.), Engineering Self-Organising Systems: Methodologies and Applications, Springer LNAI 3464 (2005) 84-99
  • Yamins, D., Towards a Theory of "Local to Global" in Distributed Multi-Agent Systems, in Proceedings of the 4th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2005), Utrecht, ACM Press
  • Rauch, J., Seeing Around Corners, The Atlantic Monthly, April 2002
  • Conte, R. and Castelfranchi, C., Simulating multi-agent interdependencies. A two-way approach to the micro-macro link, in Klaus G. Troitzsch et al. (Eds.), Social Science Microsimulation, Springer (1995) 394-415
  • Auyang, S. Y., Foundations of Complex-System Theories, Cambridge University Press, 1998
  • Di Marzo Serugendo, G., Engineering Emergent Behaviour: A Vision, in Multi-Agent-Based Simulation III, 4th International Workshop (MABS 2003), Melbourne, Australia, July 2003, David Hales et al. (Eds.), LNAI 2927, Springer, 2003
  • Bedau, M. A., Weak Emergence, in J. Tomberlin (Ed.), Philosophical Perspectives: Mind, Causation, and World, Vol. 11, Blackwell (1997) 375-399
  • Suzuki, J. and Suda, T., A Middleware Platform for a Biologically Inspired Network Architecture Supporting Autonomous and Adaptive Applications, IEEE Journal on Selected Areas in Communications (JSAC), Special Issue on Intelligent Services and Applications in Next Generation Networks, vol. 23, no. 2 (2005) 249-260
  • Montresor, A., Meling, H. and Babaoglu, O., Messor: Load-Balancing through a Swarm of Autonomous Agents, in Proceedings of the 1st International Workshop on Agents and Peer-to-Peer Computing, Bologna, Italy, July 2002; also Technical Report UBLCS-2002-11, University of Bologna, Italy
  • Gershenson, C., A General Methodology for Designing Self-Organizing Systems, preprint at http://uk.arxiv.org/abs/nlin.AO/0505009


Links

Henry Petroski, Scientists as Inventors, American Scientist, Vol. 96, Sep-Oct 2008, 368-371
