Self-Star Properties
Self-* properties refer to any properties or processes of a system that are caused and maintained by the system itself. They are an essential feature of living systems and can also be found in artificial systems, which attempt to imitate and recreate the patterns and properties of natural self-organizing systems. Self-* properties are useful for very large distributed systems, or "megaservices", "megasystems" and giant-scale services with high scalability, i.e. Internet applications on a "planetary scale" with thousands of servers and millions of users.
History
A self-* process is any process that starts with "self-" and is caused by the system itself; a self-* property is the result of such a process. Even traditional mechanical machines and automata have self-properties: they are self-propelled and self-moving (in general "self-operating") machines with automotive and automatic properties, i.e. they can run automatically and move themselves. "Automatic" refers to any self-operating machine or automaton (from the Greek automatos, "acting of one's own will, self-moving").

Dijkstra introduced the notion of self-stabilization in the context of distributed systems in 1973. He defined a system as self-stabilizing when "regardless of its initial state, it is guaranteed to arrive at a legitimate state in a finite number of steps". The concept is therefore related to the mathematical concept of an attractor in dynamical systems and chaos theory.
A self-stabilizing system has two important properties:
- It does not need to be initialized
- It is fault-tolerant and can recover from failures and faults
While it is obvious that self-stabilization is a desirable property and possible in principle, it is less clear how many self-stabilizing systems and algorithms exist, and how fast a system converges to a safe state; recovery should occur in a reasonable amount of time. Moreover, the construction of a self-stabilizing system is still a difficult task, as Marco Schneider notes in his survey (1993).
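The following minimal Python sketch simulates Dijkstra's classic K-state token ring (the simulation code and its names are illustrative, not Dijkstra's own): no matter how the counters are initialized, the ring converges in a finite number of steps to a legitimate state in which exactly one machine holds the privilege at a time.

```python
import random

def privileged(states, i):
    """Machine 0 is privileged when its counter equals that of the last
    machine; every other machine is privileged when its counter differs
    from that of its left neighbour (Dijkstra's K-state rules)."""
    if i == 0:
        return states[0] == states[-1]
    return states[i] != states[i - 1]

def move(states, i, K):
    """Let a privileged machine make its move."""
    if i == 0:
        states[0] = (states[0] + 1) % K
    else:
        states[i] = states[i - 1]

def simulate(n=5, K=6, steps=40):
    # Arbitrary (possibly illegitimate) initial state: self-stabilization
    # means the ring never has to be initialized correctly.
    states = [random.randrange(K) for _ in range(n)]
    for t in range(steps):
        holders = [i for i in range(n) if privileged(states, i)]
        print(f"step {t:2d}  states={states}  privileged={holders}")
        # A central daemon picks one privileged machine to move; after
        # finitely many moves exactly one machine is privileged at a time.
        move(states, random.choice(holders), K)

if __name__ == "__main__":
    simulate()
```

Running the simulation with any initial counters shows the list of privileged machines shrinking to a single circulating privilege, which is the legitimate state of the ring.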
Forms and Types
All positive processes like optimization, protection, recovery, etc. are of course desirable if they occur in autonomic systems without external guidance and control. In artificial software systems, the predominant approach is to use a form of Self-Management rather than true self-organization (i.e. organization completely without an organizer). Self-Management means the use of managers and managed elements to control and manage a system. In their vision of autonomic computing, the IBM researchers Kephart and Chess mention the following four desirable self-* properties of autonomic systems: Self-Configuration, Self-Healing, Self-Optimization and Self-Protection.
Self-Configuration is the automated configuration of components and systems: autonomic systems configure themselves automatically in accordance with high-level policies that specify what is desired, not how it is to be accomplished. Self-Optimization means that systems, sub-systems and components continually seek opportunities to improve their own performance and efficiency; it requires the autonomous ability to identify and seize opportunities to make the system more efficient in performance or cost. Self-Cleaning, Self-Healing or Self-Repairing is the ability of a system to automatically detect, diagnose, and repair localized software and hardware problems. Self-Healing requires monitoring, detection and diagnosis of faults and errors; an old Latin proverb says "Bene diagnoscitur, bene curatur" (what is well diagnosed can be cured well).
Self-Stabilization refers to a system's ability to recover automatically from unexpected faults. Self-Protection, finally, is the property of systems that automatically defend themselves against malicious attacks or cascading failures; it uses early warning to anticipate and prevent system-wide failures. Other useful self-* properties are related to analysis and diagnosis: Self-Describing and Self-Explaining systems help us understand distributed systems that are becoming more and more complex, and they are also necessary for intelligent autonomous systems with self-healing abilities. Less intelligent systems must rely on restarting affected components, which is the foundation of Recovery-Oriented Computing (ROC). All these properties are desirable because deploying, operating and maintaining complex systems can be very difficult and expensive, and modern distributed systems are inherently complex.
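As a rough illustration of this restart-based approach, the following sketch (with hypothetical component names, not a real ROC implementation) shows a minimal supervisor in the manager/managed-element style: it monitors its managed components and recovers a failed one simply by restarting it, without diagnosing the root cause.

```python
class Component:
    """A hypothetical managed element with a trivial health check."""
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def health_check(self):
        return self.healthy

    def restart(self):
        print(f"restarting {self.name}")
        self.healthy = True


class Supervisor:
    """A minimal monitor/repair loop: detect a failed component and
    recover it by restarting it (a micro-reboot)."""
    def __init__(self, components):
        self.components = components

    def run_once(self):
        for c in self.components:
            if not c.health_check():
                c.restart()


if __name__ == "__main__":
    db, web = Component("database"), Component("webserver")
    supervisor = Supervisor([db, web])
    web.healthy = False       # inject a fault into the managed element
    supervisor.run_once()     # the supervisor detects it and restarts it
    assert web.health_check()
```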
Self-Design can be found in (unsupervised) learning and adaptation. As the prefix self- suggests, such self-* properties indeed occur often in self-organizing systems, and they usually involve some form of Self-Reference. Yet the existing self-* properties of living systems are not exactly identical with the desirable self-* properties of artificial systems. Living systems are characterized by the following self-* properties or self-organizing processes. What all living systems have in common is Autopoiesis and Self-Organization. Self-Regeneration and Self-Reproduction can be found even in plants, through metabolism and sex (in Greek, 'metabolos' means something fluctuating or changing, something always in perpetual change); Self-Movement, Self-Control and Self-Defense appear in animals, due to digestion, cognition and the immune system; and finally Self-Awareness and Self-Consciousness appear in humans. In other words, applying the principles found in natural systems to artificial and distributed systems is, unfortunately, not an easy, straightforward process. We cannot obtain amazing self-* properties in artificial systems simply by imitating nature: a self-reproducing application is known as a dangerous virus, a self-defending system can be a nightmare, and a self-regenerating application can perhaps prevent its own deactivation.
| Self-* Properties in living systems | | Self-* Properties in artificial systems | |
|---|---|---|---|
| in Plants | Self-Regeneration, Self-Reproduction | in Autonomic Systems | Self-Configuration, Self-Healing, Self-Optimization, Self-Protection |
| in Animals | Self-Movement, Self-Control, Self-Defense | in Fault-Tolerant Systems | Self-Stabilization |
| in Humans | Self-Awareness, Self-Consciousness | in Self-Analyzing Systems | Self-Describing, Self-Explaining |
Even desirable self-* properties can have drawbacks: a self-describing or self-explaining application can be talkative or chatty, a self-optimizing and self-configuring system can reject manual changes, a self-protecting system can begin to attack itself (similar to allergies and autoimmune diseases), and a self-diagnostic application can be overcareful (think of the Security Center in Windows XP: "Your computer might be at risk"). If these properties are implemented in autonomic and autonomous systems, the systems must be equipped with adjustable autonomy, i.e. the human administrator must always have the highest priority and the right to change every part of the system.
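One possible way to express such adjustable autonomy, sketched here in Python with invented names, is to record the source of every configuration change and let a change made by the human administrator pin a setting so that later autonomic changes cannot overwrite it:

```python
# Minimal sketch of adjustable autonomy: the human administrator always
# has the highest priority; the autonomic manager may not overwrite a
# setting the administrator has pinned.
HUMAN, AUTONOMIC = "human", "autonomic"

class Configuration:
    def __init__(self):
        self.values = {}
        self.pinned_by_human = set()

    def set(self, key, value, source):
        if source == AUTONOMIC and key in self.pinned_by_human:
            print(f"rejected autonomic change to {key!r} (pinned by admin)")
            return False
        if source == HUMAN:
            self.pinned_by_human.add(key)
        self.values[key] = value
        return True

config = Configuration()
config.set("max_connections", 100, AUTONOMIC)   # self-configuration
config.set("max_connections", 50, HUMAN)        # manual override wins
config.set("max_connections", 200, AUTONOMIC)   # rejected
assert config.values["max_connections"] == 50
```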
Balancing Contradicting Properties
Self-Optimization and Self-Protection
One contradicting pair of self-* properties is self-optimization and self-protection. Self-protection calls for high security, strong protection and strong encryption, but strong encryption in turn reduces ease of use and increases response and reaction times. Treating everything as top secret with the highest possible encryption and protection would take too much time and effort, while protecting nothing at all would endanger security. A trade-off between both sides often results in classification or security levels, for instance the well-known classification Top Secret, Secret, Confidential, and Restricted.
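The following sketch illustrates this trade-off with a hypothetical mapping from classification levels to protection effort; the key sizes and cost figures are purely illustrative and do not describe a real security policy:

```python
# Stronger protection costs more, so the protection level is chosen per
# item instead of encrypting everything with the maximum strength.
PROTECTION_LEVELS = {
    "restricted":   {"key_bits": 128, "relative_cost": 1},
    "confidential": {"key_bits": 192, "relative_cost": 2},
    "secret":       {"key_bits": 256, "relative_cost": 4},
    "top secret":   {"key_bits": 256, "relative_cost": 8},  # plus stricter handling
}

def protection_for(classification):
    """Pick the protection level that matches the classification."""
    return PROTECTION_LEVELS[classification]

documents = [("canteen menu", "restricted"), ("launch codes", "top secret")]
for name, classification in documents:
    p = protection_for(classification)
    print(f"{name}: {p['key_bits']}-bit keys, relative cost x{p['relative_cost']}")
```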
Self-Optimization and Self-Reconfiguration
Another contradicting pair of self-* properties and capabilities is self-optimization and self-reconfiguration. Self-optimization is achieved by adaptation, and strong adaptation to a certain problem can be an obstacle to easy reconfiguration (see the "No Free Lunch" theorem from Wolpert and Macready). In the extreme case, a nerd or autistic savant has severe handicaps in solving general problems but extraordinary abilities in a very special, limited area.
Another example is games, for example mobile robots for soccer games. In order to optimize their behavior for a certain task, it is useful to adapt them to it as much as possible. If they play soccer, it is useful to equip them with special kickers, with special cameras and mirrors, and with special software that recognizes only certain objects: goals (yellow or blue), fields (green), balls (red) and dark obstacles shaped like garbage cans. Adapted in this way, the mobile robots play soccer well, but they cannot easily be reconfigured for other games or for completely different tasks. Adaptation allows them to solve one special task well, but it makes reconfiguration for other tasks difficult.
A context-dependent trade-off for the conflict between optimization and reconfiguration is possible through two things: first, dynamic adaptation (build agents or robots which are able to learn from experience), and second, a dynamic type system for adaptive agents (build agents or robots whose type is determined by what they do, i.e. optimize suitable agents for each situation and context and tag them accordingly).
Dynamic context-dependent adaptation is realized in natural organisms (especially mammals) by the urge to play. Play can be considered a strategy of organisms to organize and optimize their own behavior, even if future situations are unpredictable. We play and laugh because it feels good: it is rewarding and pleasant. At the same time, we learn the rules of the game while we play, gain new experience and new insights, examine different scenarios and try to find the best strategies. In this way, each organism learns the behavior that is best in its context and situation. While young animals play, they explore the world, acquire knowledge and gain all the skills necessary to survive. The desire to play drives animals to gain new insights at the edge of their knowledge: well-known games are boring, and overly complicated games are frustrating.
A dynamic type system for adaptive agents characterizes the kind of adaptation of each agent. Such a type system allows the selection of certain agents for certain contexts. The type of an adaptive agent is changed by its roles and activities: the longer an agent does something, the better it becomes at it. Strong specialization and adaptation allow high performance and efficiency, while the type system allows the exchange and selection of agents. For completely new tasks, new agents can be trained.
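A minimal sketch of such a dynamic type system might look as follows (class, agent and task names are invented): an agent's type grows out of what it actually does, and the registry selects the most experienced agent for a given task.

```python
# Sketch of a dynamic type system for adaptive agents: the "type" of an
# agent is the set of tasks it has practised, and the registry selects
# the most specialized agent for each context.
class AdaptiveAgent:
    def __init__(self, name):
        self.name = name
        self.experience = {}  # task -> hours of practice

    def perform(self, task, hours=1):
        self.experience[task] = self.experience.get(task, 0) + hours

    def proficiency(self, task):
        return self.experience.get(task, 0)


class AgentRegistry:
    def __init__(self, agents):
        self.agents = agents

    def best_for(self, task):
        # The longer an agent has done something, the better it becomes,
        # so pick the agent with the most experience for this task.
        return max(self.agents, key=lambda a: a.proficiency(task))


striker, keeper = AdaptiveAgent("striker"), AdaptiveAgent("keeper")
striker.perform("kick ball", hours=100)
keeper.perform("catch ball", hours=100)
registry = AgentRegistry([striker, keeper])
assert registry.best_for("kick ball") is striker
# A completely new task has no specialist yet; a new agent would be trained.
```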
Self-Optimization of Contradicting Properties
One example of contradicting properties is the self-optimization of reaction time and the self-optimization of resource usage. Optimal resource usage means low consumption, and optimal reaction time means fast responses, but the two contradict each other. Optimal reaction time requires high activity and high resource usage: full attention is necessary to react as fast as possible. Conversely, optimal resource usage leads to low activity and slow reactions: if only occasional attention is required, resources can be spared. Stress is an important trade-off between both sides, optimal response or reaction time on the one hand and optimal resource usage on the other. It is the same form of stress (or alarm of the body) that makes us sick if it occurs too frequently.
Stress means here the context-dependent short-term activation of all available resources, which are deactivated again in the long term. It can occur in various forms (readiness, alertness, preparedness, etc.), levels (severe, high, elevated, low, etc.) and phases (for example the alert phase in a fire brigade, or the three emergency phases of the coast guard: uncertainty phase, alert phase, and distress phase).
The DEFCON levels from 1 to 5 of the U.S. military and the different threat levels of the Homeland Security Advisory System also belong in this category. A level of high readiness or alertness is selected if the risk of potential danger, or the probability of increased resource use in the near future, is very high; a level of low readiness and alertness is chosen if that risk or probability is very low.
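The following sketch shows how such readiness levels can mediate between reaction time and resource usage; the level names, intervals and thresholds are illustrative and do not correspond to the real DEFCON or Homeland Security definitions:

```python
# Readiness levels as a trade-off between reaction time and resource usage.
ALERT_LEVELS = {
    "low":      {"poll_interval_s": 60, "standby_workers": 1},
    "elevated": {"poll_interval_s": 10, "standby_workers": 4},
    "severe":   {"poll_interval_s": 1,  "standby_workers": 16},
}

def choose_level(risk):
    """Map the estimated risk of imminent resource demand to a level."""
    if risk > 0.7:
        return "severe"    # fast reaction, high resource usage
    if risk > 0.3:
        return "elevated"  # the compromise: short-term "stress"
    return "low"           # slow reaction, minimal resource usage

for risk in (0.1, 0.5, 0.9):
    level = choose_level(risk)
    print(f"risk={risk}: level={level}, settings={ALERT_LEVELS[level]}")
```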
Conclusion
Although conflicts between two contradicting self-* properties are not unusual, the situation is not hopeless: a trade-off or compromise between two contradicting self-* properties is possible. An optimal trade-off depends on the relative importance of the properties compared to each other, and on the situation and the particular context of the system. It is often formulated as a set of discrete levels. Security levels are such a trade-off between self-protection and self-optimization: always using the highest encryption would take too much time and effort, while always using the lowest encryption would endanger security. Threat or stress levels are similarly a trade-off between different kinds of self-optimization (fast reaction versus low resource usage). Although stress is associated with negative things, it is the best compromise between too much resource usage and too long response times. Alert and readiness levels are a universal compromise which can be found everywhere in the real world where resources have to be activated efficiently: in the military, in the fire brigade, in the coast guard, etc.
Drawbacks
Replication and Viruses
This is the best-known example illustrating the drawback of a self-* property. A computer virus is a software system with the property of self-replication: it replicates and executes itself without the permission or knowledge of the user. Computer viruses can infect millions of computers, passing from one computer to another in the same way that a real biological virus passes from person to person.
Healing, Preservation and Persistence
An essential element for the preservation and survival of cultures is education. Education is a self-* mechanism of a culture to preserve itself; it is also a self-* mechanism to rejuvenate the elements of the system. This basic process has to be distinguished from more complex self-* mechanisms that rejuvenate the system itself, which are discussed later in the section about cancer and rejuvenation. As we all know, education has many drawbacks: it can go wrong if the wrong things are taught, it can have expansive and missionary effects, and it can also preserve false beliefs and illusions. A self-healing system that automatically repairs damage and pinpoints where it has been wounded would certainly be a good thing. But what happens if the system uses this capability to keep working although it is no longer necessary? It could lead to robust applications or systems that cannot be stopped or removed. If there is a bug in the self-healing mechanism itself, it can also lead to a worsening of the situation or to the destruction of the whole system. Self-healing and self-protection are related to each other: protection against intruders can prevent diseases which would require healing.
Protection, Pain and Allergies
Self-protection can have very severe consequences and side-effects if the integrity of the self is affected: if the self is not recognized correctly, allergies and autoimmune diseases occur, and the system attacks itself because the self cannot be distinguished correctly from the non-self. Pain is also a consequence of self-protection; it seems to be a general, necessary mechanism of systems capable of self-protection, because it signals the place where the self-protecting mechanisms fail or where they are badly needed.
Rejuvenation and Cancer
Self-rejuvenation and self-regeneration can have very severe consequences, because they affect the integrity and structure of the system itself. If a complex adaptive system has a built-in self-rejuvenation ability and grants its agents the right to create and found new elements, these elements can severely damage or threaten the function of the whole system. Any system which is able to remould and reinvent itself can replace itself by something else. It seems as if rejuvenation and cancer are closely related to each other.
Conclusion
In general, any self-* mechanism or self-* property acts on "the self", and it can destroy, damage or threaten the system - the self - if something goes wrong. The positive consequences are wonderful, but the negative consequences and side-effects of self-* properties range from unpleasant to disastrous:
- self-healing, self-rejuvenation -> cancer
- self-protection -> pain, allergies, and autoimmune diseases
- self-reconfiguration -> errors, faults, and failures
- self-optimization -> slow-down, halt, stop
Any automatic or strong self-* property without a conscious controller in the loop has the advantage that it can react fast and immediately. But it could also lead to the rejection or overwriting of manual changes, or to the inability to deactivate the self-* mechanism - in other words, to the loss of control. The fear that we will lose control of our software if it is modified to be more like living systems is justified.
Do we really want to build autonomous or autonomic systems with self-* properties? Self-protecting systems that can feel pain and are able to attack themselves? Bugs appear everywhere. Things can always go wrong, and if things go wrong in the code for the self-* properties itself, the result can be disastrous and utterly devastating. It is possible that we open Pandora's box if we build systems with deeply embedded self-* properties.
Papers
Jeffrey O. Kephart and David M. Chess, The Vision of Autonomic Computing, IEEE Computer, January 2003
Marco Schneider, Self-stabilization, ACM Computing Surveys, 25, 45–67 (1993)
Links
- Tannishtha Reya, Sean J. Morrison, Michael F. Clarke, Irving L. Weissman, Stem cells, cancer, and cancer stem cells, Nature 414 (2001), no. 6859, 105–111.
- Nicholas Wade, Stem Cells May Be Key to Cancer, New York Times, February 21, 2006.
- Gina Kolata, Slowly, cancer genes tender their secrets, New York Times, December 27, 2005.
Books
Shlomi Dolev, Self-Stabilization, The MIT Press, 2000, ISBN 0-262-04178-2