<?xml version="1.0"?>
<?xml-stylesheet type="text/css" href="https://wiki.cas-group.net/skins/common/feed.css?270"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://wiki.cas-group.net/index.php?feed=atom&amp;target=Admin&amp;title=Special%3AContributions%2FAdmin</id>
		<title>CasGroup - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="https://wiki.cas-group.net/index.php?feed=atom&amp;target=Admin&amp;title=Special%3AContributions%2FAdmin"/>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Special:Contributions/Admin"/>
		<updated>2026-04-18T04:29:00Z</updated>
		<subtitle>From CasGroup</subtitle>
		<generator>MediaWiki 1.16.2</generator>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Main_Page</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Main_Page"/>
				<updated>2011-03-06T19:40:08Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Changed protection level for &amp;quot;Main Page&amp;quot; ([edit=autoconfirmed] (indefinite) [move=autoconfirmed] (indefinite))&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;big&amp;gt;'''Welcome to the CAS Group Wiki.'''&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Emergence.gif|left|Weak Emergence]] The CAS Group is a community for people interested in the science and theory of [[Complex_Adaptive_System|complex adaptive systems]] (CAS), and in their [[Basic_System_Theory|basic principles]] and [[Applied_System_Theory|applied principles]]. CASs are perhaps the most interesting complex systems: they are not only 'complex' - diverse and made up of multiple interconnected elements - but also 'adaptive' - capable of changing and learning from experience. The basic element of CAS theory is the [[Agent-Based_Model|agent-based model]]. The logo depicts the phenomenon of [[Emergence|emergence]], as it can be observed, for example, in swarms and flocks. Like [[Self-Organization|self-organization]], it is an essential principle in the theory of complex adaptive systems.&lt;br /&gt;
&lt;br /&gt;
There are also pages about the [[Chaos_in_the_mind|role of chaos in neural networks]], [[Self-Consciousness|self-consciousness]], [[Beyond_AI|new forms of AI]], [[New_Kinds_of_Science|new kinds of science]], and new forms of technology, as well as interesting philosophical questions, for &lt;br /&gt;
example whether we can describe the mind as a [[Society_of_Mind|society of agents]].&lt;br /&gt;
&lt;br /&gt;
Sources and related projects are [http://en.wikipedia.org Wikipedia], [http://www.cscs.umich.edu/~crshalizi/notebooks/ C.R. Shalizi's Notebooks], and the [http://pespmc1.vub.ac.be/ Principia Cybernetica]. Related Wikis are for example the [http://www.swarm.org/wiki/Main_Page SwarmWiki], specializing in Swarm Intelligence and agent-based modelling, the [http://intersci.ss.uci.edu/wiki/index.php/Main_Page InterSciWiki] on cross-disciplinary and complexity sciences research, &lt;br /&gt;
the [http://www.aaai.org/AITopics/pmwiki/pmwiki.php AITopics Library], and the [http://www.scholarpedia.org/ Scholarpedia]. Further recommended resources are [http://mathworld.wolfram.com/ Wolfram MathWorld] and [http://atlas.wolfram.com/ The Wolfram Atlas of Simple Programs].&lt;br /&gt;
&lt;br /&gt;
At this time the Wiki contains [[Special:Allpages|{{NUMBEROFARTICLES}} articles]]. A complete overview of [[Special:Allpages|all pages]] and [[Special:Categories|all categories]] also exists.&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=CasGroup:Community_portal</id>
		<title>CasGroup:Community portal</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=CasGroup:Community_portal"/>
				<updated>2011-03-06T14:46:53Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Created page with &amp;quot;Take a look at the complete overview of all pages and all categories.&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Take a look at the complete overview of [[Special:Allpages|all pages]] and [[Special:Categories|all categories]].&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Collective_intelligence</id>
		<title>Collective intelligence</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Collective_intelligence"/>
				<updated>2011-03-06T13:03:57Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: moved Collective intelligence to Collective Intelligence: wording&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#REDIRECT [[Collective Intelligence]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Collective_Intelligence</id>
		<title>Collective Intelligence</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Collective_Intelligence"/>
				<updated>2011-03-06T13:03:57Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: moved Collective intelligence to Collective Intelligence: wording&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
'''Collective intelligence''' is a shared or group intelligence that emerges from the collaboration and competition of many individuals. A group of people can act smarter than any single individual. A [[Crowd|crowd]], for example, can show surprising levels of wisdom (&amp;quot;The Wisdom of the Crowd&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* Wikipedia entry for [http://en.wikipedia.org/wiki/Collective_intelligence collective intelligence]&lt;br /&gt;
&lt;br /&gt;
[[Category:Psychology]] [[Category:Social Systems]] [[Category:Collective Processes]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Main_Page</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Main_Page"/>
				<updated>2011-02-26T12:29:29Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;big&amp;gt;'''Welcome to the CAS Group Wiki.'''&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Emergence.gif|left|Weak Emergence]] The CAS Group is a community for people interested in the science and theory of [[Complex_Adaptive_System|complex adaptive systems]] (CAS), and in their [[Basic_System_Theory|basic principles]] and [[Applied_System_Theory|applied principles]]. CASs are perhaps the most interesting complex systems: they are not only 'complex' - diverse and made up of multiple interconnected elements - but also 'adaptive' - capable of changing and learning from experience. The basic element of CAS theory is the [[Agent-Based_Model|agent-based model]]. The logo depicts the phenomenon of [[Emergence|emergence]], as it can be observed, for example, in swarms and flocks. Like [[Self-Organization|self-organization]], it is an essential principle in the theory of complex adaptive systems.&lt;br /&gt;
&lt;br /&gt;
There are also pages about the [[Chaos_in_the_mind|role of chaos in neural networks]], [[Self-Consciousness|self-consciousness]], [[Beyond_AI|new forms of AI]], [[New_Kinds_of_Science|new kinds of science]], and new forms of technology, as well as interesting philosophical questions, for &lt;br /&gt;
example whether we can describe the mind as a [[Society_of_Mind|society of agents]].&lt;br /&gt;
&lt;br /&gt;
Sources and related projects are [http://en.wikipedia.org Wikipedia], [http://www.cscs.umich.edu/~crshalizi/notebooks/ C.R. Shalizi's Notebooks], and the [http://pespmc1.vub.ac.be/ Principia Cybernetica]. Related Wikis are for example the [http://www.swarm.org/wiki/Main_Page SwarmWiki], specializing in Swarm Intelligence and agent-based modelling, the [http://intersci.ss.uci.edu/wiki/index.php/Main_Page InterSciWiki] on cross-disciplinary and complexity sciences research, &lt;br /&gt;
the [http://www.aaai.org/AITopics/pmwiki/pmwiki.php AITopics Library], and the [http://www.scholarpedia.org/ Scholarpedia]. Further recommended resources are [http://mathworld.wolfram.com/ Wolfram MathWorld] and [http://atlas.wolfram.com/ The Wolfram Atlas of Simple Programs].&lt;br /&gt;
&lt;br /&gt;
At this time the Wiki contains [[Special:Allpages|{{NUMBEROFARTICLES}} articles]]. A complete overview of [[Special:Allpages|all pages]] and [[Special:Categories|all categories]] also exists.&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Main_Page</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Main_Page"/>
				<updated>2011-02-26T12:28:51Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;big&amp;gt;'''Welcome to the CAS Group Wiki.'''&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Emergence.gif|left|Weak Emergence]] The CAS Group is a community for people interested in the science and theory of [[Complex_Adaptive_System|complex adaptive systems]] (CAS), and in their [[Basic_System_Theory|basic principles]] and [[Applied_System_Theory|applied principles]]. CASs are perhaps the most interesting complex systems: they are not only 'complex' - diverse and made up of multiple interconnected elements - but also 'adaptive' - capable of changing and learning from experience. The basic element of CAS theory is the [[Agent-Based_Model|agent-based model]]. The logo depicts the phenomenon of [[Emergence|emergence]], as it can be observed, for example, in swarms and flocks. Like [[Self-Organization|self-organization]], it is an essential principle in the theory of complex adaptive systems.&lt;br /&gt;
&lt;br /&gt;
There are also pages about the [[Chaos_in_the_mind|role of chaos in neural networks]], [[Self-Consciousness|self-consciousness]], new [[Beyond_AI|forms of AI]], [[New_Kinds_of_Science|new kinds of science]], and new forms of technology, as well as interesting philosophical questions, for &lt;br /&gt;
example whether we can describe the mind as a [[Society_of_Mind|society of agents]].&lt;br /&gt;
&lt;br /&gt;
Sources and related projects are [http://en.wikipedia.org Wikipedia], [http://www.cscs.umich.edu/~crshalizi/notebooks/ C.R. Shalizi's Notebooks], and the [http://pespmc1.vub.ac.be/ Principia Cybernetica]. Related Wikis are for example the [http://www.swarm.org/wiki/Main_Page SwarmWiki], specializing in Swarm Intelligence and agent-based modelling, the [http://intersci.ss.uci.edu/wiki/index.php/Main_Page InterSciWiki] on cross-disciplinary and complexity sciences research, &lt;br /&gt;
the [http://www.aaai.org/AITopics/pmwiki/pmwiki.php AITopics Library], and the [http://www.scholarpedia.org/ Scholarpedia]. Further recommended resources are [http://mathworld.wolfram.com/ Wolfram MathWorld] and [http://atlas.wolfram.com/ The Wolfram Atlas of Simple Programs].&lt;br /&gt;
&lt;br /&gt;
At this time the Wiki contains [[Special:Allpages|{{NUMBEROFARTICLES}} articles]]. A complete overview of [[Special:Allpages|all pages]] and [[Special:Categories|all categories]] also exists.&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Main_Page</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Main_Page"/>
				<updated>2011-02-16T22:22:55Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;big&amp;gt;'''Welcome to the CAS Group Wiki.'''&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Emergence.gif|left|Weak Emergence]] The CAS Group is a community for people interested in the science and theory of [[Complex_Adaptive_System|complex adaptive systems]] (CAS), and in their [[Basic_System_Theory|basic principles]] and [[Applied_System_Theory|applied principles]]. CASs are perhaps the most interesting complex systems: they are not only 'complex' - diverse and made up of multiple interconnected elements - but also 'adaptive' - capable of changing and learning from experience. The basic element of CAS theory is the [[Agent-Based_Model|agent-based model]]. The logo depicts the phenomenon of [[Emergence|emergence]], as it can be observed, for example, in swarms and flocks. Like [[Self-Organization|self-organization]], it is an essential principle in the theory of complex adaptive systems.&lt;br /&gt;
&lt;br /&gt;
There are also pages about the [[Chaos_in_the_mind|role of chaos in neural networks]], [[Self-Consciousness|self-consciousness]], new [[Beyond_AI|forms of AI]], new kinds of science, and new forms of technology, as well as interesting philosophical questions, for &lt;br /&gt;
example whether we can describe the mind as a [[Society_of_Mind|society of agents]].&lt;br /&gt;
&lt;br /&gt;
Sources and related projects are [http://en.wikipedia.org Wikipedia], [http://www.cscs.umich.edu/~crshalizi/notebooks/ C.R. Shalizi's Notebooks], and the [http://pespmc1.vub.ac.be/ Principia Cybernetica]. Related Wikis are for example the [http://www.swarm.org/wiki/Main_Page SwarmWiki], specializing in Swarm Intelligence and agent-based modelling, the [http://intersci.ss.uci.edu/wiki/index.php/Main_Page InterSciWiki] on cross-disciplinary and complexity sciences research, &lt;br /&gt;
the [http://www.aaai.org/AITopics/pmwiki/pmwiki.php AITopics Library], and the [http://www.scholarpedia.org/ Scholarpedia]. Further recommended resources are [http://mathworld.wolfram.com/ Wolfram MathWorld] and [http://atlas.wolfram.com/ The Wolfram Atlas of Simple Programs].&lt;br /&gt;
&lt;br /&gt;
At this time the Wiki contains [[Special:Allpages|{{NUMBEROFARTICLES}} articles]]. A complete overview of [[Special:Allpages|all pages]] and [[Special:Categories|all categories]] also exists.&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Meme</id>
		<title>Meme</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Meme"/>
				<updated>2011-02-16T21:06:03Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Created page with &amp;quot;A meme identifies ideas or beliefs that are transmitted from one person or group of people to another. The concept comes from an analogy: as genes transmit biological in...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A meme is an idea or belief that is transmitted from one person or group of people to another. The concept comes from an analogy: as [[Gene|genes]] transmit biological information, memes can be said to transmit ideas and beliefs.&lt;br /&gt;
&lt;br /&gt;
== Reconstruct a Mind ==&lt;br /&gt;
&lt;br /&gt;
Can we (re-)construct minds from different &lt;br /&gt;
parts or pieces? Is there a blueprint for &lt;br /&gt;
a soul (whatever that is)? If [[Gene|genes]] are&lt;br /&gt;
blueprints used to construct bodies, then &lt;br /&gt;
maybe [[Meme|memes]] can be considered &lt;br /&gt;
blueprints to construct [[Mind|minds]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* Wikipedia entry for [http://en.wikipedia.org/wiki/Meme meme]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Mind</id>
		<title>Mind</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Mind"/>
				<updated>2011-02-16T21:04:47Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The '''mind''' is simply what the brain does. It is related to the [[Self]] and is considered &lt;br /&gt;
responsible for one's thoughts and feelings.&lt;br /&gt;
It is an abstract concept which describes all of the brain's conscious and unconscious cognitive processes.&lt;br /&gt;
Although it does not exist as a single, unified substance, it emerges from the coordinated action of a group&lt;br /&gt;
of agents and acts to orchestrate them in turn. In this sense, the coordinated hallucination of people&lt;br /&gt;
is often associated with team spirit or god, while the coordinated hallucination of neurons is associated &lt;br /&gt;
with the personal spirit, the soul or the mind.&lt;br /&gt;
&lt;br /&gt;
There are many theories of the mind and its function;&lt;br /&gt;
the [[Society_of_Mind|society of mind]] approach tries to&lt;br /&gt;
explain the mind as a society of agents (or CAS).&lt;br /&gt;
&lt;br /&gt;
== Reconstruct a Mind ==&lt;br /&gt;
&lt;br /&gt;
Can we (re-)construct minds from different &lt;br /&gt;
parts or pieces? Is there a blueprint for &lt;br /&gt;
a soul (whatever that is)? If [[Gene|genes]] are&lt;br /&gt;
blueprints used to construct bodies, then &lt;br /&gt;
maybe [[Meme|memes]] can be considered &lt;br /&gt;
blueprints to construct minds.&lt;br /&gt;
&lt;br /&gt;
An autobiography is perhaps the thing&lt;br /&gt;
most similar to such a &lt;br /&gt;
blueprint. One difference from genetic blueprints &lt;br /&gt;
is the temporal relationship: genetic blueprints &lt;br /&gt;
exist before the life of the individual, whereas &lt;br /&gt;
autobiographies exist only after the life of &lt;br /&gt;
the individual. During our life, our personality&lt;br /&gt;
is reinforced and we become more like ourselves.&lt;br /&gt;
&lt;br /&gt;
Yet autobiographies of other people and&lt;br /&gt;
ancestors can be used to &amp;quot;build new souls&amp;quot;.&lt;br /&gt;
&amp;quot;Holy books&amp;quot; are often autobiographies of &lt;br /&gt;
famous prophets or represent the history of &lt;br /&gt;
whole countries and cultures. Can stories, &lt;br /&gt;
fairy tales, myths, &amp;quot;holy books&amp;quot; and belief systems &lt;br /&gt;
in general (or any set of rules and ideas which specify &lt;br /&gt;
the right kind of behavior) be considered &lt;br /&gt;
&amp;quot;memetic blueprints&amp;quot; to build souls? Are &lt;br /&gt;
they the scripts which contain the rules that &lt;br /&gt;
direct our plays?&lt;br /&gt;
&lt;br /&gt;
If a body is a 3-dimensional entity, how many &lt;br /&gt;
dimensions does a mind or a soul have? How many&lt;br /&gt;
memes are needed to &amp;quot;make a mind&amp;quot;? I would &lt;br /&gt;
say it depends: maybe at least as many dimensions &lt;br /&gt;
as the roles a person plays. A person plays &lt;br /&gt;
many roles, related to nationality, language, family, &lt;br /&gt;
work, etc. Each role is associated with a bundle &lt;br /&gt;
of behavior patterns or a set of memes. Do you&lt;br /&gt;
agree?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* Wikipedia entry for [http://en.wikipedia.org/wiki/Mind Mind]&lt;br /&gt;
&lt;br /&gt;
[[Category:Consciousness]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Mind</id>
		<title>Mind</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Mind"/>
				<updated>2011-02-16T21:02:34Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The '''mind''' is related to the [[Self]] and is considered responsible for one's thoughts and feelings.&lt;br /&gt;
It is an abstract concept which describes all of the brain's conscious and unconscious cognitive processes.&lt;br /&gt;
Although it does not exist as a single, unified substance, it emerges from the coordinated action of a group&lt;br /&gt;
of agents and acts to orchestrate them in turn. In this sense,&lt;br /&gt;
&lt;br /&gt;
: god = coordinated hallucination of people&lt;br /&gt;
: mind = coordinated hallucination of neurons&lt;br /&gt;
&lt;br /&gt;
There are many theories of the mind and its function;&lt;br /&gt;
the [[Society_of_Mind|society of mind]] approach tries to&lt;br /&gt;
explain the mind as a society of agents (or CAS).&lt;br /&gt;
&lt;br /&gt;
== Reconstruct a Mind ==&lt;br /&gt;
&lt;br /&gt;
Can we (re-)construct minds from different &lt;br /&gt;
parts or pieces? Is there a blueprint for &lt;br /&gt;
a soul (whatever that is)? If [[Gene|genes]] are&lt;br /&gt;
blueprints used to construct bodies, then &lt;br /&gt;
maybe [[Meme|memes]] can be considered &lt;br /&gt;
blueprints to construct minds.&lt;br /&gt;
&lt;br /&gt;
An autobiography is perhaps the thing&lt;br /&gt;
most similar to such a &lt;br /&gt;
blueprint. One difference from genetic blueprints &lt;br /&gt;
is the temporal relationship: genetic blueprints &lt;br /&gt;
exist before the life of the individual, whereas &lt;br /&gt;
autobiographies exist only after the life of &lt;br /&gt;
the individual. During our life, our personality&lt;br /&gt;
is reinforced and we become more like ourselves.&lt;br /&gt;
&lt;br /&gt;
Yet autobiographies of other people and&lt;br /&gt;
ancestors can be used to &amp;quot;build new souls&amp;quot;.&lt;br /&gt;
&amp;quot;Holy books&amp;quot; are often autobiographies of &lt;br /&gt;
famous prophets or represent the history of &lt;br /&gt;
whole countries and cultures. Can stories, &lt;br /&gt;
fairy tales, myths, &amp;quot;holy books&amp;quot; and belief systems &lt;br /&gt;
in general (or any set of rules and ideas which specify &lt;br /&gt;
the right kind of behavior) be considered &lt;br /&gt;
&amp;quot;memetic blueprints&amp;quot; to build souls? Are &lt;br /&gt;
they the scripts which contain the rules that &lt;br /&gt;
direct our plays?&lt;br /&gt;
&lt;br /&gt;
If a body is a 3-dimensional entity, how many &lt;br /&gt;
dimensions does a mind or a soul have? How many&lt;br /&gt;
memes are needed to &amp;quot;make a mind&amp;quot;? I would &lt;br /&gt;
say it depends: maybe at least as many dimensions &lt;br /&gt;
as the roles a person plays. A person plays &lt;br /&gt;
many roles, related to nationality, language, family, &lt;br /&gt;
work, etc. Each role is associated with a bundle &lt;br /&gt;
of behavior patterns or a set of memes. Do you&lt;br /&gt;
agree?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* Wikipedia entry for [http://en.wikipedia.org/wiki/Mind Mind]&lt;br /&gt;
&lt;br /&gt;
[[Category:Consciousness]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Main_Page</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Main_Page"/>
				<updated>2011-02-11T21:49:24Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Protected &amp;quot;Main Page&amp;quot; ([edit=sysop] (indefinite) [move=sysop] (indefinite))&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;big&amp;gt;'''Welcome to the CAS Group Wiki.'''&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Emergence.gif|left|Weak Emergence]] The CAS Group is a community for people interested in the science and theory of [[Complex_Adaptive_System|complex adaptive systems]] (CAS), and in their [[Basic_System_Theory|basic principles]] and [[Applied_System_Theory|applied principles]]. CASs are perhaps the most interesting complex systems: they are not only 'complex' - diverse and made up of multiple interconnected elements - but also 'adaptive' - capable of changing and learning from experience. The basic element of CAS theory is the [[Agent-Based_Model|agent-based model]]. The logo depicts the phenomenon of [[Emergence|emergence]], as it can be observed, for example, in swarms and flocks. Like [[Self-Organization|self-organization]], it is an essential principle in the theory of complex adaptive systems.&lt;br /&gt;
&lt;br /&gt;
There are also pages about the [[Chaos_in_the_mind|role of chaos in neural networks]], [[Self-Consciousness|self-consciousness]], new [[Beyond_AI|forms of AI]], new kinds of science, and new forms of technology, as well as interesting philosophical questions, for &lt;br /&gt;
example whether we can describe the mind as a [[Society_of_Mind|society of agents]].&lt;br /&gt;
&lt;br /&gt;
Sources and related projects are [http://en.wikipedia.org Wikipedia], [http://www.cscs.umich.edu/~crshalizi/notebooks/ C.R. Shalizi's Notebooks], and the [http://pespmc1.vub.ac.be/ Principia Cybernetica]. Related Wikis are for example the [http://www.swarm.org/wiki/Main_Page SwarmWiki], specializing in Swarm Intelligence and agent-based modelling, the [http://intersci.ss.uci.edu/wiki/index.php/Main_Page InterSciWiki] on cross-disciplinary and complexity sciences research, &lt;br /&gt;
the [http://www.aaai.org/AITopics/pmwiki/pmwiki.php AITopics Library], and the [http://www.scholarpedia.org/ Scholarpedia]. At this time the Wiki contains [[Special:Allpages|{{NUMBEROFARTICLES}} articles]]. A complete overview of [[Special:Allpages|all pages]] and [[Special:Categories|all categories]] also exists.&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Loose_Coupling</id>
		<title>Loose Coupling</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Loose_Coupling"/>
				<updated>2011-02-11T21:46:21Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Reverted edits by Eboxytezi (Talk) to last version by Jfromm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Loose coupling''' means low dependency between two [[System|systems]]. &lt;br /&gt;
It is a way to achieve high [[Fault Tolerance|fault tolerance]]&lt;br /&gt;
and is often used in association with [[Web Service|Web Services]] and [[Service-Oriented_Architecture|Service-Oriented Architecture (SOA)]].&lt;br /&gt;
Loosely coupled systems are not strongly connected with each other;&lt;br /&gt;
they have only minimal coupling between the components of the system.&lt;br /&gt;
They are the opposite of tightly coupled systems, which have&lt;br /&gt;
strong dependencies between their components.&lt;br /&gt;
In loosely coupled systems, there is a low probability that changes &lt;br /&gt;
within one module or component will create unanticipated changes &lt;br /&gt;
within other modules or components.&lt;br /&gt;
The modular approach to designing and developing systems&lt;br /&gt;
associated with loose coupling makes applications more &lt;br /&gt;
agile and flexible, and enables quicker change.&lt;br /&gt;
A common way to achieve loose coupling is to use interfaces&lt;br /&gt;
and messages: one module does not have to be concerned &lt;br /&gt;
with the internal implementation of another module, &lt;br /&gt;
and interacts with it through a stable &lt;br /&gt;
interface.&lt;br /&gt;
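As a rough sketch of the interface-and-messages idea described above (not part of the original article; the class and function names here are illustrative only):

```python
# Hypothetical sketch: loose coupling via a stable interface, so callers
# never depend on another module's internal implementation.
from abc import ABC, abstractmethod

class MessageChannel(ABC):
    """Stable interface; concrete implementations may change freely."""
    @abstractmethod
    def send(self, message: str) -> None: ...

class InMemoryChannel(MessageChannel):
    """One possible implementation; callers never see these internals."""
    def __init__(self) -> None:
        self.delivered = []
    def send(self, message: str) -> None:
        self.delivered.append(message)

def notify(channel: MessageChannel, text: str) -> None:
    # Depends only on the MessageChannel interface, not on any
    # concrete implementation: this is the loose coupling.
    channel.send(text)

channel = InMemoryChannel()
notify(channel, "order shipped")
print(channel.delivered)  # ['order shipped']
```

Swapping InMemoryChannel for, say, a network-backed implementation would require no change to notify, which is exactly the low-probability-of-ripple-effects property the article describes.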
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[http://dev2dev.bea.com/pub/a/2004/02/orchard.html Achieving Loose Coupling], David Orchard&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Emergent_Entity</id>
		<title>Emergent Entity</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Emergent_Entity"/>
				<updated>2011-02-11T21:46:18Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Reverted edits by Eboxytezi (Talk) to last version by Jfromm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Emergent entities''' are properties, patterns or substances which 'arise' out of more fundamental entities through an [[Emergence|emergence]] process. A property of a system is emergent, if it is not a property of any fundamental element. Emergent entities are not reducible. Although they arise out of more fundamental items, they are 'novel' or 'irreducible' with respect to them. Emergent properties are dependent on underlying processes, and yet independent from underlying processes. This is the paradox of emergence.&lt;br /&gt;
&lt;br /&gt;
The differences between the terms are:&lt;br /&gt;
&lt;br /&gt;
* '''Emergent Property''': A property of a system is emergent, if it is not a property of any fundamental element. If a property is not necessary (or does not appear) in the description of all properties of the fundamental elements, elementary actors or basic building blocks, it can only be an emergent property.&lt;br /&gt;
&lt;br /&gt;
* '''Emergence''': Emergence is the appearance of emergent properties on a higher level of organization without central organizer. It can be observed in complex systems due to the pattern of interactions between the elements of a system over time.&lt;br /&gt;
&lt;br /&gt;
* '''Entity''': Emergent properties lead to the appearance of new entities and objects on a larger scale. They play an important role in the definition of general terms like being, entity and substance. Russ Abbott defined an entity as follows: &amp;quot;An aggregation counts as an entity if it has emergent properties&amp;quot;; see his talk [http://abbott.calstatela.edu/PapersAndTalks/EmergenceAndEntities.pdf Emergence and Entities].&lt;br /&gt;
&lt;br /&gt;
== Agent based Models ==&lt;br /&gt;
&lt;br /&gt;
The basic [[Agent-Based Model|agent-based model]]s explain the emergence of fundamental properties:&lt;br /&gt;
&lt;br /&gt;
* [[Boids Model]]: swarms &lt;br /&gt;
* [[Dissemination Model]]: culture&lt;br /&gt;
* [[Segregation Model]]: ghettos and segregation&lt;br /&gt;
* [[Tribute Model]]: power&lt;br /&gt;
* [[El_Farol_Bar_Model|El Farol Bar Model]]: chaos&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* [http://plato.stanford.edu/entries/properties-emergent/ Emergent Properties] at SEP&lt;br /&gt;
* Talk from Russ Abbott about [http://abbott.calstatela.edu/PapersAndTalks/EmergenceAndEntities.pdf Emergence and Entities]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Edge_of_Chaos</id>
		<title>Edge of Chaos</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Edge_of_Chaos"/>
				<updated>2011-02-11T21:46:13Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Reverted edits by Eboxytezi (Talk) to last version by Jfromm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Edge of Chaos''' is a loosely defined concept and a phrase coined by Christopher Langton in 1990. The phrase originally refers to an area in the range of a variable, &amp;amp;lambda; (lambda), which was varied while examining the behavior of a [[Cellular Automata|cellular automaton]] (CA). As &amp;amp;lambda; varied, the behavior of the CA went roughly through a [[phase transition]] of behaviors. It is a general, vague and ambiguous phrase, sometimes used in a more or less concrete context, but more often it is used as a coarse metaphor for a phase transition, a critical point with maximal complexity, or a boundary between order and disorder, stability and instability, regularity and irregularity, organization and chaos. In other words, it marks the boundary between order and chaos.&lt;br /&gt;
&lt;br /&gt;
[[Image:Edge_of_Chaos.png|right|thumb|400px|Edge of Chaos]]&lt;br /&gt;
&lt;br /&gt;
== General meaning == &lt;br /&gt;
&lt;br /&gt;
In the sciences in general, the phrase has come to serve as a metaphor for the idea that some physical, biological, economic and social systems operate in a region between order and disorder (characterized by randomness or chaos), where the [[Complexity|complexity]] is maximal. In this general case it describes a point of transition between order and chaos, between stable, solid, frozen and crystalline behavior on one side and unstable, chaotic and noisy behavior on the other side. The Linux community, for example, is sometimes described as&lt;br /&gt;
[http://www.firstmonday.org/issues/issue5_3/kuwabara/ A bazaar at the edge of chaos].&lt;br /&gt;
&lt;br /&gt;
The edge of chaos is the point between order and&lt;br /&gt;
chaos with the highest complexity where the most &lt;br /&gt;
interesting things happen. The complexity has a price, &lt;br /&gt;
since this point is often also the most unstable point, &lt;br /&gt;
where things are barely stable due to the clash of opposite &lt;br /&gt;
forces. The edge of chaos in this sense is the point&lt;br /&gt;
with the highest complexity and lowest stability.&lt;br /&gt;
&lt;br /&gt;
== Definitions ==&lt;br /&gt;
&lt;br /&gt;
=== To or At the Edge? ===&lt;br /&gt;
&lt;br /&gt;
The following frequent statement is obviously very imprecise: many complex adaptive systems, including life itself, seem to evolve naturally towards a regime that is delicately poised between order and chaos. Do living systems evolve ''to'' the edge of chaos, or rather ''at'' the edge of chaos? &lt;br /&gt;
&lt;br /&gt;
The term edge of chaos refers originally to an area in the range of &amp;amp;lambda;, which was varied while examining the behavior of a cellular automaton (CA). As &amp;amp;lambda; varied, the behavior of the CA went through a phase transition of behaviors. For a given CA rule table, &amp;amp;lambda; can be computed as follows: one state q is chosen arbitrarily to be quiescent for a CA. The &amp;amp;lambda; value of a given CA rule is then the fraction of non-quiescent output states in the rule table.&lt;br /&gt;
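The computation just described can be sketched in Python; the dict-based rule-table representation and the choice of elementary rule 110 as an example are illustrative, not taken from the original text:

```python
def langton_lambda(rule_table, quiescent=0):
    """Langton's lambda: the fraction of rule-table outputs
    that are not the quiescent state."""
    outputs = list(rule_table.values())
    return sum(1 for s in outputs if s != quiescent) / len(outputs)

# Elementary CA rule 110 as a lookup table: 3-cell neighborhood -> next state.
rule110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}
print(langton_lambda(rule110))  # 5 of 8 outputs are non-quiescent: 0.625
```

With state 0 taken as quiescent, the rule table of rule 110 has five non-quiescent outputs out of eight entries.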
&lt;br /&gt;
For a two-state CA, &amp;amp;lambda;=1/2 is a critical point or &amp;quot;phase transition&amp;quot; between two phases, one phase near &amp;amp;lambda;=0 where all cells have value 0, and one phase near &amp;amp;lambda;=1 where all cells have value 1. This definition of the &amp;quot;edge of chaos&amp;quot; would be identical or similar to a critical point.&lt;br /&gt;
&lt;br /&gt;
If you consider only &amp;amp;lambda; in the range 0 &amp;lt;= &amp;amp;lambda; &amp;lt;= 1/2, because every CA with &amp;amp;lambda; &amp;gt; 1/2 corresponds to a CA with &amp;amp;lambda; &amp;lt; 1/2 after interchanging the roles of black and white cells, you can observe another &amp;quot;edge of chaos&amp;quot;. As &amp;amp;lambda; increases, CA roughly go through the four basic Wolfram classes in this order: I (fixed), II (periodic), IV (complex), III (chaotic). Class IV can be seen as the edge of chaos at the boundary between class II (order) and III (chaos), see&lt;br /&gt;
[http://classes.yale.edu/Fractals/CA/CAPatterns/Langton/Langton.html]&lt;br /&gt;
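A minimal elementary-CA stepper makes the class progression concrete; the ring size and the choice of rule 110 (a class IV, &amp;quot;complex&amp;quot; rule) are illustrative:

```python
def step(cells, rule):
    """One synchronous update of an elementary CA on a ring of cells.
    The 3-cell neighborhood indexes a bit of the 8-bit rule number."""
    n = len(cells)
    return [
        (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 31
row[15] = 1                 # single seed cell
for _ in range(10):
    row = step(row, 110)    # rule 110: Wolfram class IV behavior
print(sum(row))             # number of live cells after 10 steps
```

Swapping in a class I rule (e.g. rule 0) or a class III rule (e.g. rule 30) shows the fixed and chaotic regimes on either side of the complex one.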
&lt;br /&gt;
If the four Wolfram classes are ill-defined, as [http://www.ics.uci.edu/~eppstein/ca/wolfram.html Eppstein] argues, it is doubtful that this definition is still valid. There are roughly two useful definitions of &amp;quot;edge of chaos&amp;quot;, one in the more specific sense of ''self-organized criticality'' and ''phase transition'', the other in the more general and descriptive sense of a border between order and chaos:&lt;br /&gt;
&lt;br /&gt;
{{SelfOrg}}&lt;br /&gt;
&lt;br /&gt;
=== Self-organized criticality ===&lt;br /&gt;
&lt;br /&gt;
The first definition is a specific definition related to [[Self-Organized Criticality|self-organized criticality]]: &lt;br /&gt;
The edge of chaos can be found at a critical point with strong fluctuations and &lt;br /&gt;
scale-free correlations. Systems can cross this point more or less rapidly, but&lt;br /&gt;
usually they evolve to this point, i.e. they are '''evolving to the edge of chaos'''. &lt;br /&gt;
A sandpile for example evolves naturally towards a certain shape if we add grains to it, &lt;br /&gt;
until it has a certain critical slope. This point can often be measured precisely (for example &lt;br /&gt;
&amp;amp;lambda;=1/2 for Langton's parameter, a certain critical slope, or a critical temperature like 100 degrees Celsius).&lt;br /&gt;
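A toy Bak-Tang-Wiesenfeld sandpile illustrates this evolution toward a critical state; the grid size, the threshold of 4, and open boundaries are illustrative modeling choices:

```python
import random

def drop_and_topple(grid, size, r, c, threshold=4):
    """Add one grain at (r, c), then relax every unstable site.
    Returns the avalanche size (number of topplings)."""
    grid[r][c] += 1
    avalanche = 0
    unstable = [(r, c)]
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] >= threshold:
            grid[i][j] -= threshold          # four grains fall to the neighbors
            avalanche += 1
            for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if ni in range(size) and nj in range(size):  # edge grains are lost
                    grid[ni][nj] += 1
                    unstable.append((ni, nj))
    return avalanche

rng = random.Random(1)
size = 11
grid = [[0] * size for _ in range(size)]
sizes = [drop_and_topple(grid, size, rng.randrange(size), rng.randrange(size))
         for _ in range(2000)]
print(max(sizes))  # after the transient, avalanches of widely varying sizes occur
```

Most single grains do nothing, while some trigger long chain reactions: the hallmark of the self-organized critical state described above.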
&lt;br /&gt;
The critical point is a point of high complexity, because&lt;br /&gt;
it combines stability and instability: the system evolves&lt;br /&gt;
automatically to a certain state (which increases stability,&lt;br /&gt;
certainty, and predictability), but at this state chain &lt;br /&gt;
reactions can lead to avalanches and cascades of any size &lt;br /&gt;
(which increases instability, uncertainty and unpredictability). &lt;br /&gt;
Thus the critical point in self-organized criticality is a &lt;br /&gt;
&amp;quot;metastable&amp;quot; and complex state characterized by instability in &lt;br /&gt;
stability, uncertainty in certainty, or unpredictability in &lt;br /&gt;
predictability.&lt;br /&gt;
&lt;br /&gt;
=== Phase Transition ===&lt;br /&gt;
&lt;br /&gt;
The second definition is a more general and less precise definition: The edge of chaos is &lt;br /&gt;
the general point of highest complexity between order and disorder or regularity and chaos. &lt;br /&gt;
It defines the small region with highest complexity between order (stability, no change or &lt;br /&gt;
periodic change, rigid or fixed structures, too static) and chaos (instability, constant &lt;br /&gt;
or aperiodic change, no rigid or fixed structures, too noisy). The evolution at this point, &lt;br /&gt;
'''at the edge of chaos''', produces naturally the most complex structures in &lt;br /&gt;
[[Adaptation|adaptive]] systems, because the complexity in the environment reaches its &lt;br /&gt;
peak here. This point often cannot be measured precisely (for example &amp;amp;lambda; somewhere &lt;br /&gt;
between 0 and 1/2, except at the transition or accumulation point on the period-doubling &lt;br /&gt;
route to chaos).&lt;br /&gt;
&lt;br /&gt;
As you go from the unified, ordered and regular phase stepwise to the diverse, disordered and&lt;br /&gt;
chaotic phase, you reach a complex phase transition point between unity and diversity which &lt;br /&gt;
can be characterized as unity in diversity (or vice versa).&lt;br /&gt;
Similar to the peak of complexity at the border between order and chaos, we get in fact &lt;br /&gt;
complex small-world networks if we add randomness to order in regular networks, and complex &lt;br /&gt;
scale-free networks if we add order to randomness (through preferential attachment) in &lt;br /&gt;
random networks, see also the entry about [[Complex_Network|complex networks]].&lt;br /&gt;
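The first of these two constructions (adding a little randomness to a regular ring lattice) can be sketched in the spirit of Watts and Strogatz; the parameters below are illustrative:

```python
import random

def small_world(n, k, p, seed=0):
    """Start from a ring lattice (order): n nodes, each linked to its k
    nearest neighbors. Then rewire each edge with probability p (randomness)."""
    rng = random.Random(seed)
    lattice = [(i, (i + j) % n) for i in range(n) for j in range(1, k // 2 + 1)]
    edges = set()
    for u, v in lattice:
        w = rng.randrange(n) if p > rng.random() else v
        while w == u or (u, w) in edges or (w, u) in edges:
            w = rng.randrange(n)  # avoid self-loops and duplicate edges
        edges.add((u, w))
    return edges

g = small_world(20, 4, 0.1)
print(len(g))  # the rewiring preserves the edge count: 40
```

At p=0 the network stays fully regular, at p=1 it becomes essentially random; the small-world regime with short paths but high clustering sits at small intermediate p, mirroring the order/chaos boundary in the text.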
&lt;br /&gt;
=== Comparison ===&lt;br /&gt;
&lt;br /&gt;
The first definition (1) related to self-organized criticality describes a point&lt;br /&gt;
and a state where the system evolves to, where the overall system is evolving ''to'' &lt;br /&gt;
the edge of chaos, whereas the second definition (2) related to phase transitions &lt;br /&gt;
describes a point and place in an environment where other systems can evolve, where &lt;br /&gt;
systems are evolving ''at'' the edge of chaos.&lt;br /&gt;
&lt;br /&gt;
What both definitions, or both &amp;quot;edge of chaos&amp;quot; types, have in common is the following:&lt;br /&gt;
&lt;br /&gt;
* the behavior of the system can change abruptly at this critical point&lt;br /&gt;
* point of high(est) complexity: neither a diverse system with extreme disorder (for example a gas), nor a unified system with extreme order (for example a crystal or magnet) is very complex. Complexity reaches its peak between order and disorder, in (1) as instability in stability, and in general in (2) as diversity in unity&lt;br /&gt;
* Power-law behavior: critical exponents and scale-free correlations in (1) and scale-free distributions of events in (2)&lt;br /&gt;
* a delicate balance between two opposite complementary forces, for example one organizing and regulatory force, and one random or chaotic force, or one expanding and one contracting force, or one attracting and one repulsive force&lt;br /&gt;
&lt;br /&gt;
At the edge of chaos local microscopic influences can have far-reaching macroscopic effects.&lt;br /&gt;
Therefore another definition may be: a system near a point with sensitive dependence on initial &lt;br /&gt;
conditions (&amp;quot;[[Butterfly Effect]]&amp;quot;). A small step (&amp;quot;over the edge&amp;quot;) can have a large and &lt;br /&gt;
dramatic effect, or nearly no effect at all. The macroscopic behavior of such a system &lt;br /&gt;
can change dramatically as a result of small changes in microscopic conditions.&lt;br /&gt;
&lt;br /&gt;
== Examples ==&lt;br /&gt;
&lt;br /&gt;
=== Game of Life ===&lt;br /&gt;
&lt;br /&gt;
The activity in the Game of Life is on the edge of chaos in several &lt;br /&gt;
ways. It is on the edge between growing and shrinking, between expansion&lt;br /&gt;
and contraction. Some clusters grow, others shrink. &lt;br /&gt;
In the Game of Life you can observe all different forms of behavior:&lt;br /&gt;
[http://llk.media.mit.edu/projects/emergence/life-zoo.html stable objects],&lt;br /&gt;
[http://llk.media.mit.edu/projects/emergence/life-intro.html propagating objects] like gliders or spaceships and&lt;br /&gt;
[http://llk.media.mit.edu/projects/emergence/on-the-edge.html unstable objects] like the R-pentomino.&lt;br /&gt;
A glider has five active cells, and propagates steadily and silently&lt;br /&gt;
through the empty space. The R-pentomino has a five-cell shape, too. &lt;br /&gt;
It looks very simple, but it creates a wild flurry of activity. Both &lt;br /&gt;
structures seem to be near the edge of chaos: with a change in only &lt;br /&gt;
two cells you can switch between the two, but one creates a chaotic &lt;br /&gt;
proliferation of activity, while the other remains harmless.&lt;br /&gt;
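The glider's tame behavior can be checked with a minimal Life stepper; the glider coordinates below are the standard pattern, used here for illustration:

```python
from collections import Counter

def life_step(cells):
    """One Game of Life generation; cells is a set of live (x, y) pairs."""
    counts = Counter((x + dx, y + dy)
                     for x, y in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is live next generation with exactly 3 neighbors,
    # or with 2 neighbors if it is already live.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = life_step(g)
print(g == {(x + 1, y + 1) for x, y in glider})  # True: shifted, not grown
```

After four generations the glider reappears translated diagonally by one cell; seeding the same stepper with an R-pentomino instead produces the long, expanding flurry of activity described above.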
&lt;br /&gt;
=== Habitable Zones ===&lt;br /&gt;
&lt;br /&gt;
Within each galaxy there is an area that could &lt;br /&gt;
support life, a ring around the center called&lt;br /&gt;
the &amp;quot;Galactic Habitable Zone&amp;quot; (GHZ) (see the &lt;br /&gt;
Scientific American article &amp;quot;Refuges for Life in a Hostile Universe&amp;quot; by &lt;br /&gt;
Gonzalez et al. (2001) [http://atropos.as.arizona.edu/aiz/teaching/a204/etlife/SciAm01.pdf]).&lt;br /&gt;
Within each planetary system there is again an&lt;br /&gt;
area that could support life, the planetary &lt;br /&gt;
or &amp;quot;Circumstellar Habitable Zone&amp;quot; (CHZ).&lt;br /&gt;
On each planet, there is again an area that&lt;br /&gt;
could support life: life is only possible near&lt;br /&gt;
the surface, and at the surface in turn at the &lt;br /&gt;
edge between icy and hot regions, between fluid &lt;br /&gt;
and dry areas. One could classify all these habitable &lt;br /&gt;
areas as the &amp;quot;edge of chaos&amp;quot; between order and chaos.&lt;br /&gt;
&lt;br /&gt;
==Emergence of rules and evolution of laws==&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;edge of chaos&amp;quot; is related to the evolution of laws&lt;br /&gt;
and to the [[Emergence|emergence]] of rigid rules from lax rules.&lt;br /&gt;
Lax rules are similar to a liquid phase, rigid rules are&lt;br /&gt;
similar to a solid phase, and they are connected by&lt;br /&gt;
a critical point or a phase transition.&lt;br /&gt;
To speak in social terms, if only the interests of the &lt;br /&gt;
individual count (the liquid phase with lax rules), we get too &lt;br /&gt;
much chaos and anarchy, disorder and disorientation, and no &lt;br /&gt;
coherent structure or order can emerge. If only the interests of &lt;br /&gt;
the group matter (the solid phase with rigid rules), we get &lt;br /&gt;
too much order, stagnation and suppression. A suitable &lt;br /&gt;
compromise between the interests of the individual and &lt;br /&gt;
the interests of the group can therefore possibly be found &lt;br /&gt;
at the &amp;quot;edge of chaos&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Selfish agents and people will recognize the need &lt;br /&gt;
for conventions, rules, norms and laws most notably &lt;br /&gt;
if their personal interests are heavily violated. &lt;br /&gt;
People will cry for a law against theft or murder if &lt;br /&gt;
it happens to them, if their belongings are stolen&lt;br /&gt;
or their relatives are murdered. They will long for rigid and&lt;br /&gt;
strict traffic rules if they are seriously injured&lt;br /&gt;
or killed in a traffic accident. They will appreciate culture &lt;br /&gt;
and long for justice and peace after times of trouble, after &lt;br /&gt;
&amp;quot;dark ages&amp;quot; with constant conflicts, battles and wars &lt;br /&gt;
(the &amp;quot;warring states period&amp;quot; in ancient China, the Middle &lt;br /&gt;
Ages in Europe, the Dark Ages in ancient Greece, &lt;br /&gt;
the &amp;quot;intermediate periods&amp;quot; in ancient Egypt). In other &lt;br /&gt;
words, conflicts, battles and wars will increase the threshold&lt;br /&gt;
of constraints and restrictions that people are willing to &lt;br /&gt;
accept. &lt;br /&gt;
&lt;br /&gt;
Yet too much order, organization, and stagnation is just &lt;br /&gt;
as bad as too much chaos, terror, and anarchy. Too much&lt;br /&gt;
tyranny will increase the wish for freedom. The interests &lt;br /&gt;
of the individuals can be violated if there is too little &lt;br /&gt;
order and organization (Hobbes' war of everyone against everyone) &lt;br /&gt;
or too much order (tyranny and bureaucracy). Rules, norms and &lt;br /&gt;
laws will evolve in an oscillating process at this &amp;quot;edge &lt;br /&gt;
of chaos&amp;quot; between too much chaos and too much  order, as a &lt;br /&gt;
compromise between the interests of the group and the &lt;br /&gt;
interests of the individual.&lt;br /&gt;
&lt;br /&gt;
If we consider for example the transition from loosely&lt;br /&gt;
coupled tribes to the first cultures and states, we cannot&lt;br /&gt;
observe a clear self-organization into more and more complexity.&lt;br /&gt;
People in the Bronze Age did not sit down together and say &amp;quot;we need &lt;br /&gt;
to create a society based on parliamentary democracy&amp;quot;. What we&lt;br /&gt;
observe first is war, fighting and terror: a war of everybody&lt;br /&gt;
against everybody. After one side has won, usually a kingship, &lt;br /&gt;
tyranny or dictatorship is established. That is a transition &lt;br /&gt;
from chaos and anarchy to chiefdom and hierarchical order. &lt;br /&gt;
Because many people are not satisfied with tyranny or &lt;br /&gt;
dictatorship, eventually a revolution breaks out, chaos &lt;br /&gt;
prevails, and the same process starts all over again.&lt;br /&gt;
&lt;br /&gt;
==Articles==&lt;br /&gt;
&lt;br /&gt;
* Christopher G. Langton. &amp;quot;Computation at the edge of chaos: phase transitions and emergent computation&amp;quot;. ''Physica D'', '''42''', 1990.&lt;br /&gt;
* [http://cse.ucdavis.edu/~cmg/papers/wlboac.pdf What Lies Between Order and Chaos?], James P. Crutchfield, in ''Art and Complexity'', J. Casti, editor, Oxford University Press (2002). &lt;br /&gt;
* [http://cse.ucdavis.edu/~cmg/papers/CompOnset.pdf Computation at the Onset of Chaos], James P. Crutchfield and K. Young, in ''Entropy, Complexity, and the Physics of Information'', W. Zurek, editor, SFI Studies in the Sciences of Complexity, VIII, Addison-Wesley, Reading, Massachusetts (1990) pp. 223-269.&lt;br /&gt;
* [http://cse.ucdavis.edu/~evca/Papers/DynCompEdge.pdf Dynamics, Computation, and the 'Edge of Chaos': A Re-examination], M. Mitchell, J. P. Crutchfield, and P. T. Hraber, In Complexity: Metaphors, Models, and Reality. G. Cowan, D. Pines, and D. Melzner, editors. SFI Series in the Sciences of Complexity, volume XIX, Reading, MA: Addison-Wesley, 1994: 497-513.&lt;br /&gt;
* [http://www.cs.pdx.edu/~mm/RevEdge.html Revisiting the Edge of Chaos], Melanie Mitchell, Peter T. Hraber, and James P. Crutchfield, ''Complex Systems'', '''7''':89--130, 1993.&lt;br /&gt;
&lt;br /&gt;
==Links and References==&lt;br /&gt;
&lt;br /&gt;
* Ko Kuwabara, [http://www.firstmonday.org/issues/issue5_3/kuwabara/ A Bazaar at the Edge of Chaos], First Monday, volume 5, number 3 (2000)&lt;br /&gt;
&lt;br /&gt;
* Shalizi's notebook entry on [http://cscs.umich.edu/~crshalizi/notebooks/edge-of-chaos.html the Edge of Chaos]&lt;br /&gt;
&lt;br /&gt;
[[Category:Basic Principles]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Cognitive_dissonance</id>
		<title>Cognitive dissonance</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Cognitive_dissonance"/>
				<updated>2011-02-11T21:46:10Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Reverted edits by Eboxytezi (Talk) to last version by Jfromm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Cognitive dissonance''' is an uncomfortable and unpleasant feeling caused by holding two contradictory ideas simultaneously. It is a psychological term denoting the mental state in which two or more incompatible or contradictory ideas are both held to be true. The &amp;quot;ideas&amp;quot; or &amp;quot;cognitions&amp;quot; in question may include attitudes and beliefs, the awareness of one's behavior, and facts. The theory of cognitive dissonance proposes that people have a motivational drive to reduce dissonance by changing their attitudes, beliefs, and behaviors, or by justifying or rationalizing their attitudes, beliefs, and behaviors. &lt;br /&gt;
&lt;br /&gt;
In an [[Agent-Based Model]], cognitive dissonance can be described as a conflict between agents. Conflicts and contradictions are unpleasant for the mind, society or population as a whole, because they damage the group integrity. They lead to confusion and displeasure, while the mind wants to gain insights and pleasure.&lt;br /&gt;
&lt;br /&gt;
Cognitive dissonance is the opposite of [[Cognitive consonance|cognitive consonance]].&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* Wikipedia page on [http://en.wikipedia.org/wiki/Cognitive_dissonance cognitive dissonance]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=AOSE</id>
		<title>AOSE</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=AOSE"/>
				<updated>2011-02-11T21:46:05Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Reverted edits by Eboxytezi (Talk) to last version by Jfromm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Agent Oriented Software Engineering''' (AOSE) is the name for the attempt to construct and engineer&lt;br /&gt;
new forms of flexible and robust software with [[Agent|Agents]] and [[Multi-Agent System|Multi-Agent Systems]]. &lt;br /&gt;
Without doubt agents are well suited to simulate complex systems. Whether you can use them&lt;br /&gt;
to build and construct complex software systems, in order to achieve&lt;br /&gt;
a kind of [[Agent-Based Computing|agent-based computing]], is still an &lt;br /&gt;
open question.&lt;br /&gt;
&lt;br /&gt;
== Problems and Challenges ==&lt;br /&gt;
&lt;br /&gt;
=== Basic Problems ===&lt;br /&gt;
&lt;br /&gt;
There are many problems related to AOSE. The first problem is for instance an agreement about the basic terms:&lt;br /&gt;
what is an [[Agent|agent]], what characteristics define an [[Agent|agent]], what types of architectures are possible,&lt;br /&gt;
what forms of organization are available, what kind of methodology is suitable...&lt;br /&gt;
There is no doubt that you can use object-oriented software to construct useful software&lt;br /&gt;
applications. One can also use object-oriented software to construct agents, but whether it is possible&lt;br /&gt;
to build useful applications out of software agents is another question.&lt;br /&gt;
What forms of software you can create in turn with agents is still controversial. Probably &lt;br /&gt;
these types of software are different from traditional applications in software engineering.&lt;br /&gt;
Yet the situation is even worse: there are not only many problems related to AOSE, &lt;br /&gt;
AOSE itself is more a name for a problem than a solution.&lt;br /&gt;
&lt;br /&gt;
=== A Name for a Problem ===&lt;br /&gt;
&lt;br /&gt;
Each technology for building software has advantages and disadvantages. Agent technology&lt;br /&gt;
is not an exception, it has advantages and positive properties, but there are also many &lt;br /&gt;
pitfalls and drawbacks, see the section &amp;quot;Potentials and Pitfalls&amp;quot; of the [[Agent|agent]] page.&lt;br /&gt;
The advantage of traditional software techniques is the &amp;quot;easy&amp;quot; analysis, engineering and design &lt;br /&gt;
of predictable object-oriented systems with [[Unified Modeling Language|UML]]. The drawbacks &lt;br /&gt;
are brittleness and rigidity, low fault-tolerance and low scalability. &lt;br /&gt;
Agents and [[Multi-Agent System]]s offer on the contrary robustness, scalability and&lt;br /&gt;
fault-tolerance, but they go hand in hand with heavy problems in engineering, design&lt;br /&gt;
and predictability.&lt;br /&gt;
&lt;br /&gt;
Thus [[Agent|agents]] are a promising approach to get rid of traditional drawbacks &lt;br /&gt;
like brittleness and low scalability, but they introduce (often unnoticed) new problems &lt;br /&gt;
through the backdoor: the engineering and design of &lt;br /&gt;
[[Multi-Agent System|Multi-Agent Systems]], especially those with desirable &lt;br /&gt;
[[Emergence|emergent]] properties is notoriously difficult. Therefore&lt;br /&gt;
&amp;quot;Agent Oriented Software Engineering&amp;quot; or AOSE is more a name for a problem than&lt;br /&gt;
a name for a solution, like many other names as for example&lt;br /&gt;
&amp;quot;Agent Based Software Engineering&amp;quot; (ABSE), &amp;quot;Agent Based Software Development&amp;quot; (ABSD),&lt;br /&gt;
&amp;quot;Agent Oriented Programming&amp;quot; (AOP), or &amp;quot;Interaction Oriented Programming&amp;quot; (IOP).&lt;br /&gt;
The problem in AOSE is familiar and well-known to many researchers: &amp;quot;I &lt;br /&gt;
have a Multi-Agent System, but what is its purpose and function?&amp;quot;&lt;br /&gt;
If an agent decides itself what it needs to do, how can we make sure that &lt;br /&gt;
it does something useful or something we want it to do?&lt;br /&gt;
&lt;br /&gt;
The '''Engineering of Self-Organizing Applications''' ([[ESOA|ESOA]]) is&lt;br /&gt;
like AOSE a name for a central problem, not its solution. The problem in ESOA &lt;br /&gt;
is this: &amp;quot;I know some self-organizing systems in nature, but how do I engineer a &lt;br /&gt;
specific system for a certain problem?&amp;quot; In other words &amp;quot;How do I organize&lt;br /&gt;
a system which is allowed to organize itself?&amp;quot; Whereas AOSE focuses&lt;br /&gt;
on the conflict between autonomy and service-delivery (between agents and&lt;br /&gt;
objects, autonomy vs. heteronomy), ESOA is about the conflict &lt;br /&gt;
between engineering and emergence (between imposed purpose and independent goals,&lt;br /&gt;
or planned organization vs. self-organization).&lt;br /&gt;
&lt;br /&gt;
== Proposed Solutions and Methodologies ==&lt;br /&gt;
&lt;br /&gt;
=== Demands and Methodologies ===&lt;br /&gt;
&lt;br /&gt;
There is a huge number of agent systems and frameworks, which have &lt;br /&gt;
been developed because they are certainly an interesting research object. &lt;br /&gt;
The AOSE problem arises if we want to do something useful with them.&lt;br /&gt;
It manifests itself in the huge number and diversity of &lt;br /&gt;
AOSE Methodologies. These methodologies are not a real solution &lt;br /&gt;
of the AOSE problem, otherwise we would have a huge number&lt;br /&gt;
of useful new applications. They document a demand for a solution, &lt;br /&gt;
and the demand for AOSE is mainly self-made: I have built a&lt;br /&gt;
[[Multi-Agent System]], but what do I do with it?&lt;br /&gt;
The huge number of different methodologies seems to prove that there &lt;br /&gt;
is a big need for such a methodology, yet it points at the same &lt;br /&gt;
time to the existence of a major obstacle: &lt;br /&gt;
engineering (in form of software applications) and &lt;br /&gt;
[[Autonomy|autonomy]] (in form of [[Multi-Agent System|Multi-Agent Systems]] &lt;br /&gt;
(MAS) with emergent properties) don't fit well together.&lt;br /&gt;
&lt;br /&gt;
There are two possible potential solutions for this core&lt;br /&gt;
problem 'autonomy' of AOSE: (1) to limit autonomy by external&lt;br /&gt;
constraints, i.e. to organize MASs through roles and &lt;br /&gt;
organizational structures, or (2) to limit autonomy by internal &lt;br /&gt;
constraints, i.e. let the system organize itself and try to apply the &lt;br /&gt;
concept of self-organization, which is known as [[ESOA]].&lt;br /&gt;
Most methodologies try the first way.&lt;br /&gt;
The most popular and famous methodologies are&lt;br /&gt;
ADELFE, GAIA, Tropos, MASSIVE, MaSE, Prometheus.&lt;br /&gt;
Each country has proposed at least one methodology:&lt;br /&gt;
&lt;br /&gt;
* French, '''ADELFE''' (Atelier de Développement de Logiciels à Fonctionnalité Emergente) from Bernon and Gleizes [Ber03] &lt;br /&gt;
* British, '''GAIA''' 1st version from Wooldridge, Jennings &amp;amp; Kinny [Woo00], refined version from Zambonelli, Jennings &amp;amp; Wooldridge [Zam03] &lt;br /&gt;
* Italian/Canadian, '''Tropos''' 1st Version from Bresciani, Perini, Giorgini, Giunchiglia, Mylopoulos, [Bre01], refined version from Kolp, Giorgini and Mylopoulos [Kol02] &lt;br /&gt;
* German, '''MASSIVE''' (Multi Agent Systems Iterative View Engineering) Lind  [Lin01] &lt;br /&gt;
* American, '''MaSE''' (Multiagent Systems Engineering) from Wood and DeLoach [DeLo99]  &lt;br /&gt;
* Australian, '''Prometheus''' from Padgham and Winikoff [Pad02]&lt;br /&gt;
&lt;br /&gt;
Unfortunately, [[Autonomy|autonomy]] is one of the major characteristics &lt;br /&gt;
of agents. If we limit autonomy too much, we no longer have a multi-agent system.&lt;br /&gt;
[[Autonomy]] makes [[Multi-Agent System|Multi-Agent Systems]] and&lt;br /&gt;
agent oriented systems interesting, but it is also the reason why they&lt;br /&gt;
are not always useful and often very problematic.&lt;br /&gt;
&lt;br /&gt;
=== A possible general solution ===&lt;br /&gt;
&lt;br /&gt;
A possible solution to the problem of AOSE is the combination of modern agent-oriented&lt;br /&gt;
techniques and classic object-oriented software. If agents are used for &lt;br /&gt;
self-management and other [[Self-Star_Properties|self-* properties]],&lt;br /&gt;
a combination of [[Agent|agents]] and services can be a promising extension of &lt;br /&gt;
normal software, especially for distributed applications (i.e. applications in&lt;br /&gt;
[[Distributed_System|distributed systems]]). Such a combination can potentially&lt;br /&gt;
solve the AOSE conflict between autonomy (agents) and heteronomy (objects and services).&lt;br /&gt;
Agents can be used to achieve autonomy, services to guarantee service-delivery.&lt;br /&gt;
Since [[Agent|agents]] (esp. mobile agents) are usually part of a distributed system,&lt;br /&gt;
[[Distributed_System|distributed systems]], and especially distributed software&lt;br /&gt;
applications are a natural application for agent oriented software engineering.&lt;br /&gt;
&lt;br /&gt;
== References and Links ==&lt;br /&gt;
&lt;br /&gt;
=== Books ===&lt;br /&gt;
&lt;br /&gt;
Proceedings of the Workshops AOSE 2000, AOSE 2001, AOSE 2002, AOSE 2003 and AOSE 2004,&lt;br /&gt;
published by Springer as Lecture Notes in Computer Science (LNCS) Vol. 1957,&lt;br /&gt;
2222, 2585, 2935, and 3382. These Workshops were hosted by the International&lt;br /&gt;
Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS).&lt;br /&gt;
&lt;br /&gt;
=== General Articles ===&lt;br /&gt;
&lt;br /&gt;
* [http://citeseer.ist.psu.edu/242172.html Agent-Oriented Software Engineering], Nicholas R. Jennings, Michael Wooldridge (2000)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseer.ist.psu.edu/wooldridge00agentoriented.html Agent-Oriented Software Engineering: The State of the Art], Michael Wooldridge, Paolo Ciancarini (2000)&lt;br /&gt;
&lt;br /&gt;
* [http://citeseer.ist.psu.edu/zambonelli04challenges.html Challenges and Research Directions in Agent-Oriented Software Engineering],  Franco Zambonelli (2004)&lt;br /&gt;
&lt;br /&gt;
=== Methodology Articles ===&lt;br /&gt;
&lt;br /&gt;
[Ber03]&lt;br /&gt;
ADELFE, a Methodology for Adaptive Multi-Agent Systems Engineering&lt;br /&gt;
Carole Bernon, Marie-Pierre Gleizes, Sylvain Peyruqueou, Gauthier Picard&lt;br /&gt;
Proceedings of the 3rd International Workshop on &lt;br /&gt;
&amp;quot;Engineering Societies in the Agents World III&amp;quot; (ESAW 2002), &lt;br /&gt;
Paolo Petta, Robert Tolksdorf,  Franco Zambonelli (Eds.) LNCS 2577, Springer (2003) 156-169&lt;br /&gt;
&lt;br /&gt;
[Bre01]&lt;br /&gt;
Paolo Bresciani, Anna Perini, Paolo Giorgini, Fausto Giunchiglia, John Mylopoulos&lt;br /&gt;
A knowledge level software engineering methodology for agent oriented programming. &lt;br /&gt;
In Proceedings of the 5th International Conference on Autonomous Agents, pages 648–655. &lt;br /&gt;
ACM Press (2001)&lt;br /&gt;
&lt;br /&gt;
[DeLo99]&lt;br /&gt;
Scott A. DeLoach&lt;br /&gt;
Multiagent Systems Engineering: A Methodology And Language for Designing Agent Systems (1999)&lt;br /&gt;
Procs. Agent-Oriented Information Systems '99 (AOIS'99), Seattle, WA, USA, 1 May (1999) 1999&lt;br /&gt;
&lt;br /&gt;
[Kol02] &lt;br /&gt;
Manuel Kolp, Paolo Giorgini, John Mylopoulos. &lt;br /&gt;
A goal-based organizational perspective on multiagent architectures. &lt;br /&gt;
In Intelligent Agents VIII: Agent Theories, Architectures, and Languages,&lt;br /&gt;
LNAI 2333, Springer (2002) 128–140&lt;br /&gt;
&lt;br /&gt;
[Lin01]&lt;br /&gt;
Jürgen Lind&lt;br /&gt;
Iterative Software Engineering for Multiagent Systems - The MASSIVE Method. &lt;br /&gt;
LNCS 1994, Springer, 2001&lt;br /&gt;
&lt;br /&gt;
[Pad02]&lt;br /&gt;
Lin Padgham and Michael Winikoff, &lt;br /&gt;
Prometheus: A Methodology for Developing Intelligent Agents, &lt;br /&gt;
Proceedings of the First International Joint Conference &lt;br /&gt;
on Autonomous Agents and Multi-Agent Systems (AAMAS 2002), &lt;br /&gt;
ACM Press (2002) 37-38&lt;br /&gt;
&lt;br /&gt;
[Woo00]&lt;br /&gt;
Michael Wooldridge, Nicholas R. Jennings, David Kinny,&lt;br /&gt;
The Gaia methodology for agent-oriented analysis and design. &lt;br /&gt;
Journal of Autonomous Agents and Multi-Agent Systems, Vol. 3 No. 3 (2000) 285–312&lt;br /&gt;
&lt;br /&gt;
[Zam03]&lt;br /&gt;
Franco Zambonelli, Nicholas R. Jennings, Michael Wooldridge,&lt;br /&gt;
Developing multiagent systems: the Gaia Methodology. &lt;br /&gt;
ACM Transactions on Software Engineering and Methodology 12(3) (2003) 317-370&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Scaling_Laws</id>
		<title>Scaling Laws</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Scaling_Laws"/>
				<updated>2011-02-11T21:46:02Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Reverted edits by Eboxytezi (Talk) to last version by Jfromm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Geoffrey B. West and James H. Brown discovered that allometric &lt;br /&gt;
'''scaling laws''', including the 3/4 power law for metabolic rates, &lt;br /&gt;
are characteristic of all organisms.&lt;br /&gt;
One factor which increases biological diversity is body size,&lt;br /&gt;
which varies over 21 orders of magnitude in biological organisms.&lt;br /&gt;
Yet some universal rates and regular relationships stay constant.&lt;br /&gt;
The metabolic rate of biological life-forms scales approximately as &lt;br /&gt;
the 3/4-power of mass from complex molecules up to the largest &lt;br /&gt;
multicellular organisms.&lt;br /&gt;
&lt;br /&gt;
Similarly, time-scales (such as lifespans and growth-rates) and &lt;br /&gt;
sizes (such as genome lengths, RNA densities, and tree heights) &lt;br /&gt;
scale as power laws with exponents which are typically simple &lt;br /&gt;
multiples of 1/4.&lt;br /&gt;
West and Brown have shown that these 1/4 power scaling laws&lt;br /&gt;
follow from underlying principles embedded in the dynamical &lt;br /&gt;
and geometrical structure of space-filling, fractal-like, &lt;br /&gt;
hierarchical branching networks.&lt;br /&gt;
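The quarter-power relationship can be written as B = b0 * M^(3/4); the normalization constant in this sketch is illustrative, not a value measured by West and Brown:

```python
def metabolic_rate(mass, b0=1.0):
    """Quarter-power allometric scaling B = b0 * M**(3/4).
    b0 is an illustrative normalization constant, not a measured value."""
    return b0 * mass ** 0.75

# Doubling body mass scales metabolic rate by 2**(3/4), about 1.68, not by 2:
# per unit mass, larger organisms run on less energy.
ratio = metabolic_rate(2.0) / metabolic_rate(1.0)
print(round(ratio, 2))  # 1.68
```

The same sub-linear pattern is why a mouse has a far higher mass-specific metabolic rate than an elephant.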
&lt;br /&gt;
These scaling laws can be applied to artificial structures as&lt;br /&gt;
well. Like biological systems, counties and cities have evolved &lt;br /&gt;
branching networks that transport a variety of resources.&lt;br /&gt;
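The quarter-power relationship can be made concrete with a short numerical sketch (the normalization constant and mass values below are illustrative, not empirical data):

```python
# Illustrative sketch of the 3/4-power law for metabolic rate: B = b0 * M**(3/4).
# The constant b0 is arbitrary here; only ratios between organisms matter.

def metabolic_rate(mass_kg, b0=1.0):
    """Metabolic rate predicted by the 3/4-power scaling law."""
    return b0 * mass_kg ** 0.75

# A 10,000-fold (four orders of magnitude) increase in mass predicts only
# a 1,000-fold (three orders of magnitude) increase in metabolic rate,
# since (10**4)**(3/4) equals 10**3.
ratio = metabolic_rate(10_000.0) / metabolic_rate(1.0)
print(round(ratio, 6))  # -> 1000.0
```

Note how four orders of magnitude in mass yield only three in metabolic rate; that compression is one way to read the 3/4 exponent.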
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* Wikipedia Entry for [http://en.wikipedia.org/wiki/Power_law Power Law]&lt;br /&gt;
&lt;br /&gt;
== Papers ==&lt;br /&gt;
&lt;br /&gt;
* Geoffrey B. West, James H. Brown, and Brian J. Enquist [http://biology.unm.edu/jhbrown/Documents/Publications/WestBrown&amp;amp;Enquist1997S.pdf A General Model for the Origin of Allometric Scaling Laws in Biology], Science Vol 276 April (1997) 122-126&lt;br /&gt;
&lt;br /&gt;
* Geoffrey B. West and James H. Brown, [http://biology.unm.edu/jhbrown/Documents/Publications/West&amp;amp;Brown2004PT.pdf Life's Universal Scaling Laws], Physics Today, September (2004) 36-42&lt;br /&gt;
&lt;br /&gt;
[[Category:Basic Principles]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Flexibility</id>
		<title>Flexibility</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Flexibility"/>
				<updated>2011-02-11T21:45:57Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Reverted edits by Eboxytezi (Talk) to last version by Jfromm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Flexibility''' is defined as the ability to react to changes in the environment,&lt;br /&gt;
for example to expand or contract in response to external requirements.&lt;br /&gt;
It is the opposite of stiffness and brittleness, and related to [[Scalability|scalability]], [[Adaptation|adaptation]] and [[Robustness|robustness]].&lt;br /&gt;
&lt;br /&gt;
[[Category:Applied Principles]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Organic_Computing</id>
		<title>Organic Computing</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Organic_Computing"/>
				<updated>2011-02-11T21:45:54Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Reverted edits by Eboxytezi (Talk) to last version by Jfromm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Organic Computing''' is a German initiative launched by the two&lt;br /&gt;
German national computer societies GI (German Informatics Society) and &lt;br /&gt;
ITG (Informationstechnische Gesellschaft), and a group of researchers &lt;br /&gt;
from three German universities (Universität Hannover, Universität Karlsruhe &lt;br /&gt;
and Universität Augsburg). It defines an Organic Computing &lt;br /&gt;
system as &amp;quot;a technical system which adapts dynamically to the current conditions of its&lt;br /&gt;
environment. It is self-organizing, self-optimizing, self-configuring, self-healing, &lt;br /&gt;
self-protecting, self-describing, self-explaining and context-aware&amp;quot;.&lt;br /&gt;
Thus it is very similar to IBM's [[Autonomic Computing|autonomic computing]] initiative,&lt;br /&gt;
which also is based on systems with [[Self-* Properties]].&lt;br /&gt;
Like Autonomic Computing, it is a vision for future information &lt;br /&gt;
processing systems.&lt;br /&gt;
&lt;br /&gt;
While Autonomic Computing focuses mainly on servers, computing centers&lt;br /&gt;
and large-scale Internet software,&lt;br /&gt;
Organic Computing aims at the development of robust, flexible&lt;br /&gt;
and highly adaptive embedded systems, [[Ubiquitous Computing]] and&lt;br /&gt;
Hardware (Embedded Processors, Microcontroller, etc.).&lt;br /&gt;
&lt;br /&gt;
== History - Organic IT vision ==&lt;br /&gt;
&lt;br /&gt;
The idea of organic computing and [[Autonomic Computing|autonomic computing]] goes &lt;br /&gt;
back to research reports from Forrester. A report from April 2002 defines &lt;br /&gt;
''Organic IT'' as a &amp;quot;Computing infrastructure &lt;br /&gt;
built on cheap, redundant components that automatically shares and manages &lt;br /&gt;
enterprise computing resources -- software, processors, storage, and &lt;br /&gt;
networks -- across all applications within a datacenter&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
The vision of ''Organic IT'' is based on a next-generation data center &lt;br /&gt;
architecture, and was the precursor for the&lt;br /&gt;
organic computing and [[Autonomic Computing|autonomic computing]] initiatives.&lt;br /&gt;
Forrester proposed, for example, the &amp;quot;principle of least escalation&amp;quot;, &lt;br /&gt;
meaning that in a layered organic architecture with many management &lt;br /&gt;
layers the lowest capable level solves urgent problems immediately and &lt;br /&gt;
informs the higher levels later (similar to biological reflexes).&lt;br /&gt;
&lt;br /&gt;
Like many of the later initiatives and visions, ''Organic IT'' envisioned&lt;br /&gt;
an IT architecture which solves the basic problems in many&lt;br /&gt;
[[Distributed System|distributed systems]], namely&lt;br /&gt;
[[Fault Tolerance|fault tolerance]], [[Failover|failover]], &lt;br /&gt;
[[Flexibility|flexibility]], [[Robustness|robustness]], [[Scalability|scalability]]&lt;br /&gt;
with the help of [[Self-Star Properties|self-* properties]] and [[redundancy]].&lt;br /&gt;
[[Scalability|Scalability]] for instance means that applications can scale up &lt;br /&gt;
and down to match demand; [[Robustness|robustness]] and [[Fault Tolerance|fault tolerance]] &lt;br /&gt;
(including failover and recovery) mean an application can run non-stop,&lt;br /&gt;
24 hours per day and seven days a week; and&lt;br /&gt;
[[Flexibility|flexibility]] means automated server provisioning, application&lt;br /&gt;
installation and maintenance.&lt;br /&gt;
&lt;br /&gt;
== Objectives ==&lt;br /&gt;
&lt;br /&gt;
The objective of Organic Computing is the technical usage of principles&lt;br /&gt;
observed in natural systems, to create biologically inspired life-like &lt;br /&gt;
computer systems. As computers and the tasks they perform become increasingly &lt;br /&gt;
complex, researchers are looking to nature -- as model and as &lt;br /&gt;
metaphor -- for inspiration.&lt;br /&gt;
&lt;br /&gt;
There are many examples of how biology has already influenced &lt;br /&gt;
computer science. Many attempts exist, for example, to develop artificial neural networks,&lt;br /&gt;
evolvable hardware, evolutionary algorithms, nanoscale self-assembly, and security systems &lt;br /&gt;
that mimic nature's immune systems (see the book &amp;quot;Imitation of Life&amp;quot; by Nancy Forbes).&lt;br /&gt;
&lt;br /&gt;
The overall objectives of organic computing are the same goals &lt;br /&gt;
as in biologically inspired computing: &lt;br /&gt;
* the use of biology as a metaphor or inspiration for the development of algorithms and systems; &lt;br /&gt;
* the construction of information processing systems that use biological materials or are modeled on biological processes, or both; &lt;br /&gt;
* the effort to understand how biological organisms &amp;quot;compute,&amp;quot; or process information.&lt;br /&gt;
&lt;br /&gt;
== Biologically Inspired Computing ==&lt;br /&gt;
&lt;br /&gt;
Another name for Organic Computing is '''biologically inspired computing'''&lt;br /&gt;
or, for short, bio-inspired computing. Organic means having properties &lt;br /&gt;
associated with living organisms: &lt;br /&gt;
its original meaning is &amp;quot;Part of or derived from living matter&amp;quot;.&lt;br /&gt;
Organic Computing is the use of the self-* principles found in organic, &lt;br /&gt;
living and evolving systems (self-management, self-organization and &lt;br /&gt;
self-healing) to achieve scalability, robustness and autonomy.&lt;br /&gt;
Organic systems grow, change, evolve, suffer illnesses and recover &lt;br /&gt;
again. They are robust and flexible. If organic computing can&lt;br /&gt;
identify and use some of the basic principles behind organic&lt;br /&gt;
systems to reach similar properties in artificial systems, then&lt;br /&gt;
this would be a success.&lt;br /&gt;
&lt;br /&gt;
Biological [[Evolution|evolution]] has managed to produce a&lt;br /&gt;
wide variety of complex organisms and lifeforms that build, &lt;br /&gt;
adapt, repair and reproduce themselves.&lt;br /&gt;
There have always been similarities, exchange and overlap between&lt;br /&gt;
the worlds of biology and computer science. The basic terminology&lt;br /&gt;
of viruses and infection in the field of computer security&lt;br /&gt;
is borrowed from biology, and modern biomedicine and molecular &lt;br /&gt;
biology would not be possible at all without computers.&lt;br /&gt;
&lt;br /&gt;
By studying biological phenomena such as brains, swarming insects, &lt;br /&gt;
evolution and immune systems, scientists try to make computers &lt;br /&gt;
do the same sorts of things and to reach the same degree of&lt;br /&gt;
flexibility and robustness. Using ideas from biology to improve&lt;br /&gt;
artificial systems can be a useful way to stimulate thought&lt;br /&gt;
and to inspire new architectures. Biologically inspired computing&lt;br /&gt;
methods and systems are:&lt;br /&gt;
&lt;br /&gt;
* Reinforcement Learning&lt;br /&gt;
* Neural Networks&lt;br /&gt;
* Evolutionary Computing&lt;br /&gt;
* Swarm Intelligence and Collective Systems&lt;br /&gt;
* Artificial Immune Systems&lt;br /&gt;
* DNA Computing and Biological Hardware&lt;br /&gt;
* Biologically Inspired Robotics&lt;br /&gt;
&lt;br /&gt;
== Papers ==&lt;br /&gt;
&lt;br /&gt;
* [http://www.gi-ev.de/download/VDE-ITG-GI-Positionspapier%20Organic%20Computing.pdf Positionspapier Organische Computersysteme (German)]&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.virginia.edu/~evans/bio/Embryonics_ProcIEEE.pdf D. Mange et al., Toward robust integrated circuits: The embryonics approach], Proc. of the IEEE, vol. 88, no. 4 (2000) 516-541&lt;br /&gt;
&lt;br /&gt;
== Books ==&lt;br /&gt;
&lt;br /&gt;
Nancy Forbes, ''Imitation of Life : How Biology Is Inspiring Computing'', The MIT Press, 2004, ISBN 0262062410&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Biologically-inspired_computing Biologically-inspired computing]&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.unibo.it/bison/index.shtml BISON Biology-Inspired techniques for Self-Organization in dynamic Networks]&lt;br /&gt;
&lt;br /&gt;
* [http://www.organic-computing.org/ The Organic Computing Home Page]&lt;br /&gt;
&lt;br /&gt;
* [http://www.informatik.uni-augsburg.de/lehrstuehle/sik/research/organiccomputing/ Organic Computing @ University of Augsburg]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:x-Computing_Techniques]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Level_of_Organization</id>
		<title>Level of Organization</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Level_of_Organization"/>
				<updated>2011-02-11T21:45:51Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Reverted edits by Eboxytezi (Talk) to last version by Jfromm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Scientific knowledge is organized hierarchically in '''levels of organization''',	&lt;br /&gt;
where each level describes nature on a certain scope or resolution,&lt;br /&gt;
from [[Microlevel|microscopic]] to [[Macrolevel|macroscopic]] levels.&lt;br /&gt;
Although at any given level of organization the behavior emerges from &lt;br /&gt;
the behavior of the subsystems at the immediately lower level of &lt;br /&gt;
organization, it can be described in an independent way.&lt;br /&gt;
&lt;br /&gt;
: Scientific knowledge is organised in levels, not because reduction in principle is impossible, but because nature is organised in levels, and the pattern at each level is most clearly discerned by abstracting from the detail of the levels far below. [. . .] And nature is organised in levels because hierarchic structures – systems of Chinese boxes – provide the most viable form for any system of even moderate complexity. — Herbert A. Simon (&amp;quot;The Organisation of Complex Systems&amp;quot; in &amp;quot;Hierarchy Theory: The Challenge of Complex Systems&amp;quot;, 1973, p. 26)&lt;br /&gt;
&lt;br /&gt;
In ecology, the levels of organization are the five known levels of environmental classification: &lt;br /&gt;
&lt;br /&gt;
* Biosphere&lt;br /&gt;
* Ecosystem&lt;br /&gt;
* Community &lt;br /&gt;
* Population&lt;br /&gt;
* Organism&lt;br /&gt;
&lt;br /&gt;
In the first chapter of their book &amp;quot;The Superorganism&amp;quot;, Bert Hölldobler and E.O. Wilson provide an elegant description of life as a scale-free hierarchy of biological complexity:&lt;br /&gt;
&lt;br /&gt;
: &amp;quot;Life is a self-replicating hierarchy of levels. Biology is the study of the levels that compose the hierarchy. No phenomenon at any level can be wholly characterized without incorporating other phenomena that arise at all levels. Genes prescribe proteins, proteins self-assemble into cells, cells multiply and aggregate to form organs, organs arise as parts of organisms, and organisms gather sequentially into societies, populations and ecosystems. Natural selection that targets a trait at any of these levels ripples in effect across all the others.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
In anatomy and biology, the levels of organic life-forms and organisms are &lt;br /&gt;
&lt;br /&gt;
* Organism&lt;br /&gt;
* Organ System&lt;br /&gt;
* Organ&lt;br /&gt;
* Tissue&lt;br /&gt;
* Cell&lt;br /&gt;
* Proteins&lt;br /&gt;
* Genes&lt;br /&gt;
&lt;br /&gt;
In physics, the levels of organization for matter in general are:&lt;br /&gt;
&lt;br /&gt;
* Molecule (biomolecule, cells)&lt;br /&gt;
* Atom&lt;br /&gt;
* Elementary particle&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Organization]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Distributed_Algorithm</id>
		<title>Distributed Algorithm</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Distributed_Algorithm"/>
				<updated>2011-02-11T21:45:47Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Reverted edits by Eboxytezi (Talk) to last version by Jfromm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A '''distributed algorithm''' is a decentralized algorithm that is executed in a [[Distributed System|distributed system]], on more than one machine, node or processor.&lt;br /&gt;
&lt;br /&gt;
== Definition ==&lt;br /&gt;
&lt;br /&gt;
A distributed algorithm for a collection P of processes is, according to Gerard Tel and his book ''Introduction to Distributed Algorithms'', simply a collection of local algorithms, one for each process in P. In his &lt;br /&gt;
earlier work ''Topics in Distributed Algorithms'' he defined it as follows: &amp;quot;a distributed algorithm executes&lt;br /&gt;
as a collection of sequential processes, all executing their part of the algorithm independently, but &lt;br /&gt;
coordinating their activity through communication.&amp;quot; Nancy A. Lynch argues &amp;quot;Distributed algorithms are algorithms &lt;br /&gt;
designed to run on hardware consisting of many interconnected processors. The algorithms are supposed to work &lt;br /&gt;
correctly, even if the individual processors and communication channels operate at different speeds and even if&lt;br /&gt;
some of the components fail.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
In sequential algorithms steps are taken in a strict sequence and well-defined order. In distributed systems&lt;br /&gt;
steps are taken in a strict sequence only locally; globally, the sequence of steps depends on the&lt;br /&gt;
transmission of messages and can be unpredictable. The order of events is not always well-defined,&lt;br /&gt;
and failures make the situation even worse.&lt;br /&gt;
Nearly all distributed algorithms are based on more or less sophisticated communication through message passing. Since a [[Distributed System|distributed system]] consists of many nodes, processors or processes interconnected by a message passing network, any algorithm which involves more than one node must use some form of message passing, if there is no [[Distributed Shared Memory|distributed shared memory]] or other way of [[Interprocess Communication|interprocess communication]]. The complexity analysis of distributed algorithms involves therefore usually the attempt to measure the total number of messages.&lt;br /&gt;
&lt;br /&gt;
Distributed algorithms are to sequential algorithms what Einstein's physics is to Newton's physics:&lt;br /&gt;
sequential algorithms are a special, simplified case of distributed algorithms, and in&lt;br /&gt;
distributed algorithms there is no global time or common clock, and the observer and speed&lt;br /&gt;
may influence causality. There are more similarities: for instance according to F. Mattern, the Lorentz &lt;br /&gt;
transformation corresponds roughly to the Rubberband transformation which leaves causality invariant.&lt;br /&gt;
&lt;br /&gt;
== Different Forms and Types ==&lt;br /&gt;
&lt;br /&gt;
=== Asynchronous and Synchronous ===&lt;br /&gt;
&lt;br /&gt;
In general, one can distinguish between asynchronous and synchronous algorithms, as Nancy Lynch does in her book.&lt;br /&gt;
In asynchronous algorithms, the nodes and processes are not acting at the same time, no timing assumptions&lt;br /&gt;
exist and messages can have an arbitrary delay, while in synchronous algorithms the nodes are acting in lockstep&lt;br /&gt;
at the same time. Asynchronous algorithms are not synchronized; their steps do not occur at predetermined or &lt;br /&gt;
regular intervals, and messages can be delivered in any order.&lt;br /&gt;
&lt;br /&gt;
Totally synchronous algorithms are easy to handle, because all nodes act like&lt;br /&gt;
a single node at the same time, but they are not practical. They are difficult to justify &lt;br /&gt;
in real-world situations and difficult to achieve in general distributed systems, because there &lt;br /&gt;
is no absolute global time in general, unsynchronized processes operate at different speeds and &lt;br /&gt;
messages often have a considerable time delay.&lt;br /&gt;
In summary, systems with pure synchrony (perfect timing) or no faults at all would be nice,&lt;br /&gt;
but they are not realistic: nodes and links fail, and messages may have a time delay.&lt;br /&gt;
&lt;br /&gt;
Totally asynchronous algorithms are powerful in theory, because they are meant to work with &lt;br /&gt;
arbitrary time delay, but they are notoriously difficult to design as well. They are &lt;br /&gt;
often very hard or even impossible to construct, because they are too general&lt;br /&gt;
(for example a node which sends arbitrarily slow messages is indistinguishable &lt;br /&gt;
from a node that really failed).&lt;br /&gt;
Some problems that have been proved impossible or expensive in the fully asynchronous model can &lt;br /&gt;
indeed be solved in practice.&lt;br /&gt;
Elegant theoretical assumptions such as pure asynchrony &lt;br /&gt;
(no timing assumptions whatsoever) or Byzantine faults (no assumptions limiting faulty behavior)&lt;br /&gt;
are therefore not practical either; they lead to pessimistic and frustrating results that are not useful &lt;br /&gt;
for complex real-world systems.&lt;br /&gt;
&lt;br /&gt;
Real systems are complex; they tend to fall somewhere in between the two extreme classes &lt;br /&gt;
of full synchrony and full asynchrony, for example asynchronous systems with&lt;br /&gt;
finite average response times or upper bounds for message delivery times.&lt;br /&gt;
&lt;br /&gt;
=== Topologies and Graphs ===&lt;br /&gt;
&lt;br /&gt;
Besides asynchronous and synchronous forms, one can differentiate further between algorithms for particular topologies &lt;br /&gt;
(rings, trees, etc.). There are also a number of [[Distributed Graph Algorithm]]s, which is not surprising, because nearly all [[Distributed System|distributed systems]] based on message passing can be described by a graph (except those that use only some form of shared common memory). &lt;br /&gt;
&lt;br /&gt;
=== Fundamental algorithms ===&lt;br /&gt;
&lt;br /&gt;
Fundamental building blocks used in many distributed algorithms are tokens (which circulate in rings) and&lt;br /&gt;
waves (which spread through arbitrary topologies). A token which moves through a ring can&lt;br /&gt;
be considered as a wave for a ring topology. The importance of waves is not surprising, since a wave is &lt;br /&gt;
one of the most basic forms of [[Emergence|emergence]] in a system. If all nodes are visited &lt;br /&gt;
sequentially, or an action like &amp;quot;inform all&amp;quot; or &amp;quot;query all&amp;quot; is required, a kind of wave must be used. &lt;br /&gt;
Fundamental algorithms where all nodes of a network are visited are [[Total Algorithm]]s and &lt;br /&gt;
[[Heart Beat Algorithm]]s.&lt;br /&gt;
&lt;br /&gt;
A common problem besides &amp;quot;inform all&amp;quot; and &amp;quot;query all&amp;quot; is &amp;quot;select one&amp;quot;, &amp;quot;elect one&amp;quot;, &amp;quot;admit one&amp;quot;, etc.&lt;br /&gt;
The resulting algorithms are named [[Election Algorithm|election algorithms]], where a single node or &lt;br /&gt;
process (for example the leader) that is to play a distinguished role in a subsequent computation must &lt;br /&gt;
be (s)elected. Further typical algorithms in this area are distributed [[mutual exclusion]] and &lt;br /&gt;
[[deadlock detection]] algorithms, where the concurrent use of un-shareable resources must be &lt;br /&gt;
avoided.&lt;br /&gt;
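A ring election of this kind can be sketched as a toy simulation. The following is a simplified max-propagation election in the spirit of algorithms such as Chang-Roberts, under the strong (unrealistic) assumption that messages are delivered instantly, in order and without loss; a real asynchronous system guarantees none of this:

```python
# Toy ring election: each node forwards the largest id it has seen to its
# successor; the node whose own id comes back knows it holds the maximum
# and can declare itself leader.

def elect_leader(ids):
    """Return the elected leader id for a ring of nodes with unique ids."""
    n = len(ids)
    tokens = list(ids)  # tokens[i] is the id currently held at position i
    for _ in range(n):  # after n forwarding rounds the maximum id has circled
        tokens = [max(ids[i], tokens[i - 1]) for i in range(n)]
    return tokens[0]

print(elect_leader([3, 7, 2, 9, 4]))  # -> 9
```

The simulation deliberately hides the hard part: in a real ring, termination (knowing the election is over) must itself be detected by the winning node receiving its own id.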
&lt;br /&gt;
Finally, there is the problem of determining and changing the &amp;quot;global state&amp;quot; or the &amp;quot;global order&amp;quot;; the associated&lt;br /&gt;
distributed algorithms are named [[Termination Detection|termination detection]], where the end of a &lt;br /&gt;
distributed computation has to be detected, and distributed [[garbage collection]], where unused memory and &lt;br /&gt;
references must be released. A basic &amp;quot;toy&amp;quot; algorithm used to explain distributed algorithms &lt;br /&gt;
in classes is the [[Distributed GCD Algorithm|distributed GCD algorithm]].&lt;br /&gt;
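A toy simulation of the distributed GCD idea might look as follows. This is only a sketch assuming a reliable ring network with synchronous rounds; a real distributed version must also detect when the computation has terminated, which is exactly the termination-detection problem discussed in the next section:

```python
# Toy simulation of a distributed GCD computation on a ring: each node
# holds a number and repeatedly replaces it by the gcd of its own value
# and its neighbour's value until the whole network stabilizes on the
# global gcd of all initial values.

from math import gcd

def distributed_gcd(values):
    """Simulate synchronous rounds until all node values stop changing."""
    n = len(values)
    changed = True
    while changed:
        changed = False
        new = [gcd(values[i], values[(i + 1) % n]) for i in range(n)]
        if new != values:
            values, changed = new, True
    return values[0]

print(distributed_gcd([36, 60, 24]))  # -> 12
```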
&lt;br /&gt;
== Problems and Difficulties ==&lt;br /&gt;
&lt;br /&gt;
=== Uncertainties and Failures ===&lt;br /&gt;
&lt;br /&gt;
Distributed algorithms, like [[Distributed System|distributed systems]] themselves,&lt;br /&gt;
are hard to understand and hard to design because of their high complexity.&lt;br /&gt;
Algorithms are the step-by-step definitions of computations,&lt;br /&gt;
detailed instructions, rules and recipes for producing a solution &lt;br /&gt;
to a given problem in a finite number of steps.&lt;br /&gt;
The problem with many real [[Distributed System|distributed systems]] is &lt;br /&gt;
that every node and every link can fail at any time, messages&lt;br /&gt;
can get lost or arrive with an arbitrary time delay.&lt;br /&gt;
One cannot say for sure what will happen in the next step. &lt;br /&gt;
One can specify the behavior for each node, but the overall global&lt;br /&gt;
behavior which results from the local interactions is often&lt;br /&gt;
hard to predict.&lt;br /&gt;
&lt;br /&gt;
The accidental or intended [[Emergence|emergence]] of a &lt;br /&gt;
desirable behavior is more the exception than the rule.&lt;br /&gt;
Analysis, design, verification and correctness proofs of &lt;br /&gt;
distributed algorithms are difficult issues. Among the different &lt;br /&gt;
types of uncertainties and difficulties are for example &lt;br /&gt;
(according to Nancy Lynch, 1996):&lt;br /&gt;
&lt;br /&gt;
# processor and link failures (node or message loss)&lt;br /&gt;
# uncertain message delivery (arbitrary transmission time)&lt;br /&gt;
# unknown message ordering&lt;br /&gt;
# unknown network topologies&lt;br /&gt;
# unknown number of processors&lt;br /&gt;
&lt;br /&gt;
Some problems which are characteristic of and unique to distributed algorithms are&lt;br /&gt;
&lt;br /&gt;
# [[Race Condition|race conditions]] (where the result depends on the timing of events)&lt;br /&gt;
# deadlock detection, esp. phantom- or pseudo-deadlocks &lt;br /&gt;
# termination detection&lt;br /&gt;
&lt;br /&gt;
Detecting whether a centralized, serial or non-distributed algorithm &lt;br /&gt;
has terminated is trivial, since there is only one&lt;br /&gt;
processor, one clock and one well-defined state or time.&lt;br /&gt;
Detecting whether a distributed algorithm has terminated or &lt;br /&gt;
not is a problem of its own, since there is no global state or time in &lt;br /&gt;
a general [[Distributed System|distributed system]]. Another&lt;br /&gt;
problem which occurs in distributed algorithms but not in serial&lt;br /&gt;
ones is deadlock: mutual blocking of processes, where each process&lt;br /&gt;
is waiting for a resource one of the other processes holds.&lt;br /&gt;
Obviously it does not occur in serial algorithms for a single processor,&lt;br /&gt;
and was first encountered in implementing operating systems.&lt;br /&gt;
&lt;br /&gt;
=== Problem Fields ===&lt;br /&gt;
&lt;br /&gt;
Termination detection is difficult, because a distributed system has no global state which can be detected instantly, and there is no global time which is valid for all computers, nodes or entities. In order to define a global state, some authors have proposed algorithms for consistent [[Global Snapshots|global snapshots]], for example the Chandy-Lamport algorithm. A snapshot is &amp;quot;consistent&amp;quot; if it appears as if it were taken at the same instant everywhere in the system, without any violation of causality. In order to define a global time, some authors have proposed methods for consistent global time (logical clocks by Lamport, which find their extension in vector clocks and vector time, etc.). The concept of &amp;quot;logical time&amp;quot; or timestamps introduced by Lamport allows an asynchronous system to simulate one in which the nodes have access to synchronized clocks. Both real and logical time are monotonically increasing, but real time is uniformly continuous, whereas logical time can have discontinuous jumps. But these methods are problematic, because they attempt to make distributed computing follow the model of local, centralized computing. As Waldo noticed in 1994, this approach ignores &amp;quot;the different failure modes and basic indeterminacy inherent in distributed computing&amp;quot; and leads to systems that are neither reliable nor scalable.&lt;br /&gt;
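Lamport's logical-clock rules are simple enough to sketch. The following toy simulation (process names and the event sequence are invented for the example) applies the two rules: increment the counter on each local or send event, and on receipt set the counter to the maximum of the local and received timestamps plus one:

```python
# Minimal Lamport logical clocks: each process keeps a counter that is
# incremented on local events and merged on message receipt, so that
# causally related events always get increasing timestamps.

class Process:
    def __init__(self, name):
        self.name = name
        self.clock = 0

    def local_event(self):
        self.clock += 1
        return self.clock

    def send(self):
        self.clock += 1
        return self.clock  # timestamp carried by the message

    def receive(self, msg_time):
        self.clock = max(self.clock, msg_time) + 1
        return self.clock

p, q = Process("P"), Process("Q")
p.local_event()          # P: 1
t = p.send()             # P: 2, message timestamped 2
q.local_event()          # Q: 1
q.receive(t)             # Q: max(1, 2) + 1 = 3
print(p.clock, q.clock)  # -> 2 3
```

Note how the receive event at Q is stamped 3, later than the send it causally depends on, even though Q's own clock had only reached 1; this is the sense in which logical time preserves causality without any synchronized physical clocks.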
&lt;br /&gt;
Thus we have roughly the following problem fields:&lt;br /&gt;
&lt;br /&gt;
# Attempts to imitate local computing (Synchronous Communication or Synchronization, Logical Time)&lt;br /&gt;
# Attempts to determine global state  (Global Snapshots, Deadlock Detection, Termination)&lt;br /&gt;
# Attempts to reach unified state (Agreement or Consensus)&lt;br /&gt;
# Attempts to coordinate access (Contention Problems as Election and Mutual Exclusion)&lt;br /&gt;
&lt;br /&gt;
=== Impossibility Results ===&lt;br /&gt;
&lt;br /&gt;
Given these difficulties, it is not surprising that the analysis and design of distributed algorithms that work in a general [[Distributed System|distributed system]] (where each node and link can fail at any time, and messages can have an arbitrary time delay) is very hard and sometimes even impossible. Already one faulty process can render any guarantee about achieving consensus impossible, as the famous &amp;quot;FLP impossibility result&amp;quot; from Fischer, Lynch and Paterson says: it is impossible to guarantee consensus in a distributed, asynchronous system if even one process is faulty. To be more precise, there is no guarantee that a common consensus can be reached if a faulty process exists. This fact is intuitively plausible, since a faulty process that is no longer responding is indistinguishable from a process that answers slowly (if there is an arbitrary time delay in the connections of the asynchronous network).&lt;br /&gt;
&lt;br /&gt;
Consensus and agreement problems are a fundamental challenge in [[Distributed System|distributed systems]]. The consensus problem is one of the most thoroughly investigated problems in [[Distributed Computing|distributed computing]]: several processes have to agree on a certain value or decision. Processes in a database system may need to agree whether or not a transaction should be committed or aborted. Processes in a control or monitoring system may need to agree whether or not a particular other process is faulty. Processes in a general distributed system may need to agree whether or not a message has been received. As Nancy Lynch says (in Chapter 12, &amp;quot;Consensus&amp;quot;, of her book), &amp;quot;the impossibility result implies that there is no purely asynchronous algorithm that reaches the needed agreement and tolerates any failures at all.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== Proof and Verification ==&lt;br /&gt;
&lt;br /&gt;
The design and verification of distributed programs and algorithms is without &lt;br /&gt;
doubt a very difficult task. A common way to verify distributed algorithms&lt;br /&gt;
despite these difficulties is to verify liveness and safety properties.&lt;br /&gt;
The traditional definitions of liveness and safety are:&lt;br /&gt;
&lt;br /&gt;
* '''Liveness''' means &amp;quot;something good will eventually occur&amp;quot; or &amp;quot;something good eventually happens&amp;quot;&lt;br /&gt;
* '''Safety''' means &amp;quot;something bad will never happen&amp;quot; or &amp;quot;no bad thing ever happens&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Liveness and safety are two complementary properties: one says that&lt;br /&gt;
the system is changing, the other that the system is not changing. One&lt;br /&gt;
claims that the program never enters an unacceptable state, the other&lt;br /&gt;
that the program always enters a desirable state after a finite &lt;br /&gt;
number of steps.&lt;br /&gt;
&lt;br /&gt;
=== Liveness ===&lt;br /&gt;
&lt;br /&gt;
Liveness implies that the '''system is changing'''. There is a guarantee of progress, &lt;br /&gt;
which in turn is guaranteed by the absence of deadlocks, the absence of infinite &lt;br /&gt;
loops and the assurance of termination. It is a property stating that eventually &lt;br /&gt;
(after a finite number of steps) some requirement holds. The program eventually enters a desirable &lt;br /&gt;
state, and some assertion will eventually hold. In other words, every computation eventually &lt;br /&gt;
contains a state where a certain assertion is true. A liveness requirement states that some &lt;br /&gt;
property in some reachable configuration will eventually hold in every execution.&lt;br /&gt;
Typical liveness properties are&lt;br /&gt;
* Program termination: the algorithm will terminate in a finite amount of time&lt;br /&gt;
* Upper/Lower bounds: a numerical value or parameter must reach a certain upper (lower) bound (in this case liveness can be proved if the value is monotonically increasing or (decreasing), and never remains constant for an infinite amount of time)&lt;br /&gt;
&lt;br /&gt;
=== Safety ===&lt;br /&gt;
&lt;br /&gt;
Safety implies that the '''system does not change''': the program does nothing wrong, &lt;br /&gt;
and there is a guarantee that no bad or harmful change takes place, which in turn is often proved &lt;br /&gt;
by invariants. Invariants are assertions that always hold during the execution&lt;br /&gt;
of the algorithms and are not affected by any action or operation of the algorithm.&lt;br /&gt;
An invariant property must hold in every execution and in each reachable &lt;br /&gt;
configuration. Safety properties specify that 'something bad never happens',&lt;br /&gt;
the program never enters an unacceptable state and some assertion always holds.&lt;br /&gt;
In other words, a certain assertion is true in every state of every&lt;br /&gt;
computation of the algorithm. Typical safety properties are&lt;br /&gt;
(see Owicki and Lamport, [http://research.microsoft.com/users/lamport/pubs/liveness.pdf Proving Liveness Properties of Concurrent Programs])&lt;br /&gt;
* Partial correctness: if the algorithm begins with the precondition true, then it can never terminate with the postcondition false.&lt;br /&gt;
* Absence of deadlock: the algorithm never enters a state in which no further progress is possible.&lt;br /&gt;
* Absence of infinite loops: the algorithm never enters a state where one or more processes are involved in an infinite loop.&lt;br /&gt;
* Mutual exclusion: two different processes are never in their critical sections at the same time.&lt;br /&gt;
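&lt;br /&gt;
The mutual exclusion property can be sketched as an invariant that is checked in every reachable state of a toy model (class and method names are invented for this illustration):&lt;br /&gt;

```python
# Toy model of mutual exclusion: the safety invariant is that at most
# one process is ever inside the critical section.

class ToyMutex:
    def __init__(self):
        self.in_cs = set()          # processes currently in the critical section

    def enter(self, pid):
        if len(self.in_cs) == 0:    # section is free: enter
            self.in_cs.add(pid)
            return True
        return False                # occupied: the caller must wait

    def leave(self, pid):
        self.in_cs.discard(pid)

    def invariant_holds(self):
        # the safety property, true in every reachable state
        return len(self.in_cs) in (0, 1)

m = ToyMutex()
assert m.enter("p1")
assert not m.enter("p2")   # p2 is refused while p1 holds the section
assert m.invariant_holds()
m.leave("p1")
assert m.enter("p2")
```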
&lt;br /&gt;
== Articles and Papers ==&lt;br /&gt;
&lt;br /&gt;
Many classic publications from Leslie Lamport can be found on his [http://research.microsoft.com/users/lamport/pubs/pubs.html website] at Microsoft Research. Papers from Edsger W. Dijkstra can be found in the [http://www.cs.utexas.edu/users/EWD/ Dijkstra archives].&lt;br /&gt;
&lt;br /&gt;
General:&lt;br /&gt;
&lt;br /&gt;
[http://www.sunlabs.com/techrep/1994/abstract-29.html A Note on Distributed Computing], Jim Waldo et al., Sun Technical Report (1994) TR-94-29&lt;br /&gt;
&lt;br /&gt;
Impossibility Theorems and Results:&lt;br /&gt;
&lt;br /&gt;
[http://portal.acm.org/citation.cfm?id=214121 Impossibility of distributed consensus with one faulty process] M. Fischer, N. Lynch, and M. Paterson, Journal of the ACM, Vol. 32 No. 2 April (1985) 374-382&lt;br /&gt;
&lt;br /&gt;
[http://doi.ieeecomputersociety.org/10.1109/MC.1992.10061 The Many Faces of Consensus in Distributed Systems] John Turek and Dennis Shasha, IEEE Computer Vol. 25 No. 6 (1992) 8-17&lt;br /&gt;
&lt;br /&gt;
[http://citeseer.ist.psu.edu/572373.html A Hundred Impossibility Proofs for Distributed Computing] Nancy Lynch &lt;br /&gt;
&lt;br /&gt;
[http://citeseer.ist.psu.edu/623509.html Hundreds of Impossibility Results for Distributed Computing]&lt;br /&gt;
Faith Fich and Eric Ruppert&lt;br /&gt;
&lt;br /&gt;
== Books ==&lt;br /&gt;
&lt;br /&gt;
* Nancy A. Lynch, ''Distributed Algorithms'', Morgan Kaufmann 1996, ISBN 1558603484&lt;br /&gt;
&lt;br /&gt;
* Gerard Tel, ''Introduction to Distributed Algorithms'', Cambridge University Press, 2nd edition, 2001, ISBN 0521794838&lt;br /&gt;
&lt;br /&gt;
* Gerard Tel, ''Topics in Distributed Algorithms'', Cambridge University Press, 1991, ISBN 0521403766&lt;br /&gt;
&lt;br /&gt;
* Valmir C. Barbosa, ''An Introduction to Distributed Algorithms'', The MIT Press, 1996, ISBN 0262024128&lt;br /&gt;
&lt;br /&gt;
* Michel Raynal, ''Distributed algorithms and protocols'', John Wiley and Sons, 1988, ISBN 0471917540&lt;br /&gt;
&lt;br /&gt;
* Hagit Attiya and Jennifer Welch, ''Distributed Computing'', John Wiley and Sons, Inc., 2nd edition, 2004, ISBN 0471453242&lt;br /&gt;
&lt;br /&gt;
* Friedemann Mattern, ''Verteilte Basisalgorithmen'', Springer, 1989, ISBN 3540518355&lt;br /&gt;
&lt;br /&gt;
== Lectures == &lt;br /&gt;
&lt;br /&gt;
Lecture in German by Hans P. Reiser and Rüdiger Kapitza, University of Erlangen-Nuremberg&lt;br /&gt;
&lt;br /&gt;
2003&lt;br /&gt;
[http://www4.informatik.uni-erlangen.de/Lehre/WS03/V_VA/Skript/]&lt;br /&gt;
&lt;br /&gt;
Lectures in German by Prof. Dr. Friedemann Mattern, ETH Zurich&lt;br /&gt;
&lt;br /&gt;
1999/2000&lt;br /&gt;
[http://www.vs.inf.ethz.ch/edu/WS9900/VA/]&lt;br /&gt;
&lt;br /&gt;
2001/2002&lt;br /&gt;
[http://www.vs.inf.ethz.ch/edu/WS0102/VA/]&lt;br /&gt;
&lt;br /&gt;
2002/2003&lt;br /&gt;
[http://www.vs.inf.ethz.ch/edu/WS0203/VA/]&lt;br /&gt;
&lt;br /&gt;
2003/2004&lt;br /&gt;
[http://www.vs.inf.ethz.ch/edu/WS0304/VA/]&lt;br /&gt;
&lt;br /&gt;
2004/2005&lt;br /&gt;
[http://www.vs.inf.ethz.ch/edu/WS0405/VA/]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Autocatalysis</id>
		<title>Autocatalysis</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Autocatalysis"/>
				<updated>2011-02-11T21:45:45Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Reverted edits by Eboxytezi (Talk) to last version by Jfromm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Autocatalysis occurs in a (chemical) reaction if the reaction product itself catalyzes further reactions. It is a form of catalysis in which one of the products of a reaction serves as a &lt;br /&gt;
catalyst for the reaction. In chemistry and biology, &lt;br /&gt;
[http://en.wikipedia.org/wiki/Catalysis catalysis] is the acceleration of &lt;br /&gt;
the reaction rate of a chemical reaction by means of a substance, called a [http://en.wikipedia.org/wiki/Catalyst catalyst]&lt;br /&gt;
or [http://en.wikipedia.org/wiki/Enzyme enzyme] (in biology). A catalyst accelerates, facilitates &lt;br /&gt;
and enhances a biochemical reaction. Nearly all biochemical processes are catalyzed by enzymes.&lt;br /&gt;
&lt;br /&gt;
Autocatalysis describes a self-enhancing and self-amplifying reaction, and&lt;br /&gt;
is a special form of positive [[Feedback|feedback]]. An example of &lt;br /&gt;
autocatalysis is the hypercycle, in which self-replicating entities or reactions catalyze each other.&lt;br /&gt;
M. Eigen and P. Schuster proposed the model of hypercycles to explain the origin of DNA through self-replicating molecules.&lt;br /&gt;
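&lt;br /&gt;
The self-amplifying character of an autocatalytic reaction A + X -&gt; 2X can be sketched with its rate law dx/dt = k * x * (a0 - x): production is slow while little X exists, accelerates as X accumulates, and saturates when the substrate A is consumed. All parameter values below are invented for this illustration:&lt;br /&gt;

```python
# Forward-Euler sketch of autocatalytic (logistic) kinetics:
# the product X catalyzes its own production from the substrate A.

def simulate_autocatalysis(x0=0.01, a0=1.0, k=5.0, dt=0.01, steps=400):
    """Integrate dx/dt = k * x * (a0 - x) and return the trajectory."""
    x = x0
    history = [x]
    for _ in range(steps):
        x += k * x * (a0 - x) * dt   # the rate grows with x itself
        history.append(x)
    return history

h = simulate_autocatalysis()
# slow start, self-amplifying growth, saturation near a0
assert h[-1] > 0.99
```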
&lt;br /&gt;
== Books ==&lt;br /&gt;
&lt;br /&gt;
Manfred Eigen and Peter Schuster, ''The Hypercycle: A Principle of Natural Self Organization'', Springer-Verlag 1979, ISBN 0387092935&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Hypercycle Wikipedia Hypercycle Entry]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Autocatalysis Wikipedia Autocatalysis Entry]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Catalysis Wikipedia Catalysis Entry]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Catalyst Wikipedia Catalyst Entry]&lt;br /&gt;
&lt;br /&gt;
[[Category:Basic Principles]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Total_Algorithm</id>
		<title>Total Algorithm</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Total_Algorithm"/>
				<updated>2011-02-11T21:45:41Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Reverted edits by Eboxytezi (Talk) to last version by Jfromm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A '''total algorithm''' is a [[Distributed_Algorithm|distributed algorithm]] where the participation of all nodes or processes in the network is required before a decision can be taken. It is sometimes also named '''wave algorithm'''. A special form of total algorithm is a '''traversal algorithm''', where all events of a wave are strictly ordered by a causality relation and in which the last event occurs in the same process as the first event, namely in the initiator node.&lt;br /&gt;
&lt;br /&gt;
Total algorithms and wave algorithms can be considered as [[Distributed_Algorithm|distributed algorithms]] initiated by a single event where the participation of all processes in the network is required before a particular final event, often a decision, takes place. In wave algorithms and related flooding techniques a node sends new information to all of its neighbors. The activity spreads like a wave or a controlled chain-reaction through the network, and all nodes are visited until the decision can take place and the algorithm is finally terminated. They can be used to spread and disseminate information in a [[Distributed System|distributed system]], to calculate the topology of a network, or to collect and gather information in a distributed system.&lt;br /&gt;
&lt;br /&gt;
Examples for a total algorithm are the&lt;br /&gt;
&lt;br /&gt;
* [[Echo Algorithm]]&lt;br /&gt;
* [[Heart Beat Algorithm]]&lt;br /&gt;
* [[Phase Algorithm]]&lt;br /&gt;
&lt;br /&gt;
In order to reach all nodes of a system, the algorithm has to broadcast the corresponding&lt;br /&gt;
information through the whole system. Often this is done in a first phase.&lt;br /&gt;
There are two possibilities to do this:&lt;br /&gt;
&lt;br /&gt;
* '''Flooding''': a node receiving data sends it immediately to all its neighbors by broadcasting&lt;br /&gt;
* '''Gossiping''': a node selects from time to time randomly one of its neighbors to send the data&lt;br /&gt;
&lt;br /&gt;
Both result in an epidemic spread of information.&lt;br /&gt;
Gossiping is of course much slower than flooding.&lt;br /&gt;
Flooding leads to an ordered wave, whereas gossiping leads &lt;br /&gt;
to a disordered epidemic spread of information.&lt;br /&gt;
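&lt;br /&gt;
The flooding variant can be sketched as follows: a node that receives the information for the first time forwards it to all of its neighbors, so the wave reaches every connected node. The four-node example graph is invented for this illustration:&lt;br /&gt;

```python
from collections import deque

# Flooding sketch on an undirected graph: first reception of the
# message triggers a broadcast to all neighbors.

def flood(graph, initiator):
    """Return the set of nodes reached and the number of messages sent."""
    informed = {initiator}
    queue = deque([initiator])
    messages = 0
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            messages += 1                 # broadcast to every neighbor
            if neighbor not in informed:  # first reception: forward on
                informed.add(neighbor)
                queue.append(neighbor)
    return informed, messages

graph = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a", "d"], "d": ["b", "c"]}
reached, msgs = flood(graph, "a")
assert reached == set(graph)   # the wave visits every node
```

A gossiping variant would instead pick one random neighbor per round, trading speed for lower per-round message load.&lt;br /&gt;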
&lt;br /&gt;
[[Category:Distributed Algorithms]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Mind</id>
		<title>Mind</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Mind"/>
				<updated>2011-02-11T21:45:34Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Reverted edits by Eboxytezi (Talk) to last version by Jfromm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The '''mind''' is related to the [[Self]] and is considered as responsible for one's thoughts and feelings.&lt;br /&gt;
It is an abstract concept which describes all of the brain's conscious and unconscious cognitive processes.&lt;br /&gt;
Although it does not exist as a single, unified substance, it emerges from the coordinated action of a group&lt;br /&gt;
of agents and acts to orchestrate them in turn. In this sense,&lt;br /&gt;
&lt;br /&gt;
: god = coordinated hallucination of people&lt;br /&gt;
: mind = coordinated hallucination of neurons&lt;br /&gt;
&lt;br /&gt;
There are many theories of the mind and its function;&lt;br /&gt;
the [[Society_of_Mind|society of mind]] approach tries to&lt;br /&gt;
explain the mind as a society of agents (or CAS).&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* Wikipedia entry for [http://en.wikipedia.org/wiki/Mind Mind]&lt;br /&gt;
&lt;br /&gt;
[[Category:Consciousness]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Cloud_Computing</id>
		<title>Cloud Computing</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Cloud_Computing"/>
				<updated>2011-02-11T21:45:31Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Reverted edits by Eboxytezi (Talk) to last version by Jfromm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Cloud Computing''' is a style of computing where IT-related capabilities are provided &amp;quot;as a service&amp;quot;, allowing users to access technology-enabled services &amp;quot;in the cloud&amp;quot; without knowledge of, expertise with, or control over the technology infrastructure that supports them. The cloud is a metaphor for the Internet.&lt;br /&gt;
&lt;br /&gt;
== Relations == &lt;br /&gt;
&lt;br /&gt;
The idea of Web-based applications is old; Sun proposed &amp;quot;the Network is the Computer&amp;quot; many years before cloud computing became popular. The idea of computing as a commodity or utility can also be found in [[Grid Computing|grid computing]] visions. Cloud computing ultimately became possible through large companies like Amazon, Google, Sun, IBM, and Microsoft, who offer a part of their huge computational capacity and extremely large-scale infrastructures (including storage, servers, etc.) as a service. Only such large companies can deliver reliable services through their large data centers. Like any next big trend and buzzword, it sounds nebulous, means different things to different people, and is based on many different previous trends.&lt;br /&gt;
&lt;br /&gt;
Cloud computing combines ideas of [[Distributed Computing|distributed computing]] and [[Grid Computing|grid computing]] (&amp;quot;a 'super and virtual computer' is composed of a cluster of networked, loosely-coupled computers, acting in concert to perform very large tasks&amp;quot;), utility computing (the &amp;quot;packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility such as electricity&amp;quot;) and [[Autonomic Computing|autonomic computing]] (&amp;quot;computer systems capable of self-management&amp;quot;). Indeed many cloud computing deployments are today powered by grids, have autonomic characteristics and are billed like utilities, but cloud computing can be seen as a natural next step from the grid-utility model.&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* Wikipedia entry for [http://en.wikipedia.org/wiki/Cloud_computing Cloud Computing]&lt;br /&gt;
&lt;br /&gt;
* InfoWorld article [http://www.infoworld.com/article/08/04/07/15FE-cloud-computing-reality_1.html What cloud computing really means]&lt;br /&gt;
&lt;br /&gt;
* Technology Review article [http://www.technologyreview.com/Infotech/19397/?a=f Computer in the Cloud]&lt;br /&gt;
&lt;br /&gt;
* NYTimes.com article about [http://www.nytimes.com/2008/05/25/technology/25proto.html Cloud Computing]&lt;br /&gt;
&lt;br /&gt;
* HPCC 2008 keynote [http://arxiv.org/abs/0808.3558 Market-Oriented Cloud Computing] - Vision, Hype, and Reality for Delivering IT Services as Computing Utilities&lt;br /&gt;
&lt;br /&gt;
[[Category:X-Computing_Techniques]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Category:Systems</id>
		<title>Category:Systems</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Category:Systems"/>
				<updated>2011-02-11T21:45:28Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Reverted edits by Eboxytezi (Talk) to last version by Jfromm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The category page contains links to various systems&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Redundancy</id>
		<title>Redundancy</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Redundancy"/>
				<updated>2011-02-11T21:45:25Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Reverted edits by Eboxytezi (Talk) to last version by Jfromm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Redundancy''' is related to duplication and [[Replication|replication]] and refers in general to the quality or state of being redundant, i.e. there are more components than needed to perform a function. This can have a negative connotation (superfluous), but also a positive one (a duplicate that prevents failure of the entire system). It is a common way to achieve high levels of [[Fault Tolerance|fault tolerance]] in a computer system. If one subsystem, element or component fails, there is a redundant &lt;br /&gt;
replacement ready to begin operating.&lt;br /&gt;
&lt;br /&gt;
== TMR and NMR ==&lt;br /&gt;
&lt;br /&gt;
In engineering, redundancy means simply the duplication of critical components of a system with the intention of increasing reliability. In safety-critical systems, such as fly-by-wire aircraft, some parts of the control system may be triplicated. An error in one component may then be out-voted by the other two. In a triply redundant system, the system has three subcomponents, all three of which must fail before the system fails. Since each one rarely fails, and the subcomponents are expected to fail independently, the probability of all three failing is extremely small.&lt;br /&gt;
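&lt;br /&gt;
The claim can be checked with elementary probability: assuming independent module failures and a perfect voter, the system works when at least two of the three modules work. The reliability value 0.999 below is an invented example:&lt;br /&gt;

```python
# TMR reliability under independent failures and perfect voting:
# the system survives if all three or exactly two modules work.

def tmr_reliability(r):
    """R_sys = r^3 + 3 * r^2 * (1 - r)."""
    return r**3 + 3 * r**2 * (1 - r)

r = 0.999                       # single-module reliability (example value)
print(1 - r)                    # module failure probability
print(1 - tmr_reliability(r))   # system failure probability, about 3e-06
```

Note that this advantage holds only while module reliability is high; if it drops below 0.5 over a very long mission, the voted system is actually less reliable than a single module.&lt;br /&gt;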
&lt;br /&gt;
This common form of redundancy is also known as '''triple modular redundancy''' (TMR):&lt;br /&gt;
the threefold replication of a component to compensate and correct&lt;br /&gt;
the failure of a single component. The primary flight computer of the Boeing 777 &lt;br /&gt;
for example uses triple modular redundancy [1]. Triplex or triple redundancy was originally &lt;br /&gt;
envisaged by John von Neumann. Having three units or components&lt;br /&gt;
offers an essential advantage, because three redundant components are&lt;br /&gt;
enough to reach an unambiguous decision by majority voting,&lt;br /&gt;
as long as at least two components still work correctly.&lt;br /&gt;
Systems with dual redundancy often have difficulty reaching&lt;br /&gt;
agreement and correcting each other, because the two&lt;br /&gt;
redundant components can end up in a continuous loop of chatter about &lt;br /&gt;
which one is the more correct (as in some marriages).&lt;br /&gt;
'''N modular redundancy''' (NMR) is a generalization of TMR. A system&lt;br /&gt;
of N = 2n + 1 redundant elements can mask or tolerate n faulty elements,&lt;br /&gt;
if the elements act as voters and make a majority decision.&lt;br /&gt;
The basic structure of TMR and NMR is very simple: in TMR (NMR) &lt;br /&gt;
you have three (n) units with a voter.&lt;br /&gt;
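&lt;br /&gt;
A minimal sketch of such a voter, assuming replicas deliver comparable outputs and a strict majority is required (function name and values are invented for this illustration):&lt;br /&gt;

```python
from collections import Counter

# Majority voter for TMR/NMR: with N = 2n + 1 replicas, up to n
# faulty outputs are masked by the majority value.

def majority_vote(outputs):
    """Return the output produced by a strict majority of replicas."""
    value, count = Counter(outputs).most_common(1)[0]
    if 2 * count > len(outputs):
        return value
    raise ValueError("no majority: too many faulty replicas")

assert majority_vote([42, 42, 41]) == 42      # TMR masks one fault
assert majority_vote([7, 7, 7, 0, 3]) == 7    # N = 5 masks two faults
```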
&lt;br /&gt;
In the Space Shuttle computer control system [2], four redundant computers are used to achieve &lt;br /&gt;
reliability during flight-critical phases of a mission. Flight-critical or [[Mission-critical|mission-critical]] &lt;br /&gt;
phases are at the beginning (launch/ascent) and the end (landing/entry), periods where a loss of the &lt;br /&gt;
system might mean loss of the vehicle. Before launch and during the on-orbit&lt;br /&gt;
phase, the degree of active [[Replication|replication]] is reduced and different computers are &lt;br /&gt;
running different applications. The four redundant computers during flight-critical phases&lt;br /&gt;
are synchronized at the applications level and provide bit-for-bit identical output. The system is designed&lt;br /&gt;
to cope with two successive failures. If a computer becomes defective, it is overruled&lt;br /&gt;
by the other three and further ignored. If another computer fails, it is overruled by&lt;br /&gt;
the remaining two. A fifth computer which was independently programmed can perform&lt;br /&gt;
critical functions if all four computers fail. It can only be engaged manually by&lt;br /&gt;
crew action.&lt;br /&gt;
&lt;br /&gt;
== Hybrid Redundancy ==&lt;br /&gt;
&lt;br /&gt;
'''Hybrid redundancy''' offers the highest reliability.&lt;br /&gt;
It is a combination of NMR (for error masking) &lt;br /&gt;
and spare switching (for fault prevention and rejuvenation).&lt;br /&gt;
An NMR system masks permanent and intermittent&lt;br /&gt;
failures but its reliability drops below that &lt;br /&gt;
of a single module for very long operation or mission &lt;br /&gt;
times. Hybrid redundancy overcomes this by adding spare&lt;br /&gt;
modules to renew the system by replacing active &lt;br /&gt;
modules. A hybrid NMR system with spares consists of &lt;br /&gt;
a core of N processors (NMR), and M spares.&lt;br /&gt;
&lt;br /&gt;
There are other forms of hybrid redundancy, for&lt;br /&gt;
example self-purging redundancy: all units actively &lt;br /&gt;
participate in an NMR system, and each module &lt;br /&gt;
has the capability to remove itself from the &lt;br /&gt;
system if it is faulty.&lt;br /&gt;
&lt;br /&gt;
== Related Concepts ==&lt;br /&gt;
&lt;br /&gt;
Contrary to traditional software programs or machines, [[Self-Organization|self-organizing systems]] and [[Multi-Agent System|agent-based systems]] often have a high redundancy. This can be observed for example in nature, where the death of a single ant, termite, or honey bee does not affect the existence of the whole colony. Likewise the failure of a single neuron does not affect the function of the whole brain, although this does not mean that the principle of [[Self-Organization|self-organization]] explains how the brain works or that a brain functions in the same way as an ant colony. In contrast, traditional software programs have a very low redundancy:&lt;br /&gt;
&lt;br /&gt;
 * a program does not work if arbitrary code lines are removed&lt;br /&gt;
 * a self-organizing network/system usually still works if arbitrary nodes/agents are removed&lt;br /&gt;
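&lt;br /&gt;
The contrast can be sketched by removing a node from a toy network with redundant paths and checking that the remaining nodes still reach each other. The ring-with-chords topology is an invented example:&lt;br /&gt;

```python
import random

# Robustness sketch: a network with redundant paths stays connected
# when an arbitrary node is removed.

def reachable(graph, start):
    """Nodes reachable from start via depth-first search."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph[node])
    return seen

def remove_node(graph, victim):
    """Return a copy of the graph without the victim node."""
    return {n: [m for m in nbrs if m != victim]
            for n, nbrs in graph.items() if n != victim}

# ring of 6 nodes plus chords: redundant paths between all nodes
ring = {i: [(i - 1) % 6, (i + 1) % 6, (i + 3) % 6] for i in range(6)}
victim = random.choice(range(6))
rest = remove_node(ring, victim)
survivors = reachable(rest, next(iter(rest)))
assert len(survivors) == 5   # the remaining network is still connected
```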
&lt;br /&gt;
== Literature ==&lt;br /&gt;
&lt;br /&gt;
[1] Y.C. (Bob) Yeh, ''Triple-triple redundant 777 primary flight computer'',&lt;br /&gt;
Proceedings of the 1996 IEEE Aerospace Applications Conference,&lt;br /&gt;
Vol. 1, New York  (1996) 293-307&lt;br /&gt;
&lt;br /&gt;
[2] Alfred Spector and David Gifford,&lt;br /&gt;
[http://portal.acm.org/citation.cfm?coll=GUIDE&amp;amp;dl=GUIDE&amp;amp;id=358246 The space shuttle primary computer system], Communications of the ACM Volume 27, Issue 9 (September 1984)&lt;br /&gt;
&lt;br /&gt;
[3] B. J. Flehinger, [http://www.research.ibm.com/journal/rd/022/ibmrd0202G.pdf Reliability Improvement through Redundancy at Various System Levels], IBM J. Res. and Dev., vol. 2, April (1958) 148-158&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Applied Principles]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Aging</id>
		<title>Aging</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Aging"/>
				<updated>2011-02-11T21:45:22Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Reverted edits by Eboxytezi (Talk) to last version by Jfromm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;All [[System|systems]] are subject to '''aging''', even machines and software systems. Besides [[Cancer|cancer]], understanding and controlling the aging process is a central problem of applied biology. Aging means the accumulation of changes in a system, organism or object over time. For organic lifeforms, the central problem of aging is senescence, the general deterioration of vitality and resistance to adversity with advancing age. It is still unclear why senescence occurs and how it works. Reasons may be:&lt;br /&gt;
&lt;br /&gt;
* normal wear and tear damage, damage that naturally and inevitably occurs in daily life&lt;br /&gt;
* accumulation of waste products that interfere with metabolism&lt;br /&gt;
* accumulation of mutations which violate genetic integrity and gradually damage the genetic code&lt;br /&gt;
* suppression of mechanisms that prevent further regeneration (e.g. by shortened telomeres)&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* Wikipedia link for [http://en.wikipedia.org/wiki/Aging Aging]&lt;br /&gt;
&lt;br /&gt;
[[Category:Systems]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Autopoiesis</id>
		<title>Autopoiesis</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Autopoiesis"/>
				<updated>2011-02-11T21:45:18Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Reverted edits by Eboxytezi (Talk) to last version by Jfromm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{SelfOrg}}&lt;br /&gt;
&lt;br /&gt;
'''Autopoiesis''' literally means &amp;quot;self-production&amp;quot;, &amp;quot;self-producing&amp;quot; or &amp;quot;self-making&amp;quot; (from the Greek: auto for self- and poiesis for creation or production) and expresses a fundamental relationship between structure and function of a [[Self-Organization|self-organizing system]]. The idea of autopoiesis is simply that living things are produced and maintained by themselves. It is not a theory, rather an observation or phenomenon that describes systems with the [[Self-Star Properties|self-* property]] of self-production. It should be emphasized that autopoiesis is like [[Self-Organization|self-organization]] an abstract concept, similar to metabolism, but more general. &lt;br /&gt;
&lt;br /&gt;
The term was originally introduced by the Chilean biologists Francisco Varela (1946-2001) and Humberto Maturana in the early 1970s. Maturana and Varela take this form of autopoiesis (metabolic self-organization) to be the real essence of life and all living systems. The canonical example of an autopoietic system, and one of the entities that motivated Varela and Maturana to define autopoiesis, is the biological cell. The eukaryotic cell, for example, is made of various biochemical components such as nucleic acids and proteins, and is organized into bounded structures such as the cell nucleus, various organelles, a cell membrane and cytoskeleton. These structures, based on an external flow of molecules and energy, produce the components which, in turn, continue to maintain the organized bounded structure that gives rise to these components. An autopoietic system is to be contrasted with an allopoietic system, such as a car factory, which uses raw materials (components) to generate a car (an organized structure) which is something other than itself (a factory).&lt;br /&gt;
&lt;br /&gt;
More generally, the term autopoiesis refers to the dynamics of [[Self-Organization|self-organizing]] non-equilibrium structures; that is, organized states (sometimes also called dissipative structures) that remain stable for long periods of time despite matter and energy continually flowing through them. A vivid example of such a non-equilibrium structure is the Great Red Spot on Jupiter, which is essentially a gigantic whirlpool of gases in Jupiter's upper atmosphere. This vortex has persisted for a much longer time (on the order of centuries) than the average amount of time any one gas molecule has spent within it. From this very general point of view, the notion of autopoiesis is often associated with that of [[Self-Organization|self-organization]].&lt;br /&gt;
&lt;br /&gt;
An application of the concept to sociology can be found in Luhmann's systems theory. John von Neumann tried to create a self-reproducing machine which resulted in a self-reproducing [[Cellular Automata|cellular automaton]].&lt;br /&gt;
&lt;br /&gt;
==Relations==&lt;br /&gt;
&lt;br /&gt;
Autopoiesis is closely related to [[Self-Star Properties]], [[Autocatalysis|autocatalysis]] and the biological principle of [http://en.wikipedia.org/wiki/Metabolism metabolism], because it means &amp;quot;self-production&amp;quot;. Metabolism enables the build-up of an organism's own complex molecular material (anabolism) through the breakdown of other material (catabolism).&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Autopoiesis Wikipedia Entry for Autopoiesis] &lt;br /&gt;
&lt;br /&gt;
* Capra, Fritjof (1997). ''The Web of Life''. Random House. ISBN 0385476760 &amp;mdash; general introduction to the ideas behind autopoiesis&lt;br /&gt;
* Maturana, Humberto &amp;amp; Varela, Francisco ([1st edition 1973] 1980). ''Autopoiesis and Cognition: the Realization of the Living''. Robert S. Cohen and Marx W. Wartofsky (Eds.), Boston Studies in the Philosophy of Science '''42'''. Dordrecht: D. Reidel Publishing Co. ISBN 9027710155 (hardback), ISBN 9027710163 (paper) &amp;mdash; the main published reference on autopoiesis&lt;br /&gt;
* Mingers, John (1994). ''Self-Producing Systems''. Kluwer Academic/Plenum Publishers. ISBN 0306447975 &amp;mdash; a book on the autopoiesis concept in many different areas&lt;br /&gt;
* Luisi, Pier L. (2003). Autopoiesis: a review and a reappraisal. ''Naturwissenschaften'' '''90''' 49&amp;ndash;59 &amp;mdash; a biologist's view of autopoiesis&lt;br /&gt;
* Varela, Francisco J.; Maturana, Humberto R.; &amp;amp; Uribe, R. (1974). Autopoiesis: the organization of living systems, its characterization and a model. ''Biosystems'' '''5''' 187&amp;ndash;196 &amp;mdash; one of the original papers on the concept of autopoiesis&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Articles ==&lt;br /&gt;
&lt;br /&gt;
Margaret A. Boden, [http://cognition.iig.uni-freiburg.de/csq/pdf-files/boden.pdf Autopoiesis and Life]&lt;br /&gt;
&lt;br /&gt;
[[Category:Basic Principles]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Category:Consciousness</id>
		<title>Category:Consciousness</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Category:Consciousness"/>
				<updated>2011-02-11T21:45:16Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Reverted edits by Eboxytezi (Talk) to last version by Jfromm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This category contains topics related to consciousness, self-consciousness and self-awareness.&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Self-Protection</id>
		<title>Self-Protection</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Self-Protection"/>
				<updated>2011-02-11T21:45:12Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Reverted edits by Eboxytezi (Talk) to last version by Jfromm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Self-Protection''' is a [[Self-Star_Properties|Self-* Property]]. It means proactive identification of and protection from arbitrary attacks. Central mechanisms to protect the body in organic lifeforms are [[Stress|stress]] and unpleasant &lt;br /&gt;
sensations. Pain is the most prominent member of a class of unpleasant sensations known as bodily &lt;br /&gt;
sensations, which also include itches:&lt;br /&gt;
&lt;br /&gt;
* itches: sensation indicating light physical annoyance (for example by parasites), evokes the desire or reflex to scratch&lt;br /&gt;
* pain: sensation indicating severe physical damage of the system, evokes the desire or reflex to avoid the situation&lt;br /&gt;
&lt;br /&gt;
Pain is an unpleasant sensation resulting from the intricate interplay between&lt;br /&gt;
sensory and cognitive mechanisms. It is associated with actual or&lt;br /&gt;
potential tissue damage in natural organisms. Although it is unpleasant,&lt;br /&gt;
it is a necessary mechanism of systems with the capability of effective self-protection.&lt;br /&gt;
Effective self-protection means fast self-protection. The rapid&lt;br /&gt;
warning through pain is a critical component of the body’s defense system.&lt;br /&gt;
&lt;br /&gt;
Pain seems to be a general, necessary mechanism of systems with the&lt;br /&gt;
capability of self-protection, because it signals the place where the &lt;br /&gt;
self-protecting mechanisms fail or where they are badly needed.&lt;br /&gt;
A painful stimulus leads to a massive activation of multiple units, and prevents &lt;br /&gt;
at the same time any actions associated with it. It draws the attention of the whole &lt;br /&gt;
system to a certain part, but inhibits any action associated with it.&lt;br /&gt;
It is characterized by a loss in the flow of information, or&lt;br /&gt;
in the members of the system. &lt;br /&gt;
&lt;br /&gt;
For every person and organization it is unpleasant if income drops &lt;br /&gt;
while spending increases. For an organization, the &lt;br /&gt;
departments which have very high costs and zero revenue especially contribute &lt;br /&gt;
to this unpleasant situation. In companies, there is a capital inflow &lt;br /&gt;
and a capital outflow (usually in the form of products). If there is a sink &lt;br /&gt;
in the capital flow in between, it becomes painful for the company: &lt;br /&gt;
a department or product which has cost lots of time and money but &lt;br /&gt;
never results in real money from customers is very unpleasant.&lt;br /&gt;
&lt;br /&gt;
A conflict between members or a loss of good members is unpleasant, &lt;br /&gt;
especially if they cannot be replaced by suitable members of the same value:&lt;br /&gt;
&lt;br /&gt;
* A trainer of a sports team feels pain if his best players are banned from the field and he cannot replace them with suitable new players&lt;br /&gt;
* A general feels pain if his army loses lots of soldiers in a prolonged campaign and he cannot replace them with new ones&lt;br /&gt;
* A bishop feels pain if his church loses lots of members while the number of new members is sinking, too. The leader of a political party feels the same&lt;br /&gt;
* A CEO feels pain if his company loses a lot of employees who take valuable experience with them and he cannot replace them with suitable new ones&lt;br /&gt;
&lt;br /&gt;
Like other self-properties, self-protection can have severe consequences&lt;br /&gt;
and side-effects if the integrity of the self is affected: if the self &lt;br /&gt;
is not recognized correctly.&lt;br /&gt;
Negative side-effects of self-protection are autoimmune diseases&lt;br /&gt;
and allergies. In autoimmune diseases the body attacks the ‘self’ and its&lt;br /&gt;
own cells; examples are Diabetes Mellitus (type 1) and Multiple Sclerosis. In&lt;br /&gt;
allergies, the body attacks harmless targets from the environment:&lt;br /&gt;
allergens such as dust, pollen, or certain foods. In both cases, the body&lt;br /&gt;
attacks targets which are harmless. The distinction between self/nonself&lt;br /&gt;
and harmless/harmful goes wrong: in autoimmune diseases, parts of the&lt;br /&gt;
self are mistaken for hostile agents, and in allergies, harmless targets are&lt;br /&gt;
mistaken for harmful intruders.&lt;br /&gt;
Most autoimmune diseases are probably the result of multiple circumstances,&lt;br /&gt;
for example, a genetic predisposition triggered by an infection. Autoimmune&lt;br /&gt;
diseases result from at least three different interacting components:&lt;br /&gt;
genetic, environmental and regulatory. &lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* SEP Entry for [http://plato.stanford.edu/entries/pain/ Pain]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Self-Star Properties]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Level_of_Abstraction</id>
		<title>Level of Abstraction</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Level_of_Abstraction"/>
				<updated>2011-02-11T21:45:08Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Reverted edits by Eboxytezi (Talk) to last version by Jfromm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A '''level of abstraction''' describes a system at a certain scope or resolution, and it is used to explain the system at a certain [[Level_of_Organization|level]]. The relationship between different levels of abstraction is described by [[Emergence|emergence]]. The different types and classes of [[Emergence|emergence]] describe different relationships between higher and lower levels of abstraction. Strong emergence requires at least one completely new level of abstraction, if it is defined as the appearance of a new code. The &amp;quot;level of abstraction&amp;quot; is related to the &amp;quot;level of detail&amp;quot;, the &amp;quot;scope of view&amp;quot; or the &amp;quot;degree of generality&amp;quot;. The higher the level, the less detail; the lower the level, the more detail. The highest level of abstraction is the single system itself. The next level would be only a handful of components, and so on, while the lowest level could be millions of objects or agents. Abstraction comes from the Latin word 'abstrahere', which means 'to drag away, remove': we remove the details associated with any specific instance of the system.&lt;br /&gt;
&lt;br /&gt;
A &amp;quot;level of abstraction&amp;quot; in computing is the number of abstraction layers between the physical layer (of bits and binary code) and the current representation. A layer is a code, language or protocol which offers a certain service. For example, in a computer we have the language cascade from binary code to machine code, byte code and high-level code, or the OSI Reference Model. The most common abstraction layer is the application programming interface (API) between an application and a framework or operating system. High-level calls are made to the system, which executes the necessary instructions to perform the task.&lt;br /&gt;
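This cascade of layers can be sketched in a few lines of Python (a toy illustration; all layer names and encodings are our own, not a standard API): each layer offers a service and is implemented by the layer below it, so the top-level caller never sees bits or bytes.&lt;br /&gt;

```python
def physical_layer(bits):
    """Lowest level: just store raw bits as a string."""
    return "".join(bits)

def byte_layer(data):
    """Middle level: encode bytes as bits, using the layer below."""
    return physical_layer(format(b, "08b") for b in data)

def text_layer(message):
    """Highest level: an application-facing call. The caller only
    sees the service 'store this text', never bits or bytes."""
    return byte_layer(message.encode("utf-8"))

print(text_layer("Hi"))   # -> 0100100001101001
```

Each call is translated downward until it reaches the physical layer, which is exactly the sense in which high-level calls are &amp;quot;executed&amp;quot; by the layers beneath them.&lt;br /&gt;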
&lt;br /&gt;
Russ Abbott has argued in &amp;quot;The reductionist blind spot&amp;quot; that the best way to understand [[Emergence|emergence]] is through the lens of implementation - emergent properties can be described as a high level abstraction which is implemented by low level elements. The lower level of abstraction implements the higher level.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* Wikipedia entry for [http://en.wikipedia.org/wiki/Abstraction_(computer_science) Abstraction (computer science)]&lt;br /&gt;
&lt;br /&gt;
* Russ Abbott, [http://cs.calstatela.edu/wiki/images/c/ce/The_reductionist_blind_spot.pdf The reductionist blind spot]&lt;br /&gt;
&lt;br /&gt;
[[Category:Organization]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Random_Boolean_Network</id>
		<title>Random Boolean Network</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Random_Boolean_Network"/>
				<updated>2011-02-11T21:45:05Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Reverted edits by Eboxytezi (Talk) to last version by Jfromm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Random Boolean Networks''' (RBNs) are networks of boolean nodes that can &lt;br /&gt;
be in one of two possible states, 0 or 1, and whose evolution &lt;br /&gt;
from one time point to the next is governed by simple boolean &lt;br /&gt;
transition rules: each node changes its state according to a &lt;br /&gt;
boolean function of its current state and the states &lt;br /&gt;
of its neighbors. &lt;br /&gt;
RBNs are closely related to [[Cellular Automata]] (CA) and are&lt;br /&gt;
used to study [[Complex System|complex systems]]. Both are usually&lt;br /&gt;
based on local boolean transition functions.&lt;br /&gt;
In RBNs we have nodes instead of Cells in CA, and connections &lt;br /&gt;
to remote neighbors instead of a local neighborhood grid as in CA.&lt;br /&gt;
RBNs and NK Networks have been proposed as a biological&lt;br /&gt;
model by Stuart Kauffman, see his book &amp;quot;The Origins of Order&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
== NK Network ==&lt;br /&gt;
&lt;br /&gt;
An NK boolean network is defined as a network of N nodes with connectivity K, &lt;br /&gt;
where K refers to the maximum number of nodes that regulate some other node, &lt;br /&gt;
i.e. each of the N nodes has K inputs and one output.&lt;br /&gt;
It can be pictured as a network of N light bulbs. At every timestep, &lt;br /&gt;
each bulb updates its state: the light bulbs can only be on or off, and each &lt;br /&gt;
of the bulbs is influenced by K other bulbs in the network.&lt;br /&gt;
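The light-bulb picture can be sketched in a few lines of Python (a minimal sketch, not Kauffman's code; all function names are our own). Each node gets K randomly chosen inputs and a random boolean function, stored as a lookup table over the 2^K possible input patterns:&lt;br /&gt;

```python
import random

def make_nk_network(n, k, seed=0):
    """Build a random NK boolean network: each of the n nodes gets
    k randomly chosen input nodes and a random boolean function."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """Synchronously update every node from its k inputs."""
    new_state = []
    for node, ins in enumerate(inputs):
        # Encode the input pattern as an index into the lookup table.
        index = 0
        for i in ins:
            index = (index << 1) | state[i]
        new_state.append(tables[node][index])
    return new_state

inputs, tables = make_nk_network(n=5, k=2)
state = [1, 0, 1, 1, 0]
for _ in range(4):
    state = step(state, inputs, tables)
    print(state)
```

Because the update is deterministic, re-running the same trajectory always produces the same sequence of states.&lt;br /&gt;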
&lt;br /&gt;
* [http://www-users.cs.york.ac.uk/susan/cyc/n/nk.htm Applet for NK networks]&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
Tutorials&lt;br /&gt;
&lt;br /&gt;
* [http://www.itee.uq.edu.au/~kaiw/RBN/ RBN Tutorial] from Kai Willadsen &lt;br /&gt;
&lt;br /&gt;
* [http://homepages.vub.ac.be/~cgershen/rbn/tut/index.html RBN Tutorial] from Carlos Gershenson&lt;br /&gt;
&lt;br /&gt;
arXiv papers&lt;br /&gt;
&lt;br /&gt;
* [http://www.arxiv.org/abs/nlin.AO/0408006 Introduction to Random Boolean Networks] from Carlos Gershenson&lt;br /&gt;
&lt;br /&gt;
* [http://arxiv.org/abs/nlin/0204062 Boolean Dynamics with Random Couplings] from Leo Kadanoff et al.&lt;br /&gt;
&lt;br /&gt;
== Examples ==&lt;br /&gt;
&lt;br /&gt;
The Finite State Machine (FSM) for the whole boolean network reveals &lt;br /&gt;
the attractor structures and the basins of attraction. Since the &lt;br /&gt;
network is deterministic and finite, every trajectory ends in an &lt;br /&gt;
attractor: a fixed point or a discrete limit cycle. The limit cycle can &lt;br /&gt;
be at most as long as the total number of states, which is 2^N for &lt;br /&gt;
N nodes (2^3 = 8 for 3 nodes).&lt;br /&gt;
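Because each state has exactly one successor and the state space is finite, an attractor can be found simply by iterating until a state repeats. A minimal sketch (the helper names and the toy update rule are our own):&lt;br /&gt;

```python
def find_attractor(state, update):
    """Iterate a deterministic update rule until a state repeats;
    the states from the first repeat onward form the attractor
    (a fixed point if its length is 1, otherwise a limit cycle)."""
    seen = {}          # state -> time step at which it was first seen
    trajectory = []
    t = 0
    s = tuple(state)
    while s not in seen:
        seen[s] = t
        trajectory.append(s)
        s = tuple(update(list(s)))
        t += 1
    return trajectory[seen[s]:]   # the cycle, as a list of states

# Toy 3-node network: each node copies its left neighbor (a rotation),
# so every trajectory lies on a cycle of length 1 or 3 (2^3 = 8 states).
rotate = lambda s: [s[-1]] + s[:-1]
cycle = find_attractor([1, 0, 0], rotate)
print(len(cycle))   # -> 3
```

For a real RBN the `update` argument would be the network's transition function; enumerating all 2^N start states this way yields the full basin-of-attraction structure.&lt;br /&gt;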
&lt;br /&gt;
[[Image:RBNExample1.png|left|RBN Examples]]&lt;br /&gt;
&lt;br /&gt;
[[Image:RBNExample2.png|left|RBN Examples]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Termination_Detection</id>
		<title>Termination Detection</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Termination_Detection"/>
				<updated>2011-02-11T21:45:02Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Reverted edits by Eboxytezi (Talk) to last version by Jfromm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Distributed Algorithm|Distributed algorithms]] for '''termination detection''' are unique for [[Distributed System|distributed systems]] because the problem of termination detection does not exist in non-distributed systems. A large variety of algorithms exist, for instance the ones proposed by&lt;br /&gt;
&lt;br /&gt;
* Mattern (e.g. &amp;quot;Vectoralgorithm&amp;quot; or &amp;quot;Credit-Recovery&amp;quot;)&lt;br /&gt;
* Dijkstra-Feijen-Van Gasteren (1983)&lt;br /&gt;
* Dijkstra-Scholten (1980)&lt;br /&gt;
&lt;br /&gt;
[[Category:Distributed Algorithms]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Complex_Network</id>
		<title>Complex Network</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Complex_Network"/>
				<updated>2011-02-11T21:44:58Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Reverted edits by Eboxytezi (Talk) to last version by Jfromm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A '''complex network''' forms the backbone of a [[Complex System|complex system]]: the nodes correspond to the agents, entities or parts of the complex system, the edges to the interactions between them.  A network is essentially anything which can be represented by a graph: a set of points, nodes or vertices, connected by links, ties or edges. In social networks, the nodes are people, and the ties between them are (variously) acquaintance, friendship, political alliance or professional collaboration. In [[Multi-Agent System|multi-agent systems]], the nodes are agents, and two nodes are connected if they interact with each other. In [[Distributed Computing|distributed computing]] and [[Distributed System|distributed systems]], the nodes are computers or processes, and the links are channels for messages. In the case of the Internet, the nodes are actual machines, and they are joined by a link when they are physically tied together. In the case of the World Wide Web (WWW), the nodes are Web sites, and they are joined when there is a hyper-link from one to the other, see C.R. Shalizi's Article &amp;quot;Growth, Form, Function, Crashes&amp;quot;  below. Complex networks are special networks at the edge of chaos where the degree of connectivity is neither regular nor random. The most complex networks of the real world are either small-world networks or scale-free networks at the border between regular and random networks, between order and randomness. &lt;br /&gt;
&lt;br /&gt;
== Scale-Free Networks ==&lt;br /&gt;
&lt;br /&gt;
A network is called scale-free if it does not have a characteristic scale. A network&lt;br /&gt;
with a single scale is similar to a grid: every node has the same degree, i.e. the&lt;br /&gt;
same number of links/edges. In a [[scale-free network]], some nodes have a huge number of connections to other nodes, whereas most nodes have only a few. Typically the degree distribution can be described by a [[power law]].&lt;br /&gt;
&lt;br /&gt;
According to Mark Newman in [http://arxiv.org/abs/cond-mat/0412004], &amp;quot;when the probability of measuring a particular value of some quantity varies inversely as a power of that value, the quantity is said to follow a power law, also known variously as Zipf's law or the Pareto distribution. The distributions of the sizes of cities, earthquakes, solar flares, moon craters, wars and people's personal fortunes all appear to follow power laws&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Scale-free networks and networks which can be described by power-laws are robust against accidental failures but vulnerable to deliberate attacks against hubs and supernodes.&lt;br /&gt;
&lt;br /&gt;
== Small-World Networks ==&lt;br /&gt;
&lt;br /&gt;
A network is called a [[small-world network]] by analogy with the [[small-world phenomenon]] (popularly known as [[six degrees of separation]]). The small-world hypothesis,&lt;br /&gt;
which has been tested in experiments (see [http://smallworld.columbia.edu/results.html The Small World Project]), is the idea that two arbitrary people are connected by only six degrees of separation, i.e. the diameter of the corresponding graph of social connections &lt;br /&gt;
is not much larger than six.&lt;br /&gt;
&lt;br /&gt;
Small-world networks were first described by Duncan J. Watts and Steven H. Strogatz. They appear to be 'small' because they have a small average (characteristic) path length, like random or complete graphs. Yet they can be highly clustered, like regular lattices. They can be found at the edge or boundary between regular networks and random networks, and are created from regular networks by rewiring a few edges into shortcuts.&lt;br /&gt;
&lt;br /&gt;
== Common characteristics ==&lt;br /&gt;
&lt;br /&gt;
Both classes of complex networks, small-world and scale-free networks,&lt;br /&gt;
are very similar. They can be found at the edge of chaos between&lt;br /&gt;
complete randomness and total order. Small-world networks or graphs &lt;br /&gt;
emerge through the random rewiring of regular grids or lattices:&lt;br /&gt;
''you add randomness to order''. Scale-free networks arise if &lt;br /&gt;
''you add order to randomness'': instead of considering &lt;br /&gt;
a purely random growth of a network, you consider random growth with &lt;br /&gt;
preferential attachment.&lt;br /&gt;
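Both constructions fit in a short Python sketch (an illustration in the spirit of the Watts-Strogatz and Barabasi-Albert models, not their exact published algorithms; all parameters are illustrative):&lt;br /&gt;

```python
import random

def rewire_ring(n, k, p, seed=0):
    """'Add randomness to order': start from a ring lattice where each
    node links to its k nearest right-hand neighbors, then rewire each
    edge with probability p to a random target (Watts-Strogatz style)."""
    rng = random.Random(seed)
    edges = set()
    for u in range(n):
        for j in range(1, k + 1):
            v = (u + j) % n
            if rng.random() < p:          # rewire this edge into a shortcut
                v = rng.randrange(n)
            if v != u:
                edges.add((min(u, v), max(u, v)))
    return edges

def preferential_attachment(n, m, seed=0):
    """'Add order to randomness': grow a network by attaching each new
    node to m existing nodes chosen with probability proportional to
    their current degree (Barabasi-Albert style)."""
    rng = random.Random(seed)
    edges = set()
    endpoints = list(range(m))            # each node appears once per edge end
    for u in range(m, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(endpoints))   # degree-biased choice
        for v in chosen:
            edges.add((v, u))
            endpoints.extend([u, v])
    return edges

sw = rewire_ring(n=100, k=2, p=0.1)
sf = preferential_attachment(n=100, m=2)
deg = {}
for u, v in sf:
    deg[u] = deg.get(u, 0) + 1
    deg[v] = deg.get(v, 0) + 1
print(max(deg.values()), min(deg.values()))  # a few hubs, many low-degree nodes
```

The degree-biased choice works because `endpoints` contains each node once per incident edge, so sampling from it is exactly sampling proportionally to degree.&lt;br /&gt;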
&lt;br /&gt;
The small-world property can be associated with global connectivity&lt;br /&gt;
and a short average path length; it arises in regular networks through &lt;br /&gt;
the addition of random shortcuts.&lt;br /&gt;
The scale-free property can be associated with local connectivity;&lt;br /&gt;
it arises in growing random networks through preferential attachment.&lt;br /&gt;
&lt;br /&gt;
== Researchers and Scientists ==&lt;br /&gt;
&lt;br /&gt;
*[http://amaral.chem-eng.northwestern.edu/ L.A.N. Amaral] &lt;br /&gt;
*[http://tam.cornell.edu/Strogatz.html Steven Strogatz]&lt;br /&gt;
*[http://www.nd.edu/~alb/ Albert-László Barabási]&lt;br /&gt;
*[http://smallworld.columbia.edu/watts.html Duncan J. Watts]&lt;br /&gt;
*[http://www-personal.umich.edu/~mejn/ Mark Newman]&lt;br /&gt;
*[http://www.ingenuitygap.com/ Thomas Homer-Dixon]&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
*[http://www.nd.edu/~networks Self-Organized Networks]&lt;br /&gt;
*[http://www.sfu.ca/~insna/INSNA/Hot/scale_free.htm Scale-Free and Small-World Networks]&lt;br /&gt;
&lt;br /&gt;
== Articles ==&lt;br /&gt;
&lt;br /&gt;
* C.R. Shalizi, ''Growth, Form, Function, Crashes'', Santa Fe Institute Bulletin 15:2 (2000) [http://discuss.santafe.edu/dynamics/stories/storyReader$54]&lt;br /&gt;
* C.R. Shalizi, ''Complex Networks'' Notebook-Entry [http://cscs.umich.edu/~crshalizi/notebooks/complex-networks.html]&lt;br /&gt;
&lt;br /&gt;
*L.A.N. Amaral and J.M. Ottino , ''Complex networks - Augmenting the framework for the study of complex systems'', Eur. Phys. J. B 38 (2004) 147-162&lt;br /&gt;
*A. Barabasi and E. Bonabeau, ''Scale-Free Networks'', Scientific American, May 2003, 50-59&lt;br /&gt;
*K. Wiesenfeld, P. Colet, S.H. Strogatz, ''Exploring Complex Networks'',  Nature Vol 410 (2001) 268-276&lt;br /&gt;
*D.J. Watts and S. H. Strogatz., ''Collective dynamics of 'small-world' networks'', Nature Vol 393 (1998) 440-442&lt;br /&gt;
&lt;br /&gt;
*M. E. J. Newman [http://arxiv.org/abs/cond-mat/0412004 Power laws, Pareto distributions and Zipf's law]&lt;br /&gt;
*M. E. J. Newman [http://arxiv.org/abs/cond-mat/0303516 The structure and function of complex networks]&lt;br /&gt;
&lt;br /&gt;
== Books ==&lt;br /&gt;
&lt;br /&gt;
*Duncan J. Watts, ''Six Degrees: The Science of a Connected Age'', W. W. Norton &amp;amp; Company, 2003, ISBN 0393041425&lt;br /&gt;
*Duncan J. Watts, ''Small Worlds : The Dynamics of Networks between Order and Randomness'', Princeton University Press, 2003, ISBN 0691117047&lt;br /&gt;
*Albert-László Barabási, ''Linked: How Everything Is Connected to Everything Else and What It Means'', Plume, 2003, ISBN 0452284392&lt;br /&gt;
*Mark Buchanan, ''Nexus: Small Worlds and the Groundbreaking Theory of Networks'', W. W. Norton &amp;amp; Company, 2003, ISBN 0393324427&lt;br /&gt;
*E. Ben-Naim, H. Frauenfelder, Z. Toroczkai, ''Complex Networks'', Springer, 2004, ISBN 3540223541&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Group_Selection</id>
		<title>Group Selection</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Group_Selection"/>
				<updated>2011-02-11T21:44:55Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Reverted edits by Eboxytezi (Talk) to last version by Jfromm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Group selection''' describes the interaction of two distinct evolutionary processes on two different scales: the level of the gene,&lt;br /&gt;
and the level of the group (or meme). It is a special case of [[Multilevel Selection|multilevel selection]], where two processes of [[Natural Selection|natural selection]] interact with each other. In group selection, different forms of replicators support each other: genes increase the fitness of the memes in the groups of individuals, and memes in turn increase the fitness of genes in their groups.&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
see also &lt;br /&gt;
* Wikipedia Entry for [http://en.wikipedia.org/wiki/Group_selection Group Selection]&lt;br /&gt;
&lt;br /&gt;
* Nicholas S. Thompson, [http://www.behavior.org/journals_BP/2000/thompson.pdf Shifting the Natural Selection Metaphor to the Group Level], Behavior and Philosophy, 28, 83-101 (2000)&lt;br /&gt;
&lt;br /&gt;
* David S. Wilson &amp;amp; Elliott Sober, [http://www.bbsonline.org/Preprints/OldArchive/bbs.wilson.html Reintroducing group selection to the human behavioral sciences], Behavioral and Brain Sciences 17 (4) (1994) 585-654&lt;br /&gt;
&lt;br /&gt;
[[Category:Basic Principles]] [[Category:Evolutionary Principles]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Self-Rejuvenation</id>
		<title>Self-Rejuvenation</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Self-Rejuvenation"/>
				<updated>2011-02-11T21:44:51Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Reverted edits by Eboxytezi (Talk) to last version by Jfromm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Self-Rejuvenation''' or '''Self-Renewal''' is a [[Self-Star_Properties|Self-* Property]]. It is related to [[Self-Regeneration|self-regeneration]] and means that a system is able to rejuvenate and regenerate itself by replacing old parts with new ones. The major drawback of systems with [[Self-Regeneration|self-regeneration]] or [[Self-Rejuvenation|self-rejuvenation]] is related to [[Cancer|cancer]]: such a system can obviously generate a new system which destroys the old one, especially if the rejuvenation or regeneration process is organic and associated with growth, see [[Rejuvenation and Cancer]]. Self-renewal is the most fundamental and at the same time most dangerous property of stem cells.&lt;br /&gt;
&lt;br /&gt;
== Related ==&lt;br /&gt;
&lt;br /&gt;
* [[Rejuvenation and Cancer]]&lt;br /&gt;
&lt;br /&gt;
[[Category:Self-Star Properties]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Category:Organization</id>
		<title>Category:Organization</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Category:Organization"/>
				<updated>2011-02-11T21:44:48Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Reverted edits by Eboxytezi (Talk) to last version by Jfromm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Pages about organization and levels of organization&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Causality_Violation</id>
		<title>Causality Violation</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Causality_Violation"/>
				<updated>2011-02-11T21:44:42Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Reverted edits by Eboxytezi (Talk) to last version by Jfromm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Causality means the cause of an event is observed before its effect is observed.&lt;br /&gt;
'''Causality violation''' means that the cause of an event is observed after its &lt;br /&gt;
effect is observed.&lt;br /&gt;
Violations of causality are a problem in [[Distributed System|distributed systems]]&lt;br /&gt;
and [[Distributed Algorithm|distributed algorithms]], because causes and causal relations&lt;br /&gt;
are essential to any scientific explanation. If the causal structure is unclear,&lt;br /&gt;
it is hard to understand a system.&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Tribute_Model</id>
		<title>Tribute Model</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Tribute_Model"/>
				<updated>2011-02-11T21:44:38Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Reverted edits by Eboxytezi (Talk) to last version by Jfromm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The '''Tribute Model''' from Robert Axelrod captures the essential properties of power and explains the origin of nations and empires. The model is based upon a dynamic of &amp;quot;pay or else&amp;quot;. It shows that this dynamic, combined with mechanisms to&lt;br /&gt;
increase and decrease commitments, can lead to clusters of actors that behave largely according to the criteria for independent political states.&lt;br /&gt;
&lt;br /&gt;
In the model an active actor, A, may demand tribute from one of the other actors, which then has a choice of paying tribute to the demander or fighting. The model uses two decision-making '''rules''':&lt;br /&gt;
&lt;br /&gt;
* demanding rule: determines which actor is selected for tribute demands. The ideal target of a demand is weak enough that it might choose to pay rather than fight, and that it won't cause much damage if it does choose to fight. At the same time it should be strong enough to be able to pay as much as possible. A suitable decision rule combining both of these considerations is to choose among the potential targets the one that maximizes the product of the target's vulnerability and its possible payment.&lt;br /&gt;
&lt;br /&gt;
* fighting rule: determines whether the target pays the tribute. The decision rule used for the target is simpler: fight if and only if fighting would cost less than paying would.&lt;br /&gt;
&lt;br /&gt;
Furthermore, the actors develop degrees of commitment to each other. The commitments are caused by their choices to pay or fight, and in turn have consequences for how they will pay or fight in the future. The '''rules for commitment''' are as follows. Initially, no agent has any commitments to other agents, and each agent is fully committed to itself.  Commitment of i to j increases when:&lt;br /&gt;
&lt;br /&gt;
* i pays tribute to j (subservience),&lt;br /&gt;
* i receives tribute from j (protection), or&lt;br /&gt;
* i fights on the same side as j (friendship).&lt;br /&gt;
&lt;br /&gt;
Similarly, commitment decreases whenever: i fights on the opposite side as j (hostility).&lt;br /&gt;
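The two decision rules can be sketched in Python. This is only an illustration of the rules' logic: the wealth figures, the fixed tribute size and the damage fraction below are our assumptions, not Axelrod's exact parameters.&lt;br /&gt;

```python
TRIBUTE = 250      # fixed tribute payment (illustrative)
DAMAGE = 0.25      # fraction of the opponent's wealth destroyed in a fight

def vulnerability(demander, target):
    """How much weaker the target is, relative to the demander."""
    return (demander - target) / demander

def choose_target(demander, others):
    """Demanding rule: pick the target maximizing
    vulnerability * possible payment."""
    def score(target):
        payment = min(TRIBUTE, target)     # target can pay at most its wealth
        return vulnerability(demander, target) * payment
    return max(others, key=score)

def pays(demander, target):
    """Fighting rule: the target pays if and only if fighting
    would cost it more than paying the tribute would."""
    cost_of_fighting = DAMAGE * demander   # damage the demander can inflict
    return cost_of_fighting > min(TRIBUTE, target)

wealth = [1200, 400, 150]     # a demander and two potential targets
target = choose_target(wealth[0], wealth[1:])
print(target, pays(wealth[0], target))   # -> 400 True
```

The middle-wealth actor is chosen: the weakest target is very vulnerable but can pay little, while a strong target could pay much but is barely vulnerable, so the product favors targets in between.&lt;br /&gt;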
&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* Axelrod paper about [http://www-personal.umich.edu/~axe/research/Building.pdf Building New Political Actors]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Agent-Based Model]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Replication</id>
		<title>Replication</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Replication"/>
				<updated>2011-02-11T21:44:35Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Reverted edits by Eboxytezi (Talk) to last version by Jfromm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Replication''' in computer science refers to the creation and the use of redundant resources, such as &lt;br /&gt;
software or hardware components, to improve availability, reliability, [[Fault Tolerance|fault tolerance]], or performance. &lt;br /&gt;
Replication typically involves replication in space, in which the same application or data is stored on multiple &lt;br /&gt;
file systems or the same computing task is executed on multiple devices, or replication in time, in which &lt;br /&gt;
a computing task is executed repeatedly on a single device.&lt;br /&gt;
&lt;br /&gt;
One can distinguish roughly between two types of replication:&lt;br /&gt;
&lt;br /&gt;
* '''Active Replication''': all replicas work productively; each replica plays the same role and is active in every operation&lt;br /&gt;
* '''Passive Replication''': only one replica works productively (the primary); the other replicas become active only if the primary fails&lt;br /&gt;
&lt;br /&gt;
Heavy use of active replication naturally increases the [[Redundancy|redundancy]] among the &lt;br /&gt;
active components of the system, and therefore leads to a higher probability &lt;br /&gt;
of consistency problems.&lt;br /&gt;
'''Optimistic replication''' tries to achieve a trade-off between high consistency and &lt;br /&gt;
low latency by assuming that problems and conflicts during updates will occur rarely, &lt;br /&gt;
if at all.&lt;br /&gt;
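The difference between the two replication styles can be illustrated with a toy in-memory sketch (class and method names are our own; a real system would also handle failure detection and concurrent updates):&lt;br /&gt;

```python
class ActiveGroup:
    """Active replication: every replica executes every operation."""
    def __init__(self, n):
        self.replicas = [dict() for _ in range(n)]
    def write(self, key, value):
        for replica in self.replicas:      # all replicas apply the operation
            replica[key] = value
    def read(self, key):
        return self.replicas[0][key]       # any replica can answer

class PassiveGroup:
    """Passive replication: only the primary executes operations and
    ships its state to the backups; a backup takes over on failure."""
    def __init__(self, n):
        self.replicas = [dict() for _ in range(n)]
        self.primary = 0
    def write(self, key, value):
        self.replicas[self.primary][key] = value
        self.checkpoint()
    def checkpoint(self):
        state = dict(self.replicas[self.primary])
        for i, replica in enumerate(self.replicas):
            if i != self.primary:          # copy primary state to backups
                replica.clear()
                replica.update(state)
    def fail_over(self):
        self.primary = (self.primary + 1) % len(self.replicas)

active = ActiveGroup(3)
active.write("x", 1)
passive = PassiveGroup(3)
passive.write("x", 1)
passive.fail_over()                        # primary crashes, backup takes over
print(active.read("x"), passive.replicas[passive.primary]["x"])   # -> 1 1
```

The sketch also shows where consistency problems enter: the active group must keep three independently executing replicas in agreement, while the passive group only has to ship checkpoints from one primary.&lt;br /&gt;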
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
[http://portal.acm.org/citation.cfm?id=1057980&amp;amp;dl=GUIDE&amp;amp;coll=GUIDE Optimistic replication], Yasushi Saito and Marc Shapiro, ACM Computing Surveys (CSUR),  Volume 37, Issue 1 (March 2005)&lt;br /&gt;
&lt;br /&gt;
[[Category:Applied Principles]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Dissemination_Model</id>
		<title>Dissemination Model</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Dissemination_Model"/>
				<updated>2011-02-11T21:44:32Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Reverted edits by Eboxytezi (Talk) to last version by Jfromm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The '''Dissemination Model''' describes the emergence of culture. Culture here means the durable differences in beliefs, attitudes, and behavior between individuals and groups. It was created by Robert Axelrod. The model shows how local convergence can create global polarization of cultural &amp;quot;traits&amp;quot;. &lt;br /&gt;
&lt;br /&gt;
In the model the agents are placed at fixed sites. The basic premise is that the more similar an agent is to a neighbor, the more likely that that agent will adopt one of the neighbor's traits:&lt;br /&gt;
&lt;br /&gt;
* step 1: at random, pick a site to be active, and pick one of its neighbors&lt;br /&gt;
* step 2: with probability equal to their cultural similarity, these two sites interact. An interaction consists of selecting at random a feature on which the active site and its neighbor differ (if there is one) and changing the active site's trait on this feature to the neighbor's trait on this feature&lt;br /&gt;
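The two steps translate almost directly into Python. This is a minimal sketch, not Axelrod's original code; in particular we use a wrap-around (torus) grid for brevity, and the grid size and feature/trait counts are illustrative:&lt;br /&gt;

```python
import random

def similarity(a, b):
    """Fraction of cultural features on which two sites agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def axelrod_step(grid, size, rng):
    """One step of the dissemination model on a size x size grid of
    feature lists: pick a random site and neighbor, then interact
    with probability equal to their cultural similarity."""
    x, y = rng.randrange(size), rng.randrange(size)
    dx, dy = rng.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
    nx, ny = (x + dx) % size, (y + dy) % size
    active, neighbor = grid[x][y], grid[nx][ny]
    if rng.random() < similarity(active, neighbor):
        differing = [i for i in range(len(active)) if active[i] != neighbor[i]]
        if differing:
            feature = rng.choice(differing)
            active[feature] = neighbor[feature]   # adopt the neighbor's trait

# 5 features with 3 possible traits each, on a 10 x 10 grid.
rng = random.Random(0)
size, features, traits = 10, 5, 3
grid = [[[rng.randrange(traits) for _ in range(features)]
         for _ in range(size)] for _ in range(size)]
for _ in range(20000):
    axelrod_step(grid, size, rng)
```

Note that identical neighbors always &amp;quot;interact&amp;quot; but have nothing left to exchange, and completely dissimilar neighbors never interact; this is exactly the mechanism that freezes the grid into polarized cultural regions.&lt;br /&gt;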
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* Axelrod's paper about [http://www-personal.umich.edu/~axe/research/Dissemination.pdf The Dissemination of Culture]&lt;br /&gt;
&lt;br /&gt;
[[Category:Agent-Based Model]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=MML</id>
		<title>MML</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=MML"/>
				<updated>2011-02-11T21:44:26Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Reverted edits by Eboxytezi (Talk) to last version by Jfromm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The '''micro-macro link''' (MML) problem in sociology and distributed artificial intelligence (DAI) is the connection of [[Microlevel|microscopic]] and [[Macrolevel|macroscopic]] levels, how actors give rise to macroscopic social patterns and structures, and how these structures influence in turn individual actors. Understanding the micro-macro link means to understand the possible [[Emergence|emergent properties]] of the system. A solution of the MML is needed for the engineering of self-organizing systems ([[ESOS]]).&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* [http://www.sociologica.mulino.it/journal/article/index/Article/Journal:ARTICLE:179 The Micro-Macro Link in Social Simulation]&lt;br /&gt;
&lt;br /&gt;
* [http://www.virtosphere.de/data/publications/articles/schillo+.lnai1979.pdf The Micro-Macro Link in DAI and Sociology]&lt;br /&gt;
&lt;br /&gt;
[[Category:Organization]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=ESOS</id>
		<title>ESOS</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=ESOS"/>
				<updated>2011-02-11T21:44:24Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Reverted edits by Eboxytezi (Talk) to last version by Jfromm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The '''engineering of self-organizing systems''' (ESOS) is a contradiction in itself: how can you organize something which organizes itself? If we want to build a self-organizing system with autonomous agents, then how can we ensure its function? Agents do by definition what they want. How can you construct a self-organizing system? The answer is: in a balanced, iterative process which combines top-down analysis with bottom-up simulation, where we step by step define the 'rules of the game'. The bottom-up process is needed to ensure diversity (innovative, random, surprising elements). The top-down process ensures unity (e.g. function, quality and goal-orientation). Together they form a cyclic round-trip process, which can be named synthetic microanalysis. &lt;br /&gt;
&lt;br /&gt;
== A Two-Way Approach to the MML ==&lt;br /&gt;
&lt;br /&gt;
The [[MML|micro-macro link]] (MML) problem probably needs a two-way or two-phase approach to find the necessary micro-macro connections, including a bottom-up and a top-down process. The way up means synthesis, simulation or experiments, and determines how individual actions are combined and aggregated to collective behavior. The way down means analysis, creation of testable hypotheses, or translation of requirements, and defines how collective forces influence and constrain individual actions.&lt;br /&gt;
&lt;br /&gt;
We can only generate complex [[Self-Organization|self-organizing]] systems with [[Emergence|emergent]] properties in a goal-directed, straightforward way if we look at the microscopic level and the macroscopic level (for local and global patterns, properties and behaviors), examine causal dependencies across different scales and levels, and if we consider the congregation and composition of elements as well as their possible interactions and relations. A complex system can only be understood in terms of its parts and the interactions between them if we consider static and dynamic aspects. In other words, we need a combination of top-down and bottom-up approaches which considers all sides: static parts and dynamic interactions between them, together with the macroscopic states of the system and the microscopic states of the constituents. Sunny Y. Auyang has proposed a method named “synthetic microanalysis” which claims to combine synthesis and analysis, composition and decomposition, a bottom-up and a top-down view, and finally micro- and macrodescriptions. She describes the idea vividly in chapter 2 of her interesting book, but unfortunately she does not say how her approach works for MAS exactly. The book focuses on complex systems in general, and physical systems (with particles instead of agents) in particular.&lt;br /&gt;
&lt;br /&gt;
The general idea is a “bottom-up deduction guided by a top-down view”. You have to delineate groups of “microstates” according to causally related macroscopic criteria (by “partitioning the microstate space”, for instance through selection of all elements with a certain property or role related to some macroscopic structure). Making a round trip from the whole to its parts and back, you can use the desired global macroscopic phenomena to design suitable local properties and interactions. This two-way approach is a generalization of the “experimental method” proposed by Bruce Edmonds and Joanna Bryson. In the theoretical top-down phase you have to create testable hypotheses, which have to be verified in the experimental bottom-up phase. Instead of “synthetic microanalysis” you could also call it iterative goal-directed simulation (where the goals are determined by high-level objectives and overall requirements).&lt;br /&gt;
The experimental bottom-up approach alone is successful only for small and simple systems like one-dimensional cellular automata, where you can enumerate all possible systems. For large systems the number of possibilities and configurations grows so large (or even &amp;quot;explodes&amp;quot;) that the goal gets lost or the thicket of microscopic details becomes impenetrable. To quote Auyang again: “blind deduction from constituent laws can never bulldoze its way through the jungle of complexity generated by large-scale composition” (p.6).&lt;br /&gt;
The macroscopic view is useful and necessary to delineate possible configurations, to identify composite subsystems on medium and large scales, to set goals for microscopic simulations and finally to prevent scientists from losing sight of desired macroscopic phenomena when they are immersed in analytic details.&lt;br /&gt;
&lt;br /&gt;
== Iterations and Refinements ==&lt;br /&gt;
&lt;br /&gt;
[[Image:Topdown_vs_Bottomup.png|300px|thumb|left|Top-Down vs. Bottom Up]]&lt;br /&gt;
One round trip from the whole to its parts and back is probably not enough to generate complex self-organizing systems with emergent phenomena. If the two-way method of “synthetic microanalysis” works at all, you will certainly need some iterations and a number of stepwise refinements until the method converges to a suitable solution.&lt;br /&gt;
Before each iteration, it is important to identify and refine suitable subsystems, basic compounds and essential phenomena on the macroscopic level, which are large and frequent enough to be typical or characteristic of the system, but small and regular enough to be explained well by a set of microscopic processes. Many macroscopic descriptions are only approximations, idealizations and simplifications of real processes. &lt;br /&gt;
&lt;br /&gt;
In the first top-down phase towards the bottom level, we must find the significant, relevant and salient properties, events and interactions, especially the crucial events responsible for butterfly effects, avalanches and cascades. We seek the concrete, precise and deterministic realization of abstract concepts. Many microscopic details are insignificant, irrelevant and inconsequential to macroscopic phenomena. In the second bottom-up phase towards the top level, you have to compare the results of the synthesis and simulation with the desired structure. &lt;br /&gt;
&lt;br /&gt;
In a typical iteration of “synthetic microanalysis”, you start from the “top” and work your way down to the micro-level, constructing agent roles and interaction rules in just the way necessary to generate the behavior observed on “top”. This procedure can be iterated by stepwise refinement of agents and their interactions, which should include necessary changes in the environment, until the desired function is achieved.&lt;br /&gt;
In the next round, you start again from the global structure or macroscopic pattern, and try to refine the possible underlying microstates and micromechanisms. &lt;br /&gt;
Could these states and mechanisms lead to the desired large-scale structure? What kind of coordination, conflict-resolution and local guidance is needed additionally? What kind of roles and role-transitions are possible? &lt;br /&gt;
&lt;br /&gt;
Thus you would proceed roughly like this while trying to determine possible states, roles and role transitions:&lt;br /&gt;
&lt;br /&gt;
:'''Phase 1. Analysis and Delineation''' &lt;br /&gt;
:Starting from requirements and global objectives, what macroscopic and microscopic patterns, configurations, situations and contexts are possible in principle? From the answers you can try to delineate what roles, behaviors, local states and local interactions are roughly possible or necessary:&lt;br /&gt;
:a) What roles and local behaviors are possible? Try to determine and deduce local behavior from global behavior, and identify possible roles and role transitions.&lt;br /&gt;
:b) What states are possible? Determine and define local properties from global properties.&lt;br /&gt;
:c) What kinds of local communication and coordination mechanisms are possible? Determine tolerable conflicts and inconsistencies.&lt;br /&gt;
&lt;br /&gt;
:'''Phase 2. Synthesis and Simulation''' &lt;br /&gt;
:Is the desired global behavior achievable with the set of roles and role transitions? In the second phase, you work your way up to the top again through comprehensive simulations and experiments.&lt;br /&gt;
:Since emergent properties are possible, simulation is the only major way up from the bottom to the top. As Giovanna Di Marzo Serugendo says, “the verification task turns out to be an arduous exercise, if not realized through simulation”.&lt;br /&gt;
:Sometimes the term “emergence” itself is even defined through simulation, for instance in the following way: a macrostate is weakly emergent if it can be derived from microstates and microdynamics, but only by simulation.&lt;br /&gt;
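The interplay of the two phases can be sketched as a simple round-trip loop. The following Python sketch is only an illustration: the functions ''propose'', ''simulate'' and ''matches'' are hypothetical placeholders for the engineer's top-down analysis, the bottom-up agent simulation, and the comparison with the desired macroscopic pattern.&lt;br /&gt;

```python
def synthetic_microanalysis(target, propose, simulate, matches, max_rounds=10):
    """Round-trip sketch: top-down proposal of micro-rules, bottom-up
    simulation, and comparison against the desired macroscopic pattern."""
    rules = propose(target, observed=None)    # Phase 1: analysis and delineation
    for _ in range(max_rounds):
        observed = simulate(rules)            # Phase 2: synthesis and simulation
        if matches(observed, target):
            return rules                      # desired global behavior achieved
        rules = propose(target, observed)     # refine the rules for the next round
    return None                               # the method did not converge

# Toy numeric stand-ins (purely illustrative): the "simulation" doubles
# the rule value, and the desired macro-pattern is the number 10.
found = synthetic_microanalysis(
    target=10,
    propose=lambda target, observed: target // 2,
    simulate=lambda rules: rules * 2,
    matches=lambda observed, target: observed == target)
```

Each pass through the loop is one round trip from the whole to its parts and back: refine the micro-rules top-down, then check bottom-up by simulation whether the desired global behavior emerges.&lt;br /&gt;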
&lt;br /&gt;
The way up is simpler than the way down and requires mainly simulations. Since these simulations can be quite time-consuming, it can be slower than the top-down process. In mathematical calculus the situation is quite similar: many integrals can only be solved and determined numerically, whereas differentiation is much easier and often requires only sophisticated analytic techniques.&lt;br /&gt;
There are more similarities: the fundamental theorem of calculus also connects the purely algebraic indefinite integral and the purely analytic (or geometric) definite integral. Likewise, a method of synthetic microanalysis should combine simulations (preferably bottom-up) and “analytic” (preferably top-down) considerations.&lt;br /&gt;
&lt;br /&gt;
== Genetic Algorithms ==&lt;br /&gt;
&lt;br /&gt;
[[Image:SMA.png|300px|thumb|right|Synthetic Microanalysis]]&lt;br /&gt;
The methods of synthetic microanalysis and evolutionary algorithms are quite similar; see the figure for a comparison. Both require the use of simulation, experimentation and selection. In the case of evolutionary algorithms, without a “human in the loop”, the fitness evaluation is done automatically by fitness functions; in the case of synthetic microanalysis, with a “human in the loop”, it is done by the human engineer. &lt;br /&gt;
&lt;br /&gt;
These two approaches – Synthetic Microanalysis and Genetic Algorithms – are probably the only two systematic ways to create self-organizing multi-agent systems with emergent properties. There are two other obvious ways to build a self-organizing system that meets the requirements and objectives: the imitation of natural systems, for instance in the form of biologically or sociologically inspired systems, or manual trial-and-error. The first method can only transfer existing solutions, the second is not systematic. As Edmonds says, “we are to do better than trial and error…we will need to develop explicit hypotheses about our systems and these can only become something we rely on via replicated experiment.”&lt;br /&gt;
&lt;br /&gt;
* The advantage of '''Synthetic Microanalysis''' is that we are able to understand the solution. The drawback is that it still requires a human in the loop: constant manual intervention, observation, consideration, delineation and design are essential.&lt;br /&gt;
&lt;br /&gt;
* The advantage of '''Genetic Algorithms''' is that they do not require a human in the loop. The drawback is that we are often not able to understand the result. It is often hard to understand why this result is optimal (and none of the other solutions is), and how exactly it works.&lt;br /&gt;
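To make the comparison concrete, here is a minimal genetic algorithm in Python. It is a generic textbook sketch, not a specific algorithm from the references below; the fitness function takes the place of the human engineer's judgment.&lt;br /&gt;

```python
import random

def evolve(fitness, genome_len=10, pop_size=20, generations=40, seed=0):
    """Minimal genetic algorithm over bit-string genomes."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        # Automatic fitness evaluation replaces the human in the loop.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # keep the fitter half (elitism)
        children = []
        while len(children) != len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)    # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(genome_len)         # point mutation: flip one bit
            child[i] = 1 - child[i]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy objective: maximize the number of 1-bits ("OneMax").
best = evolve(fitness=sum)
```

For the toy &amp;quot;OneMax&amp;quot; objective the run converges quickly, but for realistic fitness functions the evolved genome can indeed be hard to interpret.&lt;br /&gt;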
&lt;br /&gt;
== Science vs. Engineering ==&lt;br /&gt;
&lt;br /&gt;
[[Image:Scientific_Method.png|320px|thumb|left|The Scientific Method]]&lt;br /&gt;
This SMA method is nothing else but the scientific method applied to engineering, the combination of engineering and science. The application of the scientific method by the engineer is the solution to the fundamental [[ESOA]] and ESOS problem. It is the step-by-step investigation of hypotheses with experiments and simulations. &lt;br /&gt;
&lt;br /&gt;
The scientific method is an iterative process that is the basis for any scientific inquiry, and it can also be used to examine artificial systems and simulated worlds (for instance synthetic societies of multi-agent systems). The scientific method follows a series of basic steps (observe, formulate, predict, test):&lt;br /&gt;
&lt;br /&gt;
:(1) identify a problem you would like to solve, &lt;br /&gt;
:(2) formulate a hypothesis,&lt;br /&gt;
:(3) test the hypothesis, &lt;br /&gt;
:(4) collect and analyze the data, &lt;br /&gt;
:(5) make conclusions and restart with (1)&lt;br /&gt;
&lt;br /&gt;
Remarkably, some computer scientists do not want to hear this: the scientific method applied to engineering. They are of course scientists, and as scientists they of course use the scientific method. How dare we question this? Yet there is a clear difference between science and engineering, between the scientist and the engineer. The scientist tries to explain complexity by simple rules, the engineer tries to hide complexity behind simple user interfaces. The scientist tries to explore nature by building machines, the engineer tries to build machines by exploring possible constructions.&lt;br /&gt;
&lt;br /&gt;
 The scientist seeks to understand what is,&lt;br /&gt;
 the engineer seeks to create what never was.&lt;br /&gt;
 The engineer explores in order to build, &lt;br /&gt;
 the scientist builds in order to explore.&lt;br /&gt;
&lt;br /&gt;
What happens if both meet each other, if we combine the characteristics of pure engineering with pure science? Surprisingly, the best and the worst. The worst are &amp;quot;buzzword engineers&amp;quot; who produce only hot air, and &amp;quot;engineering scientists&amp;quot; who only seek to create problems that never were before. They are scientists and engineers who have got it wrong: engineers should not conceal the truth and produce complexity, they should hide complexity and produce the truth. (Unfortunately, many computer scientists fall into this category. There are many hot-air merchants at the universities. In German they are called &amp;quot;Schwindler&amp;quot; or &amp;quot;Schaumschläger&amp;quot;. All they do is produce hot air and complex, useless frameworks while inventing new buzzwords and acronyms. In marketing this may be acceptable, but not in computer science. They are a bit like intelligent ELIZA bots, which will never achieve real intelligence but only produce a perfect illusion of intelligence. Likewise, Schaumschläger will never produce any real progress in science, only a perfect illusion of progress. They are good at selling themselves, at getting jobs and grants, and at pretending to be important.) &lt;br /&gt;
&lt;br /&gt;
But there are also the opposite cases. The best cases are &amp;quot;theory engineers&amp;quot; or &amp;quot;scientific engineers&amp;quot;: scientists who construct new theories, or engineers who discover new laws in engineering and new ways to build new types of systems. Albert Einstein comes to mind.&lt;br /&gt;
&lt;br /&gt;
These are the extremes. Between the extremes, if we leave the best and the worst cases behind, we find the engineer who seeks to understand the useful system he is building, and the scientist who seeks to create new kinds of interesting theories that never existed before. This is exactly what we need for a cyclic round-trip process, which can be named synthetic microanalysis (the scientific method for the engineer, which means rapid prototyping and agile development).&lt;br /&gt;
&lt;br /&gt;
== Books and References ==&lt;br /&gt;
&lt;br /&gt;
* Ottino, J. M., Engineering complex systems, Nature 427 (2004) 399&lt;br /&gt;
* Johnson, S., Emergence, Scribner, 2002&lt;br /&gt;
* Holland, J. H., Emergence from chaos to order, Oxford University Press, 1998&lt;br /&gt;
* Axelrod, R., Chapter 6 “Building New Political Actors” of Complexity of Cooperation, Princeton University Press, 1997&lt;br /&gt;
* Maes, P, Modeling Adaptive Autonomous Agents, in Artificial Life, Christopher G. Langton (Ed.), The MIT Press, (1995)&lt;br /&gt;
* Edmonds, B. &amp;amp; Bryson, J. (2004) The Insufficiency of Formal Design Methods - the necessity of an experimental approach for the understanding and control of complex MAS. In Proceedings of the 3rd International Joint Conference on Autonomous Agents &amp;amp; Multi Agent Systems (AAMAS'04), New York, ACM Press&lt;br /&gt;
* Edmonds, B. (2004) Using the Experimental Method to Produce Reliable Self-Organised Systems. In Brueckner, S. et al. (eds.) Engineering Self Organising Systems: Methodologies and Applications, Springer LNAI 3464, (2005) 84-99&lt;br /&gt;
* Yamins, D., Towards a Theory of &amp;quot;Local to Global&amp;quot; in Distributed Multi-Agent Systems, In Proceedings of the 4th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2005), Utrecht, ACM Press&lt;br /&gt;
* Jonathan Rauch, Seeing Around Corners, The Atlantic Monthly, April 2002&lt;br /&gt;
* Conte, R. and Castelfranchi, C., Simulating multi-agent interdependencies. A two-way approach to the micro-macro link, in Klaus G. Troitzsch et al. (Eds.), Social science microsimulation, Springer (1995) 394-415&lt;br /&gt;
* Auyang, S.Y., Foundations of Complex-system theories, Cambridge University Press, 1998&lt;br /&gt;
* Serugendo, G.D.M., Engineering Emergent Behaviour: A Vision, Multi-Agent-Based Simulation III. 4th International Workshop, MABS 2003 Melbourne, Australia, July 2003, David Hales et al. (Eds), LNAI 2927, Springer, 2003&lt;br /&gt;
* Mark A. Bedau, Weak Emergence, In J. Tomberlin (Ed.) Philosophical Perspectives: Mind, Causation, and World, Vol. 11, Blackwell (1997) 375-399&lt;br /&gt;
* Suzuki, J. and Suda, T., A Middleware Platform for a Biologically Inspired Network Architecture Supporting Autonomous and Adaptive Applications, In IEEE Journal on Selected Areas in Communications (JSAC), Special Issue on Intelligent Services and Applications in Next Generation Networks, vol. 23, no. 2 (2005) 249-260&lt;br /&gt;
* Montresor, A., Meling, H. and Babaoglu, O., Messor: Load-Balancing through a Swarm of Autonomous Agents, In Proceedings of the 1st International Workshop on Agents and Peer-to-Peer Computing, Bologna, Italy, July 2002, also a Technical Report UBLCS-2002-11, University of Bologna, Italy.&lt;br /&gt;
* Gershenson, C., A General Methodology for Designing Self-Organizing Systems, submitted preprint at http://uk.arxiv.org/abs/nlin.AO/0505009&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
Henry Petroski, [http://www.americanscientist.org/issues/pub/2008/3/scientists-as-inventors Scientists as Inventors], American Scientist Vol. 96 Sep-Oct. (2008) 368-371&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Kin_Selection</id>
		<title>Kin Selection</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Kin_Selection"/>
				<updated>2011-02-11T21:44:20Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Reverted edits by Eboxytezi (Talk) to last version by Jfromm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Kin Selection''' is the [[Natural Selection|natural selection]] of shared genes in the genotype by altruistic behavior in the related phenotypes. Organisms tend to favor the reproductive success of their relatives, even at a cost to their own survival and/or reproduction, because they share genes with them, and therefore indirectly increase the reproductive success of their own genes.&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* Wikipedia Entry for [http://en.wikipedia.org/wiki/Kin_selection Kin Selection]&lt;br /&gt;
&lt;br /&gt;
[[Category:Basic Principles]] [[Category:Evolutionary Principles]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Supervenience</id>
		<title>Supervenience</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Supervenience"/>
				<updated>2011-02-11T21:44:17Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Reverted edits by Eboxytezi (Talk) to last version by Jfromm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Supervenience''' is a form of strong [[Emergence|emergence]] characterized by independence in interdependence: a system A is causally independent from a system B, and yet physically embedded in it. To say that A supervenes on B means that B implements A: there can be no change in A without a change in B. In this sense, implementation is the opposite of supervenience. If a system A is embedded in and implemented by a system B, it is usually not possible to say how changes in system B affect system A. System A can be unaffected, or it can stop working altogether. They are causally independent of each other. And yet there cannot be a difference in system A without a difference in the underlying system B, because system A is embedded in, realized and implemented by system B.&lt;br /&gt;
&lt;br /&gt;
In a more formal way, supervenience is a kind of dependency relationship, typically held to obtain between sets of properties. A set of properties A is supervenient on a set of properties B, if and only if any two objects x and y which share all properties in B (are &amp;quot;B-indiscernible&amp;quot;) must also share all properties in A.&lt;br /&gt;
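For finite collections of objects this definition can be checked directly. The following Python sketch is purely illustrative; the objects and the property maps ''props_A'' and ''props_B'' are invented placeholders.&lt;br /&gt;

```python
def supervenes(objects, props_A, props_B):
    """A supervenes on B iff any two B-indiscernible objects
    are also A-indiscernible (share all properties in A)."""
    for x in objects:
        for y in objects:
            if props_B(x) == props_B(y) and props_A(x) != props_A(y):
                return False
    return True

# Toy illustration: mental states supervening on brain states.
people = [{"brain": "s1", "mind": "happy"},
          {"brain": "s1", "mind": "happy"},
          {"brain": "s2", "mind": "sad"}]
ok = supervenes(people,
                props_A=lambda p: p["mind"],
                props_B=lambda p: p["brain"])
```

Here the two B-indiscernible people (same brain state) share the same mental state, so A supervenes on B in this collection; adding a person with the same brain state but a different mental state would break the supervenience.&lt;br /&gt;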
&lt;br /&gt;
Supervenience is related to transcendence, the state of being or existing above and beyond the limits of a system. It can be considered a first step towards transcendence: if a system A which supervenes on B is embedded in and implemented by a system C, it has transcended B, because it exists above and beyond the limits of system B.&lt;br /&gt;
&lt;br /&gt;
== Examples ==&lt;br /&gt;
&lt;br /&gt;
A common example is the mind (system A), which supervenes on the brain (system B): any change in one's mental state implies that there has been some kind of change in one's brain state. Another example is a virtual machine which runs on a host computer. A virtual machine is a software implementation of a machine (computer) that executes programs like a real machine. A program which is executed on the virtual machine is independent from the hardware of the host machine, and yet it is executed by it. The common examples are:&lt;br /&gt;
&lt;br /&gt;
* biological properties supervene on physical properties&lt;br /&gt;
* mental states supervene on neurophysiological states&lt;br /&gt;
* software supervenes on hardware&lt;br /&gt;
* a virtual machine supervenes on a real computer&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Supervenience Supervenience] in Wikipedia&lt;br /&gt;
* [http://plato.stanford.edu/entries/supervenience/ Supervenience] in Stanford Encyclopedia of Philosophy&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Invisible_Hand</id>
		<title>Invisible Hand</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Invisible_Hand"/>
				<updated>2011-02-11T21:44:13Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Reverted edits by Eboxytezi (Talk) to last version by Jfromm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In economics, the '''invisible hand''' is used to describe the self-regulating nature of the marketplace. The invisible hand is a metaphor coined by the economist Adam Smith in &amp;quot;The Wealth of Nations&amp;quot;. For Smith, the invisible hand was created by the conjunction of the forces of self-interest, competition, and supply and demand, which he believed would provide the best outcome for society provided that government did not interfere with these forces.&lt;br /&gt;
&lt;br /&gt;
== Origin of the Metaphor ==&lt;br /&gt;
&lt;br /&gt;
Adam Smith (1723-1790) uses the metaphor in Book IV of ''The Wealth of Nations'', arguing that people in any society will employ their capital in foreign trading only if the profits available by that method far exceed those available locally. In such a case, Smith argues, it is better for society as a whole if they so do.&lt;br /&gt;
&lt;br /&gt;
: &amp;quot;[An individual is] led by an '''invisible hand''' to promote an end which was no part of his intention. Nor is it always the worse for the society that it was not part of it. By pursuing his own interest he frequently promotes that of the society more effectually than when he really intends to promote it. I have never known much good done by those who affected to trade for the public good. It is an affectation, indeed, not very common among merchants, and very few words need be employed in dissuading them from it.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== Interpretation ==&lt;br /&gt;
&lt;br /&gt;
Adam Smith argued that, in a free market where actions are not orchestrated or organized by a central command, an individual pursuing his own interests tends also to promote the interest of the society as a whole, through a principle that he called “the invisible hand”. Producers and consumers, in the pursuit of profits, are led, as if by an invisible hand, to do what is best for the community. This sounds paradoxical: a population of self-centered, selfish human beings, working in their own self-interest, can make the world a better place for the whole population. Everyone creates value for himself, and as a side-effect a valuable contribution is made to the community as a whole. &lt;br /&gt;
&lt;br /&gt;
Adam Smith's invisible hand can be considered a form of [[Self-Organization|self-organization]], but the mystery behind the invisible hand is just the free market and the principle of supply and demand. A market is a mechanism which finds, for every lack of supply, someone who organizes that supply, by connecting the interests of the individual (making profit) with the interests of the public (guaranteeing supply). A severe lack of supply is an irresistible incentive to make money by founding or opening a business, because it creates a true market niche. A surplus of supply is an incentive to shut down a business, because the market niche closes and one can no longer make a profit. Therefore greed drives actors to beneficial behavior which is good for the society as a whole, because there is no lack of products and the supply is evenly distributed. And when all people constantly struggle to become wealthier, they will thereby increase the total sum of wealth.&lt;br /&gt;
&lt;br /&gt;
Thus by pursuing their private self-interest, people can create a common public good as a side-effect. This works well if the interests of the self and the collective are closely connected.&lt;br /&gt;
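The entry-and-exit mechanism described above can be illustrated with a deliberately crude toy model in Python (all numbers are invented for illustration): firms enter when there is a shortage and exit when there is a surplus, and the number of firms converges to the level of demand without any central coordination.&lt;br /&gt;

```python
def market(firms, demand, steps=50):
    """Toy entry/exit dynamics: each firm supplies one unit; a shortage
    (a profitable niche) attracts entry, a surplus drives exit."""
    for _ in range(steps):
        supply = firms                     # one unit of supply per firm
        if supply == demand:
            continue                       # niche closed, market cleared
        if supply in range(demand):        # shortage: supply below demand
            firms = firms + 1              # profit attracts a new business
        else:                              # surplus: supply above demand
            firms = firms - 1              # losses shut a business down
    return firms
```

Starting from 1 firm or from 25 firms, the toy market settles at exactly 10 firms when the demand is 10 units.&lt;br /&gt;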
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* Wikipedia Entry for the [http://en.wikipedia.org/wiki/Invisible_hand Invisible Hand]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Cellular_Automata</id>
		<title>Cellular Automata</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Cellular_Automata"/>
				<updated>2011-02-11T21:44:10Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Reverted edits by Eboxytezi (Talk) to last version by Jfromm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Cellular Automata''' are regular arrays of identical finite state automata whose next state is determined solely by their &lt;br /&gt;
current state and the state of their neighbours, usually by a boolean transition function. They are closely related to &lt;br /&gt;
[[Random Boolean Network|Random Boolean Networks (RBN)]]. A CA contains many cells and each cell is a finite-state automaton &lt;br /&gt;
connected to its neighbors - and so the whole machine or device is called a cellular automaton (pl. cellular automata). They &lt;br /&gt;
were introduced by the mathematician John von Neumann in the 1950s as simple models of biological self-reproduction, and they &lt;br /&gt;
are elementary models for [[Complex System|complex systems]] and processes consisting of a large number of simple, homogeneous, &lt;br /&gt;
locally interacting components. &lt;br /&gt;
&lt;br /&gt;
== Definition ==&lt;br /&gt;
&lt;br /&gt;
A suitable definition of [http://en.wikipedia.org/wiki/Cellular_Automata Cellular Automata] is according to [http://mathworld.wolfram.com/CellularAutomaton.html mathworld] the following statement: &amp;quot;A cellular automaton &lt;br /&gt;
is a collection of colored cells on a grid that evolves through a number of discrete time steps according to a set &lt;br /&gt;
of rules based on the states of neighboring cells.&amp;quot; The two most common neighborhoods in the case of a &lt;br /&gt;
two-dimensional cellular automaton on a square grid are the  &lt;br /&gt;
[http://mathworld.wolfram.com/MooreNeighborhood.html Moore neighborhood] (a square neighborhood) and the &lt;br /&gt;
[http://mathworld.wolfram.com/vonNeumannNeighborhood.html von Neumann neighborhood] (a diamond-shaped neighborhood).&lt;br /&gt;
&lt;br /&gt;
== Types ==&lt;br /&gt;
&lt;br /&gt;
Stephen Wolfram proposed a classification of cellular automaton rules into four types, according to the results of evolving the system from a &amp;quot;disordered&amp;quot; initial state:&lt;br /&gt;
&lt;br /&gt;
* I.  Evolution leads to a homogeneous state.&lt;br /&gt;
* II. Evolution leads to a set of separated simple stable or periodic structures.&lt;br /&gt;
* III. Evolution leads to a chaotic pattern.&lt;br /&gt;
* IV. Evolution leads to complex localized structures, sometimes long-lived.&lt;br /&gt;
&lt;br /&gt;
David Eppstein proposed a [http://www.ics.uci.edu/~eppstein/ca/wolfram.html classification] of cellular automaton rules into only three types:&lt;br /&gt;
&lt;br /&gt;
* Contraction impossible&lt;br /&gt;
* Expansion impossible&lt;br /&gt;
* Both expansion and contraction possible&lt;br /&gt;
&lt;br /&gt;
== Applets ==&lt;br /&gt;
&lt;br /&gt;
Good Cellular Automata applets, including 1-dimensional CA and 2-dimensional CA where you can edit the rules online, can be found at the site [http://www.sussex.ac.uk/space-science/ca.html]. Rules with complex patterns are for instance Wolfram's [http://mathworld.wolfram.com/Rule30.html Rule 30] and [http://mathworld.wolfram.com/Rule110.html Rule 110]. Mirek's Java Cellebration [http://psoup.math.wisc.edu/mcell/mjcell/mjcell.html MJCell] is a Java applet that allows playing 300+ Cellular Automata rules and 1400+ patterns. It can play rules from 13 CA rule families. A nice tutorial from David J. Eck about Cellular Automata and &amp;quot;the edge of chaos&amp;quot; can be found [http://math.hws.edu/xJava/CA/index.html here].&lt;br /&gt;
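An elementary (one-dimensional, two-state) cellular automaton such as Rule 30 or Rule 110 fits in a few lines of Python. This generic sketch uses a ring of cells (periodic boundary conditions) and is not the code behind any of the applets above.&lt;br /&gt;

```python
def eca_step(cells, rule):
    """One step of an elementary CA; 'rule' is the Wolfram rule number
    (e.g. 30 or 110) and 'cells' is a list of 0/1 states on a ring."""
    n = len(cells)
    table = [(rule // (2 ** i)) % 2 for i in range(8)]   # rule bits for patterns 0..7
    return [table[4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n]]
            for i in range(n)]

row = [0, 0, 0, 1, 0, 0, 0]
row = eca_step(row, 30)   # under Rule 30, a single live cell grows to three
```

Iterating this step on a wider ring and printing each row reproduces the familiar triangular Rule 30 pattern.&lt;br /&gt;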
&lt;br /&gt;
== Game of Life ==&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Conway's_Game_of_Life Conway's Game of Life] is one of the most popular two-dimensional CA.&lt;br /&gt;
It was invented by John H. Conway. The rules are very simple:&lt;br /&gt;
&lt;br /&gt;
 '''Birth'''    If an unoccupied cell has 3 occupied neighbors, it becomes occupied.&lt;br /&gt;
 '''Survival''' If an occupied cell has 2 or 3 neighbors, the organism survives to the next generation.&lt;br /&gt;
 '''Death'''    If an occupied cell has 0..1 or 4..8 occupied neighbors, &lt;br /&gt;
          the organism dies (0,1: of loneliness; 4-8: of overcrowding).&lt;br /&gt;
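These rules translate almost directly into code. The following Python sketch (a generic implementation, not the code of any applet linked below) stores only the live cells in a set:&lt;br /&gt;

```python
from collections import Counter

def life_step(alive):
    """One generation of Conway's Game of Life; 'alive' is a set of (x, y)."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in alive
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth: an empty cell with exactly 3 neighbors.
    # Survival: a live cell with 2 or 3 neighbors.  Everything else dies.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in alive)}

blinker = {(0, 1), (1, 1), (2, 1)}
```

For example, a horizontal row of three live cells (a &amp;quot;blinker&amp;quot;) flips to a vertical row and back again, with period 2.&lt;br /&gt;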
&lt;br /&gt;
A description of the game can be found at the [http://mathworld.wolfram.com/Life.html mathworld] site.&lt;br /&gt;
&lt;br /&gt;
Interactive Website by Paul Callahan: [http://www.math.com/students/wonders/life/life.html What is the Game of Life]&lt;br /&gt;
&lt;br /&gt;
More about Conway's Game of Life :&lt;br /&gt;
http://www.tech.org/~stuart/life/rules.html&lt;br /&gt;
&lt;br /&gt;
Interactive Essay: Exploring Emergence [http://llk.media.mit.edu/projects/emergence/life-intro.html The Facts of Life]&lt;br /&gt;
&lt;br /&gt;
John Conway's Game of Life - Applet by Edwin Martin&lt;br /&gt;
[http://www.bitstorm.org/gameoflife/]&lt;br /&gt;
&lt;br /&gt;
John Conway's Game of Life - Applet by Alan Hensel&lt;br /&gt;
[http://www.ibiblio.org/lifepatterns/]&lt;br /&gt;
&lt;br /&gt;
== Larger than Life ==&lt;br /&gt;
&lt;br /&gt;
Available in MJCell: Larger than Life&lt;br /&gt;
(an extension of the Game of Life to a larger radius or diameter)&lt;br /&gt;
[http://psoup.math.wisc.edu/mcell/mjcell/mjcell.html]&lt;br /&gt;
&lt;br /&gt;
== Scientists ==&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Stephen_Wolfram Stephen Wolfram] is the author &lt;br /&gt;
of the computer program Mathematica, the founder of Wolfram Research, and mainly &lt;br /&gt;
known for his work about cellular automata.&lt;br /&gt;
Andrew Ilachinski works for the [http://www.cna.org/ Center for Naval Analyses (CNA)], USA.&lt;br /&gt;
&lt;br /&gt;
Tommaso Toffoli is a Professor in the Electrical and Computer Engineering Department at Boston University. He has done a lot of work on Cellular Automata (CA), and a large part of his CA work resembles Stephen Wolfram's and Edward Fredkin's approach of understanding physical systems through CA simulations. Some publications can be found [http://pm1.bu.edu/~tt/publ.html here].&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
* [http://cscs.umich.edu/~crshalizi/notebooks/cellular-automata.html C.R. Shalizi's Notebook Entry on CA]&lt;br /&gt;
* [http://mathworld.wolfram.com/CellularAutomaton.html Mathworld Entry for CA]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Cellular_automaton Main Wikipedia Entry for CA]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Conway's_Game_of_Life Main Wikipedia Entry for Conway's Game of Life]&lt;br /&gt;
&lt;br /&gt;
== Books == &lt;br /&gt;
&lt;br /&gt;
There are two major books on Cellular Automata:&lt;br /&gt;
&lt;br /&gt;
* Stephen Wolfram, ''A new kind of science'', Wolfram Media, Inc., 2002 [http://www.wolframscience.com/nksonline]&lt;br /&gt;
* Andrew Ilachinski, ''Cellular Automata: A Discrete Universe'', World Scientific, 2001, ISBN 9810246234&lt;br /&gt;
&lt;br /&gt;
Other, less popular books:&lt;br /&gt;
&lt;br /&gt;
* Tommaso Toffoli and Norman Margolus, ''Cellular Automata Machines: A New Environment for Modeling'', MIT Press, 1987, ISBN 0262200600&lt;br /&gt;
* Howard Gutowitz (editor), ''Cellular Automata: Theory and Experiment'', MIT Press, 1990, ISBN 0262570866&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Context</id>
		<title>Context</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Context"/>
				<updated>2011-02-11T21:44:07Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Reverted edits by Eboxytezi (Talk) to last version by Jfromm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The '''context''' of an entity is specified by the current&lt;br /&gt;
situation, the actual state of the environment, &lt;br /&gt;
and the circumstances and conditions which &amp;quot;surround&amp;quot; &lt;br /&gt;
it. For mobile entities (for example mobile [[Agent|agents]],&lt;br /&gt;
mobile robots or mobile phones) it is determined by the &lt;br /&gt;
current place and position in space and time.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
Wikipedia Entries for [http://en.wikipedia.org/wiki/Context Context],&lt;br /&gt;
[http://en.wikipedia.org/wiki/Context_adaptation Context_adaptation], and&lt;br /&gt;
[http://en.wikipedia.org/wiki/Context_awareness Context_awareness]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Scalability</id>
		<title>Scalability</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Scalability"/>
				<updated>2011-02-11T21:44:03Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Reverted edits by Eboxytezi (Talk) to last version by Jfromm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Scalability''' describes how well a solution to some problem continues to &lt;br /&gt;
work as the size of the problem increases. The degree of scalability&lt;br /&gt;
determines how much a system can be expanded without performance degradation or &lt;br /&gt;
alteration of the internal structure of the system.&lt;br /&gt;
&lt;br /&gt;
== Definition == &lt;br /&gt;
&lt;br /&gt;
Scalability can be defined formally as the capability of a system or application &lt;br /&gt;
or product to continue to function well without performance loss under increasing &lt;br /&gt;
load by adding additional resources or instances of the system. Coulouris et al. &lt;br /&gt;
[http://www.cdk4.net/] define scalability as follows: &amp;quot;a distributed system is ''scalable'' &lt;br /&gt;
if the cost of adding a user is a constant amount in terms of the resources that must be added&amp;quot;.&lt;br /&gt;
The system can serve an increasing demand or a larger number of users by adding additional instances &lt;br /&gt;
or devices. The IEEE definition says it is &amp;quot;the ability to grow the power or capacity of a system by adding components&amp;quot; [http://www.ieeetcsc.org/content/tfcc-4-1-gray.shtml]. The key scalability technique &lt;br /&gt;
is, just as in [[Fault Tolerance|fault tolerance]], replication: a service can be replicated at many nodes to serve a larger demand. Scalability is also related to, and sometimes required for, [[Load Balancing|load balancing]]. A large-scale Internet application must be parallelized and replicated to scale well.&lt;br /&gt;
&lt;br /&gt;
Werner Vogels, CTO of Amazon.com, mentions on his personal weblog: &amp;quot;A service is said to be scalable &lt;br /&gt;
if when we increase the resources in a system, it results in increased performance in a manner &lt;br /&gt;
proportional to resources added.&amp;quot; (see [http://www.allthingsdistributed.com/2006/03/a_word_on_scalability.html]).&lt;br /&gt;
Scalability refers to the property of a system architecture which&lt;br /&gt;
determines the limit of the ability to grow and to scale up.&lt;br /&gt;
Ken Birman gives in his article &amp;quot;Can Web Services Scale Up?&amp;quot; the following definition &lt;br /&gt;
of a scalable system: &amp;quot;In a nutshell, a scalable system is one that can flexibly accommodate&lt;br /&gt;
growth in its client base. Such systems typically run on a clustered computer&lt;br /&gt;
or in a large data center and must be able to handle high loads or sudden&lt;br /&gt;
demand bursts and a vast number of users. They must reliably respond even&lt;br /&gt;
in the event of failures or reconfiguration. Ideally, they’re self-managed and&lt;br /&gt;
automate as many routine services such as backups and component upgrades as possible&amp;quot;.&lt;br /&gt;
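Vogels' proportionality criterion can be checked numerically. The following helper is a hypothetical sketch (the function name and numbers are invented, not taken from the cited sources): it compares observed throughput against ideal linear speedup as nodes are added.

```python
def scaling_efficiency(throughput, nodes):
    """Observed speedup divided by ideal linear speedup; 1.0 means
    perfectly proportional scaling in Vogels' sense."""
    per_node = throughput[0] / nodes[0]      # per-node throughput at the smallest size
    return [t / (n * per_node) for t, n in zip(throughput, nodes)]

# Throughput measured at 1, 2 and 4 nodes (made-up numbers):
efficiencies = scaling_efficiency([100, 190, 340], [1, 2, 4])
# efficiencies -> [1.0, 0.95, 0.85]: scaling degrades as nodes are added
```

An efficiency well below 1.0 at larger node counts signals that adding resources no longer yields a proportional performance increase.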
&lt;br /&gt;
== Ways and Means == &lt;br /&gt;
&lt;br /&gt;
The keys to scalability are the [http://en.wikipedia.org/wiki/Kiss_principle KISS principle]&lt;br /&gt;
and the four &amp;quot;S&amp;quot;s: keep it simple, stateless, scriptable and small.&lt;br /&gt;
'''Simple''' components have their advantages. &lt;br /&gt;
It is well-known in software engineering that simple&lt;br /&gt;
components and simple processes offer fewer opportunities&lt;br /&gt;
for errors and mistakes. The less code you write, the less can go wrong &lt;br /&gt;
and the less there is to maintain. Hotmail engineer Phil Smoot said&lt;br /&gt;
in an [http://acmqueue.com/modules.php?name=Content&amp;amp;pa=showpage&amp;amp;pid=355 ACM interview] &lt;br /&gt;
(ACM queue, Vol. 3 No. 10, December/January 2005-2006):  &lt;br /&gt;
&amp;quot;New hires tend to want to do complex things,&lt;br /&gt;
but we know that complex things break in complex ways. The veterans&lt;br /&gt;
want simple designs, with simple interfaces and simple constructs&lt;br /&gt;
that are easy to understand and debug and easy to put back together&lt;br /&gt;
after they break.&amp;quot; A lot of relatively small and simple commodity &lt;br /&gt;
machines will not only be cheaper than a few big and expensive machines, they&lt;br /&gt;
also scale much better and offer a better fault-tolerance.&lt;br /&gt;
&lt;br /&gt;
Statelessness is often the essential key.&lt;br /&gt;
A '''stateless''' system design enables high scalability.&lt;br /&gt;
If servers, objects, components or EJBs are stateless &lt;br /&gt;
and have no private state, they can be replicated easily.&lt;br /&gt;
[[Redundancy]] and replication become very difficult &lt;br /&gt;
subjects if the corresponding entities or objects are stateful:&lt;br /&gt;
the state must be consistent among all replicated instances &lt;br /&gt;
(consistency), and an access should either affect all or none &lt;br /&gt;
of the replicated objects (atomicity).&lt;br /&gt;
&lt;br /&gt;
Stateless means simple processes, and simple solutions are&lt;br /&gt;
always good. Communication with Web Servers over pure &lt;br /&gt;
HTTP is stateless - probably one reason why the world-wide &lt;br /&gt;
web is very scalable and successful.&lt;br /&gt;
Phil Smoot said about the MSN service in the ACM interview:&lt;br /&gt;
&amp;quot;In general, we try to keep no session affinity between a&lt;br /&gt;
Web server that is performing a given page paint&lt;br /&gt;
and the middle-tier servers that manage the transactions&lt;br /&gt;
against the underlying data stores.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
To achieve statelessness, anything that needs to be &lt;br /&gt;
stateful can be stored in the database layer, for instance. &lt;br /&gt;
This is sometimes described as &amp;quot;push all state &lt;br /&gt;
down to the database&amp;quot;.&lt;br /&gt;
A typical request in such a logical three tier architecture&lt;br /&gt;
triggers the following actions: &amp;quot;loading state for a set of &lt;br /&gt;
objects from the database, operating on them, pushing their &lt;br /&gt;
state back down into the database (if needed), writing the &lt;br /&gt;
response, and then getting the hell out of there i.e. releasing &lt;br /&gt;
all references to objects loaded for this request, leaving them &lt;br /&gt;
for garbage collection&amp;quot;, see [http://naeblis.cx/rtomayko/2005/05/28/ibm-poop-heads].&lt;br /&gt;
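The request cycle quoted above can be sketched as follows. This is a hedged illustration only; the store and handler names are invented, and a plain dictionary stands in for the database tier.

```python
def handle_request(store, key, update):
    state = dict(store.get(key, {}))   # 1. load state from the database tier
    state = update(state)              # 2. operate on the objects
    store[key] = state                 # 3. push state back down into the database
    return state                       # 4. write the response; keep no references,
                                       #    leaving the objects for garbage collection

store = {}                             # stands in for the database tier
add_item = lambda s: {**s, "items": s.get("items", 0) + 1}
handle_request(store, "cart:42", add_item)
```

Because the handler holds no state between requests, any number of identical handlers can serve requests against the same store.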
&lt;br /&gt;
'''Scriptable''' applications are easy to automate.&lt;br /&gt;
Hotmail engineer Phil Smoot said in an ACM interview:&lt;br /&gt;
&amp;quot;Our operation group never wants to rely on any&lt;br /&gt;
sort of user interface [or complex GUI]. Everything&lt;br /&gt;
has to be scriptable and run from some sort of command&lt;br /&gt;
line. That's the only way you're going to be able&lt;br /&gt;
to execute scripts and gather the results over&lt;br /&gt;
thousands of machines&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
In a large system, you have to assume that everything&lt;br /&gt;
is going to fail, whether software or hardware.&lt;br /&gt;
Failure of components in any small-scale system is the exception, not &lt;br /&gt;
the rule, whereas failure of components in any large-scale system is &lt;br /&gt;
the rule, not the exception. Therefore you have to design &amp;quot;for failure&amp;quot; &lt;br /&gt;
in any large-scale system, i.e. you have to design the system as &lt;br /&gt;
if any component could fail at any time.&lt;br /&gt;
Thus an important means to achieve scalability is to make &lt;br /&gt;
[[Self-Star Properties|self-* properties]] a part of the system &lt;br /&gt;
(self-configuration, self-management, self-inspection, self-repair); the&lt;br /&gt;
system has to observe, heal and regenerate itself constantly.&lt;br /&gt;
There are completely new problems on massive scales&lt;br /&gt;
(the probability of failures and faults increases,&lt;br /&gt;
and there is always some node that fails, thus&lt;br /&gt;
assume that nodes fail and try to keep the system &lt;br /&gt;
healthy through continuous refresh and recovery-oriented computing).&lt;br /&gt;
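A minimal "design for failure" sketch (illustrative, not from the article): treat every call as something that can fail at any time, and retry with exponential backoff instead of assuming success.

```python
import time

def call_with_retry(op, attempts=5, base_delay=0.01):
    """Call op(), retrying transient failures with exponential backoff."""
    for i in range(attempts):
        try:
            return op()                         # any component may fail...
        except Exception:
            if i == attempts - 1:
                raise                           # ...give up after the last attempt
            time.sleep(base_delay * (2 ** i))   # back off, then try again
```

Real large-scale systems combine such retries with health checks and replacement of failed nodes, so that recovery is continuous rather than exceptional.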
&lt;br /&gt;
== Multi Tier Architecture == &lt;br /&gt;
&lt;br /&gt;
A physical three tier architecture alone is not necessarily&lt;br /&gt;
more scalable than a two tier architecture, because it&lt;br /&gt;
expands in the wrong direction and can cause &amp;quot;remote object&lt;br /&gt;
hell&amp;quot;. This is illustrated nicely by Ryan Tomayko in his blog, see [http://naeblis.cx/rtomayko/2005/05/28/ibm-poop-heads]&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Many large enterprise web applications tried really hard to implement a Physical Three Tier Architecture, or they did in the beginning. The idea is that you have a physical presentation tier (usually JSP, ASP, or some other *SP) that talks to a physical app tier via some form of remote method invocation (usually EJB/RMI, CORBA, DCOM) that talks to a physical database tier (usually Oracle, DB2, MS-SQL Server). The proposed benefits of this approach is that you can scale out (i.e. add more boxes) to any of the physical tiers as needed.&lt;br /&gt;
&lt;br /&gt;
Great, right? Well, no. It turns out this is a horrible, horrible, horrible way of building large applications and no one has ever actually implemented it successful. If anyone has implemented it successfully, they immediately shat their pants when they realized how much surface area and moving parts they would then be keeping an eye on.&lt;br /&gt;
The main problem with this architecture is the physical app box in the middle. We call it the remote object circle of hell. This is where the tool vendors solve all kinds of interesting what if type problems using extremely sophisticated techniques, which introduce one thousand actual real world problems, which the tool vendors happily solve, which introduces one thousand more real problems, ad infinitum... It's hard to develop, deploy, test, maintain, evolve; it eats souls, kills kittens, and hates freedom and democracy.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== Examples and Applications ==&lt;br /&gt;
&lt;br /&gt;
Open source software is an important factor in achieving&lt;br /&gt;
scalability. Yahoo uses open source software - such as FreeBSD, &lt;br /&gt;
Apache, Python, PHP, Perl, and MySQL - to achieve it.&lt;br /&gt;
Like Yahoo, Google has academic roots and &lt;br /&gt;
an open source attitude.&lt;br /&gt;
&lt;br /&gt;
Massively scalable architectures are rare. They can be found&lt;br /&gt;
in the proprietary systems of the large IT companies, which&lt;br /&gt;
have literally thousands of servers, millions of users and &lt;br /&gt;
billions of transactions per day. The question of whether&lt;br /&gt;
you get better performance by adding more and more&lt;br /&gt;
boxes and computers is of crucial importance to these&lt;br /&gt;
companies. &lt;br /&gt;
&lt;br /&gt;
'''Google''' has a special design of its hardware infrastructure - Google uses a highly redundant cluster architecture of custom-made commodity desktop PCs - and its own Google File System (GFS). The principle is to use lots of relatively small and inexpensive machines instead of a few big and expensive machines. Like eBay, Google also has functional server pools (for instance web servers, index servers and document servers). Google has used Python extensively since the beginning, and they &lt;br /&gt;
have even hired the Python creator Guido van Rossum. Many components of the Google spider and &lt;br /&gt;
search engine seem to be written in Python.&lt;br /&gt;
&lt;br /&gt;
'''Amazon''' mainly uses its own proprietary custom software, written in C/C++, Java, SQL and Perl on Linux servers, and partially Oracle's Real Application Clusters (RAC) software. It uses &amp;quot;a homegrown message queuing architecture and Web services to wire together its collection of internally written applications.&amp;quot; (see [http://www.baselinemag.com/article2/0,3959,1455549,00.asp]) Amazon's large-scale system is built out of many different custom-made applications or &amp;quot;services&amp;quot;, about 100-150. Each of these applications has different requirements with respect to availability and consistency, for example the order service (taking an order), the customer service (updating customer records), the reviewer service (storing customer reviews), or the catalog service (checking the availability of a particular item). Amazon uses a massively parallel and modular architecture: each small piece of business functionality is encapsulated by a separate, isolated service, and each service is responsible for the corresponding data and data management. Amazon also uses the autonomous &amp;quot;team method&amp;quot;: the same team of developers that has built and developed a service is also responsible for operating it. It can use the tools and methods it wants, as long as the desired functionality is delivered. Finally, Amazon uses a simple [http://www.amazon.com/gp/browse.html/104-4296443-9698339?node=13584001&amp;amp;no=14256891 queue service] to connect the applications of the autonomous teams; it offers a reliable, highly scalable hosted queue for buffering messages between distributed application components. Components of the huge distributed application are decoupled with messages and queues so that they run independently.&lt;br /&gt;
       &lt;br /&gt;
'''Yahoo''' has its own page rendering [http://patft.uspto.gov/netacgi/nph-Parser?u=/netahtml/srchnum.htm&amp;amp;Sect1=PTO1&amp;amp;Sect2=HITOFF&amp;amp;p=1&amp;amp;r=1&amp;amp;l=50&amp;amp;f=G&amp;amp;d=PALL&amp;amp;s1=5983227.WKU.&amp;amp;OS=PN/5983227&amp;amp;RS=PN/5983227 patent]. Like Microsoft's Hotmail service, Yahoo uses the FreeBSD operating system [http://www.serverwatch.com/stypes/servers/article.php/15915_1299361], besides the Apache Web Server and other open source software. Similar to Google, Yahoo uses open source software extensively, for example Python for its groups site. The Yahoo online groups, a comprehensive public archive of Internet mailing lists, were originally implemented in pure Python. &lt;br /&gt;
&lt;br /&gt;
'''Microsoft''' MSN's services include e-mail, Instant Messenger, news, weather, sport, etc. The Hotmail e-mail service is one of MSN's larger services and relies on more than 10,000 servers spread around the globe: they have special mail servers and storage servers. They use of course Microsoft&lt;br /&gt;
products (what else), &amp;quot;clusters&amp;quot; as a unit which can be built in and out on demand, and scriptable applications which can be automated.&lt;br /&gt;
&lt;br /&gt;
'''eBay''' has used J2EE technologies and IBM WebSphere [[Application Server]] since version 3 of its architecture. They emphasize stateless design and functional server pools (partitioning of application servers based on use cases). They don't seem to use entity beans very much. &amp;quot;In general, the approach that eBay is alluding to (and Google has confirmed) is that architectures that consist of pools or farms of machines dedicated on a use-case basis will provide better scalability and availability as compared to a few behemoth machines.&amp;quot; [http://www.manageability.org/blog/j2ee?b_start:int=5]&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* General&lt;br /&gt;
&lt;br /&gt;
[http://www.allthingsdistributed.com/2006/03/a_word_on_scalability.html Werner Vogels: A word on scalability]&lt;br /&gt;
&lt;br /&gt;
* eBay:&lt;br /&gt;
&lt;br /&gt;
[http://www.manageability.org/blog/j2ee?b_start:int=5 Nuggets of Wisdom from eBay's Architecture]&lt;br /&gt;
&lt;br /&gt;
[http://www.sun.com/service/about/success/recent/Sun_eBay6-2_forWeb.pdf eBay Creates Technology Architecture for the Future]&lt;br /&gt;
&lt;br /&gt;
* Amazon&lt;br /&gt;
&lt;br /&gt;
[http://www.baselinemag.com/article2/0,3959,1455549,00.asp Amazon.com at LinuxWorld: All Linux, All the Time]&lt;br /&gt;
&lt;br /&gt;
* Google: &lt;br /&gt;
&lt;br /&gt;
[http://www.google.com/corporate/facts.html Google's facts]&lt;br /&gt;
&lt;br /&gt;
[http://www.networkworld.com/newsletters/accel/2001/00991542.html Google's secrets (from 2001)]&lt;br /&gt;
&lt;br /&gt;
[http://insight.zdnet.co.uk/hardware/servers/0,39020445,39175560,00.htm The magic that makes Google tick] (ZDNet 2004)&lt;br /&gt;
&lt;br /&gt;
[http://www.fastcompany.com/magazine/69/google.html How Google Grows]&lt;br /&gt;
&lt;br /&gt;
[http://labs.google.com/papers/gfs.html The Google File System]&lt;br /&gt;
&lt;br /&gt;
[http://labs.google.com/papers/googlecluster.html Web Search For A Planet: The Google Cluster Architecture]&lt;br /&gt;
&lt;br /&gt;
* IBM:&lt;br /&gt;
&lt;br /&gt;
[http://www-128.ibm.com/developerworks/websphere/library/techarticles/hipods/scalability.html Design for Scalability - an Update]&lt;br /&gt;
&lt;br /&gt;
== Papers ==&lt;br /&gt;
&lt;br /&gt;
* [http://www.cs.cornell.edu/projects/quicksilver/public_pdfs/webtech2.pdf Can Web Services Scale Up?], Ken Birman&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Applied Principles]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	<entry>
		<id>https://wiki.cas-group.net/index.php?title=Segregation_Model</id>
		<title>Segregation Model</title>
		<link rel="alternate" type="text/html" href="https://wiki.cas-group.net/index.php?title=Segregation_Model"/>
				<updated>2011-02-11T21:43:59Z</updated>
		
		<summary type="html">&lt;p&gt;Admin: Reverted edits by Eboxytezi (Talk) to last version by Jfromm&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Thomas Schelling's '''model of segregation''' is a classic study of the effects of local decisions on the global dynamics of housing segregation patterns. It describes the emergence of segregated areas or &amp;quot;ghettos&amp;quot;. In the model residents are described as agents. Agents with mild preferences for same-type neighbors, but without preferences for segregated neighborhoods, can end up producing complete segregation.&lt;br /&gt;
&lt;br /&gt;
It is one of the first [[Agent-Based Model|agent-based models]] at all, where agents represent people and agent interactions represent a socially relevant process. In 1971, Thomas Schelling published an article dealing with racial dynamics called &amp;quot;Models of Segregation&amp;quot;. In this paper he showed that a small preference for one's neighbors to be of the same color could lead to total segregation. He used coins on graph paper to demonstrate his theory by placing pennies and nickels in different patterns on the &amp;quot;board&amp;quot; and then moving them one by one if they were in an &amp;quot;unhappy&amp;quot; situation. The positive feedback cycle of segregation - prejudice - in-group preference can be found in most human populations, with great variation in what are regarded as meaningful differences – gender, age, race, ethnicity, language, sexual preference, religion, etc. Once a cycle of separation-prejudice-discrimination-separation has begun, it has a self-sustaining momentum.&lt;br /&gt;
&lt;br /&gt;
In the model, agents interact only locally, with their direct neighbors. Each agent is willing to stay in a neighborhood where most people are of another color, on the condition that at least 37.5% of its neighbors have its own color. More specifically, Schelling uses the following '''rules''': &lt;br /&gt;
&lt;br /&gt;
* an agent with one or two neighbors will try to move if there is not at least one neighbor of the same color &lt;br /&gt;
* an agent with three to five neighbors needs at least two neighbors of the same color &lt;br /&gt;
* an agent with six to eight neighbors wants at least three neighbors of the same color &lt;br /&gt;
&lt;br /&gt;
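The three rules above translate directly into code. This is an illustrative sketch only (Schelling famously used coins on graph paper, not a program); the function name is invented.

```python
def wants_to_move(my_color, neighbor_colors):
    """Apply Schelling's rule: True if the agent is unhappy and tries to move."""
    same = sum(1 for c in neighbor_colors if c == my_color)
    n = len(neighbor_colors)
    if n <= 2:
        return same < 1      # one or two neighbors: need at least one alike
    if n <= 5:
        return same < 2      # three to five neighbors: need at least two alike
    return same < 3          # six to eight neighbors: need at least three alike
```

With a full neighborhood of eight, the threshold of three alike corresponds to the 37.5% mentioned above.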
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* NetLogo model of Schelling's [http://ccl.northwestern.edu/netlogo/models/Segregation Segregation Model]&lt;br /&gt;
&lt;br /&gt;
[[Category:Agent-Based Model]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>	</entry>

	</feed>