Organic Computing is a German initiative launched by the two German national computer societies, the GI (German Informatics Society) and the ITG (Informationstechnische Gesellschaft), together with a group of researchers from three German universities (Universität Hannover, Universität Karlsruhe and Universität Augsburg). It defines an Organic Computing system as "a technical system which adapts dynamically to the current conditions of its environment. It is self-organizing, self-optimizing, self-configuring, self-healing, self-protecting, self-describing, self-explaining and context-aware". It is therefore very similar to IBM's Autonomic Computing initiative, which is likewise based on systems with self-* properties. Like Autonomic Computing, it is a vision for future information processing systems.
While Autonomic Computing focuses mainly on servers, computing centers and large-scale Internet software, Organic Computing aims at the development of robust, flexible and highly adaptive embedded systems, ubiquitous computing and hardware (embedded processors, microcontrollers, etc.).
History - Organic IT vision
The idea of organic computing and autonomic computing goes back to research reports from Forrester. A report from April 2002 defines Organic IT as a "computing infrastructure built on cheap, redundant components that automatically shares and manages enterprise computing resources -- software, processors, storage, and networks -- across all applications within a datacenter".
The vision of Organic IT is based on a next-generation data center architecture and was the precursor of the organic computing and autonomic computing initiatives. Forrester proposed, for example, the "principle of least escalation": in a layered organic architecture with many management layers, the lowest capable level solves urgent problems immediately and informs the higher levels afterwards (similar to biological reflexes).
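The principle of least escalation can be sketched in a few lines of code. The following is a minimal illustration, not any actual Forrester or Organic IT design: the layer names, the `Layer` class and the `least_escalation` function are all hypothetical, chosen only to show the "act locally first, report upward afterwards" pattern.

```python
class Layer:
    """One management layer in a hypothetical layered organic architecture."""
    def __init__(self, name, can_handle):
        self.name = name
        self.can_handle = can_handle  # set of problem kinds this layer can solve itself

def least_escalation(layers, problem, log):
    """Solve `problem` at the lowest capable layer, then inform the layers above.

    `layers` is ordered lowest-first; like a biological reflex, the
    resolution happens locally and the report upward comes afterwards.
    """
    for i, layer in enumerate(layers):
        if problem in layer.can_handle:
            log.append(f"{layer.name} solved {problem}")   # immediate local action
            for higher in layers[i + 1:]:                  # deferred upward report
                log.append(f"{higher.name} informed of {problem}")
            return layer.name
    log.append(f"{problem} unresolved")                    # no layer was capable
    return None
```

With layers ordered node < cluster < datacenter, for example, a failover problem would be resolved at the cluster layer immediately, and the datacenter layer would only be informed afterwards.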
Like many of the later initiatives and visions, Organic IT envisioned an IT architecture which solves the basic problems of many distributed systems -- fault tolerance, failover, flexibility, robustness and scalability -- with the help of self-* properties and redundancy. Scalability, for instance, means that applications can scale up and down to match demand; robustness and fault tolerance (including failover and recovery) mean that an application can run non-stop, 24 hours a day and seven days a week; flexibility means automated server provisioning, application installation and maintenance.
The objective of Organic Computing is the technical application of principles observed in natural systems, in order to create biologically inspired, life-like computer systems. As computers and the tasks they perform become increasingly complex, researchers are looking to nature -- as model and as metaphor -- for inspiration.
There are many examples of how biology has already influenced computer science. Many attempts exist, for example, to develop artificial neural networks, evolvable hardware, evolutionary algorithms, nanoscale self-assembly, and security systems that mimic nature's immune systems (see the book "Imitation of Life" by Nancy Forbes).
The overall objectives of organic computing are the same goals as in biologically inspired computing:
- the use of biology as a metaphor or inspiration for the development of algorithms and systems;
- the construction of information processing systems that use biological materials or are modeled on biological processes, or both;
- the effort to understand how biological organisms "compute," or process information.
Biologically Inspired Computing
Another name for Organic Computing is biologically inspired computing or, for short, bio-inspired computing. Organic means having properties associated with living organisms; its original meaning is "part of or derived from living matter". Organic Computing is the use of the self-* principles found in organic, living and evolving systems (self-management, self-organization and self-healing) to reach scalability, robustness and autonomy. Organic systems grow, change, evolve, suffer illnesses and recover again. They are robust and flexible. If organic computing can identify and use some of the basic principles behind organic systems to reach similar properties in artificial systems, then this would be a success.
Biological evolution has managed to produce a wide variety of complex organisms and lifeforms that build, adapt, repair and reproduce themselves. There have always been similarities, exchange and overlap between the worlds of biology and computer science: the basic terminology of viruses and infection in the field of computer security is borrowed from biology, and modern biomedicine and molecular biology would not be possible at all without computers.
By studying biological phenomena such as brains, swarming insects, evolution and immune systems, scientists try to make computers do the same sorts of things and to reach the same degree of flexibility and robustness. Using ideas from biology to improve artificial systems can be a useful way to stimulate thought and to inspire new architectures. Biologically inspired computing methods and systems include:
- Reinforcement Learning
- Neural Networks
- Evolutionary Computing
- Swarm Intelligence and Collective Systems
- Artificial Immune Systems
- DNA Computing and Biological Hardware
- Biologically Inspired Robotics
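As a minimal illustration of one of these methods, the following sketch shows an evolutionary algorithm applied to the classic OneMax toy problem (maximizing the number of ones in a bitstring). All parameters and names here are illustrative choices, not taken from any specific system.

```python
import random

def evolve_onemax(length=20, pop_size=30, generations=100, seed=0):
    """Minimal evolutionary algorithm: evolve a bitstring toward all ones (OneMax).

    Selection keeps the fitter half of the population; mutation flips each
    bit independently with a small probability (roughly one flip per child).
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sum, reverse=True)       # fitness = number of ones
        survivors = pop[: pop_size // 2]      # truncation selection (elitist)
        children = [
            [bit ^ (rng.random() < 1.0 / length) for bit in parent]  # point mutation
            for parent in survivors
        ]
        pop = survivors + children
    return max(pop, key=sum)

best = evolve_onemax()  # after 100 generations, `best` is at or near all ones
```

Because the survivors are kept unmutated, the best fitness never decreases; this elitism is what lets such a simple loop converge reliably on a toy problem like OneMax.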
References
- D. Mange et al., "Toward Robust Integrated Circuits: The Embryonics Approach", Proceedings of the IEEE, vol. 88, no. 4 (2000), pp. 516-541
- Nancy Forbes, Imitation of Life: How Biology Is Inspiring Computing, The MIT Press, 2004, ISBN 0262062410