Edge computing tries to address the performance, reliability, and scalability problems found in many larger Web applications. It emphasizes the role of servers at the "edge" between clients on one side and application servers on the other. These edge servers are located at the Internet Service Provider (ISP), between clients or customers and the application service provider. The proximity of the edge servers to the clients is useful for decreasing response time. The most frequently used part of the application is globally distributed so that it runs close to its end users, automatically providing capacity both when and where it is needed.
Although it sounds similar, it has nothing to do with the edge of chaos principle. It is a paradigm proposed by IBM, Akamai and Sun. In order to use "EdgeComputing", a site developer should - according to Akamai - split the application into two components: an edge component and an origin component. The code in the edge component is deployed onto data centers distributed around the world, whereas the origin component is deployed in the traditional manner within the central enterprise data center. Many client requests can be processed directly at the "edge" without Wide Area Network (WAN) communication. If the edge server can answer a request immediately, it does; otherwise it asks the origin component at the enterprise data center, stores the answer in a cache for later reuse, and delivers the response to the client.
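The edge server's behavior is essentially the cache-aside pattern. The following is a minimal sketch of that logic; `fetch_from_origin`, `cache`, and `handle_request` are hypothetical names chosen for illustration, not part of any Akamai or IBM API.

```python
# In-memory cache held at the edge server (hypothetical, for illustration).
cache = {}

def fetch_from_origin(path):
    # Stand-in for a WAN request to the origin component
    # in the central enterprise data center.
    return f"content of {path}"

def handle_request(path):
    # Answer immediately from the edge cache when possible.
    if path in cache:
        return cache[path]
    # Otherwise ask the origin, store the answer for later
    # reuse, and deliver the response to the client.
    response = fetch_from_origin(path)
    cache[path] = response
    return response
```

After the first request for a given path, subsequent requests are served entirely at the edge, with no WAN round trip to the origin.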