Cluster computing is a form of computing in which a group of computers is linked together so that they can act as a single entity. People use computing clusters for a variety of reasons, from an inability to afford a single computer with the capability of a cluster to a desire to ensure that a computing system is always available. The precise date at which this technique was developed is unknown, and there are competing claims for the invention credit; some people suggest that individual users probably developed it independently to meet their computing needs long before the technique was used in industrial settings.
One common reason to use cluster computing is to create redundancy in a computer network so that it remains available even when individual machines fail. A common application for this form of computing is hosting websites, with the cluster distributing the load of visitors across an array of machines so that many visitors can be accommodated. This technique is also used for gaming servers that serve large groups, to avoid lag and log-on problems.
High availability (HA) cluster computing is often used in this way, to create a redundant network that will be accessible to users at all times, with fail-safes in case parts of the cluster break down. Load balancing clusters are designed to handle a large volume of incoming requests, coordinating them in a way that maximizes efficiency and usability.
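The load balancing idea described above can be sketched in a few lines. This is a minimal round-robin scheme, the simplest possible strategy; the server names are hypothetical, and real load balancers also weigh factors like server health and current load.

```python
from itertools import cycle

# A minimal round-robin load balancer sketch.
# Server names are hypothetical placeholders.
class RoundRobinBalancer:
    def __init__(self, servers):
        self._pool = cycle(servers)

    def next_server(self):
        # Each incoming request is handed to the next server in rotation,
        # spreading visitors evenly across the cluster.
        return next(self._pool)

balancer = RoundRobinBalancer(["web-1", "web-2", "web-3"])
assignments = [balancer.next_server() for _ in range(6)]
print(assignments)  # ['web-1', 'web-2', 'web-3', 'web-1', 'web-2', 'web-3']
```

Each server receives an equal share of requests; an HA setup would additionally skip servers that fail a health check.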
Another application is in large projects that require high performance computing. Some computations are extremely complex, requiring multiple computers that can communicate quickly with each other, because changes in one part of the computation can affect the entire system. For example, the simulations used to test theories in meteorology are often run on computing clusters. Without a cluster, the calculation might be impossible, or it might take a very long time to complete.
Cluster computing can also be used to distribute a workload in the form of many small chunks of data, a technique known as grid computing. In this case, a single computer could not handle all the work, but many small computers can. The various @home projects use this technique to distribute a data processing workload across a huge network that includes many home computers, which pitch in to do work when they are idle.
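The chunked-workload pattern can be illustrated with a small sketch. Here a job is split into independent chunks and handed to a pool of workers; local threads stand in for the many separate machines of a real grid, and the summing task is only a placeholder for genuine analysis work.

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # Stand-in for real analysis work done on one small chunk of data.
    return sum(chunk)

def split(data, size):
    # Break one large workload into many small, independent chunks.
    return [data[i:i + size] for i in range(0, len(data), size)]

data = list(range(100))
chunks = split(data, 10)

# Workers (threads here, separate machines in a real grid) process
# chunks independently; partial results are combined at the end.
with ThreadPoolExecutor(max_workers=4) as pool:
    partial_results = list(pool.map(process_chunk, chunks))

total = sum(partial_results)
print(total)  # 4950, the same as summing the data directly
```

Because the chunks do not depend on one another, workers need no fast interconnect, which is what distinguishes this loosely coupled style from the tightly coupled simulations described earlier.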