Distributed computing is a model for solving massive computational problems using a large number of computers, organized into clusters and embedded in a distributed telecommunications infrastructure. We have previously published information on grid computing, which is one type of distributed computing. A distributed system is a collection of physically separate computers connected by a communications network. Each machine has its own hardware and software components, but the user perceives the collection as a single system, with no need to know which resources reside on which individual machine. The user accesses remote resources, for example through remote procedure calls (RPC), in the same way as local resources; equivalently, a distributed system is a group of computers that cooperate through software to achieve a common goal.
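The RPC idea can be sketched with Python's built-in `xmlrpc` modules: the client calls `proxy.add(2, 3)` exactly as if it were a local function, while the call actually executes in the server process. The service name `add` is an illustrative choice, not part of any real system described here.

```python
# Minimal RPC sketch: a remote function is invoked through a proxy
# object with the same syntax as a local call.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def add(x, y):
    return x + y

# Bind to port 0 so the OS picks a free port.
server = SimpleXMLRPCServer(("localhost", 0), logRequests=False)
port = server.server_address[1]
server.register_function(add, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

proxy = ServerProxy(f"http://localhost:{port}")
result = proxy.add(2, 3)  # looks like a local call, runs on the server
server.shutdown()
```

In a real distributed system the server would live on a different host, but the client code would look the same; that is the access transparency the paragraph above describes.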
Features of Distributed Computing
Distributed systems need to be highly reliable: if one component of the system fails, another component should be able to take over its work. This property is called fault tolerance. The size of a distributed computing system can also vary widely: it may comprise tens of hosts on a local area network, hundreds of hosts on a metropolitan area network, or thousands or even millions of hosts across the Internet. The ability to grow in this way is called scalability.
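One common building block for fault tolerance is failure detection by heartbeat: each host periodically reports that it is alive, and a monitor marks a host as failed when its heartbeat is overdue so that work can be routed away from it. The sketch below assumes hypothetical host names and a 5-second timeout.

```python
# Heartbeat-based failure detection sketch (host names and the
# timeout value are illustrative, not from any real system).
import time

class HeartbeatMonitor:
    def __init__(self, timeout=5.0):
        self.timeout = timeout
        self.last_seen = {}

    def heartbeat(self, host, now=None):
        """Record that `host` reported itself alive at time `now`."""
        self.last_seen[host] = time.monotonic() if now is None else now

    def alive_hosts(self, now=None):
        """Hosts whose last heartbeat falls within the timeout window."""
        now = time.monotonic() if now is None else now
        return {h for h, t in self.last_seen.items()
                if now - t <= self.timeout}

monitor = HeartbeatMonitor(timeout=5.0)
monitor.heartbeat("host-a", now=0.0)
monitor.heartbeat("host-b", now=0.0)
monitor.heartbeat("host-a", now=10.0)   # host-b never reports again
alive = monitor.alive_hosts(now=12.0)   # only host-a is still alive
```

Once a host drops out of the alive set, the system can reassign its tasks to the remaining hosts, which is the replacement behaviour described above.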
Distributed computing has some basic features:
---
- To each user, the system should appear similar to a centralized system.
- Security is handled internally by the distributed system.
- It runs on multiple computers.
- It runs multiple copies of the same operating system, or different operating systems that provide the same services.
- It provides a convenient working environment for users.
- It depends on networks (LAN, MAN, WAN, etc.).
- It is compatible with a varied range of devices.
- Transparency: the use of multiple processors and remote access should be invisible to the user.
- Interaction between the machines is possible.
- It supports multiple users and operating systems.
Objective and Classification of Distributed Computing
Distributed computing is designed to solve problems too big for any single supercomputer or mainframe, while retaining the flexibility to work on many smaller problems. A grid is therefore naturally a multiuser environment, so secure authorization techniques are essential before remote users are allowed to control computing resources.
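The authorization step might be sketched as an allow list of credential hashes: a node stores only hashes of authorized tokens and checks a remote user's credential before granting access to its resources. The token values here are purely illustrative, and a real grid would use certificate-based mechanisms rather than shared secrets.

```python
# Authorization sketch: check a presented token against a set of
# stored hashes before granting control of local resources.
# Token values are hypothetical examples.
import hashlib

def _digest(token):
    return hashlib.sha256(token.encode()).hexdigest()

AUTHORIZED = {_digest("alice-secret")}  # hypothetical pre-shared token

def authorize(token):
    """Return True only if the presented token is on the allow list."""
    return _digest(token) in AUTHORIZED

ok = authorize("alice-secret")    # known credential is accepted
bad = authorize("mallory-guess")  # unknown credential is rejected
```

Storing hashes rather than the tokens themselves means that a compromised node does not leak usable credentials for the rest of the grid.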
In terms of functionality, grids are classified into computational grids and data grids. XML-based web services provide a way to access various services and applications in a distributed environment. Recently, grid computing and web services have been converging to expose the grid itself as a web service. The architecture is defined by the Open Grid Services Architecture (OGSA). The Globus Toolkit version 3.0, currently in alpha, is a reference implementation of the OGSA standard.
Distributed computing offers a way to tackle grand challenges such as protein folding and drug discovery, financial modeling, simulation of earthquakes, floods, and other natural disasters, and weather and climate modeling. It also provides a way to use an organization's information technology resources optimally.
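The common pattern behind such problems is splitting one large computation into many independent work units and farming them out to workers, then combining the partial results. In this sketch, local threads stand in for remote hosts, and `simulate_unit` is a placeholder for a real simulation job.

```python
# Work-unit distribution sketch: independent units are processed in
# parallel and their partial results combined. Threads stand in for
# the remote hosts a real grid would use.
from concurrent.futures import ThreadPoolExecutor

def simulate_unit(unit):
    """Placeholder for one expensive simulation work unit."""
    return unit * unit

work_units = range(10)
with ThreadPoolExecutor(max_workers=4) as pool:
    partial_results = list(pool.map(simulate_unit, work_units))

total = sum(partial_results)  # combine the partial results
```

Because the units share no state, they can run on any available host in any order, which is exactly what makes these problems a good fit for distributed computing.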
Another method of building supercomputer-class systems is clustering. A cluster, or computer cluster, is a group of relatively low-cost computers interconnected by a high-speed network (often gigabit optical fiber) and by software that distributes the workload among the machines. These systems usually share a single data storage center. Clusters have the advantage of redundancy: if the main node fails, a secondary node is triggered and takes over its work, a behaviour known as failover.
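The failover behaviour can be sketched as a dispatcher that sends each request to the first node that is still responding: while the primary is up it handles the work, and when it stops responding the standby takes over. The `Node` class and node names are illustrative, not a real cluster API.

```python
# Cluster failover sketch: requests go to the primary node until it
# fails, then transparently to the standby. All names are hypothetical.
class Node:
    def __init__(self, name, alive=True):
        self.name = name
        self.alive = alive

    def handle(self, request):
        if not self.alive:
            raise ConnectionError(f"{self.name} is down")
        return f"{self.name}:{request}"

def dispatch(nodes, request):
    """Send the request to the first node that is still alive."""
    for node in nodes:
        try:
            return node.handle(request)
        except ConnectionError:
            continue  # fail over to the next node
    raise RuntimeError("cluster unavailable")

primary = Node("node-1")
standby = Node("node-2")
first = dispatch([primary, standby], "job-a")   # handled by node-1
primary.alive = False                           # primary node fails
second = dispatch([primary, standby], "job-b")  # standby takes over
```

From the client's point of view nothing changes when the primary fails, which is the point of running the standby at all.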