Load balancing is the process of splitting requests among multiple resources, with the division dictated by some metric. We have covered the related fundamentals before: we explained Scalability in Cloud Computing, we explained why Scalability and Service Continuity Are Not Equivalent in Cloud Computing, and, last but not least, Redundancy, which has also been discussed in detail in Redundancy in Computing. A similarly named overview article also exists – Cloud Computing, Load balancing, Scalability, Redundancy and Performance. In essence, learning and knowing these basics can help you in practical life.
An ideal system's capacity increases linearly as hardware is added. In such a system, if we have one machine and add a second, capacity should theoretically double; adding a third machine should triple it. This ability is called "the ability to scale". From the same ideal point of view, the system is not crippled by the loss of a server: losing a server should simply decrease the system's capacity by the amount that server contributed. This property is called redundancy. Both scalability and redundancy are usually obtained through load balancing.
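The linear-scaling idea above can be sketched in a few lines of Python. The per-machine capacity figure is purely hypothetical, chosen only to illustrate the arithmetic:

```python
# Illustrative sketch of linear scaling and redundancy.
# PER_MACHINE_CAPACITY is an assumed figure, not a real benchmark.
PER_MACHINE_CAPACITY = 1000  # requests/second one machine can serve

def cluster_capacity(machines: int) -> int:
    """In an ideal linearly scaling system, total capacity is
    proportional to the number of machines."""
    return machines * PER_MACHINE_CAPACITY

# Scalability: a second machine doubles capacity, a third triples it.
assert cluster_capacity(2) == 2 * cluster_capacity(1)
assert cluster_capacity(3) == 3 * cluster_capacity(1)

# Redundancy: losing one machine removes only that machine's share.
assert cluster_capacity(3) - cluster_capacity(2) == PER_MACHINE_CAPACITY
```

Real systems rarely scale perfectly linearly, but this is the ideal that load balancing aims to approximate.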
Load balancing is the process of splitting requests among multiple resources, with the division dictated by some metric (random, round robin, weighted random based on each machine's capacity, etc. – for details read Types of Load Balancing Technology) and by the current state of the resources. Load must be balanced between user requests and your web servers, but it must also be balanced at every tier to achieve full scalability and redundancy for your system. A moderately large system might balance load at three levels: from the users to your web servers, from your web servers to an internal platform tier, and finally from that tier to your database. There are several ways to implement load balancing.
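Two of the metrics just mentioned, round robin and weighted random, can be sketched as follows. The server addresses and weights are hypothetical, standing in for a real pool:

```python
import itertools
import random

SERVERS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # hypothetical backends

# Round robin: hand out servers in a fixed rotating order.
_rotation = itertools.cycle(SERVERS)

def round_robin() -> str:
    return next(_rotation)

# Weighted random: pick servers in proportion to assumed capacity,
# so a machine with weight 4 receives ~4x the traffic of weight 1.
WEIGHTS = {"10.0.0.1": 4, "10.0.0.2": 2, "10.0.0.3": 1}

def weighted_random() -> str:
    return random.choices(list(WEIGHTS), weights=list(WEIGHTS.values()), k=1)[0]

# Six round-robin picks visit each of the three servers exactly twice.
picks = [round_robin() for _ in range(6)]
assert picks == SERVERS * 2
```

A production balancer additionally consults the current state of each resource, skipping machines that fail health checks rather than blindly following the rotation.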
---
Load Balancing with Dedicated Hardware
The most expensive, but also highest-end, option for load balancing is to buy a dedicated hardware load balancer, for example a Citrix NetScaler. While such appliances can solve many kinds of problems, they are really expensive and not easy to configure. Even big companies with generous budgets generally try to avoid building their entire load-balancing setup on dedicated hardware; they use it only as the first point of contact for user requests entering their infrastructure, and rely on other mechanisms for load balancing within their network.
Load Balancing Software
If buying dedicated hardware seems like too great an expense, you can opt for a hybrid approach with so-called load-balancer software. HAProxy is a good example of this approach. It runs locally on each of your boxes, and each service you want to balance gets a dedicated local port. For example, your pool of web machines could be reachable via localhost:9000, your database read pool via localhost:9001, and your database write pool via localhost:9002. HAProxy performs health checks, removing machines from those pools and returning them according to your configuration, and balances requests among the machines in each pool. For most systems it is recommended to start with a software load balancer and then move to a smart client or to hardware as your needs evolve.
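A minimal sketch of what such an HAProxy configuration might look like for the web pool on localhost:9000 described above. The backend names, IP addresses, and health-check path are all assumptions for illustration, not values from a real deployment:

```
# haproxy.cfg fragment (sketch; backend names, IPs and /health path are hypothetical)
listen web_pool
    bind 127.0.0.1:9000
    mode http
    balance roundrobin
    option httpchk GET /health
    server web1 10.0.0.1:80 check
    server web2 10.0.0.2:80 check
```

The `check` keyword enables health checking, so a machine that stops answering is removed from the pool and re-added once it recovers, exactly the behavior described above. The read and write database pools on ports 9001 and 9002 would be declared as similar `listen` sections.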