Load Balancing
The steady growth of the Internet is causing many performance problems, including slow response times, network congestion, and disruption of services, caused either by normal system overload or by cyber attacks such as DDoS. The most widely used solution to minimize or solve these problems is Load Balancing.
Load balancing is the practice of dividing the work a computer has to do between two or more computers, so that more work gets done in the same amount of time and, in general, all users are served faster.
Load Balancing (sometimes also referred to as Network Load Balancing or Server Load Balancing) can also be described as the process of distributing service requests across a group of servers. This addresses several requirements that are becoming increasingly important in networks:
- Increased scalability: When content-intensive applications scale beyond the point where a single server can provide adequate processing power, it is increasingly important to have the flexibility to deploy additional servers quickly and transparently to end users;
- High performance: The highest performance is achieved when the processing power of servers is used intelligently. An advanced load balancing infrastructure can direct end-user service requests to the servers that are least busy and therefore capable of providing the fastest response time;
- High availability and disaster recovery: The third benefit of load balancing is its ability to improve application availability. If an application or server fails, load balancing can automatically redistribute end-user service requests to other servers within a server cluster or to servers in another location (a sketch of this least-busy selection and failover logic follows this list).
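To make the "least busy" selection and automatic failover described above concrete, here is a minimal sketch in Python. The Server class, the connection counts, and the example pool are illustrative assumptions, not any particular load balancer's API.

```python
# Minimal sketch of least-busy server selection with failover.
# The Server class and the example pool are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    active_connections: int
    healthy: bool = True

def pick_server(pool):
    """Return the healthy server with the fewest active connections."""
    candidates = [s for s in pool if s.healthy]   # failover: skip failed hosts
    if not candidates:
        raise RuntimeError("no healthy servers available")
    return min(candidates, key=lambda s: s.active_connections)

pool = [
    Server("web1", active_connections=12),
    Server("web2", active_connections=3),
    Server("web3", active_connections=7, healthy=False),  # failed host
]
print(pick_server(pool).name)  # -> "web2": least busy among healthy hosts
```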
On the Internet, companies whose Web sites get a great deal of traffic usually use load balancing. When a single Web server machine isn't enough to handle the traffic to a Web site, it's time to look into building a Web Farm that uses multiple machines on the network acting as a single server. In a Web Farm, services or applications can be installed onto multiple servers that are configured to share the workload. This type of configuration is a load-balanced cluster, which scales the performance of server-based programs, such as a Web server, by distributing client requests across multiple servers.
Load balancing technologies, commonly referred to as load balancers, receive incoming requests and redirect them to a specific host if necessary. The load-balanced hosts then respond concurrently to different client requests, even multiple requests from the same client. For example, a Web browser may obtain the multiple images within a single Web page from different hosts in the cluster. This distributes the load, speeds up processing, and shortens the response time to clients. The figure shows the basic components of load balancing.
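As a rough illustration of those components, the sketch below is a toy HTTP front end that forwards each GET request to the next host in a pool. The backend addresses and the listening port are placeholders; a real load balancer also handles errors, headers, timeouts, and other HTTP methods.

```python
# Toy load-balancing front end: accepts HTTP GETs and forwards each one
# to the next backend in round-robin order. Backend addresses are
# placeholders, not real hosts.
from http.server import BaseHTTPRequestHandler, HTTPServer
from itertools import cycle
from urllib.request import urlopen

BACKENDS = cycle(["http://10.0.0.1:8080", "http://10.0.0.2:8080"])

class Frontend(BaseHTTPRequestHandler):
    def do_GET(self):
        backend = next(BACKENDS)                    # choose the next host
        with urlopen(backend + self.path) as upstream:
            body = upstream.read()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)                      # relay the response

HTTPServer(("", 8000), Frontend).serve_forever()    # listen on port 8000
```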
Web Farms are an obvious choice when the limits of a single server have been exceeded, and Load Balancing can also provide benefits in the overload scenario. It's generally cheaper to solve a load problem with several mid-level machines than by buying one top-of-the-line, high-end machine. Even with server licenses involved, multiple low-end machines might provide a more cost-efficient solution.
Load balancing is also a way of making sure that the information on your server remains easily and efficiently accessible even during the busiest periods, by enabling multiple servers to respond to visitors with the same content depending on the load each server is under at the time. Users are therefore not affected by the demands other users place on a server, and the reliability of a Web site increases because requests are routed to other servers in the cluster when one of the servers is too busy or fails.
Load balancing can be implemented with hardware, software, or a combination of both, and is typically the main reason for computer server clustering. Remember the previous chart with all the main High Availability solutions? In the following posts I will go into detail on the ones used to create Load Balanced Solutions.
There are other approaches to address the aforementioned problems of system overload:
- Site mirroring
- Proxy servers
Upgrading the hardware is certainly another option, if we consider that today's hardware is incredibly capable and should be sufficient to handle all but the most demanding Web applications on a single server. Multiprocessor machines make a pretty powerful platform to run Web applications on, even if those high-end machines are rather pricey. While the big corporate websites won't run on a single machine, the vast majority of applications are likely to be able to comfortably serve millions of transactional hits a day from a single server, even with a single processor.
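As a back-of-the-envelope check on that claim (the traffic figures here are assumed, not measured), even several million hits a day averages out to a modest request rate once spread over 24 hours:

```python
# Back-of-the-envelope arithmetic with assumed traffic figures.
hits_per_day = 5_000_000
avg_rps = hits_per_day / (24 * 60 * 60)   # ~57.9 requests/second on average
peak_rps = 10 * avg_rps                   # assume a 10x peak-to-average ratio
print(f"average: {avg_rps:.1f} req/s, assumed peak: {peak_rps:.0f} req/s")
```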
Site mirroring
A mirror site is an exact copy of another website or datacenter that contains the same information as the original. Mirror sites are most commonly used to provide multiple sources of the same information, as a setup for backup and disaster recovery, and to balance the traffic load for numerous download requests on the Web.
Such "download mirrors" are often placed in different locations throughout the Internet with file servers that contain a duplicate set of files of another file server, thereby sharing the burden of distribution to ensure rapid availability of data when there is heavy demand.
Proxy servers
These are servers that all computers on the local network have to go through before accessing information on the Internet. By using a proxy server, an organization can improve network performance and filter what users connected to the network can access. There are several distinct kinds of proxy servers, but that is not the topic of this post. However, I will point out that proxy servers have two main purposes:
- Improve Performance: A proxy server improves Internet access speeds from a network primarily by using a caching system. Caching saves recently viewed Web sites, images, and files on a local hard drive so that they don't have to be downloaded from the Web again. While your Web browser might save recently viewed items on your computer, a proxy server caches everything accessed from the network;
- Filter Requests: The other main purpose of a proxy server is to filter what is allowed into the network. While HTTP, FTP, and secure protocols can all be filtered by a proxy server, HTTP is the most common, so the proxy server can limit which Web sites users on the network can access (a sketch combining both purposes follows this list).
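Here is a toy sketch of both purposes side by side: a fetch helper that serves repeat requests from an in-memory cache and refuses blocked hosts. The cache, the block list, and the URLs are illustrative assumptions, not a real proxy's configuration.

```python
# Toy proxy fetch combining both purposes: an in-memory dict stands in
# for the proxy's disk cache, and a block-list set stands in for its
# filtering policy. Hostnames and URLs are illustrative.
from urllib.parse import urlparse
from urllib.request import urlopen

BLOCKED_HOSTS = {"ads.example.com"}   # filtering: disallowed sites
cache = {}                            # caching: url -> response body

def proxy_fetch(url):
    host = urlparse(url).hostname
    if host in BLOCKED_HOSTS:
        raise PermissionError(f"{host} is blocked by proxy policy")
    if url not in cache:              # cache miss: download once
        cache[url] = urlopen(url).read()
    return cache[url]                 # cache hit: serve the stored copy

body = proxy_fetch("http://example.com/")   # downloaded on the first request
body = proxy_fetch("http://example.com/")   # served from the cache
```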