Scaling PHP Applications

Moving a website or application to production has its own challenges, but seeing it gain real traction is a great achievement. Doesn't it always motivate us to see the visitor numbers going up on our website? Except, of course, when the traffic grows so high that it crashes our LAMP stack. For a growing e-commerce business, the cost of having the website offline is simply too high, and in many cases it can mean a colossal loss for the business.

However, you don't have to panic in a situation like this, because you can take precautions to make your PHP applications far more reliable and consistent. Yes, you've guessed it: what we're talking about is none other than "scalability".

In a nutshell, "scalability" is a characteristic of a system, model or function that describes its capability to cope and perform under an increased or expanding workload. A system that scales well will be able to maintain or even increase its level of performance or efficiency when tested by larger operational demands.

Basically, there are two ways to scale a system: vertical scaling, also known as scaling up, and horizontal scaling, also known as scaling out.

What is vertical scaling?

Vertical scaling essentially resizes your server with no change to your source code. It is the ability to increase the capacity of existing hardware or software by adding more resources, such as CPU, RAM or storage, to a single machine. Vertical scaling is limited by the fact that you can only get as big as the largest server available.

Having said that, we always like to provide an example that you can picture visually.

Imagine, if you will, an apartment building that has many rooms and floors where people move in and out all the time. In this apartment building, 200 spaces are available but not all are taken at one time. So, in a sense, the apartment scales vertically as more people come and there are rooms to accommodate them. As long as the 200-space capacity is not exceeded, life is good.

This could even apply to a restaurant. You have seen the signs that tell you how many people can be held in the establishment. As more patrons come in, more tables may be set up and more chairs added (scaling vertically). However, once capacity is reached, no more patrons can fit. You can only be as big as the building and patio of the restaurant. This is much like your cloud environment, where you can add more hardware to the existing machine (RAM and hard drive space) but you are limited by the capacity of the actual machine.

Vertical Scaling

What is horizontal scaling?

Horizontal scalability, on the other hand, is the ability to increase capacity by connecting multiple hardware or software entities so that they work as a single logical unit.

When servers are clustered, the original server is scaled out horizontally. If a cluster requires more resources to improve performance and provide high availability, an administrator can scale out by adding more servers to the cluster.

An important advantage of horizontal scalability is that it can provide administrators with the ability to increase capacity on the fly. Another advantage is that in theory, horizontal scalability is only limited by how many entities can be connected successfully. The distributed storage system Cassandra, for example, runs on top of hundreds of commodity nodes spread across different data centers. Because the commodity hardware is scaled out horizontally, Cassandra is fault tolerant and does not have a single point of failure (SPoF).

Horizontal Scaling
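
For a PHP application in particular, scaling out only works if individual web servers do not keep user state to themselves; otherwise a request that lands on a different server in the cluster loses its session. As a minimal sketch, assuming the phpredis extension is installed and a Redis instance is reachable at 127.0.0.1:6379 (both are assumptions made for illustration, not requirements named in this article), sessions can be moved to a shared store so that any server can handle any request:

    <?php
    // Sketch: point PHP's session handler at a shared Redis store so every web
    // server in the cluster sees the same session data (phpredis assumed).
    ini_set('session.save_handler', 'redis');
    ini_set('session.save_path', 'tcp://127.0.0.1:6379');

    session_start();

    // The counter increments consistently no matter which server handles the request.
    $_SESSION['visits'] = ($_SESSION['visits'] ?? 0) + 1;
    echo "Visits this session: {$_SESSION['visits']}\n";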

Role of a load balancer in scaling

A load balancer is a device that acts as a reverse proxy and distributes network or application traffic across a number of servers. Load balancers are used to increase capacity (concurrent users) and reliability of applications. They improve the overall performance of applications by decreasing the burden on servers associated with managing and maintaining application and network sessions, as well as by performing application-specific tasks.

Load balancers are generally grouped into two categories: Layer 4 and Layer 7. Layer 4 load balancers act upon data found in network and transport layer protocols (IP, TCP, UDP). Layer 7 load balancers distribute requests based upon data found in application layer protocols such as HTTP.

Both types of load balancers receive requests and distribute them to a particular server based on a configured algorithm; a short round-robin sketch follows the list below. Some industry-standard algorithms are:

  • Round robin
  • Weighted round robin
  • Least connections
  • Least response time
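
To make the idea concrete, here is a purely illustrative PHP sketch of round robin. Real load balancers such as NGINX or HAProxy implement this inside the proxy itself, and the backend addresses below are made up for the example:

    <?php
    // Illustrative round-robin scheduler: each call to pick() returns the next
    // backend in turn, cycling through the pool.
    class RoundRobinBalancer
    {
        private array $backends;
        private int $next = 0;

        public function __construct(array $backends)
        {
            $this->backends = $backends;
        }

        public function pick(): string
        {
            $backend = $this->backends[$this->next];
            $this->next = ($this->next + 1) % count($this->backends);
            return $backend;
        }
    }

    $balancer = new RoundRobinBalancer(['10.0.0.1:80', '10.0.0.2:80', '10.0.0.3:80']);
    for ($i = 0; $i < 6; $i++) {
        echo $balancer->pick() . "\n"; // 10.0.0.1, 10.0.0.2, 10.0.0.3, then repeats
    }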

Layer 7 load balancers can further distribute requests based on application-specific data such as HTTP headers, cookies, or data within the application message itself, such as the value of a specific parameter.
Load balancers ensure reliability and availability by monitoring the “health” of applications and only sending requests to servers and applications that can respond in a timely manner.
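
As an illustration of such a health check, a PHP application can expose a lightweight status script for the load balancer to poll. This is only a sketch: the script name, the placeholder database credentials and the choice of checking a MySQL connection are assumptions made for the example, not requirements of any particular load balancer.

    <?php
    // healthcheck.php (hypothetical name): the load balancer polls this script and
    // takes the server out of rotation while it returns a non-200 status.
    try {
        // Placeholder credentials; replace with whatever dependency the app relies on.
        $pdo = new PDO('mysql:host=127.0.0.1;dbname=app', 'app_user', 'app_password', [
            PDO::ATTR_TIMEOUT => 2,
            PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
        ]);
        $pdo->query('SELECT 1');

        http_response_code(200);
        echo 'OK';
    } catch (Throwable $e) {
        // Signal the load balancer to stop sending traffic to this server.
        http_response_code(503);
        echo 'UNAVAILABLE';
    }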
