Ask the Experts: Making the transition from standalone servers to server clusters
Question: I'm at the point in our organization where load balancing across multiple servers is really becoming necessary. Making the transition from standalone servers to server clusters is a daunting task, to say the least. What's the best way to make such a change? Is it possible to start small and increase the "cluster" as we go? What services do well across multiple servers, and which do not? --Cory Whiteman of Atlanta, GA
Kalyana Krishna Chadalavada, Director – Systems & Technology at HPC Systems, Inc. responds: The best way to introduce clusters into your infrastructure is to roll out a pilot implementation. In most cases, it is possible to start small and expand as needs grow. However, this is the view from 1,000 feet above the ground. A lot of applications do support clustered configurations; databases were the first to do so, and file servers, email servers, and web servers followed shortly after. For applications that are not cluster-aware themselves, load balancing can be achieved by introducing a new device, such as an IP load balancer, into the network.
The specifics of implementation will vary based on the goals you are trying to achieve. It is also important to note that not all services scale indefinitely as you scale the available resources. Simply put, adding 10 servers will not necessarily improve the performance by 10 times.
The good news is that most vendors provide pre-configured systems for cluster-aware applications like Oracle, SQL Server, Lustre, PVFS and more.
James Ward of Adobe Systems responds: One of the most difficult things to deal with in clustered environments is replication of state. The more stateless your application is, the easier it will be to cluster. If you can layer a cluster of stateless services behind a single server that handles the stateful work, your job might be easier.
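To illustrate the idea, here is a minimal sketch of a stateless handler in Python: everything it needs arrives with the request, so any node behind a balancer can answer it. The application name and query parameter are made up for the example.

    # A stateless WSGI handler: no server-side session is consulted, so the
    # request can be served by any node in the pool.
    from wsgiref.simple_server import make_server
    from urllib.parse import parse_qs

    def greet_app(environ, start_response):
        # Everything the handler needs is carried in the request itself.
        params = parse_qs(environ.get("QUERY_STRING", ""))
        name = params.get("name", ["world"])[0]
        body = f"Hello, {name}\n".encode("utf-8")
        start_response("200 OK", [("Content-Type", "text/plain"),
                                  ("Content-Length", str(len(body)))])
        return [body]

    if __name__ == "__main__":
        # Run the same code on every node; a load balancer can pick any of them.
        make_server("", 8000, greet_app).serve_forever()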
David Thomas of dthomasdigital responds: Given today's computing environments, clustering can help many organizations make better use of existing resources. It truly can be a daunting task, but a good plan goes a long way. Pick the Linux distribution that's right for your organization's needs, ideally one for which some expertise already exists within your organization or your outside support group. Think about additional cooling and electricity requirements, as well as security and a robust backup plan. Do all of this up front and you'll be glad you did later.
Linux clustering is very scalable, so starting small and growing is thinking in the right direction. Just remember to design your cluster with scalability as your first requirement, and you should have no problem adding resources as needed.
I like to think in terms of protocols when working with clustered services. Protocols such as HTTP, DNS, and SMTP handle clustered environments quite well. Start with web servers, then databases; as you get more comfortable working in a clustered environment, move on to FTP, DNS, and email servers.
Clustering has many advantages, such as high availability and load balancing, and a cluster can even be configured for high-performance computing. If planned properly, a Linux cluster can be quite an asset to any organization.
Bill Childers, an IT Manager in Silicon Valley responds: Well, Cory, load balancing is a great way to scale a service and get added capacity. Traditionally, people have allocated more CPU and RAM to a service (scaling vertically), but this only goes so far. By spreading the load of the service across multiple machines (scaling horizontally), you pick up more fault tolerance as well as increased capacity for that service. A great way to scale horizontally is through the use of a load balancer.
Let's answer the last part of your question first -- "What services do well across multiple servers, and which do not?" Most read-only services or services that can perform a write in a single transaction lend themselves well to load balancing, so things like HTTP, LDAP reads, and DNS lookups work very well when deployed behind a load balancer. Write-intensive operations, such as database updates, don't work well behind a load balancer (unless your application can mine data from different database servers or you have shared storage). Any SSL-enabled service (such as HTTPS) can pose a special case when load balancing, because the "cn" (common name) attribute in the SSL certificate must match the fully-qualified domain name (FQDN) that the client is making the request of. There are ways around this but they're best left to a network architect.
The best way to make the change really depends on your network architecture and service deployment. I do recommend starting small -- hopefully you have a lab environment where you can test this out before you plop it into your production environment. It is possible to roll out load balancing with minimal disruption to your production services, if you plan properly and your environment allows for it. For example, let's assume you have a basic web server and you wish to move to a cluster of three web servers behind a load balancer. You would set up the new web servers on the same subnet, start their services, and verify that they are functional. Then you could place the load balancer on the same subnet as the web servers and give it the virtual IP (VIP) you want to present to the world as your "www.mydomain.com" address. Configure the load balancer to route traffic between the web servers, and then set up a DNS record like "test.mydomain.com" pointing to the load balancer's VIP. At this point (assuming your web servers are set up to serve traffic for test.mydomain.com as well), you can browse to test.mydomain.com and your requests should be served through the load balancer. This gives you an opportunity to run tests and verify that things are working. Once you are satisfied, simply change your www.mydomain.com DNS record to point to the load balancer's VIP, and your service will switch over to being load balanced. In the event something goes wrong, you can roll the change back quickly as well.
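If you want a quick sanity check before flipping DNS, a short script along these lines can hit the balancer's VIP directly while sending the hostname you care about. This is only a sketch; the VIP address (192.0.2.10) and hostnames are the hypothetical ones from the example above.

    # Pre-cutover smoke test: talk to the load balancer's VIP directly,
    # sending the Host header the web servers are configured to answer.
    import http.client

    VIP = "192.0.2.10"              # load balancer virtual IP (example address)
    HOSTNAME = "test.mydomain.com"  # name the pool is set up to serve

    for i in range(6):  # a handful of requests should rotate across the pool
        conn = http.client.HTTPConnection(VIP, 80, timeout=5)
        conn.request("GET", "/", headers={"Host": HOSTNAME})
        resp = conn.getresponse()
        print(i, resp.status, resp.getheader("Server"))
        conn.close()

If every request comes back with a 200, you can be reasonably confident the www.mydomain.com cutover will behave the same way.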
Good luck!
Chris Stark of the University of Hawaii College of Education responds: Hi Cory. Being a great sysadmin is all about making informed decisions. You asked, "What is the best way to transition from stand-alone servers to clusters?" The best way is to step out of the server room for a bit and lay down some solid plans. Your plans should be informed by performance data that you have collected over the course of the previous weeks, months, or even year. And your plans should look to the future to ensure that your investment of time, energy, and equipment is not just a "band-aid" fix or a dirty hack.
You will probably need to involve the management of your organization in this process, as they'll surely want a hand in the direction of the organization's IT infrastructure. Ask the following questions (but these are just a start -- you'll need to come up with your own, too)...
What are the organization's most important services?
How much does it cost the organization if these core services are unavailable for a given amount of time?
What auxiliary services are necessary to support the core services?
What is the current resource utilization on the critical servers?
How many users are using the critical services at a given time?
How many users do we project will be using the core services 1/3/5 years from now?
Use the answers to these questions (as well as the ones you come up with) to help shape your plans and justify your expenses. Though it may seem like an obvious step, far too many IT operations I've encountered have no written plans anywhere, leaving the organizations in bad shape when personnel inevitably changes. Furthermore, proper planning helps to avoid unnecessary server sprawl and provides an important benchmark for your IT operation's performance objectives.
I'm going to skip your second question for now, and go directly to your third, "What services do well across multiple servers, and which do not?". Since your question wasn't specific with the services your organization offers, I'll go over a few common services: DNS, LDAP, database, web applications, file sharing, and email.
Services like BIND DNS, OpenLDAP, and MySQL have their own provisions for replication and clustering. For the most part, these services are easy to scale just by adding more physical boxes. To take the most advantage of the additional boxes, you may need to adjust some of your configurations.
Java-based web apps can generally be clustered if the servlet container supports clustering. I'm not a Java guy, but I'm pretty sure Tomcat and Glassfish both handle clustering without too much fuss.
Web applications written in PHP may need a bit of tweaking to handle load balancing (especially those involving login sessions). The key to clustered session management in PHP is storing the session data in a database rather than on the server's filesystem.
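The same idea, sketched in Python purely for illustration: keep session data in a shared store that every web node can reach, rather than in node-local files. Here sqlite3 stands in for whatever shared database the cluster would actually use, and the table layout is made up for the example.

    # Sessions live in a shared database, so a request can land on any node.
    import json
    import sqlite3

    conn = sqlite3.connect("sessions.db")  # in production: a shared DB server
    conn.execute("CREATE TABLE IF NOT EXISTS sessions (sid TEXT PRIMARY KEY, data TEXT)")

    def save_session(sid, data):
        conn.execute("INSERT OR REPLACE INTO sessions VALUES (?, ?)",
                     (sid, json.dumps(data)))
        conn.commit()

    def load_session(sid):
        row = conn.execute("SELECT data FROM sessions WHERE sid = ?", (sid,)).fetchone()
        return json.loads(row[0]) if row else {}

    save_session("abc123", {"user": "cory", "cart": [42]})
    print(load_session("abc123"))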
Since HTTP is stateless, web applications are particularly well-suited to load balancing. However, for best results, you should start your web developers on some sort of source code management tool such as Subversion. This will make it easy to keep the content on each node in the web cluster in sync.
Email and file sharing services are significantly more difficult to cluster and may require you to experiment with tricky configurations and NAS or SAN based storage.
So back to your second question, "Is it possible to start small and increase the 'cluster' as we go?". With some services, yes. However, when it comes to file sharing and email, the amount of new and expensive infrastructure likely means a substantially larger 'first step' into clustering.
I hope this helps you get on the right path. There's no silver bullet or magic formula for some of this stuff, so you will need to learn to rely on your network's performance data and your organization's plan for IT in order to make the right decisions.
Aloha.
Tim Chase responds: The most common case for a load-balancing request involves a database-backed web server. The first thing to do is to profile and find out where the bottleneck is. The bottleneck is likely the database or the dynamic HTML content generation (PHP, Python, Ruby, Perl, etc.). In some cases it may also be the static-content server or merely the network connection.
Additionally, are you currently running all the components (DB, static content, and dynamic content) off the same machine? Try splitting each of these onto its own server -- the divisions between them are clean, and separating them will prevent them from fighting for resources on the same machine. The Django folks have a nice writeup[1] on scaling these out.
Finally, is there shared state between transactions? A shared-nothing architecture keeps all state on the client, allowing any server in your pool to answer the request because the request is self-contained. For more, read up on REST[2]. If you rope yourself into maintaining shared state on the server, that one server can become your choke point.
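One common way to keep state on the client is to hand out a signed token and let every server verify it independently. The following is only a sketch of that pattern; the secret key and token format are placeholders, not anyone's production scheme.

    # State travels with the client as a signed token; any server that knows
    # the shared secret can verify it, so no shared session storage is needed.
    import base64
    import hashlib
    import hmac
    import json

    SECRET = b"shared-secret-known-to-all-servers"  # placeholder value

    def issue_token(state):
        payload = base64.urlsafe_b64encode(json.dumps(state).encode())
        sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
        return payload.decode() + "." + sig

    def read_token(token):
        payload, sig = token.rsplit(".", 1)
        expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            raise ValueError("tampered token")
        return json.loads(base64.urlsafe_b64decode(payload))

    token = issue_token({"user": "cory", "page": 3})
    print(read_token(token))   # any server with the same secret can do this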
Database
It may be as simple as an explosion in the number of queries, or bringing back volumes of data from the DB server only to filter them locally. Query optimization (doing all the sorting/filtering/grouping on the server side) can help. Hundreds (or even thousands) of small queries that are round-tripped to the DB can often be rolled into a single round-trip query. So start with smart SQL.
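Here is a toy illustration of that round-trip problem, using an in-memory SQLite database; the table and column names are invented for the example.

    # The classic "hundreds of small queries" pattern versus one round trip.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    db.executemany("INSERT INTO users VALUES (?, ?)",
                   [(i, f"user{i}") for i in range(1, 1001)])

    wanted = list(range(1, 501))

    # Slow: one query per id -- 500 round trips to the database server.
    names_slow = [db.execute("SELECT name FROM users WHERE id = ?", (i,)).fetchone()[0]
                  for i in wanted]

    # Better: a single query does the filtering on the server side.
    placeholders = ",".join("?" for _ in wanted)
    names_fast = [row[0] for row in
                  db.execute(f"SELECT name FROM users WHERE id IN ({placeholders})",
                             wanted)]

    assert sorted(names_slow) == sorted(names_fast)

Against a local SQLite file the difference is small; against a database on the other end of a network link, every eliminated round trip counts.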
If the majority of queries are read-only, database replication with master-slave works nicely (though you have to ensure that reads-immediately-after-writes come from the master server). For sites with higher write-volume, some databases also support master-master replication with various caveats regarding replication latencies. Read your database docs to learn more (both PostgreSQL and MySQL support database clustering). It also helps to know your application's tolerance for latency between writes and propagation. If you need up-to-the-minute data, your requirements are a lot greater. You have more options if you can tolerate higher latencies -- such as a blog-post slowly propagating across servers where it doesn't matter as much if folks on the other side of the globe have to wait a whole hour for it to appear.
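If you do split reads from writes, the routing rule can be quite small. The class below is only a sketch of the idea under the assumptions in the paragraph above (reads that immediately follow a write go back to the master); the connection objects are whatever DB-API connections you already have.

    # Rough read/write splitting for master-slave replication.
    import itertools

    class ReplicatedRouter:
        def __init__(self, master, replicas):
            self.master = master
            self.replicas = itertools.cycle(replicas)
            self.wrote_recently = False

        def execute_write(self, sql, params=()):
            # All writes go to the master.
            self.wrote_recently = True
            return self.master.execute(sql, params)

        def execute_read(self, sql, params=()):
            # Read-after-write: stay on the master until replication catches up.
            conn = self.master if self.wrote_recently else next(self.replicas)
            return conn.execute(sql, params)

A real implementation would also clear the wrote_recently flag once the replicas have caught up (or after a timeout), which is exactly where your application's tolerance for replication latency comes in.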
When data becomes gargantuan, it takes application-specific rewriting to scale. Options include:
- sharding: splitting a large table across multiple database servers
- partitioning: splitting along boundaries like a user or company, keeping their data on a single server, and using that user/company information to decide which server requests get routed to (a minimal routing sketch follows this list)
- using a backend like Google's BigTable or the open-source Hadoop project[3]
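As promised, here is a minimal routing sketch for the sharding/partitioning approaches above: hash a stable key (a company id, say) to decide which database server owns the rows. The server names are placeholders.

    # Pick a shard by hashing a stable key; all of that key's data lives there.
    import hashlib

    SHARDS = ["db1.example.com", "db2.example.com",
              "db3.example.com", "db4.example.com"]

    def shard_for(key):
        digest = hashlib.md5(str(key).encode()).hexdigest()
        return SHARDS[int(digest, 16) % len(SHARDS)]

    print(shard_for(42))      # all of company 42's data lives on this server
    print(shard_for("acme"))  # string keys work the same way

Note that a naive modulo scheme like this makes adding shards painful (most keys move), so consistent hashing or a lookup table is the usual next step.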
Dynamic Content Generation
This may involve a poor choice of algorithms or data structures. Most of these languages support profiling the generation of a web page -- use those timing facilities to find out where the bottleneck is. Are you using an O(N^2) algorithm where an O(N log N) or better algorithm would do? Is debugging (such as query logging) accidentally enabled in production? Are you performing synchronous actions (such as video transcoding, mass emailing, file conversion, or bulk data uploads) that would be better farmed out to an asynchronous process, perhaps even on another server?
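In Python, for example, the standard-library profiler will point at the most expensive calls; the render_page function below is just a stand-in for whatever builds your response.

    # Profile one page render and show the ten most expensive calls.
    import cProfile
    import pstats

    def render_page():
        # stand-in for the view/controller that builds the response
        return "".join(str(i) for i in range(100000))

    cProfile.run("render_page()", "page.prof")
    stats = pstats.Stats("page.prof")
    stats.sort_stats("cumulative").print_stats(10)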
Are your requests frequently hitting the database for content you can cache? Look into integrating memcached to hold common content for faster retrieval.
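A typical cache-aside pattern looks something like the sketch below; it assumes the pymemcache client and a memcached instance on localhost, and the database lookup is a placeholder.

    # Cache-aside: check memcached first, fall back to the database on a miss.
    from pymemcache.client.base import Client

    cache = Client(("localhost", 11211))

    def expensive_db_lookup(article_id):
        # placeholder for the real database query
        return f"article body for {article_id}"

    def get_article(article_id):
        key = f"article:{article_id}"
        cached = cache.get(key)
        if cached is not None:
            return cached.decode("utf-8")     # hit: skip the database entirely
        body = expensive_db_lookup(article_id)
        cache.set(key, body, expire=300)      # miss: cache it for five minutes
        return body

    print(get_article(7))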
Static Content Serving
If content is static, it can be replicated and balanced fairly easily. Check your web server configuration (Apache, lighttpd, nginx, and TUX are all popular candidates) to make sure it's optimized. Simple DNS round-robin can distribute requests across a pool of servers.
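You can see what a round-robin name currently hands out with a few lines of Python; the hostname here is only an example, so substitute your own static-content pool.

    # List the addresses a round-robin DNS name currently resolves to.
    import socket

    addrs = {info[4][0] for info in
             socket.getaddrinfo("static.example.com", 80,
                                proto=socket.IPPROTO_TCP)}
    print(sorted(addrs))   # each address is one member of the pool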
Network Connection
If the public-facing network connection is your bottleneck, your options are pretty limited:
1) get a fatter pipe
2) use DNS to distribute your servers across geographic locations
3) use a content-delivery network for cached content
If your content generation and network connection are fast, but dumping it onto the network is limited by users on the remote end (say, a low-end dial-up connection), you might investigate using something like perlbal[4] to spoon-feed the data, while your application server can resume replies to other requests.
"What's the best way to make such a change? Is it possible to start small and increase the "cluster" as we go?" As you solve one bottleneck, you'll find others below the surface. However a well-tiered architecture will allow you to treat a single "layer" as a black box. The contents of that layer may be a single layer or a cluster. Usually you can scale out each layer fairly transparently of the other layers.
"What services do well across multiple servers, and which do not?"
Scale well:
HTTP (especially with client-maintained state), Database, DNS, NTP, Compute-time, NNTP, Jabber, LDAP
May scale well with some extra work:
Mail (SMTP/IMAP/POP), firewalling, filtering, proxying, NFS file-serving, remote shells
Don't scale as well: SMB & FTP file-serving
Not sure: IRC & print serving
Hope this answers some of your questions as well as gives you additional questions to guide your search.
[1] the section on Scaling at http://www.djangobook.com/en/beta/chapter21/
[2] http://en.wikipedia.org/wiki/Representational_State_Transfer
[3] http://hadoop.apache.org/core/
>> Ask the Experts is a new weekly column featured exclusively on LinuxJournal.com.
>> Question for our experts? E-mail them.