Clustering Overview

Clustering refers to grouping two or more tc Runtime instances so that they appear to work as a single server. A cluster provides:

  • Session replication. When a client, typically a browser, connects to a tc Runtime instance, tc Runtime creates a session object that it uses to manage all subsequent interaction between itself and that client. Depending on how the Web application is programmed, the session object can contain useful information, such as the user's security credentials or the current items in the user's shopping cart. If the tc Runtime instance is part of a cluster, the session is automatically copied to each member of the cluster group and is updated each time the session is modified, such as when the user adds a new item to their shopping cart. This means that if the first tc Runtime instance crashes, any other tc Runtime instance in the group can immediately take over the session without interruption, completely hiding the crash from the client, who continues to work as if nothing had happened. This capability greatly increases the availability of Web applications.

    You can use the VMware GemFire HTTP Session Management Module to provide HTTP session management for a tc Server cluster. The module provides tc Server templates that configure GemFire session management in either a peer-to-peer or a client/server configuration. In the peer-to-peer configuration, each tc Runtime instance becomes a GemFire peer; the peers use multicast to discover one another and replicate session data among themselves. In the client/server configuration, you run a GemFire cache server, and the tc Runtime instances replicate session data to that cache server. See the GemFire documentation for help obtaining the templates and configuring GemFire HTTP Session Management.

  • Context attribute replication. A context represents a Web application that is deployed to a tc Runtime instance. In the same way that client sessions can be replicated, the Web application context itself can also be replicated to all members of a cluster group.

A tc Runtime cluster can range from as few as two server instances hosted on the same computer to hundreds of tc Runtime instances hosted on many different computers running different operating systems.

Typically, you configure a tc Runtime cluster to use multicast for communication between member servers. The cluster is then uniquely identified by the combination of its multicast IP address and port. Each member of the cluster must be configured with the same multicast address and port so that the cluster can automatically discover each member and react appropriately if a member stops responding. You can create multiple clusters, such as one for testing and another for production, by configuring a different multicast address/port combination for each cluster.

In addition to creating a tc Runtime cluster, you might also want to configure a load balancer in front of the cluster to distribute incoming requests across the tc Runtime instances. Load balancing attempts to direct each request to the tc Runtime instance with the lightest load at that point in time. The load balancer can also detect when a tc Runtime instance has failed, in which case it stops directing requests to that instance until it restarts, further increasing the availability of tc Runtime.

See High-Level Steps for Creating and Using tc Runtime Clusters for the basic steps to get started with tc Runtime clusters.

Additional Cluster Documentation from Apache

For additional information about configuring tc Runtime clusters, see the Apache Tomcat Clustering/Session Replication HOW-TO.

High-Level Steps for Creating and Using tc Runtime Clusters

The following procedure outlines the main tasks you perform to create and configure a tc Runtime cluster from two or more tc Runtime instances.

  1. Prepare your Web applications so they can be deployed to a cluster and take full advantage of the tc Runtime clustering features. See Web Application Requirements for Using Session Replication.
  2. Be sure that you have correctly configured your network to enable multicast, which is the typical method of communication between cluster members. See Network Considerations.
  3. Configure your tc Runtime instances into a simple cluster using the default values for most of the configuration options. See Configuring a Simple tc Runtime Cluster.
  4. If the default configuration does not suit your needs, configure other cluster configuration options. See Advanced Cluster Configuration Options.
  5. Start your cluster by starting all the tc Runtime instances that make up the cluster group. You can do this manually, as described in "Starting and Stopping tc Runtime Instances" in Getting Started with VMware tc Server.

Web Application Requirements for Using Session Replication

In addition to configuring the cluster from a server administration point of view, make sure your Web application meets these requirements:

  • All servlet and JSP session data must be serializable. In Java terms, this means that every object you store in the session must implement the java.io.Serializable interface, and every field of such an object must itself be serializable or be declared transient.

  • tc Runtime uses cookies to track session state, which means that the Web application URLs for a particular session always look the same. If they do not, the tc Runtime instance creates a new session each time a client sends a request, which effectively disables session replication for that Web application.
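
    If your Web application runs on a Servlet 3.0 or later container, you can explicitly restrict session tracking to cookies in web.xml. The following is a minimal sketch using the standard Servlet 3.0 <session-config> element:

      <session-config>
          <tracking-mode>COOKIE</tracking-mode>
      </session-config>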

  • Configure your Web application to be distributable, that is, suitable for running in a distributed environment such as a tc Runtime cluster. You can do this in one of two ways:

    • Add the <distributable /> element to the web.xml deployment descriptor of your Web application; <distributable /> is a child element of the root <web-app> element. The web.xml file is located in the WEB-INF directory of your Web application. For example:

      <?xml version="1.0" encoding="UTF-8" ?>
      
      <web-app xmlns="http://java.sun.com/xml/ns/j2ee"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee 
                              http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd"
          version="2.4">
      
          <distributable />
      
          <display-name>HelloWorld Application</display-name>
          <servlet>
          ...
      </web-app>
      
    • If you do not want to change the web.xml deployment descriptor file of your Web application, you can use the tc Runtime-specific <Context distributable="true"> element to specify that one or all Web applications are distributable. You can specify this element in the CATALINA_BASE/conf/context.xml file if you want to make ALL Web applications of a particular tc Runtime instance distributable. For example:

      <?xml version="1.0" encoding="ISO-8859-1" ?>
      
      <Context distributable="true">
      ...
      </Context>
      

      You can also add this element to specific context files to narrow its scope. For details, see The Context Container.
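
      For example, to make a single Web application distributable without modifying it, you can place a per-application context file under CATALINA_BASE/conf/[enginename]/[hostname]/. A minimal sketch follows (the application name myapp is hypothetical):

      <!-- CATALINA_BASE/conf/Catalina/localhost/myapp.xml -->
      <Context distributable="true">
      </Context>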

  • To enable application context replication, specify that your application context use the org.apache.catalina.ha.context.ReplicatedContext context implementation, rather than the default (org.apache.catalina.core.StandardContext). As described in the preceding bullet, you can update the CATALINA_BASE/conf/context.xml file as shown:

    <?xml version="1.0" encoding="ISO-8859-1" ?>
    <Context distributable="true" className="org.apache.catalina.ha.context.ReplicatedContext">
    ...
    </Context>
    

Network Considerations

Be sure that multicast is working on each computer that hosts members of the tc Runtime cluster.

If the computers that host your tc Runtime cluster also host other applications that use multicast communications, be sure that the other applications do not use the same multicast address and port as the tc Runtime cluster. This precaution eliminates unnecessary processing of irrelevant messages by the tc Runtime cluster. Besides adding overhead and decreasing performance, such unnecessary processing can delay cluster communications, causing a cluster member to be marked as failed when in fact it is alive but its heartbeat messages are taking too long to arrive.

Configuring a Simple tc Runtime Cluster

In this section you set up a simple tc Runtime cluster that uses default values for most configuration options. A description of this default cluster configuration follows the procedure.

  1. For each tc Runtime instance that will be a member of the cluster, update its CATALINA_BASE/conf/server.xml file by adding a <Cluster> child element to the <Engine> element, as shown in the following example (only relevant sections are shown):

    <?xml version='1.0' encoding='utf-8'?>
    <Server port="-1" shutdown="SHUTDOWN">
      ...
      <Service name="Catalina">
        ...
        <Engine name="Catalina" defaultHost="localhost">
            <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
            ...
        </Engine> 
      </Service>
    </Server> 
    

    The server.xml file for many tc Runtime instances already contains a commented-out <Cluster> element; in that case, simply remove the comment tags.

    You can also add the <Cluster> element to a <Host> element of the server.xml file instead, which enables clustering only for the Web applications of that virtual host; placing it inside the <Engine> element enables clustering for all virtual hosts of the tc Runtime instance. When you add the <Cluster> element inside the <Engine> element, the cluster appends the host name of each session manager to the manager's name, so that two contexts that have the same name but belong to two different hosts remain distinguishable.
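
    For example, the following sketch shows the <Cluster> element nested in a <Host> element (assuming the default localhost virtual host):

    <Engine name="Catalina" defaultHost="localhost">
      <Host name="localhost" appBase="webapps">
          <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
      </Host>
    </Engine>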

  2. If you will run more than one tc Runtime instance on the same computer, be sure the various TCP/IP listen ports for each tc Runtime instance are unique. You configure the listen ports using the port and redirectPort attributes of the <Connector> element in the server.xml file. See Simple tc Runtime Configuration.
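
    For example, two instances on the same computer might use <Connector> configurations like the following (the port values are illustrative only):

    <!-- Instance 1: CATALINA_BASE/conf/server.xml -->
    <Connector port="8080" redirectPort="8443" protocol="HTTP/1.1" />

    <!-- Instance 2: CATALINA_BASE/conf/server.xml -->
    <Connector port="8081" redirectPort="8444" protocol="HTTP/1.1" />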

  3. If the cluster is hosted on more than one computer, time-synchronize the computers with the Network Time Protocol (NTP). See The Network Time Protocol.

The cluster that results from the preceding procedure has the following configuration:

  • The cluster is enabled with all-to-all session replication, in which a session that is modified on one member of the cluster is replicated to all other members, even members on which the application is not deployed. This is the recommended session replication scheme for small clusters; as the cluster gains members, VMware recommends a primary-secondary replication scheme instead, in which session data is replicated to a single backup member, and only to members on which the application is deployed. See Replicating a Session to a Single Backup Member.
  • The multicast address is 228.0.0.4.
  • The multicast port is 45564.
  • The members of the cluster send out heartbeats (to broadcast that they are alive and well) every 500 milliseconds.
  • If a heartbeat is not received from a member of the cluster after 3000 milliseconds, the cluster is notified and the member may be marked failed.
  • The IP address that members of the cluster broadcast to the other members is the local value of java.net.InetAddress.getLocalHost().getHostAddress().
  • The TCP/IP port that members use to listen for replication messages is the first available server socket in range 4000-4100.

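Taken together, these defaults correspond roughly to the following fully expanded <Cluster> element. This is a sketch of the implicit default configuration for reference only; you do not need to add it (the attribute values reflect the defaults listed above):

<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster">
   <Channel className="org.apache.catalina.tribes.group.GroupChannel">
      <Membership className="org.apache.catalina.tribes.membership.McastService"
                  address="228.0.0.4"
                  port="45564"
                  frequency="500"
                  dropTime="3000" />
      <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                address="auto"
                port="4000"
                autoBind="100" />
   </Channel>
</Cluster>
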
For additional detailed information about tc Runtime clusters and a description of the default cluster configuration, see the Clustering/Session Replication HOW-TO.

Advanced Cluster Configuration Options

This section describes a small subset of the cluster configuration options that are more advanced than those described in Configuring a Simple tc Runtime Cluster. Read this section if the default cluster values do not suit your needs.

In all cases the configuration requires you to add child elements or attributes to the basic <Cluster> element.

This section includes the following subsections:

  • Changing the Default Multicast Address and Port
  • Changing the Maximum Time After Which an Unresponsive Cluster Member Is Dropped
  • Replicating a Session to a Single Backup Member

tc Runtime clusters are highly configurable and this section describes only a few use cases. For more information, see Clustering/Session Replication HOW-TO.

Changing the Default Multicast Address and Port

The default multicast address and port of a cluster are 228.0.0.4 and 45564, respectively. Sometimes you need to change these values; for example, suppose you want to configure two clusters on the same computer, one for testing and one for production. The easiest way to set this up is to specify a different multicast address/port combination for each cluster.

To change the multicast address and port of a cluster, update the server.xml file of each tc Runtime instance that is a member of the cluster and add or update the <Membership> child element of the <Channel> element, which is itself a child element of the <Cluster> element. For example:

<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster">
   <Channel className="org.apache.catalina.tribes.group.GroupChannel">

      <Membership className="org.apache.catalina.tribes.membership.McastService"
                  address="228.0.0.5"
                  port="55564" /> 

   </Channel>

</Cluster>

Use the address and port attributes of the <Membership> element to set the multicast address and port; in the preceding example, the new values are 228.0.0.5 and 55564, respectively.

For more information on the <Membership> element, its default behavior, and the attributes you can set to further configure it, see The Cluster Membership Object.

Changing the Maximum Time After Which an Unresponsive Cluster Member Is Dropped

The default implementation of cluster group notification is built on multicast heartbeats sent as UDP packets to a multicast IP address. As described in the general cluster documentation, you group cluster members by specifying the same multicast address/port combination (either the default values or custom values). Each member then sends out a heartbeat at a given interval (the frequency); this heartbeat is used for dynamic discovery. The cluster membership listener listens for these heartbeats; if the listener does not receive a heartbeat from a node within a certain time frame (the drop time), the cluster considers the member suspect and notifies the channel so that it can take appropriate action.

The default frequency at which members send heartbeats (500 milliseconds) is typically adequate. On high-latency networks, you might want to increase the default value of the drop time (3000 milliseconds) to protect against false positives.

To change the drop time, update the server.xml file of each tc Runtime instance that is a member of the cluster and add or update the <Membership> child element of the <Channel> element, which is itself a child element of the <Cluster> element. For example:

<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster">
   <Channel className="org.apache.catalina.tribes.group.GroupChannel">
      <Membership className="org.apache.catalina.tribes.membership.McastService"
                  dropTime="6000" /> 
   </Channel>

</Cluster>

Use the dropTime attribute of the <Membership> element to set the drop time; in the preceding example, the new drop time value is 6000 milliseconds.
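
The same <Membership> element also controls how often heartbeats are sent. If you lengthen the drop time on a high-latency network, you might also lengthen the heartbeat interval by setting the frequency attribute (the value below is illustrative only):

<Membership className="org.apache.catalina.tribes.membership.McastService"
            frequency="1000"
            dropTime="6000" />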

For more information on the <Membership> element, its default behavior, and the attributes you can set to further configure it, see The Cluster Membership Object.

Replicating a Session to a Single Backup Member

The default cluster configuration uses the DeltaManager object to enable all-to-all session replication, which means that the cluster replicates session information (typically session deltas) to all the other nodes in the cluster, including nodes in which the application is not deployed. (In this context, a node refers to a tc Runtime instance that is a member of the cluster.) All-to-all replication works well for smaller clusters made up of just a few nodes. However, the DeltaManager requires that all nodes in the cluster be homogeneous: all nodes must deploy the same applications and be exact replicas.

Therefore, if you have configured a large cluster with many nodes, or you find the requirements of the DeltaManager too limiting, VMware recommends that you configure the cluster to replicate each session to just a single backup node by using the BackupManager object. The cluster ensures that the node to which it replicates also has the application deployed, and the location of the backup node is known to all nodes in the cluster. Finally, because each session is replicated to just one node, the cluster supports heterogeneous deployment.

To configure the use of a single backup node for session replication, add or update the <Manager> child element of the <Cluster> element in the server.xml file of each tc Runtime instance that is a member of the cluster, as shown in the following snippet:

<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster">
   <Manager className="org.apache.catalina.ha.session.BackupManager" /> 
</Cluster>

For additional information about the BackupManager, its default behavior, and the attributes you can set on the <Manager> element, see The ClusterManager Object.
