Enterprise IT departments face a conflicting set of priorities and constraints in the data center. The number of IT systems identified as critical to core business processes continues to increase as enterprises pursue higher levels of operational efficiency and business growth, and as awareness of the cost of downtime grows. IT departments are required to ensure the continuous operation of a growing number of these critical IT systems. Traditional solutions to these demands typically call for adding more servers to the data center.
At the same time, IT data centers are reaching their limits in terms of power, space, cooling and budget. Power costs from IT are expected to rise substantially over the next five years; many newer hardware products require more power; and energy costs continue to rise. With pressure on IT to reduce hardware, adding more computers is not an option. Even for those enterprises with data center capacity, adding expensive hardware runs counter to cost-cutting measures, a high priority for most CIOs.
The answer to many of these challenges could lie with shared-disk database clustering. Shared-disk database clustering can enhance performance and help reduce infrastructure costs. Although typically deployed with no more than four computers in the cluster, shared-disk clustering architectures may support dozens of computers (or nodes), each running one or more instances of a core database server. The systems simultaneously access the same data stored in a storage area network (SAN). Should one or more nodes in the cluster fail, others automatically take over.
Virtualization technology is widely used to maximize the use of server resources, and therefore help IT departments to eliminate under-utilized hardware. By combining the power of virtualization with database clustering, IT departments can reduce the number of servers running mission-critical systems while still ensuring their continuous operation. This can be realized by creating many virtual clusters running on a physical shared-disk cluster that implements virtualized resource management technology. Each virtual cluster acts as if it were its own independent cluster, having dedicated resources from the physical cluster, along with its own routing, load balancing and failover rules.
As Figure 1 demonstrates, virtualization within the cluster allows IT departments to better meet service level agreements (SLAs) in the following ways:
- Dividing resources among multiple applications to maximize hardware utilization
- Distributing system load to protect service levels through planned and unplanned downtime and periods of peak demand
- Intelligently moving connections from one instance of the cluster to another, all without affecting the application
Figure 1: Organization of Virtual Workloads and Virtual Clusters
Continuous Operation of Mission-Critical Servers
Applications have differing business priorities and therefore differing requirements for responsiveness and non-stop operation. Depending on the cluster solution, database administrators (DBAs) often have the ability to define service levels at the application level. DBAs accomplish this by assigning more or fewer resources to each application, as well as by selecting different load balancing and failover strategies depending on each application's required level of service.
These pre-established resource rules ensure that service is maintained through:
- Protecting against hardware and software failures: The odds are that at some point the software and hardware components on a given machine will fail. Virtualizing the cluster simplifies the task of defining which failover strategy the system will automatically apply when inevitable failures occur.
Applications can have different failover strategies depending on the level of service that needs to be maintained. For example, administrators can specify that if the Sales application loses some or all of its computing resources, it can share resources with the HR application. If those fail, it can share resources with the Finance application. However, the rules for the HR application may specify that if it loses its resources, none of the resources given to the Sales or Finance applications will be made available, because the HR application has a lower-priority service level.
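Priority rules like these can be modeled as an ordered per-application list of peers whose resources may be borrowed on failure. The sketch below is purely illustrative; the application names come from the example above, and the rule structure and function names are assumptions, not any vendor's actual API.

```python
# Hypothetical sketch: per-application failover rules. Each application
# maps to an ordered list of peers it may borrow resources from; an empty
# list means its rules forbid borrowing (lower-priority service level).
FAILOVER_RULES = {
    "sales": ["hr", "finance"],   # Sales may fail over to HR, then Finance
    "hr": [],                     # HR may not borrow from Sales or Finance
    "finance": ["hr"],
}

def failover_targets(app, available):
    """Return the applications whose resources `app` may share,
    in priority order, filtered to those currently available."""
    return [peer for peer in FAILOVER_RULES.get(app, []) if peer in available]

print(failover_targets("sales", {"hr", "finance"}))   # ['hr', 'finance']
print(failover_targets("hr", {"sales", "finance"}))   # []
```

In a real cluster these rules would live in the cluster manager's configuration rather than application code; the point is that failover behavior is declared per application in advance, so the system can act automatically when a failure occurs.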
- Operating during maintenance activities: Ideally, vendors should offer solutions in which IT departments can take a machine offline for maintenance without impacting performance. Virtual clusters simplify this because a DBA can specify a time at which a particular machine will go offline for maintenance. The virtual cluster then transparently moves all connections from that node to others.
- Handling periods of peak demand: Another advantage of virtual clusters, typically delivered through a load distribution feature, is the ability to minimize the response time of every database request. DBAs specify a load distribution policy at the virtual cluster level, using customizable load profiles that describe which performance metrics matter, including user connections, CPU utilization and I/O load. Load profiles dictate the thresholds at which the data management system should start distributing load to other areas of the cluster. Additionally, cluster offerings can take into account each connection's history and load type to determine whether returning it to the same instance will minimize response times.
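A load profile of the kind described above can be thought of as a set of per-metric thresholds that trigger redistribution. The following is a minimal sketch under assumed names and threshold values; real products express these profiles in their own configuration formats, not in application code.

```python
from dataclasses import dataclass

@dataclass
class LoadProfile:
    """Hypothetical load profile: thresholds on the metrics named above
    (user connections, CPU utilization, I/O load). Values are illustrative."""
    max_connections: int = 500
    max_cpu_pct: float = 85.0
    max_io_mbps: float = 200.0

    def should_redistribute(self, connections, cpu_pct, io_mbps):
        """True when any metric exceeds its threshold, signalling that new
        work should be routed to other nodes in the virtual cluster."""
        return (connections > self.max_connections
                or cpu_pct > self.max_cpu_pct
                or io_mbps > self.max_io_mbps)

profile = LoadProfile()
print(profile.should_redistribute(120, 92.5, 50.0))  # True: CPU over threshold
print(profile.should_redistribute(120, 40.0, 50.0))  # False: all within limits
```

Because the profile is declared at the virtual cluster level, each application's cluster can redistribute load on its own terms without affecting the others sharing the physical hardware.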
Consolidation of Mission-Critical Servers
With a platform for running mission-critical systems in place, IT departments can begin to reduce the hardware footprint and associated costs of their data centers. Without sacrificing service levels, virtual shared-disk clusters help enterprises reduce hardware costs in three different ways: by reducing redundant hardware, by eliminating underutilized hardware, and by reducing expensive hardware.
Figure 2: Reducing seven servers to four by using one server instead of two as standby and consolidating the underutilized servers into one
In reducing redundant hardware, as illustrated in Figure 2 (right), the cluster doesn't have to provide a standby node for each active node in the cluster. Instead, a few designated nodes can serve as standbys for all the others, or all the nodes can be active and also serve as standbys for each other. This reduces the number of standby machines required.
Figure 2 also shows how clustering can maximize the use of a computer's resources by enabling a number of servers to operate on each machine in the cluster. This can reduce the total number of servers by up to 80 percent.
Clusters can use much less expensive hardware in place of larger symmetric multiprocessing (SMP) machines because system reliability now depends on the failover potential of the cluster rather than the reliability of individual machines. The subsequent savings can be dramatic. For example, four $50,000 computers might replace a high-end machine that costs nearly $1 million. Additional savings accrue because the lower-cost computers are easier to maintain, and do not have expensive maintenance fees.
In summary, to meet the conflicting demands placed on them, enterprise IT departments require a carefully targeted, low-cost solution that can be adapted to the service level needs of individual applications and changed dynamically to meet new demands.
Combining virtualization technology with a shared-disk database cluster allows enterprises to take application availability to new levels while lowering infrastructure costs. With such solutions, enterprises overcome the conflicting usage and resource demands inherent in the industry. They will enjoy simplified cluster deployment and administration, and ultimately realize a higher ROI from their IT infrastructure.