Application load balancing distributes workload across multiple servers. In the hosting industry, it is commonly used to balance HTTP(S) traffic over multiple nodes acting together as a web service. A load balancer lets you distribute traffic from a single access point to any number of servers using any number of protocols. As a result, the application load can be shared across many nodes rather than being limited to a single server, which increases performance during times of high activity. Load balancers also increase the reliability of your web application and allow you to build it with redundancy in mind: if one of your server nodes fails, traffic is automatically redistributed to your other nodes without any visible interruption of service for users.
Load balancers can keep your site up through traffic spikes and help it grow as it gains popularity. By limiting your points of failure, you increase your uptime. If you load balance between two or more identical nodes and one node experiences any kind of hardware or software failure, the traffic can be redistributed to the remaining nodes, keeping your site up.
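To make that failover behavior concrete, here is a minimal Python sketch of round-robin balancing with health-aware node selection. The backend addresses and class are illustrative only; a production setup would use a dedicated load balancer (hardware or software) rather than application code like this.

```python
import itertools

# Hypothetical backend pool; addresses are illustrative.
BACKENDS = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]

class RoundRobinBalancer:
    """Cycle requests across backends, skipping nodes that failed health checks."""

    def __init__(self, backends):
        self.backends = backends
        self.healthy = set(backends)
        self._ring = itertools.cycle(backends)

    def mark_down(self, backend):
        self.healthy.discard(backend)  # node failed a health check

    def mark_up(self, backend):
        self.healthy.add(backend)      # node recovered

    def next_backend(self):
        # Walk the ring at most once; return the first healthy node.
        for _ in range(len(self.backends)):
            candidate = next(self._ring)
            if candidate in self.healthy:
                return candidate
        raise RuntimeError("no healthy backends available")

lb = RoundRobinBalancer(BACKENDS)
lb.mark_down("10.0.0.2:8080")          # simulate a node failure
print([lb.next_backend() for _ in range(4)])
# Traffic keeps flowing over the two remaining nodes.
```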
However, the people responsible for application service levels and guarantees are still nervous about the prospect of a complete data center failure, so almost all mission-critical application environments require some sort of backup/business-continuity location. Such a location is usually geographically dispersed and set up as a fully redundant data center, configured for either hot or cold backup and fail-over.
Even with the most sophisticated redundant systems, loss of power remains the cause of more than 50% of all application outages. There are many possible causes of application downtime, but those caused by IT equipment have been virtually eliminated by the virtualization of the physical IT infrastructure. Because single points of failure in servers, storage, and networking systems have been minimized or eliminated, hardware failures rarely cause application downtime today. System software has seen similar improvements in both stability and self-recovery capabilities, making the “soft crash” a rare occurrence as well.
While “five nines” (99.999%) data center reliability equates to just 5.26 minutes of downtime per year, the effect on applications is substantially greater. Add to that the fact that more than 80% of disaster-recovery fail-overs to a secondary site do not complete successfully and usually require extensive human intervention, and the result is outages that often last for hours, or even days, causing extensive financial loss and reputational damage.
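The 5.26-minute figure is straightforward arithmetic; a quick Python check of the allowed downtime at several availability levels:

```python
# Allowed downtime per year for a given availability level.
for availability in (0.999, 0.9999, 0.99999):
    minutes = (1 - availability) * 365 * 24 * 60
    print(f"{availability:.3%} -> {minutes:6.2f} min/yr")
# 99.999% availability comes out to about 5.26 minutes per year.
```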
To maximize application uptime, you need to be proactive rather than reactive. Being proactive requires the means to associate IT infrastructure capacity with power dependability and quality, both within and across your data centers. You also need the ability to accommodate variable application workloads and service levels, and to shift applications from one data center to another, while continually making changes to avoid as many power-related problems as possible.
Fortunately, your existing IT infrastructure likely has the foundational elements needed to proactively mitigate potential problems and thereby minimize the application downtime they cause. All it takes is giving your application software one additional layer of abstraction.
This is where Software Defined Power (SDP) comes in. SDP creates a layer of abstraction that isolates applications from local power dependencies and maximizes application uptime by leveraging existing fail-over and load-balancing capabilities to shift application capacity across data centers, always drawing on the power with the highest availability, dependability, and quality.
SDP is implemented as an extension to your existing load-balanced and software-defined environments. It collects real-time power and IT usage data to understand demand, capacity, and the variability of each. Creating automated run-books that adjust each application's capacity up or down lets data center operators achieve maximum energy and cost savings by running exactly the capacity the applications need to support ongoing demand. Shutting down idle or backup servers and freeing them up as a pool of spare resources allows them to be shared by any application. Because primary and backup applications are load balanced, the hardware allocation for the primary application can be adjusted based on real-time demand, while the hardware allocation for the backup site can be kept to the minimum needed to support system management agents and automated patch management. This typically leaves 40-50% of hardware available for dynamic allocation, assuming a primary and fully populated backup site configuration. Last but not least, current customer implementations show that such adjustments, even across data centers, can be completed in less than 5 minutes, fully automated.
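To make the run-book idea concrete, the short Python sketch below grows or shrinks a hypothetical application's node allocation against a shared spare pool. The thresholds, class names, and demand signal are illustrative assumptions, not Power Assure's actual interfaces.

```python
from dataclasses import dataclass, field

# Illustrative thresholds; real run-books would be derived from each
# application's SLA. All names here are hypothetical.
SCALE_UP_UTIL = 0.75    # add capacity above 75% utilization
SCALE_DOWN_UTIL = 0.30  # release capacity below 30% utilization

@dataclass
class App:
    name: str
    nodes: list = field(default_factory=list)
    min_nodes: int = 2

@dataclass
class SparePool:
    servers: list  # powered-down servers shared by all applications

def runbook_step(app, utilization, pool):
    """One pass of a capacity run-book for a single application."""
    if utilization > SCALE_UP_UTIL and pool.servers:
        app.nodes.append(pool.servers.pop())   # power up a spare, join the LB pool
    elif utilization < SCALE_DOWN_UTIL and len(app.nodes) > app.min_nodes:
        pool.servers.append(app.nodes.pop())   # drain via the LB, power down

pool = SparePool(servers=["spare-1", "spare-2"])
web = App("web", nodes=["web-1", "web-2", "web-3"])
runbook_step(web, utilization=0.85, pool=pool)  # demand spike: grows to 4 nodes
runbook_step(web, utilization=0.20, pool=pool)  # demand drop: shrinks back
print(web.nodes, pool.servers)
```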
Once configured with the service level and other application requirements, Software Defined Power continuously and automatically optimizes resource levels, both within and between data centers. You can implement Software Defined Power in your enterprise data centers by following the steps below:
- Monitor in real time
  - Install data aggregation appliances that integrate with all facility and IT equipment and software in the data center
  - Gain real-time insight into power consumption and IT utilization metrics
- Integrate IT and Facilities
  - Bring IT and facilities data together on a central management platform for quick visualization
  - Create dashboards, use trending tools, and track KPIs to benchmark performance and generate reports
- Analyze Data Center Efficiency
  - Use application load forecasts, power pricing, and energy market intelligence to define operational procedures
  - Integrate reference data such as PAR4 into server upgrade, virtualization, and consolidation decisions
- Automate across IT and Facilities
  - Balance applications across data centers through automation and dynamic adjustments based on service-level requirements and variable application load levels (a sketch of this follows the list)
  - Gain ultimate reliability and flexibility in load management through continuous, dynamic application-level load shifting across data centers
  - Free applications from physical power dependencies by shifting application load to the data center currently experiencing the best availability, dependability, and quality of power
- Save costs through operational efficiency
  - Reduce power consumption by powering down unnecessary capacity in backup sites
  - Cut ongoing power consumption by 50% or more
- Participate in Energy Markets and Monetize the Data Center
  - Fund the Software Defined Power implementation through energy savings and incentives
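The cross-data-center balancing in the automation step above could, conceptually, reduce to scoring candidate sites on current power conditions and steering load toward the winner. The Python sketch below illustrates the idea; the site names, metrics, and weights are hypothetical, and the hand-off to the load balancer (e.g. GSLB/DNS weighting) is assumed rather than shown.

```python
# Hypothetical scoring of candidate data centers by power profile.
# Site names, metrics, and weights are illustrative assumptions.
SITES = {
    "dc-east": {"availability": 0.99999, "quality": 0.97, "price_kwh": 0.11},
    "dc-west": {"availability": 0.99990, "quality": 0.99, "price_kwh": 0.08},
}

def score(site):
    """Weight availability most, then power quality, then price."""
    m = SITES[site]
    cheapness = 1 - min(m["price_kwh"] / 0.20, 1.0)  # normalize against $0.20/kWh
    return 0.6 * m["availability"] + 0.3 * m["quality"] + 0.1 * cheapness

def pick_site():
    """Choose the data center with the best current power profile."""
    return max(SITES, key=score)

# The global load balancer would then be weighted toward the winning
# site; here we simply report the decision.
print("shift application load toward:", pick_site())
```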
Software Defined Power integrates with all common load-balanced environments and leverages their capabilities to distribute application load on the basis of application QoS requirements, power cost, and availability. It is an emerging solution for adjusting hardware capacity, migrating applications, and maintaining pools of spare capacity that can be allocated dynamically to applications as demand changes. It enhances your investments in IT, facilities, application monitoring, load balancing, virtualization, and system management by making it possible to avoid power-related issues, proactively moving applications to other environments and locations as a matter of routine and achieving the ultimate in application reliability.
Clemens Pfeiffer is the CTO of Power Assure and is a 25-year veteran of the software industry, where he has held leadership roles in process modeling and automation, software architecture and database design, and data center management and optimization technologies.