Data centers occupy a unique corporate position. From IT and finance to sales and marketing, they enable every aspect of a modern business. As a result, enterprises have to allocate massive amounts of energy and budget to maintain a reliable data center service.
Few organizations can cope with a service interruption and even fewer can recover from a major data center outage. This is because facility downtime is expensive, disruptive and damaging to an organization’s reputation. If a data center fails, so could the business.
It is hardly surprising, then, that a recent Data Center Users' Group survey named availability as the top management issue. To allay this concern, and to meet customer expectations of an always-available data center, operators are deploying technologies that marry availability with optimized efficiency.
Database and Software Integration is Central
Central to this outcome are database and software integrations. For example, when an enterprise integrates operational intelligence with business planning data, it can create a detailed overview of the relationship between compute demand, power dynamics and the financial impact of specific business projects. This enables informed decisions about how much power and cooling are required to keep processing capacity available at all times, and what providing that availability truly costs.
From an environmental perspective, operators can act on recent guidance from ASHRAE that expands the recommended temperature and humidity ranges for safe data center equipment operation. Previously, the majority of data centers operated at 55-66° Fahrenheit; now they can run at 80 or even 90° Fahrenheit. The financial impact of even a minor thermal adjustment is significant: an enterprise can save 2-5% in energy costs for every 1° Fahrenheit increase in server inlet air temperature. This thermal data can then be integrated with other control systems and databases to drive efficiency throughout the data center, including addressing under-utilized servers through consolidation and virtualization initiatives.
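As a rough illustration of that rule of thumb, the sketch below estimates the annual savings from raising server inlet temperature. The baseline energy cost, temperatures and function names are hypothetical; the 2-5% per-degree figure is the one cited above.

```python
# Rough savings estimate based on the rule of thumb cited above:
# roughly 2-5% energy-cost savings per 1°F increase in server inlet temperature.
# The facility figures below (baseline cost, temperatures) are illustrative only.

def estimated_savings(annual_energy_cost, current_inlet_f, target_inlet_f,
                      savings_per_degree=0.02):
    """Estimate annual savings for a given inlet-temperature increase."""
    degrees_raised = max(0.0, target_inlet_f - current_inlet_f)
    # Apply the per-degree saving cumulatively against the remaining cost.
    remaining_fraction = (1.0 - savings_per_degree) ** degrees_raised
    return annual_energy_cost * (1.0 - remaining_fraction)

if __name__ == "__main__":
    # Hypothetical facility: $500,000/year in energy, inlet air raised from 68°F to 77°F.
    low = estimated_savings(500_000, 68, 77, savings_per_degree=0.02)
    high = estimated_savings(500_000, 68, 77, savings_per_degree=0.05)
    print(f"Estimated annual savings: ${low:,.0f} to ${high:,.0f}")
```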
Despite these opportunities, many enterprises still refuse to embrace these changes as they fear an adjustment could affect processing capacity, which in turn might impact availability.
Understanding Asset Data for Business Improvements
Complete IT asset visibility should alleviate this apprehension, allowing the data center operator to move from a reactive management style to one based on strategic planning and proactive control of the facility.
In an intelligent data center, thousands of sensors can collect information on each asset's location, condition and status, including movement records, temperature, humidity, air pressure, power use, fan speeds and CPU utilization. If this data is integrated with enterprise resource planning (ERP) and financial software, that intelligence can be extended to include an asset's current value, its depreciation rate and whether it should be classed as over-provisioned or under-provisioned.
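As a simplified sketch of what such an integrated asset record might look like, the example below joins sensor telemetry with ERP and financial attributes. The field names, thresholds and classification logic are illustrative assumptions rather than any particular product's schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class SensorReading:
    """A single telemetry sample collected from an asset-mounted sensor."""
    timestamp: datetime
    temperature_f: float
    humidity_pct: float
    power_watts: float
    cpu_utilization_pct: float

@dataclass
class AssetRecord:
    """Operational telemetry joined with ERP/financial attributes for one asset."""
    asset_id: str
    location: str                  # rack/row/room reported by location sensors
    latest_reading: SensorReading
    purchase_cost: float           # from the ERP/financial system
    depreciated_value: float
    avg_cpu_utilization_pct: float

    def provisioning_status(self, low=20.0, high=80.0) -> str:
        """Classify the asset using illustrative utilization thresholds."""
        if self.avg_cpu_utilization_pct < low:
            return "over-provisioned"
        if self.avg_cpu_utilization_pct > high:
            return "under-provisioned"
        return "right-sized"
```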
This can then be aggregated, normalized and reported on throughout the business. Executive-level dashboards can relay this information to the C-suite so they can initiate management practices to drive down the total cost of ownership (TCO) of the data center. Facilities teams can manage their infrastructure more effectively to implement a strategy of disaster prevention, rather than one based around recovery. Legal can be made aware of potential compliance issues and kept informed about upcoming audits.
Integrated Data Leads to Greater Profits
Audits, in particular, should be reviewed by every organization. Given that data center assets move multiple times throughout their lifespan, manual inventory audits are an outdated process. They are labor-intensive, expensive and error-prone. Automating audits with asset lifecycle management systems that operate on live, as opposed to aged, data can drastically improve an operator’s control capabilities. Not only does the current location of every asset become visible, but the operator can drill down to specifications, maintenance records and warranty histories.
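A minimal sketch of such an automated audit is shown below: live location reads (from RFID tags or rack-level sensors, for instance) are reconciled against the asset register, and discrepancies are flagged. The data structures and example values are assumptions for illustration, not a specific vendor's API.

```python
# Reconcile the recorded asset register against live location reads and
# report anything that has moved or stopped reporting.

def audit_assets(asset_register: dict[str, str], live_locations: dict[str, str]):
    """Return discrepancies between recorded and observed asset locations."""
    discrepancies = []
    for asset_id, recorded_location in asset_register.items():
        observed = live_locations.get(asset_id)
        if observed is None:
            discrepancies.append((asset_id, recorded_location, "missing from live feed"))
        elif observed != recorded_location:
            discrepancies.append((asset_id, recorded_location, f"seen in {observed}"))
    return discrepancies

# Example: one asset reconciles cleanly, one has moved, one is not reporting.
register = {"srv-001": "rack-A1", "srv-002": "rack-A2", "srv-003": "rack-B4"}
live = {"srv-001": "rack-A1", "srv-002": "rack-C7"}
for asset_id, recorded, issue in audit_assets(register, live):
    print(f"{asset_id}: recorded in {recorded}, {issue}")
```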
This data can flow back into the databases of integrated solutions, adding value for other business units. One of the major benefits of detailed, longitudinal data collection and analysis is that it enables more accurate measurement of the true cost of providing a service. A business can then identify precisely which application is running on a given server at a specific time and monitor the application's power and cooling requirements to understand exactly what resources are needed to maintain its availability.
Other inputs include network bandwidth, supporting power infrastructure, the personnel required to support the asset and a historical record of its maintenance, storage and transit paths. Data of this depth allows precise calculations of what to charge a specific business unit for its use of data center resources. The same insight is equally valuable to hosting, cloud and co-location providers looking to quantify their service level agreements and improve accountability.
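To make the chargeback idea concrete, the sketch below rolls an application's measured power draw, the facility's power usage effectiveness (PUE, which accounts for cooling and power-distribution overhead) and apportioned support costs into a monthly charge. The rates, PUE value and cost categories are illustrative assumptions.

```python
# Illustrative chargeback calculation for one application / business unit.
# All rates and overheads below are assumptions for the sake of the example.

ELECTRICITY_RATE = 0.12   # $ per kWh
PUE = 1.6                 # facility overhead multiplier (cooling, power distribution)
HOURS_PER_MONTH = 730

def monthly_charge(avg_it_power_kw: float,
                   bandwidth_cost: float,
                   support_hours: float,
                   support_rate: float = 75.0,
                   maintenance_cost: float = 0.0) -> float:
    """Monthly cost to charge a business unit for an application's data center use."""
    # Energy consumed by the IT load, grossed up by PUE to include facility overhead.
    energy_kwh = avg_it_power_kw * PUE * HOURS_PER_MONTH
    energy_cost = energy_kwh * ELECTRICITY_RATE
    personnel_cost = support_hours * support_rate
    return energy_cost + bandwidth_cost + personnel_cost + maintenance_cost

# Example: an application averaging 4 kW of IT load with modest support overhead.
charge = monthly_charge(avg_it_power_kw=4.0, bandwidth_cost=250.0,
                        support_hours=6.0, maintenance_cost=120.0)
print(f"Monthly chargeback: ${charge:,.2f}")
```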
To succeed in today’s competitive business climate, enterprises cannot afford to leave data center management up to chance. An enterprise equipped with this combination of real-time data, operational insight and automated technology will always outperform a competitor lacking the same capabilities.
Real-time asset visibility, when integrated with the right software solutions and databases, enables an organization to fully understand and control one of its most valuable business assets: the data center. Ultimately, the enterprise can exploit its facilities to further business growth and significantly improve its competitive advantage.