The end-of-life date for Windows Server 2003 is July 14, 2015. Although even the latest version of the operating system is now 10 years old, as of last summer more than 22 million servers around the world were still running production applications on it. Within the next few weeks, millions of enterprise IT professionals will be forced to decide whether to migrate off the OS or secure a custom support contract with Microsoft for the end-of-life platform.
The latter option is almost a non-starter given the rumored, escalating costs. There is also the option of simply leaving the servers as is, but given the current climate of corporate hacking, from both external and, potentially, internal sources, that option is fraught with risk.
According to a recent report by Spiceworks, this end-of-life migration, which impacts nearly every technology segment from hardware, software and cloud to mobile and services, represents a $100 billion opportunity for migration-related solutions. The same report states that companies are earmarking an average of $60,000 for Windows Server 2003 migration-related projects. Viewed from the perspective of IT professionals, a migration plan raises a litany of questions about technology options, including the cost of purchasing new technology, the cost of supporting it, and the migration timeline.
While these are important questions to address, there are established methods for making these calculations and formulating a successful migration path. However, many of these servers exist within corporate environments to make the workforce more productive, so at some point along the migration path IT professionals must consider the experience of the end user. When IT personnel reach this stage, here are some primary questions to consider:
Do you have a solid understanding (baseline) of your current end user experience?
A complete understanding of the end user's experience is critical to determining how users will be impacted by a major technology shift. Even the right measurements and volumes of data collected after the shift will be of little value if they cannot be compared against a baseline established before the shift.
For example, if a critical business transaction executed many thousands of times a day averages 5 seconds after a major technology shift, that may look like a problem. If the average before the shift was 6 seconds, however, the change is actually a net improvement in performance. Conversely, if the same transaction previously took only 2 seconds, a 5-second average would point to a genuine application problem, with lost productivity and increased support call volume as the likely result.
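As a minimal sketch of this before-and-after comparison, assuming response-time samples (in seconds) exported from whatever monitoring tool is already in place and an illustrative 10% tolerance, the check might look like:

```python
from statistics import mean

def compare_to_baseline(baseline_samples, post_shift_samples, tolerance=0.10):
    """Compare average response times before and after the technology shift.

    Anything slower than the baseline by more than `tolerance` is flagged for
    investigation; slower but within tolerance is noted; faster is an improvement.
    """
    baseline_avg = mean(baseline_samples)
    post_avg = mean(post_shift_samples)
    change = (post_avg - baseline_avg) / baseline_avg

    if change <= 0:
        verdict = "improved"
    elif change <= tolerance:
        verdict = "within tolerance"
    else:
        verdict = "regressed -- investigate"
    return round(baseline_avg, 2), round(post_avg, 2), verdict

# The example above: a 6-second baseline dropping to 5 seconds is a net
# improvement, while a 2-second baseline rising to 5 seconds is a real problem.
print(compare_to_baseline([6.1, 5.9, 6.0], [5.0, 5.1, 4.9]))
print(compare_to_baseline([2.0, 2.1, 1.9], [5.0, 5.1, 4.9]))
```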
How many users have an affected application installed and how often do they use the application?
In assessing impact ahead of the shift, a complete and accurate inventory of installed applications is a necessity. Further, the IT team should understand what percentage of the install base actually uses each application. Is the application being launched by the user, or does it only run when the user logs in? When was it last accessed? How often is it accessed? For the most critical business applications, it may also be useful to know how much time users spend interacting with the application.
Application inventory and usage data must be gathered ahead of time so that IT and support teams can direct their resources toward the problems that would have the biggest impact on end-user productivity after the shift.
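As an illustrative sketch, assuming usage records (machine, application, date last launched, launch count) exported from a hypothetical software-metering or endpoint-management tool, the install base and active-use percentage could be rolled up like this:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical usage records: (machine, application, last launched, launches in last 30 days).
records = [
    ("host-01", "LegacyERP",    "2015-06-20", 42),
    ("host-02", "LegacyERP",    "2015-01-03", 0),
    ("host-03", "ReportViewer", "2015-06-28", 7),
]

def usage_summary(records, stale_after_days=30, today=datetime(2015, 7, 1)):
    """Per application: total installs, actively-used installs, and active percentage."""
    installs = defaultdict(int)
    active = defaultdict(int)
    cutoff = today - timedelta(days=stale_after_days)
    for machine, app, last_launched, launches in records:
        installs[app] += 1
        if launches > 0 and datetime.strptime(last_launched, "%Y-%m-%d") >= cutoff:
            active[app] += 1
    return {app: (installs[app], active[app], 100.0 * active[app] / installs[app])
            for app in installs}

for app, (total, used, pct) in sorted(usage_summary(records).items()):
    print(f"{app}: {total} installs, {used} active in the last 30 days ({pct:.0f}%)")
```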
What are the KPIs used to determine the impact of the migration on the end user?
The Application Performance Management (APM) space is full of useful, sophisticated tools for monitoring performance and has come to encompass a wide range of technologies and approaches. Thousands of metrics can be collected and analyzed.
For this technology shift, the IT team must determine which metrics are most critical to tracking the end user's experience. Here are some common ones to consider (a sketch that rolls several of them up for a single business activity follows the list):
- Business activity response time
- Business activity standard deviation and percentiles
- Activity volume
- Application and activity SLAs / thresholds
- Number of application crashes and hangs
- Application errors or faults
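As a minimal sketch, assuming in-memory response-time samples for one business activity, an assumed SLA threshold, and crash, hang, and error counts supplied by the monitoring tool, these KPIs could be computed as follows:

```python
import math
from statistics import mean, stdev

def percentile(samples, p):
    """Nearest-rank percentile over a small in-memory sample list."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100.0 * len(ordered)))
    return ordered[rank - 1]

def kpi_report(response_times, sla_seconds, crashes, hangs, errors):
    """Roll up the KPIs listed above for a single business activity."""
    breaches = sum(t > sla_seconds for t in response_times)
    return {
        "avg_response_s": round(mean(response_times), 2),
        "std_dev_s": round(stdev(response_times), 2),
        "p95_response_s": percentile(response_times, 95),
        "volume": len(response_times),
        "sla_breach_pct": round(100.0 * breaches / len(response_times), 1),
        "crashes": crashes,
        "hangs": hangs,
        "errors": errors,
    }

# Hypothetical samples for one day of a single critical transaction.
samples = [2.1, 1.9, 2.4, 5.2, 2.0, 2.2, 6.1, 2.3]
print(kpi_report(samples, sla_seconds=3.0, crashes=1, hangs=0, errors=4))
```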
How are you going to surface performance and experience information back to the business?
After defining KPIs, baselining user performance and understanding where to deploy resources, it's important to put together a plan for disseminating all of this critical information. The first step is to understand the target audience. Which KPIs are they interested in? Which applications do they have under management? What level of analysis will they perform (e.g., reviewing KPIs, trending, deep-dive investigation)?
Many APM tools have excellent analytic dashboards for displaying KPIs and trend data, along with access controls to help distribute the appropriate information to each audience. Those dashboards and analysis capabilities should be used by a central NOC team for monitoring and deep-dive analysis, but they can also be surfaced to business users for self-serve analysis. In addition, many of the tools have customizable alerting capabilities. Rather than hearing about performance problems from the user base via support requests, the alerting mechanisms should be used to anticipate problems proactively, along the lines of the simple threshold check sketched below. This allows IT and application owners to marshal resources before support request volume hits a critical stage.
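A minimal, tool-agnostic sketch of that kind of proactive check, with purely illustrative activity names and threshold values (real APM products expose their own alerting configuration), might be:

```python
# Illustrative thresholds per business activity; the names and limits here
# are assumptions, not values from any particular APM product.
THRESHOLDS = {
    "submit_order": {"avg_response_s": 3.0, "error_rate_pct": 2.0},
    "run_report":   {"avg_response_s": 10.0, "error_rate_pct": 5.0},
}

def evaluate_alerts(activity, measured):
    """Return alert messages for any KPI that breaches its configured threshold."""
    alerts = []
    for kpi, limit in THRESHOLDS.get(activity, {}).items():
        value = measured.get(kpi)
        if value is not None and value > limit:
            alerts.append(f"[ALERT] {activity}: {kpi} = {value} exceeds threshold {limit}")
    return alerts

# The monitoring pipeline would feed fresh post-migration measurements in here
# and route any alerts to IT and application owners before tickets pile up.
for message in evaluate_alerts("submit_order", {"avg_response_s": 5.0, "error_rate_pct": 1.2}):
    print(message)
```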
Have a Proactive Plan in Place
The impact on the business user should be at the center of many of the strategic decisions made during a Windows Server 2003 migration. By understanding the user experience before and after the technology shift, IT leaders can build a proactive plan that maintains business user productivity by responding immediately to performance problems and disseminating valuable performance information throughout the organization.