SQL Server 2012/2014 delivers compelling new capabilities that make an upgrade worthwhile. Many IT professionals agree that the AlwaysOn Availability Group (AG) feature offers the most value, providing the high availability (HA) needed to keep applications continuously available.
Synchronous replication technology in AlwaysOn lets applications scale out like never before, using more resources on more servers. With read-only access to secondary databases, SQL Server 2012/2014 can dramatically increase database I/O capacity on the back end. Grouping databases into an AG also lets companies keep the databases a given application depends on together, ensuring that they fail over as a unit.
However, along the upgrade path, companies have also discovered key obstacles to realizing these new opportunities. Without awareness and understanding of these challenges – and their potential consequences – an organization’s upgrade to SQL Server 2012/2014 can be costly, inefficient, and plagued by system errors.
Here are the top 5 SQL Server 2012/2014 ‘gotchas’ as well as tips for addressing them:
- Backward compatibility
- Licensing changes
- Read/write split
- Replication awareness
- Automatic failover
1-Maintain backward compatibility
To avoid problems, as well as significant future rework, organizations need to address deprecated T-SQL features. In SQL Server 2012/2014, Microsoft no longer supports non-ANSI outer join syntax (the legacy *= and =* operators), requiring that all join code be ANSI compliant.
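For illustration, here is a hypothetical pair of query strings as an application might carry them; the legacy non-ANSI form is rejected by SQL Server 2012/2014, while the ANSI-compliant rewrite returns the same rows:

```python
# Hypothetical tables and columns. The legacy *= operator (a non-ANSI left outer
# join) is no longer accepted by SQL Server 2012/2014 under any compatibility level.
legacy_outer_join = """
SELECT c.name, o.order_id
FROM   dbo.customers AS c, dbo.orders AS o
WHERE  c.customer_id *= o.customer_id
"""

# ANSI-compliant equivalent that SQL Server 2012/2014 accepts.
ansi_outer_join = """
SELECT c.name, o.order_id
FROM   dbo.customers AS c
LEFT OUTER JOIN dbo.orders AS o
       ON c.customer_id = o.customer_id
"""
```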
Because every new release deprecates earlier compatibility levels, companies must be aware of level changes and their impact on applications. SQL Server 2012 drops support for the SQL Server 2000 compatibility level (80), while SQL Server 2014 drops the SQL Server 2005 level (90).
Companies also need to check with their ISVs to verify application support, which may not be a given when compatibility levels change. An ISV contracted to support an application on SQL Server 2005 will not necessarily support it on SQL Server 2012 running at the 2005 compatibility level.
If at all possible, companies should test their applications and bring them up to the most modern standards during any upgrade. Best practices dictate switching to the highest compatibility level of the upgraded SQL Server version. Putting this off only defers the inevitable and creates more work in the long run.
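As a sketch of that step, assuming pyodbc, a hypothetical server named sqlprod01, and a database named SalesDB (level 110 corresponds to SQL Server 2012; 120 to SQL Server 2014):

```python
import pyodbc

# Assumed connection details; adjust the driver, server, and credentials to your environment.
conn = pyodbc.connect(
    "DRIVER={SQL Server Native Client 11.0};SERVER=sqlprod01;"
    "DATABASE=master;Trusted_Connection=yes;",
    autocommit=True,
)
cursor = conn.cursor()

# Report the current compatibility level of the hypothetical SalesDB database.
cursor.execute("SELECT compatibility_level FROM sys.databases WHERE name = ?", "SalesDB")
level = cursor.fetchone()[0]
print("Current compatibility level:", level)

# Once application testing has passed, raise it to the SQL Server 2012 level.
if level < 110:
    cursor.execute("ALTER DATABASE SalesDB SET COMPATIBILITY_LEVEL = 110")
```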
2-Be aware of licensing changes
With SQL Server 2012/2014, Microsoft has moved to a core-based licensing model for its Enterprise Edition. Regardless of the number of sockets or cores per socket, organizations must now pay per core, and the licensing requires a minimum of 4 core licenses per physical processor. While minor variations exist due to individual licensing terms, moving from a server license to a 4-core Enterprise license equates to a 4x cost difference on paper for most companies.
The licensing changes also affect customers installing SQL Server 2012/2014 in a virtual environment. If a company runs SQL Server Enterprise Edition on a VM guest, a minimum of 4 cores must be licensed. This requirement can be frustrating: a company using a 2-core VM, for example, must still license 4 cores even though it is using only half of them.
It’s not all bad news. With significantly increased resource limits in the Standard Edition (up to 64 GB of memory and 16 cores), companies also have a viable alternative to Enterprise Edition for the first time, as well as a solid migration path for the future. In addition, Microsoft has created the Business Intelligence Edition, which pairs the Standard Edition SQL engine with the Enterprise Edition business intelligence features.
In addition to paying attention to core count and understanding the licensing pitfalls, companies should consider other ways to offset the higher licensing costs. By maximizing usage of secondary servers to process read loads, organizations can reduce the need to buy larger servers with more cores. Caching can also help reduce the number of queries to be processed, letting companies maintain the needed performance while buying fewer cores. In some instances, database query caching has been able to handle 80% of the database load, sending just 20% to the SQL Server database.
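As a rough illustration of the idea (not any particular product), a cache that short-circuits repeated read-only queries might look like the following sketch; the TTL value and function name are hypothetical, and real caching products handle invalidation far more carefully:

```python
import time

# Hypothetical in-process cache for read-only query results.
_cache = {}                 # maps SQL text -> (timestamp, rows)
CACHE_TTL_SECONDS = 30      # how long a cached result is considered fresh

def cached_read(cursor, sql):
    """Serve repeated read-only queries from memory instead of hitting SQL Server."""
    entry = _cache.get(sql)
    if entry and time.time() - entry[0] < CACHE_TTL_SECONDS:
        return entry[1]                      # cache hit: no database round trip
    cursor.execute(sql)                      # cache miss: run the query once
    rows = cursor.fetchall()
    _cache[sql] = (time.time(), rows)        # store the result for later reuse
    return rows
```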
3-Leverage read/write split – with app changes
By allowing secondary servers to process read traffic instead of just sitting idle, AlwaysOn enables companies to serve more traffic and improve their application response times.
However, to leverage this capability, applications must be modified to use read-intent connection strings when connecting to secondaries, and to identify which queries and stored procedures should be routed to those secondaries. This application modification takes up a lot of time and resources, often delaying companies from benefiting from SQL Server 2012/2014’s new scale-out features.
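For example, a modified application might open a separate read-intent connection for statements it knows are read-only. A minimal sketch with pyodbc, assuming a hypothetical availability group listener named aglistener and a database named SalesDB:

```python
import pyodbc

BASE = ("DRIVER={SQL Server Native Client 11.0};SERVER=aglistener;"
        "DATABASE=SalesDB;Trusted_Connection=yes;")

# Read/write connection: sessions land on the primary replica.
write_conn = pyodbc.connect(BASE)

# Read-intent connection: ApplicationIntent=ReadOnly lets the listener route the
# session to a readable secondary instead of the primary.
read_conn = pyodbc.connect(BASE + "ApplicationIntent=ReadOnly;")

# The application itself must decide which connection each statement belongs on.
read_conn.cursor().execute("SELECT TOP 10 * FROM dbo.orders ORDER BY order_date DESC")
write_conn.cursor().execute("UPDATE dbo.orders SET status = 'shipped' WHERE order_id = 42")
write_conn.commit()
```

Note that the listener only redirects read-intent sessions to a secondary when a read-only routing list has been configured for the availability group.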
To solve this problem and avoid application modification, some companies have implemented either homegrown or third-party database traffic management software. Sitting between the application and the database, this software understands SQL syntax and can classify statements as reads or writes. It can also retrieve a stored procedure’s definition from the database server to determine whether that procedure is a read-only or a read/write operation. With this detailed knowledge of SQL state, the software can automatically load balance SQL traffic amongst primary and secondary servers, sending transactions and writes to the primary and distributing read-only traffic amongst the secondaries.
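A deliberately naive sketch of that classification and routing step (real products parse the full SQL grammar and, as noted above, look up stored procedure definitions rather than guessing from keywords):

```python
# Statements that must go to the primary. EXEC is routed there too, since a stored
# procedure may write; real traffic managers inspect the procedure body instead.
WRITE_KEYWORDS = {"insert", "update", "delete", "merge", "create", "alter", "drop", "exec"}

def is_read_only(sql: str) -> bool:
    """Classify a statement by its leading keyword (illustration only)."""
    first_word = sql.lstrip().split(None, 1)[0].lower()
    return first_word not in WRITE_KEYWORDS

def route(sql, primary, secondaries):
    """Send writes to the primary and spread reads across the secondaries."""
    if is_read_only(sql) and secondaries:
        return secondaries[hash(sql) % len(secondaries)]   # crude load balancing
    return primary
```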
By removing the need to build this intelligence into the application, such software lets companies quickly take advantage of SQL Server 2012/2014’s advanced features.
4-Ensure replication awareness
Because AlwaysOn is constantly replicating data from the primary server to the secondaries, replication lag becomes a possibility. If the secondaries fall too far behind, the data they serve can become stale.
While the primary server remains the authoritative source for writes in SQL Server 2012/2014, the secondary servers now hold the same data, which allows users to read from them. In supporting reads from secondaries, however, organizations must be aware that 1) a secondary server will always be slightly behind the primary; and 2) applications need to define a threshold of tolerance for that delay, i.e., how far behind the data is allowed to be.
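On SQL Server 2012/2014, that lag can be estimated from the AlwaysOn DMVs. A rough sketch, run against the primary through an existing pyodbc cursor; the one-second threshold is illustrative:

```python
# Estimate how far each replica's last commit trails the primary's. Run on the primary,
# which reports state for every replica in the availability group.
LAG_QUERY = """
SELECT ar.replica_server_name,
       drs.last_commit_time,
       drs.log_send_queue_size,     -- KB of log not yet sent to the secondary
       drs.redo_queue_size          -- KB of log not yet redone on the secondary
FROM   sys.dm_hadr_database_replica_states AS drs
JOIN   sys.availability_replicas AS ar ON ar.replica_id = drs.replica_id
"""

def replicas_behind(cursor, max_lag_seconds=1.0):
    """Return replicas whose last commit trails the freshest (primary) commit time."""
    rows = cursor.execute(LAG_QUERY).fetchall()
    newest = max(r.last_commit_time for r in rows if r.last_commit_time)
    return [
        r.replica_server_name
        for r in rows
        if r.last_commit_time
        and (newest - r.last_commit_time).total_seconds() > max_lag_seconds
    ]
```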
Reading stale data can lead to inconsistent behavior. Different users might see different values for the same query, or – even worse – users might see older, incomplete data even after they have completed a transaction.
Modifying applications to support replication awareness can be a highly complex and costly process, requiring the organization to build a replication-monitoring engine and integrate it with application code. The modified application can then check that each server it sends reads to is in sync with the primary server. Adding to the complexity, most applications, whether written in .NET or PHP, are structured as integrated scripts, making replication awareness even more difficult and time-consuming to achieve.
By integrating replication monitoring with load balancing and SQL flow control through the use of software, companies can establish a maximum replication delay threshold – without modifying any applications. Such software will not send reads to a secondary server that is more than one second behind the primary, for example. This approach keeps data access consistent even when supporting reads from secondary servers. The replication monitoring process can even alert administrators of potential networking or database issues that might have caused the replication lag or breakage in the first place.
5-Enable automated failover
SQL Server 2012/2014 limits automatic failover to servers sitting within a single data center, despite having the ability to scale beyond a single data center. With the virtual network name (VNN, also referred to as the availability group listener), SQL Server 2012/2014 provides a method for applications to discover the new primary when a server fails.
However, VNN has significant limitations: it redirects read-intent connections directly to a secondary server rather than acting as an aggregation endpoint. This redirection can lead to increased downtime and application errors during failover, because both the failing primary and the secondary being promoted drop their connections.
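Without something in front of the database absorbing those dropped connections, each application typically has to wrap its calls in reconnect-and-retry logic, roughly like this sketch (the listener name, retry count, and delay are hypothetical):

```python
import time
import pyodbc

CONN_STR = ("DRIVER={SQL Server Native Client 11.0};SERVER=aglistener;"
            "DATABASE=SalesDB;Trusted_Connection=yes;MultiSubnetFailover=Yes;")

def query_with_retry(sql, attempts=5, delay_seconds=2):
    """Reconnect and retry a read when a failover drops the connection mid-call."""
    for attempt in range(attempts):
        try:
            conn = pyodbc.connect(CONN_STR, timeout=5)
            try:
                return conn.cursor().execute(sql).fetchall()
            finally:
                conn.close()
        except pyodbc.Error:
            if attempt == attempts - 1:
                raise                          # give up after the final attempt
            time.sleep(delay_seconds)          # wait for the new primary to come online
```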
One way to avoid both the VNN limitation and the per-application retry code is to deploy software that acts as an abstraction layer between the database and the application. In this architecture, that software replaces the VNN as the destination for SQL connections and routes SQL queries on behalf of the app, with a detailed understanding of every query. This architecture enables failover that is transparent to the application: connections are not dropped, and inbound traffic is queued so applications see few to no errors.
Bottom Line - Focus on the Big Picture
At the end of the day, a database upgrade is never just about the database. SQL Server 2012/2014’s new capabilities, including the AlwaysOn HA features, promise huge benefits, but they also create significant complexity.
Companies must consider how an upgrade affects not only performance and costs but also other parts of their infrastructure. The challenges of programming applications to understand and leverage a scaled-out database infrastructure are many.
An upgrade can be a good time to consider making other architectural changes – such as creating an abstraction layer using a SQL proxy or database load balancing software.
About the author:
Varun Singh, Founder and CTO, ScaleArc
As Chief Technology Officer, Varun leads ScaleArc’s product development and technology strategy. Varun also served as ScaleArc’s CEO from March 2009 to Sept 2013. Varun is a technologist with broad experience in several fields, from running Linux-based BBS systems in the pre-Internet days to writing about technology and open source software at PC Quest, CHIP, Digit, and Network Computing, to creating India’s top online technology brands such as TechTree and Tech2, to hosting technology shows on CNBC, TV18, CNN-IBN and ET Now. He led large web and application development teams as CTO at Network 18's web division and built the highly scalable technology behind large websites like IN.com, MoneyControl.com, and various other portals.