The month of June heralded the long-awaited release of Microsoft’s SQL Server 2016. The company had debuted the modern database on its Azure cloud platform earlier this year – and June 1 marked the availability of the software for on-premises deployments.
Since overtaking Oracle in the Gartner Magic Quadrant for Operational DBMS last fall, Microsoft has been touting the capabilities of its flagship database technology. The 2016 edition includes unique security functionality with Always Encrypted, which protects data both at rest and in motion; ground-breaking performance and scale, as evidenced by number-one performance benchmarks; and accelerated hybrid cloud scenarios, including Stretch Database functionality that keeps historical data in the cloud.
The Modern Way to Scale
While these new features offer good business functionality, Microsoft – like a lot of database providers – struggles with a more fundamental problem in adoption. The biggest demarcation between traditional and modern database architecture is the way in which they provide scale.
Historically, when enterprises needed more database capacity, they looked to “scale up” the database. They bought ever-bigger servers with ever-more-powerful hardware to gain horsepower. Modern database architectures, in contrast, offer the ability to “scale out.” In that architecture, enterprises deploy multiple smaller servers and have them work in concert to increase capacity. The additional servers are called secondaries, and they serve read-only traffic that’s copied from the primary server, which handles writes and reads.
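The primary/secondary split described above can be sketched in a few lines. This is a minimal illustration, not SQL Server’s actual replication machinery: the class names, the in-memory stores, and the synchronous replication are all simplifying assumptions.

```python
# Minimal sketch of scale out: a primary applies writes and replicates
# them to read-only secondaries, which serve read traffic.
# (Illustrative only - real replication is asynchronous and far richer.)

class Secondary:
    def __init__(self):
        self.store = {}

    def apply(self, key, value):   # called by replication, never by apps
        self.store[key] = value

    def read(self, key):           # serves read-only traffic
        return self.store.get(key)

class Primary:
    def __init__(self, secondaries):
        self.store = {}
        self.secondaries = secondaries

    def write(self, key, value):
        self.store[key] = value
        for s in self.secondaries:   # copy each write to every secondary
            s.apply(key, value)

secondaries = [Secondary(), Secondary()]
primary = Primary(secondaries)
primary.write("sku-42", "widget")
print(secondaries[0].read("sku-42"))  # the read is served from a copy
```

Adding capacity in this model means adding another `Secondary` to the list – which is exactly why reads, not writes, are what scale out.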
Most IT infrastructure follows the “scale out” approach. Think web servers: You don’t buy one big server – you buy lots of smaller servers and route traffic across them simultaneously. So it makes sense that database architecture would follow the same path.
But here’s the rub: to make use of scaled-out database infrastructure, the applications talking to those databases have to be reprogrammed to understand that architecture. It turns out this kind of application recoding is not simple, and may not be possible at all. For home-grown software, you have the technical ability to recode, but you may lack familiarity with a code base developed years ago, because the personnel who wrote it are no longer on staff. And for commercial software, you lack access to the code base to modify it.
So Microsoft’s biggest challenge with the SQL Server 2016 release has nothing to do with the new features. The challenge dates back to SQL Server 2012, the first release to feature scale out capabilities: ever since, Microsoft has struggled to get customers to tap into that capability, because rewriting application code is so difficult.
Here’s one example of how challenging this adoption has been. Microsoft has been hosting SQL Server 2016 launch events around the country. At one recent seminar, with 42 folks in the room from Microsoft’s largest accounts, about 20 of them were already using some form of Modern SQL – SQL Server 2012, 2014, or even 2016. Of those 20 on Modern SQL, however, only one customer was benefiting from scale out – the most fundamental improvement Modern SQL offers.
A New Architecture to Complement Scale Out
Why? Why, four years after the introduction of scale out capabilities in SQL Server, was only a single customer able to take advantage of that capability? It’s that application challenge.
Applications connect directly to the database, so if you want applications to talk to multiple database servers acting together, you have to program the application to know which queries it can send to the secondary servers that support read traffic. And that’s proven to be very difficult, as highlighted earlier, with staff knowledge or code access being the big challenges.
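To see why this routing burden lands on the application, consider what the recoding actually looks like. The sketch below is a hypothetical example, not real driver code: the server addresses are invented, and real applications would hold pyodbc or ADO.NET connections rather than strings. The point is that every query path in the app must now decide which server to talk to.

```python
import itertools

# Hypothetical endpoints - in a real app these would be connections
# to the primary and to each read-only secondary.
PRIMARY = "primary:1433"
SECONDARIES = ["secondary1:1433", "secondary2:1433"]
_round_robin = itertools.cycle(SECONDARIES)

def route(sql: str) -> str:
    """Pick a server for a statement: reads can fan out to the
    secondaries; everything else must go to the primary."""
    first_word = sql.lstrip().split(None, 1)[0].upper()
    if first_word == "SELECT":
        return next(_round_robin)   # spread reads across replicas
    return PRIMARY                  # writes (and anything ambiguous)

print(route("SELECT * FROM orders"))   # lands on a secondary
print(route("UPDATE orders SET paid = 1"))  # must hit the primary
```

Even this toy version hints at the pitfalls: a `SELECT` inside a transaction that just wrote data must still go to the primary, stored procedures hide whether they write, and replication lag can make a secondary return stale rows. Getting all of that right, query by query, is the recoding work most teams can’t take on.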
It turns out that having applications talk directly to the database poses a number of problems beyond routing read queries to secondary servers. That architecture also undermines application availability, because every hiccup in the database is felt in the application.
The companies you think of as the “always on” kinds of companies – Facebook, Google, etc. – they figured out this problem years ago and introduced a new technology. They created a Data Access Layer – software that sits between the applications and the database to break that 1:1 connection. When you architect with that access layer, you increase application resiliency, scale, and performance. Now your apps don’t have to be recoded for scale out and they’re shielded from hiccups in the database.
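The resiliency half of that story can be sketched too. The snippet below is an illustrative toy, not any vendor’s product: the server names, the `ServerDown` failure model, and the `send` callback are all assumptions. It shows the core idea of a Data Access Layer – the app calls one method and never learns which server answered, or that a failover happened.

```python
# Toy Data Access Layer: one entry point for the app, with transparent
# failover across backends. (Illustrative names and failure model.)

class ServerDown(Exception):
    pass

class DataAccessLayer:
    def __init__(self, servers):
        self.servers = list(servers)

    def execute(self, sql, send):
        last_error = None
        for server in self.servers:
            try:
                return send(server, sql)   # first healthy server wins
            except ServerDown as exc:
                last_error = exc           # hiccup absorbed; try the next
        raise last_error                   # only fails if *all* are down

def flaky_send(server, sql):
    if server == "db-a":
        raise ServerDown(server)   # simulate a database hiccup
    return f"{server} ran: {sql}"

dal = DataAccessLayer(["db-a", "db-b"])
print(dal.execute("SELECT 1", flaky_send))  # db-b answers; the app never saw db-a fail
```

The application code above the layer stays unchanged whether there is one server behind it or ten – which is exactly the property that makes scale out adoptable without recoding.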
Not every enterprise has the engineering capacity or resources to build its own custom Data Access Layer, so a new class of technology – database proxy or database load balancing software – is emerging to provide that functionality for “the rest of us.” The MySQL database was the first to architect for scale out, and proxies for that database have been around for several years. They’re now emerging for SQL Server and Oracle as well.
Database load balancing software provides the easiest way to scale out. With no application changes, enterprises get app-transparent failover, app-transparent scale, and faster performance. Microsoft recognizes the value of complementing modern SQL with an ecosystem of partners to accelerate adoption. For database load balancing, Microsoft works with ScaleArc and other vendors to simplify scale out and improve application resiliency, scale, and performance.
SQL Server 2016 features a host of new capabilities. Once enterprises can solve the fundamental challenge of adopting scale out, they’ll be in a stronger position to take advantage of all the other advanced capabilities in the new release.