If money were not an issue, we wouldn’t be having this conversation. But money is front and center in every major database roll-out and optimization project, and even more so in the age of server virtualization and consolidation. It often forces us to settle for good enough, when we first aspired to swift and non-stop.
The financial tradeoffs have never been more apparent than with the arrival of lightning-fast solid state technologies. Whether solid state disks (SSDs) or flash memories, we lust for more of them in the quest for speed, only to be moderated by silly constraints like shrinking budgets.
You know too well the pressure to please those pushy people behind the spreadsheets: the ones keeping track of what we spent in the past, eager to trim more expenses in the future. They drive us to squeeze more from what we have before spending a nickel on new stuff. But if we step out of our technical roles and take the broader business view, their requests are really not that unreasonable. To that end, let’s see how we can strike a balance between flashy new hardware and the proven gear already on the data center floor. By that, I mean arriving at a good mix of solid state technologies and the conventional spinning disks that have served us well in years gone by.
On the face of it, the problem can be rather intractable. Even after tedious hours of fine-tuning, you’d never really be able to manually craft the ideal conditions where I/O-intensive code sections are matched to flash while hard disk drives (HDDs) serve the less demanding segments. Well, let me take that back - you could, back when databases ran on their own private servers. The difficulty arises when the company opts to consolidate several database instances on the same physical server using server virtualization, and then wants the flexibility to move these virtualized databases between servers to balance loads and circumvent outages.
Removing the Guesswork
When it was a single database instance on a dedicated machine, life was predictable. Guidelines for beefing up the spindle count and channels to handle additional transactions or users were well-documented. Not so when multiple instances collide in incalculable ways on the same server, made worse when multiple virtualized servers share the same storage resources. Under those circumstances you need little elves running alongside to figure out what’s best. And the elves have to know a lot about the behavioral and economic differences between SSDs and HDDs to do what’s right.
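To appreciate what changed, recall how simple the old single-server arithmetic was. A back-of-the-envelope spindle estimate looked something like the sketch below; the figures are illustrative assumptions, not vendor specifications.

```python
# Back-of-the-envelope spindle sizing for a dedicated database server.
# All numbers are illustrative assumptions, not vendor specifications.

required_iops = 3000      # peak I/O the workload is expected to demand
iops_per_spindle = 180    # roughly what one 15K RPM drive delivers
raid_penalty = 2          # simplification: uniform RAID 10 overhead

# Ceiling division: total demanded I/O over per-drive capability.
spindles = -(-required_iops * raid_penalty // iops_per_spindle)
print(spindles)           # -> 34 drives to meet the target
```

Static math like that falls apart once consolidated workloads collide on shared spindles - hence the elves.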
Turns out you can hire elves to help you do just that. They come shrink-wrapped in storage virtualization software packages. Look for the ones that can do automated storage tiering objectively - meaning they don’t care who makes the hardware or where it resides.
On a more serious note, this new category of software really does take much of the guesswork, and much of the cost, out of the equation. Given a few hints about what should take priority, it makes the right decisions in real time, weighing all the competing I/O requests coming across the virtual wire. The software directs the most time-sensitive workloads to solid state devices and the least demanding ones to conventional drives or disk arrays. You can even override the algorithms to pin specific volumes to a preferred class of storage - say, end-of-quarter jobs that must take precedence.
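To make the policy concrete, here is a minimal sketch of the kind of tiering decision such software automates. Every name, threshold, and metric below is hypothetical; real products work at the block level deep inside the storage stack, but the shape is the same: measured I/O temperature picks the tier, and an explicit pin overrides it.

```python
# Hypothetical sketch of automated storage tiering: hot volumes migrate to
# flash, cold volumes settle on spinning disk, and an operator pin
# overrides the heuristic. Names and thresholds are invented.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Volume:
    name: str
    recent_iops: float                  # observed I/O rate this window
    pinned_tier: Optional[str] = None   # operator override, e.g. "ssd"

SSD_THRESHOLD_IOPS = 500.0              # above this, a volume is "hot"

def choose_tier(vol: Volume) -> str:
    """Pick a tier from observed I/O temperature, honoring pins first."""
    if vol.pinned_tier is not None:     # explicit pin always wins,
        return vol.pinned_tier          # e.g. end-of-quarter jobs
    if vol.recent_iops >= SSD_THRESHOLD_IOPS:
        return "ssd"                    # time-sensitive workload
    return "hdd"                        # fine on conventional drives

volumes = [
    Volume("oltp_redo_logs", recent_iops=2400.0),
    Volume("nightly_archive", recent_iops=12.0),
    Volume("quarter_close", recent_iops=80.0, pinned_tier="ssd"),
]
for v in volumes:
    print(f"{v.name}: {choose_tier(v)}")   # ssd, hdd, ssd
```

The pin check deliberately comes before the heuristic; that ordering is exactly the override described above.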
Better Storage Virtualization Products
The better storage virtualization products go one better. They provide additional turbocharging of disk requests by caching them in DRAM - not just reads, but writes as well. Aside from the faster response, write caching reduces the duty cycle on the solid state memories, prolonging their lives. Think how happy that makes the accountants. The storage assets are also thin provisioned to avoid wasteful over-allocation of premium-priced hardware.
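A toy model shows why write caching is so kind to flash: if repeated writes to the same hot block are absorbed in DRAM and coalesced before being flushed, the SSD sees a single program cycle where the application issued a thousand writes. The class, buffer size, and flush policy below are all invented for illustration.

```python
# Toy write-back cache: writes land in DRAM, and rewrites of the same
# logical block overwrite in place, so the backing SSD sees one write
# per dirty block per flush. All sizes and policies are invented.

class WriteBackCache:
    def __init__(self, flush_threshold: int = 64):
        self.dirty = {}                       # block number -> latest data
        self.flush_threshold = flush_threshold
        self.ssd_writes = 0                   # writes that reached flash

    def write(self, block: int, data: bytes) -> None:
        self.dirty[block] = data              # rewrite costs the SSD nothing
        if len(self.dirty) >= self.flush_threshold:
            self.flush()

    def flush(self) -> None:
        # A real product would issue coalesced I/O to the device here.
        self.ssd_writes += len(self.dirty)
        self.dirty.clear()

cache = WriteBackCache()
for _ in range(1000):                  # 1,000 application writes...
    cache.write(block=7, data=b"x")    # ...all to the same hot block
cache.flush()
print(cache.ssd_writes)                # -> 1: the flash saw a single write
```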