Storage area networks (SANs) and network-attached storage (NAS) owe their popularity to some compelling advantages in scalability, utilization and data management. But achieving high performance for some applications with a SAN or NAS can come at a premium price. In those database applications where performance is critical, direct-attached storage (DAS) offers a cost-effective high-performance solution. This is true for both dedicated and virtualized servers, and derives from the way high-speed flash memory storage options can be integrated seamlessly into a DAS configuration.
Revival of DAS in the IT Infrastructure
Storage subsystems and their capacities have changed significantly since the turn of the millennium, and these advancements have caused a revival of DAS in both small and medium businesses and large enterprises. To support this trend, vendors have added support for DAS to their existing product lines and introduced new DAS-based solutions. Some of these new solutions combine DAS with solid state storage, RAID data protection and intelligent caching technology that continuously places “hot” data in the onboard flash cache to accelerate performance.
Why the renewed interest in DAS now, after so many organizations have implemented SAN and/or NAS? There are three reasons. The primary reason is performance: DAS is able to outperform all forms of networked storage owing to its substantially lower latency. The second is cost savings that result from minimizing the need to purchase and administer SAN or NAS storage systems and the host bus adapters (HBAs) required to access them. Third is ease of use: implementing and managing DAS is far simpler than doing so for other storage architectures. This is particularly true for Oracle database applications.
The Evolution of DAS
DAS technology has evolved considerably over the years. For example, Serial-Attached SCSI (SAS) expanders and switches enable database administrators (DBAs) to create very large DAS configurations capable of containing hundreds of drives, while support for both SAS and SATA enables DBAs to deploy those drives in tiers. And new management tools, including both graphical and command-line interfaces, have dramatically simplified DAS administration.
While networked storage continues to have an advantage in resource utilization compared to DAS, the cost of unused spindles today is easily offset by the substantial performance gains DAS delivers for applications running software with costly per-server licenses. In fact, having some unused spindles on a database server offers the ability to “tune” the storage system as needed.
A DBA could, for example, use the spare spindles either to isolate certain database objects for better performance, or to allocate them to an existing RAID LUN. When using only HDDs for a database that requires high throughput in I/O operations per second (IOPS), spreading database objects over more spindles increases database performance. Provisioning spindles for performance rather than for capacity in this way is referred to as “short stroking.” With data confined to a smaller number of tracks on each drive, repositioning of the drive’s head is minimized, thereby reducing latency and increasing IOPS.
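To make the arithmetic concrete, the back-of-the-envelope model below shows how confining data to a fraction of each drive’s tracks raises per-drive IOPS. It is only a sketch: the drive parameters are assumed, illustrative values, not measurements of any particular product.

```python
# Back-of-the-envelope model of how short stroking raises per-drive IOPS.
# All drive parameters are assumed, illustrative values.

def hdd_iops(full_stroke_seek_ms, rpm, stroke_fraction, transfer_ms=0.1):
    """Estimate random-read IOPS for one HDD.

    full_stroke_seek_ms : average seek time when data spans every track
    rpm                 : spindle speed
    stroke_fraction     : fraction of the tracks actually holding data
                          (1.0 = whole disk, 0.2 = short stroked to 20%)
    transfer_ms         : time to move the data once the head is in place
    """
    # Seek time shrinks roughly with the span of tracks the head must cover.
    seek_ms = full_stroke_seek_ms * stroke_fraction
    # On average the platter must rotate half a revolution.
    rotational_ms = (60_000 / rpm) / 2
    return 1000.0 / (seek_ms + rotational_ms + transfer_ms)

full = hdd_iops(full_stroke_seek_ms=8.5, rpm=15_000, stroke_fraction=1.0)
short = hdd_iops(full_stroke_seek_ms=8.5, rpm=15_000, stroke_fraction=0.2)
print(f"full stroke : {full:6.0f} IOPS per drive")
print(f"short stroke: {short:6.0f} IOPS per drive")
```

With these assumed figures, cutting the seek span to 20 percent of the platter roughly triples the IOPS each spindle can deliver, which is why DBAs trade capacity for performance this way.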
As is often the case in data centers, ongoing operational expenditures, especially for management, eclipse the capital expenditure involved. Such is the case for SAN and NAS, which require a storage administrator. No such ongoing OpEx is incurred with DAS, especially when using Oracle’s Automatic Storage Management (ASM) system. And because SAN/NAS environments require costly HBAs, switches and other infrastructure, DAS often affords a lower CapEx today as well, particularly in database applications.
Today’s Oracle DBA
Being an Oracle DBA today is quite different from being one even a few years ago. As organizations strive to do more with less, Oracle has been teaming with partners to provide the tools and functionality DBAs need to be more productive while enhancing performance. Consider just one example of how much a DBA’s responsibilities have changed: improving performance by minimizing I/O waits, that is, the percentage of time processors spend waiting on disk I/O.
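As one simple way to observe that metric on a running server, the sketch below samples the host’s I/O-wait percentage from user space. It assumes a Linux server and the third-party psutil package; a DBA would just as often read the same signal from AWR/Statspack reports or OS tools such as iostat.

```python
# Minimal sketch: sample the host's I/O-wait percentage from user space.
# Assumes a Linux server and the third-party psutil package.
import psutil

def sample_iowait(samples=5, interval=1.0):
    """Return the average percentage of CPU time spent waiting on disk I/O."""
    readings = []
    for _ in range(samples):
        cpu = psutil.cpu_times_percent(interval=interval)
        # 'iowait' is only reported on Linux; treat it as 0 elsewhere.
        readings.append(getattr(cpu, "iowait", 0.0))
    return sum(readings) / len(readings)

if __name__ == "__main__":
    print(f"average iowait over 5 samples: {sample_iowait():.1f}%")
```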
To increase storage performance by minimizing I/O waits in a typical database using exclusively HDDs, a DBA might need to take one or more of the following actions:
- Isolate “hot” datafiles on cold disks or, if the storage device is highly utilized, move datafiles to other spindles to even out the disk load.
- Rebuild the storage to a different RAID configuration, such as from RAID 5 to RAID 10, to increase performance.
- Add more “short stroked” disks to the array to get more IOPS.
- Increase the buffer space in the System Global Area (SGA) and/or make use of the different caches inside the SGA to fine-tune how data is accessed.
- Move “hot” data to a higher performance storage tier, such as HDDs with faster spindles or solid state drives (SSDs).
- Minimize or eliminate fragmentation in table and index tablespaces.
Note that many of these actions require the DBA to evaluate the database continuously to determine what currently constitutes “hot” data and to make constant adjustments to optimize performance. Some also require scheduling downtime to implement and test the changes being made.
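As an illustration of the kind of monitoring this entails, the sketch below ranks datafiles by cumulative physical reads using Oracle’s V$FILESTAT and V$DATAFILE views. It assumes the python-oracledb driver and a user granted SELECT on those views; the connection details are placeholders, not real credentials.

```python
# Minimal sketch of one way to spot "hot" datafiles by physical read count.
# Assumes the python-oracledb driver and SELECT privileges on the V$ views;
# connection details below are placeholders.
import oracledb

QUERY = """
    SELECT d.name, f.phyrds, f.phywrts, f.readtim
      FROM v$filestat f
      JOIN v$datafile d ON d.file# = f.file#
     ORDER BY f.phyrds DESC
"""

def hottest_datafiles(top_n=5):
    """Return the datafiles with the most physical reads since instance startup."""
    with oracledb.connect(user="perfmon", password="secret",
                          dsn="dbhost/orclpdb1") as conn:
        with conn.cursor() as cur:
            cur.execute(QUERY)
            return cur.fetchmany(top_n)

for name, reads, writes, read_time in hottest_datafiles():
    print(f"{name}: {reads} reads, {writes} writes")
```

Because these statistics are cumulative since instance startup, the check has to be repeated over time, which is exactly the ongoing effort the next approach is designed to eliminate.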
An alternative to such constant and labor-intensive fine-tuning is the use of server-side flash storage solutions that plug directly into the PCIe bus and integrate intelligent caching with support for DAS in RAID configurations. Intelligent caching software automatically and transparently moves “hot” data—that which is being accessed the most frequently—from the attached DAS HDDs to fast, on-board NAND flash memory, thereby significantly decreasing the latency of future reads.
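The toy model below illustrates the general idea of frequency-based promotion. It is not the actual caching algorithm in any vendor’s product; the block counts, promotion threshold and eviction policy are assumptions chosen only for readability.

```python
# Simplified illustration of frequency-based ("hot data") cache promotion.
# A toy model of the general technique, not any vendor's actual algorithm.
from collections import Counter

class HotDataCache:
    def __init__(self, capacity_blocks, promote_after=3):
        self.capacity = capacity_blocks      # how many blocks fit in flash
        self.promote_after = promote_after   # reads before a block is "hot"
        self.read_counts = Counter()         # per-block access frequency
        self.flash = set()                   # blocks currently held in flash

    def read(self, block_id):
        """Serve a read, promoting frequently accessed blocks to flash."""
        self.read_counts[block_id] += 1
        if block_id in self.flash:
            return "flash hit (low latency)"
        if self.read_counts[block_id] >= self.promote_after:
            if len(self.flash) >= self.capacity:
                # Evict the cached block that is now least frequently read.
                coldest = min(self.flash, key=lambda b: self.read_counts[b])
                self.flash.remove(coldest)
            self.flash.add(block_id)
        return "HDD read (higher latency)"

cache = HotDataCache(capacity_blocks=2)
for block in [10, 10, 10, 42, 42, 42, 10, 7]:
    print(block, "->", cache.read(block))
```

The point of the illustration is that promotion happens automatically as access patterns shift, with no DBA intervention and no application changes.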
Testing of Flash Cache for DAS
Server-side flash-based application acceleration solutions have been evaluated extensively under different scenarios to assess improvements in IOPS, transactions per second, application response times and other performance metrics. For I/O-intensive database applications, these evaluations reveal that moving data closer to the CPU delivers performance improvements ranging from a factor of three to a factor of 100 in some cases.
In all test scenarios, the use of server-side flash caching consistently delivered superior performance. The reduction in application response time ranged from 5X to 10X with no fine-tuning of the configuration. When the database was tuned for both the HDD-only and flash cache configurations, response times were reduced by nearly 30X from 710 milliseconds (HDD-only) to 25 milliseconds with the use of cache.
These results demonstrate that while tuning efforts are effective, they are substantially more effective with the use of flash cache. And even without tuning, flash cache is able to reduce response times by up to an order of magnitude.
Superior Performance with DAS
The use of direct-attached storage has once again become the preferred option for Oracle databases for a variety of reasons. Not only does DAS deliver superior performance in database servers to get the most from costly software licenses, it is also easier to administer, especially when using Oracle’s Automatic Storage Management system. Some solutions also now enable DAS to be shared by multiple servers.
Even better performance and cost efficiency can be achieved by complementing DAS with intelligent server-side flash cache acceleration cards that minimize I/O latency and maximize IOPS. In addition, by allowing infrequently accessed data to remain on HDD storage, organizations can deploy an economical mix of high-performance flash and high-capacity hard-disk storage to optimize both the cost per IOPS and the cost per gigabyte of storage.
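The simple model below illustrates that trade-off. Every price and performance figure in it is a hypothetical assumption, used only to show how adding a modest amount of flash to a large HDD pool changes cost per gigabyte and cost per IOPS.

```python
# Illustrative cost model for blending flash and HDD capacity.
# All prices and performance densities are hypothetical assumptions.

def blended_costs(flash_gb, hdd_gb,
                  flash_cost_per_gb=5.00, hdd_cost_per_gb=0.10,
                  flash_iops_per_gb=300, hdd_iops_per_gb=0.5):
    """Return (cost per GB, cost per IOPS) for a mixed flash/HDD pool."""
    total_cost = flash_gb * flash_cost_per_gb + hdd_gb * hdd_cost_per_gb
    total_gb = flash_gb + hdd_gb
    total_iops = flash_gb * flash_iops_per_gb + hdd_gb * hdd_iops_per_gb
    return total_cost / total_gb, total_cost / total_iops

for flash_gb in (0, 200, 1000):
    per_gb, per_iops = blended_costs(flash_gb=flash_gb, hdd_gb=10_000)
    print(f"{flash_gb:5d} GB flash: ${per_gb:.3f}/GB, ${per_iops:.5f}/IOPS")
```

Even under these made-up numbers the pattern is clear: a small flash tier raises cost per gigabyte only modestly while driving cost per IOPS down sharply, which is the balance the blended configuration is meant to strike.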
Server-side flash caching solutions can also be used in SAN environments to improve performance. Such tests have revealed both significant reductions in response times and dramatic increases in transaction throughput. So whether using DAS or SAN, the combination of server-side flash and intelligent caching has proven to be a cost-effective way to maximize performance and efficiency from the storage subsystem.
About the author:
Tony Afshary is director of marketing, Accelerated Solutions Division, LSI, which designs semiconductors and software that accelerate storage and networking in datacenters, mobile networks and client computing.