The world’s data is doubling roughly every 18 months, and the industry estimates that the global amount of storage will reach 40 ZB by 2020. Historically, storage architectures were built on solutions that could only scale vertically. This legacy approach makes it difficult to store the tremendous quantities of data being created today cost-effectively and at high performance. Most of the world’s data centers still rely on vertical scaling for storage, so organizations are seeking alternatives that let them scale cheaply and efficiently in order to remain competitive. As software-defined storage matures, scale-out storage solutions are appearing in more and more data centers.
Hybrid cloud offers organizations a way to get the maximum business flexibility out of cloud architectures, helping them meet budget-efficiency and performance goals at the same time. In a nutshell, hybrid cloud is a cloud computing environment that uses a mix of on-premises private cloud and public cloud services, with orchestration between the two platforms.
Since hybrid cloud architectures are still new, many organizations are only beginning to learn the benefits and challenges of deploying them. In this article, we walk through some design elements you can use to ensure your hybrid cloud delivers the performance, flexibility and scalability you need.
Scale-Out NAS is Critical
The cornerstone of this hybrid-cloud storage solution needs to be a scale-out NAS. Since hybrid cloud architectures are relatively new to the market – and even newer in full-scale deployment – many organizations are unaware of how important consistency is in a scale-out NAS. Many environments are only eventually consistent, meaning that a file written to one node is not immediately accessible from other nodes. This is typically caused by an incomplete protocol implementation or by loose integration with the virtual file system. The opposite is strict consistency: a file is accessible from all nodes as soon as it is written. Compliant protocol implementations and tight integration with the virtual file system are a good recipe for success; the toy example below illustrates the difference.
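To make the distinction concrete, here is a minimal, self-contained sketch of read-after-write behavior across two nodes. It is purely illustrative: the Node class, the propagation delay and the file path are hypothetical, not any vendor’s API.

```python
# Toy illustration (not a real NAS API) of read-after-write behavior when
# every node exports the same file system.

import time


class Node:
    """Hypothetical cluster node; peers see a write only after `delay` seconds."""

    def __init__(self, delay=0.5):
        self.delay = delay
        self.files = {}          # path -> (data, time the entry becomes visible)
        self.peers = []

    def write(self, path, data):
        now = time.time()
        self.files[path] = (data, now)                   # visible locally at once
        for peer in self.peers:
            peer.files[path] = (data, now + peer.delay)  # visible on peers later

    def read(self, path):
        data, visible_at = self.files.get(path, (None, float("inf")))
        return data if time.time() >= visible_at else None


a, b = Node(), Node()
a.peers, b.peers = [b], [a]

a.write("/projects/report.docx", b"v1")
print(b.read("/projects/report.docx"))   # None -> eventually consistent
time.sleep(0.6)
print(b.read("/projects/report.docx"))   # b'v1' -> now visible everywhere
```

A strictly consistent cluster behaves as if the delay were zero: the moment the write is acknowledged on one node, every other node can read the file.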
An ideal hybrid cloud architecture that incorporates a scale-out NAS should be based on three layers. Each server in the cluster runs a software stack built on these layers; a simplified sketch of the stack follows the list.
- The first layer is the persistent storage layer. We base this layer on an object store, which provides advantages like extreme scalability. However, the layer must be strictly consistent in itself.
- The virtual file system is the heart of any scale-out NAS. It is in this second layer that features like caching, locking, tiering, quota and snapshots are handled.
- The third layer contains protocols such as SMB and NFS, as well as integration points for hypervisors.
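The sketch below shows how the three layers could stack on a single server, with each layer talking only to the one beneath it. The class and method names (ObjectStore, VirtualFileSystem, ProtocolFrontend) are illustrative placeholders, not an actual product interface.

```python
# Hypothetical sketch of the three-layer stack on one cluster node.

class ObjectStore:
    """Layer 1: persistent, strictly consistent object storage."""

    def __init__(self):
        self._objects = {}

    def put(self, key, blob):
        self._objects[key] = blob

    def get(self, key):
        return self._objects[key]


class VirtualFileSystem:
    """Layer 2: caching, locking, tiering, quotas and snapshots live here."""

    def __init__(self, store):
        self._store = store

    def write_file(self, path, data):
        # A real VFS would cache, lock and chunk; here we simply map path -> object key.
        self._store.put(f"file:{path}", data)

    def read_file(self, path):
        return self._store.get(f"file:{path}")


class ProtocolFrontend:
    """Layer 3: SMB/NFS endpoints and hypervisor integration sit on top."""

    def __init__(self, vfs):
        self._vfs = vfs

    def handle_nfs_write(self, path, data):
        self._vfs.write_file(path, data)

    def handle_smb_read(self, path):
        return self._vfs.read_file(path)


# Every server in the cluster runs the same stack, top to bottom.
node = ProtocolFrontend(VirtualFileSystem(ObjectStore()))
node.handle_nfs_write("/shared/plan.txt", b"Q3 roadmap")
print(node.handle_smb_read("/shared/plan.txt"))   # b'Q3 roadmap'
```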
It is very important to keep the architecture symmetrical and clean. If you manage to do that, many future architectural challenges will be much easier to solve.
Now, let’s take a closer look at the storage layer. Since the storage layer is based on an object store, we can now easily scale our storage solution. With a clean and symmetrical architecture, we can reach exabytes of data and trillions of files.
Ensuring redundancy is one of the responsibilities of the storage layer, so a fast and effective self-healing mechanism is needed. To keep the data footprint low in the data center, the storage layer also needs to support different file encodings: some favor performance, others reduce the footprint.
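As a rough illustration of why the choice of encoding matters for footprint, the back-of-the-envelope calculation below compares plain 3-way replication with an 8+3 erasure code. Both encodings are assumed examples; the article does not prescribe specific schemes.

```python
# Back-of-the-envelope comparison of two common file encodings.
# The 3-way replication and 8+3 erasure-code figures are example assumptions.

def raw_footprint_tb(logical_tb, encoding):
    if encoding == "replication-3x":
        return logical_tb * 3                 # three full copies of the data
    if encoding == "erasure-8+3":
        return logical_tb * (8 + 3) / 8       # 8 data fragments + 3 parity fragments
    raise ValueError(f"unknown encoding: {encoding}")


for enc in ("replication-3x", "erasure-8+3"):
    print(f"{enc}: 100 TB of data occupies {raw_footprint_tb(100, enc):.1f} TB raw")
# replication-3x: 100 TB of data occupies 300.0 TB raw
# erasure-8+3: 100 TB of data occupies 137.5 TB raw
```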
Metadata: What Is It and Where to Store It?
A very important piece of the virtual file system is metadata. In a virtual file system, metadata are pieces of information that describe the structure of the file system. For example, one metadata file can describe which files and folders a single folder contains, which means we will have one metadata file for each folder in our virtual file system. As the virtual file system grows, we get more and more metadata files.
Some users prefer centralized storage of metadata. For smaller setups that might be a good solution, but here we are talking about scale-out, so let’s look at where not to store metadata. Keeping all metadata on a single server leads to poor scalability, poor performance and poor availability. Since our storage layer is based on an object store, a better place for the metadata is in the object store itself, particularly when the quantity of metadata is high. This ensures good scalability, good performance and good availability.
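The sketch below illustrates the idea of one metadata object per folder, stored in the same object store as the file data. The key scheme, the JSON layout and the entry fields are hypothetical choices made for the example only.

```python
# Hypothetical sketch: one metadata object per folder, kept in the object store.

import hashlib
import json


def metadata_key(folder_path):
    # Hashing the path spreads metadata objects evenly across the object store,
    # so no single server becomes a metadata hotspot.
    return "meta:" + hashlib.sha256(folder_path.encode()).hexdigest()


def folder_metadata(entries):
    """Describes what a single folder contains; one such object per folder."""
    return json.dumps({"entries": entries}).encode()


object_store = {}   # stand-in for the strictly consistent object store

object_store[metadata_key("/projects")] = folder_metadata(
    [{"name": "report.docx", "type": "file", "size": 18432},
     {"name": "archive", "type": "folder"}]
)

print(json.loads(object_store[metadata_key("/projects")]))
```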
Cache: Increasing Performance
Software-defined storage solutions need caching devices to increase performance. From a storage perspective, both the speed and the size of the cache matter, as does price; finding the sweet spot is important. For an SDS solution, it is also important to protect cached data at a higher level by replicating it to another node before destaging it to the storage layer.
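The following sketch walks through that caching path: a write lands in the local cache, is mirrored to a peer node’s cache before being acknowledged, and is destaged to the persistent layer later. The WriteCache class and its methods are illustrative assumptions, not a real implementation.

```python
# Conceptual sketch of a protected write cache (illustrative names only).

class WriteCache:
    def __init__(self, object_store):
        self.object_store = object_store
        self.dirty = {}          # writes held on fast cache media, not yet destaged
        self.peer = None         # cache on another node, used for protection

    def write(self, path, data):
        self.dirty[path] = data
        if self.peer is not None:
            self.peer.dirty[path] = data      # replicate before acknowledging
        return "ack"                          # the write now survives a node failure

    def destage(self):
        # Later, flush cached writes down to the persistent storage layer.
        for path, data in self.dirty.items():
            self.object_store[path] = data
        self.dirty.clear()
        if self.peer is not None:
            self.peer.dirty.clear()           # the peer's protection copy is obsolete


store = {}
cache_a, cache_b = WriteCache(store), WriteCache(store)
cache_a.peer = cache_b

cache_a.write("/vm/disk0.img", b"...")
cache_a.destage()
print(list(store))                            # ['/vm/disk0.img']
```

The point of the design is that the acknowledgment depends only on having two in-cache copies, so a single node failure before destaging does not lose data.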
Supporting multiple file systems and domains becomes more important as the storage solution grows in both capacity and features, particularly in virtual or cloud environments. Supporting multiple protocols is just as important: different applications and use cases prefer different protocols, and sometimes the same data must be accessible across several of them.
Virtual Environments
The cloud element of the hybrid cloud naturally requires support for hypervisors. Therefore, the scale-out NAS needs to be able to run hyper-converged as well. Being software-defined makes sense here.
If we have a flat architecture with no external storage systems, then the scale-out NAS must be able to run as a virtual machine and make use of the hypervisor host’s physical resources. The guest virtual machines’ (VMs) images and data are stored in the virtual file system that the scale-out NAS provides. The guest VMs can also use this file system to share files among themselves, making it a good fit for VDI environments as well.
Now, why is it important to support many protocols? In a virtual environment, many different applications run side by side, each with its own protocol needs. Supporting many protocols keeps the architecture flat and, to some extent, lets applications that speak different protocols share the same data.
Being software-defined, supporting both fast and energy-efficient hardware, offering an architecture that lets us start small and scale up, running on bare metal as well as in virtual environments, and supporting all major protocols add up to a very flexible and useful storage solution.
Hybrid Cloud: Features
Each site has its own independent file system. A likely scenario is that different offices need both a private area and an area they share with other branches, so only parts of the file system will be shared with others.
Selecting a section of a file system and letting other sites mount it at any given point in their own file systems provides the flexibility needed to scale the file system beyond the four walls of the office. Synchronization is performed at the file system level so that all sites have a consistent view of the shared data. Being able to specify different file encodings at different sites is also useful, for example if one site is used as a backup target; the configuration sketch below illustrates both ideas.
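Here is a hypothetical configuration sketch of that scenario: one office exports part of its file system, a second office mounts it, and a third site acts as a backup target with a space-efficient encoding. The site names, keys and encoding labels are invented for the example.

```python
# Hypothetical multi-site sharing configuration (keys and values are illustrative).

sites = {
    "london": {
        "file_system": "fs-london",
        "shares": [{"path": "/shared/engineering",
                    "exported_to": ["stockholm", "backup-site"]}],
        "encoding": "replication-3x",          # optimized for performance
    },
    "stockholm": {
        "file_system": "fs-stockholm",
        "mounts": [{"from_site": "london",
                    "remote_path": "/shared/engineering",
                    "mount_point": "/remote/london-eng"}],
        "encoding": "replication-3x",
    },
    "backup-site": {
        "file_system": "fs-backup",
        "mounts": [{"from_site": "london",
                    "remote_path": "/shared/engineering",
                    "mount_point": "/backups/london-eng"}],
        "encoding": "erasure-8+3",             # lower footprint for a backup target
    },
}

# Synchronization happens at the file-system level, so every site sees the same
# view of /shared/engineering regardless of its local encoding.
print(sites["backup-site"]["mounts"][0]["mount_point"])
```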
Looking Ahead
This creates a next-generation hybrid cloud system. The result? Clean, efficient and linear scaling up to exabytes of data. A single file system spans all servers, offering multiple entry points and removing potential performance bottlenecks. The solution includes native protocol support, flash support for high performance and flexible scale-out by adding nodes. With a scale-out NAS, you have much better control over your investments and expansion in your data center.