The rapid change taking place in software-defined applications and infrastructure was one of the top 10 IT tech trends to watch, according to Gartner last year. Certainly, some advancements have been made in this area over the past year. But how far does the industry still need to go?
Here are 5 trends that will shape application and infrastructure availability in 2016:
1. There will still be misconceptions about availability, even across many different industries.
Some IT professionals mistakenly perceive availability to be a “one- or two-size-fits-all” proposition. It really isn’t. Different applications have a wide range of needs, and customers can get into trouble when they assume that a few differences in the underlying infrastructure mean one type of availability must work for everything. But when you start to look at availability as a service (a notion we call software-defined availability), applications automatically get the availability they need based on their requirements. That lets customers develop new applications faster, test them more efficiently, and more effectively meet their operational service level agreements.
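To make that idea concrete, here is a minimal sketch of what “declaring” availability rather than building it by hand might look like. Everything in it (the AvailabilityPolicy class, the provision function, the thresholds) is a hypothetical illustration, not an actual product API:

```python
# Sketch of "availability as a service": the application declares what it
# needs, and the platform decides how to deliver it. All names and thresholds
# here are illustrative assumptions, not a real interface.
from dataclasses import dataclass


@dataclass
class AvailabilityPolicy:
    target_uptime: float          # e.g. 0.999 vs. 0.99999
    max_data_loss_seconds: int    # recovery point objective
    max_downtime_seconds: int     # recovery time objective


def provision(app_name: str, policy: AvailabilityPolicy) -> str:
    """Map a declared policy to an availability mechanism (illustrative only)."""
    if policy.target_uptime >= 0.99999:
        return f"{app_name}: fault-tolerant pair with synchronous replication"
    if policy.target_uptime >= 0.999:
        return f"{app_name}: clustered failover with periodic snapshots"
    return f"{app_name}: single instance with nightly backup"


if __name__ == "__main__":
    print(provision("billing", AvailabilityPolicy(0.99999, 0, 30)))
    print(provision("reporting", AvailabilityPolicy(0.99, 3600, 14400)))
```

The point of the sketch is the inversion of responsibility: the application owner states requirements, and the platform, not a one-size-fits-all infrastructure standard, chooses the mechanism.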
2. These misconceptions will vary by geographic area.
The bias toward standardized infrastructure is more pronounced in North America, largely because it is easier to centralize IT there than in other regions. Outside of the U.S., networking and IT skills are more distributed, and there may be different regulatory controls. That makes it harder to standardize systems, which naturally pushes these customers to think differently about what they deploy, why they are deploying it, and how they treat availability.
3. Availability is really going to evolve in 2016.
People are really starting to “get” the need now. They feel the pain when they try to deploy newer technologies such as software-defined networking (SDN) and network functions virtualization (NFV), or try to retrofit existing data-center technologies at the edge. The two environments are apples and oranges. People are demanding new ideas, and the software-defined approach is one example.
4. Big changes are coming for availability at the edge of the network.
In the data center, the trend reflects concepts that have been mainstream for years. The hyper-converged market puts a lot of value on being “highly reliable, simple to use and manage, and built from premium hardware,” which is very much in line with what we have always done with our platforms, so we think about these concepts a lot. At the edge, the change is less technical but just as profound: the notion of low-touch at the edge is giving way to no-touch. For example, a hydroelectric production facility may have a couple of people on site who can reboot a server or do some basic administration. A wind farm, by contrast, has no one there day-to-day, so everything has to be no-touch. That is a big, big change. I predict that many of the automation technologies emerging in the cloud will have a role to play at the edge because of this.
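As a rough illustration of what “no-touch” means in practice, the watchdog loop below restarts a failed local service without waiting for an operator. The service name and health-check URL are hypothetical placeholders, and real edge platforms would layer far more on top of this:

```python
# Illustrative no-touch remediation loop for an unmanned edge site: if a local
# service stops responding, restart it automatically instead of paging someone.
# SERVICE and HEALTH_URL are assumed placeholders for this sketch.
import subprocess
import time
import urllib.request

SERVICE = "edge-gateway"                      # hypothetical local service
HEALTH_URL = "http://localhost:8080/health"   # hypothetical health endpoint


def healthy() -> bool:
    """Return True if the local service answers its health check."""
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False


while True:
    if not healthy():
        # No one is on site, so the watchdog takes the corrective action itself.
        subprocess.run(["systemctl", "restart", SERVICE], check=False)
    time.sleep(60)
```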
5. IT teams are going to have to go back to the basics as they think about how availability fits into this new, unfamiliar world of software.
They can no longer blindly trust that what they already have in place is sufficient for everything, because in reality it probably isn’t. They need to think about what users expect, how they consume the technology, and what the impacts are beyond the SLA that IT has signed up for. And if all you need is vanilla technology rather than something customer-focused, expect that workload to end up in the public cloud.
Although some companies have made progress in creating new software-based infrastructure, many still struggle to rethink availability in a software-based world. Old hardware-based approaches to system availability and reliability may not provide the same value they once did. IT teams should consider these 5 trends as they evaluate their infrastructure and applications for the coming year.