Is the Ultimate ‘Data Deluge’ Finally Upon Us?


The “data deluge” is as familiar to IT professionals as it is clichéd, but what we are encountering today is very different. Will our typical response be enough?

Often hyped, the data deluge has been an omnipresent theme in IT circles for years as enterprises struggled to manage an ever-increasing barrage of data. After all, the data centers, enterprise networks, systems, hardware, and applications under IT’s purview always existed to use, move, and store the information that enables work to be done.

That’s why it can be argued that however one defines it, the data deluge is a continual cycle and the result of a simple fact: We will create and use as much data as we can effectively keep, manage, and protect.

Viewed through this lens, the data deluge isn’t a cause for concern at all—it’s simply a state that will continue to drive the innovation needed to keep data volumes in check.

On one level, history supports this view. While the rocket scientists of the Apollo space program would be in awe of the amount of information at our fingertips in today’s smartphones, they had their own data deluge to contend with: the need to include flight plan data in the Apollo Guidance Computer, which operated at about 0.0000002% of the speed of a current cellphone.

The same can be said of the first IBM mainframes. They kept pace with the information they were required to process, even as advancements in those same systems enabled more and more data to be created, gathered, and used. In a very real way, such innovations enabled the very data growth that led to their obsolescence.

More recently, we saw this same cyclical data deluge with the widespread adoption of the cloud. The growing data volumes that required IT departments to continually increase storage capacity made companies such as EMC some of the most profitable in history, and ultimately drove enterprises to embrace the cloud for the virtually unlimited capacity and unprecedented compute power it offered, power that enabled the creation of more data.

Today, we are seeing another data deluge take shape that is being driven by two transformative computing trends: AI and the dramatic increase in machine-generated data.

But this data deluge is different, and it raises these questions: Has our typical response to more data, namely creating faster chips and more powerful hardware (accompanied by transformative innovations such as Ethernet, virtualization, and Wi-Fi), finally met its match? And if so, what can we do about it?

The data deluge we now face took shape faster and at a scale that dwarfs those of the past, something readily apparent when one looks at its impact on data centers. Many operators rightfully worry that they don’t have the bandwidth, computing power, or electrical power to keep up with AI.

Research is beginning to put their concerns in stark focus, with the Financial Times finding that data centers in Northern Virginia, home to the world’s highest concentration of them, are using two-thirds more water now than they did in 2019. Similarly, AI is radically increasing power consumption, with Goldman Sachs predicting that data centers will use 160% more power by 2030. Cooling and power consumption are, of course, two leading indicators of how much data we are asking these facilities to handle. Tech stalwarts are even looking to nuclear power as an option, but that brings with it additional challenges and concerns.

Importantly though, AI is but one issue. IDC predicts that the billions of IoT (Internet of Things) devices already online will generate 90 zettabytes of data in 2025. It’s a reality that is already overwhelming wired, wireless, and satellite networks, even as edge computing grows in scope and the “Internet of Everything” becomes a reality.

What then should be done? Clearly, building more data centers that use more power and water is problematic, even a nonstarter in many areas. And that is before considering the impact such growth will have on the environment if data centers simply grow in scale, pipes get bigger, and more satellites fill the sky.

Yes, innovations in cooling technologies, coupled with more efficient, more powerful chips and servers, will help, but they will not be enough to tame the data deluge we now face. What is needed is a different approach, one that forces us to break with our long-standing response of building ever more powerful processors and relying on data compression schemes that are already inadequate.

What is needed is innovation that focuses on the data itself: solutions that make it lighter and denser and enable it to be more easily transferred, stored, and protected. How we respond to this demand, and the paradigm shift it will require, will ultimately determine whether we tame this data deluge as we did those that came before it, just as it will determine whether we realize the full potential of AI and a fully connected world where devices share information like never before.
