At the core of big data lie the three Vs: volume, velocity, and variety. What's required are solutions that can extract valuable insights from new sources such as social networks, email, sensors, connected devices, the web, and smartphones.
Addressing the three Vs are the combined forces of open source, with its rapid, crowd-sourced innovation; the cloud, with its virtually unlimited capacity and on-demand deployment options; and NoSQL database technologies, with their ability to handle unstructured, or schema-less, data.
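The schema-less flexibility of NoSQL document stores can be illustrated with a minimal Python sketch: each record is an independent JSON document, so records with entirely different fields can live side by side without a table definition or migration. The field names and values here are hypothetical.

```python
import json

# In a document store, records need not share a schema:
# each document carries only the fields it actually has.
docs = [
    {"user": "alice", "email": "alice@example.com"},                 # basic profile
    {"user": "bob", "sensors": {"temp_c": 21.5, "humidity": 0.4}},   # device readings
    {"user": "carol", "posts": ["big data!"], "followers": 1024},    # social data
]

# Each document serializes independently; adding a new field to one
# record requires no change to the others.
serialized = [json.dumps(d) for d in docs]
```

A relational database would force all three records into one table with many NULL columns; a document store simply accepts each shape as-is.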
The need for big data velocity imposes unique demands on the underlying compute infrastructure. The computing power required to quickly process huge volumes and varieties of data can overwhelm a single server or server cluster. Organizations must apply adequate compute power to big data tasks to achieve the desired velocity.
One of the most promising big data frameworks is Apache Hadoop, with organizations creating a flourishing and robust ecosystem around the solution.
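Hadoop's core programming model, MapReduce, can be sketched in plain Python: a map phase emits key-value pairs from each input record, and a reduce phase aggregates the pairs by key. This is a toy single-process sketch of the idea, not Hadoop's actual API; the function names and sample lines are illustrative.

```python
from collections import Counter
from itertools import chain

def map_phase(line):
    # Map: emit a (word, 1) pair for every word in one input line.
    return [(word.lower(), 1) for word in line.split()]

def reduce_phase(pairs):
    # Reduce: sum the counts for each word. In Hadoop, a shuffle step
    # first groups pairs by key across the cluster before reducing.
    counts = Counter()
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["big data big insights", "data at velocity"]
result = reduce_phase(chain.from_iterable(map_phase(l) for l in lines))
```

The appeal of the model is that the map and reduce steps are independent per record and per key, so Hadoop can spread them across many machines without changing the logic.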
Data lakes, streaming platforms, and a host of other tools are emerging in the marketplace, promising to give users the insights they need, often in real time.
Public cloud computing has also emerged as a primary vehicle for hosting big data analytics projects. A public cloud provider can store petabytes of data and spin up thousands of servers just long enough to complete a big data project.
Best Big Data Platform
MongoDB
FINALISTS
Cloudera Enterprise
MarkLogic