Abstract:
There is a clear need today for processing extremely large data volumes.
This is especially true in scientific data management, where we soon expect
data inputs on the order of multiple petabytes.
However, current data management technology is not suitable for such data sizes.
In light of such new database applications, we can rethink some of the strict
requirements that database systems adopted in the past.
We argue that correctness is one such requirement: insisting on exact results is a major source of performance degradation.
In this paper, we propose a new paradigm for building database kernels
that may produce wrong, but fast, cheap, and indicative results.
Fast response times are an essential component of exploratory data analysis;
fast queries enable
the user to develop a ``feeling'' for the data through a series of ``painless'' queries, which eventually leads
to more detailed analysis of a targeted data area.
We propose a research path in which a database kernel autonomously, and on the fly,
decides to reduce the processing requirements of a running query
based on workload, hardware, and
environmental parameters.
This requires a complete redesign of database operators
and of the query processing strategy.
For example, typical scenarios where query processing performance degrades significantly
are cases where a database operator has to spill data
to disk, is forced to perform random access, or has to follow long linked lists.
Here we ask the question: what if we simply avoid these steps, ``ignoring'' the side effects
on the correctness of the result?
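As a purely illustrative sketch (our own, not part of the proposal above), the following C++ fragment shows what such an operator-level shortcut might look like: a hash-join probe that stops following a bucket's collision chain after a fixed number of hops. Matches hiding deeper in the chain are knowingly dropped, trading completeness of the result for a bounded, cache-friendly probe cost; the data structure, names, and the MAX_CHAIN knob are all assumptions.

\begin{verbatim}
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical chained hash table whose probe gives up after
// MAX_CHAIN pointer hops, so the join result may be incomplete.
struct Node { std::int64_t key; std::int64_t payload; std::int32_t next; };

struct ApproxHashTable {
    std::vector<std::int32_t> buckets;  // head index per bucket, -1 if empty
    std::vector<Node> nodes;

    explicit ApproxHashTable(std::size_t nbuckets) : buckets(nbuckets, -1) {}

    void insert(std::int64_t key, std::int64_t payload) {
        std::size_t b = static_cast<std::size_t>(key) % buckets.size();
        nodes.push_back({key, payload, buckets[b]});  // prepend to chain
        buckets[b] = static_cast<std::int32_t>(nodes.size() - 1);
    }

    static constexpr int MAX_CHAIN = 4;  // assumed tuning knob

    // Probe that "ignores" correctness: stop after MAX_CHAIN hops and
    // silently drop any matches that live deeper in the chain.
    template <typename Emit>
    void probe(std::int64_t key, Emit emit) const {
        std::size_t b = static_cast<std::size_t>(key) % buckets.size();
        int hops = 0;
        for (std::int32_t i = buckets[b]; i != -1 && hops < MAX_CHAIN;
             i = nodes[i].next, ++hops) {
            if (nodes[i].key == key) emit(nodes[i].payload);
        }
    }
};
\end{verbatim}

A kernel in the spirit of this paper could tune such a knob on the fly, for instance lowering MAX_CHAIN under heavy load and restoring exact chain traversal when resources allow.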