In many domains, data now arrives faster than we are able to learn from it. To avoid wasting this data, we must switch from the traditional "one-shot" machine learning approach to systems that can mine continuous, high-volume, open-ended data streams as they arrive. We have identified a set of desiderata for such systems and developed an approach to building stream mining algorithms that satisfies all of them. The approach is based on explicitly minimizing the number of examples used in each learning step, while guaranteeing that user-defined targets for predictive performance are met. So far, we have applied this approach to four major (and widely differing) types of learner: decision tree induction, Bayesian network learning, k-means clustering, and the EM algorithm for mixtures of Gaussians. Our versions of these algorithms are able to mine orders of magnitude more data than the best previous algorithms (e.g., our decision tree learner can mine on the order of a billion examples per day on an ordinary PC). We are currently applying our approach to the difficult problem of large-scale relational learning and have already obtained an order-of-magnitude speedup on a Web prediction task. We have released a beta version of the VFML toolkit with our current suite of stream mining algorithms. Our ultimate goal is to develop a set of primitives (or, more generally, a language) such that any learning algorithm built using them scales automatically to arbitrarily large data streams.
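To make the per-step guarantee concrete: in our decision tree learner it is an instance of the Hoeffding bound, which says that after n independent examples the true mean of a variable with range R lies within epsilon = sqrt(R^2 ln(1/delta) / 2n) of the observed mean with probability at least 1 - delta. The following is a minimal Python sketch of that decision rule, not the VFML implementation; the function names and the example numbers are purely illustrative.

    import math

    def hoeffding_bound(value_range, delta, n):
        # With probability at least 1 - delta, the true mean of a random
        # variable with range `value_range` lies within this epsilon of
        # the mean observed over n independent examples.
        return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2.0 * n))

    def sufficient_examples(gain_best, gain_second, value_range, delta, n):
        # Commit to the best split attribute only once its observed gain
        # advantage over the runner-up exceeds the Hoeffding epsilon;
        # otherwise keep reading examples from the stream.
        return (gain_best - gain_second) > hoeffding_bound(value_range, delta, n)

    # With two classes, information gain has range 1 bit.
    print(sufficient_examples(0.30, 0.25, 1.0, delta=1e-7, n=2000))   # False: keep streaming
    print(sufficient_examples(0.30, 0.25, 1.0, delta=1e-7, n=10000))  # True: split now

The pattern, reading examples only until a statistical bound certifies the decision, is what allows each learning step to use close to the minimum number of examples compatible with the user's performance target.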