Machine learning is fundamentally changing the ways that people build and maintain software.
We are a CS research group at Stanford, led by Professor Chris Ré, interested in understanding these shifts and building the foundations for the next generation of machine learning systems. On the machine learning side, we're fascinated by how we can learn from increasingly weak forms of supervision, and by the mathematical foundations of these techniques. On the systems side, we want to exploit our theoretical insights to help people more effectively build, validate, and maintain machine learning models. And we are most excited when we can do both at the same time (e.g., Snorkel).
Check out our Software 2.0 blog post for an overview of our work and the future directions we're especially excited about, and browse some of our active projects below!
Model validation and maintenance are critical parts of the model deployment pipeline but are still poorly understood. Major challenges include monitoring critical data slices and accounting for problems like hidden stratification, where coarse or incomplete class labels hide subclasses on which a model underperforms.
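A minimal sketch of the idea, not the group's actual tooling: a model can look fine in aggregate while failing badly on a hidden subclass, which only slice-level evaluation reveals. The function name `slice_accuracy` and the toy data below are hypothetical.

```python
# Illustrative sketch (hypothetical helper, toy data): evaluating a model
# on critical data slices to surface hidden stratification.

def slice_accuracy(y_true, y_pred, slice_mask):
    """Accuracy restricted to the examples where slice_mask is True."""
    pairs = [(t, p) for t, p, m in zip(y_true, y_pred, slice_mask) if m]
    if not pairs:
        return None
    return sum(t == p for t, p in pairs) / len(pairs)

# Toy data: a coarse positive class hides a rare subclass; the model does
# well overall but fails on every rare-subclass example.
y_true = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
rare_subclass = [False, False, False, False, True, True,
                 False, False, False, False]

overall = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
rare = slice_accuracy(y_true, y_pred, rare_subclass)
print(overall)  # 0.8
print(rare)     # 0.0
```

Aggregate accuracy alone would report 80% and hide the total failure on the rare slice, which is exactly the monitoring gap the project targets.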
A necessary component of any language-based task is understanding the words and entities in the text. Models access this information through embeddings, and their performance depends greatly on the embeddings used. Our research on embeddings aims to understand and develop techniques for both generating and using embeddings to improve model performance.
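A minimal sketch of what an embedding provides: words are mapped to dense vectors so that semantically related words lie close together, typically measured by cosine similarity. The tiny hand-made vectors below are hypothetical, not real pretrained embeddings.

```python
import math

# Illustrative sketch: models consume words/entities as dense vectors
# (embeddings). These 3-d vectors are made up for the example.
embeddings = {
    "king":  [0.90, 0.80, 0.10],
    "queen": [0.85, 0.82, 0.15],
    "apple": [0.10, 0.20, 0.95],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Related words should be closer than unrelated ones.
print(cosine(embeddings["king"], embeddings["queen"]) >
      cosine(embeddings["king"], embeddings["apple"]))  # True
```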
When humans engage with images, their eye movements passively provide a rich source of information that can be useful for downstream image classification. In this project, we investigate techniques for training models with this passively collected data, an approach we call observational supervision.
Data augmentation is a critical component of building machine learning models, but practitioners still rely on manual, hand-defined transformations. In this project, we explore how to automate the art of data augmentation, in theory and in practice.
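A minimal sketch of the manual practice the project aims to automate: a practitioner hand-writes transformation functions and applies them to expand the training set. The transforms below (`flip`, `brighten`) are hypothetical examples, not the project's learned policies.

```python
# Illustrative sketch of hand-defined augmentations on a tiny 2x2 "image"
# represented as a list of rows of pixel values.

def flip(image):
    """Horizontally flip the image."""
    return [list(reversed(row)) for row in image]

def brighten(image, delta=1):
    """Add a constant to every pixel."""
    return [[px + delta for px in row] for row in image]

def augment(image, transforms):
    """Apply each hand-defined transform, yielding one new example each."""
    return [t(image) for t in transforms]

image = [[0, 1],
         [2, 3]]
augmented = augment(image, [flip, brighten])
print(augmented)  # [[[1, 0], [3, 2]], [[1, 2], [3, 4]]]
```

Choosing which transforms to write, and with what parameters, is exactly the manual step that learned augmentation seeks to replace.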
FlyingSquid is a first step towards enabling more rapid and iterative model development cycles. In this project, we focus on reducing the turnaround time for generating training labels, speeding up key parts of the model creation pipeline by orders of magnitude.
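A minimal sketch of the training-label generation step, with simple majority vote standing in for FlyingSquid's closed-form probabilistic label model: several noisy labeling functions vote on each example, and their votes are aggregated into a single training label. The code below is an illustration, not FlyingSquid's API.

```python
from collections import Counter

# Illustrative sketch: aggregating noisy labeling-function votes into
# training labels. Votes are in {-1, +1}, with 0 meaning "abstain".

def majority_vote(votes):
    """Return the most common non-abstain vote, or 0 if all abstain."""
    counts = Counter(v for v in votes if v != 0)
    if not counts:
        return 0
    return counts.most_common(1)[0][0]

# Rows are examples; columns are three labeling functions.
L = [
    [+1, +1,  0],
    [-1,  0, -1],
    [ 0,  0,  0],
]
labels = [majority_vote(row) for row in L]
print(labels)  # [1, -1, 0]
```

A probabilistic label model improves on this by estimating each labeling function's accuracy and correlations, so votes are weighted rather than counted equally.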
How do we move from manually specifying augmentations to learning them? How do we best make use of augmentations during training? We extend work on domain translation and robust training to demonstrate the benefits of learned augmentations for producing invariant classifier representations.
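A minimal sketch of what an invariance objective measures, under toy assumptions: penalize the distance between a model's representation of an example and of its augmented copy, so that training pushes the representation to ignore the augmentation. The "representation" here is a hand-written toy function, not a trained network.

```python
# Illustrative sketch (toy feature map, hypothetical names): an
# invariance penalty between an example and its augmented copy.

def representation(x, ignore_shift=True):
    """Toy feature map: sorting makes it permutation-invariant;
    subtracting the mean additionally makes it shift-invariant."""
    feats = sorted(x)
    if ignore_shift:
        mean = sum(feats) / len(feats)
        feats = [f - mean for f in feats]
    return feats

def invariance_penalty(x, x_aug, ignore_shift=True):
    """Squared distance between representations of x and its augmentation."""
    r = representation(x, ignore_shift)
    r_aug = representation(x_aug, ignore_shift)
    return sum((a - b) ** 2 for a, b in zip(r, r_aug))

x = [1.0, 2.0, 3.0]
x_shifted = [v + 0.5 for v in x]  # stand-in for a learned augmentation

print(invariance_penalty(x, x_shifted))             # 0.0 (invariant)
print(invariance_penalty(x, x_shifted, False) > 0)  # True (not invariant)
```

In actual training, this kind of penalty is added to the classification loss so the learned representation becomes invariant to the learned augmentations.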