AI and ML products now permeate every aspect of our digital lives, from recommending what to watch, to divining our search intent, to powering increasingly present virtual assistants in consumer and enterprise settings. While quality improvements are the main focus of traditional ML and AI research, a second and arguably less... [Read More]
Addressing Hidden Stratification: Fine-Grained Robustness in Coarse-Grained Classification Problems
The classes in classification tasks are often composed of finer-grained subclasses. Models trained using only the coarse-grained class labels tend to exhibit highly variable performance across different subclasses. Moreover, the subclasses are often unknown ahead of time, making it difficult to identify and reduce such performance gaps. This hidden stratification... [Read More]
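To make the failure mode concrete, here is a minimal sketch of how per-subclass evaluation exposes a gap that the coarse-grained metric hides. The labels, predictions, and `subclass` array are all made up; `subclass` stands in for fine-grained annotations that, as the post notes, are usually unknown at training time:

```python
# Minimal sketch: a model looks fine on the coarse metric while
# failing completely on one hidden subclass. All data is illustrative.
import numpy as np

y_true   = np.array([1, 1, 1, 1, 0, 0, 0, 0])                 # coarse labels
y_pred   = np.array([1, 1, 0, 0, 0, 0, 0, 0])                 # model predictions
subclass = np.array(["a", "a", "b", "b", "c", "c", "c", "c"])  # hidden strata

overall_acc = (y_true == y_pred).mean()
print(f"overall accuracy: {overall_acc:.2f}")                  # 0.75 looks acceptable

# Per-subclass accuracy reveals the stratification the coarse metric hides:
# subclass "b" is misclassified every time.
for s in np.unique(subclass):
    mask = subclass == s
    acc = (y_true[mask] == y_pred[mask]).mean()
    print(f"subclass {s}: accuracy {acc:.2f} on {mask.sum()} examples")
```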
Ivy: Instrumental Variable Synthesis for Causal Inference
In science and medicine, randomized controlled experiments (RCEs) are a reliable way to measure cause-and-effect relationships. However, RCEs are not always practical due to cost, time, ethics, and other concerns; a popular alternative is to use instrumental variables (IVs), variables in observational data that resemble the behavior of... [Read More]
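For readers new to IVs, below is a minimal sketch of the classic two-stage least squares (Wald) estimator on synthetic data. This illustrates standard IV estimation, not the Ivy method itself, and every variable in it is an assumption made up for the example:

```python
# Minimal sketch of IV estimation via two-stage least squares (2SLS).
# Synthetic data: z is a valid instrument (affects x, but affects y only
# through x), u is an unobserved confounder, and the true effect is 2.0.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
u = rng.normal(size=n)                        # unobserved confounder
z = rng.normal(size=n)                        # instrument
x = 0.8 * z + u + rng.normal(size=n)          # treatment, confounded by u
y = 2.0 * x + 3.0 * u + rng.normal(size=n)    # outcome; true causal effect = 2.0

# Naive regression of y on x is biased upward by the confounder u.
naive = np.cov(x, y)[0, 1] / np.cov(x, x)[0, 1]

# 2SLS / Wald estimator: scale the instrument-outcome covariance by the
# instrument-treatment covariance, which cancels the confounding.
iv_est = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

print(f"naive OLS: {naive:.2f}   2SLS: {iv_est:.2f}   (truth: 2.0)")
```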
Weak Supervision for Science and Medicine: A Year in Review
Jared Dunnmon and Chris Ré, with references to work by other members of Hazy Research.
While we are proud of the adoption that weak supervision has seen in industry, we are just as excited about the impact it can have in science and medicine. This post reviews recent work that leverages weak supervision to provide real-world value to scientists and clinicians. [Read More]
When Multi-Task Learning Works – And When It Doesn’t
Sen Wu, Hongyang Zhang and Chris Ré.
Multi-task learning applied to heterogeneous task data often yields suboptimal models, a phenomenon known as negative transfer. We provide conceptual insights into why negative transfer happens and, building on that explanation, propose methods to improve multi-task training. Based on our work at ICLR’20. [Read More]
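As background, here is a minimal sketch of hard parameter sharing, the standard multi-task setup in which negative transfer can arise: gradients from heterogeneous tasks pull the shared trunk in conflicting directions. The architecture, the two tasks, and the 0.3 loss weight are illustrative assumptions, not the method from the paper:

```python
# Minimal sketch of hard parameter sharing: a shared trunk with one head
# per task. Down-weighting a dissimilar task's loss (w = 0.3 here, a
# tunable knob) is one common mitigation for negative transfer.
import torch
import torch.nn as nn

class SharedTrunkMTL(nn.Module):
    def __init__(self, in_dim=16, hidden=32, n_classes=(2, 5)):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(hidden, c) for c in n_classes)

    def forward(self, x, task):
        return self.heads[task](self.trunk(x))

model = SharedTrunkMTL()
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One joint training step: both tasks' gradients update the shared trunk.
x0, y0 = torch.randn(8, 16), torch.randint(0, 2, (8,))
x1, y1 = torch.randn(8, 16), torch.randint(0, 5, (8,))
loss = loss_fn(model(x0, 0), y0) + 0.3 * loss_fn(model(x1, 1), y1)
opt.zero_grad()
loss.backward()
opt.step()
```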