Autoregressive Conditional Neural Processes

Conditional neural processes (CNPs; Garnelo et al., 2018a) are attractive meta-learning models which produce well-calibrated predictions and are trainable via a simple maximum likelihood procedure. Although CNPs have many advantages, they are unable to model dependencies in their predictions. Various works propose solutions to this, but these come at the cost of either requiring approximate inference or being limited to Gaussian predictions. In this work, we instead propose to change how CNPs are deployed at test time, without any modifications to the model or training procedure. Instead of making predictions independently for every target point, we autoregressively define a joint predictive distribution using the chain rule of probability, taking inspiration from the neural autoregressive density estimator (NADE) literature. We show that this simple procedure allows factorised Gaussian CNPs to model highly dependent, non-Gaussian predictive distributions. Perhaps surprisingly, in an extensive range of tasks with synthetic and real data, we show that CNPs in autoregressive (AR) mode not only significantly outperform non-AR CNPs, but are also competitive with more sophisticated models that are significantly more computationally expensive and challenging to train. This performance is remarkable given that AR CNPs are not trained to model joint dependencies. Our work provides an example of how ideas from neural distribution estimation can benefit neural processes, and motivates research into the AR deployment of other neural process models.
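To make the test-time procedure concrete, the Python sketch below rolls a factorised Gaussian CNP out autoregressively: each target point is predicted, a value is sampled from its one-point Gaussian predictive, and that value is appended to the context before the next point is predicted, so a full rollout carries dependencies between target points. The interface cnp_predict and its nearest-context-point placeholder body are hypothetical stand-ins for a trained CNP, not the authors' implementation; only the rollout loop illustrates the AR deployment described in the abstract.

import numpy as np

def cnp_predict(xc, yc, xt):
    # Hypothetical stand-in for a trained CNP's factorised Gaussian head.
    # In practice this would be a forward pass of the network; here a crude
    # nearest-context-point rule keeps the sketch self-contained and runnable.
    if len(xc) == 0:
        return np.zeros_like(xt), np.ones_like(xt)
    mean = np.array([yc[np.argmin(np.abs(xc - x))] for x in xt])
    std = np.full_like(xt, 0.1)
    return mean, std

def ar_sample(xc, yc, xt, rng):
    # One joint sample over target inputs xt via the chain rule of probability:
    # predict the next point, sample it, then condition on that sample when
    # predicting the following point.
    xc, yc = list(xc), list(yc)
    sample = []
    for x in xt:
        mean, std = cnp_predict(np.array(xc), np.array(yc), np.array([x]))
        y = rng.normal(mean[0], std[0])  # draw from the one-point Gaussian predictive
        sample.append(y)
        xc.append(x)                     # feed the sampled value back in as context
        yc.append(y)
    return np.array(sample)

rng = np.random.default_rng(0)
xc, yc = np.array([0.0, 1.0]), np.array([0.5, -0.2])  # observed context set
xt = np.linspace(0.0, 2.0, 5)                         # target inputs
print(ar_sample(xc, yc, xt, rng))                     # one dependent joint sample

Repeating such rollouts yields a dependent, generally non-Gaussian joint predictive even though each one-point predictive is Gaussian; note that the chain rule is applied in the order the target points are visited, so in principle the rollout depends on that ordering.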

Details

Publication status:
Published Online
Authors:
Bruinsma, Wessel P., Markou, Stratis, Requeima, James, Foong, Andrew Y. K., Andersson, Tom R., Vaughan, Anna, Buonomo, Anthony, Hosking, J. Scott, Turner, Richard E.

On this site: Scott Hosking, Tom Andersson
Date:
1 March, 2023
Journal/Source:
Link to published article:
https://doi.org/