Variational Autoencoder with Arbitrary Conditioning
Abstract
We propose a single neural probabilistic model, based on a variational autoencoder, that can be conditioned on an arbitrary subset of observed features and then sample the remaining features in "one shot". The features may be both real-valued and categorical. The model is trained by stochastic variational Bayes. Experimental evaluation on synthetic data, as well as on feature imputation and image inpainting problems, shows the effectiveness of the proposed approach and the diversity of the generated samples.
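The core mechanism described in the abstract is conditioning on an arbitrary observed subset: at training time a binary mask selects which features are observed, the unobserved entries are hidden, and the mask itself is fed to the networks alongside the masked input. A minimal sketch of that input construction in NumPy (the function name and shapes are illustrative assumptions, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_conditioning_input(x, mask):
    """Build the conditioning input for a mask-conditioned VAE-style model.

    Unobserved features (mask == False) are zeroed out, and the binary
    mask is concatenated so the network knows which entries are observed.
    This is a sketch of the general idea, not the authors' implementation.
    """
    x_obs = np.where(mask, x, 0.0)               # hide unobserved features
    return np.concatenate([x_obs, mask.astype(float)], axis=-1)

# Sample an arbitrary observed subset per example with a Bernoulli mask.
x = rng.normal(size=(4, 3))                       # 4 examples, 3 features
mask = rng.random((4, 3)) < 0.5                   # True = observed
cond = make_conditioning_input(x, mask)
print(cond.shape)                                 # (4, 6): features + mask
```

At test time the same construction is used with the actually observed features, and the decoder samples the missing ones conditioned on this input.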
Forward citations
Cited by 1 Pith paper
- Missingness-aware Data Imputation via AI-powered Bayesian Generative Modeling
MissBGM jointly models data generation and missingness in a Bayesian neural generative framework to produce consistent imputations with principled posterior uncertainty.