pith. machine review for the scientific record.

arxiv: 1709.07871 · v2 · submitted 2017-09-22 · 💻 cs.CV · cs.AI · cs.CL · stat.ML

Recognition: unknown

FiLM: Visual Reasoning with a General Conditioning Layer

Aaron Courville, Ethan Perez, Florian Strub, Harm de Vries, Vincent Dumoulin

Authors on Pith: no claims yet
classification 💻 cs.CV · cs.AI · cs.CL · stat.ML
keywords: film, reasoning, conditioning, layers, visual, feature-wise, neural, ablations
original abstract

We introduce a general-purpose conditioning method for neural networks called FiLM: Feature-wise Linear Modulation. FiLM layers influence neural network computation via a simple, feature-wise affine transformation based on conditioning information. We show that FiLM layers are highly effective for visual reasoning - answering image-related questions which require a multi-step, high-level process - a task which has proven difficult for standard deep learning methods that do not explicitly model reasoning. Specifically, we show on visual reasoning tasks that FiLM layers 1) halve state-of-the-art error for the CLEVR benchmark, 2) modulate features in a coherent manner, 3) are robust to ablations and architectural modifications, and 4) generalize well to challenging, new data from few examples or even zero-shot.
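The abstract describes FiLM as a simple, feature-wise affine transformation of network activations, with the scale and shift coefficients predicted from conditioning information. A minimal sketch of that idea in NumPy (the single-linear-layer conditioner below is a placeholder assumption; in the paper the coefficients come from a learned conditioning network such as a question encoder):

```python
import numpy as np

def film(features, gamma, beta):
    """Feature-wise Linear Modulation: a per-channel affine transform
    of activations, gamma * x + beta, broadcast over spatial positions.

    features: (batch, channels, height, width) activations
    gamma, beta: (batch, channels) conditioning coefficients
    """
    return gamma[:, :, None, None] * features + beta[:, :, None, None]

rng = np.random.default_rng(0)
cond_dim, channels = 8, 4

# Hypothetical conditioner: one linear map from a conditioning vector
# (e.g. a question embedding) to the 2*channels FiLM coefficients.
W = rng.standard_normal((cond_dim, 2 * channels))
cond = rng.standard_normal((2, cond_dim))
gamma_beta = cond @ W
gamma, beta = gamma_beta[:, :channels], gamma_beta[:, channels:]

x = rng.standard_normal((2, channels, 5, 5))  # image feature maps
y = film(x, gamma, beta)
assert y.shape == x.shape
```

Because the modulation is feature-wise, the number of conditioning parameters scales with the channel count rather than the spatial size, which is part of why the layer is cheap to add throughout a network.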

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 11 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Discrete Diffusion for Complex and Congested Multi-Agent Path Finding with Sparse Social Attention

    cs.AI 2026-05 unverdicted novelty 7.0

    DiffLNS uses a discrete diffusion initializer to produce warm-start plans that lift LNS2 success rates to 95.8% across 20 congested MAPF settings, generalizing from 96 to 312 agents.

  2. SeBA: Semi-supervised few-shot learning via Separated-at-Birth Alignment for tabular data

    cs.LG 2026-05 unverdicted novelty 7.0

    SeBA is a joint-embedding framework that separates tabular data into two complementary views and aligns one view's representations to the nearest-neighbor structure of the other, improving feature-label relationships ...

  3. Diffusion Models Beat GANs on Image Synthesis

    cs.LG 2021-05 accept novelty 7.0

    Diffusion models with architecture improvements and classifier guidance achieve superior FID scores to GANs on unconditional and conditional ImageNet image synthesis.

  4. MoMo: Conditioned Contrastive Representation Learning for Preference-Modulated Planning

    cs.LG 2026-05 unverdicted novelty 6.0

    MoMo uses Feature-Wise Linear Modulation and low-rank neural modulation to condition contrastive planning representations on user preferences while preserving inference efficiency and probability density ratios.

  5. Self-Supervised Spatial And Zero-Shot Angular Super-Resolution by Spatial-Angular Implicit Representation For Rotating-View SNR-Efficient Diffusion MRI

    cs.CV 2026-05 unverdicted novelty 6.0

    SA-INR achieves self-supervised spatial super-resolution and zero-shot angular super-resolution in rotating-view dMRI from single-view acquisitions per direction, reaching 34.82 dB PSNR on trained and 33.08 dB on unseen ...

  6. Lucid-XR: An Extended-Reality Data Engine for Robotic Manipulation

    cs.RO 2026-04 unverdicted novelty 6.0

    Lucid-XR uses XR-headset physics simulation and physics-guided video generation to create synthetic data that trains robot policies transferring zero-shot to unseen real-world manipulation tasks.

  7. The Override Gap: A Magnitude Account of Knowledge Conflict Failure in Hypernetwork-Based Instant LLM Adaptation

    cs.LG 2026-04 conditional novelty 6.0

    Knowledge conflicts in hypernetwork LLM adaptation stem from constant adapter margins losing to frequency-dependent pretrained margins; selective layer boosting and conflict-aware triggering raise deep-conflict accuracy ...

  8. Train-Small Deploy-Large: Leveraging Diffusion-Based Multi-Robot Planning

    cs.RO 2026-04 unverdicted novelty 6.0

    A diffusion-based multi-robot planner trained on few agents generalizes to larger numbers during deployment using inter-agent attention and temporal convolution.

  9. AE-ViT: Stable Long-Horizon Parametric Partial Differential Equations Modeling

    cs.LG 2026-04 unverdicted novelty 6.0

    AE-ViT combines a convolutional autoencoder with a latent-space transformer and multi-stage parameter plus coordinate injection to deliver stable long-horizon predictions for parametric PDEs, cutting relative rollout ...

  10. Federated Medical Image Classification under Class and Domain Imbalance exploiting Synthetic Sample Generation

    cs.CV 2026-04 unverdicted novelty 5.0

    FedSSG generates and shares synthetic samples within a federated setup to reduce class imbalance and domain shift problems in medical image classification.

  11. The Override Gap: A Magnitude Account of Knowledge Conflict Failure in Hypernetwork-Based Instant LLM Adaptation

    cs.LG 2026-04 unverdicted novelty 5.0

    Knowledge conflicts in hypernetwork LLM adaptation stem from constant adapter margins losing to frequency-dependent pretrained margins; selective layer boosting and conflict-aware triggering close the gap.