pith. machine review for the scientific record.

arxiv: 1703.00395 · v1 · submitted 2017-03-01 · 📊 stat.ML · cs.CV

Recognition: unknown

Lossy Image Compression with Compressive Autoencoders

Authors on Pith: no claims yet
classification: stat.ML, cs.CV
keywords: autoencoders, compression, computationally, image, images, loss, lossy, need

Original abstract

We propose a new approach to the problem of optimizing autoencoders for lossy image compression. New media formats, changing hardware technology, as well as diverse requirements and content types create a need for compression algorithms which are more flexible than existing codecs. Autoencoders have the potential to address this need, but are difficult to optimize directly due to the inherent non-differentiability of the compression loss. We here show that minimal changes to the loss are sufficient to train deep autoencoders competitive with JPEG 2000 and outperforming recently proposed approaches based on RNNs. Our network is furthermore computationally efficient thanks to a sub-pixel architecture, which makes it suitable for high-resolution images. This is in contrast to previous work on autoencoders for compression using coarser approximations, shallower architectures, computationally expensive methods, or focusing on small images.
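The non-differentiability the abstract refers to comes from the rounding step in the quantizer: its gradient is zero almost everywhere, so no learning signal reaches the encoder. A common workaround in this line of work is a straight-through-style estimator, which rounds in the forward pass but treats rounding as the identity in the backward pass. A minimal numpy sketch (illustrative only, not the authors' implementation; function names are ours):

```python
import numpy as np

def quantize_forward(z):
    # Forward pass: hard rounding to the nearest integer.
    # This is the non-differentiable step (zero gradient a.e.).
    return np.round(z)

def quantize_backward(grad_out):
    # Backward pass: pretend rounding was the identity, so the
    # loss gradient flows to the encoder unchanged.
    return grad_out

z = np.array([0.2, 0.7, 1.4, -0.6])     # encoder outputs (continuous)
q = quantize_forward(z)                 # -> [0., 1., 1., -1.]
g = quantize_backward(np.array([0.1, -0.3, 0.5, 0.0]))  # gradient passes through
```

In an autodiff framework the same effect is usually obtained with a trick like `z + stop_gradient(round(z) - z)`, which evaluates to `round(z)` in the forward pass while keeping an identity gradient.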

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Forward citations

Cited by 3 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Soft Anisotropic Diagrams for Differentiable Image Representation

    cs.CV 2026-04 unverdicted novelty 7.0

    SAD is a new explicit differentiable image representation based on soft anisotropic additively weighted Voronoi partitions that achieves higher PSNR and 4-19x faster training than Image-GS and Instant-NGP at matched bitrate.

  2. Finite Scalar Quantization: VQ-VAE Made Simple

    cs.CV 2023-09 conditional novelty 7.0

    Finite scalar quantization simplifies VQ-VAE latents by independently rounding a few dimensions to fixed levels, producing an equivalent-sized implicit codebook with competitive performance and no collapse.

  3. Cool-chic 5.0: Faster Encoding and Inter-Feature Entropy Modeling for Overfitted Image Compression

    eess.IV 2026-05 unverdicted novelty 5.0

    Cool-chic 5.0 delivers 11% lower rate than H.266/VVC and matches modern autoencoders like MLIC++ with 250 times lower decoding complexity through an updated decoder architecture and faster optimization for overfitted codecs.
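The finite-scalar-quantization idea summarized in citation 2 above can be sketched in a few lines: each latent dimension is bounded (here with tanh) and independently rounded to a small fixed number of levels, so the implicit codebook is the product of the per-dimension level counts. This is a simplified illustration assuming odd level counts; the actual FSQ implementation differs in details (e.g. handling of even level counts and the straight-through gradient used during training):

```python
import numpy as np

def fsq(z, levels):
    # levels[i]: number of quantization levels for latent dimension i (odd here)
    levels = np.asarray(levels)
    half = (levels - 1) / 2.0
    bounded = np.tanh(z) * half      # squash dim i into (-half[i], half[i])
    return np.round(bounded) / half  # snap to one of levels[i] values in [-1, 1]

z = np.array([2.5, -0.1, 0.8])       # example encoder outputs
codes = fsq(z, [7, 5, 5])            # implicit codebook size: 7 * 5 * 5 = 175
```

Because each dimension is quantized independently against a fixed grid, there is no learned codebook to collapse, which is the simplification the blurb refers to.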