Image-to-Image Translation Framework Embedded with Rotation Symmetry Priors
Pith reviewed 2026-05-10 16:22 UTC · model grok-4.3
The pith
Embedding rotation symmetry priors via equivariant convolutions preserves domain-invariant features in unsupervised image-to-image translation.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The central claim is that building an image-to-image translation framework from rotation group equivariant convolutions preserves rotation symmetry as a domain-invariant property throughout the network, and that a learnable variant (TL-Conv) extends this benefit by adapting the transformation group while maintaining exact equivariance in continuous domains and a bounded error in discrete ones, yielding measurably better generation quality on real datasets.
What carries the argument
Rotation group equivariant convolutions, which enforce that rotating the input image produces the correspondingly rotated output at every layer, thereby carrying the symmetry constraint through the entire translation network.
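This mechanism can be illustrated with a minimal numpy sketch of a C4 (90-degree rotation) lifting convolution; this is a generic group-convolution toy rather than the paper's implementation, and the helper names (`corr2d`, `c4_lifting_conv`) are ours:

```python
import numpy as np

def corr2d(x, w):
    """Plain 'valid' 2D cross-correlation."""
    k = w.shape[0]
    H, W = x.shape
    out = np.zeros((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + k, j:j + k] * w)
    return out

def c4_lifting_conv(x, w):
    """One feature map per 90-degree rotation of a single shared filter."""
    return np.stack([corr2d(x, np.rot90(w, g)) for g in range(4)])

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))   # square input, so rot90 preserves shape
w = rng.standard_normal((3, 3))

y = c4_lifting_conv(x, w)
y_rot = c4_lifting_conv(np.rot90(x), w)

# Equivariance check: rotating the input rotates every feature map and
# cyclically shifts the orientation axis.
for g in range(4):
    assert np.allclose(y_rot[g], np.rot90(y[(g - 1) % 4]))
```

Rotating the input by 90 degrees reproduces each feature map rotated, with the orientation channels cyclically permuted, which is exactly the layer-wise symmetry constraint the carrier argument relies on.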
If this is right
- The same equivariant design can be applied to other I2I tasks such as style transfer or medical image adaptation while keeping rotation consistency.
- TL-Conv removes the need for manual selection of symmetry groups, allowing the framework to handle diverse datasets without extra tuning.
- The error bound proved for discrete TL-Conv gives implementers a concrete limit on how much symmetry is lost when moving from theory to pixels.
- Because symmetry is enforced structurally rather than learned from data, the method can operate effectively even when paired examples are absent.
Where Pith is reading between the lines
- The approach could be tested on reflection or scale symmetries to check whether the same structural enforcement improves results in those cases as well.
- In domains like satellite or microscopic imaging where rotation is physically meaningful, the method might reduce the number of training examples needed to reach a given quality level.
- If the equivariant layers are inserted only at certain depths, one could measure whether partial enforcement still yields most of the benefit while lowering compute cost.
Load-bearing premise
Rotation symmetry is a primary domain-invariant property whose enforcement through these convolutions will raise generation quality without creating new artifacts or forcing dataset-specific changes that break the unsupervised setting.
What would settle it
If side-by-side comparisons on standard benchmarks show that images generated by the equivariant network exhibit greater rotation inconsistency or lower perceptual quality scores than those from a matched non-equivariant baseline, the performance benefit would be refuted.
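Such a rotation-inconsistency comparison can be phrased as a simple relative-error metric. The sketch below uses our own naming and is limited to exact 90-degree rotations; angles off the pixel grid would additionally need interpolation and its discretization error:

```python
import numpy as np

def rotation_consistency(translate, x, k=1):
    """Relative equivariance error ||T(rot x) - rot T(x)|| / ||T(x)||
    for 90-degree rotations; zero for an exactly equivariant T."""
    y = translate(x)
    y_of_rot = translate(np.rot90(x, k))
    return np.linalg.norm(y_of_rot - np.rot90(y, k)) / np.linalg.norm(y)

x = np.random.default_rng(1).standard_normal((16, 16))

# A pointwise map commutes with rotation exactly ...
assert rotation_consistency(np.tanh, x) < 1e-12

# ... while a fixed spatial bias breaks the symmetry and is detected.
bias = np.linspace(0.0, 1.0, 16)[None, :]
assert rotation_consistency(lambda z: z + bias, x) > 0.1
```

Applied to a translation network and a matched non-equivariant baseline, a consistently higher score for the equivariant model would be the refuting evidence described above.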
original abstract
Image-to-image translation (I2I) is a fundamental task in computer vision, focused on mapping an input image from a source domain to a corresponding image in a target domain while preserving domain-invariant features and adapting domain-specific attributes. Despite the remarkable success of deep learning-based I2I approaches, the lack of paired data and unsupervised learning framework still hinder their effectiveness. In this work, we address the challenge by incorporating transformation symmetry priors into image-to-image translation networks. Specifically, we introduce rotation group equivariant convolutions to achieve rotation equivariant I2I framework, a novel contribution, to the best of our knowledge, along this research direction. This design ensures the preservation of rotation symmetry, one of the most intrinsic and domain-invariant properties of natural and scientific images, throughout the network. Furthermore, we conduct a systematic study on image symmetry priors on real dataset and propose a novel transformation learnable equivariant convolutions (TL-Conv) that adaptively learns transformation groups, enhancing symmetry preservation across diverse datasets. We also provide a theoretical analysis of the equivariance error of TL-Conv, proving that it maintains exact equivariance in continuous domains and provide a bound for the error in discrete cases. Through extensive experiments across a range of I2I tasks, we validate the effectiveness and superior performance of our approach, highlighting the potential of equivariant networks in enhancing generation quality and its broad applicability. Our code is available at https://github.com/tanfy929/Equivariant-I2I
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript claims to introduce rotation group equivariant convolutions for an image-to-image translation framework, with a novel TL-Conv that learns transformation groups adaptively. It provides theoretical analysis of equivariance error (exact in continuous, bounded in discrete) and reports superior experimental results across I2I tasks, emphasizing preservation of rotation symmetry as a domain-invariant property.
Significance. Should the theoretical bound hold with negligible error in practice and the approach not require dataset-specific adjustments that violate the unsupervised premise, the work would be significant in demonstrating how equivariant networks can enhance generative models by enforcing geometric priors. This could have broad applicability in fields like medical imaging where symmetry is crucial. The open-sourced code is a positive aspect for verification.
major comments (2)
- [Theoretical analysis of TL-Conv equivariance error] The paper states that TL-Conv maintains exact equivariance in continuous domains and provides a bound for the error in discrete cases. However, this bound's practical magnitude is not quantified for the network depths and image resolutions employed in the experiments (such as 256×256 grids), leaving open the possibility that discrete sampling errors accumulate in the deep generator and compromise the symmetry preservation throughout the I2I pipeline.
- [Description of TL-Conv and its integration] TL-Conv is described as learning transformation groups from data, which introduces free parameters. This raises a concern that any observed improvements may result from increased model capacity or data fitting rather than the enforcement of a first-principles rotation symmetry prior, potentially affecting the claim of domain-invariance in unsupervised settings.
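The capacity question can be made concrete with illustrative parameter arithmetic (channel widths here are hypothetical, not taken from the paper): a C4-equivariant layer shares one base filter across its four orientations, so at matched output width it has fewer, not more, free parameters than a standard convolution, and the concern is specifically about the extra parameters TL-Conv adds on top of this tying.

```python
def conv_params(c_in, c_out, k):
    """Free parameters of a standard k x k convolution (no bias)."""
    return c_in * c_out * k * k

def c4_conv_params(c_in, orientations, k):
    """One base filter per output orientation-channel is shared across
    the 4 rotations, so the rotated copies add no free parameters."""
    return c_in * orientations * k * k

# Matching total output maps (c_out = 4 * orientations) at k = 3:
std = conv_params(16, 64, 3)       # standard convolution
tied = c4_conv_params(16, 16, 3)   # weight-tied C4 convolution
assert std == 4 * tied             # 4x fewer parameters via tying
```

A capacity-matched ablation, as the referee requests, would therefore need to equalize these counts before attributing any gain to the symmetry prior.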
minor comments (1)
- [Abstract] The abstract mentions 'extensive experiments' and 'superior performance' but does not specify the datasets, baselines, or quantitative metrics used, which would help readers assess the claims quickly.
Simulated Author's Rebuttal
We sincerely thank the referee for the constructive and insightful comments on our manuscript. We appreciate the opportunity to clarify key aspects of our work and outline the revisions we intend to make. Below we respond point-by-point to the major comments.
point-by-point responses
-
Referee: The paper states that TL-Conv maintains exact equivariance in continuous domains and provides a bound for the error in discrete cases. However, this bound's practical magnitude is not quantified for the network depths and image resolutions employed in the experiments (such as 256×256 grids), leaving open the possibility that discrete sampling errors accumulate in the deep generator and compromise the symmetry preservation throughout the I2I pipeline.
Authors: We agree that an explicit quantification of the practical error magnitude for the network depths and 256×256 resolutions used in our experiments would strengthen the presentation. In the revised manuscript we will add a dedicated analysis (in the main text or an appendix) that either derives a tighter practical bound or reports empirical measurements of equivariance error accumulation through the full generator depth on 256×256 grids. This will confirm that the accumulated discrete sampling error remains negligible and does not compromise symmetry preservation in the I2I pipeline. revision: yes
-
Referee: TL-Conv is described as learning transformation groups from data, which introduces free parameters. This raises a concern that any observed improvements may result from increased model capacity or data fitting rather than the enforcement of a first-principles rotation symmetry prior, potentially affecting the claim of domain-invariance in unsupervised settings.
Authors: We acknowledge the concern about additional parameters. However, the learnable parameters in TL-Conv are not unconstrained; they operate strictly within the group-equivariant convolution framework, so that the rotation symmetry prior is enforced by construction rather than learned as a soft objective. This structural constraint distinguishes the approach from simply increasing model capacity. To address the point directly, we will add ablation experiments that compare TL-Conv against standard (non-equivariant) convolutions with matched parameter counts, and we will clarify in the revised text how the symmetry prior remains domain-invariant and compatible with the unsupervised setting. These additions will help demonstrate that the performance gains arise from the enforced prior rather than capacity alone. revision: partial
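The empirical measurement promised in the first response could look roughly like the following toy sketch, in which a nearest-neighbour 45-degree rotation and an isotropic blur stand in for the real grid resampling and generator layers; none of this is the paper's code, and the helper names are ours:

```python
import numpy as np

def rotate_nn(img, theta):
    """Nearest-neighbour rotation about the image centre; introduces
    grid-sampling error for angles that are not multiples of 90 deg."""
    H, W = img.shape
    c = (H - 1) / 2.0
    ys, xs = np.mgrid[0:H, 0:W]
    cos, sin = np.cos(theta), np.sin(theta)
    src_y = np.clip(np.rint(cos * (ys - c) + sin * (xs - c) + c).astype(int), 0, H - 1)
    src_x = np.clip(np.rint(-sin * (ys - c) + cos * (xs - c) + c).astype(int), 0, W - 1)
    return img[src_y, src_x]

def layer(x):
    """3x3 isotropic blur: rotation equivariant in the continuum."""
    k = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0
    out = np.zeros_like(x)
    xp = np.pad(x, 1, mode="edge")
    for a in range(3):
        for b in range(3):
            out += k[a, b] * xp[a:a + x.shape[0], b:b + x.shape[1]]
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 64))
theta = np.pi / 4  # 45 deg: off the pixel grid, so errors appear

errors = []
fx, frx = x.copy(), rotate_nn(x, theta)
for depth in range(1, 6):
    fx, frx = layer(fx), layer(frx)
    # relative equivariance error at this depth
    errors.append(np.linalg.norm(frx - rotate_nn(fx, theta)) / np.linalg.norm(fx))

assert all(e > 0 for e in errors)  # discrete error is nonzero at every depth
```

Tracking such a per-depth error curve on the actual generator at 256x256 is the kind of evidence that would substantiate the "negligible accumulation" claim.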
Circularity Check
No significant circularity in the derivation chain
full rationale
The paper's core derivation introduces TL-Conv and supplies an independent mathematical proof of exact equivariance in the continuous limit together with a discrete error bound expressed in terms of grid sampling and learned parameters. This analysis is self-contained within the layer definition and does not reduce to the I2I translation outcomes or to any fitted performance metric. Claims of improved generation quality rest on experimental validation across multiple datasets and tasks rather than on any first-principles prediction that collapses back to the inputs by construction. No load-bearing self-citations, ansatz smuggling, or renaming of known results appear in the central argument.
Axiom & Free-Parameter Ledger
free parameters (1)
- Learned transformation groups in TL-Conv
axioms (1)
- domain assumption: Rotation symmetry is one of the most intrinsic and domain-invariant properties of natural and scientific images
invented entities (1)
- TL-Conv (no independent evidence)
Forward citations
Cited by 1 Pith paper
- Aligning Network Equivariance with Data Symmetry: A Theoretical Framework and Adaptive Approach for Image Restoration
A new dataset-level non-strict symmetry measure allows deriving bounded equivariance for restoration models and motivates an adaptive network that aligns with per-sample symmetry to reduce expected risk.
Reference graph
Works this paper leans on
-
[1]
Image-to- image translation with conditional adversarial networks
Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to- image translation with conditional adversarial networks. InProceedings of the IEEE conference on computer vision and pattern recognition, pages 1125–1134, 2017
2017
-
[2]
Vector quantized image-to-image translation
Yu-Jie Chen, Shin-I Cheng, Wei-Chen Chiu, Hung-Yu Tseng, and Hsin- Ying Lee. Vector quantized image-to-image translation. InEuropean Conference on Computer Vision, pages 440–456. Springer, 2022
2022
-
[3]
Quantitative cerebral blood volume image synthesis from standard mri using image-to-image translation for brain tumors.Radiology, 308(2):e222471, 2023
Bao Wang, Yongsheng Pan, Shangchen Xu, Yi Zhang, Yang Ming, Ligang Chen, Xuejun Liu, Chengwei Wang, Yingchao Liu, and Yong Xia. Quantitative cerebral blood volume image synthesis from standard mri using image-to-image translation for brain tumors.Radiology, 308(2):e222471, 2023. JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2021 15
2023
-
[4]
Stytr2: Image style transfer with transformers
Yingying Deng, Fan Tang, Weiming Dong, Chongyang Ma, Xingjia Pan, Lei Wang, and Changsheng Xu. Stytr2: Image style transfer with transformers. InProceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 11326–11336, 2022
2022
-
[5]
Umgan: Underwa- ter image enhancement network for unpaired image-to-image translation
Boyang Sun, Yupeng Mei, Ni Yan, and Yingyi Chen. Umgan: Underwa- ter image enhancement network for unpaired image-to-image translation. Journal of Marine Science and Engineering, 11(2):447, 2023
2023
-
[6]
Guided image-to-image transla- tion with bi-directional feature transformation
Badour AlBahar and Jia-Bin Huang. Guided image-to-image transla- tion with bi-directional feature transformation. InProceedings of the IEEE/CVF international conference on computer vision, pages 9016– 9025, 2019
2019
-
[7]
Cross- domain correspondence learning for exemplar-based image translation
Pan Zhang, Bo Zhang, Dong Chen, Lu Yuan, and Fang Wen. Cross- domain correspondence learning for exemplar-based image translation. InProceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5143–5153, 2020
2020
-
[8]
Cocosnet v2: Full-resolution correspondence learning for image translation
Xingran Zhou, Bo Zhang, Ting Zhang, Pan Zhang, Jianmin Bao, Dong Chen, Zhongfei Zhang, and Fang Wen. Cocosnet v2: Full-resolution correspondence learning for image translation. InProceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 11465–11475, 2021
2021
-
[9]
Spatially-adaptive pixelwise networks for fast image translation
Tamar Rott Shaham, Michaël Gharbi, Richard Zhang, Eli Shechtman, and Tomer Michaeli. Spatially-adaptive pixelwise networks for fast image translation. InProceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 14882–14891, 2021
2021
-
[10]
Unpaired image-to-image translation using cycle-consistent adversarial networks
Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. InProceedings of the IEEE international conference on computer vision, pages 2223–2232, 2017
2017
-
[11]
Con- trastive learning for unpaired image-to-image translation
Taesung Park, Alexei A Efros, Richard Zhang, and Jun-Yan Zhu. Con- trastive learning for unpaired image-to-image translation. InComputer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part IX 16, pages 319–345. Springer, 2020
2020
-
[12]
Unpaired image-to-image translation with shortest path regularization
Shaoan Xie, Yanwu Xu, Mingming Gong, and Kun Zhang. Unpaired image-to-image translation with shortest path regularization. InPro- ceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10177–10187, 2023
2023
-
[13]
Deep convolutional dictionary learning network for sparse view ct reconstruction with a group sparse prior.Computer Methods and Programs in Biomedicine, 244:108010, 2024
Yanqin Kang, Jin Liu, Fan Wu, Kun Wang, Jun Qiang, Dianlin Hu, and Yikun Zhang. Deep convolutional dictionary learning network for sparse view ct reconstruction with a group sparse prior.Computer Methods and Programs in Biomedicine, 244:108010, 2024
2024
-
[14]
Mhf- net: An interpretable deep network for multispectral and hyperspectral image fusion.IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(3):1457–1473, 2020
Qi Xie, Minghao Zhou, Qian Zhao, Zongben Xu, and Deyu Meng. Mhf- net: An interpretable deep network for multispectral and hyperspectral image fusion.IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(3):1457–1473, 2020
2020
-
[15]
Infrared small target detection via joint low rankness and local smoothness prior.IEEE Transactions on Geoscience and Remote Sensing, 2024
Pei Liu, Jiangjun Peng, Hailin Wang, Danfeng Hong, and Xiangyong Cao. Infrared small target detection via joint low rankness and local smoothness prior.IEEE Transactions on Geoscience and Remote Sensing, 2024
2024
-
[16]
Ds-net: A model driven network framework for lesion segmentation on fundus image.Knowledge-Based Systems, 315:113242, 2025
Feiyu Tan, Yuhan Wang, Qi Xie, Jiahong Fu, Renzhen Wang, and Deyu Meng. Ds-net: A model driven network framework for lesion segmentation on fundus image.Knowledge-Based Systems, 315:113242, 2025
2025
-
[17]
Group equivariant convolutional net- works
Taco Cohen and Max Welling. Group equivariant convolutional net- works. InInternational conference on machine learning, pages 2990–
-
[18]
Rotation equivariant arbitrary-scale image super-resolution.arXiv preprint arXiv:2508.05160, 2025
Qi Xie, Jiahong Fu, Zongben Xu, and Deyu Meng. Rotation equivariant arbitrary-scale image super-resolution.arXiv preprint arXiv:2508.05160, 2025
-
[19]
Image-to- image translation: Methods and applications.IEEE Transactions on Multimedia, 24:3859–3881, 2021
Yingxue Pang, Jianxin Lin, Tao Qin, and Zhibo Chen. Image-to- image translation: Methods and applications.IEEE Transactions on Multimedia, 24:3859–3881, 2021
2021
-
[20]
He Huang, Philip S Yu, and Changhu Wang. An introduction to image synthesis with generative adversarial nets.arXiv preprint arXiv:1803.04469, 2018
-
[21]
Neural style transfer: A review.IEEE transactions on visualization and computer graphics, 26(11):3365–3385, 2019
Yongcheng Jing, Yezhou Yang, Zunlei Feng, Jingwen Ye, Yizhou Yu, and Mingli Song. Neural style transfer: A review.IEEE transactions on visualization and computer graphics, 26(11):3365–3385, 2019
2019
-
[22]
Shizuo Kaji and Satoshi Kida. Overview of image-to-image translation by use of deep neural networks: denoising, super-resolution, modality conversion, and reconstruction in medical imaging.Radiological physics and technology, 12(3):235–248, 2019
2019
-
[23]
Deep learning on image denoising: An overview
Chunwei Tian, Lunke Fei, Wenxian Zheng, Yong Xu, Wangmeng Zuo, and Chia-Wen Lin. Deep learning on image denoising: An overview. Neural Networks, 131:251–275, 2020
2020
-
[24]
Uctgan: Diverse image inpainting based on unsupervised cross-space translation
Lei Zhao, Qihang Mo, Sihuan Lin, Zhizhong Wang, Zhiwen Zuo, Haibo Chen, Wei Xing, and Dongming Lu. Uctgan: Diverse image inpainting based on unsupervised cross-space translation. InProceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5741–5750, 2020
2020
-
[25]
Discriminative region proposal adversarial networks for high-quality image-to-image translation
Chao Wang, Haiyong Zheng, Zhibin Yu, Ziqiang Zheng, Zhaorui Gu, and Bing Zheng. Discriminative region proposal adversarial networks for high-quality image-to-image translation. InProceedings of the European conference on computer vision (ECCV), pages 770–785, 2018
2018
-
[26]
Content and style disentanglement for artistic style transfer
Dmytro Kotovenko, Artsiom Sanakoyeu, Sabine Lang, and Bjorn Om- mer. Content and style disentanglement for artistic style transfer. In Proceedings of the IEEE/CVF international conference on computer vision, pages 4422–4431, 2019
2019
-
[27]
Dualast: Dual style-learning networks for artistic style transfer
Haibo Chen, Lei Zhao, Zhizhong Wang, Huiming Zhang, Zhiwen Zuo, Ailin Li, Wei Xing, and Dongming Lu. Dualast: Dual style-learning networks for artistic style transfer. InProceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 872–881, 2021
2021
-
[28]
Multi-scale transformer network with edge-aware pre-training for cross- modality mr image synthesis.IEEE Transactions on Medical Imaging, 42(11):3395–3407, 2023
Yonghao Li, Tao Zhou, Kelei He, Yi Zhou, and Dinggang Shen. Multi-scale transformer network with edge-aware pre-training for cross- modality mr image synthesis.IEEE Transactions on Medical Imaging, 42(11):3395–3407, 2023
2023
-
[29]
Multi-modal modality-masked diffusion network for brain mri synthesis with random modality missing.IEEE Transactions on Medical Imaging, 2024
Xiangxi Meng, Kaicong Sun, Jun Xu, Xuming He, and Dinggang Shen. Multi-modal modality-masked diffusion network for brain mri synthesis with random modality missing.IEEE Transactions on Medical Imaging, 2024
2024
-
[30]
Unsupervised cross- domain image generation.arXiv preprint arXiv:1611.02200, 2016
Yaniv Taigman, Adam Polyak, and Lior Wolf. Unsupervised cross- domain image generation.arXiv preprint arXiv:1611.02200, 2016
-
[31]
Hrinversion: High-resolution gan inversion for cross-domain image synthesis.IEEE Transactions on Circuits and Systems for Video Technology, 33(5):2147– 2161, 2022
Peng Zhou, Lingxi Xie, Bingbing Ni, Lin Liu, and Qi Tian. Hrinversion: High-resolution gan inversion for cross-domain image synthesis.IEEE Transactions on Circuits and Systems for Video Technology, 33(5):2147– 2161, 2022
2022
-
[32]
Ganhopper: Multi-hop gan for unsupervised image-to- image translation
Wallace Lira, Johannes Merz, Daniel Ritchie, Daniel Cohen-Or, and Hao Zhang. Ganhopper: Multi-hop gan for unsupervised image-to- image translation. InComputer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XXVI 16, pages 363–379. Springer, 2020
2020
-
[33]
Unsupervised image-to-image translation with stacked cycle- consistent adversarial networks
Minjun Li, Haozhi Huang, Lin Ma, Wei Liu, Tong Zhang, and Yugang Jiang. Unsupervised image-to-image translation with stacked cycle- consistent adversarial networks. InProceedings of the European conference on computer vision (ECCV), pages 184–199, 2018
2018
-
[34]
A simple framework for contrastive learning of visual representations
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. InInternational conference on machine learning, pages 1597–1607. PMLR, 2020
2020
-
[35]
Momentum contrast for unsupervised visual representation learning
Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. InProceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 9729–9738, 2020
2020
-
[36]
Instance-wise hard negative example generation for contrastive learning in unpaired image-to-image translation
Weilun Wang, Wengang Zhou, Jianmin Bao, Dong Chen, and Houqiang Li. Instance-wise hard negative example generation for contrastive learning in unpaired image-to-image translation. InProceedings of the IEEE/CVF international conference on computer vision, pages 14020– 14029, 2021
2021
-
[37]
Comogan: contin- uous model-guided image-to-image translation
Fabio Pizzati, Pietro Cerri, and Raoul De Charette. Comogan: contin- uous model-guided image-to-image translation. InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14288–14298, 2021
2021
-
[38]
Puff-net: Efficient style transfer with pure content and style feature fusion network
Sizhe Zheng, Pan Gao, Peng Zhou, and Jie Qin. Puff-net: Efficient style transfer with pure content and style feature fusion network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8059–8068, 2024
2024
-
[39]
Quantart: Quantizing image style transfer towards high visual fidelity
Siyu Huang, Jie An, Donglai Wei, Jiebo Luo, and Hanspeter Pfister. Quantart: Quantizing image style transfer towards high visual fidelity. InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5947–5956, 2023
2023
-
[40]
Taming transformers for high-resolution image synthesis
Patrick Esser, Robin Rombach, and Bjorn Ommer. Taming transformers for high-resolution image synthesis. InProceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 12873– 12883, 2021
2021
-
[41]
Understanding image representations by measuring their equivariance and equivalence
Karel Lenc and Andrea Vedaldi. Understanding image representations by measuring their equivariance and equivalence. InProceedings of the IEEE conference on computer vision and pattern recognition, pages 991–999, 2015
2015
-
[42]
Euclidean symmetry and equivariance in machine learning.Trends in Chemistry, 3(2):82–85, 2021
Tess E Smidt. Euclidean symmetry and equivariance in machine learning.Trends in Chemistry, 3(2):82–85, 2021
2021
-
[43]
PhD thesis, Taco Cohen, 2021
Taco Cohen et al.Equivariant convolutional networks. PhD thesis, Taco Cohen, 2021. JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2021 16
2021
-
[44]
Imagenet classification with deep convolutional neural networks.Advances in neural information processing systems, 25, 2012
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks.Advances in neural information processing systems, 25, 2012
2012
-
[45]
Revisiting data augmentation for rotational invariance in con- volutional neural networks
Facundo Quiroga, Franco Ronchetti, Laura Lanzarini, and Aurelio F Bariviera. Revisiting data augmentation for rotational invariance in con- volutional neural networks. InModelling and Simulation in Management Sciences: Proceedings of the International Conference on Modelling and Simulation in Management Sciences (MS-18), pages 127–141. Springer, 2020
2020
-
[46]
Fourier series ex- pansion based filter parametrization for equivariant convolutions.IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(4):4537– 4551, 2022
Qi Xie, Qian Zhao, Zongben Xu, and Deyu Meng. Fourier series ex- pansion based filter parametrization for equivariant convolutions.IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(4):4537– 4551, 2022
2022
-
[47]
Hexaconv.arXiv preprint arXiv:1803.02108, 2018
Emiel Hoogeboom, Jorn WT Peters, Taco S Cohen, and Max Welling. Hexaconv.arXiv preprint arXiv:1803.02108, 2018
-
[48]
Oriented response networks
Yanzhao Zhou, Qixiang Ye, Qiang Qiu, and Jianbin Jiao. Oriented response networks. InProceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 519–528, 2017
2017
-
[49]
Rotation equivariant vector field networks
Diego Marcos, Michele V olpi, Nikos Komodakis, and Devis Tuia. Rotation equivariant vector field networks. InProceedings of the IEEE International Conference on Computer Vision, pages 5048–5057, 2017
2017
-
[50]
Harmonic networks: Deep translation and rotation equivariance
Daniel E Worrall, Stephan J Garbin, Daniyar Turmukhambetov, and Gabriel J Brostow. Harmonic networks: Deep translation and rotation equivariance. InProceedings of the IEEE conference on computer vision and pattern recognition, pages 5028–5037, 2017
2017
-
[51]
Learning steerable filters for rotation equivariant cnns
Maurice Weiler, Fred A Hamprecht, and Martin Storath. Learning steerable filters for rotation equivariant cnns. InProceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 849–858, 2018
2018
-
[52]
General e (2)-equivariant steerable cnns.Advances in neural information processing systems, 32, 2019
Maurice Weiler and Gabriele Cesa. General e (2)-equivariant steerable cnns.Advances in neural information processing systems, 32, 2019
2019
-
[53]
Pdo- econvs: Partial differential operator based equivariant convolutions
Zhengyang Shen, Lingshen He, Zhouchen Lin, and Jinwen Ma. Pdo- econvs: Partial differential operator based equivariant convolutions. InInternational Conference on Machine Learning, pages 8697–8706. PMLR, 2020
2020
-
[54]
Pdo-es2cnns: Partial differential operator based equivariant spherical cnns
Zhengyang Shen, Tiancheng Shen, Zhouchen Lin, and Jinwen Ma. Pdo-es2cnns: Partial differential operator based equivariant spherical cnns. InProceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 9585–9593, 2021
2021
-
[55]
Affine equivariant networks based on differential invariants
Yikang Li, Yeqing Qiu, Yuxuan Chen, Lingshen He, and Zhouchen Lin. Affine equivariant networks based on differential invariants. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5546–5556, 2024
2024
-
[56]
Rotation-equivariant self-supervised method in image denoising
Hanze Liu, Jiahong Fu, Qi Xie, and Deyu Meng. Rotation-equivariant self-supervised method in image denoising. InProceedings of the Computer Vision and Pattern Recognition Conference, pages 12720– 12730, 2025
2025
-
[57]
Group symmetry in pac learning
Bryn Elesedy. Group symmetry in pac learning. InICLR 2022 workshop on geometrical and topological representation learning, 2022
2022
-
[58]
Equivariant neural networks for inverse problems.Inverse Problems, 37(8):085006, 2021
Elena Celledoni, Matthias J Ehrhardt, Christian Etmann, Brynjulf Owren, Carola-Bibiane Schönlieb, and Ferdia Sherry. Equivariant neural networks for inverse problems.Inverse Problems, 37(8):085006, 2021
2021
-
[59]
Rotation equivariant proximal operator for deep unfolding methods in image restoration
Jiahong Fu, Qi Xie, Deyu Meng, and Zongben Xu. Rotation equivariant proximal operator for deep unfolding methods in image restoration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024
2024
-
[60]
A model-driven deep neural network for single image rain removal
Hong Wang, Qi Xie, Qian Zhao, and Deyu Meng. A model-driven deep neural network for single image rain removal. InProceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3103–3112, 2020
2020
-
[61]
Enhanced deep residual networks for single image super- resolution
Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee. Enhanced deep residual networks for single image super- resolution. InProceedings of the IEEE conference on computer vision and pattern recognition workshops, pages 136–144, 2017
2017
-
[62]
Deep convolutional neural networks as generic feature extractors
Lars Hertel, Erhardt Barth, Thomas Käster, and Thomas Martinetz. Deep convolutional neural networks as generic feature extractors. In2015 International Joint Conference on Neural Networks (IJCNN), pages 1–
-
[63]
Ntire 2017 challenge on single image super-resolution: Dataset and study
Eirikur Agustsson and Radu Timofte. Ntire 2017 challenge on single image super-resolution: Dataset and study. InProceedings of the IEEE conference on computer vision and pattern recognition workshops, pages 126–135, 2017
2017
-
[64]
Modulated contrast for versatile image synthesis
Fangneng Zhan, Jiahui Zhang, Yingchen Yu, Rongliang Wu, and Shijian Lu. Modulated contrast for versatile image synthesis. InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18280–18290, 2022
2022
-
[65]
Qs-attn: Query-selected attention for contrastive learning in i2i translation
Xueqi Hu, Xinyue Zhou, Qiusheng Huang, Zhengyi Shi, Li Sun, and Qingli Li. Qs-attn: Query-selected attention for contrastive learning in i2i translation. InProceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18291–18300, 2022
2022
-
[66]
Exploring patch-wise semantic relation for contrastive learning in image-to-image translation tasks
Chanyong Jung, Gihyun Kwon, and Jong Chul Ye. Exploring patch-wise semantic relation for contrastive learning in image-to-image translation tasks. InProceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 18260–18269, 2022
2022
-
[67]
Tsit: A simple and versatile framework for image-to-image translation
Liming Jiang, Changxu Zhang, Mingyang Huang, Chunxiao Liu, Jian- ping Shi, and Chen Change Loy. Tsit: A simple and versatile framework for image-to-image translation. InComputer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part III 16, pages 206–222. Springer, 2020
2020
-
[68] Fisher Yu, Wenqi Xian, Yingying Chen, Fangchen Liu, Mike Liao, Vashisht Madhavan, Trevor Darrell, et al. BDD100K: A diverse driving video database with scalable annotation tooling. arXiv preprint arXiv:1805.04687, 2(5):6, 2018.
[69] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems, 30, 2017.
[70] Salman UH Dar, Mahmut Yurt, Levent Karacan, Aykut Erdem, Erkut Erdem, and Tolga Cukur. Image synthesis in multi-contrast MRI with conditional generative adversarial networks. IEEE Transactions on Medical Imaging, 38(10):2375–2388, 2019.
[71] Yawen Huang, Feng Zheng, Runmin Cong, Weilin Huang, Matthew R Scott, and Ling Shao. MCMT-GAN: Multi-task coherent modality transferable GAN for 3D brain image synthesis. IEEE Transactions on Image Processing, 29:8187–8198, 2020.
[72] Yunjey Choi, Minje Choi, Munyoung Kim, Jung-Woo Ha, Sunghun Kim, and Jaegul Choo. StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8789–8797, 2018.
[73] Hao Tang, Dan Xu, Wei Wang, Yan Yan, and Nicu Sebe. Dual generator generative adversarial networks for multi-domain image-to-image translation. In Asian Conference on Computer Vision, pages 3–21. Springer, 2018.
[74] Muhammad Sohail, Muhammad Naveed Riaz, Jing Wu, Chengnian Long, and Shaoyuan Li. Unpaired multi-contrast MR image synthesis using generative adversarial networks. In International Workshop on Simulation and Synthesis in Medical Imaging, pages 22–31. Springer, 2019.
[75] Asha Anoosheh, Eirikur Agustsson, Radu Timofte, and Luc Van Gool. ComboGAN: Unrestrained scalability for image domain translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 783–790, 2018.
[76] Heran Yang, Jian Sun, Liwei Yang, and Zongben Xu. A unified hyper-GAN model for unpaired multi-contrast MR image translation. In Medical Image Computing and Computer Assisted Intervention – MICCAI 2021: 24th International Conference, Strasbourg, France, September 27–October 1, 2021, Proceedings, Part III, pages 127–137. Springer, 2021.
[77] Wenhan Yang, Robby T Tan, Jiashi Feng, Zongming Guo, Shuicheng Yan, and Jiaying Liu. Joint rain detection and removal from a single image with contextualized deep networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(6):1377–1393, 2019.
[78] Yu Luo, Yong Xu, and Hui Ji. Removing rain from a single image via discriminative sparse coding. In Proceedings of the IEEE International Conference on Computer Vision, pages 3397–3405, 2015.
[79] Yu Li, Robby T Tan, Xiaojie Guo, Jiangbo Lu, and Michael S Brown. Rain streak removal using layer priors. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2736–2744, 2016.
[80] Shuhang Gu, Deyu Meng, Wangmeng Zuo, and Lei Zhang. Joint convolutional analysis and synthesis sparse representation for single image layer separation. In Proceedings of the IEEE International Conference on Computer Vision, pages 1708–1716, 2017.