DualTCN: A Physics-Constrained Temporal Convolutional Network for Time-Domain Marine CSEM Inversion
DualTCN is the first deep-learning model for time-domain marine CSEM inversion that regresses four earth parameters, achieves high accuracy on simulated data, and runs up to 21,000 times faster than classical optimizers.
25 Pith papers cite this work, alongside 71,895 external citations. Polarity classification is still indexing.
fields: cs.CV (4), cs.LG (4), astro-ph.SR (2), eess.SP (2), math.OC (2), cond-mat.mtrl-sci (1), cs.AI (1), cs.AR (1), cs.LO (1), physics.ao-ph (1)
years: 2026 (25)
citation roles: background (1)
citation polarities: background (1)

representative citing papers
citing papers explorer
-
DualTCN: A Physics-Constrained Temporal Convolutional Network for Time-Domain Marine CSEM Inversion
DualTCN is the first deep-learning model for time-domain marine CSEM inversion that regresses four earth parameters, achieves high accuracy on simulated data, and runs up to 21,000 times faster than classical optimizers.
-
Broximal Alignment for Global Non-Convex Optimization
Broximal Alignment is a novel condition under which the Ball Proximal Point Method converges to global minima in non-convex settings, generalizing quasiconvexity, star convexity, and related frameworks.
-
On the Decompositionality of Neural Networks
Neural decompositionality is defined via decision-boundary semantic preservation, and language transformers largely satisfy it under SAVED while vision models often do not.
-
Scaling Vision Models Does Not Consistently Improve Localisation-Based Explanation Quality
Scaling vision models by depth and parameter count does not consistently improve localisation-based explanation quality across architectures, datasets, and post-hoc methods; smaller models often perform comparably or better.
-
Leveraging Image Generators to Address Training Data Scarcity: The Gen4Regen Dataset for Forest Regeneration Mapping
Mixing real UAV imagery with 2101 AI-generated image-mask pairs improves semantic segmentation F1 scores for fine-grained forest species by over 15 percentage points overall and up to 30 points for rare classes.
-
A Meta Reinforcement Learning Approach to Goals-Based Wealth Management
MetaRL pre-trained on GBWM problems delivers near-optimal dynamic strategies in 0.01 s, achieving 97.8% of the DP-optimal utility, and handles larger problems where DP fails.
-
Lottery BP: Unlocking Quantum Error Decoding at Scale
Lottery BP adds randomness to belief propagation decoding and uses syndrome voting to achieve far higher accuracy on topological quantum codes while reducing reliance on expensive global decoders.
-
Open-Vocabulary Semantic Segmentation Network Integrating Object-Level Label and Scene-Level Semantic Features for Multimodal Remote Sensing Images
TSMNet uses a dual-branch text encoder and a text-guided fusion module to integrate scene-level semantic and object-level label features from text with visual embeddings, achieving superior open-vocabulary segmentation on new multimodal remote sensing datasets.
-
Mistake gating leads to energy and memory efficient continual learning
Mistake-gated plasticity reduces neural network updates by 50-80% by gating changes on classification errors, improving efficiency for continual learning without added hyperparameters.
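The mechanism summarized above, updating weights only when the network misclassifies, has a classic analogue in the perceptron rule, which is mistake-gated by construction. A minimal numpy sketch of that analogue (a toy linear classifier on hypothetical 2D data, not the paper's architecture) shows how gating on errors skips most updates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy separable problem: label = sign of the first coordinate.
X = rng.normal(size=(200, 2))
y = np.sign(X[:, 0])

w = np.zeros(2)
updates = 0
for x, t in zip(X, y):
    pred = np.sign(w @ x) or 1.0   # break the zero tie toward +1
    if pred != t:                  # gate: plasticity only on mistakes
        w += t * x                 # perceptron-style correction
        updates += 1

skipped = len(X) - updates
print(f"updated on {updates} of {len(X)} samples, skipped {skipped}")
```

On separable data the correction fires only a handful of times, so most samples cost no weight write at all, which is the energy/memory argument the summary makes.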
-
Extraction of linearized models from pre-trained networks via knowledge distillation
Koopman theory plus knowledge distillation yields linearized models from pre-trained nets that outperform standard least-squares Koopman approximations on MNIST and Fashion-MNIST in accuracy and stability.
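For context on the baseline this paper claims to beat, the standard least-squares Koopman approximation fits a single linear operator K mapping (lifted) states at time t to states at t+1 via a pseudoinverse. A hedged numpy sketch on a hypothetical linear system (identity observables, so exact recovery is possible; the paper's distillation method is not shown here):

```python
import numpy as np

rng = np.random.default_rng(1)
A_true = np.array([[0.9, 0.1],
                   [0.0, 0.8]])        # stable linear dynamics

X = rng.normal(size=(2, 500))          # states x_t as columns
Y = A_true @ X                         # successor states x_{t+1}

# Least-squares Koopman estimate: K = Y X^+ (Moore-Penrose pseudoinverse)
K = Y @ np.linalg.pinv(X)

err = np.max(np.abs(K - A_true))
```

With full-rank data and truly linear dynamics this recovers A exactly; the interesting regime, and the one the distillation approach targets, is when the underlying network is nonlinear and the plain least-squares fit degrades.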
-
Interpretable Neural Networks to Predict Momentum Fluxes of Orographic Gravity Waves
Neural networks predict orographic gravity wave momentum fluxes from coarse state variables with offline R² of 0.56-0.72, learn physically meaningful relationships via SHAP, and are compared to the Lott-Miller parameterization.
-
Machine Learning Enhanced Laser Spectroscopy for Multi-Species Gas Detection in Complex and Harsh Environments
Machine learning methods including denoising autoencoders, unsupervised interference mitigation, blind source separation, and certifiable classification are developed and experimentally validated to improve multi-species laser spectroscopy under complex conditions.
-
Predicting Associations between Solar Flares and Coronal Mass Ejections Using SDO/HMI Magnetograms and a Hybrid Neural Network
A hybrid neural network predicts eruptive versus confined solar flares from SDO/HMI magnetogram sequences, reports good performance, and links the results to magnetic flux cancellation along polarity inversion lines.
-
Determination of Nanoparticle and Microdroplet Parameters in Levitating Microdroplets of Suspension by Speckle Image Analysis Using Convolutional Neural Networks
CNNs trained on speckle images from levitating TiO2 suspension microdroplets determine droplet diameter to better than 6% accuracy and provide useful discrimination of nanoparticle concentration and diameter, including simultaneous three-parameter classification.
-
Operator-Theoretic Energy Functionals for Impulse-Excited Nonstationary Signal Analysis
An operator-based Energy Concentration Index yields the IMRED detector that identifies defect-induced changes in impulse responses with AUC 0.908, outperforming standard Fourier and wavelet energy measures.
-
Joint sparse coding and temporal dynamics support context reconfiguration
Joint sparse coding and temporal dynamics in mPFC and computational networks reduce cross-context interference and enhance separability, enabling better retention in lifelong learning without extra heuristics.
-
Single-Cycle Multidirectional EOG Classification Faster than Human Reaction Time for Wearable Human-Computer Interactions
Cascaded neural networks classify 10 eye-movement classes from single-cycle EOG signals at 99% accuracy with sub-83 ms latency, below human reaction time.
-
Using Deep Learning Models Pretrained by Self-Supervised Learning for Protein Localization
DINO-based ViT models pretrained on HPA FOV achieve macro F1 of 0.822 zero-shot and 0.860 after fine-tuning for protein localization on OpenCell, demonstrating effective transfer from SSL pretraining.
-
The ZTF-ULTRASAT experiment: Characterizing the non-transients in ULTRASAT's high cadence survey
ZTF high-cadence data shows RR Lyrae stars and flaring sources can mimic UV transients, with pre-existing ML catalogs offering a concrete mitigation approach.
-
Machine Learning-Based Cluster Classification to Suppress Background in a Prototype RPC Detector
Machine learning classifiers using fifteen cluster-level descriptors from time and ADC distributions effectively separate signal from background hits in prototype RPC detectors.
-
A Proof-of-Concept Simulation-Driven Digital Twin Framework for Decision-Aware Diabetes Modeling
A simulation-driven digital twin framework is shown to generate interpretable diabetes trajectories for decision-aware analysis by combining benchmark data with controlled synthetic scenarios.
-
A Specialized Importance-Aware Quantum Convolutional Neural Network with Ring-Topology (IA-QCNN) for MGMT Promoter Methylation Prediction in Glioblastoma
IA-QCNN applies quantum principles via ring-topology convolution and importance weighting, claiming high-accuracy MGMT methylation prediction from MRI with fewer parameters and greater noise robustness than classical models.
-
AI-Powered Surrogate Modelling for Multiscale Combustion: A Critical Review and Opportunities
A critical review of AI surrogate models for multiscale combustion that compares supervised, unsupervised, and physics-guided methods, identifies transferability and consistency challenges, and outlines future opportunities.
-
Enhancing Laser Surface Texturing through Advanced Machine Learning Techniques
Neural networks and random forests predict surface roughness from laser parameters and material data with high accuracy, speeding up optimization and reducing experimental effort.
-
Deep Learning for Sequential Decision Making under Uncertainty: Foundations, Frameworks, and Frontiers
A tutorial framing deep learning as a complement to optimization for sequential decision-making under uncertainty, with applications in supply chains, healthcare, and energy.