Pith: machine review for the scientific record

arXiv: 1906.02569 · v1 · submitted 2019-06-06 · 💻 cs.LG · cs.HC · stat.ML

Recognition: unknown

Gradio: Hassle-Free Sharing and Testing of ML Models in the Wild

Authors on Pith: no claims yet
classification: 💻 cs.LG · cs.HC · stat.ML
keywords: gradio, accessibility, allow, collaboration, interface, learning, machine, makes
abstract

Accessibility is a major challenge of machine learning (ML). Typical ML models are built by specialists and require specialized hardware/software as well as ML experience to validate. This makes it challenging for non-technical collaborators and endpoint users (e.g. physicians) to easily provide feedback on model development and to gain trust in ML. The accessibility challenge also makes collaboration more difficult and limits the ML researcher's exposure to realistic data and scenarios that occur in the wild. To improve accessibility and facilitate collaboration, we developed an open-source Python package, Gradio, which allows researchers to rapidly generate a visual interface for their ML models. Gradio makes accessing any ML model as easy as sharing a URL. Our development of Gradio is informed by interviews with a number of machine learning researchers who participate in interdisciplinary collaborations. Their feedback identified that Gradio should support a variety of interfaces and frameworks, allow for easy sharing of the interface, allow for input manipulation and interactive inference by the domain expert, as well as allow embedding the interface in iPython notebooks. We developed these features and carried out a case study to understand Gradio's usefulness and usability in the setting of a machine learning collaboration between a researcher and a cardiologist.

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 7 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Benchmarking Vision-Language Models under Contradictory Virtual Content Attacks in Augmented Reality

    cs.CV 2026-04 unverdicted novelty 7.0

    ContrAR benchmark reveals that current VLMs show reasonable understanding of contradictory virtual content in AR but need improvement in detection, reasoning, and balancing accuracy with latency.

  2. JAMMEval: A Refined Collection of Japanese Benchmarks for Reliable VLM Evaluation

    cs.CV 2026-04 conditional novelty 7.0

    JAMMEval delivers refined Japanese VQA benchmarks that produce evaluation scores more reflective of true model capability, with lower run-to-run variance and stronger separation between models of differing ability.

  3. Seconds-Aligned PCA-DAC Latent Diffusion for Symbolic-to-Audio Drum Rendering

    cs.SD 2026-05 unverdicted novelty 6.0

    Sec2Drum-DAC renders drum audio from symbolic inputs via diffusion on PCA-reduced DAC latents, improving spectral and transient metrics over regression baselines on 1733 held-out windows.

  4. BabelDOC: Better Layout-Preserving PDF Translation via Intermediate Representation

    cs.CV 2026-05 unverdicted novelty 6.0

    BabelDOC uses an intermediate representation to decouple layout from content for improved layout-preserving PDF translation.

  5. Seeing Through Touch: Tactile-Driven Visual Localization of Material Regions

    cs.CV 2026-04 unverdicted novelty 6.0

    The model uses dense visuo-tactile feature interactions and material-diversity pairing on expanded datasets to generate tactile saliency maps for material segmentation, outperforming prior global-alignment methods.

  6. Benchmarking Logistic Regression, SVM, Naive Bayes, and IndoBERT Fine-Tuning for Sentiment Analysis on Indonesian Product Reviews

    cs.CL 2026-05 conditional novelty 2.0

    Linear SVC reached 97.60% accuracy and 0.5510 Macro F1 on the full Tokopedia 2025 reviews corpus, beating fine-tuned IndoBERT's 88.70% accuracy and 0.5088 Macro F1 on a sampled subset, with the gap attributed to data ...

  7. From Knowledge to Action: Outcomes of the 2025 Large Language Model (LLM) Hackathon for Applications in Materials Science and Chemistry

    cond-mat.mtrl-sci 2026-05 unverdicted novelty 2.0

    Hackathon submissions indicate LLMs are moving from general assistants toward composable multi-agent systems for structuring scientific knowledge and automating tasks in materials science and chemistry.