MMLSpark: Unifying Machine Learning Ecosystems at Massive Scales
Abstract
We introduce Microsoft Machine Learning for Apache Spark (MMLSpark), an ecosystem of enhancements that expand the Apache Spark distributed computing library to tackle problems in Deep Learning, Micro-Service Orchestration, Gradient Boosting, Model Interpretability, and other areas of modern computation. Furthermore, we present a novel system called Spark Serving that allows users to run any Apache Spark program as a distributed, sub-millisecond latency web service backed by their existing Spark Cluster. All MMLSpark contributions have the same API to enable simple composition across frameworks and usage across batch, streaming, and RESTful web serving scenarios on static, elastic, or serverless clusters. We showcase MMLSpark by creating a method for deep object detection capable of learning without human labeled data and demonstrate its effectiveness for Snow Leopard conservation.
Forward citations
Cited by 2 Pith papers
- Spark Policy Toolkit: Semantic Contracts and Scalable Execution for Policy Learning in Spark
  Spark Policy Toolkit supplies semantic contracts plus mapInPandas/mapInArrow inference and executor-side split search, so policy learning remains correct and fast on Spark clusters at scales of tens of millions of rows.
- Driving Engagement in Daily Fantasy Sports with a Scalable and Urgency-Aware Ranking Engine
  An urgency-aware adaptation of the Deep Interest Network, with temporal encodings and a listwise neuralNDCG loss, delivers a 9% nDCG@1 lift over an optimized LightGBM baseline on a 650k-user industrial DFS dataset.