ML Experiment Management
Cnvrg.io -
Manage - Easily navigate machine learning with dashboards, reproducible data science, dataset organization, experiment tracking and visualization, a model repository and more
Build - Run and track experiments in hyperspeed with the freedom to use any compute environment, framework, programming language or tool - no configuration required
Automate - Build more models and automate your machine learning from research to production using reusable components and a drag-and-drop interface
Comet.ml - Comet lets you track code, experiments, and results on ML projects. It’s fast, simple, and free for open source projects.
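A minimal Comet tracking sketch, assuming the standard comet_ml Experiment API (the API key and project name are placeholders):

```python
from comet_ml import Experiment

# Placeholder credentials - replace with your own key/project
exp = Experiment(api_key="YOUR_API_KEY", project_name="demo")
exp.log_parameter("batch_size", 32)        # hyperparameters
exp.log_metric("accuracy", 0.91, step=1)   # training metrics
exp.end()
```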
Floyd - notebooks in the cloud, similar to Colab, Kaggle, etc.; GPUs cost ~$4/hour
MissingLink - RIP (service discontinued)
Databricks
Koalas - pandas API on Apache Spark
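A tiny sketch of the pandas-style syntax Koalas exposes on top of Spark (Koalas was later merged into Spark itself as pyspark.pandas):

```python
import databricks.koalas as ks

# pandas-like DataFrame, backed by Spark under the hood
kdf = ks.DataFrame({"x": [1, 2, 3], "y": [4.0, 5.0, 6.0]})
print(kdf["x"].mean())         # executed as a distributed Spark job
print(kdf.groupby("x").sum())  # familiar pandas groupby syntax
```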
Intro to Databricks on Spark; includes a basic sklearn-like tool and other custom operations, such as a single-vector aggregator for combining features into a model input
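Presumably this refers to MLlib's VectorAssembler; a minimal sketch with illustrative column names:

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(
    [(25.0, 50000.0, 3.0), (40.0, 82000.0, 7.0)],
    ["age", "income", "visits"],  # illustrative feature columns
)

# Pack the individual feature columns into the single vector column
# that Spark ML estimators expect as input
assembler = VectorAssembler(inputCols=["age", "income", "visits"],
                            outputCol="features")
assembler.transform(df).select("features").show(truncate=False)
```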
Documentation (README; covers all the libraries)
Medium tutorial explaining the three advantages of Databricks, with examples of using both native and non-native algorithms
Spark SQL
MLflow
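A minimal MLflow tracking sketch, using only the core logging API:

```python
import mlflow

# Each run records its parameters and metrics to the tracking backend
# (defaults to a local ./mlruns directory)
with mlflow.start_run(run_name="demo"):
    mlflow.log_param("lr", 0.01)
    for step, loss in enumerate([0.9, 0.5, 0.3]):
        mlflow.log_metric("loss", loss, step=step)
```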
Streaming
SystemML DML using Keras models.
Utilizing Spark nodes for grid search with sklearn:
from spark_sklearn import GridSearchCV
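A minimal sketch following the spark-sklearn README pattern; it keeps sklearn's GridSearchCV interface but ships each hyperparameter fit to a Spark executor:

```python
from sklearn import datasets, svm
from pyspark import SparkContext
from spark_sklearn import GridSearchCV

sc = SparkContext.getOrCreate()  # assumes a local or cluster Spark setup

iris = datasets.load_iris()
param_grid = {"kernel": ("linear", "rbf"), "C": [1, 10]}

# Drop-in replacement for sklearn's GridSearchCV: each parameter
# combination is trained on a Spark executor rather than locally
clf = GridSearchCV(sc, svm.SVC(gamma="auto"), param_grid)
clf.fit(iris.data, iris.target)
print(clf.best_params_)
```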
How can we leverage our existing experience with modeling libraries like scikit-learn? We'll explore three approaches that make use of existing libraries, but still benefit from the parallelism provided by Spark.
These approaches are:
Grid search
Cross-validation
Sampling (random or chronological subsets of the data across the cluster; see the sketch after this list)
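A minimal sketch of the sampling approach: let the cluster draw the subset, then fit an ordinary sklearn model on the driver (the dataset path and column names are hypothetical):

```python
from pyspark.sql import SparkSession
from sklearn.ensemble import RandomForestClassifier

spark = SparkSession.builder.getOrCreate()
df = spark.read.parquet("events.parquet")  # hypothetical dataset

# The 10% random sample is computed across the cluster; only the
# sampled rows are collected to the driver for local training
sample_pd = df.sample(fraction=0.10, seed=42).toPandas()

X = sample_pd[["f1", "f2", "f3"]]  # hypothetical feature columns
y = sample_pd["label"]             # hypothetical label column
model = RandomForestClassifier(n_estimators=100).fit(X, y)
```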
GitHub: spark-sklearn (worth comparing to what Spark provides natively)
Ref: It's worth pausing here to note that the architecture of this approach is different than that used by MLlib in Spark. Using spark-sklearn, we're simply distributing the cross-validation run of each model (with a specific combination of hyperparameters) across each Spark executor. Spark MLlib, on the other hand, will distribute the internals of the actual learning algorithms across the cluster.
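For contrast, a minimal sketch of the native MLlib route the quote describes, where the learning algorithm itself runs distributed; train_df is a hypothetical DataFrame with features/label columns:

```python
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

lr = LogisticRegression(featuresCol="features", labelCol="label")
grid = (ParamGridBuilder()
        .addGrid(lr.regParam, [0.01, 0.1])
        .build())

# Both the cross-validation loop and the internals of the learning
# algorithm are distributed across the cluster
cv = CrossValidator(estimator=lr,
                    estimatorParamMaps=grid,
                    evaluator=BinaryClassificationEvaluator(),
                    numFolds=3)
cv_model = cv.fit(train_df)  # train_df: hypothetical training DataFrame
```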
The main advantage of spark-sklearn is that it enables leveraging the very rich set of machine learning algorithms in scikit-learn. These algorithms do not run natively on a cluster (although they can be parallelized on a single machine) and by adding Spark, we can unlock a lot more horsepower than could ordinarily be used.
Using spark-sklearn is a straightforward way to throw more CPU at any machine learning problem you might have. We used the package to reduce both the time spent searching and the error of our estimator.
Medium article on sklearn random trees