ML Experiment Management

  1. Cnvrg.io -

    1. Manage - Easily navigate machine learning with dashboards, reproducible data science, dataset organization, experiment tracking and visualization, a model repository and more

    2. Build - Run and track experiments in hyperspeed with the freedom to use any compute environment, framework, programming language or tool - no configuration required

    3. Automate - Build more models and automate your machine learning from research to production using reusable components and a drag-and-drop interface

  2. Comet.ml - Comet lets you track code, experiments, and results on ML projects. It’s fast, simple, and free for open source projects.

  3. Floyd - notebooks in the cloud, similar to Colab, Kaggle, etc.; GPU costs $4/h

  4. MissingLink - RIP (the service has been discontinued)

  5. Databricks

    1. Koalas - pandas API on Apache Spark (a short sketch appears after this list)

    2. Intro to Databricks on Spark; includes a basic sklearn-like tool and other custom operations, such as a single-vector aggregator for combining features into one input vector for a model

    3. Documentation (readme; covers all the libraries)

    4. Medium tutorial explaining the main advantages of Databricks, with examples of using it with native and non-native algorithms:

      1. Spark SQL

      2. MLflow

      3. Streaming

      4. SystemML DML using Keras models

    5. Utilizing Spark nodes for grid searching with sklearn (see the sketch after this list)

      1. from spark_sklearn import GridSearchCV

    6. How can we leverage our existing experience with modeling libraries like scikit-learn? We'll explore three approaches that make use of existing libraries, but still benefit from the parallelism provided by Spark.

These approaches are:

  • Grid Search

  • Cross Validation

  • Sampling (random, chronological subsets of data across clusters)
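To make the Koalas entry above concrete, here is a minimal sketch of pandas-style code running on Spark. It assumes the `koalas` package is installed alongside PySpark; on Spark 3.2+ the same API ships as `pyspark.pandas`. The column names and values are illustrative.

```python
# Minimal Koalas sketch: pandas-style code executed by Spark.
import databricks.koalas as ks

# The familiar pandas constructor, but the DataFrame is distributed.
kdf = ks.DataFrame({"feature": [1.0, 2.0, 3.0, 4.0],
                    "label":   [0, 0, 1, 1]})

# pandas-style transformations compile down to Spark jobs.
kdf["feature_sq"] = kdf["feature"] ** 2
print(kdf.groupby("label")["feature_sq"].mean())
```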
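And here is a sketch of the distributed grid search referenced in item 5, following the pattern in the spark-sklearn README; the dataset and parameter grid are illustrative. The only change from plain scikit-learn is that spark_sklearn's GridSearchCV takes a SparkContext as its first argument.

```python
# Sketch of a Spark-distributed scikit-learn grid search with spark-sklearn.
# Each (parameter combination, CV fold) fit runs as a separate Spark task;
# the learning algorithm itself stays single-machine.
from sklearn import datasets, svm
from pyspark.sql import SparkSession
from spark_sklearn import GridSearchCV

spark = SparkSession.builder.appName("spark-sklearn-grid").getOrCreate()
sc = spark.sparkContext

iris = datasets.load_iris()
param_grid = {"kernel": ("linear", "rbf"), "C": [1, 10]}

# Drop-in replacement for sklearn's GridSearchCV, except that the
# first argument is the SparkContext used to parallelize the search.
clf = GridSearchCV(sc, svm.SVC(gamma="auto"), param_grid)
clf.fit(iris.data, iris.target)
print(clf.best_params_, clf.best_score_)
```

Because each candidate model is fit on a single executor, this approach suits datasets that fit in one machine's memory - exactly the trade-off described in the Ref below.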

  1. GitHub: spark-sklearn (needs to be compared to what Spark has internally)

    1. Ref: It's worth pausing here to note that the architecture of this approach is different than that used by MLlib in Spark. Using spark-sklearn, we're simply distributing the cross-validation run of each model (with a specific combination of hyperparameters) across each Spark executor. Spark MLlib, on the other hand, will distribute the internals of the actual learning algorithms across the cluster. (A contrasting MLlib sketch follows this list.)

    2. The main advantage of spark-sklearn is that it enables leveraging the very rich set of machine learning algorithms in scikit-learn. These algorithms do not run natively on a cluster (although they can be parallelized on a single machine) and by adding Spark, we can unlock a lot more horsepower than could ordinarily be used.

    3. Using spark-sklearn is a straightforward way to throw more CPU at any machine learning problem you might have. We used the package to reduce the time spent searching and reduce the error for our estimator.
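For contrast with the spark-sklearn architecture described in the Ref above, here is a minimal sketch of Spark MLlib's native CrossValidator, which distributes the training of each candidate model across the cluster rather than farming out whole single-machine fits. The toy inline data, parameter values, and fold count are all illustrative.

```python
# Contrasting sketch: Spark MLlib tunes hyperparameters with models whose
# training itself runs distributed on the cluster.
from pyspark.sql import SparkSession
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark.ml.linalg import Vectors
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

spark = SparkSession.builder.appName("mllib-cv").getOrCreate()

# Toy DataFrame with the `features` vector / `label` columns MLlib expects
# (in practice these are often produced by VectorAssembler).
rows = [(Vectors.dense([float(i), float(i % 3)]), float(i % 2))
        for i in range(12)]
train = spark.createDataFrame(rows, ["features", "label"])

lr = LogisticRegression()
grid = (ParamGridBuilder()
        .addGrid(lr.regParam, [0.01, 0.1])
        .addGrid(lr.elasticNetParam, [0.0, 0.5])
        .build())

cv = CrossValidator(estimator=lr,
                    estimatorParamMaps=grid,
                    evaluator=MulticlassClassificationEvaluator(
                        metricName="accuracy"),
                    numFolds=2)

model = cv.fit(train)    # each candidate model is trained on the cluster
print(model.avgMetrics)  # mean accuracy per parameter combination
```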
