Benchmarking
scikit-learn_bench - "scikit-learn_bench benchmarks various implementations of machine learning algorithms across data analytics frameworks. It currently supports the scikit-learn, DAAL4PY, cuML, and XGBoost frameworks for commonly used machine learning algorithms."
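A minimal sketch (not taken from the linked repo) of the kind of measurement such benchmark suites automate: timing fit() and predict() for one scikit-learn algorithm across dataset sizes. The choice of KMeans and the sizes are illustrative.

```python
import time

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

for n_samples in (10_000, 100_000):
    X = rng.standard_normal((n_samples, 50))
    model = KMeans(n_clusters=10, n_init=10, random_state=0)

    t0 = time.perf_counter()
    model.fit(X)
    fit_s = time.perf_counter() - t0

    t0 = time.perf_counter()
    model.predict(X)
    predict_s = time.perf_counter() - t0

    print(f"n={n_samples:>7}  fit: {fit_s:.3f}s  predict: {predict_s:.3f}s")
```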
NumPy BLAS:
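NumPy's linear-algebra speed depends on which BLAS it was built against. A quick sketch to check the linked library and time a matrix multiply, the usual proxy for BLAS throughput (matrix size here is arbitrary):

```python
import time

import numpy as np

np.show_config()  # prints the BLAS/LAPACK libraries NumPy is linked to

n = 2000
a = np.random.rand(n, n)
b = np.random.rand(n, n)

t0 = time.perf_counter()
a @ b
elapsed = time.perf_counter() - t0
# 2*n^3 floating-point ops for an n x n matmul
print(f"{n}x{n} matmul: {elapsed:.3f}s  (~{2 * n**3 / elapsed / 1e9:.1f} GFLOP/s)")
```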
GLUE (General Language Understanding Evaluation) NLP benchmark:
State of the art in AI:
Broken down by domain × datasets
Cloud providers:
Datasets:
Hardware:
Nvidia 1070 vs 1080 vs 2080
CPU vs GPU benchmarking for CNN\Text\LSTM\BiLSTM - Google and Amazon cloud CPUs vs GPUs (a minimal timing sketch appears after this list).
Nvidia GPUs - Titan Xp\1080 Ti\1070 on GoogLeNet
March '17 - Nvidia desktop GPUs, in terms of price and CUDA cores; the bottom line is the 1060-1080 range.
Another benchmark, up to 2013 - many GPUs vs CPUs in terms of bandwidth.
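The sketch promised above: a hedged CPU-vs-GPU micro-benchmark in PyTorch (assumed here; the linked posts use various frameworks). torch.cuda.synchronize() is needed because CUDA kernels launch asynchronously.

```python
import time

import torch

def time_matmul(device: str, n: int = 4096, repeats: int = 10) -> float:
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()
    t0 = time.perf_counter()
    for _ in range(repeats):
        a @ b
    if device == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - t0) / repeats

print(f"cpu:  {time_matmul('cpu'):.4f}s per matmul")
if torch.cuda.is_available():
    print(f"cuda: {time_matmul('cuda'):.4f}s per matmul")
```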
Platforms
Algorithms:
Comparing accuracy, speed, memory and 2D visualization of classifiers:
SVM, k-nearest neighbors, Random Forest, AdaBoost Classifier, Gradient Boosting, Naive Bayes, LDA, QDA, RBMs, Logistic Regression, RBM + Logistic Regression Classifier
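A hedged sketch of the accuracy/speed side of such a comparison, for a subset of the classifiers listed above on one synthetic dataset (memory and 2D visualization omitted):

```python
import time

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

classifiers = {
    "SVM": SVC(),
    "k-NN": KNeighborsClassifier(),
    "Random Forest": RandomForestClassifier(random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "Naive Bayes": GaussianNB(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}

for name, clf in classifiers.items():
    t0 = time.perf_counter()
    clf.fit(X_train, y_train)
    fit_s = time.perf_counter() - t0
    acc = clf.score(X_test, y_test)
    print(f"{name:<20} accuracy={acc:.3f}  fit={fit_s:.3f}s")
```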
LSTM vs cuDNN LSTM - a power-of-2 batch size matters; the latter is faster.
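A sketch of how one might verify the batch-size claim: time forward passes of an LSTM at power-of-2 vs odd batch sizes. PyTorch is assumed here (its nn.LSTM dispatches to cuDNN on GPU); the original comparison was between Keras's plain LSTM and its cuDNN-backed variant.

```python
import time

import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
lstm = nn.LSTM(input_size=128, hidden_size=256, batch_first=True).to(device)

for batch in (32, 37, 64):  # powers of 2 vs an odd size
    x = torch.randn(batch, 100, 128, device=device)  # (batch, seq, features)
    with torch.no_grad():
        lstm(x)  # warm-up
        if device == "cuda":
            torch.cuda.synchronize()
        t0 = time.perf_counter()
        for _ in range(20):
            lstm(x)
        if device == "cuda":
            torch.cuda.synchronize()
    print(f"batch={batch:<3} {(time.perf_counter() - t0) / 20 * 1e3:.2f} ms/forward")
```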
Scaling networks and predicting performance of NN:
A great overview of NN types; the idea behind the video is a system that predicts training time (and possibly accuracy) when scaling networks across multiple GPUs. There is also a nice slide with general hardware recommendations.
NLP
Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics (Yarin Gal) GitHub - "In this paper we make the observation that the performance of such systems is strongly dependent on the relative weighting between each task’s loss. Tuning these weights by hand is a difficult and expensive process, making multi-task learning prohibitive in practice. We propose a principled approach to multi-task deep learning which weighs multiple loss functions by considering the homoscedastic uncertainty of each task."
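A minimal sketch of the paper's homoscedastic-uncertainty weighting: each task i gets a learned log-variance s_i, and the combined loss is roughly sum_i exp(-s_i) * L_i + s_i (the exact constants differ per task type in the paper). The PyTorch framing and placeholder task losses are assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    def __init__(self, n_tasks: int):
        super().__init__()
        # s_i = log(sigma_i^2), initialised to 0, i.e. sigma_i = 1
        self.log_vars = nn.Parameter(torch.zeros(n_tasks))

    def forward(self, task_losses: list) -> torch.Tensor:
        total = 0.0
        for s, loss in zip(self.log_vars, task_losses):
            # exp(-s) down-weights noisy tasks; the +s term is a
            # regulariser that stops sigma from growing without bound
            total = total + torch.exp(-s) * loss + s
        return total

# Usage: optimise log_vars jointly with the model's own parameters.
weighting = UncertaintyWeightedLoss(n_tasks=2)
l1 = torch.tensor(1.3, requires_grad=True)  # placeholder task losses
l2 = torch.tensor(0.4, requires_grad=True)
combined = weighting([l1, l2])
combined.backward()
```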
Ruder on Multi-Task Learning - "By sharing representations between related tasks, we can enable our model to generalize better on our original task. This approach is called Multi-Task Learning (MTL) and will be the topic of this blog post."
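Hard parameter sharing, the most common MTL setup in Ruder's post: hidden layers are shared across tasks, with a small task-specific head per task. A minimal sketch; layer sizes and head shapes are illustrative.

```python
import torch
import torch.nn as nn

class HardSharingMTL(nn.Module):
    def __init__(self, in_dim: int = 32, hidden: int = 64):
        super().__init__()
        self.shared = nn.Sequential(  # representation shared by all tasks
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.head_a = nn.Linear(hidden, 10)  # e.g. 10-class classification
        self.head_b = nn.Linear(hidden, 1)   # e.g. scalar regression

    def forward(self, x: torch.Tensor):
        h = self.shared(x)
        return self.head_a(h), self.head_b(h)

model = HardSharingMTL()
logits, value = model(torch.randn(8, 32))
print(logits.shape, value.shape)  # torch.Size([8, 10]) torch.Size([8, 1])
```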