Benchmarking
Algorithms
Numpy Blas:
GLUE:
State of the art in AI:
Cloud providers:
Datasets:
Hardware:
Platforms
Algorithms:
Scaling networks and predicting performance of NN:
NLP
Multi-Task Learning
- "scikit-learn_bench benchmarks various implementations of machine learning algorithms across data analytics frameworks. It currently support the scikit-learn, DAAL4PY, cuML, and XGBoost frameworks for commonly used machine learning algorithms."
- In terms of GPUs: 1070 vs 1080 vs 2080
- Google and Amazon cloud vs a local GPU
- Titan Xp / 1080 Ti / 1070 on GoogLeNet
- March '17: in terms of price and CUDA cores, the bottom line is the 1060-1080 range.
- Regarding many GPUs vs CPUs in terms of memory bandwidth (BW)
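A minimal PyTorch sketch of the CPU-vs-GPU comparison these posts run; the matrix size and repetition count are arbitrary, and real benchmarks also vary precision and memory traffic:

```python
import time

import torch

def timed_matmul(device: str, n: int = 4096, reps: int = 10) -> float:
    """Average seconds per n x n matrix multiply on the given device."""
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    torch.matmul(a, b)  # warm-up: exclude allocation / kernel-launch cost
    if device == "cuda":
        torch.cuda.synchronize()
    t0 = time.perf_counter()
    for _ in range(reps):
        torch.matmul(a, b)
    if device == "cuda":
        torch.cuda.synchronize()  # GPU work is async; wait before stopping the clock
    return (time.perf_counter() - t0) / reps

print(f"CPU: {timed_matmul('cpu'):.4f}s per matmul")
if torch.cuda.is_available():
    print(f"GPU: {timed_matmul('cuda'):.4f}s per matmul")
```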
- A comparison of classifiers in terms of accuracy, speed, memory, and a 2D visualization of each classifier's decision boundary (see the sketch below):
  + Logistic Regression Classifier
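A minimal sketch of such a comparison in scikit-learn. Only Logistic Regression appears in the list above, so the other two classifiers and the data sizes are illustrative assumptions, and the 2D decision-boundary plot is omitted:

```python
import time

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=5_000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

classifiers = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Decision Tree": DecisionTreeClassifier(random_state=0),  # illustrative extra
    "KNN": KNeighborsClassifier(),                            # illustrative extra
}

# Measure fit time and held-out accuracy for each classifier.
for name, clf in classifiers.items():
    t0 = time.perf_counter()
    clf.fit(X_tr, y_tr)
    fit_s = time.perf_counter() - t0
    print(f"{name:20s} accuracy={clf.score(X_te, y_te):.3f} fit={fit_s:.3f}s")
```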
- Batch size matters: powers of 2 are faster than nearby non-power-of-2 sizes.
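A quick way to check that claim, assuming PyTorch; on CPU the effect may be negligible, since the power-of-2 advantage mostly comes from GPU kernel tiling:

```python
import time

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.01)

def steps_per_second(batch_size: int, steps: int = 50) -> float:
    """Throughput of full train steps (forward + backward + update)."""
    x = torch.randn(batch_size, 512)
    y = torch.randint(0, 10, (batch_size,))
    t0 = time.perf_counter()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return steps / (time.perf_counter() - t0)

# Compare a power-of-two batch against its immediate neighbours.
for bs in (127, 128, 129):
    print(f"batch={bs}: {steps_per_second(bs):.1f} steps/s")
```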
- The idea behind the video is to build a system that can predict training time (and possibly accuracy) when scaling networks across multiple GPUs; there is also a nice slide about general hardware recommendations.
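A toy version of that idea, assuming made-up timing measurements and a simple "1/n plus fixed overhead" scaling model; the talk's actual predictor is not specified here:

```python
import numpy as np

# Hypothetical measurements: (number of GPUs, seconds per epoch).
gpus = np.array([1, 2, 4, 8], dtype=float)
epoch_s = np.array([1000.0, 540.0, 300.0, 180.0])

# Assume time(n) ~= a / n + b: the parallel part shrinks with n GPUs,
# while a fixed overhead b (communication, input pipeline) does not.
A = np.column_stack([1.0 / gpus, np.ones_like(gpus)])
(a, b), *_ = np.linalg.lstsq(A, epoch_s, rcond=None)

for n in (16, 32):
    print(f"predicted seconds per epoch on {n} GPUs: {a / n + b:.0f}")
```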
- (Yarin Gal) - "In this paper we make the observation that the performance of such systems is strongly dependent on the relative weighting between each task’s loss. Tuning these weights by hand is a difficult and expensive process, making multi-task learning prohibitive in practice. We propose a principled approach to multi-task deep learning which weighs multiple loss functions by considering the homoscedastic uncertainty of each task."
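The paper's weighting is often implemented with one learnable log-variance per task; a minimal PyTorch sketch, using the simplified exp(-s) * L + s form that is a common rendering of the paper's loss rather than code from it:

```python
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    """Weigh task losses by learned homoscedastic (task) uncertainty."""

    def __init__(self, num_tasks: int):
        super().__init__()
        # s_i = log(sigma_i^2), one per task, learned jointly with the model.
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, task_losses):
        total = 0.0
        for loss, s in zip(task_losses, self.log_vars):
            # High-uncertainty tasks are down-weighted; the +s regularizer
            # stops the model from driving every variance to infinity.
            total = total + torch.exp(-s) * loss + s
        return total
```

Note that the log_vars must be handed to the optimizer together with the model's own parameters, so the weighting is learned during training.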
- "By sharing representations between related tasks, we can enable our model to generalize better on our original task. This approach is called Multi-Task Learning (MTL) and will be the topic of this blog post."