Data Science
Microsoft on Team DS Lifecycle - "The Team Data Science Process (TDSP) is an agile, iterative data science methodology to deliver predictive analytics solutions and intelligent applications efficiently. TDSP helps improve team collaboration and learning by suggesting how team roles work best together. TDSP includes best practices and structures from Microsoft and other industry leaders to help toward successful implementation of data science initiatives. The goal is to help companies fully realize the benefits of their analytics program.
This article provides an overview of TDSP and its main components. We provide a generic description of the process here that can be implemented with different kinds of tools. A more detailed description of the project tasks and roles involved in the lifecycle of the process is provided in additional linked topics. Guidance on how to implement the TDSP using a specific set of Microsoft tools and infrastructure that we use to implement the TDSP in our teams is also provided."
"When I used to do consulting, I’d always seek to understand an organization’s context for developing data projects, based on these considerations:
- Strategy: What is the organization trying to do (objective) and what can it change to do it better (levers)?
- Data: Is the organization capturing necessary data and making it available?
- Analytics: What kinds of insights would be useful to the organization?
- Implementation: What organizational capabilities does it have?
- Maintenance: What systems are in place to track changes in the operational environment?
- Constraints: What constraints need to be considered in each of the above areas?"
- DS vs DA vs MLE - the most comprehensive diagram post ever; this is the mother lode of figure references.
References:
- Reed Hastings on Netflix's keeper test - "netflixs-keeper-test-is-the-secret-to-a-successful-workforce"
- Kadenze - Deep Learning with TensorFlow - histograms of the standardized images, (image - mean image) / std dev, look quite good; a minimal sketch of that check follows below.
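A minimal sketch of that check, assuming images are stacked in a NumPy array (random data stands in for a real dataset):

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-in for a real dataset: 100 grayscale 64x64 images in [0, 1].
images = np.random.rand(100, 64, 64).astype(np.float32)

# Dataset-wide mean image and per-pixel std dev, then standardization.
mean_img = images.mean(axis=0)
std_img = images.std(axis=0) + 1e-8  # avoid division by zero
normalized = (images - mean_img) / std_img

# Histogram of the standardized pixel values - should be roughly zero-centered.
plt.hist(normalized.ravel(), bins=100)
plt.title("(image - mean image) / std dev")
plt.show()
```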
- DP1 - Transform: moving an ML model to production is much easier if you keep inputs, features, and transforms separate.
- DP2 - Checkpoints: saving the intermediate weights of your model during training provides resilience, generalization, and tunability.
- DP3 - Virtual epochs: base machine learning model training and evaluation on the total number of examples, not on epochs or steps.
- DP5 - Repeatable sampling: use the hash of a well-distributed column to split your data into training, validation, and test sets; a minimal sketch follows below.
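A minimal sketch of that pattern, assuming a pandas DataFrame keyed by a well-distributed `customer_id` column (the column name and the 80/10/10 split are illustrative assumptions):

```python
import hashlib

import pandas as pd

def hash_bucket(value, num_buckets=10):
    """Deterministically map a key to one of num_buckets buckets."""
    digest = hashlib.md5(str(value).encode("utf-8")).hexdigest()
    return int(digest, 16) % num_buckets

def split_by_hash(df, key_column):
    """Repeatable 80/10/10 train/validation/test split based on a hashed key."""
    buckets = df[key_column].map(hash_bucket)
    return df[buckets < 8], df[buckets == 8], df[buckets == 9]

# Illustrative usage with a toy DataFrame.
df = pd.DataFrame({"customer_id": range(1000), "label": [i % 2 for i in range(1000)]})
train, validation, test = split_by_hash(df, "customer_id")
```

Hashing the key (rather than using a random split or Python's per-run-salted built-in `hash`) keeps every record in the same split across reruns and as new data arrives.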
- Gensim notebooks - from word2vec and doc2vec to NMF, LDA, PCA, the sklearn API, cosine similarity, topic modeling, t-SNE, etc.; a minimal word2vec sketch follows below.
- Deep Learning with Python - François Chollet; deep learning and vision Git notebooks, plus the official notebooks.
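A minimal gensim word2vec sketch of the kind those notebooks walk through (the toy corpus and parameters are illustrative assumptions):

```python
from gensim.models import Word2Vec

# Toy corpus: each document is a list of tokens.
sentences = [
    ["data", "science", "is", "fun"],
    ["machine", "learning", "needs", "data"],
    ["deep", "learning", "is", "machine", "learning"],
]

# Train a small word2vec model; vector_size/window/epochs are illustrative.
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, epochs=50)

# Cosine similarity between two words, and nearest neighbors of a word.
print(model.wv.similarity("data", "learning"))
print(model.wv.most_similar("learning", topn=3))
```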
- (Really good) Practical advice for analysis of large, complex data sets - distributions, outliers, examples, slices, metric significance, consistency over time, validation, description, evaluation, robustness in measurement, reproducibility, etc.
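One of those practices, slicing, means comparing a metric across subgroups instead of only in aggregate; a minimal pandas sketch, with column names as illustrative assumptions:

```python
import pandas as pd

# Toy event-level data; "country" and "converted" are illustrative column names.
df = pd.DataFrame({
    "country": ["US", "US", "DE", "DE", "FR", "FR", "FR"],
    "converted": [1, 0, 1, 1, 0, 0, 1],
})

# Overall metric vs. the same metric sliced by country.
print(f"overall conversion: {df['converted'].mean():.2f}")
print(df.groupby("country")["converted"].agg(["mean", "count"]))
```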