Timeseries
Random walk - what is it?
Time series decomposition book - STL, X11, SEATS
Sktime - a sklearn-based API (Medium post); integrates algorithms from tsfresh and tslearn.
(really good) A LightGBM Autoregressor — Using Sktime: explains the basics of time series prediction - splitting, next-step, delayed-step, and multi-step forecasting, and deseasonalization.
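A minimal sketch of that reduction idea (not taken from the article): sktime's make_reduction wraps a LightGBM regressor as a recursive forecaster over lagged windows. The airline dataset, window length, and horizon are assumptions for illustration.

```python
import numpy as np
from lightgbm import LGBMRegressor
from sktime.datasets import load_airline
from sktime.forecasting.compose import make_reduction

y = load_airline()                      # univariate monthly series (example data)
y_train, y_test = y[:-12], y[-12:]      # simple temporal split, last year held out

# Recursive strategy: train on lagged windows, feed predictions back as inputs
forecaster = make_reduction(LGBMRegressor(), window_length=12, strategy="recursive")
forecaster.fit(y_train)
y_pred = forecaster.predict(fh=np.arange(1, 13))   # 12-step-ahead forecast
```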
TSFresh - extracts ~1200 features and filters them using FDR (false discovery rate), for time series classification etc.
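A hedged sketch of that tsfresh flow. The long-format frame `df` (columns id/time/value) and the per-id label vector `y` are assumptions; the FDR level shown is just the library default style of usage.

```python
from tsfresh import extract_features, select_features
from tsfresh.utilities.dataframe_functions import impute

# df is assumed to be long-format: columns ["id", "time", "value"]
X = extract_features(df, column_id="id", column_sort="time")
impute(X)                                           # replace NaN/inf produced by some extractors
X_selected = select_features(X, y, fdr_level=0.05)  # y: one class label per id
```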
Darts - a Python library for user-friendly forecasting and anomaly detection on time series; forecasting models & examples.
DTAIDistance - a library for time series distances (e.g. Dynamic Time Warping) used in the DTAI Research Group. The library offers a pure Python implementation and a faster implementation in C (whose only dependency is Cython). It is compatible with Numpy and Pandas and is implemented to avoid unnecessary data copy operations. Clustering modules:
dtaidistance.clustering.hierarchical
dtaidistance.clustering.kmeans
dtaidistance.clustering.medoids
Identify anomalies, outliers or abnormal behaviour (see for example the anomatools package).
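A quick usage sketch of the dtaidistance API listed above, on toy arrays (the data is made up for illustration):

```python
import numpy as np
from dtaidistance import dtw, clustering

s1 = np.array([0., 0, 1, 2, 1, 0, 1, 0, 0])
s2 = np.array([0., 1, 2, 0, 0, 0, 0, 0, 0])
d = dtw.distance_fast(s1, s2)              # DTW distance via the C implementation

series = [s1, s2, np.array([0., 0, 0, 1, 2, 1, 0, 0, 0])]
model = clustering.Hierarchical(dtw.distance_matrix_fast, {})   # hierarchical clustering on DTW
cluster_idx = model.fit(series)
```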
Semi supervised with DTAIDistance - Active semi-supervised clustering
The recommended method for performing active semi-supervised clustering with DTAIDistance is to use COBRAS for time series clustering: https://github.com/ML-KULeuven/cobras. COBRAS is a library for semi-supervised time series clustering using pairwise constraints, which natively supports both dtaidistance.dtw and kshape.
Affine warp, a neural net with time warping - part of the following manuscript, which focuses on analysis of large-scale neural recordings (though the code can also be applied to many other data types).
Neural warp - NeuralWarp: Time-Series Similarity with Warping Networks
A great introduction to time series - “The approach is to come up with a list of features that captures the temporal aspects so that the auto correlation information is not lost.” Basically it tells us to take sequence features and create (auto)correlated new variables using a time window, i.e., “time series forecasts as regression that factor in autocorrelation as well.” We can transform raw features into other types of features that explain the relationship in time between features. We measure success using loss functions: MAE, RMSE, MAPE, RMSEP, AC-ERROR-RATE.
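A minimal sketch of that feature-engineering idea: turn a raw series into lagged (autocorrelated) features over a time window so an ordinary regressor can be used. The variable `series` and the chosen lags are assumptions.

```python
import pandas as pd

df = pd.DataFrame({"y": series})           # `series` is any 1-D sequence (assumption)
for lag in (1, 2, 3, 7):                   # window of past values
    df[f"y_lag_{lag}"] = df["y"].shift(lag)
df["y_rolling_mean_7"] = df["y"].shift(1).rolling(7).mean()
df = df.dropna()                           # drop rows that lack a full window
X, y = df.drop(columns="y"), df["y"]       # regression: predict y from its own past
```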
Interesting idea on how to define ‘time series’ dummy variables that utilize the beginning/end of certain holiday events, including important information on what NOT to filter out even if it seems insignificant, such as zero sales that may indicate some relationship to many sales the following day.
A trend exists when there is a long-term increase or decrease in the data.
A seasonal pattern occurs when a time series is affected by seasonal factors such as the time of the year or the day of the week, e.g., monthly sales induced by the change in cost at the end of the calendar year.
A cycle occurs when the data exhibit rises and falls that are not of a fixed period, often lasting several years.
Some statistical measures: mean, median, percentiles, IQR, std dev, bivariate statistics (correlation between variables).
But correlation can lie - the following graphs all have a correlation of 0.8:
Autocorrelation measures the linear relationship between lagged values of a time series.
Lag 8, for example, is correlated, with a high measure of 0.83.
Average: Forecasts of all future values are equal to the mean of the historical data.
Naive: Forecasts are simply set to be the value of the last observation.
Seasonal Naive: forecasts are set equal to the last observed value from the same season of the year.
Drift: A variation on the naïve method is to allow the forecasts to increase or decrease over time, the drift is set to be the average change seen in the historical data.
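A short sketch of these four baselines, assuming `y` is a pandas Series and a monthly seasonal period of 12 (both assumptions):

```python
import numpy as np

h, m = 12, 12                                           # horizon and seasonal period (assumed)
average_fc  = np.repeat(y.mean(), h)                    # Average: mean of historical data
naive_fc    = np.repeat(y.iloc[-1], h)                  # Naive: last observation
snaive_fc   = np.tile(y.iloc[-m:].to_numpy(), h // m)   # Seasonal naive: repeat last season
drift_slope = (y.iloc[-1] - y.iloc[0]) / (len(y) - 1)   # Drift: average historical change
drift_fc    = y.iloc[-1] + drift_slope * np.arange(1, h + 1)
```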
Log
Box-Cox
Back transform
Calendrical adjustments
Inflation adjustment
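A sketch of the Box-Cox transform and its back-transform with scipy; the series `y` is assumed to be strictly positive.

```python
from scipy.stats import boxcox
from scipy.special import inv_boxcox

y_transformed, lam = boxcox(y)     # lambda estimated by maximum likelihood
# ... fit a model on y_transformed, forecast, then back-transform:
y_back = inv_boxcox(y_transformed, lam)
```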
Transforming time series data to tabular form (in order to use tabular-based approaches).
Dummy variables: Monday, Tuesday, Wednesday, Thursday, Friday, Saturday - no Sunday!
Notice that only six dummy variables are needed to code seven categories. That is because the seventh category (in this case Sunday) is specified when the dummy variables are all set to zero. Many beginners will try to add a seventh dummy variable for the seventh category; this is known as the "dummy variable trap" because it will cause the regression to fail. (A minimal pandas sketch follows this list.)
Outliers: If there is an outlier in the data, rather than omit it, you can use a dummy variable to remove its effect. In this case, the dummy variable takes value one for that observation and zero everywhere else.
Public holidays: For daily data, the effect of public holidays can be accounted for by including a dummy variable predictor taking value one on public holidays and zero elsewhere.
Easter: is different from most holidays because it is not held on the same date each year and the effect can last for several days. In this case, a dummy variable can be used with value one where any part of the holiday falls in the particular time period and zero otherwise.
Trading days: The number of trading days in a month can vary considerably and can have a substantial effect on sales data. To allow for this, the number of trading days in each month can be included as a predictor. An alternative that allows for the effects of different days of the week has the following predictors: # Mondays in month; # Tuesdays in month; # Sundays in month.
Advertising: $advertising for previous month; $advertising for two months previously.
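The pandas sketch referenced above: six day-of-week dummies (one category dropped to avoid the dummy-variable trap) plus a public-holiday dummy. A daily DatetimeIndex on `df` and a `holiday_dates` list are assumptions.

```python
import pandas as pd

X = pd.get_dummies(df.index.day_name(), drop_first=True)   # 6 columns code 7 weekdays
X.index = df.index
X["holiday"] = df.index.isin(holiday_dates).astype(int)    # `holiday_dates`: assumed list of dates
```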
“compute parameter estimates over a rolling window of a fixed size through the sample. If the parameters are truly constant over the entire sample, then the estimates over the rolling windows should not be too different. If the parameters change at some point during the sample, then the rolling estimates should capture this instability”
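A simple sketch of that rolling-estimate idea: fit a linear trend in a fixed-size rolling window and watch whether the slope stays roughly constant. The window size and the Series `y` are assumptions.

```python
import numpy as np

window = 60                                 # assumed window size
slopes = y.rolling(window).apply(
    lambda w: np.polyfit(np.arange(len(w)), w, 1)[0],   # slope of a linear fit per window
    raw=True,
)
slopes.plot(title="Rolling slope estimates")  # large swings suggest parameter instability
```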
Estimate the trend-cycle with moving averages.
3, 5, 7, or 9? If the window is too large it will flatten the curve; if too small it will stay close to the actual curve.
Two-tier moving average: first a 4-MA, then a 2-MA on the resulting moving average (a 2x4 MA).
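A short pandas sketch of both ideas, using the window sizes mentioned above; `y` is assumed to be a pandas Series.

```python
ma5 = y.rolling(window=5, center=True).mean()      # 5-MA trend-cycle estimate
ma4 = y.rolling(window=4, center=True).mean()      # first tier: 4-MA
ma2x4 = ma4.rolling(window=2, center=True).mean()  # second tier: 2-MA of the 4-MA
```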
Visual example of ARIMA algorithm - captures the time series trend or forecast.
Creating curves to explain a complex seasonal fit.
scikit-lego with a decay estimator
Level. The baseline value for the series if it were a straight line.
Trend. The optional and often linear increasing or decreasing behavior of the series over time.
Seasonality. The optional repeating patterns or cycles of behavior over time.
Noise. The optional variability in the observations that cannot be explained by the model.
All time series have a level, most have noise, and the trend and seasonality are optional.
One-step forecast using a window of “1” and a typical sample “time, measure1, measure2”:
Linear/nonlinear classifiers: predict a single output value using the previous line (t-1), i.e., “measure1 t, measure2 t, measure1 t+1, measure2 t+1 (as the class)”.
Neural networks: predict multiple output values, i.e., “measure1 t, measure2 t, measure1 t+1 (class1), measure2 t+1 (class2)”.
One-Step Forecast: This is where the next time step (t+1) is predicted.
Multi-Step Forecast: This is where two or more future time steps are to be predicted.
Multi-step forecast using a window of “1” and a typical sample “time, measure1”, i.e., using the current value as input, we label it with the two future values:
“measure1 t, measure1 t+1 (class1), measure1 t+2 (class2)”
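A hedged sketch of that framing: a window of “1” as input, with one-step and two-step targets built by shifting. The variable `series` is an assumption.

```python
import pandas as pd

df = pd.DataFrame({"measure1": series})            # `series` is assumed
df["t_plus_1"] = df["measure1"].shift(-1)          # one-step-ahead target
df["t_plus_2"] = df["measure1"].shift(-2)          # second target for the multi-step case
df = df.dropna()
X = df[["measure1"]]                               # window of "1"
y_multi = df[["t_plus_1", "t_plus_2"]]             # multi-output labels
```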
This article explains ML methods for sequential supervised learning - methods that have been applied to solve sequential supervised learning problems:
sliding-window methods - converts a sequential supervised problem into a classical supervised problem
recurrent sliding windows
hidden Markov models
maximum entropy Markov models
input-output Markov models
conditional random fields
graph transformer networks
What is it? A stationary time series has no trend or seasonality; in other words, a non-stationary series has a trend or seasonality.
There are ways to remove the trend and seasonality, e.g., take the difference between time points:
X(t+1) - X(t)
Bigger lag to support seasonal changes
pandas Series.diff()
Plot a histogram, plot a log(X) as well.
Test for the unit-root null hypothesis - i.e., use the Augmented Dickey-Fuller test to determine whether a sample originates from a stationary or a non-stationary (seasonal/trend) time series.
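A sketch of the ADF test with statsmodels: the null hypothesis is a unit root (non-stationary), so a small p-value suggests stationarity. The 0.05 threshold and the Series `y` are assumptions.

```python
from statsmodels.tsa.stattools import adfuller

stat, pvalue, *_ = adfuller(y.dropna())
print(f"ADF statistic={stat:.3f}, p-value={pvalue:.3f}")
if pvalue > 0.05:
    y_diff = y.diff().dropna()     # difference the series and test again
```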
Shay on stationary time series, AR, ARMA
(amazing) STL and more.
Pmdarima - pmdarima’s auto_arima function is extremely useful when building an ARIMA model, as it helps identify the most optimal p, d, q parameters and returns a fitted ARIMA model.
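A hedged sketch of auto_arima on a seasonal monthly series; the seasonal period and horizon are assumptions.

```python
import pmdarima as pm

model = pm.auto_arima(y, seasonal=True, m=12,        # m: assumed seasonal period
                      stepwise=True, suppress_warnings=True)
print(model.order, model.seasonal_order)             # the chosen (p,d,q) and (P,D,Q,m)
forecast = model.predict(n_periods=12)
```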
More mastery on short time series.
Autoregression (AR)
Moving Average (MA)
Autoregressive Moving Average (ARMA)
Autoregressive Integrated Moving Average (ARIMA)
Seasonal Autoregressive Integrated Moving-Average (SARIMA)
Seasonal Autoregressive Integrated Moving-Average with Exogenous Regressors (SARIMAX)
Vector Autoregression (VAR)
Vector Autoregression Moving-Average (VARMA)
Vector Autoregression Moving-Average with Exogenous Regressors (VARMAX)
Simple Exponential Smoothing (SES)
Holt Winter’s Exponential Smoothing (HWES)
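Quick statsmodels sketches for two of the models listed above; the orders and seasonal period are assumptions, not recommendations.

```python
from statsmodels.tsa.statespace.sarimax import SARIMAX
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# SARIMA via SARIMAX (no exogenous regressors here)
sarima = SARIMAX(y, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12)).fit(disp=False)
sarima_fc = sarima.forecast(steps=12)

# Holt-Winters exponential smoothing (HWES)
hwes = ExponentialSmoothing(y, trend="add", seasonal="add", seasonal_periods=12).fit()
hwes_fc = hwes.forecast(12)
```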
Predicting actual values of a time series using observations.
Using Kalman filters - explains the concept etc.; 1 out of 55 videos.
LSTM - there are three types of gates within a unit:
Forget Gate: conditionally decides what information to throw away from the block.
Input Gate: conditionally decides which values from the input to update the memory state.
Output Gate: conditionally decides what to output based on input and the memory of the block.
Using LSTM to predict sunspots; has some autocorrelation usage.
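A minimal, hedged Keras sketch (not from the sunspots post): an LSTM layer, whose forget/input/output gates are handled internally, trained for one-step-ahead forecasting on windows of 12 past values. The variable `series` and all hyperparameters are assumptions.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

window = 12
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = np.array([series[i + window] for i in range(len(series) - window)])
X = X.reshape((-1, window, 1))                      # (samples, timesteps, features)

model = Sequential([LSTM(32, input_shape=(window, 1)), Dense(1)])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, batch_size=32, verbose=0)
```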
Stackexchange - Yes, you can use DTW approach for classification and clustering of time series. I've compiled the following resources, which are focused on this very topic (I've recently answered a similar question, but not on this site, so I'm copying the contents here for everybody's convenience):
UCR Time Series Classification/Clustering: main page, software page and corresponding paper
Time Series Classification and Clustering with Python: a blog post
Capital Bikeshare: Time Series Clustering: another blog post
Time Series Classification and Clustering: ipython notebook
Dynamic Time Warping using rpy and Python: another blog post
Mining Time-series with Trillions of Points: Dynamic Time Warping at Scale: another blog post
Time Series Analysis and Mining in R (to add R to the mix): yet another blog post
And, finally, two tools implementing/supporting DTW, to top it off: R package and Python module
What is a stationary process, stationary time series analysis (Shay Palachi).
Anomaly detection (AD) techniques, part 2, part 3.
ADTK - a sklearn-like toolkit with an amazing intro; various algorithms for non-seasonal and seasonal data, transformers, ensembles.
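A hedged ADTK sketch in the style of its quick-start docs: detect seasonal anomalies in a series `s` with a DatetimeIndex (the data is assumed).

```python
from adtk.data import validate_series
from adtk.detector import SeasonalAD

s = validate_series(s)                  # checks/normalizes the time index
anomalies = SeasonalAD().fit_detect(s)  # boolean Series marking anomalous points
```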
Awesome TS anomaly detection on github
RANSAC (random sample consensus) is a good baseline for outlier detection.
You can feed RANSAC with tsfresh/tslearn features.
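A sketch of that baseline with scikit-learn: fit a robust trend over the time index and flag the points RANSAC rejects. Using the time index as the only feature is an assumption - tsfresh/tslearn features could be used instead, as noted above.

```python
import numpy as np
from sklearn.linear_model import RANSACRegressor

t = np.arange(len(y)).reshape(-1, 1)     # time index as the only feature (assumption)
ransac = RANSACRegressor().fit(t, y)
outlier_mask = ~ransac.inlier_mask_      # True where RANSAC treated the point as an outlier
```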
AD for TS, recommended by DTAIDistance, anomatools
Sliding windows
Twitter’s ESD test for outliers, using z-score and t-test.
Another ESD test inside here.
DTW, i.e., how to compute a better distance between two time series.
Myth 1: The ability of DTW to handle sequences of different lengths is a great advantage, and therefore the simple lower bound that requires different-length sequences to be reinterpolated to equal length is of limited utility [10][19][21]. In fact, as we will show, there is no evidence in the literature to suggest this, and extensive empirical evidence presented here suggests that comparing sequences of different lengths and reinterpolating them to equal length produce no statistically significant difference in accuracy or precision/recall.
Myth 2: Constraining the warping paths is a necessary evil that we inherited from the speech processing community to make DTW tractable, and that we should find ways to speed up DTW with no (or larger) constraints [19]. In fact, the opposite is true. As we will show, the 10% constraint on warping inherited blindly from the speech processing community is actually too large for real world data mining.
Myth 3: There is a need (and room) for improvements in the speed of DTW for data mining applications. In fact, as we will show here, if we use a simple lower bounding technique, DTW is essentially O(n) for data mining applications. At least for CPU time, we are almost certainly at the asymptotic limit for speeding up DTW.
Python code with a good tutorial.
Another function for DTW distance in Python.
A Medium post that mentions PrunedDTW, SparseDTW, and FastDTW.
(See the DTW classification/clustering resource list above, in the classification section.)
(nice) With time series
Tslearn - DTW, shapes, shapelets (Keras layer), time series k-means/clustering/SVM/SVR/KNN/barycenters/PAA/SAX.
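A hedged tslearn sketch: k-means over time series with a DTW metric. The input array `X_raw` of shape (n_series, length) and the number of clusters are assumptions.

```python
from tslearn.clustering import TimeSeriesKMeans
from tslearn.preprocessing import TimeSeriesScalerMeanVariance

X = TimeSeriesScalerMeanVariance().fit_transform(X_raw)   # X_raw: (n_series, length), assumed
km = TimeSeriesKMeans(n_clusters=3, metric="dtw", random_state=0)
labels = km.fit_predict(X)
```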
Bivariate formula: this correlation measures the extent of a linear relationship between two variables; a high number means high correlation between the two variables. The value of r always lies between -1 and 1, with negative values indicating a negative relationship and positive values indicating a positive relationship. Negative = decreasing, positive = increasing.
White noise has an autocorrelation of 0.