Time series forecasting

In Part 1 of this two-part series, we described the overall approach used by RapidMiner for time series forecasting: converting a time series into a pseudo cross-sectional dataset, which makes it possible to apply any of the standard predictive analytics algorithms to forecast future points.

In this part, we apply the approach to an actual time series from a manufacturing business that uses cost modeling and forecasting to improve its operations. The dataset consists of historical monthly sales, from July 2004 to July 2012, for a commodity manufactured by the company. In this exercise, we separate the last seven months of data (January to July 2012) into a test set and use the remaining months to train the model.

We will show the advantage of using machine learning algorithms for forecasting problems compared to conventional (averaging- or smoothing-type) forecasting algorithms. The process consists of the following three steps and is also explained nicely in this video by Thomas Ott on S&P 500 data.

Step 1: Set up Windowing 

Step 2: Train the model with several different algorithms

Step 3: Evaluate the forecasts 

1. Set up Windowing

After separating the data into training and testing parts, the first step is to set up windowing in RapidMiner. The process window below shows the necessary operators. Every time series has a date column, and it must be treated with special care: RapidMiner must be told that one of the columns in the dataset is a date and should be treated as an "id". This is accomplished by the "Set Role" operator. If the input data contains multiple commodities, you may also want to use "Select Attributes" to pick the one you want to forecast; in this case, we select only "commodity A" among the several commodities in the data. The final operator is "Windowing" (you may need to install the Series extension first; go to "Help -> Manage Extensions" to check).

Time series windowing in RapidMiner

The main items to consider in Windowing are the following:

Horizon: determines how far out to make the forecast. If the window size is 3 and the horizon is 1, then the value in the 4th row of the original time series becomes the label of the first example.

Window size: determines how many "attributes" are created for the cross-sectional data. Each row of the original time series that falls within the window becomes a new attribute.

Step size: determines how far the window advances between examples.
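To make the three parameters concrete, here is a minimal sketch of the windowing transform in Python with pandas. This is an illustration of the idea, not RapidMiner's implementation; the function name and the toy series are invented for the example.

```python
import pandas as pd

def make_windows(series, window_size=3, horizon=1, step_size=1):
    """Convert a time series into a cross-sectional dataset: each row
    holds `window_size` consecutive values as attributes, plus a label
    taken `horizon` steps past the end of the window."""
    rows = []
    last_start = len(series) - window_size - horizon
    for start in range(0, last_start + 1, step_size):
        window = list(series[start:start + window_size])
        label = series[start + window_size + horizon - 1]
        rows.append(window + [label])
    cols = [f"t-{window_size - i}" for i in range(window_size)] + ["label"]
    return pd.DataFrame(rows, columns=cols)

sales = [10, 12, 13, 15, 14, 16, 18]
windowed = make_windows(sales, window_size=3, horizon=1)
# With window size 3 and horizon 1, the 4th value (15) becomes
# the label of the first example, matching the description above.
```

Increasing the step size simply skips rows, producing fewer (less overlapping) training examples.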

2. Train the model

Once the windowing is done, the real power of predictive analytics algorithms can be deployed using the "Sliding Window Validation" operator. It works much like the standard "Split Validation" operator in that it is a nested operator. The first window inside the nesting lets you use any available machine learning algorithm, such as regression, neural networks, or support vector machines. This is where the advantage of using RapidMiner comes into play: because the time series has been encoded and transformed into a cross-sectional dataset, we can use any of these powerful machine learners to improve our prediction accuracy.
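The sliding-window (walk-forward) validation scheme can be sketched in plain Python with scikit-learn. This is an assumption-laden illustration of the general technique, not RapidMiner's operator: the helper name and the width/step parameters are invented, and LinearRegression stands in for whichever learner you nest inside the operator.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def sliding_window_validation(X, y, train_width, test_width, step):
    """Walk-forward validation: train on one window of consecutive
    examples, test on the window immediately after it, then slide
    both windows forward by `step` and repeat."""
    errors = []
    start = 0
    while start + train_width + test_width <= len(X):
        tr = slice(start, start + train_width)
        te = slice(start + train_width, start + train_width + test_width)
        model = LinearRegression().fit(X[tr], y[tr])
        pred = model.predict(X[te])
        errors.append(np.mean(np.abs(pred - y[te])))  # MAE on this fold
        start += step
    return float(np.mean(errors))

# Toy lagged data with an exact linear relationship, so the
# validation error should be near zero.
X = np.arange(40, dtype=float).reshape(-1, 1)
y = X.ravel() + 1.0
mae = sliding_window_validation(X, y, train_width=10, test_width=5, step=5)
```

The key property, which RapidMiner's operator preserves, is that the test window always lies strictly after the training window in time, so the model is never evaluated on data it has already seen.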

RapidMiner: Time series sliding window validation

As usual, the second window of the nesting is used for “Apply Model” and “Performance (Forecasting)”. An initial run with a Neural Net gives us about 80% prediction trend accuracy.

Don't worry much about the Sliding Window Validation parameters for now; they will be adjusted in the next step.

3. Evaluate the forecasts

Replicate the above series of operators on the test set created before Step 1, and connect them as shown below to apply the model generated in Step 2 (using Apply Model, of course!) to the test set. Make sure you select the correct CSV file for the Read CSV (2) operator! You don't need to change any settings for the other operators as long as the headers in the test and training sets are identical.

Time series forecasting with general ML models

The easiest way to evaluate the forecasts is to plot the labeled data (the "lab" output port) from the Apply Model (2) operator.

The main point about any time series forecasting is that one should not place too much emphasis on "point" forecasts. A complex quantity like a stock price, or sales demand for a manufactured good, is influenced by too many factors; claiming that any forecast will predict the exact value of a stock two days in advance, or the exact demand three months in advance, is unrealistic. What is far more valuable is that recent undulations in the price or demand can be effectively captured and predicted. This is where RapidMiner excels, as seen in the series plot below. Finally, switching learners to see if accuracy can be improved is also easy.

Time series forecasting: point predictions vs actual values
Time series forecasting: Trend predictions

The blue line is the actual demand for commodity A from January to June 2012, and the red line is the predicted demand. Even though the point forecasts differ, the trends are almost identical.
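One simple way to quantify "the trends are almost identical" is direction-of-change accuracy: the fraction of periods where the forecast moves up or down together with the actual series. The sketch below assumes this definition of trend accuracy (the source does not spell out how its ~80% figure is computed), and the demand numbers are made up for illustration.

```python
import numpy as np

def trend_accuracy(actual, predicted):
    """Fraction of consecutive periods in which the forecast moves in
    the same direction (up or down) as the actual series."""
    direction_actual = np.sign(np.diff(actual))
    direction_pred = np.sign(np.diff(predicted))
    return float(np.mean(direction_actual == direction_pred))

# Hypothetical monthly demand (actual vs. forecast) for six periods.
actual    = [100, 110, 105, 115, 120, 118]
predicted = [102, 112, 108, 113, 111, 116]
acc = trend_accuracy(actual, predicted)
# Four of the five month-over-month moves could match or miss;
# here 3 of 5 directions agree, so acc is 0.6.
```

A model can score well on this metric even when its point forecasts are consistently off by a fixed amount, which is exactly the distinction the paragraph above draws.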

PS: If your first trend prediction does not look impressive, tune the Sliding Window Validation parameters. You can even use the "Optimize Parameters" operators in RapidMiner if you prefer.

Originally posted on Tue, Sep 11, 2012 @ 09:10 AM


simafore.ai