Learn advanced forecasting models in a practical course with the Python programming language, using S&P 500® Index ETF historical price data. It explores the main concepts from proficient to expert level, which can help you achieve better grades, develop your academic career, apply your knowledge at work, or do advanced investment management or sales forecasting research. All of this while exploring the wisdom of the best academics and practitioners in the field.
Become an Advanced Forecasting Models Expert and Put Your Knowledge into Practice in this Practical Course with Python
Learning advanced forecasting models is indispensable for finance careers in areas such as portfolio management and risk management. It is also essential for academic careers in advanced applied statistics, econometrics and quantitative finance. And it’s necessary for advanced sales forecasting research.
But because the learning curve can become steep as complexity grows, this course helps by leading you step by step through advanced forecast modelling with S&P 500® Index ETF historical price data, so you can work with greater effectiveness.
Content and Overview
This practical course contains 46 lectures and 7 hours of content. It is designed for an advanced forecasting models knowledge level; a basic understanding of the Python programming language is useful but not required.
First, you’ll learn how to read S&P 500® Index ETF historical price data and prepare it for advanced forecasting model operations by installing the related packages and running code in the Python PyCharm IDE.
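As a minimal sketch of this first step (assuming pandas and NumPy, which the course's own setup may or may not match): in the course the prices come from a historical data file, but here a short price series is simulated so the example is self-contained.

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for S&P 500® Index ETF historical prices:
# simulate 250 business days of prices from small daily log returns.
rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, 250)          # daily log returns
prices = pd.Series(100.0 * np.exp(np.cumsum(returns)),
                   index=pd.bdate_range("2020-01-01", periods=250),
                   name="spy_close")

# First-differencing the log prices recovers the daily log returns,
# the usual input for the forecasting models covered later.
log_returns = np.log(prices).diff().dropna()
```

With real data, the simulated series would simply be replaced by a column read from the downloaded price file.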
Then, you’ll define Box-Jenkins autoregressive integrated moving average (ARIMA) models. Next, you’ll identify an ARIMA model’s integration order through the first-order trend-stationary augmented Dickey-Fuller unit root test on the level and differentiated time series. After that, you’ll identify its autoregressive and moving average orders through the autocorrelation and partial autocorrelation functions. For ARIMA models, you’ll define the random walk with drift and the differentiated first-order autoregressive models.

Later, you’ll define seasonal ARIMA models. You’ll identify their seasonal integration order through a first-order seasonal stationary deterministic test on the level and seasonally differentiated time series, and their seasonal autoregressive and seasonal moving average orders through the autocorrelation and partial autocorrelation functions. For seasonal ARIMA models, you’ll define the seasonal random walk with drift and the seasonally differentiated first-order autoregressive models.

After that, you’ll select the non-seasonal or seasonal ARIMA model with the lowest information loss criteria, defined as the Akaike and Schwarz Bayesian information criteria. Finally, you’ll evaluate the ARIMA models’ forecasting accuracy through scale-dependent metrics: the mean absolute error and the root mean squared error.
Next, you’ll define generalized autoregressive conditional heteroscedasticity (GARCH) models. Then, you’ll identify the need for GARCH modelling through the second-order stationary Engle autoregressive conditional heteroscedasticity test on the ARIMA model’s squared residuals or forecasting errors. After that, you’ll identify the GARCH model’s autoregressive and moving average orders through the autocorrelation and partial autocorrelation functions. Later, you’ll define ARIMA models whose residuals or forecasting errors are assumed Gaussian or normally distributed, with Bollerslev simple, Nelson exponential, or Glosten-Jagannathan-Runkle threshold GARCH effects. For GARCH models, you’ll define the random walk with drift and the differentiated first-order autoregressive models. Then, you’ll evaluate the GARCH models’ forecasting accuracy.
After that, you’ll define non-Gaussian GARCH models. Next, you’ll identify the need for non-Gaussian GARCH modelling through the multiple-order stationary Jarque-Bera normality test on the standardized residuals or forecasting errors of the ARIMA and GARCH model with the highest forecasting accuracy. Then, you’ll define ARIMA models whose residuals or forecasting errors are assumed Student-t distributed, with Bollerslev simple, Nelson exponential, or Glosten-Jagannathan-Runkle threshold GARCH effects. Later, you’ll evaluate the non-Gaussian GARCH models’ forecasting accuracy. Finally, you’ll evaluate whether the standardized residuals or forecasting errors of the ARIMA and non-Gaussian GARCH model with the highest forecasting accuracy meet the strong white noise modelling requirement.