Literature Review on Volatility

Literature Review
What is Volatility?
Volatility is defined as the spread of all likely outcomes of an uncertain variable (Poon, 2005). Statistically, it is often measured as the sample standard deviation (as seen below), but it can also be measured by variance.

σ = sqrt( (1 / (T − 1)) · Σ_{t=1..T} (r_t − μ)² )

Where r_t = return on day t, and
μ = average return over the T-day period.
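As a simple illustration, the estimator above can be computed directly from a return series; the daily returns below are illustrative values, not taken from any cited dataset.

```python
import math

def realized_volatility(returns):
    """Sample standard deviation of a return series.

    sigma = sqrt( (1 / (T - 1)) * sum_{t=1..T} (r_t - mu)^2 ),
    where mu is the average return over the T-day period.
    """
    T = len(returns)
    mu = sum(returns) / T
    variance = sum((r - mu) ** 2 for r in returns) / (T - 1)
    return math.sqrt(variance)

# Five days of illustrative daily returns.
daily_returns = [0.01, -0.02, 0.015, 0.0, -0.005]
sigma = realized_volatility(daily_returns)
```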
A common misconception is to equate volatility with risk. However, whilst volatility is related to risk, it is not the same. Risk represents an undesirable outcome, whilst volatility measures uncertainty, which could equally arise from a positive outcome. Furthermore, volatility as a measure of the spread of a distribution contains no information on its shape; this is another reason volatility is an imperfect measure of risk. The sole exception is a normal or lognormal distribution, where the mean and standard deviation are appropriate statistics for the whole distribution (Poon, 2005).

In dealing with volatility as a subject matter in financial markets, the focus is on the spread of asset returns. High volatility is generally undesirable as it indicates that security values are unreliable and capital markets are not functioning efficiently (Poon, 2005; Figlewski, 1997). Financial market volatility has been the subject of much research, and the number of studies has continued to rise since the original survey by Poon and Granger (2003) identified 93 papers in the field. A whole host of drivers of volatility has been explored (including political events, macroeconomic factors and investor behavior) in an attempt to better capture volatility and decrease risk (Poon, 2005).
This study will add to that list, hoping to contribute something novel to the field by scrutinizing the appropriateness of different volatility models for different country indexes.
The Importance of Volatility Forecasting

Investment strategies, Portfolio Optimization and Asset Valuation

Volatility, taken as uncertainty, is an important component in a wide range of financial applications, including investment strategies for trading or hedging, portfolio optimization and asset price valuation. The Markowitz mean-variance portfolio theory (Markowitz, 1952), the Capital Asset Pricing Model (Sharpe, 1964) and the Sharpe ratio (Sharpe, 1966) signify three cornerstones of optimal decision-making and performance measurement, advocating a focus on the risk-return relationship with volatility taken as a risk proxy. Since investors and portfolio managers face limits on the risk they can bear, accurate forecasts of asset price volatility over long horizons are necessary to reliably assess investment risk. Such forecasts allow investors to be better informed and to hold stocks for longer, rather than constantly reallocating their portfolios in reaction to price movements, an often expensive exercise (Poon and Granger, 2003). In terms of stock price valuation, French et al. (1987) analyse NYSE common stocks for the period 1928-1984 and find the expected market risk premium to be positively related to the predictable volatility of stock returns, a finding further strengthened by the inverse relationship between stock market returns and the unexpected change in the volatility of stock returns.

Derivatives pricing

Volatility is a key element in modern option pricing theory, enabling estimation of the fair value of options and other derivative instruments. According to Poon and Granger (2003), the trading volume of derivative securities had quadrupled in the years leading up to their research; since then this growth has accelerated, with the global derivatives market now estimated at around $544 trillion excluding credit default swaps and commodity contracts (BIS, 2017). Of the five input variables in the Black-Scholes model (Black and Scholes, 1973), namely stock price, strike price, time to maturity, the risk-free interest rate and expected volatility over the option's life, volatility is crucially the only one that is not directly observable and must be forecast (Figlewski, 1997). Implied volatility and realized volatility can be computed from observed market prices for options and from historical data respectively. The former is attractive for requiring little input data and, in some empirical studies, delivering excellent results compared with time series models that use only historical information; it is deficient, however, in lacking a firm statistical basis, and different strike prices yield different implied volatilities, creating confusion over which implied volatility to use (Tse, 1991; Poon, 2005). Lengthening maturities of derivative instruments also weaken the assumption that volatility realized in the recent past is a fairly reliable proxy for volatility in the near future (Figlewski, 1997). With recent developments, derivatives written on volatility itself can now be purchased, with volatility as the underlying asset, further necessitating volatility forecasting practices (Poon and Granger, 2003).
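The inversion discussed here can be sketched in a few lines: price a European call with the Black-Scholes formula, then recover the implied volatility by bisection, exploiting the fact that the call price is increasing in volatility. The input values are illustrative.

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes (1973) price of a European call option."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def implied_volatility(price, S, K, T, r, lo=1e-4, hi=5.0, tol=1e-8):
    """Invert Black-Scholes for sigma by bisection (price is monotone in sigma)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if black_scholes_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Round-trip: price an at-the-money call at 20% volatility, then recover it.
price = black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.20)
iv = implied_volatility(price, S=100, K=100, T=1.0, r=0.05)
```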

Financial Risk Management

Volatility forecasting plays a significant role in financial risk management in the finance and banking industries. The practice aids in the estimation of value-at-risk (VaR), a measure introduced by the Basel Committee in 1996 through an amendment to the Basel Accords (an international standard for minimum capital requirements among international banks to safeguard against various risks). Whilst many risks are examined within the Accords, volatility forecasting is most relevant for market risk and VaR. However, calculating VaR is necessary only if a bank chooses to adopt its own internal proprietary model for calculating market risk related capital requirements. In doing so, the bank gains greater flexibility in specifying model parameters, on condition that the internal model is regularly backtested. Apart from banks, other financial institutions may also use VaR voluntarily for internal risk management purposes (Poon and Granger, 2003; Poon, 2005). Christoffersen and Diebold (2000) do, however, contend that the relevance of volatility forecasting for financial risk management has limits, arguing that reliable forecastability depends on whether the horizon of interest is short-term or long-term (taken to be more than 10 or 20 days), with the practice deemed more relevant for the former than the latter.
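A minimal parametric sketch of a VaR calculation of this kind, under a normal-returns assumption; the portfolio value, confidence level and horizon below are illustrative and do not represent any bank's actual internal model.

```python
import math

def parametric_var(portfolio_value, sigma_daily, horizon_days=10, z=2.326):
    """Parametric VaR under a normal-returns assumption.

    z = 2.326 corresponds to a 99% confidence level; daily volatility is
    scaled to the horizon by the square-root-of-time rule.
    """
    return portfolio_value * z * sigma_daily * math.sqrt(horizon_days)

# A $1m portfolio with a 1% forecast daily volatility, over a 10-day horizon.
var_99 = parametric_var(1_000_000, 0.01, horizon_days=10)
```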

Policymaking

Financial market volatility can have wide-reaching consequences for economies. For example, large recessions create ambiguity and hinder public confidence. To counter such negative impacts and disruptions, policymakers use market estimates of volatility as a means of identifying the vulnerability of financial markets, equipping them with more reliable and complete information with which to respond with appropriate policies (Poon and Granger, 2003). The Federal Reserve of the United States is one such entity that incorporates the volatility of various financial instruments into its monetary policy decision-making (Nasar, 1991). Bernanke and Gertler (2000) explore the degree to which asset price volatility should affect monetary policy decision-making, based on a side-by-side comparison of U.S. and Japanese monetary policy. They find that inflation-targeting is desirable, but that monetary policy decisions based on changes in asset prices should only be made to the extent that such changes help to forecast inflationary or deflationary pressures. Meanwhile, Bomfim (2003) investigates the relationship between monetary policy and stock market volatility from the other direction: interest rate policy decisions that carry an element of surprise appear to increase short-run stock market volatility significantly, with positive surprises having a greater effect than negative surprises.
Empirical stylized facts of asset returns and volatility
Any attempt to model volatility appropriately must be made with an understanding of the common, recurring set of properties identified in numerous empirical studies across financial instruments, markets and time periods. Contrary to the event-based theory, which hypothesizes that different assets respond differently to different economic and political events, empirical studies show that different assets do in fact share some generalizable, qualitative statistical properties. Volatility models should thus seek to capture these features of asset returns and volatility so as to enhance the forecasting process; herein lies the challenge (Cont, 2001; Bollerslev et al., 1994). Presented below are some of these stylized facts, along with the corresponding empirical studies that have contributed to the evolving literature aimed at improving volatility-forecasting practices, and which this study will also look to capture.
Return Distributions
Stock market returns are not normally distributed, and the normal distribution is therefore unsuitable for modeling returns, according to Mandelbrot (1963) and Fama (1965). Returns are approximately symmetrical but can display negative skewness and, significantly, leptokurtic features (excess kurtosis, with heavier tails and taller, narrower peaks than found in a normal distribution) that see large moves occur with greater frequency than under a normal distribution (Sinclair, 2013). Cont (2001) asserts that these large gains and losses are asymmetric by nature, with the scale of downward movements in stock index values dwarfing upward movements. He further argues that the introduction of GARCH-type models to counter the effects of volatility clustering can reduce the heaviness of tails in the residual time series to some small extent. However, as GARCH models can at times struggle to fully incorporate the heavy-tail features of returns, this has necessitated the use of alternative distributions such as the Student's t-distribution employed in Bollerslev (1987). Alberg et al. (2008) apply a skewed version of this distribution to various models, with the EGARCH model delivering the best performance in forecasting the volatility of Tel Aviv stock indices. Cont (2001) also highlights an important consideration in the notion of aggregational Gaussianity: as the time scale (t) over which returns are calculated increases, the distribution of returns appears more normally distributed.
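Excess kurtosis, the statistic behind these leptokurtic features, can be estimated directly from a return series; the series below is constructed for illustration (mostly small moves with a few large ones) rather than drawn from any cited dataset.

```python
def excess_kurtosis(returns):
    """Sample excess kurtosis: positive values indicate heavier tails than
    a normal distribution (whose excess kurtosis is 0)."""
    n = len(returns)
    mu = sum(returns) / n
    m2 = sum((r - mu) ** 2 for r in returns) / n
    m4 = sum((r - mu) ** 4 for r in returns) / n
    return m4 / m2 ** 2 - 3.0

# Mostly small moves punctuated by a few large ones: a leptokurtic series.
fat_tailed = [0.001] * 40 + [-0.001] * 40 + [0.05, -0.06, 0.04, -0.05]
kurt = excess_kurtosis(fat_tailed)
```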
Leverage effect/Asymmetric volatility
In most markets, volatility and returns are negatively correlated (Cont, 2001). First elucidated by Black (1976) and particularly prevalent for stock indices, volatility tends to increase when the stock price declines. The justification is that a decline in equity price increases a company's debt-to-equity ratio and consequently its risk and volatility (Figlewski and Wang, 2000; Engle and Patton, 2001). Importantly, this relationship is asymmetric, with negative returns having a more marked effect on volatility than positive returns, as documented by Christie (1982) and Schwert (1989). However, they also argue that the leverage effect alone cannot explain all of the change in volatility, with Christie (1982) incorporating the interest rate as another element with a partial effect. Hence, whilst ARCH (Engle, 1982) and GARCH (Bollerslev, 1986) models do well to account for volatility clustering and leptokurtosis, their symmetric treatment of shocks fails to account for the leverage effect. In response, various asymmetric modifications of GARCH have been developed, the most significant being Exponential GARCH (EGARCH; Nelson, 1991) and GJR (Glosten et al., 1993). Other models such as GARCH-in-Mean have also endeavored to capture the leverage effect along with the risk premium effect, another concept theorized to contribute to volatility asymmetry by studies such as Schwert (1989) (Engle and Patton, 2001).
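The asymmetry the GJR model introduces can be sketched with its conditional variance recursion; the parameter values below are illustrative, not estimated from any dataset.

```python
def gjr_garch_variance(returns, omega, alpha, beta, gamma, h0):
    """Conditional variance recursion of the GJR model (Glosten et al., 1993):

        h_t = omega + (alpha + gamma * I[r_{t-1} < 0]) * r_{t-1}^2 + beta * h_{t-1}

    The indicator term lets negative returns raise next-period variance more
    than positive returns of the same magnitude: the leverage effect.
    """
    h = [h0]
    for r in returns[:-1]:
        leverage = gamma if r < 0 else 0.0
        h.append(omega + (alpha + leverage) * r ** 2 + beta * h[-1])
    return h

# Same-sized positive and negative shocks: the negative one lifts variance more.
h_pos = gjr_garch_variance([0.03, 0.0], omega=1e-6, alpha=0.05, beta=0.90, gamma=0.10, h0=1e-4)
h_neg = gjr_garch_variance([-0.03, 0.0], omega=1e-6, alpha=0.05, beta=0.90, gamma=0.10, h0=1e-4)
```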
Volatility Distribution
The distribution of volatility is taken to be approximately log-normal, as postulated in studies such as Andersen et al. (2001). More significant than the precise distribution is its high positive skewness, which indicates that volatility spends longer in low states than in high states (Sinclair, 2013).
Volatility-Volume correlation
All measures of volatility and trading volume are highly positively correlated (Cont, 2001). Lee and Rui (2002) show this relationship to be robust; what is more complex is determining the causality between the two, and strong arguments can be made either way. For example, Brooks (1998) uses linear and non-linear Granger causality tests and finds the relationship to be stronger from volatility to volume than the converse. He concludes that, for forecasting accuracy, predicting volume using volatility is more productive than forecasting stock index volume and using such forecasts in trading. According to Gallant et al. (1992), this relationship is also closely linked with the leverage effect, and incorporating lagged volume weakens that effect considerably.
Non-Constant Volatility
Volatility is not constant, and its changing nature occurs in a particular manner; Merton (1980) was critical of researchers who failed to incorporate this feature in their models. Firstly, volatility is mean reverting. Indeed, LeBaron (1992) found a strong negative relationship between volatility and autocorrelation for stock indices in the United States. Secondly, volatility clusters, a phenomenon first noted by Mandelbrot (1963) that allows a good estimate of future volatility to be formed from current volatility. Other studies such as Chou (1988) have also empirically demonstrated the existence of clustering. Mandelbrot (1963) wrote, "large changes tend to be followed by large changes of either sign, and small changes tend to be followed by small changes". In other words, a turbulent day of trading usually comes after another turbulent trading day, whilst a calm period will usually be followed by another calm period. Importantly, the phenomenon is not exclusive to one asset class and can be seen in stock indices, commodities and currencies. It also tends to be more pronounced in developed than in emerging markets (Taylor, 2008; Sinclair, 2013). Engle and Patton (2001) argue that volatility clustering indicates volatility goes through phases, whereby periods of high volatility eventually give way to more normal volatility, with the contrary also holding. Engle's (1982) landmark paper incorporated these features of volatility persistence in his ARCH model, which accounts for time-varying, non-constant volatility that persists in high or low states.
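Volatility clustering can be reproduced with a short simulation of Engle's ARCH(1) process, in which a large shock feeds into next period's variance so that big moves arrive in bunches; the parameter values and seed are illustrative.

```python
import math
import random

def simulate_arch1(n, omega=1e-5, alpha=0.8, seed=42):
    """Simulate Engle's (1982) ARCH(1) process:

        h_t = omega + alpha * r_{t-1}^2,   r_t = sqrt(h_t) * z_t,  z_t ~ N(0, 1)

    Because each shock enters the next period's variance, turbulent days
    tend to follow turbulent days: volatility clustering.
    """
    rng = random.Random(seed)
    returns = []
    r = 0.0
    for _ in range(n):
        h = omega + alpha * r ** 2
        r = math.sqrt(h) * rng.gauss(0.0, 1.0)
        returns.append(r)
    return returns

simulated = simulate_arch1(2000)
```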

A FinTech Strategy for Momentum and Volatility Effect in Emerging Markets Stocks

The project
Financial technology (FinTech) can be described as "computer programs and other technologies used to support or assist banking and financial services" (Hassnian, 2017). More broadly, FinTech describes new technology that improves and automates the delivery and use of financial services. FinTech technologies such as machine learning and artificial intelligence are used for behavioural analytics in financial decision-making and are transforming the business of finance, creating novel services and products.
The purpose of the research is to investigate the relationship between the momentum effect, the volatility effect and emerging markets by building a FinTech model (artificial intelligence / machine learning) and comparing it with a traditional statistical model (hypothesis testing).
The project will address the new FinTech strategy for momentum and volatility effects in emerging market stocks from both a theoretical and an empirical point of view. Methodologically, it will use standard parametric and semi-parametric techniques alongside programming implementations of artificial intelligence and machine learning algorithms in Python, R and cloud-based Linux environments. An empirical analysis based on techniques such as random forest regression will then test the fit of the strategies. Although researchers have attempted to analyse the momentum phenomenon and volatility using traditional statistical regression models, some of their conclusions are unconvincing due to a lack of predictive ability. This project therefore aims to supply that predictive ability by using FinTech strategy models as predictors to enhance trading decision-making. The research is organized as follows: 1) the theoretical background of the study, covering momentum effects, volatility and FinTech strategies in emerging markets; 2) presentation and preparation of the data, including descriptive statistics and the artificial intelligence / machine learning strategy; 3) the methodology adapted for the study; 4) results; 5) discussion; and 6) conclusion.
Research question
The questions proposed for the research project:
– How could the prediction of momentum and volatility effects in emerging market stocks challenge the Efficient Market Hypothesis?
– How efficient are FinTech strategies for the volatility and momentum effects as a constructed prediction model?
– Which model is more efficient for emerging market stocks: the statistical model or the proposed FinTech strategy?
Objectives
Objective 1. To examine the indicators that drive the momentum and volatility effects in emerging market stocks.
Objective 2. To determine the relationship between the momentum and volatility effects.
Objective 3. To identify the best-performing FinTech strategy model within emerging market stocks and compare it with the traditional statistical regression model.
Objective 4. To develop a practical and robust framework for adopting the best FinTech strategy model within emerging market stocks.
Literature review
A market anomaly refers to a difference between a stock's performance and the price trajectory assumed by the efficient market hypothesis (EMH). The EMH does not always hold true, as the appearance of financial market anomalies has demonstrated. The momentum effect is probably the most difficult anomaly to explain and represent. Volatility is associated with uncertainty and has implications for the market.

Momentum and volatility effects are an interesting phenomenon in stock markets. The momentum effect states that stocks which have performed well in the past will continue to perform well, while stocks which have performed poorly in the past will continue to perform badly. Evidence for momentum has been found across international equities in developed markets and numerous other asset classes (Asness, Moskowitz, & Pedersen, 2013). Volatility is a statistical measure of the degree of variation in an instrument's trading price observed over a period; the more dramatic the price swings, the higher the level of volatility, and vice versa. According to Li and Tam (2018), the momentum effect means that stocks that have performed well will probably continue, in the future, to outperform those that have performed poorly in the past. Relevant studies have been conducted on this topic; however, researchers report that stock markets exhibit varying degrees of momentum, reversal effects and volatility. Muga and Santamaría (2007) observed the momentum effect in Latin American emerging markets.
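A minimal sketch of the momentum signal described above: score each stock by its trailing return and rank winners first. The tickers and price histories below are hypothetical.

```python
def momentum(prices, lookback):
    """Past cumulative return over `lookback` periods: the classic momentum signal."""
    return prices[-1] / prices[-1 - lookback] - 1.0

def rank_by_momentum(price_histories, lookback):
    """Sort tickers by trailing momentum, past winners first."""
    scores = {t: momentum(p, lookback) for t, p in price_histories.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical price histories for three emerging-market tickers.
histories = {
    "AAA": [100, 104, 110, 118],   # steady winner
    "BBB": [100, 99, 97, 95],      # steady loser
    "CCC": [100, 101, 100, 102],   # roughly flat
}
ranking = rank_by_momentum(histories, lookback=3)
```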
The Efficient Market Hypothesis (EMH) has been challenged by the volatility and momentum effects, because investors may gain an extra advantage if they can predict the movement of the market. However, studies have concluded that such predictions are highly dependent on human experience of a specific market.
With advances in new technologies such as Artificial Intelligence (AI), new methods can be used as alternative statistical tools to predict these phenomena and to compare their efficiency and accuracy. Machine learning is an application of AI that studies the algorithms and statistical models computer systems use to perform specific tasks, relying on patterns and inference instead of explicit instructions. Machine learning is capable of automatically recognizing potentially useful patterns in financial data (Li and Tam, 2018).
According to Lingaraja (2014), emerging markets consist of retail investors and other stakeholders who expect higher returns on their investments in exchange for taking higher risk. Morgan Stanley Capital International (MSCI) groups emerging markets into three categories: the Americas; Europe, the Middle East and Africa; and Asia. Investments in emerging markets are treated as highly volatile and are therefore seen as having great growth potential.
Research techniques
The methodology to be implemented in this project will be developed and explained in more detail during the project itself; a brief summary of the methods planned for the study is given here.
Breiman (2001) introduced the random forest (RF) algorithm as an ensemble approach that can also be thought of as a form of nearest-neighbour predictor. A random forest aggregates many decision trees (DT), where a decision tree is an algorithm that uses a set of binary rules to predict a target class or value. Given training data, a decision tree learns decision rules inferred from the data features during the training process.
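A toy sketch of the bagging idea behind random forests, using one-split decision stumps in place of full trees to stay short: each stump is fitted to a bootstrap resample and the ensemble predicts by majority vote. The data, parameter values and function names are illustrative, not Breiman's actual algorithm (which also samples features at each split).

```python
import random

def best_stump(X, y):
    """Fit a one-split decision tree (stump) minimising misclassifications."""
    best = None
    for f in range(len(X[0])):
        for threshold in sorted({row[f] for row in X}):
            for left_label in (0, 1):
                preds = [left_label if row[f] <= threshold else 1 - left_label
                         for row in X]
                errors = sum(p != t for p, t in zip(preds, y))
                if best is None or errors < best[0]:
                    best = (errors, f, threshold, left_label)
    _, f, threshold, left_label = best
    return lambda row: left_label if row[f] <= threshold else 1 - left_label

def random_forest(X, y, n_trees=15, seed=0):
    """Bagging: each stump sees a bootstrap resample; predict by majority vote."""
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        stumps.append(best_stump([X[i] for i in idx], [y[i] for i in idx]))
    return lambda row: int(sum(s(row) for s in stumps) > n_trees / 2)

# Toy data: the label is 1 when the first feature exceeds 0.5.
X = [[0.1, 3.0], [0.2, 1.0], [0.4, 2.0], [0.6, 0.5], [0.8, 2.5], [0.9, 1.0]]
y = [0, 0, 0, 1, 1, 1]
predict = random_forest(X, y)
```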
Support vector machines (SVM) are supervised learning models with associated learning algorithms that analyse data for classification and regression analysis, and are known as one of the more powerful machine learning algorithms.
The multilayer perceptron neural network (MLP) is a class of feedforward artificial neural network, sometimes referred to as a "vanilla" neural network, especially when it has a single hidden layer. In this project, various topologies will be tried to find one that fits the research methodology.
The long short-term memory neural network (LSTM) is an artificial recurrent neural network (RNN) architecture used in deep learning. Unlike standard feedforward neural networks, LSTM has feedback connections, and it is expected to be a suitable algorithm for financial prediction.
In this project, we will investigate emerging market stocks from 01/01/2015 to 31/12/2019 to evaluate momentum and volatility prediction using FinTech strategies. The data will be cleaned carefully to remove anomalous values before the performance of the model constructed for the research is evaluated. The project will be carried out using programs with advanced algorithmic functions, such as Python and R, along with other tools as required.
The research will be carried out over three years from October 2020 at City, University of London (CASS Business School). In the first year, the theoretical framework will be developed. In the second year, a prediction model for the momentum effect and volatility in emerging markets will be constructed using FinTech strategies such as artificial intelligence / machine learning; in addition, data will be collected and cleaned so that it can be fed into the model. In the third year, the results and conclusions of the research will be presented. In parallel, papers on the topic and on finance more broadly will be written to support the research, and conferences will be attended to present the work and learn about new findings in finance that could contribute to the project. Some teaching assistant duties will also be undertaken to enrich the academic work of the project.
Although little research has been conducted recently in this area of finance, the project is important because it will tackle the momentum effect and volatility in emerging market stocks using FinTech strategies, opening a new horizon for predicting and analysing the market more accurately and building models that combine different artificial intelligence / machine learning techniques to handle intricate markets. It could thereby contribute to academia and to society in the finance field by improving the efficiency and quality of financial services, cutting costs and, in time, establishing new FinTech scenarios and approaches. Financial technology is an important topic today because it contributes new findings in finance and creates knowledge that will help future research.
References and bibliography.

Asness, C., Moskowitz, T., and Pedersen, L. 2013. Value and Momentum Everywhere. The Journal of Finance, 68(3), 929–985.
Breiman, Leo. 2001. Random Forests. Machine Learning 45: 5–32.
Hassnian, Ali. 2017. International Conference on Business, Economics and Finance (ICBEF), University Brunei Darussalam.
Lingaraja, Kasilingam. 2014. The Stock Market Efficiency of Emerging Markets: Evidence from Asia Region. Department of Commerce and Financial Studies, Bharathidasan University, India, 158.
Muga, L., and Santamaría, R. 2007. The Momentum Effect in Latin American Emerging Markets. Emerging Markets Finance and Trade, 43, 24–45.
Stulz, R.; Brown, G.; Bartram, S. 2011. Why are U.S. stocks more volatile? Charles A. Dice Center for Research in Financial Economics. Fisher College of Business.
Zhixi Li and Vincent Tam. 2018. A Machine Learning View on Momentum and Reversal Trading. Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam Road, Hong Kong, China.