Essay: Risk Measurements of the Economy

Essay details:

  • Subject area(s): Economics essays
  • Published: July 24, 2019
  • Words: 3,201 (approx)


Every new method that has emerged in risk measurement was developed by improving the weak aspects of previous models. Researchers and investors discovered those weaknesses through past experience and economic crises in financial markets. For example, in the 1970s, fluctuations in interest rates caused high inflation, which resulted in economic stagnation in the market. Also, on 19 October 1987 (Black Monday), stock markets around the world crashed by a significant margin (around 23% in the United States) because of program trading, illiquid markets, and excessive valuations. These events showed investors and academics that conventional risk management techniques were not sufficient to predict an emerging crisis. Following the Black Monday crash, at the beginning of the 1990s, these judgments were confirmed by the unexpected bankruptcies of large investment and finance organizations such as Barings Bank and Orange County. In order to understand the Value at Risk method properly, this section examines the events that underlie the development of the methodology. The focus here is on both the magnitude of the losses and the events that share similar characteristics regarding the strategies applied.
When Barings Bank, one of the UK's largest banks with nearly 200 years of history, announced that it had ended its banking operations in February 1995, the community witnessed how such a large bank could be bankrupted by a single Singapore trader. As a result of the transactions made by the chief trader of Barings Futures, the bank's Singapore subsidiary, $1.3 billion was lost in derivative markets. The resulting loss completely eradicated the bank's equity and, as a consequence, the bank had to declare bankruptcy, failing to fulfill its obligations.
The process that led Barings Bank to this end began with the trader's position in stock index futures contracts on Japan's Nikkei 225 index. At that time, Barings Futures' positions on the Singapore and Osaka stock exchanges rose to about $7 billion. In the first two months of 1995, as the market declined by more than 15%, Barings Futures faced the contractual obligation to buy securities at a high price despite this drop in the market. This meant that the bulk of Barings Futures' capital was gone. Nevertheless, it continued to take positions in the market. When cash settlement was requested by the relevant stock exchange at the end of the contract, the bank had to declare that it would not be able to meet this obligation.
Since Barings was known as a conservative bank in the world financial system, its bankruptcy had a cautionary effect on financial institutions around the world. The trader responsible for the transactions above was involved with both the trading desk and the back office. In general, the function of the back office is to check that all business activities are conducted according to the rules and to ensure that trades are verified. As in any bank, this bank also had to limit the amount of capital that traders could use, and the position limits therefore had to be tightly controlled. It has also become a necessity for banks to establish a separate risk management unit that provides an additional form of control over traders.
In spite of this necessity, Barings Bank did not control the trader very well because of his successful track record. In 1994, this person contributed almost $20 million to Barings, about 20% of the bank's total profit. This meant a huge premium for both the trader and his superiors, and it can be clearly seen here why the control over the trader was weak. For the same reason, there were allegations that senior executives were aware of the risks facing the bank and that the bank had transferred $1 billion for margin payments arising from contracts entered into by this trader. In addition, an internal audit report, presented in 1994 before the bankruptcy and warning that the trader had excessive authority, was not considered by top management.
Ultimately, this event forced Barings' shareholders to bear the full loss. The company's market capitalization of $1 billion disappeared, and the value of its shares fell to zero. Barings was then purchased by the Internationale Nederlanden Group (ING), a Dutch financial services group, at the price of $1.50 per share, provided that the resulting losses were met. The trader was sentenced to a heavy prison term under Singapore law.
The Orange County case is another example of market risk. This publicly owned local fund management agency was responsible for a $7.5 billion portfolio belonging to schools, special districts, and municipal governments. In order to increase the value of the portfolio, the manager invested approximately $12.5 billion through reverse repurchase agreements, with a total of $20 billion to be repaid at the end of the four-year maturity. Since this represented an investment larger than the existing portfolio, a guarantee agreement was made with Wall Street bankers to meet the margin liabilities. This strategy provided a significant return, especially when interest rates were falling, because short-term funding costs at that time were lower than medium-term yields.
However, when market interest rates rose in February 1994, the public debt securities in the portfolio began to suffer losses. The Wall Street bankers who provided the short-term financing spread the news that the funds they had financed were suffering losses and demanded that their positions be covered. As a result, when Orange County declared that it could not meet its margin obligations, the loss after the liquidation of the securities in the portfolio exceeded $1.64 billion.
The process that dragged Orange County into bankruptcy has similarities with the Barings Bank case. The common point in these organizations is the inadequate control of fund managers. In both cases, the managers had shown great success at the beginning, increasing the welfare of their superiors. For example, when the crisis began to manifest itself a few months before the bankruptcy of Barings Bank, another $850 million was sent by top management to support the hedged position. Likewise, in the Orange County case, municipal inspectors approved $600 million in additional support. Yet a few months earlier, the municipal government had ignored warnings by the municipal treasurer that the fund manager's strategy was too risky and that the fund would probably lose $1 billion. In addition, under the US legislation and accounting standards applied, the portfolio was shown only at cost in the records, since local governments are not obliged to record the gains and losses arising from fund management activities. The audit was therefore carried out at cost, not at current prices. This created a misleading picture for both investors and managers of the risk the portfolio was facing. Investors and portfolio managers could have been more rational in their decision-making if the risk value of the portfolio had been calculated from current prices at regular intervals, for example monthly.
Effective risk management, however, requires sufficient information and data flow as well as effective control. In this respect, standard reports from the trading desk and back office, together with audit reports and reports from additional risk management systems, provide a strong safeguard against malicious managers. The robustness of this safeguard can be ensured by a good risk management system and a risk management unit that is independent of the other units within the organization.
In this context, the need for Value at Risk has emerged over the last thirty years with the increase in unusual fluctuations in exchange rates, interest rates, and product prices in the financial system, and the corresponding growth in the number of derivative instruments. This increase is directly proportional to the growth in securities trading volume and the diversification of financial opportunities, which in turn reflects the growth in foreign trade and international financial relations between companies. As a result, many companies have begun to build portfolios that include large amounts of cash and derivatives. Because of the diversity of securities involved and the increase in transaction volume, the portfolio risk of companies changes frequently and cannot be monitored clearly. All of these developments have led to the demand that a senior manager responsible for risk management be able to present a single numerical benchmark with which a portfolio manager can summarize and report the market risk faced by the portfolio. Value at Risk is one of the strong criteria developed to meet this demand.
2.1 Value at Risk
Efforts by companies to measure all the risks within their institutions as a whole started in the 1970s. Later, the resulting systems were sold to consulting firms, financial institutions, and companies that were not in a position to develop a model themselves but needed such systems. The most famous of these systems is RiskMetrics, developed by JP Morgan, which uses Value at Risk.
The Value at Risk systems developed were not all based on portfolio theory; some used the historical method and others the Monte Carlo simulation technique. JP Morgan offered RiskMetrics and its data set free of charge in November 1994. Value at Risk then became more widely accepted and used, not only by those engaged in securities but also by banks, other financial institutions, and non-financial companies.
As Value at Risk systems became widespread, they were extended beyond measuring market risk, their original objective, to include credit, liquidity, and cash flow risks. The Value at Risk method can be defined in many ways, in parallel with the diversity of studies on the subject. Here are a few definitions that point to distinctive features of the method.
• VaR measures the worst expected loss over a given horizon under normal market conditions at a given level of confidence.

• VaR models seek to measure the minimum loss (of value) on a given asset or liability over a given time period at a given confidence level (e.g., 95 percent, 97.5 percent, 99 percent).

• Value-at-Risk is a measure of the maximum potential change in value of a portfolio of financial instruments with a given probability over a pre-set horizon.

The above definitions share certain common attributes of the Value at Risk concept. These common characteristics, which point to a given time horizon, a certain probability, and a particular value, can be expressed as follows:

• The data used in VaR calculations apply to a certain time horizon. This horizon can be daily, weekly, or monthly, depending on the risk priorities of the institution calculating the VaR.

• As in other statistical methods used in risk measurement, the VaR calculation is based on a certain confidence interval, so Value at Risk figures carry a probability. The existence of a probability also points to a specific numerical, statistical, or mathematical computing process, which means that information technologies should be used to obtain VaR values.

• The use of information technologies in the VaR calculation also pioneered the development of the Value at Risk methodology, which can likewise be used to calculate other risk types such as credit and cash flow risk.

• VaR is calculated as a value rather than a coefficient. Therefore, unlike other risk measures, it shows the amount of loss that can be experienced under certain constraints.

• Above all, a distinct Value at Risk approach can be identified within risk management. It addresses and measures how VaR values can be used, how the institution should be restructured for this purpose, and how various common risk management resources will be implemented.

Value at Risk measures the expected loss based on the likelihood of specific market movements over a given period of time. With this method, financial institutions can summarize, in a single numerical criterion, the market risk they may face under unexpected market conditions. The capital requirement linked to market risk is based on VaR estimates calculated by banks and non-bank financial institutions using their own risk management models. These models have been developed to predict the time-varying distributions of portfolio returns, and VaR values are drawn from the lower tail of these estimated distributions. In other words, the VaR value is the estimate of the maximum portfolio loss that can occur over a given holding period, as determined by a certain confidence interval.

For the capital required to be held against market risk, the holding period should be considered overnight or weekly rather than several months, a year, or longer. At the same time, analyzing an investment decision over a daily or overnight holding period is difficult; VaR values can be miscalculated by using a short holding period for options (low liquidity) and a long holding period for securities (high liquidity).

Many mathematical models are applied in VaR calculation. Models based on linearity are insufficient for gap analysis as part of the pricing objective. Misstatements caused by choosing a long holding period have led to heavy losses for brokerage houses in developed countries. Because of this, testing the sensitivity of VaR models is a necessity.

Under normal market conditions, since many positions in a bank's portfolio can be liquidated within a shorter period of time, the 10-day holding period is criticized as extremely conservative. However, the 10-day standard also reflects the risks that arise from options and other positions with non-linear price features. The sensitivities of options to changes in market risk factors can increase sharply, depending on the magnitude of those changes, so a longer holding period should be selected rather than a single day. For that reason, the choice of the 10-day holding period arises from the view that the VaR estimates used to calculate the capital requirement should incorporate the impact of 10-day price movements in market risk factors.

Another problem with the holding period arises when comparing VaR values calculated over different periods. In order to compare them, or to convert results between different time periods, the series used in the calculation should be normally distributed. Under this assumption, the Basel Committee suggests the "square root of time" technique as the conversion method.
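As an illustration, the square-root-of-time conversion is a one-line scaling. The sketch below is a minimal example under the i.i.d.-normal assumption the rule relies on; the function name and figures are hypothetical.

```python
import math

def scale_var(var_one_day: float, horizon_days: int) -> float:
    """Convert a 1-day VaR to an h-day VaR with the square-root-of-time rule.

    Valid only under the assumption that returns are independent and
    normally distributed, as noted in the text."""
    return var_one_day * math.sqrt(horizon_days)

# e.g., scaling a 1-day VaR of 1.0 (in millions) to the 10-day Basel horizon
print(scale_var(1.0, 10))  # 1.0 * sqrt(10) ≈ 3.162
```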

The selection of the holding period is very important when there are extraordinary fluctuations in financial markets. The reason for this is that calculated volatility varies according to the holding period.

2.3.3 Present Value of the Portfolio
In Value at Risk calculations, one of the parameters required to calculate the risk of a portfolio is the present value of the portfolio. The amount of risk that has to be endured for a portfolio is directly proportional to the size of the current value of the portfolio.

2.3.4 Volatility
Another important element of the Value at Risk calculation is volatility. The concept of volatility is very important in the VaR calculation because it is a risk criterion in itself. Past fluctuations cannot be "predicted"; volatility forecasts for the future are instead made on the basis of past figures.

Risk calculations are based on the calculation of the standard deviations of the returns, taking into account the assumed probability distributions of the returns of the investments. Volatility is the measurement of expected changes in the price of a financial asset over a specified period of time. The volatility of portfolio income depends on the sensitivity of each asset to its risk factors, as well as covariance and variance between portfolio risk factors.

The volatility of variables such as interest rates, currencies, the inflation rate, the stock market, and production costs measures how much these variables actually deviate from their expected values. Rapid changes in the economy in particular cause volatility to increase. Forecasting volatility is very important for protection against future surprises. It is also known that high volatility negatively affects the demand for financial assets by individual and institutional investors, especially risk-averse ones. For this reason, the positive and negative aspects of the volatility experienced in financial markets over recent years have been the subject of detailed research.

Portfolio volatility depends not only on the volatility of the assets in the portfolio but also on the correlation between them. For this reason, calculating the standard deviation alone is not enough. In addition to standard deviation calculations, the methods used to measure volatility are the Exponentially Weighted Moving Average (EWMA), Autoregressive Conditional Heteroskedasticity (ARCH), and Generalized Autoregressive Conditional Heteroskedasticity (GARCH).

Exponentially Weighted Moving Average (EWMA)
Particularly in the 1980s, with the rapid development of computers and financial systems, financial modelling became a key tool in the sector. Variance modelling is one of these evolving needs. Many studies on asset returns show that variances and covariances vary over time, so it is necessary to give less weight to old data in the calculation.

EWMA is a popular technique developed by JP Morgan and used for risk estimation in the RiskMetrics value at risk model, which was offered free of charge in 1994. In this method, “a moving average of historical observations is used, where the latest observations carry the highest weight in the estimates.”
Variance formula for Exponentially Weighted Moving Average is:
\sigma_t^2 = \lambda \sigma_{t-1}^2 + (1 - \lambda) r_{t-1}^2
Here, λ (the decay factor) determines how the weights on past data decline. This coefficient is taken as 0.94 (daily) and 0.97 (monthly) in the RiskMetrics system. The closer λ is to 1, the more weight is given to past observations and the slower the estimate reacts to the most recent returns.
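The recursion above can be sketched in a few lines of code. This is a minimal illustration, not the RiskMetrics implementation; the return series and the seeding of the recursion with the first squared return are assumptions made for the example.

```python
def ewma_variance(returns, lam=0.94):
    """Apply sigma_t^2 = lam * sigma_{t-1}^2 + (1 - lam) * r_{t-1}^2
    over a return series; lam=0.94 is the RiskMetrics daily decay factor."""
    var = returns[0] ** 2              # seed the recursion (an assumption)
    for r in returns[1:]:
        var = lam * var + (1 - lam) * r ** 2
    return var

daily_returns = [0.01, -0.02, 0.015, -0.005]   # illustrative figures
print(ewma_variance(daily_returns) ** 0.5)      # EWMA volatility estimate
```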

One criticism of the EWMA method is that it may not fully represent the series, so estimates made with it may not be reliable. Its advantage, however, is that it can reflect sudden changes in the volatility calculation, which is helpful when volatility is high.

Autoregressive Conditional Heteroskedasticity (ARCH) and Generalized Autoregressive Conditional Heteroskedasticity (GARCH)
The Autoregressive Conditional Heteroskedasticity (ARCH) model used in volatility modelling was developed by Engle in 1982. Building ARCH out of regression models, Engle showed that not only the mean but also the variance can be modelled. "Autoregressive" indicates that current volatility depends on volatility in past periods. The ARCH(q) model described by Engle (1982) is:
\sigma_t^2 = a_0 + \sum_{i=1}^{q} a_i \varepsilon_{t-i}^2
“where a_0 > 0 and a_i ≥ 0 for i = 1, 2, …, q; the series is stationary if \sum_i a_i < 1. The ARCH model creates a process where today’s variance depends on its own previous variance. This allows the model to capture the volatility clustering observed in financial markets. The a_i parameters explain how fast the model reacts to news in the market. The one-step-ahead forecast for the ARCH(1) model uses the equation,”
\sigma_{t+1}^2 = a_0 + a_1 \varepsilon_t^2
For the ARCH model to be meaningful, all of the a parameters in the equation must be positive (Engle, 1982).
One of the most widely used methods that accounts for time-varying volatility and its dependence on previous volatility is the GARCH model, developed by Tim Bollerslev in 1986 as an extended form of the ARCH model.
The GARCH(p, q) model can be expressed as:
\sigma_t^2 = \omega + \sum_{i=1}^{q} a_i \varepsilon_{t-i}^2 + \sum_{j=1}^{p} b_j \sigma_{t-j}^2
If the market is volatile in the current period, the variance in the future period will also be high, depending on the magnitude of the return deviation in this period. On the other hand, if today's volatility is relatively low, the volatility in the next period will be low, unless the portfolio return deviates significantly from the average.
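The one-step GARCH(1,1) recursion described above can be sketched as follows. The parameter values are illustrative assumptions, not estimates, and the recursion is started at the model's unconditional variance.

```python
def garch11_variance(returns, omega=1e-5, alpha=0.10, beta=0.85):
    """Iterate sigma_t^2 = omega + alpha * eps_{t-1}^2 + beta * sigma_{t-1}^2
    over a return series and return the next-period variance forecast."""
    var = omega / (1 - alpha - beta)   # start at the unconditional variance
    for r in returns:
        var = omega + alpha * r ** 2 + beta * var
    return var

# high volatility today feeds into tomorrow's variance through both terms
print(garch11_variance([0.01, -0.02]))
```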
Statistical Value at Risk Calculations
In VaR analysis, although there are many methods that are derived from each other, there is no consensus on which is the most appropriate method. Nevertheless, the vast majority of methodological studies focus on the estimation of statistical distributions of securities returns. In this context, the basic approaches used in VaR analysis can be written as follows.
• Variance-Covariance Method (Parametric Approach)
• Historical Simulation
• Monte Carlo Simulation
In this study, only the approaches of variance-covariance, historical simulation and Monte Carlo Simulation proposed by the Basel Committee for VaR analysis have been addressed.
Variance-Covariance Method
The variance-covariance approach is based on the assumption that the portfolio return is a linear combination of the returns of the securities that make up the portfolio, and hence of the underlying risk factors. Since the portfolio return is a linear function of normally distributed risk factors, it is itself normally distributed. This approach therefore expresses the portfolio's VaR in terms of the normal distribution; to obtain the portfolio's Value at Risk, it is sufficient to calculate the mean and variance of that distribution.
To calculate Value at Risk, the value of the portfolio or stock is multiplied by the standard deviation of the portfolio, the standard normal value for the given confidence interval, and the square root of the holding period (Penza and Bansal, 2001).
\mathrm{VaR}_{\%,t} = P \cdot \sigma \cdot k \cdot \sqrt{t}
VaR_{%,t}: Value at Risk for the given confidence interval and holding period
P: value of the portfolio or stock
σ: standard deviation of the portfolio or stock
k: standard normal value for the given confidence interval
t: holding period
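The formula P · σ · k · √t translates directly into code. The position size, volatility, and horizon below are hypothetical; k = 1.65 corresponds to a 95% confidence level.

```python
import math

def parametric_var(value, sigma, k=1.65, horizon=1):
    """Single-position parametric VaR: P * sigma * k * sqrt(t)."""
    return value * sigma * k * math.sqrt(horizon)

# e.g., a 1,000,000 position with 2% per-period volatility over 10 periods
print(parametric_var(1_000_000, 0.02, k=1.65, horizon=10))
```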
In general, portfolios consist of more than one stock, and the calculation of the standard deviation is more complex. To obtain the Value at Risk with the variance-covariance method, there are different ways to calculate the variance of the portfolio.
Normal Method
In this model, the expected return of the portfolio is defined as the weighted average of the expected returns of the stocks in the portfolio. This means that the sensitivity to each risk factor in the portfolio is proportional to that factor's share of the portfolio.
E(r_p) = \sum_{i=1}^{N} w_i E(r_i) = w_1 E(r_1) + w_2 E(r_2) + \dots + w_N E(r_N)
E(r_p): expected return of the portfolio
w_i: weight of stock i in the portfolio
E(r_i): individual expected return of stock i
Since the weights represent investment ratios within the portfolio, they must sum to 1.
According to the model, the risk measure of a portfolio is the statistically known measure of variance. The variance, based on normal distribution, is defined as the measure of deviation from the mean. The variance, taking into account the relationship between the returns of the stocks in the portfolio, is calculated by using the following equation for a portfolio consisting of N shares.
\sigma_{r_p}^2 = \sum_{i=1}^{N} \sum_{j=1}^{N} w_i w_j \sigma_{ij}
This expresses the contribution of each stock's variance to the total portfolio variance as a function of the weights and the relationships between the returns.
σ_{r_p}^2: variance of the portfolio
w_i: weight of stock i in the portfolio
w_j: weight of stock j in the portfolio
σ_{ij}: covariance measuring the direction of the relationship between the returns of stocks i and j
The variance of an N-stock portfolio in matrix notation is:
\sigma_{r_p}^2 = \begin{bmatrix} w_1 & \cdots & w_N \end{bmatrix} \begin{bmatrix} \sigma_{11} & \cdots & \sigma_{1N} \\ \vdots & \ddots & \vdots \\ \sigma_{N1} & \cdots & \sigma_{NN} \end{bmatrix} \begin{bmatrix} w_1 \\ \vdots \\ w_N \end{bmatrix}
Or, more compactly, with Σ denoting the covariance matrix,
\sigma_{r_p}^2 = w^{\top} \Sigma \, w
Note that the number of operations is high due to the matrix structure. The number of distinct covariances in the matrix is N(N-1)/2. For example, a portfolio of 20 stocks requires 190 covariances, and one of 50 stocks requires 1,225. The number of operations thus grows quadratically with the number of assets, which poses no difficulty for small portfolios but matters greatly for large portfolios of hundreds of stocks.
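The double sum over w_i w_j σ_ij can be written out directly; for small portfolios the brute-force loop below is enough, and the two-stock covariance matrix is purely illustrative.

```python
def portfolio_variance(weights, cov):
    """Compute sum_i sum_j w_i * w_j * sigma_ij (the matrix form w' Σ w)."""
    n = len(weights)
    return sum(weights[i] * weights[j] * cov[i][j]
               for i in range(n) for j in range(n))

# two-stock illustration: variances on the diagonal, covariances off it
cov = [[0.04, 0.006],
       [0.006, 0.09]]
print(portfolio_variance([0.6, 0.4], cov))
```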
Covariance is a statistical measure of the direction of co-movement of two random variables. If the covariance between the returns of two stocks is positive, the returns move in the same direction; if it is negative, they move in opposite directions; and if it is zero, there is no linear relationship between the returns. The covariance is calculated as follows:
\sigma_{ij} = \frac{1}{N-1} \sum_{n=1}^{N} (r_{i,n} - \bar{r}_i)(r_{j,n} - \bar{r}_j)
When calculated for future values, the covariance between returns is the probability-weighted sum of the deviations of the stocks' returns from their expected returns. For example, the covariance between the returns of two stocks in a portfolio is calculated as follows:
\sigma_{ij} = \sum_{n=1}^{N} P_n \, [r_{i,n} - E(r_i)][r_{j,n} - E(r_j)]
r_{i,n} and r_{j,n}: returns of stocks i and j in scenario n, which occurs with probability P_n
E(r_i) and E(r_j): expected returns of stocks i and j
N: number of possible scenarios
Another statistical measure needed for the method, besides covariance, is the correlation coefficient. While covariance gives information about the direction of the relationship between random variables, the correlation coefficient also measures the degree of that relationship. It is calculated as:
\rho_{ij} = \frac{\sigma_{ij}}{\sigma_i \sigma_j}
This equation shows that covariance can also be obtained through the correlation coefficient. In this case, the covariance is equal to correlation coefficients of the corresponding variables multiplied by the standard deviations of the variables:
\sigma_{ij} = \rho_{ij} \, \sigma_i \sigma_j
In matrix notation, the variance-covariance matrix is obtained by multiplying the correlation matrix on both sides by the diagonal matrix of the stocks' standard deviations:
\Sigma = \begin{bmatrix} \sigma_1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \sigma_N \end{bmatrix} \begin{bmatrix} 1 & \cdots & \rho_{1N} \\ \vdots & \ddots & \vdots \\ \rho_{N1} & \cdots & 1 \end{bmatrix} \begin{bmatrix} \sigma_1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \sigma_N \end{bmatrix}, \qquad \text{i.e., } \sigma_{ij} = \sigma_i \rho_{ij} \sigma_j

The correlation coefficient lies between -1 and +1: +1 means a perfect positive correlation and -1 a perfect negative one. A perfect positive relationship means the two variables move in exactly the same direction, whereas a negative relationship means they move in opposite directions.
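Using σ_ij = σ_i ρ_ij σ_j, the covariance matrix can be rebuilt element by element from the individual volatilities and the correlation matrix. The two-asset figures below are assumptions for illustration.

```python
def cov_from_corr(sigmas, corr):
    """Build the covariance matrix element-wise as sigma_i * rho_ij * sigma_j."""
    n = len(sigmas)
    return [[sigmas[i] * corr[i][j] * sigmas[j] for j in range(n)]
            for i in range(n)]

sigmas = [0.2, 0.3]          # standard deviations (illustrative)
corr = [[1.0, 0.1],
        [0.1, 1.0]]
print(cov_from_corr(sigmas, corr))  # diagonal ≈ 0.04 and 0.09
```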
Given the information above, the Value at Risk of a portfolio under the normal method can be expressed as:
\mathrm{VaR}_p = -a \sqrt{w^{\top} \Sigma \, w}
w: column vector of portfolio weights
Σ: variance-covariance matrix
a: standard normal value for the given confidence level (1.65 for 95%, 2.33 for 99%)
Therefore, the VaR value of a portfolio under the linear approach is a function of the VaR values of the stocks it contains and the relationships between them. For example, the Value at Risk of a portfolio of two stocks is:
\mathrm{VaR}_p = \sqrt{\mathrm{VaR}_1^2 + \mathrm{VaR}_2^2 + 2\,\mathrm{VaR}_1 \mathrm{VaR}_2\, \rho_{12}}
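The two-stock aggregation formula can be checked numerically; the component VaR figures below are hypothetical. With ρ = 1 the component VaRs simply add, while lower correlation reduces the total, which is the diversification effect.

```python
import math

def portfolio_var_two_assets(var1, var2, rho):
    """VaR_p = sqrt(VaR1^2 + VaR2^2 + 2 * VaR1 * VaR2 * rho)."""
    return math.sqrt(var1 ** 2 + var2 ** 2 + 2 * var1 * var2 * rho)

print(portfolio_var_two_assets(100.0, 100.0, 1.0))  # 200.0: no diversification
print(portfolio_var_two_assets(100.0, 100.0, 0.0))  # ≈ 141.4: diversification
```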
Value at Risk measurement was initially used to expose portfolio risk. Because of the simplicity of its reporting, however, it has become a strategic tool for active portfolios that change rapidly and continuously over time. Thanks to the instant reporting VaR provides, the investor can immediately see how risk changes as the portfolio changes, and can therefore test at once the effectiveness of decisions about which instrument to include in the portfolio, which to remove, or which position to change.
Monte Carlo Simulation
Monte Carlo Simulation is based entirely on a statistical model. Unlike historical simulation, which simulates directly from past history, it uses mathematical techniques to generate a large number of possible portfolio return values; the simulation considers events that could occur rather than only events observed in the past.
Because Monte Carlo Simulation relies on statistical modeling, it also requires random sampling. Unlike the Historical Simulation method, its purpose is to estimate the portfolio value at the end of the holding period under various scenarios and assumptions, by defining the price behavior of the portfolio.
The most important resource for defining the portfolio price process is past price movements. To apply the Monte Carlo simulation method, three types of data are needed:
• the expected change in asset value,
• the degree of uncertainty (the variance-covariance matrix),
• the type of distribution.
Monte Carlo simulation is generally carried out according to the following algorithm:
1. Determine which statistical distribution model the portfolio return series belongs to.
2. Estimate the correlations between the stocks constituting the portfolio and form the variance-covariance matrix.
3. Choose the random number generator used to produce artificial prices for the risk factors with the help of the correlations and volatilities.
4. Draw hypothetical prices from the distribution using the random number generator.
5. Determine the profit or loss by revaluing the portfolio at the hypothetical prices.
6. Repeat steps 3-5 to generate the return distribution of the portfolio.
7. Calculate the VaR of this distribution at the desired confidence level.
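For a single position under an assumed normal return model, the algorithm reduces to the sketch below. This is a deliberately simplified illustration (one risk factor, so no correlation step); the parameters and seed are assumptions.

```python
import random

def monte_carlo_var(value, mu, sigma, confidence=0.95, trials=100_000, seed=42):
    """Draw hypothetical returns, revalue the position, and read the loss
    at the chosen confidence level off the simulated distribution."""
    rng = random.Random(seed)
    losses = sorted(-value * rng.gauss(mu, sigma) for _ in range(trials))
    return losses[int(confidence * trials)]

# illustrative: a 1,000,000 position, zero drift, 2% volatility per period
print(monte_carlo_var(1_000_000, 0.0, 0.02))  # close to 1.65 * 20,000
```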
In essence, the Monte Carlo Simulation Method is a mixture of the Variance-Covariance Method and the Historical Simulation Method. As in the Variance-Covariance Method, Monte Carlo Simulation needs the variance-covariance matrix of historical returns. However, it does not stop there; it produces a new correlated series based on that variance-covariance matrix. The next step is the same as in Historical Simulation. If the time slice used in the Historical Simulation method is the same as the one used to derive the variance-covariance matrix, and the portfolio behaves linearly, the results of Monte Carlo Simulation and Historical Simulation will be approximately the same. However, if the portfolio exhibits non-linear behavior (because of options, for example), the results will differ.
Although the Monte Carlo Simulation Method and the Historical Simulation Method are similar, the main difference between them is the source of the scenarios. The Historical Simulation Method uses the real changes observed in market factors over the historical sampling period to generate hypothetical portfolio profits or losses. The Monte Carlo Simulation Method instead draws random market prices and rates from a statistical distribution that is thought to represent the possible changes in the market factors adequately. These randomly generated values are then used to obtain the distribution of hypothetical profits and losses for the current portfolio, and the Value at Risk is derived from this distribution.
One of the major disadvantages of this method is its high computational cost. In situations where the exact valuation of assets is complex, it is often difficult to apply the method frequently. To ease this burden, assets are grouped or aggregated under certain risk factors. The other weakness of the method is that the scenarios (like the pricing models of financial assets such as options) are based on specific stochastic models, so the method also carries model risk. Nevertheless, if the model is built correctly, it is the most detailed approach for measuring market risk.
Historical Simulation
In the Historical Simulation method, it is assumed that past changes in risk factors will repeat in the future, and the rates of change obtained from historical data are applied to the current market prices. The distribution of the portfolio's market value is then obtained from the resulting new market prices. In standard practice, at least one year of data is used retrospectively, i.e., 252 new values are calculated for each risk factor. The portfolio is revalued using these historical changes in risk factors, and its profit/loss distribution is calculated accordingly. (Özlem Kıraç, 2011)
Historical simulation, also known as the non-parametric method, does not rely on specific assumptions about the distributions of market factors; therefore, no parameters such as standard deviation and correlation need to be estimated. The distribution of the portfolio's probable profit or loss is obtained by applying the changes in the market factors over the previous N periods to the current portfolio.
In the historical simulation method, rather than selecting scenarios randomly, observed market values are used directly in the calculation of VaR. The method can be seen as a simplified form of the Monte Carlo simulation method, since it can be used without relying on specific distributional assumptions. It requires no parameters such as correlation and volatility, and it accommodates distributions other than the normal. Hence, there is no need to construct and estimate variance-covariance matrices, and, unlike the Monte Carlo Simulation, the method is free of model risk. In addition, it can be applied to any securities, whether their payoffs are linear or not.
The historical simulation method consists basically of four steps:
The determination of the risk factors required for the recalculation of portfolio value or the series of percentages of price changes of assets
Applying the price changes to the portfolio for the determination of the series of changes in portfolio values
Ranking of portfolio value changes by percentage
Determination of the value change corresponding to the desired confidence level as the portfolio VaR
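The four steps above can be sketched as follows. The price history here is synthetic (generated with random numbers purely so the example runs); in a real application it would be the observed market prices over the past year:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic one-year history of daily prices for two assets
# (stands in for observed market data; 253 prices -> 252 changes)
prices = np.cumprod(1 + rng.normal(0, 0.01, size=(253, 2)), axis=0) * 100
holdings = np.array([1000, 500])                  # units held of each asset

# Step 1: series of historical percentage price changes
returns = prices[1:] / prices[:-1] - 1

# Step 2: apply each historical change to today's prices and revalue
current_prices = prices[-1]
scenario_prices = current_prices * (1 + returns)
pnl = scenario_prices @ holdings - current_prices @ holdings

# Steps 3-4: rank the value changes and read off the change
# corresponding to the desired confidence level (here 99%)
var_99 = -np.percentile(pnl, 1)
print(f"1-day 99% historical-simulation VaR: {var_99:,.2f}")
```

Note that no distribution, volatility, or correlation was assumed anywhere: the scenarios are simply the 252 historical changes replayed against the current portfolio.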
The Historical Simulation method is most appropriate when the amount of data is not huge and little is known about the profit/loss distribution. Although the method is time-consuming, one of its most important advantages is that it can capture information about recent collapses in the market.
The advantages and disadvantages of this method can be listed as follows :
Advantages of the method:
It is easily applied to non-linear positions. Every position containing an option-type asset is, in practice, non-linear. As with all other methods, every factor that affects prices must be simulated; in other words, not only the rates and prices but also the possible future values of implied volatility should be simulated.
There is no assumption about distributions.
Scenarios created from the historical record can easily capture unbalanced and unstable markets.

Disadvantages of the method:
Since the method relies on full (exact) revaluation, it is computationally intensive.
Scenario generation can lead to wrong results: estimates and random selections from past periods may not be consistent, and the number of scenarios and variables that can be estimated at a reasonable level is limited.
Possible future changes are not taken into consideration. If the volatility of prices and rates was low over the last year, higher volatility movements will not appear in the simulation set. This is the most criticized point of the model.
Concept of Portfolio
In order to explain the risk of a portfolio, it is first necessary to define the portfolio. A portfolio can be defined as a combination of financial assets held together; in the context of investment, all the securities owned by an investor are called the portfolio. For investors who build a portfolio from different securities according to their investment preferences, constructing an optimal portfolio is very important. Therefore, rather than calculating the risk of any single security, the risk and return of the portfolio as a whole need to be calculated.
Portfolio diversification means holding a collection of securities with different risks and returns. Increasing diversification reduces the total risk of the portfolio. There are two different approaches to portfolio diversification:
Classical (Simple Approach)
Modern Approach
Classical Approach
The classical approach is based on the assumption that some of the securities in the portfolio will lose value while others will be profitable, and therefore rests on the idea of increasing the number of securities in the portfolio. In the classical approach, however, the risk exists but cannot be measured. Since the approach relies on diversifying and distributing the risk of the portfolio across different securities rather than concentrating it in a single security, it is also called simple diversification.

The main purpose of the approach is to maximize the benefit of the portfolio holder. In other words, the investor will choose the portfolio that maximizes utility for a given risk and expected return, just as a consumer tends to maximize utility across the goods and services he or she demands. Even simple diversification will have a risk-reducing effect on the portfolio if the assets in it are chosen so that their value changes offset one another.
In the classical approach, the investor can make a good, risk-minimizing selection by choosing the securities constituting the portfolio from different, mutually independent sectors and/or industries. It is therefore better not to combine securities that belong to the same sector or industry, or that have the same maturity dates.
Classical diversification depends entirely on the investors' knowledge and financial abilities: it cannot fully minimize risk, it cannot increase expected returns under risk minimization, and it inevitably ignores the existing relationships between the securities in the portfolio.
