Risk

Misc

  • Also see
  • Packages
    • {PerformanceAnalytics} - Econometric Tools for Performance and Risk Analysis
      • Lots of vignettes, but there’s also a substantial amount of material in the introduction section of the Reference Manual
  • Papers
  • Metrics
    • Omega Ratio - A probability-weighted ratio of gains above a threshold return to losses below it. It captures more information about the shape of the return distribution than similar metrics like the Sharpe ratio

      • Formula
        \[ \Omega = \frac{\sum \text{Returns}_{+,\theta}}{-1 \cdot \sum \text{Returns}_{-,\theta}} \]

        • \(\text{Returns}_{+,\theta}\) : The excess returns, \(r_t - \theta\), for periods where the return is greater than the target threshold return, \(\theta\)

        • \(\text{Returns}_{-,\theta}\) : The excess returns, \(r_t - \theta\), for periods where the return is less than the target threshold return, \(\theta\)

      Code (Source)

      import yfinance as yf
      import numpy as np
      # Download the stock data for AAPL from Yahoo Finance for the specified date range
      data = yf.download("AAPL", start="2020-01-01", end="2021-12-31")
      returns = data["Adj Close"].pct_change().dropna()  # daily returns; drop the leading NaN
      # Calculate the Omega ratio of a strategy's returns
      def omega_ratio(returns, required_return=0.0):
          """Determines the Omega ratio of a strategy.
          Parameters
          ----------
          returns : pd.Series or np.ndarray
              Daily returns of the strategy, noncumulative.
          required_return : float, optional
              Minimum acceptable return of the investor. Threshold over which to
              consider positive vs negative returns. It will be converted to a
              value appropriate for the period of the returns. E.g. An annual minimum
              acceptable return of 100 will translate to a minimum acceptable
              return of 0.018.
          Returns
          -------
          omega_ratio : float
          Notes
          -----
          See https://en.wikipedia.org/wiki/Omega_ratio for more details.
          """
          # Convert the required return to a daily return threshold
          return_threshold = (1 + required_return) ** (1 / 252) - 1
          # Calculate the difference between returns and the return threshold
          returns_less_thresh = returns - return_threshold
          # Calculate the numerator as the sum of positive returns above the threshold
          numer = sum(returns_less_thresh[returns_less_thresh > 0.0])
          # Calculate the denominator as the absolute sum of negative returns below the threshold
          denom = -1.0 * sum(returns_less_thresh[returns_less_thresh < 0.0])
          # Return the Omega ratio if the denominator is positive; otherwise, return NaN
          if denom > 0.0:
              return numer / denom
          else:
              return np.nan
      # Calculate the Omega ratio for the given returns and required return
      omega_ratio(returns, 0.07)
      # Compute and plot the rolling 30-day Omega ratio of the returns
      returns.rolling(30).apply(omega_ratio).plot()
      # Plot a histogram of the daily returns to visualize their distribution
      returns.hist(bins=50)
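
      {PerformanceAnalytics} also ships an Omega() function in R. A minimal sketch, assuming the managers dataset bundled with the package stands in for real returns:

      # Omega ratio via {PerformanceAnalytics}; L is the threshold return
      library(PerformanceAnalytics)
      data(managers)              # monthly fund returns bundled with the package
      Omega(managers[, 1], L = 0) # Omega ratio with a 0% threshold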
  • Factor Model
    • Notes from {{mosaicperm}} paper
    • Model
      \[ Y_t = L_t X_t + \epsilon_t \]
      • \(Y_t \in \mathbb{R}^p\) is the vector of returns for the \(p\) assets at time \(t\)
      • \(X_t \in \mathbb{R}^k\) are returns of underlying factors (where \(k \ll p\)) which drive correlation among the assets. (not observed)
        • Typically estimated using cross-sectional regression (see the sketch at the end of this list)
      • \(L_t\) are the factor loadings. \(L_{j,l,t}\) measures the exposure of the \(j^{\text{th}}\) asset to the \(l^{\text{th}}\) factor at time \(t\). (deterministic matrix)
        • Loadings are known at time \(t\), unlike the factors, \(X_t\).
      • \(\epsilon_t\) are the residuals, aka the “idiosyncratic returns” of the \(p\) assets that can’t be explained by the factors.
    • Financial firms routinely publish exposure matrices \(L_t\) for commercial risk models, including the MSCI Barra models and the BFRE model
    • BlackRock Fundamental Equity Risk (BFRE) model
      • \(L_t\) is based on market fundamentals such as industry membership (e.g. stock \(j\) is in industry \(l\)) and accounting data.
    • Goodness of Fit (GOF)
      • Issue: Curse of Dimensionality
        • Sector-level data typically has around p ≥ 2000 assets with only T ≈ 50 observations; subsector-level data has p ≈ 150 assets with T ≈ 50 observations. Methods involving likelihood ratios and bootstrapping will generally be inaccurate in this regime.
      • {{mosaicperm}} (Paper)
        • Tests whether \(H_0\) holds for a fixed choice of exposures \(L_t\). Rigorously controls for false positives.
        • Null Hypothesis: \(H_0: \epsilon_{.,1}, \epsilon_{.,2}, \ldots, \epsilon_{.,p} \; \text{are jointly independent}\)
          • Where each \(\epsilon_{.,j} := \epsilon_{1,j}, \epsilon_{2,j}, \ldots, \epsilon_{T,j}\) denotes the time series of residuals for the \(j^\text{th}\) asset. The residuals are allowed to be autocorrelated, nonstationary, and heterogeneous.
        • The \(Y\) matrix is tiled (vertically and horizontally). The risk model is fit on each tile in order to obtain the residuals.
        • The \(H_a\) test statistic is calculated from the correlation matrix of the residuals. The \(H_0\) test statistic is calculated by permuting the residuals within each tile; the correlation matrix of the permuted residuals yields a null test statistic that acts as a threshold.
        • The test yields an exact p-value in finite samples under the assumption that the residuals for each asset are locally exchangeable.
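    • Example: estimating \(X_t\) via cross-sectional regression
      • A minimal sketch of the estimation step referenced above, using simulated data: given a known exposure matrix \(L_t\) and asset returns \(Y_t\) at a single time \(t\), the factor returns \(X_t\) are estimated by regressing \(Y_t\) on \(L_t\). All dimensions and values here are hypothetical.

      set.seed(1)
      p <- 200; k <- 3
      # Known exposure matrix L_t (p assets x k factors)
      L_t <- matrix(rnorm(p * k), nrow = p, ncol = k)
      # Unobserved factor returns X_t and idiosyncratic returns eps_t
      X_t <- rnorm(k, sd = 0.01)
      eps_t <- rnorm(p, sd = 0.02)
      # Asset returns: Y_t = L_t X_t + eps_t
      Y_t <- drop(L_t %*% X_t) + eps_t
      # Cross-sectional OLS regression of Y_t on L_t (no intercept)
      fit <- lm(Y_t ~ L_t - 1)
      X_t_hat <- coef(fit)       # estimated factor returns
      resid_t <- residuals(fit)  # estimated idiosyncratic returns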

Value at Risk (VaR)

  • VaR modeling determines the potential for loss in the entity being assessed and the probability that the defined loss will occur. VaR is measured by three components: the amount of potential loss, the probability of that loss occurring, and the time frame.
  • Example: Interpretation (source)
    • A financial firm may determine an asset has a 3% one-month VaR of 2%, representing a 3% chance of the asset declining in value by 2% during the one-month time frame.
    • Converting the 3% monthly chance of occurrence to a daily basis places the odds of a 2% loss at about one day per month. (A minimal historical-VaR sketch follows this list.)
  • Disadvantages
    • If the model is fit on data from a low-volatility period, then in high-volatility periods, VaR estimates may underestimate the probability and magnitude of extreme events.
    • Black swan events are not covered.
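  • Example: Historical VaR in R
    • A minimal sketch using simulated daily returns as hypothetical stand-in data: the 95% one-day VaR is the 5th percentile of the return distribution, and the CVaR is the mean return on days at or below that quantile.

    set.seed(1)
    # Simulated stand-in for a series of daily returns (hypothetical data)
    returns <- rnorm(1000, mean = 0.0005, sd = 0.02)
    # 95% one-day VaR: the 5th percentile of the return distribution
    VaR_95 <- quantile(returns, probs = 0.05)
    # 95% CVaR (expected shortfall): mean return on days at or below the VaR
    CVaR_95 <- mean(returns[returns <= VaR_95])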

Copulas

  • Also see Association, Copulas

  • Copula Workflow (source)

    • Data Collection and Preprocessing:
      • Collect historical data for the two variables of interest.
      • Clean the data by handling missing values, outliers, and ensuring consistency.
    • Marginal Distribution Estimation:
      • Estimate the marginal distributions of each variable independently.
      • Fit appropriate probability distributions to the marginal data.
    • Copula Selection and Fitting:
      • Choose a copula family (e.g., Gaussian, t, Clayton, Gumbel, etc.).
      • Fit the copula model to the data, estimating the copula parameters.
    • Simulation and Analysis:
      • Generate joint samples from the fitted copula model.
      • Use the joint distribution to compute risk measures such as VaR and CVaR.
  • Example: Value at Risk and Conditional Value at Risk Using Copulas

    set.seed(123)
    n <- 1000
    asset1 <- 
      rnorm(n, 
            mean = 0.001, 
            sd = 0.02)
    asset2 <- 
      rnorm(n, 
            mean = 0.001, 
            sd = 0.03)
    data <- data.frame(asset1, asset2)
    
    # Estimate marginal distributions (assuming normal for simplicity)
    margins <- list(m1 = fitdistrplus::fitdist(asset1, "norm"),
                    m2 = fitdistrplus::fitdist(asset2, "norm"))
    
    # normalCopula doesn't have its own listing in the reference manual;
    # it's documented under ellipCopula, or you can just use ?normalCopula.
    normal.cop <- copula::normalCopula(param = 0.5, dim = 2)
    
    # Transform each margin to pseudo-observations on [0, 1] using the
    # fitted marginal CDFs
    u1 <- 
      pnorm(asset1, 
            mean = margins$m1$estimate["mean"], 
            sd = margins$m1$estimate["sd"])
    u2 <- 
      pnorm(asset2, 
            mean = margins$m2$estimate["mean"], 
            sd = margins$m2$estimate["sd"])
    
    # Fit the copula to the data
    fit.cop <- 
      copula::fitCopula(normal.cop, 
                        cbind(u1, u2), 
                        method = "ml")
    
    # Generate samples from the fitted copula. The @ operator is used to
    # access the copula slot since fit.cop is an S4 object. Also, rCopula
    # doesn't have its own listing in the reference manual; it's documented
    # under Copula, or you can just use ?rCopula.
    copula.samples <- 
      copula::rCopula(n, fit.cop@copula)
    
    # Simulate portfolio returns by transforming the copula samples back to
    # the original scale and averaging (i.e. equal weighting)
    simulated.returns <- data.frame(
      asset1 = qnorm(copula.samples[, 1], 
                     mean = margins$m1$estimate["mean"], 
                     sd = margins$m1$estimate["sd"]),
      asset2 = qnorm(copula.samples[, 2], 
                     mean = margins$m2$estimate["mean"], 
                     sd = margins$m2$estimate["sd"])
    )
    simulated.portfolio.returns <- rowMeans(simulated.returns)
    
    # Calculate VaR and CVaR at the 95% confidence level: the 95% 1-day VaR
    # is ~2.85% and the CVaR (expected shortfall) is ~3.5%
    # -0.0285
    VaR_95 <- quantile(simulated.portfolio.returns, probs = 0.05)
    # -0.0351187638772645
    CVaR_95 <- mean(simulated.portfolio.returns[simulated.portfolio.returns <= VaR_95])
    • {PerformanceAnalytics} also has functions for calculating the VaR and CVaR values

      VaR_95 <- VaR(simulated.portfolio.returns, 
                    p = 0.95, 
                    method = "historical")
      CVaR_95 <- ES(simulated.portfolio.returns, 
                    p = 0.95, 
                    method = "historical")