ARCH models in Python

Overview

arch

Autoregressive Conditional Heteroskedasticity (ARCH) and other tools for financial econometrics, written in Python (with Cython and/or Numba used to improve performance).


Python 3

arch is Python 3 only. Version 4.8 is the final version that supported Python 2.7.

Documentation

Released documentation is hosted on Read the Docs. Current documentation from the main branch is hosted on my GitHub pages.

More about ARCH

More information about ARCH and related models is available in the notes and research available at Kevin Sheppard's site.

Contributing

Contributions are welcome. There are opportunities at many levels to contribute:

  • Implement a new volatility process, e.g., FIGARCH
  • Improve docstrings that are unclear or contain typos
  • Provide examples, preferably in the form of IPython notebooks

Examples

Volatility Modeling

  • Mean models
    • Constant mean
    • Heterogeneous Autoregression (HAR)
    • Autoregression (AR)
    • Zero mean
    • Models with and without exogenous regressors
  • Volatility models
    • ARCH
    • GARCH
    • TARCH
    • EGARCH
    • EWMA/RiskMetrics
  • Distributions
    • Normal
    • Student's T
    • Generalized Error Distribution

See the univariate volatility example notebook for a more complete overview.

# Download FTSE 100 data and compute percentage returns
import datetime as dt
import pandas_datareader.data as web

st = dt.datetime(1990, 1, 1)
en = dt.datetime(2014, 1, 1)
data = web.get_data_yahoo('^FTSE', start=st, end=en)
returns = 100 * data['Adj Close'].pct_change().dropna()

# Fit a constant-mean GARCH(1,1) with normally distributed errors (the defaults)
from arch import arch_model
am = arch_model(returns)
res = am.fit()
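
The fitted result can be summarized and used to produce forecasts. A minimal sketch continuing the example above (the horizon is illustrative):

print(res.summary())
# 5-step-ahead variance forecasts from the end of the sample
forecasts = res.forecast(horizon=5)
print(forecasts.variance.tail(1))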

Unit Root Tests

  • Augmented Dickey-Fuller
  • Dickey-Fuller GLS
  • Phillips-Perron
  • KPSS
  • Zivot-Andrews
  • Variance Ratio tests

See the unit root testing example notebook for examples of testing series for unit roots.
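
As a quick illustration, a minimal sketch applying the ADF test to the returns series constructed above:

from arch.unitroot import ADF

adf = ADF(returns)
print(adf.stat)       # test statistic
print(adf.pvalue)     # p-value
print(adf.summary())  # includes lag length and critical values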

Cointegration Testing and Analysis

  • Tests
    • Engle-Granger Test
    • Phillips-Ouliaris Test
  • Cointegration Vector Estimation
    • Canonical Cointegrating Regression
    • Dynamic OLS
    • Fully Modified OLS

See the cointegration testing example notebook for examples of testing series for cointegration.
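
A minimal sketch of the Engle-Granger test on a simulated cointegrated pair (the data are illustrative):

import numpy as np
from arch.unitroot.cointegration import engle_granger

rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(500))  # common stochastic trend
y = 0.5 * x + rng.standard_normal(500)   # cointegrated with x
test = engle_granger(y, x)
print(test)  # test statistic, p-value, and critical values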

Bootstrap

  • Bootstraps
    • IID Bootstrap
    • Stationary Bootstrap
    • Circular Block Bootstrap
    • Moving Block Bootstrap
  • Methods
    • Confidence interval construction
    • Covariance estimation
    • Apply method to estimate model across bootstraps
    • Generic Bootstrap iterator

See the bootstrap example notebook for examples of bootstrapping the Sharpe ratio and a Probit model from statsmodels.

# Import data
import datetime as dt
import pandas as pd
import numpy as np
import pandas_datareader.data as web

start = dt.datetime(1951, 1, 1)
end = dt.datetime(2014, 1, 1)
sp500 = web.get_data_yahoo('^GSPC', start=start, end=end)
start = sp500.index.min()
end = sp500.index.max()
monthly_dates = pd.date_range(start, end, freq='M')
monthly = sp500.reindex(monthly_dates, method='ffill')
returns = 100 * monthly['Adj Close'].pct_change().dropna()

# Function to compute parameters: annualized mean, volatility, and Sharpe ratio
def sharpe_ratio(x):
    mu, sigma = 12 * x.mean(), np.sqrt(12 * x.var())
    return np.array([mu, sigma, mu / sigma])

# Bootstrap confidence intervals
from arch.bootstrap import IIDBootstrap

bs = IIDBootstrap(returns)
ci = bs.conf_int(sharpe_ratio, 1000, method='percentile')
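
The same bootstrap object can also estimate the covariance of the statistics or return the raw bootstrap distribution; a short sketch continuing the example above:

# Bootstrap covariance of (mu, sigma, Sharpe ratio)
param_cov = bs.cov(sharpe_ratio, 1000)
# Raw bootstrap replications, one row per draw
params = bs.apply(sharpe_ratio, 1000)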

Multiple Comparison Procedures

  • Test of Superior Predictive Ability (SPA), also known as the Reality Check or Bootstrap Data Snooper
  • Stepwise (StepM)
  • Model Confidence Set (MCS)

See the multiple comparison example notebook for examples of the multiple comparison procedures.
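
A minimal sketch of the SPA test on hypothetical loss series (the data and the number of models are illustrative):

import numpy as np
from arch.bootstrap import SPA

rng = np.random.default_rng(0)
benchmark_losses = rng.standard_normal(1000) ** 2
model_losses = rng.standard_normal((1000, 10)) ** 2
spa = SPA(benchmark_losses, model_losses)
spa.compute()
print(spa.pvalues)  # lower, consistent, and upper p-values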

Long-run Covariance Estimation

Kernel-based estimators of the long-run covariance, including the Bartlett kernel, which is known as the Newey-West estimator in econometrics. Automatic bandwidth selection is available for all of the covariance estimators.

from arch.covariance.kernel import Bartlett
from arch.data import nasdaq

data = nasdaq.load()
returns = data[["Adj Close"]].pct_change().dropna()

# Estimate the long-run covariance of the squared returns
cov_est = Bartlett(returns ** 2)
cov_est.cov.long_run

Requirements

These requirements reflect the testing environment. It is possible that arch will work with older versions.

  • Python (3.7+)
  • NumPy (1.16+)
  • SciPy (1.2+)
  • Pandas (0.23+)
  • statsmodels (0.11+)
  • matplotlib (2.2+), optional
  • property-cached (1.6.4+), optional

Optional Requirements

  • Numba (0.35+) will be used if available and when installed using the --no-binary option
  • jupyter and notebook are required to run the notebooks

Installing

Standard installation with a compiler requires Cython. If you do not have a compiler installed, arch should still install; you will see a warning, but it can be ignored. If you don't have a compiler, numba is strongly recommended.

pip

Releases are available on PyPI and can be installed with pip.

pip install arch

This command should work whether or not you have a compiler installed. If you want to install with the --no-binary option, use

pip install arch --install-option="--no-binary" --no-build-isolation

The --no-build-isolation flag uses the existing NumPy installation when building the source. This is usually needed since pip will attempt to build all dependencies from source when --install-option is used.

You can alternatively install the latest version from GitHub

pip install git+https://github.com/bashtage/arch.git

--install-option="--no-binary" --no-build-isolation can be used to disable compilation of the extensions.

Anaconda

conda users can install from conda-forge,

conda install arch-py -c conda-forge

Note: The conda-forge name is arch-py.

Windows

Building the extensions using the Community Edition of Visual Studio is simple when using Python 3.7 or later. Building is not necessary when numba is installed, since just-in-time compiled code (numba) runs as fast as ahead-of-time compiled extensions.

Developing

The development requirements are:

  • Cython (0.29+, if not using --no-binary)
  • pytest (For tests)
  • sphinx (to build docs)
  • sphinx_material (to build docs)
  • jupyter, notebook and nbsphinx (to build docs)

Installation Notes

  1. If Cython is not installed, the package will be installed as if --no-binary was used.
  2. Setup does not verify these requirements. Please ensure these are installed.
Comments
  • Out-of-sample forecasts

    One last question for today ;)

    The 'simulation' method for forecast is very simple and fast. Just curious why it isn't built to forecast beyond the last index of the dataset?

    Is it possible, say, if you want to forecast 30 days beyond the current dataset, to simply extend the dataset by 30 days of np.nan or 0 and set last_obs to the last day of actual data? I've tried this and the forecasts come out a lot different than the in-sample ones. It seems like the residuals from the entire dataset are getting incorporated into a few things, like compute_variance maybe, whether or not they are actually being used to fit.

    Anyway, just wondering if there's a best way to do this. It is looking like, if one wants to do, say, 1,000 sims of 30 days forward forecasted, a new model would need to be instantiated and fit 30x1000 = 30,000 times.

    opened by rjskene 16
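
    For reference, simulation-based forecasts beyond the last observation can be produced directly from a fitted model, without padding the data; a minimal sketch (the horizon and simulation count are illustrative, and returns is assumed to hold the modeled series):

    from arch import arch_model

    am = arch_model(returns)
    res = am.fit(disp='off')
    # 1,000 simulated 30-step paths; per-path draws are in f.simulations
    f = res.forecast(horizon=30, method='simulation', simulations=1000)
    print(f.variance.tail(1))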
  • Multithreading fit ARX causing The optimizer returned code 9

    Hi, I recently started to use your library for modelling time series. It works perfectly fine under a single thread, but if I fit many ARX models in a multithreaded task, then I randomly get either

        The optimizer returned code 9. The message is:
        Iteration limit exceeded
        See scipy.optimize.fmin_slsqp for code meaning.
        ConvergenceWarning

    or "The optimizer returned code 5" problems, so I get NaN for the final results. Is this normal? Is ARX thread safe?

    opened by gaussleescorpio 16
  • Computing variance from FIGARCH

    Hi, I'm using Python 3.7. In order to compute the value at risk, I have to forecast FIGARCH and calculate the daily conditional mean and standard deviation. To do that, I used the arch package, which contains the FIGARCH model. Please find my code below:

    import numpy as np
    import pandas as pd
    import scipy as sp
    from scipy.stats import stats,norm
    import scipy.stats as stats
    df = pd.read_excel ('E:\\Book1.xlsx',sheet_name="Sheet3",header=0)
    ret_co1 = df.CO1.pct_change().dropna()
    mu_co1 = np.mean(ret_co1)
    sigma_co1 = np.std(ret_co1)
    
    from arch.univariate import FIGARCH
    mod_4 = FIGARCH(p=1, q=1, power=2.0, truncation=1000)
    resid_4 = ret_co1 - mu_co1
    residu_4 = resid_4.values
    var_4 = np.full((1,4595),sigma_co1)
    vari_4 = var_4[0]
    bc_4 = mod_4.backcast(resid_4)
    bn_4 = mod_4.bounds(resid_4)
    para=[1,1]
    fig_4 = mod_4.compute_variance(parameters=para, resids=residu_4, sigma2=vari_4, backcast=bc_4, var_bounds=bn_4)
    

    ret_co1 are my returns; mu_co1 is my mean; residu_4 is my vector of mean residuals, computed as resids = returns - E(returns); vari_4 is the vector of conditional variances, where I put the same variance in each element of the matrix; para = [1, 1] is for p=1 and q=1.

    The error message I got is "a bytes-like object is required, not 'list'", and it came from File "C:\Users\Dimitry\Anaconda3\lib\site-packages\arch\univariate\volatility.py", line 2519, in compute_variance. Can you help me fix this error, please?

    bug 
    opened by dimcarien 15
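
    For reference, compute_variance expects the parameter vector as a NumPy array rather than a list, and a FIGARCH(1, 1) process has more than two parameters. A simpler route to the conditional variances is to attach the process to a mean model and fit it; a minimal sketch reusing ret_co1 from the code above:

    from arch.univariate import ConstantMean, FIGARCH

    am = ConstantMean(ret_co1)
    am.volatility = FIGARCH(p=1, q=1)
    res = am.fit(disp='off')
    cond_std = res.conditional_volatility  # daily conditional standard deviation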
  • FAQ: The predicted value of the Linux platform is abnormal.

    We ran an AR-GARCH model on two different platforms with Python 3.5.3 and the arch 4.6.0 package. One is Windows 7 Enterprise x64; the other is CentOS release 6.4 (Final) x86_64 GNU/Linux. The prediction results of the two platforms are very different for some sequences. For example:

    from arch.univariate import ARX
    from arch.univariate import GARCH

    L = [0.024537471, 0.011513695, 1.38838E-4, 0.034646548, -0.010029519, 0.013727814, -0.002833435, 0.012121772, -0.045976405, 0.00194646, 9.60483E-4, -0.041520158, 0.022433862, -0.012950044, 0.001248923]
    ar_asset = ARX(L, lags=[1])
    ar_asset_model = ar_asset.fit()
    mean_asset_pre = ar_asset_model.forecast(horizon=1).mean[-1:]['h.1'].values[0]
    print('mean_asset_pre is:', mean_asset_pre)

    The result on Windows 7 Enterprise x64 is "mean_asset_pre is: -0.01279744", but the result on CentOS release 6.4 (Final) x86_64 GNU/Linux is "mean_asset_pre is: -57747.628851".

    opened by wzg-rapper 15
  • ENH: Add variance forecasting

    Add variance forecasting to the forecast function:

    • [x] Constant Variance Forecasting
    • [x] GARCH Forecasting
    • [ ] EGARCH Forecasting
    • [x] HARCH Forecasting
    • [x] EWMA Forecasting
    • [x] EWMA2006 Forecasting
    • [x] Documentation
    • [x] IPython notebook
    • [x] Hedgehog plot
    • [ ] Test first_obs, last_obs and hold_back interaction with forecasting
    opened by bashtage 14
  • Why doesn't IIDBootstrap support different sample sizes?

    Minimal example to show the current behaviour:

    In [1]: from arch.bootstrap import IIDBootstrap
    
    In [2]: IIDBootstrap([1, 2, 3], [4, 5])
    ---------------------------------------------------------------------------
    ValueError                                Traceback (most recent call last)
    <ipython-input-2-46a2c8a96fe3> in <module>()
    ----> 1 IIDBootstrap([1, 2, 3], [4, 5])
    
    ~/projects/automattic/data-science/conda_env/lib/python3.6/site-packages/arch/bootstrap/base.py in __init__(self, *args, **kwargs)
        155         for arg in all_args:
        156             if len(arg) != self._num_items:
    --> 157                 raise ValueError("All inputs must have the same number of "
        158                                  "elements in axis 0")
        159         self._index = np.arange(self._num_items)
    
    ValueError: All inputs must have the same number of elements in axis 0
    

    With real-life data, sample sizes of different groups often vary. What is the reason for not supporting different sample sizes in IIDBootstrap? Is there a workaround?

    opened by yanirs 13
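
    For reference, release 4.8.0 (see the release notes below) added IndependentSamplesBootstrap for samples with unequal lengths; a minimal sketch:

    import numpy as np
    from arch.bootstrap import IndependentSamplesBootstrap

    def mean_diff(x, y):
        # Each argument is resampled independently, so lengths may differ
        return np.array([x.mean() - y.mean()])

    bs = IndependentSamplesBootstrap(np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0]))
    ci = bs.conf_int(mean_diff, 1000)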
  • Distribution moments

    Re-opening https://github.com/bashtage/arch/pull/328

    Added dist.moment(h) for each univariate.Distribution subclass. Also added dist.partial_moment(h, z), which can be useful when working with models that have nonzero asymmetric dependence. Both methods have closed-form solutions for all distributions.

    The test code compares the methods against numbers obtained by numeric integration.

    opened by ghost 12
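
    A minimal usage sketch of the two methods, using the standard normal distribution:

    from arch.univariate import Normal

    dist = Normal()
    print(dist.moment(2))               # E[x**2] = 1 for the standard normal
    print(dist.partial_moment(2, 0.0))  # E[x**2 * 1(x < 0)] = 0.5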
  • Using statsmodels ARMAResults as a mean model

    Is it possible to use ARMAResults calculated using statsmodels as a mean model so that a volatility process can be added?

    I wasn't sure how this might work but I was thinking of something along the lines of this:

    from arch.univariate.mean import ARCHModel
    from arch.univariate import ARCH
    
    model = ARCHModel(returns)
    model.mean = arma_res  # statsmodels ARMAResults
    model.volatility = ARCH()
    
    results = model.fit(update_freq=5)
    
    opened by Anjum48 12
  • Error installing arch ...

    Hi -

    I tried to install first via

    pip install arch ...

    I got this error:

    ValueError: numpy.ufunc size changed, may indicate binary incompatibility. Expected 216 from C header, got 192 from PyObject

    Then I tried to install via

    pip install arch --install-option="--no-binary" ... which resulted in almost the same error:

    AttributeError: type object 'arch.univariate.recursions.array' has no attribute 'reduce_cython'

    Can you help, please?

    opened by terminsen 11
  • DEP: Limit statsmodels imports

    Get cStringIO and range imports from arch.compat.python instead of statsmodels.compat

    Get cache_readonly from pandas (cython implementation, better testing)

    Use dict instead of resettable_cache. I know in sm all usages of resettable_cache are unnecessary; am just assuming the same is true here.

    opened by jbrockmendel 11
  • GARCH Estimation with t distribution

    Hi, the arch model with t distribution works fine with simulated data, but it has very unstable estimates with real data, such as daily stock returns. I tried an AR(2)-GARCH(1,1)-t model for S&P 500 stocks with 5 years of daily returns. The AR coefficients of some stocks are way beyond 1, like 10 or 1000, which implies a non-stationary process, or an error. Then I tried estimating one stock starting with 1,000 observations, adding one observation to the sample each time. The coefficients estimated each time are quite volatile and sometimes exceed 1. If I switch to AR-GARCH-normal, the coefficient estimates are very stable, so I think there is a problem with the MLE estimation of the t distribution. It is challenging to estimate a GARCH-t model. Matlab has this function and does fairly well in the latest versions. Would you be able to improve that? We can discuss the estimation methods, but I found it difficult to change your code myself. Thank you

    Daniel

    opened by DanielZJInvest 11
  • [Snyk] Security upgrade setuptools from 39.0.1 to 65.5.1

    This PR was automatically created by Snyk using the credentials of a real user.


    Snyk has created this PR to fix one or more vulnerable packages in the `pip` dependencies of this project.

    Changes included in this PR

    • Changes to the following files to upgrade the vulnerable dependencies to a fixed version:
      • requirements-dev.txt
    ⚠️ Warning
    Sphinx 1.8.6 has requirement docutils<0.18,>=0.11, but you have docutils 0.18.1.
    notebook 5.7.16 requires terminado, which is not installed.
    nbformat 4.4.0 requires jsonschema, which is not installed.
    jupyter 1.0.0 requires qtconsole, which is not installed.
    ipython 5.10.0 requires simplegeneric, which is not installed.
    
    

    Vulnerabilities that will be fixed

    By pinning:

    | Severity | Priority Score (*) | Issue | Upgrade | Breaking Change | Exploit Maturity |
    | :---: | :--- | :--- | :--- | :--- | :--- |
    | medium | 551/1000 (Why? Recently disclosed, has a fix available, CVSS 5.3) | Regular Expression Denial of Service (ReDoS), SNYK-PYTHON-SETUPTOOLS-3180412 | setuptools: 39.0.1 -> 65.5.1 | No | No Known Exploit |

    (*) Note that the real score may have changed since the PR was raised.

    Some vulnerabilities couldn't be fully fixed and so Snyk will still find them when the project is tested again. This may be because the vulnerability existed within more than one direct dependency, but not all of the affected dependencies could be upgraded.

    Check the changes in this PR to ensure they won't cause issues with your project.


    Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.

    For more information: 🧐 View latest project report

    🛠 Adjust project settings

    📚 Read more about Snyk's upgrade and patch logic


    Learn how to fix vulnerabilities with free interactive lessons:

    🦉 Regular Expression Denial of Service (ReDoS)

    opened by bashtage 0
  • Cannot reconcile simulation.values against simulation.residual_variances

    I am trying to understand -- when forecasting using GARCH, how do the simulated "values" compare to the "residual_variances". From a few experiments, I cannot get them to match.

    To be clear, I'm looking at these two properties in the ARCHModelForecast object:

    • simulations.values
    • simulations.residual_variances

    From my understanding of the docs, the volatility series is "simulated" by drawing shocks e_t ~ N(0, 1) many times. So there are (by default) 1000 "volatility paths", which are then averaged to give the expected volatility. Each of these paths also corresponds to a sequence of values (corresponding to epsilon_t in the docs), which is just white noise times sqrt(current modeled volatility).

    However, when I try to compare the following two, I don't get the same answer.

    Calculation 1

    • Compute.fit( .. ).forecast( 5 days ).simulations.residual_variances
    • Take the mean of the last row (corresponding to the expected volatility of the 5th day volatility forecast)

    Calculation 2

    • Compute .fit(..).forecast( 5 days ).simulations.values
    • This results in a giant dataframe which is of size (num simulations) x (5 days)
    • Take the last row (corresponding to the 5th-day simulated return)
    • Square these and take the mean (corresponding to E[ epsilon^2 ] as written in the docs)

    These two don't always give the same result. Any ideas why? I can provide more detail if anything is unclear. I have gone through the source code too, but no luck.

    opened by sameerlal 1
  • [Snyk] Security upgrade setuptools from 39.0.1 to 65.5.1

    This PR was automatically created by Snyk using the credentials of a real user.


    Snyk has created this PR to fix one or more vulnerable packages in the `pip` dependencies of this project.

    Changes included in this PR

    • Changes to the following files to upgrade the vulnerable dependencies to a fixed version:
      • requirements-dev.txt
    ⚠️ Warning
    Sphinx 1.8.6 has requirement docutils<0.18,>=0.11, but you have docutils 0.18.1.
    seaborn 0.9.1 requires numpy, which is not installed.
    scipy 1.2.3 requires numpy, which is not installed.
    pytest-cov 2.12.1 requires coverage, which is not installed.
    pandas 0.24.2 requires numpy, which is not installed.
    notebook 5.7.16 requires terminado, which is not installed.
    nbformat 4.4.0 requires jsonschema, which is not installed.
    matplotlib 2.2.5 requires numpy, which is not installed.
    jupyter 1.0.0 requires qtconsole, which is not installed.
    ipython 5.10.0 requires simplegeneric, which is not installed.
    
    

    Vulnerabilities that will be fixed

    By pinning:

    | Severity | Priority Score (*) | Issue | Upgrade | Breaking Change | Exploit Maturity |
    | :---: | :--- | :--- | :--- | :--- | :--- |
    | low | 441/1000 (Why? Recently disclosed, has a fix available, CVSS 3.1) | Regular Expression Denial of Service (ReDoS), SNYK-PYTHON-SETUPTOOLS-3113904 | setuptools: 39.0.1 -> 65.5.1 | No | No Known Exploit |

    (*) Note that the real score may have changed since the PR was raised.

    Some vulnerabilities couldn't be fully fixed and so Snyk will still find them when the project is tested again. This may be because the vulnerability existed within more than one direct dependency, but not all of the affected dependencies could be upgraded.

    Check the changes in this PR to ensure they won't cause issues with your project.


    Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.

    For more information: 🧐 View latest project report

    🛠 Adjust project settings

    📚 Read more about Snyk's upgrade and patch logic


    Learn how to fix vulnerabilities with free interactive lessons:

    🦉 Regular Expression Denial of Service (ReDoS)

    opened by bashtage 0
  • [Snyk] Security upgrade mistune from 0.8.4 to 2.0.3

    This PR was automatically created by Snyk using the credentials of a real user.


    Snyk has created this PR to fix one or more vulnerable packages in the `pip` dependencies of this project.

    Changes included in this PR

    • Changes to the following files to upgrade the vulnerable dependencies to a fixed version:
      • doc/requirements.txt
    ⚠️ Warning
    Sphinx 1.8.6 has requirement docutils<0.18,>=0.11, but you have docutils 0.18.1.
    notebook 5.7.16 requires terminado, which is not installed.
    nbformat 4.4.0 requires jsonschema, which is not installed.
    nbconvert 5.6.1 has requirement mistune<2,>=0.8.1, but you have mistune 2.0.3.
    jupyter 1.0.0 requires qtconsole, which is not installed.
    Jinja2 2.11.3 requires MarkupSafe, which is not installed.
    ipython 5.10.0 requires simplegeneric, which is not installed.
    
    

    Vulnerabilities that will be fixed

    By pinning:

    | Severity | Priority Score (*) | Issue | Upgrade | Breaking Change | Exploit Maturity |
    | :---: | :--- | :--- | :--- | :--- | :--- |
    | medium | 479/1000 (Why? Has a fix available, CVSS 5.3) | Regular Expression Denial of Service (ReDoS), SNYK-PYTHON-MISTUNE-2940625 | mistune: 0.8.4 -> 2.0.3 | No | No Known Exploit |

    (*) Note that the real score may have changed since the PR was raised.

    Some vulnerabilities couldn't be fully fixed and so Snyk will still find them when the project is tested again. This may be because the vulnerability existed within more than one direct dependency, but not all of the affected dependencies could be upgraded.

    Check the changes in this PR to ensure they won't cause issues with your project.


    Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.

    For more information: 🧐 View latest project report

    🛠 Adjust project settings

    📚 Read more about Snyk's upgrade and patch logic


    Learn how to fix vulnerabilities with free interactive lessons:

    🦉 Regular Expression Denial of Service (ReDoS)

    opened by bashtage 0
  • [Snyk] Security upgrade mistune from 0.8.4 to 2.0.3

    Snyk has created this PR to fix one or more vulnerable packages in the `pip` dependencies of this project.

    Changes included in this PR

    • Changes to the following files to upgrade the vulnerable dependencies to a fixed version:
      • doc/requirements.txt
    ⚠️ Warning
    Sphinx 1.8.6 has requirement docutils<0.18,>=0.11, but you have docutils 0.18.1.
    notebook 5.7.15 requires terminado, which is not installed.
    nbformat 4.4.0 requires jsonschema, which is not installed.
    nbconvert 5.6.1 has requirement mistune<2,>=0.8.1, but you have mistune 2.0.3.
    jupyter 1.0.0 requires qtconsole, which is not installed.
    Jinja2 2.11.3 requires MarkupSafe, which is not installed.
    ipython 5.10.0 requires simplegeneric, which is not installed.
    
    

    Vulnerabilities that will be fixed

    By pinning:

    | Severity | Priority Score (*) | Issue | Upgrade | Breaking Change | Exploit Maturity |
    | :---: | :--- | :--- | :--- | :--- | :--- |
    | medium | 479/1000 (Why? Has a fix available, CVSS 5.3) | Regular Expression Denial of Service (ReDoS), SNYK-PYTHON-MISTUNE-2940625 | mistune: 0.8.4 -> 2.0.3 | No | No Known Exploit |

    (*) Note that the real score may have changed since the PR was raised.

    Some vulnerabilities couldn't be fully fixed and so Snyk will still find them when the project is tested again. This may be because the vulnerability existed within more than one direct dependency, but not all of the affected dependencies could be upgraded.

    Check the changes in this PR to ensure they won't cause issues with your project.


    Note: You are seeing this because you or someone else with access to this repository has authorized Snyk to open fix PRs.

    For more information: 🧐 View latest project report

    🛠 Adjust project settings

    📚 Read more about Snyk's upgrade and patch logic


    Learn how to fix vulnerabilities with free interactive lessons:

    🦉 Regular Expression Denial of Service (ReDoS)

    opened by snyk-bot 0
  • Local block bootstrap

    One feature I think would be useful is bootstrap methods for nonstationary time-series. I am currently looking at the Local block bootstrap (LBB), which was relatively straightforward to implement.

    The LBB works by assuming that the time-series is almost stationary, but with slowly changing properties (e.g. the seasons in a year). Then, it creates bootstrap samples by sampling blocks near each other and stitching them together.

    I have created a prototype, which seems to work, but I don't have any unit tests yet.

    import numpy as np
    from arch.bootstrap.base import IIDBootstrap, _get_random_integers, ArrayLike, RandomState, Generator, Int64Array
    
    
    class LocalBlockBootstrap(IIDBootstrap):
        _name = "Local Block Bootstrap"
    
        def __init__(
            self,
            block_size: int,
            max_step: int,
            *args: ArrayLike,
            random_state: RandomState | None = None,
            seed: None | int | Generator | RandomState = None,
            **kwargs: ArrayLike,
        ) -> None:
            super().__init__(*args, random_state=random_state, seed=seed, **kwargs)
            self.block_size: int = block_size
            self.max_step: int = max_step
            self._parameters = [block_size, max_step]
    
        def clone(
            self,
            *args: ArrayLike,
            seed: None | int | Generator | RandomState = None,
            **kwargs: ArrayLike,
        ) -> 'LocalBlockBootstrap':
    
            block_size = self._parameters[0]
            max_step = self._parameters[1]
            return self.__class__(block_size, max_step, *args, random_state=None, seed=seed, **kwargs)
    
        def update_indices(self) -> Int64Array:
            num_blocks = self._num_items // self.block_size
            if num_blocks * self.block_size < self._num_items:
                num_blocks += 1
    
            m = np.arange(num_blocks)
            lower = np.maximum(0, self.block_size * m - self.max_step - 1)
            upper = np.minimum(self._num_items - self.block_size, self.block_size * m + self.max_step - 1)
            step = upper - lower
            indices = lower + _get_random_integers(self.generator, step, size=len(step))
    
            indices = indices[:, np.newaxis] + np.arange(self.block_size)[np.newaxis, :]
            indices = indices.flatten()
    
            if indices.shape[0] > self._num_items:
                return indices[: self._num_items]
            else:
                return indices
    
    opened by yngvem 0
Releases (v5.3.1)
  • v5.3.1(Jun 22, 2022)

  • v5.3.0(Jun 22, 2022)

    This release contains two small fixes:

    • Relax an overly specific assert that causes issues downstream
    • Fix a typo in a literal type definition
  • v5.2.0(Mar 31, 2022)

  • v5.1.0(Nov 19, 2021)

  • v5.0.1(Jul 22, 2021)

  • v5.0(Jul 22, 2021)

    Release 5.0 contains new features and backward-incompatible changes.

    Unit Root

    • All unit root tests are now immutable, and so properties such as trend cannot be set after the test is created.

    Bootstrap

    • Added a seed keyword argument to all bootstraps (e.g., IIDBootstrap and StationaryBootstrap) that allows a NumPy numpy.random.Generator to be used. The seed keyword argument also accepts legacy numpy.random.RandomState instances and integers. If an integer is passed, the random number generator is constructed by calling numpy.random.default_rng. The seed keyword argument replaces the random_state keyword argument; see the sketch after this list.
    • The IIDBootstrap.random_state property has also been deprecated in favor of IIDBootstrap.generator.
    • The IIDBootstrap.get_state and IIDBootstrap.set_state methods have been replaced by the IIDBootstrap.state property.
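
    A minimal sketch of the new seed keyword (the array contents are illustrative):

        import numpy as np
        from arch.bootstrap import IIDBootstrap

        # seed accepts a Generator, a legacy RandomState, or an integer
        bs = IIDBootstrap(np.arange(100.0), seed=np.random.default_rng(1234))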

    Volatility Modeling

    • Added a seed keyword argument to all distributions (e.g., Normal and StudentsT) that allows a NumPy numpy.random.Generator to be used. The seed keyword argument also accepts legacy numpy.random.RandomState instances and integers. If an integer is passed, the random number generator is constructed by calling numpy.random.default_rng. The seed keyword argument replaces the random_state keyword argument.
    • The Normal.random_state property has also been deprecated in favor of Normal.generator.
    • Added the ARCHInMean mean process supporting (G)ARCH-in-mean models.
    • Extended VolatilityProcess with VolatilityProcess.volatility_updater, which contains a VolatilityUpdater that allows ARCHInMean to be created from different volatility processes.
  • v4.19(Mar 16, 2021)

    This is a feature and bug fix release. The two key new features are:

    • A reduction in the size of the data returned when producing forecasts, which can reduce memory allocation by a factor of 1000x or more. To use the new feature, set reindex=False in forecast(), as sketched below.
    • Forecasting with exogenous variables is now possible.
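
    A minimal sketch of the reduced-size forecasts (assuming a fitted result res):

        # Only the rows that contain forecasts are returned when reindex=False
        f = res.forecast(horizon=5, reindex=False)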
  • v4.18(Mar 3, 2021)

  • v4.17(Mar 2, 2021)

  • v4.16.1(Feb 8, 2021)

  • v4.16.1rc0(Feb 8, 2021)

  • 4.16(Feb 5, 2021)

    This is a feature and bug-fix release:

    • Added the APARCH volatility process.
    • Added support for Python 3.9 in pyproject.toml.
    • Fixed a bug in the model degree-of-freedom calculation.
    • Improved HARX initialization.
  • 4.15(Jun 24, 2020)

    This is a minor release with doc fixes and other small updates.

    The only notable feature is PhillipsPerron.regression, which returns regression results from the model estimated as part of the test.

  • 4.14(Apr 17, 2020)

    This is a feature and bug release.

    New Features

    Major

    • There are two major new features: long-run covariance estimation and cointegration analysis
      • Added kernel-based long-run variance estimation in arch.covariance.kernel. Examples include the arch.covariance.kernel.Bartlett and the arch.covariance.kernel.Parzen kernels. All estimators support automatic bandwidth selection.
      • Added Engle-Granger (arch.unitroot.cointegration.engle_granger) and Phillips-Ouliaris (arch.unitroot.cointegration.phillips_ouliaris) cointegration tests
      • Added three methods to estimate cointegrating vectors: arch.unitroot.cointegration.CanonicalCointegratingReg, arch.unitroot.cointegration.DynamicOLS, and arch.unitroot.cointegration.FullyModifiedOLS.

    Minor

    • Issue warnings when unit root tests are mutated. These will raise an error after 5.0 is released.
    • Improved exceptions in arch.unitroot.ADF, arch.unitroot.KPSS, arch.unitroot.PhillipsPerron, arch.unitroot.VarianceRatio, and arch.unitroot.ZivotAndrews when the test specification is infeasible because the time series is too short or the required regression model has reduced rank.

    Bugs Fixed

    • Fixed a bug when using "bca" confidence intervals with extra_kwargs.
    • Fixed a bug in arch.univariate.SkewStudent which did not use the user-provided RandomState when one was provided. This prevented reproducing simulated values.
  • 4.13(Feb 3, 2020)

  • 4.12(Feb 3, 2020)

    • Added typing support to all classes, functions, and methods.
    • Fixed an issue that caused tests to fail on SciPy 1.4+.
    • Dropped support for Python 3.5 in line with NEP 29.
    • Added methods to compute moment and lower partial moments for standardized residuals. See, for example, SkewStudent.moment and SkewStudent.partial_moment.
    • Fixed a bug that produced an OverflowError when a time series has no variance.
  • 4.11(Nov 22, 2019)

    This is a feature and bug release.

    • Added ARCHModelResult.std_resid
    • Bootstraps now raise an error if inputs are not ndarrays, DataFrames, or Series.
    • Added a check that the covariance is non-zero when using "studentized" confidence intervals. If the bootstrapped function produces statistics with zero variance, it is not possible to studentize.
  • 4.10.0(Oct 14, 2019)

    This release contains two bug fixes.

    • Fixed a bug in arch_lm_test that assumed that the model data is contained in a pandas Series.
    • Fixed a bug that can affect use in certain environments that reload modules.
  • 4.9.1(Aug 31, 2019)

  • 4.9.0(Aug 30, 2019)

    This is a feature and bug release.

    • Removed support for Python 2.7.
    • Added auto_bandwidth to compute optimized bandwidth for a number of common kernel covariance estimators. This code was written by Michael Rabba.
    • Added a parameter rescale to arch_model that allows the estimator to rescale data if it may help parameter estimation. If rescale=True, then the data will be rescaled by a power of 10 (e.g., 10, 100, or 1000) to produce a series with a residual variance between 1 and 1000. The model is then estimated on the rescaled data. The scale is reported in ARCHModelResult.scale. If rescale=None, a warning is produced if the data appear to be poorly scaled, but no change of scale is applied. If rescale=False, no scale change is applied and no warning is issued.
    • Fixed a bug when using the BCA bootstrap method where the leave-one-out jackknife used the wrong centering variable.
    • Added ARCHModelResult.optimization_result to simplify checking for convergence of the numerical optimizer.
    • Added a random_state argument to HARX.forecast to allow a RandomState object to be passed in when forecasting with method='bootstrap'. This allows repeatable forecasts to be produced.
    • Fixed a bug in VarianceRatio that used the wrong variance in nonrobust inference with overlapping samples.
  • 4.8.1(Mar 28, 2019)

  • 4.8.0(Mar 27, 2019)

    This is a feature and bug release. Highlights include:

    • Added Zivot-Andrews unit root test.
    • Added data-dependent lag length selection to the KPSS test.
    • Added IndependentSamplesBootstrap to perform bootstrap inference on statistics from independent samples that may have unequal lengths.
    • Added arch_lm_test to perform ARCH-LM tests on model residuals or standardized residuals.
    • Fixed a bug in ADF when applied to very short time series.
    • Added ability to set the random_state when initializing a bootstrap.
  • 4.7.0(Dec 13, 2018)

    This is a feature and bug release:

    • Added support for Fractionally Integrated GARCH (FIGARCH)
    • Enabled users to specify a specific value of the backcast in place of the automatically generated value.
    • Fixed a bug where parameter-less models were incorrectly reported as having constant variance
  • 4.6.0(Oct 3, 2018)

  • 4.5.0(Sep 28, 2018)

    This is a feature release with one new feature:

    • Added a parameter to forecast that allows a user-provided callable random generator to be used in place of the model random generator
  • 4.4.1(Aug 14, 2018)

  • 4.4(Aug 14, 2018)

    This is a minor release containing mostly bug fixes.

    Changes include:

    • Added named parameters to Dickey-Fuller regressions.
    • Removed use of the module-level NumPy RandomState. All random number generators use separate RandomState instances.
    • Fixed a bug that prevented 1-step forecasts with exogenous regressors
    • Added the Generalized Error Distribution for univariate ARCH models
    • Fixed a bug in MCS when using the max method that prevented all included models from being listed
  • 4.3.1(Dec 14, 2017)

  • 4.3(Dec 13, 2017)

    • Fixed a bug that prevented 1-step forecasts with exogenous regressors
    • Added the Generalized Error Distribution for univariate ARCH models
    • Fixed a bug in MCS when using the max method that prevented all included models from being listed
    • Added the FixedVariance volatility process, which allows pre-specified variances to be used with a mean model. This has been added to allow so-called zig-zag estimation, where a mean model is estimated with a fixed variance, and then a variance model is estimated on the residuals using a ZeroMean mean model.
  • 4.2(Sep 3, 2017)

    Release containing all changes since 4.1 including:

    • Fixed a bug that prevented fix from being used with a new model (issue #156)
    • Added first_obs and last_obs parameters to fix to mimic fit
    • Added ability to jointly estimate smoothing parameter in EWMA variance when fitting the model